[Freeswitch-dev] MTU setting and application buffer size
Juraj Fabo
juraj.fabo at gmail.com
Wed Sep 28 16:53:17 MSD 2011
On Wed, Sep 28, 2011 at 8:55 AM, Juraj Fabo <juraj.fabo at gmail.com> wrote:
> Thank you for your answers.
>
>> ---------- Forwarded message ----------
>> From: Moises Silva <moises.silva at gmail.com>
>> To: freeswitch-dev at lists.freeswitch.org
>> Date: Thu, 22 Sep 2011 16:05:06 -0400
>> Subject: Re: [Freeswitch-dev] MTU setting and application buffer size
>> On Mon, Sep 19, 2011 at 12:30 PM, Juraj Fabo <juraj.fabo at gmail.com> wrote:
>>> Target application is reading/writing from/to another network stack
>>> A-law data with 20ms frame length.
>>>
>>> With the default MTU setting (80) I am experiencing latency which I
>>> consider too high: about 400ms from targetApp to the loopback and
>>> back to targetApp.
>>
>> There is something wrong with your app. The MTU in wanpipeX.conf is
>> meant to control the transfer size between the driver and the card,
>> not between the user-space app and the driver. In general, a higher MTU
>> means a lower interrupt load. The lowest value is probably 8, which is
>> used for applications working in DAHDI mode, where one interrupt is
>> received every millisecond and 8 bytes are transferred from the driver
>> to the hardware and vice versa.
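The arithmetic behind these numbers can be sketched in a few lines. For A-law at 8 kHz there is one byte per sample per time slot, so the MTU in bytes maps directly to the interrupt period in milliseconds (the helper below is illustrative, not part of any Sangoma API):

```python
# A-law at 8 kHz carries 1 byte per sample, i.e. 8 bytes per
# millisecond per time slot, so MTU (bytes) / 8 gives the period
# between hardware interrupts in milliseconds.
BYTES_PER_MS_PER_SLOT = 8

def interrupt_period_ms(mtu_bytes):
    """Milliseconds between hardware interrupts for a given MTU."""
    return mtu_bytes / BYTES_PER_MS_PER_SLOT

print(interrupt_period_ms(80))  # 10.0 -> the recommended 10 ms mode
print(interrupt_period_ms(8))   # 1.0  -> DAHDI mode, one interrupt per ms
```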
>>
>>> I think I missed some point about the proper setting of the MTU and its consequences.
>>> Decreasing the MTU configuration setting in
>>> /etc/wanpipe/wanpipe1.conf from the default value of 80 to e.g. 40 or 16
>>> leads to the desired lower latency; however, the data the targetApp is
>>> reading is often corrupted, with many gaps.
>>
>> 80 means a hardware interrupt is received every 10ms, with 80 bytes per
>> time slot. This is the recommended mode. A value of 40 will increase
>> the interrupt load and will not necessarily reduce your latency; you must
>> reduce the "user period", which is how often the driver delivers
>> media/data to the user application. This is done using
>> sangoma_tdm_set_usr_period().
>>
>>> Please, what is the proper way of setting MTU?
>>> I assume I have to set the same MTU per wXg1 sections in
>>> /etc/wanpipe/wanpipeX.conf files on both servers, since the serverA
>>> card is providing a clock for server B cards.
>>> Is it necessary to change the value of codec_ms in the
>>> /usr/local/freetdm/conf/wanpipe.conf ?
>>
>> The codec_ms is used to call sangoma_tdm_set_usr_period(). This is how
>> often the driver will deliver data to the user application (for
>> example, waking it up from select()).
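In other words, codec_ms (the user period) fixes how many A-law bytes per slot the application receives on each wake-up. A small illustrative calculation (the helper name is mine, not a freetdm or libsangoma function):

```python
# codec_ms is the "user period": how often the driver wakes the app.
# A-law at 8 kHz is 8 bytes per millisecond per time slot, so the
# period also fixes the read size per slot.
def bytes_per_wakeup(codec_ms, slots=1, bytes_per_ms=8):
    """A-law bytes the app reads per select() wake-up (illustrative)."""
    return codec_ms * bytes_per_ms * slots

print(bytes_per_wakeup(20))  # 160 -> 20 ms frames, as in the target app
print(bytes_per_wakeup(10))  # 80  -> halving the period halves the read size
```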
>>
>>> I also noticed the following behaviour of ifconfig:
>>> a. MTU set to 40 or higher in section w1g1 in wanpipe1.conf:
>>> all other spans of the card which has w1g1 (so w2g1 for
>>> the dual-span card and w2g1, w3g1, w4g1 for the quad card) are also
>>> displayed in the ifconfig output with MTU:40
>>>
>>> b. MTU set to 80 or higher (e.g. 160) in section w1g1 in wanpipe1.conf:
>>> ifconfig displays MTU:80 even for higher values
>>
>> Some MTU values are disallowed; you would have to check which ones in
>> the driver, I don't recall. You can't use just any value, as these values
>> are directly tied to the capabilities of the hardware (some
>> cards, like analog cards, may only accept an MTU of 8, for example).
>>
>
> I did a set of latency/delay measurements in a reduced test environment:
> a single server and a single quad card with spans 1 and 2 interconnected;
> the test application dumps the read and written data at both call
> ends to binary files in a single thread.
> Please have a look at the results with the various parameters used:
> mtu=80  txqueue_size=1   rxqueue_size=1   one-direction delay=60ms    round-trip delay=120ms
> mtu=80  txqueue_size=2   rxqueue_size=2   one-direction delay=80ms    round-trip delay=160ms
> mtu=80  txqueue_size=10  rxqueue_size=10  one-direction delay=240ms   round-trip delay=480ms
> mtu=40  txqueue_size=1   rxqueue_size=1   one-direction delay=40ms    round-trip delay=80ms
> mtu=40  txqueue_size=2   rxqueue_size=2   one-direction delay=60ms    round-trip delay=120ms
> mtu=40  txqueue_size=5   rxqueue_size=5   one-direction delay=120ms   round-trip delay=240ms
> mtu=16  txqueue_size=1   rxqueue_size=1   one-direction delay=30ms    round-trip delay=60ms
> mtu=8   txqueue_size=1   rxqueue_size=1   one-direction delay=25ms    round-trip delay=50ms
> mtu=8   txqueue_size=10  rxqueue_size=10  one-direction delay=205ms   round-trip delay=410ms
>
> The result of these tests is that the rxqueue_size affects the
> delay: no matter what the MTU chunk size is, increasing the
> rxqueue_size by one increases the one-direction delay by 20ms.
I apologize for the mistake. Here I meant to say that the txqueue_size
is what affects the delay.
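The pattern in the measurements above can be checked mechanically. A short script (figures copied from the table; the linear model is my reading of the data, not something from the wanpipe documentation):

```python
# Measured one-direction delays (ms), keyed by (mtu, queue_size),
# where txqueue_size == rxqueue_size in every test above.
measured = {
    (80, 1): 60, (80, 2): 80, (80, 10): 240,
    (40, 1): 40, (40, 2): 60, (40, 5): 120,
    (16, 1): 30,
    (8, 1): 25, (8, 10): 205,
}

# Hypothesis from the thread: delay = base(mtu) + (queue_size - 1) * 20,
# i.e. each extra queue slot adds 20 ms one-way regardless of MTU.
base = {mtu: d for (mtu, q), d in measured.items() if q == 1}
for (mtu, q), d in measured.items():
    assert base[mtu] + (q - 1) * 20 == d, (mtu, q)
print("each extra tx queue slot adds 20 ms one-way, independent of MTU")
```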
With best regards
Juraj Fabo
Join us at ClueCon 2011 Aug 9-11, 2011