[Freeswitch-dev] MTU setting and application buffer size

Juraj Fabo juraj.fabo at gmail.com
Wed Sep 28 10:55:17 MSD 2011


Thank you for your answers.

> ---------- Forwarded message ----------
> From: Moises Silva <moises.silva at gmail.com>
> To: freeswitch-dev at lists.freeswitch.org
> Date: Thu, 22 Sep 2011 16:05:06 -0400
> Subject: Re: [Freeswitch-dev] MTU setting and application buffer size
> On Mon, Sep 19, 2011 at 12:30 PM, Juraj Fabo <juraj.fabo at gmail.com> wrote:
>> Target application is reading/writing from/to another network stack
>> A-law data with 20ms frame length.
>>
>> With default MTU settings (80) I am experiencing latency which I
>> consider too high, it is about 400ms from targetApp to loopback and
>> back to targetApp.
>
> There is something wrong with your app. The MTU in wanpipeX.conf is
> meant to control the transfer size between the driver and the card,
> not between the user space app and the driver. In general, higher MTU
> means lower interrupt load. The lowest value is probably 8, which is
> used for applications working in DAHDI-mode, where one interrupt is
> received every millisecond and 8 bytes are transferred from the driver
> to the hardware and vice versa.
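(For reference: with 8kHz A-law and one byte per sample per timeslot,
the MTU in bytes is just 8 times the interrupt interval in
milliseconds, so 80 bytes = 10ms and 8 bytes = 1ms.)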
>
>> I think I missed some point about proper setting of MTU and consequences.
>> Decreasing the MTU configuration setting in the
>> /etc/wanpipe/wanpipe1.conf from default value 80 to e.g 40 or 16 leads
>> to desired lower latency, however, the data the targetApp is reading
>> are often corrupted with many gaps.
>
> 80 means a hardware interrupt is received every 10ms with 80 bytes per
> time slot. This is the recommended mode. A value of 40 will increase
> interrupt load and will not necessarily reduce your latency; you must reduce
> the "user period", which is how often the driver will deliver
> media/data to the user application. This is done using
> sangoma_tdm_set_usr_period().
>
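For anyone else reading this thread, here is a minimal sketch
(untested) of setting a 10ms user period via libsangoma. The handle
and struct types (sng_fd_t, wanpipe_tdm_api_t) are as I read them in
the wanpipe 3.5.x headers and may differ between releases:

/* Minimal sketch (untested): set a 10ms user period on span 1 / channel 2.
 * Assumes the libsangoma TDM API from wanpipe 3.5.x; type and function
 * names may differ slightly in other releases. */
#include <libsangoma.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        wanpipe_tdm_api_t tdm_api;
        sng_fd_t fd;

        memset(&tdm_api, 0, sizeof(tdm_api));

        fd = sangoma_open_tdmapi_span_chan(1, 2);   /* span 1, channel 2 */
        if (fd < 0) {
                perror("sangoma_open_tdmapi_span_chan");
                return 1;
        }

        /* Deliver media to user space every 10ms instead of the 20ms default */
        if (sangoma_tdm_set_usr_period(fd, &tdm_api, 10) < 0) {
                fprintf(stderr, "sangoma_tdm_set_usr_period failed\n");
                return 1;
        }

        sangoma_close(&fd);
        return 0;
}

As I understand it, setting codec_ms in the freetdm config should end
up doing the equivalent of this call.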
>> Please, what is the proper way of setting MTU?
>> I assume I have to set the same MTU per wXg1 sections in
>> /etc/wanpipe/wanpipeX.conf files on both servers, since the serverA
>> card is providing a clock for server B cards.
>> Is it necessary to change the value of codec_ms in the
>> /usr/local/freetdm/conf/wanpipe.conf ?
>
> The codec_ms is used to call sangoma_tdm_set_usr_period(). This is how
> often the driver will deliver data to the user application (for
> example, waking it up from select()).
>
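For completeness, this is roughly what the relevant span settings look
like in my setup (a sketch only; the exact section name and key syntax
may differ per freetdm version):

; /usr/local/freetdm/conf/wanpipe.conf (sketch)
[span wanpipe span1]
codec_ms => 10        ; user period, passed to sangoma_tdm_set_usr_period()
txqueue_size => 1     ; driver tx queue, in elements of MTU size
rxqueue_size => 1     ; driver rx queue, in elements of MTU size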
>> I also noticed following behaviour of ifconfig:
>> a. MTU set to 40 or higher in section w1g1 in wanpipe1.conf
>>    also all other spans of the card which has the w1g1 (so w2g1 for
>> the dualspan and w2g1, w3g1,w4g1 for quad card) will be displayed in
>> ifconfig output with MTU: 40
>>
>> b. MTU set to 80 or higher (e.g. 160) in section w1g1 in wanpipe1.conf
>>    ifconfig will display MTU:80 also for higher values
>
> Some MTU values are disallowed; you would have to check which ones in
> the driver, I don't recall. You can't use arbitrary values, as these
> values are directly related to the capabilities of the hardware (some
> cards, like analog cards, may only accept an MTU of 8, for example).
>




I did a set of latency/delay measurements in a reduced test
environment: a single server and a single quad card with spans 1 and 2
interconnected; the test application dumps the data read and written
at both call ends to binary files, in a single thread.
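The core of the test loop looks roughly like this (a simplified sketch
of my test application, not the actual code; channel open/close,
waiting, and the marker used for the delay measurement are omitted,
and the freetdm signatures are as in freetdm.h):

/* Simplified sketch of the loopback pump: read from one call leg, write
 * to the other, and dump both streams to binary files for offline
 * comparison. Error handling trimmed. */
#include <stdio.h>
#include "freetdm.h"

static void pump(ftdm_channel_t *in, ftdm_channel_t *out,
                 FILE *rx_dump, FILE *tx_dump)
{
        unsigned char buf[1024];        /* larger than any driver chunk */

        for (;;) {
                ftdm_size_t len = sizeof(buf);

                if (ftdm_channel_read(in, buf, &len) != FTDM_SUCCESS) {
                        break;
                }
                fwrite(buf, 1, len, rx_dump);   /* dump what was read ... */

                if (ftdm_channel_write(out, buf, sizeof(buf), &len) != FTDM_SUCCESS) {
                        break;
                }
                fwrite(buf, 1, len, tx_dump);   /* ... and what was written */
        }
}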
Please have a look at the results with the various parameters used:
MTU   txqueue_size   rxqueue_size   one-direction delay   round-trip delay
80    1              1              60ms                  120ms
80    2              2              80ms                  160ms
80    10             10             240ms                 480ms
40    1              1              40ms                  80ms
40    2              2              60ms                  120ms
40    5              5              120ms                 240ms
16    1              1              30ms                  60ms
8     1              1              25ms                  50ms
8     10             10             205ms                 410ms

The result of these tests is that the rxqueue_size affects the delay:
no matter what the MTU chunk size is, increasing the rxqueue_size by
one increases the one-direction delay by 20ms.
I also realized that an asymmetric tx/rx queue size setting, e.g.
txqueue_size=1 rxqueue_size=10, can be used to reduce the delay.
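In other words, the measurements fit

    one-direction delay ≈ delay(queue=1) + (queue_size - 1) * 20ms

e.g. for mtu:8, 25ms + 9 * 20ms = 205ms, matching the table above.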
The comment in wanpipe.conf says:
; size of the driver queue of elements of MTU size
; typical case is 10 elements of 80 bytes each (10ms of ulaw/alaw)
; don't mess with this if you don't know what you're doing

but I have not seen 80 bytes per element in the logs. With
DEBUG_TDMAPI enabled, the following can be seen:

Sep 27 20:02:34 v184 kernel: [2806213.900348] wanpipe1: Configuring
Interface: w1g1 (log supress)
Sep 27 20:02:34 v184 kernel: [2806213.900355] wanpipe1:    Active Ch
Map :0x00000004
Sep 27 20:02:34 v184 kernel: [2806213.900357] wanpipe1:    First TSlot   :2
Sep 27 20:02:34 v184 kernel: [2806213.900366] w1g1: TDM API ACTIVE CH
0x00000004  CHAN=2
Sep 27 20:02:34 v184 kernel: [2806213.900368] w1g1: TDM API ACTIVE CH
0x00000004 SPAN=1 CHAN=2
Sep 27 20:02:34 v184 kernel: [2806213.900370] w1g1: SPAN=1, CHAN=2
Chunk=80 Period=10 Mtu=80
Sep 27 20:02:34 v184 kernel: [2806213.900371] w1g1: conf->mtu=0
Sep 27 20:02:34 v184 kernel: [2806213.900372] w1g1: tdm_api_chunk=80,
tdm_api_period=10
Sep 27 20:02:34 v184 kernel: [2806213.900374] wanpipe1: Chunk=160,
Period=20, MTU=160
Sep 27 20:02:34 v184 kernel: [2806213.900398] wanpipe_tdm_api_reg():
usr_period: 20, hw_mtu_mru: 8
Sep 27 20:02:34 v184 kernel: [2806213.900400] wanpipe1: Configuring
TDM API NAME=wanpipe1_if2 Qlen=5 TS=1 MTU=224

Here I found these 2 lines interesting:
Sep 27 20:02:34 v184 kernel: [2806213.900374] wanpipe1: Chunk=160,
Period=20, MTU=160
Sep 27 20:02:34 v184 kernel: [2806213.900398] wanpipe_tdm_api_reg():
usr_period: 20, hw_mtu_mru: 8

which are the result of this code in aft_core_prot.c:
721                 if (chan->wp_tdm_api_dev->cfg.usr_mtu_mru < 160) {
722                         chan->tdm_api_period=20;
723                         chan->wp_tdm_api_dev->cfg.usr_period=20;
724                         chan->tdm_api_chunk=160;
725                         chan->wp_tdm_api_dev->cfg.usr_mtu_mru=160;
726                 }

As I understand it, this clamping to chan->tdm_api_period=20 and
chan->tdm_api_chunk=160 is the reason for the 20ms increments measured
in my tests: any requested usr_mtu_mru below 160 bytes is forced back
up to a 160-byte chunk delivered every 20ms.
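This clamp presumably also explains the read error quoted below: with
the chunk forced to 160 bytes plus the 64-byte API header, the driver
delivers 224 bytes, while an 80-byte user buffer plus the header is
only 144 bytes, hence "User Rx Len=144 < Driver Rx Len=224".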


>> Finally, I tried to access the card more often than the default 20ms
>> with default MTU:80
>> Since 80 bytes is 10ms, the application was configured to read/write
>> at a 10ms rate with an 80-byte buffer.
>> The following pseudocode, ftdm_channel_read(fchan, bufferPtr, 80), failed
>> with a kernel error message in dmesg coming from wanpipe_tdm_api.c:
>> "User API Error: User Rx Len=144 < Driver Rx Len=224 (hdr=64). User
>> API must increase expected rx length"
>> Does it mean that the smallest buffer used in ftdm_channel_read()
>> must be at least 160 B even if the MTU is 80 (or less)?
>
> I suspect you changed the MTU to 80 but not the codec_ms to 10. If I
> were you I'd stop messing around with the MTU unless you're willing to
> look at the Wanpipe driver code. Not all values are meant to work for
> all hardware, and there are many types of hardware and many modes (API
> mode, span mode, DAHDI mode, etc.) that can get complex to get right.
>
> With MTU set to 80 (the default for TDM API mode) and codec_ms set to
> 10, you should not have latency bigger than 10ms. If you do, there is
> something wrong with your app.

The best I achieved with MTU:80 was with txqueue_size=1, which
resulted in a 60ms one-direction latency.
The results were the same with wanpipe-3.5.20 and wanpipe-3.5.23.

With best regards

Juraj Fabo

>
> Moises Silva
> Senior Software Engineer, Software Development Manager
> Sangoma Technologies Inc. | 100 Renfrew Drive, Suite 100, Markham ON
> L3R 9R6 Canada
> t. 1 905 474 1990 x128 | e. moy at sangoma.com


