[Freeswitch-users] performance between bridged call and conference

Michael Collins <msc@freeswitch.org>
Mon Aug 23 08:53:28 PDT 2010


What happened when you created several thousand bridged channels as opposed
to 2-person conferences? Just curious to see where your upper limits came
into play there.
-MC

On Sun, Aug 22, 2010 at 7:05 PM, Seven Du <dujinfang@gmail.com> wrote:

> Talked with Brian; he thought there might not be much difference
> between them, and he suggested I run a perf test.
>
> I tested on my Mac (OS X 10.6.4, 64-bit). First I increased cps to 100 and
> max-sessions to 8000. It seems that it cannot create more than 2560
> sessions on the Mac, and I don't know how to raise that limit, so I just
> used smaller numbers.
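For reference, those two limits live in switch.conf.xml; a minimal sketch of the change (the param names are the standard FreeSWITCH core settings, the values are the ones used in this test):

```xml
<!-- conf/autoload_configs/switch.conf.xml -->
<configuration name="switch.conf" description="Core Configuration">
  <settings>
    <!-- maximum concurrent sessions -->
    <param name="max-sessions" value="8000"/>
    <!-- maximum new sessions per second (cps) -->
    <param name="sessions-per-second" value="100"/>
  </settings>
</configuration>
```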
>
> The following Ruby code creates channels through ESL slowly. 30 * 10 *
> 2 means 600 channels, and because I used loopback it actually uses
> 1200 channels.
>
> bridge:
> 1200 threads.
> 300-400% CPU (in Activity Monitor) and load avg 600-800 (in top).
> Memory: 300M.
>
> conf:
> 2100 threads.
> 300-400% CPU, load avg 600-800, or sometimes 39-1000 (doesn't make sense?).
> Memory: 500M.
>
> # 'conn' is an already-opened ESL connection (FreeSWITCH ESL Ruby
> # bindings); pass any argument to the script to test conference mode.
> 30.times do |i|
>   puts i * 10
>   10.times do |j|
>     if ARGV[0].nil? # bridge: one originate = two parties
>       conn.bgapi("originate", "loopback/9664 &bridge(loopback/9664)")
>     else # conference: two originates into the same named conference
>       conf = "c#{i * 10 + j}@default"
>       conn.bgapi("originate", "loopback/9664 &conference(#{conf})")
>       conn.bgapi("originate", "loopback/9664 &conference(#{conf})")
>     end
>   end
>
>   sleep 1 # ramp up slowly: 10 (or 20) calls per second
> end
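To make the call math concrete, here is a self-contained sketch (not part of the original mail) that replaces the ESL connection with a stub and just counts the originate commands each mode would issue: bridge mode issues 300 originates (two parties each), conference mode issues 600 (one party each), and loopback then doubles every leg inside FreeSWITCH.

```ruby
# Stub standing in for the ESL connection: records commands instead of
# sending them to FreeSWITCH.
class StubConn
  attr_reader :commands

  def initialize
    @commands = []
  end

  def bgapi(cmd, args)
    @commands << "#{cmd} #{args}"
  end
end

# Issue the same originates as the test script, without the sleeps.
def run(mode)
  conn = StubConn.new
  30.times do |i|
    10.times do |j|
      if mode == :bridge
        conn.bgapi("originate", "loopback/9664 &bridge(loopback/9664)")
      else
        conf = "c#{i * 10 + j}@default"
        2.times { conn.bgapi("originate", "loopback/9664 &conference(#{conf})") }
      end
    end
  end
  conn.commands
end

puts run(:bridge).size      # 300 originates, 2 parties each = 600 channels
puts run(:conference).size  # 600 originates, 1 party each   = 600 channels
# loopback doubles every leg, so both modes consume ~1200 channels in FS.
```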
>
>
>
> When I doubled the channels (30 * 20 = i * j), ESL got stuck once the
> thread count reached 2560, and FS threw "cannot create channels". But
> when I ran "hupall" in FS, it started creating channels again. I don't
> know why FS on the Mac hits a 2560-thread limit (it may be an OS X
> per-process thread limit).
>
> And after running "hupall" again, there were dead channels left over (9
> channels and 9 threads):
>
> 4d5a2408-eaf6-4f3a-a401-7a916b1911f1,outbound,2010-08-23 09:52:35,1282528355,loopback/9664-a,CS_EXECUTE,,0000000000,,9664,conference,c193@default,xml,default,L16,8000,L16,8000,,seven-macpro.local,,,HANGUP,,,,ffc4367e-bcde-43a5-a95e-e0fd5c4069ea
> f93fa824-a6ae-47ca-ae9c-ff6b8948df4f,outbound,2010-08-23 09:52:35,1282528355,loopback/9664-a,CS_EXECUTE,,0000000000,,9664,conference,c194@default,xml,default,L16,8000,L16,8000,,seven-macpro.local,,,HANGUP,,,,4c8b1826-5afa-43ad-a73b-59a8c5d8f41c
> 7611288b-be9e-4f70-9880-3919de567222,inbound,2010-08-23 09:52:35,1282528355,loopback/9664-b,CS_REPORTING,,0000000000,,9664,playback,local_stream://moh,xml,default,L16,8000,L16,8000,,seven-macpro.local,,,HANGUP,,,,
> 9142d487-f0b5-4636-a2eb-a0adaee19634,inbound,2010-08-23 09:52:35,1282528355,loopback/9664-b,CS_REPORTING,,0000000000,,9664,playback,local_stream://moh,xml,default,L16,8000,L16,8000,,seven-macpro.local,,,HANGUP,,,,
> 0d8bc340-edf7-4461-b1b4-91056c68474b,outbound,2010-08-23 09:52:35,1282528355,loopback/9664-a,CS_NEW,,,,,,,,,L16,8000,L16,8000,,seven-macpro.local,,,HANGUP,,,,a9a8369e-f509-4b39-b689-8e616d29d5c3
> ed93c3d5-6780-4e75-bf48-6e326274be04,outbound,2010-08-23 09:52:35,1282528355,loopback/9664-a,CS_NEW,,,,,,,,,L16,8000,L16,8000,,seven-macpro.local,,,HANGUP,,,,e321a328-0b7c-4f74-9af5-a5e3f083ff8a
> 62aabb4b-4bfc-4ba0-a87c-2e1a76323cce,outbound,2010-08-23 09:52:39,1282528359,loopback/9664-a,CS_EXECUTE,,0000000000,,9664,conference,c200@default,xml,default,L16,8000,L16,8000,,seven-macpro.local,,,HANGUP,,,,3a86d88d-33b2-42b2-a57c-e2cb2c1ec486
> b13c3e9d-7fb8-46f2-90c9-04a0ec770a2e,inbound,2010-08-23 09:52:39,1282528359,loopback/9664-b,CS_REPORTING,,0000000000,,9664,playback,local_stream://moh,xml,default,L16,8000,L16,8000,,seven-macpro.local,,,HANGUP,,,,
> b8bb7b64-c57c-4c8f-9264-09eecf4aef57,outbound,2010-08-23 09:52:39,1282528359,loopback/9664-a,CS_NEW,,,,,,,,,L16,8000,L16,8000,,seven-macpro.local,,,HANGUP,,,,e50d768b-37b8-4cfb-ab46-60e7ac029c83
>
>
> I also tested on a Linux server. It performed better; no detailed data
> was collected, though.
>
> And, interestingly, when I ran a 30 * 20 bridge plus a 30 * 20
> conference, the load average suddenly grew to 2000+. Even under that
> load I could still run "hupall" from fs_cli.
>
> Note:
> 1) loopback might not be typical for a test.
> 2) This is not a FS performance test; I only want to find out whether a
> 2-way conference uses more resources than a bridged call.
>
> On Thu, Aug 19, 2010 at 8:31 AM, Seven Du <dujinfang@gmail.com> wrote:
> > Hi,
> >
> > Can someone explain the performance difference between bridged calls
> > and a 2-party conference, or just from a code point of view?
> >
> > Since in some scenarios a third party may join a bridged call, we
> > need to transfer the bridged call into a conference first. Making it a
> > conference anyway, even for 2 parties, would make the logic simpler
> > and clearer.
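One common way to handle the third-party case is to move both legs of an existing bridge into a conference with uuid_transfer and an inline dialplan. Here is a small sketch of building that API command; the `uuid_transfer <uuid> -both '...' inline` form is a widely used FreeSWITCH recipe, and the UUID and conference name below are hypothetical examples:

```ruby
# Build an API command that transfers BOTH legs of a bridged call into a
# conference, via an inline dialplan extension.
# NOTE: the UUID and conference name are made-up placeholders.
def escalate_to_conference(uuid, conf_name, profile = "default")
  "uuid_transfer #{uuid} -both 'conference:#{conf_name}@#{profile}' inline"
end

cmd = escalate_to_conference("4d5a2408-eaf6-4f3a-a401-7a916b1911f1", "c1")
puts cmd
# Over ESL this would then be sent with something like conn.api(cmd).
```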
> >
> > Thanks.
> >
> > --
> > Blog: http://www.dujinfang.com
> > Proj:  http://www.freeswitch.org.cn
> >
>
>
>
> --
> Blog: http://www.dujinfang.com
> Proj:  http://www.freeswitch.org.cn
>
> _______________________________________________
> FreeSWITCH-users mailing list
> FreeSWITCH-users@lists.freeswitch.org
> http://lists.freeswitch.org/mailman/listinfo/freeswitch-users
> UNSUBSCRIBE:http://lists.freeswitch.org/mailman/options/freeswitch-users
> http://www.freeswitch.org
>