[Freeswitch-dev] Fwd: Call Center - Memory Leak
kahrimanovic.mersed at gmail.com
Mon Apr 25 21:42:51 MSD 2016
Sorry, I forgot to mention that I already ran the test under valgrind
and was not able to find anything.
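(For reference, a typical invocation per the FreeSWITCH debugging notes looks
like the following; flags and paths vary by version, so treat this as a sketch
rather than the exact command I ran:)

```
valgrind --tool=memcheck --leak-check=full --leak-resolution=high \
  --show-reachable=yes --log-file=vg.log \
  ./freeswitch -vg -nosql -nonat -nonatmap -nort
```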
Additionally, regarding your question: I did quite a lot of research before I
sent this email. From my understanding, the behavior I encountered, and the
documentation I read, FreeSWITCH will not release memory until it is
restarted. (That is happening in my case as well: if I stop the calls, memory
certainly stops increasing, but it will not decrease until you stop/restart
FreeSWITCH. When I start calling again, the growth continues from where it
left off.)
I can certainly create the ticket and copy all of the relevant data from my
first email into JIRA.
On Mon, Apr 25, 2016 at 7:35 PM, Mersed Kahrimanovic <
kahrimanovic.mersed at gmail.com> wrote:
> Hi Nicholas, thank you for your reply.
> Regarding your questions:
> 1. Are you sure that channels are being released -> yes, for sure. That is
> the first thing I checked. The test environment never goes over 65
> channels, as I mentioned in my initial email; the channel count, status,
> and all the other CLI commands confirm that.
> Additionally, on the sipp side you can always check live how many calls
> are currently connected.
> 2. Regarding the scheduled hangup, I don't believe there is an issue with
> it, because I only use sched_hangup to recreate a simple call center on a
> pure FreeSWITCH installation. In our real call center solution we do not
> hang up calls that way, and it encounters the same issue as well.
> 3. Regarding the request for memory usage per application:
> The test I described above was run on a plain FreeSWITCH installation
> without any kind of additions. During the test, the server was running only
> FreeSWITCH and nothing else. As confirmation, here is a screenshot:
> http://prnt.sc/awpxr6
> Additionally, I should mention that I ran the same test a few weeks ago
> under valgrind, and I was not able to find anything.
> All the best,
> On Mon, Apr 25, 2016 at 7:18 PM, Nicholas Blasgen <
> nicholas at hellohunter.com> wrote:
>> NewRelic, which you seem to be using, also provides per-application
>> memory usage. It would be nice to see that after a day or a partial-day run.
>> Freeswitch uses a good amount of memory per channel. Are you sure the
>> channels are being released? fs_cli -x status >> log_file ... and maybe
>> run it on a crontab just to make sure the channels are being released.
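>> For example, an entry along these lines (log path and fs_cli location are
>> illustrative):

```
# Append channel/session status every 5 minutes so releases show up over time
*/5 * * * * /usr/local/freeswitch/bin/fs_cli -x "status" >> /var/log/fs_status.log 2>&1
```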
>> Besides that, I personally don't have any ideas without recreating your
>> test environment. I guess you could see whether FIFO or scheduled hangup is
>> the issue by calling hangup instead of fifo for extension 7011. I've never
>> personally used sched_hangup, and since that is the method being used here
>> to release the channels, it might be a concern.
>> Nicholas Blasgen
>> Predictive Dialer Limited
>> +1 (724) 252-7436 (cell)
>> Skype: nblasgen
>> 24/7 Support available:
>> www.hellohunter.com | (800) 513-5555 | skype hello.hunter |
>> support at hellohunter.com
>> On Mon, Apr 25, 2016 at 6:11 AM, Mersed Kahrimanovic <
>> kahrimanovic.mersed at gmail.com> wrote:
>>> We have implemented a call center using FreeSWITCH as the switch for
>>> handling calls. The basic logic is that agents in a queue wait for
>>> contacts to come in, and contacts wait in a queue if no agents are
>>> available.
>>> For this kind of implementation we used pure Lua and hash tables (not
>>> mod_fifo) in order to have better flexibility and the ability to manage
>>> different contact/agent states.
>>> Everything was working fine on our old servers. We are now migrating to
>>> the cloud, using AWS for that purpose and of course trying to cut some
>>> costs.
>>> After the initial setup of our solution we found a memory leak that causes
>>> FreeSWITCH to consume 3-10 MB per minute during production hours. (We did
>>> not notice anything on the old servers because they had something like 24
>>> cores and 94 GB of RAM.)
>>> That is not much by itself, of course, but the bottom line is that the
>>> increase is linear, constantly consuming more and more RAM.
>>> In order to isolate the issue and confirm that it is not in our own
>>> solution, I installed FreeSWITCH 1.6.7 and set up a really simple call
>>> center with mod_fifo and nothing else (totally independent from the call
>>> center we are working on).
>>> For the reference:
>>> *Aws:* m4.large
>>> vCPU: 2
>>> RAM: 8GB
>>> Throughput (Mbps): 450
>>> *Freeswitch version:* 1.6.7
>>> *OS:* Debian GNU/Linux 8 (jessie)
>>> *Number of agents:* 30
>>> *Number of calls per agent:* 1
>>> *Total number of channels at a time:* 60
>>> *Agents simulated with:* pjsip(http://www.pjsip.org/)
>>> *Contact simulated with:* sipp(http://sipp.sourceforge.net/)
>>> *Configuration:* I used the configuration that came with the FreeSWITCH
>>> installation. The only things I changed are:
>>> - default password
>>> - internal RTP timeout
>>> - rtp-ip and sip-ip
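>>> (For reference, those settings correspond to entries like the following;
>>> the values shown here are placeholders, not the ones from the test:)

```xml
<!-- conf/vars.xml -->
<X-PRE-PROCESS cmd="set" data="default_password=CHANGEME"/>
<X-PRE-PROCESS cmd="set" data="local_ip_v4=10.0.0.5"/> <!-- feeds rtp-ip / sip-ip -->

<!-- conf/sip_profiles/internal.xml -->
<param name="rtp-timeout-sec" value="300"/>
```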
>>> Test scenario:
>>> 1. Start FreeSWITCH.
>>> 2. The autoload Lua script rings all of the internal users (agents) and
>>> pushes them to the extension where we have mod_fifo.
>>> 3. After some time, the same autoload script starts calling contacts
>>> every 800 milliseconds and pushes them to the extension where we have
>>> mod_fifo.
>>> 4. mod_fifo does the rest: it bridges agents with contacts and handles
>>> the queue properly.
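>>> (The contact-generation step can be sketched like this; extension 7011
>>> comes from later in the thread, and the URIs are placeholders, not our
>>> real script:)

```shell
# Sketch of the contact-generation loop (the real script is Lua inside FreeSWITCH).
# gen_originate only builds the command string so the sketch is self-contained.
gen_originate() {
  echo "originate sofia/internal/contact$1@example.invalid 7011"
}

for i in 1 2 3; do
  gen_originate "$i"
  # real loop: fs_cli -x "$(gen_originate $i)"; sleep 0.8   (800 ms pacing)
done
```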
>>> As a result, we have 30 agents constantly "talking", and whenever a call
>>> is dropped the agent is connected to another contact. The parameters for
>>> the test match that scenario, and we never have more than 60-65 channels
>>> open at a time.
>>> Dialplan for handling mod_fifo
>>> Autoload lua script
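>>> (The attachments did not survive the archive; a minimal mod_fifo dialplan
>>> along these lines would reproduce the setup. The fifo name and extension
>>> numbers are assumptions, with 7011 taken from later in the thread:)

```xml
<!-- Illustrative sketch, not the original attachment -->
<extension name="cc_contact_in">
  <condition field="destination_number" expression="^7011$">
    <action application="answer"/>
    <action application="fifo" data="cc_test@default in"/>
  </condition>
</extension>
<extension name="cc_agent_out">
  <condition field="destination_number" expression="^7012$">
    <action application="answer"/>
    <action application="fifo" data="cc_test@default out nowait"/>
  </condition>
</extension>
```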
>>> The results are for a 24-hour period, and indeed 24 hours is a lot; we
>>> will never have production running for more than 12 hours (a daily
>>> restart will occur). But this basically means that we cannot host more
>>> than 40-45 agents on servers with 8 GB of RAM.
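>>> (A simple way to make that growth visible is to sample the process's
>>> resident set size over time. The demo below samples the current shell,
>>> since the real target would be the freeswitch PID:)

```shell
# Sample a process's resident set size (KB) so linear growth shows up in a log.
sample_rss() {
  # ps -o rss= prints the RSS in kilobytes for the given PID
  ps -o rss= -p "$1" | tr -d ' '
}

# Demo on this shell's own PID; in practice: pid=$(pidof freeswitch)
echo "rss_kb=$(sample_rss $$)"
# real use: call this from cron or a loop and append to a log next to the
# fs_cli status output, so channel count and memory can be correlated
```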
>>> My question is:
>>> Is this something that can be considered a memory leak, or is this simply
>>> how FreeSWITCH works and what its requirements are?
>>> Thank you,
>>> Professional FreeSWITCH Consulting Services:
>>> consulting at freeswitch.org
>>> Official FreeSWITCH Sites
>>> FreeSWITCH-dev mailing list
>>> FreeSWITCH-dev at lists.freeswitch.org
-------------- next part --------------
An HTML attachment was scrubbed...