[Freeswitch-dev] On high CPU usage and NONBLOCK-ed sockets
math.parent at gmail.com
Fri Sep 3 10:19:39 PDT 2010
I have made some investigations into high CPU usage (methodology below).
The most CPU-consuming threads are:
* Several threads related to queue handling (can't we use an interrupt here?)
* The time thread
* Several threads related to open sockets. I think there is room for
improvement here; a typical backtrace:
Thread 3 (Thread 0xb626fb70 (LWP 5922)):
#0 0xb7693f6a in clock_nanosleep (clock_id=-1215964116, flags=0,
#1 0xb7764faa in do_sleep (t=882) at src/switch_time.c:165
#2 0xb7765971 in switch_cond_next () at src/switch_time.c:428
#3 0xb6bb553d in read_packet (listener=0x966c418, event=0xb626ecd4, timeout=0)
#4 0xb6bb95a6 in listener_run (thread=0x965b0f0, obj=0x966c418)
#5 0xb77933f5 in dummy_worker (opaque=0x965b0f0) at
#6 0xb7645955 in start_thread (arg=0xb626fb70) at pthread_create.c:300
#7 0xb73e910e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:130
The modules concerned are mod_event_socket, mod_sofia, mod_skinny, and
probably all endpoint modules.
Currently, FS tries to read from the socket and, if there is no data in
the buffer, performs some checks and then sleeps for some time
(via do_sleep() or switch_cond_next()). Most of the time those extra
checks are unnecessary and just cost CPU.
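To illustrate the pattern (a minimal sketch, not actual FS code — the socketpair and function name are mine): with a NONBLOCK-ed socket, recv() returns immediately with EAGAIN when no data is queued, which is exactly the point where the reader thread has to spin — check, sleep, check again:

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 if the non-blocking recv() came back empty (EAGAIN),
 * i.e. the point where the thread would loop: checks + do_sleep(). */
int nonblocking_read_returns_eagain(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) return -1;

    /* switch the reading end to non-blocking mode */
    fcntl(sv[0], F_SETFL, fcntl(sv[0], F_GETFL, 0) | O_NONBLOCK);

    char buf[64];
    ssize_t n = recv(sv[0], buf, sizeof(buf), 0);  /* no data queued */
    int spun = (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK));

    close(sv[0]);
    close(sv[1]);
    return spun;
}
```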
I propose moving to blocking sockets with a timeout. Adjusting the
requested read length would also reduce execution time: most of the work
would be done on the kernel side, which is more efficient.
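A sketch of the proposed alternative (my own illustration of the standard SO_RCVTIMEO mechanism, not a patch): leave the socket blocking and set a receive timeout, so the kernel wakes the thread only when data arrives or the timeout expires — no userspace sleep loop:

```c
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Blocking read with a kernel-side timeout. Returns bytes read. */
int blocking_read_with_timeout(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) return -1;

    struct timeval tv = { .tv_sec = 0, .tv_usec = 100000 };  /* 100 ms */
    setsockopt(sv[0], SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    send(sv[1], "ping", 4, 0);  /* writer side supplies data */

    char buf[64];
    ssize_t n = recv(sv[0], buf, sizeof(buf), 0);  /* blocks at most 100 ms */

    close(sv[0]);
    close(sv[1]);
    return (int)n;  /* with no data it would return -1/EAGAIN after the timeout */
}
```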
Maybe I'm missing some corner cases, but for mod_skinny the move greatly
improves performance. This protocol was easy to handle because we first
read the header, then read the remaining data, whose length is given in
the header.
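The header-then-payload pattern described above can be sketched like this (a hypothetical framing with a 4-byte host-order length prefix — the real Skinny wire format differs in detail). MSG_WAITALL makes recv() return only once the full amount is available, pushing the waiting into the kernel:

```c
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

/* Read the fixed-size header first, then exactly the payload it announces.
 * Returns the payload length, or -1 on error. */
int read_framed_message(int fd, char *payload, size_t max)
{
    uint32_t len;
    if (recv(fd, &len, sizeof(len), MSG_WAITALL) != sizeof(len)) return -1;
    if (len > max) return -1;
    if (recv(fd, payload, len, MSG_WAITALL) != (ssize_t)len) return -1;
    return (int)len;
}

/* Small self-test: frame "hello" over a socketpair and read it back. */
int framed_roundtrip(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) return -1;
    uint32_t len = 5;
    send(sv[1], &len, sizeof(len), 0);
    send(sv[1], "hello", 5, 0);
    char buf[16];
    int n = read_framed_message(sv[0], buf, sizeof(buf));
    close(sv[0]);
    close(sv[1]);
    return n;
}
```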
Methodology:
- Started FS
- Connected various kinds of clients (event_socket, SIP, Skinny)
- Got the list of high-CPU threads: ps -eLf | egrep '(freeswitch|CMD)' | sort -n -k 5
- Debugged freeswitch with gdb to show backtraces on all threads:
"thread apply all bt"