[Freeswitch-users] Advice on scalable design pattern

Ben Langfeld ben at langfeld.co.uk
Thu Mar 14 18:43:24 MSK 2013


Have you looked at how 2600hz do this kind of thing with Kazoo?

Regards,
Ben Langfeld


On 14 March 2013 03:02, Cal Leeming [Simplicity Media Ltd] <
cal.leeming at simplicitymedialtd.co.uk> wrote:

> Hello all,
>
> I'm currently looking at the various ways you can deploy FreeSWITCH in a
> scalable manner, but I'm struggling a little with the design.
>
> The sweet spot I'm trying to find is one where I can scale out capacity by
> simply throwing more servers at it.
>
> In an ideal world, this would mean support for:
>
> * Have users from multiple domains spread over multiple servers; a single
> domain should not be restricted to a single FreeSWITCH instance
> * Have no single point of failure within the structure
> * Have no single point of bottleneck within the structure
> * Should not use OpenSIPS (I suspect this might get me a lot of flak, but
> seriously, I'd rather write my own in Python or ZXTM TrafficScript than
> use OpenSIPS lol).
>
> So far, the most promising option I can come up with is the following
> (although I'm not sure it's the best available):
>
> * A proxy sits in front of all backend FreeSWITCH instances, acting as a
> media proxy only (a pair of proxies in active/passive mode)
> * The proxy tracks which backend instance holds each registration, and
> keeps each session sticky to that instance (see the sketch after this list)
> * If a backend instance needs to call another user in the same domain, it
> bridges the call back to the proxy; the proxy then determines which other
> FreeSWITCH instance has the user and routes the request accordingly. If
> the call is to an external destination, the proxy routes it to the traffic
> aggregation switches (which are basically another pair of FreeSWITCH
> instances), which then route it to the upstream provider. This means you
> only have to maintain two sets of trunk configuration, so when you need to
> scale out your FreeSWITCH backends you don't have to put in a request to
> your upstream providers for an additional set of trunks.
> * The bottleneck within each cluster is the proxy pair in active/passive
> mode. You could mitigate this by allocating customers to a specific
> cluster (rather than to a specific instance), thus controlling which
> customers go to which proxy; if an entire cluster dies, you can re-route
> that cluster's traffic to a different cluster.
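> 
> To make the routing lookup concrete, here's a rough Python sketch of what
> the proxy would have to do (the Registrar class and all hostnames are
> purely hypothetical; in practice this state would live inside the proxy
> or a shared store populated from SIP REGISTER traffic, not an in-process
> dict):
> 
>     # Hypothetical sketch of registration-sticky routing; not a real
>     # proxy, just the lookup logic described in the list above.
>     class Registrar:
>         def __init__(self, trunk_switches):
>             # Pair of traffic aggregation switches holding the upstream trunks
>             self.trunk_switches = trunk_switches
>             # AOR ("user@domain") -> backend FreeSWITCH instance holding the registration
>             self.locations = {}
> 
>         def register(self, aor, backend):
>             # Record which backend instance the user registered through
>             self.locations[aor] = backend
> 
>         def route_call(self, callee_aor):
>             # On-net calls go to the callee's backend; everything else
>             # goes to one of the aggregation switches
>             backend = self.locations.get(callee_aor)
>             if backend is not None:
>                 return backend
>             return self.trunk_switches[hash(callee_aor) % len(self.trunk_switches)]
> 
>     registrar = Registrar(["agg1.internal", "agg2.internal"])
>     registrar.register("alice@tenant-a.com", "fs-backend-03.internal")
>     registrar.route_call("alice@tenant-a.com")   # -> fs-backend-03.internal (on-net)
>     registrar.route_call("+15550001@provider")   # -> an aggregation switch (off-net)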
>
> The other, simpler option is to allocate each domain to a specific backend
> instance, but this really doesn't feel clean: it means a customer cannot
> scale past the capacity of a single instance, it has less redundancy, and
> overall it just feels wrong.
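> 
> For comparison, that domain-pinning approach boils down to a static map
> (again, purely hypothetical names), which is exactly what makes it so
> limiting:
> 
>     # Each domain is pinned to one instance; a tenant can never outgrow
>     # it, and losing that instance takes the whole domain down.
>     DOMAIN_TO_INSTANCE = {
>         "tenant-a.com": "fs-backend-01.internal",
>         "tenant-b.com": "fs-backend-02.internal",
>     }
> 
>     def route_domain(domain):
>         return DOMAIN_TO_INSTANCE[domain]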
>
> Any general thoughts/comments on this would be much appreciated.
>
> Thanks
>
> Cal
>