<html><body><span style="font-family:Verdana; color:#000000; font-size:10pt;">There is a need to ensure that calls do not drop, but we must balance that against the cost of making the system redundant. We took some small, inexpensive measures to improve our odds, but we could spend a lot more for basically nothing beyond giving some client a warm fuzzy. <br><br>To expand on what's mentioned below, the biggest cause of downtime that we've experienced is human accidents. We only have our solution in Tier 1 co-location facilities, so power/net dying isn't really an issue. (If the power does go out, thousands of systems are down, and everyone notices.) What we do end up with is IT admins tripping over power cords, pulling the wrong Ethernet cable, blowing a fuse on one side of the rack, etc. So we've doubled up on all our cables. <br><br>After that, the next biggest cause has been MoBo/CPU failure due to fan failure. This issue doesn't really have a good solution, which is why we began looking at Xen. This is where Sun systems look attractive, as systems like the E10000 can shut down one CPU or board and keep the rest of the system running. But the cost of that solution is high, and I think it's SPARC-only. I'd love to hear others' take on a solution for this, as Xen is really a lot of overhead for a rare problem. Though it is at least technically interesting for me :) <br><br>As for storage, this was completely personal experience. Our SSDs have had no issues, while SATA drives seem to fail at a rate of about one per month. The NAS storage is connected via dual NICs as well (again, for the cabling) and is completely separated from the network; it is very close to DAS, just using GigE as the "cable". We are always looking for ways to improve, but the newest and greatest from EMC and others just doesn't seem to offer anything significant and costs a LOT more. <br><br>I like the idea of a complete custom chassis. I hadn't considered that, thinking it would be expensive. 
Sounds like it's worth a look. As we consider creating an appliance offering, this may become more important.<br><br>-pete<br><br><br><br>
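P.S. As a concrete sketch of the doubled-up cabling described above: on Linux the usual way to pair the onboard and PCI NICs is an active-backup bond, so either cable, port, or switch side can fail without dropping traffic. The interface names and addresses below are illustrative, not our actual config:<br>

```
# /etc/network/interfaces sketch (Debian-style); eth0 = onboard NIC,
# eth1 = PCI NIC. active-backup keeps one link live and fails over to
# the other when the carrier drops.
auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100        # link-state check interval, in ms
    bond-primary eth0
```

Running each leg of the bond to a different switch (or at least a different fuse/PDU side of the rack) is what actually covers the tripped-cord and blown-fuse cases.<br><br>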
<blockquote webmail="1" style="border-left: 2px solid blue; margin-left: 8px; padding-left: 8px; font-size: 10pt; color: black; font-family: verdana;">
<div >
-------- Original Message --------<br>
Subject: Re: [Freeswitch-users] FreeSWITCH HA + Loadbalancing<br>
From: Steve Underwood <steveu@coppice.org><br>
Date: Sat, August 29, 2009 8:17 pm<br>
To: freeswitch-users@lists.freeswitch.org<br>
<br>
This sounds like so many "redundancy" projects that will probably offer <br>
nothing in the real world.<br>
<br>
On 08/30/2009 05:52 AM, Pete Mueller wrote:<br>
> I guess I should also mention that Xen is a side-project.<br>
><br>
> When considering this issue for our existing production systems, we <br>
> chose to put as much HA into hardware as we can. We are not concerned <br>
> with FS crashing, as so far we've never seen that happen (except when <br>
> our module caused it :) So for each of our systems:<br>
> - We have dual NIC cards (onboard NIC + PCI card) both bridged together <br>
> in case one fails<br>
NICs hardly ever fail. It's the wiring which is the vulnerable area. How <br>
independent can you make the two wiring paths, when they come from the <br>
same box?<br>
> - We have redundant power supplies.<br>
Even with a good UPS, power fails more often than a high quality power <br>
supply. Just how independent are the two power sources feeding your two <br>
power supplies? Do you have two completely independent UPS sets? Do you <br>
have spatially diverse wiring from them?<br>
> - We use Mirrored Solid State Disks for local storage (far better MTBF <br>
> than HDD, a lot faster too)<br>
My experience so far is that SSD reliability is very poor, with entire <br>
drives disappearing, rather than just getting the odd bad sector. I <br>
guess to balance this, hard disk drive reliability seems to have <br>
plummeted in the last year or so, after several good years.<br>
> - All but OS and speed-critical data is stored on a NAS device<br>
NAS == more wiring. More wiring == more vulnerabilities. Are you sure <br>
your setup is a win? NAS tends to help keep the data secure, but it <br>
isn't good for reliable access to that data.<br>
> - We have redundant DBs with Memcache in front for speed<br>
><br>
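
[Editor's note: the Memcache-in-front-of-the-DB setup mentioned above is typically the cache-aside pattern. A minimal sketch, using a plain dict to stand in for a memcached client (a real deployment would use a memcached library with the same get/set shape); the class and key names are illustrative:]

```python
# Cache-aside: check the cache first, fall back to the authoritative DB
# on a miss, then populate the cache; writes invalidate the cached entry.
class CacheAsideStore:
    def __init__(self, db):
        self.db = db          # authoritative store (the "redundant DBs")
        self.cache = {}       # stand-in for a memcached client
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:         # fast path: served from cache
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.db.get(key)      # slow path: hit the database
        if value is not None:
            self.cache[key] = value   # populate cache for next reader
        return value

    def put(self, key, value):
        self.db[key] = value
        self.cache.pop(key, None)     # invalidate so readers see fresh data
```

Invalidate-on-write (rather than write-through) keeps the cache from ever holding a value the DB never committed.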
> At the same time we chose to use COTS hardware (SuperMicro <br>
> chassis/MoBo) rather than the big-boys like IBM or Dell. This kept <br>
> the overall cost per machine low. Initially some were concerned that <br>
> not having a name like IBM on our servers would put off some <br>
> potential clients. The solution was to pay a company to design and <br>
> build a custom face plate for the SuperMicro boxes, which oddly looks <br>
> more impressive to clients than a rack full of IBM faceplates. It was <br>
> surprisingly low cost for the faceplates too.<br>
Some years ago we made an entire custom chassis for off the shelf <br>
boards. The quotes for fabricating that in small numbers were all over <br>
the place, but we ended with a good quality chassis at low cost. Most <br>
off the shelf rack mount enclosures are really pricey, so it isn't that <br>
hard to match their price with a custom build. We ended up with a <br>
better design (at least for our purposes) that cost us no more. It can <br>
really make your stuff stand out.<br>
<br>
A simple respray of the front panel can achieve a distinctive look at <br>
low cost too. :-)<br>
><br>
> For scalability, OpenSIPS was our choice. There's a very nice <br>
> tutorial on their website on how to configure Load Balancing.<br>
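
[Editor's note: for reference, an OpenSIPS load-balancing setup along the lines of that tutorial looks roughly like this. This is a sketch based on the 1.x load_balancer module, not a verified excerpt from the tutorial; the group id, resource name, and DB credentials are placeholders:]

```
# opensips.cfg fragment (sketch). The load_balancer module reads its
# destination set (the FreeSWITCH boxes) from the database.
loadmodule "load_balancer.so"
modparam("load_balancer", "db_url",
         "mysql://opensips:password@localhost/opensips")

route {
    ...
    # pick the least-loaded destination in group 1 with "pstn" capacity
    if (!load_balance("1", "pstn")) {
        sl_send_reply("503", "No capacity");
        exit;
    }
    t_relay();
}
```
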
<br>
Regards,<br>
Steve<br>
<br>
<br>
_______________________________________________<br>
FreeSWITCH-users mailing list<br>
FreeSWITCH-users@lists.freeswitch.org<br>
<a href="http://lists.freeswitch.org/mailman/listinfo/freeswitch-users" target="_blank" mce_href="http://lists.freeswitch.org/mailman/listinfo/freeswitch-users">http://lists.freeswitch.org/mailman/listinfo/freeswitch-users</a><br>
UNSUBSCRIBE:<a href="http://lists.freeswitch.org/mailman/options/freeswitch-users" target="_blank" mce_href="http://lists.freeswitch.org/mailman/options/freeswitch-users">http://lists.freeswitch.org/mailman/options/freeswitch-users</a><br>
<a href="http://www.freeswitch.org" target="_blank" mce_href="http://www.freeswitch.org">http://www.freeswitch.org</a><br>
</div>
</blockquote></span></body></html>