Life, Football, Technology and Vespas…

Zimbra Cluster – what is the point?

I am in the process of deciding what to do about the Zimbra cluster installation that was previously set up at my current place of work. Having had quite a bit of experience with Zimbra clusters, as well as numerous discussions with colleagues and friends, I have come to the conclusion that there is little to no point in clustering a Zimbra mail store – at all.

To understand how I arrived at this conclusion, you have to look at how the Zimbra clustering was designed and what it is meant to achieve. Starting with – what does setting up a Zimbra cluster actually achieve?

In my opinion, the Zimbra clustering has not been designed to achieve ‘high availability’ – it has been designed more for physical failover. Which, in service provider environments, is not ideal. Service providers want as little downtime as possible, so that upgrades or device failures have no impact on users. In a corporate environment, where you can afford larger maintenance windows, this is acceptable.

The key problem is that the Zimbra mail store has too many elements that are configured to reside as a single instance, making high availability difficult. The database, the indexes and even the storage for the data blobs are designed to run on a single physical machine. During a cluster service relocate, the service must be shut down on one node, the file systems remounted on the secondary node, and then the service restarted. Anyone who runs Zimbra in a cluster environment will tell you that this takes minutes rather than seconds.
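
To give a feel for where those minutes go, here is a rough, purely illustrative sketch (in Python) of what a relocate boils down to – the volume, mount point and node names are invented, and a real cluster manager drives these steps from its service definition rather than over ssh:

    import subprocess, time

    # Hypothetical names – the volume, mount point and nodes differ per site.
    ZIMBRA_LV = "/dev/vg_zimbra/lv_opt_zimbra"
    MOUNT_POINT = "/opt/zimbra"

    def run(node, cmd):
        """Run a command on a cluster node via ssh and wait for it to complete."""
        subprocess.check_call(["ssh", node, cmd])

    def relocate(active_node, standby_node):
        started = time.time()
        # 1. Stop every Zimbra service on the active node (MTA, mailboxd, MySQL, LDAP...).
        run(active_node, "su - zimbra -c 'zmcontrol stop'")
        # 2. Release the shared storage on the active node.
        run(active_node, "umount " + MOUNT_POINT)
        # 3. Take the storage over on the standby node.
        run(standby_node, "mount " + ZIMBRA_LV + " " + MOUNT_POINT)
        # 4. Start the whole stack again – this step alone usually takes minutes.
        run(standby_node, "su - zimbra -c 'zmcontrol start'")
        print("service relocated in %.0f seconds" % (time.time() - started))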

A better design, in my opinion, would be to start by breaking up the elements that are designed to run as a single instance on a single machine. Start with the database. The database is one of the services that, at present, cannot be ‘shared’. Each mail store needs its own database, but by making a slight change so that it is available over the wire, the possibility opens up of having multiple machines use the database.
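
As a minimal sketch of what ‘over the wire’ buys you – assuming the mail store’s MySQL instance has been reconfigured to listen on the network rather than only on a local socket – any mail-store node could then reach the same database. The host name, credentials and schema name below are invented:

    # Assumes the mysql-connector-python package; all names are illustrative only.
    import mysql.connector

    conn = mysql.connector.connect(
        host="db-cluster.example.internal",  # a shared, clustered DB service – not localhost
        port=3306,
        user="zimbra",
        password="secret",
        database="zimbra",
    )
    cur = conn.cursor()
    cur.execute("SELECT 1")  # any mail-store node can now issue the same queries
    print(cur.fetchone())
    conn.close()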

The next consideration would be the data store for the message blobs. As the blobs are stored as individual files in a hierarchy of folders, having the message store available via NFS becomes viable. I have run NFS as a mail store in million-plus user environments for many years and it has never given me any issues in either reliability or performance – and I am prepared to challenge anyone on this! So by clustering an NFS service and offering the storage over the wire, you now have multiple machines that can access the DB AND the store, which really only leaves elements such as the logs, config files and the indexes to consider. My first thought would be to utilise a cluster file system, such as GFS, but I have to be honest, I have not completely thought it through. It seems trivial to me, though, and I cannot think of a reason why this setup would not work. Logs can be delivered over the wire to a centralised log server – so this is another element that can be ‘spread out’.
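
The reason NFS is such a natural fit here is that the blobs are just individual files under a directory hierarchy, so any node that has the export mounted can write and read them. A rough sketch, with an invented hashed layout and mount point (this is not Zimbra’s actual on-disk format):

    import hashlib
    import os

    STORE_ROOT = "/mnt/store"  # assumed NFS mount point, shared by every node

    def blob_path(message_id):
        digest = hashlib.sha1(message_id.encode()).hexdigest()
        # Fan out into sub-directories so no single directory grows too large.
        return os.path.join(STORE_ROOT, digest[:2], digest[2:4], digest + ".msg")

    def store_blob(message_id, raw_message):
        path = blob_path(message_id)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as fh:
            fh.write(raw_message)
        return path

    def read_blob(message_id):
        with open(blob_path(message_id), "rb") as fh:
            return fh.read()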

What the above suggestion amounts to is a total redesign of the mail store, breaking it up into a front end and a back end. The front end would be the devices that actually accept the mail via LMTP, with only the binaries installed locally. The front-end machines could be part of a GFS cluster so that configurations (those not already in LDAP) and indexes can be shared across multiple machines. The back end could be a DB and NFS cluster, either individual or combined. This set-up allows for no interruption of service should any device fail.
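
To illustrate the front-end side: any machine in the pool can take an LMTP delivery, because everything it writes to (the DB and the blob store) sits behind it on the shared back end. A small sketch using Python’s standard-library LMTP client – the host name, port and addresses are invented for illustration:

    import smtplib

    msg = (b"From: sender@example.com\r\n"
           b"To: user@example.com\r\n"
           b"Subject: test\r\n"
           b"\r\n"
           b"hello\r\n")

    # Deliver to whichever front end the load balancer (or MX record) hands out.
    with smtplib.LMTP("frontend-1.example.internal", 7025) as lmtp:
        lmtp.sendmail("sender@example.com", ["user@example.com"], msg)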

This is, however, a total redesign of the mail store and introduces an increased level of complexity, perhaps making the mail store difficult for Zimbra to support. So if you stick with the traditional cluster set-up offered by Zimbra, what do you really get?

It appears that the only reason for clustering the Zimbra mail store would be physical device failure. The service would simply be relocated to the standby node, should the primary node fail. When you start looking at large mail store environments that require many CPUs and a fair bit of RAM, you can end up with a very expensive machine idling, waiting for the primary machine to fail over. A better solution would be to cluster the machine rather than the service.

Using virtualization, such as Xen, it is possible to run a virtual machine as a clustered service. Should the physical machine fail, the virtual machine would simply be relocated to another physical machine in the cluster. This has the added benefit that, should you need to perform physical maintenance, there would be no interruption to the Zimbra service if you utilise live migration. Another benefit is the dynamic allocation of resources: if you notice that too many or too few resources have been allocated to the virtual machine, they can be adjusted dynamically without any impact to the service.
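
As a sketch of what that looks like in practice, here is a live migration driven through the libvirt Python bindings against Xen – the domain and host names are invented, and in a real cluster the cluster manager would trigger this rather than a hand-run script:

    import libvirt

    src = libvirt.open("xen:///")                  # the physical host currently running the VM
    dst = libvirt.open("xen+ssh://standby-host/")  # the physical host to move it to

    dom = src.lookupByName("zimbra-vm")            # the virtual machine running Zimbra
    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied across,
    # so the mail service keeps answering during the move.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()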

So all in all, I cannot see the point in the Zimbra cluster setup. If I want physical machine redundancy (as it appears to be designed for) then I would prefer to use virtualization / cloud computing. And if I want high availability, then I have to step away from the existing Zimbra cluster design anyway.


5 responses

  1. anonymouse

    I think you are right.

    What I don’t understand is how they built the Yahoo solution. I am still searching for some info.

    I am sure it can be scaled: split the MySQL / indexing / mail store across machines and create HA between these elements.

    February 13, 2010 at 1:07 pm

  2. I agree with the post above, as I stumbled across the same issue, and the only solution we see so far is to utilize VMware/Xen to make it highly available at the OS level.

    The other thing that is really upsetting is Zimbra’s denial of NFS usage – that is absurd. Their main reason for the denial is degraded performance, but if you have 100s of disks spinning for the same LUN in RAID10, I don’t see how performance can be degraded.

    To the previous poster: I don’t believe Yahoo actually uses the Zimbra solution for their mail; rather, they acquired Zimbra in 2007 to brand it as a Yahoo Mail solution. I assume that they then realized the product had many limitations (like the one discussed here) and happily pushed it off to EMC/VMware. Since the price of the sale has not been posted officially, I would think it was sold for less than what Yahoo paid for it.

    April 28, 2010 at 4:30 pm

    • bonoboslr

      Yep. There is no reason that NFS would in any way affect performance, or reliability for that matter. It is interesting what their strategy might be now that they have been acquired by VMware.

      Yahoo already had a large-scale, consumer-grade mail platform and would have addressed all of the performance issues that Zimbra is busy going through (for which I have logged many bugs and requests). The only thing that I think Yahoo saw as value was the collaboration element, which it was clearly lacking.

      April 28, 2010 at 8:09 pm

  3. Could it be that the performance concerns are with the number of IOPS to the disk subsystem and not the throughput? Also, NFS file locking characteristics are different from direct-attached storage, which can cause issues across some implementations. Those concerns are valid in general from a software vendor when you don’t control the quality of the hardware being deployed, so they address the worst-case scenario publicly.

    The devil is in the details… If you are using NFS to attach to an FC SAN, and not to some JBOD running SATA drives, you probably have no worries. It will come down to the use case: msgs/second at peak load, average size of messages transmitted at peak, latency in subsystems like the directory server.

    November 12, 2010 at 12:42 am

  4. whatever

    I totally agree.
    Zimbra is in some cases a mess (but honestly Exchange is no better there 🙂 )

    The point is they didn’t even get backup right in the Network Edition – they simply fail on this.
    At the moment it all hinges on the redo logs, on which backup relies –
    and as you can imagine, if a simple task like a mailbox backup isn’t achievable out of the store itself, clustering becomes a real mess.

    I also don’t believe the performance issue – actually the biggest performance hit within Zimbra is the Java engine itself and internal (CPU) processing.
    I never had an issue with I/O related to Zimbra – so it is very interesting why Zimbra strongly recommends against using not only NFS but also iSCSI and/or RAID 5/6.

    In fact the only thing that might get problematic is not the mailbox store but the MySQL/LDAP databases – but with the right config and a good amount of RAM that I/O issue disappears like magic 🙂

    If I had to guess, I would say those recommendations are made for another reason – maybe to push VMware-based solutions, or something less technical.

    But really the whole situation is just too funny:
    Network Edition against OSS:
    you get clustering – fail, you don’t need it
    you get backup – fail, still not reliable, still massive issues, and still backup on the same machine – you can of course mount an NFS drive, but backup has no mechanism to detect if the share is down or to fail over, so it really only works if the drive is up

    Outlook support – a partial fail; if you know their connector you know what I mean
    mobile devices – this one works well, but OSS/Z-Push does not do so badly there either

    So not only are there not many arguments left for a very overpriced Network Edition – there are also issues when it comes to offering it as a service provider. Backup and clustering are must-have basics – both are broken enough to be considered non-existent, and you need another solution, or an additional one alongside the Network Edition.

    August 25, 2011 at 9:34 pm
