If your site serves many Domains, you may want to install several independent CommuniGate Pro Servers and distribute the load by dividing the Domains between those servers. In this case you do not need to employ the special Cluster Support features. However, if you have one or several Domains with 100,000 or more Accounts each, and you cannot guarantee that clients will always connect to the proper server, or if you need dynamic load balancing and very high availability, you should implement a CommuniGate Pro Cluster on your site.
Many vendors use the term Cluster for simple fail-over or hot stand-by configurations. The CommuniGate Pro software can be used in fail-over, as well as in Distributed Domains configurations, however these configurations are not referred to as Cluster configurations.
A CommuniGate Pro Cluster is a set of Server computers that handle the site mail load together. Each Cluster Server hosts a set of regular, non-shared domains (the CommuniGate Pro Main Domain is always a non-shared one), and it also serves (together with other Cluster Servers) a set of Shared Domains.
To use CommuniGate Pro servers in a Cluster, you need a special CommuniGate Pro Cluster License.
Please read the Scalability section first to learn how to estimate your Server load, and how to get the most out of each CommuniGate Pro Server running in the Single-server or in the Cluster mode.
There are two main types of Cluster configurations: Static and Dynamic.
Each Account in a Shared Domain served with a Static Cluster is created (hosted) on a certain Server, and only that Server can access the account data directly. When a Static Cluster Server needs to perform any operation with an account hosted on a different Server, it establishes a TCP/IP connection with the account Host Server and accesses account data via that Host Server. This architecture allows you to use local (i.e. non-shared) storage devices for account data.
Note: some vendors have "Mail Multiplexor"-type products. Those products usually implement a subset of Static Cluster Frontend functionality.
Accounts in Shared Domains served with a Dynamic Cluster are stored on a shared storage, so each Cluster Server (except for Frontend Servers, see below) can access the account data directly. At any given moment, one of the Cluster Servers acts as a Cluster Controller synchronizing access to Accounts in Shared Domains. When a Dynamic Cluster Server needs to perform any operation with an account currently opened on a different Server, it establishes a TCP/IP connection with that "current host" Server and accesses account data via that Server. This architecture provides the highest availability (all accounts can be accessed as long as at least one Server is running), and does not require file-locking operations on the storage device.
Clusters of both types are usually equipped with Frontend Servers. Frontend Servers cannot access Account data directly - they always open connections to other (Backend) Servers to perform any operation with Account data.
Frontend servers accept TCP/IP connections from client computers (usually - from the Internet). In a pure Frontend-Backend configuration no Accounts are created on any Frontend Server, but nothing prohibits you from serving some Domains (with Accounts and mailing lists) directly on the Frontend servers.
When a client establishes a connection with one of the Frontend Servers and sends the authentication information (the Account name), the Frontend Server detects on which Backend Server the addressed Account can be opened, and establishes a connection with that Backend Server.
If the Frontend Servers are directly exposed to the Internet, and the security of a Frontend Server operating system is compromised so that someone gets unauthorized access to that Server OS, the security of the site is not totally compromised. Frontend Servers do not keep any Account information (Mailboxes, passwords) on their disks. The "cracker" would then have to go through the firewall and break the security of the Backend Server OS in order to get access to any Account information. Since the network between Frontend and Backend Servers can be disabled for all types of communications except the CommuniGate Pro inter-server communications, breaking the Backend Server OS is virtually impossible.
Both Static and Dynamic Clusters can work without dedicated Frontend Servers. This is called a symmetric configuration, where each Cluster Server implements both Frontend and Backend functions.
In the example below, the domain1.dom and domain2.dom Domain Accounts are distributed between three Static Cluster Servers, and each Server accepts incoming connections for these Domains. If the Server SV1 receives a connection for an Account located on the Server SV2, the Server SV1 starts to operate as a Frontend Server, connecting to the Server SV2 as the Backend Server hosting the addressed Account.

At the same time, an external connection established with the Server SV2 can request access to an Account located on the Server SV1. The Server SV2, acting as a Frontend Server, will open a connection to the Server SV1 and will use it as the Backend Server hosting the addressed Account.
In a symmetric configuration, the number of inter-server connections can approach the number of external (user) access-type (POP, IMAP, HTTP) connections. For a symmetric Static Cluster, the average number of inter-server connections is M*(N-1)/N, where M is the number of external (user) connections, and N is the number of Servers in the Static Cluster. For a symmetric Dynamic Cluster, the average number of inter-server connections is M*(N-1)/N * A/T, where T is the total number of Accounts in Shared Domains, and A is the average number of Accounts opened on each Server. For large ISP-type and portal-type sites, the A/T ratio is small (usually not more than 1:100).
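As a worked example, the formulas above can be evaluated directly. All load figures below are hypothetical and chosen only to illustrate the arithmetic:

```shell
# Hypothetical load figures, not taken from the documentation above.
M=30000    # external (user) connections to the symmetric Cluster
N=3        # number of Servers in the Cluster

# Symmetric Static Cluster: on average M*(N-1)/N connections are inter-server.
STATIC=$(( M * (N - 1) / N ))
echo "Static Cluster inter-server connections: $STATIC"     # 20000

# Symmetric Dynamic Cluster: scale by A/T, the fraction of Accounts
# currently opened on each Server.
T=1000000  # total Accounts in Shared Domains
A=10000    # average Accounts opened on each Server (A/T = 1:100)
DYNAMIC=$(( M * (N - 1) / N * A / T ))
echo "Dynamic Cluster inter-server connections: $DYNAMIC"   # 200
```

Note how the A/T factor makes a symmetric Dynamic Cluster generate far fewer inter-server connections than a symmetric Static Cluster under the same external load.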
In a pure Frontend-Backend configuration, the number of inter-server connections is usually the same as the number of external (user) connections: for each external connection, a Frontend Server opens a connection to a Backend Server. A small number of inter-server connections can be opened between Backend Servers, too.
Access to all Shared Domain Accounts is provided without interruption as long as at least one Frontend Server is running.
If a Frontend Server fails, no Account becomes unavailable and no mail is lost. While POP and IMAP sessions conducted via the failed Frontend Server are interrupted, all WebUser Interface sessions remain active, and WebUser Interface clients can continue to work via the remaining Frontend Servers. POP and IMAP users can immediately re-establish their connections via the remaining Frontend Servers.
If the failed Frontend server cannot be repaired quickly, its Queue can be processed with a different server, as a Foreign Queue.
This section specifies how each CommuniGate Pro Server should be configured to participate in a Static or Dynamic Cluster. These settings control inter-server communications in your Cluster.

First, install the CommuniGate Pro software on all Servers that will take part in your Cluster. Specify the Main Domain Name for all Cluster Servers. These names should differ in the first domain name element only:
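For example (all host names here are hypothetical), the Main Domain Names of a four-server Cluster could be:

```
back1.example.com
back2.example.com
front1.example.com
front2.example.com
```

All the names share the same example.com suffix and differ only in their first element.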
Use the WebAdmin Interface to open the Settings->General->Cluster page on each Backend and Frontend Server, and enter all Frontend and Backend Server IP addresses. CommuniGate Pro Servers will accept Cluster connections from the specified IP addresses only. If the Frontend Servers use dedicated Network Interface Cards (NICs) to communicate with Backend Servers, specify the IP addresses the Frontend Servers have on that internal network:
If you change this setting value, the new value will be in effect only after Server restart.
In all types of CommuniGate Pro Clusters, connections to Backend Servers can be established both from Frontend Servers and from other Backend Servers.
If your Backend Servers use non-standard port numbers for their services, change the Backend Server Ports values.
For example, if your Backend Servers accept WebUser Interface connections not on the port number 8100, but on the standard HTTP port 80, set 80 in the HTTP User field and click the Update button.
CommuniGate Pro can reuse inter-server connections for some services. Instead of closing a connection when an operation is completed, the connection is placed into an internal cache, and it is reused later, when this server needs to connect to the same server again. The Cache parameter specifies the size of that connection cache. If there are too many connections in the cache, the oldest connections are pushed out of the cache and closed.
Cluster members use the PWD protocol to perform administration operations remotely, on other cluster members. The port number they use to connect to on other Cluster members is the same as the port specified for the PWD protocol connections. These remote administrative operations have their own Log Level settings.
Servers in a Dynamic Cluster use the SMTP modules of other Cluster Backend members for remote message delivery (though the protocol between the servers is not the SMTP protocol). Use the Delivery port setting to specify the port number used with SMTP modules on other cluster members.
Servers in a Dynamic Cluster use the SMTP modules of other Cluster members to submit messages remotely (though the protocol between the servers is not the SMTP protocol). Use the Submit port setting to specify the port number used with SMTP modules on other cluster members.
When a user session running on one Cluster member needs to access a foreign Mailbox, and the account that this Mailbox belongs to cannot be opened by the same Cluster member, the Cluster Mailbox manager is used to access Mailboxes remotely. The Cluster Mailbox manager uses the IMAP port to connect to other cluster members. The Cluster Mailbox manager has its own Log Level setting.
A CommuniGate Pro Cluster can serve several Shared Domains. If you plan to provide POP and IMAP access to Accounts in those Domains, you may want to assign dedicated IP addresses to those Domains to simplify client mailer setups. See the Access section for more details.
If you use Frontend Servers, only Frontend Servers should have dedicated IP Addresses for Shared Domains. Inter-server communications always use full account names (accountname@domainname), so there is no need to dedicate IP Addresses to Shared Domains on Backend Servers.
If you use the DNS round-robin mechanisms to distribute the site load, you need to assign N IP addresses to each Shared Domain that needs dedicated IP addresses, where N is the number of your Frontend Servers. Configure the DNS Server to return these addresses in the round-robin manner:
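For example, a DNS zone could list three A-records for each Shared Domain, one per Frontend Server (all addresses below are hypothetical):

```
domain1.dom.  IN A  192.0.2.11
domain1.dom.  IN A  192.0.2.12
domain1.dom.  IN A  192.0.2.13
domain2.dom.  IN A  192.0.2.21
domain2.dom.  IN A  192.0.2.22
domain2.dom.  IN A  192.0.2.23
```

In this sketch, each Frontend Server would hold one address from each group on its network interface.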
In this example, the Cluster is serving two Shared Domains: domain1.dom and domain2.dom, and the Cluster has three Frontend Servers. Three IP addresses are assigned to each domain name in the DNS server tables, and the DNS server returns all three addresses when a client is requesting A-records for one of these domain names. Each time the DNS server "rotates" the order of the IP addresses in its responses, implementing the DNS "round-robin" load balancing (client applications usually use the first address in the DNS server response, and use other addresses only if an attempt to establish a TCP/IP connection with the first address fails).
When configuring these Shared Domains in your CommuniGate Pro Servers, you assign all three IP addresses to each Domain.
If you use a Load Balancer to distribute the site load, you need to place only one "external" IP address into DNS records describing each Shared Domain. You assign one "virtual" (LAN) IP address to each Shared Domain on each Frontend Server:
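This arrangement can be sketched as follows (all addresses are hypothetical): the DNS holds one external Load Balancer address per Shared Domain, and the balancer distributes each external address to the per-Frontend LAN addresses:

```
domain1.dom.  IN A  198.51.100.1   ; external address on the Load Balancer
domain2.dom.  IN A  198.51.100.2   ; external address on the Load Balancer

; Load Balancer mapping (not DNS): each external address is distributed
; to the "virtual" LAN addresses assigned on the three Frontend Servers
198.51.100.1 -> 192.168.10.1, 192.168.10.2, 192.168.10.3
198.51.100.2 -> 192.168.20.1, 192.168.20.2, 192.168.20.3
```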
In this example, the Cluster is serving two Shared Domains: domain1.dom and domain2.dom, and the Cluster has three Frontend Servers. One IP address is assigned to each Shared Domain in the DNS server tables, and those addresses are the external (Internet) addresses of your Load Balancer. You should instruct the Load Balancer to distribute connections received on each of its external IP addresses to three internal IP addresses - the addresses assigned to your Frontend Servers.
When configuring these Shared Domains in your CommuniGate Pro Servers, you assign these three internal IP addresses to each Domain.
DNS MX-records for Shared Domains can point to their A-records.
The best way to specify additional startup parameters is to create the Startup.sh file inside the base directory.
To create a Static Cluster, use the --staticBackend startup parameter with the Backend servers, and the --staticFrontend startup parameter with the Frontend servers.
To create a Dynamic Cluster, use the --clusterBackend startup parameter with the Backend servers, and the --clusterFrontend startup parameter with the Frontend servers.
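As a minimal sketch, the Startup.sh file described above could supply one of these parameters. This assumes the stock startup script reads a SUPPLPARAMS variable for additional flags - check the startup script shipped with your installation for the exact variable name:

```shell
#!/bin/sh
# Hypothetical Startup.sh placed in the CommuniGate Pro base directory.
# Assumption: the startup script reads SUPPLPARAMS for extra parameters.
# Use --clusterFrontend on the Frontend servers instead, or
# --staticBackend / --staticFrontend for a Static Cluster.
SUPPLPARAMS="${SUPPLPARAMS} --clusterBackend"
```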
To protect your site from DoS attacks, you may want to open SMTP, POP, IMAP, and other Listeners and limit the number of connections accepted from the same IP address. Set those limits on Frontend servers only, since Backend servers receive all connections from Frontends, and each Frontend can open a lot of connections from the same IP address.
Usually the Backend Servers are not directly accessible from the Internet. If you need to change the settings or monitor one of the Backend Servers from "outside", you can use the WebAdmin Interface of one of the Frontend Servers, using the following URL:
where 188.8.131.52 is the [internal] IP address of the Backend Server you want to access.
The outgoing mail traffic generated with regular (POP/IMAP) clients is submitted to the site using the A-records of the site Domains. As a result, the submitted messages go to the Frontend Servers and the messages are distributed from there.
Messages generated with WebUser clients and messages generated automatically (using the Automated Rules) are generated on the Backend Servers. Since usually the Backend servers are behind the firewall and since you usually do not want the Backend Servers to spend their resources maintaining SMTP queues, it is recommended to use the forwarding feature of the CommuniGate Pro SMTP module.
Select the Forward to option and specify the asterisk (*) symbol. In this case all messages generated on the Backend Servers will be quickly sent to the Frontend Servers and they will be distributed from there. If you do not want to use all Frontend servers for Backend mail relaying, change the Forward To setting to include the IP addresses of some Frontend Servers, separating the addresses with the comma (,) symbol.
In a Static Cluster, RPOP activity takes place on the Backend Servers. As a result, it is essential for those servers to be able to initiate outgoing TCP connections to remote servers. If the Backend Servers are connected to a private LAN behind a firewall, you should install some NAT server software on that network and configure the Backend Servers (using their OS TCP/IP settings) to route all non-local packets via the NAT server(s). Frontend Servers can be used to run NAT services.
In a Dynamic Cluster, RPOP activity takes place on those servers that have the RPOP service set to Locally for Others. These servers (usually the Frontend Servers) should be able to initiate outgoing TCP connections to remote servers. RPOP activity is scheduled on the active Cluster Controller, so if Backend Servers do not have direct access to the Internet, their RPOP setting should be set to Remotely.
The FTP module does not "proxy" connections to Backend servers. Instead, it uses CLI to manage Account File Storage data on Backend servers. This eliminates a problem of Backend servers opening FTP connections directly to clients. If all FTP connections come to the Frontend servers, the FTP services on Backends can be switched off.
The FTP module running on cluster Frontends behind a load balancer and/or a NAT has the same problems as any FTP server running in such a configuration. To support the active mode, make sure that Frontend servers can open outgoing connections to client FTP ports (when running via a NAT, make sure that the "address in use" problems are addressed by the NAT software). To support the passive mode, make sure that your load balancer allows clients to connect directly to the Frontend ports the FTP module opens for individual clients.
The LDAP module does not "proxy" connections to Backend servers. Instead, it uses CLI to authenticate users and, optionally, to perform LDAP Provisioning operations. If all LDAP connections come to the Frontend servers, the LDAP services on Backends can be switched off, unless they serve Cluster-wide Directory Units (see below).
Cluster-wide Directory Units are processed on the Cluster Controller server. Other Cluster members communicate with Cluster-wide Units by sending LDAP requests to the Controller.
If you use Cluster-wide Directory Units, the LDAP service on the Backend servers should be enabled.
The RADIUS module does not "proxy" connections to Backend servers. Instead, it uses CLI to authenticate users and to update their statistical data. If all RADIUS connections come to the Frontend servers, the RADIUS services on Backends can be switched off.
See the Cluster Signals section.
RSIP processing is similar to RPOP processing. RSIP registrations are implemented as Real-Time Tasks, so they run on those servers that can run Call Legs. See the Cluster Signals section.
Do not remove the "postmaster" Account from the Main Domains on your Backend servers. This Account is opened "by name" (bypassing the Router) when any other Cluster member has to connect to that Backend. You should also keep at least the "Can Modify All Domains and Accounts Settings" Access Right for the postmaster Account.
For extremely large sites (more than 5,000,000 active accounts), you can deploy a Static Cluster of Dynamic Clusters. It is essentially the same as a regular Static Cluster with Frontend Servers, but instead of Backend Servers you install Dynamic Clusters. This solves the redundancy problem of Static Clusters, but does not require extremely large Shared Storage devices and excessive network traffic of extra-large Dynamic Clusters:
Frontend Servers in a "Cluster of Clusters" need access to the Directory in order to implement Static Clustering. The Frontend Servers only read the information from the Directory, while the Backend Servers modify the Directory when accounts are added, renamed, or removed. The hostServer attribute of Account directory records contains the name of the Backend Dynamic Cluster hosting the Account (the name of Backend Cluster Servers without the first domain name element).
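For illustration, an Account directory record in such a setup might look like this. The record layout below is a simplified, hypothetical sketch; only the hostServer attribute is taken from the description above:

```
# hypothetical directory record for an Account in domain1.dom
dn: uid=user1,cn=domain1.dom
uid: user1
hostServer: backcluster2
```

A Frontend Server resolving this Account would read the hostServer value and relay the connection to the backcluster2 Backend Dynamic Cluster.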
Frontend Servers can be grouped into subsets for traffic segmentation. Each subset can have its own load balancer(s), and a switch that connects this Frontend Subset with every Backend Dynamic Cluster.
If you plan to deploy many (50 or more) Frontend Servers, the Directory Server itself can become the main site bottleneck. To remove this bottleneck and to provide redundancy on the Directory level, you can deploy several Directory Servers (each serving one or several Frontend subsets). Backend Dynamic Clusters can be configured to update only one "Master" Directory Server, and the other Directory Servers can use replication mechanisms to synchronize with the Master Directory Server; alternatively, the Backend Clusters can be configured to modify all Directory Servers at once.