How does the cluster appear on the Linux box? For each cluster you add through the lnlbctl tool, a new virtual interface (lnlb0, and so on) will appear. The cluster IP address is associated with the virtual interface with a netmask of 255.255.255.255, so outbound traffic is routed through the real interface (eth0, for example).
Does it work with switches (or does it work only with hubs)?
Yes, it works with switches too (and you are really encouraged to use a switch instead of a hub, of course!): since the cluster MAC address is never learned by the switch, cluster traffic will still be flooded to all ports, just as on a hub.
Is it mandatory for the nodes' real interfaces (e.g. eth0) to have an IP address? Yes, since outgoing traffic needs routing rules through them.
Do I need to keep time synchronization between nodes (e.g. using ntp)?
No. Nodes do not need to be strictly time-synchronized, since synchronization is achieved during the convergence phases (it is not based on the system clock). However, the driver will not tolerate a date difference of more than one week between nodes. It is still a nice idea to have clocks synchronized within a cluster (ntp should be a fine solution).
Can a node be part of two or more clusters?
Yes. Simply by doing #lnlbctl addif <ip> ethN you can make a node part of more than one cluster, even binding them to the same real ethN interface (you will see more virtual interfaces on the system, lnlb0 and lnlb1 for instance). Obviously the two clusters must have different IP addresses.
Which kernel version is required?
The driver only supports recent 2.6 Linux kernels, and has been tested on recent 2.6 releases.
I cannot reach the cluster IP address when ping-ing it from a node. Is it normal?
Yes, it's absolutely normal that you cannot reach the cluster IP from within the nodes. This is done to avoid inconsistencies in the tracking table between nodes (since traffic between two local interfaces is not broadcast on the LAN).
How much bandwidth does the synchronization data (heartbeat, etc.) between nodes steal? The synchronization messages exchanged during the convergence phase are really limited: a few KB every 7 seconds, so you should really not worry about this.
Can I change the convergence interval (7 seconds)?
Yes. You can do it by passing a "heartbeat_interval=N" parameter when modprobe'ing the lnlb driver; the value is in seconds. Anyway, I suggest you do not set it below 5 seconds.
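For example, loading the driver with a 10-second convergence interval would look like this (the module and parameter names are the ones given above; run as root):

```shell
# Load the lnlb driver with a 10-second convergence interval
# (as suggested, do not go below 5 seconds).
modprobe lnlb heartbeat_interval=10
```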
How is synchronization data (heartbeat, etc.) exchanged? Nodes use special Ethernet frames (Ethertype 0x8870) to exchange messages in the cluster.
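For illustration, an Ethernet II header carrying that Ethertype can be packed as follows (a sketch only; the payload layout of real lnlb messages is not documented here, so it is left opaque, and `build_frame` is a hypothetical helper, not part of lnlb):

```python
import struct

LNLB_ETHERTYPE = 0x8870  # Ethertype used by lnlb cluster messages


def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Pack an Ethernet II header (dst MAC, src MAC, Ethertype) in
    network byte order and append an opaque payload."""
    return struct.pack("!6s6sH", dst_mac, src_mac, LNLB_ETHERTYPE) + payload
```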
What happens if a node of the cluster has a fault? Should this happen, it will be detected at the next convergence, and the node will no longer be considered part of the cluster within about 10 seconds (the exact time depends on the convergence interval). Connections mapped to the dead node will be redistributed. When the node is working again, it will rejoin the cluster (upon the lnlbctl addif .... command) and participate in the balancing process again.
How does the balancing algorithm work?
The balancing algorithm is based on weighted hash tables. It is extremely fast and allows each node to map incoming datagrams to a cluster node quickly, even when the number of nodes is high. The algorithm is dynamic and is based on the weight heartbeats exchanged between nodes. The more weight a node has, the fewer incoming datagrams it gets.
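A minimal sketch of the idea (not the driver's actual code; the MD5 hash and the inverse-weight shares are illustrative assumptions, and `pick_node` is a hypothetical name):

```python
import hashlib


def pick_node(src_ip: str, weights: dict) -> str:
    """Map a source IP to a node using inverse-weight shares of a hash space.

    Nodes with a higher weight (i.e. more loaded) get a smaller share of
    the space, hence fewer incoming datagrams.
    """
    inv = {node: 1.0 / w for node, w in weights.items()}
    total = sum(inv.values())
    # Deterministic hash of the source IP into [0, 1).
    h = int(hashlib.md5(src_ip.encode()).hexdigest(), 16) / 16 ** 32
    acc = 0.0
    node = None
    for node, share in sorted(inv.items()):
        acc += share / total
        if h < acc:
            return node
    return node  # guard against floating-point rounding
```

With weights like {"light": 1, "heavy": 100}, the vast majority of source IPs land on the lightly loaded node, while any given IP always maps to the same node as long as the weights do not change.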
How is the "weight" estimated?
The administrator can choose to take the weight from the system load averages (1, 5, 15 mins), from free memory (%), or can even push the weight value manually (1-65535) via a crontab script. See the getting started guide.
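As an illustration of the load-average option, a weight could be derived from the 1-minute load average like this (a sketch only: the scale factor and the clamping to the 1-65535 range are assumptions, not the driver's actual formula):

```python
import os


def weight_from_loadavg(scale: int = 100) -> int:
    """Derive a weight from the 1-minute load average (the driver also
    supports the 5/15-minute averages, free-memory %, or a manually
    pushed value).  Scaling and clamping here are illustrative."""
    load1, _, _ = os.getloadavg()
    return max(1, min(65535, int(load1 * scale)))
```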
How does the session tracking module work?
The default session tracking module shipped with the sources (lnlb_mod_default) does simple and lightweight source-IP-based tracking. Once a remote IP is mapped to a node, all subsequent datagrams sent to the cluster will be delivered to that same node (unless the node fails, in which case the connection will be reassigned). If this is not enough for you, you can redesign the default session tracking module, or add a new module that handles tracking for a specific transport protocol (e.g. TCP, UDP, ...). See the Developers page.
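The behavior described above can be sketched in a few lines (illustrative only; `SourceIPTracker` and `assign` are hypothetical names, and the real module works on in-kernel data structures):

```python
class SourceIPTracker:
    """Sketch of source-IP session tracking: the idea behind
    lnlb_mod_default, not its actual kernel implementation."""

    def __init__(self, assign):
        self.assign = assign   # callable: src_ip -> node, e.g. a weighted hash
        self.table = {}        # src_ip -> node

    def node_for(self, src_ip):
        # The first datagram from an IP picks a node; later ones stick to it.
        if src_ip not in self.table:
            self.table[src_ip] = self.assign(src_ip)
        return self.table[src_ip]

    def node_failed(self, dead):
        # Entries mapped to a dead node are reassigned; `assign` is
        # expected to no longer return the dead node at this point.
        for ip, node in list(self.table.items()):
            if node == dead:
                self.table[ip] = self.assign(ip)
```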
Is there a limit to the number of nodes in a cluster? Yes: each cluster can have at most 255 nodes, although this is a very high value. My suggestion is not to exceed 16-32 nodes per cluster (it really depends on the inbound traffic; see the remarks section in the getting started guide).
What about Denial of Service? Since the default connection tracking is based only on the source IP address, attacks like TCP SYN floods will not cause any particular problem to the lnlb driver. In practice, they should have the same impact as if they were directed at the real host IP address.
How can I get the cluster status? You can get a list of clusters, with node MAC addresses and their weights, simply by doing a cat /proc/net/lnlb
Can I find out which cluster node a source IP address is tracked to?
Yes. You can read the session tracking table by doing a cat /proc/net/lnlb_conntrack.
Will it change my eth0 MAC address? No. The real interface will continue its normal network operations on its own MAC and IP address (if any). The interface is put in promiscuous mode in order to catch frames directed to the cluster MAC address.
How is the cluster MAC address determined? The cluster MAC address will be 02-00-a-b-c-d, where a, b, c, d are the four octets of the cluster IP address.
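That scheme can be written as a tiny helper (a sketch; `cluster_mac` is a hypothetical name, not part of lnlb):

```python
def cluster_mac(ip: str) -> str:
    """Derive the cluster MAC from the cluster IP: a locally administered
    02-00 prefix followed by the four IP octets in hex."""
    octets = [int(o) for o in ip.split(".")]
    return "-".join(f"{b:02x}" for b in [0x02, 0x00] + octets)


print(cluster_mac("192.168.0.10"))  # 02-00-c0-a8-00-0a
```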
Is there any chance to avoid inbound traffic flooding? Inbound flooding to the cluster nodes can't be avoided by design. However, you can avoid flooding other hosts in the network by grouping the cluster into a separate VLAN. I'm working on a multicast-MAC feature in order to do this without a VLAN, possibly dealing with switches' IGMP snooping feature.