We're currently putting together a hands-on workshop for LinuxWorld Expo SF 2005.
We've titled the workshop "Linux-HA Release 2 - Hands-On Learning Workshop".
In this workshop, participants will set up and configure their own Linux-HA high-availability cluster using the new Release 2 version of the Open Source Linux-HA (aka "heartbeat") software. If more than 15 people attend, participants will work together in pairs to create their HA clusters.
Release 2 of Linux-HA allows for the creation of highly available services spread across a cluster of several machines with no single point of failure. These services and the servers they run on are monitored. If a service fails to operate correctly, or a server fails, the affected services are quickly migrated to another server, keeping them highly available. Release 2 also provides very powerful rules for expressing dependencies between services and for deciding where to locate them in the cluster. Because these services are based on init(8) service scripts, they are extremely easy to configure and manage, and very familiar to system administrators. Linux-HA Release 2 is the most powerful Open Source HA solution available, comparable to commercial HA packages.
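To give a flavor of the init(8)-based resource model and the location rules described above, a Release 2 resource might be declared in the cluster information base (CIB) roughly as follows. This is a hedged sketch, not a workshop handout: the resource and node names (web_server, node1) are hypothetical, and the exact schema should be checked against the Linux-HA Release 2 documentation.

```xml
<!-- Sketch of a Release 2 CIB fragment (names are hypothetical) -->
<resources>
  <!-- An ordinary LSB init script (/etc/init.d/apache) managed as a cluster resource -->
  <primitive id="web_server" class="lsb" type="apache">
    <operations>
      <!-- Monitor the service every 30 seconds so failures trigger migration -->
      <op id="web_server_mon" name="monitor" interval="30s" timeout="20s"/>
    </operations>
  </primitive>
</resources>
<constraints>
  <!-- Location rule: prefer running the web server on node1 -->
  <rsc_location id="web_server_location" rsc="web_server">
    <rule id="prefer_node1" score="100">
      <expression attribute="#uname" operation="eq" value="node1"/>
    </rule>
  </rsc_location>
</constraints>
```

The point for attendees is that the resource being managed is just a standard init script; the cluster layer adds monitoring, placement rules, and failover around it.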
In the workshop, these clusters will run Linux on Power on IBM OpenPower logical partitions using advanced virtualization[*] techniques. Prior experience with high availability or Power hardware is not required. General knowledge of Linux (or UNIX) administration is recommended.
We prefer 220V power (1475 watts), but we have a Philmore 2000-watt step-up/step-down power transformer on hand to convert 110V to 220V.
--Dave R has confirmed the availability of suitable power and ethernet connectivity in the room.
Having access to the external Internet is highly desirable. If it cannot be provided, we need to know.
To do 40 LPARs, with 40 participants (for example), we will need the following IP resources:
Grand Total: 124 IP addresses
The total should be around 37 connections, including cables. The bare minimum is 24: one for each of 19 people connected to partitions, plus 5 for the HMC, the 720 FSP and Ethernet, and the instructors.
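As a quick sanity check on the connection arithmetic above (figures taken directly from this note):

```python
# Check the "dead minimum" network-connection count from the note.
participant_connections = 19   # one per person connected to a partition
infrastructure = 5             # HMC, 720 FSP and Ethernet, instructors
bare_minimum = participant_connections + infrastructure
print(bare_minimum)  # 24, matching the minimum stated above
```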
Tool from bug 609