Site Designations in Heartbeat

Right now, Heartbeat has no concept of nodes belonging to different sites.

This page describes a proposal to add site designations to the Heartbeat infrastructure.

Under this proposal, every node would be designated as belonging to a site. In effect, the site would simply be a node attribute that could be queried through the Heartbeat API.
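
As a rough sketch of what that could look like from a client's point of view, the fragment below walks the membership using the existing Heartbeat client API (hb_api.h: ll_cluster_new(), signon(), and the nodewalk calls) and queries a node_site() operation that is purely hypothetical; it does not exist in today's API and is only meant to illustrate the proposed attribute.

    /* Sketch only: node_site() is the hypothetical addition proposed on
     * this page; the surrounding calls are the existing client API. */
    #include <stdio.h>
    #include <hb_api.h>

    int main(void)
    {
            ll_cluster_t *hb = ll_cluster_new("heartbeat");
            const char *node;

            if (hb == NULL || hb->llc_ops->signon(hb, "site-demo") != HA_OK) {
                    fprintf(stderr, "cannot sign on to heartbeat\n");
                    return 1;
            }

            hb->llc_ops->init_nodewalk(hb);
            while ((node = hb->llc_ops->nextnode(hb)) != NULL) {
                    printf("%s status=%s site=%s\n", node,
                           hb->llc_ops->node_status(hb, node),
                           hb->llc_ops->node_site(hb, node)); /* hypothetical */
            }
            hb->llc_ops->end_nodewalk(hb);

            /* sign-off and cleanup omitted for brevity */
            return 0;
    }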

This site information could then be used by quorum plugins and/or quorum tiebreaker plugins.
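
To make the idea concrete, here is a small standalone illustration of how per-site membership counts might feed a quorum decision in a two-site stretch cluster. It is not the Heartbeat quorum or tiebreaker plugin interface, and the policy shown is an assumption chosen purely for the example.

    /* Illustration only: this is NOT the Heartbeat quorum plugin API.
     * It merely shows the kind of decision a site-aware tiebreaker
     * could make from per-site membership counts. */
    #include <stdbool.h>

    struct site_view {
            int nodes_seen;   /* nodes of this site currently in membership */
            int nodes_total;  /* nodes configured for this site */
    };

    /* Hypothetical policy: grant quorum on a strict majority of all
     * configured nodes, or on an exact half when the local site is fully
     * present (standing in for an external tiebreaker). */
    bool have_quorum(const struct site_view *local,
                     const struct site_view *remote)
    {
            int seen  = local->nodes_seen  + remote->nodes_seen;
            int total = local->nodes_total + remote->nodes_total;

            if (2 * seen > total)
                    return true;
            return 2 * seen == total && local->nodes_seen == local->nodes_total;
    }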

I briefly considered having different deadtimes for same-site and different-site nodes, but that doesn't work: membership still needs to be computed at once for the entire stretch (or split-site) cluster. This is a limitation of the stretch-cluster architecture currently being proposed.

In practice, this means that local and remote failovers have to use the same deadtime.
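
For reference, deadtime is a single cluster-wide setting in ha.cf, so in a stretch cluster it has to be chosen to suit the slower inter-site links as well. A typical excerpt (values are illustrative) looks like this:

    # /etc/ha.d/ha.cf (excerpt) - one deadtime for the whole cluster,
    # so it must accommodate the inter-site links too
    keepalive 2
    warntime  10
    deadtime  30
    initdead  60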

But if this information were provided, it could be put to use, and the place to provide it is at the bottom of the infrastructure.

We currently have a /var/lib/heartbeat/hostcache file in which we keep the nodes currently considered part of the cluster. It would be ideal to add the site information to that same file, since it is automatically kept in sync and distributed throughout the cluster. One could add a new command, or new options to existing commands, to create or modify this information; an illustrative layout is sketched below.
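
Leaving the exact on-disk format aside, the idea would be to carry one extra field per node. In the illustrative layout below, the node names, the <uuid> placeholder, and the site column are all made up for the example; only the location of the file is real.

    # /var/lib/heartbeat/hostcache - illustrative only; the site column is
    # the proposed addition, <uuid> stands for the node UUID Heartbeat stores
    paris-node1    <uuid>    paris
    paris-node2    <uuid>    paris
    london-node1   <uuid>    london
    london-node2   <uuid>    london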