<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://kb.linux-vs.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Awebb</id>
		<title>LVSKB - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://kb.linux-vs.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Awebb"/>
		<link rel="alternate" type="text/html" href="http://kb.linux-vs.org/wiki/Special:Contributions/Awebb"/>
		<updated>2026-04-27T06:54:28Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.26.2</generator>

	<entry>
		<id>http://kb.linux-vs.org/wiki?title=Maui&amp;diff=248</id>
		<title>Maui</title>
		<link rel="alternate" type="text/html" href="http://kb.linux-vs.org/wiki?title=Maui&amp;diff=248"/>
				<updated>2005-09-01T22:19:24Z</updated>
		
		<summary type="html">&lt;p&gt;Awebb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Maui is an advanced job scheduler for use on clusters and supercomputers. It is the precursor to [[Moab]]. It is a highly optimized and configurable tool capable of supporting a large array of scheduling policies, dynamic priorities, extensive reservations, and fairshare. It is currently in use at hundreds of leading government, academic, and commercial sites throughout the world. It improves the manageability and efficiency of machines ranging from clusters of a few processors to multi-teraflop supercomputers.&lt;br /&gt;
&lt;br /&gt;
Maui is a community project and may be downloaded, modified, and distributed.  It has been made possible by the support of Cluster Resources, Inc. and the contributions of many individuals and sites, including the U.S. Department of Energy, PNNL, the Center for High Performance Computing at the University of Utah (CHPC), the Ohio Supercomputing Center (OSC), the University of Southern California (USC), SDSC, MHPCC, BYU, NCSA, and many others.&lt;/div&gt;</summary>
		<author><name>Awebb</name></author>	</entry>

	<entry>
		<id>http://kb.linux-vs.org/wiki?title=Moab&amp;diff=249</id>
		<title>Moab</title>
		<link rel="alternate" type="text/html" href="http://kb.linux-vs.org/wiki?title=Moab&amp;diff=249"/>
				<updated>2005-09-01T22:17:29Z</updated>
		
		<summary type="html">&lt;p&gt;Awebb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moab Cluster Suite™, developed by [http://www.clusterresources.com Cluster Resources], is a professional cluster workload management solution that integrates the scheduling, managing, monitoring and reporting of cluster workloads. Moab Cluster Suite simplifies and unifies management across one or more hardware, operating system, storage, network, license and resource manager environments. Its task-oriented management and highly flexible policy engine ensure that service levels are delivered and workload is processed faster. This enables organizations to accomplish more work, resulting in improved cluster ROI.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Moab Cluster Suite is the successor to the Maui Workload Manager.  Its technology is used on clusters such as [http://teragrid.ncsa.uiuc.edu/TGIA64LinuxCluster.html Teragrid], [http://www.westgrid.ca/support/topics/scheduling.php WestGrid], [http://www.tcf.vt.edu/systemX.html Virginia Tech], [http://www.irb.hr/en/cir/projects/dcc/00006/ IRB], and [http://www.sara.nl/userinfo/lisa/usage/batch/index.html SARA]. These organizations and others often improve cluster performance by 20 to 30 percent, with some organizations more than doubling the usage of their compute resources.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.clusterresources.com/products/moabclustersuite.shtml Moab Cluster Suite]&lt;/div&gt;</summary>
		<author><name>Awebb</name></author>	</entry>

	<entry>
		<id>http://kb.linux-vs.org/wiki?title=LVS_Cluster_Management&amp;diff=155</id>
		<title>LVS Cluster Management</title>
		<link rel="alternate" type="text/html" href="http://kb.linux-vs.org/wiki?title=LVS_Cluster_Management&amp;diff=155"/>
				<updated>2005-09-01T22:07:30Z</updated>
		
		<summary type="html">&lt;p&gt;Awebb: /* Cluster Management Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Cluster Management ==&lt;br /&gt;
&lt;br /&gt;
Cluster management is the monitoring and administration of all the computers in a computer cluster. It covers a wide range of functionality, such as resource monitoring, cluster membership management, reliable group communication, and full-featured administration interfaces.&lt;br /&gt;
&lt;br /&gt;
One of the advantages of a cluster system is that it has hardware and software redundancy, because the cluster system consists of a number of independent nodes, and each node runs a copy of the operating system and application software. Cluster management can help achieve high availability by detecting node or daemon failures and reconfiguring the system appropriately, so that the workload can be taken over by the remaining nodes in the cluster.&lt;br /&gt;
&lt;br /&gt;
== LVS Cluster Management ==&lt;br /&gt;
&lt;br /&gt;
Since an LVS cluster is a [[load balancing]] cluster, its management requirements are simple: cluster monitoring and an administration interface are the two major parts.&lt;br /&gt;
&lt;br /&gt;
=== Cluster Monitoring ===&lt;br /&gt;
&lt;br /&gt;
The major work of cluster monitoring in [[LVS]] is to monitor the availability of real servers and load balancers, and to reconfigure the system if any partial failure happens, so that the whole cluster system can still serve requests. Note that monitoring the availability of a database, network file system or distributed file system is not addressed here.&lt;br /&gt;
&lt;br /&gt;
To monitor the availability of real servers, there are two approaches: one is to run service monitoring daemons at the load balancer that check server health periodically; the other is to run monitoring agents on the real servers that collect information and report it to the load balancer. The service monitor usually sends service requests and/or ICMP ECHO_REQUEST packets to the real servers periodically, and removes/disables a real server in the server list at the load balancer if there is no response within a specified time or an error response is received, so that no new requests are sent to the dead server. When the service monitor detects that the dead server has recovered, it adds the server back to the available server list at the load balancer. In this way the load balancer can mask the failure of service daemons or servers automatically.&lt;br /&gt;
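The check/remove/re-add cycle described above can be sketched as follows. This is a minimal illustration, not LVS code: `tcp_check` and `update_pool` are hypothetical names, and a real service monitor would also support ICMP probes and protocol-level request checks.&lt;br /&gt;

```python
import socket

def tcp_check(host, port, timeout=2.0):
    """Probe a real server's service port; True if it accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def update_pool(pool, server, healthy):
    """Enable or disable a real server in the balancer's active list.

    pool maps server address -> enabled flag; returns the sorted list of
    servers that should currently receive new requests.
    """
    pool[server] = healthy
    return sorted(s for s, up in pool.items() if up)
```

A monitoring loop would call `tcp_check` for every real server on each period and feed the result to `update_pool`, so a dead server drops out of rotation and rejoins automatically once it responds again.&lt;br /&gt;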
&lt;br /&gt;
In the monitoring agent approach, a monitoring master also runs at the load balancer to receive information from the agents. The monitoring master adds/removes servers at the load balancer based on the availability of the agents, and can also adjust server weights based on server load information. However, more effort is required to make the monitoring agents run on all kinds of server operating systems, such as Linux, FreeBSD, and Windows.&lt;br /&gt;
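One possible weight-adjustment policy for the monitoring master is to scale a server's configured weight down as its reported load rises. The formula below is an assumption for illustration (feedbackd-style tools implement their own policies), and `weight_from_load` is a hypothetical name.&lt;br /&gt;

```python
def weight_from_load(base_weight, load_avg, cpus):
    """Scale a real server's weight down as its load average approaches
    full CPU utilisation; weight 0 means the server is drained."""
    utilisation = load_avg / float(cpus)
    return max(0, int(round(base_weight * (1.0 - min(utilisation, 1.0)))))
```

An idle 4-CPU server keeps its full weight, one at load average 2.0 is halved, and one at or beyond full utilisation stops receiving new connections.&lt;br /&gt;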
&lt;br /&gt;
The load balancer is the core of a server cluster system, and it must not be a single point of failure for the whole system. To prevent the whole system from going out of service because of a load balancer failure, we need to set up a backup (or several backups) of the load balancer, connected by heartbeat or VRRP. Heartbeat daemons run on the primary and the backup respectively, and they periodically exchange &amp;quot;I'm alive&amp;quot; messages with each other through serial lines and/or network interfaces. When the heartbeat daemon on the backup cannot hear the heartbeat message from the primary within the specified time, it takes over the virtual IP address to provide the load-balancing service. When the failed load balancer comes back to work, there are two options: either it becomes the backup load balancer automatically, or the active load balancer releases the VIP address and the recovered one takes over the VIP address and becomes the primary load balancer again.&lt;br /&gt;
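The takeover decision itself reduces to a deadtime test, sketched below. The 10-second deadtime and the function name are assumptions; heartbeat and VRRP implementations each have their own configurable timers and election rules.&lt;br /&gt;

```python
import time

DEADTIME = 10.0  # seconds of silence before takeover (assumed value)

def should_take_over(last_heard, now=None, deadtime=DEADTIME):
    """Backup decides to claim the VIP once the primary has been silent
    for longer than the configured deadtime."""
    now = time.monotonic() if now is None else now
    return (now - last_heard) > deadtime
```

Each received &amp;quot;I'm alive&amp;quot; message resets `last_heard`; the backup evaluates this predicate on every timer tick and, when it fires, brings up the VIP and starts answering ARP for it.&lt;br /&gt;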
&lt;br /&gt;
The primary load balancer holds the state of connections, i.e. which server each connection is forwarded to. If the backup load balancer takes over without that connection information, clients have to resend their requests to access the service. In order to make load balancer failover transparent to client applications, we implement connection synchronization in [[IPVS]]: the primary IPVS load balancer synchronizes connection information to the backup load balancers through UDP multicast. When the backup load balancer takes over after the primary one fails, it has the state of most connections, so that almost all connections can continue to access the service through the backup load balancer.&lt;br /&gt;
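Conceptually, each synchronized entry identifies one connection by its protocol and address tuple. The wire layout below is a simplified assumption for illustration only; the real IPVS sync message format is defined in the kernel and carries additional state and flags.&lt;br /&gt;

```python
import socket
import struct

# Assumed layout for one sync entry: protocol, client addr/port,
# virtual addr/port, destination (real server) addr/port.
SYNC_FMT = "!B4sH4sH4sH"

def pack_conn(proto, caddr, cport, vaddr, vport, daddr, dport):
    """Serialize one connection entry for multicast to the backups."""
    return struct.pack(SYNC_FMT, proto,
                       socket.inet_aton(caddr), cport,
                       socket.inet_aton(vaddr), vport,
                       socket.inet_aton(daddr), dport)

def unpack_conn(data):
    """Backup side: recover the connection tuple from a received entry."""
    proto, c, cp, v, vp, d, dp = struct.unpack(SYNC_FMT, data)
    return (proto, socket.inet_ntoa(c), cp,
            socket.inet_ntoa(v), vp, socket.inet_ntoa(d), dp)
```

The primary would send batches of such entries to a multicast group over UDP; each backup joins the group and replays the entries into its own connection table.&lt;br /&gt;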
&lt;br /&gt;
=== Administration Interface ===&lt;br /&gt;
&lt;br /&gt;
The administration interface of LVS cluster management should enable administrators to do the following things:&lt;br /&gt;
* add new servers to increase the system throughput or remove servers for system maintenance, without bringing down the whole system service&lt;br /&gt;
* monitor the traffic of LVS cluster and provide statistics&lt;br /&gt;
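On Linux, `ipvsadm` is the standard administration tool for these tasks. The helpers below only build example command lines for adding and removing a real server; the addresses are placeholders and the `-g` (direct routing) forwarding method is one of several choices.&lt;br /&gt;

```python
def ipvsadm_add_real(vip, vport, rip, rport, method="-g", weight=1):
    """Build the ipvsadm invocation that adds a real server to a virtual
    TCP service without interrupting existing traffic."""
    return ["ipvsadm", "-a", "-t", f"{vip}:{vport}",
            "-r", f"{rip}:{rport}", method, "-w", str(weight)]

def ipvsadm_del_real(vip, vport, rip, rport):
    """Build the invocation that removes a real server for maintenance."""
    return ["ipvsadm", "-d", "-t", f"{vip}:{vport}", "-r", f"{rip}:{rport}"]
```

For the monitoring side, `ipvsadm -L -n --stats` lists the configured virtual services and real servers with connection, packet and byte counters.&lt;br /&gt;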
&lt;br /&gt;
== Cluster Management Software ==&lt;br /&gt;
&lt;br /&gt;
There are many cluster management software packages that work in conjunction with LVS to provide high availability and management of the whole system.&lt;br /&gt;
&lt;br /&gt;
* [[Moab]]&lt;br /&gt;
* [[Maui]]&lt;br /&gt;
* [[Piranha]]&lt;br /&gt;
* [[Keepalived]]&lt;br /&gt;
* [[UltraMonkey]]&lt;br /&gt;
* [[heartbeat plus ldirectord]]&lt;br /&gt;
* [[heartbeat plus mon]]&lt;br /&gt;
* [[feedbackd]]&lt;br /&gt;
* [[LVSM]]&lt;br /&gt;
* [[lvs-kiss]]&lt;br /&gt;
* [[OpenSSI Cluster integrated HA-LVS]]&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Computer_cluster Computer Cluster]&lt;br /&gt;
&lt;br /&gt;
[[Category:Cluster Management]]&lt;/div&gt;</summary>
		<author><name>Awebb</name></author>	</entry>

	</feed>