Performance and Tuning
Performance Measurement
Tools
- testlvs: a simple throughput-testing tool for LVS from Julian; see the testlvs page
- ab: the Apache HTTP server benchmarking tool (see the usage sketch after this list)
- httperf: a tool for measuring web server performance from HP; see the httperf homepage (also shown below)
- Tsung: an open-source multi-protocol distributed load-testing tool; see the Tsung project homepage
- specweb: a web server benchmark from spec.org; it is now specweb2005
- webbench: a licensed benchmark program that measures the performance of web servers, developed by VeriTest
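
For a quick smoke test of a virtual service, the simpler tools can be driven straight from the command line. A minimal sketch, assuming a hypothetical virtual service at 192.168.0.100 serving /index.html (the address and path are placeholders, not from this page):

# ab: 10000 requests, 100 concurrent
ab -n 10000 -c 100 http://192.168.0.100/index.html
# httperf: open 100 new connections per second until 10000 connections are made
httperf --server 192.168.0.100 --uri /index.html --rate 100 --num-conns 10000

When benchmarking through a director, run the client, director and real servers on separate machines, otherwise you measure the client as much as the cluster.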
Performance Tuning
Connection Hash Table Size
The IPVS connection hash table uses chaining to resolve hash collisions. Using a large connection hash table greatly reduces collisions when there are hundreds of thousands of connections in the table.
Note that the table size must be a power of 2. The value you enter is the exponent: the actual table size is 2 raised to that power. The exponent ranges from 8 to 20; the default is 12, which gives a table size of 4096. Do not set the number too small, or performance will suffer. You can adapt the table size to your virtual server's workload: a good rule of thumb is a table size not far less than the number of new connections per second multiplied by the average time a connection stays in the table. For example, if your virtual server receives 200 connections per second and a connection stays in the table for 200 seconds on average, the table size should be not far less than 200 x 200 = 40000, so 32768 (2**15) is a good choice.
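
The rule of thumb above can be turned into a quick calculation of the exponent to enter; a sketch using bc, with the 200 x 200 figures from the example in this section:

# table_size ~= connections_per_second * average_seconds_in_table
# log2(200 * 200) = log2(40000):
echo 'l(200*200)/l(2)' | bc -l    # prints ~15.29, so enter 15 (32768) or 16 (65536)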
We can configure the size of the IPVS connection hash table before compiling the Linux kernel. Here are the relevant IPVS options in the 'make menuconfig' menu:
Networking Options -->
    Network packet filtering framework (Netfilter) --->
        IP: Virtual Server Configuration -->
            [*] IPv6 support for IPVS
            [ ] IP virtual server debugging
            (20) IPVS connection table size (the Nth power of 2)
                *** IPVS transport protocol load balancing support ***
            [*] TCP load balancing support
            [*] UDP load balancing support
            [*] ESP load balancing support
            [*] AH load balancing support
                *** IPVS scheduler ***
            <M> round-robin scheduling
            <M> weighted round-robin scheduling
            <M> least-connection scheduling
            <M> weighted least-connection scheduling
            <M> locality-based least-connection scheduling
            <M> locality-based least-connection with replication scheduling
            <M> destination hashing scheduling
            <M> source hashing scheduling
            <M> shortest expected delay scheduling
            <M> never queue scheduling
                *** IPVS application helper ***
            <M> FTP protocol helper
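
If IPVS is built as a module instead, recent kernels let you set the same exponent at load time through the ip_vs module's conn_tab_bits parameter, so recompiling is not always necessary. A minimal sketch, assuming a modular IPVS and the 2**15 sizing from the example above:

# load IPVS with a 32768-entry connection table (2^15)
modprobe ip_vs conn_tab_bits=15
# verify: IPVS logs the table size when it initialises
dmesg | grep 'IPVS: Connection hash table'
# to make it persistent across reboots, add to a file under /etc/modprobe.d/:
#   options ip_vs conn_tab_bits=15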
Netfilter Connection Track
For performance reasons, IPVS uses its own simple and fast connection tracking instead of netfilter connection tracking. So, if you do not use the firewalling features on the load balancer and you need an extremely fast load balancer, do not load the netfilter conntrack modules on the system, because there is no need to track each connection twice. Note that LVS/NAT also works without the conntrack modules.
Julian compared the performance of IPVS with and without ip_conntrack loaded; see http://archive.linuxvirtualserver.org/html/lvs-users/2001-12/msg00141.html
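
To make sure no double tracking happens, check that the conntrack modules are not loaded, and unload them if they are. A sketch, assuming nothing else (such as iptables state-match rules) still needs them; the module names changed from ip_conntrack to nf_conntrack around kernel 2.6.20:

# list any loaded conntrack modules
lsmod | egrep 'ip_conntrack|nf_conntrack'
# unload on newer kernels
modprobe -r nf_conntrack_ipv4 nf_conntrack
# or on older kernels
rmmod ip_conntrack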
Some Performance Data
While the LVS/DR cluster was pushing out 9.6 Gbps of traffic, the LVS load balancer was doing a negligible amount of work, which suggests that LVS could push a great deal more traffic given enough real servers and end users.
External Links
- The C10K problem, written by Dan Kegel: good notes on how to configure operating systems and write code to support 10K clients on a single server