OS-level optimizations
This article describes OS-level optimizations that improve the performance of Packet Sensors and Packet Filters. It is not intended for Flow Sensors or Flow Filters.
- Download the latest ixgbe driver. The latest Intel driver significantly reduced CPU usage: with the driver shipped in the Ubuntu 12.04 kernel, CPU usage was 80%, whereas with the latest Intel driver it was only 45%.
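If you build the driver from the Intel source archive, the usual sequence looks roughly like this (the version number is a placeholder; kernel headers and build tools are assumed to be installed):
# unpack the driver source (substitute the version you actually downloaded)
tar xzf ixgbe-<version>.tar.gz
cd ixgbe-<version>/src
# compile and install the module against the running kernel
make install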
- Load PF_RING in transparent mode 2 and set a reasonable buffer size.
modprobe pf_ring transparent_mode=2 min_num_slots=16384
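To confirm that PF_RING loaded with the expected settings, you can read its proc interface:
# shows the PF_RING version, number of ring slots and transparent mode in effect
cat /proc/net/pf_ring/info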
- Load the ixgbe driver. We found that setting InterruptThrottleRate to 4000 was optimal for our traffic. Setting FdirPballoc to 3 enables 32k hash filters or 8k perfect filters in Flow Director.
modprobe ixgbe InterruptThrottleRate=4000 FdirPballoc=3
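If you want to verify that the module accepted these parameters, one quick check (not part of the original procedure) is:
# list the parameters the ixgbe module supports, then check the kernel log
modinfo -p ixgbe
dmesg | grep -i ixgbe | tail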
- Bring up the 1 GbE or 10 GbE interface (in our case this was eth3).
ifconfig eth3 up
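On systems without ifconfig, the iproute2 equivalent is:
# bring the capture interface up without assigning an address
ip link set dev eth3 up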
- Optimise the Ethernet device. We mostly turned off offloading options that hinder capture throughput. Substitute eth3 with the interface appropriate to your system.
ethtool -C eth3 rx-usecs 1000
ethtool -C eth3 adaptive-rx off
ethtool -K eth3 tso off
ethtool -K eth3 gro off
ethtool -K eth3 lro off
ethtool -K eth3 gso off
ethtool -K eth3 rx off
ethtool -K eth3 tx off
ethtool -K eth3 sg off
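You can verify that the offloads are off and that the coalescing value took effect with:
# offload settings (tso/gro/lro/gso/rx/tx/sg should now read off)
ethtool -k eth3
# interrupt coalescing parameters (rx-usecs should be 1000)
ethtool -c eth3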
- Set up CPU affinity for interrupts based on the number of RX queues on the NIC, balanced across both processors. This may vary from system to system. Check /proc/cpuinfo to see which processor IDs are associated with each physical CPU.
printf "%s" 1 > /proc/irq/73/smp_affinity #cpu0 node0 printf "%s" 2 > /proc/irq/74/smp_affinity #cpu1 node0 printf "%s" 4 > /proc/irq/75/smp_affinity #cpu2 node0 printf "%s" 8 > /proc/irq/76/smp_affinity #cpu3 node0 printf "%s" 10 > /proc/irq/77/smp_affinity #cpu4 node0 printf "%s" 20 > /proc/irq/78/smp_affinity #cpu5 node0 printf "%s" 40 > /proc/irq/79/smp_affinity #cpu6 node1 printf "%s" 80 > /proc/irq/80/smp_affinity #cpu7 node1 printf "%s" 100 > /proc/irq/81/smp_affinity #cpu8 node1 printf "%s" 200 > /proc/irq/82/smp_affinity #cpu9 node1 printf "%s" 400 > /proc/irq/83/smp_affinity #cpu10 node1 printf "%s" 800 > /proc/irq/84/smp_affinity #cpu11 node1 printf "%s" 1000 > /proc/irq/85/smp_affinity #cpu12 node0 printf "%s" 2000 > /proc/irq/86/smp_affinity #cpu13 node0 printf "%s" 40000 > /proc/irq/78/smp_affinity #cpu18 node1 printf "%s" 80000 > /proc/irq/88/smp_affinity #cpu19 node1
Alternatively, you can use the set_irq_affinity.sh script included with the driver:
./set_irq_affinity.sh
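The IRQ numbers above are specific to our hardware; to find the IRQs assigned to your interface (assuming it is eth3), you can check /proc/interrupts:
# list the IRQ numbers and per-CPU interrupt counts for the eth3 queues
grep eth3 /proc/interrupts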
- We recommend that several network buffer sizes be increased from their defaults. Please add the following lines to /etc/sysctl.conf:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 250000
For RHEL 6 you may also want to add the following to /etc/sysctl.conf:
net.ipv4.tcp_congestion_control = bic
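After editing /etc/sysctl.conf, the new values can be applied without a reboot:
# reload kernel parameters from /etc/sysctl.conf
sysctl -p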
Author
Andrisoft Team
Created on
2014-06-24 20:59:32
Updated on
2017-12-10 01:42:28