I have already written an article on checking the network bandwidth speed using netperf; you can find it below.
How to monitor network bandwidth in Linux using iperf3

In this article I will guide you through the steps to monitor the available network bandwidth using iperf3.
One advantage of iperf3 is that it ships on the Red Hat installation DVD, so you do not need to download any third-party tool.
The steps below were validated on Red Hat Enterprise Linux 7.
You can install iperf3 using the yum command, assuming you have a valid repository, or you can copy the rpm from the Red Hat DVD and install it manually.
NOTE: On a RHEL system you must have an active subscription to RHN, or you can configure a local offline repository from which the 'yum' package manager can install the provided rpm and its dependencies.
The latest version of the iperf source code is at https://github.com/esnet/iperf
In the steps below, iperf3 sets a large send and receive buffer size to maximise throughput and runs each test for 60 seconds, which should be long enough to fully exercise the network.
On Server (IP: 10.58.160.101)
# yum install iperf3
OR
# rpm -Uvh /home/deepak/iperf3-3.1.7-2.el7.x86_64.rpm
warning: /home/deepak/iperf3-3.1.7-2.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing.. ################################# [100%]
Updating / installing..
1:iperf3-3.1.7-2.el7 ################################# [100%]
Explanation of the switches used:
-i, --interval n
pause n seconds between periodic bandwidth reports; default is 1, use 0 to disable
-s, --server
run in server mode
Run the command below on the server:
server # iperf3 -i 10 -s
warning: this system does not seem to support IPv6 - trying IPv4
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.58.160.103, port 40614
[ 5] local 10.58.160.101 port 5201 connected to 10.58.160.103 port 40616
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 1.27 GBytes 1.09 Gbits/sec
[ 5] 10.00-20.00 sec 1.27 GBytes 1.09 Gbits/sec
[ 5] 20.00-30.00 sec 1.27 GBytes 1.09 Gbits/sec
[ 5] 30.00-40.00 sec 1.27 GBytes 1.09 Gbits/sec
[ 5] 40.00-50.00 sec 1.27 GBytes 1.09 Gbits/sec
[ 5] 50.00-60.00 sec 1.27 GBytes 1.09 Gbits/sec
[ 5] 60.00-60.04 sec 4.78 MBytes 1.04 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-60.04 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-60.04 sec 7.63 GBytes 1.09 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.169.173.7, port 35190
[ 5] local 192.169.173.5 port 5201 connected to 192.169.173.7 port 35192
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 3.55 GBytes 3.05 Gbits/sec
[ 5] 10.00-20.00 sec 3.59 GBytes 3.08 Gbits/sec
[ 5] 20.00-30.00 sec 3.59 GBytes 3.08 Gbits/sec
[ 5] 30.00-40.00 sec 3.59 GBytes 3.08 Gbits/sec
[ 5] 40.00-50.00 sec 3.59 GBytes 3.08 Gbits/sec
[ 5] 50.00-60.00 sec 3.59 GBytes 3.08 Gbits/sec
[ 5] 60.00-60.04 sec 14.4 MBytes 3.14 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-60.04 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-60.04 sec 21.5 GBytes 3.07 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
By default the server uses TCP port 5201; if you intend to use some other port, use the '-p' switch:
-p, --port n
set server port to listen on/connect to to n (default 5201)
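If you pick a non-default port, it can help to confirm the server is actually listening before starting the client. A quick sketch (port 5202 is an arbitrary example, and assumes `iperf3 -s -p 5202` is already running on the server):

```shell
# Sketch: after starting the server with 'iperf3 -s -p 5202' (hypothetical port),
# confirm the listener from another shell on the server.
ss -ltn 'sport = :5202'
# An empty table means nothing is listening on that port yet.
```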
On Client (192.169.173.7, 10.58.160.103)
# rpm -Uvh /tmp/iperf3-3.1.7-2.el7.x86_64.rpm
warning: /tmp/iperf3-3.1.7-2.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing.. ################################# [100%]
Updating / installing..
1:iperf3-3.1.7-2.el7 ################################# [100%]
Explanation of the switches used:
-i, --interval n
pause n seconds between periodic bandwidth reports; default is 1, use 0 to disable
-w, --window n[KM]
window size / socket buffer size (this gets sent to the server and used on that side)
-t, --time n
time in seconds to transmit for (default 10 secs)
-c, --client host
run in client mode, connecting to the specified server
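One caveat with -w (a general Linux note, not specific to this setup): the kernel silently caps socket buffers at net.core.rmem_max / net.core.wmem_max, so a 1M request may be reduced. You can check the current limits:

```shell
# Current per-socket maximum receive/send buffer sizes, in bytes.
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
# To raise them temporarily (4 MB here is just an example value):
# sysctl -w net.core.rmem_max=4194304
# sysctl -w net.core.wmem_max=4194304
```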
Run the command below on the client, where the IP given with -c is the server's IP (eth0):
client # iperf3 -i 10 -w 1M -t 60 -c 10.58.160.101
Connecting to host 10.58.160.101, port 5201
[ 4] local 10.58.160.103 port 40616 connected to 10.58.160.101 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-10.00 sec 1.27 GBytes 1.09 Gbits/sec 0 2.30 MBytes
[ 4] 10.00-20.00 sec 1.27 GBytes 1.09 Gbits/sec 0 2.30 MBytes
[ 4] 20.00-30.00 sec 1.27 GBytes 1.09 Gbits/sec 0 2.30 MBytes
[ 4] 30.00-40.00 sec 1.27 GBytes 1.09 Gbits/sec 0 2.30 MBytes
[ 4] 40.00-50.00 sec 1.27 GBytes 1.09 Gbits/sec 0 2.30 MBytes
[ 4] 50.00-60.00 sec 1.27 GBytes 1.09 Gbits/sec 0 2.30 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-60.00 sec 7.63 GBytes 1.09 Gbits/sec 0 sender
[ 4] 0.00-60.00 sec 7.63 GBytes 1.09 Gbits/sec receiver
iperf Done.
Another attempt using a different interface (eth2)
client # iperf3 -i 10 -w 1M -t 60 -c 192.169.173.5
Connecting to host 192.169.173.5, port 5201
[ 4] local 192.169.173.7 port 35192 connected to 192.169.173.5 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-10.00 sec 3.56 GBytes 3.06 Gbits/sec 0 2.10 MBytes
[ 4] 10.00-20.00 sec 3.59 GBytes 3.08 Gbits/sec 0 2.10 MBytes
[ 4] 20.00-30.00 sec 3.59 GBytes 3.08 Gbits/sec 0 2.10 MBytes
[ 4] 30.00-40.00 sec 3.59 GBytes 3.08 Gbits/sec 0 2.10 MBytes
[ 4] 40.00-50.00 sec 3.59 GBytes 3.08 Gbits/sec 0 2.10 MBytes
[ 4] 50.00-60.00 sec 3.59 GBytes 3.08 Gbits/sec 0 2.10 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-60.00 sec 21.5 GBytes 3.08 Gbits/sec 0 sender
[ 4] 0.00-60.00 sec 21.5 GBytes 3.08 Gbits/sec receiver
iperf Done.
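If you want to post-process results instead of reading the tables, iperf3 can also print the whole run as JSON via its -J switch. A minimal sketch, using a trimmed, hypothetical sample of that JSON (field names follow iperf3's -J schema; python3 is assumed to be available):

```shell
# Save a trimmed, hypothetical sample of 'iperf3 -J' output.
cat > /tmp/iperf-sample.json <<'EOF'
{"end": {"sum_received": {"bytes": 8192000000, "seconds": 60.0, "bits_per_second": 1092353000.0}}}
EOF
# Extract the end-to-end receiver throughput and convert to Gbits/sec.
python3 -c 'import json; d = json.load(open("/tmp/iperf-sample.json")); print("%.2f Gbits/sec" % (d["end"]["sum_received"]["bits_per_second"] / 1e9))'
# → 1.09 Gbits/sec
```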
Let's check the allowed bandwidth for each of these interfaces.
The first attempt was done using eth0
# ethtool eth0 | grep Speed
Speed: 1000Mb/s
The second attempt was done using eth2
# ethtool eth2 | grep Speed
Speed: 3000Mb/s
So, as we can see, the allowed bandwidth for eth0 was 1 Gb/s and we measured a throughput of almost 1.09 Gbits/sec,
while for the other interface the allowed bandwidth was 3 Gb/s and the measured throughput reached 3.08 Gbits/sec.
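A quick sanity check on these numbers: iperf3's "GBytes" are binary (2^30 bytes) while its rates are decimal gigabits, so the eth0 transfer and bandwidth figures are mutually consistent:

```shell
# 7.63 GBytes transferred in 60 s, as reported for eth0.
# GBytes are 2^30 (1073741824) bytes; Gbits/sec are 10^9 bits per second.
awk 'BEGIN { printf "%.2f Gbits/sec\n", 7.63 * 1073741824 * 8 / 60 / 1e9 }'
# → 1.09 Gbits/sec
```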
I hope this article was useful.
In this multi-part series, I will explain how to use GNU tools and Linux to build a free network throughput test setup. In Part 1 of the series we will use only one NIC to create a single unicast TCP/UDP stream to saturate the link speed. First, here is what I used for this test setup (you can adapt it to your needs):
HW:
- 2 HP Blades in the same enclosure chassis with 10Gig HP Flex NICs (the servers)
- HP VC Flex-10 (the network device)
Software:
- CentOS 5.4 x64
- I installed only the core GNU packages plus development libraries, with no unnecessary services/software loaded. I even left X Window (Gnome/KDE) out and ran the system in runlevel 3.
- Firewall/SELinux disabled
- netperf sources (ftp://ftp.netperf.org/netperf/netperf-2.4.5.tar.gz)
I made the first run of the test using a single Vnet (the counterpart of VLANs in HP Virtual Connect). You can also use the same tools to create a setup that utilizes Shared Uplink Sets (trunk links on VC). To set this up, create a Vnet for your load VLAN (Vnet_LOAD). Then create the profiles for the blades and assign one FlexNIC (100Mb) to the management Vnet and the other one to Vnet_LOAD (10Gb).
Netperf is based on a client-server model. After installing the software on both blades you execute different processes on the different nodes. netserver, as the name states, is the server part of the test suite:
[root@SERVER ~]# netserver
Starting netserver at port 12865
Starting netserver at hostname 0.0.0.0 port 12865 and family AF_UNSPEC
while netperf is the tool that executes the test and prints the output.
[root@CLIENT ~]# netperf -H SERVER -l 15
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to SERVER (*******) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384   16384    15.00    9387.92
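netperf reports throughput in 10^6 bits/sec, so the figure above can be checked against the 10Gb line rate:

```shell
# Convert netperf's 10^6 bits/sec figure to a fraction of the 10 Gb/s link.
awk 'BEGIN { printf "%.1f%% of line rate\n", 9387.92e6 / 10e9 * 100 }'
# → 93.9% of line rate
```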
The default test is TCP_STREAM; you can also specify other tests, like UDP Request/Response, to fully saturate a full-duplex link:
[root@CLIENT ~]# netperf -t UDP_RR -H SERVER -l 15
UDP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to SERVER (*********) port 0 AF_INET
Socket Size Request Resp. Elapsed Trans.
bytes Bytes bytes bytes secs. per sec
You can also fetch CPU utilization information during the test using the -c (local) and -C (remote) parameters:
[root@CLIENT ~]# netperf -t UDP_RR -H SERVER -l 15 -c -C
UDP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to SERVER (*******) port 0 AF_INET
Socket Size Request Resp. Elapsed Trans. CPU CPU S.dem S.dem
Send Recv Size Size Time Rate local remote local remote
bytes bytes bytes bytes secs. per sec % S % S us/Tr us/Tr
129024 129024 1 1 15.00 20964.03 2.06 2.03 15.759 15.481
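Since UDP_RR keeps a single 1-byte transaction in flight, the average round-trip latency is simply the reciprocal of the transaction rate:

```shell
# 20964.03 transactions/sec with one outstanding transaction implies an
# average round trip of roughly 48 microseconds.
awk 'BEGIN { printf "%.1f us per transaction\n", 1e6 / 20964.03 }'
# → 47.7 us per transaction
```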
For detailed documentation and command line options you can check :
http://www.netperf.org/netperf/training/Netperf.html

In the next parts of the series I will focus on different types of throughput/load tests, like multi-flow and multi-IP throughput testing using netperf with some Linux tweaking, and IP multicast testing using MGEN.