Chapter 4. IPFW Dummynet and Traffic Shaping
FreeBSD’s dummynet is not a network for dummies. It is a sophisticated traffic shaping facility for managing bandwidth and applying scheduling algorithms. In this use of ipfw, the focus is not on ruleset development, although rules are still used to select traffic to pass to dummynet objects. Instead, the focus is on setting up a system to shape traffic flows.
Imagine having the ability to model traffic flow across the wild Internet. dummynet lets you model scheduling, queueing, and similar behaviors much as they occur on the real-world Internet.
dummynet works with three main types of objects - a pipe, a queue, and a sched (short for scheduler) - which also happen to be the three keywords used with dummynet.
A pipe (not to be confused with a Unix pipe(2)) is a model of a network link with configurable bandwidth and propagation delay.
A queue is an abstraction used to implement packet scheduling using one of several different scheduling algorithms. Packets sent to a queue are first grouped into flows according to a mask on a 5-tuple (protocol, source address, source port, destination address, destination port) specification. Flows are then passed to the scheduler associated with the queue, and each flow uses the scheduling parameters (weight, bandwidth, etc.) configured in the queue itself. A sched (scheduler) in turn is connected to a pipe (an emulated link) and arbitrates the link’s bandwidth among backlogged flows according to their weights and to the features of the scheduling algorithm in use.
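As a quick orientation, the three keywords appear in commands like the following (a minimal sketch; the numbers and values are illustrative only, not part of the lab configuration):
# ipfw pipe 1 config bw 1Mbit/s delay 25   <--- emulated link: bandwidth and propagation delay
# ipfw sched 1 config type qfq             <--- the scheduler that arbitrates the link
# ipfw queue 1 config pipe 1 weight 10     <--- a flow set fed to that scheduler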
Network performance testing is a complex subject that can encompass many variables across many different testing strategies. For our purposes, we want to understand the basics behind dummynet, so we will not be diving into the deepest levels of network performance testing - only enough to understand how to use dummynet. Also, for these tests, we will restrict our methodologies to using IP and TCP exclusively.
4.1. Measuring Default Throughput
The idea behind dummynet is that it lets you model and/or shape network speeds, available bandwidth, and scheduling algorithms. But first you have to know what your current transfer speeds are for the current environment (QEMU virtual machines over a FreeBSD bridge). To find out, we take a short detour to learn iperf3, a network bandwidth testing tool, and use it to perform simple transfer and bitrate measurements.
With iperf3, you can determine the effective throughput of data transfer for your system. Often called "goodput", this is the basic speed the user will see for transferring data across the network - the value that is unencumbered by protocol type and overhead.
To use iperf3, ensure that the software is installed on both the firewall VM and the external1 VM (as well as external2 and external3), and that ipfw on the firewall host is unloaded (# kldunload ipfw).
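If iperf3 is not already present on a VM, it can be installed from packages (assuming the standard FreeBSD package name):
# pkg install iperf3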
The basic operation of iperf3 is as a client-server architecture, so on the external1 VM system, start the iperf3 software in server mode:
# iperf3 -s     <--- run iperf3 in server mode
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
.
.
.
Then, on the firewall VM, run the client:
# iperf3 -c 203.0.113.10 <--- connect to external1 server and send test data
Connecting to host 203.0.113.10, port 5201
[ 5] local 203.0.113.50 port 19359 connected to 203.0.113.10 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.03 sec 12.5 MBytes 102 Mbits/sec 0 1.07 MBytes
[ 5] 1.03-2.09 sec 13.8 MBytes 108 Mbits/sec 0 1.07 MBytes
[ 5] 2.09-3.07 sec 12.5 MBytes 107 Mbits/sec 0 1.07 MBytes
[ 5] 3.07-4.09 sec 12.5 MBytes 103 Mbits/sec 0 1.07 MBytes
[ 5] 4.09-5.08 sec 12.5 MBytes 106 Mbits/sec 0 1.07 MBytes
[ 5] 5.08-6.09 sec 12.5 MBytes 105 Mbits/sec 0 1.07 MBytes
[ 5] 6.09-7.07 sec 12.5 MBytes 107 Mbits/sec 0 1.07 MBytes
[ 5] 7.07-8.05 sec 12.5 MBytes 107 Mbits/sec 0 1.07 MBytes
[ 5] 8.05-9.04 sec 12.5 MBytes 106 Mbits/sec 0 1.07 MBytes
[ 5] 9.04-10.02 sec 12.5 MBytes 107 Mbits/sec 0 1.07 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.02 sec 126 MBytes 106 Mbits/sec 0 sender
[ 5] 0.00-10.02 sec 126 MBytes 106 Mbits/sec receiver
iperf Done.
#
A key test for measuring throughput is to send a file of data and measure the transfer speed. To create the file, we can use the jot(1) program:
# jot -r -s "" 10000000 > A.bin
This command creates a file of roughly 18 MB of random ASCII digits, as reflected in the iperf3 output below. (Note that creating the file takes roughly 30 seconds to a minute on a QEMU virtual machine.)
To transfer the file to the server, run this command on the firewall VM:
# iperf3 -F A.bin -c 203.0.113.10
Connecting to host 203.0.113.10, port 5201
[ 5] local 203.0.113.50 port 51657 connected to 203.0.113.10 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.04 sec 12.5 MBytes 101 Mbits/sec 0 490 KBytes
[ 5] 1.04-1.52 sec 5.81 MBytes 101 Mbits/sec 0 490 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-1.52 sec 18.3 MBytes 101 Mbits/sec 0 sender
Sent 18.3 MByte / 18.3 MByte (100%) of A.bin
[ 5] 0.00-1.52 sec 18.3 MBytes 101 Mbits/sec receiver
iperf Done.
#
Running this command several times shows that a consistent average bitrate for throughput on this system is about 101 Mbits/sec - roughly 12.6 MBytes/sec.
We now have a baseline TCP-based "goodput" value for testing dummynet traffic shaping commands.
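To also get a baseline for the reverse direction without moving the server, iperf3's reverse mode can be used (the -R flag makes the server transmit and the client receive):
# iperf3 -c 203.0.113.10 -R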
4.2. IPFW Commands for Dummynet
To use dummynet, load the kernel module dummynet.ko:
# kldload dummynet
load_dn_sched dn_sched FIFO loaded
load_dn_sched dn_sched QFQ loaded
load_dn_sched dn_sched RR loaded
load_dn_sched dn_sched WF2Q+ loaded
load_dn_sched dn_sched PRIO loaded
load_dn_sched dn_sched FQ_CODEL loaded
load_dn_sched dn_sched FQ_PIE loaded
load_dn_aqm dn_aqm CODEL loaded
load_dn_aqm dn_aqm PIE loaded
#
dummynet announces the schedulers and AQM algorithms it supports.
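To have both modules loaded automatically at boot rather than by hand, entries can be added to /boot/loader.conf (a sketch; module names as used by kldload above):
ipfw_load="YES"
dummynet_load="YES"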
4.2.1. Simple Pipe Configuration
Recall that dummynet uses pipes, queues, and sched (schedulers) to shape traffic.
To see dummynet in action, create a pipe with limited bandwidth, and assign it to a rule matching traffic to the external1 VM:
# ipfw pipe 1 config bw 300Kbit/s
# ipfw pipe 1 show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
#
The above output shows the pipe configuration limiting bandwidth (bw) to 300 Kbits/sec.
Now add ipfw rules to send traffic between the firewall VM and the external1 VM through the pipe:
# ipfw add 100 check-state
00100 check-state :default
#
# ipfw add 1000 pipe 1 ip from any to any
01000 pipe 1 ip from any to any
#
# ipfw list
00100 check-state :default
01000 pipe 1 ip from any to any
65535 deny ip from any to any
#
By adding the matching phrase "ip from any to any" and assigning it to pipe 1, we have configured the firewall to send all IP-based traffic through the pipe, now configured as a 300 Kbit/sec link.
If we re-run the same iperf3 file transfer command we used earlier, we can see the difference take shape:
# iperf3 -F A.bin -c 203.0.113.10
Connecting to host 203.0.113.10, port 5201
[ 5] local 203.0.113.50 port 22303 connected to 203.0.113.10 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 69.1 KBytes 564 Kbits/sec 0 15.6 KBytes
[ 5] 1.00-2.01 sec 36.3 KBytes 294 Kbits/sec 0 18.4 KBytes
[ 5] 2.01-3.01 sec 33.9 KBytes 278 Kbits/sec 0 21.3 KBytes
[ 5] 3.01-4.00 sec 47.6 KBytes 394 Kbits/sec 0 24.1 KBytes
[ 5] 4.00-5.01 sec 26.9 KBytes 218 Kbits/sec 0 25.5 KBytes
[ 5] 5.01-6.00 sec 37.7 KBytes 312 Kbits/sec 0 27.0 KBytes
[ 5] 6.00-7.00 sec 43.8 KBytes 360 Kbits/sec 0 28.4 KBytes
[ 5] 7.00-8.01 sec 34.9 KBytes 282 Kbits/sec 0 29.8 KBytes
[ 5] 8.01-9.00 sec 29.7 KBytes 246 Kbits/sec 0 31.2 KBytes
[ 5] 9.00-10.00 sec 46.2 KBytes 378 Kbits/sec 0 32.7 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 406 KBytes 332 Kbits/sec 0 sender
Sent 406 KByte / 18.3 MByte (2%) of A.bin
[ 5] 0.00-10.55 sec 358 KBytes 278 Kbits/sec receiver
iperf Done.
#
Here, we can see that during iperf3's 10-second run, the ipfw dummynet configuration limited the transfer speed to an average of about 332 Kbits/sec, and only about 2% of the entire 18.3 MB file was transferred.
To see how we can use dummynet to configure different link speeds, we will set up a second pipe:
# ipfw pipe 2 config bw 3Mbit/s
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 3.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 0 active
#
This pipe is set up to be 10 times faster (3 Mbit/sec instead of 300 Kbit/sec) than pipe 1. We then reconfigure the ipfw rules so that traffic between the firewall VM and the external1 VM uses pipe 1, and traffic between the firewall VM and the external2 VM (also running iperf3 -s) uses pipe 2:
# ipfw list
00100 check-state :default
01000 pipe 1 ip from any to any
65535 deny ip from any to any
#
# ipfw delete 1000
#
# ipfw add 1000 pipe 1 ip from me to 203.0.113.10 // external1
01000 pipe 1 ip from me to 203.0.113.10
#
# ipfw add 1100 pipe 1 ip from 203.0.113.10 to me // external1
01100 pipe 1 ip from 203.0.113.10 to me
#
# ipfw add 2000 pipe 2 ip from me to 203.0.113.20 // external2
02000 pipe 2 ip from me to 203.0.113.20
#
# ipfw add 2100 pipe 2 ip from 203.0.113.20 to me // external2
02100 pipe 2 ip from 203.0.113.20 to me
#
# ipfw list
00100 check-state :default
01000 pipe 1 ip from me to 203.0.113.10
01100 pipe 1 ip from 203.0.113.10 to me
02000 pipe 2 ip from me to 203.0.113.20
02100 pipe 2 ip from 203.0.113.20 to me
65535 deny ip from any to any
#
As expected, pipe 2 is approximately 10 times faster than pipe 1:
# iperf3 -F A.bin -c 203.0.113.20
Connecting to host 203.0.113.20, port 5201
[ 5] local 203.0.113.50 port 21569 connected to 203.0.113.20 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 417 KBytes 3.41 Mbits/sec 0 34.1 KBytes
[ 5] 1.00-2.00 sec 325 KBytes 2.66 Mbits/sec 0 45.5 KBytes
[ 5] 2.00-3.00 sec 373 KBytes 3.06 Mbits/sec 0 55.5 KBytes
[ 5] 3.00-4.00 sec 334 KBytes 2.73 Mbits/sec 0 64.0 KBytes
[ 5] 4.00-5.00 sec 348 KBytes 2.85 Mbits/sec 0 64.0 KBytes
[ 5] 5.00-6.00 sec 337 KBytes 2.76 Mbits/sec 0 64.0 KBytes
[ 5] 6.00-7.00 sec 339 KBytes 2.78 Mbits/sec 0 64.0 KBytes
[ 5] 7.00-8.00 sec 348 KBytes 2.85 Mbits/sec 0 64.0 KBytes
[ 5] 8.00-9.00 sec 351 KBytes 2.87 Mbits/sec 0 64.0 KBytes
[ 5] 9.00-10.00 sec 351 KBytes 2.88 Mbits/sec 0 64.0 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 3.44 MBytes 2.89 Mbits/sec 0 sender
Sent 3.44 MByte / 18.3 MByte (18%) of A.bin
[ 5] 0.00-10.12 sec 3.42 MBytes 2.83 Mbits/sec receiver
iperf Done.
#
Note that the pipe configuration can be changed without changing the ruleset. Below, the pipe 1 bandwidth is changed to the equivalent of a telecommunications T1 line in days of yore:
# ipfw pipe 1 config bw 1544Kbit/s
# ipfw pipe show
00001: 1.544 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 3.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 0 active
#
Resending the A.bin file across the T1-configured link shows these results:
# iperf3 -F A.bin -c 203.0.113.10
Connecting to host 203.0.113.10, port 5201
[ 5] local 203.0.113.50 port 16696 connected to 203.0.113.10 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 222 KBytes 1.82 Mbits/sec 0 65.0 KBytes
[ 5] 1.00-2.00 sec 181 KBytes 1.48 Mbits/sec 0 65.0 KBytes
[ 5] 2.00-3.00 sec 181 KBytes 1.48 Mbits/sec 0 65.0 KBytes
[ 5] 3.00-4.00 sec 184 KBytes 1.51 Mbits/sec 0 65.0 KBytes
[ 5] 4.00-5.00 sec 181 KBytes 1.48 Mbits/sec 0 65.0 KBytes
[ 5] 5.00-6.00 sec 181 KBytes 1.48 Mbits/sec 0 65.0 KBytes
[ 5] 6.00-7.00 sec 178 KBytes 1.46 Mbits/sec 0 65.0 KBytes
[ 5] 7.00-8.00 sec 178 KBytes 1.46 Mbits/sec 0 65.0 KBytes
[ 5] 8.00-9.00 sec 181 KBytes 1.48 Mbits/sec 0 65.0 KBytes
[ 5] 9.00-10.00 sec 175 KBytes 1.44 Mbits/sec 0 65.0 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.80 MBytes 1.51 Mbits/sec 0 sender
Sent 1.80 MByte / 18.3 MByte (9%) of A.bin
[ 5] 0.00-10.18 sec 1.78 MBytes 1.46 Mbits/sec receiver
iperf Done.
#
About half of the 3Mbits/sec speed of pipe 2, again as expected.
So far, we have only been working with the pipe object. By definition, a pipe has just one queue, and it is subject to "First In First Out" (FIFO) operation. All traffic that flows through this pipe shares the same characteristics.
However, creating a pipe also does something else. It creates a default sched (scheduler) that governs the pipe:
Start with no pipes or schedulers
#
# ipfw pipe list
#
# ipfw sched list
#
Create a simple pipe.
# ipfw pipe 1 config bw 100KBit/s
#
# ipfw pipe list
00001: 100.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
#
Observe the default scheduler for this pipe
# ipfw sched list
00001: 100.000 Kbit/s 0 ms burst 0
sched 1 type WF2Q+ flags 0x0 0 buckets 0 active
#
The default scheduler for a new pipe is of type WF2Q+, a version of the Weighted Fair Queueing algorithm for packet transfer.
We now have a single pipe operating as FIFO that is managed by a WF2Q+ scheduling algorithm.
The ipfw(8) man page makes note of several other scheduling algorithms. These can be selected by using the "type" keyword on the pipe command. The type keyword selects the type of scheduler applied to the pipe - not the type of the pipe itself (the pipe remains FIFO):
# ipfw pipe list
#
# ipfw sched list
#
Create a pipe and assign a scheduler of type Round Robin (Deficit Round Robin)
# ipfw pipe 1 config bw 100KBit/s type rr
#
# ipfw pipe list
00001: 100.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
#
View the new scheduler of type RR (Deficit Round Robin)
# ipfw sched list
00001: 100.000 Kbit/s 0 ms burst 0
sched 1 type RR flags 0x0 0 buckets 0 active
#
pipes and scheds (schedulers) are tightly bound. In fact, there is no command to delete a scheduler. The scheduler is deleted when the pipe is deleted.
Note however that the scheduler can be configured independently if desired. Below we change the scheduler type from the above type RR to QFQ, a variant of WF2Q+:
#
# ipfw sched 1 config type qfq
Bump qfq weight to 1 (was 0)
Bump qfq maxlen to 1500 (was 0)
#
# ipfw sched list
00001: 100.000 Kbit/s 0 ms burst 0
sched 1 type QFQ flags 0x0 0 buckets 0 active
#
There are other keywords that can be added to a pipe specification: delay, burst, profile, weight, buckets, mask, noerror, plr, queue, red or gred, codel, and pie. These are described in the ipfw(8) man page.
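For instance, a common combination is bandwidth plus propagation delay (in milliseconds) plus a small packet loss rate (plr, between 0 and 1) to roughly emulate a long-haul link. This is only a sketch; the values below are arbitrary and not part of the lab configuration:
# ipfw pipe 1 config bw 1544Kbit/s delay 100 plr 0.001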
A contrived example might be:
Start fresh
# ipfw pipe 1 delete
#
# ipfw pipe 1 config bw 100kbit/s delay 20 burst 2000 weight 40 buckets 256 mask src-ip 0x000000ff noerror plr 0.01 queue 75 red .3/25/30/.5 type qfq
#
# ipfw pipe list
00001: 100.000 Kbit/s 20 ms burst 2000
q131073 75 sl.plr 0.010000 0 flows (1 buckets) sched 65537 weight 40 lmax 0 pri 0 RED w_q 0.299988 min_th 25 max_th 30 max_p 0.500000
sched 65537 type FIFO flags 0x1 256 buckets 0 active
mask: 0x00 0x000000ff/0x0000 -> 0x00000000/0x0000
#
# ipfw sched list
00001: 100.000 Kbit/s 20 ms burst 2000
sched 1 type QFQ flags 0x1 256 buckets 0 active
mask: 0x00 0x000000ff/0x0000 -> 0x00000000/0x0000
#
Setting up two separate pipes to send data to the same destination is overkill. It’s like setting up two separate network links between the two points. While that may be desirable for redundancy or high availability, it makes no difference for bandwidth allocation. (Yes, link aggregation is possible, but we are not considering that case here.)
What is usually needed is a way to separate traffic into different "lanes" and assign different "speed limits" to each lane. That is exactly what queues are for.
4.2.2. Simple Pipe and Queue Configuration
Before we go further, it’s useful to disambiguate the two meanings of the word "queue".
In a pipe definition, the pipe is assigned a queue where incoming packets are held before processing and transit. The size of this "pipe queue" is 50 packets by default, but it can be changed with the queue keyword on the pipe definition:
# ipfw pipe 1 config bw 200Kbit/s
#
# ipfw pipe list
00001: 200.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
#
# ipfw pipe 2 config bw 200Kbit/s queue 75
#
# ipfw pipe list
00001: 200.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 200.000 Kbit/s 0 ms burst 0
q131074 75 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 0 active
#
In contrast, dummynet has the concept of flow queues, which are virtual groupings of packets assigned to a flow according to a mask given in their own definition with an ipfw queue statement.
Configuring a queue is almost as simple as configuring a pipe.
Start with a clean slate (all objects and rules deleted):
# kldunload dummynet
# kldunload ipfw
# kldload ipfw
ipfw2 (+ipv6) initialized, divert loadable, nat loadable, default to deny, logging disabled
# kldload dummynet
load_dn_sched dn_sched FIFO loaded
load_dn_sched dn_sched QFQ loaded
load_dn_sched dn_sched RR loaded
load_dn_sched dn_sched WF2Q+ loaded
load_dn_sched dn_sched PRIO loaded
load_dn_sched dn_sched FQ_CODEL loaded
load_dn_sched dn_sched FQ_PIE loaded
load_dn_aqm dn_aqm CODEL loaded
load_dn_aqm dn_aqm PIE loaded
#
# ipfw queue 1 config pipe 1
#
# ipfw queue show
q00001 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
Here we see that one queue of size 50 packets was created and assigned to pipe 1. Since we did not assign a weight, the default weight is 0 (zero), which is the least weight possible. The queue currently has 0 flows, meaning that no traffic is flowing through it.
Notice, however, that we created the queue before we created the pipe. That is why the weight is zero. We have actually done this configuration out of order. To maintain your sanity (and that of anyone reading the configuration after you), it’s best to configure the objects in the following order (a short sketch follows the list):
pipes - create pipes (this also creates a scheduler, which can be assigned a specific scheduler type)
queues - create queues and assign weights, source and destination masks, delay, and other characteristics to each queue
rules - assign rules to match traffic using standard 5-tuples or as needed
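Putting the order together, a minimal sketch (the addresses, numbers, weights, and the qfq type are illustrative only) might be:
# ipfw pipe 1 config bw 300Kbit/s type qfq                                  <--- 1. pipe (and its scheduler)
# ipfw queue 1 config pipe 1 weight 10                                      <--- 2. queue attached to the pipe
# ipfw add 1000 queue 1 tcp from me to 203.0.113.10 5201 setup keep-state   <--- 3. rule selecting the traffic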
dummynet also has the ability to separate out different flows within the same pipe and apply different scheduling treatment to them.
When transferring a file to the external1 VM and attempting to type interactively on the external1 VM at the same time, the ability to type at speed is dramatically reduced. The file transfer packets, being much larger than interactive typing packets, hog all the bandwidth. This effect is a well-known limitation to anyone who edits documents on a remote site. Since packets are created much faster by a file transfer program than you can type, the outbound queue is almost always full of large packets, leaving your keystrokes to be separated by large amounts of file transfer data in the queue.
You should try this out on the firewall VM: reset the pipe 1 bandwidth to 300 Kbit/sec and, in one session, run iperf3 as iperf3 -c 203.0.113.10 -t 60. Then, in another session, add rules for ssh traffic, ssh to the external1 VM, and try to enter text into a scratch file. The typing delay is almost unbearable.
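A sketch of the two sessions (assuming the pipe 1 rules from the previous section are still in place so that both kinds of traffic share the same 300 Kbit/sec pipe):
(session 1) # iperf3 -c 203.0.113.10 -t 60   <--- long-running transfer through pipe 1
(session 2) # ssh 203.0.113.10               <--- interactive traffic competing for the same pipe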
To control traffic flow between the firewall VM and any external VM host, you will need to set up individual queues to separate traffic within a pipe. queues can be either static - you define them yourself with the ipfw queue config … command - or they can be dynamic. Dynamic queues are created when using the mask keyword. Masks for queues are called flow masks. The mask determines if a packet that arrives at the firewall is selected to be entered into a queue. Consider the following example:
# ipfw pipe 1 config bw 200Kbit/s mask src-ip 0x000000ff
Each host in the /24 network transferring data through pipe 1 (based on suitable rules) will get its own dynamic queue, with all queues sharing the bandwidth of the pipe equally.
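For the dynamic queues to see any traffic, rules must still direct packets to the pipe - a sketch only, using the lab's 203.0.113.0/24 addressing:
# ipfw add 1000 pipe 1 ip from 203.0.113.0/24 to any
# ipfw add 1100 pipe 1 ip from any to 203.0.113.0/24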
If instead, we wish to create separate individual queues with different characteristics such as different weights or delay, we can create static queues and then assign them to individual pipes as desired:
#
# ipfw pipe 1 config bw 300kbit/s
#
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
#
# ipfw queue 1 config pipe 1 weight 10 mask dst-ip 0xffffffff dst-port 5201
Bump flowset buckets to 64 (was 0)
#
# ipfw queue 2 config pipe 1 weight 10 mask dst-ip 0xffffffff dst-port 5202
Bump flowset buckets to 64 (was 0)
#
# ipfw queue show
q00001 50 sl. 0 flows (64 buckets) sched 1 weight 10 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x1451
q00002 50 sl. 0 flows (64 buckets) sched 1 weight 10 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x1452
#
# ipfw list
00010 allow icmp from any to any
00100 check-state :default
01000 queue 1 tcp from me to 203.0.113.10 5201 setup keep-state :default
01100 queue 2 tcp from me to 203.0.113.20 5202 setup keep-state :default
65535 deny ip from any to any
#
Running
# iperf3 -c 203.0.113.10 -p 5201 -t 180 -O 30
produces the results described below.
The output is the result of using the "omit" flag (-O) on the sender to ignore the first 30 seconds of output. This removes the "slow start" portion of the TCP test, and focuses instead on the "steady state" that occurs after slow start gets up to speed.
This example shows the steady-state results of transmitting data through one queue - queue 1. Throughput was consistently about 277Kbits/sec. During the transmission, a view of the queue status was:
# ipfw queue show
q00001 50 sl. 2 flows (64 buckets) sched 1 weight 10 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x1451
BKT Prot Source IP/port Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
136 ip 0.0.0.0/0 203.0.113.10/5201 2293 3425216 42 63000 0
50 ip 0.0.0.0/0 203.0.113.50/1040 752 39104 1 52 0
q00002 50 sl. 0 flows (64 buckets) sched 1 weight 10 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x1452
#
The queue mask, set to match on the full destination address and destination port, is shown on the mask: lines of the output.
Note that the port number is displayed in hexadecimal (for example, 0x1451 is 5201 decimal). A decimal/hexadecimal calculator may save you some confusion if you are looking at a lot of queue displays.
The next example shows the result of starting a transmission through the second queue about halfway through the first transmission. Notice how the bandwidth is adjusted to accommodate the presence of a second queue of equal weight.
Since the queues were equally weighted, the result was that the transmission rate for both ended up at about 139 Kbits/sec, or roughly half of the previous transmission rate.
Queue characteristics can be changed at any time, even during an active flow. Consider the case below where, during simultaneous transmission through queues of equal weight, the queue weights were modified as follows:
queue 1: original weight 10 modified weight 10
queue 2: original weight 10 modified weight 50
This change can be effected by the command:
# ipfw queue 2 config weight 50
The transmission rate for queue 1 dropped from an average of 139 Kbits/sec to an average of 46.3 Kbits/sec, while queue 2, after restarting the transmission with the new queue weight, expanded from an average of 139 Kbits/sec to an average of 232 Kbits/sec. As expected, 232 Kbits/sec is about five times the transmission rate of 46.3 Kbits/sec.
Note however, that the above command had a side effect:
# ipfw queue show
q00001 50 sl. 0 flows (64 buckets) sched 1 weight 10 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x1451
q00002 50 sl. 0 flows (1 buckets) sched 1 weight 50 lmax 0 pri 0 droptail
#
The flow mask for queue 2 has been deleted. In fact, all settings not explicitly specified on the new config command revert to their default values. Here is a complicated queue setup:
# ipfw queue 1 config pipe 1 weight 40 buckets 256 mask src-ip 0x000000ff dst-ip 0x0000ffff noerror plr 0.01 queue 75 red .3/25/30/.5
#
# ipfw queue show
q00001 75 sl.plr 0.010000 0 flows (256 buckets) sched 1 weight 40 lmax 0 pri 0 RED w_q 0.299988 min_th 25 max_th 30 max_p 0.500000
mask: 0x00 0x000000ff/0x0000 -> 0x0000ffff/0x0000
#
And if, similar to the previous example, we only change the weight:
# ipfw queue 1 config weight 20
#
# ipfw queue show
q00001 50 sl. 0 flows (1 buckets) sched 1 weight 20 lmax 0 pri 0 droptail
#
All the other parameters of the queue are reset to their defaults. Therefore, it is best to retain the original commands used to construct queues, pipes, and schedulers, so that if you are only changing one parameter, all other parameters can be replicated on the command line. Otherwise you will have to reconstruct the parameters from the output of ipfw queue show, which can be quite tedious.
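For example, to change only the weight of the complicated queue above while preserving its RED, mask, and other parameters, repeat the entire original command with just the weight value changed:
# ipfw queue 1 config pipe 1 weight 20 buckets 256 mask src-ip 0x000000ff dst-ip 0x0000ffff noerror plr 0.01 queue 75 red .3/25/30/.5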
4.2.3. Dynamic Pipes
Here we note that the simplest setup for pipes creates dynamic pipes when needed:
# ipfw pipe 1 config bw 300kbit/s weight 10 mask src-ip 0x0000ffff dst-ip 0xffffffff
Bump sched buckets to 64 (was 0)
#
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 10 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x1 64 buckets 0 active
mask: 0x00 0x0000ffff/0x0000 -> 0xffffffff/0x0000
#
# ipfw list
00050 allow icmp from any to any
00100 check-state :default
65535 deny ip from any to any
#
# ipfw add 1000 pipe 1 tcp from me to 203.0.113.0/24 5201-5203 setup keep-state
01000 pipe 1 tcp from me to 203.0.113.0/24 5201-5203 setup keep-state :default
#
# ipfw list
01000 pipe 1 tcp from me to 203.0.113.0/24 5201-5203 setup keep-state :default
65535 deny ip from any to any
#
Sending some data with this configuration:
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 10 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x1 64 buckets 4 active
mask: 0x00 0x0000ffff/0x0000 -> 0xffffffff/0x0000
BKT Prot Source IP/port Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
6 ip 0.0.10.10/0 203.0.113.50/0 236 12272 0 0 0
78 ip 0.0.10.50/0 203.0.113.10/0 1493 2225216 43 64500 0
80 ip 0.0.10.50/0 203.0.113.20/0 1355 2018216 42 63000 0
58 ip 0.0.10.20/0 203.0.113.50/0 366 19032 0 0 0
#
# ipfw list
00050 allow icmp from any to any
00100 check-state :default
01000 pipe 1 tcp from me to 203.0.113.0/24 5201-5203 setup keep-state :default
65535 deny ip from any to any
#
All three transmissions running together, single pipe:
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 10 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x1 64 buckets 6 active
mask: 0x00 0x0000ffff/0x0000 -> 0xffffffff/0x0000
BKT Prot Source IP/port Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
6 ip 0.0.10.10/0 203.0.113.50/0 588 30576 0 0 0
78 ip 0.0.10.50/0 203.0.113.10/0 1508 2247716 43 64500 0
80 ip 0.0.10.50/0 203.0.113.20/0 1357 2021216 43 64500 0
90 ip 0.0.10.50/0 203.0.113.30/0 1322 1981552 41 61500 0
46 ip 0.0.10.30/0 203.0.113.50/0 34 1768 0 0 0
58 ip 0.0.10.20/0 203.0.113.50/0 702 36504 0 0 0
Because of the ipfw rule:
01000 pipe 1 tcp from me to 203.0.113.0/24 5201-5203 setup keep-state :default
all of the iperf3 transfers match the same rule; together they get about 290 Kbit/sec and they all share the pipe equally.
If we change iperf3 to send to a different port for each system (5201, 5202, and 5203 to the external1, external2, and external3 VMs respectively), there is no change. It is only with queues, where you can set the individual flow rate, that you can effect change.
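For contrast, a sketch of what per-destination queues with different weights inside the same pipe would look like (queue numbers, weights, and rule numbers are illustrative only):
# ipfw queue 1 config pipe 1 weight 50
# ipfw queue 2 config pipe 1 weight 10
# ipfw add 1000 queue 1 tcp from me to 203.0.113.10 5201 setup keep-state
# ipfw add 1100 queue 2 tcp from me to 203.0.113.20 5202 setup keep-state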
Below are examples of different masks and their effect on traffic flow:
* dst-ip 0x0000ffff
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x1 64 buckets 4 active
mask: 0x00 0x00000000/0x0000 -> 0x0000ffff/0x0000
BKT Prot Source IP/port Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
10 ip 0.0.0.0/0 0.0.10.10/0 1183 1760218 43 64500 0
20 ip 0.0.0.0/0 0.0.10.20/0 974 1446718 42 63000 0
30 ip 0.0.0.0/0 0.0.10.30/0 688 1017718 35 52500 0
50 ip 0.0.0.0/0 0.0.10.50/0 1717 89284 0 0 0
* dst-ip 0xffffffff
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x1 64 buckets 4 active
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
BKT Prot Source IP/port Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
18 ip 0.0.0.0/0 203.0.113.50/0 402 20888 0 0 0
42 ip 0.0.0.0/0 203.0.113.10/0 144 204722 0 0 0
52 ip 0.0.0.0/0 203.0.113.20/0 359 525971 0 0 0
62 ip 0.0.0.0/0 203.0.113.30/0 562 843000 37 55500 0
* src-ip 0x0000ffff
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x1 64 buckets 4 active
mask: 0x00 0x0000ffff/0x0000 -> 0x00000000/0x0000
BKT Prot Source IP/port Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
20 ip 0.0.10.10/0 0.0.0.0/0 361 19348 0 0 0
100 ip 0.0.10.50/0 0.0.0.0/0 2102 3079974 36 54000 27
40 ip 0.0.10.20/0 0.0.0.0/0 193 10416 0 0 0
60 ip 0.0.10.30/0 0.0.0.0/0 47 2612 0 0 0
* mask src-ip 0x0000ffff dst-ip 0x0000ffff   <--- only one mask keyword needs to be specified
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x1 64 buckets 6 active
mask: 0x00 0x0000ffff/0x0000 -> 0x0000ffff/0x0000
BKT Prot Source IP/port Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
14 ip 0.0.10.30/0 0.0.10.50/0 253 13156 0 0 0
26 ip 0.0.10.20/0 0.0.10.50/0 61 3172 0 0 0
38 ip 0.0.10.10/0 0.0.10.50/0 771 40094 0 0 0
110 ip 0.0.10.50/0 0.0.10.10/0 853 1265218 40 60000 0
112 ip 0.0.10.50/0 0.0.10.20/0 723 1083052 37 55500 0
122 ip 0.0.10.50/0 0.0.10.30/0 644 951718 34 51000 0
* mask src-ip 0x0000ffff dst-ip 0x0000ffff dst-port 5201
# ipfw pipe show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x1 64 buckets 6 active
mask: 0x00 0x0000ffff/0x0000 -> 0x0000ffff/0x1451
BKT Prot Source IP/port Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
204 ip 0.0.10.50/0 0.0.10.10/5201 2132 3183718 43 64500 0
14 ip 0.0.10.30/0 0.0.10.50/4096 823 42796 0 0 0
210 ip 0.0.10.50/0 0.0.10.20/5201 2001 2987218 43 64500 0
152 ip 0.0.10.20/0 0.0.10.50/4161 663 34476 0 0 0
216 ip 0.0.10.50/0 0.0.10.30/5201 1981 2957218 43 64500 0
164 ip 0.0.10.10/0 0.0.10.50/65 471 24492 0 0 0
* mask src-ip 0xffffffff dst-ip 0xffffffff
# ipfw pipe 1 show
00001: 300.000 Kbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x1 64 buckets 6 active
mask: 0x00 0xffffffff/0x0000 -> 0xffffffff/0x0000
BKT Prot Source IP/port Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
64 ip 203.0.113.50/0 203.0.113.20/0 1215 1808218 43 64500 0
74 ip 203.0.113.50/0 203.0.113.30/0 1023 1533052 43 64500 0
22 ip 203.0.113.10/0 203.0.113.50/0 746 38792 0 0 0
94 ip 203.0.113.50/0 203.0.113.10/0 1863 2780218 42 63000 0
42 ip 203.0.113.20/0 203.0.113.50/0 481 25012 0 0 0
62 ip 203.0.113.30/0 203.0.113.50/0 159 8268 0 0 0
4.2.4. Other Pipe and Queue Commands
To delete pipes and queues use the following syntax:
For queues, specify the queue number on the command line:
# ipfw queue delete 1
For pipes, specify the pipe number on the command line:
# ipfw pipe delete 1
Note however that:
# ipfw delete pipe 1 <----- does not throw error, and does not delete the pipe.
The same is true for the corresponding queue keyword. You should take care to use the proper syntax.
You can delete a pipe while a pipe statement for it is still in the ruleset. ipfw will not throw an error - but any data transfer matching that pipe statement will not work.
scheds (schedulers) and pipes are tightly bound. To delete a scheduler, you must first delete the pipe. You can then re-create the pipe if needed. The scheduler for the new pipe is reset to the default scheduler.
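A sketch of the sequence (the pipe number and bandwidth are illustrative):
# ipfw pipe 1 delete                <--- removes pipe 1 and its scheduler
# ipfw pipe 1 config bw 300Kbit/s   <--- re-creating the pipe creates a new default (WF2Q+) scheduler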
To change the scheduler type:
# ipfw sched 1 config type wf2q+   <--- or rr or any other sched type
4.3. Adding Additional Virtual Machines
Up to this point, we have been using only two virtual machines for exploring ipfw. The later material in this book requires the use of several additional virtual machines.
In the NAT chapter, we will use several more VMs for additional roles.
4.3.1. Setting Up The Entire IPFW Lab
A suggested host machine file directory layout for these machines is shown below. All scripts use relative path names, so the directory can be located anywhere.
~/ipfw
  /SCRIPTS
     _CreateAllVMs.sh   (script to create QEMU disk images)
     mkbr.sh            (script to create bridge and tap devices)
     vm_envs.sh         (script to manage all parameters)
     dnshost.sh         (script for running a BIND 9 DNS server)
     external1.sh       (scripts for running 'external VM host' VMs)
     external2.sh           "
     external3.sh           "
     firewall.sh        (script for running a firewall VM)
     firewall2.sh       (script for running a firewall VM)
     internal.sh        (script for running an internal VM)
     v6only.sh          (script for running an IPv6 only VM)
  /ISO
     fbsd.iso           (link to latest FreeBSD install iso)
  /VM
     dnshost.qcow2      (QEMU disk image for a BIND 9 DNS server)
     external1.qcow2    (QEMU disk image for 'external' hosts)
     external2.qcow2        "
     external3.qcow2        "
     firewall.qcow2     (QEMU disk image for the 'firewall')
     firewall2.qcow2    (QEMU disk image for the 'firewall2' host)
     internal.qcow2     (QEMU disk image for an internal host)
     v6only.qcow2       (QEMU disk image for an IPv6 only host)
  /BMP
     dns_splash_640x480.bmp        (QEMU splash image)
     external1_splash_640x480.bmp      "
     external2_splash_640x480.bmp      "
     external3_splash_640x480.bmp      "
     firewall_splash_640x480.bmp       "
     firewall2_splash_640x480.bmp      "
     internal_splash_640x480.bmp       "
     v6only_splash_640x480.bmp         "
Finish setting up the entire lab by referring to the instructions found in Section Quick Start.
Also, ensure each virtual machine is set up to boot with a serial console by adding console="comconsole" to /boot/loader.conf.
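For reference, the relevant line in each VM's /boot/loader.conf is simply:
console="comconsole"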