diff --git a/content/en/blog/20200502-frcp.md b/content/en/blog/20200502-frcp.md
new file mode 100644
index 0000000..28c5794
--- /dev/null
+++ b/content/en/blog/20200502-frcp.md
@@ -0,0 +1,236 @@
+---
+date: 2020-05-02
+title: "Flow and Retransmission Control Protocol (FRCP) implementation"
+linkTitle: "Flow and Retransmission Control Protocol (FRCP)"
+description: "A quick demo of FRCP"
+author: Dimitri Staessens
+---
+
+With the longer weekend I had some fun implementing (parts of) the
+[Flow and Retransmission Control Protocol (FRCP)](/docs/concepts/protocols/#flow-and-retransmission-control-protocol-frcp)
+to the point that it's stable enough to bring you a very quick demo of it.
+
+FRCP is the Ouroboros alternative to TCP / QUIC / LLC. It ensures
+delivery of packets when the network itself isn't very reliable.
+
+The setup is simple: we run Ouroboros over the Ethernet loopback
+adapter _lo_,
+```
+systemctl restart ouroboros
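+# create and bootstrap an eth-dix IPCP over the loopback; shorthand for
+# something like: irm ipcp bootstrap type eth-dix layer dix name dix dev lo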
+irm i b t eth-dix l dix n dix dev lo
+```
+to which we add some impairment
+[_qdisc_](http://man7.org/linux/man-pages/man8/tc-netem.8.html):
+
+```
+$ sudo tc qdisc add dev lo root netem loss 8% duplicate 3% reorder 10% delay 1
+```
+
+This causes the link to lose, duplicate and reorder packets.
+
+We can use the oping tool with different [QoS
+specs](https://ouroboros.rocks/cgit/ouroboros/tree/include/ouroboros/qos.h)
+and watch the behaviour. Quality-of-Service (QoS) specs are a
+technology-agnostic way to request a network service (in their
+current state they are not finalized yet). I'll also capture tcpdump
+output.
+
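+From the application's point of view, requesting a service boils down
+to passing such a QoS spec when allocating a flow. Here is a minimal
+sketch of a client doing just that; it is not the actual oping source,
+and it assumes the `flow_alloc()` / `flow_write()` / `flow_read()`
+calls from `<ouroboros/dev.h>` and the predefined specs from qos.h,
+so check the headers for the exact signatures:
+
+```
+/*
+ * Minimal sketch of a client that requests a QoS cube and sends a
+ * single 64-byte message. Not the oping source; it assumes the
+ * flow_alloc()/flow_write()/flow_read()/flow_dealloc() calls from
+ * <ouroboros/dev.h> and the predefined qosspec_t constants
+ * (qos_raw, qos_voice, qos_data) from <ouroboros/qos.h>.
+ */
+#include <ouroboros/dev.h>
+#include <ouroboros/qos.h>
+
+#include <stdio.h>
+#include <string.h>
+
+int main(void)
+{
+        char      buf[64];
+        qosspec_t qs = qos_voice;    /* or qos_raw / qos_data */
+        int       fd;
+
+        memset(buf, 0, sizeof(buf)); /* 64 bytes of zeros, like oping */
+
+        fd = flow_alloc("oping", &qs, NULL);
+        if (fd < 0) {
+                printf("Flow allocation failed.\n");
+                return -1;
+        }
+
+        flow_write(fd, buf, sizeof(buf));
+        flow_read(fd, buf, sizeof(buf));
+        flow_dealloc(fd);
+
+        return 0;
+}
+```
+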
+We start an oping server and tell Ouroboros it should listen to the _name_ "oping":
+```
+# bind the program oping to the name oping
+irm b prog oping n oping
+# register the name oping in the Ethernet layer that is attached to the loopback
+irm n r oping l dix
+# run the oping server
+oping -l
+```
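+
+Under the hood, `oping -l` essentially accepts flows and echoes the
+payloads back. A rough sketch of that server side (again assuming the
+`flow_accept()` / `flow_read()` / `flow_write()` calls from
+`<ouroboros/dev.h>`; the real oping server does quite a bit more):
+
+```
+/*
+ * Rough sketch of an echo server, not the actual oping source.
+ * flow_accept() is assumed to block until a client allocates a flow
+ * to the name this program is bound to.
+ */
+#include <ouroboros/dev.h>
+
+#include <sys/types.h>
+
+int main(void)
+{
+        char buf[2048];
+
+        while (1) {
+                int     fd = flow_accept(NULL, NULL);
+                ssize_t len;
+
+                if (fd < 0)
+                        continue;
+
+                while ((len = flow_read(fd, buf, sizeof(buf))) > 0)
+                        flow_write(fd, buf, (size_t) len); /* echo back */
+
+                flow_dealloc(fd);
+        }
+
+        return 0;
+}
+```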
+
+We'll now send 20 pings. If you try this, the flow allocation may
+fail due to the loss of a flow allocation packet (a bit similar to
+TCP losing the first SYN). The oping client currently doesn't retry
+flow allocation. The default payload for oping is 64 bytes (of
+zeros); after sending, oping waits 2 seconds for any outstanding
+replies. It doesn't detect duplicates.
+
+Let's first look at the _raw_ QoS cube. That's like best-effort
+UDP/IP. In Ouroboros, however, it doesn't require a packet header at
+all.
+
+First, the output of the client using a _raw_ QoS cube:
+```
+$ oping -n oping -c 20 -i 200ms -q raw
+Pinging oping with 64 bytes of data (20 packets):
+
+64 bytes from oping: seq=0 time=0.880 ms
+64 bytes from oping: seq=1 time=0.742 ms
+64 bytes from oping: seq=4 time=1.303 ms
+64 bytes from oping: seq=6 time=0.739 ms
+64 bytes from oping: seq=6 time=0.771 ms [out-of-order]
+64 bytes from oping: seq=6 time=0.789 ms [out-of-order]
+64 bytes from oping: seq=7 time=0.717 ms
+64 bytes from oping: seq=8 time=0.759 ms
+64 bytes from oping: seq=9 time=0.716 ms
+64 bytes from oping: seq=10 time=0.729 ms
+64 bytes from oping: seq=11 time=0.720 ms
+64 bytes from oping: seq=12 time=0.718 ms
+64 bytes from oping: seq=13 time=0.722 ms
+64 bytes from oping: seq=14 time=0.700 ms
+64 bytes from oping: seq=16 time=0.670 ms
+64 bytes from oping: seq=17 time=0.712 ms
+64 bytes from oping: seq=18 time=0.716 ms
+64 bytes from oping: seq=19 time=0.674 ms
+Server timed out.
+
+--- oping ping statistics ---
+20 packets transmitted, 18 received, 2 out-of-order, 10% packet loss, time: 6004.273 ms
+rtt min/avg/max/mdev = 0.670/0.765/1.303/0.142 ms
+```
+
+The _netem_ did a good job of jumbling up the traffic! There were a
+couple of out-of-order packets, some duplicates, and quite a few lost
+packets.
+
+Let's dig into an Ethernet frame captured from the "wire". The most
+interesting thing is its small total size: 82 bytes.
+
+```
+13:37:25.875092 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet), ethertype Unknown (0xa000), length 82:
+ 0x0000: 0042 0040 0000 0001 0000 0011 e90c 0000 .B.@............
+ 0x0010: 0000 0000 203f 350f 0000 0000 0000 0000 .....?5.........
+ 0x0020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
+ 0x0030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
+ 0x0040: 0000 0000
+```
+
+The first 12 bytes are the two MAC addresses (all zeros), then 2 bytes
+for the "Ethertype" (the default for an Ouroboros layer is 0xa000, so
+you can create more layers and separate them by Ethertype[^1]). The
+Ethernet payload is thus 68 bytes. The Ouroboros _ipcpd-eth-dix_ adds
+an extra header of 4 bytes with 2 extra "fields". The first field is
+one we needed to take over from our [Data
+Transfer](/docs/concepts/protocols/) protocol: the Endpoint Identifier
+(EID) that identifies the flow. The _ipcpd-eth-dix_ has two endpoints,
+one for the client and one for the server. 0x0042 (66) is the
+destination EID of the server, 0x0043 (67) is the destination EID of
+the client. The second field is the _length_ of the payload in octets,
+0x0040 = 64. This is needed because Ethernet II has a minimum frame
+size of 64 bytes and pads out smaller frames (frames below the minimum
+are called _runt frames_)[^2]. The remaining 64 bytes are the oping
+payload, giving us an 82-byte packet.
+
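+So everything the _ipcpd-eth-dix_ needs fits in 4 bytes. As a sketch,
+the header boils down to the struct below (the field names are mine,
+not taken from the source; both fields are sent in network byte
+order):
+
+```
+#include <stdint.h>
+
+/*
+ * Illustrative layout of the 4-byte ipcpd-eth-dix header; see the
+ * Ouroboros source for the real definitions.
+ */
+struct dix_hdr {
+        uint16_t eid;    /* destination endpoint id, e.g. 0x0042  */
+        uint16_t len;    /* payload length in octets, e.g. 0x0040 */
+} __attribute__((packed));
+
+/* Frame size: 14 (Ethernet) + 4 (dix_hdr) + 64 (payload) = 82 bytes. */
+```
+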
+That's it for the raw QoS. The next one is _voice_. A voice service
+usually requires packets to be delivered with little delay and jitter
+(i.e. ASAP). Out-of-order packets are rejected since they cause
+artifacts in the audio output. The voice QoS will enable FRCP, because
+it needs to track sequence numbers.
+
+```
+$ oping -n oping -c 20 -i 200ms -q voice
+Pinging oping with 64 bytes of data (20 packets):
+
+64 bytes from oping: seq=0 time=0.860 ms
+64 bytes from oping: seq=2 time=0.704 ms
+64 bytes from oping: seq=3 time=0.721 ms
+64 bytes from oping: seq=4 time=0.706 ms
+64 bytes from oping: seq=5 time=0.721 ms
+64 bytes from oping: seq=6 time=0.710 ms
+64 bytes from oping: seq=7 time=0.721 ms
+64 bytes from oping: seq=8 time=0.691 ms
+64 bytes from oping: seq=10 time=0.691 ms
+64 bytes from oping: seq=12 time=0.702 ms
+64 bytes from oping: seq=13 time=0.730 ms
+64 bytes from oping: seq=14 time=0.716 ms
+64 bytes from oping: seq=15 time=0.725 ms
+64 bytes from oping: seq=16 time=0.709 ms
+64 bytes from oping: seq=17 time=0.703 ms
+64 bytes from oping: seq=18 time=0.693 ms
+64 bytes from oping: seq=19 time=0.666 ms
+Server timed out.
+
+--- oping ping statistics ---
+20 packets transmitted, 17 received, 0 out-of-order, 15% packet loss, time: 6004.243 ms
+rtt min/avg/max/mdev = 0.666/0.716/0.860/0.040 ms
+```
+
+As you can see, packets are delivered in order, but some packets are
+missing. Nothing fancy. Let's look at a data packet:
+
+```
+14:06:05.607699 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet), ethertype Unknown (0xa000), length 94:
+ 0x0000: 0045 004c 0100 0000 eb1e 73ad 0000 0000 .E.L......s.....
+ 0x0010: 0000 0000 0000 0012 a013 0000 0000 0000 ................
+ 0x0020: 705c e53a 0000 0000 0000 0000 0000 0000 p\.:............
+ 0x0030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
+ 0x0040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
+
+```
+
+The same 18-byte header is present. The flow endpoint ID is a
+different one, and the length is also different. The packet is 94
+bytes; the payload length for the _ipcpd-eth-dix_ is 0x004c = 76
+octets. So the FRCP header adds 12 bytes, and the total overhead is 30
+bytes. Maybe a bit more detail on the FRCP header contents (more depth
+is available in the protocol documentation). The first 2 bytes are the
+FLAGS (0x0100). There are only 7 flags; the field is 16 bits for
+memory alignment. This packet only has the DATA bit set. Then follows
+the flow control window, which is 0 (not implemented yet). Then we
+have a 4-byte sequence number (eb1e 73ad = 3944641453)[^3] and a
+4-byte ACK number, which is 0. The remaining 64 bytes are the oping
+payload.
+
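+Putting that together, the 12-byte FRCP header looks roughly like the
+struct below (again illustrative: the field names and flag values are
+mine, the real definitions are in the protocol documentation and the
+source):
+
+```
+#include <stdint.h>
+
+/*
+ * Illustrative layout of the 12-byte FRCP header described above;
+ * all fields are carried in network byte order.
+ */
+#define FRCP_DATA 0x0001   /* carries data (bit position arbitrary here) */
+#define FRCP_ACK  0x0002   /* ack is valid (bit position arbitrary here) */
+/* ... 5 more flags, see the protocol documentation */
+
+struct frcp_hdr {
+        uint16_t flags;    /* only 7 flags, 16 bits for alignment   */
+        uint16_t window;   /* flow control window (still always 0)  */
+        uint32_t seqno;    /* sequence number, e.g. 0xeb1e73ad      */
+        uint32_t ackno;    /* acknowledgement number                */
+} __attribute__((packed));
+```
+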
+Next, the data QoS:
+
+```
+$ oping -n oping -c 20 -i 200ms -q data
+Pinging oping with 64 bytes of data (20 packets):
+
+64 bytes from oping: seq=0 time=0.932 ms
+64 bytes from oping: seq=1 time=0.701 ms
+64 bytes from oping: seq=2 time=200.949 ms
+64 bytes from oping: seq=3 time=0.817 ms
+64 bytes from oping: seq=4 time=0.753 ms
+64 bytes from oping: seq=5 time=0.730 ms
+64 bytes from oping: seq=6 time=0.726 ms
+64 bytes from oping: seq=7 time=0.887 ms
+64 bytes from oping: seq=8 time=0.878 ms
+64 bytes from oping: seq=9 time=0.883 ms
+64 bytes from oping: seq=10 time=0.865 ms
+64 bytes from oping: seq=11 time=401.192 ms
+64 bytes from oping: seq=12 time=201.047 ms
+64 bytes from oping: seq=13 time=0.872 ms
+64 bytes from oping: seq=14 time=0.966 ms
+64 bytes from oping: seq=15 time=0.856 ms
+64 bytes from oping: seq=16 time=0.849 ms
+64 bytes from oping: seq=17 time=0.843 ms
+64 bytes from oping: seq=18 time=0.797 ms
+64 bytes from oping: seq=19 time=0.728 ms
+
+--- oping ping statistics ---
+20 packets transmitted, 20 received, 0 out-of-order, 0% packet loss, time: 4004.491 ms
+rtt min/avg/max/mdev = 0.701/40.864/401.192/104.723 ms
+```
+
+With the data spec, we have no packet loss, but some packets have
+been retransmitted (hence the higher latencies, which are roughly one
+or two times the 200 ms sending interval). The reason for the very
+high latency is that the current implementation only ACKs on data
+packets; this will be fixed soon.
+
+Looking at an Ethernet frame, it's again 94 bytes:
+
+```
+14:35:42.612066 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet), ethertype Unknown (0xa000), length 94:
+ 0x0000: 0044 004c 0700 0000 81b8 0259 e2f3 eb59 .D.L.......Y...Y
+ 0x0010: 0000 0000 0000 0012 911a 0000 0000 0000 ................
+ 0x0020: 86b3 273b 0000 0000 0000 0000 0000 0000 ..';............
+ 0x0030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
+ 0x0040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
+
+```
+
+The main difference is that it has 2 flags set (DATA + ACK), and it
+thus contains both a sequence number (81b8 0259) and an
+acknowledgement (e2f3 eb59).
+
+That's about it for now. More to come soon.
+
+Dimitri
+
+[^1]: Don't you love standards? One of the key design objectives for Ouroboros is exactly to avoid such shenanigans. Modify/abuse a header and Ouroboros should reject it because it _cannot work_, not because some standard says one shouldn't do it.
+[^2]: Lesser-known fact: Gigabit Ethernet has a 512-byte minimum frame size, but _carrier extension_ handles this transparently.
+[^3]: In _network byte order_. \ No newline at end of file