---
date: 2020-02-16
title: "Flow and Retransmission Control Protocol (FRCP) implementation"
linkTitle: "Flow and Retransmission Control Protocol (FRCP)"
description: "A quick demo of FRCP"
author: Dimitri Staessens
---

Over the long weekend I had some fun implementing (parts of) the
[Flow and Retransmission Control Protocol (FRCP)](/docs/concepts/protocols/#flow-and-retransmission-control-protocol-frcp),
to the point that it's stable enough to bring you a very quick demo.

FRCP is the Ouroboros alternative to TCP / QUIC / LLC. It ensures
delivery of packets when the network itself isn't very reliable.

The setup is simple: we run Ouroboros over the Ethernet loopback
adapter _lo_,
```
systemctl restart ouroboros
# bootstrap an eth-dix IPCP named "dix" in a layer "dix" over device lo
irm i b t eth-dix l dix n dix dev lo
```
to which we add some impairment using the
[_netem_ qdisc](http://man7.org/linux/man-pages/man8/tc-netem.8.html):

```
$ sudo tc qdisc add dev lo root netem loss 8% duplicate 3% reorder 10% delay 1
```

This causes the link to lose, duplicate and reorder packets.

We can run the oping tool with different [QoS
specs](https://ouroboros.rocks/cgit/ouroboros/tree/include/ouroboros/qos.h)
and watch the behaviour. Quality-of-Service (QoS) specs are a
technology-agnostic way to request a network service (in their current
state; they are not finalized yet). I'll also capture tcpdump output.
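
To give a feel for what requesting a service looks like, here is a
minimal sketch of a QoS spec as a plain C struct; the field names and
types are illustrative assumptions, the authoritative definition is in
the qos.h linked above.

```
/* Minimal sketch of a technology-agnostic QoS spec; the field names
 * and types are assumptions, see the linked qos.h for the real one. */
#include <stdint.h>

typedef struct {
        uint32_t delay;     /* maximum acceptable delay (ms)         */
        uint64_t bandwidth; /* minimum guaranteed bandwidth (bits/s) */
        uint32_t loss;      /* acceptable packet loss; 0 = reliable  */
        uint8_t  in_order;  /* require in-order delivery?            */
} qosspec_sketch_t;
```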

We start an oping server and tell Ouroboros to have it listen to the _name_ "oping":
```
# bind the program oping to the name oping
irm b prog oping n oping
# register the name oping in the Ethernet layer that is attached to the loopback
irm n r oping l dix
# run the oping server
oping -l
```

We'll now send 20 pings. If you try this, the flow allocation may
fail due to the loss of a flow allocation packet (a bit similar to
TCP losing the initial SYN); the oping client currently doesn't retry
flow allocation. The default payload for oping is 64 bytes (of
zeros); after sending, oping waits 2 seconds for any outstanding
replies. It doesn't detect duplicates.
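
A client that wants to survive a lost flow allocation packet could
simply retry the allocation; below is a hypothetical sketch of such a
loop, assuming the flow_alloc() call from the Ouroboros dev.h API (the
exact signature and the retry policy are assumptions, not the actual
oping source).

```
#include <ouroboros/dev.h>
#include <time.h>

#define MAX_TRIES 3 /* hypothetical retry budget */

/* Retry flow allocation a few times before giving up; returns the
 * flow descriptor on success, -1 on failure. */
static int alloc_with_retry(const char * name, qosspec_t * qs)
{
        struct timespec timeo = {2, 0}; /* 2 s per attempt */
        int i;

        for (i = 0; i < MAX_TRIES; i++) {
                int fd = flow_alloc(name, qs, &timeo);
                if (fd >= 0)
                        return fd;
        }

        return -1;
}
```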

Let's first look at the _raw_ QoS cube. That's like best-effort
UDP/IP. In Ouroboros, however, it doesn't require a packet header at
all.

First, the output of the client using a _raw_ QoS cube:
```
$ oping -n oping -c 20 -i 200ms -q raw
Pinging oping with 64 bytes of data (20 packets):

64 bytes from oping: seq=0 time=0.880 ms
64 bytes from oping: seq=1 time=0.742 ms
64 bytes from oping: seq=4 time=1.303 ms
64 bytes from oping: seq=6 time=0.739 ms
64 bytes from oping: seq=6 time=0.771 ms [out-of-order]
64 bytes from oping: seq=6 time=0.789 ms [out-of-order]
64 bytes from oping: seq=7 time=0.717 ms
64 bytes from oping: seq=8 time=0.759 ms
64 bytes from oping: seq=9 time=0.716 ms
64 bytes from oping: seq=10 time=0.729 ms
64 bytes from oping: seq=11 time=0.720 ms
64 bytes from oping: seq=12 time=0.718 ms
64 bytes from oping: seq=13 time=0.722 ms
64 bytes from oping: seq=14 time=0.700 ms
64 bytes from oping: seq=16 time=0.670 ms
64 bytes from oping: seq=17 time=0.712 ms
64 bytes from oping: seq=18 time=0.716 ms
64 bytes from oping: seq=19 time=0.674 ms
Server timed out.

--- oping ping statistics ---
20 packets transmitted, 18 received, 2 out-of-order, 10% packet loss, time: 6004.273 ms
rtt min/avg/max/mdev = 0.670/0.765/1.303/0.142 ms
```

The _netem_ did a good job of jumbling up the traffic! There were a
couple of out-of-order duplicates and quite a few lost packets.

Let's dig into an Ethernet frame captured from the "wire". The most
interesting thing is its small total size: 82 bytes.

```
13:37:25.875092 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet), ethertype Unknown (0xa000), length 82:
        0x0000:  0042 0040 0000 0001 0000 0011 e90c 0000  .B.@............
        0x0010:  0000 0000 203f 350f 0000 0000 0000 0000  .....?5.........
        0x0020:  0000 0000 0000 0000 0000 0000 0000 0000  ................
        0x0030:  0000 0000 0000 0000 0000 0000 0000 0000  ................
        0x0040:  0000 0000
```

The first 12 bytes are the two MAC addresses (all zeros), then 2 bytes
for the "Ethertype" (the default for an Ouroboros layer is 0xa000, so
you can create more layers and separate them by Ethertype[^1]). The
Ethernet payload is thus 68 bytes. The Ouroboros _ipcpd-eth-dix_ adds
an extra header of 4 bytes with two fields. The first field is taken
over from our [Data Transfer](/docs/concepts/protocols/) protocol: the
Endpoint Identifier (EID), which identifies the flow. The
_ipcpd-eth-dix_ has two endpoints, one for the client and one for the
server: 0x0042 (66) is the destination EID of the server, 0x0043 (67)
is the destination EID of the client. The second field is the _length_
of the payload in octets, 0x0040 = 64. This is needed because Ethernet
II has a minimum frame size of 64 bytes and pads out smaller frames
(frames below the minimum are known as _runt frames_)[^2]. The
remaining 64 bytes are the oping payload, giving us an 82-byte packet.
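
In other words, the _ipcpd-eth-dix_ header boils down to two 16-bit
fields. A sketch of the layout (the struct and field names are mine,
not the actual Ouroboros source):

```
/* Sketch of the 4-byte ipcpd-eth-dix header described above; struct
 * and field names are illustrative, not the Ouroboros source. */
#include <stdint.h>

struct dix_hdr {
        uint16_t eid; /* destination Endpoint ID: 0x0042 = server */
        uint16_t len; /* payload length in octets: 0x0040 = 64    */
} __attribute__((packed));
```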

That's it for the raw QoS. The next one is _voice_. A voice service
usually requires packets to be delivered with little delay and jitter
(i.e. ASAP). Out-of-order packets are rejected, since they cause
artifacts in the audio output. The voice QoS enables FRCP, because it
needs to track sequence numbers.

```
$ oping -n oping -c 20 -i 200ms -q voice
Pinging oping with 64 bytes of data (20 packets):

64 bytes from oping: seq=0 time=0.860 ms
64 bytes from oping: seq=2 time=0.704 ms
64 bytes from oping: seq=3 time=0.721 ms
64 bytes from oping: seq=4 time=0.706 ms
64 bytes from oping: seq=5 time=0.721 ms
64 bytes from oping: seq=6 time=0.710 ms
64 bytes from oping: seq=7 time=0.721 ms
64 bytes from oping: seq=8 time=0.691 ms
64 bytes from oping: seq=10 time=0.691 ms
64 bytes from oping: seq=12 time=0.702 ms
64 bytes from oping: seq=13 time=0.730 ms
64 bytes from oping: seq=14 time=0.716 ms
64 bytes from oping: seq=15 time=0.725 ms
64 bytes from oping: seq=16 time=0.709 ms
64 bytes from oping: seq=17 time=0.703 ms
64 bytes from oping: seq=18 time=0.693 ms
64 bytes from oping: seq=19 time=0.666 ms
Server timed out.

--- oping ping statistics ---
20 packets transmitted, 17 received, 0 out-of-order, 15% packet loss, time: 6004.243 ms
rtt min/avg/max/mdev = 0.666/0.716/0.860/0.040 ms
```

As you can see, packets are delivered in-order, and some packets are
missing. Nothing fancy. Let's look at a data packet:

```
14:06:05.607699 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet), ethertype Unknown (0xa000), length 94:
        0x0000:  0045 004c 0100 0000 eb1e 73ad 0000 0000  .E.L......s.....
        0x0010:  0000 0000 0000 0012 a013 0000 0000 0000  ................
        0x0020:  705c e53a 0000 0000 0000 0000 0000 0000  p\.:............
        0x0030:  0000 0000 0000 0000 0000 0000 0000 0000  ................
        0x0040:  0000 0000 0000 0000 0000 0000 0000 0000  ................

```

The same 18 bytes of headers (Ethernet plus _ipcpd-eth-dix_) are
present. The flow endpoint ID is different, and so is the length. The
packet is 94 bytes; the payload length for the _ipcpd-eth-dix_ is
0x004c = 76 octets. So the FRCP header adds 12 bytes, bringing the
total overhead to 30 bytes. A bit more detail on the FRCP header
contents (more depth is available in the protocol documentation): the
first 2 bytes are the FLAGS (0x0100). There are only 7 flags; the
field is 16 bits for memory alignment. This packet only has the DATA
bit set. Then follows the flow control window, which is 0 (not
implemented yet). Then we have a 4-byte sequence number (eb1e 73ad =
3944641453)[^3] and a 4-byte ACK number, which is 0. The remaining 64
bytes are the oping payload.
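
Putting that walk-through together, the 12-byte FRCP header can be
pictured as the struct below. This is a sketch of the layout just
described, with illustrative field names (and keep the byte-order
caveat[^3] in mind when reading the dumps):

```
/* Sketch of the 12-byte FRCP header as walked through above; field
 * names are illustrative, not the actual Ouroboros source. */
#include <stdint.h>

struct frcp_hdr {
        uint16_t flags;  /* only 7 flags defined; 16 bits for alignment */
        uint16_t window; /* flow control window (not implemented yet)   */
        uint32_t seqno;  /* sequence number                             */
        uint32_t ackno;  /* acknowledgement number (0 here: no ACK)     */
} __attribute__((packed));
```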

Next, the data QoS:

```
$ oping -n oping -c 20 -i 200ms -q data
Pinging oping with 64 bytes of data (20 packets):

64 bytes from oping: seq=0 time=0.932 ms
64 bytes from oping: seq=1 time=0.701 ms
64 bytes from oping: seq=2 time=200.949 ms
64 bytes from oping: seq=3 time=0.817 ms
64 bytes from oping: seq=4 time=0.753 ms
64 bytes from oping: seq=5 time=0.730 ms
64 bytes from oping: seq=6 time=0.726 ms
64 bytes from oping: seq=7 time=0.887 ms
64 bytes from oping: seq=8 time=0.878 ms
64 bytes from oping: seq=9 time=0.883 ms
64 bytes from oping: seq=10 time=0.865 ms
64 bytes from oping: seq=11 time=401.192 ms
64 bytes from oping: seq=12 time=201.047 ms
64 bytes from oping: seq=13 time=0.872 ms
64 bytes from oping: seq=14 time=0.966 ms
64 bytes from oping: seq=15 time=0.856 ms
64 bytes from oping: seq=16 time=0.849 ms
64 bytes from oping: seq=17 time=0.843 ms
64 bytes from oping: seq=18 time=0.797 ms
64 bytes from oping: seq=19 time=0.728 ms

--- oping ping statistics ---
20 packets transmitted, 20 received, 0 out-of-order, 0% packet loss, time: 4004.491 ms
rtt min/avg/max/mdev = 0.701/40.864/401.192/104.723 ms
```

With the data spec, we have no packet loss, but some packets have been
retransmitted (hence the higher latency; note that the spikes are
multiples of the 200 ms ping interval). The reason for the very high
latency is that the current implementation only sends ACKs on data
packets; this will be fixed soon.

Looking at an Ethernet frame, it's again 94 bytes:

```
14:35:42.612066 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet), ethertype Unknown (0xa000), length 94:
        0x0000:  0044 004c 0700 0000 81b8 0259 e2f3 eb59  .D.L.......Y...Y
        0x0010:  0000 0000 0000 0012 911a 0000 0000 0000  ................
        0x0020:  86b3 273b 0000 0000 0000 0000 0000 0000  ..';............
        0x0030:  0000 0000 0000 0000 0000 0000 0000 0000  ................
        0x0040:  0000 0000 0000 0000 0000 0000 0000 0000  ................

```

The main difference is that it has 2 flags set (DATA + ACK), and it
thus contains both a sequence number (81b8 0259) and an
acknowledgement (e2f3 eb59).

That's about it for now. More to come soon.

Dimitri

[^1]: Don't you love standards? One of the key design objectives for Ouroboros is exactly to avoid such shenanigans. Modify/abuse a header and Ouroboros should reject it because it _cannot work_, not because some standard says one shouldn't do it.
[^2]: Lesser-known fact: half-duplex Gigabit Ethernet has a 512-byte minimum _slot time_, but _carrier extension_ handles this transparently.
[^3]: As you have figured out, the loopback is not in _network byte order_.