---
date: 2021-02-12
title: "Ouroboros 0.18"
linkTitle: "Ouroboros 0.18"
description: "Major additions and changes in 0.18.0"
author: Dimitri Staessens
---

With version 0.18 come a number of interesting updates to the prototype.

### Automatic Repeat Request (ARQ) and flow control

We finished the implementation of the base retransmission
logic. Ouroboros will now send, receive and handle acknowledgments
under packet loss conditions.  It will also send and handle window
updates for flow control. Flow control operates much like window-based
flow control in TCP; the main difference is that our sequence numbers
are per-packet instead of per-byte.
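
To make the per-packet window concrete, here is a minimal sketch in C
of a sender-side window check. The names are hypothetical and this is
not the actual FRCP code; it only illustrates that the window edges
are packet counts rather than byte offsets.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sender-side state; field names are hypothetical. */
struct snd_cr {
        uint32_t lwe;    /* lower window edge: oldest unacknowledged packet */
        uint32_t rwe;    /* right window edge: advertised by the peer       */
        uint32_t seqno;  /* sequence number for the next packet to send     */
};

/* A packet may be sent if its sequence number falls inside the window.
 * Unlike TCP, the window is counted in packets, not bytes.             */
static bool can_send(const struct snd_cr * cr)
{
        return (int32_t) (cr->rwe - cr->seqno) > 0;
}

/* On an incoming ACK/window update: slide both window edges forward. */
static void handle_ack(struct snd_cr * cr, uint32_t ackno, uint32_t window)
{
        if ((int32_t) (ackno - cr->lwe) > 0)
                cr->lwe = ackno;
        cr->rwe = cr->lwe + window;
}
```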

The previous version of FRCP had a partial implementation of the ARQ
functionality, such as piggybacking ACK information on _writes_ and
handling sequence numbers on _reads_. Ouroboros will now also send
(delayed) ACK packets without data when the application is not
sending, and it will finish transmitting unacknowledged data when a
flow is closed (this can be turned off with the FRCTFLINGER flag).
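
As a rough illustration of the delayed-ACK behaviour (again a sketch
with made-up names, not the FRCP source): an acknowledgment is
piggybacked on the next outgoing data packet if the application writes
soon enough, otherwise a bare ACK packet goes out when a short timer
expires.

```c
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* Hypothetical receiver-side state for delayed ACKs. */
struct rcv_cr {
        uint32_t        ackno;   /* next expected sequence number     */
        bool            ack_due; /* an ACK still has to go out        */
        struct timespec ack_at;  /* deadline for sending a bare ACK   */
};

/* Called periodically: if no outgoing data packet carried the ACK in
 * time, send it on its own.                                           */
static void ack_timeout(struct rcv_cr * cr, const struct timespec * now,
                        void (* send_bare_ack)(uint32_t ackno))
{
        if (!cr->ack_due)
                return;

        if (now->tv_sec > cr->ack_at.tv_sec ||
            (now->tv_sec == cr->ack_at.tv_sec &&
             now->tv_nsec >= cr->ack_at.tv_nsec)) {
                send_bare_ack(cr->ackno);
                cr->ack_due = false;
        }
}
```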

Recall that Ouroboros implements this logic in the application
library; there is no separate component (or kernel) managing transmit
and receive buffers and retransmission. Furthermore, our
implementation doesn't add a thread to the application: if a
single-threaded application uses ARQ, it will remain single-threaded.
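
Because the logic lives in the library, a flow with ARQ is used
exactly like any other flow. A minimal single-threaded client could
look as follows; the destination name "echo" is made up, the default
QoS is requested for brevity (whether ARQ is actually enabled depends
on the QoS of the flow), and retransmission and ACK handling happen
inside the flow_write() and flow_read() calls themselves.

```c
#include <ouroboros/dev.h>

#include <stdio.h>
#include <string.h>

int main(void)
{
        char buf[128];
        int  fd;

        /* Allocate a flow to a (hypothetical) "echo" service, using
         * default QoS and no timeout. Any ARQ on this flow runs inside
         * the library, so the program stays single-threaded.          */
        fd = flow_alloc("echo", NULL, NULL);
        if (fd < 0) {
                fprintf(stderr, "Failed to allocate flow.\n");
                return -1;
        }

        if (flow_write(fd, "hello", strlen("hello")) < 0)
                fprintf(stderr, "Failed to write.\n");
        else if (flow_read(fd, buf, sizeof(buf)) < 0)
                fprintf(stderr, "Failed to read.\n");

        flow_dealloc(fd);

        return 0;
}
```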

It's not unlikely that in the future we will add the option for the
library to start a dedicated thread to manage ARQ, as this may have
some beneficial characteristics for read/write call durations. Other
future additions may include fast-retransmit and selective ACK
support.

The most important characteristic of Ouroboros FRCP compared to TCP
and derivative protocols (QUIC, SCTP, ...) is that it is 100%
independent of congestion control, which allows it to operate at real
RTT timescales (i.e. microseconds in datacenters) without fear of RTT
underestimates severely capping throughput. Another characteristic is
that the RTT estimate really measures the responsiveness of the
application, not of the kernel on the machine.

A detailed description of the operation of ARQ can be found
in the [protocols](/docs/concepts/protocols/#operation-of-frcp)
section.

### Congestion Avoidance

The next big addition is congestion avoidance. By default, unicast
layers will now congestion-control all client traffic sent over
them[^1]. As noted above, congestion avoidance in
Ouroboros is completely independent of the operation of ARQ and flow
control. For more information about how this all works, have a look at
the developer blog
[here](/blog/2020/12/12/congestion-avoidance-in-ouroboros/) and
[here](/blog/2020/12/19/exploring-ouroboros-with-wireshark/).

### Revision of the flow allocator

We also made a change to the flow allocator, more specifically the
Endpoint IDs to use 64-bit identifiers. The reason for this change is
to make it harder to guess these endpoint identifiers.  In TCP,
applications can listen to sockets that are bound to a port on a (set
of) IP addresses. You can't imagine how many hosts are trying to brute
force password guess SSH logins on TCP port 22. To make this at least
a bit harder, Ouroboros has no well-known application ports, and after
this patch they are roughtly equivalent to a 32-bit random
number. Note that in an ideal Ouroboros deployment, sensitive
applications such as SSH login should run on a different layer/network
than publicly available applications.
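
For illustration only, the gist of the change is that an endpoint
identifier is drawn from a large random space instead of being tied to
a well-known port. The sketch below is not the actual allocator code,
and a real implementation would want a cryptographically stronger
source of randomness.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative only: draw a 64-bit endpoint ID at random, so that an
 * attacker has to guess the identifier instead of probing a small
 * well-known port range as with TCP. random() is not a secure source;
 * this just shows the idea.                                           */
static uint64_t generate_eid(void)
{
        /* random() yields 31 random bits per call on POSIX systems. */
        return ((uint64_t) random() << 62) ^
               ((uint64_t) random() << 31) ^
                (uint64_t) random();
}
```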

### Revision of the ipcpd-udp

The ipcpd-udp has gone through some revisions during its lifetime. In
the beginning, we wanted to emulate the operation of an Ouroboros
layer, with the flow allocator listening on a certain UDP port and
endpoint identifiers mapped to random ephemeral UDP ports. As an
example, the source would create a UDP socket, e.g. on port 30927,
and send a request for a new flow to the fixed, known Ouroboros UDP
port (3531) at the receiver. The receiver would in turn create a
socket on an ephemeral UDP port, say 23705, and send a response back
to the source on UDP port 3531. Traffic for the "client" flow would
then be carried on the UDP port pair (30927, 23705). This caused a
bunch of headaches with machines behind NAT firewalls, rendering the
scheme useful only in lab environments.

To make it more usable, the next revision used a single fixed incoming
UDP port for the flow allocator protocol, an ephemeral UDP port per
flow on the sender side, and carried the flow allocator endpoints as a
"next header" inside UDP. Traffic would thus always be sent to
destination UDP port 3531. The benefit was that only a single port was
needed in the NAT forwarding rules, and that anyone running Ouroboros
would be able to receive allocation messages, which somewhat nudges
all users towards participating in a mesh topology.

However, opening a certain UDP port is still a hassle, so in this
(most likely final) revision, we just run the flow allocator in the
ipcpd-udp as a UDP server on a (configurable) port. No more NAT
firewall configuration is required if you want to connect (but if you
want to accept connections, opening UDP port 3531 is still required).
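
To illustrate the final scheme (the struct below is a hypothetical
layout, not the on-wire format of the ipcpd-udp): every packet is sent
to the single, configurable UDP server port, and the receiving IPCP
demultiplexes flows on an endpoint identifier carried inside the UDP
payload, so no per-flow ports ever need to be forwarded through a NAT.

```c
#include <stdint.h>

/* Hypothetical payload layout for the ipcpd-udp: all traffic goes to
 * one (configurable) UDP server port, 3531 by default, and the
 * destination endpoint ID inside the payload selects the flow rather
 * than the UDP port pair.                                             */
struct udp_encap_hdr {
        uint64_t eid;   /* destination endpoint ID of the flow  */
        /* flow allocator message or flow data follows here     */
};
```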

The full changelog can be browsed in
[cgit](/cgit/ouroboros/log/?showmsg=1).

[^1]: This is not a claim that every packet inside a layer is
      congestion-controlled: internal management traffic of the layer
      (flow allocator protocol, etc.) is not congestion-controlled.