Ouroboros 0.18

Major additions and changes in 0.18.0

With version 0.18 come a number of interesting updates to the prototype.

Automated Repeat-Request (ARQ) and flow control

We finished the implementation of the base retransmission logic. Ouroboros will now send, receive and handle acknowledgments under packet loss conditions. It will also send and handle window updates for flow control. Flow control operates much like TCP's window-based flow control, the main difference being that our sequence numbers are per-packet instead of per-byte.
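
To make the per-packet versus per-byte difference concrete, here is a small illustrative sketch of a window-based sender check. The field names (snd_lwe, snd_nxt, rcv_rwe) and the code are made up for the example; they do not reflect the actual FRCP data structures.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sender-side state (hypothetical field names). */
struct snd_state {
        uint32_t snd_lwe; /* lower window edge: oldest unacknowledged packet   */
        uint32_t snd_nxt; /* sequence number of the next packet to send        */
        uint32_t rcv_rwe; /* right window edge advertised by the receiver:
                             first sequence number outside the window          */
};

/* Each packet consumes exactly one sequence number, regardless of its
 * length (TCP consumes one per byte). */
static bool may_send(const struct snd_state * s)
{
        /* Serial-number arithmetic so the check survives wrap-around. */
        return (int32_t) (s->rcv_rwe - s->snd_nxt) > 0;
}

/* On an incoming ACK, advance the lower window edge and adopt the new
 * right window edge (the flow-control "window update"). */
static void handle_ack(struct snd_state * s, uint32_t ack, uint32_t rwe)
{
        if ((int32_t) (ack - s->snd_lwe) > 0)
                s->snd_lwe = ack;
        if ((int32_t) (rwe - s->rcv_rwe) > 0)
                s->rcv_rwe = rwe;
}
```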

The previous version of FRCP had a partial implementation of the ARQ functionality, such as piggybacking ACK information on writes and handling sequence numbers on reads. Now, Ouroboros will also send (delayed) ACK packets without data if the application is not sending, and it will finish transmitting unacknowledged data when a flow is closed (this can be turned off with the FRCTFLINGER flag).
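
A rough sketch of what the delayed-ACK decision could look like when the application has nothing to piggyback the ACK on; the names and the 10 ms timeout are invented for the example and are not taken from the FRCP implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* Hypothetical delayed-ACK timeout, for illustration only. */
#define DELAYED_ACK_NS (10 * 1000 * 1000) /* 10 ms */

struct rcv_state {
        uint32_t        rcv_lwe;  /* next in-order sequence number expected  */
        uint32_t        last_ack; /* last sequence number we acknowledged    */
        struct timespec last_rcv; /* arrival time of the oldest unacked data */
};

/* Called from a periodic housekeeping path: if the application has not
 * written anything to piggyback an ACK on, send a standalone ACK packet
 * once the delayed-ACK timeout expires. */
static bool must_send_ack(const struct rcv_state * r,
                          const struct timespec *  now)
{
        int64_t elapsed;

        if (r->last_ack == r->rcv_lwe)
                return false; /* nothing new to acknowledge */

        elapsed = (now->tv_sec - r->last_rcv.tv_sec) * 1000000000LL
                + (now->tv_nsec - r->last_rcv.tv_nsec);

        return elapsed >= DELAYED_ACK_NS;
}
```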

Recall that Ouroboros implements this logic in the application library; there is no separate component (or kernel) managing transmit and receive buffers and retransmissions. Furthermore, our implementation doesn't add a thread to the application: if a single-threaded application uses ARQ, it will remain single-threaded.
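
From the application's perspective the ARQ machinery is invisible: a single-threaded program simply allocates a flow and reads and writes it, and the bookkeeping happens inside those library calls. A minimal sketch along the lines of the echo examples, assuming a hypothetical server registered under the name "echo-server"; the sketch passes NULL for the QoS spec to stay short, whereas a program that wants ARQ would request a reliable QoS spec there.

```c
#include <ouroboros/dev.h>

#include <stdio.h>
#include <string.h>

int main(void)
{
        char buf[64];
        int  fd;

        /* Allocate a flow to a (hypothetical) server name. A real program
         * would pass a reliable QoS spec instead of NULL to enable FRCP. */
        fd = flow_alloc("echo-server", NULL, NULL);
        if (fd < 0) {
                printf("Failed to allocate flow.\n");
                return -1;
        }

        /* ARQ bookkeeping (ACKs, retransmission, window updates) happens
         * inside these calls; the library does not spawn an extra thread. */
        if (flow_write(fd, "hello", strlen("hello") + 1) < 0)
                printf("Write failed.\n");
        else if (flow_read(fd, buf, sizeof(buf)) < 0)
                printf("Read failed.\n");
        else
                printf("Got: %s\n", buf);

        /* With lingering enabled, deallocation finishes the transmission of
         * any data that was not yet acknowledged. */
        flow_dealloc(fd);

        return 0;
}
```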

It’s not unlikely that in the future we will add the option for the library to start a dedicated thread to manage ARQ, as this may have beneficial effects on read/write call durations. Other future additions may include fast-retransmit and selective ACK (SACK) support.

The most important characteristic of Ouroboros FRCP compared to TCP and derivative protocols (QUIC, SCTP, …) is that it is 100% independent of congestion control, which allows it to operate at real RTT timescales (i.e. microseconds in datacenters) without fear of RTT underestimates severely capping throughput. Another characteristic is that the RTT estimate really measures the responsiveness of the application, not that of the kernel on the machine.
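
For reference, this is roughly what a standard smoothed-RTT estimator (in the style of RFC 6298) looks like; FRCP keeps a similar kind of estimate, but the constants and code below are illustrative rather than copied from the implementation. The important point is that the samples are taken in the application library, so they include the time the peer application takes to get around to processing the packet.

```c
#include <stdint.h>

struct rtt_est {
        uint64_t srtt;   /* smoothed round-trip time         */
        uint64_t rttvar; /* round-trip time variation        */
        uint64_t rto;    /* resulting retransmission timeout */
};

/* Update the estimate with a new sample (all values in nanoseconds).
 * Constants follow the classic alpha = 1/8, beta = 1/4 of RFC 6298. */
static void rtt_update(struct rtt_est * e, uint64_t sample)
{
        if (e->srtt == 0) { /* first measurement */
                e->srtt   = sample;
                e->rttvar = sample / 2;
        } else {
                uint64_t diff = sample > e->srtt ? sample - e->srtt
                                                 : e->srtt - sample;
                e->rttvar = e->rttvar - e->rttvar / 4 + diff / 4;
                e->srtt   = e->srtt   - e->srtt   / 8 + sample / 8;
        }

        e->rto = e->srtt + 4 * e->rttvar;
}
```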

A detailed description of the operation of ARQ can be found in the protocols section.

Congestion Avoidance

The next big addition is congestion avoidance. By default, unicast layers will now congestion-control all client traffic sent over them [1]. As noted above, congestion avoidance in Ouroboros is completely independent of the operation of ARQ and flow control. For more information about how this all works, have a look at the developer blog here and here.
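
To illustrate that separation of concerns, here is a textbook additive-increase/multiplicative-decrease window update driven by congestion signals from the layer. This is emphatically not the actual Ouroboros congestion avoidance policy (see the developer blog for that); it only shows that the congestion window is maintained next to, not inside, the ARQ and flow control state sketched above.

```c
#include <stdint.h>

/* Generic AIMD congestion window, for illustration only. */
struct ca_state {
        uint64_t cwnd;     /* congestion window, in packets */
        uint64_t cwnd_min; /* lower bound on the window     */
};

static void ca_update(struct ca_state * ca, int congested)
{
        if (congested) {
                ca->cwnd /= 2;                 /* multiplicative decrease */
                if (ca->cwnd < ca->cwnd_min)
                        ca->cwnd = ca->cwnd_min;
        } else {
                ca->cwnd += 1;                 /* additive increase       */
        }
}
```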

Revision of the flow allocator

We also made a change to the flow allocator: Endpoint IDs now use 64-bit identifiers. The reason for this change is to make these endpoint identifiers harder to guess. In TCP, applications listen on sockets that are bound to a port on a (set of) IP addresses. You can't imagine how many hosts are trying to brute-force SSH password logins on TCP port 22. To make this at least a bit harder, Ouroboros has no well-known application ports, and after this patch they are roughly equivalent to a 32-bit random number. Note that in an ideal Ouroboros deployment, sensitive applications such as SSH login should run on a different layer/network than publicly available applications.
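
Drawing such an identifier is straightforward; an illustrative way to do it on a Unix-like system is shown below. The actual generation happens inside the flow allocator, and the code there may differ.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: draw a random 64-bit endpoint ID from /dev/urandom. */
static int random_eid(uint64_t * eid)
{
        FILE * f = fopen("/dev/urandom", "r");

        if (f == NULL)
                return -1;

        if (fread(eid, sizeof(*eid), 1, f) != 1) {
                fclose(f);
                return -1;
        }

        fclose(f);

        return 0;
}
```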

Revision of the ipcpd-udp

The ipcpd-udp has gone through some revisions during its lifetime. In the beginning, we wanted to emulate the operation of an Ouroboros layer, with the flow allocator listening on a fixed UDP port and mapping endpoint identifiers to random ephemeral UDP ports. As an example, the source would create a UDP socket, e.g. on port 30927, and send a request for a new flow to the fixed, well-known Ouroboros UDP port (3531) at the receiver. The receiver would in turn create a socket on an ephemeral UDP port, say 23705, and send a response back to the source on UDP port 3531. Traffic for the "client" flow would then use the UDP port pair (30927, 23705). This caused a bunch of headaches with machines behind NAT firewalls, making the scheme only useful in lab environments.

To make it more usable, the next revision used a single fixed incoming UDP port for the flow allocator protocol, an ephemeral UDP port on the sender side per flow, and carried the flow allocator endpoints as a "next header" inside UDP. Traffic would thus always be sent to destination UDP port 3531. The benefits were that only a single port was needed in the NAT forwarding rules and that anyone running Ouroboros would be able to receive allocation messages, nudging all users towards participating in a mesh topology. However, opening a certain UDP port is still a hassle, so in this (most likely final) revision, we simply run the flow allocator in the ipcpd-udp as a UDP server on a (configurable) port. No NAT firewall configuration is required if you only want to connect, but if you want to accept connections, opening UDP port 3531 is still needed.
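
On the receiving side, the current scheme boils down to the ipcpd-udp binding a single UDP socket on the flow allocator port (3531 by default, configurable) and demultiplexing flows using the endpoint identifiers carried inside the UDP payload. A simplified sketch of that server socket setup, using plain BSD sockets rather than the actual ipcpd-udp code:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define FA_UDP_PORT 3531 /* default flow allocator port, configurable */

/* Illustrative only: a single UDP socket bound to the flow allocator port.
 * Both flow allocation messages and data packets arrive here; the endpoint
 * IDs inside the UDP payload identify the flow they belong to. */
static int fa_server_socket(void)
{
        struct sockaddr_in addr;
        int                fd;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
                return -1;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(FA_UDP_PORT);

        if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
                close(fd);
                return -1;
        }

        return fd;
}
```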

The full changelog can be browsed in cgit.


1. This is not a claim that every packet inside a layer is congestion-controlled: management traffic internal to the layer (flow allocator protocol, etc.) is not congestion-controlled.

Last modified February 12, 2021