Ouroboros

Revision as of 08:44, 1 November 2023

This page is under construction

Summary

Ouroboros is a (work-in-progress) prototype packet-switching technology, aimed at substantially simplifying networking. It is based on a redesign of the current packet networking model, from the programming API almost to the wire. If we had to describe Ouroboros in a single sentence, it would be: a micro-services architecture applied to the network itself.

From an end-user application perspective, an Ouroboros network is a black box with a simple application programming interface for requesting communication services. Ouroboros can provision unicast flows: (bidirectional) channels that deliver message streams or byte streams with requested operational (QoS) parameters such as maximum delay and bandwidth, protection against packet loss, peer authentication, and encryption of in-flight data. It can also provide broadcast flows to sets of processes.

From an administrative perspective, an Ouroboros network is a collection of daemons that can be thought of as software routers (unicast) or software hubs (broadcast) that can be connected to each other; again through a simple management API.

The prototype is not directly compatible with TCP/IP (it uses different protocols) or POSIX sockets (it has a different API), but it has interfaces and tools to run over Ethernet or UDP, or to create IP/Ethernet tunnels over Ouroboros by exposing tap or tun devices.

Objectives

Setting up a service over TCP/IP usually involves many different technologies. By the time the service is up and running, it will likely have involved configuring (switchport-based and trunk) VLANs, enabling some Spanning Tree Protocol variant in parts of the network, setting up link aggregation between ports on stacked switches, defining IP subnetworks, configuring a DHCP server to assign addresses to the subnets, setting up gateways and DNS servers, possibly configuring OSPF, IS-IS or iBGP/eBGP, selecting TCP and UDP ports for the applications, configuring reverse proxies, setting firewall and Network Address Translation (NAT) rules, adding some servers to a demilitarized zone, configuring a Virtual Private Network server, establishing a few SSH tunnels here and there... the list is almost endless. To make things worse, a lot of this configuration is static and done manually. Once the service is in place, everything needs to be painstakingly documented. A networked service configuration is very brittle: introducing even small errors can bring the whole service down, and tracking down bugs, configuration errors or faults can take hours or even days. News stories about some DNS or [https://blog.cloudflare.com/october-2021-facebook-outage/ BGP misconfiguration] taking down a global service pop up regularly. The configuration is also literally everywhere. The application IP addresses and ports need to be set in a configuration file for each server application, and need to be consistent across different devices (DHCP and DNS servers, NAT firewalls, clients). Storing, maintaining and automating network and service configuration has become so elaborate and daunting that it has its own buzzword: infrastructure as code. The service configuration is also not very scalable or portable: if an IP subnet has been over- or under-dimensioned, changing it can require redesigning and reconfiguring many parts of the network.
Moving infrastructure within or between datacenters, or reintegrating it in a different part of the network, can cause many headaches. Some of these can be mitigated using virtualization, but the configuration of virtual machines and containers is still much more complicated than necessary.

Core Internet technology itself has become ossified; the core protocols haven't changed much in 30 years because making changes that are easy in theory (adding a new L3 or L4 protocol, for instance) have become nearly impossible in practice.

In a nutshell, our objectives are to simplify and reduce configuration, reduce protocol attack surface, prevent ossification, and make networks more robust in general.

Robust configuration

No well-known ports.

No manual addressing.

Abstraction: a single management API.

Single point of configuration: instead of network configuration per application, a single network configuration file per system.
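As a purely illustrative sketch of what "a single network configuration file per system" could look like (the format and keys below are hypothetical, not the prototype's actual configuration syntax):

```
# Hypothetical per-system Ouroboros configuration (illustrative only).
# One file describes how this node joins the network; the applications
# themselves carry no addresses or ports.

[node]
name = db-server-1          # name peers use to allocate flows to this node

[layer]
join = datacenter-net       # network layer this node enrolls in

[services]
register = postgres-main    # service names to register; clients request
                            # "postgres-main", never an IP:port pair
```

The point of the sketch is what is missing: no addresses, subnets, gateways or port numbers appear anywhere, so moving the system does not invalidate its configuration.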

Reduce protocol attack surface

Single point of contact: the flow allocator handles authentication and security before the first application byte is exchanged.

Prevent ossification

HTTP has taken over the role of 'narrow waist' from IP, and the reverse proxy has become the service endpoint. The protocol stack up to TCP/UDP port 443 is becoming more and more ossified.

Fast Bootstrap

Kick nodes from the network, hot-swap entire networks.

References