| Commit message | Author | Age | Files | Lines |
Apparently that function isn't implemented on some versions of OS
X. On these systems we can fall back to sigwait(), at the cost of the
IPCP also accepting signals that do not come from the IRMd.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
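The function in question is presumably sigwaitinfo(), which exposes the
sender's PID; below is a minimal sketch of the sigwait() fallback. The
signal set and handling loop are illustrative, not the actual IPCP code.

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    int main(void)
    {
            sigset_t set;
            int      sig;

            sigemptyset(&set);
            sigaddset(&set, SIGINT);
            sigaddset(&set, SIGTERM);

            /* Block the signals so they are delivered via sigwait(). */
            pthread_sigmask(SIG_BLOCK, &set, NULL);

            /* Unlike sigwaitinfo(), sigwait() returns no siginfo, so the
               sender (e.g. the IRMd) cannot be verified. */
            if (sigwait(&set, &sig) == 0)
                    printf("Received signal %d.\n", sig);

            return 0;
    }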
The logic for additive increase was botched. It was adding to the
current window limit, and to avoid a count-to-infinity when sending
below the limit, I had added a check that the application was trying to
send more than the current limit. This fails in congestion avoidance
mode when the IPCP is throttling traffic below the limit, causing the
app to never increase the congestion window (and even worse, to keep
decreasing it in some cases).
Additive increase will now always add bandwidth to the latest sending
rate instead of to the window bandwidth limit, avoiding these problems.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
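A rough sketch of the corrected rule, with hypothetical names and a
hypothetical increment; this is not the actual policy code.

    #include <stdint.h>

    #define AI_INCR 65536 /* hypothetical additive increase, bytes per window */

    /* Before: new_limit = old_limit + AI_INCR, which counts to infinity
       when the sender stays below the limit.
       After:  grow from the bytes actually sent in the last window. */
    static uint64_t ai_update(uint64_t sent_bytes)
    {
            return sent_bytes + AI_INCR;
    }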
The updates to the u_snd and u_rcv variables were not guarded by the
corresponding ifdefs.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The RIB of the flow allocator will now export the number of flow
updates sent/received per flow.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This reduces the Linux high-resolution timer slack in IPCPs. The Linux
default for userspace processes is 50us. It is configurable at build
time using IPCP_LINUX_SLACKTIMER_NS. The default is now 1us.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
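On Linux the per-thread timer slack is set with prctl(); a minimal
sketch of what such a build option could translate to (the exact call
site in the IPCP is an assumption):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef IPCP_LINUX_SLACKTIMER_NS
    #define IPCP_LINUX_SLACKTIMER_NS 1000 /* 1 us; the Linux default is 50 us */
    #endif

    int main(void)
    {
            /* PR_SET_TIMERSLACK takes the slack in nanoseconds. */
            if (prctl(PR_SET_TIMERSLACK, IPCP_LINUX_SLACKTIMER_NS, 0, 0, 0) < 0) {
                    perror("prctl");
                    return -1;
            }

            return 0;
    }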
The rate was supposed to be 1 update per 8 data packets, but the
calculation was doing 1 update per 4 data packets.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The upstream/downstream stats were switched.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The layer info was not converted to parse the full path after the
latest RIB change.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The read functions for the RIB will now receive the full path, instead
of only the entry name. For IPCPs, we organized the RIB in a
/<ipcp>/<component>/entries
structure with a directory per component, so we don't need the full
path at this point. For process flow information, it's a lot more
convenient to organize it as
/<pid>/<fd>/stat
We can then register/unregister the flow descriptor when the FRCT
instance is created, and when getting the stats, the flow descriptor is
known from the FUSE file path. If we created a file per flow instead of
a directory per flow, something like
/<pid>/flows/<fd>
we'd need additional bookkeeping to list the contents of that directory
(we would need to track all flows with an active FRCT instance),
bookkeeping that FUSE already does for us because it tracks the
directories.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
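A small sketch of what receiving the full path buys here: the flow
descriptor can be recovered directly from a FUSE path of the form
/<pid>/<fd>/stat. Function and variable names are illustrative, not the
actual RIB code.

    #include <stdio.h>

    /* Parse "/<pid>/<fd>/stat" and recover the flow descriptor.
       Returns 0 on success, -1 if the path does not match. */
    static int parse_flow_path(const char * path, int * pid, int * fd)
    {
            char entry[16];

            if (sscanf(path, "/%d/%d/%15s", pid, fd, entry) != 3)
                    return -1;

            return 0;
    }

    int main(void)
    {
            int pid;
            int fd;

            if (parse_flow_path("/4242/7/stat", &pid, &fd) == 0)
                    printf("pid %d, flow descriptor %d\n", pid, fd);

            return 0;
    }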
These RIBs were not properly unregistered on shutdown.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The RIB API had a struct stat in the getattr() function, which made
all components that exposed variables via the RIB dependent on
<sys/stat.h>. The RIB now has its own struct rib_attr to set
attributes such as size and last modified time.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
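A hedged sketch of the idea: the RIB carries its own attribute struct
so components no longer depend on <sys/stat.h>. The field and callback
names below are assumptions, not the exact ouroboros/rib.h definitions.

    #include <stddef.h>
    #include <time.h>

    /* Illustrative only: a RIB-owned attribute struct replacing struct stat. */
    struct rib_attr {
            size_t          size;  /* size of the entry contents */
            struct timespec mtime; /* time of last modification  */
    };

    /* Illustrative read/getattr operations a component might implement. */
    struct rib_ops {
            int (* read)(const char * path, char * buf, size_t len);
            int (* getattr)(const char * path, struct rib_attr * attr);
    };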
Compilation failed on FreeBSD 14 with fuse enabled because of some
missing definitions. __XSI_VISIBLE must be set before including
<ouroboros/rib.h> for some definitions in <sys/stat.h>. FreeBSD
doesn't know the MSG_CONFIRM flag to sendto() or
CLOCK_REALTIME_COARSE, which are Linux-specific.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
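One common way to paper over such platform differences, sketched below;
this is not necessarily how the tree does it, only an illustration of
the three definitions mentioned above.

    /* FreeBSD: some <sys/stat.h> definitions are only visible with
       __XSI_VISIBLE; set it before pulling in headers that need it. */
    #ifndef __XSI_VISIBLE
    #define __XSI_VISIBLE 1
    #endif

    #include <sys/socket.h>
    #include <time.h>

    /* MSG_CONFIRM and CLOCK_REALTIME_COARSE are Linux-specific;
       fall back to portable equivalents where they are missing. */
    #ifndef MSG_CONFIRM
    #define MSG_CONFIRM 0
    #endif

    #ifndef CLOCK_REALTIME_COARSE
    #define CLOCK_REALTIME_COARSE CLOCK_REALTIME
    #endif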
The fa_handle_packet function is non-void but its loop didn't have a
return statement. Oddly, this was only picked up when building from the
AUR.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This adds an ouroboros/pthread.h header that wraps the
pthread_..._unlock() functions for cleanup using
pthread_cleanup_push(), as casting them to the cleanup handler type is
not safe (and there were definitely bad casts in the code). The close()
function is now also wrapped for cleanup in ouroboros/sockets.h.
This allows enabling more compiler checks.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
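The core of the problem is that pthread_cleanup_push() expects a
handler of type void (*)(void *), and casting pthread_mutex_unlock() to
that type is not safe. A minimal sketch of the kind of wrapper such a
header presumably provides; the wrapper name is an assumption.

    #include <pthread.h>

    /* A correctly-typed cleanup handler: take a void * and cast inside,
       instead of casting the function pointer itself. */
    static void cleanup_mutex_unlock(void * mutex)
    {
            pthread_mutex_unlock((pthread_mutex_t *) mutex);
    }

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void * worker(void * arg)
    {
            (void) arg;

            pthread_mutex_lock(&lock);
            pthread_cleanup_push(cleanup_mutex_unlock, &lock);

            /* ... cancellable work while holding the lock ... */

            pthread_cleanup_pop(1); /* run the handler: unlocks the mutex */

            return NULL;
    }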
There was a switch-case without a default, causing some compiler
warnings.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This was beneficial on dual-Xeon CPU systems, but on most hardware it
kills performance when running multiple IPCPs on a single CPU.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The maximum value of a uint64_t is 20 characters when printed; an
extra byte is needed for the trailing '\0' written by sprintf.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
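For reference, the worst case: UINT64_MAX prints as 20 digits
(18446744073709551615), so the buffer must hold 21 bytes. A small
stand-alone illustration:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            /* 20 characters for the digits, +1 for the '\0'. */
            char buf[21];

            sprintf(buf, "%" PRIu64, UINT64_MAX);
            printf("%s (%zu characters)\n", buf, strlen(buf));

            return 0;
    }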
This fixes scaling of the congestion window, which was a buggy mess.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This moves Resource Information Base (RIB) initialization into the
ipcp_init() function, so all IPCPs initialize a RIB. The RIB now shows
some common IPCP information, such as the IPCP name, the IPCP state and
the layer name if the IPCP is part of a layer.
The initialization of the hash algorithm and layer name was moved out
of the common ipcp source because IPCPs may only know this information
after enrollment. Some IPCPs were not even storing this information.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The new GCC 11.1 compiler discovered that s_dist could be uninitialized
when the policy is unknown, in which case it should not be freed. This
also removes some unneeded includes in the broadcast dt.c that I had
pending.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The connection manager and enrolment components of the unicast and
broadcast IPCP have a lot in common, as conjectured in the paper. The
initial implementation of the broadcast IPCP just duplicated the
code. This moves the shared functionality to common ground.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This removes the raptor IPCP. The code hasn't been updated for a
while and wouldn't compile. Raptor served its purpose as a PoC for
Ouroboros-over-Ethernet-Layer-1, but given the extremely niche hardware
needed to run it, it's not worth maintaining anymore.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
There is no need to include enroll.h.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The UDP layer will now use a single (configurable) UDP port, default
3435. This makes it easier to allocate flows as a client from behind a
NAT firewall without having to configure port forwarding rules. So
basically, from now on Ouroboros traffic is transported over a
bidirectional <src><port>:<dst><port> UDP tunnel. The reason for not
using/allowing different client/server ports is that it would require
reading from different sockets using select() or something similar,
but since we need the EID anyway (mgmt packets arrive on the same
server UDP port), there's not a lot of benefit in doing it. The
operation is now similar to the ipcpd-eth, with the UDP port somewhat
functioning as a "layer name" in the same way the Ethertype does there.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
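A bare-bones sketch of the single-port idea: one UDP socket bound to
the fixed port serves both directions, and demultiplexing happens on
the EID inside the payload rather than on UDP ports. The port number is
from the commit message; everything else is illustrative.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define OUROBOROS_UDP_PORT 3435 /* default port from the commit message */

    int open_udp(void)
    {
            struct sockaddr_in addr;
            int                fd;

            fd = socket(AF_INET, SOCK_DGRAM, 0);
            if (fd < 0)
                    return -1;

            memset(&addr, 0, sizeof(addr));
            addr.sin_family      = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port        = htons(OUROBOROS_UDP_PORT);

            /* Both data and mgmt packets arrive on this one socket;
               the EID in the payload selects the N+1 flow. */
            if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
                    close(fd);
                    return -1;
            }

            return fd;
    }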
The UGent email addresses have been shut down; they are updated to
Ouroboros mail addresses.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
Happy New Year, Ouroboros!
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The ECN marking function should be able to use the packet QoS to allow
prioritizing traffic under congestion. Not yet implemented in MB-ECN.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The t_sent variable is a remnant from the first version and isn't
needed anymore.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The previous value of the ECN field should be passed to the congestion
notification function.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The EIDs are now 64-bit. This makes them a tad harder to guess (think
of port scanning). The implementation randomizes only the most
significant 32 bits so EIDs can be mapped quickly to N+1 flows. While
this is equivalent to a random cookie as a check on flows, the
rationale is that valid endpoint IDs should be pretty hard to guess
(and thus be at least 64-bit random). Ideally one would use
content-addressable memory for this kind of mapping.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
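A sketch of the mapping described above: the low 32 bits index the N+1
flow, the high 32 bits are random and act as the hard-to-guess part.
Names, the use of random(), and the lookup shape are illustrative, not
the actual flow allocator code.

    #include <stdint.h>
    #include <stdlib.h>

    /* Build a 64-bit EID: random most-significant 32 bits, flow index
       in the least-significant 32 bits for a fast reverse lookup.
       A real implementation needs a proper RNG, not random(). */
    static uint64_t eid_create(uint32_t flow_idx)
    {
            uint32_t rnd = (uint32_t) random();

            return ((uint64_t) rnd << 32) | flow_idx;
    }

    /* Recover the flow index and check the random part against the
       value stored for that flow (bounds checking omitted). */
    static int eid_to_flow(uint64_t eid, const uint32_t * stored_rnd)
    {
            uint32_t idx = (uint32_t) (eid & 0xFFFFFFFFU);

            if ((uint32_t) (eid >> 32) != stored_rnd[idx])
                    return -1;

            return (int) idx;
    }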
There is a check not to rapidly double the window to astronomical
sizes when no congestion is experienced for long periods of time, but
the if-else logic was botched and the window still grew to astronomical
sizes (albeit linearly instead of exponentially).
I also lowered the ECN threshold a bit.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The DT will now post all packets for N+1 flows through the flow
allocator component. This means that N+1 flows can be monitored
through the flow allocator stats, and N-1 flows through the DT stats.
The DT component still keeps stats for the local components (FA and
DHT), but this can be removed once the DHT has its own RIB output.
The flow allocator shows statistics for
Sent packets:     total packets that were presented for sending on this specific flow
Send failed:      packets that could not be sent
Received packets: total packets that were presented by the DT component on this specific flow
Received failed:  packets that could not be delivered
These stats are presented as both packet counts and byte counts. To
know how many were successful, subtract the failed values from the
totals.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
Noticed an off-by-one in the packet counter: it was incremented before
the flow update, while the byte counter was incremented after it.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The RIB will now show some stats for the flow allocator, including
congestion avoidance statistics. This is needed before decoupling the
data transfer component and the flow allocator, as some stats currently
shown in DT will move to the FA.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The mb-ecn policy had a couple of divisions in the math, which I
wanted to avoid. It now measures the number of bytes sent in a window
and updates the next window with AIMD logic. When the number of bytes
allowed in the window is reached, the call blocks. To avoid long packet
bursts, the window size continually scales to contain between CA_MINPS
(8) and CA_MAXPS (64) packets.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
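A sketch of the window-scaling rule described above; CA_MINPS and
CA_MAXPS are taken from the commit message, while the shift-based
scaling and the parameter names are assumptions.

    #include <stdint.h>

    #define CA_MINPS 8  /* min packets per window */
    #define CA_MAXPS 64 /* max packets per window */

    /* Scale the window so that, at the current rate, it holds between
       CA_MINPS and CA_MAXPS packets: halve it when it holds too many
       (long bursts), double it when it holds too few. */
    static void window_rescale(uint64_t * wnd, uint32_t pkts_in_wnd)
    {
            if (pkts_in_wnd > CA_MAXPS)
                    *wnd >>= 1;
            else if (pkts_in_wnd < CA_MINPS)
                    *wnd <<= 1;
    }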
The dt component bypasses the flow allocator on the receiver side, and
may try to update congestion context when the flow has already been
deallocated by the receiver. I will fix this bypass and always pass
through the flow allocator sometime soon; for now, I added a check in
the flow allocator call to avoid the SEGV.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The enrollment procedure was not passing the policy for congestion
avoidance.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This adds congestion avoidance policies to the unicast IPCP. The
default policy is a multi-bit explicit congestion avoidance algorithm
based on Data Center TCP (DCTCP) congestion avoidance, relaying
information about the maximum queue depth that packets experienced to
the receiver. There's also a "nop" policy to disable congestion
avoidance for testing and benchmarking purposes.
The (initial) API for congestion avoidance policies is:
void *   (* ctx_create)(void);
void     (* ctx_destroy)(void * ctx);
These calls create and/or destroy a context for congestion control
for a specific flow. Thread-safety of the context is the
responsibility of the flow allocator (operations on the ctx should be
performed under a lock).
ca_wnd_t (* ctx_update_snd)(void * ctx,
                            size_t len);
This is the sender call to update the context, and should be called
for every packet that is sent on the flow. The len parameter is the
packet length, which allows calculating the bandwidth. It returns an
opaque union type that is used for the call that checks/waits whether
the congestion window is open or closed (allowing locks to be released
before waiting).
bool     (* ctx_update_rcv)(void *     ctx,
                            size_t     len,
                            uint8_t    ecn,
                            uint16_t * ece);
This is the call to update the flow congestion context on the receiver
side. It should be called for every received packet. It gets the ECN
value from the packet and its length, and returns the ECE (explicit
congestion experienced) value to be sent to the sender in case of
congestion. The boolean returned signals whether or not a congestion
update needs to be sent.
void     (* ctx_update_ece)(void *   ctx,
                            uint16_t ece);
This is the call for the sending side to update the context when it
receives an ECE update from the receiver.
void     (* wnd_wait)(ca_wnd_t wnd);
This is a (blocking) call that waits for the congestion window to
clear. It should be stateless (to avoid waiting under locks). This may
change later on if passing the context is needed for different
algorithms.
uint8_t  (* calc_ecn)(int    fd,
                      size_t len);
This is the call that intermediate IPCPs (routers) should use to update
the ECN field on passing packets.
The multi-bit ECN policy bases the value for the ECN field on the
depth of the rbuff queue packets will be sent on. I created another
call to grab the queue depth as fccntl write-locks the application. We
can further optimize this to avoid most locking on the rbuff.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
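To make the shape of this API concrete, here is a minimal "nop" policy
along the lines of the one mentioned above. The operation signatures
follow the commit message; the contents of ca_wnd_t and the use of a
dummy allocation for the context are assumptions, not the actual code.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef union {
            uint64_t wait; /* opaque; unused by the nop policy */
    } ca_wnd_t;

    static void * nop_ctx_create(void)
    {
            return malloc(1); /* nothing to track, but return a valid ctx */
    }

    static void nop_ctx_destroy(void * ctx)
    {
            free(ctx);
    }

    static ca_wnd_t nop_ctx_update_snd(void * ctx, size_t len)
    {
            ca_wnd_t wnd = { 0 };
            (void) ctx; (void) len;
            return wnd;             /* window is always open */
    }

    static bool nop_ctx_update_rcv(void * ctx, size_t len,
                                   uint8_t ecn, uint16_t * ece)
    {
            (void) ctx; (void) len; (void) ecn;
            *ece = 0;
            return false;           /* never request a congestion update */
    }

    static void nop_ctx_update_ece(void * ctx, uint16_t ece)
    {
            (void) ctx; (void) ece;
    }

    static void nop_wnd_wait(ca_wnd_t wnd)
    {
            (void) wnd;             /* never blocks */
    }

    static uint8_t nop_calc_ecn(int fd, size_t len)
    {
            (void) fd; (void) len;
            return 0;               /* never mark packets */
    }

A real policy keeps per-flow state in the context returned by
ctx_create() and updates it under the flow allocator's lock.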
The flow stats had quite a lot of duplication.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The condition variable was not initialized correctly and was using the
wrong clock for pthread_cond_timedwait.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
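For reference, a sketch of initializing a condition variable so that
pthread_cond_timedwait() uses the same clock the deadline is built
from; the choice of CLOCK_MONOTONIC here is an assumption about what
the fix entails.

    #include <pthread.h>
    #include <time.h>

    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond;

    int wait_one_second(void)
    {
            pthread_condattr_t attr;
            struct timespec    abstime;
            int                ret;

            /* Make timedwait use the clock we build deadlines from. */
            pthread_condattr_init(&attr);
            pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
            pthread_cond_init(&cond, &attr);
            pthread_condattr_destroy(&attr);

            clock_gettime(CLOCK_MONOTONIC, &abstime);
            abstime.tv_sec += 1;

            pthread_mutex_lock(&mtx);
            ret = pthread_cond_timedwait(&cond, &mtx, &abstime);
            pthread_mutex_unlock(&mtx);

            return ret;
    }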
A flow_set is thread-safe and doesn't need to be protected by a lock.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
Fix assignment instead of comparison operator.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
There were some issues identified by the Clang static analyzer that
are now fixed.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
GCC 10 static analyzer found that the wrong index was used in the fail
path of psched_create, causing double (multiple) frees.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
GCC 10 defaults to -fno-common, so some variables that were defined in
the headers needed to be declared "extern". The GCC 10 static analyzer
can now be invoked using the DebugAnalyzer build option.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
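The -fno-common change boils down to the usual pattern, shown here with
a generic example (not the specific Ouroboros variables): declare the
variable extern in the header and define it in exactly one source file.

    /* log.h -- declaration only; safe to include from many files. */
    extern int log_level;

    /* log.c -- the single definition. */
    int log_level = 0;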
The compiler spotted some variables that weren't really used.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This is more in line with the write() system call and prepares for
partial writes. Partial writes are disabled by default (and not yet
implemented).
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
There were updates under rdlock instead of wrlock, causing data races
and trouble. Also speeds up shutdown a bit.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The initial implementation of the ECDHE key exchange performed the key
exchange after a flow was established. The public keys are now sent
along with the flow allocation messages, so that an encrypted tunnel
can be created within 1 RTT. The flow allocation steps had to be
extended to pass the opaque data ('piggybacking').
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This adds tests for LFA and ECMP to the graph_test routine.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>