The event handler was registered before the scheduler was started,
which could in theory cause fds to be added to an uninitialized
scheduler. The event handler is now registered after the scheduler is
created as part of dt_start, and is likewise unregistered as part of
dt_stop.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
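A minimal sketch of the intended ordering, with hypothetical names and
signatures (sched_create/sched_destroy, handler_reg/handler_unreg)
standing in for the actual scheduler and event handler calls:

    /* Sketch only: the names below are illustrative, not the actual code. */
    struct sched;

    struct sched * sched_create(void);
    void           sched_destroy(struct sched * s);
    int            handler_reg(struct sched * s);
    void           handler_unreg(struct sched * s);

    static struct sched * sched;

    int dt_start(void)
    {
            sched = sched_create();       /* scheduler is created first,   */
            if (sched == NULL)
                    return -1;

            if (handler_reg(sched) < 0) { /* so the handler can never add  */
                    sched_destroy(sched); /* fds to an uninitialized       */
                    return -1;            /* scheduler.                    */
            }

            return 0;
    }

    void dt_stop(void)
    {
            handler_unreg(sched);         /* mirror image on shutdown:     */
            sched_destroy(sched);         /* unregister, then tear down.   */
    }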
This revises the logging in the IPCPs to be more consistent and to
reduce duplicate messages in nested functions.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The ipcp configuration struct now has internal structures for the
different IPCPs and for IPCP components of the unicast IPCP.
Split the very long IPCP main loop into individual handler functions.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
2022 was a rather slow year...
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
Growing pains.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
Reading/writing to (N + 1) flows from the IPCP used a raw QoS flow to
bypass some functions in the ipcp_flow_read call, but this call was
broken for keepalive packets. Fixing the ipcp_flow_read call for
(N - 1) flows causes the IPCPs to drop 0-byte keepalive packets coming from
(N + 1) client flows.
From now on, there is a dedicated call for (N + 1) reads/writes from
the IPCPs that is more efficient and cleaner. The (N + 1) flow internal
QoS now also defaults to a qos_np1 qosspec, instead of tampering
with the qosspec requested by the (N + 1) client.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
Reading packets from the rbuff and checking their validity (non-zero
size, passing the CRC check, passing decryption) is now extracted into
a function.
This also adds a function to get the length of an sdu_du_buff instead
of subtracting the tail and head pointers.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
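A sketch of what the extracted helpers might look like; the type and
function names (sdb, sdb_len, validate_sdb) and the chk_crc/decrypt
helpers are illustrative assumptions, not the actual code:

    #include <stddef.h>
    #include <stdint.h>

    struct sdb {                     /* stand-in for the du_buff type     */
            uint8_t * head;
            uint8_t * tail;
    };

    int chk_crc(struct sdb * sdb);   /* hypothetical integrity check      */
    int decrypt(struct sdb * sdb);   /* hypothetical decryption step      */

    /* Length helper instead of open-coded tail - head subtraction. */
    size_t sdb_len(const struct sdb * sdb)
    {
            return (size_t) (sdb->tail - sdb->head);
    }

    /* Validity checks pulled out of the read path. */
    int validate_sdb(struct sdb * sdb)
    {
            if (sdb_len(sdb) == 0)
                    return -1;       /* zero-sized packet                 */

            if (chk_crc(sdb) < 0)
                    return -1;       /* failed CRC check                  */

            if (decrypt(sdb) < 0)
                    return -1;       /* failed decryption                 */

            return 0;
    }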
The free of the buffer in the failure path of the readdir RIB
functions was taking the wrong pointer in a couple of places. The FRCT
RIB readdir was missing error handling for malloc and strdup.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The read functions for the RIB now receive the full path, instead of
only the entry name. For IPCPs, we organized the RIB in a
    /<ipcp>/<component>/entries
structure with a directory per component, so the full path is not
needed at this point. For process flow information, it's a lot more
convenient to organize it as
    /<pid>/<fd>/stat
We can then register/unregister the flow descriptor when the FRCT
instance is created, and for getting the stats we know the flow
descriptor from the fuse file path. If we created a file per flow
instead of a directory per flow, something like
    /<pid>/flows/<fd>
we'd need additional bookkeeping to list the contents of that
directory (we'd have to track all flows with an active FRCT instance),
information that fuse already has because it tracks the directories.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
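As a sketch of the layout described above, a per-flow stat path can be
derived directly from the pid and flow descriptor; the helper name is
an illustrative assumption:

    #include <stdio.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Build the /<pid>/<fd>/stat path for a flow's FRCT statistics. */
    int frct_rib_path(char * buf, size_t len, int fd)
    {
            return snprintf(buf, len, "/%d/%d/stat", (int) getpid(), fd);
    }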
These RIBs were not properly unregistered on shutdown.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The RIB API had a struct stat in the getattr() function, which made
all components that exposed variables via the RIB dependent on
<sys/stat.h>. The rib now has its own struct rib_attr to set
attributes such as size and last modified time.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
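The commit message only names the attributes; a minimal sketch of what
such a struct could hold, with field names as assumptions:

    #include <stddef.h>
    #include <time.h>

    /* RIB-private attributes, replacing struct stat in getattr(). */
    struct rib_attr {
            size_t size;    /* size of the entry's contents        */
            time_t mtime;   /* last modification time of the entry */
    };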
Compilation failed on FreeBSD 14 with fuse enabled because of some
missing definitions. __XSI_VISIBLE must be set before including
<ouroboros/rib.h> for some definitions in <sys/stat.h>. FreeBSD
doesn't know the MSG_CONFIRM flag to sendto() or
CLOCK_REALTIME_COARSE, which are Linux-specific.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
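A sketch of the kind of guards this implies; the exact values and
placement in the sources may differ, and the fallback defines for the
Linux-specific symbols are only one possible approach:

    #ifdef __FreeBSD__
    #define __XSI_VISIBLE 500        /* before any include; value may differ */
    #endif

    #include <ouroboros/rib.h>

    #include <sys/socket.h>
    #include <time.h>

    #ifndef MSG_CONFIRM              /* Linux-only sendto() flag             */
    #define MSG_CONFIRM 0
    #endif

    #ifndef CLOCK_REALTIME_COARSE    /* Linux-only clock id                  */
    #define CLOCK_REALTIME_COARSE CLOCK_REALTIME
    #endif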
The connection manager and enrolment components of the unicast and
broadcast IPCP have a lot in common, as conjectured in the paper. The
initial implementation of the broadcast IPCP just duplicated the
code. This moves the shared functionality to common ground.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The UGent email addresses have been shut down; updated to the
Ouroboros mail addresses.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
Happy New Year, Ouroboros!
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The ECN marking function should be able to use the packet QoS to allow
prioritizing traffic under congestion. Not yet implemented in MB-ECN.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The previous value of the ECN field should be passed to the congestion
notification function.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
The EIDs are now 64-bit, which makes them a tad harder to guess (think
of port scanning). The implementation randomizes only the most
significant 32 bits so that EIDs can be quickly mapped to N+1 flows.
While this is equivalent to a random cookie as a check on flows, the
rationale is that valid endpoint IDs should be pretty hard to guess
(and thus be at least 64-bit random). Ideally one would use
content-addressable memory for this kind of mapping.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
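A sketch of the split described above: random most-significant 32 bits
combined with a low half that maps straight back to the flow. The
names and the RNG are illustrative assumptions:

    #include <stdint.h>
    #include <stdlib.h>

    /* High 32 bits random, low 32 bits an index for fast flow lookup. */
    uint64_t eid_create(uint32_t idx)
    {
            uint32_t rnd = (uint32_t) random();  /* stand-in RNG */

            return ((uint64_t) rnd << 32) | idx;
    }

    uint32_t eid_to_idx(uint64_t eid)
    {
            return (uint32_t) (eid & 0xFFFFFFFFU);
    }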
The DT will now post all packets for N+1 flows through the flow
allocator component. This means that N+1 flows can be monitored
through the flow allocator stats, and N-1 flows through the DT stats.
The DT component still keeps stats for the local components (FA and
DHT), but this can be removed once the DHT has its own RIB output.
The flow allocator shows statistics for:
    Sent packets:     total packets presented for sending on this
                      specific flow
    Send failed:      packets that could not be sent
    Received packets: total packets presented by the DT component on
                      this specific flow
    Received failed:  packets that could not be delivered
These stats are presented as both packet counts and byte counts. To
know how many were successful, the values for failed need to be
subtracted from the values for total.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
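Since only totals and failures are kept, successful counts are derived
by subtraction; a small sketch with assumed struct and field names:

    #include <stdint.h>

    struct fa_flow_stats {              /* illustrative field names           */
            uint64_t snd_pkt;           /* sent packets (total presented)     */
            uint64_t snd_fail;          /* send failures                      */
            uint64_t rcv_pkt;           /* received packets (total presented) */
            uint64_t rcv_fail;          /* delivery failures                  */
    };

    uint64_t pkts_sent_ok(const struct fa_flow_stats * s)
    {
            return s->snd_pkt - s->snd_fail;   /* successful = total - failed */
    }

    uint64_t pkts_rcvd_ok(const struct fa_flow_stats * s)
    {
            return s->rcv_pkt - s->rcv_fail;
    }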
The RIB will now show some stats for the flow allocator, including
congestion avoidance statistics. This is needed before decoupling the
data transfer component and the flow allocator, as some stats that
currently show up under DT will move to the FA.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This adds congestion avoidance policies to the unicast IPCP. The
default policy is a multi-bit explicit congestion avoidance algorithm
based on data-center TCP congestion avoidance (DCTCP), which relays
information about the maximum queue depth that packets experienced to
the receiver. There's also a "nop" policy to disable congestion
avoidance for testing and benchmarking purposes.
The (initial) API for congestion avoidance policies is:
    void *   (* ctx_create)(void);
    void     (* ctx_destroy)(void * ctx);
These calls create and/or destroy a context for congestion control
for a specific flow. Thread-safety of the context is the
responsibility of the flow allocator (operations on the ctx should be
performed under a lock).
    ca_wnd_t (* ctx_update_snd)(void * ctx, size_t len);
This is the sender call to update the context, and should be called
for every packet that is sent on the flow. The len parameter is the
packet length, which allows calculating the bandwidth. It returns an
opaque union type that is used in the call that checks/waits whether
the congestion window is open or closed (allowing locks to be released
before waiting).
    bool     (* ctx_update_rcv)(void * ctx, size_t len,
                                uint8_t ecn, uint16_t * ece);
This is the call to update the flow congestion context on the receiver
side. It should be called for every received packet. It gets the ECN
value from the packet and its length, and returns the ECE (explicit
congestion experienced) value to be sent to the sender in case of
congestion. The boolean returned signals whether or not a congestion
update needs to be sent.
    void     (* ctx_update_ece)(void * ctx, uint16_t ece);
This is the call for the sending side to update the context when it
receives an ECE update from the receiver.
    void     (* wnd_wait)(ca_wnd_t wnd);
This is a (blocking) call that waits for the congestion window to
clear. It should be stateless (to avoid waiting under locks). This may
change later on if passing the context is needed for different
algorithms.
    uint8_t  (* calc_ecn)(int fd, size_t len);
This is the call that intermediate IPCPs (routers) should use to
update the ECN field on passing packets.
The multi-bit ECN policy bases the value for the ECN field on the
depth of the rbuff queue that packets will be sent on. I created
another call to grab the queue depth, since fccntl write-locks the
application. We can further optimize this to avoid most locking on the
rbuff.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
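Taken together, the calls above suggest a per-policy ops table; a
sketch of how they might be grouped, where the struct and typedef
names are assumptions but the signatures are the ones listed in this
message:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef union {               /* opaque window state, policy-defined  */
            uint64_t v;
    } ca_wnd_t;

    struct ca_ops {
            void *   (* ctx_create)(void);
            void     (* ctx_destroy)(void * ctx);

            ca_wnd_t (* ctx_update_snd)(void * ctx, size_t len);
            bool     (* ctx_update_rcv)(void * ctx, size_t len,
                                        uint8_t ecn, uint16_t * ece);
            void     (* ctx_update_ece)(void * ctx, uint16_t ece);

            void     (* wnd_wait)(ca_wnd_t wnd);

            uint8_t  (* calc_ecn)(int fd, size_t len);
    };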
The flow stats had quite a lot of duplication.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
Otherwise the compiler will complain that the comparison of an
unsigned enum expression with < 0 is always false.
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
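A minimal illustration of the warning in question (not the actual
code): when the enum's underlying type is unsigned, a < 0 test is
vacuous and the compiler flags it.

    enum state { STATE_A, STATE_B };

    int bad_check(enum state s)
    {
            return s < 0;          /* always false if the enum is unsigned */
    }

    int good_check(int s)          /* check the raw signed value instead   */
    {
            return s >= 0 && s <= STATE_B;
    }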
The Packet Forwarding Function (PFF) was user-configurable using the
irm tool. However, this isn't really wanted, since the PFF is dictated
by the routing algorithm. This moves the responsibility for selecting
the correct PFF from the network admin to the unicast IPCP
implementation. Each routing policy now has to specify which PFF it
will use.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
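A sketch of the idea that each routing policy now dictates its PFF;
the enum values and the mapping are illustrative assumptions:

    enum pol_routing { ROUTING_LINK_STATE, ROUTING_LINK_STATE_LFA };
    enum pol_pff     { PFF_SIMPLE, PFF_ALTERNATE };

    /* The routing policy, not the admin, selects the forwarding function. */
    enum pol_pff routing_to_pff(enum pol_routing r)
    {
            switch (r) {
            case ROUTING_LINK_STATE_LFA:
                    return PFF_ALTERNATE;
            case ROUTING_LINK_STATE:
            default:
                    return PFF_SIMPLE;
            }
    }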
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
This completes the renaming of the normal IPCP to the unicast IPCP in
the sources, to get everything consistent with the documentation.
Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>