path: root/src/ipcpd/unicast/pol/ca-mb-ecn.c
Commit message | Author | Age | Files | Lines
* ipcpd: Fix congestion window scaling | Dimitri Staessens | 2021-06-21 | 1 | -18/+41
  This fixes scaling of the congestion window, which was a buggy mess.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
* build: Update email addresses | Dimitri Staessens | 2021-01-03 | 1 | -2/+2
  The UGent email addresses are shut down; updated to Ouroboros mail addresses.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
* build: Update copyright to 2021 | Dimitri Staessens | 2021-01-03 | 1 | -1/+1
  Happy New Year, Ouroboros!

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
* ipcpd: Pass qoscube to ECN marking function | Dimitri Staessens | 2020-12-20 | 1 | -0/+2
  The ECN marking function should be able to use the packet QoS to allow
  prioritizing traffic under congestion. Not yet implemented in MB-ECN.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
* ipcpd: Remove unused variable in MB-ECN policy | Dimitri Staessens | 2020-12-12 | 1 | -4/+0
  The t_sent variable is a remnant from the first version and isn't needed
  anymore.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
* ipcpd: Pass previous ECN value in congestion API | Dimitri Staessens | 2020-12-12 | 1 | -3/+6
  The previous value of the ECN field should be passed to the congestion
  notification function.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
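A plausible reading of why the previous ECN value is passed along: an intermediate hop should never weaken a mark set earlier on the path. The sketch below illustrates that idea only; the helper name mark_ecn and its signature are assumptions, not code from ca-mb-ecn.c.

    #include <stdint.h>

    /* Sketch only (assumed helper): combine the ECN value already carried
     * by the packet with the value this hop would set, keeping the stronger
     * of the two so earlier marks are never weakened. */
    static uint8_t mark_ecn(uint8_t prev_ecn, uint8_t local_ecn)
    {
            return prev_ecn > local_ecn ? prev_ecn : local_ecn;
    }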
* ipcpd: Fix slow start in multi-bit (F)ECN policy | Dimitri Staessens | 2020-12-07 | 1 | -8/+11
  There is a check to avoid rapidly doubling the window to astronomical sizes
  when no congestion is experienced for long periods of time, but the if-else
  logic was botched and the window still grew to astronomical sizes (albeit
  linearly instead of exponentially). I also lowered the ECN threshold a bit.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
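A minimal sketch of the kind of branching such a check implies, assuming a byte-credit window snd_cr and an illustrative cap SS_LIMIT (both names, and the cap value, are assumptions rather than the actual fix): keep doubling only while below the cap, otherwise stop growing.

    #include <stddef.h>

    #define SS_LIMIT (1UL << 24)  /* illustrative cap, not the real threshold */

    /* Sketch only: double the byte credit while it is below the cap and hold
     * it once the cap is reached, so long uncongested periods cannot grow
     * the window without bound. */
    static void window_grow(size_t * snd_cr)
    {
            if (*snd_cr < SS_LIMIT)
                    *snd_cr <<= 1;  /* slow start: exponential growth */
            /* else: the window is already large enough, leave it unchanged */
    }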
* ipcpd: Fix off-by-one in Multi-bit (F)ECN policy | Dimitri Staessens | 2020-12-05 | 1 | -2/+1
  Noticed an off-by-one in the packet counter: it was incremented before the
  flow update, while the byte counter was incremented after it.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
* ipcpd: Add RIB statistics for flow allocator | Dimitri Staessens | 2020-12-05 | 1 | -1/+40
  The RIB will now show some stats for the flow allocator, including congestion
  avoidance statistics. This is needed before decoupling the data transfer
  component and the flow allocator, as some stats that currently show under DT
  will move to the FA.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
* ipcpd: Simplify multi-bit (F)ECN policy | Dimitri Staessens | 2020-12-02 | 1 | -54/+69
  The mb-ecn policy had a couple of divisions in the math, which I wanted to
  avoid. Now it measures the number of bytes sent in a window and updates the
  next window with AIMD logic. If the number of bytes in the window is reached,
  the call blocks. To avoid long packet bursts, the window size continually
  scales to contain between CA_MINPS (8) and CA_MAXPS (64) packets.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
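A minimal sketch of the byte-counted AIMD update described above, under the assumption of a per-flow context with a byte credit, a byte counter and a running average packet length; only CA_MINPS and CA_MAXPS come from the commit message, all other names and fields are illustrative.

    #include <stddef.h>

    #define CA_MINPS 8   /* window must hold at least 8 packets (from the text) */
    #define CA_MAXPS 64  /* and at most 64 packets, to avoid long bursts        */

    struct ctx {                  /* illustrative fields, not the real context */
            size_t snd_cr;        /* bytes that may be sent in the next window */
            size_t snd_cnt;       /* bytes sent in the window just finished    */
            size_t avg_len;       /* running average packet length             */
            int    congested;     /* set when ECE feedback arrived             */
    };

    /* Sketch only: AIMD on the byte credit at the end of each window, then
     * rescale the credit so it covers between CA_MINPS and CA_MAXPS packets
     * of average size (no divisions beyond a halving shift). */
    static void window_update(struct ctx * c)
    {
            if (c->congested)
                    c->snd_cr >>= 1;           /* multiplicative decrease */
            else
                    c->snd_cr += c->avg_len;   /* additive increase       */

            if (c->snd_cr < CA_MINPS * c->avg_len)
                    c->snd_cr = CA_MINPS * c->avg_len;
            if (c->snd_cr > CA_MAXPS * c->avg_len)
                    c->snd_cr = CA_MAXPS * c->avg_len;

            c->snd_cnt = 0;                    /* start counting the next window */
            c->congested = 0;
    }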
* ipcpd: Add congestion avoidance policies | Dimitri Staessens | 2020-12-02 | 1 | -0/+216
  This adds congestion avoidance policies to the unicast IPCP. The default
  policy is a multi-bit explicit congestion avoidance algorithm based on
  data-center TCP congestion avoidance (DCTCP), relaying information about the
  maximum queue depth that packets experienced to the receiver. There is also a
  "nop" policy to disable congestion avoidance for testing and benchmarking
  purposes.

  The (initial) API for congestion avoidance policies is:

    void *   (* ctx_create)(void);
    void     (* ctx_destroy)(void * ctx);

  These calls create and/or destroy a context for congestion control for a
  specific flow. Thread-safety of the context is the responsibility of the flow
  allocator (operations on the ctx should be performed under a lock).

    ca_wnd_t (* ctx_update_snd)(void * ctx, size_t len);

  This is the sender call to update the context and should be called for every
  packet that is sent on the flow. The len parameter is the packet length,
  which allows calculating the bandwidth. It returns an opaque union type that
  is used for the call to check/wait whether the congestion window is open or
  closed (allowing locks to be released before waiting).

    bool (* ctx_update_rcv)(void * ctx, size_t len, uint8_t ecn, uint16_t * ece);

  This is the call to update the flow congestion context on the receiver side.
  It should be called for every received packet. It gets the ecn value from the
  packet and its length, and returns the ECE (explicit congestion experienced)
  value to be sent to the sender in case of congestion. The boolean return
  value signals whether or not a congestion update needs to be sent.

    void (* ctx_update_ece)(void * ctx, uint16_t ece);

  This is the call for the sending side to update the context when it receives
  an ECE update from the receiver.

    void (* wnd_wait)(ca_wnd_t wnd);

  This is a (blocking) call that waits for the congestion window to clear. It
  should be stateless (to avoid waiting under locks). This may change later on
  if passing the context is needed for different algorithms.

    uint8_t (* calc_ecn)(int fd, size_t len);

  This is the call that intermediate IPCPs (routers) should use to update the
  ECN field on passing packets. The multi-bit ECN policy bases the value for
  the ECN field on the depth of the rbuff queue the packets will be sent on. I
  created another call to grab the queue depth, as fccntl is write-locking the
  application. We can further optimize this to avoid most locking on the rbuff.

  Signed-off-by: Dimitri Staessens <dimitri@ouroboros.rocks>
  Signed-off-by: Sander Vrijders <sander@ouroboros.rocks>
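Taken together, the calls above form a per-policy operations table. The sketch below shows one way such a table and a sender-side path that drives it could look; only the function-pointer signatures are taken from the commit message, while the names pol_ca_ops and fa_send_packet, the ca_wnd_t layout and the locking helper are assumptions for illustration.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef union {          /* opaque window token; the real layout is policy-defined */
            uint64_t v;
    } ca_wnd_t;

    struct pol_ca_ops {      /* assumed name for the per-policy ops table */
            void *   (* ctx_create)(void);
            void     (* ctx_destroy)(void * ctx);
            ca_wnd_t (* ctx_update_snd)(void * ctx, size_t len);
            bool     (* ctx_update_rcv)(void * ctx, size_t len,
                                        uint8_t ecn, uint16_t * ece);
            void     (* ctx_update_ece)(void * ctx, uint16_t ece);
            void     (* wnd_wait)(ca_wnd_t wnd);
            uint8_t  (* calc_ecn)(int fd, size_t len);
    };

    /* Sketch of a sender path: update the context under the flow lock, then
     * wait for the window outside the lock, as the description above requires. */
    static void fa_send_packet(const struct pol_ca_ops * ca, void * ctx,
                               pthread_mutex_t * lock, size_t len)
    {
            ca_wnd_t wnd;

            pthread_mutex_lock(lock);
            wnd = ca->ctx_update_snd(ctx, len);  /* per-packet sender update   */
            pthread_mutex_unlock(lock);

            ca->wnd_wait(wnd);                   /* blocks while window closed */

            /* ... hand the packet to the data transfer component ... */
    }

On the receive side, the flow allocator would analogously call ctx_update_rcv for every packet and, whenever it returns true, send the returned ECE value back to the sender, which feeds it into ctx_update_ece.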