---
title: "Flow statistics"
author: "Dimitri Staessens"
date:  2019-08-31
#type:  page
draft: false
weight: 30
description: >
   Monitoring your flows.
---

For this tutorial, you should have a local layer, a normal layer and a
ping server registered in the normal layer. You will need to have the
FUSE libraries installed and Ouroboros compiled with FUSE support. We
will show you how to get some statistics from the network layer, which
the IPCPs export under /tmp/ouroboros (this mountpoint can be set at
compile time):

```bash
$ tree /tmp/ouroboros
/tmp/ouroboros/
|-- ipcpd-normal.13569
|   |-- dt
|   |   |-- 0
|   |   |-- 1
|   |   `-- 65
|   `-- lsdb
|       |-- 416743497.465922905
|       |-- 465922905.416743497
|       |-- dt.465922905
|       `-- mgmt.465922905
`-- ipcpd-normal.4363
    |-- dt
    |   |-- 0
    |   |-- 1
    |   `-- 65
    `-- lsdb
        |-- 416743497.465922905
        |-- 465922905.416743497
        |-- dt.416743497
        `-- mgmt.416743497

6 directories, 14 files
```
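
Each of these entries is a regular file that can simply be read. If you
want to dump the statistics for all flows of one IPCP in one go, a small
shell loop will do (just a sketch; the PID suffix 13569 is taken from
the listing above and will differ on your machine):

```bash
# Dump the data transfer statistics of every flow of one normal IPCP.
# The directory name ipcpd-normal.13569 comes from the tree output
# above; substitute the PID of your own IPCP.
for f in /tmp/ouroboros/ipcpd-normal.13569/dt/*; do
    echo "== fd $(basename "$f") =="
    cat "$f"
done
```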

There are two filesystems, one for each normal IPCP. Each currently
shows information for two components: data transfer and the link-state
database. The data transfer component lists flows on known flow
descriptors. The flow allocator component will usually be on fd 0 and
the directory (DHT) on fd 1. There is a single (N-1) data transfer flow
on fd 65 that the IPCPs can use to send data (these fd numbers will
usually differ between the two IPCPs). The routing component sees that
data transfer flow as two unidirectional links. It has a management
flow and a data transfer flow to its neighbor. Let's have a look at the
data transfer flow in the network:

```bash
$ cat /tmp/ouroboros/ipcpd-normal.13569/dt/65
Flow established at:       2018-03-07 18:47:43
Endpoint address:                    465922905
Queued packets (rx):                         0
Queued packets (tx):                         0

Qos cube   0:
 sent (packets):                             4
 sent (bytes):                             268
 rcvd (packets):                             3
 rcvd (bytes):                             298
 local sent (packets):                       4
 local sent (bytes):                       268
 local rcvd (packets):                       3
 local rcvd (bytes):                       298
 dropped ttl (packets):                      0
 dropped ttl (bytes):                        0
 failed writes (packets):                    0
 failed writes (bytes):                      0
 failed nhop (packets):                      0
 failed nhop (bytes):                        0

<no traffic on other qos cubes>
```

The above output shows the statistics for the data transfer component of
the IPCP that enrolled into the layer. It shows the time the flow was
established, the endpoint address and the number of packets that are in
the incoming and outgoing queues. Then it lists packet statistics per
QoS cube. It sent 4 packets, and received 3 packets. All the packets
came from local sources (internal components of the IPCP) and were
delivered to local destinations. Let's have a look where they went.
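
A quick way to see at a glance which flow descriptors carry traffic is
to grep the packet counters across all of the IPCP's flows (a small
sketch, using the same example PID as above):

```bash
# Show which flows carry traffic: print the sent/rcvd packet counters
# of every flow of this IPCP. Flows without traffic print nothing.
# The PID 13569 is the one used throughout this tutorial; the pattern
# follows the indented layout of the stat files shown above.
grep -HE '^ *(sent|rcvd) \(packets\)' /tmp/ouroboros/ipcpd-normal.13569/dt/*
```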

```bash
$ cat /tmp/ouroboros/ipcpd-normal.13569/dt/0
Flow established at:       2018-03-07 18:47:43
Endpoint address:               flow-allocator
Queued packets (rx):                         0
Queued packets (tx):                         0

<no packets on this flow>
```

There is no traffic on fd 0, which is the flow allocator component. It
will only be used when higher-layer applications use this normal layer.
Let's have a look at fd 1.

```bash
$ cat /tmp/ouroboros/ipcpd-normal.13569/dt/1
Flow established at:       2018-03-07 18:47:43
Endpoint address:                          dht
Queued packets (rx):                         0
Queued packets (tx):                         0

Qos cube   0:
 sent (packets):                             3
 sent (bytes):                             298
 rcvd (packets):                             0
 rcvd (bytes):                               0
 local sent (packets):                       0
 local sent (bytes):                         0
 local rcvd (packets):                       6
 local rcvd (bytes):                       312
 dropped ttl (packets):                      0
 dropped ttl (bytes):                        0
 failed writes (packets):                    0
 failed writes (bytes):                      0
 failed nhop (packets):                      2
 failed nhop (bytes):                       44

<no traffic on other qos cubes>
```

The traffic for the directory (DHT) is on fd 1. Take note that this is
from the perspective of the data transfer component: it sent 3 packets
to the DHT, which are the 3 packets it received on the data transfer
flow. The data transfer component received 6 packets from the DHT, but
only sent 4 on fd 65; the other 2 packets failed because there was no
next hop (nhop), as the forwarding table was still being updated from
the routing table. Let's send some traffic to the oping server.

```bash
$ oping -n oping_server -i 0
Pinging oping_server with 64 bytes of data:

64 bytes from oping_server: seq=0 time=0.547 ms
...
64 bytes from oping_server: seq=999 time=0.184 ms

--- oping_server ping statistics ---
1000 SDUs transmitted, 1000 received, 0% packet loss, time: 106.538 ms
rtt min/avg/max/mdev = 0.151/0.299/2.269/0.230 ms
```

This sent 1000 packets to the server. Let's have a look at the flow
allocator:

```bash
$ cat /tmp/ouroboros/ipcpd-normal.13569/dt/0
Flow established at:       2018-03-07 18:47:43
Endpoint address:               flow-allocator
Queued packets (rx):                         0
Queued packets (tx):                         0

Qos cube   0:
 sent (packets):                             1
 sent (bytes):                              59
 rcvd (packets):                             0
 rcvd (bytes):                               0
 local sent (packets):                       0
 local sent (bytes):                         0
 local rcvd (packets):                       1
 local rcvd (bytes):                        51
 dropped ttl (packets):                      0
 dropped ttl (bytes):                        0
 failed writes (packets):                    0
 failed writes (bytes):                      0
 failed nhop (packets):                      0
 failed nhop (bytes):                        0

<no traffic on other qos cubes>
```

The flow allocator has sent and received one message: the request and
the response for the flow allocation between the oping client and
server. The data transfer flow will also have additional traffic:

```bash
$ cat /tmp/ouroboros/ipcpd-normal.13569/dt/65
Flow established at:       2018-03-07 18:47:43
Endpoint address:                    465922905
Queued packets (rx):                         0
Queued packets (tx):                         0

Qos cube   0:
 sent (packets):                          1013
 sent (bytes):                           85171
 rcvd (packets):                          1014
 rcvd (bytes):                           85373
 local sent (packets):                      13
 local sent (bytes):                      1171
 local rcvd (packets):                      14
 local rcvd (bytes):                      1373
 dropped ttl (packets):                      0
 dropped ttl (bytes):                        0
 failed writes (packets):                    0
 failed writes (bytes):                      0
 failed nhop (packets):                      0
 failed nhop (bytes):                        0
```

This shows the traffic from the oping application. The additional
traffic (oping sent 1000 packets, the flow allocator 1, and the DHT
previously sent 3) is extra DHT traffic, since the DHT periodically
updates itself. Also note that the traffic reported on the link
includes the FRCT and data transfer headers, which in the default
configuration add 20 bytes per packet.
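
If you want to keep an eye on these counters while experimenting
further, you can simply re-read the exported files periodically, for
instance with watch (a sketch; adjust the PID and fd to match your own
setup):

```bash
# Refresh the statistics of the (N-1) data transfer flow every second.
# ipcpd-normal.13569 and fd 65 are the values used in this tutorial
# and will differ on your system.
watch -n 1 cat /tmp/ouroboros/ipcpd-normal.13569/dt/65
```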

This concludes tutorial 3.