Ouroboros Tutorial 01


The output shown in this tutorial is from a debug build.

We will start the IRMd, create a local layer, start a ping server and connect a client to send a few pings. This involves binding (1) the server application to a name and registering (2) that name in the local layer. After that, the client will be able to allocate a flow (3) to that name, over which the server will respond.

We recommend opening multiple terminal windows for this tutorial. In the first window, start the IRMd (as a superuser) in stdout mode. The output shows the process id (pid) of the IRMd, which will be different on your machine.

$ sudo irmd --stdout
==06773== irmd(II): Ouroboros IPC Resource Manager daemon started...

The type of IPCP we will create is a local IPCP. The local IPCP is a kind of loopback interface: it implements all the functions that the O7s API provides, but only for communication between processes on the same (local) machine. The ipcp create CLI command instantiates a new local IPC process, but that process will not yet be part of a layer. We also need to bootstrap this IPCP in a layer, which we will name “local-layer”. As a shortcut, the ipcp bootstrap command automatically creates an IPCP if no IPCP with that name exists, so the ipcp create command is usually optional. In a second terminal, enter the command:

$ irm ipcp bootstrap type local name local-ipcp layer local-layer

The IRMd and ipcpd output in the first terminal reads:

==06773== irmd(II): Created IPCP 6794.
==06794== ipcpd/ipcp(II): Bootstrapping...
==06794== ipcpd/ipcp(II): Finished bootstrapping:  0.
==06773== irmd(II): Bootstrapped IPCP 6794 in layer local-layer.

From the third terminal window, let’s start our oping application in server mode (“oping --help” shows all oping command line parameters):

$ oping --listen
Ouroboros ping server started.

The IRMd will notice that an oping server (in our case with pid 6810) has started, but there are no service names bound to it yet, as shown in the (debug-level) log output:

==06773== irmd(DB): New instance (6810) of oping added.
==06773== irmd(DB): This process accepts flows for:

The server application is not yet reachable by clients. Next, we will bind the server to a name and register that name in the “local-layer”. The name for the server can be chosen at will; let’s take “oping-server”. In the second terminal window, execute:

$ irm bind process 6810 name oping-server
$ irm name register oping-server layer local-layer

The IRMd and IPCPd logs in the first terminal now acknowledge that the name is bound and registered:

==06773== irmd(II): Bound process 6810 to name oping-server.
==06773== irmd(II): Created new name: oping-server.
==06794== ipcpd/ipcp(II): Registering f6c93ff2...
==06794== ipcpd/ipcp(II): Finished registering f6c93ff2 : 0.
==06773== irmd(II): Registered oping-server with IPCP 6794 as f6c93ff2.

Ouroboros registers names not in plaintext but using a (configurable) hashing algorithm. The default hashing algorithm for a local layer is a 256-bit SHA3 hash. The hash in the logs is truncated to its first 4 bytes, shown in hexadecimal notation.

Now that we have bound and registered our server, we can connect from the client. In a terminal window, start an oping client with destination oping-server and it will begin pinging:

$ oping --server-name oping-server --count 5           #short form: oping -n oping-server -c 5
Pinging oping-server with 64 bytes of data (5 packets):

64 bytes from oping-server: seq=0 time=0.633 ms
64 bytes from oping-server: seq=1 time=0.498 ms
64 bytes from oping-server: seq=2 time=0.471 ms
64 bytes from oping-server: seq=3 time=0.505 ms
64 bytes from oping-server: seq=4 time=0.441 ms

--- oping-server ping statistics ---
5 packets transmitted, 5 received, 0 out-of-order, 0% packet loss, time: 5002.413 ms
rtt min/avg/max/mdev = 0.441/0.510/0.633/0.073 ms

The server will acknowledge that it has a new flow connected on flow descriptor 64, which will time out a few seconds after the oping client stops sending:

New flow 64.
Received 64 bytes on fd 64.
Received 64 bytes on fd 64.
Received 64 bytes on fd 64.
Received 64 bytes on fd 64.
Received 64 bytes on fd 64.
Flow 64 timed out.
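The server output above corresponds to a simple accept-and-read loop. As a rough sketch of what an oping-style server does with the Ouroboros dev API (illustrative echo-server code, not oping’s actual source; it assumes a running IRMd, a bound and registered name, and linking against the Ouroboros development library):

```c
#include <ouroboros/dev.h>

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
        char buf[64];

        for (;;) {
                /* Block until a client allocates a flow to our name. */
                int     fd = flow_accept(NULL, NULL);
                ssize_t len;

                if (fd < 0)
                        break;

                printf("New flow %d.\n", fd);

                while ((len = flow_read(fd, buf, sizeof(buf))) > 0) {
                        printf("Received %zd bytes on fd %d.\n", len, fd);
                        flow_write(fd, buf, len); /* echo the payload back */
                }

                flow_dealloc(fd);
        }

        return 0;
}
```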

The IRMd and IPCP logs provide some additional output detailing the flow allocation process:

==06773== irmd(II): Allocating flow for 6836 to oping-server.
==06794== ipcpd/ipcp(II): Allocating flow 0 to f6c93ff2.
==06794== ipcpd-local(DB): Allocating flow to f6c93ff2 on fd 64.
==06773== irmd(DB): Flow req arrived from IPCP 6794 for f6c93ff2.
==06773== irmd(II): Flow request arrived for oping-server.
==06794== ipcpd-local(II): Pending local allocation request on fd 64.
==06794== ipcpd/ipcp(II): Finished allocating flow 0 to f6c93ff2: 0.
==06794== ipcpd/ipcp(II): Responding 0 to alloc on flow_id 1.
==06773== irmd(II): Flow on flow_id 0 allocated.
==06794== ipcpd-local(II): Flow allocation completed, fds (64, 65).
==06794== ipcpd/ipcp(II): Finished responding to allocation request: 0
==06773== irmd(II): Flow on flow_id 1 allocated.
==06773== irmd(DB): New instance (6810) of oping added.
==06773== irmd(DB): This process accepts flows for:
==06773== irmd(DB):         oping-server

First, IPCPd 6794 (our local IPCPd) shows that it will allocate a flow towards a destination hash “f6c93ff2” (truncated), and that this flow has flow descriptor (fd) 64. The IRMd logs that IPCPd 6794 (the same one that allocates the flow: this is a loopback function) requests a flow towards any process that is listening for “f6c93ff2”, and resolves it to “oping-server”, since that is the process we bound to that name. At this point, the local IPCPd has a pending flow on the client side on fd 64. Since this is the first flow in the system, it gets flow_id 0. The server accepts the flow, and the other end of the flow gets flow_id 1. The local IPCPd sees that the flow allocation is completed; internally it sees the endpoints as flow descriptors 64 and 65, which map to flow_id 0 and flow_id 1 respectively. The IPCP cannot directly access flow_ids; they are assigned and managed by the IRMd. After accepting the flow, the oping server enters flow_accept() again; the IRMd notices the instance and reports that it accepts flows for “oping-server”.
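On the client side, the sequence the logs describe (allocate, exchange packets, deallocate) maps onto a handful of dev API calls. A minimal sketch, again illustrative rather than oping’s actual source, assuming a running IRMd with “oping-server” bound and registered as above:

```c
#include <ouroboros/dev.h>

#include <stdio.h>
#include <string.h>

int main(void)
{
        char buf[64];
        int  fd;

        memset(buf, 0, sizeof(buf));

        /* Allocate a flow to the name; the IRMd resolves the hash
         * of "oping-server" to the bound server process. */
        fd = flow_alloc("oping-server", NULL, NULL);
        if (fd < 0) {
                fprintf(stderr, "Flow allocation failed.\n");
                return -1;
        }

        flow_write(fd, buf, sizeof(buf)); /* send one 64-byte payload */
        flow_read(fd, buf, sizeof(buf));  /* wait for the echo        */

        flow_dealloc(fd);                 /* triggers the deallocation
                                           * logs discussed below     */
        return 0;
}
```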

After all packets have been sent, the oping client (process 6836) deallocates the flow and exits. This process can also be seen in the IRMd/IPCP log output:

==06773== irmd(DB): Deallocating flow 0 for process 6836.
==06773== irmd(DB): Partial deallocation of flow_id 0 by process 6836.
==06794== ipcpd/ipcp(II): Deallocating flow 0.
==06773== irmd(DB): Deallocating flow 0 for process 6794.
==06773== irmd(II): Completed deallocation of flow_id 0 by process 6794.
==06794== ipcpd-local(II): Flow with fd 64 deallocated.
==06794== ipcpd/ipcp(II): Finished deallocating flow 0: 0.
==06773== irmd(DB): Dead process removed: 6836.
==06773== irmd(DB): Deallocating flow 1 for process 6810.
==06773== irmd(DB): Partial deallocation of flow_id 1 by process 6810.
==06794== ipcpd/ipcp(II): Deallocating flow 1.
==06773== irmd(DB): Deallocating flow 1 for process 6794.
==06773== irmd(II): Completed deallocation of flow_id 1 by process 6794.
==06794== ipcpd-local(II): Flow with fd 65 deallocated.
==06794== ipcpd/ipcp(II): Finished deallocating flow 1: 0.

Deallocation is a two-phase process on each side of the flow: it is initiated by the application process and finalized by the IPCP. Lines 1-3 show the deallocation initiated by the oping client; lines 4-7 show the IPCP finalizing the client-side flow endpoint. Line 8 shows that the client process (6836) has exited. After a few seconds, the flow also times out at the server side; the IRMd logs then show the deallocation initiated by the oping server (lines 9-11) and finalized by the IPCP (lines 12-15).

This concludes this first short tutorial. All running processes can be terminated by issuing a Ctrl-C command in their respective terminals.