Ouroboros Tutorial 01
We will start the IRMd, create a local IPCP, start a ping server and connect a client. This involves binding (1) the server application to a name and registering (2) that name in the local layer. After that, the client will be able to allocate a flow (3) to that name, to which the server will respond.
We recommend opening 3 terminal windows for this tutorial. In the first window, start the IRMd (as a superuser) in stdout mode. The output shows the process id (pid) of the IRMd, which will be different on your machine.
<syntaxhighlight>
$ sudo irmd --stdout
==05813== irmd(II): Ouroboros IPC Resource Manager daemon started...
</syntaxhighlight>
The type of IPCP we will create is a ''local'' IPCP. The local IPCP is a kind of loopback interface: it implements all the functions that the O7s API provides, but only for communication between processes on the same (local) machine. The ''ipcp create'' CLI command instantiates a new local IPC process, but that process will not yet be part of a Layer. We also need to bootstrap this IPCP in a layer, which we will name “local-layer”. As a shortcut, the ''ipcp bootstrap'' command automatically creates an IPCP if no IPCP with that name exists, so the ''ipcp create'' command is usually optional. In the second terminal, enter the command:
<syntaxhighlight>
$ irm ipcp bootstrap type local name local-ipcp layer local-layer
</syntaxhighlight>
The IRMd and ipcpd output in the first terminal reads:
<syntaxhighlight>
==05813== irmd(II): Created IPCP 5843.
==05843== ipcpd/ipcp(II): Bootstrapping...
==05843== ipcpd/ipcp(II): Finished bootstrapping: 0.
==05813== irmd(II): Bootstrapped IPCP 5843 in layer local-layer.
</syntaxhighlight>
From the third terminal window, let’s start our oping application in server mode (“oping --help” shows all oping command line parameters):
<syntaxhighlight>
$ oping --listen
Ouroboros ping server started.
</syntaxhighlight>
The IRMd will notice that an oping server (in our case with pid 5886) has started, but there are no service names bound to it yet:
<syntaxhighlight>
==05813== irmd(DB): New instance (5886) of oping added.
==05813== irmd(DB): This process accepts flows for:
</syntaxhighlight>
The server application is not yet reachable by clients. Next we will ''bind'' the server to a ''name'' and register that name in the “local-layer”. The name for the server can be chosen at will; let’s take “oping-server”. In the second terminal window, execute:
<syntaxhighlight>
$ irm bind process 5886 name oping-server
$ irm name register oping-server layer local-layer
</syntaxhighlight>
The IRMd and IPCPd in terminal one will now acknowledge that the name is bound and registered:
<syntaxhighlight>
==05813== irmd(II): Bound process 5886 to name oping-server.
==05813== irmd(II): Created new name: oping-server.
==05843== ipcpd/ipcp(II): Registering f6c93ff2...
==05843== ipcpd/ipcp(II): Finished registering f6c93ff2 : 0.
==05813== irmd(II): Registered oping-server with IPCP 5843 as f6c93ff2.
</syntaxhighlight>
Ouroboros does not register names in plaintext; it registers a (configurable) hash of the name. The default hashing algorithm for a ''local'' Layer is a 256-bit SHA-3 hash. The output in the logs is truncated to the first 4 bytes, shown in hexadecimal notation.
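As an aside, you can reproduce such a truncated hash yourself. The sketch below computes a 256-bit SHA-3 digest with OpenSSL and prints its first 4 bytes in hex. It is an assumption for illustration that the registered hash is the plain SHA3-256 digest of the name string; the exact bytes Ouroboros feeds into the hash may differ, so the output need not match the value in your logs.
<syntaxhighlight lang="c">
/*
 * Sketch: compute a 256-bit SHA-3 hash of a service name with OpenSSL
 * (1.1.1 or later) and print the first 4 bytes in hex, the same
 * truncated form the IRMd/IPCPd logs use. Hashing the raw name string
 * directly is an assumption for illustration.
 */
#include <openssl/evp.h>

#include <stdio.h>
#include <string.h>

int main(void)
{
        const char *  name = "oping-server";
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int  len = 0;
        unsigned int  i;

        /* One-shot digest of the name string. */
        if (EVP_Digest(name, strlen(name), md, &len, EVP_sha3_256(), NULL) != 1)
                return 1;

        /* Print only the first 4 bytes in hexadecimal notation. */
        for (i = 0; i < 4 && i < len; ++i)
                printf("%02x", md[i]);
        printf("\n");

        return 0;
}
</syntaxhighlight>
Assuming the sketch is saved as namehash.c, it can be compiled with, for example, gcc namehash.c -o namehash -lcrypto.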
Now that we have bound and registered our server, we can connect from the client. In the second terminal window, start an oping client with destination “oping-server” and it will begin pinging:
<syntaxhighlight>
$ oping -n oping-server -c 5
Pinging oping-server with 64 bytes of data (5 packets):
64 bytes from oping-server: seq=0 time=0.633 ms
64 bytes from oping-server: seq=1 time=0.498 ms
64 bytes from oping-server: seq=2 time=0.471 ms
64 bytes from oping-server: seq=3 time=0.505 ms
64 bytes from oping-server: seq=4 time=0.441 ms
--- oping-server ping statistics ---
5 packets transmitted, 5 received, 0 out-of-order, 0% packet loss, time: 5002.413 ms
rtt min/avg/max/mdev = 0.441/0.510/0.633/0.073 ms
</syntaxhighlight>
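Under the hood, the client allocates a flow to the name, writes its payload and reads the echo back. The sketch below shows what such a client could look like against the Ouroboros dev API (ouroboros/dev.h); the exact function signatures and the trivial payload are assumptions for illustration, not the actual oping source, which also adds sequence numbers, timestamps and QoS handling.
<syntaxhighlight lang="c">
/*
 * Minimal sketch of an oping-like client using the Ouroboros dev API.
 * The flow_alloc()/flow_write()/flow_read()/flow_dealloc() signatures
 * are assumed from ouroboros/dev.h and may differ between versions.
 */
#include <ouroboros/dev.h>

#include <stdio.h>

int main(void)
{
        char    msg[64] = "Hello, oping-server.";
        char    buf[64];
        int     fd;
        ssize_t len;

        /* Allocate a flow to the registered name (default QoS, block until done). */
        fd = flow_alloc("oping-server", NULL, NULL);
        if (fd < 0) {
                printf("Failed to allocate a flow.\n");
                return -1;
        }

        /* Send one packet and wait for the echo. */
        if (flow_write(fd, msg, sizeof(msg)) < 0)
                printf("Failed to write packet.\n");
        else if ((len = flow_read(fd, buf, sizeof(buf))) < 0)
                printf("Failed to read reply.\n");
        else
                printf("Got a reply of %zd bytes.\n", len);

        flow_dealloc(fd);

        return 0;
}
</syntaxhighlight>
This mirrors the allocate, write/read, deallocate pattern that the rest of the tutorial traces through the IRMd and IPCPd logs.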
The server will acknowledge that it has a new flow connected on flow descriptor 64, which will time out a few seconds after the oping client stops sending:
<syntaxhighlight>
New flow 64.
Received 64 bytes on fd 64.
Received 64 bytes on fd 64.
Received 64 bytes on fd 64.
Received 64 bytes on fd 64.
Flow 64 timed out.
</syntaxhighlight>
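This output corresponds to an accept-and-echo loop on the server side. A minimal sketch of such a loop, again assuming the ouroboros/dev.h calls (including the flow_accept() call discussed below), could look like the following; the real oping server additionally multiplexes several flows and expires idle ones, which is where the “Flow 64 timed out” message comes from.
<syntaxhighlight lang="c">
/*
 * Minimal sketch of an oping-like echo server using the Ouroboros dev
 * API. Signatures are assumed from ouroboros/dev.h; timeout handling
 * and multiplexing of several flows are omitted for brevity.
 */
#include <ouroboros/dev.h>

#include <stdio.h>

int main(void)
{
        char buf[64];

        while (1) {
                /* Block until a client allocates a flow to our name. */
                int fd = flow_accept(NULL, NULL);
                if (fd < 0)
                        continue;

                printf("New flow %d.\n", fd);

                /* Echo every packet back until the flow goes away. */
                for (;;) {
                        ssize_t len = flow_read(fd, buf, sizeof(buf));
                        if (len < 0)
                                break;
                        printf("Received %zd bytes on fd %d.\n", len, fd);
                        flow_write(fd, buf, len);
                }

                flow_dealloc(fd);
        }

        return 0;
}
</syntaxhighlight>
Note that binding and registering the name is still done from the command line, exactly as in the steps above; the server itself only accepts and services flows.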
The IRMd and IPCP logs provide some additional output detailing the flow allocation process:
<syntaxhighlight>
==05843== ipcpd/ipcp(II): Allocating flow 0 to f6c93ff2.
==05843== ipcpd-local(DB): Allocating flow to f6c93ff2 on fd 64.
==05813== irmd(DB): Flow req arrived from IPCP 5843 for f6c93ff2.
==05813== irmd(II): Flow request arrived for oping-server.
==05843== ipcpd-local(II): Pending local allocation request on fd 64.
==05843== ipcpd/ipcp(II): Finished allocating flow 0 to f6c93ff2: 0.
==05843== ipcpd/ipcp(II): Responding 0 to alloc on flow_id 1.
==05813== irmd(II): Flow on flow_id 0 allocated.
==05843== ipcpd-local(II): Flow allocation completed, fds (64, 65).
==05843== ipcpd/ipcp(II): Finished responding to allocation request: 0
==05813== irmd(II): Flow on flow_id 1 allocated.
==05813== irmd(DB): New instance (5886) of oping added.
==05813== irmd(DB): This process accepts flows for:
==05813== irmd(DB): oping-server
</syntaxhighlight>
First, the IPCPd shows that it will allocate a flow towards a destination hash “f6c93ff2” (truncated). The IRMd logs that IPCPd 5843 (our local IPCPd) requests a flow towards any process that is listening for “f6c93ff2”, and resolves it to “oping-server”, as that is a process that is bound to that name. At this point, the local IPCPd has a pending flow on the client side. Since this is the first flow_id in the system, it gets flow_id 0. The server will accept the flow and the other end of the flow gets flow_id 1. The local IPCPd sees that the flow allocation is completed. Internally it sees the endpoints as flow descriptors 64 and 65, which map to flow_id 0 and flow_id 1. The IPCP cannot directly access flow_ids; they are assigned and managed by the IRMd. After it has accepted the flow, the oping server enters flow_accept() again. The IRMd notices the instance and reports that it now accepts flows for “oping-server”.
After all packets have been sent, the client will exit and deallocate the flow. This can also be seen in the IRMd and IPCPd output:
<syntaxhighlight>
==05813== irmd(DB): Deallocating flow 0 for process 5929.
==05813== irmd(DB): Partial deallocation of flow_id 0 by process 5929.
==05843== ipcpd/ipcp(II): Deallocating flow 0.
==05813== irmd(DB): Deallocating flow 0 for process 5843.
==05813== irmd(II): Completed deallocation of flow_id 0 by process 5843.
==05843== ipcpd-local(II): Flow with fd 64 deallocated.
==05843== ipcpd/ipcp(II): Finished deallocating flow 0: 0.
==05813== irmd(DB): Dead process removed: 5929.
==05813== irmd(DB): Deallocating flow 1 for process 5886.
==05813== irmd(DB): Partial deallocation of flow_id 1 by process 5886.
==05843== ipcpd/ipcp(II): Deallocating flow 1.
==05813== irmd(DB): Deallocating flow 1 for process 5843.
==05813== irmd(II): Completed deallocation of flow_id 1 by process 5843.
==05843== ipcpd-local(II): Flow with fd 65 deallocated.
==05843== ipcpd/ipcp(II): Finished deallocating flow 1: 0.
</syntaxhighlight>
This concludes the first short tutorial. All running processes can be terminated by issuing Ctrl-C in their respective terminals, or you can continue with the next tutorial.