This article presents a new device named VirtIOBus that allows cores to communicate by relying on VirtIO. The design is simple: each core has one reception virtqueue per other core in the system. For example, in a three-core system, each core has (N-1) = 2 reception queues. When core #0 wants to send a packet to core #1, it gets a descriptor from the available ring of core #1's reception queue for core #0, fills the buffer, and then enqueues the descriptor in the used ring. To notify core #1, core #0 can either trigger an inter-core interrupt or simply continue until core #1 picks up the buffer. In the current implementation, interrupts are disabled, and core #1 just has to poll the used rings to get new buffers. Since each reception queue has a single producer and a single consumer, the enqueue and dequeue operations do not require atomic operations or any lock mechanism.
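To make the design a bit more concrete, here is a minimal sketch in C of how the per-core reception queues and the send path could look. Everything in it (the names rx_queue, init_rx_queue, send_to, the QUEUE_LEN/BUF_SIZE constants and the simplified rings) is an assumption for illustration only; it is not the actual VirtIOBus implementation.

#include <stdint.h>
#include <string.h>

#define NR_CORES   3     /* example: three-core system                 */
#define QUEUE_LEN  16    /* descriptors per queue (power of two)       */
#define BUF_SIZE   256   /* payload size of each pre-allocated buffer  */

struct desc {
    uint8_t  data[BUF_SIZE];
    uint32_t len;
};

/*
 * One reception virtqueue for the pair "src -> dst". Each ring index is
 * written by exactly one core: the receiver produces free descriptors into
 * the available ring, the sender consumes them, fills the buffer and
 * produces into the used ring, which the receiver polls. This is why no
 * atomics or locks are needed (on a weakly-ordered CPU, memory barriers
 * would still be required around the publication of the indexes).
 */
struct virtqueue {
    struct desc       bufs[QUEUE_LEN];
    uint16_t          avail[QUEUE_LEN];   /* free descriptor indexes   */
    volatile uint16_t avail_prod;         /* written by the receiver   */
    volatile uint16_t avail_cons;         /* written by the sender     */
    uint16_t          used[QUEUE_LEN];    /* filled descriptor indexes */
    volatile uint16_t used_prod;          /* written by the sender     */
    volatile uint16_t used_cons;          /* written by the receiver   */
};

/* rx_queue[dst][src]: reception queue of core "dst" for packets from "src"
 * (the dst == src entries are simply unused, so each core effectively has
 * N-1 reception queues). */
static struct virtqueue rx_queue[NR_CORES][NR_CORES];

/* Called once by core "dst" for each of its reception queues: hand all
 * descriptors to the sender by pre-filling the available ring. */
static void init_rx_queue(int dst, int src)
{
    struct virtqueue *vq = &rx_queue[dst][src];
    for (uint16_t i = 0; i < QUEUE_LEN; i++)
        vq->avail[i] = i;
    vq->avail_prod = QUEUE_LEN;
}

/* Core "src" sends "len" bytes (len <= BUF_SIZE) to core "dst". */
static void send_to(int src, int dst, const void *buf, uint32_t len)
{
    struct virtqueue *vq = &rx_queue[dst][src];

    /* Spin until the receiver has made a free descriptor available. */
    while (vq->avail_cons == vq->avail_prod)
        ;
    uint16_t idx = vq->avail[vq->avail_cons % QUEUE_LEN];
    vq->avail_cons++;

    /* Fill the buffer and publish the descriptor in the used ring. */
    memcpy(vq->bufs[idx].data, buf, len);
    vq->bufs[idx].len = len;
    vq->used[vq->used_prod % QUEUE_LEN] = idx;
    vq->used_prod++;
}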
The API consists of two functions:
- SendTo(core, buf)
- RecvFrom(core, buf)
When a core wants to send data, it just has to set the destination core and the buffer from which the content is copied. To get a new buffer, the core invokes RecvFrom(), which returns the oldest buffer in the queue. If the queue is empty, the function blocks until a new buffer arrives.
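Under the same assumptions as the sketch above, RecvFrom() can be sketched as a polling loop over the used ring that returns the oldest buffer and then recycles the descriptor into the available ring. Note that the two-argument SendTo(core, buf)/RecvFrom(core, buf) of the real API presumably take the calling core and the buffer size implicitly; the sketch makes them explicit.

/* Core "dst" receives the oldest buffer sent by core "src". Blocks by
 * polling while the used ring is empty. Returns the number of bytes
 * copied into "buf" (at most BUF_SIZE). */
static uint32_t recv_from(int dst, int src, void *buf)
{
    struct virtqueue *vq = &rx_queue[dst][src];

    /* Poll the used ring until the sender publishes a descriptor. */
    while (vq->used_cons == vq->used_prod)
        ;
    uint16_t idx = vq->used[vq->used_cons % QUEUE_LEN];
    uint32_t len = vq->bufs[idx].len;

    /* Copy the payload out and free the slot in the used ring. */
    memcpy(buf, vq->bufs[idx].data, len);
    vq->used_cons++;

    /* Recycle the descriptor: hand it back to the sender through the
     * available ring. */
    vq->avail[vq->avail_prod % QUEUE_LEN] = idx;
    vq->avail_prod++;
    return len;
}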
You can see the use of these functions in the example here. In this example, core #0 broadcasts data to all the other cores in a three-core system: it sends a “ping” to core #1 and core #2, and then core #1 and core #2 respond with a “pong”. The output is as follows:
VirtIOBus: Core[1]->Core[0] queue has been initiated
VirtIOBus: Core[2]->Core[0] queue has been initiated
VirtIOBus: Core[0]->Core[1] queue has been initiated
VirtIOBus: Core[2]->Core[1] queue has been initiated
VirtIOBus: Core[0]->Core[2] queue has been initiated
VirtIOBus: Core[1]->Core[2] queue has been initiated
Core[0] -> Core[1]: ping
Core[1] -> Core[0]: pong
Core[0] -> Core[2]: ping
Core[2] -> Core[0]: pong
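For reference, the ping/pong exchange shown in the log above could be written roughly as follows on top of the sketched functions. The entry point per_core_main() and the fact that core #0 prints both lines of each exchange are assumptions made to keep the example short; they are not taken from the actual demo.

#include <stdio.h>

/* Hypothetical per-core entry point: each core runs it with its own id. */
static void per_core_main(int core_id)
{
    char msg[BUF_SIZE];

    /* Each core first initializes its own reception queues (this is what
     * the "queue has been initiated" lines in the log correspond to). */
    for (int src = 0; src < NR_CORES; src++)
        if (src != core_id)
            init_rx_queue(core_id, src);

    if (core_id == 0) {
        /* Core #0 pings core #1 and core #2 and waits for each pong. */
        for (int dst = 1; dst < NR_CORES; dst++) {
            send_to(0, dst, "ping", 5);
            printf("Core[0] -> Core[%d]: ping\n", dst);
            recv_from(0, dst, msg);
            printf("Core[%d] -> Core[0]: %s\n", dst, msg);
        }
    } else {
        /* Cores #1 and #2 wait for the ping and answer with a pong. */
        recv_from(core_id, 0, msg);
        send_to(core_id, 0, "pong", 5);
    }
}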
In the future, this mechanism will be used to implement MPI functions like MPI_Send() and MPI_Recv(). The motivation is to port MPI applications to the Toro unikernel, so stay tuned!
P.S.: You can find a demo here.