rCUDA

Developer(s): Universidad Politécnica de Valencia
Stable release: 16.11 / November 12, 2016
Operating system: Linux
Type: GPGPU
License: Proprietary (free for academic use)
Website: www.rcuda.net

rCUDA, which stands for Remote CUDA, is a middleware framework for remote GPU virtualization. Fully compatible with the CUDA application programming interface (API), it allows one or more CUDA-enabled GPUs to be allocated to a single application. Each GPU can be part of a cluster or run inside a virtual machine. The approach is aimed at improving performance in GPU clusters whose GPUs are not fully utilized. GPU virtualization reduces the number of GPUs needed in a cluster, which in turn lowers acquisition, energy, and maintenance costs.

Example GPU cluster

The recommended distributed acceleration architecture is a high performance computing cluster with GPUs attached to only a few of the cluster nodes. When a node without a local GPU executes an application that needs GPU resources, remote execution of the kernel is supported by data and code transfers between local system memory and remote GPU memory. rCUDA is designed around this client-server architecture: clients employ a library of wrappers to the high-level CUDA Runtime API, while the server runs a network listening service that receives requests on a TCP port. Several nodes running different GPU-accelerated applications can concurrently make use of the whole set of accelerators installed in the cluster. The client forwards each request to one of the servers, which accesses the GPU installed in that machine and executes the request on it. Time-multiplexing the GPU, that is, sharing it, is accomplished by spawning a different server process for each remote GPU execution request.[1][2][3]
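
As a rough sketch of this client-server split (an illustration only, not rCUDA's actual wire protocol: the opcode, message layout, port number, and the name remote_cudaMalloc are assumptions), a client-side wrapper for a call such as cudaMalloc can serialize the request, send it to the GPU server over a TCP socket, and hand the returned device handle back to the application:

    /* Illustrative client-side wrapper: forwards a cudaMalloc-style request
     * to a remote GPU server over TCP. The opcode, message layout and port
     * are hypothetical; real rCUDA internals differ. */
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    enum { OP_MALLOC = 1 };                        /* hypothetical opcode */

    struct request  { uint32_t op; uint64_t size; };
    struct response { uint32_t status; uint64_t dev_handle; };

    /* Returns 0 on success and stores the remote device handle in *handle. */
    static int remote_cudaMalloc(const char *server_ip, uint64_t size,
                                 uint64_t *handle)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return -1;

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8308);               /* assumed port */
        inet_pton(AF_INET, server_ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            close(fd);
            return -1;
        }

        struct request req = { OP_MALLOC, size };
        struct response resp;
        if (write(fd, &req, sizeof req) != (ssize_t)sizeof req ||
            read(fd, &resp, sizeof resp) != (ssize_t)sizeof resp) {
            close(fd);
            return -1;
        }
        close(fd);

        *handle = resp.dev_handle;                 /* remote device pointer */
        return (int)resp.status;                   /* 0 == success */
    }

    int main(void)
    {
        uint64_t handle;
        if (remote_cudaMalloc("127.0.0.1", 1 << 20, &handle) == 0)
            printf("allocated 1 MiB on remote GPU, handle=0x%llx\n",
                   (unsigned long long)handle);
        return 0;
    }

In a complete system the wrapper library would expose the full CUDA Runtime API with unchanged signatures, so the application remains unaware of whether an allocation lives on a local or a remote GPU.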

rCUDA 16.11

The rCUDA framework enables the concurrent, remote use of CUDA-compatible devices.

rCUDA employs the socket API for the communication between clients and servers. Thus, it can be useful in three different environments: clusters, where it reduces the number of GPUs that need to be installed; academia, where a few high-performance GPUs can be shared concurrently by many users over a commodity network; and virtual machines, which gain access to the CUDA devices installed in the physical host.
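
To illustrate the socket-based server side in the same spirit (again only a sketch under assumed details: the port number and the echo handler stand in for the real request dispatcher), a listener can accept connections on a TCP port and fork one handler process per request, which is one way the time-multiplexing of a GPU among several clients can be realized:

    /* Illustrative GPU-server listener: accepts requests on a TCP port and
     * forks one handler process per connection. The port number and the
     * echo handler are placeholders for illustration only. */
    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static void handle_request(int client_fd)
    {
        /* A real server would decode the CUDA call, run it on the local
         * GPU and return the result; here we simply echo the request. */
        char buf[256];
        ssize_t n = read(client_fd, buf, sizeof buf);
        if (n > 0)
            (void)write(client_fd, buf, (size_t)n);   /* best-effort reply */
    }

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8308);                  /* assumed port */

        if (listener < 0 ||
            bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(listener, 16) < 0) {
            perror("listen");
            return 1;
        }
        signal(SIGCHLD, SIG_IGN);                     /* reap children automatically */

        for (;;) {
            int client = accept(listener, NULL, NULL);
            if (client < 0)
                continue;
            if (fork() == 0) {                        /* one process per request */
                close(listener);
                handle_request(client);
                close(client);
                _exit(0);
            }
            close(client);                            /* parent keeps listening */
        }
    }

A motivation for a process-per-request design of this kind is that each remote execution then runs in its own address space, keeping concurrent clients isolated from one another.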

The current version of rCUDA (v16.11) supports CUDA version 8.0, excluding graphics interoperability. rCUDA 16.11 targets 64-bit Linux on both the client and server sides.

CUDA applications require no source code changes in order to run with rCUDA.
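
For instance, an ordinary CUDA Runtime API host program such as the following needs no modification; with rCUDA, only the execution environment changes (the application is linked or preloaded against the rCUDA wrapper library instead of NVIDIA's CUDA runtime and told which server to contact; those configuration details are installation-specific and not shown here):

    /* An ordinary CUDA Runtime API host program: it needs no source changes
     * to run over rCUDA, since the wrapper library exposes the same API.
     * Build, e.g.: nvcc example.c -o example   (or gcc ... -lcudart)      */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "no CUDA devices visible\n");
            return 1;
        }
        printf("visible CUDA devices: %d\n", count);

        /* Allocate device memory, copy data in and out, then release it.
         * Whether the device is local or remote is transparent to the code. */
        const char msg[] = "hello GPU";
        char *d_buf = NULL;
        char back[sizeof msg];

        cudaMalloc((void **)&d_buf, sizeof msg);
        cudaMemcpy(d_buf, msg, sizeof msg, cudaMemcpyHostToDevice);
        cudaMemcpy(back, d_buf, sizeof msg, cudaMemcpyDeviceToHost);
        cudaFree(d_buf);

        printf("round trip: %s\n", back);
        return 0;
    }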

References

  1. Duato, José; Igual, Francisco; Mayo, Rafael; Peña, Antonio; Quintana-Ortí, Enrique; Silla, Federico (August 25, 2009). "An Efficient Implementation of GPU Virtualization in High Performance Clusters". Euro-Par 2009 Parallel Processing Workshops, Delft, The Netherlands. Lecture Notes in Computer Science. 6043: 385–394. doi:10.1007/978-3-642-14122-5_441. ISBN 978-3-642-14122-5.
  2. Duato, José; Peña, Antonio; Silla, Federico; Mayo, Rafael; Quintana-Ortí, Enrique (June 28, 2010). "rCUDA: Reducing the number of GPU-based accelerators in high performance clusters". 2010 International Conference on High Performance Computing and Simulation (HPCS), Caen, France: 224–231. doi:10.1109/HPCS.2010.5547126. ISBN 978-1-4244-6827-0.
  3. Duato, José; Peña, Antonio; Silla, Federico; Mayo, Rafael; Quintana-Ortí, Enrique (September 13, 2011). "Performance of CUDA Virtualized Remote GPUs in High Performance Clusters". 2011 International Conference on Parallel Processing (ICPP), Taipei, Taiwan: 365–374. doi:10.1109/ICPP.2011.58. ISBN 978-1-4577-1336-1.

