Host/GuestVM Install
Run tensor-fusion-worker on the host and deploy tensor-fusion-client inside a virtual machine (VM) to use the host's physical GPUs from VMs without GPU passthrough. This deployment mode is called TensorFusion HyperGPU.
Terminology
- Tensor-fusion-worker (worker): A standalone binary. You can run multiple worker instances on the same host. It receives compute requests from clients and executes the tasks.
- Tensor-fusion-client (client): A set of shared libraries that fully exports the CUDA and NVML APIs. It runs inside your application environment and forwards compute requests to the worker on the host (see the injection sketch after this section).
Multiple clients can connect to one worker instance. For production, we recommend one client per worker instance for easier resource isolation and management.
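Because the client is a set of shared libraries exporting the CUDA and NVML symbols, an unmodified GPU application inside the VM can pick it up through the dynamic linker's search path. The Python sketch below illustrates the idea only; the library directory, the environment variable name, and the worker address are assumptions, not TensorFusion's documented interface.

```python
import os
import subprocess

# Assumptions for illustration only:
CLIENT_LIB_DIR = "/opt/tensor-fusion/client"   # hypothetical unpack location of the client libraries
WORKER_ADDR = "10.0.0.1:8000"                  # hypothetical address of the worker on the host

env = dict(os.environ)
# Put the client's CUDA/NVML replacements ahead of the system libraries.
env["LD_LIBRARY_PATH"] = CLIENT_LIB_DIR + ":" + env.get("LD_LIBRARY_PATH", "")
# Hypothetical variable telling the client which worker to forward requests to.
env["TENSOR_FUSION_WORKER_ADDR"] = WORKER_ADDR

# The GPU application itself is unchanged; it calls the usual CUDA/NVML APIs,
# which the client library forwards to the worker running on the host.
subprocess.run(["python3", "train.py"], env=env, check=True)
```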
Prerequisites
- A Linux host with NVIDIA GPUs. Recommended NVIDIA driver version: 570.xx or newer (a quick version check is sketched after this list).
- One or more VMs (QEMU/MVisor/VMware/Hyper‑V, etc.) that can reach the host either over the network or via shared memory.
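To verify the driver prerequisite on the host, you can query the installed driver version with nvidia-smi. The check below is a small, TensorFusion-agnostic sketch.

```python
import subprocess

# Query the driver version reported for each GPU on the host.
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    text=True,
)

for version in out.strip().splitlines():
    major = int(version.split(".")[0])
    status = "OK" if major >= 570 else "older than the recommended 570.xx"
    print(f"GPU driver {version}: {status}")
```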
Step 1: Download Tensor Fusion
- Download the latest release: https://cdn.tensor-fusion.ai/archive-lastest.zip
- Unzip the archive:
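If you prefer to script both steps, a minimal Python equivalent using only the standard library is shown below (the release URL is the one above; the local file and directory names are arbitrary choices).

```python
import urllib.request
import zipfile

URL = "https://cdn.tensor-fusion.ai/archive-lastest.zip"

# Download the release archive to the current directory.
archive_path, _ = urllib.request.urlretrieve(URL, "tensor-fusion.zip")

# Extract it into ./tensor-fusion.
with zipfile.ZipFile(archive_path) as zf:
    zf.extractall("tensor-fusion")
    print(f"Unpacked {len(zf.namelist())} files into ./tensor-fusion")
```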