GPU Go Overview
Introduction to GPU Go — use remote GPUs as if they were local. Local-first GPU sharing for developers, homelabs, and teams.
GPU Go (腾思Go) is the official CLI and client for TensorFusion GPU Go. It lets you use remote GPUs as if they were attached to your machine — ideal for sharing GPU servers at home, in the lab, or at work.
Slogan: "Local First. Bring GPU to You, Not You to GPU."
Think of it as NFS for GPUs: low-latency, easy to manage, and cost-effective when multiple developers share one or a few GPU servers using TensorFusion virtualization.
Who It's For
- Individual developers — Use a remote GPU from your laptop without moving code to the cloud.
- Homelab & students — Share a single GPU box across machines and projects.
- Universities & labs — Let students and researchers share lab GPUs via simple links.
- Small teams — One GPU server, many developers; no per-seat cloud bills.
How It Works
- GPU host: You run the GPU Go Agent (ggo agent) on the machine that has the GPU. The agent registers with the TensorFusion platform and exposes vGPUs (virtualized GPU units) to the cloud.
- Sharing: From the Dashboard or CLI you create share links (e.g. https://go.gpu.tf/s/abc123) that grant access to a vGPU.
- Your machine: You use the CLI or VS Code extension to create a Studio — a local containerized environment (Docker, Colima, WSL, or Apple Container) wired to the remote GPU via the share link. You code and run training/inference locally while the GPU runs remotely.
So: Agent on the GPU server, Studio on your laptop, connected by a share link.
Key Concepts
| Concept | Description |
|---|---|
| Agent | Daemon running on the GPU host. Registers with the platform, reports GPUs, and runs vGPUs. |
| vGPU | A virtualized GPU slice managed by the agent. One machine can have multiple vGPUs. |
| Share link | Short URL or code (e.g. abc123 or https://go.gpu.tf/s/abc123) that authorizes a client to use a specific vGPU. |
| Studio | Local dev environment (container) created by ggo studio create. It connects to a remote vGPU via a share link and provides a full AI/ML stack (e.g. PyTorch, CUDA libs). |
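A share link and its short code are interchangeable wherever the CLI accepts one. As a quick illustration of the relationship (this helper is hypothetical, not part of ggo; the go.gpu.tf/s/ prefix is taken from the examples above):

```python
# Hypothetical helper: reduce a share link to its short code.
# Accepts either "abc123" or "https://go.gpu.tf/s/abc123" and
# returns the bare code in both cases.
def share_code(link_or_code: str) -> str:
    prefix = "https://go.gpu.tf/s/"
    if link_or_code.startswith(prefix):
        return link_or_code[len(prefix):]
    return link_or_code

print(share_code("https://go.gpu.tf/s/abc123"))  # abc123
print(share_code("abc123"))                      # abc123
```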
Features
- Zero friction — Create a Studio with one command and get instant access to a remote GPU.
- Cost effective — Share one GPU server among many users via TensorFusion virtualization.
- Cross-platform — macOS (Apple Silicon/Intel), Windows (WSL), and Linux.
- Flexible backends — Studio runs on Docker, Colima, WSL, or Apple Container (macOS 26+). Use ggo studio backends to see available runtimes.
- VS Code integration — Manage agents, vGPUs, and studios from the GPU Go extension.
- Share links — Hand a link or short code to teammates so they can use your GPU without touching the server.
Quick Start
1. Account and dashboard
Register and log in at TensorFusion. Use the dashboard to generate agent install tokens and Personal Access Tokens (PAT) for the CLI and VS Code.
2. Install and run the agent (GPU host)
On the machine that has the GPU:
# One-line install (see dashboard for your token)
curl -fsSL https://cdn.tensor-fusion.ai/archive/gpugo/install.sh | sh
# Or manually:
ggo agent register -t "<token-from-dashboard>"
ggo agent start

You can track onboarding and vGPU status on the dashboard.
3. Create a share link (GPU host or dashboard)
From the dashboard, or on the GPU host:
ggo login # use PAT from dashboard
ggo share create

Use the generated link (e.g. https://go.gpu.tf/s/abc123) or short code (abc123) in the next step.
4. Use the GPU from your machine (client)
Option A — Studio (recommended): containerized environment + SSH.
ggo login
ggo studio create my-project -s "https://go.gpu.tf/s/abc123"
ggo studio ssh my-project

Option B — Direct use (Linux and Windows only): use the GPU in your current environment without a container. Not available on macOS.
ggo use abc123
# or: ggo use https://go.gpu.tf/s/abc123
# Then run your Python/ML workload; it will use the remote GPU.
# To clean up: ggo clean or ggo clean --all

CLI at a Glance
| Command | Purpose |
|---|---|
| ggo login / ggo logout | Authenticate with TensorFusion (PAT). |
| ggo agent register / ggo agent start | Register and run the agent on the GPU host. |
| ggo worker | Manage vGPUs on the GPU host. |
| ggo share create / ggo share list | Create or list share links for vGPUs. |
| ggo studio create / ggo studio list / ggo studio ssh | Create, list, or SSH into Studio environments. |
| ggo use <code-or-link> | (Linux/Windows only) Use a remote GPU in the current environment; ggo clean to undo. |
| ggo deps sync / ggo deps download | Sync and download GPU libraries for local/Studio use. |
| ggo studio backends | List available container runtimes (Docker, Colima, WSL, Apple Container). |
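After ggo use on a client (or inside a Studio), a quick check from Python confirms whether the remote GPU is visible to your workload. A minimal sketch, assuming a PyTorch-based stack; it degrades gracefully rather than crashing when torch is not installed or no device is attached:

```python
# Sketch: report whether the current Python environment can see a CUDA device.
# Assumes PyTorch for the check (the Studio stack ships PyTorch/CUDA libs);
# returns False instead of raising if torch is absent or no GPU is visible.
def gpu_visible() -> bool:
    try:
        import torch
    except ImportError:
        return False
    return torch.cuda.is_available()

if __name__ == "__main__":
    print("GPU visible:", gpu_visible())
```

If this prints False right after ggo use, re-check the share link and the agent's status on the dashboard before digging into your own code.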
VS Code Extension
The GPU Go extension gives you a GUI for the same workflow: login, create studios, connect via SSH, and manage vGPUs. Install from the VS Code Marketplace or Open VSX.
Next Steps
- Studio in depth — Volume mounts, ports, resource limits, and backends: see the Studio guide in the gpu-go repo.
- Troubleshooting — Troubleshooting handbook and Discord.
- Dashboard — TensorFusion Dashboard for tokens, agents, and share links.