CUDA on Kubernetes
With the LLM era upon us, I've been wanting to play around with some of the open-source, self-hosted toys available. I'm using an old workstation as a homelab, which conveniently has an old NVIDIA GPU installed. Seeing as I'm running a Kubernetes cluster, I want to expose the GPU to its workloads, so I can utilise the existing infrastructure for easy hosting, scheduling, and deployment of GPU-assisted applications.
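
To make the goal concrete, here's a minimal sketch of the kind of manifest this setup should eventually support: a pod that requests a GPU through the `nvidia.com/gpu` resource, assuming an NVIDIA device plugin ends up advertising it on the node. The pod name and image tag here are just placeholders for illustration.

```yaml
# Sketch of the end goal: a pod that asks the scheduler for one GPU
# via the nvidia.com/gpu extended resource.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04  # placeholder CUDA base image
      command: ["nvidia-smi"]                      # prints the visible GPU if passthrough works
      resources:
        limits:
          nvidia.com/gpu: 1  # only schedulable on nodes that expose a GPU
```

If everything is wired up correctly, `kubectl logs cuda-smoke-test` should show the familiar `nvidia-smi` table with the workstation's GPU listed. The rest of this post is about getting the cluster to that point.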