Here’s our giant list of alternative GPU clouds. (gpus.llm-utils.org)
| GPU | VRAM | 16-bit inference perf rank | Available on Runpod (Instant) | Available on Lambda (Instant) | Available on FluidStack (Instant) |
| --- | --- | --- | --- | --- | --- |
| H100 | 80 GB | 🏆 1 | No | ✅ | $1. |
To run Falcon-40B, 85 GB+ of GPU RAM is preferred.
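As a rough sanity check on that figure, you can estimate VRAM from parameter count: at 16-bit precision each parameter takes 2 bytes, plus some headroom for activations and the KV cache. This is a back-of-the-envelope sketch of my own (the 10% overhead factor is an assumption, not something from the article):

```python
def vram_gb(n_params_billion, bytes_per_param=2, overhead=0.1):
    """Rough GPU memory estimate (GB) for loading a model at a given precision.

    bytes_per_param: 2 for fp16/bf16, 4 for fp32, 1 for int8.
    overhead: assumed extra fraction for activations and KV cache (my estimate).
    """
    weights_gb = n_params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB
    return weights_gb * (1 + overhead)

# Falcon-40B at 16-bit: ~88 GB, consistent with the 85 GB+ figure above
print(round(vram_gb(40), 1))
```

By this estimate a single 80 GB H100 falls just short, which is why multi-GPU setups (or quantization below 16-bit) come up for Falcon-40B.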
## Availability

Lambda Labs
- At least 1x H100 GPU instant access (actual max unclear)
- Max H100s available: 60,000 with a 3-year contract (min 1 GPU)
- Pre-approval requirements: Unknown, didn’t do the pre-approval.
Overall, if you’re not tied to your existing cloud, I’d recommend FluidStack, Runpod, and Lambda Labs for GPUs.