Apple just approved a driver that lets Nvidia eGPUs work with Arm Macs. Apple and Nvidia haven’t played nice in years. These two facts shouldn’t coexist, yet here we are in 2026, and I’m genuinely excited about what this means for anyone building AI bots on Mac hardware.
The twist? Nvidia didn’t write this driver. Tiny Corp did.
That’s right—a third party managed to bridge the gap that two tech giants couldn’t be bothered to cross themselves. And honestly, that makes this story even better for those of us actually building things.
## Why This Matters for Bot Builders
If you’re training models or running inference locally, you know the pain of Apple Silicon’s GPU limitations. The M-series chips are impressive for everyday tasks, but when you’re iterating on transformer architectures or testing multi-agent systems, you hit walls fast. Apple’s Metal API is fine, but CUDA has the ecosystem. The libraries. The community support. The examples that actually work.
I’ve been running my development setup on an M2 Mac, and I’ve lost count of how many times I’ve had to spin up a cloud instance just to test something that needed proper GPU acceleration. The cost adds up. The latency is annoying. And there’s something deeply unsatisfying about not being able to use the machine sitting right in front of you.
External GPUs were supposed to solve this. But when Apple switched to Arm architecture, eGPU support vanished. Intel Macs could use them. Arm Macs couldn’t. The official line was that the architecture didn’t support it, but let’s be real—it was more about Apple wanting you to buy their hardware solutions.
## The Thunderbolt Limitation Nobody’s Talking About
Before we get too excited, there’s a technical reality we need to address. These eGPUs connect through Thunderbolt, which means you’re getting a fraction of what that Nvidia card can actually do. Thunderbolt 4 tops out at 40 Gbps, and only about 32 Gbps of that is guaranteed for tunneled PCIe data. A PCIe 4.0 x16 slot? That’s roughly 256 Gbps. You’re leaving a lot of bandwidth on the table.
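To make that gap concrete, here’s a back-of-envelope sketch of how long it takes just to move model weights across each link. The figures are nominal link rates, ignoring protocol overhead, and the 14 GB model size is an illustrative assumption (roughly a 7B-parameter model in fp16).

```python
# Rough transfer-time comparison: Thunderbolt 4 vs. a direct PCIe 4.0 x16 slot.
# Nominal link rates only; real-world throughput will be lower on both.

def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Seconds to move size_gb gigabytes over a link of link_gbps gigabits/s."""
    return (size_gb * 8) / link_gbps

MODEL_GB = 14  # assumption: ~7B params in fp16 (~2 bytes/param)

tb4 = transfer_seconds(MODEL_GB, 40)    # Thunderbolt 4: 40 Gbps
pcie = transfer_seconds(MODEL_GB, 256)  # PCIe 4.0 x16: ~256 Gbps

print(f"Thunderbolt 4: {tb4:.1f}s, PCIe 4.0 x16: {pcie:.2f}s")
```

The good news: for inference, weights load once and stay resident in VRAM, so the penalty hits at startup rather than on every token.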
For training large language models from scratch, this setup won’t replace a proper workstation. But for fine-tuning smaller models? Running inference on quantized versions? Testing agent architectures before you commit to expensive cloud runs? This could be exactly what we need.
I’m thinking about all the times I’ve wanted to experiment with different quantization strategies or test how a model performs with various batch sizes. Having local GPU access—even bottlenecked GPU access—means faster iteration cycles. It means I can test an idea at 2 AM without worrying about cloud costs or quota limits.
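For that kind of late-night experimentation, a rough VRAM budget helps decide what fits on a given card before you buy the enclosure. This is a rule-of-thumb sketch, not a spec: weights take roughly params × bits/8 bytes, plus a KV cache that grows with context length and batch size. The 7B-class model shape below (32 layers, hidden size 4096) is a hypothetical example.

```python
# Back-of-envelope VRAM estimate for a quantized model at various precisions.
# Rule of thumb only; real frameworks add activation and allocator overhead.

def weights_gb(params_b: float, bits: int) -> float:
    """Approximate weight memory in GB for params_b billion parameters."""
    return params_b * 1e9 * bits / 8 / 1e9

def kv_cache_gb(layers: int, hidden: int, ctx: int, batch: int,
                bytes_per: int = 2) -> float:
    """Approximate KV cache: 2 (K and V) * layers * hidden * ctx * batch."""
    return 2 * layers * hidden * ctx * batch * bytes_per / 1e9

# Hypothetical 7B-class model: 32 layers, hidden size 4096, 4k context
for bits in (16, 8, 4):
    total = weights_gb(7, bits) + kv_cache_gb(32, 4096, ctx=4096, batch=1)
    print(f"{bits}-bit weights: ~{total:.1f} GB")
```

Numbers like these are why quantization matters so much here: dropping from fp16 to 4-bit is the difference between needing a 24 GB card and getting by on an 8 GB one.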
## What Tiny Corp Actually Built
According to their announcement, the installation process is simple enough that “a Qwen could do it.” That’s both a joke and a promise—they’re claiming the setup is straightforward enough that an AI model could walk you through it. Then you can actually run that model on the hardware you just configured.
The driver supports both AMD and Nvidia cards, which gives us options. Nvidia’s CUDA ecosystem is the obvious draw, but AMD’s ROCm has been getting better, and their cards often offer more VRAM for the price. For bot builders working with context-heavy applications or multi-modal models, that extra memory matters.
## The Bigger Picture
This approval signals something interesting about Apple’s current position. They’re allowing third-party solutions to fill gaps in their ecosystem. That’s not the Apple of five years ago. Maybe they’ve realized that developers need flexibility, or maybe they’re just picking their battles differently now.
Either way, I’m ordering a Thunderbolt enclosure this week. I’ve got an RTX 4070 sitting in a drawer from an old build, and I’m curious to see how it performs for the kind of work I actually do—fine-tuning embedding models, testing retrieval systems, running local inference for development.
Will it match a dedicated Linux workstation? No. Will it beat spinning up a cloud instance every time I want to test something? Absolutely. And for iterative development work, that’s what matters.
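The economics are easy to sanity-check. Here’s a minimal sketch of the break-even point for a one-time enclosure purchase against hourly cloud rental; the $300 enclosure and $0.60/hour GPU rate are illustrative assumptions, not quotes.

```python
# Break-even: one-time enclosure cost vs. hourly cloud GPU rental.
# Both prices are illustrative assumptions; plug in your own.

def breakeven_hours(enclosure_cost: float, cloud_hourly: float) -> float:
    """GPU-hours of cloud use at which the enclosure has paid for itself."""
    return enclosure_cost / cloud_hourly

hours = breakeven_hours(enclosure_cost=300.0, cloud_hourly=0.60)
print(f"Break-even after ~{hours:.0f} GPU-hours")
```

If you already own the card, the enclosure is the only new spend, and a few months of regular iteration clears that bar easily.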
Sometimes the best solution isn’t the most powerful one. It’s the one that removes friction from your workflow. Tiny Corp just removed a significant piece of friction for Mac-based AI developers, and Apple—surprisingly—let them do it.
đź•’ Published: