Posts

Featured Post

So Long, Lamy SF — With Ink-Stained Fingers and a Full Heart

Well, that’s it. The Lamy flagship store in San Francisco has closed its doors as of June 15, 2025. And while a storefront shuttering isn’t supposed to make you feel like you’ve just lost an old friend… this one does. I still remember the first time I wandered into that temple of German precision and minimalism. It was during an RSA Conference—one of those frenzied security jamborees where everyone’s peddling zero-trust frameworks and quantum-safe buzzwords. My mind was buzzing with firewalls and threat matrices when I found myself, quite accidentally, standing before a glass shrine of pens. Real pens. Fountain pens. Lamy pens. Inside was peace. Ink. Order. And just like that, I was a child again. I’ve had my share of fine writing instruments. I own a Montblanc 149, and yes, it writes like a dream. There’s a Pelikan in my desk drawer that’s practically aristocratic. But my Lamy 2000? That’s my daily driver. It’s not a pen; it’s a companion. I’ve written entire no...

Future AI Datacenter - A light read!!

Copyright: Sanjay Basi In the shadowy depths of digital empires — somewhere between racks of humming servers and blinking lights — lies the future of AI datacenters, quietly plotting its world domination. Well, perhaps not domination in the traditional Skynet sense, but certainly in a transformative way that’s reshaping industries faster than you can say, “Is this thing sentient yet?” Datacenter Evolution Just a decade ago, a datacenter was mostly just racks, cooling systems, and servers. Fast forward to today, and AI-driven datacenters resemble something from a sci-fi blockbuster — complete with autonomous robots, predictive maintenance drones, and AI overseers named after obscure mythical beings. According to Gartner, by 2027, AI will automate more than 60% of infrastructure operations, reducing human intervention by over half. We have gone from Rows to Robots! Yes, that’s right, the days of Sanjay from IT strolling down aisles diagnosing servers by instinct and cable-jiggling ...

Demystifying NVIDIA Dynamo Inference Stack

Courtesy: https://developer.nvidia.com/blog/introducing-nvidia-dynamo-a-low-latency-distributed-inference-framework-for-scaling-reasoning-ai-models/ If you’re anything like me — and you probably are, or you wouldn’t be reading this — you’re perpetually amazed (and occasionally overwhelmed) by the warp-speed pace at which NVIDIA keeps rolling out innovations. Seriously, I sometimes think they have more GPU models and software stacks than my inbox has unread emails. So, grab your favorite caffeinated beverage, buckle up, and let’s talk about one of their latest marvels — the NVIDIA Dynamo Inference Stack. Wait, Dynamo What? Glad you asked. NVIDIA Dynamo isn’t just another fancy buzzword NVIDIA cooked up for its annual GTC showcase — although it does sound suspiciously like something Tony Stark would install in Iron Man’s suit. Dynamo is an open-source inference framework designed to simplify deploying and scaling AI models. Think of it as NVIDIA’s way of saying, “Look, running models at scale d...