Posts

Showing posts from December, 2025

Our Dollar, Your Problem by Kenneth Rogoff (2025)

[Image credit: Sanjay Basu]

In Our Dollar, Your Problem, Kenneth Rogoff takes aim at one of the most comfortable assumptions in modern economics: that the dominance of the US dollar is permanent. Borrowing its title from John Connally’s blunt remark to European finance ministers in 1971, the book argues that the dollar’s supremacy has never been guaranteed and is now facing its most serious threats, not from rivals abroad but from dysfunction at home.

Rogoff’s core claim is simple and unsettling. The dollar did not rule the world because of flawless policy or moral authority. It survived because competitors failed, because history broke in convenient ways, and because markets were willing to forgive American excess. That tolerance, he argues, is wearing thin. Fiscal indiscipline, political polarization, and the aggressive use of financial sanctions are steadily eroding the trust that underwrites the dollar’s special status.

Rogoff opens with the Nixon Shock of 1971, when the Un...

2026 Is A Year That Stops Pretending

[Image credit: Sanjay Basu]

A set of predictions from someone who has watched the machinery up close

The Year the Illusions Crack

Every few years, technology has a year where it stops pretending. 2026 feels like one of those years. Not the kind where a single breakthrough grabs headlines and everyone pretends it was inevitable. This is subtler. More unsettling. A year where systems show their seams. Where slogans give way to spreadsheets. Where the mythology of “infinite scale” collides with power constraints, human limits, and physics that refuses to negotiate.

If 2023 was about awe, and 2024–2025 were about acceleration, 2026 will be about reckoning. Not collapse. Adjustment. And for those paying attention, opportunity.

What follows are not predictions designed to impress futurists or scare boards. They are grounded guesses. Informed by watching AI infrastructure get built, by sitting in rooms where power budgets matter more than press releases, by reading too much physics late at n...

How Nothing Could Destroy the Universe

[Image credit: Sanjay Basu]

Nothing has always been more dangerous than it sounds. For most of daily life, nothing is a complaint, not a concept. You open the fridge. You sigh. There is nothing to eat. This sort of nothing is negotiable. It depends on hunger, expectations, and how brave you feel about expired yogurt.

Physics is not interested in that kind of emptiness. Physics worries about stricter kinds of nothing. There is the modest version, what philosophers might call nothing with a lowercase n. You start with something and remove it piece by piece. Matter goes. Air follows. Radiation fades. What remains is a vacuum. Sparse. Cold. Seemingly empty. But still something.

Then there is Nothing, capital N. Absolute absence. No space. No time. No fields. No laws waiting quietly in the wings. Not emptiness, but non-being. It is hard to imagine because imagination itself requires a stage. If true Nothing exists, it cannot be part of the universe. It cannot interact with it. It cannot ev...

Fine-Tuning Language Models on NVIDIA DGX Spark

[Image credit: Sanjay Basu]

Complete How-To Guide

Overview

This guide provides comprehensive instructions for fine-tuning open-source language models on the NVIDIA DGX Spark personal AI supercomputer. The DGX Spark’s unique 128GB unified memory architecture enables local training of models that would traditionally require cloud infrastructure. Fine-tuning allows you to customize pre-trained models for specific tasks, domains, or response styles while preserving their general capabilities. This guide covers three fine-tuning strategies: full fine-tuning for maximum customization, LoRA for memory-efficient adaptation, and QLoRA for training even larger models within memory constraints.

DGX Spark Hardware Advantages

The NVIDIA DGX Spark provides several key advantages for local AI development:

- 128GB Unified Memory: CPU and GPU share the same memory pool via NVLink-C2C, eliminating memory transfer bottlenecks
- Grace Blackwell Architecture: Purpose-built for AI workloads with up to 1 PF...
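To make the LoRA strategy in the preview above concrete, here is a minimal sketch using the Hugging Face transformers, peft, and datasets libraries. The model name, dataset, and every hyperparameter are illustrative assumptions for a DGX Spark-class machine, not the guide’s actual configuration.

```python
# Minimal LoRA fine-tuning sketch (transformers + peft + datasets).
# All names and hyperparameters below are illustrative placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-3.1-8B"  # placeholder; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B base model well inside 128GB
)

# LoRA: freeze the base weights and train small low-rank adapter matrices
# injected into the attention projections, so only ~0.1-1% of parameters train.
lora_config = LoraConfig(
    r=16,                                 # rank of the adapter matrices
    lora_alpha=32,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # common choice for Llama-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction

# Toy instruction data; swap in your own dataset.
dataset = load_dataset("tatsu-lab/alpaca", split="train[:1000]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # mlm=False makes the collator copy input_ids into labels for causal LM loss
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights
```

Because the base weights stay frozen and only the adapter matrices train, the gradient and optimizer state stay small, which is what lets an 8B-class model fine-tune comfortably inside a single 128GB unified-memory budget. Saving the result writes only the adapter, typically tens of megabytes rather than a full checkpoint.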