When the Grid Goes Dark: Why Cyber Resilience Is Now a Disaster Management Problem
I've been thinking about this a lot lately — what happens when a cyberattack doesn't just crash a website, but knocks out a dam's sensor network, or delays a flood alert by 40 minutes? That's not a hypothetical. Variants of it have already happened around the world, and India — with its rapidly digitalising infrastructure — is more exposed than most people realise.
Disaster management has traditionally been a physical problem. Roads, shelters, relief camps, rescue teams. But the command layer sitting above all of that — the early warning systems, the coordination apps, the control systems for power grids and water reservoirs — that's software now. And software fails differently from physical infrastructure. It fails silently, instantly, and sometimes deliberately.
What bothers me as a CS student is how rarely these two worlds talk to each other. Disaster risk reduction (DRR) frameworks spend very little time on the question: what if the digital infrastructure itself is the disaster? The Sendai Framework's Priority 1 is "Understanding disaster risk" — but most of the risk models I've read treat cyber threats as an IT problem, not a DRR problem. That distinction matters less and less every year.
AI adds another layer to this. Right now, AI is being used in disaster contexts for image-based flood detection, satellite analysis, and real-time resource allocation. These tools are genuinely useful. But they introduce new failure points: adversarial inputs, model brittleness under novel scenarios, overconfidence in predictions. A model trained on historical flood patterns might be confidently wrong about a flood shaped by a changing climate. And in disaster management, confidence is dangerous when it's misplaced.
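That "confidently wrong" failure mode is easy to demonstrate on toy data. Here's a minimal, entirely hypothetical sketch: a logistic model fit to a historical rainfall-flood threshold stays very confident in its predictions even after a climate shift moves the real threshold. Every number below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical regime: floods occur when 24h rainfall > 200 mm.
rain_hist = rng.uniform(0, 300, 500)
flood_hist = (rain_hist > 200).astype(float)

# Fit a 1-D logistic regression by plain gradient descent.
x = (rain_hist - rain_hist.mean()) / rain_hist.std()  # standardise input
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 1.0 * np.mean((p - flood_hist) * x)
    b -= 1.0 * np.mean(p - flood_hist)

def flood_prob(rain_mm):
    """Model's estimated flood probability for a given rainfall."""
    z = w * (rain_mm - rain_hist.mean()) / rain_hist.std() + b
    return 1 / (1 + np.exp(-z))

# Shifted regime: saturated soils now flood at > 120 mm. At 150 mm the true
# answer is "flood", but the model still reports a near-zero probability --
# confidently wrong, because nothing in its training data changed.
print(f"P(flood | 150 mm) = {flood_prob(150.0):.2f}")
```

The model isn't "broken" in any detectable way; it is faithfully reporting the old world's statistics with the old world's confidence.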
I don't have clean answers to any of this, which is part of why I'm interested in working at NIDM. These problems sit exactly at the intersection I care about — systems thinking, resilience engineering, and the very unglamorous work of making critical infrastructure actually trustworthy. The question I keep coming back to: if we're building AI tools for disaster response, who is stress-testing them for adversarial conditions? And how do we make sure the failure mode of that tool is graceful rather than catastrophic?
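One concrete shape "graceful failure" can take is an out-of-distribution guard: when the input falls outside anything the model was trained on, stop trusting the model and degrade to a conservative rule that errs toward alerting. A toy sketch, with every name and threshold hypothetical:

```python
# Assumed training range and fallback threshold -- illustrative values only.
TRAIN_MIN_MM, TRAIN_MAX_MM = 0.0, 300.0
CONSERVATIVE_ALERT_MM = 100.0

def model_flood_risk(rain_mm: float) -> float:
    """Stand-in for a learned model; hypothetical linear ramp."""
    return min(max((rain_mm - 150.0) / 150.0, 0.0), 1.0)

def flood_alert(rain_mm: float) -> str:
    # Graceful degradation: outside the training range, ignore the model
    # and fall back to a simple rule biased toward raising an alert.
    if not (TRAIN_MIN_MM <= rain_mm <= TRAIN_MAX_MM):
        return "ALERT" if rain_mm > CONSERVATIVE_ALERT_MM else "WATCH"
    return "ALERT" if model_flood_risk(rain_mm) > 0.5 else "OK"

print(flood_alert(500.0))  # outside training range -> conservative path
```

The point isn't this particular guard — it's that the fallback behaviour is a design decision someone has to make deliberately, before the novel event, not during it.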