When the Internet Held Its Breath: A Personal Tale of Digital Dependency.
You, Me & Everyone Else Are All at the Mercy of Connectivity.

Yesterday morning, I sat down with my coffee, ready to check my Beehiiv account and review my latest newsletter stats. I typed in my login credentials. Nothing. I refreshed the page. Still nothing. Confusion ensued. However, my Substack account worked just fine. Weird. Had I been hacked? Did someone break into my account and delete everything I'd built? A year of building a subscriber base, carefully crafted content, all my analytics: was it all just… gone? I tried different browsers. I restarted my computer and my phone. I checked my email for suspicious password-reset notifications. Nothing made sense. Then I did what any rational person does in 2025 when technology fails them: I raced to Twitter to see if anyone else was freaking out. Turns out, I hadn't been hacked. Something far more widespread was happening.

Amazon Web Services' US-East-1 region, a sprawling cluster of data centers in Northern Virginia, had gone down. And when it went down, it took a massive chunk of the internet with it. The casualties read like a who's who of modern digital life. Snapchat stopped sending messages. Reddit's endless scroll ground to a halt. Gamers couldn't log into Fortnite or Roblox, sparking panic among millions of kids (and, let's be honest, plenty of adults too). Designers couldn't access Canva. Language learners lost their Duolingo streaks. People trying to book vacation rentals found Airbnb unavailable. WhatsApp messages sat undelivered. Etsy sellers watched helplessly as their online storefronts vanished. Bank of Scotland customers couldn't check their balances. Even UK government websites went dark. You get the drift.

For a few hours yesterday, a significant portion of the internet just… stopped working. And the culprit? One cloud region in Northern Virginia!
Here's the thing that should terrify us all: this wasn't some sophisticated cyberattack or apocalyptic scenario. It was a routine outage in a single region. Yet it brought down thousands of websites and services that billions of people depend on daily. That meme showing the entire internet balanced precariously on top of "AWS US-East-1" suddenly didn't seem funny anymore. It seemed prophetic. Let me explain why this matters. Amazon Web Services isn't just some obscure side business you can safely ignore. Of course you know about Amazon. We all do. But what exactly is AWS?

AWS (Amazon Web Services) is the backbone of the modern internet. I've become familiar with it by way of doing due diligence on semiconductors. When you use most apps or websites, you're not connecting directly to that company's computers. Instead, you're connecting to servers that company rents from AWS. It's like how most businesses don't own their office buildings; they lease space. AWS leases out computing power, and they're really, really good at it. The problem is that AWS has become too good at it. They control roughly one-third of the entire cloud computing market (see chart below). Their closest competitors, Microsoft Azure and Google Cloud, trail significantly behind. This concentration of power means that when AWS sneezes, the internet catches a cold. But it gets worse. Within AWS itself, certain regions have become disproportionately important.

US-East-1, the region that failed yesterday, is AWS's oldest and largest region. This means thousands of companies, from tiny startups to massive corporations, have built their entire digital presence on this one location. It's like everyone deciding to build their houses on the same plot of land, then acting surprised when a sinkhole takes out the whole neighborhood. The architectural choices that led to yesterday's chaos were made by well-meaning engineers who were optimizing for cost and simplicity. Why spread your application across multiple regions when that's more expensive and complicated? Why not just use the default settings that AWS recommends?
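To picture what that default choice looks like in practice, here is a minimal sketch of a single-region setup in Python using the boto3 SDK. The bucket, table, and email addresses are hypothetical placeholders, not anything a real company uses; the point is simply that every client gets pinned to the same region.

```python
# A minimal sketch of the "default settings" pattern, using Python and the
# boto3 SDK. The bucket, table, and addresses are hypothetical placeholders.
import boto3

REGION = "us-east-1"  # the default almost everyone reaches for

# Every piece of the stack is pinned to the same region.
s3 = boto3.client("s3", region_name=REGION)              # file storage
dynamodb = boto3.client("dynamodb", region_name=REGION)  # database
ses = boto3.client("ses", region_name=REGION)            # outgoing email

def publish_issue(issue_id: str, html: str) -> None:
    """Archive and announce a newsletter issue; every call depends on one region."""
    s3.put_object(Bucket="newsletter-archive", Key=f"{issue_id}.html",
                  Body=html.encode("utf-8"))
    dynamodb.put_item(
        TableName="issues",
        Item={"issue_id": {"S": issue_id}},
    )
    ses.send_email(
        Source="editor@example.com",
        Destination={"ToAddresses": ["subscribers@example.com"]},
        Message={"Subject": {"Data": f"Issue {issue_id}"},
                 "Body": {"Html": {"Data": html}}},
    )
    # If us-east-1 has a bad day, all three calls fail together.
```

Nothing about that code is wrong, exactly. It's cheap, it's simple, and it works beautifully right up until the one region it depends on doesn't.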
For years, these decisions seemed perfectly reasonable. Then yesterday happened, and those decisions looked a lot less smart. What made the outage particularly frustrating was its cascading nature. When US-East-1 went down, it didn't just affect services hosted there directly. Many companies use AWS for their authentication systems, databases, or content delivery networks. So even if your main website was hosted elsewhere, if any one component depended on US-East-1, your entire service could fail. It's like a car that won't run because one small part broke; everything is connected. (There's a short sketch of this a few paragraphs down.)

The real wake-up call here isn't just about Amazon. It's about how we've structured the entire internet. We've created massive single points of failure, and we've done it in the name of efficiency and cost savings. Three companies control the majority of cloud computing: Amazon, Microsoft, and Google. Most internet traffic flows through a handful of massive data centers. A few companies provide most of the world's connectivity services. We've built an incredibly fragile system and convinced ourselves it's robust because it usually works.

Yesterday, millions of people got a glimpse of what happens when "usually works" stops working. Businesses lost revenue. People couldn't do their jobs. Students couldn't access their homework. Essential services became unavailable. All because one cloud region in Virginia had a bad day.
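Here's that sketch: a hedged illustration of how one upstream dependency can sink an otherwise healthy service. The authentication URL is a hypothetical placeholder; assume it resolves to a service hosted in US-East-1 while the app itself runs somewhere else entirely.

```python
# A hedged sketch of a cascading failure. The app itself runs outside
# US-East-1, but its login check depends on an auth service inside it.
# The URL is a hypothetical placeholder, not a real endpoint.
import requests

AUTH_URL = "https://auth.example.com/verify"  # resolves to a service in us-east-1

def handle_request(session_token: str) -> str:
    try:
        # The whole request blocks on this one cross-region dependency.
        resp = requests.post(AUTH_URL, json={"token": session_token}, timeout=3)
        resp.raise_for_status()
    except requests.RequestException:
        # When us-east-1 is unreachable, every request dies right here, even
        # though the web servers, database replicas, and CDN are all healthy.
        return "503 Service Unavailable"
    return "200 OK: here is your dashboard"
```

One timed-out call, and every user sees an error page, no matter how healthy the rest of the stack is.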
The irony is that we have the technology to prevent this. Companies can distribute their services across multiple regions and multiple cloud providers. They can build redundancy into every layer of their infrastructure. But doing so is expensive and complex. It requires planning, testing, and ongoing maintenance. Most importantly, it requires thinking about failure before failure happens—something humans are notoriously bad at. So what’s the solution?
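What does that redundancy actually look like? As one hedged example, and nothing more than a sketch, here's a pattern where an application tries its primary region and falls back to a replica in a second one. The bucket names are hypothetical, and a real setup would also need cross-region data replication and health-checked DNS, which this leaves out.

```python
# A minimal sketch of region-level redundancy with boto3: try the primary
# region, then fall back to a replica in a second region. The bucket names
# are hypothetical placeholders.
import boto3
from botocore.exceptions import BotoCoreError, ClientError

REGIONS = ["us-east-1", "us-west-2"]  # primary first, then the fallback

def fetch_archive(key: str) -> bytes:
    last_error = None
    for region in REGIONS:
        try:
            s3 = boto3.client("s3", region_name=region)
            obj = s3.get_object(Bucket=f"newsletter-archive-{region}", Key=key)
            return obj["Body"].read()
        except (BotoCoreError, ClientError) as exc:
            last_error = exc  # this region is having a bad day; try the next
    raise RuntimeError("All configured regions failed") from last_error
```

That extra loop is the difference between "we were down all morning" and "nobody noticed." It just has to be designed, tested, and paid for before the bad day arrives.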

For individual companies, it's investing in redundancy even when everything seems fine. For users, it's recognizing that cloud services, despite their ethereal name, depend on very physical infrastructure that can and will fail. For society, it's probably having some uncomfortable conversations about how much of our digital infrastructure we want concentrated in so few hands.

As for me, when Beehiiv finally came back online and I saw my account intact, I felt a wave of relief wash over me. But I also felt something else: vulnerability. This will happen again. My Gore Report newsletter, my business, my digital presence: I'd built all of it on someone else's foundation. When that foundation shook, there was nothing I could do but wait. That's the reality for all of us now. We're all building on AWS US-East-1, whether we realize it or not. And yesterday reminded us just how precarious that foundation really is.
