Why Does Cybersecurity Actually Exist?


I’ve spent several years working in IT governance and compliance — change management, vulnerability management, GRC — and at some point I realized I had never properly asked the question sitting underneath all of it: why does cybersecurity as a field actually exist? Not “what does it do” or “how does it work” — but what is it fundamentally for, and why has it become so critical?

The more I dug into it, the more I realized the answer goes much deeper than most people think. This is my attempt to lay it out clearly.

The physical security comparison breaks down fast

The instinct most people have is to compare cybersecurity to physical security. A business has a building. The building gets a guard, some locks, maybe cameras. The attack surface is finite — someone has to physically show up. So why does securing a digital business cost so much more and require so much more effort?

Because a digital business doesn’t exist in one place. It exists everywhere simultaneously. Your bank’s app is reachable from Russia, North Korea, a coffee shop in Brazil, and your bedroom — all at once, 24 hours a day. You’re not defending a building. You’re defending something that anyone on earth with an internet connection can attempt to access at any moment, automatically, at scale, for free.

One attacker with a script can probe a million companies overnight. No physical security problem has ever worked like that.

That single shift changes the entire nature of the problem. It’s not the same category as physical security. It’s fundamentally different.

It’s not really about computers

Here’s the insight that reframes everything: cybersecurity isn’t protecting computers. It’s protecting whatever those computers control.

Think about a hospital. The business is treating patients. But the hospital runs on electronic health records, drug dispensing systems, connected MRI machines, ventilators with wireless interfaces, billing software, and scheduling tools. Every single one of those is software. And software can be reached, broken into, or held hostage.

When a hospital gets hit with ransomware, those systems lock up. Surgeries get cancelled. Medications get delayed. People have died from this. The attack didn’t target a computer — it targeted the hospital’s ability to function as a hospital. The cyber threat became a patient safety threat.

That’s the core of it: the more that everyday life runs on digital systems, the more that securing those systems is actually about protecting people. Healthcare, water treatment, power grids, financial systems, emergency services — all of it runs on software now. All of it can be disrupted.

The asymmetry is the real structural problem

What makes cybersecurity genuinely hard — not just expensive, but structurally difficult — is the asymmetry between offense and defense.

Attackers need to find one way in. Defenders need to protect every possible way in. Attackers share their techniques globally, for free. Defenders are often starting from scratch per organization. And the attack happens once; the defense has to hold forever.

This is why the actual goal of cybersecurity is never perfect security. It’s raising the cost of attack high enough that most attackers move on to an easier target. You’re not trying to make something impenetrable. You’re trying to make yourself not worth the effort. At its core, the whole field is about economics as much as technology.
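That economic framing can be made concrete with a toy expected-value model. Every number below is a made-up illustration, not an empirical figure; the point is only the shape of the tradeoff.

```python
# Toy model of attacker economics: an attack is rational only while
# the expected payoff exceeds the expected cost of attempting it.
# All numbers are illustrative assumptions, not real-world figures.

def expected_attack_value(payoff: float, success_prob: float, cost: float) -> float:
    """Expected profit for one attack attempt."""
    return payoff * success_prob - cost

# A soft target: cheap to attack, decent odds of success.
soft = expected_attack_value(payoff=50_000, success_prob=0.10, cost=500)

# The same target after hardening: each added control lowers the
# attacker's odds and raises the attacker's cost.
hardened = expected_attack_value(payoff=50_000, success_prob=0.01, cost=5_000)

print(soft)      # positive: rational to attack
print(hardened)  # negative: the attacker moves on to an easier target
```

Nothing here requires the hardened target to be impenetrable; it only requires the expected value of attacking it to go negative before the attacker's alternatives do.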

Why the insurance analogy isn’t quite right

A useful way to think about cybersecurity is through the lens of risk management — risk exists, so an industry exists to manage it. That’s true as far as it goes. But cyber risk has a property that makes it different from the risks that traditional insurance handles.

Traditional insurance works because losses are bounded and independent. Your house burns down — that’s a defined loss, and it doesn’t cause my house to burn down. The math works across a large pool of policyholders.

Cyber risk doesn’t work like that. A single breach can cascade across thousands of organizations simultaneously. One software vulnerability, one compromised vendor, one bad update, and the impact spreads instantly and globally. Losses aren’t independent; they’re contagious. This is part of why cyber insurers have been narrowing coverage in recent years, raising premiums and adding exclusions. The math doesn’t hold the same way.
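The difference between independent and contagious losses shows up clearly in a small Monte Carlo sketch. The parameters below (1,000 firms, a 2% annual loss probability, $1M per loss) are illustrative assumptions; the comparison, not the numbers, is the point.

```python
import random

# Toy Monte Carlo: 1,000 insured firms, each with a 2% chance of a
# $1M loss in a given year. All parameters are illustrative.
random.seed(42)
N_FIRMS, P_LOSS, LOSS, YEARS = 1000, 0.02, 1.0, 10_000

def independent_year() -> float:
    # Fires: each firm's loss is its own independent coin flip.
    return sum(LOSS for _ in range(N_FIRMS) if random.random() < P_LOSS)

def contagious_year() -> float:
    # Cyber: one shared event (a bad update, a breached vendor)
    # hits every exposed firm at once. Same per-firm expected loss,
    # but the losses all land in the same year.
    return N_FIRMS * LOSS if random.random() < P_LOSS else 0.0

ind = sorted(independent_year() for _ in range(YEARS))
con = sorted(contagious_year() for _ in range(YEARS))

print(sum(ind) / YEARS, sum(con) / YEARS)  # averages come out similar
print(ind[-1], con[-1])                    # worst years do not
```

Both portfolios have roughly the same average annual loss, so the premium math looks identical at first glance. But the contagious portfolio's worst year wipes out the entire pool at once, which is exactly the scenario an insurer prices against.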

The better analogy is public health. One compromised system can infect thousands. Defense has to be infrastructural and collective, not just individual. A single unpatched system creates risk for everyone connected to it. That’s not an IT problem — it’s a systemic one.

The internet was never designed to be secure

Something that puts all of this in context: the internet wasn’t built for security. ARPANET, the precursor to the modern internet, was designed in the late 1960s with open communication as the goal. The foundational assumption was that everyone on the network was trusted. Security came later, retrofitted onto a system that wasn’t built for it.

The internet’s openness is not a bug that slipped through. It was a deliberate design choice — because openness is what makes it valuable. The problem is that the same property creates the vulnerability.

The architecture that lets a doctor in a rural area access the latest medical research from across the world is the same architecture that lets an attacker probe that hospital’s records from anywhere on earth. You can’t fully separate those two things. The vulnerability is structural, not accidental. Which is why the defense can never be “solved” permanently — it has to be continuous.

At its core, it’s a trust problem

The further you go with this, the more you realize cybersecurity is fundamentally a trust problem — and trust is one of the oldest human problems there is.

How do I know you are who you say you are? How do I know this message wasn’t altered in transit? How do I know this system is doing what it’s supposed to and not something designed to look like it is?

These aren’t new questions. Philosophers have asked them for thousands of years. We just now have to answer them at the speed of light, at global scale, billions of times per second. The entire field of cryptography is built around a single question: how do two parties who have never met establish trust over an untrusted medium? Every authentication system, every certificate, every handshake protocol is just a different answer to that same question.
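One concrete answer to that question is a Diffie-Hellman key exchange, which can be sketched in a few lines. The modulus and generator below are tiny toy values chosen for readability; real deployments use 2048-bit-plus groups or elliptic curves, and this sketch omits the authentication a real protocol needs.

```python
import secrets

# Toy Diffie-Hellman: two parties who have never met agree on a
# shared secret over a channel an eavesdropper can read in full.
P = 4294967291  # a small prime modulus (illustrative only)
G = 5           # a public generator (illustrative only)

def keypair():
    private = secrets.randbelow(P - 2) + 1  # never leaves this machine
    public = pow(G, private, P)             # safe to send in the clear
    return private, public

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own secret with the other's public value.
# An eavesdropper sees alice_pub and bob_pub but, without a private
# exponent, cannot feasibly recover the shared secret.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)

assert alice_secret == bob_secret  # same key, never transmitted
```

The shared secret is computed on both ends but never crosses the wire, which is the whole trick: trust is established without the untrusted medium ever carrying anything an eavesdropper can use.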

And here’s where it gets interesting: computers only trust what humans told them to trust. Every root certificate, every hardware identity, every trust policy — those all end at a human decision made upstream. Computers don’t have better trust than humans. They execute trust decisions faster and more consistently. If the human judgment at the foundation is flawed or corrupted, the programmatic system built on top executes that flaw at machine speed. That’s actually more dangerous in some ways — because humans second-guess each other. Computers don’t.
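The way a machine’s trust terminates in a human decision can be sketched without any real cryptography. The chain-walking below is a conceptual toy (hypothetical certificate names, no signature checks); real certificate validation is far more involved, but the shape is the same: every link is verified mechanically, and the anchor is a list a human installed.

```python
# Conceptual sketch (no real cryptography): every chain of machine
# trust terminates in an anchor a human decided to install.

TRUSTED_ROOTS = {"ExampleRoot CA"}  # a human put this name here

# A hypothetical certificate chain, leaf first.
CHAIN = [
    {"subject": "app.example.com", "issuer": "Example Intermediate CA"},
    {"subject": "Example Intermediate CA", "issuer": "ExampleRoot CA"},
    {"subject": "ExampleRoot CA", "issuer": "ExampleRoot CA"},  # self-signed
]

def chain_is_trusted(chain, roots) -> bool:
    """Walk the chain: each cert must be issued by the next one up,
    and the walk must end at a human-installed root."""
    for cert, issuer_cert in zip(chain, chain[1:]):
        if cert["issuer"] != issuer_cert["subject"]:
            return False  # broken link: nobody vouches for this cert
    return chain[-1]["subject"] in roots  # the human decision, executed

print(chain_is_trusted(CHAIN, TRUSTED_ROOTS))       # True
print(chain_is_trusted(CHAIN, {"SomeOther Root"}))  # False
```

Note that the function never asks whether the root deserves trust; it only checks membership in a set. The judgment happened upstream, once, in a human's head, and the machine now executes it at machine speed, every time, without second-guessing.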

What this actually means for the field

When you put all of this together, cybersecurity stops looking like an IT function and starts looking like critical infrastructure for how modern society operates.

The systems that carry our communication now are our communication. Elections run on them. Commerce runs on them. Medical decisions get made through them. Wars are being fought across them right now. Securing those systems isn’t protecting a server — it’s protecting the substrate of how people relate to each other, access services, and exercise power in the modern world.

And the limit of any security system — no matter how technically sophisticated — is always the same: the authorized human on the inside. You can build extraordinary defenses, and someone still gets tired, makes a mistake, gets coerced, or gets greedy. The technical problem is largely solvable in theory. The human problem never fully is.

That’s what makes this field endlessly relevant. It exists because trust is hard, systems are imperfect, and humans are human. Those conditions aren’t going away — which means neither is the need for people who understand this problem at a deep level and know how to address it thoughtfully.

Why I keep coming back to this

I started asking this question out of intellectual curiosity — wanting to understand the field I work in at a level beyond frameworks and controls. What I found is that cybersecurity, properly understood, sits at the intersection of technology, philosophy, economics, and public safety. It’s one of the few fields where the technical and the deeply human are inseparable.

That intersection is what makes it worth taking seriously — especially in areas like healthcare, where the stakes of getting it wrong aren’t measured in data or dollars, but in patient outcomes.

If you work in this space and have never paused to ask why it matters, I’d encourage you to. The tools and certifications matter. But understanding the purpose behind them changes how you approach the work.