Outside my apartment window, piles of snow lie coated with an icy sheen that formed overnight. Inside, moisture from the warm air condenses on the windows. Droplets form and break free, running down the foggy glass in the morning light. I am fortunate to have a job that I can do from home, but, like many people, I have found that the long year of social distancing and working from home has taken its toll on my ability to focus. So I stare off into the distance, half gazing at the rivulets forming on my window. Distraction might not ordinarily be such a problem, but I work in cybersecurity. While the world grapples with one kind of public health crisis in physical space, another is taking place online.

The internet is over fifty years old and the World Wide Web is in its early thirties, yet 2020 was the worst year on record for cybersecurity. The painful transition to a digital economy has been accelerated by remote work, robotic process automation, and other no-touch solutions, all of it compounded by the precarious political state of the world. Moving toward this new economy has dramatically increased the impact of criminal and nation-state cyber incidents. For example, up to eighteen thousand customers, including multiple U.S. federal agencies, were affected by last year’s SolarWinds breach, in which adversaries hijacked customers’ systems by infiltrating SolarWinds, a software company that sells network monitoring products. Another example of the difficult transition is the pronounced uptick in ransomware, a kind of attack that encrypts critical data and holds it hostage for payment. This all occurs in a milieu of rampant misinformation and social media manipulation campaigns. Previously uninterested friends are beginning to ask, “How can we fix cybersecurity?” Instead we should be asking, “Where can we improve cybersecurity?”

It is difficult to know where to start. On a practical level, most organizations don’t even know all the devices (phones, laptops, web servers, email servers, TVs, door sensors, refrigerators, wearables, etc.) on their networks (home networks, office networks, intranets, etc.). What’s more, the high cost of security activities and the low (or no) revenue they generate make even basic tasks a nonstarter. More philosophically, security can’t be fixed any more than temperature can be fixed. Temperature, like security, is a property of a system, by which scientists mean it is a description of a system at a point in time. It is a difficult idea to conceptualize, which gives us something in common with the scientists who studied thermodynamics two hundred years ago.

Sadi Carnot, the “father of thermodynamics,” led a brief but incendiary life. Born in Paris during the tumultuous French Revolution, Carnot graduated from the prestigious École Polytechnique and became an army officer in the last days of the Napoleonic Empire. He was not well suited for military life, however, and began dedicating more time to scientific pursuits, particularly the steam engine. At the age of twenty-seven he published Reflections on the Motive Power of Fire, the founding text of thermodynamics. Just eight years later, he was interned in an asylum for “mania,” caught cholera, and died, leaving behind a legacy far greater than he could have imagined.

Not only did Carnot’s work explain and improve upon the steam engine, but his theory of heat introduced the concept of dynamic systems. He published two breakthrough insights that would be elaborated on by Lord Kelvin and Rudolf Clausius to become the first two laws of thermodynamics. The first insight was to consider the heat engine as a closed system, one that could be described as a series of transitions from one state to another. The second was the observation that heat moving from a high temperature to a low one produces work. By measuring changes in heat and work, scientists could measure the internal energy of a system. These facts about heat engines had profound implications for science and engineering that led to the second industrial revolution. Today, they remain the intellectual bedrock for what we know about systems thinking.
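In modern textbook notation (a later formalization, not Carnot’s own), that bookkeeping of heat and work is summarized in a single relation:

```latex
\Delta U = Q - W
```

where ΔU is the change in the system’s internal energy, Q is the heat added to the system, and W is the work the system performs on its surroundings.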

Information systems, like thermodynamic systems, are in a constant state of flux. They are created to accept, transform, and emit information under certain conditions. The Second Law of Thermodynamics, which Carnot discovered in 1824 and Clausius expanded upon in the 1850s, states that “heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.” Clausius named this dissipation of energy “entropy,” from the Greek ἐν (pronounced “en” and meaning “in”) and τροπή (pronounced “tropē” and meaning “transformation”). This “transformation content,” or energy leakage, creates some loss of usable energy in every thermodynamic cycle. Engineer and cryptographer Claude Shannon, often called the father of information theory, proposed a similar relationship for information. Just as thermodynamic systems leak energy through entropy, information systems leak information through information entropy.
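To make the analogy concrete, here is a minimal sketch in Python (the function and example strings are my own illustration, not Shannon’s notation) that estimates the Shannon entropy of a message in bits per symbol; the more evenly a source spreads its output across its alphabet, the more information each symbol carries, and the more a system that emits it can leak.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Estimate Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A repetitive message carries little information per symbol...
print(shannon_entropy("aaaaaaab"))  # ~0.54 bits per symbol
# ...while one that uses its alphabet evenly approaches the maximum.
print(shannon_entropy("abcdefgh"))  # 3.0 bits per symbol
```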

“It is incumbent upon good people doing hard work to stay at the frontier of technology.”

Knowing that an information system in use inevitably leaks information and that security is a property of a system, should we simply expect cybersecurity—not to mention all of the information-age institutions we have constructed on top of it—to get worse? It is tempting to take a nihilistic view, but the transition from “how can we fix cybersecurity?” to “where can we improve cybersecurity?” is reason for hope, not despair. It’s easy to take systems for granted. Computer scientists and programmers are routinely impressed that anything in technology works at all. When you think about the level of undirected coordination necessary for the internet to work, much less the hardware and software required to build it, it is a technological miracle. Milton Friedman famously used the simple example of a pencil to describe the complex system of cooperation behind the creation, distribution, and maintenance of consumer products and services: “Look at this lead pencil. There’s not a single person in the world who could make this pencil. Remarkable statement? Not at all.”

That systems so routinely fail or fall victim to attack, and yet resilience, redundancy, and resolve keep our institutions afloat, is cause for optimism. While we have so far managed to avoid a cyber Pearl Harbor, we will not be so lucky in the future unless we reframe our perspective on cybersecurity. To improve security without naively hoping to “fix” it, we will have to reinforce weak points in the system. These weak points are often points of transition, such as software development, patching systems in production, and software end-of-life.

Software development is the process of gathering requirements, designing, implementing, testing, and documenting software. Few risks manifest at the design stage because the system is not yet in use. Security, however, is a property of a system, not simply a feature that can be added on later, which makes development the most critical stage for determining a system’s security. Other system properties, like convenience, scalability, safety, quality, usability, and reliability, have to be designed in as well. As with launching a ship or flying an airplane, it is only when systems are put into production, often in an inhospitable environment, that shortcomings in any of these properties become apparent.

Production software is where most cybersecurity effort is spent. As in healthcare, ninety percent of cybersecurity costs are incurred after people are “sick,” despite preventative care being a more affordable and more effective first line of defense. Fixing design flaws in production is expensive, but many flaws are only discovered during the transition. Furthermore, software developers may have higher priorities, such as convenience and usability. As computer security expert Dan Geer wrote, “Installing the patch in a production machine can be like changing the tires on a moving car. In the best case, it is a delicate operation—and only possible if you plan ahead.” Systems that survive deployment, avoid significant changes in use, and depend on relatively few other systems can remain secure in production for a long time. This is both a blessing and a curse: a blessing because they appear robust, and a curse because many other systems may come to depend on these long-lived systems, creating the potential for cascading failures.

While systems that make it through the treacherous transition from design to production may operate securely for years or even decades, developers often take these legacy systems for granted. For example, Internet Protocol version 4 (IPv4), the system that assigns every device on the internet a numerical address (e.g., http://172.217.11.14 is equivalent to http://google.com; try it in your browser!), is currently being replaced with IPv6, a much larger set of internet addresses, to confront the dwindling supply of IPv4 addresses. Although intended simply to expand the number of available addresses, this transition (like all transitions) comes with unintended consequences. As IT professionals transition to the new protocol, they open their systems up to all sorts of attacks and brand-new security flaws.
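For a rough sense of scale, here is a short illustrative sketch in Python (using only the standard library’s ipaddress module; the snippet is mine, not part of either protocol specification) that checks the numeric address from the example above and compares the two address pools:

```python
import ipaddress

# The numeric address from the example above is an ordinary IPv4 address.
addr = ipaddress.ip_address("172.217.11.14")
print(addr.version)  # 4

# IPv4 offers 2**32 addresses; IPv6 offers 2**128.
print(f"IPv4 pool: {ipaddress.ip_network('0.0.0.0/0').num_addresses:,}")  # 4,294,967,296
print(f"IPv6 pool: {ipaddress.ip_network('::/0').num_addresses:.3e}")     # ~3.403e+38
```

The gulf between those two numbers is one reason habits built for IPv4, such as exhaustively scanning one’s own address range, do not carry over cleanly to the new protocol.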

Another type of transition, when systems move from production to end-of-life, is even more dangerous than the transition from design to production. Deploying a flawed system may leave that system open to attack, but deprecating software may affect the many systems built on top of it. Many ATMs still run Windows XP, even though official support for the operating system ended in 2014. As our society becomes even more dependent on computer networks and as system complexity increases, the risks associated with end-of-life software grow exponentially.

The improvements needed to address these transitional weak points are going to require a massive investment of resources, time, and intellect. Most of the effort so far has gone into mitigating risk, an understandable focus given what it takes just to maintain workable systems. Software engineering curricula increasingly require engineers to learn security best practices. Advances in machine learning, cloud computing, and cryptography will provide defenders with ever-improving tools to prevent, detect, and correct vulnerabilities.

To believe we can simply innovate our way out of this problem would, however, be naïve techno-utopianism. New tools built on advances in machine learning, cloud computing, and cryptography will be available to attackers as well as to developers. It is incumbent upon good people doing hard work to stay at the frontier of technology. So far, that arrangement has held, and it is my great hope that it continues. Yet a technological solution poses one further, more insidious risk.

To adapt science fiction author Arthur C. Clarke’s famous law: “Any sufficiently advanced [antivirus] is indistinguishable from [malware].” It is a cliché that security software is both a shield and a weapon. As the SolarWinds breach showed, surreptitiously gaining access to administrative tools is as good as (if not better than) being able to exploit a vulnerability. Although a greater dose of technology has so far proven to be the medicine for its own sickness, we cannot count on that holding true in the future.

So where does my tentative optimism come from, if not technological risk mitigation? It comes from the fact that we have hardly begun to apply the three other approaches to risk management: avoidance, transfer, and acceptance. The first of these will be the easiest for society to implement. The second will be an evolution, not a revolution. The third will require a fundamental reconceptualization of norms and privacy.

Risk avoidance means eliminating hazards by not engaging in risky activities. It can be encouraged by imposing costs on organizations that do not fully bear the burden of the risks they take, and regulations are already beginning to have an effect. Europe’s General Data Protection Regulation demands that organizations state the explicit purpose of their data gathering, and the California Consumer Privacy Act requires disclosure of consumer data collection. Governments can then begin to price in the costs that negligent or willful data mismanagement imposes on citizens. Even more promising than regulation is consumer pressure. Products like Apple’s Face ID and Signal’s encrypted messaging app do not even send sensitive data to a central location.

Providing open code review and guarantees of minimum necessary data collection reduces the information asymmetry between developer and user, the asymmetry that is usually the sticking point preventing consumer-led reform. Juxtaposing the pay-for-use business model with advertising-driven and other data-collection business models gives consumers a way to punish overeager data collectors. However, some sectors, such as finance and healthcare, are legally obligated to collect and store sensitive data. For these industries, there are still two more approaches: transfer and acceptance.

Risk transfer is a powerful tool for aligning incentives. It shifts risk from one party to another, thereby creating an advocate with a clear incentive to minimize that risk. For many organizations, missing out on an opportunity may be more painful than the potential risk incurred. To return to our comparison with thermodynamic systems, consider the steam boiler. In the early hours of April 27, 1865, three of the four massive coal-fired boilers on the steamboat Sultana exploded. The Mississippi paddle steamer burned to the waterline just north of Memphis, Tennessee, killing over eleven hundred passengers and crew members. It was forty-one years after Carnot published his Reflections on the Motive Power of Fire; the steam engine was no longer a novelty, but it was not yet a mature technology. The Hartford Steam Boiler Inspection and Insurance Company was founded one year later, during a period in which one steam boiler exploded every four days.

Over the next fifty-five years, insurers inspected, standardized, and supported the engineering of steam boilers to ensure their safety. Where risk cannot be avoided and has not been successfully mitigated, transferring it can create an incentive for oversight, research, and development. Yet risk avoidance and transfer will not work for organizations that must collect sensitive information but are not responsive to market forces.

Risk acceptance is the last sanctuary of these organizations. Governments and NGOs face particularly vexing cybersecurity problems: their adversaries are the most sophisticated, their goals are the least clear, and their organizational challenges are the greatest. For them, there are only two choices: mitigate risk or accept it. Risk acceptance does not indicate defeatism. Acceptance, rather, means a sober analysis of the consequences along with a plan for responding to an event. The result may be releasing information voluntarily or operating in public, meeting potential consequences head-on rather than being surprised later. The necessity of secrets will always be a feature of some institutions, but minimizing the need to dissemble can be far more powerful than imperfect mitigation.

In my apartment, my attention has turned from window-mediated heat transfer back to Zoom meetings, terminal windows, and emails on a tiny laptop screen. It is afternoon now, and the sun has warmed the outside air enough that condensation no longer forms. Some heat still escapes through the glass, but so much less now that the temperature inside verges on stifling. Rising from my chair to turn off the heat, I pause to reflect on what kind of world I hope to see in five, ten, or fifty years.

Will the extreme of tech idealism lead to immolation, or will the icy chill of cyber nihilism freeze all progress? How do we tread the moderate path and launch the second information revolution? In any case, cybersecurity won’t be “fixed,” but we can build a world in which it won’t need to be. To return to Claude Shannon, the father of information theory: he once wrote that “We may have knowledge of the past but cannot control it; we may control the future but have no knowledge of it.” Fewer secrets, better protected, is a way to control the future, even though we won’t know what that future will look like until the transition has already happened.  ◘