Cybersecurity Bolt-On vs Baked-In – Lawfare

Written by ga_dahmani

A few weeks ago, the RSA Annual Conference met in San Francisco. The conference is among the largest cybersecurity events in the world and thus provides a useful opportunity to reflect on current cybersecurity issues.

One of the most prominent issues in cybersecurity is whether to “bake” security into product development from the start or to “bolt it on” as an afterthought. A company that treats bolt-on security as its default product development practice is often acting according to its economic incentives. Effort devoted to security does nothing to improve the functionality of a new product. In an environment where time to market is often the key to market success, it can make good economic sense to fix security problems as they become apparent after product launch, rather than spending upfront effort to prevent those problems from arising in the first place. A product manager may believe that the probability of a vulnerability being discovered is low, or that the economic loss resulting from its discovery would be small. The product manager can therefore make what appears to be an economically rational decision to fix only those problems that are discovered in the field and are serious. If the product manager is right, the resulting costs will be lower than the costs incurred under an upfront, baked-in security model.

But this bolt-on approach comes with significant cybersecurity drawbacks. Fixing a security vulnerability discovered after a product is complete often means revisiting important decisions made early in product development, with the costly result that much of the product may have to be redeveloped from those early stages.

Bolt-on security also leaves users exposed to security problems that could have been avoided. The security outcome is even worse when a product without baked-in security achieves a high degree of market acceptance and success. Latent security problems in a widely adopted product affect a much larger user base, potentially magnifying the consequences of those problems. And when a security problem does surface and must be remediated, the cost of back-end remediation is likely to grow as the number of users grows. With a sufficiently large user base, there are social costs that go beyond the sum of individual remediation costs.

It is therefore almost impossible to find anyone in the cybersecurity world who advocates bolt-on security. Indeed, wandering around RSA listening to various talks and presentations reveals that nearly every product vendor claims that, while others may bolt security on, they bake it in from the start.

Really? Let’s unpack that claim. Baked-in security, or security by design as it is more formally known, requires vendors to address security issues early in the product development process. Security thus becomes a product design criterion on a par with other software attributes, including maintainability, reliability, usability, efficiency, adaptability, availability, portability, scalability, fault tolerance, testability, reusability, and sustainability.

Often called the “-ilities” of software, these attributes are non-functional; that is, they do not themselves help anyone do useful work. Instead, it is the functional requirements of a product, as they are eventually implemented, that users value. When a product innovator has a great idea, it is expressed in functional requirements as an articulation of what the product is supposed to do for the user. Conceptually and in principle, such an expression is given prior to product design.

To quote Fred Brooks from “The Mythical Man-Month”:

The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination… Yet the program construct, unlike the poet’s words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself… The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.

In short, it is the implementation of the functional requirements that animates the product. And while the ultimate constraint on what physical systems can do is the laws of physics, the ultimate constraint on what software can do is the human imagination, which is far less restrictive.

But reality kicks in when the software “-ilities” are considered, and many products have failed to materialize because one or more of the “-ilities” was not adequately addressed. The same goes for security, and even baked-in security is often less than the panacea it is made out to be. When product designers receive functional requirements from senior management in the C-suite, they are generally forced to treat those requirements as boundary conditions on what the product must do. They are not free to relax or waive functional requirements, even when compliance with a particular requirement conflicts with good security practice or architecture. In practice, what baked-in security often means is that the product design team accepts the functional requirements and does the best job it can within those constraints.

But what if the functional requirements themselves pose security challenges? At some level, they must. A rock is completely immune to cybersecurity challenges, but it is also not useful in any way. For the rock to be useful, we must give it the ability to do certain things, and functional requirements are how we specify those things. Once the rock with its augmented requirements can do some useful things, its functionality can be abused. For example, suppose one requirement is that the rock can be dropped only by certain authorized persons. We must therefore develop a mechanism that keeps the rock locked to the table on which it usually rests, a mechanism that can be released only with a specific key, copies of which are given to all authorized persons. Herb is one of those people. But when Herb loses his key and George finds it, George gains the ability to drop the rock. Herb is trustworthy and would drop the rock only to illustrate the phenomenon of gravity to interested students. But the dastardly George would drop the rock to hurt cute furry animals. In short, the rock and its supporting infrastructure, which together make up The Rock™, have now become more vulnerable to attack and therefore more susceptible to abuse by the Georges of the world.

Repeated conversations I’ve had with security leaders suggest that C-suite leadership rarely considers the cybersecurity implications of product innovation or of the functional requirements for those products. On this view, the security team is an internal service organization for the C-suite rather than a partner in high-level decision-making about products. The role of a service organization is essentially to salute when given an order and then do the best job possible. By definition, C-suite orders are not subject to challenge, even if those orders entail high security risks. This description is certainly a caricature, but it captures some essential characteristics of the relationship between the C-suite and those responsible for security.

To move away from this paradigm, two things have to happen. First, the entire C-suite needs more than an understanding of security basics; in particular, its members need to know how difficult security is to get right, so they can understand how and why security concerns might arise in a proposed product. Second, the security team, and particularly the chief information security officer (CISO), needs to understand the rudiments of the business the C-suite is running, so it can understand the business issues at play. The CEO is responsible for striking the balance between functionality and security, while the role of the CISO is to ensure that the balance struck is an informed one.

Top management is not used to making trade-offs between product functionality and product security. But it likely makes analogous trade-offs with respect to other non-functional product attributes. Consider cost. No competent CEO would insist that a proposed functional requirement be retained regardless of the cost of implementing it or the expected return on investment. A CFO is expected to be familiar enough with the business to provide helpful advice and input on the financial ramifications of any particular product proposal.

For security to be treated as a truly critical product attribute, a company must sometimes give up an aspect of product functionality in order to gain security benefits. Not all the time, since that would result in a totally non-functional product (like a rock), but sometimes. A useful question for companies that claim to care about security is therefore: “Describe an instance in which product functionality was sacrificed to improve security.” If the answer isn’t unambiguously clear, beware of security hucksters who sell appearance over substance.
