Thursday, September 19, 2024

Risk Metrics Needed for IT Security

Business leaders worldwide are becoming more aware of the importance of assuring the security of information assets. Information-security issues are among the hottest topics being addressed in trade media for organizational governance, executive, financial, audit, and IT leaders. Conferences covering the latest information-security issues, tools, and problems abound in both the public and private sectors.

Government efforts have helped increase security awareness, as well. In the United States, the President’s Commission on Critical Infrastructure Protection (PCCIP) issued recommendations and launched information-security initiatives in both the government and private-sector arenas. The PCCIP has also established public-private cooperation and information sharing through the Partnership for Critical Infrastructure Security (PCIS) and Critical Infrastructure (CI) Information Sharing and Advisory Centers (ISAC), which are coordinated by the Critical Infrastructure Assurance Office (CIAO). These efforts address the emerging threats associated with the rapid growth of global Internet connectivity, as well as the disruptive potential of cyber and physical attacks, accidents, and natural disasters.

Despite this increased awareness and the persistent recommendations for improvement, key areas of information-security risk management and associated risk metrics continue to receive precious little attention. Although many guidance documents advocate taking a managed approach to risk — including risk analysis and assessment — none of them clearly and consistently define what constitutes a proper risk analysis and assessment. Even the well-known ISO 17799 standard falls well short of providing the kind of nuts-and-bolts “how-to” guidance that is needed, in my opinion.

The lack of formalized qualitative and quantitative risk metrics impairs the ability of risk managers and security professionals to effectively and consistently measure risk and points to the absence of a sound framework against which to record quantitative threat-experience data. Establishing a risk-management framework and risk metrics would greatly improve risk management by giving organizations a basis for risk analysis and assessment that would enable them to make business decisions about managing security risks.

SOME PROGRESS MADE

As early as the mid-1970s, the basic metrics of risk were established, but they were not formalized or widely disseminated. In these early years, a variety of risk-assessment methodologies and techniques emerged to help organizations identify and manage nonclassified information-security risks on a cost-benefit basis.

Some of the manual methodologies and automated approaches that were developed during the 1980s were well-conceived and are still used today. Other approaches fell by the wayside. Highly subjective qualitative methodologies provided no real support for the standard business decision-making model, which is based on return on investment (ROI).

Conducting quantitative risk assessments without supporting automated tools proved to be almost impossibly time-consuming, complex, and inflexible. Also, they were completely incapable of supporting the “what-if” analysis that is essential to sound business decision-making. The inconsistent use of risk metrics and misinformation about risk further clouded the issues.

There has been progress in developing information-security risk metrics over the past two decades, but there is still a way to go before standard metrics are established, adopted, and practiced. To start with, the need to identify, measure, and manage information-security risk has been established and subsequently reinforced, albeit tentatively. The U.S. National Institute of Standards and Technology identified key qualitative and quantitative risk metrics and established a high-level framework of the risk-analysis and assessment process related to the broader function of information-security risk management, but this work was never formalized. Many organizations have published information-security risk-management guidance as well.

However, in most of this published guidance, the essential distinction between control objectives and controls is either not clearly drawn or not drawn at all. If the managed-risk approach to information security were not recognized as the best way to achieve good information security, this would not matter much. But it is. It is virtually impossible to measure risk against "objectives," yet it is not difficult to measure risk against the absence or ineffective implementation of controls.

In addition to the above guidance publications, the Information Systems Security Association (ISSA) Guidance for Information Valuation has established methods and metrics for valuing an organization’s information assets. Critics who are unaware of this guidance have asserted that the lack of such metrics is an obstacle to executing quantitative risk analysis and assessment, because organizations don’t know how to establish the monetary value of their information assets.

Additionally, a variety of automated disaster-recovery planning, logical access-control, antivirus, authentication, encryption, and firewall technologies have helped organizations manage information security. But, that said, without applying quantitative risk-analysis and assessment techniques to the issues, there is no reliable basis — specifically ROI — for determining how much money to spend to acquire and administer these risk-management tools.

QUALITATIVE VS. QUANTITATIVE APPROACHES

Despite the general progress that has been made in recognizing the need for good information security, standard, well-defined metrics for analyzing and assessing information-security risks have not been established and formalized. Many guidance documents advocate a risk-based approach to managing information security, and they often suggest a quantitative methodology, in the loosest possible terms, as a solution. The time has come to establish and formalize the framework of metrics and measurement methods necessary to support this now-proven approach.

A previous article discussed the need for a standard language of information-security risk and defined important risk terms. In addition to this language, it is necessary to distinguish quantitative and qualitative approaches to risk analysis and assessment.

Qualitative approaches are characterized by subjective risk measures such as ordinal rankings (low, medium, or high risk or value) in a risk-to-value matrix. Qualitative methods emerged in part from a persistent belief that it was simply too difficult to get the real numbers. They also appealed to management, which was looking for the least-effort way to prove it had "assessed its risks." After all, until recently little attention was paid to the results of risk analysis and assessment.
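
As a minimal sketch of this ordinal approach, assuming an illustrative three-level scale and an equally illustrative scoring rule (neither is drawn from any standard):

```python
# A sketch of a qualitative risk-to-value matrix. The three-level scale and
# the score thresholds are illustrative assumptions, not a published method.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def qualitative_rating(likelihood: str, asset_value: str) -> str:
    """Combine two ordinal rankings into a single ordinal risk rating."""
    score = LEVELS[likelihood] * LEVELS[asset_value]
    if score >= 6:        # e.g., high likelihood against a medium-value asset
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(qualitative_rating("high", "medium"))  # -> high
```

Note what the sketch cannot do: the output is an ordinal label, not a monetary figure, so it offers no basis for an ROI comparison between safeguards.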

In my experience, qualitative approaches, however widely encouraged, provide little basis for illustrating the scale of risk in monetary terms or for making informed risk-management decisions. The metrics of a qualitative risk analysis do not reflect independently objective values such as the monetary value of an asset, the annualized rate of occurrence (frequency), the single loss exposure (impact), or the probability of loss. Although these qualitative metrics can be useful for establishing that a problem exists, they can address only problems the user already knows exist, and they cannot support information-security investment decisions with ROI data.

Quantitative approaches are characterized by the use of independently objective measures for all risk metrics, including qualitative risk-metric descriptors such as “information asset,” “threat,” “vulnerability,” and “safeguard/control” nomenclatures. Asset values are expressed in monetary terms and threat frequency in annualized expressions that represent actual expected frequency (e.g., 1/10 for once in 10 years, or 50/1 for 50 times per year).

Quantitative risk metrics can be readily applied in basic risk-modeling algorithms. The best automated quantitative risk-analysis and assessment tools discuss risk in the familiar, numbers-oriented language of business (monetary value, probability, ROI). They readily support “what-if” analyses, and they facilitate risk-mitigation cost-benefit and ROI analyses.
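
The basic risk-modeling algorithm and the "what-if" analysis described here can be sketched with the classic annualized-loss-expectancy (ALE) formulation. All asset values, exposure factors, frequencies, and costs below are illustrative assumptions:

```python
# A sketch of a basic quantitative risk model using annualized loss
# expectancy (ALE). All figures are illustrative assumptions.

def single_loss_exposure(asset_value: float, exposure_factor: float) -> float:
    """Impact of one occurrence: asset value x fraction of value lost."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """Expected annual loss: single loss exposure x annualized rate of occurrence."""
    return sle * aro

# Example: a $2M asset, 25% exposure per incident, once in 10 years (ARO = 1/10).
sle = single_loss_exposure(2_000_000, 0.25)     # $500,000 per occurrence
ale = annualized_loss_expectancy(sle, 1 / 10)   # $50,000 per year

# "What-if": a safeguard costing $15K/yr that cuts frequency to once in 50 years.
ale_after = annualized_loss_expectancy(sle, 1 / 50)   # $10,000 per year
roi = (ale - ale_after - 15_000) / 15_000             # net benefit per dollar spent
```

The point of the model is the last two lines: because every input is monetary or a frequency, alternative safeguards can be compared in the numbers-oriented language of business rather than in ordinal labels.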

THREAT DATA LACKING

Establishing metrics for quantifying risks in monetary terms isn’t the only challenge. Another serious problem is that there presently is no central repository of threat-experience (actuarial) data on which to base information-security risk analysis and assessment, nor are organizations required to collect that data, except for threats involving natural disasters, crime, and fires.

Such an actuarial database could provide a key element to a risk-metrics and measurement framework in which threat-experience data can be accumulated, “cleansed” of source-identifying attributes (where necessary), and made available for quantitative, probabilistic risk analysis and assessment. This framework would also give organizations a basis for measuring and cost-efficiently managing their compliance with qualitatively sound information-security principles such as those mentioned above. Historically, organizations have been reluctant to report information-security threat-experience information to government agencies and law enforcement for competitive, liability, and legal reasons. That fact has made it difficult to gather current and accurate information about security threat experiences.

The U.S. Congress has begun to introduce legislation that would protect organizations that share information about security threats and incidents with the federal government. Under the Senate Bennett-Kyl bill [SB1456], certain information-security disclosures would not be subject to the Freedom of Information Act (FOIA), which provides public access to government information. Companies that share such information would also have a limited exemption from antitrust laws. The USA PATRIOT Act has also provided key FOIA and liability relief.

These measures, if broadly applied, could significantly strengthen national and international information-security strategies by encouraging more organizations to report security incidents and share information about security threats. By compiling current and accurate information about information-security threat experiences, private- and public-sector organizations would be able to use this information to conduct increasingly credible real-time information-security risk analysis and assessment. Moreover, the results of quantitative risk analysis and assessments could also be more reliable.

However, legislation such as Bennett-Kyl would only solve part of the problem of gathering current and reliable threat data. Most companies are vigorously opposed to sharing their threat-experience data and resulting losses because disclosing this information could damage their reputation and cost them market share and revenues.

Further, in the rush to get new technologies to market quickly, computer hardware and software companies have largely ignored the information-security issues and vulnerabilities inherent in their products. Laissez-faire has reigned, resulting in products that are buggy, unstable, and vulnerable to cyber attack or other catastrophic failure. Dozens of new security vulnerabilities are reported each week, followed closely by a non-stop flood of patches. Microsoft alone released 72 security advisories last year.

Yet technology companies and industry associations have aggressively resisted the development of product-profile standards for information technology and communications (IT&C) products, as well as the creation of detailed principles for information-security practitioners, auditors, and business managers, regarding such efforts as improper interference with the marketplace. Until recently, IT&C companies have faced little market pressure to assure the security and reliability of their products, and governments have been unwilling to impose security requirements through regulation.

SOLUTION: ESTABLISH METRICS

There are signs that public- and private-sector enterprises — the consumers of technology products — are beginning to make information security a top priority. The time may be ripe to raise the information-security bar globally by establishing standard metrics for measuring security risks and a repository for collecting and analyzing the accumulated actuarial data.

The first step is to establish, formalize, and maintain both qualitative and quantitative risk metrics. These would include:

  • Detailed, level-set qualitative risk metrics and “how-to” guidance that sets forth good information-security risk-management practices and principles.
  • A “standard” qualitative risk-metrics population of threats that is maintained at a central repository such as an information threat-experience center.
  • Quantitative threat-experience frequency data that will support quantitative approaches to information-security risk analysis and assessment. This collected threat-experience data could be made broadly available on a “not-for-attribution” basis and organized in a variety of analytic profiles.

In addition to these metrics, others are needed to support quantitative risk-analysis and assessment approaches, including the:

  • Credible monetary value of assets.
  • “Impact” as a percentage of asset value.
  • Annualized probability of loss.
  • Annualized expected loss.
  • Annualized safeguard and control costs.
  • Uncertainty.
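
The listed metrics can be tied together in a brief sketch, with "uncertainty" expressed as a simple plus-or-minus band on the annualized expected loss. All figures are illustrative assumptions, not data from this article:

```python
# A sketch combining the bulleted quantitative metrics. All figures are
# illustrative assumptions.

def annualized_expected_loss(asset_value: float, impact_pct: float,
                             annual_probability: float) -> float:
    """Asset value x impact (as % of value) x annualized probability of loss."""
    return asset_value * impact_pct * annual_probability

asset_value = 1_000_000   # credible monetary value of the asset
impact_pct = 0.40         # impact as a percentage of asset value
annual_prob = 0.05        # annualized probability of loss
uncertainty = 0.30        # +/- 30% uncertainty band on the estimate

ael = annualized_expected_loss(asset_value, impact_pct, annual_prob)  # $20,000/yr
low, high = ael * (1 - uncertainty), ael * (1 + uncertainty)          # $14,000..$26,000

# Compare against an annualized safeguard/control cost:
safeguard_cost = 8_000
worthwhile = low > safeguard_cost   # holds even at the low end of the band
```

Carrying the uncertainty band through the calculation is what lets a decision-maker see whether a safeguard remains justified under the most conservative estimate, rather than only at the point estimate.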

Such risk metrics have been the foundation of the insurance industry for centuries.

TIME FOR A SECURITY RISK FRAMEWORK

Many areas of risk — such as hazard loss, health, market, credit, project, and product development — are now routinely and effectively managed with often highly complex techniques and methodologies based on extensive experience-driven databases. It is time for the information-security industry and profession to establish its own risk metrics, measurement, and management framework. This framework would give business managers the tools they need to identify, measure, and manage the risks to their information assets and manage their information-security investments based on sound and reliable ROI data.

First appeared at ITAudit

Will Ozier is founder, president, and CEO of the information-security products and consulting services firm OPA Inc. – The Integrated Risk Management Group (OPA). He is a leading expert in risk assessment, with broad experience consulting to many Fortune 500 companies and state governments, as well as NASA, GSA, the U.S. Army, and the President’s Commission on Critical Infrastructure Protection. Prior to becoming an information-security consultant in 1982, Mr. Ozier held key technical and management positions with Levi-Strauss, World Savings, United Vintners, Fireman’s Fund Insurance Company, and Wells Fargo Bank. Mr. Ozier was principal author of The Institute of Internal Auditors’ Information Security Management and Assurance: A Call to Action for Corporate Governance under contract to the federal Critical Infrastructure Assurance Office. Mr. Ozier was instrumental in advancing this CIAO initiative, as well as the recommendations of the PCCIP embodied in and promoted by that document, advocating quantitative risk assessment and advancement of the GASSP (now the Generally Accepted Information Security Principles, GAISP).
