The recent SolarWinds attack highlights an Achilles heel for enterprises: software updates for critical enterprise applications. Digital signing of code is one solution, but organizations need to modernize their code signing processes to prioritize security and integrity and align with DevOps best practices, writes Brian Trzupek, the Senior Vice President of Products at DigiCert, in this thought leadership article.


Even in today’s security-charged world, the SolarWinds breach was a wake-up call for cybersecurity professionals. It was distinguished by its sophistication and by the fact that it was carried out as part of legitimate software updates. The incident was quickly all over the news and has brought renewed focus on the need for secure DevOps.


Code Signing Alone Is Not Enough

The security incident at SolarWinds was especially unsettling because it could easily happen at any large software organization that delivers regular updates. Code signing certificates are a highly effective way to ensure that software is not compromised. However, they are only as effective as the strategy and best practices behind them. When poorly implemented, code signing loses its effectiveness in mitigating risk for software publishers and users. Issues often include:

  • Using the same signing key to sign all files, across multiple product lines and businesses
  • Lack of mechanisms in place to control who can sign specific files
  • Insufficient reporting capabilities for insights into who signed what and when
  • Failure to sign code at every stage of development, as part of an overall security by design process
  • Failure to sign and verify code from third parties
  • Poor processes for securing keys and updating them to new key size or algorithm requirements
  • Failure to test code integrity before signing
  • Inadequate visibility into where certificates are, and how they are managed


[Read Brian’s piece Staying Secure Through 5G Migration.]

What have we learned from the SolarWinds attack? For organizations where DevOps is fundamental, applying best practices to signing processes is more essential than ever. According to some studies, more than half of IT security professionals are concerned about bad actors forging or stealing certificates to sign code, but fewer than a third enforce code signing policies on a consistent basis. It’s time for organizations to do better and enforce zero-trust across all their systems, signing everything at every stage after verifying it is secure.

Simplifying and standardizing

Traditional code signing processes can be complex and difficult to enforce. They are often based on storing keys on desktops and sharing them. Visibility into activities is often limited, making mismanagement or flawed processes difficult to discover and track. To mitigate these issues, many organizations are simplifying their processes using code-signing-as-a-service approaches. Code-signing-as-a-service can accelerate the steps required to get code signed, while making it easier to keep code secure. A robust solution can empower organizations with automation, enabling teams to minimize manual steps and accelerate signing processes. APIs enable it to integrate seamlessly with development workflows, and automated scheduling capabilities enable organizations to proactively plan and approve signature windows to support new releases and updates.
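
Because the service is API-driven, a signing request can be scripted directly into a build step. Below is a minimal, hypothetical sketch of that integration; the endpoint URL, token variable and field names are invented placeholders, not any vendor’s actual interface.

    # Hypothetical sketch: requesting a signature from a code-signing-as-a-service
    # REST API during a CI build. Endpoint, token and field names are placeholders.
    import os
    import requests

    SIGNING_API = "https://signing.example.com/api/v1/sign"  # placeholder endpoint

    def request_signature(artifact_path: str, key_alias: str) -> bytes:
        """Upload an artifact and return its detached signature."""
        with open(artifact_path, "rb") as artifact:
            response = requests.post(
                SIGNING_API,
                headers={"Authorization": f"Bearer {os.environ['SIGNING_TOKEN']}"},
                data={"key_alias": key_alias},  # which signing key the team is requesting
                files={"artifact": artifact},
                timeout=60,
            )
        response.raise_for_status()
        return response.content  # detached signature bytes

    if __name__ == "__main__":
        signature = request_signature("dist/app-installer.exe", "release-signing-key")
        with open("dist/app-installer.exe.sig", "wb") as sig_file:
            sig_file.write(signature)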

To strengthen accountability throughout the process, administrators can apply permission-based access. Strictly controlling access helps improve visibility into which users are allowed to sign code and which certificates and private keys they are allowed to utilize.

Standardizing workflows

Standardizing code signing workflows can also help reduce risk to an organization. Instead of allowing everyone in an organization to use the same key for signing, many organizations are using separate code signing keys for different DevOps teams, while granting administrators visibility over key usage. This best practice helps minimize the risk of mistakes that can occur across a company by limiting the ability of breaches to propagate. For example, if a key is used to sign a release that has been compromised, only one team’s code will be impacted.
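
As a rough sketch of how a signing service might enforce that separation (team names and key aliases here are invented for illustration), each signing request can be checked against a simple team-to-key mapping before any signature is issued:

    # Illustrative only: each DevOps team may sign only with its own key.
    # Team names and key aliases are hypothetical.
    TEAM_KEYS = {
        "payments-team": {"payments-release-key"},
        "mobile-team": {"mobile-release-key"},
        "platform-team": {"platform-release-key"},
    }

    def authorize_signing(team: str, requested_key: str) -> None:
        """Refuse the request if a team asks for a key it does not own."""
        allowed = TEAM_KEYS.get(team, set())
        if requested_key not in allowed:
            raise PermissionError(f"{team} is not permitted to use key '{requested_key}'")
        print(f"audit: {team} authorized to sign with {requested_key}")

    authorize_signing("payments-team", "payments-release-key")  # allowed
    # authorize_signing("mobile-team", "payments-release-key")  # would raise PermissionError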

Maximum flexibility to minimize risk

Key management flexibility is another helpful best practice, reducing risk by enabling administrators to specify shorter certificate lifetimes, rotate keys and control keypairs. For example, Microsoft recognizes publishers that rotate keys by granting them higher reputation levels. With the right key management approach, an administrator could specify a certificate lifetime of a specific number of days or months exclusively for files designed for use in Microsoft operating systems.
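
A minimal sketch of such a lifetime policy check follows; the 90-day maximum is an example value chosen for illustration, not a vendor or Microsoft requirement.

    # Illustrative sketch: flag signing certificates that have been in service
    # longer than a chosen maximum lifetime so they can be rotated.
    # The 90-day figure is an example policy, not a platform requirement.
    from datetime import datetime, timedelta, timezone
    from typing import Optional

    MAX_LIFETIME = timedelta(days=90)

    def needs_rotation(not_before: datetime, now: Optional[datetime] = None) -> bool:
        """Return True if the certificate is older than the configured maximum."""
        now = now or datetime.now(timezone.utc)
        return now - not_before > MAX_LIFETIME

    issued = datetime(2021, 1, 10, tzinfo=timezone.utc)
    print(needs_rotation(issued, now=datetime(2021, 5, 1, tzinfo=timezone.utc)))  # True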

Secure key storage offline except during signing events

Taking keys offline is another measure that can secure code signing. With the right code signing administrative tool, administrators can place keys in an “offline mode,” making it impossible to use them to sign releases without the proper level of permission granted in advance. Release planning is fundamental to software development, so most developers are comfortable scheduling signatures for specific keys directly into their processes.

Taking keys offline is a strong step to ensure that keys will not be used in situations where they should not be. It also adds a layer of organizational security by splitting responsibilities between signers and those who approve them, while providing improved visibility into who is signing what with which keys.
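
A minimal sketch of that control appears below; the key alias and window times are hypothetical. The idea is that a key stays offline unless the request falls inside a release window that was approved in advance.

    # Hypothetical sketch: a key held offline may only be used inside an
    # approved signing window that was scheduled during release planning.
    from datetime import datetime, timezone

    APPROVED_WINDOWS = {
        "platform-release-key": (
            datetime(2021, 6, 1, 9, 0, tzinfo=timezone.utc),   # window opens
            datetime(2021, 6, 1, 17, 0, tzinfo=timezone.utc),  # window closes
        ),
    }

    def key_available(key_alias: str, at: datetime) -> bool:
        """A key is usable only during its scheduled window; otherwise it stays offline."""
        window = APPROVED_WINDOWS.get(key_alias)
        return window is not None and window[0] <= at <= window[1]

    print(key_available("platform-release-key",
                        datetime(2021, 6, 1, 10, 30, tzinfo=timezone.utc)))  # True
    print(key_available("platform-release-key",
                        datetime(2021, 6, 2, 10, 30, tzinfo=timezone.utc)))  # False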

Freeing up developers to do what they do best

It’s clear that safeguarding DevOps environments correctly remains challenging, but fortunately the right management tools can minimize hassles and maximize protection. As we’ve discussed, automation is essential for applying security across CI/CD pipelines. Seek out a solution that can fit smoothly within workflows and free engineers from the individual steps required to protect cryptographic assets. The tool should make signing keys easily accessible when pushing code and automate signing of packages, binaries and containers on every merge to master, when authorized. Organizations also need a process for testing code integrity before they sign. A centralized, effective signing management tool can handle the signing tasks while integrating with other systems that perform the necessary integrity tests. For key security, the solution should provide the option of storing keys offline in virtual HSMs. During a signing event, it should enable developers to access the keys to sign with one click, then return them to secure offline storage.
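
To make that workflow concrete, here is a minimal sketch of the merge-to-master flow, with the signing service’s calls represented by stub functions; none of the helpers correspond to a real product API.

    # Hypothetical end-to-end flow on a merge to master: verify artifact integrity,
    # sign it, then return the signing key to offline storage. The helper functions
    # are stubs standing in for a signing service, not a real product interface.
    import hashlib

    def artifact_digest(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def bring_key_online(key_alias: str) -> None:
        print(f"key '{key_alias}' brought online for this signing event")  # stub

    def sign_artifact(path: str, key_alias: str) -> None:
        print(f"signed {path} with {key_alias}")  # stub

    def return_key_offline(key_alias: str) -> None:
        print(f"key '{key_alias}' returned to offline storage")  # stub

    def sign_on_merge(path: str, expected_sha256: str, key_alias: str) -> None:
        """Refuse to sign unless the artifact matches the digest the build produced."""
        if artifact_digest(path) != expected_sha256:
            raise RuntimeError("integrity check failed; refusing to sign")
        bring_key_online(key_alias)
        try:
            sign_artifact(path, key_alias)
        finally:
            return_key_offline(key_alias)  # key goes back offline even if signing fails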

DevOps pros work within a variety of environments, so the signing solution should support portable, flexible deployment models via SaaS or on a public or private data center. Businesses in every industry are becoming increasingly software-driven and the challenges to DevOps organizations won’t disappear anytime soon. However, with the right approach to code signing, organizations can dramatically strengthen their security posture, minimize their chances of becoming the next victim and ensure customer confidence in their solutions.


(*) Disclosure: This podcast was sponsored by DigiCert. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

Overworked, understaffed teams? Reactive processes? Noisy tools and a river of false positives? Welcome to the “SOC hop,” the dysfunctional “normal” of the typical corporate security operations center (SOC). But it is a normal that is not sustainable, writes Steve Garrison of Stellar Cyber in this Security Ledger expert insight.


There have always been two ways to view the Security Operations Center (SOC). Idealized, the SOC runs with precision and control. Operations are linear, it is well staffed, and the workload is reasonable. Surprises may occur, but they are readily handled in stride. There is no real sense of panic or fatigue, because everything works as planned. Then there is the other view of the typical SOC, the one experienced by most companies. This is the SOC hop.

Enter the SOC Hop

The hop is characterized by an overworked, understaffed team that is constantly jumping from one fire to the next. As much as the team is qualified and wants to be proactive, all of its time is consumed by reacting to events and alerts. Most of the professionals are exhausted, and some even question the value of certain tools that produce too many false positives and lack the ability to prioritize alerts in a meaningful, productive way. Ultimately, the SOC hop is not sustainable. Security outcomes are getting worse, not better, and a data breach, or something even worse, seems a foregone conclusion.

[You might also like Futility or Fruition? Re-thinking Common Approaches to Cyber Security]

Steve Garrison, Stellar Cyber
Steve is the Vice President of Marketing at Stellar Cyber

Fortunately, organizations are rethinking the SOC and how it works in a classic “there must be a better way” reevaluation. The SOC is being re-imagined to better deal with the realities of today and to scale to the challenges still to come. The considerations may seem trite or tired, all too familiar, but they represent the fundamental changes needed to leave the SOC hop behind and move on to something better.

The SOC: re-imagined

First, there is visibility. Everyone knows this. An old security adage is “you can’t secure what you cannot see.” This is just as true today as when it first became a cliché. Visibility is a combination of depth and breadth. Attackers may target any portion of an attack surface and will traverse networks and infrastructure to gain access to valuable assets. This means that every part of the attack surface must be monitored and that organizations must be able to see across their entire infrastructure to find the East-West activities intrinsic to the work of a skilled attacker. At the same time, the data must also provide contextual references for an event to help boost the accuracy and understanding of findings.

Second, the re-imagined SOC needs combined intelligence. The silos of separate tools with their individual alerts, and of SIEMs that cannot gather data deep and broad enough to provide comprehensive understanding and relevant warnings, need to be united. Not only do security tools and systems need to connect, they need to correlate their data to help paint a broader, clearer picture of potential in-progress attacks. This is more than API connectivity and rudimentary integrations. It also means working in real time, or close to it. Again, attacks are not static; they are dynamic, and attackers move, conducting a campaign to maximize the return on their activity. One event may be inconsequential and below the radar, but connecting the dots may clearly reveal an attack in progress.
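
To make the idea concrete, here is a toy sketch of correlation across tools; the alert fields, sources and the 15-minute window are invented for illustration and do not reflect any particular product.

    # Toy illustration: group alerts from different tools that reference the same
    # host within a short time window, so related events surface as one finding.
    from collections import defaultdict
    from datetime import datetime, timedelta

    alerts = [
        {"source": "endpoint", "host": "srv-12", "time": datetime(2021, 3, 1, 10, 2),
         "event": "suspicious process"},
        {"source": "network", "host": "srv-12", "time": datetime(2021, 3, 1, 10, 5),
         "event": "unusual east-west traffic"},
        {"source": "cloud", "host": "srv-40", "time": datetime(2021, 3, 1, 11, 0),
         "event": "new IAM key created"},
    ]

    WINDOW = timedelta(minutes=15)

    def correlate(items):
        """Return hosts seen by more than one tool inside the time window."""
        by_host = defaultdict(list)
        for alert in sorted(items, key=lambda a: a["time"]):
            by_host[alert["host"]].append(alert)
        findings = []
        for host, host_alerts in by_host.items():
            sources = {a["source"] for a in host_alerts}
            span = host_alerts[-1]["time"] - host_alerts[0]["time"]
            if len(sources) > 1 and span <= WINDOW:
                findings.append({"host": host, "sources": sorted(sources),
                                 "events": [a["event"] for a in host_alerts]})
        return findings

    # srv-12 produces one correlated, higher-confidence finding; srv-40 stays a single alert.
    print(correlate(alerts))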

Mind the Gap

Third, gaps need to be covered. There are normally gaps between coverage areas and the realm that each security tool ingests. Logs will only reflect a portion of the evidence. There are typically boundaries between what is monitored at the perimeter, endpoint, cloud, server, data center and so on. In addition, gaps may come into being through the natural, dynamic change of a company’s networks and infrastructure. Ideally, gaps can be covered with sensors to ensure full visibility and assessment. Data from these sensors may be instrumental in finding attack activity or bolstering findings.

The combination of full visibility, covered gaps and combined intelligence can be a game-changer for the SOC. It can substantially change the way the team works. Rather than hopping from incident to incident or between every alert or event that pops up, the cacophony of disparate alerts can be put to an end, producing alerts that are fewer in number, higher in accuracy and relevance, and prioritized for action. Here, the tools support the SOC team rather than the other way around. The tiresome SOC hop can give way to a wholly new way of working and to getting the upper hand on the many challenges of protecting data, infrastructure and valuable assets.


Goodness is hard to measure. More so in the field of Cybersecurity. In the physical world, if you possess something, say a $1 bill, you have it. If you spend it, you don’t have it. If someone steals it, you don’t have it, either. The digital world is quite different. Digital copies are the same as the original – exactly the same. Each replicated copy is at least as original as the original original. “Can you send me a copy?” can only be answered, “No, but I can send you an original.”

You know all that.

A non-time-sensitive digital asset that could be infinitely replicated was itself of little value. It could be replicated many times and, in theory, “spent” many times. But of course, there were no buyers. Enter cryptocurrency, Bitcoin being the obvious example. A Bitcoin aspires to be a digital $1 bill that can neither be double-spent nor infinitely replicated. How do those two miracles occur? Blockchain.

Data’s Deep Fake Problem

What else can we do with this marvelous technology that allows us to prove in the digital world that if I have something, I really have it, and if I do not have it, I really don’t have it?

First Digital Photo: The first digital image ever created was of Russell Kirsch’s son, Walden, scanned from a photograph in 1957. (Source: Wikipedia.)

More than 60 years ago, the first digital photograph was created. Businesses missed the implication. Film-based photographs were hard to manipulate; not so digital photographs, which can be manipulated easily. The implication is that the integrity of the photographic data on which business decisions were being made had very substantially degraded. And no one seemed to notice… for a while.

When businesses did notice, they simply started to drop photographs from their business processes. Rightly so. The integrity of the data was highly suspect and nowhere near the quality needed for a serious business decision. Enter blockchain once again. Blockchain enables the data to be “frozen” at the “moment of creation.” The integrity of the data is preserved, and actionable business decisions can be made by responsible people.
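
As a simplified illustration of the underlying idea (a sketch only, not a full blockchain, which would chain and distribute these records), the data’s fingerprint is registered at the moment of creation and any later alteration becomes detectable:

    # Simplified illustration of "freezing" data at the moment of creation:
    # record a SHA-256 fingerprint when the photo is captured, then verify later.
    # A real system would anchor the fingerprint in a blockchain, not a local list.
    import hashlib

    ledger = []  # stand-in for an append-only, tamper-evident record

    def register(data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        ledger.append(digest)
        return digest

    def verify(data: bytes, digest: str) -> bool:
        return hashlib.sha256(data).hexdigest() == digest and digest in ledger

    photo = b"\x89PNG...raw image bytes..."
    receipt = register(photo)
    print(verify(photo, receipt))              # True: data matches what was registered
    print(verify(photo + b"edited", receipt))  # False: any alteration breaks the match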

How do we think about this? What is the right way to analogize what we know? For illustration and conversation, the present authors offer the table below, the Data Integrity Scale, in the hope of making levels of “goodness” contributory to decision support. Availability has metrics – downtime can easily be measured – but, until now, Integrity has not had a firm scale to measure with.

A Scale for Data Integrity

Most current systems are not designed to protect the Integrity of the data from the moment of creation until the point of use. Protect its Confidentiality? Yes. Protect its Availability? Yes, again. The more we depend on data to drive processes of increasing complexity, the more Integrity supplants Confidentiality and Availability as the paramount goal of cybersecurity.

The authors propose a scale to measure data integrity.

The Cyber Integrity Question of 2021

The table attempts to correlate the measures of trustworthiness across the domains of Law, Accounting, and Business. The sort of question that jumps out from the table might be:

Since I require the proof of a person’s identity (credentialing) be above the red bar before I would let him or her act on the company’s data, why should I not also require that data be above the red bar before I allow it to act on other company data?

“Data integrity is the maintenance of, and the assurance of, the accuracy and consistency of data over its entire life-cycle, and is a critical aspect to the design, implementation, and usage of any system which stores, processes, or retrieves data. … It is at times used as a proxy term for data quality.” [5]

But “quality,” without a way to define and measure it, is an ephemeral term. One common definition of quality is “conformance to requirements.” Here, we might require that the Integrity of data be “above the bar” on the Data Integrity Scale.

A report from Deloitte (PDF) indicates that Data Integrity violations account for over 40 percent of pharmaceutical warning letters issued globally. 

The historical methods of chasing Visibility and Context through Data Governance down a long chain-of-custody/audit trail are now outdated techniques (and not very reliable in any event; too many steps along the way). A registered “record copy” via blockchain technology is a far better solution. Businesses that are assiduously checking for viruses (that is, automated tampering) should also ensure the data they actually use for major decisions has Integrity and is not the result of automated or physical tampering. Blockchain technology allows photos, videos, and other data to jump “above the bar.”

Back to the Future

Roll back more than 60 years, to 1957, when the world encountered the first digital photograph. At that time, a person needed the skills of a professional photographer to fake a photograph. There was a general feeling of “trust” in what was depicted in a photograph. That was then and this is now, but with adroit use of blockchain technology it is once again possible to have “trust” in photographs and videos, and to restore Integrity.

What can you do with that “trust”? Business decision makers no longer have to deal with information along a previously believed continuum of certitude, seeing “through a glass darkly,” but rather can see clearly the demarcations where information is useful and where it is not.

The rapid digitalization of business processes has caused a greater need for accurate data as there are no longer humans further upstream in the process to keep the low-quality data from infecting the automated business decision process.

Now is the time to align the ordinal scales of jurisprudence and accounting with each other and with like-minded ordinal scales for business processes. We offer a first cut at that necessary advance; we hope that it is sufficient to purpose and self-explanatory, and will allow this advancement in technology to open new markets with innovative products.


Footnotes

  1. Legal

“Beyond a Reasonable Doubt.”  Whitman J. (2005) The Origins of Reasonable Doubt, Yale University Press. 

“Clear and Convincing Proof.” Colorado v. New Mexico, 467 U.S. 310, 467 (1984)

“Preponderance of the evidence.” Leubsdorf J., (2015), The Surprising History of the Preponderance Standard of Civil Proof, 67 Fla. L. Rev. 1569

“Substantial Evidence” Richardson v. Perales, 402 U.S. 389, 401 (1971)

“Probable Cause” United States v. Clark, 638 F.3d 89, 100–05 (2d Cir. 2011)

“Reasonable Suspicion” Terry v. Ohio 392 U.S. 1 (1968)

“Mere Scintilla” Hayes v. Lucky, 33 F. Supp. 2d 987 (N.D. Ala. 1997)

  2. CPA

“In all material respects” Materiality considerations for attestation engagements, AICPA, 2020 

“Reasonable Assurance” Guide to Financial Statement Services: Compilation, Review, and Audit. AICPA. 2015; AU-C 200: Overall Objectives of the Independent Auditor. AICPA. 2015; AU-C 240: Consideration of Fraud in a Financial Statement Audit. AICPA. 2015

“Substantial Authority,” “Realistic Possibility,” “Reasonable Basis,” “Frivolous or Patently Improper”

Interpretations of Statement on Standards for Tax Services No. 1, Tax Return Positions, AICPA (Effective Jan. 1, 2012, updated April 30, 2018,)

  3. Identities

NIST Special Publication 800-63 Revision 3 June 2017

  4. Photos and Videos

“SOC 2” AICPA, Reporting on an Examination of Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy. Updated January 1, 2018

“ISO 27001” is an international standard on how to manage information security. Revised 2013. The International Organization for Standardization is an international standard-setting body composed of representatives from various national standards organizations.

“GDPR” The General Data Protection Regulation is a regulation in EU law on data protection and privacy in the European Union and the European Economic Area. Implementation date: 25 May 2018

  5. Boritz, J. “IS Practitioners’ Views on Core Concepts of Information Integrity”. International Journal of Accounting Information Systems. Elsevier. Archived from the original on 5 October 2011. https://www.veracode.com/blog/2012/05/what-is-data-integrit
  6. Under the spotlight: Data Integrity in life sciences [Internet]. Deloitte LLP. 2017. [Cited: 4 March 2020]. https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/life-sciences-health-care/deloitte-uk-data-integrity-report.pdf

The rash of high-profile breaches including the SolarWinds attack show that the current approaches to securing IT environments are inadequate to the task, argues Albert Zhichun Li, the Chief Security Scientist at Stellar Cyber in this Security Ledger expert insight.


The recently disclosed breach of FireEye should give everyone pause over both the importance and the difficulty of security. This high-profile breach left the vendor with a black eye and some serious questions. The disclosure almost immediately had every security vendor writing blogs and articles about the importance of this or that, in accordance with what they sell and market. Opportunity strikes!

At the same time, it is hard not to feel stifled by the seeming futility of security. Here is a company known for its expertise in investigating and addressing some of the largest security breaches in the world, now itself victimized by a successful attack. Perhaps not since the NSA was breached and attackers made off with custom hacking tools has the idea of protecting one’s assets and information seemed so bleak. “If the NSA can’t protect their own tools and secrets, how can anyone remain safe?” is a question on the minds of so many.

Security futility? Certainly the odds favor attackers by a huge margin. Attackers have an almost unlimited number of chances to mount a successful attack, but defenders must successfully defend themselves from every one of them. With so many avenues for attack, the cause of effective security seems nearly hopeless.

There’s another way to view these current events. While the task of establishing and maintaining effective security is gigantic, it is not necessarily futile. Security can deflect a majority of attacks or find them early enough to mitigate loss and damage. These high-profile breaches should serve as a wake-up call, however. The current approaches most organizations take toward security are not good enough. Something has to change.

The current high-profile breaches demonstrate the current approaches are inadequate—that the way security is currently practiced is insufficient.

Albert Zhichun Li, Stellar Cyber

One important change is to stop compartmentalizing security. Traditionally, organizations view security as segments with different systems, policies, reports and personnel. The desktop or endpoint group has its own charter. The network security team has another. There might also be a cloud team and an applications team. Separate systems, separate efforts.

This security specialization makes sense. Such focus splits up the arduous task of security and divides complexity into more manageable segments. Instead of having to “boil the ocean,” security vendors can concentrate on a particular set of problems and challenges to tackle. Security practitioners can focus on the strategies, policies and procedures to protect certain aspects, such as endpoints, applications or resources in the public cloud.

At the same time, the divisions within security are hampering overall effectiveness. A well-regarded historical axiom is “divided we fall.” And security certainly is divided. Ironically, segmentation helps security, but it also hampers it.

The current high-profile breaches demonstrate that the current approaches are inadequate, that the way security is currently practiced is insufficient. One of these inadequacies is the lack of a unified, holistic approach to security. This is not to say that we need a mega-security tool to perform all aspects of security. Instead, we need to aggregate security data to achieve a deeper, more holistic understanding of potential attack activities.

A combination of depth and breadth is needed to get an edge on attackers. Attackers are not limited to just one segment of infrastructure. What may start at an endpoint, through a web application or in cloud infrastructure will evolve as attackers move through the environment to get to valuable assets. Seeing this entire surface provides necessary context and history. Different systems or sensors will be adept at seeing different elements. These inputs need to be aggregated to provide a forest-for-the-trees perspective.

In addition, depth is necessary for fine tuning and more granular understanding. The combination of depth and breadth brings more completeness and greater fidelity; both are essential in turning the tables on attackers.

Security is a daunting task, and there is always an inherent trade-off between security and openness or accessibility. The web, digital business and mobility all require some compromise in that trade-off. The challenge, then, is to make infrastructure and assets as secure as possible. This means security can’t stay still. Security must constantly advance and improve. Yesterday’s tactics and technology need to move forward. This evolution, and avoiding the natural ruts that occur, is essential for success. It’s difficult, but not futile.


(*) Disclosure: This article was sponsored by Stellar Cyber. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.


Modern enterprise networks are populated by both people and, increasingly, “things.” But securing the growing population of Internet of Things devices presents unique challenges. In this thought leadership article, Brian Trzupek, the Senior Vice President of Emerging Markets at DigiCert, discusses what is needed for effective IoT security.


We’ve seen the IoT come of age over just the past few years, and innovative use cases continue to build momentum. Gartner forecasts that 25 billion connected things will be in use by 2021. However, although the IoT has tremendous potential across many industries, Gartner surveys still show security is the most significant area of technical concern.

When it comes to security, IoT challenges are distinct from those of the enterprise. Although identity and identification are cornerstones of effective security, IoT and enterprise environments face different challenges. End users are generally involved in enterprise authentication. When trying to use an application or service, they can be present to respond to multifactor authentication challenges. End users may also have varying sets of roles or access constraints that evolve as their positions change within the organization.
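
One common pattern (offered here as a sketch, not as DigiCert’s specific tooling) is to assert device identity with a client certificate over mutual TLS, since no user is present to answer an authentication challenge. The file paths below are placeholders for a server certificate and the CA that issues device certificates.

    # Sketch: a server-side TLS context that requires connecting IoT devices to
    # present a certificate issued by a known device CA. Paths are placeholders.
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.pem", keyfile="server.key")  # server identity
    context.load_verify_locations(cafile="device-issuing-ca.pem")         # trusted device CA
    context.verify_mode = ssl.CERT_REQUIRED  # reject any device without a valid certificate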

IoT: Insecure by Design

 


(*) Disclosure: This article was sponsored by DigiCert. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.