In this episode of the podcast (#216), sponsored by DigiCert, we talk with Brian Trzupek, DigiCert’s Senior Vice President of Product, about the growing urgency of securing software supply chains, and how digital code signing can help prevent compromises like the recent hack of the firm SolarWinds.


We spend a lot of time talking about software supply chain security these days. But what does that mean? At the 10,000-foot level it means “don’t be the next SolarWinds” – don’t let a nation-state actor infiltrate your build process and insert a backdoor that gets distributed to thousands of customers – including technology firms and three-letter government agencies. 

OK. Sure. But speaking practically, what are we talking about when we talk about securing the software supply chain? Well, for one thing: we’re talking about securing the software code itself. We’re talking about taking steps to ensure that what is written by our developers is actually what goes into a build and then gets distributed to users.

Digital code signing – using digital certificates to sign submitted code – is one way to do that. And use of code signing is on the rise. But is that alone enough? In this episode of the podcast, we’re joined by Brian Trzupek, the SVP of Product at DigiCert, to talk about the growing role of digital code signing in preventing supply chain compromises and providing an audit trail for developed code.

Brian is the author of this recent Executive Insight on Security Ledger, where he notes that code signing certificates are a highly effective way to ensure that software is not compromised – but only as effective as the strategy and best practices that support them. When poorly implemented, Brian notes, code signing loses its effectiveness in mitigating risk for software publishers and users.

In this conversation we talk about the changes to tooling, process and staffing that DevOps organizations need to embrace to shore up the security of their software supply chain. 

“It boils down to: do you have something in place to ensure code quality, fix vulnerabilities and make sure that code isn’t incurring tech debt?” Brian says. Ensuring those things involves new processes, products and tools, as well as the right mix of staff and talent to assess new code for security issues. 

One idea that is gaining currency within DevOps organizations is “quorum-based deployment,” in which multiple staff members review and sign off on important code changes before they are deployed. Check out our full conversation using the player (above) or download the MP3 using the button below.
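Quorum-based deployment can be sketched in a few lines. The following Python is a minimal illustration only – the class, names and two-approver threshold are hypothetical, not any specific product’s implementation:

```python
from dataclasses import dataclass, field

# Minimal sketch of quorum-based deployment: a release proceeds only once a
# minimum number of distinct, authorized reviewers have signed off.

@dataclass
class QuorumGate:
    authorized: set[str]                       # staff allowed to approve releases
    quorum: int                                # approvals required before deploy
    approvals: set[str] = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        if reviewer not in self.authorized:
            raise PermissionError(f"{reviewer} is not an authorized approver")
        self.approvals.add(reviewer)           # a set: duplicate sign-offs don't count twice

    def may_deploy(self) -> bool:
        return len(self.approvals) >= self.quorum

gate = QuorumGate(authorized={"alice", "bob", "carol"}, quorum=2)
gate.approve("alice")
assert not gate.may_deploy()                   # one approval is not enough
gate.approve("alice")                          # repeated approval is ignored
gate.approve("bob")
assert gate.may_deploy()                       # two distinct approvers reached
```

The key property is that approvals are counted per distinct person, so no single reviewer can push a change through alone.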


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

The recent SolarWinds attack highlights an Achilles heel for enterprises: software updates for critical enterprise applications. Digital signing of code is one solution, but organizations need to modernize their code signing processes to prioritize security and integrity and align with DevOps best practices, writes Brian Trzupek, the Senior Vice President of Product at DigiCert, in this thought leadership article.


Even in today’s security-charged world, the SolarWinds breach was a wake-up call for cybersecurity professionals. It was distinguished by its sophistication and the fact that it was carried out as part of legitimate software updates. The incident was quickly all over the news and has brought renewed focus on the need for secure DevOps.

[Read: Automating your way out of PKI chaos.]

Code Signing Alone Is Not Enough

The security incident at SolarWinds was especially unsettling because it could easily happen at any large software organization that delivers regular updates. Code signing certificates are a highly effective way to ensure that software is not compromised. However, they are only as effective as the strategy and best practices behind them. When poorly implemented, code signing loses its effectiveness in mitigating risk for software publishers and users. Issues often include:

  • Using the same signing key to sign all files, across multiple product lines and businesses
  • Lack of mechanisms in place to control who can sign specific files
  • Insufficient reporting capabilities for insights into who signed what and when
  • Failure to sign code at every stage of development, as part of an overall security by design process
  • Failure to sign and verify code from third parties
  • Poor processes for securing keys and updating them to new key size or algorithm requirements
  • Failure to test code integrity before signing
  • Inadequate visibility into where certificates are, and how they are managed
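To make the signing-and-verification step concrete, here is a minimal Python sketch. Real code signing uses certificates and asymmetric key pairs (for example via signtool or openssl); this illustration uses a keyed HMAC from the standard library as a stand-in so the flow is runnable, and the per-team keys are hypothetical:

```python
import hashlib
import hmac

# Illustrative sketch only: an HMAC over the artifact's hash stands in for
# the asymmetric signature a real code-signing certificate would provide.
# Separate keys per team limit the blast radius of any one compromised key.

TEAM_KEYS = {"payments": b"key-payments", "mobile": b"key-mobile"}  # hypothetical

def sign_artifact(team: str, artifact: bytes) -> bytes:
    digest = hashlib.sha256(artifact).digest()        # sign the hash, not the raw bytes
    return hmac.new(TEAM_KEYS[team], digest, hashlib.sha256).digest()

def verify_artifact(team: str, artifact: bytes, signature: bytes) -> bool:
    expected = sign_artifact(team, artifact)
    return hmac.compare_digest(expected, signature)   # constant-time comparison

build = b"\x7fELF...release binary bytes..."
sig = sign_artifact("payments", build)
assert verify_artifact("payments", build, sig)
assert not verify_artifact("payments", build + b"tampered", sig)  # any change breaks it
assert not verify_artifact("mobile", build, sig)   # another team's key won't verify
```

The last assertion illustrates the point of the first two pitfalls above: if every team shared one key, a tampered build from any product line would still verify everywhere.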


[Read Brian’s piece Staying Secure Through 5G Migration.]

What have we learned from the SolarWinds attack? For organizations where DevOps is fundamental, applying best practices to signing processes is more essential than ever. According to some studies, more than half of IT security professionals are concerned about bad actors forging or stealing certificates to sign code—but fewer than a third enforce code signing policies on a consistent basis. It’s time for organizations to do better and enforce zero trust across all their systems: signing everything, at every stage, after verifying it is secure.

Simplifying and standardizing

Traditional code signing processes can be complex and difficult to enforce. They are often based on storing keys on desktops, as well as sharing them. Visibility into activities is often limited, making mismanagement or flawed processes difficult to discover and track. To mitigate these issues, many organizations are simplifying their processes using code-signing-as-a-service approaches. Code-signing-as-a-service can accelerate the steps required to get code signed, while making it easier to keep code secure. A robust solution can empower organizations with automation, enabling teams to minimize manual steps and accelerate signing processes. APIs can enable it to integrate seamlessly with development workflows, and automated scheduling capabilities enable organizations to proactively schedule and approve signature windows to support new releases and updates.
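The scheduling idea can be sketched simply: a signing request is honored only inside a pre-approved window tied to a planned release. The class and field names below are illustrative assumptions, not any vendor’s actual API:

```python
from datetime import datetime, timedelta, timezone

# Sketch of scheduled signing windows: signing with a given key is only
# permitted during windows an administrator has approved in advance.

class SigningWindowPolicy:
    def __init__(self):
        self.windows: list[tuple[str, datetime, datetime]] = []

    def approve_window(self, key_id: str, start: datetime, hours: int) -> None:
        # An administrator pre-approves a window for one key.
        self.windows.append((key_id, start, start + timedelta(hours=hours)))

    def may_sign(self, key_id: str, when: datetime) -> bool:
        # A signing request succeeds only inside an approved window for that key.
        return any(k == key_id and start <= when < end
                   for k, start, end in self.windows)

policy = SigningWindowPolicy()
release = datetime(2021, 6, 1, 9, 0, tzinfo=timezone.utc)
policy.approve_window("team-a-release-key", release, hours=4)

assert policy.may_sign("team-a-release-key", release + timedelta(hours=1))
assert not policy.may_sign("team-a-release-key", release + timedelta(hours=5))
assert not policy.may_sign("team-b-release-key", release + timedelta(hours=1))
```

Because release planning already happens on a calendar, folding signing permission into the same schedule adds control without slowing developers down.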

To strengthen accountability throughout the process, administrators can apply permission-based access. Strictly controlling access helps improve visibility into which users are allowed to sign code and which certificates and private keys they are allowed to utilize.

Standardizing workflows

Standardizing code signing workflows can also help reduce risk to an organization. Instead of allowing everyone in an organization to use the same key for signing, many organizations are using separate code signing keys for different DevOps teams, while granting administrators visibility over key usage. This best practice helps minimize the risk of mistakes that can occur across a company by limiting the ability of breaches to propagate. For example, if a key is used to sign a release that has been compromised, only one team’s code will be impacted.

Maximum flexibility to minimize risk

Key management flexibility is another helpful best practice, reducing risks by enabling administrators to specify shorter certificate lifetimes, rotate keys and control keypairs. For example, Microsoft rewards publishers that rotate keys with higher reputation levels. With the right key management approach, an administrator could specify a lifetime of a certain number of days or months exclusively for files designed for use in Microsoft operating systems.

Secure key storage offline except during signing events

Taking keys offline is another measure that can secure code signing. With the right code signing administrative tool, administrators can place keys in an “offline mode,” making it impossible to use them to sign releases without the proper level of permission in advance. Release planning is fundamental to software development, so most developers are comfortable scheduling signatures for specific keys directly into their processes.

Taking keys offline is a strong step to ensure that keys will not be used in situations where they should not be. It also adds a layer of organizational security by splitting responsibilities between signers and those who approve them—while providing improved visibility into which keys are used by whom.
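A rough Python sketch of that split of responsibilities, assuming a hypothetical key-management model in which an approver must bring a key online before a different person can sign with it:

```python
# Sketch of "offline by default" key handling with split duties. The states,
# roles and stand-in signature are illustrative, not a specific product's model.

class ManagedKey:
    def __init__(self, key_id: str):
        self.key_id = key_id
        self.online = False              # keys rest offline
        self.approved_by = None

    def bring_online(self, approver: str) -> None:
        # An approver schedules a signing event, bringing the key online.
        self.online = True
        self.approved_by = approver

    def sign(self, signer: str, artifact: bytes) -> str:
        if not self.online:
            raise PermissionError("key is offline; schedule a signing event first")
        if signer == self.approved_by:
            raise PermissionError("approver and signer must be different people")
        return f"{self.key_id}:{signer}:{hash(artifact)}"   # stand-in signature receipt

    def take_offline(self) -> None:
        self.online = False
        self.approved_by = None

key = ManagedKey("release-2021-key")
try:
    key.sign("dev-dana", b"build")       # fails: key still offline
except PermissionError:
    pass
key.bring_online(approver="admin-avery")
receipt = key.sign("dev-dana", b"build") # allowed: distinct approver and signer
key.take_offline()                       # key returns to secure offline storage
```

Each signing receipt records which key was used and by whom, which is exactly the visibility the text above describes.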

Freeing up developers to do what they do best

It’s clear that safeguarding DevOps environments correctly remains challenging, but fortunately the right management tools can minimize hassles—and maximize protection. As we’ve discussed, automation is essential for applying security across CI/CD pipelines. Seek out a solution that can fit smoothly within workflows and free up engineers from the individual steps required for cryptographic asset protection. The tool should make signing keys easily accessible when pushing code and automate the signing of packages, binaries and containers on every merge to master when authorized.

Organizations also need a process for testing code integrity before they sign. A centralized, effective signing management tool can handle the signing tasks while integrating with other systems that perform the necessary integrity tests. For key security, the solution should provide the option of storing the keys offline in virtual HSMs. During a signing event, it should enable developers to access the keys to sign with one click, then return them to secure offline storage.

DevOps pros work within a variety of environments, so the signing solution should support portable, flexible deployment models via SaaS or on a public or private data center. Businesses in every industry are becoming increasingly software-driven and the challenges to DevOps organizations won’t disappear anytime soon. However, with the right approach to code signing, organizations can dramatically strengthen their security posture, minimize their chances of becoming the next victim and ensure customer confidence in their solutions.


(*) Disclosure: This podcast was sponsored by DigiCert. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

In this episode of the podcast, we bring you the second installment of our interview with Jeremy O’Sullivan of the Internet of Things analytics firm Kytch. (The first part is here.) In this episode Jeremy talks about the launch of Kytch, his second start-up, which helped owners of soft ice cream machines by the manufacturer Taylor to monitor and better manage their equipment. We hear about how what Kytch revealed about Taylor’s hardware put him at odds with the company and its long-time partner: McDonald’s.


We generally view and talk about phenomena like “digital transformation” in a positive light. The world’s growing reliance on software, cloud computing, mobility and Internet connected “things” is remaking everything from how we catch a cab, to how we grow food or educate our children.

Jeremy O’Sullivan, co-founder of Kytch

But what happens when that “digital transformation” is transformation to something worse than what came before, not better? What happens when technology isn’t used to build a “better mousetrap” but to support a racket that enshrines expensive inefficiencies, or a monopoly that stifles competition?

What the hell is going on?

In this week’s episode, we’re digging deep into that question with the second installment of our interview with Jeremy O’Sullivan, the co-founder of the Internet of Things intelligence start-up Kytch. As we discussed last week, O’Sullivan and his wife, Melissa Nelson, launched the company in an effort to use data analysis to revolutionize the industrial kitchen, starting with one common but troublesome piece of machinery: soft ice cream machines manufactured by the company Taylor and used by the likes of McDonald’s and Burger King. 

Episode 147: Forty Year Old GPS Satellites offer a Warning about securing the Internet of Things

“What the hell is going on with the software on this ice cream machine? Why as the versions increase…is the software getting worse?”

– Jeremy O’Sullivan of Kytch on the Taylor soft ice cream machines.

The Dark Possibilities of Digital Transformations

Kytch iPhone display

In this episode, O’Sullivan talks about how – as McDonald’s franchisees scooped up Kytch devices – his understanding of Taylor’s “business model” changed, even as the relationship with the company soured, culminating in what O’Sullivan alleges was the theft of a Kytch device and the reverse engineering of its proprietary technology. 

Far more than a story about massive, wealthy incumbents crushing a smaller challenger, the Kytch story is one that hints at the dark possibilities of digital transformation, as equipment makers use software to lock out their customers and deliver on “planned obsolescence.”

We start with Jeremy’s account of how his relationship with Taylor, which had been amicable when he was trying to build Fro Bot, a platform for stand-alone yogurt and ice cream kiosks, suddenly soured when he introduced the Kytch product and began giving Taylor customers better control over their equipment. The relationship with Taylor and its partner, McDonald’s, went downhill fast from there, as Taylor’s previously friendly management cut off contact with O’Sullivan and Kytch.

Lawsuit Filed

In recent weeks, O’Sullivan and Kytch filed suit against Taylor and one of its major distributors for breach of contract, misappropriation of trade secrets and “tortious interference.” (PDF) Kytch alleges, among other things, that the company – working with a customer who was a prominent McDonald’s franchisee and a Taylor distributor – illegally obtained a Kytch device and reverse engineered it. Soon after, the company announced that it would be launching its own Kytch-like device in 2021. At around the same time, McDonald’s warned franchisees using the Kytch device that doing so could violate its warranty with Taylor and put its employees’ physical safety at risk – a message that many franchisees interpreted as a warning against using the device from McDonald’s corporate leadership.

Seeds of Destruction: Cyber Risk Is Growing in Agriculture

For O’Sullivan, the behaviors reinforced concerns and misgivings he had about Taylor after analyzing data from the large number of Kytch devices deployed in McDonald’s and other restaurants. The company’s software, he said, seemed to get worse over time, not better – with each software update introducing more instability, not less; more ways for the ice cream machines to break down, not fewer. Most suspicious of all: Taylor refused to talk about it.

“These people don’t want to have a forthright, open conversation about their software because they’re using [it] for malicious means – to support their healthy service and repair business.”

A Fairytale of the Deflating Variety

In this podcast, we talk with Jeremy about his experience with Taylor and McDonald’s, the role that software can play in creating powerful constraints on customers and the marketplace. We also discuss the lawsuit Kytch filed and some of the other unseemly revelations contained in his suit.

For O’Sullivan, the lessons of his experience aren’t the uplifting kind. “This is a very sad story and a very un-American story,” O’Sullivan told me. If Kytch was a “vaccine” for the virus of software-driven inefficiency, the real story is about the virus and the “McDonald’s industrial complex” that gave rise to it, not his company’s cure for it.

“This is crazy because it’s a story about McDonald’s that is also about the demise of McDonald’s. McDonald’s is supposed to be a symbol for America and a forward-thinking, tech-oriented company, and it really exposes how the company has devolved.”

Check out the podcast by clicking the button below!

Overworked, understaffed teams? Reactive processes? Noisy tools and a river of false positives? Welcome to the “SOC hop,” the dysfunctional “normal” of the typical corporate security operations center (SOC). But it is a normal that is not sustainable, writes Steve Garrison of Stellar Cyber in this Security Ledger expert insight.


There have always been two ways to view the Security Operations Center (SOC). Idealized, the SOC runs with precision and control. Operations are linear, it is well staffed, and the workload is reasonable. Surprises may occur, but they are readily handled in stride. There is no real sense of panic or fatigue, because everything works as planned. Then there is the other view of the typical SOC, the one experienced by most companies. This is the SOC hop.

Enter the SOC Hop

The hop is characterized by an overworked, understaffed team that is constantly jumping from one fire to the next. As much as the team is qualified and eager to be proactive, all their time is consumed by reacting to events and alerts. Most of the professionals are exhausted, and some even question the value of certain tools that generate too many false positives and lack the ability to prioritize alerts in a meaningful, productive way. Ultimately, the SOC hop is not sustainable. Security outcomes are getting worse, not better, and a data breach – or something even worse – seems a foregone conclusion.

[You might also like Futility or Fruition? Re-thinking Common Approaches to Cyber Security]

Steve Garrison, Stellar Cyber
Steve is the Vice President of Marketing at Stellar Cyber.

Fortunately, organizations are rethinking the SOC and how it works in a classic “there must be a better way” reevaluation. The SOC is being reimagined to better deal with the realities of today and to scale to challenges still to come. The considerations may seem trite or tired—all too familiar—but they represent the fundamental changes needed to leave the SOC hop and move on to something better.

The SOC: re-imagined

First, there is visibility. Everyone knows this. An old security adage is “you can’t secure what you cannot see.” This is just as true today as when it first became a cliché. Visibility is a combination of depth and breadth. Attackers may target any portion of an attack surface and will traverse networks and infrastructure to gain access to valuable assets. This means that every part of the attack surface must be monitored and that organizations can see across their entire infrastructure to find the East-West activities intrinsic to the work of a skilled attacker. At the same time, data must also provide contextual references for an event to help boost accuracy and understanding of findings.

Second, the re-imagined SOC needs combined intelligence. The silos of separate tools with their individual alerts—and SIEMs that cannot gather data deep and broad enough to provide comprehensive understanding and relevant warnings—need to be united. Not only do security tools and systems need to connect, they need to correlate their data to help paint a broader, clearer picture of potential in-progress attacks. This is more than API connectivity and rudimentary integrations. It also means real time, or close to it. Again, attacks are not static; they are dynamic, and attackers move, conducting a campaign to maximize the return on their activity. One event may be inconsequential and below the radar, but connecting the dots may clearly reveal an attack in progress.
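As a toy illustration of that dot-connecting, the Python below groups alerts from different tools by a shared indicator within a time window. The alert fields, tool names and thresholds are assumptions for illustration only, not a real product’s schema:

```python
from collections import defaultdict

# Minimal correlation sketch: alerts from separate tools are low-signal alone,
# but clustering them on a shared indicator within a time window can surface
# one campaign spanning multiple tools.

def correlate(alerts: list[dict], window_minutes: int = 60) -> list[list[dict]]:
    by_indicator = defaultdict(list)
    for alert in alerts:
        by_indicator[alert["indicator"]].append(alert)

    campaigns = []
    for group in by_indicator.values():
        group.sort(key=lambda a: a["minute"])
        tools = {a["tool"] for a in group}
        span = group[-1]["minute"] - group[0]["minute"]
        # Require corroboration from multiple tools, close together in time.
        if len(tools) >= 2 and span <= window_minutes:
            campaigns.append(group)
    return campaigns

alerts = [
    {"tool": "edr",  "indicator": "10.0.0.7", "minute": 0,   "event": "new service installed"},
    {"tool": "ndr",  "indicator": "10.0.0.7", "minute": 25,  "event": "lateral SMB scan"},
    {"tool": "siem", "indicator": "10.0.0.9", "minute": 400, "event": "failed login"},
]
found = correlate(alerts)
assert len(found) == 1                              # only the 10.0.0.7 cluster correlates
assert {a["tool"] for a in found[0]} == {"edr", "ndr"}
```

Real platforms correlate on many more dimensions (users, hashes, hosts) and stream continuously, but the principle is the same: fewer, higher-confidence alerts built from corroborating signals.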

Mind the Gap

Third, gaps need to be covered. There are normally gaps between coverage areas and the realm that each security tool ingests. Logs will only reflect a portion of the evidence. There are typically some kind of boundaries between what is monitored as perimeter, endpoint, cloud, server, data center, etc. In addition, gaps may come into being through the natural, dynamic change of a company’s networks and infrastructure. Ideally, gaps can be met with sensors to ensure full visibility and assessment. Data from these sensors may be instrumental in finding attack activity or bolstering findings.

The combination of full visibility, covered gaps and combined intelligence can be a game-changer for the SOC. It can substantially change the way the team works. Rather than hopping from incident to incident or between every alert or event that pops up, the cacophony of disparate alerts can be put to an end to produce alerts that are fewer in number, higher in accuracy and relevance, and prioritized for action. Here, the tools support the SOC team rather than the other way around. The tiresome SOC hop can give way to a wholly new way of working and getting an upper hand on the many challenges of protecting data, infrastructure and valuable assets.


We all harbor magical and romantic ideas about the transformative power of both technology and entrepreneurship. “Build a better mousetrap and the world will beat a path to your door.” That romantic idea is at the heart of many start-ups, and the Internet has created a platform on which a never-ending stream of new “mousetraps” can be conceived. Think, for example, of the NEST thermostat – that iconic Internet of Things product, which stylishly re-imagined the boring old thermostat: outfitting it with brains, motion sensors, a cool graphical interface and a powerful, web-based back end.

Company Town 2.0

But technology can just as easily create dystopias as utopias: obscuring the operation of formerly mechanical instruments whose workings were easily observable, or harvesting data from unwitting users and ferrying it off to the cloud for use by shadowy global corporations. Technology can even trap owners and business people in a kind of servitude – like the sharecroppers or the residents of “company towns” in the 19th and 20th centuries whose lives were prescribed and diminished by invisible bonds of contract and dependence, rather than by physical chains.

Report: Companies Still Grappling with IoT Security

As the Internet of Things expands from “smart thermostats” to cars, home appliances, the agricultural equipment that plants and harvests our food, and the machinery that businesses rely on, the chances that you or someone you love might end up as a digital sharecropper or the resident of one of these virtual “company towns” are growing.

And that’s what our podcast this week is about. In the first part of a two-part episode, we’re joined by Jeremy O’Sullivan, co-founder of the IoT analytics company Kytch. Jeremy and his wife and co-founder, Melissa Nelson, were the subjects of an amazing profile in Wired Magazine by Andy Greenberg that described the young company’s travails with the commercial ice cream machine manufacturer Taylor, a storied multi-national whose equipment is used by businesses from corner soft-serve joints to giants like Burger King and McDonald’s.

TV Maker TCL Denies Back Door, Promises Better Process

Despite the company’s sterling reputation and dominant position in the marketplace, O’Sullivan found that the Taylor equipment was notoriously unreliable – to the point of becoming an Internet meme. Today, sites like McBroken.com use McDonald’s online ordering websites to determine that anywhere from 5% to almost a quarter of all McDonald’s ice cream machines in major U.S. metro regions are not operational at any given time.

O’Sullivan’s big idea for Kytch: use technology he had developed to let franchise owners better stay on top of the Taylor machines’ maintenance and cleaning functions – keeping their machines running and churning out soft serve for happy consumers.

A Cautionary Tale for IoT Entrepreneurs

It wasn’t to be, for reasons that still aren’t that clear, but are the subject of a lawsuit filed in Superior Court in California on May 10 by Kytch against Taylor and a prominent McDonald’s franchisee: Jonathan Tyler Gamble.

Regardless of how that lawsuit turns out or what it reveals, O’Sullivan’s story is a cautionary tale for entrepreneurs who are hoping to carve out a niche in what amount to software-enforced monopolies that are becoming more and more common. That’s true whether those monopolies govern Apple’s phones (see the ongoing lawsuit between Apple and Epic over access to the AppStore), John Deere’s monopoly on service and repair of its agricultural equipment or – in this case – Taylor’s stranglehold on its ice cream machines.

In this first installment, Jeremy tells us about the origins of Kytch in the fabulously named “Fro Bot,” which he and Melissa – soft-serve enthusiasts – imagined as a kind of “Red Box” that automated Taylor’s ice cream machines. What starts out as a story about a plucky entrepreneur showing an older, richer company how to make its business better ends up in a different, darker place. In this episode, Jeremy takes us back to the beginning.


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

In this episode of the podcast (#214), Brandon Hoffman, the CISO of Intel 471, joins us to discuss the recent ransomware attack on the Georgia-based Colonial Pipeline and the suspected group behind it: DarkSide, a ransomware-for-hire cybercrime outfit.


It was just a week ago, May 7th, 2021, that a successful cyberattack against one of the largest U.S. oil and gas pipelines, operated by the Colonial Pipeline Company, forced it to shut down and plunged the U.S. government into an unanticipated crisis. Within days, there were reports of consumers panic-buying petrol leading to gas shortages in the southeastern United States.

Do Cities deserve Federal Disaster Aid after Cyber Attacks?

Then, almost as suddenly as the crisis appeared it was over. Colonial, which was reported to have paid the Darkside group a $5 million ransom to regain access to their servers, announced that it would restore pipeline operations by the end of the week. And, in a message to a private forum on Thursday captured by the firm Intel 471, the ransomware group credited with the attack, known as “Darkside,” said that it was shutting down after its blog, payment server and Internet infrastructure were seized by law enforcement and cryptocurrency from a Darkside controlled payment server was diverted to what was described as an “unknown account.” 

An image of the message posted by the Darkside group ceasing operations. (Image courtesy of Intel 471.)

Other news reports suggest the cyber criminal underground was getting skittish about ransomware groups, now that the full force of the U.S. government appears to be focused on rooting them out. Reports out Friday claim that the Russian cyber hacking forum XSS has banned all topics related to ransomware.

Episode 169: Ransomware comes to the Enterprise with PureLocker

What happened? And who – or what – is the Darkside group responsible for the Colonial pipeline attack? We invited Brandon Hoffman, CISO at the firm Intel 471 back into the studio to talk about Darkside, which Intel 471 has followed and profiled in depth since it emerged last summer.

“They (DarkSide) don’t necessarily want to have their affiliates attack Critical Infrastructure or the government.”

-Brandon Hoffman, CISO Intel 471

The quick collapse seen in recent days may be a case of Darkside biting off more than it can chew by attacking a target that managed to put it in the crosshairs of the U.S. government. But, as we discuss, the Colonial Pipeline hack also raises a number of questions regarding the state of America’s critical infrastructure, and whether it is secure enough to withstand both directed and opportunistic attacks. “Ransomware is no longer a cybercrime problem, it’s really a national security issue,” Brandon tells me.

Report: Critical Infrastructure Cyber Attacks A Global Crisis

In this conversation, Brandon briefs us on DarkSide and outlines the group’s motivations and processes when it works with affiliates and targets victims. The attack on Colonial will almost certainly prompt changes by attackers, which will be wary of inviting retaliation from nations like the U.S.

Carolynn van Arsdale (@Carolynn_VA) contributed to this story.


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

It already seems like a lifetime ago that the hack of the Orion network management software by SolarWinds consumed the attention of the media, lawmakers and the for-profit information security industry. In reality, it was barely six months ago that the intrusion first came to light.

In the space of a few months, however, there have already been successive, disruptive events of at least the same seriousness. Among them: the attacks on vulnerabilities in Microsoft’s Exchange Server and – in the last week – the ransomware attack that brought down the Colonial Pipeline in the Eastern United States.

Lessons Still Being Learned

The “party” may have moved on from SolarWinds, but the impact and import of the SolarWinds Orion hack are still being worked out. What were the “lessons” of SolarWinds? Beyond the 200+ organizations that were impacted by the supply chain compromise, how can private and public sector firms learn from it and become more resilient and less susceptible to the kinds of attacks that hit the likes of FireEye, Qualys, Equifax and more?

To help answer those questions, the security firm ForAllSecure assembled* an all-star roundtable to tackle the question not so much of “what happened” with SolarWinds, but “how to prevent it from happening again.” I had the pleasure of moderating that discussion and I’m excited to share the video of our talk below.

Our panel included some of the sharpest minds in information security. David Brumley, a star security researcher and the CEO of ForAllSecure was joined by Chenxi Wang, a Managing General Partner at Rain Capital. We also welcomed infosec legend and Metasploit creator H.D. Moore, now the CEO of Rumble, Inc. and Vincent Liu, the CEO of the firm Bishop Fox.

I’ve included a link to our discussion below. I urge you to check it out – there are a lot of valuable insights and takeaways from our panelists. In the meantime, here are a couple of my high-level takeaways:

Supply Chain’s Fuzzy Borders

One of the biggest questions we wrestled with as a panel was how to define “supply chain risk” and “supply chain hacks.” It’s a harder question than you might think. Our panelists were in agreement that there was a lot of confusion in the popular and technology media about what is – and isn’t – supply chain risk. Our panelists weren’t even in agreement on whether SolarWinds itself constituted a supply chain hack, given that the company wasn’t a supply chain partner to the victims of the attack so much as a service provider. Regardless, there was broad agreement that the risk is real and pronounced.

Episode 208: Getting Serious about Hardware Supply Chains with Goldman Sachs’ Michael Mattioli

“We rely on third party software and services a lot more – APIs and everything,” said Chenxi Wang. “With that, the risk from software supply chain is really more pronounced.”

“What your definition of supply chain risk is depends on where you sit,” said Vincent Liu. That starts with third-party or open source libraries. “If you’re a software developer, your supply chain looks very different from an end user who does no software development but needs to ensure that the software you bring into an organization is secure.”

At the end of the day, getting the exact definition of “supply chain risk” right is less important than understanding what the term means for your organization and – of course – making sure you’re accounting for that risk in your security planning and investment.

Third Party Code: Buggy, or Bugged?

Another big question our panel considered was who is behind supply chain attacks and who the targets are. Supply chain insecurities are deliberately introduced into the software development process, noted Vincent Liu of Bishop Fox. Securing against that requires a different approach than traditional secure software development tools, which are about identifying flaws in the (legitimate) software development process.

Firms are embracing Open Source. Securing it? Not so much.

The distinction between a flawed open source component and a compromised one matters, even if the impact is the same, said Brumley. “You really have to tease out the question of a vulnerability and whether inserted by the attacker or just latent there,” he said. The media is more interested in the concept of a malicious actor working behind the scenes to undermine software; shoddy software is less dramatic. But both need to be addressed, he said.

Developers in the Crosshairs

Another big takeaway concerns the need for more attention within organizations to an overlooked target in supply chain attacks: the developer. “It used to be that attacks were focused on company repositories and company source code. Now they’re very focused on developer machines instead,” said Moore of Rumble. “There are a lot more phishing attacks on developers, a lot of folks scraping individual developer accounts on GitHub looking for secrets. So it’s really shifted from attacks on large corporations with centralized resources to individual developer resources.”

The days of sensitive information like credentials and API keys hiding in plain sight are over. “Especially for developers, your mean time to compromise now is so fast,” said Moore.
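The kind of secret scraping Moore describes is not sophisticated: attackers run pattern matchers over public repositories and commit histories. A minimal sketch of the idea follows; the patterns here are illustrative only (real scanners use hundreds of provider-specific signatures), and the sample strings are hypothetical.

```python
import re

# Illustrative signatures only -- attackers and defenders alike maintain
# far larger, provider-specific pattern sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*['\"]([A-Za-z0-9/+=_-]{16,})['\"]"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (pattern_name, matched_text) pairs found in `text`."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# Hypothetical leaked snippet, modeled on AWS's documented example key ID.
sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key: "abcd1234efgh5678"'
for name, hit in scan_text(sample):
    print(name, "->", hit)
```

The same logic, pointed at every public commit of every public repository, is why a credential pushed to GitHub is typically abused within minutes; defenders can run identical scans as a pre-commit hook to catch leaks before they ship.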


Unfortunately, too few organizations are investing to solve that very real problem. Too many firms continue to direct the lion’s share of their security investment into legacy tools, technologies and capabilities. “Tons of tooling and money has been spent in terms of things that find vulnerabilities because they can find them,” notes Liu. “But most of them don’t matter. Really spending your time on the right things rather than just doing things because you can do them is so critically important.”

Check out our whole conversation below!

(*) Disclosure: This blog post was sponsored by ForAllSecure. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

Digital transformation is revolutionizing how healthcare is delivered. But a simmering dispute between a UK security researcher and a domestic healthcare non-profit suggests that the road ahead may be bumpy for both organizations embracing new software and services and for those who dare to ask questions about how that software works.


A case in point: UK-based engineer Rob Dyke has spent months caught in an expensive legal tussle with the Apperta Foundation, a UK-based clinician-led non-profit that promotes open systems and standards for digital health and social care. The dispute stems from a confidential report Dyke made to the Foundation in February after discovering that two of its public Github repositories exposed a wide range of sensitive data, including application source code, user names, passwords and API keys.

The application in question was dubbed “Apperta Portal” and was publicly accessible for two years, Dyke told Security Ledger. Dyke informed the Foundation in his initial notification that he would hold on to the data he discovered for 90 days before deleting it, as a courtesy. Dyke insists he followed Apperta’s own information security policy, with which he was familiar as a result of earlier work he had done with the organization.

Open Source vs. An Open Sorcerer

Dyke (@robdykedotcom) is a self-described “open sorcerer” with expertise in the healthcare sector. In fact, he previously worked on Apperta-funded development projects to benefit the UK’s National Health Service (NHS) and had a cordial relationship with the organization. Initially, the Foundation thanked Dyke for disclosing the vulnerability and removed the exposed public source code repositories from GitHub.


That honeymoon was short lived. On March 8, 2021, Dyke received a letter from a law firm representing Apperta Foundation that warned that he “may have committed a criminal offense under the Computer Misuse Act 1990,” a thirty-year-old U.K. computer crime law.  “We understand (you) unlawfully hacked into and penetrated our client’s systems and databases and extracted, downloaded (and retain) its confidential business and financial data. You then made threats to our client…We are writing to advise you that you may have committed a criminal offence under the Computer Misuse Act 1990 and the Investigatory Powers Act 2016,” the letter read, in part. Around the same time, he was contacted by a Northumbria Police cyber investigator inquiring about a report of “Computer Misuse” from Apperta.

A Hard Pass By Law Enforcement

The legal maneuvers by Apperta prompted Dyke to go public with the dispute – though he initially declined to name the organization pursuing him – and to hire his own lawyers and to set up a GoFundMe to help offset his legal fees. In an interview with The Security Ledger, Dyke said Apperta’s aggressive actions left him little choice. “(The letter) had the word ‘unlawful’ in there, and I wasn’t about to sign anything that had said I’d committed a criminal offense,” he said.

After interviewing Dyke, law enforcement in the UK declined to pursue a criminal case against him for violating the CMA. However, the researcher’s legal travails have continued all the same.


Keen to ensure that Dyke deleted the leaked data and application code he downloaded from the organization’s public GitHub repositories, Apperta’s lawyers sent multiple emails instructing Dyke to destroy or immediately deliver the data he found from the security vulnerability; to give confirmation he had not and would not publish the data he “unlawfully extracted;” and to give another confirmation that he had not shared this data with anyone.

Dyke insists he deleted the exposed Apperta Foundation data weeks ago, soon after first being asked by the Foundation. Documents provided by Dyke that were sent to Apperta attest that he “destroyed all Apperta Foundation CIC’s data and business information in my possession, the only such relevant material having been collated in February 2021 for the purposes of my responsible disclosure of serious IT security concerns to Apperta of March 2021 (“the Responsible Disclosure Materials”).”

Not Taking ‘Yes’ For An Answer

Nevertheless, Apperta’s legal team found that Dyke’s response was “not an acceptable undertaking,” and that Dyke posed an “imminent further threat” to the organization. Months of expensive legal wrangling ensued, as Apperta sought to force Dyke to delete any and all work he had done for the organization, including code he had developed and licensed as open source. All the while, Dyke fielded correspondence that suggests Apperta was preparing to take him to court. In recent weeks, the Foundation has inquired about his solicitor and whether he would be representing himself legally. Other correspondence has inquired about the best address at which to serve him an injunction and passed along forms to submit ahead of an interim hearing before a judge.

In late April, Dyke transmitted a signed undertaking and statement to Apperta that would seem to satisfy the Foundation’s demands. But the Foundation’s actions and correspondence have him worried that it may move ahead with legal action, anyway – though he is not sure exactly what for. “I don’t understand what the legal complaint would be. Is this about the database access? Could it be related to (intellectual property)?” Dyke said he doubts Apperta is pursuing “breach of contract” because he isn’t under contract with the Foundation and – in any case – followed the organization’s responsible disclosure policy when he found the exposed data, which should shield him from legal repercussions.

In the meantime, the legal bills have grown. Dyke told The Security Ledger that his attorney’s fees in March and April totaled £20,000 and another £5,000 since – with no clear end in sight. “It is not closed. I have zero security,” Dyke wrote in a text message.

In a statement made to The Security Ledger, an Apperta Foundation spokesperson noted that, to date, it “has not issued legal proceedings against Mr. Dyke.” The Foundation said it took “immediate action” to isolate the breach and secure its systems. However, the Foundation also cast aspersions on Dyke’s claims that he was merely performing a public service in reporting the data leak to Apperta and suggested that the researcher had not been forthcoming.

“While Mr. Dyke claims to have been acting as a security researcher, it has always been our understanding that he used multiple techniques that overstepped the bounds of good faith research, and that he did so unethically.”

-Spokesperson for The Apperta Foundation

Asked directly whether The Foundation considered the legal matter closed, it did not respond, but acknowledged that “Mr. Dyke has now provided Apperta with an undertaking in relation to this matter.”

Apperta believes “our actions have been entirely fair and proportionate in the circumstances,” the spokesperson wrote.

Asked by The Security Ledger whether the Foundation had reported the breach to regulators (in the UK, the Information Commissioner’s Office is the governing body), the Foundation did not respond but said that it “has been guided by the Information Commissioner’s Office (ICO) and our legal advisers regarding our duties as a responsible organisation.”

OK, Boomer. UK’s Cyber Crime Law Shows Its Age

Dyke’s dealings with Apperta highlight the fears that many in the UK cybersecurity community harbor regarding the CMA, a 30-year-old computer crime law that critics say is showing its age.

A 2020 report from TechUK found that 80% of respondents said they have worried about breaking the CMA when researching vulnerabilities or investigating cyber threat actors. Around 40% of the same respondents said the law has acted as a barrier to them or their colleagues and has even prevented cybersecurity employees from proactively safeguarding against security breaches.

Those who support amending the CMA believe that the legislation poses several threats to Great Britain’s cyber and information security industry. Edward Parsons of the security firm F-Secure observed that “the CMA not only impacts our ability to defend victims, but also our competitiveness in a global market.” That downside, along with the ever-present shortage of cybersecurity professionals in the U.K., is why Parsons believes an updated version of the CMA should be passed into law.

Dyke said the CMA looms over the work of security researchers. “You have to be very careful about participating in any, even formal bug bounties (…) If someone’s data gets breached, and it comes out that I’ve reported it, I could actually be picked up under the CMA for having done that hacking in the first place,” he said. 

Calls for Change

Calls for changes are growing louder in the UK. A January 2020 report by the UK-based Criminal Law Reform Now Network (CLRNN) found that the Computer Misuse Act “has not kept pace with rapid technological change” and “requires significant reform to make it fit for the 21st century.” Among the recommendations of that report were allowances for researchers like Dyke to make “public interest defences to untie the hands of cyber threat intelligence professionals, academics and journalists to provide better protections against cyber attacks and misuse.”

Policy makers are taking note. This week, British Home Secretary Priti Patel told the audience at the National Cyber Security Centre’s (NCSC’s) CyberUK 2021 virtual event, that the CMA had served the country well, but that “now is the right time to undertake a formal review of the Computer Misuse Act.”

For researchers like Dyke, the changes can’t come soon enough. “The Act is currently broken and disproportionately affects hackers,” he wrote in an email. He said a reformed CMA should make allowances for the kinds of “accidental discoveries” that led him to the Apperta breach. “It needs to ensure safe harbour for professional, independent security researchers making Responsible Disclosure in the public interest.”

Carolynn van Arsdale contributed to this story.

Web sites for customers of agricultural equipment maker John Deere contained vulnerabilities that could have allowed a remote attacker to harvest sensitive information on the company’s customers including their names, physical addresses and information on the Deere equipment they own and operate.

The researcher known as “Sick Codes” (@sickcodes) published two advisories on Thursday warning about flaws in the myjohndeere.com web site and the John Deere Operations Center web site and mobile applications. In a conversation with Security Ledger, the researcher said that he was able to use VINs (vehicle identification numbers) taken from a farm equipment auction site to identify the name and physical address of the equipment’s owner. Furthermore, a flaw in the myjohndeere.com website could allow an unauthenticated user to carry out automated attacks against the site, possibly revealing all the user accounts for that site.

Sick Codes disclosed both flaws to John Deere and also to the U.S. Government’s Cybersecurity and Infrastructure Security Agency (CISA), which monitors food and agriculture as a critical infrastructure sector. As of publication, the flaws discovered in the Operations Center have been addressed while the status of the myjohndeere.com flaws is not known.

Contacted by The Security Ledger, John Deere did not offer comment regarding the bulletins prior to publication.

Sick Codes, the researcher, said he created a free developer account with Deere and found the first myjohndeere.com vulnerability before he had even logged into the company’s web site. The two flaws he disclosed represent only an hour or two of probing the company’s website and Operations Center. He feels confident there is more to be found, including vulnerabilities affecting the hardware and software deployed inside the cabs of Deere equipment.

“You can download and upload stuff to tractors in the field from the web. That is a potential attack vector if exploitable.”

Ag Equipment Data: Fodder for Nation States

The information obtained from the John Deere websites, including customer names and addresses, could run the company afoul of data security laws like California’s CCPA or the Personal Information Protection Act in Deere’s home state of Illinois. However, the national security consequences of the company’s leaky website could be far greater. Details on what model combines and other equipment is in use on what farm could be of very high value to an attacker, including nation-states interested in disrupting U.S. agricultural production at key junctures, such as during planting or harvest time.

The consolidated nature of U.S. farming means that an attacker with knowledge of specific, Internet connected machinery in use by a small number of large-scale farming operations in the midwestern United States could launch targeted attacks on that equipment that could disrupt the entire U.S. food supply chain.

Despite creating millions of lines of software to run its sophisticated agricultural machinery, Deere has not registered so much as a single vulnerability with the Government’s CVE database, which tracks software flaws.

At Risk: Devastating Attacks on Food Chain

Agriculture is uniquely susceptible to such disruptions, says Molly Jahn, a Program Manager in the Defense Sciences Office at DARPA, the Defense Advanced Research Projects Agency and a researcher at the University of Wisconsin, Madison.


“Unlike many industries, there is extreme seasonality in the way John Deere’s implements are used,” Jahn told Security Ledger. “We can easily imagine timed interference with planting or harvest that could be devastating. And it wouldn’t have to persist for very long at the right time of year or during a natural disaster – a compound event.” An attack aimed at economic sabotage and carried out through combines at harvest time in the midwest would be “devastating and unrecoverable depending on the details,” said Jahn.


However, the agriculture sector and firms that supply it, like Deere, lag other industries in cyber security preparedness and resilience. A 2019 report released by the Department of Homeland Security concluded that the “adoption of advanced precision agriculture technology and farm information management systems in the crop and livestock sectors is introducing new vulnerabilities into an industry which had previously been highly mechanical in nature.”

DHS Report: Threats to Ag Not Taken Seriously

“Most of the information management / cyber threats facing precision agriculture’s embedded and digital tools are consistent with threat vectors in all other connected industries. Malicious actors are also generally the same: data theft, stealing resources, reputation loss, destruction of equipment, or gaining an improper financial advantage over a competitor,” the report read.

The research group that prepared that report visited large farms and precision agriculture technology manufacturers “located throughout the United States.” It concluded that “potential threats to precision agriculture were often not fully understood or were not being treated seriously enough by the front-line agriculture producers.”

Jahn said the U.S. agriculture sector has emphasized efficiency and cost savings over resilience. The emergence of precision agriculture in the last 15 years has driven huge increases in productivity, but also introduced new risks of disruptions that have not been accounted for.

“We have not thought about protecting the data from unwanted interference of any type,” she said. “That includes industrial espionage, sabotage or a full on attack…I have consistently maintained cyber risk on the short list of existential threats to US food and agriculture system.”

In just the last two weeks, three of the world’s most prominent social networks have been linked to stories about data leaks. Troves of information on both Facebook and LinkedIn users – hundreds of millions of them – turned up for sale in marketplaces in the cyber underground. Then, earlier this week, a hacker forum published a database purporting to be information on users of the new Clubhouse social network. 


To hear Facebook, LinkedIn and Clubhouse tell it, however, nothing is amiss. All took pains to explain that they were not the victims of a hack, just “scraping” of public data on their users by individuals. Facebook went so far as to insist that it would not notify the 530 million users whose names, phone numbers, birth dates and other information were scraped from its site.

So which is it? Is scraping the same as hacking or just an example of “zealous” use of a social media platform? And if it isn’t considered hacking…should it be? As more and more online platforms open their doors to API-based access, what restrictions and security should be attached to those APIs to prevent wanton abuse? 
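One restriction commonly proposed for such APIs is per-client rate limiting, which makes bulk enumeration of hundreds of millions of profiles impractical without tripping alarms. Below is a minimal token-bucket sketch of the idea; the class name, limits, and `check_request` helper are illustrative assumptions, not any platform's actual policy or code.

```python
import time

class TokenBucket:
    """Minimal per-client token bucket: sustains `rate` requests per
    second, with bursts up to `capacity`. Real platforms layer this with
    per-key quotas, anomaly detection, and progressive backoff."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, then try to spend one.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key: a scraper hammering an endpoint is throttled
# as soon as its burst allowance is spent.
buckets = {}

def check_request(api_key, rate=5, capacity=10):
    bucket = buckets.setdefault(api_key, TokenBucket(rate, capacity))
    return bucket.allow()
```

Rate limiting alone doesn't stop a patient scraper with many keys, which is why the panel's larger question – what data an API should expose at all, and for how long it should be retained – still matters.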

To discuss these issues and more, we invited Andrew Sellers into the Security Ledger studios. Andrew is the Chief Technology Officer at the firm QOMPLX* where he oversees the technology, engineering, data science, and delivery aspects of QOMPLX’s next-generation operational risk management and situational awareness products. He is also an expert in data scraping with specific expertise in large-scale heterogeneous network design, deep-web data extraction, and data theory. 

While the recent incidents affecting LinkedIn, Facebook and Clubhouse may not technically qualify as “hacks,” Andrew told me, they do raise troubling questions about the data security and data management practices of large social media networks, and prompt the question of whether more needs to be done to regulate the storage and retention of data on these platforms.


(*) QOMPLX is a sponsor of The Security Ledger.