In this episode of the podcast (#216), sponsored by DigiCert, we talk with Brian Trzupek, DigiCert’s Senior Vice President of Product, about the growing urgency of securing software supply chains, and how digital code signing can help prevent compromises like the recent hack of the firm SolarWinds.


We spend a lot of time talking about software supply chain security these days. But what does that mean? At the 10,000-foot level it means “don’t be the next SolarWinds” – don’t let a nation-state actor infiltrate your build process and insert a backdoor that gets distributed to thousands of customers – including technology firms and three-letter government agencies.

OK. Sure. But speaking practically, what are we talking about when we talk about securing the software supply chain? Well, for one thing: we’re talking about securing the software code itself. We’re talking about taking steps to ensure that what is written by our developers is actually what goes into a build and then gets distributed to users.

Digital code signing – using digital certificates to sign submitted code – is one way to do that. And use of code signing is on the rise. But is that alone enough? In this episode of the podcast, we’re joined by Brian Trzupek, the Senior Vice President of Product at DigiCert, to talk about the growing role of digital code signing in preventing supply chain compromises and providing an audit trail for developed code.

Brian is the author of a recent Executive Insight on Security Ledger in which he notes that code signing certificates are a highly effective way to ensure that software is not compromised – but only as effective as the strategy and best practices that support it. When poorly implemented, Brian notes, code signing loses its effectiveness in mitigating risk for software publishers and users.

In this conversation we talk about the changes to tooling, process and staff that DevOps organizations need to embrace to shore up the security of their software supply chain.

“It boils down to: do you have something in place to ensure code quality, fix vulnerabilities and make sure that code isn’t incurring tech debt?” Brian says. Ensuring those things involves process and new products and tools, as well as the right mix of staff and talent to assess new code for security issues.

One idea that is gaining currency within DevOps organizations is “quorum-based deployment,” in which multiple staff members review and sign off on important code changes before they are deployed. Check out our full conversation using the player (above) or download the MP3 using the button below.


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

The recent SolarWinds attack highlights an Achilles heel for enterprises: software updates for critical enterprise applications. Digital signing of code is one solution, but organizations need to modernize their code signing processes to prioritize security and integrity and align with DevOps best practices, writes Brian Trzupek, the Senior Vice President of Product at DigiCert, in this thought leadership article.


Even in today’s security-charged world, the SolarWinds breach was a wakeup call for cybersecurity professionals. It was distinguished by its sophistication and the fact that it was carried out as part of legitimate software updates. The incident was quickly all over the news and has brought renewed focus on the need for secure DevOps.

[Read also: “Automating your way out of PKI chaos.”]

Code Signing Alone Is Not Enough

The security incident at SolarWinds was especially unsettling because it could easily happen at any large software organization that delivers regular updates. Code signing certificates are a highly effective way to ensure that software is not compromised. However, they are only as effective as the strategy and best practices behind them. When poorly implemented, code signing loses its effectiveness in mitigating risk for software publishers and users. Issues often include:

  • Using the same signing key to sign all files, across multiple product lines and businesses
  • Lack of mechanisms in place to control who can sign specific files
  • Insufficient reporting capabilities for insights into who signed what and when
  • Failure to sign code at every stage of development, as part of an overall security by design process
  • Failure to sign and verify code from third parties
  • Poor processes for securing keys and updating them to new key size or algorithm requirements
  • Failure to test code integrity before signing
  • Inadequate visibility into where certificates are, and how they are managed


[Read Brian’s piece Staying Secure Through 5G Migration.]

What have we learned from the SolarWinds attack? For organizations where DevOps is fundamental, applying best practices to signing processes is more essential than ever. According to some studies, more than half of IT security professionals are concerned about bad actors forging or stealing certificates to sign code – but fewer than a third enforce code signing policies on a consistent basis. It’s time for organizations to do better and enforce zero trust across all their systems, signing everything at every stage, after verifying it is secure.

Simplifying and standardizing

Traditional code signing processes can be complex and difficult to enforce. They are often based on storing keys on desktops, as well as sharing them. Visibility into activities is often limited, making mismanagement or flawed processes difficult to discover and track. To mitigate these issues, many organizations are simplifying their processes using code-signing-as-a-service approaches. Code-signing-as-a-service can accelerate the steps required to get code signed, while making it easier to keep code secure. A robust solution can empower organizations with automation, enabling teams to minimize manual steps and accelerate signing processes. APIs allow it to integrate seamlessly with development workflows, and automated scheduling capabilities enable organizations to proactively plan and approve signature windows to support new releases and updates.
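To make that pattern concrete, here is a minimal sketch of what a CI job calling a code-signing service might look like. The endpoint, request fields and key name below are invented for illustration – this is not any vendor’s actual API – but the shape of the interaction (submit a digest, sign with a per-team key, only inside an approved window) follows the practices described here.

```typescript
// Hypothetical sketch only: "signing.example.com", the request fields and the
// key name are invented for illustration, not any vendor's actual API. It
// shows the general pattern: a CI job submits an artifact digest for signing
// with a per-team key during a pre-approved release window, so no signing key
// ever lives on the build machine.
import { createHash } from "crypto";
import { readFileSync } from "fs";

async function requestSignature(artifactPath: string): Promise<void> {
  // Hash the artifact locally; only the digest leaves the build machine.
  const digest = createHash("sha256")
    .update(readFileSync(artifactPath))
    .digest("hex");

  const response = await fetch("https://signing.example.com/v1/sign", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.SIGNING_TOKEN}`, // short-lived CI credential
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      digest,
      keyId: "team-alpha-release-key", // separate key per team
      window: "release-2021-05",       // pre-approved signing window
    }),
  });

  if (!response.ok) {
    // Outside an approved window, or without permission for this key,
    // the service simply refuses to sign.
    throw new Error(`Signing refused: HTTP ${response.status}`);
  }
  console.log("Signature issued:", await response.json());
}

requestSignature("dist/installer.msi").catch(console.error);
```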

To strengthen accountability throughout the process, administrators can apply permission-based access. Strictly controlling access helps improve visibility into which users are allowed to sign code and which certificates and private keys they are allowed to utilize.

Standardizing workflows

Standardizing code signing workflows can also help reduce risk to an organization. Instead of allowing everyone in an organization to use the same key for signing, many organizations are using separate code signing keys for different DevOps teams, while granting administrators visibility over key usage. This best practice helps minimize the risk of mistakes that can occur across a company by limiting the ability of breaches to propagate. For example, if a key is used to sign a release that has been compromised, only one team’s code will be impacted.

Maximum flexibility to minimize risk

Key management flexibility is another helpful best practice, reducing risk by enabling administrators to specify shorter certificate lifetimes, rotate keys and control keypairs. For example, Microsoft grants higher reputation levels to publishers that rotate keys. With the right key management approach, an administrator could specify a certificate lifetime of a specific number of days or months exclusively for files designed for use in Microsoft operating systems.

Secure key storage offline except during signing events

Taking keys offline is another measure that can secure code signing. With the right code signing administrative tool, administrators can place keys in an “offline mode,” making it impossible to use them to sign releases without the proper level of permission being granted in advance. Release planning is fundamental to software development, so most developers are comfortable scheduling signatures for specific keys directly into their processes.

Taking keys offline is a strong step to ensure that keys will not be used in situations where they should not be. It also adds a layer of organizational security by splitting responsibilities between signers and those who approve them – while providing improved visibility into who signs with which keys.

Freeing up developers to do what they do best

It’s clear that safeguarding DevOps environments correctly remains challenging, but fortunately the right management tools can minimize hassles – and maximize protection. As we’ve discussed, automation is essential for applying security across CI/CD pipelines. Seek out a solution that can fit smoothly within workflows and free engineers from the individual steps required for cryptographic asset protection. The tool should make signing keys easily accessible when pushing code and automate signing of packages, binaries and containers on every merge to master, when authorized.

Organizations also need a process for testing code integrity before they sign. A centralized, effective signing management tool can handle the signing tasks, while integrating with other systems that perform the necessary integrity tests. For key security, the solution should provide the option of storing the keys offline in virtual HSMs. During a signing event, it should enable developers to access the keys to sign with one click, then return them to secure offline storage.

DevOps pros work within a variety of environments, so the signing solution should support portable, flexible deployment models via SaaS or in a public or private data center. Businesses in every industry are becoming increasingly software-driven, and the challenges to DevOps organizations won’t disappear anytime soon. However, with the right approach to code signing, organizations can dramatically strengthen their security posture, minimize their chances of becoming the next victim and ensure customer confidence in their solutions.


(*) Disclosure: This podcast was sponsored by Digicert. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

It already seems like a lifetime ago that the hack of SolarWinds’ Orion network management software consumed the attention of the media, lawmakers and the for-profit information security industry. In reality, it was barely six months ago that the intrusion first came to light.

In the space of a few months, however, there have already been successive, disruptive events of at least the same seriousness. Among them: the attacks on vulnerabilities in Microsoft’s Exchange Server and – in the last week – the ransomware attack that brought down the Colonial Pipeline in the Eastern United States.

Lessons Still Being Learned

The “party” may have moved on from SolarWinds, but the impact and import of the SolarWinds Orion hack are still being worked out. What were the “lessons” of SolarWinds? Beyond the 200+ organizations that were impacted by the supply chain compromise, how can private and public sector firms learn from it and become more resilient and less susceptible to the kind of attacks that hit the likes of FireEye, Qualys, Equifax and more?

To help answer those questions, the security firm ForAllSecure assembled* an all-star roundtable to tackle the question not so much of “what happened” with SolarWinds, but “how to prevent it from happening again.” I had the pleasure of moderating that discussion and I’m excited to share the video of our talk below.

Our panel included some of the sharpest minds in information security. David Brumley, a star security researcher and the CEO of ForAllSecure was joined by Chenxi Wang, a Managing General Partner at Rain Capital. We also welcomed infosec legend and Metasploit creator H.D. Moore, now the CEO of Rumble, Inc. and Vincent Liu, the CEO of the firm Bishop Fox.

I’ve included a link to our discussion below. I urge you to check it out – there are a lot of valuable insights and takeaways from our panelists. In the meantime, here are a couple of my high-level takeaways:

Supply Chain’s Fuzzy Borders

One of the biggest questions we wrestled with as a panel was how to define “supply chain risk” and “supply chain hacks.” It’s a harder question than you might think. Our panelists were in agreement that there is a lot of confusion in the popular and technology media about what is – and isn’t – supply chain risk. Our panelists weren’t even in agreement on whether SolarWinds itself constituted a supply chain hack, given that the company wasn’t a supply chain partner to the victims of the attack so much as a service provider. Regardless, there was broad agreement that the risk is real and pronounced.

Episode 208: Getting Serious about Hardware Supply Chains with Goldman Sachs’ Michael Mattioli

“We rely on third party software and services a lot more – APIs and everything,” said Chenxi Wang. “With that, the risk from software supply chain is really more pronounced.”

“What your definition of supply chain risk is depends on where you sit,” said Vincent Liu. That starts with third-party or open source libraries. “If you’re a software developer, your supply chain looks very different from an end user who does no software development but needs to ensure that the software you bring into an organization is secure.”

At the end of the day, nailing down the exact definition of “supply chain risk” is less important than understanding what the term means for your organization and – of course – making sure you’re accounting for that risk in your security planning and investment.

Third Party Code: Buggy, or Bugged?

Another big question our panel considered was who is behind supply chain attacks and who the targets are. According to Vincent Liu of Bishop Fox, supply chain insecurities are deliberately introduced into software development, so defending against them requires a different approach than traditional secure software development tools, which are about identifying flaws in the (legitimate) software development process.

Firms are embracing Open Source. Securing it? Not so much.

The distinction between a flawed open source component and a compromised one matters, even if the impact is the same, said Brumley. “You really have to tease out the question of a vulnerability and whether inserted by the attacker or just latent there,” he said. The media is more interested in the concept of a malicious actor working behind the scenes to undermine software, whereas shoddy software may be less dramatic. But both need to be addressed, he said.

Developers in the Crosshairs

Another big takeaway concerns the need for more attention within organizations to an overlooked target in supply chain attacks: the developer. “It used to be that attacks were focused on company repositories and company source code. Now they’re very focused on developer machines instead,” said Moore of Rumble. “There are a lot more phishing attacks on developers, a lot of folks scraping individual developer accounts on GitHub looking for secrets. So it’s really shifted from attacks on large corporations with centralized resources to individual developer resources.”

The days of sensitive information like credentials and API keys hiding in plain sight are over. “Especially for developers, your mean time to compromise now is so fast,” said Moore.

Spotlight Podcast: Fixing Supply Chain Hacks with Strong Device Identities

Unfortunately, too few organizations are investing to solve that very real problem. Too many firms are continuing to direct the lion’s share of their security investment into legacy tools, technologies and capabilities. “Tons of tooling and money has been spent in terms of things that find vulnerabilities because they can find them,” notes Liu. “But most of them don’t matter. Really spending your time on the right things rather than just doing things because you can do them is so critically important.”

Check out our whole conversation below!

(*) Disclosure: This blog post was sponsored by ForAllSecure. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

Digital transformation is revolutionizing how healthcare is delivered. But a simmering dispute between a UK security researcher and a domestic healthcare non-profit suggests that the road ahead may be bumpy for both organizations embracing new software and services and for those who dare to ask questions about how that software works.

Rob Dyke (@robdykedotcom) is an independent security researcher. (Image courtesy of Twitter.)

A case in point: UK-based engineer Rob Dyke has spent months caught in an expensive legal tussle with the Apperta Foundation, a UK-based clinician-led non-profit that promotes open systems and standards for digital health and social care. The dispute stems from a confidential report Dyke made to the Foundation in February, after discovering that two of its public GitHub repositories exposed a wide range of sensitive data, including application source code, user names, passwords and API keys.

The application in question was dubbed “Apperta Portal” and was publicly accessible for two years, Dyke told Security Ledger. Dyke informed the Foundation in his initial notification that he would hold on to the data he discovered for 90 days before deleting it, as a courtesy. Dyke insists he followed Apperta’s own information security policy, with which he was familiar as a result of earlier work he had done with the organization.

Open Source vs. An Open Sorcerer

Dyke (@robdykedotcom) is a self-described “open sorcerer” with expertise in the healthcare sector. In fact, he previously worked on Apperta-funded development projects to benefit the UK’s National Health Service (NHS) and had a cordial relationship with the organization. Initially, the Foundation thanked Dyke for disclosing the vulnerability and removed the exposed public source code repositories from GitHub.

Researchers Test UN’s Cybersecurity, Find Data on 100k

That honeymoon was short lived. On March 8, 2021, Dyke received a letter from a law firm representing Apperta Foundation that warned that he “may have committed a criminal offense under the Computer Misuse Act 1990,” a thirty-year-old U.K. computer crime law.  “We understand (you) unlawfully hacked into and penetrated our client’s systems and databases and extracted, downloaded (and retain) its confidential business and financial data. You then made threats to our client…We are writing to advise you that you may have committed a criminal offence under the Computer Misuse Act 1990 and the Investigatory Powers Act 2016,” the letter read, in part. Around the same time, he was contacted by a Northumbria Police cyber investigator inquiring about a report of “Computer Misuse” from Apperta.

A Hard Pass By Law Enforcement

The legal maneuvers by Apperta prompted Dyke to go public with the dispute – though he initially declined to name the organization pursuing him – and to hire his own lawyers and to set up a GoFundMe to help offset his legal fees. In an interview with The Security Ledger, Dyke said Apperta’s aggressive actions left him little choice. “(The letter) had the word ‘unlawful’ in there, and I wasn’t about to sign anything that had said I’d committed a criminal offense,” he said.

After interviewing Dyke, law enforcement in the UK declined to pursue a criminal case against him for violating the CMA. However, the researcher’s legal travails have continued all the same.

Episode 183: Researcher Patrick Wardle talks Zoom 0days and Mac (in)Security

Keen to ensure that Dyke deleted the leaked data and application code he downloaded from the organization’s public GitHub repositories, Apperta’s lawyers sent multiple emails instructing Dyke to destroy or immediately deliver the data he found from the security vulnerability; to give confirmation he had not and would not publish the data he “unlawfully extracted;” and to give another confirmation that he had not shared this data with anyone.

Dyke insists he deleted the exposed Apperta Foundation data weeks ago, soon after first being asked by the Foundation. Documents provided by Dyke that were sent to Apperta attest that he “destroyed all Apperta Foundation CIC’s data and business information in my possession, the only such relevant material having been collated in February 2021 for the purposes of my responsible disclosure of serious IT security concerns to Apperta of March 2021 (“the Responsible Disclosure Materials”).”

Not Taking ‘Yes’ For An Answer

Nevertheless, Apperta’s legal team found that Dyke’s response was “not an acceptable undertaking,” and that Dyke posed an “imminent further threat” to the organization. Months of expensive legal wrangling ensued, as Apperta sought to force Dyke to delete any and all work he had done for the organization, including code he had developed and licensed as open source. All the while, Dyke fielded correspondence that suggests Apperta was preparing to take him to court. In recent weeks, the Foundation has inquired about his solicitor and whether he would be representing himself legally. Other correspondence has inquired about the best address at which to serve him an injunction and passed along forms to submit ahead of an interim hearing before a judge.

In late April, Dyke transmitted a signed undertaking and statement to Apperta that would seem to satisfy the Foundation’s demands. But the Foundation’s actions and correspondence have him worried that it may move ahead with legal action anyway – though he is not sure exactly what for. “I don’t understand what the legal complaint would be. Is this about the database access? Could it be related to (intellectual property)?” Dyke said he doubts Apperta is pursuing “breach of contract” because he isn’t under contract with the Foundation and – in any case – followed the organization’s responsible disclosure policy when he found the exposed data, which should shield him from legal repercussions.

In the meantime, the legal bills have grown. Dyke told The Security Ledger that his attorney’s fees in March and April totaled £20,000 and another £5,000 since – with no clear end in sight. “It is not closed. I have zero security,” Dyke wrote in a text message.

In a statement made to The Security Ledger, an Apperta Foundation spokesperson noted that, to date, it “has not issued legal proceedings against Mr. Dyke.” The Foundation said it took “immediate action” to isolate the breach and secure its systems. However, the Foundation also cast aspersions on Dyke’s claims that he was merely performing a public service in reporting the data leak to Apperta and suggested that the researcher had not been forthcoming.

“While Mr. Dyke claims to have been acting as a security researcher, it has always been our understanding that he used multiple techniques that overstepped the bounds of good faith research, and that he did so unethically.”

-Spokesperson for The Apperta Foundation

Asked directly whether The Foundation considered the legal matter closed, it did not respond, but acknowledged that “Mr. Dyke has now provided Apperta with an undertaking in relation to this matter.”

Apperta believes “our actions have been entirely fair and proportionate in the circumstances,” the spokesperson wrote.

Asked by The Security Ledger whether the Foundation had reported the breach to regulators (in the UK, the Information Commissioner’s Office is the governing body), the Foundation did not respond but said that it “has been guided by the Information Commissioner’s Office (ICO) and our legal advisers regarding our duties as a responsible organisation.”

OK, Boomer. UK’s Cyber Crime Law Shows Its Age

Dyke’s dealings with Apperta highlight the fears that many in the UK cybersecurity community harbor regarding the CMA, a 30-year-old computer crime law that critics say is showing its age.

A 2020 report from TechUK found that 80% of respondents said they have worried about breaking the CMA when researching vulnerabilities or investigating cyber threat actors. Around 40% of those same respondents said the law has acted as a barrier to them or their colleagues, and has even prevented cybersecurity employees from proactively safeguarding against security breaches.

Those who support amending the CMA believe that the legislation poses several threats to Great Britain’s cyber and information security industry. Edward Parsons of the security firm F-Secure observed that “the CMA not only impacts our ability to defend victims, but also our competitiveness in a global market.” That downside, along with the ever-present shortage of cybersecurity professionals in the U.K., is among the reasons Parsons believes an updated version of the CMA should be passed into law.

Dyke said the CMA looms over the work of security researchers. “You have to be very careful about participating in any, even formal bug bounties (…) If someone’s data gets breached, and it comes out that I’ve reported it, I could actually be picked up under the CMA for having done that hacking in the first place,” he said. 

Calls for Change

Calls for changes are growing louder in the UK. A January 2020 report by the UK-based Criminal Law Reform Now Network (CLRNN) found that the Computer Misuse Act “has not kept pace with rapid technological change” and “requires significant reform to make it fit for the 21st century.” Among the recommendations of that report were allowances for researchers like Dyke to make “public interest defences to untie the hands of cyber threat intelligence professionals, academics and journalists to provide better protections against cyber attacks and misuse.”

Policy makers are taking note. This week, British Home Secretary Priti Patel told the audience at the National Cyber Security Centre’s (NCSC’s) CyberUK 2021 virtual event, that the CMA had served the country well, but that “now is the right time to undertake a formal review of the Computer Misuse Act.”

For researchers like Dyke, the changes can’t come soon enough. “The Act is currently broken and disproportionately affects hackers,” he wrote in an email. He said a reformed CMA should make allowances for the kinds of “accidental discoveries” that led him to the Apperta breach. “It needs to ensure safe harbour for professional, independent security researchers making Responsible Disclosure in the public interest.”

Carolynn van Arsdale contributed to this story.

Security researchers analyzing a widely used open source component have discovered security vulnerabilities that leave hundreds of thousands of software applications vulnerable to attack, according to a report released Monday.

The group of five researchers found the security vulnerabilities in netmask, an open source library used in a staggering 270,000 software projects. According to the report, the flaws open the door to a wide range of malicious attacks that could enable attackers to ferry malicious code into a protected network, or siphon sensitive data out of one.

Exploitable Flaw in NPM Private IP App Lurks Everywhere, Anywhere

Among the attacks enabled by the flaw are so-called server-side request forgeries (SSRF), as well as remote file inclusion, local file inclusion and more, the researcher known as Sick Codes told The Security Ledger. Work to discover the extent of the flaws continues. The researchers have received a preliminary vulnerability ID for the flaw, CVE-2021-28918.

Even worse, the flaws appear to stretch far beyond a single open source module, affecting a wide range of open source development languages, researchers say.

Work on one vulnerable open source module uncovers another

In addition to Sick Codes, the researchers conducting the investigation are Victor Viale (@koroeskohr), Kelly Kaoudis (@kaoudis), John Jackson (@johnjhacking) and Nick Sahler (@tensor_bodega).

According to Sick Codes, the vulnerability was discovered while doing work to fix another vulnerability in a widely used NPM library known as Private IP. That module, which was also widely used by open source developers, enables applications to block request forgery attacks by filtering out attempts to access private IP4 addresses and other restricted IP4 address ranges, as defined by ARIN. In a report published in November, researchers revealed that the Private IP module didn’t work very well and was susceptible to being bypassed using SSRF attacks against top tier applications.

Spotlight Podcast: Security Automation is (and isn’t) the Future of Infosec

SSRF attacks allow malicious actors to abuse functionality on a server: reading data from internal resources or modifying the code running on the server. Private-ip was created to help application developers spot and block such attacks. SSRF is one of the most common forms of attack on web applications according to OWASP.

The researchers working to fix the Private IP flaw turned to netmask, a widely used package that allows developers to evaluate whether an IP address attempting to access an application is inside or outside of a given IPv4 range. Given an IP address submitted to netmask, the module returns true or false depending on whether the submitted address is in the defined “block.”
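For readers unfamiliar with the module, here is a minimal usage sketch; it assumes Node.js with the npm “netmask” package installed.

```typescript
// Minimal usage sketch for the npm "netmask" package: define an IPv4 block,
// then test whether submitted addresses fall inside it.
import { Netmask } from "netmask";

const block = new Netmask("10.0.0.0/8"); // a private IPv4 range

console.log(block.contains("10.1.2.3")); // true  - inside the block
console.log(block.contains("8.8.8.8"));  // false - outside the block
console.log(block.size);                 // how many addresses the block holds
```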

Opinion: Better Code Won’t Save Developers in the Short Run

The module has a range of other useful features, as well, such as reporting back on how many IPs are inside a given block. And, with no other “dependencies,” netmask seemed like the perfect fit to fix Private IP’s problems.

There’s only one problem: netmask itself was flawed, as the researchers soon discovered. Specifically, the module evaluated certain IP addresses incorrectly: it improperly validated so-called “octal strings,” rendering the octal octets in IPv4 addresses as decimal integers.

For example, the IPv4 address 0177.0.0.1 should be evaluated by netmask as the private IP address 127.0.0.1, as the octal string “0177” translates to the integer “127.” However, netmask evaluates it as a public IPv4 address, 177.0.0.1, simply stripping off the leading zero and reading the remainder of the octal string as a decimal integer.

Sample output from the netmask NPM module showing incorrect parsing of an IPv4 address. The flaw could enable a range of attacks on applications using the popular open source module. (Image courtesy of Sick Codes.)

And the flaw works both ways. The IPv4 address 0127.0.0.01 should be evaluated as the public IP address 87.0.0.1, as the octal string “0127” is equivalent to the integer “87.” However, netmask reads the address as 127.0.0.1, a trusted localhost address. Treating an untrusted public IP address as a trusted private IP address opens the door to local and remote file inclusion (LFI/RFI) attacks, in which a remote authenticated or unauthenticated attacker can bypass packages that rely on netmask to filter IP address blocks.
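The behavior is easy to reproduce in a few lines of code. The sketch below is illustrative, not netmask’s actual source: it contrasts classic inet_aton-style octal handling with the decimal-only parsing the researchers describe.

```typescript
// Illustrative sketch of the parsing discrepancy behind CVE-2021-28918.
// inet_aton-style parsers treat an IPv4 octet with a leading "0" as octal;
// vulnerable netmask versions effectively read it as decimal instead.

function parseOctetCorrectly(octet: string): number {
  if (octet.length > 1 && octet.startsWith("0")) {
    return parseInt(octet, 8); // leading zero: classic octal semantics
  }
  return parseInt(octet, 10);
}

function parseOctetLikeVulnerableNetmask(octet: string): number {
  return parseInt(octet, 10); // leading zero silently ignored
}

const render = (addr: string, parse: (o: string) => number): string =>
  addr.split(".").map(parse).join(".");

console.log(render("0177.0.0.1", parseOctetCorrectly));              // 127.0.0.1 (loopback)
console.log(render("0177.0.0.1", parseOctetLikeVulnerableNetmask));  // 177.0.0.1 (looks public)
console.log(render("0127.0.0.01", parseOctetCorrectly));             // 87.0.0.1  (public)
console.log(render("0127.0.0.01", parseOctetLikeVulnerableNetmask)); // 127.0.0.1 (looks private)
```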

A Popular Module with a Sterling Pedigree

Netmask is the creation of Olivier Poitrey, an accomplished developer who is the Director of Engineering at Netflix and a co-founder of the firm NextDNS. The module was first released nine years ago and gained prominence around 5 years ago, with version 1.06, which has been downloaded more than 3 million times.

The netmask module currently sees around 3.1 million downloads weekly, though development on netmask appears to have ceased during the last five years, up until the release of version 2.0.0 around 10 days ago, after discovery of the security holes.

Throwing open the doors to hackers

The implications for modules that are using the vulnerable version of netmask are serious. According to Sick Codes, remote attackers can use SSRF attacks to upload malicious files from the public Internet without setting off alarms, because applications relying on netmask would treat a properly configured external IP address as an internal address.

Similarly, attackers could also disguise remote IP addresses as local addresses, enabling remote file inclusion (RFI) attacks that could permit web shells or malicious programs to be placed on target networks.

Parsing IPv4 Addresses: Black Magic

Updates to address the flaws in netmask began appearing on March 19 with the release of version 2.0.0. A subsequent update, 2.0.1, was released on Monday. To date, only 6,641 downloads of the updated netmask module have completed, meaning the vast majority of open source projects using it remain vulnerable. Contacted via chat, Poitrey advised netmask users to upgrade to the latest version of the module.

But researchers say much more is to come. The problems identified in netmask are not unique to that module. Researchers have noted previously that the textual representation of IPv4 addresses was never standardized, leading to disparities in how different but equivalent versions of IPv4 addresses (for example, octal strings) are rendered and interpreted by different applications and platforms.

In the case of the netmask flaw, the true culprit ends up being “some unexpected behaviors of the parseInt function in Javascript combined with an oversight on those odd IP notations nobody (uses),” Poitrey told Security Ledger. “Parsing IPs correctly is black magic,” he said. “Those weird notations should just be deprecated, so nobody can exploit them like this.”

More to come

The impact of the flaw will become clearer in the weeks ahead, according to Sick Codes, as more examples of the IPv4 parsing problem are identified and patched in other popular interpreted languages. “The implications are infinite,” he said.

A serious flaw in Zoom’s Keybase secure chat application left copies of images contained in secure communications on Keybase users’ computers after they were supposedly deleted.

The flaw in the encrypted messaging application (CVE-2021-23827) does not expose Keybase users to remote compromise. However, it could put their security, privacy and safety at risk, especially for users living under authoritarian regimes in which apps like Keybase and Signal are increasingly relied on as a way to conduct conversations out of earshot of law enforcement or security services.

The flaw was discovered by researchers from the group Sakura Samurai as part of a bug bounty program offered by Zoom, which acquired Keybase in May, 2020. Zoom said it has fixed the flaw in the latest versions of its software for Windows, macOS and Linux.

Deleted…but not gone

According to researcher John Jackson of Sakura Samurai, the Keybase flaw manifested itself in two ways. First: Jackson discovered that images that were copied and pasted into Keybase chats were not reliably deleted from a temporary folder, /uploadtemps, associated with the client application. “In general, when you would copy and paste in a Keybase chat, the folder would appear in (the uploadtemps) folder and then immediately get deleted,” Jackson told Security Ledger in a phone interview. “But occasionally that wouldn’t happen. Clearly there was some kind of software error – a collision of sorts – where the images were not getting cleared.”

Exploitable Flaw in NPM Private IP App Lurks Everywhere, Anywhere

Discovering that flaw put Sakura Samurai researchers on the hunt for more and they soon struck pay dirt again. Sakura Samurai members Aubrey Cottle (@kirtaner), Robert Willis (@rej_ex) and Jackson Henry (@JacksonHHax) discovered an unencrypted directory, /Cache, associated with the Keybase client that contained a comprehensive record of images from encrypted chat sessions. The application used a custom extension to name the files, but they were easily viewable directly or simply by changing the custom file extension to the PNG image format, Jackson said.
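A simple way to understand that second discovery: image files can be identified by their contents regardless of the extension on disk. The sketch below is illustrative only – the cache path is a placeholder, not Keybase’s actual directory layout – and shows how files in a folder can be flagged as PNG images by their magic bytes.

```typescript
// Illustrative sketch: flag files that are PNG images regardless of their
// file extension, the way images with custom extensions in an unencrypted
// cache folder can be spotted. The directory below is a placeholder.
import { readdirSync, openSync, readSync, closeSync } from "fs";
import { join } from "path";

const PNG_MAGIC = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // "\x89PNG"

function isPng(path: string): boolean {
  const fd = openSync(path, "r");
  const header = Buffer.alloc(4);
  readSync(fd, header, 0, 4, 0); // read the first four bytes
  closeSync(fd);
  return header.equals(PNG_MAGIC);
}

const cacheDir = "/path/to/client/Cache"; // placeholder, not the real location
for (const name of readdirSync(cacheDir)) {
  if (isPng(join(cacheDir, name))) {
    console.log(`PNG image despite its extension: ${name}`);
  }
}
```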

In a statement, a Zoom spokesman said that the company appreciates the work of the researchers and takes privacy and security “very seriously.”

“We addressed the issue identified by the Sakura Samurai researchers on our Keybase platform in version 5.6.0 for Windows and macOS and version 5.6.1 for Linux. Users can help keep themselves secure by applying current updates or downloading the latest Keybase software with all current security updates,” the spokesman said.

Podcast Episode 141: Massive Data Breaches Just Keep Happening. We Talk about Why.

In most cases, the failure to remove files from cache after they were deleted would count as a “low priority” security flaw. However, in the context of an end-to-end encrypted communications application like Keybase, the failure takes on added weight, Jackson wrote.

“An attacker that gains access to a victim machine can potentially obtain sensitive data through gathered photos, especially if the user utilizes Keybase frequently. A user, believing that they are sending photos that can be cleared later, may not realize that sent photos are not cleared from the cache and may send photos of PII or other sensitive data to friends or colleagues.”

Messaging app flaws take on new importance

The flaw takes on even more weight given the recent flight of millions of Internet users to end-to-end encrypted messaging applications like Keybase, Signal and Telegram. Those users were responding to onerous data sharing policies, such as those recently introduced on Facebook’s WhatsApp chat. In countries with oppressive, authoritarian governments, end to end encrypted messaging apps are a lifeline for political dissidents and human rights advocates.

As Cybercrooks Specialize, More Snooping, Less Smash and Grab

As a result of the flaw, however, adversaries who gained access to the laptop or desktop on which the Keybase application was installed could view any images contained in Keybase encrypted chats. The implications of that are clear enough. For example, recent reports say that North Korean state hackers have targeted security researchers via phishing attacks sent via Keybase, Signal and other encrypted applications.

The flaws in Keybase do not affect the Zoom application, Jackson said. Zoom acquired Keybase in May to strengthen the company’s video platform with end-to-end encryption. That acquisition followed reports about security flaws in the Zoom client, including in its in-meeting chat feature.

Jackson said that the Sakura Samurai researchers received a $1,000 bounty from Zoom for their research. He credited the company with being “very responsive” to the group’s vulnerability report.

The increased use of encrypted messaging applications has attracted the attention of security researchers, as well. Last week, for example, a researcher disclosed 13 vulnerabilities in the Telegram secure messaging application that could have allowed a remote attacker to compromise any Telegram user. Those issues were patched in Telegram updates released in September and October 2020.

In the past 20 years, bug hunting has transformed from a hobby (or maybe even a felony) into a full-time profession for tens of thousands of talented software engineers around the globe. Thanks to the growth in private and public bug bounty programs, men and women with the talent can earn a good living by sniffing out flaws in the code for applications and – increasingly – physical devices that power the 21st century global economy.

Asus ShadowHammer suggests Supply Chain Hacks are the New Normal

Bug Hunting Smart TVs To Supply Chain

What does that work look like, and what platforms and technologies are drawing the attention of cutting-edge vulnerability researchers? To find out, we sat down with the independent researcher known as Sick Codes (@sickcodes). In recent months, he has gotten attention for a string of important discoveries. Among other things, he discovered flaws in Android smart television sets manufactured by the Chinese firm TCL and was part of the team, along with last week’s guest John Jackson, that worked to fix a serious server-side request forgery flaw in a popular open source security module, NPM Private IP.

Spotlight Podcast: How Machine Learning is revolutionizing Application Fuzzing

In this interview, Sick Codes and I talk about his path to becoming a vulnerability researcher, the paid and unpaid research he conducts looking for software flaws in common software and internet of things devices, some of the challenges and impediments that still exist in reporting vulnerabilities to corporations and what’s in the pipeline for 2021. 


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

In this episode of the podcast (#200), sponsored by DigiCert: John Jackson, founder of the group Sakura Samurai, talks to us about his quest to make hacking groups cool again. Also: we talk with Avesta Hojjati of the firm DigiCert about the challenge of managing a growing population of digital certificates, and how automation may be an answer.


Life for independent security researchers has changed a lot in the last 30 years. The modern information security industry grew out of pioneering work by groups like Boston-based L0pht Heavy Industries and the Cult of the Dead Cow, which began in Lubbock, Texas.

After operating for years in the shadows of the software industry and in legal limbo, by the turn of the millennium hackers were coming out of the shadows. And by the end of the first decade of the 21st century, they were free to pursue full-fledged careers as bug hunters, with some earning hundreds of thousands of dollars a year through the bug bounty programs that have proliferated in the last decade.

Despite that, a stigma still hangs over “hacking” in the mind of the public, law enforcement and policy makers. And, despite the growth of bug bounty programs, red teaming and other “hacking for hire” activities, plenty of blurry lines still separate legal security research from illegal hacking. 

Hacks Both Daring…and Legal

Still, the need for innovative and ethical security work in the public interest has never been greater. The SolarWinds hack exposed the ways in which even sophisticated firms like Microsoft and Google are vulnerable to compromised software supply chain attacks. Consider also the tsunami of “smart,” Internet-connected devices like cameras, television sets and appliances that are working their way into homes and workplaces by the millions.

Podcast Episode 112: what it takes to be a top bug hunter

John Jackson is the co-founder of Sakura Samurai, an independent security research group.

What does a 21st century hacking crew look like? Our first guest this week is trying to find out. John Jackson (@johnjhacking) is an independent security researcher and the co-founder of a new hacking group, Sakura Samurai, which includes a diverse array of security pros ranging from a 15-year-old Australian teen to Aubrey Cottle, aka @kirtaner, the founder of the group Anonymous. Their goal: to energize the world of ethical hacking with daring and attention-getting discoveries that stay on the right side of the double yellow line.

Update: DHS Looking Into Cyber Risk from TCL Smart TVs

In this interview, John and I talk about his recent research including vulnerabilities he helped discover in smart television sets by the Chinese firm TCL, the open source security module Private IP and the United Nations. 

Can PKI Automation Head Off Chaos?

One of the lesser-reported subplots in the recent SolarWinds hack is the use of stolen or compromised digital certificates to facilitate compromises of victim networks and accounts. Stolen certificates played a part in the recent hack of Mimecast, as well as in an attack on employees of a prominent think tank, according to reporting by Reuters and others.

Avesta Hojjati is the head of Research & Development at Digicert.

How is it that compromised digital certificates are falling into the hands of nation state actors? One reason may be that companies are managing more digital certificates than ever, but using old systems and processes to do so. The result: it is becoming easier and easier for expired or compromised certificates to fly under the radar. 

Our final guest this week, Avesta Hojjati, the Head of R&D at DigiCert, Inc., thinks we’ve only seen the beginning of this problem. As more and more connected “things” begin to populate our homes and workplaces, certificate management is going to become a critical task – one that few consumers are prepared to handle.

Episode 175: Campaign Security lags. Also: securing Digital Identities in the age of the DeepFake

What’s the solution? Hojjati thinks more and better use of automation is a good place to start. In this conversation, Avesta and I talk about how digital transformation and the growth of the Internet of Things are raising the stakes for proper certificate management and why companies need to be thinking hard about how to scale their current certificate management processes to meet the challenges of the next decade. 


(*) Disclosure: This podcast was sponsored by Digicert. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

Independent security researchers testing the security of the United Nations were able to compromise public-facing servers and a cloud-based development account for the U.N. and lift data on more than 100,000 staff and employees, according to a report released Monday.

Researchers affiliated with Sakura Samurai, a newly formed collective of independent security experts, exploited an exposed GitHub repository belonging to the International Labour Organization and the U.N.’s Environment Programme (UNEP) to obtain “multiple sets of database and application credentials” for UNEP applications, according to a blog post by one of the Sakura Samurai researchers, John Jackson, explaining the group’s work.

Specifically, the group was able to obtain access to database backups for private UNEP projects that exposed a wealth of information on staff and operations. That includes a document containing more than 1,000 U.N. employee names and emails; more than 100,000 employee travel records including destination, length of stay and employee ID numbers; more than 1,000 U.N. employee records, and so on.

The researchers stopped their search once they were able to obtain personally identifying information. However, they speculated that more data was likely accessible.

Looking for Vulnerabilities

The researchers were scanning the U.N.’s network as part of the organization’s Vulnerability Disclosure Program. That program, started in 2016, has resulted in a number of vulnerabilities being reported to the U.N., many of them common cross-site scripting (XSS) and SQL injection flaws in the U.N.’s main website, un.org.

You might also be interested in: Data Breach Exposes Records of 114 Million U.S. Citizens, Companies

For their work, Sakura Samurai took a different approach, according to Jackson, in an interview with The Security Ledger. The group started by enumerating U.N. subdomains and scanning them for exposed assets and data. One of those, an ILO.org Apache web server, was misconfigured and exposing files linked to a GitHub account. By downloading one of those files, the researchers were able to recover the credentials for a U.N. survey management panel, part of a little-used but public-facing survey feature on the U.N. site. While the survey tool didn’t expose a tremendous amount of data, the researchers continued scanning the site and eventually discovered a subdomain that exposed a file containing the credentials for a U.N. GitHub account, which in turn contained 10 more private GitHub repositories encompassing databases and database credentials, backups and files containing personally identifying information.

Much more to be found

Jackson said that the breach is extensive, but that much more was likely exposed prior to his group’s discovery.

“Honestly, there’s way more to be found. We were looking for big fish to fry.” Among other things, a Sakura Samurai researcher discovered APIs for the Twilio cloud platform exposed – those also could have been abused to extract data and personally identifying information from UN systems, he said.

In an email response to The Security Ledger, Farhan Haq, a Deputy Spokesman for the U.N. Secretary-General said that the U.N.’s “technical staff in Nairobi … acknowledged the threat and … took ‘immediate steps’ to remedy the problem.”

You might also be interested in: Veeam mishandles Own Data, exposes 440M Customer E-mails

“The flaw was remedied in less than a week, but whether or not someone accessed the database remains to be seen,” Haq said in the statement.

A disclosure notice from the U.N. on the matter is “still in the works,” Haq said. According to Jackson, data on EU residents was among the data exposed in the incident. Under the terms of the European Union’s General Data Protection Regulation (GDPR), the U.N. has 72 hours to notify regulators about the incident.

Nation State Exposure?

Unfortunately, Jackson said that there is no way of knowing whether his group was the first to discover the exposed data. It is very possible, he said, that they were not.

“It’s likely that nation state threat actors already have this,” he said, noting that data like travel records could pose physical risks, while U.N. employee email and ID numbers could be useful in tracking and impersonating employees online and offline.

Another danger is that malicious actors with access to the source code of U.N. applications could plant back doors or otherwise manipulate the functioning of those applications to suit their needs. The recent compromise of software updates from the firm SolarWinds has been traced to attacks on hundreds of government agencies and private sector firms. That incident has been tied to hacking groups associated with the government of Russia.

Asked whether the U.N. had conducted an audit of the affected applications, Haq, the spokesperson for the U.N. Secretary General said that the agency was “still looking into the matter.”

A Spotty Record on Cybersecurity

This is not the first cybersecurity lapse at the U.N. In January 2020, the website The New Humanitarian reported that the U.N. discovered but did not disclose a major hack into its IT systems in Europe in 2019, one that involved the compromise of U.N. domains and the theft of administrator credentials.

A serious security flaw in a commonly used, but overlooked open source security module may be undermining the integrity of hundreds of thousands or even millions of private and public applications, putting untold numbers of organizations and data at risk.

A team of independent security researchers that includes application security professionals at Shutterstock and Squarespace identified the flaw in private-ip, an npm module first published in 2016 that enables applications to block request forgery attacks by filtering out attempts to access private IP4 addresses and other restricted IP4 address ranges, as defined by ARIN.

The SSRF Blocker That Didn’t

The researchers identified a so-called Server Side Request Forgery (SSRF) vulnerability in commonly used versions of private-ip. The flaw allows malicious attackers to carry out SSRF attacks against a population of applications that may number in the hundreds of thousands or millions globally. It is just the latest incident to raise questions about the security of the “software supply chain,” as more and more organizations shift from monolithic to modular software application development built on a foundation of free and open source code.

Report: Cybercriminals target difficult-to-secure ERP systems with new attacks

According to an account by researcher John Jackson of Shutterstock, flaws in the private-ip code meant that the filtering allegedly carried out by the code was faulty. Specifically, independent security researchers reported being able to bypass protections and carry out Server-Side Request Forgeries against top tier applications. Further investigation uncovered a common explanation for those successful attacks: private-ip, an open source security module used by the compromised applications.

SSRF attacks allow malicious actors to abuse functionality on a server: reading data from internal resources or modifying the code running on the server. Private-ip was created to help application developers spot and block such attacks. SSRF is one of the most common forms of attack on web applications according to OWASP.

Black Box Device Research reveals Pitiful State of Internet of Things Security

The problem: private-ip didn’t do its job very well.

“The code logic was using a simple Regular Expression matching,” Jackson (@johnjhacking) told The Security Ledger. Jackson, working with other researchers, found that private-ip was blind to a large number of variations of localhost and other private IP ranges, as well as to simple tricks that hackers use to obfuscate IP addresses in attacks. For example, researchers found they could send successful requests for localhost resources by obscuring those addresses using hexadecimal equivalents of private IP addresses, or with simple substitutions like using four zeros for each octet of the IP address instead of one (so: 0000.0000.0000.0000 instead of 0.0.0.0). The result: a wide range of private and restricted IP addresses registered as public IP addresses and slipped past private-ip.
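To see why pattern matching alone fails, consider the sketch below. It is illustrative, not private-ip’s actual source: a naive regular-expression block list of the kind described, along with common encoded forms of private addresses that sail right past it.

```typescript
// Illustrative sketch (not private-ip's actual code): a naive regex block list
// and common SSRF obfuscations that slip past it because they don't match the
// literal dotted-decimal patterns.
const naivePrivateIpCheck = (ip: string): boolean =>
  /^(127\.0\.0\.1|10\.\d+\.\d+\.\d+|192\.168\.\d+\.\d+|0\.0\.0\.0)$/.test(ip);

// Each of these ultimately resolves to a local or private address:
const bypasses = [
  "0x7f.0x0.0x0.0x1",    // hexadecimal octets for 127.0.0.1
  "0177.0.0.1",          // octal first octet for 127.0.0.1
  "0000.0000.0000.0000", // zero-padded form of 0.0.0.0
  "2130706433",          // 127.0.0.1 as a single 32-bit integer
];

for (const ip of bypasses) {
  console.log(ip, "blocked?", naivePrivateIpCheck(ip)); // "blocked? false" every time
}
```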

Private-IP: small program, BIG footprint

The scope of the private-ip flaw is difficult to grasp. However, available data suggests the component is very widely used. Jackson said that hundreds of thousands, if not millions, of applications likely incorporate private-ip code in some fashion. Many of those applications are not publicly addressable from the Internet, but may still be vulnerable to attack by an adversary with access to their local environment.

Private-ip is the creation of developer Damir Mustafin (aka “frenchbread”), a developer based in the Balkan country of Montenegro, according to his GitHub profile, which contains close to 60 projects of different scopes. Despite its popularity and widespread use, private-ip was not a frequent focus of Mr. Mustafin’s attention. After first being published in August 2016, the application had only been updated once, in April 2017, prior to the most recent update to address the SSRF flaw.

A Low Key, High Distribution App

The lack of steady attention didn’t dissuade other developers from downloading and using the npm private-ip package, however. It has an average of 14,000 downloads weekly, according to data from GitHub. And direct downloads of private-ip are just one measure of its use. Fully 355 publicly identified npm modules are dependents of private-ip v1.0.5, which contains the SSRF flaws. An additional 73 GitHub projects have dependencies on private-ip. All told, that accounts for 153,374 combined weekly downloads of private-ip and its dependents. One of the most widely used applications that relies on private-ip is libp2p, an open source network stack that is used in a wide range of decentralized peer-to-peer applications, according to Jackson.

While the flaw was discovered by so-called “white hat” vulnerability researchers, Jackson said that it is almost certain that malicious actors knew about and exploited it – either directly or inadvertently. Other security researchers have almost certainly stumbled upon it before as well, perhaps discovering a single address that slipped through private-ip and enabled an SSRF attack, while failing to grasp private-ip’s role or the bigger flaws in the module.

In fact, private-ip may be the common source of a long list of SSRF vulnerabilities that have been independently discovered and reported in the last five years, Jackson said. “This may be why a lot of enterprises have struggled with SSRF and block list bypasses,” he said.

After identifying the problem, Jackson and his team contacted the developer, Damir Mustafin (aka “frenchbread”), looking for a fix. However, it quickly became clear that they would need to enlist additional development talent to forge a patch that was comprehensive. Jackson tapped two developers: Nick Sahler of the website hosting provider Squarespace and the independent developer known as Sick Codes (@sickcodes). The two implemented the netmask utility and updated private-ip to correctly filter private IP ranges, translating all submitted IP addresses at the byte level to catch efforts to slip encoded addresses past the filter.
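A rough sketch of that byte-level approach appears below. It is not the patched module’s actual code, just an illustration of the idea: normalize every octet to an integer first, then test the canonical dotted-quad form against private ranges.

```typescript
// Rough sketch of byte-level normalization before range checks (illustrative,
// not the patched private-ip source). Octal and hex octets are converted to
// integers, so encoded addresses can't masquerade as public ones.
import { Netmask } from "netmask";

const PRIVATE_BLOCKS = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "127.0.0.0/8"]
  .map((cidr) => new Netmask(cidr));

function canonicalize(ip: string): string {
  const octets = ip.split(".").map((part) => {
    // Leading-zero octets are octal; Number() already handles hex ("0x7f").
    const value = /^0\d+$/.test(part) ? parseInt(part, 8) : Number(part);
    if (!Number.isInteger(value) || value < 0 || value > 255) {
      throw new Error(`Invalid octet: ${part}`);
    }
    return value;
  });
  if (octets.length !== 4) throw new Error(`Invalid IPv4 address: ${ip}`);
  return octets.join(".");
}

const isPrivate = (ip: string): boolean =>
  PRIVATE_BLOCKS.some((block) => block.contains(canonicalize(ip)));

console.log(isPrivate("0177.0.0.1")); // true  - octal loopback is caught
console.log(isPrivate("0x0a.0.0.1")); // true  - hex form of 10.0.0.1
console.log(isPrivate("8.8.8.8"));    // false - genuinely public
```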

Common Mode Failures and Software Supply Chain

Even though it is fixed, the private-ip flaw raises larger and deeply troubling questions about the security of software applications on which our homes, businesses and economy are increasingly dependent.

The greater reliance on open source components and the shift to agile development and modular applications have greatly increased society’s exposure to so-called “common cause” failures, in which the failure of a single element leads to a systemic failure. Security experts say the increasingly byzantine ecosystem of open source and proprietary software, with scores or hundreds of poorly understood “dependencies,” is ripe for such disruptions.

Sites like npm are a critical part of that ecosystem – and part of the problem. Created in 2008, npm is a package manager for the JavaScript programming language that was acquired by GitHub in March. It acts as a public registry of packages of open source code that can be downloaded and incorporated into web and mobile applications, as well as into a wide range of hardware from broadband routers to robots. But vetting of the modules uploaded to npm and other platforms is often cursory. Scores have been called out as malicious, and an even greater number are quietly dropped from the site every day after being discovered to be malicious in nature.

Less scrutinized is low-quality code: applications that may quickly be adopted and woven into scores, hundreds or thousands of other applications and components.

“The problem with (software) dependencies is once you identify a problem with a dependency, everything downstream is f**ked,” the developer known as Sick Codes told The Security Ledger. “It’s a house of cards.”

Patch Now

In the short term, organizations that know they are using private-ip version 1.0.5 or earlier as a means of preventing SSRF or related vulnerabilities should upgrade to the latest version immediately, Jackson said. Static application security testing tools can help identify whether private-ip is in use within your organization.

The bigger fix is for application developers to pay more attention to what they’re putting into their creations. “My recommendation is that when software engineers use packages in general, or third party code, they need to evaluate what they’re using and where it’s coming from,” Jackson said.