In this episode of the podcast (#216), sponsored by DigiCert, we talk with Brian Trzupek, DigiCert’s Senior Vice President of Product, about the growing urgency of securing software supply chains, and how digital code signing can help prevent compromises like the recent hack of the firm SolarWinds.


We spend a lot of time talking about software supply chain security these days. But what does that mean? At the 10,000 foot level it means “don’t be the next SolarWinds” – don’t let a nation state actor infiltrate your build process and insert a backdoor that gets distributed to thousands of customers – including technology firms and three letter government agencies.

OK. Sure. But speaking practically, what are we talking about when we talk about securing the software supply chain? Well, for one thing: we’re talking about securing the software code itself. We’re talking about taking steps to ensure that what is written by our developers is actually what goes into a build and then gets distributed to users.

Digital code signing – using digital certificates to sign submitted code – is one way to do that. And use of code signing is on the rise. But is that alone enough? In this episode of the podcast, we’re joined by Brian Trzupek, the SVP of Product at DigiCert, to talk about the growing role of digital code signing in preventing supply chain compromises and providing an audit trail for developed code.

Brian is the author of this recent Executive Insight on Security Ledger, where he notes that code signing certificates are a highly effective way to ensure that software is not compromised – but only as effective as the strategy and best practices that support them. When poorly implemented, Brian notes, code signing loses its effectiveness in mitigating risk for software publishers and users.

In this conversation we talk about the changes to tooling, process and staff that DevOps organizations need to embrace to shore up the security of their software supply chain.

“It boils down to do you have something in place to ensure code quality, fix vulnerabilities and make sure that code isn’t incurring tech debt,” Brian says. Ensuring those things involves process, new products and tools, as well as the right mix of staff and talent to assess new code for security issues.

One idea that is gaining currency within DevOps organizations is “quorum-based deployment,” in which multiple staff members review and sign off on important code changes before they are deployed. Check out our full conversation using the player (above) or download the MP3 using the button below.
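The quorum idea is simple to reason about in code. The sketch below is a hypothetical Python illustration (the reviewer names, threshold and function names are invented for the example, not any vendor’s API):

```python
# Hypothetical sketch of a quorum-based deployment gate: a release may
# proceed only after enough distinct, authorized reviewers sign off.

AUTHORIZED_REVIEWERS = {"alice", "bob", "carol", "dave"}
QUORUM = 2  # minimum number of distinct sign-offs required

def quorum_met(approvals: set[str]) -> bool:
    """Return True if enough distinct authorized reviewers approved."""
    valid = approvals & AUTHORIZED_REVIEWERS
    return len(valid) >= QUORUM

assert not quorum_met({"alice"})             # one approval is not enough
assert quorum_met({"alice", "bob"})          # two distinct reviewers pass
assert not quorum_met({"alice", "mallory"})  # unknown reviewers don't count
```

In practice the approvals would come from a signing service or CI system rather than an in-memory set, but the gating logic is the same.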


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

The recent SolarWinds attack highlights an Achilles heel for enterprises: software updates for critical enterprise applications. Digital signing of code is one solution, but organizations need to modernize their code signing processes to prioritize security and integrity and align with DevOps best practices, writes Brian Trzupek, the Senior Vice President of Product at DigiCert, in this thought leadership article.


Even in today’s security-charged world, the SolarWinds breach was a wakeup call for cybersecurity professionals. It was distinguished by its sophistication and the fact that it was carried out as part of legitimate software updates. The incident was quickly all over the news and has brought renewed focus on the need for secure DevOps.


Code Signing Alone Is Not Enough

The security incident at SolarWinds was especially unsettling because it could easily happen at any large software organization that delivers regular updates. Code signing certificates are a highly effective way to ensure that software is not compromised. However, they are only as effective as the strategy and best practices behind them. When poorly implemented, code signing loses its effectiveness in mitigating risk for software publishers and users. Issues often include:

  • Using the same signing key to sign all files, across multiple product lines and businesses
  • Lack of mechanisms in place to control who can sign specific files
  • Insufficient reporting capabilities for insights into who signed what and when
  • Failure to sign code at every stage of development, as part of an overall security by design process
  • Lack of signing and verifying code from third parties
  • Poor processes for securing keys and updating them to new key size or algorithm requirements
  • Failure to test code integrity before signing
  • Inadequate visibility into where certificates are, and how they are managed
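The integrity-testing point in the list above can be made concrete: before an artifact is ever handed to the signing step, the pipeline verifies its digest against the value recorded at build time. A minimal sketch using only Python’s standard library (the function names and the idea of a recorded “expected digest” are illustrative placeholders, not any particular product’s workflow):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a build artifact, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def ok_to_sign(path: str, expected_digest: str) -> bool:
    """Gate the signing step: sign only if the artifact matches the
    digest recorded when the build was produced."""
    return sha256_of(path) == expected_digest
```

A real pipeline would fetch the expected digest from a tamper-evident build manifest, but the gate itself is this simple comparison.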


[Read Brian’s piece Staying Secure Through 5G Migration.]

What have we learned from the SolarWinds attack? For organizations where DevOps is fundamental, applying best practices to signing processes is more essential than ever. According to some studies, more than half of IT security professionals are concerned about bad actors forging or stealing certificates to sign code—but fewer than a third enforce code signing policies on a consistent basis. It’s time for organizations to do better and enforce zero trust across all their systems, signing everything, at every stage, after verifying it is secure.

Simplifying and standardizing

Traditional code signing processes can be complex and difficult to enforce. They are often based on storing keys on desktops, as well as sharing them. Visibility into activities is often limited, making mismanagement or flawed processes difficult to discover and track. To mitigate these issues, many organizations are simplifying their processes using code-signing-as-a-service approaches. Code-signing-as-a-service can accelerate the steps required to get code signed, while making it easier to keep code secure. A robust solution can empower organizations with automation, enabling teams to minimize manual steps and accelerate signing processes. APIs enable it to integrate seamlessly with development workflows, and automated scheduling capabilities enable organizations to proactively plan and approve signature windows to support new releases and updates.

To strengthen accountability throughout the process, administrators can apply permission-based access. Strictly controlling access helps improve visibility into which users are allowed to sign code and which certificates and private keys they are allowed to utilize.

Standardizing workflows

Standardizing code signing workflows can also help reduce risk to an organization. Instead of allowing everyone in an organization to use the same key for signing, many organizations are using separate code signing keys for different DevOps teams, while granting administrators visibility over key usage. This best practice helps minimize the risk of mistakes that can occur across a company by limiting the ability of breaches to propagate. For example, if a key is used to sign a release that has been compromised, only one team’s code will be impacted.

Maximum flexibility to minimize risk

Key management flexibility is another helpful best practice, reducing risks by enabling administrators to specify shorter certificate lifetimes, rotate keys and control keypairs. For example, Microsoft grants higher reputation levels to publishers that rotate keys. With the right key management approach, an administrator could specify a lifetime of a certain number of days or months exclusively for files designed for use in Microsoft operating systems.

Secure key storage offline except during signing events

Taking keys offline is another measure that can secure code signing. With the right code signing administrative tool, administrators can place keys in an “offline mode,” making it impossible to use them to sign releases without the proper level of permission in advance. Release planning is fundamental to software development, so most developers are comfortable scheduling signatures for specific keys directly into their processes.

Taking keys offline is a strong step to ensure that keys will not be used in situations where they should not be. It also adds a layer of organizational security by splitting responsibilities between signers and those who approve them—while providing improved visibility into who signs with which keys.

Freeing up developers to do what they do best

It’s clear that safeguarding DevOps environments correctly remains challenging, but fortunately the right management tools can minimize hassles—and maximize protection. As we’ve discussed, automation is essential for applying security across CI/CD pipelines. Seek out a solution that fits smoothly within workflows and frees engineers from the individual steps required for cryptographic asset protection. The tool should make signing keys easily accessible when pushing code and automate signing of packages, binaries and containers on every merge to master, when authorized. Organizations also need a process for testing code integrity before they sign. A centralized, effective signing management tool can handle the signing tasks while integrating with other systems that perform the necessary integrity tests. For key security, the solution should provide the option of storing keys offline in virtual HSMs. During a signing event, it should enable developers to access the keys to sign with one click, then return them to secure offline storage.

DevOps pros work within a variety of environments, so the signing solution should support portable, flexible deployment models via SaaS or in a public or private data center. Businesses in every industry are becoming increasingly software-driven, and the challenges to DevOps organizations won’t disappear anytime soon. However, with the right approach to code signing, organizations can dramatically strengthen their security posture, minimize their chances of becoming the next victim and ensure customer confidence in their solutions.


(*) Disclosure: This podcast was sponsored by Digicert. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

United Parcel Service (UPS) announced this week that it will test electric vertical takeoff and landing aircraft (eVTOLs) for package delivery. UPS purchased 10 eVTOLs from Beta Technologies (Beta), which it plans to test for use in its Express Air Delivery network. These eVTOLs are set to be delivered to UPS in 2024, pending certification from the Federal Aviation Administration (FAA). Beta Technologies also plans to provide landing pads and rechargeable batteries. With just a single charge, the eVTOLs can fly up to 250 miles at 170 miles per hour.

All testing and operation of the eVTOLs will be done under UPS’s Flight Forward subsidiary, which is tasked with research and development for package delivery by drone.

Bala Ganesh, Vice President of UPS’s Advanced Technology Group, said, “We can see a future where [the eVTOLs are] carrying, let’s say 1,000 pounds, 1,500 pounds to rural hospitals,” landing on a helipad instead of at an airport.

However, there will be some literal obstacles in the way. For example, a busy, congested city like New York might restrict some eVTOL deliveries. UPS says it may not be a one-size-fits-all solution, but that customers’ willingness to pay and urgency of need could mean that UPS would find a safe way for the eVTOLs to get there.

UPS said it initially plans to use them in smaller markets, creating a series of short routes or one long route to meet customer needs. These eVTOLs can increase efficiency and sustainability while reducing costs.

Security researchers analyzing a widely used open source component have discovered security vulnerabilities that leave hundreds of thousands of software applications vulnerable to attack, according to a report released Monday.

The group of five researchers found the security vulnerabilities in netmask, an open source library used in a staggering 270,000 software projects. According to the report, the flaws open the door to a wide range of malicious attacks that could enable attackers to ferry malicious code into a protected network, or siphon sensitive data out of one.

Exploitable Flaw in NPM Private IP App Lurks Everywhere, Anywhere

Among the attacks enabled by the flaw are so-called server-side request forgeries (SSRF), as well as remote file inclusion, local file inclusion and more, the researcher known as Sick Codes told The Security Ledger. Work to discover the extent of the flaws continues. The researchers have received a preliminary vulnerability ID for the flaw, CVE-2021-28918.

Even worse, the flaws appear to stretch far beyond a single open source module, affecting a wide range of open source development languages, researchers say.

Work on one vulnerable open source module uncovers another

In addition to Sick Codes, the researchers conducting the investigation are Victor Viale (@koroeskohr), Kelly Kaoudis (@kaoudis), John Jackson (@johnjhacking) and Nick Sahler (@tensor_bodega).

According to Sick Codes, the vulnerability was discovered while doing work to fix another vulnerability in a widely used NPM library known as Private IP. That module, which was also widely used by open source developers, enables applications to block request forgery attacks by filtering out attempts to access private IPv4 addresses and other restricted IPv4 address ranges, as defined by ARIN. In a report published in November, researchers revealed that the Private IP module didn’t work very well and was susceptible to being bypassed using SSRF attacks against top tier applications.

Spotlight Podcast: Security Automation is (and isn’t) the Future of Infosec

SSRF attacks allow malicious actors to abuse functionality on a server: reading data from internal resources or modifying the code running on the server. Private-ip was created to help application developers spot and block such attacks. SSRF is one of the most common forms of attack on web applications according to OWASP.

The researchers working to fix the Private IP flaw turned to netmask, a widely used package that allows developers to evaluate whether an IPv4 address attempting to access an application is inside or outside of a given IPv4 range. Based on an IP address submitted to netmask, the module will return true or false about whether or not the submitted IP address is in the defined “block.”
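That true/false membership check can be illustrated with Python’s standard-library ipaddress module, which performs the same kind of test netmask performs for JavaScript (this is an analogy, not netmask’s own API):

```python
import ipaddress

def in_block(ip: str, block: str) -> bool:
    """Return True if the IPv4 address falls inside the given block."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(block)

assert in_block("10.1.2.3", "10.0.0.0/8")      # inside the block
assert not in_block("11.0.0.1", "10.0.0.0/8")  # outside the block
```

Notably, recent versions of Python’s ipaddress module reject octets with leading zeros outright rather than guessing at their meaning, which sidesteps the very ambiguity at the heart of the netmask flaw.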

Opinion: Better Code Won’t Save Developers in the Short Run

The module has a range of other useful features, as well, such as reporting back on how many IPs are inside a given block. And, with no other “dependencies,” netmask seemed like the perfect fit to fix Private IP’s problems.

There’s only one problem: netmask itself was flawed, as the researchers soon discovered. Specifically, the module evaluates certain IP addresses incorrectly: it improperly parses IPv4 addresses that contain so-called “octal strings,” treating the octal octets as decimal integers.

For example, the IPv4 address 0177.0.0.1 should be evaluated by netmask as the private IP address 127.0.0.1, as the octal string “0177” translates to the integer “127.” However, netmask evaluates it as a public IPv4 address: 177.0.0.1, simply stripping off the leading zero and reading the remainder of the octal string as a decimal integer.

Sample output from the netmask NPM module showing incorrect parsing of the IPv4 address. The flaw could enable a range of attacks on applications using the popular open source module. (Image courtesy of Sick Codes.)

And the flaw works both ways. The IPv4 address 0127.0.0.01 should be evaluated as the public IP address 87.0.0.1, as the octal string “0127” is the same as the integer “87.” However, netmask reads the address as 127.0.0.1, a trusted, localhost address. Treating an untrusted public IP address as a trusted private IP address opens the door to local and remote file inclusion (LFI/RFI) attacks, in which a remote authenticated or unauthenticated attacker can bypass packages that rely on netmask to filter IP address blocks.
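The arithmetic behind both examples is easy to check. Here it is in Python, used only to illustrate the conversion (netmask itself is a JavaScript module):

```python
# A leading zero marks an octal octet: "0177" is 0o177, i.e. 127 decimal.
assert int("0177", 8) == 127

# The buggy behavior: strip the leading zero and read the rest as decimal.
assert int("0177".lstrip("0")) == 177

# And the reverse case: "0127" is octal for 87, but a naive decimal read
# yields 127, the trusted loopback octet.
assert int("0127", 8) == 87
assert int("0127") == 127
```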

A Popular Module with a Sterling Pedigree

Netmask is the creation of Olivier Poitrey, an accomplished developer who is the Director of Engineering at Netflix and a co-founder of the firm NextDNS. The module was first released nine years ago and gained prominence around 5 years ago, with version 1.06, which has been downloaded more than 3 million times.

The netmask module currently sees around 3.1 million downloads weekly, though development on netmask appears to have ceased during the last five years, up until the release of version 2.0.0 around 10 days ago, after discovery of the security holes.

Throwing open the doors to hackers

The implications for modules that are using the vulnerable version of netmask are serious. According to Sick Codes, remote attackers can use SSRF attacks to upload malicious files from the public Internet without setting off alarms, because applications relying on netmask would treat a properly configured external IP address as an internal address.

Similarly, attackers could also disguise remote IP addresses as local addresses, enabling remote file inclusion (RFI) attacks that could permit web shells or malicious programs to be placed on target networks.

Parsing IPv4 Addresses: Black Magic

Updates to address the flaws in netmask began appearing on March 19 with the release of version 2.0.0. A subsequent update, 2.0.1, was released on Monday. To date, only 6,641 downloads of the updated netmask module have been completed, meaning the vast majority of open source projects using it remain vulnerable. Contacted via chat, Poitrey advised netmask users to upgrade to the latest version of the module.

But researchers say much more is to come. The problems identified in netmask are not unique to that module. Researchers have noted previously that the textual representation of IPv4 addresses was never standardized, leading to disparities in how different but equivalent versions of IPv4 addresses (for example: octal strings) are rendered and interpreted by different applications and platforms.

In the case of the netmask flaw, the true culprit ends up being “some unexpected behaviors of the parseInt function in Javascript combined with an oversight on those odd IP notations nobody (uses),” Poitrey told Security Ledger. “Parsing IPs correctly is black magic,” he said. “Those weird notations should just be deprecated, so nobody can exploit them like this.”

More to come

The impact of the flaw will become more clear in the weeks ahead, according to Sick Codes, as more examples of the IPv4 parsing problem are identified and patched in other popular interpreted languages. “The implications are infinite,” he said.

In this episode of the Security Ledger Podcast (#203) we talk about the apparent hack of a water treatment plant in Oldsmar, Florida with Frank Downs of the firm BlueVoyant. In our second segment: is infosec’s lack of diversity a bug or a feature? Tennisha Martin of Black Girls Hack joins us to talk about the many obstacles that black women face as they try to enter the information security field.


Part 1: Don’t Hack the Water!

An obscure water treatment facility in Oldsmar, Florida became ground zero for the United States’ concerns about foreign adversaries’ ability to access and control critical infrastructure last week, after local officials revealed in a news conference that an unknown assailant had remotely accessed the facility’s SCADA system and attempted to raise levels of the poisonous chemical sodium hydroxide in the drinking water by a factor of more than 100.

Frank Downs is the Director of Proactive Services at Bluevoyant.

The attack failed after a worker at the treatment plant saw it play out on his terminal in real time, and adjusted the sodium hydroxide levels back to normal. Nor would it have worked, officials assured a worried public: sensors elsewhere in the water distribution system would almost certainly have caught the abrupt increase in the dangerous chemical. 

But closies do count when it comes to critical infrastructure hacks, and the Oldsmar incident set off a federal investigation and a flurry of warnings and editorial hand-wringing about the risks facing critical infrastructure systems. That’s especially true with so many workers accessing such systems remotely during the pandemic, leaving them exposed.

Episode 202: The Byte Stops Here – Biden’s Cyber Agenda

In our first segment this week, Frank Downs of the firm BlueVoyant joins us in the Security Ledger studio to discuss the water system hack and why critical infrastructure firms continue to struggle to protect their environments. 

Can Infosec Walk the Talk on Diversity?

For years, professionals have decried the lack of diversity in the information security field which, even more than high tech in general, is dominated by white men. At infosec conferences, concerted efforts have been made to give more visibility and voice to women and minorities. The dreaded “MANels” – panels made up entirely of men – have been targeted and, in many cases, banished. But down in the trenches – where infosec hiring takes place and infosec work is done – there is little evidence of change.

Tennisha Martin is the Executive Director of Black Girls Hack.

The lack of progress, despite a crushing shortage of infosec workers and the stated intentions of infosec leaders and executives, might get you wondering whether cyber’s lack of diversity is a bug or a feature of the system. 

Episode 200: Sakura Samurai Wants To Make Hacking Groups Cool Again. And: Automating Our Way Out of PKI Chaos

Our next guest suggests that it may be a feature indeed. Tennisha Martin is the founder of Black Girls Hack, a group that looks to promote women of color in cyber security. In this conversation, Tennisha and I talk about the many large and small obstacles that keep women like herself from pursuing cyber security careers: from inequalities in K-12 education to pricey certifications and acronym-stuffed job requirements. Solving those problems, Tennisha says, is going to take more than kind words and promises from Infosec leaders. 

Tennisha Martin is the founder of Black Girls Hack, a non-profit organization that promotes women of color in the information security field.

In this episode of the podcast (#200), sponsored by DigiCert: John Jackson, founder of the group Sakura Samurai, talks to us about his quest to make hacking groups cool again. Also: we talk with Avesta Hojjati of the firm DigiCert about the challenge of managing a growing population of digital certificates and how automation may be an answer.


Life for independent security researchers has changed a lot in the last 30 years. The modern information security industry grew out of pioneering work by groups like Boston-based L0pht Heavy Industries and the Cult of the Dead Cow, which began in Lubbock, Texas.

After operating for years in legal limbo on the margins of the software industry, hackers were coming out of the shadows by the turn of the millennium. And by the end of the first decade of the 21st century, they were free to pursue full-fledged careers as bug hunters, with some earning hundreds of thousands of dollars a year through bug bounty programs that have proliferated in the last decade.

Despite that, a stigma still hangs over “hacking” in the mind of the public, law enforcement and policy makers. And, despite the growth of bug bounty programs, red teaming and other “hacking for hire” activities, plenty of blurry lines still separate legal security research from illegal hacking. 

Hacks Both Daring…and Legal

Still, the need for innovative and ethical security work in the public interest has never been greater. The SolarWinds hack exposed the ways in which even sophisticated firms like Microsoft and Google are vulnerable to attacks via compromised software supply chains. Consider also the tsunami of “smart” Internet-connected devices like cameras, television sets and appliances that are working their way into homes and workplaces by the millions.

Podcast Episode 112: what it takes to be a top bug hunter

John Jackson is the co-founder of Sakura Samurai, an independent security research group.

What does a 21st century hacking crew look like? Our first guest this week is trying to find out. John Jackson (@johnjhacking) is an independent security researcher and the co-founder of a new hacking group, Sakura Samurai, which includes a diverse array of security pros ranging from a 15-year-old Australian teen to Aubrey Cottle, aka @kirtaner, the founder of the group Anonymous. Their goal: to energize the world of ethical hacking with daring and attention-getting discoveries that stay on the right side of the double yellow line.

Update: DHS Looking Into Cyber Risk from TCL Smart TVs

In this interview, John and I talk about his recent research, including vulnerabilities he helped discover in smart television sets made by the Chinese firm TCL, in the open source security module Private IP, and at the United Nations.

Can PKI Automation Head Off Chaos?

One of the lesser reported subplots in the recent SolarWinds hack is the use of stolen or compromised digital certificates to facilitate compromises of victim networks and accounts. Stolen certificates played a part in the recent hack of Mimecast, as well as in an attack on employees of a prominent think tank, according to reporting by Reuters and others.

Avesta Hojjati is the head of Research & Development at Digicert.

How is it that compromised digital certificates are falling into the hands of nation state actors? One reason may be that companies are managing more digital certificates than ever, but using old systems and processes to do so. The result: it is becoming easier and easier for expired or compromised certificates to fly under the radar. 

Our final guest this week, Avesta Hojjati, the Head of R&D at DigiCert, Inc., thinks we’ve only seen the beginning of this problem. As more and more connected “things” begin to populate our homes and workplaces, certificate management is going to become a critical task – one that few consumers are prepared to handle.

Episode 175: Campaign Security lags. Also: securing Digital Identities in the age of the DeepFake

What’s the solution? Hojjati thinks more and better use of automation is a good place to start. In this conversation, Avesta and I talk about how digital transformation and the growth of the Internet of Things are raising the stakes for proper certificate management and why companies need to be thinking hard about how to scale their current certificate management processes to meet the challenges of the next decade. 


(*) Disclosure: This podcast was sponsored by Digicert. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.


In this episode of the podcast (#199), sponsored by LastPass, we’re joined by Barry McMahon, a Senior Global Product Marketing Manager at LogMeIn, to talk about data from that company that weighs the security impact of poor password policies and what a “passwordless” future might look like. In our first segment, we speak with Shareth Ben of Securonix about how the massive layoffs that have resulted from the COVID pandemic put organizations at far greater risk of data theft.


The COVID Pandemic has done more than scramble our daily routines, school schedules and family vacations. It has also scrambled the security programs of organizations large and small, first by shifting work from corporate offices to thousands or tens of thousands of home offices, and then by transforming the workforce itself through layoffs and furloughs.

In this episode of the podcast, we dig deep into COVID’s lesser-discussed legacy of enterprise insecurity.

Layoffs and Lost Data

We’ve read a lot about the cyber risks of Zoom (see our interview with Patrick Wardle) or remote offices. But one of the less-mentioned cyber risks engendered by COVID are the mass layoffs that have hit companies in sectors like retail, travel and hospitality, where business models have been upended by the pandemic. The Department of Labor said on Friday that employers eliminated 140,000 jobs in December alone. Since February 2020, employment in leisure and hospitality is down by some 3.9 million jobs, the Department estimates. If data compiled by our next guest is to be believed, many of those departing workers took company data and intellectual property out the door with them. 

Shareth Ben is the executive director of field engineering at Securonix. That company has assembled a report on insider threats that found that most employees take some data with them when they leave. Some of that is inadvertent – but much of it is not.

While data loss detection has long been a “thing” in the technology industry, Ben notes that evolving technologies like machine learning and AI are making it easier to spot patterns of behavior that correlate with data theft – for example, spotting employees who are preparing to leave a company and take sensitive information with them. In this discussion, Shareth and I talk about the Securonix study on data theft, how common the problem is and how COVID and the layoffs stemming from the pandemic have exacerbated the insider data theft problem.

It’s Not The Passwords…But How We Use Them

Nobody likes passwords, but getting rid of them is harder than it seems. Even in 2021, user names and passwords are part and parcel of establishing access to online services – cloud based or otherwise. But all those passwords pose major challenges for enterprise security. Data from LastPass suggest that the average organization’s IT department spends up to five person-hours a week just assisting users with password problems – almost a full day of work.

Barry McMahon is a senior global product marketing manager at LastPass and LogMeIn. McMahon says that, despite talk of a “passwordless” future, traditional passwords aren’t going anywhere anytime soon. But that doesn’t mean that the current password regime of re-used passwords and sticky notes can’t be improved drastically – including by leveraging some of the advanced security features of smart phones and other consumer electronics. Passwords aren’t the problem, so much as how we’re using them, he said.

To start off, I ask Barry about some of the research LastPass has conducted on the password problem in enterprises.


(*) Disclosure: This podcast was sponsored by LastPass, a LogMeIn brand. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

Let’s face it, 2020 was a terrible year. The Coronavirus has killed almost two million people globally and caused trillions of dollars in economic disruption. Wildfires, floods and hurricanes have ravaged the United States, Central America, Australia and parts of Asia.

But trying times have a way of peeling back the curtains and letting us see our world with new eyes. COVID messed up our lives and focused our attention on what really matters.

Maybe that’s why this very bad year has led to some really good conversations and insights here on The Security Ledger, on topics ranging from election security to supply chain security and the security risks of machine learning.

The Security Risks of Machine Learning

To start off, I pulled a March interview from Episode 180 that I did with security luminary Gary McGraw, the noted entrepreneur, author and now co-founder of the Berryville Institute of Machine Learning.

To wrap up 2020, I went back through 35 episodes that aired this year and selected four interviews that stuck out and, in my mind, captured the 2020 zeitgeist, as we delved into issues ranging from the security implications of machine learning to the cyber threats to election systems and connected vehicles. We’re excerpting those conversations now in a special end of year edition of the podcast. We hope you enjoy it.

Taking Hardware Off Label to Save Lives

As winter turned to spring this year, the COVID virus morphed from something happening “over there” to a force that was upending life here at home. As ICUs in places like New York City rapidly filled, the U.S. faced a shortage of respirators for critically ill patients. As it often does: the hacking community rose to the challenge. In our second segment, I pulled an interview from Episode 182 with Trammell Hudson of Lower Layer Labs. In this conversation, Trammell talks to us about Project Airbreak, his work to jailbreak a CPAP machine, and how an NSA hacking tool helped make this inexpensive equipment usable as a makeshift respirator.


COVID Spotlights Zoom’s Security Woes

One of the big cyber security themes of 2020 was the security implications of changes forced by the COVID virus. Chief among them: the rapid shift to remote work and the embrace of technologies, such as Zoom, that enabled remote work and remote meetings. For our third segment, I returned to Episode 183 and my interview with security researcher Patrick Wardle, a Principal Security Researcher at the firm JAMF. In April, he made headlines for disclosing a zero-day vulnerability in the Zoom client – one that could have been used by an attacker to escalate their privileges on a compromised machine. That earned him a conversation with Zoom’s CEO that took place – to Wardle’s dismay – via Zoom.

Securing Connected Vehicles

Finally, while COVID and the ripple effects of the pandemic dominated the news in 2020, it wasn’t the only news. In the shadows of the pandemic, other critical issues continued to bubble. One of them is the increasing tension over the power held by large companies and technology firms. In our final segment, I’m returning to my conversation with Assaf Harel of Karamba Security in Episode 193. Harel is one of the world’s top experts in the security of connected vehicles. In this conversation, Assaf and I talk about the state of vehicle cyber security: what the biggest cyber risks are to connected cars. We also go deep on the right to repair, and how industries like automobiles can balance consumer rights with security and privacy concerns.



In this episode of the podcast (#197), sponsored by LastPass, former U.S. CISO General Greg Touhill joins us to talk about news of a vast hack of U.S. government networks, purportedly by actors affiliated with Russia. In our second segment, with online crime and fraud surging, Katie Petrillo of LastPass joins us to talk about how holiday shoppers can protect themselves – and their data – from cyber criminals.


Every day this week has brought new revelations about the hack of U.S. government networks by sophisticated cyber adversaries believed to be working for the government of Russia. And each revelation, it seems, is worse than the one before. As of Thursday, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) was dispensing with niceties, warning that it had determined that the threat actor “poses a grave risk to the Federal Government and state, local, tribal, and territorial governments as well as critical infrastructure entities and other private sector organizations.”

The incident recalls another from the not-distant past: the devastating compromise of the government’s Office of Personnel Management in 2014 – an attack attributed to adversaries from China that exposed the government’s personnel records – some of its most sensitive data – to a foreign power.


Now comes this attack, which is so big it is hard to know what to call it. Unlike the 2014 incident, it isn’t limited to a single federal agency. In fact, it isn’t even limited to the federal government: state, local and tribal governments have likely been affected, in addition to hundreds or thousands of private firms, including Microsoft, which acknowledged Thursday that it had found instances of the compromised SolarWinds Orion software in its environment.

Former Brigadier General Greg Touhill is the President of Federal Group at the firm AppGate.

How did we get it so wrong? According to our guest this week, the failures were everywhere. Calls for change following OPM fell on deaf ears in Congress. But the government also failed to properly assess new risks – such as software supply chain attacks – as it deployed new applications and computing models. 


Greg Touhill is the President of the Federal Group of secure infrastructure company AppGate. He currently serves as a faculty member of Carnegie Mellon University’s Heinz College. In a prior life, he was a Brigadier General and the first Federal Chief Information Security Officer of the United States government.


In this conversation, General Touhill and I talk about the hack of the US government that has come to light, which he calls a “five alarm fire.” We also discuss the failures of policy and practice that led up to it and what the government can do to set itself on a new path. The federal government has suffered “paralysis through analysis” as it wrestled with the need to change its approach to security from outdated notions of a “hardened perimeter” and keeping adversaries out. “We’ve got to change our approach,” Touhill said.

The malls may be mostly empty this holiday season, but the Amazon trucks come and go with a shocking regularity. In pandemic plagued America, e-commerce has quickly supplanted brick and mortar stores as the go-to for consumers wary of catching a potentially fatal virus. 

(*) Disclosure: This podcast was sponsored by LastPass, a LogMeIn brand. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.


If you work within the security industry, you know that compliance is seen almost as a dirty word. You have likely run into situations like the one @Nemesis09 describes below. Here, we see it’s all too common for organizations to treat compliance testing as a checkbox exercise, and thereby to view compliance in a way that runs against its entire purpose.

There are challenges when it comes to compliance, for sure. Organizations need to figure out whether to shape their efforts to the letter of an existing law or to base their activities on the spirit of a “law” that best suits their security needs, even if that law doesn’t exist. There’s also the assumption that a company can achieve ‘good enough’ security by completing a checkbox exercise, never mind the confusion described by @Nemesis09.

Zoë Rose is a cyber security analyst at BH Consulting
Zoë Rose is a highly regarded hands-on cyber security specialist, who helps her clients better identify and manage their vulnerabilities, and embed effective cyber resilience across their organisation.


However, there is a reason security compliance endures. It’s a bloody good way to focus efforts in the complex world of security. Compliance requirements also speak in terms that senior leadership understands, offering risk-based validation that cyber security teams can put to use.

Security is ever-changing. One day, you have everything patched and ready. The next, a major security vulnerability is publicized, and you rush to implement the appropriate updates. It’s only then that you realise that those fixes break something else in your environment.


Containers Challenge Compliance

Knowing where to begin your compliance efforts and where to focus investment in order to mature your compliance program is stressful and hard to do. Now add the speed and complexity of containerisation, and three compliance challenges come to mind:

  1. Short life spans – Containers tend to not last long. They spin up and down over days, hours, even minutes. (By comparison, traditional IT assets like servers and laptops usually remain live for months or years.) Such dynamism makes container visibility a moving target that is hard to pinpoint. The environment might be in flux, but organizations need to make sure that it always aligns with their compliance requirements regardless of what’s live at the moment.
  2. Testing records – The last thing organizations want to do is walk into an audit without any evidence of the testing they’ve performed on their container environments. These tests provide crucial evidence of the controls that organizations have incorporated into their container compliance strategies. With documented tests, organizations can help their audits run more smoothly without having to reconstruct what they did weeks or months ago.
  3. Integrity of containers – Consider the speed of a container’s lifecycle, as discussed above. You need to carefully monitor your containers and practice highly restricted deployment. Otherwise, you won’t be able to tell if an unauthorized or unexpected action occurred in your environment. Such unanticipated occurrences could be warning signs of a security incident.

Building a Container Security Program

One of the most popular certifications I deal with is ISO/IEC 27001, in which security is broken down into areas within the Information Security Management System. This logical separation allows for different areas of the business to address their security requirements while maintaining a holistic lens.

Let’s look at the first challenge identified above: short container life spans. Organizations can address this obstacle by building their environments in a standardized way: hardening them with appropriate measures and continuously validating them at build-time and (importantly) run-time. That means having systems in place to actively monitor the actions containers take and the interactions between systems and running services, along with alerts for unexpected transactions.
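As a rough illustration (not something prescribed in this article), run-time validation can start as simply as diffing what is actually running against an approved baseline. The image names and the `approved` set below are invented for the example; in practice the running list would come from your orchestrator’s API or `docker ps`:

```python
# Sketch: flag running containers whose images are not in the approved baseline.
# Container and image names here are hypothetical.

def find_unexpected(running_containers, approved_images):
    """Return containers whose image is outside the approved baseline."""
    return [c for c in running_containers if c["image"] not in approved_images]

# Baseline recorded by the compliance program (hypothetical values).
approved = {"registry.example.com/web:1.4", "registry.example.com/api:2.1"}

# Snapshot of the live environment (hypothetical values).
running = [
    {"name": "web-1", "image": "registry.example.com/web:1.4"},
    {"name": "miner", "image": "docker.io/unknown/xmrig:latest"},
]

for container in find_unexpected(running, approved):
    print(f"ALERT: unexpected container {container['name']} "
          f"running image {container['image']}")
```

Because containers churn so quickly, a check like this only has value if it runs continuously and its alerts feed the same records you would hand to an auditor.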

Now for the second challenge above. In order to have resilient containers in production, an organisation has to complete a proper validation/testing phase prior to launch. In almost every program I have been a part of, when rolling out new features or services, there is always a guide on “Go/No Go” requirements. This includes things like which tests can fail gracefully, which types of errors are allowed and which tests are considered a “no go” because they can cause an incident or prevent a transaction from completing. In a containerised environment, such requirements could take the form of bandwidth or latency requirements within your network. These elements, among others, could shape the conditions for when and to what extent your organization is capable of running a test.
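A “Go/No Go” gate of the kind described can be sketched as a simple classification of test results into tests that may fail gracefully and tests that block a release. The test names and results below are made up for illustration:

```python
# Sketch: a "Go/No Go" release gate. Blocking tests fail the release;
# tolerated tests may fail gracefully. Names and results are hypothetical.

BLOCKING_TESTS = {"transaction_completes", "auth_flow"}   # "no go" on failure
TOLERATED_TESTS = {"latency_under_200ms", "ui_smoke"}     # may fail gracefully

def release_decision(results):
    """results maps test name -> True (pass) / False (fail); returns 'GO' or 'NO GO'."""
    failed_blocking = [t for t in BLOCKING_TESTS if not results.get(t, False)]
    return "NO GO" if failed_blocking else "GO"

results = {
    "transaction_completes": True,
    "auth_flow": True,
    "latency_under_200ms": False,  # tolerated failure
    "ui_smoke": True,
}
print(release_decision(results))  # "GO": only a tolerated test failed
```

Keeping the decision and its inputs in a recorded, machine-readable form also addresses the testing-records challenge: the gate’s output doubles as audit evidence.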

In addressing the third challenge, the integrity of containers, we face a major compliance issue. Your organization therefore needs to ask itself the following questions:

  • Have we ever conducted a stress test of our containers’ integrity?
  • Has our environment ever had a table-top exercise done with the scenario of a container gone rogue?
  • Has a red team exercise ever been launched with the sole purpose of disrupting or attacking the integrity of said containers?
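One concrete way to test that integrity, offered here only as an illustration, is to compare the digests of images observed in the environment against a record captured at build time (ideally a signed one). The image names and contents below are fabricated; in a real deployment the digests would come from the registry or `docker inspect`:

```python
# Sketch: verify observed image digests against a build-time record.
# All names and contents are fabricated for illustration.
import hashlib

def digest(content: bytes) -> str:
    """Compute a sha256 digest string in the familiar 'sha256:<hex>' form."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

# Recorded at build/release time (e.g. stored in a signed manifest).
expected = {"web": digest(b"web-image-v1")}

# What is actually observed running in the environment.
observed = {"web": digest(b"web-image-v1-tampered")}

for name, exp in expected.items():
    if observed.get(name) != exp:
        print(f"INTEGRITY FAILURE: {name} digest does not match build record")
```

A mismatch like this is exactly the kind of “unauthorized or unexpected action” the third challenge warns about, and it is the sort of scenario a table-top or red team exercise could rehearse.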

Understand the Value of Compliance

In this article, the author discusses the best practices and known risks associated with Docker. It covers the expected foundations that you must align with in order to reduce the likelihood of a configuration causing an incident within your containerised infrastructure.

No environment is perfect, and no solution is 100% secure. That being said, the value of compliance when it comes to containerisation security programs is to validate these processes so that they can help reduce the likelihood of an incident, quickly identify events when they occur and minimize the potential impact to the overall environment.

Whilst compliance is often seen as a dirty word, it can be leveraged to enhance the overall program through a holistic lens, becoming something richer and more attractive to all parties.