In this episode of the podcast (#204) we’re joined by Josh Corman of CISA, the Cybersecurity and Infrastructure Security Agency, to talk about how that agency is working to secure the healthcare sector, in particular vaccine supply chains that have come under attack by nations like Russia, China and North Korea.


Incidents like the SolarWinds hack have focused our attention on the threat posed by nation states like Russia and China, as they look to steal sensitive government and private sector secrets. But in the vital healthcare sector, nation state actors are just one among many threats to the safety and security of networks, data, employees and patients.

Joshua Corman is the Chief Strategist for Healthcare and COVID on the CISA COVID Task Force.

In recent years, China has made a habit of targeting large health insurers and healthcare providers as it seeks to build what some have described as a “data lake” of U.S. residents that it can mine for intelligence. Criminal ransomware groups have released their malicious wares on the networks of hospitals, crippling their ability to deliver vital services to patients and – more recently – nation state actors like North Korea, China and Russia have gone phishing – with a “ph” – for information on cutting-edge vaccine research related to COVID-19.

How is the U.S. government responding to this array of threats? In this episode of the podcast, we’re bringing you an exclusive interview with Josh Corman, the Chief Strategist for Healthcare and COVID on the COVID Task Force at CISA, the Cybersecurity and Infrastructure Security Agency.

In this interview, Josh and I talk about the scramble within CISA to secure a global vaccine supply chain in the midst of a global pandemic. Among other things, Josh talks about the work CISA has done in the last year to identify and shore up the cyber security of vital vaccine supply chain partners – from small biotech firms that produce discrete but vital components needed to produce vaccines to dry ice manufacturers whose product is needed to transport and store vaccines.

To start off I asked Josh to talk about CISA’s unique role in securing vaccines and how the Federal Government’s newest agency works with other stakeholders, from the FBI to the FDA, to address widespread cyber threats.



As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

In this episode of the Security Ledger Podcast (#202) we do a deep dive on President Biden’s cyber agenda with three experts on federal cyber policy and the challenges facing the new administration.


Well, it almost didn’t happen, but on January 20, Joseph Robinette Biden Jr. was sworn in as the 46th President of the United States. More than any president since Franklin Roosevelt, Biden inherited a country in the throes of a crisis. By the time of his inauguration, the COVID virus had killed upwards of 400,000 U.S. residents and tanked the national economy. As the events of January 6 indicated, right-wing militant groups are stirring and threatening to topple democratic institutions.

Enter Solar Storm

And, as if that wasn’t enough, the weeks between the November election and Biden’s January inauguration brought to light evidence of what is perhaps the biggest cyber intrusion by a foreign adversary into US government networks, the so-called Solar Storm hack, which has been widely attributed to the government of Russia.

Even before Solar Storm, Biden made clear as a candidate that a cyber security reset was needed and that cyber would be a top priority of his administration. The wide-ranging hack of the US Treasury and the Departments of State, Justice, Defense and Homeland Security – among others – just added fuel to the roaring dumpster fire of Federal IT security.

But what will that reset look like? To understand a bit better what might be in store in the months ahead we devoted this episode of the podcast to interviewing three experts on federal IT security and cyber defense. 

Rebuilding Blocks

But first, before you can do a reset you need to understand what went wrong the first time around. In the case of federal cyber security, that’s not a short list.

In our first segment, we’re joined by two experts on cyber policy to talk about the US government’s struggles to get cyber security right, culminating with the problems seen during the Trump administration.

Lauren Zabierek is the Executive Director of the Cyber Project at the Belfer Center for Science and International Affairs at Harvard’s Kennedy School of Government. She’s joined by Paul Kolbe, the Director of the Intelligence Project at the Belfer Center. The two joined me in the Security Ledger studios to talk about how the Biden Administration might rebuild the US government’s cyber function and who might populate key positions in the new administration.

To start off, I asked them what the biggest challenges are out of the gate for the new administration. 

The Byte Stops Here: What Cyber Leadership Looks Like

As Harry Truman famously said: the “Buck stops” at the President’s desk. That wasn’t a phrase that was heard much during the Trump years. But with a new President sworn in, what does real leadership look like on federal cyber security?

Mark Weatherford is the Chief Strategy Officer at the National Cyber Security Center.

To find out, we invited Mark Weatherford into the studio to talk. Mark is the Chief Strategy Officer at the National Cyber Security Center, a former CISO for the State of California and a former Deputy Under Secretary for Cybersecurity at the DHS. In this conversation, Mark and I talk about the importance of presidential leadership on cyber security and what – if anything – the Trump administration got right on cyber policy in its four years in power.


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

Independent security researchers testing the security of the United Nations were able to compromise public-facing servers and a cloud-based development account for the U.N. and lift data on more than 100,000 staff and employees, according to a report released Monday.

Researchers affiliated with Sakura Samurai, a newly formed collective of independent security experts, exploited an exposed Github repository belonging to the International Labour Organization and the U.N.’s Environment Programme (UNEP) to obtain “multiple sets of database and application credentials” for UNEP applications, according to a blog post by one of the Sakura Samurai researchers, John Jackson, explaining the group’s work.

Specifically, the group was able to obtain access to database backups for private UNEP projects that exposed a wealth of information on staff and operations. That includes a document with more than 1,000 U.N. employee names and emails; more than 100,000 employee travel records including destination, length of stay and employee ID numbers; more than 1,000 U.N. employee records and so on.

The researchers stopped their search once they were able to obtain personally identifying information. However, they speculated that more data was likely accessible.

Looking for Vulnerabilities

The researchers were scanning the U.N.’s network as part of the organization’s Vulnerability Disclosure Program. That program, started in 2016, has resulted in a number of vulnerabilities being reported to the U.N., many of them common cross-site scripting (XSS) and SQL injection flaws in the U.N.’s main website, un.org.

For their work, Sakura Samurai took a different approach, according to Jackson, in an interview with The Security Ledger. The group started by enumerating UN subdomains and scanning them for exposed assets and data. One of those, an ILO.org Apache web server, was misconfigured and exposing files linked to a Github account. By downloading one of those files, the researchers were able to recover the credentials for a UN survey management panel, part of a little-used but public-facing survey feature on the UN site. While the survey tool didn’t expose a tremendous amount of data, the researchers continued scanning the site and eventually discovered a subdomain that exposed a file containing the credentials for a UN Github account. That account held 10 more private GitHub repositories encompassing databases and database credentials, backups and files containing personally identifying information.
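For readers curious how this kind of reconnaissance works, the sketch below (Python) illustrates the general pattern of enumerating hosts and probing them for commonly exposed files. It is not Sakura Samurai’s actual tooling; the subdomains and paths are hypothetical, and probing like this should only ever be done with authorization, as under the U.N.’s disclosure program.

    # Illustrative sketch only -- not the researchers' actual tooling. It checks a
    # short, hypothetical list of subdomains for a few commonly exposed files.
    import urllib.request
    import urllib.error

    SUBDOMAINS = ["app.example.org", "surveys.example.org"]   # hypothetical hosts
    EXPOSED_PATHS = ["/.git/config", "/backup.sql", "/.env"]  # commonly leaked files

    def probe(host, path, timeout=5.0):
        url = f"https://{host}{path}"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                # A 200 response to one of these paths deserves a closer, manual look.
                return resp.status == 200
        except (urllib.error.URLError, OSError):
            return False

    if __name__ == "__main__":
        for host in SUBDOMAINS:
            for path in EXPOSED_PATHS:
                if probe(host, path):
                    print(f"Possible exposure: https://{host}{path}")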

Much more to be found

Jackson said that the breach is extensive, but that much more was likely exposed prior to his group’s discovery.

“Honestly, there’s way more to be found. We were looking for big fish to fry,” he said. Among other things, a Sakura Samurai researcher discovered exposed APIs for the Twilio cloud platform – those also could have been abused to extract data and personally identifying information from UN systems, he said.

In an email response to The Security Ledger, Farhan Haq, a Deputy Spokesman for the U.N. Secretary-General said that the U.N.’s “technical staff in Nairobi … acknowledged the threat and … took ‘immediate steps’ to remedy the problem.”

“The flaw was remedied in less than a week, but whether or not someone accessed the database remains to be seen,” Haq said in the statement.

A disclosure notice from the U.N. on the matter is “still in the works,” Haq said. According to Jackson, data on EU residents was among the data exposed in the incident. Under the terms of the European Union’s General Data Protection Regulation (GDPR), the U.N. has 72 hours to notify regulators about the incident.

Nation State Exposure?

Unfortunately, Jackson said that there is no way of knowing whether his group was the first to discover the exposed data. It is very possible, he said, that they were not.

“It’s likely that nation state threat actors already have this,” he said, noting that data like travel records could pose physical risks, while U.N. employee email and ID numbers could be useful in tracking and impersonating employees online and offline.

Another danger is that malicious actors with access to the source code of U.N. applications could plant back doors or otherwise manipulate the functioning of those applications to suit their needs. The recent compromise of software updates from the firm SolarWinds led to attacks on hundreds of government agencies and private sector firms. That incident has been tied to hacking groups associated with the government of Russia.

Asked whether the U.N. had conducted an audit of the affected applications, Haq, the spokesperson for the U.N. Secretary General said that the agency was “still looking into the matter.”

A Spotty Record on Cybersecurity

This is not the first cybersecurity lapse at the U.N. In January 2020, the website The New Humanitarian reported that the U.N. discovered but did not disclose a major hack into its IT systems in Europe in 2019 that involved the compromise of UN domains and the theft of administrator credentials.

In this episode of the podcast (#199), sponsored by LastPass, we’re joined by Barry McMahon, a Senior Global Product Marketing Manager at LogMeIn, to talk about data from that company that weighs the security impact of poor password policies and what a “passwordless” future might look like. In our first segment, we speak with Shareth Ben of Securonix about how the massive layoffs that have resulted from the COVID pandemic put organizations at far greater risk of data theft.


The COVID Pandemic has done more than scramble our daily routines, school schedules and family vacations. It has also scrambled the security programs of organizations large and small, first by shifting work from corporate offices to thousands or tens of thousands of home offices, and then by transforming the workforce itself through layoffs and furloughs.

In this episode of the podcast, we do a deep dive on COVID’s lesser-discussed legacy of enterprise insecurity.

Layoffs and Lost Data

We’ve read a lot about the cyber risks of Zoom (see our interview with Patrick Wardle) or remote offices. But one of the less-mentioned cyber risks engendered by COVID is the mass layoffs that have hit companies in sectors like retail, travel and hospitality, where business models have been upended by the pandemic. The Department of Labor said on Friday that employers eliminated 140,000 jobs in December alone. Since February 2020, employment in leisure and hospitality is down by some 3.9 million jobs, the Department estimates. If data compiled by our next guest is to be believed, many of those departing workers took company data and intellectual property out the door with them.

Shareth Ben is the executive director of field engineering at Securonix. That company has assembled a report on insider threats that found that most employees take some data with them. Some of that is inadvertent – but much of it is not.

While data loss detection has long been a “thing” in the technology industry, Ben notes that evolving technologies like machine learning and AI are making it easier to spot patterns of behavior that correlate with data theft – for example: spotting employees who are preparing to leave a company and take sensitive information with them. In this discussion, Shareth and I talk about the Securonix study on data theft, how common the problem is and how COVID and the layoffs stemming from the pandemic have exacerbated the insider data theft problem.
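As a rough illustration of the kind of baseline-versus-spike analysis these tools automate, the toy Python sketch below flags a user whose daily outbound data volume jumps far above their own history. The data, threshold and scenario are invented for illustration; commercial behavior analytics platforms weigh many more signals than this.

    # Toy illustration of baseline-vs-spike detection. All numbers are made up.
    from statistics import mean, stdev

    # Megabytes uploaded to personal cloud storage per day, per user.
    history = {"departing_employee": [12, 9, 15, 11, 10, 13, 900]}

    def spiked(daily_mb, z_threshold=3.0):
        baseline, today = daily_mb[:-1], daily_mb[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        # Flag when today's volume sits far above the user's own historical baseline.
        return sigma > 0 and (today - mu) / sigma > z_threshold

    for user, volumes in history.items():
        if spiked(volumes):
            print(f"{user}: outbound data volume far above baseline -- review")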

It’s Not The Passwords…But How We Use Them

Nobody likes passwords but getting rid of them is harder than it seems. Even in 2021, user names and passwords are part and parcel of establishing access to online services – cloud based or otherwise. But all those passwords pose major challenges for enterprise security. Data from LastPass suggests that the average organization’s IT department spends up to 5 person-hours a week just assisting with users’ password problems – almost a full day of work.

Barry McMahon is a senior global product marketing manager at LastPass and LogMeIn. McMahon says that, despite talk of a “passwordless” future, traditional passwords aren’t going anywhere anytime soon. But that doesn’t mean that the current password regime of re-used passwords and sticky notes can’t be improved drastically – including by leveraging some of the advanced security features of smart phones and other consumer electronics. Passwords aren’t the problem so much as how we’re using them, he said.

To start off, I ask Barry about some of the research LastPass has conducted on the password problem in enterprises.


(*) Disclosure: This podcast was sponsored by LastPass, a LogMeIn brand. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

Neopets, a website that allows children to care for “virtual pets,” has exposed a wide range of sensitive data online including credentials needed to access company databases, employee emails, and even repositories containing the proprietary code for the site, according to information shared with The Security Ledger.

The data includes the IP addresses of Neopets visitors, information that could be used to target Neopets users, according to independent researcher John Jackson, who said he discovered the information after scanning the company’s website with a security tool.

Stolen Accounts For Sale

Neopets is a “virtual pet website” that first launched in 1999. It permits users – many of them children – to care for virtual pets and buy virtual items for them using virtual points earned in-game (Neopoints) or with “Neocash” that can be purchased with real-world money, or won in-game. Neopets was purchased by Viacom for $160 million in 2005 and acquired in 2017 by the Chinese company NetDragon.

In an email to The Security Ledger, Jackson said that he noticed Neopets accounts being offered for sale on an online forum. That prompted him to run a scan on the Neopets site using a forensics tool. That scan revealed a Neopets subdomain that exposed the guts of the Neopets website, Jackson said via instant message.

“We looked through and found employee emails, database credentials and their whole codebase,” he said.

Jackson shared screen shots of the Neopets directory as well as snippets of code captured from the site that suggest credentials were “hard coded,” or embedded in the underlying code of the website. Working with security researcher Nick Sahler, Jackson was able to download the website’s entire codebase, revealing database credentials, employee emails, user IP addresses and private code repositories. The two researchers also uncovered internal IP addresses and the underlying application logic for the entire Neopets application.
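To give a sense of how hard-coded credentials turn up in a code dump, the short Python sketch below runs a first-pass regular-expression scan over a source tree. It is purely illustrative and is not the researchers’ tooling; real secret scanners use much richer pattern sets and entropy checks.

    # First-pass scan for hard-coded credentials in a source tree (illustrative only).
    import os
    import re

    SECRET_PATTERN = re.compile(
        r"(password|passwd|db_pass|secret|api_key)\s*[=:]\s*[\"'][^\"']+[\"']", re.I
    )

    def scan_tree(root="."):
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith((".php", ".py", ".js", ".conf")):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    text = open(path, errors="ignore").read()
                except OSError:
                    continue
                for lineno, line in enumerate(text.splitlines(), 1):
                    if SECRET_PATTERN.search(line):
                        print(f"{path}:{lineno}: possible hard-coded credential")

    if __name__ == "__main__":
        scan_tree(".")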

Snippet of code from the Neopets website showing hard coded credentials. (Image courtesy of John Jackson.)

“This is extremely bad because even though we didn’t attempt to access PII (personally identifying information), with these codebases we can undoubtedly do so,” Jackson said. “They need to fix the root issues, otherwise they will suffer yet another threat-actor related breach.”

Jackson and Sahler said they have reported their findings to Neopets and provided copies of email exchanges with a support tech at the company who said he would pass the issue to “one of our coders.”

Neopets has not yet responded to requests for comment on the researchers’ allegations.

If true, this would be the second serious security incident involving the Neopets site. In 2016, the company acknowledged a breach that spilled usernames, passwords, IP addresses and other personal information for some 27 million users. That breach may have occurred as early as 2013, according to the website Have I Been Pwned.

The issue appears to be related to a misconfigured Apache web server, Jackson said. Though many web-based applications are hosted on infrastructure owned by cloud providers such as Amazon, Google or Microsoft’s Azure, leaked documents indicate that the 20-year-old Neopets website continues to operate from infrastructure it owns and operates.

Misconfigured web servers are a frequent source of security breaches – whether self-hosted or hosted by a third party. In 2017, credit rating agency Equifax acknowledged that a hole in the Apache Struts platform, first identified in March 2017 and patched in August of that year, was used by hackers to compromise a web application and gain access to information that included names, email addresses and, for US residents, Social Security Numbers. The vulnerability, identified as CVE-2017-5638, was associated with a string of attacks in 2017 and 2018.

High Bar for Collecting Information on Children

The breach could spell legal trouble for Neopets and NetDragon. Online firms that manage information on children are held to a high standard under the federal Children’s Online Privacy Protection Act (“COPPA”).

In June, the U.S. Federal Trade Commission (FTC) announced that it had reached a settlement with children’s mobile application developer HyperBeard Inc. that included a $4 million fine for COPPA violations for failing to obtain parental consent before processing children’s personal information for targeted advertising. (HyperBeard ultimately paid just $150,000 of that penalty, citing an inability to pay the full amount.)

In September 2019, Google and its YouTube subsidiary agreed to pay a record $170 million fine to settle allegations by the Federal Trade Commission and the New York Attorney General that the YouTube video sharing service violated COPPA by illegally collecting personal information from children without their parents’ consent.

In this episode of the podcast (#197), sponsored by LastPass, former U.S. CISO General Greg Touhill joins us to talk about news of a vast hack of U.S. government networks, purportedly by actors affiliated with Russia. In our second segment, with online crime and fraud surging, Katie Petrillo of LastPass joins us to talk about how holiday shoppers can protect themselves – and their data – from cyber criminals.


Every day this week has brought new revelations about the hack of U.S. Government networks by sophisticated cyber adversaries believed to be working for the Government of Russia. And each revelation, it seems, is worse than the one before. As of Thursday, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) was dispensing with niceties, warning that it had determined that the Russian hacking campaign “poses a grave risk to the Federal Government and state, local, tribal, and territorial governments as well as critical infrastructure entities and other private sector organizations.”

The incident recalls another from the not-distant past: the devastating compromise of the Government’s Office of Personnel Management in 2014 – an attack attributed to adversaries from China that exposed the government’s personnel records – some of its most sensitive data – to a foreign power.

Now comes this attack, which is so big it is hard to know what to call it. Unlike the 2014 incident, it isn’t limited to a single federal agency. In fact, it isn’t even limited to the federal government: state, local and tribal governments have likely been affected, in addition to hundreds or thousands of private firms including Microsoft, which acknowledged Thursday that it had found instances of the software compromised by the Russians – the SolarWinds Orion product – in its environment.

Former Brigadier General Greg Touhill is the President of Federal Group at the firm AppGate.

How did we get it so wrong? According to our guest this week, the failures were everywhere. Calls for change following OPM fell on deaf ears in Congress. But the government also failed to properly assess new risks – such as software supply chain attacks – as it deployed new applications and computing models. 

Greg Touhill is the President of the Federal Group at secure infrastructure company AppGate. He currently serves as a faculty member of Carnegie Mellon University’s Heinz College. In a prior life, Greg was a Brigadier General and the first Federal Chief Information Security Officer of the United States government.

In this conversation, General Touhill and I talk about the hack of the US government that has come to light, which he calls a “five alarm fire.” We also discuss the failures of policy and practice that led up to it and what the government can do to set itself on a new path. The federal government has suffered “paralysis through analysis” as it wrestled with the need to change its approach to security from outdated notions of a “hardened perimeter” and keeping adversaries out. “We’ve got to change our approach,” Touhill said.

The malls may be mostly empty this holiday season, but the Amazon trucks come and go with a shocking regularity. In pandemic plagued America, e-commerce has quickly supplanted brick and mortar stores as the go-to for consumers wary of catching a potentially fatal virus. 

(*) Disclosure: This podcast was sponsored by LastPass, a LogMeIn brand. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

If you work within the security industry, you know that compliance is seen almost as a dirty word. You have likely run into situations like the one @Nemesis09 describes below. Here, we see it’s all too common for organizations to treat compliance testing as a checkbox exercise and thereby to view compliance in a way that goes against its entire purpose.

There are challenges when it comes to compliance, for sure. Organizations need to figure out whether to shape their efforts to the letter of an existing law or to base their activities on the spirit of a “law” that best suits their security needs—even if that law doesn’t exist. There’s also the assumption that a company can acquire ‘good enough’ security by implementing a checkbox exercise, never mind the confusion explained by @Nemesis09.

Zoë Rose is a highly regarded, hands-on cyber security specialist at BH Consulting who helps her clients better identify and manage their vulnerabilities and embed effective cyber resilience across their organisation.

However, there is a reason that security compliance continues forward: it’s a bloody good way to focus efforts in the complex world of security. Compliance requirements also frame risk-based validation in terms that senior leadership understands and that cyber security teams can put to use.

Security is ever-changing. One day, you have everything patched and ready. The next, a major security vulnerability is publicized, and you rush to implement the appropriate updates. It’s only then that you realise that those fixes break something else in your environment.

Containers Challenge Compliance

Knowing where to begin your compliance efforts and where to focus investment in order to mature your compliance program is stressful and hard to do. Now, add to that the speed and complexity of containerisation, and three compliance challenges come to mind:

  1. Short life spans – Containers tend not to last long. They spin up and down over days, hours, even minutes. (By comparison, traditional IT assets like servers and laptops usually remain live for months or years.) Such dynamism means container visibility is constantly shifting and hard to pin down. The environment might be in flux, but organizations need to make sure that it always aligns with their compliance requirements regardless of what’s live at the moment.
  2. Testing records – The last thing organizations want to do is walk into an audit without any evidence of the testing they’ve implemented on their container environments. These tests provide crucial evidence of the controls that organizations have incorporated into their container compliance strategies. With documented tests, organizations can help their audits run more smoothly without needing to try to remember what they did weeks or months ago.
  3. Integrity of containers – Consider the speed of a container’s lifecycle, as discussed above. You need to carefully monitor your containers and practice highly restricted deployment. Otherwise, you won’t be able to tell if an unauthorized or unexpected action occurred in your environment. Such unanticipated occurrences could be warning signs of a security incident.

Building a Container Security Program

One of the most popular certifications I deal with is ISO/IEC 27001, in which security is broken down into areas within the Information Security Management System. This logical separation allows for different areas of the business to address their security requirements while maintaining a holistic lens.

Let’s look at the first challenge identified above: short container life spans. Organizations can address this obstacle by building their environments in a standardized way: hardening them with appropriate measures and continuously validating them through build-time and (importantly) run-time. This means having systems in place to actively monitor the actions these containers take, the interactions between systems and the services that are running, along with alerts for unexpected transactions.
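As a concrete, if minimal, example of that kind of run-time validation, the Python sketch below assumes a Docker host and the docker SDK (pip install docker) and flags running containers that break two common hardening expectations. It is a sketch of the idea rather than a monitoring system, and the specific checks are assumptions about what a hardening baseline might require.

    # Minimal run-time audit pass, assuming Docker and the "docker" Python SDK.
    import docker

    def audit_running_containers():
        client = docker.from_env()
        for container in client.containers.list():
            cfg = container.attrs.get("Config", {})
            host_cfg = container.attrs.get("HostConfig", {})
            findings = []
            if host_cfg.get("Privileged"):
                findings.append("running privileged")
            if not cfg.get("User"):
                findings.append("running as root (no User set)")
            if findings:
                # In a real program these findings would feed alerting, not stdout.
                print(f"{container.name}: {', '.join(findings)}")

    if __name__ == "__main__":
        audit_running_containers()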

Now for the second challenge above. In order to have resilient containers in production, an organisation has to have a proper validation/testing phase prior to launch. In almost every program I have been a part of, when rolling out new features or services, there is always a guide on “Go/No Go” requirements. This includes things like which tests can fail gracefully, which types of errors are allowed and which tests are considered a “no go” because they could cause an incident or prevent the transaction from completing. In a containerised environment, such requirements could take the form of bandwidth or latency requirements within your network. These elements, among others, could shape the conditions for when and to what extent your organization is capable of running a test.
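As a simple illustration, a “Go/No Go” gate of this kind can be as small as the Python check below, which fails a release check when a service’s health endpoint exceeds a latency budget. The endpoint URL and the 200 millisecond budget are hypothetical; your own requirements would set the real thresholds.

    # Hypothetical go/no-go latency gate. URL and budget are illustrative assumptions.
    import sys
    import time
    import urllib.request

    HEALTH_URL = "http://localhost:8080/health"  # hypothetical health endpoint
    LATENCY_BUDGET_S = 0.200

    def within_budget(url=HEALTH_URL, budget=LATENCY_BUDGET_S):
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=2) as resp:
            resp.read()
        return (time.monotonic() - start) <= budget

    if __name__ == "__main__":
        ok = within_budget()
        print("GO" if ok else "NO GO")
        sys.exit(0 if ok else 1)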

In addressing the third challenge, the integrity of containers, we face a major compliance issue. Your organization therefore needs to ask itself the following questions:

  • Have we ever conducted a stress test of our containers’ integrity before?
  • Has our environment ever had a table-top exercise done with the scenario of a container gone rogue?
  • Has a red team exercise ever been launched with the sole purpose of disrupting or attacking the integrity of said containers?

Understand the Value of Compliance

In this article, the author discusses the best practices and known risks associated with Docker. It covers the expected foundations that you must align with in order to reduce the likelihood of a configuration causing an incident within your containerized infrastructure.

No environment is perfect, and no solution is 100% secure. That being said, the value of compliance when it comes to containerisation security programs is to validate these processes so that they help to reduce the likelihood of an incident, quickly identify events when they occur and minimize the potential impact to the overall environment.

Whilst compliance is often seen as a dirty word, it can be leveraged to enhance the overall program through a holistic lens, becoming something richer and more attractive to all parties.

A serious security flaw in a commonly used, but overlooked open source security module may be undermining the integrity of hundreds of thousands or even millions of private and public applications, putting untold numbers of organizations and data at risk.

A team of independent security researchers that includes application security professionals at Shutterstock and Squarespace identified the flaw in private-ip, an npm module first published in 2016 that enables applications to block request forgery attacks by filtering out attempts to access private IPv4 addresses and other restricted IPv4 address ranges, as defined by ARIN.

The SSRF Blocker That Didn’t

The researchers identified a so-called Server Side Request Forgery (SSRF) vulnerability in commonly used versions of private-ip. The flaw allows malicious attackers to carry out SSRF attacks against a population of applications that may number in the hundreds of thousands or millions globally. It is just the latest incident to raise questions about the security of the “software supply chain,” as more and more organizations shift from monolithic to modular software application development built on a foundation of free and open source code.

According to an account by researcher John Jackson of Shutterstock, flaws in the private-ip code meant that the filtering allegedly carried out by the code was faulty. Specifically, independent security researchers reported being able to bypass protections and carry out Server-Side Request Forgeries against top tier applications. Further investigation uncovered a common explanation for those successful attacks: private-ip, an open source security module used by the compromised applications.

SSRF attacks allow malicious actors to abuse functionality on a server: reading data from internal resources or modifying the code running on the server. Private-ip was created to help application developers spot and block such attacks. SSRF is one of the most common forms of attack on web applications according to OWASP.

The problem: private-ip didn’t do its job very well.

“The code logic was using a simple Regular Expression matching,” Jackson (@johnjhacking) told The Security Ledger. Jackson, working with other researchers, found that private-ip was blind to a wide number of variations of localhost and other private IP ranges, as well as simple tricks that hackers use to obfuscate IP addresses in attacks. For example, researchers found they could send successful requests for localhost resources by obscuring those addresses using hexadecimal equivalents of private IP addresses or with simple substitutions like using four zeros for each octet of the IP address instead of one (so: 0000.0000.0000.0000 instead of 0.0.0.0). The result: a wide range of private and restricted IP addresses registered as public IP addresses and slipped past private-ip.
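The Python sketch below is not the private-ip code itself, but a simplified illustration of the flaw class Jackson describes: a naive pattern match misses equivalent encodings of a loopback address, while resolving the host and classifying the resulting addresses with the standard ipaddress module catches them.

    # Simplified illustration of the flaw class -- not private-ip's actual code.
    import ipaddress
    import re
    import socket

    NAIVE = re.compile(r"^(127\.|10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.|0\.0\.0\.0$)")

    def naive_is_blocked(host):
        # Misses equivalent encodings such as "2130706433" or "0x7f.0.0.1".
        return bool(NAIVE.match(host))

    def resolved_is_blocked(host, port=80):
        # Resolve first, then classify the addresses a request would actually hit.
        try:
            infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        except socket.gaierror:
            return True  # fail closed
        for info in infos:
            addr = ipaddress.ip_address(info[4][0])
            if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
                return True
        return False

    # The integer form of 127.0.0.1 sails past the naive pattern, yet it is loopback:
    assert not naive_is_blocked("2130706433")
    assert ipaddress.ip_address(2130706433).is_loopback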

Private-IP: small program, BIG footprint

The scope of the private-ip flaws is difficult to grasp. However, available data suggests the component is very widely used. Jackson said that hundreds of thousands, if not millions, of applications likely incorporate private-ip code in some fashion. Many of those applications are not publicly addressable from the Internet, but may still be vulnerable to attack by an adversary with access to their local environment.

Private-ip is the creation of Damir Mustafin (aka “frenchbread”), a developer based in the Balkan country of Montenegro, according to his GitHub profile, which contains close to 60 projects of different scopes. Despite its popularity and widespread use, private-ip was not a frequent focus of Mr. Mustafin’s attention. After first being published in August 2016, the application had only been updated once, in April 2017, prior to the most recent update to address the SSRF flaw.

A Low Key, High Distribution App

The lack of steady attention didn’t dissuade other developers from downloading and using the npm private-ip package, however. It has an average of 14,000 downloads weekly, according to data from GitHub. And direct downloads of private-ip are just one measure of its use. Fully 355 publicly identified npm modules are dependents of private-ip v1.0.5, which contains the SSRF flaws. An additional 73 GitHub projects have dependencies on private-ip. All told, that accounts for 153,374 combined weekly downloads of private-ip and its dependents. One of the most widely used applications that relies on private-ip is libp2p, an open source network stack that is used in a wide range of decentralized peer-to-peer applications, according to Jackson.

While the flaw was discovered by so-called “white hat” vulnerability researchers, Jackson said that it is almost certain that malicious actors knew about and exploited it – either directly or inadvertently. Other security researchers have almost certainly stumbled upon it before as well, perhaps discovering a single address that slipped through private-ip and enabled an SSRF attack while failing to grasp private-ip’s role or the bigger flaws in the module.

In fact, private-ip may be the common source of a long list of SSRF vulnerabilities that have been independently discovered and reported in the last five years, Jackson said. “This may be why a lot of enterprises have struggled with SSRF and block list bypasses,” he said.

After identifying the problem, Jackson and his team contacted the developer, Damir Mustafin (aka “frenchbread”), looking for a fix. However, it quickly became clear that they would need to enlist additional development talent to forge a patch that was comprehensive. Jackson tapped two developers: Nick Sahler of the website hosting provider Squarespace and the independent developer known as Sick Codes (@sickcodes) to come up with a comprehensive fix for private-ip. The two implemented the netmask utility and updated private-ip to correctly filter private IP ranges and translate all submitted IP addresses at the byte level to catch efforts to slip encoded addresses past the filter.

Common Mode Failures and Software Supply Chain

Even though it is fixed, the private-ip flaw raises larger and deeply troubling questions about the security of software applications on which our homes, businesses and economy are increasingly dependent.

The greater reliance on open source components and the shift to agile development and modular applications has greatly increased society’s exposure to so-called “common cause” failures, in which the failure of a single element leads to a systemic failure. Security experts say the increasingly byzantine ecosystem of open source and proprietary software with scores or hundreds of poorly understood ‘dependencies’ is ripe for such disruptions.

Sites like npm are a critical part of that ecosystem – and part of the problem. Created in 2008, npm is a package manager for the JavaScript programming language that was acquired by GitHub in March. It acts as a public registry of packages of open source code that can be downloaded and incorporated into web and mobile applications as well as a wide range of hardware from broadband routers to robots. But vetting of the modules uploaded to npm and other platforms is often cursory. Scores of packages have been called out as malicious, and an even greater number are quietly dropped from the site every day after being discovered to be malicious in nature.

Less scrutinized are low-quality code and applications that may quickly be adopted and woven into scores, hundreds or thousands of other applications and components.

“The problem with (software) dependencies is once you identify a problem with a dependency, everything downstream is f**ked,” the developer known as Sick Codes told The Security Ledger. “It’s a house of cards.”

Patch Now

In the short term, organizations that know they are using private-ip version 1.0.5 or earlier as a means of preventing SSRF or related vulnerabilities should upgrade to the latest version immediately, Jackson said. Static application security testing tools can help identify whether private-ip is in use within your organization.
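For a quick first look before reaching for full software-composition-analysis tooling, a few lines of Python can report where private-ip appears in npm lockfiles. This is an illustrative sketch that assumes versions 1.0.5 and earlier are the affected ones, as described above.

    # Walk a project tree and report package-lock.json entries that pin private-ip.
    import json
    import os

    def find_private_ip(root="."):
        for dirpath, _, filenames in os.walk(root):
            if "package-lock.json" not in filenames:
                continue
            path = os.path.join(dirpath, "package-lock.json")
            try:
                lock = json.load(open(path))
            except (OSError, json.JSONDecodeError):
                continue
            entries = {**lock.get("dependencies", {}), **lock.get("packages", {})}
            for name, meta in entries.items():
                if name.endswith("private-ip"):
                    version = meta.get("version", "?")
                    print(f"{path}: private-ip {version} -- confirm it is patched")

    if __name__ == "__main__":
        find_private_ip(".")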

The bigger fix is for application developers to pay more attention to what they’re putting into their creations. “My recommendation is that when software engineers use packages in general or third party code, they need to evaluate what they’re using and where it’s coming from,” Jackson said.

Chinese electronics giant TCL has acknowledged security holes in some models of its smart television sets, but denies that it maintains a secret “back door” that gives it control over deployed TVs.

In an email statement to The Security Ledger dated November 16, Senior Vice President for TCL North America Chris Larson acknowledged that the company issued a security patch on October 30 for one of two serious security holes reported by independent researchers on October 27. That hole, assigned Common Vulnerabilities and Exposures (CVE) identifier CVE-2020-27403, allowed unauthenticated users to browse the contents of a TCL smart TV’s operating system from an adjacent network or even the Internet.

A patch for a second flaw, CVE-2020-28055, will be released in the coming days, TCL said. That flaw allows a local unprivileged attacker to read from and write to critical vendor resource directories within the TV’s Android file system, including the vendor upgrades folder.

The Security Ledger reported last week on the travails of the researchers who discovered the flaws, @sickcodes and @johnjhacking, who had difficulty contacting security experts within TCL and then found a patch silently applied without any warning from TCL.

A Learning Process for TCL

In an email statement to Security Ledger, Larson acknowledged that TCL, a global electronics giant with a market capitalization of $98 billion, “did not have a thorough and well-developed plan or strategy for reacting to issues” like those raised by the two researchers. “This was certainly a learning process for us,” he wrote.

At issue was both the security holes and the manner in which the company addressed them. In an interview with The Security Ledger, the researcher using the handle Sick Codes said that a TCL TV set he was monitoring was patched for the CVE-2020-27403 vulnerability without any notice from the company and no visible notification on the device itself.

By TCL’s account, the patch was distributed via an Android Package (APK) update on October 30. APK files are a method of installing (or “side loading”) applications and code on Android-based systems outside of sanctioned application marketplaces like the Google Play store. The company did not address in its public statements the question of whether prior notification of the update was given to customers or whether TV set owners were required to approve the update before it was installed.

Limited Impact in North America

However, the patch issued on October 30 is unlikely to have affected TCL customers in the U.S. and Canada, as none of the TCL models sold in North America contain the CVE-2020-27403 vulnerability, TCL said in its statement. However, some TCL TV models sold in the U.S. and Canada are impacted by CVE-2020-28055, the company warned. They are TCL models 32S330, 40S330, 43S434, 50S434, 55S434, 65S434, and 75S434.

The patched vulnerability was linked to a feature called “Magic Connect” and an Android APK by the name of T-Cast, which allows users to “stream user content from a mobile device.” T-Cast was never installed on televisions distributed in the USA or Canada, TCL said. For TCL smart TV sets outside of North America that did contain T-Cast, the APK was “updated to resolve this issue,” the company said. That application update may explain why the TCL TV set studied by the researchers suddenly stopped exhibiting the vulnerability.

No Back Doors, Just “Remote Maintenance”

While TCL denied having a back door into its smart TVs, the company did acknowledge the existence of remote “maintenance” features that could give its employees or others control over deployed television sets, including onboard cameras and microphones.

In particular, TCL acknowledges that an Android APK known as “Terminal Manager…supports remote diagnostics in select regions,” but not in North America. In regions where sets with the Terminal Manager APK are deployed, TCL is able to “operate most functions of the television remotely.” That appears to include cameras and microphones installed on the set.

However, TCL said that Terminal Manager can only be used if the user “requests such action during the diagnostic session.” The process must be “initiated by the user and a code provided to TCL customer service agents in order to have diagnostic access to the television,” according to the company’s FAQ.

Other clarifications from the vendor suggest that, while reports of secret back doors in smart TVs may be overwrought, there is plenty of reason to worry about the security of TCL smart TVs.

The TCL statement acknowledged, for example, that two publicly browsable directories on the TCL Android TVs identified by the researchers could have potentially opened the door for malicious actors. A remotely writeable “upgrade” directory, /data/vendor/upgrade, on TCL sets has “never been used” but is intended for over-the-air firmware upgrades. Firmware update files placed in the directory are loaded on the next TV reboot. Similarly, a directory /data/vendor/tcl has also “never been used,” but stores “advertising graphics” that also are loaded “as part of the boot up process,” TCL said.

Promises to work with Independent Researchers

The company said it has learned from its mistakes and that it is undertaking efforts to work more closely with third party and independent security researchers in the future.

“Going forward, we are putting processes in place to better react to discoveries by 3rd parties. These real-world experts are sometimes able to find vulnerabilities that are missed by testing. We are performing additional training for our customer service agents on escalation procedures on these issues as well as establishing a direct reporting system online,” the company said.

China Risk Rising

Vendor assurances aside, there is growing concern within the United States and other nations about the threat posed by hundreds of millions of consumer electronic devices manufactured in – or sourced from – China. The firm IntSights in August warned that China was using technological exports as “weaponized trojans in foreign countries.” The country is “exporting technology around the world that has hidden backdoors, superior surveillance capability, and covert data collection capabilities that surpass their intended purposes and are being used for widespread reconnaissance, espionage, and data theft,” the company warned, citing reports about gear from the telecommunications vendor Huawei and the social media site TikTok among others.

Western governments and non-governmental organizations have also raised alarms about the country’s blend of technology-enabled authoritarianism, including the use of data theft and data harvesting, coupled with artificial intelligence to identify individuals whose words or actions are counter to the ruling Communist Party.

Today marks two weeks since Election Day 2020 in the U.S., when tens of millions went to the polls on top of the tens of millions who had voted early or by mail in the weeks leading up to November 3.

The whole affair was expected to be a hot mess of suffrage, what with a closely divided public and access to the world’s most powerful office hinging on the outcome of voting in a few key districts sprinkled across a handful of states. Election attacks seemed a foregone conclusion.

Election Attack, Anyone?

Memories of the 2016 Presidential contest are still fresh in the minds of U.S. voters. During that contest, stealthy disinformation operations linked to Russia’s Internet Research Agency are believed to have swayed the vote in a few key states, helping to hand the election to GOP upstart Donald Trump by a few thousand votes spread across four states.

Adam Meyers is the Vice President of Threat Intelligence at the firm Crowdstrike.

In 2020, with social media networks like Facebook more powerful than ever and the geopolitical fortunes of global powers like China and Russia hanging in the balance, it was a foregone conclusion that this year’s U.S. election would see one or more cyber incidents grab headlines and – just maybe- play a part in the final outcome.  

Two weeks and more than 140 million votes later, wild conspiracy theories about vote tampering are rampant in right-wing media, but predictions of cyber attacks on the U.S. presidential election have fallen flat.

From Russia with…Indifference?

So what happened? Did Russia, China and Iran decide to sit this one out, or were planned attacks stopped in their tracks? And what about the expected plague of ransomware? Did budget- and talent-constrained local governments manage to do just enough right to keep cyber criminals and nation state actors at bay?

Allan Liska is a Threat Intelligence Analyst at the firm Recorded Future.

To find out we invited two experts who have been following election security closely into the Security Ledger studios to talk.

Allan Liska is a Threat Intelligence Analyst at the firm Recorded Future, which has been monitoring the cyber underground for threats to elections systems.

Joining Allan is a frequent Security Ledger podcast guest: Adam Meyers, the Senior Vice President of Threat Intelligence at the firm Crowdstrike. Crowdstrike investigated the 2016 attack on the Hillary Clinton presidential campaign and closely monitors a wide range of cyber criminal and nation state groups that have been linked to attacks on campaigns and elections infrastructure.

To start out I asked both guests – given the anticipation of hacks targeting the US election – what happened – or didn’t happen – in 2020. 


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.