In this episode of the podcast (#204) we’re joined by Josh Corman of CISA, the Cybersecurity and Infrastructure Security Agency, to talk about how that agency is working to secure the healthcare sector, in particular vaccine supply chains that have come under attack by nations like Russia, China and North Korea.
How is the U.S. government responding to this array of threats? In this episode of the podcast, we’re bringing you an exclusive interview with Josh Corman, the Chief Strategist for Healthcare and COVID on the COVID Task Force at CISA, the Cybersecurity and Infrastructure Security Agency.
In this interview, Josh and I talk about the scramble within CISA to secure a global vaccine supply chain in the midst of a global pandemic. Among other things, Josh talks about the work CISA has done in the last year to identify and shore up the cyber security of vital vaccine supply chain partners – from small biotech firms that produce discrete but vital components needed to produce vaccines to dry ice manufacturers whose product is needed to transport and store vaccines.
To start off, I asked Josh to talk about CISA’s unique role in securing vaccines and how the Federal Government’s newest agency works with other stakeholders, from the FBI to the FDA, to address widespread cyber threats.
In the past 20 years, bug hunting has transformed from a hobby (or maybe even a felony) to a full-time profession for tens of thousands of talented software engineers around the globe. Thanks to the growth in private and public bug bounty programs, men and women with the talent can earn a good living by sniffing out flaws in the code for applications and, increasingly, physical devices that power the 21st century global economy.
In this interview, Sick Codes and I talk about his path to becoming a vulnerability researcher, the paid and unpaid research he conducts looking for software flaws in common software and internet of things devices, some of the challenges and impediments that still exist in reporting vulnerabilities to corporations and what’s in the pipeline for 2021.
In this episode of the podcast (#200), sponsored by DigiCert: John Jackson, founder of the group Sakura Samurai, talks to us about his quest to make hacking groups cool again. Also: we talk with Avesta Hojjati of the firm DigiCert about the challenge of managing a growing population of digital certificates and how automation may be an answer.
Life for independent security researchers has changed a lot in the last 30 years. The modern information security industry grew out of pioneering work by groups like Boston-based L0pht Heavy Industries and the Cult of the Dead Cow, which began in Lubbock, Texas.
After operating for years in the shadows of the software industry and in legal limbo, by the turn of the millennium hackers were coming out of the shadows. And by the end of the first decade of the 21st century, they were free to pursue full-fledged careers as bug hunters, with some earning hundreds of thousands of dollars a year through bug bounty programs that have proliferated in the last decade.
Despite that, a stigma still hangs over “hacking” in the mind of the public, law enforcement and policy makers. And, despite the growth of bug bounty programs, red teaming and other “hacking for hire” activities, plenty of blurry lines still separate legal security research from illegal hacking.
Hacks Both Daring…and Legal
Still, the need for innovative and ethical security work in the public interest has never been greater. The SolarWinds hack exposed the ways in which even sophisticated firms like Microsoft and Google are vulnerable to attacks on the software supply chain. Consider also the tsunami of “smart,” Internet-connected devices like cameras, television sets and appliances that are working their way into homes and workplaces by the millions.
What does a 21st century hacking crew look like? Our first guest this week is trying to find out. John Jackson (@johnjhacking) is an independent security researcher and the co-founder of a new hacking group, Sakura Samurai, which includes a diverse array of security pros ranging from a 15-year-old Australian teen to Aubrey Cottle, aka @kirtaner, the founder of the group Anonymous. Their goal: to energize the world of ethical hacking with daring, attention-getting discoveries that stay on the right side of the double yellow line.
One of the lesser reported subplots of the recent SolarWinds hack is the use of stolen or compromised digital certificates to facilitate compromises of victim networks and accounts. Stolen certificates played a part in the recent hack of Mimecast, as well as in an attack on employees of a prominent think tank, according to reporting by Reuters and others.
How is it that compromised digital certificates are falling into the hands of nation state actors? One reason may be that companies are managing more digital certificates than ever, but using old systems and processes to do so. The result: it is becoming easier and easier for expired or compromised certificates to fly under the radar.
Our final guest this week, Avesta Hojjati, the Head of R&D at DigiCert, Inc., thinks we’ve only seen the beginning of this problem. As more and more connected “things” begin to populate our homes and workplaces, certificate management is going to become a critical task – one that few consumers are prepared to handle.
What’s the solution? Hojjati thinks more and better use of automation is a good place to start. In this conversation, Avesta and I talk about how digital transformation and the growth of the Internet of Things are raising the stakes for proper certificate management and why companies need to be thinking hard about how to scale their current certificate management processes to meet the challenges of the next decade.
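The kind of automation Hojjati describes can start with something as simple as continuously checking how long each certificate in an inventory has left. The sketch below is illustrative only (the hostname and the 30-day threshold in the usage note are assumptions, not anything DigiCert prescribes): it connects to a server over TLS and reports the days remaining before its leaf certificate expires.

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    # The ssl module reports expiry as e.g. 'Jun 1 12:00:00 2026 GMT'
    return datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(
        tzinfo=timezone.utc
    )

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Connect over TLS and report days until the server's leaf
    certificate expires. A monitoring sketch; a real inventory would
    also track issuer, key size and subject alternative names."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            not_after = tls.getpeercert()["notAfter"]
    delta = parse_not_after(not_after) - datetime.now(timezone.utc)
    return delta.total_seconds() / 86400
```

A scheduled job could run `days_until_expiry()` over every host in an inventory and alert when the result drops below, say, 30 days, which is the gap manual processes tend to miss as certificate counts grow.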
The fallout from the SolarWinds hacking incident linked to Russian threat actors has not only wreaked havoc on governmental agencies and private companies whose data are at risk following the incident, but this week, Bitsight and Kovrr released an analysis outlining the effect of the event on insurance losses that estimates the incident could cost more than $90 million when all is said and done.
The $90 million figure includes costs related to forensic analysis, incident response, potential regulatory fines and public relations. Although it has been reported that 18,000 customers of SolarWinds may have been affected by the incident, the analysis indicates that 40 specific firms were targeted, 80 percent of which are located in the U.S. It further notes that those firms were primarily federal agencies or in the information technology sector.
The analysis highlights the importance of assessing supply-chain cyber risk and how supply chain and vendor security incidents can cause direct losses that may not be easily recoverable from downstream companies. As part of the assessment, companies also may wish to determine whether insurance coverage would be available if they experience a vendor or supply chain incident like the SolarWinds example.
Independent security researchers testing the security of the United Nations were able to compromise public-facing servers and a cloud-based development account for the U.N. and lift data on more than 100,000 staff and employees, according to a report released Monday.
Specifically, the group was able to obtain access to database backups for private UNEP projects that exposed a wealth of information on staff and operations. That includes a document with more than 1,000 U.N. employee names and emails; more than 100,000 employee travel records including destination, length of stay and employee ID numbers; and more than 1,000 other U.N. employee records.
The researchers stopped their search once they were able to obtain personally identifying information. However, they speculated that more data was likely accessible.
Looking for Vulnerabilities
The researchers were scanning the U.N.’s network as part of the organization’s Vulnerability Disclosure Program. That program, started in 2016, has resulted in a number of vulnerabilities being reported to the U.N., many of them common cross-site scripting (XSS) and SQL injection flaws in the U.N.’s main website, un.org.
For their work, Sakura Samurai took a different approach, according to Jackson, in an interview with The Security Ledger. The group started by enumerating UN subdomains and scanning them for exposed assets and data. One of those, an ILO.org Apache web server, was misconfigured and exposing a file linked to a GitHub account. By downloading that file, the researchers were able to recover the credentials for a UN survey management panel, part of a little-used but public-facing survey feature on the UN site. While the survey tool didn’t expose a tremendous amount of data, the researchers continued scanning the site and eventually discovered a subdomain that exposed a file containing the credentials for a UN GitHub account. That account held 10 more private GitHub repositories encompassing databases and database credentials, backups and files containing personally identifying information.
Much more to be found
Jackson said that the breach is extensive, but that much more was likely exposed prior to his group’s discovery.
“Honestly, there’s way more to be found. We were looking for big fish to fry.” Among other things, a Sakura Samurai researcher discovered APIs for the Twilio cloud platform exposed – those also could have been abused to extract data and personally identifying information from UN systems, he said.
In an email response to The Security Ledger, Farhan Haq, a Deputy Spokesman for the U.N. Secretary-General said that the U.N.’s “technical staff in Nairobi … acknowledged the threat and … took ‘immediate steps’ to remedy the problem.”
“The flaw was remedied in less than a week, but whether or not someone accessed the database remains to be seen,” Haq said in the statement.
A disclosure notice from the U.N. on the matter is “still in the works,” Haq said. According to Jackson, data on EU residents was among the data exposed in the incident. Under the terms of the European Union’s General Data Protection Regulation (GDPR), the U.N. has 72 hours to notify regulators about the incident.
Nation State Exposure?
Unfortunately, Jackson said that there is no way of knowing whether his group was the first to discover the exposed data. It is very possible, he said, that they were not.
“It’s likely that nation state threat actors already have this,” he said, noting that data like travel records could pose physical risks, while U.N. employee email and ID numbers could be useful in tracking and impersonating employees online and offline.
Asked whether the U.N. had conducted an audit of the affected applications, Haq, the spokesperson for the U.N. Secretary General said that the agency was “still looking into the matter.”
A Spotty Record on Cybersecurity
This is not the first cybersecurity lapse at the U.N. In January 2020, the website The New Humanitarian reported that the U.N. discovered but did not disclose a major hack into its IT systems in Europe in 2019 that involved the compromise of UN domains and the theft of administrator credentials.
Between Black Friday and Cyber Monday, consumers across the U.S. spent the weekend snapping up deals on home electronics like smart TVs, game consoles and appliances. Total season-to-date holiday spending, including Cyber Monday, is over the $100 billion threshold, according to data from Adobe.
Lots of factors drive consumer decisions to buy one product over another: price and features chief among them. But what about cyber security? Unlike, say, the automobile marketplace, concerns about safety and security are not top of mind when consumers step into a Best Buy or Walmart looking for a new flat screen TV. And ratings systems for cyber security, from organizations like UL and Consumer Reports, are in their infancy and not widely used.
A serious security flaw in a commonly used, but overlooked open source security module may be undermining the integrity of hundreds of thousands or even millions of private and public applications, putting untold numbers of organizations and data at risk.
A team of independent security researchers that includes application security professionals at Shutterstock and Squarespace identified the flaw in private-ip, an npm module first published in 2016 that enables applications to block request forgery attacks by filtering out attempts to access private IPv4 addresses and other restricted IPv4 address ranges, as defined by ARIN.
The SSRF Blocker That Didn’t
The researchers identified a so-called Server-Side Request Forgery (SSRF) vulnerability in commonly used versions of private-ip. The flaw allows malicious attackers to carry out SSRF attacks against a population of applications that may number in the hundreds of thousands or millions globally. It is just the latest incident to raise questions about the security of the “software supply chain,” as more and more organizations shift from monolithic to modular software application development built on a foundation of free and open source code.
According to an account by researcher John Jackson of Shutterstock, flaws in the private-ip code meant that the filtering allegedly carried out by the code was faulty. Specifically, independent security researchers reported being able to bypass protections and carry out Server-Side Request Forgeries against top tier applications. Further investigation uncovered a common explanation for those successful attacks: private-ip, an open source security module used by the compromised applications.
SSRF attacks allow malicious actors to abuse functionality on a server: reading data from internal resources or modifying the code running on the server. Private-ip was created to help application developers spot and block such attacks. SSRF is one of the most common forms of attack on web applications according to OWASP.
The problem: private-ip didn’t do its job very well.
“The code logic was using a simple Regular Expression matching,” Jackson (@johnjhacking) told The Security Ledger. Jackson, working with other researchers, found that private-ip was blind to many variations of localhost and other private IP ranges, as well as to simple tricks that hackers use to obfuscate IP addresses in attacks. For example, the researchers found they could send successful requests for localhost resources by obscuring those addresses using hexadecimal equivalents of private IP addresses, or with simple substitutions like writing each octet with four zeros instead of one (so: 0000.0000.0000.0000 instead of 0.0.0.0). The result: a wide range of private and restricted IP addresses registered as public and slipped past private-ip.
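The class of bypass the researchers describe is easy to demonstrate. The sketch below is illustrative only, not the private-ip code itself: it contrasts a naive regex block list with a check that parses the address down to bytes first, using 2130706433 (the decimal form of 127.0.0.1) as the probe.

```python
import ipaddress
import re

# A naive block list in the spirit the researchers described:
# string matching against the dotted-quad spellings of private ranges.
NAIVE_PRIVATE = re.compile(
    r"^(127\.|10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.|0\.0\.0\.0$)"
)

def naive_is_private(addr: str) -> bool:
    return bool(NAIVE_PRIVATE.match(addr))

def robust_is_private(addr: str) -> bool:
    """Parse the address down to bytes before classifying it, so
    encoded forms (a bare integer, a hex string) can't slip past."""
    try:
        if "." in addr or ":" in addr:
            ip = ipaddress.ip_address(addr)
        else:
            # 2130706433 and 0x7f000001 are both 127.0.0.1 in disguise
            ip = ipaddress.ip_address(int(addr, 0))
    except ValueError:
        return True  # fail closed on anything unparseable
    return ip.is_private or ip.is_loopback or ip.is_reserved or ip.is_unspecified

print(naive_is_private("2130706433"))   # the regex never matches
print(robust_is_private("2130706433"))  # byte-level parse sees loopback
```

A request filtered by the naive check would sail through to 127.0.0.1, which is exactly the failure mode that let SSRF payloads past private-ip.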
Private-IP: small program, BIG footprint
The scope of the private-ip flaw is difficult to grasp. However, available data suggests the component is very widely used. Jackson said that hundreds of thousands, if not millions of applications likely incorporate private-ip code in some fashion. Many of those applications are not publicly addressable from the Internet, but may still be vulnerable to attack by an adversary with access to their local environment.
Private-ip is the creation of developer Damir Mustafin (aka “frenchbread”), a developer based in the Balkan country of Montenegro, according to his GitHub profile, which contains close to 60 projects of different scopes. Despite its popularity and widespread use, private-ip was not a frequent focus of Mr. Mustafin’s attention. After first being published in August 2016, the application had only been updated once, in April 2017, prior to the most recent update to address the SSRF flaw.
A Low Key, High Distribution App
The lack of steady attention didn’t dissuade other developers from downloading and using the npm private-ip package, however. It has an average of 14,000 downloads weekly, according to data from GitHub. And direct downloads of private-ip are just one measure of its use. Fully 355 publicly identified npm modules are dependents of private-ip v1.0.5, which contains the SSRF flaws. An additional 73 GitHub projects have dependencies on private-ip. All told, that accounts for 153,374 combined weekly downloads of private-ip and its dependents. One of the most widely used applications that relies on private-ip is libp2p, an open source network stack that is used in a wide range of decentralized peer-to-peer applications, according to Jackson.
While the flaw was discovered by so-called “white hat” vulnerability researchers, Jackson said that it is almost certain that malicious actors knew about and exploited it, either directly or inadvertently. Other security researchers have almost certainly stumbled upon it before as well, perhaps discovering a single address that slipped through private-ip and enabled an SSRF attack, while failing to grasp private-ip’s role or the bigger flaws in the module.
In fact, private-ip may be the common source of a long list of SSRF vulnerabilities that have been independently discovered and reported in the last five years, Jackson said. “This may be why a lot of enterprises have struggled with SSRF and block list bypasses,” he said.
After identifying the problem, Jackson and his team contacted the developer, Damir Mustafin (aka “frenchbread”), looking for a fix. However, it quickly became clear that they would need to enlist additional development talent to forge a patch that was comprehensive. Jackson tapped two developers: Nick Sahler of the website hosting provider Squarespace and the independent developer known as Sick Codes (@sickcodes). The two implemented the netmask utility and updated private-ip to correctly filter private IP ranges, translating all submitted IP addresses at the byte level to catch efforts to slip encoded addresses past the filter.
Common Mode Failures and Software Supply Chain
Even though it is fixed, the private-ip flaw raises larger and deeply troubling questions about the security of software applications on which our homes, businesses and economy are increasingly dependent.
The greater reliance on open source components and the shift to agile development and modular applications has greatly increased society’s exposure to so-called “common cause” failures, in which the failure of a single element leads to a systemic failure. Security experts say the increasingly byzantine ecosystem of open source and proprietary software with scores or hundreds of poorly understood “dependencies” is ripe for such disruptions.
Less scrutinized is low-quality code that may quickly be adopted and woven into hundreds or thousands of other applications and components.
“The problem with (software) dependencies is once you identify a problem with a dependency, everything downstream is f**ked,” the developer known as Sick Codes told The Security Ledger. “It’s a house of cards.”
In the short term, organizations that know they are using private-ip version 1.0.5 or earlier as a means of preventing SSRF or related vulnerabilities should upgrade to the latest version immediately, Jackson said. Static application security testing tools can help identify whether private-ip is in use within your organization.
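One lightweight way to act on that advice is to scan npm lockfiles directly for vulnerable pins. The sketch below is a simplification (it assumes the npm v2+ lockfile layout, which lists every resolved package under a top-level "packages" key, and uses naive version comparison rather than proper semver):

```python
import json

def vulnerable_private_ip_pins(lockfile_text: str) -> list[str]:
    """Scan the text of an npm v2+ package-lock.json for private-ip
    pinned at version 1.0.5 or earlier (the last release before the
    SSRF fix). A triage sketch; a real audit would use a semver
    library and handle older lockfile formats too."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        if path.endswith("node_modules/private-ip"):
            version = meta.get("version", "0.0.0")
            if tuple(int(p) for p in version.split(".")[:3]) <= (1, 0, 5):
                hits.append(f"{path}@{version}")
    return hits
```

Running such a scan across an organization’s repositories would surface both direct pins and the transitive copies that dependency trees tend to hide.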
The bigger fix is for application developers to pay more attention to what they’re putting into their creations. “My recommendation is that when software engineers use packages in general or third party code, they need to evaluate what they’re using and where it’s coming from,” Jackson said.
Millions of Android smart television sets from the Chinese vendor TCL Technology Group Corporation contained gaping software security holes that researchers say could have allowed remote attackers to take control of the devices, steal data or even control cameras and microphones to surveil the set’s owners.
The security holes appear to have been patched by the manufacturer in early November. However, the manner in which the holes were closed is raising further alarm among the researchers about whether the China-based firm is able to access and control deployed television sets without the owner’s knowledge or permission.
Two Flaws, Lots of Red Flags
In a report published on Monday, two security researchers described two serious software security holes affecting TCL brand television sets. First, a vulnerability in the software that runs TCL Android Smart TVs allowed an attacker on the adjacent network to browse and download sensitive files over an insecure web server running on port 7989.
That flaw, CVE-2020-27403, would allow an unprivileged remote attacker on the adjacent network to download most system files from the TV set, including images, personal data and security tokens for connected applications. The flaw could lead to critical information disclosure, the researchers warned.
Second, the researchers found a vulnerability in the TCL software that allowed a local unprivileged attacker to read from- and write to critical vendor resource directories within the TV’s Android file system, including the vendor upgrades folder. That flaw was assigned the identifier CVE-2020-28055.
The researchers, John Jackson, an application security engineer for Shutterstock, and the independent researcher known by the handle “Sick Codes,” said the flaws amount to a “back door” on any TCL Android smart television. “Anybody on an adjacent network can browse the TV’s file system and download any file they want,” said Sick Codes in an interview via the Signal platform. That would include everything from image files to small databases associated with installed applications, location data or security tokens for smart TV apps like Gmail. If the TCL TV set was exposed to the public Internet, anyone on the Internet could connect to it remotely, he said, noting that he had located a handful of such TCL Android smart TVs using the Shodan search engine.
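For readers who want to check their own network for the kind of exposure Sick Codes describes, a plain TCP connect test against port 7989 is enough to see whether a device answers. A minimal sketch in Python (the port number comes from the researchers’ report; use it only against devices you own):

```python
import socket

def answers_on_port(host: str, port: int = 7989, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on the given
    port. 7989 is the port the researchers found serving the TV's
    file system; a defensive check for auditing your own network."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A set that answers on that port and has not been updated would warrant isolating it from the Internet until it is patched.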
CVE-2020-28055 was particularly worrisome, Jackson said. “It was clear that utilizing this vulnerability could result in remote code execution or even network ‘pivots’ by attackers.” That would allow malicious actors to move from the TV to other network connected systems with the intention of exploiting systems quickly with ransomware, Jackson observed. That, coupled with a global population of millions of TCL Android TVs, made the risk considerable.
Nobody Home at TCL
The researchers said efforts to alert TCL about the flaws in October initially fell on deaf ears. Emails sent to a designated email address for reporting security issues bounced. And inquiries to the company on October 16 and 20th went unanswered. Furthermore, the company did not appear to have a dedicated product security team to reach out to, Jackson said in a phone interview.
Only after reaching out to a security contact at TCL partner Roku did Sick Codes and Jackson hear from a security resource within TCL. In an email dated October 29th, Eric Liang of TCL wrote to the two researchers thanking them for their discovery and promising a quick fix.
“Here is how is it going on now: A new version to fix this vulnerability is going to release to SQA on Oct. 29 (UTC+8). We will arrange the upgrade plan after the regression test passes.”
Silent Patch Raises More Questions
Following that, however, there was no further communication. And, when that fix came, it raised more questions than it answered, the researchers said.
According to the researchers, TCL patched the vulnerabilities they had identified silently and without any warning. “They updated the (TCL Android) TV I was testing without any Android update notification or warning,” Sick Codes said. Even the reported firmware version on the TV remained unchanged following the patch. “This was a totally silent patch – they basically logged in to my TV and closed the port.”
Sick Codes said that suggests that TCL maintains full, remote access to deployed sets. “This is a full on back door. If they want to they could switch the TV on or off, turn the camera and mic on or off. They have full access.”
Jackson agreed and said that the manner in which the vulnerable TVs were updated raises more questions than it answers. “How do you push that many gigabytes (of data) that fast with no alert? No user notification? No advisory? Nothing. I don’t know of a company with good security practices that doesn’t tell users that it is going to patch.”
There was no response to emails sent by Security Ledger to Mr. Liang and to TCL media relations prior to publication. We will update this story with any comment or response from the company when we receive it.
Questions on Smart Device Security
The vulnerabilities raise serious questions about the cyber security of consumer electronics that are being widely distributed to the public. TCL, a mainland Chinese firm, is among those that have raised concerns within the U.S. Intelligence community and among law enforcement and lawmakers, alongside firms like Huawei, which has been labeled a national security threat, ZTE and Lenovo. TCL smart TVs are barred from use in Federal government facilities. A 2019 U.S. Department of Defense Inspector General’s report raised warnings about the cyber security risks to the Pentagon of commercial off the shelf (COTS) technology purchased by the U.S. military including televisions, laptops, surveillance cameras, drones and more. (PDF)
TCL has risen quickly in the past five years to become a leading purveyor of smart television sets in the U.S. with a 14% market share, second behind Samsung. The company has been aggressive in both partnerships and branding: teaming with firms like Alcatel Mobile and Thomson SA to produce mobile phones and other electronics, and sponsoring sports teams and events ranging from the Rose Bowl in Pasadena, California, to The Ellen Show to the 2019 Copa América Brasil soccer tournament.
TCL’s TV sets are widely available in the US via online e-tailers like Amazon and brick and mortar “box stores” like Best Buy. It is unclear whether those retailers weigh software security and privacy protections of products before opting to put them on their store shelves. An email to Best Buy seeking comment on the TCL vulnerabilities was not returned.
The security researchers who discovered the flaw said that consumers should beware when buying smart home electronics like TV sets, home surveillance cameras, especially those manufactured by companies with ties to authoritarian regimes.
“Don’t buy it just because a TV’s cheap. Know what you’re buying,” said Sick Codes. “That’s especially true if it’s hooked up to the Internet.”