A serious flaw in Zoom’s Keybase secure chat application left copies of images contained in secure communications on Keybase users’ computers after they were supposedly deleted.

The flaw in the encrypted messaging application (CVE-2021-23827) does not expose Keybase users to remote compromise. However, it could put their security, privacy and safety at risk, especially for users living under authoritarian regimes in which apps like Keybase and Signal are increasingly relied on as a way to conduct conversations out of earshot of law enforcement or security services.

The flaw was discovered by researchers from the group Sakura Samurai as part of a bug bounty program offered by Zoom, which acquired Keybase in May, 2020. Zoom said it has fixed the flaw in the latest versions of its software for Windows, macOS and Linux.

Deleted…but not gone

According to researcher John Jackson of Sakura Samurai, the Keybase flaw manifested itself in two ways. First: Jackson discovered that images that were copy and pasted into Keybase chats were not reliably deleted from a temporary folder, /uploadtemps, associated with the client application. “In general, when you would copy and paste in a Keybase chat, the folder would appear in (the uploadtemps) folder and then immediately get deleted,” Jackson told Security Ledger in a phone interview. “But occasionally that wouldn’t happen. Clearly there was some kind of software error – a collision of sorts – where the images were not getting cleared.”

Discovering that flaw put Sakura Samurai researchers on the hunt for more and they soon struck pay dirt again. Sakura Samurai members Aubrey Cottle (@kirtaner), Robert Willis (@rej_ex) and Jackson Henry (@JacksonHHax) discovered an unencrypted directory, /Cache, associated with the Keybase client that contained a comprehensive record of images from encrypted chat sessions. The application used a custom extension to name the files, but they were easily viewable directly or simply by changing the custom file extension to the PNG image format, Jackson said.
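
For readers who want to check their own machines, the general idea can be sketched in a few lines of code: scan a local cache directory and flag any files that begin with the PNG magic bytes, whatever their extension. The path below is an assumption for illustration; the actual cache location varies by platform and Keybase version.

```typescript
// Minimal sketch, assuming a local Keybase cache directory whose path may
// differ by platform and client version. It flags files that start with the
// PNG magic bytes even if they carry a custom extension.
import { readdirSync, statSync, openSync, readSync, closeSync } from "node:fs";
import { join } from "node:path";

const PNG_MAGIC = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // "\x89PNG"
const cacheDir = join(process.env.HOME ?? ".", ".config", "keybase", "Cache"); // assumed path

for (const name of readdirSync(cacheDir)) {
  const path = join(cacheDir, name);
  if (!statSync(path).isFile()) continue;
  const fd = openSync(path, "r");
  const header = Buffer.alloc(4);
  readSync(fd, header, 0, 4, 0);
  closeSync(fd);
  if (header.equals(PNG_MAGIC)) {
    console.log(`Possible residual chat image: ${name}`);
  }
}
```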

In a statement, a Zoom spokesman said that the company appreciates the work of the researchers and takes privacy and security “very seriously.”

“We addressed the issue identified by the Sakura Samurai researchers on our Keybase platform in version 5.6.0 for Windows and macOS and version 5.6.1 for Linux. Users can help keep themselves secure by applying current updates or downloading the latest Keybase software with all current security updates,” the spokesman said.

In most cases, the failure to remove files from cache after they were deleted would count as a “low priority” security flaw. However, in the context of an end-to-end encrypted communications application like Keybase, the failure takes on added weight, Jackson wrote.

“An attacker that gains access to a victim machine can potentially obtain sensitive data through gathered photos, especially if the user utilizes Keybase frequently. A user, believing that they are sending photos that can be cleared later, may not realize that sent photos are not cleared from the cache and may send photos of PII or other sensitive data to friends or colleagues.”

Messaging app flaws take on new importance

The flaw takes on even more weight given the recent flight of millions of Internet users to end-to-end encrypted messaging applications like Keybase, Signal and Telegram. Those users were responding to onerous data sharing policies, such as those recently introduced on Facebook’s WhatsApp chat. In countries with oppressive, authoritarian governments, end-to-end encrypted messaging apps are a lifeline for political dissidents and human rights advocates.

As a result of the flaw, however, adversaries who gained access to the laptop or desktop on which the Keybase application was installed could view any images contained in Keybase encrypted chats. The implications of that are clear enough. For example, recent reports say that North Korean state hackers have targeted security researchers via phishing attacks sent via Keybase, Signal and other encrypted applications.

The flaws in Keybase do not affect the Zoom application, Jackson said. Zoom acquired Keybase in May to strengthen the company’s video platform with end-to-end encryption. That acquisition followed reports about security flaws in the Zoom client, including in its in-meeting chat feature.

Jackson said that the Sakura Samurai researchers received a $1,000 bounty from Zoom for their research. He credited the company with being “very responsive” to the group’s vulnerability report.

The increased use of encrypted messaging applications has attracted the attention of security researchers as well. Last week, for example, a researcher disclosed 13 vulnerabilities in the Telegram secure messaging application that could have allowed a remote attacker to compromise any Telegram user. Those issues were patched in Telegram updates released in September and October 2020.

In the past 20 years, bug hunting has transformed from a hobby (or maybe even a felony) to a full-time profession for tens of thousands of talented software engineers around the globe. Thanks to the growth in private and public bug bounty programs, men and women with the talent can earn a good living by sniffing out flaws in the code for applications and, increasingly, physical devices that power the 21st century global economy.

Bug Hunting Smart TVs To Supply Chain

What does that work look like, and what platforms and technologies are drawing the attention of cutting-edge vulnerability researchers? To find out, we sat down with the independent researcher known as Sick Codes (@sickcodes). In recent months, he has gotten attention for a string of important discoveries. Among other things, he discovered flaws in Android smart television sets manufactured by the Chinese firm TCL and was part of the team, along with last week’s guest John Jackson, that worked to fix a serious server side request forgery flaw in a popular open source security module, NPM Private IP.

In this interview, Sick Codes and I talk about his path to becoming a vulnerability researcher, the paid and unpaid research he conducts looking for software flaws in common software and internet of things devices, some of the challenges and impediments that still exist in reporting vulnerabilities to corporations and what’s in the pipeline for 2021. 


As always, you can check our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

In this episode of the podcast (#200), sponsored by Digicert: John Jackson, founder of the group Sakura Samurai, talks to us about his quest to make hacking groups cool again. Also: we talk with Avesta Hojjati of the firm Digicert about the challenge of managing a growing population of digital certificates and how automation may be an answer.


Life for independent security researchers has changed a lot in the last 30 years. The modern information security industry grew out of pioneering work by groups like Boston-based L0pht Heavy Industries and the Cult of the Dead Cow, which began in Lubbock, Texas.

After operating for years in legal limbo at the margins of the software industry, hackers were coming out of the shadows by the turn of the millennium. And by the end of the first decade of the 21st century, they were free to pursue full-fledged careers as bug hunters, with some earning hundreds of thousands of dollars a year through the bug bounty programs that have proliferated in the last decade.

Despite that, a stigma still hangs over “hacking” in the mind of the public, law enforcement and policy makers. And, despite the growth of bug bounty programs, red teaming and other “hacking for hire” activities, plenty of blurry lines still separate legal security research from illegal hacking. 

Hacks Both Daring…and Legal

Still, the need for innovative and ethical security work in the public interest has never been greater. The Solar Winds hack exposed the ways in which even sophisticated firms like Microsoft and Google are vulnerable to software supply chain attacks. Consider also the tsunami of “smart,” Internet-connected devices like cameras, television sets and appliances working their way into homes and workplaces by the millions.

John Jackson is the co-founder of Sakura Samurai, an independent security research group.

What does a 21st century hacking crew look like? Our first guest this week is trying to find out. John Jackson (@johnjhacking) is an independent security researcher and the co-founder of a new hacking group, Sakura Samurai, which includes a diverse array of security pros ranging from a 15-year-old Australian teen to Aubrey Cottle, aka @kirtaner, the founder of the group Anonymous. Their goal: to energize the world of ethical hacking with daring and attention-getting discoveries that stay on the right side of the double yellow line.

In this interview, John and I talk about his recent research including vulnerabilities he helped discover in smart television sets by the Chinese firm TCL, the open source security module Private IP and the United Nations. 

Can PKI Automation Head Off Chaos?

One of the lesser-reported subplots in the recent Solar Winds hack is the use of stolen or compromised digital certificates to facilitate compromises of victim networks and accounts. Stolen certificates played a part in the recent hack of Mimecast, as well as in an attack on employees of a prominent think tank, according to reporting by Reuters and others.

Avesta Hojjati is the head of Research & Development at Digicert.

How is it that compromised digital certificates are falling into the hands of nation state actors? One reason may be that companies are managing more digital certificates than ever, but using old systems and processes to do so. The result: it is becoming easier and easier for expired or compromised certificates to fly under the radar. 

Our final guest this week, Avesta Hojjati, the Head of R&D at DigiCert, Inc., thinks we’ve only seen the beginning of this problem. As more and more connected “things” begin to populate our homes and workplaces, certificate management is going to become a critical task – one that few consumers are prepared to handle.

What’s the solution? Hojjati thinks more and better use of automation is a good place to start. In this conversation, Avesta and I talk about how digital transformation and the growth of the Internet of Things are raising the stakes for proper certificate management and why companies need to be thinking hard about how to scale their current certificate management processes to meet the challenges of the next decade. 
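
As a concrete, if simplified, illustration of the kind of automation Hojjati describes, the sketch below checks how many days remain on the TLS certificates for a list of hosts. The host names are placeholders; a real deployment would pull its inventory from a certificate management system.

```typescript
// Minimal sketch: connect to each host over TLS and report when its
// certificate expires. Host names are placeholders, not a real inventory.
import * as tls from "node:tls";

const hosts = ["example.com", "internal.example.net"]; // hypothetical inventory

function checkCert(host: string): void {
  const socket = tls.connect({ host, port: 443, servername: host }, () => {
    const cert = socket.getPeerCertificate();
    const daysLeft =
      (new Date(cert.valid_to).getTime() - Date.now()) / (1000 * 60 * 60 * 24);
    console.log(`${host}: expires ${cert.valid_to} (${daysLeft.toFixed(0)} days left)`);
    socket.end();
  });
  socket.on("error", (err) => console.error(`${host}: ${err.message}`));
}

hosts.forEach(checkCert);
```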


(*) Disclosure: This podcast was sponsored by Digicert. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

As always, you can check our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

Independent security researchers testing the security of the United Nations were able to compromise public-facing servers and a cloud-based development account for the U.N. and lift data on more than 100,000 staff and employees, according to a report released Monday.

Researchers affiliated with Sakura Samurai, a newly formed collective of independent security experts, exploited an exposed Github repository belonging to the International Labour Organization and the U.N.’s Environment Programme (UNEP) to obtain “multiple sets of database and application credentials” for UNEP applications, according to a blog post by one of the Sakura Samurai researchers, John Jackson, explaining the group’s work.

Specifically, the group was able to obtain access to database backups for private UNEP projects that exposed a wealth of information on staff and operations. That included a document containing more than 1,000 U.N. employee names and email addresses; more than 100,000 employee travel records, including destinations, lengths of stay and employee ID numbers; more than 1,000 U.N. employee records; and more.

The researchers stopped their search once they were able to obtain personally identifying information. However, they speculated that more data was likely accessible.

Looking for Vulnerabilities

The researchers were scanning the U.N.’s network as part of the organization’s Vulnerability Disclosure Program. That program, started in 2016, has resulted in a number of vulnerabilities being reported to the U.N., many of them common cross-site scripting (XSS) and SQL injection flaws in the U.N.’s main website, un.org.

For their work, Sakura Samurai took a different approach, Jackson said in an interview with The Security Ledger. The group started by enumerating U.N. subdomains and scanning them for exposed assets and data. One of those, an ILO.org Apache web server, was misconfigured and exposing files linked to a GitHub account. By downloading those exposed files, the researchers were able to recover the credentials for a U.N. survey management panel, part of a little-used but public-facing survey feature on the U.N. site. While the survey tool didn’t expose a tremendous amount of data, the researchers continued scanning the site and eventually discovered a subdomain that exposed a file containing the credentials for a U.N. GitHub account with 10 more private GitHub repositories encompassing databases and database credentials, backups and files containing personally identifying information.
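
The first step in that process, subdomain enumeration, can be as simple as resolving a wordlist of candidate host names, as in the rough sketch below. The domain and wordlist here are placeholders, and scanning of this kind should only be done with authorization, as it was here under the U.N.’s disclosure program.

```typescript
// Rough sketch of subdomain enumeration: resolve candidate host names and
// keep the ones that answer. Domain and wordlist are illustrative only.
import { resolve4 } from "node:dns/promises";

const domain = "example.org"; // placeholder, not the actual target
const wordlist = ["www", "mail", "dev", "staging", "survey", "git"];

async function enumerate(): Promise<void> {
  for (const sub of wordlist) {
    const fqdn = `${sub}.${domain}`;
    try {
      const addresses = await resolve4(fqdn);
      console.log(`${fqdn} -> ${addresses.join(", ")}`);
    } catch {
      // NXDOMAIN or timeout: candidate does not resolve, skip it
    }
  }
}

enumerate();
```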

Much more to be found

Jackson said that the breach is extensive, but that much more was likely exposed prior to his group’s discovery.

“Honestly, there’s way more to be found. We were looking for big fish to fry.” Among other things, a Sakura Samurai researcher discovered APIs for the Twilio cloud platform exposed – those also could have been abused to extract data and personally identifying information from UN systems, he said.

In an email response to The Security Ledger, Farhan Haq, a Deputy Spokesman for the U.N. Secretary-General said that the U.N.’s “technical staff in Nairobi … acknowledged the threat and … took ‘immediate steps’ to remedy the problem.”

“The flaw was remedied in less than a week, but whether or not someone accessed the database remains to be seen,” Haq said in the statement.

A disclosure notice from the U.N. on the matter is “still in the works,” Haq said. According to Jackson, data on EU residents was among the data exposed in the incident. Under the terms of the European Union’s General Data Protection Regulation (GDPR), the U.N. has 72 hours to notify regulators about the incident.

Nation State Exposure?

Unfortunately, Jackson said that there is no way of knowing whether his group was the first to discover the exposed data. It is very possible, he said, that they were not.

“It’s likely that nation state threat actors already have this,” he said, noting that data like travel records could pose physical risks, while U.N. employee email and ID numbers could be useful in tracking and impersonating employees online and offline.

Another danger is that malicious actors with access to the source code of U.N. applications could plant back doors or otherwise manipulate the functioning of those applications to suit their needs. The recent compromise of software updates from the firm Solar Winds has been traced to attacks on hundreds of government agencies and private sector firms. That incident has been tied to hacking groups associated with the government of Russia.

Asked whether the U.N. had conducted an audit of the affected applications, Haq, the spokesperson for the U.N. Secretary General said that the agency was “still looking into the matter.”

A Spotty Record on Cybersecurity

This is not the first cybersecurity lapse at the U.N. In January 2020, the website The New Humanitarian reported that the U.N. discovered but did not disclose a major hack into its IT systems in Europe in 2019 that involved the compromise of U.N. domains and the theft of administrator credentials.

A serious security flaw in a commonly used, but overlooked open source security module may be undermining the integrity of hundreds of thousands or even millions of private and public applications, putting untold numbers of organizations and data at risk.

A team of independent security researchers that includes application security professionals at Shutterstock and Squarespace identified the flaw in private-ip, an npm module first published in 2016 that enables applications to block request forgery attacks by filtering out attempts to access private IPv4 addresses and other restricted IPv4 address ranges, as defined by ARIN.

The SSRF Blocker That Didn’t

The researchers identified a so-called Server Side Request Forgery (SSRF) vulnerability in commonly used versions of private-ip. The flaw allows malicious attackers to carry out SSRF attacks against a population of applications that may number in the hundreds of thousands or millions globally. It is just the latest incident to raise questions about the security of the “software supply chain,” as more and more organizations shift from monolithic to modular software application development built on a foundation of free and open source code.

According to an account by researcher John Jackson of Shutterstock, flaws in private-ip meant that the filtering the module was supposed to perform was faulty. Specifically, independent security researchers reported being able to bypass protections and carry out Server-Side Request Forgeries against top-tier applications. Further investigation uncovered a common explanation for those successful attacks: private-ip, an open source security module used by the compromised applications.

SSRF attacks allow malicious actors to abuse functionality on a server: reading data from internal resources or modifying the code running on the server. Private-ip was created to help application developers spot and block such attacks. SSRF is one of the most common forms of attack on web applications according to OWASP.
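
To make the module’s role concrete, a typical use looks something like the sketch below: a URL fetcher resolves a user-supplied host name and refuses to request anything that resolves to a private address. The default-export call signature shown for private-ip is an assumption based on common usage, not a quote from the package’s documentation.

```typescript
// Sketch of where a filter like private-ip sits in an application.
// NOTE: the import and call signature for private-ip are assumptions for
// illustration; consult the package documentation for the exact API.
import isPrivateIp from "private-ip";
import { lookup } from "node:dns/promises";

async function fetchUserSuppliedUrl(rawUrl: string): Promise<string> {
  const { hostname } = new URL(rawUrl);
  const { address } = await lookup(hostname); // resolve before filtering
  if (isPrivateIp(address)) {
    throw new Error("Blocked: target resolves to a private or reserved address");
  }
  const res = await fetch(rawUrl);
  return res.text();
}
```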

The problem: private-ip didn’t do its job very well.

“The code logic was using a simple Regular Expression matching,” Jackson (@johnjhacking) told The Security Ledger. Jackson, working with other researchers, found that private-ip was blind to a large number of variations of localhost and other private IP ranges, as well as to simple tricks that hackers use to obfuscate IP addresses in attacks. For example, researchers found they could send successful requests for localhost resources by obscuring those addresses using hexadecimal equivalents of private IP addresses or with simple substitutions, like using four zeros for each octet of the IP address instead of one (so: 0000.0000.0000.0000 instead of 0.0.0.0). The result: a wide range of private and restricted IP addresses registered as public IP addresses and slipped past private-ip.
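
The class of bug is easy to reproduce in miniature. The sketch below is not the private-ip source; it simply contrasts a naive, string-matching filter, which the encoded addresses slip past, with a check that normalizes each octet to a number first.

```typescript
// Illustrative only – not the actual private-ip code. A regex that matches
// dotted-decimal strings misses hex and zero-padded forms of the same address;
// normalizing the octets to numbers first catches them.
const NAIVE_PRIVATE = /^(0\.0\.0\.0|127\.|10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/;

function naiveIsPrivate(ip: string): boolean {
  return NAIVE_PRIVATE.test(ip);
}

function canonicalize(ip: string): number[] | null {
  const parts = ip.split(".");
  if (parts.length !== 4) return null;
  const octets = parts.map(Number); // Number() accepts "0x7f", "0000", etc.
  return octets.every((o) => Number.isInteger(o) && o >= 0 && o <= 255) ? octets : null;
}

function normalizedIsPrivate(ip: string): boolean {
  const o = canonicalize(ip);
  if (o === null) return true; // fail closed on anything unparseable
  const [a, b] = o;
  return a === 0 || a === 10 || a === 127 ||
    (a === 192 && b === 168) || (a === 172 && b >= 16 && b <= 31) || (a === 169 && b === 254);
}

console.log(naiveIsPrivate("0x7f.0x0.0x0.0x1"));         // false – slips past the regex
console.log(normalizedIsPrivate("0x7f.0x0.0x0.0x1"));    // true – caught after normalization
console.log(normalizedIsPrivate("0000.0000.0000.0000")); // true
```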

Private-IP: small program, BIG footprint

The scope of the private-ip flaw is difficult to grasp. However, available data suggests the component is very widely used. Jackson said that hundreds of thousands, if not millions, of applications likely incorporate private-ip code in some fashion. Many of those applications are not publicly addressable from the Internet, but may still be vulnerable to attack by an adversary with access to their local environment.

Private-ip is the creation of Damir Mustafin (aka “frenchbread”), a developer based in the Balkan country of Montenegro, according to his GitHub profile, which contains close to 60 projects of different scopes. Despite its popularity and widespread use, private-ip was not a frequent focus of Mr. Mustafin’s attention. After first being published in August 2016, the module had been updated only once, in April 2017, prior to the most recent update to address the SSRF flaw.

A Low Key, High Distribution App

The lack of steady attention didn’t dissuade other developers from downloading and using the npm private-ip package, however. It has an average of 14,000 downloads weekly, according to data from GitHub. And direct downloads of private-ip are just one measure of its use. Fully 355 publicly identified npm modules are dependents of private-ip v1.0.5, which contains the SSRF flaws. An additional 73 GitHub projects have dependencies on private-ip. All told, that accounts for 153,374 combined weekly downloads of private-ip and its dependents. One of the most widely used applications that relies on private-ip is libp2p, an open source network stack that is used in a wide range of decentralized peer-to-peer applications, according to Jackson.

While the flaw was discovered by so-called “white hat” vulnerability researchers, Jackson said that it is almost certain that malicious actors knew about it and exploited it, either directly or inadvertently. Other security researchers have almost certainly stumbled upon it before as well, perhaps discovering a single address that slipped through private-ip and enabled an SSRF attack, while failing to grasp private-ip’s role or the bigger flaws in the module.

In fact, private-ip may be the common source of a long list of SSRF vulnerabilities that have been independently discovered and reported in the last five years, Jackson said. “This may be why a lot of enterprises have struggled with SSRF and block list bypasses,” he said.

After identifying the problem, Jackson and his team contacted the developer, Damir Mustafin (aka “frenchbread”), looking for a fix. However, it quickly became clear that they would need to enlist additional development talent to forge a patch that was comprehensive. Jackson tapped two developers: Nick Sahler of the website hosting provider Squarespace and the independent developer known as Sick Codes (@sickcodes) to come up with a comprehensive fix for private-ip. The two implemented the netmask utility and updated private-ip to correctly filter private IP ranges, translating all submitted IP addresses at the byte level to catch efforts to slip encoded addresses past the filter.
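
The general shape of that remediation can be sketched as follows. This is a simplified illustration of the approach described above (normalize the octets, then test against reserved CIDR blocks with the npm netmask package), not the patched private-ip source itself.

```typescript
// Simplified illustration of the remediation approach: convert the submitted
// address to canonical dotted-decimal form, then test it against reserved
// CIDR blocks using the npm "netmask" package.
import { Netmask } from "netmask";

const RESERVED_BLOCKS = [
  "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
  "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16",
].map((cidr) => new Netmask(cidr));

// Normalize hex or zero-padded octets (e.g. "0x7f.0.0.1") to dotted decimal.
function toDottedDecimal(ip: string): string | null {
  const parts = ip.split(".");
  if (parts.length !== 4) return null;
  const octets = parts.map(Number);
  if (!octets.every((o) => Number.isInteger(o) && o >= 0 && o <= 255)) return null;
  return octets.join(".");
}

export function isPrivate(ip: string): boolean {
  const canonical = toDottedDecimal(ip);
  if (canonical === null) return true; // fail closed
  return RESERVED_BLOCKS.some((block) => block.contains(canonical));
}

console.log(isPrivate("0x7f.0x0.0x0.0x1")); // true
console.log(isPrivate("8.8.8.8"));          // false
```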

Common Mode Failures and Software Supply Chain

Even though it is fixed, the private-ip flaw raises larger and deeply troubling questions about the security of software applications on which our homes, businesses and economy are increasingly dependent.

The greater reliance on open source components and the shift to agile development and modular applications has greatly increased society’s exposure to so-called “common cause” failures, in which the failure of a single element leads to a systemic failure. Security experts say the increasingly byzantine ecosystem of open source and proprietary software, with scores or hundreds of poorly understood ‘dependencies,’ is ripe for such disruptions.

Sites like npm are a critical part of that ecosystem, and part of the problem. Created in 2008, npm is a package manager for the JavaScript programming language that was acquired by GitHub in March 2020. It acts as a public registry of packages of open source code that can be downloaded and incorporated into web and mobile applications, as well as a wide range of hardware from broadband routers to robots. But vetting of the modules uploaded to npm and other platforms is often cursory. Scores have been called out as malicious, and an even greater number are quietly dropped from the site every day after being discovered to be malicious.

Less scrutinized are low-quality code and applications that may quickly be adopted and woven into scores, hundreds or thousands of other applications and components.

“The problem with (software) dependencies is once you identify a problem with a dependency, everything downstream is f**ked,” the developer known as Sick Codes told The Security Ledger. “It’s a house of cards.”

Patch Now

In the short term, organizations that know they are using private-ip version 1.0.5 or earlier as a means of preventing SSRF or related vulnerabilities should upgrade to the latest version immediately, Jackson said. Static application security testing tools can help identify whether private-ip is in use within your organization.

The bigger fix is for application developers to pay more attention to what they’re putting into their creations. “My recommendation is that when software engineers use packages in general or third party code, they need to evaluate what they’re using and where it’s coming from,” Jackson said.

Chinese electronics giant TCL has acknowledged security holes in some models of its smart television sets, but denies that it maintains a secret “back door” that gives it control over deployed TVs.

In an email statement to The Security Ledger dated November 16, Senior Vice President for TCL North America Chris Larson acknowledged that the company issued a security patch on October 30 for one of two serious security holes reported by independent researchers on October 27. That hole, assigned Common Vulnerabilities and Exposures (CVE) identifier CVE-2020-27403, allowed unauthenticated users to browse the contents of a TCL smart TV’s operating system from an adjacent network or even the Internet.

A patch for a second flaw, CVE-2020-28055, will be released in the coming days, TCL said. That flaw allows a local unprivileged attacker to read from and write to critical vendor resource directories within the TV’s Android file system, including the vendor upgrades folder.

The Security Ledger reported last week on the travails of the researchers who discovered the flaws, @sickcodes and @johnjhacking, who had difficulty contacting security experts within TCL and then found a patch silently applied without any warning from TCL.

A Learning Process for TCL

In an email statement to Security Ledger, Larson acknowledged that TCL, a global electronics giant with a market capitalization of $98 billion, “did not have a thorough and well-developed plan or strategy for reacting to issues” like those raised by the two researchers. “This was certainly a learning process for us,” he wrote.

At issue was both the security holes and the manner in which the company addressed them. In an interview with The Security Ledger, the researcher using the handle Sick Codes said that a TCL TV set he was monitoring was patched for the CVE-2020-27403 vulnerability without any notice from the company and no visible notification on the device itself.

By TCL’s account, the patch was distributed via an Android Package (APK) update on October 30. APK files are a method of installing (or “side loading”) applications and code on Android-based systems outside of sanctioned application marketplaces like the Google Play store. The company did not address in its public statements the question of whether prior notification of the update was given to customers or whether TV set owners were required to approve the update before it was installed.

Limited Impact in North America

However, the patch issued on October 30 is unlikely to have affected TCL customers in the U.S. and Canada, as none of the TCL models sold in North America contain the CVE-2020-27403 vulnerability, TCL said in its statement. Still, some TCL TV models sold in the U.S. and Canada are impacted by CVE-2020-28055, the company warned. They are TCL models 32S330, 40S330, 43S434, 50S434, 55S434, 65S434, and 75S434.

The patched vulnerability was linked to a feature called “Magic Connect” and an Android APK by the name of T-Cast, which allows users to “stream user content from a mobile device.” T-Cast was never installed on televisions distributed in the USA or Canada, TCL said. For TCL smart TV sets outside of North America that did contain T-Cast, the APK was “updated to resolve this issue,” the company said. That application update may explain why the TCL TV set studied by the researchers suddenly stopped exhibiting the vulnerability.

No Back Doors, Just “Remote Maintenance”

While TCL denied having a back door into its smart TVs, the company did acknowledge the existence of remote “maintenance” features that could give its employees or others control over deployed television sets, including onboard cameras and microphones.

In particular, TCL acknowledges that an Android APK known as “Terminal Manager…supports remote diagnostics in select regions,” but not in North America. In regions where sets with the Terminal Manager APK are deployed, TCL is able to “operate most functions of the television remotely.” That appears to include cameras and microphones installed on the set.

However, TCL said that Terminal Manager can only be used if the user “requests such action during the diagnostic session.” The process must be “initiated by the user and a code provided to TCL customer service agents in order to have diagnostic access to the television,” according to the company’s FAQ.

Other clarifications from the vendor suggest that, while reports of secret back doors in smart TVs may be overwrought, there is plenty of reason to worry about the security of TCL smart TVs.

The TCL statement acknowledged, for example, that two publicly browsable directories on the TCL Android TVs identified by the researchers could have potentially opened the door for malicious actors. A remotely writeable “upgrade” directory, /data/vendor/upgrade, on TCL sets has “never been used” but is intended for over-the-air firmware upgrades. Firmware update files placed in the directory are loaded on the next TV reboot. Similarly, a directory /data/vendor/tcl has also “never been used,” but stores “advertising graphics” that are likewise loaded “as part of the boot up process,” TCL said.

Promises to work with Independent Researchers

The company said it has learned from its mistakes and that it is undertaking efforts to work more closely with third party and independent security researchers in the future.

“Going forward, we are putting processes in place to better react to discoveries by 3rd parties. These real-world experts are sometimes able to find vulnerabilities that are missed by testing. We are performing additional training for our customer service agents on escalation procedures on these issues as well as establishing a direct reporting system online,” the company said.

China Risk Rising

Vendor assurances aside, there is growing concern within the United States and other nations about the threat posed by hundreds of millions of consumer electronic devices manufactured or sourced in China. The firm IntSights warned in August that China was using technological exports as “weaponized trojans in foreign countries.” The country is “exporting technology around the world that has hidden backdoors, superior surveillance capability, and covert data collection capabilities that surpass their intended purposes and are being used for widespread reconnaissance, espionage, and data theft,” the company warned, citing reports about gear from the telecommunications vendor Huawei and the social media site TikTok, among others.

Western governments and non-governmental organizations have also raised alarms about the country’s blend of technology-enabled authoritarianism, including the use of data theft and data harvesting, coupled with artificial intelligence to identify individuals whose words or actions are counter to the ruling Communist Party.

Millions of Android smart television sets from the Chinese vendor TCL Technology Group Corporation contained gaping software security holes that researchers say could have allowed remote attackers to take control of the devices, steal data or even control cameras and microphones to surveil the set’s owners.

The security holes appear to have been patched by the manufacturer in early November. However, the manner in which the holes were closed is raising further alarm among the researchers about whether the China-based firm is able to access and control deployed television sets without the owner’s knowledge or permission.

Two Flaws, Lots of Red Flags

In a report published on Monday, two security researchers described two serious software security holes affecting TCL brand television sets. First, a vulnerability in the software that runs TCL Android Smart TVs allowed an attacker on the adjacent network to browse and download sensitive files over an insecure web server running on port 7989.

That flaw, CVE-2020-27403, would allow an unprivileged remote attacker on the adjacent network to download most system files from the TV set, up to and including images, personal data and security tokens for connected applications. The flaw could lead to serious information disclosure, the researchers warned.
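
Administrators who want to check whether a set on their own network is exposed can do so with something as simple as the sketch below, which requests the root of the service on port 7989 and looks for a file listing in the response. The TV’s address is a placeholder.

```typescript
// Minimal probe sketch, assuming only what the CVE description states: an
// unauthenticated HTTP service on TCP port 7989 that returns a directory
// listing. The address below is a placeholder for a TV on your own network.
const tvAddress = "192.168.1.50";

async function probe(): Promise<void> {
  try {
    const res = await fetch(`http://${tvAddress}:7989/`);
    console.log(`HTTP ${res.status} from ${tvAddress}:7989`);
    const body = await res.text();
    console.log(body.slice(0, 300)); // a vulnerable set returns a browsable listing
  } catch (err) {
    console.log(`No response on 7989 – likely patched or unreachable (${(err as Error).message})`);
  }
}

probe();
```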

Second, the researchers found a vulnerability in the TCL software that allowed a local unprivileged attacker to read from and write to critical vendor resource directories within the TV’s Android file system, including the vendor upgrades folder. That flaw was assigned the identifier CVE-2020-28055.

Both flaws affect TCL Android Smart TV series V8-R851T02-LF1 V295 and below and V8-T658T01-LF1 V373 and below, according to the official CVE reports.

John Jackson is an application security engineer at Shutterstock.

The researchers, John Jackson, an application security engineer for Shutterstock, and the independent researcher known by the handle “Sick Codes,” said the flaws amount to a “back door” on any TCL Android smart television. “Anybody on an adjacent network can browse the TV’s file system and download any file they want,” said Sick Codes in an interview via the Signal platform. That would include everything from image files to small databases associated with installed applications, location data or security tokens for smart TV apps like Gmail. If the TCL TV set was exposed to the public Internet, anyone on the Internet could connect to it remotely, he said, noting that he had located a handful of such TCL Android smart TVs using the Shodan search engine.

CVE-2020-28055 was particularly worrisome, Jackson said. “It was clear that utilizing this vulnerability could result in remote code execution or even network ‘pivots’ by attackers.” That would allow malicious actors to move from the TV to other network connected systems with the intention of exploiting systems quickly with ransomware, Jackson observed. That, coupled with a global population of millions of TCL Android TVs, made the risk considerable.

Nobody Home at TCL

The researchers said efforts to alert TCL about the flaws in October initially fell on deaf ears. Emails sent to a designated email address for reporting security issues bounced, and inquiries to the company on October 16 and 20 went unanswered. Furthermore, the company did not appear to have a dedicated product security team to reach out to, Jackson said in a phone interview.

A screen capture showing the full, browsable file system on an Internet-connected TCL television set.

Only after reaching out to a security contact at TCL partner Roku did Sick Codes and Jackson hear from a security resource within TCL. In an email dated October 29th, Eric Liang of TCL wrote to the two researchers thanking them for their discovery and promising a quick fix.

“Here is how is it going on now: A new version to fix this vulnerability is going to release to SQA on Oct. 29 (UTC+8). We will arrange the upgrade plan after the regression test passes.”

Silent Patch Raises More Questions

Following that, however, there was no further communication. And, when that fix came, it raised more questions than it answered, the researchers said.

According to the researchers, TCL patched the vulnerabilities they had identified silently and without any warning. “They updated the (TCL Android) TV I was testing without any Android update notification or warning,” Sick Codes said. Even the reported firmware version on the TV remained unchanged following the patch. “This was a totally silent patch – they basically logged in to my TV and closed the port.”

Sick Codes said that suggests that TCL maintains full, remote access to deployed sets. “This is a full on back door. If they want to they could switch the TV on or off, turn the camera and mic on or off. They have full access.”

Jackson agreed and said that the manner in which the vulnerable TVs were updated raises more questions than it answers. “How do you push that many gigabytes (of data) that fast with no alert? No user notification? No advisory? Nothing. I don’t know of a company with good security practices that doesn’t tell users that it is going to patch.”

There was no response to emails sent by Security Ledger to Mr. Liang and to TCL media relations prior to publication. We will update this story with any comment or response from the company when we receive it.

Questions on Smart Device Security

The vulnerabilities raise serious questions about the cyber security of consumer electronics that are being widely distributed to the public. TCL, a mainland Chinese firm, is among those that have raised concerns within the U.S. intelligence community and among law enforcement and lawmakers, alongside firms like Huawei (which has been labeled a national security threat), ZTE and Lenovo. TCL smart TVs are barred from use in Federal government facilities. A 2019 U.S. Department of Defense Inspector General’s report raised warnings about the cyber security risks to the Pentagon of commercial off-the-shelf (COTS) technology purchased by the U.S. military, including televisions, laptops, surveillance cameras, drones and more. (PDF)

And while disputes over Chinese apps like TikTok dominate the headlines,  a recent report from the firm IntSights on China’s growing cyber risk notes that the Chinese Communist Party (CCP) is engaged in a far broader campaign to elevate the country to superpower status by treating “data as the most valuable asset.”

The supply chain for a seemingly endless variety of technology sold and used in the United States originates in China. A 2019 study by the security firm Interos, for example, found that one fifth (20%) of the hardware and software components in a popular voting machine came from suppliers in China. Furthermore, close to two-thirds (59%) of components in that voting machine came from companies with locations in both China and Russia.

TCL has risen quickly in the past five years to become a leading purveyor of smart television sets in the U.S. with a 14% market share, second behind Samsung. The company has been aggressive in both partnerships and branding: teaming with firms like Alcatel Mobile and Thompson SA to produce mobile phones and other electronics, and sponsoring sports teams and events ranging from the Rose Bowl in Pasadena, California, to The Ellen Show to the 2019 Copa América Brasil soccer tournament.

TCL’s TV sets are widely available in the US via online e-tailers like Amazon and brick and mortar “box stores” like Best Buy. It is unclear whether those retailers weigh software security and privacy protections of products before opting to put them on their store shelves. An email to Best Buy seeking comment on the TCL vulnerabilities was not returned.

Buyer Beware

The security researchers who discovered the flaws said that consumers should beware when buying smart home electronics like TV sets and home surveillance cameras, especially those manufactured by companies with ties to authoritarian regimes.

“Don’t buy it just because a TV’s cheap. Know what you’re buying,” said Sick Codes. “That’s especially true if it’s hooked up to the Internet.”

This podcast is the latest in a series of interviews we’re doing on “left-shifted security” that explores how information security is transforming to embrace agile development methodologies and DEVOPS. If you like this, check out some of the other podcasts in this series!


Information security is “shifting left”: moving closer to the development process and becoming part and parcel of agile “DEVOPS” organizations. But while building security into development may be a familiar idea, what does it mean to build compliance into development? 

Galen Emery is the Lead Compliance & Security Architect at Chef Software. 

To find out, we invited Galen Emery, the Lead Compliance & Security Architect at Chef Software, into the Security Ledger studios to talk about the job of blending both security and compliance into agile development processes. We also talk about Chef’s increasing investments in security testing and compliance, and how the “shift left” is impacting other security investments, including access control, auditing and more.

To start out, I asked Galen to tell us a bit about Chef and how the company’s technology has evolved from configuration management to security testing and compliance as well as areas like endpoint protection. 


As always, you can check our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

The pandemic isn’t the only thing shaking up development organizations. Application security is a top concern and security work is “shifting left” and becoming more intertwined with development. In this podcast, Security Ledger Editor in Chief Paul Roberts talks about it with Jonathan Hunt, Vice President of Security at the firm GitLab.


Even before the COVID pandemic set upon us, the information security industry was being transformed. Security was long a matter of hardening organizations against threats and attacks. The goal was “layered defenses”: firewalls, gateway security servers and access control lists to harden the network perimeter, with intrusion detection and endpoint protection software to protect IT assets inside it.

Security Shifting Left

Jonathan Hunt is the Vice President of Security at GitLab

These days, however, security is “shifting left” – becoming part and parcel of the development process. “DEVSECOPS” marries security processes like code analysis and vulnerability scanning to agile application development in a way that results in more secure products.

That shift is giving rise to a whole new type of security firm, including the likes of GitLab, a web-based DevOps lifecycle tool and Git-repository manager that is steadily building its roster of security capabilities. What does it mean to be a security provider in the age of DEVSECOPS and left-shifted security?

Application Development and COVID

To answer these questions, we invited Jonathan Hunt, the Vice President of Security at GitLab into the Security Ledger studio to talk about it. In this conversation, Jonathan and I talk about what it means to shift security left and marry security processes like vulnerability scanning and fuzzing with development in a seamless way. 

We also discuss how the COVID pandemic has shaken up development organizations – including GitLab itself – and how the changes wrought by COVID may remain long after the virus itself has been beaten back. 


As always, you can check our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.