Digital transformation is revolutionizing how healthcare is delivered. But a simmering dispute between a UK security researcher and a domestic healthcare non-profit suggests that the road ahead may be bumpy both for organizations embracing new software and services and for those who dare to ask questions about how that software works.

Rob Dyke (@robdykedotcom) is an independent security researcher. (Image courtesy of Twitter.)

A case in point: UK-based engineer Rob Dyke has spent months caught in an expensive legal tussle with the Apperta Foundation, a UK-based clinician-led non-profit that promotes open systems and standards for digital health and social care. The dispute stems from a confidential report Dyke made to the Foundation in February after discovering that two of its public Github repositories exposed a wide range of sensitive data, including application source code, user names, passwords and API keys.

The application in question was dubbed “Apperta Portal” and was publicly accessible for two years, Dyke told Security Ledger. Dyke informed the Foundation in his initial notification that he would hold on to the data he discovered for 90 days before deleting it, as a courtesy. Dyke insists he followed Apperta’s own information security policy, with which he was familiar as a result of earlier work he had done with the organization.
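Leaks like the one Dyke reported, in which credentials and API keys end up committed to public repositories, are typically found with simple pattern matching over repository contents. A minimal sketch of that idea follows; the patterns are illustrative only, not the method Dyke used (real scanners such as truffleHog or gitleaks use far larger rule sets plus entropy analysis):

```python
import re

# Illustrative regexes for common hard-coded secrets. These are
# deliberately simple; production scanners combine hundreds of rules
# with entropy checks to reduce false positives.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]{4,}['\"]"),
    "api_key_assignment": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan_text(text):
    """Return (rule_name, matched_text) pairs for likely secrets in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running a scanner like this over every file in a public repository (and over its commit history, where deleted secrets live on) is exactly how exposures of this kind are routinely discovered, by researchers and attackers alike.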

Open Source vs. An Open Sorcerer

Dyke (@robdykedotcom) is a self-described “open sorcerer” with expertise in the healthcare sector. In fact, he previously worked on Apperta-funded development projects to benefit the UK’s National Health Service (NHS) and had a cordial relationship with the organization. Initially, the Foundation thanked Dyke for disclosing the vulnerability and removed the exposed public source code repositories from GitHub.

That honeymoon was short lived. On March 8, 2021, Dyke received a letter from a law firm representing Apperta Foundation that warned that he “may have committed a criminal offense under the Computer Misuse Act 1990,” a thirty-year-old U.K. computer crime law.  “We understand (you) unlawfully hacked into and penetrated our client’s systems and databases and extracted, downloaded (and retain) its confidential business and financial data. You then made threats to our client…We are writing to advise you that you may have committed a criminal offence under the Computer Misuse Act 1990 and the Investigatory Powers Act 2016,” the letter read, in part. Around the same time, he was contacted by a Northumbria Police cyber investigator inquiring about a report of “Computer Misuse” from Apperta.

A Hard Pass By Law Enforcement

The legal maneuvers by Apperta prompted Dyke to go public with the dispute – though he initially declined to name the organization pursuing him – to hire his own lawyers, and to set up a GoFundMe to help offset his legal fees. In an interview with The Security Ledger, Dyke said Apperta’s aggressive actions left him little choice. “(The letter) had the word ‘unlawful’ in there, and I wasn’t about to sign anything that had said I’d committed a criminal offense,” he said.

After interviewing Dyke, law enforcement in the UK declined to pursue a criminal case against him for violating the CMA. However, the researcher’s legal travails have continued all the same.

Keen to ensure that Dyke deleted the leaked data and application code he downloaded from the organization’s public GitHub repositories, Apperta’s lawyers sent multiple emails instructing Dyke to immediately destroy or deliver up the data exposed by the security vulnerability; to give confirmation he had not and would not publish the data he “unlawfully extracted;” and to give another confirmation that he had not shared this data with anyone.

Dyke insists he deleted the exposed Apperta Foundation data weeks ago, soon after first being asked by the Foundation. Documents provided by Dyke that were sent to Apperta attest that he “destroyed all Apperta Foundation CIC’s data and business information in my possession, the only such relevant material having been collated in February 2021 for the purposes of my responsible disclosure of serious IT security concerns to Apperta of March 2021 (“the Responsible Disclosure Materials”).”

Not Taking ‘Yes’ For An Answer

Nevertheless, Apperta’s legal team found that Dyke’s response was “not an acceptable undertaking,” and that Dyke posed an “imminent further threat” to the organization. Months of expensive legal wrangling ensued, as Apperta sought to force Dyke to delete any and all work he had done for the organization, including code he had developed and licensed as open source. All the while, Dyke fielded correspondence that suggests Apperta was preparing to take him to court. In recent weeks, the Foundation has inquired about his solicitor and whether he would be representing himself legally. Other correspondence has inquired about the best address at which to serve him an injunction and passed along forms to submit ahead of an interim hearing before a judge.

In late April, Dyke transmitted a signed undertaking and statement to Apperta that would seem to satisfy the Foundation’s demands. But the Foundation’s actions and correspondence have him worried that it may move ahead with legal action, anyway – though he is not sure exactly what for. “I don’t understand what the legal complaint would be. Is this about the database access? Could it be related to (intellectual property)?” Dyke said he doubts Apperta is pursuing “breach of contract” because he isn’t under contract with the Foundation and -in any case – followed the organization’s responsible disclosure policy when he found the exposed data, which should shield him from legal repercussions. 

In the meantime, the legal bills have grown. Dyke told The Security Ledger that his attorney’s fees in March and April totaled £20,000 and another £5,000 since – with no clear end in sight. “It is not closed. I have zero security,” Dyke wrote in a text message.

In a statement made to The Security Ledger, an Apperta Foundation spokesperson noted that, to date, it “has not issued legal proceedings against Mr. Dyke.” The Foundation said it took “immediate action” to isolate the breach and secure its systems. However, the Foundation also cast aspersions on Dyke’s claims that he was merely performing a public service in reporting the data leak to Apperta and suggested that the researcher had not been forthcoming.

“While Mr. Dyke claims to have been acting as a security researcher, it has always been our understanding that he used multiple techniques that overstepped the bounds of good faith research, and that he did so unethically.”

-Spokesperson for The Apperta Foundation

Asked directly whether the Foundation considered the legal matter closed, it did not respond, but acknowledged that “Mr. Dyke has now provided Apperta with an undertaking in relation to this matter.”

Apperta believes “our actions have been entirely fair and proportionate in the circumstances,” the spokesperson wrote.

Asked by The Security Ledger whether the Foundation had reported the breach to regulators (in the UK, the Information Commissioner’s Office is the governing body), the Foundation did not respond but said that it “has been guided by the Information Commissioner’s Office (ICO) and our legal advisers regarding our duties as a responsible organisation.”

OK, Boomer. UK’s Cyber Crime Law Shows Its Age

Dyke’s dealings with Apperta highlight the fears that many in the UK cybersecurity community harbor about the CMA, a 30-year-old computer crime law that critics say is showing its age.

A 2020 report from techUK found that 80% of respondents worried about breaking the CMA when researching vulnerabilities or investigating cyber threat actors. Around 40% of the same respondents said the law has acted as a barrier to them or their colleagues, and has even prevented cybersecurity employees from proactively safeguarding against security breaches.

Those who support amending the CMA believe the legislation poses several threats to the UK’s cyber and information security industry. Edward Parsons of the security firm F-Secure observed that “the CMA not only impacts our ability to defend victims, but also our competitiveness in a global market.” That drawback, combined with the UK’s persistent shortage of cybersecurity professionals, is why Parsons believes an updated CMA should be passed into law.

Dyke said the CMA looms over the work of security researchers. “You have to be very careful about participating in any, even formal bug bounties (…) If someone’s data gets breached, and it comes out that I’ve reported it, I could actually be picked up under the CMA for having done that hacking in the first place,” he said. 

Calls for Change

Calls for changes are growing louder in the UK. A January 2020 report by the UK-based Criminal Law Reform Now Network (CLRNN) found that the Computer Misuse Act “has not kept pace with rapid technological change” and “requires significant reform to make it fit for the 21st century.” Among the recommendations of that report were allowances for researchers like Dyke to make “public interest defences to untie the hands of cyber threat intelligence professionals, academics and journalists to provide better protections against cyber attacks and misuse.”

Policy makers are taking note. This week, British Home Secretary Priti Patel told the audience at the National Cyber Security Centre’s (NCSC’s) CyberUK 2021 virtual event that the CMA had served the country well, but that “now is the right time to undertake a formal review of the Computer Misuse Act.”

For researchers like Dyke, the changes can’t come soon enough. “The Act is currently broken and disproportionately affects hackers,” he wrote in an email. He said a reformed CMA should make allowances for the kinds of “accidental discoveries” that led him to the Apperta breach. “It needs to ensure safe harbour for professional, independent security researchers making Responsible Disclosure in the public interest.”

Carolynn van Arsdale contributed to this story.

Chinese electronics giant TCL has acknowledged security holes in some models of its smart television sets, but denies that it maintains a secret “back door” that gives it control over deployed TVs.

In an email statement to The Security Ledger dated November 16, Senior Vice President for TCL North America Chris Larson acknowledged that the company issued a security patch on October 30 for one of two serious security holes reported by independent researchers on October 27. That hole, assigned the Common Vulnerabilities and Exposures (CVE) identifier CVE-2020-27403, allowed unauthenticated users to browse the contents of a TCL smart TV’s operating system from an adjacent network or even the Internet.

A patch for a second flaw, CVE-2020-28055, will be released in the coming days, TCL said. That flaw allows a local unprivileged attacker to read from and write to critical vendor resource directories within the TV’s Android file system, including the vendor upgrades folder.

The Security Ledger reported last week on the travails of the researchers who discovered the flaws, @sickcodes and @johnjhacking, who had difficulty contacting security experts within TCL and then found a patch silently applied without any warning from TCL.

A Learning Process for TCL

In an email statement to Security Ledger, Larson acknowledged that TCL, a global electronics giant with a market capitalization of $98 billion, “did not have a thorough and well-developed plan or strategy for reacting to issues” like those raised by the two researchers. “This was certainly a learning process for us,” he wrote.

At issue was both the security holes and the manner in which the company addressed them. In an interview with The Security Ledger, the researcher using the handle Sick Codes said that a TCL TV set he was monitoring was patched for the CVE-2020-27403 vulnerability without any notice from the company and no visible notification on the device itself.

By TCL’s account, the patch was distributed via an Android Package (APK) update on October 30. APK files are Android’s application package format; installing them directly (or “side-loading”) places applications and code on Android-based systems outside of sanctioned application marketplaces like the Google Play store. The company did not address in its public statements the question of whether prior notification of the update was given to customers or whether TV set owners were required to approve the update before it was installed.
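An APK is, at bottom, a ZIP archive, so its contents and signature metadata can be inspected before it is side-loaded. A minimal sketch of such an inspection follows, assuming the older v1 (JAR-style) signing scheme whose certificate files live under META-INF/; newer APK signature schemes (v2/v3) live in the ZIP metadata and would not be caught by this check:

```python
import zipfile

def inspect_apk(path):
    """List the entries of an APK (a ZIP archive) and report whether it
    appears to carry a v1 (JAR-style) signature under META-INF/."""
    with zipfile.ZipFile(path) as apk:
        names = apk.namelist()
    signed = any(
        name.startswith("META-INF/") and name.endswith((".RSA", ".DSA", ".EC"))
        for name in names
    )
    return names, signed
```

A check like this tells a user only that some signature block is present, not who signed it; verifying that an update actually came from the vendor requires validating the certificate chain, which is what the Android package installer does on the device.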

Limited Impact in North America

The patch issued on October 30 is unlikely to have affected TCL customers in the U.S. and Canada, as none of the TCL models sold in North America contain the CVE-2020-27403 vulnerability, TCL said in its statement. However, some TCL TV models sold in the U.S. and Canada are impacted by CVE-2020-28055, the company warned. They are TCL models 32S330, 40S330, 43S434, 50S434, 55S434, 65S434, and 75S434.

The patched vulnerability was linked to a feature called “Magic Connect” and an Android APK by the name of T-Cast, which allows users to “stream user content from a mobile device.” T-Cast was never installed on televisions distributed in the USA or Canada, TCL said. For TCL smart TV sets outside of North America that did contain T-Cast, the APK was “updated to resolve this issue,” the company said. That application update may explain why the TCL TV set studied by the researchers suddenly stopped exhibiting the vulnerability.

No Back Doors, Just “Remote Maintenance”

While TCL denied having a back door into its smart TVs, the company did acknowledge the existence of remote “maintenance” features that could give its employees or others control over deployed television sets, including onboard cameras and microphones.

In particular, TCL acknowledges that an Android APK known as “Terminal Manager…supports remote diagnostics in select regions,” but not in North America. In regions where sets with the Terminal Manager APK are deployed, TCL is able to “operate most functions of the television remotely.” That appears to include cameras and microphones installed on the set.

However, TCL said that Terminal Manager can only be used if the user “requests such action during the diagnostic session.” The process must be “initiated by the user and a code provided to TCL customer service agents in order to have diagnostic access to the television,” according to the company’s FAQ.

Other clarifications from the vendor suggest that, while reports of secret back doors in smart TVs may be overwrought, there is plenty of reason to worry about the security of TCL smart TVs.

The TCL statement acknowledged, for example, that two publicly browsable directories on the TCL Android TVs identified by the researchers could have potentially opened the door for malicious actors. A remotely writeable “upgrade” directory, /data/vendor/upgrade, on TCL sets has “never been used” but is intended for over-the-air firmware upgrades. Firmware update files placed in the directory are loaded on the next TV reboot. Similarly, a directory /data/vendor/tcl has also “never been used,” but stores “advertising graphics” that also are loaded “as part of the boot up process,” TCL said.
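The risk TCL describes – directories that are consumed at boot yet writable by untrusted parties – is the classic reason to audit filesystem permissions on embedded devices. A minimal sketch of such an audit, assuming POSIX permission semantics (the paths are the ones cited in TCL's statement; the check itself is generic):

```python
import os
import stat

def world_writable_dirs(paths):
    """Return the subset of paths that exist, are directories, and are
    writable by any user. A firmware or asset directory that is read
    at boot should never appear in this list."""
    risky = []
    for path in paths:
        try:
            mode = os.stat(path).st_mode
        except FileNotFoundError:
            continue  # path not present on this system
        if stat.S_ISDIR(mode) and mode & stat.S_IWOTH:
            risky.append(path)
    return risky

# Directories cited in TCL's statement:
VENDOR_DIRS = ["/data/vendor/upgrade", "/data/vendor/tcl"]
```

On an actual TV set the exposure the researchers described was remote (the directories were reachable over the network), but the underlying invariant an auditor checks is the same: anything loaded during boot must only be writable by the party trusted to ship updates.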

Promises to work with Independent Researchers

The company said it has learned from its mistakes and that it is undertaking efforts to work more closely with third party and independent security researchers in the future.

“Going forward, we are putting processes in place to better react to discoveries by 3rd parties. These real-world experts are sometimes able to find vulnerabilities that are missed by testing. We are performing additional training for our customer service agents on escalation procedures on these issues as well as establishing a direct reporting system online,” the company said.

China Risk Rising

Vendor assurances aside, there is growing concern within the United States and other nations about the threat posed by hundreds of millions of consumer electronic devices manufactured in – or sourced from – China. The security firm IntSights warned in August that China was using technological exports as “weaponized trojans in foreign countries.” The country is “exporting technology around the world that has hidden backdoors, superior surveillance capability, and covert data collection capabilities that surpass their intended purposes and are being used for widespread reconnaissance, espionage, and data theft,” the company warned, citing reports about gear from the telecommunications vendor Huawei and social media site TikTok among others.

Western governments and non-governmental organizations have also raised alarms about the country’s blend of technology-enabled authoritarianism, including the use of data theft and data harvesting, coupled with artificial intelligence to identify individuals whose words or actions are counter to the ruling Communist Party.