Digital transformation is revolutionizing how healthcare is delivered. But a simmering dispute between a UK security researcher and a healthcare non-profit in that country suggests that the road ahead may be bumpy, both for organizations embracing new software and services and for those who dare to ask questions about how that software works.

Rob Dyke (@robdykedotcom) is an independent security researcher. (Image courtesy of Twitter.)

A case in point: UK-based engineer Rob Dyke has spent months caught in an expensive legal tussle with the Apperta Foundation, a UK-based clinician-led non-profit that promotes open systems and standards for digital health and social care. The dispute stems from a confidential report Dyke made to the Foundation in February after discovering that two of its public GitHub repositories exposed a wide range of sensitive data, including application source code, user names, passwords and API keys.

The application in question was dubbed “Apperta Portal” and was publicly accessible for two years, Dyke told Security Ledger. Dyke informed the Foundation in his initial notification that he would hold on to the data he discovered for 90 days before deleting it, as a courtesy. Dyke insists he followed Apperta’s own information security policy, with which he was familiar as a result of earlier work he had done with the organization.
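The kind of exposure Dyke describes – credentials and keys sitting in a public repository – is typically found with simple pattern matching. The sketch below is a minimal illustration of that general technique, not a description of Dyke’s actual method; the regexes and file handling are simplified assumptions, and real secret scanners use far larger rule sets.

```python
#!/usr/bin/env python3
"""Minimal sketch: scan a locally cloned repository for likely secrets.

Illustrative only -- not the method Dyke used. The patterns below are
simplified examples of the kinds of strings (API keys, passwords,
private keys) that secret scanners look for.
"""
import re
import sys
from pathlib import Path

# Hypothetical, simplified patterns.
PATTERNS = {
    "generic api key": re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "password assignment": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]{6,}['\"]"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(repo_dir: str) -> None:
    """Walk every file in the repo (skipping .git) and report suspicious lines."""
    for path in Path(repo_dir).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```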

Open Source vs. An Open Sorcerer

Dyke (@robdykedotcom) is a self-described “open sorcerer” with expertise in the healthcare sector. In fact, he previously worked on Apperta-funded development projects to benefit the UK’s National Health Service (NHS) and had a cordial relationship with the organization. Initially, the Foundation thanked Dyke for disclosing the vulnerability and removed the exposed public source code repositories from GitHub.


That honeymoon was short lived. On March 8, 2021, Dyke received a letter from a law firm representing Apperta Foundation that warned that he “may have committed a criminal offense under the Computer Misuse Act 1990,” a thirty-year-old U.K. computer crime law.  “We understand (you) unlawfully hacked into and penetrated our client’s systems and databases and extracted, downloaded (and retain) its confidential business and financial data. You then made threats to our client…We are writing to advise you that you may have committed a criminal offence under the Computer Misuse Act 1990 and the Investigatory Powers Act 2016,” the letter read, in part. Around the same time, he was contacted by a Northumbria Police cyber investigator inquiring about a report of “Computer Misuse” from Apperta.

A Hard Pass By Law Enforcement

The legal maneuvers by Apperta prompted Dyke to go public with the dispute – though he initially declined to name the organization pursuing him – to hire his own lawyers, and to set up a GoFundMe to help offset his legal fees. In an interview with The Security Ledger, Dyke said Apperta’s aggressive actions left him little choice. “(The letter) had the word ‘unlawful’ in there, and I wasn’t about to sign anything that had said I’d committed a criminal offense,” he said.

After interviewing Dyke, law enforcement in the UK declined to pursue a criminal case against him for violating the CMA. However, the researcher’s legal travails have continued all the same.


Keen to ensure that Dyke deleted the leaked data and application code he downloaded from the organization’s public GitHub repositories, Apperta’s lawyers sent multiple emails instructing Dyke to destroy, or immediately hand over, the data he had obtained through the security vulnerability; to confirm that he had not published and would not publish the data he “unlawfully extracted;” and to confirm that he had not shared this data with anyone.

Dyke insists he deleted the exposed Apperta Foundation data weeks ago, soon after first being asked by the Foundation. Documents provided by Dyke that were sent to Apperta attest that he “destroyed all Apperta Foundation CIC’s data and business information in my possession, the only such relevant material having been collated in February 2021 for the purposes of my responsible disclosure of serious IT security concerns to Apperta of March 2021 (“the Responsible Disclosure Materials”).”

Not Taking ‘Yes’ For An Answer

Nevertheless, Apperta’s legal team found that Dyke’s response was “not an acceptable undertaking,” and that Dyke posed an “imminent further threat” to the organization. Months of expensive legal wrangling ensued, as Apperta sought to force Dyke to delete any and all work he had done for the organization, including code he had developed and licensed as open source. All the while, Dyke fielded correspondence that suggests Apperta was preparing to take him to court. In recent weeks, the Foundation has inquired about his solicitor and whether he would be representing himself legally. Other correspondence has inquired about the best address at which to serve him an injunction and passed along forms to submit ahead of an interim hearing before a judge.

In late April, Dyke transmitted a signed undertaking and statement to Apperta that would seem to satisfy the Foundation’s demands. But the Foundation’s actions and correspondence have him worried that it may move ahead with legal action anyway – though he is not sure exactly what for. “I don’t understand what the legal complaint would be. Is this about the database access? Could it be related to (intellectual property)?” Dyke said he doubts Apperta is pursuing “breach of contract” because he isn’t under contract with the Foundation and, in any case, followed the organization’s responsible disclosure policy when he found the exposed data, which should shield him from legal repercussions.

In the meantime, the legal bills have grown. Dyke told The Security Ledger that his attorney’s fees in March and April totaled £20,000 and another £5,000 since – with no clear end in sight. “It is not closed. I have zero security,” Dyke wrote in a text message.

In a statement made to The Security Ledger, an Apperta Foundation spokesperson noted that, to date, it “has not issued legal proceedings against Mr. Dyke.” The Foundation said it took “immediate action” to isolate the breach and secure its systems. However, the Foundation also cast aspersions on Dyke’s claims that he was merely performing a public service in reporting the data leak to Apperta and suggested that the researcher had not been forthcoming.

“While Mr. Dyke claims to have been acting as a security researcher, it has always been our understanding that he used multiple techniques that overstepped the bounds of good faith research, and that he did so unethically.”

-Spokesperson for The Apperta Foundation

Asked directly whether the Foundation considered the legal matter closed, the spokesperson did not respond, but acknowledged that “Mr. Dyke has now provided Apperta with an undertaking in relation to this matter.”

Apperta believes “our actions have been entirely fair and proportionate in the circumstances,” the spokesperson wrote.

Asked by The Security Ledger whether the Foundation had reported the breach to regulators (in the UK, the Information Commissioner’s Office is the relevant authority), the Foundation did not answer directly but said that it “has been guided by the Information Commissioner’s Office (ICO) and our legal advisers regarding our duties as a responsible organisation.”

OK, Boomer. UK’s Cyber Crime Law Shows Its Age

Dyke’s dealings with Apperta highlight the fears that many in the UK cybersecurity community harbor regarding the CMA, a 30-year-old computer crime law that critics say is showing its age.

A 2020 report from techUK found that 80% of respondents had worried about breaking the CMA when researching vulnerabilities or investigating cyber threat actors. Around 40% of those same respondents said the law has acted as a barrier to them or their colleagues, and has even prevented cybersecurity employees from proactively safeguarding against security breaches.

Those who support amending the CMA believe that the legislation poses several threats to the UK’s cyber and information security industry. Edward Parsons of the security firm F-Secure observed that “the CMA not only impacts our ability to defend victims, but also our competitiveness in a global market.” That competitive disadvantage, along with the ever-present shortage of cybersecurity professionals in the U.K., is why Parsons believes an updated version of the CMA should be passed into law.

Dyke said the CMA looms over the work of security researchers. “You have to be very careful about participating in any, even formal bug bounties (…) If someone’s data gets breached, and it comes out that I’ve reported it, I could actually be picked up under the CMA for having done that hacking in the first place,” he said. 

Calls for Change

Calls for changes are growing louder in the UK. A January 2020 report by the UK-based Criminal Law Reform Now Network (CLRNN) found that the Computer Misuse Act “has not kept pace with rapid technological change” and “requires significant reform to make it fit for the 21st century.” Among the recommendations of that report were allowances for researchers like Dyke to make “public interest defences to untie the hands of cyber threat intelligence professionals, academics and journalists to provide better protections against cyber attacks and misuse.”

Policy makers are taking note. This week, British Home Secretary Priti Patel told the audience at the National Cyber Security Centre’s (NCSC’s) CyberUK 2021 virtual event that the CMA had served the country well, but that “now is the right time to undertake a formal review of the Computer Misuse Act.”

For researchers like Dyke, the changes can’t come soon enough. “The Act is currently broken and disproportionately affects hackers,” he wrote in an email. He said a reformed CMA should make allowances for the kinds of “accidental discoveries” that led him to the Apperta breach. “It needs to ensure safe harbour for professional, independent security researchers making Responsible Disclosure in the public interest.”

Carolynn van Arsdale contributed to this story.

In this episode of the podcast (#212), Brandon Hoffman, the CISO of Intel 471, joins us to discuss that company’s latest report on China’s diversified marketplace for stolen data and stolen identities.


Data leaks, data breaches and data dumps are so common these days that they don’t even attract that much attention. Back in 2013, news that hackers stole data on tens of millions of customers of the software maker Adobe dominated the headlines for days. These days, news that companies like Facebook or LinkedIn exposed data on hundreds of millions of users barely elicits a collective shrug.

“What’s a better way to understand a person you’re trying to victimize than to understand their habits? That way you can have a better chance that whatever scam you’re trying to run has success.” 

-Brandon Hoffman, CISO Intel 471

Data leaks and data breaches, for all intents and purposes, have become just the price of doing business online. But those who are ready to be blasé about breaches may be overlooking the role that leaked and stolen data plays in other, more serious problems such as targeted cyber attacks.


A Stolen Data Ecosystem Grows In China

Data lifted today from a health insurer, government agency or retailer often informs tomorrow’s targeted spear phishing attack – one that can steal sensitive intellectual property and government secrets or fuel attacks on critical infrastructure. That’s the conclusion of a recent report by the company Intel 471, which studied how Chinese cybercriminal groups use big data technology to monetize the data they obtain (often: steal) in the Chinese-language underground. The company’s research revealed a sophisticated cybercriminal ecosystem of data brokers, insiders and the cybercriminals who obtain and sell sensitive data.


In this interview, we invited Brandon Hoffman, the CISO at Intel 471 into the studio to talk about the report and the way that the market for stolen data has created a number of “sub economies” that help fuel cyber crime. 

In just the last two weeks, three of the world’s most prominent social networks have been linked to stories about data leaks. Troves of information on both Facebook and LinkedIn users – hundreds of millions of them – turned up for sale in marketplaces in the cyber underground. Then, earlier this week, a hacker forum published a database purporting to be information on users of the new Clubhouse social network. 

Andrew Sellers is the Chief Technology Officer at QOMPLX Inc.

To hear Facebook, LinkedIn and Clubhouse tell it, however, nothing is amiss. All took pains to explain that they were not the victims of a hack, merely the “scraping” of public data on their users by individuals. Facebook went so far as to insist that it would not notify the 530 million users whose names, phone numbers, birth dates and other information were scraped from its site.

So which is it? Is scraping the same as hacking or just an example of “zealous” use of a social media platform? And if it isn’t considered hacking…should it be? As more and more online platforms open their doors to API-based access, what restrictions and security should be attached to those APIs to prevent wanton abuse? 
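One control commonly proposed for curbing wholesale scraping is per-client rate limiting. The sketch below shows a token-bucket limiter keyed by API key; it is a minimal illustration of the idea only, and the capacity and refill values are hypothetical rather than drawn from any platform mentioned here.

```python
"""Minimal sketch of a per-API-key token-bucket rate limiter.
Capacity and refill rate are arbitrary illustrative values."""
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    capacity: float = 60.0        # max burst size (hypothetical)
    refill_rate: float = 1.0      # tokens added per second (hypothetical)
    tokens: float = 60.0
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, TokenBucket] = {}

def check_request(api_key: str) -> bool:
    """Return True if the request from this key should be served."""
    bucket = buckets.setdefault(api_key, TokenBucket())
    return bucket.allow()

if __name__ == "__main__":
    # Simulate a scraper hammering the API with a single key.
    allowed = sum(check_request("demo-key") for _ in range(100))
    print(f"{allowed} of 100 rapid requests allowed")
```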

To discuss these issues and more, we invited Andrew Sellers into the Security Ledger studios. Andrew is the Chief Technology Officer at the firm QOMPLX*, where he oversees the technology, engineering, data science, and delivery aspects of QOMPLX’s next-generation operational risk management and situational awareness products. He is also an expert in data scraping, with deep experience in large-scale heterogeneous network design, deep-web data extraction, and data theory.

While the recent incidents affecting LinkedIn, Facebook and Clubhouse may not technically qualify as “hacks,” Andrew told me, they do raise troubling questions about the data security and data management practices of large social media networks – and about whether more needs to be done to regulate the storage and retention of data on these platforms.


(*) QOMPLX is a sponsor of The Security Ledger.

The California State Controller’s Office (SCO) was recently a victim of phishing. According to its website, an employee of the SCO’s Unclaimed Property Division clicked on a link in an email, entered their user ID and password, and unknowingly provided a hacker with access to the email account. According to the website, “SCO has reason to believe the compromised email account had personally identifying information contained in Unclaimed Property Holder Reports. The unauthorized user also sent potentially malicious emails to some of the SCO employee’s contacts.”

The SCO was in the process of notifying individuals who either received one of the malicious emails or may have had their information potentially exposed. SCO recommended these individuals place a fraud alert on their accounts with the three major credit bureaus. We have said many times that organizations must be vigilant in training employees not to click on links in emails, particularly when being asked to input user credentials and login information. Given that the SCO oversees disbursements for what California State Controller Betty T. Yee has called the fifth largest economy in the world, this attack could have been much worse.

A serious flaw in Zoom’s Keybase secure chat application left copies of images contained in secure communications on Keybase users’ computers after they were supposedly deleted.

The flaw in the encrypted messaging application (CVE-2021-23827) does not expose Keybase users to remote compromise. However, it could put their security, privacy and safety at risk, especially for users living under authoritarian regimes in which apps like Keybase and Signal are increasingly relied on as a way to conduct conversations out of earshot of law enforcement or security services.

The flaw was discovered by researchers from the group Sakura Samurai as part of a bug bounty program offered by Zoom, which acquired Keybase in May 2020. Zoom said it has fixed the flaw in the latest versions of its software for Windows, macOS and Linux.

Deleted…but not gone

According to researcher John Jackson of Sakura Samurai, the Keybase flaw manifested itself in two ways. First: Jackson discovered that images that were copied and pasted into Keybase chats were not reliably deleted from a temporary folder, /uploadtemps, associated with the client application. “In general, when you would copy and paste in a Keybase chat, the folder would appear in (the uploadtemps) folder and then immediately get deleted,” Jackson told Security Ledger in a phone interview. “But occasionally that wouldn’t happen. Clearly there was some kind of software error – a collision of sorts – where the images were not getting cleared.”


Discovering that flaw put Sakura Samurai researchers on the hunt for more and they soon struck pay dirt again. Sakura Samurai members Aubrey Cottle (@kirtaner), Robert Willis (@rej_ex) and Jackson Henry (@JacksonHHax) discovered an unencrypted directory, /Cache, associated with the Keybase client that contained a comprehensive record of images from encrypted chat sessions. The application used a custom extension to name the files, but they were easily viewable directly or simply by changing the custom file extension to the PNG image format, Jackson said.
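A minimal sketch of how one might check for the kind of leftover image data the researchers describe – by inspecting file magic bytes rather than trusting file names or extensions – follows. The directory paths are assumptions for illustration only; actual locations of /uploadtemps and /Cache vary by platform and Keybase version, and patched clients should not leave such files behind.

```python
"""Minimal sketch: look for leftover image data in Keybase client cache
folders by checking file magic bytes. Paths below are assumed examples."""
from pathlib import Path

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"
JPEG_MAGIC = b"\xff\xd8\xff"

# Hypothetical example locations of the folders described in the report.
CANDIDATE_DIRS = [
    Path.home() / "AppData/Local/Keybase/Cache",   # Windows (assumed)
    Path.home() / "Library/Caches/Keybase",        # macOS (assumed)
    Path.home() / ".cache/keybase",                # Linux (assumed)
]

def find_orphaned_images() -> None:
    for base in CANDIDATE_DIRS:
        if not base.is_dir():
            continue
        for path in base.rglob("*"):
            if not path.is_file():
                continue
            try:
                with path.open("rb") as handle:
                    header = handle.read(8)
            except OSError:
                continue
            # Flag files whose contents are images regardless of their name.
            if header.startswith(PNG_MAGIC) or header.startswith(JPEG_MAGIC):
                print(f"image data found despite non-image name: {path}")

if __name__ == "__main__":
    find_orphaned_images()
```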

In a statement, a Zoom spokesman said that the company appreciates the work of the researchers and takes privacy and security “very seriously.”

“We addressed the issue identified by the Sakura Samurai researchers on our Keybase platform in version 5.6.0 for Windows and macOS and version 5.6.1 for Linux. Users can help keep themselves secure by applying current updates or downloading the latest Keybase software with all current security updates,” the spokesman said.


In most cases, the failure to remove files from cache after they were deleted would count as a “low priority” security flaw. However, in the context of an end-to-end encrypted communications application like Keybase, the failure takes on added weight, Jackson wrote.

“An attacker that gains access to a victim machine can potentially obtain sensitive data through gathered photos, especially if the user utilizes Keybase frequently. A user, believing that they are sending photos that can be cleared later, may not realize that sent photos are not cleared from the cache and may send photos of PII or other sensitive data to friends or colleagues.”

Messaging app flaws take on new importance

The flaw takes on even more weight given the recent flight of millions of Internet users to end-to-end encrypted messaging applications like Keybase, Signal and Telegram. Those users were responding to onerous data sharing policies, such as those recently introduced on Facebook’s WhatsApp chat. In countries with oppressive, authoritarian governments, end to end encrypted messaging apps are a lifeline for political dissidents and human rights advocates.


As a result of the flaw, however, adversaries who gained access to the laptop or desktop on which the Keybase application was installed could view any images contained in Keybase encrypted chats. The implications of that are clear enough. For example, recent reports say that North Korean state hackers have targeted security researchers via phishing attacks sent via Keybase, Signal and other encrypted applications.

The flaws in Keybase do not affect the Zoom application, Jackson said. Zoom acquired Keybase in May to strengthen the company’s video platform with end-to-end encryption. That acquisition followed reports about security flaws in the Zoom client, including in its in-meeting chat feature.

Jackson said that the Sakura Samurai researchers received a $1,000 bounty from Zoom for their research. He credited the company with being “very responsive” to the group’s vulnerability report.

The increased use of encrypted messaging applications has attracted the attention of security researchers, as well. Last week, for example, a researcher disclosed 13 vulnerabilities in the Telegram secure messaging application that could have allowed a remote attacker to compromise any Telegram user. Those issues were patched in Telegram updates released in September and October 2020.


Marriott recently won dismissal of a proposed class action data breach lawsuit alleging several violations, including a violation of the California Consumer Privacy Act (CCPA). The case, Arifur Rahman v. Marriott International, Inc. et al., Case No.: 8:20-cv-00654, was dismissed in an Order by U.S. District Court Judge David O. Carter on January 12, 2021.

The Plaintiff in the lawsuit alleged that he was a member of a “class that were victims of a cybersecurity breach at Marriott when two employees of a Marriott franchise in Russia accessed class members’ names, addresses, phone numbers, email addresses, genders, birth dates, and loyalty account numbers without authorization.” Marriott admitted there was a breach, sent letters to affected individuals, and confirmed that no sensitive information, such as social security numbers, credit card information, or passwords, was compromised.

The matter was dismissed, as the Court found that it lacked subject matter jurisdiction because the Plaintiff lacked standing to sue. The Court was clear that in the 9th Circuit, the sensitivity of the personal information, combined with its theft, is a prerequisite to finding that plaintiffs have alleged injury in fact. Injury in fact is one of the three elements necessary to support Article III standing.

The data breach in this case affected approximately 5.2 million Marriott customers, but the information accessed by hackers was not “sensitive information,” which was a required element to be able to continue the lawsuit.


The U.S. Department of Health and Human Services Office for Civil Rights (OCR) recently announced that it had entered into a Resolution Agreement, Corrective Action Plan, and settlement with Lifetime Healthcare, Inc., the parent of Excellus Health Plan, over alleged violations of HIPAA relating to a data breach that occurred from December 23, 2013 through May 11, 2015. During that time, a cybercriminal obtained access to its IT systems and installed malware that allowed the intruder to obtain access to the protected health information of more than 9.3 million individuals.

The accessed information included the individuals’ names, addresses, dates of birth, Social Security numbers, bank account information, health insurance claims, and clinical treatment information.

Following an investigation, OCR found potential violations of HIPAA and the parties agreed to settle the action for a payment of $5.1 million, along with the standard requirements in a Corrective Action Plan that OCR imposes on covered entities following a data breach, including completion of a security risk assessment, implementation of a risk management plan, updating policies and procedures, and annual reporting to OCR.

Ubiquiti, a manufacturer of products used for networks such as routers, webcams and mesh networks, announced this week that unauthorized access to its systems hosted by a third-party cloud provider may have compromised customers’ names, email addresses and, in the company’s words, “the one-way encrypted password to your account,” as well as addresses and telephone numbers, if those had been provided.

 Ubiquiti did not identify the name of the third-party cloud provider. It is urging customers to change their passwords and to enable multi-factor authentication.

 Changing the default passwords on networking equipment such as routers and webcams is good cybersecurity hygiene, and even more important following a data breach of the manufacturer of these products.
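For readers unfamiliar with the term, a “one-way encrypted” password generally means a salted, hashed password that cannot be directly reversed, though weak passwords can still be cracked offline – one reason resets are urged after a breach. The sketch below shows a generic salted PBKDF2 scheme from Python’s standard library; it illustrates the concept only and is not a description of Ubiquiti’s actual implementation.

```python
"""Minimal sketch of a salted, one-way password hash (PBKDF2-HMAC-SHA256).
Generic illustration -- not the scheme any particular vendor uses."""
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Return (salt, derived key). The derivation cannot be reversed;
    verification works by re-deriving and comparing."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, expected)

if __name__ == "__main__":
    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("guess", salt, stored))                         # False
```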

Independent security researchers testing the security of the United Nations were able to compromise public-facing servers and a cloud-based development account for the U.N. and lift data on more than 100,000 staff and employees, according to a report released Monday.

Researchers affiliated with Sakura Samurai, a newly formed collective of independent security experts, exploited an exposed Github repository belonging to the International Labour Organization and the U.N.’s Environment Programme (UNEP) to obtain “multiple sets of database and application credentials” for UNEP applications, according to a blog post by one of the Sakura Samurai researchers, John Jackson, explaining the group’s work.

Specifically, the group was able to obtain access to database backups for private UNEP projects that exposed a wealth of information on staff and operations. That included a document listing more than 1,000 U.N. employee names and emails; more than 100,000 employee travel records including destination, length of stay and employee ID numbers; more than 1,000 U.N. employee records; and more.

The researchers stopped their search once they were able to obtain personally identifying information. However, they speculated that more data was likely accessible.

Looking for Vulnerabilities

The researchers were scanning the U.N.’s network as part of the organization’s Vulnerability Disclosure Program. That program, started in 2016, has resulted in a number of vulnerabilities being reported to the U.N., many of them common cross-site scripting (XSS) and SQL injection flaws in the U.N.’s main website, un.org.


Sakura Samurai took a different approach, Jackson told The Security Ledger in an interview. The group started by enumerating U.N. subdomains and scanning them for exposed assets and data. One of those, an ILO.org Apache web server, was misconfigured and exposed a file linked to a GitHub account. By downloading that file, the researchers were able to recover the credentials for a U.N. survey management panel, part of a little-used but public-facing survey feature on the U.N. site. While the survey tool didn’t expose a tremendous amount of data, the researchers continued scanning the site and eventually discovered a subdomain that exposed a file containing the credentials for a U.N. GitHub account. That account held 10 more private GitHub repositories encompassing databases, database credentials, backups and files containing personally identifying information.
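The general workflow Jackson describes – enumerate subdomains, then probe for commonly exposed files – can be sketched in a few lines. The example below is illustrative only: the domain, wordlist and paths are placeholders, it is not the Sakura Samurai tooling, and scanning systems without explicit authorization (such as a vulnerability disclosure program) may be unlawful.

```python
"""Minimal sketch: enumerate candidate subdomains from a wordlist and
probe a few well-known paths that commonly expose data. All values are
placeholders; only scan systems you are authorized to test."""
import socket
import urllib.request

DOMAIN = "example.org"                                     # placeholder target
WORDLIST = ["www", "dev", "staging", "survey", "files"]    # illustrative only
PROBE_PATHS = ["/.git/config", "/.env", "/backup.sql"]     # commonly exposed files

def resolves(host: str) -> bool:
    """Return True if the candidate subdomain has a DNS record."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

def probe(host: str) -> None:
    """Request a handful of sensitive paths and report any that return 200."""
    for path in PROBE_PATHS:
        url = f"https://{host}{path}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    print(f"[!] exposed: {url}")
        except Exception:
            pass  # unreachable, 404, TLS error, etc.

if __name__ == "__main__":
    for sub in WORDLIST:
        host = f"{sub}.{DOMAIN}"
        if resolves(host):
            print(f"[*] {host} resolves; probing known paths")
            probe(host)
```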

Much more to be found

Jackson said that the breach is extensive, but that much more was likely exposed prior to his group’s discovery.

“Honestly, there’s way more to be found. We were looking for big fish to fry.” Among other things, a Sakura Samurai researcher discovered exposed APIs for the Twilio cloud platform – those, too, could have been abused to extract data and personally identifying information from U.N. systems, he said.

In an email response to The Security Ledger, Farhan Haq, a Deputy Spokesman for the U.N. Secretary-General said that the U.N.’s “technical staff in Nairobi … acknowledged the threat and … took ‘immediate steps’ to remedy the problem.”


“The flaw was remedied in less than a week, but whether or not someone accessed the database remains to be seen,” Haq said in the statement.

A disclosure notice from the U.N. on the matter is “still in the works,” Haq said. According to Jackson, data on EU residents was among the data exposed in the incident. Under the terms of the European Union’s General Data Protection Regulation (GDPR), the U.N. has 72 hours to notify regulators about the incident.

Nation State Exposure?

Unfortunately, Jackson said that there is no way of knowing whether his group was the first to discover the exposed data. It is very possible, he said, that they were not.

“It’s likely that nation state threat actors already have this,” he said, noting that data like travel records could pose physical risks, while U.N. employee email and ID numbers could be useful in tracking and impersonating employees online and offline.

Another danger is that malicious actors with access to the source code of U.N. applications could plant back doors or otherwise manipulate the functioning of those applications to suit their needs. The recent compromise of software updates from the firm SolarWinds led to attacks on hundreds of government agencies and private sector firms. That incident has been tied to hacking groups associated with the government of Russia.

Asked whether the U.N. had conducted an audit of the affected applications, Haq, the spokesperson for the U.N. Secretary General said that the agency was “still looking into the matter.”

A Spotty Record on Cybersecurity

This is not the first cybersecurity lapse at the U.N. In January 2020, the website The New Humanitarian reported that the U.N. discovered but did not disclose a major hack into its IT systems in Europe in 2019 that involved the compromise of U.N. domains and the theft of administrator credentials.

In this episode of the podcast (#199), sponsored by LastPass, we’re joined by Barry McMahon, a Senior Global Product Marketing Manager at LogMeIn, to talk about data from that company weighing the security impact of poor password policies and what a “passwordless” future might look like. In our first segment, we speak with Shareth Ben of Securonix about how the massive layoffs resulting from the COVID pandemic have put organizations at far greater risk of data theft.


The COVID Pandemic has done more than scramble our daily routines, school schedules and family vacations. It has also scrambled the security programs of organizations large and small, first by shifting work from corporate offices to thousands or tens of thousands of home offices, and then by transforming the workforce itself through layoffs and furloughs.

In this episode of the podcast, we dig deep into COVID’s lesser-discussed legacy of enterprise insecurity.

Layoffs and Lost Data

We’ve read a lot about the cyber risks of Zoom (see our interview with Patrick Wardle) and of remote offices. But one of the less-mentioned cyber risks engendered by COVID is the wave of mass layoffs that has hit companies in sectors like retail, travel and hospitality, where business models have been upended by the pandemic. The Department of Labor said on Friday that employers eliminated 140,000 jobs in December alone. Since February 2020, employment in leisure and hospitality is down by some 3.9 million jobs, the Department estimates. If data compiled by our next guest is to be believed, many of those departing workers took company data and intellectual property out the door with them.

Shareth Ben is the executive director of field engineering at Securonix. That company has assembled a report on insider threats which found that most departing employees take some company data with them. Some of that is inadvertent – but much of it is not.

While data loss detection has long been a “thing” in the technology industry, Ben notes that evolving technologies like machine learning and AI are making it easier to spot patterns of behavior that correlate with data theft – for example, spotting employees who are preparing to leave a company and take sensitive information with them. In this discussion, Shareth and I talk about the Securonix study on data theft, how common the problem is, and how COVID and the layoffs stemming from the pandemic have exacerbated the insider data theft problem.
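The behavioral analytics Ben describes are far more sophisticated than this, but the core idea – flagging a user whose data movement deviates sharply from their own baseline – can be illustrated with a simple statistical check. The numbers and the three-sigma threshold below are hypothetical.

```python
"""Minimal sketch of baseline-deviation detection for data egress.
Sample volumes and threshold are hypothetical, for illustration only."""
import statistics

# Hypothetical daily upload volumes (MB) for one employee over recent weeks.
baseline = [12, 9, 15, 11, 10, 14, 13, 8, 12, 11, 10, 13, 9, 12, 14]
today = 410  # e.g., a sudden bulk export to personal cloud storage

def is_anomalous(history: list[float], value: float, sigmas: float = 3.0) -> bool:
    """Flag values more than `sigmas` standard deviations above the user's mean."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
    return (value - mean) / stdev > sigmas

if __name__ == "__main__":
    if is_anomalous(baseline, today):
        print("flag for review: egress volume far above this user's baseline")
```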

It’s Not The Passwords…But How We Use Them

Nobody likes passwords, but getting rid of them is harder than it seems. Even in 2021, user names and passwords are part and parcel of establishing access to online services – cloud-based or otherwise. But all those passwords pose major challenges for enterprise security. Data from LastPass suggests that the average organization’s IT department spends up to 5 person-hours a week just assisting users with password problems – almost a full day of work.

Barry McMahon is a senior global product marketing manager at LastPass and LogMeIn. He says that, despite talk of a “passwordless” future, traditional passwords aren’t going anywhere anytime soon. But that doesn’t mean that the current password regime of re-used passwords and sticky notes can’t be improved drastically – including by leveraging some of the advanced security features of smart phones and other consumer electronics. Passwords aren’t the problem so much as how we’re using them, he said.

To start off, I ask Barry about some of the research LastPass has conducted on the password problem in enterprises.


(*) Disclosure: This podcast was sponsored by LastPass, a LogMeIn brand. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.