Many individuals already use facial recognition technology to authenticate and authorize payments through their smartphones. According to Juniper Research, by 2025 (only four years away), 95 percent of smartphones will have biometric capabilities for authentication, including face, fingerprint, iris, and voice recognition, and these will be used to authenticate over $3 trillion in payment transactions on a yearly basis.

Technology vendors are increasingly using biometric information to provide services to consumers. For instance, Spotify recently released the “Hey Spotify” feature for its app. If you use Spotify, and the new feature is rolled out to your device, you will see a pop-up with a big green button at the bottom that reads, “Turn on Hey Spotify,” and a very small link in white that reads, “Maybe later.” Above the big green button, in white, is text that reads, “LEARN HOW WE USE VOICE DATA” and “When we hear ‘Hey Spotify’ your voice input and other information will be sent to Spotify.”

The big green button is very noticeable and the white text less so, but when you click on the “LEARN HOW” button, you are taken to a page that reads, “When you use voice features, your voice input and other information will be sent to Spotify.” Hmmm. What other information?

It continues, “This includes audio recording and transcripts of what you say, and other related information such as the content that was returned to you by Spotify.” This means that your biometric information (your voice) and what you actually say to Hey Spotify are collected by Spotify. Spoiler alert: you only have one voice, and you are giving it to an app that is collecting it and sharing it with others, including unknown third parties.

The Spotify terms then explain that Spotify will use your voice, audio recordings, transcripts, and the other information it collects “to help us provide you with advertising that is more relevant to you. It also includes sharing information, from time to time, with our service providers, such as cloud storage providers.” The terms then explain that you can “interact with advertisements on Spotify using your voice. During a voice-enabled ad, you will hear a voice prompt followed by an audible tone.” Of course, you should know that your response will then be recorded, collected, and shared.

In response to the question “Is Spotify recording all of my conversations?,” the terms state that “Spotify listens in short snippets of a few seconds which are deleted if the wake-word is not detected.” That means it is listening frequently until you say, “Hey Spotify.” The terms don’t say how often these short snippets occur.

Consumers can turn off the voice controls and voice ads by disabling the app’s access to the microphone. The same is true for any app with microphone access, which is why it is important to review your privacy settings frequently, see which apps have access to your microphone, and manage that capability (along with the permissions of all of the apps in your privacy settings).

It is important to know which apps have access to your biometric information and who they share it with, because you cannot manage that biometric information once you give it away. You don’t know how they are really using it, or how they are storing, securing, disclosing, or retaining it. Think about your Social Security number and how many times you have received a breach notification letter. You can try to protect your credit and your identity with credit monitoring and credit freezes, but no equivalent tools exist once your biometric information is disclosed to scammers and fraudsters.

Your voice can be used for fraudulent purposes. It can be used for authentication to get into accounts, and for vishing (see blog post on vishing here). Your voice is unique; before sharing it with apps or anyone else, it is worth considering how it will be secured. If the information is not secured and is subject to a security incident, it gives criminals another very potent tool to commit fraud against you and others.

Before providing your biometric information to any app, or anyone else for that matter, read the Privacy Policy and Terms of Use and understand what you are giving away merely for the convenience of using the app.

In this episode of the podcast (#206): with momentum toward passage of a federal data privacy law stronger than ever, we invite two experts into the Security Ledger studio to talk about what that might mean for U.S. residents and businesses.


Data theft and misuse have been an acute problem in the United States for years. And, despite the passage of time, little progress has been made in addressing it. Just this week, for example, SITA, an IT provider for the world’s leading airlines, said that a breach had exposed data on potentially millions of travelers – just the latest in a steady drumbeat of breach and hacking revelations affecting nearly every industry.

In the E.U., the rash of massive data breaches at retail firms, data brokers, and more led to the passage of the GDPR – the world’s first comprehensive data privacy regime. In the years since then, other nations have followed suit.

But in the U.S., despite the passage of a hodgepodge of state data privacy laws, no comprehensive federal law exists. That means there is still no clear federal framework covering critical issues such as data ownership, the disclosure of data breaches, private rights of action to sue negligent firms, and so on.

Changes In D.C. Bring Data Privacy Into Focus

But that may be about to change. In a closely divided Washington, D.C., data privacy is a rare issue with bipartisan support. And now, with Democrats in control of Congress and the White House, the push is on to pass pro-consumer privacy legislation into law.

We invited Rehan Jalil, the CEO of Securiti.ai, into the studio to dig deep on the security vs. privacy question. Securiti.ai is a firm that sells privacy management and compliance services.

In this conversation, Rehan and I talk about the evolving thinking on data privacy and security, and about the impact the EU’s GDPR and state laws like the CCPA are having on how businesses manage their data. Rehan and I also talk about whether technology might provide a way to bridge the gap between security and privacy: allowing companies to derive value from data without exposing it to malicious or unscrupulous actors.


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public, and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.

The UK National Cyber Security Centre (NCSC) issued an alert on October 16, 2020, to raise awareness “of a new remote code execution vulnerability (CVE-2020-16952),” which affects Microsoft’s SharePoint product. According to the alert, “successful exploitation of this vulnerability would allow an attacker to run arbitrary code and to carry out security actions in the context of the local administrator on affected installations of SharePoint server.”

The NCSC recommends applying security updates promptly, “but in this case the NCSC has previously seen a large number of exploitations of SharePoint vulnerabilities…against UK organisations…NCSC is issuing this alert to ensure that system owners are aware of this vulnerability and to ensure remediation actions are taken.”

According to the alert, the vulnerability affects:

  • Microsoft SharePoint Foundation 2013 Service Pack 1
  • Microsoft SharePoint Enterprise Server 2016
  • Microsoft SharePoint Server 2019

It is important to note that SharePoint Online, which is part of Office 365, is not affected by the vulnerability.

The NCSC “strongly advises that organisations refer to the Microsoft guidance…and ensure the necessary updates are installed in affected SharePoint products. It is also important to keep informed of any possible future updates to the guidance…”