Companies are on track to file 27% more cyber claims in 2020, one insurer estimates, while another underwriter finds five out of every 100 companies file a claim each year.

Cyberattacks and security incidents have become the top business risk for companies, with the number of insurance claims rising 27% in the first nine months of 2020, according to a report released earlier this month by insurance company Allianz.

Allianz policyholders filed 770 claims in the first nine months of the year, compared with a little more than 800 for all of 2019, the company stated in its “Trends in Cyber Risk” report. In a second report, the company found that while businesses ranked the “cyber incidents” category as the 15th most significant threat seven years ago, it took the top slot in 2020, with 39% of companies considering cyber incidents as the most important risk.

While part of the growth in claims is due to the overall expansion of the cyber-insurance market, the growing cost of cybercrime to companies is also a major factor, says Josh Navarro, executive underwriter in the Cyber and Professional Liability group for Allianz Global Corporate & Specialty (AGCS).

“A growing ‘commercialization of cyber-hacks’ is a contributing factor leading to a growth in ransomware claims in particular,” he says. “Increasingly, criminals are selling malware to other attackers who then target businesses demanding ransom payments, meaning high-end hacking tools are more widely available and cheaper to come by.”

Allianz is not the only insurer to see a jump in ransomware claims. Ransomware attacks accounted for 41% of policyholder claims, insurer Coalition stated in its 2020 “Cyber Insurance Claims Report,” released in September. Those ransomware incidents also grew more serious, with the dollar value of the average ransom demand doubling in a year, according to the insurer. 

“Although the frequency of ransomware claims has decreased by 18% from 2019 into the first half of 2020, we’ve observed a dramatic increase in the severity of these attacks,” Coalition stated in its report. “The ransom demands are higher, and the complexity and cost of remediation is growing.”

The trend toward more costly and numerous claims is also driven by the mass exodus of employees from offices to their homes in response to the coronavirus pandemic. While attackers targeted companies with an increased volume of phishing attacks, gaps in security measures — such as a lack of multifactor authentication or secure VPN access — left workers more vulnerable, AGCS’s Navarro says.

“Many companies were left unprepared for a high level of remote access, and gaps in security controls and procedures create an environment with increased exposure to bad actors,” he says. Add to that, “employees are not always following best practices in a remote environment, [which] increases the potential for phishing events to be successful, as well as data leakage.”

Overall, Allianz’s analysis of its cyber claims found that business interruption drove losses higher. Business interruption took second place in the insurer’s list of top risks, with 37% of companies rating it a top threat.

While ransomware accounted for a great deal of business interruption, human error was the most frequent threat, although with a much lower overall cost to the business. Accidental internal incidents account for 54% of all claims, but only 6% of the value of losses, meaning incidents had one-ninth the average cost. Malicious internal actors accounted for 3% by volume but 9% by value or triple the average per incident, and malicious external attacks accounted for 43% by volume and 86% by value, or about twice the average.
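Those cost multipliers follow directly from the volume and value shares; here is a quick back-of-the-envelope check (an illustrative script, using the figures as quoted above):

```python
# Shares of claim volume (count) and claim value (losses) by cause,
# as quoted from the Allianz analysis above.
causes = {
    "accidental internal": {"volume": 0.54, "value": 0.06},
    "malicious internal":  {"volume": 0.03, "value": 0.09},
    "malicious external":  {"volume": 0.43, "value": 0.86},
}

for name, share in causes.items():
    # A cause's average cost relative to the overall average claim equals
    # its share of total value divided by its share of total volume.
    multiplier = share["value"] / share["volume"]
    print(f"{name}: {multiplier:.2f}x the average claim cost")
```

This reproduces the one-ninth, triple, and roughly double multipliers cited in the text.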

Some attacks, such as NotPetya, caused damages so high that companies filed claims for as much as $1.3 billion, some of which insurers declined to pay, citing “act of war” clauses in the policies.

The claims data also showed that larger companies are hit with greater frequency than smaller companies, although smaller companies are far more numerous. Consumer retailers topped the list of targeted industries, accounting for 28% of all claims, while professional services accounted for 16% and healthcare accounted for 12% of claims, according to Coalition’s report.

Still, AGCS’s Navarro recommends that companies train their employees in best practices, especially phishing-awareness training, and use multifactor authentication, which insurer Coalition noted would have stopped the majority of attacks that led to claims. Finally, other technologies, such as network segmentation, can minimize the damage from an attack and make intruders easier to detect.

“Companies of all sizes need to invest heavily in a multipronged cybersecurity program,” says Navarro. “Cross-sector exchange and cooperation among companies … is also key when it comes to defying highly commercially organized cybercrime, developing joint security standards, and improving cyber resilience.”

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …


If you work within the security industry, you have likely heard compliance treated almost as a dirty word. You have likely run into situations like the one @Nemesis09 describes below. Here, we see it’s all too common for organizations to treat compliance testing as a checkbox exercise, thereby viewing compliance in a way that goes against its entire purpose.

There are challenges when it comes to compliance, for sure. Organizations need to figure out whether to shape their efforts to the letter of an existing law or to base their activities on the spirit of a “law” that best suits their security needs, even if that law doesn’t exist. There’s also the assumption that a company can achieve ‘good enough’ security by implementing a checkbox exercise, never mind the confusion explained by @Nemesis09.

Zoë Rose is a cyber security analyst at BH Consulting. A highly regarded hands-on cyber security specialist, she helps her clients better identify and manage their vulnerabilities and embed effective cyber resilience across their organisation.


However, there is truth behind why security compliance continues forward. It’s a bloody good way to focus efforts in the complex world of security. Compliance requirements also use terms that senior leadership understands, along with risk-based validation that cyber security teams can put to use.

Security is ever-changing. One day, you have everything patched and ready. The next, a major security vulnerability is publicized, and you rush to implement the appropriate updates. It’s only then that you realise that those fixes break something else in your environment.


Containers Challenge Compliance

Knowing where to begin your compliance efforts and where to focus investment in order to mature your compliance program is stressful and hard to do. Now, add to that the speed and complexity of containerisation, and three compliance challenges come to mind:

  1. Short life spans – Containers tend not to last long. They spin up and down over days, hours, even minutes. (By comparison, traditional IT assets like servers and laptops usually remain live for months or years.) Such dynamism makes container visibility hard to pin down. The environment may be in constant flux, but organizations need to make sure it always aligns with their compliance requirements, regardless of what’s live at the moment.
  2. Testing records – The last thing organizations want to do is walk into an audit without any evidence of the testing they’ve implemented on their container environments. These tests provide crucial evidence into the controls that organizations have incorporated into their container compliance strategies. With documented tests, organizations can help their audits to run more smoothly without needing to try to remember what they did weeks or months ago.
  3. Integrity of containers – Consider the speed of a container’s lifecycle, as discussed above. You need to carefully monitor your containers and practice highly restricted deployment. Otherwise, you won’t be able to tell if an unauthorized or unexpected action occurred in your environment. Such unanticipated occurrences could be warning signs of a security incident.
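The third challenge, container integrity, lends itself to automation. The sketch below compares the image digests of running containers against an allowlist produced at build time; the registry names, digests, and data shapes are all invented for illustration:

```python
# Hypothetical allowlist of approved image digests, e.g. emitted by the
# build pipeline when an image passes its pre-launch validation.
APPROVED_DIGESTS = {
    "registry.example.com/payments-api": "sha256:aaa111",
    "registry.example.com/web-frontend": "sha256:bbb222",
}

def check_integrity(running_containers):
    """Return running containers whose image digest is not approved."""
    return [
        c for c in running_containers
        if APPROVED_DIGESTS.get(c["image"]) != c["digest"]
    ]

running = [
    {"image": "registry.example.com/payments-api", "digest": "sha256:aaa111"},
    {"image": "registry.example.com/web-frontend", "digest": "sha256:zzz999"},
]

# Only the tampered web-frontend image should be flagged.
for violation in check_integrity(running):
    print("Integrity violation:", violation["image"])
```

Running such a check continuously, rather than only at audit time, is what turns integrity from a checkbox into evidence.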

Building a Container Security Program

One of the most popular certifications I deal with is ISO/IEC 27001, in which security is broken down into areas within the Information Security Management System. This logical separation allows for different areas of the business to address their security requirements while maintaining a holistic lens.

Let’s look at the first challenge identified above: short container life spans. Organizations can address this obstacle by building their environments in a standardized way: hardening them with appropriate measures and validating them continuously at build-time and (importantly) at run-time. This means having systems in place to actively monitor the actions these containers take and the interactions between systems and running services, along with alerts for unexpected transactions.
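As a toy illustration of that run-time validation, the following sketch flags container transactions that fall outside an expected baseline. The baseline contents and event shapes are assumptions for the example, not any particular tool’s API:

```python
# Expected run-time behaviour per container, derived (hypothetically) from
# the standardized, hardened build. Anything outside the baseline is alerted on.
BASELINE = {
    "payments-api": {"outbound_hosts": {"db.internal", "queue.internal"}},
}

def unexpected_transactions(container, events):
    """Return outbound events that are not in the container's baseline."""
    allowed = BASELINE.get(container, {}).get("outbound_hosts", set())
    return [e for e in events if e["dest"] not in allowed]

events = [
    {"dest": "db.internal"},   # expected by the baseline
    {"dest": "203.0.113.50"},  # not in the baseline: should be alerted on
]

for alert in unexpected_transactions("payments-api", events):
    print("ALERT: unexpected outbound connection to", alert["dest"])
```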

Now for the second challenge above. In order to have resilient containers in production, an organisation has to have a proper validation/testing phase done prior to launch. In almost every program I have been a part of, when rolling out new features or services, there is always a guide on “Go/No Go” requirements. This includes things like which tests can fail gracefully, which types of errors are allowed and which tests are considered a “no go” because they can cause an incident or the transaction cannot be completed. In a containerised environment, such requirements could take the form of bandwidth or latency requirements within your network. These elements, among others, could shape the conditions for when and to what extent your organization is capable of running a test.
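The “Go/No Go” guide described above can be sketched as a simple gate over test results; the categories, fields, and latency threshold here are hypothetical:

```python
# A toy "Go/No Go" gate: "blocking" failures or breached latency
# requirements are a no-go; "graceful" failures are tolerated.
def release_decision(test_results, max_latency_ms=200):
    for t in test_results:
        if not t["passed"] and t["category"] == "blocking":
            return "NO GO: blocking test failed: " + t["name"]
        if t.get("latency_ms", 0) > max_latency_ms:
            return "NO GO: latency requirement breached: " + t["name"]
    return "GO"

results = [
    {"name": "checkout-flow", "passed": True, "category": "blocking", "latency_ms": 120},
    {"name": "optional-banner", "passed": False, "category": "graceful"},
]
print(release_decision(results))  # the graceful failure alone does not block
```

Keeping the gate’s output with each release also produces exactly the kind of testing record an auditor will ask for.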

In addressing the third challenge, the integrity of containers, we face a major compliance issue. Your organization therefore needs to ask itself the following questions:

  • Have we ever conducted a stress test of our containers’ integrity before?
  • Has our environment ever had a table-top exercise done with the scenario of a container gone rogue?
  • Has a red team exercise ever been launched with the sole purpose of disrupting or attacking the integrity of said containers?

Understand the Value of Compliance

In this article, the author discusses the best practices and known risks associated with Docker. It covers the expected foundations that you must align with in order to reduce the likelihood of a configuration causing an incident within your containerized infrastructure.

No environment is perfect, and no solution is 100% secure. That being said, the value of compliance when it comes to containerisation security programs is to validate these processes so that they can help to reduce the likelihood of an incident, quickly identify the occurrence of events and minimize the potential impact on the overall environment.

Whilst compliance is often seen as a dirty word, it can be leveraged to enhance the overall program through a holistic lens, becoming something richer and attractive to all parties.

Active Directory account lockouts can be hugely problematic for organizations. There have been documented instances of attackers leveraging the account lockout feature in a type of denial of service attack. By intentionally entering numerous bad passwords, attackers can theoretically lock all of the users out of their accounts.

But what do you do if you are experiencing problems with account lockouts?

The Windows operating system is somewhat limited in its ability to troubleshoot account lockouts, but there are some things that you can do. For example, you can use Windows PowerShell to determine which accounts have been locked out. The command for doing so is:

Search-ADAccount -LockedOut -UsersOnly | Select-Object Name, SamAccountName

Incidentally, the UsersOnly parameter prevents computer objects from being included in the results, while the Select-Object command filters the results list to display only the user’s name and their account name.

If you find that accounts have been locked out, then there are a couple of ways of unlocking them. You can unlock accounts one at a time by using this command:

Unlock-ADAccount -Identity <username>

If, on the other hand, you need to unlock user accounts in bulk, then you can do so with this command:

Search-ADAccount -LockedOut | Unlock-ADAccount

While it is undeniably important to be able to unlock user accounts, it is equally important to be able to find out why accounts were locked out in the first place. You can gain a little bit of insight into the problem by using a variation of the Search-ADAccount command that you saw a moment ago:

Search-ADAccount -LockedOut | Select-Object *

This command will display additional information about all of the accounts that have been locked out. You can use this information to find out when the user last logged on and whether the user’s password is expired. Because this command can return a lot of data, you may find it helpful to write the results to a CSV file. Here is an example of how to do so:

Search-ADAccount -LockedOut | Select-Object * | Export-CSV -Path c:\temp\lockout.csv

It is possible to go further with Active Directory lockout troubleshooting using the native Windows tools, but in order to do so, you’re going to need to make a change to your group policy settings prior to lockouts occurring. Oddly enough, account lockouts are not logged by default.

You can enable logging by opening the Group Policy Editor and navigating through the console tree to Computer Configuration | Windows Settings | Security Settings | Advanced Audit Policy Configuration | System Audit Policies | Account Management. Now, enable both success and failure auditing for user account management.

Once the new group policy setting has been applied across the domain, event number 4740 will be written to the Security event log any time an account becomes locked out. You can then retrieve these events with the Get-WinEvent cmdlet:

Get-WinEvent -FilterHashtable @{LogName="Security"; ID=4740}

There is a good chance that this command will produce an overwhelming number of results. You can use the Select-Object cmdlet to limit the number of results shown. If, for instance, you only want to see the ten most recent results, you could use this command:

Get-WinEvent -FilterHashtable @{LogName="Security"; ID=4740} | Select-Object UserID, Message -First 10

Notice that I also included references to UserID and Message in the Select-Object cmdlet. The UserID will cause the username to be displayed, and the reference to Message will cause PowerShell to display detailed information about the event. Perhaps the most useful item displayed in the message is the Caller Computer Name, which reflects the name of the machine that caused the user account to be locked out. If necessary, you can also use the TimeCreated property to find out when the lockout occurred.

The command shown above can sometimes cut off the Message. If this happens to you, you can get around this problem by appending the Format-List command, as shown below:

Get-WinEvent -FilterHashtable @{LogName="Security"; ID=4740} | Select-Object UserID, Message -First 10 | Format-List

As you can see, Windows is limited in its ability to help you troubleshoot account lockout problems. If you consistently experience account lockout issues and need additional troubleshooting capabilities, or if you, like many other organizations, are seeing an increase in lockout-related calls during the global pandemic, then you might consider checking out some of the third-party tools that are available, such as a self-service password reset solution.

Identifying what is driving lockouts and rectifying the issue is one part of the equation. To address the issue holistically, IT departments need to provide users with the ability to unlock their own accounts securely, anytime, anywhere.

The business priority of speed of development and deployment is overshadowing the need for secure code.

The push to develop and deploy applications faster has evolved from simply a goal for developers to a business-level priority that affects every organization’s bottom line. To meet this goal, companies have begun to de-silo development, operations, and security, moving toward a DevSecOps model to deliver increased agility and speed in the software development life cycle (SDLC).

Often lost in the chaos of this cultural shift to a “need for speed” SDLC approach is the misalignment between DevOps and security practitioners’ goals. Both teams must strive to balance their respective goals: getting new features out the door and minimizing software risk. We know this misalignment contributes to vulnerable code being shipped more often than it should be, but what most people don’t realize is that this is happening knowingly, and quite often. According to a recent ESG research report, almost half (48%) of organizations are regularly pushing vulnerable code, and they know it.

The simple question this statistic raises is: why? Cybersecurity continues to be a priority concern for every organization, with one vulnerability holding the potential to diminish a brand’s reputation — that took decades to build — in just a few seconds. So why are developers knowingly deploying vulnerable code?  

The findings of the report help shed some light on the reasons:

  • 54% of organizations push vulnerable code in order to meet a critical deadline, with plans to remediate in a later release.
  • 49% of organizations push vulnerable code because they think it holds very low risk.
  • 45% of organizations publish vulnerable code because the vulnerabilities were discovered too late in the cycle to resolve them before the code was deployed.

This data helps us understand that the business priority of speed of development and deployment is overshadowing the need for secure code. Organizations are acknowledging various risks regarding cybersecurity, speed, and business objectives, and they’re willing to take on some of these risks because their priorities are in opposition to one another. To address the frequency of knowingly publishing vulnerable code, we must look at the underlying causes of such a challenging situation.

First, developers — who ultimately own the responsibility for fixing flaws in code — don’t have the training or access to security tools with the proper integrations to be effective at mitigating vulnerabilities. ESG’s report found:

  • 29% of developers lack the knowledge to mitigate issues identified with current testing tools.
  • 26% of developers found difficulty or a lack of integration between different AppSec vendor tools to cause challenges.
  • 26% of developers said testing tools added friction or slowed down development cycles.
  • 35% of organizations say less than half of their development teams are involved in formal AppSec training.
  • Less than half of developers are engaged in formal AppSec training more than once a year.

These challenges can and should be addressed to help developers reduce the volume of vulnerable code being shipped.

As security continues to shift left into the hands of developers, organizations need to continue to prepare for this culture change by modernizing their approach to AppSec.

For AppSec managers and CISOs looking to create an integrated DevSecOps approach, some common elements of effective AppSec programs include:

  • Application security controls being highly integrated into the CI/CD toolchain.
  • Application security training being an essential part of developer training programs.
  • Application security and development best practices being documented and communicated.
  • Security issues and continuous improvement metrics being tracked for development teams.

It’s up to security and development team leaders to formalize an AppSec strategy that encompasses training, goal setting, tool integration, and policy enforcement without introducing too much friction.

Even with a robust application security program, organizations will still deploy vulnerable code! The difference is that they will do so with a thorough and contextual understanding of the risks they’re taking rather than allowing developers or engineering managers — who lack security expertise — to make that decision. Application security requires a constant triage of potential risks, involving prioritization decisions that allow development teams to mitigate risk while still meeting key deadlines for delivery.
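That constant triage can be made concrete with a small prioritization sketch. The scoring rule below (severity weighted by exposure) is an invented example, not a standard:

```python
# Rank open findings so the riskiest are mitigated first. Severity is a
# CVSS-like score; internet-facing flaws get double weight (invented rule).
def triage(findings):
    def risk(f):
        exposure = 2.0 if f["internet_facing"] else 1.0
        return f["severity"] * exposure
    return sorted(findings, key=risk, reverse=True)

findings = [
    {"id": "VULN-1", "severity": 4.0, "internet_facing": False},
    {"id": "VULN-2", "severity": 6.5, "internet_facing": True},
    {"id": "VULN-3", "severity": 9.1, "internet_facing": False},
]

# VULN-2 scores 13.0 once exposure is weighted, so it outranks VULN-3.
print([f["id"] for f in triage(findings)])
```

The point is not the particular weights but that the ranking is explicit, repeatable, and visible to both security and engineering leaders.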

As application security has matured, no single testing technique has helped development teams mitigate all security risk. Teams typically employ multiple tools, often from multiple vendors, at various points in the SDLC. Usage varies, as do the tools that organizations deem most important, but most organizations end up utilizing a set of tools to satisfy their security needs.

Lastly, while most organizations provide developers with some level of security training, more than 50% only do so annually or less often. This is simply not frequent or thorough enough to develop secure coding habits. While development managers are often responsible for this training, in many organizations, application security analysts carry the burden of performing remedial training for development teams or individual developers who have a track record of introducing too many security issues. Organizations should seek more hands-on training tools that offer secure coding practices through interactive exercises based on modern threats that developers can practice exploiting and patching. A labs-based approach to developer enablement can improve time to resolve flaws and help developers learn to avoid flaws altogether.

Chris Eng is Chief Research Officer at Veracode. Throughout his career, he has led projects breaking, building, and defending software for some of the world’s largest companies. He is an unabashed supporter of the Oxford comma and hates it when you use the word “ask” as …


Not everyone in a security department is acting in good faith, and bad actors will do what they can to take advantage of those who do. Here’s how to spot them.

Wikipedia defines “good faith” as “a sincere intention to be fair, open, and honest, regardless of the outcome of the interaction.” A person who acts in good faith must be truthful and forthcoming with information, even if it affects the end state of a negotiation or transaction. In other words, lying and withholding information, by their very nature, make an interaction anything but good faith.

For many security professionals, good faith is the only way they know how to operate. Unfortunately, the security profession, like any profession, has its share of bad faith actors, too. For example, consider a co-worker who is underperforming and introducing unnecessary risk into the security organization. In certain cases, underperformers will look to sabotage others rather than improve the quality of their work. Or, as another example, consider a bad faith actor who is out to gain competitive intelligence or other information that can be used for any number of purposes, including social engineering.

How can good faith security practitioners identify bad actors and understand when they’re being taken advantage of? Here are five signs.

1. Information hoarding: Ever had a conversation, meeting, chat correspondence, or email exchange that feels more like an interrogation than a two-way exchange of information? This is a well-known trick of – and sign of – a bad faith actor. By the time most good faith actors catch on to the fact that the information flow is entirely one-way, they’ve already given the bad faith actor a wealth of information.

2. My way or the highway: As a generally rational bunch, good faith actors understand that life is a give and take. But bad faith actors know only how to take, making it difficult to negotiate. Their only concern is what they want, and they will employ a variety of tactics to get what they want while offering little to nothing in return. Unfortunately, good faith actors often fall for this approach, as they would rather disengage and get back to constructive activities than get dirty wrestling in the mud with a bad actor.

3. False generosity: When bad faith actors seek to manipulate people or situations, they will sometimes make what appears to be a generous offer. In reality, these offers often come at a tremendous cost. How so? If a good faith actor takes a bad faith actor up on an offer, it could be used against them in the future. The bad faith actor could also attempt to convince others of their “good nature” and “generosity” by pointing to a good faith actor who took the offer.

4. Bait and switch: Bait and switch is one of the oldest tricks in the book. As the Latin phrase so aptly states, caveat emptor: Buyer beware. Bad faith actors will often make promises of something they have absolutely no intention of giving to extract what they want from good actors. Once they have what they were after, they go quiet or become evasive. The chances of a good faith actor ever seeing what they wanted are very slim.

5. Promoting a narrative: One way bad faith actors seek out, persuade, and take advantage of new victims is by surrounding themselves with a chorus of approvers. This “posse,” of sorts, may consist of witting and/or unwitting accomplices. In some cases, accomplices were recruited via lies or manipulation. In other cases, the accomplices may have their own motivations for why they wish to partake in certain bad faith activities. In any event, bad faith actors will often promote a narrative to help convince new audiences they can be believed. This can be difficult to navigate and often catches good faith actors by surprise.

In the end, a heaping dose of awareness – and even a bit of healthy cynicism – of misleading behaviors can stop bad faith actors from taking advantage and achieving their goals.

Josh (Twitter: @ananalytical) is currently Director of Product Management at F5. Previously, Josh served as VP, CTO – Emerging Technologies at FireEye and as Chief Security Officer for nPulse Technologies until its acquisition by FireEye. Prior to joining nPulse, …


Each security incident should lead to a successive reduction in future incidents of the same type. Organizations that fail toward zero embrace failure and learn from their mistakes.

“Hard times create strong people.”

“What doesn’t kill you makes you stronger.”

Maybe you’ve whispered these mantras to yourself in the aftermath of a personal setback at home or work. We’ve all heard some take on this expression, but the sentiment is always the same: Failing doesn’t feel good in the moment, but it’s possible to appreciate failure as a lesson in overcoming adversity. To put it simply, you have to fail in order to get better.

But what if the stakes for failure mean more than another checkmark under the “loss” column?

This is the predicament faced by organizations every day when it comes to cybersecurity. At best, failure means an embarrassing and inconvenient organizational disruption. At worst, it means a catastrophic loss of records and loss of business.

Failure, it would seem, is not an option when it comes to cybersecurity. Or is it?

Author and scholar Nassim Nicholas Taleb can help us answer this question. Taleb has a useful concept called “antifragile,” which he uses to describe any person, organization, or entity that benefits from failure. Not only that, as Taleb puts it, the antifragile “loves” randomness, uncertainty, volatility, and errors. Think of it as evolution with a twist. Instead of survival of the fittest, this is survival of the smartest. Whoever can understand and react to environmental stressors best wins.

And let’s face it, your cybersecurity will fail at some point. There’s no such thing as 100% protection. Cybercriminals need to succeed only once, but organizations need to succeed every time. And while it’s more than likely that your organization will be the target of a successful cyberattack, a successful attack doesn’t necessarily mean a catastrophic data breach. If you know your security is going to fail at some point, you can prepare for this eventuality and mitigate its impact on operations. It’s at this intersection of antifragility and cybersecurity that we get a model I’m calling “failing toward zero.”

Failing toward zero is a state in which each security incident leads to a successive reduction in future incidents of the same type. Organizations that fail toward zero embrace failure and learn from their mistakes. Our data suggests that smart companies are already starting to do this.

The Data Science and Engineering team at Malwarebytes examined all detection data on business endpoints for the past three years. It’s no surprise that malware detections on business endpoints went up every single year, from 7,553,354 in 2017 to around 49 million in 2020 — and the year isn’t even over yet.

However, the detections we’re facing today are different from those we saw just a few years ago. Two of the biggest blockbuster threats of yesteryear — spyware and Trojans — are both down. Since the winter of 2019, spyware detections on business targets dropped 49%, while Trojans dropped 63%. Criminals have since altered tactics in favor of adware and hacktools, a category of riskware that is used to hack into computers and networks. While adware is mostly a nuisance, hacktools can be used to gain access to a system, steal data, and distribute malware. We’ve seen hacktools detections increase 2,431% since winter 2019.
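Those year-over-year figures are easy to sanity-check from the underlying counts (the 2020 total is approximate, and the percent-change helper below uses invented illustrative inputs):

```python
# Detection counts quoted above (the 2020 figure is approximate; the year
# was still incomplete when it was reported).
detections_2017 = 7_553_354
detections_2020 = 49_000_000

growth = detections_2020 / detections_2017
print(f"Detections grew roughly {growth:.1f}x from 2017 to 2020")

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Illustrative inputs: a category going from 100 to 2,531 detections
# corresponds to the 2,431% hacktools-style increase quoted above.
print(f"{pct_change(100, 2531):.0f}%")
```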

And it’s not that spyware and Trojans have gone away. With the help of technologies like machine learning, we discover new strains from these threat categories every day. The truth is that businesses that suffer breaches tend to get better at dealing with them. Yes, they “failed” in the sense that their network security had been breached, but they were failing toward zero.

Now that we have hacktools to contend with, how can we fail toward zero?

The mechanics of failing toward zero vary. Thanks to machine learning, your endpoint protection should be able to “learn” a strain of malware and automatically block threats that behave similarly. There’s an equally critical human element as well. You should have an incident response team and put your team and procedures to the test in the following ways:

  • Deliberately introduce stress into the system and see how your team responds in the face of failure.
  • Figure out how you will maintain business continuity during and after an attack.
  • Make sure employees receive adequate training.
  • Ensure institutional knowledge is properly documented for new team members.

Look at your own data. Are you part of the group that’s failing toward zero or are you part of the group that’s failing toward infinity?

Beyond this basic blocking and tackling, perhaps the biggest challenge in failing toward zero is just to accept failure as a condition of long-term success. We’re programmed to win, especially when so much is at stake. We’ve developed a mindset opposite of failing toward zero — the “losing is not an option” mindset. Frankly, that mindset is not helpful.

I prefer to think of it like this: If your network is breached and you’re able to stop that breach before any damage is done and, most importantly, you know that it’s not going to happen again, then you’ve actually won.

Taleb sometimes calls errors “unknowledge.” Being ignorant and lacking knowledge is an error in and of itself. I cannot overstate how important it is to study and act on the data from past attacks. So, take the time to study the shortcomings in your security. Look to the past and study attacks at your business and other businesses as well. Cybercriminals have done the work of finding the failures in your security. Take advantage of that.

To fail toward zero, you’ve got to see the error in your ways. Or as Taleb might put it, you’ve got to see the way in your errors.

Akshay Bhargava is the Chief Product Officer at Malwarebytes. He drives the company’s technology vision, product road map and execution. He previously served as Vice President for Oracle’s Cloud Business Group, as a product executive at FireEye and as a management consultant.


A cyberespionage group with suspected ties to the Kazakh and Lebanese governments has unleashed a new wave of attacks against a multitude of industries with a retooled version of a 13-year-old backdoor Trojan.

In a report published yesterday, Check Point Research called out hackers affiliated with a group named Dark Caracal for their efforts to deploy “dozens of digitally signed variants” of the Bandook Windows Trojan over the past year, thus once again “reigniting interest in this old malware family.”

The different verticals singled out by the threat actor include government, financial, energy, food industry, healthcare, education, IT, and legal institutions located in Chile, Cyprus, Germany, Indonesia, Italy, Singapore, Switzerland, Turkey, and the US.

The unusually large variety of targeted markets and locations “reinforces a previous hypothesis that the malware is not developed in-house and used by a single entity, but is part of an offensive infrastructure sold by a third party to governments and threat actors worldwide, to facilitate offensive cyber operations,” the researchers said.

Dark Caracal’s extensive use of Bandook RAT to execute espionage on a global scale was first documented by the Electronic Frontier Foundation (EFF) and Lookout in early 2018, with the group attributed to the theft of enterprise intellectual property and personally identifiable information from thousands of victims spanning over 21 countries.

The prolific group, which has operated since at least 2012, has been linked to the Lebanese General Directorate of General Security (GDGS), leading researchers to deem it a nation-state-level advanced persistent threat.

The concurrent use of the same malware infrastructure by different groups for seemingly unrelated campaigns led the EFF and Lookout to surmise that the APT actor “either uses or manages the infrastructure found to be hosting a number of widespread, global cyberespionage campaigns.”

Now the same group is back at it with a new strain of Bandook, with added efforts to thwart detection and analysis, per Check Point Research.

A Three-Stage Infection Chain

The infection chain is a three-stage process that begins with a lure Microsoft Word document (e.g., “Certified documents.docx”) delivered inside a ZIP file. When opened, the document downloads malicious macros, which then drop and execute a second-stage PowerShell script that had been stored, encrypted, inside the original Word document.

In the last phase of the attack, this PowerShell script downloads encoded executable parts from cloud storage services such as Dropbox or Bitbucket and assembles the Bandook loader, which then injects the RAT into a new Internet Explorer process.

The Bandook RAT — commercially available starting in 2007 — comes with all the capabilities typically associated with backdoors in that it establishes contact with a remotely-controlled server to receive additional commands ranging from capturing screenshots to carrying out various file-related operations.

But according to the cybersecurity firm, the new variant of Bandook is a slimmed-down version of the malware with support for only 11 commands, while prior versions were known to feature as many as 120 commands, suggesting the operators’ desire to reduce the malware’s footprint and evade detection against high-profile targets.

That’s not all. Not only was this trimmed version of the malware executable signed with valid certificates issued by Certum, but Check Point researchers also uncovered two more samples, a full-fledged digitally signed variant and an unsigned one, which they believe are operated and sold by a single entity.

“Although not as capable, nor as practiced in operational security like some other offensive security companies, the group behind the infrastructure in these attacks seems to improve over time, adding several layers of security, valid certificates and other techniques, to hinder detection and analysis of its operations,” the researchers concluded.

Apps Data

Two popular Android apps from Chinese tech giant Baidu were temporarily unavailable on the Google Play Store in October after they were caught collecting sensitive user details.

The two apps in question—Baidu Maps and Baidu Search Box—were found to collect device identifiers, such as the International Mobile Subscriber Identity (IMSI) number or MAC address, without users’ knowledge, thus making them potentially trackable online.

The discovery was made by network security firm Palo Alto Networks, which notified both Baidu and Google of its findings, after which the search giant pulled the apps from the Play Store on October 28, citing “unspecified violations.”

A compliant version of Baidu Search Box was restored to the Play Store on November 19, while Baidu Maps remains unavailable until the issues highlighted by Google are fixed.

A separate app named Homestyler was also found to collect private information from users’ Android devices.

According to Palo Alto researchers, the full list of data collected by the apps includes:

  • Phone model
  • Screen resolution
  • Phone MAC address
  • Carrier (Telecom Provider)
  • Network (Wi-Fi, 2G, 3G, 4G, 5G)
  • Android ID
  • IMSI number
  • International Mobile Equipment Identity (IMEI) number

Using a machine learning-based algorithm designed to detect anomalous spyware traffic, the origin of the data leak was traced to Baidu’s Push SDK as well as ShareSDK from the Chinese vendor MobTech, the latter of which supports 37,500 apps, including more than 40 social media platforms.
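As a rough illustration of what “detecting anomalous spyware traffic” can mean in practice, here is a minimal, assumption-laden sketch that flags endpoints whose outbound traffic volume is a statistical outlier. The hostnames and byte counts are invented, and production systems use far more sophisticated models than a z-score cutoff.

```python
from statistics import mean, stdev

def anomalous_endpoints(traffic, z_cutoff=3.0):
    """Flag endpoints whose outbound byte counts deviate strongly from
    the mean. A crude stand-in for the ML traffic analysis described
    above; all numbers and names here are illustrative only."""
    volumes = list(traffic.values())
    mu, sigma = mean(volumes), stdev(volumes)
    return [host for host, sent in traffic.items()
            if sigma and (sent - mu) / sigma > z_cutoff]

# Twenty apps sending ~1 KB each, plus one exfiltration-sized outlier.
traffic = {f"app-{i}.example.net": 1_000 + i for i in range(20)}
traffic["push-sdk.example.cn"] = 250_000
```

Running `anomalous_endpoints(traffic)` singles out the one host whose volume dwarfs the rest, which is the intuition behind tracing a leak back to a specific SDK’s traffic.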

While Google has taken steps to secure the Play store and stop the malicious activity, bad actors are still finding ways to infiltrate the app marketplace and leverage the platform for their gain.

Indeed, an academic study published by researchers from NortonLifeLock earlier this month found the Play Store to be the primary source of malware installs (about 67.5%) on Android devices, based on an analysis of app installations on 12 million handsets over a four-month period between June and September 2019, fueled in part by the platform’s wide popularity.

However, its vector detection ratio (the share of apps installed through a given vector that turn out to be unwanted) was found to be only 0.6%, compared with 3.2% for alternative third-party app stores.
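To make the two figures concrete, here is a quick worked computation of the vector detection ratio. The install counts are invented and chosen only so that the ratios match the ones reported in the study.

```python
def vector_detection_ratio(unwanted, total):
    """Share of installs through a vector that were unwanted apps."""
    return unwanted / total

# Hypothetical install counts (assumptions, not the study's raw data):
play_total, play_unwanted = 1_000_000, 6_000          # ratio 0.6%
third_party_total, third_party_unwanted = 50_000, 1_600  # ratio 3.2%

play_ratio = vector_detection_ratio(play_unwanted, play_total)
alt_ratio = vector_detection_ratio(third_party_unwanted, third_party_total)
```

Even though the per-install risk on Play is roughly five times lower, its sheer install volume still makes it the larger absolute source of unwanted apps, which is exactly the researchers’ point.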

“Thus, the Play market defenses against unwanted apps work, but still significant amounts of unwanted apps are able to bypass them, making it the main distribution vector for unwanted apps,” the researchers said.

If anything, the incident is yet another reminder that no app, even if developed by a legitimate third-party, can be taken for granted.

This also means that the usual safeguards, such as scrutinizing app reviews, developer details, and the list of requested permissions, may not offer enough protection, making it difficult to ascertain whether a permission is being misused by cybercriminals to steal private data.

“In mobile devices, it is typical to ask a user to grant a list of permissions upon installation of an application or to prompt a user to allow or deny a permission while the application is running,” Palo Alto researchers concluded.

“Disallowing permissions can often result in a non-working application, which leads to a bad user experience and might tempt a user to click on ‘allow’ just to be able to use an application. Even if a certain permission is granted, it is often up to the app developers whether it is used in accordance with the official guidelines.”

White Hat Hacker

Many of us here would love to turn hacking into a full-time career. To make that dream come true, you need to master your subject and earn some key certifications.

To speed up this process, you might want to take a little guidance from the experts.

Featuring 98 hours of content from top instructors, The Ultimate 2020 White Hat Hacker Certification Bundle is the ultimate launchpad for your career. It provides an incredible introduction to white hat hacking and helps you become a CompTIA-certified professional.

The courses in this bundle are separately worth $1,345, but The Hacker News has put together a special deal for readers.

Special Offer — For a limited time, you can pick up all 10 courses for just $39.90 with this bundle. That’s a 97% saving on the full price!

Learn hacking

According to industry experts, there will be 3.5 million unfilled cybersecurity jobs by next year. If you want to take advantage of this gold rush, now is an excellent time to start studying.

Perfect for beginners and improvers alike, this bundle helps you master the fundamentals of cybersecurity and white hat hacking.

Through concise video lessons, you learn about Darknet, malware, exploit kits, phishing, zero-day vulnerabilities, identity theft, and countless other threats. You also learn how to set up defenses, run your own attacks, and automate pentesting.

Just as importantly, the training helps you work towards two industry certifications: CompTIA PenTest+ and CySA+ Cybersecurity Analyst.

Order today to get lifetime access to all the courses at 97% off the full price.

Three Nigerian citizens suspected of belonging to an organized cybercrime group behind malware distribution, phishing campaigns, and extensive Business Email Compromise (BEC) scams have been arrested in the city of Lagos, Interpol reported yesterday.

The investigation, dubbed “Operation Falcon,” was jointly undertaken by the international police organization along with Singapore-based cybersecurity firm Group-IB and the Nigeria Police Force, the principal law enforcement agency in the country.

About 50,000 targeted victims of the criminal schemes have been identified so far, as the probe continues to track down other suspected gang members and the monetization methods employed by the group.

Group-IB’s participation in the year-long operation came as part of Interpol’s Project Gateway, which provides a framework for agreements with selected private sector partners and receives threat intel directly.

“The suspects are alleged to have developed phishing links, domains, and mass mailing campaigns in which they impersonated representatives of organizations,” Interpol said. “They then used these campaigns to disseminate 26 malware programmes, spyware and remote access tools, including AgentTesla, Loki, Azorult, Spartan and the nanocore and Remcos Remote Access Trojans.”

In addition to perpetrating BEC campaigns and sending out emails with malware-laced attachments, the attacks have been used to infiltrate and monitor the systems of victim organizations and individuals, leading to the compromise of at least 500,000 government and private sector companies in more than 150 countries since 2017.

According to Group-IB, the three individuals — identified only by their initials OC, IO, and OI — are believed to be members of a gang which it has been tracking under the moniker TMT, a prolific cybercrime crew that it says is divided into multiple smaller subgroups based on an analysis of the attackers’ infrastructure and techniques.

Some of their mass phishing campaigns took the form of purchase orders, product inquiries, and even COVID-19 aid offers impersonating legitimate companies, with the operators leveraging Gammadyne Mailer and Turbo-Mailer to send out phishing emails. The group also relied on MailChimp to track whether a recipient opened the message.

The ultimate goal of the attacks, Group-IB noted, was to steal authentication data from browsers, email clients, and FTP clients at companies located in the US, the UK, Singapore, Japan, and Nigeria, among others.

“This group was running a well-established criminal business model,” Interpol’s Cybercrime Director Craig Jones noted. “From infiltration to cashing in, they used a multitude of tools and techniques to generate maximum profits.”