Binary Check Ad Blocker Security News

In this Spotlight Podcast, sponsored by RSA, we take on the question of securing the 2020 Presidential election. Given the magnitude of the problem, could taking a more risk-based approach to security pay off? We’re joined by two information security professionals: Rob Carey is the Vice President and General Manager of Global Public Sector Solutions at RSA. Also joining us: Sam Curry, the CSO of Cybereason.


With just over two months until the 2020 presidential election in the United States, campaigns are entering the final stretch as states and local governments prepare for the novel challenge of holding a national election amidst a global pandemic. 


Lurking in the background: the specter of interference and manipulation of the election by targeted, disinformation campaigns like those Russia used during the 2016 campaign – or by outright attacks on election infrastructure. A report by the Senate Intelligence Committee warns that the Russian government is preparing to try to influence the 2020 vote, as well.

A Risk Eye on the Election Guy

Securing an election that takes place over weeks or even months across tens of thousands of cities and towns – each using a different mix of technology and process – may be an impossible task. But that’s not necessarily what’s called for, either.


Security experts say that, like large organizations contending with myriad threats, election officials would do well to adopt a risk-based approach to election security: focusing staff and resources on the communities and systems that are most critical to the outcome of the election.

What does such an approach look like? To find out, we invited two seasoned security professionals with deep experience in cyber threats targeting the public sector.

Robert J. Carey is the  Vice President and GM of Global Public Sector Solutions at RSA.


Rob retired from the Department of Defense in 2014 after more than 31 years of distinguished public service, most recently serving 3½ years as DoD Principal Deputy Chief Information Officer.


Also with us this week is Sam Curry, Chief Security Officer of the firm Cybereason. Sam has had a long career in information security, including work as CTO and CISO for Arbor Networks (NetScout), CSO and SVP of R&D at MicroStrategy, in addition to senior security roles at McAfee and CA. He spent seven years at RSA, variously as CSO, CTO and SVP of Product, and as Head of RSA Labs.


To start off our conversation: with a November election staring us in the face, I asked Rob and Sam what they imagined the next few weeks would bring in terms of election security.

Like Last Time – But Worse

Both Rob and Sam said that the window has closed for major new voting security initiatives ahead of the 2020 vote. “This election…we’re rounding third base. Whatever we’ve done, we have to put the final touches on,” said Carey.

Like any other security program, election security needs baselines, said Curry. Election officials need to “game out” various threat and hacking scenarios and contingencies, figuring out how they would respond and how communications with the public would be handled in the event of a disruption, Curry said.

“The result we need is an election with integrity and the notion that the people have been heard. So let’s make that happen,” Curry said.


Carey said that – despite concerns – little progress had been made on election security. “The elections process has not really moved forward much. We had hanging chads and then we went to digital voting and then cyber came out and now we’re back to paper,” he said.

Going forward, both agree that there is ample room for improvement in election security – whether through digital voting or more secure processes and technologies for in-person voting. Carey said the government does a good job securing classified networks, and a similar level of seriousness needs to be brought to securing voting systems.

“Is there something that enables a secure digital vote?” Carey said. “I’m pretty sure our classified networks are tight. I know we’re not in that space here, but I know we need that kind of confidence in that result to make this evidence of democracy stick,” he said.  


(*) Disclosure: This podcast and blog post were sponsored by RSA Security. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.

As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.


Anyone whose business depends on online traffic knows how critical it is to protect your business against Distributed Denial of Service (DDoS) attacks. And with cyber attackers more persistent than ever – Q1 2020 DDoS attacks surged by 80% year over year and their average duration rose by 25% – you also know how challenging this can be.

Now imagine you’re responsible for blocking, mitigating, and neutralizing DDoS attacks where the attack surface is tens of thousands of websites. That’s exactly what HubSpot, a top marketing and sales SaaS provider, was up against. How they overcame the challenges they faced makes for an interesting case study in DDoS response and mitigation.

Drinking from a Firehose

HubSpot’s CMS Hub powers thousands of websites across the globe. Like many organizations, HubSpot uses a Content Delivery Network (CDN) solution to help bolster security and performance.

CDNs, which are typically associated with improving web performance, are built to make content available at edges of the network, providing both performance and data about access patterns across the network. To handle the CDN log data spikes inherent with DDoS attacks, organizations often guesstimate how much compute they may need and maintain that higher level of resource (and expenditure) for their logging solution. Or if budgets don’t allow, they dial back the amount of log data they retain and analyze.

In HubSpot’s case, they use Cloudflare CDN as the first layer of protection for all incoming traffic on the websites they host. This equates to about 136,000 requests/second, or roughly 10TB/day, of Cloudflare log data that HubSpot has at its disposal to help triage and neutralize DDoS attacks. Talk about drinking from a firehose!
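As a back-of-envelope check (our arithmetic, not a figure from HubSpot), those two numbers imply an average Cloudflare log record of under a kilobyte:

```python
# Back-of-envelope check of the stated Cloudflare log volume.
requests_per_second = 136_000
seconds_per_day = 86_400
bytes_per_day = 10 * 10**12          # ~10 TB/day of log data

requests_per_day = requests_per_second * seconds_per_day
avg_bytes_per_record = bytes_per_day / requests_per_day

print(f"{requests_per_day:,} requests/day")            # ~11.75 billion
print(f"~{avg_bytes_per_record:.0f} bytes per record") # roughly 850 bytes
```

That is plausible for a JSON record carrying request headers and cache status, which is why the volume adds up so quickly.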

HubSpot makes use of Cloudflare’s Logpush service to push Cloudflare logs that contain headers and cache statuses for each request directly to HubSpot’s Amazon S3 cloud object storage. In order to process that data and make it searchable, HubSpot’s dedicated security team deployed and managed their own open-source ELK Stack consisting of Elasticsearch (a search database), Logstash (a log ingestion and processing pipeline), and Kibana (a visualization tool for log search analytics). They also used open source Kafka to queue logs into the self-managed ELK cluster.

To prepare the Cloudflare logs for ingestion into the ELK cluster, HubSpot had created a pipeline that would download the Cloudflare logs from S3 into a Kafka pipeline, apply some transformations on the data, insert into a second Kafka queue whereby Logstash would then process the data, and output it into the Elasticsearch cluster. The security team would then use Kibana to interact with the Cloudflare log data to triage DDoS attacks as they occur.
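A minimal sketch of what the transformation stage between the two Kafka queues might look like. The field names follow Cloudflare Logpush conventions, but the selection and the derived cache-hit flag are purely illustrative, not HubSpot’s actual logic:

```python
import json

# Hypothetical transform step between the two Kafka queues: keep only the
# fields needed for DDoS triage and derive a cache-hit flag per request.
KEEP = ("ClientIP", "ClientRequestHost", "EdgeResponseStatus", "CacheCacheStatus")

def transform(raw_line: str) -> str:
    record = json.loads(raw_line)
    slim = {k: record.get(k) for k in KEEP}
    slim["cache_hit"] = record.get("CacheCacheStatus") == "hit"
    return json.dumps(slim)

# Example: one NDJSON line as pushed to S3 (simplified for illustration).
line = ('{"ClientIP": "203.0.113.7", "ClientRequestHost": "example.com", '
        '"EdgeResponseStatus": 503, "CacheCacheStatus": "miss", "RayID": "abc123"}')
print(transform(line))
```

Trimming each record before it reaches Logstash is one common way to keep downstream index size, and therefore cluster cost, in check.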

Managing an Elasticsearch cluster dedicated to this Cloudflare/DDoS mitigation use case presented a number of continuing challenges. It required constant maintenance by members of the HubSpot Elasticsearch team. The growth in log data from HubSpot’s rapid customer base expansion was compounded by the fact that DDoS attacks themselves inherently generate a massive spike in log data while they are occurring. Unfortunately, these spikes often triggered instability in the Elastic cluster when they were needed most, during the firefighting and mitigation process. 

Cost was also a concern. Although the Elasticsearch, Logstash, and Kibana open source applications can be acquired at no cost, the sheer volume of existing and incoming log data from Cloudflare required HubSpot to manage a very large and increasingly expensive ELK cluster. Infrastructure costs for storage, compute, and networking to support the growing cluster grew faster than the data. And the human cost – the time spent monitoring, maintaining, and keeping the cluster stable and secure – was significant. The team constantly debated whether to add more compute to the cluster or reduce data retention time. To accommodate their Cloudflare volume, which was exceeding 10TB/day and growing, HubSpot was forced to limit retention to just five days.
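The five-day ceiling follows directly from the volumes involved. A rough sketch, assuming one replica copy per shard (the replica count is our assumption, not a detail from the case study):

```python
# Rough hot-storage footprint of an ELK cluster at this ingest rate.
daily_ingest_tb = 10      # Cloudflare log volume, from the case study
retention_days = 5        # the retention limit HubSpot settled on
replica_copies = 1        # assumed: one replica per primary shard

hot_storage_tb = daily_ingest_tb * retention_days * (1 + replica_copies)
print(hot_storage_tb)  # 100 TB of cluster storage for this one use case
```

Even before indexing overhead, that is on the order of 100 TB of hot storage dedicated to a single log source, which is why every extra day of retention was an expensive decision.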

The Data Lake Way

Like many companies whose business solely or significantly relies on online commerce, HubSpot wanted a simple, scalable, and cost-effective way to handle the continued growth of their security log data volume.

They were wary of solutions that might ultimately force them to reduce data retention to a point where the data wasn’t useful. They also needed to be able to keep up with huge data throughput at a low latency so that when it hit Amazon S3, HubSpot could quickly and efficiently firefight DDoS attacks.

HubSpot decided to rethink its approach to security log analysis and management. They embraced a new approach that consisted primarily of these elements:

– Using a fully managed log analysis service so internal teams wouldn’t have to manage the scaling of ingestion or query-side components and could eliminate compute resources

– Leveraging the Kibana UI that the security team is already proficient with

– Turning their S3 cloud object storage into a searchable analytic data lake so Cloudflare CDN and other security-related log data could be easily cleaned, prepared, and analyzed in place, without data movement or schema management
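The third element is the interesting one. As a toy, stdlib-only illustration of the “query in place” idea – scanning compressed NDJSON log objects as a stream, with no index or schema defined up front (a real data lake service does this at scale directly against S3):

```python
import gzip, io, json

# Toy "query in place": stream-filter a compressed NDJSON log object
# without loading it into a search database first.
def scan(blob: bytes, predicate):
    with gzip.open(io.BytesIO(blob), "rt") as fh:
        for line in fh:
            record = json.loads(line)
            if predicate(record):
                yield record

# Build a tiny fake "S3 object" containing two log records.
raw = b"".join(json.dumps(r).encode() + b"\n" for r in [
    {"status": 200, "ip": "198.51.100.1"},
    {"status": 503, "ip": "203.0.113.9"},
])
blob = gzip.compress(raw)

# Find server errors, a typical first cut when triaging a DDoS event.
errors = list(scan(blob, lambda r: r["status"] >= 500))
print(errors)
```

The point is that the data never moves: the storage layer is also the query layer, so retention is bounded by cheap object storage rather than by an expensive hot cluster.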

By doing this, HubSpot can effectively tackle DDoS challenges. They significantly cut their costs and can easily handle the 10TB+/day flow of Cloudflare log data, without impacting performance.

HubSpot no longer has to sacrifice data retention time. They can retain Cloudflare log data for much longer than 5 days, without worrying about costs, and can dynamically scale resources so there is no need to invest in compute that’s not warranted. This is critical for long-tail DDoS protection planning and execution, and enables HubSpot to easily meet SLAs for DDoS attack response time.

Data lake-based approaches also enable IT organizations to unify all their security data sources in one place for better and more efficient overall protection. Products that empower data lake thinking allow  new workloads to be added on the fly with no provisioning or configuration required, helping organizations gain even greater value from log data for security use cases. For instance, in addition to storing and analyzing externally generated log data within their S3 cloud object storage, HubSpot will be storing and monitoring internal security log data to enhance insider threat detection and prevention.

Incorporating a data lake philosophy into your security strategy is like putting log analysis on steroids. You can store and process exponentially more data volume and types, protect better, and spend much less.

About the author: Dave Armlin is VP of Customer Success and Solutions Architecture at ChaosSearch. Dave has spent his 25+ year career building, deploying, and evangelizing secure enterprise and cloud-based architectures.


COVID-19 may be complicating organizations’ cybersecurity efforts as they shift more of their operations online, but that doesn’t lessen the pressure to comply with government regulations that are placing increased scrutiny on data privacy.

Despite the pandemic, companies are obligated to comply with many laws governing data security and privacy, including the two most familiar to consumers — the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). With CCPA enforcement set to begin July 1, organizations’ regulatory responsibilities just got tougher.

The CCPA is similar to GDPR in that it is designed to improve privacy rights and consumer protection, giving Californians the right to know when their personal data is being collected, whether their personal data is being disclosed or sold, and to whom. It allows them to access their personal data, say no to its sale, and request that a business delete it.

The law applies to any business with gross revenues over $25 million, or that holds personal information on 50,000 or more California residents, whether the company is based in California or not. Violations can result in stiff fines.

Like GDPR before it, CCPA makes data security and regulatory compliance more of a challenge and requires businesses to create a number of new processes to fully understand what data they have stored in their networks, who has access to it, and how to protect it.

The challenge is especially rigorous for large organizations that collect and store high volumes of data, which is often spread across multiple databases and environments. And CCPA’s enforcement date comes as companies have already been scrambling to deal with COVID-19’s impact – enabling remote workforces while guarding against hackers trying to exploit fresh openings to infiltrate networks.

Here are four things that every business should consider in maintaining a rigid security posture to protect its most important asset – its data – and meet rising regulatory requirements:

1.    Protect headcount.

We may be in an economic downturn, but now is not the time to lay off anyone with data security and privacy responsibility. Often when a company is forced to cut staff, the pain is spread equally across the organization – say, 10 percent from each department. Because the CISO organization (as well as the rest of IT) is usually considered “general and administrative” overhead, the target on its back can be just as large.

In the current environment, security staff certainly needs to be exempt from cuts. Most security teams have little to no overlap – there is a networking expert, an endpoint specialist, someone responsible for cloud, etc. And one person who focuses on data and application security, if you’re lucky enough to have this as a dedicated resource.

The data and application security role has never been more vital, both to safeguard the organization as more data and applications move online and to handle data security regulatory compliance, an onus companies continue to carry despite the pandemic. This person should be considered untouchable in any resource action.

2.    Don’t drop the ball on breach notification.

It is an open question to what extent officials are aggressively conducting audits and enforcing these laws during the pandemic. However, I would advise companies to assume that stringent enforcement remains the norm.

This is another reason that fostering strong security is all the more crucial now. For example, companies are still required to notify the relevant governing body if they suffer a breach. This initiates a process involving their IT, security, and legal teams, and any other relevant departments. Who wants that distraction anytime, and especially during a global crisis?

Beyond regulatory factors, companies simply owe it to their customers to handle their data responsibly. This was of course true before COVID-19 and CCPA enforcement, but its importance has intensified. A Yahoo-style scandal now could cause reputational damage that the company never recovers from.

3.    Ask the critical questions that regulations raise.

Where is personal data stored? Companies must scan their networks and servers to find any unknown databases, identify sensitive data using dictionary and pattern-matching methods, and pore through database content for sensitive information such as credit card numbers, email addresses, and system credentials.
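A minimal illustration of the pattern-matching approach described above – flagging candidate email addresses and credit card numbers (validated with a Luhn check) in free text. Real discovery tools combine dictionaries, context, and many more patterns; this sketch only shows the core idea:

```python
import re

# Simple pattern-matching pass for two kinds of sensitive data.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Weed out random digit runs: real card numbers pass the Luhn checksum."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def find_pii(text: str) -> dict:
    return {
        "emails": EMAIL.findall(text),
        "cards": [c for c in CARD.findall(text) if luhn_ok(c)],
    }

sample = "Contact jane@example.com, card 4111 1111 1111 1111, ref 1234 5678."
print(find_pii(sample))
```

Note how the Luhn check lets the scanner ignore the reference number while still catching the (test) card number.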

Which data has been added or updated within the last 12 months? You need to monitor all user database access — on-premises or in the cloud — and retain all the audit logs so you can identify the user by role or account type, understand whether the data accessed was sensitive, and detect non-compliant access behaviors.

Is there any unauthorized data access or exfiltration? Using machine learning and other automation technologies, you need to automatically surface unusual data activity, uncovering threats before they become breaches.

Are we pseudonymizing data? Data masking techniques safeguard sensitive data from exposure in non-production or DevOps environments by substituting fictional data for sensitive data, reducing the risk of sensitive data exposure.
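One common way to implement this kind of pseudonymization is a keyed hash, which keeps records joinable across environments while hiding the original value. A sketch (the key and field names are illustrative, and key management is out of scope here):

```python
import hashlib
import hmac

# Deterministic pseudonymization: same input always maps to the same token,
# so masked datasets remain joinable, but the raw value is not exposed.
SECRET = b"rotate-me"  # illustrative only; never hard-code keys in practice

def pseudonymize(value: str) -> str:
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

row = {"email": "jane@example.com", "plan": "enterprise"}
masked = {**row, "email": pseudonymize(row["email"])}
print(masked)  # same record shape, with a token in place of the email
```

Because the mapping is deterministic under a given key, DevOps teams can still test joins and deduplication against masked data without ever seeing the real identifiers.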

4.    Assume more regulation will come.

As digital transformation makes more and more data available everywhere, security and privacy concerns keep growing. One can assume that GDPR and CCPA may just be the tip of the regulatory iceberg. Similar initiatives in Wisconsin, Nevada, and other states show that it behooves organizations to get their data protection houses very much in order. Compliance will need to be a top priority for organizations for many years into the future.

About the author: Terry Ray has global responsibility for Imperva’s technology strategy. He was the first U.S.-based Imperva employee and has been with the company for 14 years. He works with organizations around the world to help them discover and protect sensitive data, minimize risk for regulatory governance, set data security strategy and implement best practices.

The official Call for Presentations (speakers) for SecurityWeek’s 2020 Industrial Control Systems (ICS) Cyber Security Conference, being held October 19 – 22, 2020 in SecurityWeek’s Virtual Conference Center, has been extended to August 31st.

As the premier ICS/SCADA cyber security conference, the event was originally scheduled to take place at the InterContinental Atlanta, but will now take place in a virtual environment due to COVID-19.

“Due to the impact of COVID-19 and transition to a fully virtual event, we have extended the deadline for submissions to allow more time for speakers to put together their ideas under the new format,” said Mike Lennon, Managing Director at SecurityWeek. “Given SecurityWeek’s global reach and scale, we expect this to be the largest security-focused gathering of its kind serving the industrial and critical infrastructure sectors.” 

The 2020 Conference is expected to attract thousands of attendees from around the world, including large critical infrastructure and industrial organizations, military, and state and Federal Government. 

SecurityWeek has developed a fully immersive virtual conference center on a cutting-edge platform that provides attendees with the opportunity to network and interact from anywhere in the world.

As the original ICS/SCADA cyber security conference, the event is the longest-running cyber security-focused event series for the industrial control systems sector. 

With an 18-year history, the conference has proven to bring value to attendees through the robust exchange of technical information, actual incidents, insights, and best practices to help protect critical infrastructures from cyber-attacks.

Produced by SecurityWeek, the conference addresses ICS/SCADA topics including protection for SCADA systems, plant control systems, engineering workstations, substation equipment, programmable logic controllers (PLCs), and other field control system devices.

Through the Call for Speakers, a conference committee will accept speaker submissions for possible inclusion in the program at the 2020 ICS Cyber Security Conference.

The conference committee encourages proposals for main track presentations, panel discussions, and “In Focus” sessions. Most sessions will run between 30 and 45 minutes, including time for Q&A.

Submissions will be reviewed on an ongoing basis, so early submission is highly encouraged. Submissions must include a proposed presentation title; an informative session abstract, including learning objectives for attendees if relevant; and contact information and a bio for the proposed speaker.

All speakers must adhere to the 100% vendor neutral / no commercial policy of the conference. If speakers cannot respect this policy, they should not submit a proposal.

To be considered, interested speakers should submit proposals by email to events(at)securityweek.com with the subject line “ICS2020 CFP” by August 31, 2020.

Plan on Attending the 2020 ICS Cyber Security Conference? Online registration is open, with discounts available for early registration.


Enterprise Security Professional to Discuss Latest Cloud Security Trends and Strategies Via Fully Immersive Virtual Event Experience

SecurityWeek will host its 2020 Cloud Security Summit virtual event on Thursday, August 13, 2020.

Through a fully immersive virtual environment, attendees will be able to interact with leading solution providers and other end users tasked with securing various cloud environments and services.

“As enterprises adopt cloud-based services to leverage benefits such as scalability, increased efficiency, and cost savings, security has remained a top concern,” said Mike Lennon, Managing Director at SecurityWeek. “SecurityWeek’s Cloud Security Summit will help organizations learn how to utilize tools, controls, and design models needed to properly secure cloud environments.”

The Cloud Security Summit kicks off at 11:00AM ET on Thursday, August 13, 2020 and features sessions, including:

  • Augmenting Native Cloud Security Services to Achieve Enterprise-grade Security
  • Measuring and Mitigating the Risk of Lateral Movement
  • Weathering the Storm: Cyber AI for Cloud and SaaS
  • Securing Cloud Requires Network Policy and Segmentation
  • Managing Digital Trust in the Era of Cloud Megabreaches
  • The Rise of Secure Access Service Edge (SASE)
  • Fireside Chat with Gunter Ollmann, CSO of Microsoft’s Cloud and AI Security Division

Sponsors of the 2020 Cloud Security Summit include: DivvyCloud by Rapid7, Tufin, Darktrace, SecurityScorecard, Bitglass, Orca Security, Auth0 and Datadog.

Register for the Cloud Security Summit at: https://bit.ly/CloudSec2020


We all know that the prices of key commodities such as oil, gold, steel and wheat don’t just impact individual business sectors as they fluctuate according to supply and demand:  they also power international trading markets and underpin the global economy. And it’s exactly the same with cyber-crime.

The prices of key commodities in the cyber-crime economy – such as stolen credentials, hacked accounts, or payment card details – not only reflect changes in supply and usage, but also influence the types of attack that criminals will favor. After all, criminals are just as keen to maximize return on their investments and create ‘value’ as any legitimate business.

A recent report gave the current average prices during 2020 for some of these cyber-crime commodities on the Dark Web. Stolen credit-card details start at $12 each, and online banking details at $35. ‘Fullz’ (full identity) prices are typically $18, which is cheaper than just two years ago due to an oversupply of personally identifiable information following several high-profile breaches. A very basic malware-as-a-service attack against European or U.S. targets starts at $300, and a targeted DDoS attack starts at $10 per hour.

Extortion evolves

These prices help to explain one of the key shifts in cyber crime over the past two years: the move away from ransomware toward DDoS attacks for extortion. Ransomware has been around for decades, but on a relatively small scale, because most types of ransomware were unable to spread without user intervention. This meant attacks were limited in scope to scrambling data on a few PCs or servers, unless the attacker got lucky.

But in 2017, the leak of the ‘EternalBlue’ exploit changed the game. Ransomware designed to take advantage of it – 2017’s WannaCry and NotPetya – could spread automatically to any vulnerable computer in an organization. All that was needed was a single user to open the malicious attachment, and the organization’s network could be paralyzed in minutes – making it much easier for criminals to monetize their attacks.

While this drove an 18-month bubble of ransomware attacks, it also forced organizations to patch against EternalBlue and deploy additional security measures, meaning attacks became less effective. Sophisticated malware like WannaCry and NotPetya cost time and money to develop, and major new exploits like EternalBlue are not common. As such, use of ransomware has declined, returning to its roots as a targeted attack tool.

DDoS deeds, done dirt cheap

DDoS attacks have replaced ransomware as the weapon of choice for extortion attempts. As mentioned earlier, a damaging attack is cheap to launch, using one of the many available DDoS-for-hire services at just $10 per hour or $60 for 24 hours (like any other business looking to attract customers, these services offer discounts to customers on bigger orders).

Why are DDoS attacks so cheap?  One of the key reasons is that DDoS-for-hire service operators are increasingly using the scale and flexibility of public cloud services, just as legitimate organizations do. Link11’s research shows the proportion of attacks using public clouds grew from 31% in H2 2018 to 51% in H2 2019. It’s easy to set up public cloud accounts using an $18 fake ID and a $12 stolen credit card, and simply hire out instances as needed to whoever wants to launch a malicious attack. When that credit card stops working, buy another.

Operating or renting these services is also very low-risk:  the World Economic Forum’s ‘Global Risks Report 2020’ states that in the US, the likelihood of a cybercrime actor being caught and prosecuted is as low as 0.05%.  Yet the impact on the businesses targeted by attacks can be huge:  over $600,000 on average, according to Ponemon Institute’s Cost of Cyber Crime Study.

Further, the Covid-19 pandemic has made organizations more vulnerable than ever to the loss of online services, with the mass shift to home working and consumption of remote services – making DDoS attacks even more attractive as an extortion tool, as they cost so little but have a strong ROI. This means any organization could find itself in attackers’ cross-hairs:  from banks and financial institutions to internet infrastructure, retailers, online gaming sites, as well as public sector organizations and local governments.  If services are taken offline, or slowed to a crawl for just a few hours, employees’ normal work will be disrupted, customers won’t be able to transact, and revenues and reputation will take a hit. 

Make sure crime doesn’t pay

To avoid falling victim to the new wave of DDoS extortion attacks, and fueling the cyber-crime economy through ransom payments, organizations need to defend their complex, decentralized and hybrid environments with cloud-based protection. This should route all traffic to the organization’s networks via an external cloud service that identifies and filters out malicious traffic instantly, using AI techniques, before an attack can impact critical services – helping to ensure that those services are not disrupted.  Online crime may continue to be profitable for threat actors – but with the right defenses, individual organizations can ensure that they’re not contributing.
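As a toy illustration of the filtering idea – deliberately not the AI techniques such a service actually uses – a simple per-source rate filter upstream of the protected network might look like this:

```python
from collections import defaultdict, deque

# Toy rate-based filter of the kind a scrubbing layer might apply upstream.
# Real cloud protection services use far richer signals than request rate.
class RateFilter:
    def __init__(self, limit: int, window_s: float):
        self.limit, self.window_s = limit, window_s
        self.seen = defaultdict(deque)  # source IP -> recent request times

    def allow(self, ip: str, now: float) -> bool:
        q = self.seen[ip]
        while q and now - q[0] > self.window_s:
            q.popleft()                 # drop timestamps outside the window
        q.append(now)
        return len(q) <= self.limit

f = RateFilter(limit=3, window_s=1.0)
# One source bursting five requests in half a second:
verdicts = [f.allow("203.0.113.9", t / 10) for t in range(5)]
print(verdicts)  # first three allowed, the burst beyond the limit blocked
```

The design point is that this decision happens in the provider’s cloud, so the flood is absorbed before it ever reaches the victim’s own links.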


In the coming years, organizations’ insatiable desire to understand consumers through behavioral analytics will result in an invasive deployment of cameras, sensors and applications in public and private places. A consumer and regulatory backlash against this intrusive practice will follow as individuals begin to understand the consequences.

Highly connected ecosystems of digital devices will enable organizations to harvest, repurpose and sell sensitive behavioral data about consumers without their consent, with attackers targeting and compromising poorly secured systems and databases at will.

Impacts will be felt across industries such as retail, gaming, marketing and insurance that are already dependent on behavioral analytics to sell products and services. There are also a growing number of sectors that will see an increased dependency on behavioral analytics, including finance, healthcare and education.

Organized criminal groups, hackers and competitors will begin stealing and compromising these treasure troves of sensitive data. Organizations whose business model is dependent on behavioral analytics will be forced to backtrack on costly investments as their practices are deemed to be based on mass surveillance and seen as a growing privacy concern by regulators and consumers alike.

What is the Justification for This Threat?

Data gathered from sensors and cameras in the physical world will supplement data already captured by digital platforms to build consumer profiles of unprecedented detail. The gathering and monetization of data from social media has already faced widespread condemnation, with regulators determining that some organizations’ practices are unethical.

For example, Facebook’s role in using behavioral data to affect political advertising for the European Referendum resulted in the UK’s Information Commissioner’s Office fining the organization the maximum penalty of £500,000 in late 2019 – citing a lack of protection of personal information and privacy and a failure to preserve a strong democracy.

Many organizations and governments will become increasingly dependent on behavioral analytics to underpin business models, as well as for monitoring the workforce and citizens. The development of ‘smart cities’ will only serve to amplify the production and gathering of behavioral data, with people interacting with digital ecosystems and technologies throughout the day in both private and public spaces. Data will be harvested, repurposed and sold to third parties, while the analysis will provide insights about individuals that they didn’t even know themselves.

An increasing number of individuals and consumer-rights groups are realizing how invasive behavioral analytics can be. One example of the resulting backlash involved New York’s Hudson Yards in 2019, where management required visitors to sign away the rights to their own photos taken of a specific building. This obligation, however, was hidden within the small print of the contract visitors signed upon entry. Visitors boycotted the building and sent thousands of complaints, resulting in the organization backtracking and rewriting the contracts.

Another substantial backlash surrounding invasive data collection occurred in London, where Argent, the property developer behind the King’s Cross redevelopment, used facial recognition software to track individuals across a 67-acre site surrounding King’s Cross Station without consent.

Attackers will also see this swathe of highly personal data as a key target. Data relating to individuals’ personal habits, along with medical and insurance details, will present an enticing prospect. Organizations that do not secure this information will face further scrutiny and potential fines from regulators.

How Should Your Organization Prepare?

Organizations that have invested in a range of sensors, cameras and applications for data gathering and behavioral analysis should ensure that current technical infrastructure is secure by design and is compliant with regulatory requirements.

In the short term, organizations should build and incorporate data gathering principles into a corporate policy. Additionally, they need to create transparency over data gathering practices and use and fully understand the legal and contractual exposure on harvesting, repurposing and selling data.

In the long term, implement privacy by design across the organization and identify the use of data in supply chain relationships. Finally, ensure that algorithms used in behavioral analytical systems are not skewed or biased towards particular demographics.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.


At one of the last cyber-security events I attended before the Covid-19 enforced lockdowns, I was talking with an IT director about how his organization secures its public cloud deployments. He told me: “We have over 500 separate AWS accounts in use, it helps all our development and cloud teams to manage the workloads they are responsible for without crossover or account bloat, and it also makes it easier to control cloud usage costs: all the accounts are billed centrally, but each account is a separate cost center with a clear owner.”

I asked about security, and he replied that each AWS account had different logins, meaning fewer staff had access to each account, which helped to protect each account.

While it’s true that having hundreds of separate public cloud accounts helps keep a closer eye on cloud costs, it also creates huge complexity when managing the connectivity and security of applications and workloads, especially when making changes to applications that span different public cloud accounts, or when introducing infrastructure changes that touch many, or even all, of the accounts.

As I covered in my recent article on public cloud security, securing applications and data in these environments can be challenging. It’s far easier for application teams to spin up cloud resources and move applications to them, than it is for IT and security teams to get visibility and control across their growing cloud estates.

Even if you are using a single public cloud platform like AWS, each account has its own security controls – and many of them. Each VPC in every region within the account has separate security groups and access lists: even if they embody the same policy, you need to write and deploy them individually. Any time you make a change, you must duplicate the work across each of these elements.
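To make the scale of that duplication concrete, here is a minimal Python sketch. The account names, regions, and rule are hypothetical; the point is how one logical policy rule fans out into a separate entry for every account/region/VPC combination:

```python
# Sketch: one logical policy rule must be written once per
# account/region/VPC combination. All names are hypothetical.

accounts = ["dev-101", "dev-102", "prod-201"]   # AWS account aliases
regions = ["us-east-1", "eu-west-1"]            # regions in use
vpcs_per_region = 2                             # VPCs per region

rule = {"proto": "tcp", "port": 443, "cidr": "10.0.0.0/8"}

# Fan the single rule out into the individual entries that would
# each need to be created and maintained separately.
entries = [
    {"account": a, "region": r, "vpc": f"vpc-{i}", **rule}
    for a in accounts
    for r in regions
    for i in range(vpcs_per_region)
]

print(len(entries))  # 3 accounts x 2 regions x 2 VPCs = 12 copies of one rule
```

With 500 accounts instead of three, the same arithmetic quickly produces thousands of copies of a single rule, each a separate opportunity for drift or error.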

Then there’s the question of how security teams get visibility into all these cloud accounts with their different configurations, to ensure they are all properly protected according to the organization’s security policy. It’s almost impossible to do this using manual processes without overlooking – or introducing – potential vulnerabilities.
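The manual-review problem lends itself to a simple automated check. The sketch below (with hypothetical data in place of what would, in practice, be pulled from each cloud account’s API) compares every account/region/VPC scope against a required baseline and flags the gaps:

```python
# Sketch: automated compliance check across a multi-account inventory.
# The inventory is hard-coded here; in practice it would be collected
# from each cloud account's API.

required_groups = {"sg-baseline"}  # groups policy says every VPC must have

inventory = {
    ("dev-101", "us-east-1", "vpc-a"): {"sg-baseline", "sg-web"},
    ("dev-102", "us-east-1", "vpc-b"): {"sg-web"},        # missing baseline
    ("prod-201", "eu-west-1", "vpc-c"): {"sg-baseline"},
}

# Flag every scope that is missing one or more required groups.
gaps = {
    scope: required_groups - groups
    for scope, groups in inventory.items()
    if required_groups - groups
}

for scope, missing in sorted(gaps.items()):
    print(f"{scope}: missing {sorted(missing)}")
```

A check like this runs in seconds across hundreds of accounts, where a manual review would almost certainly overlook something.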

So how do the teams in charge of those hundreds of accounts manage them effectively? Here are my three key steps:

1. Gain visibility across your networks

The first challenge to address is a lack of visibility into all your AWS cloud accounts, from one vantage point. The security teams need to be able to observe all the security controls, across all account/region/VPC combinations.

2. Manage changes from a single console

The majority of network security policy changes need to touch a mix of the cloud providers’ own security controls as well as other controls, both in the cloud and on-premise. No cloud application is an island that is entire of itself – it needs to access resources in other parts of the organization’s estate. When changes to network security policies in all these diverse security controls are managed from a single system, security policies can be applied consistently, efficiently, and with a full audit trail of every change.

3. Automate security processes

In order to manage multiple public cloud accounts efficiently, automation is essential. Security automation dramatically accelerates change processes, avoids manual processing mistakes and misconfigurations, and enables better enforcement and auditing for regulatory compliance. It also helps organizations overcome skill gaps and staffing limitations.
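The change-plus-audit-trail pattern can be sketched in a few lines. This is a stand-in, not a real deployment tool: the account names are hypothetical, and the function body that would call each provider’s API is stubbed out:

```python
# Sketch: apply one policy change across many accounts from a single
# workflow, recording an audit entry for each change. Account names
# are hypothetical; the API call itself is stubbed out.

from datetime import datetime, timezone

audit_trail = []

def apply_change(account: str, rule: dict) -> None:
    """Push a rule to one account (stubbed), then log the change."""
    # (real code would call the cloud provider's API here)
    audit_trail.append({
        "account": account,
        "rule": rule,
        "when": datetime.now(timezone.utc).isoformat(),
        "status": "applied",
    })

rule = {"proto": "tcp", "port": 443, "action": "allow"}
for account in ["dev-101", "dev-102", "prod-201"]:
    apply_change(account, rule)

# Every change applied consistently, with a full audit trail.
print(len(audit_trail))
```

Because every change flows through one function, consistency and auditability come for free, which is exactly what ad-hoc per-account edits cannot provide.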

With an automation solution handling these steps, organizations can get holistic, single-console security management across all their public cloud accounts, as well as their private cloud and on-premise deployments – which ensures they can count on robust security across their entire IT estate. 

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.


Infosec professionals have always had their work cut out for them, as the threat landscape continuously challenges existing security measures to adapt, improve and cope with the unexpected. As the coronavirus pandemic forced organizations to migrate their entire workforce to a work-from-home context, practically overnight, security professionals faced a new challenge for which half of them had not planned.

A recent Bitdefender survey reveals that 83 percent of US security and IT professionals believe the COVID-19 pandemic will change the way their business operates, mostly because their infrastructure had to adapt to accommodate remote work. Another concern for companies is that employees tend to be more relaxed about security (34 percent) and that working remotely means they will not be as vigilant in identifying and flagging suspicious activity and sticking to security protocols (34 percent).

Lessons learned

Having managed the initial work-from-home technology transition, 1 in 4 security professionals now understands the value of deploying endpoint risk assessment tools. With the entire workforce suddenly mobile, organizations could no longer rely on infrastructure-embedded and perimeter defense technologies to protect endpoints. Augmenting the endpoint security stack with risk assessment and risk analytics tools became mandatory in order to give infosec professionals the visibility and control they need over remote employee devices.

In addition to deploying risk analytics, 31 percent of infosec professionals indicated they would also increase employee training, as the current threat landscape has been witness to more socially engineered threats than actual malware sophistication. Employees are more at risk of clicking the wrong link or opening a tainted attachment, potentially compromising both their devices and company infrastructure.

With a greater need for visibility of weak spots within their infrastructure, 28 percent of security professionals have also had to adjust security policies. For instance, pre-pandemic policies that took into account infrastructure hardware and security appliances became useless in a remote work context.

The New Normal

While some companies have transitioned to the new normal faster than others, businesses understand they need to provide additional cybersecurity measures for employees, and to permanently increase their capability to monitor and protect devices outside of the office. There’s never been a silver bullet for addressing cybersecurity challenges, and the current post-pandemic era is further proof that security is a living organism that needs to adapt to ensure business continuity.

None of this is new to the role of the infosec professional: they still need to deploy the right people, the proper processes and products, and the correct procedures to achieve long-term safety and success.

About the author: Liviu Arsene is a Senior E-Threat analyst for Bitdefender, with a strong background in security and technology. Reporting on global trends and developments in computer security, he writes about malware outbreaks and security incidents while coordinating with technical and research departments.


Learning from the experiences of others should be a key job requirement for all cybersecurity, AppSec, DevSecOps, CISO, CRMO and SecSDLC professionals. The recent attack against Twitter where high-profile accounts were compromised to promote a Bitcoin scam is one such opportunity.

As new information comes to light (and I sincerely hope that Twitter continues to provide meaningful details), everyone within the cybersecurity realm should look to both their internal IT and application development practices as well as those of their suppliers for evidence that this particular attack pattern couldn’t be executed against their organization.

What we know as of now is that on July 15th, an attack was launched against Twitter that targeted 130 accounts. Of those 130, 45 had their passwords reset and eight had their Twitter data downloaded. While the initial public focus was on Twitter Verified accounts, those eight accounts were not verified.

The attack itself was based on social engineering: the targets were Twitter employees with access to an administrative tool capable of modifying access to individual Twitter accounts.

The attacker’s actions included posting a Bitcoin scam on prominent accounts, but it has also been reported that there was an effort to acquire Twitter accounts with valuable names.

Given that the attack had a prominent Bitcoin-scam component and a secondary account-harvesting component, there is an obvious first question we should be asking: with the level of access the attackers had, why wasn’t their attack more disruptive? This is a perfect example of attackers defining the success criteria, and thus the rules, of their attack.

That being said, it’s entirely plausible that the true goal of this attack has yet to be identified and that the attackers might easily have installed backdoors in Twitter’s systems that could lay dormant for some time.

Looking solely at the known information, everyone working with user data should be asking these types of questions:

  • Which accounts have administrator, super administrator or God-mode privileges?
  • Can a normal user possess administrator capabilities, or do they need to request them with specific justification?
  • Are all administrator-level changes logged and auditable?
  • Can an administrator modify logs of their activities?
  • Are there automated alerts to identify abnormal administrator activity, which might occur from rarely used accounts?
  • What limits are in place surrounding administrator access to user data?
  • What controls are in place to limit damage should an administrator misuse their credentials, either intentionally or as the result of a credential hack?
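One of the questions above, alerting on abnormal administrator activity from rarely used accounts, can be sketched simply. The log entries and threshold below are hypothetical; in practice the data would come from a SIEM or audit store:

```python
# Sketch: flag administrator actions performed from rarely used
# accounts. Log entries and the threshold are hypothetical.

from collections import Counter

# (account, action) audit log -- in practice, from a SIEM or audit store
admin_log = [
    ("alice", "reset_password"),
    ("alice", "view_account"),
    ("alice", "reset_password"),
    ("bob", "change_email"),    # bob's admin account is rarely used
]

usage = Counter(account for account, _ in admin_log)
RARE_THRESHOLD = 2  # fewer total actions than this counts as "rarely used"

# Any action from an account below the usage threshold gets flagged.
alerts = [
    (account, action)
    for account, action in admin_log
    if usage[account] < RARE_THRESHOLD
]

print(alerts)
```

Here the single action from the rarely used account is surfaced for review, while routine activity from the busy account passes silently, which is the property such alerting aims for.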

For most organizations, administrator access is something given to their most trusted employees. For some, this trust might stem from how long the employee has been with the organization. For others, trust might stem from a variety of background checks. Nonetheless, administrators are humans, and humans make errors in judgement – precisely the type of scenario social engineering targets.

Knowing that an administrator, particularly one with God-mode access rights, will be a prime target for social engineering efforts, any access granted to an administrator should be as limited as possible. This includes scenarios where an administrator is called upon to resolve users’ access issues.

After all, someone claiming to be locked out of their account could easily be an attacker attempting to coerce someone in tech support into transferring rightful ownership into the attacker’s hands. This implies that, on occasion, a successful account takeover will occur while the legitimate owner still retains control of the original contact methods, such as email addresses, phone numbers and authenticator apps.

If the business sends a confirmation notice to the previous contact method whenever one changes, that offers an additional warning to users who may be targets. The same should apply to any security settings, such as recovery questions or 2FA configuration.
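A minimal sketch of that notify-the-previous-address pattern follows. The account data is hypothetical and the mail sender is stubbed out; the key point is that the warning goes to the *old* contact method, which the rightful owner still controls:

```python
# Sketch: on a contact-method change, send confirmation to the
# previous address so the rightful owner is warned. The mail sender
# is a stub; all names and addresses are hypothetical.

sent = []

def notify(address: str, message: str) -> None:
    sent.append((address, message))  # stand-in for an email/SMS gateway

def change_email(account: dict, new_email: str) -> None:
    old_email = account["email"]
    account["email"] = new_email
    # Warn the old address: a takeover victim still controls it.
    notify(old_email,
           f"Your contact email was changed to {new_email}. "
           "If this wasn't you, contact support immediately.")

account = {"user": "sam", "email": "sam@old.example"}
change_email(account, "attacker@new.example")

print(sent[0][0])  # the warning went to the previous address
```

If the change was an attack, the victim receives the warning before the attacker can lock them out entirely; if it was legitimate, the notice is harmless.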

Since this attack on Twitter exploited weaknesses in their account administration process, it effectively targeted some of the most trusted people and processes within Twitter. Every business has trusted processes and people, which means that they could be equally vulnerable to such an attack.

This then serves as an opportunity for all businesses to reassess how they build and deploy applications with an eye on how they would be administered and what process weaknesses could be exploited.

About the author: Tim Mackey is Principal Security Strategist, CyRC, at Synopsys. Within this role, he engages with various technical communities to understand how to best solve application security problems. He specializes in container security, virtualization, cloud technologies, distributed systems engineering, mission critical engineering, performance monitoring, and large-scale data center operations.