A Google-hosted open source security event sparked discussion of supply chain security and how to manage flaws in open source projects.

As more organizations rely on open source components in their software, the issue of securing those components grows ever more urgent.

This was the premise of an event hosted today by Google, during which open source experts discussed the myriad challenges in securing open source software, what companies should prioritize, and steps the industry can take to improve the overall state of open source security.

The average software application depends on at least 500 open source libraries and components, a 77% increase from 298 dependencies two years earlier, Synopsys data shows. Open source libraries and components made up more than 75% of the code in the average application, 84% of applications contained at least one vulnerability, and the typical application contained 158.

In a talk on open source supply chain security, Google software engineer Dan Lorenc advised organizations to know what they’re using, a step he acknowledged seems obvious but isn’t easy, especially once developers start building artifacts, publishing them, and combining them into other artifacts. When a vulnerability is reported, whether it was introduced accidentally or maliciously, not knowing what’s running can get you into trouble.
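Lorenc's first step, knowing what you're running, can start with something as simple as enumerating installed packages. The sketch below is an illustration rather than any tool named at the event: it lists every installed Python distribution and its version, the raw material for a dependency inventory or software bill of materials.

```python
# Illustrative sketch: enumerate installed Python distributions as the
# starting point for a dependency inventory. Package names in the output
# are simply whatever is installed in the current environment.
from importlib import metadata


def installed_dependencies() -> dict[str, str]:
    """Map each installed distribution name to its version string."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
    }


deps = installed_dependencies()
for name, version in sorted(deps.items()):
    print(f"{name}=={version}")
```

A real pipeline would feed this inventory into an SBOM format rather than printing it, but the principle is the same: you cannot respond to a reported vulnerability in a component you do not know you are running.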

“Be in control when your dependencies are added,” he said. Governance and continuous audit of new dependencies, whether internal or open source, is a good way to protect the software.
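One way to make that governance concrete is a gate in code review or CI that flags any proposed dependency not yet approved. The allowlist and function below are invented for illustration, not a description of Google's process.

```python
# Hypothetical dependency-governance gate: any proposed dependency not on
# an internal allowlist is flagged for human review before it can be merged.
APPROVED = {"requests", "numpy", "flask"}  # invented internal allowlist


def needs_review(proposed: set[str]) -> set[str]:
    """Return the proposed dependencies that still require a governance review."""
    return proposed - APPROVED


flagged = needs_review({"requests", "leftpad"})
print(flagged)  # → {'leftpad'}
```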

This control can extend into building the components you use, Lorenc continued, noting this is also a tough step for most organizations. Most of the time, the content of a binary package is hard to verify. It doesn’t need to be all or nothing, he added, but part of the point of open source code is that you can build and compile it yourself. Knowing you can build it if you must is half the battle and shows you’re in control of the code that goes into your applications.

“Open source software is software,” Lorenc said. “It’s full of bugs; it’s full of CVEs that can be exploited.” While some of these bugs won’t lead to much damage, some can prove harmful.

Organizations should have plans for handling both zero-day vulnerabilities and known flaws, Lorenc emphasized. Zero-days are the flashy, exciting bugs that typically make headlines, and businesses should have an emergency playbook for how to quickly patch them, but it’s the older vulnerabilities that may not be getting the attention they warrant. In large organizations running a lot of environments and systems, these flaws can be easy to overlook.

“Just because you forgot about it doesn’t mean an attacker won’t find it,” he continued. “These things are easy to find from the outside.”

Organizations must track the open source software they’re running and constantly update it, he said, noting this is often considered “grungy” and “boring” work that isn’t often rewarded. Lorenc recommended automating, monitoring, and tracking the process to make it as easy as possible. 
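The automated tracking Lorenc describes boils down to regularly cross-checking a dependency manifest against vulnerability advisories. The sketch below uses an invented advisory list for demonstration; a real pipeline would query a feed such as the OSV database instead.

```python
# Illustrative vulnerability audit: flag manifest entries pinned to a
# version with a known advisory. ADVISORIES here is invented example data,
# not a real vulnerability feed.
ADVISORIES = {
    # package -> versions with a known (hypothetical) CVE
    "examplelib": {"1.0.0", "1.0.1"},
    "othertool": {"2.3.4"},
}


def audit(manifest: dict[str, str]) -> list[str]:
    """Return the packages in the manifest pinned to a vulnerable version."""
    return [
        name for name, version in manifest.items()
        if version in ADVISORIES.get(name, set())
    ]


manifest = {"examplelib": "1.0.1", "othertool": "2.4.0", "safepkg": "0.9"}
print(audit(manifest))  # → ['examplelib']
```

Running a check like this on every build, and alerting when it finds a match, turns the "grungy" tracking work into a background process rather than a periodic manual chore.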

“This is what everyone should be worrying about,” he said of known vulnerabilities.

More broadly, the industry can do a better job of finding and fixing unknown bugs.

“Normalize working upstream in projects you use,” Lorenc said. 

“Upstream” refers to the direction of the original software authors or maintainers. There’s a common misperception that because code is on GitHub and has been well-reviewed, it’s free of errors. This isn’t true, he said, and “fixing bugs upstream can help build important bridges and improve the public good.”

Open Source Vulnerability Disclosure: Tips For The Process
In a separate talk, Google program manager Anne Bertucio explained the process of verifying, communicating, and documenting a vulnerability for open source project managers, in a way that serves both project owners and the people reporting flaws.

For starters, she said, it shouldn’t be difficult for the people who find vulnerabilities to contact the vulnerability management team (VMT). The team may decide to use a common tool or one they already use, but email is perfectly fine and works well as a backup option, Bertucio said. A security policy should be easy to find and should spell out both what to include in a bug report and what response to expect. If it’ll take three days to acknowledge a submission, just say so.
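In practice, that advice often takes the shape of a SECURITY.md file in the project repository. The fragment below is a minimal sketch of such a policy following Bertucio's points; the contact address and timeline are placeholders, not details from the talk.

```
# Security Policy (example SECURITY.md)

## Reporting a Vulnerability

Please email security@example.org with:
- A description of the issue and the affected versions
- Steps to reproduce, if known
- Whether you would like credit in any resulting CVE

## What to Expect

- We will acknowledge your report within 3 business days.
- We aim for coordinated disclosure within 90 days, negotiable
  with the reporter.
```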

From there, the issue is acknowledged and verified. The project owner should ask the reporter whether they want to help develop the patch, whether they’d like to be credited in the CVE, and whether they agree on a disclosure timeline.

“Reporters really like to see things disclosed and credited as quickly as possible,” Bertucio said. While 90 days is around the standard, it’s important to figure out what works for both parties.
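Tracking that agreed timeline is simple to automate. The helper below uses the roughly 90-day convention Bertucio mentioned as a default; the dates are illustrative.

```python
# Small sketch of computing a coordinated-disclosure deadline from the
# report date, defaulting to the ~90-day industry convention.
from datetime import date, timedelta


def disclosure_deadline(reported: date, days: int = 90) -> date:
    """Default public-disclosure date for a report, per the agreed timeline."""
    return reported + timedelta(days=days)


print(disclosure_deadline(date(2021, 2, 1)))  # → 2021-05-02
```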

When it’s time to disclose, the security advisory should be factual and brief – get straight to the point about what people need to know and how to mitigate it, she added. If you’d like to share the story of a bug’s discovery and the details of how it works, write it up in a separate blog post.

There’s no point in hiding the details of a vulnerability, said Bertucio, noting “security through obscurity isn’t really security at all.” Similarly, she said, there’s nothing wrong with a specific open source project having a lot of CVEs. It means the project has a strong process for disclosing flaws and hardening the code, she added.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology.
