Recently, an enterprising security researcher, Alex Birsan, hacked tech companies and pocketed over $130,000 in security bounties. Birsan did this by uncovering a single vulnerability affecting multiple big-name companies, including Netflix, PayPal, Microsoft, Apple, Tesla, and more.
As the breadth and complexity of the technology at our fingertips expand, so does the range of cyberthreats we face. Furthermore, the security industry is currently facing a shortage of seasoned and skilled individuals. As a result, for-profit businesses are finding it particularly hard to secure the necessary SecOps talent and resources – at a time when it’s needed more than ever before.
Many organizations are effectively “crowdsourcing” bug finding and vulnerability testing through platforms like HackerOne. Via these programs, ethical hackers can independently launch investigations, often including penetration testing. If they discover a potential security flaw, they can report it to the affected entities and, if it is verified, secure a bug bounty according to the severity of the vulnerability and the terms of any pre-established bug bounty program.
However, this approach comes with its own risks. There is little ongoing oversight or formal agreements between the two parties.
If anything, this incident once again showed the value of augmenting your internal SecOps with professional, experienced, and trustworthy external experts. These services can carry out penetration testing, red teaming, or vulnerability scanning operations to test your security efficacy. A new perspective can help you detect and address vulnerabilities before they lead to real-world consequences.
LIFARS’ Penetration Testing Services will test the real-world effectiveness of your security controls while achieving compliance and protecting your brand. Our ethical hackers, comprising Cyberwarfare, NATO offensive Top Security Clearance, and ex-NSA experts, will find weaknesses in your infrastructure, exploit them, and report their findings.
Companies with private internal applications often integrate publicly available repositories for open-source technologies such as PyPI, npm, and RubyGems into their development pipelines. When changes are made to open-source packages, pull requests update the internal projects – modifying existing files or adding new ones.
As a matter of course, individual organizations would have files unique to their own internal projects that do not exist in the public repositories.
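To picture the setup described above: in the Python ecosystem, mixing a private index with public PyPI is often done through pip configuration. The snippet below is a hypothetical example; `index-url` and `extra-index-url` are real pip options, but the internal URL is a placeholder:

```ini
; pip.conf (hypothetical example) – mixes a private index with public PyPI.
; pip treats both indexes as candidate sources for every package name,
; which is part of what makes dependency confusion possible.
[global]
index-url = https://pypi.internal.example.com/simple/
extra-index-url = https://pypi.org/simple/
```

With this configuration, a package name that exists only internally today can still be satisfied from the public index tomorrow if someone registers it there.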
Birsan first became aware of this flaw when coming across the names of internal file dependencies within a package.json file used in a PayPal project.
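A manifest of the kind Birsan describes might mix well-known public packages with internal-only names. The example below is purely illustrative – the names are invented, not taken from PayPal’s actual file:

```json
{
  "name": "internal-web-app",
  "dependencies": {
    "express": "^4.17.1",
    "acme-auth-utils": "^2.3.0",
    "acme-payment-core": "^1.0.4"
  }
}
```

Names like `acme-auth-utils` exist only in the company’s private registry – which is exactly the signal an attacker looks for before registering the same name on the public registry.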
He then wondered what would happen if a file with the same name existed within these public repositories. The next time a pull request was executed, would the internal or the external public file get priority? In other words, would the internal codebase retain its integrity and reject the new public version of the file, or would the internal file be replaced, with the build system mistaking the public version for a legitimate update?
Turns out, private projects would prioritize the public version of the file. This provided hackers with a direct route to inject their own code.
In this case, Birsan injected a file with a preinstall script using the above exploit. When the internal build process kicked off, it would exfiltrate information about the build machine to his “attack server.” This information included the company’s IP address, the username, and the home directory of the compromised machine. Birsan also used DNS exfiltration to bypass internal detection measures.
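Birsan’s actual proof-of-concept is not reproduced here, but the core idea of DNS exfiltration is simple: encode the harvested data into the hostname of a DNS query, since outbound DNS lookups are rarely blocked. A minimal sketch of just the encoding step is shown below – the domain and data are made up, and no network calls are performed:

```python
# Sketch of the encoding step behind DNS exfiltration: harvested data is
# hex-encoded and embedded as a subdomain label, so merely *resolving* the
# name leaks the data to whoever controls the attacker's DNS zone.

def encode_for_dns(data: str, attacker_domain: str = "example-attacker.test") -> str:
    """Hex-encode `data` and embed it as a subdomain label."""
    label = data.encode("utf-8").hex()
    # DNS labels are limited to 63 bytes; real tooling splits long payloads
    # across multiple labels or multiple queries.
    return f"{label[:63]}.{attacker_domain}"

hostname = encode_for_dns("builduser")
print(hostname)  # -> 6275696c6475736572.example-attacker.test
```

Resolving such a name against the attacker’s authoritative nameserver delivers the payload without any direct HTTP connection, which is why this technique slips past many egress filters.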
Obtaining this information is often the first step hackers hope to complete as part of a multi-layered attack. From here, installing additional malicious packages, changing the behavior of the target machine, or escalating privileges, etc. becomes trivial.
This hacking technique is called “dependency confusion.” It utilizes flaws in the existing continuous integration pipeline to inject malicious code into internal, private infrastructure. This type of attack is also often referred to as a “supply chain” attack because it exploits intermediary systems of software to automatically deploy malicious code to a target system and then use it to spread laterally within the target infrastructure.
In this case, the exploit relied on “insecure by design” logic built into a number of open-source package managers, including PyPI’s pip, Ruby’s gem, and npm. In pip’s case, it would simply default to installing the package with the highest version number, regardless of whether it came from the internal or the public index. You can find a complete breakdown of how Birsan completed his investigation, and his findings, in this Medium story.
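The “highest version wins” behavior can be sketched with a toy resolver. This is a simplified model of the vulnerable logic, not pip’s actual code, and the package names are invented:

```python
# Toy model of the resolver behavior that enabled dependency confusion:
# when the same package name exists on both the internal and the public
# index, the candidate with the highest version number wins - even if it
# comes from the public index.

def parse_version(v: str) -> tuple:
    """Naive version parse: '1.2.3' -> (1, 2, 3)."""
    return tuple(int(part) for part in v.split("."))

def pick_candidate(internal: dict, public: dict, name: str) -> str:
    """Return which index serves `name`, mimicking 'highest version wins'."""
    candidates = []
    if name in internal:
        candidates.append((parse_version(internal[name]), "internal"))
    if name in public:
        candidates.append((parse_version(public[name]), "public"))
    return max(candidates)[1]

internal_index = {"acme-auth-utils": "2.3.0"}
public_index = {"acme-auth-utils": "99.0.0"}  # attacker-registered squat

print(pick_candidate(internal_index, public_index, "acme-auth-utils"))  # -> public
```

Because the attacker controls the version number of the squatted public package, publishing it with an absurdly high version (here 99.0.0) guarantees it out-ranks the legitimate internal package.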
A number of companies implicated in this report have since acknowledged the flaw and released patches to counter it. Microsoft published a white paper on the vulnerability and awarded Birsan their highest possible bounty of $40,000. The bug has also been identified as CVE-2021-24105, affecting the Azure Artifacts product.
A number of other implicated companies have also publicly disclosed Birsan’s reports, and you can find these on his HackerOne profile.
As this story once again shows, the current security landscape presents an overwhelming number of challenges. Even if your own systems are secure, exploits further down the supply chain can critically compromise your overall security. This problem is only getting worse as organizations’ lists of external dependencies grow, spanning both new and legacy sources.
Researcher hacks over 35 tech firms in novel supply chain attack
Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Other Companies