Several weeks ago, the Linux community was rocked by the disturbing news that researchers at the University of Minnesota had developed (though, as it turned out, not fully executed) a method for introducing what they called “hypocrite commits” to the Linux kernel.
This was quickly followed by the announcement, in some senses equally disturbing, that the university had been banned, at least temporarily, from contributing to kernel development. A public apology from the researchers followed.
Although exploit development and disclosure are often messy, running technically complex “red team” programs against the world’s largest and most important open-source project feels a bit beyond the pale. It is hard to imagine researchers and institutions so naive or derelict as not to grasp the potentially huge blast radius of such behavior.
Equally, kernel maintainers and project leadership are obliged to enforce policy and avoid having their time wasted. Common sense suggests (and users demand) that they try to produce kernel releases that do not contain exploits. But killing the messenger seems to miss at least some of the point: that this was research rather than pure malice, and that it sheds light on a kind of software (and organizational) vulnerability that calls for technical and systemic mitigation.
Projects of the scope and criticality of the Linux kernel are not prepared to contend with game-changing, hyperscale threat models.
I think the “hypocrite commits” contretemps is symptomatic, on all sides, of related trends that threaten the entire extended open-source ecosystem and its users. That ecosystem has long wrestled with issues of scale, complexity, and the increasingly critical importance of free and open-source software (FOSS) to every kind of human undertaking. Let’s look at that complex of problems:
- The largest open-source projects now present big targets.
- Their complexity and pace have grown beyond the scale at which traditional “commons” approaches, or even more evolved governance models, can cope.
- They are evolving to commoditize one another. For example, it is becoming increasingly difficult to say, categorically, whether “Linux” or “Kubernetes” should be treated as the “operating system” for distributed applications. For-profit organizations have taken note and have begun reorganizing around “full stack” portfolios and narratives.
- In doing so, some for-profit organizations have begun to distort traditional patterns of FOSS participation. Many experiments are underway. Meanwhile, funding, FOSS headcount commitments and other metrics seem to be declining.
- OSS projects and ecosystems are adapting in diverse ways, sometimes making it difficult for for-profit organizations to feel at home or see the benefits of participation.
Meanwhile, the threat landscape continues to evolve:
- Attackers are bigger, smarter, faster and more patient, leading to long games, supply-chain subversion and so on.
- Attacks are more rewarding, economically and politically, than ever.
- Users are more vulnerable, exposed to more vectors than ever before.
- The increasing use of public clouds creates new layers of technical and organizational monocultures that can enable and justify attacks.
- Complex commercial off-the-shelf (COTS) solutions, assembled partly or wholly from open-source software, create elaborate attack surfaces whose components (and interactions) are accessible to, and well understood by, bad actors.
- Software componentization enables new kinds of supply-chain attacks.
- Meanwhile, all of this is happening as organizations seek to shed non-strategic expertise, shift capital expenses to operating expenses, and evolve to depend on cloud vendors and other entities to do the hard work of security.
The net result is that projects of the scope and criticality of the Linux kernel are not prepared to contend with game-changing, hyperscale threat models. In the specific case we are examining here, the researchers were able to target candidate incursion sites with relatively little effort (using static analysis tools to assess units of code already identified as requiring contributor attention), propose “fixes” informally via email, and leverage many factors, including their own established reputation as reliable and frequent contributors, to bring exploit code to the brink of being committed.
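To make the targeting step above concrete, here is a minimal, hypothetical sketch of the kind of heuristic a static-analysis pass might apply: flagging C code where a pointer is freed and then referenced again. The function name, the regex-based approach and the snippet are all illustrative assumptions, not a reconstruction of the researchers' actual tooling.

```python
import re

# Naive heuristic (illustrative only): within a block of C code, flag any
# identifier that is passed to kfree() and then mentioned again on a later
# line, a rough stand-in for the use-after-free patterns such analysis hunts.
def flag_use_after_free(c_source: str) -> list:
    suspects = []
    lines = c_source.splitlines()
    for i, line in enumerate(lines):
        m = re.search(r"\bkfree\((\w+)\)", line)
        if not m:
            continue
        ident = m.group(1)
        # Any later mention of the freed identifier is treated as suspicious.
        for later in lines[i + 1:]:
            if re.search(rf"\b{ident}\b", later):
                suspects.append(ident)
                break
    return suspects

snippet = "kfree(buf);\nreturn buf->len;  /* use after free */"
print(flag_use_after_free(snippet))  # ['buf']
```

Real tools (Coccinelle, smatch, clang analyzers) do this over parsed ASTs rather than regexes, but the underlying idea, mechanically surfacing code already known to need attention, is the same.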
This was a serious betrayal, effectively by “insiders,” of a trust system that has historically worked very well to produce robust and secure kernel releases. The abuse of trust in itself changes the game, and the implicit follow-on requirement, to bolster mutual human trust with systematic mitigations, is a tall order.
But how do you contend with threats like this? Formal verification is effectively impossible in most cases. Static analysis may not reveal cleverly engineered incursions. Project pace must be maintained (there are, after all, known bugs to fix). And the threat is asymmetrical: as the classic line goes, the blue team needs to defend against everything, while the red team only needs to succeed once.
I see some possibilities for remediation:
- Limit the spread of monocultures. Efforts like AlmaLinux and AWS’s open distribution of Elasticsearch are good, partly because they keep widely used FOSS solutions free and open source, but also because they inject technical diversity.
- Reevaluate project governance, organization and funding with an eye toward mitigating complete reliance on the human factor, as well as incentivizing for-profit companies to contribute their expertise and other resources. Most for-profit companies want to contribute to open source because of its openness, not despite it, but in many communities this may require a cultural change for existing contributors.
- Accelerate commoditization by simplifying the stack and verifying components. Push appropriate security responsibility up into the application layers.
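One modest example of "verifying components": checking an artifact's digest against a pinned value distributed out of band before trusting it. This is a minimal sketch; the artifact contents and the pinning workflow shown here are hypothetical placeholders, not a prescription for any particular project.

```python
import hashlib

# Minimal sketch of component verification: compare an artifact's SHA-256
# digest against a pinned, independently distributed expected value.
def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_sha256

artifact = b"example component contents"
# In practice, the pinned digest ships separately (lockfile, signed manifest).
pinned = hashlib.sha256(artifact).hexdigest()

print(verify_artifact(artifact, pinned))              # True
print(verify_artifact(b"tampered contents", pinned))  # False
```

Hash pinning alone does not stop a malicious upstream commit, of course, but it does narrow the supply-chain attack surface to what was actually reviewed and released.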
Basically, what I am advocating here is that orchestrators like Kubernetes should matter less, and Linux should have less impact. Finally, we should move as quickly as we can toward formalizing the use of things like unikernels.
Regardless, we must ensure that both companies and individuals provide the resources open source needs to continue.