At the end of the day, the real conflict between the OSS maintainers of something like FFmpeg and Google is a misalignment on what purpose and scope the project ultimately serves.

The OSS maintainers discuss a project like this as a best-effort labor of love and, as with anything given away for free, there’s a sense that you should “take it or leave it” and be appreciative that someone is doing the work at all. If you want it to be different, you can make it your own (assuming the requisite skills, time, etc.).

Google is the apex standard bearer for mega-scale software engineering and, from that perspective, any piece of software (OSS or proprietary) needs to conform to a standard of maintainable integrity that cannot abide known/unmanaged flaws.

There are facile, viable, but ultimately disadvantageous arguments for blaming or dismissing either party:

  • Why doesn’t Google just fork FFmpeg, fix it for themselves, and move on?
  • Why don’t the OSS maintainers scale up the pool of available maintainers by tapping Google’s bounties and broader donations to fund more support?

Gaming out the scenarios, I’m not sure that we should be so quick to root for either one.

The former, which Google is perfectly able to do, is likely to result in a more fragmented ecosystem and greater dependency on a software behemoth whose primary purpose is to serve its own needs and for whom there’s little room for niche use cases and deviations, the likes of which FFmpeg currently enables.

The latter is a sizable job, and not necessarily one that these OSS maintainers are looking to take on. Effective long-term OSS projects feel to me like a miraculous happenstance, often drawing benevolent contributions from passersby and from folks whose talents are already committed to other meaningful pursuits, and who aren’t looking for another full-time job serving a faceless, ungrateful mass at greater personal sacrifice.

As a side note, I’d be curious to hear about the experience of ‘hybrid models’ where a commercial product is built on top of an OSS project, like CodeWeavers’ support of LGPL-licensed Wine through the development of CrossOver, claiming two-thirds of Wine’s commits. Interestingly, not all LGPL projects follow this path: FFmpeg, while also LGPL-licensed, explicitly states that it’s “not available under any other licensing terms, especially not proprietary/commercial ones, not even in exchange for payment.” Combined with its flat, consensus-driven governance structure, this means FFmpeg has no commercial steward equivalent to CodeWeavers, leaving it dependent on sporadic grants and donations despite being critical infrastructure for YouTube, Netflix, and countless other services. This raises the question of whether the Wine/CrossOver model could or should be replicated for other foundational open-source projects, or whether FFmpeg’s strict licensing philosophy, while perhaps more ideologically pure, condemns it to chronic underfunding.

These arguments fall into familiar intractable tropes, particularly on the side of the OSS community, with unbending figures like Linus Torvalds espousing a sort of petulant stewardship model that insists the maintainer is a benevolent regent beholden to no one, that security people are just out for publicity (not entirely untrue), and that if folks want different outcomes than the maintainer decides, they should go make their own. This view implies that bikeshed arguments have no cost. And there’s no resolving this issue without addressing the misaligned visions and incentives of the parties involved.

Ultimately, there are some levers involved in this scenario that I’d highlight for reframing the problem: utility, complexity, and capacity.

  • A project like FFmpeg provides immense utility to myriad downstream projects and their respective users. It is an important dependency for many things we love and use regularly and, as recipients of that utility, we have a stake in its integrity and stability.
  • Part of the immense benefit of its existence and proper maintenance is that we need not recreate the same thing a hundred different ways, expending massive amounts of our collective engineering capacity to reimplement some subset of FFmpeg’s features at varying standards of quality and integrity.
  • The main adversary of collective security is systemic complexity, not criminals or state-sponsored spies. Complexity degrades a system’s integrity non-linearly. Put more simply, there’s a limit to our collective capacity to analyze, debug, and maintain systems effectively.
  • Creating a new project out of whole cloth (say, Google’s own media codecs library) creates an entirely new codebase in need of that capacity.
  • So does encouraging everyone to fork their own version of an existing codebase: we then have a branched universe of development whose successive improvements (the product of spent capacity) are drastically less likely to find their way back to the original codebase or its other branches.

In those terms, we all have interesting parts of this problem to grapple with:

  • How do we maintain and support a healthy “collective commons” of software utility?
  • How do we increase capacity for that collective commons while also ensuring that the output of that spent capacity – be it added utility (more features) or increased integrity (fewer bugs and vulnerabilities) – is effectively diffused to the greatest possible majority?

While all software users and maintainers have a vested interest in this discussion, the level of importance or say should ultimately be determined by their respective contributions to the collective capacity spent on these efforts and the diffusion of its effects. Meaningful security does not exist in the absence of diffusion to the majority of an ecosystem’s users. Those who are comfortable living in a niche sidepocket of the software ecosystem defined by pleasing syntactic sugar are not actively participating in advancing our collective security, and we should assign their concerns the appropriate level of importance.

The takeaway for companies leveraging vast AI-supplied increases in capacity to contribute to security is that, for the OSS ecosystem, contributing bugs and vulns without comprehensive patches is a net subtraction of capacity. We seem well underway in solving the capacity-increase half of the problem, but until AI reliably addresses the patching and maintenance half of that equation, it’s possible we’ll do more harm than good to the integrity of OSS projects.