Project Maven: US use of artificial intelligence in war

By Matthew Parish, Associate Editor

Tuesday 7 April 2026

Project Maven sits at the uneasy intersection of modern warfare, artificial intelligence and moral philosophy, a programme born not out of science fiction but from the brutally practical demands of contemporary conflict. Initiated in 2017 by the United States Department of Defense, it was conceived as a response to a simple but overwhelming problem: the sheer volume of data generated by modern surveillance systems had outstripped the human capacity to interpret it.

In the wars of the early twenty-first century, particularly those fought in Iraq and Syria, unmanned aerial vehicles produced thousands of hours of video footage every day. Analysts were tasked with watching, tagging and interpreting this material, searching for signs of insurgent activity, patterns of life, or imminent threats. The process was labour-intensive, slow and prone to human fatigue. Valuable intelligence was often lost not because it did not exist, but because no one had the time or endurance to find it.

It was into this gap that Project Maven, formally the Algorithmic Warfare Cross-Functional Team, was introduced. Its purpose was to apply machine learning techniques to the analysis of drone footage, enabling computers to identify objects (vehicles, buildings, individuals) and detect patterns that might otherwise go unnoticed. Rather than replacing human analysts, Maven was intended to augment them, reducing cognitive burden and accelerating decision-making.

At a technical level Maven drew upon advances in computer vision, a field of artificial intelligence concerned with enabling machines to interpret visual information. Algorithms trained on vast datasets could be taught to recognise specific features: the outline of a truck, the heat signature of a person, the movement patterns associated with hostile activity. Over time these systems improved through exposure to more data, refining their accuracy and reducing false positives.
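To make the mechanics concrete, the sketch below shows the basic pattern such systems follow: a detector pretrained on a large labelled dataset scores the objects in a single video frame, and a confidence threshold filters out weak detections. It is a minimal, hypothetical illustration assuming the open-source torchvision library and a general-purpose model; the model choice, the file name and the threshold are assumptions made for the example, not details of Maven's actual classified implementation.

    # Minimal sketch: flag objects in one frame of aerial footage with a
    # general-purpose pretrained detector. Hypothetical throughout: the
    # model, the file "frame_0001.png" and the 0.8 threshold are
    # illustrative choices, not details of Project Maven.
    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import (
        fasterrcnn_resnet50_fpn,
        FasterRCNN_ResNet50_FPN_Weights,
    )

    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()
    preprocess = weights.transforms()  # the resizing and normalisation the model expects

    frame = read_image("frame_0001.png")          # one frame as a CHW uint8 tensor
    with torch.no_grad():                         # inference only, no gradient tracking
        detections = model([preprocess(frame)])[0]

    categories = weights.meta["categories"]       # COCO class names, e.g. "truck", "person"
    for label, score, box in zip(
        detections["labels"], detections["scores"], detections["boxes"]
    ):
        if score >= 0.8:  # a higher threshold trades recall for fewer false positives
            print(f"{categories[int(label)]}: {score.item():.2f} at {box.tolist()}")

The final line of the loop shows the paragraph above in miniature: the threshold is precisely the dial that trades missed detections against false positives, and setting it is a human policy choice rather than a property of the algorithm.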

Yet the significance of Project Maven lies not merely in its technical achievements but in what it represents: the gradual automation of perception within the machinery of war.

Historically the act of identifying a target was inseparable from human judgement. A soldier, pilot or analyst would observe, interpret and decide. With Maven this process began to shift. Machines could now pre-process reality, presenting humans with curated interpretations rather than raw data. This introduced a subtle but profound transformation: the human operator was no longer the primary observer, but rather the reviewer of a machine's perception.

The implications are considerable. When a machine flags an object as suspicious, the human decision-maker may be inclined, consciously or otherwise, to trust that assessment. The authority of the algorithm, especially when backed by statistical performance metrics, can shape human judgement. In this way Maven does not simply assist decision-making; it influences it.

The programme also sparked one of the earliest and most visible ethical controversies surrounding artificial intelligence in warfare. In 2018, it was revealed that Google had been contracted to provide machine learning capabilities for Project Maven. This led to significant internal dissent within the company. Thousands of employees signed petitions, arguing that their work should not be used to improve the lethality of military operations. Several prominent engineers resigned.

The protest was not merely about one contract; it was about the broader trajectory of artificial intelligence. Many within the technology sector had long viewed their work as fundamentally civilian, oriented towards search engines, consumer applications and scientific progress. Maven challenged this assumption. It demonstrated that the same tools used to organise information or recognise faces could also be employed to identify targets in a war zone.

Under pressure, Google chose not to renew its contract and subsequently published a set of ethical principles governing its use of artificial intelligence. Yet the withdrawal of one company did not halt the programme. The Pentagon turned to other partners, including defence contractors and smaller technology firms less constrained by internal opposition.

Figures such as Eric Schmidt, who later became involved in advising the Pentagon on artificial intelligence, argued that democratic states could not afford to cede technological leadership in this domain. If liberal societies refrained from developing military AI, they reasoned, authoritarian adversaries would not, and the strategic balance would shift accordingly.

This argument reflects a broader tension inherent in Project Maven. On the one hand it promises efficiency, precision and potentially reduced collateral damage. Better analysis could mean more accurate targeting, fewer mistakes and a clearer understanding of complex environments. On the other hand it lowers the cognitive and logistical barriers to conducting surveillance and, by extension, to the use of force.

There is also the question of accountability. When a decision is informed, or shaped, by an algorithm, where does responsibility lie? If an error occurs, is it the fault of the human operator, the developer of the algorithm, or the institution that deployed it? These questions remain largely unresolved, yet they are central to the future of warfare.

In the years since its inception Project Maven has expanded beyond its original focus on drone footage. It has been integrated into a broader network of military artificial intelligence, supporting tasks ranging from intelligence fusion to predictive analysis. The aim is not merely to process information more quickly, but to create a more coherent and responsive system of command.

This evolution reflects a deeper shift in the nature of military power. In previous eras advantage was measured in terms of manpower, industrial capacity or technological hardware: tanks, aircraft, missiles. Today it increasingly depends upon the ability to process and interpret information. Data has become both a resource and a battlefield.

Project Maven is less a discrete programme than a harbinger. It signals the arrival of a form of warfare in which perception itself is mediated by machines, and in which the speed of decision-making is accelerated beyond traditional human limits.

For countries engaged in high-intensity conflict, including Ukraine, the relevance of such systems is immediate. The proliferation of drones, sensors and open-source intelligence has created an environment saturated with data. The ability to extract meaning from this abundance can determine operational success or failure. Artificial intelligence offers a means of coping with this complexity, but it also introduces new dependencies and vulnerabilities.

There is finally a philosophical dimension to Project Maven that cannot be ignored. War has always involved a degree of abstraction: distance between decision and consequence. Yet as machines take on a greater role in interpreting the world, that distance may increase. The danger is not that humans will be removed from the loop entirely, but that their role will become more passive, less engaged with the raw reality of what they are doing.

Project Maven does not herald the arrival of fully autonomous weapons, nor does it eliminate the need for human judgement. What it does is alter the context in which that judgement is exercised. It reshapes the relationship between observation, interpretation and action, and in doing so it forces us to confront a fundamental question.

In an age where machines can see for us, how do we ensure that we do not also begin to think, and decide, as they do?

