AI-powered lethality, less civilian oversight
A day after his Senate confirmation on January 24, Pete Hegseth had a message for the Department of Defence (DoD): rebuild the US military into the most lethal force that will put America first.
“We will rebuild our military by matching threats to capabilities. … We will remain the strongest and most lethal force in the world,” he said in a press release.
The 2025 Global Firepower (GFP) index ranks the US number one among the 145 countries assessed, with a score of 0.0744 (a score of 0.0000 is considered perfect). The GFP considers 60 factors to determine a nation’s power index, including defence technology, financial resources, logistics, geography and strategic position.
According to the Stockholm International Peace Research Institute, the 2023 US defence budget was $880 billion, more than the next eight countries, including China and Russia, combined. The massive defence outlay allows the US to acquire the most advanced weapons and cutting-edge tech, sustain a massive and well-trained military and maintain around 750 military bases in more than 80 countries.
Hegseth’s message said nothing about how he intends to make the world’s most lethal military even more lethal. However, two recent and perilous Pentagon decisions show the path to adding that lethality.
First, the Donald Trump administration has decided to advance the use of artificial intelligence (AI) in the military from hunting down terrorists and interacting with commanders to operational and theatre-level planning.
Second, Hegseth has started gutting offices and programmes established to prevent civilian casualties caused by US military operations.
What’s more worrying is that both decisions are interconnected.
Advanced AI-military integration
In 2011, a stealth RQ-170 intelligence, surveillance and reconnaissance (ISR) drone, aka the “Beast of Kandahar”, took off from Afghanistan, flew undetected in Pakistani airspace, monitored Osama bin Laden’s Abbottabad compound for months and provided live video to the US Joint Special Operations Command (JSOC).
Simultaneously, Palantir Gotham—the defence and intelligence platform of Palantir Technologies, which specialises in software for big data analytics—integrated and analysed that data alongside other surveillance and reconnaissance reports and intelligence on bin Laden, identifying patterns and connections within them.
Finally, JSOC’s SEAL Team Six eliminated the 9/11 mastermind on May 2, 2011.
Palantir made it easy for JSOC operators to connect the dots within the data trove. Still, it took months from monitoring to killing bin Laden—AI had not yet entered combat.
Six years later, on April 26, 2017, a new era of AI-military integration dawned.
Project Maven, officially known as the Algorithmic Warfare Cross-Functional Team, was established by then-deputy defence secretary Robert O Work with a memo. Funded by the DoD, the project integrated AI with the US military’s kill chain—find the target, fix it, track it, select the weapon of choice and destroy it.
Maven could quickly harvest and analyse the enormous amounts of data gathered by drones and satellites via machine-learning systems and identify an array of targets, including humans, military facilities and weapons. Performing such a task manually was all but impossible: analysts would have to spend days sifting through satellite and drone images, videos and surveillance data, relying only on their eyes.
Maven initially collaborated with Google on data fusion and later switched to Palantir, Microsoft, Amazon Web Services, Maxar Technologies and others.
Within eight months, the Maven Smart System interface was using computer-vision algorithms to identify objects in a video feed sent by a ScanEagle drone at an undisclosed location in West Asia. Yellow-outlined boxes on the interface marked potential targets; blue-outlined boxes indicated civilian-inhabited places or friendly forces.
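Public reporting does not describe how Maven’s interface works internally. But the workflow outlined above—detect objects in each video frame, then mark every detection as a potential target or a protected entity—can be sketched roughly as follows; the labels, confidence threshold and structure are illustrative assumptions, not Maven’s actual parameters.

```python
# Illustrative sketch only: a generic detect-then-classify loop of the kind
# described above. Labels, threshold and structure are hypothetical; they are
# not Maven's actual design.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g. "technical_vehicle", "civilian_structure"
    confidence: float   # model confidence in [0, 1]
    box: tuple          # (x, y, width, height) in pixels


# Hypothetical policy: which labels may be outlined as potential targets
# (yellow) and which must be outlined as protected (blue).
POTENTIAL_TARGET_LABELS = {"technical_vehicle", "weapon_emplacement"}
PROTECTED_LABELS = {"civilian_structure", "friendly_vehicle"}
MIN_CONFIDENCE = 0.6


def annotate(detections: list[Detection]) -> list[tuple[tuple, str]]:
    """Return (box, colour) pairs to draw on a single video frame."""
    annotations = []
    for det in detections:
        if det.confidence < MIN_CONFIDENCE:
            continue  # low-confidence detection: left for a human analyst
        if det.label in PROTECTED_LABELS:
            annotations.append((det.box, "blue"))    # civilian or friendly
        elif det.label in POTENTIAL_TARGET_LABELS:
            annotations.append((det.box, "yellow"))  # potential target
    return annotations
```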
Soon, Maven was used to hunt down Islamic State (IS) members in Syria and Iraq. In 2021, the interface helped evacuate American military personnel from Afghanistan; in 2022, it helped Ukrainian troops locate and target Russian soldiers; and in 2024, it was used to destroy Houthi rocket launchers and vessels. In February of that year, Maven helped the Pentagon identify targets for more than 85 airstrikes in the Middle East.
In the Pentagon’s own words, AI is used to speed up killing. “We obviously are increasing the ways in which we can speed up the execution of the kill chain,” Radha Plumb, the outgoing chief digital and AI officer at the DoD, told TechCrunch in a January interview.
The Joe Biden administration went further by using generative AI in the military.
The Pentagon’s Defence Innovation Unit (DIU), or Unit X, ensures that the US military gets access to emerging technology in Silicon Valley.
Last December, the Pentagon established the AI Rapid Capabilities Cell (AIRCC) to expedite the adoption of Large Language Models (LLMs) and other forms of generative AI. AIRCC, with $100 million in funding, will implement the recommendations of Task Force Lima, set up in August 2023. The task force aimed to utilise generative AI models in warfighting and other fields, like health and finance, and to leverage partnerships across the Pentagon, the intelligence community and other federal agencies to reduce redundancy and risk.
Generative AI was used during the annual South Korea-US Freedom Shield exercise last year, when a ChatGPT-like chatbot interface scoured open-source intelligence such as articles, reports, images and videos; translated and summarised foreign news sources; and wrote daily intelligence reports for US commanders.
Defence-tech company Vannevar Labs, which designed the interface and received a $99 million production contract from the DIU last November, uses LLMs, including models from OpenAI and Microsoft. Since 2021, it has been collecting terabytes of open-source intelligence in 80 languages across 180 countries. Vannevar then builds AI models to translate the data and detect threats through its ChatGPT-like interface.
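Vannevar Labs has not published its architecture. A minimal sketch of the translate-summarise-report workflow described above might look like the following; the model name, prompts and function names are assumptions for illustration, not the company’s actual implementation.

```python
# Minimal sketch of a translate-and-summarise OSINT pipeline, assuming a
# generic LLM API (here the OpenAI Python SDK). Prompts, model name and
# structure are illustrative assumptions, not Vannevar Labs' design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def translate_and_summarise(document_text: str, source_language: str) -> str:
    """Translate one foreign-language open-source document and summarise it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Translate foreign-language open-source reports into "
                        "English and produce a short analytic summary."},
            {"role": "user",
             "content": f"Source language: {source_language}\n\n{document_text}"},
        ],
    )
    return response.choices[0].message.content


def daily_intelligence_report(documents: list[tuple[str, str]]) -> str:
    """Combine per-document summaries into a single daily report."""
    summaries = [translate_and_summarise(text, lang) for text, lang in documents]
    return "\n\n".join(f"- {s}" for s in summaries)
```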
According to Leigh Madden, vice president of the National Security Group at Microsoft, generative AI can process intelligence, signals and reconnaissance data in real time, enhancing decision-makers’ situational awareness. In a SIGNAL Media Executive Video Series episode, Madden said that generative AI training scenarios incorporate terrains, weather conditions, and enemy tactics and behaviour.
Now, Trump and Hegseth want to use AI for military operational and theatre-level planning.
This is the third phase of military AI. Agentic AI is a revolutionary technology in which the system can decide and plan actions according to its human operator’s intent while adapting to changing circumstances—a degree of autonomy that traditional AI models lack.
Agentic AI quickly and comprehensively synthesises “a broader scope of traditional and non-traditional planning factors than humans alone to help produce more thorough, objective courses of action (COA)”, according to Richard Farnell and Kira Coffey.
In an article written for the Belfer Centre for Science and International Affairs, Farnell, a 2024 National Security Fellow at Harvard Kennedy School’s Belfer Centre, and Coffey, a 2024 Air Force National Defence Fellow and International Security Program Research Fellow at the same centre, explain agentic AI’s potential.
“Once a COA is selected, Agentic AI also has the potential to help rapidly publish downstream directives and orders, flattening communication and saving hundreds of man-hours in each planning cycle.”
Agentic AI can help solve large-scale, complex problems independently amid changing battlespace conditions. “Creating multiple dilemmas for a near-peer adversary requires continuous integration of capabilities across all instruments of power and all domains, including the electromagnetic spectrum and the information environment,” they write.
The Pentagon has awarded a contract to data annotation company Scale AI, whose Thunderforge system will accelerate decision-making, allowing planners to more rapidly synthesise vast amounts of information, generate multiple courses of action, and conduct AI-powered wargaming to anticipate and respond to evolving threats.
Thunderforge will be deployed initially with the US Indo-Pacific Command and the US European Command. It “brings AI-powered analysis and automation to operational and strategic planning, allowing decision-makers to operate at the pace required for emerging conflicts”, according to Bryce Goodman, DIU Thunderforge lead and contractor.
Scale AI’s customers include OpenAI, Microsoft, Cisco, Meta and TIME. Thunderforge will also include Anduril’s Lattice software platform and Microsoft-enabled state-of-the-art LLMs.
Scale AI frames the gap between current warfare and agentic warfare this way: today, people with decades of single-domain knowledge connect workflows manually and take days to decide; under agentic warfare, AI models with around 4,000 years of all-domain knowledge connect workflows automatically, with human oversight, and decide in minutes.
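Scale AI has not disclosed Thunderforge’s internals. The loop described above—draft candidate courses of action, score them through wargaming and hand the ranked result to a human for approval—can nonetheless be sketched abstractly; every name, score and step below is an illustrative assumption rather than the real system.

```python
# Abstract sketch of an agentic planning loop with a human approval gate,
# as described above. Names, scoring and structure are assumptions; this is
# not Thunderforge's actual design.
from dataclasses import dataclass


@dataclass
class CourseOfAction:
    name: str
    summary: str
    wargame_score: float = 0.0  # higher = better simulated outcome


def draft_courses_of_action(situation: str) -> list[CourseOfAction]:
    """Stand-in for an LLM/agent step that drafts candidate plans."""
    return [
        CourseOfAction("COA-1", f"Defensive posture for: {situation}"),
        CourseOfAction("COA-2", f"Rapid reinforcement for: {situation}"),
    ]


def wargame(coa: CourseOfAction) -> float:
    """Stand-in for AI-powered wargaming; returns a simulated outcome score."""
    return 0.7 if "reinforcement" in coa.summary else 0.5


def plan_with_human_oversight(situation: str, approve) -> CourseOfAction | None:
    """Draft, wargame and rank COAs, then hand the top candidate to a human."""
    candidates = draft_courses_of_action(situation)
    for coa in candidates:
        coa.wargame_score = wargame(coa)
    best = max(candidates, key=lambda c: c.wargame_score)
    return best if approve(best) else None  # the human keeps the final say
```

The point of the sketch is the approval gate: the agent proposes and ranks options, but a human still makes the final call.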
The DIU press release said the new AI marks a decisive shift in how the Pentagon plans to fight wars. “Thunderforge marks a decisive shift toward AI-powered, data-driven warfare, ensuring that US forces can anticipate and respond to threats with speed and precision. Following its initial deployment, Thunderforge will be scaled across combatant commands,” the agency explained.
Pentagon guts civilian casualty programmes
At least 4.5-4.7 million people have been killed in post-9/11 wars in Iraq, Afghanistan, Pakistan, Syria, Yemen and Somalia. According to a 2023 report by Rhode Island-based Brown University’s Watson Institute for International and Public Affairs, an estimated 408,000 civilians out of that toll died directly from war violence.
In the first 20 years of the war on terror, American airstrikes killed up to 48,308 civilians in more than 91,000 strikes in Afghanistan, Iraq, Libya, Pakistan, Somalia, Syria and Yemen, a 2021 analysis by UK-based airstrike monitoring group Airwars shows.
The US has been facing stinging international criticism for years for killing civilians in war zones. Damning probes by the media, NGOs and think tanks have revealed the merciless killing of civilians by the US military since 9/11.
In 2022, the Biden administration established the Civilian Harm Mitigation and Response Action Plan (CHMR-AP) to better prevent, mitigate and respond to civilian harm in military operations. The plan also aimed to increase accountability for civilian casualties, improve transparency in Pentagon practices related to civilian protection, and ensure a swift and effective response when civilians are harmed.
The CHMR-AP also established a Civilian Protection Centre of Excellence (CPCE) to guide the DoD’s understanding of the capabilities and practices that support civilian harm mitigation and response. It served as the hub for department-wide analysis, learning and strategy, and was meant to institutionalise good practices for civilian harm mitigation and response during operations.
Biden also instructed military and CIA drone operators to obtain presidential permission before targeting a suspected militant outside a conventional war zone. In October 2022, he ordered drone operators to establish near certainty that no civilians would be harmed before a strike.
Except in Iraq and Syria, where IS still operates, presidential permission was made compulsory for drone strikes in Afghanistan, Yemen, Libya, Somalia and Pakistan’s former FATA region.
Trump has not only removed the restrictions but also decided to do away with CHMR-AP and CPCE.
Since then, the US has launched several strikes in Iraq, Syria and Somalia. Its airstrikes in Yemen since March 15 have killed more than 200 people, with the most recent attack, on the Ras Isa oil port, killing around 80.
Now, Hegseth has decided to terminate CHMR and CPCE staff across all US commands despite the Pentagon policy requiring that possible dangers to civilians be considered in combat planning and operations.
That’s Hegseth’s idea of rebuilding the US military into a more lethal force.
“I’ve thought very deeply about the balance between legality and lethality, ensuring that the men and women on the frontlines have the opportunity to destroy the enemy and that lawyers aren’t the ones getting in the way,” Hegseth said at his Senate confirmation hearing.
Hegseth also said that laws like the Geneva Convention existed “above reality”. “We follow rules. But we don’t need burdensome rules of engagement [that] make it impossible for us to win these wars.”
Hegseth also feels that lawyers hinder military effectiveness. On February 21, he sacked the judge advocates general (JAGs) of the Army and the Air Force; the Navy’s JAG had abruptly retired in December.
‘Lethal’ concoction in the making
The character of warfare is rapidly changing, and the traditional three-domain doctrine of land, sea and air will be junked.
Though no global power has yet allowed AI to take over its military—humans remain in control—machines turning autonomous in combat looks inevitable.
China has already created an AI commander at the Joint Operations College of the National Defence University in Shijiazhuang, Hebei province—a virtual counterpart to a human commander, complete with his experience, strengths and flaws.
In a peer-reviewed paper published in the Chinese-language journal Command Control & Simulation in May 2024, senior engineer Jia Chenxing writes: “The highest-level commander is the sole core decision-making entity for the overall operation with ultimate decision-making responsibilities and authority.”
The prospect of deciding faster on a battlefield awash in data tempts the human mind, especially when the adversary is also using AI. Decisions taken in a fraction of a second can determine the course and outcome of a war.
However, even agentic AI isn’t immune to algorithmic biases. Any AI system is trained on datasets with inherent biases that could lead to disastrous consequences.
An AI system in war trained on a dataset with prejudices against a particular community, race, ethnicity or even gender will pick targets accordingly.
The consequences can be more severe in the case of agentic AI, as commanders plan operations at the theatre level, which is much more complex than merely picking and eliminating targets. A scenario where machines in a war turn autonomous would be frightening.
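As a toy illustration of how dataset bias propagates—using entirely synthetic data, not any deployed system—a model trained on historical records that over-flag one group simply reproduces the skew:

```python
# Toy illustration of dataset bias, with entirely synthetic data.
# If the historical labels over-flag one group, a model fit to them
# learns to over-flag that group too.
import random

random.seed(0)

# Synthetic "historical" records: (group, flagged_as_threat).
# Group B is flagged five times more often despite identical behaviour.
records = (
    [("A", random.random() < 0.02) for _ in range(10_000)]
    + [("B", random.random() < 0.10) for _ in range(10_000)]
)

# A naive "model" that simply learns the flag rate per group.
flag_rate = {
    g: sum(flag for grp, flag in records if grp == g)
       / sum(1 for grp, _ in records if grp == g)
    for g in ("A", "B")
}

print(flag_rate)  # roughly {'A': 0.02, 'B': 0.10}: the bias is reproduced
```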
As the Pentagon axes programmes aimed at minimising collateral damage—civilian deaths—accountability will be the main casualty if an AI system makes an error that kills non-combatants.
Who will be responsible for the collateral damage? The Pentagon can’t blame the machines, but it won’t take the blame either.
A prime example is Israel’s use of AI targeting programmes like Lavender, Where’s Daddy? and The Gospel to eliminate Hamas and Palestinian Islamic Jihad (PIJ) operatives, resulting in massive civilian casualties.
Lavender, developed by Israel’s elite and clandestine signals intelligence and cyberwarfare division, Unit 8200, was designed to generate kill lists of suspected junior Hamas and PIJ operatives in the initial months of the war.
According to a joint investigation by +972 Magazine and the Hebrew-language news website Local Call, the Israel Defence Forces (IDF) trusted Lavender’s output as if it came from a human analyst: the system marked some 37,000 Palestinians and their homes as targets, and Gaza was bombed accordingly.
Where’s Daddy? tracked the suspected militants as they entered their houses, and The Gospel identified structures and buildings as targets.
The IDF was so reliant on Lavender that it spent merely about 20 seconds on each target before authorising a bombing—mainly to confirm that the target was male.
Despite Lavender being 10 per cent inaccurate—meaning 10 out of every 100 targets it identified weren’t militants, or roughly 3,700 of the 37,000 people it flagged—the IDF didn’t review the system’s assessments. The mistaken targets could have been police officers, civil defence workers, relatives of Hamas or PIJ members, or Gazans with a name or nickname similar to that of an operative.
When Where’s Daddy? signalled to the IDF that a target had entered his house, the residence was struck with an unguided bomb so that expensive precision munitions could be saved for high-value targets. Consequently, the whole house was blown up, killing the target along with his entire family.
“When they reach their homes, daddy’s home, and then the entire house, and everybody in it, could be blown up,” an Al Jazeera Investigations report on how Lavender and Where’s Daddy? operate in Gaza noted in October 2024.
Around 16-18 houses in the Al-Bureij refugee camp were blown to smithereens and some 300 civilians killed on October 17, 2023, because Lavender failed to pinpoint a top Hamas commander’s location.
Axing the CHMR-AP and the CPCE while adopting agentic AI will pave the way for more civilian casualties: machines will neither prevent, mitigate or respond to civilian harm nor be trained to limit collateral damage.
Hegseth wants the US military to be more lethal, not legal. Therefore, there’s a high probability that datasets fed into AI systems are biased towards more lethality at the cost of civilian casualties.
AI systems need constant monitoring, oversight and review.
That’s the reason China has restricted the AI commander to a laboratory despite it “possessing sound mental faculties, a poised and steadfast character, capable of analysing and judging situations with calmness, devoid of emotional or impulsive decisions, and swift in devising practical plans by recalling similar decision-making scenarios from memory”.
The writer is a freelance journalist with more than two decades of experience and comments primarily on foreign affairs. Views expressed in the above piece are personal and solely those of the writer. They do not necessarily reflect Firstpost’s views.