07/15/2025 / By Willow Tohi
China’s state-controlled military newspaper has issued a rare warning about the ethical perils of deploying autonomous humanoid robots in warfare, as global superpowers accelerate development of AI-driven combat systems. An op-ed published July 10 in the People’s Liberation Army Daily warned that faulty robots could “inevitably” trigger legal and moral crises through indiscriminate killings, even as the U.S. Army concurrently unveiled breakthroughs in human-robot collaboration designed to tighten battlefield control. This clash of priorities underscores a critical crossroads: Can nations develop lethal AI technologies without eroding core principles of human dignity and accountability?
Beijing’s warning emerged in an op-ed credited to three authors in the PLA Daily, which serves as the Communist Party’s military mouthpiece. The piece emphasized that militarized humanoid robots “clearly violate” Isaac Asimov’s First Law of Robotics, which holds that machines “may not injure a human being.” The authors argued that outdated ethical frameworks must evolve to address modern military applications, stressing that autonomous systems must prioritize “obeying,” “respecting,” and “protecting” humans above all else.
The article further cited growing legal risks, noting that robot malfunctions could lead to “accidental deaths” and drawn-out war crimes investigations. It acknowledged humanoid robots’ potential to manipulate tools and navigate obstacles, but cautioned that technical limitations, such as slower speed and poor terrain adaptability, mean they cannot fully supplant human soldiers or other unmanned systems.
China’s cautions come amid its aggressive push in robotics, with state entities such as the military contractor CETC predicting mass production of humanoid robots for civilian and industrial use within two years. Military applications, however, remain fraught: last month, heavy robots tested by a U.S. Army team struggled with basic battlefield tasks like opening doors, highlighting unresolved challenges.
While China deliberates ethical boundaries, the U.S. Army is racing to bridge the gap between human soldiers and autonomous machines. Researchers at the Army’s Combat Capabilities Development Command unveiled advancements in off-road mobility for AI-equipped ground vehicles, such as real-time battlefield language understanding and obstacle negotiation.
“This isn’t science fiction,” said Phil Osteen, lead researcher for the Army’s Artificial Intelligence for Maneuver and Mobility (AIMM) program. “We’re building robots that can communicate naturally with troops—assessing damage, sharing mission updates, adjusting paths based on real-world chaos.”
The Army also demonstrated bi-directional communication systems that allow soldiers to issue voice commands and receive instant feedback on robot movements. Udam Silva, AIMM manager, noted that prototypes now autonomously navigate forests at operational speeds, a capability deemed critical for contested zones like the Taiwan Strait or mountainous theaters such as Kyrgyzstan.
Human-Autonomy Teaming (HAT) programs under development aim to refine soldier-robot collaboration further. Dr. Brandon Perelman emphasized the system’s adaptability, enabling robots to “correct deviations” based on soldier input. “This isn’t just about moving robots; it’s about trusting them,” Perelman stated.
China and the U.S. are locked in an AI arms race, each exploring humanoid robots as a deterrent but struggling with conflicting imperatives. While Beijing fears overreach, Washington prioritizes battlefield effectiveness—a tension echoed in remarks by PLA analysts who noted robots “cannot replace the human mind.”
Historically, military innovation has often outpaced ethics: poison gas in World War I and nuclear weapons both shocked global norms, yet today’s AI poses unprecedented dilemmas. Asimov’s fiction-inspired laws, written in 1942, now guide debates, but experts like Prof. Noel Sharkey argue they lack enforceability.
The confluence of geopolitical competition and technological ambition risks normalizing autonomous killing machines. “The race isn’t just for superiority, it’s for legitimacy,” said tech ethicist Dr. Mei Lin. “Without binding regulations, war crimes could stem from technical glitches, not just human malice.”
As China and the U.S. pour billions into military AI, the PLA Daily’s warning, a rarity for a propaganda arm, signals that even Beijing fears the unchecked consequences. Simultaneously, U.S. breakthroughs reveal battlefield-ready systems within reach. The path forward demands global dialogue, lest a Terminator-style dystopia of “indiscriminate killings” materialize. In the words of the op-ed’s authors: “Human life cannot be reduced to a cost-benefit calculation.”