
AI researcher on the drone debate: "For me, it would be terrible if my work contributed to the death of people"

Two U.S. soldiers arm a drone with Hellfire missiles.
© Senior Airman Larry E. Reid Jr. / Picture Alliance
Dr. Jakob Foerster researches artificial swarm intelligence and campaigns against arming the Bundeswehr with combat drones. Stern spoke with the expert, who doesn't shy away from debate.

This is the English translation of an interview conducted in German.

Mr. Foerster, you have publicly positioned yourself against the acquisition of armed combat drones. Your open letter to the SPD has reignited the debate in Germany about arming the Bundeswehr with combat drones. Why are these drones so important? Other countries have been using these weapons for a long time.   

Combat drones are used worldwide in armed conflicts, in wars without a declaration of war and in so-called "targeted killings." This weapon has blurred the boundaries of war and eroded international law. Its use means permanent terror for the population of the affected regions, and a large proportion of the victims are civilians. The drone war in Nagorno-Karabakh also shows the brutality of this weapon.

Hence my urgent public call to refrain from acquiring these weapons and to take international action against their use.

We have been experiencing drone attacks since around the year 2000, and in that time the technology has developed at a breakneck pace.

In the last 20 years, a revolution has taken place in artificial intelligence that has brought the risk of full automation of these weapons within reach. Techniques that were distant dreams when the first combat drones were developed are now a reality and used on a daily basis, from automatic image recognition and speech recognition to algorithms that learn to master strategy games. Particularly in light of this new state of knowledge, it is imperative to work toward an international outlawing of combat drones; Germany can play a pioneering role internationally in this regard. The current "no" to combat drones was a first, decisive step in this direction. Now it is absolutely crucial that the grand coalition stops the development of the planned, weaponized Eurodrone.

Even with combat drones, it is ultimately still the human being, not the machine, who decides whether to fire.

You say it yourself: "still." In fact, every remote-controlled combat drone has two capabilities by definition: It can both pick up information via sensors and send it to a control center, and it can receive and execute control commands via radio. From the drone's perspective, it is completely irrelevant whether the command from the control center was generated by a human or a computer program. The hardware question of whether a drone is armed or not is very clear to define and regulate. In contrast, the transition between remote control, partial autonomy and full autonomy is completely fluid.

So it's not a big step from human remote control to fully automated killing?

From an AI research perspective, large parts of drone control can already be automated, and AI development continues to advance rapidly. In the long term, any modern remotely controlled armed drone can be converted into a fully automated one via software updates at the control centers and operate in a swarm with other drones. Some proponents of combat drones deny this, but from the perspective of AI research, the decision whether or not to arm drones has major implications for the future automation of warfare. Given the scope of the issue, such misleading and negligent downplaying outrages me as a researcher in the field.

Where is the difference between this and an observation drone that searches for a target, acquires it, and relays the target data to another weapon carrier in a fully automated fashion? So if the missile is not launched from the drone, but from a truck?

That would first require a truck to be positioned nearby. What I am saying is that combat drones radically lower the inhibition threshold to wage war even on the other side of the world. The general lowering of the inhibition threshold was also impressively demonstrated by the war over Nagorno-Karabakh.

Of course, I am also generally in favor of outlawing autonomous warfare. Neither tanks, trucks, nor drones should be allowed to fire without the decision of a human being in the specific situation. By the way, the defense company Rheinmetall has just developed an unmanned tank and is starting to market it worldwide. So the problem of automated, unmanned weapons systems is not only in the air.

Combat drones are the current cause. Behind this is the entry of AI into autonomous combat systems, whether these are drones or mini tanks. Modern air defense or artillery systems also work this way. Can you describe the dangers you see?

The basic problem affects all weapons classes: automated warfare hands the decision over life and death, and in case of doubt over the outbreak of a war, to algorithms. So does the problem that this can lead to an acceleration of the course of war and a constant escalation of violence.

On the other hand, there are differences in how acute the threat of automation is. Drones operate in the air and are therefore relatively easy to model and automate compared to remotely piloted mini tanks, for example because there are fewer obstacles in the air. Combat drones are therefore a momentous step toward automated warfare.

Automated air defense systems would also be tricky because of the problems I mentioned earlier, although I see a different dimension here because this is primarily a defensive technology.

Put simply, technologies that allow countries to defend themselves efficiently but cannot be used to attack tend to have a stabilizing effect, while technologies that open up new attack options have a destabilizing effect. Armed drones are a disaster in this regard because they can be produced and operated relatively cheaply and, as noted previously, significantly lower the inhibition threshold to attack.

War and killing is always horrific. Weapons maim, burn, and shred people. What makes AI weapon systems so much more horrible?

Well, first, because of the unimaginable horrors you describe, war as a means of politics is internationally outlawed. From the experience of two world wars started by Germany, even the preparation of war of aggression in this country is punishable under the Basic Law.

Combat drones are offensive weapons, and the current German debate is accordingly about combat drones for foreign missions.

A war of aggression would be punishable with and without combat drones.

International law presupposes that a human being makes the concrete decision to kill and can then be held accountable for it. This step must therefore not be transferred to machines in the future. This principle threatens to be weakened by the introduction of combat drones.

Moreover, it is to be expected that by further removing the human inhibition threshold, killing will only become more efficient and brutal. Humans are capable of empathy and can feel compassion for the pain of others. In contrast, such concepts are meaningless to algorithms.

There are two megatrends in the field of armaments. One is the advent of AI, reducing the ability of humans to control. The other is the development of hypersonic weapons, which will greatly reduce response times. The vicious cycle: the attack comes so fast that only AI can respond in time. Are we in for a science fiction scenario where autonomous systems on both sides wage war against each other?

Yes, Marcel Dickow of the Stiftung Wissenschaft und Politik (German Institute for International and Security Affairs), for example, pointed to this scenario in the Defense Committee's drone hearing back in 2014. Ever shorter reaction times of enemy drones are forcing us to join in this unmanned automation race. But I see another huge problem: Automated weapons can be used directly against defenseless people.

We're already seeing that in the global drone wars.

Exactly. But the scenario you describe poses another huge danger: the outbreak of nuclear war has been prevented several times because, at crucial moments, humans declined to act on false alarms from technical systems reporting an alleged nuclear first strike by the other side. How would history have played out if the response had been completely automated?

Humans are capable of stopping a military escalation out of reason or empathy; the course of an autonomous war would be determined solely by the algorithms.

War by algorithms is also becoming conceivable in Western Europe.

First and foremost, there is a danger that automated warfare will be carried from the major military powers, including Western Europe, to the rest of the world. France, Germany and Spain are currently planning a joint armament project that is to function largely automatically and will be nuclear-armed by France, the so-called Future Combat Air System (FCAS). At its core is a combat aircraft accompanied by armed drone swarms. These could operate autonomously.

The arms race, in turn, has a destabilizing effect on the entire world and thus also endangers Western Europe. It is necessary to take the emergency exit before it is too late to prevent the scenario you describe. Armament projects such as combat drones and FCAS must be stopped at all costs. This would create a new situation in Europe as well - Germany would then set an international example, and there would be a chance for a peaceful European foreign policy.

In the past, the saying was often quoted that war is the father of all things. In this context, I would like to formulate: "War is the exploiter of many things". Civilian AI research has to fear being used for military purposes without being asked. Would you agree with that thesis? And if so, explain this process of repurposing.

That's a very good reference. Indeed, many technological developments are very quickly put to military use, but few are directly driven by the military; the best-known example of the latter is probably the atomic bomb. Of course, the military has also been actively investing in AI for many decades; however, in my opinion, these efforts have not contributed to the breakthroughs of the last two decades.

The breakthroughs are being achieved in the civilian sector. The military's exploitation of AI research works particularly quickly here. Why is that?

That's right, the successes come from civilian research institutions and companies. AI research places a high value on openness and reproducibility of results. As a result, the latest algorithms are typically available as open source to the general public and can therefore also be used by the military sector. Since this is software that can be freely shared and reproduced, the findings spread particularly quickly. Without the revolution in machine vision over the past twenty years - based almost entirely on research by civilian universities and companies - automated recognition of objects for military purposes, for example, would not even be possible.

To prevent military use, we need prohibition treaties in good time; for the atomic bomb they unfortunately came very late, and I hope they will soon be enforced for unmanned weapons systems. My own research is also available to everyone as open source. For me, it would be terrible if my work contributed to the death of people. I hope, through my open letter and also this interview, to contribute to ending this insanity in time.

I come to a very pragmatic point of view. Germany, or rather the Bundeswehr, is not a leader in these areas. The drones that are being discussed are bought or rented. They have been on the market for a long time. What effect should Germany's renunciation have?

We can either run blindly into the abyss of a global arms race with a supposed race to catch up regarding drones, or we can return to our pioneering role for disarmament, global understanding and diplomacy. And even the temporary halt to the arming of drones in Germany has a major signal effect and raises hope for a change in policy toward a policy of disarmament and cooperation. This is noticeable in the public debate and is also well perceived internationally. A definite "no" from the German government to armed drones and the renunciation of the development of a Eurodrone would be a signal for an exit from the arms spiral and for a new policy of détente.

The driving forces in this technology are the military superpowers USA, China, Russia - plus smaller states such as Israel. These states would not stop the development after all.

These countries are also dependent on global challenges to humanity, such as climate change, being solved globally. Therefore, either way, we need mechanisms of global cooperation and coordination for the long-term survival of humanity on this planet.

Preserving peace and preventing arms races is an imperative requirement for successfully addressing these challenges. More and more people see this. Incidentally, according to the 2014 Politbarometer survey - I am unfortunately not aware of a more recent one - the vast majority of the German population is against combat drones.

The mood of the population in Germany will impress neither the USA nor China.

Also, out of security self-interest, no country has an interest in the proliferation of these weapons, which are increasingly easy to produce and have a destabilizing effect. The number of countries possessing these weapons has increased rapidly in recent years. It is therefore in the interest of the industrialized military powers in particular to stop this development. This can only be achieved through international agreements. The idea that Western countries could prevent others from acquiring these weapons without appropriate agreements is completely unrealistic.

In the conflict over Nagorno-Karabakh, one could see that a country like Azerbaijan is more modernly armed than many NATO countries. The drones of the Azeri side won the war. In the videos circulating online, you can watch them hunt down individual vehicles and destroy them. If this is what the conflicts of the present look like, our soldiers would not stand a chance against an opponent armed with combat and kamikaze drones. What do you say to soldiers who, in case of doubt, would have to go into the field against an opponent armed in this way?

The crucial thing is: without combat drones, this murderous war would probably not have taken place at all! And militarily, this war, in which Turkey was involved, teaches us, if anything, to invest in modern air defense and in targeted drone defense such as radio jamming; unmanned offensive weapons offer no protection here.

Apart from that, it can be stated: an attack from outside is currently not a realistic threat to Germany, and German soldiers are not fighting against combat drones. However, the only way to ensure that this remains the case in the long term is to strengthen international diplomacy again and to disarm. The example of the U.S. teaches us that arming the armed forces with a terror weapon such as combat drones only endangers the safety of soldiers, since violence breeds counter-violence.

Let me make one more comment here: The real enemy of soldiers is war itself; they are sent on dangerous missions for geostrategic and economic reasons. That is where the basic problem lies. The killing is inhumane; in many years, more active-duty U.S. soldiers have died by suicide than in combat operations. Operating drones in targeted killings is extremely traumatizing for soldiers. Lisa Ling and Cian Westmoreland, U.S. whistleblowers on the drone program, also made it clear to German politicians in early December 2020 that the use of combat drones is neither necessary nor ethical for the protection of soldiers, because entire populations are terrorized by them.

Primarily for this horror scenario of national defense, we afford a Bundeswehr with a budget of 45 billion euros. A singular renunciation of the Bundeswehr would above all have a political signal effect. What do you think would have to happen for these weapons to be outlawed worldwide?

Hundreds of thousands, if not millions, of people have been murdered with chemical weapons, nuclear weapons, land mines and cluster munitions before their use was outlawed. You yourself hinted at the cruel effects of combat drones using the example of Nagorno-Karabakh; they are already evident. I believe that the videos from Nagorno-Karabakh, at the latest, must be a wake-up call for the international community.

Already now it would be possible to sanction states that produce or use these weapons instead of supporting them. Turkey, which supplied Azerbaijan with the drones, is still a recipient of German arms exports - as far as is known, by the way, TDW, a German arms company, helped facilitate the development of Turkish drone missiles. The U.S., which is waging a global terrorist war with combat drones, uses Ramstein Airbase to process the data. This, too, could be stopped by the German government, citing international law.

There are also encouraging examples: In a few weeks, the United Nations Treaty on the Prohibition of Nuclear Weapons will enter into force. The global movement against nuclear weapons, ICAN, was awarded the Nobel Peace Prize in 2017 for its commitment. This sets standards, even if the countries possessing nuclear weapons have so far refused to join the process.

If weapons systems operate autonomously and networked, isn't there a danger that hackers will hijack these systems and use them for attacks?

Indeed, no software application can categorically rule out security vulnerabilities. This problem may be exacerbated in the future by networked swarm systems, since entire weapon systems could be hijacked and networking always creates attack surfaces. Furthermore, AI systems themselves can, under certain circumstances, be subverted by so-called "adversarial attacks," in which the input data is manipulated in such a way that the AI system malfunctions.
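To make the idea of such "adversarial attacks" concrete, here is a minimal sketch, not from the interview: it uses an invented toy linear classifier and shows how a small, bounded change to each input feature, chosen against the model's gradient, flips the model's decision. Real attacks (such as the fast gradient sign method) apply the same principle to deep networks.

```python
# Toy "adversarial attack": a tiny perturbation flips a classifier's output.
# The linear model and its weights below are invented for illustration only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Linear model: positive score => class "vehicle", otherwise "background".
w = [0.9, -0.4, 0.7, 0.2]   # weights; for a linear model, also the gradient
b = 0.1                     # bias

def classify(x):
    return "vehicle" if dot(w, x) + b > 0 else "background"

def sign(v):
    return (v > 0) - (v < 0)

x = [0.5, 0.1, 0.6, 0.3]    # clean input, classified as "vehicle"
eps = 0.6                   # per-feature perturbation budget

# Step each feature against the sign of the gradient (FGSM-style).
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(classify(x))          # -> vehicle
print(classify(x_adv))      # -> background: the label flips, although no
                            #    feature changed by more than eps
```

The point of the sketch is that the attacker never touches the model itself, only the input; in the drone context, the analogue would be manipulated sensor data causing a misclassification.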

The war of the robots is an extreme vision, but AI is moving into life everywhere: drone swarms, for instance, can simulate fireworks or form a flag in the sky. In other areas, too, people are asking: what is the place of humans in the future? In the military, the fear is that when push comes to shove, humans will simply be removed from decision-making. What kind of signal does that send to other areas?

We are at a crossroads, and times of crisis are always times of upheaval. As a global community of destiny, do we want to submit to the logic of AI algorithms? Or do we see ourselves as a community consisting of self-determined subjects? We must not let the market, much less a destructive arms race, dictate the answers to these fundamental questions.

In the field of autonomous warfare, the exponential acceleration typical of the IT sector and the momentum resulting from the game-theoretic interaction of threat and defense combine. The result is a dynamic system that, in the worst case, allows no reversal and ends in wars of previously unimaginable destructiveness and brutality once tipping points are crossed. However, we still have the opportunity to break through this fatal momentum through global action and to take the fate of humanity into our own hands instead of tolerating that our downfall is accelerated out of supposed national self-interest. In history, mankind has always managed to overcome crises, to break through self-destructive dynamics and to reach a new stage of development of civilization. Precisely because we are not algorithms, we must now recall our common humanity, make use of our reason, and place the well-being of all humanity at the center of our common efforts.

