Artificial Intelligence and the Search for a Better World

As technology continues to advance, important philosophical questions arise about the moral use of the tools we construct. Of these constructs, none raises more interesting and pertinent questions than artificial intelligence (AI). As we increasingly use AI in every field to reduce human error and promote efficiency, an ethical concern follows: what should we do with weapons that can fire themselves?

The 2015 open letter “Autonomous Weapons: An Open Letter from AI & Robotics Researchers,” signed by more than three thousand AI and robotics researchers, implores scientists, governments, and the international community to work toward a ban on offensive autonomous weapons. The hope is that such a ban will be achieved before these weapons become the “Kalashnikovs of tomorrow” and cause untold harm to humanity.

To me, philosophy is the structured study of thought: the breaking down of how and why we think things for the purpose of directing our actions toward the betterment of the world around us. To Mill, that means finding “one fundamental principle or law, at the root of all morality, or if there be several, there should be a determinate order of precedence among them; and the one principle, or the rule for deciding between the various principles when they conflict, ought to be self-evident” (Mill 1). In other words, he seeks to reduce moral thought to a single truth, one that justifies the how and the why, in order to direct our actions always toward the greatest possible happiness. In utilitarianism the question is, in principle, simple: what action will cause the greatest happiness? To the signatories of this letter, the answer is a ban on autonomous weapons. They argue that the benefit gained by reducing human casualties of war does not outweigh the cost of making war more palatable or the risks of an AI arms race.

In making that consideration, and in justifying it in this manner, the writers break down our thought using utilitarian concepts and so engage in philosophical activity. They weigh the usual consequences of harm to human soldiers (individual suffering, familial harm, the cost of treatment, and so on) against those likely to follow from an AI arms race: the creation of tools useful for “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” They then let that study inform their actions toward what they see as the betterment of our world. This letter thus serves, for all who read it, as an invitation into a conversation about the ethical use of AI and an exploration of our thought on what forms weapons may be allowed to take and how far progress may be allowed to carry us.

The most common way I engage in philosophical activity is through personal conversations with my peers. In such conversations we begin with our thoughts on a relevant issue: should AI have rights? What is the best method of gun control? Is death really bad for a cow? From that point we break our positions down into their constituent arguments and work through them, in or out of order. As we discuss, or argue over, a given point, we enter tangential discussions and pull apart our opinions on a variety of topics. To give this mental wandering purpose, and proper philosophical status, we answer the questions and resolve the arguments until we arrive at a set of new realizations about the world we live in. With that new understanding, we become better able to shape our actions toward the betterment of that world.

Works Cited

“Autonomous Weapons: An Open Letter from AI & Robotics Researchers.” Future of Life Institute, 2015, https://futureoflife.org/open-letter-autonomous-weapons/.

Mill, John Stuart. Utilitarianism.

Note: As this letter has no named individual author, all otherwise uncited quotations are attributed to the single page of the letter.