Berkeley Engineering

Educating leaders. Creating knowledge. Serving society.


Open letter on AI

Fall 2015 Berkeley Engineer
November 1, 2015
This article appeared in Berkeley Engineer magazine, Fall 2015

Computer science professor Stuart Russell has written a series of open letters calling on the global community of scientists, engineers and technologists to develop guidelines for artificial intelligence (AI) research. Leaders in the field are signing the letters, insisting that AI research demonstrate a societal benefit rather than rush headlong toward building the most powerful machines. Chief among Russell's concerns is the development of lethal autonomous weapon systems. The letter below, announced July 28, was posted by the Future of Life Institute.

Autonomous Weapons: an Open Letter from AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Stuart Russell (Photo by Noah Berger)

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.


Related links

  • Tech Experts Warn Of Artificial Intelligence Arms Race In Open Letter (NPR, July 28, 2015)
  • Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech (KQED, The California Report, Oct. 27, 2015)
  • This Artificial Intelligence Pioneer Has a Few Concerns (Wired, May 23, 2015)


Topics: Computing, Computer science, Public policy