
A group of tech executives warns the military about unintended harm caused by AI in combat


Tech leaders from Google, Microsoft, and Facebook suggest ethics guidelines for using AI in the military to avoid ‘unintended harm to humans’

  • The Defense Innovation Board made 12 recommendations for AI in the military
  • The Board warns against unintended adverse consequences of using AI 
  • Members include tech execs from Google, Microsoft, Facebook, and LinkedIn 

This week, the Defense Innovation Board issued a series of recommendations to the Department of Defense on how artificial intelligence should be implemented in future military conflicts.

The Defense Innovation Board was created in 2016 to establish best practices for collaboration between the US military and Silicon Valley.

There are sixteen current board members from a broad range of disciplines, including former Google CEO Eric Schmidt, Facebook executive Marne Levine, Microsoft’s Chief Digital Officer Kurt Delbene, astrophysicist Neil deGrasse Tyson, Steve Jobs biographer Walter Isaacson, and LinkedIn co-founder Reid Hoffman.


‘Now is the time, at this early stage of the resurgence of interest in AI, to hold serious discussions about norms of AI development and use in a military context—long before there has been an incident,’ the report says.

The report says that using AI for military actions or decision-making comes with ‘the duty to take feasible precautions to reduce the risk of harm to the civilian population and other protected persons and objects.’ 

The report outlines five ethical principles that should be at the heart of every major decision related to using AI in the military.

AI in the military should always be: Responsible, Equitable, Traceable, Reliable, and Governable.

WHAT ARE THE RECOMMENDATIONS?

The Defense Innovation Board defined five ethical principles for using AI in the military: Responsible, Equitable, Traceable, Reliable, and Governable.

Based on these five principles, it made the following 12 recommendations:

1. Formalize these principles via official DoD channels.

2. Establish a DoD-wide AI Steering Committee.

3. Cultivate and grow the field of AI engineering.

4. Enhance DoD training and workforce programs.

5. Invest in research on novel security aspects of AI.

6. Invest in research to bolster reproducibility. 

7. Define reliability benchmarks. 

8. Strengthen AI test and evaluation techniques.

9. Develop a risk management methodology. 

10. Ensure a proper implementation of AI ethics principles.

11. Expand research into understanding how to implement AI ethics principles.

12. Convene an annual conference on AI safety, security, and robustness.  

Based on these principles, the report makes twelve concrete recommendations for integrating AI into contemporary warfare.

The Board recommends creating a risk management strategy that would formalize a taxonomy of negative outcomes.

The purpose of this taxonomy would be to ‘encourage and incentivize the rapid adoption of mature technologies in low-risk applications, and emphasize and prioritize greater precaution and scrutiny in applications that are less mature and/or could lead to more significant adverse consequences.’ 

The report recommends developing a risk management methodology to account for the potential negative outcomes of delegating a significant amount of work or decision-making to a computer.

The Board also emphasizes the importance of developing specific benchmarks to evaluate the reliability of AI compared with human performance in military settings.

In the same spirit, the Board encourages the military to create a rating for how reproducible an AI-driven outcome or action is, so as to minimize the prevalence of unintended consequences.

A number of other recommendations are mainly administrative. 

One calls for the creation of an official Department of Defense policy communications channel to make announcements and field questions from different stakeholders.

Another recommendation calls for the creation of an internal ‘AI Steering Committee’ to oversee any current or future AI programs and to organize workforce AI training within the department.

Former Google CEO Eric Schmidt (pictured above) is one of the members of the Defense Innovation Board, which was formed in 2016 to help encourage cooperation between Silicon Valley and the US military

The report comes as a potential check on an earlier report from the Pentagon that recommended making AI a major focus of the future to help keep America ahead as Russia and China compete for influence around the world.

This summer the Army announced it was developing a new missile system, called Cannon-Delivered Area Effects Munition (C-DAEM), that would use AI for guidance. 

In 2018, Google declined to renew the contract for Project Maven, an AI initiative the company ran with the Department of Defense that helped train drones to identify potential military targets.

More than 3,000 Google employees signed a letter protesting the project.

WHAT IS PROJECT MAVEN?

The U.S. military has been looking to incorporate elements of artificial intelligence and machine learning into its drone program.

Project Maven, as the effort is known, aims to provide some relief to military analysts who are part of the war against Islamic State.

These analysts currently spend long hours staring at big screens reviewing video feeds from drones as part of the hunt for insurgents in places like Iraq and Afghanistan.

The Pentagon is trying to develop algorithms that would sort through the material and alert analysts to important finds, according to Air Force Lieutenant General John N.T. ‘Jack’ Shanahan, director for defense intelligence for warfighting support.

A British Royal Air Force Reaper hunter-killer unmanned aerial vehicle on the flight line, February 21, 2014, in Kandahar, Afghanistan. Military bosses say intelligence analysts are ‘overwhelmed’ by the amount of video being recorded over the battlefield by drones with high-resolution cameras

‘A lot of times these things are flying around (and) … there’s nothing in the scene that’s of interest,’ he told Reuters.

Shanahan said his team is currently trying to teach the system to recognize objects such as trucks and buildings, identify people and, eventually, detect changes in patterns of daily life that could signal significant developments.

‘We’ll start small, show some wins,’ he said.

A Pentagon official said the U.S. government is requesting to spend around $30 million on the effort in 2018.

Similar image recognition technology is being developed commercially by firms in Silicon Valley, and could be adapted by adversaries for military purposes.

Shanahan said he’s not surprised that Chinese firms are making investments there.

‘They know what they’re targeting,’ he said.

Research firm CB Insights says it has tracked 29 investors from mainland China investing in U.S. artificial intelligence companies since the start of 2012.

The risks extend beyond technology transfer.

‘When the Chinese make an investment in an early stage company developing advanced technology, there is an opportunity cost to the U.S. since that company is potentially off-limits for purposes of working with (the Department of Defense),’ the report said.
