Google recruits eight leading experts for its newly-founded AI ethics board


AI ethics board founded by Google will include eight leading experts – but will it be enough to stop the search giant prying into our private lives?

  • Advisory board will consider some of Google’s most complex challenges
  • It will advise on matters relating to the development and application of AI 
  • Designed to help Google avoid any further AI faux pas or privacy scandals
  • Includes AI academics, philosophers and a former US deputy secretary of state

Google has set up an external AI ethics council to guide the tech giant away from morally questionable uses of its technology and encroaching on the privacy of its customers.

It will advise the search giant on matters relating to the development and application of its artificial intelligence research.

Google has been embroiled in past controversies regarding the use of its AI, as well as the way it protects the data it gathers.

It established an internal AI ethics board in 2014 when it acquired DeepMind, but this has been shrouded in secrecy, with no details ever released about who it includes.

The firm is a world leader in many aspects of AI, and the eight people recruited for the advisory board will ‘consider some of Google’s most complex challenges’.

Members of the board include Joanna Bryson, an associate professor at the University of Bath, and William Joseph Burns, former US deputy secretary of state.

Google has been embroiled in past controversies regarding the use of its AI and concerns over it encroaching on the privacy of customers. The board, formally known as the Advanced Technology External Advisory Council (ATEAC), will meet four times in 2019 (file photo)

The board was announced at MIT Technology Review’s EmTech Digital conference this week.

It has been specially curated to steer the Mountain View-based firm away from any future controversies by ensuring it fully considers morality while developing its artificial intelligence. 

Google uses AI in many high-profile forms, including its smart speaker, Google Home, and DeepMind, its specialist AI division. 

Privacy concerns peaked last year when Google absorbed DeepMind Health – a leading UK health technology developer – into its own operations.

The news raised concerns about the privacy of NHS patients’ data, which is used by DeepMind and could therefore be commercialised by Google.

WHO IS ON GOOGLE’S AI ETHICS BOARD? 

  • Alessandro Acquisti – Professor of Information Technology and Public Policy at the Heinz College, Carnegie Mellon University
  • Bubacarr Bah – Assistant Professor in the Department of Mathematical Sciences at Stellenbosch University
  • De Kai – Professor of Computer Science and Engineering at the Hong Kong University of Science and Technology
  • Dyan Gibbens – CEO of Trumbull, a Forbes Top 25 veteran-founded startup
  • Joanna Bryson – Associate Professor in the Department of Computer Science at the University of Bath
  • Kay Coles James – President of The Heritage Foundation
  • Luciano Floridi – Professor of Philosophy and Ethics of Information at the University of Oxford
  • William Joseph Burns – Former US deputy secretary of state

DeepMind was bought by Google’s parent company Alphabet for £400 million ($520m) in 2014 and maintained its independence until the absorption in November.

But now the London-based lab shares operations with the US-based Google Health unit.

Google has also previously received criticism from the public and its own staff over Project Maven, a collaboration between Google and the US military to use its AI to control drones destined for enemy territory.

Google decided not to renew this contract in June 2018 following protest resignations from some employees. 

Kent Walker, SVP of Global Affairs at Google, said in a blog post: ‘Last June we announced Google’s AI Principles, an ethical charter to guide the responsible development and use of AI in our research and products.

‘To complement the internal governance structure and processes that help us implement the principles, we’ve established an Advanced Technology External Advisory Council (ATEAC). 

‘This group will consider some of Google’s most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work.’

He added: ‘This inaugural Council will serve over the course of 2019, holding four meetings starting in April.’ 

WHAT ARE SOME OF GOOGLE’S PAST CONTROVERSIES?

March 2019: Google refused to scrap a Saudi government app which lets men track and control women.

The tech giant said that the software, which allows men to keep tabs on women, meets all of its terms and conditions.

October 2018: A software bug in Google+ meant that the personal information of ‘hundreds of thousands’ of users was exposed. The issue reportedly affected users on the site between 2015 and March 2018.

The bug allowed app developers to access information like names, email addresses, occupation, gender and more.

Google announced it would be shutting down the Google+ social network permanently, partly as a result of the bug. 

It also announced other security features, meaning apps would be required to inform users what data they will have access to. Users have to provide ‘explicit permission’ before apps can gain access to it.

August 2018: A new investigation led by the Associated Press found that some Google apps automatically store time-stamped location data without asking – even when Location History has been paused.

The investigation found that the following functions were enabled by default: 

  • The Maps app storing a snapshot of where the user is when it is open
  • Automatic weather updates on Android phones pinpointing where the user is each time the forecast is refreshed
  • Simple searches, such as ‘chocolate chip cookies’ or ‘kids science kits’, tagging the user’s precise latitude and longitude – accurate to the square foot – and saving it to the Google account

This information was all logged as part of the ‘Web and App Activity’ feature, which does not specifically reference location information in its description.

July 2018: The EU fined Google $5 billion for shutting out competitors by forcing major phone manufacturers, including South Korea’s Samsung and China’s Huawei, to pre-install its search engine and Google Chrome browser by default.

July 2018: The Wall Street Journal revealed that Gmail’s data privacy practices meant it was common for third-party developers to read the contents of users’ Gmail messages.
