The council will offer guidance on ethical issues relating to artificial intelligence, automation and related technologies.
The council consists of eight people, including a former US deputy secretary of state and a University of Bath associate professor.
Google says the group will “consider some of Google’s most complex challenges.”
The announcement came at MIT Technology Review’s EmTech Digital, a conference organised by the Massachusetts Institute of Technology.
Google’s plans for using emerging technologies have come under intense criticism, both internally and externally.
In June 2018 the company said it would not renew a contract it had with the Pentagon to develop AI technology to control drones.
Project Maven, as it was known, was unpopular among Google’s staff, and prompted some resignations.
In response, Google published a set of AI “principles” it said it would abide by. They included pledges to be “socially beneficial” and “accountable to people”.
The newly launched global advisory council will be known as the Advanced Technology External Advisory Council (ATEAC) and will meet for the first time in April.
Google’s head of global affairs, Kent Walker, blogged that there would be three further meetings in 2019.
It includes leading mathematician Bubacarr Bah; former US deputy secretary of state William Joseph Burns; and Joanna Bryson, who teaches computer science at the University of Bath, UK. The council will discuss recommendations on how to use technologies such as facial recognition.
Last year, Google’s then-head of cloud computing, Diane Greene, described facial recognition tech as having an “inherent bias” due to a lack of diverse data.
In a highly cited paper entitled Robots Should Be Slaves, Joanna Bryson argued against the trend of treating robots like people.
“In humanising them,” she wrote, “we not only further dehumanise real people, but also encourage poor human decision making in the allocation of resources and responsibility.”
In 2018 she argued that complexity should not be used as an excuse for failing to properly inform the public about how AI systems operate.
“When a system using AI causes damage, we need to know we can hold the human beings behind that system to account.”