Why Google pulled the plug on its AI ethics panel

In its early years, Google’s slogan was “Don’t be evil”. Wry amusement at that memory may be an unfair reflection on the company’s moral standards, but it is an understandable response to some of its behaviour, in particular its increasing use of artificial intelligence.

Google and AI

Even though science fiction often turns into science fact (and sometimes very rapidly), we are still a long way from the highly sophisticated androids that come as standard in film and TV. We are, however, very much in the era of “big data” and machine learning, which, as the tech giants well know, has the potential to be a commercial gold mine.

It also has the potential to be an ethical minefield that infuriates consumers and drives away customers. While Google is one of the biggest and richest companies in the world, it must be well aware that even the mightiest can fall if the attack is strong enough.

Enter the Google AI Ethics Panel

Snappily named the Advanced Technology External Advisory Council (ATEAC), Google’s AI ethics panel comprised eight people from both industry and academia. While there were obvious reasons why Google reached out to the people in question, there were major concerns about the composition of the panel right from its announcement. Six of the eight people involved were male and four were white males.

In other IT contexts, a panel that included two women and four people of colour might have been seen as a distinct positive, given the demographics of the IT industry. In this context, however, it was rather more questionable: one of the biggest ethical issues in the development of artificial intelligence is that the humans building it imbue it with their own biases, including unconscious ones. There is therefore a compelling need to ensure that demographics other than white males have meaningful representation in any discussion of the ethics of artificial intelligence.

This criticism only intensified when questions were raised about the backgrounds of some of the panel’s members, especially Kay Coles James, who is known for her involvement with right-wing organisations opposed to LGBT and immigrant rights.

Faced with mounting criticism, Google abandoned the panel.

The backlash against the backlash

Google’s decision may have pleased, or at least appeased, one set of critics insofar as it severed the company’s connection with Kay Coles James, but it left others less than impressed.

Those behind the backlash against the backlash pointed out that Kay Coles James was an experienced policy-maker whose views on trans rights and immigrants’ rights were not only far from extreme but also far from unusual. They questioned why Google should have considered her unfit for the post, especially after inviting her onto the panel in the first place.

Others questioned whether forming the panel was ever anything more than a cosmetic exercise, since it had no meaningful power and its members were giving up their time for free. It will be very interesting to see whether Google creates a new AI ethics body and, if so, what form it takes.

It is clear that Google did not get it quite right the first time. An appropriate representation of society, and a rejection of bigoted ideologies, would be a good place to start next time.
