Google was among the first to declare its principles for responsible AI. Ron Bodkin, Technical Director of Applied AI at Google, shares his advice on what to include in your AI guidelines.
Artificial intelligence can be leveraged to help prevent wildfires, diagnose cancer and prevent blindness. It can also be used to optimise your search results when you use Google as your search engine. But AI is powerful, and the technology also has the capacity to harm human beings.
Responsible AI is on the agenda
Just two years ago, concerns regarding the responsible use and development of AI were of little interest to anyone. Today everyone from C-level to developers has their eyes and ears on the subject.
Last week GoTo Copenhagen, the week-long event for developers and C-level executives, hosted Google’s Technical Director of Applied AI, Ron Bodkin, who spoke about responsible AI the Google way.
“There are many harmful impacts of AI. So many intended and unintended consequences. Many that will take years before they manifest. Thus, we need responsible principles and governance dealing with AI now,” Ron Bodkin said.
From bias to awareness
He listed examples of bias, insufficient controls and the resulting negative impact for society from the implementation of AI.
Examples included Amazon’s recruiting tool that exhibited a bias against women, a recidivism predictor used for parole decisions that showed racial bias, implicit racial bias in mortgage lending, and a model for identifying toxic comments online that was biased against the very groups targeted by slurs.
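The last failure mode can be made concrete with a simple counterfactual probe: swap different identity terms into an otherwise neutral sentence and compare the model’s scores. The sketch below is purely illustrative; `toxicity_score` is a toy stand-in that fakes the spurious association, not a real model, and the group names are placeholders.

```python
# Counterfactual probe for identity-term bias in a text classifier.
# `toxicity_score` is a hypothetical stand-in, not a real model: it
# fakes the failure mode in which a model has learned to associate a
# mere mention of one group with toxicity.

def toxicity_score(text: str) -> float:
    """Toy classifier (assumption for illustration only)."""
    score = 0.1
    if "group_b" in text:        # spurious learned association
        score += 0.5
    return score

TEMPLATE = "I am a {} person."   # neutral, non-toxic template

def bias_gap(terms):
    """Score the same template with each term swapped in;
    a large gap between groups signals bias."""
    scores = {t: toxicity_score(TEMPLATE.format(t)) for t in terms}
    gap = max(scores.values()) - min(scores.values())
    return scores, gap

scores, gap = bias_gap(["group_a", "group_b"])
# A non-toxic sentence should score the same regardless of which
# group it mentions; here the gap exposes the learned bias.
```

In practice the same template-swap idea is applied with real identity terms against the actual trained model, and a gap threshold becomes a release criterion.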
Sergey Brin, the President of Alphabet, Google’s parent company, first wrote about the power and responsibility of using AI two years ago. Supplementing this, Google published its seven principles of how to use and deal with AI last year.
The seven principles for AI at Google are:
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Be built and tested for safety
- Be accountable to people
- Incorporate privacy design principles
- Uphold high standards of scientific excellence
- Be made available for uses that accord with these principles
On the other hand, Google will not pursue applications that:
- Are likely to cause overall harm
- Have a principal purpose to cause injury
- Conduct surveillance that violates internationally accepted norms
- Have a purpose that contravenes international law and human rights
It’s all up to you
Yet abiding by principles for working with AI remains optional at present. There is no baseline for the right approach, nor any follow-up. Declaring and reporting are still in their early days; it’s the Wild West.
This is reminiscent of the sustainability and corporate social responsibility (CSR) landscape before the most significant companies were regulated in this area.
Just as with CSR issues, companies dealing with AI should also expect regulation in the near future. By getting started now, it’s possible to be ahead of regulation and to use it as a competitive advantage.
What the consumer wants
The need for the establishment of AI governance stems, according to Google’s Ron Bodkin, from consumer pressure and legislation lurking just around the corner.
“Remember that humans drive virtually every facet of machine learning and AI. We choose the data sources. We determine the evaluation metrics, and we get affected by the results,” said Ron Bodkin.
Test, test and test
Ron Bodkin’s advice? Explore the varied needs of different stakeholders when determining your AI principles and governance structure.
At Google, responsible AI practices direct employees to:
- Take a human-centred design approach
- Identify multiple metrics to assess training and monitoring
- When possible, directly examine your raw data
- Understand the limitations of your dataset and model
- Test, test, test
- Continue to monitor and update the system after deployment
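The advice to identify multiple metrics can be sketched as a sliced evaluation: instead of one aggregate number, compute each metric per subgroup so that a failure hidden in the average becomes visible. The code below is a minimal illustration with made-up labels and groups, not Google’s tooling.

```python
# Sliced evaluation: accuracy and false-positive rate per subgroup.
# Labels, predictions, and group names below are illustrative assumptions.
from collections import defaultdict

def sliced_metrics(y_true, y_pred, groups):
    """Return {slice: {"accuracy": ..., "fpr": ...}} for each group
    plus an "overall" slice, from binary labels/predictions."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        for key in ("overall", g):
            c = counts[key]
            if p and t:
                c["tp"] += 1
            elif p and not t:
                c["fp"] += 1
            elif not p and t:
                c["fn"] += 1
            else:
                c["tn"] += 1
    result = {}
    for key, c in counts.items():
        n = sum(c.values())
        negatives = c["fp"] + c["tn"]
        result[key] = {
            "accuracy": (c["tp"] + c["tn"]) / n,
            "fpr": c["fp"] / negatives if negatives else 0.0,
        }
    return result

# Toy data: the overall accuracy looks acceptable, but slicing reveals
# that every false positive falls on group "b".
y_true = [1, 0, 0, 1, 1, 0, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
metrics = sliced_metrics(y_true, y_pred, groups)
```

In this toy run the overall accuracy is 0.75, but group "a" has a false-positive rate of 0 while group "b" sits at 2/3 — exactly the kind of disparity a single aggregate metric would hide.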