Only a few companies have a code of conduct for artificial intelligence. Today, documentation of the responsible use of AI is still a bit like the Wild West, but expect it to become a must-have within a couple of years.
While most companies have a set code of conduct for engaging in responsible business practices, it’s still rare to find guidelines for the responsible use of AI within these same companies.
This is set to change, however: Margrethe Vestager, the Danish European Commissioner for Competition, will present the European Union's approach to AI within the next 100 days. There are rumours of mandatory requirements for businesses in Europe to declare how they work with AI, starting with the largest listed companies and expanding from there.
EU as a frontrunner
That’s the opinion of Massimo Pellegrino, Artificial Intelligence Leader at PwC, who visited Denmark earlier this year.
“The European Union is a frontrunner in AI ethics. An expert group has identified several principles which for now are not mandatory. I expect that the principles will be mandatory within the next few years,” says Massimo Pellegrino.
He is referring to the Ethics Guidelines for Trustworthy AI, released earlier this year; the accompanying pilot program ended on 1 September.
Pick and choose
These guidelines deal with issues such as keeping humans in the AI loop, ensuring that AI is used to empower human beings rather than to harm them, as well as privacy, data governance, transparency, diversity and fairness.
Currently, the European Union’s guidelines are far from being the only reference guidelines for companies wanting to document a sustainable and responsible approach to AI.
“Right now, we see a lot of different initiatives around AI ethics. Nobody has to invent from scratch. Companies can pick principles identified by others, such as universities, labs and frontrunner companies like Google,” says Massimo Pellegrino.
The hard work of implementing
He identifies fairness as one of the essential parameters to deal with, along with considerations about how much control humans should have over AI. Declarations around privacy and the use of sensitive data must also be part of an AI code of conduct, says Massimo Pellegrino.
“Every company using AI needs to formulate a very fundamental set of principles. If you, for example, develop self-driving cars, who is then accountable for what the systems do based on AI?”
He believes that creating a code of conduct by identifying the core principles surrounding AI is not the hard part. The hard work, according to Massimo Pellegrino, is implementing those policies.
When it comes to AI, there are two ways for companies to go:
- Develop intelligent systems in-house.
- Buy external applications.
From the wild west to law
Most companies purchase external AI systems. In that situation, it’s necessary to ensure that what you buy is aligned with your company’s values and principles, Massimo Pellegrino states. This isn’t a purely technical matter that can be delegated entirely to developers, as many of the issues at hand must be dealt with by other parts of the organisation.
“When doing an AI strategy and an AI code of conduct, you need to consider what is good for your stakeholders.”
How many companies have a clear code of conduct today? It sounds like it’s still the Wild West out there when it comes to transparency and insight into how companies deal with AI?
“It is a Wild West. But there are governance tools in place now that the C-level can work with. This must come from the top of the company. And then we have some companies and industries that are leading the way for others, like the financial sector,” says Massimo Pellegrino, before adding a timeline for others to follow:
“I expect it to be mandatory for all large corporations to have principles for the responsible use of AI in place within the next couple of years.”