
Where Is Your Global Organization At In Trusted AI?


In my prior Forbes blogs, I framed the business imperative for board directors and CEOs to advance their governance practices to lead forward with AI. This blog shares insights from a recent interview with Cathy Cobey, the EY Global Trusted AI Leader, in which we explore how the practice of responsible AI is stacking up, the impact of data bias, and the key questions board directors should ask to ensure CEOs are managing the new risks that AI presents.

One of the key insights Cathy shared is that across all of her global client interactions to date, she has yet to find any organization, large or small, with a robust inventory management process that can easily identify and catalogue its AI models. This mirrors my own global research: board directors and CEOs don't know where their AI algorithms are. Simply ask any CEO to produce, within five minutes, a list of where their AI algorithms run and whether each has a risk rating or third-party verification. You will quickly learn that the gaps are real, wide and worrisome.
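To make the gap concrete, here is a minimal sketch of the kind of AI model inventory the interview finds missing. The fields, model names, owners and risk levels are illustrative assumptions, not EY's methodology or any company's actual registry.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIModelRecord:
    """One entry in a hypothetical AI model inventory."""
    name: str
    owner: str
    use_case: str
    risk_rating: str           # e.g. "low", "medium", "high"
    third_party_verified: bool
    last_reviewed: date

# A CEO asked "where are your AI models?" should be able to answer from
# a registry like this rather than a scramble across departments.
inventory = [
    AIModelRecord("churn-predictor", "Marketing", "customer retention",
                  "medium", False, date(2020, 6, 1)),
    AIModelRecord("credit-scorer", "Lending", "loan approvals",
                  "high", False, date(2020, 9, 15)),
]

# Surface the gap the article warns about: high-risk models that lack
# independent (third-party) verification.
unverified_high_risk = [m.name for m in inventory
                        if m.risk_rating == "high" and not m.third_party_verified]
print(unverified_high_risk)  # ['credit-scorer']
```

Even a simple registry like this would let a board ask its first inventory question with some hope of an answer within five minutes.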

This is slowly starting to change, as anxiety about this vulnerability grows and AI is now rated as one of the top ten security risks in EY's global risk survey.

According to Cathy, many of the risks stem from the decentralized nature of AI systems. Although they can be very technical to build from scratch, many pre-developed AI models can be downloaded in a matter of minutes from open sources or technology companies. For example, many AI start-ups are providing free pilots of their tools to rapidly support model building and grow their client base. In addition, the tremendous ease of access to AI and machine learning methods online bypasses some of the disciplinary knowledge and more structured approaches of formal educational programs. Cathy stressed that these creative practices often inspire market growth; however, these sources can also circumvent the more robust technology procurement processes in companies, which were designed for large, expensive technology builds or licenses.

From Cathy’s perspective, third-party AI audits are still an emerging governance practice and very rare. However, EY is starting to work with a number of internal audit groups that have begun to audit AI programs. They normally start with a focus on the broader governance and controls over the AI program before moving to specific AI systems or projects. 

External audit opinions or certifications of AI models or systems are still in development, as standards that can be used as evaluation criteria, for both technical performance and ethical practices, are not yet available.

Organizations such as the IEEE and ISO are developing standards for AI, but these guidance standards won't be available until at least 2021. Work is also still required to develop audit accreditation and the certification programs themselves, although accounting bodies such as CPA Canada and the AICPA are considering the assurance standards, evaluation criteria and auditor credentials, including the technical knowledge needed to audit or certify AI.

The ethical development of AI is critical given the risk that machine-to-machine decisions could affect where personal data ends up. IEEE P7006™ – Standard for Personal Data Artificial Intelligence (AI) Agent describes the technical elements required to develop AI ethically while keeping a human involved in all decision making, ensuring that personal data use remains transparent even when an AI agent is acting. This standard is not yet finalized but is planned to be available in 2021.

With standards lagging on governance, industries are maturing at different rates. From Cathy's global experience, financial services organizations are ahead of other sectors. The main reason is that they have already had to comply with robust model risk management and validation regulations for over a decade, and regulators have communicated that they expect those same standards to be applied to machine learning and AI models. However, even financial services organizations are struggling to keep pace with the growth of AI across their organizations into areas where they had not previously used analytic models, such as credit lending, marketing and branch optimization. To keep pace, they need to automate their model validation processes and tests, and expand them to cover broader trusted AI attributes.
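Automated model validation of the kind described here can be as simple as a re-validation gate that blocks a model whose accuracy on a labelled holdout set falls below an agreed threshold. The sketch below is a hypothetical illustration with a toy model and invented data, not any institution's actual validation regime.

```python
# Hypothetical automated model-validation check: re-score a model on a
# labelled holdout set and fail it if accuracy falls below a threshold.
def validate_model(predict, holdout, threshold=0.9):
    """Return (passed, accuracy) for a model on labelled holdout data."""
    correct = sum(1 for features, label in holdout if predict(features) == label)
    accuracy = correct / len(holdout)
    return accuracy >= threshold, accuracy

# Toy classifier and holdout data, purely for illustration.
model = lambda x: 1 if x >= 0.5 else 0
holdout = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0), (0.6, 0)]

passed, acc = validate_model(model, holdout)
print(passed, acc)  # False 0.8 — below the 0.9 threshold, so validation fails
```

In practice, checks like this would run on a schedule and extend beyond raw accuracy to the broader trusted AI attributes (fairness, stability, explainability) the article mentions.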

Another sector that is ahead of the others is the public sector. Several governments, including Canada's, have released standards to govern the use of AI in their public services, including social services and healthcare, to ensure that it is used in a responsible and trustworthy way. However, it is still early days, and many departments are still working through how to implement the expected governance and control standards.

Other highly regulated industries, such as aviation and automotive, are also early adopters, driven partly by regulatory requirements and partly by the need to build consumer trust in their autonomous vehicles and drones.

One of the major concerns with AI is the ethical risk of data bias. One of the most frequently discussed cases dates to 1988, when the UK Commission for Racial Equality found that a computer program used to screen applicants to a British medical school was biased against women and against applicants with non-European names, reducing their admission rates. The school was subsequently found guilty of discrimination, a blot on the medical profession. Fast forward nearly thirty years: AI's growth, combined with the continued lack of governance controls and audit standards for data sets, is only increasing the risks of AI at scale.

Google reports that training natural language processing models on news articles can exhibit gender stereotypes.

Cathy also highlighted the risks and misconceptions around bias, reminding me that there is inherent bias in all AI models, since value judgements are necessary to make predictions or decisions. She stressed that any decision, whether made by a person or a machine, naturally carries some level of bias. The goal is not to eliminate bias, but to identify biases that are discriminatory or unfair. Cathy reinforced that the key is to focus on the context of the AI model: what objective is the AI model working to achieve? What are its target outcomes, what is the potential impact of those outcomes, and on whom? Only by exploring these diverse angles and asking robust questions can leaders then assess the risks of unintended, unfair or illegal bias.

One of the most frequently reported bias errors in AI is risk towards minority classes. For example, bias against a minority class in credit lending decisions can stem from many different sources: limited records, a lack of previous banking history, systemic historical underemployment of a particular class, or historical bias in the lending decisions used to train the algorithm.
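One common way to surface this kind of bias is a disparate impact check: compare approval rates across groups and flag ratios below the widely used "80% rule" threshold. The decisions, group labels and threshold below are hypothetical illustrations, not data from the article.

```python
# Illustrative disparate impact check across a protected class,
# using the common "80% rule" threshold. All data is invented.
def approval_rate(decisions, groups, group):
    """Approval rate for members of one group (1 = approved)."""
    member_decisions = [d for d, g in zip(decisions, groups) if g == group]
    return sum(member_decisions) / len(member_decisions)

decisions = [1, 1, 0, 1, 0, 0, 1, 0]          # 1 = loan approved
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_a = approval_rate(decisions, groups, "A")  # 3/4 = 0.75
rate_b = approval_rate(decisions, groups, "B")  # 1/4 = 0.25

disparate_impact = rate_b / rate_a              # ≈ 0.33, well below 0.8
print(f"disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("potential discriminatory bias — investigate the data sources")
```

A check like this does not say *why* the disparity exists (limited records, historical underemployment, biased training labels), but it tells governance teams where to start asking.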

Census data shows that Black and Hispanic Americans are more likely to be deprived of banking services than white or Asian Americans, and many systemic and racial gaps remain in mortgage lending, where loan applications are declined by AI algorithms trained on biased data. Cathy O'Neil, an academically trained mathematician who studied and worked at UC Berkeley, Harvard and MIT, left a job on Wall Street to write a book on the dangers of algorithms.

But bias can manifest in any dataset, not just those involving humans. Cathy Cobey shared the story of a conversation she had with a telco that was using AI to optimize its fiber cable network. She advised her client that in that situation the minority class would be rural customers, and that they needed to guard against optimizing the network for urban, high-density regions to the detriment of rural customers.

Strong governance practices must look in all directions, asking in particular the opposite or contrarian questions. Few leaders take the time to explore risk from diverse angles, and often find themselves in the mucky soup before putting sufficient safeguards in place. There are many existing controls that can be leveraged, but the unique characteristics of AI require new governance and controls to be put in place, and others to be modified.

Cathy emphasized that board directors and CEOs need to ensure there are clearly defined decision criteria used in the model, and to enable conversations about what unintended, illegal or unfair bias would look like in a particular AI-enabled process or decision, and how it might manifest in the system's outcomes.

A common example is a record with more limited information. If a particular data record lacks values for the features being prioritized, the model will not have the inputs it needs to operate within its decision framework, which could result in an unfair decision.
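One practical safeguard, consistent with the exception routines discussed next, is to route such thin records to human review rather than let the model decide on incomplete inputs. The feature names and routing rule below are hypothetical illustrations.

```python
# Hypothetical sketch: records missing any prioritized feature are routed
# to human review instead of receiving an automated model decision.
PRIORITIZED_FEATURES = ["income", "credit_history_years", "employment_status"]

def route(record: dict) -> str:
    missing = [f for f in PRIORITIZED_FEATURES if record.get(f) is None]
    if missing:
        # The model lacks the inputs its decision framework prioritizes;
        # an automated decision here risks being unfair to this applicant.
        return f"human review (missing: {', '.join(missing)})"
    return "automated decision"

print(route({"income": 52000, "credit_history_years": 7,
             "employment_status": "employed"}))   # automated decision
print(route({"income": 52000, "credit_history_years": None,
             "employment_status": None}))         # human review (missing: ...)
```

The design choice is that incompleteness is treated as an exception condition, not as a silent default the model fills in, which is exactly the collaboration point between technicians and business SMEs described below.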

Ultimately, there needs to be a close collaboration between the technicians designing and testing the AI system and the business SMEs that understand the business context of the AI’s outcomes and will be operating with the AI system. They need to work in tandem to understand both the technical and business context of bias and fairness. They also need to consider the alternative or exception routines to be followed for records that are identified as being biased, such as outliers.

EY is one of the leading global firms to incorporate bias and fairness as one of the five trust attributes in its Trusted AI Framework. Bias and fairness is the most common principle in AI guidance documents issued by public and private organizations alike.

To get AI governance heading in the right direction, board composition with AI know-how will be key. There has been a growing trend to expand the technology knowledge of boards, and directors with expertise in cybersecurity, cloud, ERP systems and digital are increasingly common. However, when Cathy and I pooled our global networks, we agreed that we have yet to see AI experts joining boards as directors with the relevant skills to help advance AI risk practices.

Currently, board directors are learning about AI at different rates, and emerging technology risks are starting to be added to the top risks monitored and reported by risk and internal audit teams to board risk committees. Some organizations have also been adding ethics-focused senior executive roles and ethics advisory committees, but as part of management rather than the board. Most of these strides are being made in the technology sector rather than broadly across other sectors.

The International Corporate Governance Network has been advocating for requirements that board directors be trained on AI and augment their board governance practices accordingly.

Below is a list of ten board governance questions that can help advance leadership with AI. These questions are good starting anchors for building stronger AI governance practices.

1.) Does the board know what AI use cases are in use within the company and what the results have been?

2.) Is there a risk and control framework and inventory management process in place to measure and manage any AI risks and has the board reviewed this operating process?

3.) What processes oversee operations using AI to ensure that the outcomes are appropriate and/or indicate risks to the company?

4.) Have there been any cases of incorrect AI outcomes, such as data bias? How were these risks communicated to stakeholders and resolved?

5.) How does the company ensure the effective protection of AI intellectual property assets?

6.) What data is purchased from an external vendor, and is the company ensuring the data’s integrity and that it is appropriate for the use case?

7.) How is data privacy ensured across the organization and how does the data oversight reporting structure communicate with senior management and the board?

8.) What training is provided to upgrade skills in the company workforce to allow them to assist the company with the AI journey? What are the board and management’s plans to fill the knowledge gaps or job impacts from AI?

9.) Does the board have AI/technical/innovation expertise on the board, and what is their selection process?

10.) Does the board have clear AI benchmarks from companies that are role models in AI board governance to learn from, in particular around machine learning operations and AI inventory management practices?

Google has one of the most well-developed sets of responsible AI principles, which is a good starting point, as having a clear position on ethics is key to enabling organizations to lead with integrity and responsibility.

Clearly, we have much work to do to advance board governance with effective practices and controls that ensure humans and machines achieve trusted AI harmonization.

Notes:

Interviewee Profile: Cathy is a Technology Risk Partner based in Toronto, Canada and is EY’s Global Trusted AI Leader. In this role, she leads the development of EY’s Trusted AI methods and tools, and assists clients in building trust into their AI systems. This involves not only the technical design and functionality of their AI systems, but also in considering the broader governance and control environments that they operate in.

More Information:

To see the video interview of Cathy Cobey, EY Global Trusted AI Leader, with Dr. Cindy Gordon, CEO of SalesChoice, see the YouTube link. The video content complements this blog content.

To see more of Cathy Cobey, EY Global Trusted AI and Dr. Cindy Gordon, CEO SalesChoice Inc.,  you can watch their video series: Managing the Risks of AI, go here.

For more questions addressing board director and CEO governance on AI, refer to Dr. Cindy Gordon’s Forbes blog roster.

Research: If you know of a company with an AI expert on a publicly traded board, please send an email to cindy.gordon@saleschoice.com
