
Artificial Intelligence

Trends

  1. AI combines computer science and robust datasets to enable problem-solving and decision-making capabilities that mimic human intelligence. Today’s AI is considered relatively “narrow” or “weak AI,” where machines focus on performing specific tasks. Such AI-enabled applications are comparatively commonplace. Examples include digital assistants, natural language question-answering systems, medical imaging analysis tools, statistical and predictive tools, text-generating language models, and early-stage autonomous vehicles. AI engineers and scientists are striving for “general AI” or “strong AI,” where AI systems are envisioned to have cognitive abilities similar to a human’s. While these AI systems are still theoretical, with no practical examples in use today, AI researchers continue to explore their development.42
  2. As AI systems continue to grow in sophistication and complexity, there is a significant risk that they will become less explainable, as how such systems evaluate data and reach outcomes or decisions becomes more opaque.43 PwC, amongst many other organizations, observes in a whitepaper on the topic:
The central challenge is that many of the AI applications using [machine learning] operate within black boxes, offering little if any discernible insight into how they reach their outcomes. For relatively benign, high volume, decision making applications such as an online retail recommender system, an opaque, yet accurate algorithm is the commercially optimal approach. [...] the use of AI for ‘big ticket’ risk decisions in the finance sector, diagnostic decisions in healthcare and safety critical systems in autonomous vehicles have brought this issue [knowing if it’s an error or a reasonable decision] into sharp relief. With so much at stake, decision [m]aking AI needs to be able to explain itself.44

Therefore, as the whitepaper notes, the more critical a function an AI system performs, the more interpretability (through a combination of transparency and explainability)45 is required.
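To illustrate what interpretability tooling can look like in practice, below is a minimal sketch in Python using scikit-learn’s permutation importance, one common model-agnostic technique. The model and synthetic dataset are hypothetical, and the approach shown is illustrative only; it is not drawn from the whitepaper.

```python
# Minimal sketch: model-agnostic interpretability via permutation importance.
# Illustrative only; the model and synthetic dataset are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy degrades;
# the features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Even where the model itself remains a “black box,” such a summary gives the user a coarse view of which inputs drive its outcomes.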

Opportunities

  1. AI provides opportunities for PAs to leverage their organizational data, uncovering new relationships through analysis of such data and increasing efficiencies. For example, data analytics AI software can augment understanding of data relationships and fuel predictive models for financial processes, such as forecasting sales and informing more accurate demand planning (e.g., expected credit loss forecasting in banking and finance). In addition, intelligent drones can be used for inventory and infrastructure management.
  2. Specific to audit firms, and in particular larger firms, examples of AI being used to enable efficiencies include:46
  • Using AI to analyze data from non-traditional sources, such as social media, emails, phone calls, public statements from management, etc., to identify potential risks relevant to client acceptance and continuance assessments.
  • Using natural language processing and machine learning to analyze both structured and unstructured information, such as global regulatory notices, industry reports, regulatory penalties, news, public forums, etc., to detect relevant audit risks and for fraud detection.
  • Applying AI tools, which benefit from increases in the quality and quantity of available “training” data (i.e., data that the system uses to learn), to data sets to algorithmically identify outliers and anomalous data, and to perform predictive analytics in areas such as testing large transaction populations, auditing accounting estimates, and going concern assessments (a simple outlier-flagging sketch follows this list).
  • Document processing, review, and analysis using optical character recognition to identify and extract key details from contracts (e.g., leases) and other documents (e.g., invoices).
  • Inventory and physical asset verification procedures through the use of intelligent drones with computer vision (image recognition), particularly for larger capital assets, such as trucks, utility infrastructure, or the inspection of large-scale business sites, such as tree farms.
  • AI technologies to support auditors’ work on financial statement disclosures, enabling easier identification of missing disclosure requirements and non-compliance.
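As a concrete illustration of the outlier-identification point above, the following is a minimal sketch in Python, assuming scikit-learn. The ledger fields, synthetic data, and contamination threshold are hypothetical assumptions; real audit tooling would be considerably richer.

```python
# Minimal sketch: flagging anomalous transactions with an Isolation Forest.
# The ledger fields, synthetic data, and threshold are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic ledger rows: [amount, day_of_month, hour_posted]
typical = rng.normal(loc=[500, 15, 13], scale=[200, 8, 3], size=(10_000, 3))
unusual = rng.normal(loc=[25_000, 28, 2], scale=[5_000, 1, 1], size=(20, 3))
transactions = np.vstack([typical, unusual])

# fit_predict returns -1 for anomalies and 1 for inliers.
model = IsolationForest(contamination=0.005, random_state=0)
labels = model.fit_predict(transactions)

flagged = transactions[labels == -1]
print(f"{len(flagged)} transactions flagged for follow-up audit testing")
```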
  3. In general, AI models need data to train on, and training on actual client and customer data is the most effective and efficient way of doing this. As a result, it is becoming more common for firms and companies to want to use such “real” data to train their AI models to enhance audit quality or business insights. This is seen by firm stakeholders to be akin to PAs of the past taking the “lessons learned” from prior engagements or projects and applying them to their next project or task, except that now the “lessons learned” are applied by the AI model instead. It was noted that along with the benefits of improving the quality of the AI model’s outputs, using such “real” training data comes with risks to cybersecurity, confidentiality and privacy, as well as potential threats to independence. See discussion on Focus on Data Governance.
  4. AI systems and AI-based applications are also becoming increasingly important as tools to monitor other technology systems, including other AI systems, because more traditional monitoring methods are unable to keep pace with the frequency and volume of evaluation needed. Examples include the need for continuous monitoring in some cybersecurity environments to mitigate threats from sophisticated actors, as well as helping to validate AI models in search of bias or other vulnerabilities as organizations strive for ethical AI.47
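A simple form of such automated monitoring is checking whether a model’s live outputs still resemble the outputs observed at validation time. Below is a minimal sketch in Python using a two-sample Kolmogorov-Smirnov test from SciPy; the score distributions, window sizes, and alert threshold are hypothetical assumptions.

```python
# Minimal sketch: monitoring an AI system by testing whether its live output
# distribution has drifted from the validation-time baseline. The synthetic
# scores and the 0.01 alert threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, size=5_000)  # model scores at validation time
live_scores = rng.beta(3, 4, size=5_000)      # scores observed in production

statistic, p_value = ks_2samp(baseline_scores, live_scores)
if p_value < 0.01:
    print(f"Drift alert: output distribution has shifted (KS = {statistic:.3f})")
else:
    print("No significant drift detected")
```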

Impact/Risks

  1. There is often an assumption that AI technology is neutral, but the reality is far from it.48 AI algorithms are created by humans, and humans have inherent and unconscious biases.49 Therefore, AI is never fully objective and instead reflects the world view of those who built the systems and the data ingested and processed by such systems.50 Stakeholders observed that inherent bias in data is the biggest issue with AI, that such bias might not be fully mitigated in the programming, and that attempts to correct bias might actually introduce new bias.
  2. Bias can creep into algorithms in several ways. AI systems learn to make decisions based on both training data and testing data,51 which can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race, and sexual orientation have been removed. Data sampling is also a source of bias, in which groups are over- or under-represented in the data set.52 (A simple representation check is sketched below.) Stakeholders commented that PAs need to be aware of the extent to which bias is impacting the outputs of technology, and to ensure that they have the appropriate mindset, competence, and tools to do this.
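As a simple illustration of the sampling point, the sketch below uses pandas to compare each group’s share of a dataset with its outcome rate. The column names and figures are hypothetical, and a real fairness review would go considerably further.

```python
# Minimal sketch: checking a dataset for over-/under-representation and for
# outcome-rate gaps between groups. Column names and data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A"] * 800 + ["B"] * 200,
    "approved": [1] * 600 + [0] * 200 + [1] * 60 + [0] * 140,
})

# A group that is under-represented in the sample, or whose outcome rate is
# markedly lower, warrants closer scrutiny before the data is used to train.
summary = df.groupby("group").agg(
    share=("approved", "size"),
    approval_rate=("approved", "mean"),
)
summary["share"] = summary["share"] / len(df)
print(summary)
```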
  3. Understanding the technology and having regard to the purpose for which it is to be used are also key to assessing whether the output of technology is reasonable. In this regard, stakeholders also highlighted that PAs need to be aware that the approach to AI learning might also affect its risk profile for producing accurate and reliable outputs.53 Furthermore, understanding how data was made available for training and testing the AI system – and how confidentiality, including data privacy, has been considered and maintained – is also important.
  4. This illustrates the importance of building ethical AI, in respect of which there are many parallel initiatives around the world (around 200 sets of AI ethics guidelines have been developed by various governments, multilateral organizations, nongovernmental organizations, and corporations).54 Importantly, in November 2021, UNESCO’s General Conference of 193 member states adopted the Recommendation on the Ethics of AI, which is the first truly global standard-setting instrument on AI ethics.55
  5. Stakeholders observed that building or ensuring ethical AI systems includes understanding the data going into the model, how the model operates, and the potential unintended consequences of operating the model. PAs cannot be expected to be the “expert” in technology and fully understand what is “under the hood,” but in order to rely on a system, PAs must be comfortable that the output from the technology is reasonable. Given the challenges of some AI systems lacking transparency and explainability, this might not always be possible. In many cases, however, the PA’s reliance on the system can be enhanced through gaining an understanding of the controls around the inputs to the system (i.e., quality of the data, including being proactive to understand the inherent biases within the dataset); the system, application, and other general IT controls, such as monitoring the operation of the system or making changes; as well as controls over the analysis of the output. This means that although the PA might not understand the “black box,” they can at least be comfortable with the inputs and the control structure monitoring the system and its output in order to reasonably rely on the technology. It is also imperative that for systems supporting decisions with significant consequences, the PA has access to one or more experts who can answer both “how does the system work?” and “why did the system do what it did?”.56
  6. In addition, stakeholders commented that having the ability and competence to ask the “right” questions so that appropriate and fit-for-purpose AI is procured or developed is important. This can be achieved by the PA keeping current and educating themselves on relevant practical guidance and “best practices” specific to their role. Examples include the World Economic Forum’s “toolkits” for C-suite executives57 and Boards of Directors.58
  7. Stakeholders stress that building or ensuring ethical AI systems also involves utilizing a “human in the loop” approach to ensure human expert oversight of, and accountability for, the system. For example, the volume of data inputs and inherent complexity that drive machine learning can create a scenario where the system lacks transparency and explainability, and the impact of bias potentially also goes undetected. Regular monitoring and feedback of any developments or changes in the AI outputs and consulting with experts might help the PA assess the ongoing reasonableness of such outputs. In this regard, the Working Group notes that the Code’s requirement for a PA to have an inquiring mind when applying the conceptual framework will help a PA challenge the system to test how it responds across a wide range of stimuli, notwithstanding any conditions, policies and procedures that might be established by the employing organization or firm to address the system’s accountability.
  8. Ensuring an ethical organizational culture is also core to fostering a safe environment for data scientists and others to escalate concerns over any bias or discrimination identified in AI systems or data without the fear of retaliation. For example, the former co-lead of Google’s Ethical AI team has alleged that she was fired over a dispute in relation to a research paper she coauthored opining that technology companies could do more to stop AI systems designed to mimic human writing and speech from exacerbating historical gender biases and using offensive language.59 The Working Group notes that PAs are expected to encourage and promote an ethics-based culture within their organizations, taking into account their position and seniority in the organization. This role is key and becoming even more important in the face of transformational technology.
  9. Against this backdrop, the importance of regulating AI systems is also being increasingly recognized by governments around the world.60 For example, the European Commission has proposed a risk-based approach to regulating AI systems, whereby such systems are rated on a scale ranging from “minimal or no risk” to “unacceptable risk.”61 Under this approach, AI systems providing social scoring of humans are classified as being of unacceptable risk and are prohibited, whereas AI enabling recruitment and medical services is classified as high risk and is only permitted subject to compliance with certain additional requirements.

Endnotes

42 IBM Cloud Education. “Artificial Intelligence (AI).” IBM, 3 June 2020, https://www.ibm.com/cloud/learn/what-is-artificial-intelligence; and Dean, Jeff. “Google Research: Themes from 2021 and Beyond.” Google Research, 11 January 2022, https://ai.googleblog.com/2022/01/google-research-themes-from-2021-and.html.

43 On a related note, a significant qualitative research study involving 602 thought leaders (e.g., technology innovators and developers, business and policy leaders, researchers and activists) found that 68% believed that ethics principles focused primarily on the public good will not be employed in most AI systems by 2030 and will instead continue to be primarily focused on optimizing profits and social control. See Rainie, Lee, et al. “Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade.” Pew Research Center, 16 June 2021, https://www.pewresearch.org/internet/2021/06/16/experts-doubt-ethical-ai-design-will-be-broadly-adopted-as-the-norm-within-the-next-decade/.

44 “Explainable AI: Driving business value through greater understanding.” PwC, 2018, https://www.pwc.co.uk/audit-assurance/assets/explainable-ai.pdf.

45 See also, for example, Herzog, Christian. “On the risk of confusing interpretability with explicability.” AI and Ethics 2, 219-225, 2022, https://doi.org/10.1007/s43681-021-00121-9.

46 “IAASB Digital Technology Market Scan: Artificial Intelligence—A Primer.” IAASB, 23 March 2022, https://www.iaasb.org/news-events/2022-03/iaasb-digital-technology-market-scan-artificial-intelligence-primer.

47 See, for example, “Deloitte AI Institute Team With Chatterbox Labs to Ensure Ethical Application of AI.” Deloitte, 15 March 2021, https://www2.deloitte.com/us/en/pages/about-deloitte/articles/press-releases/deloitte-ai-institute-teams-with-chatterbox-labs-to-ensure-ethical-application-of-ai.html.

48 See, for example, Hao, Karen. “The true dangers of AI are closer than we think.” MIT Technology Review, 21 October 2020, https://www.technologyreview.com/2020/10/21/1009492/william-isaac-deepmind-dangers-of-ai/.

49 See, for example, Gabbrielle M Johnson. “Algorithmic Bias: On the Implicit Biases of Social Technology.” Synthese 198, 9941-9961, 2021, https://doi.org/10.1007/s11229-020-02696-y.

50 Satell, Greg, and Yassmin Abdel-Magied. “AI Fairness Isn’t Just an Ethical Issue.” Harvard Business Review, 20 October 2020, https://hbr.org/2020/10/ai-fairness-isnt-just-an-ethical-issue.

51 Training data is the information used to train an algorithm for a specific output. Training data contains both the input data and the anticipated output, so that the algorithm can learn to produce the desired output. Testing data is a dataset used to assess how well the trained model performs when making predictions. Testing data contains only the input data, not the anticipated result. The algorithm’s output is then compared to the “actual” result to assess how well the algorithm was trained.
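The distinction in this note can be shown in a few lines of Python; the sketch below assumes scikit-learn and a synthetic dataset, both purely illustrative.

```python
# Minimal sketch of the training/testing split described above.
# The dataset and model are synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Training data: input data together with the anticipated outputs.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Testing data: inputs only; predictions are compared afterwards to the
# held-back "actual" results to assess how well the algorithm was trained.
predictions = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, predictions):.2%}")
```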

52 Manyika, James, et al. “What Do We Do About Biases in AI?” Harvard Business Review, 25 October 2019, https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

53 AI can learn through supervised or unsupervised learning. Supervised learning uses labeled datasets (i.e., preprocessed data which has been labeled for a specific context) to train the AI to classify data or predict outcomes accurately with human intervention. Unsupervised learning uses unlabeled datasets (i.e., raw data straight from the source) to discover “hidden” patterns in data without human intervention. Classifying big data can be a real challenge in supervised learning, but the results are highly accurate and trustworthy. In contrast, unsupervised learning can handle large volumes of data in real time, but there is a lack of transparency into how data is clustered and a higher risk of inaccurate results. (Delua, Julianna. “Supervised vs. Unsupervised Learning: What’s the Difference?” IBM, 12 March 2021.)
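The contrast described in this note can likewise be sketched briefly. The example below, assuming scikit-learn and synthetic data, trains a supervised classifier with labels and an unsupervised clusterer without them; everything in it is illustrative.

```python
# Minimal sketch contrasting supervised and unsupervised learning on the
# same synthetic data. Everything here is illustrative.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the labels y steer the model toward a known target.
classifier = LogisticRegression(max_iter=1_000).fit(X, y)
print("Supervised prediction for first point:", classifier.predict(X[:1]))

# Unsupervised: no labels; the algorithm discovers cluster structure itself,
# which is why its groupings can be harder to interpret and validate.
clusterer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster for first point:", clusterer.labels_[0])
```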

54 See, for example:

55 “Ethics of artificial intelligence.” UNESCO, November 2021, https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.

56 Supra note 44; see also an interesting example of OpenAI’s GPT-3 platform being used to explain the purpose of specific computer code in Willison, Simon. “Using GPT-3 to explain how code works.” Simon Willison blog, 9 July 2022, https://simonwillison.net/2022/Jul/9/gpt-3-explain-code/.

57 “Empowering AI Leadership: AI C-Suite Toolkit.” World Economic Forum, 12 January 2022.

58 “Empowering AI Leadership - An Oversight Toolkit for Boards of Directors.” World Economic Forum, 2022, https://express.adobe.com/page/RsXNkZANwMLEf/.

59 Supra note 21.

60 There are indications that increased government regulation is supported by knowledgeable business leaders. For example, a 2021 KPMG US study found that “business leaders are conscious that controls are needed and overwhelmingly believe the government has a role to play in regulating AI technology…Business leaders with high AI knowledge (92 percent) are more likely to say the government should be involved in regulating AI technology in comparison to total business leaders (87 percent).” See “Thriving in an AI World.” KPMG, April 2021, https://info.kpmg.us/content/dam/info/en/news-perspectives/pdf/2021/Updated%204.15.21%20-%20Thriving%20in%20an%20AI%20world.pdf.

61 European Commission’s proposed artificial intelligence act (April 2021): “Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS.” 21 April 2021; see also Heikkilä, Melissa. “A quick guide to the most important AI law you’ve never heard of.” MIT Technology Review, 13 May 2022, https://www.technologyreview.com/2022/05/13/1052