28-01-2021 · Sustainable Investing · SI Opener

SI Opener: Companies should not ignore the challenges of AI

Recent events in the US have shown that the use of artificial intelligence can bring with it negative social issues. People who are interested in one topic, such as 'the US elections were stolen', will only see information confirming this view, due to the algorithms used by social media companies. They are not exposed to other facts and opinions – and this can be detrimental.

Authors

  • Masja Zandbergen-Albers - Head of Sustainability Integration

  • Daniëlle Essink-Zuiderwijk - Engagement Specialist

Research1 by McKinsey shows that organizations are using AI as a tool for generating value. The leaders in its use come from a variety of industries and already attribute 20% or more of their organizations’ earnings before interest and taxes to AI.

While that is a positive sign for investors, we strongly believe that companies should also address the risks in using AI. The same study also finds that a minority of companies recognize these risks, and even fewer are working to reduce them. In fact, during our engagement with companies on these issues, we often hear them say that regulators should set clearer expectations around the use of such technology. Companies that are fit for the future should not wait for regulation, but take responsibility from the outset.

What are some of the social issues?

Civil rights: AI systems are increasingly being used in socially sensitive spaces such as education, employment, housing, credit scoring, policing and criminal justice. Often, they are deployed without contextual knowledge or informed consent, and thus threaten civil rights and liberties. For example, the right to privacy is at risk, especially with the increasing use of facial recognition technologies, from which it is almost impossible to opt out.

Labor and automation: AI-driven automation in the workplace has the potential to improve efficiency and reduce the number of repetitive jobs. Occupations are expected to change as automation creates jobs in some industries and replaces workers in others. AI may also bring greater surveillance of our work, so companies should ensure that their workers are fully aware of how they are being tracked and evaluated. Another example, specific to the technology sector, is the hidden labor performed by individuals who help build, maintain and test AI systems. This kind of unseen, repetitive and often unrecognized work – also called ‘click working’ – is paid per task and is frequently underpaid.

Safety and accountability: AI is already entrusted with decision-making across many industries, including financial services, hospitals and energy grids. Due to market pressure to innovate, various AI systems have been deployed before their technical safety was assured. Uber’s self-driving car, which killed a woman, and IBM’s system that recommended unsafe and incorrect cancer treatments are examples of what can go wrong. Since accidents can occur, oversight and accountability are essential if AI systems are going to be key decision-makers in society.
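To make the oversight point concrete, below is a minimal sketch, in Python, of one way an automated decision could be logged for later audit and deferred to a human when the model is unsure. Every name in it (run_model, CONFIDENCE_FLOOR, the threshold value) is an assumption made for illustration, not a reference to any real deployed system.

    import json
    import time
    import uuid

    CONFIDENCE_FLOOR = 0.90  # hypothetical cut-off: below it, a human must sign off

    def run_model(features):
        """Stand-in for a deployed model; returns (decision, confidence)."""
        return "approve", 0.87

    def decide_with_audit(features, model_version):
        """Make a decision and leave an auditable trace of how it was made."""
        decision, confidence = run_model(features)
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": features,
            "decision": decision,
            "confidence": confidence,
            # Defer to a human reviewer when the model is not confident enough.
            "needs_human_review": confidence < CONFIDENCE_FLOOR,
        }
        print(json.dumps(record))  # in production: an append-only audit store
        return record

    decide_with_audit({"applicant_income": 52000}, model_version="v1.4.2")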

Bias: The most commonly discussed issue with AI systems is that they are prone to biases that may reflect, and even reinforce, prejudices and social inequalities. These biases can arise from data that reflects existing discrimination or that is unrepresentative of modern society. Even if the underlying data is free of bias, its deployment can encode bias in various ways.

A report published by UNESCO2 found that AI voice assistants, from Amazon’s Alexa to Apple’s Siri, reinforce gender biases. According to the report, these voice assistants have female voices by default and are programmed in a way that suggests women are submissive. Today, women make up only 12% of AI researchers and only 6% of software developers. As AI engineers are predominantly white male technicians, bias towards their values and beliefs might arise in the design of applications. Furthermore, using the wrong models, or models with inadvertently discriminatory features, can lead to a biased system. Another bias-related issue is the ’black box’ problem: when it is impossible to understand the steps an AI system has taken to reach a decision, bias can arise unnoticed. Lastly, intentional bias can be built into algorithms.
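One common first check for the data-driven bias described above is the disparate impact ratio (the ‘four-fifths rule’ used in US employment law), which compares positive-outcome rates across groups. The sketch below computes it in Python over purely hypothetical decisions; a real assessment would run on a system’s actual outcomes.

    from collections import defaultdict

    # Hypothetical decision log: (group, received_positive_outcome)
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome

    # Selection rate per group: share of positive outcomes.
    rates = {g: positives[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())

    print(rates)                                   # {'group_a': 0.75, 'group_b': 0.25}
    print(f"disparate impact ratio: {ratio:.2f}")  # 0.33
    if ratio < 0.8:  # the conventional four-fifths threshold
        print("potential adverse impact -- investigate further")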

Content moderation in the spotlight

Social media platforms use content moderation algorithms and human review teams to monitor user-generated submissions against a pre-determined set of rules and guidelines. Content moderation work requires strong psychological resilience, and it is often not suitable for working from home with family members in the room. This meant that during Covid-19, companies had to scale back the amount of content that could be checked.
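As a rough illustration of this set-up of automated rules plus a human review queue, consider the minimal sketch below. The blocked terms, the score function and both thresholds are hypothetical placeholders, not any platform’s actual system.

    BLOCKED_TERMS = {"spamlink.example"}  # hypothetical rule-based blocklist
    AUTO_REMOVE_AT = 0.95                 # near-certain violations: act automatically
    HUMAN_REVIEW_AT = 0.60                # uncertain cases: route to a person

    def violation_score(text):
        """Stand-in for a trained content classifier returning a risk score."""
        return 0.7 if "hate" in text.lower() else 0.1

    def moderate(text):
        # Hard rules first, then the model, then the human review queue.
        if any(term in text for term in BLOCKED_TERMS):
            return "removed (rule match)"
        score = violation_score(text)
        if score >= AUTO_REMOVE_AT:
            return "removed (high-confidence model decision)"
        if score >= HUMAN_REVIEW_AT:
            return "queued for human review"
        return "published"

    for post in ["hello world", "I hate this group", "visit spamlink.example"]:
        print(post, "->", moderate(post))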

The relevance and materiality of content moderation became clear from the #StopHateForProfit campaign, which highlighted the profitability of harmful speech and disinformation on Facebook. The campaign resulted in more than 1,000 advertisers – including major players like Target, Unilever and Verizon – boycotting Facebook advertisements in July 2020. The focus on content moderation continued in the run-up to the US elections, with stricter guidelines and procedures in place at all of the main social media companies.

Investment requires engagement

For the reasons mentioned above, we started an engagement theme on the social impact of artificial intelligence in 2019. From an investment perspective, we see big opportunities in this trend; more in-depth information about artificial intelligence as an investment opportunity can be found in the white paper our trends investing team published in December 2016. However, we also acknowledge that AI can bring with it unwanted effects, which we believe our investee companies should address. We ask companies to do five things:

  1. Develop and publish clear policies for the use, procurement and development of artificial intelligence that explicitly address societal and human rights implications.

  2. Perform periodic impact assessments of their AI activities. These assessments should cover discriminatory outcomes, social biases, hidden labor and privacy concerns.

  3. Put in place strong governance provisions, given the control complexities around machine learning. The company should maintain control processes that identify incidents and risks associated with the unintended consequences of AI. The board should be sufficiently trained and experienced to exercise meaningful oversight of the control framework for AI, and to sign off on AI policies and risk reporting.

  4. Take the social issues of AI into account at the design and development stage. This means, among other things, that there should be sufficient knowledge of human rights and ethics in development teams. To mitigate social biases, companies should also promote diversity and inclusion in their AI teams.

  5. Take a multi-stakeholder approach in the company’s development and use of AI. Several initiatives and platforms are available to share and promote best practices. We also expect companies to report on their lobbying activities related to AI legislation.

Much more awareness is needed

Over the course of 2020, we spoke to most of the companies in our engagement peer group. In our initial conversations, some companies challenged the relevance of the issue or would not take accountability for it. That stance already seems to be shifting for some of them.

The 2020 voting season saw an increasing number of shareholder proposals focused on digital human rights. Robeco co-led the filing of a shareholder proposal at Alphabet’s AGM asking for a Human Rights Risk Oversight Committee to be established, composed of independent directors with relevant experience. Some 16% of shareholders voted in favor of our resolution – a substantial share of the non-controlling shareholder votes.

In the first week of November, Alphabet announced an update to its Audit Committee Charter, which now includes the review of major risk exposures around sustainability and civil and human rights. This is in line with our request to formalize board oversight, and is a first step towards putting such oversight in place for specific sustainability-related issues, such as human rights.

The use of artificial intelligence will only increase, and it will influence our lives and work to an ever greater extent. As morals and ethics cannot be programmed, we believe companies should accept their responsibility. There is still a long way to go.

Footnotes

1. Global survey: The state of AI in 2020 | McKinsey
2. Artificial intelligence and gender equality: key findings of UNESCO’s Global Dialogue, August 2020
