28-01-2021 · SI Opener

SI Opener: Companies should not ignore the challenges of AI

Recent events in the US have shown that the use of artificial intelligence can bring with it negative social issues. People who are interested in one topic, such as 'the US elections were stolen', will only see information confirming this view due to the algorithms used by social media companies. They are not exposed to other facts and opinions – and this can be detrimental.

    Authors

  • Masja Zandbergen-Albers - Head of Sustainability Integration

  • Daniëlle Essink-Zuiderwijk - Engagement Specialist

Research1 by McKinsey shows that organizations are using AI as a tool for generating value. The leaders in its use come from a variety of industries, and already attribute 20% or more of their organizations' earnings before interest and taxes (EBIT) to AI.

While that is a positive sign for investors, we strongly believe that companies should also address the risks in using AI. The same study also finds that a minority of companies recognize these risks, and even fewer are working to reduce them. In fact, during our engagement with companies on these issues, we often hear them say that regulators should set clearer expectations around the use of such technology. Companies that are fit for the future should not wait for regulation, but take responsibility from the outset.

What are some of the social issues?

Civil rights: AI systems are increasingly being used in socially sensitive areas such as education, employment, housing, credit scoring, policing and criminal justice. Often, they are deployed without contextual knowledge or informed consent, and thus threaten civil rights and liberties. For example, the right to privacy is at risk, especially with the increasing use of facial recognition technologies, from which it is almost impossible to opt out.

Labor and automation: AI-driven automation in the workplace has the potential to improve efficiency and reduce the number of repetitive jobs. Occupations are expected to change as automation creates jobs in some industries and replaces workers in others. AI may also mean greater surveillance of our work, which means companies should ensure that their workers are fully aware of how they are being tracked and evaluated. Another example, specific to the technology sector, is the hidden labor performed by individuals who help build, maintain and test AI systems. This kind of unseen, repetitive and often unrecognized work – also called 'click working' – is paid per task and is frequently underpaid.

Safety and accountability: AI is already entrusted with decision-making across many industries, including financial services, hospitals and energy grids. Due to market pressure to innovate, various AI systems have been deployed before their technical safety was assured. Uber's self-driving car, which killed a woman, and IBM's system recommending unsafe and incorrect cancer treatments are examples of what can go wrong. Since accidents can occur, oversight and accountability are essential if AI systems are going to be key decision-makers in society.

Bias: The most commonly discussed issue with AI systems is that they are prone to biases that may reflect and even reinforce prejudices and social inequalities. These biases can arise from data that reflects existing discrimination or is unrepresentative of modern society. Even if the underlying data is free of bias, its deployment could encode bias in various ways. A report published by UNESCO2 found that AI voice assistants, from Amazon's Alexa to Apple's Siri, reinforce gender biases: they have female voices by default and are programmed in a way that suggests women are submissive. Today, women make up only 12% of AI researchers and only 6% of software developers. As AI engineers are predominantly white male technicians, bias towards their values and beliefs might arise in the design of applications. Furthermore, using the wrong models, or models with inadvertently discriminatory features, could lead to a biased system. A related issue is the 'black box' problem, where it is impossible to understand the steps an AI system has taken to reach a decision, so bias can arise unnoticed. Lastly, intentional bias could be built into the algorithms.
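
To make the bias problem described above more tangible, below is a minimal sketch of one audit metric commonly used to flag discriminatory outcomes: the demographic parity gap, i.e. the difference in a model's favorable-decision rates across demographic groups. The data, group labels and decision values are entirely hypothetical and purely for illustration; real audits use real model output and a broader set of fairness metrics.

    # Minimal sketch of a demographic parity check (hypothetical data).
    from collections import defaultdict

    def positive_rate_by_group(predictions, groups):
        """Share of positive decisions (1 = favorable) per demographic group."""
        counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
        for pred, group in zip(predictions, groups):
            counts[group][0] += pred
            counts[group][1] += 1
        return {g: pos / total for g, (pos, total) in counts.items()}

    # Hypothetical credit-scoring decisions: 1 = approved, 0 = rejected.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    rates = positive_rate_by_group(preds, groups)
    gap = max(rates.values()) - min(rates.values())
    print(rates)                     # {'A': 0.6, 'B': 0.4}
    print(f"Parity gap: {gap:.2f}")  # a large gap flags potential disparate impact

A large gap does not by itself prove discrimination, but it is exactly the kind of signal that the periodic impact assessments we ask companies to perform (see below) should surface and investigate.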

Content moderation in the spotlight

Social media platforms use content moderation algorithms and human review teams to monitor user-generated submissions against a pre-determined set of rules and guidelines. Content moderation work requires strong psychological resilience – it is often not suitable for working from home with family members in the room. This meant that during Covid-19, companies had to scale back the amount of content that could be checked.

The relevance and materiality of content moderation became clear from the #StopHateForProfit campaign, which highlighted the profitability of harmful speech and disinformation on Facebook. The campaign resulted in more than 1,000 advertisers – including major players like Target, Unilever and Verizon – boycotting Facebook advertisements in July 2020. The focus on content moderation continued in the run-up to the US elections, with stricter guidelines and procedures in place at all of the main social media companies.

Investment requires engagement

For the reasons mentioned above, we started an engagement theme on the social impact of artificial intelligence in 2019. From an investment perspective, we see big opportunities in this trend; more in-depth information about artificial intelligence as an investment opportunity can be found in the white paper our trends investing team published in December 2016. However, we also acknowledge that AI can bring with it unwanted effects that we believe our investee companies should address. We ask companies to do five things:

  1. Develop and publish clear policies for the use, procurement and development of artificial intelligence that explicitly address societal and human rights implications.

  2. Perform periodic impact assessments of their AI activities. The assessments should cover discriminatory outcomes, social biases, hidden labor and privacy concerns.

  3. Put in place strong governance provisions, given the control complexities around machine learning. The company should maintain control processes that identify incidents and risks associated with the unintended consequences of AI. The board should be sufficiently trained and experienced to exercise meaningful oversight of the control framework for AI, and should sign off on AI policies and risk reporting.

  4. Take the social issues of AI into account in the design and development stage. This means, among other things, that there should be enough knowledge of human rights and ethics in their development teams. In order to mitigate social biases, the company should also promote diversity and inclusion in their AI teams.

  5. Take a multi-stakeholder approach in the company’s development and use of AI. Several initiatives and platforms are available to share and promote best practices. We also expect companies to report on their lobbying activities related to AI legislation.

Much more awareness is needed

Over the course of 2020, we have spoken to most of the companies in our engagement peer group. In our initial conversations, some companies challenged the relevance of the issue, or would not take accountability for it. For some of them, that stance already seems to be changing.

The 2020 voting season saw an increasing number of shareholder proposals focused on digital human rights. Robeco co-led the filing of a shareholder proposal at Alphabet's AGM asking for a Human Rights Risk Oversight committee to be established, comprised of independent directors with relevant experience. Some 16% of shareholders voted in favor of our resolution – a substantial share of the non-controlling votes.

In the first week of November, Alphabet announced an update of its Audit Committee Charter, which now includes the review of major risk exposures around sustainability and civil and human rights. This is in line with our request to formalize board oversight, and is a first step towards getting this in place on specific sustainability-related issues, such as human rights.

The use of artificial intelligence will only increase, and it will influence our lives and work to an ever greater extent. As morals and ethics cannot be programmed, we believe companies should accept their responsibility. There is still a long way to go.

Footnotes

1. Global survey: The state of AI in 2020 | McKinsey
2. Artificial intelligence and gender equality: key findings of UNESCO's Global Dialogue, August 2020