SI Opener: Companies should not ignore the challenges of AI

28-01-2021 | SI Opener
Recent events in the US have shown that the use of artificial intelligence can bring with it negative social issues. People who are interested in one topic, such as ’the US elections were stolen’, will only see information confirming it, due to the algorithms used by social media companies. They are not exposed to other facts and opinions – and this can be detrimental.
  • Masja Zandbergen-Albers, Head of Sustainability Integration
  • Daniëlle Essink, Engagement Specialist

Speed read

  • Artificial intelligence accounts for 20% of EBIT in some industries 
  • Drawbacks include threats to civil rights, accidents and bias 
  • Companies need to take responsibility for these issues

Research1 by McKinsey shows that organizations are using AI as a tool for generating value. The leaders in its use come from a variety of industries that already attribute 20% or more of their organizations’ earnings before interest and taxes to AI.  

While that is a positive sign for investors, we strongly believe that companies should also address the risks in using AI. The same study also finds that a minority of companies recognize these risks, and even fewer are working to reduce them. In fact, during our engagement with companies on these issues, we often hear them say that regulators should set clearer expectations around the use of such technology. Companies that are fit for the future should not wait for regulation, but take responsibility from the outset. 


What are some of the social issues?

Civil Rights: AI systems are increasingly being used in socially sensitive spaces such as education, employment, housing, credit scoring, policing and criminal justice. Often, these are deployed without contextual knowledge or informed consent, and thus threaten civil rights and liberties. For example, the right to privacy is at risk, especially with the increasing use of facial recognition technology, from which it is almost impossible to opt out. 

Labor and automation: AI-driven automation in the workplace has the potential to improve efficiency and reduce the number of repetitive jobs. Occupations are expected to change, as automation creates jobs in some industries and replaces workers in others. AI may also mean greater surveillance of our work, which means companies should ensure that their workers are fully aware of how they are being tracked and evaluated. Another example, specific to the technology sector, is the hidden labor performed by individuals who help build, maintain and test AI systems. This kind of unseen, repetitive and often unrecognized work – also called ‘click working’ – is paid per task and is frequently underpaid. 

Safety and accountability: AI is already entrusted with decision-making across many industries, including financial services, hospitals and energy grids. Due to market pressure to innovate, various AI systems have been deployed before their technical safety was secured. Uber’s self-driving car, which killed a woman, and IBM’s system that recommended unsafe and incorrect cancer treatments are examples of what can go wrong. Since accidents can occur, oversight and accountability are essential if AI systems are going to be key decision-makers in society. 

Bias: The most commonly discussed issue with AI systems is that they are prone to biases that may reflect and even reinforce prejudices and social inequalities. These biases can arise from data that reflects existing discrimination or is unrepresentative of modern society. Even if the underlying data is free of bias, its deployment could encode bias in various ways. A report published by UNESCO2 found that AI voice assistants, from Amazon’s Alexa to Apple’s Siri, reinforce gender biases. According to the report, these voice assistants have female voices by default and are programmed in a way that suggests women are submissive. Today, women make up only 12% of AI researchers and only 6% of software developers. As AI engineers are predominantly white and male, their values and beliefs might be reflected in the design of applications. Furthermore, using the wrong models, or models with inadvertently discriminatory features, could lead to a biased system. Another bias-related issue is the ’black box’ problem: when it is impossible to understand the steps an AI system takes to reach a decision, bias can go undetected. Lastly, bias could be built into algorithms intentionally. 
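To illustrate how bias can come straight from historical data, here is a minimal sketch in Python. The data and approval rates are entirely hypothetical: a “model” that simply learns approval frequencies from past lending decisions will reproduce whatever discrimination is embedded in those decisions.

```python
# Hypothetical historical loan decisions: (applicant group, approved?)
# Group B was approved less often in the past, for whatever reason.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 40 + [("B", False)] * 60
)

def learned_approval_rate(group):
    """'Train' by computing the historical approval frequency per group."""
    decisions = [approved for g, approved in history if g == group]
    return sum(decisions) / len(decisions)

# A naive model that approves whenever the learned rate exceeds 0.5
# rejects group B applicants purely because of past outcomes.
for group in ("A", "B"):
    rate = learned_approval_rate(group)
    print(group, rate, "approve" if rate > 0.5 else "reject")
```

Even with no explicit rule about group membership, the learned behavior encodes the historical disparity – which is why impact assessments on training data matter.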

Content moderation in the spotlight

Social media platforms use content moderation algorithms and human review teams to monitor user-generated submissions, based on a pre-determined set of rules and guidelines. Content moderation work requires strong psychological resilience – it is often not suitable for working from home with family members in the room. This meant that during Covid-19, companies had to scale back the amount of content that could be checked. 

The relevance and materiality of content moderation became clear from the #StopHateForProfit campaign highlighting the profitability of harmful speech and disinformation on Facebook. The campaign resulted in more than 1,000 advertisers – including major players like Target, Unilever and Verizon – boycotting Facebook advertisements in July 2020. The focus on content moderation continued in the run up to the US elections, with stricter guidelines and procedures in place at all of the main social media companies.  

Investment requires engagement

For the reasons mentioned above, we started an engagement theme on the social impact of artificial intelligence in 2019. From an investment perspective, we see big opportunities in this trend. More in-depth information about artificial intelligence as an investment opportunity can be found in the white paper3 our trends investing team published in December 2016. However, we also acknowledged that AI can bring with it unwanted effects that we believe our investee companies should address. We ask companies to do five things: 

  1. Develop and publish clear policies for the use, procurement and development of artificial intelligence that explicitly address societal and human rights implications.  
  2. Perform periodical impact assessments on their AI activities. The assessment should cover discriminatory outcomes, social biases, hidden labor and privacy concerns. 
  3. Put in place strong governance provisions, given the control complexities around machine learning. The company should maintain control processes that identify incidents and risks associated with the unintended consequences of AI. The board should be sufficiently trained and experienced to exercise meaningful oversight on the control framework for AI and sign off AI policies and risk reporting. 
  4. Take the social issues of AI into account in the design and development stage. This means, among other things, that there should be enough knowledge of human rights and ethics in their development teams. In order to mitigate social biases, the company should also promote diversity and inclusion in their AI teams. 
  5. Take a multi-stakeholder approach in the company’s development and use of AI. Several initiatives and platforms are available to share and promote best practices. We also expect companies to report on their lobbying activities related to AI legislation. 

Much more awareness is needed

Over the course of 2020, we have spoken to most of the companies in our engagement peer group. In our initial conversations, some companies challenged the relevance of the issue, or would not take accountability for it. That stance already seems to be changing somewhat for some of them. 

The 2020 voting season showed an increasing number of shareholder proposals focused on digital human rights. Robeco co-led the filing of a shareholder proposal at Alphabet’s AGM asking for a Human Rights Risk Oversight committee to be established, comprised of independent directors with relevant experience. Some 16% of shareholders voted in favor of our resolution, a substantial part of the non-controlling shareholder votes.  

In the first week of November, Alphabet announced an update of its Audit Committee Charter, which now includes the review of major risk exposures around sustainability and civil and human rights. This is in line with our request to formalize board oversight, and is a first step towards getting this in place on specific sustainability-related issues, such as human rights.  

The use of artificial intelligence will only increase, influencing our lives and work to an ever greater extent. As morals and ethics cannot be programmed, we believe companies should accept their responsibility. There is still a long way to go. 

2 Artificial intelligence and gender equality: key findings of UNESCO’s Global Dialogue, August 2020
