AI can offer considerable advantages, from simple machine learning that seems to know what you meant to type, to more complex algorithms that can predict health care needs and detect patterns in climate change. It is now routinely used across the tech spectrum and often kicks in without the user even realizing that it is there.
However, it also poses significant threats to privacy and data management, along with the prospect of machine ‘learning’ that leads to unwanted surveillance, racial profiling or discrimination. And the true state of affairs is hard to know, given the lack of disclosure about companies’ AI activities.
This lack of information was one of the reasons why the Active Ownership team was only able to successfully conclude four of the nine cases in the engagement program that ran from 2019 to 2022. The other five cases were transferred to the team’s SDG Engagement theme to further engage these companies on their societal impact.
Aligning practices
“Through our engagement, we learned that companies are gradually aligning internal practices to principles of responsible AI,” says engagement specialist Danielle Essink. “Many companies formalized AI principles that address topics like inclusion, fairness and transparency.”
“Additionally, companies are increasingly pursuing a collaborative approach by actively contributing to initiatives that aim to advance responsible governance and best practices. These types of initiatives play a decisive role in guaranteeing trustworthy AI across the industry.”
“However, ethical principles on their own do not ensure the responsible development and deployment of AI. Businesses require robust governance mechanisms to effectively implement their principles.”
Lack of disclosure
A major stumbling block is the lack of disclosure about what companies are actually doing to address concerns, along with their willingness to engage in the first place. Much of the AI technology and how it is implemented on different platforms is still shrouded in secrecy.
“In our engagement, we observed that transparency around governance and implementation remained low, as most companies’ public disclosures lacked clarity about how such principles translate into practice, and which checks and balances are in place,” says engagement specialist Claire Ahlborn, who also worked on the program.
“After talking to the companies, we learned about the specifics of the implementation, which then gave us the confidence to close some of the objectives successfully. The engagement results of this theme are, therefore, highly correlated with companies’ willingness to set up constructive dialogues.”
Huge growth predicted
The International Data Corporation’s Worldwide Artificial Intelligence Software Forecast 2022 projected that the worldwide AI market is set to show compound annual growth of 18.6% from 2022 to 2026.
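To put that compound rate into perspective (a back-of-the-envelope illustration, not a figure from the IDC report itself): growth of 18.6% a year over the four years from 2022 to 2026 compounds to a factor of roughly 1.186^4 ≈ 1.98, meaning the market would nearly double over the forecast period.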
Yet the potential benefits come with risks that are not yet fully explored, let alone understood, Essink says. To achieve the full potential of AI, companies need to manage the associated risks that come with the development and use of the technology, including human rights-related risks.
“Given the speed at which AI is being developed, there is no doubt that in the next few decades, this technology will transform our economy and society in ways we cannot imagine,” Essink says.
Positive changes
“This type of growth represents massive opportunities for AI to contribute to positive changes, such as detecting patterns in environmental data, or improving the analysis of health information.”
“At the same time, AI could cause new problems or aggravate existing ones if companies do not have enough understanding of the risks associated with these technologies. For example, using AI algorithms for profiling can have discriminatory effects, such as credit rating algorithms disfavoring people from certain ethnic backgrounds, or those living in certain areas.”
“Similarly, AI can be used for surveillance – in public spaces but also in the workplace – putting the right to privacy at risk. This shows a growing need for the responsible governance of AI systems to ensure that such systems conform to ethical values, norms, and the growing number of AI regulations.”
Upcoming regulation
Such regulatory moves and policy proposals have already been launched by governments, ethics committees, non-profit organizations, academics and the EU. In April 2021, the European Commission published its proposal for the AI Act, which sets out clear requirements and obligations for developers, deployers and users regarding specific uses of AI.
The proposal identifies four regulatory categories based on the level of risk. At the high end, AI systems identified as high-risk, such as CV-scanning tools that rank job applicants, will be subject to strict obligations, including enhanced risk management processes and human oversight. Systems posing limited risk face lighter transparency obligations, while minimal-risk applications remain largely unregulated.
“This growing legislative pressure around AI could pose serious regulatory risks for companies that are not well prepared to conform with the rising obligations,” says Ahlborn.
Aligning engagement with the SDGs
“Meanwhile, the alignment of AI technologies with ethical values and principles will be critical to promote and protect human rights in society. As a result, we will continue our engagement work with a selection of companies in the tech sector under our ‘Sustainable Development Goals (SDG) engagement’ theme.”
“These dialogues have a strong focus on human rights and societal impact, and highlight topics like misinformation, content moderation and stakeholder collaboration. We will focus on how companies can contribute to SDG 10 (Reduced inequalities) and SDG 16 (Peace, justice and strong institutions) by safeguarding human rights in the development and use of AI and promoting social, economic and political inclusion.”
Read the full Q4 Active Ownership report here