
28-01-2021 · Sustainable investing · SI Opener

SI Opener: Companies should not ignore the challenges of AI

Recent events in the US have shown that the use of artificial intelligence can bring with it negative social issues. People who are interested in one topic, such as ‘the US elections were stolen’, will only see information confirming this due to the algorithms used by social media companies. They are not exposed to other facts and opinions – and this can be detrimental.
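To make that mechanism concrete, here is a minimal, hypothetical Python sketch – not any platform's actual system – of how ranking a feed purely on a user's past engagement can trap that user in a single topic. The post data and scoring rule are invented for illustration:

```python
# Hypothetical sketch of an engagement-maximizing feed narrowing what a user sees.
from collections import Counter

posts = [
    {"id": 1, "topic": "election-fraud-claims"},
    {"id": 2, "topic": "election-fraud-claims"},
    {"id": 3, "topic": "fact-check"},
    {"id": 4, "topic": "sports"},
]

def rank_feed(posts, click_history):
    """Rank posts by how often the user previously engaged with the same topic."""
    topic_clicks = Counter(click_history)
    return sorted(posts, key=lambda p: topic_clicks[p["topic"]], reverse=True)

history = []
for _ in range(3):
    feed = rank_feed(posts, history)
    clicked = feed[0]                 # the user clicks the top-ranked item
    history.append(clicked["topic"])  # which reinforces the next ranking

print(history)  # the feed converges on one topic; dissenting posts sink
```

Because each click feeds back into the next ranking, the loop converges on whatever the user engaged with first – the filter-bubble effect described above, in a few lines.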

Authors

  • Masja Zandbergen-Albers - Head of Sustainability Integration

  • Daniëlle Essink-Zuiderwijk - Engagement Specialist

Summary

  1. Artificial intelligence accounts for 20% of EBIT in some industries

  2. Drawbacks include threat to civil rights, accidents and biases

  3. Companies need to take responsibility for these issues

Research¹ by McKinsey shows that organizations are using AI as a tool for generating value. The leaders in its use come from a variety of industries and already attribute 20% or more of their organizations’ earnings before interest and taxes to AI.

While that is a positive sign for investors, we strongly believe that companies should also address the risks in using AI. The same study also finds that a minority of companies recognize these risks, and even fewer are working to reduce them. In fact, during our engagement with companies on these issues, we often hear them say that regulators should set clearer expectations around the use of such technology. Companies that are fit for the future should not wait for regulation, but take responsibility from the outset.

What are some of the social issues?

Civil Rights: AI systems are increasingly being used in socially sensitive spaces such as education, employment, housing, credit scoring, policing and criminal justice. Often, they are deployed without contextual knowledge or informed consent, and thus threaten civil rights and liberties. For example, the right to privacy is at risk, especially with the increasing use of facial recognition technologies, from which it is almost impossible to opt out.

Labor and automation: AI-driven automation in the workplace has the potential to improve efficiency and reduce the number of repetitive jobs. Occupations are expected to change, as automation creates jobs in some industries and replaces workers in others. AI may also mean greater surveillance of our work, which means companies should ensure that their workers are fully aware of how they are being tracked and evaluated. Another example, specific to the technology sector, is the hidden labor performed by individuals who help build, maintain and test AI systems. This kind of unseen, repetitive and often unrecognized work – also called ‘click working’ – is paid per task and is frequently underpaid.

Safety and accountability: AI is already entrusted with decision-making across many industries, including financial services, hospitals and energy grids. Due to market pressure to innovate, various AI systems have been deployed before their technical safety was secured. Uber’s self-driving car, which killed a woman, and IBM’s system recommending unsafe and incorrect cancer treatments are examples of what can go wrong. Since accidents can occur, oversight and accountability are essential if AI systems are going to be key decision-makers in society.

Bias: The most commonly discussed issue with AI systems is that they are prone to biases that may reflect and even reinforce prejudices and social inequalities. These biases could arise from data that reflects existing discrimination or is unrepresentative of modern society. Even if the underlying data is free of bias, its deployment could encode bias in various ways. A report published by UNESCO² found that AI voice assistants, from Amazon’s Alexa to Apple’s Siri, reinforce gender biases. According to the report, those AI voice assistants have female voices by default and are programmed in a way that suggests women are submissive. Today, women make up only 12% of AI researchers and only 6% of software developers. As AI engineers are predominantly white male technicians, bias towards their values and beliefs might arise in the design of applications. Furthermore, using the wrong models, or models with inadvertently discriminatory features, could lead to a biased system. Another bias-related issue is the ‘black box’ problem, where it is impossible to understand the steps an AI system took to reach certain decisions, so bias can arise unnoticed. Lastly, intentional bias could be built into the algorithms.
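One simple way such outcome bias can be surfaced in practice is to compare a model's decision rates across groups – a demographic parity check. The sketch below is illustrative only; the data, group labels and tolerance are entirely hypothetical:

```python
# Hypothetical audit sketch: compare approval rates across two groups
# to flag a possible disparate outcome (demographic parity gap).

decisions = [  # (group, model_approved) - invented sample data
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rate(decisions, group):
    """Share of decisions in `group` that the model approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = approval_rate(decisions, "group_a") - approval_rate(decisions, "group_b")
print(f"demographic parity gap: {gap:.2f}")

# Flag the model for review if the gap exceeds a chosen tolerance.
TOLERANCE = 0.40  # arbitrary threshold for this illustration
if abs(gap) > TOLERANCE:
    print("audit flag: approval rates diverge across groups")
```

A check like this is exactly the kind of periodical impact assessment referred to later in this article; it does not prove fairness, but it makes a biased outcome visible and measurable.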

Content moderation in the spotlight

Social media platforms use content moderation algorithms and human review teams to monitor user-generated submissions, based on a pre-determined set of rules and guidelines. Content moderation work requires strong psychological resilience – it is often not suitable for working from home with family members in the room. This has meant that during Covid-19, companies had to scale down the amount of content that could be checked.
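As an illustration only, the toy pipeline below mimics the rule-plus-human-review setup described above; the terms, labels and review queue are hypothetical, not any platform's actual rules:

```python
# Toy content moderation pipeline: automated rules first,
# with ambiguous cases escalated to human reviewers.
BLOCK_TERMS = {"scam-link"}        # auto-remove on match
REVIEW_TERMS = {"hate", "threat"}  # escalate to a human moderator

def moderate(post: str) -> str:
    words = set(post.lower().split())
    if words & BLOCK_TERMS:
        return "removed"
    if words & REVIEW_TERMS:
        return "queued_for_human_review"  # capacity-bound, as noted above
    return "published"

for post in ["check this scam-link", "this is a threat", "nice weather"]:
    print(post, "->", moderate(post))
```

The human review queue is the bottleneck: when reviewer capacity shrinks, as it did during Covid-19, flagged content either waits longer or goes unchecked.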

The relevance and materiality of content moderation became clear from the #StopHateForProfit campaign highlighting the profitability of harmful speech and disinformation on Facebook. The campaign resulted in more than 1,000 advertisers – including major players like Target, Unilever and Verizon – boycotting Facebook advertisements in July 2020. The focus on content moderation continued in the run-up to the US elections, with stricter guidelines and procedures in place at all of the main social media companies.

Investment requires engagement

For the reasons mentioned above, we started an engagement theme on the social impact of artificial intelligence in 2019. From an investment perspective, we see big opportunities in this trend. More in-depth information about artificial intelligence as an investment opportunity can be found in the white paper our trends investing team published in December 2016. However, we also acknowledged that AI can bring with it unwanted effects that we believe our investee companies should address. We ask companies to do five things:

  1. Develop and publish clear policies for the use, procurement and development of artificial intelligence that explicitly address societal and human rights implications.

  2. Perform periodical impact assessments on their AI activities. The assessment should cover discriminatory outcomes, social biases, hidden labor and privacy concerns.

  3. Put in place strong governance provisions, given the control complexities around machine learning. The company should maintain control processes that identify incidents and risks associated with the unintended consequences of AI. The board should be sufficiently trained and experienced to exercise meaningful oversight on the control framework for AI and sign off AI policies and risk reporting.

  4. Take the social issues of AI into account in the design and development stage. This means, among other things, that there should be enough knowledge of human rights and ethics in their development teams. In order to mitigate social biases, the company should also promote diversity and inclusion in their AI teams.

  5. Take a multi-stakeholder approach in the company’s development and use of AI. Several initiatives and platforms are available to share and promote best practices. We also expect companies to report on their lobbying activities related to AI legislation.



Much more awareness is needed

Over the course of 2020, we have spoken to most of the companies in our engagement peer group. In our initial conversations, some companies challenged the relevance of the issue, or would not take accountability for it. That stance already seems to be changing somewhat for some of them.

The 2020 voting season showed an increasing number of shareholder proposals focused on digital human rights. Robeco co-led the filing of a shareholder proposal at Alphabet’s AGM asking for a Human Rights Risk Oversight committee to be established, comprised of independent directors with relevant experience. Some 16% of shareholders voted in favor of our resolution, a substantial part of the non-controlling shareholder votes.

In the first week of November, Alphabet announced an update of its Audit Committee Charter, which now includes the review of major risk exposures around sustainability and civil and human rights. This is in line with our request to formalize board oversight, and is a first step towards getting this in place on specific sustainability-related issues, such as human rights.

The use of artificial intelligence will only increase, influencing our lives and work to an ever greater extent. As morals and ethics cannot be programmed, we believe companies should accept their responsibility. There is still a long way to go.

Footnotes

¹ Global survey: The state of AI in 2020 | McKinsey
² Artificial intelligence and gender equality: key findings of UNESCO’s Global Dialogue, August 2020



Important information

The contents of this document have not been reviewed by the Securities and Futures Commission ("SFC") in Hong Kong. If you are in any doubt about any of the contents of this document, you should obtain independent professional advice. This document has been distributed by Robeco Hong Kong Limited (‘Robeco’). Robeco is regulated by the SFC in Hong Kong. This document has been prepared on a confidential basis solely for the recipient and is for information purposes only. Any reproduction or distribution of this documentation, in whole or in part, or the disclosure of its contents, without the prior written consent of Robeco, is prohibited. By accepting this documentation, the recipient agrees to the foregoing. This document is intended to provide the reader with information on Robeco’s specific capabilities, but does not constitute a recommendation to buy or sell certain securities or investment products. Investment decisions should only be based on the relevant prospectus and on thorough financial, fiscal and legal advice. Please refer to the relevant offering documents for details, including the risk factors, before making any investment decisions. The contents of this document are based upon sources of information believed to be reliable. This document is not intended for distribution to or use by any person or entity in any jurisdiction or country where such distribution or use would be contrary to local law or regulation. Investment involves risks. Historical returns are provided for illustrative purposes only and do not necessarily reflect Robeco’s expectations for the future. The value of your investments may fluctuate. Past performance is no indication of current or future performance.