Artificial Intelligence (AI) has taken the world by storm, entering the collective consciousness through a wave of widely available generative AI tools. Amid feverish speculation about what AI might mean for humanity, the reality for the investment funds industry is likely to fall somewhere between the more extreme visions, presenting both opportunities and threats.
In this article we consider these opportunities and potential pitfalls for funds, and for those charged with their governance, in the age of AI. We also examine how to manage the risks and rewards of this new technology and its likely impact across the fund industry, including on investment managers, investors, key stakeholders and service providers.
AI at the fund portfolio level
Investment managers are increasingly using AI to enhance decision making, automate processes, manage risk and improve portfolio performance. Some hedge fund managers, such as statistical arbitrage shops, have already been using AI for high-speed programmatic trading for many years.
Sophisticated AI tools are being used to enhance portfolio optimisation and risk assessment by analysing vast swathes of both financial and non-financial data, enabling investment managers to make more informed decisions to maximise returns and achieve investor goals. Techniques such as machine learning for stock selection or predictive analysis, where AI models attempt to forecast market movements, asset performance or economic trends from historical data and market conditions, present potentially highly lucrative opportunities for managers to ‘beat the market’.
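By way of illustration only, the sketch below shows the general shape of such a predictive model: a regression fitted on lagged returns to forecast the next period's move. It is not any particular manager's approach; the data is synthetic, the use of scikit-learn and all parameters are our own illustrative choices, and a real signal would involve far more careful feature engineering and validation.

```python
# Illustrative sketch: predict the next day's return from the previous five
# days' returns, using synthetic data and a simple gradient boosting model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(42)
returns = rng.normal(0, 0.01, size=1_000)          # stand-in daily returns

# Features: the previous five days' returns; target: the next day's return.
X = np.column_stack([returns[i:i - 5] for i in range(5)])
y = returns[5:]

model = GradientBoostingRegressor(max_depth=2, n_estimators=100)
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model.fit(X[train_idx], y[train_idx])
    score = model.score(X[test_idx], y[test_idx])   # out-of-sample R^2
print("last fold R^2:", round(score, 4))
```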
Ultimately, it is the sheer speed and volume of data that AI can sort through and interpret in meaningful ways that is proving most valuable to investment managers, helping them make faster and better-informed decisions for fund portfolios as they seek to generate active returns. Generative AI can also add value by producing reports, summarising information, answering questions and even drafting emails to respond to clients.
That said, investment managers should avoid chasing AI-driven trends and need to assess whether these tools genuinely fit their business and overall investment thesis. Depending on the type of portfolio and strategy, such tools may be very beneficial, for example in high-volume, high-liquidity portfolios where trade execution is key and being just ahead of the curve makes all the difference.
Other investment managers, with less liquid strategies and longer investment time horizons, may find these tools less helpful. AI (so far) is no substitute for a portfolio company's management team, nor can it replace the real human interaction and due diligence required when negotiating complex deals.
AI, like any tool, has massive potential, but it needs to be deployed properly and used correctly in the context of the fund’s investment strategy to be most effective. Otherwise, the technology could lead investment managers into making wrong decisions.
Compliance and risk management
One of the most common ways we see AI assisting investment managers is in monitoring portfolios and managing risk. AI systems are being deployed for real-time investment monitoring, immediately flagging compliance violations so that investment managers can promptly address and remediate them.
The ability of AI systems to process huge amounts of data can be scaled up across multiple portfolios and clients, allowing for greater efficiencies and data integration. AI is also performing more sophisticated tasks, such as processing and reacting in real time to changes in regulatory requirements by updating rules and checks used to monitor portfolios.
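As a highly simplified illustration of what such rules-based checks can mean in practice, the sketch below encodes two invented limits (a 10% single-issuer cap and a 75% equity cap) as simple predicates over portfolio positions. Real compliance engines apply far richer rule sets drawn from fund documents and regulation; the field names and thresholds here are assumptions for the example.

```python
# Toy rules-based compliance check: each rule is a simple predicate over
# portfolio state, and any breach is flagged as soon as positions change.
from dataclasses import dataclass

@dataclass
class Position:
    ticker: str
    weight: float          # fraction of net asset value
    asset_class: str

def check_compliance(positions: list[Position]) -> list[str]:
    """Return human-readable descriptions of any breached rules."""
    breaches = []
    for p in positions:
        if p.weight > 0.10:                     # hypothetical single-issuer limit
            breaches.append(f"{p.ticker}: weight {p.weight:.1%} exceeds 10% cap")
    equity_weight = sum(p.weight for p in positions if p.asset_class == "equity")
    if equity_weight > 0.75:                    # hypothetical asset-class limit
        breaches.append(f"equity exposure {equity_weight:.1%} exceeds 75% cap")
    return breaches

portfolio = [Position("ABC", 0.12, "equity"), Position("XYZ", 0.05, "bond")]
for breach in check_compliance(portfolio):
    print("FLAG:", breach)   # in practice this would feed an alerting workflow
```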
Managers are also using AI tools to stress test portfolios by simulating more complex real-world scenarios, and to analyse news reports and social media to gauge market sentiment, so that portfolios can be adjusted and risk mitigated. AI can also flag suspected issues or required updates directly to compliance teams.
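The following minimal sketch shows the scenario-based stress-testing idea in its simplest form: pre-defined shocks are applied to asset-class exposures and the portfolio-level impact is aggregated. The scenarios, exposures and shock sizes are all invented for the example; real stress tests are far more granular.

```python
# Apply illustrative scenario shocks to asset-class exposures and sum the
# estimated profit-and-loss impact for each scenario.
exposures = {"equities": 60_000_000, "credit": 30_000_000, "rates": 10_000_000}

scenarios = {
    "equity sell-off": {"equities": -0.20, "credit": -0.05, "rates": 0.02},
    "rate shock":      {"equities": -0.05, "credit": -0.08, "rates": -0.10},
}

for name, shocks in scenarios.items():
    pnl = sum(exposures[k] * shocks.get(k, 0.0) for k in exposures)
    print(f"{name}: estimated P&L {pnl:,.0f}")
```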
The reporting capabilities are also extremely powerful. Generative AI can draft compliance reports and complete forms and documentation for internal or external reporting, reducing the manual workload of compliance officers and minimising the risk of manual error in the reporting process.
Overall, AI is proving to be a potent tool: it enhances the accuracy and speed of portfolio compliance monitoring by automating rules-based checks, providing real-time monitoring and enabling managers to adapt to an ever-changing and increasingly complex regulatory environment, thereby reducing risk for both investment managers and investors.
Service providers at the cutting edge
In the same vein as investment managers, fund service providers are constantly looking to implement the most advanced technology in their business operations. From administrators to broker-dealers, auditors and lawyers, AI is now making inroads across these sectors as service providers strive to stay at the cutting edge and remain the partner of choice for investment manager clients.
Fund administrators are using AI to reduce manual processes and improve efficiency. This can help them service more clients across multiple portfolios and process investor transactions faster and with greater accuracy. AI can also help legal teams work more efficiently for the benefit of their clients. For example, when used correctly, AI can enhance contract review, legal research, large-scale due diligence and more, allowing the best legal teams and attorneys to focus on adding true value to their clients’ businesses and to prioritise complex and strategic work streams.
However, it should be acknowledged that the limitations of the current generation of generative AI are already becoming clear to those who use it regularly. While we still see significant benefits, we also expect the value of human expertise to increase, along with the importance placed on authentication and verification.
Smart contracts and cybersecurity
Smart contracts, powered by blockchain technology, can augment transparency and trust in the funds sphere by enhancing contract functionality with automated execution and compliance verification against predefined rules. Smart contracts can also automate various fund operations, such as distribution calculations, fee structures and performance reporting. These self-executing contracts can reduce administrative overheads, minimise errors and enhance transparency through a tamper-proof and auditable record of fund activities.
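As a purely conceptual illustration, the sketch below expresses, in Python rather than an on-chain language, the kind of deterministic fee and distribution logic a smart contract could encode. The rates, the absence of hurdles or high-water marks, and the investor holdings are simplifying assumptions made for the example.

```python
# Simplified fee and pro-rata distribution calculation of the sort a smart
# contract might automate; figures and rates are illustrative only.
def fees_and_distributions(nav_open: float, nav_close: float,
                           holdings: dict[str, float],
                           mgmt_rate: float = 0.02,
                           perf_rate: float = 0.20) -> dict[str, float]:
    mgmt_fee = mgmt_rate * nav_open                          # flat management fee
    perf_fee = perf_rate * max(nav_close - nav_open, 0.0)    # no hurdle or high-water mark
    distributable = nav_close - mgmt_fee - perf_fee
    total_units = sum(holdings.values())
    return {inv: distributable * units / total_units for inv, units in holdings.items()}

print(fees_and_distributions(100_000_000, 112_000_000,
                             {"Investor A": 700, "Investor B": 300}))
```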
With cybersecurity, AI plays a dual role. It can bolster defences by identifying and mitigating potential threats through advanced threat detection, anomaly detection and predictive analysis. It can also improve user authentication methods, such as biometrics and behavioural analysis.
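A minimal sketch of the anomaly-detection idea, assuming scikit-learn's IsolationForest and entirely synthetic ‘login’ features (hour of day and data transferred), is shown below. A production system would use far richer behavioural signals and escalate flagged events to human analysts.

```python
# Fit an unsupervised anomaly detector on synthetic "normal" login activity,
# then flag events that deviate from that baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal behaviour: office-hours logins with modest data transfers (MB).
normal_activity = np.column_stack([rng.normal(11, 2, 500), rng.normal(50, 10, 500)])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

events = np.array([[10.5, 48.0],    # typical login
                   [3.0, 900.0]])   # 3 a.m. login with a very large transfer
for event, label in zip(events, model.predict(events)):
    status = "anomalous" if label == -1 else "normal"
    print(event, "->", status)      # anomalies would be escalated for human review
```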
On the other hand, malicious actors can also leverage AI to enhance the sophistication of cyber-attacks, creating a constant challenge for cybersecurity experts. The adaptive nature of AI allows cybersecurity systems to learn and evolve alongside the dynamic threat landscape, which is crucial to staying ahead of attackers.
Companies using AI as part of their cybersecurity infrastructure must be careful not to become over-reliant on it, as this can breed complacency. Human expertise remains critical for interpreting complex threats and understanding the broader context of each situation. Too much reliance on automation without effective oversight may create vulnerabilities that can easily be exploited.
Emerging regulatory framework
With such a rapid pace of development in AI, regulators can only try to catch up. Considerable thought is being given to the best way to regulate AI without impeding progress, while ensuring the technology develops safely and without harm to the public.
The European Union first proposed a regulatory framework for AI in 2021, and its Artificial Intelligence Act (the “AI Act”) is regarded as the most far-reaching regulation of AI worldwide. The AI Act sets out a framework for classifying systems according to their risk, with mandatory requirements for trustworthy AI and obligations on providers of systems to ensure safety. An agreement between the European Council and the European Parliament in December 2023 introduced new elements, including rules on high-impact models that could pose systemic risk in the future. The AI Act also identifies unacceptable risks in AI development, prohibiting practices such as: cognitive behavioural manipulation of people or specific vulnerable groups, for instance voice-activated toys that encourage dangerous behaviour in children; social scoring, where people are classified based on behaviour, socio-economic status or personal characteristics; and real-time remote biometric identification systems, such as facial recognition in public spaces.
The US lags the EU in regulatory measures and currently has no federal AI legislation in place, although there have been significant discussions, and several federal agencies, such as the Food & Drug Administration, are working to create pathways for regulating AI in matters under their purview, such as medical imaging.
Other countries, such as Canada and the UK, are somewhere in between: neither has passed explicit AI regulation, although various working groups and committees have been established and papers, proposed legislation and regulations have been published.
Fund governance and future trends
As AI continues to develop, we can anticipate even greater breakthroughs in the future, especially with generative AI. Fund governance professionals need to remain vigilant to the potential dangers.
Senior management, boards, executive committees and others charged with the governance of funds should be prepared by staying educated and current on the latest trends and developments. Being an expert in the AI field is not essential; however, understanding how AI is developing, and the impact it may have on investment managers, portfolios, service providers, regulators and funds themselves, is critical.
This also means asking the right questions at board meetings to assess the opportunities and threats AI poses, as well as understanding its strengths and weaknesses. It means asking investment managers how they are currently using AI, what oversight is in place and what risks they see in the technology. Only by remaining vigilant and aware of the challenges can the fund industry truly benefit from the advances in AI.
Benoit Sansoucy is Senior Vice President, Fiduciary Services at Maples Group.
Jarard Blake is Vice President, Fiduciary Services at Maples Group.