The Netherlands Authority for the Financial Markets

04/07/2026 | Press release | Distributed by Public on 04/07/2026 03:20

AI use is growing, and so are the risks

Today, the Netherlands Authority for the Financial Markets (AFM) is publishing the report 'AI in the Dutch asset management sector: use is growing, and so are the risks'. We investigated how artificial intelligence is being used, the developments the sector is undergoing, and the risks this entails. The findings show that usage is increasing, whilst policy and agreements are lagging behind.

Adoption is outpacing investment

More and more asset managers are adopting artificial intelligence, but the organisations behind them are not always keeping pace. Some firms are leading the way in adoption. They use artificial intelligence for analysis, price forecasting and improving trading strategies, amongst other things. However, many institutions are investing very little or have set aside no budget at all. At the same time, most of these parties expect expenditure to rise in the coming years in order to keep up. These investments will be necessary: the technology is developing rapidly, but processes, knowledge and controls are not always keeping pace. This creates the risk that institutions will soon become dependent on systems they cannot fully understand or control.

Opportunities lie in working more efficiently and big data analysis

For the time being, institutions do not expect significant immediate cost savings. They see the main benefits in working more efficiently and quickly, better data processing and improved internal processes. In the longer term, artificial intelligence can contribute to better portfolio choices and more in-depth market analyses. Transparency towards investors regarding the role of artificial intelligence remains essential in this regard.

Policy and ethics are lagging behind

As the use of artificial intelligence grows, so do the risks. These include poor data quality, algorithmic bias, limited explainability and dependence on a small number of technology providers. At the same time, it appears that many organisations are not yet adequately prepared for this. A quarter of institutions have no policy regarding the use of artificial intelligence by employees. For generative AI applications, that proportion is even higher: over two-thirds. Furthermore, more than half do not have an ethics handbook or code of conduct that specifically addresses artificial intelligence.

We call on the sector to assess whether it is sufficiently equipped to apply artificial intelligence, for example by drawing up clear policies on its use, ensuring good data quality, carefully vetting suppliers and ensuring that staff work with explainable and verifiable models. In this way, the use of artificial intelligence remains responsible and controlled.


The Netherlands Authority for the Financial Markets published this content on April 07, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on April 07, 2026 at 09:20 UTC. If you believe the information included in the content is inaccurate or outdated and requires editing or removal, please contact us at [email protected]