12/16/2025 | Press release
Good morning, Chairs Weprin and Otis; Ranking Members Blankenbush and Blumencranz; and distinguished Members of the Assembly Standing Committees on Insurance and Science and Technology.
My name is Kaitlin Asrow, and I am the Acting Superintendent of the New York State Department of Financial Services ("DFS"). Thank you for the opportunity to address you at today's hearing regarding the use of artificial intelligence to underwrite and price insurance policies in New York. I also want to thank Governor Hochul for trusting me to lead DFS into its next chapter.
I appreciate, and share, Governor Hochul's and the Legislature's commitment to ensuring the responsible use of innovation in financial services. My background is in innovation: exploring how it may be applied in financial services, and how we as regulators balance the risks and opportunities of that activity.
With innovation like artificial intelligence ("AI"), we can find more efficient and effective ways of working and of parsing the essential data we depend on for decision making. We must also ensure that we are protecting consumers and maintaining a stable financial system amid the new and still-developing risks of this technology.
Data is the foundation of AI, and it is essential to consider how it is managed alongside the models themselves. This is why, for example, the Circular Letter referenced in this hearing covers both the use of artificial intelligence systems and external consumer data and information sources.
Before turning to the substance of my remarks, I would like to briefly share a bit of my background and approach to innovation in financial services.
Earlier in my career I served in a research and consulting capacity for a non-profit focused on consumer financial health. I began advising early-stage financial technology, or fintech, companies that were part of the organization's accelerator program. These fintechs were all focused on helping consumers better navigate their financial lives.
This experience allowed me to participate in, and guide, conversations around consumer access, safety, and demand during the creation of new financial products and services. I became especially focused on the use of data in these products, both the risks and opportunities of this increasingly valuable resource.
Following this experience, I went to the Federal Reserve, where I worked in Supervision at the Federal Reserve Bank of San Francisco and advised the Board of Governors. In both roles I continued to focus on innovation. I supported and trained federal bank examiners on how to supervise technology use and innovative products inside of banks, and I helped draft the Federal Reserve's overall policy and approach on these topics.
From there I joined DFS, where, for the four years prior to the Governor appointing me Acting Superintendent, I ran the Research & Innovation Division, leading the Department's work on a range of innovation topics, including virtual currency, fintech, and artificial intelligence.
The team I led in that capacity worked closely with colleagues in the Insurance Division to release the 2024 Circular Letter on artificial intelligence, which I will speak to in more detail. That year, DFS also released guidance on customer service and complaints processing in virtual currency, which discusses the expected approach if AI is used to interact with customers. The final piece of AI policy work I will highlight is our October 2024 guidance on Cybersecurity Risks Arising from Artificial Intelligence.
In addition to these policy efforts, I also stood up an internal AI Steering Committee focused on the policy implications of AI, and an AI Governance Committee to oversee the Department's own use of AI.
The use of AI is rapidly growing and evolving, and its use in the financial services sector is no exception. AI brings enormous potential for the financial services sector, with opportunities ranging from greater efficiencies for both businesses and consumers to new and improved product offerings that leverage AI. It also presents potential risks, ranging from bias and discrimination to data privacy and cybersecurity. As a regulator, DFS is focused on ensuring that the companies and industries we regulate are deploying AI responsibly and with appropriate risk management. The benefits of AI cannot come at the expense of consumer protection or the safety and soundness of the entities deploying it.
It is important to recognize that many of the laws that DFS enforces are technology-agnostic, meaning the core regulatory obligations are the same for manual processes as they are for AI models and systems. This agnostic approach is positive because it enables companies to innovate under existing law. Given this, the Department has taken a diligent and considered approach to overseeing the use of AI within our regulated entities. We analyze our existing rules and processes, and where appropriate, clarify how they apply to AI through guidance and circular letters. We also integrate reviews of these new systems and datasets into our supervisory approaches.
This is consistent with how innovation is approached in many areas. Regardless of whether banking services are offered through brick and mortar or a phone app, the same laws apply. Regardless of whether a car has a manual or automatic transmission, the same traffic laws apply. Of course, as new risks arise, there may be room for specific AI requirements, but DFS has not taken that approach to date.
I will also note that while AI as an area is evolving rapidly, it remains rare in financial services to have a full deployment of AI without redundant, manual processes. Furthermore, the practice of modeling and prediction has always been fundamental to insurance.
I believe the application of existing laws and guidance to new approaches and tools gives us the flexibility to adapt as the market evolves. It also allows for innovation while still maintaining strong safeguards for consumers and the stability of the marketplace.