
You can't AI-ways get what you want: Key considerations in procuring artificial intelligence

January 7, 2025

As revolutionary as generative AI has the potential to be, it has confirmed one of the oldest truths about IT procurement - if you are not clear about what you want, it is unlikely that you will get what you need.

The current tsunami of extravagant, vague and sometimes wishful claims of AI capabilities often obscures the key legal and commercial issues and makes it harder to isolate what parts of the solution and data will need to be homegrown and what will need to be procured from outside. Nonetheless, every AI procurement should have a clearly identified set of business problems that it is intended to address and involve detailed analysis of the internal and external technology, data and human resources that will be required to support and utilise it.

Third-party LLMs

Most generative AI solutions are based on large language models (LLMs) developed and maintained by third parties. Anyone licensing or using a third-party model will naturally want assurance that the model is free from intellectual property right (IPR) infringement claims. However, fundamental due diligence on LLM intellectual property issues is difficult: most models have been trained on a wide spectrum of materials which are not specifically notified to users, and it is generally impossible to identify those background materials simply by examining the licensed model. This difficulty is not an academic concern, as most of the major LLM providers have faced legal challenges. For example, OpenAI, the company ultimately behind ChatGPT, has faced multiple lawsuits objecting to its use of third-party materials without consent in training its various models.

While definitive due diligence in relation to IPR is usually not possible, risks can be mitigated by checking that the relevant contractual documents provide appropriate legal protections (such as indemnities against third-party claims and warranties to compensate for the losses the purchaser might suffer on its own account) and that the LLM provider is reputable, high-profile and sufficiently solvent that such provisions will be worth the paper on which they are written.

In practice, it is likely that aggrieved third parties will seek to enforce their claims against providers (such as OpenAI) rather than against their licensees and end-users. However, unlike other forms of IP infringement, in many cases it will be challenging even for the provider to unwind its training so as to make the model non-infringing, if that training is found to have breached others' rights.

LLMs may also embed unethical bias which, once the model is deployed, can drive unfair or discriminatory behaviours. Again, careful due diligence before entering into a contract is strongly advisable, and there are a number of respected tools, such as AI Fairness 360 and FairTest, which can be used to measure and mitigate such bias.
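
By way of illustration, the following is a minimal sketch of how IBM's open-source AI Fairness 360 toolkit (the aif360 Python package) can quantify bias in a set of model decisions; the toy data, column names and group coding are hypothetical:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy decision data: whether the system approved an application, split by
# a protected attribute (coded 1 = privileged group, 0 = unprivileged).
df = pd.DataFrame({
    "group":    [1, 1, 1, 1, 0, 0, 0, 0],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# A disparate impact well below 1 (0.8 is a common threshold) is a red flag.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())

A poor score would prompt further investigation; aif360 also ships mitigation algorithms that can be applied before, during or after training.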

In the vast majority of cases, the underlying LLM cannot be treated as a static asset to be checked only at the start of a procurement exercise. Most LLMs licensed in by a customer will evolve over time, and even the most thorough pre-contract diligence is no substitute for ongoing vigilance against problems arising. Some deployments require the model to work with unverified data, or expose it to manipulation by users acting with malicious intent; these deployments are likely to need closer monitoring than others for unusual behaviour and inappropriate output.
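
As a concrete illustration of that ongoing vigilance, a minimal Python sketch follows; the patterns, names and logging policy are hypothetical, standing in for the purpose-built monitoring a real deployment would use:

import logging
import re

# Hypothetical patterns only: a production deployment would use a proper
# PII/DLP detection service and rules tuned to its own data.
SUSPICIOUS_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "long_digit_run": re.compile(r"\b\d{12,19}\b"),  # card-number-like
}

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("ai_output_monitor")

def screen_output(response_text, request_id):
    # Log any response matching a suspicious pattern for human review.
    for name, pattern in SUSPICIOUS_PATTERNS.items():
        if pattern.search(response_text):
            log.warning("request %s flagged: %s detected", request_id, name)
    return response_text

screen_output("Your account contact is jane@example.com", "req-001")

In practice, flagged responses would feed a human review queue rather than merely a log file.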

Purchaser data

Beyond the LLM and other third-party datasets licensed in, many AI deployments will process a customer's own data in training and live use. If the customer holds third-party information which is subject to confidentiality obligations, it will be important that the implementation and contemplated use are permissible under all of those obligations. If the customer's data comes from a variety of sources and has been mixed, or if derivatives of third-party confidential information (such as summaries or abstracts) have been created and are accessible by the AI system, this exercise may be particularly challenging.

Equally, a customer should assure itself that its own interests in commercial confidentiality are respected by an AI solution. Third-party AI solution providers frequently analyse customer prompts to identify trends and make ongoing use of a customer's data to further train a general model. If not specifically controlled by contract, such analysis and training may allow a third party to gain insights into users' strategic plans or commercial positioning.

If it is possible that the system will process personal data, such processing must adhere to all applicable law. This may be more complex than with more traditional IT systems as many AI deployments include a substantial "black box" component making the processing opaque in a manner which potentially conflicts with legal requirements for transparency. Equally, unless the AI system's access to personal data is strictly segregated and controlled, it is easy to create new security risks, fall outside the relevant consent and legitimate purpose requirements, and create outputs which inappropriately expose training or live data. By their nature, many AI systems struggle to apply data minimisation principles.

A purchaser also needs to ensure that it is able to use the outputs of the system in the manner needed to meet the relevant business objectives. From a legal perspective, this means that, at the very least, it needs a broad licence. Restrictions on the uses to which the output can be put (for example, in terms of territory, duration, commercialisation, further adaptation or purpose) must be critically assessed against those objectives. However, a purchaser also needs to consider whether it should own the rights in the output it causes to be generated, so that others cannot access or use it.

In addition, AI systems are potentially subject to attacks from malicious users seeking to manipulate them into disclosing confidential or personal information. A key part of a procurement may therefore be to ensure appropriate data filtering, so that the trained model does not retain the purchaser's confidential information in a form that can be accessed and released, together with protections that allow the system to recognise and log malicious or inappropriate prompts.
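
A minimal, hypothetical sketch of such input-side protections might look like the following; the injection patterns, confidentiality markers and logging policy are all assumptions that a real deployment would replace with its own data classification scheme and threat model:

import logging
import re

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("prompt_filter")

# Hypothetical injection indicators and confidentiality markers.
INJECTION_HINTS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal .*(system prompt|training data)", re.IGNORECASE),
]
CONFIDENTIAL = re.compile(r"\[CONFIDENTIAL\].*?\[/CONFIDENTIAL\]", re.DOTALL)

def filter_prompt(prompt, user_id):
    # Reject and log injection-style prompts before they reach the model.
    for pattern in INJECTION_HINTS:
        if pattern.search(prompt):
            log.warning("user %s: prompt rejected (possible injection)", user_id)
            return None
    # Redact anything tagged confidential so it never reaches a
    # third-party model or its training pipeline.
    return CONFIDENTIAL.sub("[REDACTED]", prompt)

print(filter_prompt("Summarise [CONFIDENTIAL]Project X terms[/CONFIDENTIAL]", "u1"))

Output-side screening of the kind sketched earlier complements these input-side controls.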

The security and ethical precautions used in an AI deployment should be proportionate to the risks and designed into the solution. If the business objectives underlying the procurement demand a system which has direct access to confidential or personal information and which is open to unmediated interaction with the outside world, a higher level of control and monitoring will be required. Conversely, where a system is based on less sensitive commercially confidential information and supports only selected, trained employees of the customer who are able to discern whether the system is generating appropriate responses and apply judgment about how to use them, a more relaxed approach may be justifiable.

Avoiding lock-in

The capabilities of AI deployments, the systems with which they can be integrated and the diverse legal environments in which they operate are all evolving rapidly. In parallel, the economics of the marketplace are not yet clearly established - whilst the major providers of AI solutions or components are investing extraordinary sums in the underlying technology, in many cases their revenue model is not yet mature.

A number of financial analysts have voiced concerns that the big technology companies' level of investment will make it difficult for them to achieve acceptable rates of return from their customers. Correspondingly, those customers may well be concerned that the prices they currently enjoy under their contracts will rise dramatically once those contracts expire, even though the system's useful life continues.

Waiting for a more stable commercial and technical environment carries its own risks: competitive disadvantage, stifled AI and non-AI innovation while a better solution is awaited, unmet end-customer expectations of innovation, and a lost opportunity to learn which business processes and activities can best be adapted to take advantage of AI.

Each company will need to undertake its own assessment of the opportunities and risks involved, and align these with its own risk appetite, strategic goals and capacity for innovation. However, as with most technology procurements, it is prudent to minimise the risk of being locked into a specific vendor. Avoiding vendor lock-in requires careful thought in any given procurement, but is likely to involve demanding solutions which adopt widely accepted protocols and standards for each component, using open-source models where appropriate, ensuring practical data portability and investing in data migration planning. In this context, long-term contracts may be a double-edged sword: they may deliver pricing certainty for an extended period, but will correspondingly tend to inhibit the flexibility to move to another provider if, for example, the vendor ceases to keep pace with market innovation. In some cases, it may be useful to require a solution which is (as far as possible) cloud-agnostic.
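
One practical way to preserve that flexibility in the solution's architecture is a thin abstraction layer between the business logic and any particular vendor's model. The Python sketch below is illustrative only: the interface, the stub adapters and the example function are all hypothetical, with real vendor SDK calls to be substituted inside the adapters.

from abc import ABC, abstractmethod

class LLMClient(ABC):
    # Provider-neutral interface that the rest of the system codes against.
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class VendorAClient(LLMClient):
    # Stub standing in for a real vendor SDK call.
    def complete(self, prompt: str) -> str:
        return "[vendor A response to: " + prompt + "]"

class VendorBClient(LLMClient):
    # Swapping vendors touches only this adapter, not the business logic.
    def complete(self, prompt: str) -> str:
        return "[vendor B response to: " + prompt + "]"

def summarise_contract(client: LLMClient, contract_text: str) -> str:
    # Business logic depends only on the abstract interface.
    return client.complete("Summarise the key obligations in: " + contract_text)

print(summarise_contract(VendorAClient(), "example contract text"))

Keeping vendor-specific calls behind one narrow seam in this way also simplifies the data migration planning referred to above.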

However, if an AI deployment delivers a step change in operational efficiency for a purchaser then, after a relatively brief honeymoon, it may be extremely difficult to move away from AI solutions as a whole, even if it remains possible to move away from a specific solution and provider. A purchaser should therefore consider carefully at the outset which operational aspects can be delivered and supported by AI. That analysis should be supplemented by an understanding of what resources and skills will be needed to support the change, and the extent to which those skills should form part of the core business rather than being safely outsourced.

Each of the issues identified above - clarity of objectives, quality, tailoring of the IPRs sought and granted, and baking in an exit strategy structurally and contractually - applies to any form of technology procurement. However, the rapidly changing environment and possibilities of the current AI marketplace give them renewed emphasis.

This article was authored by Partner Dan Burge and Trainee Beata Kolodziej in Dentons' Technology, Media and Telecoms (TMT) practice.