Capgemini SE

09/24/2024 | News release | Distributed by Public on 09/24/2024 01:10

How a RAG-Based Custom LLM Can Transform Your Analysis Phase Journey


Hemank Lowe

24 Sep 2024

Gathering project requirements is laborious - and the results are often incomplete or inaccurate. Here's how Gen AI can make the process more efficient and comprehensive.

The cornerstone of any successful software development project is comprehensive requirements. But gathering, analyzing, documenting, and structuring requirements can be tedious, and the results are often laden with errors.

The traditional process for gathering requirements and documentation is manual, which makes it time-consuming and prone to inaccuracies, omissions, and inconsistencies. This can lead to miscommunication, missed requirements, and costly reworking needed later in the development process, all of which can impact the success of a project.

Here's a glimpse into how our team has been leveraging generative AI to improve the process of requirements gathering.

Taking a RAG approach

The retrieval-augmented generation (RAG) approach is a powerful technique that leverages the capabilities of Gen AI to make requirements engineering more efficient and effective.

What is Retrieval-Augmented Generation (RAG)?

According to Google Cloud, RAG is an AI framework that combines the strengths of traditional information retrieval systems (such as databases) with the capabilities of generative large language models (LLMs). By combining this external knowledge with its own language skills, the AI can produce text that is more accurate, up-to-date, and relevant to your specific needs.

As a Google Cloud Partner, we use the text-based Gemini 1.5 Pro large language model (LLM) in this instance. Gemini 1.5 Pro automates and enhances requirements engineering by pairing a retrieval system, which fetches relevant document chunks from a large knowledge base, with an LLM that generates answers to prompts using the information from those chunks.
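The retrieve-then-generate loop described above can be sketched in a few lines. This is a minimal illustration, not Capgemini's or Google's actual implementation: the toy knowledge base and the naive word-overlap scoring are assumptions standing in for a production vector store, and the resulting prompt would be sent to the generative model (e.g. Gemini 1.5 Pro via its API) rather than printed.

```python
import re

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query, chunk):
    """Naive relevance score: count of shared words.
    A real system would use embedding similarity instead."""
    return len(tokens(query) & tokens(chunk))

def retrieve(query, chunks, top_k=2):
    """Fetch the top_k most relevant document chunks."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:top_k]

def build_prompt(query, chunks, top_k=2):
    """Ground the LLM prompt in the retrieved context."""
    context = "\n".join(f"- {ch}" for ch in retrieve(query, chunks, top_k))
    return (
        "Using only the context below, answer the question.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Illustrative knowledge base of project-documentation chunks.
knowledge_base = [
    "Cloud Infrastructure Managers provision compute and networking resources.",
    "The HR portal supports leave requests and approvals.",
    "Monitoring dashboards track performance across geographically disparate regions.",
]

prompt = build_prompt(
    "What are the responsibilities of Cloud Infrastructure Managers?",
    knowledge_base,
)
# `prompt` is then passed to the LLM, which answers using the retrieved chunks.
```

Because the model only sees retrieved chunks rather than the whole corpus, the same pattern scales to knowledge bases far larger than any context window.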

This system excels at interpreting the nuances of human language, allowing it to grasp the true intent behind user inputs and project documentation. Its deep language understanding leads to more accurate and relevant requirements.

Unlike conventional tools that simply rephrase existing information, Gemini 1.5 Pro can generate entirely new content, drawing on its vast knowledge base and understanding of user needs. This fosters innovation and ensures the system caters to unforeseen scenarios.

Gemini 1.5 Pro automates a significant portion of the manual work involved in requirements analysis, saving time and resources. It can handle large volumes of data with ease, making it ideal for complex projects. It's also able to handle various document types, including Word, PDFs, CSV files, etc. It learns quickly with limited data, can reason through complex problems, leverages real-world knowledge, and transfers its learnings across tasks.
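Before any of those document types reach the model, they must be split into text chunks for the retrieval step. The sketch below shows one hedged way to dispatch on file type: plain-text and CSV handling use only the Python standard library, while Word and PDF would need third-party readers (such as python-docx or pypdf), which are omitted here.

```python
import csv
import pathlib

def load_chunks(path):
    """Split a document into text chunks suitable for retrieval,
    dispatching on the file extension."""
    p = pathlib.Path(path)
    if p.suffix == ".csv":
        # One chunk per row, columns joined into a readable line.
        with open(p, newline="") as f:
            return [", ".join(row) for row in csv.reader(f)]
    if p.suffix in {".txt", ".md"}:
        # One chunk per paragraph (blank-line separated).
        text = p.read_text()
        return [para.strip() for para in text.split("\n\n") if para.strip()]
    raise ValueError(f"No reader configured for {p.suffix!r}")
```

In practice each chunk would also carry metadata (source file, page, section) so generated requirements can be traced back to their origin.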

The following example illustrates a detailed response delivered through our RAG-based custom LLM, as compared to the response produced by a public LLM:

Prompt:
Create user stories on the characteristics of the cloud infrastructure within the conceptual framework such that it illustrates the responsibilities of Cloud Infrastructure Managers in maintaining and optimizing the abstracted compute and networking capabilities in geographically disparate instances.

RAG-based LLM response:
The user stories on the characteristics of the cloud infrastructure within the conceptual framework are as follows:
  • As a Cloud Infrastructure Manager, I want to be able to provision compute and networking resources in a geographically disparate manner so that I can meet the needs of my users.
  • As a Cloud Infrastructure Manager, I want to be able to monitor the performance of my cloud infrastructure so that I can ensure that it is meeting the needs of my users.
  • As a Cloud Infrastructure Manager, I want to be able to troubleshoot issues with my cloud infrastructure so that I can resolve them quickly and efficiently.

Public LLM response:
The user stories that illustrate the responsibilities of Cloud Infrastructure Managers in maintaining and optimizing the abstracted compute and networking capabilities in geographically disparate instances are as follows:
  • The Cloud Infrastructure Manager is responsible for provisioning compute and networking resources in a geographically disparate manner so that they can meet the needs of their users.
  • The Cloud Infrastructure Manager is responsible for monitoring the performance of their cloud infrastructure to ensure that it is meeting the needs of their users.
  • The Cloud Infrastructure Manager is responsible for troubleshooting issues with their cloud infrastructure to resolve them quickly and efficiently.
  • As a Cloud Infrastructure Manager, I want to be able to easily scale up or down the compute resources in my cloud infrastructure to meet changing demands, without having to worry about the underlying hardware.

As these examples clearly show, the benefits of a RAG-based approach with Gemini 1.5 Pro include the ability to capture a wider range of stakeholder needs and expectations, which leads to more complete requirements.

Automating manual processes involved in gathering and documenting requirements reduces time spent and minimizes errors. It also proactively identifies dependencies between requirements and ultimately generates well-defined requirements with clear metrics. Thanks to Gemini 1.5 Pro's ability to process large amounts of data, along with the scalability of Google Cloud, complex projects can be handled effectively.
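One way to automate that generation step is to wrap the retrieved chunks in a prompt that asks the model for well-formed user stories with measurable acceptance criteria and explicit dependencies. The template wording below is an illustrative assumption, not the exact prompt used in our solution.

```python
# Illustrative prompt template for turning retrieved requirement
# chunks into structured user stories.
STORY_TEMPLATE = """You are a requirements analyst.
From the context below, write user stories in the form
"As a <role>, I want <capability> so that <benefit>."
For each story, list measurable acceptance criteria and note any
dependency on another story.

Context:
{context}
"""

def make_story_prompt(chunks):
    """Render the template with one bullet per retrieved chunk."""
    context = "\n".join(f"- {c}" for c in chunks)
    return STORY_TEMPLATE.format(context=context)
```

Asking explicitly for acceptance criteria and dependencies is what turns free-form model output into requirements with clear metrics and traceable relationships.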

What are the pros and cons of using Gemini 1.5 Pro for RAG vs. other multimodal AI models?

Pros:

  • Enhanced accuracy and relevance: Gemini 1.5 Pro combines traditional information retrieval systems with generative large language models (LLMs), resulting in more accurate, up-to-date, and relevant text generation.
  • Deep language understanding: It excels at interpreting the nuances of human language, leading to more accurate and relevant requirements.
  • Innovation and flexibility: Unlike conventional tools, Gemini 1.5 Pro can generate entirely new content, fostering innovation and catering to unforeseen scenarios.
  • Efficiency: It automates a significant portion of the manual work involved in requirements analysis, saving time and resources.
  • Scalability: It can handle large volumes of data with ease, making it ideal for complex projects.
  • Versatility: It can process various document types, including Word, PDFs, and CSV files.

Cons:

  • Dependency on data quality: As with all AI models, the effectiveness of Gemini 1.5 Pro depends on the quality and comprehensiveness of the data it retrieves and processes.
  • Complexity: Again, as with all AI models, implementing and maintaining a RAG-based system with Gemini 1.5 Pro may require significant technical expertise and resources.

Use cases for a RAG-based approach to requirements engineering

The following use cases showcase two scenarios where we've found a RAG-based approach to requirements engineering to be valuable.

User research and behavior analysis

  • Use case: Analyze user interviews (audio and video data)
  • Benefit: Gain deeper insights into user needs, pain points, and behavior beyond what's explicitly stated. For example: identify frustration in user tone during interviews or observe user body language in videos.
  • Expected outcome: Develop software that better addresses user needs and provides a more intuitive user experience.
  • Challenge solved: Uncovers implicit user behavior and emotions not readily captured through traditional surveys or questionnaires.

Identifying operational inefficiencies

  • Use case: Analyze security footage, equipment operation videos, and employee training sessions (audio and video data) to generate the functional requirements, user stories, or epics for the solution.
  • Benefit: Identify and capture requirements for the potential security risks, inefficient processes, and areas for employee training improvement.
  • Expected outcome: Develop software that optimizes workflows, strengthens security protocols, and enhances employee training methods.
  • Challenge solved: Gain objective insights into operational processes by analyzing visual data alongside audio instructions or explanations.

Leveraging the cutting-edge capabilities of RAG and the Gemini 1.5 Pro LLM offers a promising solution to the challenges of traditional requirements engineering. Automating generation, improving accuracy and scope, and ensuring security and explainability revolutionize the way software requirements are gathered, documented, and managed, leading to a more efficient, effective, and secure end product.

Customers can connect with us for a detailed discussion of their modernization journey, where we can help with requirements analysis and the generation of functional requirements, epics, and user stories.

Capgemini Research: https://www.capgemini.com/insights/research-library/generative-ai-in-organizations-2024/

Author

Hemank Lowe

Senior Director - Global Google Cloud Sub Practice Leader for Cloud and Custom Applications

Hemank is an experienced cloud expert with over 5 years in Google Cloud architecture, generative AI, and data analytics, holding three Google certifications (Cloud Architect, Data Engineer, Cloud Digital Leader) and a Master's in Computer Applications. With 22+ years of diverse industry experience, he excels in delivering enterprise architecture solutions on the Google Cloud Platform and advocating for technology in business transformations.
