Prioritizing GenAI Use Cases – A Structured Approach

Introduction:

In applying Generative AI (GenAI) to complex business problems, the abundance of candidate use cases can be overwhelming, making it challenging to govern and prioritize which ones make the cut from idea to proof-of-concept (POC) and, ultimately, to an industrialized solution.  The capabilities of GenAI are broad enough to handle a great many use cases, so it should not be surprising that companies and organizations will typically have dozens (if not hundreds) of ideas for putting GenAI to work in their business.  But how do you prioritize those use cases through their lifecycle?  It's essential to think through the lifecycle of a generative AI project, from conception to launch, including how to build effective prompts and tailor the solution to specific requirements.

 

With over 40 years of experience in computer science applied to software engineering across various industries, I take a structured approach to problem-solving. Applying GenAI technology introduces a unique set of attributes that should be considered in your GenAI projects.

 

The approach I’m proposing here for prioritizing use cases is to define a set of attributes or key considerations for your organization and use them to weight and rank your use cases for priority (e.g., a highly regulated environment will give a higher weight to accuracy requirements). Iterating through the assignment of weights and rankings over time as new data and insights become available is crucial. Additionally, your team might weigh and rank attributes differently based on your unique needs and challenges, so feel free to expand or discard any of my suggested attributes/key considerations.

Sample Attributes/Key Considerations for GenAI Use Cases

The intent here is to have a comprehensive (enough) set of attributes/key considerations that would describe a set of generative AI use cases and guide the prioritization and development efforts.  These attributes/key considerations would be used to help teams prioritize potential use cases for implementation. 

I’m presenting a variety of ideas here, so consider this an output with a high temperature setting 😉.  For instance, Generative Adversarial Networks (GANs), although not client facing, could be a game changer for winning more sales: they can generate realistic documents or data to prove that a GenAI project is viable (especially useful when sensitive data is not available for dev/test purposes).

 

 

Name:

-        A brief, descriptive label for the use case.

o   Example: Performance Findings Reports

Priority:

-        The level of urgency or importance assigned to the use case.

o   High / Medium / Low

Description/Scope:

-        Detailed explanation of the use case, including well-defined intended users, workflow, data inputs/outputs, and specific objectives.

o   Example: <Company> <user type: DBA>s aim to use an LLM to <transform/augment> their work by automating the generation of <performance findings reports/risk analysis/remediation actions> that take into account <specific data: application change control timelines and maintenance window> for their <impacted applications>. 

Innovation Potential:

-        The extent to which the use case introduces novel or innovative solutions to the business and impacts the end-user experience.

o   High/Medium/Low

Business Impact/Value:

-        The anticipated strategic or financial value the use case will deliver.

o   High / Medium / Low  or  €|£|$ figures if known/estimated

Regulatory Compliance:

-        The degree to which the use case must adhere to industry regulations and standards.

o   High/Medium/Low

Cost to Implement GenAI Solution:

-        The estimated financial investment required to develop the solution.

o   High / Medium / Low  or   €|£|$ figures if known/estimated

Time required to Implement GenAI Solution:

-        The duration needed to develop and deploy the solution.

o   Weeks/months

Life-Cycle Status:

-        The current stage in the development and deployment lifecycle of the use case.

o   Idea|Concept / Data Prep / POC / Development / Testing / Launch

Custom Data Input with Prompt Engineering:

-        The inclusion of domain-specific data and the use of prompt engineering techniques. In-context learning through prompt engineering results in larger prompts; larger prompts may hit model context limits, which may push the solution toward RAG or fine-tuning approaches.

o    Y/N
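To make the context-limit concern concrete, here is a minimal sketch of in-context learning: domain data is embedded directly in the prompt, and a rough token estimate is checked against the model's context window. The 8K-token limit, the 4-characters-per-token heuristic, and the snippet contents are all illustrative assumptions; use your model's actual tokenizer and documented context window in practice.

```python
# Sketch: in-context learning by embedding domain data in the prompt.
# CONTEXT_WINDOW_TOKENS and the chars-per-token ratio are assumptions;
# check your model's real limits and tokenizer.

CONTEXT_WINDOW_TOKENS = 8_000  # assumed model limit

def build_prompt(task: str, domain_snippets: list[str]) -> str:
    """Concatenate domain data into the prompt (in-context learning)."""
    context = "\n---\n".join(domain_snippets)
    return f"Use the following reference data:\n{context}\n\nTask: {task}"

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return len(text) // 4

prompt = build_prompt(
    "Generate a performance findings report for the APP1 database.",
    ["AWR summary: ...", "Change-control window: Sat 02:00-06:00 UTC"],
)

if estimate_tokens(prompt) > CONTEXT_WINDOW_TOKENS:
    print("Prompt exceeds the context window; consider RAG or fine-tuning.")
```

If the assembled prompt routinely exceeds the window, that is the signal to move custom data out of the prompt and into retrieval (next attribute) or fine-tuning.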

RAG Required:

-        Whether Retrieval-Augmented Generation is necessary to incorporate custom data. RAG tends to be more expensive (at least to start) than in-context-learning through prompt engineering, but it is less expensive than Fine-Tuning a foundation model.

o   Y/N

Fine-Tuning:

-        The need to fine-tune the foundational AI model with specific, proprietary data. Fine-tuning of a foundation model may be required to incorporate custom/proprietary data, especially in cases where the specific domain context needs to be built into the foundation model for higher accuracy in accomplishing domain-specific tasks.

o   Y/N

Accuracy Requirements:

-        The necessity for the AI to provide accurate and reliable responses with minimal hallucinations.

o   High / Medium / Low

Human Verification of Output:

-        Whether the generated output requires verification by a human.

o   Y/N

Specific Security Considerations:

-        Any special security requirements or considerations for the use case.

o   Sensitive data

o   Role related access controls

o   Regulatory security requirements

External Client Focus:

-        Indicates whether the GenAI solution is intended for external clients or for internal development teams.

o   Y/N

Skills needed to Implement GenAI Solution:

-        The expertise required for developing and deploying the solution.

o   e.g. Technology Expert ; Domain Expertise ; Prompt Engineer ; RAG ; Model Fine-Tuning ; Model Training

Foundation Model Name or Type:

-        The specific AI model or architecture being utilized.

o   For example: GPT-4o / Cohere Command / Llama 2 70B

o   Generative

o   Generative Adversarial Networks (GANs)
[e.g. Realistic Document Generation using Generative Adversarial Networks]

o   Encoding

o   Translation

Application Type:

-        The primary functionality or category of the GenAI application.

o   Digital Assistant / Chat / Generative / Summarization / Translation

Cloud or Other Hosting:

-        The hosting environment for the GenAI application.

o   e.g. AWS/Google/Azure/OCI/multi cloud / On-prem or CoLo datacenter

Cost to operate GenAI Workload:

-        The ongoing operational costs associated with running the GenAI application.

o   €|£|$ per month figures if known/estimated

Scale of Custom/Proprietary Data:

-        The volume of custom or proprietary data that needs to be processed or managed.

o   Pages of documents / TB data / …

Source of Custom/Proprietary Data:

-        The origin of the data used for training/fine-tuning/RAG/prompting/operating the AI.

o   RDBMS, documents, e-mails, images, video, …

Scale of GenAI Workload:

-        The anticipated load on the GenAI system in terms of usage and data processing.

o   # of tokens per request

o   # of tokens per response

o   # active users at a given time

o   # requests/user/day [8 hour day; 24 hour day]
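The factors above combine into a simple back-of-the-envelope sizing calculation. All figures below (user counts, request rates, token counts, the 22 working days, and the blended per-token price) are illustrative assumptions; substitute your own estimates and your provider's actual pricing.

```python
# Back-of-the-envelope GenAI workload sizing. Every number here is an
# illustrative assumption, not a benchmark or a real price.

active_users = 50
requests_per_user_per_day = 20      # assumes an 8-hour working day
tokens_per_request = 1_500          # prompt plus injected context
tokens_per_response = 500

daily_tokens = active_users * requests_per_user_per_day * (
    tokens_per_request + tokens_per_response
)
monthly_tokens = daily_tokens * 22  # ~22 working days per month

price_per_1k_tokens = 0.002         # assumed blended $/1K tokens
monthly_cost = monthly_tokens / 1_000 * price_per_1k_tokens
print(f"{monthly_tokens:,} tokens/month, about ${monthly_cost:,.2f}/month")
```

Even a rough estimate like this feeds directly into the "Cost to operate GenAI Workload" attribute and helps compare hosted per-token pricing against dedicated capacity.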

AI Agents:

-        Whether the workflow incorporates autonomous AI agents to perform tasks.

o   Y/N

Making the Cut:

Many attributes are listed above, but if you dive deeper into the decision-making process, most of them come into play during the design/engineering process; only a select few are essential for prioritization decisions. 

The Essential Assessment Criteria:

I’ve selected the ones I believe are essential for the prioritization process:

Innovation Potential: The extent to which the use case introduces novel or innovative solutions to the business and impacts the end-user experience.

Business Impact/Value: The anticipated strategic or financial value the use case will deliver.

Cost to Implement GenAI Solution: The estimated financial investment required to develop the solution.

Cost to operate GenAI Workload: The ongoing operational costs associated with running the GenAI application.

Time required to Implement GenAI Solution: The duration needed to develop and deploy the solution.

Skills needed to Implement GenAI Solution: The expertise required for developing and deploying the solution.

Ranking:

What you need to do next is weight the criteria according to your business needs and assign a ranking to each use case under consideration.  The highest ranking goes to the use case that best fits a given criterion among the alternatives.
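The weighting-and-ranking step can be sketched in a few lines of Python: each criterion carries a business-defined weight, each use case gets a 1-5 score per criterion, and the priority is the weighted sum. The weights and scores below are hypothetical, loosely echoing the database-consulting example later in this post.

```python
# Hypothetical weights (sum to 1.0) and 1-5 scores per criterion.
# For cost/time criteria, a higher score means cheaper/faster.

weights = {
    "innovation": 0.15,
    "business_value": 0.30,
    "cost_to_implement": 0.20,
    "cost_to_operate": 0.10,
    "time_to_implement": 0.15,
    "skills_available": 0.10,
}

use_cases = {
    "Database Health Check Report": dict(
        innovation=4, business_value=5, cost_to_implement=4,
        cost_to_operate=4, time_to_implement=4, skills_available=5),
    "Assess Database Replatforming": dict(
        innovation=5, business_value=4, cost_to_implement=3,
        cost_to_operate=3, time_to_implement=3, skills_available=4),
    "Technical Document Generation": dict(
        innovation=3, business_value=4, cost_to_implement=4,
        cost_to_operate=4, time_to_implement=3, skills_available=3),
}

def priority(scores: dict) -> float:
    """Weighted sum of per-criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(use_cases, key=lambda n: priority(use_cases[n]), reverse=True)
for name in ranked:
    print(f"{priority(use_cases[name]):.2f}  {name}")
```

A spreadsheet does the same arithmetic, of course; the point is that the scoring model is trivial to automate once the weights reflect your business priorities.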

Sample Assessment:

In the hypothetical situation of evaluating GenAI use cases that would be of value to a database consulting practice to augment their services, I’ve created a sample prioritization matrix to help with the decision-making process.

N.B. A free Excel prioritization matrix template is available from the folks at Continuous Improvement Toolkit (citoolkit.com): https://citoolkit.com/templates/prioritization-matrix-template/ .



Sample GenAI use case prioritization matrix.

 

As you can see in this hypothetical assessment of priority, the Database Health Check Report use case came out on top, followed by Assess Database Replatforming and, hot on its heels, Technical Document Generation.

 

 

Conclusion:

You should make it your ambition to approach GenAI projects with structure and discipline. Perhaps you will find my suggestions for using a comprehensive set of attributes/key considerations useful. By adopting a structured approach, you can ensure that your GenAI initiatives are well-prioritized, effectively managed, and ultimately successful in delivering value to your organization or your client’s organization.
