Studying for the OCI GenAI Certification? Consider The Power of a Taxonomy of Terms

The aim of my “GenAI Demystified” series is to help a wide range of readers fill their AI toolboxes with not only tools but also knowledge, approaches, and a healthy dose of skepticism.  Equipped with such a “tool-kit” [a metaphor for a set of skills and knowledge], one can master the complex web of AI technologies.

With this overarching goal in mind, I built a taxonomy of terms during my OCI Generative AI certification training.  My intent was not only to help myself learn but also to inspire and help others take up this material.

An Introduction to Taxonomies:

In the rapidly evolving field of Generative AI (GenAI), particularly with the diverse offerings in Oracle's GenAI suite, a well-structured taxonomy or categorization of terms is indispensable for mastering the subject. In this article, we explore how taxonomies can organize this mass of information into something we can navigate and understand with precision. While studying for the OCI Generative AI Professional certification, I was struck by the sheer volume and complexity of new terms, which can be overwhelming for novices and seasoned professionals alike. Organizing these terms into a coherent taxonomy creates a structured framework that simplifies learning and comprehension. This framework serves as a roadmap, guiding learners through the intricate landscape of GenAI technologies, methodologies, and applications, and it provides a clear context for each term, facilitating a deeper understanding of the concepts and their interconnections.


Categorizing and organizing terms into specific categories and subcategories also aids in improving comprehension. When terms are grouped logically, such as under "Models," "Methods," "Frameworks," or "Tools," it becomes easier to see the bigger picture and understand how different components fit together. This organization helps learners to not only grasp individual concepts but also to appreciate the broader context within which these concepts operate. For example, understanding how "Fine-Tuning" relates to "Foundation Models" and "Parameter-Efficient Methods" can clarify the nuances of optimizing GenAI models.


Moreover, a well-structured taxonomy can reveal relationships and patterns that might not be immediately apparent when looking at individual terms in isolation. It can highlight how certain methods are interconnected, how specific tools support various processes, and how foundational models underpin multiple applications. This holistic view is crucial for identifying synergies and potential areas for innovation. For instance, recognizing the link between "Attention Mechanisms" and "Memory Networks" can inspire new approaches to enhancing model recall and performance.


In essence, imposing a taxonomy on the myriad of terms associated with GenAI and Oracle's offerings transforms a [perhaps] chaotic collection of concepts into an organized, navigable body of knowledge. It empowers learners to build a solid foundation, fosters advanced understanding, and encourages the discovery of new insights and relationships. This structured approach not only enhances individual learning but also supports collaborative efforts, enabling teams to communicate more effectively and innovate more rapidly in the dynamic field of GenAI.

The Taxonomy [or The Big Reveal]:

Here’s my disclaimer: since my taxonomy is essentially an arbitrary categorization imposed on the terms for a specific purpose (i.e., my own learning, and perhaps helping others), another person might suggest alternate categories and subcategories.  Please feel free to do so if you are so inclined [it won’t hurt my feelings 😉].

I have to say, just the process of building the taxonomy was instructive: as I re-categorized terms, I came to understand more fully what they were and how they relate to other terms.  I also came across terms like “Accuracy” that are overloaded, used in a variety of circumstances with slightly different meanings (e.g., fine-tuning, vector databases, prediction accuracy), so my taxonomy of terms did not turn out as “pure” as I had hoped [i.e., a taxonomy should ideally follow the MECE principle].

Mastering Taxonomy Design Using the MECE Principle

The MECE (Mutually Exclusive and Collectively Exhaustive) principle is a fundamental concept used in various fields, including data analysis, management consulting, and problem-solving, as well as in taxonomy design and development.

In the context of taxonomy development, the MECE principle states that categories within a taxonomy should be:

1. Mutually Exclusive: Each category should be distinct and not overlap with other categories. In other words, an item should only fit into one category and not belong to multiple categories simultaneously.

2. Collectively Exhaustive: The categories should encompass all possible options or scenarios, leaving no gaps or missing elements. Every item being classified should fit into at least one category within the taxonomy.

By adhering to the MECE principle, taxonomies become well-structured and comprehensive, ensuring that all elements of the data are accounted for without any redundancy or ambiguity. This principle helps in maintaining clarity, consistency, and accuracy when organizing and categorizing information.

Applying the MECE principle in taxonomy development aids in effective analysis/interpretation, as it allows for precise classification, easy retrieval, and reliable comparisons between different categories. It also enhances communication and understanding among stakeholders by providing a clear framework for organizing and discussing information in a systematic and comprehensive manner.
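To make this concrete, here is a minimal Python sketch of a MECE check over a term list. The function and sample data are my own invention (not from any library): it flags terms that appear in more than one category (violating mutual exclusivity) and terms left out entirely (violating collective exhaustiveness) — exactly the kind of “Accuracy” overload I ran into above.

```python
# Toy MECE check over taxonomy entries shaped like the JSON at the end
# of this post; mece_report and the sample data are hypothetical.
from collections import defaultdict

def mece_report(entries, all_terms):
    categories_by_term = defaultdict(set)
    for e in entries:
        categories_by_term[e["Term"]].add(e["Category"])

    # Mutually Exclusive: no term should map to more than one category.
    overlapping = {t: cats for t, cats in categories_by_term.items() if len(cats) > 1}
    # Collectively Exhaustive: every known term should appear somewhere.
    missing = set(all_terms) - set(categories_by_term)
    return overlapping, missing

entries = [
    {"Category": "Model", "Term": "Accuracy"},
    {"Category": "Other Terms", "Term": "Accuracy"},  # overloaded term
]
overlapping, missing = mece_report(entries, all_terms={"Accuracy", "Temperature"})
print(overlapping)  # {'Accuracy': {'Model', 'Other Terms'}} -> not mutually exclusive
print(missing)      # {'Temperature'} -> not collectively exhaustive
```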


The Categories:

Here are my categories, along with the thought process for including each:

Architecture: Refers to the overall design and structure of AI systems, including the organization and interaction of various components, frameworks, and models within a generative AI environment.

Decoding: Involves the processes and techniques used to interpret and generate text from models. This includes various mechanisms, parameter settings, and strategies that influence the output quality and relevance.
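As a quick illustration of this category, here is a minimal sketch of how greedy decoding and the temperature setting behave over a toy next-token distribution (the logits and vocabulary are invented for illustration):

```python
import numpy as np

vocab = ["cat", "dog", "car"]
logits = np.array([2.0, 1.0, 0.2])   # made-up model scores for the three tokens

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Greedy decoding: deterministically pick the highest-probability token.
greedy_token = vocab[int(np.argmax(logits))]

# Temperature sampling: divide logits by T before the softmax.
# Lower T sharpens the distribution; higher T flattens it, making
# less likely tokens more probable ("more creative" output).
T = 0.7
probs = softmax(logits / T)
sampled_token = np.random.choice(vocab, p=probs)

print(greedy_token, sampled_token, probs.round(3))
```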

Embedding Model: Models that convert text or data into numerical vectors, allowing for similarity measurements and further processing. This category includes different types of similarity measures, use cases, and databases that store these vectors.
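To ground the similarity measures named here, a short sketch with made-up four-dimensional vectors (real embedding models emit hundreds to thousands of dimensions):

```python
import numpy as np

a = np.array([0.2, 0.8, 0.1, 0.4])   # pretend embedding of "puppy"
b = np.array([0.3, 0.7, 0.0, 0.5])   # pretend embedding of "dog"

dot = float(np.dot(a, b))                                # dot-product similarity
cosine = dot / (np.linalg.norm(a) * np.linalg.norm(b))   # cosine similarity in [-1, 1]

print(f"dot={dot:.3f}, cosine={cosine:.3f}")  # close vectors -> cosine near 1
```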

Encoding: Processes and techniques used to transform input data into a specific format or representation, such as embeddings, that can be utilized by models for various tasks.

Foundation Model: Large pre-trained models that serve as a base for fine-tuning and adaptation to specific tasks. These models include different types of architectures like encoder-only, decoder-only, and encoder-decoder models.

Framework: Comprehensive systems or collections of tools and libraries designed to facilitate the development and implementation of generative AI models, such as those used for retrieval-augmented generation (RAG).

Implementation: Strategies and methodologies employed to apply AI models and systems in practical scenarios. This includes specific approaches like chunking to manage and process data efficiently.
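A minimal sketch of the chunking strategy mentioned above, using a naive fixed-size split with overlap (chunk_text is a hypothetical helper and the sizes are arbitrary; production pipelines usually split on sentence or token boundaries):

```python
# Naive fixed-size chunking with overlap between consecutive chunks,
# so context straddling a boundary is not lost.
def chunk_text(text, chunk_size=200, overlap=40):
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

doc = "OCI Generative AI offers dedicated AI clusters. " * 50
chunks = chunk_text(doc)
print(len(chunks), "chunks; first chunk:", chunks[0][:60], "...")
```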

Infrastructure: The computational and hardware resources necessary to support AI model development, training, and deployment. This includes types of compute clusters, GPU instances, and other dedicated resources.

LangChain: A specific framework and set of tools designed to facilitate the creation and management of applications built on language models. This category includes various components, prompt-creation tools, and wrapper classes.
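For a flavor of the prompt-creation tools in this category, a hedged sketch (LangChain import paths vary by version; this follows the langchain_core layout):

```python
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate

# A string prompt template with one input variable.
prompt = PromptTemplate.from_template("Summarize the following text:\n{text}")
print(prompt.format(text="OCI GenAI offers dedicated AI clusters..."))

# A chat prompt template built from system and human messages.
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful OCI study assistant."),
    ("human", "Explain {term} in one sentence."),
])
print(chat_prompt.format_messages(term="RAG"))
```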

Methods: Various approaches and techniques used in AI model development, fine-tuning, and prompting. This includes fine-tuning methods, prompt engineering, and other specific methodologies to enhance model performance.
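As a small example of the prompting methods listed here, the difference between zero-shot and k-shot prompting lies entirely in how the prompt string is built — the model's weights never change (the reviews and labels below are invented):

```python
# Zero-shot: the task is described, but no examples are given.
zero_shot = "Classify the sentiment of: 'The cluster setup was painless.'"

# K-shot (here 2-shot): a few worked examples precede the real input.
# With in-context learning, the model infers the task format from them.
k_shot = """Classify the sentiment of each review.
Review: 'Training kept failing.' Sentiment: negative
Review: 'Fine-tuning was fast and cheap.' Sentiment: positive
Review: 'The cluster setup was painless.' Sentiment:"""

print(k_shot)
```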

Model: Refers to the different types of AI models, their training processes, generation mechanisms, and output characteristics. This includes model types, training parameters, and generation methods.

OCI Application Development: Tools and frameworks provided by Oracle Cloud Infrastructure (OCI) for developing and deploying AI applications. This includes various development tools and configurations specific to OCI.

Other Terms: Miscellaneous terms that are relevant to AI but do not fit neatly into other specific categories. This includes concepts related to model access, prediction accuracy, and other mechanisms.

Programming Languages: Languages and specific tools or libraries used in programming and developing AI models. This includes general-purpose languages like Python and specific libraries like LangChain.
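The two Python string-formatting idioms that appear in this category, side by side:

```python
term, category = "Temperature", "Decoding"

# f-string: expressions interpolated inline (Python 3.6+)
print(f"{term} belongs to the {category} category.")

# str.format: placeholders filled by position (or by name)
print("{} belongs to the {} category.".format(term, category))
```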

Training: The process and methodologies involved in training AI models, including fine-tuning methods and hyperparameter settings. This category focuses on the specifics of model training and adaptation.

Vanilla Fine-Tuning Configuration: Basic or standard configurations and hyperparameters used for fine-tuning AI models. This includes common settings and practices applied during the fine-tuning process.

Vector Database: Databases designed to store and manage high-dimensional vectors generated by embedding models. This includes capabilities like searching, indexing, and examples of such databases.
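As an illustration of the indexing and searching capabilities, a minimal FAISS sketch (one of the example databases in my list; the dimension and data are made up):

```python
import faiss
import numpy as np

d = 8                                          # embedding dimension
xb = np.random.rand(100, d).astype("float32")  # 100 stored "document" vectors
xq = np.random.rand(1, d).astype("float32")    # 1 query vector

index = faiss.IndexFlatL2(d)   # exact L2-distance index
index.add(xb)                  # indexing capability
D, I = index.search(xq, 3)     # searching capability: 3 nearest neighbors
print(I, D)                    # ids and distances of the top-3 matches
```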


Table of Categories and Subcategories:

Go to the end of this post for a JSON listing of the categories, subcategories, and terms I used.
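If you save that JSON listing as a file (here assumed to be named taxonomy.json), a few lines of Python will regroup it into a category/subcategory table:

```python
import json
from collections import defaultdict

# taxonomy.json is a hypothetical filename for the listing at the end of this post.
with open("taxonomy.json") as f:
    entries = json.load(f)

table = defaultdict(list)
for e in entries:
    table[(e["Category"], e["SubCategory"])].append(e["Term"])

for (cat, sub), terms in sorted(table.items()):
    print(f"{cat} / {sub}: {', '.join(terms)}")
```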

Conclusion

In conclusion, a well-organized taxonomy of terms is an invaluable tool for demystifying the intricate landscape of Generative AI (GenAI). By categorizing and structuring the myriad concepts and technologies, a taxonomy provides a clear and systematic framework that enhances our understanding and mastery of this rapidly evolving field. It acts as a roadmap, guiding both novices and seasoned professionals through the dense and often overwhelming terminology, ensuring that each term is not just understood in isolation but also in the context of its relationship with other terms.


Moreover, a comprehensive taxonomy fosters better communication and collaboration within the AI community. It creates a common language that bridges the gap between different domains and specialties, enabling more effective knowledge sharing and innovation. Whether you are working on cutting-edge research, developing new AI applications, or simply trying to stay informed about the latest advancements, having a structured framework of terms can significantly enhance your ability to grasp and apply complex AI concepts.


In practice, leveraging a taxonomy can streamline your learning process and improve your ability to quickly locate relevant information. It allows you to identify patterns, draw connections between related concepts, and develop a more cohesive understanding of how various AI technologies interoperate. This not only accelerates your learning curve but also empowers you to contribute more effectively to the field.


So, next time you find yourself navigating the dense jungle of AI terminology, remember the power of taxonomy. It is not just a tool for organization but a strategic guide that can illuminate your path through the complexities of GenAI, helping you to stay ahead in this dynamic and transformative domain. Embrace the structure and clarity that a well-constructed taxonomy brings, and use it to unlock deeper insights and greater proficiency in the world of Generative AI.


Table of Categories and Subcategories:

[

{ "Category": "Architecture", "SubCategory": "LangChain", "Term": "Retrievers" },

{ "Category": "Decoding", "SubCategory": "Framework", "Term": "RAG" },

{ "Category": "Decoding", "SubCategory": "Mechanism", "Term": "Greedy Decoding" },

{ "Category": "Decoding", "SubCategory": "Mechanism", "Term": "Self-Attention" },

{ "Category": "Decoding", "SubCategory": "Parameter Setting", "Term": "Temperature" },

{ "Category": "Embedding Model", "SubCategory": "Similarity", "Term": "Cosine" },

{ "Category": "Embedding Model", "SubCategory": "Similarity", "Term": "Dot Product" },

{ "Category": "Embedding Model", "SubCategory": "Similarity", "Term": "Semantic" },

{ "Category": "Embedding Model", "SubCategory": "Use Case", "Term": "RAG" },

{ "Category": "Embedding Model", "SubCategory": "Vector Database", "Term": "Examples" },

{ "Category": "Embedding Model", "SubCategory": "Vector Database", "Term": "Vector" },

{ "Category": "Encoding", "SubCategory": "Embedding", "Term": "Cosine Distance" },

{ "Category": "Encoding", "SubCategory": "Embedding", "Term": "Dimension" },

{ "Category": "Foundation Model", "SubCategory": "Decoder", "Term": "BLOOM" },

{ "Category": "Foundation Model", "SubCategory": "Decoder", "Term": "Codex" },

{ "Category": "Foundation Model", "SubCategory": "Decoder", "Term": "Command" },

{ "Category": "Foundation Model", "SubCategory": "Decoder", "Term": "Command-Light" },

{ "Category": "Foundation Model", "SubCategory": "Decoder", "Term": "Copilot" },

{ "Category": "Foundation Model", "SubCategory": "Decoder", "Term": "GPT-3" },

{ "Category": "Foundation Model", "SubCategory": "Decoder", "Term": "GPT-4" },

{ "Category": "Foundation Model", "SubCategory": "Decoder", "Term": "LLaMA 2" },

{ "Category": "Foundation Model", "SubCategory": "Encoder", "Term": "BERT" },

{ "Category": "Foundation Model", "SubCategory": "Encoder", "Term": "RoBERTa" },

{ "Category": "Foundation Model", "SubCategory": "Encoder-Decoder", "Term": "FLAN-UL2" },

{ "Category": "Foundation Model", "SubCategory": "Encoder-Decoder", "Term": "T5" },

{ "Category": "Foundation Model", "SubCategory": "Encoder-Decoder", "Term": "UL2" },

{ "Category": "Foundation Model", "SubCategory": "Processing", "Term": "Layers" },

{ "Category": "Foundation Model", "SubCategory": "Processing", "Term": "Recall" },

{ "Category": "Framework", "SubCategory": "RAG", "Term": "Answer Relevance" },

{ "Category": "Framework", "SubCategory": "RAG", "Term": "RAG Sequence Model" },

{ "Category": "Implementation", "SubCategory": "Strategy", "Term": "Chunking" },

{ "Category": "Infrastructure", "SubCategory": "Compute", "Term": "Dedicated AI Cluster" },

{ "Category": "Infrastructure", "SubCategory": "Compute", "Term": "GPU Instances" },

{ "Category": "Infrastructure", "SubCategory": "Compute", "Term": "RDMA Superclusters" },

{ "Category": "Infrastructure", "SubCategory": "Compute Cluster Type", "Term": "Fine-tuning Cluster" },

{ "Category": "Infrastructure", "SubCategory": "Dedicated Cluster Unit Types", "Term": "Embed Cohere Unit Size" },

{ "Category": "Infrastructure", "SubCategory": "Dedicated Cluster Unit Types", "Term": "Endpoint Capacity" },

{ "Category": "Infrastructure", "SubCategory": "Dedicated Cluster Unit Types", "Term": "Fine Tuning" },

{ "Category": "Infrastructure", "SubCategory": "Dedicated Cluster Unit Types", "Term": "Hosting" },

{ "Category": "Infrastructure", "SubCategory": "Dedicated Cluster Unit Types", "Term": "Large Cohere Unit Size" },

{ "Category": "Infrastructure", "SubCategory": "Dedicated Cluster Unit Types", "Term": "Llama 2-70 Unit Size" },

{ "Category": "Infrastructure", "SubCategory": "Dedicated Cluster Unit Types", "Term": "Service Limits" },

{ "Category": "Infrastructure", "SubCategory": "Dedicated Cluster Unit Types", "Term": "Small Cohere Unit Size" },

{ "Category": "LangChain", "SubCategory": "Components", "Term": "Chains" },

{ "Category": "LangChain", "SubCategory": "Prompt Creation", "Term": "Chat Prompt Template" },

{ "Category": "LangChain", "SubCategory": "Prompt Creation", "Term": "Prompt Templates" },

{ "Category": "LangChain", "SubCategory": "Prompt Creation", "Term": "String Prompt Templates" },

{ "Category": "LangChain", "SubCategory": "Wrapper Class", "Term": "Langchain_Community" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "AI Alignment" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "Continual Pretraining" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "Domain Adaptation" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "Fine-Tuning" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "LORA - Low Rank Adaptation" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "Reinforcement Learning" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "RLHF" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "Soft Prompting" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "T-Few" },

{ "Category": "Methods", "SubCategory": "Fine-Tuning", "Term": "Vanilla" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Chain-of-Thought" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "In-Context Learning" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Jailbreaking" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "K-Shot" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Least-to-Most" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Llama2 Prompt Formats" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "MPT-Instruct" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Prompt Engineering" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Prompt Injection" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Prompting" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Step-Back" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Three-Shot" },

{ "Category": "Methods", "SubCategory": "Prompting", "Term": "Zero-Shot" },

{ "Category": "Model", "SubCategory": "Fine-Tuning", "Term": "Accuracy" },

{ "Category": "Model", "SubCategory": "Generation", "Term": "Auto-Regressive Decoding" },

{ "Category": "Model", "SubCategory": "Generation", "Term": "Diffusion Models" },

{ "Category": "Model", "SubCategory": "Generation", "Term": "Summarization Models" },

{ "Category": "Model", "SubCategory": "Output", "Term": "Grounding" },

{ "Category": "Model", "SubCategory": "Output", "Term": "Token" },

{ "Category": "Model", "SubCategory": "Training", "Term": "Parameters" },

{ "Category": "Model", "SubCategory": "Training", "Term": "Pretraining" },

{ "Category": "Model", "SubCategory": "Training", "Term": "Training" },

{ "Category": "Model", "SubCategory": "Training", "Term": "Training Costs" },

{ "Category": "Model", "SubCategory": "Type", "Term": "Decoder" },

{ "Category": "Model", "SubCategory": "Type", "Term": "Embedding Models" },

{ "Category": "Model", "SubCategory": "Type", "Term": "Encoder" },

{ "Category": "Model", "SubCategory": "Type", "Term": "Encoder-Decoder" },

{ "Category": "Model", "SubCategory": "Type", "Term": "Generation Models" },

{ "Category": "Model", "SubCategory": "Type", "Term": "MPT-Instruct" },

{ "Category": "Model", "SubCategory": "Vector Databases", "Term": "Accuracy" },

{ "Category": "OCI Application Development", "SubCategory": "Tool", "Term": "Chroma DB" },

{ "Category": "OCI Application Development", "SubCategory": "Tool", "Term": "LangChain" },

{ "Category": "OCI Application Development", "SubCategory": "Tool", "Term": "LangSmith" },

{ "Category": "OCI Application Development", "SubCategory": "Tool", "Term": "OCI Configuration File" },

{ "Category": "OCI Application Development", "SubCategory": "Tool", "Term": "Pydantic" },

{ "Category": "OCI Application Development", "SubCategory": "Tool", "Term": "PyPDF" },

{ "Category": "OCI Application Development", "SubCategory": "Tool", "Term": "Streamlit" },

{ "Category": "OCI Application Development", "SubCategory": "Tool", "Term": "Wise" },

{ "Category": "Other Terms", "SubCategory": "Model Access", "Term": "Endpoint" },

{ "Category": "Other Terms", "SubCategory": "Model Mechanism", "Term": "Presence Penalty" },

{ "Category": "Other Terms", "SubCategory": "Prediction Accuracy", "Term": "Accuracy" },

{ "Category": "Other Terms", "SubCategory": "Prediction Accuracy", "Term": "Groundedness" },

{ "Category": "Other Terms", "SubCategory": "Prediction Accuracy", "Term": "Loss" },

{ "Category": "Programming Languages", "SubCategory": "LangChain", "Term": "Components" },

{ "Category": "Programming Languages", "SubCategory": "LangChain", "Term": "LECL" },

{ "Category": "Programming Languages", "SubCategory": "Python", "Term": "F-Strings" },

{ "Category": "Programming Languages", "SubCategory": "Python", "Term": "Str.Format" },

{ "Category": "Training", "SubCategory": "Fine-Tuning", "Term": "Parameter Efficient Fine-Tuning Methods" },

{ "Category": "Vanilla Fine-Tuning Configuration", "SubCategory": "Fine-Tuning Hyperparameters", "Term": "Number of Last Layers" },

{ "Category": "Vector Database", "SubCategory": "Capabilities", "Term": "Searching" },

{ "Category": "Vector Database", "SubCategory": "Example", "Term": "Chroma DB" },

{ "Category": "Vector Database", "SubCategory": "Example", "Term": "FAISS" },

{ "Category": "Vector Database", "SubCategory": "Method", "Term": "Indexing" }

]

