Supercharge how you interact with proprietary documents with LiQA

By Predli
September 25, 2023

Introduction:

The advent of large language models (LLMs) like OpenAI’s GPT family and Meta’s open-source LLaMA, among others, is poised to dramatically change the landscape of information retrieval. These powerful AI systems have an unparalleled ability to understand natural language and generate highly relevant responses. In this blog post, we will explore how pairing LLMs with personalized knowledge bases can enhance question answering and information search.

While the breadth of knowledge encoded in LLMs is vast, there remain significant gaps. For any given individual or organization, an LLM lacks the crucial context about that person's or organization's history, interests, and preferences needed to deliver truly personalized responses. This is where integrations with personal vector databases come in.

By maintaining structured data profiles for each user, these personal databases can fill in the missing context to augment LLMs. They can store preferences, behavior history, relationships, and other rich personalized data. The vector format allows this knowledge to be rapidly queried and incorporated into LLM inference.

Together, the combination enables smarter question answering tailored to each user. LLMs provide expansive world knowledge and inference capabilities, while personal databases supply the specifics needed to filter and personalize the responses.

To harness the power of LLMs with personal databases, we at Predli have developed LiQA. LiQA is an exciting new enhancement for QA systems that improves answer accuracy by considering the context of proprietary documents.

How LiQA Works:

Since this model runs on proprietary documents, we parse each uploaded file according to its format, ensuring that document extraction accounts for the varied range of modalities in the dataset. Once we remove the noise and have the data in a clean format, we split the document into semantic chunks with appropriate metadata for improved querying. These chunks are then stored in ChromaDB, where a vector embedding is created for each fragment to capture its semantic meaning and indexed in the vector knowledge base.
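The sketch below illustrates this ingestion step using the chromadb Python client. The file name, collection name, embedding model, and the simple fixed-size splitter are illustrative assumptions rather than LiQA's actual implementation (LiQA's own splitter is semantic).

```python
import chromadb
from chromadb.utils import embedding_functions

# Persistent local index; path and collection name are hypothetical.
client = chromadb.PersistentClient(path="./liqa_index")
embedder = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name="all-MiniLM-L6-v2"  # assumed embedding model
)
collection = client.get_or_create_collection(
    name="proprietary_docs", embedding_function=embedder
)

def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Naive fixed-size chunking with overlap; stands in for semantic splitting."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

# Assume format-specific parsing and cleaning have already produced plain text.
cleaned_text = open("manual.txt", encoding="utf-8").read()
chunks = chunk_text(cleaned_text)

# Each chunk is embedded by the collection's embedding function and indexed
# together with metadata that supports filtered querying later on.
collection.add(
    documents=chunks,
    metadatas=[{"source": "manual.txt", "chunk": i} for i in range(len(chunks))],
    ids=[f"manual-{i}" for i in range(len(chunks))],
)
```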

Once a question is asked, it is queried against the knowledge base for the most relevant document fragments that can provide context to answer the question. The maximal marginal relevance (MMR) search algorithm we use is then able to match the intent of the question to the vectors of the document fragments to retrieve the most useful information. MMR is optimal for our needs because it balances relevance and diversity - it returns fragments that are both similar to the question and different from each other. This avoids repetitive results by covering multiple aspects of the query. MMR also lets us tune the relevance-diversity tradeoff to prefer more on-topic extracts depending on the question.
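To make the relevance-diversity tradeoff concrete, here is a minimal, from-scratch sketch of MMR re-ranking over candidate chunk embeddings; the lambda_mult parameter is the tuning knob mentioned above. This is a generic illustration of the algorithm, not LiQA's exact retrieval code.

```python
import numpy as np

def mmr(query_vec, candidate_vecs, k=4, lambda_mult=0.7):
    """Pick k candidates that are relevant to the query yet dissimilar to each
    other. lambda_mult=1.0 means pure relevance, 0.0 means pure diversity."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-10))

    relevance = [cosine(query_vec, c) for c in candidate_vecs]
    selected: list[int] = []
    while len(selected) < min(k, len(candidate_vecs)):
        best_idx, best_score = -1, float("-inf")
        for i, cand in enumerate(candidate_vecs):
            if i in selected:
                continue
            # Penalize candidates that resemble fragments we already chose.
            redundancy = max((cosine(cand, candidate_vecs[j]) for j in selected),
                             default=0.0)
            score = lambda_mult * relevance[i] - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected  # indices into candidate_vecs, best first
```

In practice, the candidates would be the top results returned by the vector store for the question embedding, and the selected fragments are what gets passed on to the language model.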

For example, if we have a vector database of a product catalogue, and a question is asked about a particular product feature, the algorithm will locate fragments from technical specifications, user manuals, support documents, and other materials that provide details on that feature. By supplying these relevant extracts to the language model, we ensure that it has the background information needed to compose an accurate and complete response.
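Continuing that example, a retrieval call against the indexed catalogue might look like the sketch below. The question text and metadata fields are made up for illustration, and the same embedding function used at ingestion time must be supplied when reopening the collection.

```python
import chromadb
from chromadb.utils import embedding_functions

client = chromadb.PersistentClient(path="./liqa_index")  # hypothetical path
embedder = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name="all-MiniLM-L6-v2"  # must match the ingestion-time embedder
)
collection = client.get_collection("proprietary_docs", embedding_function=embedder)

# Fetch candidate fragments for a product-feature question; these would then
# be re-ranked with MMR before being handed to the language model.
results = collection.query(
    query_texts=["What is the battery life of the X200 in continuous-use mode?"],
    n_results=8,
)
for doc, meta in zip(results["documents"][0], results["metadatas"][0]):
    print(f"{meta['source']} (chunk {meta['chunk']}): {doc[:80]}...")
```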

The language model then reviews the retrieved fragments and synthesizes the key points into a natural language answer. It filters out redundant or irrelevant information and summarizes lengthy excerpts into concise, human-readable responses, giving users just the essence of what they need to know.
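A hedged sketch of this synthesis step is shown below: the retrieved fragments are placed in the prompt and the model is instructed to answer only from them. The model name, prompt wording, and function names are assumptions for illustration, not LiQA's exact configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def synthesize_answer(question: str, fragments: list[str]) -> str:
    """Compose a concise answer grounded in the retrieved fragments."""
    context = "\n\n".join(f"[{i + 1}] {frag}" for i, frag in enumerate(fragments))
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer the question using only the provided context. "
                    "Summarize rather than quote, and say so if the context "
                    "is insufficient."
                ),
            },
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```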

A key advantage of this approach is that the knowledge base continuously expands as new documents are ingested. So, the depth of knowledge available to answer questions grows steadily over time. The vector search is also able to account for slight differences in wording or intent between the user's question and the indexed documents. This allows a broad range of inquiries to be addressed even when there is no exact keyword match.

System Architecture of LiQA


Key Benefits of LiQA:

Increased efficiency - The semantic search rapidly identifies the most salient fragments to answer the question, eliminating the need for lengthy document review. This allows users to find information faster.

Improved accuracy - The model can fill in gaps based on background information extracted from technical materials related to the question topic. This boosts precision by reducing guesses or assumptions.

Highly scalable - As new proprietary documents are ingested, they are seamlessly encoded into the ever-expanding knowledge base. This allows the range of supported topics to grow steadily without major retraining required.

Works with any existing QA system - LiQA integrates seamlessly with virtually any question answering or chatbot framework. The document ingestion and vector indexing comprise a self-contained pipeline that feeds contextual information to downstream models, as sketched below.
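As a rough illustration of that integration point, the whole pipeline can be exposed to a downstream QA system or chatbot as a single context-retrieval call, along the lines of the hypothetical wrapper below (collection and file names carried over from the ingestion sketch).

```python
import chromadb
from chromadb.utils import embedding_functions

# Reopen the index built during ingestion (names are hypothetical).
_client = chromadb.PersistentClient(path="./liqa_index")
_collection = _client.get_collection(
    "proprietary_docs",
    embedding_function=embedding_functions.SentenceTransformerEmbeddingFunction(
        model_name="all-MiniLM-L6-v2"
    ),
)

def retrieve_context(question: str, k: int = 4) -> list[str]:
    """Hypothetical integration surface: a downstream QA system or chatbot
    passes in a question and gets back the top-k context fragments, keeping
    its own prompting and answer generation untouched."""
    results = _collection.query(query_texts=[question], n_results=k)
    return results["documents"][0]

# Example: an existing chatbot prepends these fragments to its own prompt
# before calling whatever language model it already uses.
fragments = retrieve_context("How do I reset the X200 to factory settings?")
```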

Upcoming Features:

By expanding our capabilities in areas such as more precise document retrieval, expanded knowledge sources, alternate ML methods, relationship modeling, privacy protection, and external search integration, LiQA will become an even more advanced and flexible enterprise question answering solution. Each of these represents an exciting way to enhance accuracy and value.

The team at Predli is proud to be driving this revolution in enterprise conversational AI. Just imagine having your own Iron Man-esque Jarvis able to pull up any detail at your command. LiQA makes this a reality today. We can't wait for you to experience the future of search and unlock the potential of your knowledge assets. Let us know if you would like to see LiQA in action!
