
Enterprise search delivered poor results for a long time. The problem stems from the inability to find relevant results in disconnected private data: public web search worked because pages carry metadata links, but private documents lack those links.
When LLMs began to appear as a replacement for web search, enterprises started using them for internal search as well. However, LLMs hallucinate frequently; hallucination is a polite name for an LLM making up facts. To reduce hallucinations, vector/embedding-based retrieval results are fed to the LLM, an approach known as Retrieval-Augmented Generation (RAG).
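In outline, the generation step of embedding-based RAG simply prepends the retrieved passages to the model's prompt. This is a generic sketch, not any specific vendor's pipeline; the function name and prompt wording are illustrative assumptions:

```python
def build_rag_prompt(question, retrieved_docs):
    """Assemble a grounded prompt (hypothetical helper): retrieved
    passages are prepended so the LLM answers from them instead of
    inventing facts."""
    context = "\n\n".join(f"[{i + 1}] {doc}"
                          for i, doc in enumerate(retrieved_docs))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

Note that the quality of the answer is bounded by the quality of `retrieved_docs`, which is exactly where the problems below arise.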
There are several problems with this approach.
First, vectors/embeddings are generated by LLMs or other AI models, so the embeddings themselves may be hallucinated.
Second, k-Nearest Neighbor search scales poorly even with only hundreds of documents: it is memory- and compute-intensive.
Third, similar documents must be recalculated for every query, and there is no ground truth for deciding which documents are similar.
Fourth, current RAG does not work, or works poorly, for other formats such as spreadsheets, chats, and emails.
Fifth, LLM results cannot be evaluated, because verification would require another search that itself depends on embeddings from an LLM.
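To make the second problem concrete, here is a minimal brute-force k-NN sketch over embeddings. Every query must compare against every stored vector, so the cost is O(N x d) per query and all N vectors must stay in memory; the function names and toy vectors are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def knn_search(query_vec, doc_vecs, k=3):
    """Brute-force k-NN: each query scans every stored embedding,
    so per-query cost grows linearly with the corpus size and the
    embedding dimension."""
    scored = [(cosine_similarity(query_vec, vec), doc_id)
              for doc_id, vec in doc_vecs.items()]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:k]]
```

Approximate-nearest-neighbor indexes reduce this cost, but they trade away exactness and still inherit the other problems listed above.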
NaturalText Solution
NaturalRAG is based on a symbolic graph representation: no vectors or embeddings, and no dependence on LLM-based search.
NaturalText AI treats text as symbols, which retains meaning, context, and links.
NaturalRAG generates links between documents; these links serve as ground truth for finding relevant results.
NaturalRAG popularity scores can be stored in a database, which removes the computing time otherwise needed to generate results for every search query.
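As an illustration of the general idea (not NaturalRAG's actual implementation, which is not public), a document link graph can be reduced to precomputed popularity scores, stored once, and looked up at query time; the edge list, scoring rule, and table name here are all hypothetical:

```python
import sqlite3

# Hypothetical link graph: edge ("a", "b") means document a
# references document b. Popularity here is simply the in-link
# count, precomputed once rather than per query.
links = [("intro.md", "api.md"),
         ("faq.md", "api.md"),
         ("intro.md", "faq.md")]

popularity = {}
for _source, target in links:
    popularity[target] = popularity.get(target, 0) + 1

# Persist the scores so query-time work is a cheap indexed read.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (doc TEXT PRIMARY KEY, score INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)", popularity.items())

def lookup_score(doc):
    """Query time: a single lookup, not a full similarity scan."""
    row = conn.execute("SELECT score FROM scores WHERE doc = ?",
                       (doc,)).fetchone()
    return row[0] if row else 0
```

Because the scores come from explicit links rather than embedding distances, a human can inspect the edge list and verify why a document ranks where it does.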
Relevancy scores are human-readable and verifiable, so trust and security are built into the search.
Take a look at these blog posts for more information.