
AI Agents - Meaning, Understanding, and Making Them Work

  • Writer: Rajasankar Viswanathan
  • 12 hours ago
  • 4 min read

How can you make agents understand precisely what you say and do exactly that?

This is the burning question for everyone working with agents. 


Why do agents, and by extension LLMs, misunderstand the prompt? Isn't AI supposed to understand clearly?


Nope. That is a fundamental problem of linguistics, not of AI.


It comes down to the meaning of a word.

When people read a word, how is its meaning understood?

By looking at the words nearby.


There is a hypothesis in linguistics called the distributional hypothesis which states exactly that. Today, the field is called distributional semantics.


Firth's famous 1957 quote, "a word is characterized by the company it keeps," was developed into statistical learning theory from a simple premise: "linguistic items with similar distributions have similar meanings."


To apply statistics to languages: take a sample, derive the statistical spread or distribution, and assume that words with similar distributions have similar meanings. That assumption, however, is the fundamental problem.


Why? 


Distribution is not context. Meaning is derived from context, not the other way around.


Almost a decade and a half ago, Google boasted that it had solved the translation problem by using distribution in a clever way. News articles were flooded with examples such as "king is to queen as man is to woman."


Then came Word2Vec- and GloVe-type papers, written to extend this argument into finding the meaning of words.
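The "king is to queen as man is to woman" argument boils down to simple vector arithmetic over word embeddings. Here is a minimal sketch with tiny hand-made vectors (purely illustrative — real Word2Vec/GloVe vectors have hundreds of dimensions learned from corpus distributions, not three dimensions assigned by hand):

```python
import math

# Toy 3-dimensional embeddings, hand-made for illustration only.
# Dimensions loosely encode: royalty, maleness, humanity.
vectors = {
    "king":  [0.9, 0.9, 0.8],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.9],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: the standard 'closeness' measure for embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The famous analogy: king - man + woman should land near queen.
analogy = [k - m + w for k, m, w in
           zip(vectors["king"], vectors["man"], vectors["woman"])]

nearest = max(vectors, key=lambda word: cosine(analogy, vectors[word]))
print(nearest)  # with these toy vectors: queen
```

The arithmetic works here because the vectors were constructed so that it would — which is exactly the point of the argument above: the analogy demonstrates regularities in the distribution, not understanding of meaning.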


I would even argue that the Transformer architecture, the foundation of the current AI story, is tied to this argument that distribution shows meaning.


Why do I call it an argument? Because the distributional hypothesis is still a hypothesis, not a proven theorem. Which is another way of saying it is a heuristic, not a logical or mathematical model.


Superimposing distribution onto meaning works. Sort of. 


It works sometimes, or even usually. Not always. Why it works sometimes is a topic for another post.


That is the problem. 


An enterprise agentic workflow must work always, not just usually.


"It works sometimes" is an easy way to bankrupt a business.  


Well, what does this actually mean in technical jargon?


Context window is the key. 


Instructions with short context will always be a problem. This has been known for a long time. When IBM released its chatbot several years ago, people discovered this quickly: a short question would get a bad reply, while a longer, more exhaustive question would get a slightly more accurate result.


So all you need is a better prompt that is longer and more descriptive?


Not so fast.


You will never know what the correct prompt is for an agentic action.


  1. "Don't delete my Inbox"

  2. "Keep the mails in Inbox"

  3. "Don't touch the mails in Inbox"

  4. "Keep the Inbox mails"

  5. "Leave mails in Inbox untouched"



Which one will be executed correctly? Your guess is as good as mine. Humans would understand that all these statements mean the same thing. Would AI see them the same way?
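A quick way to see why these paraphrases are hard for rule-based systems is to measure how little surface form they actually share. The sketch below (word-level Jaccard overlap — an illustrative measure, not anything the post's system prescribes) shows that the five phrasings barely overlap in their tokens, even though they mean the same thing:

```python
# The five equivalent instructions from the list above.
instructions = [
    "Don't delete my Inbox",
    "Keep the mails in Inbox",
    "Don't touch the mails in Inbox",
    "Keep the Inbox mails",
    "Leave mails in Inbox untouched",
]

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity: |intersection| / |union| of the word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Compare every phrasing against the first one: overlap is low across the board.
for text in instructions[1:]:
    print(f"{jaccard(instructions[0], text):.2f}  {text}")
```

Exact-match rules — the classic RPA approach — key on tokens, and the shared meaning here simply is not in the tokens. That gap between surface form and intent is what the rest of this post is about.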


The next, bigger problem arises when you write a longer prompt, or agentic action code with several prompts sequenced like this.


Previous solutions exist in the robotic process automation (RPA) domain.


All of them fall into the category of writing code: specific instructions for specific tasks.


From domain-specific languages to easy automation tools, it was always some sort of code. 


Precise, unambiguous instructions to the computer, which is the definition of code.


However, we are now in an AI era where things work sometimes. How can you make it work all the time?


A translation of all the instructions into a universal set of commands. 


A translation like this cannot be written by humans. You can't write for every possible variation of the instructions.


That would be a knowledge base of similar instructions or similar commands.

Essentially, a RAG kind of system for agentic AI execution.



Once you have this universal set of examples, how will the agentic AI work? 


A contextual reply from the agent that verifies the instructions.


A kind of eval for the agent before it executes the action.


Making this work in practice involves using contextual clustering for the public data. 


For the private data, i.e., the actions found in chats, logs, manuals, etc., clustering would be easier.


Once this knowledge base is created, AI agents must confirm against this base before executing.



Here is the full workflow:



  1. Contextual clustering of the legacy data

  2. Creating a knowledge base of actions

  3. Agentic AI is given instructions to perform an action

  4. The AI agent's response is confirmed against the knowledge base



If confirmed, the agent can execute; otherwise, refer it for approval.
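The confirm-or-refer step of the workflow can be sketched as a gate in front of execution. This is a minimal illustration under my own assumptions — the names (`ActionKB`, `ProposedAction`, `run_agent_step`) are hypothetical, and a real knowledge base built from clustered legacy data would match actions by context, not by an exact lookup:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action the agent wants to take, e.g. verb='archive', target='inbox'."""
    verb: str
    target: str

class ActionKB:
    """Knowledge base of known-safe actions, built from the clustered legacy data."""
    def __init__(self, allowed: set):
        self.allowed = allowed

    def confirm(self, action: ProposedAction) -> bool:
        # Step 4 of the workflow: check the agent's proposed action
        # against the knowledge base before anything executes.
        return (action.verb, action.target) in self.allowed

def run_agent_step(kb: ActionKB, action: ProposedAction) -> str:
    if kb.confirm(action):
        # Confirmed: the agent is allowed to execute.
        return f"EXECUTED {action.verb} {action.target}"
    # Not confirmed: route to a human for approval instead of executing.
    return f"PENDING_APPROVAL {action.verb} {action.target}"

kb = ActionKB(allowed={("archive", "inbox"), ("label", "inbox")})
print(run_agent_step(kb, ProposedAction("archive", "inbox")))  # executes
print(run_agent_step(kb, ProposedAction("delete", "inbox")))   # goes to approval
```

The design point is that the gate sits outside the LLM: the agent proposes, but only the knowledge base (or a human) disposes.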


Simply put, a RAG verification or eval system for ring-fencing AI agents so they perform within the boundaries.


This could be named Retrieval-Augmented Agents or Retrieval-Augmented Evals; Context-Verified Agents would be another description. I used RAG because people grasp the idea easily.


Finally, can you make the AI write these execution commands? That is not possible, because current LLMs can't extract these actions or commands without making errors or hallucinating.


NaturalText's symbolic AI, based on graphs, creates this Context Graph base for human understanding, process automation, and now LLM-based execution.


Comments are welcome.  



 
 
 
