Approaches To AI RAG

ThinkAutomation provides multiple options for implementing RAG (Retrieval Augmented Generation) with AI. In simple terms, RAG means retrieving up-to-date or relevant information and supplying it to an AI model so it can answer a question about that information.

ThinkAutomation is purpose-built for RAG, combining seamless access to on-premises and cloud data sources - such as documents, emails, databases, and APIs - with the ability to perform real-time lookups across them. It also includes a built-in Web Chat message source, making it easy to add an interactive, AI-powered chat interface to your RAG workflows.

ThinkAutomation includes several built-in features that can be used to build RAG pipelines:

  • Embedded Knowledge Store
  • Embedded Vector Database
  • Full Text Search
  • Document-to-Text Conversion
  • AI Connector message source
  • Database and API-based lookups

The right method depends on your use case.

For example, the Knowledge Store and Vector Database offer semantic or fuzzy search - ideal for rich, unstructured content such as product manuals or knowledge articles, where you want results based on meaning rather than exact text matches. However, for structured data (like invoices or order records), these techniques may be less effective. In such cases, Full Text Search, SQL lookups, or API queries provide more accurate, context-specific results.
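The difference between fuzzy and exact matching can be shown with a toy sketch in plain Python (not ThinkAutomation itself). Real semantic search uses embedding models; here a crude word-overlap score stands in for similarity, while exact matching looks for the literal query phrase:

```python
import re

def tokens(text):
    # Lowercased word set; a stand-in for a real embedding
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def fuzzy_score(query, doc):
    # Jaccard word overlap as a toy proxy for semantic similarity
    q, d = tokens(query), tokens(doc)
    return len(q & d) / len(q | d)

docs = {
    "kb-1": "Password reset instructions: click 'forgot password' on the login page",
    "kb-2": "Invoice number 12345 was issued for order record 98765",
}

query = "How do I reset a forgotten password?"

# Exact matching finds nothing: the query phrase appears in neither document
exact_hits = [k for k, d in docs.items() if query.lower() in d.lower()]

# Fuzzy matching still ranks the password article first
best = max(docs, key=lambda k: fuzzy_score(query, docs[k]))
```

The same reasoning works in reverse: a question like "find invoice 12345" is served better by an exact match on the literal token `12345` than by meaning-based similarity.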

Use Cases

Below are some common use cases and the recommended approach for each:


1. Product Documentation/Knowledge Articles

Use the Embedded Knowledge Store if the total number of articles is fewer than 10,000. The Knowledge Store can be updated automatically using the Embedded Knowledge Store automation action, or maintained manually using the Embedded Knowledge Store Browser. To use it during an AI query, call the Ask AI automation action with the Add Context From A Knowledge Store Search operation. This adds relevant Knowledge Store entries to the context before the AI generates its response.

If your dataset exceeds 10,000 articles, use the Embedded Vector Database instead. Update it using the Embedded Vector Database action, and retrieve context using the Add Context From A Vector Database Search operation.
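The general pattern behind both operations is the same: rank stored entries by relevance to the question, then prepend the best matches to the prompt before the AI generates its answer. A plain-Python sketch (the scoring function and prompt layout are illustrative assumptions, not ThinkAutomation's internal implementation):

```python
import re

def overlap(question, article):
    # Toy relevance score: shared words (a stand-in for the embedding
    # similarity used by a real knowledge-store or vector search)
    q = set(re.findall(r"[a-z0-9]+", question.lower()))
    a = set(re.findall(r"[a-z0-9]+", article.lower()))
    return len(q & a)

def build_prompt(question, articles, score, top_k=1):
    # Rank entries by relevance and prepend the best matches to the
    # question, mirroring 'Add Context From A Knowledge Store Search'
    ranked = sorted(articles, key=lambda a: score(question, a), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}"

articles = [
    "Resetting your password: use the 'forgot password' link",
    "Shipping rates for international orders",
]
prompt = build_prompt("How do I reset my password?", articles, overlap)
```

The assembled prompt is what the AI model ultimately sees: retrieved context first, the user's question last.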

2. A Collection Of Documents Stored On Your File System

There are two main approaches you can use - either individually or together:

  1. File Pickup Source > Vector Database or Full Text Search
     Use the File Pickup message source to automatically process and index new documents as they are added to your file system. In the File Pickup automation, use the Embedded Vector Database action to add document contents to a vector database collection. This method works well for general, content-rich documents. However, if the documents mainly contain specific structured data (e.g., 'Invoice number 12345'), use the Full Text Search automation action instead.
  2. AI Connector Source > On-Demand Retrieval
     Use one or more AI Connector message sources to allow the AI itself to decide when it needs to call ThinkAutomation for additional context. In the AI Connector automation, locate the relevant document using the parameters supplied by the AI, and convert it to text using the Convert Document To Text action. The resulting text is then returned to the AI as dynamic context.
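The on-demand retrieval step in approach 2 boils down to: take the filename the AI supplied, locate it, and return its text. A minimal sketch in plain Python, assuming plain-text files only (the real Convert Document To Text action also handles formats such as PDF and DOCX):

```python
import pathlib
import tempfile

def fetch_document_context(doc_root, filename):
    # Locate the document requested by the AI and return its text,
    # standing in for the Convert Document To Text step
    path = pathlib.Path(doc_root) / filename
    if not path.is_file():
        return f"Document '{filename}' not found."
    return path.read_text(encoding="utf-8")

# Simulate a document folder with one file in it
root = tempfile.mkdtemp()
(pathlib.Path(root) / "manual.txt").write_text("Warranty period: 24 months")
context = fetch_document_context(root, "manual.txt")
```

Returning a clear "not found" message rather than failing lets the AI recover gracefully, for example by asking the user to clarify which document they meant.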

3. Data Stored In A Database

There are also two primary approaches for database-driven RAG:

  1. Natural Language Querying
     Use the Lookup From A Database Using AI action to translate a natural language question into a SQL query automatically. You can then pass the results to the Ask AI action using the Add Static Context operation.
  2. AI Connector with Direct Database Lookups
     Use AI Connector message sources to let the AI request specific data directly. In the corresponding automation, perform targeted database lookups using the parameter values supplied. Return the query results as text - which the AI will then use as contextual input.
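Approach 2 is essentially a parameterized query whose results are flattened into text. A sketch using SQLite (the schema and order-lookup query are hypothetical examples, not part of ThinkAutomation):

```python
import sqlite3

def lookup_order(conn, order_id):
    # Targeted lookup using the parameter value supplied by the AI;
    # rows are flattened to plain text so they can be returned as context
    rows = conn.execute(
        "SELECT product, quantity FROM orders WHERE order_id = ?", (order_id,)
    ).fetchall()
    if not rows:
        return f"No rows found for order {order_id}."
    return "\n".join(f"{product} x{quantity}" for product, quantity in rows)

# Hypothetical in-memory orders table for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, product TEXT, quantity INTEGER)")
conn.execute("INSERT INTO orders VALUES (1001, 'Widget', 3)")
context = lookup_order(conn, 1001)
```

Using a parameterized query (the `?` placeholder) rather than string concatenation matters here: the parameter value originates from the AI conversation, so it should never be interpolated directly into SQL.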

4. Data Obtained From An API Call

For real-time data retrieval via external systems or APIs:

  • Use one or more AI Connector message sources.
  • Within the related automation, use HTTP Get or HTTP Post actions to perform the API call.
  • Return the API results from the automation - these will be sent back to the AI as context for its response.
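The bullets above reduce to: call the endpoint, then flatten the response into text the AI can consume. A sketch where the HTTP call is stubbed out by a callable (the URL and response shape are hypothetical; in a real automation the fetch would be the HTTP Get action):

```python
import json

def api_context(fetch, url):
    # 'fetch' stands in for an HTTP Get action. The JSON body is
    # flattened to key/value lines so it can be returned as AI context
    body = fetch(url)
    data = json.loads(body)
    return "\n".join(f"{key}: {value}" for key, value in data.items())

# Stubbed fetch simulating a stock-level endpoint
fake_fetch = lambda url: '{"sku": "ABC-1", "in_stock": 42}'
context = api_context(fake_fetch, "https://example.com/api/stock/ABC-1")
```

Flattening JSON into labelled lines keeps the context readable to the model; passing raw JSON also works but spends more tokens on syntax.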

Combining RAG Techniques

You can combine multiple approaches in a single workflow. For example, a top-level automation can use Ask AI to determine which retrieval technique to use based on the question type, then Call a sub-automation to perform the lookup. The result can then be added to the parent automation’s conversation context before generating a final AI response.
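The routing idea can be sketched as a function that classifies the question and delegates to the matching retrieval path. The keyword-based classifier below is a deliberate simplification; as described above, a production workflow would use Ask AI to decide, and the sub-automations are represented here as plain callables:

```python
def route_and_retrieve(question, kb_search, sql_lookup):
    # Crude first-stage router: structured-data questions go to a SQL
    # lookup sub-automation, everything else to a knowledge-store search
    structured_terms = ("invoice", "order", "account number")
    if any(term in question.lower() for term in structured_terms):
        return sql_lookup(question)
    return kb_search(question)

# Hypothetical sub-automations standing in for Call actions
answer_context = route_and_retrieve(
    "What is the status of invoice 12345?",
    kb_search=lambda q: "KB: relevant article text",
    sql_lookup=lambda q: "SQL: invoice 12345 is paid",
)
```

Whichever path runs, its output is added to the parent conversation context, so the final Ask AI call sees a single merged view regardless of where the data came from.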

Professional Services

If you need help designing or optimizing your AI + RAG implementation, the Parker Software Professional Services team can assist with planning, configuration, or custom development. Contact Professional Services for more information.