AI Automation Actions

ThinkAutomation includes a powerful set of AI automation actions that allow your workflows to analyze, classify, summarize, extract, and generate text or data using both cloud and on-premises AI models. These actions enable you to send prompts to AI models, add and manage custom context from your documents and databases, perform natural language database queries, score and classify sentiment, and build intelligent chat or email responder workflows. By combining AI with the rest of your automation logic, you can create context-aware bots, automated responders, intelligent routing workflows, and data extraction pipelines - while maintaining full control over context, security, and data privacy.

ThinkAutomation includes built-in support for the following AI providers: ChatGPT, Azure OpenAI, Grok, Google Gemini, Claude or OptimaGPT.

Ask AI

Send a prompt to OpenAI ChatGPT, Azure OpenAI, xAI Grok, Google Gemini, Claude or a local OptimaGPT server, and assign the response to a variable. You can send single one-off prompts or prompts that are part of a conversation. The ThinkAutomation Ask AI action enables you to automate AI requests and then use the response further in your Automation.

Before you can use this action you must set up an AI Provider in the ThinkAutomation Server Settings - AI Providers section. See: AI Providers.

See OptimaGPT if you prefer to use an on-premises or private-cloud AI server.

From the AI Provider list, select one of your configured AI Providers.

Specify the Operation:

  • Ask AI To Respond To A Prompt
  • Add Context To A Conversation
  • Clear Conversation Context
  • Ask AI To Respond To A Prompt With An Image Or PDF Document

Ask AI To Respond To A Prompt

This operation is used to send a prompt to an AI. The response to the prompt can be assigned to a variable, which can then be used further in the automation workflow.

The System Message is optional. This can help to set the behavior of the assistant. For example: 'You are a helpful assistant'.

The Prompt is the text you want the AI to respond to. This can be any sort of AI prompt and can contain %variables%.

Examples:

What category is the email below? Is it sales, marketing, support or spam?  
Respond with 'sales', 'marketing', 'support' or 'spam' only.  
If it is a support email and it appears to be urgent, respond with 'support:urgent'.  

Subject: %Msg_Subject%  
%Msg_Digest%  

Response: sales

Extract the name and mailing address from this email:

Dear Kelly, 

It was great to talk to you at the seminar. I thought Jane's talk was good.  
Thank you for the book. Here's my address 2111 Ash Lane, Crestview CA 92002  

Best,  

Maya

Response:  
Name: Maya  
Mailing Address: 2111 Ash Lane, Crestview CA 92002

I am flying from Manchester (UK) to Orlando.
What are the airport codes? Respond with just the codes separated by comma.   

Response:  
MAN,MCO  

Prompts have a limit of approximately 30,000 words (depending on the AI Model).

Tip: When using AI to analyze incoming emails, you can use the %Msg_Digest% built-in variable instead of %Msg_Body%. The %Msg_Digest% variable contains the last reply text only, with all blank lines and extra whitespace removed. It is also trimmed to the first 750 characters. This is usually enough to categorize the text and will ensure the prompt does not exceed the token limit.

Tip: If you want to use AI to automatically respond to incoming emails, you should use the %Msg_LastReplyBody% built-in variable instead of %Msg_Body%. The %Msg_LastReplyBody% variable contains only the current reply, without any quoted text and previous replies. This ensures that only the current message, and not the entire email thread, is sent.
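
For example, an automated email responder prompt might look like the following (a minimal sketch - the wording and word limit are illustrative, not a required format):

Write a polite reply to the email below, using only the context provided
in this conversation. Keep the reply under 150 words.

Subject: %Msg_Subject%
%Msg_LastReplyBody%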

For the OpenAI AI Provider you can optionally enable Allow AI To Perform Web Searches. If this option is enabled then the AI may choose (depending on the prompt text) to perform a web search. The results of the search will then be used as context to answer the question. This is useful for accessing up-to-date information. You can limit the search to a specific set of domains by entering one or more domains in the Only Domains entry (leave blank for no filtering).

Response

Specify the variable to receive the AI response from the Assign Response To list.

You can also optionally assign the number of tokens used for the prompt/response. Select the variable to receive the tokens used from the Assign Used Token Count To list. AI provider charges are based on tokens used.

Conversations

You can optionally specify a Conversation Id. This is useful if multiple AI requests will be made and you want to include previous prompts/responses for context, or if you want to add your own context prior to asking the AI for a response.

The Conversation Id can be any text. For example, setting it to %Msg_FromEmail% will link any requests for the same incoming email address. The built-in variable %Msg_ConversationId% can be used for the Conversation Id. This is a hash of the from/to addresses and subject.

The Max Conversation Lines entry controls the maximum number of previous prompt/response pairs that are included with each request. For example, if the Max Conversation Lines is set to 25 then the last (most recent) 25 prompt/response pairs will be sent prior to the current prompt. As the conversation grows, the oldest items will be removed to prevent the total prompt text from exceeding the AI token limit.

Conversations are shared by all Automations within a Solution and conversation lines older than 48 hours are removed.

For example:

Suppose you send 'What is the capital city of France?' in one prompt and receive a response. If you then send another separate prompt of 'What is the population?' with the same conversation id then you will receive a correct response about the population of Paris because the AI already knows the context. This would work across multiple Automation executions for up to 48 hours, as long as the conversation id is the same.

Add Context To A Conversation

You can add context to a conversation. Context is used to help the AI give the correct answer to a question based on the context you have provided. This can be static text, or you can search articles related to the incoming message from the Embedded Knowledge Store or Embedded Vector Database, and add the most relevant articles. Using the Ask AI action along with the Add Context operations enables you to create bots that can answer business specific questions - even when the AI itself has no knowledge of the subject matter.

Using the Embedded Knowledge Store enables you to add text and documents (PDF, Word, HTML etc) as articles to a Knowledge Store collection. You can then search the knowledge store using the incoming message as the Search Text. This will return the Top x most relevant articles from the specified Knowledge Store Collection.

The Relevancy Threshold setting controls the relevancy level. Articles below the relevancy % will not be included. This value defaults to 20%.

The Return Max Tokens entry should be set to a value lower than the maximum tokens that the AI model supports. It is recommended to keep this at approximately 50% of the model maximum. For example, if you will be using the gpt-4o-mini model, this model allows 128,000 tokens, so the Return Max Tokens entry should be no higher than 64,000. This will ensure that there are enough tokens remaining for the prompt text and the AI response itself.

The returned articles will each be added as context, allowing the AI to use this information when responding to the incoming message.

If you want to add a specific article from a Knowledge Store collection, rather than relying on a search (for example, based on specific keywords found in the incoming message), use the Embedded Knowledge Store action to look up a specific article (using the Get operation). Set the Return As to Json, assign the returned article to a %variable%, then add this variable to the Add Static Context tab.
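
For example, if the Get operation assigned the article Json to a variable named %ProductArticle% (a hypothetical name used here for illustration), the Add Static Context text could be:

Use the following product article (in Json format) when answering
questions about this product:

%ProductArticle%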

Top Title/Tag

You can optionally assign the Top Title and/or Top Tag to variables. The first (most relevant) article Title and Tag will be assigned. This can be used further in your Automation. For example: Suppose we have a number of articles relating to 'pricing', 'quotes' or 'sales'. We could tag all of these with a 'sales' tag. At the end of the Automation we could check the Top Tag %variable%. If this is equal to 'sales' we could send an email to the sales team informing them that someone has asked a sales-related question, and include the question and AI response in the email.

Using Top Tag For Dynamic Context

You can also use the Top Tag for dynamic context. For example: Suppose we have an article titled 'What is the current weather?' and article text set to:

If the user asks what the current weather conditions are, 
you may answer using the details provided.

The Top Tag is set to 'WEATHER'.

If the user asks what the current weather is, your article will most likely be the top result.

In your automation, you check the top tag value after the Add Context To Conversation operation. If the top tag is equal to 'WEATHER', perform an API lookup to obtain the current weather conditions, and use the Add Static Context operation to add the details. The AI will then be able to answer using live information.
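
For example, the static context added after the API lookup might read as follows (the %Location%, %Conditions% and %TempC% variables are illustrative - use whatever variables your lookup returns):

The current weather in %Location% is %Conditions% with a temperature of %TempC%°C.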

You can search the Embedded Vector Database to return the Top x most relevant items. The text for each record returned will be added as context. The Relevancy Threshold and Return Max Tokens can be used as described above.

You can search a Full Text Search collection to return the Top x matches. The text for each record returned will be added as context.

Add Static Context

You can also add Static Context. This can be any text (which can contain %variable% replacements). This can be used to provide default context, for example:

You are a very enthusiastic representative working at {your company}. 
Given the following sections from our documentation, answer the user's 
question using only that information, outputted in markdown format. 

If you are unsure and the answer is not explicitly written in the 
documentation, say "Sorry, I don't know how to help with that."

You should always add some default context. Use it to tell the AI who and what it is and how it should respond. You can also provide some general information about your business.

For email responder bots you can use default context such as:

Your name is '{bot name}' and you are a very enthusiastic representative 
working at {your company} answering emails. Given the provided sections 
from our documentation, answer the question using only that information, 
outputted in markdown format.

If an answer cannot be found in the information provided, respond with 
'I cannot help with that' only.  
Do not try and answer the question if the information is not provided.  

Add a friendly greeting and sign off message to your response. 
Your email address is '{bot email address}'.

My email address is %Msg_FromEmail%

This tells the AI that it's responding to emails and to include a greeting and sign off message with its response.

If the Required option is enabled then the context text will remain in the conversation. This option can be used to ensure that the default or important context is always part of the conversation.

Adding Context From Other Sources

You could also look up static context via a database or web lookup. For example: if the customer provides an email address at the start of the chat, or you are responding to incoming emails, you could look up customer and accounting/order information and add this to the context in case the customer asks about outstanding orders. Or you could look up the current service status if the user wants live status information.
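
For example, after a database lookup you could add static context such as the following (the %CustomerName% and %OpenOrders% variables are illustrative):

The customer you are talking to is %CustomerName%.
Their outstanding orders are listed below in CSV format:

%OpenOrders%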

Adding Context On Demand Using MCP

If you are using the OpenAI, Azure AI or OptimaGPT providers, then you can also create multiple AI Connector message sources within the same solution. If any AI Connector message sources are created, then these will be made available to the AI automatically. The AI may choose to call one of the AI Connector message sources during a conversation to obtain additional context.

How Is Context Used

Regardless of how the context is added, the same context won't be added to a conversation if the conversation already has it. So you can add standard context (for example, general information about your business) along with searched-for context within your Automation prior to asking the AI for a response.

You can add multiple Ask AI - Add Context To A Conversation actions in your Automation prior to the Ask AI - Ask AI To Respond To A Prompt action.

For example: Suppose you have a company chat bot on your website using the Web Chat message source. A user asks 'What are the benefits of widgets, and can you tell me the current price?'. You first add default context, you then do a knowledge base search with the Search Text set to the incoming question. This adds the most relevant articles relating to widgets to the conversation as context. If the incoming message contains 'widgets', you could then do a database lookup to get the current price for Widgets and add 'The current price for Widgets is %Price%' as static context. AI will then be able to answer the user's questions from the context you provided.

The context itself does not appear in the chat; it is only added to the prompt sent to the AI to provide context to help the AI answer that specific question. The benefit of this is that you can use the standard AI models without training - and you can provide up-to-date context by keeping your local knowledge base updated or looking up context from your own database. It allows you to quickly create working bots, maintain up-to-date information, and harness the full potential of AI while maintaining control over your data and data privacy.

Note: AI providers have limits on tokens per request. For example, the OpenAI GPT-4o model has a limit of 128,000 tokens per request. Typically a token corresponds to about 4 characters of text. The token count includes the prompt text, any context added, and the AI response itself. ThinkAutomation automatically removes older context to ensure the token limit is not exceeded. If you add too much context, some of it may not be included.
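
As a rough guide, assuming the typical 4 characters per token:

128,000 tokens x ~4 characters per token = approximately 512,000 characters
(roughly 90,000-100,000 English words of combined prompt, context and response)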

Adding Tabular Context

You can add tabular context to a conversation. A user can then ask questions relating to the data.

For example, you could lookup invoices for a customer (based on the email address provided) from a database and return the data in CSV format:

invoice_number,invoice_date,product,amount_due
INV-2023351,2023-01-05,Plain Widgets,1500.00
INV-2023387,2023-01-10,Orange Niblets,2500.00
INV-2023421,2023-01-15,Flat Widgets,1800.00
INV-2023479,2023-01-20,Flat Widgets,3500.00
INV-2023521,2023-01-25,Round Niblets,1200.00

You would assign the CSV data to a %variable% and then add Static Context:

Given the following list of invoices in CSV format for the user, 
answer questions about this data. The 'amount_due' column gives the 
outstanding balance for the invoice in dollars.

%CSVData%

The chat user could then ask questions such as:

'What is the date of my most recent invoice?' or 'Can you show me a list of my invoices?'

When adding context as tabular data, you need to precede the data with a clear instruction of what the data is. You may need to experiment with the prompt text to ensure the AI responds correctly.

You can use the Lookup From A Database Using AI action to perform database lookups using natural language queries. The results can be returned in CSV format and then added as context.
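
For example, you could run a Lookup From A Database Using AI action with a natural language query such as the one below, assign the CSV result to a variable (called %InvoiceCSV% here for illustration), and then add it as static context in the same way as the example above:

List the unpaid invoices for the customer with the email address %Msg_FromEmail%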

Clear Conversation Context

This operation will clear any Context added to a conversation. Specify the Conversation Id.

Ask AI To Respond To A Prompt With An Image Or PDF Document

This operation can be used to ask a question about an image file or PDF document. For images, you can use a local file or a URL. The System Message is optional. This can help to set the behavior of the assistant. For example: 'You are a helpful assistant'.

The Prompt is the question you want to ask about the provided image or PDF.

In the Image Or PDF Path entry, specify a local file path or URL for the image. You can use %variable% replacements. The following image types are supported: PNG, JPEG, WEBP & GIF.

Select the variable to receive the response from the Assign Response To list.

Examples:

You can ask general questions about the image, such as 'What is in this image?' or 'Is this image an animal?'. You can also perform OCR, for example: 'Convert the image to text.', or 'Convert this PDF document to text', or 'The image is a receipt. What is the total paid and the tax?'.

Rate Limits

Your AI provider will set a rate limit for the maximum requests per minute. ThinkAutomation will retry the request if a rate limit error is returned. It will automatically increase the wait time for each retry. The default wait period is 30 seconds. If the request still fails after the retries then an error will be raised.

If your Ask AI action is timing out then your AI account may be rate limited. For OpenAI, you can increase the rate limit by adding pre-payment funds to your OpenAI account. This will move your account to the next usage tier. For example, adding a $50 pre-payment will move your account to usage tier 2 - which has a higher rate limit and faster responses. See: OpenAI Rate Limits for more information.

AI Automation Use Cases

AI can be used with ThinkAutomation in many ways. Beyond acting as a regular chat bot with knowledge of many subjects, you can use it to:

  • Create a Chat Bot using the Web Chat Message Source type that can answer specific questions about your business by adding private context using the Embedded Knowledge Store.
  • Provide automated responses to incoming emails or SMS messages, utilizing the Embedded Knowledge Store.
  • Create a Teams Message Source type so that Microsoft Teams users can ask your bot questions.
  • Create a Chat Bot Automation that uses the API Message Source type. This can be the same type of Automation that you would use for the Web Chat Message Source type - but called via the API. This would allow you to use ThinkAutomation from your own chat web app or external chat apps/products.
  • Parse unstructured text and extract key information.
  • Summarize text.
  • Anonymize text.
  • Classify text.
  • Translate text.
  • Analyze sentiment.
  • Correct grammar/spelling.
  • Convert natural language into code (SQL, PowerShell etc) and then use the response further in your automation.
  • Ask questions about images or extract text from images.

Additional Examples: Creating An AI Powered Chat Bot


Lookup From A Database Using AI

The Lookup From A Database Using AI action can be used to query a database using natural language. A prompt is sent to an AI along with your database schema. The AI will convert the natural language query into a parameterized SQL SELECT statement. The SQL SELECT statement is then executed against your database. The returned data can then be assigned to a variable.

Before you can use this action you must set up an AI Provider in the ThinkAutomation Server Settings - AI Providers section. See: AI Providers.

Select a Database Type to connect to from the list. You must specify a Connection String that ThinkAutomation will use to open the database. Click the ... button to build the connection string. Click the Test button to verify that ThinkAutomation can connect to the database.

In the Natural Language Query entry specify the query text (or a %variable%). This can be any text that describes the data you want to retrieve, for example:

I need a list of employees located in the United States hired within
the last 12 months. Show the list in date order. 

Specify the Max Rows to read. A value should be specified here, to prevent very large queries from executing.

From the Return As list, select the format to return the data:

  • Markdown table
  • CSV Text
  • Json Array

Database Schema

In the Database Schema entry, you must list the SQL CREATE statements for your database tables. Click the Load Schema button to automatically load the schema for all tables in the selected database. Each table and index will be added. It is recommended to add comments to each table describing what the table does and how it links to other tables - this will help the AI generate queries.

For example:

-- Employee table (Contains a record for each employee)
CREATE TABLE Employee(Id INTEGER,LastName VARCHAR,FirstName VARCHAR,Title VARCHAR,
BirthDate DATE,HireDate DATE,Address VARCHAR,City VARCHAR,Region VARCHAR,
PostalCode VARCHAR,Country VARCHAR,HomePhone VARCHAR,Notes VARCHAR);
CREATE UNIQUE INDEX PK_Employee ON Employee (Id);
-- EmployeeTerritory table (Contains a record for each territory) 
-- An employee can have multiple territories. 
-- The EmployeeId column links to the Id column on the Employee table.
CREATE TABLE EmployeeTerritory(Id VARCHAR,EmployeeId INTEGER,TerritoryName VARCHAR);
CREATE UNIQUE INDEX EmployeeTerritory_1 ON EmployeeTerritory (Id);

The Prompt Text entry can be used to adjust the prompt that will be sent to the AI. In most cases you will not need to change this.

Click the Test button to test queries. For example: The above natural language query would generate the following SQL query:

SELECT LastName, FirstName, Title, HireDate FROM Employee 
WHERE Country = @Country AND HireDate >= date('now', '-12 months') 
ORDER BY HireDate LIMIT 100;  
ParamName=@Country,ParamType=VARCHAR,ParamValue="United States"

The generated SQL will use the correct syntax for the selected database type.

Most of the time the AI can figure out what a column is based on its name (i.e. it will know that the 'CompanyName' column is used for the company name). However, for vague column names, or where the value represents something specific, you should add comments to the schema text. For example:

-- Movie Reviews table (a record for each movie review)
-- The score column is between 1 and 100 - with 100 being the highest rated.
-- The country column is the 2 character ISO country code (in upper case) where the movie was made.
-- The orig_lang is the language that the movie is in (eg: 'English')
-- The crew column lists the actors and the parts they played. eg: 'Tom Hanks, Otto Anderson, Mariana Treviño, Marisol' means: Tom Hanks played Otto Anderson and Mariana Treviño played Marisol.
CREATE TABLE [Reviews] (
  [name] TEXT,
  [date_x] DATETIME,
  [score] REAL,
  [genre] TEXT,
  [overview] TEXT,
  [crew] TEXT,
  [orig_title] TEXT,
  [orig_lang] TEXT,
  [budget_x] REAL,
  [revenue] REAL,
  [country] TEXT
);
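
For example, against the schema above a natural language query such as 'Show the 10 highest scoring English language movies made in the US' might generate SQL similar to the following (the exact statement will vary by database type and AI model):

SELECT name, score, genre FROM Reviews
WHERE country = @Country AND orig_lang = @OrigLang
ORDER BY score DESC LIMIT 10;
ParamName=@Country,ParamType=TEXT,ParamValue="US"
ParamName=@OrigLang,ParamType=TEXT,ParamValue="English"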

From the AI Provider list, select the AI provider to use.

You can optionally specify a Conversation Id. This is useful if multiple AI requests will be made and you want to include previous prompts/responses for context. For example, if the Lookup From A Database Using AI action is used within a general AI Conversation workflow. The previous question/responses will be added to the prompt. This will provide better results when follow-on questions are asked.

The Conversation Id can be any text. For example, setting it to %Msg_FromEmail% will link any requests for the same incoming email address. The built-in variable %Msg_ConversationId% can be used for the Conversation Id. This is a hash of the from/to addresses and subject.

Select a %variable% to receive the returned database rows from the Assign Response list. You can also optionally assign the generated SQL statement to a variable selected from the Assign Generated SQL To list.

How It Works

When the action is executed, ThinkAutomation sends the prompt to your selected AI provider. The prompt includes the database schema and the user query, and asks the AI to generate a SQL statement based on them. The returned SQL statement is then executed against your database and the returned rows are assigned to the selected variable in either Markdown, CSV, or Json format.

The prompt directs the AI to use parameters, to protect against SQL injection.
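
The prompt is broadly structured as shown below. This is a simplified illustration of the approach, not the exact wording ThinkAutomation sends (which you can view and adjust via the Prompt Text entry):

You are a SQL generator for a {database type} database.
Using only the schema below, convert the user's question into a single
parameterized SELECT statement, returning at most {max rows} rows.
Return the SQL statement and the parameter names, types and values.

Schema:
{database schema}

Question:
{natural language query}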

Use Cases

This action can be used to provide a simple front end for natural language database queries. Used in conjunction with the web chat message source - a user can ask a question and the database results can be returned to the chat in markdown format.

This action could also be used along with the Ask AI action - to provide database context as part of a larger AI workflow. The rows returned in Markdown format can then be used to add context to a conversation - allowing the AI to further process database results.

See Also: Using AI To Chat With Your Own Databases - Example


Score Sentiment

Performs Sentiment Analysis on any text and returns the Sentiment Score to a variable. The ThinkAutomation Sentiment Analyzer is able to detect if text is either positive or negative in sentiment or any other yes/no, positive/negative construct. You could then use the result to perform specific actions, for example, to send alert emails or SMS texts if an incoming email contains a high negative sentiment score.

Before Sentiment Analysis can work the ThinkAutomation Sentiment Analyzer must be 'trained'. The Sentiment Analyzer accuracy will improve the more it is trained. See: Train Sentiment

In the Get Sentiment Score For entry enter the text you want to analyze. Any text can be entered including %variable% replacements. To analyze the incoming message body set the value to %Msg_Body%.

The Sentiment Class Name is used to categorize the Sentiment Analysis database. For example: "Sales", "Spam" etc. Each class name stores training data separately. Class names are global to your ThinkAutomation instance. For example, a "Sales" class name would contain the same training data across all of your Solutions.

In the Assign Sentiment Score To list, select the variable to assign the sentiment analysis score to.

The result will be returned as a number between 1 and 100, with 100 being maximum positive sentiment and 1 being maximum negative sentiment. A result of 50 indicates neutral sentiment.

You can optionally also get a list of the most relevant tokens used in the scoring process. Select a field or variable to assign the list to from the Assign Relevant Tokens List To list.

The list is returned as a string. Each token on its own line as:

token=score,count
token=score,count
...

Where score is between 1 and 100, and count shows the number of occurrences of the token in the analyzed text. This list shows the tokens that have had the most effect (positive or negative) on the sentiment score.
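
For example, the returned list might look like this (the tokens, scores and counts are illustrative):

refund=12,3
excellent=92,2
delay=18,1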

Sentiment Analysis can be used to classify a message for any construct - not just Positive or Negative sentiment. For example, it could be used to classify a message as a sales inquiry or not. The construct is defined only by the training data. So if you trained the Sentiment Analyzer with 1000 sales inquiries and 1000 non-sales inquiry messages then it could be used to classify incoming messages as sales inquiries and then take appropriate action.

Sentiment Analyzer Control Panel

You can also use the included Sentiment Analyzer Control Panel to add training data and run tests. See: Sentiment Analyzer Control Panel

This action uses the ThinkAutomation built-in sentiment analyzer - which requires training before it will return accurate scores but is free to use.

You can also use the Ask AI action to perform sentiment analysis - this will work without any training.


Train Sentiment

Trains the ThinkAutomation Sentiment Analyzer with positive or negative sentiment text. The ThinkAutomation Sentiment Analyzer should be trained with Positive and Negative sentiment messages to improve sentiment analysis accuracy.

In the Train Sentiment Analyzer With entry, enter the text you want to use. This can contain %variable% replacements. To train with the incoming message body, set the value to %Msg_Body%.

Select Positive, Negative or Add Ignore Words from the Train As drop down.

The Sentiment Class Name is used to categorize the Sentiment Analysis database. For example, you could have "Sales" and "Spam" class names. Each would produce their own specific results. Class names are global to your ThinkAutomation instance. For example, a "Sales" class name would contain the same training data across all of your Solutions.

The number of tokens added to the Sentiment Analysis database can be returned to a variable. Select the variable to use from the Assign Result To list.

Training Process

The Sentiment Analysis accuracy will improve with more training data. The Sentiment Analyzer includes built-in training data for common English positive and negative words. For best results you should also train the Sentiment Analyzer with your own training data. Train roughly equal numbers of Positive and Negative messages. Where possible, use actual positive and negative messages rather than just individual positive/negative keywords.

Ignore Words

In addition to adding positive and negative sentiment messages, you can use this action to add 'Ignore Words'. Select Add Ignore Words from the Train As option. The text used will be split into words and each will be added separately to the Sentiment Database. When Sentiment Analysis is performed, any words in the Ignore list won't be used in the sentiment scoring process. You can add email addresses, links and any other words/text. The number of unique ignore words added will be returned to the assigned variable.
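
For example, you could train the following text as Ignore Words (the address and link shown are placeholders):

support@yourcompany.com
https://www.yourcompany.com
regards thanks sincerely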

Sentiment Analyzer Control Panel

You can also use the included Sentiment Analyzer Control Panel to add training data and run tests. See: Sentiment Analyzer Control Panel

The Parker Software Professional Services team can assist in creating a training plan. Contact our Professional Services team for more information.


Classify Sentiment

Finds the most relevant class name for any text.

In the Get Sentiment Classification For entry enter the text you want to use. This can contain %variable% replacements. To classify the incoming message body set the value to %Msg_Body%.

This action will score the given text against all class names that you have created training data for. The class name with the score furthest from neutral will be returned.

In the Assign Class Name To list, select the variable to assign the class name to.

For example: Suppose you have training data for class names 'Sales' & 'Support'. You can use this action to classify a new message as either 'Sales' or 'Support' depending on the class that scores furthest from neutral (i.e. the class that ThinkAutomation thinks is the most likely).

Before Classification can work the ThinkAutomation Sentiment Analyzer must be 'trained'. The Sentiment Analyzer accuracy will improve the more it is trained. See: Train Sentiment

You can also use the Ask AI action to classify any text. This will work without training.