Data Automation Actions
ThinkAutomation provides a set of database lookup and update actions that allow your Automations to read data from external databases, loop through result sets, execute stored procedures, and insert or update records. These actions enable you to connect to SQL databases, document databases, and the embedded ThinkAutomation data stores, and then use the returned values within your workflow. They are typically used for tasks such as retrieving account details, checking reference data, updating records based on extracted fields, or synchronizing data between systems.
ThinkAutomation supports native access to the following database types:
- Microsoft SQL Server
- Microsoft SQL Server Azure
- MySQL / MariaDB
- SQLite
- Oracle
- PostgreSQL
- DB2
- Firebird
- Microsoft Access
- MongoDB (and Azure Cosmos)
In addition, it can connect to an ODBC DSN and use any OLEDB driver. See: Database Connection Notes.
ThinkAutomation also includes an embedded server-less document database that you can use to store any arbitrary data in Json format and then later query the data using SQL statements. See: Embedded Document DB Notes.
All SQL statements use consistent quoted identifiers - regardless of the database type. Identifiers should be enclosed in double quotes. For example:
SELECT * FROM "Person"."ContactType" WHERE Name = 'xyz'
SQL Parameters
All SQL statements used on database actions allow you to specify parameters.
You can substitute parameters using @parametername in the SQL statement. For example:
SELECT * FROM Person WHERE Id = @Id OR Name = @Name
or
INSERT INTO Customers (Name, CreatedDate) VALUES (@Name, @CreatedDate)
For any parameters you must complete the Parameters grid. Specify the Name, Type & Value for each parameter used. The Name column in the parameters list has a drop-down selector that is auto-populated with any @parameter names used in the SQL statement. The Type must match your database column type. Parameter values can be set to fixed values, %variable% replacements, or a combination of both.
Using parameters is recommended over directly specifying %variables% in the SQL statement itself as this correctly sets the value based on its Type and avoids any possible SQL injection attacks.
If you specify %variables% directly inside a SQL statement instead of using parameters you must ensure the value is correctly escaped (ie: single quotes represented by '') and string values enclosed in quotes.
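As a hedged sketch of how the Customers insert above might be configured (the %Customer_Name% variable is illustrative, and the exact type names offered by the Parameters grid may vary by database type):
INSERT INTO Customers (Name, CreatedDate) VALUES (@Name, @CreatedDate)
Parameters grid: @Name set to a string type with the value %Customer_Name%, and @CreatedDate set to a date/time type with the value %Msg_Date%. ThinkAutomation then passes each value to the database correctly typed and escaped.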
Lookup From A Database
The Lookup From A Database automation action can be used to read records from a database and assign returned column values to multiple ThinkAutomation variables. These values can then be used further in your automation workflow.
Select a Database Type to connect to from the list. You must specify a Connection String that ThinkAutomation will use to open the database. Click the ... button to build the connection string. Click the Test button to verify that ThinkAutomation can connect to the database.
Specify the Max Rows to read.
Enter the SQL Statement to use to query records from the database. The SQL Statement can contain Parameters. Eg:
SELECT * FROM Person WHERE Id = @Id
For any Parameters you must complete the Parameters grid. Specify the Name, Type & Value for each parameter used. Parameter values can be set to %variable% replacements. See: SQL Parameters. Click the Test button to verify the query.
Assignments
Column Assignments
You can assign individual column values to ThinkAutomation Variables (optional).
In the Column Assignments grid you can map database columns returned from the query to ThinkAutomation variables.
In the Column Name/Index column specify a database field name or position number from the SELECT statement.
In the Assign Value To column select a ThinkAutomation Variable that you want the database column value assigned to.
If the database query returned multiple rows, then the first row returned will be used for variable assignment. If no rows are returned then assign-to variables will not be assigned.
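For example, for the Person query above, the Column Assignments grid might be completed as follows (the variable names are illustrative):
Column Name/Index: FirstName - Assign Value To: %Person_FirstName%
Column Name/Index: LastName - Assign Value To: %Person_LastName%
Column Name/Index: 2 - Assign Value To: %Person_Type% (the second column returned by the SELECT)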
Optional Assignments
You can assign the row count to a ThinkAutomation variable. Select a variable from the Assign Row Count To list (optional).
Assign All Rows/Columns To A Variable In JSON Format
You can assign all rows/columns returned by the query as JSON text to a ThinkAutomation variable. Select a variable from the Assign Json To list (optional).
Each row returned by the query will be a JSON object. For example:
{
  "PersonId": 1,
  "PersonType": "EM",
  "NameStyle": false,
  "Title": "",
  "FirstName": "Ken",
  "MiddleName": "J",
  "LastName": "Sánchez",
  "Suffix": "",
  "EmailPromotion": 0,
  "AdditionalContactInfo": "",
  "ModifiedDate": "2009-01-07"
}
If the query returned multiple rows then the JSON will be set to an array.
You can then perform other actions on this value - or pass it to another Automation using the Call Automation action. You can use the Convert JSON To Html action to convert the JSON to an HTML table if the data needs to be sent or viewed in a human readable format.
Assign All Rows/Columns To A Variable In CSV Format
You can assign all rows/columns returned by the query as CSV text to a ThinkAutomation variable. Select a variable from the Assign CSV To list (optional).
For the JSON/CSV content you can use the Read Or Write Text File action to save the content to a file for use on subsequent actions (Convert Document, Add Attachment to outgoing email etc.).
Converting The CSV To Displayable Format
If you want to use the Database Lookup to lookup multiple rows that you can then return in your Automation for a user to view, you can use the Set Variable action with the Convert CSV To Markdown Table option. Eg:
CSV = Lookup From A Database MySQL on world SELECT * FROM world.country
Markdown = Convert CSV To Markdown Table(%CSV%)
Return %Markdown%
When run in the Studio this will display the table when used via the Send Message option. When used via a Web Form or API Message Source the markdown will be automatically converted to HTML.
Reading Blob Data
If any columns return binary data (data types: blob, binary, varbinary etc) the data will be returned to the variable in Base64 format. If you want to write the base64 data to a file you can use the File Operation action with the Write Binary File From Base64 String operation.
Lookups From The Embedded Document DB
This action can also be used to perform lookups using the Embedded Document DB. Select Embedded Database when selecting the Database Type. See: Embedded Data Store action
Open Database Reader
Opens a connection to a database for use with For..Each actions.
The Open Database Reader Action opens a connection to a database using a SQL query. The connection remains open during Automation execution. You can then create a For..Each loop to read each row returned by the query.
Enter the Reader Name. This is a unique name for the data reader. A single Automation can open multiple data readers - each having a unique name.
Select a Database Type to connect to from the list. You must specify a Connection String that ThinkAutomation will use to open the database. Click the ... button to build the connection string. Click the Test button to verify that ThinkAutomation can connect to the database.
Enter the SQL Statement to use to query rows from the database. The SQL Statement can contain Parameters. Eg:
SELECT * FROM Person WHERE PersonType = @Type
For any Parameters you must complete the Parameters grid. Specify the Name, Type & Value for each parameter used. Parameter values can be set to %variable% replacements. See: SQL Parameters. Click the Test button to verify the query.
Now create a For..Each Action. Specify the For Each option as Data Reader Row In and select the Reader Name.
You can then select a variable from the Assign Data Row Json To selector to be assigned the current row Json. The current row Json will be set for each record returned from the query. For example:
{
  "PersonId": 1,
  "PersonType": "EM",
  "NameStyle": false,
  "Title": "",
  "FirstName": "Ken",
  "MiddleName": "J",
  "LastName": "Sánchez",
  "Suffix": "",
  "EmailPromotion": 0,
  "AdditionalContactInfo": "",
  "ModifiedDate": "2009-01-07"
}
You can then perform other actions on this value - or pass it to another Automation using the Call Automation action.
The For..Each loop will continue until all rows from the query have been read or an Exit Loop action is used.
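As a rough outline of how these actions fit together (the reader name, SQL and variable names are illustrative, not the exact wording shown in the Studio):
Open Database Reader: Reader Name 'People', SQL: SELECT FirstName, EmailAddress FROM Person WHERE PersonType = @Type
For..Each: Data Reader Row In 'People', Assign Data Row Json To %RowJson%
    (actions inside the loop - for example, extract values from %RowJson% and send an email)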
The Open Database Reader action is designed for queries that return a small number of rows. For example: To read a list of email addresses from a database and send an email to each. If your query will return many rows consider using the Database message source type instead. You can also use the Set Logging Level action before your For..Each loop. Set the logging level to Minimal so that only errors are logged during the loop. This will improve performance.
A For..Each - Data Reader Row In loop block cannot contain the following actions:
- Wait For User Response
- Wait For Webhook
- Twilio Wait For SMS Reply
- Twilio Send SMS Message (where waiting for status is enabled)
- Twilio Make A Telephone Call
The reason is that the Automation will exit during the waiting phase of the above Actions (allowing the next message to be processed). The underlying data source for the Open Database Reader action may change during this waiting period causing the loop to become invalid.
Execute A Database Command
The Execute A Database Command automation action can be used to execute a SQL Command or Stored Procedure with optional parameters and optionally return multiple values. These values can then be used further in your automation workflow.
This Action allows you to execute a SQL Statement or Stored Procedure. You can pass any number of parameters and assign output parameters to variables.
Select a Database Type to connect to from the list. You must specify a Connection String that ThinkAutomation will use to open the database. Click the ... button to build the connection string. Click the Test button to verify that ThinkAutomation can connect to the database.
Select the Command Type. This is either a SQL Statement or Stored Procedure.
Specify the Command SQL Statement or Stored Procedure Call depending on the command type. You can substitute parameters using @parametername in the SQL Command statement.
Command Parameters
You can pass multiple parameters to your command.
For each parameter in the SQL statement you must specify the Name, Type, Direction & Size. These must match your stored procedure parameter types when using a stored procedure, or column types when using a SQL statement. See: SQL Parameters.
For Output Parameters and the Return Value you can specify the variable to Assign Result To.
For Input Parameters you set the Value - this can be fixed or a %variable% replacement.
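For example, a hedged sketch of a stored procedure call (the procedure, parameter and variable names are illustrative):
Command Type: Stored Procedure
Command: GetCustomerBalance
Parameters: @CustomerId (Input, value %Customer_Id%) and @Balance (Output, Assign Result To %Customer_Balance%)
After the action executes, %Customer_Balance% would hold the value returned in the output parameter.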
Blob Data (Saving File Contents)
For parameters with type Blob - if the Value assigned is a file path, then the file contents are read and the binary data is assigned to the Value.
Saving Attachments
If you want to store message attachments to a database you can use a For..Each action to loop on Attachment. Inside the loop set variables for the Filename and Temporary Location values. You can then assign these variables to the relevant database parameter values. See: Example.
Execute Method
Select Non Query if your SQL statement does not return a result set. You can optionally assign the rows affected to a variable selected from the Assign Rows Affected To list.
Select Scalar if your SQL statement returns a result set. The first column of the first row of any results can optionally be assigned to a variable selected from the Assign Result To list. For example, the SQL statement:
INSERT INTO "Production"."ProductCategory" (Name) VALUES (@Name);
SELECT scope_identity();
The above SQL Server statement will insert a new record and then return the new identity value. The returned value can be assigned to a variable.
Update A Database Using Custom SQL
The Update A Database Using Custom SQL automation action can be used to insert or update a row in a database depending on the result of an optional select statement.
Select a Database Type to connect to from the list. You must specify a Connection String that ThinkAutomation will use to open the database. Click the ... button to build the connection string. Click the Test button to verify that ThinkAutomation can connect to the database.
The Insert tab is used to enter any valid SQL statement. You can also optionally enter statements in the Update and Select tabs.
If a select statement is entered in the Select tab, then the SQL entered in the Update tab is executed if the select returns one or more rows. If no rows are returned then the SQL entered in the Insert tab is executed.
The select, insert & update statements can contain parameters (using @parametername).
You must specify the Name, Type & Value of each parameter used. Parameter values can be assigned to %variable% replacements. See: SQL Parameters.
It is not recommended that you directly specify %variables% in your SQL statements. You should use parameters instead and set the parameter values to each %variable%. This will ensure the database value is set correctly. It is also more secure. If you do use %variables% directly in your SQL statement you must ensure the value is correctly escaped (any single quotes must be replaced with two single quotes) and string values are enclosed in single quotes.
For the Insert & Update statements you can assign the rows affected to a variable.
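For example, a typical insert-or-update setup might look like this (the table, columns and variables are illustrative):
Select tab: SELECT * FROM Customers WHERE Email = @Email
Update tab: UPDATE Customers SET Name = @Name, ModifiedDate = @ModifiedDate WHERE Email = @Email
Insert tab: INSERT INTO Customers (Email, Name, CreatedDate) VALUES (@Email, @Name, @CreatedDate)
Each @parameter is mapped to a %variable% in the Parameters grid. If the Select returns a row the Update statement runs, otherwise the Insert statement runs.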
Blob Data (Saving File Contents)
For parameters with type Blob - if the Value assigned is a file path, then the file contents are read and the binary data is assigned to the Value.
Saving Attachments
If you want to store message attachments to a database you can use a For..Each action to loop on Attachment. Inside the loop set variables for the Filename and Temporary Location values. You can then assign these variables to the relevant database parameter values. See: Example.
Update A Database Using Extracted Fields
Update a database with fields extracted from the incoming message.
This action can be used to automatically insert or update a record in a database based on the Extract Field actions defined in your Automation. The tables and column names used in the SQL commands are specified on the Database Update tab on each individual Extract Field Action. ThinkAutomation will then create the necessary SQL commands automatically.
Select a Database Type to connect to from the list. You must specify a Connection String that ThinkAutomation will use to open the database. Click the ... button to build the connection string. Click the Test button to verify that ThinkAutomation can connect to the database.
The UPDATE and SELECT commands will only be created if you have defined one or more of your Extracted Fields as Key Fields. ThinkAutomation will then first check if a record exists with the key field values by issuing a SELECT * FROM ... command. It will then execute the UPDATE command if a record is found or the INSERT command otherwise.
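As an illustrative sketch (the table and field names are examples, and the SQL that ThinkAutomation generates may differ in detail): with extracted fields OrderRef (marked as a Key Field), CustomerName and Total updating an Orders table, the generated commands would be along these lines:
SELECT * FROM Orders WHERE OrderRef = @OrderRef
UPDATE Orders SET CustomerName = @CustomerName, Total = @Total WHERE OrderRef = @OrderRef
INSERT INTO Orders (OrderRef, CustomerName, Total) VALUES (@OrderRef, @CustomerName, @Total)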
You can only have one Update A Database Using Extracted Fields Action in your Actions List since the SQL statements are automatically created based on your Extract Field actions. If you want to update multiple tables within the same database you can specify different table names against each Extract Field action in the Update Table Name entry - ThinkAutomation will then create separate SQL commands for each separate table being updated. You need to add your Update A Database Using Extracted Fields action below your Extract Field actions to ensure each extracted field has a value.
The result of the update can be assigned to a variable. Select a variable from the Assign Result To list. The variable will receive either 'Inserted','Updated' or an error message if the update failed.
If you need to update multiple separate databases within the same Automation then you can use the Update A Database Using Custom SQL Action. You can have any number of Update A Database Using Custom SQL Actions within your Automation.
Update A Database Using CSV Or Json
This action can be used to automatically insert or update multiple records in a database from CSV or Json text (or file). The column names used in the SQL commands are mapped to CSV columns or Json paths. ThinkAutomation will then create the necessary SQL commands automatically.
Select a Database Type to connect to from the list. You must specify a Connection String that ThinkAutomation will use to open the database. Click the ... button to build the connection string. Click the Test button to verify that ThinkAutomation can connect to the database.
Specify the Table Name to be updated. Click the Get Tables button to read the table names from the database schema. You can then select a table from the Table Name drop-down list. If you are connecting to a SQLite database you can also click the Create Table button to create a new table.
From the Update Using list, select:
- CSV Data - to update using CSV data or a linked file.
- JSON Data - to update using JSON data or a linked file.
For CSV data, enable the CSV Has Header Row if the CSV data contains column names in the first line.
For Json data, you can optionally specify the Start At Path (using dot notation).
In the CSV/Json Data Or File Path entry, specify the source CSV or Json data. This can be a %variable% containing the data. You can also specify a file path (or a variable containing a file path). If a file path is used, the file will be read and the contents used for the source data when the Automation executes.
Complete the Column Mappings grid to map database columns to your source data columns.
When the Automation executes all records in the CSV/Json will be inserted or updated in the database. The database update is performed within a single transaction. This means that if one insert or update fails, then the transaction is rolled back and no database changes are made.
The number of inserted and updated database rows can be returned to variables. Select the variables from the Assign Inserted Count To and Assign Updated Count To lists.
Mapping Source Columns To Database Columns
You must map columns from the source data to columns in the database table you want to update. The easiest way to do this is to first paste a sample of your CSV or Json data into the CSV/Json Data editor, then use the Auto Map button. Once you have mapped your columns, replace the CSV/Json Data value with your %variable% that will contain the CSV/Json data or file path.
In the Column Mappings grid, for each source column, specify the Database Column Name, Type, Size and Source Column Name Or Value.
Click the Auto Map Database Columns To Data Columns button to auto-map. This will populate the column selector drop-down lists and also match database column names with source column names. You can then manually match where needed.
The Source Column Name Or Value can either be a column name in your source data or a %variable%. If you use a %variable% then the database column will be assigned the fixed %variable% value for each row inserted/updated.
For the Size value, you can specify the database column size for text data types. If a size is specified then the source data will be trimmed before being assigned to the database column to ensure it fits. Specify zero for no auto-trimming.
Enable the Key option for database columns where you want to update existing database records instead of inserting new. Multiple columns can be set as keys. ThinkAutomation will then first check if a record exists with the key field values by issuing a SELECT * FROM table WHERE keyfield1 = @value [AND keyfield2 = @value] command. It will then execute the UPDATE command if a record is found or the INSERT command otherwise.
CSV Data
For example, suppose we have the following CSV data:
Index,Organization Id,Name,Website,Founded,Industry,Number of employees
1,FAB0d41d5b5d22c,Ferrell LLC,https://price.net/,1990,Plastics,3498
2,6A7EdDEA9FaDC52,"Mckinney, Riley and Day",http://www.hall-buchanan.info/,2015,Ceramics,4952
3,0bFED1ADAE4bcC1,Hester Ltd,http://sullivan-reed.com/,1971,Public Safety,5287
And a database table:
CREATE TABLE "Customers" (
  "Index" INTEGER,
  "OrganizationId" TEXT,
  "Name" TEXT,
  "Website" TEXT,
  "Founded" INTEGER,
  "Industry" TEXT,
  "NumberOfEmployees" TEXT,
  PRIMARY KEY("Index")
)
The Auto Map button would automatically map the CSV columns to the database columns (ignoring case and spaces). The 'Index' column could be marked as a key.
When the Automation executes, 3 new records will be inserted. If the Automation was run again with the same CSV data then the 3 records would be updated if they already exist in the table with the same 'Index' value.
Excel Data
If you want to update a database from an Excel spreadsheet you can use the Lookup From Excel action to read a range of cells in CSV format. You can then use the variable containing the CSV text as your source CSV data.
JSON Data
When using Json data instead of CSV, the Json must be a Json array. The objects in the array must all be the same type. Sub-objects are not supported.
For example, the following JSON data could be used for the above database example:
[
  {
    "Index": 1,
    "Organization Id": "FAB0d41d5b5d22c",
    "Name": "Ferrell LLC",
    "Website": "https://price.net/",
    "Founded": 1990,
    "Industry": "Plastics",
    "Number of employees": 3498
  },
  {
    "Index": 2,
    "Organization Id": "6A7EdDEA9FaDC52",
    "Name": "Mckinney, Riley and Day",
    "Website": "http://www.hall-buchanan.info/",
    "Founded": 2015,
    "Industry": "Ceramics",
    "Number of employees": 4952
  },
  {
    "Index": 3,
    "Organization Id": "0bFED1ADAE4bcC1",
    "Name": "Hester Ltd",
    "Website": "http://sullivan-reed.com/",
    "Founded": 1971,
    "Industry": "Public Safety",
    "Number of employees": 5287
  }
]
You can use the Start At Path entry to specify an array path within the Json, for example:
{
  "Id": "1234",
  "Customers": [
    {
      "Index": 1,
      "Organization Id": "FAB0d41d5b5d22c",
      "Name": "Ferrell LLC",
      "Website": "https://price.net/",
      "Founded": 1990,
      "Industry": "Plastics",
      "Number of employees": 3498
    },
    {
      "Index": 2,
      "Organization Id": "6A7EdDEA9FaDC52",
      "Name": "Mckinney, Riley and Day",
      "Website": "http://www.hall-buchanan.info/",
      "Founded": 2015,
      "Industry": "Ceramics",
      "Number of employees": 4952
    },
    {
      "Index": 3,
      "Organization Id": "0bFED1ADAE4bcC1",
      "Name": "Hester Ltd",
      "Website": "http://sullivan-reed.com/",
      "Founded": 1971,
      "Industry": "Public Safety",
      "Number of employees": 5287
    }
  ]
}
If the Start At Path is set to 'Customers', then only the Customers array will be used.
Update MongoDB
Insert, Update or Delete documents in a MongoDB collection. This action can update any local MongoDB, or cloud based MongoDB compatible document databases including Amazon DocumentDB, Azure Cosmos and MongoDB Atlas.
Specify the MongoDB Connection String, Database Name & Collection Name.
Select the Operation:
- Insert: To insert a new document.
- Upsert: If you want an existing document returned from the Query to be updated. If no document is returned from the Query then a new document will be inserted.
- Update: To Update one or more existing documents.
- Delete: To delete one or more existing documents.
If Upsert, Update or Delete is selected you must specify the Query Json. See: Query Documents — MongoDB Manual for query syntax. You can use %variable% replacements in the query and document json. If the value is a string then it must be enclosed in quotes. For example:
{ status: "%statuscode%" }
Click the Test Query button to test the connection and query. A maximum of 100 documents will be returned when using the Test option.
Enter Document Json for Insert, Upsert or Update.
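For example, an Insert Document Json built from incoming message values might look like this (the field names are illustrative; %Msg_FromEmail%, %Msg_Subject% and %Msg_Date% are built-in message variables):
{
  "Email": "%Msg_FromEmail%",
  "Subject": "%Msg_Subject%",
  "Received": "%Msg_Date%",
  "Status": "New"
}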
If Update or Delete is selected then you can enable All Matching Documents. If enabled then all documents returned from the query will be updated/deleted.
Example Update:
With Query set to:
{ "Name" : { "$gt" : "A" } }
And Document Json set to:
{
  "$set" : {
    "Downloads" : 11
  }
}
If All Matching Documents is enabled this would set the 'Downloads' field to 11 for all documents with a 'Name' field greater than 'A'. If All Matching Documents is not enabled then only the first document found would be updated.
If you want to store a full copy of the incoming message you can use the built-in variable %Msg_Json%. If the incoming message body is already Json (for example, if using the Database message source) then you can set the Document Json to %Msg_Body%.
You can use the Create Json action to create a Json document to insert/update.
You can assign the result of the operation to a variable. Select from the Assign Result To list. For Update/Delete operations the result will be the number of documents affected. For Insert/Upsert operations the result will be the _id of the Inserted/Updated document.
Lookup From A MongoDB Database
Read documents from a MongoDB collection and assign the Json or CSV to a variable. This action can lookup data from any local MongoDB, or cloud based MongoDB compatible document databases including Amazon DocumentDB, Azure Cosmos and MongoDB Atlas.
Enter the MongoDB Connection String, Database Name and Collection Name. Enter the Query json and optionally the Projection and Sort json. Click the Test button to test the connection and query.
You can use %variable% replacements in the query. For example:
{ "_id": { "$eq": "%OrderNumber%" } }
See: Query Documents — MongoDB Manual for query syntax.
Select the variable to assign the returned document(s) to from the Assign To list.
The data can be returned as either:
- MongoDB Relaxed Extended Json - returns Json as stored in MongoDB. This will include additional objects for date, long, objectid etc.
- Standard Json - returns standard Json.
- CSV - converts the returned Json to CSV.
Select from the Output Mode list.
You can then use Extract Field Actions (with the extract Json path option) to extract individual fields from the document.
Embedded Data Store
Insert, Update and Query data using the embedded document DB. ThinkAutomation includes an embedded server-less document database that you can use to store any arbitrary data in Json format and then later query the data using SQL statements. The Embedded Data Store makes it easy to store and retrieve data in your Automations without having to configure a database. See: Embedded Document DB Notes.
If you only need to store single values against a key (key/value pairs) then you can use the Embedded Value Store action which provides simple storage and retrieval of key/value pairs.
Database Name
Any number of separate databases can be created. Database names can contain letters or numbers only. You can use a %variable% replacement for the database name (all non-alphanumeric characters will be removed during execution). Databases are global to the ThinkAutomation instance (IE: The same database can be used on all Solutions/Automations). A database is automatically created when it is first accessed.
Password
A database can be optionally assigned a password. If a password is specified then the database file will be encrypted with AES encryption. You cannot change a database password after it has been created.
Collection Name
A database can contain multiple collections. A 'collection' is similar to a 'table' in a traditional database. Collection names can contain letters or numbers only. You can use a %variable% replacement for the collection name (all non-alphanumeric characters will be removed during execution).
Operations:
Insert
Inserts a new document into the specified database/collection. Specify the Document Json. This can contain %variable% replacements - or be a single %variable% containing Json created from a previous action. If you are using the Database Message Reader then you can use %Msg_Body% since this will already contain Json read from the source database.
If you want to easily store all extracted field values you can use the %Msg_ExtractedFieldsJson% built-in variable. This will return a Json document containing each extracted field name & value.
Each document must have an '_id' field containing a unique id. This field will be added automatically with a unique value if it is not included in the Json. If an '_id' field is included in the Json and its value is not blank then the existing document will be updated if it exists, otherwise a new document will be inserted (upsert).
The '_id' field is automatically indexed. The unique id will be returned after the insert. Select the variable to receive the new Id from the Assign _id To list.
The Ensure Indexed entry allows you to optionally define one or more fields as additional indexes (separated by commas). This will enable faster queries. For example, suppose we want to insert the following:
{
  "isbn": "123-456-222",
  "author": {
    "lastname": "Doe",
    "firstname": "Jane"
  },
  "editor": {
    "lastname": "Smith",
    "firstname": "Jane"
  },
  "title": "The Ultimate Database Study Guide",
  "category": [
    "Non-Fiction",
    "Technology"
  ]
}
We could set the Ensure Indexed entry to 'isbn,author.lastname'. This will ensure both the "isbn" and "author.lastname" fields are indexed.
Update
Replaces an existing document. The Document Json should contain the new document. The Where _id Equals should be set to the _id of the document to replace. The Assign _id To variable will receive the _id - or be set to blank if the existing document was not found.
You can also use the SQL operation to update specific fields, for example:
UPDATE books SET editor.firstname = 'Jane', editor.lastname = 'Doe' WHERE isbn = '123-456-222'
Delete
Deletes an existing document. The Where _id Equals should be set to the _id of the document to delete. The Assign _id To variable will receive the _id - or be set to blank if the existing document was not found.
Get
Retrieves a single document. The Where _id Equals should be set to the _id of the document to retrieve. The Assign _id To variable will receive the document Json - or be set to blank if the existing document was not found.
SQL
You can execute SQL statements against a Database. SELECT, UPDATE & DELETE can be used. The Collection Name entry is not required when using the SQL option, since the collection name will be specified in the SQL statement itself.
You can use SELECT * FROM {collectionname} to return full documents, or SELECT FieldName,FieldName2... FROM {collectionname} to return only specific fields.
Parameters can also be used. For example:
SELECT _id,title,isbn,author.lastname FROM books WHERE author.lastname = @Name
ORDER BY author.lastname LIMIT 10
If a @parameter is specified in the SQL statement then you must set its type and value in the Parameters grid. Parameter values can be set to %variable% replacement values.
When using a SELECT statement the returned documents can be returned as Json or CSV. If returning Json then a Json array will be returned if the SELECT statement returns more than 1 document.
When using an UPDATE or DELETE statement the returned value will be the number of documents affected.
Select the variable to receive the results from the Assign To list. The results can be returned as Json, CSV (without headers) or CSV (with headers).
You can also use the regular Lookup From A Database action to perform a lookup using the Embedded Document DB. This action allows specific column values to be assigned to variables. The Update A Database Using Extracted Fields action can also be used to update (insert or update) the Embedded database.
Drop Collection
Deletes all documents in the specified Database / Collection.
The Assign To variable will receive the dropped collection name or blank if the collection does not exist.
Drop Database
Deletes the specified Database.
The Assign To variable will receive the dropped database name or blank if the database does not exist.
Embedded Value Store
Get, Set and Delete from a dictionary of key/value pairs using the embedded document database. This action enables you to store any number of key/value pairs against a Collection Name. Values can later be retrieved by their key. You can store any sort of text data for the value. Keys can be any text, up to 1024 characters. For example: You could use a collection called 'subscribed' and use email addresses as keys - with the value set to Yes or No. Then do a simple lookup in any of your Automations to see if an email address is subscribed to a newsletter.
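A hedged sketch of that newsletter example as actions (the collection name follows the description above, and %IsSubscribed% is an illustrative variable):
Set: Collection 'subscribed', Key %Msg_FromEmail%, Value 'Yes'
Get: Collection 'subscribed', Key %Msg_FromEmail%, Assign To %IsSubscribed%
Your Automation could then check whether %IsSubscribed% equals 'Yes' before sending the newsletter.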
Collection Name
Key/value pairs are contained within a Collection. Multiple collections can be used. Collection names can contain letters or numbers only. Key/value pair collections are global to the ThinkAutomation instance (IE: The same collection can be used on all Solutions/Automations).
Operations:
Set
Add or update a key/value pair within the Collection. Specify the Key and Value (both can contain %variable% replacements). If the key already exists then its value will be updated, otherwise a new key/value pair will be created. The Assign To variable will receive the key value (or blank if an error occurred). You can use the Add Now button to manually add key/value pairs.
You can optionally set an Expires After value in seconds. If an expires after value is specified then any Get operation with the same key will return a blank value if the value was set or updated more than the Expires After seconds ago. This is useful when implementing caching. Leave at zero for never expire.
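For example, a simple caching pattern under these assumptions (the collection, key and %RatesJson%/%CachedRates% variables are illustrative):
Set: Collection 'cache', Key 'exchange-rates', Value %RatesJson%, Expires After 3600
Get: Collection 'cache', Key 'exchange-rates', Assign To %CachedRates%
If %CachedRates% comes back blank the value has expired (or was never set), so the Automation can fetch fresh data and Set the key again; otherwise it can use the cached value.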
Get
Retrieve an existing value. Specify the Key. The Assign To variable will receive the value - or be set to blank if the key does not exist (or has expired).
Delete
Delete an existing value. Specify the Key to delete. The Assign To variable will receive the key value - or blank if the key does not exist. You can use the Delete Now button to manually delete keys.
Drop
Removes all key/value pairs from the specified Collection Name. The Assign To variable will receive the collection name - or blank if the collection does not exist. You can use the Drop Now button to manually drop the collection (all key/value pairs in the collection will be deleted).
Get All
Get all key/value pairs in the specified Collection Name. You can specify a Limit to limit the number of key/value pairs returned. You can also Sort by Key or Value. You can Return Keys, Values or both (in CSV format). The Assign To variable will receive the returned keys/values (one per line). You can use the returned data in a For Each.. Line In loop if you need to perform actions on each value returned.
Get Count
Get the total number of keys for the specified Collection Name. The Assign To variable will receive the count.
Find
This operation allows you to return all Keys, Values or both where the keys Starts With, Ends With or Contains the specified Where Key text. You can specify a Limit to limit the number of key/value pairs returned. You can also Sort by Key or Value. You can Return Keys, Values or both (in CSV format). The Assign To variable will receive the returned keys/values (one per line). You can use the returned data in a For Each.. Line In loop if you need to perform actions on each value returned.
Embedded Files Store
Save, get and delete files using the embedded document DB. ThinkAutomation includes an embedded server-less document database that you can use to store and retrieve files. See: Embedded Document DB Notes.
Database Name
Any number of separate databases can be created. Database names can contain letters or numbers only. You can use a %variable% replacement for the database name (all non-alphanumeric characters will be removed during execution). Databases are global to the ThinkAutomation instance (IE: The same database can be used on all Solutions/Automations). A database is automatically created when it is first accessed.
Password
A database can be optionally assigned a password. If a password is specified then the database file will be encrypted with AES encryption. You cannot change a database password after it has been created.
Collection Name
A database can contain multiple collections. A 'collection' is similar to a 'table' in a traditional database. Collection names can contain letters or numbers only. You can use a %variable% replacement for the collection name (all non-alphanumeric characters will be removed during execution).
Operations
Save
Save files to the database. You can specify specific local files or a %variable% containing a file path obtained from a previous action. You can also Include Incoming Attachments. For attachments you can specify a Mask (eg: *.pdf).
The Path In Database entry allows you to define a path within the database (similar to directories) where the files will be saved. This allows you to organize files within the database. You can use %variable% replacements in the path. For example: /Documents/%Msg_FromEmail%/
The Assign To variable will receive the saved file information.
Get
Read a previously saved file from the database and save it to a specified folder.
Specify the Path In Database for the file. For example: /Documents/Pdfs/Document1.pdf
Specify the Save To Local Folder path where the file should be saved. If Delete saved copy after message is processed is enabled then the saved file will be deleted after the Automation has completed for the current message. The file held in the database is not deleted - only the saved copy. This is useful if you need to access a file during an Automation (for example, to attach it to an outgoing message) but do not need it afterwards.
The Assign To variable will receive the local saved path (or blank if the file was not found in the database).
Delete
Delete a previously uploaded file from the database. Specify the Path In Database for the file. For example: /Documents/Pdfs/Document1.pdf
The Assign To variable will receive the deleted file path (or blank if the file was not found in the database).
Get Info
Gets file information for a previously saved file. The file itself is not read from the database.
Specify the Path In Database for the file. For example: /Documents/Pdfs/Document1.pdf
The Assign To variable will receive the file information (or blank if the file was not found in the database).
Get List
Gets a list of file information for a specified path.
Specify the Path In Database for the files. For example: /Documents/Pdfs/ - will return all documents starting with '/Documents/Pdfs/'. You can also use wildcards, for example: /Documents/Pdfs/Quotes*.pdf.
The file list can be returned as Json or CSV.
Drop Collection
Deletes all files in the specified Database / Collection.
The Assign To variable will receive the dropped collection name or blank if the collection does not exist.
File Information Format
The Save, Get Info and Get List operations return file information in the following format:
{
  "DatabaseName": "Attachments",
  "CollectionName": "Files",
  "Id": "$/Documents/Pdfs/Document1.pdf",
  "Size": 116601,
  "UploadDate": "2021-09-16T08:35:32.34+01:00",
  "MimeType": "application/pdf"
}
A Json array will be returned for multiple files.
You can include the File Information %variable% in your Automation Return value. If the Automation has been executed from the Studio or Desktop Connector Send Message form then any File Information json will be automatically converted to clickable links. This will allow the user to save and view the file.
Get List CSV Option
The Get List operation can also return the list as CSV text, for example:
DatabaseName,CollectionName,Id,Size,UploadDate,MimeType
FilesStore,quotes,$/Documents/Quotes/Quote15.pdf,79467,2022-02-07 13:52:29,application/pdf
FilesStore,quotes,$/Documents/Quotes/Quote16.pdf,79467,2022-02-07 13:57:23,application/pdf
FilesStore,quotes,$/Documents/Quotes/Quote17.pdf,79468,2022-02-07 13:57:41,application/pdf
You could use the Set Variable action with the Convert CSV To Markdown Table option if you wanted to include the list in the Automation return value to display a table.
Embedded Knowledge Store
Update, Get, Search and Delete from a knowledge base store using embedded document database. This action enables you to store knowledge base articles against a Collection Name. Articles can later be retrieved by their title, and you can perform semantic searches to retrieve the top x most relevant articles for any search text.
This action is useful when used with the Ask AI action. When a user asks a question, you can search the knowledge base to provide context to the asked question. The AI will then be able to answer organization specific questions based on private data.
The Embedded Knowledge Store performs searches in memory. The maximum number of articles is therefore limited depending on the memory capacity of the host computer. As a general rule, the total number of articles should be less than 10,000. If you need to perform semantic searches against any number of items, you can use the Embedded Vector Database action, which has no limit.
Collection Name
Articles are contained within a Collection. Multiple collections can be used. Collection names can contain letters or numbers only. Article collections are global to the ThinkAutomation instance (IE: The same collection can be used on all Solutions/Automations). A collection can contain any number of articles - however it is recommended to keep articles per collection below 10,000.
Operations:
Update
Add or update a knowledge base article within the Collection. Specify the Article Title, Tag and Article Text (these can contain %variable% replacements). If the title already exists then its article text will be updated, otherwise a new article will be created. The Tag is optional.
The Assign To variable will receive the number of articles updated (or blank if an error occurred). You can use the Update Now button to manually add/update articles.
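For example, to build the knowledge base from incoming emails (as described in the tips further below), the Update operation might be set up as follows (the tag value is illustrative):
Article Title: %Msg_Subject%
Tag: 'support-faq'
Article Text: %Msg_Body%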
Update: Importing Document Files
You can optionally specify an Import Document File for the Update operation. This is a path to a document file (Word, PDF, HTML, Markdown, RTF, Email EML, Outlook MSG, Text or Excel files). If a file is specified the contents will be read, converted to plain text and added to the article text. When importing a document, the file name will be used as the Article Title if no title is specified.
Note: Imported documents will be split into pages and a separate article will be added for each page.
Update: Special Handling Of Markdown Files
When importing Markdown files (.md extension), the Markdown text will be split into articles based on # Headings (up to 4 levels) and the title of each article will be the full heading for all levels (eg: Main Heading, Heading Level 2, Heading Level 3).
Note: This operation may take several minutes when importing large documents with many pages, or large Markdown files. You can use the Knowledge Store Browser option to manually import documents into your Knowledge Store.
Update: Add Embeddings
This option is used to include embeddings with the article record. Embeddings are a list of numbers (a 'vector') that are used to measure the relatedness of text strings. This enables a much more accurate list of relevant articles when doing a search. If you are using the Embedded Knowledge Store action in conjunction with an AI then you should enable this option. Enter your AI Provider settings in the Server Settings - AI Providers section.
When using OpenAI, adding embeddings is not expensive (you could add embeddings for 1000 articles for less than $0.25).
If you do not add embeddings then the Search operation will only do a keyword match based search rather than a semantic search.
Delete
Delete an existing article. Specify the Article Title to delete. The Assign To variable will receive the title value - or blank if the title does not exist. You can use the Delete Now button to manually delete articles.
Search
Search the Knowledge Base for relevant articles based on the Search For text. You can return the Top x most relevant articles - in relevance order. The Relevancy Threshold setting controls the relevancy level. Articles below the relevancy % will not be included. This value defaults to 30%.
Enable the Add Embeddings option if you added embeddings when the article was created. When this option is enabled then embeddings are obtained for the Search For text and then used to perform a semantic search of stored articles. This will provide a much more accurate list of relevant articles.
When used with AI you can specify the Max Tokens and Max Characters that should be returned in the search. This will prevent your requests from going over the token limit.
In the Return As list select either Text or Json. Specify Json if you are searching for articles to add as context for the Ask AI action. The Assign To variable will be set to the returned articles.
You can optionally assign the top (most relevant) title and tag to variables, select from the Assign Top Title and Assign Top Tag to lists.
You can also perform a Knowledge Base search on the Ask AI action itself when adding context to a conversation.
Get
Retrieve an existing article. Specify the Article Title. The Assign To variable will receive the article text - or be set to blank if article does not exist. You can use this option to lookup a specific article. In the Return As list select either Text or Json. Specify Json if you are looking up an article to add as context for the Ask AI action. The Assign To variable will be set to the returned article.
List
Get a list of all article titles in the specified Collection Name. The Assign To variable will receive the returned titles (one per line). You can use the returned data in a For Each.. Line In loop if you need to perform actions on each value returned.
Drop
Removes all articles from the specified Collection Name. The Assign To variable will receive the collection name - or blank if the collection does not exist. You can use the Drop Now button to manually drop the collection (all articles in the collection will be deleted).
Manual Import And Editing
Click the Browse Knowledge Store button to open the Knowledge Store Browser. Here you can view articles for each collection. You can edit, add & delete articles. You can also test Searches and import multiple files & documents. You can also access the Knowledge Store Browser from the Studio - File menu.
You can also use the Web Spider action if you want to import an entire web site into your Knowledge Store.
Tips For Using With AI
Ensure the Add Embeddings option is enabled when adding articles to the Knowledge Store.
If you are building a Knowledge Store in order to provide context to an AI conversation you should ensure each article text is less than approximately 25,000 characters. It is better to have lots of specific knowledge store articles than a few long ones (when importing files & documents into your Knowledge Store, these will be split into pages automatically where required).
If you have thousands of articles it is better to split them into separate Collections based on some category. This will provide better performance during searches. For example, suppose you want to have a Chatbot that provides product information. You could ask for the product name at the start of the chat (using the Subject field). Then use this to lookup from the relevant Knowledge Base collection during the chat.
The Embedded Knowledge Store search is performed in memory, so there is a limit to the number of articles (it is recommended not to exceed 10,000). If you want to use external data sources for AI context, you can also use the Embedded Vector Database action. This enables embeddings to be stored and searched that link to external data. The vector database is stored and searched on disk, so has no upper limit.
If you are adding product support type articles, or frequently asked questions, each article should be a question and answer, for example:
I have forgotten my password, what can I do?
Answer:
Click the forgot password link in the control panel 'Account' menu. This will ask for
an email address. Provided the email address is registered on our system,
a reset password email will be sent.
Click the link in the email to reset your password.
There may be occasions where you need to ensure a specific article is added to the context instead of relying on a search. For example: if a specific keyword or combination of keywords exists in the incoming message. You can use the Get operation to lookup a specific article via its title, and add this as context after adding context using a search. The context won't be added twice if the article was already included in the search.
If you want to add data related context for AI (for example, customer accounts/order information) - and the data is available via a Database/CRM etc, then you don't need to add this to the Knowledge Store. Instead, lookup the customer information via their email address from your database and add the context directly.
Using The Tag Value
The Tag is an optional text string you can add to articles. You can return the top tag when performing a Knowledge Store search. This can be useful if you want to perform specific actions within your Automation based on the tag value.
For example: Suppose you have several articles relating to 'service status'. You could add the tag 'service status' to each of these. When a search is performed, if the top tag returned is 'service status' your Automation could lookup your service status and add this as context.
General Tips For Building A Knowledge Store
You can use separate Automations to update the Knowledge Store. For example: You could use an Email message source to read emails from a specific 'knowledgebase' mailbox. Any emails coming into this mailbox could update the Knowledge Store - using the %Msg_Subject% as the title and the %Msg_Body% as the article text. You can then simply send/forward emails to that email address to add/update the Knowledge Store. Or you could use the File Pickup message source to monitor a folder - and simply add files/documents to the folder that you want adding to the knowledge store.
Embedded Vector Database
ThinkAutomation includes a built-in 'vector database'. A vector database is a type of database designed to store, index, and search data represented as embeddings. Embeddings are a list of numbers (a 'vector'). These vectors are mathematical representations of data such as text, images, audio, or other unstructured content, generated by machine learning models. When searching, instead of exact matches (like in traditional databases), vector databases find similar items using approximate nearest neighbor (ANN) algorithms.
The ThinkAutomation vector database allows you to store vectors for any external data (such as database records, images or document and email content). With each vector record you also store a 'title' and optional 'source'. When a search is performed the closest matching records will be returned (in relevancy order). You could then use the results to add to the 'context' for the Ask AI action, or for advanced search operations.
The ThinkAutomation Embedded Knowledge Store allows you to add the content and embeddings to 'articles' that can then be used as context for the Ask AI action. However, this is limited to about 10,000 articles, since the search is performed in memory. The Vector Database on the other hand has no limit, since the database is maintained on disk.
Collection Name
Title/vector pairs are contained within a Collection. Multiple collections can be used. Collection names can contain letters or numbers only. Title/vector pair collections are global to the ThinkAutomation instance (IE: The same collection can be used on all Solutions/Automations).
From the Vector Operation list, choose: Update, Search, Delete, Drop, Count or Collections:
Update
Add or update a record in the vector database collection. If a record with the specified title does not exist, a new record will be added, otherwise the existing record will be updated.
Specify the Title. The title can be any text. This should be some form of unique id for the record (such as a document title, filename or database primary key).
Specify an optional Source. This can be a file path or URL or any other value. The source will be returned in search results and can be used to lookup the source data.
You can optionally specify an Import Document File for the Update operation. This is a path to a document file (Word, PDF, HTML, Markdown, RTF, Email EML, Outlook MSG, Text or Excel files). If a file is specified the contents will be read, converted to plain text and added to the Text. When importing a document, the file name will be used as the Title if no title is specified and the file path will be used for the Source.
Note: Imported documents will be split into pages and a separate record will be added for each page.
Specify the Text. This is the text content you want to store vectors (embeddings) for. If you have setup an AI Provider in the ThinkAutomation Server Settings, then you can enable the Get Embeddings option. When the record is saved the AI Provider will be called to obtain the embeddings, which will then be used as the vectors.
Each record in the collection can be assigned a date. Searches can be limited by a from and to date. Specify the Dated value. This can be a %variable%. The default is %Msg_Date% - which is the current message date.
Enable the Save Text option if you want the actual text stored with the vectors in the database. The text can then be returned when a search is performed. If this option is not enabled, then only the title, source and the vectors will be stored. You would then use the returned titles (and/or source) to lookup the source text when a search is performed.
If the update is successful then the title value will be assigned to the variable specified in the Assign To list.
You can also specify the vectors in the text itself. This is for use cases where you are obtaining vectors via another method. In this case the Get Embeddings and Save Text options should be disabled.
Note: If you change your default AI Provider you will need to re-create any vector database collections (since the vector values will not be the same, so searches will not be accurate).
Search
Search the vector database for relevant items based on the Search Text. You can return the Top x most relevant items - in relevance order. The Relevancy Threshold setting controls the relevancy level. Items below the relevancy % will not be included. This value defaults to 20%.
If you have setup an AI Provider in the ThinkAutomation Server Settings, then you can enable the Get Embeddings option. Before the search is performed the AI Provider will be called to obtain the embeddings, which will then be used as the vectors.
You can specify the Max Tokens to return. When a record is added to the vector database, the number of tokens used in the text is also saved. Search results will be limited to the max tokens specified. This is useful when using the vector database search along with the Ask AI action.
Searches can be optionally filtered by a date range. Enable the Search Between Dates option and specify From and To dates (both can be %variables%).
In the Return As list select either:
- Titles Only (One Per Line) : The search will return a list of titles, one per line.
- Titles And Text : The search will include the title and text content (if the Save Text option was enabled when the record was added).
- Json : A JSON array will be returned containing the search results in the following format:
[
  {
    "Title": "About Parker Software",
    "Source": "",
    "Text": "Parker Software is an independent software house.",
    "Dated": "2025-04-23T18:25:43.511Z",
    "Similarity": 0.78213344,
    "Tokens": 4
  },
  {
    ...
  }
]
Specify Json if you are searching for items to add as context for the Ask AI action.
Select the variable to receive the results from the Assign To list. You can optionally assign the top (most relevant) title to a variable, select from the Assign Top Title list.
Delete
Delete an existing item. Specify the Title to delete. If the delete was successful the title will be returned to the variable specified in the Assign To list.
Drop
Drops the entire collection. If the drop was successful the collection name will be returned to the variable specified in the Assign To list.
Count
Returns the total number of records stored in the specified collection. The count will be returned to the variable specified in the Assign To list.
Collections
Returns a comma-separated list of all vector collections (or blank if no collections have yet been created).
Vector Database Files Location
ThinkAutomation stores vector database files in the following location:
\ProgramFiles\ThinkAutomation.NET\EmbeddedStore\Vectors\
Each collection is a separate file, with the filename: {collectionname}.vector. These files can be backed up and copied to other ThinkAutomation instances. If a file is deleted it will be recreated on the next Update operation.
Vector Database Use Cases
- As a local document search engine : Use the Convert Document To Text action to obtain the plain text for local documents or incoming attachments. Add these to the Vector Database using the document file path as the title. A separate automation could then perform a search and return the top x matching document titles.
- As context for the Ask AI action. For use cases where the number of items is too large for the Embedded Knowledge Store.
Full Text Search
This action enables you to store any number of text records against a Collection Name. Each record is saved with a key value. Full text searches can then be performed against the collection. Matching keys and/or text will be returned sorted by search rank. The full text database uses a full text index, allowing for fast retrieval against a large number of records.
Collection Name
Key/text pairs are contained within a Collection. Multiple collections can be used. Collection names can contain letters or numbers only. Key/text pair collections are global to the ThinkAutomation instance (i.e. the same collection can be used across all Solutions/Automations).
Operations:
Set
Add a key/text pair within the Collection. Specify the Key and Text (both can contain %variable% replacements). The Assign To variable will receive the key value (or blank if an error occurred). You can use the Add Now button to manually add key/text pairs.
You can optionally specify an Import Document File for the Set operation. This is a path to a document file (Word, PDF, HTML, Markdown, RTF, Email EML, Outlook MSG, Text or Excel files). If a file is specified, its contents will be read, converted to plain text and added to the Text. When importing a document, the file path will be used as the Key if no key is specified.
Search
This operation allows you to perform a full text search. Specify the search text in the Search For entry. This can be a %variable%.
The search text can consist of multiple words or phrases and can contain boolean operators. Examples:
| Search Text | Results |
|---|---|
| pricing quote | Searches for any text containing 'pricing' and 'quote' anywhere in the text. |
| "pricing quote" | Searches for any text containing the phrase 'pricing quote'. |
| pricing + quote | Searches for any text containing the phrase 'pricing quote'. |
| quot* | Searches for any text beginning with 'quot'. |
| pricing NOT quote | Searches for any text containing 'pricing' and not 'quote'. |
| pricing OR quote | Searches for any text containing 'pricing' OR 'quote'. |
| pricing AND (quote OR quotation) | Searches for any text containing pricing and either quote or quotation. |
From the Return list, select:
- Keys : to only return the keys
- Text : to only return the text
- Keys, Text : to return both
If Text or Keys, Text is selected, you can enable the Highlight Matched Text option. If this option is enabled then matched words in the returned text will be enclosed in <b> and </b> tags.
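For example (the record text is illustrative only), searching for pricing with this option enabled might return:
Please send me your <b>pricing</b> details for 10 licenses.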
From the Return As list, select:
- Text : to return plain text. If Text or Keys, Text is selected for the Return type, each record will be separated by a single line containing '---'. When only returning keys, each key will be returned on a separate line.
- Json : to return Json data.
The Limit entry is used to specify the maximum number of records to return.
Records will be returned in rank order, with the closest matches being returned first.
The Assign To variable will receive the returned results. Use the Test button to test searches.
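For example (the keys shown are illustrative), a search with Return set to Keys, Return As set to Text and a Limit of 3 returns each matching key on its own line, closest match first:
quote-2024-118
quote-2024-093
quote-2023-422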
Delete
This operation will delete a full text record. Specify the Key to delete. The Assign To value will contain the deleted key, or blank if no record was deleted.
Drop
This operation will delete the entire collection.
Update Excel File
Update a Microsoft Excel Spreadsheet file.
Enter or select the Excel File Name to update. Optionally enter the Worksheet Name to update. If no worksheet is specified then the first worksheet in the Excel file will be updated.
ThinkAutomation will create the file if it does not already exist. Excel does not need to be installed on the ThinkAutomation computer for this Action to work.
Select the Operation:
Add New Row
Add a new row to the worksheet.
If the Automatically Add A New Row Using Extracted Fields option is selected, then a new row will be automatically added with a column for each Extract Field action.
The spreadsheet will be in the following format:
| ExtractedField1 | ExtractedField2 | ExtractedField3 |
|---|---|---|
| Value | Value | Value |
If the worksheet contains no rows, a header row will first be created containing each extracted field name.
A new data row will then be appended after any existing rows, containing each extracted field value.
If the Automatically Add A New Row Using Extracted Fields option is not selected, you must specify the Value and Header for each column number. Specify each Column Number that you want to add (starting from column 1), then specify its Value and Header. The value can be a fixed value, a %variable% replacement, or a combination. You can skip columns (eg: add columns 1, 3 & 5 only). The Header will only be used if the worksheet contains no existing rows.
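As an illustration (the column choices and the %OrderTotal% variable are examples only), the grid might be completed as follows, leaving column 3 blank in the new row:
| Column Number | Value | Header |
|---|---|---|
| 1 | %Msg_Date% | Received |
| 2 | %Msg_FromEmail% | From |
| 4 | %OrderTotal% | Total |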
Update Specific Cells
Select the Update Specific Cells option to update individual cells within a worksheet. You can then specify cell references (eg: A1, B4, E23 etc) and the value to assign to each cell. The value can be a fixed value or %variable% replacement (or combination).
Insert CSV Data
This option enables you to insert CSV data into a worksheet. In the Insert At Cell Reference entry, enter the cell reference where the inserted rows and columns should start. If this entry is blank then the data will be inserted starting at the last row used in the existing worksheet (or at the first row for a blank spreadsheet).
If the Shift Rows Below Down option is enabled then any rows below the specified Insert At Cell Reference will be shifted down to make room for the inserted rows (does not apply if no Insert At Cell Reference specified or if the spreadsheet is blank).
Specify your CSV data in the CSV Data Or File Path entry. This can be a %variable% replacement containing CSV data or a file path. If a file path is used then the CSV data is read from the file.
Enable the Has Column Headers option if the CSV data contains column headers.
Enable the Insert Headers option if you want the CSV column headers to be inserted in the spreadsheet.
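For example (the data is illustrative), with Has Column Headers and Insert Headers both enabled, the following CSV data:
Product,Qty,Price
Widget,2,10.50
Gadget,1,24.00
would insert a header row followed by two data rows, starting at the Insert At Cell Reference (or after the last used row if no reference is specified).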
If you simply want to convert a CSV file into an Excel Document you can use the Convert Document action.
Recalculate
Enable the Recalculate option if you want ThinkAutomation to recalculate all formulas in the Excel file after any updates. Normally you would not need to enable this option (since Excel itself will recalculate when the file is opened). However, if you make use of the Excel file in subsequent actions and any updated cells are used in formulas, you should enable this option to ensure the Excel file is fully updated before it is saved.
Password
Enter a Password if you are updating an existing Excel file that is password protected. If you are creating a new Excel file, it will be saved with the password (if one is specified). Note: Updating a password protected Excel file is slower.
Lookup From Excel Or CSV
Read specific cell and cell range values from an Excel compatible spreadsheet or CSV file, and assign returned values to multiple ThinkAutomation variables.
Enter or select the Excel Or CSV File Path to read. You can optionally specify a Password if the Excel file is password protected.
In the Cell Assignments grid you can list one or more Cell References to read from the Excel or CSV file. For each you can specify the Worksheet. If the Worksheet is blank then the first worksheet will be used (worksheets do not apply for CSV files). In the Assign To column specify the ThinkAutomation variable to receive the value of the specified cell reference.
Cell References can be a single cell (eg: B12), a cell range or a Named Range/Named Cell. In Excel you can define a Name for a range or cell so that if rows/columns are inserted before it, the 'Name' still references the existing cells even though their row/column references may have changed. This is useful if you need to read a total cell where new rows are added regularly.
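For example (the name is illustrative), if your workbook defines a Named Cell called GrandTotal for the invoice total cell, you can enter GrandTotal as the Cell Reference and it will continue to return the correct cell even after new rows are inserted above it.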
| Cell Reference Examples | Details |
|---|---|
| B12 | Returns the single cell value for cell B12. |
| A1:C3 | Range that includes cells from the top-left cell A1 to the bottom-right cell C3. |
| B3:F{last} | Range that includes cells from the top-left cell B3 to the bottom-right cell in column F on the last row. |
| A:A | Range that includes the entire column A. |
| A:C | Range that includes the entire columns from A to C. |
| 1:1 | Range that includes the entire row 1. |
| 1:3 | Range that includes the entire rows from 1 to 3. |
| 3:{last} | Range that includes the entire rows from 3 to the last row in the sheet. |
Cell Ranges To CSV Text
When a cell reference is specified that returns multiple cells, the data will be returned as CSV text, with the number of rows and columns depending on the selected range (without column headers, unless the range itself starts with the column headers).
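For example (the cell values are illustrative), reading the range A1:C3 from a worksheet of numeric data would return CSV text such as:
10,20,30
40,50,60
70,80,90
Each worksheet row in the range becomes a CSV row and each cell becomes a column value.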
You can use the {last} marker in ranges to use the highest available row number.
Enable the Compress Returned CSV option if you want to remove any blank rows and columns from any CSV data extracted from cell ranges.
Counter
Update a counter value for any name/value pair and optional period.
This action can be used to update a counter value that is stored by the ThinkAutomation Server. After updating the counter the new counter value can be returned to a variable.
Enter the Counter Name. The name is limited to 50 characters and will be truncated if longer.
Enter the Counted Value. This entry is optional. If a Value is specified then a separate counter will be maintained for each unique name/value pair. The Value can contain %variable% replacements.
Counters are shared within a Solution, so if two Automations within the same Solution update the same name/value pair, the same counter will be used.
Period
Counters can be maintained per:
| Period | Details |
|---|---|
| Static | No period |
| Day | The month day number (1-31) |
| Day Of Week | The day of week (0-6, 0=Sunday, 1=Monday etc) |
| Year And Month | The year and month (2022 01, 2021 02 etc) |
| Year | The year |
| Month | The month number (1-12) |
| Year And Week Number | The year and week number (0-52) (ISO 8601) |
| Hour | The hour (0-23) |
| Minute | The minute (0-59) |
The period date is based on the message date %Msg_Date%.
If a period is selected then a new counter is created for each unique period. A Static counter has no period.
Operation
You can Increment, Decrement, Get or Set a counter. If Set is selected then you can specify a value.
When a message is processed, ThinkAutomation will lookup the unique name/value pair. A new counter will be created if the name/value pair does not exist.
Any arbitrary data can be counted. For example:
| Aim | Usage |
|---|---|
| Messages by day of week | Leave Value blank and set the Period to Day Of Week. |
| Messages received by email address per year/month | Set the Value to %Msg_FromEmail% and the Period to Year And Month. |
| Messages by any extracted field or variable value (payment types, currencies etc) | Set the Value to a %variable% and select any Period. |
| Orders by company name per year | Set the Value to a %variable% holding a company name and the Period to Year. |
| Outgoing emails sent by email address per month | Set the Value to the variable containing the receiver address and the Period to Month. |
The counted Value has a maximum size of 250 characters. Values larger than this will be truncated before being counted.
Select a variable to receive the updated counter value from the Assign Counter Value To list.
Getting All Counter Values
You can return all counter values for a given counter name/value using the Get All operation. This will return all periods currently counted. The Get All operation returns the data in CSV format to a variable. If no Value is specified then all values/periods will be returned.
You can use the Set Variable action with the Convert CSV To Markdown Table operation to convert the CSV data to a Markdown table if you want to create an Automation to quickly return and view counter values.
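As an illustration only (the actual column layout of the returned CSV may differ), a Get All for a counter maintained per Year And Month could return CSV similar to:
Period,Count
2024 11,132
2024 12,157
which the Convert CSV To Markdown Table operation would render as:
| Period | Count |
|---|---|
| 2024 11 | 132 |
| 2024 12 | 157 |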
Viewing Counters Using The Studio
Counters can be viewed using the ThinkAutomation Studio. Open the Solution properties page and click the Counters tab. You can clear a counter using the Clear button.