February 12, 2025

Chains 101: Core Components

Just like a desktop computer, Chains is built from foundational core components - Environments, Connectors & Connections, Commands, Links, and Inputs & Outputs - that must be understood in order to realize the full value of this powerful application.

Introduction

Workiva Chains is built on a set of foundational core components that must be understood in order to realize the full value of this powerful application.  In this article we explore each of these components and why they matter to a well-designed integration or automation built with a Chain.

As we discussed in our Chains Overview article, a Chain is a sequence of actions used to achieve an outcome.  A Chain is the component that is executed in order to integrate data or automate a process.  The following components define how a Chain is built and how it operates.


Environments

Lifecycle Management describes managing a product or process from beginning to end.  Environments allow us to apply lifecycle management principles to the creation, testing, and maintenance of Workiva Chains.

Consider a new integration that needs to be created to load data from a Human Capital Management (HCM) system like Workday to the Workiva ESG reporting solution.  While this integration is being created, we want to avoid one of our users errantly running it before we have fully tested that it is working properly.  We would only make it available for use in a production process after it has been properly tested and the data is validated.  

Now consider an example where we need to make changes to an existing process due to changes in the business that have been announced but are not yet in place.  Our existing process needs to continue to be used until the change takes effect, but we need to be able to cut over to the new process immediately afterward.

Lifecycle management can be applied to both of these examples, and Environments are the key technical component of Chains that delivers it.  You can think of an Environment almost like a directory or folder.  Chains can be organized into a Development Environment, a Testing Environment, and a Production Environment, and there is built-in application functionality that allows Chains to be moved between Environments.

Every Workiva Chains Workspace has a DEV Environment by default.  An organization can create up to 75 Environments; however, we caution against such a large number, as it often signals that a customer is using Environments to organize by business process rather than by lifecycle management stage.


Connectors and Connections

Terminology matters!  

A Connector is the software that enables Chains to interact with 3rd party applications.  Connectors are built and maintained by Workiva.  In most cases, Connectors utilize the Application Programming Interface (API) that the 3rd party vendor makes available.  An API is simply a programmatic way to interact with the application.  For example, Oracle EPM has a REST API that allows actions such as loading data and running calculations to be performed without a user needing to interact with the application through the user interface.  
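
To make the idea of an API concrete, here is a minimal sketch in Python of the kind of request a Connector makes behind the scenes.  The host, endpoint path, and job details are hypothetical and are shown only to illustrate the pattern of calling a REST endpoint programmatically instead of working through the user interface.

    import requests

    # Hypothetical example: ask a planning application to run a calculation by
    # calling a REST endpoint rather than clicking through its UI.
    base_url = "https://epm.example.com/api/v1"  # placeholder host, not a real endpoint

    response = requests.post(
        f"{base_url}/jobs",
        auth=("integration_user", "********"),            # credentials a Connection would store
        json={"jobType": "RULES", "jobName": "CalcAll"},  # illustrative payload only
        timeout=30,
    )
    response.raise_for_status()
    print("Job submitted:", response.json())

A Connector packages this kind of interaction for you, so the Chain builder configures Commands rather than writing requests like this by hand.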

Before we proceed, let’s address a common misunderstanding about Connectors and APIs.  Workiva Chains do not allow the customer to create a new API or API function (technically called an endpoint) for either the Workiva platform or the 3rd party application.  The Connector is only able to utilize publicly available APIs and their existing endpoints.

If a vendor has a publicly available API and a Connector does not yet exist, a customer is unable to create a new Connector.  That said, integration with that system may still be possible through the use of a generic web Connector (HTTP) and we will cover this in a future article.  If there is a critical application in your organization’s ecosystem and a Connector does not exist, you should contact your Workiva Customer Success Manager and ask that a request be logged with the Chains Product Management team.  Connectors are prioritized based on multiple factors including customer requests.  Be sure your voice is heard.  

Additionally, if a new endpoint is added to the API by the 3rd party vendor, Workiva needs to add the ability to use the endpoint to the Connector.  We’ll discuss endpoints further in the Commands section.  

A Connection is a user-defined configuration of a Connector.  The Connection stores the information the Connector needs to interact with the customer’s specific deployment of the application.  Credentials (e.g., user name and password), the web address of the application, and the name of the application (if applicable) are examples of the information defined in a Connection.

A Connection can be limited to specific Environments.  This is a powerful way to ensure data quality.  Let’s consider a common enterprise application landscape.  There is a production and non-production (sometimes called development) instance of an application such as Oracle Planning and Budgeting Cloud.  In Chains, a separate Connection is created for production and non-production not only because the application web address is different but also as a mechanism to prevent non-production data from being utilized in a production process.  We outline this relationship in more detail in the Commands section.
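
As an illustration, a Connection can be pictured as a small bundle of configuration.  The sketch below is not how Chains stores Connections internally; it simply models, with made-up values, the kind of information a Connection holds and how two Connections for the same Connector can be scoped to different Environments.

    from dataclasses import dataclass

    @dataclass
    class Connection:
        """Illustrative model of a Connection: the configuration a Connector
        needs to reach one specific deployment of an application."""
        name: str
        base_url: str       # web address of the application
        username: str       # credentials (the secret itself would be stored securely)
        environments: list  # Environments this Connection may be used in

    # Hypothetical production and non-production Connections for the same Connector.
    oracle_pbcs_prod = Connection(
        name="Oracle PBCS - Production",
        base_url="https://pbcs-prod.example.com",
        username="svc_chains_prod",
        environments=["PROD"],
    )
    oracle_pbcs_nonprod = Connection(
        name="Oracle PBCS - Non-Production",
        base_url="https://pbcs-test.example.com",
        username="svc_chains_test",
        environments=["DEV", "TEST"],
    )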


Commands

Technically, Commands are referred to as nodes in the Chain.  There are four types of nodes - Commands, Events, Triggers, and Command Groups.  In this article we focus on Commands but will explore all node types in a future article.

Each Connector has a set of Commands that can be added to a Chain. Commands are the primary objects that perform an action.  Example actions include extracting data from a database table by running a SQL query, executing a calculation defined in the 3rd party application, or importing data to a Wdata table.  

Another way to think about Commands is to think about a spreadsheet formula.  The function (e.g., SUM, VLOOKUP) is the Command itself.  Like a function, the Command has arguments that need to be specified.  For a VLOOKUP function, the arguments are the lookup value, the lookup range, the lookup column, and the match type.  A Command works in the same way, and each Command has its own unique set of arguments (called Inputs) that need to be supplied.  We’ll discuss arguments more in the Inputs & Outputs section.
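
To extend the spreadsheet analogy, the sketch below models a Command as an ordinary function whose named parameters play the role of Inputs.  The function name and parameters are invented for illustration and are not actual Chains Commands.

    # A VLOOKUP takes arguments: lookup value, lookup range, column, match type.
    # A Command works the same way: it is an action with its own named Inputs.
    def run_sql_query(connection, query, output_format="csv", timeout_seconds=300):
        """Illustrative 'Command': each parameter is an Input that must be supplied."""
        print(f"Running against {connection} (format={output_format}, timeout={timeout_seconds}s)")
        # ...the real work would happen here and produce an Output (e.g., a data file)
        return "query_results.csv"

    # Configuring the Command means supplying its Inputs, just like filling in
    # the arguments of a spreadsheet function.
    result_file = run_sql_query(
        connection="HR Database - Non-Production",
        query="SELECT employee_id, department, hire_date FROM employees",
    )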

Commands and their arguments are well documented on the Workiva help center.  To access documentation for a Command, select the Connector and then select the Commands link.  The page outlines all Commands for the Connector.  Here is the Workiva Connector Commands documentation.

When a Chain is being built, the Connection to use when the Chain runs is specified as each Command is configured.  Imagine a Chain is being built in the Chains DEV Environment.  The Oracle PBCS Command is configured to use the Oracle PBCS Connection that has been created for the non-production instance.  When the Chain is tested and ready for production use, the promotion process allows the Chain builder to replace the Oracle PBCS non-production Connection with the Oracle PBCS production Connection as the Chain is migrated to the Production Environment.  Scoping the Connections a Command can use to a specific Environment mitigates this risk.
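
The promotion step can be pictured as swapping which Connection a Command points at.  The mapping below uses made-up names to show the idea; in Chains the swap is handled by the built-in promotion workflow rather than by code.

    # Hypothetical mapping of Environment -> Connection for one Command.
    connection_by_environment = {
        "DEV":  "Oracle PBCS - Non-Production",
        "TEST": "Oracle PBCS - Non-Production",
        "PROD": "Oracle PBCS - Production",
    }

    def resolve_connection(environment: str) -> str:
        """Pick the Connection appropriate for the Environment the Chain runs in."""
        return connection_by_environment[environment]

    print(resolve_connection("DEV"))   # Oracle PBCS - Non-Production
    print(resolve_connection("PROD"))  # Oracle PBCS - Production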

As previously discussed, Connectors are built using the API of the 3rd party application.  Generally speaking, the Commands of a Connector invoke API endpoints.  It is important to know that not all endpoints are available as Commands.  There are different reasons for this, including cases where the vendor has added an endpoint to the API but the Connector has not yet been updated.  If you encounter this situation, contact your Workiva Customer Success Manager and ask that a request be logged with the Chains Product Management team for an update to the Connector.


Links

Links are used to define the flow of actions that the Chain performs.  Links have conditions - success, warning, failure, or any - that allow a Chain to be configured to handle Command outcomes.  This often overlooked functionality allows error handling to be easily added to a Chain, which increases the reliability and resiliency of the data management process.

By default, when a Link is created between two Commands, it has a success condition.  This means the Chain will execute the next connected Command(s) when the preceding Command completes successfully.  This is a good place to discuss Command results.

In data management, you need to consider both technical and functional execution results.  For example, consider a query that is run against a relational database.  The query executes without any technical errors.  Can we consider that a success?  Maybe.  If the query was expected to return data, then functionally we would consider this a failure.  There are varying reasons why a Command could technically execute successfully and still be considered a functional failure.  In this specific example, perhaps the query is not defined properly, or the data was never loaded to the database.
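
The distinction between technical and functional results can be modeled as two separate checks, as in the sketch below.  The row threshold and the error value are placeholders chosen for illustration.

    def evaluate_result(rows_returned, error=None, minimum_expected_rows=1):
        """Illustrative check: a step can succeed technically yet fail functionally."""
        if error is not None:
            return "technical failure"   # the query itself errored
        if rows_returned < minimum_expected_rows:
            return "functional failure"  # it ran, but returned no usable data
        return "success"

    print(evaluate_result(rows_returned=1250))   # success
    print(evaluate_result(rows_returned=0))      # functional failure
    print(evaluate_result(0, error="timeout"))   # technical failure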

Some Commands have additional business logic embedded to account for both technical and functional failures.  In these cases, even if the Command technically executes successfully, the Command result will be a failure.  Unfortunately, there is not a defined list of the Commands that have this logic embedded, so testing is generally required.

A Link condition can be changed simply by double-clicking the Link and setting a new condition.  For example, a Chain creator wants to add a step that emails a shared service center support team distribution list when the Command to import data to a Wdata table fails.  When the import is successful, the Chain continues by executing the next Command to refresh connections.  In this example, two Links would be created from the Import Data Command.  The success Link would connect to a Refresh Connections Command, and the failure Link would connect to a Send Email Command.
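
In code terms, the success and failure Links behave like the branches of a conditional.  The sketch below models the example above with hypothetical helper functions; in Chains this branching is configured visually on the Links rather than written as code.

    def import_to_wdata_table(file_path):
        """Placeholder for the import Command; returns True on success."""
        print(f"Importing {file_path}...")
        return True

    def refresh_connections():
        print("Refreshing connections...")                  # success path

    def send_email(distribution_list, subject):
        print(f"Emailing {distribution_list}: {subject}")   # failure path

    # The success Link and the failure Link, expressed as a conditional.
    if import_to_wdata_table("hcm_extract.csv"):
        refresh_connections()
    else:
        send_email("shared-services-support@example.com", "Wdata import failed")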

As an aside, those with knowledge of Chains and Commands might ask: why not use a Command notification to send the alert?  That is a fair question, but remember, we want to send the email to a distribution list, and only named users can be alerted through Command notifications.

Links are a critical component of Chains.  They define the process flow and provide simple mechanisms to improve the resiliency of a Chain.  


Inputs & Outputs

Inputs and Outputs are perhaps the most important components of Chains.  Inputs control how a Command functions, and Outputs are the information generated when the Command executes.  Together, they are the basis for how a Chain is built and performs its tasks.

An Output is what is created when a Command is executed, although not all Commands produce an Output.  An Output can be a simple result status (success/failure), a count of how many rows are in the resulting data file, a JSON response, or a file containing data.  Commands can have multiple Outputs.  For example, when using the Workiva Get Sheet Data Command, the Outputs are the row count and a file containing the data read from the Workiva Spreadsheet.
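
The idea of a Command producing more than one Output can be sketched as a function returning several values.  The names below mirror the Get Sheet Data example but are illustrative, not the Connector’s actual interface.

    import csv

    def get_sheet_data(sheet_name):
        """Illustrative stand-in for a get-sheet-data step: returns two Outputs,
        a row count and a file containing the data that was read."""
        rows = [["Account", "Amount"], ["Travel", 1200], ["Software", 800]]  # pretend sheet contents
        output_file = f"{sheet_name}.csv"
        with open(output_file, "w", newline="") as handle:
            csv.writer(handle).writerows(rows)
        row_count = len(rows) - 1  # exclude the header row
        return row_count, output_file

    count, data_file = get_sheet_data("q1_expenses")
    print(count, data_file)  # 2 q1_expenses.csv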

Inputs are specified when configuring a Command.  They are the information that is needed for the Command to perform its operation.  An Input can be a hard-coded value, a variable value, or an Output from another Command in the Chain.  It is the latter that is the foundational concept of Chains.  

As we have outlined, Chains are a series of actions that are performed to achieve an outcome.  Data moves through the Chain with each Command performing some operation on the data.  Let’s consider a simple example of integrating data from Salesforce and loading it to a Wdata Table.  

The first step is to extract the data from Salesforce using a SOQL query.  The SOQL Query Command executes a query that has been entered in the Command configuration and produces a data file as its Output, based on the results of the query.

Next, the Map Headers Command is used to change the headers of the file generated by Salesforce.  To do this, the Command is configured to use the Output from the SOQL Query Command as the Input File argument of the Map Headers Command.  The Map Headers Command produces an Output file with the changed headers.

Then the data file needs to be uploaded to Wdata.  The Output of the Map Headers Command is used as the Input for the File argument of the Create File Command.

Finally, the data is imported into the Wdata table.  The Create File Output is a JSON response with information about the file that was uploaded to Wdata.  One of the elements of that JSON response is the ID of the file that was created.  That element is used as the File ID argument of the Import File into Table Command.
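
Putting the four steps together, the sketch below traces how each Output becomes the next Command’s Input.  The function names and the shape of the JSON response are stand-ins chosen to mirror the narrative above, not the actual Connector interfaces.

    # Step 1: extract from Salesforce (the stand-in returns a path to a data file).
    def run_soql_query(query):
        return "salesforce_results.csv"

    # Step 2: change the file's headers; the Input is step 1's Output file.
    def map_headers(input_file, header_map):
        return "salesforce_results_mapped.csv"

    # Step 3: upload the file to Wdata; returns a JSON-like response about the file.
    def create_file(file_path):
        return {"id": "file-1234", "name": file_path}

    # Step 4: import the uploaded file into a table, using the file ID from step 3.
    def import_file_into_table(table_id, file_id):
        print(f"Importing file {file_id} into table {table_id}")

    # The Output of each step feeds the Input of the next - the core idea of a Chain.
    extract = run_soql_query("SELECT Id, Name, Amount FROM Opportunity")
    mapped = map_headers(extract, {"Name": "Opportunity Name"})
    upload = create_file(mapped)
    import_file_into_table("revenue_table", upload["id"])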

In this simple example, you can see how Inputs and Outputs flow through the various steps of a Chain.  This is a core concept and one that you must understand in order to effectively utilize Chains.  


Summary

You likely noticed throughout this article how these foundational components are interrelated.  Connections can be scoped to Environments.  Commands use Connections.  Links control the process flow of Commands in a Chain.  Outputs from one Command become Inputs to subsequent Commands in the Chain.

As with any technology, a solid understanding of the core application components ensures that you utilize it properly and achieve the maximum value from your investment.  We encourage you to contact us if you would like to learn more about how Chains or the Workiva Data Management Suite can improve your organization’s data management processes.