MuleSoft & Event Driven Architecture — Apisero

Apisero · 12 min read · Jul 19, 2022
1. Introduction

The objective of this document is to provide an in-depth analysis of how Event Driven Architecture (EDA) can be used within the Anypoint Platform, especially while designing APIs based on API-led connectivity principles. At the same time, it stresses that EDA is not a silver bullet for all scenarios. This document therefore attempts to help architects and developers narrow down the use cases where EDA can be implemented to gain the maximum benefit, and to provide a glimpse of how EDA can be used to solve everyday real-world challenges and accomplish event-driven services.

2. EDA definition

What is an event?

At its most basic level, an event is an occurrence or a change in the state of a system that is of particular importance. Events are immutable pieces of information. They cover a wide spectrum, from a simple user keystroke or mouse click to an application-generated event, an airport gate change or a financial transaction.

Why do we care about it?

Sync request/reply patterns are useful when we know what to ask and what to expect, but they introduce tight coupling. In this modern era data is the new oil, yet organizations often struggle to connect their many sources of data. Data silos are the biggest obstacle to digital transformation. Modern enterprises require data to be shared among multiple systems, and sync request/response patterns become a limiting factor or bottleneck in such situations.

In today’s world we have a plethora of events generated from various sources: IoT devices, applications, people, bots and so on. These events should be correlated and integrated in a rapid, agile way to make them more predictive, preventive and actionable, and to keep abreast of the competition.

Not only this, in the recent past we have seen many enterprises with monolithic legacy applications that need to adopt cloud capabilities, integrate with modern SaaS applications and expose critical data silos. This is where EDA helps, as discussed later in this blog.

What is EDA?

For the real-world problems mentioned above, the answer comes in the form of Event Driven Architecture. It is primarily a software architecture, or design pattern, in which a software component responds to one or more event notifications. Decoupled applications can asynchronously publish and subscribe to events, usually accomplished with a combination of a modern event broker and an iPaaS solution like MuleSoft.

What’s the future of EDA?

Events never travel, they just occur; everything else needs to be portable. The more each service can stand alone, the more resilient and fault-tolerant the system is. The economics of this programming approach make greater adoption in the field likely.

Gartner identified event-driven architecture as a top tech trend for 2018 and predicted that by 2022, event notifications will form part of over 60% of new digital business solutions. By 2022, over 50% of business organizations will participate in event-driven digital business ecosystems.

Hyperautomation is about automating anything that can be automated. For example, the #1 use case for artificial intelligence, according to Gartner, is process automation. On the path to hyperautomation lies a spectrum of technologies, from RPA to AI. Event processing via APIs and event-driven architecture is seen as an underlying technology that will underpin the march towards hyperautomation. https://www.gartner.com/document/3867165

3. EDA in Anypoint API led Connectivity

API-led connectivity is a standardized way to connect data and applications with reusable, composable APIs, each designed to perform a specific role:

1. System API - unlocks data from backend systems.

2. Process API - composes the unlocked data into processes and takes care of business logic and transformations.

3. Experience API - delivers an experience for an intended audience; the API is designed with a specific user experience in mind, e.g. mobile users or desktop applications.

Each of the Anypoint APIs can publish its events to a common queue or topic, gaining the advantages of event-driven services and becoming more autonomous. However, certain rules or best practices need to be adhered to in order to maintain the API-led connectivity approach, as detailed below.

i. Best Practices

  • Any API that publishes events should define its own queues
  • Destinations belong logically to the same API-led connectivity tier, e.g. System APIs publish system events, Process APIs publish process events and Experience APIs publish experience events
  • Events must be consumed from lower tiers only, and not vice versa, e.g. a Process API may consume events published by a System API, but a System API should not consume events published by a Process API
  • An Experience API should not consume directly from a System API, as this bypasses the Process tier

ii. How to implement EDA in API led Connectivity

Now, with the help of an example of API-led connectivity, let us discuss common scenarios where EDA can be implemented, irrespective of whether the broker is internal (VM, Anypoint MQ) or external (Solace, Kafka, RabbitMQ etc.).

Let us take a simple example where a user triggers an event or notification and does not expect a response, i.e. fire-and-forget. We have the following APIs based on the API-led connectivity approach.

1. An experience API (EAPI-1) is exposed to an application/user. This layer takes in the request, does basic checks and calls the Process layer API (PAPI-1).

2. Process layer API (PAPI-1) transforms the data and submits it to the System API (SAPI-1).

3. SAPI-1 validates the data against a DB. Once successfully validated, it calls the next process API (PAPI-2).

4. PAPI-2 converts data into required format and calls the respective System APIs which connect to backend systems

5. SAPI-2 has the sole responsibility of updating data in Salesforce.

6. SAPI-3 sends analytical data to Splunk

With the above example, let us discuss where we can fit EDA within API-led connectivity to improve the overall user experience and overcome the drawbacks of tight coupling caused by sync REST request/response. The idea is to use sync communication where necessary and utilize eventing as much as possible; eventing is not applicable in scenarios where an immediate response is required.

a. Latency and User experience

Problem: In a chain of microservices the overall response time is the sum of the response times of all the APIs in the chain. In API-led connectivity this can pose a problem if any of the process or system APIs is slow. For example, in the above scenario, if the DB response is slow the delay ripples all the way up to the experience API.

Solution: Introduce eventing. The experience API responds as soon as it has delivered the message to the message broker. Guaranteed delivery is ensured by the messaging layer, as it is able to persist the message until the DB update succeeds.

Benefits:

1. Experience API need not wait for all the backend processes to complete execution thus providing better user experience.

2. Eventual consistency is guaranteed.

3. Useful in scenarios that involve long running processes
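To make the idea concrete, here is a minimal sketch in plain Python. An in-memory `queue.Queue` stands in for the broker, a thread stands in for the backend consumer, and the API names mirror the example above; the timings are purely illustrative. The point is that the caller's latency covers only the publish, not the slow DB work:

```python
import queue
import threading
import time

# In-memory stand-in for a broker destination (e.g. a VM or Anypoint MQ queue).
broker_queue = queue.Queue()
results = []

def experience_api(request):
    """EAPI-1: publish the event and acknowledge immediately (fire-and-forget)."""
    broker_queue.put(request)          # the broker persists the message
    return {"status": "ACCEPTED"}      # respond without waiting for the backend

def system_api_worker():
    """SAPI-1: the slow backend work (e.g. DB validation) runs asynchronously."""
    event = broker_queue.get()
    time.sleep(0.2)                    # simulate a slow DB call
    results.append({"validated": event["id"]})

worker = threading.Thread(target=system_api_worker)
worker.start()

start = time.time()
response = experience_api({"id": 42})
elapsed = time.time() - start          # caller latency: publish only, not DB time

worker.join()                          # eventually the backend completes
```

In a real deployment the broker, not an in-process thread, provides the persistence and guaranteed delivery described above.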

b. Error Handling

Problem: Consider a scenario where one service calls or depends on another service, and the called service is unavailable or errors out. In this case the responsibility for retry, error handling or rollback logic most often falls on the calling service. This logic might need to be duplicated if the service depends on multiple APIs, each of which needs to be error-handled differently, adding complexity and maintenance overhead along with precious worker core consumption.

Solution: Introduce eventing. The calling API only needs to connect to the message broker and publish the event onto a queue or topic.

Benefits:

1. Calling service need not worry about whether the message has been received by the consumer. The message broker takes care of persisting the message until it is successfully consumed by the consumer service, thus increasing overall reliability and availability.

2. Overall developer efforts and worker cores/memory utilization is reduced as the complex retry logic, and data persistence is offloaded to the messaging layer.

3. If there are multiple instances of the target or consumer, the message can be redelivered to the next available instance if the first instance is down.

4. Overall user experience is improved.

5. Redelivery is a common trait of message brokers, hence Mule workers can save the costly CPU cycles otherwise required to implement redelivery/retry within code.
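The broker-side redelivery described above can be sketched as follows. Again an in-memory queue stands in for the broker, and the failure simulation and redelivery limit are illustrative assumptions; the consumer simply acks (does nothing) on success and nacks (requeues) on failure:

```python
import queue

# In-memory stand-in for a broker queue with redelivery on failure.
broker = queue.Queue()
broker.put({"order": 1})

MAX_REDELIVERIES = 5
attempts = 0
processed = []

while not broker.empty():
    message = broker.get()
    attempts += 1
    try:
        if attempts < 3:
            # Target system (e.g. Salesforce) unavailable on the first two tries.
            raise ConnectionError("target unavailable")
        processed.append(message)      # success: the message is acknowledged
    except ConnectionError:
        if attempts < MAX_REDELIVERIES:
            broker.put(message)        # negative ack: the broker redelivers
        # otherwise the broker would route it to a dead-letter queue
```

The calling service never runs this loop; the broker does, which is exactly the offloading described in benefit 2.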

c. Scalability and Resource utilization

Problem: Imagine a scenario where PAPI-2 involves a lot of complex transformations and business logic. Especially during peak hours, this can spike the resource utilization of all the upstream synchronously dependent services. The effect cascades across all the services just because one service is a bottleneck or heavily loaded in a sync request/response layout. This in turn may demand a scale-up of resources across the entire chain of services, which is not at all cost efficient.

Solution: Decouple PAPI-2 and scale it independently rather than scaling up the entire chain of services.

Benefits:

1. Since PAPI-2 is decoupled you need to increase resources or scale out only PAPI-2 and not the entire chain of APIs as now the cascading effect is negated.

2. Better user experience as the impact is not cascaded to experience API.

3. If the incoming rate of messages is higher than the rate at which PAPI-2 is able to process these messages, the message broker can throttle and persist messages for PAPI-2 to consume at its own pace without impacting other services.

4. Allows a more granular vertical scale up of PAPI-2 API.

5. Allows horizontal scaling of PAPI-2, as each individual message can be consumed by separate instances of PAPI-2 in a round robin fashion.

6. Allows load balancing of incoming payload across multiple instances of PAPI-2
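The competing-consumers behaviour behind the last two benefits can be sketched like this. Two threads stand in for two PAPI-2 instances sharing one queue; all names are illustrative. Each message is delivered to exactly one instance, which is what spreads the load:

```python
import queue
import threading

# One queue feeding two competing instances of PAPI-2.
work_queue = queue.Queue()
for i in range(6):
    work_queue.put({"msg": i})

processed_by = {"papi2-a": [], "papi2-b": []}

def papi2_instance(name):
    while True:
        try:
            message = work_queue.get_nowait()   # each message goes to one instance
        except queue.Empty:
            return                              # queue drained, instance goes idle
        processed_by[name].append(message["msg"])

threads = [threading.Thread(target=papi2_instance, args=(name,))
           for name in processed_by]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every message was processed exactly once, by one of the two instances.
total = sorted(processed_by["papi2-a"] + processed_by["papi2-b"])
```

Adding a third instance is just another consumer on the same queue; nothing upstream changes.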

d. Reusability

Problem: Imagine a scenario where an application sends customer data which is received by the experience API, forwarded to the process API and finally updated into a DB by the system API. Some time later this same data is required by other systems, e.g. Salesforce, SAP etc., including all the historical data from the beginning.

Solution:

For use cases where data reuse is forecast, the publishing API can publish the data onto a topic, from where messages can be broadcast to ’n’ number of new APIs that require the same data, as is the case in many enterprises today.

Benefits:

1. Drastically reduced development time as new APIs just need to plugin to the message broker and subscribe to data.

2. Complements the reusability principle of API led connectivity and can act as a reliable backbone in Anypoint network.

3. Most modern message brokers, e.g. Solace PubSub+ and Kafka, have built-in advanced replay capabilities that can replay all the previous messages from the beginning, or from a particular timestamp/message ID, for new applications to consume, completely eliminating the need to build separate APIs to sync data.

4. Also useful in a variety of scenarios where you can just plugin and listen to live data for fraud detection, audit, analytics, trend analysis etc… without impacting the main flow or reinventing the wheel.
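A minimal sketch of topic-based fan-out with replay, using a tiny in-memory class that stands in for a broker such as Solace PubSub+ or Kafka (the class and its methods are invented for illustration, not a real broker API). A consumer added later can catch up on history before receiving live events:

```python
class Topic:
    """In-memory stand-in for a broker topic with a retained message log."""
    def __init__(self):
        self.log = []           # the retained log is what enables replay
        self.subscribers = []

    def publish(self, event):
        self.log.append(event)
        for deliver in self.subscribers:
            deliver(event)      # broadcast: every subscriber gets a copy

    def subscribe(self, deliver, replay=False):
        if replay:              # a new consumer catches up from the beginning
            for event in self.log:
                deliver(event)
        self.subscribers.append(deliver)

topic = Topic()
db_sink, sfdc_sink = [], []

topic.subscribe(db_sink.append)             # the original DB consumer
topic.publish({"customer": "C-1"})
topic.publish({"customer": "C-2"})

# A Salesforce sync API added later replays history, then receives live events.
topic.subscribe(sfdc_sink.append, replay=True)
topic.publish({"customer": "C-3"})
```

The late subscriber ends up with exactly the same data as the original one, without any bespoke sync API.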

4. Common EDA Patterns

Apart from the above mentioned scenarios, below are some of the common EDA patterns that can be implemented within API-led connectivity to achieve robust architectural designs. We will not go in depth, as these patterns are widely discussed, and it is up to the architect to decide how best to utilize them within the API-led connectivity framework for maximum benefit.

1. CQRS Pattern - Command Query Responsibility Segregation (CQRS) is the segregation of the responsibilities of the commands and queries in a system. That means we slice the application logic vertically and, in addition, segregate state mutation (command handling, i.e. writes) from data retrieval (query handling, i.e. reads).
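A minimal CQRS sketch in plain Python (the event, command and model names are illustrative). The write side records events; the read side maintains a separate, query-optimized model projected from them:

```python
# Write side: commands mutate state only by emitting events.
event_log = []

def handle_create_order(order_id, amount):
    event_log.append({"type": "OrderCreated", "id": order_id, "amount": amount})

# Read side: a denormalized read model built by projecting the events.
read_model = {}

def project(event):
    if event["type"] == "OrderCreated":
        read_model[event["id"]] = {"amount": event["amount"], "status": "NEW"}

def query_order(order_id):
    return read_model.get(order_id)    # queries never touch the write side

handle_create_order("O-1", 99.0)
for e in event_log:
    project(e)
```

In an event-driven setup the projection step is typically a subscriber on the broker, so reads and writes can scale independently.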

2. SAGA Pattern - Used for implementing transactions that span different APIs. Each business transaction that spans multiple services is a saga: a sequence of local transactions. Each local transaction updates the database and publishes a message or event to trigger the next local transaction in the saga. If a local transaction fails because it violates a business rule, the saga executes a series of compensating transactions that undo the changes made by the preceding local transactions. There are two ways of coordinating sagas: choreography, in which each local transaction publishes domain events that trigger local transactions in other services, and orchestration, in which an orchestrator tells the participants what local transactions to execute.
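An orchestration-style sketch of the compensating-transaction idea (plain Python; the step names and the card-validity check are illustrative). Each step pairs a local transaction with the compensation that undoes it:

```python
def reserve_inventory(ctx): ctx["reserved"] = True
def release_inventory(ctx): ctx["reserved"] = False   # compensation

def charge_payment(ctx):
    if not ctx["card_valid"]:
        raise ValueError("payment declined")           # business rule violated
    ctx["charged"] = True
def refund_payment(ctx): ctx["charged"] = False        # compensation

# Each saga step: (local transaction, its compensating transaction).
saga = [
    (reserve_inventory, release_inventory),
    (charge_payment, refund_payment),
]

def run_saga(ctx):
    done = []
    for action, compensate in saga:
        try:
            action(ctx)
            done.append(compensate)
        except Exception:
            for undo in reversed(done):    # roll back completed steps in reverse
                undo(ctx)
            return "ROLLED_BACK"
    return "COMPLETED"

order_ok = {"card_valid": True}
order_bad = {"card_valid": False}
outcome_ok = run_saga(order_ok)
outcome_bad = run_saga(order_bad)
```

In an event-driven saga each `action` would publish an event that triggers the next service, and compensations would be triggered by failure events rather than an in-process loop.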

3. Event Sourcing - Event sourcing persists the state of a business entity, such as an Order or a Customer, as a sequence of state-changing events. Whenever the state of a business entity changes, a new event is appended to the list of events. Since saving an event is a single operation, it is inherently atomic. The application reconstructs an entity’s current state by replaying the events.
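A minimal event-sourcing sketch (the event names are illustrative): state is never stored directly, only ever derived by replaying the append-only event log:

```python
events = []

def append_event(event):
    events.append(event)       # appending one event is a single, atomic operation

def current_state():
    """Rebuild the Order's current state by replaying every event in order."""
    state = {"status": None, "items": []}
    for e in events:
        if e["type"] == "OrderPlaced":
            state["status"] = "PLACED"
        elif e["type"] == "ItemAdded":
            state["items"].append(e["item"])
        elif e["type"] == "OrderShipped":
            state["status"] = "SHIPPED"
    return state

append_event({"type": "OrderPlaced"})
append_event({"type": "ItemAdded", "item": "SKU-7"})
append_event({"type": "OrderShipped"})
```

Because the log is the source of truth, a broker with retention/replay (as discussed in the reusability section) is a natural fit for the event store.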

4. DB per Service - One of the core characteristics of the microservices architecture is the loose coupling of services, so each service keeps its own database. However, there may be scenarios where data needs to be synced across the different databases. In this case a copy of the data is published to an event broker by the API and consumed at the other end by another API, which syncs its own DB.

5. AsyncAPI

i. Introduction to AsyncAPI

Just like we define a RAML or OAS specification for REST-based APIs, we have the AsyncAPI specification for event-driven APIs. It is the industry standard for defining asynchronous APIs, and it aims at building the future of Event-Driven Architectures (EDA) with tools to easily build and maintain your event-driven architecture.

The AsyncAPI Specification is a project used to describe and document event-driven APIs in a machine-readable format. It’s protocol-agnostic, so you can use it for APIs that work over any protocol. The AsyncAPI Specification defines a set of files required to describe such an API. These files can then be used to create utilities, such as documentation, integration and/or testing tools.

The AsyncAPI specification does not assume any kind of software topology, architecture or pattern. Therefore, a server MAY be a message broker, a web server or any other kind of computer program capable of sending and/or receiving data. However, AsyncAPI offers a mechanism called “bindings” that aims to help with more specific information about the protocol and/or the topology.

Ref: AsyncAPI Initiative for event-driven APIs

ii. Designing AsyncAPI in Anypoint

AsyncAPI specification can be defined in Anypoint designer in YAML or JSON format. At a high level the document provides details of the message schemas, producer or consumer application, the channel (queues), broker details, security etc which when published to Exchange provides developers with the necessary information to publish or subscribe to events.

Currently the option to scaffold it into Studio is not available. However, there are multiple tools on the internet to import an AsyncAPI definition and generate code for popular languages.

Below are the high level steps

1. Navigate to Design Center and click on Create New→New AsyncAPI

2. Select format YAML or JSON

3. Define the AsyncAPI and publish it to Exchange.

High Level AsyncAPI definition

Below is a simple AsyncAPI definition snippet to get started.
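As a starting point, a minimal AsyncAPI 2.0 document might look like the following sketch (the title, server URL, channel name and payload fields are illustrative assumptions, not taken from the original article):

```yaml
asyncapi: '2.0.0'
info:
  title: Customer Events API
  version: '1.0.0'
  description: Publishes an event whenever a customer record is created.
servers:
  production:
    url: broker.example.com:5672      # illustrative broker endpoint
    protocol: amqp
channels:
  customer/created:
    subscribe:
      summary: Receive an event when a new customer is created.
      message:
        name: CustomerCreated
        contentType: application/json
        payload:
          type: object
          properties:
            customerId:
              type: string
            createdAt:
              type: string
              format: date-time
```

Once published to Exchange, the channel, message schema and server details give consumers everything they need to subscribe.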

6. MuleSoft Connectors for EDA

Anypoint Platform provides a plethora of connectors, available on Exchange, that enable developers to connect to multiple message/event brokers and messaging protocols. Common examples include the Anypoint MQ, VM, JMS, Apache Kafka, AMQP, MQTT, IBM MQ and Solace PubSub+ connectors.

7. Conclusion

Event-driven architecture can enhance API-led connectivity. The traditional request-driven model and the event-driven model are complementary. This combination of events and APIs gives rise to solutions that are more reliable, loosely coupled, scalable, reusable, robust, load-balanced and fault tolerant, while enhancing the overall user experience. It also makes the architecture future-proof, as new services requiring the same data can easily be incorporated into the Anypoint network.

It is definitely not a silver bullet; the idea is to use sync communication where necessary and utilize eventing as much as possible. Architects should consider adopting EDA at the very beginning of any new project to make full use of its benefits.

Originally published at https://apisero.com on July 19, 2022.
