Introduction to StreamSets and its Integration with Snowflake

Apisero
Oct 4, 2021

Author(s): Daksh Trehan, Sumit Chahal

What is StreamSets?

StreamSets is a dataflow performance management tool. It sits in the message-queue category of tools and enables end-to-end data integration, with the goal of building, monitoring, and managing smart data pipelines that deliver continuous data for DataOps.
Data is like oxygen for technology giants, and dataflow feeds vital business processes and applications. Data pipelines are like veins that carry data from source to destination, where insightful operations are finally performed.
But these pipelines often pose risks to organizations because of their complex, brittle, and black-box nature. They are frequently affected by data drift, and, because they are black boxes, changes in the data are hard to observe.
That is where StreamSets helps the industry: it provides continuous integration and delivery of data for DataOps, with always-on operational monitoring and built-in data protection.

What is DataOps?

DataOps is based on the idea of DevOps: it aims to automate the testing and deployment of data analytics, and it helps to integrate People & Processes with Infrastructure & Technology.

The goal is simple: to optimize the development & execution of data pipelines.

Data is the new fuel, and it powers organizations through data pipelines. Because of this potential, it is essential to collect clean, useful data.
Data pipelines follow a five-step approach, from collecting raw data to delivering insights.

But data pipelines often face the curse of the 3 Cs (Complexity, Crew & Coordination):

  • Growing demand for data
  • Complexity of data pipelines
  • Data drift
  • Less skilled workers
  • Slow delivery
  • Defective outcomes
  • High cost
  • Disappointed customers

DataOps is a process that combats these drawbacks of data pipelines by integrating data management practices with AI and continuous improvement.

It applies Agile & DevOps practices to rapidly turn new insights into production deliverables by bringing Data Engineers, Data Scientists, and Data Analysts together with the operations team, the Chief Data Officer, and Architects.

How does DataOps help teams?

As discussed, DataOps brings together the perspectives of the CDO, analysts, IT managers, stakeholders, data scientists, and operations.

  • CDO: clear, automated, regular audit trails with quality insights.
  • Analysts: better-quality input data, improved model control, and collaboration.
  • IT Managers: accelerated software development, with fewer bugs and better alignment between the analytics and operations teams.
  • Stakeholders: quick response to change, a stronger analytics platform for power users, and happy customers.

DataOps vs DevOps

In short: DevOps optimizes how software is built and released, while DataOps applies the same automation and collaboration ideas to data analytics, adding data-specific concerns such as data quality and data drift.

How does StreamSets help the industry?

StreamSets is a modern DataOps platform, used primarily to avoid data drift and keep data pipelines flowing continuously.
Data drift can be defined as a change in the distribution of data over time. The change can range from broad shifts in the baseline dataset to small, precise amendments.
StreamSets employs two components:

StreamSets Data Collector (SDC): SDC is used to move data from one source to another. It provides a data pipeline authoring environment that lets you map, measure, and master the data in motion, and it focuses on building any-to-any data movement pipelines using a drag-and-drop approach. Pipelines can work with minimal or no schema and can filter and transform data on command.
Pipelines support several modes of running: standalone mode, cluster streaming mode, or cluster batch mode. The SDCs that run these pipelines can be installed on dedicated nodes or cluster nodes alike.
The SDC image is distributed as an RPM, a tarball, a Cloudera parcel, a Docker image, and custom VMs for various cloud environments.
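For example, the Docker route is essentially a one-liner. The commands below are a minimal sketch based on the image published on Docker Hub; the latest tag, the sdc container name, and port 18630 (SDC's default web UI port) are assumptions to check against the release you actually use:

```sh
# Pull the Data Collector image from Docker Hub
docker pull streamsets/datacollector:latest

# Start SDC in the background; the web UI is served on port 18630 by default
docker run --restart on-failure -p 18630:18630 -d --name sdc streamsets/datacollector dc
```

Once the container is up, the pipeline authoring UI is reachable at http://localhost:18630.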

StreamSets Dataflow Performance Manager (DPM): DPM takes on the challenge of operating end-to-end dataflows. It acts as the control panel and can operate thousands of dataflows. It can further be used to organize and visualize the dataflows residing in our infrastructure as graphs called topologies. Topologies track the health of our dataflows and let you manage performance by defining service-level agreements (SLAs) that ensure you are always delivering data in a timely and trustworthy manner.

How to use StreamSets?

By installing SDC, one can easily take advantage of StreamSets' modernized approach. Once up and running, SDCs provide a continuous dataflow. To work with more than one pipeline, connect all your SDC instances to a DPM and use it as the control manager for all dataflows.

Integrating Snowflake & StreamSets

  • Snowflake works as a data warehouse-as-a-service.
  • It focuses on delivering an efficient BI solution with an array of BI products.
  • Users can draw relevant insights at scale by combining the best BI tools with Snowflake’s cloud architecture.

StreamSets is considered one of the most user-friendly tools for data acquisition. By integrating it with Snowflake, we use its easy-to-build pipelines to read from a Snowflake data source, transform the data, and write it back to a Snowflake destination.

In total, we will employ three stages:

| Stage No. | Stage Name | Stage Type | Stage Instance Name | Stage Description |
| --- | --- | --- | --- | --- |
| 1 | Snowflake_Source | Origin | Snowflake_01 | Read data from Snowflake |
| 2 | Field_Remover_1 | Processor | FieldRemover_01 | Remove fields from record |
| 3 | Snowflake_Destination | Destination | Snowflake_02 | Write data to Snowflake |

Components used:
Docker: To set up the StreamSets deployment we need Docker, so that we can create an image and then run the engine in a container based on that image.

Snowflake: We need Snowflake to import the data from and to push the data back to, since in this POC Snowflake acts as both the source and the destination (a hypothetical table setup is sketched after this list).

StreamSets: Obviously, we need StreamSets to perform this POC, because the data is processed in the pipeline (i.e., in StreamSets).
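On the Snowflake side, the source table has to exist before the pipeline runs. The original post does not show the schema, so the sketch below is purely hypothetical: the database, table, and column names (including an SSN column for the Field Remover to strip) and the SnowSQL CLI invocation are all placeholder assumptions:

```sh
# Hypothetical source table for this POC; every name here is a placeholder
snowsql -a <account> -u <user> -q "
  CREATE OR REPLACE TABLE DEMO_DB.PUBLIC.EMPLOYEES (
    ID INT,
    NAME STRING,
    EMAIL STRING,
    SSN STRING   -- a field the pipeline's Field Remover could drop
  );"
```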

Integration steps:

1. Create a Deployment

2. To create an image on Docker, run this script in Windows PowerShell
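The actual script is generated for you in the platform's Deployments UI when you pick a Docker install type, so copy it from there rather than from here. As a rough, representative shape (the environment variable names follow StreamSets' documented pattern, and every value below is a placeholder):

```sh
# Representative shape of the generated install script; all values are placeholders
docker run -d --restart on-failure \
  -e STREAMSETS_DEPLOYMENT_SCH_URL=https://<region>.hub.streamsets.com \
  -e STREAMSETS_DEPLOYMENT_ID=<deployment-id> \
  -e STREAMSETS_DEPLOYMENT_TOKEN=<deployment-token> \
  streamsets/datacollector:<version>
```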

3. Go to Docker and verify that an image has been created and a new container is running
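A quick way to confirm this from the same PowerShell session, using standard Docker CLI commands (the container name is whatever your generated script assigned):

```sh
# List local images and running containers to confirm the engine came up
docker images
docker ps

# Tail the engine logs if the container is not listed as running
docker logs -f <container-name>
```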

4. Create a Pipeline

5. Add stages in the pipeline

6. Configure Stages one by one
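Before running Preview, it can help to look at the same rows from the command line so you have something to compare the preview output against. A hedged SnowSQL example, reusing the hypothetical table from earlier (all names are placeholders):

```sh
# Peek at a few source rows to compare against the pipeline preview
snowsql -a <account> -u <user> -q "SELECT * FROM DEMO_DB.PUBLIC.EMPLOYEES LIMIT 5;"
```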

7. Run Preview

8. Create a job for the pipeline by clicking Check In

9. Summary for the Job
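Once the job finishes, a sanity check on the destination closes the loop. Again a hypothetical SnowSQL sketch; the target table name is a placeholder for whatever the Snowflake_Destination stage was configured to write:

```sh
# Verify that records landed in the destination table; names are placeholders
snowsql -a <account> -u <user> -q "SELECT COUNT(*) FROM DEMO_DB.PUBLIC.EMPLOYEES_CLEAN;"
```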

References:
https://www.snowflake.com/trending/business-intelligence-tool

https://datakitchen.io/what-is-dataops/

https://medium.com/data-ops/dataops-is-not-just-devops-for-data-6e03083157b7

https://streamsets.com/why-dataops/what-is-dataops/

https://www.youtube.com/watch?v=sA_kXnNucuc

https://www.youtube.com/watch?v=V35ltYqtjIk
