Why We Open-Sourced Our Data Observability Products

Why open source DataOps Observability and DataOps TestGen? Our decision to share full-featured versions of these products stems from DataKitchen's long-standing commitment to enhancing productivity for data teams and promoting the use of automated, observed, and trusted tools. It aligns with our company's philosophy of sharing knowledge, and now software, to inspire teams to implement DataOps effectively.

Introducing DataKitchen's Open Source Data Observability Software

Today, we are announcing that we have open-sourced two complete, feature-rich products that solve the data observability problem: DataOps Observability and DataOps TestGen. With these two products, you will know whether your pipelines are running on time and without error, and you can finally trust your data. In this blog, we peel back the curtain to share the philosophy, the journey, and, most importantly, the why behind this significant move.

What is DataOps Observability?

DataOps Observability monitors every Data Journey, from data source to customer value, across every tool, team, environment, and customer, so that problems are detected, localized, and understood immediately. Imagine a world where you have end-to-end visibility instantly. That's what DataOps Observability promises.


What is DataOps TestGen?

The easiest way to institute comprehensive, agile data quality testing is to derive actionable tests directly from the data, start testing and measuring immediately, and then iterate, using the tests and their results to refine. DataOps TestGen is the silent warrior that ensures the integrity of your data. It simplifies data quality test generation and execution, algorithmically crafting validations so that you can trust your data every time it is refreshed in production.
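To make the idea concrete, here is a minimal sketch of the kind of profiling-derived test that this approach automates: profile a table once, turn the observed characteristics into thresholds, and re-check them on every refresh. This is an illustration of the technique only, not TestGen's actual code or API, and the table and column names are hypothetical.

```python
# Sketch of profiling-derived data quality tests (illustrative only; not
# TestGen's actual API). Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, amount REAL, status TEXT);
    INSERT INTO orders VALUES (1, 19.99, 'shipped'), (2, 5.00, 'pending'), (3, 42.50, 'shipped');
""")

def profile(conn):
    """Profile the table once to capture its expected characteristics."""
    rows, null_ids, max_amount = conn.execute(
        "SELECT COUNT(*), SUM(order_id IS NULL), MAX(amount) FROM orders"
    ).fetchone()
    statuses = {s for (s,) in conn.execute("SELECT DISTINCT status FROM orders")}
    return {"min_rows": rows, "max_null_ids": null_ids,
            "max_amount": max_amount, "statuses": statuses}

def run_tests(conn, expected):
    """Re-check the profiled expectations against the current data."""
    rows, null_ids, max_amount = conn.execute(
        "SELECT COUNT(*), SUM(order_id IS NULL), MAX(amount) FROM orders"
    ).fetchone()
    new_statuses = {s for (s,) in conn.execute("SELECT DISTINCT status FROM orders")} - expected["statuses"]
    failures = []
    if rows < expected["min_rows"]:
        failures.append(f"row count dropped: {rows} < {expected['min_rows']}")
    if null_ids > expected["max_null_ids"]:
        failures.append(f"new NULL order_ids: {null_ids}")
    if max_amount > expected["max_amount"] * 10:
        failures.append(f"amount outlier: {max_amount}")
    if new_statuses:
        failures.append(f"unexpected status values: {new_statuses}")
    return failures

expected = profile(conn)
print(run_tests(conn, expected) or "all data quality tests passed")
```

The point of the sketch is the workflow: the thresholds come from the data itself, so testing can start immediately and be refined over time.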

The Open Source Philosophy: Why Take This Road?

We've been a profitable, customer-funded company for the last ten years. Our goal has always been to help data teams work more productively, delivering insight with their favorite automated, observed, and trusted tools. When we started, these ideas did not yet have a name: DataOps. As technical founders, we have seen the challenges teams face when trying to implement the DataOps Manifesto's ideas. Our journey hasn't been a sprint but a marathon, with varied strategies deployed to embed DataOps into the heart of data teams.

We've poured our insights into content, hoping to inspire teams toward DataOps. We've written two books, manifestos, and training programs, and spoken at hundreds of conferences and podcasts. We also attempted to seed DataOps at the CDO level, driving adoption centrally from senior leaders down, but senior data leaders have a short tenure, often only one to two years, and when they leave, the DataOps initiative goes adrift. It's possible, but tricky. From a technology angle, we led with deployment through our DataOps Automation product: orchestrate, deploy, test, DevOps, everything in one runtime abstraction. However, very few people start from a single orchestrator encompassing their entire toolchain. Everyone builds, builds, and builds, then worries about errors in production and deployment cycle time on days two and three (if at all).

Over the last three years, we've invested over $5 million in developing our Data Observability tools. We still hope to make a difference in the dismal lives of data teams by starting them on the path to DataOps: observing their entire Data Journey and quickly deploying data quality validation tests.

We've realized that a bottom-up approach, starting with the individual, stands a better chance of embedding DataOps into the fabric of data teams than transient leadership mandates, forcing teams to change what they have already built, or just talking about these ideas. Open-sourcing DataOps Observability and TestGen is a step toward fulfilling this vision, aiming to alleviate the "morning dread" many data engineers face, haunted by the possibility of errors and the daunting task of resolving them.

We are sharing two full-featured, battle-tested applications, each with a complete UI and APIs, for individuals to use. We are also capitalists, so each tool has an enterprise version that we think teams will find advantageous and enterprises will find foundational; companies like Eisai, BMS, CNH, and others already use it in production today.

Two years ago, we surveyed 700 data engineers and found that 78% felt their jobs were so dismal that they wanted them to come with a therapist. We hope that our open-source data observability tools will make a difference in those individuals' lives.


What Makes These Tools Unique?

The Data Journey. Plenty of workflow tools run DAGs, steps, and schedules. However, another fundamental idea needs to be defined and given an open-source treatment: the Data Journey. As an industry, we have a conceptual hole in how we think about data analytic systems. We put them into production and then hope that every step data goes through, from source to customer value, works out correctly. We all know that our customers frequently find data and dashboard problems. Teams are shamed and blamed for problems they didn't cause. Their data is trapped in complicated, multi-step processes that are hard to understand, often fail, and output insights that no one trusts. A Data Journey is the missing abstraction from our data analytic systems: the tool to say what should happen rather than what will happen. It's not an Airflow DAG, nor is it data lineage. It's the fire alarm panel for your data and your data analytic production process. It's the mission control panel for your data production.
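To illustrate the abstraction (a conceptual sketch only, not DataOps Observability's actual data model), a Data Journey can be thought of as a declared expectation of which steps should run, in what order, and by when, against which observed runtime events are checked. The step names, tools, and deadlines below are hypothetical.

```python
# Conceptual sketch of a Data Journey: declare what *should* happen, then
# compare observed events against that expectation. Illustrative only; not
# DataOps Observability's actual model. Names and times are hypothetical.
from datetime import timedelta

# The expectation: ordered steps across different tools, each with a deadline.
expected_journey = [
    {"step": "ingest_orders",     "tool": "airflow", "due": timedelta(hours=1)},
    {"step": "transform_orders",  "tool": "dbt",     "due": timedelta(hours=2)},
    {"step": "refresh_dashboard", "tool": "tableau", "due": timedelta(hours=3)},
]

# Observed events reported during today's run (status plus finish time).
observed_events = {
    "ingest_orders":    {"status": "completed", "finished_after": timedelta(minutes=40)},
    "transform_orders": {"status": "failed",    "finished_after": timedelta(hours=1, minutes=10)},
    # refresh_dashboard never reported in: it is missing entirely.
}

def evaluate(journey, events):
    """Return alerts wherever reality diverged from the declared journey."""
    alerts = []
    for step in journey:
        event = events.get(step["step"])
        if event is None:
            alerts.append(f"{step['step']} ({step['tool']}): no run reported")
        elif event["status"] != "completed":
            alerts.append(f"{step['step']} ({step['tool']}): {event['status']}")
        elif event["finished_after"] > step["due"]:
            alerts.append(f"{step['step']} ({step['tool']}): finished late")
    return alerts

for alert in evaluate(expected_journey, observed_events):
    print("ALERT:", alert)
```

That separation between the declared journey and the observed events is what makes it a fire alarm panel rather than another orchestrator or lineage graph.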

Fully Automated Data Quality Test Generation. We've been talking and writing about data quality validation for a decade, and doing it in practice for almost twenty years. However, I am continually amazed at how few data teams have any data testing in production. They just hope things don't go wrong. For many years, I had the former software engineer's bias: just write the damn data test yourself. But knowing which tests to write is challenging for data engineers. They are crushed by responding to errors, working on new tasks, and staying sane. They need a way to auto-create tests that give them a high degree of assurance that their data is correct, a way to configure more tests, and a single repository of tests that can be co-owned by data specialists and their business customers. DataOps TestGen is not a DSL for data testing, nor a pick-and-choose library of tests. It saves teams time and accelerates testing and anomaly detection on their data.

What are these tools for?

You, the data engineer or the technical lead of a data and analytics team, are our focus. We understand the pressure of errors, the overload of tasks, and the desire for solutions that are quick, comprehensive, and easy to implement. Our open-source tools are designed to offer immediate value, enabling you to tackle specific use cases, start small, and scale as needed. We've built integrations with tools like dbt, Airflow, and a dozen other tools that act upon data. We have a rich, simple API and an 'agent framework' for adding new monitoring integrations. TestGen supports major SQL databases such as Postgres, Snowflake, SQL Server, Synapse, and Redshift, with more coming.
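As one example of what such an integration can look like, here is a minimal sketch of posting run-status events from Airflow task callbacks to an observability events endpoint. The endpoint URL, payload fields, and token are hypothetical placeholders rather than DataOps Observability's actual API; see the project's getting started guide for the real integration.

```python
# Minimal sketch of reporting pipeline run events to an observability service
# from Airflow task callbacks. The endpoint, payload fields, and token are
# hypothetical placeholders, not DataOps Observability's actual API.
import datetime
import requests

EVENTS_URL = "https://observability.example.com/api/events"  # hypothetical endpoint
API_TOKEN = "replace-with-your-token"                         # hypothetical auth

def report_event(journey, task, status):
    """Send one run-status event for a step in a Data Journey."""
    payload = {
        "journey": journey,
        "task": task,
        "status": status,  # e.g. "completed" or "failed"
        "timestamp": datetime.datetime.utcnow().isoformat(),
    }
    requests.post(EVENTS_URL, json=payload,
                  headers={"Authorization": f"Bearer {API_TOKEN}"}, timeout=10)

# In an Airflow DAG, these could be wired up as task callbacks, e.g.:
#   default_args = {
#       "on_success_callback": lambda ctx: report_event(
#           "nightly_orders", ctx["task_instance"].task_id, "completed"),
#       "on_failure_callback": lambda ctx: report_event(
#           "nightly_orders", ctx["task_instance"].task_id, "failed"),
#   }
```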


Our Design Criteria: Simplicity and Functionality

Our design ethos aligns with the UNIX principle: "Make each program do one thing well." DataOps Observability and TestGen embody simplicity, flexibility, and expansiveness, catering to every tool and aspect of the data analytics ecosystem. We want our open-source tools to be fully functional for individuals and to fit as good citizens in your data estate.


What can I do with these tools?

Like any new idea, the definition of Data Observability is still evolving. Our goal is to cover all the use cases of monitoring your data and your data estate for issues, including:

  1. Understanding and Checking Data Pre-Production: "Patch or Pushback Data"
  2. Finding Problems in Constantly Changing Data with Anomaly Detection: "Polling: Arrival Aberration Alerts" (sketched below)
  3. Locating Problems During Production Before Your Customers Do: "Production: Check Down and Across"
  4. Data Observability in Development: "Productivity: Regression and Impact Assessment, Data Migration"
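As a rough illustration of the second use case, here is a sketch of the general arrival-aberration technique (not the product's algorithm): flag a table whose latest row count falls outside a band derived from recent daily history. The counts and threshold are hypothetical.

```python
# Sketch of an arrival-aberration check: flag today's row count if it falls
# outside a band derived from recent history. Illustrative only; the counts
# and threshold below are hypothetical.
from statistics import mean, stdev

# Row counts observed for the last 14 daily loads of a (hypothetical) table.
daily_row_counts = [10512, 10488, 10730, 10295, 10601, 10550, 10477,
                    10644, 10389, 10702, 10533, 10461, 10598, 10520]
todays_row_count = 7212

mu, sigma = mean(daily_row_counts), stdev(daily_row_counts)
threshold = 3  # how many standard deviations count as an aberration

if abs(todays_row_count - mu) > threshold * sigma:
    print(f"ALERT: arrival aberration -- {todays_row_count} rows "
          f"(expected roughly {mu:.0f} +/- {threshold * sigma:.0f})")
else:
    print("arrival volume looks normal")
```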


An Invitation to Collaborate

As we open the doors to our open-source tools, we extend an invitation to the community. Your feedback, contributions, and insights are invaluable as we evolve these products. Join us on this path, explore our getting started guide, and become part of our Slack community. Together, let's redefine data observability and ensure that the path to DataOps is not just a vision but a tangible reality for data teams worldwide. Both tools are simple to install, and each has a short walk-through that you can try today.
