DataOps Mission Control And Managing Your Data Infrastructure Risk

The Head of Data got a call from the CEO about a compliance report that arrived empty, with no data. He had to rally 26 different people across his team, all day, to chase it down. And what was the problem? A single field passed through the pipeline blank. Can you imagine how embarrassed he was at the error? How frustrated those 26 people were -- most likely the best he has on his team -- at having to chase such a trivial error? And he has 1,000 other pipelines in the same 'hope it works' position, just waiting for some customer to find a problem. High risk, indeed.

Jun 1, 2022 | Blog

DataOps Mission Control

Data teams can’t answer very basic questions about the many, many pipelines they have in production and in development. For example:

Data

  • Is there a troublesome pipeline (lots of errors, intermittent errors)?
  • Did my source files/data arrive on time?
  • Is the data in the report I am looking at “fresh”? (see the sketch after this list)
  • Is my output data the right quality?
  • Do I have a troublesome data supplier?
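
Questions like these only get reliable answers when they become automated checks. Below is a minimal sketch of what the arrival and freshness questions might look like as code; the landing-file path and the six-hour freshness window are illustrative assumptions, not references to any specific tool.

```python
# Minimal sketch: turn "did the file arrive?" and "is the data fresh?"
# into automated checks. The path and freshness window are assumptions.
from datetime import datetime, timedelta, timezone
from pathlib import Path

LANDING_FILE = Path("/data/landing/orders.csv")   # hypothetical source file
MAX_AGE = timedelta(hours=6)                       # assumed freshness SLA

def source_file_arrived(path: Path) -> bool:
    """Did the source file arrive at all?"""
    return path.exists()

def data_is_fresh(path: Path, max_age: timedelta) -> bool:
    """Is the file's last-modified time within the agreed freshness window?"""
    if not path.exists():
        return False
    modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
    return datetime.now(timezone.utc) - modified <= max_age

if __name__ == "__main__":
    checks = {
        "source file arrived": source_file_arrived(LANDING_FILE),
        "data is fresh": data_is_fresh(LANDING_FILE, MAX_AGE),
    }
    for name, passed in checks.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
```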

Jobs

  • Did Job X run after all the jobs in Group Y were completed?
  • How many jobs ran yesterday, and how long did they take? (see the sketch after this list)
  • How many jobs will run today, how long will they take, and when are they running?
  • Did every job that was supposed to run, actually run?
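
The job questions, likewise, can be answered from run metadata if it is collected in one place. The sketch below assumes the run history is already available as simple records (job name, start, end); in practice those rows would come from your scheduler’s API or run-history database, and the field names and expected-job list here are hypothetical.

```python
# Minimal sketch: answer "how many jobs ran yesterday and how long did they
# take?" from run metadata. The record layout and schedule are assumptions.
from datetime import date, datetime, timedelta

runs = [  # hypothetical run history
    {"job": "load_orders",  "started": datetime(2022, 5, 31, 2, 0), "ended": datetime(2022, 5, 31, 2, 12)},
    {"job": "build_report", "started": datetime(2022, 5, 31, 3, 0), "ended": datetime(2022, 5, 31, 3, 45)},
    {"job": "load_orders",  "started": datetime(2022, 6, 1, 2, 0),  "ended": datetime(2022, 6, 1, 2, 15)},
]

yesterday = date(2022, 6, 1) - timedelta(days=1)
yesterdays_runs = [r for r in runs if r["started"].date() == yesterday]

print(f"jobs run on {yesterday}: {len(yesterdays_runs)}")
for r in yesterdays_runs:
    print(f"  {r['job']}: {r['ended'] - r['started']}")

# "Did every job that was supposed to run, actually run?"
expected = {"load_orders", "build_report", "refresh_dashboard"}  # assumed schedule
missing = expected - {r["job"] for r in yesterdays_runs}
print(f"missing jobs: {sorted(missing) or 'none'}")
```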

Quality/Tests/Trust

  • How many tests do I have in production?
  • What are my pass/fail metrics over time in production?

Tools/Models/Dashboards

  • Is my model still accurate?
  • Is my dashboard displaying the correct data?

Root Cause

  • Where did the problem happen?
  • Is the delay because the job started late, or did it take too long to process?

They also can’t answer a similar set of questions about their development process:

Deploys

  • Did the deployment work?
  • How many deploys of artifacts/code did we do?
  • What is the average number of tests per pipeline?
  • How many deploys failed in the past?
  • How many models and dashboards were deployed?

Environments

  • What code is in what environment?
  • Looking across my entire organization, how many pipelines are in production? How many are in development?

Testing/Impact/Regressions

  • How many tests ran in the QA environment? Passed? Failed? Warned?
  • How often do we change the production schema?

Productivity/Team/Projects

  • How many tickets did we release?
  • Which tickets did we release? per project?
  • For a particular project, what pipelines, tests, deploys and tickets are happening?

Why does this happen? Why is this problem not solved today?

  1. Teams Are Very Busy: Teams are already busy and stressed – and they know they are not meeting their customers’ expectations.
  2. They Have a Low Change Appetite: Teams have complicated data architectures and tools already in place. They fear changing what is already running.
  3. There Is No Single Pane of Glass: There is no way to see across all tools, pipelines, data sets, and teams in one place, yet hundreds or thousands of pipelines, jobs, and processes are already running.
  4. They Don’t Know What, Where, and How to Check: They need to make sure their customers are happy with the resulting analytics, but they often don’t know the salient points to check or test.
  5. They Live with Blame and Shame, Without Shared Context: Problems are raised only after the customer has found them, with panicked teams running around trying to find who and what is responsible.

How do other organizations solve this kind of risk problem? The highest-risk endeavor of all is space flight. How do SpaceX and NASA manage that risk? They build a Mission Control:

  • Build a UI with information about every aspect of the flight
  • Use that information as the basis for making decisions and communicating with interested parties
  • Store information for after-the-fact analysis
  • Automatically alert and flag problems

A New Concept: DataOps Mission Control

DataOps Mission Control’s goal is to provide visibility into every journey that data takes, from source to customer value, across every tool, environment, data analytics team, and customer, so that problems are detected, localized, and raised immediately. How? By testing and monitoring every data analytics pipeline in an organization, from source to value, in development and production, teams can deliver insight to their customers with no errors and a high rate of pipeline change.
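
One way to picture that instrumentation: every pipeline step and every test emits a small, uniform event to a central store that Mission Control can aggregate, display, and alert on. The sketch below is illustrative only; the event schema, the in-memory store, and the `MissionControl` class are assumptions, not a specific product API.

```python
# Illustrative sketch: every pipeline step and test emits a uniform event to a
# central "mission control" store, so problems are detected and localized as
# they happen. The event schema and in-memory store are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PipelineEvent:
    pipeline: str          # which data journey
    step: str              # which step or test within it
    status: str            # "pass", "fail", or "warn"
    detail: str = ""
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class MissionControl:
    """One place that receives events from every pipeline, in every environment."""
    def __init__(self):
        self.events: list[PipelineEvent] = []

    def record(self, event: PipelineEvent) -> None:
        self.events.append(event)
        if event.status == "fail":
            self.alert(event)  # flag problems immediately, not after the customer calls

    def alert(self, event: PipelineEvent) -> None:
        print(f"ALERT: {event.pipeline} / {event.step} failed: {event.detail}")

# Example: the compliance pipeline reports a blank required field the moment it is seen.
mc = MissionControl()
mc.record(PipelineEvent("compliance_report", "extract_source", "pass"))
mc.record(PipelineEvent("compliance_report", "check_required_fields", "fail",
                        detail="field 'region' is blank"))
```

The point of the sketch is the single, shared event stream: because every step reports to the same place in the same shape, the "who found it first" question is answered by the team, not the customer.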

If you solve this problem, you will see:

  • Less Embarrassment: No more finding out from your customers, after the fact, that something is broken
  • Less Hassle: A way to get people off your back by letting them self-serve the production status
  • More Space to Create: Time to build instead of chasing problems and answering simple questions
  • A Big Step Toward Transformation: You can’t focus on delivering customer value if your customers don’t trust the data or your team

Learn more about DataOps Mission Control in our recently recorded, on-demand webinar.
