4 Easy Ways to Start DataOps Today

Most information about DataOps comes from vendors (like DataKitchen) who sell enterprise software into the fast-growing DataOps market. There are over 70 vendors that would be happy to assist in your DataOps initiative. Here's something you likely won't hear from any of them (except us): you can start your DataOps journey without buying any software.

It's important to remember that DataOps is a culture and methodology, implemented using automated augmentation of your existing tools. You are free to select one of many best-in-class free and open-source tools. When we started sharing the "Seven Steps of DataOps" a few years ago, our intent was (and still is) to evangelize DataOps as a free and open methodology.

If you are a CDO or a VP, you have the power to institute broad change, but what if you are an individual contributor? What can you do? This is a common question in our conversations with data scientists, engineers, and analysts. An individual contributor has assigned duties and usually no authority to approve purchases. How can one get started given these limitations?

DataOps is not an all-or-nothing proposition. There are small but impactful things that an individual contributor can do to move forward. Hopefully, with metrics in place, you can show measured improvements in productivity and quality that will win converts. As your DataOps activities reach enterprise scale, you may indeed decide that it's much easier to partner with a vendor than to build and support an end-to-end DataOps Platform from scratch. When that day arrives, we'll be here, but until then, here are some suggestions for DataOps-aligned improvements you can make with open-source tools and a little self-initiative.

DataOps Objectives

DataOps includes four key objectives:

  • Measure Your Process – As data professionals, we advocate for the benefits of data-driven decision making. Yet, many are surprisingly unanalytical about the activities relating to their own work.
  • Improve Collaboration, both Inter- and Intra-team – If the individuals in your data-analytics team don't work together, it can impact analytics-cycle time, data quality, governance, security and more. Perhaps more importantly, it's fun to work on a high-achieving team.
  • Lower Error Rates in Development and Operations – Finding your errors is the first step to eliminating them.
  • Decrease the Cycle Time of Change – Reduce the time that elapses from the conceptualization of a new idea or question to the delivery of robust analytics.

We view the steps in analytics creation and data operations as a manufacturing process. Like any complex, procedure-based workflow, the data-analytics pipeline has bottlenecks. We subscribe to the Theory of Constraints, which advises finding and mitigating your bottlenecks to increase the throughput of the overall system.

If that's too abstract, we'll suggest four projects, one in each of the areas above, that will start the ball rolling on your DataOps initiative. These tasks illustrate how an individual contributor can start to implement DataOps on their own.

Figure 1: Four simple projects to get started with DataOps.

Measure Your Process

Internal analytics could help you pinpoint areas of concern or provide a big-picture assessment of the state of the analytics team. A burn-down chart, velocity chart, or tornado report can help your team understand its bottlenecks. A data arrival report enables you to track data suppliers and quickly spot delivery issues. Test coverage and inventory reports show the degree of test coverage of the data analytics pipeline. Statistical process controls allow the data analytics team to monitor streaming data and the end-to-end pipeline, ensuring that everything is operating as expected. A Net Promoter Score is a customer satisfaction metric that gauges a team's effectiveness.

Figure 2: The data arrival SLA report shows which data sources meet their target service levels.
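
If you want to build a report like this yourself, a few lines of Python are enough to get started. The sketch below is only illustrative: it assumes you log each delivery's source, promised arrival time, and actual arrival time to a CSV file, and the file and column names are hypothetical.

```python
# A minimal sketch of a data arrival SLA report.
# The file and column names are hypothetical -- adapt them to your own logs.
import pandas as pd

deliveries = pd.read_csv("delivery_log.csv", parse_dates=["expected", "arrived"])
deliveries["on_time"] = deliveries["arrived"] <= deliveries["expected"]

# Percentage of on-time deliveries per data supplier, worst first
sla_report = (
    deliveries.groupby("source")["on_time"]
    .mean()
    .mul(100)
    .round(1)
    .sort_values()
)
print(sla_report)
```

Even a crude version of this report turns a vague sense that "the data is often late" into a ranked list of suppliers to talk to.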

Bringing these reports to the team will help everyone understand where time and resources are being wasted. Perhaps this will inspire a project to mitigate your worst bottleneck, leading to another project in one of the areas below.

Improve Collaboration

Conceptually, the data-analytics pipeline is a set of stages implemented using a wide variety of tools. All of the artifacts associated with these tools (JSON, XML, scripts, …) are just source code. Code deterministically controls the entire data-analytics pipeline from end to end.

If the code that runs your data pipeline is not in source control, it may be spread across different systems, unversioned, or even misplaced. You can take a big step toward establishing a controlled, repeatable data pipeline by putting all your code in a source code repository. For example, Git is a free and open-source, distributed version control system used by many software developers. With version control, your team will be better able to reuse code, work in parallel, and trace bugs back to specific source code changes. Version control also serves as the foundation for DataOps continuous deployment, which is an excellent long-term goal.
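
If you have never used a version control system, the first step is smaller than you might think. As a minimal sketch, assuming your pipeline code lives in one project directory and your team has a Git hosting account (the directory name and remote URL below are placeholders), the first check-in looks like this:

```
cd my-data-pipeline
git init                          # create a local repository
git add .                         # stage all pipeline code and config
git commit -m "Initial commit of data pipeline code"
git remote add origin https://github.com/your-org/my-data-pipeline.git
git push -u origin main           # branch may be 'master' on older Git versions
```

From that point on, every change to the pipeline is recorded, reviewable, and reversible.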

Lower Error Rates

Maybe the test coverage report mentioned above helped you understand that your data operations pipeline needs more tests. Tests apply to code (analytics) and streaming data. Tests can verify inputs, outputs and business logic at each stage of the data pipeline. Testing should also confirm that new analytics integrate seamlessly into the current production pipeline.

Below are some example tests:

  • The number of customers should always be above a certain threshold value.
  • The number of customers is not decreasing.
  • The zip code for pharmacies has five digits.
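
As a minimal sketch, here is how those three checks might look in Python with pandas. The column names, threshold value, and use of plain assertions are all hypothetical; the point is that each business rule becomes a small, executable test that can run on every cycle of the pipeline.

```python
# Hedged sketch: the three example tests above as executable checks.
# Column names and the threshold value are hypothetical.
import pandas as pd

CUSTOMER_THRESHOLD = 10_000  # business-defined minimum


def test_customer_count(df: pd.DataFrame, previous_count: int) -> None:
    count = df["customer_id"].nunique()
    assert count >= CUSTOMER_THRESHOLD, f"customer count {count} is below threshold"
    assert count >= previous_count, "customer count decreased since the last run"


def test_pharmacy_zip_codes(df: pd.DataFrame) -> None:
    bad = df.loc[~df["zip_code"].astype(str).str.fullmatch(r"\d{5}")]
    assert bad.empty, f"{len(bad)} pharmacy rows have malformed zip codes"
```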

Figure 3: Every processing or transformation step should include tests that check inputs and outputs and evaluate results against business logic.

When you have started counting and cataloging your errors, start a quality circle, find patterns and aim to fix one error per month.

Decrease the Cycle Time of Change

In many enterprises, lengthy cycle time is a primary reason that analytics fail to deliver on the promise of improving data-driven decision making. When the process for creating new analytics depends on manual steps, there are many opportunities for a project to go off track.

Figure 4: Factors that derail the development team and lengthen analytics cycle time.

Leading software organizations deploy new and updated applications through an automated procedure that might look something like this:

  1. Spin-up hardware and software infrastructure
  2. Check source code out of source control
  3. Build
  4. Test
  5. Deploy into production
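
None of this requires an enterprise platform to get started. Below is a hedged sketch of a driver script that chains the last four stages together and stops at the first failure. Every command is a placeholder for whatever your own checkout, build, test, and deploy steps actually are; infrastructure spin-up is omitted because it is highly environment-specific.

```python
# A minimal sketch of an automated build-test-deploy driver.
# Every command is a placeholder -- substitute your own tooling.
import subprocess
import sys

STEPS = [
    ("checkout", ["git", "pull", "--ff-only"]),
    ("build", ["python", "-m", "pip", "install", "-r", "requirements.txt"]),
    ("test", ["python", "-m", "pytest", "tests/"]),
    ("deploy", ["python", "deploy.py"]),  # hypothetical deploy script
]

for name, command in STEPS:
    print(f"--- {name} ---")
    if subprocess.run(command).returncode != 0:
        sys.exit(f"Step '{name}' failed; aborting.")
print("All steps succeeded.")
```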

The first step in creating an efficient, repeatable build process is to minimize any dependencies on manual intervention. Each of these steps is a whole topic unto itself, but when you are starting out, a good place to focus is testing. Your code tests should fully validate that the analytics work, handle errors such as bad data (by stopping the pipeline or sending alerts), and integrate with the existing operations pipeline.
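
The distinction between "stopping" and "sending alerts" is worth encoding explicitly. As a hedged sketch of the idea, each check can declare whether a violation is fatal (halt the pipeline) or merely suspicious (warn and continue); the function and threshold names here are hypothetical.

```python
# Hedged sketch: hard failures stop the pipeline, soft failures alert.
import logging


def check_row_count(n_rows: int, expected_min: int, hard_min: int) -> None:
    if n_rows < hard_min:
        # Fatal: almost certainly bad input data, so stop processing.
        raise RuntimeError(f"only {n_rows} rows arrived; halting the pipeline")
    if n_rows < expected_min:
        # Suspicious but tolerable: continue, but alert the team.
        logging.warning("row count %d is below the expected %d", n_rows, expected_min)
```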

The image below shows the many different kinds of tests that should be performed. We explain each of these types of tests in our recent blog on impact view.

Figure 5: A broad set of tests can validate that the analytics work and fit into the overall system.

Tests that validate and monitor new analytics enable you to deploy with confidence. When you have certainty, you can deploy and integrate new analytics more quickly.

Conclusion

There are many small yet effective projects that you can start today that will serve your DataOps goals. Hopefully, we've given you a few ideas. We encourage you to learn more about DataOps by reading our book, "The DataOps Cookbook." Good luck, and tweet us to let us know how it goes (@datakitchen_io #DataOps).

And, if you want to accelerate your DataOps journey, we have some software that can help you!
