Reducing Organizational Complexity with DataOps

Organizational complexity creates significant problems, but executives in a McKinsey survey showed little understanding of the types of complexity that create or destroy shareholder value. In their research, McKinsey identified two types of complexity:

  • Institutional complexity – includes regulatory environments, geographies, new markets, and the number of business units and their diverse interactions. Executives focus almost exclusively on this type of complexity.
  • Individual complexity – how hard it is for employees and managers to “get things done” due to inadequate role definitions, ambiguous accountability, inefficient processes, and lack of skills and capabilities. Executives rarely understand the factors that create this type of complexity and how to address them.

While these two are related, McKinsey found that companies reporting low levels of individual complexity have higher returns and lower costs. Furthermore, lowering individual complexity enables organizations to meet the challenges of additional institutional complexity that often accompanies growth. The message is that reducing and managing organizational complexity at the level of individual contributors and first-level managers can serve as a competitive advantage and a foundation for successful growth.


The Complexity of Data Teams

No group within the modern enterprise faces a higher level of individual complexity than the data organization. With its cacophony of tools, mission-critical deliverables, and interactions with nearly every other group in the organization, from marketing to accounting to the CEO, the data organization has become incredibly challenging to lead and manage. Despite their valuable expertise, data professionals routinely deal with project failures, slipping schedules, busted budgets, and embarrassing errors. It has become increasingly hard to “get things done” in data organizations.


Sources of Data Organization Complexity

The data organization of a typical enterprise spans a wide range of roles and functions that share the mandate to use data in descriptive and predictive analytics. The team includes data scientists, business analysts, data analysts, statisticians, data engineers, architects, database administrators, governance specialists, self-service users, and managers. Each of these roles has a unique mindset, specific goals, distinct skills, and a preferred set of tools.

Figure 1: The fragmented toolchain. The fast-growing market for big data and analytics tools is around $200B. At the enterprise level, the array of tools is broad and fragmented.

Despite all of the complexity and diversity that pushes them apart, the roles of the data team are tightly and intricately woven together. Each stage of the data pipeline builds upon the work of previous stages (see Figure 2). While the work of each function is distinct, they are linked together in a value chain. To make matters more interesting, the members of the data team, in most cases, do not report to the same boss. Some report to a shared technical-services team; others fall under a line of business. The various functions may be centralized or decentralized, and individuals and teams may be geographically dispersed. All of these variables affect the communication patterns and workflow of the data team.

Figure 2: The interdependent data team. The work output of individuals in the data team builds as data moves through the data pipeline.

 

Managing Complexity with DataOps

If one were to actually map out these communication and task-coordination patterns in a real organization, the resulting diagram would quickly exceed the space allowed here. Imagine having to manage these groups, keeping them on track and under budget. Imagine what would happen if corrupted data entered the bloodstream of the organization and then dispersed throughout the data pipelines. These are real challenges faced by data team managers daily. This is the type of internal organizational complexity that can destroy shareholder value or prevent an enterprise from meeting its objectives.

We’ve helped numerous organizations tackle these challenges using a methodology called DataOps. DataKitchen produces a DataOps Platform that helps move your DataOps capabilities from the whiteboard into the data center in the shortest amount of time possible. DataOps utilizes automation to govern workflows, coordinate tasks, and define roles. It creates teamwork out of chaos and makes it easier to “get things done.” DataOps greatly reduces the level of individual complexity in an organization.

A DataOps Platform attacks the problem of organizational complexity from multiple sides. It eliminates data and coding errors by applying tests at every pipeline stage. These tests drive sensor indicators that provide unprecedented transparency into data operations and analytics development. The DataOps Platform also defines, manages, and coordinates all the data pipelines, so teams collaborate better and new analytics move into deployment with greater ease.
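To make stage-level testing concrete, here is a minimal sketch in Python. The stage name, column names, and thresholds are hypothetical, not part of any particular platform’s API; the point is simply that every pipeline stage is gated by automated checks, so bad data is caught before it disperses downstream.

    import pandas as pd

    def test_stage_output(df: pd.DataFrame, stage: str) -> None:
        """Gate a pipeline stage: raise immediately if its output fails basic checks."""
        checks = {
            "row_count_nonzero": len(df) > 0,
            "no_null_keys": df["customer_id"].notna().all(),
            "amounts_in_range": df["amount"].between(0, 1_000_000).all(),
        }
        failed = [name for name, passed in checks.items() if not passed]
        if failed:
            # In a DataOps workflow, this failure would also flip a sensor
            # indicator, alerting the team before downstream stages run.
            raise ValueError(f"Stage '{stage}' failed checks: {failed}")

    # Example: validate data before it moves from ingestion to transformation.
    raw = pd.DataFrame({"customer_id": [1, 2, 3], "amount": [10.0, 250.0, 99.9]})
    test_stage_output(raw, stage="ingest")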

Figure 3: A DataOps Platform enforces clear workflows that enable better team collaboration, bringing greater structure, transparency, and automation to data pipelines.

 

Cycle Time at the Speed of Ideation

Fundamentally, the quantities and uses of data are expanding and proliferating so rapidly that enterprises are unable to use conventional management methods to “tame the beast.” To reduce individual complexity in an organization, you first need to measure it. The key metric that reflects complexity is analytics cycle time – the time it takes to transition an idea into working analytics. The challenge here is that your business users want cycle time to be as fast as ideation. That may sound impossible, but DataOps delivers on that goal.
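Measuring cycle time is straightforward in principle: it is the elapsed time from when an idea is logged to when the resulting analytics are live in production. A minimal sketch, with hypothetical timestamp fields:

    from datetime import datetime

    def analytics_cycle_time_days(idea_logged: datetime, deployed: datetime) -> float:
        """Cycle time in days: from idea logged to working analytics in production."""
        return (deployed - idea_logged).total_seconds() / 86400

    # Example: an idea logged June 1 that ships June 15 has a 14-day cycle time.
    print(analytics_cycle_time_days(datetime(2023, 6, 1), datetime(2023, 6, 15)))  # 14.0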

A DataOps Platform reduces workflow inefficiencies, encourages reuse, and brings transparency to data pipelines. Tests clamp down on code and data errors, eliminating the unplanned work that saps productivity. The key to reducing complexity in your data organization is DataOps. The key to implementing DataOps is a DataOps Platform that enforces DataOps methods across data teams and self-service users.

For a more in-depth discussion of “individual complexity” in data organizations, please see our whitepaper “Reducing Organizational Complexity with DataOps,” which inspired this blog post.
