
Demystifying the Data Mesh: Part One


Data management products and architectures such as Data Lakehouse, Data Mesh, and Data Fabric are of great interest to organizations looking to become more data-driven, in search of better ways to collect their data, gain insights from it, and deliver better data experiences to their customers.

Among these, Data Lakehouse seems to be the best understood, mainly because vendors like Snowflake and Databricks have done a good job of spreading the idea through their products.

Data Mesh and Data Fabric, on the other hand, are still relatively nascent. There are no concrete products or reference implementations that you can play around with to better understand them. Invariably, discussions around them come across as ambiguous and hand-wavy, as illustrated by the following comment I recently heard from a chief data officer:

🤯 … data mesh is a state of mind …

But of course, it is much more than that!

In this two-part series, I’ll explain what Data Mesh is, what problems it solves, and how to implement it in your organization. This first post focuses on clarifying the specific problem that Data Mesh aims to solve. The next one will delve into the architectural and operational aspects of its implementation.

So, let’s start… but first, let’s talk about data silos!

Data silos are pockets of data that are created in an organization over time as a result of teams working independently. They often get a bad rap, but as seasoned professionals know, they are unavoidable at scale and even desirable for agile, data-driven organizations, allowing teams to become more specialized and more independent in terms of release management.

Silos are often cited as the reason Data Mesh is needed. While true, this does not precisely articulate what problem Data Mesh solves and how it will benefit organizations.

To understand Data Mesh, it is essential to understand a related but very different problem within data engineering, called data integration. The challenges associated with data integration are what a good Data Mesh implementation simplifies, which in turn enables teams to be more agile and data-driven.

We’ll cover these challenges below, but as you read, remember this:

The key to understanding Data Mesh is data integration, not data silos.

Ok, but what exactly is data integration?

Let’s make this discussion more concrete!

Consider a financial services organization that offers retail banking, personal and mortgage loans, and wealth management services to its clients. The image below shows a greatly simplified version of the silos, along with the relevant data flows and the teams that produce or consume the data.

A quick aside:

🤔 Simplifications aside, what’s wrong with the image?

We’ll come back to this in a bit, but first let’s mention a few things:

  • Notice how silos can be both natural and necessary: natural because each financial service offering would have arisen at a different time, and necessary because it allows each individual team, made up of data owners, product owners, and developers, to specialize in its respective business domain.
  • Service owners (Banking, Loans, Wealth Management) are usually data producers. Their data is operational in nature, with little or no global context outside of its domain.
  • Analyst teams (data scientists, data analysts, Marketing), on the other hand, are data consumers. Their data is analytical in nature, with a broad global context that bridges individual silos to enable meaningful business decisions.
  • A network of ETL pipelines, owned by a data engineering team, is responsible for cleansing, annotating, and transforming silo-specific operational data into analytical data (a minimal sketch of one such step follows this list).
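
To make the pipeline’s job a little more concrete, here is a minimal sketch of one such ETL step in Python with pandas. All table, column, and silo names here are hypothetical, invented purely for illustration:

```python
import pandas as pd

# Hypothetical operational extract from the Banking silo.
# In practice this would read from an operational database.
def extract_savings_accounts() -> pd.DataFrame:
    return pd.DataFrame({
        "cust_id": [101, 102, None, 104],
        "balance": [25000.0, 1200.0, 900.0, 87000.0],
    })

def transform(accounts: pd.DataFrame) -> pd.DataFrame:
    # Cleanse: drop rows that lack a customer reference.
    cleaned = accounts.dropna(subset=["cust_id"])
    # Annotate: record the originating silo for downstream lineage.
    return cleaned.assign(source_silo="banking")

def load(accounts: pd.DataFrame) -> None:
    # Publish the analytical view; in practice this would target a warehouse.
    accounts.to_csv("savings_accounts_analytical.csv", index=False)

load(transform(extract_savings_accounts()))
```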

Given this data architecture, what kind of information might an analyst be interested in? Here are a couple of examples (admittedly trivial, but illustrative in this context):

  • Find people with high balances in savings accounts to increase wealth management sales.
  • Find high-net-worth individuals who can be offered competitively priced personal loans while preserving margin.

Notice how answering these queries requires bridging (joining) context from multiple silos. This is what data integration achieves: identifying common business semantics across different silos, unifying them semantically, and presenting a holistic view of the data for business intelligence and insights.
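
As a rough sketch of what such a bridge looks like in code, the first query above might reduce to a join on a shared customer key. As before, the column names and the balance threshold are assumptions made for illustration:

```python
import pandas as pd

# Hypothetical analytical tables, one per silo.
savings = pd.DataFrame({
    "cust_id": [101, 102, 104],
    "savings_balance": [25000.0, 1200.0, 87000.0],
})
wealth_clients = pd.DataFrame({"cust_id": [104]})

# Bridge the silos on the shared key, then keep high-balance customers
# who are not yet wealth management clients (the cross-sell prospects).
joined = savings.merge(wealth_clients, on="cust_id", how="left", indicator=True)
prospects = joined[(joined["savings_balance"] > 20_000) & (joined["_merge"] == "left_only")]
print(prospects[["cust_id", "savings_balance"]])
```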

This transformation of operational data into analytical data occurs as part of the ETL processes and is one of the most difficult problems within data engineering. Silos will often use different names for the same business entities; customer names can be called name, fullname, or custname. Sometimes different data types are used to encode the same entity: ages can be stored as integers or strings.
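
A toy harmonization step for exactly these two mismatches might look like the following. The column names come from the examples above; the canonical names and coercion rules are assumptions for illustration:

```python
import pandas as pd

# Two silos encoding the same entities differently; the data is made up.
banking = pd.DataFrame({"custname": ["Ada Lovelace"], "age": ["36"]})  # age as string
loans = pd.DataFrame({"fullname": ["Alan Turing"], "age": [41]})       # age as integer

# Map every silo-specific name onto one canonical business name.
CANONICAL = {"name": "customer_name", "fullname": "customer_name", "custname": "customer_name"}

def harmonize(df: pd.DataFrame) -> pd.DataFrame:
    out = df.rename(columns=CANONICAL)
    # Coerce heterogeneous encodings (string vs. integer) to one type.
    out["age"] = pd.to_numeric(out["age"]).astype("Int64")
    return out

unified = pd.concat([harmonize(banking), harmonize(loans)], ignore_index=True)
print(unified)
```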

Getting these transformations to scale requires both silo-specific business domain knowledge and coordination between data owners across silos. Data engineering teams often become a bottleneck due to this highly centralized operating model. Every change to a siloed data model has the potential to break data integration processes across the organization. In the absence of coordination, data engineering teams are reduced to reacting to these breaking changes, which disrupts the flow of data to business consumers, degrades the quality of insights that can be derived from it, and increases uncertainty about business outcomes.

Now, going back to the image above, we can see more clearly what is wrong: it is the positioning of data engineering teams as a separate, centralized entity, distinct from and agnostic to the business domains (silos) whose data they are supposed to “engineer” and “integrate”!

Put another way, using the monolith analogy from the service mesh world:

Today’s data integration processes are centralized and run like huge monoliths, ironically making organizations less agile in their quest to be data-driven.

This is the precise statement of the problem that is driving organizations to explore and redesign their data infrastructure using Data Mesh principles.

The promise of Data Mesh is that it will do for data what service mesh did for applications: break monoliths into smaller, more manageable, and more agile components that hide complexity and change, and communicate with each other using API-driven abstractions.

Data Mesh takes a decentralized, bottom-up approach to sharing data between producers and consumers. Data that used to live in the far reaches of a siloed application stack is now pulled out and placed front and center (think “data as a product”) for consumers to access using their preferred tools. Data integration becomes a shared responsibility among data owners, who are incentivized to cooperate with each other to produce high-quality data abstractions for business users to consume.
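
To give a flavor of what “data as a product” could look like in code, here is a hypothetical sketch: a domain team publishes a versioned dataset behind an explicit schema contract, so internal model changes cannot silently break consumers. Nothing here is a real Data Mesh API; every name is illustrative:

```python
from dataclasses import dataclass
import pandas as pd

@dataclass(frozen=True)
class DataProduct:
    """A hypothetical domain-owned data product: a named, versioned
    dataset published behind an explicit schema contract."""
    name: str
    version: str
    schema: dict  # column name -> expected pandas dtype
    frame: pd.DataFrame

    def read(self) -> pd.DataFrame:
        # Enforce the contract on every read, so the owning team can
        # evolve its internal model without silently breaking consumers.
        for column, dtype in self.schema.items():
            if column not in self.frame.columns:
                raise ValueError(f"{self.name} v{self.version}: missing column {column!r}")
            if str(self.frame[column].dtype) != dtype:
                raise TypeError(f"{self.name} v{self.version}: type drift on {column!r}")
        return self.frame.copy()

# The Banking team publishes its product; consumers discover and read it.
savings_product = DataProduct(
    name="banking.savings_accounts",
    version="1.0.0",
    schema={"cust_id": "int64", "savings_balance": "float64"},
    frame=pd.DataFrame({"cust_id": [101, 104], "savings_balance": [25000.0, 87000.0]}),
)
df = savings_product.read()
```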

However, how does it all come together? What does a practical implementation look like? What operational and personnel changes does it imply?

We will explore this in more detail in the next post, from both a technology and a business operations perspective. In the meantime, I hope this post has articulated the fundamental problem Data Mesh must solve and what it promises for large data-driven organizations operating at scale.

What do you think? Is your team also struggling with these data integration challenges? And how are you thinking about Data Mesh? Let me know your thoughts in the comments below.

I look forward to sharing the follow-up post with you soon!

Cheers!


*** This is a Security Bloggers Network syndicated blog from Blog Archive – Cyral, written by Srini Vadlamani. Read the original post at: https://cyral.com/blog/demystifying-the-data-grid-part-one/
