
The People Challenge Behind Data Mesh Architecture

Data mesh promised to end IT bottlenecks — then came the people problems.

Written by Kelli Korducki | 4 min read | September 04, 2025

For one US-based real estate market insights firm, the switch to data mesh infrastructure was spurred by a classic problem: business was booming, but the firm’s data team was barely treading water.

Employees who submitted data requests waited weeks for the information they needed, said Omar Kouhlani, the CEO of Rumnic, who consulted for the firm through its data transition. After the switch, those same employees were delighted to find they could generate their own datasets without relying on central IT, removing troublesome bottlenecks in an instant. But new — and unexpected — organizational challenges swiftly took their place. “It was a much harder transition than expected,” said Kouhlani, who declined to name the firm due to client confidentiality.

Data mesh has become a hot topic in the world of data management, and for good reason: this decentralized data architecture model allows organizations to cross-functionally tap and maintain key datasets, bolstering access, security and scalability.

It represents a fundamental shift away from the data lakes and warehouses that have dominated enterprise architecture for decades, moving instead toward a distributed model where business domains own and manage their own data. Organizations such as Netflix, Intuit and JPMorgan Chase are among the many major corporations that have gotten on board. 

Behind the scenes, however, the transition can be rocky, as teams adjust to shifting responsibilities across data ownership, access and oversight.

In practical terms, data mesh is an information management system that effectively treats data as a product. “It’s the shift from delivering data or a dataset that’s centralized in the IT organization to delivering value for an organization within the enterprise,” said Craig Gravina, CTO of the data management platform Semarchy. In other words, data mesh can make it easier for stakeholders outside of IT to get relevant insights quickly.
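
To make “data as a product” concrete, here is a minimal, hypothetical sketch in Python. The names, fields and pandas-based checks are purely illustrative and are not drawn from Semarchy or any of the firms in this story; the point is that the dataset carries its own owner, freshness target and quality checks instead of depending on a central IT team to vouch for it.

```python
from dataclasses import dataclass, field
from typing import Callable, List

import pandas as pd


@dataclass
class DataProduct:
    """A domain-owned dataset published as a product: it declares its own
    owner, freshness target and quality checks rather than relying on a
    central IT team."""
    name: str
    domain_owner: str                 # e.g. the mortgage-analytics team
    freshness_sla_hours: int          # how stale the data is allowed to get
    quality_checks: List[Callable[[pd.DataFrame], bool]] = field(default_factory=list)

    def publish(self, df: pd.DataFrame) -> pd.DataFrame:
        """Run the domain's own checks before exposing data to other domains."""
        failures = [check.__name__ for check in self.quality_checks if not check(df)]
        if failures:
            raise ValueError(f"{self.name}: quality checks failed: {failures}")
        return df


# Illustrative usage: the domain team, not central IT, defines the rules.
def no_missing_permit_ids(df: pd.DataFrame) -> bool:
    return df["permit_id"].notna().all()


permits = DataProduct(
    name="building-permits",
    domain_owner="mortgage-analytics",
    freshness_sla_hours=24,
    quality_checks=[no_missing_permit_ids],
)
published = permits.publish(pd.DataFrame({"permit_id": [101, 102]}))
```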

Gravina likens the shift to the way in which agile software development evolved from a process of large, monolithic development cycles toward smaller, iterative sprints delivered by independent teams. As with the introduction of agile methodology, the switch to a continuous and decentralized approach to data-product delivery can be instantly beneficial for organizations and teams looking to move quickly.

However, Gravina warns that in the early stages of data mesh implementation, many organizations struggle to maintain oversight of their data products across distributed teams, mirroring the challenges many agile adopters faced early on.

Kouhlani saw this struggle firsthand, as his real estate client’s domain teams initially hesitated to accept ownership and accountability for their data. “The mortgage analytics team loved having direct access to permit data, but when we suggested they own the data pipeline and SLAs” – service level agreements – “they balked,” Kouhlani recalled. “They said they were mortgage experts, not data engineers.”

Some members of the company’s IT team, meanwhile, struggled to accept the transition from a central knowledge-keeping role to one of dispersed knowledge facilitation. “A few quit because of the perceived loss of control and technical ownership,” Kouhlani said. 

The Tension Between Innovation and Governance

Robin Patra, Head of Data for ARCO Construction, oversaw a similar transformation when his company implemented data mesh in 2022. In an instant, the organization’s domain teams became data product owners with end-to-end responsibility for their data life cycles, while IT evolved into platform enablers that provided self-service infrastructure rather than custom solutions. Some domain team members worried that their new data ownership responsibilities would amount to heavier workloads. “However, once they experienced the autonomy to iterate and innovate with their data without central approvals, adoption increased dramatically,” Patra said.

Careful organizational restructuring also helped ease the transition. ARCO Construction redistributed its data engineers to embed within business domains, working in partnership with teams instead of in centralized pools. Business analysts evolved into data product managers who identified where and how new data insights might be useful, and then oversaw their delivery.

Another challenge emerged when a safety analytics initiative forced the organization to reconcile its newly accelerated pace of innovation with ironclad data governance.

“Our field operations team wanted to implement real-time safety monitoring that used IoT sensors and machine learning to predict potential incidents,” recalled Patra. The team needed rapid iteration capabilities to test different algorithms and data combinations. The company’s central compliance team, on the other hand, “demanded extensive documentation, formal approval processes and standardized reporting formats that would take months to implement.”

The company quickly recognized the need for a system that could guarantee compliance in a high-velocity environment. It landed on a solution that married data mesh architecture’s federated approach to governance with domain-specific data standards, developed in accordance with enterprise-wide compliance requirements. This solution treated safety data as a formal product with clear ownership, quality metrics and consumption interfaces. Critically, it also automated compliance by building governance controls directly into the data pipeline.

“Innovation and governance aren’t opposing forces when properly architected,” said Patra. “The key is embedding governance as code rather than treating it as a separate approval layer.”
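
As an illustration of what “governance as code” can mean in practice, here is a hedged sketch in Python. The rules, column names and the publish_safety_data function are hypothetical, not ARCO’s actual implementation; what it shows is a compliance gate that runs inside the pipeline on every load rather than as a separate approval step.

```python
import pandas as pd

# Hypothetical, domain-specific governance rules expressed as code.
# In a federated model, the enterprise mandates that checks must exist;
# the safety domain decides what those checks verify.
GOVERNANCE_RULES = {
    "no_worker_names": lambda df: "worker_name" not in df.columns,   # PII stripped upstream
    "sensor_id_present": lambda df: df["sensor_id"].notna().all(),   # traceability for audits
    "timestamps_tz_aware": lambda df: df["recorded_at"].dt.tz is not None,
}


def publish_safety_data(df: pd.DataFrame) -> pd.DataFrame:
    """Governance as code: every load passes through the compliance gate
    automatically, instead of waiting on a manual approval process."""
    violations = [name for name, rule in GOVERNANCE_RULES.items() if not rule(df)]
    if violations:
        raise RuntimeError(f"Publish blocked; governance violations: {violations}")
    return df


# Example: this frame satisfies all three rules, so it publishes cleanly.
readings = pd.DataFrame({
    "sensor_id": ["s-101", "s-102"],
    "recorded_at": pd.to_datetime(["2022-05-01T08:00", "2022-05-01T08:05"], utc=True),
})
publish_safety_data(readings)
```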

An Iterative Process

Though the implementation of data mesh can be transformative, it must be undertaken with care. Enterprises and their teams must understand the problems they’re aiming to address before building solutions.

Gravina, Semarchy’s CTO, recommends that organizations apply the same product management processes to building and delivering data products as they would to creating software. They should also use existing operations tools for deployment, he said. “When those practices are in place, they enable iteration and trial and error, and give teams a foundation to roll out new data products across multiple domains and see where success emerges.”
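
One way to read that advice about reusing existing operations tooling is to run a data product’s quality checks through the same test-and-deploy pipeline a team already uses for software. The sketch below assumes pytest and pandas; the check and sample data are invented for illustration.

```python
# A hypothetical quality check for a data product, written as ordinary pytest
# tests so it runs in the CI pipeline the organization already has in place.
import pandas as pd


def no_missing_permit_ids(df: pd.DataFrame) -> bool:
    return df["permit_id"].notna().all()


def test_rejects_rows_with_missing_permit_ids():
    bad = pd.DataFrame({"permit_id": [101, None]})
    assert not no_missing_permit_ids(bad)


def test_accepts_complete_data():
    good = pd.DataFrame({"permit_id": [101, 102]})
    assert no_missing_permit_ids(good)
```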

As with any structural overhaul, success is also contingent on an organization’s ability to put its people and teams first.

“Start by giving one domain team better self-service tools and see what they can do with them,” advised Kouhlani. “Above all, prepare for the emotional roller coaster. Like most transformations, technical implementation is table stakes. Managing the anxieties of central and domain teams is where you’ll spend most of your energy.”  

  • AI
  • Data Management
  • Hybrid Cloud
Kelli Korducki

Contributor

Kelli Korducki is a journalist, editor, and content strategist who covers tech, enterprise, and leadership.