Image courtesy of Unsplash

It would be an understatement to say your company is bullish on data.

Your CEO can’t stop talking about her new Tableau dashboard, a report that shows which of your products are “stickiest” with customers. It didn’t take much convincing to sell your CTO on Snowflake. And your entire data engineering team is all in on the “data as code” movement.

The flip side of this data-driven coin: your stakeholders (CEO and CTO included) ping you nearly every other hour to ask: “is my data up-to-date?”, “who owns this report?”, and even “why is my data missing?”

As data…


How to set expectations around data quality and reliability for your company

Image courtesy of Yevgenij_D on Shutterstock, available for use by author with Standard License.

For today’s data engineering teams, the demand for real-time, accurate data has never been higher, yet broken pipelines and stale dashboards are an all-too-common reality. So, how can we break this vicious cycle and achieve reliable data?

Just like our software engineering counterparts 20 years ago, data teams in the early 2020s are facing a significant challenge: reliability.

Companies are ingesting more operational and third-party data than ever before. Employees from across the business are interacting with data at all stages of its lifecycle, including those on non-data teams. …


Why we need to rethink our approach to metadata management and data governance

Image courtesy of Andrey_Kuzmin on Shutterstock

As companies increasingly leverage data to power digital products, drive decision making, and fuel innovation, understanding the health and reliability of these most critical assets is fundamental. For decades, organizations have relied on data catalogs to power data governance. But is that enough?

Debashis Saha, VP of Engineering at AppZen, formerly at eBay and Intuit, and Barr Moses, CEO and Co-founder of Monte Carlo, discuss why data catalogs aren’t meeting the needs of the modern data stack, and how a new approach — data discovery — is needed to better facilitate metadata management and data reliability.

It’s no secret: knowing…

Image courtesy of Monte Carlo

Monte Carlo, the data observability company, today announced the launch of the Monte Carlo Data Observability Platform, the first end-to-end solution to prevent broken data pipelines. Monte Carlo’s solution delivers the power of data observability, giving data engineering and analytics teams the ability to solve the costly problem of data downtime.

As businesses increasingly rely on data to drive better decision making and maintain their competitive edge, it’s mission-critical that this data is accurate and trustworthy. Today, companies spend upwards of $15 million annually tackling data downtime: periods of time when data is missing, broken, or otherwise…

Introducing a more proactive approach to data quality: the Data Reliability lifecycle.

Picture courtesy of Georg Arthur Pflueger on Unsplash.

Delivering reliable data products doesn’t have to be so painful.

Here’s why and how some of the best data teams are turning to DevOps and Site Reliability Engineering for inspiration when it comes to achieving a proactive, iterative model for data trust. Introducing: the Data Reliability lifecycle.

Imagine for a moment that you’re a car mechanic.

A sedan drives into your garage, engine sputtering.

“What’s wrong?” You ask, lifting your eyes from your desk.

The driver rolls down their window. “Something’s wrong with my car,” they respond.

Very descriptive, you think, wiping the sweat from your brow. …

Tricks of the Trade

What is Reverse ETL? Inquiring minds want to know

Image courtesy of Alice Donovan Rouse on Unsplash.

Modern data teams have all the right solutions in place to ensure that data is ingested, stored, transformed, and loaded into their data warehouse, but what happens at “the last mile?” In other words, how can data analysts and engineers ensure that transformed, actionable data is actually available to access and use?

Tejas Manohar, co-founder of Hightouch and former Tech Lead at Segment, and I explain where Reverse ETL and Data Observability can help teams go the extra mile when it comes to trusting their data products.

It’s 9 a.m. — you’ve had your second cup of coffee, your favorite…

Image courtesy of Monte Carlo.

I’m excited to share that Monte Carlo has raised $60 million in Series C funding from ICONIQ Growth with participation from Salesforce Ventures and existing investors Accel, GGV Capital, and Redpoint Ventures — bringing our total funding to $101M. With this round, we will fuel the growth of the Data Observability category, further develop our product offerings to better serve our customers, support more use cases, and expand to new markets.

Our Series C establishes us as the first Data Observability company to reach this milestone, a testament to our team’s industry-defining thought leadership, new product releases, and rapid customer…

Building a data mesh? Avoid these 7 mesh-conceptions.

Image courtesy of Ricardo Gomez Angel on Unsplash.

Zhamak Dehghani, founder of the data mesh, dispels common misunderstandings around the data mesh, an increasingly popular approach to building a distributed data architecture, and shares how some of the best teams are getting started.

Nowadays, it seems like every data person falls into one of two camps: those who understand the data mesh and those who don’t.

Rest assured: if you’re in either camp, you’re not alone!

Rarely in recent memory has a topic taken the data world by storm, spawning a thriving community, hundreds of blog articles, and sighs of relief from data leaders across industries struggling with democratization…

Data Observability 101

Here’s how the company pioneering incident management prevents data downtime

Image courtesy of Michael V on Shutterstock, available for use by author with Standard License.

PagerDuty helps over 16,800 businesses across 90 countries hit their uptime SLAs through their digital operations management platform, powering on-call management, event intelligence, analytics, and incident response.

So how does PagerDuty approach data-specific incident management within their own organization? I recently sat down with Manu Raj, Senior Director of Data Platform and Analytics (aptly named the DataDuty team), to learn more about his team’s strategy for preventing “data downtime” and the associated fire drills.

The data landscape at PagerDuty

PagerDuty’s Business data platform team has a clear mandate: to provide its customers with trusted data, anytime and anywhere, that is easy to understand and enables efficient…

Getting Started

How to set up end-to-end detection and alerting to identify and prevent silent errors in your data.

Image courtesy of John Schnobrich on Unsplash.

Broken data pipelines? Unreliable dashboards? One too many null values in that critical report?

When it comes to achieving reliable and accurate data, testing and circuit breakers will only get you so far. In our latest series, we highlight how data teams can set up end-to-end incident management workflows for their production data pipelines, including incident detection, response, root cause analysis & resolution (RCA), and a blameless post-mortem.
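To make the first stage of that workflow concrete, here is a minimal sketch of incident detection for a stale table, the simplest form of data downtime. The table name, timestamp column, and staleness threshold are hypothetical, and the in-memory SQLite database stands in for a real warehouse; this is an illustration of the idea, not Monte Carlo’s implementation.

```python
# Minimal freshness check: flag an incident if a table's newest row
# is older than an allowed staleness window.
import sqlite3
from datetime import datetime, timedelta, timezone

def detect_stale_table(conn, table, ts_column, max_staleness):
    """Return an incident dict if the table's latest timestamp is
    older than max_staleness (or the table is empty), else None."""
    row = conn.execute(f"SELECT MAX({ts_column}) FROM {table}").fetchone()
    latest = datetime.fromisoformat(row[0]) if row[0] else None
    now = datetime.now(timezone.utc)
    if latest is None or now - latest > max_staleness:
        return {"table": table, "latest": latest, "detected_at": now}
    return None

# Demo with an in-memory database and a deliberately stale row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
stale_ts = (datetime.now(timezone.utc) - timedelta(hours=6)).isoformat()
conn.execute("INSERT INTO orders VALUES (1, ?)", (stale_ts,))

incident = detect_stale_table(conn, "orders", "updated_at", timedelta(hours=1))
print("ALERT" if incident else "OK")
```

In practice the alert would route to a tool like Slack or PagerDuty rather than stdout, and the threshold would be tuned (or learned) per table, but the detect-then-alert shape stays the same.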

In this guest post, Monte Carlo‘s Scott O’Leary walks us through how to get started with incident detection and alerting, your first line of defense against data downtime.

As companies increasingly rely on data…

Barr Moses

Co-Founder and CEO, Monte Carlo (@BM_DataDowntime) #datadowntime
