Opinion

How to set expectations around data quality and reliability for your company

Image courtesy of Yevgenij_D on Shutterstock, available for use by author with Standard License.

For today’s data engineering teams, the demand for real-time, accurate data has never been higher, yet broken pipelines and stale dashboards are an all-too-common reality. So, how can we break this vicious cycle and achieve reliable data?

Just like our software engineering counterparts 20 years ago, data teams in the early 2020s are facing a significant challenge: reliability.

Companies are ingesting more operational and third-party data than ever before. Employees from across the business, including those on non-data teams, are interacting with data at all stages of its lifecycle. …


Opinion

Why we need to rethink our approach to metadata management and data governance

Image courtesy of Andrey_Kuzmin on Shutterstock

As companies increasingly leverage data to power digital products, drive decision making, and fuel innovation, understanding the health and reliability of these critical assets is fundamental. For decades, organizations have relied on data catalogs to power data governance. But is that enough?

Debashis Saha, VP of Engineering at AppZen, formerly at eBay and Intuit, and Barr Moses, CEO and Co-founder of Monte Carlo, discuss why data catalogs aren’t meeting the needs of the modern data stack, and how a new approach — data discovery — is needed to better facilitate metadata management and data reliability.

It’s no secret: knowing…


Image courtesy of Monte Carlo

Monte Carlo, the data observability company, today announced the launch of the Monte Carlo Data Observability Platform, the first end-to-end solution to prevent broken data pipelines. Monte Carlo’s solution delivers the power of data observability, giving data engineering and analytics teams the ability to solve the costly problem of data downtime.

As businesses increasingly rely on data to drive better decision making and maintain their competitive edge, it’s mission-critical that this data is accurate and trustworthy. Today, companies spend upwards of $15 million annually tackling data downtime, in other words, periods of time when data is missing, broken, or otherwise…


Monitor the health of your Snowflake data pipelines with these simple queries

Image courtesy of Sydney Rae on Unsplash.

Your team just migrated to Snowflake. Your CTO is all in on this “modern data stack,” or as he calls it: “The Enterprise Data Discovery.” But as any data engineer will tell you, not even the best tools will save you from broken pipelines.

In fact, you’ve probably been on the receiving end of schema changes gone bad, duplicate tables, and one-too-many null values on more occasions than you wish to remember.

The good news? When it comes to managing data quality in your Snowflake environment, there are a few steps your team can take to understand the health of your…
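
To give a flavor of what such checks can look like, here is a minimal sketch of two common health queries, freshness and volume, run against Snowflake’s INFORMATION_SCHEMA. The schema name, credentials, and table details are placeholder assumptions, and the snowflake-connector-python package is assumed to be installed:

```python
# A minimal sketch of two Snowflake pipeline health checks: freshness
# (how long since each table was updated) and volume (row counts).
# Assumes the snowflake-connector-python package; the schema name and
# credentials below are placeholders, not real values.
import snowflake.connector

FRESHNESS_SQL = """
    SELECT table_name,
           DATEDIFF('hour', last_altered, CURRENT_TIMESTAMP()) AS hours_since_update
    FROM information_schema.tables
    WHERE table_schema = 'PROD'
    ORDER BY hours_since_update DESC
"""

VOLUME_SQL = """
    SELECT table_name, row_count, bytes
    FROM information_schema.tables
    WHERE table_schema = 'PROD'
    ORDER BY row_count ASC
"""

def main() -> None:
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="***",
        warehouse="COMPUTE_WH", database="ANALYTICS",
    )
    try:
        cur = conn.cursor()
        for label, sql in (("freshness", FRESHNESS_SQL), ("volume", VOLUME_SQL)):
            cur.execute(sql)
            print(f"--- {label} ---")
            for row in cur.fetchmany(10):
                print(row)
    finally:
        conn.close()

if __name__ == "__main__":
    main()
```

A table whose LAST_ALTERED timestamp stopped moving, or whose ROW_COUNT dropped sharply overnight, is often the first visible symptom of a broken pipeline upstream.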


4 steps to identify, root cause, and fix data quality issues at scale

Image courtesy of Barr Moses.

As data systems become increasingly distributed and companies ingest more and more data, the opportunity for error (and incidents) only increases. For decades, software engineering teams have relied on a multi-step process to identify, triage, resolve, and prevent issues from taking down their applications.

As data operations mature, it’s time we treat data downtime (periods of time when data is missing, inaccurate, or otherwise erroneous) with the same diligence, particularly when it comes to building more reliable and resilient data pipelines.

While not a ton of literature exists about how data teams can handle incident management for…
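
To make the parallel with software incident management concrete, here is a minimal sketch of a data incident record moving through the identify, triage, resolve, and prevent stages described above. This is an illustration, not any particular tool’s implementation, and every name in it is hypothetical:

```python
# A minimal sketch of a data incident record moving through the
# identify -> triage -> resolve -> prevent workflow described above.
# Every name here is a hypothetical illustration, not a real tool's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Stage(Enum):
    IDENTIFIED = "identified"
    TRIAGED = "triaged"      # root cause is understood
    RESOLVED = "resolved"    # data is fixed and backfilled
    PREVENTED = "prevented"  # monitors/tests exist so it can't silently recur

ORDER = [Stage.IDENTIFIED, Stage.TRIAGED, Stage.RESOLVED, Stage.PREVENTED]

@dataclass
class DataIncident:
    table: str
    symptom: str
    stage: Stage = Stage.IDENTIFIED
    history: list = field(default_factory=list)

    def advance(self, note: str) -> None:
        """Move to the next stage, recording when and why."""
        idx = ORDER.index(self.stage)
        if idx == len(ORDER) - 1:
            raise ValueError("incident is already closed out")
        self.stage = ORDER[idx + 1]
        self.history.append((datetime.now(timezone.utc), self.stage, note))

incident = DataIncident(table="prod.orders", symptom="null rate spike in customer_id")
incident.advance("root cause: upstream schema change dropped a join key")
incident.advance("backfilled affected partitions and fixed the join")
incident.advance("added a schema-change monitor and a not-null test")
```

The prevent stage is the one most often skipped; closing an incident only once a monitor or test is in place is what keeps the same break from quietly recurring.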


Notes from Industry

One startup’s journey to more reliable data, from BigQuery to Looker

Image courtesy of WindAwake via Shutterstock under Standard License terms, as purchased by the author.

Many data leaders tell us that their data scientists and engineers spend 40 percent or more of their time tackling data issues instead of working on projects that actually move the needle.

It doesn’t have to be this way. Here’s how the data engineering team at Resident, a house of direct-to-consumer furnishings brands, reduced their data incidents by 90% with data observability at scale.

Direct-to-consumer mattress brands may not be the first category that comes to mind when discussing data-driven companies. But Daniel Rimon, Head of Data Engineering at Resident, credits their investment in technology, data, and marketing with their…


Introducing a better metric for calculating the cost of bad data at your company

Image courtesy of Barr Moses.

To quote a friend, “Building your data stack without factoring in data quality is like buying a Ferrari but keeping it in the garage.”

In this article, guest columnist Francisco Alberini, Product Manager at Monte Carlo, introduces a better way to measure the cost of bad data at your company.

Last week, I was on a Zoom call with Lina, a Data Product Manager at one of our larger customers who oversees their data quality program.

Her team is responsible for maintaining thousands of data pipelines that populate many of the company’s most business-critical tables. Reliable and trustworthy data is…
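
As a preview of how a metric like this can be operationalized, here is a minimal sketch in the spirit of the data downtime framing used throughout these posts. The formula and every figure in it are illustrative assumptions, not the article’s exact metric:

```python
# A sketch of an annual data downtime cost estimate, in the spirit of
# the "data downtime" framing used throughout these posts. The formula
# and all figures below are illustrative assumptions.

def downtime_hours(incidents: int, avg_ttd_hours: float, avg_ttr_hours: float) -> float:
    """Total downtime = incidents x (time to detect + time to resolve)."""
    return incidents * (avg_ttd_hours + avg_ttr_hours)

def downtime_cost(hours: float, engineers: int, hourly_rate: float,
                  business_cost_per_hour: float) -> float:
    """Labor spent firefighting, plus the business impact of bad data."""
    return hours * engineers * hourly_rate + hours * business_cost_per_hour

hours = downtime_hours(incidents=60, avg_ttd_hours=4, avg_ttr_hours=9)
cost = downtime_cost(hours, engineers=2, hourly_rate=75, business_cost_per_hour=500)
print(f"annual downtime: {hours:.0f} hours")   # 780 hours
print(f"estimated annual cost: ${cost:,.0f}")  # $507,000
```

Plugging in your own incident counts, detection times, and resolution times is a quick way to sanity-check whether the often-cited seven-figure estimates are plausible for your team.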


The Definitive Guide

5 essential steps for troubleshooting data quality issues in your pipelines

Image courtesy of Monte Carlo.

This guest post was written by Francisco Alberini, Product Manager at Monte Carlo and former Product Manager at Segment.

Data pipelines can break for a million different reasons, and there isn’t a one-size-fits-all approach to understanding how or why. Here are five critical steps data engineers must take to conduct root cause analysis for data quality issues.
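
As a taste of what one of those steps looks like in practice, a common early move is to segment the bad records by upstream source to localize where the break entered. Here is a minimal sketch; the table and column names are hypothetical, and the SQL is plain enough to run on most warehouses via your usual client:

```python
# One root cause step sketched in code: segment the symptom (here,
# null customer_ids) by upstream source to localize where bad records
# entered. Table and column names are hypothetical; the SQL is plain
# enough to run on most warehouses via your usual client.
SEGMENTATION_SQL = """
    SELECT source_system,
           COUNT(*) AS total_rows,
           SUM(CASE WHEN customer_id IS NULL THEN 1 ELSE 0 END) AS null_rows,
           100.0 * SUM(CASE WHEN customer_id IS NULL THEN 1 ELSE 0 END)
               / COUNT(*) AS null_pct
    FROM prod.orders
    WHERE created_at >= CURRENT_DATE - INTERVAL '1 DAY'
    GROUP BY source_system
    ORDER BY null_pct DESC
"""

if __name__ == "__main__":
    print(SEGMENTATION_SQL)  # execute via the warehouse client of your choice
```

If one source_system accounts for nearly all of the nulls, the search narrows from the entire pipeline to a single upstream feed.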

While I can’t know for sure, I’m confident many of us have been there.

I’m talking about the frantic late-afternoon Slack message that looks like:


Data Downtime Interview

A conversation with Cindi Howson on what it takes to achieve data democratization at scale.

We sat down with Cindi Howson, Chief Data Strategy Officer at ThoughtSpot, the leading search and AI-driven analytics platform, for a wide-ranging conversation about her daily work, common challenges organizations face on the road to data democratization, and diversity in data science.

Over the past few decades, the world of data analytics has undergone a transformation from a siloed entity to a cross-functional powerhouse. Now, in 2021, in this decade of data, the time is ripe for yet another sea change, this time in the form of data democratization and accessibility.

Paving the way forward for this new movement towards actionable…


Why we need a distributed approach to data governance and metadata management

Image courtesy of Jason Leung on Unsplash.

Over the past few years, data lakes have emerged as a must-have for the modern data stack. But while the technologies powering our access and analysis of data have matured, the mechanics behind understanding and trusting this data in distributed environments have lagged behind.

Here’s where data discovery can help ensure your data lake doesn’t turn into a data swamp.

One of the first decisions data teams must make when building a data platform (second only perhaps to “why are we building this?”) is whether to choose a data warehouse or lake to power storage and compute for their analytics.

Barr Moses

Co-Founder and CEO, Monte Carlo (www.montecarlodata.com). @BM_DataDowntime #datadowntime
