Stressful leadership meetings are par for the course for most startups and other high-growth companies—I know they have been for me.
It often looks like this: everyone scrambles to get the data "right" before the meeting begins, but all too frequently, no one really trusts the numbers. It's not uncommon for leaders to show up with different data for the same metric, which doesn't exactly inspire confidence.
I learned this all the hard way, as VP of BizOps and then VP of Customer for a real estate technology company that rapidly expanded to ~$50 million in ARR and 150 employees. I oversaw all customer-facing functions, including sales, customer success, customer support, and pricing, so as you can imagine, there was a lot of data involved. I needed to report on my team's performance and how it impacted the financial success of the company. Measuring, monitoring, and moving important metrics was a vital part of my job.
We also had to contend with the growing pains of a rapidly expanding organization with an ever-increasing appetite for analytics. This was a major driver of the data-related leadership meeting challenges we faced. Hopefully, the lessons I learned during those three years can help other teams avoid, or at least quickly address, the same problems.
The reasons your data-reporting meetings aren't working for you
Data-related challenges can crop up and derail your leadership meetings at any point, even if you've already invested in a modern data stack (the suite of technology and tools that companies use to collect, process, store, and disseminate data). And in my experience, there are four main data-related reasons for leadership meeting failures. Each one is categorized based on whether or not a company has invested in a modern data stack.
1. Pre-data stack investment: The spaghetti stack problem
In this scenario, every team pulls data from siloed systems and SaaS tools. This mismatched data infrastructure closely resembles a mess of spaghetti, and it's the default state that startups and high-growth companies are in before they invest in a modern data stack.
When I first joined that real estate platform, I was a BizOps team of one. I inherited a Rube Goldberg machine of spreadsheets and CSVs exported from SaaS tools like Salesforce and HubSpot. Every week, I would manually aggregate data from each source and report my findings to the rest of the leadership team. I often woke up at 4 a.m. on Monday mornings to get this done before our weekly leadership meeting.
This wasn't fun, or ideal. The process was error-prone and time-consuming, and it ruined many a Sunday and Monday morning as I scrambled to find the data I needed for our weekly business review (which is what we called our weekly leadership meetings).
We needed to clean up our spaghetti, which meant we needed to invest in a modern data stack that included:
A data warehouse to store our most important data
A data ingestion tool to move data into our data warehouse
A data transformation and orchestration tool to define and model our key metrics
A data analytics and business intelligence (BI) tool to visualize the data
There's a dizzying array of technologies available to power your modern data stack, and there's no one-size-fits-all suite of tooling. That being said, these are the products I recommend most often:
Data warehouse: Snowflake, Amazon Redshift, or Google BigQuery
Transformation and orchestration: dbt and Apache Airflow (a quick sketch of what a transformation model looks like follows this list)
Data analytics and BI: Google Data Studio (this is an area where costs can quickly balloon out of control, which is why I recommend this free tool to start; you can always upgrade later if needed)
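To make the transformation and orchestration layer a bit more concrete, here's a minimal sketch of the kind of SQL model a tool like dbt would manage. All table and column names here are hypothetical; the point is that a key metric gets defined once, in version-controlled code, rather than recalculated by each team:

    -- models/weekly_new_arr.sql (hypothetical dbt model; all names made up)
    -- Defines "new ARR by week" once, so every team reports the same number.
    with closed_won as (
        select
            id as opportunity_id,
            close_date,
            annual_contract_value
        from raw.salesforce_opportunities   -- loaded by the ingestion tool
        where stage = 'Closed Won'
    )
    select
        date_trunc('week', close_date) as week_start,
        count(distinct opportunity_id) as deals_won,
        sum(annual_contract_value) as new_arr
    from closed_won
    group by 1
    order by 1

The ingestion tool keeps the raw table up to date, the orchestrator rebuilds the model on a schedule, and the BI tool simply reads the result, so everyone is looking at the same definition of the metric.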
It looks straightforward enough when you lay it out like this, but standing up our stack was an extensive cross-functional project. And a modern data stack didn't solve all of my data-related weekly leadership meeting problems, either.
2. Post-data stack investment: The old dog, new tricks problem
Even with a powerful modern data stack that centralizes data and metrics, trust in data is still a big problem for many companies. Since business teams don't always trust warehouse data, they continue to pull data directly from source systems, like Google Analytics, Salesforce, or a production database. From there, they report their own metrics.
Sometimes, it's tough to teach an old dog new tricks.
This is the situation I found myself in at the real estate startup. As my role in the company evolved, my team and I were responsible for both determining and measuring our goals and OKRs (objectives and key results). In many cases, we also owned specific cross-functional projects designed to move those key metrics.
While we had implemented a centralized data infrastructure and adopted a BI tool, some team leaders remained highly skeptical. They continued to pull metrics directly from SaaS tools and our production database, where the data hadn't yet been cleaned and normalized. Confusion ensued: the same metric showed different results depending on where it was pulled from. We were living the spaghetti stack problem all over again, and weekly leadership meetings were still stressful and ineffective.
How did we solve this? Ultimately, we did it through education, communication, and developing a common understanding of what we were trying to accomplish. We also committed to improving the documentation behind our core data sets and metrics. This gave non-data team members a look behind the curtain, so they could confirm that data was trustworthy, accurate, and up-to-date.
When a company finds itself living the "old dog, new tricks" problem, it needs to encourage data and business teams to work with, not against, each other. Employees should develop rapport and work to understand each other's roles and responsibilities, so they can share more bi-directional context.
It's also important to note that your data stack itself can exacerbate the issues described above. BI tools are a prime example: organizations want to use data to drive action, and if a dashboard is the only sanctioned way to consume warehouse data, business teams will keep going back to the source systems they already know. Dashboards aren't the only way to communicate (and use) data.
If your company finds itself living the "old dog, new tricks" issue after investing in a data stack, take a careful look at your people and your tooling to determine your next steps.
3. Post-data stack investment: The multiple sources of truth problem
To ensure that stakeholders are using the right data to make decisions, you need to maintain a single source of truth with accurate and trustworthy data. This can be easier said than done, which is why so many organizations find themselves with multiple sources of truth.
Most often, this problem manifests when someone downloads a data set from a data team-approved view. Then, they perform and maintain their own transformations and "shadow" business logic in Google Sheets, Excel, or another operating document. The result is a second source of truth—and you'd be surprised how many leadership meeting decks get shipped with incorrect data (and therefore incorrect conclusions) because of this.
The "multiple sources of truth" problem is all-too-familiar for me—I experienced it at the real estate platform when we had a few teams maintaining their own data sets outside of our core systems and SaaS tools. Since teams were maintaining their own metrics, the data team had zero data visibility. They couldn't vet the logic and cleanliness of the data and had no way of knowing what data was accurate or up-to-date.
We solved this problem by building more scalable processes connected to our data infrastructure—for example, building processes into low/no-code tools like Zapier and Retool. We also developed well-documented data governance processes around how data is stored, accessed, and refreshed across the entire organization.
4. Pre- or post-data stack investment: The air traffic control problem
To run a high-ROI leadership meeting with stakeholders across departments, businesses need to aggregate, cross-reference, and analyze multiple datasets at once. Coordinating the syncs required to get the right data from the right place at the right time often leads to time lags and differing cutoff periods.
Without the right infrastructure, this is like trying to land multiple jets at the same time on the same aircraft carrier—which is why I call this one the "air traffic control" problem.
The air traffic control problem can happen to companies both before and after data stack investments, so the solutions will vary from one organization to another. If you don't have a modern data stack, investing in one is the first order of business. Hiring a great data leader can also help, but you don't necessarily need a data analyst, data engineer, or data scientist. Personally, I'm partial to onboarding an analytics engineer as your first data team hire. From dbt:
"Analytics engineers provide clean data sets to end users, modeling data in a way that empowers end users to answer their own questions. While a data analyst spends their time analyzing data, an analytics engineer spends their time transforming, testing, deploying, and documenting data. Analytics engineers apply software engineering best practices like version control and continuous integration to the analytics code base."
This is a relatively new position in the data space, so it wasn't the first option we explored at the real estate startup. By the time we encountered the air traffic control problem, I was the leader of all our customer-facing functions, including Sales, Account Management, and Customer Support. We'd come a long way since the spaghetti mess days and had solid data infrastructure and reporting. But we wanted to run our weekly leadership meeting on Monday morning to create as much momentum as we could for the rest of the week.
That meant we had to consolidate, make sense of, and roll up data for six different departments, all before 10 a.m. on Monday morning. No small feat.
We never did solve this problem completely—our solution was a bunch of SQL queries that dumped data into a massive spreadsheet with 10 different tabs. It wasn't the most elegant solution and broke frequently, but it was the best we could do at the time.
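For flavor, here's a heavily simplified, hypothetical sketch of the kind of roll-up query we relied on. The table and column names are made up; the important part is that every department's numbers were computed against the same cutoff timestamp:

    -- Hypothetical Monday roll-up: each department's metric is measured
    -- against the same cutoff so the numbers line up in the meeting doc.
    with cutoff as (
        select timestamp '2023-01-09 08:00:00' as as_of
    )
    select 'Sales' as department, 'deals_closed' as metric, count(*) as value
    from analytics.closed_deals, cutoff
    where closed_at < cutoff.as_of
    union all
    select 'Support' as department, 'tickets_resolved' as metric, count(*) as value
    from analytics.resolved_tickets, cutoff
    where resolved_at < cutoff.as_of

The real version had far more branches and tabs, which is a big part of why it broke so often.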
The pain from this process inspired my co-founder and me to start AirOps, which helps businesses drive trusted data into all the places they need it. In the context of a leadership meeting, AirOps allows folks to easily and automatically prepare data inside of the operating documents they use to run the meeting, including Google Sheets, Google Docs, Notion, Airtable, and more. That means you can make decisions based on reliable, accurate data that's consistent across the company—without it feeling like you're putting on a Broadway show every Monday morning. The diagram below shows how it works.
Once you have data inside of a tool like Google Sheets, you can either create your own output or use a prebuilt AirOps template to run your leadership meetings (like the one below).
The data is automatically synced before the meeting, so attendees only need to update the status of their goals and write descriptive commentary that brings context to the numbers.
Best practices to wrangle data ahead of your leadership meetings
Each data-related reason for leadership meeting failures has its own ideal solution. That being said, there are still some universal best practices that you can use to wrangle data in advance of leadership meetings, regardless of where you're currently at with data:
Normalize your "7-day business week" across teams, tools, and time zones.
Set a specific time for metrics cutoff to ensure consistent reporting (e.g., all metrics need to be in the leadership meeting doc two hours before the meeting starts), then schedule your import sync timing accordingly.
Set up monitoring and quality control to proactively identify and fix downstream data issues before they derail an expensive meeting (see the sketch after this list).
Train your leaders on what metrics matter most to the business, how they're calculated, and what drivers are most impactful to moving them.
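To illustrate the cutoff and monitoring points above, here's a minimal sketch of a freshness check, assuming a hypothetical load-audit table that records when each source was last synced. Any rows it returns are sources that missed the cutoff and need attention before the meeting starts:

    -- Hypothetical freshness check: flag any source that wasn't refreshed
    -- before the metrics cutoff (e.g., two hours before the meeting).
    select
        source_name,
        max(loaded_at) as last_refreshed_at
    from analytics.ingestion_log
    group by source_name
    having max(loaded_at) < timestamp '2023-01-09 08:00:00'  -- the cutoff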
Once you identify and address the data-related problem responsible for mismatched metrics and untrustworthy data, you'll have better data visibility throughout the company, stronger organization-wide trust in data, improved data adoption, and far fewer data-related headaches before, during, and after leadership meetings.
This was a guest post from Matt Hammel, co-founder and COO at AirOps, a data platform that lets non-data people use whatever data they need, wherever they need it. Want to see your work on the Zapier blog? Read our guidelines, and get in touch.