Building Cross-Brand Performance Reporting in BigQuery

The gap between a loose collection of brands and a coordinated enterprise often comes down to visibility. In multi-brand organizations, establishing reliable cross-brand performance reporting is harder than it looks.

Marketing data almost always ends up siloed. Each brand typically has its own ad accounts, analytics properties, and regional workflows. Those teams can run well locally, but executives still need portfolio-level clarity to allocate budget and set strategy. That’s where cross-brand performance reporting becomes an architectural problem, not just a reporting request.

BigQuery is a strong fit for centralization, but it is not plug-and-play. If you load raw exports without a plan, you can end up with unmapped IDs, inconsistent metric definitions, and avoidable security exposure. The system becomes expensive to maintain and hard to trust.

A sustainable solution requires more than SQL. You need governance and a repeatable architecture. This guide lays out a practical blueprint for data architects and BI leads unifying multi-brand data: a unified model, master mapping tables for identity resolution, and controls like row-level security.

Why BigQuery Is the Foundation for Enterprise-Scale Reporting

Most teams start centralizing because platform-native reporting hits a ceiling. Facebook Ads Manager and GA4 are fine for day-to-day optimization inside one account, but they don’t give you a clean portfolio view across brands, regions, and channels. Spreadsheets can patch the gap for a while, but as volume grows, manual aggregation turns into an error-prone process with weak version control.

BigQuery supports cross-brand performance reporting well because it is serverless and scales without capacity planning. Storage and compute are separated, so you can ingest data from many ad accounts and query it flexibly without managing warehouse infrastructure. Cost aligns to what you store and what you query, whether you’re looking at one brand’s last 30 days or aggregating years of history across the portfolio.

If you’re already invested in Google tools, the native integrations also help. GA4, Google Ads, and Campaign Manager 360 can land directly in BigQuery through managed transfers. That reduces dependency on third-party connectors for a meaningful portion of the stack and keeps the pipeline closer to first-party sources.

Core Architecture: Designing a Unified Data Model

The most common failure in cross-brand reporting isn’t ingestion. It’s the model. If you ingest twenty accounts into twenty different tables and stop there, you don’t have a reporting system. You have a growing mess. To deliver cross-brand performance reporting, you need a unified schema that normalizes brand-specific quirks into a consistent shape.

Step 1: Standardize Metrics and Dimensions

Start with definitions, not code. In multi-brand portfolios, the same KPI label can mean different things.

For example, Brand A might treat a Conversion as a completed purchase, while Brand B counts Add to Cart events. If you roll those up as-is, portfolio CPA will be misleading.

Work with marketing and finance stakeholders to build a Global Data Dictionary that clearly defines:

  • Core KPIs: One agreed calculation for ROAS, margin, and CPA.

  • Standardized dimensions: A consistent convention for channel, region, and platform. If your sources contain values like Facebook, fb, and Meta, decide how they map into a single standard value such as Paid Social.
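In SQL, this kind of dimension mapping often comes down to a CASE expression in the staging layer. A minimal sketch, assuming illustrative column and table names (source_platform, raw_ads.campaign_stats) rather than your actual schema:

```sql
-- Staging-layer sketch: collapse platform name variants into one
-- standardized channel_group value. Names here are illustrative.
SELECT
  campaign_name,
  CASE
    WHEN LOWER(source_platform) IN ('facebook', 'fb', 'meta') THEN 'Paid Social'
    WHEN LOWER(source_platform) IN ('google ads', 'adwords') THEN 'Paid Search'
    ELSE 'Other'
  END AS channel_group
FROM raw_ads.campaign_stats;
```

Keeping these mappings in one staging view, rather than scattered across dashboards, means a new platform alias only has to be added in one place.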

Step 2: Create a Central Mapping Table

A master mapping table is the engine behind accurate cross-brand performance reporting. Ad platforms don’t know your internal hierarchy. They only expose account IDs and similar identifiers. You need a repeatable way to translate a raw account_id into business context.

Create a BigQuery table (for example, dim_account_mapping) and treat it as the system of record for identity resolution. Many teams maintain it manually or sync it from a governed spreadsheet owned by operations. Either way, it becomes the join point that enriches source data with consistent dimensions.

Essential schema for the mapping table:

  • source_account_id (the joining key, for example 123-456-7890)

  • source_platform (for example Google Ads, LinkedIn)

  • brand_name (for example Acme Corp, Beta Inc)

  • business_unit (for example SaaS Division, Consumer Goods)

  • region (for example NA, EMEA)

  • currency (the local currency of the ad account)
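As a sketch, the schema above could be declared like this; the project and dataset names are placeholders matching the examples later in this guide:

```sql
-- Illustrative DDL for the mapping table; adjust names to your project.
CREATE TABLE IF NOT EXISTS `project.config.dim_account_mapping` (
  source_account_id STRING NOT NULL,  -- joining key, e.g. '123-456-7890'
  source_platform   STRING NOT NULL,  -- e.g. 'Google Ads', 'LinkedIn'
  brand_name        STRING,           -- e.g. 'Acme Corp'
  business_unit     STRING,           -- e.g. 'SaaS Division'
  region            STRING,           -- e.g. 'NA', 'EMEA'
  currency          STRING            -- local currency of the ad account
);
```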

Keeping this mapping separate from raw ingestion is important. It decouples business logic from pipelines. When a brand moves to a new business unit, you update one row and reporting reflects the change without rewiring transformations.

Step 3: Structure Datasets and Tables

BigQuery organization decisions show up later as maintenance cost, so it’s worth being deliberate. Creating a separate Google Cloud Project per brand often makes cross-brand querying and access control harder than it needs to be. A cleaner approach is a layered dataset structure inside one central project:

  1. Raw layer (ingestion): Store data by source (or source plus region), like raw_google_ads and raw_facebook_ads. Keep schemas close to the source and avoid editing these tables.

  2. Staging layer (cleaning): Use views or ephemeral tables for light typing and cleanup (for example, dates as dates, numbers as numbers).

  3. Production layer (marts): Unify here. A star schema is usually the simplest path, with a fact table such as fct_marketing_performance linked to shared dimensions.
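In BigQuery, the layered structure maps to one dataset per layer, which can be created with DDL. A sketch, assuming the dataset names used above and a US location (set yours):

```sql
-- One dataset per layer inside the central project.
-- The location is an assumption; choose your own region.
CREATE SCHEMA IF NOT EXISTS raw_google_ads   OPTIONS (location = 'US');
CREATE SCHEMA IF NOT EXISTS raw_facebook_ads OPTIONS (location = 'US');
CREATE SCHEMA IF NOT EXISTS staging          OPTIONS (location = 'US');
CREATE SCHEMA IF NOT EXISTS marts            OPTIONS (location = 'US');
```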

Your production output should look the same regardless of where it came from. A simplified schema might include:

  • date

  • brand_name (mapped)

  • campaign_name

  • cost

  • impressions

  • clicks

  • conversions
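Put together, the fact table might be declared as follows. The partitioning and clustering choices here are assumptions worth tuning to your own query patterns, not requirements:

```sql
-- Illustrative fact table; partitioned by date to keep scans cheap,
-- clustered by brand for common brand-filtered queries.
CREATE TABLE IF NOT EXISTS `project.marts.fct_marketing_performance` (
  date          DATE,
  brand_name    STRING,   -- mapped via dim_account_mapping
  campaign_name STRING,
  cost          NUMERIC,
  impressions   INT64,
  clicks        INT64,
  conversions   INT64
)
PARTITION BY date
CLUSTER BY brand_name;
```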

Implementation: The Data Pipeline and Governance

Once the model is set, execution is straightforward in concept: ingest reliably, transform consistently, and lock down access. Automation matters here. Manual CSV uploads won’t hold up at enterprise scale.

Ingestion and Transformation

For ingestion, prefer managed ELT tooling over one-off scripts that break whenever an API changes. Google sources can use the BigQuery Data Transfer Service for low-friction ingestion. For platforms like LinkedIn or TikTok, third-party connectors are common.

After data lands, the transformation layer does the real work. dbt (data build tool) and Google Dataform are popular because they keep transformation logic in SQL while adding software practices like version control, testing, and documentation.

Most transformation logic boils down to two recurring steps:

  1. Normalize and union sources into a consistent schema.

  2. Join the master mapping table to add brand and organizational context.

A conceptual example:

WITH all_ad_spend AS (
  -- Normalize Google Ads columns
  SELECT
    DATE(date) AS date,
    CAST(account_id AS STRING) AS source_account_id,
    'Google Ads' AS source_platform,
    campaign_name,
    cost_micros / 1000000 AS cost,
    impressions
  FROM `project.raw_google_ads.campaign_stats`

  UNION ALL

  -- Normalize Facebook Ads columns
  SELECT
    DATE(date_start) AS date,
    CAST(account_id AS STRING) AS source_account_id,
    'Facebook Ads' AS source_platform,
    campaign_name,
    spend AS cost,
    impressions
  FROM `project.raw_facebook_ads.insights`
)

SELECT
  t.date,
  m.brand_name,
  m.business_unit,
  t.campaign_name,
  SUM(t.cost) AS total_spend,
  SUM(t.impressions) AS total_impressions
FROM all_ad_spend t
LEFT JOIN `project.config.dim_account_mapping` m
  ON t.source_account_id = m.source_account_id
 AND t.source_platform = m.source_platform
GROUP BY 1, 2, 3, 4

With this approach, source-specific complexity stays contained, and downstream reporting runs off a clean, business-ready table.

Governance and Access Control

In multi-brand environments, access requirements differ by role. A regional lead for Brand A may need full detail for their own business, but they shouldn’t automatically see granular cost data for Brand B, especially when P&L ownership is separate.

BigQuery gives you a few practical options:

  • Authorized views: Build views that filter the master table (for example, WHERE brand_name = 'Brand A') and grant access to the views instead of the base tables.

  • Row-level security (RLS): Apply policies to the base table and map users to permitted brands via a security table. BigQuery filters rows at query time so users only see what they’re allowed to see, which reduces the need to maintain many brand-specific views.
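Row-level security is declared directly on the table. A minimal sketch, assuming one access group per brand and the fact table named earlier (the policy name, group address, and table path are placeholders):

```sql
-- Members of this group see only Brand A rows at query time.
CREATE ROW ACCESS POLICY brand_a_filter
ON `project.marts.fct_marketing_performance`
GRANT TO ('group:brand-a-leads@example.com')
FILTER USING (brand_name = 'Brand A');
```

One policy per brand keeps the base table shared while each audience sees only its slice, with no brand-specific views to maintain.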

Activating the Data: Visualization and Analysis

Centralizing in BigQuery only matters if it enables actionable cross-brand performance reporting. With a unified, governed dataset, BI tools like Looker Studio, Tableau, or Power BI can query consistent tables and deliver comparable reporting across brands.

This architecture makes several reporting patterns much easier to maintain:

  1. The portfolio command center: An executive view of aggregated spend, revenue, and ROAS across business units to spot outliers and reallocate budget.

  2. Cross-channel efficiency analysis: Roll up performance by channel_group across the portfolio to benchmark results (for example, Paid Search CPA for B2B brands versus B2C brands).

  3. Self-service brand dashboards: Brand managers get dashboards filtered to their own data. With a standardized model, you can build one template and reuse it across brands, which cuts dashboard maintenance significantly.

A centralized model also reduces time wasted reconciling competing spreadsheet versions. Instead, stakeholders work from one source of truth and spend meetings on decisions rather than disputes.

Conclusion

A centralized BigQuery architecture turns marketing data from an operational burden into an asset you can scale. With a unified model and clear governance, cross-brand performance reporting becomes accurate, secure, and maintainable.

If you’re starting from scratch, build the master mapping table first. Audit your active ad accounts, list every account_id, and map each one to the right brand and business unit. That single step makes the rest of the design concrete.

Before you lock the schema, confirm metric definitions with finance, especially calculations like margin and ROAS. Getting alignment early prevents rework later and increases trust in the reporting from day one.

Ready to scale without reporting stress?

Join the agencies that have eliminated manual reporting and built a data infrastructure they can actually trust.