Most marketing teams know something is off with their technology stack. Reports do not match. Automations fire incorrectly. The CRM is full of duplicates. But diagnosing exactly what is broken — and in what order to fix it — is where teams get stuck.

This is the audit checklist we use at Axiolo when we onboard a new client. It covers 20 items across four categories: data foundations, integration health, attribution and reporting, and automation readiness. Each item includes what to check, what good looks like, what broken looks like, and how to fix it.

According to Gartner, poor data quality costs organizations an average of $12.9 million per year, and nearly 60% of organizations do not measure the annual financial cost of poor quality data. This checklist is designed to surface those hidden costs before they compound.

Section 1: Data Foundation (Items 1–5)

1. Lifecycle Stage Definitions

What to check: Are your CRM lifecycle stages defined, documented, and consistently applied?

What good looks like: Every contact and company has a lifecycle stage. Stages map to your actual sales process with clear criteria for progression. Definitions are documented and shared between marketing and sales. Stage transitions are automated based on explicit criteria.

What broken looks like: Contacts stuck in “Subscriber” or “Lead” indefinitely. No documented definitions. Marketing and sales disagree on what “MQL” means. Manual stage changes with no audit trail.

How to fix it: Start with our guide on HubSpot lifecycle stages for revenue reporting. Define each stage, get sales and marketing to sign off, then implement automation rules that enforce the definitions.

2. Contact and Company Naming Standards

What to check: Are names, company records, and contact fields standardized?

What good looks like: Company names follow a single format (e.g., “Acme Corp” not a mix of “ACME,” “Acme Corporation,” “acme corp”). Contact name fields are properly capitalized. Industry, job title, and other text fields use controlled vocabularies where possible.

What broken looks like: The same company appears under five different spellings. Job title contains everything from “VP Marketing” to “vice president, marketing” to “VP of Mktg.” Reporting by company or title is unreliable.

How to fix it: Run a deduplication pass using your CRM’s built-in tools or a third-party solution. Establish formatting rules and enforce them via form validation, import rules, and periodic cleanup workflows. See our CRM data cleanup guide for the full process.
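To make the formatting-rule idea concrete, here is a minimal sketch of a company-name normalizer. The suffix table and canonical forms are illustrative assumptions, not a standard; adapt them to your own naming policy before running a cleanup pass.

```python
import re

# Hypothetical suffix table: collapses common company-name variants
# ("Corporation", "corp.") to one canonical form so "ACME Corporation"
# and "acme corp." match in reports. Extend for your own database.
SUFFIXES = {"corporation": "Corp", "corp": "Corp", "incorporated": "Inc",
            "inc": "Inc", "limited": "Ltd", "ltd": "Ltd", "llc": "LLC"}

def normalize_company(name: str) -> str:
    # Strip stray punctuation, squeeze whitespace, title-case the base name
    cleaned = re.sub(r"[.,]", "", name).strip()
    out = []
    for word in cleaned.split():
        out.append(SUFFIXES.get(word.lower(), word.title()))
    return " ".join(out)
```

Running every inbound record through a function like this at import time is what "enforce them via import rules" looks like in practice.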

3. UTM and Campaign Naming Governance

What to check: Do all campaigns follow a standardized UTM and naming convention?

What good looks like: Every campaign URL uses UTM parameters from a controlled vocabulary. Campaign names in Google Ads, LinkedIn, Meta, and your CRM all follow the same structure. A tracking spreadsheet or URL builder enforces consistency.

What broken looks like: GA4 shows “google,” “Google,” “adwords,” and “google.com” as separate sources. Campaign names are free-text with no pattern. Nobody knows which UTM values are “official.”

How to fix it: Implement the framework in our UTM and campaign naming convention guide. Build a URL generator with validation, and audit UTM compliance monthly.
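The validation step in a URL generator can be as simple as checking parameters against a controlled vocabulary. The allowed values below are placeholder assumptions; substitute your own approved source and medium lists.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical controlled vocabulary -- replace with your approved values.
ALLOWED = {
    "utm_source": {"google", "linkedin", "meta", "newsletter"},
    "utm_medium": {"cpc", "paid-social", "email", "organic-social"},
}
REQUIRED = ("utm_source", "utm_medium", "utm_campaign")

def validate_utm(url: str) -> list[str]:
    """Return a list of violations; an empty list means the URL passes."""
    params = parse_qs(urlparse(url).query)
    errors = [f"missing {p}" for p in REQUIRED if p not in params]
    for param, allowed in ALLOWED.items():
        for value in params.get(param, []):
            if value not in allowed:
                errors.append(f"{param}={value} not in controlled vocabulary")
    return errors
```

Wiring this check into the URL builder (and into the monthly audit) is what keeps “Google” and “google” from ever splitting your GA4 source reports again.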

4. Duplicate Management

What to check: What is your duplicate contact and company rate, and is there a process to prevent new duplicates?

What good looks like: Duplicate rate below 5%. Automated deduplication rules run on a regular schedule. Import processes include dedup checks. Form submissions match to existing records before creating new ones.

What broken looks like: Duplicate rates above 15%. Contacts exist as multiple records with different lifecycle stages, creating phantom pipeline and inflated lead counts. No automated dedup process. Every import creates new duplicates.

Research from IBM published in Harvard Business Review estimated that poor data quality — including duplicates — cost the U.S. economy $3.1 trillion per year. At the individual CRM level, the 1-10-100 rule applies: preventing a bad record costs $1, cleaning it later costs $10, and leaving it unchecked costs $100 in downstream impact.

How to fix it: Run a deduplication audit using HubSpot’s built-in duplicate management tool or Salesforce’s duplicate rules. Set up ongoing rules to prevent new duplicates from forming.
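Before reaching for platform tools, you can estimate your duplicate rate from a simple export. The sketch below groups contacts by normalized email; the record shape (dicts with an "email" field) is a stand-in for whatever your CRM export actually produces.

```python
from collections import defaultdict

def duplicate_rate(contacts: list[dict]) -> float:
    """Share of records that are surplus copies, keyed on normalized email."""
    groups = defaultdict(list)
    for contact in contacts:
        key = contact.get("email", "").strip().lower()
        if key:
            groups[key].append(contact)
    # Each group of n matching records contributes n - 1 duplicates
    surplus = sum(len(g) - 1 for g in groups.values() if len(g) > 1)
    return surplus / len(contacts) if contacts else 0.0
```

Email-only matching undercounts duplicates (people use multiple addresses), so treat the result as a floor and let your CRM's fuzzy-matching tools find the rest.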

5. Data Hygiene Workflows

What to check: Are there automated workflows that maintain data quality over time?

What good looks like: Workflows that standardize field values on record creation or update. Automated processes that flag or archive stale records (e.g., no activity in 12+ months). Regular re-engagement campaigns that validate email addresses and contact status.

What broken looks like: No automated hygiene. The database grows indefinitely with no archival. Bounce rates climb because dead emails are never removed. Fields drift into inconsistency over time because nothing enforces standards.

How to fix it: Build a minimum set of hygiene workflows: field standardization on create, stale record flagging, bounce management, and quarterly data quality reviews.
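The stale-record flagging workflow reduces to one comparison. This sketch assumes a "last_activity" date field and a 30-day month approximation; your MAP's workflow builder expresses the same rule without code.

```python
from datetime import date, timedelta

def flag_stale(contacts: list[dict], today: date, months: int = 12) -> list[dict]:
    """Return records with no activity in the last `months` months."""
    cutoff = today - timedelta(days=months * 30)  # coarse month approximation
    return [c for c in contacts if c["last_activity"] < cutoff]
```

Flagged records should go to a re-engagement campaign first and an archive second; deleting on the first pass throws away consent history you may need later.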

Section 2: Integration Health (Items 6–10)

6. CRM-MAP Sync Configuration

What to check: Is your CRM (Salesforce, HubSpot CRM) properly synced with your marketing automation platform?

What good looks like: Bidirectional sync is configured with clear rules about which system wins on field conflicts. Sync is real-time or near-real-time. Field mappings are documented. Errors are monitored.

What broken looks like: One-way sync only, missing critical updates. Sync delays of hours or days. No documentation of field mappings. Sync errors pile up unnoticed.

How to fix it: Review your sync settings in HubSpot’s Salesforce integration or your MAP’s CRM connector. Document every field mapping. Set up error notifications.

7. Bidirectional Data Flow

What to check: Does data flow correctly in both directions between your CRM and MAP?

What good looks like: When sales updates a field in the CRM, it reflects in the MAP. When marketing captures a form fill, it creates or updates the CRM record. Lifecycle stage changes propagate correctly.

What broken looks like: Sales updates are not reflected in marketing tools. Form submissions create new records instead of updating existing ones. Lifecycle stages are out of sync between systems.

How to fix it: Map every field that needs to sync, define which system is the source of truth for each field, and test sync behavior with sample records in both directions.

8. Field Mapping Audit

What to check: Are all critical fields mapped between systems, with correct data types and no truncation?

What good looks like: A documented field mapping matrix showing CRM field → MAP field relationships. Data types match (text to text, date to date, picklist to picklist). No data truncation or format mismatches.

What broken looks like: Key fields like industry, company size, or lead score are not synced. Date fields display incorrectly due to format mismatches. Picklist values in one system do not match the other.

How to fix it: Export your field list from both systems, create a mapping matrix, and validate that each pair syncs correctly. Fix data type mismatches and update picklist values to align.
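A mapping matrix audit is a set comparison over the two exported field lists. The field names and types below are illustrative; the point is that unmapped fields and type mismatches fall out mechanically once both lists are in hand.

```python
# Illustrative field exports: field name -> data type.
CRM_FIELDS = {"industry": "picklist", "company_size": "number",
              "close_date": "date", "lead_score": "number"}
MAP_FIELDS = {"industry": "picklist", "company_size": "text",
              "lead_score": "number"}

def audit_mapping(crm: dict, map_: dict) -> dict:
    """Flag CRM fields that are missing from the MAP or typed differently."""
    issues = {"unmapped": [], "type_mismatch": []}
    for field, crm_type in crm.items():
        if field not in map_:
            issues["unmapped"].append(field)
        elif map_[field] != crm_type:
            issues["type_mismatch"].append(field)
    return issues
```

In this toy example, close_date never syncs and company_size silently coerces numbers to text, both of which corrupt reporting downstream.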

9. API Connection Health

What to check: Are all third-party integrations (ad platforms, analytics, enrichment tools) connected and functional?

What good looks like: All API connections are active and authenticated. Data flows on schedule. Error logs are clean. API rate limits are not being hit.

What broken looks like: Stale OAuth tokens causing silent failures. Integrations that stopped syncing months ago without anyone noticing. API rate limit errors causing incomplete data.

How to fix it: Create an integration inventory. Check each connection’s status and last successful sync date. Set up monitoring alerts for failures. Rotate API keys and tokens on a schedule.

10. Data Transformation Rules

What to check: When data moves between systems, is it transformed correctly?

What good looks like: Date formats convert correctly between systems. Country codes map to full names (or vice versa) consistently. Currency values are handled properly across regions. Lead scores translate to CRM fields without data loss.

What broken looks like: Dates show as January 3 in one system and March 1 in another (MM/DD vs DD/MM). Country data is inconsistent. Numeric fields lose precision.

How to fix it: Document every transformation rule. Test with edge cases (dates near year boundaries, special characters in names, multi-currency values). Build validation checks into your integration middleware.
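The MM/DD vs DD/MM failure mode is easy to demonstrate: the same string parses to two different dates depending on which convention the receiving system assumes. Normalizing to ISO 8601 at the boundary, as sketched here, removes the ambiguity.

```python
from datetime import datetime

def to_iso(value: str, fmt: str) -> str:
    """Normalize a date string from a known source format to ISO 8601."""
    return datetime.strptime(value, fmt).date().isoformat()

# The same ambiguous string, read under each convention -- this is exactly
# the January 3 vs March 1 mismatch described above.
us = to_iso("03/01/2025", "%m/%d/%Y")  # US convention: March 1
eu = to_iso("03/01/2025", "%d/%m/%Y")  # EU convention: January 3
```

Your transformation-rule documentation should record which format each system emits, so the middleware always parses with the right one.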

Section 3: Attribution and Reporting (Items 11–15)

11. Source Tracking Accuracy

What to check: Are first-touch and last-touch source fields accurately captured for every contact?

What good looks like: Every contact has a populated original source field. Source values match your UTM taxonomy. No gaps where source is “unknown” or “direct” due to missing UTM parameters.

What broken looks like: 30%+ of contacts have “unknown” or blank original source. “Direct” traffic is inflated because UTM parameters were not applied. Source values are inconsistent (see naming convention issues).

How to fix it: Audit your source tracking field population rate. Implement UTM governance. Configure your forms and landing pages to capture and persist source data correctly.
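The population-rate audit is a single pass over an export. The field name "original_source" and the set of unusable values are assumptions; substitute whatever your CRM calls its first-touch source field.

```python
# Values that make a source record unusable for attribution (assumed set).
BAD_VALUES = {"", "unknown"}

def source_population_rate(contacts: list[dict]) -> float:
    """Share of contacts with a usable original-source value."""
    if not contacts:
        return 0.0
    usable = sum(
        1 for c in contacts
        if c.get("original_source", "").strip().lower() not in BAD_VALUES
    )
    return usable / len(contacts)
```

If the rate comes back below roughly 70%, that matches the "broken" threshold above and UTM governance is the first fix, not a new attribution tool.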

12. Campaign Hierarchy

What to check: Do your campaigns have a clear hierarchy from program to campaign to tactic?

What good looks like: A campaign structure like: Program (Q1 Demand Gen) → Campaign (Data Quality eBook) → Tactics (LinkedIn ad, email blast, webinar). Each level rolls up cleanly for aggregate reporting.

What broken looks like: Flat campaign structure with no hierarchy. Impossible to compare programs or aggregate across tactics. Every individual email or ad is its own disconnected campaign.

How to fix it: Define a three-level campaign hierarchy. Map existing campaigns into the structure. Build reports that aggregate at each level.

13. Multi-Touch Model Configuration

What to check: If you are using multi-touch attribution, is the model configured correctly?

What good looks like: Attribution model (first-touch, last-touch, linear, W-shaped, etc.) is deliberately chosen based on your sales cycle. Touchpoint definitions are clear. The model accounts for both online and offline interactions.

What broken looks like: Default attribution model was never changed from the platform’s out-of-box setting. Touchpoints are not consistently captured. Offline interactions (calls, events) are missing entirely.

How to fix it: Read our guide on why marketing attribution breaks before investing in a new tool. Fix data quality first, then configure the model.

14. Dashboard Accuracy Validation

What to check: Do the numbers in your dashboards match the underlying data?

What good looks like: Dashboard totals match CRM record counts within an acceptable margin. Filters are clearly defined and documented. Multiple stakeholders agree on the numbers.

What broken looks like: Marketing’s pipeline number does not match sales’ number despite pulling from the same CRM. Dashboard filters silently exclude records. Nobody trusts the dashboards, so everyone builds their own.

How to fix it: Pick three key metrics (MQLs, pipeline generated, closed-won revenue). Pull the numbers from both the dashboard and a direct data export. Reconcile differences. Document filter definitions.
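The reconciliation step can be scripted so it runs every month instead of once. This sketch compares dashboard totals against direct-export counts and flags anything outside a tolerance; the 2% margin is an assumption to tune for your own definition of "acceptable."

```python
def reconcile(dashboard: dict, export: dict, tolerance: float = 0.02) -> list[str]:
    """Flag metrics whose dashboard value drifts beyond tolerance
    from the direct-export value."""
    flagged = []
    for metric, dash_value in dashboard.items():
        true_value = export.get(metric, 0)
        if true_value == 0 or abs(dash_value - true_value) / true_value > tolerance:
            flagged.append(metric)
    return flagged
```

Every flagged metric is either a filter definition to document or a data quality problem to trace, which is far more productive than the usual "whose number is right" argument.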

15. Forecast Methodology

What to check: Is your revenue forecast based on verifiable, consistent data inputs?

What good looks like: Forecast uses historical conversion rates by stage, applied to current pipeline with consistent stage definitions. Inputs are auditable. Marketing contribution to pipeline is trackable.

What broken looks like: Forecast is based on gut feel or inconsistent definitions. Pipeline values are inflated because lifecycle stages are wrong. Marketing cannot show its contribution because attribution data is unreliable.

How to fix it: Fix lifecycle stages and attribution first (items 1, 11, and 13), then build a forecast model based on clean historical data.

Section 4: Automation and AI Readiness (Items 16–20)

16. Lead Scoring Model Audit

What to check: Is your lead scoring model actively maintained and aligned with actual conversion data?

What good looks like: Scoring model was built from conversion data analysis, not assumptions. Scores are recalibrated quarterly. High-scoring leads convert at a meaningfully higher rate than low-scoring leads. Sales agrees the score is useful.

What broken looks like: Scoring model was set up once and never updated. Scores do not correlate with conversion. Sales ignores the score entirely. Points are assigned to vanity actions (visited homepage = +10) rather than buying signals.

How to fix it: Pull your scored leads from the last six months. Compare conversion rates between score tiers. If there is no meaningful difference, rebuild the model using actual closed-won deal data as the training set.
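The tier-comparison analysis looks like this in miniature. Lead records are modeled as hypothetical dicts with "score" and "converted" fields, and the 50-point threshold is a placeholder for wherever your model draws the MQL line.

```python
def tier_conversion(leads: list[dict], threshold: int = 50) -> dict:
    """Conversion rate for leads scoring at or above the threshold vs below."""
    tiers = {"high": [0, 0], "low": [0, 0]}  # tier -> [converted, total]
    for lead in leads:
        tier = "high" if lead["score"] >= threshold else "low"
        tiers[tier][1] += 1
        tiers[tier][0] += int(lead["converted"])
    return {t: (c / n if n else 0.0) for t, (c, n) in tiers.items()}
```

If the two rates come back roughly equal, the score carries no signal and the model is decoration; that is the evidence you take into the rebuild conversation with sales.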

17. Routing Rule Validation

What to check: Are leads routed to the correct owner based on current rules?

What good looks like: Routing rules are documented and account for territory, company size, industry, and product interest. Round-robin assignments are balanced. Fallback rules exist for edge cases.

What broken looks like: Leads land in a generic queue and sit for days. Routing rules reference territories or reps that no longer exist. No fallback — unmatched leads disappear.

How to fix it: Map current routing rules. Test with sample leads from each segment. Update rules quarterly or whenever sales territories change.

18. Workflow Conflict Detection

What to check: Do your marketing automation workflows conflict with each other?

What good looks like: A workflow map showing all active automation, their triggers, and their actions. No overlapping enrollment criteria that could put a contact in two conflicting workflows simultaneously. Suppression lists prevent over-communication.

What broken looks like: A contact receives three emails in one day because they triggered multiple workflows. Workflows overwrite each other’s field values. Nobody has a complete picture of all active automation.

How to fix it: Export your complete workflow list. Map triggers and actions. Identify conflicts. Implement enrollment suppression and mutual exclusion rules.
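Once triggers are mapped, overlap detection is pairwise set intersection. The sketch below models each workflow's enrollment criteria as a set of (property, value) conditions; real exports are messier, so treat a shared condition as a candidate conflict to review, not proof of one.

```python
from itertools import combinations

def find_conflicts(workflows: dict[str, set]) -> list[tuple[str, str]]:
    """Pairs of workflows sharing at least one enrollment condition,
    and therefore able to enroll the same contact simultaneously."""
    conflicts = []
    for (a, crit_a), (b, crit_b) in combinations(workflows.items(), 2):
        if crit_a & crit_b:  # shared trigger condition -> possible double enrollment
            conflicts.append((a, b))
    return conflicts
```

Each flagged pair either needs a suppression list, a mutual-exclusion rule, or a note explaining why the overlap is intentional.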

19. AI Input Quality Check

What to check: If you are using AI-powered features (predictive lead scoring, content recommendations, chatbots), is the input data clean enough to produce reliable outputs?

What good looks like: The data feeding AI models is complete, consistent, and representative. Historical records used for training do not contain systematic errors. Model outputs are validated against actual outcomes.

What broken looks like: AI features were enabled on dirty data. Predictive scores reflect historical data quality problems, not real buying signals. Chatbot responses are based on incomplete or outdated product information.

According to IBM’s State of Salesforce 2025-26 report, 53% of organizations cite poor data availability or quality as the top adoption barrier for agentic AI, and only 33% of AI initiatives are meeting ROI targets. Clean data is a prerequisite for AI, not an afterthought.

How to fix it: Audit the data sources that feed your AI features. Fix data quality issues at the source before relying on AI outputs. Items 1 through 10 on this checklist address most of the foundational issues.

20. Compliance and Consent Management

What to check: Are your data collection, storage, and communication practices compliant with applicable regulations?

What good looks like: Clear opt-in mechanisms for email communication. Consent records stored and auditable. Unsubscribe processes work correctly. Data retention policies are defined and enforced. Privacy policy is current.

What broken looks like: No clear opt-in records. Unsubscribed contacts still receive marketing emails due to workflow errors. No data retention policy — records are kept indefinitely regardless of relevance or consent.

How to fix it: Audit your subscription types and opt-in mechanisms. Test the unsubscribe process end-to-end. Define data retention rules and implement archival workflows for records past the retention window.

How to Use This Checklist

Do not try to fix everything at once. We recommend this sequence:

Week 1–2: Run through all 20 items and score each as Green (good), Yellow (needs attention), or Red (broken). This gives you a heat map of your stack’s health.

Week 3–4: Fix the Red items in Section 1 (Data Foundation). Everything else depends on these.

Month 2: Address Section 2 (Integration Health). You cannot report accurately if data is not flowing correctly between systems.

Month 3: Tackle Section 3 (Attribution and Reporting) now that the data feeding your reports is clean.

Ongoing: Section 4 (Automation and AI Readiness) is a continuous improvement cycle. Revisit quarterly.

This checklist connects to every other piece of our marketing data management framework. Each item is a building block — skip one, and the ones above it become unreliable.

Get Help With Your Audit

If running through this checklist surfaced more Red items than you expected, you are not alone. Most mid-market B2B teams we work with at Axiolo score Red on 8–12 of the 20 items on their first pass.

We offer a free MarTech Audit call where we walk through your specific stack and identify the highest-impact fixes. Our developer-first team does not just point out problems — we go into your HubSpot, Salesforce, and GA4 environments and fix them.

Book a Free MarTech Audit Call →


At Axiolo, we help B2B marketing teams build the data infrastructure that makes attribution, reporting, and automation actually work. Learn more about our marketing operations services →