PPC

26 fragments · Layer 3 synthesis (established) · 34 evidence citations · updated 2026-04-08

Summary

Platform-reported conversions are not leads — they are events, and the gap between the two can be 17:1 (AHS: 135 Google Ads form fills vs. 8 MarketSharp leads). Every PPC engagement should establish CRM-linked attribution before drawing conclusions about campaign performance. Within that constraint, the portfolio shows consistent outperformance on Microsoft Ads for local services and B2B, with CPCs of $0.60–$1.00 and cost-per-call around $30 across Overhead Door Madison and Trachte. The two most common failure modes are landing page misalignment driving Quality Scores below 6, and conversion goal misconfiguration causing Smart Bidding to optimize toward traffic rather than leads. Both are fixable in under a week and have outsized impact on cost efficiency.


Current Understanding

The single most important structural fact about PPC management in this portfolio is that measurement integrity precedes optimization. Campaigns cannot be improved if the conversion signal is wrong — and wrong conversion signals are the norm, not the exception, across client accounts.

Conversion Tracking Integrity

Platform-reported conversions systematically overstate qualified lead volume. AHS showed a 17:1 discrepancy between Google Ads form fills (135) and MarketSharp internet leads (8) over a 30-day period [1]. Adava Care generated 20 conversions/month at ~$139 each with no downstream visibility into actual move-ins [2]. Trachte's Bing Ads generated 42 conversions, but leads were not tagged as "Bing" in Dynamics CRM, making source attribution impossible [3].

The root cause splits into two categories. First, CRM integration failure: leads arrive but aren't tagged with source, so platform and CRM counts diverge. Second, conversion goal misconfiguration: Loot Point's account tracked page views as the primary conversion goal, causing actual lead events to report zero conversion value [4]. When Smart Bidding (Maximize Conversions, Target CPA) runs against a page-view goal, it optimizes for traffic volume — the opposite of what's intended [4]. Page views, DOM ready events, and session starts should never be primary conversion goals.
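The reconciliation itself is simple once CRM exports are available. A minimal sketch, assuming a hypothetical "source" field on CRM records (actual field names vary by CRM — MarketSharp, Dynamics, and GHL each have their own schema):

```python
def reconcile(platform_conversions, crm_leads, source_tag):
    """Compare platform-reported conversions to CRM leads for one source.

    `crm_leads` is a list of CRM contact records; the "source" key is a
    hypothetical field name -- adapt to the CRM's actual schema.
    """
    tagged = sum(1 for lead in crm_leads if lead.get("source") == source_tag)
    untagged = sum(1 for lead in crm_leads if not lead.get("source"))
    ratio = platform_conversions / tagged if tagged else float("inf")
    return {"platform": platform_conversions, "crm_tagged": tagged,
            "crm_untagged": untagged, "discrepancy_ratio": ratio}

# The AHS numbers from the text: 135 form fills vs. 8 attributed CRM leads
report = reconcile(135, [{"source": "google_ads"}] * 8, "google_ads")
# report["discrepancy_ratio"] is ~16.9, i.e. the 17:1 gap
```

A large `crm_untagged` count points at the first failure category (integration gap); a large `discrepancy_ratio` with good tagging points at the second (event misconfiguration).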

Quality Score as a Cost Multiplier

Quality Score is not a vanity metric — it directly multiplies or discounts the effective bid. The Ad Rank formula (Bid × Quality Score × Expected Impact of Extensions) means a QS of 2/10 can require 5x the bid to achieve the same position as a QS of 10/10 [3]. Capitol Bank's CD campaign scored 2/10 because ad traffic routed to a generic page rather than a dedicated CD landing page; the same account's checking campaign scored higher and generated 7 conversions at $1.96 CPC [5]. Adavacare carried 32+ keywords below QS 6, identified as the primary waste driver during an analytics review [6].
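The bid arithmetic behind that 5x figure can be made explicit, under the simplified model that Ad Rank is proportional to Bid × Quality Score (ignoring the extensions term):

```python
def required_bid(competitor_bid, competitor_qs, our_qs):
    """Bid needed to match a competitor's Ad Rank under the simplified
    model: Ad Rank ~ Bid * Quality Score (extensions term ignored)."""
    return competitor_bid * competitor_qs / our_qs

required_bid(1.00, 10, 2)  # -> 5.0: a QS 2/10 ad needs 5x the bid
```

The same relationship read in reverse is the payoff of remediation: lifting a keyword from QS 2 to QS 6 cuts the bid needed for equivalent position by roughly two thirds.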

Landing page misalignment is the dominant cause of low Quality Scores across the portfolio — observed at Capitol Bank, Adavacare, and Adava Care [7]. The fix is structural: dedicated single-page landing pages with no navigation, clear CTAs, and keyword-aligned copy. Cordwainer's landing page strategy documents the mechanics — removing competing navigation focuses high-intent visitors and signals relevance to Google [8].

Ad Creative and Copy Performance

Specific, transparent messaging consistently outperforms generic category copy. Adava Care's no-fee ads achieved an 18% conversion rate at the Irish Road location against a prior benchmark of 5–10% for top performers; 5 of 6 top-converting ads were the newly launched no-fee variants [9]. Overhead Door Madison's Bing ad refresh from "Need a new garage door?" to "Overhead Door Madison" increased CTR from 2–3% to 11% on $70 spend [10]. Cost-focused messaging outperformed personalized messaging in Adavacare copy tests [6].

Ad fatigue is real and measurable. The diagnostic signature is CTR decline without a corresponding impression decline — that combination points to a copy or targeting problem, not a budget or bid problem. Overhead Door Madison's installation campaign CTR fell from 4–5% to 1% between October and December 2025 while the repair campaign on the same account held at 12% [11]. The old Microsoft Ads installation ad had <1% CTR and a 2–3% top-of-page impression rate after roughly one year; a fresh creative achieved 12% CTR and 81% top-of-page rate [12]. Pausing the old ad and consolidating budget to the new one confirmed the performance gain held at higher spend.
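The fatigue diagnostic can be mechanized. A sketch with illustrative thresholds — the 50% CTR-drop and 20% impression-drop cutoffs are assumptions, not documented values:

```python
def fatigue_signal(ctr_series, impressions_series,
                   ctr_drop=0.5, impression_drop=0.2):
    """Flag ad fatigue: CTR fell materially while impressions held.

    Compares the mean of the first and last thirds of each series.
    Thresholds are illustrative assumptions, not documented values.
    """
    def decline(series):
        third = max(1, len(series) // 3)
        early = sum(series[:third]) / third
        late = sum(series[-third:]) / third
        return (early - late) / early if early else 0.0

    return (decline(ctr_series) >= ctr_drop
            and decline(impressions_series) < impression_drop)

# Overhead Door-style pattern: CTR slides from 4-5% to ~1%, impressions steady
fatigue_signal([0.045, 0.04, 0.03, 0.02, 0.012, 0.01],
               [900, 920, 880, 910, 905, 890])  # -> True
```

If impressions fell alongside CTR, the function returns False — that combination suggests a budget, bid, or auction-pressure problem rather than creative fatigue.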

Microsoft Ads vs. Google Ads

Microsoft Ads (Bing) delivers materially lower CPCs and higher efficiency than Google for local home services and B2B — observed at both Overhead Door Madison and Trachte [13]. Overhead Door Madison achieved 15 conversions at ~$28 CPA on Bing, with a ~14% CTR that is 2–3x the industry average of 4–5%, at a $1.00 CPC [14]. Trachte's Microsoft Ads display campaigns generated ~200 conversions at ~$2/call across the Door Hallway and Canopy campaigns [15].

The apparent contradiction — Trachte's display ads outperforming search, while Overhead Door's search ads are the consistent performer — resolves once vertical and campaign intent are taken into account. Trachte's display targeting reached in-market self-storage developers at low CPM; Overhead Door's repair-focused search captures high-intent local queries. Neither finding generalizes across all contexts, but both confirm that Microsoft Ads is underutilized relative to Google in local and B2B accounts.

Campaign Structure and Budget Allocation

Broad, ambiguous creative concepts are the fastest way to burn budget on unqualified traffic. Trachte's "Blueprint to Build" campaign spent $1,150/month attracting DIYers and hobbyists with no self-storage development intent before being paused [16]. The failure mode is top-of-funnel campaigns that match on interest rather than intent. Aviary's discovery-phase approach — $1,000 initial budget across three tightly scoped campaigns, bids starting at 2/3 of Google's estimated CPC — is the correct model for early-stage or skeptical clients [17].

Budget underpacing (Adavacare spent $458 by January 6 against a $968 expected pace) indicates either daily budget caps or bids too low to compete for available impressions — not a demand problem [6]. A 2.3x CPA variance across Adavacare campaigns points to budget misallocation rather than market differences; reallocating from the Heartland campaign (CPA ~$285 vs. ~$100 target) to better-performing campaigns is the mechanical fix [18].
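The underpacing check is simple arithmetic against a linear month-to-date pace. A sketch; the $5,000/month budget below is a back-of-envelope assumption consistent with the $968 expected-pace figure, not a documented number:

```python
import calendar
from datetime import date

def pacing(monthly_budget, spend_to_date, today):
    """Month-to-date spend vs. a linear pace. A ratio well below 1.0
    points at budget caps or uncompetitive bids, not weak demand."""
    days = calendar.monthrange(today.year, today.month)[1]
    expected = monthly_budget * today.day / days
    return {"expected": round(expected, 2),
            "pace_ratio": round(spend_to_date / expected, 2)}

# Hypothetical $5,000/month budget, $458 spent by January 6:
report = pacing(5000, 458, date(2026, 1, 6))
# report -> {"expected": 967.74, "pace_ratio": 0.47}
```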


What Works

1. Transparent pricing in ad copy (assisted living, local services)
Stating the actual price or fee structure in ad copy pre-qualifies clicks and increases conversion rate among visitors who proceed. Adava Care's no-fee ads drove an 18% conversion rate vs. a 5–10% benchmark, with 5 of 6 top-converting ads being no-fee variants [9]. The mechanism is self-selection: price-sensitive visitors who would not convert filter themselves out before clicking.

2. Dedicated single-page landing pages with no navigation
Removing navigation menus and competing CTAs focuses high-intent visitors on a single conversion action and signals relevance to Google's Quality Score algorithm. Cordwainer's landing page strategy documents the structural approach; Capitol Bank's checking campaign outperformed its CD campaign on the same account specifically because of better landing page alignment [19].

3. Microsoft Ads for local home services and B2B
Bing consistently delivers CPCs of $0.60–$1.00 and cost-per-call around $28–$30 for local service accounts, at CTRs 2–3x Google's industry average. Overhead Door Madison achieved 29 conversions at ~$24 CPA on $700 spend; Trachte's Bing campaigns generated 42 conversions at ~$30/call before pausing due to budget exhaustion [20]. The efficiency gap relative to Google is large enough to warrant running Bing first on new local accounts.

4. Pausing fatigued ads and consolidating budget to fresh creative
When CTR declines without an impression decline, the correct response is a creative refresh — not a bid increase. Overhead Door Madison's installation campaign recovered from <1% CTR to 12% CTR after pausing the old ad and launching new copy; the gain held as spend increased, confirming it wasn't a low-volume artifact [12].

5. Lead magnet campaigns for earlier-funnel B2B conversions
Trachte's "self-storage mistakes" lead magnet generated 22 form submissions over 90 days, outperforming direct product ads on conversion volume for an audience that wasn't yet ready to buy [21]. This approach works when the buying cycle is long and the audience needs education before committing to a sales conversation.

6. CRM-linked UTM attribution before scaling spend
Establishing source tagging in the CRM before scaling is not optional — it's the prerequisite for knowing whether a campaign is working. Trachte's 42 Bing conversions were unattributable in Dynamics CRM because leads weren't tagged; AHS's 17:1 discrepancy between Google Ads and MarketSharp was only discovered after integration work [22].
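The tagging step itself is mechanical. A sketch using the standard `utm_*` parameters; the CRM-side mapping of these parameters to source fields (in Dynamics, MarketSharp, or elsewhere) is account-specific:

```python
from urllib.parse import urlencode

def tag_url(base_url, source, medium, campaign):
    """Append standard UTM parameters so the CRM can attribute the lead
    back to the platform and campaign that produced the click."""
    params = urlencode({"utm_source": source, "utm_medium": medium,
                        "utm_campaign": campaign})
    sep = "&" if "?" in base_url else "?"
    return f"{base_url}{sep}{params}"

url = tag_url("https://example.com/garage-doors", "bing", "cpc", "installation")
# -> ...?utm_source=bing&utm_medium=cpc&utm_campaign=installation
```

Applying this to every final URL before launch — and verifying the CRM form capture actually persists the parameters — is the concrete version of "establish attribution before scaling."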

7. Brand-name-first ad copy in local service search
Leading with the brand name rather than a generic category question increases CTR in local search. Overhead Door Madison's Bing refresh from "Need a new garage door?" to "Overhead Door Madison" moved CTR from 2–3% to 11% [10]. The likely mechanism is that brand-first copy signals local relevance and authority to users who already have brand awareness.

8. Low-budget test campaigns for skeptical clients
Clients with prior negative PPC experience respond better to a constrained discovery phase than a full campaign launch. Aviary's $1,000 discovery budget across three campaigns, with bids starting at 2/3 of Google's CPC estimate, is the documented approach [17]. The goal is generating enough signal (approximately 5,000 impressions) to draw reliable conclusions before committing larger budgets.
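The discovery math above can be sketched directly; the CPC estimates and the 4% CTR used to back-calculate impressions are illustrative assumptions, not documented figures:

```python
def discovery_plan(total_budget, est_cpcs, bid_fraction=2/3, target_ctr=0.04):
    """Discovery-phase sketch: even budget split across campaigns,
    starting bids at bid_fraction of the platform's estimated CPC, and
    a back-calculated impression count at an assumed CTR (the 4%
    default is an illustrative assumption)."""
    per_campaign = total_budget / len(est_cpcs)
    plans = []
    for cpc in est_cpcs:
        bid = cpc * bid_fraction
        clicks = per_campaign / bid
        plans.append({"start_bid": round(bid, 2),
                      "est_clicks": int(clicks),
                      "est_impressions": int(clicks / target_ctr)})
    return plans

# $1,000 across three campaigns with hypothetical Google CPC estimates:
plans = discovery_plan(1000, [3.00, 4.50, 6.00])
```

Running the numbers this way before launch shows immediately whether the budget can plausibly reach the ~5,000-impression signal threshold in each campaign.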

9. Remarketing to price-hesitant prospects
Trachte's Door Hallway remarketing campaign targeted past site visitors who hadn't converted, generating incremental conversions at low cost [21]. Remarketing works best when the initial visit indicated intent but the prospect needed more time or information — common in high-consideration B2B purchases.


What Doesn't Work

1. Page views and session starts as primary conversion goals
Using engagement metrics as conversion goals causes Smart Bidding to optimize for traffic volume. Loot Point's account reported zero conversion value for actual lead events because page views were set as the primary goal [4]. The damage is compounding: the longer Smart Bidding trains on the wrong signal, the harder it is to retrain.

2. Routing ad traffic to the homepage
Sending paid traffic to a homepage instead of a keyword-specific landing page is the most common cause of low Quality Scores in the portfolio. Capitol Bank's CD campaign scored 2/10 for this reason; Adavacare carried 32+ keywords below QS 6 with the same root cause [7]. A QS of 2/10 means paying roughly 5x more per click for the same position as a well-aligned competitor.

3. Broad, intent-ambiguous top-of-funnel campaigns
Trachte's "Blueprint to Build" campaign spent $1,150/month on clicks from DIYers and hobbyists — audiences with no self-storage development intent [16]. The failure pattern is using creative that matches on interest (building, construction) rather than commercial intent (self-storage facility development). Top-of-funnel campaigns require specific audience qualification signals, not just broad thematic relevance.

4. Trusting platform optimization score recommendations on budget
Bing's optimization score recommendations default to "increase budget" as a standing suggestion regardless of actual performance. Overhead Door Madison reached 85% optimization score; the remaining gap was Bing's budget increase recommendation, which was correctly identified as a platform default rather than an actionable finding [23]. Optimization scores should be read selectively — structural recommendations (keyword coverage, ad extensions) are useful; budget recommendations are not.

5. Running Google Display with repair-oriented or urgency language
Google Display consistently disapproves ads containing "repair," "24/7," and "fast" in the home services vertical. Overhead Door Madison's display ads required rewriting to "new door sales" and "replacement" language before approval [24]. Branded logos and branded images were also disapproved; generic images ran successfully. This is not intuitive and wastes setup time if not anticipated.

6. Scaling spend before validating lead quality
Trachte's Microsoft Ads display campaigns generated ~200 conversions at ~$2/call — impressive on paper — but budget scaling was correctly held pending sales team validation of call quality [15]. A $2/call cost means nothing if the calls are from unqualified prospects. Platform efficiency metrics and business lead quality are different measurements.

7. Running campaigns without a minimum impression threshold
Drawing conclusions from campaigns with fewer than 5,000 impressions produces unreliable performance data [18]. Early campaign decisions based on low-volume data lead to premature pauses of campaigns that would have performed, or continued spend on campaigns that wouldn't.
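A confidence interval makes the threshold concrete: at the same observed conversion rate, a low-volume campaign leaves a much wider range of plausible true rates. A sketch using the Wilson score interval (a standard binomial interval; the specific method is my choice, not one documented in the portfolio):

```python
import math

def wilson_interval(conversions, clicks, z=1.96):
    """95% Wilson score interval on the conversion rate. The interval
    width is what shrinks with volume, not the point estimate."""
    if clicks == 0:
        return (0.0, 1.0)
    p = conversions / clicks
    denom = 1 + z**2 / clicks
    center = (p + z**2 / (2 * clicks)) / denom
    spread = z * math.sqrt(p * (1 - p) / clicks + z**2 / (4 * clicks**2)) / denom
    return (max(0.0, center - spread), min(1.0, center + spread))

# Same 5% observed rate, very different certainty about the true rate:
wide = wilson_interval(5, 100)     # roughly (0.02, 0.11)
narrow = wilson_interval(50, 1000) # roughly (0.04, 0.07)
```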


Patterns Across Clients

1. Platform-reported conversions overstate qualified lead volume — consistently
Observed at AHS (17:1 discrepancy), Adava Care (no downstream move-in visibility), and Trachte (CRM attribution gap). The pattern appears regardless of platform (Google, Microsoft) or vertical (home services, assisted living, B2B manufacturing). The common cause is that PPC platforms count any configured event as a conversion, while CRMs only record contacts that pass a qualification threshold. Without CRM integration, cost-per-lead calculations are fiction [25].

2. Landing page misalignment is the default state of inherited accounts
Capitol Bank, Adavacare, and Adava Care all entered engagements with landing page problems — either routing to homepages or using pages with low keyword relevance. This is not a client sophistication issue; it's a structural gap in how most accounts are initially set up. The first audit action on any inherited account should be mapping every active campaign to its destination URL and checking Quality Score [26].

3. Microsoft Ads is underutilized relative to its efficiency
Overhead Door Madison and Trachte both show Microsoft Ads outperforming Google on cost-per-conversion metrics. Overhead Door Madison's $1.00 CPC and 14% CTR on Bing are exceptional by any benchmark; Trachte's $2/call display cost is difficult to replicate on Google Display at equivalent volume [27]. The pattern likely reflects lower advertiser competition on Bing in local and B2B verticals — an efficiency window that may close as more advertisers shift budget.

4. Ad creative fatigue follows a predictable timeline and diagnostic signature
Overhead Door Madison's installation ad degraded from functional to <1% CTR over approximately one year. The diagnostic — CTR decline without impression decline — appeared in both the Microsoft Ads installation campaign and the Bing garage door installation campaign [28]. Accounts running the same creative for more than six months should be proactively audited for this pattern rather than waiting for performance to visibly collapse.

5. Multi-location accounts create budget allocation conflicts
Adavacare and AHS both face the problem of allocating budget across locations with different capacity constraints, different CPAs, and different profitability profiles. Adavacare's Heartland campaign ran at $285 CPA against a $100 target while other campaigns performed near target — a 2.3x variance that indicates misallocation rather than market differences [29]. The correct response is reallocation, not account-level budget increases.
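The reallocation can be sketched mechanically; the campaign budgets and the 2x-over-target cutoff below are illustrative assumptions, not Adavacare's actual figures:

```python
def reallocate(campaigns, target_cpa):
    """Shift budget away from campaigns far over target CPA, weighting
    the freed budget toward the most efficient remaining campaigns.
    Total spend is held constant; the 2x cutoff is illustrative."""
    total = sum(c["budget"] for c in campaigns.values())
    keep = {n: c for n, c in campaigns.items() if c["cpa"] <= 2 * target_cpa}
    freed = total - sum(c["budget"] for c in keep.values())
    weights = {n: target_cpa / c["cpa"] for n, c in keep.items()}
    wsum = sum(weights.values())
    return {n: round(c["budget"] + freed * weights[n] / wsum, 2)
            for n, c in keep.items()}

# Adavacare-shaped example: Heartland at ~$285 CPA vs. a $100 target.
plan = reallocate({"heartland": {"budget": 600, "cpa": 285},
                   "campaign_b": {"budget": 600, "cpa": 95},
                   "campaign_c": {"budget": 600, "cpa": 110}}, 100)
# Heartland is defunded; its budget shifts toward the efficient campaigns.
```

Holding the total constant is the point: the fix is reallocation within the account, not an account-level budget increase.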

6. Skeptical clients with prior PPC failures require constrained discovery phases
Cordwainer and Aviary both entered with prior negative PPC experience. The documented approach — low initial budget, conservative bids at 2/3 of platform estimates, clear success criteria before scaling — is the right framework for rebuilding confidence [30]. Launching a full campaign for a skeptical client without a discovery phase risks repeating the failure that created the skepticism.

7. Budget reallocation from paused campaigns is rarely fully documented
Trachte's Blueprint to Build pause freed $1,150/month; documented reallocation accounted for roughly $450/month (renewal campaign at $15/day plus Fathom reduction), leaving ~$700/month untracked [16]. This pattern — pausing a campaign without explicitly documenting where the budget goes — creates ambiguity in account-level spend reporting and makes it harder to attribute performance changes to specific decisions.


Exceptions and Edge Cases

1. Google Display disapproves branded assets, not just copy
The conventional assumption is that Google Display approval is primarily a copy review. Overhead Door Madison's experience shows that branded logos and branded images were also disapproved, while generic images ran successfully [24]. The implication: don't assume a brand-heavy creative approach will work in Display — test generic assets first.

2. Seasonal campaigns should sometimes run at reduced budget, not pause
The default recommendation for seasonal businesses is to pause campaigns during off-season. Overhead Door Madison ran installation campaigns through winter at $1.00 CPC specifically to build brand familiarity during the research phase that precedes buying season [14]. This works when the buying cycle is long enough that off-season impressions influence in-season decisions — likely true for home improvement but not for impulse purchases.

3. PPC as reconnaissance, not revenue driver
For Aviary, PPC is explicitly a secondary channel used to validate market demand and identify high-performing verticals before committing to ABM execution [17]. This is a legitimate use case for early-stage B2B companies with limited budgets, but it requires different success metrics — search volume, CPC benchmarks, and click intent signals matter more than conversion volume at this stage.

4. Trachte's anniversary creative outperformed product-specific display assets
The TrackRight 125th anniversary graphic achieved ~70% CTR in display — anomalously high relative to typical display benchmarks [31]. Single-message brand milestone creative can outperform product-specific assets in remarketing contexts, likely because it's visually distinct from standard product ads and triggers curiosity rather than ad-blindness.

5. Low Quality Scores don't always mean poor campaign performance
Capitol Bank's CD campaign scored 2/10 and performed poorly; the checking account campaign in the same account had better alignment and generated 7 conversions at $1.96 CPC [5]. Quality Score predicts efficiency, not absolute performance — a well-funded low-QS campaign can still generate conversions, just at higher cost than necessary.

6. Cordwainer's prior PPC failure may reflect market constraints, not mismanagement
Single-source finding: Cordwainer's prior Google Ads failure may have resulted from a small geographic area, niche service, and limited search volume rather than poor campaign setup [32]. This matters because the remediation approach differs — structural market limitations require different solutions than campaign configuration errors.


Evolution and Change

The portfolio's PPC work spans late 2024 through early 2026, a period that saw meaningful shifts in how campaigns are structured and measured.

The earliest engagements (late 2024) show a pattern of inherited accounts with basic configuration problems — wrong conversion goals, homepage routing, no CRM integration. The AHS MarketSharp discrepancy and Loot Point's page-view conversion goal both surfaced during this period. These are not sophisticated problems; they're foundational errors that suggest many accounts were set up without a measurement-first framework.

Through 2025, the portfolio shifted toward more deliberate campaign architecture: dedicated landing pages (Cordwainer), discovery-phase budgeting (Aviary), and explicit CRM attribution requirements (Trachte Bing). The Trachte Blueprint to Build pause in mid-2025 represents a maturation in how top-of-funnel campaigns are evaluated — moving from "is it generating conversions?" to "are those conversions qualified?"

The most recent fragments (early 2026) show active Quality Score remediation work at Adavacare and ongoing conversion tracking refinement at Adava Care via GHL-Monday integration. The direction is toward tighter measurement loops and faster creative refresh cycles. The Overhead Door Madison CTR decline pattern — caught and diagnosed within two months — suggests improving diagnostic speed compared to earlier engagements where problems ran longer before detection.

The one structural change worth monitoring externally: Google's Smart Bidding algorithms are increasingly sensitive to conversion signal quality. As more accounts use Maximize Conversions or Target CPA, the cost of a misconfigured conversion goal compounds faster than it did when manual bidding was standard. The Loot Point case is likely to become more common, not less, as Smart Bidding adoption increases.


Gaps in Our Understanding

1. No data on Google Performance Max campaigns
All documented campaigns use standard Search, Display, or Shopping campaign types. Performance Max is now Google's default recommendation for most advertisers, and we have no evidence on how it performs relative to standard campaigns in our client verticals. If a client account migrates to PMax, current optimization frameworks may not apply.

2. Meta Ads performance is entirely unobserved
Overhead Door Madison referenced Bing's 29 conversions as a benchmark for Meta expectations, but no Meta campaign data exists in the portfolio [33]. We cannot make evidence-based recommendations on Meta for local services or B2B without at least one completed engagement.

3. Long-term Quality Score improvement timelines are undocumented
We know landing page misalignment causes low Quality Scores and that fixing alignment improves them. We don't have data on how long QS improvement takes after remediation, which affects how quickly clients should expect CPC reductions after structural fixes.

4. No evidence from enterprise-scale or high-spend accounts
All documented PPC accounts operate at relatively modest budgets ($500–$2,000/month range). Optimization patterns at $10,000+/month accounts may differ materially — particularly around audience segmentation, bid strategy selection, and campaign structure. If we take on a high-spend account, these frameworks should be treated as hypotheses.

5. Competitor-specific landing page performance is unresolved
Citrus America's competitor-specific pages (ZoomX, Zumo) were approved and launched, but no conversion comparison data against the generic "Commercial Juicer" page exists yet [34]. This is a live test with no documented outcome.

6. Yahoo Ads latent demand is unacted upon
Overhead Door Madison has organic Yahoo traffic at ~50% of Google organic volume with no paid Yahoo presence [23]. Whether Yahoo Ads (served through Microsoft Ads) would convert at comparable rates to Bing is unknown. The opportunity cost of not testing is real but unquantified.


Open Questions

1. Does Microsoft Ads' efficiency advantage persist as advertiser competition increases?
The Bing efficiency gap (14% CTR, $1.00 CPC for Overhead Door Madison) likely reflects lower competition in local verticals. As more advertisers shift budget from Google to Microsoft, CPCs will rise. What's the current trajectory, and at what CPC does the efficiency advantage disappear?

2. How does Google's 2025–2026 algorithm treatment of landing page experience affect Quality Score calculation?
Google has signaled increasing weight on Core Web Vitals and page experience signals in ad quality evaluation. Does page load speed now materially affect Quality Score for accounts with otherwise well-aligned landing pages?

3. At what impression volume does Smart Bidding outperform manual bidding for small local accounts?
The 5,000-impression threshold for reliable conclusions is documented [18], but Smart Bidding typically requires 30–50 conversions per month to exit the learning phase. For small local accounts generating 10–15 conversions/month, is manual bidding consistently better?

4. Does the transparent pricing effect (Adava Care's 18% conversion rate) generalize to other verticals?
The no-fee messaging result is striking, but it's a single client in assisted living — a vertical where pricing anxiety is unusually high. Does the same mechanism produce comparable lifts in home services, B2B, or financial services?

5. What is the actual impact of Bing's optimization score recommendations on account performance?
The documented finding is that Bing's budget increase recommendations are platform defaults, not actionable signals [23]. Are there specific optimization score recommendations (beyond budget) that correlate with actual performance improvement?

6. How does the Microsoft Ads display vs. search performance split vary by vertical?
Trachte's display ads dramatically outperformed search on Microsoft; Overhead Door's search ads are the consistent performer. Is there a vertical or audience characteristic that predicts which format wins?

7. What is the minimum viable budget for a PPC discovery phase to generate statistically reliable signal?
Aviary's $1,000 discovery budget is documented as the approach, but whether $1,000 generates 5,000 impressions across three campaigns in a B2B SaaS vertical is unknown [17].



Sources

Synthesized from 27 Layer 2 articles, spanning 2024-12-04 to 2026-04-08.


34 sources cited, drawn from 26 fragments in PPC.

  1. Ahs Marketsharp Conversion Tracking
  2. Adava Care Roi Tracking Ghl Monday
  3. Index
  4. Conversion Goal Optimization
  5. Quality Score Optimization
  6. Adavacare Quality Score Optimization
  7. Index, Adavacare Quality Score Optimization
  8. Cordwainer Landing Page Strategy
  9. Adava Care No Fee Ads Performance
  10. Overhead Door Bing Ad Refresh
  11. Overhead Door Bing Ads Ctr Decline
  12. Overhead Door Microsoft Ads Installation
  13. Overhead Door Bing Ads Performance, Trachte Microsoft Ads Performance
  14. Overhead Door Microsoft Ads Performance
  15. Trachte Microsoft Ads Performance
  16. Trachte Blueprint To Build Pause
  17. Aviary Ppc Discovery Strategy
  18. Client Extractions
  19. Cordwainer Landing Page Strategy, Quality Score Optimization
  20. Overhead Door Ppc Strategy, Trachte Bing Ads Performance
  21. Trachte Door Hallway Campaign
  22. Trachte Bing Ads Performance, Ahs Marketsharp Conversion Tracking
  23. Overhead Door Bing Optimization
  24. Overhead Door Google Display Strategy
  25. Ahs Marketsharp Conversion Tracking, Adava Care Roi Tracking Ghl Monday, Trachte Bing Ads Performance
  26. Quality Score Optimization, Adavacare Quality Score Optimization, Index
  27. Overhead Door Microsoft Ads Performance, Trachte Microsoft Ads Performance
  28. Overhead Door Microsoft Ads Installation, Overhead Door Bing Ads Ctr Decline
  29. Adavacare Quality Score Optimization, Client Extractions
  30. Cordwainer Google Ads Strategy, Aviary Ppc Discovery Strategy
  31. Trachte Ad Graphics Refresh
  32. Cordwainer Google Ads Strategy
  33. Overhead Door Ppc Strategy
  34. Citrus America Competitor Landing Pages
