
If you’ve set up a new Meta campaign recently and excluded a placement like Audience Network or Facebook Right Column, you might have noticed something different. Meta now gives you a checkbox that says: “Up to 5% of your budget is spent for each excluded placement when it’s likely to improve performance.” And that checkbox is turned on by default.

That means when you exclude a placement, Meta doesn’t fully exclude it anymore. Not unless you go back and manually uncheck that box. Your “excluded” placement can still receive up to 5% of your ad set budget. And that 5% applies per excluded placement, not in total. If you’ve excluded 4 placements, that’s potentially 20% of your budget going to places you specifically said you didn’t want.

As PPC Land documented when the feature first appeared: “Campaigns with multiple placement exclusions could see significantly more than 5% of total budget directed to placements advertisers intended to avoid.”

This is part of Meta’s broader push toward algorithmic control over placement delivery. They’ve been steadily removing manual controls since 2024: detailed targeting exclusions were eliminated in January 2025, Dynamic Media was enabled by default for Advantage+ Catalog ads by October 2025, and now placement exclusions have this soft override baked in.

The logic from Meta’s side makes sense. Their data shows that Advantage+ Placements (where Meta chooses everything) generally delivers a lower cost per result because the algorithm has maximum flexibility to find cheap impressions wherever they exist. By sneaking 5% of spend into “excluded” placements, Meta is trying to prove that those placements can contribute to your results.

The problem is that many advertisers exclude placements for good reasons: brand safety concerns with Audience Network, low-quality traffic from specific surfaces, or simply because they’ve tested those placements and they don’t convert for their offer.
A default opt-in that overrides those decisions without clear notice is frustrating. Let me walk you through how to actually control your placements in 2026.

How Placement Control Works at the Ad Set Level

At the ad set level, you have two options:

Advantage+ Placements (default). Meta decides where your ads run across all 25+ placement options. You give up control, and the algorithm finds the cheapest impressions. This is what Meta recommends for most advertisers, and honestly, for purchase-optimized campaigns with strong pixel data, it often works fine. The algorithm is good at finding cost-efficient impressions.

Manual Placements. You choose exactly which platforms (Facebook, Instagram, Messenger, Audience Network) and which surfaces within them (Feed, Stories, Reels, Marketplace, Search Results, etc.) your ads appear on. This gives you full control.

When you select Manual Placements and deselect specific placements, this is where the 5% spending feature kicks in. After you exclude placements, look for the checkbox that allows Meta to spend limited budget on those excluded surfaces. It may appear as a recommendation or as a checked option within the placement settings.

The catch: this feature currently applies to Sales and Leads campaign objectives. If you’re running Traffic, Engagement, or Awareness campaigns, the behavior may differ. Check your specific campaign setup to confirm.

For campaigns where you’re testing which placements work, leaving Advantage+ Placements on makes sense. You let Meta explore, collect data, and then review the breakdown reports to see which placements actually convert. But once you have that data and know that certain placements don’t work for you, switching to Manual Placements with genuine exclusions is a reasonable choice. Just make sure the 5% override isn’t silently undermining your exclusions.

How Placement Control Works at the Account Level

Meta also offers account-level placement controls.
These apply to every campaign in the account, so you don’t need to remember to exclude specific placements each time you create a new campaign.

To access account-level placement controls:

1. Go to Advertising Settings in your Meta Ads Manager
2. Select Placement Controls
3. Toggle on “My business can only advertise on specific placements”

From here, you can exclude:

- Audience Network (ads on third-party apps and websites)
- Facebook Marketplace
- Facebook Right Column

These account-level controls are separate from the ad set level placement selections. When you set an exclusion at the account level, it overrides any ad set level settings. Even if someone on your team creates a new campaign with Advantage+ Placements, the account-level exclusion will still apply.

This is the cleanest way to permanently block a placement across your account. No checkboxes to worry about. No 5% overrides. The placement is simply off. However, to exclude a more granular placement, such as “Ads on Facebook Reels,” you have to do it in the ad set’s placement control settings.

When to use account-level exclusions:

- You’ve tested Audience Network extensively and it consistently delivers low-quality traffic for your business
- Your brand has content adjacency requirements that Audience Network can’t satisfy
- You have compliance or regulatory reasons that require restricting where your ads appear
- You want a “set it and forget it” solution that applies to all current and future campaigns

Important: account-level placement controls are only available for Auction campaigns. If you’re running Reach and Frequency campaigns, you need to manage your placement selection in the ad set placement control settings.

Steps to Completely Remove a Placement

If your goal is to fully block a placement with zero spend leaking through, here’s the process.
Option A: Account-Level Block (Recommended for Permanent Exclusions)

1. Open Meta Ads Manager
2. Go to Advertising Settings (gear icon > Advertising Settings)
3. Click Placement Controls
4. Toggle on “My business can only advertise on specific placements”
5. Uncheck the placements you want to block (Audience Network, Marketplace, Right Column)
6. Click Review Changes and then Apply
7. Wait up to 48 hours for changes to take effect across existing campaigns

This is the safest method to remove Audience Network, Marketplace, and Right Column. It applies to everything in the account and isn’t affected by the 5% spend checkbox at the ad set level. For additional placements, use Option B.

Option B: Ad Set Level Block (For Per-Campaign Control)

1. Create or edit your campaign
2. At the ad set level, scroll to Placements
3. Click Show more settings to make Placement controls visible
4. Click and expand Placement controls
5. Uncheck the placements you want to exclude
6. Look for a checkbox saying “Allow limited spending to excluded placements.” This is the option that allows Meta to spend up to 5% on excluded placements, and it appears only after you exclude placements. Uncheck this box.
7. Save and publish

If you don’t uncheck the box in step 6, each of your “excluded” placements will still receive up to 5% of your ad set budget.

Option C: Combine Both for Maximum Protection

Use account-level controls to permanently block placements you never want (like Audience Network), and use ad set level controls for per-campaign adjustments (like excluding Stories for a campaign that doesn’t have vertical creative).

How TheOptimizer Handles Placement Optimization Automatically

Here’s the approach I recommend for experienced buyers who want the best of both worlds: algorithmic flexibility for discovery, automated protection against waste. Instead of manually excluding placements upfront (which limits Meta’s ability to find cheap impressions), start with Advantage+ Placements or a broad manual placement selection.
Let Meta explore. Then use automation rules to cut underperforming placements based on actual data. TheOptimizer connects to Meta’s API and lets you build rules that automatically block placements based on performance thresholds. Here’s what that looks like:

Rule: Block Underperforming Placements
IF Placement Spend > $X
AND Placement CPA > Target CPA by 30%
AND Placement Conversions < Y
THEN Block Placement
Run every 10 to 30 minutes

This rule gives every placement a fair chance to prove itself. If a placement spends meaningful budget and doesn’t convert at an acceptable rate, it gets blocked automatically. No manual checking. No forgetting to review placement breakdowns.

Why is this better than pre-excluding placements?

You don’t miss hidden winners. Sometimes a placement you’d normally exclude turns out to work well for a specific creative or audience. Automated rules let it run until the data says otherwise.

You respond to real data, not assumptions. Excluding Audience Network because “everyone says it’s bad” ignores the fact that for some offers and verticals, it converts at a very low CPC. Let the data decide.

You handle the 5% problem automatically. Even if Meta’s 5% override sneaks spend into an excluded placement, your automation rules will catch it if that spend doesn’t convert. The placement gets blocked based […]
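To make the rule logic concrete, here is a minimal Python sketch of a block-placement check. This is illustrative only: TheOptimizer’s real rules are configured in its interface, not written in code, and the thresholds, field names, and `PlacementStats` structure below are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class PlacementStats:
    name: str
    spend: float        # spend attributed to this placement
    conversions: int
    cpa: float          # cost per acquisition on this placement

def should_block(p: PlacementStats,
                 min_spend: float = 50.0,      # the "$X" spend threshold
                 target_cpa: float = 20.0,
                 cpa_overshoot: float = 0.30,  # "Target CPA by 30%"
                 min_conversions: int = 3) -> bool:
    """Mirror of: IF spend > $X AND CPA > target + 30% AND conversions < Y."""
    return (p.spend > min_spend
            and p.cpa > target_cpa * (1 + cpa_overshoot)
            and p.conversions < min_conversions)

# Hypothetical stats: the feed converts, Audience Network does not.
stats = [
    PlacementStats("facebook_feed", spend=120.0, conversions=8, cpa=15.0),
    PlacementStats("audience_network", spend=80.0, conversions=2, cpa=40.0),
]
to_block = [p.name for p in stats if should_block(p)]
```

Running this every 10 to 30 minutes (as the rule schedule suggests) means a placement only gets blocked after it has spent meaningfully and underperformed, never pre-emptively.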
May 14, 2026

Before getting into budget sharing, let’s make sure the foundation is clear. Meta gives you two places to set your budget: at the campaign level or at the ad set level.

Campaign Budget (CBO / Advantage+ Campaign Budget): You set one budget for the entire campaign. Meta distributes that budget across your ad sets automatically based on where it predicts the best results. This means Meta might put 70% of your budget into one ad set and 10% into another if it believes that’s where the conversions are. You give up control over per-ad-set spend in exchange for Meta’s optimization.

Ad Set Budget (ABO): You set a separate budget for each ad set. Each one gets its own fixed daily amount. If Ad Set A gets $100/day and Ad Set B gets $100/day, that’s what they’ll spend (roughly), regardless of which one is performing better.

CBO is good when you want Meta to chase performance across ad sets. ABO is good when you want controlled testing with equal spend per audience or creative. Use campaign budget when you have multiple ad sets and want Meta to push more spend toward the stronger one. Use ad set budget when you want stricter control, cleaner tests, or you need to protect spend across audiences.

The problem with ABO has always been that it’s rigid. One ad set might be performing well and hitting its budget cap by 2 PM, while another is barely spending and delivering weak results. Your budget is locked on both sides. The strong ad set can’t spend more, and the weak one keeps spending its allocation anyway. Ad set budget sharing is Meta’s attempt to fix that rigidity without going full CBO.

What Is Ad Set Budget Sharing?

Ad set budget sharing is a feature that lets Meta redistribute up to 20% of one ad set’s daily budget to another ad set within the same campaign when it predicts better performance.
According to Meta’s documentation: “We’ll share up to 20% of your ad set budget with other ad sets within this campaign when it’s likely to improve performance.”

Here’s how it works in practice. Say you have two ad sets in a campaign, each with a $100/day budget:

- Without budget sharing: Each ad set spends up to $100. Total possible spend: $200.
- With budget sharing: If Meta believes Ad Set A has better opportunities, it can take up to $20 from Ad Set B’s budget and shift it to Ad Set A. Ad Set A now has up to $120, Ad Set B has $80, and total campaign spend stays the same.

It’s a middle ground. You still set individual ad set budgets (unlike CBO, where Meta controls everything), but you give the algorithm a little room to shift money toward what’s working. LeadEnforce’s analysis describes it well: “Budget sharing allows Meta to move up to 20% of one ad set’s daily budget into another active ad set inside the same campaign. The total campaign spend does not increase. Meta simply redistributes part of the budget toward stronger opportunities in real time.”

This relates directly to the campaign structure decisions we covered in our campaign structure best practices guide. If you’re running ABO for creative testing, budget sharing adds a layer of flexibility that can improve your results without giving up the control that makes ABO useful for testing in the first place.

When Is Budget Sharing Active (and When to Keep It On)

Budget sharing appears as a checkbox when you’re using ad set budgets with two or more ad sets. In some accounts, it’s checked by default on new campaigns. In others, it’s opt-in. Check your ad set settings to confirm.

Keep it on when:

- You’re running multiple ad sets that target similar or overlapping audiences and you want Meta to lean into whichever one is performing better on a given day.
- You’re in a scaling phase and want slightly more algorithmic flexibility without fully switching to CBO.
- You’re running broad targeting with diverse creatives across ad sets and want the budget to follow performance.

Think of it as ABO with a soft CBO layer on top. You still control the base budget per ad set, but Meta gets permission to move up to 20% around based on real-time performance signals. For campaigns where you want the algorithm to have room to optimize but you’re not ready to give up ad set level budget control entirely, budget sharing is a solid option.

How and When to Disable Ad Set Budget Sharing

Disabling is simple. Go to your campaign settings, find the budget sharing checkbox, and uncheck it.

When to turn it off:

- During controlled A/B tests. If you’re testing two audiences or two creative sets against each other, you need equal spend per ad set. Budget sharing breaks that controlled environment by shifting money toward whichever ad set shows early signals, which can skew your test results before you have enough data.
- When testing bidding strategies. If you’re comparing cost cap vs. bid cap across ad sets (which we covered in our bidding strategies article), budget sharing can muddy the results. One ad set getting 20% more budget than the other makes it hard to attribute performance differences to the bidding strategy alone.
- When you have very different audience sizes across ad sets. If Ad Set A targets a broad audience and Ad Set B targets a small retargeting list, budget sharing might drain the retargeting budget toward the broader audience since it has more opportunities. That’s not necessarily better. Your retargeting audience might have higher conversion quality even if it can’t absorb more spend.
- When you’re running TheOptimizer’s automation rules on per-ad-set budgets. If you’ve set up automation rules that adjust budgets based on ad set performance (for example, increasing budget by 20% when ROI is stable), budget sharing might affect the total spend of the ad set. The rule changes the budget based on the ad set’s performance, but in the meantime Meta is silently shifting 20% to or from that ad set. The two systems can work against each other.

For campaigns managed through TheOptimizer, I generally recommend disabling budget sharing and letting the automation rules handle budget allocation instead. TheOptimizer’s rules run every 10 minutes with explicit logic you’ve defined, whereas budget sharing operates on Meta’s internal predictions with no transparency into why it shifted the money.

The Impact on Campaign Spend

This is the part most people miss. Budget sharing doesn’t just move money between ad sets. It can also affect how much you spend on a given day. Segwise’s budget analysis found a critical detail: “If you have turned on ad set budget sharing, you may spend up to 75% over the total of your daily budget plus the maximum shared budget per day.”

That’s worth reading twice. Without budget sharing, Meta can already spend up to 25% over your daily ad set budget on high-opportunity days (a $100 budget might hit $125). With budget sharing enabled, that overspend cap increases to 75%. So a $100 daily budget with sharing enabled could theoretically hit $175 on a strong day.

Meta balances this over a 7-day window. Your weekly spend won’t exceed 7x your daily budget. But the day-to-day fluctuations can be more extreme with sharing turned on.

What this means for budget planning: if you set ad set budgets with the expectation that each one will spend roughly its daily amount, budget sharing can introduce surprises. One ad set might spend 40% over its budget on a Tuesday while another spends 30% under. Over the week, it evens out, but on a daily basis the numbers look volatile. For advertisers who need predictable daily spend (client-managed accounts with fixed daily caps, or campaigns where overspend triggers compliance issues), this matters. Turn sharing off and accept the trade-off of slightly less algorithmic flexibility.
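The overspend math above can be sketched as a quick sanity check. The 25% and 75% caps and the 7-day balancing window are the figures cited above; the functions themselves are illustrative, since Meta’s pacing logic is internal and not exposed through any API.

```python
def max_daily_spend(daily_budget: float, budget_sharing: bool) -> float:
    """Worst-case single-day spend for one ad set.

    Meta may exceed a daily budget on high-opportunity days:
    up to 25% over without budget sharing, up to 75% over with it.
    """
    overspend_cap = 0.75 if budget_sharing else 0.25
    return daily_budget * (1 + overspend_cap)

def max_weekly_spend(daily_budget: float) -> float:
    """Either way, spend is balanced over a rolling 7-day window."""
    return daily_budget * 7

print(max_daily_spend(100, budget_sharing=False))  # 125.0
print(max_daily_spend(100, budget_sharing=True))   # 175.0
print(max_weekly_spend(100))                       # 700
```

If your client contract or compliance rules cap daily spend, the number to plan around is `max_daily_spend`, not the budget field you typed into Ads Manager.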
Control your budgets with precision! TheOptimizer lets you build budget rules that run every 10 minutes across all your Meta ad accounts. Scale winners, protect losers, and maintain the spend control that budget sharing can undermine. Get Started for Free

FAQ

Is ad set budget sharing the same as Campaign Budget Optimization (CBO)?

No. CBO gives Meta full control over budget distribution across ad sets. Budget sharing still lets you set individual ad set budgets but allows Meta to move up to 20% between them. CBO can put […]
May 14, 2026

Before talking about bidding strategies, you need to understand the auction you’re actually bidding into. Every time there’s an opportunity to show an ad to someone, Meta runs an auction. Your ad competes against every other eligible ad targeting that same person. But the winner isn’t simply the highest bidder. Meta calculates a Total Value Score for each ad:

Total Value = (Your Bid x Estimated Action Rate) + Ad Quality + User Value

Three things matter: your bid (how much you’re willing to pay), the estimated action rate (how likely this specific person is to take the action you’re optimizing for), and ad quality (how relevant and engaging your creative is based on past signals).

This means a lower bid with a highly relevant ad can beat a higher bid with a poor one. Your creative quality and relevance act as multipliers on your bid: a $20 bid with strong relevance can outperform a $40 bid with weak relevance.

Your bidding strategy controls one piece of this equation: how Meta decides what to bid on your behalf. Everything else (creative quality, audience relevance) is determined by your ads and your account history.

The 5 Bidding Strategies Available in 2026

Meta offers five bidding strategies in 2026. Three are goal-based (they control cost), and two are spend-based (they control volume).

1. Highest Volume (formerly Lowest Cost)

What it does: Meta bids whatever it takes to get you the most results within your budget. No cost control. No cap. It just spends your budget as efficiently as it can.

The upside: Maximum delivery. Fastest exit from the learning phase. Zero setup. It’s Meta’s default for a reason.

The downside: Your CPA fluctuates. Monday might be $25, Tuesday $55, Wednesday $30. You have no cap, so when competition spikes (holidays, weekends, industry events), your costs spike with it. Meta doesn’t care about your profit margins. It cares about spending your budget.
Best for: New campaigns, new accounts, data collection phases, and situations where volume matters more than per-unit cost.

2. Highest Value

What it does: Instead of maximizing the number of conversions, Meta maximizes the total conversion value. It finds people likely to make bigger purchases rather than more purchases.

The upside: Higher average order value. Better for e-commerce with a wide product price range.

The downside: Requires purchase value data sent through your pixel or Conversions API. Without it, Meta has nothing to optimize against.

Best for: E-commerce stores where order values vary significantly ($20 t-shirt vs. $200 jacket). Useless for lead gen or flat-value conversions.

3. Cost Cap

What it does: You set a target CPA, and Meta tries to keep your average cost per result at or below that target. Key word: average. Individual conversions might cost more or less, but the average should hover around your cap.

The upside: Predictable costs over time. Meta still has flexibility to bid above your cap when it finds high-probability users, as long as the average stays in line.

The downside: During the learning phase, costs can exceed your cap significantly before stabilizing. A lot of people get burned by cost caps because they expect them to behave like bid caps. A cost cap is an average, not a ceiling.

Best for: Scaling campaigns where you want to maintain profitability without micromanaging bids. Ideal when you know your target CPA but want Meta to have room to find volume.

4. Bid Cap

What it does: You set the maximum amount Meta can bid in any single auction. If winning an impression would require bidding above your cap, Meta doesn’t bid. Period.

The upside: Hard cost control. Your CPA will never exceed your cap (on a per-auction basis). As Mathias Schrøder told Ads Uploader: “Last year was our most profitable year ever. We made a deliberate shift to prioritize profit over revenue.
Bid caps were central to that strategy.”

The downside: Your spend becomes the variable instead of your CPA. Some days you’ll only spend $200 of a $500 budget because Meta couldn’t find enough auctions to win at your price. Delivery can stall completely if your cap is too low.

Best for: Advertisers who know their exact break-even CPA and prioritize profitability over volume. Requires data and experience.

5. Minimum ROAS (Return on Ad Spend)

What it does: You set a minimum ROAS threshold (say, 2.5x), and Meta only bids on auctions where it predicts the purchase will meet or exceed that return.

The upside: Directly ties bidding to revenue outcomes, not just cost.

The downside: Needs high purchase volume for the algorithm to predict accurately. Not available for non-purchase optimization events.

Best for: E-commerce brands with strong purchase data who want to guarantee a minimum return threshold.

Cost Cap vs. Bid Cap: The Real Differences

This is the comparison most people get wrong, so let me be very specific about what’s different.

| | Cost Cap | Bid Cap |
|---|---|---|
| What it controls | Average CPA across all conversions | Maximum bid per individual auction |
| Can individual conversions exceed the cap? | Yes. Some will be above, some below. The average is the target. | No. Meta will not bid above your cap in any single auction. |
| What varies? | Individual conversion costs fluctuate | Daily spend fluctuates |
| Daily budget usage | Tends to spend your full daily budget | May not spend your full budget if it can’t find enough auctions at your price |
| Learning phase behavior | May overshoot the cap initially, then stabilize | May severely limit delivery initially if the cap is too tight |
| Volume vs. profitability | Leans toward volume with cost guardrails | Leans toward profitability with volume trade-offs |
| When it works best | You want to scale while keeping CPA roughly predictable | You want hard cost control on every conversion |

The fundamental mental model: with cost cap, you’re telling Meta: “Spend my budget, but try to keep the average cost around $X.” With bid cap, you’re telling Meta: “Only compete in auctions where you can win for $X or less. If you can’t find enough of those, spend less.”

The trade-off is always volume vs. control. Cost cap gives you more volume with less precise control. Bid cap gives you more control with less predictable volume.

When to Use Each Bidding Strategy

Here’s the practical decision framework I use:

Phase 1: Discovery (new campaign, new offer, new account). Use Highest Volume. You don’t know your CPA yet. You don’t know which creatives work. You need data fast. Let Meta spend freely and collect baseline numbers. Stay here for 7 to 14 days or until you have at least 50 conversions.

Phase 2: Optimization (you know your numbers). Switch to Cost Cap. You now have a baseline CPA and you want to maintain it while scaling. Set your cost cap at your target CPA (not your break-even, your target). This gives Meta flexibility to find more volume while keeping your average cost in check. If your CPA starts creeping above target despite the cost cap, it usually means competition is rising or your creatives are fatiguing. See our article on detecting creative fatigue early for the specific automation rules to catch this.

Phase 3: Profitability (you want to protect margins). Switch to Bid Cap. You know exactly what a customer is worth and what you can afford to pay. Set the cap at your maximum acceptable CPA with a 10 to 20% buffer for auction competition. Accept that you’ll spend less budget but at better unit economics.

Phase 4: Value optimization (e-commerce with varied AOV). Layer in Minimum ROAS or Highest Value.
These only make sense when you have strong purchase value data and want to optimize for revenue, not just conversion count.

The transition principle: you don’t pick a strategy and stick with it forever. You graduate from one to the next as your data matures. As Stackmatix’s analysis found: “You switch strategies based on performance signals, not a calendar.” This phased approach aligns with the campaign lifecycle framework in our Meta Ads automation playbook: launch with broad settings, validate with data, then tighten controls as you scale.

Can I Use Cost Cap or Bid Cap on a Brand New Account?

Short answer: you can, but you probably shouldn’t. Cost cap and bid cap both rely on Meta having enough data to predict conversion probabilities. On a brand new account with zero conversion history, Meta is guessing. And guessing with a bid cap usually means one of two things: Your cap is […]
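The phased framework above can be expressed as a simple decision helper. This is a sketch of the article’s reasoning, not a Meta API; the function name, parameters, and exact boundary values (50 conversions, 7 days) are assumptions drawn from the phase descriptions.

```python
def recommend_bid_strategy(conversions: int, days_running: int,
                           knows_target_cpa: bool,
                           protecting_margins: bool) -> str:
    """Rough mapping of the four phases to a bidding strategy."""
    # Phase 1: Discovery — not enough data yet, let Meta spend freely.
    if conversions < 50 or days_running < 7:
        return "Highest Volume"
    # Phase 3: Profitability — hard cost control once unit economics are known.
    if protecting_margins and knows_target_cpa:
        return "Bid Cap"
    # Phase 2: Optimization — scale while keeping average CPA near target.
    if knows_target_cpa:
        return "Cost Cap"
    # Still no reliable target CPA: keep collecting data.
    return "Highest Volume"

print(recommend_bid_strategy(10, 3, knows_target_cpa=False, protecting_margins=False))   # Highest Volume
print(recommend_bid_strategy(120, 30, knows_target_cpa=True, protecting_margins=False))  # Cost Cap
print(recommend_bid_strategy(300, 60, knows_target_cpa=True, protecting_margins=True))   # Bid Cap
```

Phase 4 (Minimum ROAS / Highest Value) is omitted here because it layers on top of these choices rather than replacing them.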
May 14, 2026

For the past two years, Meta has been stripping away manual controls. Interest targeting became a “suggestion.” Detailed targeting expanded by default. Advantage+ took over audience selection. The message was clear: trust the algorithm.

Then, in June 2025, Meta did something that seemed to contradict all of that. They gave advertisers value rules.

Value rules let you tell Meta’s algorithm that certain audience segments are worth more (or less) to your business, and to adjust its bids accordingly. You can bid 60% more for women aged 25 to 34. Or bid 40% less for users in a specific country. Or deprioritize a placement where your conversions have a high refund rate. All without creating separate ad sets or fragmenting your campaign structure.

As Jon Loomer put it in his deep dive on the feature: value rules address a real weakness in Meta’s optimization. The algorithm is designed to get you the most results possible. But it doesn’t inherently care about the value of those results. It optimizes for volume, not quality. Value rules let you layer business intelligence on top of algorithmic efficiency.

The feature launched for Sales and App Promotion campaigns in June 2025, expanded to all campaign objectives by August 2025, and received significant enhancements in 2026, including placement-specific rules and device platform adjustments.

But here’s the thing Meta tells you upfront in the setup screen: “When you use value rules, you may see more conversions from your preferred audiences, but your overall cost per result may increase.” That warning isn’t decoration. Meta’s own documentation says value rules can increase your cost per result by 20 to 1,000%. Not a typo. One thousand percent.

So the question isn’t whether value rules are powerful. They are. The question is whether you know enough about your business data to use them without lighting money on fire.

How Value Rules Actually Work (With Examples)

The logic is straightforward.
Value rules are bid multipliers applied at the audience segment level within Meta’s auction system. Every time your ad is eligible to appear, Meta calculates a Total Value Score:

Total Value = (Advertiser Bid x Estimated Action Rate) + Ad Quality + User Value

When you set a value rule, you’re adjusting the “Advertiser Bid” component for specific audience segments. A +50% rule means Meta bids 50% higher for people matching that segment. A -40% rule means Meta bids 40% less.

What you can target with value rules:

- Age ranges
- Gender
- Location (countries, regions, states)
- Mobile operating system (iOS or Android)
- Device platform
- Ad placement (Feed, Stories, Reels, Audience Network, Marketplace)

You can combine up to 2 criteria per rule (for example, “women aged 25 to 34” or “iOS users in California”). You can create up to 10 rules per rule set, and up to 6 rule sets per account.

Rule priority matters. When a user qualifies for multiple rules, Meta applies only the first matching rule in the sequence. So if Rule 1 is “+20% for women in California” and Rule 2 is “+50% for iOS users,” a woman in California using an iOS device gets the +20%, not the +50%. Order your rules from most specific to least specific.

Example 1: E-commerce LTV optimization

Your data shows women aged 25 to 44 have an average lifetime value of $850 over 12 months. Your overall average is $530. Women in this age range are worth about 60% more to your business.

Setup:
- Rule 1: Increase bid +60% for Women, Age 25-44
- Rule 2: Decrease bid -20% for Men, Age 55-65+

Your current CPA is $45 across all demographics. With the +60% bid increase, you’re willing to pay up to $72 to acquire a woman aged 25 to 44, because her LTV justifies it. Meanwhile, you’re bidding less for a segment that your CRM data shows has a high return rate and low repeat purchase rate.

Example 2: Geographic performance differences

You run a B2B SaaS product.
Leads from major US metro areas convert to paid customers at 3x the rate of leads from smaller markets.

Setup:
- Rule 1: Increase bid +40% for Location: New York, San Francisco, Chicago, Los Angeles, Austin
- Rule 2: Decrease bid -30% for Location: [lower-converting regions]

Meta’s algorithm still reaches all audiences. But it bids more aggressively for impressions in high-converting metros, steering more budget toward the leads that actually close.

Example 3: Placement quality differences

Your data shows that Feed conversions have 2x the average order value compared to Audience Network conversions.

Setup:
- Rule 1: Increase bid +30% for Placement: Facebook Feed, Instagram Feed
- Rule 2: Decrease bid -50% for Placement: Audience Network

Instead of excluding Audience Network entirely (which reduces the algorithm’s delivery options), you’re deprioritizing it through bid adjustments. Meta will still deliver there when it’s extremely cheap, but your budget concentrates on placements where the conversion quality is higher.

How Value Rules Impact Campaign Pacing and Spend

This is where things get practical, and where most advertisers underestimate the effect. Value rules don’t change your daily budget setting. Your budget stays the same. What changes is how aggressively Meta competes in auctions for specific segments.

When you increase bids for a segment, Meta enters higher-priced auctions to reach those people. That means:

- You win more auctions for that segment. More of your budget goes toward the people you value most.
- You pay more per impression for that segment. Higher bids mean higher CPMs.
- Your budget depletes faster if a large portion of your audience matches the high-bid rule.

When you decrease bids for a segment, the opposite happens. Meta is less competitive in auctions for those people, which means fewer impressions but a lower cost per impression.
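The first-match rule priority described earlier (only the first matching rule in the sequence adjusts the bid) can be sketched like this. The rule criteria, multipliers, and user-dictionary shape are illustrative assumptions, not Meta’s actual data structures.

```python
# rules: ordered list of (predicate, multiplier) pairs.
# A +20% rule is multiplier 1.20; a -40% rule would be 0.60.
def adjusted_bid(base_bid: float, user: dict, rules: list) -> float:
    for matches, multiplier in rules:
        if matches(user):
            return base_bid * multiplier   # first match wins; stop here
    return base_bid                        # no rule matched: bid unchanged

rules = [
    (lambda u: u["gender"] == "female" and u["state"] == "CA", 1.20),  # Rule 1: +20%
    (lambda u: u["os"] == "iOS", 1.50),                                # Rule 2: +50%
]

# A woman in California on iOS matches both rules, but only the
# first (+20%) applies, as in the priority example above.
bid = adjusted_bid(10.0, {"gender": "female", "state": "CA", "os": "iOS"}, rules)
```

This is why ordering rules from most specific to least specific matters: put the broad iOS rule first and the more targeted California rule would never fire for iOS users.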
The net effect on pacing: if your high-value segment is a small portion of your total audience (say, 15%), the pacing impact is manageable. The budget shifts toward that 15% without dramatically changing overall delivery. But if your high-value segment is 60% of your audience and you’re bidding +50%, you’ve effectively increased your average bid across most of your delivery. Your budget will pace faster and may exhaust earlier in the day.

What to watch for:

- Check hourly delivery after activating value rules. If your budget is exhausting by 2 PM instead of running through the full day, your bid increases are too aggressive for your budget.
- Monitor impression share by segment. Are you actually getting more delivery to the segments you increased bids for? If not, competition might be too high at that bid level.

How Value Rules Impact Cost Per Result

Let me be blunt about this. Value rules will almost always increase your average cost per result. Meta tells you this outright during setup. The question isn’t “will costs go up?” They will. The question is “will the value of the results go up by more than the cost?”

Here’s the math that matters.

Without value rules:
- 100 conversions at $45 CPA = $4,500 spend
- Average customer value: $530
- Total customer value: $53,000
- ROAS: 11.8x

With value rules (+60% bid for high-LTV segment):
- 85 total conversions at $55 CPA = $4,675 spend
- 50 of those conversions are from the high-LTV segment ($850 average value)
- 35 conversions from other segments ($530 average value)
- Total customer value: (50 x $850) + (35 x $530) = $61,050
- ROAS: 13.1x

Fewer total conversions. Higher CPA. But higher total customer value and better ROAS. That’s the trade-off. Value rules sacrifice volume efficiency for value efficiency. If you’re optimizing for top-line conversion count, value rules will look like they’re hurting you. If you’re optimizing for revenue and LTV, they can look very different.
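Those numbers can be double-checked with a few lines of Python. The figures are the article’s illustration, not real campaign data, and the `roas` helper is a hypothetical convenience function.

```python
def roas(conversions_by_value: dict, spend: float) -> float:
    """conversions_by_value maps average customer value -> conversion count."""
    total_value = sum(value * count for value, count in conversions_by_value.items())
    return total_value / spend

# Without value rules: 100 conversions at $45 CPA, $530 average value.
without_rules = roas({530: 100}, spend=100 * 45)       # $53,000 / $4,500

# With value rules: 85 conversions at $55 CPA, split across segments.
with_rules = roas({850: 50, 530: 35}, spend=85 * 55)   # $61,050 / $4,675

print(round(without_rules, 1))  # 11.8
print(round(with_rules, 1))     # 13.1
```

The CPA rose 22% while ROAS still improved, which is exactly the value-over-volume trade the article describes.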
| | Without Value Rules | With Value Rules |
|---|---|---|
| Conversions | 100 | 85 |
| CPA | $45 | $55 |
| Avg customer value | $530 | ~$718 |
| Total customer value | $53K | $61K |
| ROAS | 11.8x | 13.1x |

Fewer conversions. Higher CPA. More revenue.

Value Rules vs. Narrow Audience Targeting

Before value rules existed, the standard approach for targeting high-value segments was audience fragmentation. You'd create separate ad sets for different demographics, each with its own budget. Women 25 to 34 in one ad set. Men 35 to 44 in another. Different budgets reflecting different segment values. This approach has three problems in 2026:

1. It fragments the learning phase. Each ad set needs 50 conversion events per week to exit learning phase. If you split one campaign into 4 demographic ad sets, each one needs to generate 50 events […]
May 11, 2026

If you've been running Meta Ads for any length of time, you've probably had this experience: your Ads Manager shows 50 purchases. Your Shopify shows 32. Google Analytics shows 28. Your actual bank account shows revenue that matches none of them. Welcome to the attribution problem.

Meta made significant changes to how attribution works in early 2026. They redefined what counts as a "click." They created a brand new attribution category called engage-through. They shortened the video engagement threshold from 10 seconds to 5. And they quietly made incremental attribution available as an alternative to the standard model. If you haven't updated your understanding of how Meta counts conversions, you're making optimization decisions based on numbers that don't mean what you think they mean. And that's an expensive misunderstanding. In this guide, I'll break down how every layer of Meta's attribution system works in 2026. Not the theory. The practical reality of what your numbers actually represent and how to use them to make better decisions.

The Four Attribution Types in 2026

Meta now counts conversions across four distinct attribution types. They are not equal in quality, and treating them as the same number is one of the fastest ways to overstate performance.

1. Click-Through Attribution

A conversion is attributed as click-through when someone clicks a link in your ad and converts within your selected window (1-day or 7-day). What changed in March 2026: Click-through now requires an actual link click. A click that sends someone to your website, lead form, app, or Messenger. Previously, Meta counted likes, shares, saves, and comments as "clicks" for attribution purposes. If someone tapped the heart icon on your ad Tuesday and bought your product Friday, that counted as a 7-day click-through conversion. That's no longer the case. Only link clicks count. This is the highest-quality attribution type. The person actively chose to leave Meta and visit your destination.
The intent signal is strong.

2. Engage-Through Attribution (NEW in March 2026)

A conversion is attributed as engage-through when someone interacts with your ad socially (like, comment, share, save, or watches a video for 5+ seconds) and converts within 1 day. This is the new category that replaced the old "engaged-view" model. It's broader than what came before. The old engaged-view only covered video views of 10+ seconds. Engage-through covers all non-link interactions plus video views at the new 5-second threshold. The conversion window is fixed at 1 day. You can turn it on or off, but you can't extend it to 7 days. If someone saves your ad on Monday and buys on Wednesday, that conversion is not attributed under engage-through. This is a medium-quality signal. The person engaged with your ad meaningfully, but they didn't click through to your site. The ad may have influenced the purchase, but the path wasn't direct.

3. View-Through Attribution

A conversion is attributed as view-through when someone is served an impression of your ad (without clicking or engaging) and converts within 1 day. Meta defines an impression as any ad that is 50% in view for at least 1 second. That's a low bar. Someone scrolling past your ad quickly enough that they barely registered it can generate an impression. If they happen to purchase your product later that day, Meta counts it as a view-through conversion. This is the lowest-quality attribution type, and the most controversial one. Many experienced buyers remove it entirely from prospecting campaigns.

4. Incremental Attribution (Advanced)

This is a fundamentally different model that I'll cover in depth in the next section and in a separate dedicated article. Instead of counting every conversion within a time window, incremental attribution uses machine learning to estimate which conversions were actually caused by your ad vs. which would have happened anyway.
The March 2026 Changes: What Actually Happened

On March 3, 2026, Meta published "Simplifying Ad Measurement for a Social-First World" on its business blog. Three things changed:

1. Click-through narrowed to link clicks only. All those social interactions (likes, shares, saves, comments) that used to count as "clicks" no longer qualify for click-through attribution. They moved to engage-through. Why this matters: your click-through conversion numbers likely dropped after this change. That's not a performance decline. It's a reclassification. The conversions didn't disappear. They moved buckets.

2. Engage-through replaced engaged-view and got much broader. The old engaged-view only applied to video ads (10+ second views). The new engage-through covers all ad formats: likes, shares, saves, comments, carousel swipes, and video views of 5+ seconds. The conversion window is fixed at 1 day. This is important. Under the old system, a share followed by a purchase on day 5 was a 7-day click-through conversion. Now, that same share gives you only a 1-day engage-through window. If the purchase happens on day 2 or later, it's not attributed at all. As Media Performance documented, some conversions genuinely disappear from your reports because of this gap.

3. Video engaged-view threshold dropped from 10 seconds to 5 seconds. Meta's own data shows that 46% of Reels purchase conversions happen within the first 2 seconds of attention. The old 10-second threshold was calibrated for longer Facebook Feed videos and missed a lot of genuine engagement on short-form content.

The new 2026 default attribution setting is:
- 7-day click-through
- 1-day engage-through
- 1-day view-through
- Standard attribution model
- All conversions counted

Attribution Settings: What to Choose and Why

When you create an ad set, you'll find the attribution settings in the Budget and Schedule section. Here's what you're actually choosing and what it does.
Click-Through Window Options: 1-day or 7-day

The 7-day window is standard for most e-commerce brands because purchase decisions typically happen within a week of the initial click. Switching from 7-day to 1-day typically reduces reported conversion volume by 30 to 40% for the same campaign. That's not because your ads stopped working. It's because you're excluding consideration purchases in the 2 to 7 day window. My recommendation: Keep 7-day click for purchase events. The consideration window is real. Someone who clicks your ad Monday and buys Thursday was genuinely influenced by your ad. Use 1-day click for lead gen events where the conversion should happen in the same session (someone who clicks to download a free PDF but doesn't do it for 5 days probably found it elsewhere).

Engage-Through Options: 1-day or None

Jon Loomer recommends keeping 1-day engage-through on for purchase events. A save, share, or video view shows interest and awareness. Even if the eventual purchase was driven by another channel, the initial engagement signals that the ad had impact. For non-purchase events (leads, sign-ups, free downloads), consider turning engage-through off. If someone didn't click through to get your free resource, the ad's influence is debatable. For retargeting campaigns, also consider removing engage-through. Remarketing audiences already have prior intent. Attributing a view or a like to a conversion in this audience inflates the numbers.

View-Through Options: 1-day or None

This is the setting that causes the most confusion and debate. A 1-day view-through conversion means someone saw your ad (50% in view for 1 second), didn't click, didn't engage, and then converted within 24 hours. For purchase events (especially higher-ticket items), there's a case for keeping it on. Someone browsing Instagram sees your ad for a product they were already considering, doesn't click, but goes to your site directly later and buys. The ad reminded them. That's a real thing.
For everything else, I'd strongly consider removing view-through. It's the attribution type most likely to inflate your numbers with conversions your ad didn't actually drive.

Standard vs. Incremental: The Two Attribution Models

On top of the window settings, Meta offers two attribution models:

Standard Attribution (default): Counts every conversion that occurs within your selected windows, regardless of whether your ad actually caused it. If someone was going to buy anyway but happened to see your ad first, standard attribution gives your ad full credit.

Incremental Attribution (advanced): Uses machine learning trained on Meta's library of Conversion Lift experiments to estimate which conversions were actually caused by your ad. It filters out organic demand. When you select incremental attribution, you lose the ability to edit attribution windows. That makes sense. Incremental doesn't use time-based windows. It uses causal modeling. Jon Loomer notes that in his testing, the difference between standard and incremental results has been modest. He recommends incremental as the better default for high-budget advertisers who have no […]
May 11, 2026

You finally found the one ad that's performing. The engagement metrics are great, and the next actionable step that feels natural is duplicating it. Right? Wrong.

If you duplicate a Facebook ad that's been performing, you'll watch it restart with zero interactions. Zero likes, zero comments, zero shares. That's because Facebook's algorithm registers it as a new post, which comes along with a new, unique ID. Think of it as an identity card; each post has its own personal number, and the engagement your ad earned belongs to that number. This is a social proof reset issue. The interactions your ads earned early belong to that phase, so when you scale, everything related to those metrics starts from scratch.

Fortunately, there's a solution to this. Facebook Dark Posts and Post IDs let you run the same ad creative across multiple campaigns, ad sets, and even ad accounts, while preserving the engagement. In this blog, I'll show you exactly how. We'll walk through what dark posts are, why post IDs matter for ad performance, and the best methods to use them at scale.

Key Takeaways

- Facebook dark posts are unpublished page posts that only exist as paid placements and never appear on your Page's public timeline.
- The Post ID is the identity of the ad creative. All engagement belongs to the post and carries over to every campaign or ad set that references the same Post ID.
- Duplicating an ad creates a new Post ID and resets social proof to zero, even if the creative is identical.
- There are four ways to find a Post ID: through Ads Manager, from the post URL in Publishing Tools, via the Facebook Graph API, or through a bulk campaign tool like TheOptimizer's Campaign Creator.
- Your Facebook Page must be shared with every ad account you want to reuse a Post ID in. Otherwise, it will silently create a new post instead.

What Is a Facebook Dark Post?

Let's kill the jargon first, because "dark post" sounds way more mysterious than it is.
A dark post is simply an ad that doesn't appear on your Facebook Page's timeline the way organic posts do. It appears as a sponsored ad, and it's officially called an "unpublished page post". Dark posts show up in the feeds of the audience you're targeting. They're invisible to anyone who doesn't fall in that group.

Every Facebook ad you create through Ads Manager is technically a dark post. When you build a new ad, Facebook creates an unpublished post behind the scenes and uses that post as the ad unit. You never see it on your Page because it was never intended to be organic content.

Now, the question is, if every ad is already a dark post, why do advertisers specifically choose to create them? Well, one reason is to test multiple ad variations without cluttering the page. Dark posts allow advertisers to experiment with different creatives and see which one gets better results. Another reason is to promote products or services to a specific audience. Dark posts are targeted; you're showing them only to selected people. For example, if you're selling a limited-edition perfume for women, there's no need to display the ad to all audiences when you can target women only.

Dark Posts vs. Boosted Posts

It's easy to confuse these two, but they are actually distinct. A boosted post starts as an organic post on your Page. You publish it normally, your followers can see it, and you pay to boost it. The post exists on your Page before the ad does. A dark post is never organic. It was born as an ad and exists solely as a paid placement. This distinction matters because when you boost an organic post, you build on public engagement that grows naturally. When you create a dark post, all engagement is paid-only.

What Is a Facebook Post ID and Why Does It Matter?

A Facebook Post ID is a unique 15–17-digit number associated with every post on the platform. It allows advertisers to reuse the same post across campaigns while maintaining the existing engagement; every like, comment, and share.
Facebook Post ID impacts social proof. Imagine a user scrolling through their feed and seeing an ad with 847 likes, 130 comments, and a comment section full of people saying, "I bought this and love it." Now think about the same ad with zero engagement. Same creative, but completely different first impression. The ad with social proof builds credibility before the user has read a single word of copy. It reduces the psychological friction of clicking. And it triggers a subtle yet powerful herd mentality: if all these people are engaging with this, maybe it's worth my attention.

The performance data backs this up:
- Higher CTR: Ads with visible engagement outperform identical ads with no engagement. Users look for validation, and as a result, they're drawn to content that others have engaged with.
- Lower CPMs: Facebook's algorithm rewards engagement. Because the algorithm interprets engagement as a quality signal, high-engagement posts get shown to more people at lower cost.
- Better conversion rates: Social proof carries over into the purchase decision. An ad that feels trusted and credible before the click generates warmer traffic than one that feels brand new.
- Cross-campaign consistency: When you're testing in one campaign and scaling in another, using the same Post ID means you're not reinventing the wheel. The testing phase builds the social proof, and the scaling phase leverages it.

How to Find a Facebook Post ID

There are a few simple methods to find a Post ID:

Method 1: From Ads Manager

This is the most common approach for advertisers who aren't running at a massive scale.
1. Open Ads Manager and navigate to the ad level
2. Click Edit on the ad you want to find the Post ID for
3. Under "Ad Creative," look for "Use Existing Post"; the Post ID is displayed there
4. Alternatively, click the ad preview link and extract the Post ID from the URL

Method 2: From the Facebook Post Directly

1. Navigate to your Facebook Page
2. Find the dark post via the Page's Ad Posts section (under Publishing Tools)
3. Click on the post's timestamp to open it in a new tab
4. The URL will contain the Post ID in this format: facebook.com/[page]/posts/[POST_ID]

Method 3: Using the Facebook API

This method is mostly for technical teams managing creative libraries at scale. Facebook's Graph API provides programmatic access to your Page's posts, including dark posts, but you need to use the correct endpoint. The standard /feed endpoint only returns published posts and will miss your ad creative entirely. Use /promotable_posts instead:

GET https://graph.facebook.com/v20.0/{page-id}/promotable_posts
    ?access_token={page-access-token}
    &is_published=false
    &fields=id,message,created_time,is_published

Passing is_published=false filters for unpublished posts only, which is exactly where your dark posts live. The id field in each result is your Post ID.

Method 4: From a Bulk Campaign Creation Tool

Manual Post ID retrieval from the methods above works great when you're doing it occasionally. But when launching many campaigns per week, they're inefficient. TheOptimizer's Campaign Creator is built specifically for this kind of scale: launching multiple campaign variations across multiple ad sets. It includes a dedicated space for the Post ID, which you can paste once, and it's automatically applied across ad sets. Every creative stored in the Creative Library retains its associated Post ID. When you move a winning creative into a new campaign, its Post ID comes along automatically. This keeps the social proof you built during weeks of testing. No one has to remember to look it up.
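If you'd rather script the Method 3 lookup, the request can be assembled with Python's standard library. This is a minimal sketch; the endpoint and fields are the ones shown above, but verify the parameters against the current Graph API reference before relying on them:

```python
from urllib.parse import urlencode

GRAPH_VERSION = "v20.0"  # match whatever API version you're targeting

def promotable_posts_url(page_id: str, access_token: str) -> str:
    """Build the Graph API URL for a Page's promotable posts.

    Sketch only: assumes the /promotable_posts edge and its parameters
    behave as described in this article.
    """
    params = {
        "access_token": access_token,
        "fields": "id,message,created_time,is_published",
        # Filter to unpublished posts only -- where dark posts live.
        "is_published": "false",
    }
    return (f"https://graph.facebook.com/{GRAPH_VERSION}/"
            f"{page_id}/promotable_posts?{urlencode(params)}")

url = promotable_posts_url("123456789", "PAGE_TOKEN")
print(url)
```

From there, any HTTP client can fetch the JSON and read the id field of each result, which is the Post ID you'd paste into "Use Existing Post".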
For agencies running campaigns across multiple client accounts, the tool also supports cross-account campaign cloning. When you clone to a new account, the Post ID connection is preserved, so you don't have to start from scratch.

How to Use Existing Post IDs Across Campaigns

Once you have a Post ID, the workflow is straightforward:

1. Create a new campaign. Configure all the settings like you normally would.

2. Configure your ad set. At the ad set level, the 'dark' part starts to take shape. You're defining who will see your ad, while everyone outside that audience won't see it at all.

3. Select "Use Existing Post". This is a key step. When creating any future campaign with an existing creative, select "Use Existing Post". In Ads Manager, at the ad creation level, you'll see two options: Create Ad and Use Existing Posts. Create Ad builds a brand new dark post […]
May 10, 2026

If you run Google Ads on a schedule (weekdays only, business hours only, weekends only, specific dayparts), your monthly spend is about to go up. Potentially by a lot. Google announced a change to budget pacing that takes effect June 1, 2026. The announcement frames it as "making it easier for advertisers to hit monthly spending goals." The practical effect for anyone using ad scheduling? You'll spend more money with the same daily budget setting.

Here's the change in plain language:

Before June 1: Google paced your spend based on the number of days your ads actually ran. If your campaign was set to weekdays only, Google aimed to spend your daily budget across those ~22 weekdays per month. Your daily budget worked roughly like a daily cap.

After June 1: Google paces toward the full monthly limit (30.4x your daily budget) regardless of how many days your schedule allows. Your ads still only run during your scheduled windows. But Google will push harder to spend the full monthly cap within those windows. Your daily budget is no longer acting as a daily cap. It's a monthly target being compressed into fewer days.

Ginny Marvin, Google Ads Liaison, confirmed on X that spend will still be driven by campaign objectives and no campaign will exceed existing billing caps. But as Search Engine Land put it: "Budget pacing is becoming less about when ads run and more about ensuring the full budget gets spent." That last part is what should get your attention.

The Math That Matters

Let me break this down with actual numbers because the impact isn't obvious until you run the math. Google's billing rules haven't changed:
- Daily cap: Your bill on any single day can't exceed 2x your daily budget
- Monthly cap: Your monthly bill can't exceed 30.4x your daily budget
- Schedule respected: Your ads still won't run on days or hours you've disabled

What changed is how aggressively Google uses the room between your daily budget and those caps.
The formula that matters now:

Effective daily spend = (Daily budget × 30.4) ÷ Number of active days per month

So if your daily budget is $100 and you run ads 20 days per month:

($100 × 30.4) ÷ 20 = $152/day

That's 52% more per active day than what you were spending before. Same daily budget setting. Same schedule. More money going out the door.

Three Real Scenarios to Show the Impact

Let me walk through three common scheduling setups so you can see exactly what this looks like for different types of advertisers.

Scenario 1: Weekdays Only (Mon-Fri)

A pretty common setup. A B2B company or local service business that only wants to run ads during the work week.

| | Before June 1 | After June 1 |
|---|---|---|
| Daily budget | $100 | $100 |
| Active days/month | ~22 weekdays | ~22 weekdays |
| Monthly spend target | ~$2,200 | Up to $3,040 |
| Effective daily spend | ~$100 | Up to ~$138 |
| Increase | — | +38% per day |

Google will try to push the full $3,040 monthly cap through 22 days instead of 30.4. Each active day absorbs more spend.

Scenario 2: Weekends Only (Sat-Sun)

A restaurant, entertainment venue, or e-commerce brand that concentrates spend on weekends.

| | Before June 1 | After June 1 |
|---|---|---|
| Daily budget | $100 | $100 |
| Active days/month | ~8 weekend days | ~8 weekend days |
| Monthly spend target | ~$800 | Up to $1,600 |
| Effective daily spend | ~$100 | Up to ~$200 (2x daily cap) |
| Increase | — | +100% per day |

This is the most dramatic case. With only 8 active days, Google has to push $3,040 through a very narrow window. The 2x daily cap limits each day to $200, so the actual monthly total would be around $1,600 (8 days × $200). That's still double what you were spending before.

Scenario 3: Business Hours Only (Mon-Fri, 9 AM – 5 PM)

A service business that wants leads only when the phone is staffed.
| | Before June 1 | After June 1 |
|---|---|---|
| Daily budget | $150 | $150 |
| Active days/month | ~22 weekdays | ~22 weekdays |
| Monthly spend target | ~$3,300 | Up to $4,560 |
| Effective daily spend | ~$150 | Up to ~$207 |
| Increase | — | +38% per day |

Same percentage increase as Scenario 1 because the number of active days is the same. The hourly restriction doesn't change the math since Google was already pacing within those hours. What changes is how aggressively it spends during those hours.

Key takeaway: The fewer days your schedule allows, the bigger the impact. A 5-day schedule sees a ~38% increase per active day. A 2-day schedule sees up to 100%. A 7-day schedule (every day) sees no change at all because the current pacing and the new pacing are identical when all days are active.

Who Gets Hit Hardest

Not every advertiser is affected equally. Here's who needs to pay attention:

- Local service businesses that run ads only during staffed hours. Plumbers, lawyers, dentists, HVAC companies. These businesses use scheduling specifically to control when leads come in. More spend during the same hours means more leads arriving when staff capacity hasn't changed.
- B2B companies running weekday-only campaigns. If your sales team doesn't work weekends, you probably don't want ads on weekends. But now your weekday spend increases to compensate for those inactive weekend days.
- Agencies managing client budgets. If a client said "I want to spend $3,000/month" and you set a daily budget based on active days, that math just broke. The same daily budget now targets a higher monthly total.
- Advertisers using scheduling as a spending control. This is the big one. Many small-business advertisers treated ad scheduling as more than a timing control. In practice, it worked like a soft spending control too. That soft control just got removed.
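The scenario math above can be reproduced with a short function, assuming only the 30.4x monthly multiplier and the unchanged 2x daily billing cap:

```python
def new_effective_daily(daily_budget: float, active_days: float) -> float:
    """Per-active-day spend Google paces toward after June 1, 2026:
    the full 30.4x monthly cap compressed into the scheduled days,
    limited by the unchanged 2x single-day billing cap."""
    monthly_cap = daily_budget * 30.4
    uncapped = monthly_cap / active_days
    return round(min(uncapped, 2 * daily_budget), 2)

print(new_effective_daily(100, 22))  # 138.18 -> Scenario 1 (~+38%)
print(new_effective_daily(100, 8))   # 200.0  -> Scenario 2 (hits the 2x cap)
print(new_effective_daily(150, 22))  # 207.27 -> Scenario 3
```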
Who's NOT affected:
- Campaigns running every day with no schedule restrictions (no change)
- Local Services Ads (confirmed not affected)
- Campaigns using campaign total budgets instead of daily budgets (different pacing system entirely)

What Stays the Same

Google was careful to emphasize that billing limits haven't changed. Let me be clear about what's not moving:
- Your monthly bill is still capped at 30.4x your daily budget
- Your daily bill is still capped at 2x your daily budget on any single day
- Your ads will not run on days or hours you've disabled in your schedule
- Your bid strategy, targeting, and campaign objectives are unchanged

The change is entirely about how aggressively Google spends within the room you already gave it. No new limits were added. No existing limits were raised. The pacing behavior inside the existing limits is what changed. Think of it like this: you set a speed limit of 100 mph on a highway. Before, the car was driving 60 mph. The speed limit didn't change. The car just started driving faster.

What to Do Before June 1, 2026

You have a few weeks to prepare. Here's the step-by-step:

Step 1: Identify affected campaigns

Open your Google Ads account. Filter for campaigns that use ad scheduling. Any campaign with a schedule that doesn't cover all 7 days is affected.

Step 2: Calculate your new effective daily spend

For each affected campaign:

New effective daily = (Current daily budget × 30.4) ÷ Active days per month

Compare this against what you were spending. If the increase is more than you're comfortable with, you need to adjust.

Step 3: Lower daily budgets to maintain your current monthly spend

If your real goal is "I want to spend $2,200/month" and your campaign runs 22 days:

New daily budget = $2,200 ÷ 30.4 = ~$72

Set your daily budget to $72 instead of $100. Google will pace toward $72 × 30.4 = $2,189/month, which is close to your original $2,200 target even with the new pacing logic.
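Step 3's adjustment is the same formula run in reverse; here's a quick helper (numbers match the examples in this section):

```python
def daily_budget_for_monthly_target(monthly_target: float) -> float:
    """Under the new pacing, Google targets 30.4x the daily budget per
    month regardless of schedule, so the daily budget that preserves a
    fixed monthly spend is simply the target divided by 30.4."""
    return round(monthly_target / 30.4, 2)

print(daily_budget_for_monthly_target(2200))  # 72.37 (Mon-Fri example)
print(daily_budget_for_monthly_target(800))   # 26.32 (weekends-only)
```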
A quick reference table:

| Your Schedule | Old Daily Budget | New Daily Budget (to maintain same monthly spend) |
|---|---|---|
| Mon–Fri (22 days) | $100 | ~$72 |
| Weekends only (8 days) | $100 | ~$26 |
| Mon–Wed–Fri (13 days) | $100 | ~$43 |
| Every day (30.4 days) | $100 | $100 (no change needed) |

Step 4: Consider switching to campaign total budgets

If your real objective is a fixed monthly spend amount, campaign total budgets might be a cleaner option under the new pacing rules. With total budgets, you set the exact amount you want to spend over a defined period, and Google paces to hit that exact number. No daily budget multiplication math. The trade-off: total budgets don't have the 2x daily cap, so Google can spend more aggressively on high-opportunity days. But you get precise control […]
April 30, 2026

Everything You Knew About Creative Testing Is Wrong Now!

Two years ago, the winning playbook looked like this: find one killer image, write 10 headline variations, split them across 5 interest-based ad sets, and let the winner emerge. Rinse and repeat. That playbook is dead. And the people still running it are the ones posting on Reddit asking why their CPMs doubled overnight.

Here's what happened: Meta deployed Andromeda globally between late 2025 and January 2026. It's not a minor tweak. It's a ground-up rebuild of how ads get matched to users. The old system started with your audience selections. Andromeda starts with your creative. It reads the visual, the audio, the copy. It decides who should see it. Your targeting inputs are suggestions at best.

The result? Brands testing 20+ new ads per month are seeing 65% higher ROAS than brands testing under 10. The top-performing advertisers run roughly 395 live ads versus 296 for the bottom third. Creative volume and creative diversity are now the primary scaling levers. But "test more creatives" isn't a strategy. You need to understand what Andromeda actually looks at, what GEM does with that information, and how to build a testing system that feeds the machine the right signals. That's what this article covers.

The Andromeda Pipeline: How Your Ads Actually Get Delivered

Before we talk about testing, you need to understand the delivery pipeline. This breakdown from Search Engine Land is the best plain-language explanation I've seen, and here's my condensed version. When someone opens their feed, three AI systems work in sequence to decide what they see:

Stage 1: Retrieval (Andromeda)

Andromeda scans tens of millions of eligible ads and pulls out roughly 1,000 candidates for this specific user at this specific moment. It does this by analyzing your creative using computer vision and AI audio analysis, then matching it against the user's behavioral patterns and intent signals. This is the make-or-break stage.
If Andromeda doesn't pull your ad into the shortlist, you don't exist in that auction. Your budget, your bid, your targeting, none of it matters. You need to get through the gate first.

Stage 2: Ranking (Meta Lattice)

Those ~1,000 candidates enter the ranking stage. Lattice calculates expected value for each one: eCPM, predicted CTR, conversion probability, competitive bids. It picks the winner. According to Meta's engineering team, Lattice delivered 10% metric gains and 6% conversion improvements.

Stage 3: Learning (GEM)

GEM (Generative Engagement Model) is the feedback engine. It's 4x more efficient at driving performance than what came before. When someone converts (or doesn't) after seeing your ad, GEM uses that outcome to improve future predictions. It also fills signal gaps when privacy restrictions block data by comparing your ad's performance against billions of historical data points.

What this means for you as a buyer: Andromeda decides IF your ad gets a chance. Lattice decides WHO wins. GEM decides how the system LEARNS from the result. Your job is to give Andromeda enough diverse creative signals so your ads pass the retrieval gate across many different user segments. Not just one.

The Entity ID Problem (And Why 30 Ads Can Count as 1)

This is the concept that changed how I think about creative production. And it's the one most buyers still haven't internalized. Andromeda doesn't look at your ad count. It looks at conceptual uniqueness. Meta assigns each creative an internal identifier called an Entity ID based on its visual fingerprint. If you upload 30 ads that share the same template, same background, same visual structure with different text overlays, Andromeda collapses them into one Entity ID. One Entity ID = one ticket to the retrieval auction. If that single ticket fails for a particular user segment, your other 29 "different" ads never get a chance. They don't exist in that auction.
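As a mental model (this is not Meta's actual fingerprinting, which uses computer vision on the creative itself; it just illustrates the collapsing behavior):

```python
from collections import defaultdict

def effective_entity_count(ads: list[dict]) -> int:
    """Toy model: ads sharing a visual fingerprint collapse into one
    Entity ID, so text-overlay variations don't add retrieval tickets."""
    groups = defaultdict(list)
    for ad in ads:
        groups[ad["visual_fingerprint"]].append(ad["name"])
    return len(groups)

# 30 "different" headlines on one template = 1 ticket to the retrieval auction.
ads = [{"name": f"headline_v{i}", "visual_fingerprint": "template_A"}
       for i in range(30)]
print(effective_entity_count(ads))  # 1
```

Thirty uploads, one ticket. Add a genuinely different visual execution and the count goes up, which is the whole argument for concept-level diversity.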
Performance data from admetrics.io suggests Creative Similarity Scores above 60% trigger retrieval suppression. 303 London's diversity guide recommends keeping the index below 40%. This is huge. It means the old approach of "take winning image, test 15 headlines" actively hurts you now. Meta's visual recognition models see an image with slightly different text overlays as essentially the same image. According to Social Media Examiner's breakdown of the algorithm changes, if the system perceives a lack of diversity, it punishes your account with higher CPMs.

The practical framework for ensuring unique Entity IDs: before you build a new creative, ask three questions:

1. Is the message different from what's already running?
2. Is the visual execution different (not just text on the same template)?
3. Is the format different (static vs video vs carousel vs UGC)?

If the answer is "no" to at least two of those, you're probably getting grouped under an existing Entity ID.

GEM, Lattice, and What They Mean for Your Testing

Most articles about Andromeda stop at "creative is targeting now." That's true but incomplete. GEM and Lattice add two layers that directly affect how you should design tests.

GEM learns from context, not just clicks. GEM doesn't just track whether someone clicked or converted. It models the entire user journey. As this Medium breakdown explains, GEM compares your ad's performance against billions of historical data points to estimate directional lift, even when privacy restrictions block the direct signal. For testing, this means early signals matter more than they used to. GEM starts forming opinions about your creative within the first few hundred impressions. A bad hook doesn't just waste those impressions. It teaches GEM that your creative isn't worth showing, and the system deprioritizes it going forward.

Lattice evaluates across attribution windows. The Logical Position playbook explains that Lattice blends attribution windows at the architectural level.
It evaluates success differently for high-ticket leads vs low-friction purchases because the system understands that timing and behavior vary by objective. For testing, this means you need patience with high-consideration products. A creative selling a $2,000 product might look terrible at day 3 but solid at day 14 once the longer attribution window kicks in. Killing it early means you never see the real performance.

The Creative Similarity metric. Social Media Examiner reports that Meta now exposes Creative Similarity as a metric in Ads Manager. High similarity = higher CPMs because Andromeda views repetitive content as fatiguing. It also surfaces “Top Creative Themes” so you can see which angles are resonating (humor, social proof, nostalgia, etc.). Fair warning: because these metrics are new, Tara Zirker advises against over-optimizing for a specific score right now. Use them as directional signals, not hard thresholds.

The Testing Framework That Works Under Andromeda

Here’s the framework I use. It’s not theoretical. It’s what I run on my own campaigns and what I built TheOptimizer’s launching workflow around.

Step 1: Build 8 to 12 conceptually distinct creatives. Not variations. Concepts. Use the PDA framework:

- Persona: Different buyer personas respond to different messages.
- Desire: Different motivations (save money, save time, look better, avoid risk).
- Awareness: Where they are in the journey (problem-aware, solution-aware, product-aware).

Our guide on creating 10 angles for the same offer walks through this in detail.

Step 2: Launch into a testing campaign (ABO).

- One creative per ad set. Clean data, no internal competition.
- Equal daily budgets ($20 to $50 per ad set).
- Broad targeting. Let Andromeda decide who sees what.
- Same optimization event as your scaling campaign.

Step 3: Evaluate after 7 days using multi-metric scoring (see formulas below). Don’t just look at CPA.
Under Andromeda, a creative with a high hook rate and decent engagement might be worth keeping even if the CPA is slightly above target on day 7. GEM is still learning.

Step 4: Graduate winners to your scaling campaign (CBO). Move proven creatives into a CBO campaign with broad targeting and let Meta allocate budget across the winners.

Step 5: Monitor for fatigue. Replace before the cliff. Under Andromeda, fatigue windows have compressed from 6+ weeks to 2 to 3 weeks. Your pipeline needs to be producing replacements before current winners decline. See our article on detecting creative fatigue early for the specific automation rules I use.

6 Custom Formulas for Evaluating Creatives in 2026

CPA alone doesn’t give you the full picture anymore. Here are the formulas I use to score creatives. Some of these I picked up from other buyers in the community, some I developed from looking at my own data patterns.

1. Hook Rate (video)

Hook Rate = (3-Second Video Views / […]
April 30, 2026

$250+ spent. Zero conversions. Sound familiar? I’ve seen thousands of Meta ad accounts over the past few years. The pattern is almost always the same. It’s never one massive screw-up. It’s 2 or 3 things stacking on top of each other, quietly draining budget while you’re focused somewhere else. And the worst part is that most of these issues are invisible inside Ads Manager. Your dashboard shows clicks coming in. Maybe even a few conversions. But when you check your CRM, your leads, Shopify orders, or your finalized P/L reports, the numbers don’t match. Something is off, and you can’t figure out what. Before you blame Meta, blame the algorithm, or start questioning your offer, let me walk you through the real reasons accounts break down. Not the beginner stuff like “pick the right objective.” I’m talking about the issues that experienced buyers run into when accounts that were printing suddenly go sideways.

Your Conversion Data Is Lying to You

I’m going to start here because everything else depends on this. If your tracking is broken, every optimization decision you make is based on garbage data. And garbage in, garbage out. The tricky part is that broken tracking doesn’t look broken. Your Events Manager still shows events firing. Conversions still appear in your dashboard. But those numbers are inflated, duplicated, or completely disconnected from reality. Here’s what’s actually happening in most accounts I’ve seen:

Double-counting from Pixel + CAPI without deduplication. This is by far the most common issue. You set up Conversions API (which you should), but you didn’t implement event_id deduplication. So every purchase fires twice. Meta sees twice the conversions, optimizes for the wrong user profiles, and your reported CPA looks half of what it actually is. Meanwhile, you’re celebrating numbers that don’t exist.

Ghost/test conversions from admin traffic. Your dev team, your marketing team, you personally, all hitting the thank-you page while testing.
Each visit fires a conversion event. I’ve seen accounts where 15 to 20% of reported conversions were internal traffic.

Events firing at the wrong funnel stage. A Purchase event firing on the product page instead of the order confirmation. An Add to Cart event triggering on page load instead of on button click. These seem minor. They’re not. Meta’s algorithm optimizes delivery based on who triggers these events. Feed it the wrong signals and it finds the wrong people.

Low Event Match Quality eating your delivery. Check your EMQ score in Events Manager. Anything below 6 out of 10 means Meta is struggling to match your events to actual users. This directly affects how often your ads make it through Andromeda’s retrieval stage. Poor signal quality doesn’t just hurt your reporting. It actively reduces your ad delivery. Browser-only Pixel tracking now misses 20 to 40% of conversions thanks to iOS restrictions, ad blockers, and cookie consent banners. If you haven’t set up CAPI with proper deduplication, you’re flying blind.

How to Fix Conversion Reporting

Open Events Manager right now. Install the Meta Pixel Helper Chrome extension. Browse your site and watch what fires. Check for duplicates, wrong triggers, and missing events. Then verify your CAPI setup has event_id deduplication enabled. This isn’t optional in 2026. It’s the foundation everything else sits on.

Andromeda Killed Your Targeting Strategy

If you’re still running 8 ad sets with different interest stacks, each with a $15/day budget, I need you to hear this: that strategy died in 2025. Meta’s Andromeda update fundamentally changed how ads get delivered. The old system started with YOUR audience selections and then found people within them. Andromeda works in reverse. It starts with YOUR CREATIVE, reads it using computer vision and AI audio analysis, and then decides which users across Meta’s entire 3 billion user base are the best match. Your interest targeting? It’s mostly a suggestion now.
Advantage+ Detailed Targeting can’t even be turned off for most performance goals. Meta uses your inputs as “hints” but goes wherever the algorithm thinks it’ll find conversions. This means two things for experienced buyers:

First, audience fragmentation is now a liability. Splitting your budget across 5 to 8 narrowly targeted ad sets doesn’t give the algorithm enough data per ad set to learn. You end up with everything stuck in “Learning Limited.” The Confect Andromeda Study (covering 3,014 e-commerce advertisers and 115.7 billion impressions over the full 2025 calendar year) found that consolidated structures with broader targeting consistently outperform fragmented setups.

Second, your creative IS your targeting now. An ad about “exhausted moms” will find exhausted moms regardless of your audience settings. An ad about “best SUV deals” will find SUV shoppers. The specificity lives in the creative, not in the audience panel. If your creative is generic (“Buy Now! Great Deals!”), Andromeda can’t figure out who to show it to, so it shows it to low-quality traffic and your CPA goes through the roof. I went deep on this in our article about how Andromeda affects your ad strategy. If you haven’t read it yet, do that after this one.

Your Campaign Structure Is Starving the Algorithm

Here’s a quick math problem that will tell you if this is your issue. Take your daily budget. Divide by your average CPA. Multiply by 7. If the answer is below 50, you’re starving the algorithm. Meta needs roughly 50 optimization events per week per ad set to exit learning phase. Below that, it never stabilizes, and you’re stuck in a permanent loop of erratic performance. Example: $150/day budget, $40 CPA. That’s 3.75 conversions/day, or about 26 per week. Not enough. You need to either consolidate (fewer ad sets, bigger budget each) or optimize for a higher-funnel event that generates more volume.
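That back-of-the-napkin check is easy to script. A minimal sketch (the function names are mine; the ~50 events/week threshold is the one cited above):

```python
def weekly_events(daily_budget: float, avg_cpa: float) -> float:
    """Estimated optimization events per week for one ad set."""
    return daily_budget / avg_cpa * 7

def is_starving(daily_budget: float, avg_cpa: float, threshold: float = 50) -> bool:
    # Meta needs roughly 50 optimization events/week per ad set to exit learning.
    return weekly_events(daily_budget, avg_cpa) < threshold

# The example from the text: $150/day at a $40 CPA
print(weekly_events(150, 40))  # 26.25 events/week
print(is_starving(150, 40))    # True: consolidate or move up the funnel
print(is_starving(150, 15))    # False: 70 events/week is plenty
```

If the check returns True, the fix is the one described above: fewer ad sets with bigger budgets, or a higher-funnel optimization event that produces more volume.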
The 2026 consensus from practitioners like Jodie Minto is pretty clear: the best-performing accounts now run 1 to 3 campaigns. One for testing new creatives (ABO with equal budgets per ad set). One for scaling proven winners (CBO with broad targeting). Maybe one for retargeting. That’s it. If you’re not sure which structure to use, our campaign structure best practices guide breaks down both options with specific examples.

You’re Editing Campaigns Like It’s 2022

I get it. Your CPA spiked on day 2 and you panicked. You lowered the budget. Changed the targeting. Swapped a creative. Maybe all three. Congratulations, you just reset the learning phase. Again. Every significant edit triggers a reset: budget changes over 20% (in general), bid strategy swaps, new creatives added to an existing ad set, targeting modifications. Each one restarts the clock, and the algorithm has to relearn everything from scratch. The Cometly analysis documents this well: if you’ve been tweaking settings every other day, you’re essentially restarting the learning process each time. The algorithm never gets enough stable data to optimize. And here’s the part experienced buyers miss: the edit doesn’t need to be big to cause damage. Meta’s own documentation considers anything above a 20% budget change as “significant.” Going from $100 to $125? That’s 25%. You just triggered a reset.

What actually works: Let campaigns run for 5 to 7 days minimum before touching them. If you need to adjust budgets, keep changes under 20%. Time them at the beginning of the day in your ad account’s time zone so Meta starts fresh with the new number. I wrote a whole piece on why killing campaigns too early hurts performance because I kept seeing buyers murder campaigns that would have been profitable by day 5 if they’d just waited.
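The 20% threshold is trivial to encode if you want to sanity-check an edit before making it (a sketch; the helper names are mine):

```python
def is_significant(current_budget: float, new_budget: float, limit: float = 0.20) -> bool:
    """True if a budget change exceeds Meta's ~20% 'significant edit' threshold."""
    return abs(new_budget - current_budget) / current_budget > limit

def max_safe_budget(current_budget: float, limit: float = 0.20) -> float:
    # Largest increase that stays at the threshold rather than above it
    return current_budget * (1 + limit)

print(is_significant(100, 125))  # True: a 25% jump risks a learning-phase reset
print(is_significant(100, 118))  # False: 18% stays under the line
print(max_safe_budget(100))      # 120.0
```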
Stop resetting learning phases manually

TheOptimizer handles budget changes at the beginning of the day in your ad account’s time zone, keeps increases within safe thresholds, and only adjusts when the data justifies it. Your rules run every 10 minutes. Your campaigns stay stable. Get Started for Free

Your Creative Library Has Zero Diversity

You uploaded 20 ads but Meta treats them as 3. This is the Entity ID problem, and it’s the thing most buyers still don’t understand about Andromeda. Meta assigns each creative an internal identifier (Entity ID) based on its visual pattern. If your 20 ads all use the same template, same background, same creator, Andromeda groups them under one Entity ID. In its eyes, you have one ad, not twenty. That means one ticket to the retrieval auction. If that ticket fails, the other 19 never get seen. Your budget gets wasted on volume the algorithm treats as duplication. Data from admetrics.io shows Creative Similarity […]
April 29, 2026

Most media buyers who try automation make the same mistake. They go looking for a list of rules, copy someone else’s thresholds, plug them in, and hope for the best. Then when the results don’t match what the original person achieved, they blame the tool. The problem isn’t the rules. It’s that they skipped the thinking behind the rules. An automation playbook isn’t a collection of rules. It’s a documented system that defines how your campaigns move through their lifecycle, what decisions get made at each stage, and what data triggers those decisions. The rules are just the execution layer. The playbook is the strategy. Think of it this way. If you hired a junior media buyer and handed them a list of 8 rules without context, they’d apply them mechanically and probably destroy a few campaigns. But if you gave them a playbook that explains why each rule exists, when it should apply, and how to adjust thresholds based on what they’re seeing, they’d make better decisions even without the specific rules. That’s what we’re building in this article. A framework you can use to create your own automation playbook from scratch, tailored to your specific campaigns, offers, and KPIs.

The Mindset Shift: From Campaign Manager to System Manager

In 2026, running Meta Ads is fundamentally different from what it was in 2022 or 2023. With Andromeda reshaping how ads get matched to users, the role of the media buyer has changed. You’re not manually selecting audiences and testing one variable at a time anymore. You’re managing a system. The best way I’ve heard this described: you’re no longer playing the instruments. You’re conducting the orchestra.
What that means practically is that your time should go toward:

- Building and maintaining your creative pipeline (the input that matters most)
- Defining the rules and thresholds that govern campaign behavior
- Analyzing patterns and adjusting the system based on what you learn
- Improving your offers and funnels

It should NOT go toward:

- Checking Ads Manager every 2 hours
- Manually pausing underperforming ad sets one by one
- Calculating budget increase percentages in a spreadsheet
- Remembering which campaigns you already scaled this week

The automation handles the second list. The playbook ensures the automation is doing the right things.

Step 1: Define Your Campaign Lifecycle Stages

Every campaign goes through predictable stages. Your playbook needs to define what happens at each one.

Stage 1: Launch (Days 0 to 3). The campaign is new. Meta’s algorithm is exploring. Performance data is noisy and unreliable. The goal at this stage is to collect data while limiting downside risk. Automation focus: Stop-loss protection only. Pause anything that spends a significant amount with zero conversions. Don’t make scaling or optimization decisions yet.

Stage 2: Learning (Days 3 to 7). You have enough data to start seeing patterns but not enough for high-confidence decisions. The goal is to identify which campaigns show promise and which are clearly not going to work. Automation focus: Kill campaigns that show no improvement trend over 3 days. Start monitoring CPA/ROAS trends. Alert on campaigns that cross performance thresholds.

Stage 3: Validation (Days 7 to 14). Campaigns that survived Stage 2 are showing stable performance. The data is now reliable enough for optimization decisions. The goal is to confirm profitability before scaling. Automation focus: Begin budget scaling on validated winners. Start creative fatigue monitoring. Adjust bids or budgets on campaigns that are trending in the wrong direction.
Stage 4: Scaling (Day 14+). Validated winners get scaled vertically (budget increases) and horizontally (cloning). The goal is to maximize volume while maintaining profitability. Automation focus: Gradual budget increases on proven campaigns. Automated cloning of winners across ad accounts. Continuous creative refresh through fatigue detection and rotation.

Stage 5: Maintenance. Scaled campaigns need ongoing protection against degradation. Creatives fatigue, audiences saturate, and competition changes. Automation focus: Detect and pause declining campaigns. Alert when performance dips below thresholds. Reduce budgets on campaigns showing stress before killing them entirely.

Important: The biggest mistake I see is applying Stage 4 rules (scaling) during Stage 1 (launch). If your automation tries to scale a campaign that’s only been running for 48 hours, you’re making decisions on insufficient data. The playbook prevents this by defining which rules apply at which stage. For more on this, read our article on why killing campaigns too early hurts performance.

Step 2: Map Your Manual Decisions to Automation Logic

Before building any rules, write down every manual decision you currently make about your campaigns. Every single one.
Here’s a starter list:

- “This campaign has spent $X with no conversions, I’m pausing it”
- “This campaign has been profitable for 5 days, I’m increasing the budget by 20%”
- “This ad’s CTR dropped significantly, it’s probably fatiguing”
- “This campaign was working but CPA has been creeping up for 3 days”
- “This campaign is a clear winner, I want to clone it to another ad account”
- “I check my campaigns at 9 AM and make adjustments before lunch”

Now translate each one into IF/THEN logic:

- IF Spend > $X AND Conversions = 0 THEN Pause
- IF ROI last 3 days > X% AND Conversions last 7 days > Y THEN Increase Budget 20%
- IF CTR last 3 days dropped 30%+ vs 14-day average AND Frequency > 3 THEN Pause Ad
- IF CPA last 3 days > Target CPA by 25% AND CPA was below target days 7 to 4 THEN Decrease Budget 20%
- IF ROI last 5 days > 15% across two time windows THEN Clone campaign

The key insight is that most of your daily decisions follow predictable patterns. Once you can express them as IF/THEN conditions, they can be automated. For specific rule examples with exact thresholds and screenshots, check our guide on 8 automation rules top media buyers use to scale Meta Ads safely.

Step 3: Build Your Rule Categories

Organize your rules into categories that correspond to the campaign lifecycle:

Category 1: Protection Rules (Always Active). These run from the moment a campaign launches and never stop. Their job is to prevent budget waste.

- Pause ad sets with zero conversions after X spend
- Pause campaigns with consistently negative ROI after 3+ days
- Alert on sudden performance drops

Category 2: Optimization Rules (Active After Learning Phase). These start working once you have enough data (typically after 5 to 7 days).

- Decrease budgets on campaigns with rising CPA
- Pause degrading campaigns based on multi-day trends
- Adjust based on combined tracker + Meta data

Category 3: Scaling Rules (Active on Validated Winners). These only apply to campaigns that have demonstrated stable profitability.
- Increase budgets gradually on winners
- Clone winning campaigns within and across ad accounts
- Apply at controlled frequencies (2 to 3 times per week)

Category 4: Creative Management Rules (Always Active). These monitor the health of your creatives.

- Detect creative fatigue through CTR decline and frequency increase
- Pause saturated low-performing ads
- Send refresh alerts to your creative team

Category 5: Alert Rules (Always Active). These don’t take action automatically. They just notify you.

- Campaign performance drops below threshold
- Daily spend exceeds expectations
- New campaign hits profitability target (potential scaling candidate)

Set up your automation system

TheOptimizer lets you build all five rule categories and run them across unlimited Meta ad accounts. Rules execute as frequently as every 10 minutes, 24/7. Get Started for Free

Step 4: Set Thresholds Based on Your Data, Not Someone Else’s

This is where most people go wrong. They copy thresholds from a blog post (including mine) and apply them without adjustment. Your thresholds need to come from YOUR data. Here’s how to determine them:

For stop-loss thresholds: Look at your historical winning campaigns. How much did they typically spend before generating their first conversion? Set your stop-loss threshold at 1.5x to 2x that amount. If your winners typically convert within $50 of spend, setting a stop-loss at $75 to $100 makes sense.

For scaling thresholds: What ROI or ROAS have your campaigns historically maintained after scaling? If campaigns typically hold 20% ROI after scaling, set your scaling trigger at 25% (giving a safety margin). If they hold 15%, set it at 20%.

For fatigue detection: What does CTR decline look like on your ads? Pull data from your last 20 to 30 ads and look at their CTR trajectory over time. When does the decline typically start? At what point does CPA start being affected? Those are your fatigue thresholds.

For budget increase […]
April 24, 2026
Let me be direct here. If you’re making optimization decisions based solely on what Meta Ads Manager tells you, you’re working with incomplete data. And incomplete data leads to bad decisions. This isn’t about Meta being dishonest. It’s about how attribution works (and doesn’t work) in 2026. Meta uses a modeled attribution system that estimates conversions based on signals it can collect. After iOS privacy changes, a significant portion of conversion data is modeled rather than directly measured. This means the CPA and ROAS you see in Ads Manager is an approximation, not a confirmed number. For DTC e-commerce brands running direct purchases through Shopify, the gap might be manageable. You can cross-reference with Shopify data and get a reasonable (not perfect) picture. But for affiliate marketers, lead generation buyers, and arbitrage players? The gap can be enormous. The real revenue data lives in your tracker, your CRM, or your upstream provider dashboard. Not in Meta. I’ve seen campaigns where Meta reported a 2x ROAS while the tracker showed -20% ROI. And I’ve seen the opposite, where Meta showed a losing campaign that was actually profitable according to the tracker. In both cases, optimizing based on Meta’s numbers alone would have been the wrong move.

Check out: “Training: From Launching to Scaling Profitable Search Arbitrage Campaigns on Meta Ads”

The Gap Between Reported and Real Revenue

Let me give you some concrete examples of why this gap exists.

Delayed attribution. Meta can take up to 72 hours to attribute a conversion. During that time, your dashboard shows incomplete data.
If you make optimization decisions during this window (which most people do), you’re acting on partial information.

Modeled conversions. A percentage of the conversions Meta reports are estimated, not directly tracked. The percentage varies by account and campaign, but it can be significant. You have no way to distinguish modeled from real conversions in Ads Manager.

Cross-device gaps. A user sees your ad on mobile but converts on desktop. Meta may or may not attribute this correctly depending on whether the user is logged in, cookie consent, and other factors.

Revenue accuracy for non-standard flows. For search arbitrage campaigns, the revenue per click varies based on the search keywords the user engages with. Meta has no visibility into this. For lead gen, the quality of the lead (and whether it converts downstream) isn’t reflected in Meta’s data. This is especially relevant for search arbitrage campaigns where the conversion payout can vary from $0.01 to $1.50+ per click, and revenue confirmation takes 24 to 48 hours. Meta has zero visibility into this data.

Bottom line: Meta tells you what it thinks happened. Your tracker tells you what actually happened. If you’re optimizing for profitability, you need to optimize on what actually happened.

How to Set Up Server-to-Server Tracking for Meta Ads

The solution is to use a third-party click tracker that sits between your Meta ad and your offer/landing page. This tracker captures every click, maps it to a conversion (when it happens), and records the actual revenue. Here’s the basic flow:

Meta Ad → Tracker Click URL → Landing Page / Offer → Conversion fires back to Tracker → Tracker sends data to TheOptimizer

The tracker becomes your source of truth.
It captures:

- Actual cost per click (from Meta’s reporting)
- Actual revenue per conversion (from your offer, search feed, or CRM)
- Real ROI based on confirmed data, not estimates

Setting up the connection:

1. Create your campaign in your tracker (Voluum, RedTrack, Binom, FunnelFlux, ClickFlare, etc.)
2. Use the tracker’s click URL as your ad destination in Meta
3. Set up conversion postbacks from your offer/CRM to the tracker
4. Connect both Meta and the tracker to TheOptimizer
5. TheOptimizer pulls cost data from Meta and revenue data from the tracker, giving you accurate combined statistics

I walked through this exact setup in our search arbitrage autopilot case study, including the specific Voluum and Outbrain configurations. The same principles apply to Meta Ads.

Pro Tip: When setting up conversion postbacks, use event-based postbacks instead of standard postbacks if your tracker supports it. This way, when you get confirmed revenue later, you can upload it as the main conversion without inflating the conversion count.

Connect your tracker to TheOptimizer

Optimize Meta Ads based on real revenue data from ClickFlare, RedTrack, Binom, FunnelFlux, Voluum, etc. Get Started for Free

Building Automation Rules Based on Tracker Data

This is where the real power is. Once TheOptimizer has both Meta’s cost data and your tracker’s revenue data, you can build automation rules that use the combined, accurate statistics. Here are three examples:

Rule 1: Pause Campaigns Based on Real ROI

IF Tracker ROI (last 7 days, excluding today and yesterday) < -30% AND Meta Spend > $X
THEN Pause Campaign

Notice the “excluding today and yesterday” condition. This is critical for campaigns where revenue confirmation is delayed (like search arbitrage). You don’t want to pause a campaign based on incomplete revenue data from the last 48 hours.
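Here is roughly what Rule 1 looks like as code. This is a sketch of the logic only, not TheOptimizer's implementation; the data shape (a dict of date → (cost, revenue)) and the function names are mine:

```python
from datetime import date, timedelta

def tracker_roi(daily: dict, start: date, end: date) -> float:
    """ROI % over [start, end] from daily {date: (cost, revenue)} rows."""
    cost = sum(c for d, (c, _) in daily.items() if start <= d <= end)
    revenue = sum(r for d, (_, r) in daily.items() if start <= d <= end)
    return (revenue - cost) / cost * 100

def should_pause(daily: dict, spend: float, min_spend: float,
                 today: date, lookback: int = 7, exclude: int = 2) -> bool:
    # "Last 7 days excluding today and yesterday": the newest revenue
    # may not be confirmed yet, so the rule skips those days entirely.
    end = today - timedelta(days=exclude)
    start = end - timedelta(days=lookback - 1)
    return spend > min_spend and tracker_roi(daily, start, end) < -30

today = date(2026, 4, 23)
# Ten days of confirmed stats: $100 cost, $60 revenue per day (-40% ROI)
daily = {today - timedelta(days=i): (100.0, 60.0) for i in range(10)}
print(should_pause(daily, spend=900, min_spend=200, today=today))  # True
```

The `exclude` window is the part that matters: shrink it to zero and the rule would judge the campaign on days whose revenue hasn't been confirmed yet, which is exactly the mistake the text warns against.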
Rule 2: Scale Based on Confirmed ROAS

IF Tracker ROAS (last 7 days, excluding today) > 1.5 AND Tracker Conversions > 10
THEN Increase daily budget by 20%
Execute 2 times per week

This rule only scales based on confirmed revenue, not Meta’s modeled attribution. Much safer.

Rule 3: Adjust Bids Based on EPC

IF Tracker EPC (last 14 days, excluding today and yesterday) > $X AND Tracker ROI > 0%
THEN No action needed (campaign is healthy)

IF Tracker EPC < $X AND ROI between -30% and 0%
THEN Set bid to 70% of EPC

This bid adjustment rule uses the actual earnings per click from your tracker to calibrate your Meta bids. You’re essentially telling Meta: “I can afford to pay up to 70% of what each click actually earns me.”

Handling the Revenue Confirmation Delay

One of the biggest challenges with tracker-based optimization is the revenue delay. Most search feed providers, CRMs, and affiliate networks don’t confirm revenue in real-time. It can take 24, 36, or even 48 hours for revenue to be finalized. This creates a problem. If your automation rules look at today’s data, the revenue column will be incomplete, making it look like you’re losing money when you might actually be profitable. The solution is threefold:

1. Exclude recent days from ROI-based rules. When building rules that use ROI, ROAS, or EPC, exclude Today and Yesterday from the calculation. This ensures the rules only act on confirmed, complete data. In TheOptimizer, this is a built-in feature. You can specify “Considering data from: Last 14 Days / Excluding: Today & Yesterday” directly in the rule conditions.

2. Use conversion rate for real-time rules. Even though revenue is delayed, conversions (clicks on the search feed, lead form submissions, etc.) are typically reported within minutes.
So for real-time protection, you can use conversion rate as a proxy:

IF Meta Spend > $X AND Tracker Conversion Rate < Y%
THEN Pause the campaign

This catches campaigns that aren’t converting at all, without needing confirmed revenue data. I covered this approach in detail in our data-driven campaign optimization guide, where I used the same dual-rule strategy for native ad campaigns.

3. Schedule automatic data pulls. TheOptimizer has an Automatic Updates feature where you can schedule when the system pulls your tracker data. If you know your search feed provider confirms revenue by 6 PM daily, you can schedule TheOptimizer to pull data at 7 PM, then have your ROI-based rules execute at 8 PM. Everything stays in sync.

Supported Trackers and How They Connect

TheOptimizer integrates with the most popular trackers and search feed providers in the affiliate and performance marketing space:

Trackers: ClickFlare (highly recommended), Voluum, RedTrack, Binom, FunnelFlux
Analytics: Google Analytics 4
Search Feed Providers: System1, Tonic, Sedo, Media.net, and many more via the integration with ClickFlare

You can also upload stats via CSV if your data source doesn’t have a direct API integration. The connection process for most trackers takes under 5 minutes. You enter your API credentials in TheOptimizer, select which campaigns to sync, and the data starts flowing.

Optimize on real data, not estimates

TheOptimizer combines Meta’s cost […]
April 23, 2026

There are really only two ways to scale a profitable Meta campaign. You either push more money through it (vertical scaling), or you create copies of it and let each copy find its own optimization path (horizontal scaling). Both work. Both have risks. And most media buyers rely too heavily on one while ignoring the other. The media buyers who scale to six and seven figures per month typically use both strategies together, applying each at the right time based on the data. In this guide, I’ll break down exactly when to use each approach, the specific numbers and thresholds that work, and how to automate the entire process so it runs without you watching Ads Manager all day.

Vertical Scaling: Increasing Budgets on Winners

Vertical scaling is the obvious move. You have a campaign that’s profitable at $100/day, so you want to run it at $500/day. Simple in theory. Dangerous in practice. The problem is that Meta’s algorithm is sensitive to budget changes. When you increase the budget, the algorithm needs to recalibrate how it spends that money. If the increase is too aggressive, it can reset the learning phase and your carefully optimized delivery goes out the window. Your CPA spikes, ROAS drops, and you’re left wondering what happened. But vertical scaling absolutely works if you do it right. The key is gradual, data-backed increases at the right time.

The safe approach:

- Increase the daily budget by 15% to 30% at a time
- Never more than 2 times per week
- Only when the campaign has demonstrated stable performance over at least 3 days
- Always check that you have enough conversion volume to justify the increase

I go deeper into the specifics of safe budget increases in our guide to scaling Meta Ads without killing performance. But the core idea is simple: respect the algorithm’s learning process and scale incrementally.

The Budget Increase Rules That Won’t Reset Learning Phase

Here’s the exact rule logic I use for automated vertical scaling.
Rule: Increase Budget on Stable Winners

Automation Rule Example:

IF Campaign ROI over the last 3 days > X% (your profitability threshold)
AND Conversions over the last 7 days ≥ Y (minimum statistical significance)
AND Campaign has been running for 5+ days
THEN Increase daily budget by 20 to 30%
Execute maximum 2 times per week

There are a few details that make a significant difference in how this plays out.

Timing of budget changes. This matters more than most people realize. When TheOptimizer changes the budget, it does it at the beginning of the day according to the ad account’s time zone. Not at a random hour. This way Meta starts the new day with a clear budget for the rest of the day, instead of trying to spend a suddenly larger budget in the remaining hours. That difference in timing alone can prevent the algorithm from making erratic delivery decisions.

Frequency cap. The rule runs only 2 times per week maximum. This prevents what I call the “greed scale,” where you keep bumping budgets every day because the numbers look good. The algorithm needs at least 2 to 3 days between changes to stabilize. Pushing faster than that is how you ruin winners.

Data requirements. Having a 200% ROI on 2 conversions doesn’t mean you should scale. You need enough conversion volume to trust the data. As I covered in why killing campaigns too early hurts performance, the difference between bad performance and insufficient data is critical. The same principle applies to scaling. Don’t scale on insufficient data.

Automate your budget scaling!

TheOptimizer handles budget increases at the right time, in the right increment, at the right frequency. No manual calculations, no missed opportunities. Get Started for Free

Horizontal Scaling: Cloning Campaigns Across Accounts

Horizontal scaling means duplicating your winning campaigns and running the copies alongside the original.
You can clone within the same ad account, across different ad accounts, or even across different Business Managers. This is the scaling strategy that most beginners overlook and most experts swear by.

Why does it work? Because each cloned campaign gets its own optimization path. Meta’s algorithm treats each campaign independently, so a clone might find different audience segments or delivery patterns that the original didn’t. You’re essentially giving the algorithm multiple chances to optimize the same winning creative.

The rule I use for automated horizontal cloning:

Automation Rule Example:

IF Ad Set ROI over the last 6 to 3 days > 15%
AND Ad Set ROI over the last 2 to 1 days > 15%
THEN Clone the Ad Set 2 times
Execute 3 times per week at 1 AM (ad account time zone)

The rule evaluates performance over two time intervals. The window from 6 to 3 days ago gives a broader view, while the last 2 to 1 days confirm the trend is still holding. Only when both windows show profitable performance does the cloning trigger.

Cross-account cloning: TheOptimizer can also clone winning campaigns to different ad accounts automatically. This is particularly useful for advertisers managing multiple Business Managers or running high-volume operations where spreading risk across accounts makes sense.

Why horizontal scaling is often safer than vertical: unlike increasing budgets (which asks Meta to spend more money through a single campaign), cloning creates independent campaigns that each start with their own fresh learning. There’s no risk of resetting the learning phase on your original campaign, and each clone gets a clean start.

One extra thing worth mentioning: two or more identical campaigns rarely end up competing with each other. You would need 50+ identical campaigns to risk meaningful auction overlap, so don’t worry about self-competition at reasonable clone volumes.
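The two-window check above can be sketched as a small function. The list shape and the use of simple window averages are my assumptions for illustration; TheOptimizer evaluates the windows internally.

```python
def should_clone(daily_roi, threshold=0.15):
    """Two-window profitability check (a sketch, not TheOptimizer's code).

    daily_roi: one ROI value per day, most recent day last.
    The broad window covers days 6 to 3 ago, the recent window days 2 to 1
    ago; both averages must clear the threshold before cloning triggers.
    """
    if len(daily_roi) < 6:
        return False                          # not enough history yet
    broad_avg = sum(daily_roi[-6:-2]) / 4     # days 6 to 3 ago
    recent_avg = sum(daily_roi[-2:]) / 2      # days 2 to 1 ago
    return broad_avg > threshold and recent_avg > threshold
```

Requiring both windows is what filters out ad sets that were profitable last week but have already started to fade.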
Clone at the ad set level when you want to keep the winning creative in the same campaign structure but give it more delivery opportunities. This is good for testing whether the same creative performs better with a fresh ad set that gets its own learning phase.

Clone at the campaign level when you want to test the same setup with a completely fresh budget allocation. This gives the algorithm maximum freedom to optimize without interference from other ad sets in the original campaign.

Clone across ad accounts when you’re spending serious money and want to distribute risk. Different ad accounts can have different optimization histories, and a winning campaign might perform differently (sometimes better) in a fresh account.

My recommendation: start with ad set cloning within the same campaign. If that works, graduate to campaign-level cloning. Once you’re spending $50K+/month, add cross-account cloning to your toolkit.

When to Use Vertical vs. Horizontal Scaling

Here’s a practical framework:

- Campaign at $50/day, want to reach $200/day → Vertical. Budget is still low enough that gradual increases work smoothly.
- Campaign at $500/day, want to reach $2,000/day → Horizontal + Vertical. Clone 3 to 4 times, then gradually scale each clone.
- Campaign profitable but CPA starting to creep up → Horizontal. Don’t push more budget into a campaign showing signs of fatigue; clone it instead.
- Multiple winning creatives, single ad account → Vertical. Scale the campaign budget and let the algorithm distribute spend.
- High spend ($10K+/day) across a single offer → Horizontal (cross-account). Distribute spend across multiple ad accounts to reduce single-point-of-failure risk.

The right approach also depends on your campaign structure. CBO campaigns are generally easier to scale vertically because the algorithm handles budget distribution. ABO campaigns benefit more from horizontal scaling because each ad set has its own fixed budget.
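The framework can be condensed into a rough decision helper. The numeric cutoffs here are my own reading of the scenarios above, not hard rules:

```python
def scaling_approach(daily_spend, target_spend, cpa_trending_up=False):
    """Pick a scaling strategy from the framework above (illustrative cutoffs)."""
    if cpa_trending_up:
        # Signs of fatigue: clone instead of pushing more budget
        return "horizontal"
    if daily_spend >= 10_000:
        # High spend on a single offer: spread risk across accounts
        return "horizontal (cross-account)"
    if daily_spend >= 500 and target_spend >= 3 * daily_spend:
        # Big jump from an already-large base: clone first, then scale clones
        return "horizontal + vertical"
    return "vertical"
```

For example, `scaling_approach(50, 200)` comes back `"vertical"`, while the same 4x jump from a $500/day base returns `"horizontal + vertical"`, matching the framework.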
Automating Both Scaling Strategies

The real power comes when both strategies run simultaneously on autopilot. Here’s how I set it up.

Vertical scaling automation (Rule A):
- Checks winning campaigns twice a week
- Increases budget by 20 to 30% if performance is stable
- Never allows budget to go above a maximum ceiling you define
- Changes happen at the start of the day in the ad account’s time zone

Horizontal scaling automation (Rule B):
- Detects winning ad sets based on performance across two time windows
- Clones them 2 times, 3 times per week
- Optionally clones to different ad accounts
- Resets daily budget on clones to avoid starting with inflated spend

Budget protection automation (Rule C):
- Decreases budget by 20% if CPA has increased 30%+ over the last 3 days
- Pauses campaigns entirely if ROI drops below -30% after 3 […]
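Collected together, the three rules could be represented as plain data. Every key name and the $2,000 ceiling below are illustrative assumptions; this is a sketch of the setup, not TheOptimizer’s actual configuration schema.

```python
# Rules A, B, and C from the text as plain data. All names and the ceiling
# value are illustrative, not TheOptimizer's real config format.
SCALING_RULES = {
    "A_vertical_scale": {
        "when": {"performance": "stable"},
        "then": {"budget_increase_pct": (20, 30), "budget_ceiling": 2000,
                 "runs_per_week_max": 2,
                 "run_at": "start of day, ad account time zone"},
    },
    "B_horizontal_clone": {
        "when": {"roi_days_6_to_3": "> 15%", "roi_days_2_to_1": "> 15%"},
        "then": {"clones": 2, "runs_per_week_max": 3,
                 "cross_account": "optional", "reset_clone_budget": True},
    },
    "C_budget_protect": {
        "when": {"cpa_increase_3_days": ">= 30%"},
        "then": {"budget_decrease_pct": 20},
    },
}
```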
April 22, 2026

What Creative Fatigue Actually Looks Like in the Data

Most media buyers know what creative fatigue feels like. Your campaign was printing money last week, and now it’s barely breaking even. The natural reaction is to panic, check targeting, review bids, and maybe blame the algorithm. But 9 times out of 10, the answer is staring you right in the face: your audience has seen your ads too many times, and they’ve stopped caring.

The problem is that most people don’t have a system for detecting fatigue early. They notice it after the damage is already done, when CPAs have already spiked and ROAS has tanked. By the time you react manually, you’ve already wasted days of budget on a creative that stopped working. So let’s talk about what fatigue actually looks like in the data, because it’s not always obvious.

Creative fatigue doesn’t happen overnight. It follows a predictable pattern:

- Days 1 to 5: Strong CTR, good CPA, healthy ROAS. The creative is fresh and the algorithm is actively finding the best audiences for it.
- Days 5 to 10: CTR starts to decline gradually. CPA may hold steady because the algorithm compensates by bidding higher or shifting delivery. You might not even notice yet.
- Days 10 to 20: CTR drops more noticeably. Frequency climbs. CPA starts creeping up. ROAS begins to slide.
- Day 20+: Performance drops significantly. The ad is now competing against itself because Meta keeps showing it to people who’ve already seen it multiple times. CPA is well above target.

The key insight here is that fatigue starts showing in CTR days before it shows in CPA. If you only monitor CPA, you’re always reacting too late.

The Metrics That Matter

Not all metrics are equally useful for detecting fatigue. Here’s what to actually watch.

CTR (Click-Through Rate): This is your early warning signal. When the same audience sees your ad repeatedly, they stop clicking. A declining CTR on an ad that was previously performing well is the first sign of fatigue.
Don’t confuse a naturally low CTR (which might mean the creative wasn’t good to begin with) with a declining CTR (which means it was good and is losing steam).

Frequency: This tells you how many times the average person has seen your ad. For prospecting campaigns, anything above 2.5 to 3 should raise a flag. For retargeting, you can tolerate higher frequency (4 to 6) before fatigue kicks in. But even retargeting has a ceiling.

CPM (Cost Per 1,000 Impressions): When your ad loses relevance, Meta charges you more to show it. Rising CPM alongside declining CTR is a strong fatigue signal. You’re paying more to reach people who are less likely to engage.

CPA / ROAS Trend: These are lagging indicators. By the time CPA spikes and ROAS drops, the fatigue has been building for days. Use these to confirm what CTR and frequency already told you, not as your primary detection method.

The formula: declining CTR + rising frequency + increasing CPM = creative fatigue. Don’t wait for CPA to confirm it.

How to Detect Creative Fatigue Before Performance Collapses

The manual approach is to check each ad’s CTR trend daily, compare it to its historical average, cross-reference with frequency, and make a judgment call. This works if you’re managing 5 to 10 ads. It falls apart when you’re managing 50 to 200. Here’s the data-driven approach I use:

Step 1: Establish baselines. For each ad, record its CTR during the first 3 to 5 days (the “fresh” period). This becomes the baseline. Every ad has a different natural CTR, so you need individual baselines, not account-level averages.

Step 2: Monitor the delta. Compare each ad’s current 3-day CTR against its baseline. When the current CTR drops 20 to 30% below the baseline, the ad is entering the fatigue zone.

Step 3: Cross-reference with frequency. An ad with declining CTR and frequency above 3 is almost certainly fatiguing.
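The baseline-and-delta logic in steps 1 to 3 can be sketched as a small classifier. The 20% and 30% cutoffs mirror the text; the function itself and its status labels are illustrative.

```python
def fatigue_status(baseline_ctr, current_ctr, frequency):
    """Classify an ad against its own fresh-period baseline (a sketch).

    baseline_ctr: CTR from the first 3-5 days; current_ctr: last 3-day CTR.
    A 20-30% drop below baseline marks the fatigue zone; a 30%+ drop
    combined with frequency above 3 confirms fatigue.
    """
    if baseline_ctr <= 0:
        return "no-baseline"
    drop = (baseline_ctr - current_ctr) / baseline_ctr
    if drop >= 0.30 and frequency > 3:
        return "fatigued"        # pause or rotate before the cliff
    if drop >= 0.20:
        return "fatigue-zone"    # watch closely, prep a replacement
    return "healthy"
```

Note that a steep CTR drop with low frequency deliberately does not return `"fatigued"` here, since (as described below) that combination usually points to a different issue.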
An ad with declining CTR but frequency below 2 might have a different issue (seasonality, audience saturation from other campaigns, etc.).

Step 4: Act before the cliff. The “cliff” is when performance drops rapidly rather than gradually. If you can pause or rotate the creative before it hits the cliff, you save the ad’s remaining value and protect your campaign’s overall performance.

This matters even more in 2026 because of how Meta’s Andromeda algorithm distributes creative delivery. Andromeda evaluates far more ads per auction, which means fatigued creatives get replaced faster in the ranking. But it also means that if all your creatives are fatiguing at the same time, your campaign has nothing to fall back on.

Setting Up Automated Fatigue Alerts

Doing the above process manually is fine for learning the patterns. But once you understand what to look for, you should automate it. Here’s the rule I use in TheOptimizer.

Fatigue Detection and Pause Rule:

Automation Rule Example:

IF Ad CTR over the last 3 days has decreased by 30%+ compared to its 14-day average
AND Ad Impressions over the last 3 days > 1,000
AND Ad Frequency > 3
THEN Pause the Ad
AND Send a notification (email, Slack, or Telegram)

Fatigue Warning Rule (alert only, no action):

Automation Rule Example:

IF Ad CTR over the last 3 days has decreased by 15 to 25% compared to its 14-day average
AND Ad Frequency > 2
THEN Send alert notification

The warning rule gives you a heads-up that a creative is entering the danger zone. The action rule actually pauses it when it crosses the threshold. Having both ensures you’re never caught off guard.

Automate your creative fatigue detection. TheOptimizer can run fatigue detection rules every 10 minutes across all your campaigns. Get notified before performance collapses. Get Started for Free

What to Do When Creative Fatigue Hits

Once fatigue is detected, you have a few options. The right choice depends on the situation.

Option 1: Pause and replace.
The most common approach. Pause the fatigued creative and launch a new one. This works well when you have a pipeline of tested creatives ready to go.

Option 2: Rotate to a different audience. Sometimes the creative isn’t dead, it’s just exhausted within a specific audience segment. Moving it to a different Lookalike or interest group can give it a second life. This is more relevant for retargeting, where audiences are smaller.

Option 3: Refresh the creative. Take the winning concept and create a variation. Change the hook, the opening frame, the thumbnail, or the format (turn a static into a video, turn a video into a carousel). The angle stays the same, but the visual execution is fresh enough to reset the fatigue clock.

Option 4: Pivot the angle entirely. If you’ve exhausted all visual variations of a winning angle, it’s time to test a completely different narrative. Our guide on creating 10 different angles for the same offer walks through a framework for this.

What NOT to do: Don’t just increase the budget hoping the algorithm will find new people. If the creative is fatiguing, throwing more money at it accelerates the problem; it doesn’t solve it.

The Creative Rotation Strategy That Keeps Campaigns Alive

The best defense against creative fatigue is not reacting to it. It’s preventing it from crippling your campaigns in the first place. Always have 3 stages of creatives:

- Active winners (currently running and performing well): 4 to 8 creatives
- Ready to launch (tested and approved, waiting on the bench): 4 to 6 creatives
- In production (being designed or filmed right now): 4 to 6 creatives

When a winner fatigues and gets paused by your automation rules, a “ready to launch” creative immediately takes its place. Meanwhile, your team is working on the next batch. This creates a continuous pipeline where you’re never scrambling to replace a dead creative. The system feeds itself.

Rotation timing: For most campaigns, plan to introduce 2 to 4 new creatives per week.
At $200 to $500/day spend, a strong creative typically lasts 10 to 20 days before showing fatigue. At higher spend levels ($1,000+/day), that window shrinks to 7 to 14 days because frequency builds faster. Your campaign structure should support this rotation. Having a dedicated testing campaign (ABO) separate from your scaling campaign (CBO) ensures that new creatives get a fair shot without competing against your current winners for budget.

Building a Sustainable […]
April 22, 2026