Agenda outline:
🚀 Campaign & Creative Mass Testing
⚖️ Tested Stop Loss Strategies
😓 Creative Fatigue Detection
⛔ Cost Spike Detection
📈 Lean and Aggressive Scaling
🎛️ Bid and Budget Control
June 21, 2024
TheOptimizer
TheOptimizer Team

Most media buyers who try automation make the same mistake. They go looking for a list of rules, copy someone else’s thresholds, plug them in, and hope for the best. Then when the results don’t match what the original person achieved, they blame the tool. The problem isn’t the rules. It’s that they skipped the thinking behind the rules.

An automation playbook isn’t a collection of rules. It’s a documented system that defines how your campaigns move through their lifecycle, what decisions get made at each stage, and what data triggers those decisions. The rules are just the execution layer. The playbook is the strategy.

Think of it this way: if you hired a junior media buyer and handed them a list of 8 rules without context, they’d apply them mechanically and probably destroy a few campaigns. But if you gave them a playbook that explains why each rule exists, when it should apply, and how to adjust thresholds based on what they’re seeing, they’d make better decisions even without the specific rules.

That’s what we’re building in this article: a framework you can use to create your own automation playbook from scratch, tailored to your specific campaigns, offers, and KPIs.

The Mindset Shift: From Campaign Manager to System Manager

In 2026, running Meta Ads is fundamentally different from what it was in 2022 or 2023. With Andromeda reshaping how ads get matched to users, the role of the media buyer has changed. You’re not manually selecting audiences and testing one variable at a time anymore. You’re managing a system. The best way I’ve heard this described: you’re no longer playing the instruments. You’re conducting the orchestra.
What that means practically is that your time should go toward:

- Building and maintaining your creative pipeline (the input that matters most)
- Defining the rules and thresholds that govern campaign behavior
- Analyzing patterns and adjusting the system based on what you learn
- Improving your offers and funnels

It should NOT go toward:

- Checking Ads Manager every 2 hours
- Manually pausing underperforming ad sets one by one
- Calculating budget increase percentages in a spreadsheet
- Remembering which campaigns you already scaled this week

The automation handles the second list. The playbook ensures the automation is doing the right things.

Step 1: Define Your Campaign Lifecycle Stages

Every campaign goes through predictable stages. Your playbook needs to define what happens at each one.

Stage 1: Launch (Days 0 to 3)
The campaign is new. Meta’s algorithm is exploring. Performance data is noisy and unreliable. The goal at this stage is to collect data while limiting downside risk.
Automation focus: Stop-loss protection only. Pause anything that spends a significant amount with zero conversions. Don’t make scaling or optimization decisions yet.

Stage 2: Learning (Days 3 to 7)
You have enough data to start seeing patterns but not enough for high-confidence decisions. The goal is to identify which campaigns show promise and which are clearly not going to work.
Automation focus: Kill campaigns that show no improvement trend over 3 days. Start monitoring CPA/ROAS trends. Alert on campaigns that cross performance thresholds.

Stage 3: Validation (Days 7 to 14)
Campaigns that survived Stage 2 are showing stable performance. The data is now reliable enough for optimization decisions. The goal is to confirm profitability before scaling.
Automation focus: Begin budget scaling on validated winners. Start creative fatigue monitoring. Adjust bids or budgets on campaigns that are trending in the wrong direction.
Stage 4: Scaling (Day 14+)
Validated winners get scaled vertically (budget increases) and horizontally (cloning). The goal is to maximize volume while maintaining profitability.
Automation focus: Gradual budget increases on proven campaigns. Automated cloning of winners across ad accounts. Continuous creative refresh through fatigue detection and rotation.

Stage 5: Maintenance
Scaled campaigns need ongoing protection against degradation. Creatives fatigue, audiences saturate, and competition changes.
Automation focus: Detect and pause declining campaigns. Alert when performance dips below thresholds. Reduce budgets on campaigns showing stress before killing them entirely.

Important: The biggest mistake I see is applying Stage 4 rules (scaling) during Stage 1 (launch). If your automation tries to scale a campaign that’s only been running for 48 hours, you’re making decisions on insufficient data. The playbook prevents this by defining which rules apply at which stage. For more on this, read our article on why killing campaigns too early hurts performance.

Step 2: Map Your Manual Decisions to Automation Logic

Before building any rules, write down every manual decision you currently make about your campaigns. Every single one.
Here’s a starter list:

- “This campaign has spent $X with no conversions, I’m pausing it”
- “This campaign has been profitable for 5 days, I’m increasing the budget by 20%”
- “This ad’s CTR dropped significantly, it’s probably fatiguing”
- “This campaign was working but CPA has been creeping up for 3 days”
- “This campaign is a clear winner, I want to clone it to another ad account”
- “I check my campaigns at 9 AM and make adjustments before lunch”

Now translate each one into IF/THEN logic:

- IF Spend > $X AND Conversions = 0 THEN Pause
- IF ROI last 3 days > X% AND Conversions last 7 days > Y THEN Increase Budget 20%
- IF CTR last 3 days dropped 30%+ vs 14-day average AND Frequency > 3 THEN Pause Ad
- IF CPA last 3 days > Target CPA by 25% AND CPA was below target days 7 to 4 THEN Decrease Budget 20%
- IF ROI last 5 days > 15% across two time windows THEN Clone campaign

The key insight is that most of your daily decisions follow predictable patterns. Once you can express them as IF/THEN conditions, they can be automated. For specific rule examples with exact thresholds and screenshots, check our guide on 8 automation rules top media buyers use to scale Meta Ads safely.

Step 3: Build Your Rule Categories

Organize your rules into categories that correspond to the campaign lifecycle:

Category 1: Protection Rules (Always Active)
These run from the moment a campaign launches and never stop. Their job is to prevent budget waste.
- Pause ad sets with zero conversions after X spend
- Pause campaigns with consistently negative ROI after 3+ days
- Alert on sudden performance drops

Category 2: Optimization Rules (Active After Learning Phase)
These start working once you have enough data (typically after 5 to 7 days).
- Decrease budgets on campaigns with rising CPA
- Pause degrading campaigns based on multi-day trends
- Adjust based on combined tracker + Meta data

Category 3: Scaling Rules (Active on Validated Winners)
These only apply to campaigns that have demonstrated stable profitability.
- Increase budgets gradually on winners
- Clone winning campaigns within and across ad accounts
- Apply at controlled frequencies (2 to 3 times per week)

Category 4: Creative Management Rules (Always Active)
These monitor the health of your creatives.
- Detect creative fatigue through CTR decline and frequency increase
- Pause saturated low-performing ads
- Send refresh alerts to your creative team

Category 5: Alert Rules (Always Active)
These don’t take action automatically. They just notify you.
- Campaign performance drops below threshold
- Daily spend exceeds expectations
- New campaign hits profitability target (potential scaling candidate)

Set up your automation system: TheOptimizer lets you build all five rule categories and run them across unlimited Meta ad accounts. Rules execute as frequently as every 10 minutes, 24/7. Get Started for Free

Step 4: Set Thresholds Based on Your Data, Not Someone Else’s

This is where most people go wrong. They copy thresholds from a blog post (including mine) and apply them without adjustment. Your thresholds need to come from YOUR data. Here’s how to determine them:

For stop-loss thresholds: Look at your historical winning campaigns. How much did they typically spend before generating their first conversion? Set your stop-loss threshold at 1.5x to 2x that amount. If your winners typically convert within $50 of spend, setting a stop-loss at $75 to $100 makes sense.

For scaling thresholds: What ROI or ROAS have your campaigns historically maintained after scaling? If campaigns typically hold 20% ROI after scaling, set your scaling trigger at 25% (giving a safety margin). If they hold 15%, set it at 20%.

For fatigue detection: What does CTR decline look like on your ads? Pull data from your last 20 to 30 ads and look at their CTR trajectory over time. When does the decline typically start? At what point does CPA start being affected? Those are your fatigue thresholds.

For budget increase […]
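The threshold logic described above can be sketched in a few lines. This is a minimal illustration, not TheOptimizer's implementation: the historical numbers are made up, and in practice the spend-to-first-conversion figures would come from your tracker or ad platform reporting rather than a hard-coded list.

```python
# Sketch: derive a stop-loss threshold from YOUR historical winners
# (1.5x to 2x the typical spend a winner needed before converting),
# then apply the "IF Spend > $X AND Conversions = 0 THEN Pause" rule.

def stop_loss_threshold(spend_to_first_conversion, multiplier=1.5):
    """Median spend-to-first-conversion of past winners, scaled by a safety multiplier."""
    ordered = sorted(spend_to_first_conversion)
    median = ordered[len(ordered) // 2]
    return median * multiplier

def should_pause(spend, conversions, threshold):
    """IF Spend > threshold AND Conversions = 0 THEN Pause."""
    return spend > threshold and conversions == 0

# Hypothetical data: winners typically converted within ~$50 of spend.
winners = [42.0, 55.0, 48.0, 51.0, 39.0]
threshold = stop_loss_threshold(winners, multiplier=1.5)  # 48.0 * 1.5 = 72.0

print(should_pause(spend=80.0, conversions=0, threshold=threshold))  # True: pause
print(should_pause(spend=80.0, conversions=2, threshold=threshold))  # False: keep running
```

The point isn't the code itself; it's that the multiplier and the historical sample are the parts you own. Two accounts running the same rule with different histories should end up with different thresholds.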
April 24, 2026
Losid Berberi
Chief Marketing Officer

Most people running Meta ads are still optimizing for a system that no longer exists. They’re splitting budgets across six ad sets, testing one variable at a time, and capping frequency because they’re scared of “ad fatigue.” Meanwhile, Meta’s infrastructure quietly rebuilt itself from the ground up. If you don’t understand what changed, you’re fighting the algorithm instead of working with it.

The engine at the center of this shift is called Andromeda. It’s Meta’s internal ad matching and ranking architecture, and understanding even the basics of how it works will change how you structure campaigns, how you think about creative, and how you interpret performance data. The Meta Andromeda algorithm explained simply: it’s the system that decides which of your ads even gets a chance to compete before a human ever sees it.

What Andromeda Actually Is

Meta published the full technical breakdown of Andromeda in a December 2024 post on the Engineering at Meta blog. The headline numbers got passed around:

- 100x faster ad matching
- 10,000x increase in model capacity for the matching stage
- +6% recall improvement
- +8% ads quality improvement on selected segments

Most people read those numbers and moved on. But the implications are well worth digging into.

Before Andromeda, Meta’s system had real constraints on how many ads it could evaluate against any given impression opportunity. The matching step, where the system pulls candidate ads from the full inventory to rank against a user, was the bottleneck. You could have a phenomenal ad that never found its audience simply because the system didn’t have the computational budget to evaluate it. Andromeda changed that ceiling. It uses a two-stage architecture: a fast approximate matching layer that casts a wide net across candidates, then a more expensive deep-ranking model that scores the final shortlist.
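To make the two-stage idea concrete, here is a toy retrieve-then-rank sketch. It only illustrates the general pattern; the scoring functions are stand-ins I made up for this example, and Meta's real system uses large neural models, not dot products over tiny vectors.

```python
# Toy two-stage pipeline: a cheap scorer casts a wide net, then an
# "expensive" scorer is spent only on the shortlist.

def cheap_score(user, ad):
    """Stage 1 stand-in: fast approximate similarity over a few features."""
    return sum(u * a for u, a in zip(user, ad["embedding"]))

def expensive_score(user, ad):
    """Stage 2 stand-in: pretend this is a costly deep-ranking model call."""
    return cheap_score(user, ad) * ad["quality"]

def match_and_rank(user, inventory, shortlist_size=3):
    # Stage 1: retrieve a shortlist with the cheap scorer.
    shortlist = sorted(inventory, key=lambda ad: cheap_score(user, ad),
                       reverse=True)[:shortlist_size]
    # Stage 2: rank only the shortlist with the expensive scorer.
    return max(shortlist, key=lambda ad: expensive_score(user, ad))

user = [0.9, 0.1, 0.4]
inventory = [
    {"id": "ad_a", "embedding": [0.8, 0.2, 0.1], "quality": 0.9},
    {"id": "ad_b", "embedding": [0.1, 0.9, 0.3], "quality": 1.2},
    {"id": "ad_c", "embedding": [0.7, 0.1, 0.6], "quality": 1.1},
]
print(match_and_rank(user, inventory)["id"])
```

The "100x faster matching" claim lives in stage 1: making retrieval cheaper means more of the inventory can reach the shortlist at all, which is why a previously unseen ad can suddenly start getting impressions.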
The system runs on NVIDIA Grace Hopper Superchips and Meta’s own MTIA silicon, co-designed hardware and software that enables far more complex neural networks to evaluate ads in near real time. The result is that the system can now meaningfully evaluate far more ads per auction, which directly affects how your creative gets distributed.

The Number That Actually Matters: 10,000x More Variants

When people say “10,000x more variants,” it sounds like an abstraction, so let’s make it concrete. Say you’re running a campaign for a DTC skincare brand. You have 8 active ad creatives. Under the old system, many of those ads were effectively competing for evaluation slots before they even reached the ranking stage. Your best ad got found. Your fourth-best ad might have rarely been pulled into consideration at all. Under Andromeda, all 8 are genuinely in play, matched to the right user at the right moment. The system can explore the full creative space you’ve given it. That changes the logic of how many ads you need, how different they should be from each other, and how you interpret which ones are “winning.”

We ran a test on this dynamic for a supplement brand spending around €850/day. We went from 4 creatives per ad set to 12, but made sure each one had a distinctly different hook, angle, and format. CTR on the campaign improved, but more importantly, our cost per purchase dropped from €38 down to €26 over a 21-day window. The reach into cold audiences improved significantly. We had more genuinely different creatives driving traffic, not just 12 versions of the same UGC testimonial with a different color grade.

Why Creative Diversity Beats Creative Volume

This is the part nobody talks about enough. Most media buyers hear “more variants” and go produce 20 slightly different versions of the same ad. Same hook, same offer, same format. Just different faces or different opening lines. But that is not creative diversification. Meta has been explicit about this.
In their official Creative Advantage post on Meta for Business, they describe the shift directly: the focus has moved from niche targeting to creative diversification as the primary lever for finding relevant audiences. And their follow-up three-step creative diversification guide makes it even clearer. They’re not asking for volume. They’re asking for conceptually distinct creative signals.

Andromeda’s matching system is trying to match ads to users based on predicted relevance and engagement. If all your variants are the same conceptual ad with minor surface changes, you’re not actually expanding the candidate pool in a meaningful way. You’re just giving the system more of the same signal.

What actually works is what I’d call conceptual diversity: ads that represent genuinely different creative theses. One ad that leads with social proof, another that leads with a transformation story, another that’s educational, another that’s founder-led. Different formats: static image, short-form video, carousel. Different lengths: 7-second hook-and-close versus 60-second narrative. When your creative pool has real variety, Andromeda can do what it was built to do: find which thesis resonates with which user segment, without you having to segment manually.

What “Conceptual Diversity” Looks Like in Practice

When building a creative strategy now, the three dimensions I think make sense to focus on are: angle (the core emotional or rational appeal), format (static, video, carousel, collection), and length (short grab vs. longer story). You want good coverage across all three, not just variations within one. A campaign with one 15-second video and six slightly different thumbnails is not a diverse creative pool. A campaign with a 15-second video, a 45-second narrative, a static proof-based image, and a carousel showing before/after is what the algorithm can actually work with.
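One way to pressure-test a creative pool against the three dimensions above is to count distinct values per dimension. The sketch below encodes this article's rule of thumb, not any official Meta metric, and the sample pool is hypothetical.

```python
# Sanity check: does a creative pool vary across angle, format, and
# length, or is it just variations within one dimension?

def coverage(creatives, dimensions=("angle", "format", "length")):
    """Count distinct values per dimension across a creative pool."""
    return {dim: len({c[dim] for c in creatives}) for dim in dimensions}

pool = [
    {"angle": "social_proof",   "format": "static",   "length": "short"},
    {"angle": "transformation", "format": "video",    "length": "long"},
    {"angle": "educational",    "format": "carousel", "length": "short"},
    {"angle": "founder_led",    "format": "video",    "length": "short"},
]

print(coverage(pool))  # {'angle': 4, 'format': 3, 'length': 2}

# Six thumbnails on the same 15-second video would score
# {'angle': 1, 'format': 1, 'length': 1}: volume, not diversity.
```

A pool scoring 1 on any dimension is giving the matching system the same signal over and over, which is exactly the failure mode described above.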
Jon Loomer, who has one of the more grounded practitioner-level takes on this, breaks down creative diversification across seven specific examples if you want to go deeper on the tactical side. Worth the read. Meta also published a companion piece, Demystifying Creative Diversification, that’s worth bookmarking as a reference for what they actually mean when they use that phrase.

Meta Andromeda Algorithm Explained: What It Means for Campaign Structure

When you fragment your budget across many ad sets, you’re starving the algorithm’s learning phase in each one. Fewer conversions per ad set result in slower signal accumulation, which means worse audience matching, which means you never see what the creative could actually do with proper data behind it.

Advantage+ Campaign Budget, what used to be called Campaign Budget Optimization (CBO), exists to solve this. Let the budget flow to where conversions are cheapest at the campaign level, and stop manually allocating between ad sets. Meta’s own page on this feature cites an average 4.6% decrease in CPA when it’s enabled, which seems conservative. The gains are usually bigger when you’re coming from a heavily fragmented structure.

But there’s an additional effect that Andromeda amplifies. A more consolidated structure means the matching system has a bigger, unified creative pool to evaluate per auction. You shouldn’t split your creatives across multiple ad sets and limit learnings. You should have one campaign, broad targeting, multiple strong creatives. That’s the structure that lets Andromeda work at full capacity.

It’s worth noting that this doesn’t mean you should never segment. Brand versus prospecting, for example, often need separate campaigns for better budget control. Just don’t create separate ad sets for every audience, placement, or demographic; that is what works against you now.
Why Your “Best Practices” Are Outdated

There’s a common idea in the Meta ads community that you need to “control variables” the way you would in a lab experiment: one change at a time, isolated testing, clean attribution. But this approach assumes the algorithm is a passive pipe that delivers your ad to whoever you tell it to. That doesn’t work anymore. Andromeda is actively matching. It’s finding the sub-audiences where each creative will perform best, and that process takes time and data. When you isolate variables too aggressively, pausing ads after 48 hours, testing hooks in isolation from the offer and CTA, killing anything that doesn’t hit your CPA target in three days, you’re interrupting a matching process that hasn’t had time to complete. You’re drawing conclusions before the experiment has actually run. Most Meta ad buyers’ obsession with fast, clean testing loops made more sense when the algorithm was less sophisticated. Now it can cost you your whole campaign. For a more measured counterpoint, because not everyone agrees Andromeda changes as much as the hype suggests, the team at Motion put together a solid roundup of practitioner perspectives, including […]
March 17, 2026
Losid Berberi
Chief Marketing Officer

When performance goes down, most marketers blame the creative. The truth is that the creative is rarely the problem; the angle is.

Here’s what usually happens:

1. You launch a campaign
2. You find an angle that works
3. You scale the working angle
4. The angle burns out (performance drops)
5. You start working on new creatives

The problem with this approach is that you’re promoting your campaign with a single angle (narrative), and a single angle cannot carry long-term scale. If you want stability and room to scale, you need to run multiple angles. Let’s break down how to generate 10 strong angles for the same offer.

What is an Angle?

An angle is the narrative or perspective you use to present your offer/product. It is not:

- A headline tweak
- A different image
- A rewritten CTA

Think about why someone cares enough to interact with your ads and convert on your offers. There are different motivators you can use to promote the same offer. That’s what actually helps you scale.

Why Most Marketers Stop at Just One Angle

Most of them think the offer defines the message, but in reality it doesn’t. The offer defines the outcome, while the angle defines the story. If you only see one way of positioning or promoting an offer, you’re not thinking deeply enough. Strong offers/products can support multiple narratives; you just have to find them.

The 10 Angle Framework

Here’s a simple framework that works. Take the offer or product you want to promote and run it through these categories.

Problem Agitation Angle
Focus on the pain point.
Example: “What Most Homeowners Don’t Realize About Their Current Insurance Coverage”
This angle highlights the existing problem.

Fear Angle
Highlight risks or loss.
Example: “This Simple Insurance Oversight Could Cost You Thousands”
Fear drives action when used responsibly.
Savings Angle
Focus on cost reduction.
Example: “Homeowners Are Saving an Average of $X With This Insurance Adjustment”
Savings angles perform well in uncertain economic times or price-oriented markets.

Opportunity Angle
Frame it as something beneficial.
Example: “Why Now Might Be the Best Time to Upgrade Your Home Insurance Coverage”
Opportunity appeals to ambition and curiosity.

Curiosity Angle
Create intrigue without overselling.
Example: “Why Experts Are Quietly Talking About The Latest Insurance Changes”
Curiosity works well in discovery campaigns.

Data-Driven Angle
Lead with statistics.
Example: “7 Out of 10 People Miss This When Signing For a New Insurance Policy”
Numbers build credibility.

Authority Angle
Leverage expertise.
Example: “Insurance Experts Recommend Reviewing This Before Year-End”
Authority builds trust.

Story-Based Angle
Tell a relatable narrative (test multiple).
Example: “How This Family of Four Reduced Their Home Insurance Cost by 38%”
Stories humanize the offer. Make it relate to them.

Localized Angle
Make it geographically relevant.
Example: “[City] Homeowners May Qualify for a New Insurance Benefit This Month”
When used right, localization increases relevance.

Timing or Urgency Angle
Tie it to a season or deadline.
Example: “Experts Warn New Insurance Rule Could Raise Prices by April 2026”

As you can see, for a single product like “home insurance,” we were able to generate 10 different angles you can build creatives around. Each angle drives its own:

- Hook
- Supporting copy
- Landing page
- Visual direction

If the angle changes, everything else changes. That’s how you should test it.

Why Angles Protect Campaigns from Fatigue

Most campaigns die because they rely heavily on a single angle. If the angle dies, the campaign dies with it. But if you have 8-10 angles, you can:

- Rotate different narratives
- Test adjacent motivations
- Expand without ruining what’s already working

Angle diversity supports longevity.
How to Systemize Angle Creation

Instead of brainstorming randomly, follow this process:

1. Define the core outcome of the offer.
2. List all emotional drivers connected to that outcome.
3. Match each emotional driver to a narrative category.
4. Build one creative per angle.
5. Test angles before optimizing creative variations.

Refrain from launching 12 versions of one angle. Instead, launch 5 distinct angles first, then refine the winners.

Advanced Angle Combination

Once you have tested and validated individual angles, you can combine them for stronger impact.

Data + Fear:
- “New Report Warns Many Homeowners May Be Underprepared for Major Damage”
- “Data Suggests Millions of Homeowners Could Be Underinsured”
- “Insurance Study Highlights Risks of Outdated Home Coverage”

Authority + Urgency:
- “Experts Urge Homeowners to Review Their Insurance Now”
- “Regulators Advise Homeowners to Review Insurance Before the Next Storm”
- “Experts Urge Homeowners to Review Insurance by the End of This Month”

Story + Savings:
- “How One Homeowner Discovered They Were Paying Too Much for Insurance”
- “Why One Family Decided to Revisit Their Home Insurance Policy”
- “How a Simple Insurance Check Helped One Homeowner Cut Costs”

Proper combination creates angles that resonate better, but only after you know which ones work individually.

Why Angles Matter in Scaling

Scaling isn’t just about spending more. Scaling is about expanding to a broader audience, and to do that you need to expand across different narratives. When you have multiple validated angles:

- You don’t rely on a single creative angle
- You expand to new audience segments
- You increase volume without risk

That is how seasoned performance marketers scale consistently across different offers.

Final Thoughts

If you feel stuck with your creatives, you don’t need a better design, a color change, or a variation of your headline. You need a different and better narrative. Every strong offer or product supports multiple narratives.
Your job is to uncover them and build angles that convert around them.
March 13, 2026
Losid Berberi
Chief Marketing Officer