Not long ago, advertisers juggled everything manually in Ads Manager: running hundreds of campaigns, testing different audiences, jumping from one ad set to another. In 2026, the game has changed. Your Facebook campaign structure sits at the center of how the platform allocates budget, how quickly you receive data, and whether your test results are trustworthy.

The challenge is that no single structure works for every business. The right one depends on your goal: testing creatives, scaling winners, or running retargeting. The good news is that you don’t have to start from scratch every time. There are reliable frameworks that serve as a starting point you can shape around your business and your goals, not the other way around.

In this guide, we’ll break down best practices for Facebook ad campaign structure in 2026, the three levels of Meta’s campaign hierarchy, and the CBO vs. ABO dilemma.

Key Takeaways

- Facebook’s campaign hierarchy has three levels: campaign, ad set, and ad. Budget flows downward, and optimization happens at the ad set level.
- ABO works best for testing, while CBO works best for scaling proven winners. The hybrid approach is what most experienced media buyers default to.
- For creative testing, one creative per ad set (Structure A) is recommended; it gives you the cleanest, most comparable data.
- Horizontal scaling means duplicating winners across new audiences, placements, or budgets; vertical scaling means raising the budget on existing winners in 20% increments every 24–48 hours.
- Consistent naming conventions are best practice. They keep your account readable and make it easy to find the campaigns you’re looking for.
- Automation turns a good framework into a process you can follow consistently. Offloading the structural work frees up operational time for higher-value tasks.
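The vertical-scaling rule above (raise the budget roughly 20% every 24–48 hours) is simple compounding, so you can project a budget schedule before touching Ads Manager. A minimal Python sketch; the function name and defaults are ours for illustration, not anything from Meta:

```python
def vertical_scale_plan(daily_budget: float, steps: int,
                        increment: float = 0.20) -> list[float]:
    """Project a daily-budget schedule that grows by `increment` per step.

    Each step corresponds to one 24-48 hour window in which the campaign
    is left stable before the next raise.
    """
    plan = [round(float(daily_budget), 2)]
    for _ in range(steps):
        plan.append(round(plan[-1] * (1 + increment), 2))
    return plan

# Scaling a $100/day winner over four raises:
print(vertical_scale_plan(100, 4))  # [100.0, 120.0, 144.0, 172.8, 207.36]
```

The point of the 24–48 hour pause between steps is to avoid resetting the ad set’s learning with a sudden large budget jump.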
Facebook’s Campaign Hierarchy — The Three Levels

Before anything else, let’s cover the basics. Meta’s hierarchy is organized into three levels, and each level carries specific decisions that shape how your money is spent.

Campaign Level: This is where you set the objective (sales, leads, traffic, etc.), the budget strategy, the bidding type, and any special ad categories. If you’re running CBO, this is also where you set the campaign budget. At the campaign level, Facebook learns what you’re trying to achieve, and everything below is built around that goal.

Ad Set Level: Here you control audience targeting, placements, optimization events, bid strategy, schedule, and, if you’re running ABO, the budget. More importantly, this is where the algorithm learns: pixel data, conversion events, and delivery patterns are all anchored at the ad set level.

Ad Level: Your creatives live here: the image or video, primary text, headline, description, and all tracking parameters. You can view different variations of your ads and preview how they’ll look when published. You can also measure what resonates with your audience by connecting third-party reporting tools, such as Google Analytics, to your Ads Manager account.

The decisions you make at each level are more consequential than many advertisers realize. The hierarchy is connected in a specific direction, and that direction matters: budget flows downward from campaign to ad set to ad, and optimization happens at the ad set level. Change something at the top of the pyramid, and it passes through everything below it. If your ad sets are poorly isolated, optimization signals overlap and your data becomes unreliable. If your budget is set at the campaign level (CBO), Facebook decides how to distribute it, and that decision is made by the algorithm, not by you. It’s a domino effect.
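The three levels map naturally onto nested records. The sketch below is an illustrative data model, not Meta’s API: it only encodes which decision lives at which level and how, under ABO, the campaign’s total spend rolls up from its ad sets. All class and field names are our own.

```python
from dataclasses import dataclass, field

@dataclass
class Ad:
    # Ad level: the creative itself plus copy and tracking.
    name: str
    creative: str
    headline: str = ""
    primary_text: str = ""

@dataclass
class AdSet:
    # Ad set level: targeting, optimization event, and (under ABO) the
    # budget. This is where the algorithm learns.
    name: str
    audience: str
    optimization_event: str
    daily_budget: float  # ABO: set here; CBO: set on the campaign instead
    ads: list[Ad] = field(default_factory=list)

@dataclass
class Campaign:
    # Campaign level: objective and budget strategy.
    name: str
    objective: str        # e.g. "sales", "leads", "traffic"
    budget_strategy: str  # "ABO" or "CBO"
    ad_sets: list[AdSet] = field(default_factory=list)

    def total_daily_budget(self) -> float:
        # Under ABO, campaign spend is just the sum of its ad set budgets.
        return sum(s.daily_budget for s in self.ad_sets)
```

Notice that the budget field has two possible homes; that single design decision is the whole CBO vs. ABO debate covered next.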
A weak foundation at the campaign level creates problems that no creative testing methodology can fix. That’s why understanding this hierarchy makes the difference between a campaign structure that drives results and one that just burns budget.

CBO vs. ABO: When to Use Each and How Campaign Budget Optimization Affects Your Structure

This is probably the most debated structural decision in Meta advertising, and for good reason. Using the wrong budget strategy at the wrong stage has consequences: it either drains your budget or renders your test data untrustworthy. Let’s set the record straight.

Campaign Budget Optimization (CBO)

Campaign Budget Optimization is a strategy in which you set a single campaign-level budget rather than individual ad set budgets. The algorithm then distributes it across ad sets based on predicted performance. Facebook’s model is fed by conversions, and with enough data it makes smart predictions, so CBO can find efficiencies you’d never find manually. That’s why this strategy works well for scaling winners with broad targeting and multiple placements.

The problem with CBO for testing is structural. Facebook will often funnel the majority of your budget to one or two ad sets before your variations have gathered enough data to be judged fairly. As a result, winners get chosen on early, noisy signals: Meta’s model favors ad sets based on initial traction rather than long-term potential.

Ad Set Budget Optimization (ABO)

Ad Set Budget Optimization assigns a fixed budget to each ad set. You have the control here: you decide how much each test gets, and Facebook can’t redistribute it. Every creative or audience in your test gets a fixed spend, regardless of how other ad sets are performing. When you’re trying to figure out which creative performs better, you need an apples-to-apples comparison: same audience, same budget, same time window. ABO gives you that. It is the right tool for testing. But there’s a trade-off.
As you scale and your test volume grows, manually monitoring individual ABO ad sets becomes overwhelming. That’s why media buyers now separate testing from scaling so that ABO and CBO each do what they do best: ABO for testing, CBO for scaling. Run your creative tests in ABO campaigns with isolated ad sets. When a creative proves itself, based on your own conversion data, graduate it to a CBO scaling campaign.

Facebook Campaign Structures for Creative Testing

The whole point of a creative test is to find out what actually works for your audience, not what Facebook’s algorithm decides to spend your budget on first. Everything about your structure should serve that goal. Let’s look at the three Facebook structures for creative testing.

Structure A: One Creative Per Ad Set

This is the recommended default for most accounts doing serious creative testing. The setup:

- A single ABO campaign
- One ad set per creative
- Identical audience and targeting across all ad sets
- An equal daily budget for each

Every creative must compete on the same terms. When creative A has a 2x better CPA than creative B, and both had the same spend against the same audience, you’ve learned something real. But when creative A simply got more spend because Facebook’s algorithm liked it on day one, the result is biased, and you haven’t learned anything you can act on.

How to make this structure work in practice:

Run each batch for seven days before making a judgment. This is how you prevent a costly mistake many advertisers make. If you launch a new batch on, say, Tuesday and pull results on Friday, you’re not making a fair comparison: for most businesses, weekend performance differs from weekday performance. Shut down a batch after three days and you might be killing creatives whose results would have shown up on Sunday.

Keep each batch to 4–6 creatives at lower spend levels.
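The Structure A setup is mechanical enough to script. Below is a hypothetical helper (ours, not a Meta tool) that turns a list of creatives into one-ad-per-ad-set specs with identical targeting and an equal budget for each, and enforces the 4–6 creative batch size:

```python
def build_test_batch(creatives: list[str], audience: str,
                     budget_per_ad_set: float) -> list[dict]:
    """Return one ad-set spec per creative: same audience, equal budget."""
    if not 4 <= len(creatives) <= 6:
        raise ValueError("Keep each test batch to 4-6 creatives")
    return [
        {
            "ad_set_name": f"test_{i + 1}_{creative}",
            "audience": audience,               # identical across the batch
            "daily_budget": budget_per_ad_set,  # equal for each ad set
            "creative": creative,               # one creative per ad set
        }
        for i, creative in enumerate(creatives)
    ]

batch = build_test_batch(["hook_a", "hook_b", "hook_c", "hook_d"],
                         audience="US_broad", budget_per_ad_set=25.0)
```

Generating the specs this way makes the apples-to-apples constraint explicit: if two ad sets in a batch ever differ in audience or budget, the comparison is no longer valid.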
I know it’s tempting to test more angles, formats, and hooks. But think about it this way: if you’re spending $20–$50/day per ad set, spreading the budget across 10–15 creatives means most of them will collect almost zero impressions. 4–6 is the sweet spot.

Use ad set spending limits if you run this inside a CBO. Running this structure as a CBO, you’ll often hit a common pattern: older ad sets with existing data absorb most of the budget while your new test batches starve. To prevent that, set an ad set spending limit of 80–90% of the daily campaign budget per ad set.

Structure B: One Creative Per Campaign

This is the highest-isolation testing structure. Each creative gets its own campaign with its own budget. Run one creative per campaign in one of […]