Last week, I wrote about a Meta ads test that I ran to compare traffic quality results based on three different optimizations. While I was confident in the results, some people questioned whether they were misleading because I used all placements.

I saw it as an opportunity to run another test. And after this one, I plan to run a third.

The short summary: The second test validates the results of the first. But it was still useful to run and see what would happen.

So let’s discuss what I did and found out…

Overview of Test #1

Before we get to the second test, a quick refresher is in order. For the first test, I created a campaign that compared the results generated by three different ad sets, each optimized differently:

  1. Link Clicks
  2. Landing Page Views
  3. Quality Visitor Custom Event (I wrote about how I created this event here)

The targeting for each was completely broad, using only countries (US, UK, Canada, and Australia). I was promoting a blog post, and the goal was to drive the most Quality Visitors.

The Quality Visitor custom event fires when someone spends at least 2 minutes and scrolls 70% or more on a page. Because I was promoting a blog post, I wasn’t looking for purchases or leads, but it was important that the ads drove engaged readers.
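The post links elsewhere to the full write-up of how this event was built. As a rough illustration only, here is a minimal sketch of the browser-side logic — the 2-minute and 70% thresholds come from the post, but the event name `QualityVisitor` and all the wiring details are assumptions, not the actual implementation:

```javascript
// Minimal sketch of a "Quality Visitor" custom event for the Meta pixel.
// Thresholds match the post: 2+ minutes on page AND 70%+ scroll depth.

const TIME_THRESHOLD_SECONDS = 120;
const SCROLL_THRESHOLD_PCT = 70;

// Pure check: has this visit crossed both thresholds?
function isQualityVisit(secondsOnPage, maxScrollPct) {
  return secondsOnPage >= TIME_THRESHOLD_SECONDS && maxScrollPct >= SCROLL_THRESHOLD_PCT;
}

// Browser wiring (skipped outside the browser). Fires the custom event once.
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
  const startTime = Date.now();
  let maxScrollPct = 0;
  let fired = false;

  function maybeFire() {
    const seconds = (Date.now() - startTime) / 1000;
    if (!fired && isQualityVisit(seconds, maxScrollPct)) {
      fired = true;
      // fbq is the standard Meta pixel function; 'QualityVisitor' is an assumed name.
      if (typeof fbq === 'function') fbq('trackCustom', 'QualityVisitor');
    }
  }

  window.addEventListener('scroll', () => {
    const scrollable = document.documentElement.scrollHeight - window.innerHeight;
    if (scrollable > 0) {
      maxScrollPct = Math.max(maxScrollPct, (window.scrollY / scrollable) * 100);
    }
    maybeFire();
  });
  setInterval(maybeFire, 5000); // re-check on a timer so the time threshold alone can trigger it
}
```

Once an event like this fires, it shows up in Events Manager like any other custom event, which is what makes it available as an optimization target.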

The test ran in two parts, one when delivery was controlled by an A/B split test and one when it wasn’t. While the A/B split test was more expensive overall, the results were consistent regardless.

As you can see in the results above, optimizing for Link Clicks or Landing Page Views led to empty clicks. Very few of those clicks resulted in people spending two minutes and scrolling 70%. Of course, optimizing for that Quality Visitor event drove far more of those actions. It wasn’t close.

While it wasn’t the goal, I should note that only the Quality Visitor ad set drove newsletter registrations (it resulted in 5).

Not surprisingly, most of the clicks from the Link Click and Landing Page View ad sets were sent through the Audience Network placement, which can be problematic.

It was suggested that the results might have been different if the focus had been on only “high-performing” placements.

That led to Test #2.

Test #2 Setup

I have to mention that I think the call to remove Audience Network kind of misses the point. The fact that most of the clicks go through Audience Network when optimizing for Link Clicks and Landing Page Views is proof, in my mind, that only the click matters with these optimizations. Any quality action is incidental.

Even if you focus entirely on “high-quality” placements, the algorithm still approaches it the same when optimizing for Link Clicks and Landing Page Views. The focus is on cheap clicks and nothing else. While there may be more quality visitors due to the placement, the algorithm won’t care.

But let’s see if that theory pans out…

I set up the test for the second campaign mostly as I set up the first. The first difference is that I didn’t bother with the A/B test this time. All that did was raise the price for each ad set.

Of course, the other difference is related to placement. This time, I would only use News Feed.

Everything else would remain the same:

  • One ad set optimized for Link Clicks, one for Landing Page Views, and one for the Quality Visitor custom event
  • No custom audiences, lookalike audiences, or detailed targeting
  • Targeting by country only (US, UK, Canada, Australia)
  • Exclude people who already read the blog post
  • News Feed placement only

I would spend about $100 per ad set. While that’s a small sample, it’s still enough to generate meaningful results when you can get a decent volume of clicks.

Test #2 Results

Here are the results from the second test.

A few observations…

1. CPM is at least 2X higher across the board. This is not surprising since I was forcing the algorithm to only use News Feed, which is the most expensive placement.

2. CTR calmed down quite a bit. Again, not surprising when you toss out Audience Network. One interesting difference is that the CTR for the Quality Visitors optimization was actually better than for the Landing Page Views optimization. I doubt that this is meaningful, but it’s still interesting.

3. The Cost Per Link Click went up 3 to 4 times. One crazy thing is that the CPC is the same for the Landing Page View and Quality Visitor optimizations.

4. Cost Per Quality Visitor was clearly best for the Quality Visitor optimization. I’m not surprised by this, but it’s still good to get confirmation. The costs when optimizing for Link Clicks and Landing Page Views did come down, but it’s still not close.

Also, a side note on incidental newsletter registrations. While purchases and leads weren’t the goals, the assumption is that a quality visitor is more likely to subscribe to my newsletter. As I mentioned earlier, that happened 5 times when optimizing for Quality Visitors in the first test, while it never happened when optimizing for Link Clicks or Landing Page Views.

In this second test, there were only two newsletter registrations, but they were again both when optimizing for Quality Visitors.

1-Day Click

Something I also wanted to look at was 1-day click attribution results. The reason is that it’s entirely possible that these results are inflated by incidental remarketing to people who would be coming to my website anyway.

Since the Quality Visitor metric isn’t unique, I also want to eliminate people coming back multiple times during the next 7 days. So, let’s focus only on 1-day click — or people who clicked the ad and performed this action within a day.
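Meta computes this filter inside Ads Manager, but conceptually it comes down to two steps — keep only events within 24 hours of the click, and count each person once. A sketch with hypothetical data shapes, purely to illustrate the logic:

```javascript
// Illustration of the 1-day-click filter: count a conversion only if the
// Quality Visitor event happened within 24 hours of the ad click, and count
// each person at most once (dedupe repeat visits within the window).

const DAY_MS = 24 * 60 * 60 * 1000;

// clicks: [{userId, clickTime}], visits: [{userId, visitTime}] (epoch ms)
function oneDayClickConversions(clicks, visits) {
  // Keep the earliest click per user.
  const clickByUser = new Map();
  for (const c of clicks) {
    const prev = clickByUser.get(c.userId);
    if (prev === undefined || c.clickTime < prev) clickByUser.set(c.userId, c.clickTime);
  }

  // Count users with at least one visit inside the 24-hour window.
  const converted = new Set();
  for (const v of visits) {
    const clickTime = clickByUser.get(v.userId);
    if (clickTime !== undefined && v.visitTime >= clickTime && v.visitTime - clickTime <= DAY_MS) {
      converted.add(v.userId); // the Set dedupes repeat visits automatically
    }
  }
  return converted.size;
}
```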

As you can see, the costs do go up some, but they actually go up for all three ad sets. The results remain clearly the best when optimizing for Quality Visitors. This is the best way to actually get Quality Visitors.

I’m not surprised by that. You shouldn’t be either. But it’s nice to see the confirmation.

New Questions

While I feel really good about optimizing for Quality Visitors for the purpose of driving highly-engaged readers, I can’t help but ask a couple more questions that I want answered.

1. How much of this is remarketing? Even though the targeting was broad, it’s entirely possible that the algorithm starts with my website visitors since they are most likely to perform this action.

2. Will results be impacted by using a 1-Day Click Attribution Setting? I wasn’t able to use a 1-Day Click Attribution Setting in the first tests because the Link Clicks and Landing Page Views optimizations don’t allow that change to be made. But if I’m only running ad sets optimized for conversions (Quality Visitors), this will be possible.

The Next Test

So now, let’s run a new test based on these questions. I think it’s safe to throw out optimizations for Link Clicks and Landing Page Views. Those options clearly will not result in driving quality traffic.

This time, I want to test two different ad sets, both optimized for a Quality Visitor using a 1-Day Click Attribution Setting. This is something that could backfire, though. If I’m not getting results with the 1-Day Click Attribution Setting within a couple of days, I’ll reassess and may look to start over.

The difference in the two ad sets will be the targeting:

  1. Completely Broad in the US, UK, Canada, and Australia
  2. Completely Broad in the US, UK, Canada, and Australia, but EXCLUDING All Website Visitors (180 Days)

By excluding my website visitors in the second ad set — even those who haven’t fired the Quality Visitor event before — we’re going to get a much better sense of whom the algorithm is targeting with this type of optimization. There are a couple of potential scenarios:

1. The ad set that excludes my website visitors completely bombs. This would be evidence that even though I used broad targeting, it was mostly remarketing to my current audience.

2. The ad set that excludes my website visitors doesn’t bomb. If this is the case, it’s evidence that at least some (maybe more) of the Quality Visitors are from a cold audience, which would be pretty fascinating.

I don’t think either scenario is necessarily “good” or “bad.” Sure, it would be pretty cool to find out that the algorithm can find people who have never been to my website and who are likely to be quality visitors. It would show that the algorithm truly learns, even from a custom event that very few websites use.

But the main thing is that it would be nice to know one way or the other. Because if it turns out that it’s basically just going after my audience, it’s evidence that running remarketing campaigns the way we have in the past may no longer be necessary. Barring some exceptions (abandoned cart), the remarketing may be driven primarily by the optimization, not the audience.

I’m looking forward to finding out! Stay tuned.

Your Turn

What are your reactions to the latest test results?

Let me know in the comments below!

The post Quality Traffic Test #2: The Impact of Meta Ads Placement appeared first on Jon Loomer Digital.

One of my biggest battles with Facebook ads over the years has been driving high-quality traffic when promoting a blog post. I don’t want empty clicks. I want people who spend more time and are likely to perform other actions.

Look, I get it. The vast majority of advertisers are trying to get sales or leads from their ads. And while I do that, too, my blog is also important. I want to drive traffic to it, but it can’t just be any old traffic.

We know that there’s a huge hole in Facebook ads optimization if you optimize for surface-level metrics. If you tell Facebook that you want link clicks, you’re going to get lots of them — but probably not the ones you want.

You see, the ads algorithm doesn’t care about quality. It just cares whether you get the thing you asked for at the lowest cost. And you may get lots of clicks or video views, for example, if weaknesses in certain placements are exploited.

Needless to say, I’ve used a different route to drive quality traffic to my blog posts during the past few years. Still, I wasn’t fully confident that it was doing what I wanted it to do. I just knew it had to be better than the alternative.

A split test was in order.

Let’s take a look at the split test that I ran and what we can learn from it…

Which Optimization is Best?

If you set up a Traffic campaign, there are two primary ways that you can optimize: Link Clicks or Landing Page Views.

Depending on your choice, Facebook will optimize the delivery of your ads to get you the most link clicks or landing page views at the lowest possible cost. What’s the difference?

Link Clicks are the “number of clicks on links within the ad that led to advertiser-specified destinations, on or off Meta technologies.”

Landing Page Views are the “number of times that a person clicked on an ad link and successfully loaded the destination web page.”

It may sound like semantics, but a Landing Page View actually requires the landing page (and Meta pixel) to load. The Link Click does not. So, the Landing Page View is slightly better.

Slightly. Neither is the definition of a quality website visit.

That’s why three years ago, I created a series of custom events that fire on my website when people perform certain actions that might signify a quality website visit. For example, I’ve created events that fire when a visitor scrolls down a page or spends a designated amount of time on a page of my website.

Even better? I created an event that requires you to spend two minutes AND scroll at least 70% down a page.

The Split Test

I created a campaign with three ad sets that were identical in every way except for one thing: Optimization. One ad set optimized for Link Clicks, one for Landing Page Views, and one for the Quality Visitor event that I created.

All three ad sets would use the broadest of targeting. I selected the US, UK, Canada, and Australia, but no custom audiences, lookalike audiences, or detailed targeting were added. I excluded anyone who already read the blog post that I was promoting.

Each ad set would utilize Advantage+ Placements, so all placements were available.

In each case, the ad would promote a popular blog post related to using ChatGPT to create a Facebook ads strategy.

Once the campaign was started, I went into Experiments to set up a new test.

The key metric to determine a winner, of course, would be the Quality Visitor event.

While you might assume that the ad set optimized for Quality Visitors will result in the most Quality Visitors, who knows? It’s always possible it won’t go that direction.

Since I set up the split test this way, the ad sets were able to continue delivering even after the test ended. When the test is ongoing, there isn’t any overlap. A targeted person can only see an ad from one of the three ad sets. When the test is complete, that’s no longer the case.

Theoretically, you can get better results when you’re not constrained by a split test. So, that’s one reason I wanted to keep the ad sets going a little bit longer, even after a winner was found.

I didn’t spend a crazy amount of money on this test, but that also wasn’t necessary. We’re talking about actions that don’t cost a whole lot to get, particularly Link Clicks and Landing Page Views (Quality Visitors will presumably cost more).

I spent about $300 on this test, though I haven’t stopped it yet either. I’m confident that the results I’m going to share won’t change enough to impact what is uncovered.

The Results

Here are the primary metrics that we’ll want to look at:

  • CPM
  • CTR
  • CPC (Cost Per Link Click)
  • Cost Per Landing Page View
  • Cost Per Quality Visitor (2 Minutes + 70% Scroll)

I included CPM because the cost to reach people can do crazy things if it’s drastically different between ad sets. I also included CTR to give you an idea of engagement rate and whether it matters.
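For reference, each of these metrics is a simple ratio of the raw delivery numbers an ad set reports. A quick sketch with hypothetical inputs (the formulas are the standard definitions; the numbers are made up):

```javascript
// Derive the reported metrics from raw delivery numbers.
function adSetMetrics({ spend, impressions, linkClicks, qualityVisitors }) {
  return {
    cpm: (spend / impressions) * 1000,      // cost per 1,000 impressions
    ctr: (linkClicks / impressions) * 100,  // link click-through rate, as a %
    cpc: spend / linkClicks,                // cost per link click
    costPerQualityVisitor: qualityVisitors > 0 ? spend / qualityVisitors : Infinity,
  };
}

// Hypothetical ad set: $100 spend, 20,000 impressions, 400 clicks, 80 quality visits.
const m = adSetMetrics({ spend: 100, impressions: 20000, linkClicks: 400, qualityVisitors: 80 });
// m.cpm = 5, m.ctr = 2, m.cpc = 0.25, m.costPerQualityVisitor = 1.25
```

The last line is why CPM matters when comparing ad sets: a higher CPM inflates every downstream cost metric even if the click and conversion rates are identical.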

First, here are the results during the split test when the target audience was constrained…

The CTR was about 3X higher when optimizing for Link Clicks or Landing Page Views. The CPC was lowest when optimizing for Link Clicks, twice as much when optimizing for Landing Page Views, and about 5X higher when optimizing for a Quality Visitor. The Cost Per Landing Page View followed a similar pattern.

So, we can get significantly more volume of visitors by optimizing for Link Clicks or Landing Page Views than we can by optimizing for a Quality Visitor. But does optimizing for Quality Visitors lead to more Quality Visitors?

Yep. And it’s not close.

Even though this test resulted in a far higher cost than I usually want to see per Quality Visitor, that cost was about 1/4th of what it was when optimizing for a Landing Page View. And optimizing for Link Clicks, while bringing in volume, resulted in practically no quality visits at all.

That was during the test. Here’s the period of time after the test…

Everything stayed in line. Optimizing for Link Clicks resulted in lots of Link Clicks, but very little quality. Optimizing for Landing Page Views was very similar, but slightly more expensive and with a little bit more quality.

This time, optimizing for Quality Visitors resulted in a Cost Per Quality Visitor that I’m used to — just over $1. I should also point out that this happened while the CPM was the highest when optimizing for a Quality Visitor (more than twice as high as when optimizing for Link Clicks).

I also shouldn’t ignore an important side effect of driving quality traffic: Other actions. The ad set that optimized for Quality Visitors also resulted in five registrations, while the other two ad sets netted zero.

The Issue with Placements

Remember when I said at the top that optimization for Link Clicks and Landing Page Views can be problematic because it often takes advantage of weaknesses in placements? Wow. We have some evidence of that here.

The Audience Network placement is notorious for empty clicks, whether they are due to accidental clicks, bot clicks, or outright click fraud. If we use Breakdowns, we can see distribution by placement. And it’s really something.

When optimizing for Link Clicks, a staggering 99% of those Link Clicks came from Audience Network.

When optimizing for Landing Page Views, 96% of those Landing Page Views came from Audience Network.

When optimizing for Quality Visitors, 0 of those Quality Visitors came from Audience Network. Instead, 98% came from News Feed (most from mobile).

If this isn’t enough to convince you that Audience Network is problematic when optimizing for traffic actions, consider this: of the 607 people driven to my website from Audience Network across these ad sets, only 3 became Quality Visitors.

Need the final dagger? When optimizing for Quality Visitors, Facebook knew that Audience Network wouldn’t work. Not a single penny was spent there when the algorithm knew that a Quality Visitor mattered.

The Verdict

This is really good validation. While I’ve optimized for Quality Visitors (and other custom events) for the past three years, I’ve long heard whispers that the algorithm doesn’t actually learn from custom events. I still did it because it couldn’t be worse than optimizing for Link Clicks and Landing Page Views.

When Bram Van der Hallen wrote his blog post about optimizing for custom events for traffic, I told him about my concerns. Well, I’m glad Bram wrote that post because even though I had my doubts, I kept at it and started testing it more.

Yes. This really does work.

If you want to run ads that promote a blog post, you should care about quality website traffic. Do not optimize for Link Clicks or Landing Page Views. Create custom events that fire when actions happen that signify quality traffic activity and optimize for them.

In case you’re wondering, I have a whole lot of custom events that fire on this website. Not only do I have events for scroll and time spent, but I also have events that fire if you click to share, play the podcast player, or watch an embedded YouTube video.

Your Turn

Have you tested out optimizing for quality traffic? What have you seen?

Let me know in the comments below!

The post Split Test: Which Optimization Leads to the Most High-Quality Traffic? appeared first on Jon Loomer Digital.