You know you’re an old school PPC’er when your go-to best practice is to extrapolate every possible keyword variation into its own ad group and mirror each ad group into a separate campaign segment for each hour of the day.
Alright, that’s a bit of an extreme example, but I am guilty of creating some of the most overly segmented campaigns of all time. I would segment before I even knew whether there was keyword volume for all the variations. My teammates used to say, “That campaign has been Peter-ized,” meaning it was way overbuilt for practical purposes.
The campaigns may have been extremely overbuilt, but there was reasoning behind my madness. Most paid search technicians today weren’t around when it actually was common, and considered best practice, to segment campaigns by device, time, geography, and budget. These days, it’s easy to filter data down to those segments and make bid (not budget) adjustments for each dimension, all in a single campaign.
But it wasn’t always that way. You simply couldn’t control how much you spent on mobile devices unless you separated all mobile searches into their own campaign and gave it its own budget. There was no such thing as a device bid adjustment.
“Thou shalt follow the way of the segmenters!”
Believe it or not, segmentation was an art practiced by the most privileged of PPC managers. Only an AdWords expert with esteemed regard for his or her optimization techniques would know to segment not just by device, but by both device and location. And when we needed a go-to strategy to keep an advertiser’s budget from bleeding out on mobile, we would just segment!
“The most brilliant minds segment data!”
That was the logic, anyway. The reality is that we were dealing with four campaigns instead of one. Come to find out, the most brilliant minds built filters instead, so PPC managers could easily filter data down into segmented views.
Once in a while, I come across a PPC account that is seemingly “Peter-ized,” and I have to pray that the person who built it is as open to change as I was.
By that I mean some old school PPC people are really ignorant. They usually start by saying, “I’ve been doing AdWords since (insert year).” That’s when I think, “Ah shit. I’m about to spend the next hour trying to reverse someone’s archaic PPC thinking,” and then try to teach them that segmentation isn’t always the answer.
Well, this time I’m writing it down for the record, and for my own reference. The next time I see someone in need of “the talk,” they’re just going to get a link to this handy blog post on segmentation.
Before I jump into all the reasons why segmentation might not be the best direction, I just want to say: segmentation is still very much a best practice that can and should be used WHEN IT MAKES SENSE. For example, you can segment campaigns when you need to isolate a budget around a single campaign, after keyword volume and performance have been observed for the keywords in that campaign.
When we discover accounts that have unnecessary segmentation for any of the following reasons, we practice consolidation.
With Enhanced Campaigns … we just don’t have to segment anymore!
Since the advent of Google’s Enhanced Campaigns in 2013, we no longer need to segment campaigns by device, time, geography, or for budget purposes. That’s because Enhanced Campaigns allow for bid adjustments along each of those dimensions.
Today, if you want to control how much you spend on mobile devices, you make a mobile bid adjustment anywhere from -100% to +300%. The adjustment is reflected in your ad rank, and consequently your spend shifts to match the going clicks for each device type in proportion to how often each one is clicked. If you want to turn mobile off entirely, you simply bid down 100%.
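The arithmetic behind a bid adjustment is straightforward. Here’s a minimal sketch; the function name and dollar amounts are mine, purely for illustration:

```python
def effective_bid(base_bid, adjustment_pct):
    """Apply a device bid adjustment to a base CPC bid.

    adjustment_pct is the percentage adjustment, e.g. -50 for "bid down 50%".
    """
    return base_bid * (1 + adjustment_pct / 100.0)

# A $2.00 base bid with a -50% mobile adjustment becomes $1.00 on mobile.
print(effective_bid(2.00, -50))   # 1.0
# Bidding down 100% zeroes the mobile bid, effectively turning mobile off.
print(effective_bid(2.00, -100))  # 0.0
```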
Enhanced Campaigns definitely did not give users as precise a method for controlling budget across each campaign dimension as the legacy structure offered, because a bid adjustment only loosely correlates with actual spend. For instance, if you bid down 50% on mobile, it could reduce your mobile spend (and your traffic) by 90%.
Even though the new method of adjusting bids did not offer PPC managers any precision in managing budget, it did lend itself to other positive factors involving data volume, which we will cover below.
Who searches that way??
Sometimes over-segmentation is done with ad groups. This is “Peter-ization” at its finest. Here is an example of a local campaign built out for every keyword and city/zip combination possible.
Campaign: Garage Doors
- Ad Group: Garage Door by City
- Ad Group: Garage Door Repair by City
- Ad Group: Garage Door by Zip Code
- Ad Group: Garage Door Repair by Zip Code
The campaign is already set to target the correct geo-area, and not enough queries actually include a specific city or zip code. Even though 99.9% of these searches have local intent, less than 15% of them will include a city name and less than 2% will include a zip code in the actual search query.
It doesn’t matter whether it’s every city query you’re trying to capture or every keyword possibility you can imagine: more segmentation requires a lot more research and keyword planning.
Speaking from experience, you stand to save time and gain more volume and data by capturing all the traffic with broad match modified keywords. If you have hundreds of cities, most are far too granular for what people actually search. Review the volume first; then create city-specific keywords and ad groups based on what people are actually searching for.
Not Enough Data To Make Actionable Decisions
If you don’t have enough volume, over-segmentation can lead to less actionable data. In other words, it can take longer to accrue enough data to be considered actionable. Let’s say you have a keyword that you haven’t converted on yet. If you get 50 clicks on that keyword, and it took you 3 months to get those clicks, would you turn off that keyword or would you leave it running?
If you’re thinking in terms of data, it should be hard to answer that question. You only have 50 clicks. What if it converts on the 51st click? Then you would definitely run that keyword. What if it was 100 clicks? Even then, it’s still a very difficult choice to make.
If you converted a few times on 50 clicks, then you don’t have to think about it too hard. But you might have a lot of keywords that don’t convert every 50 clicks, or every 2 months for that matter.
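To put hedged numbers on that intuition: even a keyword that genuinely converts can easily show zero conversions in a small sample. Here’s a quick back-of-the-envelope in Python, where the 2% conversion rate is a hypothetical figure, not a benchmark:

```python
# Chance of seeing ZERO conversions in n clicks, given a true conversion rate.
# (Assumes each click converts independently -- a simplification.)
def p_zero_conversions(n_clicks, conv_rate):
    return (1 - conv_rate) ** n_clicks

# With a hypothetical true conversion rate of 2%:
print(round(p_zero_conversions(50, 0.02), 2))   # 0.36 -- roughly a 1-in-3 chance
print(round(p_zero_conversions(100, 0.02), 2))  # 0.13 -- still far from conclusive
```

In other words, pausing after 50 conversion-free clicks would kill a perfectly viable keyword about a third of the time.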
At one point in my career, I preached this to every advertiser I spoke with. They would rebut with, “So what! Who cares if I only spend for 50 clicks and don’t convert?”
The danger is not the lack of data for any individual keyword but rather the lack of data in aggregate, across all the keywords with insufficient data. As you segment campaigns or ad groups, you add more keywords, and each one accrues less data than it would have if you hadn’t segmented at all.
“Whichever portion of your budget you put towards keywords where there is not enough data to make actionable decisions, that is the portion of your ad spend that is unactionable.”
I have probably said that sentence over 1000 times in real conversations. The danger is not in each keyword having low volume, but in the proportion of your budget going to keywords you can’t measure within a reasonable time period.
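That proportion is easy to compute from a keyword report. Here’s a sketch with made-up numbers; the 50-click actionability threshold and every figure in the list are hypothetical:

```python
# Share of spend going to keywords that haven't accrued enough clicks to act on.
# The 50-click threshold and all figures below are hypothetical illustrations.
keywords = [
    {"keyword": "garage door repair", "clicks": 420, "cost": 900.0},
    {"keyword": "garage door repair north hills", "clicks": 12, "cost": 40.0},
    {"keyword": "garage door repair 91343", "clicks": 3, "cost": 11.0},
    {"keyword": "garage door installation", "clicks": 180, "cost": 350.0},
]

ACTIONABLE_CLICKS = 50
total_spend = sum(kw["cost"] for kw in keywords)
unactionable_spend = sum(
    kw["cost"] for kw in keywords if kw["clicks"] < ACTIONABLE_CLICKS
)

print(f"{unactionable_spend / total_spend:.0%} of spend is unactionable")
```

The more you segment, the larger that unactionable share grows, because the same budget gets spread across more keywords that each accrue data more slowly.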
Low Volume, Low Quality Scores
As just mentioned, a problem with getting more segmented or granular is that keyword search volume gets lower. If search volume is low, Google is likely to view that keyword as less relevant. Quality Score is a relevancy score comprised of several different indicators, one of them being overall keyword search volume. If your volume is low, your score might be low, and your actual cost per click will end up higher than the average for similar, relevant keyword variations that drive more volume.
Local Geo-Setting Segmentation Creates Dark Spots
This one defied all the logical best practices my team and I swore by. If you own a local business and your goal is to be as efficient as possible with a small budget, then the logical move is to segment your geographical settings in AdWords. Or so we thought.
To be clear, I’m not talking about segmenting ad groups and keywords here. I’m referring to choosing ONLY individual cities or areas as geo-targets in your campaign settings.
We ran one client’s campaigns for years using specific geo-targets. It seemed to work swell, though we had no other configuration to compare it against.
Then one of our team members, a skilled optimizer who usually optimized for efficiency, looked at the geo-locations of the clicks and saw that many were coming from well outside a drivable distance to the business. She naturally went into the campaign’s geo settings and tightened them down to only show ads in specific cities within 10 miles (which I don’t find drivable for many things, but maybe that’s an L.A. thing) of all four store locations.
Needless to say, even though the map circles overlapped and completely covered the location areas by at least 10 miles, the client had their worst revenue year ever.
Even though the locations were well covered and ads were set to serve only within a reasonable distance, there were dark spots. Oh, were there dark spots! Dark spots happen when there is a lapse in ads served due to ISP location, or where Google thinks your internet service provider is. You may be at your office right now, but Google might think you’re 18 miles over in the next county.
Quick Fix: GO BIG and cover all the areas, including those way outside your specific geo area. You’ll find that you won’t be as efficient, but you’ll maximize the revenue for your area.
Increased Likelihood of Double Bidding
Double bidding, which we also call ‘cross bidding,’ is a very common issue we see in most advertising accounts today. In fact, it’s largely unavoidable unless you ONLY use exact match in all your campaigns.
It works like this: when you open your keywords up to phrase and broad match queries, you allow the same query to trigger multiple keywords. Often one of those keywords sits in an unintended ad group or campaign, so the visitor doesn’t see the intended ad copy or landing page. Google is also likely to serve the keyword that generates the highest cost per click, regardless of the intended ad group.
Here at Conversion Giant, we spend a lot of time cleaning this up. It’s a tedious process because we have to work in spreadsheets, extracting each instance of double bidding. Then we have to assign an ad group level negative-exact match keyword to the unintended ad group in order to force the query to trigger the intended one. It’s our version of conducting traffic!
When queries start triggering their intended ad groups, conversion rates rise, click costs fall, and overall cost per conversion drops. This is usually one of the biggest eye-openers for an advertiser whose PPC account otherwise seems to “look” optimized.
Even though this might seem like a daunting process, when you pull your analysis you’ll probably find that the highest-spend instances of double serving involve less than 10%-20% of your keywords. Start where you can make the biggest impact now; come back for the rest when you have time!
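The spreadsheet work described above can be sketched in code. This is a simplified illustration, not a Google Ads API script: the column names, the sample rows, and the rule that the highest-click ad group is the “intended” one are all my assumptions.

```python
from collections import defaultdict

# Rows as they might come out of a search terms report export.
rows = [
    {"query": "garage door repair", "ad_group": "Repair - General", "clicks": 120},
    {"query": "garage door repair", "ad_group": "Garage Doors - General", "clicks": 15},
    {"query": "new garage door", "ad_group": "Garage Doors - General", "clicks": 60},
]

# Group every report row by its search query.
by_query = defaultdict(list)
for row in rows:
    by_query[row["query"]].append(row)

negatives = []  # (ad group, negative exact-match keyword) pairs to upload
for query, hits in by_query.items():
    if len(hits) < 2:
        continue  # only one ad group triggered -- no double bidding here
    # Assume the ad group with the most clicks is the intended home.
    intended = max(hits, key=lambda r: r["clicks"])
    for hit in hits:
        if hit is not intended:
            # Exact-match negative forces the query back to the intended ad group.
            negatives.append((hit["ad_group"], f"[{query}]"))

print(negatives)  # [('Garage Doors - General', '[garage door repair]')]
```

In a real account you’d pull the export from the UI or API and review each pair by hand; the point is only that the traffic-conducting logic is mechanical once you can see which queries trigger more than one ad group.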
Increases How Often Underperforming Ads Are Served
This goes back to the examples of low volume and unactionable data. Earlier we covered how low volume makes keyword data unactionable; the same goes for ads. The more ad groups you have, the less volume each one is likely to get, and all the keywords in an ad group share the same ads. Therefore, if an ad group has lower volume, it will take longer to split test and remove underperforming ads.
Messier, Hard to Manage, Takes Longer
Admittedly, I was initially opposed to Enhanced Campaigns, mostly because Google removed a lot of tablet data transparency with the rollout. But after working with consolidated data for the last four years, life is much better.
We don’t have to open or sift through countless low volume ad groups to check performance. We are more efficient managers because our data is more actionable. We get to act quicker. We analyze faster, which means we can produce strategies much faster.
And for clients, we can get results faster, and that’s what everyone really wants, right?
With great data comes great responsibility.