Okay folks, if I admit that the headline is just a touch “clickbait-y,” will you concede that you clicked because – in your heart of hearts – you’re just not that comfortable with your keyword research process? Why is that? I mean, if you’re paying attention to fellows like Sam Crocker and Richard Baxter, you should have a serious arsenal of keyword research & keyword selection tools and methodologies. Right?
The problem is that those tools share the same issue as anyone using Google data: we may all be working with manipulated search data. And I think a lot of us have felt that in our bones for years. It’s time to confront some uncomfortable coincidences, contradictions, and facts about the search field, and even about our own methods.
I spent some years doing SEO for law firm websites at a company that specialized in services for the legal vertical. The nice thing about that was that we had really good market intelligence about the most valuable practice areas for our clients. The marketing team was also smart enough to create variable pricing that harmonized reasonably well with how competitive each market (city) would be online. A lot of carefully collected data and research went into marketing and product planning. When it got to us fulfillment people, I probably shouldn’t say much more than that it was interesting to see what was on an official keyword list and ranking well versus what was actually driving traffic and conversions. You should know that our marching orders were to optimize against what drove leads in addition to what was on the official keyword list. I say this because we were successful enough to grow the business from a couple dozen clients when I started out to 1,000+ in a year or so.

Throughout that time, one of the dashboard metrics that HAD to be reported to clients was the ranking for each official campaign keyword. At first we scraped, but we quickly decided we couldn’t scale that (cheaply), so we found a vendor to take over rank reporting. Most (if not all) specialists would still scrape and check rankings manually every day, whether at a client’s request or just because they were anxious to see if the links they’d built recently were having any impact. Keep this in mind.
Something Fishy In the State of Minnesota…
One day while looking for a way to integrate more data points into a keyword selection project, I noticed something very strange. Among the most in-demand legal services are:
- Divorce law
- Personal injury law
- Bankruptcy law
While working with Google Insights for Search, I put these together with the seed term “lawyer” and the output really surprised me.
Minnesota? The nation’s capital for broke and negligent divorcees, all at once? Something wasn’t right. I zeroed in on divorce and looked up divorce statistics by state, figuring I would learn something. As it turns out, Minnesota is not leading the nation in divorce; it’s not even in the top ten! Curious about the state of Google’s own data collection infrastructure, I decided to pit Google against itself and see what Trends had to say (there’s a thought-provoking article by Wil Reynolds that I recommend, citing some discrepancies within Google’s own tools for the same keyword).
Sure enough, Google Trends saw more interest coming out of Minnesota, specifically the St. Paul area.
Thinking hard about what could create such high search demand where, ostensibly, there shouldn’t be as much relative to other states, I remembered how we used to collect ranking data: querying the FUCK out of Google. On a hunch, I looked up our primary competitor to see where they were based. Sure enough, competitor HQ is a suburb of St. Paul, MN. Did they find a way to scale their rank scraping better than we could? And hang on, surely this would be something Google would check for and filter anyway, right?
Here’s the thing. Remember that folks were still running queries on Google daily from our IP, which resolved to a different state from where we were actually located. This could amount to hundreds of queries per day from our IP, perhaps thousands per month. Take a look at that screen grab again: our IP’s location is definitely on there. And it, too, ranks higher on the list than real census data says it should.
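This kind of anomaly is easy to screen for systematically: rank the states by Google’s reported search interest, rank them again by an offline reality check (divorce rates, in this case), and flag states where the two rankings diverge sharply. Here is a minimal sketch of that idea; every number below is a made-up placeholder, not real Insights or census data:

```python
# Cross-check Google's regional search interest against an offline metric.
# All figures are illustrative placeholders, not real data.

search_interest = {   # normalized 0-100 interest scores (hypothetical)
    "Minnesota": 100, "Nevada": 64, "Arkansas": 60, "California": 55,
}

divorce_rate = {      # divorces per 1,000 residents (hypothetical)
    "Minnesota": 3.0, "Nevada": 6.6, "Arkansas": 5.7, "California": 4.1,
}

def rank(d):
    """Map each state to its 1-based rank (1 = highest value)."""
    ordered = sorted(d, key=d.get, reverse=True)
    return {state: i + 1 for i, state in enumerate(ordered)}

interest_rank = rank(search_interest)
offline_rank = rank(divorce_rate)

# Flag states whose search interest ranks well above their offline demand:
# a large positive gap means Google sees more "interest" than reality supports.
suspects = [
    state for state in search_interest
    if offline_rank[state] - interest_rank[state] >= 2
]
print(suspects)  # with these placeholder numbers: ['Minnesota']
```

With these toy numbers, Minnesota tops the search-interest ranking but sits last in divorce rate, so it gets flagged, which is exactly the shape of the discrepancy described above.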
Take a Look Around
Just last week I had a look around the weight loss vertical. I noticed a few states pretty well dominating, some of them showing more regional search interest than more populous states. Comparing this map to this map, once again things failed to add up. That is, until I discovered there’s a chain of weight loss centers operating out of Texas, Georgia, and Florida.
Plug the following into AdWords and have a look at the suggestions and search volumes on an exact match basis:
- Weight loss Houston
- Weight loss Atlanta
Finding Refuge from the Noisy Crowd(sourcing)
I haven’t even touched on the well-known (but little talked about) way many search providers prospect for leads, but suffice it to say it’s only adding to the problem. I submit that what many of us have suspected for a while is now more demonstrably true: rank-checking and competitive activity are skewing keyword suggestions, keyword volumes, and even regional interest. If your keyword research process does not explore the world beyond Google, I truly worry about the shape of things to come.
To be clear, I’m not suggesting Google’s data is complete garbage. But for SEO, I tend to treat it as directional, a source of inspiration. My PPC colleagues may find Google’s data to be much more spot on; I hear few complaints from them, but then they have a traffic estimator. Must be nice!
So what can you do to avoid creating a keyword list that is little more than a misleading pile of crap? I offer the following suggestions:
- Go beyond Google – Use other tools, other data providers, and read up on keyword research methodologies from the folks that do it seriously (*cough* *cough*).
- Know your space – Who are your competitors? What do you know about them? What are the relevant OFFLINE data points you can use to make sense of what you’re seeing come back from your keyword intelligence data providers? Census data worked gangbusters for me on a couple of occasions, by the way. The more information you have, the better you’ll be at spotting bullshit.
- Understand your target consumer – Follow them around the web. Go where they like to hang out on the web and just eavesdrop a bit. What you find there usually yields better starting points for keyword ideas.
- Use existing performance data – I get giddy when a client has existing Analytics data going back at least a year, especially if they’ve enjoyed some visibility AND had some conversion mechanism on the website. Picking winners is almost academic at that point: take the keywords that are contributing to the bottom line and blow those out first. Simplistic, I know, but if I attempted to do a better job than Nick Eubanks at explaining competitive keyword analysis, I’d likely go mad. Just go read that.
- Remember that no one person or entity knows it all – So don’t bank on one data provider, one tool or one loudmouth SEO (yours truly included) to guide you all the way through. Learn from lots of folks and try a bunch of things yourself and develop your own instinct. Once you’ve got the instinct, always keep your eyes open because you’re never done learning. But at least you’ll be able to filter out the nonsense.
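Tying a few of these suggestions together: when you do have performance data, a dirt-simple way to pick first targets is to sort by what actually converts and use Google’s reported volume only as a tiebreaker. A quick sketch, where every keyword, volume, and conversion count is a hypothetical example:

```python
# Prioritize keywords by demonstrated conversions from analytics data,
# using Google's reported search volume only as a tiebreaker.
# All rows below are hypothetical examples, not real campaign data.

keywords = [
    {"term": "divorce lawyer st paul", "reported_volume": 9900, "conversions": 2},
    {"term": "divorce attorney mn", "reported_volume": 1300, "conversions": 11},
    {"term": "file for divorce in minnesota", "reported_volume": 590, "conversions": 7},
]

# Sort by conversions first (what pays the bills), then by reported volume.
priority = sorted(
    keywords,
    key=lambda kw: (kw["conversions"], kw["reported_volume"]),
    reverse=True,
)

for kw in priority:
    print(f'{kw["term"]}: {kw["conversions"]} conversions')
```

Note how the highest-volume keyword lands at the bottom of the list: reported volume alone would have had you chasing the wrong term first.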
Please note that I’m not leaving out specific info about the companies and other pertinent details to be deceptive. If you’re clever at all, you can figure most of it out; I’m just being extra careful not to be interpreted as giving away anything sensitive. On a related note, my investigations were my own and NOT sanctioned or otherwise supported by any employer, past or present. Now go put a skeptical eye on your keyword portfolio!