Too many organizations still publish content without clear objectives and KPIs.
For organizations to move beyond “just publishing content,” they need to adopt a different mindset.
They need to reflect on their past work, think critically, and ask for access to the data needed to assess content performance in terms of traffic, crawls, and links generated.
I know what you may be thinking: “Wait, are you hinting at content teams asking for log file data?”
Yes, but I’ll do you one better: I want content teams to start asking for real-time log file insights.
For those familiar with traditional, time-consuming log file analysis, let me tell you this is different.
Times have changed, and content teams can now tap into the valuable insights log files hold.
Let’s change that mindset with the four steps below.
Step 1: Content Teams Start Thinking Critically
Rarely do content teams say, “I want this content piece discovered by search engines the same day, crawled within three days of publishing, indexed within a week, and driving 200 organic visits and two leads a month three weeks after publishing.”
Unfortunately, many organizations still just publish a set number of content pieces a month because “That’s the way we’ve always done things,” or “We need fresh content to keep up our SEO performance.”
After publishing, they quickly move on to the next piece. And at the end of the month, they’ve achieved their objective to publish four content pieces and are “done.”
They don’t reflect on how long it took for search engines to crawl their newly published or updated content, how long it took to get indexed, and how long it took before the article started to rank and drive organic traffic.
And that’s a terrible shame.
Because it’s highly unlikely that this old way of doing things is really moving the needle.
Sure, everyone’s keeping very busy and I’m sure it’ll do some good, but the content will never live up to its potential. That’s a waste of money.
Don’t get me wrong. I get why it’s happening.
It’s a combination of doing what’s worked (or may have worked) in the past and a lack of a centralized place where content teams can find all the insights they need to reflect on their work’s performance effectively.
Thinking critically means content teams are asking themselves:
- Why did article X start driving meaningful organic traffic nearly instantly after publishing? Why was it crawled so fast? Was it picked up by the press? Did it go viral on social media?
- Are we seeing very different behavior when comparing the performance of content in site section A compared to section B? Does it get recrawled more often? If so, why?
- Does section A have many more internal and external links? Does it have better-performing content in general?
Where can they find the answers to these questions?
Step 2: Getting Your Hands On Log File Analysis Insights
Getting your hands on log files has been notoriously difficult. There are all sorts of challenges.
For starters, they may not be available anymore. Even if they are still available, they are a pain to get because of red tape relating to PII (personally identifiable information) concerns.
You’ll see that it’s a slow and painful process in most cases. There’s a reason most organizations perform a traditional log file analysis only once or twice a year.
Nowadays, many sites use CDNs to provide fast-loading sites to both visitors and crawlers.
And the beauty of CDNs is that they provide log files in real time; you can easily pull logs and ensure they don’t include any PII.
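As a minimal sketch of that scrubbing step, here’s how you might strip PII fields from CDN log records before handing them to a content team. The field names are assumptions for illustration; adjust them to your CDN’s actual log schema.

```python
import json

# Fields commonly treated as PII in CDN logs (hypothetical names --
# map these to whatever your CDN actually emits).
PII_FIELDS = {"client_ip", "user_id", "cookie", "authorization"}

def scrub_log_entry(entry: dict) -> dict:
    """Return a copy of a CDN log entry with PII fields removed."""
    return {k: v for k, v in entry.items() if k not in PII_FIELDS}

# Example: one JSON-lines record, as many CDNs can export them.
raw = ('{"timestamp": "2023-05-01T12:00:00Z", "url": "/blog/post", '
       '"status": 200, "user_agent": "Googlebot/2.1", '
       '"client_ip": "203.0.113.7"}')
clean = scrub_log_entry(json.loads(raw))
print(clean)  # same record, but without the client_ip field
```

Running a scrubber like this at ingestion time means the PII never lands in the content team’s dataset in the first place, which tends to shorten those red-tape conversations considerably.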
Step 3: Provide Content Teams With Easily Digestible Insights
Log files also hold valuable, non-technical insights for content teams, even though their information needs differ from those of technical SEO teams.
Content teams need easily digestible, content-focused insights, and they need them in real time because they’re making changes all day across a lot of different content.
It needs to be a walk in the park so they can answer questions like:
- Has Google crawled these newly published pages? And what about these pages that we recently updated?
- How frequently does Google crawl pages in website section X? How does that compare to section Y?
- Did Google crawl pages when they had the wrong title tags? Or that time when they contained broken links?
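The first question above boils down to a simple lookup: given scrubbed logs and a list of newly published URLs, which ones has Googlebot already hit? A rough sketch, assuming a CSV log export with hypothetical column names:

```python
import csv
from io import StringIO

# A few scrubbed log lines in CSV form (timestamp, url, user_agent).
# The format and columns are assumptions; adapt to your own export.
LOGS = """timestamp,url,user_agent
2023-05-01T08:14:00Z,/blog/new-post,Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
2023-05-01T09:02:00Z,/blog/new-post,Mozilla/5.0
2023-05-02T10:30:00Z,/blog/other-post,Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
"""

def first_googlebot_crawl(log_csv: str, urls: set) -> dict:
    """Map each URL to the timestamp of its first Googlebot hit, if any."""
    first_seen = {}
    for row in csv.DictReader(StringIO(log_csv)):
        if "Googlebot" in row["user_agent"] and row["url"] in urls:
            first_seen.setdefault(row["url"], row["timestamp"])
    return first_seen

crawled = first_googlebot_crawl(LOGS, {"/blog/new-post", "/blog/uncrawled"})
print(crawled)  # URLs missing from the result haven't been crawled yet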
Knowing what crawl behavior search engines are exhibiting is essential to improving your SEO performance, because having pages (re)crawled is the first step, after discovery, in Google’s crawling, indexing, and ranking pipeline.
When content teams can answer the questions above, they can start connecting the dots and will learn how their work has influenced search engine behavior on the site.
They can even calculate and improve:
- Average time to crawl.
- Average time to index.
- Average time to rank.
- Average time to traffic.
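The first of those metrics can be computed directly by joining publish timestamps from your CMS against first-crawl timestamps from your logs. A minimal sketch with made-up data:

```python
from datetime import datetime

# Hypothetical (publish_time, first_crawl_time) pairs per URL,
# e.g. joined from your CMS export and your scrubbed log data.
events = {
    "/blog/a": ("2023-05-01T09:00:00", "2023-05-02T15:00:00"),  # 30h
    "/blog/b": ("2023-05-03T10:00:00", "2023-05-03T22:00:00"),  # 12h
}

def avg_time_to_crawl_hours(events: dict) -> float:
    """Average hours between publishing a URL and its first crawl."""
    deltas = [
        (datetime.fromisoformat(crawled)
         - datetime.fromisoformat(published)).total_seconds() / 3600
        for published, crawled in events.values()
    ]
    return sum(deltas) / len(deltas)

print(round(avg_time_to_crawl_hours(events), 1))  # -> 21.0
```

Average time to index, rank, and traffic follow the same pattern; only the second timestamp source changes (index status, rank tracker, analytics).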
Zooming out, this makes for great input for SEO traffic forecasts too!
Step 4: Mapping Insights To Content Inventory
The last piece of the puzzle is mapping these insights to your content inventory, which also tracks all of your changes to the content.
And we want to stay far away from putting this together manually in spreadsheets – you want an always up-to-date content inventory to which your log file insights are automatically tied.
Off-the-shelf solutions offer all this, or you could build your own custom solution.
Both are fine. What matters is that you empower your content team!
Pro tip: You could even integrate with Google Search Console’s URL Inspection API to determine whether the content is indexed!
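To make that concrete, here’s a rough sketch of the request body the URL Inspection API expects (`POST https://searchconsole.googleapis.com/v1/urlInspection/index:inspect`) and of interpreting the verdict in its response. OAuth authentication and the HTTP call itself are omitted for brevity; the helper names are mine, not Google’s.

```python
# Build the request body for Google Search Console's URL Inspection API.
def build_inspection_request(site_url: str, page_url: str) -> dict:
    return {"inspectionUrl": page_url, "siteUrl": site_url}

def is_indexed(api_response: dict) -> bool:
    """True if the index status verdict in an inspection response is PASS."""
    result = api_response.get("inspectionResult", {})
    verdict = result.get("indexStatusResult", {}).get("verdict")
    return verdict == "PASS"

body = build_inspection_request(
    "https://www.example.com/",
    "https://www.example.com/blog/new-post",
)
print(body)
```

Note the API is rate-limited per property, so for large inventories you’d check only recently published or updated URLs rather than the whole site.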
Wrapping Things Up
When content teams ask the right questions, reflect on their work, and have everything they need at their fingertips to answer those questions, all of their efforts will go a long way.
You’ll see that working on improving the SEO performance of sites is much more accessible for everyone involved. It’ll be more fun, and management will likely buy in faster.
Empower your content team, and be amazed by their contribution to the site’s SEO performance!