Last summer, I was invited to speak at Jo Turnbull's brilliant TurnDigi conference. I have tons of respect for Jo as she puts in the effort to host events that feature a diverse line-up and always welcomes first-time speakers.
I chose to talk about the difficult part of my job, not the Technical SEO side, but the getting it 'implemented' side.
You see, in 2019, back when we had physical conferences and actually met each other (good old days), I did a 220-slide talk on indexability issues at BrightonSEO and MozCon. But there was one slide that resonated the most with everyone.
I did this talk when I was agency-side but lots of client-side SEOs told me they also fully relate to it. And now that I've been client-side myself for the past 18 months, I can relate to it more than ever.
So this time round, instead of reflecting on the problem, I want to focus on the solution.
The Audit Curse
When I was agency-side, I used to send over 100 pages' worth of audit documents to each client. But the number of recommendations that actually got implemented? Minimal!
A 10-day technical audit retainer should not equate to a 100-page audit document.
I realised that there was something fundamentally wrong with the way I delivered audits.
So, I'd like to introduce you to the power of three:
As Tech SEOs, recommendations are the bread and butter of our everyday work. I used to be in the habit of cramming all recommendations into one massive audit, but I soon came to realise that it's better to conduct one audit at a time. For example:
- Schema Audit
- Mobile Audit
- Speed Audit
- On Page Audit
- Internal Links Audit
- Indexability Audit
This not only helps you focus on one thing at a time but also makes your audits feel much more digestible to those on the receiving end.
It's also important to split out your recommendations by site template - this is especially relevant for large websites. There's little use in providing appendix files that include tens of thousands of URLs. Categorising your recommendations by template makes them easier to comprehend. For example: product pages, search pages, content hubs, etc.
Finally, before you deliver your recommendations, ask the Product Team who will be working on these fixes how they'd like them delivered. Is it in the form of a Word document? Do they prefer to collaborate with you via Google Sheets? Or will you make their life easier by directly creating tickets in JIRA?
There are lots of different prioritisation tactics that can be used - I like to stick to T-shirt sizing because it's the simplest.
- S = Small
- M = Medium
- L = Large
It's easy for us to prioritise purely based on SEO Impact (how much of an impact will this recommendation likely have on your overall organic traffic?), but that's only one part of the puzzle.
Something I quickly learned from working alongside Product Teams is that we need to take Tech Effort into consideration: how much effort will this recommendation likely take?
As an SEO, do NOT attempt to size up Tech Effort yourself on behalf of your engineering team; ask them directly!
If your Product Team works in an agile way, they'll tend to size Tech Effort based on the number of sprints:
- S = 1 sprint
- M = 2 sprints
- L = 3 sprints
- XL > 3 sprints
So for each identified recommendation:
For example, a recommendation that has an SEO Impact of Large and a Tech Effort of Small can easily be prioritised as High. And with that comes your Prioritisation Matrix:
You're welcome to make up your own rules. Once you have these and you're in agreement with your Product Team about them, you can easily plug that table into Google Sheets, then use a VLOOKUP (or INDEX MATCH for VLOOKUP haters) to match it against each recommendation in all future audits.
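To make the lookup concrete, here's a minimal sketch of that matrix as code. The specific rules below are an assumption for illustration (your agreed matrix may map the combinations differently), but the logic is the same as the spreadsheet lookup: each (SEO Impact, Tech Effort) pair resolves to one priority.

```python
# Hypothetical Prioritisation Matrix: (SEO Impact, Tech Effort) -> priority.
# The exact mappings are an example; agree your own rules with your Product Team.
PRIORITY_MATRIX = {
    ("L", "S"): "High",
    ("L", "M"): "High",
    ("L", "L"): "Medium",
    ("M", "S"): "High",
    ("M", "M"): "Medium",
    ("M", "L"): "Low",
    ("S", "S"): "Medium",
    ("S", "M"): "Low",
    ("S", "L"): "Low",
}

def prioritise(seo_impact: str, tech_effort: str) -> str:
    """Look up the agreed priority for one recommendation,
    equivalent to a VLOOKUP/INDEX MATCH against the matrix table."""
    return PRIORITY_MATRIX[(seo_impact, tech_effort)]

# Example recommendations (names are made up for illustration).
recommendations = [
    ("Fix canonical tags on product pages", "L", "S"),
    ("Add schema to content hub articles", "M", "L"),
]
for name, impact, effort in recommendations:
    print(f"{name}: {prioritise(impact, effort)}")
```

In a spreadsheet you'd get the same result by concatenating the two sizes into a lookup key; in code, a dictionary keyed on the pair does the job directly.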
One at a time gets the job done.
Rather than focus on the nice-to-haves, let's make sure we first nail down those must-haves!
If you want to get things done, then be the product owner of your own recommendations. Write those JIRA tickets yourself. Attend stand-ups, backlog refinement and sprint planning. Make sure you're fully accessible and collaborative.
Finally, test and monitor every step of the way:
- Test on a staging environment before pushing live
- When live, monitor changes in logs/traffic/rankings
- Update recommendation status in audit documents
- Communicate with all relevant stakeholders on progress
The RPI Framework
In summary, I'd like to introduce you to The RPI Framework! Because every white dude out there gets to create his own framework (they even do TED Talks and publish bestselling books on them!), I figured I'd give it a try too.
And please remember, this is an ongoing cycle.
There is no “done” when it comes to Technical SEO. It is a continuous process of improvement.
And with that, it's a wrap! If you've made it this far, thank YOU! You can find my slides from TurnDigi embedded below. Feel free to reach out directly if you have any questions.