Recently a friend of mine had his website redeveloped and asked me to take a quick look after it had gone live. The site looked fairly good at first glance: well designed, with clear and intuitive navigation and plenty of useful content. However, the developers had launched the site without redirecting the pages from the old site to the corresponding pages on the new one, and had also failed to create a custom 404 error page. They also had no stats package on the site, meaning there was no way to measure traffic during the launch and thereafter.
This led me to consider whether developers should understand the basic principles of how search engines work. In this particular case, by ignoring the merits of redirects, knowingly or unknowingly, the developers may have caused their client some pretty serious damage: any existing search engine rankings are lost, and with them the traffic.
I think the days are gone when a developer can simply ignore the fundamentals of how search engines work; such is their role in how all of us navigate the web and find information. When I first started with Leapfrogg nearly four years ago I would often work with clients whose websites had been built entirely in Flash or frames. They would approach Leapfrogg after the site had been built (mistake! – the ‘after’ bit, not approaching me!) and then ask me to deliver an SEO strategy. I would explain the issues the client faced with their new website, and their first question would be “shouldn’t my developers have taken care of this, or known about these issues?”
Having worked in a web design company previously, I would often answer that it simply depended on the brief given to the web design company at the time. If the site had been built in a way incompatible with the demands of search engines, it was likely because the developer did not possess the necessary knowledge and had simply built the site to brief.
However, times have changed. Search engines are even more integral to our everyday use of the Internet, to the extent that developers should understand how to build a search engine ‘friendly’ website. Remember, this is very different to search engine optimised. The latter involves detailed research into the target audience, keyword strategy, outstanding copy and an appreciation of the need to add a continuous stream of good quality content to the website. This work should all be done very much with the target audience in mind. That level of detail is often outside the remit of the developer, which is absolutely fine. However, I do think that developers need to consider search engine ‘friendliness’ as an integral element of their offering. This involves the right choice of technical platform, i.e. a CMS, considerations around hosting and, in particular, navigation. If the site is an update to one that already exists then a migration strategy is integral, normally involving the use of redirects.
I think it is important not only for developers to possess this knowledge but also to manage the expectations of their clients. Don’t say that you do SEO when in reality all you are doing is building a search engine friendly site. This doesn’t help anybody when SEO in 2009 is an awful lot more complex than it was five years ago and involves a great deal more work in ‘off the page’ marketing techniques, including link building, content and social media.
To help developers, I have compiled a list of seven deadly sins to avoid when building and launching a new website. Whatever brief you have been given, I see these as integral to the service offered to your clients:
1. Do not build the site in frames. I am amazed that so many clients still approach Leapfrogg with sites using this dated technique. Equally, do not build the whole site in Flash. Whilst search engines are getting better at indexing Flash-based content, they have a long way to go yet. Using areas of Flash is fine, by the way; just not the entire site.
2. Pick the right technology. For example, if using a content management system, what control does it allow over ‘on the page’ elements, such as meta data? Importantly, what format of URLs does it churn out? Keyword-based with as few parameters as possible is ideal.
3. Build the navigation to be ‘spiderable’ and ensure you implement a site map and other spidering/usability aids, such as a breadcrumb trail.
4. Consider where the site will be hosted, and the number and type of other sites on the server.
5. Give your client access to free tools, such as Google Analytics. It is so easy to set up that there really is no excuse not to create an account.
6. If you are redeveloping an existing site, and assuming the URLs are changing, perhaps the biggest sin you can commit is failing to set up appropriate redirects. Redirects are crucial to maintain a site’s presence across search engines (assuming it has one!) and also to pass on the value of any links pointing at old pages.
7. Implement a custom 404 error page. This is again really important if you have changed the site structure. If visitors hit a standard 404 error page they may not find their way into the site at all, and the client could potentially lose new and existing customers.
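On the site map point (sin three), alongside an HTML site map for visitors it is worth giving spiders a direct list of URLs via an XML sitemap following the sitemaps.org protocol. A minimal sketch, using placeholder addresses, might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; <loc> is the only required child element -->
  <url>
    <loc>http://www.example.com/</loc>
  </url>
  <url>
    <loc>http://www.example.com/products/</loc>
  </url>
</urlset>
```

Most decent content management systems can generate a file like this automatically, and the finished sitemap can be submitted to the search engines via their webmaster tools.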
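To make sins six and seven concrete: assuming the site runs on Apache, both the redirects and the custom error page can be handled in a few lines of a .htaccess file. The paths below are hypothetical examples, not anyone’s real URLs:

```apache
# Permanently (301) redirect old URLs to their new equivalents so that
# rankings and link value carry across to the new pages
Redirect 301 /old-about.html http://www.example.com/about/
Redirect 301 /products.php http://www.example.com/products/

# Serve a custom, branded 404 page for anything that slips through
ErrorDocument 404 /404.html
```

The 301 status matters: it tells the search engines the move is permanent, whereas the default temporary redirect will not pass on the old pages’ value in the same way.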
I would be keen to hear your thoughts. Do developers now have a responsibility to consider these basic principles when building and launching sites for their clients, whether they are search engine experts or not? Is there anything else you think should be considered by developers as a minimum standard?