One of our tasks as SEOs is making a site crawlable: making it as search engine friendly as possible and providing the spiders with the link roads and site roadmaps they need to boldly go where no bot could or would go before.
Much of how we look at a site is shaped by that angle. When we see a form or a dropdown, we wonder if it can be crawled. We don't see a neat Flash animation; we see questions about crawlability and about which extracted link will call up which portion of that file.
You can't blame us, then, when we get lost, unable to see the forest for the trees.
- Is it necessary to make any and every part of your site accessible to spiders?
- Is it an advantage?
- Why are you dealing with duplicate content issues?
- Why are you even considering the rel=canonical monster?
- Why does a [site:] search show your site as 50 times larger than it really is?
- How come you're looking for ways to exclude parts of your site via robots.txt? (A typical patch-up is sketched below.)
Is absolute indexability of every URL parameter on your site a must?
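To make the pattern behind those questions concrete, here is a purely hypothetical sketch of the kind of patch-ups they point at: a `<link rel="canonical" href="https://example.com/page/">` element pasted into duplicate pages, and a robots.txt that fences off parameter-driven URLs. The paths and parameter names below are made up for illustration; they are not anyone's actual configuration.

```
# robots.txt -- hypothetical example of fencing off parameter-driven duplicates
User-agent: *
Disallow: /search       # internal search result pages (made-up path)
Disallow: /*?sort=      # the same category page in every sort order
Disallow: /*?sessionid= # session IDs spawning endless duplicate URLs
```

Nothing is wrong with either tool, but every rule like this is an admission that the architecture is generating URLs you never wanted indexed in the first place, which is exactly what the questions above are getting at.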
Whose mission do you serve by making everything accessible? Who do you work for? What's in your best interest?
How can you take people by the hand and give them the smoothest ride to destinations desired but unknown, via click paths that feel as intuitive and "well, duh!" as flipping a light switch?
What's Information Architecture?
What do you do to have the right pages indexed for the right keywords? Do you use any special tricks to force the search engines' hands?
To think about:
- Cre8pc Usability & Holistic SEO presents:
- A turning point in the field of SEO by Adam Audette
- Best Practices for Building Scalable Information Architecture by Terry Van Horne