I found a new SEO tool that is really helpful. Here is a brief description from Screaming Frog:
"The Screaming Frog SEO Spider is a small desktop program you can install on your PC or Mac which spiders websites' links, images, CSS, script and apps from an SEO perspective. It fetches key onsite page elements for SEO, presents them in tabs by type and allows you to filter for common SEO issues, or slice and dice the data how you see fit by exporting into Excel. You can view, analyse and filter the crawl data as it's gathered and updated continuously in the program's user interface.
The Screaming Frog SEO Spider allows you to quickly analyse, audit and review a site from an onsite SEO perspective. It's particularly good for analysing medium to large sites where manually checking every page would be extremely labour intensive (or impossible!) and where you can easily miss a redirect, meta refresh or duplicate page issue.
The spider allows you to export key onsite SEO elements (URL, page title, meta description, headings etc.) to Excel so it can easily be used as a base to make SEO recommendations from. Our video below provides a demonstration of what the tool can do."
I tried it, and below is a screenshot of my findings for one of our internal projects, CRMDomain.
A quick summary of some of the data collected:
· Errors – Client & server errors (No responses, 4XX, 5XX)
· Redirects – (3XX, permanent or temporary)
· External Links – All followed links and their subsequent status codes
· URI Issues – Non-ASCII characters, underscores, uppercase characters, dynamic URIs, URIs longer than 115 characters
· Duplicate Pages – Hash value / MD5 checksum lookup for pages with duplicate content (a rough sketch of this check, together with the title and meta description length checks, follows this list)
· Page Title – Missing, duplicate, over 70 characters, same as H1, multiple
· Meta Description – Missing, duplicate, over 156 characters, multiple
· Meta Keywords – Mainly for reference as it’s only (barely) used by Yahoo.
· H1 – Missing, duplicate, over 70 characters, multiple
· H2 – Missing, duplicate, over 70 characters, multiple
· Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir etc
· Meta Refresh – Including target page and time delay
· Canonical link element & Canonical HTTP headers
· X-Robots-Tag
· File Size
· Page Depth Level
· Inlinks – All pages linking to a URI
· Outlinks – All pages a URI links out to
· Anchor Text – All link text, plus alt text from images with links
· Follow & Nofollow – At link level (true/false)
· Images – All URIs with the image link & all images from a given page; images over 100 KB, missing alt text, alt text over 100 characters
· User-Agent Switcher – Crawl as Googlebot, Bingbot, or Yahoo! Slurp
· Custom Source Code Search – The spider allows you to find anything you want in the source code of a website, whether that's analytics code, a specific piece of text, or any other snippet. (Please note – this is not a data extraction or scraping feature yet.) The second sketch after this list illustrates this kind of search together with the user-agent switching above.
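To make the duplicate-page and length checks above a little more concrete, here is a minimal Python sketch of how you could run a few of them by hand with requests and BeautifulSoup. This is my own illustration, not Screaming Frog's code; the urls list is a placeholder, and the thresholds (70 characters for titles, 156 for meta descriptions) simply mirror the limits mentioned above.

    # A rough illustration (not Screaming Frog's own code) of the duplicate-page,
    # page-title and meta-description checks listed above. The urls list is a
    # placeholder; point it at your own pages.
    import hashlib

    import requests
    from bs4 import BeautifulSoup

    urls = [
        "http://example.com/",
        "http://example.com/about",
    ]

    seen_hashes = {}  # MD5 checksum of the response body -> first URL seen with it

    for url in urls:
        response = requests.get(url, timeout=10)
        soup = BeautifulSoup(response.text, "html.parser")

        # Page title: missing or over 70 characters
        title = soup.title.string.strip() if soup.title and soup.title.string else ""
        if not title:
            print(url, "- missing page title")
        elif len(title) > 70:
            print(url, "- page title over 70 characters:", len(title))

        # Meta description: missing or over 156 characters
        meta = soup.find("meta", attrs={"name": "description"})
        description = (meta.get("content") or "").strip() if meta else ""
        if not description:
            print(url, "- missing meta description")
        elif len(description) > 156:
            print(url, "- meta description over 156 characters:", len(description))

        # Duplicate pages: identical MD5 checksums mean identical page content
        checksum = hashlib.md5(response.content).hexdigest()
        if checksum in seen_hashes:
            print(url, "- duplicate of", seen_hashes[checksum])
        else:
            seen_hashes[checksum] = url

Screaming Frog does all of this in a single crawl, of course; the point here is only to show what each check amounts to.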
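And here is a second, equally rough sketch of the last two items: fetching a page with a Googlebot user agent and searching its source for a pattern. The regular expression for a classic "UA-XXXXXX-X" Google Analytics property ID is just an example of the kind of thing you might search for; the URL is again a placeholder.

    # A rough illustration of crawling as Googlebot and searching the page source
    # for a pattern (here, a classic Google Analytics property ID).
    import re

    import requests

    GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    url = "http://example.com/"
    response = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=10)

    # Custom source code search: scan the raw HTML for the analytics snippet
    pattern = re.compile(r"UA-\d{4,10}-\d{1,4}")
    matches = sorted(set(pattern.findall(response.text)))
    if matches:
        print(url, "- found analytics IDs:", ", ".join(matches))
    else:
        print(url, "- no analytics ID found in the source")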