Google Search optimisation for AJAX calls


I have a page on my site with a list of things that gets updated frequently. The list is created by calling the server via JSONP, getting JSON back, and transforming it into HTML. Fast and slick.

Unfortunately, Google isn't able to index it. After reading up on how this is done according to Google's AJAX crawling guide, I'm a bit confused and need some clarification and confirmation:

Only the AJAX pages need to implement the rules, right? I have a REST-like URL such as:

[site]/base/junkets/browse.aspx?page=1&rows=18&sidx=scoreall&sord=desc&callback=jsonp1295964163067 

This would need to become something like:

[site]/base/junkets/browse.aspx#page=1&rows=18&sidx=scoreall&sord=desc&callback=jsonp1295964163067 

And when Google calls it like this:

[site]/base/junkets/browse.aspx#!page=1&rows=18&sidx=scoreall&sord=desc&callback=jsonp1295964163067 

I have to deliver an HTML snapshot.

Why replace ? with #? Creating HTML snapshots seems cumbersome. Would it suffice to just serve simple links? In that case I'd be happy if Google indexed the "things" pages.

It looks like you've misunderstood the AJAX crawling guide. The #! notation is used on links to the page the AJAX application lives within, not on the URL of the service the application makes calls to. For example, if you access your app by going to example.com/app/, then you'd make the page crawlable by instead linking to example.com/app/#!page=1.

Now when Googlebot sees that URL in a link, instead of going to example.com/app/#!page=1 – which would mean issuing a request for example.com/app/ (recall that the hash is never sent to the server) – it will request example.com/app/?_escaped_fragment_=page=1. If _escaped_fragment_ is present in a request, you know to return the static HTML version of your content.
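The rewrite Googlebot performs can be sketched as a small function (toEscapedFragmentUrl is a hypothetical name, not part of any library; the real crawler also percent-escapes a few special characters in the fragment, which this sketch skips):

```javascript
// Sketch of how a crawler turns an AJAX-crawlable "#!" URL into the
// _escaped_fragment_ form that actually reaches the server.
function toEscapedFragmentUrl(url) {
  const hashIndex = url.indexOf('#!');
  if (hashIndex === -1) return url; // not an AJAX-crawlable URL
  const base = url.slice(0, hashIndex);
  const fragment = url.slice(hashIndex + 2); // everything after "#!"
  const separator = base.includes('?') ? '&' : '?';
  return base + separator + '_escaped_fragment_=' + fragment;
}

console.log(toEscapedFragmentUrl('http://example.com/app/#!page=1'));
// → http://example.com/app/?_escaped_fragment_=page=1
```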

Why is any of this necessary? Googlebot does not execute scripts (nor does it know how to index JSON objects), so it has no way of knowing what ends up in front of your users after your scripts run and the content is loaded. So, your server has to do the heavy lifting of producing an HTML version of what your users see in the AJAXy version.

So what are the next steps?

First, either change the links pointing to your application to include #!page=1 (or whatever), or add <meta name="fragment" content="!"> to your app's HTML. (See item 3 of the AJAX crawling guide.)
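If you'd rather not rewrite every inbound link, the opt-in meta tag goes in the page's head; a minimal sketch:

```html
<head>
  <!-- Tells the crawler to fetch this page as ?_escaped_fragment_= -->
  <meta name="fragment" content="!">
</head>
```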

When the user changes pages (if applicable), you should update the hash to reflect the current page. You could just set location.hash='#!page=n';, but I'd recommend using the excellent jQuery BBQ plugin to manage the page's hash. (This way, you can also listen for changes to the hash if the user manually changes it in the address bar.) Caveat: the released version of BBQ (1.2.1) does not support AJAX crawlable URLs, but the most recent version in git master (1.3pre) does, so you'll need to grab it there. Then, set the AJAX crawlable option:

$.param.fragment.ajaxcrawlable(true); 
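Whether or not you use BBQ, the handler you attach to hash changes has to turn "#!page=n" back into application state; a minimal sketch, with parseHashbang as an assumed helper name:

```javascript
// Hypothetical helper: turn "#!key=val&key2=val2" into an object so a
// hashchange handler can re-render the right page.
function parseHashbang(hash) {
  if (!hash.startsWith('#!')) return null; // not an AJAX-crawlable hash
  const params = {};
  for (const pair of hash.slice(2).split('&')) {
    if (!pair) continue;
    const [key, value = ''] = pair.split('=');
    params[decodeURIComponent(key)] = decodeURIComponent(value);
  }
  return params;
}

// In the browser you would wire it up roughly like this (loadPage is
// your own render function):
// window.addEventListener('hashchange', function () {
//   const state = parseHashbang(location.hash);
//   if (state) loadPage(state.page);
// });
```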

Second, you'll have to add server-side logic to example.com/app/ to detect the presence of _escaped_fragment_ in the query string, and return a static HTML version of the page if it's there. Google's guidance on creating HTML snapshots might be helpful. It sounds like you might want to pursue option 3: simply modify your service to output HTML in addition to JSON.

