I have a problem with the Fetch as Google tool for my AJAX website. The site is a little old and written with jQuery. The developers who built it didn't use hash fragments; instead they defined static routes, and AJAX calls are used only within the views (to load the page content). Now I want to make one specific page Google friendly, and I've already implemented what Google asks for in its AJAX crawling specification.

Since my site is not a full single-page app, I went straight to the third step. In my route file, if I see a ?_escaped_fragment_= parameter, I return a custom template file containing server-generated content. (So it should be crawlable, right?)
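For illustration, this is roughly what that route check could look like. It is only a minimal sketch assuming a Node/Express backend; the question doesn't say which server stack is actually used, so the framework, route path, and template names here are all hypothetical:

    // Minimal sketch, assuming Express with a view engine configured.
    // Route path and template names are hypothetical.
    const express = require('express');
    const app = express();

    app.get('/topic/:category/:name', (req, res) => {
        if ('_escaped_fragment_' in req.query) {
            // Crawler request: return the fully server-rendered snapshot.
            res.render('topic-snapshot', {
                category: req.params.category,
                name: req.params.name
            });
        } else {
            // Normal request: return the AJAX shell; jQuery loads the content.
            res.render('topic-shell');
        }
    });

    app.listen(3000);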
Here is an example: http://example.com/topic/Health/Conditions_and_Diseases

This page uses an AJAX call to get its details from the server and update the view, and I included the <meta name="fragment" content="!"> tag on it. So the Google crawler should go to:

http://example.com/topic/Health/Conditions_and_Diseases?_escaped_fragment_=

That page now generates its content on the server side, with no AJAX calls.
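For reference, a minimal sketch of what the AJAX version of such a page could look like under this setup. The markup and the endpoint URL are hypothetical; only the fragment meta tag itself is prescribed by Google's scheme:

    <!-- Hypothetical sketch of the AJAX version of the page. -->
    <html>
    <head>
        <!-- Opt-in tag from the AJAX crawling scheme: tells Googlebot that
             a snapshot of this page is served at ?_escaped_fragment_= -->
        <meta name="fragment" content="!">
    </head>
    <body>
        <div id="content">Loading...</div>
        <script src="/js/jquery.js"></script>
        <script>
            // Normal visitors get the content via an AJAX call;
            // the endpoint name is hypothetical.
            $(function () {
                $('#content').load('/ajax/topic/Health/Conditions_and_Diseases');
            });
        </script>
    </body>
    </html>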
Is this the correct setup? When I try to fetch this page in the Webmaster tool, it doesn't load anything: the fetch stays pending for a long time and eventually ends with an error, with no detail about what went wrong. I made sure both versions work by visiting each URL manually. Before I implemented this, the Fetch tool actually showed an image of the page without its content, so I expected to now see it with the content. I have no idea why it takes so long and then fails.

Can somebody please explain which part I've done wrong? Is my understanding of the ?_escaped_fragment_= parameter correct?

Thank you in advance.
1 Answer
I was worried because no one here could answer this question, so I had to find the answer myself. According to this Google Forum answer by a Google employee, the Fetch as Google tool doesn't parse the fragment meta tag; it simply renders the page exactly as it fetches it. The snapshot URL is only requested later, when Googlebot actually crawls the page. So apparently this is the expected behaviour as of now. I hope this helps somebody else in the future. Here is the relevant part of the answer:
Hi Todd, it's good to see more sites using the AJAX crawling proposal :-)!
Looking at your blog's homepage, one thing to keep in mind is that the Fetch as Googlebot feature does not parse the content that it fetches. So when you submit http://toddmoyer/blog/ , it fetches that URL. After fetching the URL, it doesn't parse it to check for the "fragment" meta tag, it just returns it to you. However, if you fetch http://toddmoyer/blog/#! , then it should rewrite the URL and fetch the URL http://toddmoyer/blog/?_escaped_fragment_= .
When we crawl and index your pages, we'll notice the meta-tag and act accordingly. It's just the Fetch as Googlebot feature that doesn't check for meta-tags, and instead just returns the raw content.
I hope that makes it a bit clearer!
Cheers, John
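Applied to the URLs in the question, that works out like this (my own summary of the quote above, using the hypothetical example.com URLs):

Fetching http://example.com/topic/Health/Conditions_and_Diseases in the tool returns exactly that page, i.e. the raw AJAX shell; the fragment meta tag is ignored. Fetching http://example.com/topic/Health/Conditions_and_Diseases#! is rewritten by the tool to http://example.com/topic/Health/Conditions_and_Diseases?_escaped_fragment_= and returns the server-rendered snapshot. During real crawling and indexing, Googlebot sees the meta tag on the plain URL and fetches the ?_escaped_fragment_= snapshot on its own.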