Hi Rod,
I'm fairly sure an HTTP 30* redirect would help - if you think about what a web crawler is doing, it's just processing the contents of a page and, while doing so, building a list of links for further processing. If requesting one of those links returns a redirect response with another URL to try, the returned URL is pushed onto the queue of links to be processed.
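To illustrate, here's a minimal sketch of that queue behaviour (in Python; the `fetch` callable is a hypothetical stand-in for a real HTTP client, and the example URLs are made up):

```python
from collections import deque

def process_link(url, fetch):
    """Fetch a URL; if the server answers with a 30* redirect,
    return the Location target so it can be queued, else None.
    `fetch(url)` returns (status_code, headers_dict)."""
    status, headers = fetch(url)
    if 300 <= status < 400 and "Location" in headers:
        return headers["Location"]
    return None

def crawl(seed_urls, fetch):
    """A crawler's main loop just pushes redirect targets back
    onto its work queue, like any other discovered link."""
    queue = deque(seed_urls)
    visited = []
    while queue:
        url = queue.popleft()
        visited.append(url)
        target = process_link(url, fetch)
        if target is not None and target not in visited:
            queue.append(target)
    return visited
```

The point is that a redirect target goes through exactly the same per-host politeness machinery as any other queued link, so the crawl delay agreed with the *target* host applies.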
The TDWG resolver could fairly easily return a 301 (or some other variant of 30* redirect if more appropriate), as it's not embellishing the IPNI data at all; the data is presented "as is", i.e. compare: http://lsid.tdwg.org/urn:lsid:ipni.org:names:30000959-2 and http://www.ipni.org/ipni/plantNameByVersion.do?id=30000959-2&version=1.1... (the latter being the end address used to access the LSID metadata at ipni.org).
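A redirecting resolver would only have to translate the proxied LSID path into the corresponding ipni.org address and answer with a Location header. A rough sketch of that mapping (the function name is hypothetical, and the target URL template is simplified - the real one also carries a version parameter, which I've left out):

```python
def lsid_redirect(path):
    """Given the path portion of a proxied LSID request, return the
    (status, headers) a purely redirecting resolver could answer with.
    Only IPNI name LSIDs are handled in this sketch."""
    prefix = "urn:lsid:ipni.org:names:"
    lsid = path.lstrip("/")
    if not lsid.startswith(prefix):
        return 404, {}
    object_id = lsid[len(prefix):]
    target = ("http://www.ipni.org/ipni/plantNameByVersion.do?id="
              + object_id)
    return 301, {"Location": target}
```

No metadata is fetched or rewritten on the resolver side at all - it behaves like tinyurl/bit.ly rather than like a gateway.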
Only the "summary" page adds anything to the metadata - it is reformatted into a more user-friendly layout: http://lsid.tdwg.org/summary/urn:lsid:ipni.org:names:30000959-2
As you point out, the TDWG LSID resolver is indeed a full-blown LSID resolver, and hence it also generates calls to fetch the WSDL(s) for the LSID authority, in addition to the call required to get the metadata. The authority WSDL is the same every time, so it could well be cached. According to the spec, the service WSDL must indicate whether the requested LSID does in fact exist. But slowing the traffic down from the current 3 calls per request with no crawl delay to:
- 1 x potentially cached request for the authority WSDL
- 1 x request for the service WSDL
- 1 x HTTP 30* redirect, and hence crawl-delayed (to the actual metadata address)
...should improve our situation a bit.
I'm not sure how one would go about making the TDWG resolver implement the robots exclusion protocol, as the resolver is not itself a crawler.
cheers, Nicky
- Nicola Nicolson
- Science Applications Development,
- Royal Botanic Gardens, Kew,
- Richmond, Surrey, TW9 3AB, UK
- email: n.nicolson@rbgkew.org.uk
- phone: 020-8332-5766

________________________________
From: Roderic Page [r.page@bio.gla.ac.uk]
Sent: 27 April 2009 14:30
To: Nicola Nicolson
Cc: tdwg-tag@lists.tdwg.org
Subject: Re: [tdwg-tag] LSIDs: web based (HTTP) resolvers and web crawlers
Dear Nicky,
Ouch!
I'm not sure I fully understand how 30* redirects work with respect to web crawlers, but I'm not sure they will help in this case.
If the TDWG LSID resolver is a full-blown resolver, then for each request from the crawler it will be doing the full LSID resolution (three calls: one for the authority WSDL, one for the service WSDL, one for the metadata). It may cache the WSDLs, but it will still make at least one call to the service (unless it has cached the metadata as well).
Is the solution to add TDWG to your robots.txt file, and have the LSID resolver respect the settings in that file? TDWG could also implement metadata caching so it wouldn't need to hammer you so much (i.e., when a crawler hit TDWG, TDWG would reply with the cached metadata).
Perhaps LSID services such as IPNI's could also implement ETag headers, which would help avoid excessive traffic from TDWG when caching (TDWG could regularly cache metadata from IPNI, respecting the robots.txt files, and first checking whether the metadata had changed using the ETag and/or Last-Modified headers).
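As a sketch of that conditional-request idea (the header names are the standard HTTP revalidation ones; the `fetch` client, the cache layout, and the function name are hypothetical):

```python
def revalidate(url, cache, fetch):
    """Return (metadata, served_from_cache). `fetch(url, headers)`
    is a stand-in HTTP client returning (status, headers, body).
    On a 304 Not Modified we serve the cached copy; otherwise we
    store the fresh body along with its validators."""
    headers = {}
    entry = cache.get(url)
    if entry:
        if entry.get("etag"):
            headers["If-None-Match"] = entry["etag"]
        if entry.get("last_modified"):
            headers["If-Modified-Since"] = entry["last_modified"]
    status, resp_headers, body = fetch(url, headers)
    if status == 304 and entry:
        return entry["body"], True        # unchanged: no re-download
    cache[url] = {
        "etag": resp_headers.get("ETag"),
        "last_modified": resp_headers.get("Last-Modified"),
        "body": body,
    }
    return body, False
```

Every revisit after the first then costs IPNI only a cheap 304 instead of regenerating the metadata document.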
I assume the DOI resolver has similar issues. Its robots.txt file looks like this:
Crawl-delay: 5
Request-rate: 1/5
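For what it's worth, Python's standard-library robots.txt parser (Python 3.6+) can read both of those directives, so a caching resolver could honour them without itself being a crawler - a small sketch:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt with the two directives quoted above
# (a User-agent line is required for them to apply).
rp = RobotFileParser()
rp.parse("""User-agent: *
Crawl-delay: 5
Request-rate: 1/5
""".splitlines())

delay = rp.crawl_delay("*")      # seconds to wait between requests
rate = rp.request_rate("*")      # RequestRate(requests=1, seconds=5)
```

A resolver could then sleep for `delay` seconds between metadata fetches to the same host.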
Hope this makes sense, my understanding of the HTTP headers/redirects/robots.txt is not particularly deep.
Regards
Rod
On 27 Apr 2009, at 13:54, Nicola Nicolson wrote:
Hi,
Further to my last design question re LSID HTTP proxies (thanks for the responses), I wanted to raise the issue of HTTP LSID proxies and crawlers, in particular the crawl delay part of the robots exclusion protocol.
I'll outline a situation we had recently:
The GBIF portal and ZipCodeZoo site both include IPNI LSIDs in their pages. These are presented in their proxied form using the TDWG LSID resolver (e.g. http://lsid.tdwg.org/urn:lsid:ipni.org:names:783030-1). Using the TDWG resolver to access the data for an IPNI LSID does not issue any kind of HTTP redirect; instead, the web resolver uses the LSID resolution steps to get the data and presents it in its own response (i.e. returning an HTTP 200 OK response).
The problem happens when one of these sites that includes proxied IPNI LSIDs is crawled by a search engine. The proxied links appear to belong to tdwg.org, so whatever crawl delay is agreed between TDWG and the crawler in question is used. The crawler has no knowledge that behind the scenes the TDWG resolver is hitting ipni.org. We (ipni.org) have agreed our own crawl limits with Google and the other major search engines using directives in robots.txt and directly agreed limits with Google (who don't use the robots.txt directly).
On a couple of occasions in the past we have had to deny access to the TDWG LSID resolver as it has been responsible for far more traffic than we can support (up to 10 times the crawl limits we have agreed with search engine bots). This was due to the pages on the GBIF portal and/or ZipCodeZoo being crawled by a search engine, which in turn triggered a high volume of requests from TDWG to IPNI. The crawler itself has no knowledge that it is in effect accessing data held at ipni.org rather than tdwg.org, as the HTTP response is HTTP 200.
One of Rod's emails recently mentioned that we need a resolver to act like a tinyurl or bit.ly. I have pasted below the HTTP headers for a request to the TDWG LSID resolver, and to tinyurl / bit.ly. To the end user it looks as though tdwg.org is the true location of the LSID resource, whereas tinyurl and bit.ly both just redirect traffic.
I'm just posting this for discussion really - if we are to mandate use of a web-based HTTP resolver/proxy, it should really issue 30* redirects so that established crawl delays between producer and consumer will be respected. The alternative would be for the HTTP resolver to read and process the directives in robots.txt, but this would be difficult to implement as it is not itself a crawler, just a gateway.
I'm sure that if proxied forms of LSIDs become more prevalent this problem will become more widespread, so now - with the on-going attempt to define what services a GUID resolver should provide - might be a good time to plan how to fix this.
cheers, Nicky
[nn00kg@kvstage01 ~]$ curl -I http://lsid.tdwg.org/urn:lsid:ipni.org:names:783030-1
HTTP/1.1 200 OK
Via: 1.1 KISA01
Connection: close
Proxy-Connection: close
Date: Mon, 27 Apr 2009 11:41:55 GMT
Content-Type: application/xml
Server: Apache/2.2.3 (CentOS)

[nn00kg@kvstage01 ~]$ curl -I http://tinyurl.com/czkquy
HTTP/1.1 301 Moved Permanently
Via: 1.1 KISA01
Connection: close
Proxy-Connection: close
Date: Mon, 27 Apr 2009 12:16:38 GMT
Location: http://www.ipni.org/ipni/plantNameByVersion.do?id=783030-1&version=1.4&a...
Content-type: text/html
Server: TinyURL/1.6
X-Powered-By: PHP/5.2.9

[nn00kg@kvstage01 ~]$ curl -I http://bit.ly/KO1Ko
HTTP/1.1 301 Moved Permanently
Via: 1.1 KISA01
Connection: Keep-Alive
Proxy-Connection: Keep-Alive
Content-Length: 287
Date: Mon, 27 Apr 2009 12:19:48 GMT
Location: http://www.ipni.org/ipni/plantNameByVersion.do?id=783030-1&version=1.4&a...
Content-Type: text/html;charset=utf-8
Server: nginx/0.7.42
Allow: GET, HEAD, POST
- Nicola Nicolson
- Science Applications Development,
- Royal Botanic Gardens, Kew,
- Richmond, Surrey, TW9 3AB, UK
- email: n.nicolson@rbgkew.org.uk
- phone: 020-8332-5766

_______________________________________________
tdwg-tag mailing list
tdwg-tag@lists.tdwg.org
http://lists.tdwg.org/mailman/listinfo/tdwg-tag
---------------------------------------------------------
Roderic Page
Professor of Taxonomy
DEEB, FBLS
Graham Kerr Building
University of Glasgow
Glasgow G12 8QQ, UK

Email: r.page@bio.gla.ac.uk
Tel: +44 141 330 4778
Fax: +44 141 330 2792
AIM: rodpage1962@aim.com
Facebook: http://www.facebook.com/profile.php?id=1112517192
Twitter: http://twitter.com/rdmpage
Blog: http://iphylo.blogspot.com
Home page: http://taxonomy.zoology.gla.ac.uk/rod/rod.html