Hi,
Further to my last design question re LSID HTTP proxies (thanks for the responses), I wanted to raise the issue of how HTTP LSID proxies interact with crawlers, in particular the crawl-delay part of the robots exclusion protocol.
I'll outline a situation we had recently:
The GBIF portal and ZipCodeZoo site both include IPNI LSIDs in their pages. These are presented in their proxied form using the TDWG LSID resolver (e.g. http://lsid.tdwg.org/urn:lsid:ipni.org:names:783030-1). Using the TDWG resolver to access the data for an IPNI LSID does not issue any kind of HTTP redirect; instead the web resolver uses the LSID resolution steps to get the data and presents it in its own response (i.e. returning an HTTP 200 OK response).
The problem happens when one of these sites that includes proxied IPNI LSIDs is crawled by a search engine. The proxied links appear to belong to tdwg.org, so whatever crawl delay is agreed between TDWG and the crawler in question is used. The crawler has no knowledge that behind the scenes the TDWG resolver is hitting ipni.org. We (ipni.org) have agreed our own crawl limits with the major search engines using directives in robots.txt, and directly with Google (who don't use the robots.txt directives for this).
On a couple of occasions in the past we have had to deny access to the TDWG LSID resolver as it has been responsible for far more traffic than we can support (up to 10 times the crawl limits we have agreed with search engine bots). This happens when the pages on the GBIF portal and/or ZipCodeZoo are crawled by a search engine, which in turn triggers a high volume of requests from TDWG to IPNI. The crawler itself has no knowledge that it is in effect accessing data held at ipni.org rather than tdwg.org, as the HTTP response is a 200.
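To make the mismatch concrete, here is a small sketch (mine, not anything the resolver or crawlers actually run) of how a crawler derives its delay from robots.txt, using Python's standard urllib.robotparser. The hostnames and delay values are illustrative only:

```python
# A crawler consults the robots.txt of the host it is fetching from (here,
# hypothetically, tdwg.org), so any stricter limits published by ipni.org
# are never seen when it requests proxied LSID URLs. Values are made up.
from urllib.robotparser import RobotFileParser

# What the crawler might see at http://lsid.tdwg.org/robots.txt (hypothetical)
tdwg_robots = """\
User-agent: *
Crawl-delay: 1
"""

parser = RobotFileParser()
parser.parse(tdwg_robots.splitlines())

# The crawler throttles itself against tdwg.org's limit only; ipni.org's
# own Crawl-delay never enters the calculation.
print(parser.crawl_delay("*"))  # -> 1
```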
One of Rod's emails recently mentioned that we need a resolver to act like a tinyurl or bit.ly. I have pasted below the HTTP headers for a request to the TDWG LSID resolver, and to tinyurl / bit.ly. To the end user it looks as though tdwg.org is the true location of the LSID resource, whereas tinyurl and bit.ly both just redirect traffic.
I'm just posting this for discussion really - if we are to mandate use of web-based HTTP LSID resolvers/proxies, they should really issue 30* redirects so that the crawl delays established between producer and consumer are respected. The alternative would be for the HTTP resolver to read and honour the directives in the data provider's robots.txt, but this would be difficult to implement as the resolver is not in itself a crawler, just a gateway.
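For what it's worth, the redirect behaviour could be sketched like this (a toy, not the actual TDWG resolver; the /lsid/ URL scheme at the authority is a made-up assumption - a real resolver would map the LSID to whatever HTTP endpoint the authority actually exposes):

```python
# Toy sketch of an HTTP LSID proxy that redirects rather than resolving
# in-place, so crawler traffic is attributed to the LSID's own authority.
from http.server import BaseHTTPRequestHandler, HTTPServer

def redirect_location(lsid):
    """Return a redirect target at the LSID's own authority, or None.

    An LSID looks like urn:lsid:<authority>:<namespace>:<object>[:<revision>];
    the /lsid/ path on the authority host is a hypothetical scheme.
    """
    parts = lsid.split(":")
    if len(parts) >= 5 and parts[:2] == ["urn", "lsid"]:
        authority = parts[2]  # e.g. ipni.org
        return "http://%s/lsid/%s" % (authority, lsid)
    return None

class RedirectingResolver(BaseHTTPRequestHandler):
    def do_GET(self):
        target = redirect_location(self.path.lstrip("/"))
        if target:
            # A 30* response means the crawler fetches the data from the
            # authority's own host, applying the crawl delay it has agreed
            # with *that* host rather than with the proxy.
            self.send_response(303)
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(400, "Not an LSID")

# To run: HTTPServer(("", 8080), RedirectingResolver).serve_forever()
```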
I'm sure that if proxied forms of LSIDs become more prevalent this problem will become more widespread, so now - with the ongoing attempt to define what services a GUID resolver should provide - might be a good time to plan how to fix this.
cheers, Nicky
[nn00kg@kvstage01 ~]$ curl -I http://lsid.tdwg.org/urn:lsid:ipni.org:names:783030-1
HTTP/1.1 200 OK
Via: 1.1 KISA01
Connection: close
Proxy-Connection: close
Date: Mon, 27 Apr 2009 11:41:55 GMT
Content-Type: application/xml
Server: Apache/2.2.3 (CentOS)
[nn00kg@kvstage01 ~]$ curl -I http://tinyurl.com/czkquy
HTTP/1.1 301 Moved Permanently
Via: 1.1 KISA01
Connection: close
Proxy-Connection: close
Date: Mon, 27 Apr 2009 12:16:38 GMT
Location: http://www.ipni.org/ipni/plantNameByVersion.do?id=783030-1&version=1.4&a...
Content-type: text/html
Server: TinyURL/1.6
X-Powered-By: PHP/5.2.9
[nn00kg@kvstage01 ~]$ curl -I http://bit.ly/KO1Ko
HTTP/1.1 301 Moved Permanently
Via: 1.1 KISA01
Connection: Keep-Alive
Proxy-Connection: Keep-Alive
Content-Length: 287
Date: Mon, 27 Apr 2009 12:19:48 GMT
Location: http://www.ipni.org/ipni/plantNameByVersion.do?id=783030-1&version=1.4&a...
Content-Type: text/html;charset=utf-8
Server: nginx/0.7.42
Allow: GET, HEAD, POST
- Nicola Nicolson
- Science Applications Development,
- Royal Botanic Gardens, Kew,
- Richmond, Surrey, TW9 3AB, UK
- email: n.nicolson@rbgkew.org.uk
- phone: 020-8332-5766