Donald is right that the discussion points in contradictory directions:
1. Some people believe we need a non-http system. This points to DOI or handles.
a) Certainly DOI is successful in the publishing industry - not as a technology but as a business model. If we want DOIs, we can get them as cheaply as possible from http://www.tib-hannover.de/en/the-tib/doi-registration-agency/ - see the mails of 2008-11-25 to 2008-11-26 on this list.
b) Or we can ride piggy-back on the DOI = handles technology and create a BOI handle system using the same software. But as Roger points out, this is a business model and needs to be run as one. It is not a technology that solves any problem technologically.
2. Technologically, I have not seen any argument why -- with respect to the desire for a RESOLVABLE entity -- an EoL / CoL / GNA / GBIF / TDWG / whatever urn:lsid:persistent-identifier.tdwg.org:anic:12345 is in any way better than http://persistent-identifier.tdwg.org/anic/12345. Yes, in theory LSIDs allow resolution without using DNS, but in practice DNS is the method, so http://persistent-identifier.tdwg.org/ is the central resolving point. (And I believe you could probably create - in theory again - a non-DNS-based resolution for parts of http; the only complication is that this would require an enumerated list of prefixes, whereas changing the resolution of urn:lsid can hook in at a single prefix.)
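To make the equivalence concrete: the two identifier forms differ only in string layout, so a resolver can map one onto the other mechanically. A minimal sketch in Python (the authority/namespace-to-path layout is my assumption for illustration, not a TDWG specification):

```python
def lsid_to_http(lsid: str) -> str:
    """Map an LSID of the form urn:lsid:authority:namespace:id onto
    the equivalent http URL served under the authority's DNS name.
    Assumes the simple path layout http://authority/namespace/id."""
    prefix = "urn:lsid:"
    if not lsid.startswith(prefix):
        raise ValueError("not an LSID: " + lsid)
    # Split the remainder into authority, namespace, and object id.
    authority, namespace, object_id = lsid[len(prefix):].split(":", 2)
    return "http://{}/{}/{}".format(authority, namespace, object_id)

print(lsid_to_http("urn:lsid:persistent-identifier.tdwg.org:anic:12345"))
```

The point of the sketch is that the LSID carries no information the http URL does not; both ultimately lean on the same DNS name for resolution.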
http is a core web technology. It works now, it will continue to work, and when it is eventually superseded there will be a very wide upgrade path. It is what standard software like CMSs, wikis, etc. works with.
I maintain my point that, this being the technical list, we place too much focus on interaction between large-scale technologies and ignore the cost of explaining to 99.999999% of biologists that they should ignore the LSIDs they see on their screen and use the LSID-through-http method instead. And then, surprise, what people really need is a CENTRAL and HTTP-based resolver. And, surprise, this will show up as millions of http links in publications, be it PDF, CMS, wiki, whatever.
The only thing I see wrong with http URLs is that they are NORMALLY not properly managed as persistent URLs (link rot). To create a multilateral system that fits humans, the semantic web, and management practices, we could provide a cooking recipe for data providers:
* create a http://persistent-identifier.yourorganisation.tld domain
* make sure it resolves the objects you want to publish there and that you are prepared to manage the stability of the service over a long time period
* be prepared that by prefixing your domain with "persistent-identifier" you make a promise in the name of "yourorganisation" - others will monitor how well that promise is kept
* be prepared to re-assign the domain to a central gbif/etc. provider should your organisation no longer be prepared to maintain the service
* set up content negotiation, so that humans see html and machines see rdf when they resolve (detailed recipe here...)
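The content-negotiation step of the recipe amounts to a dispatch on the HTTP Accept header. A hypothetical sketch (the function name and the choice of rdf media types are my assumptions, not part of any agreed recipe):

```python
def choose_representation(accept: str) -> str:
    """Given the value of an HTTP Accept header, decide whether to
    serve the rdf representation (for machines) or html (for humans).
    Defaults to html, which is what browsers expect."""
    # Strip quality parameters like ";q=0.9" and normalise each media type.
    accepted = [part.split(";")[0].strip().lower() for part in accept.split(",")]
    rdf_types = {"application/rdf+xml", "text/turtle"}
    if rdf_types & set(accepted):
        return "rdf"
    return "html"

# A semantic-web client asks for rdf; a browser sends an html-ish header.
print(choose_representation("application/rdf+xml"))
print(choose_representation("text/html,application/xhtml+xml;q=0.9,*/*;q=0.8"))
```

A real deployment would typically do this in the web server configuration rather than application code, but the decision logic is the same.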
----
If you want the benefits of branding, central resolution and a business model, use a handle technology -- if you want a multilateral technology, use http?
Gregor