Rich et al,
My thinking has been that the Semantic Web only began to take off when pragmatists started to have more influence than the often-described "white-coated ontologists". In one way, the "white-coated ontologists" are right, but their over-engineered solutions really only worked within their specific model of the world.
Now that the pragmatists have more influence, people are marking up huge amounts of data, and the issues of how to get these different vocabularies and datasets to work well together are being discussed and worked out on the public-lod email list.
I think the best way to proceed is to get some example data sets online, and then work out how to get them to work together in a useful way.
An initial step is to connect them using predicates *like* skos:closeMatch.
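For example, a minimal Turtle sketch of such a link (the URIs here are placeholders, not real identifiers):

    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .

    # Hypothetical URIs, for illustration only: an APNI name record
    # asserted to closely match a TaxonConcept species concept.
    <http://example.org/apni/12345>
        skos:closeMatch <http://example.org/ses/abc123#Species> .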
Then we should work within the larger LOD / Semantic Web community to fix whatever problems we find.
In some cases this will result in changes to other vocabularies like foaf or skos.
The problems that some of us see in these vocabularies have been noticed by others, and it makes sense that we come to some common solution.
In other cases it will involve getting the rest of the LOD community to adopt, or at least work well with, those vocabulary terms we propose.
This is to the benefit of our community because it makes the tools and data made available by others usable by us.
It also might be useful to step back and ask: *How successful have taxonomists been in getting the rest of the scientific community to adopt their standards?*
My current thinking is that we can deal with DOIs and LSIDs if we define predicates that allow consuming applications to recognize that the "object" is to be interpreted as a DOI-like or LSID-like "thing", and do this in a way that will be widely adopted.
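As a rough sketch of the pattern I have in mind (the ex: predicates below are hypothetical, named only to illustrate the idea):

    @prefix ex: <http://example.org/vocab#> .

    # ex:hasDOI and ex:hasLSID are hypothetical predicates: the object
    # is a plain identifier string that a consuming application knows
    # to treat as DOI-like or LSID-like.
    <http://example.org/pub/1>
        ex:hasDOI  "10.1234/example.5678" ;
        ex:hasLSID "urn:lsid:example.org:names:1" .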
It is my understanding that Virtuoso is the only triple/quad store and SPARQL endpoint that knows how to handle LSIDs.
Also, although I like a lot of what Steve says, I think that most existing crawlers expect a seeAlso link to point to an HTML, XML, or RDF document and will not be able to handle a multi-megabyte PDF.
This is why I reluctantly minted the predicate "hasPDF".
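Its intended use is something like this (a minimal Turtle sketch; the example URIs are placeholders, and I am assuming the txn namespace URI):

    @prefix txn: <http://lod.taxonconcept.org/ontology/txn.owl#> .

    # Point consumers at the PDF explicitly, rather than via
    # rdfs:seeAlso, so crawlers can choose to skip it.
    <http://example.org/treatment/1>
        txn:hasPDF <http://example.org/treatment/1.pdf> .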
Also, some services like Sindice seem to be able to interpret predicates that are defined as subproperties of well-known predicates.
For instance "txn:hasWikipediaArticle" etc. are subproperties of foaf:page.
We also need some sort of "playground" knowledge base where related data sets can be loaded and tested to see how well they work together.
It is for this reason that I created the SPARQL endpoint described here: http://www.taxonconcept.org/sparql-endpoint/
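For instance, a query like this against the endpoint will list whatever cross-dataset skos:closeMatch links have been loaded (just a sketch; nothing here is endpoint-specific):

    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

    # List pairs of resources asserted to be close matches,
    # i.e. the cross-dataset "glue" discussed above.
    SELECT ?a ?b
    WHERE { ?a skos:closeMatch ?b }
    LIMIT 25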
I took a quick look at the APNI data, and my initial impression is that I like it.
Two sites I used to check it were http://inspector.sindice.com/ and http://validator.linkeddata.org/vapour
To get a better idea of how it works, I also loaded a small sample of the RDF into my knowledge base.
I had not expected there to be a related TaxonConcept entity, since I have limited myself mainly to US plants, but after checking I found that the example emailed earlier does have a related entity in TaxonConcept.
So I put together this page to make it easier for people to browse the related data.
Gail K. reminded me that some email clients handle the "#" in URLs differently; to end users these appear as broken links.
Here is the page with links to the small set of APNI RDF and the related entity in TaxonConcept.
http://lod.taxonconcept.org/apni_example.html
Respectfully,
- Pete
On Wed, Jan 5, 2011 at 10:43 AM, Richard Pyle <deepreef@bishopmuseum.org> wrote:
I second Rod’s “V. cool!” proclamation! I think this has the potential to solve a lot of problems with DOIs (particularly for the old literature). It doesn’t solve all the problems (e.g., we still need to define the article-level units within BHL content, and we’ll still need to establish a system of GUIDs that can be applied to sub-article units, such as treatments), but in the vast majority of cases, the DOIs will be the ticket into the services of CrossRef.
I desperately want to comment on several points made in this thread (which I’ve only read just now), but I’m currently travelling, so I’ll chime in later.
Aloha,
Rich
P.S. Question for Rod/Chris/anyone else concerning DOIs: Does the annual fee only apply to the ability to mint DOIs within a given year, or does it also apply to resolving them? Put another way, if BHL stops paying its annual fee sometime in the future, will the already-minted DOIs be resolvable in perpetuity, or will they stop being resolved at that point?
*From:* tdwg-content-bounces@lists.tdwg.org [mailto:tdwg-content-bounces@lists.tdwg.org] *On Behalf Of* Roderic Page *Sent:* Wednesday, January 05, 2011 5:42 AM *To:* Chris Freeland *Cc:* tdwg-content@lists.tdwg.org; Paul Murray
*Subject:* Re: [tdwg-content] GUIDs for publications (usages and names)
On 5 Jan 2011, at 15:25, Chris Freeland wrote:
And following on re: DOIs, BHL has become a member of CrossRef and in February will begin assigning DOIs, first to our monographs and then to journal content. There is an annual fee for membership and then a fee for every DOI assigned. BHL is absorbing these costs for community benefit.
Chris
V. cool!
Regards
Rod
Roderic Page
Professor of Taxonomy
Institute of Biodiversity, Animal Health and Comparative Medicine
College of Medical, Veterinary and Life Sciences
Graham Kerr Building
University of Glasgow
Glasgow G12 8QQ, UK
Email: r.page@bio.gla.ac.uk
Tel: +44 141 330 4778
Fax: +44 141 330 2792
AIM: rodpage1962@aim.com
Facebook: http://www.facebook.com/profile.php?id=1112517192
Twitter: http://twitter.com/rdmpage
Blog: http://iphylo.blogspot.com
Home page: http://taxonomy.zoology.gla.ac.uk/rod/rod.html