See, for example,
Mungall et al., "Integrating phenotype ontologies across multiple species", Genome Biology 2010, 11:R2, doi:10.1186/gb-2010-11-1-r2
Ward Blondé et al., "Reasoning with bio-ontologies: using relational closure rules to enable practical querying", Bioinformatics 2011, doi:10.1093/bioinformatics/btr164
Calder et al., "Machine Reasoning about Anomalous Sensor Data", http://dx.doi.org/10.1016/j.ecoinf.2009.08.007, or in manuscript form at http://efg.cs.umb.edu/pubs/SensorDataReasoning.pdf
...
OK, so maybe these knowledge domains are all hypothesis-driven sciences (i.e., sciences), and <whatever dsw is modelling> is not. But that would be sad.
Bob

p.s. I had almost finished something else on this thread when Hilmar beat me to the punch. But here's a slightly different expression of his point:
It turns out that the difference between instances and classes is mainly important in contexts in which you have disclaimed interest, namely reasoning. In the RDF/RDFS/OWL stack, enforcing a distinction between classes and instances only occurs pretty high up in the stack, when one desires an OWL variant that offers guarantees that reasoners will finish any inference they are asked to verify, preferably in less than exponential time. I guess, but am not certain, that even in an LOD context, if data are described with an OWL ontology that is known to be intractable, e.g. not in OWL DL, it is possible to design SPARQL queries that will never complete. In fact, I believe that even with tractable ontologies, there are SPARQL queries that are fundamentally exponential in the number of variables.
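As a minimal sketch of where the class/instance distinction bites (the query below and its filter on owl:Class/rdfs:Class are illustrative assumptions, not part of any agreed model), a SPARQL query like this would flag terms used both as a class and as an instance of something else; under OWL 1 DL such metaclass use is disallowed, and OWL 2 DL only tolerates it by treating the two uses as separate names ("punning"):

PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>

# Terms that some resource uses as its rdf:type (i.e., as a class) and that
# are themselves typed as an instance of something other than a (meta)class.
SELECT DISTINCT ?term WHERE {
  ?individual rdf:type ?term .        # ?term used as a class
  ?term       rdf:type ?metaclass .   # ?term used as an instance
  FILTER (?metaclass != owl:Class && ?metaclass != rdfs:Class)
}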
p.p.s. Irrelevant, but equivalent, aside about mathematics. At the turn of the 20th century, Whitehead and Russell tried (and failed) to show that everything about numbers could be logically derived from an axiomatic description of the natural numbers (i.e. the non-negative integers). It was later shown that you must include in your logical foundations something deeper, namely the ability to have sets that are elements of other sets (roughly, classes that are individuals in other classes). Without this, and starting only with the natural numbers, you can logically derive all rational numbers (fractions) and their arithmetic properties, and even all the irrational numbers that are solutions of polynomial equations with integer coefficients ("algebraic numbers"), such as sqrt(2), and even solutions of polynomials whose coefficients are algebraic numbers. But without introducing the notion of the set of subsets of a set, you cannot logically derive all the interesting transcendental numbers (i.e. those which are not the roots of such polynomials), such as e and pi. So if you love calculus, you had better not insist on distinguishing instances from classes. But if you are content with polynomials, you can probably be ontologically sloppy. Or, if you don't care about the logical foundations of your science, you can forget about the whole thing. :-)
On Tue, May 3, 2011 at 11:51 PM, Steve Baskauf <steve.baskauf@vanderbilt.edu> wrote:
[snip] OK, so let's imagine that we mark up several million records of specimens, tissue samples, and images as RDF. (We don't have to imagine very hard; I think the BiSciCol group is planning to actually do this within the next several months.) I would really like to hear from some of the people who actually use "DL reasoners" (a group which certainly does not include me) what it is that we could actually find out about that big data blob using reasoners that would be useful. I have already confessed that my primary concern is enabling data discovery, transfer, and aggregation using GUIDs and RDF. I'm still somewhat of a "semantic web" skeptic as far as the whole inferencing thing is concerned. Aside from inferring "duplicates", I really want to know what else useful could be reasoned outside of the Taxon/TaxonConcept class. (I can imagine useful reasoning being done about things in that class, like the relationships among names, concepts, parent taxa, etc., e.g. Rod Page's Biodiversity Informatics 3:1-15 article https://journals.ku.edu/index.php/jbi/article/view/25)

I think this (data markup priority vs. inferencing priority) is an important discussion to have before the TDWG community can settle on some kind of consensus way of turning database records into RDF, particularly if it is going to have a big influence on the way the RDF model is set up. To me, there is a clear and immediate need to be able to mark data up in a straightforward way. If we can get the semantic part, too, that would be great, but not at the expense of data markup. I was just at a meeting of a bunch of herbarium curators. They desperately need a way to implement GUIDs and aggregate data, and they need it now. I really don't think they care one whit about inferencing. If we coalesce on a model that is great for doing cool things with 10 records but which can't handle hundreds of thousands of records easily and simply, then we are wasting our time. I don't think we need to dither about this for another five years.
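For concreteness, here is a minimal sketch of the "duplicates" case, assuming (purely for illustration, not as an agreed model) that the marked-up records carry Darwin Core terms such as dwc:institutionCode and dwc:catalogNumber directly; pairs of records sharing both values would be candidate duplicates to reconcile or to link with owl:sameAs:

PREFIX dwc: <http://rs.tdwg.org/dwc/terms/>

# Candidate duplicate specimen records: different GUIDs, same institution
# code and catalog number. The direct use of these Darwin Core terms on the
# records is an assumption for illustration only.
SELECT ?rec1 ?rec2 WHERE {
  ?rec1 dwc:institutionCode ?inst ;
        dwc:catalogNumber   ?cat .
  ?rec2 dwc:institutionCode ?inst ;
        dwc:catalogNumber   ?cat .
  FILTER (STR(?rec1) < STR(?rec2))   # skip self-pairs and mirrored pairs
}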
I would hate to have to draw an RDF graph of that model
I would hate just as much to have to draw an RDF graph of 1.7 million instances. The point being that to show how someone models a domain, you don't draw a graph of the entire RDF triple store.
That was the point I was trying to make (I think).
Thanks for the clarification, Hilmar. Steve
-hilmar
=========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : ===========================================================
-- Steven J. Baskauf, Ph.D., Senior Lecturer Vanderbilt University Dept. of Biological Sciences
postal mail address: VU Station B 351634 Nashville, TN 37235-1634, U.S.A.
delivery address: 2125 Stevenson Center 1161 21st Ave., S. Nashville, TN 37235
office: 2128 Stevenson Center phone: (615) 343-4582, fax: (615) 343-6707 http://bioimages.vanderbilt.edu