I second that.<br><br><div class="gmail_quote">On Thu, May 15, 2008 at 5:11 AM, Markus Döring <<a href="mailto:mdoering@gbif.org">mdoering@gbif.org</a>> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
that's right. So they need to be escaped if they really want to have<br>
control characters in their dumps.<br>
<br>
But this is no different from escaping xml or any other document. It<br>
would just be nice if the number of escape characters were kept to a<br>
minimum. For this reason I personally prefer tab files, as escaping<br>
line returns and the delimiting tab character is rather little work.<br>
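[This escaping can be illustrated with a small sketch; the backslash escape sequences here are my own illustration, not part of any proposal:]

```python
# Sketch of escaping for tab-delimited dumps: only the backslash itself,
# tabs, and line breaks need treatment, so escape overhead stays minimal.
def escape_field(value: str) -> str:
    return (value.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n")
                 .replace("\r", "\\r"))

def unescape_field(value: str) -> str:
    mapping = {"t": "\t", "n": "\n", "r": "\r", "\\": "\\"}
    out, i = [], 0
    while i < len(value):
        if value[i] == "\\" and i + 1 < len(value):
            out.append(mapping.get(value[i + 1], value[i + 1]))
            i += 2
        else:
            out.append(value[i])
            i += 1
    return "".join(out)
```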
<font color="#888888"><br>
<br>
Markus<br>
</font><div><div></div><div class="Wj3C7c"><br>
<br>
On 15 May, 2008, at 13:40, Holetschek, Jörg wrote:<br>
<br>
> Hi guys,<br>
><br>
> sorry for the late reaction, but I put off reading all the mails<br>
> until today.<br>
><br>
> Using CSV and tab delimited files will cause problems when the dumps<br>
> contain freetext data, e.g. locality descriptions or notes. When I<br>
> pushed our BioCASE cache (50 million occurrence records) between<br>
> different DBMS using tab delimited files, I learned that<br>
> people are very eager to use tabs and new lines in freetext fields.<br>
> Any character you choose as a delimiter, you will find in<br>
> freetext fields...<br>
><br>
> Cheers from Berlin,<br>
> Jörg<br>
><br>
> -----Original Message-----<br>
> From: <a href="mailto:tdwg-tapir-bounces@lists.tdwg.org">tdwg-tapir-bounces@lists.tdwg.org</a><br>
> [mailto:<a href="mailto:tdwg-tapir-bounces@lists.tdwg.org">tdwg-tapir-bounces@lists.tdwg.org</a>] On behalf of Markus Döring<br>
> Sent: Wednesday, 14 May 2008 15:35<br>
> To: Aaron D. Steele<br>
> Cc: TAPIR mailing list<br>
> Subject: Re: [tdwg-tapir] Fwd: Tapir protocol - Harvest<br>
> methods?[SEC=UNCLASSIFIED]<br>
><br>
><br>
> It would keep the relations, but we don't really want any relational<br>
> structure to be served up.<br>
> And using sqlite binaries for the dwc star scheme would not be easier<br>
> to work with than plain text files. Text files can even be loaded into<br>
> excel straight away, versioned with svn, and so on. If there is a<br>
> geospatial extension file with the GUID in its first column,<br>
> applications might grab that directly and never touch the central<br>
> core file if they only want location data.<br>
><br>
> I'd prefer to stick with a csv or tab delimited file.<br>
> The simpler the better. And it also can't get corrupted as easily.<br>
><br>
> Markus<br>
><br>
><br>
><br>
> On 14 May, 2008, at 15:25, Aaron D. Steele wrote:<br>
><br>
>> for preserving relational data, we could also just dump tapirlink<br>
>> resources to an sqlite database file (<a href="http://www.sqlite.org" target="_blank">http://www.sqlite.org</a>), zip it<br>
>> up, and again make it available via the web service. we use sqlite<br>
>> internally for many projects, and it's both easy to use and well<br>
>> supported by jdbc, php, python, etc.<br>
>><br>
>> would something like this be a useful option?<br>
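[The provider-side dump aaron describes is only a few lines; a sketch under assumed names — the single `darwincore` table and its two columns are invented for illustration:]

```python
import sqlite3
import zipfile

def dump_to_sqlite(records, db_path="resource.db"):
    """Dump (id, scientific_name) rows to a one-table sqlite file and zip it."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS darwincore "
                "(id TEXT PRIMARY KEY, scientific_name TEXT)")
    con.executemany("INSERT OR REPLACE INTO darwincore VALUES (?, ?)", records)
    con.commit()
    con.close()
    # zip the binary database for transfer via the web service
    with zipfile.ZipFile(db_path + ".zip", "w", zipfile.ZIP_DEFLATED) as z:
        z.write(db_path)
```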
>><br>
>> thanks,<br>
>> aaron<br>
>><br>
>> On Wed, May 14, 2008 at 2:21 AM, Markus Döring <<a href="mailto:mdoering@gbif.org">mdoering@gbif.org</a>><br>
>> wrote:<br>
>>> Interesting that we all come to the same conclusions...<br>
>>> The trouble I had with just a simple flat csv file is repeating<br>
>>> properties like multiple image urls. ABCD clients don't use ABCD just<br>
>>> because it's complex, but because they want to transport this<br>
>>> relational data. We were considering 2 solutions to extending this<br>
>>> csv<br>
>>> approach. The first would be to have a single large denormalised csv<br>
>>> file with many rows for the same record. It would require knowledge<br>
>>> about the related entities though and could grow in size rapidly.<br>
>>> The<br>
>>> second idea, which we intend to adopt, allows a single level of 1-<br>
>>> many related entities. It is basically a "star" design with the core<br>
>>> dwc table in the center and any number of extension tables around<br>
>>> it.<br>
>>> Each "table" aka csv file will have the record id as the first<br>
>>> column,<br>
>>> so the files can be related easily and it only needs a single<br>
>>> identifier per record and not for the extension entities. This would<br>
>>> give a lot of flexibility while keeping things pretty simple to deal<br>
>>> with. It would even satisfy the ABCD needs as I haven't yet seen<br>
>>> anyone<br>
>>> requiring 2 levels of related tables (other than lookup tables).<br>
>>> Those<br>
>>> extensions could even be a simple 1-1 relation, but would keep<br>
>>> things<br>
>>> semantically together just like an xml namespace. The darwin core<br>
>>> extensions would be good for example.<br>
>>><br>
>>> So we could have a gzipped set of files, maybe with a simple<br>
>>> metafile<br>
>>> indicating the semantics of the columns for each file.<br>
>>> An example could look like this:<br>
>>><br>
>>><br>
>>> # darwincore.csv<br>
>>> 102 Aster alpinus subsp. parviceps ...<br>
>>> 103 Polygala vulgaris ...<br>
>>><br>
>>> # curatorial.csv<br>
>>> 102 Kew Herbarium<br>
>>> 103 Reading Herbarium<br>
>>><br>
>>> # identification.csv<br>
>>> 102 2003-05-04 Karl Marx Aster alpinus L.<br>
>>> 102 2007-01-11 Mark Twain Aster korshinskyi Tamamsch.<br>
>>> 102 2007-09-13 Roger Hyam Aster alpinus subsp. parviceps<br>
>>> Novopokr.<br>
>>> 103 2001-02-21 Steve Bekow Polygala vulgaris L.<br>
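[A consumer of this star layout only needs to group extension rows by the record id in the first column; a sketch, assuming tab-delimited columns and the file names from the example above:]

```python
import csv
from collections import defaultdict

def read_star_extension(path):
    """Group extension-file rows by the record id in the first column."""
    related = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if row:
                related[row[0]].append(row[1:])
    return related

# e.g. identifications = read_star_extension("identification.csv")
# identifications["102"] would hold the determination rows for record 102
```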
>>><br>
>>><br>
>>><br>
>>> I know this looks old fashioned, but it is just so simple and gives<br>
>>> us<br>
>>> so much flexibility.<br>
>>> Markus<br>
>>><br>
>>><br>
>>><br>
>>><br>
>>> On 14 May, 2008, at 24:39, Greg Whitbread wrote:<br>
>>><br>
>>>> We have used a very similar protocol to assemble the latest AVH<br>
>>>> cache.<br>
>>>> It should be noted that this is an as-well-as protocol that only<br>
>>>> works<br>
>>>> because we have an established semantic standard (hispid/abcd).<br>
>>>><br>
>>>> greg<br>
>>>><br>
>>>> <a href="mailto:trobertson@gbif.org">trobertson@gbif.org</a> wrote:<br>
>>>>> Hi All,<br>
>>>>><br>
>>>>> This is very interesting to me, as I reached the same<br>
>>>>> conclusion<br>
>>>>> while harvesting for GBIF.<br>
>>>>><br>
>>>>> As a "harvester of all records", the problem is best described with an<br>
>>>>> example:<br>
>>>>><br>
>>>>> - Complete Inventory of ScientificNames: 7 minutes @ the limited<br>
>>>>> 200<br>
>>>>> records per page<br>
>>>>> - Complete Harvesting of records:<br>
>>>>> - 260,000 records<br>
>>>>> - 9 hours harvesting duration<br>
>>>>> - 500MB TAPIR+DwC XML returned (DwC 1.4 with geospatial and<br>
>>>>> curatorial<br>
>>>>> extensions)<br>
>>>>> - Extraction of DwC records from harvested XML: <2 minutes<br>
>>>>> - Resulting file size 32MB, Gzipped to <3MB<br>
>>>>><br>
>>>>> I spun hard drives for 9 hours, and took up bandwidth that is paid<br>
>>>>> for, to<br>
>>>>> retrieve something that could have been generated provider side in<br>
>>>>> minutes<br>
>>>>> and transferred in seconds (3MB).<br>
>>>>><br>
>>>>> I sent a proposal to TDWG last year termed "datamaps" which was<br>
>>>>> effectively what you are describing, and I based it on the<br>
>>>>> Sitemaps<br>
>>>>> protocol, but I got nowhere with it. With Markus, we are making<br>
>>>>> more<br>
>>>>> progress and I have spoken with several GBIF data providers<br>
>>>>> about a<br>
>>>>> proposed new standard for full dataset harvesting and it has been<br>
>>>>> received<br>
>>>>> well. So Markus and I have started a new proposal with the<br>
>>>>> working name<br>
>>>>> 'Localised DwC Index' file generation (it is an index if you<br>
>>>>> have more<br>
>>>>> than DwC data, and DwC is still standards compliant), which is<br>
>>>>> really a<br>
>>>>> GZipped tab file dump of the data, slightly extensible. The<br>
>>>>> document is not ready to circulate yet, but the benefits section<br>
>>>>> currently reads:<br>
>>>>><br>
>>>>> - Provider database load reduced, allowing it to serve real<br>
>>>>> distributed<br>
>>>>> queries rather than "full datasource" harvesters<br>
>>>>> - Providers can choose to publish their index as it suits them,<br>
>>>>> giving<br>
>>>>> control back to the provider<br>
>>>>> - Localised index generation can be built into tools not yet<br>
>>>>> capable of<br>
>>>>> integrating with TDWG protocol networks such as GBIF<br>
>>>>> - Harvesters receive a full dataset view in one request, making it<br>
>>>>> very<br>
>>>>> easy to determine what records are eligible for deletion<br>
>>>>> - It becomes very simple to write clients that consume entire<br>
>>>>> datasets.<br>
>>>>> E.g. data cleansing tools that the provider can run:<br>
>>>>> - Give me ISO Country Codes for my dataset<br>
>>>>> - The application pulls down the provider's index file,<br>
>>>>> generates ISO<br>
>>>>> country codes, and returns a simple table using the provider's own<br>
>>>>> identifiers<br>
>>>>> - Check my names for spelling mistakes<br>
>>>>> - The application skims over the records and provides a list of<br>
>>>>> names that are not<br>
>>>>> known to the application<br>
>>>>> - Providers such as UK NBN cannot serve 20 million records to the<br>
>>>>> GBIF<br>
>>>>> index using the existing protocols efficiently.<br>
>>>>> - They can, however, generate a localised index<br>
>>>>> - Harvesters can very quickly build up searchable indexes, and it<br>
>>>>> is easy<br>
>>>>> to create large indexes.<br>
>>>>> - Node Portal can easily aggregate index data files<br>
>>>>> - A true index to the data, not the illusion of a cache; more like<br>
>>>>> Google Sitemaps<br>
>>>>><br>
>>>>> It is the ease with which one can offer tools to data providers<br>
>>>>> that really<br>
>>>>> interests me. The technical threshold required to produce services<br>
>>>>> that<br>
>>>>> offer reporting tools on people's data is really very low with this<br>
>>>>> mechanism. That, and the fact that large datasets will be<br>
>>>>> harvestable - we<br>
>>>>> have even considered the likes of bit-torrent for the large ones,<br>
>>>>> although<br>
>>>>> I think this is overkill.<br>
>>>>><br>
>>>>> As a consumer therefore I fully support this move as a valuable<br>
>>>>> addition<br>
>>>>> to the wrapper tools.<br>
>>>>><br>
>>>>> Cheers<br>
>>>>><br>
>>>>> Tim<br>
>>>>> (wrote the GBIF harvesting, and new to this list)<br>
>>>>><br>
>>>>><br>
>>>>>><br>
>>>>>> Begin forwarded message:<br>
>>>>>><br>
>>>>>>> From: "Aaron D. Steele" <<a href="mailto:eightysteele@gmail.com">eightysteele@gmail.com</a>><br>
>>>>>>> Date: 13 May 2008 22:40:09 GMT+02:00<br>
>>>>>>> To: <a href="mailto:tdwg-tapir@lists.tdwg.org">tdwg-tapir@lists.tdwg.org</a><br>
>>>>>>> Cc: Aaron Steele <<a href="mailto:asteele@berkeley.edu">asteele@berkeley.edu</a>><br>
>>>>>>> Subject: Re: [tdwg-tapir] Tapir protocol - Harvest methods?<br>
>>>>>>><br>
>>>>>>> at berkeley we've recently prototyped a simple php program that<br>
>>>>>>> uses<br>
>>>>>>> an existing tapirlink installation to periodically dump tapir<br>
>>>>>>> resources into a csv file. the solution is totally generic and<br>
>>>>>>> can<br>
>>>>>>> dump darwin core (and technically abcd schema, although it's<br>
>>>>>>> currently<br>
>>>>>>> untested). the resulting csv files are zip archived and made<br>
>>>>>>> accessible using a web service. it's a simple approach that has<br>
>>>>>>> proven<br>
>>>>>>> to be, at least internally, quite reliable and useful.<br>
>>>>>>><br>
>>>>>>> for example, several of our caching applications use the web<br>
>>>>>>> service<br>
>>>>>>> to harvest csv data from tapirlink resources using the following<br>
>>>>>>> process:<br>
>>>>>>> 1) download latest csv dump for a resource using the web<br>
>>>>>>> service.<br>
>>>>>>> 2) flush all locally cached records for the resource.<br>
>>>>>>> 3) bulk load the latest csv data into the cache.<br>
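[The three steps aaron lists amount to a full replace; a sketch in Python, where the dump URL, archive layout, and in-memory cache are hypothetical stand-ins for a real caching application:]

```python
import csv
import io
import urllib.request
import zipfile

def sync_resource(cache: dict, dump_url: str) -> None:
    """Replace a locally cached resource with the provider's latest dump:
    1) download the zipped csv dump, 2) flush the cached records,
    3) bulk load the new rows keyed by their first-column id."""
    with urllib.request.urlopen(dump_url) as resp:          # step 1
        archive = zipfile.ZipFile(io.BytesIO(resp.read()))
    cache.clear()                                           # step 2
    for name in archive.namelist():                         # step 3
        with archive.open(name) as f:
            for row in csv.reader(io.TextIOWrapper(f, encoding="utf-8")):
                if row:
                    cache[row[0]] = row[1:]
```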
>>>>>>><br>
>>>>>>> in this way, cached data are always synchronized with the<br>
>>>>>>> resource and<br>
>>>>>>> there's no need to track new, deleted, or changed records. as an<br>
>>>>>>> aside, each time these cached data are queried by the caching<br>
>>>>>>> application or selected in the user interface, log-only search<br>
>>>>>>> requests are sent back to the resource.<br>
>>>>>>><br>
>>>>>>> after discussion with renato giovanni and john wieczorek, we've<br>
>>>>>>> decided that merging this functionality into the tapirlink<br>
>>>>>>> codebase<br>
>>>>>>> would benefit the broader community. csv generation support<br>
>>>>>>> would<br>
>>>>>>> be<br>
>>>>>>> declared through capabilities. although incremental harvesting<br>
>>>>>>> wouldn't be immediately implemented, we could certainly extend<br>
>>>>>>> the<br>
>>>>>>> service to include it later.<br>
>>>>>>><br>
>>>>>>> i'd like to pause here to gauge the consensus, thoughts,<br>
>>>>>>> concerns, and<br>
>>>>>>> ideas of others. anyone?<br>
>>>>>>><br>
>>>>>>> thanks,<br>
>>>>>>> aaron<br>
>>>>>>><br>
>>>>>>> 2008/5/5 Kevin Richards <<a href="mailto:RichardsK@landcareresearch.co.nz">RichardsK@landcareresearch.co.nz</a>>:<br>
>>>>>>>><br>
>>>>>>>> I think I agree here.<br>
>>>>>>>><br>
>>>>>>>> The harvesting "procedure" is really defined outside the Tapir<br>
>>>>>>>> protocol, is<br>
>>>>>>>> it not? So it is really an agreement between the harvester and<br>
>>>>>>>> the<br>
>>>>>>>> harvestees.<br>
>>>>>>>><br>
>>>>>>>> So what is really needed here is the standard procedure for<br>
>>>>>>>> maintaining a<br>
>>>>>>>> "harvestable" dataset and the standard procedure for harvesting<br>
>>>>>>>> that<br>
>>>>>>>> dataset.<br>
>>>>>>>> We have a general rule at Landcare, that we never delete<br>
>>>>>>>> records<br>
>>>>>>>> in<br>
>>>>>>>> our<br>
>>>>>>>> datasets - they are either deprecated in favour of another<br>
>>>>>>>> record,<br>
>>>>>>>> and so<br>
>>>>>>>> the resolution of that record would point to the new record, or<br>
>>>>>>>> they<br>
>>>>>>>> are set<br>
>>>>>>>> to a state of "deleted", but are still kept in the dataset, and<br>
>>>>>>>> can<br>
>>>>>>>> be<br>
>>>>>>>> resolved (which would indicate a state of deleted).<br>
>>>>>>>><br>
>>>>>>>> Kevin<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> "Renato De Giovanni" <<a href="mailto:renato@cria.org.br">renato@cria.org.br</a>> wrote on 6/05/2008 7:33 a.m.:<br>
>>>>>>>><br>
>>>>>>>> Hi Markus,<br>
>>>>>>>><br>
>>>>>>>> I would suggest creating new concepts for incremental<br>
>>>>>>>> harvesting,<br>
>>>>>>>> either in the data standards themselves or in some new<br>
>>>>>>>> extension. In<br>
>>>>>>>> the case of TAPIR, GBIF could easily check the mapped concepts<br>
>>>>>>>> before<br>
>>>>>>>> deciding between incremental or full harvesting.<br>
>>>>>>>><br>
>>>>>>>> Actually it could be just one new concept such as<br>
>>>>>>>> "recordStatus"<br>
>>>>>>>> or<br>
>>>>>>>> "deletionFlag". Or perhaps you could also want to create your<br>
>>>>>>>> own<br>
>>>>>>>> definition for dateLastModified indicating which set of<br>
>>>>>>>> concepts<br>
>>>>>>>> should be considered to see if something has changed or not,<br>
>>>>>>>> but I<br>
>>>>>>>> guess this level of granularity would be difficult to<br>
>>>>>>>> support.<br>
>>>>>>>><br>
>>>>>>>> Regards,<br>
>>>>>>>> --<br>
>>>>>>>> Renato<br>
>>>>>>>><br>
>>>>>>>> On 5 May 2008 at 11:24, Markus Döring wrote:<br>
>>>>>>>><br>
>>>>>>>>> Phil,<br>
>>>>>>>>> incremental harvesting is not implemented on the GBIF side as<br>
>>>>>>>>> far<br>
>>>>>>>>> as I<br>
>>>>>>>>> am aware. And I don't think that will be a simple thing to<br>
>>>>>>>>> implement on<br>
>>>>>>>>> the current system. Also, even if we can detect only the<br>
>>>>>>>>> changed<br>
>>>>>>>>> records since the last harvesting via dateLastModified we<br>
>>>>>>>>> still<br>
>>>>>>>>> have<br>
>>>>>>>>> no information about deletions. We could have an arrangement<br>
>>>>>>>>> saying<br>
>>>>>>>>> that you keep deleted records as empty records with just the<br>
>>>>>>>>> ID<br>
>>>>>>>>> and<br>
>>>>>>>>> nothing else (I vaguely remember LSIDs were supposed to work<br>
>>>>>>>>> like<br>
>>>>>>>>> this<br>
>>>>>>>>> too). But that also needs to be supported on your side then,<br>
>>>>>>>>> never<br>
>>>>>>>>> entirely removing any record. I will have a discussion with<br>
>>>>>>>>> the<br>
>>>>>>>>> others<br>
>>>>>>>>> at GBIF about that.<br>
>>>>>>>>><br>
>>>>>>>>> Markus<br>
>>>>>>>> _______________________________________________<br>
>>>>>>>> tdwg-tapir mailing list<br>
>>>>>>>> <a href="mailto:tdwg-tapir@lists.tdwg.org">tdwg-tapir@lists.tdwg.org</a><br>
>>>>>>>> <a href="http://lists.tdwg.org/mailman/listinfo/tdwg-tapir" target="_blank">http://lists.tdwg.org/mailman/listinfo/tdwg-tapir</a><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>><br>
>>>>><br>
>>>>><br>
>>>><br>
>>>> --<br>
>>>><br>
>>>> Australian National Botanic Gardens /<br>
>>>> Australian Centre for Plant Biodiversity Research<br>
>>>> greg whitBread, Integrated Botanical Information System<br>
>>>> voice: +61 2 62509 482   fax: +61 2 62509 599<br>
>>>> <a href="mailto:ghw@anbg.gov.au">ghw@anbg.gov.au</a><br>
>>>> GPO Box 1777 Canberra 2601<br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>><br>
>>><br>
>><br>
><br>
><br>
<br>
</div></div></blockquote></div><br>