IOOS Meeting 03.22.2010

== WCS Semantic Software Activities ==


=== Code Review ===
We need a code review to see if we can make the code more robust and efficient. I think there are patterns of use in the code that can be factored into common methods. [http://scm.opendap.org/trac/ticket/1518 Ticket 1518]


=== Why are updates so slow? [http://scm.opendap.org/trac/ticket/1521 Ticket 1521] ===
It takes a long time to verify whether changes have been made to the imports.
* Even when no changes have been made to the imports list, semantic operations are still run. This seems counterintuitive: can we change that? If nothing has changed, then no additional semantic operations should be undertaken. Currently the last-modified times of the imports are stored in the repository, which seems to be a slow and unreliable mechanism for reevaluating the imports.
* We need to evaluate what to do when things have changed. Is it more expedient to remove and replace changed values in the repository, or should we just rebuild the whole thing?
* Can we pre-compute a starting-point repository that contains all of the inferencing rules and ontologies? This would allow us to avoid the long, slow acquisition of all of these files every time we rebuild the repository.
* Should we consider caching the import list and last-modified times as directly accessible Java objects so that we can write simple procedural Java code to evaluate the imports list? (A sketch of this idea follows the list.)
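
A minimal sketch of that last idea, assuming each import is identified by its URL and that last-modified times are an adequate change signal (the class and method names are hypothetical, not the current code):

 import java.net.URL;
 import java.net.URLConnection;
 import java.util.HashMap;
 import java.util.Map;
 
 // Hypothetical in-memory tracker for imports. Real code would also need
 // persistence across restarts and thread safety.
 public class ImportTracker {
     // Maps each import URL to the last-modified time we recorded for it.
     private final Map<String, Long> lastModified = new HashMap<String, Long>();
 
     // Returns true if the import has changed since we last processed it.
     public boolean isStale(String importUrl) throws Exception {
         URLConnection conn = new URL(importUrl).openConnection();
         long current = conn.getLastModified(); // 0 if the server doesn't say
         Long recorded = lastModified.get(importUrl);
         return recorded == null || recorded.longValue() != current;
     }
 
     // Records the current last-modified time after (re)processing an import.
     public void markProcessed(String importUrl) throws Exception {
         lastModified.put(importUrl, new URL(importUrl).openConnection().getLastModified());
     }
 }

Only imports for which isStale() returns true would trigger semantic operations; everything else would be skipped outright.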


=== Caching ===
At startup the catalog is empty until the semantic operations complete. [http://scm.opendap.org/trac/ticket/1517 Ticket 1517]
* Should we cache the complete catalog? (Currently Haibo is writing out the results of the repository processing, but since additional elements are added by the Java code, his copy is not complete.) A sketch of this approach follows the list.
* Should we consider making the service simply create the appropriate inputs for the LocalFileCatalog and skip more tightly coupled integration?
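
A sketch of the catalog-caching idea, assuming the finished catalog can be serialized once both the repository processing and the Java post-processing are done (all names here are hypothetical):

 import java.io.File;
 import java.io.FileInputStream;
 import java.io.FileOutputStream;
 import java.io.IOException;
 
 // Hypothetical cache for the *complete* catalog, written only after the
 // Java code has added its elements to the repository output.
 public class CatalogCache {
     private final File cacheFile;
 
     public CatalogCache(File cacheFile) { this.cacheFile = cacheFile; }
 
     // Persists the fully assembled catalog document.
     public void save(byte[] completeCatalogXml) throws IOException {
         FileOutputStream out = new FileOutputStream(cacheFile);
         try { out.write(completeCatalogXml); } finally { out.close(); }
     }
 
     // At startup: returns the cached catalog so requests can be answered
     // immediately, or null if no cache has been written yet.
     public byte[] loadIfPresent() throws IOException {
         if (!cacheFile.exists()) return null;
         byte[] buf = new byte[(int) cacheFile.length()];
         FileInputStream in = new FileInputStream(cacheFile);
         try {
             int off = 0;
             while (off < buf.length) {
                 int n = in.read(buf, off, buf.length - off);
                 if (n < 0) throw new IOException("truncated cache file");
                 off += n;
             }
         } finally { in.close(); }
         return buf;
     }
 }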


=== Servlet/Webapp ===
Let's make the WCS service a stand-alone web app: a separate jar file that runs in its own context in the servlet engine.

* Fix the local/universal ID problem. [http://scm.opendap.org/trac/ticket/1511 Ticket 1511]
* Re-factor the code as a servlet (or maybe even as its own web app) with its own servlet context. [http://scm.opendap.org/trac/ticket/1516 Ticket 1516] A minimal skeleton follows the list.
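
A minimal skeleton of such a stand-alone servlet (the class name and response body are placeholders, not the actual code):

 import java.io.IOException;
 import javax.servlet.http.HttpServlet;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
 
 // Hypothetical entry point for the stand-alone WCS web app. Packaged as
 // its own war/jar, it runs in its own servlet context and shares no
 // state with the rest of the server.
 public class WcsServlet extends HttpServlet {
     protected void doGet(HttpServletRequest req, HttpServletResponse resp)
             throws IOException {
         String request = req.getParameter("request"); // e.g. "GetCapabilities"
         resp.setContentType("text/xml");
         resp.getWriter().println("<!-- WCS response for " + request + " goes here -->");
     }
 }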


=== Additional Semantic Functions ===
; DAP Variable Access [http://scm.opendap.org/trac/ticket/1520 Ticket 1520]
: Need to access the data values of DAP arrays from the inferencing. In particular, we need the first and last values of arrays associated with map vectors (or, more likely, their "bounds" vectors) so that we can create bounding boxes. A function should take a dataseturi, local variable names, and indexing constraints expressed in some reasonable fashion. A hypothetical signature is sketched below.
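
One way the signature might look, purely as a sketch (the names and the constraint syntax are assumptions, not a design):

 import java.net.URI;
 
 // Hypothetical interface for the accessor described above.
 public interface DapVariableAccessor {
     // Fetches selected values of a DAP array. For bounding boxes this
     // would typically be the first and last elements of a map vector
     // (or of its "bounds" vector).
     //   datasetUri - the dataset to query
     //   variable   - the local name of the array variable
     //   constraint - an indexing constraint, e.g. "[0]" or "[last]"
     double[] getValues(URI datasetUri, String variable, String constraint)
             throws Exception;
 }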


; General semantic function framework [http://scm.opendap.org/trac/ticket/1519 Ticket 1519]
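
The framework itself might reduce to something this small (again, a purely hypothetical shape):

 // Hypothetical registration point for semantic functions: each function is
 // known to the inferencing rules by name and invoked with string arguments,
 // so new functions (like the DAP variable accessor above) can be added
 // without touching the framework.
 public interface SemanticFunction {
     String getName();                         // the name used in the rules
     Object invoke(String... args) throws Exception;
 }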


== GeoTIFF Module ==


We need to beg/borrow/steal the time to build a GeoTIFF module for the BES. The idea is that, like the netcdf_fileout module, it would return data in GeoTIFF format. Obviously Grids would map well to GeoTIFF (assuming the Grid has the requisite maps). We would need to either creatively represent other data types/structures or simply deny GeoTIFF access for non-Grid data types. Even if we never build releasable WMS or WCS services, adding this would allow others to easily wrap those services around our server.


Just creating this kind of output has the potential to increase our user base immensely.

See [[GeoTIFF responses for Hyrax]]


== KML Module ==


Same arguments as GeoTIFF.

See [[KML_responses_for_Hyrax]]


== THREDDS Catalog Clients ==

IDV seems like it might be a client that can navigate catalogs. (Not surprising, considering it's a UNIDATA thing.)

Several clients demonstrated at the IOOS meeting allowed users to navigate THREDDS catalogs in search of data:

* Environmental Data Connector
* ERDDAP
* IOOS National Catalog Viewer (See [[#THREDDS_Catalog_Metadata]])


== THREDDS Catalog Metadata ==

* How do we propagate (some subset of) dataset metadata "up" into our THREDDS catalogs? People are requesting this.
* Can we use NcML to push new metadata down into collections via the inherited metadata section of a dataset scan? That is, can we add metadata terms to the THREDDS catalogs using NcML? OtherXML? Attributes? Both?
* How do we add new services (for example, WCS) to the services listed in our THREDDS catalogs, along with the appropriate data access links for the appropriate datasets? Is this even realistic given the architecture of our WCS?
* This is requested functionality for the National Catalog Viewer in development by IOOS, and it is needed by June 2010. Can we pull that off?

See [[THREDDS Catalog Metadata]]


== DAP Capabilities Response ==


* Services
* Return Types/Formats
* Server Side Functions
* Catalog
** What catalog metadata should we provide so that a complete enough picture of the holdings is available? The intent is to make it so that users don't need to delve into the granules to determine whether a holding contains the information they want. See the [[#THREDDS Catalog Metadata|THREDDS Catalog Metadata]] section above.

See [[DAP Capabilities]]


== DAP Asynchronous Responses ==

* Could this be implemented by altering the dap:blob element? Currently we propose using an href attribute to hold the content ID for the MIME part that holds the data:

 <dap:blob href="someUUID" />

: We might consider allowing an alternate representation:

 <dap:blob
         xlink:href="http://the.server/location/where/you/can/get/the/binary/part"
         xlink:type="simple"
         available="TimeItWillBeAvailableInSomeISOFormat"
     />

: That would indicate to the user that the content will be available asynchronously.

Comment added to [[DAP 4.0 Design#Organization_of_the_multipart_MIME_document| DAP 4.0 Design]]
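
Purely to illustrate what the alternate representation would mean for a client, here is a hedged sketch that honors the available hint and then polls the xlink:href until the binary part appears (the class name and the polling policy are assumptions):

 import java.io.ByteArrayOutputStream;
 import java.io.InputStream;
 import java.net.HttpURLConnection;
 import java.net.URL;
 
 // Hypothetical client-side handling of the asynchronous dap:blob form.
 public class AsyncBlobClient {
     public static byte[] fetchWhenAvailable(String href, long availableAtMillis)
             throws Exception {
         long wait = availableAtMillis - System.currentTimeMillis();
         if (wait > 0) Thread.sleep(wait);      // honor the 'available' hint
         while (true) {
             HttpURLConnection conn =
                     (HttpURLConnection) new URL(href).openConnection();
             if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
                 InputStream in = conn.getInputStream();
                 ByteArrayOutputStream out = new ByteArrayOutputStream();
                 byte[] buf = new byte[8192];
                 int n;
                 while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                 return out.toByteArray();      // the binary part is ready
             }
             Thread.sleep(5000);                // not ready yet; poll again
         }
     }
 }
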
== DAP 3.x and 4.x ==


* TDS and new DAP standards
:;Claim: For DAP 3.x and 4.x to become widely adopted, we will need to get the TDS to become compatible with these versions.
:: The TDS is clearly the dominant DAP service software. This position makes it difficult (if not impossible) for us to push the DAP standards forward without enabling the TDS. In order for the TDS to produce DAP 3.x and 4.x output, we will need to start supporting these updated protocols in the Java-DAP.


:;Assertion
:: If we don't do this, then changes to the DAP will be irrelevant.  


* Do we want to make the DAP 3.2 DDX the default DDX response?
** Are there any users of the DAP 2.0 DDX?




== To Do ==


* Look at "Fudge Messaging"
* Look at [http://www.fudgemsg.org  Fudge Messaging]
