DAP4: Encoding for the Data Response

From OPeNDAP Documentation

Revision as of 23:22, 12 June 2012


Background

There are two different approaches to deserializing the data received by a DAP client: the client may process the data as it is received (i.e., eager evaluation) or it may write those data to a store and process them after the fact (lazy evaluation). A variant of these techniques is to process the data and also write it to a store, presumably because the initial processing steps are useful while having the data stored for later processing enables still other uses. However, in this document I'm not going to look at that variant because experience so far with DAP2 has given no indication that it would provide any performance benefit. We do have example clients that use both eager and lazy evaluation.

HTTP/1.1 defines a chunked transport scheme. In the past we have spent a fair amount of time on the notion of chunking as a way to achieve reliable transmission of errors when those errors are encountered during response formulation; that won't be addressed in this document. Instead, this document will assume that the entire response described here is chunked in a way that enables reliable transmission of errors. The details of that transfer encoding will be described elsewhere.

References

I found these useful and thought they might be better not lost at the end of a long document.

  1. HTTP/1.1
  2. WikiPedia on Endianness
  3. Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies
  4. WebSockets
  5. W3C WebSockets
  6. Javascript Worker Threads (I wish there were a better reference than a blog post)

Problem addressed

There is a need to move information from the server to a client. The way this is done should facilitate many different designs for both server and client components.

Assumptions:

  1. Since DAP is so closely tied to the web and HTTP, its design is dominated by that protocol's characteristics.
  2. Processing on either the client or the server is an order of magnitude faster than network transmission.
  3. Server memory should be conserved with favor given to a design that does not require storage of large parts of a response before it is transmitted (but large is a relative term).
  4. Clients are hard to write and the existence of a plentiful supply of high-quality clients is important (of course, servers are hard to write, too, but there are between 5 and 10 times the number of DAP2 clients as servers).
  5. The response does not explicitly support a real-time stream of data (e.g., a temperature sensor which is a data structure of essentially infinite size). It may, however, be the case that the response can encode that kind of information.

Broad issues:

  1. It should be fast
  2. It should be simple
  3. It should be part of the web - meaning that the XML part(s) should be identifiable/usable by generic web software even though the binary data part will be completely opaque.

Proposed solution

The response document will use the multipart-mime standard. The response is the server's answer to a request for data from a client. Each such request must either include a Constraint Expression enumerating the variables requested or a null CE that is taken to mean 'return the entire dataset.' A response will consist of two parts:

  1. A DDX that has no attribute information and contains (only) the variables requested; and
  2. A binary part that contains the data for those variables

The response uses the multipart-mime standard, but there are always exactly two parts - the DDX containing variable names and types and the binary BLOB containing data.

Structure of the metadata (DDX) Part

The start of the DataDDX document consists of the initial Content-Type header that indicates the response is a multipart mime document, followed by the first part. The first part always contains the DDX. Note that the Content-Type of this part is text/xml and that its charset parameter is UTF-8. Note also that the transfer encoding is binary. To encode the DAP version, use an XDAP header.

Note: It may be that some transport protocols require that each response be identifiable. If that's the case, DAP4 should add an optional Content-Description header to this response and set its value to the request URL. This will introduce some redundancy to the response (because the DAP4 DDX already contains that URL as the value of the xmlbase XML attribute), but including it in a header makes it accessible without parsing the DDX. We should not use Content-ID for this, although it is tempting, since that seems appropriate for MIME sent over email and not for MIME as an HTTP payload (see HTTP/1.1, sec. 3).

HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Content-Type: multipart/related; type="text/xml"; start="<<start id>>";  boundary="<<boundary>>"
Content-Description: data-ddx; url=...
Content-Encoding: gzip
XDAP: <<DAP version>>

--<<boundary>>
Content-Type: text/xml; charset=UTF-8
Content-Transfer-Encoding: binary
Content-Description: ddx
Content-Id: <<start-id>>

    <<DDX here>>
--<<boundary>>
...

Structure of the binary part

The binary part starts with the MIME headers for a Part in a multipart-related document [Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies]. This header will include the byte-order (big-endian or little-endian) used to encode values.

Data in the 'binary part' will be serialized in the order of the variables listed in the DDX part. Essentially this is the serialization format of DAP2, extended to support arrays with varying dimensions and stripped of the redundant information added by various XDR implementations.

The entire binary content of the response is contained in a second part. Note that the Content-Type of this part is application/x-dap-big-endian or application/x-dap-little-endian. The client will use this header to correctly decode data values. The Content-Length header is present here to help internet tools (such as caches) when the server can realistically know the size of the data to be serialized before the serialization takes place. A value of -1 indicates an unknown size.

...
--<<boundary>>
Content-Type: application/x-dap-little-endian
Content-Transfer-Encoding: binary
Content-Description: data
Content-Id: <<next-id>>
Content-Length: <<-1 or the size in bytes of the binary data>>

...
--<<boundary>>
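The mapping from the two x-dap subtypes to a concrete decoding step can be sketched as follows. This is an illustration, not part of the specification: only the two application/x-dap-* subtype names come from the text above, and the function name and 4-byte Int32 decode are assumptions about how a client might organize this.

```python
import struct

def byte_order_prefix(content_type: str) -> str:
    """Map the binary part's Content-Type onto a struct byte-order prefix."""
    subtype = content_type.split(";")[0].strip().lower()
    if subtype == "application/x-dap-little-endian":
        return "<"
    if subtype == "application/x-dap-big-endian":
        return ">"
    raise ValueError("unrecognized DAP data Content-Type: " + content_type)

# Decode one Int32 from the data part using the advertised byte order.
prefix = byte_order_prefix("application/x-dap-big-endian")
(value,) = struct.unpack(prefix + "i", b"\x00\x00\x00\x2a")  # value == 42
```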

Encoding of values

Unlike DAP2, DAP4 will not use XDR. Values will be encoded using the byte order of the server. Also unlike DAP2, we will not pad bytes in the response. Floating point values will use the nearly ubiquitous IEEE 754 standard.
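As a sketch of what 'no padding' means in practice: an Int32 directly followed by a Float64 serializes to exactly 12 bytes. Python's struct module with the '=' prefix uses native byte order and no alignment padding, and Python floats are IEEE 754 doubles, matching the choices above.

```python
import struct

# An Int32 followed by a Float64, serialized back to back:
# 4 + 8 = 12 bytes, with no pad bytes inserted between the two values.
payload = struct.pack("=id", 17, 3.5)
assert len(payload) == 12
assert struct.unpack("=id", payload) == (17, 3.5)
```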

About serialization of varying-sized variables

There are several kinds of varying data:

  1. Strings
    • String s;
  2. Array variables that vary in size
    • Int32 i[*];
    • Float64 j[10][*];
  3. Structure variables with varying dimensions and Sequence variables
    • Structure { int32 i; int32 j[10]; } thing[*];
    • Sequence { int32 i; int32 j[10]; } thing2;
  4. Structure variables that have a varying dimension and one or more fields that vary
    • Structure { int32 i[*]; int32 j[10][*]; } thing[*];

Note that there is no practical difference between a (character) String and an integer or floating point array with varying size except that the type of the elements differ. Thus, the issues associated with encoding Int32 i[*] are really no different than encoding the String type. This same logic can be extended to a varying array of Structures; it can be seen as a string of Structures.

JohnCaron

"We will pad all values to four-byte words"

I don't know of any good reason to pad; it seems to me to be not needed.

Jimg 16:15, 12 June 2012 (PDT) Agreed; I modified the document to reflect that.

General serialization rules

Narrative form:

  1. Fixed size types: Serialized by writing their (encoded) data.
  2. Strings: Serialized by writing their size as an N-bit integer, then their encoded value
  3. Scalar Structures (which may have String/varying fields): Each field is iteratively serialized.
  4. Arrays (possibly with varying dimensions): An array is serialized by serializing the vector denoted by the leftmost dimension. For a fixed size dimension, each element is serialized. For a varying dimension, the length of the vector is written and then each element is serialized.
  5. Sequences are serialized row by row: first a Start of Instance marker is written, then each of the fields of the row is serialized; after the last row of the Sequence has been serialized, an End of Sequence marker is written
  6. Opaque types will be treated like Byte [*] variables (for the purpose of serializing their values).
  7. Checksums will be computed for the values of all the variables at the top-level of each Group in the response. The checksum value will follow the value of the variable. We will use MD5 since it appears to be faster than SHA1 and we don't care about cryptographic security (at least I don't think so...).
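The rules above can be sketched in a few lines. This is not a reference implementation: it assumes little-endian encoding and a 4-byte length prefix for Strings and varying dimensions (the rules above say only 'N-bit'), and it covers rules 1, 2, 4 and 7 for the simplest cases.

```python
import hashlib
import struct

def serialize_fixed_int32s(values):
    # Rule 1/4: a fixed-size Int32 vector is just its encoded elements.
    return b"".join(struct.pack("<i", v) for v in values)

def serialize_varying_int32s(values):
    # Rule 4: a varying dimension writes its length, then each element.
    return struct.pack("<i", len(values)) + serialize_fixed_int32s(values)

def serialize_string(s):
    # Rule 2: a String writes its size, then its encoded value.
    data = s.encode("utf-8")
    return struct.pack("<i", len(data)) + data

def checksum(value_bytes):
    # Rule 7: MD5 of the variable's values.
    return hashlib.md5(value_bytes).digest()

blob = serialize_varying_int32s([10, 20, 30])
digest = checksum(blob[4:])  # values only, skipping the length prefix
```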


Assumptions:

  1. String: it is assumed that the server will know (or can determine without undue cost) the length of the String at the time serialization begins.
  2. It is assumed that the size of any variable dimension will be known at the time of serialization of that variable's dimension.
  3. For a Sequence, it is assumed that the total size may be considerable and not known at the time serialization begins.

Notes:

  1. This will use receiver-makes-right and thus needs a header to convey that; I suggest using the Content-Type header's subtype value, which RFC 2045 allows to be an x- value of our choosing.
  2. Sequences cannot contain child Sequences (i.e., we are not allowing 'nested sequences') in DAP4
  3. This set of serialization rules can be modified slightly to support the case where fixed-, string- and varying-size data are separated into different parts of a multipart-mime document. In that case there would be more than two parts to the response.

JohnCaron

I would not use an HTTP header to indicate byte ordering, since that makes essential information HTTP specific.

Rationale

There are two main differences between this proposed design and Proposed DAP4 On-The-Wire Format: The data corresponding to varying-size variables is mixed in with the fixed-size variables, and this design depends on the DDX (i.e., the metadata in the first of the two Parts of the response) to provide critical information regarding the organization of the binary information.

Combining the fixed- and varying-size data

The effort required by a server to build a response that supports random access does not seem justified. To build a response that separates the fixed- and varying-size data values, the server must either make two passes at serializing the response or store the varying-size data after serialization until all of the fixed-size data have been serialized and transmitted. If DAP were operating over transports that used parallel I/O, this would not be the case; for HTTP, however, that issue is moot. A two-pass serialization process is complex, and storing the serialized varying-size data is not acceptable (either because it demands run-time memory or because it uses slow secondary storage).

On the other hand, clients can easily read the response described here and reformat it for random access use. In addition, a client may be able to take advantage of information unavailable to the server, such as the intended use of the data, to optimize the storage in ways that the server cannot. What performance penalties will this place on a client? If the client uses only a single thread/process, it cannot begin to use the data until all of the response has been read from the socket. However, it can reformat the data while they are being read (either by writing those data to two or more files or to different parts of memory) and it will have to do that anyway. That is, a single-threaded client is stuck reading all of the bytes of the response from a socket and storing them somewhere before it can do anything else. A savvy client would certainly look at the DDX and note that if it contains only fixed-size variables, it is already in a form amenable to random access. Either way, the client can have the data in a form that facilitates random access just as soon as the response is completely received (based on the assumption that network I/O is far slower than even spinning disk I/O). A multi-threaded (or double-buffered) client could do significant processing while waiting for successive read operations to complete.

JohnCaron

1) I agree that the client should be the place to create a randomly accessible persistent form, if desired.

2) Network I/O can be faster than local disk; e.g., at Unidata we have 1Gbit ethernet, but the local transfer rate is less than 100Mbit/sec.

Removing (most of) the tags from the binary content

In the initial Proposed DAP4 On-The-Wire Format, tags for the sizes and types were included in the binary stream so it was not necessary to also 'walk the DDX' to deserialize the data. This has considerable appeal. It makes error detection easier and makes the response document less bound to a particular deserialization scheme. So why leave those out? Compactness and deserialization speed. This response has the minimal amount of extra information, which makes it compact but also faster to deserialize for single-threaded clients (assumed to be most clients). It is assumed that most clients will read a block from a socket, then store it somewhere, then read the next block, and so on, effectively mixing the parsing of the response with the network I/O. The use of prefixed length information, kept to a minimum, minimizes the number of reads for a typical client implementation. Of course, a client could read fixed-size blocks from the stream and parse them in memory, but those that do will hardly suffer from this response format.

Suitability for other protocols

Two candidate protocols for DAP appear to be AMQP and WebSockets. I know of no real draw to implement DAP over AMQP; WebSockets seems like it could be very useful for building interactive web-based UIs, but it also still seems to be very much a draft. I think we should focus our efforts on a response that works well with HTTP.

Discussion

Jimg 13:07, 11 June 2012 (PDT) My main concern with this encoding scheme is that a typical client can be coded so that it is pretty efficient, and a really good client can be coded so that it reads and decodes the information in as little extra time as it would take HTTP to simply transfer the document. I think a typical client will probably read the BLOB part of this chunk by chunk (see Data response and errors) and dump the result in a file for later use. A better client would do that plus break the parts up into sections and store them on disk or in memory. A really good client would use two or more threads to double-buffer the network I/O, effectively using the transmission latency to perform the decoding and processing operations. A quick look at libcurl's multi API indicates there's at least one way to do that without using threads or multiple processes.

Example responses

In these examples, spaces and newlines have been added to make them easier to read. The real responses are as compact as they can be. Since this proposal is just about the form of the response - and it really focuses on the BLOB part - there is no mention of 'chunking.' For information on how this BLOB will/could be chunked, see Data response and errors.

A single scalar

Dataset {
    Int32 x;
} foo;

NB: Some poetic license is used in the following, and checksums for single integer values seem silly, but these are really simple examples.

...
Content-Type: multipart/related; type="text/xml"; start="<<start id>>";  boundary="<<boundary>>"
 
--<<boundary>>
Content-Type: text/xml; charset=UTF-8
Content-Transfer-Encoding: binary
Content-Description: ddx
Content-Id: <<start-id>>

    <<DDX here>>
--<<boundary>>
Content-Type: application/x-dap-little-endian
Content-Transfer-Encoding: binary
Content-Description: data
Content-Id: <<next-id>>
Content-Length: <<-1 or the size in bytes of the binary data>>

x
<<checksum>>

--<<boundary>>

A single array

Dataset {
    Int32 x[2][4];
} foo;
...
Content-Length: <<-1 or the size in bytes of the binary data>>

x00 x01 x02 x03 x10 x11 x12 x13 
<<checksum>>

--<<boundary>>

A single structure

Dataset {
    Structure {
        Int32 x[2][4];
        Float64 y;
    } s;
} foo;

Note that there is a single variable at the top-level of the implied Group / and that is s, so it's s that we compute the checksum for.

...
Content-Length: <<-1 or the size in bytes of the binary data>>

x00 x01 x02 x03 x10 x11 x12 x13 
y 
<<checksum>>

--<<boundary>>


An array of structures

Dataset {
    Structure {
        Int32 x[2][4];
        Float64 y;
    } s[3];
} foo;
...
Content-Length: <<-1 or the size in bytes of the binary data>>

x00 x01 x02 x03 x10 x11 x12 x13 
y 
x00 x01 x02 x03 x10 x11 x12 x13 
y 
x00 x01 x02 x03 x10 x11 x12 x13 
y 
<<checksum>>

--<<boundary>>

A single varying array (one varying dimension)

Dataset {
    String s;
    Int32 a[*];
    Int32 x[2][*];
} foo;

Note: The checksum calculation includes only the values of the variable, not the prefix length bytes.

...
Content-Length: <<-1 or the size in bytes of the binary data>>

16 This is a string 
<<checksum>>

5 a0 a1 a2 a3 a4
<<checksum>>

3 x00 x01 x02 6 x00 x01 x02 x03 x04 x05 
<<checksum>>

--<<boundary>>

NB: varying dimensions are treated 'like strings' and prefixed with a length count. In the last of the three variables, the array x is a 2 by varying array with the example's first 'row' containing 3 elements and the second 6.
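A sketch of how a client might read such a variable - here the 2-by-varying array x above - under the same caveats as before: the 4-byte little-endian length prefix is an assumption for illustration, and the checksum covers the element bytes only, not the prefixes.

```python
import hashlib
import struct

def read_varying_rows(buf, n_rows):
    # Each row is a length count followed by that many Int32 elements.
    rows, values_only, offset = [], b"", 0
    for _ in range(n_rows):
        (count,) = struct.unpack_from("<i", buf, offset)
        offset += 4
        rows.append(list(struct.unpack_from("<%di" % count, buf, offset)))
        values_only += buf[offset:offset + 4 * count]  # checksum input
        offset += 4 * count
    return rows, hashlib.md5(values_only).digest()

# Rows of 3 and 6 elements, mirroring the example above.
stream = (struct.pack("<4i", 3, 0, 1, 2)
          + struct.pack("<7i", 6, 0, 1, 2, 3, 4, 5))
rows, digest = read_varying_rows(stream, 2)  # rows == [[0,1,2], [0,1,2,3,4,5]]
```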

A single varying array (two varying dimensions)

Dataset {
    Int32 x[*][*];
} foo;
...
Content-Length: <<-1 or the size in bytes of the binary data>>

3

3 x00 x01 x02 

6 x10 x11 x12 x13 x14 x15

1  x20 
<<checksum>>

--<<boundary>>

A varying array of structures

Dataset {
    Structure {
        Int32 x[4][4];
        Float64 y;
    } s[*];
} foo;
...
Content-Length: <<-1 or the size in bytes of the binary data>>

2

x00 x01 x02 x03 x10 x11 x12 x13
y 

x00 x01 x02 x03 x10 x11 x12 x13 
y 
<<checksum>>

--<<boundary>>

NB: two rows...

JohnCaron

I would recommend some kind of an <end> tag, rather than having to know the number of structures that will get returned before you start writing.

A varying array of structures with fields that have varying dimensions

Dataset {
    Structure {
        Int32 x[2][*];
        Float64 y;
    } s[*];
} foo;
...
Content-Length: <<-1 or the size in bytes of the binary data>>

3

1 x00 4 x10 x11 x12 x13 
y 

3 x00 x01 x02 2 x10 x11
y 

2 x00 x01 2 x10 x11
y 
<<checksum>>

--<<boundary>>


A Sequence

Dataset {
    Sequence {
        Int32 x[2][*];
        Float64 y;
        Float64 z;
        Structure {
            Int32 p;
            Int32 q;
        } ps_and_qs;
    } s;
} foo;

Note: Like the varying dimension arrays, the checksum for a Sequence covers its values only, not the SOI or EOS markers.

...
Content-Length: <<-1 or the size in bytes of the binary data>>

SOI

1 x00 4 x10 x11 x12 x13 
y 
z
p
q

SOI
3 x00 x01 x02 2 x10 x11
y 
z
p
q

SOI
2 x00 x01 2 x10 x11
y
z
p
q

EOS
<<checksum>>

--<<boundary>>
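Finally, a sketch of reading a Sequence row by row using the SOI/EOS markers. The one-byte marker values here are invented for illustration (this document does not fix their encoding), and the Sequence is reduced to a single Int32 field to keep the sketch short.

```python
import struct

SOI, EOS = b"\x5a", b"\xa5"  # marker encodings assumed for this sketch

def read_sequence_of_int32(buf):
    # Each row starts with SOI; the list of rows ends with EOS.
    rows, offset = [], 0
    while buf[offset:offset + 1] == SOI:
        offset += 1
        (v,) = struct.unpack_from("<i", buf, offset)
        offset += 4
        rows.append(v)
    if buf[offset:offset + 1] != EOS:
        raise ValueError("malformed Sequence: missing EOS marker")
    return rows

stream = SOI + struct.pack("<i", 7) + SOI + struct.pack("<i", 8) + EOS
# read_sequence_of_int32(stream) == [7, 8]
```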