<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://docs.opendap.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jimg</id>
	<title>OPeNDAP Documentation - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://docs.opendap.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jimg"/>
	<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php/Special:Contributions/Jimg"/>
	<updated>2026-04-16T07:06:33Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.38.4</generator>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13613</id>
		<title>Hyrax GitHub Source Build</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13613"/>
		<updated>2026-01-29T00:18:59Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Apple OSX (Mx processor) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This describes how to get and build Hyrax from our GitHub repositories. Hyrax is a data server that implements the DAP2 and DAP4 protocols, works with a number of different data formats, and supports a wide variety of customization options, from tailoring the look of the server&#039;s web pages to complex server-side processing operations. This page describes how to build the server&#039;s source code. The process is similar on Linux and OS/X, so we describe only the Linux case; we do not support building the server on Windows operating systems.&lt;br /&gt;
&lt;br /&gt;
To build and install the server, you need to perform three steps:&lt;br /&gt;
# Set up the computer to build source code (Install a Java compiler; install a C/C++ compiler; add some other tools)&lt;br /&gt;
# Build the C++ DAP library (&#039;&#039;libdap4&#039;&#039;) and the Hyrax BES daemon&lt;br /&gt;
# Build the Hyrax OLFS web application&lt;br /&gt;
&lt;br /&gt;
Quick links if you already know the process:&lt;br /&gt;
* [https://github.com/opendap/hyrax new all-in-one repo that uses shell scripts]&lt;br /&gt;
* [https://github.com/opendap/libdap libdap git repo]&lt;br /&gt;
* [https://github.com/opendap/bes BES git repo]&lt;br /&gt;
* [https://github.com/opendap/olfs OLFS git repo]&lt;br /&gt;
* [https://github.com/opendap/hyrax-dependencies Hyrax dependencies]&lt;br /&gt;
&lt;br /&gt;
= Set up a system to build our code =&lt;br /&gt;
== CentOS-8  ==&lt;br /&gt;
The CentOS-8 setup is very similar to CentOS-7, but there are some minor differences.&lt;br /&gt;
 &lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum -y update&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;You will need to enable power-tools for this setup&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum config-manager --set-enabled powertools&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load the basic software development environment plus the additional packages openjpeg2, jasper, and libtirpc. Note that you may not need &#039;&#039;openjpeg2&#039;&#039; and &#039;&#039;jasper&#039;&#039; if you build the dependencies successfully; if you determine that you don&#039;t need them, please let us know. JUnit support has been dropped, so we removed the &amp;lt;tt&amp;gt;&#039;&#039;ant-junit junit&#039;&#039;&amp;lt;/tt&amp;gt; packages from the install list.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc openjpeg2-devel jasper-devel libtirpc-devel&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Tell the machine where to find the tirpc libraries&lt;br /&gt;
:&amp;lt;tt&amp;gt;export CPPFLAGS=-I/usr/include/tirpc&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt;export LDFLAGS=-ltirpc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;NB: As of 1/28/22 you should not need to do this. The &#039;&#039;configure&#039;&#039; script should find the correct way to run python on CentOS 8. However, if it does not, our Makefiles (built from &#039;&#039;Makefile.am&#039;&#039; files) use &#039;&#039;python&#039;&#039; but a vanilla CentOS 8 machine only has &#039;&#039;python3&#039;&#039;. Until we fix this, you need to make sure &#039;&#039;python&#039;&#039; runs a python program. One way is to make a symbolic link between &#039;&#039;python3&#039;&#039; and &#039;&#039;python&#039;&#039; in a directory that is on your PATH. &#039;&#039;&#039;The TODO item here is to make sure &#039;&#039;python&#039;&#039; exists and can run a program&#039;&#039;&#039;. It is generally enough to verify that the command exists:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;which python&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;If the command is missing (as it was for me on Rocky8), install python&lt;br /&gt;
: &amp;lt;tt&amp;gt;sudo yum install -y python3&amp;lt;/tt&amp;gt;&lt;br /&gt;
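The check-and-fallback described in the NB above can be sketched as one shell fragment. The symlink target directory (/usr/local/bin) is an assumption; use any directory on your PATH.

```shell
# Sketch of the "make sure python runs a program" check from the NB above.
# Assumption: /usr/local/bin is on PATH; adjust to taste.
if command -v python >/dev/null 2>&1; then
    PY_OK=yes
else
    PY_OK=no
    # Fallback: link python3 to python (needs root):
    # sudo ln -s "$(command -v python3)" /usr/local/bin/python
fi
echo "python on PATH: $PY_OK"
```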
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum install rpm-devel rpm-build redhat-rpm-config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once you run through the rest of the Hyrax build, make sure that both &#039;&#039;gdal&#039;&#039; and &#039;&#039;hdf4&#039;&#039; built correctly (look for their libraries in $prefix/deps/lib). To build them manually, run &#039;&#039;&#039;make gdal&#039;&#039;&#039;, &#039;&#039;&#039;make hdf4&#039;&#039;&#039;, and &#039;&#039;&#039;make netcdf4&#039;&#039;&#039; inside the hyrax-dependencies directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Hyrax-Rocky9|Configuring Rocky9]] ==&lt;br /&gt;
== [[Hyrax-Rocky8|Configuring Rocky8]] ==&lt;br /&gt;
&lt;br /&gt;
== Rocky 8 ==&lt;br /&gt;
&#039;&#039;Updated 6/6/2024&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the commands ps, which, etc.&lt;br /&gt;
 dnf install -y procps&lt;br /&gt;
&lt;br /&gt;
C++ environment plus build tools&lt;br /&gt;
 dnf install -y git gcc-c++ flex bison cmake autoconf automake libtool emacs bzip2 vim bc&lt;br /&gt;
&lt;br /&gt;
Development library versions&lt;br /&gt;
 dnf install -y openssl-devel libuuid-devel readline-devel zlib-devel bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel libtirpc-devel&lt;br /&gt;
&lt;br /&gt;
Java&lt;br /&gt;
 dnf install -y java-17-openjdk java-17-openjdk-devel ant &lt;br /&gt;
&lt;br /&gt;
Setup DNF so that we can load in some obscure packages from EPEL, etc., repos&lt;br /&gt;
 dnf install dnf-plugins-core&lt;br /&gt;
 dnf install epel-release&lt;br /&gt;
 dnf config-manager --set-enabled powertools&lt;br /&gt;
&lt;br /&gt;
Install CppUnit and some more development libraries&lt;br /&gt;
 dnf install -y cppunit cppunit-devel openjpeg2-devel jasper-devel&lt;br /&gt;
&lt;br /&gt;
Install the RPM tools&lt;br /&gt;
 dnf install -y rpm-devel rpm-build redhat-rpm-config&lt;br /&gt;
&lt;br /&gt;
Install the AWS CLI&lt;br /&gt;
 dnf install -y awscli&lt;br /&gt;
&lt;br /&gt;
== Apple OSX (M&#039;&#039;x&#039;&#039; processor) ==&lt;br /&gt;
&lt;br /&gt;
Computers with the Apple M series chips require dedicated binaries, or binaries with both Intel and M1 contents. To get the &#039;&#039;hyrax-dependencies&#039;&#039; project (and the libdap4 and bes projects) to build, the following packages need to be installed before running the hyrax-dependencies build.&lt;br /&gt;
&lt;br /&gt;
Updated 1/28/2026 using notes from a build on a clean OSX M1 machine running Tahoe 26.2. I loaded some things that are not strictly necessary, like 1Password, during the very first step; I&#039;m documenting that here just to be complete. jhrg&lt;br /&gt;
&lt;br /&gt;
I installed &#039;&#039;&#039;Chrome&#039;&#039;&#039;, &#039;&#039;&#039;1Password&#039;&#039;&#039;, and &#039;&#039;&#039;vscode&#039;&#039;&#039; because they make it easier for me, but I did not directly use them for the build. I used the emacs clone &#039;&#039;&#039;mg&#039;&#039;&#039; that is bundled with OSX to edit files.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Xcode&#039;&#039;&#039;&lt;br /&gt;
* Use the App Store to install Xcode&lt;br /&gt;
* Use a terminal window to run the command: &#039;&#039;xcode-select --install&#039;&#039; &amp;lt;-- I&#039;m not sure if I did this. It&#039;s likely, but I started Xcode and clicked &#039;OK&#039; in the dialog that prompts for the various environments to install (e.g., do you want to write code for the Apple Watch). I installed only the development tools for OSX.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Homebrew&#039;&#039;&#039;&lt;br /&gt;
* Needed for later steps&lt;br /&gt;
* Set the environment variable HB to the Homebrew install path (/opt/homebrew for me)&lt;br /&gt;
* export PATH=&amp;quot;$HB/bin:$PATH&amp;quot;&lt;br /&gt;
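Collected into one snippet, and assuming the default Apple-silicon Homebrew prefix of /opt/homebrew, the Homebrew environment setup looks like:

```shell
# Assumption: Homebrew lives under /opt/homebrew (the Apple-silicon default).
export HB=/opt/homebrew
export PATH="$HB/bin:$PATH"   # put brew-installed tools first on PATH
```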
&lt;br /&gt;
&#039;&#039;&#039;cmake&#039;&#039;&#039;&lt;br /&gt;
* brew install cmake&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;pkg-config&#039;&#039;&#039;&lt;br /&gt;
* brew install pkg-config&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libpng&#039;&#039;&#039;&lt;br /&gt;
* brew install libpng&lt;br /&gt;
* export CPPFLAGS=&amp;quot;-I$HB/include&amp;quot;&lt;br /&gt;
* export LDFLAGS=&amp;quot;-L$HB/lib&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At this point, the hyrax-dependencies repo should build, but make sure you follow the directions there, e.g., sourcing &#039;&#039;&#039;spath.sh&#039;&#039;&#039; &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;autotools&#039;&#039;&#039;&lt;br /&gt;
* brew install autoconf automake libtool&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CPPUNIT&#039;&#039;&#039;&lt;br /&gt;
* brew install cppunit&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;CppUnit is not needed to build the code, but it is needed to run the unit tests&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At this point the libdap4 repo should build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;openssl&#039;&#039;&#039;&lt;br /&gt;
* brew install openssl&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At this point the bes repo should build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ant&#039;&#039;&#039;&lt;br /&gt;
* brew install ant&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At this point the OLFS will build the Ant &#039;server&#039; target (nothing else tested)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: as of 1/28/26 starting the BES reveals that for &#039;&#039;this build&#039;&#039; the gdal module does not work because of -rpath linker issues. However, &#039;turning off&#039; the gdal module fixes this and a functioning BES exists. I found that homebrew also has Tomcat 11 and will install it, but I could not get the opendap.war to work. This needs a bit more effort.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: I made no attempt to build a docker container.&lt;br /&gt;
&lt;br /&gt;
= A semi-automatic build =&lt;br /&gt;
&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the short instructions in the README file.&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Summarized here, those instructions are:&lt;br /&gt;
;use bash: The shell scripts in this repo assume you are using bash.&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development: &#039;&#039;source spath.sh&#039;&#039;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies: &#039;&#039;./hyrax_clone.sh -v&#039;&#039;&lt;br /&gt;
;build the code, including the dependencies: &#039;&#039;./hyrax_build.sh -v&#039;&#039;&lt;br /&gt;
;test the server: Start the BES using  &#039;&#039;besctl start&#039;&#039;&lt;br /&gt;
:Start the OLFS using &#039;&#039;./build/apache-tomcat-7.0.57/bin/startup.sh&#039;&#039;&lt;br /&gt;
:Test the server by looking at &#039;&#039;&amp;lt;nowiki&amp;gt;http://localhost:8080/opendap&amp;lt;/nowiki&amp;gt;&#039;&#039; in a browser. You should see a directory named &#039;&#039;data&#039;&#039;, and following that link should lead to more data. The server will also be accessible to clients other than a web browser.&lt;br /&gt;
:To test the BES independently of the front end, use &#039;&#039;bescmdln&#039;&#039; and give it the &#039;&#039;show version;&#039;&#039; command; you should see output about the different components and their versions. &lt;br /&gt;
:Use &#039;&#039;exit&#039;&#039; to leave the command line test client.&lt;br /&gt;
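Summarized, the whole semi-automatic sequence fits in a short bash script. The sketch below only writes the script and syntax-checks it, since actually running it needs network access; the Tomcat path is taken from the step above and may differ in your build.

```shell
# Write the semi-automatic build sequence to a script and syntax-check it.
# (Not executed here: cloning and building need network access and time.)
cat > hyrax_semi_auto.sh <<'EOF'
#!/bin/bash
set -e
git clone https://github.com/opendap/hyrax
cd hyrax
source spath.sh               # set $prefix/$PATH for a local install
./hyrax_clone.sh -v           # clone libdap4, bes, olfs + dependencies
./hyrax_build.sh -v           # build everything, dependencies first
besctl start                  # start the BES
./build/apache-tomcat-7.0.57/bin/startup.sh   # start the OLFS
EOF
bash -n hyrax_semi_auto.sh && echo "script syntax OK"
```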
&lt;br /&gt;
As described in the README file that is part of the &#039;&#039;hyrax&#039;&#039; repo, there are some other scripts in the repo, and some options to the &#039;&#039;clone&#039;&#039; and &#039;&#039;build&#039;&#039; scripts that you can investigate using -h (help).&lt;br /&gt;
&lt;br /&gt;
= The manual build = &lt;br /&gt;
&lt;br /&gt;
In the following, we describe only the build process for CentOS; the one for OS/X is similar and we note the differences where they are significant.&lt;br /&gt;
&lt;br /&gt;
== Get Hyrax from GitHub ==&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the instructions on this page (which differ a bit from the ones in the project&#039;s README).&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have the &#039;&#039;hyrax&#039;&#039; project cloned:&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;source spath.sh&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;./hyrax_clone.sh -v&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;proceed with the rest of the build as described in the following sections of this page&lt;br /&gt;
&lt;br /&gt;
== Important Note ==&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Many of the problems people have with the build stem from not setting the shell correctly for the build.&amp;lt;/font&amp;gt;&lt;br /&gt;
In the above section, &#039;&#039;make sure&#039;&#039; you run &#039;&#039;&#039;source spath.sh&#039;&#039;&#039; before you run any of the building/compiling/testing commands that use the source code or build files. While the &#039;&#039;$prefix&#039;&#039; and &#039;&#039;$PATH&#039;&#039; environment variables are simple to set up, they are needed by most users. When you exit a terminal window and then open a new one, make sure to (re)source the &#039;&#039;spath.sh&#039;&#039; file in the new shell. You don&#039;t have to source spath.sh every time you enter the &#039;&#039;hyrax&#039;&#039; directory, but you must run it for every new instance of the shell.&lt;br /&gt;
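As an illustration only — the real &#039;&#039;spath.sh&#039;&#039; in the hyrax repo is authoritative — an environment file of this kind typically does something like:

```shell
# Hypothetical sketch of an spath.sh-style environment file; the actual
# variable names and layout are whatever the hyrax repo's spath.sh defines.
export prefix="$PWD/build"                        # local install root
export PATH="$prefix/bin:$prefix/deps/bin:$PATH"  # find installed tools first
echo "prefix=$prefix"
```

Because these are plain environment variables, they vanish with the shell, which is why every new terminal needs the file sourced again.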
&lt;br /&gt;
== Compile the Hyrax dependencies ==&lt;br /&gt;
If you didn&#039;t run hyrax_clone.sh, make sure you&#039;re in the top hyrax directory and use git to clone the hyrax-dependencies:&lt;br /&gt;
  git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
And then build it. Unlike many source packages, there is no need to run a configure script, just &#039;&#039;make&#039;&#039; will do. However, the Makefile in this package expects &#039;&#039;$prefix&#039;&#039; to be set as described above. It will put all of the Hyrax server dependencies in a subdirectory called &#039;&#039;deps&#039;&#039;. To build the dependencies for building RPMs, use &#039;&#039;make -j9 for-static-rpm&#039;&#039;.&lt;br /&gt;
;(make sure you&#039;re in the top level hyrax directory)&lt;br /&gt;
&amp;lt;tt&amp;gt;&lt;br /&gt;
; cd hyrax-dependencies&lt;br /&gt;
; make --jobs=9&lt;br /&gt;
: &#039;&#039;The --jobs=N runs a parallel build with at most N simultaneous compile operations. This will result in a huge performance improvement on multi-core machines. &#039;&#039;&#039;-jN&#039;&#039;&#039; is the short form for the option.&#039;&#039;&lt;br /&gt;
;cd ..: &#039;&#039;Go back up to &#039;&#039;&#039;$prefix&#039;&#039;&#039; &#039;&#039;&lt;br /&gt;
&amp;lt;/tt&amp;gt;&lt;br /&gt;
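The effect of --jobs can be seen with a toy makefile: two independent targets built in parallel with -j2. This demo writes its own makefile and has nothing to do with the Hyrax sources.

```shell
# Toy demonstration of parallel make: two independent targets with -j2.
printf 'all: a b\na:\n\t@echo built a\nb:\n\t@echo built b\n' > /tmp/demo.mk
make -f /tmp/demo.mk -j2 all
```

With -j2, make may run the recipes for a and b at the same time, so their output order is not guaranteed.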
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; You can get some of the &#039;&#039;dependencies&#039;&#039; for Hyrax like &#039;&#039;netCDF&#039;&#039; from the EPEL repository, but the versions are often older than Hyrax needs. Contact us if you want information about using EPEL. At the risk of throwing people a curve ball, here&#039;s a synopsis of the process. Don&#039;t do this unless you know EPEL well. Use [http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm epel-release-6-8.noarch.rpm] and install it using &#039;&#039;sudo yum install epel-release-6-8.noarch.rpm&#039;&#039;. Then install packages needed to read various file formats: &#039;&#039;yum install netcdf-devel hdf-devel hdf5-devel libicu-devel cfitsio-devel cppunit-devel rpm-devel rpm-build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Build &#039;&#039;libdap&#039;&#039; and the &#039;&#039;BES&#039;&#039; daemon ==&lt;br /&gt;
&lt;br /&gt;
==== Get and build libdap4 ====&lt;br /&gt;
;WARNING: If you have &#039;&#039;libdap&#039;&#039; already, uninstall it before proceeding.&lt;br /&gt;
Build, test and install libdap4 into $prefix:&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone https://github.com/opendap/libdap4&lt;br /&gt;
cd libdap4&lt;br /&gt;
autoreconf -fiv&lt;br /&gt;
./configure --prefix=$prefix --enable-developer &lt;br /&gt;
make -j9&lt;br /&gt;
make check -j9&lt;br /&gt;
make install&lt;br /&gt;
cd .. # Go back up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Get and build the BES and all of the modules shipped with Hyrax ====&lt;br /&gt;
Build, test and install the BES and its modules&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;git clone https://github.com/opendap/bes # Clone the BES from GitHub&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
cd bes # enter the bes dir.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;autoreconf --force --install --verbose # You can use -fiv instead of the long options.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means: when starting from a freshly cloned repo, run all of the autotools commands and install all of the needed scripts.&lt;br /&gt;
&lt;br /&gt;
Then, run configure:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;./configure --prefix=$prefix  --with-dependencies=$prefix/deps --enable-developer&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: Notes:&lt;br /&gt;
:* The --with-deps... is not needed if you load the dependencies from RPMs or otherwise have them installed and generally accessible on the build machine.&lt;br /&gt;
:* The  --enable-developer option will compile in all of the debugging code which may affect performance even if the debugging output is not enabled.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make -j9&lt;br /&gt;
make check -j9&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Some tests may fail and adding &#039;&#039;-k&#039;&#039; ignores that and keeps make marching along. &#039;&#039;Note that you must run &#039;&#039;&#039;make&#039;&#039;&#039; before &#039;&#039;&#039;make check&#039;&#039;&#039; in the bes code&#039;&#039;.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make install&lt;br /&gt;
cd .. # Go back up to $prefix&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Test the BES ====&lt;br /&gt;
Start the BES and verify that all of the modules build correctly.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;besctl start # Start the BES.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Given that &#039;&#039;$prefix/bin&#039;&#039; is on your &#039;&#039;$PATH&#039;&#039;, this should start the BES. You will not need to be root if you used the &#039;&#039;--enable-developer&#039;&#039; switch with configure (as shown above); otherwise, run &#039;&#039;sudo besctl start&#039;&#039;, with the caveat that, as root, &#039;&#039;$prefix/bin&#039;&#039; will probably not be on your &#039;&#039;$PATH&#039;&#039;.&lt;br /&gt;
:If there&#039;s an error (e.g., you tried to start as a regular user but need to be root), edit bes.conf so the BES runs as a real user (yourself?) in a real group (use &#039;groups&#039; to see which groups you are in), and also check that the bes.log file is &#039;&#039;not&#039;&#039; owned by root. &lt;br /&gt;
:Restart.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;bescmdln # Now that the BES is running, start the BES testing tool&lt;br /&gt;
BESClient&amp;gt; show version; # Send the BES the version command to see if it&#039;s running &amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
:Take a quick look at the output. There should be entries for libdap, bes and all of the modules.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt; BESClient&amp;gt; exit; # Exit the testing tool&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that even though you have exited the &#039;&#039;bescmdln&#039;&#039; test tool, the BES is still running. That&#039;s fine - we&#039;ll use it in just a bit - but if you want to shut it down, use &#039;&#039;besctl stop&#039;&#039;, or &#039;&#039;besctl pids&#039;&#039; to see the daemon&#039;s processes. If the BES is not stopping, &#039;&#039;besctl kill&#039;&#039; will stop all BES processes without waiting for them to complete their current task.&lt;br /&gt;
&lt;br /&gt;
== Build the Hyrax &#039;&#039;OLFS&#039;&#039; web application ==&lt;br /&gt;
The OLFS is a Java servlet web application, built using Ant, that runs with Tomcat, Glassfish, etc. You need a copy of Tomcat, but our servlet does not work with the RPM version of Tomcat. Get [http://tomcat.apache.org/download-90.cgi Tomcat 9 from Apache]. Note that if you built the dependencies from source using the &#039;&#039;hyrax-dependencies-1.10.tar&#039;&#039; file, there is a copy of Tomcat in the &#039;&#039;hyrax-dependencies/extra_downloads&#039;&#039; directory. You can unpack the Tomcat tar file in &#039;&#039;$prefix&#039;&#039;; I&#039;ll assume you have the Apache Tomcat tar file in &#039;&#039;$prefix&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
;tar -xzf apache-tomcat-9.0.105.tar.gz: Expand the Tomcat tar ball&lt;br /&gt;
;git clone https://github.com/opendap/olfs: Get the OLFS source code&lt;br /&gt;
;cd olfs: change directory to the OLFS source&lt;br /&gt;
;ant server: Build it&lt;br /&gt;
;cp build/dist/opendap.war ../apache-tomcat-9.0.105/webapps/: Copy the opendap web archive to the Tomcat webapps directory.&lt;br /&gt;
;cd ..: Go up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
;./apache-tomcat-9.0.105/bin/startup.sh: Start Tomcat&lt;br /&gt;
&lt;br /&gt;
== Test the server ==&lt;br /&gt;
You can test the server several ways, but the most fun is to use a web browser. The URL &#039;&#039;http://&amp;lt;machine&amp;gt;:8080/opendap&#039;&#039; should return a page pointing to a collection of test datasets bundled with the server. You can also use &#039;&#039;curl&#039;&#039;, &#039;&#039;wget&#039;&#039; or any application that can read from OpenDAP servers (e.g., Matlab, Octave, ArcGIS, IDL, ...).&lt;br /&gt;
&lt;br /&gt;
== Stopping the server ==&lt;br /&gt;
Stop both the BES and Apache&lt;br /&gt;
&lt;br /&gt;
;./apache-tomcat-9.0.105/bin/shutdown.sh&lt;br /&gt;
;besctl stop&lt;br /&gt;
&lt;br /&gt;
Note that there is also a &#039;&#039;hyraxctl&#039;&#039; script that provides a way to start and stop Hyrax without you (or &#039;&#039;init.d&#039;&#039;) having to type separate commands for both the BES and OLFS. This script is part of the BES software you cloned from git.&lt;br /&gt;
&lt;br /&gt;
== Building select parts of the BES ==&lt;br /&gt;
Building just the BES and one or more of its handlers/modules is not at all hard to do with a checkout of code from git. In the above section on building the BES, simply skip the step where the submodules are cloned (&#039;&#039;git submodule update --init&#039;&#039;) and link configure.ac to &#039;&#039;configure_standard.ac&#039;&#039;. The rest of the process is as shown. The end result is a BES daemon without any of the standard Hyrax modules (but support for DAP will be built if &#039;&#039;libdap&#039;&#039; is found by the configure script).&lt;br /&gt;
&lt;br /&gt;
To build modules for the BES, simply go to &#039;&#039;$prefix&#039;&#039;, clone their git repos and build them, taking care to set &#039;&#039;$prefix&#039;&#039; when calling the module&#039;s &#039;&#039;configure&#039;&#039; script. &lt;br /&gt;
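A standalone module build along those lines might look like the following sketch. The repo name is a placeholder, not a real opendap repository, and the script is only written out and syntax-checked here because a real run needs network access and an installed BES.

```shell
# Sketch of building one BES module against an installed $prefix.
# "some-module" is a hypothetical repo name used for illustration.
cat > build_module.sh <<'EOF'
#!/bin/bash
set -e
cd "$prefix"
git clone https://github.com/opendap/some-module
cd some-module
autoreconf -fiv
./configure --prefix=$prefix --with-dependencies=$prefix/deps
make -j9 && make check -j9 && make install
EOF
bash -n build_module.sh && echo "module script syntax OK"
```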
&lt;br /&gt;
Note that it is easy to combine the &#039;build it all&#039; and &#039;build just one&#039; processes so that a complete Hyrax BES can be built in one go and then a new module/handler not included in the BES git repo can be built and used. Each module we have on GitHub has a &#039;&#039;configure.ac&#039;&#039;, &#039;&#039;Makefile.am&#039;&#039;, etc., that will support both kinds of builds and [[Configuration of BES Modules]] explains how to take a module/handler that builds as a standalone module and tweak the build scripts so that it&#039;s fully integrated into the Hyrax BES build, too.&lt;br /&gt;
&lt;br /&gt;
= Building on Ubuntu =&lt;br /&gt;
This was tested using Xenial (Ubuntu 16)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get update&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Packages needed:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get install ...&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ant junit git flex bison autoconf automake libtool emacs openssl bzip2 libjpeg-dev libxml2-dev curl libicu-dev vim bc make cmake uuid-dev libcurl4-openssl-dev libicu-dev g++ zlib1g-dev libcppunit-dev libssl-dev&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13612</id>
		<title>Hyrax GitHub Source Build</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13612"/>
		<updated>2026-01-28T23:38:56Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Apple OSX (Mx processor) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This describes how to get and build Hyrax from our GitHub repositories. Hyrax is a data server that implements the DAP2 and DAP4 protocols, works with a number of different data formats, and supports a wide variety of customization options, from tailoring the look of the server&#039;s web pages to complex server-side processing operations. This page describes how to build the server&#039;s source code. The process is similar on Linux and OS/X, so we describe only the Linux case; we do not support building the server on Windows operating systems.&lt;br /&gt;
&lt;br /&gt;
To build and install the server, you need to perform three steps:&lt;br /&gt;
# Set up the computer to build source code (Install a Java compiler; install a C/C++ compiler; add some other tools)&lt;br /&gt;
# Build the C++ DAP library (&#039;&#039;libdap4&#039;&#039;) and the Hyrax BES daemon&lt;br /&gt;
# Build the Hyrax OLFS web application&lt;br /&gt;
&lt;br /&gt;
Quick links if you already know the process:&lt;br /&gt;
* [https://github.com/opendap/hyrax new all-in-one repo that uses shell scripts]&lt;br /&gt;
* [https://github.com/opendap/libdap libdap git repo]&lt;br /&gt;
* [https://github.com/opendap/bes BES git repo]&lt;br /&gt;
* [https://github.com/opendap/olfs OLFS git repo]&lt;br /&gt;
* [https://github.com/opendap/hyrax-dependencies Hyrax dependencies]&lt;br /&gt;
&lt;br /&gt;
= Set up a system to build our code =&lt;br /&gt;
== CentOS-8  ==&lt;br /&gt;
The CentOS-8 setup is very similar to CentOS-7, but there are some minor differences.&lt;br /&gt;
 &lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum -y update&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;You will need to enable power-tools for this setup&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum config-manager --set-enabled powertools&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load the basic software development environment plus the additional packages openjpeg2, jasper, and libtirpc. Note that you may not need &#039;&#039;openjpeg2&#039;&#039; and &#039;&#039;jasper&#039;&#039; if you build the dependencies successfully; if you determine that you don&#039;t need them, please let us know. JUnit support has been dropped, so we removed the &amp;lt;tt&amp;gt;&#039;&#039;ant-junit junit&#039;&#039;&amp;lt;/tt&amp;gt; packages from the install list.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc openjpeg2-devel jasper-devel libtirpc-devel&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Tell the machine where to find the tirpc libraries&lt;br /&gt;
:&amp;lt;tt&amp;gt;export CPPFLAGS=-I/usr/include/tirpc&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt;export LDFLAGS=-ltirpc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;NB: As of 1/28/22 you should not need to do this. The &#039;&#039;configure&#039;&#039; script should find the correct way to run python on CentOS 8. However, if it does not, our Makefiles (built from &#039;&#039;Makefile.am&#039;&#039; files) use &#039;&#039;python&#039;&#039; but a vanilla CentOS 8 machine only has &#039;&#039;python3&#039;&#039;. Until we fix this, you need to make sure &#039;&#039;python&#039;&#039; runs a python program. One way is to make a symbolic link between &#039;&#039;python3&#039;&#039; and &#039;&#039;python&#039;&#039; in a directory that is on your PATH. &#039;&#039;&#039;The TODO item here is to make sure &#039;&#039;python&#039;&#039; exists and can run a program&#039;&#039;&#039;. It is generally enough to verify that the command exists:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;which python&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;If the command is missing (as it was for me on Rocky8), install python&lt;br /&gt;
: &amp;lt;tt&amp;gt;sudo yum install -y python3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum install rpm-devel rpm-build redhat-rpm-config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once you run through the rest of the Hyrax build, make sure that both &#039;&#039;gdal&#039;&#039; and &#039;&#039;hdf4&#039;&#039; built correctly (look for their libraries in $prefix/deps/lib). To build them manually, run &#039;&#039;&#039;make gdal&#039;&#039;&#039;, &#039;&#039;&#039;make hdf4&#039;&#039;&#039;, and &#039;&#039;&#039;make netcdf4&#039;&#039;&#039; inside the hyrax-dependencies directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Hyrax-Rocky9|Configuring Rocky9]] ==&lt;br /&gt;
== [[Hyrax-Rocky8|Configuring Rocky8]] ==&lt;br /&gt;
&lt;br /&gt;
== Rocky 8 ==&lt;br /&gt;
&#039;&#039;Updated 6/6/2024&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the commands ps, which, etc.&lt;br /&gt;
 dnf install -y procps&lt;br /&gt;
&lt;br /&gt;
C++ environment plus build tools&lt;br /&gt;
 dnf install -y git gcc-c++ flex bison cmake autoconf automake libtool emacs bzip2 vim bc&lt;br /&gt;
&lt;br /&gt;
Development library versions&lt;br /&gt;
 dnf install -y openssl-devel libuuid-devel readline-devel zlib-devel bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel libtirpc-devel&lt;br /&gt;
&lt;br /&gt;
Java&lt;br /&gt;
 dnf install -y java-17-openjdk java-17-openjdk-devel ant &lt;br /&gt;
&lt;br /&gt;
Set up DNF so that we can load some obscure packages from the EPEL, etc., repos&lt;br /&gt;
 dnf install dnf-plugins-core&lt;br /&gt;
 dnf install epel-release&lt;br /&gt;
 dnf config-manager --set-enabled powertools&lt;br /&gt;
&lt;br /&gt;
Install CppUnit and some more development libraries&lt;br /&gt;
 dnf install -y cppunit cppunit-devel openjpeg2-devel jasper-devel&lt;br /&gt;
&lt;br /&gt;
Install the RPM tools&lt;br /&gt;
 dnf install -y rpm-devel rpm-build redhat-rpm-config&lt;br /&gt;
&lt;br /&gt;
Install the AWS CLI&lt;br /&gt;
 dnf install -y awscli&lt;br /&gt;
&lt;br /&gt;
== Apple OSX (M&#039;&#039;x&#039;&#039; processor) ==&lt;br /&gt;
&lt;br /&gt;
Computers with the Apple M series chips require dedicated binaries, or binaries with both Intel and M1 contents. To get the &#039;&#039;hyrax-dependencies&#039;&#039; project (and the libdap4 and bes projects) to build, the following packages need to be installed before running the hyrax-dependencies build.&lt;br /&gt;
&lt;br /&gt;
Updated 1/28/2026 using notes from a build on a clean OSX M1 machine running Tahoe 26.2. I loaded some things that are not strictly necessary, like 1Password, during the very first step. I&#039;m documenting that here just to be complete. jhrg&lt;br /&gt;
&lt;br /&gt;
I installed &#039;&#039;&#039;Chrome&#039;&#039;&#039;, &#039;&#039;&#039;1Password&#039;&#039;&#039;, and &#039;&#039;&#039;vscode&#039;&#039;&#039; because they make it easier for me, but I did not directly use them for the build. I used the emacs clone &#039;&#039;&#039;mg&#039;&#039;&#039; that is bundled with OSX to edit files.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Xcode&#039;&#039;&#039;&lt;br /&gt;
* Use the App Store to install Xcode&lt;br /&gt;
* Use a terminal window to run the command: &#039;&#039;xcode-select --install&#039;&#039; &amp;lt;-- I&#039;m not sure if I did this. It&#039;s likely, but I started Xcode and clicked &#039;OK&#039; in the dialog that prompts for the various environments to install (e.g., do you want to write code for the Apple Watch). I installed only the development tools for OSX.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Homebrew&#039;&#039;&#039;&lt;br /&gt;
* Needed for later steps&lt;br /&gt;
* Set an environment variable &#039;&#039;HB&#039;&#039; to the Homebrew install path (/opt/homebrew for me), or pick a more wordy name ;-)&lt;br /&gt;
* export PATH=&amp;quot;$HB/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;cmake&#039;&#039;&#039;&lt;br /&gt;
* brew install cmake&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;pkg-config&#039;&#039;&#039;&lt;br /&gt;
* brew install pkg-config&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libpng&#039;&#039;&#039;&lt;br /&gt;
* brew install libpng&lt;br /&gt;
* export CPPFLAGS=&amp;quot;-I$HB/include&amp;quot;&lt;br /&gt;
* export LDFLAGS=&amp;quot;-L$HB/lib&amp;quot;&lt;br /&gt;
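Note that CPPFLAGS and LDFLAGS hold compiler and linker &#039;&#039;options&#039;&#039; (with -I and -L prefixes), not bare paths. A sketch, assuming &#039;&#039;HB&#039;&#039; holds your Homebrew prefix:&lt;br /&gt;

```shell
# Point the preprocessor and linker at Homebrew's include/ and lib/ trees.
HB="${HB:-/opt/homebrew}"          # assumed Homebrew prefix
export CPPFLAGS="-I$HB/include"
export LDFLAGS="-L$HB/lib"
echo "CPPFLAGS=$CPPFLAGS LDFLAGS=$LDFLAGS"
```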
&lt;br /&gt;
&#039;&#039;At this point, the hyrax-dependencies repo should build, but make sure you follow the directions there, e.g., sourcing &#039;&#039;&#039;spath.sh&#039;&#039;&#039; &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;autotools&#039;&#039;&#039;&lt;br /&gt;
* brew install autoconf automake libtool&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CPPUNIT&#039;&#039;&#039;&lt;br /&gt;
* brew install cppunit&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;CppUnit is not needed to build the code, but it is needed to run the unit tests&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At this point the libdap4 repo should build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;openssl&#039;&#039;&#039;&lt;br /&gt;
* brew install openssl&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At this point the bes repo should build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
I did not build the OLFS nor did I make docker containers.&lt;br /&gt;
&lt;br /&gt;
= A semi-automatic build =&lt;br /&gt;
&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the short instructions in the README file.&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Summarized here, those instructions are:&lt;br /&gt;
;use bash: The shell scripts in this repo assume you are using bash.&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development: &#039;&#039;source spath.sh&#039;&#039;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies: &#039;&#039;./hyrax_clone.sh -v&#039;&#039;&lt;br /&gt;
;build the code, including the dependencies: &#039;&#039;./hyrax_build.sh -v&#039;&#039;&lt;br /&gt;
;test the server: Start the BES using  &#039;&#039;besctl start&#039;&#039;&lt;br /&gt;
:Start the OLFS using &#039;&#039;./build/apache-tomcat-7.0.57/bin/startup.sh&#039;&#039;&lt;br /&gt;
:Test the server by looking at &#039;&#039;&amp;lt;nowiki&amp;gt;http://localhost:8080/opendap&amp;lt;/nowiki&amp;gt;&#039;&#039; in a browser. You should see a directory named &#039;&#039;data&#039;&#039;; following that link should lead to more data. The server will also be accessible to clients other than a web browser.&lt;br /&gt;
:To test the BES independently of the front end, use &#039;&#039;bescmdln&#039;&#039; and give it the &#039;&#039;show version;&#039;&#039; command; you should see output about the different components and their versions. &lt;br /&gt;
:Use &#039;&#039;exit&#039;&#039; to leave the command line test client.&lt;br /&gt;
&lt;br /&gt;
As described in the README file that is part of the &#039;&#039;hyrax&#039;&#039; repo, there are some other scripts in the repo, and the &#039;&#039;clone&#039;&#039; and &#039;&#039;build&#039;&#039; scripts have options that you can investigate using -h (help).&lt;br /&gt;
&lt;br /&gt;
= The manual build = &lt;br /&gt;
&lt;br /&gt;
In the following, we describe only the build process for CentOS; the one for OS/X is similar and we note the differences where they are significant.&lt;br /&gt;
&lt;br /&gt;
== Get Hyrax from GitHub ==&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the instructions on this page (which differ a bit from the ones in the project&#039;s README)&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have the &#039;&#039;hyrax&#039;&#039; project cloned:&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;source spath.sh&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;./hyrax_clone.sh -v&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;proceed with the rest of the build as described in the following sections of this page&lt;br /&gt;
&lt;br /&gt;
== Important Note ==&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Many of the problems people have with the build stem from not setting the shell correctly for the build.&amp;lt;/font&amp;gt;&lt;br /&gt;
In the above section, &#039;&#039;make sure&#039;&#039; you run &#039;&#039;&#039;source spath.sh&#039;&#039;&#039; before you run any of the building/compiling/testing commands that use the source code or build files. The &#039;&#039;$prefix&#039;&#039; and &#039;&#039;$PATH&#039;&#039; environment variables are simple to set up, but nearly every step that follows depends on them. When you exit a terminal window and then open a new one, make sure to (re)source the &#039;&#039;spath.sh&#039;&#039; file in the new shell. You don&#039;t have to source spath.sh every time you enter the &#039;&#039;hyrax&#039;&#039; directory, but you must run it in every new instance of the shell.&lt;br /&gt;
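A small guard like the following (a sketch) can save a confusing build failure later; &#039;&#039;prefix&#039;&#039; is the variable that &#039;&#039;spath.sh&#039;&#039; sets:&lt;br /&gt;

```shell
# Report whether the spath.sh environment is in place before building.
check_env() {
    if [ -z "${prefix:-}" ]; then
        echo "prefix is not set -- run: source spath.sh"
    else
        echo "building into $prefix"
    fi
}
check_env
```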
&lt;br /&gt;
== Compile the Hyrax dependencies ==&lt;br /&gt;
If you didn&#039;t run hyrax_clone.sh, make sure you&#039;re in the top hyrax directory and use git to clone the hyrax-dependencies:&lt;br /&gt;
  git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
And then build it. Unlike many source packages, there is no need to run a configure script, just &#039;&#039;make&#039;&#039; will do. However, the Makefile in this package expects &#039;&#039;$prefix&#039;&#039; to be set as described above. It will put all of the Hyrax server dependencies in a subdirectory called &#039;&#039;deps&#039;&#039;. To build the dependencies for building RPMs, use &#039;&#039;make -j9 for-static-rpm&#039;&#039;.&lt;br /&gt;
;(make sure you&#039;re in the top level hyrax directory)&lt;br /&gt;
&amp;lt;tt&amp;gt;&lt;br /&gt;
; cd hyrax-dependencies&lt;br /&gt;
; make --jobs=9&lt;br /&gt;
: &#039;&#039;The --jobs=N runs a parallel build with at most N simultaneous compile operations. This will result in a huge performance improvement on multi-core machines. &#039;&#039;&#039;-jN&#039;&#039;&#039; is the short form for the option.&#039;&#039;&lt;br /&gt;
;cd ..: &#039;&#039;Go back up to &#039;&#039;&#039;$prefix&#039;&#039;&#039; &#039;&#039;&lt;br /&gt;
&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; You can get some of the &#039;&#039;dependencies&#039;&#039; for Hyrax like &#039;&#039;netCDF&#039;&#039; from the EPEL repository, but the versions are often older than Hyrax needs. Contact us if you want information about using EPEL. At the risk of throwing people a curve ball, here&#039;s a synopsis of the process. Don&#039;t do this unless you know EPEL well. Use [http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm epel-release-6-8.noarch.rpm] and install it using &#039;&#039;sudo yum install epel-release-6-8.noarch.rpm&#039;&#039;. Then install packages needed to read various file formats: &#039;&#039;yum install netcdf-devel hdf-devel hdf5-devel libicu-devel cfitsio-devel cppunit-devel rpm-devel rpm-build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Build &#039;&#039;libdap&#039;&#039; and the &#039;&#039;BES&#039;&#039; daemon ==&lt;br /&gt;
&lt;br /&gt;
==== Get and build libdap4 ====&lt;br /&gt;
;WARNING: If you have &#039;&#039;libdap&#039;&#039; already, uninstall it before proceeding.&lt;br /&gt;
Build, test and install libdap4 into $prefix:&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone https://github.com/opendap/libdap4&lt;br /&gt;
cd libdap4&lt;br /&gt;
autoreconf -fiv&lt;br /&gt;
./configure --prefix=$prefix --enable-developer &lt;br /&gt;
make -j9&lt;br /&gt;
make check -j9&lt;br /&gt;
make install&lt;br /&gt;
cd .. # Go back up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Get and build the BES and all of the modules shipped with Hyrax ====&lt;br /&gt;
Build, test and install the BES and its modules&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;git clone https://github.com/opendap/bes # Clone the BES from GitHub&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;cd bes # enter the bes dir.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;autoreconf --force --install --verbose # You can use -fiv instead of the long options.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means: when starting from a freshly cloned repo, run all of the autotools commands and install all of the needed scripts.&lt;br /&gt;
&lt;br /&gt;
Then, run configure:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;./configure --prefix=$prefix  --with-dependencies=$prefix/deps --enable-developer&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: Notes:&lt;br /&gt;
:* The --with-deps... is not needed if you load the dependencies from RPMs or otherwise have them installed and generally accessible on the build machine.&lt;br /&gt;
:* The  --enable-developer option will compile in all of the debugging code which may affect performance even if the debugging output is not enabled.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make -j9&lt;br /&gt;
make check -j9&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Some tests may fail; adding &#039;&#039;-k&#039;&#039; makes make ignore the failures and keep marching along. &#039;&#039;Note that you must run &#039;&#039;&#039;make&#039;&#039;&#039; before &#039;&#039;&#039;make check&#039;&#039;&#039; in the bes code&#039;&#039;.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make install&lt;br /&gt;
cd .. # Go back up to $prefix&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Test the BES ====&lt;br /&gt;
Start the BES and verify that all of the modules were built correctly.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;besctl start # Start the BES.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Given that &#039;&#039;$prefix/bin&#039;&#039; is on your &#039;&#039;$PATH&#039;&#039;, this should start the BES. You will not need to be root if you used the &#039;&#039;--enable-developer&#039;&#039; switch with configure (as shown above); otherwise you should run &#039;&#039;sudo besctl start&#039;&#039;, with the caveat that as root &#039;&#039;$prefix/bin&#039;&#039; will probably not be in your &#039;&#039;$PATH&#039;&#039;.&lt;br /&gt;
:If there&#039;s an error (e.g., you tried to start as a regular user but need to be root), edit bes.conf so the BES runs as a real user (yourself?) in a real group (use &#039;groups&#039; to see which groups you are in), and also check that the bes.log file is &#039;&#039;not&#039;&#039; owned by root. &lt;br /&gt;
:Restart.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;bescmdln # Now that the BES is running, start the BES testing tool&lt;br /&gt;
BESClient&amp;gt; show version; # Send the BES the version command to see if it&#039;s running &amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
:Take a quick look at the output. There should be entries for libdap, bes and all of the modules.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt; BESClient&amp;gt; exit; # Exit the testing tool&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that even though you have exited the &#039;&#039;bescmdln&#039;&#039; test tool, the BES is still running. That&#039;s fine - we&#039;ll use it in just a bit - but if you want to shut it down, use &#039;&#039;besctl stop&#039;&#039;, or &#039;&#039;besctl pids&#039;&#039; to see the daemon&#039;s processes. If the BES is not stopping, &#039;&#039;besctl kill&#039;&#039; will stop all BES processes without waiting for them to complete their current task.&lt;br /&gt;
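The stop/kill logic above can be wrapped in a small helper (a sketch; the &#039;&#039;BESCTL&#039;&#039; variable is an assumption added so the helper can be exercised without a live BES):&lt;br /&gt;

```shell
# Stop the BES gracefully; if that fails, list and kill the processes.
BESCTL="${BESCTL:-besctl}"   # override for testing; normally just besctl
stop_bes() {
    if "$BESCTL" stop; then
        echo "BES stopped"
    else
        echo "graceful stop failed; killing BES processes"
        "$BESCTL" pids
        "$BESCTL" kill
    fi
}
```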
&lt;br /&gt;
== Build the Hyrax &#039;&#039;OLFS&#039;&#039; web application ==&lt;br /&gt;
The OLFS is a Java servlet web application built using ant; it runs with Tomcat, Glassfish, etc. You need a copy of Tomcat, but our servlet does not work with the RPM version of Tomcat. Get [http://tomcat.apache.org/download-90.cgi Tomcat 9 from Apache]. Note that if you built the dependencies from source using the &#039;&#039;hyrax-dependencies-1.10.tar&#039;&#039; file, there is a copy of Tomcat in the &#039;&#039;hyrax-dependencies/extra_downloads&#039;&#039; directory. Unpack the Tomcat tar file in &#039;&#039;$prefix&#039;&#039;; the following assumes the Apache Tomcat tar file is in &#039;&#039;$prefix&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
;tar -xzf apache-tomcat-9.0.105.tar.gz: Expand the Tomcat tar ball&lt;br /&gt;
;git clone https://github.com/opendap/olfs: Get the OLFS source code&lt;br /&gt;
;cd olfs: change directory to the OLFS source&lt;br /&gt;
;ant server: Build it&lt;br /&gt;
;cp build/dist/opendap.war ../apache-tomcat-9.0.105/webapps/: Copy the opendap web archive to the tomcat webapps directory.&lt;br /&gt;
;cd ..: Go up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
;./apache-tomcat-9.0.105/bin/startup.sh: Start Tomcat&lt;br /&gt;
&lt;br /&gt;
== Test the server ==&lt;br /&gt;
You can test the server several ways, but the most fun is to use a web browser. The URL &#039;&#039;http://&amp;lt;machine&amp;gt;:8080/opendap&#039;&#039; should return a page pointing to a collection of test datasets bundled with the server. You can also use &#039;&#039;curl&#039;&#039;, &#039;&#039;wget&#039;&#039; or any application that can read from OpenDAP servers (e.g., Matlab, Octave, ArcGIS, IDL, ...).&lt;br /&gt;
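For a scriptable check, something like the following works (a sketch; port 8080 and the &#039;&#039;/opendap&#039;&#039; context are the defaults used on this page):&lt;br /&gt;

```shell
# Classify an HTTP status code from a Hyrax probe.
classify() {
    case "$1" in
        200) echo "up" ;;
        000) echo "down (no response -- is Tomcat running?)" ;;
        *)   echo "unexpected status $1" ;;
    esac
}

url="http://localhost:8080/opendap/"
# curl prints 000 when it cannot connect; treat any curl failure the same way.
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" 2>/dev/null) || status=000
echo "Hyrax at $url: $(classify "$status")"
```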
&lt;br /&gt;
== Stopping the server ==&lt;br /&gt;
Stop both the BES and Apache&lt;br /&gt;
&lt;br /&gt;
;./apache-tomcat-9.0.105/bin/shutdown.sh&lt;br /&gt;
;besctl stop&lt;br /&gt;
&lt;br /&gt;
Note that there is also a &#039;&#039;hyraxctl&#039;&#039; script that provides a way to start and stop Hyrax without you (or &#039;&#039;init.d&#039;&#039;) having to type separate commands for both the BES and OLFS. This script is part of the BES software you cloned from git.&lt;br /&gt;
&lt;br /&gt;
== Building select parts of the BES ==&lt;br /&gt;
Building just the BES and one or more of its handlers/modules is not at all hard to do with a checkout of code from git. In the above section on building the BES, simply skip the step where the submodules are cloned (&#039;&#039;git submodule update --init&#039;&#039;) and link configure.ac to &#039;&#039;configure_standard.ac&#039;&#039;. The rest of the process is as shown. The end result is a BES daemon without any of the standard Hyrax modules (but support for DAP will be built if &#039;&#039;libdap&#039;&#039; is found by the configure script).&lt;br /&gt;
&lt;br /&gt;
To build modules for the BES, simply go to &#039;&#039;$prefix&#039;&#039;, clone their git repos and build them, taking care to pass &#039;&#039;$prefix&#039;&#039; to each module&#039;s &#039;&#039;configure&#039;&#039; script.&lt;br /&gt;
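In outline, the per-module build looks like this (a dry-run sketch that only prints the commands; &#039;&#039;sample-module&#039;&#039; is a hypothetical repository name, so substitute a real one):&lt;br /&gt;

```shell
# Print the command sequence for building one out-of-tree BES module.
build_module_steps() {
    repo="$1"
    echo "git clone https://github.com/opendap/$repo"
    echo "cd $repo"
    echo "autoreconf -fiv"
    echo "./configure --prefix=\$prefix"
    echo "make -j9 && make install"
}
build_module_steps sample-module
```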
&lt;br /&gt;
Note that it is easy to combine the &#039;build it all&#039; and &#039;build just one&#039; processes so that a complete Hyrax BES can be built in one go and then a new module/handler not included in the BES git repo can be built and used. Each module we have on GitHub has a &#039;&#039;configure.ac&#039;&#039;, &#039;&#039;Makefile.am&#039;&#039;, etc., that will support both kinds of builds and [[Configuration of BES Modules]] explains how to take a module/handler that builds as a standalone module and tweak the build scripts so that it&#039;s fully integrated into the Hyrax BES build, too.&lt;br /&gt;
&lt;br /&gt;
= Building on Ubuntu =&lt;br /&gt;
This was tested using Xenial (Ubuntu 16)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get update&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Packages needed:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get install ...&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ant junit git flex bison autoconf automake libtool emacs openssl bzip2 libjpeg-dev libxml2-dev curl libicu-dev vim bc make cmake uuid-dev libcurl4-openssl-dev g++ zlib1g-dev libcppunit-dev libssl-dev&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Developer_Info&amp;diff=13588</id>
		<title>Developer Info</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Developer_Info&amp;diff=13588"/>
		<updated>2025-05-23T20:31:24Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* General development information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* [https://github.com/OPENDAP OPeNDAP&#039;s GitHub repositories]: OPeNDAP&#039;s software is available using GitHub in addition to the downloads from our website.&lt;br /&gt;
** Before 2015 we hosted our own SVN repository. It&#039;s still online and available, but for read-only access, at [https://scm.opendap.org/svn https://scm.opendap.org/svn].&lt;br /&gt;
* [https://travis-ci.org/OPENDAP Continuous Integration builds]: Software that is built whenever new changes are pushed to the master branch. These builds are done on the Travis-CI system.&lt;br /&gt;
* [http://test.opendap.org/ test.opendap.org]: Test servers with data files.&lt;br /&gt;
* We use the Coverity static analysis system to look for common software defects; information on Hyrax is spread across three projects:&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-bes?tab=overview The BES and the standard handlers we distribute]&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-olfs?tab=overview The OLFS - the front end to the Hyrax data server]&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-libdap4?tab=overview libdap - The implementation of DAP2 and DAP4]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP&#039;s FAQ ==&lt;br /&gt;
The [http://www.opendap.org/faq-page OPeNDAP FAQ] has a pretty good section on developer&#039;s questions.&lt;br /&gt;
&lt;br /&gt;
== C++ Coding Information ==&lt;br /&gt;
* [https://google.github.io/styleguide/cppguide.html Google C++ Style Guide]&lt;br /&gt;
* [[Include files for libdap | Guidelines for including headers]]&lt;br /&gt;
* [[Using lambdas with the STL]]&lt;br /&gt;
* [[Better Unit tests for C++]]&lt;br /&gt;
* [[Better Singleton classes C++]]&lt;br /&gt;
* [[What is faster? stringstream string + String]]&lt;br /&gt;
* [[More about strings - passing strings to functions]]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP Workshops ==&lt;br /&gt;
* [http://www.opendap.org/about/workshops-and-presentations/2007-10-12 The APAC/BOM Workshops]: This workshop spanned several days and covered a number of topics, including information for SAs and Developers. Oct 2007.&lt;br /&gt;
* [http://www.opendap.org/about/workshops-and-presentations/2008-07-15 ESIP Federation Server Workshop]: This half-day workshop focused on server installation and configuration. Summer 2008&lt;br /&gt;
* [[A One-day Course on Hyrax Development | Server Functions]]: This one-day workshop is all about writing and debugging server-side functions. It also contains a wealth of information about Hyrax, the BES and debugging tricks for the server. Spring 2012. Updated Fall 2014 for presentation to Ocean Networks Canada.&lt;br /&gt;
&lt;br /&gt;
== libdap4 and BES Reference documentation ==&lt;br /&gt;
* [https://opendap.github.io/bes/html/ BES Reference]&lt;br /&gt;
* [https://opendap.github.io/libdap4/html/ libdap Reference]&lt;br /&gt;
&lt;br /&gt;
== BES Development Information ==&lt;br /&gt;
* [[Hyrax - Logging Configuration|Logging Configuration]]&lt;br /&gt;
&lt;br /&gt;
* [[BES_-_How_to_Debug_the_BES| How to debug the BES]]&lt;br /&gt;
* [[BES - Debugging Using besstandalone]]&lt;br /&gt;
* [[Hyrax - Create BES Module | How to create your own BES Module]]&lt;br /&gt;
* Hyrax Module Integration: How to configure your module so it&#039;s easy to add to Hyrax instances ([[:File:HyraxModuleIntegration-1.2.pdf|pdf]])&lt;br /&gt;
* [[Hyrax - Starting and stopping the BES| Starting and stopping the BES]]&lt;br /&gt;
* [[Hyrax - Running bescmdln | Running the BES command line client]]&lt;br /&gt;
* [[Hyrax - BES Client commands| BES Client commands]]. The page [[BES_XML_Commands | BES XML Commands]] repeats this info with a bit more detail on the return values. Most of the commands don&#039;t return anything unless there is an error; they are expected to be used in a group where a &#039;&#039;get&#039;&#039; command closes out the request and does return a response of some kind (maybe an error).&lt;br /&gt;
* [[Hyrax:_BES_Administrative_Commands| BES Administrative Commands]]&lt;br /&gt;
* [[Hyrax - Extending BES Module | Extending your BES Module]]&lt;br /&gt;
* [[Hyrax - Example BES Modules | Example BES Modules]] - the Hello World example and the CSV data handler&lt;br /&gt;
* [[Hyrax - BES PPT | BES communication protocol using PPT (point to point transport)]]&lt;br /&gt;
&lt;br /&gt;
* [[Australian BOM Software Developer&#039;s Agenda and Presentations|Software Developers Workshop]]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP Development process information  ==&lt;br /&gt;
These pages contain information about how we&#039;d like people working with us to use our various on-line tools.&lt;br /&gt;
&lt;br /&gt;
* [[Planning a Program Increment]] This is a checklist for the planning phase that precedes a Program Increment (PI) when using SAFe with the NASA ESDIS development group.&lt;br /&gt;
* [[Hyrax GitHub Source Build]] This explains how to clone our software from GitHub and build our code using a shell like bash. It also explains how to build the BES and all of the Hyrax &#039;standard&#039; handlers in one operation, as well as how to build just the parts you need without cloning the whole set of repos. Some experience with &#039;git submodule&#039; will make this easier, although the page explains everything.&lt;br /&gt;
* [[Bug Prioritization]]. How we prioritize bugs in our software.&lt;br /&gt;
&lt;br /&gt;
===[[How to Make a Release|Making A Release]] ===&lt;br /&gt;
* [[How to Make a Release]] A general template for making a release. This references some of the pages below.&lt;br /&gt;
&lt;br /&gt;
== Software process issues: ==&lt;br /&gt;
* [[How to download test logs from a Travis build]] All of our builds on Travis that run tests save those logs to an S3 bucket.&lt;br /&gt;
* [[ConfigureCentos| How to configure a CentOS machine for production of RPM binaries]] - Updated 12/2014 to include information regarding git.&lt;br /&gt;
* [[How to use CLion with our software]]&lt;br /&gt;
* [[BES Timing| How to add timing instrumentation to your BES code.]]&lt;br /&gt;
* [[UnitTests| How to write unit tests using CppUnit]] NB: See other information under the heading of C++ development&lt;br /&gt;
* [[valgrind| How to use valgrind with unit tests]]&lt;br /&gt;
* [[Debugging the distcheck target]] Yes, this gets its own page...&lt;br /&gt;
* [[CopyRights| How to copyright software written for OPeNDAP]]&lt;br /&gt;
* [[Managing public and private keys using gpg]]&lt;br /&gt;
* [[SecureEmail |How to Setup Secure Email and Sign Software Distributions]]&lt;br /&gt;
* [[UserSupport|How to Handle Email-list Support Questions]]&lt;br /&gt;
* [[NetworkServerSecurity |Security Policy and Related Procedures]]&lt;br /&gt;
* [http://semver.org/ Software version numbers]&lt;br /&gt;
* [[GuideLines| Development Guidelines]]&lt;br /&gt;
* [[Apple M1 Special Needs]]&lt;br /&gt;
&lt;br /&gt;
==== Older info of limited value: ====&lt;br /&gt;
* [http://gcc.gnu.org/gcc-4.4/cxx0x_status.html C++-11 gcc/g++ 4.4 support] We now require compilers that support C++-14, so this is outdated (4/19/23).&lt;br /&gt;
* [[How to use Eclipse with Hyrax Source Code]] I like Eclipse, but we now use CLion because it&#039;s better (4/19/23) . Assuming you have cloned our Hyrax code from GitHub, this explains how to setup eclipse so you can work fairly easily and switch back and forth between the shell, emacs and eclipse.&lt;br /&gt;
&lt;br /&gt;
==== AWS Tips ====&lt;br /&gt;
* [[Growing a CentOS Root Partition on an AWS EC2 Instance]]&lt;br /&gt;
* [[How Shutoff the CentOS firewall]]&lt;br /&gt;
&lt;br /&gt;
== General development information ==&lt;br /&gt;
These pages contain general information relevant to anyone working with our software:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Git Hacks and Tricks]]&#039;&#039;&#039;: Information about using git and/or GitHub that seems useful and maybe not all that obvious.&lt;br /&gt;
* [[Git Secrets]]: securing repositories from AWS secret key leaks.&lt;br /&gt;
* [https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto Valgrind Suppression File Howto] How to build a suppressions file for valgrind.&lt;br /&gt;
* [[Using a debugger for C++ with Eclipse on OS/X]] Short version: use lldbmi2 **Add info**&lt;br /&gt;
* [[Using ASAN]] Short version: look [https://github.com/google/sanitizers/wiki/AddressSanitizerAndDebugger at the Google/GitHub pages] for useful environment variables **add text** On CentOS, use &#039;&#039;yum install llvm&#039;&#039; to get the &#039;symbolizer&#039; and try &#039;&#039;ASAN_OPTIONS=symbolize=1 ASAN_SYMBOLIZER_PATH=$(which llvm-symbolizer)&#039;&#039;&lt;br /&gt;
* [https://www.jviotti.com/2024/01/29/using-xcode-instruments-for-cpp-cpu-profiling.html Using Xcode Instruments for C++ CPU profiling]&lt;br /&gt;
* [[How to use &#039;&#039;Instruments&#039;&#039; on OS/X to profile]] Updated 7/2018&lt;br /&gt;
* [[Migrating source code from SVN to git]]: How to move a large project from SVN to git and keep the history, commits, branches and tags.&lt;br /&gt;
* [https://developer.mozilla.org/en-US/docs/Eclipse_CDT Eclipse - Detailed information about running Eclipse on OSX from the Mozilla project]. Updated in 2017, this is really good, but be aware that it&#039;s specific to Mozilla so some of the tips don&#039;t apply. Hyrax (i.e., libdap4 and BES) also uses its own build system (autotools + make), so most of the configuration information here is very apropos. See also [[How to use Eclipse with Hyrax Source Code]].&lt;br /&gt;
* [https://jfearn.fedorapeople.org/en-US/RPM/4/html/RPM_Guide/index.html RPM Guide] The best one I&#039;ve found so far...&lt;br /&gt;
* [https://autotools.io/index.html Autotools Myth busters] The best info on autotools I&#039;ve found yet (covers &#039;&#039;autoconf&#039;&#039;, &#039;&#039;automake&#039;&#039;, &#039;&#039;libtool&#039;&#039; and &#039;&#039;pkg-config&#039;&#039;).&lt;br /&gt;
* The [https://www.gnu.org/software/autoconf/autoconf.html autoconf] manual&lt;br /&gt;
* The [https://www.gnu.org/software/automake/ automake] manual&lt;br /&gt;
* The [https://www.gnu.org/software/libtool/ libtool] manual&lt;br /&gt;
* A good [https://lldb.llvm.org/lldb-gdb.html gdb to lldb cheat sheet] for those of us who know &#039;&#039;gdb&#039;&#039; but not &#039;&#039;lldb&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
= Old information =&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Old build information&lt;br /&gt;
====The Release Process====&lt;br /&gt;
# Make sure the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; project is up to date and tar balls on www.o.o. If there have been changes/updates:&lt;br /&gt;
## Update version number for the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; in the &amp;lt;tt&amp;gt;Makefile&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Save, commit, (merge?), and push the changes to the &amp;lt;tt&amp;gt;master&amp;lt;/tt&amp;gt; branch.&lt;br /&gt;
## Once the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; CI build is finished, trigger CI builds for both &amp;lt;tt&amp;gt;libdap4&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;bes&amp;lt;/tt&amp;gt; by pushing change(s) to the master branch of each.&lt;br /&gt;
# [[Source_Release_for_libdap | Making a source release of libdap]]&lt;br /&gt;
# [[ReleaseGuide | Making a source release of the BES]]. &lt;br /&gt;
# [[OLFSReleaseGuide| Make the OLFS release WAR file]]. Follow these steps to create the three .jar files needed for the OLFS release. Includes information on how to build the OLFS and how to run the tests.&lt;br /&gt;
# [[HyraxDockerReleaseGuide|Make the official Hyrax Docker image for the release]] When the RPMs and the WAR file(s) are built and pushed to their respective download locations, make the Docker image of the release.&lt;br /&gt;
&lt;br /&gt;
====Supplemental release guides====&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Old - use the packages built using the Continuous Delivery process&amp;lt;/font&amp;gt;&lt;br /&gt;
# [[RPM |Make the RPM Distributions]]. Follow these steps to create an RPM distribution of the software. &#039;&#039;&#039;Note:&#039;&#039;&#039; &#039;&#039;Now we use packages built using CI/CD, so this checklist is no longer needed.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: &#039;&#039;The following is all about using Subversion and is out of date as of November 2014 when we switched to git. There are still good ideas here...&#039;&#039;&lt;br /&gt;
* [[MergingBranches |How to merge code]]&lt;br /&gt;
* [[TrunkDevelBranchRel | Using the SVN trunk, branches and tags to manage releases]].&lt;br /&gt;
* [[ShrewBranchGuide | Making a Branch of Shrew for a Server Release]]. Releases should be made from the trunk and moved to a branch once they are &#039;ready&#039; so that development can continue on the trunk and so that we can easily go back to the software that made up a release, fix bugs, and (re)release those fixes. In general, it&#039;s better to fix things like build issues, etc., discovered in the released software &#039;&#039;on the trunk&#039;&#039; and merge those down to the release branch to maintain consistency, re-release, etc. This also means that virtually all new feature development should take place on special &#039;&#039;feature&#039;&#039; branches, not the trunk.&lt;br /&gt;
* [[Hyrax Package for OS-X]]. This describes how to make a new OS/X &#039;metapackage&#039; for Hyrax.&lt;br /&gt;
* [[XP| Making Windows XP distributions]]. Follow these directions to make Windows XP binaries.&lt;br /&gt;
* [[ReleaseToolbox |Making a Matlab Ocean Toolbox Release]].  Follow these steps when a new Matlab GUI version is ready to be released.&lt;br /&gt;
* [[Eclipse - How to Setup Eclipse in a Shrew Checkout]] This includes some build instructions&lt;br /&gt;
* [[LinuxBuildHostConfig| How to configure a Linux machine to build Hyrax from SVN]]&lt;br /&gt;
* [[ConfigureSUSE| How to configure a SUSE machine for production of RPM binaries]]&lt;br /&gt;
* [[ConfigureAmazonLinuxAMI| How to configure an Amazon Linux AMI for EC2 Instance To Build Hyrax]]&lt;br /&gt;
* [[TestOpendapOrg | Notes from setting up Hyrax on our new web host]]&lt;br /&gt;
* [http://svnbook.red-bean.com/en/1.7/index.html Subversion 1.7 documentation] -- The official Subversion documentation; [http://svnbook.red-bean.com/en/1.1/svn-book.pdf PDF] and [http://svnbook.red-bean.com/en/1.1/index.html HTML].&lt;br /&gt;
* [[OPeNDAP&#039;s Use of Trac]] -- How to use Trac&#039;s various features in the software development process.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Developer_Info&amp;diff=13587</id>
		<title>Developer Info</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Developer_Info&amp;diff=13587"/>
		<updated>2025-05-20T23:18:21Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* C++ Coding Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* [https://github.com/OPENDAP OPeNDAP&#039;s GitHub repositories]: OPeNDAP&#039;s software is available using GitHub in addition to the downloads from our website.&lt;br /&gt;
** Before 2015 we hosted our own SVN repository. It&#039;s still online and available, but for read-only access, at [https://scm.opendap.org/svn https://scm.opendap.org/svn].&lt;br /&gt;
* [https://travis-ci.org/OPENDAP Continuous Integration builds]: Software that is built whenever new changes are pushed to the master branch. These builds are done on the Travis-CI system.&lt;br /&gt;
* [http://test.opendap.org/ test.opendap.org]: Test servers with data files.&lt;br /&gt;
* We use the Coverity static analysis system to look for common software defects; information on Hyrax is spread across three projects:&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-bes?tab=overview The BES and the standard handlers we distribute]&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-olfs?tab=overview The OLFS - the front end to the Hyrax data server]&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-libdap4?tab=overview libdap - The implementation of DAP2 and DAP4]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP&#039;s FAQ ==&lt;br /&gt;
The [http://www.opendap.org/faq-page OPeNDAP FAQ] has a pretty good section on developer&#039;s questions.&lt;br /&gt;
&lt;br /&gt;
== C++ Coding Information ==&lt;br /&gt;
* [https://google.github.io/styleguide/cppguide.html Google C++ Style Guide]&lt;br /&gt;
* [[Include files for libdap | Guidelines for including headers]]&lt;br /&gt;
* [[Using lambdas with the STL]]&lt;br /&gt;
* [[Better Unit tests for C++]]&lt;br /&gt;
* [[Better Singleton classes C++]]&lt;br /&gt;
* [[What is faster? stringstream string + String]]&lt;br /&gt;
* [[More about strings - passing strings to functions]]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP Workshops ==&lt;br /&gt;
* [http://www.opendap.org/about/workshops-and-presentations/2007-10-12 The APAC/BOM Workshops]: This workshop spanned several days and covered a number of topics, including information for SAs and Developers. Oct 2007.&lt;br /&gt;
* [http://www.opendap.org/about/workshops-and-presentations/2008-07-15 ESIP Federation Server Workshop]: This half-day workshop focused on server installation and configuration. Summer 2008&lt;br /&gt;
* [[A One-day Course on Hyrax Development | Server Functions]]: This one-day workshop is all about writing and debugging server-side functions. It also contains a wealth of information about Hyrax, the BES and debugging tricks for the server. Spring 2012. Updated Fall 2014 for presentation to Ocean Networks Canada.&lt;br /&gt;
&lt;br /&gt;
== libdap4 and BES Reference documentation ==&lt;br /&gt;
* [https://opendap.github.io/bes/html/ BES Reference]&lt;br /&gt;
* [https://opendap.github.io/libdap4/html/ libdap Reference]&lt;br /&gt;
&lt;br /&gt;
== BES Development Information ==&lt;br /&gt;
* [[Hyrax - Logging Configuration|Logging Configuration]]&lt;br /&gt;
&lt;br /&gt;
* [[BES_-_How_to_Debug_the_BES| How to debug the BES]]&lt;br /&gt;
* [[BES - Debugging Using besstandalone]]&lt;br /&gt;
* [[Hyrax - Create BES Module | How to create your own BES Module]]&lt;br /&gt;
* Hyrax Module Integration: How to configure your module so it&#039;s easy to add to Hyrax instances ([[:File:HyraxModuleIntegration-1.2.pdf|pdf]])&lt;br /&gt;
* [[Hyrax - Starting and stopping the BES| Starting and stopping the BES]]&lt;br /&gt;
* [[Hyrax - Running bescmdln | Running the BES command line client]]&lt;br /&gt;
* [[Hyrax - BES Client commands| BES Client commands]]. The page [[BES_XML_Commands | BES XML Commands]] repeats this info with a bit more detail on the return values. Most of the commands return nothing unless there is an error; they are expected to be used in a group where a &#039;&#039;get&#039;&#039; command closes out the request and does return a response of some kind (possibly an error).&lt;br /&gt;
* [[Hyrax:_BES_Administrative_Commands| BES Administrative Commands]]&lt;br /&gt;
* [[Hyrax - Extending BES Module | Extending your BES Module]]&lt;br /&gt;
* [[Hyrax - Example BES Modules | Example BES Modules]] - the Hello World example and the CSV data handler&lt;br /&gt;
* [[Hyrax - BES PPT | BES communication protocol using PPT (point to point transport)]]&lt;br /&gt;
&lt;br /&gt;
* [[Australian BOM Software Developer&#039;s Agenda and Presentations|Software Developers Workshop]]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP Development process information  ==&lt;br /&gt;
These pages contain information about how we&#039;d like people working with us to use our various on-line tools.&lt;br /&gt;
&lt;br /&gt;
* [[Planning a Program Increment]] This is a checklist for the planning phase that precedes a Program Increment (PI) when using SAFe with the NASA ESDIS development group.&lt;br /&gt;
* [[Hyrax GitHub Source Build]] This explains how to clone our software from GitHub and build our code using a shell like bash. It also explains how to build the BES and all of the Hyrax &#039;standard&#039; handlers in one operation, as well as how to build just the parts you need without cloning the whole set of repos. Some experience with &#039;git submodule&#039; will make this easier, although the page explains everything.&lt;br /&gt;
* [[Bug Prioritization]]. How we prioritize bugs in our software.&lt;br /&gt;
&lt;br /&gt;
===[[How to Make a Release|Making A Release]] ===&lt;br /&gt;
* [[How to Make a Release]] A general template for making a release. This references some of the pages below.&lt;br /&gt;
&lt;br /&gt;
== Software process issues: ==&lt;br /&gt;
* [[How to download test logs from a Travis build]] All of our builds on Travis that run tests save those logs to an S3 bucket.&lt;br /&gt;
* [[ConfigureCentos| How to configure a CentOS machine for production of RPM binaries]] - Updated 12/2014 to include information regarding git.&lt;br /&gt;
* [[How to use CLion with our software]]&lt;br /&gt;
* [[BES Timing| How to add timing instrumentation to your BES code.]]&lt;br /&gt;
* [[UnitTests| How to write unit tests using CppUnit]] NB: See other information under the heading of C++ development&lt;br /&gt;
* [[valgrind| How to use valgrind with unit tests]]&lt;br /&gt;
* [[Debugging the distcheck target]] Yes, this gets its own page...&lt;br /&gt;
* [[CopyRights| How to copyright software written for OPeNDAP]]&lt;br /&gt;
* [[Managing public and private keys using gpg]]&lt;br /&gt;
* [[SecureEmail |How to Setup Secure Email and Sign Software Distributions]]&lt;br /&gt;
* [[UserSupport|How to Handle Email-list Support Questions]]&lt;br /&gt;
* [[NetworkServerSecurity |Security Policy and Related Procedures]]&lt;br /&gt;
* [http://semver.org/ Software version numbers]&lt;br /&gt;
* [[GuideLines| Development Guidelines]]&lt;br /&gt;
* [[Apple M1 Special Needs]]&lt;br /&gt;
&lt;br /&gt;
==== Older info of limited value: ====&lt;br /&gt;
* [http://gcc.gnu.org/gcc-4.4/cxx0x_status.html C++-11 gcc/g++-4.4 support] We now require compilers that support C++-14, so this is outdated (4/19/23).&lt;br /&gt;
* [[How to use Eclipse with Hyrax Source Code]] I like Eclipse, but we now use CLion because it&#039;s better (4/19/23). Assuming you have cloned our Hyrax code from GitHub, this explains how to set up Eclipse so you can work fairly easily and switch back and forth between the shell, emacs, and Eclipse.&lt;br /&gt;
&lt;br /&gt;
==== AWS Tips ====&lt;br /&gt;
* [[Growing a CentOS Root Partition on an AWS EC2 Instance]]&lt;br /&gt;
* [[How Shutoff the CentOS firewall]]&lt;br /&gt;
&lt;br /&gt;
== General development information ==&lt;br /&gt;
These pages contain general information relevant to anyone working with our software:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Git Hacks and Tricks]]&#039;&#039;&#039;: Information about using git and/or GitHub that seems useful and maybe not all that obvious.&lt;br /&gt;
* [[Git Secrets]]: securing repositories from AWS secret key leaks.&lt;br /&gt;
* [https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto Valgrind Suppression File Howto] How to build a suppressions file for valgrind.&lt;br /&gt;
* [[Using a debugger for C++ with Eclipse on OS/X]] Short version: use lldbmi2 **Add info**&lt;br /&gt;
* [[Using ASAN]] Short version: look [https://github.com/google/sanitizers/wiki/AddressSanitizerAndDebugger at the Google/GitHub pages] for useful environment variables. **add text** On CentOS, use &#039;&#039;yum install llvm&#039;&#039; to get the &#039;symbolizer&#039; and try &#039;&#039;ASAN_OPTIONS=symbolize=1 ASAN_SYMBOLIZER_PATH=$(which llvm-symbolizer)&#039;&#039;&lt;br /&gt;
* [[How to use &#039;&#039;Instruments&#039;&#039; on OS/X to profile]] Updated 7/2018&lt;br /&gt;
* [https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto Valgrind - How to generate suppression files for valgrind] This will quiet valgrind, keeping it from telling you OS/X or Linux (or the BES) is leaking memory.&lt;br /&gt;
* [[Migrating source code from SVN to git]]: How to move a large project from SVN to git and keep the history, commits, branches and tags.&lt;br /&gt;
* [https://developer.mozilla.org/en-US/docs/Eclipse_CDT Eclipse - Detailed information about running Eclipse on OSX from the Mozilla project]. Updated in 2017, this is really good, but be aware that it&#039;s specific to Mozilla, so some of the tips don&#039;t apply. Hyrax (i.e., libdap4 and the BES) uses its own build system (autotools + make), so most of the configuration information here is still apropos. See also [[How to use Eclipse with Hyrax Source Code]].&lt;br /&gt;
* [https://jfearn.fedorapeople.org/en-US/RPM/4/html/RPM_Guide/index.html RPM Guide] The best one I&#039;ve found so far...&lt;br /&gt;
* [https://autotools.io/index.html Autotools Myth busters] The best info on autotools I&#039;ve found yet (covers &#039;&#039;autoconf&#039;&#039;, &#039;&#039;automake&#039;&#039;, &#039;&#039;libtool&#039;&#039; and &#039;&#039;pkg-config&#039;&#039;).&lt;br /&gt;
* The [https://www.gnu.org/software/autoconf/autoconf.html autoconf] manual&lt;br /&gt;
* The [https://www.gnu.org/software/automake/ automake] manual&lt;br /&gt;
* The [https://www.gnu.org/software/libtool/ libtool] manual&lt;br /&gt;
* A good [https://lldb.llvm.org/lldb-gdb.html gdb to lldb cheat sheet] for those of us who know &#039;&#039;gdb&#039;&#039; but not &#039;&#039;lldb&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
= Old information =&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Old build information&lt;br /&gt;
====The Release Process====&lt;br /&gt;
# Make sure the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; project is up to date and the tarballs are on www.o.o. If there have been changes/updates:&lt;br /&gt;
## Update version number for the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; in the &amp;lt;tt&amp;gt;Makefile&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Save, commit, (merge?), and push the changes to the &amp;lt;tt&amp;gt;master&amp;lt;/tt&amp;gt; branch.&lt;br /&gt;
## Once the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; CI build is finished, trigger CI builds for both &amp;lt;tt&amp;gt;libdap4&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;bes&amp;lt;/tt&amp;gt; by pushing change(s) to the master branch of each.&lt;br /&gt;
# [[Source_Release_for_libdap | Making a source release of libdap]]&lt;br /&gt;
# [[ReleaseGuide | Making a source release of the BES]]. &lt;br /&gt;
# [[OLFSReleaseGuide| Make the OLFS release WAR file]]. Follow these steps to create the three .jar files needed for the OLFS release. Includes information on how to build the OLFS and how to run the tests.&lt;br /&gt;
# [[HyraxDockerReleaseGuide|Make the official Hyrax Docker image for the release]]. When the RPMs and the WAR file(s) have been built and pushed to their respective download locations, make the Docker image for the release.&lt;br /&gt;
&lt;br /&gt;
====Supplemental release guides====&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Old - use the packages built using the Continuous Delivery process&amp;lt;/font&amp;gt;&lt;br /&gt;
# [[RPM |Make the RPM Distributions]]. Follow these steps to create an RPM distribution of the software. &#039;&#039;&#039;Note:&#039;&#039;&#039; &#039;&#039;Now we use packages built using CI/CD, so this checklist is no longer needed.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: &#039;&#039;The following is all about using Subversion and is out of date as of November 2014 when we switched to git. There are still good ideas here...&#039;&#039;&lt;br /&gt;
* [[MergingBranches |How to merge code]]&lt;br /&gt;
* [[TrunkDevelBranchRel | Using the SVN trunk, branches and tags to manage releases]].&lt;br /&gt;
* [[ShrewBranchGuide | Making a Branch of Shrew for a Server Release]]. Releases should be made from the trunk and moved to a branch once they are &#039;ready&#039; so that development can continue on the trunk and so that we can easily go back to the software that made up a release, fix bugs, and (re)release those fixes. In general, it&#039;s better to fix things like build issues, etc., discovered in the released software &#039;&#039;on the trunk&#039;&#039; and merge those down to the release branch to maintain consistency, re-release, etc. This also means that virtually all new feature development should take place on special &#039;&#039;feature&#039;&#039; branches, not the trunk.&lt;br /&gt;
* [[Hyrax Package for OS-X]]. This describes how to make a new OS/X &#039;metapackage&#039; for Hyrax.&lt;br /&gt;
* [[XP| Making Windows XP distributions]]. Follow these directions to make Windows XP binaries.&lt;br /&gt;
* [[ReleaseToolbox |Making a Matlab Ocean Toolbox Release]].  Follow these steps when a new Matlab GUI version is ready to be released.&lt;br /&gt;
* [[Eclipse - How to Setup Eclipse in a Shrew Checkout]] This includes some build instructions&lt;br /&gt;
* [[LinuxBuildHostConfig| How to configure a Linux machine to build Hyrax from SVN]]&lt;br /&gt;
* [[ConfigureSUSE| How to configure a SUSE machine for production of RPM binaries]]&lt;br /&gt;
* [[ConfigureAmazonLinuxAMI| How to configure an Amazon Linux AMI for EC2 Instance To Build Hyrax]]&lt;br /&gt;
* [[TestOpendapOrg | Notes from setting up Hyrax on our new web host]]&lt;br /&gt;
* [http://svnbook.red-bean.com/en/1.7/index.html Subversion 1.7 documentation] -- The official Subversion documentation; [http://svnbook.red-bean.com/en/1.1/svn-book.pdf PDF] and [http://svnbook.red-bean.com/en/1.1/index.html HTML].&lt;br /&gt;
* [[OPeNDAP&#039;s Use of Trac]] -- How to use Trac&#039;s various features in the software development process.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13551</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13551"/>
		<updated>2024-08-29T18:12:27Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Building dmr++ files with get_dmrpp */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; format provides a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content in a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, without the need for an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, dmr++-enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
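To make the access pattern concrete, here is a minimal sketch (not the dmr++ reader itself) of the "read just these bytes" idea: over HTTP it is a Range GET, and against a local file it is a seek-and-read. The URL in the comment is hypothetical; the local simulation uses `dd`.

```shell
# dmr++-aware software issues ranged reads for just the chunks it needs.
# Over HTTP this is a Range GET, e.g. (hypothetical URL):
#   curl -s -r 1024-2047 https://example-bucket.s3.amazonaws.com/granule.h5 -o chunk.bin
# The same offset+length access against a local file, simulated with dd:
printf 'AAAABBBBCCCCDDDD' > granule.bin   # stand-in for a data granule
# read 4 bytes starting at offset 8 (the "CCCC" chunk)
dd if=granule.bin bs=1 skip=8 count=4 2>/dev/null
```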
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039;, &#039;&#039;netcdf-4&#039;&#039;, and (experimental as of 8/29/24) &#039;&#039;HDF4&#039;&#039;/&#039;&#039;HDF4-EOS2&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039;, so &#039;&#039;hdf5&#039;&#039; tools are used for both.) Other formats, like &#039;&#039;zarr&#039;&#039; and &#039;&#039;netcdf-3&#039;&#039;, are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested. However, an external group working on the Python Kerchunk software has developed &#039;&#039;VirtualiZarr&#039;&#039;, which can parse either Kerchunk or DMR++ documents and read the data they describe using the Zarr API.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. These are all of the storage layouts defined by the &#039;&#039;hdf5&#039;&#039; library, but others can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file, you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human-readable assessment of the file that shows the storage layouts, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump h5dump info can be found here.]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
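When a file has many variables, the full `h5dump -H -p` output gets long. A hedged sketch of a small helper that keeps only the layout/filter lines, so dmr++ suitability can be judged at a glance (the function names are illustrative; the real check requires the hdf5 command-line tools):

```shell
# Keep only the lines of h5dump header output that matter for dmr++:
# dataset names, storage layouts, and compression filters.
filter_layout_lines() {
  grep -E 'DATASET|CHUNKED|CONTIGUOUS|COMPACT|COMPRESSION'
}

# Run h5dump (header + properties only) and reduce it to the layout summary.
summarize_h5() {
  h5dump -H -p "$1" | filter_layout_lines
}

# Usage (assumes h5dump is installed): summarize_h5 chunked_gzipped_fourD.h5
```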
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (the command prints &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (it prints &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
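A hedged sketch of turning the keyword printed by `ncdump -k` into a dmr++ suitability verdict; the `classify_nc` helper and its wording are illustrative, not part of the netcdf tools:

```shell
# ncdump -k prints a format keyword such as "classic", "64-bit offset"
# (both netcdf-3 on disk), "netCDF-4", or "netCDF-4 classic model"
# (both hdf5-based). Map that keyword to a dmr++ verdict:
classify_nc() {
  case "$1" in
    netCDF-4*)               echo "netcdf-4 (hdf5-based; dmr++ candidate)" ;;
    classic|"64-bit offset") echo "netcdf-3 (not supported by dmr++)" ;;
    *)                       echo "unknown format: $1" ;;
  esac
}

# Usage (requires the netcdf tools): classify_nc "$(ncdump -k file.nc)"
```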
&lt;br /&gt;
===&#039;&#039;HDF4&#039;&#039;/&#039;&#039;HDF4-EOS2&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
This is a complicated case, and its support as of 8/29/24 is still considered experimental. The HDF4 data model is quite complex, more so than the HDF5 model, and we are focusing on complete support for those features used by NASA. To this end, we are also working on support for HDF4-EOS2, data files that can only be read correctly with the HDF4-EOS2 library. The main distinction of that API is its treatment of the values of the Domain variables for Latitude and Longitude. Our support handles the HDF4-EOS Grid data type, and using DMR++ the Latitude and Longitude values appear as users expect, although some aspects of this work are ongoing. We do not yet support the HDF4-EOS2 Swath data type.&lt;br /&gt;
&lt;br /&gt;
See the section below for information on the tool for building DMR++ files for HDF4 and HDF4-EOS2 data files.&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the Docker container we build for the &#039;&#039;hyrax&#039;&#039; server. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code and use the tool that way (as part of a source code build), but that is much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy the HDF4 and/or HDF4-EOS2 files into that directory. Once the files are there, set an environment variable so the directory can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
 export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get and run the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
 docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the container &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; from Docker Hub. The &#039;&#039;snapshot&#039;&#039; tag is the latest build of the container. It will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
The volume mount, from $HDF4_DIR to &#039;/usr/share/hyrax&#039;, mounts the current directory of the host computer running the container to the directory &#039;&#039;/usr/share/hyrax&#039;&#039; inside the container. That directory is the root of the server&#039;s data tree. This means that the HDF4 files you copied into the HDF4_DIR directory will be accessible to the server running in the container. That will be useful for testing later on.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot&#039;.&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
 docker ps&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the running Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
 CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                               NAMES&lt;br /&gt;
 2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, 10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop containers, use&lt;br /&gt;
&lt;br /&gt;
 docker rm -f &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; of the one we just started, shown in the output of &#039;&#039;docker ps&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. No need to stop the container now; I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At the end of this, I&#039;ll include a shell script that takes away many of these steps, but the script obscures some aspects of the command that you might want to tweak, so the following shows you all the details. Skip to &#039;&#039;&#039;Simple shell command&#039;&#039;&#039; to skip over these details.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
optional arguments:&lt;br /&gt;
  &lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now by working inside the container:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
starts the &#039;&#039;bash&#039;&#039; shell in the container, with the current directory set to the container&#039;s root (/):&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
 cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;[root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
[root@hyrax hyrax]# ls&lt;br /&gt;
3B42.19980101.00.7.HDF&lt;br /&gt;
3B42.19980101.03.7.HDF&lt;br /&gt;
3B42.19980101.06.7.HDF&lt;br /&gt;
&lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In that directory, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.20130111.09.7.HDF -u &#039;file:///usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory) using the &#039;&#039;&#039;-i&#039;&#039;&#039; option. The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the URL that follows it in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon, two from the way a URL names a protocol and one because the pathname starts at the root directory. Obscure, but it makes sense.&lt;br /&gt;
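&lt;br /&gt;
The construction can be checked mechanically. Here is a tiny shell sketch (the path is illustrative) that builds such a URL and verifies the three slashes:&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Build a file:// URL from an absolute path (the path is illustrative).
path=/usr/share/hyrax/3B42.20130111.09.7.HDF

# "file://" supplies two slashes; the absolute path supplies the third.
url="file://${path}"
echo "${url}"

# Sanity check: a well-formed local file URL starts with file:///
case "${url}" in
    file:///*) echo "OK: three slashes" ;;
    *)         echo "ERROR: malformed file URL" ;;
esac
```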
&lt;br /&gt;
Building the DMR++ and embedding a &#039;&#039;file://&#039;&#039; URL will enable testing the DMR++.&lt;br /&gt;
====Using the server to examine data returned by the DMR++====&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at how the &#039;&#039;hyrax&#039;&#039; service will treat that data file using the DMR++. In a browser, go to &lt;br /&gt;
&lt;br /&gt;
 http://localhost:8080/opendap/&lt;br /&gt;
&lt;br /&gt;
[[File:Hyrax-including-new-DNRpp.png|200px|thumb|right|text-top|The running server shows the DMR++ as a dataset.]]&lt;br /&gt;
&lt;br /&gt;
Note: &#039;&#039;The server caches data catalog information for 5 minutes (configurable) so new items (e.g., DMR++ documents) may not show up right away. To force the display of a DMR++ that you just created, click on the source data file name and edit the URL so that the suffix &#039;&#039;&#039;.dmr.html&#039;&#039;&#039; is replaced by &#039;&#039;&#039;.dmrpp/dmr&#039;&#039;&#039;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Click on your equivalent of the &#039;&#039;&#039;3B42.20130111.09.7.HDF&#039;&#039;&#039; link, then subset, download, and open the result in Panoply or the equivalent. [[File:Hyrax-subsetting.png|200px|thumb|right|text-top|Use the form interface to subset and get a response.]]&lt;br /&gt;
&lt;br /&gt;
You can run batch tests on lots of files by building many DMR++ documents and then asking the server for various responses (nc4, dap) from both the DMR++ and the original file. Those responses could be compared using various schemes; a full treatment is beyond this section&#039;s scope, but the command &#039;&#039;getdap4&#039;&#039; is also included in the container and could be used to compare &#039;&#039;dap&#039;&#039; responses from the data file and the DMR++ document.&lt;br /&gt;
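&lt;br /&gt;
One such scheme, sketched below with placeholder files; a real run would first fetch the two responses (with &#039;&#039;curl&#039;&#039; or &#039;&#039;getdap4&#039;&#039;), and only the byte-for-byte comparison is the essential piece:&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Sketch of one comparison scheme. The file names are placeholders and the
# fetching step is faked (in practice the files would come from curl or
# getdap4); only the byte-for-byte comparison matters here.
response_a=/tmp/from_dmrpp.dap
response_b=/tmp/from_file.dap

printf 'same bytes\n' > "${response_a}"
printf 'same bytes\n' > "${response_b}"

if cmp -s "${response_a}" "${response_b}"; then
    echo "responses match"
else
    echo "responses DIFFER"
fi
```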
&lt;br /&gt;
To the right is a comparison of the same underlying data: the left window shows the data returned using the DMR++, the right shows the data read directly from the file using the server&#039;s builtin HDF4 reader. &lt;br /&gt;
[[File:Data-comparison.png|200px|thumb|right|text-top|Comparison of responses from a DMR++ and the native file handler.]]&lt;br /&gt;
&lt;br /&gt;
====Simple shell command====&lt;br /&gt;
&lt;br /&gt;
Here is a simple shell command that you can run on the host computer that will eliminate most of the above. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In the spirit of a recipe, I&#039;ll restate the earlier command for starting the docker container with the &#039;&#039;&#039;get_dmrpp_h4&#039;&#039;&#039; command and the &#039;&#039;&#039;hyrax&#039;&#039;&#039; server.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Start the container:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
Check that it is running:&lt;br /&gt;
&lt;br /&gt;
  docker ps&lt;br /&gt;
&lt;br /&gt;
The command, written for the Bourne Shell, is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#!/bin/sh&lt;br /&gt;
#&lt;br /&gt;
# usage get_dmrpp_h4.sh &amp;lt;file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
data_root=/usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
cat &amp;lt;&amp;lt;EOF | docker exec --interactive hyrax sh&lt;br /&gt;
cd $data_root&lt;br /&gt;
get_dmrpp_h4 -i $1 -u &amp;quot;file://$data_root/$1&amp;quot;&lt;br /&gt;
EOF&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy that, save it in a file (I named the file &#039;&#039;get_dmrpp_h4.sh&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Run the command on the host (not in the docker container) and in the directory with the HDF4 files (you don&#039;t &#039;&#039;have&#039;&#039; to run it from there, but sorting out the details is left as an exercise for the reader ;-). Invoke it like this:&lt;br /&gt;
&lt;br /&gt;
  ./get_dmrpp_h4.sh AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
&lt;br /&gt;
The DMR++ will appear when the command completes.&lt;br /&gt;
&lt;br /&gt;
  (hyrax500) hyrax_git/HDF4-dir % ls -l&lt;br /&gt;
  total 1251240&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff    1250778 Aug 22 22:31 AMSR_E_L2_Land_V09_200206191112_A.hdf&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff   20746207 Aug 22 22:32 AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
  -rw-r--r--  1 jimg  staff    3378674 Aug 28 17:37 AMSR_E_L3_SeaIce25km_V15_20020601.hdf.dmrpp&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF5/NetCDF4 with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, and &#039;&#039;merge_dmrpp&#039;&#039; (which themselves rely on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039; which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
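&lt;br /&gt;
The substitution itself is performed by the server at runtime; as a rough sketch of the idea only (the file names and the URL here are hypothetical), the template string could be swapped for a real URL like this:&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Sketch only: Hyrax performs this substitution internally at runtime.
# Make a stand-in dmr++ fragment that contains the template string.
echo 'dmrpp:href="OPeNDAP_DMRpp_DATA_ACCESS_URL"' > example.dmrpp

# Swap in a real data URL (the URL is a placeholder).
sed 's|OPeNDAP_DMRpp_DATA_ACCESS_URL|https://example.com/data/granule.h5|' \
    example.dmrpp > resolved.dmrpp

cat resolved.dmrpp
```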
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will substitute a value at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate bes configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much as the site.conf file works for the full server deployment, it will be loaded last and the settings therein will override the default configuration.&lt;br /&gt;
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode, prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode,  prints the DMR, the command and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3 hosted granules as inputs, and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3-hosted granules are supported by default&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check to see if the AWS CLI is configured and, if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from the granule, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039;, the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
&lt;br /&gt;
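For example, an addendum file for the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; option that overrides a couple of these defaults could be created like this (the file name is arbitrary):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Create a small addendum configuration that flips two of the defaults
# listed above. The file name is arbitrary; -s points get_dmrpp at it.
printf 'H5.KeepVarLeadingUnderscore=true\nH5.EnableDropLongString=false\n' > h5_overrides.conf

cat h5_overrides.conf
```
&lt;br /&gt;
Then invoke the tool with the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch: &amp;lt;tt&amp;gt;get_dmrpp -s h5_overrides.conf ...&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;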
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, the &#039;&#039;H5.EnableCF&#039;&#039; option used by &#039;&#039;get_dmrpp&#039;&#039; is set to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These can be simply categorized as follows:&lt;br /&gt;
The dmr++ file(s) are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL referencing the underlying granule file via http(s).&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose names end with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs then there are two steps to serve the data (plus a third if authentication is involved):&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as need be.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so that the completed URL will have three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with the template string. (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs that are defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When encountering this type of URL Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
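&lt;br /&gt;
As a rough illustration of that decomposition (Hyrax&#039;s actual parser is internal to the server, and this sketch ignores the optional ShortName.Version component), the mandatory parameters can be pulled out of the example URL with plain shell parameter expansion:&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Rough sketch of how a restified path decomposes; the URL is the example
# from the text. Hyrax's real parser is internal to the server.
url='https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc'

# Everything after "/collections/" up to the next "/" is the concept-id.
rest="${url#*/collections/}"
concept_id="${rest%%/*}"

# Everything after "/granules/" is the GranuleUR.
granule="${url#*/granules/}"

echo "concept-id: ${concept_id}"
echo "GranuleUR:  ${granule}"
```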
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here].&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here, one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want, you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or if you want more scripting flexibility you can log in to the docker container to do the same:&lt;br /&gt;
### Log in to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the regex \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section/step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server. (See the following sections for details.) If the generated dmr++ files do not match expectations then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, this is where understanding exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection becomes important. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired (say, for CF-1.0 compatibility), you can change it by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types, such as Int64, that are not part of the DAP2 specification or implementations, we will need to rely on eliding the instances of unmapped types, or return an error when one is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
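&lt;br /&gt;
If the URL is not in the first two lines, &#039;&#039;grep&#039;&#039; can pull it out directly. A sketch using a stand-in file (a real &#039;&#039;.dmrpp&#039;&#039; would come from &#039;&#039;get_dmrpp&#039;&#039;, and its contents would be full XML):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Stand-in file holding an href attribute; a real .dmrpp comes from
# get_dmrpp and is full XML.
printf 'Dataset dmrpp:href="file:///usr/share/hyrax/some_name.h5"\n' > check_me.dmrpp

# Pull out just the href attribute to eyeball it.
grep -o 'dmrpp:href="[^"]*"' check_me.dmrpp
```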
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmrpp.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a full qualified DAP4 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form: &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
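The note above can be sketched as a small dry-run loop that prints the curl and cmp commands for several variables at once; the variable names here are hypothetical placeholders:&lt;br /&gt;

```shell
#!/bin/sh
# Dry-run sketch: emit the fetch-and-compare commands for several DAP2
# variables. Nothing is fetched; the output is a checklist of commands.
gf_url="http://localhost:8080/opendap/some_name.h5"

for v in latitude longitude time; do
  printf 'curl -L -o file.dods "%s.dods?%s"\n' "$gf_url" "$v"
  printf 'curl -L -o dmrpp.dods "%s.dmrpp.dods?%s"\n' "$gf_url" "$v"
  printf 'cmp file.dods dmrpp.dods\n'
done
```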
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the dataset URL (&amp;lt;tt&amp;gt;${gf_url}&amp;lt;/tt&amp;gt;, without the &amp;lt;tt&amp;gt;.html&amp;lt;/tt&amp;gt; suffix) into the resulting dialog box.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13550</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13550"/>
		<updated>2024-08-29T18:11:48Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Supported Data Formats */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; format is a fast and flexible way to serve data stored in S3.&lt;br /&gt;
A dmr++ file encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, without the need for an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, dmr++ enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039;, &#039;&#039;netcdf-4&#039;&#039;, and (experimental as of 8/29/24) &#039;&#039;HDF4&#039;&#039;/&#039;&#039;HDF4-EOS2&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039;, so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039; and &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested. However, an external group working on the Python Kerchunk software has developed &#039;&#039;VirtualiZarr&#039;&#039;, which can parse either Kerchunk or DMR++ documents and read the data they describe using the Zarr API.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. These cover the most common storage layouts defined by the &#039;&#039;hdf5&#039;&#039; library; support for others can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human-readable assessment of the file that shows the storage layouts, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump info can be found here.]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
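Output like the above can be scanned automatically. A minimal sketch, with the h5dump output simulated as a string (pipe in real &amp;lt;code&amp;gt;h5dump -H -p&amp;lt;/code&amp;gt; output instead):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: scan h5dump -H -p output for compression filters that the dmr++
# software may not support. The h5dump output is simulated here.
h5dump_out='FILTERS {
   COMPRESSION DEFLATE { LEVEL 6 }
}'

# DEFLATE (along with SHUFFLE and FLETCHER32) is supported; flag anything else.
unsupported=$(printf '%s\n' "$h5dump_out" | grep 'COMPRESSION' | grep -v 'DEFLATE')
if [ -z "$unsupported" ]; then
  echo "compression filters look supported"
else
  echo "check these filters: $unsupported"
fi
```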
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (reported as &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (reported as &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
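A small sketch of branching on the &amp;lt;code&amp;gt;ncdump -k&amp;lt;/code&amp;gt; result; the value is simulated here rather than read from a real file:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: decide dmr++ suitability from the format reported by ncdump -k.
# In real use: kind=$(ncdump -k some_file.nc)
kind="netCDF-4"

case "$kind" in
  classic|64-bit*) echo "netcdf-3: not usable with dmr++" ;;
  netCDF-4*)       echo "netcdf-4: candidate for dmr++" ;;
  *)               echo "unknown format: $kind" ;;
esac
```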
&lt;br /&gt;
===&#039;&#039;HDF4&#039;&#039;/&#039;&#039;HDF4-EOS2&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
This is a complicated case, and its support as of 8/29/24 is still considered experimental. The HDF4 data model is quite complex, more so than the HDF5 model, and we are focusing on complete support for those features used by NASA. To this end, we are also working on support for HDF4-EOS2, data files that can only be read correctly with the HDF4-EOS2 library. The main distinction of that API is its treatment of the values of the domain variables for Latitude and Longitude. Our support handles the HDF4-EOS2 Grid data type, and with DMR++ the Latitude and Longitude values appear as users expect, although some aspects of this work are ongoing. We do not yet support the HDF4-EOS2 Swath data type.&lt;br /&gt;
&lt;br /&gt;
See the section below for information on the tool for building DMR++ files for HDF4 and HDF4-EOS2 data files.&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the docker container we build for the &#039;&#039;hyrax&#039;&#039; server/service. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code and use the command that way (as part of a source code build), but that is much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy the HDF4 and/or HDF4-EOS2 files in that directory. Once you have the files in that directory, make an environment variable so it can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
 export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
 docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the container &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; from Docker Hub. The &#039;&#039;snapshot&#039;&#039; tag is the latest build of the container. It will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
The volume mount, from $HDF4_DIR to &#039;/usr/share/hyrax&#039; mounts the current directory of the host computer running the container to the directory &#039;&#039;/usr/share/hyrax&#039;&#039; inside the container. That directory is the root of the server&#039;s data tree. This means that the HDF4 files you copied into the HDF4_DIR directory will be accessible by the server running in the container. That will be useful for testing later on.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot.&#039;&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
 docker ps&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the running Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
 CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                              &lt;br /&gt;
 NAMES&lt;br /&gt;
 2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, &lt;br /&gt;
 10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop and remove a container, use&lt;br /&gt;
&lt;br /&gt;
 docker rm -f &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; for the one we just started, as shown in the output of &#039;&#039;docker ps&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. No need to stop the container now; I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At the end of this, I&#039;ll include a shell script that automates many of these steps, but the script obscures some aspects of the command that you might want to tweak, so the following shows you all the details. Jump to &#039;&#039;&#039;Simple shell command&#039;&#039;&#039; to skip over these details.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
optional arguments:&lt;br /&gt;
  &lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now, by explicitly using the container:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
starts the &#039;&#039;bash&#039;&#039; shell in the container, with the current directory as root (/)&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
 cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;[root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
[root@hyrax hyrax]# ls&lt;br /&gt;
3B42.19980101.00.7.HDF&lt;br /&gt;
3B42.19980101.03.7.HDF&lt;br /&gt;
3B42.19980101.06.7.HDF&lt;br /&gt;
&lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In that directory, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.20130111.09.7.HDF -u &#039;file:///usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory) using the &#039;&#039;&#039;-i&#039;&#039;&#039; option. The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the URL that follows it in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon, two from the way a URL names a protocol and one because the pathname starts at the root directory. Obscure, but it makes sense.&lt;br /&gt;
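A one-line illustration of the three-slash rule, using the same path as the example above:&lt;br /&gt;

```shell
#!/bin/sh
# "file://" contributes two slashes and the absolute path, which starts
# at the root directory, contributes the third.
path=/usr/share/hyrax/3B42.20130111.09.7.HDF
url="file://${path}"
echo "$url"
```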
&lt;br /&gt;
Building the DMR++ and embedding a &#039;&#039;file://&#039;&#039; URL will enable testing the DMR++.&lt;br /&gt;
====Using the server to examine data returned by the DMR++====&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at how the &#039;&#039;hyrax&#039;&#039; service will treat that data file using the DMR++. In a browser, go to &lt;br /&gt;
&lt;br /&gt;
 http://localhost:8080/opendap/&lt;br /&gt;
&lt;br /&gt;
[[File:Hyrax-including-new-DNRpp.png|200px|thumb|right|text-top|The running server shows the DMR++ as a dataset.]]&lt;br /&gt;
&lt;br /&gt;
Note: &#039;&#039;The server caches data catalog information for 5 minutes (configurable) so new items (e.g., DMR++ documents) may not show up right away. To force the display of a DMR++ that you just created, click on the source data file name and edit the URL so that the suffix &#039;&#039;&#039;.dmr.html&#039;&#039;&#039; is replaced by &#039;&#039;&#039;.dmrpp.dmr&#039;&#039;&#039;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Click on your equivalent of the &#039;&#039;&#039;3B42.20130111.09.7.HDF&#039;&#039;&#039; link, subset, download, and open the result in Panoply or the equivalent. [[File:Hyrax-subsetting.png|200px|thumb|right|text-top|Use the form interface to subset and get a response.]]&lt;br /&gt;
&lt;br /&gt;
You can run batch tests on lots of files by building many DMR++ documents and then asking the server for various responses (nc4, dap) from both the DMR++ and the original file. Those responses could be compared using various schemes; a full treatment is beyond this section&#039;s scope, but the command &#039;&#039;getdap4&#039;&#039;, also included in the container, can be used to compare &#039;&#039;dap&#039;&#039; responses from the data file and the DMR++ document.&lt;br /&gt;
&lt;br /&gt;
To the right is a comparison of the same underlying data, the left window shows the data returned using the DMR++, the right shows the data read directly from the file using the server&#039;s builtin HDF4 reader. &lt;br /&gt;
[[File:Data-comparison.png|200px|thumb|right|text-top|Comparison of responses from a DMR++ and the native file handler.]]&lt;br /&gt;
&lt;br /&gt;
====Simple shell command====&lt;br /&gt;
&lt;br /&gt;
Here is a simple shell command that you can run on the host computer that will eliminate most of the above. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In the spirit of a recipe, I&#039;ll restate the earlier command for starting the docker container with the &#039;&#039;&#039;get_dmrpp_h4&#039;&#039;&#039; command and the &#039;&#039;&#039;hyrax&#039;&#039;&#039; server.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Start the container:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
Check that it is running:&lt;br /&gt;
&lt;br /&gt;
  docker ps&lt;br /&gt;
&lt;br /&gt;
The command, written for the Bourne Shell, is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#!/bin/sh&lt;br /&gt;
#&lt;br /&gt;
# usage get_dmrpp_h4.sh &amp;lt;file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
data_root=/usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
cat &amp;lt;&amp;lt;EOF | docker exec --interactive hyrax sh&lt;br /&gt;
cd $data_root&lt;br /&gt;
get_dmrpp_h4 -i $1 -u &amp;quot;file://$data_root/$1&amp;quot;&lt;br /&gt;
EOF&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy that, save it in a file (I named the file &#039;&#039;get_dmrpp_h4.sh&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Run the command on the host (not the docker container) and in the directory with the HDF4 files (you don&#039;t &#039;&#039;have&#039;&#039; to do that, but sorting out the details is left as an exercise for the reader ;-). Run the command like this:&lt;br /&gt;
&lt;br /&gt;
  ./get_dmrpp_h4.sh AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
&lt;br /&gt;
The DMR++ will appear when the command completes.&lt;br /&gt;
&lt;br /&gt;
  (hyrax500) hyrax_git/HDF4-dir % ls -l&lt;br /&gt;
  total 1251240&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff    1250778 Aug 22 22:31 AMSR_E_L2_Land_V09_200206191112_A.hdf&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff   20746207 Aug 22 22:32 AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
  -rw-r--r--  1 jimg  staff    3378674 Aug 28 17:37 AMSR_E_L3_SeaIce25km_V15_20020601.hdf.dmrpp&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, &#039;&#039;merge_dmrpp&#039;&#039; (which rely in turn on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039;, which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data, then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will substitute a value at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate bes configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much like the site.conf file for a full server deployment, it will be loaded last and the settings therein will override the default configuration.&lt;br /&gt;
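To illustrate the template-string behavior described for -u: a dmr++ built without -u carries the placeholder, which is replaced with a real URL at runtime. The sed command below only sketches that idea (the server performs the substitution itself when it evaluates the dmr++), and the URL is a made-up example:&lt;br /&gt;

```shell
#!/bin/sh
# A dmr++ attribute carrying the template string, as a one-line stand-in.
line='dmrpp:href="OPeNDAP_DMRpp_DATA_ACCESS_URL"'
data_url="https://example.com/bucket/granule.h5"

# Replace the placeholder with the real data URL.
printf '%s\n' "$line" | sed "s|OPeNDAP_DMRpp_DATA_ACCESS_URL|${data_url}|"
```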
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: verbose mode, prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode,  prints the DMR, the command and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3 hosted granules as inputs, and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3 hosted granules are supported by default&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check to see if the AWS CLI is configured and, if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true, then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from the granule, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039; the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These can be simply categorized as follows:&lt;br /&gt;
The dmr++ file(s) are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL that references the underlying granule file via http(s).&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs then there are two steps to serve the data, plus a third when authentication is involved:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as needed.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so that the completed URL has three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
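The rule above can be sketched in a few lines of shell (the granule path is a hypothetical example):&lt;br /&gt;

```shell
# Sketch: build a correctly formed file:// URL from a fully qualified path.
# Because the absolute path itself begins with a slash, prepending "file://"
# yields the required three slashes after the first colon.
granule=/usr/share/hyrax/ghrsst/some_granule.h5
url="file://${granule}"
echo "$url"   # prints file:///usr/share/hyrax/ghrsst/some_granule.h5
```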
&lt;br /&gt;
===Using dmr++ with the template string. (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs, defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When encountering this type of URL, Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here.]&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here, one using the Hyrax docker container and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a  &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or, if you want more scripting flexibility, you can log in to the docker container to do the same:&lt;br /&gt;
### Login to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the regex \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users expect, and that they work correctly with the Hyrax server (see the following sections for details). If the generated dmr++ files do not match expectations then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, it is important to understand exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. More modern datasets may contain DAP4 types, such as Int64, that are not part of the DAP2 specification or implementations; for these we will need to rely on eliding the instances of unmapped types, or on returning an error when one is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmrpp.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a full qualified DAP4 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
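When many variables need checking, the fetch-and-compare pattern above can be wrapped in a small helper. This is a sketch, not part of the OPeNDAP tooling; it assumes &amp;lt;tt&amp;gt;curl&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;cmp&amp;lt;/tt&amp;gt; are on the PATH and that &amp;lt;tt&amp;gt;gf_url&amp;lt;/tt&amp;gt; is the granule URL defined earlier:&lt;br /&gt;

```shell
# Sketch: fetch two DAP responses and compare them byte-for-byte.
# The arguments may be http(s):// URLs against the running Hyrax, or
# file:// URLs for local testing.
compare_responses() {
  url1="$1"
  url2="$2"
  t1=$(mktemp)
  t2=$(mktemp)
  curl -sL -o "$t1" "$url1"
  curl -sL -o "$t2" "$url2"
  if cmp -s "$t1" "$t2"; then
    echo "MATCH"
  else
    echo "DIFFER"
  fi
  rm -f "$t1" "$t2"
}

# Usage against the running server (VARIABLE_NAME as above):
#   compare_responses "${gf_url}.dap?dap4.ce=VARIABLE_NAME" \
#                     "${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME"
```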
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the &amp;lt;tt&amp;gt;${gf_url}&amp;lt;/tt&amp;gt; into the resulting dialog box.&lt;br /&gt;
---&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13549</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13549"/>
		<updated>2024-08-29T00:25:00Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Simple shell command */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The dmr++ file is a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that it can be directly accessed, without the need for an intermediate library API, by using the file with the location information. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, dmr++-enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. These are the principal storage layouts defined by the &#039;&#039;hdf5&#039;&#039; library, and support for others can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human-readable assessment of the file that shows the storage layout, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump information can be found here.]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
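Checking a file against the supported filter set can be partly automated by scanning the &amp;lt;tt&amp;gt;h5dump&amp;lt;/tt&amp;gt; output. The helper below is a sketch, not part of the OPeNDAP tooling; its pattern matching is based on the h5dump text format shown above, so adjust the patterns if your h5dump version prints differently:&lt;br /&gt;

```shell
# Sketch: read "h5dump -H -p" output on stdin and report, for each filter
# token found, whether it is in the dmr++-supported set
# (DEFLATE, SHUFFLE, FLETCHER32).
classify_filters() {
  grep -Eo 'COMPRESSION DEFLATE|COMPRESSION SZIP|SHUFFLE|FLETCHER32|NBIT|SCALEOFFSET' |
    sort -u |
    while read -r f; do
      case "$f" in
        'COMPRESSION DEFLATE'|SHUFFLE|FLETCHER32) echo "supported: $f" ;;
        *) echo "UNSUPPORTED: $f" ;;
      esac
    done
}

# Usage (assumes h5dump from the hdf5 tools is on the PATH):
#   h5dump -H -p granule.h5 | classify_filters
```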
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (the command prints &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (it prints &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
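A small helper can turn the &amp;lt;tt&amp;gt;ncdump -k&amp;lt;/tt&amp;gt; answer into a dmr++ suitability verdict. This is a sketch, not part of the OPeNDAP tooling:&lt;br /&gt;

```shell
# Sketch: classify the format string printed by "ncdump -k".
# "classic" and "64-bit offset" are netcdf-3; "netCDF-4" and
# "netCDF-4 classic model" are hdf5-based and usable with dmr++.
classify_nc() {
  case "$1" in
    netCDF-4*)               echo "netcdf-4 (hdf5-based): usable with dmr++" ;;
    classic|'64-bit offset') echo "netcdf-3: not supported by dmr++" ;;
    *)                       echo "unrecognized format: $1" ;;
  esac
}

# Usage (requires the netcdf tools for ncdump itself):
#   classify_nc "$(ncdump -k some_file.nc)"
```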
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the docker container we build for the &#039;&#039;Hyrax&#039;&#039; server/service. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code and use the client that way (as part of a source code build), but that is much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy the HDF4 and/or HDF4-EOS2 files into that directory. Once the files are there, make an environment variable so the directory can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
 export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
 docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the container &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; from Docker Hub. The &#039;&#039;snapshot&#039;&#039; tag is the latest build of the container. It will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
The volume mount, from $HDF4_DIR to &#039;/usr/share/hyrax&#039;, maps that directory on the host computer to the directory &#039;&#039;/usr/share/hyrax&#039;&#039; inside the container. That directory is the root of the server&#039;s data tree. This means that the HDF4 files you copied into the HDF4_DIR directory will be accessible by the server running in the container. That will be useful for testing later on.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot.&#039;&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
 docker ps&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the running Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
 CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                               NAMES&lt;br /&gt;
 2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, 10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop containers, use&lt;br /&gt;
&lt;br /&gt;
 docker rm -f &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; for the one we just started, shown in the output of &#039;&#039;docker ps&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. No need to stop the container now, I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At the end of this, I&#039;ll include a shell script that takes away many of these steps, but the script obscures some aspects of the command that you might want to tweak, so the following shows you all the details. Skip to &#039;&#039;&#039;Simple shell command&#039;&#039;&#039; to skip over these details.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
optional arguments:&lt;br /&gt;
  &lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now, by explicitly using the container:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
starts the &#039;&#039;bash&#039;&#039; shell in the container, with the root directory (/) as the current directory:&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
 cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;[root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
[root@hyrax hyrax]# ls&lt;br /&gt;
3B42.19980101.00.7.HDF&lt;br /&gt;
3B42.19980101.03.7.HDF&lt;br /&gt;
3B42.19980101.06.7.HDF&lt;br /&gt;
&lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In that directory, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.20130111.09.7.HDF -u &#039;file:///usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory) using the &#039;&#039;&#039;-i&#039;&#039;&#039; option. The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the URL that follows it in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon, two from the way a URL names a protocol and one because the pathname starts at the root directory. Obscure, but it makes sense.&lt;br /&gt;
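The three-slashes rule can be sketched in shell; this is just an illustration using the example file from this section, not a new tool:

```shell
# The "file://" scheme prefix supplies two slashes; the absolute path
# supplies the third.
path=/usr/share/hyrax/3B42.20130111.09.7.HDF
url="file://${path}"
echo "${url}"   # file:///usr/share/hyrax/3B42.20130111.09.7.HDF
```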
&lt;br /&gt;
Building the DMR++ and embedding a &#039;&#039;file://&#039;&#039; URL will enable testing the DMR++.&lt;br /&gt;
====Using the server to examine data returned by the DMR++====&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at how the &#039;&#039;hyrax&#039;&#039; service will treat that data file using the DMR++. In a browser, go to &lt;br /&gt;
&lt;br /&gt;
 http://localhost:8080/opendap/&lt;br /&gt;
&lt;br /&gt;
[[File:Hyrax-including-new-DNRpp.png|200px|thumb|right|text-top|The running server shows the DMR++ as a dataset.]]&lt;br /&gt;
&lt;br /&gt;
Note: &#039;&#039;The server caches data catalog information for 5 minutes (configurable) so new items (e.g., DMR++ documents) may not show up right away. To force the display of a DMR++ that you just created, click on the source data file name and edit the URL so that the suffix &#039;&#039;&#039;.dmr.html&#039;&#039;&#039; is replaced by &#039;&#039;&#039;.dmrpp/dmr&#039;&#039;&#039;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Click on your equivalent of the &#039;&#039;3B42.20130111.09.7.HDF&#039;&#039; link, then subset, download, and open the result in Panoply or an equivalent tool. [[File:Hyrax-subsetting.png|200px|thumb|right|text-top|Use the form interface to subset and get a response.]]&lt;br /&gt;
&lt;br /&gt;
You can run batch tests on lots of files by building many DMR++ documents and then asking the server for various responses (nc4, dap) from both the DMR++ and the original file. Those responses could be compared using various schemes; while a full treatment is beyond this section&#039;s scope, the command &#039;&#039;getdap4&#039;&#039; is also included in the container and could be used to compare &#039;&#039;dap&#039;&#039; responses from the data file and the DMR++ document.&lt;br /&gt;
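One way to organize such a comparison is to derive, for each DMR++, the pair of server URLs whose &#039;&#039;dap&#039;&#039; responses should match. A minimal sketch (the host/port and file name come from the examples in this section; no request is actually made here, so it composes the URLs rather than fetching them):

```shell
# For each DMR++ in the data root, print the pair of URLs whose .dap
# responses could be compared (e.g., with getdap4, or curl plus cmp).
base="http://localhost:8080/opendap"
for dmrpp in 3B42.20130111.09.7.HDF.dmrpp; do
    granule="${dmrpp%.dmrpp}"
    pair="${base}/${granule}.dap ${base}/${dmrpp}.dap"
    echo "${pair}"
done
```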
&lt;br /&gt;
To the right is a comparison of the same underlying data: the left window shows the data returned using the DMR++, and the right shows the data read directly from the file using the server&#039;s built-in HDF4 reader. &lt;br /&gt;
[[File:Data-comparison.png|200px|thumb|right|text-top|Comparison of responses from a DMR++ and the native file handler.]]&lt;br /&gt;
&lt;br /&gt;
====Simple shell command====&lt;br /&gt;
&lt;br /&gt;
Here is a simple shell command that you can run on the host computer that will eliminate most of the above. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In the spirit of a recipe, I&#039;ll restate the earlier command for starting the docker container with the &#039;&#039;&#039;get_dmrpp_h4&#039;&#039;&#039; command and the &#039;&#039;&#039;hyrax&#039;&#039;&#039; server.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Start the container:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
Is it running:&lt;br /&gt;
&lt;br /&gt;
  docker ps&lt;br /&gt;
&lt;br /&gt;
The command, written for the Bourne Shell, is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#!/bin/sh&lt;br /&gt;
#&lt;br /&gt;
# usage get_dmrpp_h4.sh &amp;lt;file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
data_root=/usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
cat &amp;lt;&amp;lt;EOF | docker exec --interactive hyrax sh&lt;br /&gt;
cd $data_root&lt;br /&gt;
get_dmrpp_h4 -i $1 -u &amp;quot;file://$data_root/$1&amp;quot;&lt;br /&gt;
EOF&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy that, save it in a file (I named the file &#039;&#039;get_dmrpp_h4.sh&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Run the script on the host (not in the docker container) and in the directory with the HDF4 files (you don&#039;t &#039;&#039;have&#039;&#039; to do that, but sorting out the details is left as an exercise for the reader ;-). Run it like this:&lt;br /&gt;
&lt;br /&gt;
  ./get_dmrpp_h4.sh AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
&lt;br /&gt;
The DMR++ will appear when the command completes.&lt;br /&gt;
&lt;br /&gt;
  (hyrax500) hyrax_git/HDF4-dir % ls -l&lt;br /&gt;
  total 1251240&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff    1250778 Aug 22 22:31 AMSR_E_L2_Land_V09_200206191112_A.hdf&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff   20746207 Aug 22 22:32 AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
  -rw-r--r--  1 jimg  staff    3378674 Aug 28 17:37 AMSR_E_L3_SeaIce25km_V15_20020601.hdf.dmrpp&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, &#039;&#039;merge_dmrpp&#039;&#039; (which rely in turn on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using those options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039; which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data, then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will substitute a value at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
: The path to an alternate BES configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
: The path to an optional addendum configuration file that will be appended to the default BES configuration. Much like the site.conf file in a full server deployment, it is loaded last and the settings therein override the default configuration.&lt;br /&gt;
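Putting the input options together, a typical invocation combines -b, -u, and -o. The sketch below only assembles the command line (it does not run &#039;&#039;get_dmrpp&#039;&#039;), and the data root and granule name are placeholders:

```shell
# Assemble, but do not execute, a get_dmrpp command line using the
# -b (data root), -u (data URL), and -o (output file) options above.
data_root=/usr/share/hyrax
granule=granule.h5
cmd="get_dmrpp -b ${data_root} -u file://${data_root}/${granule} -o ${granule}.dmrpp ${granule}"
echo "${cmd}"
```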
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode; prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode; prints the DMR, the command, and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++.&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
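As with the earlier options, a missing-data invocation can be sketched by assembling the command line. This only builds the string (it does not run &#039;&#039;get_dmrpp&#039;&#039;), and all file names are placeholders:

```shell
# Assemble a get_dmrpp command line that requests a missing-data sidecar
# file (-M) and names a shared missing-variable file (-r), per the
# options described above.
granule=granule.h5
cmd="get_dmrpp -M -r missing_vars.h5 -b /usr/share/hyrax -o ${granule}.dmrpp ${granule}"
echo "${cmd}"
```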
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3 hosted granules as inputs, and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3-hosted granules are supported by default&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check whether the AWS CLI is configured and, if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ using whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true, then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from it, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well-known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039;, the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=false&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
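Any of the values above can be changed at generation time by writing a small addendum configuration file and passing it to &#039;&#039;get_dmrpp&#039;&#039; with -s, as described earlier. A minimal sketch (the parameter chosen and the file name are just illustrations):

```shell
# Write an addendum configuration that overrides one handler default;
# it would then be passed to get_dmrpp with the -s option.
conf=$(mktemp)
echo "H5.EnableDropLongString=false" > "${conf}"
cat "${conf}"
# then: get_dmrpp -s "${conf}" ...
```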
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
The scenarios can be simply categorized as follows: the dmr++ file(s) are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL referencing the underlying granule file via http(s).&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default, Hyrax will automatically associate files whose names end with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs, then there are two steps (plus a conditional third) to serve the data:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as needed.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so the completed URL will have three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
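A quick sanity check for the rule above can be written in shell; this sketch only tests the URL's shape (scheme plus absolute path, three slashes in total) and uses the example path from this section:

```shell
# Check that a dmr++ data URL starts with "file://" followed by an
# absolute path, i.e. three slashes after the first colon.
url="file:///usr/share/hyrax/ghrsst/some_granule.h5"
case "${url}" in
    file:///*) status=ok ;;
    *)         status=bad ;;
esac
echo "${status}"   # ok
```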
&lt;br /&gt;
===Using dmr++ with the template string. (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs that are defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When encountering this type of URL, Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here].&lt;br /&gt;
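The path pattern above can be illustrated by composing a restified-path URL from its two mandatory components (this sketch omits the optional ShortName.Version segment; the concept-id and GranuleUR values come from the example, and no request is made):

```shell
# Compose a restified-path URL: /collections/{concept-id}/granules/{GranuleUR}
host="https://opendap.earthdata.nasa.gov"
concept_id="C1443727145-LAADS"
granule_ur="MOD08_D3.A2020308.061.2020309092644.hdf"
url="${host}/collections/${concept_id}/granules/${granule_ur}"
echo "${url}"
```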
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here: one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a  &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or, if you want more scripting flexibility, you can log in to the docker container to do the same:&lt;br /&gt;
### Login to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the regex \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section/step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server. (See the following sections for details). If the generated dmr++ files do not match expectations then the default configuration of the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, this is where understanding exactly which localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection becomes important. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b `pwd` -o &amp;quot;${dmrpp_file}&amp;quot; -u &amp;quot;file://&amp;quot;`pwd`&amp;quot;/${file}&amp;quot; &amp;quot;${file}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets, the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types, such as Int64, that are not part of the DAP2 specification or implementations, we will need to rely on eliding the instances of unmapped types, or return an error when one is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a fully qualified DAP4 name]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
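The fetch-and-compare steps above can be collected into a small reporting function. A sketch; &amp;lt;tt&amp;gt;compare_responses&amp;lt;/tt&amp;gt; is hypothetical and simply wraps &amp;lt;tt&amp;gt;cmp&amp;lt;/tt&amp;gt;:&lt;br /&gt;

```shell
# Compare two saved binary responses byte-for-byte and report the result.
compare_responses() {
    if cmp -s "$1" "$2"; then echo "MATCH"; else echo "DIFFER"; fi
}

# Typical use with the curl commands shown above:
#   curl -L -o dap4_subset_file  "${gf_url}.dap?dap4.ce=VARIABLE_NAME"
#   curl -L -o dap4_subset_dmrpp "${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME"
#   compare_responses dap4_subset_file dap4_subset_dmrpp
```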
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form: &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt; URL into the resulting dialog box.&lt;br /&gt;
---&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13548</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13548"/>
		<updated>2024-08-29T00:20:56Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Building DMR++ files for HDF4 and HDF4-EOS2 (experimental) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The dmr++ file is a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, without the need for an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable&#039;s data can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, the dmr++ enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. These are the principal storage layouts defined by the &#039;&#039;hdf5&#039;&#039; library; support for others can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human-readable assessment of the file showing the storage layout, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump information can be found here.]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
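Output like the above can also be scanned mechanically for filters outside the supported set (DEFLATE, SHUFFLE, FLETCHER32). A sketch; the &amp;lt;tt&amp;gt;check_filters&amp;lt;/tt&amp;gt; function is hypothetical:&lt;br /&gt;

```shell
# Read `h5dump -H -p` output on stdin and flag any compression filter
# that the dmr++ builder does not support.
check_filters() {
    grep -Eo 'COMPRESSION [A-Z0-9_]+|SHUFFLE|FLETCHER32' |
        awk '{ print $NF }' | sort -u |
        while read -r f; do
            case "$f" in
                DEFLATE|SHUFFLE|FLETCHER32) echo "supported: $f" ;;
                *)                          echo "UNSUPPORTED: $f" ;;
            esac
        done
}

# usage: h5dump -H -p chunked_gzipped_fourD.h5 | check_filters
```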
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (reported as &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (reported as &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
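If the &#039;&#039;netcdf&#039;&#039; library is not available, the same determination can be made from the file&#039;s magic number: &#039;&#039;netcdf-3&#039;&#039; files begin with the bytes &amp;quot;CDF&amp;quot;, while &#039;&#039;netcdf-4&#039;&#039; files are HDF5 containers that begin with the HDF5 signature. This sketch (the &amp;lt;tt&amp;gt;nc_kind&amp;lt;/tt&amp;gt; function is hypothetical) does not handle HDF5 files with a user block, where the signature is not at offset zero:&lt;br /&gt;

```shell
# Classify a netcdf file by its leading magic bytes.
nc_kind() {
    case "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" in
        434446*)  echo "netcdf-3 (classic)" ;;        # "CDF" + version byte
        89484446) echo "netcdf-4 (HDF5-based)" ;;     # 0x89 "HDF"
        *)        echo "unknown" ;;
    esac
}

# usage: nc_kind some_file.nc
```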
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the docker container we build for &#039;&#039;hyrax&#039;&#039; server/service. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code, and use the client that way (as part of a source code build) but it&#039;s much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy the HDF4 and/or HDF4-EOS2 files in that directory. Once you have the files in that directory, make an environment variable so it can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
 export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
 docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the container &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; from Docker Hub. The &#039;&#039;snapshot&#039;&#039; tag is the latest build of the container. It will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
The volume mount, from $HDF4_DIR to &#039;/usr/share/hyrax&#039; mounts the current directory of the host computer running the container to the directory &#039;&#039;/usr/share/hyrax&#039;&#039; inside the container. That directory is the root of the server&#039;s data tree. This means that the HDF4 files you copied into the HDF4_DIR directory will be accessible by the server running in the container. That will be useful for testing later on.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot.&#039;&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
 docker ps&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the running Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
 CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                                NAMES&lt;br /&gt;
 2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, 10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop containers, use&lt;br /&gt;
&lt;br /&gt;
 docker rm -f &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; for the one we just started, as shown in the output of &#039;&#039;docker ps&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. There is no need to stop the container now; I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At the end of this, I&#039;ll include a shell script that takes away many of these steps, but the script obscures some aspects of the command that you might want to tweak, so the following shows you all the details. Skip to &#039;&#039;&#039;Simple shell command&#039;&#039;&#039; to skip over these details.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
optional arguments:&lt;br /&gt;
  &lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now, working explicitly inside the container:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
starts the &#039;&#039;bash&#039;&#039; shell in the container, with the current directory as root (/)&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
 cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;[root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
[root@hyrax hyrax]# ls&lt;br /&gt;
3B42.19980101.00.7.HDF&lt;br /&gt;
3B42.19980101.03.7.HDF&lt;br /&gt;
3B42.19980101.06.7.HDF&lt;br /&gt;
&lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In that directory, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.20130111.09.7.HDF -u &#039;file:///usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory) using the &#039;&#039;&#039;-i&#039;&#039;&#039; option. The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the URL that follows it in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon, two from the way a URL names a protocol and one because the pathname starts at the root directory. Obscure, but it makes sense.&lt;br /&gt;
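A one-line helper makes the construction mechanical (the &amp;lt;tt&amp;gt;to_file_url&amp;lt;/tt&amp;gt; function is hypothetical, shown only to illustrate where the three slashes come from):&lt;br /&gt;

```shell
# Turn an absolute path into a file:// URL: two slashes from the URL
# scheme separator, a third from the leading "/" of the path itself.
to_file_url() { printf 'file://%s\n' "$1"; }

# usage: to_file_url /usr/share/hyrax/3B42.20130111.09.7.HDF
```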
&lt;br /&gt;
Building the DMR++ and embedding a &#039;&#039;file://&#039;&#039; URL will enable testing the DMR++.&lt;br /&gt;
====Using the server to examine data returned by the DMR++====&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at how the &#039;&#039;hyrax&#039;&#039; service will treat that data file using the DMR++. In a browser, go to &lt;br /&gt;
&lt;br /&gt;
 http://localhost:8080/opendap/&lt;br /&gt;
&lt;br /&gt;
[[File:Hyrax-including-new-DNRpp.png|200px|thumb|right|text-top|The running server shows the DMR++ as a dataset.]]&lt;br /&gt;
&lt;br /&gt;
Note: &#039;&#039;The server caches data catalog information for 5 minutes (configurable) so new items (e.g., DMR++ documents) may not show up right away. To force the display of a DMR++ that you just created, click on the source data file name and edit the URL so that the suffix &#039;&#039;&#039;.dmr.html&#039;&#039;&#039; is replaced by &#039;&#039;&#039;.dmrpp.dmr&#039;&#039;&#039;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Click on your equivalent of the &#039;&#039;&#039;3B42.20130111.09.7.HDF&#039;&#039;&#039; link, subset, download, and open the result in Panoply or the equivalent. [[File:Hyrax-subsetting.png|200px|thumb|right|text-top|Use the form interface to subset and get a response.]]&lt;br /&gt;
&lt;br /&gt;
You can run batch tests on lots of files by building many DMR++ documents and then asking the server for various responses (nc4, dap) from the DMR++ and the original file. Those responses could be compared using various schemes. Although a full treatment is beyond this section&#039;s scope, the command &#039;&#039;getdap4&#039;&#039; is also included in the container and could be used to compare &#039;&#039;dap&#039;&#039; responses from the data file and the DMR++ document.&lt;br /&gt;
&lt;br /&gt;
To the right is a comparison of the same underlying data: the left window shows the data returned using the DMR++, and the right shows the data read directly from the file using the server&#039;s builtin HDF4 reader. &lt;br /&gt;
[[File:Data-comparison.png|200px|thumb|right|text-top|Comparison of responses from a DMR++ and the native file handler.]]&lt;br /&gt;
&lt;br /&gt;
====Simple shell command====&lt;br /&gt;
&lt;br /&gt;
Here is a simple shell command that you can run on the host computer that will eliminate most of the above. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In the spirit of a recipe, I&#039;ll restate the earlier command for starting the docker container with the &#039;&#039;&#039;get_dmrpp_h4&#039;&#039;&#039; command and the &#039;&#039;&#039;hyrax&#039;&#039;&#039; server.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Start the container:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
Check that it is running:&lt;br /&gt;
&lt;br /&gt;
  docker ps&lt;br /&gt;
&lt;br /&gt;
The command, written for the Bourne Shell, is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# usage get_dmrpp_h4.sh &amp;lt;file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
data_root=/usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
cat &amp;lt;&amp;lt;EOF | docker exec --interactive hyrax sh&lt;br /&gt;
cd $data_root&lt;br /&gt;
get_dmrpp_h4 -i $1 -u &amp;quot;file://$data_root/$1&amp;quot;&lt;br /&gt;
EOF&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy that, save it in a file (I named the file &#039;&#039;get_dmrpp_h4.sh&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Run the command on the host (not in the docker container) and in the directory with the HDF4 files (you don&#039;t &#039;&#039;have&#039;&#039; to do that, but sorting out the details is left as an exercise for the reader ;-). For example:&lt;br /&gt;
&lt;br /&gt;
  ./get_dmrpp_h4.sh AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
&lt;br /&gt;
The DMR++ will appear when the command completes.&lt;br /&gt;
&lt;br /&gt;
  (hyrax500) hyrax_git/HDF4-dir % ls -l&lt;br /&gt;
  total 1251240&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff    1250778 Aug 22 22:31 AMSR_E_L2_Land_V09_200206191112_A.hdf&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff   20746207 Aug 22 22:32 AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
  -rw-r--r--  1 jimg  staff    3378674 Aug 28 17:37 AMSR_E_L3_SeaIce25km_V15_20020601.hdf.dmrpp&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, &#039;&#039;merge_dmrpp&#039;&#039; (which rely in turn on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section ). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039; and which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
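The runtime substitution amounts to a textual replacement of the template string with the real URL. Hyrax performs this internally when the dmr++ is evaluated; the sketch below (with a hypothetical &amp;lt;tt&amp;gt;inject_url&amp;lt;/tt&amp;gt; function) only illustrates the idea:&lt;br /&gt;

```shell
# Replace the placeholder template string in a dmr++ with the real data URL.
inject_url() {
    sed "s|OPeNDAP_DMRpp_DATA_ACCESS_URL|$1|g" "$2"
}

# usage: inject_url "https://example.org/granule.h5" granule.h5.dmrpp
```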
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will substitute a value at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate bes configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much like the site.conf file for a full server deployment, it will be loaded last and the settings therein will override the default configuration.&lt;br /&gt;
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode, prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode,  prints the DMR, the command and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3 hosted granules as inputs, and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3 hosted granules are supported by default.&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL it will check to see if the AWS CLI is configured and, if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from it, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well-known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039;, the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These can be categorized as follows:&lt;br /&gt;
The dmr++ file(s) are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL that references the underlying granule file via http(s).&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs then there are three steps (the third needed only when authentication is required) to serve the data:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as needed.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
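The AllowedHosts step can be sketched as follows. The bucket name is hypothetical, a local file stands in for /etc/bes/site.conf, and the &#039;&#039;AllowedHosts+=&#039;&#039; key syntax is the form used in the Hyrax documentation:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: add an AllowedHosts regex entry so Hyrax may fetch granules from
# a hypothetical S3 bucket. In a real deployment this line is appended to
# /etc/bes/site.conf; a local file is used here for illustration.
site_conf="site.conf"
echo 'AllowedHosts+=^https:\/\/my-bucket\.s3\.amazonaws\.com\/.*$' >> "${site_conf}"
cat "${site_conf}"
```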
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so that the completed URL will have three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
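Constructing such a URL from a path can be sketched in &#039;&#039;bash&#039;&#039;, using the example path from above:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: build a correctly formed file:// URL from a fully qualified path.
# Prefixing "file://" to an absolute path yields the required three slashes.
granule_path="/usr/share/hyrax/ghrsst/some_granule.h5"
granule_url="file://${granule_path}"
echo "${granule_url}"
# → file:///usr/share/hyrax/ghrsst/some_granule.h5
```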
&lt;br /&gt;
===Using dmr++ with the template string (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs that are defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When encountering this type of URL Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
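The substitution itself amounts to a simple textual replacement, which can be sketched as follows. The file name and data URL are hypothetical, and a one-line fragment stands in for a full dmr++ document:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: inject a real data URL into a dmr++ built with the template
# string. A one-line stand-in fragment is used here; a real dmr++ is a
# full XML document.
printf 'dmrpp:href="OPeNDAP_DMRpp_DATA_ACCESS_URL"\n' > example.dmrpp
data_url="https://example.com/data/granule.h5"
sed "s|OPeNDAP_DMRpp_DATA_ACCESS_URL|${data_url}|g" example.dmrpp > example_resolved.dmrpp
cat example_resolved.dmrpp
# → dmrpp:href="https://example.com/data/granule.h5"
```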
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here].&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here: one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a  &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or, if you want more scripting flexibility, you can log in to the docker container to do the same:&lt;br /&gt;
### Login to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for file names matching the glob pattern *.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section/step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server (see the following sections for details). If the generated dmr++ files do not match expectations then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, this is where it is important to understand exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b `pwd` -o &amp;quot;${dmrpp_file}&amp;quot; -u &amp;quot;file://&amp;quot;`pwd`&amp;quot;/${file}&amp;quot; &amp;quot;${file}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
Test and assurance procedures for both the DAP4 and DAP2 protocols are given below, and both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. More modern datasets may contain DAP4 types, such as Int64, that are not part of the DAP2 specification or its implementations; for those, the server must either elide the instances of unmapped types or return an error when one is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmrpp.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a fully qualified DAP4 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form: &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt; URL into the resulting dialog box.&lt;br /&gt;
---&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13547</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13547"/>
		<updated>2024-08-29T00:13:39Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Building DMR++ files for HDF4 and HDF4-EOS2 (experimental) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The dmr++ file is a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that the data can be directly accessed, without the need for an intermediate library API, by software that uses the location information. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable&#039;s values can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, dmr++-enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
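The kind of access this enables can be sketched with local byte offsets. The file content, offset, and length below are fabricated for illustration; over HTTP the same fetch is done with a Range request:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: a dmr++ records each chunk's byte offset and length, so a reader
# can pull just those bytes instead of the whole file. Simulated with dd on
# a tiny local stand-in file; offset 6 and length 7 select the "VARDATA"
# bytes out of the 20-byte file.
printf 'headerVARDATAtrailer' > granule.bin
dd if=granule.bin of=chunk.bin bs=1 skip=6 count=7 2>/dev/null
cat chunk.bin
# → VARDATA
```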
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts; support for other layouts can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human-readable assessment of the file that shows the storage layout, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump info can be found here.]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (reported as &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (reported as &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
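The kind strings that &#039;&#039;ncdump -k&#039;&#039; prints can be mapped to the two families as sketched below. The strings &amp;quot;classic&amp;quot;, &amp;quot;64-bit offset&amp;quot;, &amp;quot;netCDF-4&amp;quot;, and &amp;quot;netCDF-4 classic model&amp;quot; are the kinds the &#039;&#039;netcdf&#039;&#039; library documents; the helper function itself is hypothetical:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: classify a netcdf file from the kind string printed by `ncdump -k`.
# "classic" and "64-bit offset" are netcdf-3 family; "netCDF-4" and
# "netCDF-4 classic model" are netcdf-4 (hdf5-based) files.
classify_kind() {
    case "$1" in
        classic|"64-bit offset") echo "netcdf-3" ;;
        "netCDF-4"|"netCDF-4 classic model") echo "netcdf-4" ;;
        *) echo "unknown" ;;
    esac
}
classify_kind "classic"     # → netcdf-3
classify_kind "netCDF-4"    # → netcdf-4
```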
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the docker container we build for &#039;&#039;hyrax&#039;&#039; server/service. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code, and use the client that way (as part of a source code build) but it&#039;s much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy the HDF4 and/or HDF4-EOS2 files in that directory. Once you have the files in that directory, make an environment variable so it can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
 export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
 docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the container &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; from Docker Hub. The &#039;&#039;snapshot&#039;&#039; tag is the latest build of the container. Docker will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
The volume mount, from $HDF4_DIR to &#039;&#039;/usr/share/hyrax&#039;&#039;, mounts the data directory on the host computer to the directory &#039;&#039;/usr/share/hyrax&#039;&#039; inside the container. That directory is the root of the server&#039;s data tree. This means that the HDF4 files you copied into the HDF4_DIR directory will be accessible by the server running in the container. That will be useful for testing later on.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot.&#039;&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
 docker ps&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the running Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
 CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                               NAMES&lt;br /&gt;
 2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, 10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop containers, use&lt;br /&gt;
&lt;br /&gt;
 docker rm -f &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; for the one we just started, as shown in the output of &#039;&#039;docker ps&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. No need to stop the container now; I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At the end of this, I&#039;ll include a shell script that takes away many of these steps, but the script obscures some aspects of the command that you might want to tweak, so the following shows you all the details. Skip to &#039;&#039;&#039;Simple shell command&#039;&#039;&#039; to skip over these details.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
optional arguments:&lt;br /&gt;
  &lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now by working directly inside the container:&lt;br /&gt;
&lt;br /&gt;
 docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
starts the &#039;&#039;bash&#039;&#039; shell in the container, with the root directory (/) as the current directory:&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
 cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;[root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
[root@hyrax hyrax]# ls&lt;br /&gt;
3B42.19980101.00.7.HDF&lt;br /&gt;
3B42.19980101.03.7.HDF&lt;br /&gt;
3B42.19980101.06.7.HDF&lt;br /&gt;
&lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In that directory, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
 [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.20130111.09.7.HDF -u &#039;file:///usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory) using the &#039;&#039;&#039;-i&#039;&#039;&#039; option. The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the URL that follows it in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon, two from the way a URL names a protocol and one because the pathname starts at the root directory. Obscure, but it makes sense.&lt;br /&gt;
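To make the anatomy concrete, here is a minimal shell sketch (the granule path is just an example) that builds such a URL from an absolute path:&lt;br /&gt;

```shell
# Build a file:// URL from an absolute path. The scheme contributes two
# slashes and the absolute path contributes the third.
granule="/usr/share/hyrax/3B42.19980101.00.7.HDF"   # example path
url="file://${granule}"
printf '%s\n' "${url}"   # -> file:///usr/share/hyrax/3B42.19980101.00.7.HDF
```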
&lt;br /&gt;
Building the DMR++ and embedding a &#039;&#039;file://&#039;&#039; URL will enable testing the DMR++.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at how the &#039;&#039;hyrax&#039;&#039; service will treat that data file using the DMR++. In a browser, go to &lt;br /&gt;
&lt;br /&gt;
 http://localhost:8080/opendap/&lt;br /&gt;
&lt;br /&gt;
You will see [[File:Hyrax-including-new-DNRpp.png|200px|thumb|right|text-bottom|The running server shows the DMR++ as a dataset.]]&lt;br /&gt;
&lt;br /&gt;
Click on your equivalent of the &#039;&#039;&#039;3B42.20130111.09.7.HDF&#039;&#039;&#039; link, subset, download and open the result in Panoply or the equivalent. [[File:Hyrax-subsetting.png|200px|thumb|right|text-bottom|Use the form interface to subset and get a response.]]&lt;br /&gt;
&lt;br /&gt;
You can run batch tests on lots of files by building many DMR++ documents and then asking the server for various responses (nc4, dap) from both the DMR++ and the original file. Those responses could be compared using various schemes; although doing that in its entirety is beyond this section&#039;s scope, the command &#039;&#039;getdap4&#039;&#039; is also included in the container and could be used to compare &#039;&#039;dap&#039;&#039; responses from the data file and the DMR++ document.&lt;br /&gt;
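As a sketch of one such comparison scheme (assuming the container&#039;s server is running at localhost:8080 and that the granule and its DMR++ both sit in the served directory; names here are placeholders), fetch both responses and compare them byte for byte:&lt;br /&gt;

```shell
# Compare two saved DAP responses byte for byte; prints MATCH or DIFFER.
compare_dap() {
    if cmp -s "$1" "$2"; then
        echo "MATCH"
    else
        echo "DIFFER"
    fi
}

# Fetch the response built directly from the file and the one built from
# the DMR++ (server location and granule name are assumptions):
# curl -s -L -o from_file.dap  "http://localhost:8080/opendap/3B42.19980101.00.7.HDF.dap"
# curl -s -L -o from_dmrpp.dap "http://localhost:8080/opendap/3B42.19980101.00.7.HDF.dmrpp.dap"
# compare_dap from_file.dap from_dmrpp.dap
```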
&lt;br /&gt;
Here is a comparison of the same underlying data: the left window shows the data returned using the DMR++; the right shows the data read directly from the file using the server&#039;s built-in HDF4 reader. &lt;br /&gt;
[[File:Data-comparison.png|200px|thumb|right|text-bottom|Comparison of responses from a DMR++ and the native file handler.]]&lt;br /&gt;
&lt;br /&gt;
====Simple shell command====&lt;br /&gt;
&lt;br /&gt;
Here is a simple shell command that you can run on the host computer that will eliminate most of the above. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In the spirit of a recipe, I&#039;ll restate the earlier command for starting the docker container with the &#039;&#039;&#039;get_dmrpp_h4&#039;&#039;&#039; command and the &#039;&#039;&#039;hyrax&#039;&#039;&#039; server.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Start the container:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
Check that it is running:&lt;br /&gt;
&lt;br /&gt;
  docker ps&lt;br /&gt;
&lt;br /&gt;
The command, written for &#039;&#039;bash&#039;&#039;, is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# usage get_dmrpp_h4.sh &amp;lt;file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
data_root=/usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
cat &amp;lt;&amp;lt;EOF | docker exec --interactive hyrax sh&lt;br /&gt;
cd $data_root&lt;br /&gt;
get_dmrpp_h4 -i $1 -u &amp;quot;file://$data_root/$1&amp;quot;&lt;br /&gt;
EOF&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy that, save it in a file (I named the file &#039;&#039;get_dmrpp_h4.sh&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Run the script on the host (not in the docker container) and in the directory with the HDF4 files (you don&#039;t &#039;&#039;have&#039;&#039; to run it there, but sorting out the details otherwise is left as an exercise for the reader ;-). Run it like this:&lt;br /&gt;
&lt;br /&gt;
  ./get_dmrpp_h4.sh AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
&lt;br /&gt;
The DMR++ will appear when the command completes.&lt;br /&gt;
&lt;br /&gt;
  ls -l&lt;br /&gt;
&lt;br /&gt;
shows&lt;br /&gt;
&lt;br /&gt;
  (hyrax500) hyrax_git/HDF4-dir % ls -l&lt;br /&gt;
  total 1251240&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff    1250778 Aug 22 22:31 AMSR_E_L2_Land_V09_200206191112_A.hdf&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff   20746207 Aug 22 22:32 AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
  -rw-r--r--  1 jimg  staff    3378674 Aug 28 17:37 AMSR_E_L3_SeaIce25km_V15_20020601.hdf.dmrpp&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, and &#039;&#039;merge_dmrpp&#039;&#039; (which rely on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039; which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will substitute a value at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate bes configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much like the site.conf file in a full server deployment, it will be loaded last and the settings therein will override the default configuration.&lt;br /&gt;
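Putting the input options together, a typical invocation looks like the following sketch (the file names are placeholders, and the echo makes it a dry run):&lt;br /&gt;

```shell
# Assemble a get_dmrpp invocation from the options described above.
data_root="$(pwd)"            # -b: top level data directory
granule="some_granule.h5"     # example input file under ${data_root}
site_conf="site.conf"         # -s: optional addendum configuration

cmd="get_dmrpp -b ${data_root} -s ${site_conf} -u file://${data_root}/${granule} -o ${granule}.dmrpp ${granule}"
echo "${cmd}"                 # dry run; run ${cmd} to actually build the dmr++
```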
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode, prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode, prints the DMR, the command and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++.&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3 hosted granules as inputs, and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3 Hosted granules are supported by default.&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check to see if the AWS CLI is configured; if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from it, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039;, it must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
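Any of these can be changed for a single run by writing a small addendum configuration and passing it with &#039;&#039;&#039;-s&#039;&#039;&#039; (the file name below is arbitrary):&lt;br /&gt;

```shell
# Write a one-line addendum configuration that overrides a default, then
# hand it to get_dmrpp via -s. Only the settings in this file change.
printf '%s\n' 'H5.EnableCF=true' > h5_overrides.conf

# Example invocation (paths are placeholders):
# get_dmrpp -s h5_overrides.conf -b `pwd` -u "file://`pwd`/file.h5" -o file.h5.dmrpp file.h5
```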
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These can be simply categorized as follows.&lt;br /&gt;
The dmr++ file(s) are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL referencing the underlying granule file via http(s).&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs then there are two steps (plus a possible third) to serve the data:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as needed.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
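As a sketch, an AllowedHosts entry might look like the following. The host regex is a placeholder and the &#039;+=&#039; list syntax is assumed to follow the usual BES configuration convention; write the entry locally and then install it into /etc/bes/site.conf on the server.&lt;br /&gt;

```shell
# One AllowedHosts entry per data host; the BES treats these as regular
# expressions. The host below is an example only.
printf '%s\n' 'AllowedHosts+=^https:\/\/data\.example\.com\/.*$' > allowed_hosts.conf

# Then, on the server (requires appropriate permissions):
# cat allowed_hosts.conf >> /etc/bes/site.conf
```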
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so that the completed URL will have three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with the template string. (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs that are defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When encountering this type of URL Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here].&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here, one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want, you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or, if you want more scripting flexibility, you can log in to the docker container to do the same:&lt;br /&gt;
### Log in to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the regex \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section/step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server. (See the following sections for details.) If the generated dmr++ files do not match expectations then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, this is where it is important to understand exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b `pwd` -o &amp;quot;${dmrpp_file}&amp;quot; -u &amp;quot;file://&amp;quot;`pwd`&amp;quot;/${file}&amp;quot; &amp;quot;${file}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types, such as Int64, that are not part of the DAP2 specification or implementations, we will need to rely on eliding the instances of unmapped types, or on returning an error when one is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
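A quick way to check the embedded URL without opening each file is to grep for the attribute (the helper name here is ours):&lt;br /&gt;

```shell
# Print the first dmrpp:href attribute found in a dmr++ file.
extract_href() {
    grep -o 'dmrpp:href="[^"]*"' "$1" | head -1
}

# Example: extract_href "${GRANULE_FILE}.dmrpp"
```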
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a full qualified DAP4 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
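The three commands above can be wrapped in a small helper. A sketch in bash, assuming the server and gf_url from the Test Constants; the function names are ours:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: compare the DAP4 data response built from the source file with
# the one built from the dmr++, for one variable.
# gf_url is assumed to be set as in the Test Constants above.

# Succeed when two files are byte-for-byte identical.
same_bytes() {
    cmp -s "$1" "$2"
}

# Hypothetical helper: fetch both responses and compare them.
check_dap4_var() {
    local var="$1"
    curl -s -L -o dap4_subset_file  "${gf_url}.dap?dap4.ce=${var}"
    curl -s -L -o dap4_subset_dmrpp "${gf_url}.dmrpp.dap?dap4.ce=${var}"
    if same_bytes dap4_subset_file dap4_subset_dmrpp; then
        echo "OK: ${var}"
    else
        echo "MISMATCH: ${var}"
    fi
}

# Example (against a running server):
# check_dap4_var /some_group/some_variable
```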
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
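Following the note above, a loop over several variables can be sketched in bash. The variable names are placeholders, and gf_url is assumed set as in the Test Constants:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: compare DAP2 .dods responses from the source file and the dmr++
# for a list of variables. The names below are placeholders.

# Replace with real DAP2 variable names from the dataset.
variables="var_one var_two"

compare_dods() {
    local var="$1"
    curl -s -L -o "dap2_${var}_file"  "${gf_url}.dods?${var}"
    curl -s -L -o "dap2_${var}_dmrpp" "${gf_url}.dmrpp.dods?${var}"
    cmp -s "dap2_${var}_file" "dap2_${var}_dmrpp"
}

# Example driver (uncomment against a running server):
# for v in $variables; do
#     compare_dods "$v" && echo "OK: $v" || echo "MISMATCH: $v"
# done
```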
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the dataset URL &amp;lt;tt&amp;gt;${gf_url}&amp;lt;/tt&amp;gt; (the form URL without the &amp;lt;tt&amp;gt;.html&amp;lt;/tt&amp;gt; suffix) into the resulting dialog box.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=File:Data-comparison.png&amp;diff=13546</id>
		<title>File:Data-comparison.png</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=File:Data-comparison.png&amp;diff=13546"/>
		<updated>2024-08-29T00:11:19Z</updated>

		<summary type="html">&lt;p&gt;Jimg: Data subset using DMR++ and the native file reader (HDF4-EOS2), compared using Panoply. The data were subset and the response was a netCDF4 file in each case.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Data subset using DMR++ and the native file reader (HDF4-EOS2), compared using Panoply. The data were subset and the response was a netCDF4 file in each case.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13545</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13545"/>
		<updated>2024-08-28T23:59:59Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Building DMR++ files for HDF4 and HDF4-EOS2 (experimental) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; file provides a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, without the need for an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable&#039;s data can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, dmr++-enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. These are all of the storage layouts defined by the &#039;&#039;hdf5&#039;&#039; library, but others can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human-readable assessment of the file that shows the storage layouts, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump info can be found here.]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
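Output like the above can be scanned mechanically. Here is a sketch in bash that flags compression filters outside the supported set (DEFLATE, SHUFFLE, FLETCHER32); it works on output captured from h5dump -H -p, and the function name is ours:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: scan captured `h5dump -H -p` output for compression filters
# other than the ones the dmr++ software supports.
# Usage: h5dump -H -p file.h5 > header.txt ; unsupported_filters header.txt

unsupported_filters() {
    # Print any COMPRESSION lines that name something other than
    # DEFLATE, SHUFFLE, or FLETCHER32. Exits non-zero when none found.
    grep 'COMPRESSION' "$1" | grep -v -E 'DEFLATE|SHUFFLE|FLETCHER32'
}
```

If &#039;&#039;unsupported_filters&#039;&#039; prints nothing (and exits non-zero), no unexpected compression filters were seen in the header dump.&lt;br /&gt;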
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (the command prints &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (it prints &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
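The check can be folded into a small helper. A sketch in bash that classifies the kind string that &#039;&#039;ncdump -k&#039;&#039; prints; the function name is ours, and the commented example requires the netcdf tools on the PATH:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: decide whether a file is netcdf-4 (hdf5-based, a dmr++ candidate)
# from the kind string that `ncdump -k` prints.

is_netcdf4_kind() {
    # `ncdump -k` prints "classic" or "64-bit offset" for netcdf-3 files,
    # and "netCDF-4" or "netCDF-4 classic model" for netcdf-4 files.
    case "$1" in
        netCDF-4*) return 0 ;;
        *)         return 1 ;;
    esac
}

# Example usage (requires the netcdf library tools):
# kind=$(ncdump -k some_file.nc)
# is_netcdf4_kind "$kind" && echo "netcdf-4" || echo "netcdf-3"
```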
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the docker container we build for the &#039;&#039;hyrax&#039;&#039; server. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code and use the tool that way (as part of a source code build), but that is much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy the HDF4 and/or HDF4-EOS2 files in that directory. Once you have the files in that directory, make an environment variable so it can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
  export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the container &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; from Docker Hub. The &#039;&#039;snapshot&#039;&#039; tag is the latest build of the container. It will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
The volume option (&#039;&#039;-v&#039;&#039;) mounts the $HDF4_DIR directory of the host computer into the container at &#039;&#039;/usr/share/hyrax&#039;&#039;. That directory is the root of the server&#039;s data tree. This means that the HDF4 files you copied into the HDF4_DIR directory will be accessible by the server running in the container. That will be useful for testing later on.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot.&#039;&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
  docker ps&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the running Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
  CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                              NAMES&lt;br /&gt;
  2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, 10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop containers, use&lt;br /&gt;
&lt;br /&gt;
  docker rm -f &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; of the one we just started, shown in the output of &#039;&#039;docker ps&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. There is no need to stop the container now; I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At the end of this, I&#039;ll include a shell script that automates many of these steps, but the script obscures some aspects of the command that you might want to tweak, so the following shows you all the details. Jump to &#039;&#039;&#039;Simple shell command&#039;&#039;&#039; if you want to skip over these details.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
  &amp;lt;nowiki&amp;gt;usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
  Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
  file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
  optional arguments:&lt;br /&gt;
  &lt;br /&gt;
  ...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now, by explicitly using the container:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
starts the &#039;&#039;bash&#039;&#039; shell in the container, with the current directory as root (/)&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
  cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;nowiki&amp;gt;[root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
  [root@hyrax hyrax]# ls&lt;br /&gt;
  3B42.19980101.00.7.HDF&lt;br /&gt;
  3B42.19980101.03.7.HDF&lt;br /&gt;
  3B42.19980101.06.7.HDF&lt;br /&gt;
&lt;br /&gt;
  ...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In that directory, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.20130111.09.7.HDF -u &#039;file:///usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory) using the &#039;&#039;&#039;-i&#039;&#039;&#039; option. The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the URL that follows it in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon, two from the way a URL names a protocol and one because the pathname starts at the root directory. Obscure, but it makes sense.&lt;br /&gt;
&lt;br /&gt;
Building the DMR++ and embedding a &#039;&#039;file://&#039;&#039; URL will enable testing the DMR++.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at how the &#039;&#039;hyrax&#039;&#039; service will treat that data file using the DMR++. In a browser, go to &lt;br /&gt;
&lt;br /&gt;
  http://localhost:8080/opendap/&lt;br /&gt;
&lt;br /&gt;
You will see [[File:Hyrax-including-new-DNRpp.png|200px|thumb|right|The running server shows the DMR++ as a dataset.]]&lt;br /&gt;
&lt;br /&gt;
Click on your equivalent of the &#039;&#039;&#039;3B42.20130111.09.7.HDF&#039;&#039;&#039; link, subset, download, and open in Panoply or the equivalent. [[File:Hyrax-subsetting.png|200px|thumb|right|Use the form interface to subset and get a response.]]&lt;br /&gt;
&lt;br /&gt;
You can run batch tests on lots of files by building many DMR++ documents and then asking the server for various responses (nc4, dap) from the DMR++ and from the original file. Those responses could be compared using various schemes; although covering that in its entirety is beyond this section&#039;s scope, the command &#039;&#039;getdap4&#039;&#039; is also included in the container and could be used to compare &#039;&#039;dap&#039;&#039; responses from the data file and the DMR++ document.&lt;br /&gt;
&lt;br /&gt;
====Simple shell command====&lt;br /&gt;
&lt;br /&gt;
Here is a simple shell command that you can run on the host computer that will eliminate most of the above. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In the spirit of a recipe, I&#039;ll restate the earlier command for starting the docker container with the &#039;&#039;&#039;get_dmrpp_h4&#039;&#039;&#039; command and the &#039;&#039;&#039;hyrax&#039;&#039;&#039; server.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Start the container:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
Is it running:&lt;br /&gt;
&lt;br /&gt;
  docker ps&lt;br /&gt;
&lt;br /&gt;
The script, written for &#039;&#039;bash&#039;&#039;, is:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;nowiki&amp;gt;#!/bin/bash&lt;br /&gt;
  #&lt;br /&gt;
  # usage get_dmrpp_h4.sh &amp;lt;file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  data_root=/usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
  cat &amp;lt;&amp;lt;EOF | docker exec --interactive hyrax sh&lt;br /&gt;
  cd $data_root&lt;br /&gt;
  get_dmrpp_h4 -i $1 -u &amp;quot;file://$data_root/$1&amp;quot;&lt;br /&gt;
  EOF&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy that, save it in a file (I named the file &#039;&#039;get_dmrpp_h4.sh&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Run the command on the host (not the docker container) and in the directory with the HDF4 files (you don&#039;t &#039;&#039;have&#039;&#039; to do that, but sorting out the details is left as an exercise for the reader ;-). Run the command like this:&lt;br /&gt;
&lt;br /&gt;
  ./get_dmrpp_h4.sh AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
&lt;br /&gt;
The DMR++ will appear when the command completes.&lt;br /&gt;
&lt;br /&gt;
  ls -l&lt;br /&gt;
&lt;br /&gt;
shows&lt;br /&gt;
&lt;br /&gt;
  (hyrax500) hyrax_git/HDF4-dir % ls -l&lt;br /&gt;
  total 1251240&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff    1250778 Aug 22 22:31 AMSR_E_L2_Land_V09_200206191112_A.hdf&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff   20746207 Aug 22 22:32 AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
  -rw-r--r--  1 jimg  staff    3378674 Aug 28 17:37 AMSR_E_L3_SeaIce25km_V15_20020601.hdf.dmrpp&lt;br /&gt;
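To process every HDF4 file in a directory, the script saved earlier can be driven by a loop. A sketch in bash; the helper name is ours, and the commented driver assumes the container is running:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: build a dmr++ for every HDF4 file in the current directory by
# repeatedly invoking the get_dmrpp_h4.sh script shown above.

# Collect candidate files; extend the pattern list as needed.
list_hdf4_files() {
    ls *.hdf *.HDF 2>/dev/null
}

# Example driver (run on the host, in the HDF4 directory):
# for f in $(list_hdf4_files); do
#     ./get_dmrpp_h4.sh "$f"
# done
```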
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, and &#039;&#039;merge_dmrpp&#039;&#039; (which in turn rely on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039; that can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and a real URL will be substituted at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate bes configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much like the site.conf file for a full server deployment, it will be loaded last and the settings therein will override the default configuration.&lt;br /&gt;
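Putting the input options together, a typical invocation can be sketched in bash. All file and path names here are placeholders, and the command-builder function is ours, used only to make the pieces easy to see:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: assemble a typical get_dmrpp invocation from the options above.
# All names here are placeholders.

build_get_dmrpp_cmd() {
    local data_root="$1" granule="$2" out="$3"
    # -b: data root, -o: output file, -u: file:// URL to the granule.
    printf 'get_dmrpp -b %s -o %s -u file://%s/%s %s' \
        "$data_root" "$out" "$data_root" "$granule" "$granule"
}

# Example (run from the data directory, with get_dmrpp installed):
# eval "$(build_get_dmrpp_cmd "$(pwd)" some_name.h5 some_name.h5.dmrpp)"
```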
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode, prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode,  prints the DMR, the command and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3 hosted granules as inputs, and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3 Hosted granules are supported by default,&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check whether the AWS CLI is configured; if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from it, and copy the resulting dmr++ file (as named by the -o option) to the source S3 bucket using the well known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039; the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
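Settings like these can be overridden with the -s addendum file described above. A sketch in bash that writes a minimal addendum; the file name and the particular settings are illustrative, not a recommendation:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: write a small configuration addendum for use with `get_dmrpp -s`.
# The settings chosen here are illustrative only.

write_addendum() {
    cat > "$1" <<'EOF'
H5.EnableCF=false
H5.EnableDMR64bitInt=true
EOF
}

# Example:
# write_addendum my_site.conf
# get_dmrpp -s my_site.conf -b "$(pwd)" -o some_name.h5.dmrpp some_name.h5
```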
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These scenarios can be categorized as follows:&lt;br /&gt;
The dmr++ file(s) are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL that references the underlying granule file via http(s).&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
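&lt;br /&gt;
For illustration, the opening of a dmr++ file looks roughly like the following sketch (the granule name and URL are placeholders, and the exact namespaces and attributes may differ with the version of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; used):&lt;br /&gt;
 &amp;lt;Dataset xmlns=&amp;quot;http://xml.opendap.org/ns/DAP/4.0#&amp;quot;&lt;br /&gt;
          xmlns:dmrpp=&amp;quot;http://xml.opendap.org/dap/dmrpp/1.0.0#&amp;quot;&lt;br /&gt;
          name=&amp;quot;some_granule.h5&amp;quot;&lt;br /&gt;
          dmrpp:href=&amp;quot;https://your.server/data/some_granule.h5&amp;quot;&amp;gt;&lt;br /&gt;
    ...&lt;br /&gt;
 &amp;lt;/Dataset&amp;gt;&lt;br /&gt;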
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs then there are two steps to serve the data, plus a third if authentication is required:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as needed.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
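&lt;br /&gt;
As a sketch, such an AllowedHosts entry in /etc/bes/site.conf might look like the following, where the host and path are placeholders to be replaced with a regex matching your data URLs:&lt;br /&gt;
 AllowedHosts+=^https:\/\/data\.example\.com\/.*$&lt;br /&gt;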
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so that the completed URL will have three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with the template string (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs that are defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When encountering this type of URL Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here].&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here, one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a  &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or, if you want more scripting flexibility, you can log in to the docker container to do the same:&lt;br /&gt;
### Log in to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the regex \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section/step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server. (See the following sections for details.) If the generated dmr++ files do not match expectations then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, it is important to understand exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b `pwd` -o &amp;quot;${dmrpp_file}&amp;quot; -u &amp;quot;file://&amp;quot;`pwd`&amp;quot;/${file}&amp;quot; &amp;quot;${file}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types, such as Int64, that are not part of the DAP2 specification or implementations, we will need to rely on eliding the instances of unmapped types, or on returning an error when such a type is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a fully qualified DAP4 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
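&lt;br /&gt;
That can be sketched as a small loop, reusing the &amp;lt;tt&amp;gt;gf_url&amp;lt;/tt&amp;gt; constant defined above (the variable names here are placeholders):&lt;br /&gt;
 for v in VARIABLE_ONE VARIABLE_TWO&lt;br /&gt;
 do&lt;br /&gt;
     curl -s -L -o &amp;quot;file_${v}.dods&amp;quot; &amp;quot;${gf_url}.dods?${v}&amp;quot;&lt;br /&gt;
     curl -s -L -o &amp;quot;dmrpp_${v}.dods&amp;quot; &amp;quot;${gf_url}.dmrpp.dods?${v}&amp;quot;&lt;br /&gt;
     cmp &amp;quot;file_${v}.dods&amp;quot; &amp;quot;dmrpp_${v}.dods&amp;quot; &amp;amp;&amp;amp; echo &amp;quot;${v}: OK&amp;quot;&lt;br /&gt;
 done&lt;br /&gt;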
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt; into the resulting dialog box.&lt;br /&gt;
---&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13544</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13544"/>
		<updated>2024-08-28T23:43:19Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Building DMR++ files for HDF4 and HDF4-EOS2 (experimental) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; file provides a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, using only the location information, without the need for an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, the dmr++ enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. These are all of the storage layouts defined by the &#039;&#039;hdf5&#039;&#039; library, but others can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human-readable assessment of the file that shows the storage layouts, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump info can be found here.]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (reported as &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (reported as &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
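&lt;br /&gt;
For example, for a hypothetical &#039;&#039;netcdf-4&#039;&#039; file the command reports:&lt;br /&gt;
 $ ncdump -k some_file.nc&lt;br /&gt;
 netCDF-4&lt;br /&gt;
while a classic &#039;&#039;netcdf-3&#039;&#039; file is reported as &amp;quot;classic&amp;quot;.&lt;br /&gt;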
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the docker container we build for &#039;&#039;hyrax&#039;&#039; server/service. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code, and use the client that way (as part of a source code build) but it&#039;s much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy the HDF4 and/or HDF4-EOS2 files in that directory. Once you have the files in that directory, make an environment variable so it can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
  export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the container &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; from Docker Hub. The &#039;&#039;snapshot&#039;&#039; tag is the latest build of the container. It will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
The volume mount, from $HDF4_DIR to &#039;/usr/share/hyrax&#039; mounts the current directory of the host computer running the container to the directory &#039;&#039;/usr/share/hyrax&#039;&#039; inside the container. That directory is the root of the server&#039;s data tree. This means that the HDF4 files you copied into the HDF4_DIR directory will be accessible by the server running in the container. That will be useful for testing later on.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot.&#039;&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
  docker ps&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the running Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
  CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                              &lt;br /&gt;
  NAMES&lt;br /&gt;
  2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, &lt;br /&gt;
  10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop containers, use&lt;br /&gt;
&lt;br /&gt;
  docker rm -f &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; for the container we just started, shown in the output of &#039;&#039;docker ps&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. There is no need to stop the container now; I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At the end of this, I&#039;ll include a shell script that takes away many of these steps, but the script obscures some aspects of the command that you might want to tweak, so the following shows you all the details. Skip to &#039;&#039;&#039;Simple shell command&#039;&#039;&#039; to skip over these details.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
  usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
  Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
  file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
  optional arguments:&lt;br /&gt;
  &lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now, by explicitly using the container:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
starts the &#039;&#039;bash&#039;&#039; shell in the container, with the current directory set to the container&#039;s root (/):&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
  cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
  [root@hyrax hyrax]# ls&lt;br /&gt;
  3B42.19980101.00.7.HDF&lt;br /&gt;
  3B42.19980101.03.7.HDF&lt;br /&gt;
  3B42.19980101.06.7.HDF&lt;br /&gt;
&lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
In that directory, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.20130111.09.7.HDF -u &#039;file:///usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory) using the &#039;&#039;&#039;-i&#039;&#039;&#039; option. The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the URL that follows it in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.20130111.09.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon, two from the way a URL names a protocol and one because the pathname starts at the root directory. Obscure, but it makes sense.&lt;br /&gt;
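The three-slash pattern is easy to see if you build the URL from an absolute path in the shell (a small sketch; the path is the example file from above):&lt;br /&gt;

```shell
# Build a file:// URL from an absolute path. Two of the three slashes come
# from the "file://" scheme and the third from the leading "/" of the path.
path=/usr/share/hyrax/3B42.20130111.09.7.HDF
url="file://${path}"
echo "${url}"   # file:///usr/share/hyrax/3B42.20130111.09.7.HDF
```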
&lt;br /&gt;
Building the DMR++ and embedding a &#039;&#039;file://&#039;&#039; URL will enable testing the DMR++.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at how the &#039;&#039;hyrax&#039;&#039; service will treat that data file using the DMR++. In a browser, go to &lt;br /&gt;
&lt;br /&gt;
  http://localhost:8080/opendap/&lt;br /&gt;
&lt;br /&gt;
You will see [[File:Hyrax-including-new-DNRpp.png|200px|thumb|right|The running server shows the DMR++ as a dataset.]]&lt;br /&gt;
&lt;br /&gt;
Click on your equivalent of the &#039;&#039;&#039;3B42.20130111.09.7.HDF&#039;&#039;&#039; link, subset, download and open the result in Panoply or an equivalent tool. [[File:Hyrax-subsetting.png|200px|thumb|right|Use the form interface to subset and get a response.]]&lt;br /&gt;
&lt;br /&gt;
You can run batch tests on many files by building many DMR++ documents, asking the server for various responses (nc4, dap) from each DMR++ and its original file, and then comparing the results. A full treatment of that is beyond this section&#039;s scope, but note that the command &#039;&#039;getdap4&#039;&#039; is also included in the container and can be used to compare &#039;&#039;dap&#039;&#039; responses from the data file and the DMR++ document.&lt;br /&gt;
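To make the comparison step concrete, here is a minimal sketch of the byte-for-byte check. The two stand-in files are hypothetical; in practice they would be responses captured from the original data file and from the DMR++ (for example with &#039;&#039;getdap4&#039;&#039;):&lt;br /&gt;

```shell
# Stand-in responses; in practice these would be captured DAP responses.
printf 'response-bytes' > from_file.dap
printf 'response-bytes' > from_dmrpp.dap

# cmp -s exits 0 only when the two files are byte-for-byte identical.
if cmp -s from_file.dap from_dmrpp.dap; then
    echo "MATCH"
else
    echo "DIFFER"
fi
```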
&lt;br /&gt;
====Simple shell command====&lt;br /&gt;
&lt;br /&gt;
Here is a simple shell command that you can run on the host computer that will eliminate most of the above. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In the spirit of a recipe, I&#039;ll restate the earlier command for starting the docker container with the &#039;&#039;&#039;get_dmrpp_h4&#039;&#039;&#039; command and the &#039;&#039;&#039;hyrax&#039;&#039;&#039; server.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Start the container:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
Check that it is running:&lt;br /&gt;
&lt;br /&gt;
  docker ps&lt;br /&gt;
&lt;br /&gt;
The command, written for &#039;&#039;bash&#039;&#039;, is:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #&lt;br /&gt;
  # usage get_dmrpp_h4.sh &amp;lt;file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  data_root=/usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
  cat &amp;lt;&amp;lt;EOF | docker exec --interactive hyrax sh&lt;br /&gt;
  cd $data_root&lt;br /&gt;
  get_dmrpp_h4 -i $1 -u &amp;quot;file://$data_root/$1&amp;quot;&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
Copy that, save it in a file (I named the file &#039;&#039;get_dmrpp_h4.sh&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Run the command on the host (not in the docker container), from the directory that holds the HDF4 files (you don&#039;t &#039;&#039;have&#039;&#039; to run it from there, but sorting out the details is left as an exercise for the reader ;-). For example:&lt;br /&gt;
&lt;br /&gt;
  ./get_dmrpp_h4.sh AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
&lt;br /&gt;
The DMR++ will appear when the command completes.&lt;br /&gt;
&lt;br /&gt;
  ls -l&lt;br /&gt;
&lt;br /&gt;
shows&lt;br /&gt;
&lt;br /&gt;
  (hyrax500) hyrax_git/HDF4-dir % ls -l&lt;br /&gt;
  total 1251240&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff    1250778 Aug 22 22:31 AMSR_E_L2_Land_V09_200206191112_A.hdf&lt;br /&gt;
  -rw-r--r--@ 1 jimg  staff   20746207 Aug 22 22:32 AMSR_E_L3_SeaIce25km_V15_20020601.hdf&lt;br /&gt;
  -rw-r--r--  1 jimg  staff    3378674 Aug 28 17:37 AMSR_E_L3_SeaIce25km_V15_20020601.hdf.dmrpp&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, &#039;&#039;merge_dmrpp&#039;&#039; (which rely in turn on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using those options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039; which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if no value is provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data, then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If the -u option is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will substitute a value at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate BES configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much as the site.conf file works for a full server deployment, it is loaded last and its settings override the default configuration.&lt;br /&gt;
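The relationship between the -b data root and the file-name argument can be sketched in the shell (the paths here are hypothetical):&lt;br /&gt;

```shell
# The name passed to get_dmrpp must be relative to the -b data root.
data_root=/usr/share/hyrax
abs_path=/usr/share/hyrax/ghrsst/some_granule.h5

# Strip the data-root prefix to get the relative name.
rel_path="${abs_path#${data_root}/}"
echo "${rel_path}"   # ghrsst/some_granule.h5
```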
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: verbose mode, prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode,  prints the DMR, the command and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3-hosted granules as inputs and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3-hosted granules are supported by default.&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check whether the AWS CLI is configured and, if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from it, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
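The three preconditions can be sketched as a shell predicate. This only illustrates the logic; the real checks happen inside &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;, and the two true/false flags below are hypothetical stand-ins:&lt;br /&gt;

```shell
input=s3://bucket_name/granule_object_id
upload_requested=true   # stand-in for "-U was given"
aws_configured=true     # stand-in for an AWS CLI credential check

# Condition 1: the input file name is an S3 URL.
case "${input}" in
    s3://*) is_s3=true ;;
    *)      is_s3=false ;;
esac

# Upload only when all three conditions hold; the sidecar name
# follows the NGAP convention of appending ".dmrpp".
if [ "${is_s3}" = true -a "${aws_configured}" = true -a "${upload_requested}" = true ]; then
    echo "would upload ${input}.dmrpp"
fi
```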
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039;, it must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, the &#039;&#039;H5.EnableCF&#039;&#039; option used by &#039;&#039;get_dmrpp&#039;&#039; is set to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These can be categorized simply: the dmr++ file(s) are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL that references the underlying granule file via http.&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs, then there are two steps (plus a third, when authentication is needed) to serve the data:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file if needed.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so that the completed URL has three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
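A quick shell sanity check for that format might look like this (the URL is the example from above):&lt;br /&gt;

```shell
# A well-formed Hyrax file URL starts with "file://" followed by a fully
# qualified path, which means it always begins "file:///...".
url=file:///usr/share/hyrax/ghrsst/some_granule.h5
case "${url}" in
    file:///*) echo "OK" ;;
    *)         echo "BAD" ;;
esac
```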
&lt;br /&gt;
===Using dmr++ with the template string. (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs, defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
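The example URL can be reassembled from its UMM components in the shell, which makes the path grammar explicit (the component values are taken from the example above):&lt;br /&gt;

```shell
host=https://opendap.earthdata.nasa.gov
concept_id=C1443727145-LAADS        # UMM-C:{concept-id}
short_name=MOD08_D3                 # UMM-C:{ShortName}
version=6.1                         # UMM-C:{Version}
granule_ur=MOD08_D3.A2020308.061.2020309092644.hdf.nc   # UMM-G:{GranuleUR}

# /collections/{concept-id}/{ShortName}.v{Version}/granules/{GranuleUR},
# with the middle component optional (the "v" prefix follows the example).
echo "${host}/collections/${concept_id}/${short_name}.v${version}/granules/${granule_ur}"
```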
&lt;br /&gt;
When encountering this type of URL Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the restified path can be found here].&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here, one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a  &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or, if you want more scripting flexibility, you can log in to the docker container to do the same:&lt;br /&gt;
### Log in to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the regex \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section/step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server (see the following sections for details). If the generated dmr++ files do not match expectations, then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, this is where understanding the localizations made to the configurations of the on-prem Hyrax instances deployed for the collection becomes important. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b `pwd` -o &amp;quot;${dmrpp_file}&amp;quot; -u &amp;quot;file://&amp;quot;`pwd`&amp;quot;/${file}&amp;quot; &amp;quot;${file}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types, such as Int64, that are not part of the DAP2 specification or implementations, we will need to rely on eliding the instances of unmapped types, or return an error when one is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
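For a scripted check, the &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; value can be pulled out with &#039;&#039;sed&#039;&#039; (a sketch; the one-line stand-in file below replaces a real dmr++, where the attribute sits on the &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element):&lt;br /&gt;

```shell
# Stand-in for a dmr++ file; a real one is a full XML document.
printf 'dmrpp:href="file:///usr/share/hyrax/some_granule.h5"\n' > demo.dmrpp

# Print just the URL held by the dmrpp:href attribute.
sed -n 's/.*dmrpp:href="\([^"]*\)".*/\1/p' demo.dmrpp
```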
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a full qualified DAP4 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
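A sketch of testing with two variables at once; the variable names and gf_url value here are hypothetical. A DAP4 constraint expression separates variables with &#039;;&#039; while a DAP2 projection uses &#039;,&#039;:&lt;br /&gt;

```shell
# Hypothetical dataset URL and variable names.
gf_url="http://localhost:8080/opendap/data/GRANULE_FILE"
dap4_two_vars="${gf_url}.dap?dap4.ce=/var_one;/var_two"
dap2_two_vars="${gf_url}.dods?var_one,var_two"
echo "${dap4_two_vars}"
echo "${dap2_two_vars}"
# The comparison then proceeds exactly as above, e.g.:
#   curl -L -o two_var_file  "${dap2_two_vars}"
#   curl -L -o two_var_dmrpp "${gf_url}.dmrpp.dods?var_one,var_two"
#   cmp two_var_file two_var_dmrpp
```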
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the dataset URL &amp;lt;tt&amp;gt;${gf_url}&amp;lt;/tt&amp;gt; into the resulting dialog box.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13543</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13543"/>
		<updated>2024-08-28T22:57:42Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Building DMR++ files for HDF4 and HDF4-EOS2 (experimental) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The dmr++ file provides a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content within a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, without an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable&#039;s data can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, dmr++-enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. These cover the storage layouts commonly used by the &#039;&#039;hdf5&#039;&#039; library; support for others can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human-readable assessment of the file showing the storage layouts, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump information can be found here.]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (the command prints &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (it prints &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
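The check above can be wrapped in a small helper; the function name is hypothetical, and the strings matched are format names that ncdump -k prints (&amp;quot;classic&amp;quot;, &amp;quot;64-bit offset&amp;quot;, &amp;quot;netCDF-4&amp;quot;, &amp;quot;netCDF-4 classic model&amp;quot;):&lt;br /&gt;

```shell
# Hypothetical helper: classify the format string printed by `ncdump -k`.
classify_nc_format() {
  case "$1" in
    netCDF-4*)        echo "netcdf-4 (hdf5-based; candidate for dmr++)" ;;
    classic|*64-bit*) echo "netcdf-3 (not supported by the dmr++ tools)" ;;
    *)                echo "unrecognized format" ;;
  esac
}

# Usage (requires the netcdf utilities to be installed):
#   classify_nc_format "$(ncdump -k some_file.nc)"
classify_nc_format "netCDF-4"
```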
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the Docker container we build for the &#039;&#039;hyrax&#039;&#039; server. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code and use the tool that way (as part of a source code build), but that is much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy your HDF4 and/or HDF4-EOS2 files into it. Then create an environment variable so the directory can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
  export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the container &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; from Docker Hub. The &#039;&#039;snapshot&#039;&#039; tag is the latest build of the container. Docker will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
The volume mount binds $HDF4_DIR on the host to the directory &#039;&#039;/usr/share/hyrax&#039;&#039; inside the container. That directory is the root of the server&#039;s data tree, which means the HDF4 files you copied into HDF4_DIR will be accessible to the server running in the container. That will be useful for testing later on.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot.&#039;&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
  docker ps -a&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
  CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                              &lt;br /&gt;
  NAMES&lt;br /&gt;
  2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, &lt;br /&gt;
  10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop and remove a container, use&lt;br /&gt;
&lt;br /&gt;
  docker stop &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
  docker rm &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; for the container we just started, as shown in the output of &#039;&#039;docker ps -a&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. No need to stop the container now; I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At the end of this, I&#039;ll include a shell script that takes away many of these steps, but the script obscures some aspects of the command that you might want to tweak, so the following shows you all the details.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
  usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
  Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
  file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
  optional arguments:&lt;br /&gt;
  &lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now by running commands inside the container:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
starts the &#039;&#039;bash&#039;&#039; shell in the container, with the current directory as root (/)&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
  cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
  [root@hyrax hyrax]# ls&lt;br /&gt;
  3B42.19980101.00.7.HDF&lt;br /&gt;
  3B42.19980101.03.7.HDF&lt;br /&gt;
  3B42.19980101.06.7.HDF&lt;br /&gt;
&lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
In that directory, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.19980101.00.7.HDF -u &#039;file:///usr/share/hyrax/3B42.19980101.00.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory). The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the URL that follows it in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.19980101.00.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon: two from the &#039;&#039;file://&#039;&#039; scheme and one from the absolute path.&lt;br /&gt;
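As a quick demonstration, a file URL is just the &#039;&#039;file://&#039;&#039; scheme prefix joined to the absolute path of the granule, whose own leading slash supplies slash number three:&lt;br /&gt;

```shell
# The granule path is the one used in the example above.
granule=/usr/share/hyrax/3B42.19980101.00.7.HDF
url="file://${granule}"
echo "${url}"
```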
&lt;br /&gt;
Building the DMR++ with an embedded &#039;&#039;file://&#039;&#039; URL makes it easy to test.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at how the &#039;&#039;hyrax&#039;&#039; service will treat that data file using the DMR++. In a browser, go to&lt;br /&gt;
&lt;br /&gt;
  http://localhost:8080/opendap/&lt;br /&gt;
&lt;br /&gt;
You will see [[File:Hyrax-including-new-DNRpp.png|200px|thumb|right|Caption]]&lt;br /&gt;
&lt;br /&gt;
Click on your equivalent of the &#039;&#039;3B42.19980101.00.7.HDF.dmrpp&#039;&#039; link, then subset, download, and open the result in Panoply or the equivalent. [[File:Hyrax-subsetting.png|200px|thumb|right|Caption]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, &#039;&#039;merge_dmrpp&#039;&#039; (which rely in turn on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039; that can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If the -u option is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used instead, and a value will be substituted at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate BES configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much as the site.conf file works for a full server deployment, it will be loaded last and its settings will override the default configuration.&lt;br /&gt;
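A sketch of how the -s addendum option might be used; the file name and setting are illustrative, and the get_dmrpp invocation is shown as a comment because it requires a Hyrax installation:&lt;br /&gt;

```shell
# Write a hypothetical addendum file that overrides one handler setting.
printf 'H5.EnableCF=true\n' > my_addendum.conf

# Pass it to get_dmrpp with -s (requires a Hyrax install):
#   get_dmrpp -s my_addendum.conf -b "$(pwd)" -o granule.h5.dmrpp granule.h5
cat my_addendum.conf
```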
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode; prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode; prints the DMR, the command, and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3 hosted granules as inputs, and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3-hosted granules are supported by default&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check to see if the AWS CLI is configured; if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from it, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039; the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values.&lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
The scenarios can be categorized as follows.&lt;br /&gt;
The dmr++ file is an XML file that contains a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL that references the underlying granule file over the web.&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs then there are three steps to serve the data:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as needed.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
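The AllowedHosts addition mentioned in step 2 might look like the following in /etc/bes/site.conf (the host name is hypothetical; each entry is a regular expression):&lt;br /&gt;

```
# Allow Hyrax to dereference granule URLs on this (hypothetical) host
AllowedHosts+=^https:\/\/data\.example\.org\/.*$
```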
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held granules can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases dmr++ access to locally held data may be significantly faster than access through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:&lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so that the completed URL has three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
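The three-slash form falls out of simple concatenation, since the fully qualified path itself begins with a slash. A minimal &#039;&#039;bash&#039;&#039; sketch:&lt;br /&gt;

```shell
# Build a file:// URL from a fully qualified path. Because the path
# starts with '/', prepending "file://" yields the required three
# slashes after the colon.
path=/usr/share/hyrax/ghrsst/some_granule.h5
url="file://${path}"
echo "$url"   # file:///usr/share/hyrax/ghrsst/some_granule.h5
```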
&lt;br /&gt;
===Using dmr++ with the template string (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs that are defined as&lt;br /&gt;
having a URL path component with two mandatory and one optional parameters:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When encountering this type of URL, Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
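A rough sketch of that decomposition using plain &#039;&#039;bash&#039;&#039; string handling (illustration only, not the actual NGAP implementation):&lt;br /&gt;

```shell
# Pull the collection concept-id and the GranuleUR out of a
# "restified path" URL. Illustration only.
url="https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc"
path="${url#*//*/}"                 # drop the scheme and host
concept_id="${path#collections/}"   # drop the leading "collections/"
concept_id="${concept_id%%/*}"      # keep up to the next slash
granule_ur="${path##*/granules/}"   # everything after "/granules/"
echo "$concept_id"   # C1443727145-LAADS
echo "$granule_ur"   # MOD08_D3.A2020308.061.2020309092644.hdf.nc
```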
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here.]&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here: one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container ===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a  &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or if you want more scripting flexibility you can log in to the docker container to do the same:&lt;br /&gt;
### Login to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the pattern *.h5&lt;br /&gt;
for infile in $(find &amp;quot;${target_dir}&amp;quot; -name &amp;quot;*.h5&amp;quot;)&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=$(basename &amp;quot;${infile}&amp;quot;)&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=$(dirname &amp;quot;${infile}&amp;quot;)&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section/step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server (see the following sections for details). If the generated dmr++ files do not match expectations, then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, this is where it is important to understand exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b `pwd` -o &amp;quot;${dmrpp_file}&amp;quot; -u &amp;quot;file://&amp;quot;`pwd`&amp;quot;/${file}&amp;quot; &amp;quot;${file}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types, such as Int64, that are not part of the DAP2 specification or its implementations, we will need to rely on eliding the instances of unmapped types, or return an error when one is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmrpp.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a full qualified DAP4 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt; URL into the resulting dialog box.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13542</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13542"/>
		<updated>2024-08-28T22:49:43Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Building DMR++ files for HDF4 and HDF4-EOS2 (experimental) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; format provides a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, without the need for an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, the dmr++ enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
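Mechanically, each variable&#039;s chunks are recorded in the dmr++ as (offset, length) pairs, so fetching one is just a ranged read (over HTTP, a GET with a Range header). The file name, offsets, and contents in this local emulation are made up:&lt;br /&gt;

```shell
# Emulate a dmr++ chunk read: extract bytes [4, 8) from a local file,
# the way a ranged HTTP GET would pull a single chunk from S3.
printf 'AAAABBBBCCCC' > granule.bin
chunk=$(dd if=granule.bin bs=1 skip=4 count=4 2>/dev/null)
echo "$chunk"   # BBBB
```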
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. Support for additional layouts can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human readable assessment of the file, showing the storage layout, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump info can be found here.]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (reported as &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (reported as &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
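A small sketch of acting on that answer; &#039;&#039;kind&#039;&#039; is set directly so the example is self-contained, but in practice you would use &amp;lt;code&amp;gt;kind=$(ncdump -k &amp;lt;filename&amp;gt;)&amp;lt;/code&amp;gt;:&lt;br /&gt;

```shell
# Branch on the format string reported by `ncdump -k`:
#   "netCDF-4" => hdf5-based, a candidate for dmr++
#   "classic"  => netcdf-3, not supported by the dmr++ software
kind="netCDF-4"   # in practice: kind=$(ncdump -k <filename>)
case "$kind" in
  netCDF-4*) verdict="candidate for dmr++" ;;
  classic)   verdict="netcdf-3: not supported" ;;
  *)         verdict="unknown format" ;;
esac
echo "$verdict"   # candidate for dmr++
```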
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the docker container we build for &#039;&#039;hyrax&#039;&#039; server/service. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code, and use the client that way (as part of a source code build) but it&#039;s much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy the HDF4 and/or HDF4-EOS2 files in that directory. Once you have the files in that directory, make an environment variable so it can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
  export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; container from Docker Hub; this is the latest build of the container. It will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot.&#039;&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
  docker ps -a&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
  CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                               NAMES&lt;br /&gt;
  2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, 10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop a container, use&lt;br /&gt;
&lt;br /&gt;
  docker stop &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; for the one we just started, as shown in the output of &#039;&#039;docker ps -a&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. (A stopped container can then be removed with &#039;&#039;docker rm&#039;&#039;.) No need to stop the container now, I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;At the end of this, I&#039;ll include a shell script that takes away many of these steps, but the script obscures some aspects of the command that you might want to tweak, so the following shows you all the details.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
  usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
  Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
  file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
  optional arguments:&lt;br /&gt;
  &lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now, by explicitly using the container:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
starts the &#039;&#039;bash&#039;&#039; shell in the container, with the current directory as root (/)&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
  cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
  [root@hyrax hyrax]# ls&lt;br /&gt;
  3B42.19980101.00.7.HDF&lt;br /&gt;
  3B42.19980101.03.7.HDF&lt;br /&gt;
  3B42.19980101.06.7.HDF&lt;br /&gt;
&lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
In that directory, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.19980101.00.7.HDF -u &#039;file:///usr/share/hyrax/3B42.19980101.00.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory). The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the URL that follows it in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.19980101.00.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon. Obscure, but it makes sense.&lt;br /&gt;
&lt;br /&gt;
Building the DMR++ and embedding a &#039;&#039;file://&#039;&#039; URL will enable testing the DMR++ easily.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at how the &#039;&#039;hyrax&#039;&#039; service will treat that data file using the DMR++. In a browser, go to&lt;br /&gt;
&lt;br /&gt;
  http://localhost:8080/opendap/&lt;br /&gt;
&lt;br /&gt;
You will see [[File:Hyrax-including-new-DNRpp.png|200px|thumb|left|Caption]]&lt;br /&gt;
&lt;br /&gt;
Click on your equivalent of the &#039;&#039;&#039;3B42.19980101.00.7.HDF.dmrpp&#039;&#039;&#039; link:&lt;br /&gt;
[[File:Hyrax-subsetting.png|200px|thumb|left|Caption]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Also note that the volume mount, from $HDF4_DIR to &#039;/usr/share/hyrax&#039;, is a trick to get the current directory (where we are running these commands) to be the default directory for the server.&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, &#039;&#039;merge_dmrpp&#039;&#039; (which rely in turn on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039;, which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data, then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will substitute a value at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate BES configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much like the site.conf file in a full server deployment, it is loaded last and its settings override the default configuration.&lt;br /&gt;
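Putting the input options together, a minimal invocation might look like the following sketch. All paths and file names here are hypothetical, and the command line is only assembled and echoed rather than executed:&lt;br /&gt;

```shell
# Hypothetical paths -- adjust for your own deployment.
data_root=/tmp/dmrpp                # -b: top level data directory
granule=sub/dir/granule.h5          # data file, relative to data_root
site_conf=/etc/bes/site.conf        # -s: addendum configuration file

# Assemble the get_dmrpp command line (echoed here, not run).
cmd="get_dmrpp -b ${data_root} -s ${site_conf} -o ${granule##*/}.dmrpp ${granule}"
echo "${cmd}"
```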
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode, prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode, prints the DMR, the command and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++.&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
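As a sketch of how -M and -p interact (the file names and URL below are hypothetical, and the command lines are only echoed, not run): with -p the sidecar URL is written into the dmr++; without it, the template value stands in until runtime.&lt;br /&gt;

```shell
# Hypothetical: request a missing-data sidecar with an explicit URL (-p) ...
with_url="get_dmrpp -M -p https://example.com/missing/granule_mvs.h5 -b /tmp/dmrpp -o granule.h5.dmrpp granule.h5"
# ... and without -p, in which case a template value is written into the
# dmr++ and a real URL is substituted at runtime.
without_url="get_dmrpp -M -b /tmp/dmrpp -o granule.h5.dmrpp granule.h5"
echo "${with_url}"
echo "${without_url}"
```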
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3 hosted granules as inputs, and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3 hosted granules are supported by default&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check to see if the AWS CLI is configured and, if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true, then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from the granule, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039;, the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
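Because an addendum file given with -s is appended after the defaults, a repeated setting in the addendum wins. A minimal sketch of that "last setting wins" behavior, simulated over two throw-away files with awk:&lt;br /&gt;

```shell
# Sketch: the -s addendum file is appended after the default configuration,
# so for any repeated key the addendum's value wins. Simulated with awk.
printf 'H5.EnableCF=true\nH5.EnableDropLongString=true\n' > default.conf
printf 'H5.EnableCF=false\n' > addendum.conf

# Read defaults first, addendum last; the last assignment per key is kept.
effective=$(cat default.conf addendum.conf | awk -F= '{v[$1]=$2} END {print v["H5.EnableCF"]}')
echo "H5.EnableCF=${effective}"
```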
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These can be simply categorized as follows:&lt;br /&gt;
The dmr++ file(s) are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL referencing the underlying granule file via http.&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
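A quick way to see which of the three cases a given dmr++ file uses is to pull out the dmrpp:href attribute value. A sketch (the attribute line below is a stand-in for the contents of a real dmr++ file):&lt;br /&gt;

```shell
# A stand-in for the dmrpp:href attribute found in a real dmr++ file.
line='dmrpp:href="file:///usr/share/hyrax/granule.h5"'

# Extract the attribute value and classify it.
href=$(printf '%s\n' "$line" | sed -n 's/.*dmrpp:href="\([^"]*\)".*/\1/p')
case "$href" in
    http://*|https://*)            kind="remote http(s)" ;;
    file://*)                      kind="local file" ;;
    OPeNDAP_DMRpp_DATA_ACCESS_URL) kind="runtime template" ;;
    *)                             kind="unknown" ;;
esac
echo "$href is a $kind reference"
```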
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs, then there are two steps (plus one conditional step) to serve the data:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as need be.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
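Step 2 above can be sketched as appending a regex entry to site.conf. The host and the exact AllowedHosts key syntax below are assumptions; check the comments in your own bes.conf for the form your version expects:&lt;br /&gt;

```shell
# Append an AllowedHosts entry for a hypothetical data host to site.conf.
# NOTE: the key syntax is an assumption -- verify against your bes.conf.
conf=./site.conf            # stand-in for /etc/bes/site.conf
printf '%s\n' 'AllowedHosts+=^https:\/\/data\.example\.com\/.*$' >> "$conf"
tail -n 1 "$conf"
```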
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so that the completed URL will have three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
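The rule above can be sketched in shell: concatenating the file:// protocol with a fully qualified path naturally yields the three slashes.&lt;br /&gt;

```shell
# A fully qualified path already starts with '/', so prefixing 'file://'
# produces the required three slashes after the colon.
path=/usr/share/hyrax/ghrsst/some_granule.h5
url="file://${path}"
echo "$url"
```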
&lt;br /&gt;
===Using dmr++ with the template string. (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs that are defined as&lt;br /&gt;
having a URL path component with two mandatory and one optional parameters:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
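The decomposition Hyrax performs on such URLs can be sketched with shell parameter expansion, using the example URL above. This sketch is illustrative only, and assumes the optional ShortName.Version segment is present:&lt;br /&gt;

```shell
# Decompose a restified path URL into its UMM components (sketch only;
# assumes the optional ShortName '.' Version segment is present).
url="https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc"

path=${url#*collections/}           # drop scheme, host, and "collections/"
concept_id=${path%%/*}              # UMM-C:{concept-id}
rest=${path#*/}
short_name=${rest%/granules/*}      # UMM-C:{ShortName}.{Version} (optional)
granule=${rest#*granules/}          # UMM-G:{GranuleUR}

echo "$concept_id $short_name $granule"
```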
&lt;br /&gt;
When encountering this type of URL Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here].&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here, one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a  &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or if you want more scripting flexibility you can log in to the docker container to do the same:&lt;br /&gt;
### Log in to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the regex \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section/step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server (see the following sections for details). If the generated dmr++ files do not match expectations, then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, it is important to understand exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b `pwd` -o &amp;quot;${dmrpp_file}&amp;quot; -u &amp;quot;file://&amp;quot;`pwd`&amp;quot;/${file}&amp;quot; &amp;quot;${file}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types, such as Int64, that are not part of the DAP2 specification or implementations, we will need to rely on eliding the instances of unmapped types, or on returning an error when this is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a fully qualified DAP4 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form &amp;lt;tt&amp;gt;{gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;{gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the &amp;lt;tt&amp;gt;{gf_url}.html&amp;lt;/tt&amp;gt; into the resulting dialog box.&lt;br /&gt;
---&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=File:Hyrax-subsetting.png&amp;diff=13541</id>
		<title>File:Hyrax-subsetting.png</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=File:Hyrax-subsetting.png&amp;diff=13541"/>
		<updated>2024-08-28T22:47:53Z</updated>

		<summary type="html">&lt;p&gt;Jimg: Click on the file with the .dmrpp extension and use the Hyrax interface to subset the data, as this example shows. You can use a tool like Panoply to plot the data returned.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Click on the file with the .dmrpp extension and use the Hyrax interface to subset the data, as this example shows. You can use a tool like Panoply to plot the data returned.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=File:Hyrax-including-new-DNRpp.png&amp;diff=13540</id>
		<title>File:Hyrax-including-new-DNRpp.png</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=File:Hyrax-including-new-DNRpp.png&amp;diff=13540"/>
		<updated>2024-08-28T22:44:03Z</updated>

		<summary type="html">&lt;p&gt;Jimg: This shows that the hyrax server running in the Docker container that also contains the get_dmrpp_h4 command can be used along with the server&amp;#039;s web interface to examine the DMR++. The interface can be used to download various response types using the DMR++.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
This shows that the hyrax server running in the Docker container that also contains the get_dmrpp_h4 command can be used along with the server&#039;s web interface to examine the DMR++. The interface can be used to download various response types using the DMR++.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13539</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13539"/>
		<updated>2024-08-28T22:01:18Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Building dmr++ files for HDF4 and HDF4-EOS2 (experimental) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The dmr++ file provides a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, using only that location information, without the need for an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, the dmr++ enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. These are the principal storage layouts defined by the &#039;&#039;hdf5&#039;&#039; library; support for others can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human-readable assessment of the file, showing the storage layouts, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump information can be found here.]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
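As a quick screen of the &#039;&#039;h5dump&#039;&#039; output above, one can grep for filter keywords that the &#039;&#039;dmr++&#039;&#039; software does not handle. The following is a minimal, heuristic sketch; the list of unsupported filter names it greps for is an assumption and is not exhaustive:&lt;br /&gt;

```shell
#!/bin/bash
# Heuristic screen of `h5dump -H -p` output for dmr++ suitability.
# DEFLATE, SHUFFLE and FLETCHER32 are the supported filters; this
# sketch only flags a few keywords assumed to be unsupported.
screen_h5dump() {
    local bad
    bad=$(grep -oE 'SZIP|NBIT|SCALEOFFSET|LZF' "$1" | sort -u | tr '\n' ' ')
    if [ -n "$bad" ]; then
        echo "unsupported filter(s): $bad"
        return 1
    fi
    echo "OK"
}

# Demo on a fragment of the example output shown above:
cat > /tmp/sample_h5dump.txt <<'EOF'
STORAGE_LAYOUT {
   CHUNKED ( 20, 20, 20, 20 )
}
FILTERS {
   COMPRESSION DEFLATE { LEVEL 6 }
}
EOF
screen_h5dump /tmp/sample_h5dump.txt    # prints "OK"
```

A file that passes this screen still needs the fuller checks described above; a file that fails it would need the unsupported filter removed first.&lt;br /&gt;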
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (reported as &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (reported as &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
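Since only the kind string printed by &#039;&#039;ncdump -k&#039;&#039; matters here, a small wrapper can turn it into a yes/no answer. A minimal sketch, assuming the standard kind strings that &#039;&#039;ncdump&#039;&#039; reports:&lt;br /&gt;

```shell
#!/bin/bash
# Classify a netcdf file for dmr++ use based on `ncdump -k` output.
# The netCDF-4 kinds are hdf5-based and therefore usable with the
# dmr++ tooling; the netcdf-3 family is not.
dmrpp_ok_for_kind() {
    case "$1" in
        "netCDF-4"|"netCDF-4 classic model") echo "yes (hdf5-based)" ;;
        "classic"|"64-bit offset"|"cdf5")    echo "no (netcdf-3 family)" ;;
        *)                                   echo "unknown kind: $1" ;;
    esac
}

# Usage with a real file (requires the netcdf tools to be installed):
#   dmrpp_ok_for_kind "$(ncdump -k myfile.nc)"
dmrpp_ok_for_kind "netCDF-4"    # prints "yes (hdf5-based)"
```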
&lt;br /&gt;
==Building &#039;&#039;DMR++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the Docker container we build for the &#039;&#039;Hyrax&#039;&#039; server, which you can get from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code and use the builder that way, but that is much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents you build and can even show you how the files would look when served without DMR++.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Because this command is still experimental, I&#039;ll write this documentation like a recipe. Modify it to suit your own needs.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Make a new directory in a convenient place and copy the HDF4 and/or HDF4-EOS2 files into that directory. Once you have the files in that directory, make an environment variable so it can be referred to easily. From inside the directory:&lt;br /&gt;
&lt;br /&gt;
  export HDF4_DIR=$(pwd)&lt;br /&gt;
&lt;br /&gt;
Get the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
  docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; container, the latest build, from Docker Hub. It will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
Note: If you want to use a specific container version, just substitute the version info for &#039;snapshot.&#039;&lt;br /&gt;
&lt;br /&gt;
Check that the container is running using:&lt;br /&gt;
&lt;br /&gt;
  docker ps -a&lt;br /&gt;
&lt;br /&gt;
This will show a somewhat hard-to-read bit of information about all the running Docker containers on your host:&lt;br /&gt;
&lt;br /&gt;
  CONTAINER ID   IMAGE                    COMMAND              CREATED          STATUS          PORTS                                                               NAMES&lt;br /&gt;
  2949d4101df4   opendap/hyrax:snapshot   &amp;quot;/entrypoint.sh -&amp;quot;   15 seconds ago   Up 14 seconds   8009/tcp, 8443/tcp, 10022/tcp, 11002/tcp, 0.0.0.0:8080-&amp;gt;8080/tcp   hyrax&lt;br /&gt;
&lt;br /&gt;
If you want to stop the container, use&lt;br /&gt;
&lt;br /&gt;
  docker stop &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and then remove it with&lt;br /&gt;
&lt;br /&gt;
  docker rm &amp;lt;CONTAINER ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &#039;&#039;CONTAINER ID&#039;&#039; for the one we just started, shown in the output of &#039;&#039;docker ps -a&#039;&#039; above, is &#039;&#039;2949d4101df4&#039;&#039;. No need to stop the container now; I&#039;m just pointing out how to do it because it&#039;s often useful.&lt;br /&gt;
&lt;br /&gt;
====Let&#039;s run the DMR++ builder====&lt;br /&gt;
&lt;br /&gt;
At the end of this, I&#039;ll include a shell script that takes away many of these steps, but it obscures some aspects of the command that you might want to tweak, so the following shows you all the details.&lt;br /&gt;
&lt;br /&gt;
Make sure you are in the directory with the HDF4 files for these steps. &lt;br /&gt;
&lt;br /&gt;
Get the command to return its help information:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax get_dmrpp_h4 -h&lt;br /&gt;
&lt;br /&gt;
will return:&lt;br /&gt;
  &lt;br /&gt;
  usage: get_dmrpp_h4 [-h] -i I [-c CONF] [-s] [-u DATA_URL] [-D] [-v]&lt;br /&gt;
&lt;br /&gt;
  Build a dmrpp file for an HDF4 file. get_dmrpp_h4 -i h4_file_name. A dmrpp&lt;br /&gt;
  file that uses the HDF4 file name will be generated.&lt;br /&gt;
&lt;br /&gt;
  optional arguments:&lt;br /&gt;
  &lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
Let&#039;s build a DMR++ now by working explicitly inside the container:&lt;br /&gt;
&lt;br /&gt;
  docker exec -it hyrax bash&lt;br /&gt;
&lt;br /&gt;
which starts the &#039;&#039;bash&#039;&#039; shell in the container, with the current directory set to the container&#039;s root (/):&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax /]# &lt;br /&gt;
&lt;br /&gt;
Change to the directory that is the root of the data (you&#039;ll see your HDF4 files in here):&lt;br /&gt;
&lt;br /&gt;
  cd /usr/share/hyrax&lt;br /&gt;
&lt;br /&gt;
You will see, roughly:&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax /]# cd /usr/share/hyrax&lt;br /&gt;
  [root@hyrax hyrax]# ls&lt;br /&gt;
  3B42.19980101.00.7.HDF&lt;br /&gt;
  3B42.19980101.03.7.HDF&lt;br /&gt;
  3B42.19980101.06.7.HDF&lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
In that directory while inside the container, use the &#039;&#039;get_dmrpp_h4&#039;&#039; command to build a DMR++ document for one of the files:&lt;br /&gt;
&lt;br /&gt;
  [root@hyrax hyrax]# get_dmrpp_h4 -i 3B42.19980101.00.7.HDF -u &#039;file:///usr/share/hyrax/3B42.19980101.00.7.HDF&#039;&lt;br /&gt;
&lt;br /&gt;
Copy that pattern for whatever file you use. From the /usr/share/hyrax directory, you pass &#039;&#039;get_dmrpp_h4&#039;&#039; the name of the file (because it&#039;s local to the current directory). The &#039;&#039;&#039;-u&#039;&#039;&#039; option tells the command to embed the following URL in the DMR++. I&#039;ve used a &#039;&#039;file://&#039;&#039; URL to the file &#039;&#039;/usr/share/hyrax/3B42.19980101.00.7.HDF&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Note the three slashes following the colon. Obscure, but it makes sense.&lt;br /&gt;
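The three slashes fall out of the URL grammar: &#039;&#039;file://&#039;&#039; contributes two slashes and an empty host, and the absolute path contributes the third. A minimal sketch:&lt;br /&gt;

```shell
#!/bin/bash
# file:// URL = "file://" + <empty host> + absolute path.
# The absolute path starts with '/', which supplies the third slash.
path_to_file_url() {
    printf 'file://%s\n' "$1"
}

path_to_file_url /usr/share/hyrax/3B42.19980101.00.7.HDF
# prints file:///usr/share/hyrax/3B42.19980101.00.7.HDF
```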
&lt;br /&gt;
Building the DMR++ and embedding a &#039;&#039;file://&#039;&#039; URL makes it easy to test the DMR++.&lt;br /&gt;
&lt;br /&gt;
Also note that the volume mount, from $HDF4_DIR to &#039;/usr/share/hyrax&#039;, is a trick that makes the current directory (where we are running these commands) the default directory for the server.&lt;br /&gt;
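A wrapper script like the one mentioned earlier can be sketched as follows. By default it is a dry run that prints one &#039;&#039;docker exec&#039;&#039; command per .HDF file; clear DRY_RUN to actually execute the commands. The container name &#039;&#039;hyrax&#039;&#039; and the /usr/share/hyrax mount follow the setup above; using &#039;&#039;-w&#039;&#039; to set the working directory inside the container is a choice made here, not part of the original recipe:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: build a dmr++ for every .HDF file in the mounted data dir.
# DRY_RUN=echo (the default) prints the commands instead of running
# them; set DRY_RUN= to execute (requires the running hyrax container).
DRY_RUN=${DRY_RUN:-echo}

build_all_dmrpp() {
    local dir=$1 f name
    for f in "$dir"/*.HDF; do
        [ -e "$f" ] || continue    # skip when the glob matched nothing
        name=$(basename "$f")
        # -w sets the working directory inside the container so the bare
        # file name resolves against the mounted /usr/share/hyrax.
        $DRY_RUN docker exec -w /usr/share/hyrax hyrax \
            get_dmrpp_h4 -i "$name" -u "file:///usr/share/hyrax/$name"
    done
}

# Dry run over the data directory created earlier:
build_all_dmrpp "${HDF4_DIR:-.}"
```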
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, and &#039;&#039;merge_dmrpp&#039;&#039; (which rely in turn on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039;, which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data, then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will substitute a value at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate BES configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much like the site.conf file works for the full server deployment, it will be loaded last and its settings will override the default configuration.&lt;br /&gt;
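The runtime substitution described for -u can also be mimicked by hand, which is handy when patching dmr++ files after the data URL becomes known. In this sketch the template string is the real one, but the element layout and the target URL are illustrative assumptions:&lt;br /&gt;

```shell
#!/bin/bash
# Replace the OPeNDAP_DMRpp_DATA_ACCESS_URL template with a real URL,
# as the server does at runtime when -u was not given at build time.
# The element layout below is schematic, not a full dmr++ document.
cat > /tmp/demo.dmrpp <<'EOF'
<dmrpp:chunks href="OPeNDAP_DMRpp_DATA_ACCESS_URL"/>
EOF

# Hypothetical data URL, for illustration only:
sed -i 's|OPeNDAP_DMRpp_DATA_ACCESS_URL|https://example.com/data/granule.h5|' /tmp/demo.dmrpp
cat /tmp/demo.dmrpp
```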
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode, prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode,  prints the DMR, the command and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++.&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3-hosted granules as inputs and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3 Hosted granules are supported by default,&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check to see if the AWS CLI is configured and, if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from it, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well-known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039;, the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration. &lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These can be categorized simply as follows.&lt;br /&gt;
The dmr++ file(s) are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL referencing the underlying granule files via http.&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs then there are two steps to serve the data, plus a third if authentication is required:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as needed.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so that the completed URL has three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
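Requirement 2 above can be checked mechanically by extracting the file:// URLs from a dmr++ file and testing each path against the BES data root. A minimal sketch; the root path and the sample fragment are assumptions:&lt;br /&gt;

```shell
#!/bin/bash
# Check that every file:// URL in a dmr++ file points inside the BES
# data root (BES.Catalog.catalog.RootDirectory). A heuristic sketch.
check_file_urls() {
    local dmrpp=$1 root=$2 url ok=0
    while read -r url; do
        case "${url#file://}" in
            "$root"/*) echo "inside root: $url" ;;
            *) echo "OUTSIDE root: $url"; ok=1 ;;
        esac
    done < <(grep -oE 'file://[^"]+' "$dmrpp")
    return $ok
}

# Demo with a schematic dmr++ fragment:
printf '<dmrpp:chunks href="file:///usr/share/hyrax/ghrsst/some_granule.h5"/>\n' > /tmp/t.dmrpp
check_file_urls /tmp/t.dmrpp /usr/share/hyrax
# prints: inside root: file:///usr/share/hyrax/ghrsst/some_granule.h5
```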
&lt;br /&gt;
===Using dmr++ with the template string. (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs. These are defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
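The path components compose as in this sketch, which rebuilds the example URL from its parts. Whether the version component carries a leading &amp;quot;v&amp;quot; may vary by collection:&lt;br /&gt;

```shell
#!/bin/bash
# Compose a "restified" data-access path from its parts:
#   /collections/{concept-id}[/{ShortName}.{Version}]/granules/{GranuleUR}
# Values below are taken from the example URL above.
base="https://opendap.earthdata.nasa.gov"
concept_id="C1443727145-LAADS"
shortname_version="MOD08_D3.v6.1"     # optional component
granule_ur="MOD08_D3.A2020308.061.2020309092644.hdf.nc"

url="${base}/collections/${concept_id}/${shortname_version}/granules/${granule_ur}"
echo "$url"
```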
&lt;br /&gt;
When encountering this type of URL Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here].&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here: one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want, you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or, if you want more scripting flexibility, you can log in to the docker container to do the same:&lt;br /&gt;
### Login to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the regex \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users expect, and that they work correctly with the Hyrax server (see the following sections for details). If the generated dmr++ files do not match expectations, then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, it is important to understand exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b `pwd` -o &amp;quot;${dmrpp_file}&amp;quot; -u &amp;quot;file://&amp;quot;`pwd`&amp;quot;/${file}&amp;quot; &amp;quot;${file}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types such as Int64, which are not part of the DAP2 specification or its implementations, the DAP2 responses will need to either elide the instances of unmapped types or return an error when one is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
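The href check above can be scripted. This is a hedged sketch (the function name &amp;lt;tt&amp;gt;check_href&amp;lt;/tt&amp;gt; is invented for this example, not a shipped tool) that greps a dmr++ file for the &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute and reports whether one is present:&lt;br /&gt;

```shell
#!/bin/bash
# Report whether a dmr++ file carries a dmrpp:href attribute.
# Sketch only; check_href is a name invented for this example.
check_href() {
    local dmrpp_file="$1"
    if grep -q 'dmrpp:href="' "$dmrpp_file"; then
        echo "OK: ${dmrpp_file}"
    else
        echo "MISSING dmrpp:href: ${dmrpp_file}"
    fi
}
```

To scan a directory of dmr++ files: &amp;lt;tt&amp;gt;for f in *.dmrpp; do check_href &amp;quot;$f&amp;quot;; done&amp;lt;/tt&amp;gt;&lt;br /&gt;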
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmrpp.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a fully qualified DAP4 name]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
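Once the two &amp;lt;tt&amp;gt;curl&amp;lt;/tt&amp;gt; commands above have saved the responses, the &amp;lt;tt&amp;gt;cmp&amp;lt;/tt&amp;gt; step can be wrapped in a helper that reports the result; &amp;lt;tt&amp;gt;compare_dap_responses&amp;lt;/tt&amp;gt; is a name invented for this sketch:&lt;br /&gt;

```shell
#!/bin/bash
# Byte-compare two saved DAP responses (e.g., dap4_subset_file and
# dap4_subset_dmrpp fetched with curl) and report the result.
# compare_dap_responses is a hypothetical helper, not a Hyrax tool.
compare_dap_responses() {
    local from_file="$1" from_dmrpp="$2"
    if cmp -s "$from_file" "$from_dmrpp"; then
        echo "MATCH: ${from_file} == ${from_dmrpp}"
    else
        echo "DIFFER: ${from_file} != ${from_dmrpp}"
        return 1
    fi
}
```

The nonzero exit status on a mismatch makes the helper usable in batch scripts and CI checks.&lt;br /&gt;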
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
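In a DAP4 constraint expression, multiple projections are separated by semicolons, so a multi-variable request URL can be assembled as sketched below (verify the separator against your server; &amp;lt;tt&amp;gt;build_dap4_url&amp;lt;/tt&amp;gt; and the variable names are illustrative):&lt;br /&gt;

```shell
#!/bin/bash
# Assemble a DAP4 data-request URL for several variables; DAP4
# constraint expressions separate projections with ';'.
# build_dap4_url is a name invented for this sketch.
build_dap4_url() {
    local gf_url="$1"; shift
    local ce="" var
    for var in "$@"; do
        if [ -z "$ce" ]; then ce="$var"; else ce="${ce};${var}"; fi
    done
    echo "${gf_url}.dap?dap4.ce=${ce}"
}
```

Example: &amp;lt;tt&amp;gt;build_dap4_url &amp;quot;${gf_url}&amp;quot; /lat /lon&amp;lt;/tt&amp;gt;&lt;br /&gt;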
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the dataset URL &amp;lt;tt&amp;gt;${gf_url}&amp;lt;/tt&amp;gt; (without the &amp;lt;tt&amp;gt;.html&amp;lt;/tt&amp;gt; suffix) into the resulting dialog box.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13538</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13538"/>
		<updated>2024-08-28T21:10:10Z</updated>

		<summary type="html">&lt;p&gt;Jimg: here&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; file provides a fast and flexible way to serve data stored in S3.&lt;br /&gt;
A dmr++ file encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, without the need for an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them is needed, dmr++ enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats can be supported, such as zarr.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts; support for other layouts could be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file, you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
This produces a human readable assessment of the file that shows the storage layout, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary). [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump More h5dump info can be found here.]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
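Saved output like the above can be screened automatically for filters outside the supported set (DEFLATE, SHUFFLE, FLETCHER32). A hedged sketch that works on saved &amp;lt;tt&amp;gt;h5dump -H -p&amp;lt;/tt&amp;gt; output; &amp;lt;tt&amp;gt;screen_h5dump&amp;lt;/tt&amp;gt; and its keyword list are our own and not exhaustive:&lt;br /&gt;

```shell
#!/bin/bash
# Scan saved 'h5dump -H -p' output for filter keywords the dmr++
# software does not handle. The red-flag list below is illustrative,
# not exhaustive; screen_h5dump is a name invented for this sketch.
screen_h5dump() {
    local dump_file="$1"
    # Pull out the body of every FILTERS { ... } block.
    local filters
    filters=$(sed -n '/FILTERS {/,/}/p' "$dump_file")
    if echo "$filters" | grep -E 'SZIP|NBIT|SCALEOFFSET|USER_DEFINED' > /dev/null; then
        echo "UNSUPPORTED filter found"
        return 1
    fi
    echo "filters look OK"
}
```

Run it as &amp;lt;tt&amp;gt;h5dump -H -p file.h5 &amp;gt; dump.txt; screen_h5dump dump.txt&amp;lt;/tt&amp;gt; and treat any nonzero exit as a file needing closer inspection.&lt;br /&gt;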
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (reported as &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (reported as &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
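The kind string printed by &amp;lt;tt&amp;gt;ncdump -k&amp;lt;/tt&amp;gt; can be mapped to dmr++ suitability with a small helper. The kind names below are those printed by recent netcdf releases; confirm them against your installation (&amp;lt;tt&amp;gt;classify_kind&amp;lt;/tt&amp;gt; is a name invented for this sketch):&lt;br /&gt;

```shell
#!/bin/bash
# Map the one-line output of 'ncdump -k <file>' to dmr++ suitability.
# classify_kind is an illustrative helper; the kind strings are
# assumptions to verify against your ncdump version.
classify_kind() {
    case "$1" in
        netCDF-4|"netCDF-4 classic model")
            echo "netcdf-4: usable with dmr++" ;;
        classic|"64-bit offset"|cdf5)
            echo "netcdf-3 family: NOT supported by dmr++" ;;
        *)
            echo "unknown kind: $1" ;;
    esac
}
```

Usage: &amp;lt;tt&amp;gt;classify_kind &amp;quot;$(ncdump -k some_file.nc)&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;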
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
The HDF4 and HDF4-EOS2 (hereafter just HDF4) DMR++ document builder is currently available in the docker container we build for the &#039;&#039;hyrax&#039;&#039; server/service. You can get this container from the public Docker Hub repository. You can also get and build the &#039;&#039;Hyrax&#039;&#039; source code and use the tool that way (as part of a source code build), but that is much more complex than getting the Docker container. In addition, the Docker container includes a server that can test the DMR++ documents that are built and can even show you how the files would look when served without using the DMR++.&lt;br /&gt;
&lt;br /&gt;
===Using get_dmrpp_h4===&lt;br /&gt;
Get and run the Docker container from Docker Hub using this command:&lt;br /&gt;
&lt;br /&gt;
 docker run -d -h hyrax -p 8080:8080 -v $HDF4_DIR:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&lt;br /&gt;
&lt;br /&gt;
What the options mean:&lt;br /&gt;
 -d, --detach Run container in background and print container ID&lt;br /&gt;
 -h, --hostname Container host name&lt;br /&gt;
 -p, --publish Publish a container&#039;s port(s) to the host&lt;br /&gt;
 -v, --volume Bind mount a volume&lt;br /&gt;
 --name Assign a name to the container&lt;br /&gt;
&lt;br /&gt;
This command will fetch the &#039;&#039;&#039;opendap/hyrax:snapshot&#039;&#039;&#039; container, the latest build, from Docker Hub. It will then &#039;&#039;run&#039;&#039; the container and return the container ID. The &#039;&#039;hyrax&#039;&#039; server is now running on your computer and can be accessed with a web browser, curl, et cetera. More on that in a bit.&lt;br /&gt;
&lt;br /&gt;
If you want to use a specific container, just substitute its version info for &#039;snapshot&#039;. Also note that the volume mount, from $HDF4_DIR to &#039;/usr/share/hyrax&#039;, is a trick that makes the directory holding your data (where we are running these commands) the default directory for the server. &lt;br /&gt;
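After starting the container it can take a few moments for the server inside it to come up. A hedged sketch of a polling helper (&amp;lt;tt&amp;gt;wait_for&amp;lt;/tt&amp;gt; is a name invented for this example; the probe command is injectable, so the helper itself needs no network):&lt;br /&gt;

```shell
#!/bin/bash
# Poll a probe command until it succeeds or the try budget runs out.
# wait_for is an illustrative helper, not part of Hyrax.
wait_for() {
    local tries="$1"; shift
    local i=0
    while [ "$i" -lt "$tries" ]; do
        if "$@" > /dev/null 2>&1; then
            echo "up"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "gave up after ${tries} tries"
    return 1
}
```

In practice one might use: &amp;lt;tt&amp;gt;wait_for 30 curl -sf http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&lt;br /&gt;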
&lt;br /&gt;
===Command line option===&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It in turn utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, &#039;&#039;merge_dmrpp&#039;&#039; (which rely in turn on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section ). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039; and which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will substitute a value at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate bes configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much like the site.conf file for a full server deployment, it is loaded last and the settings therein override the default configuration.&lt;br /&gt;
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode; prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode; prints the DMR, the command, and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3-hosted granules as inputs and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3-hosted granules are supported by default&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL it will check to see if the AWS CLI is configured and, if so, &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from the granule, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well-known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039;, the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=false&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
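Overriding any of these uses the same site.conf pattern shown earlier: write the parameters you want to change into a small file and pass it with the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch. A sketch (the parameter choices, granule name, and output path are placeholders):&lt;br /&gt;

```shell
#!/bin/bash
# Write a site.conf that overrides two of the defaults listed above,
# then print the get_dmrpp invocation that would apply it.
# The granule name and output file name are placeholders.
cat > site.conf <<'EOF'
H5.EnableCF=true
H5.EnableDropLongString=false
EOF

# -s appends site.conf to the default BES configuration, so these
# settings take precedence over the defaults.
echo 'get_dmrpp -s site.conf -b `pwd` -o granule.h5.dmrpp -u "file://`pwd`/granule.h5" granule.h5'
```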
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, time, etc.) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These can be categorized by the data URL held in the dmr++ file: dmr++ files are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL that references the underlying granule file via http.&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs then there are three steps (the third is needed only when authentication is involved) to serve the data:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as needed.&lt;br /&gt;
# If the data URLs require authentication, then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
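As an illustration of step 2, a site.conf entry might look like the following. This is an assumption-laden sketch: confirm the &amp;lt;tt&amp;gt;AllowedHosts&amp;lt;/tt&amp;gt; key spelling and regex style against the bes.conf shipped with your release, and note that the bucket host shown is a placeholder:&lt;br /&gt;

```
# /etc/bes/site.conf -- illustrative entry; the host is a placeholder
AllowedHosts+=^https:\/\/my-data-bucket\.s3\.amazonaws\.com\/.*$
```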
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so the completed URL will have three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
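The path-to-URL conversion is mechanical but easy to get wrong by one slash. A tiny helper makes the rule explicit (&amp;lt;tt&amp;gt;to_file_url&amp;lt;/tt&amp;gt; is a name invented for this sketch):&lt;br /&gt;

```shell
#!/bin/bash
# Turn a fully qualified path into the file URL form Hyrax expects:
# 'file://' + path, which yields three slashes after the colon.
# to_file_url is an illustrative helper, not part of Hyrax.
to_file_url() {
    case "$1" in
        /*) echo "file://$1" ;;
        *)  echo "error: '$1' is not a fully qualified path" >&2; return 1 ;;
    esac
}
```

Example: &amp;lt;tt&amp;gt;to_file_url /usr/share/hyrax/ghrsst/some_granule.h5&amp;lt;/tt&amp;gt; prints &amp;lt;tt&amp;gt;file:///usr/share/hyrax/ghrsst/some_granule.h5&amp;lt;/tt&amp;gt;&lt;br /&gt;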
&lt;br /&gt;
===Using dmr++ with the template string. (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs that are defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When encountering this type of URL, Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here].&lt;br /&gt;
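The decomposition Hyrax performs can be sketched with plain bash parameter expansion; this is an illustration only, not the NGAP service&#039;s actual code (the URL is the example above):&lt;br /&gt;

```shell
#!/bin/bash
# Decompose a "restified path" URL into its CMR query components.
url="https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc"

rest="${url#*/collections/}"      # strip everything through "/collections/"
concept_id="${rest%%/*}"          # MANDATORY: UMM-C concept-id
granule_ur="${url#*/granules/}"   # MANDATORY: UMM-G GranuleUR

echo "concept-id: ${concept_id}"
echo "GranuleUR:  ${granule_ur}"
# concept-id: C1443727145-LAADS
# GranuleUR:  MOD08_D3.A2020308.061.2020309092644.hdf.nc
```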
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here, one using the Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container ===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want, you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or, if you want more scripting flexibility, you can log in to the docker container to do the same:&lt;br /&gt;
### Log in to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the regex \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section/step we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server (see the following sections for details). If the generated dmr++ files do not match expectations, then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, it is important to understand exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types, such as Int64, that are not part of the DAP2 specification or its implementations, we will need to rely on eliding the instances of unmapped types, or on returning an error when such a type is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
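Checking the &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; of many dmr++ files by hand is tedious; a small grep-based helper can report it directly (a sketch; the sample file content is fabricated for illustration):&lt;br /&gt;

```shell
#!/bin/bash
# Print the value of the dmrpp:href attribute found in a dmr++ file.
# In a real dmr++, the attribute appears on the root dap4:Dataset element.
dmrpp_href() {
    grep -o 'dmrpp:href="[^"]*"' "$1" | head -n 1 | cut -d'"' -f2
}

# Fabricated one-line stand-in for a real dmr++ file:
printf 'dmrpp:href="file:///usr/share/hyrax/some_granule.h5"\n' > sample.dmrpp

dmrpp_href sample.dmrpp
# file:///usr/share/hyrax/some_granule.h5
```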
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmrpp.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a fully qualified DAP4 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
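The paired requests above differ only by a &amp;lt;tt&amp;gt;.dmrpp&amp;lt;/tt&amp;gt; infix, so a small helper can emit both URLs for any request suffix (a sketch; gf_url is the test constant defined earlier):&lt;br /&gt;

```shell
#!/bin/bash
# Emit the granule-file URL and its dmr++ twin for one DAP request,
# e.g. suffix ".dods?VARIABLE_NAME" or ".dap?dap4.ce=VARIABLE_NAME".
gf_url="http://localhost:8080/opendap/some_name.h5"

url_pair() {
    local suffix="$1"
    echo "${gf_url}${suffix}"          # response from the granule file
    echo "${gf_url}.dmrpp${suffix}"    # response from the dmr++ file
}

url_pair ".dods?VARIABLE_NAME"
# http://localhost:8080/opendap/some_name.h5.dods?VARIABLE_NAME
# http://localhost:8080/opendap/some_name.h5.dmrpp.dods?VARIABLE_NAME
```

The two emitted URLs can then be fetched with curl and compared with cmp as shown above.&lt;br /&gt;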
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste the dataset URL &amp;lt;tt&amp;gt;${gf_url}&amp;lt;/tt&amp;gt; into the resulting dialog box.&lt;br /&gt;
---&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13537</id>
		<title>DMR++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=DMR%2B%2B&amp;diff=13537"/>
		<updated>2024-08-28T20:40:23Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Building dmr++ files with get_dmrpp */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;How to build &amp;amp; deploy dmr++ files for Hyrax&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==What? Why?== &lt;br /&gt;
&lt;br /&gt;
The dmr++ file is a fast and flexible way to serve data stored in S3.&lt;br /&gt;
The dmr++ encodes the location of the data content residing in a binary data file/object (e.g., an hdf5 file) so that the data can be accessed directly, using only that location information, without the need for an intermediate library API. The binary data objects may be on a local filesystem, or they may reside across the web in something like an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How Does It Work?==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; ingest software reads a data file (see &#039;&#039;&#039;note&#039;&#039;&#039;) and builds a document that holds all of the file&#039;s metadata (the names and types of all of the variables along with any other information bound to those variables). This information is stored in a document we call the Dataset Metadata Response (DMR). The &#039;&#039;dmr++&#039;&#039; adds some extra information to this (that&#039;s the &#039;++&#039; part) about where each variable can be found and how to decode those values. The &#039;&#039;dmr++&#039;&#039; is simply a specially annotated DMR document.&lt;br /&gt;
&lt;br /&gt;
This effectively decouples the annotated DMR (&#039;&#039;dmr++&#039;&#039;) from the location of the granule file itself. Since dmr++ files are typically significantly smaller than the source data granules they represent, they can be stored and moved for less expense. They also enable reading all of the file&#039;s metadata in one operation instead of the iterative process that many APIs require.&lt;br /&gt;
&lt;br /&gt;
If the dmr++ contains references to the source granule&#039;s location on the web, the location of the dmr++ file itself does not matter.&lt;br /&gt;
&lt;br /&gt;
Software that understands the dmr++ content can directly access the data values held in the source granule file, and it can do so without having to retrieve the entire file and work on it locally, even when the file is stored in a Web Object Store like S3. &lt;br /&gt;
&lt;br /&gt;
If the granule file contains multiple variables and only a subset of them are needed, the dmr++ enabled software can retrieve just the bytes associated with the desired variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;note:&#039;&#039;&#039; The OPeNDAP software currently supports HDF5 and NetCDF4. Other formats, such as zarr, could be supported as well.&lt;br /&gt;
&lt;br /&gt;
==Supported Data Formats==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently works with &#039;&#039;hdf5&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. (The &#039;&#039;netcdf-4&#039;&#039; format is a subset of &#039;&#039;hdf5&#039;&#039; so &#039;&#039;hdf5&#039;&#039; tools are utilized for both.) Other formats like &#039;&#039;zarr&#039;&#039;, &#039;&#039;hdf4&#039;&#039;, &#039;&#039;netcdf-3&#039;&#039; are not currently supported by the &#039;&#039;dmr++&#039;&#039; software, but support could be added if requested.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; data format is quite complex and many of the options and edge cases are not currently supported by the &#039;&#039;dmr++&#039;&#039; software. &lt;br /&gt;
&lt;br /&gt;
These limitations and how to quickly evaluate an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file for use with the &#039;&#039;dmr++&#039;&#039; software are explained below.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; filters====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format has several filter/compression options used for storing data values. &lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5Z_FILTER_DEFLATE, H5Z_FILTER_SHUFFLE, and H5Z_FILTER_FLETCHER32 filters.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc/RM/RM_H5Z.html You can find more on hdf5 filters here.]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;hdf5&#039;&#039; storage layouts====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;hdf5&#039;&#039; format also uses a number of &amp;quot;storage layouts&amp;quot; that describe various structural organizations of the data values associated with a variable in the granule file.&lt;br /&gt;
The &#039;&#039;dmr++&#039;&#039; software currently supports data that utilize the  H5D_COMPACT, H5D_CHUNKED, and H5D_CONTIGUOUS storage layouts. These are all of the storage layouts defined by the &#039;&#039;hdf5&#039;&#039; library, but others can be added.&lt;br /&gt;
[https://support.hdfgroup.org/HDF5/doc1.6/Datasets.html You can find more on hdf5 storage layouts here.]&lt;br /&gt;
&lt;br /&gt;
====Is my &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file suitable for &#039;&#039;dmr++&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
To determine the &#039;&#039;hdf5&#039;&#039; filters, storage layouts, and chunking scheme used in an &#039;&#039;hdf5&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039; file you can use the command:&lt;br /&gt;
 &amp;lt;code&amp;gt;h5dump -H -p &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
To get a human readable assessment of the file that will show the storage layouts, chunking structure, and the filters needed for each variable (aka DATASET in the &#039;&#039;hdf5&#039;&#039; vocabulary) [https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Dump h5dump info can be found here.]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;h5dump example output:&#039;&#039;&lt;br /&gt;
 $ h5dump -H -p chunked_gzipped_fourD.h5&lt;br /&gt;
 HDF5 &amp;quot;chunked_gzipped_fourD.h5&amp;quot; {&lt;br /&gt;
 GROUP &amp;quot;/&amp;quot; {&lt;br /&gt;
   DATASET &amp;quot;d_16_gzipped_chunks&amp;quot; {&lt;br /&gt;
      DATATYPE  H5T_IEEE_F32LE&lt;br /&gt;
      DATASPACE  SIMPLE { ( 40, 40, 40, 40 ) / ( 40, 40, 40, 40 ) }&lt;br /&gt;
      STORAGE_LAYOUT {&lt;br /&gt;
         CHUNKED ( 20, 20, 20, 20 )&lt;br /&gt;
         SIZE 2863311 (3.576:1 COMPRESSION)&lt;br /&gt;
      }&lt;br /&gt;
      FILTERS {&lt;br /&gt;
         COMPRESSION DEFLATE { LEVEL 6 }&lt;br /&gt;
      }&lt;br /&gt;
      FILLVALUE {&lt;br /&gt;
         FILL_TIME H5D_FILL_TIME_ALLOC&lt;br /&gt;
         VALUE  H5D_FILL_VALUE_DEFAULT&lt;br /&gt;
      }&lt;br /&gt;
      ALLOCATION_TIME {&lt;br /&gt;
         H5D_ALLOC_TIME_INCR&lt;br /&gt;
      }&lt;br /&gt;
   }&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
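Saved &amp;lt;tt&amp;gt;h5dump -H -p&amp;lt;/tt&amp;gt; output can be scanned for compression filters outside the supported set (DEFLATE, SHUFFLE, FLETCHER32); this is a sketch that checks only COMPRESSION lines, and the sample dump text is fabricated from the example above:&lt;br /&gt;

```shell
#!/bin/bash
# Given the saved output of "h5dump -H -p granule.h5", list any
# COMPRESSION lines naming something other than the dmr++-supported
# filters. Empty output means the compression filters look compatible.
unsupported_filters() {
    grep -E 'COMPRESSION [A-Z]' "$1" | grep -v -E 'DEFLATE|SHUFFLE|FLETCHER32'
}

# Fabricated sample, matching the h5dump example output above:
printf '      FILTERS {\n         COMPRESSION DEFLATE { LEVEL 6 }\n      }\n' > dump.txt

if unsupported_filters dump.txt; then
    echo "check the listed filters against the dmr++ supported set"
else
    echo "filters look compatible with dmr++"
fi
# filters look compatible with dmr++
```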
&lt;br /&gt;
====Is my netcdf file &#039;&#039;netcdf-3&#039;&#039; or &#039;&#039;netcdf-4&#039;&#039;?====&lt;br /&gt;
&lt;br /&gt;
It is an unfortunate state of affairs that the file suffix &amp;quot;.nc&amp;quot; is the commonly used naming convention for both &#039;&#039;netcdf-3&#039;&#039; and &#039;&#039;netcdf-4&#039;&#039; files. &lt;br /&gt;
You can use the command:  &lt;br /&gt;
 &amp;lt;code&amp;gt;ncdump -k &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt; &lt;br /&gt;
to determine whether a &#039;&#039;netcdf&#039;&#039; file is &#039;&#039;netcdf-3&#039;&#039; (reported as &amp;quot;classic&amp;quot;) or &#039;&#039;netcdf-4&#039;&#039; (reported as &amp;quot;netCDF-4&amp;quot;).&lt;br /&gt;
* The &#039;&#039;netcdf&#039;&#039; library must be installed on the system upon which the command is issued.&lt;br /&gt;
&lt;br /&gt;
[http://www.bic.mni.mcgill.ca/users/sean/Docs/netcdf/guide.txn_79.html You can learn more in the NetCDF documentation here.]&lt;br /&gt;
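When &amp;lt;tt&amp;gt;ncdump&amp;lt;/tt&amp;gt; is not available, the file&#039;s leading magic bytes distinguish the two formats; this sketch checks only the common case where the hdf5 superblock is at offset 0, and the sample files are fabricated:&lt;br /&gt;

```shell
#!/bin/bash
# Classify a ".nc" file by its leading magic bytes: netcdf-3 (classic)
# files begin with "CDF"; netcdf-4 files are hdf5 containers, which
# begin with byte 0x89 followed by "HDF".
nc_kind() {
    case "$(head -c 4 "$1")" in
        CDF*)  echo "netcdf-3 (classic)" ;;
        ?HDF)  echo "netcdf-4 (hdf5)" ;;
        *)     echo "unknown" ;;
    esac
}

# Fabricated sample files carrying the two magic numbers:
printf 'CDF\001demo' > classic.nc
printf '\211HDF\r\n\032\n' > modern.nc

nc_kind classic.nc   # netcdf-3 (classic)
nc_kind modern.nc    # netcdf-4 (hdf5)
```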
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files for HDF4 and HDF4-EOS2 (experimental)==&lt;br /&gt;
&lt;br /&gt;
==Building &#039;&#039;dmr++&#039;&#039; files with get_dmrpp==&lt;br /&gt;
&lt;br /&gt;
The application that builds the &#039;&#039;dmr++&#039;&#039; files is a command line tool called &#039;&#039;get_dmrpp&#039;&#039;. It utilizes other executables such as &#039;&#039;build_dmrpp&#039;&#039;, &#039;&#039;reduce_mdf&#039;&#039;, and &#039;&#039;merge_dmrpp&#039;&#039; (which in turn rely on the &#039;&#039;hdf5_handler&#039;&#039; and the &#039;&#039;hdf5&#039;&#039; library), along with a number of UNIX shell commands.&lt;br /&gt;
&lt;br /&gt;
All of these components are installed with each recent version of the Hyrax Data Server.&lt;br /&gt;
&lt;br /&gt;
You can see the &#039;&#039;get_dmrpp&#039;&#039; usage statement with the command: &amp;lt;code&amp;gt;get_dmrpp -h&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Using &#039;&#039;get_dmrpp&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
The way that &#039;&#039;get_dmrpp&#039;&#039; is invoked controls the way that the data are ultimately represented in the resulting &#039;&#039;dmr++&#039;&#039; file(s). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;get_dmrpp&#039;&#039; application utilizes software from the Hyrax data server to produce the base DMR document which is used to construct the &#039;&#039;dmr++&#039;&#039; file. &lt;br /&gt;
&lt;br /&gt;
The Hyrax server has a long list of configuration options, several of which can substantially alter the structural and semantic representation of the dataset as seen in the dmr++ files generated using these options.&lt;br /&gt;
&lt;br /&gt;
===Command line options===&lt;br /&gt;
&lt;br /&gt;
The command line switches provide a way to control the output of the tool. In addition to common options like verbose output or testing modes, the tool provides options to build extra (aka &#039;sidecar&#039;) data files that hold information needed for CF compliance if the original HDF5 data files lack that information (see the &#039;&#039;missing data&#039;&#039; section ). In addition, it is often desirable to build &#039;&#039;dmr++&#039;&#039; files before the source data files are uploaded to a cloud store like S3. In this case, the URL to the data may not be known when the &#039;&#039;dmr++&#039;&#039; is built. We support this by using placeholder/template strings in the &#039;&#039;dmr++&#039;&#039; and which can then be replaced with the URL at runtime, when the &#039;&#039;dmr++&#039;&#039; file is evaluated. See the &#039;-u&#039; and &#039;-p&#039; options below.&lt;br /&gt;
&lt;br /&gt;
====Inputs====&lt;br /&gt;
&lt;br /&gt;
; -b&lt;br /&gt;
: The fully qualified path to the top level data directory. Data files read by &#039;&#039;get_dmrpp&#039;&#039; must be in the directory tree rooted at this location and their names expressed as a path relative to this location. The value may not be set to &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt;. The default value is /tmp if a value is not provided. All the data files to be processed must be in this directory or one of its subdirectories. If &#039;&#039;get_dmrpp&#039;&#039; is being executed from the same directory as the data, then &amp;lt;code&amp;gt;-b `pwd`&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-b .&amp;lt;/code&amp;gt; works as well.&lt;br /&gt;
;-u&lt;br /&gt;
: This option is used to specify the location of the binary data object. Its value must be an http, https, or file (file://) URL. This URL will be injected into the dmr++ when it is constructed. If option -u is not used, then the template string &amp;lt;code&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/code&amp;gt; will be used and the dmr++ will have a value substituted at runtime.&lt;br /&gt;
;-c&lt;br /&gt;
:The path to an alternate bes configuration file to use.&lt;br /&gt;
;-s&lt;br /&gt;
:The path to an optional addendum configuration file which will be appended to the default BES configuration. Much like the site.conf file works for the full server deployment, it will be loaded last and the settings therein will override the default configuration.&lt;br /&gt;
&lt;br /&gt;
====Output====&lt;br /&gt;
&lt;br /&gt;
; -o&lt;br /&gt;
: The name of the file to create.&lt;br /&gt;
&lt;br /&gt;
====Verbose Output Modes====&lt;br /&gt;
&lt;br /&gt;
; -h&lt;br /&gt;
: Show help/usage page.&lt;br /&gt;
; -v&lt;br /&gt;
: Verbose mode, prints the intermediate DMR.&lt;br /&gt;
; -V&lt;br /&gt;
: Very verbose mode,  prints the DMR, the command and the configuration file used to build the DMR.&lt;br /&gt;
; -D&lt;br /&gt;
: Just print the DMR that will be used to build the DMR++&lt;br /&gt;
; -X&lt;br /&gt;
: Do not remove temporary files. May be used independently of the -v and/or -V options.&lt;br /&gt;
&lt;br /&gt;
====Tests====&lt;br /&gt;
&lt;br /&gt;
; -T&lt;br /&gt;
: Run ALL hyrax tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -I&lt;br /&gt;
: Run hyrax inventory tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
; -F&lt;br /&gt;
: Run hyrax value probe tests on the resulting dmr++ file and compare the responses to the ones generated by the source hdf5 file.&lt;br /&gt;
&lt;br /&gt;
====Missing Data Creation====&lt;br /&gt;
&lt;br /&gt;
; -M&lt;br /&gt;
: Build a &#039;sidecar&#039; file that holds missing information needed for CF compliance (e.g., Latitude, Longitude and Time coordinate data).&lt;br /&gt;
; -p&lt;br /&gt;
: Provide the URL for the Missing data sidecar file. If this is not given (but -M is), then a template value is used in the dmr++ file and a real URL is substituted at runtime.&lt;br /&gt;
; -r&lt;br /&gt;
: The path to the file that contains missing variable information for sets of input data files that share common missing variables. The file will be created if it doesn&#039;t exist and the result may be used in subsequent invocations of get_dmrpp (using -r) to identify the missing variable file.&lt;br /&gt;
&lt;br /&gt;
====AWS Integration====&lt;br /&gt;
The &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application supports both S3 hosted granules as inputs, and uploading generated dmr++ files to an S3 bucket.&lt;br /&gt;
&lt;br /&gt;
; S3 hosted granules are supported by default.&lt;br /&gt;
: When the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application sees that the name of the input file is an S3 URL, it will check to see if the AWS CLI is configured, and if so &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will attempt to retrieve the granule and make a dmr++ utilizing whatever other options have been chosen.&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
; -U&lt;br /&gt;
: The &#039;&#039;&#039;&amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt;&#039;&#039;&#039; command line parameter for &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; instructs the &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; application to upload the generated dmr++ file to S3, but only when the following conditions are met:&lt;br /&gt;
:* The name of the input file is an S3 URL&lt;br /&gt;
:* The AWS CLI has been configured with credentials that provide r+w permissions for the bucket referenced in the input file S3 URL.&lt;br /&gt;
:* The &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; option has been specified.&lt;br /&gt;
: If all three of the above are true, then &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will retrieve the granule, create a dmr++ file from the granule, and copy the resulting dmr++ file (as defined by the -o option) to the source S3 bucket using the well known NGAP sidecar file naming convention: &#039;&#039;&#039;&amp;lt;tt&amp;gt;s3://bucket_name/granule_object_id.dmrpp&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
:: Example: &#039;&#039;&#039;&amp;lt;tt&amp;gt;get_dmrpp -U -o foo -b `pwd` s3://bucket_name/granule_object_id&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;hdf5_handler&#039;&#039; Configuration===&lt;br /&gt;
&lt;br /&gt;
Because &#039;&#039;get_dmrpp&#039;&#039; uses the &#039;&#039;hdf5_handler&#039;&#039; software to build the &#039;&#039;dmr++&#039;&#039;, the software must inject the &#039;&#039;hdf5_handler&#039;&#039;&#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
The default configuration is large, but any value may be altered at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some of the commonly manipulated configuration parameters with their default values:&lt;br /&gt;
 H5.EnableCF=true&lt;br /&gt;
 H5.EnableDMR64bitInt=true&lt;br /&gt;
 H5.DefaultHandleDimension=true&lt;br /&gt;
 H5.KeepVarLeadingUnderscore=false&lt;br /&gt;
 H5.EnableCheckNameClashing=true&lt;br /&gt;
 H5.EnableAddPathAttrs=true&lt;br /&gt;
 H5.EnableDropLongString=true&lt;br /&gt;
 H5.DisableStructMetaAttr=true&lt;br /&gt;
 H5.EnableFillValueCheck=true&lt;br /&gt;
 H5.CheckIgnoreObj=false&lt;br /&gt;
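An amendment file that overrides several of these defaults can be assembled and handed to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with the -s switch; the particular values below are illustrative only, not recommendations:&lt;br /&gt;

```shell
#!/bin/bash
# Collect hdf5_handler configuration overrides into an amendment file;
# the values shown here are examples, not recommendations.
printf 'H5.EnableCF=true\n'                 > site.conf
printf 'H5.KeepVarLeadingUnderscore=true\n' >> site.conf
printf 'H5.EnableDropLongString=false\n'    >> site.conf

# Then (not run here) pass the amendment to get_dmrpp with -s:
#   get_dmrpp -s site.conf -b "$(pwd)" -o granule.h5.dmrpp \
#       -u "file://$(pwd)/granule.h5" granule.h5
```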
&lt;br /&gt;
====&#039;&#039;Note to DAACs with existing Hyrax deployments.&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
If your group is already serving data with Hyrax and the data representations that are generated by your Hyrax server are satisfactory, then a careful inspection of the localized configuration, typically held in /etc/bes/site.conf, will help you determine what configuration state you may need to inject into &#039;&#039;get_dmrpp&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===The &#039;&#039;H5.EnableCF&#039;&#039; option===&lt;br /&gt;
&lt;br /&gt;
Of particular importance is the &#039;&#039;H5.EnableCF&#039;&#039; option, which instructs the &#039;&#039;get_dmrpp&#039;&#039; tool to produce [https://cfconventions.org/ Climate Forecast convention (CF)] compatible output based on metadata found in the granule file being processed. &lt;br /&gt;
&lt;br /&gt;
Changing the value of &#039;&#039;H5.EnableCF&#039;&#039; from &#039;&#039;&#039;&#039;&#039;false&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;true&#039;&#039;&#039;&#039;&#039; will have (at least) two significant effects.&lt;br /&gt;
&lt;br /&gt;
It will:&lt;br /&gt;
&lt;br /&gt;
* Cause &#039;&#039;get_dmrpp&#039;&#039; to attempt to make the dmr++ metadata CF compliant.&lt;br /&gt;
* Remove Group hierarchies (if any) in the underlying data granule by flattening the Group hierarchy into the variable names.  &lt;br /&gt;
&lt;br /&gt;
By default, &#039;&#039;get_dmrpp&#039;&#039; sets the &#039;&#039;H5.EnableCF&#039;&#039; option to false:&lt;br /&gt;
 H5.EnableCF = false&lt;br /&gt;
&lt;br /&gt;
There is a much more comprehensive discussion of this key feature, and others, in the &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;[https://opendap.github.io/hyrax_guide/Master_Hyrax_Guide.html#_hyrax_handlers HDF5 Handler section of the Appendix in the Hyrax Data Server Installation and Configuration Guide]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Missing data, the CF conventions and &#039;&#039;hdf5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
Many of the &#039;&#039;hdf5&#039;&#039; files produced by NASA and others do not contain the domain coordinate data (such as latitude, longitude, and time) as a collection of explicit values. Instead, information contained in the dataset metadata can be used to reproduce these values. &lt;br /&gt;
&lt;br /&gt;
In order for a dataset to be Climate Forecast (CF) compatible it must contain these domain coordinate data values.&lt;br /&gt;
&lt;br /&gt;
The Hyrax &#039;&#039;hdf5_handler&#039;&#039; software, utilized by the &#039;&#039;get_dmrpp&#039;&#039; application, can create this data from the dataset metadata.  The &#039;&#039;get_dmrpp&#039;&#039; application places these generated data in a “sidecar” file for deployment with the source &#039;&#039;hdf5/netcdf&#039;&#039;-4 file.&lt;br /&gt;
&lt;br /&gt;
==Hyrax - Serving data using dmr++ files==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental deployment scenarios for using dmr++ files to serve data with the Hyrax data server.&lt;br /&gt;
&lt;br /&gt;
These can be categorized as follows:&lt;br /&gt;
The dmr++ files are XML files that contain a root &amp;lt;tt&amp;gt;dap4:Dataset&amp;lt;/tt&amp;gt; element with a &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute whose value is one of:&lt;br /&gt;
# An http(s):// URL that references the underlying granule file via http(s).&lt;br /&gt;
# A file:// URL that references the granule file on the local filesystem in a location that is inside the BES&#039; data root tree.&lt;br /&gt;
# The template string &amp;lt;tt&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each will be discussed in turn below. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: By default Hyrax will automatically associate files whose name ends with &amp;quot;.dmrpp&amp;quot; with the dmr++ handler.&lt;br /&gt;
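Given a dmr++ file, you can quickly check which of the three forms it uses by extracting the &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attribute. A minimal sketch (&amp;lt;tt&amp;gt;demo.dmrpp&amp;lt;/tt&amp;gt; is a one-line stand-in created just for illustration; point the grep at a real dmr++ file in practice):&lt;br /&gt;

```shell
# Create a one-line stand-in (a real dmr++ is a full XML document; only
# the dmrpp:href attribute matters for this check).
echo 'Dataset dmrpp:href="file:///usr/share/hyrax/some_granule.h5"' > demo.dmrpp

# Extract the attribute to see which of the three href forms is present.
grep -o 'dmrpp:href="[^"]*"' demo.dmrpp
```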
&lt;br /&gt;
===Using dmr++ with http(s) URLs===&lt;br /&gt;
&lt;br /&gt;
If the dmr++ files that you wish to serve contain &amp;lt;tt&amp;gt;dmrpp:href&amp;lt;/tt&amp;gt; attributes whose values are http(s) URLs, then there are two steps to serve the data, plus a third if authentication is required:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the Hyrax AllowedHosts list is configured to allow Hyrax to access those target URLs. This can be accomplished by adding new regex entries to the AllowedHosts list in /etc/bes/site.conf, creating that file as needed.&lt;br /&gt;
# If the data URLs require authentication to access then you&#039;ll need to configure Hyrax for that too.&lt;br /&gt;
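Step 2 might look like the sketch below. The &amp;lt;tt&amp;gt;AllowedHosts&amp;lt;/tt&amp;gt; key name and the += append syntax are assumptions to verify against the comments in your own bes.conf, and data.example.com is a hypothetical host; a real deployment writes to /etc/bes/site.conf rather than a local copy:&lt;br /&gt;

```shell
# Append an AllowedHosts regex to a local copy of site.conf.
# ASSUMPTIONS: the "AllowedHosts" key, the "+=" append syntax, and the host
# data.example.com are illustrative; check your bes.conf comments.
echo 'AllowedHosts+=^https:\/\/data\.example\.com\/.*$' >> site.conf
grep 'AllowedHosts' site.conf
```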
&lt;br /&gt;
===Using dmr++ with file URLs===&lt;br /&gt;
&lt;br /&gt;
Using dmr++ files with locally held files can be useful for verifying that dmr++ functionality is working without relying on network access that may have data rate limits, authenticated access configuration, or security access constraints. Additionally, in many cases the dmr++ access to the locally held data may be significantly faster than through the native netcdf-4/hdf5 data handlers.&lt;br /&gt;
&lt;br /&gt;
In order to use dmr++ files that contain file:// URLs:&lt;br /&gt;
# Place the dmr++ files on the local disk inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration&lt;br /&gt;
# Ensure that the dmr++ files contain only file:// URLs that refer to data granule files inside of the directory tree identified by the &amp;lt;tt&amp;gt;BES.Catalog.catalog.RootDirectory&amp;lt;/tt&amp;gt; in the BES configuration.&lt;br /&gt;
&lt;br /&gt;
Note: For Hyrax, a correctly formatted file URL must start with the protocol &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; followed by the fully qualified path to the data granule, for example:  &lt;br /&gt;
 /usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
so the completed URL will have three slashes after the first colon:&lt;br /&gt;
 file:///usr/share/hyrax/ghrsst/some_granule.h5&lt;br /&gt;
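Because an absolute path already begins with a slash, prepending the scheme produces the three slashes automatically. A small sketch using the example path above (which need not exist for the string manipulation to work):&lt;br /&gt;

```shell
# Prepend the scheme to an absolute path; since the path itself starts
# with "/", the result has the required three slashes after "file:".
granule="/usr/share/hyrax/ghrsst/some_granule.h5"
url="file://${granule}"
echo "${url}"   # file:///usr/share/hyrax/ghrsst/some_granule.h5
```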
&lt;br /&gt;
===Using dmr++ with the template string. (NASA)===&lt;br /&gt;
&lt;br /&gt;
Another way to serve dmr++ files with Hyrax is to build the dmr++ files &#039;&#039;&#039;without&#039;&#039;&#039; valid URLs but with a template string that is replaced at runtime. If no target URL is supplied to get_dmrpp at the time that the dmr++ is generated, the template string &amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;OPeNDAP_DMRpp_DATA_ACCESS_URL&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt; will be added to the file in place of the URL. Then, at runtime, it can be replaced with the correct value.&lt;br /&gt;
&lt;br /&gt;
Currently the only implementation of this is Hyrax&#039;s NGAP service which, when deployed in the NASA NGAP cloud, will accept &amp;quot;restified path&amp;quot; URLs that are defined as&lt;br /&gt;
having a URL path component with two mandatory parameters and one optional parameter:&lt;br /&gt;
 MANDATORY: &amp;quot;/collections/UMM-C:{concept-id}&amp;quot;&lt;br /&gt;
 OPTIONAL:  &amp;quot;/UMM-C:{ShortName} &#039;.&#039; UMM-C:{Version}&amp;quot;&lt;br /&gt;
 MANDATORY: &amp;quot;/granules/UMM-G:{GranuleUR}&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039; &amp;lt;tt&amp;gt;https://opendap.earthdata.nasa.gov/collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When encountering this type of URL Hyrax will decompose it and use the content to formulate a query to the NASA CMR in order to retrieve the data access URL for the granule and for the dmr++ file. It then retrieves the dmr++ file and injects the data URL so that data access can proceed as described above.&lt;br /&gt;
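The decomposition can be illustrated with simple string operations on the example path above. This is only a sketch; the real parsing is done inside Hyrax&#039;s NGAP service:&lt;br /&gt;

```shell
# Pull the CMR concept-id and GranuleUR out of the example restified path,
# using shell parameter expansion (illustration only).
path="collections/C1443727145-LAADS/MOD08_D3.v6.1/granules/MOD08_D3.A2020308.061.2020309092644.hdf.nc"

concept_id="${path#collections/}"   # drop the leading "collections/"
concept_id="${concept_id%%/*}"      # keep everything before the next "/"
granule_ur="${path##*/granules/}"   # keep everything after "/granules/"

echo "concept-id: ${concept_id}"
echo "GranuleUR:  ${granule_ur}"
```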
&lt;br /&gt;
&lt;br /&gt;
[https://wiki.earthdata.nasa.gov/display/DUTRAIN/Feature+analysis%3A+Restified+URL+for+OPENDAP+Data+Access More on the Restified Path can be found here].&lt;br /&gt;
&lt;br /&gt;
== Recipe: Building and testing dmr++ files ==&lt;br /&gt;
There are two recipes shown here, one using Hyrax docker containers and a second using the container that is part of the EOSDIS Cumulus task.&lt;br /&gt;
Prerequisites: &lt;br /&gt;
* Docker daemon running on a system that also supports a shell (the examples use &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; in this section)&lt;br /&gt;
=== Recipe: Building dmr++ files using a Hyrax docker container.===&lt;br /&gt;
# Acquire representative granule files for the collection you wish to import. Put them on the system that is running the Docker daemon. For this recipe we will assume that these files have been placed in the directory:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;/tmp/dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Get the most up to date Hyrax docker image:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker pull opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Start the docker container, mounting your data directory on to the docker image at &amp;lt;tt&amp;gt;/usr/share/hyrax&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker run -d -h hyrax -p 8080:8080 --volume /tmp/dmrpp:/usr/share/hyrax --name=hyrax opendap/hyrax:snapshot&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
# Get a first view of your data using &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; with its default configuration.&lt;br /&gt;
## If you want you can build a dmr++ for an example &amp;quot;&amp;lt;tt&amp;gt;&amp;lt;b&amp;gt;input_file&amp;lt;/b&amp;gt;&amp;lt;/tt&amp;gt;&amp;quot; using a  &amp;lt;tt&amp;gt;docker exec&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
##: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax get_dmrpp -b /usr/share/hyrax -o /usr/share/hyrax/input_file.dmrpp -u &amp;quot;file:///usr/share/hyrax/input_file&amp;quot; &amp;quot;input_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
## Or, if you want more scripting flexibility, you can log in to the docker container to do the same:&lt;br /&gt;
### Login to the docker container:&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;docker exec -it hyrax /bin/bash&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### Change working dir to data dir:&lt;br /&gt;
###:&amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;cd /usr/share/hyrax&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
### This sets the data directory to the current one (&amp;lt;tt&amp;gt;-b $(pwd)&amp;lt;/tt&amp;gt;) and sets the data URL (&amp;lt;tt&amp;gt;-u&amp;lt;/tt&amp;gt;) to the fully qualified path to the input file.&lt;br /&gt;
###: &amp;lt;b&amp;gt;&amp;lt;tt&amp;gt;get_dmrpp -b $(pwd) -o foo.dmrpp -u &amp;quot;file://&amp;quot;$(pwd)&amp;quot;/your_test_file&amp;quot; &amp;quot;your_test_file&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;&#039;&#039;Now that you have made a dmr++ file, use the running Hyrax server to view and test it by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
#:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
# You can also batch process all of your test granules, if you want to go that route. This script assumes your ingestable data files end with &#039;&amp;lt;tt&amp;gt;.h5&amp;lt;/tt&amp;gt;&#039;. &lt;br /&gt;
#: &#039;&#039;The resulting dmr++ files should contain the correct &amp;lt;tt&amp;gt;file://&amp;lt;/tt&amp;gt; URLs and be correctly located so that they may be tested with the Hyrax service running in the docker instance.&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script will write each output file as a sidecar file into &lt;br /&gt;
# the same directory as its associated input granule data file.&lt;br /&gt;
&lt;br /&gt;
# The target directory to search for data files &lt;br /&gt;
target_dir=/usr/share/hyrax&lt;br /&gt;
echo &amp;quot;target_dir: ${target_dir}&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Search the target_dir for names matching the pattern \*.h5 &lt;br /&gt;
for infile in `find &amp;quot;${target_dir}&amp;quot; -name \*.h5`&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot; Processing: ${infile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    infile_base=`basename &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;infile_base: ${infile_base}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    bes_dir=`dirname &amp;quot;${infile}&amp;quot;`&lt;br /&gt;
    echo &amp;quot;    bes_dir: ${bes_dir}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    outfile=&amp;quot;${infile}.dmrpp&amp;quot;&lt;br /&gt;
    echo &amp;quot;     Output: ${outfile}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    get_dmrpp -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
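The loop above uses backtick command substitution with find, which splits file names on whitespace. A variant that survives names containing spaces is sketched below; it is wrapped in a hypothetical &amp;lt;tt&amp;gt;build_dmrpp_tree&amp;lt;/tt&amp;gt; function so it can be pointed at any directory, and assumes get_dmrpp is on the PATH (as it is inside the Hyrax container):&lt;br /&gt;

```shell
#!/bin/bash
# Space-safe variant of the batch loop: find and read communicate with
# NUL-separated names, so paths containing spaces survive intact.
build_dmrpp_tree() {
    local target_dir="$1"
    find "${target_dir}" -name '*.h5' -print0 |
    while IFS= read -r -d '' infile; do
        infile_base=$(basename "${infile}")
        bes_dir=$(dirname "${infile}")
        outfile="${infile}.dmrpp"
        get_dmrpp -b "${bes_dir}" -o "${outfile}" -u "file://${infile}" "${infile_base}"
    done
}
# Example: build_dmrpp_tree /usr/share/hyrax
```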
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Remember that you can use the Hyrax server that is running in the docker container to view and test the dmr++ files you just created by pointing your browser at:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;http://localhost:8080/opendap/&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Testing and qualifying dmr++ files ===&lt;br /&gt;
In the previous section we created some initial dmr++ files using the default configuration. It is crucial to make sure that they provide the representation of the data that you and your users are expecting, and that they will work correctly with the Hyrax server (see the following sections for details). If the generated dmr++ files do not match expectations, then the default configuration of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; may need to be amended using the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; parameter.&lt;br /&gt;
If the data are currently being served by your DAAC&#039;s on-prem team, this is where it is important to understand exactly what localizations were made to the configurations of the on-prem Hyrax instances deployed for the collection. These localizations will probably need to be injected into &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in order to produce the correct data representation in the dmr++ files.&lt;br /&gt;
&lt;br /&gt;
=== Flattening Groups ===&lt;br /&gt;
By default &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; will preserve and show group hierarchies. If this is not desired, say for CF-1.0 compatibility, then you can change this by creating a small amendment to &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt;&#039;s default configuration. &lt;br /&gt;
First create the amending configuration file:&lt;br /&gt;
 echo &amp;quot;H5.EnableCF=true&amp;quot; &amp;gt; site.conf&lt;br /&gt;
Then, change the invocation of &amp;lt;tt&amp;gt;get_dmrpp&amp;lt;/tt&amp;gt; in the above example by adding the &amp;lt;tt&amp;gt;-s&amp;lt;/tt&amp;gt; switch:&lt;br /&gt;
 get_dmrpp -s site.conf -b &amp;quot;${bes_dir}&amp;quot; -o &amp;quot;${outfile}&amp;quot; -u &amp;quot;file://${infile}&amp;quot; &amp;quot;${infile_base}&amp;quot;&lt;br /&gt;
And re-run the dmr++ production as shown above.&lt;br /&gt;
&lt;br /&gt;
=== DAP representations ===&lt;br /&gt;
We have test and assurance procedures for the DAP4 and DAP2 protocols below. Both are important. For legacy datasets the DAP2 request API is widely used by an existing client base and should continue to be supported. Since DAP4 subsumes DAP2 (but with somewhat different API semantics), it should be checked for legacy datasets as well. For more modern datasets that contain DAP4 types, such as Int64, that are not part of the DAP2 specification or its implementations, we will need to rely on eliding the instances of unmapped types, or return an error when one is encountered.&lt;br /&gt;
 # Test Constants:&lt;br /&gt;
 GRANULE_FILE=&amp;quot;some_name.h5&amp;quot;&lt;br /&gt;
 # Granule URL&lt;br /&gt;
 gf_url=&amp;quot;http://localhost:8080/opendap/${GRANULE_FILE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== Inspect the dmr++ files====&lt;br /&gt;
# Do the dmr++ files have the expected dmrpp:href URL(s)?&lt;br /&gt;
#: &amp;lt;tt&amp;gt;head -2 ${GRANULE_FILE}.dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check DAP4 DMR Response ====&lt;br /&gt;
Inspect &amp;lt;tt&amp;gt;${gf_url}.dmrpp.dmr&amp;lt;/tt&amp;gt; &lt;br /&gt;
# Get the document, save as &amp;lt;tt&amp;gt;foo.dmr&amp;lt;/tt&amp;gt;:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;curl -L -o foo.dmr &amp;quot;${gf_url}.dmrpp.dmr&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
# Are the associated dimensions correct?&lt;br /&gt;
&lt;br /&gt;
==== DAP4 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (where [https://docs.opendap.org/index.php?title=DAP4:_Specification_Volume_1#Fully_Qualified_Names VARIABLE_NAME is a fully qualified DAP4 name]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_file &amp;quot;${gf_url}.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap4_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dap?dap4.ce=VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap4_subset_file dap4_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
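The &amp;lt;tt&amp;gt;cmp&amp;lt;/tt&amp;gt; step can be wrapped in a small helper that prints a clear verdict; &amp;lt;tt&amp;gt;check_same&amp;lt;/tt&amp;gt; is a name introduced here for illustration, not part of Hyrax:&lt;br /&gt;

```shell
# Compare two response files byte-for-byte and report the result;
# returns nonzero when they differ, so it can gate a test script.
check_same() {
    if cmp -s "$1" "$2"; then
        echo "PASS: $1 == $2"
    else
        echo "FAIL: $1 != $2"
        return 1
    fi
}
# Example: check_same dap4_subset_file dap4_subset_dmrpp
```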
&lt;br /&gt;
==== DAP4 UI test ====&lt;br /&gt;
* View and exercise the DAP4 Data Request Form: &amp;lt;tt&amp;gt;${gf_url}.dmr.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check DDS Response ====&lt;br /&gt;
&lt;br /&gt;
# Inspect &amp;lt;tt&amp;gt;${gf_url}.dds&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Is each variable&#039;s data type correct and as expected? &lt;br /&gt;
## Are the associated dimensions correct?&lt;br /&gt;
# Compare DMR++ DDS with granule file DDS. &lt;br /&gt;
#:For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_file &amp;quot;${gf_url}.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; curl -L -o dap2_dds_dmrpp &amp;quot;${gf_url}.dmrpp.dds&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#::&amp;lt;tt&amp;gt; cmp dap2_dds_file dap2_dds_dmrpp &amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== DAP2 Check binary data response ====&lt;br /&gt;
&lt;br /&gt;
For a particular granule GRANULE_FILE and a particular variable VARIABLE_NAME (Where [https://cdn.earthdata.nasa.gov/conduit/upload/512/ESE-RFC-004v1.1.pdf VARIABLE_NAME is a DAP2 name.]):&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_file &amp;quot;${gf_url}.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; curl -L -o dap2_subset_dmrpp &amp;quot;${gf_url}.dmrpp.dods?VARIABLE_NAME&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt; cmp dap2_subset_file dap2_subset_dmrpp&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: One might consider doing this with two or more variables.&lt;br /&gt;
&lt;br /&gt;
==== DAP2 UI Test ====&lt;br /&gt;
* View and exercise the DAP2 Data Request Form located here: &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Try it in Panoply!&lt;br /&gt;
** Open Panoply.&lt;br /&gt;
** From the &#039;&#039;&#039;File&#039;&#039;&#039; menu select &#039;&#039;&#039;Open Remote Dataset...&#039;&#039;&#039;&lt;br /&gt;
** Paste &amp;lt;tt&amp;gt;${gf_url}.html&amp;lt;/tt&amp;gt; into the resulting dialog box.&lt;br /&gt;
---&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13534</id>
		<title>Hyrax GitHub Source Build</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13534"/>
		<updated>2024-06-07T03:12:24Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Rocky 8 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This describes how to get and build Hyrax from our GitHub repositories. Hyrax is a data server that implements the DAP2 and DAP4 protocols, works with a number of different data formats and supports a wide variety of customization options from tailoring the look of the server&#039;s web pages to complex server-side processing operations. This page describes how to build the server&#039;s source code. If you&#039;re working on a Linux or OS/X computer, the process is similar so we describe only the linux case; we do not support building the server on Windows operating systems.&lt;br /&gt;
&lt;br /&gt;
To build and install the server, you need to perform three steps:&lt;br /&gt;
# Set up the computer to build source code (Install a Java compiler; install a C/C++ compiler; add some other tools)&lt;br /&gt;
# Build the C++ DAP library (&#039;&#039;libdap4&#039;&#039;) and the Hyrax BES daemon&lt;br /&gt;
# Build the Hyrax OLFS web application&lt;br /&gt;
&lt;br /&gt;
Quick links if you already know the process:&lt;br /&gt;
* [https://github.com/opendap/hyrax new all-in-one repo that uses shell scripts]&lt;br /&gt;
* [https://github.com/opendap/libdap libdap git repo]&lt;br /&gt;
* [https://github.com/opendap/bes BES git repo]&lt;br /&gt;
* [https://github.com/opendap/olfs OLFS git repo]&lt;br /&gt;
* [https://github.com/opendap/hyrax-dependencies Hyrax dependencies]&lt;br /&gt;
&lt;br /&gt;
= Set up a CentOS machine to build code =&lt;br /&gt;
== Setup CentOS-7 ==&lt;br /&gt;
Note that I don&#039;t like clicking around to different pages to follow simple directions, so what follows is a short version of the CentOS 6 configuration information we&#039;ve compiled for people who help us by building RPM packages for Hyrax. You can use this to extrapolate how to configure Ubuntu and OSX (we routinely build on those platforms as well). The complete instructions are in [[ConfigureCentos | Configure CentOS]] and describe how to set up a CentOS 6 machine to build software. What follows is the condensed version:&lt;br /&gt;
&lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum -y update&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load a basic software development environment:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.7.0-openjdk java-1.7.0-openjdk-devel ant ant-junit junit&#039;&#039;&#039;&amp;lt;/tt&amp;gt; (it&#039;s likely that you can use more recent versions of Java)&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; The whole thing, with java-1.8.0&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant ant-junit junit git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install rpm-devel rpm-build redhat-rpm-config&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Optional&lt;br /&gt;
:Download, unpack, build and install the GNU autotools (&#039;&#039;but &#039;&#039;&#039;don&#039;t&#039;&#039;&#039; do this unless the versions installed using yum don&#039;t work&#039;&#039;)&lt;br /&gt;
* autoconf &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz autoconf-2.69.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
* automake &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/automake/automake-1.14.1.tar.gz automake-1.14.1.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
* libtool &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/libtool/libtool-2.4.2.tar.gz libtool-2.4.2.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:build them (&amp;lt;tt&amp;gt;&#039;&#039;&#039;&#039;&#039;./configure; make; sudo make install &#039;&#039;&#039;&#039;&#039;&amp;lt;/tt&amp;gt; - this should take no more than three minutes).&lt;br /&gt;
&lt;br /&gt;
== Setup CentOS-8  ==&lt;br /&gt;
The CentOS-8 setup is very similar to CentOS-7, but there are some minor differences.&lt;br /&gt;
 &lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum -y update&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;You will need to enable power-tools for this setup&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum config-manager --set-enabled powertools&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load the basic software development environment plus the additional packages of openjpeg2, jasper, and libtirpc. Note that you may not need &#039;&#039;openjpeg2&#039;&#039; and &#039;&#039;jasper&#039;&#039; if you build the dependencies successfully. If you determine that you don&#039;t need these, please let us know. JUnit support has also been dropped so we dropped the &amp;lt;tt&amp;gt;&#039;&#039;ant-junit junit&#039;&#039;&amp;lt;/tt&amp;gt; packages from the install list.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc openjpeg2-devel jasper-devel libtirpc-devel&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Tell the machine where to find the tirpc libraries&lt;br /&gt;
:&amp;lt;tt&amp;gt;export CPPFLAGS=-I/usr/include/tirpc&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt;export LDFLAGS=-ltirpc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;NB: As of 1/28/22 you should not need to do this. The &#039;&#039;configure&#039;&#039; script should find the correct way to run python on CentOS 8. However, if it does not, our Makefiles (built from &#039;&#039;Makefile.am&#039;&#039; files) use &#039;&#039;python&#039;&#039; but a vanilla CentOS 8 machine only has &#039;&#039;python3&#039;&#039;. Until we fix this, you need to make sure &#039;&#039;python&#039;&#039; runs a python program. One way is to make a symbolic link between &#039;&#039;python3&#039;&#039; and &#039;&#039;python&#039;&#039; in a directory that is on your PATH. &#039;&#039;&#039;The TODO item here is to make sure &#039;&#039;python&#039;&#039; exists and can run a program&#039;&#039;&#039;. It is generally enough to verify that the command exists:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;which python&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
; Lacking that (which I was on Rocky8) install python&lt;br /&gt;
: &amp;lt;tt&amp;gt;sudo yum install -y python3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum install rpm-devel rpm-build redhat-rpm-config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once you run through the rest of the Hyrax build, make sure that both &#039;&#039;gdal&#039;&#039; and &#039;&#039;hdf4&#039;&#039; built correctly (look for their libraries in $prefix/deps/lib). To build them manually, run &#039;&#039;&#039;make gdal&#039;&#039;&#039;, &#039;&#039;&#039;make hdf4&#039;&#039;&#039;, and &#039;&#039;&#039;make netcdf4&#039;&#039;&#039; inside the hyrax-dependencies directory to build and install gdal, hdf4, and netcdf4.&lt;br /&gt;
&lt;br /&gt;
== Rocky 8 ==&lt;br /&gt;
&#039;&#039;Updated 6/6/2024&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the commands ps, which, etc.&lt;br /&gt;
 dnf install -y procps&lt;br /&gt;
&lt;br /&gt;
C++ environment plus build tools&lt;br /&gt;
 dnf install -y git gcc-c++ flex bison cmake autoconf automake libtool emacs bzip2 vim bc&lt;br /&gt;
&lt;br /&gt;
Development library versions&lt;br /&gt;
 dnf install -y openssl-devel libuuid-devel readline-devel zlib-devel bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel libtirpc-devel&lt;br /&gt;
&lt;br /&gt;
Java&lt;br /&gt;
 dnf install -y java-17-openjdk java-17-openjdk-devel ant &lt;br /&gt;
&lt;br /&gt;
Setup DNF so that we can load in some obscure packages from EPEL, etc., repos&lt;br /&gt;
 dnf install dnf-plugins-core&lt;br /&gt;
 dnf install epel-release&lt;br /&gt;
 dnf config-manager --set-enabled powertools&lt;br /&gt;
&lt;br /&gt;
Install CppUnit and some more development libraries&lt;br /&gt;
 dnf install -y cppunit cppunit-devel openjpeg2-devel jasper-devel&lt;br /&gt;
&lt;br /&gt;
Install the RPM tools&lt;br /&gt;
 dnf install -y rpm-devel rpm-build redhat-rpm-config&lt;br /&gt;
&lt;br /&gt;
Install the AWS CLI&lt;br /&gt;
 dnf install -y awscli&lt;br /&gt;
&lt;br /&gt;
= A semi-automatic build =&lt;br /&gt;
&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the short instructions in the README file.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;tt&amp;gt;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Summarized here, those instructions are:&lt;br /&gt;
;use bash: The shell scripts in this repo assume you are using bash.&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development: &#039;&#039;source spath.sh&#039;&#039;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies: &#039;&#039;./hyrax_clone.sh -v&#039;&#039;&lt;br /&gt;
;build the code, including the dependencies: &#039;&#039;./hyrax_build.sh -v&#039;&#039;&lt;br /&gt;
;test the server: Start the BES using  &#039;&#039;besctl start&#039;&#039;&lt;br /&gt;
:Start the OLFS using &#039;&#039;./build/apache-tomcat-7.0.57/bin/startup.sh&#039;&#039;&lt;br /&gt;
:Test the server by looking at &#039;&#039;&amp;lt;nowiki&amp;gt;http://localhost:8080/opendap&amp;lt;/nowiki&amp;gt;&#039;&#039; in a browser. You should see a directory named &#039;&#039;data&#039;&#039;; following that link should lead to more data. The server will also be accessible to clients other than a web browser.&lt;br /&gt;
:To test the BES independently of the front end, use &#039;&#039;bescmdln&#039;&#039; and give it the &#039;&#039;show version;&#039;&#039; command; you should see output about the different components and their versions. &lt;br /&gt;
:Use &#039;&#039;exit&#039;&#039; to leave the command line test client.&lt;br /&gt;
&lt;br /&gt;
As described in the README file that is part of the &#039;&#039;hyrax&#039;&#039; repo, there are some other scripts in the repo and some options to the &#039;&#039;clone&#039;&#039; and &#039;&#039;build&#039;&#039; script that you can investigate by using -h (help).&lt;br /&gt;
&lt;br /&gt;
= The manual build = &lt;br /&gt;
&lt;br /&gt;
In the following, we describe only the build process for CentOS; the one for OS/X is similar and we note the differences where they are significant.&lt;br /&gt;
&lt;br /&gt;
== Get Hyrax from GitHub ==&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the instructions on this page (which differ a bit from ones in the project&#039;s README)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;tt&amp;gt;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have the &#039;&#039;hyrax&#039;&#039; project cloned:&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;source spath.sh&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;./hyrax_clone.sh -v&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;proceed with the rest of the build as described in the following sections of this page&lt;br /&gt;
&lt;br /&gt;
== Important Note ==&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Many of the problems people have with the build stem from not setting the shell correctly for the build.&amp;lt;/font&amp;gt;&lt;br /&gt;
In the above section, &#039;&#039;make sure&#039;&#039; you run &#039;&#039;&#039;source spath.sh&#039;&#039;&#039; before you run any of the building/compiling/testing commands that use the source code or build files. While the &#039;&#039;$prefix&#039;&#039; and &#039;&#039;$PATH&#039;&#039; environment variables are simple to set up, they are needed by most users. When you exit a terminal window and then open a new one, make sure to (re)source the &#039;&#039;spath.sh&#039;&#039; file in the new shell. You don&#039;t have to source spath.sh every time you enter the &#039;&#039;hyrax&#039;&#039; directory, but you must run it for every new instance of the shell.&lt;br /&gt;
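Roughly, what such a setup script does can be sketched as follows; the variable names and directory layout here are assumptions, and the authoritative version is the spath.sh in the hyrax repo:&lt;br /&gt;

```shell
# Rough sketch of an spath.sh-style setup (illustrative only): a local
# install prefix plus a PATH that finds the freshly built tools.
export prefix="$(pwd)/build"
export PATH="${prefix}/bin:${prefix}/deps/bin:${PATH}"
echo "prefix set to ${prefix}"
```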
&lt;br /&gt;
== Compile the Hyrax dependencies ==&lt;br /&gt;
Use git to clone the hyrax-dependencies:&lt;br /&gt;
  git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
And then build it. Unlike many source packages, there is no need to run a configure script, just &#039;&#039;make&#039;&#039; will do. However, the Makefile in this package expects &#039;&#039;$prefix&#039;&#039; to be set as described above. It will put all of the Hyrax server dependencies in a subdirectory called &#039;&#039;deps&#039;&#039;. To build the dependencies for building RPMs, use &#039;&#039;make -j9 for-static-rpm&#039;&#039;.&lt;br /&gt;
;(make sure you&#039;re in the directory set to &#039;&#039;$prefix&#039;&#039;)&lt;br /&gt;
&amp;lt;tt&amp;gt;&lt;br /&gt;
;git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
; cd hyrax-dependencies&lt;br /&gt;
; make --jobs=9&lt;br /&gt;
: &#039;&#039;The &#039;&#039;&#039;--jobs=N&#039;&#039;&#039; option runs a parallel build with at most N simultaneous compile operations, which yields a large speedup on multi-core machines. &#039;&#039;&#039;-jN&#039;&#039;&#039; is the short form of the option.&#039;&#039;&lt;br /&gt;
;cd ..: &#039;&#039;Go back up to &#039;&#039;&#039;$prefix&#039;&#039;&#039; &#039;&#039;&lt;br /&gt;
&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; You can get some of the &#039;&#039;dependencies&#039;&#039; for Hyrax like &#039;&#039;netCDF&#039;&#039; from the EPEL repository, but the versions are often older than Hyrax needs. Contact us if you want information about using EPEL. At the risk of throwing people a curve ball, here&#039;s a synopsis of the process. Don&#039;t do this unless you know EPEL well. Use [http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm epel-release-6-8.noarch.rpm] and install it using &#039;&#039;sudo yum install epel-release-6-8.noarch.rpm&#039;&#039;. Then install packages needed to read various file formats: &#039;&#039;yum install netcdf-devel hdf-devel hdf5-devel libicu-devel cfitsio-devel cppunit-devel rpm-devel rpm-build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Build &#039;&#039;libdap&#039;&#039; and the &#039;&#039;BES&#039;&#039; daemon ==&lt;br /&gt;
&lt;br /&gt;
==== Get and build libdap4 ====&lt;br /&gt;
;WARNING: If you have &#039;&#039;libdap&#039;&#039; already, uninstall it before proceeding.&lt;br /&gt;
Build, test and install libdap4 into $prefix:&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone https://github.com/opendap/libdap4&lt;br /&gt;
cd libdap4&lt;br /&gt;
autoreconf -fiv&lt;br /&gt;
./configure --prefix=$prefix --enable-developer &lt;br /&gt;
make -j9&lt;br /&gt;
make check -j9&lt;br /&gt;
make install&lt;br /&gt;
cd .. # Go back up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Get and build the BES and all of the modules shipped with Hyrax ====&lt;br /&gt;
Build, test and install the BES and its modules&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;git clone https://github.com/opendap/bes # Clone the BES from GitHub&lt;br /&gt;
cd bes # Enter the bes directory&lt;br /&gt;
git submodule update --init # Update the submodules&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
That will clone some additional modules into the directory &#039;&#039;modules&#039;&#039;; you need to do this! (Previously it was an optional step.) See [http://git-scm.com/docs/git-submodule git submodule] for information about all you can do with git&#039;s submodule command. Also note that this does not check out a particular branch for the submodules; the modules are left in the &#039;detached HEAD&#039; state. To check out a particular branch like &#039;master&#039;, which is important if you&#039;ll be making changes to that code, use &#039;&#039;git submodule foreach &#039;git checkout master&#039; &#039;&#039;. &lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;autoreconf --force --install --verbose # You can use -fiv instead of the long options.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means that, when starting from a freshly cloned repo, autoreconf runs all of the autotools commands and installs all of the needed scripts.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;./configure --prefix=$prefix  --with-dependencies=$prefix/deps --enable-developer&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: Notes:&lt;br /&gt;
:* The --with-dependencies option is not needed if you load the dependencies from RPMs or otherwise have them installed and generally accessible on the build machine.&lt;br /&gt;
:* The --enable-developer option compiles in all of the debugging code, which may affect performance even if the debugging output is not enabled.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make -j9&lt;br /&gt;
make check -j9&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Some tests may fail; adding &#039;&#039;-k&#039;&#039; tells make to ignore failures and keep going. &#039;&#039;Note that you must run &#039;&#039;&#039;make&#039;&#039;&#039; before &#039;&#039;&#039;make check&#039;&#039;&#039; in the bes code&#039;&#039;.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make install&lt;br /&gt;
cd .. # Go back up to $prefix&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Test the BES ====&lt;br /&gt;
Start the BES and verify that all of the modules build correctly.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;besctl start # Start the BES.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Given that &#039;&#039;$prefix/bin&#039;&#039; is on your &#039;&#039;$PATH&#039;&#039;, this should start the BES. You will not need to be root if you used the &#039;&#039;--enable-developer&#039;&#039; switch with configure (as shown above); otherwise you should run &#039;&#039;sudo besctl start&#039;&#039;, with the caveat that as root &#039;&#039;$prefix/bin&#039;&#039; will probably not be on your &#039;&#039;$PATH&#039;&#039;.&lt;br /&gt;
:If there&#039;s an error (e.g., you tried to start as a regular user but need to be root), edit bes.conf so the BES runs as a real user (yourself?) in a real group (use &#039;groups&#039; to see which groups you are in), and also check that the bes.log file is &#039;&#039;not&#039;&#039; owned by root. &lt;br /&gt;
:Restart.&lt;br /&gt;
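If you need to make that edit, the relevant fragment of ''bes.conf'' looks like the following. The key names are as I recall them from a stock ''bes.conf'' (verify against your installed file); the user and group values are placeholders:

```ini
# Fragment of bes.conf: the identity the BES daemon runs under.
# 'jimg' and 'staff' are placeholder values; use your own user and group.
BES.User=jimg
BES.Group=staff
```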
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;bescmdln # Now that the BES is running, start the BES testing tool&lt;br /&gt;
BESClient&amp;gt; show version; # Send the BES the version command to see if it&#039;s running &amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
:Take a quick look at the output. There should be entries for libdap, bes and all of the modules.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt; BESClient&amp;gt; exit; # Exit the testing tool&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that even though you have exited the &#039;&#039;bescmdln&#039;&#039; test tool, the BES is still running. That&#039;s fine - we&#039;ll use it in just a bit - but if you want to shut it down, use &#039;&#039;besctl stop&#039;&#039;, or &#039;&#039;besctl pids&#039;&#039; to see the daemon&#039;s processes. If the BES is not stopping, &#039;&#039;besctl kill&#039;&#039; will stop all BES processes without waiting for them to complete their current task.&lt;br /&gt;
&lt;br /&gt;
== Build the Hyrax &#039;&#039;OLFS&#039;&#039; web application ==&lt;br /&gt;
The OLFS is a Java servlet web application, built using ant, that runs with Tomcat, Glassfish, etc. You need a copy of Tomcat; our servlet does not work with the RPM version of Tomcat, so get [http://tomcat.apache.org/download-70.cgi Tomcat 7 from Apache]. Note that if you built the dependencies from source using the &#039;&#039;hyrax-dependencies-1.10.tar&#039;&#039; file, then there is a copy of Tomcat in the &#039;&#039;hyrax-dependencies/extra_downloads&#039;&#039; directory. You can unpack the Tomcat tar file in &#039;&#039;$prefix&#039;&#039;; I&#039;ll assume you have the Apache Tomcat tar file there.&lt;br /&gt;
&lt;br /&gt;
;tar -xzf apache-tomcat-7.0.57.tar.gz: Expand the Tomcat tar ball&lt;br /&gt;
;git clone https://github.com/opendap/olfs: Get the OLFS source code&lt;br /&gt;
;cd olfs: change directory to the OLFS source&lt;br /&gt;
;ant server: Build it&lt;br /&gt;
;cp build/dist/opendap.war ../apache-tomcat-7.0.57/webapps/: Copy the opendap web archive to the tomcat webapps directory.&lt;br /&gt;
;cd ..: Go up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/startup.sh: Start Tomcat&lt;br /&gt;
&lt;br /&gt;
== Test the server ==&lt;br /&gt;
You can test the server several ways, but the most fun is to use a web browser. The URL &#039;&#039;http://&amp;lt;machine&amp;gt;:8080/opendap&#039;&#039; should return a page pointing to a collection of test datasets bundled with the server. You can also use &#039;&#039;curl&#039;&#039;, &#039;&#039;wget&#039;&#039; or any application that can read from OpenDAP servers (e.g., Matlab, Octave, ArcGIS, IDL, ...).&lt;br /&gt;
&lt;br /&gt;
== Stopping the server ==&lt;br /&gt;
Stop both the BES and Apache&lt;br /&gt;
&lt;br /&gt;
;besctl stop&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/shutdown.sh&lt;br /&gt;
&lt;br /&gt;
Note that there is also a &#039;&#039;hyraxctl&#039;&#039; script that provides a way to start and stop Hyrax without you (or &#039;&#039;init.d&#039;&#039;) having to type separate commands for both the BES and OLFS. This script is part of the BES software you cloned from git.&lt;br /&gt;
&lt;br /&gt;
== Building select parts of the BES ==&lt;br /&gt;
Building just the BES and one or more of its handlers/modules is not at all hard to do with a checkout of code from git. In the above section on building the BES, simply skip the step where the submodules are cloned (&#039;&#039;git submodule update --init&#039;&#039;) and link configure.ac to &#039;&#039;configure_standard.ac&#039;&#039;. The rest of the process is as shown. The end result is a BES daemon without any of the standard Hyrax modules (but support for DAP will be built if &#039;&#039;libdap&#039;&#039; is found by the configure script).&lt;br /&gt;
&lt;br /&gt;
To build modules for the BES, simply go to &#039;&#039;$prefix&#039;&#039;, clone their git repos and build them, taking care to set &#039;&#039;$prefix&#039;&#039; when calling each module&#039;s &#039;&#039;configure&#039;&#039; script. &lt;br /&gt;
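A standalone module build follows the pattern sketched below. The module name is a placeholder (not a real repo), and the fallback prefix value is only for illustration; the clone-and-build steps are shown as comments because they need a real repo and network access:

```shell
# Pattern for building a standalone BES module against $prefix.
# 'some_module' is a placeholder name; the fallback root is illustrative.
prefix="${prefix:-$HOME/hyrax-build}"
echo "modules will be configured with --prefix=$prefix"

# The build itself (as comments, since it needs a real repo and network):
#   cd "$prefix"
#   git clone https://github.com/opendap/some_module
#   cd some_module
#   autoreconf -fiv
#   ./configure --prefix="$prefix"
#   make -j9 && make install
```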
&lt;br /&gt;
Note that it is easy to combine the &#039;build it all&#039; and &#039;build just one&#039; processes so that a complete Hyrax BES can be built in one go and then a new module/handler not included in the BES git repo can be built and used. Each module we have on GitHub has a &#039;&#039;configure.ac&#039;&#039;, &#039;&#039;Makefile.am&#039;&#039;, etc., that will support both kinds of builds and [[Configuration of BES Modules]] explains how to take a module/handler that builds as a standalone module and tweak the build scripts so that it&#039;s fully integrated into the Hyrax BES build, too.&lt;br /&gt;
&lt;br /&gt;
= Building on Ubuntu =&lt;br /&gt;
This was tested using Xenial (Ubuntu 16)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get update&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Packages needed:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get install ...&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ant junit git flex bison autoconf automake libtool emacs openssl bzip2 libjpeg-dev libxml2-dev curl libicu-dev vim bc make cmake uuid-dev libcurl4-openssl-dev libicu-dev g++ zlib1g-dev libcppunit-dev libssl-dev&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13533</id>
		<title>Hyrax GitHub Source Build</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13533"/>
		<updated>2024-06-07T02:54:23Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Setup CentOS-8 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This describes how to get and build Hyrax from our GitHub repositories. Hyrax is a data server that implements the DAP2 and DAP4 protocols, works with a number of different data formats and supports a wide variety of customization options from tailoring the look of the server&#039;s web pages to complex server-side processing operations. This page describes how to build the server&#039;s source code. If you&#039;re working on a Linux or OS/X computer, the process is similar so we describe only the linux case; we do not support building the server on Windows operating systems.&lt;br /&gt;
&lt;br /&gt;
To build and install the server, you need to perform three steps:&lt;br /&gt;
# Set up the computer to build source code (Install a Java compiler; install a C/C++ compiler; add some other tools)&lt;br /&gt;
# Build the C++ DAP library (&#039;&#039;libdap4&#039;&#039;) and the Hyrax BES daemon&lt;br /&gt;
# Build the Hyrax OLFS web application&lt;br /&gt;
&lt;br /&gt;
Quick links if you already know the process:&lt;br /&gt;
* [https://github.com/opendap/hyrax new all-in-one repo that uses shell scripts]&lt;br /&gt;
* [https://github.com/opendap/libdap libdap git repo]&lt;br /&gt;
* [https://github.com/opendap/bes BES git repo]&lt;br /&gt;
* [https://github.com/opendap/olfs OLFS git repo]&lt;br /&gt;
* [https://github.com/opendap/hyrax-dependencies Hyrax dependencies]&lt;br /&gt;
&lt;br /&gt;
= Set up a CentOS machine to build code =&lt;br /&gt;
== Setup CentOS-7 ==&lt;br /&gt;
Note that I don&#039;t like clicking around to different pages to follow simple directions, so what follows is a short version of the CentOS 6 configuration information we&#039;ve compiled for people who help us by building RPM packages for Hyrax. You can use this to extrapolate how to configure Ubuntu and OSX (we routinely build on those platforms as well). The complete instructions are in [[ConfigureCentos | Configure CentOS]] and describe how to set up a CentOS 6 machine to build software. What follows is the condensed version:&lt;br /&gt;
&lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum -y update&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load a basic software development environment:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.7.0-openjdk java-1.7.0-openjdk-devel ant ant-junit junit&#039;&#039;&#039;&amp;lt;/tt&amp;gt; (it&#039;s likely that you can use more recent versions of Java)&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; The whole thing, with java-1.8.0&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant ant-junit junit git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install rpm-devel rpm-build redhat-rpm-config&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Optional&lt;br /&gt;
:Download, unpack, build and install the GNU autotools (&#039;&#039;but &#039;&#039;&#039;don&#039;t&#039;&#039;&#039; do this unless the versions installed using yum don&#039;t work&#039;&#039;)&lt;br /&gt;
* autoconf &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz autoconf-2.69.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
* automake &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/automake/automake-1.14.1.tar.gz automake-1.14.1.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
* libtool &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/libtool/libtool-2.4.2.tar.gz libtool-2.4.2.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:build them (&amp;lt;tt&amp;gt;&#039;&#039;&#039;&#039;&#039;./configure; make; sudo make install &#039;&#039;&#039;&#039;&#039;&amp;lt;/tt&amp;gt; - this should take no more than three minutes).&lt;br /&gt;
&lt;br /&gt;
== Setup CentOS-8  ==&lt;br /&gt;
The CentOS-8 setup is very similar to CentOS-7, but there are some minor differences.&lt;br /&gt;
 &lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum -y update&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;You will need to enable power-tools for this setup&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum config-manager --set-enabled powertools&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load the basic software development environment plus the additional packages of openjpeg2, jasper, and libtirpc. Note that you may not need &#039;&#039;openjpeg2&#039;&#039; and &#039;&#039;jasper&#039;&#039; if you build the dependencies successfully. If you determine that you don&#039;t need these, please let us know. JUnit support has also been dropped so we dropped the &amp;lt;tt&amp;gt;&#039;&#039;ant-junit junit&#039;&#039;&amp;lt;/tt&amp;gt; packages from the install list.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc openjpeg2-devel jasper-devel libtirpc-devel&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Tell the machine where to find the tirpc libraries&lt;br /&gt;
:&amp;lt;tt&amp;gt;export CPPFLAGS=-I/usr/include/tirpc&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt;export LDFLAGS=-ltirpc&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;NB: As of 1/28/22 you should not need to do this. The &#039;&#039;configure&#039;&#039; script should find the correct way to run python on CentOS 8. However, if it does not, our Makefiles (built from &#039;&#039;Makefile.am&#039;&#039; files) use &#039;&#039;python&#039;&#039; but a vanilla CentOS 8 machine only has &#039;&#039;python3&#039;&#039;. Until we fix this, you need to make sure &#039;&#039;python&#039;&#039; runs a python program. One way is to make a symbolic link between &#039;&#039;python3&#039;&#039; and &#039;&#039;python&#039;&#039; in a directory that is on your PATH. &#039;&#039;&#039;The TODO item here is to make sure &#039;&#039;python&#039;&#039; exists and can run a program&#039;&#039;&#039;. It is generally enough to verify that the command exists:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;which python&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
; If it is missing (as it was for me on Rocky 8), install python&lt;br /&gt;
: &amp;lt;tt&amp;gt;sudo yum install -y python3&amp;lt;/tt&amp;gt;&lt;br /&gt;
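A quick way to perform that check is sketched below. It only reports what it finds; since creating the symlink needs root, the suggested command is printed rather than run:

```shell
# Report whether 'python' resolves, and suggest the symlink when only
# python3 is present. Nothing here modifies the system.
if command -v python >/dev/null 2>&1; then
    echo "python found at $(command -v python)"
elif command -v python3 >/dev/null 2>&1; then
    echo "only python3 found; consider:"
    echo "  sudo ln -s $(command -v python3) /usr/local/bin/python"
else
    echo "no python interpreter found; install python3"
fi
```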
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;yum install rpm-devel rpm-build redhat-rpm-config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once you run through the rest of the hyrax build, make sure that both &#039;&#039;gdal&#039;&#039; and &#039;&#039;hdf4&#039;&#039; built correctly (look for their libraries in $prefix/deps/lib). To build them manually, run &#039;&#039;&#039;make gdal&#039;&#039;&#039;, &#039;&#039;&#039;make hdf4&#039;&#039;&#039;, and &#039;&#039;&#039;make netcdf4&#039;&#039;&#039; inside the hyrax-dependencies directory.&lt;br /&gt;
&lt;br /&gt;
== Rocky 8 ==&lt;br /&gt;
&#039;&#039;Updated 6/6/2024&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the commands ps, which, etc.&lt;br /&gt;
 dnf install -y procps&lt;br /&gt;
&lt;br /&gt;
C++ environment plus build tools&lt;br /&gt;
 dnf install -y git gcc-c++ flex bison cmake autoconf automake libtool emacs bzip2 vim bc&lt;br /&gt;
&lt;br /&gt;
Development library versions&lt;br /&gt;
 dnf install -y openssl-devel libuuid-devel readline-devel zlib-devel bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel libtirpc-devel&lt;br /&gt;
&lt;br /&gt;
Java&lt;br /&gt;
 dnf install -y java-17-openjdk java-17-openjdk-devel ant &lt;br /&gt;
&lt;br /&gt;
Setup DNF so that we can load in some obscure packages from EPEL, etc., repos&lt;br /&gt;
 dnf install dnf-plugins-core&lt;br /&gt;
 dnf install epel-release&lt;br /&gt;
 dnf config-manager --set-enabled powertools&lt;br /&gt;
&lt;br /&gt;
Install CppUnit and some more development libraries&lt;br /&gt;
 dnf install -y cppunit cppunit-devel openjpeg2-devel jasper-devel&lt;br /&gt;
&lt;br /&gt;
= A semi-automatic build =&lt;br /&gt;
&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the short instructions in the README file.&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Summarized here, those instructions are:&lt;br /&gt;
;use bash: The shell scripts in this repo assume you are using bash.&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development: &#039;&#039;source spath.sh&#039;&#039;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies: &#039;&#039;./hyrax_clone.sh -v&#039;&#039;&lt;br /&gt;
;build the code, including the dependencies: &#039;&#039;./hyrax_build.sh -v&#039;&#039;&lt;br /&gt;
;test the server: Start the BES using  &#039;&#039;besctl start&#039;&#039;&lt;br /&gt;
:Start the OLFS using &#039;&#039;./build/apache-tomcat-7.0.57/bin/startup.sh&#039;&#039;&lt;br /&gt;
:Test the server by looking at &#039;&#039;&amp;lt;nowiki&amp;gt;http://localhost:8080/opendap&amp;lt;/nowiki&amp;gt;&#039;&#039; in a browser. You should see a directory named &#039;&#039;data&#039;&#039;, and following that link should lead to more data. The server will also be accessible to clients other than a web browser.&lt;br /&gt;
:To test the BES independently of the front end, use &#039;&#039;bescmdln&#039;&#039; and give it the &#039;&#039;show version;&#039;&#039; command; you should see output about the different components and their versions. &lt;br /&gt;
:Use &#039;&#039;exit&#039;&#039; to leave the command line test client.&lt;br /&gt;
&lt;br /&gt;
As described in the README file that is part of the &#039;&#039;hyrax&#039;&#039; repo, there are some other scripts in the repo, and the &#039;&#039;clone&#039;&#039; and &#039;&#039;build&#039;&#039; scripts have options that you can investigate using -h (help).&lt;br /&gt;
&lt;br /&gt;
= The manual build = &lt;br /&gt;
&lt;br /&gt;
In the following, we describe only the build process for CentOS; the one for OS/X is similar and we note the differences where they are significant.&lt;br /&gt;
&lt;br /&gt;
== Get Hyrax from GitHub ==&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the instructions on this page (which differ a bit from the ones in the project&#039;s README).&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have the &#039;&#039;hyrax&#039;&#039; project cloned:&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;source spath.sh&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;./hyrax_clone.sh -v&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;proceed with the rest of the build as described in the following sections of this page&lt;br /&gt;
&lt;br /&gt;
== Important Note ==&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Many of the problems people have with the build stem from not setting the shell correctly for the build.&amp;lt;/font&amp;gt;&lt;br /&gt;
In the above section, &#039;&#039;make sure&#039;&#039; you run &#039;&#039;&#039;source spath.sh&#039;&#039;&#039; before you run any of the build, compile, or test commands that use the source code or build files. The &#039;&#039;$prefix&#039;&#039; and &#039;&#039;$PATH&#039;&#039; environment variables are simple to set up, but the build depends on them. When you exit a terminal window and then open a new one, (re)source the &#039;&#039;spath.sh&#039;&#039; file in the new shell. You don&#039;t have to source spath.sh every time you enter the &#039;&#039;hyrax&#039;&#039; directory, but you must run it once in every new instance of the shell.&lt;br /&gt;
&lt;br /&gt;
== Compile the Hyrax dependencies ==&lt;br /&gt;
Use git to clone the hyrax-dependencies:&lt;br /&gt;
  git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
Then build it. Unlike many source packages, there is no need to run a configure script; just &#039;&#039;make&#039;&#039; will do. However, the Makefile in this package expects &#039;&#039;$prefix&#039;&#039; to be set as described above. It will put all of the Hyrax server dependencies in a subdirectory called &#039;&#039;deps&#039;&#039;. To build the dependencies for building RPMs, use &#039;&#039;make -j9 for-static-rpm&#039;&#039;.&lt;br /&gt;
;(make sure you&#039;re in the directory set to &#039;&#039;$prefix&#039;&#039;)&lt;br /&gt;
&amp;lt;tt&amp;gt;&lt;br /&gt;
;git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
; cd hyrax-dependencies&lt;br /&gt;
; make --jobs=9&lt;br /&gt;
: &#039;&#039;The &#039;&#039;&#039;--jobs=N&#039;&#039;&#039; option runs a parallel build with at most N simultaneous compile operations, which yields a large speedup on multi-core machines. &#039;&#039;&#039;-jN&#039;&#039;&#039; is the short form of the option.&#039;&#039;&lt;br /&gt;
;cd ..: &#039;&#039;Go back up to &#039;&#039;&#039;$prefix&#039;&#039;&#039; &#039;&#039;&lt;br /&gt;
&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; You can get some of the &#039;&#039;dependencies&#039;&#039; for Hyrax like &#039;&#039;netCDF&#039;&#039; from the EPEL repository, but the versions are often older than Hyrax needs. Contact us if you want information about using EPEL. At the risk of throwing people a curve ball, here&#039;s a synopsis of the process. Don&#039;t do this unless you know EPEL well. Use [http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm epel-release-6-8.noarch.rpm] and install it using &#039;&#039;sudo yum install epel-release-6-8.noarch.rpm&#039;&#039;. Then install packages needed to read various file formats: &#039;&#039;yum install netcdf-devel hdf-devel hdf5-devel libicu-devel cfitsio-devel cppunit-devel rpm-devel rpm-build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Build &#039;&#039;libdap&#039;&#039; and the &#039;&#039;BES&#039;&#039; daemon ==&lt;br /&gt;
&lt;br /&gt;
==== Get and build libdap4 ====&lt;br /&gt;
;WARNING: If you have &#039;&#039;libdap&#039;&#039; already, uninstall it before proceeding.&lt;br /&gt;
Build, test and install libdap4 into $prefix:&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone https://github.com/opendap/libdap4&lt;br /&gt;
cd libdap4&lt;br /&gt;
autoreconf -fiv&lt;br /&gt;
./configure --prefix=$prefix --enable-developer &lt;br /&gt;
make -j9&lt;br /&gt;
make check -j9&lt;br /&gt;
make install&lt;br /&gt;
cd .. # Go back up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Get and build the BES and all of the modules shipped with Hyrax ====&lt;br /&gt;
Build, test and install the BES and its modules&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;git clone https://github.com/opendap/bes # Clone the BES from GitHub&lt;br /&gt;
cd bes # Enter the bes directory&lt;br /&gt;
git submodule update --init # Update the submodules&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
That will clone some additional modules into the directory &#039;&#039;modules&#039;&#039;; you need to do this! (Previously it was an optional step.) See [http://git-scm.com/docs/git-submodule git submodule] for information about all you can do with git&#039;s submodule command. Also note that this does not check out a particular branch for the submodules; the modules are left in the &#039;detached HEAD&#039; state. To check out a particular branch like &#039;master&#039;, which is important if you&#039;ll be making changes to that code, use &#039;&#039;git submodule foreach &#039;git checkout master&#039; &#039;&#039;. &lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;autoreconf --force --install --verbose # You can use -fiv instead of the long options.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means that, when starting from a freshly cloned repo, autoreconf runs all of the autotools commands and installs all of the needed scripts.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;./configure --prefix=$prefix  --with-dependencies=$prefix/deps --enable-developer&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: Notes:&lt;br /&gt;
:* The --with-dependencies option is not needed if you load the dependencies from RPMs or otherwise have them installed and generally accessible on the build machine.&lt;br /&gt;
:* The --enable-developer option compiles in all of the debugging code, which may affect performance even if the debugging output is not enabled.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make -j9&lt;br /&gt;
make check -j9&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Some tests may fail; adding &#039;&#039;-k&#039;&#039; tells make to ignore failures and keep going. &#039;&#039;Note that you must run &#039;&#039;&#039;make&#039;&#039;&#039; before &#039;&#039;&#039;make check&#039;&#039;&#039; in the bes code&#039;&#039;.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make install&lt;br /&gt;
cd .. # Go back up to $prefix&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Test the BES ====&lt;br /&gt;
Start the BES and verify that all of the modules build correctly.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;besctl start # Start the BES.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Given that &#039;&#039;$prefix/bin&#039;&#039; is on your &#039;&#039;$PATH&#039;&#039;, this should start the BES. You will not need to be root if you used the &#039;&#039;--enable-developer&#039;&#039; switch with configure (as shown above); otherwise you should run &#039;&#039;sudo besctl start&#039;&#039;, with the caveat that as root &#039;&#039;$prefix/bin&#039;&#039; will probably not be on your &#039;&#039;$PATH&#039;&#039;.&lt;br /&gt;
:If there&#039;s an error (e.g., you tried to start as a regular user but need to be root), edit bes.conf so the BES runs as a real user (yourself?) in a real group (use &#039;groups&#039; to see which groups you are in), and also check that the bes.log file is &#039;&#039;not&#039;&#039; owned by root. &lt;br /&gt;
:Restart.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;bescmdln # Now that the BES is running, start the BES testing tool&lt;br /&gt;
BESClient&amp;gt; show version; # Send the BES the version command to see if it&#039;s running &amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
:Take a quick look at the output. There should be entries for libdap, bes and all of the modules.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt; BESClient&amp;gt; exit; # Exit the testing tool&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that even though you have exited the &#039;&#039;bescmdln&#039;&#039; test tool, the BES is still running. That&#039;s fine - we&#039;ll use it in just a bit - but if you want to shut it down, use &#039;&#039;besctl stop&#039;&#039;, or &#039;&#039;besctl pids&#039;&#039; to see the daemon&#039;s processes. If the BES is not stopping, &#039;&#039;besctl kill&#039;&#039; will stop all BES processes without waiting for them to complete their current task.&lt;br /&gt;
&lt;br /&gt;
== Build the Hyrax &#039;&#039;OLFS&#039;&#039; web application ==&lt;br /&gt;
The OLFS is a Java servlet web application, built using ant, that runs with Tomcat, Glassfish, etc. You need a copy of Tomcat; our servlet does not work with the RPM version of Tomcat, so get [http://tomcat.apache.org/download-70.cgi Tomcat 7 from Apache]. Note that if you built the dependencies from source using the &#039;&#039;hyrax-dependencies-1.10.tar&#039;&#039; file, then there is a copy of Tomcat in the &#039;&#039;hyrax-dependencies/extra_downloads&#039;&#039; directory. You can unpack the Tomcat tar file in &#039;&#039;$prefix&#039;&#039;; I&#039;ll assume you have the Apache Tomcat tar file there.&lt;br /&gt;
&lt;br /&gt;
;tar -xzf apache-tomcat-7.0.57.tar.gz: Expand the Tomcat tar ball&lt;br /&gt;
;git clone https://github.com/opendap/olfs: Get the OLFS source code&lt;br /&gt;
;cd olfs: change directory to the OLFS source&lt;br /&gt;
;ant server: Build it&lt;br /&gt;
;cp build/dist/opendap.war ../apache-tomcat-7.0.57/webapps/: Copy the opendap web archive to the Tomcat webapps directory.&lt;br /&gt;
;cd ..: Go up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/startup.sh: Start Tomcat&lt;br /&gt;
&lt;br /&gt;
== Test the server ==&lt;br /&gt;
You can test the server several ways, but the most fun is to use a web browser. The URL &#039;&#039;http://&amp;lt;machine&amp;gt;:8080/opendap&#039;&#039; should return a page pointing to a collection of test datasets bundled with the server. You can also use &#039;&#039;curl&#039;&#039;, &#039;&#039;wget&#039;&#039; or any application that can read from OPeNDAP servers (e.g., Matlab, Octave, ArcGIS, IDL, ...).&lt;br /&gt;
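A minimal command-line check might look like the sketch below; the hostname and port are the defaults assumed on this page, so adjust them for your machine:&lt;br /&gt;

```shell
# Probe the top-level OPeNDAP catalog page of a locally running server.
host=localhost
url="http://${host}:8080/opendap/"

# -s silences progress output; -f makes curl exit non-zero on HTTP errors.
if curl -sf "$url" > /dev/null; then
    echo "server is up at $url"
else
    echo "server not reachable at $url"
fi
```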
&lt;br /&gt;
== Stopping the server ==&lt;br /&gt;
Stop both the BES and Tomcat&lt;br /&gt;
&lt;br /&gt;
;besctl stop&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/shutdown.sh&lt;br /&gt;
&lt;br /&gt;
Note that there is also a &#039;&#039;hyraxctl&#039;&#039; script that provides a way to start and stop Hyrax without you (or &#039;&#039;init.d&#039;&#039;) having to type separate commands for both the BES and OLFS. This script is part of the BES software you cloned from git.&lt;br /&gt;
&lt;br /&gt;
== Building select parts of the BES ==&lt;br /&gt;
Building just the BES and one or more of its handlers/modules is not at all hard to do with a checkout of code from git. In the above section on building the BES, simply skip the step where the submodules are cloned (&#039;&#039;git submodule update --init&#039;&#039;) and link configure.ac to &#039;&#039;configure_standard.ac&#039;&#039;. The rest of the process is as shown. The end result is a BES daemon without any of the standard Hyrax modules (but support for DAP will be built if &#039;&#039;libdap&#039;&#039; is found by the configure script).&lt;br /&gt;
&lt;br /&gt;
To build modules for the BES, simply go to &#039;&#039;$prefix&#039;&#039;, clone their git repos and build them, taking care to set &#039;&#039;$prefix&#039;&#039; when calling the module&#039;s &#039;&#039;configure&#039;&#039; script. &lt;br /&gt;
&lt;br /&gt;
Note that it is easy to combine the &#039;build it all&#039; and &#039;build just one&#039; processes so that a complete Hyrax BES can be built in one go and then a new module/handler not included in the BES git repo can be built and used. Each module we have on GitHub has a &#039;&#039;configure.ac&#039;&#039;, &#039;&#039;Makefile.am&#039;&#039;, etc., that will support both kinds of builds and [[Configuration of BES Modules]] explains how to take a module/handler that builds as a standalone module and tweak the build scripts so that it&#039;s fully integrated into the Hyrax BES build, too.&lt;br /&gt;
&lt;br /&gt;
= Building on Ubuntu =&lt;br /&gt;
This was tested on Ubuntu 16.04 (Xenial)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get update&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Packages needed:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get install ...&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ant junit git flex bison autoconf automake libtool emacs openssl bzip2 libjpeg-dev libxml2-dev curl libicu-dev vim bc make cmake uuid-dev libcurl4-openssl-dev libicu-dev g++ zlib1g-dev libcppunit-dev libssl-dev&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13532</id>
		<title>Hyrax GitHub Source Build</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13532"/>
		<updated>2024-06-07T02:51:40Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Rocky 8 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This describes how to get and build Hyrax from our GitHub repositories. Hyrax is a data server that implements the DAP2 and DAP4 protocols, works with a number of different data formats and supports a wide variety of customization options from tailoring the look of the server&#039;s web pages to complex server-side processing operations. This page describes how to build the server&#039;s source code. If you&#039;re working on a Linux or OS/X computer, the process is similar so we describe only the linux case; we do not support building the server on Windows operating systems.&lt;br /&gt;
&lt;br /&gt;
To build and install the server, you need to perform three steps:&lt;br /&gt;
# Set up the computer to build source code (Install a Java compiler; install a C/C++ compiler; add some other tools)&lt;br /&gt;
# Build the C++ DAP library (&#039;&#039;libdap4&#039;&#039;) and the Hyrax BES daemon&lt;br /&gt;
# Build the Hyrax OLFS web application&lt;br /&gt;
&lt;br /&gt;
Quick links if you already know the process:&lt;br /&gt;
* [https://github.com/opendap/hyrax new all-in-one repo that uses shell scripts]&lt;br /&gt;
* [https://github.com/opendap/libdap libdap git repo]&lt;br /&gt;
* [https://github.com/opendap/bes BES git repo]&lt;br /&gt;
* [https://github.com/opendap/olfs OLFS git repo]&lt;br /&gt;
* [https://github.com/opendap/hyrax-dependencies Hyrax dependencies]&lt;br /&gt;
&lt;br /&gt;
= Set up a CentOS machine to build code =&lt;br /&gt;
== Setup CentOS-7 ==&lt;br /&gt;
Note that I don&#039;t like clicking around to different pages to follow simple directions, so what follows is a short version of the CentOS 6 configuration information we&#039;ve compiled for people who help us by building RPM packages for Hyrax. You can use this to extrapolate how to configure Ubuntu and OSX (we routinely build on those platforms as well). The complete instructions are in [[ConfigureCentos | Configure CentOS]] and describe how to set up a CentOS 6 machine to build software. What follows is the condensed version:&lt;br /&gt;
&lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum -y update&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load a basic software development environment:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.7.0-openjdk java-1.7.0-openjdk-devel ant ant-junit junit&#039;&#039;&#039;&amp;lt;/tt&amp;gt; (it&#039;s likely that you can use more recent versions of Java)&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; The whole thing, with java-1.8.0&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant ant-junit junit git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install rpm-devel rpm-build redhat-rpm-config&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Optional&lt;br /&gt;
:Download, unpack, build and install the GNU autotools (&#039;&#039;but &#039;&#039;&#039;don&#039;t&#039;&#039;&#039; do this unless the versions installed using yum don&#039;t work&#039;&#039;)&lt;br /&gt;
* autoconf &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz autoconf-2.69.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
* automake &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/automake/automake-1.14.1.tar.gz automake-1.14.1.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
* libtool &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/libtool/libtool-2.4.2.tar.gz libtool-2.4.2.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:build them (&amp;lt;tt&amp;gt;&#039;&#039;&#039;&#039;&#039;./configure; make; sudo make install &#039;&#039;&#039;&#039;&#039;&amp;lt;/tt&amp;gt; - this should take no more than three minutes).&lt;br /&gt;
&lt;br /&gt;
== Setup CentOS-8  ==&lt;br /&gt;
The CentOS-8 setup is very similar to CentOS-7, but there are some minor differences.&lt;br /&gt;
 &lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum -y update&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;You will need to enable power-tools for this setup&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum config-manager --set-enabled powertools&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load the basic software development environment plus the additional packages openjpeg2, jasper, and libtirpc. Note that you may not need &#039;&#039;openjpeg2&#039;&#039; and &#039;&#039;jasper&#039;&#039; if you build the dependencies successfully; if you determine that you don&#039;t need them, please let us know. JUnit support has also been dropped, so we dropped the &amp;lt;tt&amp;gt;&#039;&#039;ant-junit junit&#039;&#039;&amp;lt;/tt&amp;gt; packages from the install list.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc openjpeg2-devel jasper-devel libtirpc-devel&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Tell the machine where to find the tirpc libraries&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;export CPPFLAGS=-I/usr/include/tirpc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;export LDFLAGS=-ltirpc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;NB: As of 1/28/22 you should not need to do this. The &#039;&#039;configure&#039;&#039; script should find the correct way to run python on CentOS 8. However, if it does not, our Makefiles (built from &#039;&#039;Makefile.am&#039;&#039; files) use &#039;&#039;python&#039;&#039; but a vanilla CentOS 8 machine only has &#039;&#039;python3&#039;&#039;. Until we fix this, you need to make sure &#039;&#039;python&#039;&#039; runs a python program. One way is to make a symbolic link between &#039;&#039;python3&#039;&#039; and &#039;&#039;python&#039;&#039; in a directory that is on your PATH. &#039;&#039;&#039;The TODO item here is to make sure &#039;&#039;python&#039;&#039; exists and can run a program&#039;&#039;&#039;. It is generally enough to verify that the command exists:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;which python&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
; If &#039;&#039;python&#039;&#039; is missing (as it was for me on Rocky 8), install it&lt;br /&gt;
: &amp;lt;tt&amp;gt;&#039;&#039;&#039;sudo yum install -y python3&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
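The check above can be scripted. The sketch below only reports what it finds and prints the suggested symlink fix rather than running it; the &#039;&#039;/usr/local/bin&#039;&#039; location is an example, not a requirement:&lt;br /&gt;

```shell
# Find a usable Python interpreter; print (do not run) the suggested fix.
if command -v python >/dev/null 2>/dev/null; then
    python_cmd=$(command -v python)
elif command -v python3 >/dev/null 2>/dev/null; then
    python_cmd=$(command -v python3)
    echo "no 'python' on PATH; one fix: sudo ln -s $python_cmd /usr/local/bin/python"
else
    python_cmd=""
    echo "no Python found; install one, e.g.: sudo yum install -y python3"
fi
echo "python command: ${python_cmd:-none}"
```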
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install rpm-devel rpm-build redhat-rpm-config&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once you have run through the rest of the Hyrax build, make sure that both &#039;&#039;gdal&#039;&#039; and &#039;&#039;hdf4&#039;&#039; built correctly (look for their libraries in $prefix/deps/lib). To build them manually, run &#039;&#039;&#039;make gdal&#039;&#039;&#039;, &#039;&#039;&#039;make hdf4&#039;&#039;&#039;, and &#039;&#039;&#039;make netcdf4&#039;&#039;&#039; inside the hyrax-dependencies directory.&lt;br /&gt;
&lt;br /&gt;
== Rocky 8 ==&lt;br /&gt;
&#039;&#039;Updated 6/6/2024&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the commands ps, which, etc.&lt;br /&gt;
 dnf install -y procps&lt;br /&gt;
&lt;br /&gt;
C++ environment plus build tools&lt;br /&gt;
 dnf install -y git gcc-c++ flex bison cmake autoconf automake libtool emacs bzip2 vim bc&lt;br /&gt;
&lt;br /&gt;
Development library versions&lt;br /&gt;
 dnf install -y openssl-devel libuuid-devel readline-devel zlib-devel bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel libtirpc-devel&lt;br /&gt;
&lt;br /&gt;
Java&lt;br /&gt;
 dnf install -y java-17-openjdk java-17-openjdk-devel ant &lt;br /&gt;
&lt;br /&gt;
Setup DNF so that we can load in some obscure packages from EPEL, etc., repos&lt;br /&gt;
 dnf install dnf-plugins-core&lt;br /&gt;
 dnf install epel-release&lt;br /&gt;
 dnf config-manager --set-enabled powertools&lt;br /&gt;
&lt;br /&gt;
Install CppUnit and some more development libraries&lt;br /&gt;
 dnf install -y cppunit cppunit-devel openjpeg2-devel jasper-devel&lt;br /&gt;
&lt;br /&gt;
= A semi-automatic build =&lt;br /&gt;
&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the short instructions in the README file.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;tt&amp;gt;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Summarized here, those instructions are:&lt;br /&gt;
;use bash: The shell scripts in this repo assume you are using bash.&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development: &#039;&#039;source spath.sh&#039;&#039;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies: &#039;&#039;./hyrax_clone.sh -v&#039;&#039;&lt;br /&gt;
;build the code, including the dependencies: &#039;&#039;./hyrax_build.sh -v&#039;&#039;&lt;br /&gt;
;test the server: Start the BES using  &#039;&#039;besctl start&#039;&#039;&lt;br /&gt;
:Start the OLFS using &#039;&#039;./build/apache-tomcat-7.0.57/bin/startup.sh&#039;&#039;&lt;br /&gt;
:Test the server by looking at &#039;&#039;&amp;lt;nowiki&amp;gt;http://localhost:8080/opendap&amp;lt;/nowiki&amp;gt;&#039;&#039; in a browser. You should see a directory named &#039;&#039;data&#039;&#039; and following that link should lead to more data. The server will be accessible to clients other than a web browser.&lt;br /&gt;
:To test the BES independently of the front end, use &#039;&#039;bescmdln&#039;&#039; and give it the &#039;&#039;show version;&#039;&#039; command; you should see output about different components and their versions. &lt;br /&gt;
:Use &#039;&#039;exit&#039;&#039; to leave the command line test client.&lt;br /&gt;
&lt;br /&gt;
As described in the README file that is part of the &#039;&#039;hyrax&#039;&#039; repo, there are some other scripts in the repo and some options to the &#039;&#039;clone&#039;&#039; and &#039;&#039;build&#039;&#039; script that you can investigate by using -h (help).&lt;br /&gt;
&lt;br /&gt;
= The manual build = &lt;br /&gt;
&lt;br /&gt;
In the following, we describe only the build process for CentOS; the one for OS/X is similar and we note the differences where they are significant.&lt;br /&gt;
&lt;br /&gt;
== Get Hyrax from GitHub ==&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the instructions on this page (which differ a bit from ones in the project&#039;s README)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;tt&amp;gt;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have the &#039;&#039;hyrax&#039;&#039; project cloned:&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;source spath.sh&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;./hyrax_clone.sh -v&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;proceed with the rest of the build as described in the following sections of this page&lt;br /&gt;
&lt;br /&gt;
== Important Note ==&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Many of the problems people have with the build stem from not setting the shell correctly for the build.&amp;lt;/font&amp;gt;&lt;br /&gt;
In the above section, &#039;&#039;make sure&#039;&#039; you run &#039;&#039;&#039;source spath.sh&#039;&#039;&#039; before you run any of the building/compiling/testing commands that use the source code or build files. While the &#039;&#039;$prefix&#039;&#039; and &#039;&#039;$PATH&#039;&#039; environment variables are simple to set up, they are needed by most users. When you exit a terminal window and then open a new one, make sure to (re)source the &#039;&#039;spath.sh&#039;&#039; file in the new shell. You don&#039;t have to source spath.sh every time you enter the &#039;&#039;hyrax&#039;&#039; directory, but you must run it for every new instance of the shell.&lt;br /&gt;
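For reference, the essential effect of sourcing &#039;&#039;spath.sh&#039;&#039; is to export &#039;&#039;$prefix&#039;&#039; and put the local install directories on &#039;&#039;$PATH&#039;&#039;. The sketch below is an approximation; check the real file in the &#039;&#039;hyrax&#039;&#039; repo for the exact variables and paths it sets:&lt;br /&gt;

```shell
# Approximate effect of 'source spath.sh' (paths are assumptions; see the real script).
export prefix="$(pwd)/build"                      # local install root for the build
export PATH="$prefix/bin:$prefix/deps/bin:$PATH"  # so besctl, bescmdln, etc. are found
```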
&lt;br /&gt;
== Compile the Hyrax dependencies ==&lt;br /&gt;
Use git to clone the hyrax-dependencies:&lt;br /&gt;
  git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
And then build it. Unlike many source packages, there is no need to run a configure script, just &#039;&#039;make&#039;&#039; will do. However, the Makefile in this package expects &#039;&#039;$prefix&#039;&#039; to be set as described above. It will put all of the Hyrax server dependencies in a subdirectory called &#039;&#039;deps&#039;&#039;. To build the dependencies for building RPMs, use &#039;&#039;make -j9 for-static-rpm&#039;&#039;.&lt;br /&gt;
;(make sure you&#039;re in the directory set to &#039;&#039;$prefix&#039;&#039;)&lt;br /&gt;
&amp;lt;tt&amp;gt;&lt;br /&gt;
;git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
; cd hyrax-dependencies&lt;br /&gt;
; make --jobs=9&lt;br /&gt;
: &#039;&#039;The --jobs=N runs a parallel build with at most N simultaneous compile operations. This will result in a huge performance improvement on multi-core machines. &#039;&#039;&#039;-jN&#039;&#039;&#039; is the short form for the option.&#039;&#039;&lt;br /&gt;
;cd ..: &#039;&#039;Go back up to &#039;&#039;&#039;$prefix&#039;&#039;&#039; &#039;&#039;&lt;br /&gt;
&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; You can get some of the &#039;&#039;dependencies&#039;&#039; for Hyrax like &#039;&#039;netCDF&#039;&#039; from the EPEL repository, but the versions are often older than Hyrax needs. Contact us if you want information about using EPEL. At the risk of throwing people a curve ball, here&#039;s a synopsis of the process. Don&#039;t do this unless you know EPEL well. Use [http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm epel-release-6-8.noarch.rpm] and install it using &#039;&#039;sudo yum install epel-release-6-8.noarch.rpm&#039;&#039;. Then install packages needed to read various file formats: &#039;&#039;yum install netcdf-devel hdf-devel hdf5-devel libicu-devel cfitsio-devel cppunit-devel rpm-devel rpm-build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Build &#039;&#039;libdap&#039;&#039; and the &#039;&#039;BES&#039;&#039; daemon ==&lt;br /&gt;
&lt;br /&gt;
==== Get and build libdap4 ====&lt;br /&gt;
;WARNING: If you have &#039;&#039;libdap&#039;&#039; already, uninstall it before proceeding.&lt;br /&gt;
Build, test and install libdap4 into $prefix:&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone https://github.com/opendap/libdap4&lt;br /&gt;
cd libdap4&lt;br /&gt;
autoreconf -fiv&lt;br /&gt;
./configure --prefix=$prefix --enable-developer &lt;br /&gt;
make -j9&lt;br /&gt;
make check -j9&lt;br /&gt;
make install&lt;br /&gt;
cd .. # Go back up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Get and build the BES and all of the modules shipped with Hyrax ====&lt;br /&gt;
Build, test and install the BES and its modules&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;git clone https://github.com/opendap/bes # Clone the BES from GitHub&lt;br /&gt;
cd bes # Enter the bes directory&lt;br /&gt;
git submodule update --init # Update the submodules&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
That will clone some additional modules into the directory &#039;&#039;modules&#039;&#039;; you need to do this! (Previously it was an optional step). See [http://git-scm.com/docs/git-submodule git submodule] for information about all you can do with git&#039;s submodule command. Also note that this does not checkout a particular branch for the submodules; the modules are left in the &#039;detached head&#039; state. To checkout a particular branch like &#039;master&#039;, which is important if you&#039;ll be making changes to that code, use &#039;&#039;git submodule foreach &#039;git checkout master&#039; &#039;&#039;. &lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;autoreconf --force --install --verbose # You can use -fiv instead of the long options.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means that, when starting from a freshly cloned repo, autoreconf runs all of the autotools commands and installs all of the needed scripts.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;./configure --prefix=$prefix  --with-dependencies=$prefix/deps --enable-developer&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: Notes:&lt;br /&gt;
:* The --with-dependencies option is not needed if you load the dependencies from RPMs or otherwise have them installed and generally accessible on the build machine.&lt;br /&gt;
:* The  --enable-developer option will compile in all of the debugging code which may affect performance even if the debugging output is not enabled.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make -j9&lt;br /&gt;
make check -j9&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Some tests may fail and adding &#039;&#039;-k&#039;&#039; ignores that and keeps make marching along. &#039;&#039;Note that you must run &#039;&#039;&#039;make&#039;&#039;&#039; before &#039;&#039;&#039;make check&#039;&#039;&#039; in the bes code&#039;&#039;.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make install&lt;br /&gt;
cd .. # Go back up to $prefix&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Test the BES ====&lt;br /&gt;
Start the BES and verify that all of the modules build correctly.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;besctl start # Start the BES.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Given that &#039;&#039;$prefix/bin&#039;&#039; is on your &#039;&#039;$PATH&#039;&#039;, this should start the BES. You will not need to be root if you used the &#039;&#039;--enable-developer&#039;&#039; switch with configure (as shown above); otherwise you should run &#039;&#039;sudo besctl start&#039;&#039;, with the caveat that as root &#039;&#039;$prefix/bin&#039;&#039; will probably not be on your &#039;&#039;$PATH&#039;&#039;.&lt;br /&gt;
:If there&#039;s an error (e.g., you tried to start as a regular user but need to be root), edit bes.conf so the BES runs as a real user (yourself?) in a real group (use &#039;groups&#039; to see which groups you are in) and also check that the bes.log file is &#039;&#039;not&#039;&#039; owned by root. &lt;br /&gt;
:Restart.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;bescmdln # Now that the BES is running, start the BES testing tool&lt;br /&gt;
BESClient&amp;gt; show version; # Send the BES the version command to see if it&#039;s running &amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
:Take a quick look at the output. There should be entries for libdap, bes and all of the modules.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt; BESClient&amp;gt; exit; # Exit the testing tool&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that even though you have exited the &#039;&#039;bescmdln&#039;&#039; test tool, the BES is still running. That&#039;s fine - we&#039;ll use it in just a bit - but if you want to shut it down, use &#039;&#039;besctl stop&#039;&#039;, or &#039;&#039;besctl pids&#039;&#039; to see the daemon&#039;s processes. If the BES is not stopping, &#039;&#039;besctl kill&#039;&#039; will stop all BES processes without waiting for them to complete their current task.&lt;br /&gt;
&lt;br /&gt;
== Build the Hyrax &#039;&#039;OLFS&#039;&#039; web application ==&lt;br /&gt;
The OLFS is a Java servlet web application, built using ant, that runs with Tomcat, Glassfish, etc. You need a copy of Tomcat, but our servlet does not work with the RPM version of Tomcat. Get [http://tomcat.apache.org/download-70.cgi Tomcat 7 from Apache]. Note that if you built the dependencies from source using &#039;&#039;hyrax-dependencies-1.10.tar&#039;&#039;, there is a copy of Tomcat in the &#039;&#039;hyrax-dependencies/extra_downloads&#039;&#039; directory. I&#039;ll assume you have the Apache Tomcat tar file in &#039;&#039;$prefix&#039;&#039; and will unpack it there.&lt;br /&gt;
&lt;br /&gt;
;tar -xzf apache-tomcat-7.0.57.tar.gz: Expand the Tomcat tar ball&lt;br /&gt;
;git clone https://github.com/opendap/olfs: Get the OLFS source code&lt;br /&gt;
;cd olfs: change directory to the OLFS source&lt;br /&gt;
;ant server: Build it&lt;br /&gt;
;cp build/dist/opendap.war ../apache-tomcat-7.0.57/webapps/: Copy the opendap web archive to the Tomcat webapps directory.&lt;br /&gt;
;cd ..: Go up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/startup.sh: Start Tomcat&lt;br /&gt;
&lt;br /&gt;
== Test the server ==&lt;br /&gt;
You can test the server several ways, but the most fun is to use a web browser. The URL &#039;&#039;http://&amp;lt;machine&amp;gt;:8080/opendap&#039;&#039; should return a page pointing to a collection of test datasets bundled with the server. You can also use &#039;&#039;curl&#039;&#039;, &#039;&#039;wget&#039;&#039; or any application that can read from OPeNDAP servers (e.g., Matlab, Octave, ArcGIS, IDL, ...).&lt;br /&gt;
&lt;br /&gt;
== Stopping the server ==&lt;br /&gt;
Stop both the BES and Tomcat&lt;br /&gt;
&lt;br /&gt;
;besctl stop&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/shutdown.sh&lt;br /&gt;
&lt;br /&gt;
Note that there is also a &#039;&#039;hyraxctl&#039;&#039; script that provides a way to start and stop Hyrax without you (or &#039;&#039;init.d&#039;&#039;) having to type separate commands for both the BES and OLFS. This script is part of the BES software you cloned from git.&lt;br /&gt;
&lt;br /&gt;
== Building select parts of the BES ==&lt;br /&gt;
Building just the BES and one or more of its handlers/modules is not at all hard to do with a checkout of code from git. In the above section on building the BES, simply skip the step where the submodules are cloned (&#039;&#039;git submodule update --init&#039;&#039;) and link configure.ac to &#039;&#039;configure_standard.ac&#039;&#039;. The rest of the process is as shown. The end result is a BES daemon without any of the standard Hyrax modules (but support for DAP will be built if &#039;&#039;libdap&#039;&#039; is found by the configure script).&lt;br /&gt;
&lt;br /&gt;
To build modules for the BES, simply go to &#039;&#039;$prefix&#039;&#039;, clone their git repos and build them, taking care to set &#039;&#039;$prefix&#039;&#039; when calling the module&#039;s &#039;&#039;configure&#039;&#039; script. &lt;br /&gt;
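As a concrete sketch, a standalone module build looks like the following; &#039;&#039;some-module&#039;&#039; is a placeholder for a real module repository name, and the configure options mirror the BES build shown above:&lt;br /&gt;

```shell
# Hypothetical standalone build of one BES module; 'some-module' is a placeholder.
cd $prefix
git clone https://github.com/opendap/some-module
cd some-module
autoreconf -fiv
./configure --prefix=$prefix --with-dependencies=$prefix/deps
make -j9
make check -j9
make install
```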
&lt;br /&gt;
Note that it is easy to combine the &#039;build it all&#039; and &#039;build just one&#039; processes so that a complete Hyrax BES can be built in one go and then a new module/handler not included in the BES git repo can be built and used. Each module we have on GitHub has a &#039;&#039;configure.ac&#039;&#039;, &#039;&#039;Makefile.am&#039;&#039;, etc., that will support both kinds of builds and [[Configuration of BES Modules]] explains how to take a module/handler that builds as a standalone module and tweak the build scripts so that it&#039;s fully integrated into the Hyrax BES build, too.&lt;br /&gt;
&lt;br /&gt;
= Building on Ubuntu =&lt;br /&gt;
This was tested on Ubuntu 16.04 (Xenial)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get update&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Packages needed:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get install ...&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ant junit git flex bison autoconf automake libtool emacs openssl bzip2 libjpeg-dev libxml2-dev curl libicu-dev vim bc make cmake uuid-dev libcurl4-openssl-dev libicu-dev g++ zlib1g-dev libcppunit-dev libssl-dev&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13531</id>
		<title>Hyrax GitHub Source Build</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13531"/>
		<updated>2024-06-07T02:47:33Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Rocky 8 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This describes how to get and build Hyrax from our GitHub repositories. Hyrax is a data server that implements the DAP2 and DAP4 protocols, works with a number of different data formats and supports a wide variety of customization options from tailoring the look of the server&#039;s web pages to complex server-side processing operations. This page describes how to build the server&#039;s source code. If you&#039;re working on a Linux or OS/X computer, the process is similar so we describe only the linux case; we do not support building the server on Windows operating systems.&lt;br /&gt;
&lt;br /&gt;
To build and install the server, you need to perform three steps:&lt;br /&gt;
# Set up the computer to build source code (Install a Java compiler; install a C/C++ compiler; add some other tools)&lt;br /&gt;
# Build the C++ DAP library (&#039;&#039;libdap4&#039;&#039;) and the Hyrax BES daemon&lt;br /&gt;
# Build the Hyrax OLFS web application&lt;br /&gt;
&lt;br /&gt;
Quick links if you already know the process:&lt;br /&gt;
* [https://github.com/opendap/hyrax new all-in-one repo that uses shell scripts]&lt;br /&gt;
* [https://github.com/opendap/libdap libdap git repo]&lt;br /&gt;
* [https://github.com/opendap/bes BES git repo]&lt;br /&gt;
* [https://github.com/opendap/olfs OLFS git repo]&lt;br /&gt;
* [https://github.com/opendap/hyrax-dependencies Hyrax dependencies]&lt;br /&gt;
&lt;br /&gt;
= Set up a CentOS machine to build code =&lt;br /&gt;
== Setup CentOS-7 ==&lt;br /&gt;
Note that I don&#039;t like clicking around to different pages to follow simple directions, so what follows is a short version of the CentOS 6 configuration information we&#039;ve compiled for people who help us by building RPM packages for Hyrax. You can use this to extrapolate how to configure Ubuntu and OSX (we routinely build on those platforms as well). The complete instructions are in [[ConfigureCentos | Configure CentOS]] and describe how to set up a CentOS 6 machine to build software. What follows is the condensed version:&lt;br /&gt;
&lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum -y update&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load a basic software development environment:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.7.0-openjdk java-1.7.0-openjdk-devel ant ant-junit junit&#039;&#039;&#039;&amp;lt;/tt&amp;gt; (it&#039;s likely that you can use more recent versions of Java)&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; The whole thing, with java-1.8.0&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant ant-junit junit git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install rpm-devel rpm-build redhat-rpm-config&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Optional&lt;br /&gt;
:Download, unpack, build and install the GNU autotools (&#039;&#039;but &#039;&#039;&#039;don&#039;t&#039;&#039;&#039; do this unless the versions installed using yum don&#039;t work&#039;&#039;)&lt;br /&gt;
* autoconf &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz autoconf-2.69.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
* automake &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/automake/automake-1.14.1.tar.gz automake-1.14.1.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
* libtool &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/libtool/libtool-2.4.2.tar.gz libtool-2.4.2.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:build them (&amp;lt;tt&amp;gt;&#039;&#039;&#039;&#039;&#039;./configure; make; sudo make install &#039;&#039;&#039;&#039;&#039;&amp;lt;/tt&amp;gt; - this should take no more than three minutes).&lt;br /&gt;
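If you do need the newer autotools, the download-and-build cycle for autoconf looks like this; it is a sketch using the tarball linked above, and the same pattern applies to automake and libtool:

```shell
# Build autoconf from source (only if the yum-installed version is too old).
# Repeat the same steps for automake-1.14.1 and libtool-2.4.2.
curl -L -O http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz
tar -xzf autoconf-2.69.tar.gz
cd autoconf-2.69
./configure
make
sudo make install
```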
&lt;br /&gt;
== Setup CentOS-8  ==&lt;br /&gt;
The CentOS-8 setup is very similar to CentOS-7, but there are some minor differences.&lt;br /&gt;
 &lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum -y update&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;You will need to enable power-tools for this setup&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum config-manager --set-enabled powertools&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load the basic software development environment plus the additional packages of openjpeg2, jasper, and libtirpc. Note that you may not need &#039;&#039;openjpeg2&#039;&#039; and &#039;&#039;jasper&#039;&#039; if you build the dependencies successfully. If you determine that you don&#039;t need these, please let us know. JUnit support has also been dropped so we dropped the &amp;lt;tt&amp;gt;&#039;&#039;ant-junit junit&#039;&#039;&amp;lt;/tt&amp;gt; packages from the install list.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc openjpeg2-devel jasper-devel libtirpc-devel&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Tell the machine where to find the tirpc libraries&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;export CPPFLAGS=-I/usr/include/tirpc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;export LDFLAGS=-ltirpc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;NB: As of 1/28/22 you should not need to do this. The &#039;&#039;configure&#039;&#039; script should find the correct way to run python on CentOS 8. However, if it does not, our Makefiles (built from &#039;&#039;Makefile.am&#039;&#039; files) use &#039;&#039;python&#039;&#039; but a vanilla CentOS 8 machine only has &#039;&#039;python3&#039;&#039;. Until we fix this, you need to make sure &#039;&#039;python&#039;&#039; runs a python program. One way is to make a symbolic link between &#039;&#039;python3&#039;&#039; and &#039;&#039;python&#039;&#039; in a directory that is on your PATH. &#039;&#039;&#039;The TODO item here is to make sure &#039;&#039;python&#039;&#039; exists and can run a program&#039;&#039;&#039;. It is generally enough to verify that the command exists:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;which python&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
; Lacking that (which was my situation on Rocky8), install python&lt;br /&gt;
: &amp;lt;tt&amp;gt;&#039;&#039;&#039;sudo yum install -y python3&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
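One way to make sure a &#039;&#039;python&#039;&#039; command exists is a small conditional link; this is a sketch, and the target directory /usr/local/bin is an assumption (any writable directory on your PATH will do):

```shell
# If 'python' is missing but 'python3' is present, link python -> python3.
# /usr/local/bin is an assumed location; use any directory on your PATH.
if ! command -v python >/dev/null 2>&1 && command -v python3 >/dev/null 2>&1; then
    sudo ln -s "$(command -v python3)" /usr/local/bin/python
fi
```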
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install rpm-devel rpm-build redhat-rpm-config&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once you run through the rest of the Hyrax build, make sure that both &#039;&#039;gdal&#039;&#039; and &#039;&#039;hdf4&#039;&#039; build correctly (look for their libraries in $prefix/deps/lib). To build them manually, run &#039;&#039;&#039;make gdal&#039;&#039;&#039;, &#039;&#039;&#039;make hdf4&#039;&#039;&#039;, and &#039;&#039;&#039;make netcdf4&#039;&#039;&#039; inside the hyrax-dependencies directory to build and install gdal and hdf4.&lt;br /&gt;
&lt;br /&gt;
== Rocky 8 ==&lt;br /&gt;
&#039;&#039;Updated 6/6/2024&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# To get the commands ps, which, etc.&lt;br /&gt;
dnf install -y procps&lt;br /&gt;
&lt;br /&gt;
# C++ environment plus build tools&lt;br /&gt;
dnf install -y git gcc-c++ flex bison cmake autoconf automake libtool emacs bzip2 vim bc&lt;br /&gt;
&lt;br /&gt;
# Development library versions&lt;br /&gt;
dnf install -y openssl-devel libuuid-devel readline-devel zlib-devel bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel libtirpc-devel&lt;br /&gt;
&lt;br /&gt;
# Java&lt;br /&gt;
dnf install -y java-17-openjdk java-17-openjdk-devel ant &lt;br /&gt;
&lt;br /&gt;
# Setup DNF so that we can load in some obscure packages from EPEL, etc., repos&lt;br /&gt;
dnf install dnf-plugins-core&lt;br /&gt;
dnf install epel-release&lt;br /&gt;
dnf update&lt;br /&gt;
&lt;br /&gt;
# Additional packages (from EPEL, etc.)&lt;br /&gt;
dnf install -y cppunit cppunit-devel openjpeg2-devel jasper-devel&lt;br /&gt;
&lt;br /&gt;
= A semi-automatic build =&lt;br /&gt;
&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the short instructions in the README file.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;tt&amp;gt;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Summarized here, those instructions are:&lt;br /&gt;
;use bash: The shell scripts in this repo assume you are using bash.&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development: &#039;&#039;source spath.sh&#039;&#039;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies: &#039;&#039;./hyrax_clone.sh -v&#039;&#039;&lt;br /&gt;
;build the code, including the dependencies: &#039;&#039;./hyrax_build.sh -v&#039;&#039;&lt;br /&gt;
;test the server: Start the BES using  &#039;&#039;besctl start&#039;&#039;&lt;br /&gt;
:Start the OLFS using &#039;&#039;./build/apache-tomcat-7.0.57/bin/startup.sh&#039;&#039;&lt;br /&gt;
:Test the server by looking at &#039;&#039;&amp;lt;nowiki&amp;gt;http://localhost:8080/opendap&amp;lt;/nowiki&amp;gt;&#039;&#039; in a browser. You should see a directory named &#039;&#039;data&#039;&#039;, and following that link should lead to more data. The server will also be accessible to clients other than a web browser.&lt;br /&gt;
:To test the BES independently of the front end, use &#039;&#039;bescmdln&#039;&#039; and give it the &#039;&#039;show version;&#039;&#039; command; you should see output about the different components and their versions.&lt;br /&gt;
:Use &#039;&#039;exit&#039;&#039; to leave the command-line test client.&lt;br /&gt;
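The semi-automatic steps above, collected into one terminal session (the Tomcat version in the startup path comes from the text above and may differ in your build):

```shell
git clone https://github.com/opendap/hyrax
cd hyrax
source spath.sh               # set up a local-install environment
./hyrax_clone.sh -v           # clone libdap4, bes, olfs and the dependencies
./hyrax_build.sh -v           # build everything, dependencies included
besctl start                  # start the BES
./build/apache-tomcat-7.0.57/bin/startup.sh   # start the OLFS under Tomcat
```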
&lt;br /&gt;
As described in the README file that is part of the &#039;&#039;hyrax&#039;&#039; repo, there are some other scripts in the repo and some options to the &#039;&#039;clone&#039;&#039; and &#039;&#039;build&#039;&#039; scripts that you can investigate using -h (help).&lt;br /&gt;
&lt;br /&gt;
= The manual build = &lt;br /&gt;
&lt;br /&gt;
In the following, we describe only the build process for CentOS; the one for OS/X is similar and we note the differences where they are significant.&lt;br /&gt;
&lt;br /&gt;
== Get Hyrax from GitHub ==&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the instructions on this page (which differ a bit from the ones in the project&#039;s README).&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;tt&amp;gt;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have the &#039;&#039;hyrax&#039;&#039; project cloned:&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;source spath.sh&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;./hyrax_clone.sh -v&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;proceed with the rest of the build as described in the following sections of this page&lt;br /&gt;
&lt;br /&gt;
== Important Note ==&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Many of the problems people have with the build stem from not setting the shell correctly for the build.&amp;lt;/font&amp;gt;&lt;br /&gt;
In the above section, &#039;&#039;make sure&#039;&#039; you run &#039;&#039;&#039;source spath.sh&#039;&#039;&#039; before you run any of the building/compiling/testing commands that use the source code or build files. The &#039;&#039;$prefix&#039;&#039; and &#039;&#039;$PATH&#039;&#039; environment variables are simple to set up, but most of the build steps depend on them. When you exit a terminal window and then open a new one, make sure to (re)source the &#039;&#039;spath.sh&#039;&#039; file in the new shell. You don&#039;t have to source &#039;&#039;spath.sh&#039;&#039; every time you enter the &#039;&#039;hyrax&#039;&#039; directory, but you must run it in every new instance of the shell.&lt;br /&gt;
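For reference, the effect of sourcing spath.sh is roughly the following. This is a sketch, not the script itself; the actual variable names and values are whatever spath.sh in the hyrax repo sets:

```shell
# Approximation only -- consult spath.sh for the authoritative settings.
export prefix="$(pwd)/build"      # hypothetical local install root
export PATH="$prefix/bin:$PATH"   # so besctl, bescmdln, etc. are found
```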
&lt;br /&gt;
== Compile the Hyrax dependencies ==&lt;br /&gt;
Use git to clone the hyrax-dependencies:&lt;br /&gt;
  git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
And then build it. Unlike many source packages, there is no need to run a configure script; plain &#039;&#039;make&#039;&#039; will do. However, the Makefile in this package expects &#039;&#039;$prefix&#039;&#039; to be set as described above. It will put all of the Hyrax server dependencies in a subdirectory called &#039;&#039;deps&#039;&#039;. To build the dependencies for RPM packaging, use &#039;&#039;make -j9 for-static-rpm&#039;&#039;.&lt;br /&gt;
;(make sure you&#039;re in the directory set to &#039;&#039;$prefix&#039;&#039;)&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
cd hyrax-dependencies&lt;br /&gt;
make --jobs=9&lt;br /&gt;
cd .. # Go back up to $prefix&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
: &#039;&#039;The --jobs=N option runs a parallel build with at most N simultaneous compile operations, a huge performance improvement on multi-core machines; &#039;&#039;&#039;-jN&#039;&#039;&#039; is the short form of the option.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; You can get some of the &#039;&#039;dependencies&#039;&#039; for Hyrax like &#039;&#039;netCDF&#039;&#039; from the EPEL repository, but the versions are often older than Hyrax needs. Contact us if you want information about using EPEL. At the risk of throwing people a curve ball, here&#039;s a synopsis of the process. Don&#039;t do this unless you know EPEL well. Use [http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm epel-release-6-8.noarch.rpm] and install it using &#039;&#039;sudo yum install epel-release-6-8.noarch.rpm&#039;&#039;. Then install packages needed to read various file formats: &#039;&#039;yum install netcdf-devel hdf-devel hdf5-devel libicu-devel cfitsio-devel cppunit-devel rpm-devel rpm-build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Build &#039;&#039;libdap&#039;&#039; and the &#039;&#039;BES&#039;&#039; daemon ==&lt;br /&gt;
&lt;br /&gt;
==== Get and build libdap4 ====&lt;br /&gt;
;WARNING: If you have &#039;&#039;libdap&#039;&#039; already, uninstall it before proceeding.&lt;br /&gt;
Build, test and install libdap4 into $prefix:&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone https://github.com/opendap/libdap4&lt;br /&gt;
cd libdap4&lt;br /&gt;
autoreconf -fiv&lt;br /&gt;
./configure --prefix=$prefix --enable-developer &lt;br /&gt;
make -j9&lt;br /&gt;
make check -j9&lt;br /&gt;
make install&lt;br /&gt;
cd .. # Go back up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Get and build the BES and all of the modules shipped with Hyrax ====&lt;br /&gt;
Build, test and install the BES and its modules&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;git clone https://github.com/opendap/bes # Clone the BES from GitHub&lt;br /&gt;
cd bes # Enter the bes directory&lt;br /&gt;
git submodule update --init # Update the submodules&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
That will clone some additional modules into the directory &#039;&#039;modules&#039;&#039;; you need to do this! (Previously it was an optional step). See [http://git-scm.com/docs/git-submodule git submodule] for information about all you can do with git&#039;s submodule command. Also note that this does not checkout a particular branch for the submodules; the modules are left in the &#039;detached head&#039; state. To checkout a particular branch like &#039;master&#039;, which is important if you&#039;ll be making changes to that code, use &#039;&#039;git submodule foreach &#039;git checkout master&#039; &#039;&#039;. &lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;autoreconf --force --install --verbose # You can use -fiv instead of the long options.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means that, when starting from a freshly cloned repo, run all of the autotools commands and install all of the needed support scripts.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;./configure --prefix=$prefix  --with-dependencies=$prefix/deps --enable-developer&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: Notes:&lt;br /&gt;
:* The --with-deps... option is not needed if you load the dependencies from RPMs or otherwise have them installed and generally accessible on the build machine.&lt;br /&gt;
:* The  --enable-developer option will compile in all of the debugging code which may affect performance even if the debugging output is not enabled.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make -j9&lt;br /&gt;
make check -j9&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Some tests may fail; adding &#039;&#039;-k&#039;&#039; tells make to ignore the failures and keep marching along. &#039;&#039;Note that you must run &#039;&#039;&#039;make&#039;&#039;&#039; before &#039;&#039;&#039;make check&#039;&#039;&#039; in the bes code&#039;&#039;.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make install&lt;br /&gt;
cd .. # Go back up to $prefix&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Test the BES ====&lt;br /&gt;
Start the BES and verify that all of the modules build correctly.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;besctl start # Start the BES.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Given that &#039;&#039;$prefix/bin&#039;&#039; is on your &#039;&#039;$PATH&#039;&#039;, this should start the BES. You will not need to be root if you used the &#039;&#039;--enable-developer&#039;&#039; switch with configure (as shown above); otherwise you should run &#039;&#039;sudo besctl start&#039;&#039;, with the caveat that as root &#039;&#039;$prefix/bin&#039;&#039; will probably not be on your &#039;&#039;$PATH&#039;&#039;.&lt;br /&gt;
:If there&#039;s an error (e.g., you tried to start as a regular user but need to be root), edit bes.conf so the BES runs as a real user (yourself?) in a real group (use &#039;groups&#039; to see which groups you are in), and also check that the bes.log file is &#039;&#039;not&#039;&#039; owned by root.&lt;br /&gt;
:Restart.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;bescmdln # Now that the BES is running, start the BES testing tool&lt;br /&gt;
BESClient&amp;gt; show version; # Send the BES the version command to see if it&#039;s running &amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
:Take a quick look at the output. There should be entries for libdap, bes and all of the modules.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt; BESClient&amp;gt; exit; # Exit the testing tool&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that even though you have exited the &#039;&#039;bescmdln&#039;&#039; test tool, the BES is still running. That&#039;s fine - we&#039;ll use it in just a bit - but if you want to shut it down, use &#039;&#039;besctl stop&#039;&#039;, or &#039;&#039;besctl pids&#039;&#039; to see the daemon&#039;s processes. If the BES is not stopping, &#039;&#039;besctl kill&#039;&#039; will stop all BES processes without waiting for them to complete their current task.&lt;br /&gt;
&lt;br /&gt;
== Build the Hyrax &#039;&#039;OLFS&#039;&#039; web application ==&lt;br /&gt;
The OLFS is a Java servlet web application, built using ant, that runs with Tomcat, Glassfish, etc. You need a copy of Tomcat, but our servlet does not work with the RPM version of Tomcat; get [http://tomcat.apache.org/download-70.cgi Tomcat 7 from Apache]. Note that if you built the dependencies from source using &#039;&#039;hyrax-dependencies-1.10.tar&#039;&#039;, there is a copy of Tomcat in the &#039;&#039;hyrax-dependencies/extra_downloads&#039;&#039; directory. What follows assumes you have the Apache Tomcat tar file unpacked in &#039;&#039;$prefix&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
;tar -xzf apache-tomcat-7.0.57.tar.gz: Expand the Tomcat tar ball&lt;br /&gt;
;git clone https://github.com/opendap/olfs: Get the OLFS source code&lt;br /&gt;
;cd olfs: change directory to the OLFS source&lt;br /&gt;
;ant server: Build it&lt;br /&gt;
;cp build/dist/opendap.war ../apache-tomcat-7.0.57/webapps/: Copy the opendap web archive to the Tomcat webapps directory.&lt;br /&gt;
;cd ..: Go up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/startup.sh: Start Tomcat&lt;br /&gt;
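The OLFS steps above as one shell session, run from $prefix (Tomcat version as given in the text):

```shell
tar -xzf apache-tomcat-7.0.57.tar.gz        # expand the Tomcat tarball
git clone https://github.com/opendap/olfs   # get the OLFS source code
cd olfs
ant server                                  # build the web application
cp build/dist/opendap.war ../apache-tomcat-7.0.57/webapps/
cd ..
./apache-tomcat-7.0.57/bin/startup.sh       # start Tomcat
```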
&lt;br /&gt;
== Test the server ==&lt;br /&gt;
You can test the server several ways, but the most fun is to use a web browser. The URL &#039;&#039;http://&amp;lt;machine&amp;gt;:8080/opendap&#039;&#039; should return a page pointing to a collection of test datasets bundled with the server. You can also use &#039;&#039;curl&#039;&#039;, &#039;&#039;wget&#039;&#039; or any application that can read from OpenDAP servers (e.g., Matlab, Octave, ArcGIS, IDL, ...).&lt;br /&gt;
&lt;br /&gt;
== Stopping the server ==&lt;br /&gt;
Stop both the BES and Apache&lt;br /&gt;
&lt;br /&gt;
;besctl stop&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/shutdown.sh&lt;br /&gt;
&lt;br /&gt;
Note that there is also a &#039;&#039;hyraxctl&#039;&#039; script that provides a way to start and stop Hyrax without you (or &#039;&#039;init.d&#039;&#039;) having to type separate commands for both the BES and OLFS. This script is part of the BES software you cloned from git.&lt;br /&gt;
&lt;br /&gt;
== Building select parts of the BES ==&lt;br /&gt;
Building just the BES and one or more of its handlers/modules is not at all hard to do with a checkout of code from git. In the above section on building the BES, simply skip the step where the submodules are cloned (&#039;&#039;git submodule update --init&#039;&#039;) and link configure.ac to &#039;&#039;configure_standard.ac&#039;&#039;. The rest of the process is as shown. The end result is a BES daemon without any of the standard Hyrax modules (but support for DAP will be built if &#039;&#039;libdap&#039;&#039; is found by the configure script).&lt;br /&gt;
&lt;br /&gt;
To build modules for the BES, simply go to &#039;&#039;$prefix&#039;&#039;, clone their git repo and build them, taking care to set &#039;&#039;$prefix&#039;&#039; when calling the module&#039;s &#039;&#039;configure&#039;&#039; script.&lt;br /&gt;
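As a sketch, building one standalone module looks like this; the repository name sample-module is a placeholder, not a real repo name, so substitute the module you actually want:

```shell
cd "$prefix"
git clone https://github.com/opendap/sample-module   # hypothetical repo name
cd sample-module
autoreconf -fiv
./configure --prefix=$prefix   # $prefix set by spath.sh as described above
make -j9
make check -j9
make install
```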
&lt;br /&gt;
Note that it is easy to combine the &#039;build it all&#039; and &#039;build just one&#039; processes so that a complete Hyrax BES can be built in one go and then a new module/handler not included in the BES git repo can be built and used. Each module we have on GitHub has a &#039;&#039;configure.ac&#039;&#039;, &#039;&#039;Makefile.am&#039;&#039;, etc., that will support both kinds of builds and [[Configuration of BES Modules]] explains how to take a module/handler that builds as a standalone module and tweak the build scripts so that it&#039;s fully integrated into the Hyrax BES build, too.&lt;br /&gt;
&lt;br /&gt;
= Building on Ubuntu =&lt;br /&gt;
This was tested using Xenial (Ubuntu 16.04).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get update&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Packages needed:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get install ...&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ant junit git flex bison autoconf automake libtool emacs openssl bzip2 libjpeg-dev libxml2-dev curl libicu-dev vim bc make cmake uuid-dev libcurl4-openssl-dev libicu-dev g++ zlib1g-dev libcppunit-dev libssl-dev&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13530</id>
		<title>Hyrax GitHub Source Build</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Hyrax_GitHub_Source_Build&amp;diff=13530"/>
		<updated>2024-06-07T02:45:32Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Setup CentOS-8 and Rocky8 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This describes how to get and build Hyrax from our GitHub repositories. Hyrax is a data server that implements the DAP2 and DAP4 protocols, works with a number of different data formats and supports a wide variety of customization options from tailoring the look of the server&#039;s web pages to complex server-side processing operations. This page describes how to build the server&#039;s source code. If you&#039;re working on a Linux or OS/X computer, the process is similar so we describe only the linux case; we do not support building the server on Windows operating systems.&lt;br /&gt;
&lt;br /&gt;
To build and install the server, you need to perform three steps:&lt;br /&gt;
# Set up the computer to build source code (Install a Java compiler; install a C/C++ compiler; add some other tools)&lt;br /&gt;
# Build the C++ DAP library (&#039;&#039;libdap4&#039;&#039;) and the Hyrax BES daemon&lt;br /&gt;
# Build the Hyrax OLFS web application&lt;br /&gt;
&lt;br /&gt;
Quick links if you already know the process:&lt;br /&gt;
* [https://github.com/opendap/hyrax new all-in-one repo that uses shell scripts]&lt;br /&gt;
* [https://github.com/opendap/libdap libdap git repo]&lt;br /&gt;
* [https://github.com/opendap/bes BES git repo]&lt;br /&gt;
* [https://github.com/opendap/olfs OLFS git repo]&lt;br /&gt;
* [https://github.com/opendap/hyrax-dependencies Hyrax dependencies]&lt;br /&gt;
&lt;br /&gt;
= Set up a CentOS machine to build code =&lt;br /&gt;
== Setup CentOS-7 ==&lt;br /&gt;
Note that I don&#039;t like clicking around to different pages to follow simple directions, so what follows is a short version of the CentOS 6 configuration information we&#039;ve compiled for people who help us by building RPM packages for Hyrax. You can use this to extrapolate how to configure Ubuntu and OSX (we routinely build on those platforms as well). The complete instructions are in [[ConfigureCentos | Configure CentOS]] and describe how to set up a CentOS 6 machine to build software. What follows is the condensed version:&lt;br /&gt;
&lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum -y update&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load a basic software development environment:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.7.0-openjdk java-1.7.0-openjdk-devel ant ant-junit junit&#039;&#039;&#039;&amp;lt;/tt&amp;gt; (it&#039;s likely that you can use more recent versions of Java)&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; The whole thing, with java-1.8.0&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant ant-junit junit git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install rpm-devel rpm-build redhat-rpm-config&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Optional&lt;br /&gt;
:Download, unpack, build and install the GNU autotools (&#039;&#039;but &#039;&#039;&#039;don&#039;t&#039;&#039;&#039; do this unless the versions installed using yum don&#039;t work&#039;&#039;)&lt;br /&gt;
* autoconf &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz autoconf-2.69.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
* automake &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/automake/automake-1.14.1.tar.gz automake-1.14.1.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
* libtool &amp;lt;tt&amp;gt;&#039;&#039;&#039;[http://ftp.gnu.org/gnu/libtool/libtool-2.4.2.tar.gz libtool-2.4.2.tar.gz]&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:build them (&amp;lt;tt&amp;gt;&#039;&#039;&#039;&#039;&#039;./configure; make; sudo make install &#039;&#039;&#039;&#039;&#039;&amp;lt;/tt&amp;gt; - this should take no more than three minutes).&lt;br /&gt;
&lt;br /&gt;
== Setup CentOS-8  ==&lt;br /&gt;
The CentOS-8 setup is very similar to CentOS-7, but there are some minor differences.&lt;br /&gt;
 &lt;br /&gt;
;Update the VM&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum -y update&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;You will need to enable power-tools for this setup&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum config-manager --set-enabled powertools&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Load the basic software development environment plus the additional packages of openjpeg2, jasper, and libtirpc. Note that you may not need &#039;&#039;openjpeg2&#039;&#039; and &#039;&#039;jasper&#039;&#039; if you build the dependencies successfully. If you determine that you don&#039;t need these, please let us know. JUnit support has also been dropped so we dropped the &amp;lt;tt&amp;gt;&#039;&#039;ant-junit junit&#039;&#039;&amp;lt;/tt&amp;gt; packages from the install list.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ant git gcc-c++ flex bison cmake autoconf automake libtool emacs openssl-devel libuuid-devel readline-devel zlib-devel bzip2 bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel cppunit cppunit-devel vim bc openjpeg2-devel jasper-devel libtirpc-devel&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Tell the machine where to find the tirpc libraries&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;export CPPFLAGS=-I/usr/include/tirpc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;export LDFLAGS=-ltirpc&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;NB: As of 1/28/22 you should not need to do this. The &#039;&#039;configure&#039;&#039; script should find the correct way to run python on CentOS 8. However, if it does not, our Makefiles (built from &#039;&#039;Makefile.am&#039;&#039; files) use &#039;&#039;python&#039;&#039; but a vanilla CentOS 8 machine only has &#039;&#039;python3&#039;&#039;. Until we fix this, you need to make sure &#039;&#039;python&#039;&#039; runs a python program. One way is to make a symbolic link between &#039;&#039;python3&#039;&#039; and &#039;&#039;python&#039;&#039; in a directory that is on your PATH. &#039;&#039;&#039;The TODO item here is to make sure &#039;&#039;python&#039;&#039; exists and can run a program&#039;&#039;&#039;. It is generally enough to verify that the command exists:&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;which python&#039;&#039;&#039;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
; Lacking that (which was my situation on Rocky8), install python&lt;br /&gt;
: &amp;lt;tt&amp;gt;&#039;&#039;&#039;sudo yum install -y python3&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;If you&#039;re going to build RPMs&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;yum install rpm-devel rpm-build redhat-rpm-config&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once you run through the rest of the Hyrax build, make sure that both &#039;&#039;gdal&#039;&#039; and &#039;&#039;hdf4&#039;&#039; build correctly (look for their libraries in $prefix/deps/lib). To build them manually, run &#039;&#039;&#039;make gdal&#039;&#039;&#039;, &#039;&#039;&#039;make hdf4&#039;&#039;&#039;, and &#039;&#039;&#039;make netcdf4&#039;&#039;&#039; inside the hyrax-dependencies directory to build and install gdal and hdf4.&lt;br /&gt;
&lt;br /&gt;
== Rocky 8 ==&lt;br /&gt;
&#039;&#039;Updated 6/6/2024&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# To get the commands ps, which, etc.&lt;br /&gt;
dnf install -y procps&lt;br /&gt;
&lt;br /&gt;
# C++ environment plus build tools&lt;br /&gt;
dnf install -y git gcc-c++ flex bison cmake autoconf automake libtool emacs bzip2 vim bc&lt;br /&gt;
&lt;br /&gt;
# Development library versions&lt;br /&gt;
dnf install -y openssl-devel libuuid-devel readline-devel zlib-devel bzip2-devel libjpeg-devel libxml2-devel curl-devel libicu-devel libtirpc-devel&lt;br /&gt;
&lt;br /&gt;
# Java&lt;br /&gt;
dnf install -y java-17-openjdk java-17-openjdk-devel ant &lt;br /&gt;
&lt;br /&gt;
# Setup DNF so that we can load in some obscure packages from EPEL, etc., repos&lt;br /&gt;
dnf install dnf-plugins-core&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Additional packages (from EPEL, etc.)&lt;br /&gt;
dnf install -y cppunit cppunit-devel openjpeg2-devel jasper-devel&lt;br /&gt;
&lt;br /&gt;
= A semi-automatic build =&lt;br /&gt;
&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the short instructions in the README file.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;tt&amp;gt;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Summarized here, those instructions are:&lt;br /&gt;
;use bash: The shell scripts in this repo assume you are using bash.&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development: &#039;&#039;source spath.sh&#039;&#039;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies: &#039;&#039;./hyrax_clone.sh -v&#039;&#039;&lt;br /&gt;
;build the code, including the dependencies: &#039;&#039;./hyrax_build.sh -v&#039;&#039;&lt;br /&gt;
;test the server: Start the BES using  &#039;&#039;besctl start&#039;&#039;&lt;br /&gt;
:Start the OLFS using &#039;&#039;./build/apache-tomcat-7.0.57/bin/startup.sh&#039;&#039;&lt;br /&gt;
:Test the server by looking at &#039;&#039;&amp;lt;nowiki&amp;gt;http://localhost:8080/opendap&amp;lt;/nowiki&amp;gt;&#039;&#039; in a browser. You should see a directory named &#039;&#039;data&#039;&#039;, and following that link should lead to more data. The server will also be accessible to clients other than a web browser.&lt;br /&gt;
:To test the BES independently of the front end, use &#039;&#039;bescmdln&#039;&#039; and give it the &#039;&#039;show version;&#039;&#039; command; you should see output about the different components and their versions. &lt;br /&gt;
:Use &#039;&#039;exit&#039;&#039; to leave the command line test client.&lt;br /&gt;
&lt;br /&gt;
As described in the README file that is part of the &#039;&#039;hyrax&#039;&#039; repo, there are some other scripts in the repo, and the &#039;&#039;clone&#039;&#039; and &#039;&#039;build&#039;&#039; scripts have options that you can investigate using -h (help).&lt;br /&gt;
&lt;br /&gt;
= The manual build = &lt;br /&gt;
&lt;br /&gt;
In the following, we describe only the build process for CentOS; the one for OS/X is similar and we note the differences where they are significant.&lt;br /&gt;
&lt;br /&gt;
== Get Hyrax from GitHub ==&lt;br /&gt;
Use git to clone the https://github.com/opendap/hyrax project and follow the instructions on this page (which differ a bit from the ones in the project&#039;s README).&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;git clone https://github.com/opendap/hyrax&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have the &#039;&#039;hyrax&#039;&#039; project cloned:&lt;br /&gt;
;set up some environment variables so the server will build and install locally, something that streamlines development&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;source spath.sh&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;clone the three code repos for the server plus the hyrax dependencies&lt;br /&gt;
:&amp;lt;tt&amp;gt;&#039;&#039;&#039;./hyrax_clone.sh -v&#039;&#039;&#039;&amp;lt;/tt&amp;gt;&lt;br /&gt;
;proceed with the rest of the build as described in the following sections of this page&lt;br /&gt;
&lt;br /&gt;
== Important Note ==&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Many of the problems people have with the build stem from not setting the shell correctly for the build.&amp;lt;/font&amp;gt;&lt;br /&gt;
In the above section, &#039;&#039;make sure&#039;&#039; you run &#039;&#039;&#039;source spath.sh&#039;&#039;&#039; before you run any of the build, compile, or test commands that use the source code or build files. The &#039;&#039;$prefix&#039;&#039; and &#039;&#039;$PATH&#039;&#039; environment variables it sets are simple, but nearly every step below depends on them. When you exit a terminal window and then open a new one, make sure to (re)source the &#039;&#039;spath.sh&#039;&#039; file in the new shell. You don&#039;t have to source spath.sh every time you enter the &#039;&#039;hyrax&#039;&#039; directory, but you must run it in every new instance of the shell.&lt;br /&gt;
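The reminder above can be automated with a small guard function. This is a sketch, not part of the repo; it assumes only that &#039;&#039;spath.sh&#039;&#039; sets &#039;&#039;$prefix&#039;&#039; and puts &#039;&#039;$prefix/bin&#039;&#039; on &#039;&#039;$PATH&#039;&#039;, as described on this page (the demo values at the end are stand-ins):&lt;br /&gt;

```shell
# Sketch: verify the environment that 'source spath.sh' should establish.
check_build_env() {
    if [ -z "${prefix:-}" ]; then
        echo "prefix is not set - run 'source spath.sh' first"
        return 1
    fi
    case ":$PATH:" in
        *":$prefix/bin:"*) echo "ok: $prefix/bin is on PATH" ;;
        *) echo "warning: $prefix/bin is not on PATH"; return 1 ;;
    esac
}

# Demo with stand-in values; in a real session, 'source spath.sh' sets these.
prefix=/tmp/hyrax-demo
PATH="$prefix/bin:$PATH"
check_build_env
```

If the function prints anything other than the &#039;&#039;ok:&#039;&#039; line, go back and source &#039;&#039;spath.sh&#039;&#039; before continuing.&lt;br /&gt;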
&lt;br /&gt;
== Compile the Hyrax dependencies ==&lt;br /&gt;
Use git to clone the hyrax-dependencies:&lt;br /&gt;
  git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
And then build it. Unlike many source packages, there is no need to run a configure script, just &#039;&#039;make&#039;&#039; will do. However, the Makefile in this package expects &#039;&#039;$prefix&#039;&#039; to be set as described above. It will put all of the Hyrax server dependencies in a subdirectory called &#039;&#039;deps&#039;&#039;. To build the dependencies for building RPMs, use &#039;&#039;make -j9 for-static-rpm&#039;&#039;.&lt;br /&gt;
;(make sure you&#039;re in the directory set to &#039;&#039;$prefix&#039;&#039;)&lt;br /&gt;
&amp;lt;tt&amp;gt;&lt;br /&gt;
;git clone https://github.com/opendap/hyrax-dependencies&lt;br /&gt;
; cd hyrax-dependencies&lt;br /&gt;
; make --jobs=9&lt;br /&gt;
: &#039;&#039;The --jobs=N runs a parallel build with at most N simultaneous compile operations. This will result in a huge performance improvement on multi-core machines. &#039;&#039;&#039;-jN&#039;&#039;&#039; is the short form for the option.&#039;&#039;&lt;br /&gt;
;cd ..: &#039;&#039;Go back up to &#039;&#039;&#039;$prefix&#039;&#039;&#039; &#039;&#039;&lt;br /&gt;
&amp;lt;/tt&amp;gt;&lt;br /&gt;
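A reasonable value for N in &#039;&#039;--jobs=N&#039;&#039; is the number of cores plus one. This little sketch (not part of the Makefile; it assumes &#039;&#039;nproc&#039;&#039; on Linux or &#039;&#039;sysctl&#039;&#039; on OS/X is available) computes that:&lt;br /&gt;

```shell
# Sketch: pick a parallel-build job count of cores + 1.
cores=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 4)
jobs=$((cores + 1))
echo "run: make --jobs=$jobs"
```

The final &#039;&#039;echo 4&#039;&#039; is just a fallback so the sketch works even where neither tool exists.&lt;br /&gt;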
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; You can get some of the &#039;&#039;dependencies&#039;&#039; for Hyrax like &#039;&#039;netCDF&#039;&#039; from the EPEL repository, but the versions are often older than Hyrax needs. Contact us if you want information about using EPEL. At the risk of throwing people a curve ball, here&#039;s a synopsis of the process. Don&#039;t do this unless you know EPEL well. Use [http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm epel-release-6-8.noarch.rpm] and install it using &#039;&#039;sudo yum install epel-release-6-8.noarch.rpm&#039;&#039;. Then install packages needed to read various file formats: &#039;&#039;yum install netcdf-devel hdf-devel hdf5-devel libicu-devel cfitsio-devel cppunit-devel rpm-devel rpm-build&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Build &#039;&#039;libdap&#039;&#039; and the &#039;&#039;BES&#039;&#039; daemon ==&lt;br /&gt;
&lt;br /&gt;
==== Get and build libdap4 ====&lt;br /&gt;
;WARNING: If you have &#039;&#039;libdap&#039;&#039; already, uninstall it before proceeding.&lt;br /&gt;
Build, test and install libdap4 into $prefix:&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone https://github.com/opendap/libdap4&lt;br /&gt;
cd libdap4&lt;br /&gt;
autoreconf -fiv&lt;br /&gt;
./configure --prefix=$prefix --enable-developer &lt;br /&gt;
make -j9&lt;br /&gt;
make check -j9&lt;br /&gt;
make install&lt;br /&gt;
cd .. # Go back up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Get and build the BES and all of the modules shipped with Hyrax ====&lt;br /&gt;
Build, test and install the BES and its modules&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;git clone https://github.com/opendap/bes # Clone the BES from GitHub&lt;br /&gt;
cd bes # Enter the bes directory&lt;br /&gt;
git submodule update --init # Update the submodules&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
That will clone some additional modules into the directory &#039;&#039;modules&#039;&#039;; you need to do this! (Previously it was an optional step). See [http://git-scm.com/docs/git-submodule git submodule] for information about all you can do with git&#039;s submodule command. Also note that this does not checkout a particular branch for the submodules; the modules are left in the &#039;detached head&#039; state. To checkout a particular branch like &#039;master&#039;, which is important if you&#039;ll be making changes to that code, use &#039;&#039;git submodule foreach &#039;git checkout master&#039; &#039;&#039;. &lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;autoreconf --force --install --verbose # You can use -fiv instead of the long options.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means that, when starting from a freshly cloned repo, you should run all of the autotools commands and install all of the needed scripts.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;./configure --prefix=$prefix  --with-dependencies=$prefix/deps --enable-developer&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: Notes:&lt;br /&gt;
:* The --with-dependencies option is not needed if you load the dependencies from RPMs or otherwise have them installed and generally accessible on the build machine.&lt;br /&gt;
:* The --enable-developer option will compile in all of the debugging code, which may affect performance even when the debugging output is not enabled.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make -j9&lt;br /&gt;
make check -j9&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Some tests may fail; adding &#039;&#039;-k&#039;&#039; tells make to ignore the failures and keep marching along. &#039;&#039;Note that you must run &#039;&#039;&#039;make&#039;&#039;&#039; before &#039;&#039;&#039;make check&#039;&#039;&#039; in the bes code&#039;&#039;.&lt;br /&gt;
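To see what &#039;&#039;-k&#039;&#039; buys you, here is a self-contained demonstration with a throwaway Makefile (nothing to do with the bes sources; the target names are invented):&lt;br /&gt;

```shell
# Sketch: 'make -k' keeps going past a failing target.
dir=$(mktemp -d)
printf 'check: a b\na:\n\tfalse\nb:\n\t@echo b-ran\n' > "$dir/Makefile"
# Target 'a' fails, but -k lets make continue on to target 'b'.
out=$(make -C "$dir" -k check 2>/dev/null; echo "exit=$?")
echo "$out"
rm -rf "$dir"
```

Without &#039;&#039;-k&#039;&#039;, make would stop at the first failure and &#039;&#039;b&#039;&#039; would never run; either way the overall exit status is non-zero, so check the log rather than just the status.&lt;br /&gt;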
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;make install&lt;br /&gt;
cd .. # Go back up to $prefix&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Test the BES ====&lt;br /&gt;
Start the BES and verify that all of the modules build correctly.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;besctl start # Start the BES.&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
Given that &#039;&#039;$prefix/bin&#039;&#039; is on your &#039;&#039;$PATH&#039;&#039;, this should start the BES. You will not need to be root if you used the &#039;&#039;--enable-developer&#039;&#039; switch with configure (as shown above); otherwise you should run &#039;&#039;sudo besctl start&#039;&#039;, with the caveat that as root &#039;&#039;$prefix/bin&#039;&#039; will probably not be in your &#039;&#039;$PATH&#039;&#039;.&lt;br /&gt;
:If there&#039;s an error (e.g., you tried to start as a regular user but need to be root), edit bes.conf so the BES runs as a real user (yourself?) in a real group (use &#039;groups&#039; to see which groups you are in), and also check that the bes.log file is &#039;&#039;not&#039;&#039; owned by root. &lt;br /&gt;
:Restart.&lt;br /&gt;
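The troubleshooting note above boils down to three quick checks, sketched here (run from the directory holding &#039;&#039;bes.log&#039;&#039;; the file and command names are the ones used on this page):&lt;br /&gt;

```shell
# Sketch: gather the facts the bes.conf fix needs.
echo "user for bes.conf:  $(id -un)"
echo "groups you are in:  $(id -Gn)"
# bes.log must not be owned by root:
ls -l bes.log 2>/dev/null || echo "no bes.log in this directory"
```

Put the reported user and one of the reported groups into bes.conf, and chown bes.log if it ended up owned by root.&lt;br /&gt;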
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt;bescmdln # Now that the BES is running, start the BES testing tool&lt;br /&gt;
BESClient&amp;gt; show version; # Send the BES the version command to see if it&#039;s running &amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
:Take a quick look at the output. There should be entries for libdap, bes and all of the modules.&lt;br /&gt;
&amp;lt;b&amp;gt;&amp;lt;pre&amp;gt; BESClient&amp;gt; exit; # Exit the testing tool&amp;lt;/pre&amp;gt;&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that even though you have exited the &#039;&#039;bescmdln&#039;&#039; test tool, the BES is still running. That&#039;s fine - we&#039;ll use it in just a bit - but if you want to shut it down, use &#039;&#039;besctl stop&#039;&#039;, or &#039;&#039;besctl pids&#039;&#039; to see the daemon&#039;s processes. If the BES is not stopping, &#039;&#039;besctl kill&#039;&#039; will stop all BES processes without waiting for them to complete their current task.&lt;br /&gt;
&lt;br /&gt;
== Build the Hyrax &#039;&#039;OLFS&#039;&#039; web application ==&lt;br /&gt;
The OLFS is a Java servlet web application built using ant; it runs with Tomcat, Glassfish, etc. You need a copy of Tomcat, but our servlet does not work with the RPM version of Tomcat. Get [http://tomcat.apache.org/download-70.cgi Tomcat 7 from Apache]. Note that if you built the dependencies from source using &#039;&#039;hyrax-dependencies-1.10.tar&#039;&#039;, there is a copy of Tomcat in the &#039;&#039;hyrax-dependencies/extra_downloads&#039;&#039; directory. Unpack the Tomcat tar file in &#039;&#039;$prefix&#039;&#039;; I&#039;ll assume below that it is there.&lt;br /&gt;
&lt;br /&gt;
;tar -xzf apache-tomcat-7.0.57.tar.gz: Expand the Tomcat tar ball&lt;br /&gt;
;git clone https://github.com/opendap/olfs: Get the OLFS source code&lt;br /&gt;
;cd olfs: change directory to the OLFS source&lt;br /&gt;
;ant server: Build it&lt;br /&gt;
;cp build/dist/opendap.war ../apache-tomcat-7.0.57/webapps/: Copy the opendap web archive to the tomcat webapps directory.&lt;br /&gt;
;cd ..: Go up to &#039;&#039;$prefix&#039;&#039;&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/startup.sh: Start Tomcat&lt;br /&gt;
&lt;br /&gt;
== Test the server ==&lt;br /&gt;
You can test the server several ways, but the most fun is to use a web browser. The URL &#039;&#039;http://&amp;lt;machine&amp;gt;:8080/opendap&#039;&#039; should return a page pointing to a collection of test datasets bundled with the server. You can also use &#039;&#039;curl&#039;&#039;, &#039;&#039;wget&#039;&#039; or any application that can read from OPeNDAP servers (e.g., Matlab, Octave, ArcGIS, IDL, ...).&lt;br /&gt;
&lt;br /&gt;
== Stopping the server ==&lt;br /&gt;
Stop both the BES and Tomcat&lt;br /&gt;
&lt;br /&gt;
;besctl stop&lt;br /&gt;
;./apache-tomcat-7.0.57/bin/shutdown.sh&lt;br /&gt;
&lt;br /&gt;
Note that there is also a &#039;&#039;hyraxctl&#039;&#039; script that provides a way to start and stop Hyrax without you (or &#039;&#039;init.d&#039;&#039;) having to type separate commands for both the BES and OLFS. This script is part of the BES software you cloned from git.&lt;br /&gt;
&lt;br /&gt;
== Building select parts of the BES ==&lt;br /&gt;
Building just the BES and one or more of its handlers/modules is not at all hard to do with a checkout of code from git. In the above section on building the BES, simply skip the step where the submodules are cloned (&#039;&#039;git submodule update --init&#039;&#039;) and link configure.ac to &#039;&#039;configure_standard.ac&#039;&#039;. The rest of the process is as shown. The end result is a BES daemon without any of the standard Hyrax modules (but support for DAP will be built if &#039;&#039;libdap&#039;&#039; is found by the configure script).&lt;br /&gt;
&lt;br /&gt;
To build modules for the BES, simply go to &#039;&#039;$prefix&#039;&#039;, clone their git repos and build them, taking care to set &#039;&#039;$prefix&#039;&#039; when calling each module&#039;s &#039;&#039;configure&#039;&#039; script. &lt;br /&gt;
&lt;br /&gt;
Note that it is easy to combine the &#039;build it all&#039; and &#039;build just one&#039; processes so that a complete Hyrax BES can be built in one go and then a new module/handler not included in the BES git repo can be built and used. Each module we have on GitHub has a &#039;&#039;configure.ac&#039;&#039;, &#039;&#039;Makefile.am&#039;&#039;, etc., that will support both kinds of builds and [[Configuration of BES Modules]] explains how to take a module/handler that builds as a standalone module and tweak the build scripts so that it&#039;s fully integrated into the Hyrax BES build, too.&lt;br /&gt;
&lt;br /&gt;
= Building on Ubuntu =&lt;br /&gt;
This was tested using Xenial (Ubuntu 16.04).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get update&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Packages needed:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;sudo apt-get install ...&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ant junit git flex bison autoconf automake libtool emacs openssl bzip2 libjpeg-dev libxml2-dev curl libicu-dev vim bc make cmake uuid-dev libcurl4-openssl-dev libicu-dev g++ zlib1g-dev libcppunit-dev libssl-dev&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=More_about_strings_-_passing_strings_to_functions&amp;diff=13527</id>
		<title>More about strings - passing strings to functions</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=More_about_strings_-_passing_strings_to_functions&amp;diff=13527"/>
		<updated>2024-02-22T23:26:00Z</updated>

		<summary type="html">&lt;p&gt;Jimg: Created page with &amp;quot;== Is it better to pass as a const reference, or by value and then use std::move()? ==  It depends.  Here&amp;#039;s a concise explanation from Stack Overflow. There are three cases:    /* (0) */    Creature(const std::string &amp;amp;name) : m_name{name} { }  A passed lvalue binds to name, then is copied into m_name.  A passed rvalue binds to name, then is copied into m_name.    /* (1) */    Creature(std::string name) : m_name{std::move(name)} { }  A passed lvalue is copied into name, t...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Is it better to pass as a const reference, or by value and then use std::move()? ==&lt;br /&gt;
&lt;br /&gt;
It depends.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s a concise explanation from Stack Overflow. There are three cases:&lt;br /&gt;
&lt;br /&gt;
  /* (0) */ &lt;br /&gt;
  Creature(const std::string &amp;amp;name) : m_name{name} { }&lt;br /&gt;
&lt;br /&gt;
A passed lvalue binds to name, then is copied into m_name.&lt;br /&gt;
&lt;br /&gt;
A passed rvalue binds to name, then is copied into m_name.&lt;br /&gt;
&lt;br /&gt;
  /* (1) */ &lt;br /&gt;
  Creature(std::string name) : m_name{std::move(name)} { }&lt;br /&gt;
&lt;br /&gt;
A passed lvalue is copied into name, then is moved into m_name.&lt;br /&gt;
&lt;br /&gt;
A passed rvalue is moved into name, then is moved into m_name.&lt;br /&gt;
&lt;br /&gt;
  /* (2) */ &lt;br /&gt;
  Creature(const std::string &amp;amp;name) : m_name{name} { }&lt;br /&gt;
  Creature(std::string &amp;amp;&amp;amp;rname) : m_name{std::move(rname)} { }&lt;br /&gt;
&lt;br /&gt;
A passed lvalue binds to name, then is copied into m_name.&lt;br /&gt;
&lt;br /&gt;
A passed rvalue binds to rname, then is moved into m_name.&lt;br /&gt;
&lt;br /&gt;
As move operations are usually faster than copies, (1) is better than (0) if you pass a lot of temporaries. (2) is optimal in terms of copies/moves, but requires code repetition.&lt;br /&gt;
&lt;br /&gt;
Source: https://stackoverflow.com/questions/51705967/advantages-of-pass-by-value-and-stdmove-over-pass-by-reference&lt;br /&gt;
&lt;br /&gt;
== What is an &#039;&#039;lvalue&#039;&#039;? What is an &#039;&#039;rvalue&#039;&#039;? ==&lt;br /&gt;
&lt;br /&gt;
;LValue: An lvalue refers to an expression that identifies a memory location and can be assigned a value. It essentially acts as a locator for data storage.&lt;br /&gt;
&lt;br /&gt;
Key characteristics:&lt;br /&gt;
* Can appear on the left-hand side of an assignment operator (=).&lt;br /&gt;
* Represents a persistent object that exists beyond the evaluation of a single expression.&lt;br /&gt;
&lt;br /&gt;
Examples:&lt;br /&gt;
* Variable names (e.g., x, name)&lt;br /&gt;
* Array elements (e.g., array[3])&lt;br /&gt;
* Member variables of objects (e.g., object.value)&lt;br /&gt;
* Function calls returning an lvalue reference (rare)&lt;br /&gt;
&lt;br /&gt;
;RValue: An rvalue represents a value itself, but it doesn&#039;t have a memory location that can be directly assigned to. It provides the data for an assignment.&lt;br /&gt;
&lt;br /&gt;
Key characteristics:&lt;br /&gt;
* Can appear on the right-hand side of an assignment operator (=).&lt;br /&gt;
* Represents a temporary value that exists only during the evaluation of an expression.&lt;br /&gt;
&lt;br /&gt;
Examples:&lt;br /&gt;
* Literal values (e.g., 42, &amp;quot;hello&amp;quot;)&lt;br /&gt;
* Arithmetic expressions (e.g., 2 + 3)&lt;br /&gt;
* Function calls that don&#039;t return an lvalue reference&lt;br /&gt;
* The result of certain operators (e.g., increment/decrement)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Every expression in a program is either an lvalue or an rvalue.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
;Implicit conversions: In some cases, compilers can implicitly convert between lvalues and rvalues. For example, an lvalue can be converted to an rvalue when its value is used in an expression (e.g., x + 5).&lt;br /&gt;
;Modern languages: While the core concepts remain similar, some modern languages like C++ introduce additional categories like &#039;&#039;xvalues&#039;&#039; for handling move semantics and resource management.&lt;br /&gt;
&lt;br /&gt;
Source: https://gemini.google.com/app/efaa1d88d284719d, with edits.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Developer_Info&amp;diff=13526</id>
		<title>Developer Info</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Developer_Info&amp;diff=13526"/>
		<updated>2024-02-22T23:09:05Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* C++ Coding Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* [https://github.com/OPENDAP OPeNDAP&#039;s GitHub repositories]: OPeNDAP&#039;s software is available using GitHub in addition to the downloads from our website.&lt;br /&gt;
** Before 2015 we hosted our own SVN repository. It&#039;s still online and available, but for read-only access, at [https://scm.opendap.org/svn https://scm.opendap.org/svn].&lt;br /&gt;
* [https://travis-ci.org/OPENDAP Continuous Integration builds]: Software that is built whenever new changes are pushed to the master branch. These builds are done on the Travis-CI system.&lt;br /&gt;
* [http://test.opendap.org/ test.opendap.org]: Test servers with data files.&lt;br /&gt;
* We use the Coverity static analysis system to look for common software defects; information on Hyrax is spread across three projects:&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-bes?tab=overview The BES and the standard handlers we distribute]&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-olfs?tab=overview The OLFS - the front end to the Hyrax data server]&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-libdap4?tab=overview libdap - The implementation of DAP2 and DAP4]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP&#039;s FAQ ==&lt;br /&gt;
The [http://www.opendap.org/faq-page OPeNDAP FAQ] has a pretty good section on developer&#039;s questions.&lt;br /&gt;
&lt;br /&gt;
== C++ Coding Information ==&lt;br /&gt;
* [[Include files for libdap | Guidelines for including headers]]&lt;br /&gt;
* [[Using lambdas with the STL]]&lt;br /&gt;
* [[Better Unit tests for C++]]&lt;br /&gt;
* [[Better Singleton classes C++]]&lt;br /&gt;
* [[What is faster? stringstream string + String]]&lt;br /&gt;
* [[More about strings - passing strings to functions]]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP Workshops ==&lt;br /&gt;
* [http://www.opendap.org/about/workshops-and-presentations/2007-10-12 The APAC/BOM Workshops]: This workshop spanned several days and covered a number of topics, including information for SAs and Developers. Oct 2007.&lt;br /&gt;
* [http://www.opendap.org/about/workshops-and-presentations/2008-07-15 ESIP Federation Server Workshop]: This half-day workshop focused on server installation and configuration. Summer 2008&lt;br /&gt;
* [[A One-day Course on Hyrax Development | Server Functions]]: This one-day workshop is all about writing and debugging server-side functions. It also contains a wealth of information about Hyrax, the BES and debugging tricks for the server. Spring 2012. Updated Fall 2014 for presentation to Ocean Networks Canada.&lt;br /&gt;
&lt;br /&gt;
== libdap4 and BES Reference documentation ==&lt;br /&gt;
* [https://opendap.github.io/bes/html/ BES Reference]&lt;br /&gt;
* [https://opendap.github.io/libdap4/html/ libdap Reference]&lt;br /&gt;
&lt;br /&gt;
== BES Development Information ==&lt;br /&gt;
* [[Hyrax - Logging Configuration|Logging Configuration]]&lt;br /&gt;
&lt;br /&gt;
* [[BES_-_How_to_Debug_the_BES| How to debug the BES]]&lt;br /&gt;
* [[BES - Debugging Using besstandalone]]&lt;br /&gt;
* [[Hyrax - Create BES Module | How to create your own BES Module]]&lt;br /&gt;
* Hyrax Module Integration: How to configure your module so it&#039;s easy to add to Hyrax instances ([[:File:HyraxModuleIntegration-1.2.pdf|pdf]])&lt;br /&gt;
* [[Hyrax - Starting and stopping the BES| Starting and stopping the BES]]&lt;br /&gt;
* [[Hyrax - Running bescmdln | Running the BES command line client]]&lt;br /&gt;
* [[Hyrax - BES Client commands| BES Client commands]]. The page [[BES_XML_Commands | BES XML Commands]] repeats this info with a bit more detail on the return values. Most of the commands don&#039;t return anything unless they return an error; they are expected to be used in a group where a &#039;&#039;get&#039;&#039; command closes out the request and obviously does return a response of some kind (maybe an error).&lt;br /&gt;
* [[Hyrax:_BES_Administrative_Commands| BES Administrative Commands]]&lt;br /&gt;
* [[Hyrax - Extending BES Module | Extending your BES Module]]&lt;br /&gt;
* [[Hyrax - Example BES Modules | Example BES Modules]] - the Hello World example and the CSV data handler&lt;br /&gt;
* [[Hyrax - BES PPT | BES communication protocol using PPT (point to point transport)]]&lt;br /&gt;
&lt;br /&gt;
* [[Australian BOM Software Developer&#039;s Agenda and Presentations|Software Developers Workshop]]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP Development process information  ==&lt;br /&gt;
These pages contain information about how we&#039;d like people working with us to use our various on-line tools.&lt;br /&gt;
&lt;br /&gt;
* [[Planning a Program Increment]] This is a checklist for the planning phase that precedes a Program Increment (PI) when using SAFe with the NASA ESDIS development group.&lt;br /&gt;
* [[Hyrax GitHub Source Build]] This explains how to clone our software from GitHub and build our code using a shell like bash. It also explains how to build the BES and all of the Hyrax &#039;standard&#039; handlers in one operation, as well as how to build just the parts you need without cloning the whole set of repos. Some experience with &#039;git submodule&#039; will make this easier, although the page explains everything.&lt;br /&gt;
* [[Bug Prioritization]]. How we prioritize bugs in our software.&lt;br /&gt;
&lt;br /&gt;
===[[How to Make a Release|Making A Release]] ===&lt;br /&gt;
* [[How to Make a Release]] A general template for making a release. This references some of the pages below.&lt;br /&gt;
&lt;br /&gt;
== Software process issues: ==&lt;br /&gt;
* [[How to download test logs from a Travis build]] All of our builds on Travis that run tests save those logs to an S3 bucket.&lt;br /&gt;
* [[ConfigureCentos| How to configure a CentOS machine for production of RPM binaries]] - Updated 12/2014 to include information regarding git.&lt;br /&gt;
* [[How to use CLion with our software]]&lt;br /&gt;
* [[BES Timing| How to add timing instrumentation to your BES code.]]&lt;br /&gt;
* [[UnitTests| How to write unit tests using CppUnit]] NB: See other information under the heading of C++ development&lt;br /&gt;
* [[valgrind| How to use valgrind with unit tests]]&lt;br /&gt;
* [[Debugging the distcheck target]] Yes, this gets its own page...&lt;br /&gt;
* [[CopyRights| How to copyright software written for OPeNDAP]]&lt;br /&gt;
* [[Managing public and private keys using gpg]]&lt;br /&gt;
* [[SecureEmail |How to Setup Secure Email and Sign Software Distributions]]&lt;br /&gt;
* [[UserSupport|How to Handle Email-list Support Questions]]&lt;br /&gt;
* [[NetworkServerSecurity |Security Policy and Related Procedures]]&lt;br /&gt;
* [http://semver.org/ Software version numbers]&lt;br /&gt;
* [[GuideLines| Development Guidelines]]&lt;br /&gt;
* [[Apple M1 Special Needs]]&lt;br /&gt;
&lt;br /&gt;
==== Older info of limited value: ====&lt;br /&gt;
* [http://gcc.gnu.org/gcc-4.4/cxx0x_status.html C++-11 gcc/g++-4.4 support] We now require compilers that support C++-14, so this is outdated (4/19/23).&lt;br /&gt;
* [[How to use Eclipse with Hyrax Source Code]] I like Eclipse, but we now use CLion because it&#039;s better (4/19/23). Assuming you have cloned our Hyrax code from GitHub, this explains how to set up Eclipse so you can work fairly easily and switch back and forth between the shell, emacs and Eclipse.&lt;br /&gt;
&lt;br /&gt;
==== AWS Tips ====&lt;br /&gt;
* [[Growing a CentOS Root Partition on an AWS EC2 Instance]]&lt;br /&gt;
* [[How Shutoff the CentOS firewall]]&lt;br /&gt;
&lt;br /&gt;
== General development information ==&lt;br /&gt;
These pages contain general information relevant to anyone working with our software:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Git Hacks and Tricks]]&#039;&#039;&#039;: Information about using git and/or GitHub that seems useful and maybe not all that obvious.&lt;br /&gt;
* [[Git Secrets]]: securing repositories from AWS secret key leaks.&lt;br /&gt;
* [https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto Valgrind Suppression File Howto] How to build a suppressions file for valgrind.&lt;br /&gt;
* [[Using a debugger for C++ with Eclipse on OS/X]] Short version: use lldbmi2 **Add info**&lt;br /&gt;
* [[Using ASAN]] Short version, look [https://github.com/google/sanitizers/wiki/AddressSanitizerAndDebugger at the Google/GitHub pages] for useful environment variables **add text** On Centos, use yum install llvm to get the &#039;symbolizer&#039; and try &#039;&#039;ASAN_OPTIONS=symbolize=1 ASAN_SYMBOLIZER_PATH=$(shell which llvm-symbolizer)&#039;&#039;&lt;br /&gt;
* [[How to use &#039;&#039;Instruments&#039;&#039; on OS/X to profile]] Updated 7/2018&lt;br /&gt;
* [https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto Valgrind - How to generate suppression files for valgrind] This will quiet valgrind, keeping it from telling you OS/X or Linux (or the BES) is leaking memory.&lt;br /&gt;
* [[Migrating source code from SVN to git]]: How to move a large project from SVN to git and keep the history, commits, branches and tags.&lt;br /&gt;
* [https://developer.mozilla.org/en-US/docs/Eclipse_CDT Eclipse - Detailed information about running Eclipse on OSX from the Mozilla project]. Updated in 2017, this is really good, but be aware that it&#039;s specific to Mozilla, so some of the tips don&#039;t apply. Hyrax (i.e., libdap4 and BES) also uses its own build system (autotools + make), so most of the configuration information here is very apropos. See also [[How to use Eclipse with Hyrax Source Code]].&lt;br /&gt;
* [https://jfearn.fedorapeople.org/en-US/RPM/4/html/RPM_Guide/index.html RPM Guide] The best one I&#039;ve found so far...&lt;br /&gt;
* [https://autotools.io/index.html Autotools Myth busters] The best info on autotools I&#039;ve found yet (covers &#039;&#039;autoconf&#039;&#039;, &#039;&#039;automake&#039;&#039;, &#039;&#039;libtool&#039;&#039; and &#039;&#039;pkg-config&#039;&#039;).&lt;br /&gt;
* The [https://www.gnu.org/software/autoconf/autoconf.html autoconf] manual&lt;br /&gt;
* The [https://www.gnu.org/software/automake/ automake] manual&lt;br /&gt;
* The [https://www.gnu.org/software/libtool/ libtool] manual&lt;br /&gt;
* A good [https://lldb.llvm.org/lldb-gdb.html gdb to lldb cheat sheet] for those of us who know &#039;&#039;gdb&#039;&#039; but not &#039;&#039;lldb&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
= Old information =&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Old build information&lt;br /&gt;
====The Release Process====&lt;br /&gt;
# Make sure the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; project is up to date and tar balls on www.o.o. If there have been changes/updates:&lt;br /&gt;
## Update version number for the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; in the &amp;lt;tt&amp;gt;Makefile&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Save, commit, (merge?), and push the changes to the &amp;lt;tt&amp;gt;master&amp;lt;/tt&amp;gt; branch.&lt;br /&gt;
## Once the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; CI build is finished, trigger CI builds for both &amp;lt;tt&amp;gt;libdap4&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;bes&amp;lt;/tt&amp;gt; by pushing change(s) to the master branch of each.&lt;br /&gt;
# [[Source_Release_for_libdap | Making a source release of libdap]]&lt;br /&gt;
# [[ReleaseGuide | Making a source release of the BES]]. &lt;br /&gt;
# [[OLFSReleaseGuide| Make the OLFS release WAR file]]. Follow these steps to create the three .jar files needed for the OLFS release. Includes information on how to build the OLFS and how to run the tests.&lt;br /&gt;
# [[HyraxDockerReleaseGuide|Make the official Hyrax Docker image for the release]] When the RPMs and the WAR file(s) are built and pushed to their respective download locations, make the Docker image of the release.&lt;br /&gt;
&lt;br /&gt;
====Supplemental release guides====&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Old - use the packages built using the Continuous Delivery process&amp;lt;/font&amp;gt;&lt;br /&gt;
# [[RPM |Make the RPM Distributions]]. Follow these steps to create an RPM distribution of the software. &#039;&#039;&#039;Note:&#039;&#039;&#039; &#039;&#039;Now we use packages built using CI/CD, so this checklist is no longer needed.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: &#039;&#039;The following is all about using Subversion and is out of date as of November 2014 when we switched to git. There are still good ideas here...&#039;&#039;&lt;br /&gt;
* [[MergingBranches |How to merge code]]&lt;br /&gt;
* [[TrunkDevelBranchRel | Using the SVN trunk, branches and tags to manage releases]].&lt;br /&gt;
* [[ShrewBranchGuide | Making a Branch of Shrew for a Server Release]]. Releases should be made from the trunk and moved to a branch once they are &#039;ready&#039; so that development can continue on the trunk and so that we can easily go back to the software that made up a release, fix bugs, and (re)release those fixes. In general, it&#039;s better to fix things like build issues, etc., discovered in the released software &#039;&#039;on the trunk&#039;&#039; and merge those down to the release branch to maintain consistency, re-release, etc. This also means that virtually all new feature development should take place on special &#039;&#039;feature&#039;&#039; branches, not the trunk.&lt;br /&gt;
* [[Hyrax Package for OS-X]]. This describes how to make a new OS/X &#039;metapackage&#039; for Hyrax.&lt;br /&gt;
* [[XP| Making Windows XP distributions]]. Follow these directions to make Windows XP binaries.&lt;br /&gt;
* [[ReleaseToolbox |Making a Matlab Ocean Toolbox Release]].  Follow these steps when a new Matlab GUI version is ready to be released.&lt;br /&gt;
* [[Eclipse - How to Setup Eclipse in a Shrew Checkout]] This includes some build instructions&lt;br /&gt;
* [[LinuxBuildHostConfig| How to configure a Linux machine to build Hyrax from SVN]]&lt;br /&gt;
* [[ConfigureSUSE| How to configure a SUSE machine for production of RPM binaries]]&lt;br /&gt;
* [[ConfigureAmazonLinuxAMI| How to configure an Amazon Linux AMI for EC2 Instance To Build Hyrax]]&lt;br /&gt;
* [[TestOpendapOrg | Notes from setting up Hyrax on our new web host]]&lt;br /&gt;
* [http://svnbook.red-bean.com/en/1.7/index.html Subversion 1.7 documentation] -- The official Subversion documentation; [http://svnbook.red-bean.com/en/1.1/svn-book.pdf PDF] and [http://svnbook.red-bean.com/en/1.1/index.html HTML].&lt;br /&gt;
* [[OPeNDAP&#039;s Use of Trac]] -- How to use Trac&#039;s various features in the software development process.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Source_Release_For_hyrax-dependencies&amp;diff=13525</id>
		<title>Source Release For hyrax-dependencies</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Source_Release_For_hyrax-dependencies&amp;diff=13525"/>
		<updated>2024-01-24T23:25:36Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Tag The Release */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
This task is to ensure that the &#039;&#039;hyrax-dependencies&#039;&#039; project is up to date and that the tar balls on www.o.o are current.&lt;br /&gt;
&lt;br /&gt;
== Update ChangeLog, NEW, and release version ==&lt;br /&gt;
=== Update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file. ===&lt;br /&gt;
Use the script &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt; (which can be found with Google) to update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file by running it using the &amp;lt;tt&amp;gt;--since=&amp;quot;&amp;lt;date&amp;gt;&amp;quot;&amp;lt;/tt&amp;gt; option with a date one day later in time than the newest entry in the current ChangeLog. &lt;br /&gt;
: &#039;&#039;&#039;gitlog-to-changelog --since=&amp;quot;1970-01-01&amp;quot;&#039;&#039;&#039; (&#039;&#039;Specify a date one day later than the one at the top of ChangeLog&#039;&#039;)&lt;br /&gt;
Save the result to a temp file and combine the two files: &amp;lt;br/&amp;gt;&lt;br /&gt;
: &#039;&#039;&#039;cat tmp ChangeLog &amp;gt; ChangeLog.tmp; mv ChangeLog.tmp ChangeLog&#039;&#039;&#039;&lt;br /&gt;
If you&#039;re making the first ChangeLog entries, then you&#039;ll need to create the ChangeLog file first. &amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Tip&#039;&#039;&#039;: &#039;&#039;When you&#039;re making the commit log entries, use line breaks so ChangeLog will be readable. That is, use lines &amp;lt; 80 characters long.&#039;&#039;&lt;br /&gt;
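The merge step above can be sketched as a self-contained shell session. The entry text and dates below are stand-ins; in practice the &amp;lt;tt&amp;gt;tmp&amp;lt;/tt&amp;gt; file holds the output of &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt;:&lt;br /&gt;

```shell
# Self-contained sketch of prepending new ChangeLog entries.
# The entry text is a stand-in; in practice the "tmp" file is
# produced by gitlog-to-changelog --since="YYYY-MM-DD", with a
# date one day later than the newest existing entry.
cd "$(mktemp -d)"
printf '2024-01-25  New entry\n' > tmp        # new entries (stand-in)
printf '2024-01-24  Old entry\n' > ChangeLog  # existing file (stand-in)
cat tmp ChangeLog > ChangeLog.tmp
mv ChangeLog.tmp ChangeLog
cat ChangeLog
```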
&lt;br /&gt;
=== Update the Version Numbers ===&lt;br /&gt;
If the review of the ChangeLog indicates that there have been changes since the last release, increment the version number in the Makefile. &lt;br /&gt;
&lt;br /&gt;
Make sure any change in version number is also reflected in the NEWS file.&lt;br /&gt;
&lt;br /&gt;
=== Update the NEWS file ===&lt;br /&gt;
To update the NEWS file, just read over the new ChangeLog entries and summarize.&lt;br /&gt;
&lt;br /&gt;
== Commit And Push ==&lt;br /&gt;
# Save, commit, and push the changes to master branch.&lt;br /&gt;
# Once the &#039;&#039;hyrax-dependencies&#039;&#039; CI build is finished&lt;br /&gt;
## Trigger a CI build &#039;&#039;libdap4&#039;&#039; by pushing a small change to the &#039;&#039;libdap4&#039;&#039; master branch. When that CI build has completed successfully,&lt;br /&gt;
## Trigger a CI build in the &#039;&#039;bes&#039;&#039; by pushing a small change to the &#039;&#039;bes&#039;&#039; master branch.&lt;br /&gt;
# Wait for the successful completion. &lt;br /&gt;
#: If there&#039;s a problem with the CI builds at this point you may wish to follow the advice of &#039;&#039;&#039;&#039;&#039;Herman Wouk&#039;&#039;&#039;&#039;&#039;: &#039;&#039;&amp;quot;When in danger or in doubt, run in circles, scream and shout&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Publish and Sign ==&lt;br /&gt;
&lt;br /&gt;
All you need do is build the tar file using &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;, sign it, and push (or pull) these files onto www.opendap.org/pub/source. &lt;br /&gt;
&lt;br /&gt;
# Go to the &#039;&#039;&#039;hyrax-dependencies&#039;&#039;&#039; project on your local machine and run &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt; which will make a hyrax-dependencies-x.y.tar.gz file in the directory above the top level of the &#039;&#039;&#039;hyrax-dependencies&#039;&#039;&#039; project.&lt;br /&gt;
# Use &#039;&#039;&#039;gpg&#039;&#039;&#039; to sign the tar bundle:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;gpg --detach-sign --local-user security@opendap.org ../hyrax-dependencies-x.y.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Use &#039;&#039;&#039;sftp&#039;&#039;&#039; to push the signature file and the tar bundle to the /httpdocs/pub/source directory on www.opendap.org&lt;br /&gt;
#: &#039;&#039;(Assuming your current working directory is the top of the &#039;&#039;&#039;hyrax-dependencies&#039;&#039;&#039; project)&#039;&#039;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;sftp opendap@www.opendap.org&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;cd httpdocs/pub/source&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put hyrax-dependencies-x.y.tar.gz.sig&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put hyrax-dependencies-x.y.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;quit&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Check your work!&lt;br /&gt;
## Download the source tar bundle and signature from www.opendap.org.&lt;br /&gt;
## Verify the signature:&lt;br /&gt;
##: &amp;lt;tt&amp;gt;gpg --verify hyrax-dependencies-x.y.tar.gz.sig hyrax-dependencies-x.y.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tag The Release ==&lt;br /&gt;
# Tag, and push the tag. &lt;br /&gt;
#* &#039;&#039;git tag -m &amp;quot;version-&amp;lt;number&amp;gt;&amp;quot; -a version-&amp;lt;number&amp;gt;&#039;&#039;&lt;br /&gt;
#* &#039;&#039;git push origin version-&amp;lt;number&amp;gt;&#039;&#039;&lt;br /&gt;
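A concrete (hypothetical) example, using 1.1.0 as the release number; the tag name follows the &#039;&#039;version-x.y.z&#039;&#039; convention used on the GitHub tags page:&lt;br /&gt;

```shell
# Hypothetical release number 1.1.0; substitute the real version.
git tag -m "version-1.1.0" -a version-1.1.0
git push origin version-1.1.0
```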
&lt;br /&gt;
== Make The Release On GitHub ==&lt;br /&gt;
# Go to the [https://github.com/OPENDAP/hyrax-dependencies/tags GitHub &#039;tags&#039; page for &#039;&#039;hyrax-dependencies&#039;&#039;].&lt;br /&gt;
# Click the &amp;quot;Create release from tag&amp;quot; button&lt;br /&gt;
# Enter a title for the release (look at previous releases for examples)&lt;br /&gt;
# Copy the most recent text from the NEWS file into the describe field&lt;br /&gt;
# Click Save/Update this release.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Source_Release_for_BES&amp;diff=13524</id>
		<title>Source Release for BES</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Source_Release_for_BES&amp;diff=13524"/>
		<updated>2024-01-24T19:15:23Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Get the BES DOI from Zenodo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
This page covers the steps required to release the BES software for Hyrax.&lt;br /&gt;
&lt;br /&gt;
We now depend on the CI/CD process to build binary packages and to test the source builds.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Tip&#039;&#039;&#039;: If, while working on the release, you find you need to make changes to the code and you know the CI build will fail, do so on a &#039;&#039;release branch&#039;&#039; that you can merge and discard later. Do not make a release branch if you don&#039;t &#039;&#039;&#039;need&#039;&#039;&#039; it, since it complicates making tags.&lt;br /&gt;
&lt;br /&gt;
==  Verify the code base ==&lt;br /&gt;
# We release using the &#039;&#039;master&#039;&#039; branch. The code on &#039;&#039;master&#039;&#039; must have passed the CI builds. &#039;&#039;&#039;This includes the hyrax-docker builds since that CI build runs the full server regression tests!&#039;&#039;&#039;&lt;br /&gt;
# Make sure that the source code you&#039;re using for the following steps is up-to-date. (&#039;&#039;git pull&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== Update the Version Numbers ==&lt;br /&gt;
&lt;br /&gt;
=== Version for Humans ===&lt;br /&gt;
# Determine the human version number. This appears to be a somewhat subjective process.&lt;br /&gt;
# Edit each of the &#039;&#039;Affected Files&#039;&#039; and update the human version number.&lt;br /&gt;
&lt;br /&gt;
:; Affected Files&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for:&lt;br /&gt;
::: &amp;lt;tt&amp;gt;AC_INIT(bes, ###.###.###, opendap-tech@opendap.org)&amp;lt;/tt&amp;gt;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039; debian/changelog&#039;&#039;&#039;&#039;&#039; (see [https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#changelog Debian ChangeLog])&lt;br /&gt;
::: &#039;&#039;&#039;Take Note!&#039;&#039;&#039; &#039;&#039;The &amp;lt;tt&amp;gt;debian/changelog&amp;lt;/tt&amp;gt; is the &amp;quot;single source of truth&amp;quot; for the bes version in the debian packaging. If this does not agree with the version being packaged, the package build will fail.&#039;&#039;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;ChangeLog&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;NEWS&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;README.md&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;INSTALL&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Update the internal library (API/ABI) version numbers. ===&lt;br /&gt;
The BES is &#039;&#039;&#039;&#039;&#039;not&#039;&#039;&#039;&#039;&#039; a shared library; it is a set of C++ applications that are typically built as statically linked binaries. Because of this, the usual CURRENT:REVISION:AGE tuples used to express the binary compatibility state of a C++ shared object library have little meaning for the BES code. So, what we choose to do is simply bump the REVISION numbers by one for each release.&lt;br /&gt;
&lt;br /&gt;
* In the &#039;&#039;&#039;configure.ac&#039;&#039;&#039; file locate each of:&lt;br /&gt;
** LIB_DIS_REVISION&lt;br /&gt;
** LIB_PPT_REVISION&lt;br /&gt;
** LIB_XML_CMD_REVISION&lt;br /&gt;
* Increase the value of each by one (1).&lt;br /&gt;
* Save the file.&lt;br /&gt;
* Update the text documentation files and version numbers in the configuration files:&lt;br /&gt;
&lt;br /&gt;
Example of the relevant section from configure.ac: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
LIB_DIS_CURRENT=18&lt;br /&gt;
LIB_DIS_AGE=3&lt;br /&gt;
LIB_DIS_REVISION=3&lt;br /&gt;
AC_SUBST(LIB_DIS_CURRENT)&lt;br /&gt;
AC_SUBST(LIB_DIS_AGE)&lt;br /&gt;
AC_SUBST(LIB_DIS_REVISION)&lt;br /&gt;
LIBDISPATCH_VERSION=&amp;quot;$LIB_DIS_CURRENT:$LIB_DIS_REVISION:$LIB_DIS_AGE&amp;quot;&lt;br /&gt;
AC_SUBST(LIBDISPATCH_VERSION)&lt;br /&gt;
&lt;br /&gt;
LIB_PPT_CURRENT=5&lt;br /&gt;
LIB_PPT_AGE=1&lt;br /&gt;
LIB_PPT_REVISION=2&lt;br /&gt;
AC_SUBST(LIB_PPT_CURRENT)&lt;br /&gt;
AC_SUBST(LIB_PPT_AGE)&lt;br /&gt;
AC_SUBST(LIB_PPT_REVISION)&lt;br /&gt;
LIBPPT_VERSION=&amp;quot;$LIB_PPT_CURRENT:$LIB_PPT_REVISION:$LIB_PPT_AGE&amp;quot;&lt;br /&gt;
AC_SUBST(LIBPPT_VERSION)&lt;br /&gt;
&lt;br /&gt;
LIB_XML_CMD_CURRENT=5&lt;br /&gt;
LIB_XML_CMD_AGE=4&lt;br /&gt;
LIB_XML_CMD_REVISION=2&lt;br /&gt;
AC_SUBST(LIB_XML_CMD_CURRENT)&lt;br /&gt;
AC_SUBST(LIB_XML_CMD_AGE)&lt;br /&gt;
AC_SUBST(LIB_XML_CMD_REVISION)&lt;br /&gt;
LIBXMLCOMMAND_VERSION=&amp;quot;$LIB_XML_CMD_CURRENT:$LIB_XML_CMD_REVISION:$LIB_XML_CMD_AGE&amp;quot;&lt;br /&gt;
AC_SUBST(LIBXMLCOMMAND_VERSION)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file. ==&lt;br /&gt;
Use the script &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt; (which can be found with Google) to update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file by running it using the &amp;lt;tt&amp;gt;--since=&amp;quot;&amp;lt;date&amp;gt;&amp;quot;&amp;lt;/tt&amp;gt; option with a date one day later in time than the newest entry in the current ChangeLog. &lt;br /&gt;
: &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;gitlog-to-changelog --since=&amp;quot;1970-01-01&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:: (&#039;&#039;Specify a date one day later than the one at the top of the existing ChangeLog file.&#039;&#039;)&lt;br /&gt;
Save the result to a temp file and combine the two files: &amp;lt;br/&amp;gt;&lt;br /&gt;
: &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;cat tmp ChangeLog &amp;gt; ChangeLog.tmp; mv ChangeLog.tmp ChangeLog&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you&#039;re making the first ChangeLog entries, then you&#039;ll need to create the ChangeLog file first. &amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Tip&#039;&#039;&#039;: &#039;&#039;When you&#039;re making the commit log entries, use line breaks so ChangeLog will be readable. That is, use lines &amp;lt; 80 characters long.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Update the &#039;&#039;&#039;NEWS&#039;&#039;&#039; file ==&lt;br /&gt;
To update the NEWS file, just read over the new ChangeLog entries and summarize. &lt;br /&gt;
&lt;br /&gt;
The new entries to the NEWS file will be used later when making the GitHub release and when writing the server&#039;s release page on www.opendap.org.&lt;br /&gt;
&lt;br /&gt;
We might replace this:&lt;br /&gt;
* It&#039;s also helpful to have, in the &#039;&#039;&#039;NEWS&#039;&#039;&#039; file, the Web site, and the release notes, a list of the Jira tickets that have been closed since the last release. The best way to do this is to go to &#039;&#039;Jira&#039;s Issues&#039;&#039; page and look at the &#039;&#039;Tickets closed recently&#039;&#039; item. From there, click on &#039;&#039;Advanced&#039;&#039; and edit the time range so it covers the span from the past release to now, then &#039;&#039;Export&#039;&#039; that info as an Excel spreadsheet (the icon with a hat and a down arrow). YMMV regarding how easy this is, and Jira&#039;s UI changes often.&lt;br /&gt;
&lt;br /&gt;
With instructions about making an associated release in JIRA using version tagging.&lt;br /&gt;
&lt;br /&gt;
== Update the Version Numbers for Humans ==&lt;br /&gt;
;Affected Files: &lt;br /&gt;
: configure.ac&lt;br /&gt;
: debian/changelog (see [https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#changelog] Debian ChangeLog)&lt;br /&gt;
: NEWS&lt;br /&gt;
: README.md&lt;br /&gt;
: INSTALL&lt;br /&gt;
&lt;br /&gt;
# Determine the human version number. This appears to be a somewhat subjective process.&lt;br /&gt;
# Edit each of the &#039;&#039;Affected Files&#039;&#039; and update the human version number.&lt;br /&gt;
# In the &#039;&#039;&#039;README.md&#039;&#039;&#039; file be sure to update the description of how to locate the DOI for the release with the new version number.&lt;br /&gt;
&lt;br /&gt;
== Update the libdap version ==&lt;br /&gt;
Determine the libdap version associated with this release by checking the contents of the file &amp;lt;tt&amp;gt;libdap4-snapshot&amp;lt;/tt&amp;gt;. The &amp;lt;tt&amp;gt;libdap4-snapshot&amp;lt;/tt&amp;gt; file should contain a single line like this example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
libdap4-3.20.9-0 2021-12-28T19:23:45+0000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The libdap version for the above example is: &amp;lt;tt&amp;gt;libdap-3.20.9&amp;lt;/tt&amp;gt; (The version is NOT &amp;lt;tt&amp;gt;libdap4-3.20.9&amp;lt;/tt&amp;gt;)&lt;br /&gt;
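A minimal sketch of extracting that version string from a snapshot line (the example line from above is inlined here so the sketch is self-contained):&lt;br /&gt;

```shell
# Pull "libdap-3.20.9" out of the example libdap4-snapshot line:
# drop everything after the first space, swap the libdap4- prefix
# for libdap-, and strip the trailing build number ("-0").
line='libdap4-3.20.9-0 2021-12-28T19:23:45+0000'
version=$(printf '%s\n' "$line" | sed -e 's/ .*//' -e 's/^libdap4-/libdap-/' -e 's/-[0-9]*$//')
echo "$version"
```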
&lt;br /&gt;
=== Update the libdap version in the .travis.yml file ===&lt;br /&gt;
;Affected Files&lt;br /&gt;
: .travis.yml&lt;br /&gt;
&lt;br /&gt;
In the .travis.yml file update the value of  &#039;&#039;LIBDAP_RPM_VERSION&#039;&#039; in the &#039;&#039;env: global:&#039;&#039; section so that it contains the complete numerical value of the libdap version you located in the previous step. Using the previous example the value would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    - LIBDAP_RPM_VERSION=3.20.9-0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Update the libdap version in the RPM spec files ===&lt;br /&gt;
;Affected Files&lt;br /&gt;
: &#039;&#039;bes.spec*.in&#039;&#039;&lt;br /&gt;
Update the &amp;lt;tt&amp;gt;bes.spec*.in&amp;lt;/tt&amp;gt; files by changing the &amp;lt;tt&amp;gt;Requires&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;BuildRequires&amp;lt;/tt&amp;gt; entries for libdap. Based on our example the result would be: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Requires:       libdap &amp;gt;= 3.20.9&lt;br /&gt;
BuildRequires:  libdap-devel &amp;gt;= 3.20.9&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;These lines may not be adjacent to each other in the spec files&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
=== Update the libdap version in the README.md file ===&lt;br /&gt;
;Affected Files&lt;br /&gt;
: README.md&lt;br /&gt;
# [https://zenodo.org Get the DOI markdown from Zenodo] by using the search bar and searching for the libdap version string that you determined at the beginning of this section. &lt;br /&gt;
# Update the &#039;&#039;&#039;README.md&#039;&#039;&#039; file with libdap version and the associated DOI link (using the markdown you got from Zenodo).&lt;br /&gt;
&lt;br /&gt;
; Note&lt;br /&gt;
: You will also need this DOI markdown when making the GitHub release page for the BES. &lt;br /&gt;
&lt;br /&gt;
See the section on this page titled &amp;quot;&#039;&#039;Get the BES DOI from Zenodo&#039;&#039;&amp;quot;  for more details about getting the DOI markdown.&lt;br /&gt;
&lt;br /&gt;
== Update the RPM dependencies ==&lt;br /&gt;
;Affected Files: &lt;br /&gt;
:&#039;&#039;bes.spec*.in&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the RPM &#039;&#039;.spec&#039;&#039; file, update the dependencies as needed. &lt;br /&gt;
* The libdap version dependency was covered in a previous step.&lt;br /&gt;
* Be attentive to changes that have been made to the hyrax-dependencies since the last release.&lt;br /&gt;
&lt;br /&gt;
== Update the module version numbers for humans ==&lt;br /&gt;
In bes/modules/common, check that the file all-modules.txt is complete and update it as needed. Then:&lt;br /&gt;
&lt;br /&gt;
* Remove the sentinel files that prevent the version updater from being run multiple times in succession without specific intervention:&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;rm -v ../*/version_updated&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
* Now run the version updater:&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;./version_update_modules.sh -v &amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will update the patch number (x.y.patch) for each of the named modules. &lt;br /&gt;
&lt;br /&gt;
If a particular module has significant fixes, hand edit the number in its Makefile.am.&lt;br /&gt;
&lt;br /&gt;
See below for special info about the HDF4/5 modules (which also applies to any modules not in the BES GitHub repo).&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;del&amp;gt;For the BES HDF4/5 modules (BES only) &amp;lt;/del&amp;gt;==&lt;br /&gt;
# &amp;lt;del&amp;gt;&#039;&#039;Make sure that you are working on the master branch of each module!!&#039;&#039;&amp;lt;/del&amp;gt;&lt;br /&gt;
# &amp;lt;del&amp;gt; Goto those directories and update the ChangeLog, NEWS, README, and INSTALL files (even though INSTALL is not used by many).&amp;lt;/del&amp;gt;&lt;br /&gt;
# &amp;lt;del&amp;gt; Update the module version numbers in their respective Makefile.am files.&amp;lt;/del&amp;gt;&lt;br /&gt;
# &amp;lt;del&amp;gt; Commit and Push these changes.&amp;lt;/del&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Update the Build Offset ==&lt;br /&gt;
&#039;&#039;Setting the build offset correctly will set the build number for the new release to &amp;quot;0&amp;quot;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the file &amp;lt;tt&amp;gt;travis/travis_bes_build_offset.sh&amp;lt;/tt&amp;gt; set the value of &amp;lt;tt&amp;gt;BES_TRAVIS_BUILD_OFFSET&amp;lt;/tt&amp;gt; to the number of the last TravisCI build plus one. The previous commit and push will have triggered a TravisCI build. Find the build number for the previous commit in [https://app.travis-ci.com/github/OPENDAP/bes the TravisCI page for the BES] and use that build number plus 1.&lt;br /&gt;
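For example, if the previous TravisCI build was number 1233 (a hypothetical value), the assignment in &amp;lt;tt&amp;gt;travis/travis_bes_build_offset.sh&amp;lt;/tt&amp;gt; would be set to 1234:&lt;br /&gt;

```shell
# Hypothetical values: the previous TravisCI build was 1233,
# so the offset is set to 1233 + 1 = 1234.
BES_TRAVIS_BUILD_OFFSET=1234
```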
&lt;br /&gt;
== Commit Changes ==&lt;br /&gt;
&#039;&#039;Be sure that you have completed all of the changes to the various ChangeLog, NEWS, INSTALL, configure.ac,  &amp;lt;tt&amp;gt;travis/travis_bes_build_offset.sh&amp;lt;/tt&amp;gt;, and other files before proceeding!&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# Commit and push the BES code. Wait for the CI/CD builds to complete. You must be working on the &#039;&#039;master&#039;&#039; branch to get the CD package builds to work.&lt;br /&gt;
&lt;br /&gt;
== Tag the BES code ==&lt;br /&gt;
&lt;br /&gt;
The build process automatically tags builds of the master branch. The Hyrax-version tag is a placeholder for us so we can sort out what code goes with various Hyrax source releases.&lt;br /&gt;
&lt;br /&gt;
# If this is part of a Hyrax Release, then tag this point in the master branch with the Hyrax release number&lt;br /&gt;
#* &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;git tag -m &amp;quot;hyrax-&amp;lt;number&amp;gt;&amp;quot; -a hyrax-&amp;lt;numbers&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#* &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;git push origin hyrax-&amp;lt;numbers&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;NB:&#039;&#039;&#039; &#039;&#039;Instead of tagging the HDF4/5 modules, use the saved commit hashes that git tracks for submodules. This cuts down on the bookkeeping for releases and removes one source of error.&#039;&#039;&lt;br /&gt;
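A concrete (hypothetical) example, using 1.16.8 as the Hyrax release number:&lt;br /&gt;

```shell
# Hypothetical Hyrax release number; substitute the real one.
git tag -m "hyrax-1.16.8" -a hyrax-1.16.8
git push origin hyrax-1.16.8
```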
&lt;br /&gt;
== Create the BES release on Github ==&lt;br /&gt;
# [https://github.com/OPENDAP/bes Go to the BES project page on GitHub]&lt;br /&gt;
# Choose the &#039;&#039;&#039;releases&#039;&#039;&#039; tab.&lt;br /&gt;
# On the [https://github.com/OPENDAP/bes/releases Releases page] click the &#039;Tags&#039; tab. &lt;br /&gt;
# On the [https://github.com/OPENDAP/bes/tags Tags page], locate the tag (created above) associated with this new release.&lt;br /&gt;
# Click the ellipsis (...) located on the far right side of the &#039;&#039;version-x.y.z&#039;&#039; tag &#039;frame&#039; for this release and choose &#039;&#039;Create release&#039;&#039;.&lt;br /&gt;
#* Enter a &#039;&#039;title&#039;&#039; for the release&lt;br /&gt;
#* Copy the most recent text from the NEWS file into the &#039;&#039;describe&#039;&#039; field&lt;br /&gt;
#* Click &#039;&#039;&#039;Publish release&#039;&#039;&#039; or  &#039;&#039;&#039;Save draft&#039;&#039;&#039;. &lt;br /&gt;
#** If you have previously edited the release page you can click &#039;&#039;&#039;Update this release&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Publish and Sign ==&lt;br /&gt;
&lt;br /&gt;
When the release is made on GitHub the source tar bundle is made automatically. However, this bundle is &#039;&#039;&#039;not&#039;&#039;&#039; the one we wish to publish because it requires people to have &#039;&#039;autoconf&#039;&#039; installed. Rather, we want to use the result of &amp;quot;&amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;&amp;quot;, which will have the &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt; script pre-generated.&lt;br /&gt;
&lt;br /&gt;
All you need do is build the tar file using &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;, sign it, and push (or pull) these files onto www.opendap.org/pub/source.&lt;br /&gt;
&lt;br /&gt;
# Go to the &#039;&#039;&#039;bes&#039;&#039;&#039; project on your local machine and run &#039;&#039;&amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;&#039;&#039; which will make a bes-x.y.z.tar.gz file at the top level of the &#039;&#039;&#039;bes&#039;&#039;&#039; project.&lt;br /&gt;
# Use &#039;&#039;&#039;gpg&#039;&#039;&#039; to sign the tar bundle:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;gpg --detach-sign --local-user security@opendap.org bes-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Use &#039;&#039;&#039;sftp&#039;&#039;&#039; to push the signature file and the tar bundle to the /httpdocs/pub/source directory on www.opendap.org&lt;br /&gt;
#: &#039;&#039;(Assuming your current working directory is the top of the &#039;&#039;&#039;bes&#039;&#039;&#039; project)&#039;&#039;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;sftp opendap@www.opendap.org&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;cd httpdocs/pub/source&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put bes-x.y.z.tar.gz.sig&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put bes-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;quit&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Check your work!&lt;br /&gt;
## Download the source tar bundle and signature from www.opendap.org.&lt;br /&gt;
## Verify the signature:&lt;br /&gt;
##: &amp;lt;tt&amp;gt;gpg --verify bes-x.y.z.tar.gz.sig bes-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get the BES DOI from Zenodo ==&lt;br /&gt;
Get the Zenodo DOI for the newly created BES release and add it to the associated GitHub BES release page.&lt;br /&gt;
&lt;br /&gt;
# [https://zenodo.org Go to Zenodo]&lt;br /&gt;
# Look at the &#039;upload&#039; page. If there is nothing there (perhaps because you are not &#039;&#039;jhrg&#039;&#039; or whoever set up the connection between the BES project and Zenodo) you can use the search bar to search for &#039;&#039;&#039;bes&#039;&#039;&#039;. &lt;br /&gt;
#: Since the libdap, BES and OLFS repositories are linked to Zenodo, the newly-tagged code is uploaded to Zenodo automatically and a DOI is minted for us.&lt;br /&gt;
# Click on the new version, then click on the DOI tag in the pane on the right of the page for the given release.&lt;br /&gt;
# Copy the DOI as markdown from the window that pops up.&lt;br /&gt;
# Edit the GitHub release page for the BES release you just created and paste the DOI markdown into the top of the  description.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tip:&#039;&#039;&#039; &#039;&#039;If you are trying to locate the &#039;&#039;&#039;libdap&#039;&#039;&#039; releases in Zenodo you have to search for the string:&#039;&#039; &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;libdap4&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Images ===&lt;br /&gt;
[[File:Screenshot 2018-12-06 11.06.44.png|none|thumb|400px|border|left|Zenodo upload page]]&lt;br /&gt;
&lt;br /&gt;
== Update the online reference documentation ==&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;make gh-docs&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Source_Release_for_libdap&amp;diff=13523</id>
		<title>Source Release for libdap</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Source_Release_for_libdap&amp;diff=13523"/>
		<updated>2024-01-24T18:54:46Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Get the DOI from Zenodo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers the steps needed to release the libdap software for Hyrax. There are separate pages for the BES and OLFS code, and an overview page that describes how the website is updated and how the mailing lists are notified.&lt;br /&gt;
&lt;br /&gt;
We now depend on the CI/CD process to build binary packages and to test the source builds. When the source code is tagged and marked as a release in GitHub, our linked Zenodo account archives that software and mints a DOI for it.&lt;br /&gt;
&lt;br /&gt;
== The Release Process ==&lt;br /&gt;
:&#039;&#039;&#039;Tip&#039;&#039;&#039;: If, while working on the release, you find you need to make changes to the code and you know the CI build will fail, do so on a &#039;&#039;release branch&#039;&#039; that you can merge and discard later. Do not make a release branch unless you need to since it complicates making tags.&lt;br /&gt;
&lt;br /&gt;
===  Verify the code base ===&lt;br /&gt;
# We release using the &#039;&#039;master&#039;&#039; branch. The code on &#039;&#039;master&#039;&#039; must pass the CI build. &lt;br /&gt;
# Make sure that the source code you&#039;re using for the following steps is up-to-date. (&#039;&#039;git pull&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
=== Update Release Files ===&lt;br /&gt;
Update the text documentation files and version numbers in the configuration files:&lt;br /&gt;
&lt;br /&gt;
; &#039;&#039;&#039;Note&#039;&#039;&#039; &lt;br /&gt;
:It&#039;s helpful to have, in the &#039;&#039;&#039;NEWS&#039;&#039;&#039; file, the Web site, and the release notes, a list of the Jira tickets that have been closed since the last release. The best way to do this is to go to &#039;&#039;Jira&#039;s Issues&#039;&#039; page and look at the &#039;&#039;Tickets closed recently&#039;&#039; item. From there, click on &#039;&#039;Advanced&#039;&#039; and edit the time range so it covers the span from the past release to now, then &#039;&#039;Export&#039;&#039; that info as an Excel spreadsheet (the icon with a hat and a down arrow). YMMV regarding how easy this is, and Jira&#039;s UI changes often.&lt;br /&gt;
&lt;br /&gt;
==== Update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file. ====&lt;br /&gt;
Use the script &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt; (which can be found with Google) to update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file by running it using the &amp;lt;tt&amp;gt;--since=&amp;quot;&amp;lt;date&amp;gt;&amp;quot;&amp;lt;/tt&amp;gt; option with a date one day later in time than the newest entry in the current ChangeLog. &lt;br /&gt;
: &#039;&#039;&#039;gitlog-to-changelog --since=&amp;quot;1970-01-01&amp;quot;&#039;&#039;&#039; (&#039;&#039;Specify a date one day later than the one at the top of ChangeLog&#039;&#039;)&lt;br /&gt;
Save the result to a temp file and combine the two files: &amp;lt;br/&amp;gt;&lt;br /&gt;
: &#039;&#039;&#039;cat tmp ChangeLog &amp;gt; ChangeLog.tmp; mv ChangeLog.tmp ChangeLog&#039;&#039;&#039;&lt;br /&gt;
If you&#039;re making the first ChangeLog entries, then you&#039;ll need to create the ChangeLog file first. &amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Tip&#039;&#039;&#039;: &#039;&#039;When you&#039;re making the commit log entries, use line breaks so ChangeLog will be readable. That is, use lines &amp;lt; 80 characters long.&#039;&#039;&lt;br /&gt;
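The combining step above can be sketched as a short, self-contained shell sequence. The file contents and the name &#039;&#039;J. Smith&#039;&#039; are placeholders; in practice &amp;lt;tt&amp;gt;tmp&amp;lt;/tt&amp;gt; holds the output of &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt;:&lt;br /&gt;

```shell
# Stand-ins for the real files; "tmp" would hold gitlog-to-changelog output.
printf '2024-01-10  J. Smith\n\n\t* New entries from the git log\n' > tmp
printf '2023-12-01  J. Smith\n\n\t* Older ChangeLog entries\n' > ChangeLog

# Prepend the new entries so the newest material appears first.
cat tmp ChangeLog > ChangeLog.tmp
mv ChangeLog.tmp ChangeLog
rm tmp

head -n 1 ChangeLog   # prints: 2024-01-10  J. Smith
```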
&lt;br /&gt;
==== Update the NEWS file ====&lt;br /&gt;
To update the NEWS file, just read over the new ChangeLog entries and summarize.&lt;br /&gt;
&lt;br /&gt;
==== Update the Version Numbers ====&lt;br /&gt;
There are really two version numbers for each of these projects: the &#039;&#039;human&#039;&#039; version (like version-3.17.5) and the &#039;&#039;library&#039;&#039; API/ABI version, which is represented as &amp;lt;tt&amp;gt;CURRENT:REVISION:AGE&amp;lt;/tt&amp;gt;. There are special rules for when each of the numbers in the library API/ABI version gets incremented, triggered by the kinds of changes that were made to the code base. The human version number is more arbitrary. So, for example, we might make a major API/ABI change and have to move to a new Libtool version like &amp;lt;tt&amp;gt;25:0:0&amp;lt;/tt&amp;gt;, but the human version might only change from bes-3.17.3 to bes-3.18.0.&lt;br /&gt;
&lt;br /&gt;
===== Version for Humans =====&lt;br /&gt;
# Determine the human version number. This appears to be a somewhat subjective process.&lt;br /&gt;
# Edit each of the &#039;&#039;Affected Files&#039;&#039; and update the human version number.&lt;br /&gt;
&lt;br /&gt;
:;Affected Files: &lt;br /&gt;
:: &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for:&lt;br /&gt;
::: &amp;lt;tt&amp;gt;AC_INIT(libdap, ###.###.###, opendap-tech@opendap.org)&amp;lt;/tt&amp;gt;&lt;br /&gt;
:: debian/changelog (see [https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#changelog Debian ChangeLog])&lt;br /&gt;
::: &#039;&#039;&#039;Take Note!&#039;&#039;&#039; &#039;&#039;The &amp;lt;tt&amp;gt;debian/changelog&amp;lt;/tt&amp;gt; is the &amp;quot;single source of truth&amp;quot; for the libdap4 version in the debian packaging. If this does not agree with the version being packaged the package build will fail.&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;README.md&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;INSTALL&#039;&#039;&#039;&lt;br /&gt;
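As an illustration of the &#039;&#039;&#039;configure.ac&#039;&#039;&#039; edit, the AC_INIT version can also be bumped with &amp;lt;tt&amp;gt;sed&amp;lt;/tt&amp;gt;. This is a self-contained sketch; the version numbers are placeholders and the other affected files still need their own edits:&lt;br /&gt;

```shell
# A stand-in configure.ac holding the line the release process updates.
printf 'AC_INIT(libdap, 3.21.0, opendap-tech@opendap.org)\n' > configure.ac

# Bump the human version (3.21.1 is a placeholder); .bak keeps a backup.
sed -i.bak 's/AC_INIT(libdap, [0-9.]*,/AC_INIT(libdap, 3.21.1,/' configure.ac

grep AC_INIT configure.ac   # prints: AC_INIT(libdap, 3.21.1, opendap-tech@opendap.org)
```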
&lt;br /&gt;
===== API/ABI Version =====&lt;br /&gt;
The library API/ABI version is represented as CURRENT:REVISION:AGE. &lt;br /&gt;
&lt;br /&gt;
;The rules for shared image version numbers:&lt;br /&gt;
:# No interfaces changed, only implementations (good): Increment REVISION.&lt;br /&gt;
:# Interfaces added, none removed (good): Increment CURRENT, set REVISION to 0, increment AGE.&lt;br /&gt;
:# Interfaces removed or changed (BAD, breaks upward compatibility): Increment CURRENT, set REVISION to 0, and set AGE to 0.&lt;br /&gt;
&lt;br /&gt;
See the &#039;&#039;Appendix: How to see the scope of API/ABI changes in C++ sources&#039;&#039; below for gruesome details. Often basic knowledge of the edits is good enough.&lt;br /&gt;
&lt;br /&gt;
;Affected Files: &lt;br /&gt;
: &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for&lt;br /&gt;
:: DAPLIB_CURRENT=###&lt;br /&gt;
:: DAPLIB_REVISION=###&lt;br /&gt;
:: DAPLIB_AGE=###&lt;br /&gt;
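The three rules above can be expressed as a small shell function. This is a sketch; the function name and the example version strings are hypothetical:&lt;br /&gt;

```shell
# bump_cra CURRENT:REVISION:AGE change-kind
# change-kind is one of: none (implementations only), added, removed
bump_cra() {
    c=${1%%:*}                      # CURRENT
    a=${1##*:}                      # AGE
    r=${1#*:}; r=${r%:*}            # REVISION
    case "$2" in
        none)    r=$((r + 1)) ;;                    # rule 1
        added)   c=$((c + 1)); r=0; a=$((a + 1)) ;; # rule 2
        removed) c=$((c + 1)); r=0; a=0 ;;          # rule 3
    esac
    echo "$c:$r:$a"
}

bump_cra 25:3:2 none      # prints 25:4:2
bump_cra 25:3:2 added     # prints 26:0:3
bump_cra 25:3:2 removed   # prints 26:0:0
```

In &#039;&#039;configure.ac&#039;&#039; the three fields are stored separately (DAPLIB_CURRENT, DAPLIB_REVISION, DAPLIB_AGE), so the combined string is only a convenience for reasoning about the rules.&lt;br /&gt;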
&lt;br /&gt;
=== Commit ===&lt;br /&gt;
* Commit and push the code. Wait for the CI/CD builds to complete. You must be working on the &#039;&#039;master&#039;&#039; branch to get the CD package builds to work.&lt;br /&gt;
&lt;br /&gt;
=== Update the Build Offset ===&lt;br /&gt;
&#039;&#039;Setting the build offset correctly will set the build number for the new release to &amp;quot;0&amp;quot;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the file &amp;lt;tt&amp;gt;travis/travis_libdap_build_offset.sh&amp;lt;/tt&amp;gt; set the value of &amp;lt;tt&amp;gt;LIBDAP_TRAVIS_BUILD_OFFSET&amp;lt;/tt&amp;gt; to the number of the last TravisCI build plus one. The previous commit and push will have triggered a TravisCI build. Find the build number for the previous commit in [https://app.travis-ci.com/github/OPENDAP/libdap4 the TravisCI page for libdap4] and use that build number plus 1.&lt;br /&gt;
&lt;br /&gt;
This is not the build number for the package; it is the build number used by Travis, which is the total number of times Travis has built the code. This number is shown in the build list on the left-hand side of the TravisCI page. &lt;br /&gt;
&lt;br /&gt;
Once you have updated &amp;lt;tt&amp;gt;travis/travis_libdap_build_offset.sh&amp;lt;/tt&amp;gt;, commit and push the change. Do NOT use a &amp;lt;tt&amp;gt;[skip ci]&amp;lt;/tt&amp;gt; string in the commit message; it is important that this commit run through the entire CI process.&lt;br /&gt;
&lt;br /&gt;
=== Tag The Release ===&lt;br /&gt;
In the past we made the tags for builds manually. Since we started making &#039;build number&#039; releases for NASA, that step has been automated. &lt;br /&gt;
&lt;br /&gt;
If this is part of Hyrax, also tag this point in the master branch with the Hyrax release number:&lt;br /&gt;
# &#039;&#039;&#039;git tag -m &amp;quot;hyrax-&amp;lt;version&amp;gt;&amp;quot; -a hyrax-&amp;lt;version&amp;gt;&#039;&#039;&#039; We can leave this tag as &#039;&#039;hyrax-&amp;lt;version&amp;gt;&#039;&#039; since it&#039;s for our own bookkeeping. &lt;br /&gt;
# &#039;&#039;&#039;git push origin hyrax-&amp;lt;version&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
#: NB: Instead of tagging the HDF4/5 modules, use the saved commit hashes that git tracks for submodules. This cuts down on the bookkeeping for releases and removes one source of error.&lt;br /&gt;
&lt;br /&gt;
=== Create the release on Github ===&lt;br /&gt;
Go to the &#039;tags&#039; page (&#039;Code&#039;, then &#039;Tags&#039; at the top of the repository window). There, click the ellipsis (...) to the right of the &#039;version-*&#039; tag and:&lt;br /&gt;
# Enter a &#039;&#039;title&#039;&#039; for the release&lt;br /&gt;
# Copy the most recent text from the NEWS file into the &#039;&#039;describe&#039;&#039; field&lt;br /&gt;
# Click &#039;&#039;Update this release&#039;&#039; or &#039;&#039;Save draft&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will trigger an &#039;archive and DOI&#039; process on the Zenodo system.&lt;br /&gt;
&lt;br /&gt;
=== Publish and Sign ===&lt;br /&gt;
&lt;br /&gt;
When the release is made on GitHub, the source tar bundle is made automatically. However, this bundle is &#039;&#039;&#039;not&#039;&#039;&#039; the one we wish to publish because it requires people to have &#039;&#039;autoconf&#039;&#039; installed. Rather, we want to use the result of &amp;quot;&amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;&amp;quot;, which will have the &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt; script pre-generated.&lt;br /&gt;
&lt;br /&gt;
All you need to do is build the tar file using &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;, sign it, and push (or pull) these files onto www.opendap.org/pub/source. &lt;br /&gt;
&lt;br /&gt;
# Go to the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project on your local machine and run &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt; which will make a libdap-x.y.z.tar.gz file at the top level of the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project.&lt;br /&gt;
# Use &#039;&#039;&#039;gpg&#039;&#039;&#039; to sign the tar bundle:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;gpg --detach-sign --local-user security@opendap.org libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Use &#039;&#039;&#039;sftp&#039;&#039;&#039; to push the signature file and the tar bundle to the /httpdocs/pub/source directory on www.opendap.org&lt;br /&gt;
#: &#039;&#039;(Assuming your current working directory is the top of the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project)&#039;&#039;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;sftp opendap@www.opendap.org&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;cd httpdocs/pub/source&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put libdap-x.y.z.tar.gz.sig&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;quit&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Check your work!&lt;br /&gt;
## Download the source tar bundle and signature from www.opendap.org.&lt;br /&gt;
## Verify the signature:&lt;br /&gt;
##: &amp;lt;tt&amp;gt;gpg --verify libdap-x.y.z.tar.gz.sig libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
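The interactive &amp;lt;tt&amp;gt;sftp&amp;lt;/tt&amp;gt; session above can also be scripted with sftp&#039;s batch mode. This is a sketch only; the version number is a placeholder, and access to the opendap.org account is assumed:&lt;br /&gt;

```shell
# Write the sftp commands to a batch file (the version is a placeholder).
printf 'cd httpdocs/pub/source\nput libdap-3.21.1.tar.gz.sig\nput libdap-3.21.1.tar.gz\n' > upload.batch

# -b runs the commands non-interactively and aborts on the first error.
sftp -b upload.batch opendap@www.opendap.org
```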
&lt;br /&gt;
=== Get the DOI from [https://zenodo.org Zenodo] ===&lt;br /&gt;
&lt;br /&gt;
# Go to [https://zenodo.org Zenodo] and look at the &#039;upload&#039; page. Since the libdap, BES and OLFS repositories are linked to Zenodo, the newly-tagged code is uploaded to Zenodo automatically and a DOI is minted for us.&lt;br /&gt;
# Click on the new version, then click on the DOI tag in the pane on the right of the page for the given release.&lt;br /&gt;
# Copy the DOI as markdown from the window that pops up and paste it into the release information for the version back on GitHub.&lt;br /&gt;
# Also paste that into the README file. Commit using &#039;&#039;[skip ci]&#039;&#039; so we don&#039;t do a huge build (or do the build, it really doesn&#039;t matter that much).&lt;br /&gt;
&lt;br /&gt;
Images for the above steps to help with the web UI: coming soon&lt;br /&gt;
&lt;br /&gt;
=== Update the online reference documentation ===&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;make gh-docs&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Appendix: How to see the scope of API/ABI changes in C++ sources ==&lt;br /&gt;
Determine the new software version (assuming you don&#039;t already know the extent of the changes that have been made)&lt;br /&gt;
: For C++, build a file of the methods and their arguments using:&lt;br /&gt;
:: &#039;&#039;&#039;nm .libs/libdap.a | c++filt | grep &#039; T .*::&#039; | sed &#039;s@.* T \(.*\)@\1@&#039; &amp;gt; libdap_funcs&#039;&#039;&#039;&lt;br /&gt;
: and compare that using &amp;lt;tt&amp;gt;diff&amp;lt;/tt&amp;gt; on the previous release&#039;s library.&lt;br /&gt;
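A self-contained sketch of classifying the differences between two such symbol lists with &amp;lt;tt&amp;gt;comm&amp;lt;/tt&amp;gt;. The method names here are invented; in practice each list comes from running the &amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; pipeline above on the old and new libraries:&lt;br /&gt;

```shell
# Stand-ins for the symbol lists from the previous and current releases.
printf 'Array::length()\nGrid::name()\n' | sort > old_funcs
printf 'Array::length()\nArray::size()\nGrid::name()\n' | sort > new_funcs

# comm needs sorted input; column selection classifies the change.
comm -13 old_funcs new_funcs   # only in the new list: interfaces added
comm -23 old_funcs new_funcs   # only in the old list: interfaces removed/changed
```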
Assess the changes you find based on the following rules for the values of &amp;lt;tt&amp;gt;CURRENT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;REVISION&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;AGE&amp;lt;/tt&amp;gt;:&lt;br /&gt;
* No interfaces changed, only implementations (good): ==&amp;gt; Increment REVISION.&lt;br /&gt;
* Interfaces added, none removed (good): ==&amp;gt; Increment CURRENT, increment AGE, set REVISION to 0.&lt;br /&gt;
* Interfaces removed or changed (BAD, breaks upward compatibility): ==&amp;gt; Increment CURRENT, set AGE and REVISION to 0.&lt;br /&gt;
The current values of &amp;lt;tt&amp;gt;CURRENT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;REVISION&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;AGE&amp;lt;/tt&amp;gt; can be found in &amp;lt;tt&amp;gt;configure.ac&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
LIB_DIS_CURRENT=14&lt;br /&gt;
LIB_DIS_AGE=6&lt;br /&gt;
LIB_DIS_REVISION=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Once you have determined the new values of the &amp;lt;tt&amp;gt;CURRENT:REVISION:AGE&amp;lt;/tt&amp;gt; string, then:&lt;br /&gt;
;Edit the configure.ac and update the version values to the new ones.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Source_Release_for_libdap&amp;diff=13522</id>
		<title>Source Release for libdap</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Source_Release_for_libdap&amp;diff=13522"/>
		<updated>2024-01-24T18:39:22Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Get the DOI from Zenodo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers the steps needed to release the libdap software for Hyrax. There are separate pages for the BES and OLFS code, and an overview page that describes how the website is updated and how the mailing lists are notified.&lt;br /&gt;
&lt;br /&gt;
We now depend on the CI/CD process to build binary packages and to test the source builds. When the source code is tagged and marked as a release in GitHub, our linked Zenodo account archives that software and mints a DOI for it.&lt;br /&gt;
&lt;br /&gt;
== The Release Process ==&lt;br /&gt;
:&#039;&#039;&#039;Tip&#039;&#039;&#039;: If, while working on the release, you find you need to make changes to the code and you know the CI build will fail, do so on a &#039;&#039;release branch&#039;&#039; that you can merge and discard later. Do not make a release branch unless you need to since it complicates making tags.&lt;br /&gt;
&lt;br /&gt;
===  Verify the code base ===&lt;br /&gt;
# We release using the &#039;&#039;master&#039;&#039; branch. The code on &#039;&#039;master&#039;&#039; must pass the CI build. &lt;br /&gt;
# Make sure that the source code you&#039;re using for the following steps is up-to-date. (&#039;&#039;git pull&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
=== Update Release Files ===&lt;br /&gt;
Update the text documentation files and version numbers in the configuration files:&lt;br /&gt;
&lt;br /&gt;
; &#039;&#039;&#039;Note&#039;&#039;&#039; &lt;br /&gt;
:It&#039;s helpful to have, in the &#039;&#039;&#039;NEWS&#039;&#039;&#039; file, the Web site, and the release notes, a list of the Jira tickets closed since the last release. The best way to get this is to go to &#039;&#039;Jira&#039;s Issues&#039;&#039; page and look at the &#039;&#039;Tickets closed recently&#039;&#039; item. From there, click on &#039;&#039;Advanced&#039;&#039;, edit the time range so it covers the span from the last release to now, and then &#039;&#039;Export&#039;&#039; that information as an Excel spreadsheet (the icon with a hat and a down arrow). YMMV regarding how easy this is; Jira&#039;s UI changes often.&lt;br /&gt;
&lt;br /&gt;
==== Update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file. ====&lt;br /&gt;
Use the script &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt; (which can be found with Google) to update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file, running it with the &amp;lt;tt&amp;gt;--since=&amp;quot;&amp;lt;date&amp;gt;&amp;quot;&amp;lt;/tt&amp;gt; option and a date one day later than the newest entry in the current ChangeLog. &lt;br /&gt;
: &#039;&#039;&#039;gitlog-to-changelog --since=&amp;quot;1970-01-01&amp;quot;&#039;&#039;&#039; (&#039;&#039;Specify a date one day later than the one at the top of ChangeLog&#039;&#039;)&lt;br /&gt;
Save the result to a temp file and combine the two files: &amp;lt;br/&amp;gt;&lt;br /&gt;
: &#039;&#039;&#039;cat tmp ChangeLog &amp;gt; ChangeLog.tmp; mv ChangeLog.tmp ChangeLog&#039;&#039;&#039;&lt;br /&gt;
If you&#039;re making the first ChangeLog entries, then you&#039;ll need to create the ChangeLog file first. &amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Tip&#039;&#039;&#039;: &#039;&#039;When you&#039;re making the commit log entries, use line breaks so ChangeLog will be readable. That is, use lines &amp;lt; 80 characters long.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Update the NEWS file ====&lt;br /&gt;
To update the NEWS file, just read over the new ChangeLog entries and summarize.&lt;br /&gt;
&lt;br /&gt;
==== Update the Version Numbers ====&lt;br /&gt;
There are really two version numbers for each of these projects: the &#039;&#039;human&#039;&#039; version (like version-3.17.5) and the &#039;&#039;library&#039;&#039; API/ABI version, which is represented as &amp;lt;tt&amp;gt;CURRENT:REVISION:AGE&amp;lt;/tt&amp;gt;. There are special rules for when each of the numbers in the library API/ABI version gets incremented, triggered by the kinds of changes that were made to the code base. The human version number is more arbitrary. So, for example, we might make a major API/ABI change and have to move to a new Libtool version like &amp;lt;tt&amp;gt;25:0:0&amp;lt;/tt&amp;gt;, but the human version might only change from bes-3.17.3 to bes-3.18.0.&lt;br /&gt;
&lt;br /&gt;
===== Version for Humans =====&lt;br /&gt;
# Determine the human version number. This appears to be a somewhat subjective process.&lt;br /&gt;
# Edit each of the &#039;&#039;Affected Files&#039;&#039; and update the human version number.&lt;br /&gt;
&lt;br /&gt;
:;Affected Files: &lt;br /&gt;
:: &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for:&lt;br /&gt;
::: &amp;lt;tt&amp;gt;AC_INIT(libdap, ###.###.###, opendap-tech@opendap.org)&amp;lt;/tt&amp;gt;&lt;br /&gt;
:: debian/changelog (see [https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#changelog Debian ChangeLog])&lt;br /&gt;
::: &#039;&#039;&#039;Take Note!&#039;&#039;&#039; &#039;&#039;The &amp;lt;tt&amp;gt;debian/changelog&amp;lt;/tt&amp;gt; is the &amp;quot;single source of truth&amp;quot; for the libdap4 version in the debian packaging. If this does not agree with the version being packaged the package build will fail.&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;README.md&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;INSTALL&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===== API/ABI Version =====&lt;br /&gt;
The library API/ABI version is represented as CURRENT:REVISION:AGE. &lt;br /&gt;
&lt;br /&gt;
;The rules for shared image version numbers:&lt;br /&gt;
:# No interfaces changed, only implementations (good): Increment REVISION.&lt;br /&gt;
:# Interfaces added, none removed (good): Increment CURRENT, set REVISION to 0, increment AGE.&lt;br /&gt;
:# Interfaces removed or changed (BAD, breaks upward compatibility): Increment CURRENT, set REVISION to 0, and set AGE to 0.&lt;br /&gt;
&lt;br /&gt;
See the &#039;&#039;Appendix: How to see the scope of API/ABI changes in C++ sources&#039;&#039; below for gruesome details. Often basic knowledge of the edits is good enough.&lt;br /&gt;
&lt;br /&gt;
;Affected Files: &lt;br /&gt;
: &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for&lt;br /&gt;
:: DAPLIB_CURRENT=###&lt;br /&gt;
:: DAPLIB_REVISION=###&lt;br /&gt;
:: DAPLIB_AGE=###&lt;br /&gt;
&lt;br /&gt;
=== Commit ===&lt;br /&gt;
* Commit and push the code. Wait for the CI/CD builds to complete. You must be working on the &#039;&#039;master&#039;&#039; branch to get the CD package builds to work.&lt;br /&gt;
&lt;br /&gt;
=== Update the Build Offset ===&lt;br /&gt;
&#039;&#039;Setting the build offset correctly will set the build number for the new release to &amp;quot;0&amp;quot;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the file &amp;lt;tt&amp;gt;travis/travis_libdap_build_offset.sh&amp;lt;/tt&amp;gt; set the value of &amp;lt;tt&amp;gt;LIBDAP_TRAVIS_BUILD_OFFSET&amp;lt;/tt&amp;gt; to the number of the last TravisCI build plus one. The previous commit and push will have triggered a TravisCI build. Find the build number for the previous commit in [https://app.travis-ci.com/github/OPENDAP/libdap4 the TravisCI page for libdap4] and use that build number plus 1.&lt;br /&gt;
&lt;br /&gt;
This is not the build number for the package; it is the build number used by Travis, which is the total number of times Travis has built the code. This number is shown in the build list on the left-hand side of the TravisCI page. &lt;br /&gt;
&lt;br /&gt;
Once you have updated &amp;lt;tt&amp;gt;travis/travis_libdap_build_offset.sh&amp;lt;/tt&amp;gt;, commit and push the change. Do NOT use a &amp;lt;tt&amp;gt;[skip ci]&amp;lt;/tt&amp;gt; string in the commit message; it is important that this commit run through the entire CI process.&lt;br /&gt;
&lt;br /&gt;
=== Tag The Release ===&lt;br /&gt;
In the past we made the tags for builds manually. Since we started making &#039;build number&#039; releases for NASA, that step has been automated. &lt;br /&gt;
&lt;br /&gt;
If this is part of Hyrax, also tag this point in the master branch with the Hyrax release number:&lt;br /&gt;
# &#039;&#039;&#039;git tag -m &amp;quot;hyrax-&amp;lt;version&amp;gt;&amp;quot; -a hyrax-&amp;lt;version&amp;gt;&#039;&#039;&#039; We can leave this tag as &#039;&#039;hyrax-&amp;lt;version&amp;gt;&#039;&#039; since it&#039;s for our own bookkeeping. &lt;br /&gt;
# &#039;&#039;&#039;git push origin hyrax-&amp;lt;version&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
#: NB: Instead of tagging the HDF4/5 modules, use the saved commit hashes that git tracks for submodules. This cuts down on the bookkeeping for releases and removes one source of error.&lt;br /&gt;
&lt;br /&gt;
=== Create the release on Github ===&lt;br /&gt;
Go to the &#039;tags&#039; page (&#039;Code&#039;, then &#039;Tags&#039; at the top of the repository window). There, click the ellipsis (...) to the right of the &#039;version-*&#039; tag and:&lt;br /&gt;
# Enter a &#039;&#039;title&#039;&#039; for the release&lt;br /&gt;
# Copy the most recent text from the NEWS file into the &#039;&#039;describe&#039;&#039; field&lt;br /&gt;
# Click &#039;&#039;Update this release&#039;&#039; or &#039;&#039;Save draft&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will trigger an &#039;archive and DOI&#039; process on the Zenodo system.&lt;br /&gt;
&lt;br /&gt;
=== Publish and Sign ===&lt;br /&gt;
&lt;br /&gt;
When the release is made on GitHub, the source tar bundle is made automatically. However, this bundle is &#039;&#039;&#039;not&#039;&#039;&#039; the one we wish to publish because it requires people to have &#039;&#039;autoconf&#039;&#039; installed. Rather, we want to use the result of &amp;quot;&amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;&amp;quot;, which will have the &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt; script pre-generated.&lt;br /&gt;
&lt;br /&gt;
All you need to do is build the tar file using &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;, sign it, and push (or pull) these files onto www.opendap.org/pub/source. &lt;br /&gt;
&lt;br /&gt;
# Go to the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project on your local machine and run &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt; which will make a libdap-x.y.z.tar.gz file at the top level of the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project.&lt;br /&gt;
# Use &#039;&#039;&#039;gpg&#039;&#039;&#039; to sign the tar bundle:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;gpg --detach-sign --local-user security@opendap.org libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Use &#039;&#039;&#039;sftp&#039;&#039;&#039; to push the signature file and the tar bundle to the /httpdocs/pub/source directory on www.opendap.org&lt;br /&gt;
#: &#039;&#039;(Assuming your current working directory is the top of the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project)&#039;&#039;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;sftp opendap@www.opendap.org&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;cd httpdocs/pub/source&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put libdap-x.y.z.tar.gz.sig&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;quit&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Check your work!&lt;br /&gt;
## Download the source tar bundle and signature from www.opendap.org.&lt;br /&gt;
## Verify the signature:&lt;br /&gt;
##: &amp;lt;tt&amp;gt;gpg --verify libdap-x.y.z.tar.gz.sig libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Get the DOI from [https://zenodo.org Zenodo] ===&lt;br /&gt;
&lt;br /&gt;
# Go to [https://zenodo.org Zenodo] and look at the &#039;upload&#039; page. Since the libdap, BES and OLFS repositories are linked to Zenodo, the newly-tagged code is uploaded to Zenodo automatically and a DOI is minted for us.&lt;br /&gt;
# Click on the new version, then click on the DOI tag in the pane on the right of the page for the given release.&lt;br /&gt;
# Copy the DOI as markdown from the window that pops up and paste it into the release information for the version back on GitHub.&lt;br /&gt;
# Also paste that into the README file. Commit using &#039;&#039;[skip ci]&#039;&#039; so we don&#039;t do a huge build (or do the build, it really doesn&#039;t matter that much).&lt;br /&gt;
&lt;br /&gt;
Images for the above steps to help with the web UI: coming soon&lt;br /&gt;
&lt;br /&gt;
== Appendix: How to see the scope of API/ABI changes in C++ sources ==&lt;br /&gt;
Determine the new software version (assuming you don&#039;t already know the extent of the changes that have been made)&lt;br /&gt;
: For C++, build a file of the methods and their arguments using:&lt;br /&gt;
:: &#039;&#039;&#039;nm .libs/libdap.a | c++filt | grep &#039; T .*::&#039; | sed &#039;s@.* T \(.*\)@\1@&#039; &amp;gt; libdap_funcs&#039;&#039;&#039;&lt;br /&gt;
: and compare that using &amp;lt;tt&amp;gt;diff&amp;lt;/tt&amp;gt; on the previous release&#039;s library.&lt;br /&gt;
Assess the changes you find based on the following rules for the values of &amp;lt;tt&amp;gt;CURRENT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;REVISION&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;AGE&amp;lt;/tt&amp;gt;:&lt;br /&gt;
* No interfaces changed, only implementations (good): ==&amp;gt; Increment REVISION.&lt;br /&gt;
* Interfaces added, none removed (good): ==&amp;gt; Increment CURRENT, increment AGE, set REVISION to 0.&lt;br /&gt;
* Interfaces removed or changed (BAD, breaks upward compatibility): ==&amp;gt; Increment CURRENT, set AGE and REVISION to 0.&lt;br /&gt;
The current values of &amp;lt;tt&amp;gt;CURRENT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;REVISION&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;AGE&amp;lt;/tt&amp;gt; can be found in &amp;lt;tt&amp;gt;configure.ac&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
LIB_DIS_CURRENT=14&lt;br /&gt;
LIB_DIS_AGE=6&lt;br /&gt;
LIB_DIS_REVISION=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Once you have determined the new values of the &amp;lt;tt&amp;gt;CURRENT:REVISION:AGE&amp;lt;/tt&amp;gt; string, then:&lt;br /&gt;
;Edit the configure.ac and update the version values to the new ones.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Jira_Release_Process&amp;diff=13520</id>
		<title>Jira Release Process</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Jira_Release_Process&amp;diff=13520"/>
		<updated>2024-01-23T21:14:32Z</updated>

		<summary type="html">&lt;p&gt;Jimg: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ReleaseSprintNotes | back]]&lt;br /&gt;
&lt;br /&gt;
This is a work in progress... (jhrg 12/05/18)&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP JIRA ==&lt;br /&gt;
&lt;br /&gt;
===Get all the closed tickets for this release ===&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot 2018-12-05 15.36.01.png|200px|thumb|right|Jira Recently Closed Tickets]]&lt;br /&gt;
&lt;br /&gt;
Get closed tickets, numbers, etc., from Jira. Go to the Issues and Filters page and look at recently closed issues. Edit the query by clicking the &#039;Advanced&#039; link at the upper right, then get the list of issues as an Excel/CSV file. This makes the issues, their obscure ticket numbers, etc., easy to copy and paste into the release web page.&lt;br /&gt;
&lt;br /&gt;
These steps worked when we were not also using Jira for our &#039;Build releases,&#039; but now that we are, editing the &#039;fixVersion&#039; makes those releases more work and does not add much for people outside of NGAP, since they cannot see the &#039;NASA Jira&#039; and thus have no way of knowing about the change.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s stop making this change unless/until a way to do so that works with the Build Releases presents itself.&lt;br /&gt;
&lt;br /&gt;
== Don&#039;t Do this &amp;lt;s&amp;gt;Update the Release Version&amp;lt;/s&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Screenshot 2018-12-05 15.16.27.png|200px|thumb|right|Jira Releases]]&lt;br /&gt;
&lt;br /&gt;
Add a new &#039;Release version&#039; on the Jira &#039;Releases&#039; page. Then &#039;release&#039; the current version, moving all unclosed tickets up to the next logical version.&lt;br /&gt;
&lt;br /&gt;
== Don&#039;t do this &amp;lt;s&amp;gt;NASA JIRA&amp;lt;/s&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
# Go to the [https://bugs.earthdata.nasa.gov/secure/RapidBoard.jspa?rapidView=901&amp;amp;projectKey=HYRAX&amp;amp;view=planning.nodetail&amp;amp;issueLimit=100 Hyrax JIRA page at NASA].&lt;br /&gt;
# Find the Version for the pending Hyrax release. If you can only find the Version for a previous release, make a new Version titled Hyrax-&amp;lt;numbers&amp;gt; for the new release.&lt;br /&gt;
# Locate the tickets closed since the last Hyrax release date. The easiest way is to use the [https://bugs.earthdata.nasa.gov/issues/?filter=23327 saved issue filter &amp;quot;Closed Since&amp;quot;]. You may have to edit the filter so the &amp;quot;since&amp;quot; date reflects the date of the previous Hyrax release.&lt;br /&gt;
# Examine each ticket to determine if the work done resulted in a change to Hyrax or to some part of the NGAP stack. Add the Hyrax Version tag to the tickets that have become part of the publicly available Hyrax code. Do not include changes that are associated with NGAP CI/CD, NGAP deployments, or NGAP Cloud provisioning. The idea is to tag only changes made to Hyrax in our GitHub repository that changed the server we release publicly. In theory this tagging should happen as we go along in the NGAP JIRA process, so this step should really be a &amp;quot;sweep&amp;quot; to make sure nothing important was missed.&lt;br /&gt;
# When this has been completed you can go to the Hyrax Version page and make it a Release.&lt;br /&gt;
# Copy the URL for the release page for inclusion in the Hyrax release page on www.opendap.org&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Source_Release_for_BES&amp;diff=13519</id>
		<title>Source Release for BES</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Source_Release_for_BES&amp;diff=13519"/>
		<updated>2024-01-05T02:31:44Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Tag the BES code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
This page covers the steps required to release the BES software for Hyrax. &lt;br /&gt;
&lt;br /&gt;
We now depend on the CI/CD process to build binary packages and to test the source builds.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Tip&#039;&#039;&#039;: If, while working on the release, you find you need to make changes to the code and you know the CI build will fail, do so on a &#039;&#039;release branch&#039;&#039; that you can merge and discard later. Do not make a release branch if you don&#039;t &#039;&#039;&#039;need&#039;&#039;&#039; it, since it complicates making tags.&lt;br /&gt;
&lt;br /&gt;
==  Verify the code base ==&lt;br /&gt;
# We release using the &#039;&#039;master&#039;&#039; branch. The code on &#039;&#039;master&#039;&#039; must have passed the CI builds. &#039;&#039;&#039;This includes the hyrax-docker builds since that CI build runs the full server regression tests!&#039;&#039;&#039;&lt;br /&gt;
# Make sure that the source code you&#039;re using for the following steps is up-to-date. (&#039;&#039;git pull&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== Update the Version Numbers ==&lt;br /&gt;
&lt;br /&gt;
=== Version for Humans ===&lt;br /&gt;
# Determine the human version number. This appears to be a somewhat subjective process.&lt;br /&gt;
# Edit each of the &#039;&#039;Affected Files&#039;&#039; and update the human version number.&lt;br /&gt;
&lt;br /&gt;
:; Affected Files&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for:&lt;br /&gt;
::: &amp;lt;tt&amp;gt;AC_INIT(bes, ###.###.###, opendap-tech@opendap.org)&amp;lt;/tt&amp;gt;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039; debian/changelog&#039;&#039;&#039;&#039;&#039; (see [https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#changelog Debian ChangeLog])&lt;br /&gt;
::: &#039;&#039;&#039;Take Note!&#039;&#039;&#039; &#039;&#039;The &amp;lt;tt&amp;gt;debian/changelog&amp;lt;/tt&amp;gt; is the &amp;quot;single source of truth&amp;quot; for the BES version in the debian packaging. If this does not agree with the version being packaged, the package build will fail.&#039;&#039;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;ChangeLog&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;NEWS&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;README.md&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:* &#039;&#039;&#039;&#039;&#039;INSTALL&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Update the internal library (API/ABI) version numbers. ===&lt;br /&gt;
The BES is &#039;&#039;&#039;&#039;&#039;not&#039;&#039;&#039;&#039;&#039; a shared library; it is a set of C++ applications that are typically built as statically linked binaries. Because of this, the usual CURRENT:REVISION:AGE tuples used to express the binary compatibility state of a C++ shared object library have little meaning for the BES code. So, what we choose to do is simply bump the REVISION numbers by one for each release.&lt;br /&gt;
&lt;br /&gt;
* In the &#039;&#039;&#039;configure.ac&#039;&#039;&#039; file locate each of:&lt;br /&gt;
** LIB_DIS_REVISION&lt;br /&gt;
** LIB_PPT_REVISION&lt;br /&gt;
** LIB_XML_CMD_REVISION&lt;br /&gt;
* Increase the value of each by one (1).&lt;br /&gt;
* Save the file.&lt;br /&gt;
* Update the text documentation files and version numbers in the configuration files:&lt;br /&gt;
&lt;br /&gt;
Example of the relevant section from configure.ac: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
LIB_DIS_CURRENT=18&lt;br /&gt;
LIB_DIS_AGE=3&lt;br /&gt;
LIB_DIS_REVISION=3&lt;br /&gt;
AC_SUBST(LIB_DIS_CURRENT)&lt;br /&gt;
AC_SUBST(LIB_DIS_AGE)&lt;br /&gt;
AC_SUBST(LIB_DIS_REVISION)&lt;br /&gt;
LIBDISPATCH_VERSION=&amp;quot;$LIB_DIS_CURRENT:$LIB_DIS_REVISION:$LIB_DIS_AGE&amp;quot;&lt;br /&gt;
AC_SUBST(LIBDISPATCH_VERSION)&lt;br /&gt;
&lt;br /&gt;
LIB_PPT_CURRENT=5&lt;br /&gt;
LIB_PPT_AGE=1&lt;br /&gt;
LIB_PPT_REVISION=2&lt;br /&gt;
AC_SUBST(LIB_PPT_CURRENT)&lt;br /&gt;
AC_SUBST(LIB_PPT_AGE)&lt;br /&gt;
AC_SUBST(LIB_PPT_REVISION)&lt;br /&gt;
LIBPPT_VERSION=&amp;quot;$LIB_PPT_CURRENT:$LIB_PPT_REVISION:$LIB_PPT_AGE&amp;quot;&lt;br /&gt;
AC_SUBST(LIBPPT_VERSION)&lt;br /&gt;
&lt;br /&gt;
LIB_XML_CMD_CURRENT=5&lt;br /&gt;
LIB_XML_CMD_AGE=4&lt;br /&gt;
LIB_XML_CMD_REVISION=2&lt;br /&gt;
AC_SUBST(LIB_XML_CMD_CURRENT)&lt;br /&gt;
AC_SUBST(LIB_XML_CMD_AGE)&lt;br /&gt;
AC_SUBST(LIB_XML_CMD_REVISION)&lt;br /&gt;
LIBXMLCOMMAND_VERSION=&amp;quot;$LIB_XML_CMD_CURRENT:$LIB_XML_CMD_REVISION:$LIB_XML_CMD_AGE&amp;quot;&lt;br /&gt;
AC_SUBST(LIBXMLCOMMAND_VERSION)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
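The three REVISION bumps can be scripted; the following is only a sketch (the stand-in configure.ac created here is for illustration, and you would run the awk line against the real file):&lt;br /&gt;

```shell
# Create a stand-in configure.ac fragment; the real repo already has one.
printf 'LIB_DIS_REVISION=3\nLIB_PPT_REVISION=2\nLIB_XML_CMD_REVISION=2\n' > configure.ac
# Bump each of the three *_REVISION values by one, leaving other lines alone.
awk '/^LIB_(DIS|PPT|XML_CMD)_REVISION=/ { split($0, kv, "="); $0 = kv[1] "=" (kv[2] + 1) } { print }' configure.ac > configure.ac.new
mv configure.ac.new configure.ac
```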
&lt;br /&gt;
== Update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file. ==&lt;br /&gt;
Use the script &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt; (which can be found with Google) to update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file by running it using the &amp;lt;tt&amp;gt;--since=&amp;quot;&amp;lt;date&amp;gt;&amp;quot;&amp;lt;/tt&amp;gt; option with a date one day later in time than the newest entry in the current ChangeLog. &lt;br /&gt;
: &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;gitlog-to-changelog --since=&amp;quot;1970-01-01&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
:: (&#039;&#039;Specify a date one day later than the one at the top of the existing ChangeLog file.&#039;&#039;)&lt;br /&gt;
Save the result to a temp file and combine the two files: &amp;lt;br/&amp;gt;&lt;br /&gt;
: &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;cat tmp ChangeLog &amp;gt; ChangeLog.tmp; mv ChangeLog.tmp ChangeLog&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you&#039;re making the first ChangeLog entries, then you&#039;ll need to create the ChangeLog file first. &amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Tip&#039;&#039;&#039;: &#039;&#039;When you&#039;re making the commit log entries, use line breaks so ChangeLog will be readable. That is, use lines &amp;lt; 80 characters long.&#039;&#039;&lt;br /&gt;
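The ChangeLog update described above boils down to prepending the new entries; a minimal sketch, using stand-in file contents in place of real gitlog-to-changelog output:&lt;br /&gt;

```shell
# Stand-in for: gitlog-to-changelog --since="YYYY-MM-DD" saved to a temp file.
printf '2024-01-05  Jimg  release work\n' > tmp
# Stand-in for the existing ChangeLog, which the repo already contains.
printf '2023-11-15  Jimg  older entry\n' > ChangeLog
# Prepend the new entries to the old file and clean up.
cat tmp ChangeLog > ChangeLog.tmp
mv ChangeLog.tmp ChangeLog
rm tmp
```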
&lt;br /&gt;
== Update the &#039;&#039;&#039;NEWS&#039;&#039;&#039; file ==&lt;br /&gt;
To update the NEWS file, just read over the new ChangeLog entries and summarize. &lt;br /&gt;
&lt;br /&gt;
The new entries to the NEWS file will be used later when making the GitHub release and when writing the server&#039;s release page on www.opendap.org.&lt;br /&gt;
&lt;br /&gt;
We might replace this:&lt;br /&gt;
* It&#039;s also helpful to have, in the &#039;&#039;&#039;NEWS&#039;&#039;&#039; file, the Web site, and the release notes, a list of the Jira tickets that have been closed since the last release. The best way to do this is to go to &#039;&#039;Jira&#039;s Issues&#039;&#039; page and look at the &#039;&#039;Tickets closed recently&#039;&#039; item. From there, click on &#039;&#039;Advanced&#039;&#039; and edit the time range so that it covers the period from the past release to now, then &#039;&#039;Export&#039;&#039; that info as an Excel spreadsheet (the icon with a hat and a down arrow). YMMV regarding how easy this is, and Jira&#039;s UI changes often.&lt;br /&gt;
&lt;br /&gt;
With instructions about making an associated release in JIRA using version tagging.&lt;br /&gt;
&lt;br /&gt;
== Update the Version Numbers for Humans ==&lt;br /&gt;
;Affected Files: &lt;br /&gt;
: configure.ac&lt;br /&gt;
: debian/changelog (see [https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#changelog Debian ChangeLog])&lt;br /&gt;
: NEWS&lt;br /&gt;
: README.md&lt;br /&gt;
: INSTALL&lt;br /&gt;
&lt;br /&gt;
# Determine the human version number. This appears to be a somewhat subjective process.&lt;br /&gt;
# Edit each of the &#039;&#039;Affected Files&#039;&#039; and update the human version number.&lt;br /&gt;
# In the &#039;&#039;&#039;README.md&#039;&#039;&#039; file be sure to update the description of how to locate the DOI for the release with the new version number.&lt;br /&gt;
&lt;br /&gt;
== Update the libdap version ==&lt;br /&gt;
Determine the libdap version associated with this release by checking the contents of the file &amp;lt;tt&amp;gt;libdap4-snapshot&amp;lt;/tt&amp;gt;. The &amp;lt;tt&amp;gt;libdap4-snapshot&amp;lt;/tt&amp;gt; file should contain a single line like this example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
libdap4-3.20.9-0 2021-12-28T19:23:45+0000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The libdap version for the above example is: &amp;lt;tt&amp;gt;libdap-3.20.9&amp;lt;/tt&amp;gt; (The version is NOT &amp;lt;tt&amp;gt;libdap4-3.20.9&amp;lt;/tt&amp;gt;)&lt;br /&gt;
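Extracting that version string can be done with a single sed expression; a sketch, using a stand-in copy of the snapshot file holding the example line above:&lt;br /&gt;

```shell
# Stand-in libdap4-snapshot; in the BES repo the file already exists.
printf 'libdap4-3.20.9-0 2021-12-28T19:23:45+0000\n' > libdap4-snapshot
# Drop the leading package name and the trailing build number and date.
version=$(sed -E 's/^libdap4-([0-9.]+)-[0-9]+ .*/\1/' libdap4-snapshot)
echo "libdap-$version"    # prints libdap-3.20.9
```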
&lt;br /&gt;
=== Update the libdap version in the .travis.yml file ===&lt;br /&gt;
;Affected Files&lt;br /&gt;
: .travis.yml&lt;br /&gt;
&lt;br /&gt;
In the .travis.yml file update the value of  &#039;&#039;LIBDAP_RPM_VERSION&#039;&#039; in the &#039;&#039;env: global:&#039;&#039; section so that it contains the complete numerical value of the libdap version you located in the previous step. Using the previous example the value would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    - LIBDAP_RPM_VERSION=3.20.9-0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
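This is a one-line sed edit; a sketch using the example version and a stand-in .travis.yml fragment (in practice, edit the real file in the repo):&lt;br /&gt;

```shell
# Stand-in .travis.yml fragment with an older libdap version.
printf '    - LIBDAP_RPM_VERSION=3.20.8-1\n' > .travis.yml
# Replace whatever version is currently set with the new one.
sed -i.bak 's/LIBDAP_RPM_VERSION=.*/LIBDAP_RPM_VERSION=3.20.9-0/' .travis.yml
```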
&lt;br /&gt;
=== Update the libdap version in the RPM spec files ===&lt;br /&gt;
;Affected Files&lt;br /&gt;
: &#039;&#039;bes.spec*.in&#039;&#039;&lt;br /&gt;
Update the &amp;lt;tt&amp;gt;bes.spec*.in&amp;lt;/tt&amp;gt; files by changing the &amp;lt;tt&amp;gt;Requires&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;BuildRequires&amp;lt;/tt&amp;gt; entries for libdap. Based on our example the result would be: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Requires:       libdap &amp;gt;= 3.20.9&lt;br /&gt;
BuildRequires:  libdap-devel &amp;gt;= 3.20.9&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;These lines may not be adjacent to each other in the spec files&#039;&#039;)&lt;br /&gt;
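A sketch of making both edits with sed (a stand-in spec fragment is created here; run the sed line over the real bes.spec*.in files):&lt;br /&gt;

```shell
# Stand-in spec fragment holding the old libdap version.
printf 'Requires:       libdap >= 3.20.8\nBuildRequires:  libdap-devel >= 3.20.8\n' > bes.spec.in
# Update the Requires and BuildRequires libdap entries in one pass.
sed -i.bak -E 's/(libdap(-devel)?) *>= *[0-9.]+/\1 >= 3.20.9/' bes.spec.in
```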
&lt;br /&gt;
=== Update the libdap version in the README.md file ===&lt;br /&gt;
;Affected Files&lt;br /&gt;
: README.md&lt;br /&gt;
# [https://zenodo.org Get the DOI markdown from Zenodo] by using the search bar and searching for the libdap version string that you determined at the beginning of this section. &lt;br /&gt;
# Update the &#039;&#039;&#039;README.md&#039;&#039;&#039; file with libdap version and the associated DOI link (using the markdown you got from Zenodo).&lt;br /&gt;
&lt;br /&gt;
; Note&lt;br /&gt;
: You will also need this DOI markdown when making the GitHub release page for the BES. &lt;br /&gt;
&lt;br /&gt;
See the section on this page titled &amp;quot;&#039;&#039;Get the BES DOI from Zenodo&#039;&#039;&amp;quot;  for more details about getting the DOI markdown.&lt;br /&gt;
&lt;br /&gt;
== Update the RPM dependencies ==&lt;br /&gt;
;Affected Files: &lt;br /&gt;
:&#039;&#039;bes.spec*.in&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the RPM &#039;&#039;.spec&#039;&#039; file, update the dependencies as needed. &lt;br /&gt;
* The libdap version dependency was covered in a previous step.&lt;br /&gt;
* Be attentive to changes that have been made to the hyrax-dependencies since the last release.&lt;br /&gt;
&lt;br /&gt;
== Update the module version numbers for humans ==&lt;br /&gt;
In bes/modules/common, check that the file all-modules.txt is complete and update it as needed. Then:&lt;br /&gt;
&lt;br /&gt;
* Remove the sentinel files that prevent the version updater from being run multiple times in succession without specific intervention:&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;rm -v ../*/version_updated&amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
* Now run the version updater:&lt;br /&gt;
:: &#039;&#039;&#039;&amp;lt;tt&amp;gt;./version_update_modules.sh -v &amp;lt;/tt&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will update the patch number (x.y.patch) for each of the named modules. &lt;br /&gt;
&lt;br /&gt;
If a particular module has significant fixes, hand-edit its version number in its Makefile.am. &lt;br /&gt;
&lt;br /&gt;
See below for special info about the HDF4/5 modules (which also applies to any modules not in the BES GitHub repo).&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;del&amp;gt;For the BES HDF4/5 modules (BES only) &amp;lt;/del&amp;gt;==&lt;br /&gt;
# &amp;lt;del&amp;gt;&#039;&#039;Make sure that you are working on the master branch of each module!!&#039;&#039;&amp;lt;/del&amp;gt;&lt;br /&gt;
# &amp;lt;del&amp;gt; Goto those directories and update the ChangeLog, NEWS, README, and INSTALL files (even though INSTALL is not used by many).&amp;lt;/del&amp;gt;&lt;br /&gt;
# &amp;lt;del&amp;gt; Update the module version numbers in their respective Makefile.am files.&amp;lt;/del&amp;gt;&lt;br /&gt;
# &amp;lt;del&amp;gt; Commit and Push these changes.&amp;lt;/del&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Update the Build Offset ==&lt;br /&gt;
&#039;&#039;Setting the build offset correctly will set the build number for the new release to &amp;quot;0&amp;quot;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the file &amp;lt;tt&amp;gt;travis/travis_bes_build_offset.sh&amp;lt;/tt&amp;gt; set the value of &amp;lt;tt&amp;gt;BES_TRAVIS_BUILD_OFFSET&amp;lt;/tt&amp;gt; to the number of the last TravisCI build plus one. The previous commit and push will have triggered a TravisCI build. Find the build number for the previous commit in [https://app.travis-ci.com/github/OPENDAP/bes the TravisCI page for the BES] and use that build number plus 1.&lt;br /&gt;
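Sketch of the offset edit, with a hypothetical last build number of 1234 (this writes a stand-in copy of the script; in the repo, edit the existing file by hand):&lt;br /&gt;

```shell
# Hypothetical last TravisCI build number, read from the TravisCI web page.
LAST_TRAVIS_BUILD=1234
mkdir -p travis
# The offset is the last build number plus one, so the next release build is 0.
printf 'export BES_TRAVIS_BUILD_OFFSET=%d\n' $((LAST_TRAVIS_BUILD + 1)) > travis/travis_bes_build_offset.sh
```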
&lt;br /&gt;
== Commit Changes ==&lt;br /&gt;
&#039;&#039;Be sure that you have completed all of the changes to the various ChangeLog, NEWS, INSTALL, configure.ac,  &amp;lt;tt&amp;gt;travis/travis_bes_build_offset.sh&amp;lt;/tt&amp;gt;, and other files before proceeding!&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# Commit and push the BES code. Wait for the CI/CD builds to complete. You must be working on the &#039;&#039;master&#039;&#039; branch to get the CD package builds to work.&lt;br /&gt;
&lt;br /&gt;
== Tag the BES code ==&lt;br /&gt;
&lt;br /&gt;
The build process automatically tags builds of the master branch. The Hyrax-version tag is a placeholder for us so we can sort out what code goes with various Hyrax source releases.&lt;br /&gt;
&lt;br /&gt;
# If this is part of a Hyrax Release, then tag this point in the master branch with the Hyrax release number&lt;br /&gt;
#* &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;git tag -m &amp;quot;hyrax-&amp;lt;number&amp;gt;&amp;quot; -a hyrax-&amp;lt;number&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#* &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;git push origin hyrax-&amp;lt;number&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &#039;&#039;&#039;NB:&#039;&#039;&#039; &#039;&#039;Instead of tagging the HDF4/5 modules, use the saved commit hashes that git tracks for submodules. This cuts down on the bookkeeping for releases and removes one source of error.&#039;&#039;&lt;br /&gt;
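The tagging step, sketched in a throwaway repository so the commands can be tried safely (the version number and identity are placeholders; for a real release, run only the tag and push commands at the release point on master):&lt;br /&gt;

```shell
# Throwaway repo so the tag command has a commit to point at.
git init -q tag-demo
git -C tag-demo -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'release point'
# Create the annotated Hyrax release tag (hyrax-1.16.8 is a placeholder).
git -C tag-demo -c user.name=demo -c user.email=demo@example.com tag -a -m 'hyrax-1.16.8' hyrax-1.16.8
# In the real bes clone, push the tag: git push origin hyrax-1.16.8
```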
&lt;br /&gt;
== Create the BES release on Github ==&lt;br /&gt;
# [https://github.com/OPENDAP/bes Go to the BES project page on GitHub]&lt;br /&gt;
# Choose the &#039;&#039;&#039;releases&#039;&#039;&#039; tab.&lt;br /&gt;
# On the [https://github.com/OPENDAP/bes/releases Releases page] click the &#039;Tags&#039; tab. &lt;br /&gt;
# On the [https://github.com/OPENDAP/bes/tags Tags page], locate the tag (created above) associated with this new release.&lt;br /&gt;
# Click the ellipses (...) located on the far right side of the &#039;&#039;version-x.y.z&#039;&#039; tag &#039;frame&#039; for this release and choose &#039;&#039;Create release&#039;&#039;.&lt;br /&gt;
#* Enter a &#039;&#039;title&#039;&#039; for the release&lt;br /&gt;
#* Copy the most recent text from the NEWS file into the &#039;&#039;describe&#039;&#039; field&lt;br /&gt;
#* Click &#039;&#039;&#039;Publish release&#039;&#039;&#039; or  &#039;&#039;&#039;Save draft&#039;&#039;&#039;. &lt;br /&gt;
#** If you have previously edited the release page you can click &#039;&#039;&#039;Update this release&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Publish and Sign ==&lt;br /&gt;
&lt;br /&gt;
When the release is made on GitHub the source tar bundle is made automatically. However, this bundle is &#039;&#039;&#039;not&#039;&#039;&#039; the one we wish to publish because it requires people to have &#039;&#039;autoconf&#039;&#039; installed. Rather we want to use the result of &amp;quot;&amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;&amp;quot; which will have the &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt; script pre-generated.&lt;br /&gt;
&lt;br /&gt;
All you need to do is build the tar file using &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;, sign it, and push (or pull) these files onto www.opendap.org/pub/source. &lt;br /&gt;
&lt;br /&gt;
# Go to the &#039;&#039;&#039;bes&#039;&#039;&#039; project on your local machine and run &#039;&#039;&amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;&#039;&#039;, which will make a bes-x.y.z.tar.gz file at the top level of the &#039;&#039;&#039;bes&#039;&#039;&#039; project.&lt;br /&gt;
# Use &#039;&#039;&#039;gpg&#039;&#039;&#039; to sign the tar bundle:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;gpg --detach-sign --local-user security@opendap.org bes-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Use &#039;&#039;&#039;sftp&#039;&#039;&#039; to push the signature file and the tar bundle to the /httpdocs/pub/source directory on www.opendap.org&lt;br /&gt;
#: &#039;&#039;(Assuming your current working directory is the top of the &#039;&#039;&#039;bes&#039;&#039;&#039; project)&#039;&#039;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;sftp opendap@www.opendap.org&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;cd httpdocs/pub/source&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put bes-x.y.z.tar.gz.sig&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put bes-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;quit&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Check your work!&lt;br /&gt;
## Download the source tar bundle and signature from www.opendap.org.&lt;br /&gt;
## Verify the signature:&lt;br /&gt;
##: &amp;lt;tt&amp;gt;gpg --verify bes-x.y.z.tar.gz.sig bes-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get the BES DOI from Zenodo ==&lt;br /&gt;
Get the Zenodo DOI for the newly created BES release and add it to the associated GitHub BES release page.&lt;br /&gt;
&lt;br /&gt;
# [https://zenodo.org Go to Zenodo] &lt;br /&gt;
# Look at the &#039;upload&#039; page. If there is nothing there (perhaps because you are not &#039;&#039;jhrg&#039;&#039; or whoever set up the connection between the BES project and Zenodo) you can use the search bar to search for &#039;&#039;&#039;bes&#039;&#039;&#039;. &lt;br /&gt;
#: Since the libdap, BES and OLFS repositories are linked to Zenodo, the newly-tagged code is uploaded to Zenodo automatically and a DOI is minted for us.&lt;br /&gt;
# Click on the new version, then click on the DOI tag in the pane on the right of the page for the given release.&lt;br /&gt;
# Copy the DOI as markdown from the window that pops up.&lt;br /&gt;
# Edit the GitHub release page for the BES release you just created and paste the DOI markdown into the top of the  description.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tip:&#039;&#039;&#039; &#039;&#039;If you are trying to locate the &#039;&#039;&#039;libdap&#039;&#039;&#039; releases in Zenodo you have to search for the string:&#039;&#039; &amp;lt;tt style=&amp;quot;font-size: 1.1em; font-weight: bold;&amp;quot;&amp;gt;libdap4&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Images ===&lt;br /&gt;
[[File:Screenshot 2018-12-06 11.06.44.png|none|thumb|400px|border|left|Zenodo upload page]]&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Source_Release_for_libdap&amp;diff=13518</id>
		<title>Source Release for libdap</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Source_Release_for_libdap&amp;diff=13518"/>
		<updated>2024-01-04T22:44:56Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Tag The Release */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers the steps needed to release the libdap software for Hyrax. There are separate pages for the BES and OLFS code, and an overview page that describes how the website is updated and how the mailing lists are notified.&lt;br /&gt;
&lt;br /&gt;
We now depend on the CI/CD process to build binary packages and to test the source builds. When the source code is tagged and marked as a release in GitHub, our linked Zenodo account archives that software and mints a DOI for it.&lt;br /&gt;
&lt;br /&gt;
== The Release Process ==&lt;br /&gt;
:&#039;&#039;&#039;Tip&#039;&#039;&#039;: If, while working on the release, you find you need to make changes to the code and you know the CI build will fail, do so on a &#039;&#039;release branch&#039;&#039; that you can merge and discard later. Do not make a release branch unless you need to since it complicates making tags.&lt;br /&gt;
&lt;br /&gt;
===  Verify the code base ===&lt;br /&gt;
# We release using the &#039;&#039;master&#039;&#039; branch. The code on &#039;&#039;master&#039;&#039; must pass the CI build. &lt;br /&gt;
# Make sure that the source code you&#039;re using for the following steps is up-to-date. (&#039;&#039;git pull&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
=== Update Release Files ===&lt;br /&gt;
Update the text documentation files and version numbers in the configuration files:&lt;br /&gt;
&lt;br /&gt;
; &#039;&#039;&#039;Note&#039;&#039;&#039; &lt;br /&gt;
:It&#039;s helpful to have, in the &#039;&#039;&#039;NEWS&#039;&#039;&#039; file, the Web site, and the release notes, a list of the Jira tickets that have been closed since the last release. The best way to do this is to go to &#039;&#039;Jira&#039;s Issues&#039;&#039; page and look at the &#039;&#039;Tickets closed recently&#039;&#039; item. From there, click on &#039;&#039;Advanced&#039;&#039; and edit the time range so that it covers the period from the past release to now, then &#039;&#039;Export&#039;&#039; that info as an Excel spreadsheet (the icon with a hat and a down arrow). YMMV regarding how easy this is, and Jira&#039;s UI changes often.&lt;br /&gt;
&lt;br /&gt;
==== Update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file. ====&lt;br /&gt;
Use the script &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt; (which can be found with Google) to update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file by running it using the &amp;lt;tt&amp;gt;--since=&amp;quot;&amp;lt;date&amp;gt;&amp;quot;&amp;lt;/tt&amp;gt; option with a date one day later in time than the newest entry in the current ChangeLog. &lt;br /&gt;
: &#039;&#039;&#039;gitlog-to-changelog --since=&amp;quot;1970-01-01&amp;quot;&#039;&#039;&#039; (&#039;&#039;Specify a date one day later than the one at the top of ChangeLog&#039;&#039;)&lt;br /&gt;
Save the result to a temp file and combine the two files: &amp;lt;br/&amp;gt;&lt;br /&gt;
: &#039;&#039;&#039;cat tmp ChangeLog &amp;gt; ChangeLog.tmp; mv ChangeLog.tmp ChangeLog&#039;&#039;&#039;&lt;br /&gt;
If you&#039;re making the first ChangeLog entries, then you&#039;ll need to create the ChangeLog file first. &amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Tip&#039;&#039;&#039;: &#039;&#039;When you&#039;re making the commit log entries, use line breaks so ChangeLog will be readable. That is, use lines &amp;lt; 80 characters long.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Update the NEWS file ====&lt;br /&gt;
To update the NEWS file, just read over the new ChangeLog entries and summarize.&lt;br /&gt;
&lt;br /&gt;
==== Update the Version Numbers ====&lt;br /&gt;
There are really two version numbers for each of these project items: the &#039;&#039;human&#039;&#039; version (like version-3.17.5) and the &#039;&#039;library&#039;&#039; API/ABI version, which is represented as &amp;lt;tt&amp;gt;CURRENT:REVISION:AGE&amp;lt;/tt&amp;gt;. Special rules, triggered by the kinds of changes that were made to the code base, determine when each of the numbers in the library API/ABI version gets incremented. The human version number is more arbitrary. So, for example, we might make a major API/ABI change and have to move to a new Libtool version like &amp;lt;tt&amp;gt;25:0:0&amp;lt;/tt&amp;gt;, but the human version might only change from bes-3.17.3 to bes-3.18.0.&lt;br /&gt;
&lt;br /&gt;
===== Version for Humans =====&lt;br /&gt;
# Determine the human version number. This appears to be a somewhat subjective process.&lt;br /&gt;
# Edit each of the &#039;&#039;Affected Files&#039;&#039; and update the human version number.&lt;br /&gt;
&lt;br /&gt;
:;Affected Files: &lt;br /&gt;
:: &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for:&lt;br /&gt;
::: &amp;lt;tt&amp;gt;AC_INIT(libdap, ###.###.###, opendap-tech@opendap.org)&amp;lt;/tt&amp;gt;&lt;br /&gt;
:: debian/changelog (see [https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#changelog Debian ChangeLog])&lt;br /&gt;
::: &#039;&#039;&#039;Take Note!&#039;&#039;&#039; &#039;&#039;The &amp;lt;tt&amp;gt;debian/changelog&amp;lt;/tt&amp;gt; is the &amp;quot;single source of truth&amp;quot; for the libdap4 version in the debian packaging. If this does not agree with the version being packaged the package build will fail.&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;README.md&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;INSTALL&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===== API/ABI Version =====&lt;br /&gt;
The library API/ABI version is represented as CURRENT:REVISION:AGE. &lt;br /&gt;
&lt;br /&gt;
;The rules for shared image version numbers:&lt;br /&gt;
:# No interfaces changed, only implementations (good): Increment REVISION.&lt;br /&gt;
:# Interfaces added, none removed (good): Increment CURRENT, set REVISION to 0, increment AGE.&lt;br /&gt;
:# Interfaces removed or changed (BAD, breaks upward compatibility): Increment CURRENT, set REVISION to 0, and set AGE to 0.&lt;br /&gt;
&lt;br /&gt;
See the &#039;&#039;Appendix: How to see the scope of API/ABI changes in C++ sources&#039;&#039; below for gruesome details. Often basic knowledge of the edits is good enough.&lt;br /&gt;
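As an illustration only, the three rules can be written down as a tiny shell helper (the function name and change labels are made up for this sketch):&lt;br /&gt;

```shell
# Compute the next CURRENT:REVISION:AGE triple from the old triple and the
# kind of change: none (implementation only), added, or removed.
next_version() {
  old=$1; kind=$2
  c=${old%%:*}; rest=${old#*:}
  r=${rest%%:*}; a=${rest#*:}
  case $kind in
    none)    r=$((r + 1)) ;;                     # rule 1
    added)   c=$((c + 1)); r=0; a=$((a + 1)) ;;  # rule 2
    removed) c=$((c + 1)); r=0; a=0 ;;           # rule 3
  esac
  echo "$c:$r:$a"
}
next_version 24:3:6 added    # prints 25:0:7
```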
&lt;br /&gt;
;Affected Files: &lt;br /&gt;
: &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for&lt;br /&gt;
:: DAPLIB_CURRENT=###&lt;br /&gt;
:: DAPLIB_REVISION=###&lt;br /&gt;
:: DAPLIB_AGE=###&lt;br /&gt;
&lt;br /&gt;
=== Commit ===&lt;br /&gt;
* Commit and push the code. Wait for the CI/CD builds to complete. You must be working on the &#039;&#039;master&#039;&#039; branch to get the CD package builds to work.&lt;br /&gt;
&lt;br /&gt;
=== Update the Build Offset ===&lt;br /&gt;
&#039;&#039;Setting the build offset correctly will set the build number for the new release to &amp;quot;0&amp;quot;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the file &amp;lt;tt&amp;gt;travis/travis_libdap_build_offset.sh&amp;lt;/tt&amp;gt; set the value of &amp;lt;tt&amp;gt;LIBDAP_TRAVIS_BUILD_OFFSET&amp;lt;/tt&amp;gt; to the number of the last TravisCI build plus one. The previous commit and push will have triggered a TravisCI build. Find the build number for the previous commit in [https://app.travis-ci.com/github/OPENDAP/libdap4 the TravisCI page for libdap4] and use that build number plus 1.&lt;br /&gt;
&lt;br /&gt;
This is not the build number for the package. It is the build number used by Travis, which is the total number of times Travis has built the code. This number is the build number shown in the left-hand TOC.&lt;br /&gt;
&lt;br /&gt;
Once you have updated &amp;lt;tt&amp;gt;travis/travis_libdap_build_offset.sh&amp;lt;/tt&amp;gt;, commit and push this change. Do NOT use a &amp;lt;tt&amp;gt;[skip ci]&amp;lt;/tt&amp;gt; string in the commit message, as it is important that this commit run through the entire CI process.&lt;br /&gt;
&lt;br /&gt;
=== Tag The Release ===&lt;br /&gt;
In the past we made the build tags manually; since we started making a &#039;build number release&#039; for NASA, that step has been automated. &lt;br /&gt;
&lt;br /&gt;
If this is part of Hyrax, also tag this point in the master branch with the Hyrax release number:&lt;br /&gt;
# &#039;&#039;&#039;git tag -m &amp;quot;hyrax-&amp;lt;number&amp;gt;&amp;quot; -a hyrax-&amp;lt;number&amp;gt;&#039;&#039;&#039; I think we can leave this tag as &#039;&#039;hyrax-&amp;lt;version&amp;gt;&#039;&#039; since it&#039;s for our own bookkeeping. &lt;br /&gt;
# &#039;&#039;&#039;git push origin hyrax-&amp;lt;number&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
#: NB: Instead of tagging the HDF4/5 modules, use the saved commit hashes that git tracks for submodules. This cuts down on the bookkeeping for releases and removes one source of error.&lt;br /&gt;
&lt;br /&gt;
=== Create the release on Github ===&lt;br /&gt;
Go to the &#039;tags&#039; page (&#039;code&#039; then &#039;tags&#039; at the top of the directory window) and click the &#039;Tags&#039; tab. There, click the ellipses (...) on the right of the &#039;version-*&#039; tag and:&lt;br /&gt;
# Enter a &#039;&#039;title&#039;&#039; for the release&lt;br /&gt;
# Copy the most recent text from the NEWS file into the &#039;&#039;describe&#039;&#039; field&lt;br /&gt;
# Click &#039;&#039;Update this release&#039;&#039; or &#039;&#039;Save draft&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will trigger an &#039;archive and DOI&#039; process on the Zenodo system.&lt;br /&gt;
&lt;br /&gt;
=== Publish and Sign ===&lt;br /&gt;
&lt;br /&gt;
When the release is made on GitHub the source tar bundle is made automatically. However, this bundle is &#039;&#039;&#039;not&#039;&#039;&#039; the one we wish to publish because it requires people to have &#039;&#039;autoconf&#039;&#039; installed. Rather we want to use the result of &amp;quot;&amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;&amp;quot; which will have the &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt; script pre-generated.&lt;br /&gt;
&lt;br /&gt;
All you need to do is build the tar file using &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;, sign it, and push (or pull) these files onto www.opendap.org/pub/source. &lt;br /&gt;
&lt;br /&gt;
# Go to the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project on your local machine and run &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;, which will make a libdap-x.y.z.tar.gz file at the top level of the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project.&lt;br /&gt;
# Use &#039;&#039;&#039;gpg&#039;&#039;&#039; to sign the tar bundle:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;gpg --detach-sign --local-user security@opendap.org libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Use &#039;&#039;&#039;sftp&#039;&#039;&#039; to push the signature file and the tar bundle to the /httpdocs/pub/source directory on www.opendap.org&lt;br /&gt;
#: &#039;&#039;(Assuming your current working directory is the top of the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project)&#039;&#039;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;sftp opendap@www.opendap.org&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;cd httpdocs/pub/source&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put libdap-x.y.z.tar.gz.sig&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;quit&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Check your work!&lt;br /&gt;
## Download the source tar bundle and signature from www.opendap.org.&lt;br /&gt;
## Verify the signature:&lt;br /&gt;
##: &amp;lt;tt&amp;gt;gpg --verify libdap-x.y.z.tar.gz.sig libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Get the DOI from [https://zenodo.org Zenodo] ===&lt;br /&gt;
&#039;&#039;&#039;We should stop putting the DOIs and the cool badge in the README because the copy of README archived by Zenodo will be one step out of sync.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Instead, put a note in the README that the DOI can be found at Zenodo under name XYZ. jhrg 5/8/20&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# Go to [https://zenodo.org Zenodo] and look at the &#039;upload&#039; page. Since the libdap, BES and OLFS repositories are linked to Zenodo, the newly-tagged code is uploaded to Zenodo automatically and a DOI is minted for us.&lt;br /&gt;
# Click on the new version, then click on the DOI tag in the pane on the right of the page for the given release.&lt;br /&gt;
# Copy the DOI as markdown from the window that pops up and paste that into the info for the version back in Github land.&lt;br /&gt;
# Also paste that into the README file. Commit using &#039;&#039;[skip ci]&#039;&#039; so we don&#039;t do a huge build (or do the build, it really doesn&#039;t matter that much).&lt;br /&gt;
&lt;br /&gt;
Images for the above steps to help with the web UI: coming soon&lt;br /&gt;
&lt;br /&gt;
== Appendix: How to see the scope of API/ABI changes in C++ sources ==&lt;br /&gt;
Determine the new software version (assuming you don&#039;t already know the extent of the changes that have been made)&lt;br /&gt;
: For C++, build a file of the methods and their arguments using:&lt;br /&gt;
:: &#039;&#039;&#039;nm .libs/libdap.a | c++filt | grep &#039; T .*::&#039; | sed &#039;s@.* T \(.*\)@\1@&#039; &amp;gt; libdap_funcs&#039;&#039;&#039;&lt;br /&gt;
: and compare that using &amp;lt;tt&amp;gt;diff&amp;lt;/tt&amp;gt; on the previous release&#039;s library.&lt;br /&gt;
Assess the changes you find based on the following rules for the values of &amp;lt;tt&amp;gt;CURRENT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;REVISION&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;AGE&amp;lt;/tt&amp;gt;:&lt;br /&gt;
* No interfaces changed, only implementations (good): ==&amp;gt; Increment REVISION.&lt;br /&gt;
* Interfaces added, none removed (good): ==&amp;gt; Increment CURRENT, increment AGE, set REVISION to 0.&lt;br /&gt;
* Interfaces removed or changed (BAD, breaks upward compatibility): ==&amp;gt; Increment CURRENT, set AGE and REVISION to 0.&lt;br /&gt;
The current values of &amp;lt;tt&amp;gt;CURRENT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;REVISION&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;AGE&amp;lt;/tt&amp;gt; can be found in &amp;lt;tt&amp;gt;configure.ac&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
LIB_DIS_CURRENT=14&lt;br /&gt;
LIB_DIS_AGE=6&lt;br /&gt;
LIB_DIS_REVISION=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Once you have determined the new values for the &amp;lt;tt&amp;gt;CURRENT:REVISION:AGE&amp;lt;/tt&amp;gt; string, then:&lt;br /&gt;
;Edit &amp;lt;tt&amp;gt;configure.ac&amp;lt;/tt&amp;gt; and update the version values to the new ones.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Source_Release_for_libdap&amp;diff=13517</id>
		<title>Source Release for libdap</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Source_Release_for_libdap&amp;diff=13517"/>
		<updated>2024-01-04T22:40:53Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Update the Build Offset */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers the steps needed to release the libdap software for Hyrax. There are separate pages for the BES and OLFS code, and an overview page that describes how the website is updated and the mailing lists are notified.&lt;br /&gt;
&lt;br /&gt;
We now depend on the CI/CD process to build binary packages and to test the source builds. When the source code is tagged and marked as a release in GitHub, our linked Zenodo account archives that software and mints a DOI for it.&lt;br /&gt;
&lt;br /&gt;
== The Release Process ==&lt;br /&gt;
:&#039;&#039;&#039;Tip&#039;&#039;&#039;: If, while working on the release, you find you need to make changes to the code and you know the CI build will fail, do so on a &#039;&#039;release branch&#039;&#039; that you can merge and discard later. Do not make a release branch unless you need to since it complicates making tags.&lt;br /&gt;
&lt;br /&gt;
===  Verify the code base ===&lt;br /&gt;
# We release using the &#039;&#039;master&#039;&#039; branch. The code on &#039;&#039;master&#039;&#039; must pass the CI build. &lt;br /&gt;
# Make sure that the source code you&#039;re using for the following steps is up-to-date. (&#039;&#039;git pull&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
=== Update Release Files ===&lt;br /&gt;
Update the text documentation files and version numbers in the configuration files:&lt;br /&gt;
&lt;br /&gt;
; &#039;&#039;&#039;Note&#039;&#039;&#039; &lt;br /&gt;
:It&#039;s helpful to have, in the &#039;&#039;&#039;NEWS&#039;&#039;&#039; file, on the Web site, and in the release notes, a list of the Jira tickets that have been closed since the last release. The best way to do this is to go to &#039;&#039;Jira&#039;s Issues&#039;&#039; page and look at the &#039;&#039;Tickets closed recently&#039;&#039; item. From there, click on &#039;&#039;Advanced&#039;&#039; and edit the time range so it spans the period from the last release to now, then &#039;&#039;Export&#039;&#039; that info as an Excel spreadsheet (the icon with a hat and a down arrow). YMMV regarding how easy this is; Jira&#039;s UI changes often.&lt;br /&gt;
&lt;br /&gt;
==== Update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file. ====&lt;br /&gt;
Use the script &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt; (which can be found with Google) to update the &#039;&#039;&#039;ChangeLog&#039;&#039;&#039; file by running it using the &amp;lt;tt&amp;gt;--since=&amp;quot;&amp;lt;date&amp;gt;&amp;quot;&amp;lt;/tt&amp;gt; option with a date one day later in time than the newest entry in the current ChangeLog. &lt;br /&gt;
: &#039;&#039;&#039;gitlog-to-changelog --since=&amp;quot;1970-01-01&amp;quot;&#039;&#039;&#039; (&#039;&#039;Specify a date one day later than the one at the top of ChangeLog&#039;&#039;)&lt;br /&gt;
Save the result to a temp file and combine the two files: &amp;lt;br/&amp;gt;&lt;br /&gt;
: &#039;&#039;&#039;cat tmp ChangeLog &amp;gt; ChangeLog.tmp; mv ChangeLog.tmp ChangeLog&#039;&#039;&#039;&lt;br /&gt;
If you&#039;re making the first ChangeLog entries, then you&#039;ll need to create the ChangeLog file first. &amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Tip&#039;&#039;&#039;: &#039;&#039;When you&#039;re making the commit log entries, use line breaks so ChangeLog will be readable. That is, use lines &amp;lt; 80 characters long.&#039;&#039;&lt;br /&gt;
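The ChangeLog steps above can be sketched end to end as a short shell script. This is a toy illustration only: it builds a scratch ChangeLog in a temp directory, and the second &#039;&#039;printf&#039;&#039; stands in for running &amp;lt;tt&amp;gt;gitlog-to-changelog&amp;lt;/tt&amp;gt; in a real checkout; the names and dates are fabricated.&lt;br /&gt;

```shell
#!/bin/sh
# Toy sketch of the ChangeLog update; the second printf stands in for
# gitlog-to-changelog --since="<date>" run in a real repository.
set -e
dir=$(mktemp -d); cd "$dir"

# A scratch ChangeLog whose newest entry is dated 2024-01-04.
printf '2024-01-04  J. Doe  <jdoe@example.org>\n\n\tOld entry\n' > ChangeLog

# Find the date of the newest entry; pass a date one day later to --since.
last=$(grep -o -m 1 '[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}' ChangeLog | head -n 1)
echo "newest ChangeLog entry: $last"

# Stand-in for: gitlog-to-changelog --since="2024-01-05" > tmp
printf '2024-02-01  J. Doe  <jdoe@example.org>\n\n\tNew entry\n\n' > tmp

# Combine the files so the new entries come first.
cat tmp ChangeLog > ChangeLog.tmp; mv ChangeLog.tmp ChangeLog
head -n 1 ChangeLog
```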
&lt;br /&gt;
==== Update the NEWS file ====&lt;br /&gt;
To update the NEWS file, just read over the new ChangeLog entries and summarize.&lt;br /&gt;
&lt;br /&gt;
==== Update the Version Numbers ====&lt;br /&gt;
There are really two version numbers for each of these project items: the &#039;&#039;human&#039;&#039; version (like version-3.17.5) and the &#039;&#039;library&#039;&#039; API/ABI version, which is represented as &amp;lt;tt&amp;gt;CURRENT:REVISION:AGE&amp;lt;/tt&amp;gt;. There are special rules for when each of the numbers in the library API/ABI version gets incremented, triggered by the kinds of changes that were made to the code base. The human version number is more arbitrary. So, for example, we might make a major API/ABI change and have to move to a new Libtool version like &amp;lt;tt&amp;gt;25:0:0&amp;lt;/tt&amp;gt;, but the human version might only change from bes-3.17.3 to bes-3.18.0.&lt;br /&gt;
&lt;br /&gt;
===== Version for Humans =====&lt;br /&gt;
# Determine the human version number. This appears to be a somewhat subjective process.&lt;br /&gt;
# Edit each of the &#039;&#039;Affected Files&#039;&#039; and update the human version number.&lt;br /&gt;
&lt;br /&gt;
:;Affected Files: &lt;br /&gt;
:: &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for:&lt;br /&gt;
::: &amp;lt;tt&amp;gt;AC_INIT(libdap, ###.###.###, opendap-tech@opendap.org)&amp;lt;/tt&amp;gt;&lt;br /&gt;
:: debian/changelog (see [https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#changelog Debian ChangeLog])&lt;br /&gt;
::: &#039;&#039;&#039;Take Note!&#039;&#039;&#039; &#039;&#039;The &amp;lt;tt&amp;gt;debian/changelog&amp;lt;/tt&amp;gt; is the &amp;quot;single source of truth&amp;quot; for the libdap4 version in the debian packaging. If this does not agree with the version being packaged, the package build will fail.&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;README.md&#039;&#039;&#039;&lt;br /&gt;
:: &#039;&#039;&#039;INSTALL&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===== API/ABI Version =====&lt;br /&gt;
The library API/ABI version is represented as CURRENT:REVISION:AGE. &lt;br /&gt;
&lt;br /&gt;
;The rules for shared library version numbers:&lt;br /&gt;
:# No interfaces changed, only implementations (good): Increment REVISION.&lt;br /&gt;
:# Interfaces added, none removed (good): Increment CURRENT, set REVISION to 0, increment AGE.&lt;br /&gt;
:# Interfaces removed or changed (BAD, breaks upward compatibility): Increment CURRENT, set REVISION to 0 , and set AGE to 0.&lt;br /&gt;
&lt;br /&gt;
See the &#039;&#039;Appendix: How to see the scope of API/ABI changes in C++ sources&#039;&#039; below for gruesome details. Often basic knowledge of the edits is good enough.&lt;br /&gt;
&lt;br /&gt;
;Affected Files: &lt;br /&gt;
: &#039;&#039;&#039;&#039;&#039;configure.ac&#039;&#039;&#039;&#039;&#039; - Look for&lt;br /&gt;
:: DAPLIB_CURRENT=###&lt;br /&gt;
:: DAPLIB_REVISION=###&lt;br /&gt;
:: DAPLIB_AGE=###&lt;br /&gt;
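The three increment rules above can be written down as a small shell function. This is only a sketch; the starting &amp;lt;tt&amp;gt;CURRENT:REVISION:AGE&amp;lt;/tt&amp;gt; numbers below are made up for illustration and are not libdap&#039;s actual values.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the CURRENT:REVISION:AGE rules as a shell function.
# Usage: bump <current> <revision> <age> <none|added|removed>
bump() {
    c=$1; r=$2; a=$3
    case $4 in
        none)    r=$((r + 1)) ;;                     # only implementations changed
        added)   c=$((c + 1)); r=0; a=$((a + 1)) ;;  # interfaces added, none removed
        removed) c=$((c + 1)); r=0; a=0 ;;           # interfaces removed or changed
    esac
    echo "$c:$r:$a"
}

# Starting from a hypothetical 25:1:6 (CURRENT:REVISION:AGE):
bump 25 1 6 none      # -> 25:2:6
bump 25 1 6 added     # -> 26:0:7
bump 25 1 6 removed   # -> 26:0:0
```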
&lt;br /&gt;
=== Commit ===&lt;br /&gt;
* Commit and push the code. Wait for the CI/CD builds to complete. You must be working on the &#039;&#039;master&#039;&#039; branch to get the CD package builds to work.&lt;br /&gt;
&lt;br /&gt;
=== Update the Build Offset ===&lt;br /&gt;
&#039;&#039;Setting the build offset correctly will set the build number for the new release to &amp;quot;0&amp;quot;.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the file &amp;lt;tt&amp;gt;travis/travis_libdap_build_offset.sh&amp;lt;/tt&amp;gt; set the value of &amp;lt;tt&amp;gt;LIBDAP_TRAVIS_BUILD_OFFSET&amp;lt;/tt&amp;gt; to the number of the last TravisCI build plus one. The previous commit and push will have triggered a TravisCI build. Find the build number for the previous commit in [https://app.travis-ci.com/github/OPENDAP/libdap4 the TravisCI page for libdap4] and use that build number plus 1.&lt;br /&gt;
&lt;br /&gt;
This is not the build number for the package. It is the build number used by Travis, which is the total number of times Travis has built the code. This number appears in the left-hand TOC of the TravisCI page.&lt;br /&gt;
&lt;br /&gt;
Once you have updated &amp;lt;tt&amp;gt;travis/travis_libdap_build_offset.sh&amp;lt;/tt&amp;gt;, commit and push this change. Do NOT use a &amp;lt;tt&amp;gt;[skip ci]&amp;lt;/tt&amp;gt; string in the commit message, as it is important that this commit run through the entire CI process.&lt;br /&gt;
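That edit can be sketched with sed on a throwaway copy of the offset file; the offset and build numbers here are invented for the example.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of updating the build offset, using a toy copy of the file.
set -e
dir=$(mktemp -d)
f="$dir/travis_libdap_build_offset.sh"

# Toy stand-in for travis/travis_libdap_build_offset.sh.
echo 'export LIBDAP_TRAVIS_BUILD_OFFSET=1138' > "$f"

# The last TravisCI build number, read by hand from the TravisCI web UI.
last_build=1205

# Set the offset to that build number plus one.
sed -i.bak "s/LIBDAP_TRAVIS_BUILD_OFFSET=.*/LIBDAP_TRAVIS_BUILD_OFFSET=$((last_build + 1))/" "$f"
cat "$f"
```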
&lt;br /&gt;
=== Tag The Release ===&lt;br /&gt;
# &#039;&#039;&#039;git tag -m &amp;quot;version-&amp;lt;number&amp;gt;&amp;quot; -a &amp;lt;numbers&amp;gt;&#039;&#039;&#039;  (this was &#039;&#039;&#039;git tag -m &amp;quot;version-&amp;lt;number&amp;gt;&amp;quot; -a version-&amp;lt;numbers&amp;gt;&#039;&#039;&#039; but we have had a request to switch to plain version numbers to be more conformant with common practice WRT git version tags).&lt;br /&gt;
# &#039;&#039;&#039;git push origin &amp;lt;numbers&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If this is part of Hyrax, also tag this point in the master branch with the Hyrax release number:&lt;br /&gt;
# &#039;&#039;&#039;git tag -m &amp;quot;hyrax-&amp;lt;number&amp;gt;&amp;quot; -a hyrax-&amp;lt;numbers&amp;gt;&#039;&#039;&#039; I think we can leave this tag as &#039;&#039;hyrax-&amp;lt;version&amp;gt;&#039;&#039; since it&#039;s for our own bookkeeping. &lt;br /&gt;
# &#039;&#039;&#039;git push origin hyrax-&amp;lt;numbers&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
#: NB: Instead of tagging the HDF4/5 modules, use the saved commit hashes that git tracks for submodules. This cuts down on the bookkeeping for releases and removes one source of error.&lt;br /&gt;
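The tagging sequence above can be tried out in a scratch repository; the version numbers here are hypothetical, and the push commands are only echoed so nothing leaves the local machine.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the tagging step in a throwaway local repository.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
git -c user.email=you@example.org -c user.name=you \
    commit -q --allow-empty -m "release prep"

version=3.21.0                                # hypothetical release number
git tag -m "version-$version" -a "$version"   # plain tag, per current practice
git tag -m "hyrax-1.17.0" -a hyrax-1.17.0     # only when part of a Hyrax release

git tag                                       # lists both tags
echo git push origin "$version"               # run these for real releases
echo git push origin hyrax-1.17.0
```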
&lt;br /&gt;
=== Create the release on Github ===&lt;br /&gt;
Go to the tags page (&#039;Code&#039;, then &#039;Tags&#039; at the top of the directory window). There, click the ellipses (...) on the right of the &#039;version-*&#039; tag and:&lt;br /&gt;
# Enter a &#039;&#039;title&#039;&#039; for the release&lt;br /&gt;
# Copy the most recent text from the NEWS file into the &#039;&#039;describe&#039;&#039; field&lt;br /&gt;
# Click &#039;&#039;Update this release&#039;&#039; or &#039;&#039;Save draft&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will trigger an &#039;archive and DOI&#039; process on the Zenodo system.&lt;br /&gt;
&lt;br /&gt;
=== Publish and Sign ===&lt;br /&gt;
&lt;br /&gt;
When the release is made on GitHub, the source tar bundle is made automatically. However, this bundle is &#039;&#039;&#039;not&#039;&#039;&#039; the one we wish to publish because it requires people to have &#039;&#039;autoconf&#039;&#039; installed. Rather, we want to use the result of &amp;quot;&amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;&amp;quot;, which will have the &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt; script pre-generated.&lt;br /&gt;
&lt;br /&gt;
All you need to do is build the tar file using &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt;, sign it, and push (or pull) these files onto www.opendap.org/pub/source.&lt;br /&gt;
&lt;br /&gt;
# Go to the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project on your local machine and run &amp;lt;tt&amp;gt;make dist&amp;lt;/tt&amp;gt; which will make a libdap-x.y.z.tar.gz file at the top level of the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project.&lt;br /&gt;
# Use &#039;&#039;&#039;gpg&#039;&#039;&#039; to sign the tar bundle:&lt;br /&gt;
#: &amp;lt;tt&amp;gt;gpg --detach-sign --local-user security@opendap.org libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Use &#039;&#039;&#039;sftp&#039;&#039;&#039; to push the signature file and the tar bundle to the /httpdocs/pub/source directory on www.opendap.org&lt;br /&gt;
#: &#039;&#039;(Assuming your current working directory is the top of the &#039;&#039;&#039;libdap4&#039;&#039;&#039; project)&#039;&#039;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;sftp opendap@www.opendap.org&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;cd httpdocs/pub/source&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put libdap-x.y.z.tar.gz.sig&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;put libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
#: &amp;lt;tt&amp;gt;quit&amp;lt;/tt&amp;gt;&lt;br /&gt;
# Check your work!&lt;br /&gt;
## Download the source tar bundle and signature from www.opendap.org.&lt;br /&gt;
## Verify the signature:&lt;br /&gt;
##: &amp;lt;tt&amp;gt;gpg --verify libdap-x.y.z.tar.gz.sig libdap-x.y.z.tar.gz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Get the DOI from [https://zenodo.org Zenodo] ===&lt;br /&gt;
&#039;&#039;&#039;We should stop putting the DOIs and the cool badge in the README because the copy of README archived by Zenodo will be one step out of sync.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Instead, put a note in the README that the DOI can be found at Zenodo under name XYZ. jhrg 5/8/20&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# Go to [https://zenodo.org Zenodo] and look at the &#039;upload&#039; page. Since the libdap, BES and OLFS repositories are linked to Zenodo, the newly-tagged code is uploaded to Zenodo automatically and a DOI is minted for us.&lt;br /&gt;
# Click on the new version, then click on the DOI tag in the pane on the right of the page for the given release.&lt;br /&gt;
# Copy the DOI as markdown from the window that pops up and paste it into the release information for the version on GitHub.&lt;br /&gt;
# Also paste that into the README file. Commit using &#039;&#039;[skip ci]&#039;&#039; so we don&#039;t do a huge build (or do the build, it really doesn&#039;t matter that much).&lt;br /&gt;
&lt;br /&gt;
Images for the above steps to help with the web UI: coming soon&lt;br /&gt;
&lt;br /&gt;
== Appendix: How to see the scope of API/ABI changes in C++ sources ==&lt;br /&gt;
Determine the new software version (assuming you don&#039;t already know the extent of the changes that have been made)&lt;br /&gt;
: For C++, build a file of the methods and their arguments using:&lt;br /&gt;
:: &#039;&#039;&#039;nm .libs/libdap.a | c++filt | grep &#039; T .*::&#039; | sed &#039;s@.* T \(.*\)@\1@&#039; &amp;gt; libdap_funcs&#039;&#039;&#039;&lt;br /&gt;
: and compare that using &amp;lt;tt&amp;gt;diff&amp;lt;/tt&amp;gt; on the previous release&#039;s library.&lt;br /&gt;
Assess the changes you find based on the following rules for the values of &amp;lt;tt&amp;gt;CURRENT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;REVISION&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;AGE&amp;lt;/tt&amp;gt;:&lt;br /&gt;
* No interfaces changed, only implementations (good): ==&amp;gt; Increment REVISION.&lt;br /&gt;
* Interfaces added, none removed (good): ==&amp;gt; Increment CURRENT, increment AGE, set REVISION to 0.&lt;br /&gt;
* Interfaces removed or changed (BAD, breaks upward compatibility): ==&amp;gt; Increment CURRENT, set AGE and REVISION to 0.&lt;br /&gt;
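The comparison step can be sketched with toy symbol lists standing in for two runs of the &amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; pipeline above; the symbol names here are fabricated for the example.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of classifying API changes by diffing exported-symbol lists.
# The two files stand in for the output of the nm|c++filt|grep|sed pipeline
# run on the previous release's library and on the new build.
set -e
dir=$(mktemp -d); cd "$dir"

printf 'Array::length()\nDDS::parse()\n' > libdap_funcs.old
printf 'Array::length()\nDDS::parse()\nDDS::dump()\n' > libdap_funcs.new

# In diff output, '>' lines are added interfaces, '<' lines removed ones.
added=$(diff libdap_funcs.old libdap_funcs.new | grep -c '^>') || true
removed=$(diff libdap_funcs.old libdap_funcs.new | grep -c '^<') || true
echo "added=$added removed=$removed"

if [ "$removed" -gt 0 ]; then
    echo "bump CURRENT, set REVISION and AGE to 0"
elif [ "$added" -gt 0 ]; then
    echo "bump CURRENT and AGE, set REVISION to 0"
else
    echo "bump REVISION only"
fi
```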
The current values of &amp;lt;tt&amp;gt;CURRENT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;REVISION&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;AGE&amp;lt;/tt&amp;gt; can be found in &amp;lt;tt&amp;gt;configure.ac&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
LIB_DIS_CURRENT=14&lt;br /&gt;
LIB_DIS_AGE=6&lt;br /&gt;
LIB_DIS_REVISION=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Once you have determined the new values for the &amp;lt;tt&amp;gt;CURRENT:REVISION:AGE&amp;lt;/tt&amp;gt; string, then:&lt;br /&gt;
;Edit &amp;lt;tt&amp;gt;configure.ac&amp;lt;/tt&amp;gt; and update the version values to the new ones.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Better_Singleton_classes_C%2B%2B&amp;diff=13516</id>
		<title>Better Singleton classes C++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Better_Singleton_classes_C%2B%2B&amp;diff=13516"/>
		<updated>2023-12-28T17:50:46Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Explanation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
NB: This was ripped from Google Bard and edited. jhrg 12/27/23&lt;br /&gt;
&lt;br /&gt;
The Meyers Singleton pattern is a popular way to implement the singleton design pattern in C++ using a static member variable declared within a function. It leverages the properties of static functions and objects to guarantee only one instance of the class is ever created.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This pattern, developed by Scott Meyers (although G&#039;bard didn&#039;t say so...), uses the basic properties of static methods and static variables within functions/methods. The singleton object&#039;s initialization should happen in its default constructor, which is shielded from use by anything other than the singleton class. That&#039;s the one thing that makes this pattern somewhat tricky, since constructors should not throw exceptions.&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class Singleton {&lt;br /&gt;
private:&lt;br /&gt;
  // Private constructor to prevent direct instantiation&lt;br /&gt;
  Singleton() {}&lt;br /&gt;
&lt;br /&gt;
public:&lt;br /&gt;
  // Delete the copy constructor and assignment operator to prevent copying&lt;br /&gt;
  Singleton(const Singleton&amp;amp;) = delete;&lt;br /&gt;
  Singleton&amp;amp; operator=(const Singleton&amp;amp;) = delete;&lt;br /&gt;
&lt;br /&gt;
  // Static member function that returns the singleton instance&lt;br /&gt;
  static Singleton&amp;amp; getInstance() {&lt;br /&gt;
    // Create a local static object the first time the function is called&lt;br /&gt;
    static Singleton instance;&lt;br /&gt;
    return instance;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  // Other member functions of the Singleton class...&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
int main() {&lt;br /&gt;
  // Access the singleton instance through the getInstance() function&lt;br /&gt;
  Singleton&amp;amp; instance1 = Singleton::getInstance();&lt;br /&gt;
  Singleton&amp;amp; instance2 = Singleton::getInstance();&lt;br /&gt;
&lt;br /&gt;
  // Verify that both instances are the same object&lt;br /&gt;
  if (&amp;amp;instance1 == &amp;amp;instance2) {&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;Both instances refer to the same Singleton object!&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  // Use the Singleton instance...&lt;br /&gt;
&lt;br /&gt;
  return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Explanation ==&lt;br /&gt;
&lt;br /&gt;
* The Singleton class has a private constructor to prevent direct instantiation.&lt;br /&gt;
* The getInstance() function is static and declared within the Singleton class.&lt;br /&gt;
* Inside getInstance(), a static local object, instance, is declared. It is created only the first time the function is called.&lt;br /&gt;
* Subsequent calls to getInstance() simply return the existing instance object.&lt;br /&gt;
* The copy constructor and assignment operator are deleted to prevent copying the singleton object.&lt;br /&gt;
&lt;br /&gt;
=== Benefits of the Meyers Singleton ===&lt;br /&gt;
&lt;br /&gt;
;Thread-safe: Initialization is guaranteed to happen only once, even in multithreaded environments.&lt;br /&gt;
;Lazy initialization: The singleton object is only created when it is first needed.&lt;br /&gt;
;Simple and concise: The implementation is relatively easy to understand and maintain.&lt;br /&gt;
&lt;br /&gt;
=== Drawbacks of the Meyers Singleton ===&lt;br /&gt;
&lt;br /&gt;
;Overuse: Singletons can lead to tight coupling and reduced testability. Use them sparingly and only when truly necessary.&lt;br /&gt;
;No explicit destruction: The singleton object will be destroyed only when the program exits. This can be problematic if resources need to be explicitly released earlier.&lt;br /&gt;
&lt;br /&gt;
=== Alternatives to the Meyers Singleton ===&lt;br /&gt;
&lt;br /&gt;
;Static local variables: This approach can be used within a single file to create a thread-safe singleton.&lt;br /&gt;
;Resource acquisition is initialization (RAII): This technique can be used to manage resources associated with the singleton and ensure proper cleanup.&lt;br /&gt;
;Dependency injection: This approach can improve testability and decouple the singleton from its dependent classes.&lt;br /&gt;
&lt;br /&gt;
== Thread safety ==&lt;br /&gt;
&lt;br /&gt;
The Meyers Singleton pattern is thread-safe in C++11 and later, even without explicit synchronization mechanisms like mutexes or locks.&lt;br /&gt;
&lt;br /&gt;
The thread safety of the Meyers Singleton pattern relies on language-level guarantees introduced in C++11.&lt;br /&gt;
For older C++ versions (pre-C++11), additional synchronization mechanisms would be needed for thread safety.&lt;br /&gt;
Even though the initialization is thread-safe, it&#039;s important to note that any methods of the singleton class itself still need to be thread-safe if accessed concurrently by multiple threads.&lt;br /&gt;
&lt;br /&gt;
=== How it achieves thread safety ===&lt;br /&gt;
&lt;br /&gt;
;Static Local Variable:&lt;br /&gt;
The getInstance() function uses a static local variable to hold the singleton instance.&lt;br /&gt;
Static local variables are guaranteed to be initialized only once, even in multithreaded environments.&lt;br /&gt;
This initialization happens in a thread-safe manner due to language-level guarantees in C++11 and beyond.&lt;br /&gt;
&lt;br /&gt;
;Magic Statics:&lt;br /&gt;
This feature of C++11 and later ensures that static variables with block scope (like those within functions) are initialized in a thread-safe way.&lt;br /&gt;
The compiler and runtime collaborate to handle potential race conditions during initialization, ensuring that only one thread initializes the variable at a time.&lt;br /&gt;
&lt;br /&gt;
;No Explicit Locking Needed:&lt;br /&gt;
Because of these language-level guarantees, the Meyers Singleton pattern doesn&#039;t require any explicit locking mechanisms (like mutexes or locks) to ensure thread safety.&lt;br /&gt;
This makes it a relatively simple and efficient way to implement a thread-safe singleton.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Better_Singleton_classes_C%2B%2B&amp;diff=13515</id>
		<title>Better Singleton classes C++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Better_Singleton_classes_C%2B%2B&amp;diff=13515"/>
		<updated>2023-12-28T00:14:21Z</updated>

		<summary type="html">&lt;p&gt;Jimg: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
NB: This was ripped from Google Bard and edited. jhrg 12/27/23&lt;br /&gt;
&lt;br /&gt;
The Meyers Singleton pattern is a popular way to implement the singleton design pattern in C++ using a static member variable declared within a function. It leverages the properties of static functions and objects to guarantee only one instance of the class is ever created.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This pattern, developed by Scott Meyers (although G&#039;bard didn&#039;t say so...), uses the basic properties of static methods and static variables within functions/methods. The singleton object&#039;s initialization should happen in its default constructor, which is shielded from use by anything other than the singleton class. That&#039;s the one thing that makes this pattern somewhat tricky, since constructors should not throw exceptions.&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class Singleton {&lt;br /&gt;
private:&lt;br /&gt;
  // Private constructor to prevent direct instantiation&lt;br /&gt;
  Singleton() {}&lt;br /&gt;
&lt;br /&gt;
public:&lt;br /&gt;
  // Delete the copy constructor and assignment operator to prevent copying&lt;br /&gt;
  Singleton(const Singleton&amp;amp;) = delete;&lt;br /&gt;
  Singleton&amp;amp; operator=(const Singleton&amp;amp;) = delete;&lt;br /&gt;
&lt;br /&gt;
  // Static member function that returns the singleton instance&lt;br /&gt;
  static Singleton&amp;amp; getInstance() {&lt;br /&gt;
    // Create a local static object the first time the function is called&lt;br /&gt;
    static Singleton instance;&lt;br /&gt;
    return instance;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  // Other member functions of the Singleton class...&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
int main() {&lt;br /&gt;
  // Access the singleton instance through the getInstance() function&lt;br /&gt;
  Singleton&amp;amp; instance1 = Singleton::getInstance();&lt;br /&gt;
  Singleton&amp;amp; instance2 = Singleton::getInstance();&lt;br /&gt;
&lt;br /&gt;
  // Verify that both instances are the same object&lt;br /&gt;
  if (&amp;amp;instance1 == &amp;amp;instance2) {&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;Both instances refer to the same Singleton object!&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  // Use the Singleton instance...&lt;br /&gt;
&lt;br /&gt;
  return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Explanation ==&lt;br /&gt;
&lt;br /&gt;
* The Singleton class has a private constructor to prevent direct instantiation.&lt;br /&gt;
* The getInstance() function is static and declared within the Singleton class.&lt;br /&gt;
* Inside getInstance(), a static local object, instance, is declared. It is created only the first time the function is called.&lt;br /&gt;
* Subsequent calls to getInstance() simply return the existing instance object.&lt;br /&gt;
* The copy constructor and assignment operator are deleted to prevent copying the singleton object.&lt;br /&gt;
&lt;br /&gt;
=== Benefits of the Meyers Singleton ===&lt;br /&gt;
&lt;br /&gt;
;Thread-safe: Initialization is guaranteed to happen only once, even in multithreaded environments.&lt;br /&gt;
;Lazy initialization: The singleton object is only created when it is first needed.&lt;br /&gt;
;Simple and concise: The implementation is relatively easy to understand and maintain.&lt;br /&gt;
&lt;br /&gt;
=== Drawbacks of the Meyers Singleton ===&lt;br /&gt;
&lt;br /&gt;
;Overuse: Singletons can lead to tight coupling and reduced testability. Use them sparingly and only when truly necessary.&lt;br /&gt;
;No explicit destruction: The singleton object will be destroyed only when the program exits. This can be problematic if resources need to be explicitly released earlier.&lt;br /&gt;
&lt;br /&gt;
=== Alternatives to the Meyers Singleton ===&lt;br /&gt;
&lt;br /&gt;
;Static local variables: This approach can be used within a single file to create a thread-safe singleton.&lt;br /&gt;
;Resource acquisition is initialization (RAII): This technique can be used to manage resources associated with the singleton and ensure proper cleanup.&lt;br /&gt;
;Dependency injection: This approach can improve testability and decouple the singleton from its dependent classes.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Better_Singleton_classes_C%2B%2B&amp;diff=13514</id>
		<title>Better Singleton classes C++</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Better_Singleton_classes_C%2B%2B&amp;diff=13514"/>
		<updated>2023-12-27T22:05:12Z</updated>

		<summary type="html">&lt;p&gt;Jimg: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;NEVER USE THIS&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We use lots of Singleton classes in the BES Framework. One issue with that pattern is that memory is usually not returned to the heap before the process exits, leaving tools like valgrind to report the memory as leaked. This is misleading and can be ignored, except that it&#039;s a great way to hide &#039;&#039;real&#039;&#039; leaks behind the noise in a sea of false positives. Here&#039;s a way around that using C++&#039;s unique_ptr.&lt;br /&gt;
&lt;br /&gt;
== The basic plan ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;NEVER USE THIS&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I&#039;ll use a real example of this from the BES: the BES &#039;Keys&#039; key-value-pair configuration database. The idea is that we store a static pointer to the single copy of the class. We access that pointer, whenever the class is to be used, with a static method. That method returns a pointer to the instance if it has been allocated, or allocates the pointer and constructs the object if not. The pattern builds the single instance in a way that is thread-safe (using the C++11 concurrency features).&lt;br /&gt;
&lt;br /&gt;
== What goes in the header ==&lt;br /&gt;
&lt;br /&gt;
In the header for the singleton class, include a static member that is a &#039;&#039;unique pointer&#039;&#039; to an instance of the object. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class TheBESKeys: public BESObj {&lt;br /&gt;
&lt;br /&gt;
    ...&lt;br /&gt;
&lt;br /&gt;
    TheBESKeys() = default;&lt;br /&gt;
&lt;br /&gt;
    explicit TheBESKeys(const std::string &amp;amp;keys_file_name);&lt;br /&gt;
&lt;br /&gt;
    static std::unique_ptr&amp;lt;TheBESKeys&amp;gt; d_instance;&lt;br /&gt;
    static std::once_flag d_euc_init_once;&lt;br /&gt;
&lt;br /&gt;
public:&lt;br /&gt;
&lt;br /&gt;
    /// Access to the singleton.&lt;br /&gt;
    static TheBESKeys *TheKeys();&lt;br /&gt;
&lt;br /&gt;
    ~TheBESKeys() override = default;&lt;br /&gt;
&lt;br /&gt;
    ...&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To adapt most existing code to this pattern, move an existing static pointer so that it is a private static &#039;&#039;&#039;std::unique_ptr&amp;lt;&#039;&#039;&#039;class&#039;&#039;&#039;&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== What goes in the implementation ==&lt;br /&gt;
&lt;br /&gt;
Define the singleton instance at global/file-level scope:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;std::unique_ptr&amp;lt;TheBESKeys&amp;gt; TheBESKeys::d_instance = nullptr;&lt;br /&gt;
std::once_flag TheBESKeys::d_euc_init_once;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accessor for the static pointer to the instance is likely the only code that needs to be changed in the implementation. Note that the &#039;&#039;std::once_flag&#039;&#039; is a static class member like the unique_ptr&amp;lt;&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
TheBESKeys *TheBESKeys::TheKeys()&lt;br /&gt;
{&lt;br /&gt;
    if (d_instance == nullptr) {&lt;br /&gt;
        std::call_once(d_euc_init_once, []() {&lt;br /&gt;
            d_instance.reset(new TheBESKeys(get_the_config_filename()));&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    return d_instance.get();&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the constructor for the class is called inside the &#039;&#039;reset()&#039;&#039; method of the &#039;&#039;unique_ptr&#039;&#039; object: &#039;&#039;d_instance.reset(new TheBESKeys(get_the_config_filename()));&#039;&#039; and that this call happens inside &#039;&#039;std::call_once()&#039;&#039;. The use of &#039;&#039;std::call_once()&#039;&#039; ensures that if two threads call the accessor simultaneously, only one instance will be made. In the code above, a lambda is used to pass the &#039;runnable&#039; to call_once(). If the pointer to the instance (&#039;&#039;d_instance&#039;&#039;) is not null, the accessor uses &#039;&#039;unique_ptr::get()&#039;&#039; to return the &#039;raw&#039; pointer to the instance, which greatly simplifies using this pattern with our existing code.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Git_Hacks_and_Tricks&amp;diff=13513</id>
		<title>Git Hacks and Tricks</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Git_Hacks_and_Tricks&amp;diff=13513"/>
		<updated>2023-10-20T22:30:47Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Cheat sheet items */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Git resources ==&lt;br /&gt;
* The [http://git-scm.com/book/en/v2 Pro GIT] book is online at: git-scm.com&lt;br /&gt;
* Good cheat sheet: http://ndpsoftware.com/git-cheatsheet.html#loc=workspace;&lt;br /&gt;
* Info on branching from git.com: http://git-scm.com/book/en/Git-Branching-Remote-Branches&lt;br /&gt;
* Migration to git: http://git-scm.com/book/en/Git-and-Other-Systems-Migrating-to-Git&lt;br /&gt;
&lt;br /&gt;
== Setup a username and access token for GitHub ==&lt;br /&gt;
&lt;br /&gt;
:git config --global github.user &amp;lt;name&amp;gt;&lt;br /&gt;
:git config --global github.token &amp;lt;token&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:where the token is made using the instructions at https://help.github.com/articles/creating-an-access-token-for-command-line-use&lt;br /&gt;
&lt;br /&gt;
If you want to configure a token for use with the OSX keychain, get the credential-osxkeychain tool with brew if you need to. Test if you have it by running &#039;&#039;&#039;git credential-osxkeychain&#039;&#039;&#039; and look for the credential-osxkeychain help message. To &#039;&#039;use&#039;&#039; the git extension, you need to enter &#039;&#039;&#039;git credential-osxkeychain &amp;lt;command&amp;gt;&#039;&#039;&#039; and then, on the next line, enter &#039;&#039;&#039;host=github.com&#039;&#039;&#039; and maybe &#039;&#039;&#039;protocol=https&#039;&#039;&#039; and other key/value pairs and then a blank line. See the examples below.&lt;br /&gt;
&lt;br /&gt;
To use the osx keychain, first check if you have a password/token already saved:&lt;br /&gt;
&lt;br /&gt;
:git credential-osxkeychain get&lt;br /&gt;
::host=github.com&lt;br /&gt;
::protocol=https&lt;br /&gt;
::&amp;lt;cr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Erase the password/token&lt;br /&gt;
&lt;br /&gt;
:git credential-osxkeychain erase&lt;br /&gt;
::host=github.com&lt;br /&gt;
::protocol=https&lt;br /&gt;
::&amp;lt;cr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then set the new token&lt;br /&gt;
&lt;br /&gt;
:git credential-osxkeychain store&lt;br /&gt;
::host=github.com&lt;br /&gt;
::protocol=https&lt;br /&gt;
::username=&amp;lt;your login&amp;gt;&lt;br /&gt;
::password=&amp;lt;your token&amp;gt;&lt;br /&gt;
::&amp;lt;cr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then use the &#039;&#039;&#039;get&#039;&#039;&#039; command to verify.&lt;br /&gt;
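The keychain-specific commands above all speak git&#039;s generic credential-helper protocol: key=value pairs on stdin, terminated by a blank line. A minimal sketch, using the portable &#039;&#039;store&#039;&#039; helper as a stand-in for &#039;&#039;osxkeychain&#039;&#039; so it can run anywhere (the temp file path and the sample username/token values are made up for the demo):&lt;br /&gt;

```shell
# Sketch of git's credential-helper key/value protocol. 'git credential-store'
# is used because it runs anywhere; 'git credential-osxkeychain' reads the
# same input. The temp file and sample values are made up for the demo.
tmp=$(mktemp -d)
cred_file="$tmp/creds"

# Store a credential: key=value pairs, then a blank line.
printf 'protocol=https\nhost=github.com\nusername=jimg\npassword=token123\n\n' |
    git credential-store --file "$cred_file" store

# Get it back using the same protocol; it prints the username and password.
printf 'protocol=https\nhost=github.com\n\n' |
    git credential-store --file "$cred_file" get
```

Because every helper speaks the same protocol, the same printf pipelines work with &#039;&#039;git credential-osxkeychain&#039;&#039; on OSX.&lt;br /&gt;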
&lt;br /&gt;
== Git Secrets ==&lt;br /&gt;
&lt;br /&gt;
Use this tool, which is run automatically before each commit, to keep from adding AWS and other secret keys to code that is destined for a public repository. Doing that will &#039;leak&#039; the key and Amazon _will_ notice. The remedy will involve every account changing its password and every key pair being &#039;rotated&#039; (i.e., every key pair has to be replaced with a new one).&lt;br /&gt;
&lt;br /&gt;
https://github.com/awslabs/git-secrets&lt;br /&gt;
&lt;br /&gt;
Scroll down to the bottom for installation instructions. On OSX, you can use brew and do not have to clone the repo. Here&#039;s what I did:&lt;br /&gt;
&lt;br /&gt;
:brew install git-secrets # install _git secrets_&lt;br /&gt;
:git secrets --register-aws --global # add the AWS secret patterns to your global git config&lt;br /&gt;
:git secrets --install ~/.git-templates/git-secrets # install the hooks into a template directory&lt;br /&gt;
:git config --global init.templateDir ~/.git-templates/git-secrets # so newly cloned/initialized repos pick up those hooks&lt;br /&gt;
&lt;br /&gt;
You can read a bit more of the docs in the _git secrets_ repo and configure a more fine-grained approach.&lt;br /&gt;
&lt;br /&gt;
== Subtrees: how to incorporate code from other repositories ==&lt;br /&gt;
Git subtrees are an alternative to submodules and are easier for the users of the parent repository (in our case, typically the BES repo). There is a fair amount of information about subtrees, but the main thing to know is that once the code for the child repo is part of the parent, in most cases there&#039;s nothing else to do. Extra steps are needed only when changes are made to the code that came from the child repo, and then only if you want to keep the parent&#039;s copy and the child repo in sync. For normal branch-PR-merge operations, there is no need to think about the subtree management commands.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s a [https://winstonkotzan.com/blog/2016/09/26/git-submodule-vs-subtree.html discussion about the differences between submodules and subtrees].&lt;br /&gt;
&lt;br /&gt;
=== How to incorporate code from another repo ===&lt;br /&gt;
To incorporate code from another repo (the &#039;&#039;child&#039;&#039;) into an existing repo (the &#039;&#039;parent&#039;&#039;), use these steps:&lt;br /&gt;
&lt;br /&gt;
# name the other project &#039;&#039;child&#039;&#039;, and fetch: &#039;&#039;&#039;git remote add -f &#039;&#039;child&#039;&#039; &amp;lt;nowiki&amp;gt;https://github.com/&amp;lt;/nowiki&amp;gt;...&#039;&#039;&#039;&lt;br /&gt;
:: The &#039;&#039;-f&#039;&#039; option runs &#039;&#039;git fetch&#039;&#039; automatically after the remote repo is added. See [https://git-scm.com/docs/git-remote git remote].&lt;br /&gt;
# prepare for the later step to record the result as a merge: &#039;&#039;&#039;git merge -s ours --no-commit --allow-unrelated-histories &#039;&#039;child&#039;&#039;/master&#039;&#039;&#039;&lt;br /&gt;
:: The &#039;&#039;-s&#039;&#039; option to &#039;&#039;git merge&#039;&#039; selects the &#039;&#039;ours&#039;&#039; strategy for the merge; &#039;&#039;--no-commit&#039;&#039; does not commit the merge automatically. See [https://git-scm.com/docs/git-merge#_merge_strategies git merge]. If you are using git 2.9+, add the option &#039;&#039;--allow-unrelated-histories&#039;&#039;, but older versions of git don&#039;t support that (as of Jan. 2022, OSX was using git 2.32).&lt;br /&gt;
# read &amp;quot;master&amp;quot; branch of &#039;&#039;child&#039;&#039; to the subdirectory &#039;&#039;dir-child&#039;&#039;: &#039;&#039;&#039;git read-tree --prefix=&#039;&#039;dir-child&#039;&#039;/ -u &#039;&#039;child&#039;&#039;/master&#039;&#039;&#039;&lt;br /&gt;
:: The &#039;&#039;-u&#039;&#039; option causes &#039;&#039;git read-tree&#039;&#039; to update the files in the working directory. See [https://git-scm.com/docs/git-read-tree git read-tree].&lt;br /&gt;
# record the merge result: &#039;&#039;&#039;git commit -m &amp;quot;Merge &#039;&#039;child&#039;&#039; project as our subdirectory&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
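The four steps above can be run end to end. Here is a sketch using two throwaway local repos (&#039;&#039;parent&#039;&#039;, &#039;&#039;child&#039;&#039;, and all file names are made up for the demo; in practice the remote URL would be the child project on GitHub):&lt;br /&gt;

```shell
# Sketch of the four subtree steps with two throwaway local repos.
work=$(mktemp -d)
cd "$work"

# Stand-in 'child' repo (in practice, the other project on GitHub).
git -c init.defaultBranch=master init -q child
( cd child
  echo 'child code' > child.txt
  git add child.txt
  git -c user.email=demo@example.com -c user.name=demo commit -q -m 'child: initial' )

# Stand-in 'parent' repo (in practice, e.g., the BES).
git -c init.defaultBranch=master init -q parent
cd parent
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'parent: initial'

git remote add -f child ../child                                       # step 1: add and fetch
git merge -s ours --no-commit --allow-unrelated-histories child/master # step 2: stage the merge
git read-tree --prefix=dir-child/ -u child/master                      # step 3: read into a subdir
git commit -q -m 'Merge child project as our subdirectory'             # step 4: record the merge
```

Afterwards the child&#039;s files live under &#039;&#039;dir-child/&#039;&#039; in the parent, and HEAD is a merge commit joining the two histories.&lt;br /&gt;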
&lt;br /&gt;
=== About Git subtree merges ===&lt;br /&gt;
To pull in subsequent updates from &#039;&#039;child&#039;&#039; using a &amp;quot;subtree&amp;quot; merge: &#039;&#039;&#039;git pull -s subtree &#039;&#039;child&#039;&#039; master&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To learn more about subtree merges, see [https://docs.github.com/en/get-started/using-git/about-git-subtree-merges About Git subtree merges].&lt;br /&gt;
&lt;br /&gt;
=== How to remove a submodule ===&lt;br /&gt;
If a child repo was included in a parent repo using git submodules, here&#039;s how to remove it so that the child repo can be included using subtrees as documented above.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;git rm -r &#039;&#039;path/to/submodule&#039;&#039; &#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;rm -rf .git/modules/&#039;&#039;path/to/submodule&#039;&#039; &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If the second command is skipped, the remnant .git/modules/&#039;&#039;path/to/submodule&#039;&#039; folder will remain even though the submodule is gone, and it will prevent the same submodule from being added back or replaced in the future.&lt;br /&gt;
&lt;br /&gt;
Also, using just these two commands will leave an entry for the submodule in &#039;&#039;.git/config&#039;&#039;. To remove that, &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;git config -f .git/config --remove-section submodule.&#039;&#039;path/to/submodule&#039;&#039; &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Here is an older way that illustrates where all the information is held:&lt;br /&gt;
;How do I delete a submodule?&lt;br /&gt;
NB: Ignore the submodule help info about &#039;&#039;deinit&#039;&#039; since that seems to leave too much undone.&lt;br /&gt;
&lt;br /&gt;
To remove a submodule you need to:&lt;br /&gt;
* Delete the relevant line from the &#039;&#039;.gitmodules&#039;&#039; file.&lt;br /&gt;
* Delete the relevant section from &#039;&#039;.git/config&#039;&#039;.&lt;br /&gt;
* Delete the submodule info in &#039;&#039;.git/modules&#039;&#039;: &#039;&#039;&#039;rm -rf .git/modules/&amp;lt;path_to_submodule&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
* Run &#039;&#039;&#039;git rm --cached path_to_submodule&#039;&#039;&#039; (no trailing slash).&lt;br /&gt;
* Commit the parent repo (&#039;&#039;&#039;git commit -m &amp;quot;Removed submodule ...&amp;quot;&#039;&#039;&#039;)&lt;br /&gt;
* Delete the now untracked submodule files.&lt;br /&gt;
&lt;br /&gt;
== Someone forked and issued a PR on our repo, but ... ==&lt;br /&gt;
The Travis build failed because that person is not allowed access to our AWS. &lt;br /&gt;
&lt;br /&gt;
One way is to copy their branch to our remote (aka the &#039;origin&#039; remote) and issue a PR on it.&lt;br /&gt;
&lt;br /&gt;
# Set up their remote as one you can reference.&lt;br /&gt;
# Fetch the branches of that remote (you can fetch just the one branch)&lt;br /&gt;
# Checkout that remote/branch combo.&lt;br /&gt;
# Checkout with branching to move it to your default remote (which is likely called &#039;origin&#039;).&lt;br /&gt;
# Push that branch to github&lt;br /&gt;
&lt;br /&gt;
Here are the commands (with a real example):&lt;br /&gt;
&lt;br /&gt;
# git remote add Bo98 https://github.com/Bo98/libdap4.git&lt;br /&gt;
# git fetch Bo98&lt;br /&gt;
# git checkout Bo98/libtirpc-fix       // Bo98 is the remote, libtirpc-fix is the branch &lt;br /&gt;
# git checkout -b libtirpc-fix         // That makes the code just checked out a branch for the &#039;origin&#039; remote&lt;br /&gt;
# git push -u origin libtirpc-fix      // Now that code is a branch in our repo and Travis will work.&lt;br /&gt;
&lt;br /&gt;
== Cheat sheet items ==&lt;br /&gt;
These are simple things that are not really obvious from the git book or other sources.&lt;br /&gt;
&lt;br /&gt;
; How do I see the most recent commit date for all my branches?&lt;br /&gt;
:: git branch --sort=creatordate --sort=committername --format &amp;quot;%(align:20) %(creatordate:relative) %(end) %(align:25) %(committername) %(end) %(refname:lstrip=-1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
;About &#039;&#039;git rebase&#039;&#039;; I have a branch with lots of commits and I want to squash them. How? Oh, I pushed those commits to github...&lt;br /&gt;
:: The trick is to use &#039;&#039;git log&#039;&#039; to find the commit hash of the starting point for &#039;&#039;rebase&#039;&#039; and &#039;&#039;git rebase --interactive&#039;&#039; and then &#039;&#039;git push origin +&amp;lt;branch&amp;gt;&#039;&#039;. &lt;br /&gt;
:: &#039;&#039;&#039;&#039;&#039;Here&#039;s a HowTo on [[Squashing commits]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
;I forked someone&#039;s repo, now I want to sync to their &#039;master&#039; branch. How?&lt;br /&gt;
: Follow these steps&lt;br /&gt;
:: Set up an &#039;upstream&#039; remote: https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork&lt;br /&gt;
:: Then do these operations to get and merge the changes to &#039;master&#039;: https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork&lt;br /&gt;
&lt;br /&gt;
;I keep getting a &#039;Permission Denied (publickey)&#039; error when I push!&lt;br /&gt;
: Follow these steps to make and use a public/private key pair for github&lt;br /&gt;
:: cd ~/.ssh&lt;br /&gt;
:: Within .ssh there should be these two files: id_rsa and id_rsa.pub. If those two files are not there...&lt;br /&gt;
:: create the SSH keys with: ssh-keygen -t rsa -C &amp;quot;your_email@example.com&amp;quot;&lt;br /&gt;
:: Open id_rsa.pub in a text editor and copy its contents, exactly as they appear,&lt;br /&gt;
:: and paste them into GitHub and/or BitBucket under the Account Menu (upper right corner) Settings &amp;gt; SSH Keys.&lt;br /&gt;
&lt;br /&gt;
;I just made a perfectly good commit to the wrong branch. How do I undo the last commit in my master branch and then take those same changes and get them into my upgrade branch?&lt;br /&gt;
:If you haven&#039;t yet pushed your changes, you can also do a soft reset:&lt;br /&gt;
:&#039;&#039;git reset --soft HEAD^&#039;&#039;&lt;br /&gt;
:This will revert the commit, but put the committed changes back into your index. Assuming the branches are relatively up-to-date with regard to each other, git will let you do a checkout into the other branch, whereupon you can simply commit:&lt;br /&gt;
:&#039;&#039;git checkout [-b] branch&#039;&#039;&lt;br /&gt;
:&#039;&#039;git commit&#039;&#039;&lt;br /&gt;
:The disadvantage is that you need to re-enter your commit message.&lt;br /&gt;
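:A throwaway demo of that recipe (the repo, file name, and the &#039;&#039;upgrade&#039;&#039; branch name are all made up):&lt;br /&gt;

```shell
# Throwaway demo: move the last commit from master to a new 'upgrade' branch.
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=master init -q .
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'base'

echo 'feature work' > feature.txt
git add feature.txt
git commit -q -m 'perfectly good commit, wrong branch'

git reset --soft HEAD^       # undo the commit; the changes stay staged
git checkout -q -b upgrade   # the staged changes come along to the new branch
git commit -q -m 'perfectly good commit, right branch'
```

:Afterwards master no longer contains the commit and the new branch does.&lt;br /&gt;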
&lt;br /&gt;
;How to see a list of &#039;conflicted&#039; files after a merge&lt;br /&gt;
:git diff --name-only --diff-filter=U&lt;br /&gt;
;How to see the difference between two commits&lt;br /&gt;
:git diff &amp;lt;commit-hash-1&amp;gt; &amp;lt;commit-hash-2&amp;gt;, e.g., git diff 0da94be 59ff30c&lt;br /&gt;
:...for a specific file: git diff &amp;lt;commit-hash-1&amp;gt; &amp;lt;commit-hash-2&amp;gt; -- &amp;lt;file&amp;gt;&lt;br /&gt;
:...and don&#039;t forget the shorthand for the hashes: git diff HEAD^^..HEAD -- main.c where &#039;&#039;HEAD^&#039;&#039; is the parent of HEAD and &#039;&#039;HEAD~n&#039;&#039; is the nth ancestor.&lt;br /&gt;
;How to see the different remote branches:&lt;br /&gt;
:git remote show origin&lt;br /&gt;
;Fetch all the branches on &#039;&#039;origin&#039;&#039;&lt;br /&gt;
:git fetch origin&lt;br /&gt;
;How do I list the remote branches (that have been fetched)?&lt;br /&gt;
:git branch -a&lt;br /&gt;
;How do I switch to a branch from a remote origin?&lt;br /&gt;
:git checkout -b test origin/test&lt;br /&gt;
:or, with newer versions of git&amp;lt;nowiki&amp;gt;:&amp;lt;/nowiki&amp;gt; git checkout test&lt;br /&gt;
;How do I see what would be pushed to a remote repo?&lt;br /&gt;
:git push --dry-run&lt;br /&gt;
:git diff origin/master		# Assumes you have run git fetch, I think &lt;br /&gt;
:git diff --stat origin/master	# --stat just shows the file names stats, not the diffs&lt;br /&gt;
;To get a specific file from a specific branch&lt;br /&gt;
:git show dap4:./gdal_dds.cc &amp;gt; gdal_dds.dap4.cc &#039;&#039;You can use checkout instead of show and that will overwrite the file.&#039;&#039;&lt;br /&gt;
:the general syntax is &#039;&#039;revision:path&#039;&#039; (that&#039;s the &#039;dap4:./gdal_dds.cc&#039; part) and the revision can use the ^ and ~n syntax to specify various commits on the given branch. A SHA can also be used.&lt;br /&gt;
;How to change the &#039;origin&#039; for a remote repo&lt;br /&gt;
:git remote set-url origin &amp;lt;nowiki&amp;gt;git://new.url.here&amp;lt;/nowiki&amp;gt; (https URLs work too...)&lt;br /&gt;
;How to push a local branch to a remote repo&lt;br /&gt;
:git push -u origin feature_branch_name&lt;br /&gt;
;How to make and track a new (local) branch&lt;br /&gt;
:git checkout -b &amp;lt;branch name&amp;gt;&lt;br /&gt;
;How to cause Travis CI to skip a build&lt;br /&gt;
:Add &#039;&#039;[ci skip]&#039;&#039; to the commit message. See the topic below on amending a commit message, which can be handy.&lt;br /&gt;
;How to track a remote branch&lt;br /&gt;
:git checkout --track origin/serverfix &#039;&#039; or&#039;&#039; git checkout -b sf origin/serverfix&lt;br /&gt;
;How do I make an &#039;&#039;existing&#039;&#039; local branch track an existing remote branch?&lt;br /&gt;
:git branch --set-upstream-to=upstream/foo foo (the bare &#039;&#039;--set-upstream&#039;&#039; option is deprecated), where &#039;&#039;upstream&#039;&#039; is probably actually &#039;&#039;origin&#039;&#039;.&lt;br /&gt;
;Committed my code, then made a bunch of changes that just seem like a bad idea in retrospect. How do I go back to my previous commit for everything in a directory? &#039;&#039;I don&#039;t care if I lose all my changes since the last commit.&#039;&#039;&lt;br /&gt;
:git reset HEAD --hard (Note that this is one of the very few git commands where you really cannot undo what you have done).&lt;br /&gt;
;How to undo a commit (that has not been pushed)&lt;br /&gt;
:git reset --soft HEAD~1. This leaves the files in their changed state in your working dir so that you can edit them and recommit. You can also change to a different branch and commit there, then change back. &lt;br /&gt;
;In the above case, to reuse the old commit message&lt;br /&gt;
:git commit -c ORIG_HEAD &amp;lt;-- This works because &#039;reset&#039; copied the old head to .git/ORIG_HEAD. If you don&#039;t need to edit the old message, use -C instead of -c.&lt;br /&gt;
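:A throwaway demo of the two items above, undoing an unpushed commit and recommitting with the original message (repo and file names are made up):&lt;br /&gt;

```shell
# Throwaway demo: undo an unpushed commit, fix it, reuse the old message.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'base'   # so HEAD~1 exists

echo one > notes.txt
git add notes.txt
git commit -q -m 'Add notes'

git reset --soft HEAD~1      # undo; 'reset' saves the old head in .git/ORIG_HEAD
echo two >> notes.txt        # the fix that was missing
git add notes.txt
git commit -q -C ORIG_HEAD   # reuse the old message verbatim (-c would let you edit it)
```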
;How to delete a remote branch&lt;br /&gt;
:git push origin --delete serverfix &#039;&#039;The data are kept for a little bit - before git runs garbage collection - so it may be possible to undo this.&#039;&#039;&lt;br /&gt;
;How to delete a local branch&lt;br /&gt;
:git branch -d the_local_branch &#039;&#039;and delete the remote branch you were tracking with the same name&#039;&#039; git push origin :the_remote_branch&lt;br /&gt;
;How do I set up a git cloned repo on a remote machine so I don&#039;t have to type my password all the time?&lt;br /&gt;
:This page shows how to make a PKI key-pair with a secure password, configure the machine to remember the password using ssh-agent and upload the public key to your github account so it&#039;ll use the key for authentication. https://help.github.com/articles/generating-ssh-keys/&lt;br /&gt;
;How can I know which branches are already merged into the master branch?&lt;br /&gt;
:&#039;&#039;git branch --merged master&#039;&#039; lists branches merged into master&lt;br /&gt;
:&#039;&#039;git branch --merged&#039;&#039; lists branches merged into HEAD (i.e. tip of current branch)&lt;br /&gt;
:&#039;&#039;git branch --no-merged&#039;&#039; lists branches that have not been merged&lt;br /&gt;
:By default this applies to only the local branches. The -a flag will show both local and remote branches, and the -r flag shows only the remote branches.&lt;br /&gt;
;Switching remote URLs from HTTPS to SSH&lt;br /&gt;
:&#039;&#039;git remote -v&#039;&#039;&lt;br /&gt;
: # origin  &amp;lt;nowiki&amp;gt;https://github.com/USERNAME/REPOSITORY.git&amp;lt;/nowiki&amp;gt; (fetch)&lt;br /&gt;
: # origin  &amp;lt;nowiki&amp;gt;https://github.com/USERNAME/REPOSITORY.git&amp;lt;/nowiki&amp;gt; (push)&lt;br /&gt;
:&#039;&#039;git remote set-url origin git@github.com:USERNAME/OTHERREPOSITORY.git&#039;&#039;&lt;br /&gt;
:&#039;&#039;git remote -v&lt;br /&gt;
: # Verify new remote URL&lt;br /&gt;
: # origin  git@github.com:USERNAME/OTHERREPOSITORY.git (fetch)&lt;br /&gt;
: # origin  git@github.com:USERNAME/OTHERREPOSITORY.git (push)&lt;br /&gt;
;Amending the commit message&lt;br /&gt;
:&#039;&#039;git commit --amend&#039;&#039;&lt;br /&gt;
:&#039;&#039;git commit --amend -m &amp;quot;New commit message&amp;quot;&#039;&#039;&lt;br /&gt;
; How do I revert a commit after it has been pushed?&lt;br /&gt;
:Given:&lt;br /&gt;
::&#039;&#039;e512d38 Adding taunts to management.&#039;&#039;&lt;br /&gt;
::&#039;&#039;bd89039 Adding kill switch in case I&#039;m fired.&#039;&#039;&lt;br /&gt;
::&#039;&#039;da8af4d Adding performance optimizations to master loop.&#039;&#039;&lt;br /&gt;
::&#039;&#039;db0c012 Fixing bug in the doohickey&#039;&#039;&lt;br /&gt;
:If you just want to revert the commits without modifying the history, you can do the following:&lt;br /&gt;
:&lt;br /&gt;
::&#039;&#039;git revert e512d38&#039;&#039;&lt;br /&gt;
::&#039;&#039;git revert bd89039&#039;&#039;&lt;br /&gt;
:Alternatively, if you don’t want others to see that you added the kill switch and then removed it, you can roll back the repository using the following (however, this will cause problems for others who have already pulled your changes from the remote):&lt;br /&gt;
:&lt;br /&gt;
::&#039;&#039;git reset --hard da8af4d&#039;&#039;&lt;br /&gt;
::&#039;&#039;git push origin -f localBranch:remoteBranch&#039;&#039;&lt;br /&gt;
;The gitlog-to-changelog script comes in handy to generate a GNU-style ChangeLog.&lt;br /&gt;
:As shown by gitlog-to-changelog --help, you may select the commits used to generate a ChangeLog file using either the option --since:&lt;br /&gt;
:&lt;br /&gt;
::&#039;&#039;gitlog-to-changelog --since=2008-01-01 &amp;gt; ChangeLog&#039;&#039;&lt;br /&gt;
:or by passing additional arguments after --, which will be passed to git-log (called internally by gitlog-to-changelog):&lt;br /&gt;
:&lt;br /&gt;
::&#039;&#039;gitlog-to-changelog -- -n 5 foo &amp;gt; last-5-commits-to-branch-foo&#039;&#039;&lt;br /&gt;
;Tagging stuff&lt;br /&gt;
:&#039;&#039;git tag&#039;&#039; will list the existing tags&lt;br /&gt;
:&#039;&#039;git tag -a &amp;lt;tag name&amp;gt;&#039;&#039; adds a new tag&lt;br /&gt;
:&#039;&#039;git push origin &amp;lt;tag name&amp;gt;&#039;&#039; pushes that tag up to the server &#039;&#039;origin&#039;&#039;&lt;br /&gt;
:&#039;&#039;git push origin --tags&#039;&#039; pushes all new tags up to &#039;&#039;origin&#039;&#039;&lt;br /&gt;
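:A quick throwaway demo of the tagging commands (the &#039;&#039;v1.0&#039;&#039; tag name is made up; with no remote in the demo, the push commands are left out):&lt;br /&gt;

```shell
# Throwaway tagging demo; 'v1.0' is a made-up tag name.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'release candidate'

git tag -a v1.0 -m 'First release'   # -a makes an annotated tag; -m avoids the editor
git tag                              # lists existing tags; prints 'v1.0'
```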
;How to resolve conflicts in a submodule when you&#039;ve just merged master down to a branch&lt;br /&gt;
:&lt;br /&gt;
:Run git status - make a note of the submodule folder with conflicts&lt;br /&gt;
:Reset the submodule to the version that was last committed in the current branch:&lt;br /&gt;
:&lt;br /&gt;
:git reset HEAD path/to/submodule&lt;br /&gt;
: At this point, you have a conflict-free version of your submodule which you can now update to the latest version in the submodule&#039;s repository:&lt;br /&gt;
:&lt;br /&gt;
: cd path/to/submodule&lt;br /&gt;
:&lt;br /&gt;
: git pull origin SUBMODULE-BRANCH-NAME&lt;br /&gt;
: And now you can commit that and get back to work.&lt;br /&gt;
; How to move a submodule into the main repo &lt;br /&gt;
:If all you want is to put your submodule code into the main repository, you just need to remove the submodule and re-add the files to the main repo; follow the prescription below. If you want to see how to add the branches, history, etc. to the repo, see http://stackoverflow.com/questions/1759587/un-submodule-a-git-submodule:&lt;br /&gt;
:&lt;br /&gt;
:&#039;&#039;git rm --cached submodule_path&#039;&#039; &#039;&#039;&#039;# delete reference to submodule HEAD (no trailing slash)&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;git rm .gitmodules&#039;&#039;             &#039;&#039;&#039;# if you have more than one submodules, you need to edit this file instead of deleting!&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;rm -rf submodule_path/.git&#039;&#039;     &#039;&#039;&#039;# make sure you have backup!!&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;git add submodule_path&#039;&#039;         &#039;&#039;&#039;# will add files instead of commit reference&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;git commit -m &amp;quot;remove submodule&amp;quot;&#039;&#039;&lt;br /&gt;
; Checking out a tag&lt;br /&gt;
:You will not be able to check out a tag if it is not in your local repo, so first you have to fetch them all.&lt;br /&gt;
:&lt;br /&gt;
:First make sure that the tag exists locally by doing&lt;br /&gt;
:&lt;br /&gt;
:# --all will fetch all the remotes.&lt;br /&gt;
:# --tags will fetch all tags as well&lt;br /&gt;
:&#039;&#039;git fetch --all --tags --prune&#039;&#039;&lt;br /&gt;
:Then check out the tag by running&lt;br /&gt;
:&lt;br /&gt;
:&#039;&#039;git checkout tags/&amp;lt;tag_name&amp;gt; -b &amp;lt;branch_name&amp;gt;&#039;&#039;&lt;br /&gt;
:Note that the &#039;&#039;tags/&#039;&#039; prefix is used here instead of a remote name like &#039;&#039;origin&#039;&#039;.&lt;br /&gt;
;How to remove old/unused/deleted branches&lt;br /&gt;
:&#039;&#039;git remote prune origin&#039;&#039; prunes tracking branches not on the remote.&lt;br /&gt;
:&#039;&#039;git branch --merged&#039;&#039; lists branches that have been merged into the current branch (but maybe including &#039;&#039;&#039;master&#039;&#039;&#039;, so be careful about the next part).&lt;br /&gt;
:&#039;&#039;xargs git branch -d&#039;&#039; deletes branches listed on standard input.&lt;br /&gt;
:Be &#039;&#039;&#039;careful&#039;&#039;&#039; deleting branches listed by &#039;&#039;git branch --merged&#039;&#039;. The list could include master or other branches you&#039;d prefer not to delete.&lt;br /&gt;
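:Putting those pieces together in a throwaway repo (branch names are made up; note the grep filter that protects &#039;&#039;master&#039;&#039;):&lt;br /&gt;

```shell
# Throwaway demo: delete branches already merged into master.
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=master init -q .
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'base'

git checkout -q -b done-work
git commit -q --allow-empty -m 'finished work'
git checkout -q master
git merge -q --no-edit done-work     # 'done-work' is now merged into master

# List merged branches, filter out master and the current-branch marker,
# then delete what is left. Inspect the list before piping in real use!
git branch --merged | grep -vE '(^\*|master)' | xargs git branch -d
```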
;How do I merge just one file? &lt;br /&gt;
:A simple command already solved the problem for me if I assume that all changes are committed in both branches A and B&lt;br /&gt;
:&#039;&#039;git checkout A&#039;&#039;&lt;br /&gt;
:&#039;&#039;git checkout --patch B f&#039;&#039;&lt;br /&gt;
:The first command switches into branch &#039;&#039;A&#039;&#039;, into where I want to merge &#039;&#039;B&#039;&#039; &#039;s version of the file &#039;&#039;f&#039;&#039;. The second command patches the file &#039;&#039;f&#039;&#039; with &#039;&#039;f&#039;&#039; of HEAD of &#039;&#039;B&#039;&#039;. You may even accept/discard single parts of the patch. Instead of &#039;&#039;B&#039;&#039; you can specify any commit here, it does not have to be HEAD.&lt;br /&gt;
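:Since &#039;&#039;--patch&#039;&#039; is interactive, here is a throwaway, non-interactive variant of the same idea that uses &#039;&#039;git checkout B -- f&#039;&#039; to take the whole file (the branch and file names are made up):&lt;br /&gt;

```shell
# Throwaway demo, non-interactive: take branch B's copy of one file into A.
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=A init -q .
git config user.email demo@example.com
git config user.name demo

echo 'A version' > f.txt
git add f.txt
git commit -q -m 'A: add f'

git checkout -q -b B
echo 'B version' > f.txt
git commit -q -am 'B: change f'

git checkout -q A
git checkout B -- f.txt       # whole-file version; use --patch to pick hunks interactively
git commit -q -am 'A: take f from B'
```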
&lt;br /&gt;
== Continuous Integration builds involving submodules ==&lt;br /&gt;
There are two ways to get a CI build to run when you&#039;ve edited a submodule used by the BES. The &#039;&#039;&#039;best&#039;&#039;&#039; way is to use a branch of the BES to run the build as part of a &#039;&#039;&#039;pull request&#039;&#039;&#039;; it is described first below. The alternative is to use the master branch of the BES; it is described second.&lt;br /&gt;
&lt;br /&gt;
==== Using a BES branch and a GitHub pull request ====&lt;br /&gt;
This is the better of the two ways. It requires a bit more work but does not introduce code to the master branch before that code has passed the CI build.&lt;br /&gt;
&lt;br /&gt;
Once your work on the submodule is complete:&lt;br /&gt;
* Commit and push the code in your submodule&#039;s branch&lt;br /&gt;
: git commit&lt;br /&gt;
: git push&lt;br /&gt;
* Goto the top of the BES and checkout a new branch. Choose a name that is similar to the name of the branch used for the submodule&#039;s changes&lt;br /&gt;
: cd bes&lt;br /&gt;
: git checkout -b &amp;lt;name&amp;gt;&lt;br /&gt;
* Commit the submodule&#039;s new commit hash to the BES repo (easier than it sounds; git records the hash as part of the BES tree)&lt;br /&gt;
: &#039;git commit -a&#039; or &#039;git add &amp;lt;path to submodule&amp;gt;&#039; and then &#039;git commit&#039;&lt;br /&gt;
* Push this to GitHub&lt;br /&gt;
: git push&lt;br /&gt;
* Goto Github and issue a pull request for the BES &amp;lt;name&amp;gt; branch.&lt;br /&gt;
&lt;br /&gt;
This will trigger a CI build of that branch. This does not change the BES master branch at all, which is the goal here - to build without affecting the master branch.&lt;br /&gt;
&lt;br /&gt;
Once the build works, merge the submodule branch to the submodule&#039;s master. Then delete the BES &amp;lt;name&amp;gt; branch and make sure to update the BES master so that it references the new master branch for the submodule.&lt;br /&gt;
&lt;br /&gt;
==== An alternative, that uses the BES master branch ====&lt;br /&gt;
&lt;br /&gt;
Once your code is committed and pushed on the master branch, go to the top of the bes project and run ‘git commit -a’. This will prompt you with a commit that shows a new HDF5 handler version. Add a commit message (e.g., “New HDF5 handler version”) and then push. This works because git records the submodule&#039;s current commit hash as part of the bes tree (the ‘.gitmodules’ file just maps submodule paths to repository URLs). Since that recorded hash changes, the push results in a changed bes commit and that triggers a new build. The new build will reference the new commit hash for your latest changes on master, and that hash will be checked out for the build.&lt;br /&gt;
&lt;br /&gt;
== Merging in a branch where submodules have been removed and replaced with directories ==&lt;br /&gt;
Recently (02/03/17) we dropped most of the submodules in the &#039;&#039;&#039;bes&#039;&#039;&#039; (except for the &#039;&#039;hdf*_handler&#039;&#039; modules) and replaced them with regular code directories. &lt;br /&gt;
&lt;br /&gt;
What to do if you have an existing branch that you need to update with the new master:&lt;br /&gt;
&lt;br /&gt;
# Check in all of your changes.&lt;br /&gt;
# Push your branch to github.&lt;br /&gt;
# Start with a brand new clone for the bes repo from github:&lt;br /&gt;
#:  &#039;&#039;git clone https://github.com/opendap/bes&#039;&#039;&lt;br /&gt;
# DO NOT RUN &#039;&#039;submodule init&#039;&#039;&lt;br /&gt;
# In the new clone, checkout your branch.&lt;br /&gt;
# In the &#039;&#039;modules&#039;&#039; directory remove the old submodule directories:&lt;br /&gt;
#: &#039;&#039;rm -rf csv_handler dap-server debug_functions fileout_* fits_handler freeform_handler gateway_module gdal_handler ncml_module netcdf_handler ugrid_functions w10n_handler xml_data_handler&#039;&#039;&lt;br /&gt;
# Now, merge the master to your branch:&lt;br /&gt;
#: &#039;&#039;git merge master&#039;&#039;&lt;br /&gt;
# Clean up any trouble (there should not be much)&lt;br /&gt;
# Build and test&lt;br /&gt;
# Merge your branch to master when you&#039;re ready.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=File:OPeNDAP-Logo_Large.png&amp;diff=13511</id>
		<title>File:OPeNDAP-Logo Large.png</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=File:OPeNDAP-Logo_Large.png&amp;diff=13511"/>
		<updated>2023-10-18T19:08:06Z</updated>

		<summary type="html">&lt;p&gt;Jimg: Jimg reverted File:OPeNDAP-Logo Large.png to an old version&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The really big logo with a clear background.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=File:OPeNDAP-Logo_Large.png&amp;diff=13510</id>
		<title>File:OPeNDAP-Logo Large.png</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=File:OPeNDAP-Logo_Large.png&amp;diff=13510"/>
		<updated>2023-10-18T19:05:55Z</updated>

		<summary type="html">&lt;p&gt;Jimg: Jimg uploaded a new version of File:OPeNDAP-Logo Large.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The really big logo with a clear background.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=File:OPeNDAP-Logo_Large.png&amp;diff=13509</id>
		<title>File:OPeNDAP-Logo Large.png</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=File:OPeNDAP-Logo_Large.png&amp;diff=13509"/>
		<updated>2023-10-18T19:03:34Z</updated>

		<summary type="html">&lt;p&gt;Jimg: Jimg uploaded a new version of File:OPeNDAP-Logo Large.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The really big logo with a clear background.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13507</id>
		<title>Planning a Program Increment</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13507"/>
		<updated>2023-10-16T19:30:11Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* How to Plan a Feature */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The overall scope of the work that management thinks should be done for a quarter (i.e., a PI) is given in the Candidate Feature list. The Planning Agenda describes how the 200+ people in the ESDIS project will coordinate the planning process for that work.&lt;br /&gt;
&lt;br /&gt;
Information sources:&lt;br /&gt;
* Overall information with schedule: https://wiki.earthdata.nasa.gov/display/EPS/ESDIS+Program+SAFe&lt;br /&gt;
* Planning Agenda: https://wiki.earthdata.nasa.gov/display/EPS/PI+Planning+for+23.4+Agenda&lt;br /&gt;
* Candidate features: https://wiki.earthdata.nasa.gov/display/EPS/PI+23.4+Candidate+Features#tab-Transformation&lt;br /&gt;
&lt;br /&gt;
But... The Candidate Feature list is not the only place where features are found. Other places include:&lt;br /&gt;
* Features that were not completed in the previous quarter (&#039;&#039;Carryover&#039;&#039;);&lt;br /&gt;
* Features that are internal to the group (&#039;&#039;Team&#039;&#039;); and&lt;br /&gt;
* Features that are important to other groups in the larger collection of groups (aka, &#039;train&#039;, &#039;&#039;Train-level&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Regardless of source, all features are planned similarly.&lt;br /&gt;
&lt;br /&gt;
== How to Plan a Feature ==&lt;br /&gt;
Based on the introduction, there are four kinds of features: Candidate, Carryover, Team and Train. All these are planned the same way &#039;&#039;except&#039;&#039; for the start of the process. The Candidate features have a more formal origin than the remaining three feature types.&lt;br /&gt;
&lt;br /&gt;
* In Jira, write an Epic for the feature.&lt;br /&gt;
** Include the Acceptance Criteria (AC) for the feature in the Epic&#039;s AC - you may have to hunt around Jira&#039;s various &#039;Edit&#039; features to find how to add/edit an AC&lt;br /&gt;
** Bind this to the PI using a Fix Version label for the PI (e.g., &#039;&#039;Transformation PI 23.4&#039;&#039;)&lt;br /&gt;
* For that feature, write one or more tickets that describe what to do. For each ticket (normally a &#039;&#039;Task&#039;&#039;):&lt;br /&gt;
** write an AC&lt;br /&gt;
** assign points&lt;br /&gt;
** make it visible as part of the PI by assigning a Fix Version label for the given PI (e.g., &#039;&#039;Transformation PI 23.4&#039;&#039;)&lt;br /&gt;
** assign a person&lt;br /&gt;
** map those tickets across the Sprints in the PI&lt;br /&gt;
* If there is a dependency between our work on the feature and some other team, link that using a ticket in the Epic, not the Epic itself.&lt;br /&gt;
&lt;br /&gt;
If another team has a feature that will depend on us in some way, make a non-Epic (e.g., Task) ticket and link it to the non-Epic ticket in their Jira.&lt;br /&gt;
&lt;br /&gt;
Why do all this labeling and linking? Because that&#039;s how the page of Objectives and Risks (e.g., https://wiki.earthdata.nasa.gov/display/EPS/Transformation+Train+-+PI+23.4+-+Objectives%2C+Risks%2C+and+Dependencies+Dashboard#tab-OPeNDAP) gets the stuff it displays.&lt;br /&gt;
&lt;br /&gt;
== Candidate Features are Special ==&lt;br /&gt;
Because they are derived from perceived needs that span the whole ESDIS group and are blessed by NASA management, the Candidate Features are special. The only way they differ from the other three types is that their ACs are given in a Feature Planning ticket, which can be found as a link on the Candidate Feature page. That AC should be used to write the AC for the Epic we make for the feature. However, we need to be circumspect about what can be done in a single quarter versus what is shown on the Feature Request page. Make sure to write an AC that&#039;s achievable. During review, if something that is required has been removed, that will become apparent.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13506</id>
		<title>Planning a Program Increment</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13506"/>
		<updated>2023-10-16T19:08:36Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* How to plan a feature */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The overall scope of the work that management thinks should be done for a quarter (i.e., a PI) is given in the Candidate Feature list. The Planning Agenda describes how the 200+ people in the ESDIS project will coordinate the planning process for that work.&lt;br /&gt;
&lt;br /&gt;
Information sources:&lt;br /&gt;
* Overall information with schedule: https://wiki.earthdata.nasa.gov/display/EPS/ESDIS+Program+SAFe&lt;br /&gt;
* Planning Agenda: https://wiki.earthdata.nasa.gov/display/EPS/PI+Planning+for+23.4+Agenda&lt;br /&gt;
* Candidate features: https://wiki.earthdata.nasa.gov/display/EPS/PI+23.4+Candidate+Features#tab-Transformation&lt;br /&gt;
&lt;br /&gt;
But... The Candidate Feature list is not the only place where features are found. Other places include:&lt;br /&gt;
* Features that were not completed in the previous quarter (&#039;&#039;Carryover&#039;&#039;);&lt;br /&gt;
* Features that are internal to the group (&#039;&#039;Team&#039;&#039;); and&lt;br /&gt;
* Features that are important to other groups in the larger collection of groups (aka, &#039;train&#039;, &#039;&#039;Train-level&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Regardless of source, all features are planned similarly.&lt;br /&gt;
&lt;br /&gt;
== How to Plan a Feature ==&lt;br /&gt;
Based on the introduction, there are four kinds of features: Candidate, Carryover, Team and Train. All these are planned the same way &#039;&#039;except&#039;&#039; for the start of the process. The Candidate features have a more formal origin than the remaining three feature types.&lt;br /&gt;
&lt;br /&gt;
* In Jira, write an Epic for the feature.&lt;br /&gt;
** Include the Acceptance Criteria (AC) for the feature in the Epic&#039;s AC - you may have to hunt around Jira&#039;s various &#039;Edit&#039; features to find how to add/edit an AC&lt;br /&gt;
** Bind this to the PI using a Fix Version label for the PI (e.g., &#039;&#039;Transformation PI 23.4&#039;&#039;)&lt;br /&gt;
* For that feature, write one or more tickets that describe what to do&lt;br /&gt;
** For each ticket, assign a person&lt;br /&gt;
** assign points&lt;br /&gt;
** write an AC&lt;br /&gt;
** make it visible as part of the PI by assigning a Fix Version label for the given PI (e.g., &#039;&#039;Transformation PI 23.4&#039;&#039;)&lt;br /&gt;
** map those tickets across the Sprints in the PI&lt;br /&gt;
* If there is a dependency between our work on the feature and some other team, link that using a ticket in the Epic, not the Epic itself.&lt;br /&gt;
&lt;br /&gt;
If another team has a feature that will depend on us in some way, make a non-Epic (e.g., Task) ticket and link it to the non-Epic ticket in their Jira.&lt;br /&gt;
&lt;br /&gt;
Why do all this labeling and linking? Because that&#039;s how the page of Objectives and Risks (e.g., https://wiki.earthdata.nasa.gov/display/EPS/Transformation+Train+-+PI+23.4+-+Objectives%2C+Risks%2C+and+Dependencies+Dashboard#tab-OPeNDAP) gets the stuff it displays.&lt;br /&gt;
&lt;br /&gt;
== Candidate Features are Special ==&lt;br /&gt;
Because they are derived from perceived needs that span the whole ESDIS group and are blessed by NASA management, the Candidate Features are special. The only way they differ from the other three types is that their ACs are given in a Feature Planning ticket, which can be found as a link on the Candidate Feature page. That AC should be used to write the AC for the Epic we make for the feature. However, we need to be circumspect about what can be done in a single quarter versus what is shown on the Feature Request page. Make sure to write an AC that&#039;s achievable. During review, if something that is required has been removed, that will become apparent.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13505</id>
		<title>Planning a Program Increment</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13505"/>
		<updated>2023-10-16T19:08:06Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* How to plan a feature */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The overall scope of the work that management thinks should be done for a quarter (i.e., a PI) is given in the Candidate Feature list. The Planning Agenda describes how the 200+ people in the ESDIS project will coordinate the planning process for that work.&lt;br /&gt;
&lt;br /&gt;
Information sources:&lt;br /&gt;
* Overall information with schedule: https://wiki.earthdata.nasa.gov/display/EPS/ESDIS+Program+SAFe&lt;br /&gt;
* Planning Agenda: https://wiki.earthdata.nasa.gov/display/EPS/PI+Planning+for+23.4+Agenda&lt;br /&gt;
* Candidate features: https://wiki.earthdata.nasa.gov/display/EPS/PI+23.4+Candidate+Features#tab-Transformation&lt;br /&gt;
&lt;br /&gt;
But... The Candidate Feature list is not the only place where features are found. Other places include:&lt;br /&gt;
* Features that were not completed in the previous quarter (&#039;&#039;Carryover&#039;&#039;);&lt;br /&gt;
* Features that are internal to the group (&#039;&#039;Team&#039;&#039;); and&lt;br /&gt;
* Features that are important to other groups in the larger collection of groups (aka, &#039;train&#039;, &#039;&#039;Train-level&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Regardless of source, all features are planned similarly.&lt;br /&gt;
&lt;br /&gt;
== How to plan a feature ==&lt;br /&gt;
Based on the introduction, there are four kinds of features: Candidate, Carryover, Team and Train. All these are planned the same way &#039;&#039;except&#039;&#039; for the start of the process. The Candidate features have a more formal origin than the remaining three feature types.&lt;br /&gt;
&lt;br /&gt;
* In Jira, write an Epic for the feature.&lt;br /&gt;
** Include the Acceptance Criteria (AC) for the feature in the Epic&#039;s AC - you may have to hunt around Jira&#039;s various &#039;Edit&#039; features to find how to add/edit an AC&lt;br /&gt;
** Bind this to the PI using a Fix Version label for the PI (e.g., &#039;&#039;Transformation PI 23.4&#039;&#039;)&lt;br /&gt;
* For that feature, write one or more tickets that describe what to do&lt;br /&gt;
** For each ticket, assign a person&lt;br /&gt;
** assign points&lt;br /&gt;
** write an AC&lt;br /&gt;
** make it visible as part of the PI by assigning a Fix Version label for the given PI (e.g., &#039;&#039;Transformation PI 23.4&#039;&#039;)&lt;br /&gt;
** map those tickets across the Sprints in the PI&lt;br /&gt;
* If there is a dependency between our work on the feature and some other team, link that using a ticket in the Epic, not the Epic itself.&lt;br /&gt;
&lt;br /&gt;
If another team has a feature that will depend on us in some way, make a non-Epic (e.g., Task) ticket and link it to the non-Epic ticket in their Jira.&lt;br /&gt;
&lt;br /&gt;
Why do all this labeling and linking? Because that&#039;s how the page of Objectives and Risks (e.g., https://wiki.earthdata.nasa.gov/display/EPS/Transformation+Train+-+PI+23.4+-+Objectives%2C+Risks%2C+and+Dependencies+Dashboard#tab-OPeNDAP) gets the stuff it displays.&lt;br /&gt;
&lt;br /&gt;
== Candidate Features are Special ==&lt;br /&gt;
Because they are derived from perceived needs that span the whole ESDIS group and are blessed by NASA management, the Candidate Features are special. The only way they differ from the other three types is that their ACs are given in a Feature Planning ticket, which can be found as a link on the Candidate Feature page. That AC should be used to write the AC for the Epic we make for the feature. However, we need to be circumspect about what can be done in a single quarter versus what is shown on the Feature Request page. Make sure to write an AC that&#039;s achievable. During review, if something that is required has been removed, that will become apparent.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13504</id>
		<title>Planning a Program Increment</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13504"/>
		<updated>2023-10-16T18:45:51Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* How to plan a feature */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The overall scope of the work that management thinks should be done for a quarter (i.e., a PI) is given in the Candidate Feature list. The Planning Agenda describes how the 200+ people in the ESDIS project will coordinate the planning process for that work.&lt;br /&gt;
&lt;br /&gt;
Information sources:&lt;br /&gt;
* Overall information with schedule: https://wiki.earthdata.nasa.gov/display/EPS/ESDIS+Program+SAFe&lt;br /&gt;
* Planning Agenda: https://wiki.earthdata.nasa.gov/display/EPS/PI+Planning+for+23.4+Agenda&lt;br /&gt;
* Candidate features: https://wiki.earthdata.nasa.gov/display/EPS/PI+23.4+Candidate+Features#tab-Transformation&lt;br /&gt;
&lt;br /&gt;
But... The Candidate Feature list is not the only place where features are found. Other places include:&lt;br /&gt;
* Features that were not completed in the previous quarter (&#039;&#039;Carryover&#039;&#039;);&lt;br /&gt;
* Features that are internal to the group (&#039;&#039;Team&#039;&#039;); and&lt;br /&gt;
* Features that are important to other groups in the larger collection of groups (aka, &#039;train&#039;, &#039;&#039;Train-level&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Regardless of source, all features are planned similarly.&lt;br /&gt;
&lt;br /&gt;
== How to plan a feature ==&lt;br /&gt;
Based on the introduction, there are four kinds of features: Candidate, Carryover, Team and Train. All these are planned the same way &#039;&#039;except&#039;&#039; for the start of the process. The Candidate features have a more formal origin than the remaining three feature types.&lt;br /&gt;
&lt;br /&gt;
* In Jira, write an Epic for the feature.&lt;br /&gt;
** Include the Acceptance Criteria (AC) for the feature in the Epic&#039;s AC - you may have to hunt around Jira&#039;s various &#039;Edit&#039; features to find how to add/edit an AC&lt;br /&gt;
** Bind this to the PI using a Fix Version label for the PI (e.g., &#039;&#039;Transformation PI 23.4&#039;&#039;)&lt;br /&gt;
* For that feature, write one or more tickets that describe what to do&lt;br /&gt;
** For each ticket, assign a person&lt;br /&gt;
** assign points&lt;br /&gt;
** write an AC&lt;br /&gt;
** make it visible as part of the PI by assigning a Fix Version label for the given PI (e.g., &#039;&#039;Transformation PI 23.4&#039;&#039;)&lt;br /&gt;
** map those tickets across the Sprints in the PI&lt;br /&gt;
* If there is a dependency between our work on the feature and some other team, link that using a ticket in the Epic, not the Epic itself.&lt;br /&gt;
&lt;br /&gt;
If another team has a feature that will depend on us in some way, make a non-Epic (e.g., Task) ticket and link it to the non-Epic ticket in their Jira.&lt;br /&gt;
&lt;br /&gt;
Why do all this labeling and linking? Because that&#039;s how the page of Objectives and Risks (e.g., https://wiki.earthdata.nasa.gov/display/EPS/Transformation+Train+-+PI+23.4+-+Objectives%2C+Risks%2C+and+Dependencies+Dashboard#tab-OPeNDAP) gets the stuff it displays.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13503</id>
		<title>Planning a Program Increment</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Planning_a_Program_Increment&amp;diff=13503"/>
		<updated>2023-10-16T18:34:23Z</updated>

		<summary type="html">&lt;p&gt;Jimg: Created page with &amp;quot;The overall scope of the work that management thinks should be done for a quarter (i.e., a PI) is given in the Candidate Feature list. The Planning Agenda describes how the 200+ people in the ESDIS project will coordinate the planning process for that work.  Information sources: * Overall information with schedule: https://wiki.earthdata.nasa.gov/display/EPS/ESDIS+Program+SAFe * Planning Agenda: https://wiki.earthdata.nasa.gov/display/EPS/PI+Planning+for+23.4+Agenda * Ca...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The overall scope of the work that management thinks should be done for a quarter (i.e., a PI) is given in the Candidate Feature list. The Planning Agenda describes how the 200+ people in the ESDIS project will coordinate the planning process for that work.&lt;br /&gt;
&lt;br /&gt;
Information sources:&lt;br /&gt;
* Overall information with schedule: https://wiki.earthdata.nasa.gov/display/EPS/ESDIS+Program+SAFe&lt;br /&gt;
* Planning Agenda: https://wiki.earthdata.nasa.gov/display/EPS/PI+Planning+for+23.4+Agenda&lt;br /&gt;
* Candidate features: https://wiki.earthdata.nasa.gov/display/EPS/PI+23.4+Candidate+Features#tab-Transformation&lt;br /&gt;
&lt;br /&gt;
But... The Candidate Feature list is not the only place where features are found. Other places include:&lt;br /&gt;
* Features that were not completed in the previous quarter (&#039;&#039;Carryover&#039;&#039;);&lt;br /&gt;
* Features that are internal to the group (&#039;&#039;Team&#039;&#039;); and&lt;br /&gt;
* Features that are important to other groups in the larger collection of groups (aka, &#039;train&#039;, &#039;&#039;Train-level&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
Regardless of source, all features are planned similarly.&lt;br /&gt;
&lt;br /&gt;
== How to plan a feature ==&lt;br /&gt;
Based on the introduction, there are four kinds of features: Candidate, Carryover, Team and Train. All these are planned the same way &#039;&#039;except&#039;&#039; for the start of the process. The Candidate features have a more formal origin than the remaining three feature types.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Developer_Info&amp;diff=13502</id>
		<title>Developer Info</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Developer_Info&amp;diff=13502"/>
		<updated>2023-10-16T16:54:15Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* OPeNDAP Development process information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* [https://github.com/OPENDAP OPeNDAP&#039;s GitHub repositories]: OPeNDAP&#039;s software is available using GitHub in addition to the downloads from our website.&lt;br /&gt;
** Before 2015 we hosted our own SVN repository. It&#039;s still online and available, but for read-only access, at [https://scm.opendap.org/svn https://scm.opendap.org/svn].&lt;br /&gt;
* [https://travis-ci.org/OPENDAP Continuous Integration builds]: Software that is built whenever new changes are pushed to the master branch. These builds are done on the Travis-CI system.&lt;br /&gt;
* [http://test.opendap.org/ test.opendap.org]: Test servers with data files.&lt;br /&gt;
* We use the Coverity static analysis system to look for common software defects; information on Hyrax is spread across three projects:&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-bes?tab=overview The BES and the standard handlers we distribute]&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-olfs?tab=overview The OLFS - the front end to the Hyrax data server]&lt;br /&gt;
** [https://scan.coverity.com/projects/opendap-libdap4?tab=overview libdap - The implementation of DAP2 and DAP4]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP&#039;s FAQ ==&lt;br /&gt;
The [http://www.opendap.org/faq-page OPeNDAP FAQ] has a pretty good section on developer&#039;s questions.&lt;br /&gt;
&lt;br /&gt;
== C++ Coding Information ==&lt;br /&gt;
* [[Include files for libdap | Guidelines for including headers]]&lt;br /&gt;
* [[Using lambdas with the STL]]&lt;br /&gt;
* [[Better Unit tests for C++]]&lt;br /&gt;
* [[Better Singleton classes C++]]&lt;br /&gt;
* [[What is faster? stringstream string + String]]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP Workshops ==&lt;br /&gt;
* [http://www.opendap.org/about/workshops-and-presentations/2007-10-12 The APAC/BOM Workshops]: This workshop spanned several days and covered a number of topics, including information for SAs and Developers. Oct 2007.&lt;br /&gt;
* [http://www.opendap.org/about/workshops-and-presentations/2008-07-15 ESIP Federation Server Workshop]: This half-day workshop focused on server installation and configuration. Summer 2008&lt;br /&gt;
* [[A One-day Course on Hyrax Development | Server Functions]]: This one-day workshop is all about writing and debugging server-side functions. It also contains a wealth of information about Hyrax, the BES and debugging tricks for the server. Spring 2012. Updated Fall 2014 for presentation to Ocean Networks Canada.&lt;br /&gt;
&lt;br /&gt;
== libdap4 and BES Reference documentation ==&lt;br /&gt;
* [https://opendap.github.io/bes/html/ BES Reference]&lt;br /&gt;
* [https://opendap.github.io/libdap4/html/ libdap Reference]&lt;br /&gt;
&lt;br /&gt;
== BES Development Information ==&lt;br /&gt;
* [[Hyrax - Logging Configuration|Logging Configuration]]&lt;br /&gt;
&lt;br /&gt;
* [[BES_-_How_to_Debug_the_BES| How to debug the BES]]&lt;br /&gt;
* [[BES - Debugging Using besstandalone]]&lt;br /&gt;
* [[Hyrax - Create BES Module | How to create your own BES Module]]&lt;br /&gt;
* Hyrax Module Integration: How to configure your module so it&#039;s easy to add to Hyrax instances ([[:File:HyraxModuleIntegration-1.2.pdf|pdf]])&lt;br /&gt;
* [[Hyrax - Starting and stopping the BES| Starting and stopping the BES]]&lt;br /&gt;
* [[Hyrax - Running bescmdln | Running the BES command line client]]&lt;br /&gt;
* [[Hyrax - BES Client commands| BES Client commands]]. The page [[BES_XML_Commands | BES XML Commands]] repeats this info with a bit more detail about the return values. Most of the commands don&#039;t return anything unless there is an error; they are expected to be used in a group where a &#039;&#039;get&#039;&#039; command closes out the request and does return a response of some kind (possibly an error).&lt;br /&gt;
* [[Hyrax:_BES_Administrative_Commands| BES Administrative Commands]]&lt;br /&gt;
* [[Hyrax - Extending BES Module | Extending your BES Module]]&lt;br /&gt;
* [[Hyrax - Example BES Modules | Example BES Modules]] - the Hello World example and the CSV data handler&lt;br /&gt;
* [[Hyrax - BES PPT | BES communication protocol using PPT (point to point transport)]]&lt;br /&gt;
&lt;br /&gt;
* [[Australian BOM Software Developer&#039;s Agenda and Presentations|Software Developers Workshop]]&lt;br /&gt;
&lt;br /&gt;
== OPeNDAP Development process information  ==&lt;br /&gt;
These pages contain information about how we&#039;d like people working with us to use our various on-line tools.&lt;br /&gt;
&lt;br /&gt;
* [[Planning a Program Increment]] This is a checklist for the planning phase that precedes a Program Increment (PI) when using SAFe with the NASA ESDIS development group.&lt;br /&gt;
* [[Hyrax GitHub Source Build]] This explains how to clone our software from GitHub and build our code using a shell like bash. It also explains how to build the BES and all of the Hyrax &#039;standard&#039; handlers in one operation, as well as how to build just the parts you need without cloning the whole set of repos. Some experience with &#039;git submodule&#039; will make this easier, although the page explains everything.&lt;br /&gt;
* [[Bug Prioritization]]. How we prioritize bugs in our software.&lt;br /&gt;
&lt;br /&gt;
===[[How to Make a Release|Making A Release]] ===&lt;br /&gt;
* [[How to Make a Release]] A general template for making a release. This references some of the pages below.&lt;br /&gt;
&lt;br /&gt;
== Software process issues: ==&lt;br /&gt;
* [[How to download test logs from a Travis build]] All of our builds on Travis that run tests save those logs to an S3 bucket.&lt;br /&gt;
* [[ConfigureCentos| How to configure a CentOS machine for production of RPM binaries]] - Updated 12/2014 to include information regarding git.&lt;br /&gt;
* [[How to use CLion with our software]]&lt;br /&gt;
* [[BES Timing| How to add timing instrumentation to your BES code.]]&lt;br /&gt;
* [[UnitTests| How to write unit tests using CppUnit]] NB: See other information under the heading of C++ development&lt;br /&gt;
* [[valgrind| How to use valgrind with unit tests]]&lt;br /&gt;
* [[Debugging the distcheck target]] Yes, this gets its own page...&lt;br /&gt;
* [[CopyRights| How to copyright software written for OPeNDAP]]&lt;br /&gt;
* [[Managing public and private keys using gpg]]&lt;br /&gt;
* [[SecureEmail |How to Setup Secure Email and Sign Software Distributions]]&lt;br /&gt;
* [[UserSupport|How to Handle Email-list Support Questions]]&lt;br /&gt;
* [[NetworkServerSecurity |Security Policy and Related Procedures]]&lt;br /&gt;
* [http://semver.org/ Software version numbers]&lt;br /&gt;
* [[GuideLines| Development Guidelines]]&lt;br /&gt;
* [[Apple M1 Special Needs]]&lt;br /&gt;
&lt;br /&gt;
==== Older info of limited value: ====&lt;br /&gt;
* [http://gcc.gnu.org/gcc-4.4/cxx0x_status.html C++-11 gcc/g++-4.4 support] We now require compilers that support C++-14, so this is outdated (4/19/23).&lt;br /&gt;
* [[How to use Eclipse with Hyrax Source Code]] I like Eclipse, but we now use CLion because it&#039;s better (4/19/23). Assuming you have cloned our Hyrax code from GitHub, this explains how to set up Eclipse so you can work fairly easily and switch back and forth between the shell, emacs and Eclipse.&lt;br /&gt;
&lt;br /&gt;
==== AWS Tips ====&lt;br /&gt;
* [[Growing a CentOS Root Partition on an AWS EC2 Instance]]&lt;br /&gt;
* [[How Shutoff the CentOS firewall|How to shut off the CentOS firewall]]&lt;br /&gt;
&lt;br /&gt;
== General development information ==&lt;br /&gt;
These pages contain general information relevant to anyone working with our software:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Git Hacks and Tricks]]&#039;&#039;&#039;: Information about using git and/or GitHub that seems useful and maybe not all that obvious.&lt;br /&gt;
* [[Git Secrets]]: securing repositories from AWS secret key leaks.&lt;br /&gt;
* [https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto Valgrind Suppression File Howto] How to build a suppressions file for valgrind.&lt;br /&gt;
* [[Using a debugger for C++ with Eclipse on OS/X]] Short version: use lldbmi2 **Add info**&lt;br /&gt;
* [[Using ASAN]] Short version: look [https://github.com/google/sanitizers/wiki/AddressSanitizerAndDebugger at the Google/GitHub pages] for useful environment variables. **add text** On CentOS, use &#039;&#039;yum install llvm&#039;&#039; to get the &#039;symbolizer&#039; and try &#039;&#039;ASAN_OPTIONS=symbolize=1 ASAN_SYMBOLIZER_PATH=$(which llvm-symbolizer)&#039;&#039;&lt;br /&gt;
* [[How to use &#039;&#039;Instruments&#039;&#039; on OS/X to profile]] Updated 7/2018&lt;br /&gt;
* [[Migrating source code from SVN to git]]: How to move a large project from SVN to git and keep the history, commits, branches and tags.&lt;br /&gt;
* [https://developer.mozilla.org/en-US/docs/Eclipse_CDT Eclipse - Detailed information about running Eclipse on OSX from the Mozilla project]. Updated in 2017, this is really good, but be aware that it&#039;s specific to Mozilla, so some of the tips don&#039;t apply. Hyrax (i.e., libdap4 and the BES) also uses its own autotools + make build system, so most of the configuration information here is very apropos. See also [[How to use Eclipse with Hyrax Source Code]] below.&lt;br /&gt;
* [https://jfearn.fedorapeople.org/en-US/RPM/4/html/RPM_Guide/index.html RPM Guide] The best one I&#039;ve found so far...&lt;br /&gt;
* [https://autotools.io/index.html Autotools Myth busters] The best info on autotools I&#039;ve found yet (covers &#039;&#039;autoconf&#039;&#039;, &#039;&#039;automake&#039;&#039;, &#039;&#039;libtool&#039;&#039; and &#039;&#039;pkg-config&#039;&#039;).&lt;br /&gt;
* The [https://www.gnu.org/software/autoconf/autoconf.html autoconf] manual&lt;br /&gt;
* The [https://www.gnu.org/software/automake/ automake] manual&lt;br /&gt;
* The [https://www.gnu.org/software/libtool/ libtool] manual&lt;br /&gt;
* A good [https://lldb.llvm.org/lldb-gdb.html gdb to lldb cheat sheet] for those of us who know &#039;&#039;gdb&#039;&#039; but not &#039;&#039;lldb&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
= Old information =&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Old build information&lt;br /&gt;
====The Release Process====&lt;br /&gt;
# Make sure the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; project is up to date and the tarballs are on www.o.o. If there have been changes/updates:&lt;br /&gt;
## Update version number for the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; in the &amp;lt;tt&amp;gt;Makefile&amp;lt;/tt&amp;gt;&lt;br /&gt;
## Save, commit, (merge?), and push the changes to the &amp;lt;tt&amp;gt;master&amp;lt;/tt&amp;gt; branch.&lt;br /&gt;
## Once the &amp;lt;tt&amp;gt;hyrax-dependencies&amp;lt;/tt&amp;gt; CI build is finished, trigger CI builds for both &amp;lt;tt&amp;gt;libdap4&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;bes&amp;lt;/tt&amp;gt; by pushing change(s) to the master branch of each.&lt;br /&gt;
# [[Source_Release_for_libdap | Making a source release of libdap]]&lt;br /&gt;
# [[ReleaseGuide | Making a source release of the BES]]. &lt;br /&gt;
# [[OLFSReleaseGuide| Make the OLFS release WAR file]]. Follow these steps to create the three .jar files needed for the OLFS release. Includes information on how to build the OLFS and how to run the tests.&lt;br /&gt;
# [[HyraxDockerReleaseGuide|Make the official Hyrax Docker image for the release]] When the RPMs and the WAR file(s) are built and pushed to their respective download locations, make the Docker image of the release.&lt;br /&gt;
&lt;br /&gt;
====Supplemental release guides====&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Old - use the packages built using the Continuous Delivery process&amp;lt;/font&amp;gt;&lt;br /&gt;
# [[RPM |Make the RPM Distributions]]. Follow these steps to create an RPM distribution of the software. &#039;&#039;&#039;Note:&#039;&#039;&#039; &#039;&#039;Now we use packages built using CI/CD, so this checklist is no longer needed.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: &#039;&#039;The following is all about using Subversion and is out of date as of November 2014 when we switched to git. There are still good ideas here...&#039;&#039;&lt;br /&gt;
* [[MergingBranches |How to merge code]]&lt;br /&gt;
* [[TrunkDevelBranchRel | Using the SVN trunk, branches and tags to manage releases]].&lt;br /&gt;
* [[ShrewBranchGuide | Making a Branch of Shrew for a Server Release]]. Releases should be made from the trunk and moved to a branch once they are &#039;ready&#039; so that development can continue on the trunk and so that we can easily go back to the software that made up a release, fix bugs, and (re)release those fixes. In general, it&#039;s better to fix things like build issues, etc., discovered in the released software &#039;&#039;on the trunk&#039;&#039; and merge those down to the release branch to maintain consistency, re-release, etc. This also means that virtually all new feature development should take place on special &#039;&#039;feature&#039;&#039; branches, not the trunk.&lt;br /&gt;
* [[Hyrax Package for OS-X]]. This describes how to make a new OS/X &#039;metapackage&#039; for Hyrax.&lt;br /&gt;
* [[XP| Making Windows XP distributions]]. Follow these directions to make Windows XP binaries.&lt;br /&gt;
* [[ReleaseToolbox |Making a Matlab Ocean Toolbox Release]].  Follow these steps when a new Matlab GUI version is ready to be released.&lt;br /&gt;
* [[Eclipse - How to Setup Eclipse in a Shrew Checkout]] This includes some build instructions&lt;br /&gt;
* [[LinuxBuildHostConfig| How to configure a Linux machine to build Hyrax from SVN]]&lt;br /&gt;
* [[ConfigureSUSE| How to configure a SUSE machine for production of RPM binaries]]&lt;br /&gt;
* [[ConfigureAmazonLinuxAMI| How to configure an Amazon Linux AMI for EC2 Instance To Build Hyrax]]&lt;br /&gt;
* [[TestOpendapOrg | Notes from setting up Hyrax on our new web host]]&lt;br /&gt;
* [http://svnbook.red-bean.com/en/1.7/index.html Subversion 1.7 documentation] -- The official Subversion documentation; [http://svnbook.red-bean.com/en/1.1/svn-book.pdf PDF] and [http://svnbook.red-bean.com/en/1.1/index.html HTML].&lt;br /&gt;
* [[OPeNDAP&#039;s Use of Trac]] -- How to use Trac&#039;s various features in the software development process.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Using_lambdas_with_the_STL&amp;diff=13501</id>
		<title>Using lambdas with the STL</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Using_lambdas_with_the_STL&amp;diff=13501"/>
		<updated>2023-10-16T16:50:24Z</updated>

		<summary type="html">&lt;p&gt;Jimg: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Using lambdas with the STL =&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say you want to find the first instance of the variable named &#039;name&#039; in one of our containers. The thing in the container is not a string, but a complex object with lots of fields, one of which happens to be a string that holds the name of the object. Here&#039;s the old, pre-C++11 way, using a function that returns a boolean and adapters like bind2nd() and ptr_fun().&lt;br /&gt;
&lt;br /&gt;
Change code like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 // Note that in order for this to work the second argument must not be a reference.&lt;br /&gt;
 // jhrg 8/20/13&lt;br /&gt;
 static bool&lt;br /&gt;
 name_eq(D4Group *g, const string name)&lt;br /&gt;
 {&lt;br /&gt;
	return g-&amp;gt;name() == name;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
 groupsIter g = find_if(grp_begin(), grp_end(), bind2nd(ptr_fun(name_eq), grp_name));&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 auto g = find_if(grp_begin(), grp_end(), [name](const D4Group *g) { return g-&amp;gt;name() == name; });&lt;br /&gt;
                                           ^     ^                                       ^&lt;br /&gt;
                                           1     2                                       3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &#039;&#039;[name](const D4Group *g) { return g-&amp;gt;name() == name; }&#039;&#039; is a C++ Lambda function (an anonymous function).&lt;br /&gt;
This lambda function uses a &#039;capture.&#039; The square brackets at #1 name the captured variable. Its value is taken from the current environment when the lambda is instantiated at runtime and used at #3 in the function.&lt;br /&gt;
At #2 the argument to the lambda function is declared as a &#039;&#039;const pointer&#039;&#039; so the compiler knows the function won&#039;t be modifying the object.&lt;br /&gt;
C++ STL algorithms like &#039;&#039;find_if()&#039;&#039; take predicates (which is what this lambda function is), and that can streamline code quite a bit.&lt;br /&gt;
&lt;br /&gt;
The return value from find_if(...) is an iterator referencing the first D4Group whose name matches &#039;&#039;name&#039;&#039;; if no element matches, the end iterator is returned.&lt;br /&gt;
&lt;br /&gt;
Note that there are many algorithms in the STL that can perform operations like searching on all the elements of a container given the beginning and ending iterators.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=Git_Hacks_and_Tricks&amp;diff=13500</id>
		<title>Git Hacks and Tricks</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=Git_Hacks_and_Tricks&amp;diff=13500"/>
		<updated>2023-09-26T18:37:04Z</updated>

		<summary type="html">&lt;p&gt;Jimg: /* Cheat sheet items */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Git resources ==&lt;br /&gt;
* The [http://git-scm.com/book/en/v2 Pro GIT] book is online at: git-scm.com&lt;br /&gt;
* Good cheat sheet: http://ndpsoftware.com/git-cheatsheet.html#loc=workspace;&lt;br /&gt;
* Info on branching from git-scm.com: http://git-scm.com/book/en/Git-Branching-Remote-Branches&lt;br /&gt;
* Migration to git: http://git-scm.com/book/en/Git-and-Other-Systems-Migrating-to-Git&lt;br /&gt;
&lt;br /&gt;
== Setup a username and access token for GitHub ==&lt;br /&gt;
&lt;br /&gt;
:git config --global github.user &amp;lt;name&amp;gt;&lt;br /&gt;
:git config --global github.token &amp;lt;token&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:where the token is made using the instructions at https://help.github.com/articles/creating-an-access-token-for-command-line-use&lt;br /&gt;
&lt;br /&gt;
If you want to configure a token for use with the OSX keychain, get the credential-osxkeychain tool with brew if you need to. Test if you have it by running &#039;&#039;&#039;git credential-osxkeychain&#039;&#039;&#039; and look for the credential-osxkeychain help message. To &#039;&#039;use&#039;&#039; the git extension, you need to enter &#039;&#039;&#039;git credential-osxkeychain &amp;lt;command&amp;gt;&#039;&#039;&#039; and then, on the next line, enter &#039;&#039;&#039;host=github.com&#039;&#039;&#039; and maybe &#039;&#039;&#039;protocol=https&#039;&#039;&#039; and other key/value pairs and then a blank line. See the examples below.&lt;br /&gt;
&lt;br /&gt;
To use the osx keychain, first check if you have a password/token already saved:&lt;br /&gt;
&lt;br /&gt;
:git credential-osxkeychain get&lt;br /&gt;
::host=github.com&lt;br /&gt;
::protocol=https&lt;br /&gt;
::&amp;lt;cr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Erase the password/token&lt;br /&gt;
&lt;br /&gt;
:git credential-osxkeychain erase&lt;br /&gt;
::host=github.com&lt;br /&gt;
::protocol=https&lt;br /&gt;
::&amp;lt;cr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then set the new token&lt;br /&gt;
&lt;br /&gt;
:git credential-osxkeychain store&lt;br /&gt;
::host=github.com&lt;br /&gt;
::protocol=https&lt;br /&gt;
::username=&amp;lt;your login&amp;gt;&lt;br /&gt;
::password=&amp;lt;your token&amp;gt;&lt;br /&gt;
::&amp;lt;cr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then use the &#039;&#039;&#039;get&#039;&#039;&#039; command to verify.&lt;br /&gt;
&lt;br /&gt;
== Git Secrets ==&lt;br /&gt;
&lt;br /&gt;
Use this tool, which is run automatically before each commit, to keep from adding AWS and other secret keys to code that is destined for a public repository. Doing that will &#039;leak&#039; the key and Amazon _will_ notice. The remedy will involve every account changing its password and every key pair being &#039;rotated&#039; (i.e., every key pair has to be replaced with a new one).&lt;br /&gt;
&lt;br /&gt;
https://github.com/awslabs/git-secrets&lt;br /&gt;
&lt;br /&gt;
Scroll down to the bottom for installation instructions. On OSX, you can use brew and do not have to clone the repo. Here&#039;s what I did:&lt;br /&gt;
&lt;br /&gt;
:brew install git-secrets # install _git secrets_&lt;br /&gt;
:git secrets --register-aws --global # configure git so all future cloned repos will use it&lt;br /&gt;
:git secrets --install ~/.git-templates/git-secrets # install the hooks into a template directory; repos cloned or initialized after the next step will pick them up&lt;br /&gt;
:git config --global init.templateDir ~/.git-templates/git-secrets&lt;br /&gt;
&lt;br /&gt;
You can read a bit more of the docs in the _git secrets_ repo and configure a more fine-grained approach.&lt;br /&gt;
&lt;br /&gt;
== Subtrees: how to incorporate code from other repositories ==&lt;br /&gt;
Git subtrees are an alternative to submodules and are easier for users of the parent repository (in our case, typically the BES repo). There is a fair amount of information about subtrees, but the main thing to know is that once the child repo&#039;s code is part of the parent, in most cases there&#039;s nothing else to do. Extra steps are needed only when changes are made to the code that came from the child repo, and then only if you want to keep the parent&#039;s copy and the child repo in sync. For normal branch-PR-merge operations, there is no need to think about the subtree management commands.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s a [https://winstonkotzan.com/blog/2016/09/26/git-submodule-vs-subtree.html discussion about the differences between submodules and subtrees].&lt;br /&gt;
&lt;br /&gt;
=== How to incorporate code from another repo ===&lt;br /&gt;
To incorporate code from another repo (the &#039;&#039;child&#039;&#039;) into an existing repo (the &#039;&#039;parent&#039;&#039;), use these steps:&lt;br /&gt;
&lt;br /&gt;
# name the other project &#039;&#039;child&#039;&#039;, and fetch: &#039;&#039;&#039;git remote add -f &#039;&#039;child&#039;&#039; &amp;lt;nowiki&amp;gt;https://github.com/&amp;lt;/nowiki&amp;gt;...&#039;&#039;&#039;&lt;br /&gt;
:: The &#039;&#039;-f&#039;&#039; option runs &#039;&#039;git fetch&#039;&#039; automatically after the remote repo is added. See [https://git-scm.com/docs/git-remote git remote].&lt;br /&gt;
# prepare for the later step to record the result as a merge: &#039;&#039;&#039;git merge -s ours --no-commit --allow-unrelated-histories &#039;&#039;child&#039;&#039;/master&#039;&#039;&#039;&lt;br /&gt;
:: The &#039;&#039;-s&#039;&#039; option to &#039;&#039;git merge&#039;&#039; selects the &#039;&#039;ours&#039;&#039; strategy for the merge; &#039;&#039;--no-commit&#039;&#039; does not commit the merge automatically. See [https://git-scm.com/docs/git-merge#_merge_strategies git merge]. If you are using git 2.9+, add the option &#039;&#039;--allow-unrelated-histories&#039;&#039;, but older versions of git don&#039;t support that (as of Jan. 2022, OSX was using git 2.32).&lt;br /&gt;
# read the &amp;quot;master&amp;quot; branch of &#039;&#039;child&#039;&#039; into the subdirectory &#039;&#039;dir-child&#039;&#039;: &#039;&#039;&#039;git read-tree --prefix=&#039;&#039;dir-child&#039;&#039;/ -u &#039;&#039;child&#039;&#039;/master&#039;&#039;&#039;&lt;br /&gt;
:: The &#039;&#039;-u&#039;&#039; option causes &#039;&#039;git read-tree&#039;&#039; to update the files in the working directory. See [https://git-scm.com/docs/git-read-tree git read-tree].&lt;br /&gt;
# record the merge result: &#039;&#039;&#039;git commit -m &amp;quot;Merge &#039;&#039;child&#039;&#039; project as our subdirectory&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
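These four steps can be exercised end to end in a throwaway sandbox. Everything below is hypothetical: the repo names (&#039;&#039;child&#039;&#039;, &#039;&#039;parent&#039;&#039;), the subdirectory (&#039;&#039;child-code&#039;&#039;), and the local paths stand in for real GitHub URLs.&lt;br /&gt;

```shell
# Sandbox demo of the subtree-incorporation steps above (all names are placeholders).
set -e
work=$(mktemp -d)

# A stand-in 'child' repo with one file. In real use this would live on GitHub.
git init -q -b master "$work/child"        # -b needs git 2.28+
cd "$work/child"
git config user.email doc@example.com
git config user.name doc
touch library.c
git add library.c
git commit -q -m "child: initial code"

# The 'parent' repo that will absorb the child's code.
git init -q -b master "$work/parent"
cd "$work/parent"
git config user.email doc@example.com
git config user.name doc
git commit -q --allow-empty -m "parent: initial commit"

# The four documented steps:
git remote add -f child "$work/child"                                    # 1
git merge -s ours --no-commit --allow-unrelated-histories child/master   # 2
git read-tree --prefix=child-code/ -u child/master                       # 3
git commit -q -m "Merge child project as our subdirectory"               # 4

ls child-code                              # the child's files are now a subdirectory
```

After this, &#039;&#039;git log&#039;&#039; on the parent shows a merge commit, and later updates can be pulled with a subtree merge.&lt;br /&gt;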
&lt;br /&gt;
=== About Git subtree merges ===&lt;br /&gt;
To pull in subsequent update from &#039;&#039;child&#039;&#039; using &amp;quot;subtree&amp;quot; merge: &#039;&#039;&#039;git pull -s subtree &#039;&#039;child&#039;&#039; master&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To learn more about subtree merges, see [https://docs.github.com/en/get-started/using-git/about-git-subtree-merges About Git subtree merges].&lt;br /&gt;
&lt;br /&gt;
=== How to remove a submodule ===&lt;br /&gt;
If a child repo was included in a parent repo using git submodules, here&#039;s how to remove it so that the child repo can be included using subtrees as documented above.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;git rm -r &#039;&#039;path/to/submodule&#039;&#039; &#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;rm -rf .git/modules/&#039;&#039;path/to/submodule&#039;&#039; &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If the second command isn&#039;t run, then even though the submodule is removed for now, the remnant &#039;&#039;.git/modules/path/to/submodule&#039;&#039; folder will prevent the same submodule from being added back or replaced in the future.&lt;br /&gt;
&lt;br /&gt;
Also, using just these two commands will leave an entry for the submodule in &#039;&#039;.git/config&#039;&#039;. To remove that, &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;git config -f .git/config --remove-section submodule.&#039;&#039;path/to/submodule&#039;&#039; &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Here is an older way that illustrates where all the information is held:&lt;br /&gt;
;How do I delete a submodule?&lt;br /&gt;
NB: Ignore the submodule help info about &#039;&#039;deinit&#039;&#039; since that seems to leave too much undone.&lt;br /&gt;
&lt;br /&gt;
To remove a submodule you need to:&lt;br /&gt;
* Delete the relevant line from the &#039;&#039;.gitmodules&#039;&#039; file.&lt;br /&gt;
* Delete the relevant section from &#039;&#039;.git/config&#039;&#039;.&lt;br /&gt;
* Delete the submodule info in &#039;&#039;.git/modules&#039;&#039;: &#039;&#039;&#039;rm -rf .git/modules/&amp;lt;path_to_submodule&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
* Run &#039;&#039;&#039;git rm --cached path_to_submodule&#039;&#039;&#039; (no trailing slash).&lt;br /&gt;
* Commit the parent repo (&#039;&#039;&#039;git commit -m &amp;quot;Removed submodule ...&amp;quot;&#039;&#039;&#039;)&lt;br /&gt;
* Delete the now untracked submodule files.&lt;br /&gt;
&lt;br /&gt;
== Someone forked and issued a PR on our repo, but ... ==&lt;br /&gt;
The Travis build failed because that person is not allowed access to our AWS. &lt;br /&gt;
&lt;br /&gt;
One way to handle this is to copy their branch to our remote (aka the &#039;origin&#039; remote) and issue a PR on it.&lt;br /&gt;
&lt;br /&gt;
# Set up their remote as one you can reference.&lt;br /&gt;
# Fetch the branches of that remote (you can fetch just the one branch)&lt;br /&gt;
# Checkout that remote/branch combo.&lt;br /&gt;
# Checkout with branching to move it to your default remote (which is likely called &#039;origin&#039;).&lt;br /&gt;
# Push that branch to github&lt;br /&gt;
&lt;br /&gt;
Here are the commands (with a real example):&lt;br /&gt;
&lt;br /&gt;
# git remote add Bo98 https://github.com/Bo98/libdap4.git&lt;br /&gt;
# git fetch Bo98&lt;br /&gt;
# git checkout Bo98/libtirpc-fix       // Bo98 is the remote, libtirpc-fix is the branch &lt;br /&gt;
# git checkout -b libtirpc-fix         // That makes the code just checked out a branch for the &#039;origin&#039; remote&lt;br /&gt;
# git push -u origin libtirpc-fix      // Now that code is a branch in our repo and Travis will work.&lt;br /&gt;
&lt;br /&gt;
== Cheat sheet items ==&lt;br /&gt;
These are simple things that are not really obvious from the git book or other sources.&lt;br /&gt;
&lt;br /&gt;
;About &#039;&#039;git rebase&#039;&#039;; I have a branch with lots of commits and I want to squash them. How? Oh, I pushed those commits to github...&lt;br /&gt;
:: The trick is to use &#039;&#039;git log&#039;&#039; to find the commit hash of the starting point for &#039;&#039;rebase&#039;&#039; and &#039;&#039;git rebase --interactive&#039;&#039; and then &#039;&#039;git push origin +&amp;lt;branch&amp;gt;&#039;&#039;. &lt;br /&gt;
:: &#039;&#039;&#039;&#039;&#039;Here&#039;s a HowTo on [[Squashing commits]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
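:: If you want to script the squash (or just avoid the interactive editor), a soft reset gives the same end result. This sandbox demo uses placeholder branch names:&lt;br /&gt;

```shell
# Sandbox demo: squash everything committed on 'feature' since it left 'master'.
# 'git reset --soft' is a non-interactive stand-in for 'git rebase --interactive'.
set -e
repo=$(mktemp -d)
git init -q -b master "$repo"              # -b needs git 2.28+
cd "$repo"
git config user.email doc@example.com
git config user.name doc
git commit -q --allow-empty -m "base"

git checkout -q -b feature
git commit -q --allow-empty -m "wip 1"
git commit -q --allow-empty -m "wip 2"
git commit -q --allow-empty -m "wip 3"

git reset --soft master                    # drop the commits, keep all changes staged
git commit -q --allow-empty -m "feature: one squashed commit"
git rev-list --count master..feature       # prints: 1
```

:: Since the branch history was rewritten, publishing it still needs the forced push: &#039;&#039;git push origin +feature&#039;&#039;.&lt;br /&gt;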
&lt;br /&gt;
;I forked someone&#039;s repo, now I want to synch to their &#039;master&#039; branch. How?&lt;br /&gt;
: Follow these steps&lt;br /&gt;
:: Set up an &#039;upstream&#039; remote: https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork&lt;br /&gt;
:: Then do these operations to get and merge the changes to &#039;master&#039;: https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork&lt;br /&gt;
&lt;br /&gt;
;I keep getting a &#039;Permission Denied (publickey)&#039; error when I push!&lt;br /&gt;
: Follow these steps to make and use a public/private key pair for github&lt;br /&gt;
:: cd ~/.ssh&lt;br /&gt;
:: Within .ssh, there should be two files: id_rsa and id_rsa.pub. If those two files are not there...&lt;br /&gt;
:: ...create the SSH keys by typing ssh-keygen -t rsa -C &amp;quot;your_email@example.com&amp;quot;&lt;br /&gt;
:: Open id_rsa.pub in a text editor and copy its contents exactly as they appear,&lt;br /&gt;
:: then paste them into GitHub and/or BitBucket under the Account Menu (upper right corner) Settings &amp;gt; SSH Keys.&lt;br /&gt;
&lt;br /&gt;
;I just made a perfectly good commit to the wrong branch. How do I undo the last commit in my master branch and then take those same changes and get them into my upgrade branch?&lt;br /&gt;
:If you haven&#039;t yet pushed your changes, you can also do a soft reset:&lt;br /&gt;
:&#039;&#039;git reset --soft HEAD^&#039;&#039;&lt;br /&gt;
:This will revert the commit, but put the committed changes back into your index. Assuming the branches are relatively up-to-date with regard to each other, git will let you do a checkout into the other branch, whereupon you can simply commit:&lt;br /&gt;
:&#039;&#039;git checkout [-b] branch&#039;&#039;&lt;br /&gt;
:&#039;&#039;git commit&#039;&#039;&lt;br /&gt;
:The disadvantage is that you need to re-enter your commit message.&lt;br /&gt;
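:Put together, the recovery looks like this in a sandbox; &#039;&#039;upgrade&#039;&#039; and the file name are placeholders:&lt;br /&gt;

```shell
# Sandbox demo: move an accidental commit off master and onto a new branch.
set -e
repo=$(mktemp -d)
git init -q -b master "$repo"              # -b needs git 2.28+
cd "$repo"
git config user.email doc@example.com
git config user.name doc
git commit -q --allow-empty -m "base"

touch feature.c
git add feature.c
git commit -q -m "oops: committed to master by mistake"

git reset --soft HEAD^                     # undo the commit; feature.c stays staged
git checkout -q -b upgrade                 # carry the staged change to a new branch
git commit -q -m "feature work, now on the right branch"

git log --oneline master                   # master is back to just 'base'
```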
&lt;br /&gt;
;How to see a list of &#039;conflicted&#039; files after a merge&lt;br /&gt;
:git diff --name-only --diff-filter=U&lt;br /&gt;
;How to see the difference between two commits&lt;br /&gt;
:git diff &amp;lt;commit-hash-1&amp;gt; &amp;lt;commit-hash-2&amp;gt;, e.g., git diff 0da94be 59ff30c&lt;br /&gt;
:...for a specific file: git diff &amp;lt;commit-hash-1&amp;gt; &amp;lt;commit-hash-2&amp;gt; -- &amp;lt;file&amp;gt;&lt;br /&gt;
:...and don&#039;t forget the shorthand for the hashes: git diff HEAD^^..HEAD -- main.c where &#039;&#039;HEAD^&#039;&#039; is the parent of HEAD and &#039;&#039;HEAD~n&#039;&#039; is the commit &#039;&#039;n&#039;&#039; generations back.&lt;br /&gt;
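:A sandbox run of those diff forms (file name and commit messages are placeholders):&lt;br /&gt;

```shell
# Sandbox demo: diff between two commits, for one file, and with HEAD^^ shorthand.
set -e
repo=$(mktemp -d)
git init -q -b master "$repo"              # -b needs git 2.28+
cd "$repo"
git config user.email doc@example.com
git config user.name doc
printf 'v1\n' | tee main.c                 # tee used instead of shell redirection
git add main.c
git commit -q -m "c1"
printf 'v2\n' | tee main.c
git commit -q -a -m "c2"
printf 'v3\n' | tee main.c
git commit -q -a -m "c3"

git diff --stat HEAD^^..HEAD -- main.c     # changes from two commits back to now
first=$(git rev-list --max-parents=0 HEAD) # the first commit's hash
git diff --stat "$first" HEAD              # the same diff given explicit hashes
```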
;How to see the different remote branches:&lt;br /&gt;
:git remote show origin&lt;br /&gt;
;Fetch all the branches on &#039;&#039;origin&#039;&#039;&lt;br /&gt;
:git fetch origin&lt;br /&gt;
;How do I list the remote branches (that have been fetched)?&lt;br /&gt;
:git branch -a&lt;br /&gt;
;How do I switch to a branch from a remote origin?&lt;br /&gt;
:git checkout -b test origin/test&lt;br /&gt;
:or, with newer versions of git&amp;lt;nowiki&amp;gt;:&amp;lt;/nowiki&amp;gt; git checkout test&lt;br /&gt;
;How do I see what would be pushed to a remote repo?&lt;br /&gt;
:git push --dry-run&lt;br /&gt;
:git diff origin/master		# Assumes you have run git fetch, I think &lt;br /&gt;
:git diff --stat origin/master	# --stat just shows the file names stats, not the diffs&lt;br /&gt;
;To get a specific file from a specific branch&lt;br /&gt;
:git show dap4:./gdal_dds.cc &amp;gt; gdal_dds.dap4.cc &#039;&#039;You can use checkout instead of show and that will overwrite the file.&#039;&#039;&lt;br /&gt;
:the general syntax is &#039;&#039;object&#039;&#039; (that&#039;s the &#039;dap4:./gdal_dds.cc&#039; part) and it can use the ^ and ~n syntax to specify various commits on the given branch. A SHA can also be used.&lt;br /&gt;
;How to change the &#039;origin&#039; for a remote repo&lt;br /&gt;
:git remote set-url origin &amp;lt;nowiki&amp;gt;git://new.url.here&amp;lt;/nowiki&amp;gt; (https URLs work too...)&lt;br /&gt;
;How to push a local branch to a remote repo&lt;br /&gt;
:git push -u origin feature_branch_name&lt;br /&gt;
;How to make and track a new (local) branch&lt;br /&gt;
:git checkout -b &amp;lt;branch name&amp;gt;&lt;br /&gt;
;How to cause Travis CI to skip a build&lt;br /&gt;
:Add &#039;&#039;[ci skip]&#039;&#039; to the commit log text. See the topic on amending the commit message, which can be handy here.&lt;br /&gt;
;How to track a remote branch&lt;br /&gt;
:git checkout --track origin/serverfix &#039;&#039; or&#039;&#039; git checkout -b sf origin/serverfix&lt;br /&gt;
;How do I make an &#039;&#039;existing&#039;&#039; local branch track an existing remote branch?&lt;br /&gt;
:git branch --set-upstream-to=upstream/foo foo, where &#039;&#039;upstream&#039;&#039; is probably actually &#039;&#039;origin&#039;&#039;. (Older versions of git used the now-deprecated &#039;&#039;--set-upstream&#039;&#039; option.)&lt;br /&gt;
;Committed my code, then made a bunch of changes that just seem like a bad idea in retrospect. How do I go back to my previous commit for everything in a directory? &#039;&#039;I don&#039;t care if I lose all my changes since the last commit.&#039;&#039;&lt;br /&gt;
:git reset HEAD --hard (Note that this is one of the very few git commands where you really cannot undo what you have done).&lt;br /&gt;
;How to undo a commit (that has not been pushed)&lt;br /&gt;
:git reset --soft HEAD~1. This leaves the files in their changed state in your working dir so that you can edit them and recommit. You can also change to a different branch and commit there, then change back. &lt;br /&gt;
;In the above case, To reuse the old commit message&lt;br /&gt;
:git commit -c ORIG_HEAD &amp;lt;-- This works because &#039;reset&#039; copied the old head to .git/ORIG_HEAD. If you don&#039;t need to edit the old message, use -C instead of -c.&lt;br /&gt;
;How to delete a remote branch&lt;br /&gt;
:git push origin --delete serverfix &#039;&#039;The data are kept for a little bit - before git runs garbage collection - so it may be possible to undo this.&#039;&#039;&lt;br /&gt;
;How to delete a local branch&lt;br /&gt;
:git branch -d the_local_branch &#039;&#039;and delete the remote branch you were tracking with the same name&#039;&#039; git push origin :the_remote_branch&lt;br /&gt;
;How do I set up a git cloned repo on a remote machine so I don&#039;t have to type my password all the time?&lt;br /&gt;
:This page shows how to make a PKI key-pair with a secure password, configure the machine to remember the password using ssh-agent and upload the public key to your github account so it&#039;ll use the key for authentication. https://help.github.com/articles/generating-ssh-keys/&lt;br /&gt;
;How can I know which branches are already merged into the master branch?&lt;br /&gt;
:&#039;&#039;git branch --merged master&#039;&#039; lists branches merged into master&lt;br /&gt;
:&#039;&#039;git branch --merged&#039;&#039; lists branches merged into HEAD (i.e. tip of current branch)&lt;br /&gt;
:&#039;&#039;git branch --no-merged&#039;&#039; lists branches that have not been merged&lt;br /&gt;
:By default this applies to only the local branches. The -a flag will show both local and remote branches, and the -r flag shows only the remote branches.&lt;br /&gt;
;Switching remote URLs from HTTPS to SSH&lt;br /&gt;
:&#039;&#039;git remote -v&#039;&#039;&lt;br /&gt;
: # origin  &amp;lt;nowiki&amp;gt;https://github.com/USERNAME/REPOSITORY.git&amp;lt;/nowiki&amp;gt; (fetch)&lt;br /&gt;
: # origin  &amp;lt;nowiki&amp;gt;https://github.com/USERNAME/REPOSITORY.git&amp;lt;/nowiki&amp;gt; (push)&lt;br /&gt;
:&#039;&#039;git remote set-url origin git@github.com:USERNAME/OTHERREPOSITORY.git&#039;&#039;&lt;br /&gt;
:&#039;&#039;git remote -v&lt;br /&gt;
: # Verify new remote URL&lt;br /&gt;
: # origin  git@github.com:USERNAME/OTHERREPOSITORY.git (fetch)&lt;br /&gt;
: # origin  git@github.com:USERNAME/OTHERREPOSITORY.git (push)&lt;br /&gt;
;Amending the commit message&lt;br /&gt;
:&#039;&#039;git commit --amend&#039;&#039;&lt;br /&gt;
:&#039;&#039;git commit --amend -m &amp;quot;New commit message&amp;quot;&#039;&#039;&lt;br /&gt;
; How do I revert a commit after if it has been pushed?:&lt;br /&gt;
:Given:&lt;br /&gt;
::&#039;&#039;e512d38 Adding taunts to management.&#039;&#039;&lt;br /&gt;
::&#039;&#039;bd89039 Adding kill switch in case I&#039;m fired.&#039;&#039;&lt;br /&gt;
::&#039;&#039;da8af4d Adding performance optimizations to master loop.&#039;&#039;&lt;br /&gt;
::&#039;&#039;db0c012 Fixing bug in the doohickey&#039;&#039;&lt;br /&gt;
:If you just want to revert the commits without modifying the history, you can do the following:&lt;br /&gt;
:&lt;br /&gt;
::&#039;&#039;git revert e512d38&#039;&#039;&lt;br /&gt;
::&#039;&#039;git revert bd89039&#039;&#039;&lt;br /&gt;
:Alternatively, if you don’t want others to see that you added the kill switch and then removed it, you can roll back the repository using the following (however, this will cause problems for others who have already pulled your changes from the remote):&lt;br /&gt;
:&lt;br /&gt;
::&#039;&#039;git reset --hard da8af4d&#039;&#039;&lt;br /&gt;
::&#039;&#039;git push origin -f localBranch:remoteBranch&#039;&#039;&lt;br /&gt;
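:In a sandbox, the &#039;&#039;revert&#039;&#039; path looks like this (file name and commit messages are placeholders):&lt;br /&gt;

```shell
# Sandbox demo: 'git revert' undoes a pushed commit by adding a new commit,
# leaving the existing history intact.
set -e
repo=$(mktemp -d)
git init -q -b master "$repo"              # -b needs git 2.28+
cd "$repo"
git config user.email doc@example.com
git config user.name doc
printf 'doohickey fixed\n' | tee notes.txt # tee used instead of shell redirection
git add notes.txt
git commit -q -m "Fixing bug in the doohickey"
printf 'kill switch\n' | tee notes.txt
git commit -q -a -m "Adding kill switch"

git revert --no-edit HEAD                  # a new commit that undoes the kill switch
cat notes.txt                              # prints: doohickey fixed
```

:Compare with &#039;&#039;git reset --hard&#039;&#039; plus a forced push, which removes the commits from history and breaks anyone who already pulled them.&lt;br /&gt;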
;The gitlog-to-changelog script comes in handy to generate a GNU-style ChangeLog.&lt;br /&gt;
:As shown by gitlog-to-changelog --help, you may select the commits used to generate a ChangeLog file using either the option --since:&lt;br /&gt;
:&lt;br /&gt;
::&#039;&#039;gitlog-to-changelog --since=2008-01-01 &amp;gt; ChangeLog&#039;&#039;&lt;br /&gt;
:or by passing additional arguments after --, which will be passed to git-log (called internally by gitlog-to-changelog):&lt;br /&gt;
:&lt;br /&gt;
::&#039;&#039;gitlog-to-changelog -- -n 5 foo &amp;gt; last-5-commits-to-branch-foo&#039;&#039;&lt;br /&gt;
;Amending the commit message&lt;br /&gt;
:&#039;&#039;git commit --amend&#039;&#039;&lt;br /&gt;
:&lt;br /&gt;
;Tagging stuff&lt;br /&gt;
:&#039;&#039;git tag&#039;&#039; will list the existing tags&lt;br /&gt;
:&#039;&#039;git tag -a &amp;lt;tag name&amp;gt;&#039;&#039; adds a new tag&lt;br /&gt;
:&#039;&#039;git push origin &amp;lt;tag name&amp;gt;&#039;&#039; pushes that tag up to the server &#039;&#039;origin&#039;&#039;&lt;br /&gt;
:&#039;&#039;git push origin --tags&#039;&#039; pushes all new tags up to &#039;&#039;origin&#039;&#039;&lt;br /&gt;
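:A sandbox run of the tagging commands, using a local bare repo as a stand-in for &#039;&#039;origin&#039;&#039; and a placeholder tag name:&lt;br /&gt;

```shell
# Sandbox demo of the tag commands above (v1.0 is a placeholder tag name).
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"      # local stand-in for the 'origin' remote
git init -q -b master "$work/repo"         # -b needs git 2.28+
cd "$work/repo"
git config user.email doc@example.com
git config user.name doc
git commit -q --allow-empty -m "base"
git remote add origin "$work/origin.git"

git tag -a v1.0 -m "first release"         # make an annotated tag
git tag                                    # prints: v1.0
git push -q origin v1.0                    # push that one tag to origin
git ls-remote --tags origin                # shows refs/tags/v1.0 on the remote
```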
;How to resolve conflicts in a submodule when you&#039;ve just merged master down to a branch&lt;br /&gt;
:&lt;br /&gt;
:Run git status - make a note of the submodule folder with conflicts&lt;br /&gt;
:Reset the submodule to the version that was last committed in the current branch:&lt;br /&gt;
:&lt;br /&gt;
:git reset HEAD path/to/submodule&lt;br /&gt;
: At this point, you have a conflict-free version of your submodule which you can now update to the latest version in the submodule&#039;s repository:&lt;br /&gt;
:&lt;br /&gt;
: cd path/to/submodule&lt;br /&gt;
:&lt;br /&gt;
: git pull origin SUBMODULE-BRANCH-NAME&lt;br /&gt;
: And now you can commit that and get back to work.&lt;br /&gt;
; How to move a submodule into the main repo &lt;br /&gt;
:If all you want is to put your submodule code into the main repository, you just need to remove the submodule and re-add the files into the main repo; follow the prescription below. If you want to see how to add the branches, history, etc. to the repo, see http://stackoverflow.com/questions/1759587/un-submodule-a-git-submodule:&lt;br /&gt;
:&lt;br /&gt;
:&#039;&#039;git rm --cached submodule_path&#039;&#039; &#039;&#039;&#039;# delete reference to submodule HEAD (no trailing slash)&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;git rm .gitmodules&#039;&#039;             &#039;&#039;&#039;# if you have more than one submodules, you need to edit this file instead of deleting!&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;rm -rf submodule_path/.git&#039;&#039;     &#039;&#039;&#039;# make sure you have backup!!&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;git add submodule_path&#039;&#039;         &#039;&#039;&#039;# will add files instead of commit reference&#039;&#039;&#039;&lt;br /&gt;
:&#039;&#039;git commit -m &amp;quot;remove submodule&amp;quot;&#039;&#039;&lt;br /&gt;
; Checking out a tag&lt;br /&gt;
:You will not be able to check out a tag if it&#039;s not local to your repo, so first you have to fetch.&lt;br /&gt;
:&lt;br /&gt;
:First make sure that the tag exists locally by doing&lt;br /&gt;
:&lt;br /&gt;
:# --all will fetch all the remotes.&lt;br /&gt;
:# --tags will fetch all tags as well&lt;br /&gt;
:&#039;&#039;git fetch --all --tags --prune&#039;&#039;&lt;br /&gt;
:Then check out the tag by running&lt;br /&gt;
:&lt;br /&gt;
:&#039;&#039;git checkout tags/&amp;lt;tag_name&amp;gt; -b &amp;lt;branch_name&amp;gt;&#039;&#039;&lt;br /&gt;
:Note that the &#039;&#039;tags/&#039;&#039; prefix is used instead of a remote name like &#039;&#039;origin&#039;&#039;.&lt;br /&gt;
;How to remove old/unused/deleted branches&lt;br /&gt;
:&#039;&#039;git remote prune origin&#039;&#039; prunes tracking branches not on the remote.&lt;br /&gt;
:&#039;&#039;git branch --merged&#039;&#039; lists branches that have been merged into the current branch (but maybe including &#039;&#039;&#039;master&#039;&#039;&#039;, so be careful about the next part).&lt;br /&gt;
:&#039;&#039;xargs git branch -d&#039;&#039; deletes branches listed on standard input.&lt;br /&gt;
:Be &#039;&#039;&#039;careful&#039;&#039;&#039; deleting branches listed by &#039;&#039;git branch --merged&#039;&#039;. The list could include master or other branches you&#039;d prefer not to delete.&lt;br /&gt;
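:Those pieces combine into one (careful!) pipeline. In this sandbox demo the grep patterns protect &#039;&#039;master&#039;&#039; and the current branch; the branch names are placeholders:&lt;br /&gt;

```shell
# Sandbox demo: delete local branches already merged into master, protecting
# master itself and the current branch (the '*' line in the listing).
set -e
repo=$(mktemp -d)
git init -q -b master "$repo"              # -b needs git 2.28+
cd "$repo"
git config user.email doc@example.com
git config user.name doc
git commit -q --allow-empty -m "base"
git checkout -q -b unmerged-work
git commit -q --allow-empty -m "not merged yet"
git checkout -q master
git branch merged-work                     # at master's tip, so it counts as merged

# xargs -r (GNU) skips running 'git branch -d' when the list is empty.
git branch --merged master | grep -v -e '^\*' -e '^  master$' | xargs -r git branch -d
git branch                                 # merged-work is gone; unmerged-work survives
```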
;How do I merge just one file? &lt;br /&gt;
:Assuming that all changes are committed in both branches A and B, a simple pair of commands solves the problem:&lt;br /&gt;
:&#039;&#039;git checkout A&#039;&#039;&lt;br /&gt;
:&#039;&#039;git checkout --patch B f&#039;&#039;&lt;br /&gt;
:The first command switches into branch &#039;&#039;A&#039;&#039;, into where I want to merge &#039;&#039;B&#039;&#039; &#039;s version of the file &#039;&#039;f&#039;&#039;. The second command patches the file &#039;&#039;f&#039;&#039; with &#039;&#039;f&#039;&#039; of HEAD of &#039;&#039;B&#039;&#039;. You may even accept/discard single parts of the patch. Instead of &#039;&#039;B&#039;&#039; you can specify any commit here, it does not have to be HEAD.&lt;br /&gt;
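:In a sandbox (&#039;&#039;A&#039;&#039;, &#039;&#039;B&#039;&#039;, and &#039;&#039;f&#039;&#039; as in the text): since &#039;&#039;--patch&#039;&#039; prompts interactively, the non-interactive form that takes B&#039;s whole version of the file is used here.&lt;br /&gt;

```shell
# Sandbox demo: bring branch B's version of one file into branch A.
# 'git checkout --patch B f' lets you accept/discard hunks interactively;
# 'git checkout B -- f' (used below) takes B's whole version of f.
set -e
repo=$(mktemp -d)
git init -q -b A "$repo"                   # -b needs git 2.28+
cd "$repo"
git config user.email doc@example.com
git config user.name doc
printf 'version A\n' | tee f               # tee used instead of shell redirection
git add f
git commit -q -m "A: original f"

git checkout -q -b B
printf 'version B\n' | tee f
git commit -q -a -m "B: new f"

git checkout -q A
git checkout B -- f                        # replace f with B's version (also staged)
git commit -q -a -m "A: merged f from B"
cat f                                      # prints: version B
```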
&lt;br /&gt;
== Continuous Integration builds involving submodules ==&lt;br /&gt;
There are two ways to get a CI build to run when you&#039;ve edited a submodule used by the BES. The &#039;&#039;&#039;best&#039;&#039;&#039; way is to use a branch of the BES to run the build as part of a &#039;&#039;&#039;pull request&#039;&#039;&#039;, described as option number one below. Another way is to use the master branch of the BES, described as the second choice.&lt;br /&gt;
&lt;br /&gt;
==== Using a BES branch and a GitHub pull request ====&lt;br /&gt;
This is the better of the two ways. It requires a bit more work but does not introduce code to the master branch before that code has passed the CI build.&lt;br /&gt;
&lt;br /&gt;
Once your work on the submodule is complete:&lt;br /&gt;
* Commit and push the code in your submodule&#039;s branch&lt;br /&gt;
: git commit&lt;br /&gt;
: git push&lt;br /&gt;
* Go to the top of the BES and check out a new branch. Choose a name that is similar to the name of the branch used for the submodule&#039;s changes.&lt;br /&gt;
: cd bes&lt;br /&gt;
: git checkout -b &amp;lt;name&amp;gt;&lt;br /&gt;
* Commit the submodule&#039;s new commit hash to the BES repo (easier than it sounds; git tracks the submodule&#039;s hash itself, not its files)&lt;br /&gt;
: &#039;git commit -a&#039; or &#039;git add &amp;lt;path to submodule&amp;gt;&#039; and then &#039;git commit&#039;&lt;br /&gt;
* Push this to GitHub&lt;br /&gt;
: git push&lt;br /&gt;
* Go to GitHub and issue a pull request for the BES &amp;lt;name&amp;gt; branch.&lt;br /&gt;
&lt;br /&gt;
This will trigger a CI build of that branch. This does not change the BES master branch at all, which is the goal here - to build without affecting the master branch.&lt;br /&gt;
&lt;br /&gt;
Once the build works, merge the submodule branch to the submodule&#039;s master. Then delete the BES &amp;lt;name&amp;gt; branch and make sure to update the BES master so that it references the new master branch for the submodule.&lt;br /&gt;
&lt;br /&gt;
==== An alternative, that uses the BES master branch ====&lt;br /&gt;
&lt;br /&gt;
Once your code is committed and pushed on the submodule&#039;s master branch, go to the top of the bes project and run ‘git commit -a’. This will prompt you with a commit that shows a new HDF5 handler version. Add a commit message (e.g., “New HDF5 handler version”) and then push. This works because the parent repo records the commit hash that each submodule should be checked out at; since that recorded hash changes, the push results in changed content in bes and that triggers a new build. The new build will reference the new commit hash for your latest changes on master, and that hash will be checked out for the build.&lt;br /&gt;
&lt;br /&gt;
== Merging in a branch where submodules have been removed and replaced with directories ==&lt;br /&gt;
Recently (02/03/17) we dropped most of the submodules in the &#039;&#039;&#039;bes&#039;&#039;&#039; (except for the &#039;&#039;hdf*_handler&#039;&#039; modules) and replaced them with regular code directories. &lt;br /&gt;
&lt;br /&gt;
What to do if you have an existing branch that you need to update with the new master:&lt;br /&gt;
&lt;br /&gt;
# Check in all of your changes.&lt;br /&gt;
# Push your branch to github.&lt;br /&gt;
# Start with a brand new clone for the bes repo from github:&lt;br /&gt;
#:  &#039;&#039;git clone https://github.com/opendap/bes&#039;&#039;&lt;br /&gt;
# DO NOT RUN &#039;&#039;submodule init&#039;&#039;&lt;br /&gt;
# In the new clone, checkout your branch.&lt;br /&gt;
# In the &#039;&#039;modules&#039;&#039; directory, remove the directories for the submodules that were converted:&lt;br /&gt;
#: &#039;&#039;rm -rf csv_handler dap-server debug_functions fileout_* fits_handler freeform_handler gateway_module gdal_handler ncml_module netcdf_handler ugrid_functions w10n_handler xml_data_handler&#039;&#039;&lt;br /&gt;
# Now, merge the master to your branch:&lt;br /&gt;
#: &#039;&#039;git merge master&#039;&#039;&lt;br /&gt;
# Clean up any merge conflicts (there should not be many)&lt;br /&gt;
# Build and test&lt;br /&gt;
# Merge your branch to master when you&#039;re ready.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
	<entry>
		<id>https://docs.opendap.org/index.php?title=What_is_faster%3F_stringstream_string_%2B_String&amp;diff=13498</id>
		<title>What is faster? stringstream string + String</title>
		<link rel="alternate" type="text/html" href="https://docs.opendap.org/index.php?title=What_is_faster%3F_stringstream_string_%2B_String&amp;diff=13498"/>
		<updated>2023-08-16T18:45:19Z</updated>

		<summary type="html">&lt;p&gt;Jimg: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Question ==&lt;br /&gt;
&lt;br /&gt;
I have two string objects, &#039;&#039;str_1&#039;&#039; and &#039;&#039;str_2&#039;&#039;, and I want to concatenate them. I can use two methods: &lt;br /&gt;
&lt;br /&gt;
method 1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
std::string str_1(&amp;quot;hello&amp;quot;);&lt;br /&gt;
std::string str_2(&amp;quot;world&amp;quot;);&lt;br /&gt;
std::stringstream ss;&lt;br /&gt;
ss &amp;lt;&amp;lt; str_1 &amp;lt;&amp;lt; str_2;&lt;br /&gt;
const std::string dst_str = ss.str();&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
method 2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
std::string str_1(&amp;quot;hello&amp;quot;);&lt;br /&gt;
std::string str_2(&amp;quot;world&amp;quot;);&lt;br /&gt;
const std::string dst_str = str_1 + str_2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because a string&#039;s buffer is read-only, changing a string object destroys its buffer and creates a new one to store the new content. So is method 1 better than method 2? Is my understanding correct?&lt;br /&gt;
&lt;br /&gt;
== Answer ==&lt;br /&gt;
From StackOverflow (https://stackoverflow.com/questions/30254175/is-stringstream-better-than-strings-operator-for-string-objects-concatenati)&lt;br /&gt;
&lt;br /&gt;
stringstreams are complex objects compared to simple strings. Every time you use method 1, a stringstream must be constructed and later destructed. If you do this millions of times, the overhead will be far from negligible.&lt;br /&gt;
&lt;br /&gt;
The apparently simple &amp;lt;pre&amp;gt;ss &amp;lt;&amp;lt; str_1 &amp;lt;&amp;lt; str_2&amp;lt;/pre&amp;gt; is in fact equivalent to &amp;lt;pre&amp;gt;std::operator&amp;lt;&amp;lt;(std::operator&amp;lt;&amp;lt;(ss, str_1), str_2);&amp;lt;/pre&amp;gt; which is not optimized for in-memory concatenation, but is common to all streams.&lt;br /&gt;
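&lt;br /&gt;
A minimal sketch of how such a comparison might be coded (my own illustration, not the benchmark from the answer; the iteration count and timing method are arbitrary choices):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;chrono&amp;gt;&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;sstream&amp;gt;&lt;br /&gt;
#include &amp;lt;string&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main() {&lt;br /&gt;
    const std::string str_1(&amp;quot;hello&amp;quot;), str_2(&amp;quot;world&amp;quot;);&lt;br /&gt;
    const int N = 1000000;&lt;br /&gt;
    std::size_t total = 0;&lt;br /&gt;
&lt;br /&gt;
    auto t0 = std::chrono::steady_clock::now();&lt;br /&gt;
    for (int i = 0; i &amp;lt; N; ++i) {&lt;br /&gt;
        std::stringstream ss;              // constructed and destructed every pass&lt;br /&gt;
        ss &amp;lt;&amp;lt; str_1 &amp;lt;&amp;lt; str_2;&lt;br /&gt;
        total += ss.str().size();&lt;br /&gt;
    }&lt;br /&gt;
    auto t1 = std::chrono::steady_clock::now();&lt;br /&gt;
    for (int i = 0; i &amp;lt; N; ++i)&lt;br /&gt;
        total += (str_1 + str_2).size();   // plain string operator+&lt;br /&gt;
    auto t2 = std::chrono::steady_clock::now();&lt;br /&gt;
&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;stringstream: &amp;quot; &amp;lt;&amp;lt; std::chrono::duration&amp;lt;double&amp;gt;(t1 - t0).count() &amp;lt;&amp;lt; &amp;quot; s\n&amp;quot;&lt;br /&gt;
              &amp;lt;&amp;lt; &amp;quot;operator+:    &amp;quot; &amp;lt;&amp;lt; std::chrono::duration&amp;lt;double&amp;gt;(t2 - t1).count() &amp;lt;&amp;lt; &amp;quot; s\n&amp;quot;&lt;br /&gt;
              &amp;lt;&amp;lt; &amp;quot;checksum: &amp;quot; &amp;lt;&amp;lt; total &amp;lt;&amp;lt; &amp;quot;\n&amp;quot;;  // keeps the loops from being optimized away&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;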
&lt;br /&gt;
I&#039;ve done a small benchmark:&lt;br /&gt;
&lt;br /&gt;
* In debug mode, method 2 is almost twice as fast as method 1.&lt;br /&gt;
&lt;br /&gt;
* In an optimized build (verifying in the assembler file that nothing was optimized away), it&#039;s more than 27 times faster.&lt;/div&gt;</summary>
		<author><name>Jimg</name></author>
	</entry>
</feed>