---
title: "Evaluating Cloud-Optimized HDF5 for NASA’s ICESat-2 Mission"
format:
agu-pdf:
keep-tex: true
agu-html: default
author:
- name: Luis A. Lopez
affiliations:
- name: National Snow and Ice Data Center, University of Colorado, Boulder.
department: CIRES
address: CIRES, 449 UCB
city: Boulder
region: CO
country: USA
postal-code: 80309
orcid: 0000-0003-4896-3263
email: [email protected]
url: https://github.com/betolink
- name: "Andrew P. Barrett"
affiliations:
- name: National Snow and Ice Data Center, University of Colorado, Boulder.
department: CIRES
address: CIRES, 449 UCB
city: Boulder
region: CO
country: USA
postal-code: 80309
orcid: 0000-0003-4394-5445
url: "https://github.com/andypbarrett"
- name: "Amy Steiker"
affiliations:
- name: National Snow and Ice Data Center, University of Colorado, Boulder.
department: CIRES
address: CIRES, 449 UCB
city: Boulder
region: CO
country: USA
postal-code: 80309
orcid: 0000-0002-3039-0260
url: "https://github.com/asteiker"
- name: "Aleksandar Jelenak"
affiliations:
- name: The HDF Group
city: Champaign
region: IL
country: USA
postal-code: 61820
orcid: 0009-0001-2102-0559
url: "https://github.com/ajelenak"
- name: "Lisa Kaser"
affiliations:
- name: National Snow and Ice Data Center, University of Colorado, Boulder.
department: CIRES
address: CIRES, 449 UCB
city: Boulder
region: CO
country: USA
postal-code: 80309
url: "https://nsidc.org/about/about-nsidc/what-we-do/our-people/lisa_kaser"
- name: "Jeffrey E. Lee"
affiliations:
- name: NASA / KBR
department: NASA Goddard Space Flight Center
address: 8800 Greenbelt Rd
city: Greenbelt
region: MD
country: USA
postal-code: 20771
url: "https://science.gsfc.nasa.gov/sci/bio/jeffrey.e.lee"
abstract: |
  The Hierarchical Data Format (HDF) is a common archival format for n-dimensional scientific data; it has been utilized to store valuable information from astrophysics to earth sciences and everything in between. As flexible and powerful as HDF can be, it comes with significant trade-offs when it is accessed from remote storage systems, mainly because the file format and the client I/O libraries were designed for local and supercomputing workflows. As scientific data and workflows migrate to the cloud, efficient access to data stored in HDF format is a key factor that will accelerate or slow down “science in the cloud” across all disciplines.
We present an implementation of recently available features in the HDF5 stack that results in performant access to HDF from remote cloud storage. This performance is on par with modern cloud-native formats like Zarr but with the advantage of not having to reformat data or generate metadata sidecar files (DMR++, Kerchunk). Our benchmarks also show potential cost-savings for data producers if their data are processed using cloud-optimized strategies.
keywords: ["cloud-native","cloud", "HDF5", "NASA", "ICESat-2"]
key-points:
- Key Points convey the main points and conclusions of the article.
- Up to three key point statements are allowed, and each is limited to at most 140 characters with no abbreviations.
- Key Points are not included in the word count.
bibliography: bibliography.bib
citation:
container-title: Geophysical Research Letters
keep-tex: true
date: last-modified
---
## Problem
Scientific data from NASA and other agencies are increasingly being distributed from the commercial cloud. Cloud storage enables large-scale workflows and should reduce local storage costs. It also allows the use of scalable on-demand cloud computing resources by individual scientists and the broader scientific community. However, the majority of this scientific data is stored in a format that was not designed for the cloud: the Hierarchical Data Format (HDF).
The most recent version of the Hierarchical Data Format is HDF5, a common archival format for n-dimensional scientific data; it has been utilized to store valuable information from astrophysics to earth sciences and everything in between. As flexible and powerful as HDF5 can be, it comes with significant trade-offs when it is accessed from remote storage systems.
HDF5 is a complex file format; it can be thought of as a file system that uses a tree-like structure with multiple data types and native data structures. Because of this complexity, the most reliable way of accessing data stored in this format is through the HDF5 C API. Regardless of access pattern, nearly all tools ultimately rely on the HDF5 C library, and this brings a couple of issues that affect the efficiency of accessing this format over the network:
---
#### **Metadata fragmentation**
When working with large datasets, especially those that include numerous variables and nested groups, the storage of file-level metadata can become a challenge. By default, metadata associated with each dataset is stored in chunks of 4 kilobytes (KB). This chunking mechanism was originally intended to optimize storage efficiency and access speed for the disk hardware available more than 20 years ago. In datasets with many variables and/or complex hierarchical structures, these 4 KB chunks can lead to significant fragmentation.
Fragmentation occurs when this metadata is spread out across multiple non-contiguous chunks within the file. This results in inefficiencies when accessing or modifying data because compatible libraries need to read from multiple, scattered locations in the file. Over time, as the dataset grows and evolves, this fragmentation can compound, leading to degraded performance and increased storage overhead. In particular, operations that involve reading or writing metadata, such as opening a file, checking attributes, or modifying variables, can become slower and more resource-intensive.
#### **Global API Lock**
Because of the historical complexity of operations with the HDF5 format [@The_HDF_Group_Hierarchical_Data_Format], the library needed to be made thread-safe, and, much like in the Python language, the simplest mechanism to implement this is a global API lock. This global lock is not a big issue when reading data from local disk, but it becomes a major bottleneck when reading data over the network, because each read is sequential and latency in the cloud is orders of magnitude higher than for local access [@Mozilla-latency-2024; @scott-2020].
---
::: {#fig-1 fig-env="figure*"}
![](figures/figure-1.png)
Reads (Rn) performed to access file metadata. In the first read, R0, the HDF5 library verifies the file signature from the superblock; subsequent reads, R1, R2, ..., Rn, fetch file-level metadata 4 KB at a time.
:::
#### **Background and data selection**
As a result of community feedback and “hack weeks” organized by NSIDC and the UW eScience Institute in 2023 [@h5cloud2023], NSIDC started the Cloud Optimized Format Investigation (COFI) project to improve access to HDF5 data from the ICESat-2 mission, a spaceborne lidar that retrieves surface topography of the Earth’s ice sheets, land and oceans [@NEUMANN2019111325]. Because of its complexity, large size, and importance for cryospheric studies, we targeted the ATL03 data product. The most relevant variables in ATL03 are the geolocated photon heights from the ICESat-2 ATLAS instrument. Each ATL03 file contains 1003 geophysical variables in six data groups. Although our research focused on this dataset, most of our findings are applicable to any dataset stored in HDF5 or NetCDF4.
## Methodology
We tested access times to original and different configurations of cloud-optimized HDF5 [ATL03 files](https://its-live-data.s3.amazonaws.com/index.html#test-space/cloud-experiments/h5cloud/) stored in AWS S3 buckets in region us-west-2, the region hosting NASA’s Earthdata Cloud archives. Files were accessed using Python tools commonly used by Earth scientists: h5py and Xarray [@Hoyer2017-su]. h5py is a Python wrapper around the HDF5 C API; Xarray^[`h5py` is a dependency of Xarray.] is a widely used Python package for working with n-dimensional data. We also tested access times using h5coro, a Python package optimized for reading HDF5 files from S3 buckets, and Kerchunk, a tool that creates an efficient lookup table of file chunks to allow performant partial reads of files.
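For reference, the snippet below is a minimal sketch of this access pattern with h5py, s3fs, and Xarray; the S3 URL is illustrative rather than one of the actual test files.

```python
# Minimal sketch of remote HDF5 access through fsspec/s3fs; the S3 URL is
# illustrative, not an actual benchmark file.
import h5py
import s3fs
import xarray as xr

S3_URL = "s3://example-bucket/ATL03_example.h5"  # hypothetical object

fs = s3fs.S3FileSystem(anon=True)

# h5py reads through the file-like object returned by s3fs; every chunk and
# metadata block read becomes one or more HTTP range requests.
with fs.open(S3_URL, mode="rb") as f:
    with h5py.File(f, mode="r") as h5:
        heights = h5["/gt1l/heights/h_ph"][:1000]  # geolocated photon heights

# Xarray accepts the same file-like object and reads it through h5py
# (via the h5netcdf engine).
with fs.open(S3_URL, mode="rb") as f:
    ds = xr.open_dataset(f, group="gt1l/heights", engine="h5netcdf")
    print(ds)
```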
The test files were cloud optimized by “repacking” them using a relatively new feature in the HDF5 C API called “paged aggregation”. Paged aggregation does two things: first, it collects file-level metadata from datasets and stores it in dedicated metadata blocks at the front of the file; second, it forces the library to write both data and metadata in fixed-size pages. Aggregation allows client libraries to read file metadata in only a few requests, using the page size as a fixed request size and overriding the one-request-per-chunk behavior.
::: {#fig-2 fig-env="figure*"}
![](figures/figure-2.png)
How file-level metadata and data get internally packed once paged aggregation is applied to a file.
:::
As we can see in [@fig-2], when we cloud optimize a file using paged aggregation there are some considerations and behaviors that we had to take into account. The first thing to observe is that paged aggregation, as mentioned, consolidates the file-level metadata at the front of the file and adds information to the so-called superblock^[The HDF5 superblock is a crucial component of the HDF5 file format, acting as the starting point for accessing all data within the file. It stores important metadata such as the version of the file format, pointers to the root group, and addresses for locating different file components.].
The next thing to notice is that a single page size is used across the board for both metadata and data; as of October 2024 and version 1.14 of the HDF5 library, the page size cannot dynamically adjust to the total metadata size.
::: {#fig-3 fig-env="figure*"}
![](figures/figure-3.png)
Packing file-level metadata and data inside aggregated pages leaves unused space that can increase the file size considerably.
:::
This one-page-size-for-all approach simplifies how the HDF5 API reads the file (if configured to do so), but it also brings unused page space and chunk over-reads. In the case of the ICESat-2 dataset (ATL03), the data have been partitioned so that each granule represents a segment of the satellite orbit, and within the file the most relevant dataset is chunked using 10,000 items per chunk. With 32-bit floating-point data and a fast compression setting, the resulting chunk size averages under 40 KB, which is very small for an HTTP request, especially when chunks have to be read sequentially. Because of these considerations, we opted to test different page sizes and increased chunk sizes. The following table describes the different configurations used in our tests.
| prefix | description | % file size increase | ~km per chunk | chunk shape | page size | avg chunk size |
|---|---|---|---|---|---|---|
| original | original file from ATL03 v6 (1 GB and 7 GB) | 0 | 1.5 km | (10000,) | N/A | 35 KB |
| original-kerchunk | Kerchunk sidecar of the original file | N/A | 1.5 km | (10000,) | N/A | 35 KB |
| page-only-4mb | paged-aggregated file with 4 MB per page | ~1% | 1.5 km | (10000,) | 4 MB | 35 KB |
| page-only-8mb | paged-aggregated file with 8 MB per page | ~1% | 1.5 km | (10000,) | 8 MB | 35 KB |
| rechunked-4mb | paged-aggregated file with bigger chunk sizes | ~1% | 10 km | (100000,) | 4 MB | 400 KB |
| rechunked-8mb | paged-aggregated file with bigger chunk sizes | ~1% | 10 km | (100000,) | 8 MB | 400 KB |
| rechunked-8mb-kerchunk | Kerchunk sidecar of the last paged-aggregated file | N/A | 10 km | (100000,) | 8 MB | 400 KB |
This table lists the different configurations we used for our tests, for two file sizes. It is worth noting that we encountered a few outlier cases where compression and chunk sizes led paged aggregation to increase the file size by approximately 10%, which was above the desired maximum for NSIDC (5%).
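The average chunk sizes reported above can be derived directly from a file's chunk index. A minimal sketch with h5py is shown below; the file name and dataset path are illustrative.

```python
# Sketch: estimate the average stored (compressed) chunk size of a dataset.
# The file name and dataset path are illustrative ATL03-like values.
import h5py

with h5py.File("ATL03_example.h5", "r") as f:
    ds = f["/gt1l/heights/h_ph"]
    num_chunks = ds.id.get_num_chunks()    # number of allocated chunks
    storage = ds.id.get_storage_size()     # bytes actually stored on disk
    if num_chunks:
        print(f"chunk shape: {ds.chunks}")
        print(f"average stored chunk size: {storage / num_chunks / 1024:.1f} KB")
```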
We tested these files using the most common libraries that handle HDF5 and two different I/O drivers that support remote access to AWS S3: fsspec and the HDF5 library's native S3 driver. The results of our testing are explained in the next section, and the code to reproduce the results is in the attached notebooks.
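A minimal sketch of the native-driver path is shown below, assuming the `ros3` (read-only S3) driver is what backs native S3 access in h5py; it requires an HDF5 build with the ROS3 virtual file driver enabled, and the URL is illustrative. The fsspec path was sketched earlier in the Methodology section.

```python
# Sketch: read a remote HDF5 file with the HDF5 native read-only S3 (ros3) driver.
# Requires an HDF5 build with the ROS3 virtual file driver; the URL is illustrative.
import h5py

URL = "https://example-bucket.s3.us-west-2.amazonaws.com/ATL03_example.h5"

# For public objects no credentials are needed; the HDF5 C library issues the
# HTTP range requests itself instead of going through a Python file object.
with h5py.File(URL, mode="r", driver="ros3") as h5:
    print(list(h5.keys()))
```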
## Results
::: {#fig-4 fig-env="figure*"}
![](figures/figure-4.png)
Using paged aggregation alone is not a complete solution. This behavior is caused by over-reads of data now distributed in pages and by the internals of HDF5 not knowing how to optimize the requests. This means that if we only cloud optimize the file and keep using the same access code, in some cases access to these files becomes even slower. An important observation is that rechunking the file, in this case with 10X bigger chunks, results in a predictable 10X improvement in access times without any cloud optimization involved. Having fewer chunks generates less metadata and bigger requests; in general it is recommended that chunk sizes range between 1 MB and 10 MB [Add citation, S3 and HDF5], or even bigger if enough memory and bandwidth are available (Pangeo recommends up to 100 MB chunks) [Add citation.]
:::
::: {#fig-5 fig-env="figure*"}
![](figures/figure-5.png)
Once the I/O configuration is aligned with the chunking in the file, access times are on par with cloud-optimized access patterns like Kerchunk/Zarr. These numbers are from in-region execution; out-of-region access is considerably slower for the non-cloud-optimized case.
:::
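Aligning the I/O configuration mainly means making the remote read size and client-side caches match the file's page and chunk layout. The snippet below is a minimal sketch of one way to do this with h5py and s3fs; the parameter values mirror the 8 MB page configuration, and the URL and dataset path are illustrative.

```python
# Sketch: align fsspec and h5py buffering with an 8 MB paged-aggregated file.
# The S3 URL and dataset path are illustrative.
import h5py
import s3fs

PAGE_SIZE = 8 * 1024 * 1024  # must match the file's aggregation page size

fs = s3fs.S3FileSystem(anon=True)
with fs.open(
    "s3://example-bucket/ATL03_rechunked_8mb.h5",
    mode="rb",
    cache_type="blockcache",   # cache whole blocks so each page is fetched once
    block_size=PAGE_SIZE,      # roughly one HTTP range request per aggregated page
) as f:
    with h5py.File(f, mode="r", rdcc_nbytes=PAGE_SIZE) as h5:
        # rdcc_nbytes enlarges h5py's chunk cache so the bigger chunks fit in it
        photons = h5["/gt1l/heights/h_ph"][:100_000]
```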
## Recommendations
We have split our recommendations for the ATL03 product into three main categories: creating the files, accessing the files, and future tool development.
### Recommended cloud optimizations
Based on our testing we recommend the following cloud optimizations for creating HDF5 files for the ATL03 product:
Create HDF5 files using paged aggregation by setting the following HDF5 library parameters (a Python sketch of this configuration is shown after the list):
1. File page strategy: H5F_FSPACE_STRATEGY_PAGE
2. File page size: 8000000
If repacking an existing file, the `h5repack` utility can apply these settings:
```bash
h5repack -S PAGE -G 8000000 input.h5 output.h5
```
3. Avoid using unlimited dimensions when creating variables, because the HDF5 API cannot support them inside buffered pages and these variables are not supported by Kerchunk.
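The same parameters can also be set from Python when writing a file with h5py. The following is a minimal sketch, assuming a recent h5py 3.x release built against HDF5 ≥ 1.10.1; the dataset name, sizes, and compression level are illustrative.

```python
# Sketch: create a paged-aggregated HDF5 file from Python with h5py.
# Dataset name, sizes, and compression settings are illustrative.
import h5py
import numpy as np

with h5py.File(
    "cloud_optimized.h5",
    "w",
    fs_strategy="page",            # H5F_FSPACE_STRATEGY_PAGE
    fs_persist=True,               # keep free-space tracking information
    fs_page_size=8 * 1024 * 1024,  # 8 MB file space pages
) as f:
    f.create_dataset(
        "gt1l/heights/h_ph",
        data=np.random.rand(1_000_000).astype("float32"),
        chunks=(100_000,),         # bigger chunks, still well under the page size
        compression="gzip",
        compression_opts=1,        # fast compression level
    )
```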
#### Reasoning
Based on the variable size of ATL03 files, it is difficult to allocate a fixed metadata page: big files contain north of 30 MB of metadata, but the median-sized file is below 8 MB. If we had adopted a user block, we would have caused an increase in file size and storage cost of approximately 30% (reference to our tests). Another consequence of using a dedicated fixed page for file-level metadata is that metadata overflow has the same impact on access times: the library fetches the dedicated page in one go, but the rest of the metadata is read using the predefined block size of 4 KB.
Paged aggregation is thus the simplest way of cloud optimizing an HDF5 file, as the metadata keeps filling dedicated pages until all the file-level metadata is stored at the front of the file.
Chunk sizes cannot be larger than the page size, and when chunk sizes are smaller we need to take into account how these chunks will fit on a page; in an ideal scenario all the space would be filled, but that is not the case and we end up with unused space [See @fig-3].
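As a rough illustration of that unused space, a back-of-the-envelope sketch follows; the values are illustrative, and in practice compressed chunks vary in size.

```python
# Sketch: estimate unused space per page when packing fixed-size chunks into
# fixed-size pages. Values are illustrative.
PAGE_SIZE = 8 * 1024 * 1024      # 8 MB aggregation page
AVG_CHUNK = 400 * 1024           # ~400 KB average compressed chunk

chunks_per_page = PAGE_SIZE // AVG_CHUNK          # 20 whole chunks fit per page
wasted = PAGE_SIZE - chunks_per_page * AVG_CHUNK  # leftover bytes per page
print(f"{chunks_per_page} chunks per page, {wasted / 1024:.0f} KB unused per page")
```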
### Recommended access patterns
In progress
### Recommended tooling development
In progress
### Mission implementation
ATL03 is a complex science data product containing both segmented (20 meters along-track) and large, variable-rate photon datasets. ATL03 is created using pipeline-style processing where the science data and NetCDF-style metadata are written by independent software packages. The following steps were employed to create cloud-optimized Release 007 ATL03 products while minimizing increases in file size (a layout-check sketch in Python follows the list):
1. Set the "file space strategy" to H5F_FSPACE_STRATEGY_PAGE and enabled "free space tracking" within the HDF5 file creation property list.
2. Set the "file space page size" to 8MiB.
3. Changed all "COMPACT" dataset storage types to "CONTIGUOUS".
4. Increased the "chunk size" of the photon-rate datasets (from 10,000 to 100,000 elements), while making sure no "chunk sizes" exceeded the 8MiB "file space page size".
5. Introduced a new production step that executes the "h5repack" utility (with no options) to create a "defragmented" final product.
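The snippet below is a minimal sketch for spot-checking the resulting layout with h5py; the file name is illustrative, and the uncompressed chunk size is used as an upper bound for the stored chunk size.

```python
# Sketch: verify dataset storage layouts and chunk sizes in a repacked file.
# The file name is illustrative.
import h5py

PAGE_SIZE = 8 * 1024 * 1024  # 8 MiB file space page size

def check_layout(name, obj):
    if not isinstance(obj, h5py.Dataset):
        return
    layout = obj.id.get_create_plist().get_layout()
    if layout == h5py.h5d.COMPACT:
        print(f"{name}: still COMPACT, expected CONTIGUOUS")
    elif layout == h5py.h5d.CHUNKED and obj.chunks:
        # Upper bound on the stored chunk size: the uncompressed chunk size.
        uncompressed = obj.dtype.itemsize
        for dim in obj.chunks:
            uncompressed *= dim
        if uncompressed > PAGE_SIZE:
            print(f"{name}: chunk of {uncompressed} bytes exceeds the page size")

with h5py.File("ATL03_release007_example.h5", "r") as f:
    f.visititems(check_layout)
```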
## Discussion
1. Chunking shapes and sizes
2. Paged aggregation vs User block
3. Side effects on different access patterns, e.g. Kerchunk
<!-- ## Acknowledgments -->
## References
:::{#refs}
:::