Merge remote-tracking branch 'intel/master' into jxiong/variable_in_loop
jxiong committed Nov 5, 2024
2 parents 5c8a986 + 35ee55e commit 046c899
Showing 192 changed files with 27,760 additions and 3,334 deletions.
25 changes: 16 additions & 9 deletions .github/workflows/trivy.yml
@@ -1,7 +1,12 @@
# SPDX-License-Identifier: BSD-2-Clause-Patent
# Copyright (c) 2024 Intel Corporation.

name: Trivy scan

on:
workflow_dispatch:
schedule:
- cron: '0 0 * * *'
push:
branches: ["master", "release/**"]
pull_request:
@@ -11,15 +16,17 @@ on:
permissions: {}

jobs:
build:
name: Build
runs-on: ubuntu-20.04
scan:
name: Scan with Trivy
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- name: Checkout code
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1

- name: Run Trivy vulnerability scanner in repo mode
uses: aquasecurity/trivy-action@6e7b7d1fd3e4fef0c5fa8cce1229c54b2c9bd0d8 # 0.24.0
- name: Run Trivy vulnerability scanner in filesystem mode (table format)
uses: aquasecurity/trivy-action@915b19bbe73b92a6cf82a1bc12b087c9a19a5fe2 # 0.28.0
with:
scan-type: 'fs'
scan-ref: '.'
@@ -43,8 +50,8 @@ jobs:
utils/trivy/trivy.yaml
sed -i 's/format: template/format: sarif/g' utils/trivy/trivy.yaml
- name: Run Trivy vulnerability scanner in repo mode
uses: aquasecurity/trivy-action@6e7b7d1fd3e4fef0c5fa8cce1229c54b2c9bd0d8 # 0.24.0
- name: Run Trivy vulnerability scanner in filesystem mode (sarif format)
uses: aquasecurity/trivy-action@915b19bbe73b92a6cf82a1bc12b087c9a19a5fe2 # 0.28.0
with:
scan-type: 'fs'
scan-ref: '.'
@@ -62,8 +69,8 @@ jobs:
sed -i 's/format: sarif/format: table/g' utils/trivy/trivy.yaml
sed -i 's/exit-code: 0/exit-code: 1/g' utils/trivy/trivy.yaml
- name: Run Trivy vulnerability scanner in repo mode
uses: aquasecurity/trivy-action@6e7b7d1fd3e4fef0c5fa8cce1229c54b2c9bd0d8 # 0.24.0
- name: Run Trivy vulnerability scanner in filesystem mode (human readable format)
uses: aquasecurity/trivy-action@915b19bbe73b92a6cf82a1bc12b087c9a19a5fe2 # 0.28.0
with:
scan-type: 'fs'
scan-ref: '.'
10 changes: 9 additions & 1 deletion Jenkinsfile
@@ -876,7 +876,7 @@ pipeline {
}
steps {
job_step_update(
unitTest(timeout_time: 60,
unitTest(timeout_time: 180,
unstash_opt: true,
ignore_failure: true,
inst_repos: prRepos(),
@@ -1167,6 +1167,7 @@ pipeline {
'Functional Hardware Medium': getFunctionalTestStage(
name: 'Functional Hardware Medium',
pragma_suffix: '-hw-medium',
base_branch: 'master',
label: params.FUNCTIONAL_HARDWARE_MEDIUM_LABEL,
next_version: next_version,
stage_tags: 'hw,medium,-provider',
@@ -1179,6 +1180,7 @@ pipeline {
'Functional Hardware Medium MD on SSD': getFunctionalTestStage(
name: 'Functional Hardware Medium MD on SSD',
pragma_suffix: '-hw-medium-md-on-ssd',
base_branch: 'master',
label: params.FUNCTIONAL_HARDWARE_MEDIUM_LABEL,
next_version: next_version,
stage_tags: 'hw,medium,-provider',
@@ -1192,6 +1194,7 @@ pipeline {
'Functional Hardware Medium VMD': getFunctionalTestStage(
name: 'Functional Hardware Medium VMD',
pragma_suffix: '-hw-medium-vmd',
base_branch: 'master',
label: params.FUNCTIONAL_HARDWARE_MEDIUM_VMD_LABEL,
next_version: next_version,
stage_tags: 'hw_vmd,medium',
@@ -1205,6 +1208,7 @@ pipeline {
'Functional Hardware Medium Verbs Provider': getFunctionalTestStage(
name: 'Functional Hardware Medium Verbs Provider',
pragma_suffix: '-hw-medium-verbs-provider',
base_branch: 'master',
label: params.FUNCTIONAL_HARDWARE_MEDIUM_VERBS_PROVIDER_LABEL,
next_version: next_version,
stage_tags: 'hw,medium,provider',
@@ -1218,6 +1222,7 @@ pipeline {
'Functional Hardware Medium Verbs Provider MD on SSD': getFunctionalTestStage(
name: 'Functional Hardware Medium Verbs Provider MD on SSD',
pragma_suffix: '-hw-medium-verbs-provider-md-on-ssd',
base_branch: 'master',
label: params.FUNCTIONAL_HARDWARE_MEDIUM_VERBS_PROVIDER_LABEL,
next_version: next_version,
stage_tags: 'hw,medium,provider',
@@ -1232,6 +1237,7 @@ pipeline {
'Functional Hardware Medium UCX Provider': getFunctionalTestStage(
name: 'Functional Hardware Medium UCX Provider',
pragma_suffix: '-hw-medium-ucx-provider',
base_branch: 'master',
label: params.FUNCTIONAL_HARDWARE_MEDIUM_UCX_PROVIDER_LABEL,
next_version: next_version,
stage_tags: 'hw,medium,provider',
@@ -1245,6 +1251,7 @@ pipeline {
'Functional Hardware Large': getFunctionalTestStage(
name: 'Functional Hardware Large',
pragma_suffix: '-hw-large',
base_branch: 'master',
label: params.FUNCTIONAL_HARDWARE_LARGE_LABEL,
next_version: next_version,
stage_tags: 'hw,large',
@@ -1257,6 +1264,7 @@ pipeline {
'Functional Hardware Large MD on SSD': getFunctionalTestStage(
name: 'Functional Hardware Large MD on SSD',
pragma_suffix: '-hw-large-md-on-ssd',
base_branch: 'master',
label: params.FUNCTIONAL_HARDWARE_LARGE_LABEL,
next_version: next_version,
stage_tags: 'hw,large',
1 change: 1 addition & 0 deletions README.md
@@ -5,6 +5,7 @@
[![Build](https://github.com/daos-stack/daos/actions/workflows/ci2.yml/badge.svg)](https://github.com/daos-stack/daos/actions/workflows/ci2.yml)
[![Codespell](https://github.com/daos-stack/daos/actions/workflows/spelling.yml/badge.svg)](https://github.com/daos-stack/daos/actions/workflows/spelling.yml)
[![Doxygen](https://github.com/daos-stack/daos/actions/workflows/doxygen.yml/badge.svg)](https://github.com/daos-stack/daos/actions/workflows/doxygen.yml)
[![Trivy scan](https://github.com/daos-stack/daos/actions/workflows/trivy.yml/badge.svg)](https://github.com/daos-stack/daos/actions/workflows/trivy.yml)

<a href="https://docs.daos.io/">
<img src="https://avatars.githubusercontent.com/u/20561043?s=400&u=db7cd0ada987ba59c21c3de5f9e7cffba73c3325&v=4" width="200" height="200">
8 changes: 8 additions & 0 deletions debian/changelog
@@ -1,3 +1,10 @@
daos (2.7.100-10) unstable; urgency=medium

[ Sherin T George ]
* Add DAV v2 lib

-- Sherin T George <[email protected]> Fri, 1 Nov 2024 11:54:00 +0530

daos (2.7.100-9) unstable; urgency=medium
[ Brian J. Murrell ]
* Remove Build-Depends: for UCX as they were obsoleted as of e01970d
@@ -130,6 +137,7 @@ daos (2.5.100-12) unstable; urgency=medium

-- Tomasz Gromadzki <[email protected]> Fri, 17 Nov 2023 12:52:00 -0400

daos (2.5.100-11) unstable; urgency=medium
[ Jerome Soumagne ]
* Bump mercury min version to 2.3.1

1 change: 1 addition & 0 deletions debian/daos-server.install
@@ -28,6 +28,7 @@ usr/lib64/daos_srv/libbio.so
usr/lib64/daos_srv/libplacement.so
usr/lib64/daos_srv/libpipeline.so
usr/lib64/libdaos_common_pmem.so
usr/lib64/libdav_v2.so
usr/share/daos/control/setup_spdk.sh
usr/lib/systemd/system/daos_server.service
usr/lib/sysctl.d/10-daos_server.conf
190 changes: 190 additions & 0 deletions docs/admin/pool_operations.md
@@ -26,6 +26,7 @@ Its subcommands can be grouped into the following areas:
* An upgrade command to upgrade a pool's format version
after a DAOS software upgrade.


### Creating a Pool

A DAOS pool can be created through the `dmg pool create` command.
@@ -170,6 +171,195 @@ on pool size, but also on number of targets, target size, object class,
storage redundancy factor, etc.


#### Creating a pool in MD-on-SSD mode

In MD-on-SSD mode, a pool is made up of a single component in memory (the
RAM-disk associated with each engine) and three components on storage (NVMe
SSD). The storage components correspond to the WAL, META and DATA "roles",
which are assigned to hardware devices in the
[server configuration file](https://docs.daos.io/v2.6/admin/deployment/#server-configuration-file).

In MD-on-SSD mode, pools are by default created with equal allocations for
metadata-in-memory and metadata-on-SSD, but this can be changed. To create a
pool with a metadata-on-SSD allocation that is double what is allocated in
memory, set the `dmg pool create --mem-ratio` option to `50%`. This sets the
ratio of in-memory metadata to on-storage metadata to 0.5, so the
metadata-on-SSD allocation is twice the metadata-in-memory allocation.

An MD-on-SSD pool created with a `--mem-ratio` between 0 and 100 percent is
said to be operating in "phase-2" mode.
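
For example, a pool whose metadata-on-SSD allocation is double its in-memory
allocation could be requested as follows (a minimal sketch; the pool label
`tank` is illustrative, and full command output is shown in the examples
below):

```bash
# Request 50% of the available capacity with an in-memory:on-SSD metadata ratio of 0.5.
dmg pool create tank --size 50% --mem-ratio 50%
```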

#### MD-on-SSD phase-2 pool create examples

These examples cover the recommended way to create a pool in MD-on-SSD phase-2
mode using the `--size` percentage option.

The following example is run on a single host with dual engines where bdev
roles META and DATA are not shared. Two pools are created with VOS index file
size equal to half the meta-blob size (`--mem-ratio 50%`). Both pools use
roughly half of the originally available capacity (the first requests 50% of
the total and the second 100% of the remainder).

Rough calculations: `dmg storage scan` shows that, for each rank, one 800GB SSD
is assigned to each tier (first: WAL+META, second: DATA), and `df -h /mnt/daos*`
reports that the usable ramdisk capacity for each rank is 66GiB (the command
sketch after this list shows how these figures are gathered).
- Expected DATA storage per rank is then 400GB for both the 50%-capacity first
  pool and the 100%-capacity second pool.
- Expected META storage per rank at a 50% mem-ratio is `66GiB*2 = 132GiB ≈ 141GB`,
  giving ~70GB each for the 50% first pool and the 100% second pool.
- Expected memory file size is `66GiB/2 = 33GiB ≈ 35GB` per rank for both the
  50% first and 100% second pools.
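
The raw capacity figures used in these calculations can be gathered before pool
creation with the commands below (a minimal sketch; the `/mnt/daos*` mount path
and the reported values are specific to this example setup):

```bash
# Show per-host NVMe SSD and SCM capacity for each engine.
dmg storage scan

# Show the usable RAM-disk capacity backing each engine's mount point.
df -h /mnt/daos*
```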

```bash
$ dmg pool create bob --size 50% --mem-ratio 50%

Pool created with 14.86%,85.14% storage tier ratio
--------------------------------------------------
UUID : 47060d94-c689-4981-8c89-011beb063f8f
Service Leader : 0
Service Ranks : [0-1]
Storage Ranks : [0-1]
Total Size : 940 GB
Metadata Storage : 140 GB (70 GB / rank)
Data Storage : 800 GB (400 GB / rank)
Memory File Size : 70 GB (35 GB / rank)

$ dmg pool create bob2 --size 100% --mem-ratio 50%

Pool created with 14.47%,85.53% storage tier ratio
--------------------------------------------------
UUID : bdbef091-f0f8-411d-8995-f91c4efc690f
Service Leader : 1
Service Ranks : [0-1]
Storage Ranks : [0-1]
Total Size : 935 GB
Metadata Storage : 135 GB (68 GB / rank)
Data Storage : 800 GB (400 GB / rank)
Memory File Size : 68 GB (34 GB / rank)

$ dmg pool query bob

Pool 47060d94-c689-4981-8c89-011beb063f8f, ntarget=32, disabled=0, leader=0, version=1, state=Ready
Pool health info:
- Rebuild idle, 0 objs, 0 recs
Pool space info:
- Target count:32
- Total memory-file size: 70 GB
- Metadata storage:
Total size: 140 GB
Free: 131 GB, min:4.1 GB, max:4.1 GB, mean:4.1 GB
- Data storage:
Total size: 800 GB
Free: 799 GB, min:25 GB, max:25 GB, mean:25 GB

$ dmg pool query bob2

Pool bdbef091-f0f8-411d-8995-f91c4efc690f, ntarget=32, disabled=0, leader=1, version=1, state=Ready
Pool health info:
- Rebuild idle, 0 objs, 0 recs
Pool space info:
- Target count:32
- Total memory-file size: 68 GB
- Metadata storage:
Total size: 135 GB
Free: 127 GB, min:4.0 GB, max:4.0 GB, mean:4.0 GB
- Data storage:
Total size: 800 GB
Free: 799 GB, min:25 GB, max:25 GB, mean:25 GB
```

The following examples are run on a single host with dual engines where the
bdev roles WAL, META and DATA are shared.

First, a single pool is created with VOS index file size equal to the meta-blob
size (`--mem-ratio 100%`).

```bash
$ dmg pool create bob --size 100% --mem-ratio 100%

Pool created with 5.93%,94.07% storage tier ratio
-------------------------------------------------
UUID : bad54f1d-8976-428b-a5dd-243372dfa65c
Service Leader : 1
Service Ranks : [0-1]
Storage Ranks : [0-1]
Total Size : 2.4 TB
Metadata Storage : 140 GB (70 GB / rank)
Data Storage : 2.2 TB (1.1 TB / rank)
Memory File Size : 140 GB (70 GB / rank)

```

Rough calculations: 1.2TB of usable space is returned from storage scan;
because roles are shared, the required META allocation (70GB) is reserved from
it, so only 1.1TB is provided for data.

Logging shows:
```bash
DEBUG 2024/09/24 15:44:38.554431 pool.go:1139: added smd device c7da7391-9077-4eb6-9f4a-a3d656166236 (rank 1, ctrlr 0000:d8:00.0, roles "data,meta,wal") as usable: device state="NORMAL", smd-size 623 GB (623307128832), ctrlr-total-free 623 GB (623307128832)
DEBUG 2024/09/24 15:44:38.554516 pool.go:1139: added smd device 18c7bf45-7586-49ba-93c0-cbc08caed901 (rank 1, ctrlr 0000:d9:00.0, roles "data,meta,wal") as usable: device state="NORMAL", smd-size 554 GB (554050781184), ctrlr-total-free 1.2 TB (1177357910016)
DEBUG 2024/09/24 15:44:38.554603 pool.go:1246: based on minimum available ramdisk capacity of 70 GB and mem-ratio 1.00 with 70 GB of reserved metadata capacity, the maximum per-rank sizes for a pool are META=70 GB (69792169984 B) DATA=1.1 TB (1107565740032 B)
```

Now the same as above, but with a single pool whose VOS index file size is a
quarter of the meta-blob size (`--mem-ratio 25%`).

```bash
$ dmg pool create bob --size 100% --mem-ratio 25%

Pool created with 23.71%,76.29% storage tier ratio
--------------------------------------------------
UUID : 999ecf55-474e-4476-9f90-0b4c754d4619
Service Leader : 0
Service Ranks : [0-1]
Storage Ranks : [0-1]
Total Size : 2.4 TB
Metadata Storage : 558 GB (279 GB / rank)
Data Storage : 1.8 TB (898 GB / rank)
Memory File Size : 140 GB (70 GB / rank)

```

Rough calculations: 1.2TB of usable space is returned from storage scan;
because roles are shared, the required META allocation (279GB) is reserved from
it, so only ~900GB is provided for data.

Logging shows:
```bash
DEBUG 2024/09/24 16:16:00.172719 pool.go:1246: based on minimum available ramdisk capacity of 70 GB and mem-ratio 0.25 with 279 GB of reserved metadata capacity, the maximum per-rank sizes for a pool are META=279 GB (279168679936 B) DATA=898 GB (898189230080 B)
```
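
The sizes reported in the log can be reproduced by hand (an illustrative sketch
only, using the per-rank figures from this example; the variable names are not
dmg terminology):

```bash
# Per-rank figures from the example above (GB).
ramdisk=70          # minimum available ramdisk capacity per rank
mem_ratio=0.25      # --mem-ratio 25%
usable_ssd=1177     # ctrlr-total-free across the shared-role SSDs

# The META allocation scales inversely with the mem-ratio:
# 70 / 0.25 = 280 GB (the log reports ~279 GB after rounding).
meta=$(echo "$ramdisk / $mem_ratio" | bc)

# With shared bdev roles, DATA is what remains after reserving META:
# 1177 - 279 = 898 GB, matching the log.
data=$(echo "$usable_ssd - 279" | bc)

echo "META=${meta} GB  DATA=${data} GB"
```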

Now with 6 ranks, and a single pool with VOS index file size equal to half of
the meta-blob size (`--mem-ratio 50%`).

```bash
$ dmg pool create bob --size 100% --mem-ratio 50%

Pool created with 11.86%,88.14% storage tier ratio
--------------------------------------------------
UUID : 4fa38199-23a9-4b4d-aa9a-8b9838cad1d6
Service Leader : 1
Service Ranks : [0-2,4-5]
Storage Ranks : [0-5]
Total Size : 7.1 TB
Metadata Storage : 838 GB (140 GB / rank)
Data Storage : 6.2 TB (1.0 TB / rank)
Memory File Size : 419 GB (70 GB / rank)

```

Rough calculations: 1177 GB of usable space per rank is returned from storage
scan; because roles are shared, the required META allocation (140 GB) is
reserved from it, so only 1037 GB per rank is provided for data.

Logging shows:
```bash
DEBUG 2024/09/24 16:40:41.570331 pool.go:1139: added smd device c921c7b9-5f5c-4332-a878-0ebb8191c160 (rank 1, ctrlr 0000:d8:00.0, roles "data,meta,wal") as usable: device state="NORMAL", smd-size 623 GB (623307128832), ctrlr-total-free 623 GB (623307128832)
DEBUG 2024/09/24 16:40:41.570447 pool.go:1139: added smd device a071c3cf-5de1-4911-8549-8c5e8f550554 (rank 1, ctrlr 0000:d9:00.0, roles "data,meta,wal") as usable: device state="NORMAL", smd-size 554 GB (554050781184), ctrlr-total-free 1.2 TB (1177357910016)
DEBUG 2024/09/24 16:40:41.570549 pool.go:1246: based on minimum available ramdisk capacity of 70 GB and mem-ratio 0.50 with 140 GB of reserved metadata capacity, the maximum per-rank sizes for a pool are META=140 GB (139584339968 B) DATA=1.0 TB (1037773570048 B)
```


### Listing Pools

To see a list of the pools in the DAOS system:
1 change: 0 additions & 1 deletion docs/user/filesystem.md
@@ -228,7 +228,6 @@ Additionally, there are several optional command-line options:
| --container=<label\|uuid\> | container label or uuid to open |
| --sys-name=<name\> | DAOS system name |
| --foreground | run in foreground |
| --singlethreaded | run single threaded |
| --thread-count=<count> | Number of threads to use |
| --multi-user | Run in multi user mode |
| --read-only | Mount in read-only mode |