
Component release checklist


Overall:

  • Upgrade one server after the other from 3.{7,8} to 3.9
  • Allow self-healing to happen after a node is upgraded
  • Upgrade clients after all the servers are upgraded

All of the above is manual.
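This flow lends itself to scripting. Below is a minimal sketch, assuming a replicated volume named testvol, passwordless root SSH to each server, RPM-based packaging, and the stock `gluster volume heal ... info` output format; all hostnames and package commands are placeholders to adapt.

```python
# Sketch of the rolling-upgrade flow above. Assumptions: a replicated
# volume named "testvol", passwordless root SSH to each server, and
# RPM-based servers; adjust hostnames and commands for your environment.
import re
import subprocess
import time

SERVERS = ["server1", "server2", "server3"]   # hypothetical hostnames
VOLUME = "testvol"                            # hypothetical volume name

def heal_pending(volume):
    """Sum the 'Number of entries:' lines from heal info output."""
    out = subprocess.check_output(
        ["gluster", "volume", "heal", volume, "info"]).decode()
    return sum(int(n) for n in re.findall(r"Number of entries: (\d+)", out))

def wait_for_heal(volume, poll=10):
    """Block until self-heal reports no pending entries."""
    while heal_pending(volume) > 0:
        time.sleep(poll)

for server in SERVERS:
    # Upgrade one server at a time (package manager call is a placeholder).
    subprocess.check_call(["ssh", server, "yum", "-y", "update", "glusterfs-server"])
    subprocess.check_call(["ssh", server, "systemctl", "restart", "glusterd"])
    # Let self-healing finish before touching the next node.
    wait_for_heal(VOLUME)
# Only after all servers are done should clients be upgraded (manual step).
```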

xlators:

Automatic File Replication (AFR)

Manual

  • Memory leak tests for shd (see the sketch below)
  • Perform upgrade tests with I/O in progress (not automated)
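The shd memory-leak check can be approximated by sampling the daemon's resident memory while heals are triggered in a loop. A sketch, assuming a single glustershd process on the node, locatable via pgrep; the sampling interval and duration are arbitrary.

```python
# Sample the self-heal daemon's resident memory over time to spot leaks.
# Assumption: exactly one glustershd process on this node (pgrep -f).
import subprocess
import time

def shd_pid():
    return int(subprocess.check_output(["pgrep", "-f", "glustershd"]).split()[0])

def rss_kb(pid):
    """Read VmRSS (kB) from /proc/<pid>/status."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

pid = shd_pid()
baseline = rss_kb(pid)
for minute in range(60):            # sample for an hour while heals run
    time.sleep(60)
    now = rss_kb(pid)
    print("t=%dmin RSS=%dkB (delta %+dkB)" % (minute + 1, now, now - baseline))
```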

Automated:

BitRot

Changelog

Tests:

  • Geo-rep tests cover Changelogs too.

Changetimerecorder

  • M: Dan Lambright [email protected]
  • S: Maintained
  • F: xlators/features/changetimerecorder/

Distributed Hashing Table (DHT)

Erasure Coding

Manual:

  • Memory leak tests for shd
  • Perform upgrade tests with I/O in progress (not automated)

Automated:

FUSE Bridge

Index

Same as afr/ec

IO threads

  • M: Pranith Karampuri [email protected]
  • S: Maintained
  • F: xlators/performance/io-threads/

Tests

Locks

Tests

Marker

NFS

Manual:

  • Run cthon04
  • Install autofs on a client, start the automount service, and use /net///....

Performance

Tests

Posix:

Tests

  • Same tests as afr/ec

Quota

Tiering

Tests

  • Attach/detach tier, 8 nodes, while I/O is running
  • High-watermark tests: verify we do not run out of space on the hot tier (see the sketch after this list)
  • Low-watermark tests: verify promotion still happens in such cases
  • Promotion and demotion happen according to the expected behavior
  • Counters work as expected
  • Pass all .t file tests
  • Various file ops (open fd, rename, append, dir creation, dir deletion, lookups) + attach tier
  • Various file ops (open fd, rename, append, dir creation, dir deletion, lookups) + detach tier
  • No avalanche of promotion/demotion around the watermark levels
  • EC and dist-rep cold tier
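The watermark cases can be driven from the CLI. A minimal sketch, assuming an existing tiered volume named tiervol, the 3.9-era cluster.watermark-hi/cluster.watermark-low volume options, and the `gluster volume tier ... status` counters; the percentage values are only examples.

```python
# Drive the watermark tests above from the gluster CLI.
# Assumptions: an existing tiered volume "tiervol"; the option names are
# the 3.9-era tiering options and the percentages are just examples.
import subprocess

VOLUME = "tiervol"  # hypothetical tiered volume

def gluster(*args):
    out = subprocess.check_output(("gluster",) + args).decode()
    print(out)
    return out

# Tighten the watermarks so promotion/demotion triggers quickly.
gluster("volume", "set", VOLUME, "cluster.watermark-hi", "60")
gluster("volume", "set", VOLUME, "cluster.watermark-low", "40")

# ... generate I/O against the volume here, then check the counters ...

# Promotion/demotion counters should move as expected and show no
# avalanche of migrations around the watermark levels.
gluster("volume", "tier", VOLUME, "status")
```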

Negative cases:

  • Add-brick, remove-brick, and replace-brick do not work
  • Failure of nodes, glusterd, bricks, etc. does not impact I/O

Basic interop:

  • quota, snapshot features work as expected on tiered vol

Upcall

Manual:

  • Test the cache-invalidation feature using an NFS-Ganesha HA setup and the latest md-cache/upcall integration.
  • Verify that files created/deleted from one client are visible from another client.
  • Verify that directory entries are also invalidated/updated appropriately.
  • Also, if clients issue only stat/lookup/read calls, no upcall should be sent. (Other md-cache related tests should be covered under the md-cache xlator.)
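The cross-client visibility checks can be scripted against two mounts of the same volume. A sketch, assuming the volume is already mounted at two hypothetical paths (in a real run these would be on two different client machines); the 30-second window is arbitrary.

```python
# Verify that a file created through one mount becomes visible through
# another within a bounded time. Mount paths are hypothetical.
import os
import time

MOUNT_A = "/mnt/client-a"   # mount where the file is created
MOUNT_B = "/mnt/client-b"   # mount where visibility is checked
NAME = "upcall-test-file"

with open(os.path.join(MOUNT_A, NAME), "w") as f:
    f.write("created via client A\n")

deadline = time.time() + 30          # allow 30s for invalidation to land
while time.time() < deadline:
    if NAME in os.listdir(MOUNT_B):  # directory entry must be updated too
        print("visible on client B")
        break
    time.sleep(1)
else:
    raise SystemExit("file never became visible on client B")

os.unlink(os.path.join(MOUNT_A, NAME))
```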

Other bits of code:

Doc

Geo Replication

Tests:

  • Regression tests cover the positive cases, but they are disabled upstream. We can run the rsync/tarssh regression tests.

Glupy

  • S: Orphan
  • F: xlators/features/glupy/

libgfapi

Tests

  • Test known applications by running their test suites:

glusterfs-coreutils

(has test suite in repo)

libgfapi-python

(has test suite in repo)
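Beyond its bundled suite, a quick smoke test is easy to script. A sketch, assuming the libgfapi-python API of this era (gluster.gfapi.Volume) and a hypothetical host/volume name.

```python
# Smoke test via libgfapi-python. Host and volume names are hypothetical;
# API usage assumes the libgfapi-python bindings of this era.
from gluster import gfapi

vol = gfapi.Volume("server1", "testvol")
vol.mount()
try:
    with vol.fopen("smoke.txt", "w") as f:
        f.write("hello from libgfapi-python\n")
    assert "smoke.txt" in vol.listdir("/")
    vol.unlink("smoke.txt")
finally:
    vol.umount()
```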

nfs-ganesha

(pynfs and cthon04 tests)

Samba (test?)

QEMU

  • Run the qemu binary and qemu-img with a gluster:// URL; possibly/somehow run the Avocado suite
  • Add-brick/replace-brick/remove-brick while I/O is in progress.
  • Use objdump to check that libgfapi.so exports no symbols with a version higher than the release being made (see the sketch below).
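The symbol-version check in the last item can be automated by parsing `objdump -T` output. A sketch, assuming symbol versions of the GFAPI_x.y.z form; the library path and release number are placeholders.

```python
# Check that libgfapi.so exports no symbol versioned higher than the
# release being made (library path and release number are placeholders).
import re
import subprocess

LIB = "/usr/lib64/libgfapi.so.0"   # placeholder path
RELEASE = (3, 9, 0)                # release being cut

out = subprocess.check_output(["objdump", "-T", LIB]).decode()
too_new = set()
for ver in re.findall(r"GFAPI_(\d+)\.(\d+)\.(\d+)", out):
    if tuple(int(x) for x in ver) > RELEASE:
        too_new.add("GFAPI_%s.%s.%s" % ver)

if too_new:
    raise SystemExit("symbols newer than release: %s" % ", ".join(sorted(too_new)))
print("no symbol versions newer than %d.%d.%d" % RELEASE)
```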

libgfdb

libglusterfs

Tests

  • Same as afr/ec, which exercises a good amount of the infrastructure present in libglusterfs.

Management Daemon

Tests

Remote Procedure Call subsystem

Snapshot

  • M: Rajesh Joseph [email protected]
  • S: Maintained
  • F: xlators/mgmt/glusterd/src/glusterd-snap*
  • F: extras/snap-scheduler.py

The following set of automated and manual tests need to pass before doing a release for snapshot component:

  • The entire snapshot regression suite present in the source repository, which as of now consists of:
  • ./basic/volume-snapshot.t
  • ./basic/volume-snapshot-clone.t
  • ./basic/volume-snapshot-xml.t
  • All tests present in ./bugs/snapshot
  • Manual test of using snapshot scheduler.
  • Until the eventing test framework is integrated with the regression suite, manually test all 28 snapshot events.
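The listed .t files can be run with the `prove` harness the regression suite already uses. A sketch, assuming it is executed from a built glusterfs source tree.

```python
# Run the snapshot regression .t files with the prove harness, as the
# regression suite does. Assumes a built glusterfs source tree as CWD.
import glob
import subprocess

tests = [
    "tests/basic/volume-snapshot.t",
    "tests/basic/volume-snapshot-clone.t",
    "tests/basic/volume-snapshot-xml.t",
] + sorted(glob.glob("tests/bugs/snapshot/*.t"))

for t in tests:
    subprocess.check_call(["prove", "-vf", t])
```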

Events APIs

  • M: Aravinda VK [email protected]
  • S: Maintained
  • F: events/
  • F: libglusterfs/src/events*
  • F: libglusterfs/src/eventtypes*
  • F: extras/systemd/glustereventsd*

Automated:

  • The regression tests patch is under review. We can run the eventsapi regression tests, which cover all the events.
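For the manual side, events can be captured with a tiny webhook listener registered via `gluster-eventsapi webhook-add`. A sketch; the port is arbitrary.

```python
# Minimal webhook listener for eventing tests. Register it with:
#   gluster-eventsapi webhook-add http://<this-host>:9000/
# then trigger events (e.g. snapshot create, volume start) and watch
# them arrive as JSON POSTs. The port is arbitrary.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        try:
            print(json.dumps(json.loads(body), indent=2))
        except ValueError:
            print(body)
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 9000), EventHandler).serve_forever()
```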

Distribution Specific:

Build:

Debian Packaging

Fedora Packaging

FreeBSD port

  • S: Orphan

MacOS X port

  • S: Orphan

NetBSD Packaging

Ubuntu Packaging

SuSE Packaging

Related projects

Gluster Openstack Swift

Tests

  • As per Prashanth Pai, we have unit/functional tests in gluster-swift. For the moment this will be a manual test before release.

GlusterFS Hadoop HCFS plugin

NFS-Ganesha FSAL plugin

  • M: Kaleb Keithley [email protected]
  • S: Maintained
  • T: git://github.com/nfs-ganesha/nfs-ganesha.git
  • F: src/FSAL/FSAL_GLUSTER/

Manual:

  • Fresh install of NFS-Ganesha 2.4/2.3 against Gluster 3.9. Run the cthon and pynfs tests.
  • At the moment there is a known issue with upcall processing in Ganesha. With cache-invalidation off, verify the HA setup and failover/failback.
  • Check with ACLs enabled
  • Run https://github.com/avati/perf-test and compare the results with 3.7 and 3.8
  • Ensure upgrade works with an NFS-Ganesha installation.

QEMU integration

  • M: Bharata B Rao [email protected]
  • S: Maintained
  • T: git://git.qemu.org/qemu.git
  • F: block/gluster.c

Samba VFS plugin

Manual:

  • Clean install with Gluster 3.9 and Samba; mounting should succeed with a Windows client, a cifs client, and smbclient
  • Test file creates, edits, and ACL set/unset from Windows clients
  • Compare file create, file list, file read, and file write times against Gluster 3.8 with default settings (see the timing sketch after this list).
  • Perform the comparison above with the Samba cache option enabled in md-cache.
  • Ensure upgrade works with a Samba installation.
  • Test ping_pong correctness on a ctdb volume and compare lock rates with the 3.8 version.
  • Test the Samba vfs_shadow_copy2 integration.
  • Test the vfs_acl integration.
  • Async I/O tests
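The timing comparisons above can be scripted with smbclient. A sketch, assuming a hypothetical share //server1/gvol and test credentials; run the same script against 3.8 and 3.9 builds (and again with the md-cache Samba option on) and compare the printed times.

```python
# Time basic SMB operations via smbclient for the comparison tests above.
# Share, user, and password are hypothetical placeholders.
import subprocess
import time

SHARE = "//server1/gvol"        # hypothetical Samba share backed by gluster
AUTH = "testuser%testpass"      # hypothetical credentials

def timed(label, smb_command):
    start = time.time()
    subprocess.check_call(
        ["smbclient", SHARE, "-U", AUTH, "-c", smb_command],
        stdout=subprocess.DEVNULL)
    print("%-12s %.2fs" % (label, time.time() - start))

# Create a 100 MB test file locally, then push/list/pull it via SMB.
subprocess.check_call(["dd", "if=/dev/zero", "of=/tmp/blob", "bs=1M", "count=100"])
timed("create", "put /tmp/blob blob")
timed("list", "ls")
timed("read", "get blob /tmp/blob.out")
```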

Wireshark dissectors
