Component release checklist
Nigel Babu edited this page Feb 1, 2017
- Upgrade one server after the other from 3.{7,8} to 3.9
- Allow self-healing to happen after a node is upgraded
- Upgrade clients after all the servers are upgraded
All of the above steps are manual.
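The manual rolling upgrade above can be sketched roughly as follows. The server names, volume name, and package commands are hypothetical examples, and `heal_pending` simply sums the `Number of entries:` lines of the heal report:

```shell
#!/bin/sh
# Rough sketch of the manual rolling upgrade; server names, volume name and
# package commands are hypothetical examples, not the official procedure.
VOLUME=${VOLUME:-testvol}

# Sum the "Number of entries:" lines of `gluster volume heal <vol> info`
# output read from stdin, so the parsing works without a live cluster.
heal_pending() {
    awk '/^Number of entries:/ { total += $4 } END { print total + 0 }'
}

upgrade_server() {
    # Upgrade one server, then wait for self-healing to drain.
    ssh "$1" 'systemctl stop glusterd &&
              yum -y update glusterfs-server &&
              systemctl start glusterd'
    while [ "$(gluster volume heal "$VOLUME" info | heal_pending)" -gt 0 ]; do
        sleep 10
    done
}

# Needs a real cluster, so the loop is guarded behind an env variable.
if [ -n "${RUN_UPGRADE:-}" ]; then
    for s in server1 server2 server3; do
        upgrade_server "$s"
    done
    # Clients are upgraded only after every server is done.
fi
```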
- M: Pranith Karampuri [email protected]
- S: Maintained
- F: xlators/cluster/afr/
Manual:
- Memory leak tests for shd
- Perform upgrade tests with I/O in progress - Not automated
Automated:
- Run https://github.com/avati/perf-test and compare it with 3.7, 3.8 - Automated
- Run automation we have for AFR
- M: Raghavendra Bhat [email protected]
- S: Maintained
- F: xlators/features/bit-rot/
- M: Aravinda V K [email protected]
- S: Maintained
- F: xlators/features/changelog/
Tests:
- Geo-rep tests cover Changelogs too.
- M: Dan Lambright [email protected]
- S: Maintained
- F: xlators/features/changetimerecorder/
- M: Raghavendra Gowdappa [email protected]
- M: Shyamsundar Ranganathan [email protected]
- S: Maintained
- F: xlators/cluster/dht/
- M: Pranith Karampuri [email protected]
- M: Xavier Hernandez [email protected]
- S: Maintained
- F: xlators/cluster/ec/
Manual:
- Memory leak tests for shd
- Perform upgrade tests with I/O in progress - Not automated
Automated:
- Run https://github.com/avati/perf-test and compare it with 3.7, 3.8 - Automated
- M: Niels de Vos [email protected]
- M: Raghavendra Bhat [email protected]
- S: Maintained
- F: xlators/mount/
- M: Pranith Karampuri [email protected]
- S: Maintained
- F: xlators/features/index/
Same as afr/ec
- M: Pranith Karampuri [email protected]
- S: Maintained
- F: xlators/performance/io-threads/
Tests
- Run https://github.com/avati/perf-test and compare it with 3.7, 3.8 - Automated
- M: Pranith Karampuri [email protected]
- S: Maintained
- F: xlators/features/locks/
Tests
- Same tests as afr/ec
- posix-locks tests https://github.com/linux-test-project/ltp/tree/master/testcases/network/nfsv4/locks - Automated
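A sketch of running the LTP posix-locks suite against a Gluster mount. The mount point and the `locktests` flags (`-n` processes, `-f` test file) are assumptions; check the README under `testcases/network/nfsv4/locks` in the LTP tree for the exact usage:

```shell
#!/bin/sh
# Sketch of running LTP's posix-locks tests on a Gluster mount; the mount
# point and locktests flags are assumptions to be verified against LTP docs.
MOUNT=${MOUNT:-/mnt/glusterfs}

build_cmd() {
    # Compose the command line so it can be inspected before running.
    printf './locktests -n %s -f %s/locktest.file\n' "${1:-50}" "$MOUNT"
}

# Needs a built LTP tree and a mounted volume, so the run is guarded.
if [ -n "${RUN_LOCKTESTS:-}" ]; then
    git clone https://github.com/linux-test-project/ltp.git
    cd ltp/testcases/network/nfsv4/locks && make
    eval "$(build_cmd 50)"
fi
```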
- M: Raghavendra Gowdappa [email protected]
- S: Maintained
- F: xlators/features/marker/
- M: Niels de Vos [email protected]
- S: Maintained
- F: xlators/nfs/server/
Manual:
- Run cthon04
- Install autofs on a client, start the automount service, and use /net///....
- M: Raghavendra Gowdappa [email protected]
- S: Maintained
- F: xlators/performance/
Tests
- Run https://github.com/avati/perf-test and compare it with 3.7, 3.8 - Automated
- M: Pranith Karampuri [email protected]
- M: Raghavendra Bhat [email protected]
- S: Maintained
- F: xlators/storage/posix/
Tests
- Same tests as afr/ec
- M: Raghavendra Gowdappa [email protected]
- S: Maintained
- F: xlators/features/quota/
- M: Dan Lambright [email protected]
- M: Nithya Balachandran [email protected]
- S: Maintained
- F: xlators/cluster/dht/src/tier.c
- F: xlators/features/changetimerecorder
- F: libglusterfs/src/gfdb
- W: http://www.gluster.org/community/documentation/index.php/Features/data-classification
Tests
- Attach/detach tier, 8 nodes, while I/O is running
- High watermark tests: we do not run out of space on the hot tier
- Low watermark tests: promotion happens in such cases
- Promotion and demotion happen according to expected behaviors
- Counters work as expected
- Pass all .t file tests
- Various file ops (open fd, rename, append, dir creations, dir deletion, lookups) + attach tier
- Various file ops (open fd, rename, append, dir creations, dir deletion, lookups) + detach tier
- No avalanche of promotion/demotion towards watermark levels
- EC and dist-rep cold tier
Negative cases:
- Add, remove, and replace brick do not work
- Failure of nodes, glusterd, bricks, etc. does not impact I/O
Basic interop:
- quota, snapshot features work as expected on tiered vol
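A minimal dry-run sketch of the attach/detach tier cycle exercised above. The volume and brick names are invented, and the `gluster volume tier` subcommand syntax should be double-checked against the CLI of the release under test:

```shell
#!/bin/sh
# Dry-run sketch of an attach/detach tier cycle; volume and brick names are
# hypothetical. Set LIVE=1 to actually execute against a real cluster.
run() {
    echo "+ $*"
    if [ -n "${LIVE:-}" ]; then "$@"; fi
}

run gluster volume tier tiervol attach replica 2 \
    hot1:/bricks/hot hot2:/bricks/hot
run gluster volume tier tiervol status        # watch promote/demote counters
run gluster volume tier tiervol detach start
run gluster volume tier tiervol detach status # wait for completion
run gluster volume tier tiervol detach commit
```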
- M: Niels de Vos [email protected]
- S: Maintained
- F: xlators/features/upcall/
Manual:
- Test cache-invalidation feature using NFS-Ganesha HA setup and latest md-cache/upcall integration.
- Verify the files created/deleted from one client are visible from another client.
- Verify if directory entries are also invalidated/updated appropriately.
- Also if it is just stat/lookup/read calls from multiple clients, there shouldn't be any upcall sent. (Other md-cache related tests should be covered under md-cache xlator).
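Setting up the md-cache/upcall integration before these tests can be sketched as below; the option values are the commonly suggested ones for this integration and the volume name is hypothetical, so verify them against the release notes:

```shell
#!/bin/sh
# Sketch of enabling cache-invalidation for the md-cache/upcall tests;
# the volume name is hypothetical. Set LIVE=1 to actually apply the options.
VOL=${VOL:-testvol}
run() {
    echo "+ $*"
    if [ -n "${LIVE:-}" ]; then "$@"; fi
}

run gluster volume set "$VOL" features.cache-invalidation on
run gluster volume set "$VOL" features.cache-invalidation-timeout 600
run gluster volume set "$VOL" performance.stat-prefetch on
run gluster volume set "$VOL" performance.cache-invalidation on
run gluster volume set "$VOL" performance.md-cache-timeout 600
```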
- M: Humble Chirammal [email protected]
- M: Raghavendra Talur [email protected]
- M: Prashanth Pai [email protected]
- S: Maintained
- F: doc/
- M: Aravinda V K [email protected]
- S: Maintained
- F: geo-replication/
Tests:
- Regression tests cover the positive cases, but they are disabled upstream. We can run the rsync/tarssh regression tests.
- S: Orphan
- F: xlators/features/glupy/
- M: Niels de Vos [email protected]
- M: Shyamsundar Ranganathan [email protected]
- S: Maintained
- F: api/
Tests
- Test known applications and run their test suites:
  - (has test suite in repo)
  - (has test suite in repo)
  - (pynfs and cthon04 tests)
- Run the qemu binary and qemu-img with a gluster:// URL, possibly/somehow run the Avocado suite
- Add-brick/replace-brick/remove-brick while I/O is in progress.
- Check with objdump if there are no symbols in libgfapi.so that have a version higher than is being released.
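The objdump check above can be sketched as follows; the `GFAPI_x.y.z` version-tag format and the 3.9.0 ceiling are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the libgfapi symbol-version check; the GFAPI_x.y.z tag format
# and the 3.9.0 ceiling below are assumptions for illustration.
MAX_VERSION=${MAX_VERSION:-3.9.0}

# Extract unique GFAPI_x.y.z version tags from objdump output on stdin.
symbol_versions() {
    grep -o 'GFAPI_[0-9][0-9.]*' | sort -u
}

# Print any version on stdin that sorts above MAX_VERSION; prints nothing
# when the library is clean.
too_new() {
    while read -r v; do
        ver=${v#GFAPI_}
        highest=$(printf '%s\n%s\n' "$ver" "$MAX_VERSION" | sort -V | tail -n1)
        [ "$highest" = "$MAX_VERSION" ] || echo "too new: $v"
    done
}

# Real check against the built library (path is an example):
#   objdump -T /usr/lib64/libgfapi.so.0 | symbol_versions | too_new
```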
- M: Dan Lambright [email protected]
- S: Maintained
- F: libglusterfs/src/gfdb/
- M: Niels de Vos [email protected]
- M: Pranith Karampuri [email protected]
- S: Maintained
- F: libglusterfs/
Tests
- Same as afr/ec, which tests a good amount of the infrastructure present in libglusterfs.
- M: Kaushal Madappa [email protected]
- M: Atin Mukherjee [email protected]
- S: Maintained
- F: cli/
- F: xlators/mgmt/
Tests
- M: Raghavendra Gowdappa [email protected]
- S: Maintained
- F: rpc/
- M: Rajesh Joseph [email protected]
- S: Maintained
- F: xlators/mgmt/glusterd/src/glusterd-snap*
- F: extras/snap-scheduler.py
The following set of automated and manual tests need to pass before doing a release for snapshot component:
- The entire snapshot regression suite present in the source repository, which as of now consists of:
  - ./basic/volume-snapshot.t
  - ./basic/volume-snapshot-clone.t
  - ./basic/volume-snapshot-xml.t
- All tests present in ./bugs/snapshot
- Manual test of using snapshot scheduler.
- Till the eventing test framework is integrated with the regression suite, manual test of all 28 snapshot events.
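Running the snapshot suite from a source checkout can be sketched as below; these tests need root and a disposable test machine, so the actual run is guarded, and the checkout path plus the `tests/` prefix are assumptions about the repository layout:

```shell
#!/bin/sh
# Sketch of running the snapshot regression suite; the checkout path and
# tests/ prefix are assumptions. Needs root, so the run is guarded.
SNAPSHOT_TESTS="tests/basic/volume-snapshot.t
tests/basic/volume-snapshot-clone.t
tests/basic/volume-snapshot-xml.t"

if [ -n "${RUN_SNAPSHOT_TESTS:-}" ]; then
    cd /path/to/glusterfs  # hypothetical source checkout
    # shellcheck disable=SC2086
    prove -vf $SNAPSHOT_TESTS
    prove -vf tests/bugs/snapshot/*.t
fi
```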
- M: Aravinda VK [email protected]
- S: Maintained
- F: events/
- F: libglusterfs/src/events*
- F: libglusterfs/src/eventtypes*
- F: extras/systemd/glustereventsd*
Automated:
- The regression tests patch is under review. We can run the eventsapi regression tests, which cover all the events.
- M: Kaleb Keithley [email protected]
- M: Niels de Vos [email protected]
- S: Maintained
- M: Patrick Matthäi [email protected]
- M: Louis Zuckerman [email protected]
- M: Kaleb S. KEITHLEY [email protected]
- S: Maintained
- W: http://packages.qa.debian.org/g/glusterfs.html
- T: https://github.com/gluster/glusterfs-debian
- M: [email protected]
- M: Humble Chirammal [email protected]
- M: Kaleb Keithley [email protected]
- M: Niels de Vos [email protected]
- S: Maintained
- W: https://apps.fedoraproject.org/packages/glusterfs
- T: http://pkgs.fedoraproject.org/git/glusterfs.git
- S: Orphan
- S: Orphan
- M: Emmanuel Dreyfus [email protected]
- S: Maintained
- W: http://pkgsrc.se/filesystems/glusterfs
- M: Louis Zuckerman [email protected]
- M: Kaleb S. KEITHLEY [email protected]
- S: Maintained
- W: http://download.gluster.org/pub/gluster/glusterfs/LATEST/Ubuntu/Ubuntu.README
- T: https://github.com/gluster/glusterfs-debian
- M: Kaleb S. KEITHLEY [email protected]
- S: Maintained
- W: http://download.gluster.org/pub/gluster/glusterfs/LATEST/SuSE/SuSE.README
- T: https://github.com/gluster/glusterfs-suse
- M: Thiago da Silva [email protected]
- S: Maintained
- T: https://github.com/gluster/gluster-swift.git
Tests
- As per Prashanth Pai, we have unit/functional tests in gluster-swift. At the moment it will be a manual test before release.
- M: Jay Vyas [email protected]
- S: Maintained
- W: https://github.com/gluster/glusterfs-hadoop/wiki
- T: https://github.com/gluster/glusterfs-hadoop.git
- M: Kaleb Keithley [email protected]
- S: Maintained
- T: git://github.com/nfs-ganesha/nfs-ganesha.git
- F: src/nfs-ganesha~/src/FSAL/FSAL_GLUSTER/
- Fresh install of NFS-Ganesha 2.4/2.3 against Gluster 3.9. Run cthon and pynfs tests.
- At the moment there is a known issue with upcall processing in Ganesha. With cache-invalidation off, verify the HA setup and failover/failback.
- Check with ACL enabled
- Run https://github.com/avati/perf-test and compare it with 3.7, 3.8
- Ensure upgrade works with NFS-Ganesha installation.
- M: Bharata B Rao [email protected]
- S: Maintained
- T: git://git.qemu.org/qemu.git
- F: block/gluster.c
- M: Raghavendra Talur [email protected]
- M: Jose Rivera [email protected]
- M: Ira Cooper [email protected]
- S: Maintained
- T: git://git.samba.org/samba.git
- F: source3/modules/vfs_glusterfs.c
Manual:
- Clean install with Gluster 3.9 and Samba; mount should succeed with a Windows client, the cifs client, and smbclient
- Test File creates, edit, ACL set/unset from windows clients
- Compare file create, file list, file read, file write times with Gluster 3.8 with default settings.
- Perform test 3 with the Samba cache option enabled in md-cache.
- Ensure upgrade works with Samba installation.
- Test for ping_pong correctness on the ctdb volume; compare lock rates with the 3.8 version.
- Test for samba vfs_shadow_copy2 integration.
- Test for vfs_acl integration.
- Async I/O tests
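The ping_pong lock-rate check can be sketched as below. The mount path is hypothetical, and the lock count of nodes + 1 follows the usual ctdb ping_pong guidance; verify the flags against the ctdb documentation:

```shell
#!/bin/sh
# Sketch of the ping_pong lock-rate check on a ctdb-backed Gluster mount;
# the mount path is hypothetical. Set LIVE=1 to actually run it.
MOUNT=${MOUNT:-/mnt/ctdb}
NODES=${NODES:-3}
NUM=$((NODES + 1))   # usual ctdb guidance: number of nodes + 1 locks

run() {
    echo "+ $*"
    if [ -n "${LIVE:-}" ]; then "$@"; fi
}

# Run on each node simultaneously; compare the reported locks/sec against
# the same test on a 3.8 cluster.
run ping_pong "$MOUNT/test.dat" "$NUM"
# -rw additionally exercises reads/writes under the lock:
run ping_pong -rw "$MOUNT/test.dat" "$NUM"
```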
- M: Niels de Vos [email protected]
- S: Maintained
- W: https://forge.gluster.org/wireshark
- T: http://code.wireshark.org/git/wireshark
- F: epan/dissectors/packet-gluster*