
Alternative Analysis Mode #947

Open · AdvancedImagingUTSW opened this issue Jul 22, 2024 · 3 comments

@AdvancedImagingUTSW (Collaborator)
Currently, our software looks at each frame as it comes off the camera and performs image analysis. In some instances, it might be better to look at a volume instead of a plane, and Dushyant wanted the ability to look back over a certain number of frames of history. So a more abstract way to control the number of images that we evaluate would be nice.
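As a starting point, here is a minimal sketch of what that abstraction could look like, assuming a simple rolling buffer; the names (`FrameBuffer`, `depth`) are hypothetical, not existing API:

```python
from collections import deque

import numpy as np


class FrameBuffer:
    """Hypothetical rolling buffer that generalizes per-frame analysis.

    depth=1 reproduces the current behavior (analyze every frame),
    depth=n gives an n-frame history, and depth=n_z_steps would hand
    the analysis routine an entire volume.
    """

    def __init__(self, depth=1):
        self.frames = deque(maxlen=depth)

    def append(self, frame):
        self.frames.append(frame)

    def ready(self):
        # True once the buffer holds `depth` frames.
        return len(self.frames) == self.frames.maxlen

    def as_stack(self):
        # Stack buffered frames into a (depth, y, x) array for analysis.
        return np.stack(self.frames)
```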

@AdvancedImagingUTSW (Collaborator, Author) commented Sep 27, 2024

How can we begin to do analysis on multi-channel Z-stacks?

We would do the analysis between z-stack acquisitions. I assume it will be much slower. I just need a way to grab the volume and then feed the results (e.g., positions) into the next feature.

Presumably, this is all done in blocking mode. I imagine doing this asynchronously would add significant complexity, but I am open to it if you think it is the way to go. In that case, we would have to move to the next position or z-stack while evaluating the previous one, and then react accordingly once the analysis is done.
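For the asynchronous variant, a hedged sketch with `concurrent.futures`: overlap analysis of stack i with acquisition of stack i+1. The `acquire_z_stack`, `analyze_volume`, and `handle_positions` callables are placeholders, not our actual feature API:

```python
from concurrent.futures import ThreadPoolExecutor


def run_overlapped(acquire_z_stack, analyze_volume, handle_positions, n_stacks):
    """Acquire stack i+1 while stack i is analyzed in a worker thread."""
    with ThreadPoolExecutor(max_workers=1) as executor:
        pending = None
        for i in range(n_stacks):
            volume = acquire_z_stack(i)        # blocking acquisition
            if pending is not None:
                # React to the previous stack's analysis once it finishes.
                handle_positions(pending.result())
            pending = executor.submit(analyze_volume, volume)
        if pending is not None:
            handle_positions(pending.result())
```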

To be even more explicit: find features in a large 3D volume acquired at the mesoscale, then switch to the nanoscale and interrogate them locally. This can be run as a feature. Preferably the image is in RAM, but it can also be read from disk if that is more practical. That spooled image writer could come in handy here too.

It is worth noting that 3D analysis is a RAM-intensive and slow endeavor. I would expect roughly 4x RAM overhead, so if we have a volume that is 4 GB, we should probably make sure that we can handle 16 GB of processing or so. With the right GPU, the analysis could also use CUDA-based libraries like CuPy...
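A sketch of what the CuPy path could look like, falling back to the CPU when no GPU is available; this assumes `cupy` (and its bundled `cupyx.scipy.ndimage`) as an optional dependency:

```python
from scipy import ndimage as cpu_ndi

try:
    import cupy as cp
    from cupyx.scipy import ndimage as gpu_ndi
    HAS_GPU = True
except ImportError:
    HAS_GPU = False


def label_objects(volume):
    """Threshold and label a 3D volume, on the GPU when CuPy is available."""
    if HAS_GPU:
        v = cp.asarray(volume)                 # host -> device copy
        labels, n = gpu_ndi.label(v > v.mean())
        return cp.asnumpy(labels), int(n)      # device -> host copy
    labels, n = cpu_ndi.label(volume > volume.mean())
    return labels, n
```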

@AdvancedImagingUTSW (Collaborator, Author)

If we are going to load from disk, we might need to implement a few more image readers. Some already exist: TIFF, N5, OME-Zarr, HDF5... It would be powerful to be able to load the data at different resolutions if it is N5, OME-Zarr, or HDF5. TIFF would have to be loaded at full resolution and down-sampled afterwards.
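For the pyramidal formats, level-aware loading could look roughly like this; it assumes an OME-Zarr layout where resolution levels are stored as arrays named `"0"`, `"1"`, ... inside the group (the exact layout depends on the writer):

```python
import tifffile
import zarr


def load_level(store_path, level=0):
    """Load one resolution level from an OME-Zarr store (layout assumed)."""
    group = zarr.open(store_path, mode="r")
    return group[str(level)][:]     # materialize the chosen pyramid level


def load_tiff_downsampled(path, factor=4):
    """TIFF has no pyramid here, so load full-res and down-sample after.

    Striding is a crude stand-in; proper binning (e.g., skimage's
    downscale_local_mean) would preserve more signal.
    """
    volume = tifffile.imread(path)
    return volume[::factor, ::factor, ::factor]
```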

We want people to be able to load their own 3D analysis feature, and we can also offer a few of our own that are already implemented and available for selection. Where would we begin to save 3D Python functions?
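One possible answer, sketched with `importlib`: let the user point at a `.py` file on disk that exposes a function with an agreed-upon signature. The `analyze_volume(volume) -> positions` convention below is an assumption, not settled API:

```python
import importlib.util


def load_analysis_function(py_file, func_name="analyze_volume"):
    """Load a user-supplied 3D analysis function from a .py file.

    Assumed convention: the module defines a callable
    analyze_volume(volume: np.ndarray) -> list of (z, y, x) positions.
    """
    spec = importlib.util.spec_from_file_location("user_analysis", py_file)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, func_name)
```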

We want to be able to output the positions identified to the multi-position table so that we can switch modes.

We can also save the analysis results back to disk. For OME-Zarr and N5, the results could actually be saved within the same folder hierarchy, so that any data derived from the raw data lives alongside it. For now, just save the analysis result as a 3D TIFF file, but in the future it would be nice to do this properly.
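For the interim solution, a minimal sketch with `tifffile`; the `analysis` sub-directory and file naming here are illustrative:

```python
from pathlib import Path

import numpy as np
import tifffile


def save_analysis_result(result, raw_dir, name="CH00_000000.tif"):
    """Write a 3D analysis result as a 3D TIFF next to the raw data."""
    out_dir = Path(raw_dir) / "analysis"     # illustrative sub-directory
    out_dir.mkdir(parents=True, exist_ok=True)
    # Store binary masks as uint8 so any viewer can open them.
    tifffile.imwrite(out_dir / name, np.asarray(result, dtype=np.uint8))
```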

We could also do an analysis plugin, which would enable us to take on new dependencies (e.g., for GPU-accelerated analysis). For now, I plan to just use numpy and scikit-image, which are already dependencies.

@AdvancedImagingUTSW (Collaborator, Author) commented Oct 7, 2024

This will only be used in combination with a z-stack. The z-stack could be multi-channel, however, and it could also be part of a multi-position acquisition.

I am not certain how we have historically implemented the multi-resolution settings. For example, is the offset between the two microscopes defined relative to the middle of the image, or the corner?

For example, let's say I have an image volume that is 2048 x 2048 x 512 and I find two objects in it. How do I map coordinates from pixel space and stage space on the low-resolution unit to stage space on the high-resolution unit? I want the identified objects to be centered in the high-resolution z-stack acquisition that will follow...
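Assuming the offset is defined at the image center (one of the two conventions asked about above), the mapping could look like the sketch below; every name and calibration term in it is illustrative and would need to be checked against how we actually store the multi-resolution settings:

```python
import numpy as np


def pixel_to_high_res_stage(pixel_zyx, volume_shape, pixel_size_um,
                            low_res_stage_um, center_offset_um):
    """Map a low-res pixel coordinate to a high-res stage position.

    Assumes (1) the low-res stage position refers to the center of the
    volume, and (2) center_offset_um is the calibrated center-to-center
    offset between the two microscopes. Both are assumptions.
    """
    pixel_zyx = np.asarray(pixel_zyx, dtype=float)
    center_px = (np.asarray(volume_shape, dtype=float) - 1.0) / 2.0
    # Object displacement from the volume center, converted to microns.
    delta_um = (pixel_zyx - center_px) * np.asarray(pixel_size_um, dtype=float)
    # Low-res stage position of the object, shifted into high-res space.
    return np.asarray(low_res_stage_um) + delta_um + np.asarray(center_offset_um)
```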

For the low-resolution scan, the analysis results, which will be a binary image, can be saved in a sub-directory with the original data. Cell1/analysis/CH00_000000.tif, etc.

For the high-resolution scan, the results will go in a separate path, simply because this is how multi-position acquisition already works. If you don't think this is a good idea, we can adjust.
Cell2/position1/CH00_000000.tif
Cell2/position2/CH00_...

The low-resolution Z-stack will have a relatively small step size; ideally around 1 micron, in order for the data to be properly Nyquist sampled. So you can imagine having a volume that is 2048 x 2048 x 2048 voxels on the low-resolution side.
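For scale, the arithmetic on that volume (assuming 16-bit camera data and the 4x overhead estimated above, per channel):

```python
voxels = 2048 * 2048 * 2048            # low-res volume
raw_gib = voxels * 2 / 2**30           # 2 bytes per 16-bit voxel
print(f"raw: {raw_gib:.0f} GiB, with 4x analysis overhead: {4 * raw_gib:.0f} GiB")
# raw: 16 GiB, with 4x analysis overhead: 64 GiB
```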

The step size for the high-resolution Z-stack is typically around 167 or 200 nm.
