SURF2010
See also previous years: [wiki:SURF2008], [wiki:SURF2007]
* Improving performance in FiPy with partitioned meshing: slides at [source:sandbox/SURF2010/talk/talk.pdf].
The scripts and code from this summer are all located at [source:sandbox/SURF2010]; however, they're poorly documented and very hacky. We were pressed for time toward the end, so I didn't have much time to make things readable for posterity.
* Profiled Trilinos vs. PySparse performance for low-difficulty problems
* Fixed PySparse preconditioner selection
* Rewrote gmshImport.py
  * Uses NumPy's `genfromtxt` to parse the msh file (see the sketch below).
  * Implemented a faster algorithm for parsing meshes from gmsh.
  * Extended gmshImport to allow different shape types within the same mesh.
  * Now easily generalizable to arbitrary shapes given by gmsh.
* Extended the new gmshImport.py to support partitioned meshes
  * Subclasses `MshFile` with `PartedMshFile`, which overrides certain methods to support parallel solution of a partitioned mesh.
  * `PartedMshFile` never reads the entire msh file into memory; it stores only the elements and vertices of the partition relevant to the process ID.
  * Relies on gmsh >= 2.5.0 to provide ghost cell information.
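To illustrate the `genfromtxt` approach, here is a minimal parsing sketch. This is not the actual `gmshImport.py` code; the `$Nodes` layout assumed here is the ASCII MSH 2 format.
{{{
#!python
import numpy as np

def readNodes(mshPath):
    """Sketch: parse the $Nodes block of an ASCII MSH 2 file with genfromtxt."""
    with open(mshPath) as f:
        lines = f.read().splitlines()
    start = lines.index("$Nodes") + 1        # the line after "$Nodes" holds the node count
    numNodes = int(lines[start])
    # Each node line reads "node-number x y z"; genfromtxt vectorizes the
    # parse instead of looping over the lines in Python.
    data = np.genfromtxt(lines[start + 1:start + 1 + numNodes])
    return data[:, 1:4]                      # drop the 1-based node IDs; keep x, y, z
}}}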
Gmsh nightly builds currently provide one layer of ghost cells; we use this to build partitioned meshes (a sketch of how the ghost tags are read appears after the list below). It would be nice to have an arbitrary number of ghost cell layers to support higher-order problems.
* I've sent mail to the gmsh mailing list asking whether the developers plan to include this as a feature.
* If the gmsh developers respond negatively, I'll begin to think about patching gmsh toward this end.
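For reference, here is a sketch of how the partition and ghost information can be read off a single element line of a partitioned MSH 2 file. The tag layout, with negative partition IDs marking ghost copies, follows the MSH 2 spec; the helper itself is mine, not part of `PartedMshFile`.
{{{
#!python
def partitionInfo(elementLine, procID):
    """Sketch: does this MSH 2 element belong to partition procID,
    and if so, is it a ghost cell there?

    Element lines read: elm-number elm-type number-of-tags <tags...> nodes...
    For partitioned meshes the tags are
        physical, elementary, num-partitions, partition-ids...
    where a negative partition ID marks a ghost copy on that partition.
    """
    fields = [int(f) for f in elementLine.split()]
    numTags = fields[2]
    tags = fields[3:3 + numTags]
    if numTags < 4:
        return False, False                  # not a partitioned element
    partitions = tags[3:3 + tags[2]]         # signed partition IDs
    isGhost = -procID in partitions
    return (procID in partitions) or isGhost, isGhost
}}}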
Done [source:sandbox/SURF2010/partitionPlot], but not documented and therefore basically unusable for anyone but me.
When I completed the coding for `PartedMshFile`, the object within `gmshImport.py` that supports partitioned meshes, I found that my initial tests using the new object (solving `examples/diffusion/circle.py` in parallel) were failing for strange reasons, with opaque errors that bore no evident relation to the actual mistake.
After a few hours of debugging, Wheeler and I finally figured out that I was not defining a certain class attribute (`globalNumberOfCells`, or something to that effect) in my newly written `Gmsh2D` object, which descends from `Mesh2D` and uses `PartedMshFile` to construct a partitioned mesh that can be solved in parallel. Because my class definition for `Gmsh2D` lacked this attribute, the parent class (`Mesh2D`) inferred `self.globalNumberOfCells` incorrectly, which had rippling repercussions throughout the object hierarchy.
This faulty derivation, which can be found within `_calcTopology()` of `fipy/meshes/numMesh/mesh.py`, generated an obscure error far downstream, and hours were spent trying to make sense of it.
I do not think the contract between `Mesh2D` and any subclassing meshes that seek to support parallel solution is made clear in the current architecture. The prerequisite definitions needed in such a subclass are obscured by the `hasattr`-based derivations made in `mesh.py`.
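Here is a contrived sketch of the failure mode; this is not FiPy's actual code, just the shape of the problem:
{{{
#!python
class Mesh(object):
    def __init__(self, numberOfCells):
        self.numberOfCells = numberOfCells
        self._calcTopology()

    def _calcTopology(self):
        # If a subclass never defined globalNumberOfCells, silently fall
        # back to the serial value. No error is raised here; the damage
        # only surfaces much later, as inexplicable indexing failures.
        if not hasattr(self, "globalNumberOfCells"):
            self.globalNumberOfCells = self.numberOfCells

class PartitionedMesh(Mesh):
    # Oops: forgot to set globalNumberOfCells, so every process now
    # believes the whole mesh is just its own partition.
    pass
}}}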
Attributes and methods (like `get{Global,Local}{Non,}OverlappingCellIDs()`) that are required for partitioned meshes should be declared in the parent class in `mesh.py` but left unimplemented, so that an error is thrown whenever a child class fails to supply an implementation. This would produce direct errors instead of cryptic indexing fumbles later on down the line. The exact architectural changes that should be made are currently unclear to me, but I hope to supply some more specific recommendations before the summer ends.
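A minimal sketch of that declare-but-don't-implement pattern (the two method names are drawn from the family above; the exact required set is my assumption):
{{{
#!python
class Mesh(object):
    """Parent class: declares the parallel interface without implementing it."""

    def getGlobalOverlappingCellIDs(self):
        raise NotImplementedError("meshes supporting parallel solution "
                                  "must implement this")

    def getLocalNonOverlappingCellIDs(self):
        raise NotImplementedError("meshes supporting parallel solution "
                                  "must implement this")
}}}
A subclass that forgets one of these then fails loudly at the first call, instead of propagating a bad inference into index arithmetic.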
I haven't experienced much heartache on account of the `Epetra`/`mpi4py` schism, but something tells me I soon will. Wheeler recommends that, at some point, we unify these two under the hood of `fipy.tools.parallel`, so that instead of referring directly to those two modules throughout FiPy, we reference the `parallel` module.
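A minimal sketch of what such a shim might look like (the module layout and attribute names here are my assumptions, not a settled FiPy API):
{{{
#!python
# fipy/tools/parallel.py (sketch)
try:
    from PyTrilinos import Epetra
    _comm = Epetra.PyComm()
    procID, Nproc = _comm.MyPID(), _comm.NumProc()
except ImportError:
    try:
        from mpi4py import MPI
        procID, Nproc = MPI.COMM_WORLD.Get_rank(), MPI.COMM_WORLD.Get_size()
    except ImportError:
        procID, Nproc = 0, 1                 # serial fallback
}}}
Code elsewhere in FiPy would then consult `parallel.procID` and `parallel.Nproc` without caring which MPI layer sits underneath.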
Here are a few of the things I'd like to do after I've been expelled from NIST.
* Merging `gmshImport` into trunk
* Gmsh business
  * Add the ability to calculate more layers of overlaps in Gmsh.
  * Figure out how to incorporate Gmsh Grid objects.
    * GmshGrid objects?
    * A Grid wrapper for all dimensions that uses Gmsh when (i) Gmsh is present and (ii) we're running in parallel, and otherwise falls back to the previous Grid implementation? (A rough sketch follows this list.)
  * Complete refactoring of mesh classes?
  * Profiling and optimization of Gmsh?D objects, or even a Mesh-wide treatment.
* Packaging
  * Ubuntu FiPy package
  * ez_install FiPy package
* Testing
  * bitten-slave setup on slug (my home computer)
* Spectral/FFT modules
* CUDA stuff
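A rough sketch of the Grid-wrapper fallback idea from the list above. Both `GmshGrid2D` and `gmshIsPresent()` are hypothetical names here, and the sketch assumes the unified `fipy.tools.parallel` module proposed earlier.
{{{
#!python
import shutil
from fipy import Grid2D
from fipy.tools import parallel      # assumes the unified module sketched above

def gmshIsPresent():
    # Crude check: is a gmsh executable on the PATH? (hypothetical helper)
    return shutil.which("gmsh") is not None

def makeGrid2D(dx=1., dy=1., nx=1, ny=1):
    # Build through Gmsh only when it's available and we're running in
    # parallel; otherwise fall back to the existing Grid2D implementation.
    if gmshIsPresent() and parallel.Nproc > 1:
        return GmshGrid2D(dx=dx, dy=dy, nx=nx, ny=ny)    # hypothetical class
    return Grid2D(dx=dx, dy=dy, nx=nx, ny=ny)
}}}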