Releases: vkostyukov/la4j
0.6.0
- Fix validation errors in Matrix.insert
- New matrix methods insertRow and insertColumn
- Support comments in Matrix Market format
- New NO_PIVOT_GAUSS matrix inverter
- New matrix norms: Euclidean, Infinity and Manhattan
- New factory methods for Vector: fromMap and fromCollection
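A quick, hedged sketch of the new Vector factories and row insertion; the exact signatures below (a map of index-to-value pairs plus a length for fromMap, a collection of numbers for fromCollection, and insertRow(int, Vector)) are my reading of the bullets above, so check the Javadoc before relying on them:
Map<Integer, Double> nonZeros = new HashMap<Integer, Double>();
nonZeros.put(0, 1.0);
nonZeros.put(2, 3.0);
Vector sparse = Vector.fromMap(nonZeros, 3); // assumed: fromMap(Map<Integer, ? extends Number>, length)
Vector dense = Vector.fromCollection(Arrays.asList(1.0, 2.0, 3.0)); // assumed: fromCollection(Collection<? extends Number>)
Matrix a = Matrix.identity(3);
Matrix b = a.insertRow(1, dense); // assumed: insertRow(int, Vector) per the bullet above
Grab the release on Maven Central: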
<dependency>
<groupId>org.la4j</groupId>
<artifactId>la4j</artifactId>
<version>0.6.0</version>
</dependency>
0.5.5
This is a cleanup release. Every deprecated method/class has been removed:
- There is no longer an org.la4j.factory package; use static factory methods instead:
Matrix a = Matrix.identity(100); // creates an identity matrix
Matrix b = CRSMatrix.zero(100, 100); // creates a zero matrix (a matrix of all zeros)
Matrix c = DenseMatrix.constant(100, 100, 3.14); // creates a constant matrix
- There is no longer an org.la4j.io package; use static factory methods instead:
Matrix a = Matrix.fromCSV("0.0, 0.1\n1.0, 2.0");
Matrix b = CRSMatrix.fromMatrixMarket(mmString);
- Matrices and vectors are no longer serializable objects. Use the X.toBinary() and X.fromBinary() methods instead.
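For example, a binary round trip might look like the sketch below; I'm assuming here that toBinary() returns a byte[] and that fromBinary(byte[]) is a static factory on Matrix, so double-check the exact signatures:
Matrix a = Matrix.identity(100);
byte[] bytes = a.toBinary(); // assumed to return byte[]
Matrix b = Matrix.fromBinary(bytes); // assumed static factory taking the byte[]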
Grab the release on Maven Central:
<dependency>
<groupId>org.la4j</groupId>
<artifactId>la4j</artifactId>
<version>0.5.5</version>
</dependency>
0.5.0
It's been a while since the previous release. It took me almost a year to rethink and re-implement a significant part of la4j: operations on sparse data. This is easily the most significant release of the library in a while. It finally solves an old performance/design issue: la4j now takes advantage of sparse data. Two major features allowed la4j to improve performance by up to 20x on sparse data:
- Extensible operations allow selecting an algorithm implementation depending on the argument types: sparse vs. dense.
- Lightweight and composable iterators make it possible to develop easy-to-reason-about algorithms on sparse data.
In order to perform an operation on a matrix or vector, the apply method might be used. The following example calculates the inner product of the given vectors a and b.
Vector a = ???
Vector b = ???
double d = a.apply(LinearAlgebra.OO_PLACE_INNER_PRODUCT, b);
The same result might be achieved by simply calling:
Vector a = ???
Vector b = ???
double d = a.innerProduct(b);
Usually, you don't need to use operations explicitly unless you're extending the library with your own operation. The Matrix/Vector API provides a reasonable set of methods that forward to the underlying operations.
Iterators provide a simple control abstraction for working with matrices and vectors. They are dramatically useful for sparse data. For example, while a regular for-loop over a sparse vector's components takes O(n log n) time, the following code requires only O(n) time:
SparseVector a = ???
VectorIterator it = a.iterator();
while (it.hasNext()) {
double x = it.next();
int i = it.index();
// do something with x and i
}
More importantly, there is a bunch of sparse iterators in the Vector/Matrix API. For example, the following iterator iterates over the non-zero elements of a CRSMatrix:
CRSMatrix a = ???
MatrixIterator it = a.nonZeroRowMajorIterator();
while (it.hasNext()) {
double x = it.next();
int i = it.rowIndex();
int j = it.columnIndex();
// do something with x, i and j
}
The coolest thing about iterators is that they are composable! You can compose two iterators with either the orElse or the andAlso combinators. The orElse combinators (i.e., orElseAdd, orElseSubtract) return an iterator that represents the union of two iterators, while the andAlso combinators (i.e., andAlsoMultiply, andAlsoDivide) return an iterator that represents their intersection. Here is how we'd add two sparse vectors using non-zero iterators and their composition:
SparseVector a = ???
SparseVector b = ???
SparseVector c = ??? // result
VectorIterator these = a.nonZeroIterator();
VectorIterator those = b.nonZeroIterator();
VectorIterator both = these.orElseAdd(those);
while (both.hasNext()) {
double x = both.next();
int i = both.index();
c.set(i, x);
}
Note that the + (add) operation is used to resolve union conflicts: if both vectors have a non-zero component at the same index, the composed iterator will return their sum from both.next() or both.get().
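By analogy, the andAlso combinators keep only the indices present in both iterators. Here is a minimal sketch of an element-wise product of two sparse vectors; it has the same shape as the union example above, only the combinator changes:
SparseVector a = ???
SparseVector b = ???
SparseVector c = ??? // result
VectorIterator these = a.nonZeroIterator();
VectorIterator those = b.nonZeroIterator();
// andAlsoMultiply yields only the indices where both a and b are non-zero
// and resolves each of them with * (multiply)
VectorIterator both = these.andAlsoMultiply(those);
while (both.hasNext()) {
double x = both.next();
int i = both.index();
c.set(i, x);
}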
There is another powerful feature of the iterators: they are writable. The set(double) method can be used to overwrite the current element. The following example shows how to scale a sparse matrix by 10 in place:
SparseMatrix a = ???
MatrixIterator it = a.nonZeroIterator();
while (it.hasNext()) {
double x = it.next();
it.set(x * 10);
}
The last thing you should know about iterators is that they are very efficient if used correctly. The rule is very simple: row-major sparse matrices (i.e., CRSMatrix) don't really like column-major iterators (columnMajorIterator(), nonZeroColumnMajorIterator(), iteratorOfColumn(int) and nonZeroIteratorOfColumn(int)), and vice versa. So, if you use a column-major sparse matrix (i.e., CCSMatrix), do not read/write it in a row-major manner.
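In other words, match the iterator to the storage layout. A minimal sketch for a column-major CCSMatrix, using the column-major iterator named above:
CCSMatrix a = ???
// a column-major traversal matches the CCS storage layout, so it stays cheap
MatrixIterator it = a.nonZeroColumnMajorIterator();
while (it.hasNext()) {
double x = it.next();
int i = it.rowIndex();
int j = it.columnIndex();
// do something with x, i and j
}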
Iterators made the la4j library much more awesome. With a very small overhead, they provide a powerful and easy-to-reason-about abstraction that pushes developers to write efficient and clean sparse algorithms.
Finally, I'm trying to get rid of two boilerplate abstractions: org.la4j.factory.Factory and org.la4j.{matrix, vector}.source.Source. Both have been deprecated in this release: try to avoid the deprecated API, since such methods will most likely be removed in the next release.
Why get rid of factories? They are terrible. Operations make it possible to choose the most efficient data representation for out-of-place operations. The la4j library knows how to store your result in the most efficient way, so you should not worry about it any more. For example, the product of two sparse vectors is a sparse vector, while adding a sparse vector to a dense vector returns a dense vector, and so on. Thus, instead of using factories to build new vectors/matrices, you can use static factory methods like DenseMatrix.zero(int, int) or SparseMatrix.identity(int). I'm also trying to get rid of alternative ways of doing things in la4j. There should be only one way: the right way.
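A small sketch of this factory-free style, using the static factories named above and the usual out-of-place multiply:
Matrix a = SparseMatrix.identity(100);
Matrix b = DenseMatrix.zero(100, 100);
// the library itself picks the most efficient representation for the result
Matrix c = a.multiply(b);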
Now you can convert matrices or vectors between sparse and dense representations using the type-safe converter to. Note that if the given factory (which is not org.la4j.factory.Factory, but org.la4j.matrix.MatrixFactory or org.la4j.vector.VectorFactory) produces the same kind of objects as the object being converted, no copying will be performed, just type casting:
Matrix a = CRSMatrix.identity(100);
RowMajorSparseMatrix b = a.toRowMajorSparseMatrix(); // no copying but type-safe casting
Matrix c = Basic2DMatrix.zero(100, 100);
DenseMatrix d = c.to(Matrices.BASIC_1D); // copying
This release is a huge step for the la4j project. I can finally see the roadmap to the 1.0.0 release. The biggest feature in the first stable version should be in-place operations, which can definitely be built with iterators.
0.4.9
This release was made by the awesome contributors: Michael, Phil, Anveshi, Clement, Miron and Todd. Thank you folks for the fantastic job!
- Bug fix in align() method for big sparse matrices (reported by Michael Kapper)
- Bug fix in growup() method for big sparse matrices (contributed by Phil Messenger)
- Bug fix in MatrixMarketStream
- New matrix method select() (contributed by Anveshi Charuvaka; see the sketch after this list)
- New vector method select()
- Bug fix in growup() method for the case with positive overflow (contributed by Clement Skau)
- New matrix predicate Matrices.SQUARE_MATRIX (contributed by Miron Aseev)
- New matrix predicate Matrices.INVERTIBLE_MATRIX (contributed by Miron Aseev)
- New vector method norm(NormFunction) that implements p-norm support (contributed by Miron Aseev)
- New matrix predicate PositiveDefiniteMatrix (contributed by Miron Aseev)
- Bug fix in each, eachInRow, eachInColumn methods of sparse matrices (reported by Leonid Ilyevsky)
- New matrix methods: foldColumns and foldRows (contributed by Todd Brunhoff)
- New matrix methods: assignRow and assignColumn
- New matrix methods: updateRow and updateColumn
- New matrix methods: transformRow and transformColumn
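For example, the new select() method picks a submatrix out of a larger one. A rough sketch, assuming it takes arrays of row and column indices (check the Javadoc for the exact signature):
Matrix a = ???
// assumed signature: select(int[] rowIndices, int[] columnIndices)
Matrix b = a.select(new int[]{0, 2, 4}, new int[]{0, 1});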
0.4.5
Some fantastic work has been done by Daniel, Ewald, Jakob, Yuriy, Julia, Maxim and me.
- New vector methods: innerProduct(), outerProduct() (contributed by Daniel Renshaw)
- Bug fix in Vector.subtract() method (contributed by Ewald Grusk)
- Bug fix in Matrix.subtract() method (contributed by Ewald Grusk)
- New matrix method rotate() (contributed by Jakob Moellers)
- New matrix method shuffle() (contributed by Jakob Moellers)
- Bug fix in Vector.density() and Matrix.density() (contributed by Ewald Grusk)
- Bug fix in Matrix.determinant() method (contributed by Yuriy Drozd)
- Minor improvement of SymmetricMatrixPredicate (contributed by Ewald Grusk)
- Bug fix in EigenDecompositor (reported by Ewald Grusk)
- Bug fix in CompressedVector.swap() (reported by Ewald Grusk, contributed by Yuriy Drozd)
- Typo fix in IdentityMattixSource (reported by Ewald Grusk, contributed by Yuriy Drozd)
- Renamed Matrix.product() to Matrix.diagonalProduct() (contributed by Julia Kostyukova)
- New matrix methods: sum() and product() (contributed by Julia Kostyukova)
- New vector methods: sum() and product() (contributed by Julia Kostyukova)
- Renamed Matrix.kronecker() to Matrix.kroneckerProduct() (contributed by Julia Kostyukova)
- New matrix method hadamardProduct() (contributed by Julia Kostyukova)
- Bug fix in EigenDecompositor (contributed by Maxim Samoylov)
- Improved stability of EigenDecompositor (contributed by Maxim Samoylov)
- New vector method eachNonZero (contributed by Maxim Samoylov)
- New matrix method power (contributed by Jakob Moellers)
- New matrix methods eachInRow, eachInColumn (contributed by Maxim Samoylov)
- New matrix methods eachNonZeroInRow, eachNonZeroInColumn, eachNonZero (contributed by Maxim Samoylov)
- New factory method createBlockMatrix (contributed by Maxim Samoylov)
- New fast and stable algorithm for determinant calculation (contributed by Maxim Samoylov)
- Improved stability of accumulators (contributed by Maxim Samoylov)
- Bug fix in Matrix.rank() method (contributed by Ewald Grusk)
- Bug fix in SingularValueDecompositor class (reported by Jonathan Edwards)
- Fixed a typo in MatrixInvertor -> MatrixInverter
- New function Mod (requested by Luc Trudeau)
- Bug fix in GaussianSolver
- Bug fix in SquareRootSolver
- Bug fix in JacobiSolver
- New matrix and vector methods max(), min(), minInRow(), maxInColumn() (contributed by Maxim Samoylov)
- New linear solver: ForwardBackSubstitutionSolver (for square systems)
- New linear solver: LeastSquaresSolver (least squares solver)
- New all-things-in-one class LinearAlgebra
- New matrix/vector method: non(), which is actually a !is() delegate
- New matrix to vector converters: toRowVector(), toColumnVector()
- New vector to matrix converters: toRowMatrix(), toColumnMatrix()
- New API for solving systems of linear equations: withSolver(SolverFactory) (see the sketch after this list)
- New API for decomposing: withDecompositor(DecompositorFactory)
- New API for inverting: withInverter(InverterFactory)
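To close with the new solver API, here is a rough sketch; the solver factory constant and the solve(Vector) call are assumptions on my part, so treat this as pseudocode to be checked against the Javadoc:
Matrix a = ??? // a square coefficient matrix
Vector b = ??? // the right-hand side
// withSolver(SolverFactory) binds a solver to the matrix a;
// the factory constant is assumed to live in the new LinearAlgebra class
LinearSystemSolver solver = a.withSolver(LinearAlgebra.FORWARD_BACK_SUBSTITUTION);
Vector x = solver.solve(b);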