Apache Calcite is a dynamic data management framework.
It was formerly called Optiq.
To run Apache Calcite, either clone and build the source code from GitHub, or download a source release and build it.
Pre-built jars are available in the Apache Maven repository with the following Maven coordinates:

```xml
<dependency>
  <groupId>org.apache.calcite</groupId>
  <artifactId>calcite-core</artifactId>
  <version>1.1.0-incubating</version>
</dependency>
```
Calcite makes data anywhere, of any format, look like a database. For example, you can execute a complex ANSI-standard SQL statement on in-memory collections:
```java
public static class HrSchema {
  public final Employee[] emps = ...;
  public final Department[] depts = ...;
}
```
```java
Class.forName("net.hydromatic.optiq.jdbc.Driver");
Properties info = new Properties();
info.setProperty("lex", "JAVA");
Connection connection =
    DriverManager.getConnection("jdbc:calcite:", info);
OptiqConnection optiqConnection =
    connection.unwrap(OptiqConnection.class);
ReflectiveSchema.create(optiqConnection,
    optiqConnection.getRootSchema(), "hr", new HrSchema());
Statement statement = optiqConnection.createStatement();
ResultSet resultSet = statement.executeQuery(
    "select d.deptno, min(e.empid)\n"
    + "from hr.emps as e\n"
    + "join hr.depts as d\n"
    + "  on e.deptno = d.deptno\n"
    + "group by d.deptno\n"
    + "having count(*) > 1");
print(resultSet);
resultSet.close();
statement.close();
connection.close();
```
Where is the database? There is no database. The connection is completely empty until `ReflectiveSchema.create` registers a Java object as a schema, and its collection fields `emps` and `depts` as tables.
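To see what the query above actually computes, here is a plain-Java sketch (no Calcite involved, with a hypothetical `Employee` record standing in for the HrSchema class): group employees by department, take the minimum `empid`, and keep only departments with more than one employee. The join to `depts` is omitted, since for this purpose it only validates `deptno`.

```java
import java.util.*;
import java.util.stream.*;

public class GroupByDemo {
  // Hypothetical stand-in for HrSchema's Employee; field names match the query.
  public record Employee(int empid, int deptno) {}

  // Equivalent of:
  //   select deptno, min(empid) from emps group by deptno having count(*) > 1
  public static Map<Integer, Integer> minEmpidPerLargeDept(List<Employee> emps) {
    return emps.stream()
        // GROUP BY deptno
        .collect(Collectors.groupingBy(Employee::deptno, Collectors.toList()))
        .entrySet().stream()
        // HAVING count(*) > 1
        .filter(e -> e.getValue().size() > 1)
        // MIN(empid) per remaining group
        .collect(Collectors.toMap(
            Map.Entry::getKey,
            e -> e.getValue().stream().mapToInt(Employee::empid).min().getAsInt()));
  }

  public static void main(String[] args) {
    List<Employee> emps = List.of(
        new Employee(100, 10), new Employee(110, 10), new Employee(200, 20));
    // Dept 10 has two employees, so it qualifies; dept 20 has only one.
    System.out.println(minEmpidPerLargeDept(emps));
  }
}
```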
Calcite does not want to own data; it does not even have a favorite data format. This example used in-memory data sets, and processed them using operators such as `groupBy` and `join` from the linq4j library. But Calcite can also process data in other formats, such as JDBC. In the first example, replace
```java
ReflectiveSchema.create(optiqConnection,
    optiqConnection.getRootSchema(), "hr", new HrSchema());
```
with
```java
Class.forName("com.mysql.jdbc.Driver");
BasicDataSource dataSource = new BasicDataSource();
dataSource.setUrl("jdbc:mysql://localhost");
dataSource.setUsername("sa");
dataSource.setPassword("");
JdbcSchema.create(optiqConnection, dataSource, rootSchema, "hr", "");
```
and Calcite will execute the same query in JDBC. To the application, the data and API are the same, but behind the scenes the implementation is very different. Calcite uses optimizer rules to push the `JOIN` and `GROUP BY` operations to the source database.
In-memory and JDBC are just two familiar examples. Calcite can handle any data source and data format. To add a data source, you need to write an adapter that tells Calcite what collections in the data source it should consider "tables".
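To make the adapter idea concrete, here is a toy, self-contained sketch. This is not Calcite's actual SPI (the real one is in the `org.apache.calcite.schema` package); it only illustrates the essential contract: a schema maps names to "tables", and a table can enumerate its rows.

```java
import java.util.*;

public class ToyAdapterDemo {
  // Toy stand-in for a table: anything that can enumerate its rows.
  public interface ToyTable {
    List<Object[]> scan();
  }

  // Toy stand-in for a schema: tells the engine which named tables exist.
  public static class ToySchema {
    private final Map<String, ToyTable> tables = new HashMap<>();

    public void register(String name, ToyTable table) {
      tables.put(name, table);
    }

    public Set<String> tableNames() {
      return tables.keySet();
    }

    public ToyTable table(String name) {
      return tables.get(name);
    }
  }

  public static void main(String[] args) {
    ToySchema hr = new ToySchema();
    // An adapter's job: decide which collections in the source become tables,
    // and how to produce their rows on demand.
    hr.register("emps", () -> List.of(new Object[]{100, 10}, new Object[]{200, 20}));
    System.out.println(hr.tableNames());
    System.out.println(hr.table("emps").scan().size());
  }
}
```

In real Calcite, the scan would typically be lazy and the planner would decide how much work to push into it; the toy version returns materialized rows for simplicity.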
For more advanced integration, you can write optimizer rules. Optimizer rules allow Calcite to access data of a new format, allow you to register new operators (such as a better join algorithm), and allow Calcite to optimize how queries are translated to operators. Calcite will combine your rules and operators with built-in rules and operators, apply cost-based optimization, and generate an efficient plan.
Calcite also allows front-ends other than SQL/JDBC. For example, you can execute queries in linq4j:
```java
final OptiqConnection connection = ...;
ParameterExpression c = Expressions.parameter(Customer.class, "c");
for (Customer customer
    : connection.getRootSchema()
        .getSubSchema("foodmart")
        .getTable("customer", Customer.class)
        .where(
            Expressions.<Predicate1<Customer>>lambda(
                Expressions.lessThan(
                    Expressions.field(c, "customer_id"),
                    Expressions.constant(5)),
                c))) {
  System.out.println(customer.name);
}
```
Linq4j understands the full query parse tree, and the Linq4j query provider for Calcite invokes Calcite as a query optimizer. If the `customer` table comes from a JDBC database (based on this code fragment, we really can't tell), then the optimal plan is to send the query

```sql
SELECT *
FROM "customer"
WHERE "customer_id" < 5
```

to the JDBC data source.
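Stripped of the expression-tree machinery, the linq4j filter above denotes nothing more exotic than an ordinary predicate. The following plain-Java sketch (with a hypothetical `Customer` record) shows what the query evaluates to when the data is simply in memory:

```java
import java.util.*;
import java.util.stream.*;

public class FilterDemo {
  // Hypothetical stand-in for the foodmart customer table's row type.
  public record Customer(int customer_id, String name) {}

  // Equivalent of: WHERE "customer_id" < 5, then reading the name field.
  public static List<String> namesBelow(List<Customer> customers, int limit) {
    return customers.stream()
        .filter(c -> c.customer_id() < limit)
        .map(Customer::name)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Customer> customers = List.of(
        new Customer(1, "Ada"), new Customer(7, "Grace"));
    System.out.println(namesBelow(customers, 5)); // prints [Ada]
  }
}
```

The point of the expression tree is that Calcite can inspect this predicate and, when the table is remote, translate it into the `WHERE` clause shown above instead of fetching every row.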
The subproject under example/csv provides a CSV adapter, which is fully functional for use in applications but is also simple enough to serve as a good template if you are writing your own adapter.
See the CSV tutorial for information on using the CSV adapter and writing other adapters.
See the HOWTO for more information about using other adapters, and about using Calcite in general.
The following features are complete.
- Query parser, validator and optimizer
- Support for reading models in JSON format
- Many standard functions and aggregate functions
- JDBC queries against Linq4j and JDBC back-ends
- Linq4j front-end
- SQL features: SELECT, FROM (including JOIN syntax), WHERE, GROUP BY (and aggregate functions including COUNT(DISTINCT ...)), HAVING, ORDER BY (including NULLS FIRST/LAST), set operations (UNION, INTERSECT, MINUS), sub-queries (including correlated sub-queries), windowed aggregates, LIMIT (syntax as in Postgres)
For more details, see the Reference guide.
Calcite has adapters for the following systems:
- Apache Drill adapter
- Cascading adapter (Lingual)
- CSV adapter (example/csv)
- JDBC adapter (part of calcite-core)
- MongoDB adapter (calcite-mongodb)
- Spark adapter (calcite-spark)
- Splunk adapter (calcite-splunk)
- Eclipse Memory Analyzer (MAT) adapter (mat-calcite-plugin)
More project information:
- License: Apache License, Version 2.0
- Blog: http://julianhyde.blogspot.com
- Project page: http://calcite.incubator.apache.org
- Incubation status page: http://incubator.apache.org/projects/calcite.html
- Source code: http://github.com/apache/incubator-calcite
- Issues: Apache JIRA
- Developers list: dev at calcite.incubator.apache.org (archive, subscribe)
- Twitter: @ApacheCalcite
Documentation:
- HOWTO
- JSON model
- Reference guide
- Streaming SQL
- Release notes and history
These resources, which we used when Calcite was called Optiq and before it joined the Apache incubator, are for reference only. They may be out of date. Please don't post or try to subscribe to the mailing list.
- Developers list: [email protected]
- How to integrate Splunk with any data solution (Splunk User Conference, 2012)
- Drill / SQL / Optiq (2013)
- SQL on Big Data using Optiq (2013)
- SQL Now! (NoSQL Now! conference, 2013)
- Cost-based optimization in Hive (video) (Hadoop Summit, 2014)
- Discardable, in-memory materialized query for Hadoop (video) (Hadoop Summit, 2014)
- Cost-based optimization in Hive 0.14 (Seattle, 2014)
Apache Calcite is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.