Apache Spark Connector for SQL Server and Azure SQL. A fork of microsoft/sql-spark-connector

Apache Spark Connector for SQL Server and Azure SQL


Born out of Microsoft's SQL Server Big Data Clusters investments, the Apache Spark Connector for SQL Server and Azure SQL is a high-performance connector that enables you to use transactional data in big data analytics and persist results for ad-hoc queries or reporting. The connector allows you to use any SQL database, on-premises or in the cloud, as an input data source or output data sink for Spark jobs.

This library contains the source code for the Apache Spark Connector for SQL Server and Azure SQL.

Apache Spark is a unified analytics engine for large-scale data processing.

In this repository we plan to support builds for Spark 3.2.0 and higher. Since this is a fork of the "official" Microsoft repository, head over there for builds for older Spark versions.

The Microsoft builds can be found here; the Solytic builds can be found here.

Get the connector

First, connect to the Solytic feed. To do this, add the repository to your pom.xml or your build.sbt:

<!-- Add to the pom.xml -->
<repository>
   <id>solytic-sql-spark-connector</id>
   <url>https://pkgs.dev.azure.com/solytic/OpenSource/_packaging/releases/maven/v1</url>
   <releases>
      <enabled>true</enabled>
   </releases>
   <snapshots>
      <enabled>true</enabled>
   </snapshots>
</repository>
// add to the build.sbt
resolvers += "Solytic SQL Spark Connector" at "https://pkgs.dev.azure.com/solytic/OpenSource/_packaging/releases/maven/v1"

Then, to load the dependency, add the following lines to your pom.xml or your build.sbt:

<!-- Add to the pom.xml -->
<dependency>
    <groupId>com.solytic</groupId>
    <artifactId>spark-mssql-connector_2.12</artifactId>
    <version>...</version>
</dependency>
// add to the build.sbt
libraryDependencies += "com.solytic" %% "spark-mssql-connector" % "..."

These are the currently available versions of the connector:

| Connector | Maven Coordinate | Scala Version | Repository |
| --- | --- | --- | --- |
| Spark 2.4.x compatible | com.microsoft.azure:spark-mssql-connector:1.0.2 | 2.11 | Microsoft |
| Spark 3.0.x compatible | com.microsoft.azure:spark-mssql-connector_2.12:1.1.0 | 2.12 | Microsoft |
| Spark 3.1.x compatible | com.microsoft.azure:spark-mssql-connector_2.12:1.2.0 | 2.12 | Microsoft |
| Spark 3.1.x compatible | com.solytic:spark-mssql-connector_2.12:1.2.1 | 2.12 | Solytic |
| Spark 3.2.x compatible | com.solytic:spark-mssql-connector_2.12:1.3.0 | 2.12 | Solytic |
| Spark 3.3.x compatible | com.solytic:spark-mssql-connector_2.12:1.4.0 | 2.12 | Solytic |

Current Releases

  • The latest Spark 2.4.x compatible connector is on v1.0.2, available through Microsoft
  • The latest Spark 3.0.x compatible connector is on v1.1.0, available through Microsoft
  • The latest Spark 3.1.x compatible connector is on v1.2.x, available through Microsoft or Solytic
  • The latest Spark 3.2.x compatible connector is on v1.3.x, available through Solytic
  • The latest Spark 3.3.x compatible connector is on v1.4.x, available through Solytic

Supported Features

  • Support for all Spark bindings (Scala, Python, R)
  • Basic authentication and Active Directory (AD) keytab support
  • Reordered DataFrame write support
  • Support for writes to SQL Server single instances and Data Pools in SQL Server Big Data Clusters
  • Reliable connector support for SQL Server single instances

| Component | Versions Supported |
| --- | --- |
| Apache Spark | 2.4.x, 3.0.x, 3.1.x, 3.2.x, 3.3.x |
| Scala | 2.11, 2.12 |
| Microsoft JDBC Driver for SQL Server | 8.4.1 |
| Microsoft SQL Server | SQL Server 2008 or later |
| Azure SQL Databases | Supported |

Note: Azure Synapse (Azure SQL DW) use is not tested with this connector. While it may work, there may be unintended consequences.

Supported Options

The Apache Spark Connector for SQL Server and Azure SQL supports the options defined here: SQL DataSource JDBC

In addition, the following options are supported:

| Option | Default | Description |
| --- | --- | --- |
| reliabilityLevel | "BEST_EFFORT" | "BEST_EFFORT" or "NO_DUPLICATES". "NO_DUPLICATES" implements a reliable insert in executor-restart scenarios |
| dataPoolDataSource | none | none implies the value is not set and the connector should write to the SQL Server single instance. Set this value to a data source name to write to a Data Pool table in a Big Data Cluster |
| isolationLevel | "READ_COMMITTED" | Specifies the isolation level |
| tableLock | "false" | Implements an insert with the TABLOCK option to improve write performance |
| schemaCheckEnabled | "true" | Disables the strict DataFrame / SQL table schema check when set to false |

Other bulk copy API options can be set as options on the DataFrame and will be passed to the bulk copy APIs on write.
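As a sketch of how these fit together, here is a hypothetical write-options dictionary combining connector-specific options listed above with a generic Spark JDBC option ("batchsize") that would be passed through on write; all values are illustrative, not recommendations:

```python
# Connector-specific options from the table above, plus a generic Spark JDBC
# option ("batchsize") that is passed through to the bulk copy APIs on write.
write_options = {
    "reliabilityLevel": "NO_DUPLICATES",  # reliable insert across executor restarts
    "isolationLevel": "READ_COMMITTED",
    "tableLock": "true",                  # bulk insert with the TABLOCK option
    "schemaCheckEnabled": "false",        # relax the strict schema check
    "batchsize": "100000",                # generic JDBC option, forwarded on write
}

# In a Spark job these would be applied with:
# df.write.format("com.microsoft.sqlserver.jdbc.spark").options(**write_options).save()
```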

Performance comparison

The Apache Spark Connector for SQL Server and Azure SQL is up to 15x faster than the generic JDBC connector for writing to SQL Server. Note that performance characteristics vary with the type and volume of data and the options used, and results may show run-to-run variation. The following performance results are the time taken to overwrite a SQL table with 143.9M rows in a Spark DataFrame. The DataFrame is constructed by reading the store_sales HDFS table generated using the Spark TPC-DS benchmark. The time to read store_sales into the DataFrame is excluded. The results are averaged over 3 runs.

Note: The following results were achieved using the Apache Spark 2.4.5 compatible connector. These numbers are not a guarantee of performance.

| Connector Type | Options | Description | Time to write |
| --- | --- | --- | --- |
| JDBCConnector | Default | Generic JDBC connector with default options | 1385s |
| sql-spark-connector | BEST_EFFORT | Best effort sql-spark-connector with default options | 580s |
| sql-spark-connector | NO_DUPLICATES | Reliable sql-spark-connector | 709s |
| sql-spark-connector | BEST_EFFORT + tabLock=true | Best effort sql-spark-connector with table lock enabled | 72s |
| sql-spark-connector | NO_DUPLICATES + tabLock=true | Reliable sql-spark-connector with table lock enabled | 198s |

Config:

  • Spark config : num_executors = 20, executor_memory = '1664m', executor_cores = 2
  • Data Gen config : scale_factor=50, partitioned_tables=true
  • Data file: store_sales with 143,997,590 rows

Commonly Faced Issues

java.lang.NoClassDefFoundError: com/microsoft/aad/adal4j/AuthenticationException

This issue arises from using an older version of the mssql driver (which is now included in this connector) in your Hadoop environment. If you are coming from the previous Azure SQL Connector and have manually installed drivers onto that cluster for AAD compatibility, you will need to remove those drivers.

Steps to fix the issue:

  1. If you are using a generic Hadoop environment, check and remove the mssql jar: rm $HADOOP_HOME/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar. If you are using Databricks, add a global or cluster init script to remove old versions of the mssql driver from the /databricks/jars folder, or add this line to an existing script: rm /databricks/jars/*mssql*
  2. Add the adal4j and mssql packages (e.g. via Maven; other build tools should work as well). Do NOT install the SQL Spark connector this way.
  3. Add the driver class to your connection configuration:
connectionProperties = {
  "Driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}
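The jar cleanup in step 1 can be rehearsed off-cluster; the following is a minimal Python simulation of the rm /databricks/jars/*mssql* glob pattern in a scratch directory (the paths and file names here are illustrative, not the real cluster layout):

```python
import glob
import os
import tempfile

# Scratch directory standing in for /databricks/jars on a real cluster.
jar_dir = tempfile.mkdtemp()
for name in ("mssql-jdbc-6.2.1.jre7.jar", "some-other-lib.jar"):
    open(os.path.join(jar_dir, name), "w").close()

# Equivalent of: rm /databricks/jars/*mssql*
for path in glob.glob(os.path.join(jar_dir, "*mssql*")):
    os.remove(path)

print(sorted(os.listdir(jar_dir)))  # ['some-other-lib.jar']
```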

For more information and explanation, visit the closed issue.

Get Started

The Apache Spark Connector for SQL Server and Azure SQL is based on the Spark DataSourceV1 API and SQL Server Bulk API and uses the same interface as the built-in JDBC Spark-SQL connector. This allows you to easily integrate the connector and migrate your existing Spark jobs by simply updating the format parameter with com.microsoft.sqlserver.jdbc.spark.

To include the connector in your projects download this repository and build the jar using SBT.

Migrating from Legacy Azure SQL Connector for Spark

Receiving java.lang.NoClassDefFoundError when trying to use the new connector with Azure Databricks?

If you are migrating from the previous Azure SQL Connector for Spark and have manually installed drivers onto that cluster for AAD compatibility, you will most likely need to remove those custom drivers, restore the previous drivers that ship by default with Databricks, uninstall the previous connector, and restart your cluster. You may be better off spinning up a new cluster.

With this new connector, you should be able to simply install it onto a cluster (either a new cluster, or an existing one whose drivers have not been modified), or onto a cluster that previously used modified drivers for the older Azure SQL Connector for Spark, provided the modified drivers were removed and the previous default drivers restored.

See Issue #26 for more details.

Executing custom SQL through the connector

The previous Azure SQL Connector for Spark provided the ability to execute custom SQL code, such as DML or DDL statements, through the connector. This functionality is out of scope for this connector, since it is based on the DataSource APIs. It is readily provided by libraries such as pyodbc, or you can use the standard Java SQL interfaces.

You can read the closed issue and view community provided alternatives in Issue #21.

Write to a new SQL Table

⚠️ Important: using the overwrite mode will first DROP the table if it already exists in the database by default. Please use this option with due care to avoid unexpected data loss!

⚠️ When using mode overwrite without the truncate option, indexes will be lost when the table is recreated; for example, a columnstore table would become a heap. If you want to maintain existing indexing, also specify the truncate option with value true, i.e. .option("truncate", "true")

server_name = "jdbc:sqlserver://{SERVER_ADDR}"
database_name = "database_name"
url = server_name + ";" + "databaseName=" + database_name + ";"

table_name = "table_name"
username = "username"
password = "password123!#" # Please specify password here

try:
  df.write \
    .format("com.microsoft.sqlserver.jdbc.spark") \
    .mode("overwrite") \
    .option("url", url) \
    .option("dbtable", table_name) \
    .option("user", username) \
    .option("password", password) \
    .save()
except ValueError as error:
    print("Connector write failed", error)
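The url above is plain string concatenation, so it can be sanity-checked outside Spark; a quick standalone check with placeholder values (the real {SERVER_ADDR} is elided in the example):

```python
# Placeholder values standing in for the real server address and database name.
server_name = "jdbc:sqlserver://myserver.example.com"
database_name = "my_database"
url = server_name + ";" + "databaseName=" + database_name + ";"
print(url)  # jdbc:sqlserver://myserver.example.com;databaseName=my_database;
```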

Append to SQL Table

try:
  df.write \
    .format("com.microsoft.sqlserver.jdbc.spark") \
    .mode("append") \
    .option("url", url) \
    .option("dbtable", table_name) \
    .option("user", username) \
    .option("password", password) \
    .save()
except ValueError as error:
    print("Connector write failed", error)

Specifying the isolation level

By default, this connector uses the READ_COMMITTED isolation level when performing the bulk insert into the database. If you wish to override this with another isolation level, use the mssqlIsolationLevel option as shown below.

    .option("mssqlIsolationLevel", "READ_UNCOMMITTED") \

Read from SQL Table

jdbcDF = spark.read \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .option("url", url) \
        .option("dbtable", table_name) \
        .option("user", username) \
        .option("password", password).load()
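Because the connector accepts the standard Spark JDBC options (see Supported Options), a pushdown query can be used in place of dbtable. A sketch of such an option set follows; the server, database, and query are placeholders, and the query option comes from the generic Spark JDBC data source rather than this connector specifically:

```python
# Standard Spark JDBC read options; "query" replaces "dbtable" for a pushdown
# query (the two are mutually exclusive). All values here are placeholders.
read_options = {
    "url": "jdbc:sqlserver://myserver.example.com;databaseName=my_database;",
    "query": "SELECT TOP 10 * FROM my_table",
    "user": "username",
    "password": "password",
}

# Applied in a Spark job with:
# spark.read.format("com.microsoft.sqlserver.jdbc.spark").options(**read_options).load()
```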

Azure Active Directory Authentication

Python Example with Service Principal

context = adal.AuthenticationContext(authority)
token = context.acquire_token_with_client_credentials(resource_app_id_url, service_principal_id, service_principal_secret)
access_token = token["accessToken"]

jdbc_db = spark.read \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .option("url", url) \
        .option("dbtable", table_name) \
        .option("accessToken", access_token) \
        .option("encrypt", "true") \
        .option("hostNameInCertificate", "*.database.windows.net") \
        .load()

Python Example with Active Directory Password

jdbc_df = spark.read \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .option("url", url) \
        .option("dbtable", table_name) \
        .option("authentication", "ActiveDirectoryPassword") \
        .option("user", user_name) \
        .option("password", password) \
        .option("encrypt", "true") \
        .option("hostNameInCertificate", "*.database.windows.net") \
        .load()
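The two AAD read examples above differ only in the authentication options passed; a standalone sketch of the two option sets (all values are placeholders, not working credentials):

```python
# Option set for token-based auth (service principal flow). Placeholder values.
token_options = {
    "url": "jdbc:sqlserver://myserver.database.windows.net;databaseName=my_database;",
    "dbtable": "my_table",
    "accessToken": "<token acquired via adal>",
    "encrypt": "true",
    "hostNameInCertificate": "*.database.windows.net",
}

# Option set for ActiveDirectoryPassword auth. Placeholder values.
password_options = {
    "url": token_options["url"],
    "dbtable": token_options["dbtable"],
    "authentication": "ActiveDirectoryPassword",
    "user": "aad_user@contoso.com",
    "password": "<password>",
    "encrypt": "true",
    "hostNameInCertificate": "*.database.windows.net",
}

# The token flow never passes a user name or password to the driver.
assert "user" not in token_options and "password" not in token_options
```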

A required dependency must be installed in order to authenticate using Active Directory.

For Scala, the com.microsoft.aad.adal4j artifact will need to be installed.

For Python, the adal library will need to be installed. This is available via pip.

Please check the sample notebooks for examples.

Support

The Apache Spark Connector for Azure SQL and SQL Server is an open source project. For issues with or questions about the connector, please create an Issue in this project repository. The connector community is active and monitoring submissions.

Roadmap

Visit the Connector project in the Projects tab to see needed / planned items. Feel free to make an issue and start contributing!

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Getting started

  • Install Java 8 (e.g. with sdkman: sdk install java 8.0.342-zulu and sdk use java 8.0.342-zulu)
  • Install Scala 2.12, SBT, and Maven (e.g. on Mac with Homebrew: brew install maven scala@2.12 sbt)

Releasing a new version

Prerequisites

  • Make sure you have gpg installed (e.g. on Mac with Homebrew brew install gpg)
  • Install the Solytic GPG key
  • Connect to the artifact feed (see here)

Steps to create a release

  1. Checkout a new branch, e.g.: git checkout -b release-v-1-4-0
  2. Make sure all your local changes are committed
  3. Make sure that the version number in the pom.xml ends with -SNAPSHOT, e.g. 1.2.0-SNAPSHOT; the version without -SNAPSHOT (1.2.0 in this case) is the one that will be published
  4. Since the tests will be executed during the release, run docker compose up -d to start the Docker setup
  5. Run mvn release:clean
  6. Run mvn release:prepare to prepare the release, which will increment the version, create a tag in Git, etc.
    • Note: on Mac, you need to run GPG_TTY=$(tty) mvn release:prepare, see here
    • This will ask you for the version number to be published, the new version, etc.
    • It will also run the tests
  7. Run mvn release:perform to create the release
  8. Update the README: add the new version to the table
  9. Create a PR on GitHub and have it reviewed
  10. After the PR has been merged, create a release on GitHub and add the release notes
