Apache Spark is a popular unified analytics engine for large-scale data processing. However, being a unified engine makes it difficult for Spark to participate in a composable data system. That's where the Spark to Substrait gateway comes in: by implementing the Spark Connect protocol, it allows a Substrait-consuming backend to be seamlessly dropped in as a replacement for Apache Spark.
That said, there are some caveats. The gateway doesn't yet support all features and capabilities of Spark Connect. And even for the features it does support, it may not match Spark's semantics exactly, because whatever backend you plug in likely uses different semantics. So please test your workloads to ensure they behave as expected.
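One simple way to validate a workload is to run the same query against vanilla Spark and against the gateway, then diff the two result sets. The helper below is a hypothetical sketch (not part of the gateway) showing one way to compare results while tolerating small floating-point differences between engines:

```python
def compare_results(expected, actual, float_tol=1e-9):
    """Compare two flat result sequences, tolerating tiny float differences.

    Returns a list of mismatches; an empty list means the results agree.
    """
    mismatches = []
    for i, (e, a) in enumerate(zip(expected, actual)):
        if isinstance(e, float) and isinstance(a, float):
            same = abs(e - a) <= float_tol
        else:
            same = e == a
        if not same:
            mismatches.append((i, e, a))
    if len(expected) != len(actual):
        mismatches.append(("length", len(expected), len(actual)))
    return mismatches

# Identical results produce no mismatches.
print(compare_results([1, 2.0, "x"], [1, 2.0, "x"]))  # []
```

A real harness would collect rows from both sessions (for example via `DataFrame.collect()`) and normalize row ordering before comparing.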
There are other projects that also use Substrait to enhance Apache Spark. The Gluten project uses Substrait to push down the pipeline portions of a plan (think the join-free parts) to either ClickHouse or Velox.
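For readers new to Substrait: plans are protobuf messages that can be rendered as JSON. The sketch below builds a rough, illustrative outline of a plan that reads one column from a named table; it is not a complete valid plan (a real one also carries version and extension information), and the exact field names should be checked against the Substrait specification:

```python
import json

# Illustrative sketch only: the skeleton of a Substrait plan, in its
# JSON protobuf rendering, that reads column "id" from "example_table".
plan = {
    "relations": [{
        "root": {
            "input": {
                "read": {
                    "baseSchema": {
                        "names": ["id"],
                        "struct": {"types": [{"i64": {}}]},
                    },
                    "namedTable": {"names": ["example_table"]},
                }
            },
            "names": ["id"],
        }
    }]
}
print(json.dumps(plan, indent=2))
```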
To run the gateway locally, you need to set up a Python (Conda) environment.
To run the Spark tests you will need Java installed.
Ensure you have Miniconda and Rust/Cargo installed.
Once that is done, run these steps from a bash terminal:
```bash
git clone https://github.com/<your-fork>/spark-substrait-gateway.git
cd spark-substrait-gateway
conda init bash
. ~/.bashrc
conda env create -f environment.yml
pip install .
```
To use the gateway, simply start the gateway server:
```bash
spark-substrait-gateway-server
```
Note: to see all of the options available to the server, run:

```bash
spark-substrait-gateway-server --help
```
This will start the service on local port 50051 by default.
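If you want to confirm that the server is up before connecting a client, a quick TCP probe is enough (this helper is an illustrative sketch, not part of the gateway's tooling):

```python
import socket

def gateway_reachable(host: str = "localhost", port: int = 50051,
                      timeout: float = 1.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("gateway reachable:", gateway_reachable())
```

This only checks that the port is open; it does not verify that the gRPC service behind it is healthy.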
To run the client demo, make sure the gateway server is running and then (in another terminal) run the client demo script:
```bash
spark-substrait-client-demo
```
Note: to see all of the options available to the client demo, run:

```bash
spark-substrait-client-demo --help
```
Here's how to use PySpark to connect to the running gateway:

```python
from pyspark.sql import SparkSession

# Create a Spark Connect session to local port 50051.
spark = (
    SparkSession.builder
    .remote("sc://localhost:50051/")
    .getOrCreate()
)

# Use the spark session normally.
```
You'll find that the usage examples on the Spark website also apply to the gateway without any modification. You may also run the provided demo located at src/gateway/demo/client_demo.py.