
Integrating ClickHouse with Databricks

ClickHouse Supported

The ClickHouse Spark connector works seamlessly with Databricks. This guide covers platform-specific setup, installation, and usage patterns for Databricks.

API Selection for Databricks

By default, Databricks uses Unity Catalog, which blocks Spark catalog registration. In this case, you must use the TableProvider API (format-based access).

However, if you disable Unity Catalog by creating a cluster with No isolation shared access mode, you can use the Catalog API instead. The Catalog API provides centralized configuration and native Spark SQL integration.

Unity Catalog status | Recommended API | Notes
Enabled (default) | TableProvider API (format-based) | Unity Catalog blocks Spark catalog registration
Disabled ("No isolation shared") | Catalog API | Requires a cluster with "No isolation shared" access mode
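
If your cluster runs with "No isolation shared" access mode and you want the Catalog API, the connector is configured through Spark conf. The snippet below is a minimal sketch: it assumes the catalog class com.clickhouse.spark.ClickHouseCatalog and the spark.sql.catalog.clickhouse.* option keys used by recent connector releases, so verify the exact keys against the connector documentation for your version.

# Minimal Catalog API setup (assumes Unity Catalog is disabled on the cluster).
# Catalog class name and option keys are assumptions based on recent connector
# releases; check the connector docs for your version.
spark.conf.set("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
spark.conf.set("spark.sql.catalog.clickhouse.host", "your-clickhouse-cloud-host.clickhouse.cloud")
spark.conf.set("spark.sql.catalog.clickhouse.protocol", "https")
spark.conf.set("spark.sql.catalog.clickhouse.http_port", "8443")
spark.conf.set("spark.sql.catalog.clickhouse.user", "default")
spark.conf.set("spark.sql.catalog.clickhouse.password", dbutils.secrets.get(scope="clickhouse", key="password"))
spark.conf.set("spark.sql.catalog.clickhouse.database", "default")
spark.conf.set("spark.sql.catalog.clickhouse.option.ssl", "true")

# With the catalog registered, ClickHouse tables can be addressed directly in Spark SQL.
spark.sql("SELECT * FROM clickhouse.default.events LIMIT 10").display()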

Installation on Databricks

Option 1: Upload JAR via Databricks UI

  1. Build or download the runtime JAR:

    clickhouse-spark-runtime-{{ spark_binary_version }}_{{ scala_binary_version }}-{{ stable_version }}.jar
    
  2. Upload the JAR to your Databricks workspace:

    • Go to Workspace → Navigate to your desired folder
    • Click Upload → Select the JAR file
    • The JAR will be stored in your workspace
  3. Install the library on your cluster:

    • Go to Compute → Select your cluster
    • Click the Libraries tab
    • Click Install New
    • Select DBFS or Workspace → Navigate to the uploaded JAR file
    • Click Install
  4. Restart the cluster to load the library

Option 2: Install via Databricks CLI

# Upload JAR to DBFS
databricks fs cp clickhouse-spark-runtime-{{ spark_binary_version }}_{{ scala_binary_version }}-{{ stable_version }}.jar \
  dbfs:/FileStore/jars/

# Install on cluster
databricks libraries install \
  --cluster-id <your-cluster-id> \
  --jar dbfs:/FileStore/jars/clickhouse-spark-runtime-{{ spark_binary_version }}_{{ scala_binary_version }}-{{ stable_version }}.jar

Option 3: Maven Coordinates (Recommended)

  1. Navigate to your Databricks workspace:

    • Go to Compute → Select your cluster
    • Click the Libraries tab
    • Click Install New
    • Select Maven tab
  2. Add the Maven coordinates:

com.clickhouse.spark:clickhouse-spark-runtime-{{ spark_binary_version }}_{{ scala_binary_version }}:{{ stable_version }}
  3. Click Install and restart the cluster to load the library

Using TableProvider API

When Unity Catalog is enabled (default), you must use the TableProvider API (format-based access) because Unity Catalog blocks Spark catalog registration. If you've disabled Unity Catalog by using a cluster with "No isolation shared" access mode, you can use the Catalog API instead.

Reading data

# Read from ClickHouse using TableProvider API
df = spark.read \
    .format("clickhouse") \
    .option("host", "your-clickhouse-cloud-host.clickhouse.cloud") \
    .option("protocol", "https") \
    .option("http_port", "8443") \
    .option("database", "default") \
    .option("table", "events") \
    .option("user", "default") \
    .option("password", dbutils.secrets.get(scope="clickhouse", key="password")) \
    .option("ssl", "true") \
    .load()

# Schema is automatically inferred
df.display()
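
The result is a regular Spark DataFrame, so you can transform it as usual; depending on the connector version, simple projections and filters may be pushed down to ClickHouse instead of being evaluated in Spark. A minimal sketch, reusing the df loaded above (the column names are hypothetical placeholders for your own schema):

# Project and filter the loaded DataFrame; column names are placeholders.
from pyspark.sql.functions import col

click_events = df \
    .select("id", "event_type", "created_at") \
    .filter(col("event_type") == "click")

click_events.display()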

Writing data

# Write to ClickHouse - the table will be created automatically if it doesn't exist
# order_by is required when the connector creates a new table
# settings.allow_nullable_key is required for ClickHouse Cloud if the ORDER BY key contains nullable columns
df.write \
    .format("clickhouse") \
    .option("host", "your-clickhouse-cloud-host.clickhouse.cloud") \
    .option("protocol", "https") \
    .option("http_port", "8443") \
    .option("database", "default") \
    .option("table", "events_copy") \
    .option("user", "default") \
    .option("password", dbutils.secrets.get(scope="clickhouse", key="password")) \
    .option("ssl", "true") \
    .option("order_by", "id") \  # Required: specify ORDER BY when creating a new table
    .option("settings.allow_nullable_key", "1") \  # Required for ClickHouse Cloud if ORDER BY has nullable columns
    .mode("append") \
    .save()
Note

This example assumes preconfigured secret scopes in Databricks. For setup instructions, see the Databricks Secret management documentation.

Databricks-specific considerations

Secret management

Use Databricks secret scopes to securely store ClickHouse credentials:

# Access secrets
password = dbutils.secrets.get(scope="clickhouse", key="password")

For setup instructions, see the Databricks Secret management documentation.
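
To confirm which scopes and keys are visible to your cluster, you can list them from a notebook; secret values themselves are redacted in notebook output. This assumes a scope named clickhouse, as in the examples above:

# List available secret scopes and the keys in the "clickhouse" scope.
# Only names are returned; secret values are never printed.
print(dbutils.secrets.listScopes())
print(dbutils.secrets.list("clickhouse"))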

ClickHouse Cloud connection

When connecting to ClickHouse Cloud from Databricks:

  1. Use HTTPS protocol (protocol: https, http_port: 8443)
  2. Enable SSL (ssl: true)
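
Because reads and writes share the same connection options, it can help to keep them in one dictionary and pass them with .options(). A sketch, assuming the same placeholder host and secret scope as the examples above:

# Shared connection options for ClickHouse Cloud (host and secret scope are placeholders).
clickhouse_options = {
    "host": "your-clickhouse-cloud-host.clickhouse.cloud",
    "protocol": "https",
    "http_port": "8443",
    "database": "default",
    "user": "default",
    "password": dbutils.secrets.get(scope="clickhouse", key="password"),
    "ssl": "true",
}

# Reuse the same options for any read or write.
df = spark.read.format("clickhouse").options(**clickhouse_options).option("table", "events").load()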

Examples

Complete workflow example

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Initialize Spark with ClickHouse connector
spark = SparkSession.builder \
    .config("spark.jars.packages", "com.clickhouse.spark:clickhouse-spark-runtime-3.4_2.12:0.9.0") \
    .getOrCreate()

# Read from ClickHouse
df = spark.read \
    .format("clickhouse") \
    .option("host", "your-host.clickhouse.cloud") \
    .option("protocol", "https") \
    .option("http_port", "8443") \
    .option("database", "default") \
    .option("table", "source_table") \
    .option("user", "default") \
    .option("password", dbutils.secrets.get(scope="clickhouse", key="password")) \
    .option("ssl", "true") \
    .load()

# Transform data
transformed_df = df.filter(col("status") == "active")

# Write to ClickHouse
transformed_df.write \
    .format("clickhouse") \
    .option("host", "your-host.clickhouse.cloud") \
    .option("protocol", "https") \
    .option("http_port", "8443") \
    .option("database", "default") \
    .option("table", "target_table") \
    .option("user", "default") \
    .option("password", dbutils.secrets.get(scope="clickhouse", key="password")) \
    .option("ssl", "true") \
    .option("order_by", "id") \
    .mode("append") \
    .save()
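
If you prefer SQL for the transformation step, the same pipeline can register the source DataFrame as a temporary view and express the filter in Spark SQL; the status column comes from the example above.

# Alternative transformation step using Spark SQL instead of the DataFrame API.
df.createOrReplaceTempView("source_events")
transformed_df = spark.sql("SELECT * FROM source_events WHERE status = 'active'")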