Snowflake
Snowflake is a popular cloud-based data platform.
Prerequisites
To connect Cube to Snowflake, you need to grant certain permissions to the Snowflake role
used by Cube. Cube requires the role to have USAGE on databases and schemas
and SELECT on tables. You will also need:
- Account/Server URL for Snowflake.
- User name and password or an RSA private key for the Snowflake account.
- Optionally, the warehouse name, the user role, and the database name.
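The permissions above can be granted with statements like the following. This is a sketch: CUBE_ROLE, MY_DB, and MY_SCHEMA are placeholder names, so substitute your own role, database, and schema.

```sql
-- Placeholder names: replace CUBE_ROLE, MY_DB, and MY_SCHEMA with your own
GRANT USAGE ON DATABASE MY_DB TO ROLE CUBE_ROLE;
GRANT USAGE ON SCHEMA MY_DB.MY_SCHEMA TO ROLE CUBE_ROLE;
GRANT SELECT ON ALL TABLES IN SCHEMA MY_DB.MY_SCHEMA TO ROLE CUBE_ROLE;
-- Optionally cover tables created later as well
GRANT SELECT ON FUTURE TABLES IN SCHEMA MY_DB.MY_SCHEMA TO ROLE CUBE_ROLE;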
Setup
Manual
Add the following to a .env file in your Cube project:
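A minimal sketch of such a .env file, assuming username/password authentication; the account identifier, warehouse, database, and credentials below are placeholders:

```dotenv
CUBEJS_DB_TYPE=snowflake
CUBEJS_DB_SNOWFLAKE_ACCOUNT=xy12345.us-east-1
CUBEJS_DB_SNOWFLAKE_WAREHOUSE=MY_WAREHOUSE
CUBEJS_DB_NAME=MY_DB
CUBEJS_DB_USER=cube_user
CUBEJS_DB_PASS=**********
```

To use key-pair authentication instead, set CUBEJS_DB_SNOWFLAKE_AUTHENTICATOR=SNOWFLAKE_JWT and provide the key via CUBEJS_DB_SNOWFLAKE_PRIVATE_KEY or CUBEJS_DB_SNOWFLAKE_PRIVATE_KEY_PATH.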
Cube Cloud
In some cases you’ll need to allow connections from your Cube Cloud deployment
IP address to your database. You can copy the IP address from either the
Database Setup step in deployment creation, or from Settings →
Configuration in your deployment.
Environment Variables
| Environment Variable | Description | Possible Values | Required |
|---|---|---|---|
| CUBEJS_DB_SNOWFLAKE_ACCOUNT | The Snowflake account identifier to use when connecting to the database | A valid Snowflake account ID | ✅ |
| CUBEJS_DB_SNOWFLAKE_REGION | The Snowflake region to use when connecting to the database | A valid Snowflake region | ❌ |
| CUBEJS_DB_SNOWFLAKE_WAREHOUSE | The Snowflake warehouse to use when connecting to the database | A valid Snowflake warehouse in the account | ✅ |
| CUBEJS_DB_SNOWFLAKE_ROLE | The Snowflake role to use when connecting to the database | A valid Snowflake role in the account | ❌ |
| CUBEJS_DB_SNOWFLAKE_CLIENT_SESSION_KEEP_ALIVE | If true, keep the Snowflake connection alive indefinitely | true, false | ❌ |
| CUBEJS_DB_NAME | The name of the database to connect to | A valid database name | ✅ |
| CUBEJS_DB_USER | The username used to connect to the database | A valid database username | ✅ |
| CUBEJS_DB_PASS | The password used to connect to the database | A valid database password | ✅ |
| CUBEJS_DB_SNOWFLAKE_AUTHENTICATOR | The type of authenticator to use with Snowflake. Use SNOWFLAKE with username/password, or SNOWFLAKE_JWT with key pairs. Defaults to SNOWFLAKE | SNOWFLAKE, SNOWFLAKE_JWT, OAUTH | ❌ |
| CUBEJS_DB_SNOWFLAKE_PRIVATE_KEY | The content of the private RSA key | Content of the private RSA key (encrypted or not) | ❌ |
| CUBEJS_DB_SNOWFLAKE_PRIVATE_KEY_PATH | The path to the private RSA key | A valid path to the private RSA key | ❌ |
| CUBEJS_DB_SNOWFLAKE_PRIVATE_KEY_PASS | The password for the private RSA key. Only required for encrypted keys | A valid password for the encrypted private RSA key | ❌ |
| CUBEJS_DB_SNOWFLAKE_OAUTH_TOKEN | The OAuth token | A valid OAuth token (string) | ❌ |
| CUBEJS_DB_SNOWFLAKE_OAUTH_TOKEN_PATH | The path to the OAuth token file | A valid path to the OAuth token file | ❌ |
| CUBEJS_DB_SNOWFLAKE_HOST | Host address to which the driver should connect | A valid hostname | ❌ |
| CUBEJS_DB_SNOWFLAKE_QUOTED_IDENTIFIERS_IGNORE_CASE | Whether or not quoted identifiers should be case insensitive. Default is false | true, false | ❌ |
| CUBEJS_DB_MAX_POOL | The maximum number of concurrent database connections to pool. Default is 20 | A valid number | ❌ |
| CUBEJS_CONCURRENCY | The number of concurrent queries to the data source | A valid number | ❌ |
Pre-Aggregation Feature Support
count_distinct_approx
Measures of type count_distinct_approx can
be used in pre-aggregations when using Snowflake as a source database. To learn
more about Snowflake's support for approximate aggregate functions, refer to the
Snowflake documentation.
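As an illustration, such a measure can be declared in the data model like so. This is a sketch: the cube name, table, and column are hypothetical.

```yaml
cubes:
  - name: orders
    sql_table: orders  # hypothetical table name
    measures:
      - name: unique_users
        sql: user_id  # hypothetical column name
        type: count_distinct_approx
```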
Pre-Aggregation Build Strategies
To learn more about pre-aggregation build strategies, refer to the
pre-aggregations documentation.
| Feature | Works with read-only mode? | Is default? |
|---|---|---|
| Batching | ❌ | ✅ |
| Export Bucket | ❌ | ❌ |
Batching
No extra configuration is required to enable batching for Snowflake.
Export Bucket
Snowflake supports using AWS S3, Google Cloud Storage, and Azure Blob Storage for export bucket functionality.
AWS S3
Ensure that proper IAM privileges are configured for reads and writes to the S3 bucket.
Snowflake can authenticate with either a storage integration or user credentials, and
Cube Store can use either IAM roles/IRSA or user credentials; mixed configurations are supported.
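The export bucket can then be configured through environment variables. A sketch using user credentials for both Snowflake and Cube Store; the bucket name, keys, and region are placeholders:

```dotenv
CUBEJS_DB_EXPORT_BUCKET_TYPE=s3
CUBEJS_DB_EXPORT_BUCKET=my-export-bucket
CUBEJS_DB_EXPORT_BUCKET_AWS_KEY=<AWS_KEY>
CUBEJS_DB_EXPORT_BUCKET_AWS_SECRET=<AWS_SECRET>
CUBEJS_DB_EXPORT_BUCKET_AWS_REGION=<AWS_REGION>
```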
Google Cloud Storage
To use Google Cloud Storage as an export bucket, follow the guide on using a Snowflake
storage integration. Take note of the integration name (gcs_int from the example link)
as you'll need it to configure Cube. Also remember to assign the Storage Object Admin
role to your Google Cloud credentials (CUBEJS_DB_EXPORT_GCS_CREDENTIALS).
Once the Snowflake integration is set up, configure Cube using the following:
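A sketch of the corresponding configuration; the bucket name and credentials are placeholders, and gcs_int is the integration name noted above:

```dotenv
CUBEJS_DB_EXPORT_BUCKET_TYPE=gcp
CUBEJS_DB_EXPORT_BUCKET=my-export-bucket
CUBEJS_DB_EXPORT_GCS_CREDENTIALS=<BASE64_ENCODED_SERVICE_CREDENTIALS_JSON>
CUBEJS_DB_EXPORT_INTEGRATION=gcs_int
```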
Azure
To use Azure Blob Storage as an export bucket, follow the guide on using a Snowflake storage integration (Option 1). Take note of the integration name (azure_int from the example link)
as you’ll need it to configure Cube.
Retrieve the storage account access key from your Azure
account.
Once the Snowflake integration is set up, configure Cube using the following:
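A sketch of the corresponding configuration; the container URL and access key are placeholders, and azure_int is the integration name noted above:

```dotenv
CUBEJS_DB_EXPORT_BUCKET_TYPE=azure
CUBEJS_DB_EXPORT_BUCKET=wasbs://my-container@my-account.blob.core.windows.net
CUBEJS_DB_EXPORT_BUCKET_AZURE_KEY=<AZURE_STORAGE_ACCOUNT_ACCESS_KEY>
CUBEJS_DB_EXPORT_INTEGRATION=azure_int
```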