Google BigQuery
Prerequisites
To connect Google BigQuery to Cube, you need to provide service account
credentials. Cube requires the service account to have the BigQuery Data Viewer
and BigQuery Job User roles granted. If you plan to use pre-aggregations,
the account needs the BigQuery Data Editor role instead of BigQuery Data Viewer.
You can learn more about acquiring Google BigQuery credentials here.
Specifically, you will need:
- The Google Cloud Project ID for the BigQuery project
- A set of Google Cloud service credentials which allow access to the BigQuery project
- The Google Cloud region for the BigQuery project
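Cube accepts the service account key either as a path to the JSON key file or as a Base64-encoded string (see the environment variables below). The encoding step can be sketched in Python as follows; the key contents here are placeholders, not a real credential:

```python
import base64
import json

# Placeholder service account key; in practice, load the JSON key file
# downloaded from the Google Cloud console.
key = {"type": "service_account", "project_id": "my-bigquery-project"}

# CUBEJS_DB_BQ_CREDENTIALS expects the JSON key encoded as a Base64 string.
encoded = base64.b64encode(json.dumps(key).encode("utf-8")).decode("ascii")
print(encoded)
```

The same result can be produced with any Base64 tool; the only requirement is that decoding the string yields the original JSON key.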
Setup
Manual
Add CUBEJS_DB_TYPE=bigquery, along with your BigQuery project ID and service
account credentials, to a .env file in your Cube project.
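A minimal .env sketch; the project ID and credentials values are placeholders:

```
CUBEJS_DB_TYPE=bigquery
CUBEJS_DB_BQ_PROJECT_ID=my-bigquery-project
CUBEJS_DB_BQ_CREDENTIALS=<BASE64_ENCODED_SERVICE_CREDENTIALS_JSON>
```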
Cube Cloud
In some cases you’ll need to allow connections from your Cube Cloud deployment
IP address to your database. You can copy the IP address from either the
Database Setup step in deployment creation, or from Settings →
Configuration in your deployment.
Environment Variables
| Environment Variable | Description | Possible Values | Required |
|---|---|---|---|
| CUBEJS_DB_BQ_PROJECT_ID | The Google BigQuery project ID to connect to | A valid Google BigQuery project ID | ✅ |
| CUBEJS_DB_BQ_KEY_FILE | The path to a JSON key file for connecting to Google BigQuery | A valid path to a Google BigQuery JSON key file | ✅ |
| CUBEJS_DB_BQ_CREDENTIALS | A Base64-encoded JSON key file for connecting to Google BigQuery | A valid Google BigQuery JSON key file encoded as a Base64 string | ❌ |
| CUBEJS_DB_BQ_LOCATION | The Google BigQuery dataset location to connect to. Required when using pre-aggregations outside of the US; if not set, the BigQuery driver fails with a "Dataset was not found in location US" error | A valid Google BigQuery regional location | ⚠️ |
| CUBEJS_DB_EXPORT_BUCKET | The name of a bucket in cloud storage | A valid bucket name from cloud storage | ❌ |
| CUBEJS_DB_EXPORT_BUCKET_TYPE | The cloud provider where the bucket is hosted | gcp | ❌ |
| CUBEJS_DB_MAX_POOL | The maximum number of concurrent database connections to pool. Default is 40 | A valid number | ❌ |
| CUBEJS_CONCURRENCY | The number of concurrent queries to the data source | A valid number | ❌ |
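As an illustration, a deployment whose datasets live outside the US might add something like the following to its .env file; the region and numbers are placeholders, not recommendations:

```
CUBEJS_DB_BQ_LOCATION=europe-west2
CUBEJS_DB_MAX_POOL=40
CUBEJS_CONCURRENCY=10
```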
Pre-Aggregation Feature Support
count_distinct_approx
Measures of type count_distinct_approx can
be used in pre-aggregations when using Google BigQuery as a source database. To
learn more about Google BigQuery's support for approximate aggregate functions,
click here.
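As an illustration, a data model using such a measure in a pre-aggregation might look like this sketch; the cube, table, and column names are hypothetical:

```yaml
cubes:
  - name: users
    sql_table: users

    measures:
      - name: unique_users
        sql: user_id
        # On BigQuery this maps to APPROX_COUNT_DISTINCT, which is why
        # it can be rolled up in pre-aggregations
        type: count_distinct_approx

    pre_aggregations:
      - name: users_rollup
        measures:
          - CUBE.unique_users
```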
Pre-Aggregation Build Strategies
To learn more about pre-aggregation build strategies, head
here.
| Feature | Works with read-only mode? | Is default? |
|---|---|---|
| Batching | ❌ | ✅ |
| Export Bucket | ❌ | ❌ |
Batching
No extra configuration is required to use batching with Google BigQuery.
Export Bucket
Google Cloud Storage
For improved pre-aggregation performance with large datasets, enable export
bucket functionality by setting the CUBEJS_DB_EXPORT_BUCKET and
CUBEJS_DB_EXPORT_BUCKET_TYPE environment variables.
When using an export bucket, remember to assign the BigQuery Data Editor and
Storage Object Admin roles to your BigQuery service account.
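A sketch of the export bucket configuration in .env form; the bucket name is a placeholder:

```
CUBEJS_DB_EXPORT_BUCKET=my-export-bucket
CUBEJS_DB_EXPORT_BUCKET_TYPE=gcp
```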