Getting Databricks table data using PostgREST interface
Databricks PostgREST Connector API
There are multiple ways to get table data out of Databricks, but we recommend creating a Databricks Sync Table and using our PostgREST connector.
Prerequisites
- You need an IrisX Databricks account and a table with data in it
- Create a Sync Table according to the Databricks documentation https://docs.databricks.com/aws/en/oltp/instances/sync-data/sync-table (keep in mind that this is a Preview feature and is subject to change in Databricks)
- Create a Service Principal in your Databricks instance. You will need the credentials generated after creating it.
- Grant the created Service Principal access to the Sync Table
- (Optional - recommended for the final release) Add the Service Principal credentials to the Trackunit Vault using a Proxy GraphQL mutation or the dedicated API call, which you can find in the Iris API Reference. Note the key names of the stored credentials. The credentials can be stored in two ways:
  - Account credentials - the credentials in the vault are assigned to a single account and cannot be used by other accounts. For example, when you create an Iris App that requires those credentials and publish it, every Trackunit account that installs this App will need to add its own account credentials to the vault. This is useful when your App connects to a user's "private" Sync Table.
  - App credentials - the credentials in the vault are assigned to the Iris App, independent of the account that is using the App. This is a way of sharing access to the Iris App developer's Sync Table.
Important: This endpoint requires an IrisAppToken for authentication, which is only available when using Serverless Functions. This means the Databricks PostgREST Connector cannot be called directly from the frontend (Iris App UI). You must call this API from your Iris App's serverless function backend.
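Because the endpoint is only reachable from a serverless function backend, a read typically looks like the sketch below. This is a minimal illustration, not the official SDK: `getIrisAppToken` is a hypothetical placeholder for however your serverless runtime exposes the IrisAppToken, and the workspace URL, schema, and credential values are placeholders (direct credentials are shown for brevity; see the authentication options later in this guide).

```typescript
// Sketch of calling the connector from a serverless function backend.
// getIrisAppToken() is a hypothetical placeholder for however your
// serverless runtime exposes the IrisAppToken.
declare function getIrisAppToken(): Promise<string>;

async function readTable(): Promise<unknown[]> {
  const token = await getIrisAppToken();
  const url =
    "https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data/" +
    "dbc-123-1b43.cloud.databricks.com/my_catalog/my_table?limit=10";
  const res = await fetch(url, {
    headers: {
      Authorization: `Bearer ${token}`,
      "Accept-Profile": "my_schema",           // Sync Table schema name
      "x-client-id": "your-client-id",         // direct credentials (dev only)
      "x-client-secret": "your-client-secret",
    },
  });
  if (!res.ok) {
    throw new Error(`Connector request failed: ${res.status}`);
  }
  return (await res.json()) as unknown[];      // JSON array of row objects
}
```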
Endpoint
GET https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data/{workspaceUrl}/{catalog}/{tableName}
Note: This guide is dedicated to helping you use the PostgREST endpoint. For up-to-date API documentation, please visit the Iris API Reference.
Path Parameters
| Parameter | Description |
|---|---|
| workspaceUrl | The Databricks workspace URL (e.g., dbc-123-1b43.cloud.databricks.com) |
| catalog | The catalog name in Databricks |
| tableName | The table name in Databricks |
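Putting the path parameters together, the request URL can be assembled like this (a minimal sketch; the workspace URL, catalog, and table names are placeholders):

```typescript
// Build the table-data endpoint URL from the three path parameters.
const BASE =
  "https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data";

function tableDataUrl(workspaceUrl: string, catalog: string, tableName: string): string {
  // encodeURIComponent guards against characters that are not URL-safe
  return `${BASE}/${encodeURIComponent(workspaceUrl)}/${encodeURIComponent(catalog)}/${encodeURIComponent(tableName)}`;
}

const url = tableDataUrl("dbc-123-1b43.cloud.databricks.com", "my_catalog", "my_table");
```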
Headers
Common Headers (Required for all requests)
| Header | Type | Required | Description |
|---|---|---|---|
| Accept-Profile | string | Yes | The Databricks Sync Table schema name |
Authentication Options
You can authenticate using one of two methods:
Option 1: Using Vault (Recommended for release version)
Use this option when credentials are stored in Vault.
| Header | Type | Required | Default | Description |
|---|---|---|---|---|
| x-client-id-key-name | string | Yes | - | The Vault key name for the Databricks client ID |
| x-client-secret-key-name | string | Yes | - | The Vault key name for the Databricks client secret |
| x-account-id | string | Conditional | - | Required when x-allow-use-app-secret is false or not provided. The account ID used in the Vault path. Most often this should be the App user's account ID. This requires using Account credentials |
| x-allow-use-app-secret | boolean | No | false | If true, the credentials' Vault path will not include the account-id, meaning that App credentials will be used; if false or omitted, x-account-id is required |
Note: When x-allow-use-app-secret is false (the default), the Vault path includes the account-id and Account credentials are required. When true, only the app-level path is used, meaning that App credentials are used.
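The two Vault modes can be sketched as a small helper that assembles the required headers. This is a hypothetical helper, not part of any Trackunit SDK; the key names and account ID are placeholders:

```typescript
// Assemble Vault-based auth headers for the connector.
// When useAppSecret is true, App credentials are used and no account ID is sent;
// otherwise Account credentials are used and x-account-id is required.
function vaultHeaders(
  clientIdKeyName: string,
  clientSecretKeyName: string,
  opts: { accountId?: string; useAppSecret?: boolean } = {}
): Record<string, string> {
  const headers: Record<string, string> = {
    "x-client-id-key-name": clientIdKeyName,
    "x-client-secret-key-name": clientSecretKeyName,
  };
  if (opts.useAppSecret) {
    headers["x-allow-use-app-secret"] = "true";
  } else {
    if (!opts.accountId) {
      throw new Error("x-account-id is required when x-allow-use-app-secret is not set");
    }
    headers["x-account-id"] = opts.accountId;
  }
  return headers;
}
```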
Option 2: Direct Credentials
Use this option when you want to provide credentials directly (for development purposes; not recommended for production).
| Header | Type | Required | Description |
|---|---|---|---|
| x-client-id | string | Yes | The Databricks client ID |
| x-client-secret | string | Yes | The Databricks client secret |
Important: The system distinguishes between the two authentication options based on which headers are present. If x-client-id and x-client-secret are provided, it uses direct credentials; otherwise, it uses the Vault-based approach.
Query Parameters
All PostgREST query parameters are supported. You can filter, sort, paginate, and shape your response according to the PostgREST documentation.
Note: Only GET (read) operations are allowed. This is a restriction of the Databricks API.
Common examples:
- ?select=column1,column2 - Select specific columns
- ?column=eq.value - Filter by exact match
- ?order=column.asc - Sort results
- ?limit=10&offset=20 - Pagination
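These query strings can be composed programmatically with URLSearchParams (a sketch; the column names are placeholders, and note that URLSearchParams percent-encodes characters such as commas, which the server decodes normally):

```typescript
// Compose PostgREST-style query parameters for a filtered, sorted, paginated read.
const params = new URLSearchParams({
  select: "column1,column2", // project two columns
  column1: "eq.value",       // exact-match filter on column1
  order: "column1.asc",      // sort ascending
  limit: "10",               // page size
  offset: "20",              // page start
});

const query = `?${params.toString()}`;
```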
Response
Returns a JSON array of objects, where each object represents a row from the table.
[
{
"column1": "value1",
"column2": "value2",
...
},
...
]
Example Requests
Using Vault (with account-id)
curl -X GET "https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data/dbc-123-1b43.cloud.databricks.com/my_catalog/my_table?limit=10" \
-H "Accept-Profile: my_schema" \
-H "x-client-id-key-name: databricks-client-id-key" \
-H "x-client-secret-key-name: databricks-client-secret-key" \
-H "x-account-id: account-123"
Using Vault (app-level secrets)
curl -X GET "https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data/dbc-123-1b43.cloud.databricks.com/my_catalog/my_table?limit=10" \
-H "Accept-Profile: my_schema" \
-H "x-client-id-key-name: databricks-client-id-key" \
-H "x-client-secret-key-name: databricks-client-secret-key" \
-H "x-allow-use-app-secret: true"
Using Direct Credentials
curl -X GET "https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data/dbc-123-1b43.cloud.databricks.com/my_catalog/my_table?limit=10" \
-H "Accept-Profile: my_schema" \
-H "x-client-id: your-client-id" \
-H "x-client-secret: your-client-secret"
Error Handling
Always check the error message - the problem might be caused either by incorrect authentication to the Trackunit PostgREST endpoint or by failed authentication with Databricks. Here is a summary of the possible situations:
| Failure Point | HTTP Status | Exception Type | Error Message |
|---|---|---|---|
| Missing/Invalid Bearer Token | 401 | Spring Security | "Full authentication is required to access this resource" or "Unauthorized" |
| Missing Required Scope | 403 | Spring Security | "Access Denied" or "Forbidden" |
| Unable to Get TrackUnit Token | 401 | UnauthorizedException | "Unable to get trackunitToken." |
| Missing irisAppId Claim | 401 | UnauthorizedException | "Token does not contain App Id" |
| Missing x-account-id Header | 400 | BadRequestException | "Request header 'x-account-id' is missing while 'x-allow-use-app-secret' is false or not provided" |
| Vault Path Not Found | 401 | UnauthorizedException | "Vault path was not found" |
| Vault Client ID Missing | 401 | UnauthorizedException | "Vault does not contain client id" |
| Vault Client Secret Missing | 401 | UnauthorizedException | "Vault does not contain client secret" |
| Vault Read Failed (After Retries) | 500 | RuntimeException | "Failed to read secrets from vault: {original error message}" |
| Invalid Databricks Credentials | 401 | UnauthorizedException | "Failed to get OAuth Databricks token. Status code: {statusCode}. Message: {databricks error response}" |
| Empty OAuth Token Response | 401 | UnauthorizedException | "Failed to get OAuth Databricks token - empty response" |
| OAuth Token Generation Failed | 500 | RuntimeException | "Failed to generate OAuth token" |
| Cannot Access Sync Table Metadata | 500 | RestClientException/RuntimeException | Databricks SDK exception (e.g., permission denied, table not found) |
| No ReadOnlineView Permission | 403 | RestClientException | PostgREST error response from Databricks |
| Invalid Query Parameters | 400 | RestClientException | PostgREST error response from Databricks |
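When handling these responses in a serverless function, a coarse status-based branch is often enough before inspecting the error message itself. This is a sketch whose categories mirror the table above; the names are illustrative, not part of any API:

```typescript
// Map connector HTTP status codes to coarse error categories,
// following the failure table above.
type ConnectorError = "auth" | "forbidden" | "bad-request" | "server" | "unknown";

function classifyStatus(status: number): ConnectorError {
  switch (status) {
    case 401: return "auth";        // token, Vault, or Databricks credential problems
    case 403: return "forbidden";   // missing scope or ReadOnlineView permission
    case 400: return "bad-request"; // missing x-account-id or invalid query parameters
    case 500: return "server";      // Vault read or OAuth token generation failure
    default:  return "unknown";
  }
}
```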