Getting Databricks table data using the PostgREST interface

Databricks PostgREST Connector API

There are multiple ways to get table data out of Databricks, but we recommend creating a Databricks Sync Table and using our PostgREST connector.

Prerequisites

  1. You need an IrisX Databricks account and a table with data in it.
  2. Create a Sync Table according to the Databricks documentation: https://docs.databricks.com/aws/en/oltp/instances/sync-data/sync-table (keep in mind that this is a Preview feature and is subject to change in Databricks).
  3. Create a Service Principal in your Databricks instance. You will need the credentials generated when it is created.
  4. Grant the created Service Principal access to the Sync Table.
  5. (Optional - recommended for the final release) Add the Service Principal credentials to the Trackunit Vault using the Proxy GraphQL mutation or the dedicated API call, which you can find in the Iris API Reference. Note the key names of the stored credentials. The credentials can be stored in two ways:
    1. Account credentials - the credentials in the vault are assigned to a single account and cannot be used by other accounts. For example, when you create and publish an Iris App that requires those credentials, every Trackunit account that installs the App will need to add its own account credentials to the vault. This is useful when your App connects to a user's "private" Sync Table.
    2. App credentials - the credentials in the vault are assigned to the Iris App itself (independent of the account using the app). This is a way of sharing access to the Iris App developer's Sync Table.

Important: This endpoint requires an IrisAppToken for authentication, which is only available when using Serverless Functions. This means the Databricks PostgREST Connector cannot be called directly from the frontend (Iris App UI). You must call this API from your Iris App's serverless function backend.

Endpoint

GET https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data/{workspaceUrl}/{catalog}/{tableName}

Note This guide is dedicated to helping you use the PostgREST endpoint. For up-to-date API documentation, please visit the Iris API Reference.

Path Parameters

| Parameter | Description |
| --- | --- |
| workspaceUrl | The Databricks workspace URL (e.g., dbc-123-1b43.cloud.databricks.com) |
| catalog | The catalog name in Databricks |
| tableName | The table name in Databricks |
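The three path parameters can be assembled into the request URL programmatically. A minimal sketch (the base URL comes from the Endpoint section above; `buildTableDataUrl` is an illustrative helper name, not part of any SDK):

```typescript
// Builds the table-data URL from the three path parameters.
// The base URL is taken from the Endpoint section of this guide.
const BASE_URL =
  "https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data";

function buildTableDataUrl(
  workspaceUrl: string,
  catalog: string,
  tableName: string
): string {
  // Encode each path segment so names with special characters stay URL-safe.
  return [BASE_URL, workspaceUrl, catalog, tableName]
    .map((part, i) => (i === 0 ? part : encodeURIComponent(part)))
    .join("/");
}
```

For example, `buildTableDataUrl("dbc-123-1b43.cloud.databricks.com", "my_catalog", "my_table")` produces the URL used in the curl examples below.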

Headers

Common Headers (Required for all requests)

| Header | Type | Required | Description |
| --- | --- | --- | --- |
| Accept-Profile | string | Yes | The Databricks Sync Table schema name |

Authentication Options

You can authenticate using one of two methods:

Option 1: Using Vault (Recommended for release version)

Use this option when credentials are stored in Vault.

| Header | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| x-client-id-key-name | string | Yes | - | The Vault key name for the Databricks client ID |
| x-client-secret-key-name | string | Yes | - | The Vault key name for the Databricks client secret |
| x-account-id | string | Conditional | - | Required when x-allow-use-app-secret is false or not provided. The account ID used in the Vault path; most often this should be the App user's account ID. This requires using Account credentials |
| x-allow-use-app-secret | boolean | No | false | If true, the credentials' Vault path will not include the account ID, meaning App credentials are used; if false or omitted, x-account-id is required |

Note: When x-allow-use-app-secret is false (the default), the Vault path includes the account ID, so Account credentials are required. When it is true, only the app-level path is used, so App credentials are used.
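The header rules above can be captured in a small helper that fails fast when x-account-id would be required but is missing, mirroring the endpoint's own 400 response. A sketch, assuming you assemble headers yourself before making the request (`buildVaultHeaders` is an illustrative name, not part of any SDK):

```typescript
// Builds the Vault-based authentication headers described in the table above.
function buildVaultHeaders(opts: {
  clientIdKeyName: string;
  clientSecretKeyName: string;
  accountId?: string;
  allowUseAppSecret?: boolean;
}): Record<string, string> {
  const headers: Record<string, string> = {
    "x-client-id-key-name": opts.clientIdKeyName,
    "x-client-secret-key-name": opts.clientSecretKeyName,
  };
  if (opts.allowUseAppSecret) {
    // App credentials: the Vault path omits the account ID.
    headers["x-allow-use-app-secret"] = "true";
  } else {
    // Account credentials: the account ID is part of the Vault path.
    if (!opts.accountId) {
      throw new Error(
        "x-account-id is required when x-allow-use-app-secret is false or not provided"
      );
    }
    headers["x-account-id"] = opts.accountId;
  }
  return headers;
}
```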

Option 2: Direct Credentials

Use this option when you want to provide credentials directly (for development purposes; not recommended for production).

| Header | Type | Required | Description |
| --- | --- | --- | --- |
| x-client-id | string | Yes | The Databricks client ID |
| x-client-secret | string | Yes | The Databricks client secret |

Important: The system distinguishes between the two authentication options based on which headers are present. If x-client-id and x-client-secret are provided, it uses direct credentials; otherwise, it uses the Vault-based approach.
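That detection rule can be mirrored client-side when debugging which option the server will pick. A purely illustrative sketch:

```typescript
// Mirrors the server's detection rule: direct credentials win when both
// x-client-id and x-client-secret are present; otherwise the Vault-based
// approach is assumed.
type AuthOption = "direct" | "vault";

function detectAuthOption(headers: Record<string, string>): AuthOption {
  return headers["x-client-id"] && headers["x-client-secret"]
    ? "direct"
    : "vault";
}
```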

Query Parameters

All PostgREST query parameters are supported. You can filter, sort, paginate, and shape your response according to the PostgREST documentation.

Note Only GET (read) operations are allowed. This is a restriction of the Databricks API.

Common examples:

  • ?select=column1,column2 - Select specific columns
  • ?column=eq.value - Filter by exact match
  • ?order=column.asc - Sort results
  • ?limit=10&offset=20 - Pagination
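Since these are ordinary query-string pairs, the standard URLSearchParams API is enough to build them. A sketch covering the common cases above (`buildQuery` and the option shape are illustrative, not part of any SDK; column names are placeholders):

```typescript
// Builds a PostgREST query string for the common cases listed above.
function buildQuery(opts: {
  select?: string[];                // columns to return
  filters?: Record<string, string>; // e.g. { column: "eq.value" }
  order?: string;                   // e.g. "column.asc"
  limit?: number;
  offset?: number;
}): string {
  const params = new URLSearchParams();
  if (opts.select?.length) params.set("select", opts.select.join(","));
  for (const [column, condition] of Object.entries(opts.filters ?? {})) {
    params.set(column, condition);
  }
  if (opts.order) params.set("order", opts.order);
  if (opts.limit !== undefined) params.set("limit", String(opts.limit));
  if (opts.offset !== undefined) params.set("offset", String(opts.offset));
  return params.toString();
}

// buildQuery({ select: ["column1", "column2"], limit: 10, offset: 20 })
// → "select=column1%2Ccolumn2&limit=10&offset=20"
```

Note that URLSearchParams percent-encodes the comma in `select`; PostgREST accepts the encoded form.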

Response

Returns a JSON array of objects, where each object represents a row from the table.

[
  {
    "column1": "value1",
    "column2": "value2",
    ...
  },
  ...
]
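Because a successful response is always a JSON array of row objects, a light runtime check before use can catch unexpected payloads early. A sketch (`asRows` is an illustrative helper, not part of any SDK):

```typescript
// Narrows an unknown JSON payload to an array of row objects,
// throwing when the payload is not the expected array shape.
function asRows(payload: unknown): Record<string, unknown>[] {
  if (!Array.isArray(payload)) {
    throw new Error("Expected a JSON array of rows");
  }
  return payload.filter(
    (row): row is Record<string, unknown> =>
      typeof row === "object" && row !== null
  );
}
```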

Example Requests

Using Vault (with account-id)

curl -X GET "https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data/dbc-123-1b43.cloud.databricks.com/my_catalog/my_table?limit=10" \
  -H "Accept-Profile: my_schema" \
  -H "x-client-id-key-name: databricks-client-id-key" \
  -H "x-client-secret-key-name: databricks-client-secret-key" \
  -H "x-account-id: account-123"

Using Vault (app-level secrets)

curl -X GET "https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data/dbc-123-1b43.cloud.databricks.com/my_catalog/my_table?limit=10" \
  -H "Accept-Profile: my_schema" \
  -H "x-client-id-key-name: databricks-client-id-key" \
  -H "x-client-secret-key-name: databricks-client-secret-key" \
  -H "x-allow-use-app-secret: true"

Using Direct Credentials

curl -X GET "https://iris.trackunit.com/public/api/databricks-postgrest/v1beta1/table-data/dbc-123-1b43.cloud.databricks.com/my_catalog/my_table?limit=10" \
  -H "Accept-Profile: my_schema" \
  -H "x-client-id: your-client-id" \
  -H "x-client-secret: your-client-secret"

Error Handling

Always check the error message - the problem might be caused either by incorrect authentication against the Trackunit PostgREST endpoint or by failed authentication with Databricks. Here is a summary of the possible situations:

| Failure Point | HTTP Status | Exception Type | Error Message |
| --- | --- | --- | --- |
| Missing/Invalid Bearer Token | 401 | Spring Security | "Full authentication is required to access this resource" or "Unauthorized" |
| Missing Required Scope | 403 | Spring Security | "Access Denied" or "Forbidden" |
| Unable to Get TrackUnit Token | 401 | UnauthorizedException | "Unable to get trackunitToken." |
| Missing irisAppId Claim | 401 | UnauthorizedException | "Token does not contain App Id" |
| Missing x-account-id Header | 400 | BadRequestException | "Request header 'x-account-id' is missing while 'x-allow-use-app-secret' is false or not provided" |
| Vault Path Not Found | 401 | UnauthorizedException | "Vault path was not found" |
| Vault Client ID Missing | 401 | UnauthorizedException | "Vault does not contain client id" |
| Vault Client Secret Missing | 401 | UnauthorizedException | "Vault does not contain client secret" |
| Vault Read Failed (After Retries) | 500 | RuntimeException | "Failed to read secrets from vault: {original error message}" |
| Invalid Databricks Credentials | 401 | UnauthorizedException | "Failed to get OAuth Databricks token. Status code: {statusCode}. Message: {databricks error response}" |
| Empty OAuth Token Response | 401 | UnauthorizedException | "Failed to get OAuth Databricks token - empty response" |
| OAuth Token Generation Failed | 500 | RuntimeException | "Failed to generate OAuth token" |
| Cannot Access Sync Table Metadata | 500 | RestClientException/RuntimeException | Databricks SDK exception (e.g., permission denied, table not found) |
| No ReadOnlineView Permission | 403 | RestClientException | PostgREST error response from Databricks |
| Invalid Query Parameters | 400 | RestClientException | PostgREST error response from Databricks |
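Since several distinct failures share the same status code, it helps to branch on status first and then log the error body for the specific cause. A sketch based only on the status codes tabulated above (`classifyError` is an illustrative helper, not part of any SDK):

```typescript
// Rough classification of connector errors for logging or retry decisions,
// based on the HTTP status codes in the table above.
type ErrorKind = "auth" | "bad-request" | "forbidden" | "server" | "unknown";

function classifyError(status: number): ErrorKind {
  switch (status) {
    case 400: return "bad-request"; // missing header or invalid query parameters
    case 401: return "auth";        // Trackunit token, Vault contents, or Databricks OAuth
    case 403: return "forbidden";   // missing scope or ReadOnlineView permission
    case 500: return "server";      // Vault read, OAuth generation, or SDK failure
    default:  return "unknown";
  }
}
```

Only the 500-class failures are plausibly transient; the 400/401/403 cases indicate configuration or permission problems that a retry will not fix.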