When set to true, fixed and default values from the policy will be used for fields that are omitted. When set to false, only fixed values from the policy will be applied.
The configuration for storing init scripts. Any number of destinations can be specified. The scripts are executed sequentially in the order provided. If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
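# Illustrative sketch (not part of the schema): one possible `init_scripts`
# entry with a single Unity Catalog volume destination; the path is hypothetical.
#   init_scripts:
#     - volumes:
#         destination: "/Volumes/main/default/scripts/install.sh"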
"instance_pool_id":
"description": |-
The optional ID of the instance pool to which the cluster belongs.
An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.
"deployment":
"description": |-
Deployment information for jobs managed by external sources.
"description":
"description": |-
An optional description for the job. The maximum length is 27700 characters in UTF-8 encoding.
Indicates the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1, this value is always set to `"MULTI_TASK"`.
An optional specification for a remote Git repository containing the source code used by tasks. Version-controlled source code is supported by notebook, dbt, Python script, and SQL File tasks.
If `git_source` is set, these tasks retrieve the file from the remote repository by default. However, this behavior can be overridden by setting `source` to `WORKSPACE` on the task.
Note: dbt and SQL File tasks support only version-controlled sources. If dbt or SQL File tasks are used, `git_source` must be defined on the job.
A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings.
An optional maximum allowed number of concurrent runs of the job.
Set this value if you want to be able to execute multiple runs of the same job concurrently.
This is useful, for example, if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters.
This setting affects only new runs. For example, suppose the job’s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won’t kill any of the active runs.
However, from then on, new runs are skipped unless there are fewer than 3 active runs.
This value cannot exceed 1000. Setting this value to `0` causes all new runs to be skipped.
An optional name for the job. The maximum length is 4096 bytes in UTF-8 encoding.
"notification_settings":
"description": |-
Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this job.
"parameters":
"description": |-
Job-level parameter definitions
"queue":
"description": |-
The queue settings of the job.
"run_as": {}
"schedule":
"description": |-
An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.
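# Illustrative sketch (not part of the schema): a periodic schedule that runs
# daily at 07:30 in UTC; the cron expression and timezone are hypothetical values.
#   schedule:
#     quartz_cron_expression: "0 30 7 * * ?"
#     timezone_id: "UTC"
#     pause_status: "UNPAUSED"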
"tags":
"description": |-
A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job.
"tasks":
"description": |-
A list of task specifications to be executed by this job.
"timeout_seconds":
"description": |-
An optional timeout applied to each run of this job. A value of `0` means no timeout.
"trigger":
"description": |-
A configuration to trigger a run when certain conditions are met. The default behavior is that the job runs only when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.
"webhook_notifications":
"description": |-
A collection of system notification IDs to notify when runs of this job begin or complete.
A catalog in Unity Catalog to publish data from this pipeline to. If `target` is specified, tables in this pipeline are published to a `target` schema inside `catalog` (for example, `catalog`.`target`.`table`). If `target` is not specified, no data is published to Unity Catalog.
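# Illustrative sketch (not part of the schema): publishing pipeline tables to
# Unity Catalog as `my_catalog`.`my_target`.`<table>`; the names are hypothetical.
#   catalog: my_catalog
#   target: my_target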
"channel":
"description": |-
DLT Release Channel that specifies which version to use.
"clusters":
"description": |-
Cluster settings for this pipeline deployment.
"configuration":
"description": |-
String-String configuration for this pipeline execution.
"continuous":
"description": |-
Whether the pipeline is continuous or triggered. This replaces `trigger`.
"deployment":
"description": |-
Deployment type of this pipeline.
"development":
"description": |-
Whether the pipeline is in Development mode. Defaults to false.
"edition":
"description": |-
Pipeline product edition.
"filters":
"description": |-
Filters on which Pipeline packages to include in the deployed graph.
"gateway_definition":
"description": |-
The definition of a gateway pipeline to support change data capture.
"id":
"description": |-
Unique identifier for this pipeline.
"ingestion_definition":
"description": |-
The configuration for a managed ingestion pipeline. These settings cannot be used with the 'libraries', 'target' or 'catalog' settings.
"libraries":
"description": |-
Libraries or code needed by this deployment.
"name":
"description": |-
Friendly identifier for this pipeline.
"notifications":
"description": |-
List of notification settings for this pipeline.
"photon":
"description": |-
Whether Photon is enabled for this pipeline.
"restart_window":
"description": |-
Restart window of this pipeline.
"schema":
"description": |-
The default schema (database) where tables are read from or published to. The presence of this field implies that the pipeline is in direct publishing mode.
"serverless":
"description": |-
Whether serverless compute is enabled for this pipeline.
"storage":
"description": |-
DBFS root directory for storing checkpoints and tables.
"target":
"description": |-
Target schema (database) to add tables in this pipeline to. If not specified, no data is published to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify `catalog`.
"trigger":
"description": |-
Which pipeline trigger to use. Deprecated: Use `continuous` instead.
The expression that determines when to run the monitor. See [examples](https://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html).
Jinja template for a SQL expression that specifies how to compute the metric. See [create metric definition](https://docs.databricks.com/en/lakehouse-monitoring/custom-metrics.html#create-definition).
If using gp3 volumes, what IOPS to use for the disk. If this is not set, the maximum performance of a gp2 volume with the same volume size will be used.
If using gp3 volumes, what throughput to use for the disk. If this is not set, the maximum performance of a gp2 volume with the same volume size will be used.
When set to true, fixed and default values from the policy will be used for fields that are omitted. When set to false, only fixed values from the policy will be applied.
The configuration for storing init scripts. Any number of destinations can be specified. The scripts are executed sequentially in the order provided. If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
"instance_pool_id":
"description": |-
The optional ID of the instance pool to which the cluster belongs.
* `NONE`: No security isolation for multiple users sharing the cluster. Data governance features are not available in this mode.
* `SINGLE_USER`: A secure cluster that can only be exclusively used by a single user specified in `single_user_name`. Most programming languages, cluster features and data governance features are available in this mode.
* `USER_ISOLATION`: A secure cluster that can be shared by multiple users. Cluster users are fully isolated so that they cannot see each other's data and credentials. Most data governance features are supported in this mode. But programming languages and cluster features might be limited.
The following modes are deprecated starting with Databricks Runtime 15.0 and
will be removed in future Databricks Runtime versions:
* `LEGACY_TABLE_ACL`: This mode is for users migrating from legacy Table ACL clusters.
* `LEGACY_PASSTHROUGH`: This mode is for users migrating from legacy Passthrough on high concurrency clusters.
* `LEGACY_SINGLE_USER`: This mode is for users migrating from legacy Passthrough on standard clusters.
* `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way to run clusters with neither UC nor passthrough enabled.
If provided, each node (workers and driver) in the cluster will have this number of local SSDs attached. Each local SSD is 375GB in size. Refer to [GCP documentation](https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds) for the supported number of local SSDs for each instance type.
This field determines whether the Spark executors will be scheduled to run on preemptible VMs (when set to true) versus standard compute engine VMs (when set to false; default).
Note: Soon to be deprecated; use the availability field instead.
* `EQUAL_TO`, `NOT_EQUAL` operators perform string comparison of their operands. This means that `"12.0" == "12"` will evaluate to `false`.
* `GREATER_THAN`, `GREATER_THAN_OR_EQUAL`, `LESS_THAN`, `LESS_THAN_OR_EQUAL` operators perform numeric comparison of their operands. `"12.0" >= "12"` will evaluate to `true`, `"10.0" >= "12"` will evaluate to `false`.
The boolean comparison to task values can be implemented with operators `EQUAL_TO`, `NOT_EQUAL`. If a task value was set to a boolean value, it will be serialized to `"true"` or `"false"` for the comparison.
* `EQUAL_TO`, `NOT_EQUAL` operators perform string comparison of their operands. This means that `"12.0" == "12"` will evaluate to `false`.
* `GREATER_THAN`, `GREATER_THAN_OR_EQUAL`, `LESS_THAN`, `LESS_THAN_OR_EQUAL` operators perform numeric comparison of their operands. `"12.0" >= "12"` will evaluate to `true`, `"10.0" >= "12"` will evaluate to `false`.
The boolean comparison to task values can be implemented with operators `EQUAL_TO`, `NOT_EQUAL`. If a task value was set to a boolean value, it will be serialized to `"true"` or `"false"` for the comparison.
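# Illustrative sketch (not part of the schema): a condition task performing a
# numeric comparison against a task value; the task key and value key are hypothetical.
#   condition_task:
#     left: "{{tasks.ingest.values.row_count}}"
#     op: "GREATER_THAN"
#     right: "0"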
A Cron expression using Quartz syntax that describes the schedule for a job. See [Cron Trigger](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html) for details. This field is required.
"timezone_id":
"description": |-
A Java timezone ID. The schedule for a job is resolved with respect to this timezone. See [Java TimeZone](https://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html) for details. This field is required.
Optional name of the catalog to use. The value is the top level in the 3-level namespace of Unity Catalog (catalog / schema / relation). The catalog value can only be specified if a warehouse_id is specified. Requires dbt-databricks >= 1.1.1.
"commands":
"description": |-
A list of dbt commands to execute. All commands must start with `dbt`. This parameter must not be empty. A maximum of up to 10 commands can be provided.
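# Illustrative sketch (not part of the schema): a dbt task running two commands
# against a SQL warehouse; the warehouse ID and project directory are hypothetical.
#   dbt_task:
#     project_directory: "dbt_project"
#     commands:
#       - "dbt deps"
#       - "dbt run"
#     warehouse_id: "1234567890abcdef"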
"profiles_directory":
"description": |-
Optional (relative) path to the profiles directory. Can only be specified if no warehouse_id is specified. If no warehouse_id is specified and this folder is unset, the root directory is used.
ID of the SQL warehouse to connect to. If provided, we automatically generate and provide the profile and connection details to dbt. It can be overridden on a per-command basis by using the `--profiles-dir` command line argument.
Read-only state of the remote repository at the time the job was run. This field is only included on job runs.
"used_commit":
"description": |-
Commit that was used to execute the run. If git_branch was specified, this points to the HEAD of the branch at the time of the run; if git_tag was specified, this points to the commit the tag points to.
An optional specification for a remote Git repository containing the source code used by tasks. Version-controlled source code is supported by notebook, dbt, Python script, and SQL File tasks.
If `git_source` is set, these tasks retrieve the file from the remote repository by default. However, this behavior can be overridden by setting `source` to `WORKSPACE` on the task.
Note: dbt and SQL File tasks support only version-controlled sources. If dbt or SQL File tasks are used, `git_source` must be defined on the job.
A list of email addresses to be notified when the duration of a run exceeds the threshold specified for the `RUN_DURATION_SECONDS` metric in the `health` field. If no rule for the `RUN_DURATION_SECONDS` metric is specified in the `health` field for the job, notifications are not sent.
"on_failure":
"description": |-
A list of email addresses to be notified when a run unsuccessfully completes. A run is considered to have completed unsuccessfully if it ends with an `INTERNAL_ERROR` `life_cycle_state` or a `FAILED` or `TIMED_OUT` result_state. If this is not specified on job creation, reset, or update, the list is empty, and notifications are not sent.
"on_start":
"description": |-
A list of email addresses to be notified when a run begins. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.
A list of email addresses to notify when any streaming backlog thresholds are exceeded for any stream.
Streaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.
Alerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.
A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a `TERMINATED` `life_cycle_state` and a `SUCCESS` result_state. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.
Dirty state indicates the job is not fully synced with the job specification in the remote repository.
Possible values are:
* `NOT_SYNCED`: The job is not yet synced with the remote job specification. Import the remote job specification from the UI to make the job fully synced.
* `DISCONNECTED`: The job is temporarily disconnected from the remote job specification and allows live edits. Import the remote job specification again from the UI to make the job fully synced.
Dirty state indicates the job is not fully synced with the job specification
in the remote repository.
Possible values are:
* `NOT_SYNCED`: The job is not yet synced with the remote job specification. Import the remote job specification from the UI to make the job fully synced.
* `DISCONNECTED`: The job is temporarily disconnected from the remote job specification and allows live edits. Import the remote job specification again from the UI to make the job fully synced.
Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved from the local Databricks workspace. When set to `GIT`, the notebook will be retrieved from a Git repository
defined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.
* `WORKSPACE`: Notebook is located in Databricks workspace.
* `GIT`: Notebook is located in cloud Git provider.
Named entry point to use. If it does not exist in the metadata of the package, the function is executed from the package directly using `$packageName.$entryPoint()`.
"named_parameters":
"description": |-
Command-line parameters passed to Python wheel task in the form of `["--name=task", "--data=dbfs:/path/to/data.json"]`. Leave it empty if `parameters` is not null.
"package_name":
"description": |-
Name of the package to execute.
"parameters":
"description": |-
Command-line parameters passed to Python wheel task. Leave it empty if `named_parameters` is not null.
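# Illustrative sketch (not part of the schema): a Python wheel task using
# positional parameters; the package, entry point, and paths are hypothetical.
#   python_wheel_task:
#     package_name: "my_package"
#     entry_point: "main"
#     parameters: ["--data", "dbfs:/path/to/data.json"]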
An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. When omitted, defaults to `ALL_SUCCESS`.
Possible values are:
* `ALL_SUCCESS`: All dependencies have executed and succeeded
* `AT_LEAST_ONE_SUCCESS`: At least one dependency has succeeded
* `NONE_FAILED`: None of the dependencies have failed and at least one was executed
* `ALL_DONE`: All dependencies have been completed
* `AT_LEAST_ONE_FAILED`: At least one dependency failed
A map from keys to values for jobs with notebook task, for example `"notebook_params": {"name": "john doe", "age": "35"}`.
The map is passed to the notebook and is accessible through the [dbutils.widgets.get](https://docs.databricks.com/dev-tools/databricks-utils.html) function.
If not specified upon `run-now`, the triggered run uses the job’s base parameters.
notebook_params cannot be specified in conjunction with jar_params.
Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs.
The JSON representation of this field (for example `{"notebook_params":{"name":"john doe","age":"35"}}`) cannot exceed 10,000 bytes.
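# Illustrative sketch (not part of the schema): notebook parameters passed on
# `run-now`, mirroring the example above; the notebook reads them via
# dbutils.widgets.get("name").
#   notebook_params:
#     name: "john doe"
#     age: "35"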
A map from keys to values for jobs with SQL task, for example `"sql_params": {"name": "john doe", "age": "35"}`. The SQL alert task does not support custom parameters.
The Python file to be executed. Cloud file URIs (such as dbfs:/, s3:/, adls:/, gcs:/) and workspace paths are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required.
If alert, indicates that this job must refresh a SQL alert.
"dashboard":
"description": |-
If dashboard, indicates that this job must refresh a SQL dashboard.
"file":
"description": |-
If file, indicates that this job runs a SQL file in a remote Git repository.
"parameters":
"description": |-
Parameters to be used for each run of this job. The SQL alert task does not support custom parameters.
"query":
"description": |-
If query, indicates that this job must execute a SQL query.
"warehouse_id":
"description": |-
The canonical identifier of the SQL warehouse. Recommended to use with serverless or pro SQL warehouses. Classic SQL warehouses are only supported for SQL alert, dashboard and query tasks and are limited to scheduled single-task jobs.
The canonical identifier of the destination to receive email notification. This parameter is mutually exclusive with user_name. You cannot set both destination_id and user_name for subscription notifications.
"user_name":
"description": |-
The user name to receive the subscription email. This parameter is mutually exclusive with destination_id. You cannot set both destination_id and user_name for subscription notifications.
The task runs one or more dbt commands when the `dbt_task` field is present. The dbt task requires both Databricks SQL and the ability to use a serverless or a pro SQL warehouse.
An optional array of objects specifying the dependency graph of the task. All tasks specified in this field must complete before executing this task. The task will run only if the `run_if` condition is true.
The key is `task_key`, and the value is the name assigned to the dependent task.
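# Illustrative sketch (not part of the schema): a task that runs after two
# upstream tasks complete, regardless of their outcome; the task keys are hypothetical.
#   - task_key: "report"
#     depends_on:
#       - task_key: "ingest"
#       - task_key: "transform"
#     run_if: "ALL_DONE"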
An option to disable auto optimization in serverless.
"email_notifications":
"description": |-
An optional set of email addresses that is notified when runs of this task begin or complete as well as when this task is deleted. The default behavior is to not send any emails.
"environment_key":
"description": |-
The key that references an environment spec in a job. This field is required for Python script, Python wheel and dbt tasks when using serverless compute.
An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with the `FAILED` result_state or `INTERNAL_ERROR` `life_cycle_state`. The value `-1` means to retry indefinitely and the value `0` means to never retry.
"min_retry_interval_millis":
"description": |-
An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
"new_cluster":
"description": |-
If new_cluster, a description of a new cluster that is created for each run.
"notebook_task":
"description": |-
The task runs a notebook when the `notebook_task` field is present.
"notification_settings":
"description": |-
Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this task.
"pipeline_task":
"description": |-
The task triggers a pipeline update when the `pipeline_task` field is present. Only pipelines configured to use triggered mode are supported.
"python_wheel_task":
"description": |-
The task runs a Python wheel when the `python_wheel_task` field is present.
(Legacy) The task runs the spark-submit script when the `spark_submit_task` field is present. This task can run only on new clusters and is not compatible with serverless compute.
In the `new_cluster` specification, `libraries` and `spark_conf` are not supported. Instead, use `--jars` and `--py-files` to add Java and Python libraries and `--conf` to set the Spark configurations.
`master`, `deploy-mode`, and `executor-cores` are automatically configured by Databricks; you _cannot_ specify them in parameters.
By default, the Spark submit job uses all available memory (excluding reserved memory for Databricks services). You can set `--driver-memory` and `--executor-memory` to a smaller value to leave some room for off-heap usage.
The `--jars`, `--py-files`, `--files` arguments support DBFS and S3 paths.
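# Illustrative sketch (not part of the schema): a spark-submit task; the class
# name, JAR path, and arguments are hypothetical.
#   spark_submit_task:
#     parameters:
#       - "--class"
#       - "org.example.Main"
#       - "dbfs:/jars/app.jar"
#       - "--input"
#       - "s3://my-bucket/data"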
An optional timeout applied to each run of this job task. A value of `0` means no timeout.
"webhook_notifications":
"description": |-
A collection of system notification IDs to notify when runs of this task begin or complete. The default behavior is to not send any system notifications.
A list of email addresses to be notified when the duration of a run exceeds the threshold specified for the `RUN_DURATION_SECONDS` metric in the `health` field. If no rule for the `RUN_DURATION_SECONDS` metric is specified in the `health` field for the job, notifications are not sent.
"on_failure":
"description": |-
A list of email addresses to be notified when a run unsuccessfully completes. A run is considered to have completed unsuccessfully if it ends with an `INTERNAL_ERROR` `life_cycle_state` or a `FAILED` or `TIMED_OUT` result_state. If this is not specified on job creation, reset, or update, the list is empty, and notifications are not sent.
"on_start":
"description": |-
A list of email addresses to be notified when a run begins. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.
A list of email addresses to notify when any streaming backlog thresholds are exceeded for any stream.
Streaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.
Alerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.
A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a `TERMINATED` `life_cycle_state` and a `SUCCESS` result_state. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.
If true, do not send notifications to recipients specified in `on_start` for the retried runs and do not send notifications to recipients specified in `on_failure` until the last retry of the run.
"no_alert_for_canceled_runs":
"description": |-
If true, do not send notifications to recipients specified in `on_failure` if the run is canceled.
"no_alert_for_skipped_runs":
"description": |-
If true, do not send notifications to recipients specified in `on_failure` if the run is skipped.
An optional list of system notification IDs to call when the duration of a run exceeds the threshold specified for the `RUN_DURATION_SECONDS` metric in the `health` field. A maximum of 3 destinations can be specified for the `on_duration_warning_threshold_exceeded` property.
"on_failure":
"description": |-
An optional list of system notification IDs to call when the run fails. A maximum of 3 destinations can be specified for the `on_failure` property.
"on_start":
"description": |-
An optional list of system notification IDs to call when the run starts. A maximum of 3 destinations can be specified for the `on_start` property.
An optional list of system notification IDs to call when any streaming backlog thresholds are exceeded for any stream.
Streaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.
Alerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.
A maximum of 3 destinations can be specified for the `on_streaming_backlog_exceeded` property.
An optional list of system notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the `on_success` property.
Immutable. The Unity Catalog connection that this ingestion pipeline uses to communicate with the source. This is used with connectors for applications like Salesforce, Workday, and so on.
"ingestion_gateway_id":
"description": |-
Immutable. Identifier for the gateway that is used by this ingestion pipeline to communicate with the source database. This is used with connectors to databases like SQL Server.
"objects":
"description": |-
Required. Settings specifying tables to replicate and the destination for the replicated tables.
"table_configuration":
"description": |-
Configuration settings to control the ingestion of tables. These settings are applied to all tables in the pipeline.
The configuration for storing init scripts. Any number of destinations can be specified. The scripts are executed sequentially in the order provided. If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
"instance_pool_id":
"description": |-
The optional ID of the instance pool to which the cluster belongs.
"label":
"description": |-
A label for the cluster specification, either `default` to configure the default cluster, or `maintenance` to configure the maintenance cluster. This field is optional. The default value is `default`.
Required. Destination table name. The pipeline fails if a table with that name already exists.
"source_url":
"description": |-
Required. Report URL in the source system.
"table_configuration":
"description": |-
Configuration settings to control the ingestion of tables. These settings override the table_configuration defined in the IngestionPipelineDefinition object.
Required. Destination schema to store tables in. Tables with the same name as the source tables are created in this destination schema. The pipeline fails if a table with the same name already exists.
"source_catalog":
"description": |-
The source catalog name. Might be optional depending on the type of source.
"source_schema":
"description": |-
Required. Schema name in the source database.
"table_configuration":
"description": |-
Configuration settings to control the ingestion of tables. These settings are applied to all tables in this schema and override the table_configuration defined in the IngestionPipelineDefinition object.
Optional. Destination table name. The pipeline fails if a table with that name already exists. If not set, the source table name is used.
"source_catalog":
"description": |-
Source catalog name. Might be optional depending on the type of source.
"source_schema":
"description": |-
Schema name in the source database. Might be optional depending on the type of source.
"source_table":
"description": |-
Required. Table name in the source database.
"table_configuration":
"description": |-
Configuration settings to control the ingestion of tables. These settings override the table_configuration defined in the IngestionPipelineDefinition object and the SchemaSpec.
The primary key of the table used to apply changes.
"salesforce_include_formula_fields":
"description": |-
If true, formula fields defined in the table are included in the ingestion. This setting is only valid for the Salesforce connector.
"scd_type":
"description": |-
The SCD type to use to ingest the table.
"sequence_by":
"description": |-
The column names specifying the logical order of events in the source data. Delta Live Tables uses this sequencing to handle change events that arrive out of order.
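# Illustrative sketch (not part of the schema): a table_configuration applying
# SCD type 2 with a composite primary key; the column names are hypothetical and
# the `SCD_TYPE_2` value is an assumed spelling of the SCD type enum.
#   table_configuration:
#     primary_keys: ["id", "region"]
#     scd_type: "SCD_TYPE_2"
#     sequence_by: ["updated_at"]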
The Databricks secret key reference for an AI21 Labs API key. If you prefer to paste your API key directly, see `ai21labs_api_key_plaintext`. You must provide an API key using one of the following fields: `ai21labs_api_key` or `ai21labs_api_key_plaintext`.
"ai21labs_api_key_plaintext":
"description": |-
An AI21 Labs API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `ai21labs_api_key`. You must provide an API key using one of the following fields: `ai21labs_api_key` or `ai21labs_api_key_plaintext`.
Configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses.
"inference_table_config":
"description": |-
Configuration for payload logging using inference tables. Use these tables to monitor and audit data being sent to and received from model APIs and to improve model quality.
"rate_limits":
"description": |-
Configuration for rate limits which can be set to limit endpoint traffic.
"usage_tracking_config":
"description": |-
Configuration to enable usage tracking using system tables. These tables allow you to monitor operational usage on endpoints and their associated costs.
Behavior for PII filter. Currently only 'BLOCK' is supported. If 'BLOCK' is set for the input guardrail and the request contains PII, the request is not sent to the model server and a 400 status code is returned; if 'BLOCK' is set for the output guardrail and the model response contains PII, the PII info in the response is redacted and a 400 status code is returned.
Behavior for PII filter. Currently only 'BLOCK' is supported. If 'BLOCK' is set for the input guardrail and the request contains PII, the request is not sent to the model server and a 400 status code is returned; if 'BLOCK' is set for the output guardrail and the model response contains PII, the PII info in the response is redacted and a 400 status code is returned.
The name of the catalog in Unity Catalog. Required when enabling inference tables. NOTE: On update, you have to disable the inference table first in order to change the catalog name.
"enabled":
"description": |-
Indicates whether the inference table is enabled.
"schema_name":
"description": |-
The name of the schema in Unity Catalog. Required when enabling inference tables. NOTE: On update, you have to disable the inference table first in order to change the schema name.
"table_name_prefix":
"description": |-
The prefix of the table in Unity Catalog. NOTE: On update, you have to disable the inference table first in order to change the prefix name.
The Databricks secret key reference for an AWS access key ID with permissions to interact with Bedrock services. If you prefer to paste your API key directly, see `aws_access_key_id_plaintext`. You must provide an API key using one of the following fields: `aws_access_key_id` or `aws_access_key_id_plaintext`.
"aws_access_key_id_plaintext":
"description": |-
An AWS access key ID with permissions to interact with Bedrock services provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `aws_access_key_id`. You must provide an API key using one of the following fields: `aws_access_key_id` or `aws_access_key_id_plaintext`.
"aws_region":
"description": |-
The AWS region to use. Bedrock has to be enabled there.
"aws_secret_access_key":
"description": |-
The Databricks secret key reference for an AWS secret access key paired with the access key ID, with permissions to interact with Bedrock services. If you prefer to paste your API key directly, see `aws_secret_access_key_plaintext`. You must provide an API key using one of the following fields: `aws_secret_access_key` or `aws_secret_access_key_plaintext`.
"aws_secret_access_key_plaintext":
"description": |-
An AWS secret access key paired with the access key ID, with permissions to interact with Bedrock services provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `aws_secret_access_key`. You must provide an API key using one of the following fields: `aws_secret_access_key` or `aws_secret_access_key_plaintext`.
"bedrock_provider":
"description": |-
The underlying provider in Amazon Bedrock. Supported values (case insensitive) include: Anthropic, Cohere, AI21Labs, Amazon.
The Databricks secret key reference for an Anthropic API key. If you prefer to paste your API key directly, see `anthropic_api_key_plaintext`. You must provide an API key using one of the following fields: `anthropic_api_key` or `anthropic_api_key_plaintext`.
"anthropic_api_key_plaintext":
"description": |-
The Anthropic API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `anthropic_api_key`. You must provide an API key using one of the following fields: `anthropic_api_key` or `anthropic_api_key_plaintext`.
"description": "This is an optional field to provide a customized base URL for the Cohere API. \nIf left unspecified, the standard Cohere base URL is used.\n"
"cohere_api_key":
"description": |-
The Databricks secret key reference for a Cohere API key. If you prefer to paste your API key directly, see `cohere_api_key_plaintext`. You must provide an API key using one of the following fields: `cohere_api_key` or `cohere_api_key_plaintext`.
"cohere_api_key_plaintext":
"description": |-
The Cohere API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `cohere_api_key`. You must provide an API key using one of the following fields: `cohere_api_key` or `cohere_api_key_plaintext`.
The Databricks secret key reference for a private key for the service account which has access to the Google Cloud Vertex AI Service. See [Best practices for managing service account keys](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys). If you prefer to paste your API key directly, see `private_key_plaintext`. You must provide an API key using one of the following fields: `private_key` or `private_key_plaintext`.
"private_key_plaintext":
"description": |-
The private key for the service account which has access to the Google Cloud Vertex AI Service provided as a plaintext secret. See [Best practices for managing service account keys](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys). If you prefer to reference your key using Databricks Secrets, see `private_key`. You must provide an API key using one of the following fields: `private_key` or `private_key_plaintext`.
"project_id":
"description": |-
This is the Google Cloud project id that the service account is associated with.
"region":
"description": |-
This is the region for the Google Cloud Vertex AI Service. See [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations) for more details. Some models are only available in specific regions.
This field is only required for Azure AD OpenAI and is the Microsoft Entra Client ID.
"microsoft_entra_client_secret":
"description": |
The Databricks secret key reference for a client secret used for Microsoft Entra ID authentication.
If you prefer to paste your client secret directly, see `microsoft_entra_client_secret_plaintext`.
You must provide an API key using one of the following fields: `microsoft_entra_client_secret` or `microsoft_entra_client_secret_plaintext`.
"microsoft_entra_client_secret_plaintext":
"description": |
The client secret used for Microsoft Entra ID authentication provided as a plaintext string.
If you prefer to reference your key using Databricks Secrets, see `microsoft_entra_client_secret`.
You must provide an API key using one of the following fields: `microsoft_entra_client_secret` or `microsoft_entra_client_secret_plaintext`.
"microsoft_entra_tenant_id":
"description": |
This field is only required for Azure AD OpenAI and is the Microsoft Entra Tenant ID.
"openai_api_base":
"description": |
This is a field to provide a customized base URL for the OpenAI API.
For Azure OpenAI, this field is required, and is the base URL for the Azure OpenAI API service
provided by Azure.
For other OpenAI API types, this field is optional, and if left unspecified, the standard OpenAI base URL is used.
"openai_api_key":
"description": |-
The Databricks secret key reference for an OpenAI API key using the OpenAI or Azure service. If you prefer to paste your API key directly, see `openai_api_key_plaintext`. You must provide an API key using one of the following fields: `openai_api_key` or `openai_api_key_plaintext`.
"openai_api_key_plaintext":
"description": |-
The OpenAI API key using the OpenAI or Azure service provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `openai_api_key`. You must provide an API key using one of the following fields: `openai_api_key` or `openai_api_key_plaintext`.
"openai_api_type":
"description": |
This is an optional field to specify the type of OpenAI API to use.
For Azure OpenAI, this field is required; set it to the preferred security
access validation protocol. For access token validation, use `azure`. For authentication using Azure Active
Directory (Azure AD), use `azuread`.
"openai_api_version":
"description": |
This is an optional field to specify the OpenAI API version.
For Azure OpenAI, this field is required, and is the version of the Azure OpenAI service to
utilize, specified by a date.
"openai_deployment_name":
"description": |
This field is only required for Azure OpenAI and is the name of the deployment resource for the
Azure OpenAI service.
"openai_organization":
"description": |
This is an optional field to specify the organization in OpenAI or Azure OpenAI.
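# Illustrative sketch (not part of the schema): an external model served through
# Azure OpenAI; the secret scope, resource URL, API version, and deployment name
# are hypothetical.
#   external_model:
#     name: "gpt-4o"
#     provider: "openai"
#     task: "llm/v1/chat"
#     openai_config:
#       openai_api_type: "azure"
#       openai_api_base: "https://my-resource.openai.azure.com/"
#       openai_api_version: "2024-02-01"
#       openai_deployment_name: "my-deployment"
#       openai_api_key: "{{secrets/my_scope/azure_openai_key}}"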
The Databricks secret key reference for a PaLM API key. If you prefer to paste your API key directly, see `palm_api_key_plaintext`. You must provide an API key using one of the following fields: `palm_api_key` or `palm_api_key_plaintext`.
"palm_api_key_plaintext":
"description": |-
The PaLM API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `palm_api_key`. You must provide an API key using one of the following fields: `palm_api_key` or `palm_api_key_plaintext`.
The version of the model in Databricks Model Registry to be served or empty if the entity is a FEATURE_SPEC.
"environment_vars":
"description": "An object containing a set of optional, user-specified environment variable key-value pairs used for serving this entity.\nNote: this is an experimental feature and subject to change. \nExample entity environment variables that refer to Databricks secrets: `{\"OPENAI_API_KEY\": \"{{secrets/my_scope/my_key}}\", \"DATABRICKS_TOKEN\": \"{{secrets/my_scope2/my_key2}}\"}`"
The external model to be served. NOTE: Only one of external_model and (entity_name, entity_version, workload_size, workload_type, and scale_to_zero_enabled)
can be specified, with the latter set being used for custom model serving for a Databricks registered model. For an existing endpoint with external_model,
it cannot be updated to an endpoint without external_model. If the endpoint is created without external_model, users cannot update it to add external_model later.
The task type of all external models within an endpoint must be the same.
The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores.
If not specified for an external model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other
entities, it defaults to <entity-name>-<entity-version>.
The workload type of the served entity. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is
"CPU". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others.
See the available [GPU types](https://docs.databricks.com/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).
"description": "An object containing a set of optional, user-specified environment variable key-value pairs used for serving this model.\nNote: this is an experimental feature and subject to change. \nExample model environment variables that refer to Databricks secrets: `{\"OPENAI_API_KEY\": \"{{secrets/my_scope/my_key}}\", \"DATABRICKS_TOKEN\": \"{{secrets/my_scope2/my_key2}}\"}`"
"instance_profile_arn":
"description": |-
ARN of the instance profile that the served model will use to access AWS resources.
"max_provisioned_throughput":
"description": |-
The maximum tokens per second that the endpoint can scale up to.
"min_provisioned_throughput":
"description": |-
The minimum tokens per second that the endpoint can scale down to.
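# Illustrative sketch (not part of the schema): a served entity for a Unity
# Catalog registered model using provisioned throughput; the entity name,
# version, and throughput values are hypothetical.
#   served_entities:
#     - entity_name: "main.default.my_model"
#       entity_version: "3"
#       min_provisioned_throughput: 0
#       max_provisioned_throughput: 100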