- [databricks alerts - The alerts API can be used to perform CRUD operations on alerts.](#databricks-alerts---the-alerts-api-can-be-used-to-perform-crud-operations-on-alerts)
- [databricks alerts create - Create an alert.](#databricks-alerts-create---create-an-alert)
- [databricks alerts delete - Delete an alert.](#databricks-alerts-delete---delete-an-alert)
- [databricks alerts get - Get an alert.](#databricks-alerts-get---get-an-alert)
- [databricks alerts list - Get alerts.](#databricks-alerts-list---get-alerts)
- [databricks alerts update - Update an alert.](#databricks-alerts-update---update-an-alert)
- [databricks catalogs - A catalog is the first layer of Unity Catalog’s three-level namespace.](#databricks-catalogs---a-catalog-is-the-first-layer-of-unity-catalogs-three-level-namespace)
- [databricks catalogs create - Create a catalog.](#databricks-catalogs-create---create-a-catalog)
- [databricks catalogs delete - Delete a catalog.](#databricks-catalogs-delete---delete-a-catalog)
- [databricks catalogs get - Get a catalog.](#databricks-catalogs-get---get-a-catalog)
- [databricks catalogs list - List catalogs.](#databricks-catalogs-list---list-catalogs)
- [databricks catalogs update - Update a catalog.](#databricks-catalogs-update---update-a-catalog)
- [databricks cluster-policies - Cluster policy limits the ability to configure clusters based on a set of rules.](#databricks-cluster-policies---cluster-policy-limits-the-ability-to-configure-clusters-based-on-a-set-of-rules)
- [databricks cluster-policies create - Create a new policy.](#databricks-cluster-policies-create---create-a-new-policy)
- [databricks cluster-policies delete - Delete a cluster policy.](#databricks-cluster-policies-delete---delete-a-cluster-policy)
- [databricks cluster-policies edit - Update a cluster policy.](#databricks-cluster-policies-edit---update-a-cluster-policy)
- [databricks cluster-policies get - Get entity.](#databricks-cluster-policies-get---get-entity)
- [databricks cluster-policies list - Get a cluster policy.](#databricks-cluster-policies-list---get-a-cluster-policy)
- [databricks clusters - The Clusters API allows you to create, start, edit, list, terminate, and delete clusters.](#databricks-clusters---the-clusters-api-allows-you-to-create-start-edit-list-terminate-and-delete-clusters)
- [databricks account credentials - These commands manage credential configurations for this workspace.](#databricks-account-credentials---these-commands-manage-credential-configurations-for-this-workspace)
- [databricks account credentials get - Get credential configuration.](#databricks-account-credentials-get---get-credential-configuration)
- [databricks account credentials list - Get all credential configurations.](#databricks-account-credentials-list---get-all-credential-configurations)
- [databricks current-user - command allows retrieving information about currently authenticated user or service principal.](#databricks-current-user---command-allows-retrieving-information-about-currently-authenticated-user-or-service-principal)
- [databricks current-user me - Get current user info.](#databricks-current-user-me---get-current-user-info)
- [databricks dashboards create - Create a dashboard object.](#databricks-dashboards-create---create-a-dashboard-object)
- [databricks dashboards delete - Remove a dashboard.](#databricks-dashboards-delete---remove-a-dashboard)
- [databricks dashboards get - Retrieve a definition.](#databricks-dashboards-get---retrieve-a-definition)
- [databricks dashboards list - Get dashboard objects.](#databricks-dashboards-list---get-dashboard-objects)
- [databricks dashboards restore - Restore a dashboard.](#databricks-dashboards-restore---restore-a-dashboard)
- [databricks data-sources - command is provided to assist you in making new query objects.](#databricks-data-sources---command-is-provided-to-assist-you-in-making-new-query-objects)
- [databricks data-sources list - Get a list of SQL warehouses.](#databricks-data-sources-list---get-a-list-of-sql-warehouses)
- [databricks account encryption-keys get - Get encryption key configuration.](#databricks-account-encryption-keys-get---get-encryption-key-configuration)
- [databricks account encryption-keys list - Get all encryption key configurations.](#databricks-account-encryption-keys-list---get-all-encryption-key-configurations)
- [databricks experiments create-run - Create a run.](#databricks-experiments-create-run---create-a-run)
- [databricks experiments delete-experiment - Delete an experiment.](#databricks-experiments-delete-experiment---delete-an-experiment)
- [databricks experiments delete-run - Delete a run.](#databricks-experiments-delete-run---delete-a-run)
- [databricks experiments delete-tag - Delete a tag.](#databricks-experiments-delete-tag---delete-a-tag)
- [databricks experiments get-by-name - Get metadata.](#databricks-experiments-get-by-name---get-metadata)
- [databricks experiments get-experiment - Get an experiment.](#databricks-experiments-get-experiment---get-an-experiment)
- [databricks experiments get-history - Get history of a given metric within a run.](#databricks-experiments-get-history---get-history-of-a-given-metric-within-a-run)
- [databricks experiments get-run - Get a run.](#databricks-experiments-get-run---get-a-run)
- [databricks experiments list-artifacts - Get all artifacts.](#databricks-experiments-list-artifacts---get-all-artifacts)
- [databricks experiments list-experiments - List experiments.](#databricks-experiments-list-experiments---list-experiments)
- [databricks experiments log-batch - Log a batch.](#databricks-experiments-log-batch---log-a-batch)
- [databricks experiments log-metric - Log a metric.](#databricks-experiments-log-metric---log-a-metric)
- [databricks experiments log-model - Log a model.](#databricks-experiments-log-model---log-a-model)
- [databricks experiments log-param - Log a param.](#databricks-experiments-log-param---log-a-param)
- [databricks experiments restore-experiment - Restores an experiment.](#databricks-experiments-restore-experiment---restores-an-experiment)
- [databricks experiments restore-run - Restore a run.](#databricks-experiments-restore-run---restore-a-run)
- [databricks experiments search-runs - Search for runs.](#databricks-experiments-search-runs---search-for-runs)
- [databricks experiments set-experiment-tag - Set a tag.](#databricks-experiments-set-experiment-tag---set-a-tag)
- [databricks experiments set-tag - Set a tag.](#databricks-experiments-set-tag---set-a-tag)
- [databricks experiments update-experiment - Update an experiment.](#databricks-experiments-update-experiment---update-an-experiment)
- [databricks experiments update-run - Update a run.](#databricks-experiments-update-run---update-a-run)
- [databricks external-locations - manage cloud storage path with a storage credential that authorizes access to it.](#databricks-external-locations---manage-cloud-storage-path-with-a-storage-credential-that-authorizes-access-to-it)
- [databricks external-locations create - Create an external location.](#databricks-external-locations-create---create-an-external-location)
- [databricks external-locations delete - Delete an external location.](#databricks-external-locations-delete---delete-an-external-location)
- [databricks external-locations get - Get an external location.](#databricks-external-locations-get---get-an-external-location)
- [databricks external-locations list - List external locations.](#databricks-external-locations-list---list-external-locations)
- [databricks external-locations update - Update an external location.](#databricks-external-locations-update---update-an-external-location)
- [databricks functions create - Create a function.](#databricks-functions-create---create-a-function)
- [databricks functions delete - Delete a function.](#databricks-functions-delete---delete-a-function)
- [databricks functions get - Get a function.](#databricks-functions-get---get-a-function)
- [databricks functions list - List functions.](#databricks-functions-list---list-functions)
- [databricks functions update - Update a function.](#databricks-functions-update---update-a-function)
- [databricks git-credentials - Registers personal access token for Databricks to do operations on behalf of the user.](#databricks-git-credentials---registers-personal-access-token-for-databricks-to-do-operations-on-behalf-of-the-user)
- [databricks git-credentials create - Create a credential entry.](#databricks-git-credentials-create---create-a-credential-entry)
- [databricks git-credentials delete - Delete a credential.](#databricks-git-credentials-delete---delete-a-credential)
- [databricks git-credentials get - Get a credential entry.](#databricks-git-credentials-get---get-a-credential-entry)
- [databricks git-credentials list - Get Git credentials.](#databricks-git-credentials-list---get-git-credentials)
- [databricks git-credentials update - Update a credential.](#databricks-git-credentials-update---update-a-credential)
- [databricks global-init-scripts - configure global initialization scripts for the workspace.](#databricks-global-init-scripts---configure-global-initialization-scripts-for-the-workspace)
- [databricks account ip-access-lists - The Accounts IP Access List API enables account admins to configure IP access lists for access to the account console.](#databricks-account-ip-access-lists---the-accounts-ip-access-list-api-enables-account-admins-to-configure-ip-access-lists-for-access-to-the-account-console)
- [databricks account log-delivery - These commands manage log delivery configurations for this account.](#databricks-account-log-delivery---these-commands-manage-log-delivery-configurations-for-this-account)
- [databricks account log-delivery create - Create a new log delivery configuration.](#databricks-account-log-delivery-create---create-a-new-log-delivery-configuration)
- [databricks account log-delivery get - Get log delivery configuration.](#databricks-account-log-delivery-get---get-log-delivery-configuration)
- [databricks account log-delivery list - Get all log delivery configurations.](#databricks-account-log-delivery-list---get-all-log-delivery-configurations)
- [databricks account metastore-assignments - These commands manage metastore assignments to a workspace.](#databricks-account-metastore-assignments---these-commands-manage-metastore-assignments-to-a-workspace)
- [databricks account metastore-assignments create - Assigns a workspace to a metastore.](#databricks-account-metastore-assignments-create---assigns-a-workspace-to-a-metastore)
- [databricks account metastore-assignments delete - Delete a metastore assignment.](#databricks-account-metastore-assignments-delete---delete-a-metastore-assignment)
- [databricks account metastore-assignments get - Gets the metastore assignment for a workspace.](#databricks-account-metastore-assignments-get---gets-the-metastore-assignment-for-a-workspace)
- [databricks account metastore-assignments list - Get all workspaces assigned to a metastore.](#databricks-account-metastore-assignments-list---get-all-workspaces-assigned-to-a-metastore)
- [databricks account metastore-assignments update - Updates a metastore assignment to a workspace.](#databricks-account-metastore-assignments-update---updates-a-metastore-assignment-to-a-workspaces)
- [databricks metastores - Manage metastores in Unity Catalog.](#databricks-metastores---manage-metastores-in-unity-catalog)
- [databricks metastores assign - Create an assignment.](#databricks-metastores-assign---create-an-assignment)
- [databricks metastores create - Create a metastore.](#databricks-metastores-create---create-a-metastore)
- [databricks metastores current - Get metastore assignment for workspace.](#databricks-metastores-current---get-metastore-assignment-for-workspace)
- [databricks metastores delete - Delete a metastore.](#databricks-metastores-delete---delete-a-metastore)
- [databricks metastores get - Get a metastore.](#databricks-metastores-get---get-a-metastore)
- [databricks metastores list - List metastores.](#databricks-metastores-list---list-metastores)
- [databricks metastores maintenance - Enables or disables auto maintenance on the metastore.](#databricks-metastores-maintenance---enables-or-disables-auto-maintenance-on-the-metastore)
- [databricks metastores summary - Get a metastore summary.](#databricks-metastores-summary---get-a-metastore-summary)
- [databricks metastores unassign - Delete an assignment.](#databricks-metastores-unassign---delete-an-assignment)
- [databricks metastores update - Update a metastore.](#databricks-metastores-update---update-a-metastore)
- [databricks metastores update-assignment - Update an assignment.](#databricks-metastores-update-assignment---update-an-assignment)
- [databricks account metastores - These commands manage Unity Catalog metastores for an account.](#databricks-account-metastores---these-commands-manage-unity-catalog-metastores-for-an-account)
- [databricks account metastores delete - Delete a metastore.](#databricks-account-metastores-delete---delete-a-metastore)
- [databricks account metastores get - Get a metastore.](#databricks-account-metastores-get---get-a-metastore)
- [databricks account metastores list - Get all metastores associated with an account.](#databricks-account-metastores-list---get-all-metastores-associated-with-an-account)
- [databricks account metastores update - Update a metastore.](#databricks-account-metastores-update---update-a-metastore)
- [databricks model-registry - Expose commands for Model Registry.](#databricks-model-registry---expose-commands-for-model-registry)
- [databricks model-registry create-comment - Post a comment.](#databricks-model-registry-create-comment---post-a-comment)
- [databricks model-registry create-model - Create a model.](#databricks-model-registry-create-model---create-a-model)
- [databricks model-registry create-model-version - Create a model version.](#databricks-model-registry-create-model-version---create-a-model-version)
- [databricks model-registry create-transition-request - Make a transition request.](#databricks-model-registry-create-transition-request---make-a-transition-request)
- [databricks model-registry create-webhook - Create a webhook.](#databricks-model-registry-create-webhook---create-a-webhook)
- [databricks model-registry delete-comment - Delete a comment.](#databricks-model-registry-delete-comment---delete-a-comment)
- [databricks model-registry delete-model - Delete a model.](#databricks-model-registry-delete-model---delete-a-model)
- [databricks model-registry delete-model-tag - Delete a model tag.](#databricks-model-registry-delete-model-tag---delete-a-model-tag)
- [databricks model-registry delete-model-version - Delete a model version.](#databricks-model-registry-delete-model-version---delete-a-model-version)
- [databricks model-registry delete-model-version-tag - Delete a model version tag.](#databricks-model-registry-delete-model-version-tag---delete-a-model-version-tag)
- [databricks model-registry delete-transition-request - Delete a transition request.](#databricks-model-registry-delete-transition-request---delete-a-ransition-request)
- [databricks model-registry delete-webhook - Delete a webhook.](#databricks-model-registry-delete-webhook---delete-a-webhook)
- [databricks model-registry get-latest-versions - Get the latest version.](#databricks-model-registry-get-latest-versions---get-the-latest-version)
- [databricks model-registry get-model - Get model.](#databricks-model-registry-get-model---get-model)
- [databricks model-registry get-model-version - Get a model version.](#databricks-model-registry-get-model-version---get-a-model-version)
- [databricks model-registry get-model-version-download-uri - Get a model version URI.](#databricks-model-registry-get-model-version-download-uri---get-a-model-version-uri)
- [databricks model-registry list-models - List models.](#databricks-model-registry-list-models---list-models)
- [databricks model-registry list-transition-requests - List transition requests.](#databricks-model-registry-list-transition-requests---list-transition-requests)
- [databricks model-registry list-webhooks - List registry webhooks.](#databricks-model-registry-list-webhooks---list-registry-webhooks)
- [databricks model-registry reject-transition-request - Reject a transition request.](#databricks-model-registry-reject-transition-request---reject-a-transition-request)
- [databricks model-registry rename-model - Rename a model.](#databricks-model-registry-rename-model---rename-a-model)
- [databricks model-registry search-model-versions - Searches model versions.](#databricks-model-registry-search-model-versions---searches-model-versions)
- [databricks account networks delete - Delete a network configuration.](#databricks-account-networks-delete---delete-a-network-configuration)
- [databricks account networks get - Get a network configuration.](#databricks-account-networks-get---get-a-network-configuration)
- [databricks account networks list - Get all network configurations.](#databricks-account-networks-list---get-all-network-configurations)
- [databricks account o-auth-enrollment - These commands enable administrators to enroll OAuth for their accounts, which is required for adding/using any OAuth published/custom application integration.](#databricks-account-o-auth-enrollment---these-commands-enable-administrators-to-enroll-oauth-for-their-accounts-which-is-required-for-addingusing-any-oauth-publishedcustom-application-integration)
- [databricks account o-auth-enrollment get - Get OAuth enrollment status.](#databricks-account-o-auth-enrollment-get---get-oauth-enrollment-status)
- [databricks permissions - Manage access for various users on different objects and endpoints.](#databricks-permissions---manage-access-for-various-users-on-different-objects-and-endpoints)
- [databricks permissions get - Get object permissions.](#databricks-permissions-get---get-object-permissions)
- [databricks permissions get-permission-levels - Get permission levels.](#databricks-permissions-get-permission-levels---get-permission-levels)
- [databricks permissions set - Set permissions.](#databricks-permissions-set---set-permissions)
- [databricks account private-access get - Get a private access settings object.](#databricks-account-private-access-get---get-a-private-access-settings-object)
- [databricks account private-access list - Get all private access settings objects.](#databricks-account-private-access-list---get-all-private-access-settings-objects)
- [databricks providers create - Create an auth provider.](#databricks-providers-create---create-an-auth-provider)
- [databricks providers delete - Delete a provider.](#databricks-providers-delete---delete-a-provider)
- [databricks providers get - Get a provider.](#databricks-providers-get---get-a-provider)
- [databricks providers list - List providers.](#databricks-providers-list---list-providers)
- [databricks providers list-shares - List shares by Provider.](#databricks-providers-list-shares---list-shares-by-provider)
- [databricks providers update - Update a provider.](#databricks-providers-update---update-a-provider)
- [databricks account published-app-integration - manage published OAuth app integrations like Tableau Cloud for Databricks in AWS cloud.](#databricks-account-published-app-integration---manage-published-oauth-app-integrations-like-tableau-cloud-for-databricks-in-aws-cloud)
- [databricks account published-app-integration get - Get OAuth Published App Integration.](#databricks-account-published-app-integration-get---get-oauth-published-app-integration)
- [databricks account published-app-integration list - Get published oauth app integrations.](#databricks-account-published-app-integration-list---get-published-oauth-app-integrations)
- [databricks queries - These endpoints are used for CRUD operations on query definitions.](#databricks-queries---these-endpoints-are-used-for-crud-operations-on-query-definitions)
- [databricks queries create - Create a new query definition.](#databricks-queries-create---create-a-new-query-definition)
- [databricks queries delete - Delete a query.](#databricks-queries-delete---delete-a-query)
- [databricks queries get - Get a query definition.](#databricks-queries-get---get-a-query-definition)
- [databricks queries list - Get a list of queries.](#databricks-queries-list---get-a-list-of-queries)
- [databricks queries restore - Restore a query.](#databricks-queries-restore---restore-a-query)
- [databricks queries update - Change a query definition.](#databricks-queries-update---change-a-query-definition)
- [databricks query-history - Access the history of queries through SQL warehouses.](#databricks-query-history---access-the-history-of-queries-through-sql-warehouses)
- [databricks query-history list - List Queries.](#databricks-query-history-list---list-queries)
- [databricks recipient-activation get-activation-url-info - Get a share activation URL.](#databricks-recipient-activation-get-activation-url-info---get-a-share-activation-url)
- [databricks recipient-activation retrieve-token - Get an access token.](#databricks-recipient-activation-retrieve-token---get-an-access-token)
- [databricks secrets list-scopes - List all scopes.](#databricks-secrets-list-scopes---list-all-scopes)
- [databricks secrets list-secrets - List secret keys.](#databricks-secrets-list-secrets---list-secret-keys)
- [databricks secrets put-acl - Create/update an ACL.](#databricks-secrets-put-acl---createupdate-an-acl)
- [databricks secrets put-secret - Add a secret.](#databricks-secrets-put-secret---add-a-secret)
- [databricks service-principals - Manage service principals.](#databricks-service-principals---manage-service-principals)
- [databricks service-principals create - Create a service principal.](#databricks-service-principals-create---create-a-service-principal)
- [databricks service-principals delete - Delete a service principal.](#databricks-service-principals-delete---delete-a-service-principal)
- [databricks service-principals get - Get service principal details.](#databricks-service-principals-get---get-service-principal-details)
- [databricks service-principals list - List service principals.](#databricks-service-principals-list---list-service-principals)
- [databricks service-principals patch - Update service principal details.](#databricks-service-principals-patch---update-service-principal-details)
- [databricks service-principals update - Replace service principal.](#databricks-service-principals-update---replace-service-principal)
- [databricks account service-principals - Manage service principals on the account level.](#databricks-account-service-principals---manage-service-principals-on-the-account-level)
- [databricks account service-principals create - Create a service principal.](#databricks-account-service-principals-create---create-a-service-principal)
- [databricks account service-principals delete - Delete a service principal.](#databricks-account-service-principals-delete---delete-a-service-principal)
- [databricks account service-principals get - Get service principal details.](#databricks-account-service-principals-get---get-service-principal-details)
- [databricks account service-principals list - List service principals.](#databricks-account-service-principals-list---list-service-principals)
- [databricks account service-principals patch - Update service principal details.](#databricks-account-service-principals-patch---update-service-principal-details)
- [databricks account service-principals update - Replace service principal.](#databricks-account-service-principals-update---replace-service-principal)
- [databricks serving-endpoints - Manage model serving endpoints.](#databricks-serving-endpoints---manage-model-serving-endpoints)
- [databricks serving-endpoints build-logs - Retrieve the logs associated with building the model's environment for a given serving endpoint's served model.](#databricks-serving-endpoints-build-logs---retrieve-the-logs-associated-with-building-the-models-environment-for-a-given-serving-endpoints-served-model)
- [databricks serving-endpoints create - Create a new serving endpoint.](#databricks-serving-endpoints-create---create-a-new-serving-endpoint)
- [databricks serving-endpoints delete - Delete a serving endpoint.](#databricks-serving-endpoints-delete---delete-a-serving-endpoint)
- [databricks serving-endpoints export-metrics - Retrieve the metrics corresponding to a serving endpoint for the current time in Prometheus or OpenMetrics exposition format.](#databricks-serving-endpoints-export-metrics---retrieve-the-metrics-corresponding-to-a-serving-endpoint-for-the-current-time-in-prometheus-or-openmetrics-exposition-format)
- [databricks serving-endpoints get - Get a single serving endpoint.](#databricks-serving-endpoints-get---get-a-single-serving-endpoint)
- [databricks serving-endpoints list - Retrieve all serving endpoints.](#databricks-serving-endpoints-list---retrieve-all-serving-endpoints)
- [databricks serving-endpoints logs - Retrieve the most recent log lines associated with a given serving endpoint's served model.](#databricks-serving-endpoints-logs---retrieve-the-most-recent-log-lines-associated-with-a-given-serving-endpoints-served-model)
- [databricks serving-endpoints query - Query a serving endpoint with provided model input.](#databricks-serving-endpoints-query---query-a-serving-endpoint-with-provided-model-input)
- [databricks serving-endpoints update-config - Update a serving endpoint with a new config.](#databricks-serving-endpoints-update-config---update-a-serving-endpoint-with-a-new-config)
- [databricks account storage get - Get storage configuration.](#databricks-account-storage-get---get-storage-configuration)
- [databricks account storage list - Get all storage configurations.](#databricks-account-storage-list---get-all-storage-configurations)
- [databricks storage-credentials - Manage storage credentials for Unity Catalog.](#databricks-storage-credentials---manage-storage-credentials-for-unity-catalog)
- [databricks storage-credentials create - Create a storage credential.](#databricks-storage-credentials-create---create-a-storage-credential)
- [databricks storage-credentials delete - Delete a credential.](#databricks-storage-credentials-delete---delete-a-credential)
- [databricks storage-credentials get - Get a credential.](#databricks-storage-credentials-get---get-a-credential)
- [databricks storage-credentials list - List credentials.](#databricks-storage-credentials-list---list-credentials)
- [databricks storage-credentials update - Update a credential.](#databricks-storage-credentials-update---update-a-credential)
- [databricks storage-credentials validate - Validate a storage credential.](#databricks-storage-credentials-validate---validate-a-storage-credential)
- [databricks account storage-credentials - These commands manage storage credentials for a particular metastore.](#databricks-account-storage-credentials---these-commands-manage-storage-credentials-for-a-particular-metastore)
- [databricks account storage-credentials create - Create a storage credential.](#databricks-account-storage-credentials-create---create-a-storage-credential)
- [databricks account storage-credentials get - Gets the named storage credential.](#databricks-account-storage-credentials-get---gets-the-named-storage-credential)
- [databricks account storage-credentials list - Get all storage credentials assigned to a metastore.](#databricks-account-storage-credentials-list---get-all-storage-credentials-assigned-to-a-metastore)
- [databricks table-constraints - Primary key and foreign key constraints encode relationships between fields in tables.](#databricks-table-constraints---primary-key-and-foreign-key-constraints-encode-relationships-between-fields-in-tables)
- [databricks table-constraints create - Create a table constraint.](#databricks-table-constraints-create---create-a-table-constraint)
- [databricks table-constraints delete - Delete a table constraint.](#databricks-table-constraints-delete---delete-a-table-constraint)
- [databricks tables - A table resides in the third layer of Unity Catalog’s three-level namespace.](#databricks-tables---a-table-resides-in-the-third-layer-of-unity-catalogs-three-level-namespace)
- [databricks tables delete - Delete a table.](#databricks-tables-delete---delete-a-table)
- [databricks tables get - Get a table.](#databricks-tables-get---get-a-table)
- [databricks tables list - List tables.](#databricks-tables-list---list-tables)
- [databricks tables list-summaries - List table summaries.](#databricks-tables-list-summaries---list-table-summaries)
- [databricks token-management - Enables administrators to get all tokens and delete tokens for other users.](#databricks-token-management---enables-administrators-to-get-all-tokens-and-delete-tokens-for-other-users)
- [databricks token-management delete - Delete a token.](#databricks-token-management-delete---delete-a-token)
- [databricks token-management get - Get token info.](#databricks-token-management-get---get-token-info)
- [databricks token-management list - List all tokens.](#databricks-token-management-list---list-all-tokens)
- [databricks tokens - The Token API allows you to create, list, and revoke tokens that can be used to authenticate and access Databricks commands.](#databricks-tokens---the-token-api-allows-you-to-create-list-and-revoke-tokens-that-can-be-used-to-authenticate-and-access-databricks-commandss)
- [databricks tokens create - Create a user token.](#databricks-tokens-create---create-a-user-token)
- [databricks account vpc-endpoints get - Get a VPC endpoint configuration.](#databricks-account-vpc-endpoints-get---get-a-vpc-endpoint-configuration)
- [databricks account vpc-endpoints list - Get all VPC endpoint configurations.](#databricks-account-vpc-endpoints-list---get-all-vpc-endpoint-configurations)
- [databricks warehouses create - Create a warehouse.](#databricks-warehouses-create---create-a-warehouse)
- [databricks warehouses delete - Delete a warehouse.](#databricks-warehouses-delete---delete-a-warehouse)
- [databricks warehouses edit - Update a warehouse.](#databricks-warehouses-edit---update-a-warehouse)
- [databricks warehouses get - Get warehouse info.](#databricks-warehouses-get---get-warehouse-info)
- [databricks warehouses get-workspace-warehouse-config - Get the workspace configuration.](#databricks-warehouses-get-workspace-warehouse-config---get-the-workspace-configuration)
- [databricks warehouses list - List warehouses.](#databricks-warehouses-list---list-warehouses)
- [databricks warehouses set-workspace-warehouse-config - Set the workspace configuration.](#databricks-warehouses-set-workspace-warehouse-config---set-the-workspace-configuration)
- [databricks warehouses start - Start a warehouse.](#databricks-warehouses-start---start-a-warehouse)
- [databricks warehouses stop - Stop a warehouse.](#databricks-warehouses-stop---stop-a-warehouse)
- [databricks workspace - The Workspace API allows you to list, import, export, and delete notebooks and folders.](#databricks-workspace---the-workspace-api-allows-you-to-list-import-export-and-delete-notebooks-and-folders)
- [databricks workspace delete - Delete a workspace object.](#databricks-workspace-delete---delete-a-workspace-object)
- [databricks workspace export - Export a workspace object.](#databricks-workspace-export---export-a-workspace-object)
- [databricks workspace get-status - Get status.](#databricks-workspace-get-status---get-status)
- [databricks workspace import - Import a workspace object.](#databricks-workspace-import---import-a-workspace-object)
- [databricks workspace list - List contents.](#databricks-workspace-list---list-contents)
- [databricks workspace mkdirs - Create a directory.](#databricks-workspace-mkdirs---create-a-directory)
- [databricks account workspace-assignment - The Workspace Permission Assignment API allows you to manage workspace permissions for principals in your account.](#databricks-account-workspace-assignment---the-workspace-permission-assignment-api-allows-you-to-manage-workspace-permissions-for-principals-in-your-account)
- [databricks account workspace-assignment get - List workspace permissions.](#databricks-account-workspace-assignment-get---list-workspace-permissions)
- [databricks account workspace-assignment list - Get permission assignments.](#databricks-account-workspace-assignment-list---get-permission-assignments)
- [databricks account workspaces - These commands manage workspaces for this account.](#databricks-account-workspaces---these-commands-manage-workspaces-for-this-account)
- [databricks account workspaces create - Create a new workspace.](#databricks-account-workspaces-create---create-a-new-workspace)
- [databricks account workspaces delete - Delete a workspace.](#databricks-account-workspaces-delete---delete-a-workspace)
- [databricks account workspaces get - Get a workspace.](#databricks-account-workspaces-get---get-a-workspace)
- [databricks account workspaces list - Get all workspaces.](#databricks-account-workspaces-list---get-all-workspaces)
An alert is a Databricks SQL object that periodically runs a query, evaluates a condition of its result,
and notifies users or notification destinations if the condition was met.
Flags:
* `--json` - either inline JSON string or @path/to/file.json with request body
* `--parent` - The identifier of the workspace folder containing the alert.
* `--rearm` - Number of seconds after being triggered before the alert rearms itself and can be triggered again.
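As a hedged sketch of how these flags combine, an alert can be created from a request body on disk; the folder identifier and the contents of `alert.json` are placeholders whose schema must match the Alerts API:

```sh
# Create an alert, place it in a workspace folder, and let it re-arm
# 300 seconds after it triggers.
databricks alerts create --json @alert.json \
  --parent folders/1234567890 \
  --rearm 300
```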
Gets the specified catalog in a metastore. The caller must be a metastore admin, the owner of the catalog, or a user that has the **USE_CATALOG** privilege set for their account.
Creates a new Spark cluster. This method will acquire new instances from the cloud provider if necessary.
This method is asynchronous; the returned `cluster_id` can be used to poll the cluster status.
When this method returns, the cluster will be in a `PENDING` state.
The cluster will be usable once it enters a `RUNNING` state.
Note: Databricks may not be able to acquire some of the requested nodes, due to cloud provider limitations
(account limits, spot price, etc.) or transient network issues.
If Databricks acquires at least 85% of the requested on-demand nodes, cluster creation will succeed.
Otherwise the cluster will terminate with an informative error message.
Flags:
* `--no-wait` - do not wait to reach RUNNING state.
* `--timeout` - maximum amount of time to reach RUNNING state.
* `--json` - either inline JSON string or @path/to/file.json with request body
* `--apply-policy-default-values` - Note: This field won't be true for webapp requests.
* `--autotermination-minutes` - Automatically terminates the cluster after it is inactive for this time in minutes.
* `--cluster-name` - Cluster name requested by the user.
* `--cluster-source` - Determines whether the cluster was created by a user through the UI, created by the Databricks Jobs Scheduler, or through an API request.
* `--driver-instance-pool-id` - The optional ID of the instance pool to which the driver of the cluster belongs.
* `--driver-node-type-id` - The node type of the Spark driver.
* `--enable-elastic-disk` - Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk space when its Spark workers are running low on disk space.
* `--enable-local-disk-encryption` - Whether to enable LUKS on cluster VMs' local disks.
* `--instance-pool-id` - The optional ID of the instance pool to which the cluster belongs.
* `--node-type-id` - This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster.
* `--num-workers` - Number of worker nodes that this cluster should have.
* `--policy-id` - The ID of the cluster policy used to create the cluster, if applicable.
* `--runtime-engine` - Decides which runtime engine to use.
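A hedged sketch of a small autoterminating cluster; the Spark version positional argument, node type, and timeout syntax are illustrative values, so confirm the exact arguments with `databricks clusters create --help`:

```sh
# Request a two-worker cluster that shuts itself down after an hour of
# inactivity, waiting up to 20 minutes for it to reach RUNNING.
databricks clusters create 13.3.x-scala2.12 \
  --cluster-name dev-cluster \
  --num-workers 2 \
  --node-type-id i3.xlarge \
  --autotermination-minutes 60 \
  --timeout 20m
```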
Updates the configuration of a cluster to match the provided attributes and size.
A cluster can be updated if it is in a `RUNNING` or `TERMINATED` state.
If a cluster is updated while in a `RUNNING` state, it will be restarted so that the new attributes can take effect.
If a cluster is updated while in a `TERMINATED` state, it will remain `TERMINATED`.
The next time it is started using the `clusters/start` API, the new attributes will take effect.
Any attempt to update a cluster in any other state will be rejected with an `INVALID_STATE` error code.
Clusters created by the Databricks Jobs service cannot be edited.
Flags:
* `--no-wait` - do not wait to reach RUNNING state.
* `--timeout` - maximum amount of time to reach RUNNING state.
* `--json` - either inline JSON string or @path/to/file.json with request body
* `--apply-policy-default-values` - Note: This field won't be true for webapp requests.
* `--autotermination-minutes` - Automatically terminates the cluster after it is inactive for this time in minutes.
* `--cluster-name` - Cluster name requested by the user.
* `--cluster-source` - Determines whether the cluster was created by a user through the UI, created by the Databricks Jobs Scheduler, or through an API request.
* `--driver-instance-pool-id` - The optional ID of the instance pool to which the driver of the cluster belongs.
* `--driver-node-type-id` - The node type of the Spark driver.
* `--enable-elastic-disk` - Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk space when its Spark workers are running low on disk space.
* `--enable-local-disk-encryption` - Whether to enable LUKS on cluster VMs' local disks.
* `--instance-pool-id` - The optional ID of the instance pool to which the cluster belongs.
* `--node-type-id` - This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster.
* `--num-workers` - Number of worker nodes that this cluster should have.
* `--policy-id` - The ID of the cluster policy used to create the cluster, if applicable.
* `--runtime-engine` - Decides which runtime engine to use.
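A hedged sketch of an edit; the cluster ID and Spark version positionals are illustrative, so confirm them with `databricks clusters edit --help`. Remember that editing a `RUNNING` cluster restarts it:

```sh
# Resize an existing cluster to four workers and return immediately
# instead of waiting for the RUNNING state.
databricks clusters edit 1234-567890-abcdefgh 13.3.x-scala2.12 \
  --num-workers 4 \
  --no-wait
```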
Creates a Databricks credential configuration that represents cloud cross-account credentials for a specified account. Databricks uses this to set up network infrastructure properly to host Databricks clusters. For your AWS IAM role, you need to trust the External ID (the Databricks Account API account ID) in the returned credential object, and configure the required access policy.
Save the response's `credentials_id` field, which is the ID for your new credential configuration object.
For information about how to create a new workspace with this command, see [Create a new workspace using the Account API](http://docs.databricks.com/administration-guide/account-api/new-workspace.html)
Flags:
* `--json` - either inline JSON string or @path/to/file.json with request body
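A hedged sketch; `credential.json` is a placeholder request body (a credential configuration name plus the cross-account IAM role ARN) for the Credentials API:

```sh
# Register the cross-account IAM role as a credential configuration, then
# save the credentials_id from the response for later workspace creation.
databricks account credentials create --json @credential.json
```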
Deletes a Databricks credential configuration object for an account, both specified by ID. You cannot delete a credential that is associated with any workspace.
Creates a customer-managed key configuration object for an account, specified by ID. This operation uploads a reference to a customer-managed key to Databricks. If the key is assigned as a workspace's customer-managed key for managed services, Databricks uses the key to encrypt the workspace's notebooks and secrets in the control plane, in addition to Databricks SQL queries and query history. If it is specified as a workspace's customer-managed key for workspace storage, the key encrypts the workspace's root S3 bucket (which contains the workspace's root DBFS and system data) and, optionally, cluster EBS volume data.
**Important**: Customer-managed keys are supported only for some deployment types, subscription types, and AWS regions.
This operation is available only if your account is on the E2 version of the platform or on a select custom plan that allows multiple workspaces per account.
Flags:
* `--json` - either inline JSON string or @path/to/file.json with request body
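A hedged sketch; `key.json` is a placeholder request body (the KMS key details and the use cases it should cover) for the Encryption Keys API:

```sh
# Register a customer-managed key for managed services and/or workspace storage.
databricks account encryption-keys create --json @key.json
```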
Gets a customer-managed key configuration object for an account, specified by ID. This operation uploads a reference to a customer-managed key to Databricks. If assigned as a workspace's customer-managed key for managed services, Databricks uses the key to encrypt the workspace's notebooks and secrets in the control plane, in addition to Databricks SQL queries and query history. If it is specified as a workspace's customer-managed key for storage, the key encrypts the workspace's root S3 bucket (which contains the workspace's root DBFS and system data) and, optionally, cluster EBS volume data.
**Important**: Customer-managed keys are supported only for some deployment types, subscription types, and AWS regions.
This operation is available only if your account is on the E2 version of the platform.
Gets all customer-managed key configuration objects for an account. If the key is specified as a workspace's managed services customer-managed key, Databricks uses the key to encrypt the workspace's notebooks and secrets in the control plane, in addition to Databricks SQL queries and query history. If the key is specified as a workspace's storage customer-managed key, the key is used to encrypt the workspace's root S3 bucket and optionally can encrypt cluster EBS volume data in the data plane.
**Important**: Customer-managed keys are supported only for some deployment types, subscription types, and AWS regions.
This operation is available only if your account is on the E2 version of the platform.
List artifacts for a run. Takes an optional `artifact_path` prefix. If it is specified, the response contains only artifacts with the specified prefix.
Flags:
* `--path` - Filter artifacts matching this path (a relative path from the root artifact directory).
* `--run-id` - ID of the run whose artifacts to list.
* `--run-uuid` - [Deprecated, use run_id instead] ID of the run whose artifacts to list.
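A hedged sketch using the flags above; the run ID and artifact path are placeholders:

```sh
# List only the artifacts stored under the "model" prefix for one MLflow run.
databricks experiments list-artifacts \
  --run-id 1a2b3c4d5e6f7890 \
  --path model
```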
Restore an experiment marked for deletion. This also restores associated metadata, runs, metrics, params, and tags. If the experiment uses FileStore, underlying artifacts associated with the experiment are also restored. Throws `RESOURCE_DOES_NOT_EXIST` if the experiment was never created or was permanently deleted.
Gets an external location from the metastore. The caller must be either a metastore admin, the owner of the external location, or a user that has some privilege on the external location.
The function implementation can be any SQL expression or query, and it can be invoked wherever a table reference is allowed in a query.
In Unity Catalog, a function resides at the same level as a table, so it can be referenced with the form __catalog_name__.__schema_name__.__function_name__.
Deletes the function that matches the supplied name.
For the deletion to succeed, the user must satisfy one of the following conditions:
- Is the owner of the function's parent catalog
- Is the owner of the function's parent schema and has the **USE_CATALOG** privilege on its parent catalog
- Is the owner of the function itself and has both the **USE_CATALOG** privilege on its parent catalog and the **USE_SCHEMA** privilege on its parent schema
Flags:
* `--force` - Force deletion even if the function is not empty.
Gets a function from within a parent catalog and schema.
For the fetch to succeed, the user must satisfy one of the following requirements:
- Is a metastore admin
- Is an owner of the function's parent catalog
- Have the **USE_CATALOG** privilege on the function's parent catalog and be the owner of the function
- Have the **USE_CATALOG** privilege on the function's parent catalog, the **USE_SCHEMA** privilege on the function's parent schema, and the **EXECUTE** privilege on the function itself
List functions within the specified parent catalog and schema.
If the user is a metastore admin, all functions are returned in the output list.
Otherwise, the user must have the **USE_CATALOG** privilege on the catalog and the **USE_SCHEMA** privilege on the schema, and the output list contains only functions for which either the user has the **EXECUTE** privilege or the user is the owner.
There is no guarantee of a specific ordering of the elements in the array.
Updates the function that matches the supplied name.
Only the owner of the function can be updated. If the user is not a metastore admin, the user must be a member of the group that is the new function owner. For the update to succeed, the user must satisfy one of the following conditions:
- Is a metastore admin
- Is the owner of the function's parent catalog
- Is the owner of the function's parent schema and has the **USE_CATALOG** privilege on its parent catalog
- Is the owner of the function itself and has the **USE_CATALOG** privilege on its parent catalog as well as the **USE_SCHEMA** privilege on the function's parent schema.
Flags:
* `--owner` - Username of current owner of function.
Creates a new instance pool using idle and ready-to-use cloud instances.
Flags:
* `--json` - either inline JSON string or @path/to/file.json with request body
* `--enable-elastic-disk` - Autoscaling Local Storage: when enabled, the instances in this pool will dynamically acquire additional disk space when their Spark workers are running low on disk space.
* `--idle-instance-autotermination-minutes` - Automatically terminates the extra instances in the pool cache after they are inactive for this time in minutes if the min_idle_instances requirement is already met.
* `--max-capacity` - Maximum number of outstanding instances to keep in the pool, including both instances used by clusters and idle instances.
* `--min-idle-instances` - Minimum number of idle instances to keep in the instance pool.
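A hedged sketch; the pool name and node type positionals are illustrative, so confirm the exact arguments with `databricks instance-pools create --help`:

```sh
# Create a pool that keeps two warm instances, caps out at ten,
# and reclaims extra idle instances after 30 minutes.
databricks instance-pools create dev-pool i3.xlarge \
  --min-idle-instances 2 \
  --max-capacity 10 \
  --idle-instance-autotermination-minutes 30
```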
Modifies the configuration of an existing instance pool.
Flags:
* `--json` - either inline JSON string or @path/to/file.json with request body
* `--enable-elastic-disk` - Autoscaling Local Storage: when enabled, the instances in this pool will dynamically acquire additional disk space when their Spark workers are running low on disk space.
* `--idle-instance-autotermination-minutes` - Automatically terminates the extra instances in the pool cache after they are inactive for this time in minutes if the min_idle_instances requirement is already met.
* `--max-capacity` - Maximum number of outstanding instances to keep in the pool, including both instances used by clusters and idle instances.
* `--min-idle-instances` - Minimum number of idle instances to keep in the instance pool.
## `databricks account ip-access-lists` - The Accounts IP Access List API enables account admins to configure IP access lists for access to the account console.
These commands manage log delivery configurations for this account. The two supported log types
for these commands are _billable usage logs_ and _audit logs_. This feature is in Public Preview.
This feature works with all account ID types.
Log delivery works with all account types. However, if your account is on the E2 version of
the platform or on a select custom plan that allows multiple workspaces per account, you can
optionally configure different storage destinations for each workspace. Log delivery status
is also provided to know the latest status of log delivery attempts.
The high-level flow of billable usage delivery:
1. **Create storage**: In AWS, [create a new AWS S3 bucket](https://docs.databricks.com/administration-guide/account-api/aws-storage.html)
with a specific bucket policy. Using Databricks APIs, call the Account API to create a [storage configuration object](#operation/create-storage-config)
that uses the bucket name.
2. **Create credentials**: In AWS, create the appropriate AWS IAM role. For full details,
including the required IAM role policies and trust relationship, see
Creates a new Databricks log delivery configuration to enable delivery of the specified type of logs to your storage location. This requires that you already created a [credential object](#operation/create-credential-config) (which encapsulates a cross-account service IAM role) and a [storage configuration object](#operation/create-storage-config) (which encapsulates an S3 bucket).
For full details, including the required IAM role policies and bucket policies, see [Deliver and access billable usage logs](https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html) or [Configure audit logging](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html).
**Note**: There is a limit on the number of log delivery configurations available per account (each limit applies separately to each log type including billable usage and audit logs). You can create a maximum of two enabled account-level delivery configurations (configurations without a workspace filter) per type. Additionally, you can create two enabled workspace-level delivery configurations per workspace for each log type, which means that the same workspace ID can occur in the workspace filter for no more than two delivery configurations per log type.
You cannot delete a log delivery configuration, but you can disable it (see [Enable or disable log delivery configuration](#operation/patch-log-delivery-config-status)).
Flags:
* `--json` - either inline JSON string or @path/to/file.json with request body
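A hedged sketch; `log-delivery.json` is a placeholder request body that pairs an existing credential configuration ID with a storage configuration ID and names the log type:

```sh
# Enable billable-usage (or audit-log) delivery to the configured S3 bucket.
databricks account log-delivery create --json @log-delivery.json
```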
Enables or disables a log delivery configuration. Deletion of delivery configurations is not supported, so disable log delivery configurations that are no longer needed. Note that you can't re-enable a delivery configuration if this would violate the delivery configuration limits described under [Create log delivery](#operation/create-log-delivery-config).
Updates information for a specific metastore. The caller must be a metastore admin.
Flags:
* `--delta-sharing-organization-name` - The organization name of a Delta Sharing entity, to be used in Databricks-to-Databricks Delta Sharing as the official name.
* `--delta-sharing-recipient-token-lifetime-in-seconds` - The lifetime of delta sharing recipient token in seconds.
* `--delta-sharing-scope` - The scope of Delta Sharing enabled for the metastore.
* `--name` - The user-specified name of the metastore.
* `--owner` - The owner of the metastore.
* `--privilege-model-version` - Privilege model version of the metastore, of the form `major.minor` (e.g., `1.0`).
* `--storage-root-credential-id` - UUID of storage credential to access the metastore storage_root.
* `--delta-sharing-organization-name` - The organization name of a Delta Sharing entity, to be used in Databricks-to-Databricks Delta Sharing as the official name.
* `--delta-sharing-recipient-token-lifetime-in-seconds` - The lifetime of delta sharing recipient token in seconds.
* `--delta-sharing-scope` - The scope of Delta Sharing enabled for the metastore.
* `--name` - The user-specified name of the metastore.
* `--owner` - The owner of the metastore.
* `--privilege-model-version` - Privilege model version of the metastore, of the form `major.minor` (e.g., `1.0`).
* `--storage-root-credential-id` - UUID of storage credential to access the metastore storage_root.
* `--json` - either inline JSON string or @path/to/file.json with request body
* `--description` - Optional description for model version.
* `--run-id` - MLflow run ID for correlation, if `source` was generated by an experiment run in MLflow tracking server.
* `--run-link` - MLflow run link - this is the exact link of the run that generated this model version, potentially hosted at another instance of MLflow.
Get the details of a model. This is a Databricks Workspace version of the [MLflow endpoint](https://www.mlflow.org/docs/latest/rest-api.html#get-registeredmodel)
that also returns the model's Databricks Workspace ID and the permission level of the requesting user on the model.
Transition a model version's stage. This is a Databricks Workspace version of the [MLflow endpoint](https://www.mlflow.org/docs/latest/rest-api.html#transition-modelversion-stage)
that also accepts a comment associated with the transition to be recorded.
Flags:
* `--comment` - User-provided comment on the action.
These commands manage network configurations for customer-managed VPCs (optional). A network configuration's ID is used when creating a new workspace if you use customer-managed VPCs.
Creates a Databricks network configuration that represents a VPC and its resources. The VPC will be used for new Databricks clusters. This requires a pre-existing VPC and subnets.
Flags:
* `--json` - either inline JSON string or @path/to/file.json with request body
* `--vpc-id` - The ID of the VPC associated with this network.
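A hedged sketch; `network.json` is a placeholder request body (network configuration name, VPC ID, subnet IDs, and security group IDs):

```sh
# Register a customer-managed VPC so its ID can be referenced when creating a workspace.
databricks account networks create --json @network.json
```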
Deletes a Databricks network configuration, which represents a cloud VPC and its resources. You cannot delete a network that is associated with a workspace.
This operation is available only if your account is on the E2 version of the platform.
## `databricks account o-auth-enrollment` - These commands enable administrators to enroll OAuth for their accounts, which is required for adding/using any OAuth published/custom application integration.
These commands enable administrators to enroll OAuth for their accounts, which is required for adding/using any OAuth published/custom application integration.
**Note:** Your account must be on the E2 version to use these commands, because OAuth is supported only on the E2 version of the platform.
Creates a private access settings object, which specifies how your workspace is
accessed over [AWS PrivateLink](https://aws.amazon.com/privatelink). To use AWS
PrivateLink, a workspace must have a private access settings object referenced
by ID in the workspace's `private_access_settings_id` property.
You can share one private access settings object with multiple workspaces in a single account. However,
private access settings are specific to AWS regions, so only workspaces in the same
AWS region can use a given private access settings object.
Before configuring PrivateLink, read the
[Databricks article about PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html).
Flags:
* `--json` - either inline JSON string or @path/to/file.json with request body
* `--private-access-level` - The private access level controls which VPC endpoints can connect to the UI or API of any workspace that attaches this private access settings object.
* `--public-access-enabled` - Determines if the workspace can be accessed over public internet.
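A hedged sketch of the create operation described above; the subcommand name and `private-access.json` (a placeholder request body with settings name, region, public access flag, private access level, and allowed VPC endpoint IDs) should be confirmed with `databricks account private-access --help`:

```sh
# Create a private access settings object to reference from a workspace's
# private_access_settings_id property.
databricks account private-access create --json @private-access.json
```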
Deletes a private access settings object, which determines how your workspace is accessed over [AWS PrivateLink](https://aws.amazon.com/privatelink).
Before configuring PrivateLink, read the [Databricks article about PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html).
Gets a private access settings object, which specifies how your workspace is accessed over [AWS PrivateLink](https://aws.amazon.com/privatelink).
Before configuring PrivateLink, read the [Databricks article about PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html).
Updates an existing private access settings object, which specifies how your workspace is
accessed over [AWS PrivateLink](https://aws.amazon.com/privatelink). To use AWS
PrivateLink, a workspace must have a private access settings object referenced by ID in
the workspace's `private_access_settings_id` property.
This operation completely overwrites your existing private access settings object attached to your workspaces.
All workspaces attached to the private access settings are affected by any change.
If `public_access_enabled`, `private_access_level`, or `allowed_vpc_endpoint_ids`
are updated, effects of these changes might take several minutes to propagate to the
workspace API.
You can share one private access settings object with multiple
workspaces in a single account. However, private access settings are specific to
AWS regions, so only workspaces in the same AWS region can use a given private access
settings object.
Before configuring PrivateLink, read the
[Databricks article about PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html).
Flags:
* `--json` - either inline JSON string or @path/to/file.json with request body
* `--private-access-level` - The private access level controls which VPC endpoints can connect to the UI or API of any workspace that attaches this private access settings object.
* `--public-access-enabled` - Determines if the workspace can be accessed over public internet.
Gets a specific authentication provider. The caller must supply the name of the provider, and must either be a metastore admin or the owner of the provider.
Creates a new query definition. Queries created with this endpoint belong to the authenticated user making the request.
The `data_source_id` field specifies the ID of the SQL warehouse to run this query against. You can use the Data Sources API to see a complete list of available SQL warehouses. Or you can copy the `data_source_id` from an existing query.
**Note**: You cannot add a visualization until you create the query.
Flags:
*`--json` - either inline JSON string or @path/to/file.json with request body
*`--data-source-id` - The ID of the data source / SQL warehouse where this query will run.
*`--description` - General description that can convey additional information about this query such as usage notes.
*`--name` - The name or title of this query to display in list views.
*`--parent` - The identifier of the workspace folder containing the query.
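A minimal sketch of creating a query with an inline `--json` body. The `query` field (the SQL text itself) is an assumption from the Queries API and is not listed among the flags above; the warehouse ID is a placeholder.

```bash
# Sketch: create a query definition that runs against an existing SQL warehouse.
databricks queries create --json '{
  "name": "Daily active users",
  "description": "Counts distinct users per day",
  "data_source_id": "<sql-warehouse-data-source-id>",
  "query": "SELECT count(DISTINCT user_id) FROM events"
}'
```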
Creates a new schema for a catalog in the metastore. The caller must be a metastore admin, or have the **CREATE_SCHEMA** privilege in the parent catalog.
Flags:
*`--json` - either inline JSON string or @path/to/file.json with request body
*`--comment` - User-provided free-form text description.
*`--storage-root` - Storage root URL for managed tables within schema.
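As a sketch, assuming the request body uses `name` and `catalog_name` fields (an assumption from the Unity Catalog API; `comment` maps to the flag above, and the names are placeholders):

```bash
# Sketch: create a schema named sales inside the main catalog.
databricks schemas create --json '{
  "name": "sales",
  "catalog_name": "main",
  "comment": "Curated sales tables"
}'
```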
Gets the specified schema within the metastore. The caller must be a metastore admin, the owner of the schema, or a user that has the **USE_SCHEMA** privilege on the schema.
Gets an array of schemas for a catalog in the metastore. If the caller is the metastore admin or the owner of the parent catalog, all schemas for the catalog will be retrieved.
Otherwise, only schemas owned by the caller (or for which the caller has the **USE_SCHEMA** privilege) will be retrieved.
There is no guarantee of a specific ordering of the elements in the array.
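A sketch of the read paths, assuming `get` takes a full `catalog.schema` name and `list` takes the catalog name as positional arguments (both assumptions about the command signatures; the names are placeholders):

```bash
# Sketch: fetch one schema, then list every schema the caller can see in the catalog.
databricks schemas get main.sales
databricks schemas list main
```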
### `databricks serving-endpoints build-logs` - Retrieve the logs associated with building the model's environment for a given serving endpoint's served model.
### `databricks serving-endpoints export-metrics` - Retrieve the metrics corresponding to a serving endpoint for the current time in Prometheus or OpenMetrics exposition format.
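For the two serving-endpoints commands above, a hedged sketch; the positional endpoint and served-model names are assumptions about the command signatures, and the values are placeholders.

```bash
# Sketch: fetch the environment build logs for one served model, then export the
# endpoint's metrics in Prometheus/OpenMetrics exposition format.
databricks serving-endpoints build-logs my-endpoint my-model-1
databricks serving-endpoints export-metrics my-endpoint
```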
Creates a new storage configuration for an account, specified by ID. Uploads a storage configuration object that represents the root AWS S3 bucket in your account. Databricks stores related workspace assets including DBFS, cluster logs, and job results. For the AWS S3 bucket, you need to configure the required bucket policy.
For information about how to create a new workspace with this command, see [Create a new workspace using the Account API](http://docs.databricks.com/administration-guide/account-api/new-workspace.html).
Flags:
*`--json` - either inline JSON string or @path/to/file.json with request body
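A minimal sketch, assuming the command group is `databricks account storage` and the body uses the `storage_configuration_name` and `root_bucket_info.bucket_name` fields from the Account API (assumptions; the names are placeholders):

```bash
# Sketch: register the root S3 bucket as a storage configuration for the account.
databricks account storage create --json '{
  "storage_configuration_name": "main-root-bucket",
  "root_bucket_info": {"bucket_name": "my-company-databricks-root"}
}'
```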
Gets a storage credential from the metastore. The caller must be a metastore admin, the owner of the storage credential, or have some permission on the storage credential.
Updates a storage credential on the metastore. The caller must be the owner of the storage credential or a metastore admin. If the caller is a metastore admin, only the __owner__ credential can be changed.
Flags:
*`--json` - either inline JSON string or @path/to/file.json with request body
*`--comment` - Comment associated with the credential.
*`--force` - Force update even if there are dependent external locations or external tables.
*`--name` - The credential name.
*`--owner` - Username of current owner of credential.
*`--read-only` - Whether the storage credential is only usable for read operations.
*`--skip-validation` - Supplying true to this argument skips validation of the updated credential.
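A sketch of a typical update, assuming the credential is addressed by its name as a positional argument (an assumption); the flags are the ones documented above and the values are placeholders.

```bash
# Sketch: change the comment and mark the credential read-only, skipping validation.
databricks storage-credentials update reporting_credential \
  --comment "Read-only credential for reporting jobs" \
  --read-only \
  --skip-validation
```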
Gets a storage credential from the metastore. The caller must be a metastore admin, the owner of the storage credential, or have some privilege on the storage credential.
Gets a table from the metastore for a specific catalog and schema.
The caller must be a metastore admin, be the owner of the table and have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema,
or be the owner of the table and have the **SELECT** privilege on it as well.
Flags:
*`--include-delta-metadata` - Whether delta metadata should be included in the response.
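For example, a sketch assuming the table is addressed by its three-level `catalog.schema.table` name (an assumption about the command signature; the names are placeholders):

```bash
# Sketch: fetch table metadata, including Delta metadata, for one table.
databricks tables get main.sales.orders --include-delta-metadata
```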
Gets an array of all tables for the current metastore under the parent catalog and schema.
The caller must be a metastore admin or an owner of (or have the **SELECT** privilege on) the table.
For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
There is no guarantee of a specific ordering of the elements in the array.
Flags:
*`--include-delta-metadata` - Whether delta metadata should be included in the response.
*`--max-results` - Maximum number of tables to return (page length).
*`--page-token` - Opaque token to send for the next page of results (pagination).
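A hedged sketch of paging through the result, assuming the catalog and schema names are positional arguments (an assumption; the token value comes from the previous response):

```bash
# Sketch: list tables in main.sales, 50 per page, then request the next page.
databricks tables list main sales --max-results 50
databricks tables list main sales --max-results 50 --page-token <token-from-previous-page>
```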
Deletes a VPC endpoint configuration, which represents an
[AWS VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html) that
can communicate privately with Databricks over [AWS PrivateLink](https://aws.amazon.com/privatelink).
Before configuring PrivateLink, read the [Databricks article about PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html).
Gets a VPC endpoint configuration, which represents a [VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html) object in AWS used to communicate privately with Databricks over [AWS PrivateLink](https://aws.amazon.com/privatelink).
Gets a list of all VPC endpoints for an account, specified by ID.
Before configuring PrivateLink, read the [Databricks article about PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html).
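As a sketch only, assuming the command group is `databricks account vpc-endpoints` and that `get` and `delete` take the endpoint registration's ID as a positional argument (assumptions; the ID is a placeholder):

```bash
# Sketch: list all registered VPC endpoints, inspect one, then remove it.
databricks account vpc-endpoints list
databricks account vpc-endpoints get <vpc-endpoint-id>
databricks account vpc-endpoints delete <vpc-endpoint-id>
```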
Exports an object or the contents of an entire directory.
If `path` does not exist, this call returns an error `RESOURCE_DOES_NOT_EXIST`.
You can only export a directory in `DBC` format. If the exported data would exceed the size limit, this call returns `MAX_NOTEBOOK_SIZE_EXCEEDED`. Currently, this command does not support exporting a library.
Flags:
*`--direct-download` - Flag to enable direct download.
*`--format` - This specifies the format of the exported file.
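For illustration, a sketch assuming the command takes the workspace path as a positional argument (an assumption; the path is a placeholder and `DBC` is the directory format mentioned above):

```bash
# Sketch: export an entire directory as a DBC archive with direct download enabled.
databricks workspace export /Users/someone@example.com/my-project --format DBC --direct-download
```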
## `databricks account workspace-assignment` - The Workspace Permission Assignment API allows you to manage workspace permissions for principals in your account.
Terminates and deletes a Databricks workspace. From an API perspective, deletion is immediate. However, it might take a few minutes for all workspace resources to be deleted, depending on the size and number of workspace resources.
This operation is available only if your account is on the E2 version of the platform or on a select custom plan that allows multiple workspaces per account.
Gets information including status for a Databricks workspace, specified by ID. In the response, the `workspace_status` field indicates the current status. After initial workspace creation (which is asynchronous), make repeated `GET` requests with the workspace ID and check its status. The workspace becomes available when the status changes to `RUNNING`.
For information about how to create a new workspace with this command **including error handling**, see [Create a new workspace using the Account API](http://docs.databricks.com/administration-guide/account-api/new-workspace.html).
This operation is available only if your account is on the E2 version of the platform or on a select custom plan that allows multiple workspaces per account.
Gets a list of all workspaces associated with an account, specified by ID.
This operation is available only if your account is on the E2 version of the platform or on a select custom plan that allows multiple workspaces per account.
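A brief sketch, assuming the command group is `databricks account workspaces` and that `get` takes the workspace ID as a positional argument (assumptions; the ID is a placeholder):

```bash
# Sketch: poll one workspace until its workspace_status reports RUNNING, and list
# all workspaces in the account.
databricks account workspaces get <workspace-id>
databricks account workspaces list
```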
Updates a workspace configuration for either a running workspace or a failed workspace. The elements that can be updated vary between these two use cases.
Update a failed workspace:
You can update a Databricks workspace configuration for failed workspace deployment for some fields, but not all fields. For a failed workspace, this request supports updates to the following fields only:
- Credential configuration ID
- Storage configuration ID
- Network configuration ID. Used only to add or change a network configuration for a customer-managed VPC. For a failed workspace only, you can convert a workspace with Databricks-managed VPC to use a customer-managed VPC by adding this ID. You cannot downgrade a workspace with a customer-managed VPC to be a Databricks-managed VPC. You can update the network configuration for a failed or running workspace to add PrivateLink support, though you must also add a private access settings object.
- Key configuration ID for managed services (control plane storage, such as notebook source and Databricks SQL queries). Used only if you use customer-managed keys for managed services.
- Key configuration ID for workspace storage (root S3 bucket and, optionally, EBS volumes). Used only if you use customer-managed keys for workspace storage. **Important**: If the workspace was ever in the running state, even if briefly before becoming a failed workspace, you cannot add a new key configuration ID for workspace storage.
- Private access settings ID to add PrivateLink support. You can add or update the private access settings ID to upgrade a workspace to add support for front-end, back-end, or both types of connectivity. You cannot remove (downgrade) any existing front-end or back-end PrivateLink support on a workspace.
After calling the `PATCH` operation to update the workspace configuration, make repeated `GET` requests with the workspace ID and check the workspace status. The update has succeeded when the status changes to `RUNNING`.
For information about how to create a new workspace with this command **including error handling**, see [Create a new workspace using the Account API](http://docs.databricks.com/administration-guide/account-api/new-workspace.html).
Update a running workspace:
You can update a Databricks workspace configuration for running workspaces for some fields, but not all fields. For a running workspace, this request supports updating the following fields only:
- Credential configuration ID
- Network configuration ID. Used only if you already use a customer-managed VPC. You cannot convert a running workspace from a Databricks-managed VPC to a customer-managed VPC. You can use a network configuration update in this command for a failed or running workspace to add support for PrivateLink, although you also need to add a private access settings object.
- Key configuration ID for managed services (control plane storage, such as notebook source and Databricks SQL queries). Databricks does not directly encrypt the data with the customer-managed key (CMK). Databricks uses both the CMK and the Databricks managed key (DMK) that is unique to your workspace to encrypt the Data Encryption Key (DEK). Databricks uses the DEK to encrypt your workspace's managed services persisted data. If the workspace does not already have a CMK for managed services, adding this ID enables managed services encryption for new or updated data. Existing managed services data that existed before adding the key remains not encrypted with the DEK until it is modified. If the workspace already has customer-managed keys for managed services, this request rotates (changes) the CMK keys and the DEK is re-encrypted with the DMK and the new CMK.
- Key configuration ID for workspace storage (root S3 bucket and, optionally, EBS volumes). You can set this only if the workspace does not already have a customer-managed key configuration for workspace storage.
- Private access settings ID to add PrivateLink support. You can add or update the private access settings ID to upgrade a workspace to add support for front-end, back-end, or both types of connectivity. You cannot remove (downgrade) any existing front-end or back-end PrivateLink support on a workspace.
**Important**: To update a running workspace, your workspace must have no running compute resources that run in your workspace's VPC in the Classic data plane. For example, stop all all-purpose clusters, job clusters, pools with running clusters, and Classic SQL warehouses. If you do not terminate all cluster instances in the workspace before calling this command, the request will fail.
**Important**: Customer-managed keys and customer-managed VPCs are supported by only some deployment types and subscription types. If you have questions about availability, contact your Databricks representative.
This operation is available only if your account is on the E2 version of the platform or on a select custom plan that allows multiple workspaces per account.
Flags:
*`--no-wait` - do not wait to reach RUNNING state.
*`--timeout` - maximum amount of time to reach RUNNING state.
*`--aws-region` - The AWS region of the workspace's data plane (for example, `us-west-2`).
*`--credentials-id` - ID of the workspace's credential configuration object.
*`--managed-services-customer-managed-key-id` - The ID of the workspace's managed services encryption key configuration object.
*`--network-id` - The ID of the workspace's network configuration object.
*`--storage-configuration-id` - The ID of the workspace's storage configuration object.
*`--storage-customer-managed-key-id` - The ID of the key configuration object for workspace storage.
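Putting it together, a hedged sketch of updating a failed workspace to use a customer-managed VPC. The command group, the `update` subcommand, and the positional workspace ID are assumptions; the IDs are placeholders and the flags are the ones documented above.

```bash
# Sketch: attach a network configuration to a failed workspace and wait up to
# 20 minutes for it to reach the RUNNING state.
databricks account workspaces update <workspace-id> \
  --network-id <network-configuration-id> \
  --timeout 20m
```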