databricks-cli/cmd/account/workspaces/workspaces.go

// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package workspaces

import (
	"fmt"
	"time"

	"github.com/databricks/cli/cmd/root"
	"github.com/databricks/cli/libs/cmdio"
	"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/retries"
"github.com/databricks/databricks-sdk-go/service/provisioning"
"github.com/spf13/cobra"
)

var Cmd = &cobra.Command{
	Use:   "workspaces",
	Short: `These APIs manage workspaces for this account.`,
	Long: `These APIs manage workspaces for this account. A Databricks workspace is an
  environment for accessing all of your Databricks assets. The workspace
  organizes objects (notebooks, libraries, and experiments) into folders, and
  provides access to data and computational resources such as clusters and
  jobs.

  These endpoints are available if your account is on the E2 version of the
  platform or on a select custom plan that allows multiple workspaces per
  account.`,
}
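
// A minimal sketch (hypothetical wiring; the real registration lives in the
// CLI's account command setup, not in this generated file) of how a parent
// command might mount this group so it is reachable as
// "bricks account workspaces":
//
//	accountCmd.AddCommand(workspaces.Cmd)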

// start create command

var createReq provisioning.CreateWorkspaceRequest
var createJson flags.JsonFlag

var createSkipWait bool
var createTimeout time.Duration

func init() {
	Cmd.AddCommand(createCmd)

	createCmd.Flags().BoolVar(&createSkipWait, "no-wait", createSkipWait, `do not wait to reach RUNNING state`)
	createCmd.Flags().DurationVar(&createTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach RUNNING state`)
	// TODO: short flags
	createCmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)

	createCmd.Flags().StringVar(&createReq.AwsRegion, "aws-region", createReq.AwsRegion, `The AWS region of the workspace's data plane.`)
	createCmd.Flags().StringVar(&createReq.Cloud, "cloud", createReq.Cloud, `The cloud provider which the workspace uses.`)
	// TODO: complex arg: cloud_resource_container
	createCmd.Flags().StringVar(&createReq.CredentialsId, "credentials-id", createReq.CredentialsId, `ID of the workspace's credential configuration object.`)
	createCmd.Flags().StringVar(&createReq.DeploymentName, "deployment-name", createReq.DeploymentName, `The deployment name defines part of the subdomain for the workspace.`)
	createCmd.Flags().StringVar(&createReq.Location, "location", createReq.Location, `The Google Cloud region of the workspace data plane in your Google account.`)
	createCmd.Flags().StringVar(&createReq.ManagedServicesCustomerManagedKeyId, "managed-services-customer-managed-key-id", createReq.ManagedServicesCustomerManagedKeyId, `The ID of the workspace's managed services encryption key configuration object.`)
	createCmd.Flags().StringVar(&createReq.NetworkId, "network-id", createReq.NetworkId, `The ID of the workspace's network configuration object.`)
	createCmd.Flags().Var(&createReq.PricingTier, "pricing-tier", `The pricing tier of the workspace.`)
	createCmd.Flags().StringVar(&createReq.PrivateAccessSettingsId, "private-access-settings-id", createReq.PrivateAccessSettingsId, `ID of the workspace's private access settings object.`)
	createCmd.Flags().StringVar(&createReq.StorageConfigurationId, "storage-configuration-id", createReq.StorageConfigurationId, `The ID of the workspace's storage configuration object.`)
	createCmd.Flags().StringVar(&createReq.StorageCustomerManagedKeyId, "storage-customer-managed-key-id", createReq.StorageCustomerManagedKeyId, `The ID of the workspace's storage encryption key configuration object.`)
}
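
// The --json flag accepts either an inline JSON string or @path/to/file.json
// and unmarshals into provisioning.CreateWorkspaceRequest, so keys use the
// API's snake_case field names. A hypothetical request body:
//
//	{
//		"workspace_name": "my-workspace",
//		"aws_region": "us-west-2",
//		"credentials_id": "<credential-configuration-id>",
//		"storage_configuration_id": "<storage-configuration-id>"
//	}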

var createCmd = &cobra.Command{
	Use:   "create WORKSPACE_NAME",
	Short: `Create a new workspace.`,
	Long: `Create a new workspace.

  Creates a new workspace.

  **Important**: This operation is asynchronous. A response with HTTP status
  code 200 means the request has been accepted and is in progress, but does
  not mean that the workspace deployed successfully and is running. The
  initial workspace status is typically PROVISIONING. Use the workspace ID
  (workspace_id) field in the response to identify the new workspace and make
  repeated GET requests with the workspace ID and check its status. The
  workspace becomes available when the status changes to RUNNING.`,

	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(1),
	PreRunE:     root.MustAccountClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		a := root.AccountClient(ctx)
		err = createJson.Unmarshal(&createReq)
		if err != nil {
			return err
		}
		createReq.WorkspaceName = args[0]

		if createSkipWait {
			response, err := a.Workspaces.Create(ctx, createReq)
			if err != nil {
				return err
			}
			return cmdio.Render(ctx, response)
		}
		spinner := cmdio.Spinner(ctx)
		info, err := a.Workspaces.CreateAndWait(ctx, createReq,
			retries.Timeout[provisioning.Workspace](createTimeout),
			func(i *retries.Info[provisioning.Workspace]) {
				if i.Info == nil {
					return
				}
				statusMessage := i.Info.WorkspaceStatusMessage
				spinner <- statusMessage
			})
		close(spinner)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, info)
	},
}
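
// The createCmd help above describes the asynchronous create flow. A minimal
// sketch (assuming an authenticated account client "a" and context "ctx", as
// obtained in RunE above) of the blocking SDK call the command uses when
// --no-wait is not set; it returns once the workspace reaches RUNNING or the
// timeout expires:
//
//	info, err := a.Workspaces.CreateAndWait(ctx, provisioning.CreateWorkspaceRequest{
//		WorkspaceName: "my-workspace", // hypothetical name
//	}, retries.Timeout[provisioning.Workspace](20*time.Minute))
//	if err != nil {
//		return err
//	}
//	return cmdio.Render(ctx, info)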

// start delete command

var deleteReq provisioning.DeleteWorkspaceRequest

func init() {
	Cmd.AddCommand(deleteCmd)
	// TODO: short flags
}

var deleteCmd = &cobra.Command{
	Use:   "delete WORKSPACE_ID",
	Short: `Delete a workspace.`,
	Long: `Delete a workspace.

  Terminates and deletes a Databricks workspace. From an API perspective,
  deletion is immediate. However, it might take a few minutes for all
  workspace resources to be deleted, depending on the size and number of
  workspace resources.

  This operation is available only if your account is on the E2 version of
  the platform or on a select custom plan that allows multiple workspaces per
  account.`,

	Annotations: map[string]string{},
	PreRunE:     root.MustAccountClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		a := root.AccountClient(ctx)
		if len(args) == 0 {
			names, err := a.Workspaces.WorkspaceWorkspaceNameToWorkspaceIdMap(ctx)
			if err != nil {
				return err
			}
			id, err := cmdio.Select(ctx, names, "Workspace ID")
			if err != nil {
				return err
			}
			args = append(args, id)
		}
		if len(args) != 1 {
			return fmt.Errorf("expected to have workspace id")
		}
		_, err = fmt.Sscan(args[0], &deleteReq.WorkspaceId)
		if err != nil {
			return fmt.Errorf("invalid WORKSPACE_ID: %s", args[0])
		}

		err = a.Workspaces.Delete(ctx, deleteReq)
		if err != nil {
			return err
		}
		return nil
	},
}
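
// deleteCmd parses its positional argument with fmt.Sscan, which reads the
// decimal string into the int64 WorkspaceId field. A minimal sketch (with a
// hypothetical ID, against client "a") of the same parse-and-delete flow:
//
//	var req provisioning.DeleteWorkspaceRequest
//	if _, err := fmt.Sscan("1234567890", &req.WorkspaceId); err != nil {
//		return fmt.Errorf("invalid WORKSPACE_ID: %w", err)
//	}
//	if err := a.Workspaces.Delete(ctx, req); err != nil {
//		return err
//	}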

// start get command

var getReq provisioning.GetWorkspaceRequest

func init() {
	Cmd.AddCommand(getCmd)
	// TODO: short flags
}

var getCmd = &cobra.Command{
	Use:   "get WORKSPACE_ID",
	Short: `Get a workspace.`,
	Long: `Get a workspace.

  Gets information including status for a Databricks workspace, specified by
  ID. In the response, the workspace_status field indicates the current
  status. After initial workspace creation (which is asynchronous), make
  repeated GET requests with the workspace ID and check its status. The
  workspace becomes available when the status changes to RUNNING.

  For information about how to create a new workspace with this API
  **including error handling**, see [Create a new workspace using the Account
  API].

  This operation is available only if your account is on the E2 version of
  the platform or on a select custom plan that allows multiple workspaces per
  account.

  [Create a new workspace using the Account API]: http://docs.databricks.com/administration-guide/account-api/new-workspace.html`,

	Annotations: map[string]string{},
	PreRunE:     root.MustAccountClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		a := root.AccountClient(ctx)
		if len(args) == 0 {
			names, err := a.Workspaces.WorkspaceWorkspaceNameToWorkspaceIdMap(ctx)
			if err != nil {
				return err
			}
			id, err := cmdio.Select(ctx, names, "Workspace ID")
			if err != nil {
				return err
			}
			args = append(args, id)
		}
		if len(args) != 1 {
			return fmt.Errorf("expected to have workspace id")
		}
		_, err = fmt.Sscan(args[0], &getReq.WorkspaceId)
		if err != nil {
			return fmt.Errorf("invalid WORKSPACE_ID: %s", args[0])
		}

		response, err := a.Workspaces.Get(ctx, getReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	},
}
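
// The getCmd help above recommends polling until the status reaches RUNNING.
// A minimal polling sketch (assuming client "a" and the SDK's
// WorkspaceStatusRunning constant; createCmd and updateCmd rely on the SDK's
// *AndWait helpers instead of a hand-rolled loop like this):
//
//	for {
//		ws, err := a.Workspaces.Get(ctx, getReq)
//		if err != nil {
//			return err
//		}
//		if ws.WorkspaceStatus == provisioning.WorkspaceStatusRunning {
//			break
//		}
//		time.Sleep(10 * time.Second)
//	}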

// start list command

func init() {
	Cmd.AddCommand(listCmd)
}

var listCmd = &cobra.Command{
	Use:   "list",
	Short: `Get all workspaces.`,
	Long: `Get all workspaces.

  Gets a list of all workspaces associated with an account, specified by ID.

  This operation is available only if your account is on the E2 version of
  the platform or on a select custom plan that allows multiple workspaces per
  account.`,

	Annotations: map[string]string{},
	PreRunE:     root.MustAccountClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		a := root.AccountClient(ctx)
		response, err := a.Workspaces.List(ctx)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	},
}
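
// A minimal sketch (assuming client "a") of consuming the same List call
// directly rather than rendering the response:
//
//	all, err := a.Workspaces.List(ctx)
//	if err != nil {
//		return err
//	}
//	for _, ws := range all {
//		fmt.Println(ws.WorkspaceId, ws.WorkspaceName, ws.WorkspaceStatus)
//	}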

// start update command

var updateReq provisioning.UpdateWorkspaceRequest

var updateSkipWait bool
var updateTimeout time.Duration

func init() {
	Cmd.AddCommand(updateCmd)

	updateCmd.Flags().BoolVar(&updateSkipWait, "no-wait", updateSkipWait, `do not wait to reach RUNNING state`)
	updateCmd.Flags().DurationVar(&updateTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach RUNNING state`)
	// TODO: short flags

	updateCmd.Flags().StringVar(&updateReq.AwsRegion, "aws-region", updateReq.AwsRegion, `The AWS region of the workspace's data plane (for example, us-west-2).`)
	updateCmd.Flags().StringVar(&updateReq.CredentialsId, "credentials-id", updateReq.CredentialsId, `ID of the workspace's credential configuration object.`)
	updateCmd.Flags().StringVar(&updateReq.ManagedServicesCustomerManagedKeyId, "managed-services-customer-managed-key-id", updateReq.ManagedServicesCustomerManagedKeyId, `The ID of the workspace's managed services encryption key configuration object.`)
	updateCmd.Flags().StringVar(&updateReq.NetworkId, "network-id", updateReq.NetworkId, `The ID of the workspace's network configuration object.`)
	updateCmd.Flags().StringVar(&updateReq.StorageConfigurationId, "storage-configuration-id", updateReq.StorageConfigurationId, `The ID of the workspace's storage configuration object.`)
	updateCmd.Flags().StringVar(&updateReq.StorageCustomerManagedKeyId, "storage-customer-managed-key-id", updateReq.StorageCustomerManagedKeyId, `The ID of the key configuration object for workspace storage.`)
}

var updateCmd = &cobra.Command{
	Use:   "update WORKSPACE_ID",
	Short: `Update workspace configuration.`,
	Long: `Update workspace configuration.

  Updates a workspace configuration for either a running workspace or a
  failed workspace. The elements that can be updated vary between these two
  use cases.

  ### Update a failed workspace
  You can update a Databricks workspace configuration for failed workspace
  deployment for some fields, but not all fields. For a failed workspace,
  this request supports updates to the following fields only:
  - Credential configuration ID
  - Storage configuration ID
  - Network configuration ID. Used only to add or change a network
  configuration for a customer-managed VPC. For a failed workspace only, you
  can convert a workspace with Databricks-managed VPC to use a
  customer-managed VPC by adding this ID. You cannot downgrade a workspace
  with a customer-managed VPC to be a Databricks-managed VPC. You can update
  the network configuration for a failed or running workspace to add
  PrivateLink support, though you must also add a private access settings
  object.
  - Key configuration ID for managed services (control plane storage, such as
  notebook source and Databricks SQL queries). Used only if you use
  customer-managed keys for managed services.
  - Key configuration ID for workspace storage (root S3 bucket and,
  optionally, EBS volumes). Used only if you use customer-managed keys for
  workspace storage. **Important**: If the workspace was ever in the running
  state, even if briefly before becoming a failed workspace, you cannot add a
  new key configuration ID for workspace storage.
  - Private access settings ID to add PrivateLink support. You can add or
  update the private access settings ID to upgrade a workspace to add support
  for front-end, back-end, or both types of connectivity. You cannot remove
  (downgrade) any existing front-end or back-end PrivateLink support on a
  workspace.

  After calling the PATCH operation to update the workspace configuration,
  make repeated GET requests with the workspace ID and check the workspace
  status. The workspace update is successful if the status changes to
  RUNNING.

  For information about how to create a new workspace with this API
  **including error handling**, see [Create a new workspace using the Account
  API].

  ### Update a running workspace
  You can update a Databricks workspace configuration for running workspaces
  for some fields, but not all fields. For a running workspace, this request
  supports updating the following fields only:
  - Credential configuration ID
  - Network configuration ID. Used only if you already use a customer-managed
  VPC. You cannot convert a running workspace from a Databricks-managed VPC
  to a customer-managed VPC. You can use a network configuration update in
  this API for a failed or running workspace to add support for PrivateLink,
  although you also need to add a private access settings object.
  - Key configuration ID for managed services (control plane storage, such as
  notebook source and Databricks SQL queries). Databricks does not directly
  encrypt the data with the customer-managed key (CMK). Databricks uses both
  the CMK and the Databricks managed key (DMK) that is unique to your
  workspace to encrypt the Data Encryption Key (DEK). Databricks uses the DEK
  to encrypt your workspace's managed services persisted data. If the
  workspace does not already have a CMK for managed services, adding this ID
  enables managed services encryption for new or updated data. Existing
  managed services data that existed before adding the key remains not
  encrypted with the DEK until it is modified. If the workspace already has
  customer-managed keys for managed services, this request rotates (changes)
  the CMK keys and the DEK is re-encrypted with the DMK and the new CMK.
  - Key configuration ID for workspace storage (root S3 bucket and,
  optionally, EBS volumes). You can set this only if the workspace does not
  already have a customer-managed key configuration for workspace storage.
  - Private access settings ID to add PrivateLink support. You can add or
  update the private access settings ID to upgrade a workspace to add support
  for front-end, back-end, or both types of connectivity. You cannot remove
  (downgrade) any existing front-end or back-end PrivateLink support on a
  workspace.

  **Important**: To update a running workspace, your workspace must have no
  running compute resources that run in your workspace's VPC in the Classic
  data plane. For example, stop all all-purpose clusters, job clusters, pools
  with running clusters, and Classic SQL warehouses. If you do not terminate
  all cluster instances in the workspace before calling this API, the request
  will fail.

  ### Wait until changes take effect
  After calling the PATCH operation to update the workspace configuration,
  make repeated GET requests with the workspace ID and check the workspace
  status and the status of the fields.
  * For workspaces with a Databricks-managed VPC, the workspace status
  becomes PROVISIONING temporarily (typically under 20 minutes). If the
  workspace update is successful, the workspace status changes to RUNNING.
  Note that you can also check the workspace status in the [Account Console].
  However, you cannot use or create clusters for another 20 minutes after
  that status change. This results in a total of up to 40 minutes in which
  you cannot create clusters. If you create or use clusters before this time
  interval elapses, clusters might fail to launch or cause other unexpected
  behavior.
  * For workspaces with a customer-managed VPC, the workspace status stays at
  status RUNNING and the VPC change happens immediately. A change to the
  storage customer-managed key configuration ID might take a few minutes to
  update, so continue to check the workspace until you observe that it has
  been updated. If the update fails, the workspace might revert silently to
  its original configuration. After the workspace has been updated, you
  cannot use or create clusters for another 20 minutes. If you create or use
  clusters before this time interval elapses, clusters might fail to launch
  or cause other unexpected behavior.

  If you update the _storage_ customer-managed key configurations, it takes
  20 minutes for the changes to fully take effect. During the 20-minute wait,
  it is important that you stop all REST API calls to the DBFS API. If you
  are modifying _only the managed services key configuration_, you can omit
  the 20-minute wait.

  **Important**: Customer-managed keys and customer-managed VPCs are
  supported by only some deployment types and subscription types. If you have
  questions about availability, contact your Databricks representative.

  This operation is available only if your account is on the E2 version of
  the platform or on a select custom plan that allows multiple workspaces per
  account.

  [Account Console]: https://docs.databricks.com/administration-guide/account-settings-e2/account-console-e2.html
  [Create a new workspace using the Account API]: http://docs.databricks.com/administration-guide/account-api/new-workspace.html`,

	Annotations: map[string]string{},
	PreRunE:     root.MustAccountClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		a := root.AccountClient(ctx)
		if len(args) == 0 {
			names, err := a.Workspaces.WorkspaceWorkspaceNameToWorkspaceIdMap(ctx)
			if err != nil {
				return err
			}
			id, err := cmdio.Select(ctx, names, "Workspace ID")
			if err != nil {
				return err
			}
			args = append(args, id)
		}
		if len(args) != 1 {
			return fmt.Errorf("expected to have workspace id")
		}
		_, err = fmt.Sscan(args[0], &updateReq.WorkspaceId)
		if err != nil {
			return fmt.Errorf("invalid WORKSPACE_ID: %s", args[0])
		}

		if updateSkipWait {
			err = a.Workspaces.Update(ctx, updateReq)
			if err != nil {
				return err
			}
			return nil
		}
		spinner := cmdio.Spinner(ctx)
		info, err := a.Workspaces.UpdateAndWait(ctx, updateReq,
			retries.Timeout[provisioning.Workspace](updateTimeout),
			func(i *retries.Info[provisioning.Workspace]) {
				if i.Info == nil {
					return
				}
				statusMessage := i.Info.WorkspaceStatusMessage
				spinner <- statusMessage
			})
		close(spinner)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, info)
	},
}
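
// The updateCmd help above stresses that configuration updates are also
// asynchronous. A minimal sketch (with hypothetical IDs, against client "a")
// of the blocking update path used when --no-wait is not set:
//
//	req := provisioning.UpdateWorkspaceRequest{
//		WorkspaceId:   1234567890,                      // hypothetical
//		CredentialsId: "<credential-configuration-id>", // hypothetical
//	}
//	info, err := a.Workspaces.UpdateAndWait(ctx, req,
//		retries.Timeout[provisioning.Workspace](20*time.Minute))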
// end service Workspaces