Upgrade Go SDK to 0.54.0 (#2029)

## Changes

* Added the
[a.AccountFederationPolicy](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/oauth2#AccountFederationPolicyAPI)
and
[a.ServicePrincipalFederationPolicy](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/oauth2#ServicePrincipalFederationPolicyAPI)
account-level services.
* Added `IsSingleNode`, `Kind`, and `UseMlRuntime` fields for Cluster
commands (see the sketch below).
* Added `UpdateParameterSyntax` field for
[dashboards.MigrateDashboardRequest](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#MigrateDashboardRequest).
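
The new cluster fields surface on the SDK's cluster create/edit request types. Below is a minimal sketch (not part of this commit) that builds a create request with databricks-sdk-go 0.54.0 and prints it as JSON; the cluster name and Spark version are placeholders, and `CLASSIC_PREVIEW` is the only `Kind` value listed in the generated CLI help.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	// The three new fields: Kind opts the request into the new cluster "kind",
	// which is what allows IsSingleNode and UseMlRuntime to be set.
	req := compute.CreateCluster{
		ClusterName:  "sdk-0-54-demo",    // placeholder
		SparkVersion: "15.4.x-scala2.12", // placeholder
		Kind:         compute.Kind("CLASSIC_PREVIEW"),
		IsSingleNode: true,
		UseMlRuntime: true,
	}

	// Print the request body to show how the new fields serialize.
	out, err := json.MarshalIndent(req, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```
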
Andrew Nester 2024-12-18 13:43:27 +01:00 committed by GitHub
parent 042c8d88c6
commit 59f0859e00
12 changed files with 1009 additions and 11 deletions


@ -1 +1 @@
7016dcbf2e011459416cf408ce21143bcc4b3a25
a6a317df8327c9b1e5cb59a03a42ffa2aabeef6d

.gitattributes vendored

@ -8,6 +8,7 @@ cmd/account/custom-app-integration/custom-app-integration.go linguist-generated=
cmd/account/disable-legacy-features/disable-legacy-features.go linguist-generated=true
cmd/account/encryption-keys/encryption-keys.go linguist-generated=true
cmd/account/esm-enablement-account/esm-enablement-account.go linguist-generated=true
cmd/account/federation-policy/federation-policy.go linguist-generated=true
cmd/account/groups/groups.go linguist-generated=true
cmd/account/ip-access-lists/ip-access-lists.go linguist-generated=true
cmd/account/log-delivery/log-delivery.go linguist-generated=true
@ -19,6 +20,7 @@ cmd/account/o-auth-published-apps/o-auth-published-apps.go linguist-generated=tr
cmd/account/personal-compute/personal-compute.go linguist-generated=true
cmd/account/private-access/private-access.go linguist-generated=true
cmd/account/published-app-integration/published-app-integration.go linguist-generated=true
cmd/account/service-principal-federation-policy/service-principal-federation-policy.go linguist-generated=true
cmd/account/service-principal-secrets/service-principal-secrets.go linguist-generated=true
cmd/account/service-principals/service-principals.go linguist-generated=true
cmd/account/settings/settings.go linguist-generated=true


@ -70,6 +70,12 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
instance_pool_id:
description: The optional ID of the instance pool to which the cluster belongs.
is_single_node:
description: |
This field can only be used with `kind`.
When set to true, Databricks will automatically set single node related `custom_tags`, `spark_conf`, and `num_workers`
kind: {}
node_type_id:
description: |
This field encodes, through a single value, the resources available to each of
@ -119,6 +125,11 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
SSH public key contents that will be added to each Spark node in this cluster. The
corresponding private keys can be used to login with the user name `ubuntu` on port `2200`.
Up to 10 keys can be specified.
use_ml_runtime:
description: |
This field can only be used with `kind`.
`effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not.
workload_type: {}
github.com/databricks/cli/bundle/config/resources.Dashboard:
create_time:
@ -759,6 +770,12 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
instance_pool_id:
description: The optional ID of the instance pool to which the cluster belongs.
is_single_node:
description: |
This field can only be used with `kind`.
When set to true, Databricks will automatically set single node related `custom_tags`, `spark_conf`, and `num_workers`
kind: {}
node_type_id:
description: |
This field encodes, through a single value, the resources available to each of
@ -808,6 +825,11 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
SSH public key contents that will be added to each Spark node in this cluster. The
corresponding private keys can be used to login with the user name `ubuntu` on port `2200`.
Up to 10 keys can be specified.
use_ml_runtime:
description: |
This field can only be used with `kind`.
`effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not.
workload_type: {}
github.com/databricks/databricks-sdk-go/service/compute.DataSecurityMode:
_:
@ -815,6 +837,12 @@ github.com/databricks/databricks-sdk-go/service/compute.DataSecurityMode:
Data security mode decides what data governance model to use when accessing data
from a cluster.
The following modes can only be used with `kind`.
* `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.
* `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.
* `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.
The following modes can be used regardless of `kind`.
* `NONE`: No security isolation for multiple users sharing the cluster. Data governance features are not available in this mode.
* `SINGLE_USER`: A secure cluster that can only be exclusively used by a single user specified in `single_user_name`. Most programming languages, cluster features and data governance features are available in this mode.
* `USER_ISOLATION`: A secure cluster that can be shared by multiple users. Cluster users are fully isolated so that they cannot see each other's data and credentials. Most data governance features are supported in this mode. But programming languages and cluster features might be limited.
@ -827,6 +855,9 @@ github.com/databricks/databricks-sdk-go/service/compute.DataSecurityMode:
* `LEGACY_SINGLE_USER`: This mode is for users migrating from legacy Passthrough on standard clusters.
* `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesnt have UC nor passthrough enabled.
enum:
- DATA_SECURITY_MODE_AUTO
- DATA_SECURITY_MODE_STANDARD
- DATA_SECURITY_MODE_DEDICATED
- NONE
- SINGLE_USER
- USER_ISOLATION
@ -1068,6 +1099,17 @@ github.com/databricks/databricks-sdk-go/service/dashboards.LifecycleState:
enum:
- ACTIVE
- TRASHED
github.com/databricks/databricks-sdk-go/service/jobs.CleanRoomsNotebookTask:
clean_room_name:
description: The clean room that the notebook belongs to.
etag:
description: |-
Checksum to validate the freshness of the notebook resource (i.e. the notebook being run is the latest version).
It can be fetched by calling the :method:cleanroomassets/get API.
notebook_base_parameters:
description: Base parameters to be used for the clean room notebook job.
notebook_name:
description: Name of the notebook being run.
github.com/databricks/databricks-sdk-go/service/jobs.Condition:
_:
enum:
@ -1346,10 +1388,10 @@ github.com/databricks/databricks-sdk-go/service/jobs.JobsHealthMetric:
Specifies the health metric that is being evaluated for a particular health rule.
* `RUN_DURATION_SECONDS`: Expected total time for a run in seconds.
* `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Private Preview.
* `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Private Preview.
* `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Private Preview.
* `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Private Preview.
* `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Public Preview.
* `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Public Preview.
* `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Public Preview.
* `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Public Preview.
enum:
- RUN_DURATION_SECONDS
- STREAMING_BACKLOG_BYTES
@ -1651,6 +1693,10 @@ github.com/databricks/databricks-sdk-go/service/jobs.TableUpdateTriggerConfigura
and can be used to wait for a series of table updates before triggering a run. The
minimum allowed value is 60 seconds.
github.com/databricks/databricks-sdk-go/service/jobs.Task:
clean_rooms_notebook_task:
description: |-
The task runs a [clean rooms](https://docs.databricks.com/en/clean-rooms/index.html) notebook
when the `clean_rooms_notebook_task` field is present.
condition_task:
description: |-
The task evaluates a condition that can be used to control the execution of other tasks when the `condition_task` field is present.
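
The `clean_rooms_notebook_task` described above plugs into a job definition like any other task type. A minimal sketch with the 0.54.0 Go SDK, assuming workspace auth is picked up from the environment; the clean room name, notebook name, and parameters are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/jobs"
)

func main() {
	ctx := context.Background()

	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		log.Fatal(err)
	}

	// Clean room name, notebook name, and parameters are placeholders.
	created, err := w.Jobs.Create(ctx, jobs.CreateJob{
		Name: "clean-room-notebook-demo",
		Tasks: []jobs.Task{{
			TaskKey: "run_clean_room_notebook",
			CleanRoomsNotebookTask: &jobs.CleanRoomsNotebookTask{
				CleanRoomName:          "my_clean_room",
				NotebookName:           "shared_analysis",
				NotebookBaseParameters: map[string]string{"date": "2024-12-18"},
			},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created job", created.JobId)
}
```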


@ -5,6 +5,9 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
"docker_image":
"description": |-
PLACEHOLDER
"kind":
"description": |-
PLACEHOLDER
"permissions":
"description": |-
PLACEHOLDER
@ -90,6 +93,9 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
"docker_image":
"description": |-
PLACEHOLDER
"kind":
"description": |-
PLACEHOLDER
"runtime_engine":
"description": |-
PLACEHOLDER


@ -130,6 +130,13 @@
"description": "The optional ID of the instance pool to which the cluster belongs.",
"$ref": "#/$defs/string"
},
"is_single_node": {
"description": "This field can only be used with `kind`.\n\nWhen set to true, Databricks will automatically set single node related `custom_tags`, `spark_conf`, and `num_workers`\n",
"$ref": "#/$defs/bool"
},
"kind": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.Kind"
},
"node_type_id": {
"description": "This field encodes, through a single value, the resources available to each of\nthe Spark nodes in this cluster. For example, the Spark nodes can be provisioned\nand optimized for memory or compute intensive workloads. A list of available node\ntypes can be retrieved by using the :method:clusters/listNodeTypes API call.\n",
"$ref": "#/$defs/string"
@ -168,6 +175,10 @@
"description": "SSH public key contents that will be added to each Spark node in this cluster. The\ncorresponding private keys can be used to login with the user name `ubuntu` on port `2200`.\nUp to 10 keys can be specified.",
"$ref": "#/$defs/slice/string"
},
"use_ml_runtime": {
"description": "This field can only be used with `kind`.\n\n`effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not.\n",
"$ref": "#/$defs/bool"
},
"workload_type": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.WorkloadType"
}
@ -1988,6 +1999,13 @@
"description": "The optional ID of the instance pool to which the cluster belongs.",
"$ref": "#/$defs/string"
},
"is_single_node": {
"description": "This field can only be used with `kind`.\n\nWhen set to true, Databricks will automatically set single node related `custom_tags`, `spark_conf`, and `num_workers`\n",
"$ref": "#/$defs/bool"
},
"kind": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.Kind"
},
"node_type_id": {
"description": "This field encodes, through a single value, the resources available to each of\nthe Spark nodes in this cluster. For example, the Spark nodes can be provisioned\nand optimized for memory or compute intensive workloads. A list of available node\ntypes can be retrieved by using the :method:clusters/listNodeTypes API call.\n",
"$ref": "#/$defs/string"
@ -2023,6 +2041,10 @@
"description": "SSH public key contents that will be added to each Spark node in this cluster. The\ncorresponding private keys can be used to login with the user name `ubuntu` on port `2200`.\nUp to 10 keys can be specified.",
"$ref": "#/$defs/slice/string"
},
"use_ml_runtime": {
"description": "This field can only be used with `kind`.\n\n`effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not.\n",
"$ref": "#/$defs/bool"
},
"workload_type": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.WorkloadType"
}
@ -2037,8 +2059,11 @@
},
"compute.DataSecurityMode": {
"type": "string",
"description": "Data security mode decides what data governance model to use when accessing data\nfrom a cluster.\n\n* `NONE`: No security isolation for multiple users sharing the cluster. Data governance features are not available in this mode.\n* `SINGLE_USER`: A secure cluster that can only be exclusively used by a single user specified in `single_user_name`. Most programming languages, cluster features and data governance features are available in this mode.\n* `USER_ISOLATION`: A secure cluster that can be shared by multiple users. Cluster users are fully isolated so that they cannot see each other's data and credentials. Most data governance features are supported in this mode. But programming languages and cluster features might be limited.\n\nThe following modes are deprecated starting with Databricks Runtime 15.0 and\nwill be removed for future Databricks Runtime versions:\n\n* `LEGACY_TABLE_ACL`: This mode is for users migrating from legacy Table ACL clusters.\n* `LEGACY_PASSTHROUGH`: This mode is for users migrating from legacy Passthrough on high concurrency clusters.\n* `LEGACY_SINGLE_USER`: This mode is for users migrating from legacy Passthrough on standard clusters.\n* `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesnt have UC nor passthrough enabled.\n",
"description": "Data security mode decides what data governance model to use when accessing data\nfrom a cluster.\n\nThe following modes can only be used with `kind`.\n* `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.\n* `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.\n* `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.\n\nThe following modes can be used regardless of `kind`.\n* `NONE`: No security isolation for multiple users sharing the cluster. Data governance features are not available in this mode.\n* `SINGLE_USER`: A secure cluster that can only be exclusively used by a single user specified in `single_user_name`. Most programming languages, cluster features and data governance features are available in this mode.\n* `USER_ISOLATION`: A secure cluster that can be shared by multiple users. Cluster users are fully isolated so that they cannot see each other's data and credentials. Most data governance features are supported in this mode. But programming languages and cluster features might be limited.\n\nThe following modes are deprecated starting with Databricks Runtime 15.0 and\nwill be removed for future Databricks Runtime versions:\n\n* `LEGACY_TABLE_ACL`: This mode is for users migrating from legacy Table ACL clusters.\n* `LEGACY_PASSTHROUGH`: This mode is for users migrating from legacy Passthrough on high concurrency clusters.\n* `LEGACY_SINGLE_USER`: This mode is for users migrating from legacy Passthrough on standard clusters.\n* `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesnt have UC nor passthrough enabled.\n",
"enum": [
"DATA_SECURITY_MODE_AUTO",
"DATA_SECURITY_MODE_STANDARD",
"DATA_SECURITY_MODE_DEDICATED",
"NONE",
"SINGLE_USER",
"USER_ISOLATION",
@ -2255,6 +2280,9 @@
}
]
},
"compute.Kind": {
"type": "string"
},
"compute.Library": {
"oneOf": [
{
@ -2543,6 +2571,40 @@
"TRASHED"
]
},
"jobs.CleanRoomsNotebookTask": {
"oneOf": [
{
"type": "object",
"properties": {
"clean_room_name": {
"description": "The clean room that the notebook belongs to.",
"$ref": "#/$defs/string"
},
"etag": {
"description": "Checksum to validate the freshness of the notebook resource (i.e. the notebook being run is the latest version).\nIt can be fetched by calling the :method:cleanroomassets/get API.",
"$ref": "#/$defs/string"
},
"notebook_base_parameters": {
"description": "Base parameters to be used for the clean room notebook job.",
"$ref": "#/$defs/map/string"
},
"notebook_name": {
"description": "Name of the notebook being run.",
"$ref": "#/$defs/string"
}
},
"additionalProperties": false,
"required": [
"clean_room_name",
"notebook_name"
]
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"jobs.Condition": {
"type": "string",
"enum": [
@ -3063,7 +3125,7 @@
},
"jobs.JobsHealthMetric": {
"type": "string",
"description": "Specifies the health metric that is being evaluated for a particular health rule.\n\n* `RUN_DURATION_SECONDS`: Expected total time for a run in seconds.\n* `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Private Preview.\n* `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Private Preview.\n* `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Private Preview.\n* `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Private Preview.",
"description": "Specifies the health metric that is being evaluated for a particular health rule.\n\n* `RUN_DURATION_SECONDS`: Expected total time for a run in seconds.\n* `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Public Preview.\n* `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Public Preview.\n* `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Public Preview.\n* `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Public Preview.",
"enum": [
"RUN_DURATION_SECONDS",
"STREAMING_BACKLOG_BYTES",
@ -3653,6 +3715,10 @@
{
"type": "object",
"properties": {
"clean_rooms_notebook_task": {
"description": "The task runs a [clean rooms](https://docs.databricks.com/en/clean-rooms/index.html) notebook\nwhen the `clean_rooms_notebook_task` field is present.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.CleanRoomsNotebookTask"
},
"condition_task": {
"description": "The task evaluates a condition that can be used to control the execution of other tasks when the `condition_task` field is present.\nThe condition task does not require a cluster to execute and does not support retries or notifications.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.ConditionTask"
@ -4551,7 +4617,7 @@
"properties": {
"days_of_week": {
"description": "Days of week in which the restart is allowed to happen (within a five-hour window starting at start_hour).\nIf not specified all days of the week will be used.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.RestartWindowDaysOfWeek"
"$ref": "#/$defs/slice/github.com/databricks/databricks-sdk-go/service/pipelines.RestartWindowDaysOfWeek"
},
"start_hour": {
"description": "An integer between 0 and 23 denoting the start hour for the restart window in the 24-hour day.\nContinuous pipeline restart is triggered only within a five-hour window starting at this hour.",
@ -6162,6 +6228,20 @@
}
]
},
"pipelines.RestartWindowDaysOfWeek": {
"oneOf": [
{
"type": "array",
"items": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.RestartWindowDaysOfWeek"
}
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"serving.AiGatewayRateLimit": {
"oneOf": [
{

cmd/account/cmd.go generated

@ -11,6 +11,7 @@ import (
credentials "github.com/databricks/cli/cmd/account/credentials"
custom_app_integration "github.com/databricks/cli/cmd/account/custom-app-integration"
encryption_keys "github.com/databricks/cli/cmd/account/encryption-keys"
account_federation_policy "github.com/databricks/cli/cmd/account/federation-policy"
account_groups "github.com/databricks/cli/cmd/account/groups"
account_ip_access_lists "github.com/databricks/cli/cmd/account/ip-access-lists"
log_delivery "github.com/databricks/cli/cmd/account/log-delivery"
@ -21,6 +22,7 @@ import (
o_auth_published_apps "github.com/databricks/cli/cmd/account/o-auth-published-apps"
private_access "github.com/databricks/cli/cmd/account/private-access"
published_app_integration "github.com/databricks/cli/cmd/account/published-app-integration"
service_principal_federation_policy "github.com/databricks/cli/cmd/account/service-principal-federation-policy"
service_principal_secrets "github.com/databricks/cli/cmd/account/service-principal-secrets"
account_service_principals "github.com/databricks/cli/cmd/account/service-principals"
account_settings "github.com/databricks/cli/cmd/account/settings"
@ -44,6 +46,7 @@ func New() *cobra.Command {
cmd.AddCommand(credentials.New())
cmd.AddCommand(custom_app_integration.New())
cmd.AddCommand(encryption_keys.New())
cmd.AddCommand(account_federation_policy.New())
cmd.AddCommand(account_groups.New())
cmd.AddCommand(account_ip_access_lists.New())
cmd.AddCommand(log_delivery.New())
@ -54,6 +57,7 @@ func New() *cobra.Command {
cmd.AddCommand(o_auth_published_apps.New())
cmd.AddCommand(private_access.New())
cmd.AddCommand(published_app_integration.New())
cmd.AddCommand(service_principal_federation_policy.New())
cmd.AddCommand(service_principal_secrets.New())
cmd.AddCommand(account_service_principals.New())
cmd.AddCommand(account_settings.New())


@ -0,0 +1,402 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package federation_policy
import (
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/oauth2"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "federation-policy",
Short: `These APIs manage account federation policies.`,
Long: `These APIs manage account federation policies.
Account federation policies allow users and service principals in your
Databricks account to securely access Databricks APIs using tokens from your
trusted identity providers (IdPs).
With token federation, your users and service principals can exchange tokens
from your IdP for Databricks OAuth tokens, which can be used to access
Databricks APIs. Token federation eliminates the need to manage Databricks
secrets, and allows you to centralize management of token issuance policies in
your IdP. Databricks token federation is typically used in combination with
[SCIM], so users in your IdP are synchronized into your Databricks account.
Token federation is configured in your Databricks account using an account
federation policy. An account federation policy specifies: * which IdP, or
issuer, your Databricks account should accept tokens from * how to determine
which Databricks user, or subject, a token is issued for
To configure a federation policy, you provide the following: * The required
token __issuer__, as specified in the iss claim of your tokens. The
issuer is an https URL that identifies your IdP. * The allowed token
__audiences__, as specified in the aud claim of your tokens. This
identifier is intended to represent the recipient of the token. As long as the
audience in the token matches at least one audience in the policy, the token
is considered a match. If unspecified, the default value is your Databricks
account id. * The __subject claim__, which indicates which token claim
contains the Databricks username of the user the token was issued for. If
unspecified, the default value is sub. * Optionally, the public keys
used to validate the signature of your tokens, in JWKS format. If unspecified
(recommended), Databricks automatically fetches the public keys from your
issuers well known endpoint. Databricks strongly recommends relying on your
issuers well known endpoint for discovering public keys.
An example federation policy is: issuer: "https://idp.mycompany.com/oidc"
audiences: ["databricks"] subject_claim: "sub"
An example JWT token body that matches this policy and could be used to
authenticate to Databricks as user username@mycompany.com is: { "iss":
"https://idp.mycompany.com/oidc", "aud": "databricks", "sub":
"username@mycompany.com" }
You may also need to configure your IdP to generate tokens for your users to
exchange with Databricks, if your users do not already have the ability to
generate tokens that are compatible with your federation policy.
You do not need to configure an OAuth application in Databricks to use token
federation.
[SCIM]: https://docs.databricks.com/admin/users-groups/scim/index.html`,
GroupID: "oauth2",
Annotations: map[string]string{
"package": "oauth2",
},
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newCreate())
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newList())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start create command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createOverrides []func(
*cobra.Command,
*oauth2.CreateAccountFederationPolicyRequest,
)
func newCreate() *cobra.Command {
cmd := &cobra.Command{}
var createReq oauth2.CreateAccountFederationPolicyRequest
createReq.Policy = &oauth2.FederationPolicy{}
var createJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&createReq.Policy.Description, "description", createReq.Policy.Description, `Description of the federation policy.`)
cmd.Flags().StringVar(&createReq.Policy.Name, "name", createReq.Policy.Name, `Name of the federation policy.`)
// TODO: complex arg: oidc_policy
cmd.Use = "create"
cmd.Short = `Create account federation policy.`
cmd.Long = `Create account federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
if cmd.Flags().Changed("json") {
diags := createJson.Unmarshal(&createReq.Policy)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
response, err := a.FederationPolicy.Create(ctx, createReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range createOverrides {
fn(cmd, &createReq)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*oauth2.DeleteAccountFederationPolicyRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq oauth2.DeleteAccountFederationPolicyRequest
// TODO: short flags
cmd.Use = "delete POLICY_ID"
cmd.Short = `Delete account federation policy.`
cmd.Long = `Delete account federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
deleteReq.PolicyId = args[0]
err = a.FederationPolicy.Delete(ctx, deleteReq)
if err != nil {
return err
}
return nil
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*oauth2.GetAccountFederationPolicyRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq oauth2.GetAccountFederationPolicyRequest
// TODO: short flags
cmd.Use = "get POLICY_ID"
cmd.Short = `Get account federation policy.`
cmd.Long = `Get account federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
getReq.PolicyId = args[0]
response, err := a.FederationPolicy.Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start list command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listOverrides []func(
*cobra.Command,
*oauth2.ListAccountFederationPoliciesRequest,
)
func newList() *cobra.Command {
cmd := &cobra.Command{}
var listReq oauth2.ListAccountFederationPoliciesRequest
// TODO: short flags
cmd.Flags().IntVar(&listReq.PageSize, "page-size", listReq.PageSize, ``)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, ``)
cmd.Use = "list"
cmd.Short = `List account federation policies.`
cmd.Long = `List account federation policies.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
response := a.FederationPolicy.List(ctx, listReq)
return cmdio.RenderIterator(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range listOverrides {
fn(cmd, &listReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*oauth2.UpdateAccountFederationPolicyRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq oauth2.UpdateAccountFederationPolicyRequest
updateReq.Policy = &oauth2.FederationPolicy{}
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&updateReq.Policy.Description, "description", updateReq.Policy.Description, `Description of the federation policy.`)
cmd.Flags().StringVar(&updateReq.Policy.Name, "name", updateReq.Policy.Name, `Name of the federation policy.`)
// TODO: complex arg: oidc_policy
cmd.Use = "update POLICY_ID UPDATE_MASK"
cmd.Short = `Update account federation policy.`
cmd.Long = `Update account federation policy.
Arguments:
POLICY_ID:
UPDATE_MASK: Field mask is required to be passed into the PATCH request. Field mask
specifies which fields of the setting payload will be updated. The field
mask needs to be supplied as single string. To specify multiple fields in
the field mask, use comma as the separator (no space).`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(2)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
if cmd.Flags().Changed("json") {
diags := updateJson.Unmarshal(&updateReq.Policy)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
updateReq.PolicyId = args[0]
updateReq.UpdateMask = args[1]
response, err := a.FederationPolicy.Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service AccountFederationPolicy
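
The generated command above ultimately calls `a.FederationPolicy.Create`; the same call can be made directly with the SDK. A minimal sketch, assuming account-level auth from the environment; the issuer, audiences, and subject claim mirror the example policy in the help text, and the `oauth2.OidcFederationPolicy` type and field names are assumptions about the 0.54.0 oauth2 package rather than something shown in this diff.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/oauth2"
)

func main() {
	ctx := context.Background()

	// Assumes account-level auth (host, DATABRICKS_ACCOUNT_ID, credentials)
	// is picked up from the environment.
	a, err := databricks.NewAccountClient()
	if err != nil {
		log.Fatal(err)
	}

	// Issuer, audiences, and subject claim mirror the example policy in the
	// command help; replace them with your IdP's values.
	policy, err := a.FederationPolicy.Create(ctx, oauth2.CreateAccountFederationPolicyRequest{
		Policy: &oauth2.FederationPolicy{
			Name:        "my-idp-policy",
			Description: "Token federation for idp.mycompany.com",
			OidcPolicy: &oauth2.OidcFederationPolicy{
				Issuer:       "https://idp.mycompany.com/oidc",
				Audiences:    []string{"databricks"},
				SubjectClaim: "sub",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created policy:", policy.Name)
}
```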


@ -0,0 +1,445 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package service_principal_federation_policy
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/oauth2"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "service-principal-federation-policy",
Short: `These APIs manage service principal federation policies.`,
Long: `These APIs manage service principal federation policies.
Service principal federation, also known as Workload Identity Federation,
allows your automated workloads running outside of Databricks to securely
access Databricks APIs without the need for Databricks secrets. With Workload
Identity Federation, your application (or workload) authenticates to
Databricks as a Databricks service principal, using tokens provided by the
workload runtime.
Databricks strongly recommends using Workload Identity Federation to
authenticate to Databricks from automated workloads, over alternatives such as
OAuth client secrets or Personal Access Tokens, whenever possible. Workload
Identity Federation is supported by many popular services, including Github
Actions, Azure DevOps, GitLab, Terraform Cloud, and Kubernetes clusters, among
others.
Workload identity federation is configured in your Databricks account using a
service principal federation policy. A service principal federation policy
specifies: * which IdP, or issuer, the service principal is allowed to
authenticate from * which workload identity, or subject, is allowed to
authenticate as the Databricks service principal
To configure a federation policy, you provide the following: * The required
token __issuer__, as specified in the iss claim of workload identity
tokens. The issuer is an https URL that identifies the workload identity
provider. * The required token __subject__, as specified in the sub
claim of workload identity tokens. The subject uniquely identifies the
workload in the workload runtime environment. * The allowed token
__audiences__, as specified in the aud claim of workload identity
tokens. The audience is intended to represent the recipient of the token. As
long as the audience in the token matches at least one audience in the policy,
the token is considered a match. If unspecified, the default value is your
Databricks account id. * Optionally, the public keys used to validate the
signature of the workload identity tokens, in JWKS format. If unspecified
(recommended), Databricks automatically fetches the public keys from the
issuers well known endpoint. Databricks strongly recommends relying on the
issuers well known endpoint for discovering public keys.
An example service principal federation policy, for a Github Actions workload,
is: issuer: "https://token.actions.githubusercontent.com" audiences:
["https://github.com/my-github-org"] subject:
"repo:my-github-org/my-repo:environment:prod"
An example JWT token body that matches this policy and could be used to
authenticate to Databricks is: { "iss":
"https://token.actions.githubusercontent.com", "aud":
"https://github.com/my-github-org", "sub":
"repo:my-github-org/my-repo:environment:prod" }
You may also need to configure the workload runtime to generate tokens for
your workloads.
You do not need to configure an OAuth application in Databricks to use token
federation.`,
GroupID: "oauth2",
Annotations: map[string]string{
"package": "oauth2",
},
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newCreate())
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newList())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start create command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createOverrides []func(
*cobra.Command,
*oauth2.CreateServicePrincipalFederationPolicyRequest,
)
func newCreate() *cobra.Command {
cmd := &cobra.Command{}
var createReq oauth2.CreateServicePrincipalFederationPolicyRequest
createReq.Policy = &oauth2.FederationPolicy{}
var createJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&createReq.Policy.Description, "description", createReq.Policy.Description, `Description of the federation policy.`)
cmd.Flags().StringVar(&createReq.Policy.Name, "name", createReq.Policy.Name, `Name of the federation policy.`)
// TODO: complex arg: oidc_policy
cmd.Use = "create SERVICE_PRINCIPAL_ID"
cmd.Short = `Create service principal federation policy.`
cmd.Long = `Create service principal federation policy.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
if cmd.Flags().Changed("json") {
diags := createJson.Unmarshal(&createReq.Policy)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
_, err = fmt.Sscan(args[0], &createReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
response, err := a.ServicePrincipalFederationPolicy.Create(ctx, createReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range createOverrides {
fn(cmd, &createReq)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*oauth2.DeleteServicePrincipalFederationPolicyRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq oauth2.DeleteServicePrincipalFederationPolicyRequest
// TODO: short flags
cmd.Use = "delete SERVICE_PRINCIPAL_ID POLICY_ID"
cmd.Short = `Delete service principal federation policy.`
cmd.Long = `Delete service principal federation policy.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.
POLICY_ID: `
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(2)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
_, err = fmt.Sscan(args[0], &deleteReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
deleteReq.PolicyId = args[1]
err = a.ServicePrincipalFederationPolicy.Delete(ctx, deleteReq)
if err != nil {
return err
}
return nil
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*oauth2.GetServicePrincipalFederationPolicyRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq oauth2.GetServicePrincipalFederationPolicyRequest
// TODO: short flags
cmd.Use = "get SERVICE_PRINCIPAL_ID POLICY_ID"
cmd.Short = `Get service principal federation policy.`
cmd.Long = `Get service principal federation policy.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.
POLICY_ID: `
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(2)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
_, err = fmt.Sscan(args[0], &getReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
getReq.PolicyId = args[1]
response, err := a.ServicePrincipalFederationPolicy.Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start list command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listOverrides []func(
*cobra.Command,
*oauth2.ListServicePrincipalFederationPoliciesRequest,
)
func newList() *cobra.Command {
cmd := &cobra.Command{}
var listReq oauth2.ListServicePrincipalFederationPoliciesRequest
// TODO: short flags
cmd.Flags().IntVar(&listReq.PageSize, "page-size", listReq.PageSize, ``)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, ``)
cmd.Use = "list SERVICE_PRINCIPAL_ID"
cmd.Short = `List service principal federation policies.`
cmd.Long = `List service principal federation policies.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
_, err = fmt.Sscan(args[0], &listReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
response := a.ServicePrincipalFederationPolicy.List(ctx, listReq)
return cmdio.RenderIterator(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range listOverrides {
fn(cmd, &listReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*oauth2.UpdateServicePrincipalFederationPolicyRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq oauth2.UpdateServicePrincipalFederationPolicyRequest
updateReq.Policy = &oauth2.FederationPolicy{}
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&updateReq.Policy.Description, "description", updateReq.Policy.Description, `Description of the federation policy.`)
cmd.Flags().StringVar(&updateReq.Policy.Name, "name", updateReq.Policy.Name, `Name of the federation policy.`)
// TODO: complex arg: oidc_policy
cmd.Use = "update SERVICE_PRINCIPAL_ID POLICY_ID UPDATE_MASK"
cmd.Short = `Update service principal federation policy.`
cmd.Long = `Update service principal federation policy.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.
POLICY_ID:
UPDATE_MASK: Field mask is required to be passed into the PATCH request. Field mask
specifies which fields of the setting payload will be updated. The field
mask needs to be supplied as single string. To specify multiple fields in
the field mask, use comma as the separator (no space).`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(3)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
if cmd.Flags().Changed("json") {
diags := updateJson.Unmarshal(&updateReq.Policy)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
_, err = fmt.Sscan(args[0], &updateReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
updateReq.PolicyId = args[1]
updateReq.UpdateMask = args[2]
response, err := a.ServicePrincipalFederationPolicy.Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service ServicePrincipalFederationPolicy
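
The equivalent sketch for a service principal federation policy, mirroring the GitHub Actions example in the help text above. The service principal ID, org, and repo are placeholders, and the `oauth2.OidcFederationPolicy` field names are again assumptions.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/oauth2"
)

func main() {
	ctx := context.Background()

	a, err := databricks.NewAccountClient()
	if err != nil {
		log.Fatal(err)
	}

	// Service principal ID, org, and repo are placeholders; the policy mirrors
	// the GitHub Actions example in the command help.
	policy, err := a.ServicePrincipalFederationPolicy.Create(ctx, oauth2.CreateServicePrincipalFederationPolicyRequest{
		ServicePrincipalId: 1234567890,
		Policy: &oauth2.FederationPolicy{
			Name: "github-actions-prod",
			OidcPolicy: &oauth2.OidcFederationPolicy{
				Issuer:    "https://token.actions.githubusercontent.com",
				Audiences: []string{"https://github.com/my-github-org"},
				Subject:   "repo:my-github-org/my-repo:environment:prod",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created policy:", policy.Name)
}
```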


@ -204,6 +204,9 @@ func newCreate() *cobra.Command {
cmd.Flags().StringVar(&createReq.ClusterName, "cluster-name", createReq.ClusterName, `Cluster name requested by the user.`)
// TODO: map via StringToStringVar: custom_tags
cmd.Flags().Var(&createReq.DataSecurityMode, "data-security-mode", `Data security mode decides what data governance model to use when accessing data from a cluster. Supported values: [
DATA_SECURITY_MODE_AUTO,
DATA_SECURITY_MODE_DEDICATED,
DATA_SECURITY_MODE_STANDARD,
LEGACY_PASSTHROUGH,
LEGACY_SINGLE_USER,
LEGACY_SINGLE_USER_STANDARD,
@ -220,6 +223,8 @@ func newCreate() *cobra.Command {
// TODO: complex arg: gcp_attributes
// TODO: array: init_scripts
cmd.Flags().StringVar(&createReq.InstancePoolId, "instance-pool-id", createReq.InstancePoolId, `The optional ID of the instance pool to which the cluster belongs.`)
cmd.Flags().BoolVar(&createReq.IsSingleNode, "is-single-node", createReq.IsSingleNode, `This field can only be used with kind.`)
cmd.Flags().Var(&createReq.Kind, "kind", `The kind of compute described by this compute specification. Supported values: [CLASSIC_PREVIEW]`)
cmd.Flags().StringVar(&createReq.NodeTypeId, "node-type-id", createReq.NodeTypeId, `This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster.`)
cmd.Flags().IntVar(&createReq.NumWorkers, "num-workers", createReq.NumWorkers, `Number of worker nodes that this cluster should have.`)
cmd.Flags().StringVar(&createReq.PolicyId, "policy-id", createReq.PolicyId, `The ID of the cluster policy used to create the cluster if applicable.`)
@ -228,6 +233,7 @@ func newCreate() *cobra.Command {
// TODO: map via StringToStringVar: spark_conf
// TODO: map via StringToStringVar: spark_env_vars
// TODO: array: ssh_public_keys
cmd.Flags().BoolVar(&createReq.UseMlRuntime, "use-ml-runtime", createReq.UseMlRuntime, `This field can only be used with kind.`)
// TODO: complex arg: workload_type
cmd.Use = "create SPARK_VERSION"
@ -468,6 +474,9 @@ func newEdit() *cobra.Command {
cmd.Flags().StringVar(&editReq.ClusterName, "cluster-name", editReq.ClusterName, `Cluster name requested by the user.`)
// TODO: map via StringToStringVar: custom_tags
cmd.Flags().Var(&editReq.DataSecurityMode, "data-security-mode", `Data security mode decides what data governance model to use when accessing data from a cluster. Supported values: [
DATA_SECURITY_MODE_AUTO,
DATA_SECURITY_MODE_DEDICATED,
DATA_SECURITY_MODE_STANDARD,
LEGACY_PASSTHROUGH,
LEGACY_SINGLE_USER,
LEGACY_SINGLE_USER_STANDARD,
@ -484,6 +493,8 @@ func newEdit() *cobra.Command {
// TODO: complex arg: gcp_attributes
// TODO: array: init_scripts
cmd.Flags().StringVar(&editReq.InstancePoolId, "instance-pool-id", editReq.InstancePoolId, `The optional ID of the instance pool to which the cluster belongs.`)
cmd.Flags().BoolVar(&editReq.IsSingleNode, "is-single-node", editReq.IsSingleNode, `This field can only be used with kind.`)
cmd.Flags().Var(&editReq.Kind, "kind", `The kind of compute described by this compute specification. Supported values: [CLASSIC_PREVIEW]`)
cmd.Flags().StringVar(&editReq.NodeTypeId, "node-type-id", editReq.NodeTypeId, `This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster.`)
cmd.Flags().IntVar(&editReq.NumWorkers, "num-workers", editReq.NumWorkers, `Number of worker nodes that this cluster should have.`)
cmd.Flags().StringVar(&editReq.PolicyId, "policy-id", editReq.PolicyId, `The ID of the cluster policy used to create the cluster if applicable.`)
@ -492,6 +503,7 @@ func newEdit() *cobra.Command {
// TODO: map via StringToStringVar: spark_conf
// TODO: map via StringToStringVar: spark_env_vars
// TODO: array: ssh_public_keys
cmd.Flags().BoolVar(&editReq.UseMlRuntime, "use-ml-runtime", editReq.UseMlRuntime, `This field can only be used with kind.`)
// TODO: complex arg: workload_type
cmd.Use = "edit CLUSTER_ID SPARK_VERSION"


@ -828,6 +828,7 @@ func newMigrate() *cobra.Command {
cmd.Flags().StringVar(&migrateReq.DisplayName, "display-name", migrateReq.DisplayName, `Display name for the new Lakeview dashboard.`)
cmd.Flags().StringVar(&migrateReq.ParentPath, "parent-path", migrateReq.ParentPath, `The workspace path of the folder to contain the migrated Lakeview dashboard.`)
cmd.Flags().BoolVar(&migrateReq.UpdateParameterSyntax, "update-parameter-syntax", migrateReq.UpdateParameterSyntax, `Flag to indicate if mustache parameter syntax ({{ param }}) should be auto-updated to named syntax (:param) when converting datasets in the dashboard.`)
cmd.Use = "migrate SOURCE_DASHBOARD_ID"
cmd.Short = `Migrate dashboard.`
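
The new `--update-parameter-syntax` flag maps to the `UpdateParameterSyntax` field on `dashboards.MigrateDashboardRequest`. A minimal sketch, assuming `w.Lakeview.Migrate` keeps its generated signature; the source dashboard ID is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/dashboards"
)

func main() {
	ctx := context.Background()

	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		log.Fatal(err)
	}

	// SourceDashboardId is a placeholder. UpdateParameterSyntax asks the
	// migration to rewrite {{ param }} mustache references to :param syntax.
	dash, err := w.Lakeview.Migrate(ctx, dashboards.MigrateDashboardRequest{
		SourceDashboardId:     "abc1234567890def",
		DisplayName:           "Migrated dashboard",
		UpdateParameterSyntax: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("migrated to Lakeview dashboard", dash.DashboardId)
}
```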

go.mod

@ -7,7 +7,7 @@ toolchain go1.23.2
require (
github.com/Masterminds/semver/v3 v3.3.1 // MIT
github.com/briandowns/spinner v1.23.1 // Apache 2.0
github.com/databricks/databricks-sdk-go v0.53.0 // Apache 2.0
github.com/databricks/databricks-sdk-go v0.54.0 // Apache 2.0
github.com/fatih/color v1.18.0 // MIT
github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
github.com/google/uuid v1.6.0 // BSD-3-Clause

go.sum generated

@ -32,8 +32,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/databricks/databricks-sdk-go v0.53.0 h1:rZMXaTC3HNKZt+m4C4I/dY3EdZj+kl/sVd/Kdq55Qfo=
github.com/databricks/databricks-sdk-go v0.53.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
github.com/databricks/databricks-sdk-go v0.54.0 h1:L8gsA3NXs+uYU3QtW/OUgjxMQxOH24k0MT9JhB3zLlM=
github.com/databricks/databricks-sdk-go v0.54.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=