Update Go SDK to v0.35.0 (#1300)

## Changes

SDK release:
https://github.com/databricks/databricks-sdk-go/releases/tag/v0.35.0

## Tests

Tests pass.
Pieter Noordhuis 2024-03-20 14:57:53 +01:00 committed by GitHub
parent 8255c9d9fb
commit 0ef93c2502
23 changed files with 464 additions and 109 deletions

View File

@ -1 +1 @@
-d855b30f25a06fe84f25214efa20e7f1fffcdf9e
+3821dc51952c5cf1c276dd84967da011b191e64a

View File

@ -193,7 +193,7 @@
"description": "An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.",
"properties": {
"pause_status": {
"description": "Whether this trigger is paused or not."
"description": "Indicate whether this schedule is paused or not."
}
}
},
@ -725,7 +725,7 @@
"description": "An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.",
"properties": {
"pause_status": {
"description": "Whether this trigger is paused or not."
"description": "Indicate whether this schedule is paused or not."
},
"quartz_cron_expression": {
"description": "A Cron expression using Quartz syntax that describes the schedule for a job.\nSee [Cron Trigger](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)\nfor details. This field is required.\"\n"
@ -785,7 +785,7 @@
"description": "Optional schema to write to. This parameter is only used when a warehouse_id is also provided. If not provided, the `default` schema is used."
},
"source": {
"description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
"description": "Optional location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace or cloud location (if the `python_file` has a URI format). When set to `GIT`,\nthe Python file will be retrieved from a Git repository defined in `git_source`.\n\n* `WORKSPACE`: The Python file is located in a \u003cDatabricks\u003e workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"
},
"warehouse_id": {
"description": "ID of the SQL warehouse to connect to. If provided, we automatically generate and provide the profile and connection details to dbt. It can be overridden on a per-command basis by using the `--profiles-dir` command line argument."
@ -1269,7 +1269,7 @@
"description": "The path of the notebook to be run in the Databricks workspace or remote repository.\nFor notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash.\nFor notebooks stored in a remote repository, the path must be relative. This field is required.\n"
},
"source": {
"description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
"description": "Optional location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace or cloud location (if the `python_file` has a URI format). When set to `GIT`,\nthe Python file will be retrieved from a Git repository defined in `git_source`.\n\n* `WORKSPACE`: The Python file is located in a \u003cDatabricks\u003e workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"
}
}
},
@ -1371,7 +1371,7 @@
"description": "The Python file to be executed. Cloud file URIs (such as dbfs:/, s3:/, adls:/, gcs:/) and workspace paths are supported. For python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required."
},
"source": {
"description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
"description": "Optional location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace or cloud location (if the `python_file` has a URI format). When set to `GIT`,\nthe Python file will be retrieved from a Git repository defined in `git_source`.\n\n* `WORKSPACE`: The Python file is located in a \u003cDatabricks\u003e workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"
}
}
},
@ -1449,7 +1449,7 @@
"description": "Path of the SQL file. Must be relative if the source is a remote Git repository and absolute for workspace paths."
},
"source": {
"description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
"description": "Optional location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace or cloud location (if the `python_file` has a URI format). When set to `GIT`,\nthe Python file will be retrieved from a Git repository defined in `git_source`.\n\n* `WORKSPACE`: The Python file is located in a \u003cDatabricks\u003e workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"
}
}
},
@ -1551,7 +1551,7 @@
}
},
"pause_status": {
"description": "Whether this trigger is paused or not."
"description": "Indicate whether this schedule is paused or not."
},
"table": {
"description": "Table trigger settings.",
@ -1653,7 +1653,7 @@
}
},
"served_entities": {
"description": "A list of served entities for the endpoint to serve. A serving endpoint can have up to 10 served entities.",
"description": "A list of served entities for the endpoint to serve. A serving endpoint can have up to 15 served entities.",
"items": {
"description": "",
"properties": {
@ -1791,7 +1791,7 @@
}
},
"served_models": {
"description": "(Deprecated, use served_entities instead) A list of served models for the endpoint to serve. A serving endpoint can have up to 10 served models.",
"description": "(Deprecated, use served_entities instead) A list of served models for the endpoint to serve. A serving endpoint can have up to 15 served models.",
"items": {
"description": "",
"properties": {
@ -2726,7 +2726,7 @@
"description": "An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.",
"properties": {
"pause_status": {
"description": "Whether this trigger is paused or not."
"description": "Indicate whether this schedule is paused or not."
}
}
},
@ -3258,7 +3258,7 @@
"description": "An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.",
"properties": {
"pause_status": {
"description": "Whether this trigger is paused or not."
"description": "Indicate whether this schedule is paused or not."
},
"quartz_cron_expression": {
"description": "A Cron expression using Quartz syntax that describes the schedule for a job.\nSee [Cron Trigger](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)\nfor details. This field is required.\"\n"
@ -3318,7 +3318,7 @@
"description": "Optional schema to write to. This parameter is only used when a warehouse_id is also provided. If not provided, the `default` schema is used."
},
"source": {
"description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
"description": "Optional location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace or cloud location (if the `python_file` has a URI format). When set to `GIT`,\nthe Python file will be retrieved from a Git repository defined in `git_source`.\n\n* `WORKSPACE`: The Python file is located in a \u003cDatabricks\u003e workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"
},
"warehouse_id": {
"description": "ID of the SQL warehouse to connect to. If provided, we automatically generate and provide the profile and connection details to dbt. It can be overridden on a per-command basis by using the `--profiles-dir` command line argument."
@ -3802,7 +3802,7 @@
"description": "The path of the notebook to be run in the Databricks workspace or remote repository.\nFor notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash.\nFor notebooks stored in a remote repository, the path must be relative. This field is required.\n"
},
"source": {
"description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
"description": "Optional location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace or cloud location (if the `python_file` has a URI format). When set to `GIT`,\nthe Python file will be retrieved from a Git repository defined in `git_source`.\n\n* `WORKSPACE`: The Python file is located in a \u003cDatabricks\u003e workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"
}
}
},
@ -3904,7 +3904,7 @@
"description": "The Python file to be executed. Cloud file URIs (such as dbfs:/, s3:/, adls:/, gcs:/) and workspace paths are supported. For python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required."
},
"source": {
"description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
"description": "Optional location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace or cloud location (if the `python_file` has a URI format). When set to `GIT`,\nthe Python file will be retrieved from a Git repository defined in `git_source`.\n\n* `WORKSPACE`: The Python file is located in a \u003cDatabricks\u003e workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"
}
}
},
@ -3982,7 +3982,7 @@
"description": "Path of the SQL file. Must be relative if the source is a remote Git repository and absolute for workspace paths."
},
"source": {
"description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
"description": "Optional location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace or cloud location (if the `python_file` has a URI format). When set to `GIT`,\nthe Python file will be retrieved from a Git repository defined in `git_source`.\n\n* `WORKSPACE`: The Python file is located in a \u003cDatabricks\u003e workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"
}
}
},
@ -4084,7 +4084,7 @@
}
},
"pause_status": {
"description": "Whether this trigger is paused or not."
"description": "Indicate whether this schedule is paused or not."
},
"table": {
"description": "Table trigger settings.",
@ -4186,7 +4186,7 @@
}
},
"served_entities": {
"description": "A list of served entities for the endpoint to serve. A serving endpoint can have up to 10 served entities.",
"description": "A list of served entities for the endpoint to serve. A serving endpoint can have up to 15 served entities.",
"items": {
"description": "",
"properties": {
@ -4324,7 +4324,7 @@
}
},
"served_models": {
"description": "(Deprecated, use served_entities instead) A list of served models for the endpoint to serve. A serving endpoint can have up to 10 served models.",
"description": "(Deprecated, use served_entities instead) A list of served models for the endpoint to serve. A serving endpoint can have up to 15 served models.",
"items": {
"description": "",
"properties": {

View File

@ -300,11 +300,12 @@ func newDeletePrivateEndpointRule() *cobra.Command {
cmd.Short = `Delete a private endpoint rule.`
cmd.Long = `Delete a private endpoint rule.
-Initiates deleting a private endpoint rule. The private endpoint will be
-deactivated and will be purged after seven days of deactivation. When a
-private endpoint is in deactivated state, deactivated field is set to true
-and the private endpoint is not available to your serverless compute
-resources.
+Initiates deleting a private endpoint rule. If the connection state is PENDING
+or EXPIRED, the private endpoint is immediately deleted. Otherwise, the
+private endpoint is deactivated and will be deleted after seven days of
+deactivation. When a private endpoint is deactivated, the deactivated field
+is set to true and the private endpoint is not available to your serverless
+compute resources.
Arguments:
NETWORK_CONNECTIVITY_CONFIG_ID: Your Network Connectivity Configuration ID.

View File

@ -23,9 +23,6 @@ func New() *cobra.Command {
Annotations: map[string]string{
"package": "settings",
},
-// This service is being previewed; hide from help output.
-Hidden: true,
}
// Add subservices

View File

@ -210,6 +210,8 @@ func newGet() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&getReq.IncludeBrowse, "include-browse", getReq.IncludeBrowse, `Whether to include catalogs in the response for which the principal can only access selective metadata for.`)
cmd.Use = "get NAME"
cmd.Short = `Get a catalog.`
cmd.Long = `Get a catalog.
@ -260,11 +262,18 @@ func newGet() *cobra.Command {
// Functions can be added from the `init()` function in manually curated files in this directory.
var listOverrides []func(
*cobra.Command,
*catalog.ListCatalogsRequest,
)
func newList() *cobra.Command {
cmd := &cobra.Command{}
var listReq catalog.ListCatalogsRequest
// TODO: short flags
cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include catalogs in the response for which the principal can only access selective metadata for.`)
cmd.Use = "list"
cmd.Short = `List catalogs.`
cmd.Long = `List catalogs.
@ -277,11 +286,17 @@ func newList() *cobra.Command {
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
-response := w.Catalogs.List(ctx)
+response := w.Catalogs.List(ctx, listReq)
return cmdio.RenderIterator(ctx, response)
}
@ -291,7 +306,7 @@ func newList() *cobra.Command {
// Apply optional overrides to this command.
for _, fn := range listOverrides {
-fn(cmd)
+fn(cmd, &listReq)
}
return cmd

View File

@ -2,10 +2,11 @@ package catalogs
import (
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/spf13/cobra"
)
-func listOverride(listCmd *cobra.Command) {
+func listOverride(listCmd *cobra.Command, listReq *catalog.ListCatalogsRequest) {
listCmd.Annotations["headerTemplate"] = cmdio.Heredoc(`
{{header "Name"}} {{header "Type"}} {{header "Comment"}}`)
listCmd.Annotations["template"] = cmdio.Heredoc(`

View File

@ -222,6 +222,8 @@ func newGet() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&getReq.IncludeBrowse, "include-browse", getReq.IncludeBrowse, `Whether to include external locations in the response for which the principal can only access selective metadata for.`)
cmd.Use = "get NAME"
cmd.Short = `Get an external location.`
cmd.Long = `Get an external location.
@ -282,6 +284,7 @@ func newList() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include external locations in the response for which the principal can only access selective metadata for.`)
cmd.Flags().IntVar(&listReq.MaxResults, "max-results", listReq.MaxResults, `Maximum number of external locations to return.`)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque pagination token to go to next page based on previous query.`)
@ -291,10 +294,8 @@ func newList() *cobra.Command {
Gets an array of external locations (__ExternalLocationInfo__ objects) from
the metastore. The caller must be a metastore admin, the owner of the external
-location, or a user that has some privilege on the external location. For
-unpaginated request, there is no guarantee of a specific ordering of the
-elements in the array. For paginated request, elements are ordered by their
-name.`
+location, or a user that has some privilege on the external location. There is
+no guarantee of a specific ordering of the elements in the array.`
cmd.Annotations = make(map[string]string)

View File

@ -204,6 +204,8 @@ func newGet() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&getReq.IncludeBrowse, "include-browse", getReq.IncludeBrowse, `Whether to include functions in the response for which the principal can only access selective metadata for.`)
cmd.Use = "get NAME"
cmd.Short = `Get a function.`
cmd.Long = `Get a function.
@ -281,6 +283,7 @@ func newList() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include functions in the response for which the principal can only access selective metadata for.`)
cmd.Flags().IntVar(&listReq.MaxResults, "max-results", listReq.MaxResults, `Maximum number of functions to return.`)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque pagination token to go to next page based on previous query.`)
@ -293,9 +296,8 @@ func newList() *cobra.Command {
the user must have the **USE_CATALOG** privilege on the catalog and the
**USE_SCHEMA** privilege on the schema, and the output list contains only
functions for which either the user has the **EXECUTE** privilege or the user
-is the owner. For unpaginated request, there is no guarantee of a specific
-ordering of the elements in the array. For paginated request, elements are
-ordered by their name.
+is the owner. There is no guarantee of a specific ordering of the elements in
+the array.
Arguments:
CATALOG_NAME: Name of parent catalog for functions of interest.

View File

@ -146,11 +146,11 @@ func newCreate() *cobra.Command {
// TODO: array: custom_metrics
// TODO: complex arg: data_classification_config
// TODO: complex arg: inference_log
-// TODO: array: notifications
+// TODO: complex arg: notifications
// TODO: complex arg: schedule
cmd.Flags().BoolVar(&createReq.SkipBuiltinDashboard, "skip-builtin-dashboard", createReq.SkipBuiltinDashboard, `Whether to skip creating a default dashboard summarizing data quality metrics.`)
// TODO: array: slicing_exprs
-// TODO: output-only field
+// TODO: complex arg: snapshot
// TODO: complex arg: time_series
cmd.Flags().StringVar(&createReq.WarehouseId, "warehouse-id", createReq.WarehouseId, `Optional argument to specify the warehouse for dashboard creation.`)
@ -593,10 +593,10 @@ func newUpdate() *cobra.Command {
// TODO: array: custom_metrics
// TODO: complex arg: data_classification_config
// TODO: complex arg: inference_log
-// TODO: array: notifications
+// TODO: complex arg: notifications
// TODO: complex arg: schedule
// TODO: array: slicing_exprs
-// TODO: output-only field
+// TODO: complex arg: snapshot
// TODO: complex arg: time_series
cmd.Use = "update FULL_NAME OUTPUT_SCHEMA_NAME"

View File

@ -3,7 +3,10 @@
package lakeview
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/spf13/cobra"
@ -27,7 +30,12 @@ func New() *cobra.Command {
}
// Add methods
cmd.AddCommand(newCreate())
cmd.AddCommand(newGet())
cmd.AddCommand(newGetPublished())
cmd.AddCommand(newPublish())
cmd.AddCommand(newTrash())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
@ -37,6 +45,201 @@ func New() *cobra.Command {
return cmd
}
// start create command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createOverrides []func(
*cobra.Command,
*dashboards.CreateDashboardRequest,
)
func newCreate() *cobra.Command {
cmd := &cobra.Command{}
var createReq dashboards.CreateDashboardRequest
var createJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&createReq.ParentPath, "parent-path", createReq.ParentPath, `The workspace path of the folder containing the dashboard.`)
cmd.Flags().StringVar(&createReq.SerializedDashboard, "serialized-dashboard", createReq.SerializedDashboard, `The contents of the dashboard in serialized string form.`)
cmd.Flags().StringVar(&createReq.WarehouseId, "warehouse-id", createReq.WarehouseId, `The warehouse ID used to run the dashboard.`)
cmd.Use = "create DISPLAY_NAME"
cmd.Short = `Create dashboard.`
cmd.Long = `Create dashboard.
Create a draft dashboard.
Arguments:
DISPLAY_NAME: The display name of the dashboard.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
if cmd.Flags().Changed("json") {
err := root.ExactArgs(0)(cmd, args)
if err != nil {
return fmt.Errorf("when --json flag is specified, no positional arguments are required. Provide 'display_name' in your JSON input")
}
return nil
}
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
err = createJson.Unmarshal(&createReq)
if err != nil {
return err
}
}
if !cmd.Flags().Changed("json") {
createReq.DisplayName = args[0]
}
response, err := w.Lakeview.Create(ctx, createReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range createOverrides {
fn(cmd, &createReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*dashboards.GetLakeviewRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq dashboards.GetLakeviewRequest
// TODO: short flags
cmd.Use = "get DASHBOARD_ID"
cmd.Short = `Get dashboard.`
cmd.Long = `Get dashboard.
Get a draft dashboard.
Arguments:
DASHBOARD_ID: UUID identifying the dashboard.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
getReq.DashboardId = args[0]
response, err := w.Lakeview.Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start get-published command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getPublishedOverrides []func(
*cobra.Command,
*dashboards.GetPublishedRequest,
)
func newGetPublished() *cobra.Command {
cmd := &cobra.Command{}
var getPublishedReq dashboards.GetPublishedRequest
// TODO: short flags
cmd.Use = "get-published DASHBOARD_ID"
cmd.Short = `Get published dashboard.`
cmd.Long = `Get published dashboard.
Get the current published dashboard.
Arguments:
DASHBOARD_ID: UUID identifying the dashboard to be published.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
getPublishedReq.DashboardId = args[0]
response, err := w.Lakeview.GetPublished(ctx, getPublishedReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getPublishedOverrides {
fn(cmd, &getPublishedReq)
}
return cmd
}
// start publish command
// Slice with functions to override default command behavior.
@ -87,11 +290,11 @@ func newPublish() *cobra.Command {
}
publishReq.DashboardId = args[0]
-err = w.Lakeview.Publish(ctx, publishReq)
+response, err := w.Lakeview.Publish(ctx, publishReq)
if err != nil {
return err
}
-return nil
+return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
@ -106,4 +309,133 @@ func newPublish() *cobra.Command {
return cmd
}
// start trash command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var trashOverrides []func(
*cobra.Command,
*dashboards.TrashRequest,
)
func newTrash() *cobra.Command {
cmd := &cobra.Command{}
var trashReq dashboards.TrashRequest
// TODO: short flags
cmd.Use = "trash DASHBOARD_ID"
cmd.Short = `Trash dashboard.`
cmd.Long = `Trash dashboard.
Trash a dashboard.
Arguments:
DASHBOARD_ID: UUID identifying the dashboard.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
trashReq.DashboardId = args[0]
err = w.Lakeview.Trash(ctx, trashReq)
if err != nil {
return err
}
return nil
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range trashOverrides {
fn(cmd, &trashReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*dashboards.UpdateDashboardRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq dashboards.UpdateDashboardRequest
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&updateReq.DisplayName, "display-name", updateReq.DisplayName, `The display name of the dashboard.`)
cmd.Flags().StringVar(&updateReq.Etag, "etag", updateReq.Etag, `The etag for the dashboard.`)
cmd.Flags().StringVar(&updateReq.SerializedDashboard, "serialized-dashboard", updateReq.SerializedDashboard, `The contents of the dashboard in serialized string form.`)
cmd.Flags().StringVar(&updateReq.WarehouseId, "warehouse-id", updateReq.WarehouseId, `The warehouse ID used to run the dashboard.`)
cmd.Use = "update DASHBOARD_ID"
cmd.Short = `Update dashboard.`
cmd.Long = `Update dashboard.
Update a draft dashboard.
Arguments:
DASHBOARD_ID: UUID identifying the dashboard.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
err = updateJson.Unmarshal(&updateReq)
if err != nil {
return err
}
}
updateReq.DashboardId = args[0]
response, err := w.Lakeview.Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service Lakeview
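
Taken together, the new generated commands map one-to-one onto the `Lakeview` service in the SDK's `service/dashboards` package. A rough sketch of the draft-dashboard lifecycle they expose (create, publish, trash), reusing the `ctx` and `w` setup from the catalogs sketch above; the `dashboards.PublishRequest` type and the response field names are assumptions worth checking against the SDK release:

```go
// Create a draft dashboard (mirrors `databricks lakeview create`).
// The parent path is a hypothetical workspace folder.
d, err := w.Lakeview.Create(ctx, dashboards.CreateDashboardRequest{
	DisplayName: "sdk-demo",
	ParentPath:  "/Shared/dashboards",
})
if err != nil {
	panic(err)
}

// Publish now returns the published dashboard instead of only an error,
// which is why the generated command renders the response.
pub, err := w.Lakeview.Publish(ctx, dashboards.PublishRequest{DashboardId: d.DashboardId})
if err != nil {
	panic(err)
}
fmt.Println(pub.DisplayName)

// Move the draft to the trash (mirrors `databricks lakeview trash`).
if err := w.Lakeview.Trash(ctx, dashboards.TrashRequest{DashboardId: d.DashboardId}); err != nil {
	panic(err)
}
```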

View File

@ -133,6 +133,8 @@ func newGet() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&getReq.IncludeBrowse, "include-browse", getReq.IncludeBrowse, `Whether to include model versions in the response for which the principal can only access selective metadata for.`)
cmd.Use = "get FULL_NAME VERSION"
cmd.Short = `Get a Model Version.`
cmd.Long = `Get a Model Version.
@ -266,6 +268,7 @@ func newList() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include model versions in the response for which the principal can only access selective metadata for.`)
cmd.Flags().IntVar(&listReq.MaxResults, "max-results", listReq.MaxResults, `Maximum number of model versions to return.`)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque pagination token to go to next page based on previous query.`)

View File

@ -45,13 +45,13 @@ func New() *cobra.Command {
// Functions can be added from the `init()` function in manually curated files in this directory.
var createOverrides []func(
*cobra.Command,
-*catalog.ViewData,
+*catalog.CreateOnlineTableRequest,
)
func newCreate() *cobra.Command {
cmd := &cobra.Command{}
-var createReq catalog.ViewData
+var createReq catalog.CreateOnlineTableRequest
var createJson flags.JsonFlag
// TODO: short flags

View File

@ -326,6 +326,8 @@ func newGet() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&getReq.IncludeBrowse, "include-browse", getReq.IncludeBrowse, `Whether to include registered models in the response for which the principal can only access selective metadata for.`)
cmd.Use = "get FULL_NAME"
cmd.Short = `Get a Registered Model.`
cmd.Long = `Get a Registered Model.
@ -402,6 +404,7 @@ func newList() *cobra.Command {
// TODO: short flags
cmd.Flags().StringVar(&listReq.CatalogName, "catalog-name", listReq.CatalogName, `The identifier of the catalog under which to list registered models.`)
cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include registered models in the response for which the principal can only access selective metadata for.`)
cmd.Flags().IntVar(&listReq.MaxResults, "max-results", listReq.MaxResults, `Max number of registered models to return.`)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque token to send for the next page of results (pagination).`)
cmd.Flags().StringVar(&listReq.SchemaName, "schema-name", listReq.SchemaName, `The identifier of the schema under which to list registered models.`)

View File

@ -218,6 +218,8 @@ func newGet() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&getReq.IncludeBrowse, "include-browse", getReq.IncludeBrowse, `Whether to include schemas in the response for which the principal can only access selective metadata for.`)
cmd.Use = "get FULL_NAME"
cmd.Short = `Get a schema.`
cmd.Long = `Get a schema.
@ -290,6 +292,7 @@ func newList() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include schemas in the response for which the principal can only access selective metadata for.`)
cmd.Flags().IntVar(&listReq.MaxResults, "max-results", listReq.MaxResults, `Maximum number of schemas to return.`)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque pagination token to go to next page based on previous query.`)
@ -300,10 +303,8 @@ func newList() *cobra.Command {
Gets an array of schemas for a catalog in the metastore. If the caller is the
metastore admin or the owner of the parent catalog, all schemas for the
catalog will be retrieved. Otherwise, only schemas owned by the caller (or for
-which the caller has the **USE_SCHEMA** privilege) will be retrieved. For
-unpaginated request, there is no guarantee of a specific ordering of the
-elements in the array. For paginated request, elements are ordered by their
-name.
+which the caller has the **USE_SCHEMA** privilege) will be retrieved. There is
+no guarantee of a specific ordering of the elements in the array.
Arguments:
CATALOG_NAME: Parent catalog for schemas of interest.`

View File

@ -85,8 +85,7 @@ func newCreateScope() *cobra.Command {
cmd.Long = `Create a new secret scope.
The scope name must consist of alphanumeric characters, dashes, underscores,
-and periods, and may not exceed 128 characters. The maximum number of scopes
-in a workspace is 100.
+and periods, and may not exceed 128 characters.
Arguments:
SCOPE: Scope name requested by the user. Scope names are unique.`

View File

@ -82,9 +82,8 @@ func newBuildLogs() *cobra.Command {
// TODO: short flags
cmd.Use = "build-logs NAME SERVED_MODEL_NAME"
-cmd.Short = `Retrieve the logs associated with building the model's environment for a given serving endpoint's served model.`
-cmd.Long = `Retrieve the logs associated with building the model's environment for a given
-serving endpoint's served model.
+cmd.Short = `Get build logs for a served model.`
+cmd.Long = `Get build logs for a served model.
Retrieves the build logs associated with the provided served model.
@ -279,8 +278,8 @@ func newExportMetrics() *cobra.Command {
// TODO: short flags
cmd.Use = "export-metrics NAME"
-cmd.Short = `Retrieve the metrics associated with a serving endpoint.`
-cmd.Long = `Retrieve the metrics associated with a serving endpoint.
+cmd.Short = `Get metrics of a serving endpoint.`
+cmd.Long = `Get metrics of a serving endpoint.
Retrieves the metrics associated with the provided serving endpoint in either
Prometheus or OpenMetrics exposition format.
@ -509,8 +508,8 @@ func newList() *cobra.Command {
cmd := &cobra.Command{}
cmd.Use = "list"
-cmd.Short = `Retrieve all serving endpoints.`
-cmd.Long = `Retrieve all serving endpoints.`
+cmd.Short = `Get all serving endpoints.`
+cmd.Long = `Get all serving endpoints.`
cmd.Annotations = make(map[string]string)
@ -551,9 +550,8 @@ func newLogs() *cobra.Command {
// TODO: short flags
cmd.Use = "logs NAME SERVED_MODEL_NAME"
-cmd.Short = `Retrieve the most recent log lines associated with a given serving endpoint's served model.`
-cmd.Long = `Retrieve the most recent log lines associated with a given serving endpoint's
-served model.
+cmd.Short = `Get the latest logs for a served model.`
+cmd.Long = `Get the latest logs for a served model.
Retrieves the service logs associated with the provided served model.
@ -619,8 +617,8 @@ func newPatch() *cobra.Command {
// TODO: array: delete_tags
cmd.Use = "patch NAME"
-cmd.Short = `Patch the tags of a serving endpoint.`
-cmd.Long = `Patch the tags of a serving endpoint.
+cmd.Short = `Update tags of a serving endpoint.`
+cmd.Long = `Update tags of a serving endpoint.
Used to batch add and delete tags from a serving endpoint with a single API
call.
@ -689,8 +687,8 @@ func newPut() *cobra.Command {
// TODO: array: rate_limits
cmd.Use = "put NAME"
-cmd.Short = `Update the rate limits of a serving endpoint.`
-cmd.Long = `Update the rate limits of a serving endpoint.
+cmd.Short = `Update rate limits of a serving endpoint.`
+cmd.Long = `Update rate limits of a serving endpoint.
Used to update the rate limits of a serving endpoint. NOTE: only external and
foundation model endpoints are supported as of now.
@ -771,8 +769,8 @@ func newQuery() *cobra.Command {
cmd.Flags().Float64Var(&queryReq.Temperature, "temperature", queryReq.Temperature, `The temperature field used ONLY for __completions__ and __chat external & foundation model__ serving endpoints.`)
cmd.Use = "query NAME"
-cmd.Short = `Query a serving endpoint with provided model input.`
-cmd.Long = `Query a serving endpoint with provided model input.
+cmd.Short = `Query a serving endpoint.`
+cmd.Long = `Query a serving endpoint.
Arguments:
NAME: The name of the serving endpoint. This field is required.`
@ -914,8 +912,8 @@ func newUpdateConfig() *cobra.Command {
// TODO: complex arg: traffic_config
cmd.Use = "update-config NAME"
-cmd.Short = `Update a serving endpoint with a new config.`
-cmd.Long = `Update a serving endpoint with a new config.
+cmd.Short = `Update config of a serving endpoint.`
+cmd.Long = `Update config of a serving endpoint.
Updates any combination of the serving endpoint's served entities, the compute
configuration of those served entities, and the endpoint's traffic config. An

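The serving-endpoints edits above are wording-only changes to the generated help text; the underlying calls are unchanged. For orientation, a query against a completions-style endpoint looks roughly like this (a sketch reusing the client setup above, with `serving` being the SDK's `service/serving` package; the endpoint name is a placeholder, and `Prompt`/`Temperature` apply only to completions-capable external and foundation model endpoints):

```go
// --temperature on the CLI maps to the Temperature field shown in this diff.
resp, err := w.ServingEndpoints.Query(ctx, serving.QueryEndpointInput{
	Name:        "my-endpoint", // hypothetical serving endpoint
	Prompt:      "Write a haiku about diffs.",
	Temperature: 0.2,
})
if err != nil {
	panic(err)
}
fmt.Printf("%+v\n", resp)
```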
View File

@ -25,9 +25,6 @@ func New() *cobra.Command {
Annotations: map[string]string{
"package": "settings",
},
-// This service is being previewed; hide from help output.
-Hidden: true,
}
// Add subservices

View File

@ -78,7 +78,7 @@ func newCreate() *cobra.Command {
// TODO: complex arg: azure_service_principal
// TODO: complex arg: cloudflare_api_token
cmd.Flags().StringVar(&createReq.Comment, "comment", createReq.Comment, `Comment associated with the credential.`)
-// TODO: output-only field
+// TODO: complex arg: databricks_gcp_service_account
cmd.Flags().BoolVar(&createReq.ReadOnly, "read-only", createReq.ReadOnly, `Whether the storage credential is only usable for read operations.`)
cmd.Flags().BoolVar(&createReq.SkipValidation, "skip-validation", createReq.SkipValidation, `Supplying true to this argument skips validation of the created credential.`)
@ -310,9 +310,8 @@ func newList() *cobra.Command {
Gets an array of storage credentials (as __StorageCredentialInfo__ objects).
The array is limited to only those storage credentials the caller has
permission to access. If the caller is a metastore admin, retrieval of
-credentials is unrestricted. For unpaginated request, there is no guarantee of
-a specific ordering of the elements in the array. For paginated request,
-elements are ordered by their name.`
+credentials is unrestricted. There is no guarantee of a specific ordering of
+the elements in the array.`
cmd.Annotations = make(map[string]string)
@ -365,7 +364,7 @@ func newUpdate() *cobra.Command {
// TODO: complex arg: azure_service_principal
// TODO: complex arg: cloudflare_api_token
cmd.Flags().StringVar(&updateReq.Comment, "comment", updateReq.Comment, `Comment associated with the credential.`)
-// TODO: output-only field
+// TODO: complex arg: databricks_gcp_service_account
cmd.Flags().BoolVar(&updateReq.Force, "force", updateReq.Force, `Force update even if there are dependent external locations or external tables.`)
cmd.Flags().StringVar(&updateReq.NewName, "new-name", updateReq.NewName, `New name for the storage credential.`)
cmd.Flags().StringVar(&updateReq.Owner, "owner", updateReq.Owner, `Username of current owner of credential.`)
@ -454,7 +453,7 @@ func newValidate() *cobra.Command {
// TODO: complex arg: azure_managed_identity
// TODO: complex arg: azure_service_principal
// TODO: complex arg: cloudflare_api_token
-// TODO: output-only field
+// TODO: complex arg: databricks_gcp_service_account
cmd.Flags().StringVar(&validateReq.ExternalLocationName, "external-location-name", validateReq.ExternalLocationName, `The name of an existing external location to validate.`)
cmd.Flags().BoolVar(&validateReq.ReadOnly, "read-only", validateReq.ReadOnly, `Whether the storage credential is only usable for read operations.`)
cmd.Flags().StringVar(&validateReq.StorageCredentialName, "storage-credential-name", validateReq.StorageCredentialName, `The name of the storage credential to validate.`)

View File

@ -218,6 +218,7 @@ func newGet() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&getReq.IncludeBrowse, "include-browse", getReq.IncludeBrowse, `Whether to include tables in the response for which the principal can only access selective metadata for.`)
cmd.Flags().BoolVar(&getReq.IncludeDeltaMetadata, "include-delta-metadata", getReq.IncludeDeltaMetadata, `Whether delta metadata should be included in the response.`)
cmd.Use = "get FULL_NAME"
@ -296,6 +297,7 @@ func newList() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include tables in the response for which the principal can only access selective metadata for.`)
cmd.Flags().BoolVar(&listReq.IncludeDeltaMetadata, "include-delta-metadata", listReq.IncludeDeltaMetadata, `Whether delta metadata should be included in the response.`)
cmd.Flags().IntVar(&listReq.MaxResults, "max-results", listReq.MaxResults, `Maximum number of tables to return.`)
cmd.Flags().BoolVar(&listReq.OmitColumns, "omit-columns", listReq.OmitColumns, `Whether to omit the columns of the table from the response or not.`)

View File

@ -416,6 +416,7 @@ func newQueryIndex() *cobra.Command {
cmd.Flags().IntVar(&queryIndexReq.NumResults, "num-results", queryIndexReq.NumResults, `Number of results to return.`)
cmd.Flags().StringVar(&queryIndexReq.QueryText, "query-text", queryIndexReq.QueryText, `Query text.`)
// TODO: array: query_vector
cmd.Flags().Float64Var(&queryIndexReq.ScoreThreshold, "score-threshold", queryIndexReq.ScoreThreshold, `Threshold for the approximate nearest neighbor search.`)
cmd.Use = "query-index INDEX_NAME"
cmd.Short = `Query an index.`

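The new `--score-threshold` flag corresponds to the `ScoreThreshold` field on the query request in the SDK's `service/vectorsearch` package. A sketch of the equivalent SDK call, reusing the client setup above (the index name and columns are placeholders):

```go
res, err := w.VectorSearchIndexes.QueryIndex(ctx, vectorsearch.QueryVectorIndexRequest{
	IndexName:      "main.default.docs_index", // hypothetical index
	Columns:        []string{"id", "text"},
	QueryText:      "how do I rotate credentials?",
	NumResults:     5,
	ScoreThreshold: 0.7, // drop approximate-NN matches scoring below 0.7
})
if err != nil {
	panic(err)
}
fmt.Printf("%+v\n", res)
```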
View File

@ -249,6 +249,7 @@ func newList() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include volumes in the response for which the principal can only access selective metadata for.`)
cmd.Flags().IntVar(&listReq.MaxResults, "max-results", listReq.MaxResults, `Maximum number of volumes to return (page length).`)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque token returned by a previous request.`)
@ -319,6 +320,8 @@ func newRead() *cobra.Command {
// TODO: short flags
cmd.Flags().BoolVar(&readReq.IncludeBrowse, "include-browse", readReq.IncludeBrowse, `Whether to include volumes in the response for which the principal can only access selective metadata for.`)
cmd.Use = "read NAME"
cmd.Short = `Get a Volume.`
cmd.Long = `Get a Volume.

go.mod (18 changes)
View File

@ -4,7 +4,7 @@ go 1.21
require (
github.com/briandowns/spinner v1.23.0 // Apache 2.0
-github.com/databricks/databricks-sdk-go v0.34.0 // Apache 2.0
+github.com/databricks/databricks-sdk-go v0.35.0 // Apache 2.0
github.com/fatih/color v1.16.0 // MIT
github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
github.com/google/uuid v1.6.0 // BSD-3-Clause
@ -54,18 +54,18 @@ require (
github.com/stretchr/objx v0.5.2 // indirect
github.com/zclconf/go-cty v1.14.1 // indirect
go.opencensus.io v0.24.0 // indirect
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.48.0 // indirect
-go.opentelemetry.io/otel v1.23.0 // indirect
-go.opentelemetry.io/otel/metric v1.23.0 // indirect
-go.opentelemetry.io/otel/trace v1.23.0 // indirect
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
+go.opentelemetry.io/otel v1.24.0 // indirect
+go.opentelemetry.io/otel/metric v1.24.0 // indirect
+go.opentelemetry.io/otel/trace v1.24.0 // indirect
golang.org/x/crypto v0.21.0 // indirect
golang.org/x/net v0.22.0 // indirect
golang.org/x/sys v0.18.0 // indirect
golang.org/x/time v0.5.0 // indirect
-google.golang.org/api v0.166.0 // indirect
+google.golang.org/api v0.169.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
-google.golang.org/genproto/googleapis/rpc v0.0.0-20240213162025-012b6fc9bca9 // indirect
-google.golang.org/grpc v1.61.1 // indirect
-google.golang.org/protobuf v1.32.0 // indirect
+google.golang.org/genproto/googleapis/rpc v0.0.0-20240304161311-37d4d3c04a78 // indirect
+google.golang.org/grpc v1.62.0 // indirect
+google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)

go.sum (generated, 44 changes)
View File

@ -28,8 +28,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
-github.com/databricks/databricks-sdk-go v0.34.0 h1:z4JjgcCk99jAGxx3JgkMsniJFtReWhtAxkgyvtdFqCs=
-github.com/databricks/databricks-sdk-go v0.34.0/go.mod h1:MGNWVPqxYCW1vj/xD7DeLT8uChi4lgTFum+iIwDxd/Q=
+github.com/databricks/databricks-sdk-go v0.35.0 h1:Z5dflnYEqCreYtuDkwsCPadvRP/aucikI34+gzrvTYQ=
+github.com/databricks/databricks-sdk-go v0.35.0/go.mod h1:Yjy1gREDLK65g4axpVbVNKYAHYE2Sqzj0AB9QWHCBVM=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@ -94,8 +94,8 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.2 h1:Vie5ybvEvT75RniqhfFxPRy3Bf7vr3h0cechB90XaQs=
github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0=
-github.com/googleapis/gax-go/v2 v2.12.1 h1:9F8GV9r9ztXyAi00gsMQHNoF51xPZm8uj1dpYt2ZETM=
-github.com/googleapis/gax-go/v2 v2.12.1/go.mod h1:61M8vcyyXR2kqKFxKrfA22jaA8JGF7Dc8App1U3H6jc=
+github.com/googleapis/gax-go/v2 v2.12.2 h1:mhN09QQW1jEWeMF74zGR81R30z4VJzjZsfkUhuHF+DA=
+github.com/googleapis/gax-go/v2 v2.12.2/go.mod h1:61M8vcyyXR2kqKFxKrfA22jaA8JGF7Dc8App1U3H6jc=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-version v1.6.0 h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mOkIeek=
@ -160,16 +160,16 @@ github.com/zclconf/go-cty v1.14.1 h1:t9fyA35fwjjUMcmL5hLER+e/rEPqrbCK1/OSE4SI9KA
github.com/zclconf/go-cty v1.14.1/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
-go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.48.0 h1:P+/g8GpuJGYbOp2tAdKrIPUX9JO02q8Q0YNlHolpibA=
-go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.48.0/go.mod h1:tIKj3DbO8N9Y2xo52og3irLsPI4GW02DSMtrVgNMgxg=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.48.0 h1:doUP+ExOpH3spVTLS0FcWGLnQrPct/hD/bCPbDRUEAU=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.48.0/go.mod h1:rdENBZMT2OE6Ne/KLwpiXudnAsbdrdBaqBvTN8M8BgA=
-go.opentelemetry.io/otel v1.23.0 h1:Df0pqjqExIywbMCMTxkAwzjLZtRf+bBKLbUcpxO2C9E=
-go.opentelemetry.io/otel v1.23.0/go.mod h1:YCycw9ZeKhcJFrb34iVSkyT0iczq/zYDtZYFufObyB0=
-go.opentelemetry.io/otel/metric v1.23.0 h1:pazkx7ss4LFVVYSxYew7L5I6qvLXHA0Ap2pwV+9Cnpo=
-go.opentelemetry.io/otel/metric v1.23.0/go.mod h1:MqUW2X2a6Q8RN96E2/nqNoT+z9BSms20Jb7Bbp+HiTo=
-go.opentelemetry.io/otel/trace v1.23.0 h1:37Ik5Ib7xfYVb4V1UtnT97T1jI+AoIYkJyPkuL4iJgI=
-go.opentelemetry.io/otel/trace v1.23.0/go.mod h1:GSGTbIClEsuZrGIzoEHqsVfxgn5UkggkflQwDScNUsk=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 h1:4Pp6oUg3+e/6M4C0A/3kJ2VYa++dsWVTtGgLVj5xtHg=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0/go.mod h1:Mjt1i1INqiaoZOMGR1RIUJN+i3ChKoFRqzrRQhlkbs0=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw=
+go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo=
+go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo=
+go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI=
+go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco=
+go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI=
+go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
@ -243,8 +243,8 @@ golang.org/x/tools v0.18.0 h1:k8NLag8AGHnn+PHbl7g43CtqZAwG60vZkLqgyZgIHgQ=
golang.org/x/tools v0.18.0/go.mod h1:GL7B4CwcLLeo59yx/9UWWuNOW1n3VZ4f5axWfML7Lcg=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-google.golang.org/api v0.166.0 h1:6m4NUwrZYhAaVIHZWxaKjw1L1vNAjtMwORmKRyEEo24=
-google.golang.org/api v0.166.0/go.mod h1:4FcBc686KFi7QI/U51/2GKKevfZMpM17sCdibqe/bSA=
+google.golang.org/api v0.169.0 h1:QwWPy71FgMWqJN/l6jVlFHUa29a7dcUy02I8o799nPY=
+google.golang.org/api v0.169.0/go.mod h1:gpNOiMA2tZ4mf5R9Iwf4rK/Dcz0fbdIgWYWVoxmsyLg=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM=
@ -252,15 +252,15 @@ google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJ
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20240213162025-012b6fc9bca9 h1:hZB7eLIaYlW9qXRfCq/qDaPdbeY3757uARz5Vvfv+cY=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20240213162025-012b6fc9bca9/go.mod h1:YUWgXUFRPfoYK1IHMuxH5K6nPEXSCzIMljnQ59lLRCk=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20240304161311-37d4d3c04a78 h1:Xs9lu+tLXxLIfuci70nG4cpwaRC+mRQPUL7LoIeDJC4=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20240304161311-37d4d3c04a78/go.mod h1:UCOku4NytXMJuLQE5VuqA5lX3PcHCBo8pxNyvkf4xBs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
-google.golang.org/grpc v1.61.1 h1:kLAiWrZs7YeDM6MumDe7m3y4aM6wacLzM1Y/wiLP9XY=
-google.golang.org/grpc v1.61.1/go.mod h1:VUbo7IFqmF1QtCAstipjG0GIoq49KvMe9+h1jFLBNJs=
+google.golang.org/grpc v1.62.0 h1:HQKZ/fa1bXkX1oFOvSjmZEUL8wLSaZTjCcLAlmZRtdk=
+google.golang.org/grpc v1.62.0/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@ -272,8 +272,8 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
-google.golang.org/protobuf v1.32.0 h1:pPC6BG5ex8PDFnkbrGU3EixyhKcQ2aDuBS36lqK/C7I=
-google.golang.org/protobuf v1.32.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
+google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
+google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=