mirror of https://github.com/databricks/cli.git
Bump github.com/databricks/databricks-sdk-go from 0.32.0 to 0.33.0 (#1222)
Bumps [github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go) from 0.32.0 to 0.33.0.

**Release notes** (sourced from [github.com/databricks/databricks-sdk-go's releases](https://github.com/databricks/databricks-sdk-go/releases), v0.33.0)

Internal Changes:

- Add helper function to get header fields ([#822](https://redirect.github.com/databricks/databricks-sdk-go/pull/822)).
- Add Int64 to header type injection ([#819](https://redirect.github.com/databricks/databricks-sdk-go/pull/819)).

API Changes:

- Changed `Update` method for [w.LakehouseMonitors](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#LakehouseMonitorsAPI) workspace-level service with new required argument order.
- Added [w.OnlineTables](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTablesAPI) workspace-level service.
- Removed `AssetsDir` field for [catalog.UpdateMonitor](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateMonitor).
- Added [catalog.ContinuousUpdateStatus](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ContinuousUpdateStatus).
- Added [catalog.DeleteOnlineTableRequest](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#DeleteOnlineTableRequest).
- Added [catalog.FailedStatus](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#FailedStatus).
- Added [catalog.GetOnlineTableRequest](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#GetOnlineTableRequest).
- Added [catalog.OnlineTable](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTable).
- Added [catalog.OnlineTableSpec](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTableSpec).
- Added [catalog.OnlineTableState](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTableState).
- Added [catalog.OnlineTableStatus](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTableStatus).
- Added [catalog.PipelineProgress](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#PipelineProgress).
- Added [catalog.ProvisioningStatus](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ProvisioningStatus).
- Added [catalog.TriggeredUpdateStatus](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#TriggeredUpdateStatus).
- Added [catalog.ViewData](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ViewData).
- Added `ContentLength` field for [files.DownloadResponse](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#DownloadResponse).
- Added `ContentType` field for [files.DownloadResponse](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#DownloadResponse).
- Added `LastModified` field for [files.DownloadResponse](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#DownloadResponse).
- Changed `LastModified` field for [files.GetMetadataResponse](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#GetMetadataResponse) to [files.LastModifiedHttpDate](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#LastModifiedHttpDate).
- Added [files.LastModifiedHttpDate](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#LastModifiedHttpDate).
- Removed `Config` field for [serving.ExternalModel](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ExternalModel).
- Added `Ai21labsConfig` field for [serving.ExternalModel](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ExternalModel).
- Added `AnthropicConfig` field for [serving.ExternalModel](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ExternalModel).
- Added `AwsBedrockConfig` field for [serving.ExternalModel](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ExternalModel).
- Added `CohereConfig` field for [serving.ExternalModel](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ExternalModel).
- Added `DatabricksModelServingConfig` field for [serving.ExternalModel](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ExternalModel).
- Added `OpenaiConfig` field for [serving.ExternalModel](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ExternalModel).
- Added `PalmConfig` field for [serving.ExternalModel](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ExternalModel).
- Removed [serving.ExternalModelConfig](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ExternalModelConfig).
- Added `MaxProvisionedThroughput` field for [serving.ServedEntityInput](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServedEntityInput).
- Added `MinProvisionedThroughput` field for [serving.ServedEntityInput](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServedEntityInput).
- Added `MaxProvisionedThroughput` field for [serving.ServedEntityOutput](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServedEntityOutput).
- Added `MinProvisionedThroughput` field for [serving.ServedEntityOutput](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServedEntityOutput).

**Changelog** (sourced from [github.com/databricks/databricks-sdk-go's changelog](https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md)): the 0.33.0 entry repeats the Internal Changes and API Changes listed above, and adds: OpenAPI SHA: cdd76a98a4fca7008572b3a94427566dd286c63b, Date: 2024-02-19.

**Commits**

- `eba5c8b3ae` Release v0.33.0 ([#823](https://redirect.github.com/databricks/databricks-sdk-go/issues/823))
- `6846045a98` Add Int64 to header type injection ([#819](https://redirect.github.com/databricks/databricks-sdk-go/issues/819))
- `c6a803ae18` Add helper function to get header fields ([#822](https://redirect.github.com/databricks/databricks-sdk-go/issues/822))
- See full diff in the [compare view](https://github.com/databricks/databricks-sdk-go/compare/v0.32.0...v0.33.0)

**Most Recent Ignore Conditions Applied to This Pull Request**

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/databricks/databricks-sdk-go&package-manager=go_modules&previous-version=0.32.0&new-version=0.33.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

**Dependabot commands and options**

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
Commit: d9f34e6b22
Parent: 162b115e19
@@ -1 +1 @@
-c40670f5a2055c92cf0a6aac92a5bccebfb80866
+cdd76a98a4fca7008572b3a94427566dd286c63b
@@ -56,6 +56,7 @@ cmd/workspace/libraries/libraries.go linguist-generated=true
 cmd/workspace/metastores/metastores.go linguist-generated=true
 cmd/workspace/model-registry/model-registry.go linguist-generated=true
 cmd/workspace/model-versions/model-versions.go linguist-generated=true
+cmd/workspace/online-tables/online-tables.go linguist-generated=true
 cmd/workspace/permissions/permissions.go linguist-generated=true
 cmd/workspace/pipelines/pipelines.go linguist-generated=true
 cmd/workspace/policy-families/policy-families.go linguist-generated=true
@@ -322,7 +322,7 @@
     "description": "A unique name for the job cluster. This field is required and must be unique within the job.\n`JobTaskSettings` may refer to this field to determine which cluster to launch for the task execution."
   },
   "new_cluster": {
-    "description": "If new_cluster, a description of a cluster that is created for only for this task.",
+    "description": "If new_cluster, a description of a cluster that is created for each task.",
     "properties": {
       "apply_policy_default_values": {
         "description": ""
@@ -785,7 +785,7 @@
     "description": "Optional schema to write to. This parameter is only used when a warehouse_id is also provided. If not provided, the `default` schema is used."
   },
   "source": {
-    "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
+    "description": "Optional location type of the SQL file. When set to `WORKSPACE`, the SQL file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the SQL file will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: SQL file is located in \u003cDatabricks\u003e workspace.\n* `GIT`: SQL file is located in cloud Git provider.\n"
   },
   "warehouse_id": {
     "description": "ID of the SQL warehouse to connect to. If provided, we automatically generate and provide the profile and connection details to dbt. It can be overridden on a per-command basis by using the `--profiles-dir` command line argument."
@@ -930,7 +930,7 @@
     "description": "An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried."
   },
   "new_cluster": {
-    "description": "If new_cluster, a description of a cluster that is created for only for this task.",
+    "description": "If new_cluster, a description of a cluster that is created for each task.",
     "properties": {
       "apply_policy_default_values": {
         "description": ""
@@ -1269,7 +1269,7 @@
       "description": "The path of the notebook to be run in the Databricks workspace or remote repository.\nFor notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash.\nFor notebooks stored in a remote repository, the path must be relative. This field is required.\n"
     },
     "source": {
-      "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
+      "description": "Optional location type of the SQL file. When set to `WORKSPACE`, the SQL file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the SQL file will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: SQL file is located in \u003cDatabricks\u003e workspace.\n* `GIT`: SQL file is located in cloud Git provider.\n"
     }
   }
 },
@@ -1371,7 +1371,7 @@
       "description": "The Python file to be executed. Cloud file URIs (such as dbfs:/, s3:/, adls:/, gcs:/) and workspace paths are supported. For python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required."
     },
     "source": {
-      "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
+      "description": "Optional location type of the SQL file. When set to `WORKSPACE`, the SQL file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the SQL file will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: SQL file is located in \u003cDatabricks\u003e workspace.\n* `GIT`: SQL file is located in cloud Git provider.\n"
     }
   }
 },
@@ -1449,7 +1449,7 @@
       "description": "Path of the SQL file. Must be relative if the source is a remote Git repository and absolute for workspace paths."
     },
     "source": {
-      "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
+      "description": "Optional location type of the SQL file. When set to `WORKSPACE`, the SQL file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the SQL file will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: SQL file is located in \u003cDatabricks\u003e workspace.\n* `GIT`: SQL file is located in cloud Git provider.\n"
     }
   }
 },
@@ -1672,97 +1672,92 @@
 "external_model": {
   "description": "The external model to be served. NOTE: Only one of external_model and (entity_name, entity_version, workload_size, workload_type, and scale_to_zero_enabled)\ncan be specified with the latter set being used for custom model serving for a Databricks registered model. When an external_model is present, the served\nentities list can only have one served_entity object. For an existing endpoint with external_model, it can not be updated to an endpoint without external_model.\nIf the endpoint is created without external_model, users cannot update it to add external_model later.\n",
   "properties": {
-    "config": {
-      "description": "The config for the external model, which must match the provider.",
-      "properties": {
-        "ai21labs_config": {
-          "description": "AI21Labs Config",
-          "properties": {
-            "ai21labs_api_key": {
-              "description": "The Databricks secret key reference for an AI21Labs API key."
-            }
-          }
-        },
-        "anthropic_config": {
-          "description": "Anthropic Config",
-          "properties": {
-            "anthropic_api_key": {
-              "description": "The Databricks secret key reference for an Anthropic API key."
-            }
-          }
-        },
-        "aws_bedrock_config": {
-          "description": "AWS Bedrock Config",
-          "properties": {
-            "aws_access_key_id": {
-              "description": "The Databricks secret key reference for an AWS Access Key ID with permissions to interact with Bedrock services."
-            },
-            "aws_region": {
-              "description": "The AWS region to use. Bedrock has to be enabled there."
-            },
-            "aws_secret_access_key": {
-              "description": "The Databricks secret key reference for an AWS Secret Access Key paired with the access key ID, with permissions to interact with Bedrock services."
-            },
-            "bedrock_provider": {
-              "description": "The underlying provider in AWS Bedrock. Supported values (case insensitive) include: Anthropic, Cohere, AI21Labs, Amazon."
-            }
-          }
-        },
-        "cohere_config": {
-          "description": "Cohere Config",
-          "properties": {
-            "cohere_api_key": {
-              "description": "The Databricks secret key reference for a Cohere API key."
-            }
-          }
-        },
-        "databricks_model_serving_config": {
-          "description": "Databricks Model Serving Config",
-          "properties": {
-            "databricks_api_token": {
-              "description": "The Databricks secret key reference for a Databricks API token that corresponds to a user or service\nprincipal with Can Query access to the model serving endpoint pointed to by this external model.\n"
-            },
-            "databricks_workspace_url": {
-              "description": "The URL of the Databricks workspace containing the model serving endpoint pointed to by this external model.\n"
-            }
-          }
-        },
-        "openai_config": {
-          "description": "OpenAI Config",
-          "properties": {
-            "openai_api_base": {
-              "description": "This is the base URL for the OpenAI API (default: \"https://api.openai.com/v1\").\nFor Azure OpenAI, this field is required, and is the base URL for the Azure OpenAI API service\nprovided by Azure.\n"
-            },
-            "openai_api_key": {
-              "description": "The Databricks secret key reference for an OpenAI or Azure OpenAI API key."
-            },
-            "openai_api_type": {
-              "description": "This is an optional field to specify the type of OpenAI API to use.\nFor Azure OpenAI, this field is required, and adjust this parameter to represent the preferred security\naccess validation protocol. For access token validation, use azure. For authentication using Azure Active\nDirectory (Azure AD) use, azuread.\n"
-            },
-            "openai_api_version": {
-              "description": "This is an optional field to specify the OpenAI API version.\nFor Azure OpenAI, this field is required, and is the version of the Azure OpenAI service to\nutilize, specified by a date.\n"
-            },
-            "openai_deployment_name": {
-              "description": "This field is only required for Azure OpenAI and is the name of the deployment resource for the\nAzure OpenAI service.\n"
-            },
-            "openai_organization": {
-              "description": "This is an optional field to specify the organization in OpenAI or Azure OpenAI.\n"
-            }
-          }
-        },
-        "palm_config": {
-          "description": "PaLM Config",
-          "properties": {
-            "palm_api_key": {
-              "description": "The Databricks secret key reference for a PaLM API key."
-            }
-          }
-        }
-      }
-    },
+    "ai21labs_config": {
+      "description": "AI21Labs Config. Only required if the provider is 'ai21labs'.",
+      "properties": {
+        "ai21labs_api_key": {
+          "description": "The Databricks secret key reference for an AI21Labs API key."
+        }
+      }
+    },
+    "anthropic_config": {
+      "description": "Anthropic Config. Only required if the provider is 'anthropic'.",
+      "properties": {
+        "anthropic_api_key": {
+          "description": "The Databricks secret key reference for an Anthropic API key."
+        }
+      }
+    },
+    "aws_bedrock_config": {
+      "description": "AWS Bedrock Config. Only required if the provider is 'aws-bedrock'.",
+      "properties": {
+        "aws_access_key_id": {
+          "description": "The Databricks secret key reference for an AWS Access Key ID with permissions to interact with Bedrock services."
+        },
+        "aws_region": {
+          "description": "The AWS region to use. Bedrock has to be enabled there."
+        },
+        "aws_secret_access_key": {
+          "description": "The Databricks secret key reference for an AWS Secret Access Key paired with the access key ID, with permissions to interact with Bedrock services."
+        },
+        "bedrock_provider": {
+          "description": "The underlying provider in AWS Bedrock. Supported values (case insensitive) include: Anthropic, Cohere, AI21Labs, Amazon."
+        }
+      }
+    },
+    "cohere_config": {
+      "description": "Cohere Config. Only required if the provider is 'cohere'.",
+      "properties": {
+        "cohere_api_key": {
+          "description": "The Databricks secret key reference for a Cohere API key."
+        }
+      }
+    },
+    "databricks_model_serving_config": {
+      "description": "Databricks Model Serving Config. Only required if the provider is 'databricks-model-serving'.",
+      "properties": {
+        "databricks_api_token": {
+          "description": "The Databricks secret key reference for a Databricks API token that corresponds to a user or service\nprincipal with Can Query access to the model serving endpoint pointed to by this external model.\n"
+        },
+        "databricks_workspace_url": {
+          "description": "The URL of the Databricks workspace containing the model serving endpoint pointed to by this external model.\n"
+        }
+      }
+    },
     "name": {
       "description": "The name of the external model."
     },
+    "openai_config": {
+      "description": "OpenAI Config. Only required if the provider is 'openai'.",
+      "properties": {
+        "openai_api_base": {
+          "description": "This is the base URL for the OpenAI API (default: \"https://api.openai.com/v1\").\nFor Azure OpenAI, this field is required, and is the base URL for the Azure OpenAI API service\nprovided by Azure.\n"
+        },
+        "openai_api_key": {
+          "description": "The Databricks secret key reference for an OpenAI or Azure OpenAI API key."
+        },
+        "openai_api_type": {
+          "description": "This is an optional field to specify the type of OpenAI API to use.\nFor Azure OpenAI, this field is required, and adjust this parameter to represent the preferred security\naccess validation protocol. For access token validation, use azure. For authentication using Azure Active\nDirectory (Azure AD) use, azuread.\n"
+        },
+        "openai_api_version": {
+          "description": "This is an optional field to specify the OpenAI API version.\nFor Azure OpenAI, this field is required, and is the version of the Azure OpenAI service to\nutilize, specified by a date.\n"
+        },
+        "openai_deployment_name": {
+          "description": "This field is only required for Azure OpenAI and is the name of the deployment resource for the\nAzure OpenAI service.\n"
+        },
+        "openai_organization": {
+          "description": "This is an optional field to specify the organization in OpenAI or Azure OpenAI.\n"
+        }
+      }
+    },
+    "palm_config": {
+      "description": "PaLM Config. Only required if the provider is 'palm'.",
+      "properties": {
+        "palm_api_key": {
+          "description": "The Databricks secret key reference for a PaLM API key."
+        }
+      }
+    },
     "provider": {
       "description": "The name of the provider for the external model. Currently, the supported providers are 'ai21labs', 'anthropic',\n'aws-bedrock', 'cohere', 'databricks-model-serving', 'openai', and 'palm'.\",\n"
     },
@@ -1774,6 +1769,12 @@
 "instance_profile_arn": {
   "description": "ARN of the instance profile that the served entity uses to access AWS resources."
 },
+"max_provisioned_throughput": {
+  "description": "The maximum tokens per second that the endpoint can scale up to."
+},
+"min_provisioned_throughput": {
+  "description": "The minimum tokens per second that the endpoint can scale down to."
+},
 "name": {
   "description": "The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores.\nIf not specified for an external model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other\nentities, it defaults to \u003centity-name\u003e-\u003centity-version\u003e.\n"
 },
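The `max_provisioned_throughput` / `min_provisioned_throughput` keys above correspond to the new `MaxProvisionedThroughput` / `MinProvisionedThroughput` fields on `serving.ServedEntityInput` from the release notes. A hedged Go sketch of setting them when updating an endpoint's config; the endpoint and entity identifiers are placeholders, and the `ServingEndpoints.UpdateConfig` call is the existing SDK method assumed here, not something introduced by this commit:

```go
package main

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/serving"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// MinProvisionedThroughput / MaxProvisionedThroughput are the
	// ServedEntityInput fields added in SDK v0.33.0; per the schema above the
	// values are tokens per second. All names below are placeholders.
	_, err := w.ServingEndpoints.UpdateConfig(ctx, serving.EndpointCoreConfigInput{
		Name: "my-endpoint",
		ServedEntities: []serving.ServedEntityInput{{
			EntityName:               "main.models.my_model",
			EntityVersion:            "1",
			MinProvisionedThroughput: 100,
			MaxProvisionedThroughput: 500,
		}},
	})
	if err != nil {
		panic(err)
	}
}
```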
@@ -2854,7 +2855,7 @@
     "description": "A unique name for the job cluster. This field is required and must be unique within the job.\n`JobTaskSettings` may refer to this field to determine which cluster to launch for the task execution."
   },
   "new_cluster": {
-    "description": "If new_cluster, a description of a cluster that is created for only for this task.",
+    "description": "If new_cluster, a description of a cluster that is created for each task.",
     "properties": {
       "apply_policy_default_values": {
         "description": ""
@@ -3317,7 +3318,7 @@
     "description": "Optional schema to write to. This parameter is only used when a warehouse_id is also provided. If not provided, the `default` schema is used."
   },
   "source": {
-    "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
+    "description": "Optional location type of the SQL file. When set to `WORKSPACE`, the SQL file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the SQL file will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: SQL file is located in \u003cDatabricks\u003e workspace.\n* `GIT`: SQL file is located in cloud Git provider.\n"
   },
   "warehouse_id": {
     "description": "ID of the SQL warehouse to connect to. If provided, we automatically generate and provide the profile and connection details to dbt. It can be overridden on a per-command basis by using the `--profiles-dir` command line argument."
@@ -3462,7 +3463,7 @@
     "description": "An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried."
   },
   "new_cluster": {
-    "description": "If new_cluster, a description of a cluster that is created for only for this task.",
+    "description": "If new_cluster, a description of a cluster that is created for each task.",
     "properties": {
       "apply_policy_default_values": {
         "description": ""
@@ -3801,7 +3802,7 @@
       "description": "The path of the notebook to be run in the Databricks workspace or remote repository.\nFor notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash.\nFor notebooks stored in a remote repository, the path must be relative. This field is required.\n"
     },
     "source": {
-      "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
+      "description": "Optional location type of the SQL file. When set to `WORKSPACE`, the SQL file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the SQL file will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: SQL file is located in \u003cDatabricks\u003e workspace.\n* `GIT`: SQL file is located in cloud Git provider.\n"
     }
   }
 },
@@ -3903,7 +3904,7 @@
       "description": "The Python file to be executed. Cloud file URIs (such as dbfs:/, s3:/, adls:/, gcs:/) and workspace paths are supported. For python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required."
     },
     "source": {
-      "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
+      "description": "Optional location type of the SQL file. When set to `WORKSPACE`, the SQL file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the SQL file will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: SQL file is located in \u003cDatabricks\u003e workspace.\n* `GIT`: SQL file is located in cloud Git provider.\n"
     }
   }
 },
@@ -3981,7 +3982,7 @@
       "description": "Path of the SQL file. Must be relative if the source is a remote Git repository and absolute for workspace paths."
     },
     "source": {
-      "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: Notebook is located in \u003cDatabricks\u003e workspace.\n* `GIT`: Notebook is located in cloud Git provider.\n"
+      "description": "Optional location type of the SQL file. When set to `WORKSPACE`, the SQL file will be retrieved\nfrom the local \u003cDatabricks\u003e workspace. When set to `GIT`, the SQL file will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n\n* `WORKSPACE`: SQL file is located in \u003cDatabricks\u003e workspace.\n* `GIT`: SQL file is located in cloud Git provider.\n"
     }
   }
 },
@@ -4204,97 +4205,92 @@
 "external_model": {
   "description": "The external model to be served. NOTE: Only one of external_model and (entity_name, entity_version, workload_size, workload_type, and scale_to_zero_enabled)\ncan be specified with the latter set being used for custom model serving for a Databricks registered model. When an external_model is present, the served\nentities list can only have one served_entity object. For an existing endpoint with external_model, it can not be updated to an endpoint without external_model.\nIf the endpoint is created without external_model, users cannot update it to add external_model later.\n",
   "properties": {
-    "config": {
-      "description": "The config for the external model, which must match the provider.",
-      "properties": {
-        "ai21labs_config": {
-          "description": "AI21Labs Config",
-          "properties": {
-            "ai21labs_api_key": {
-              "description": "The Databricks secret key reference for an AI21Labs API key."
-            }
-          }
-        },
-        "anthropic_config": {
-          "description": "Anthropic Config",
-          "properties": {
-            "anthropic_api_key": {
-              "description": "The Databricks secret key reference for an Anthropic API key."
-            }
-          }
-        },
-        "aws_bedrock_config": {
-          "description": "AWS Bedrock Config",
-          "properties": {
-            "aws_access_key_id": {
-              "description": "The Databricks secret key reference for an AWS Access Key ID with permissions to interact with Bedrock services."
-            },
-            "aws_region": {
-              "description": "The AWS region to use. Bedrock has to be enabled there."
-            },
-            "aws_secret_access_key": {
-              "description": "The Databricks secret key reference for an AWS Secret Access Key paired with the access key ID, with permissions to interact with Bedrock services."
-            },
-            "bedrock_provider": {
-              "description": "The underlying provider in AWS Bedrock. Supported values (case insensitive) include: Anthropic, Cohere, AI21Labs, Amazon."
-            }
-          }
-        },
-        "cohere_config": {
-          "description": "Cohere Config",
-          "properties": {
-            "cohere_api_key": {
-              "description": "The Databricks secret key reference for a Cohere API key."
-            }
-          }
-        },
-        "databricks_model_serving_config": {
-          "description": "Databricks Model Serving Config",
-          "properties": {
-            "databricks_api_token": {
-              "description": "The Databricks secret key reference for a Databricks API token that corresponds to a user or service\nprincipal with Can Query access to the model serving endpoint pointed to by this external model.\n"
-            },
-            "databricks_workspace_url": {
-              "description": "The URL of the Databricks workspace containing the model serving endpoint pointed to by this external model.\n"
-            }
-          }
-        },
-        "openai_config": {
-          "description": "OpenAI Config",
-          "properties": {
-            "openai_api_base": {
-              "description": "This is the base URL for the OpenAI API (default: \"https://api.openai.com/v1\").\nFor Azure OpenAI, this field is required, and is the base URL for the Azure OpenAI API service\nprovided by Azure.\n"
-            },
-            "openai_api_key": {
-              "description": "The Databricks secret key reference for an OpenAI or Azure OpenAI API key."
-            },
-            "openai_api_type": {
-              "description": "This is an optional field to specify the type of OpenAI API to use.\nFor Azure OpenAI, this field is required, and adjust this parameter to represent the preferred security\naccess validation protocol. For access token validation, use azure. For authentication using Azure Active\nDirectory (Azure AD) use, azuread.\n"
-            },
-            "openai_api_version": {
-              "description": "This is an optional field to specify the OpenAI API version.\nFor Azure OpenAI, this field is required, and is the version of the Azure OpenAI service to\nutilize, specified by a date.\n"
-            },
-            "openai_deployment_name": {
-              "description": "This field is only required for Azure OpenAI and is the name of the deployment resource for the\nAzure OpenAI service.\n"
-            },
-            "openai_organization": {
-              "description": "This is an optional field to specify the organization in OpenAI or Azure OpenAI.\n"
-            }
-          }
-        },
-        "palm_config": {
-          "description": "PaLM Config",
-          "properties": {
-            "palm_api_key": {
-              "description": "The Databricks secret key reference for a PaLM API key."
-            }
-          }
-        }
-      }
-    },
+    "ai21labs_config": {
+      "description": "AI21Labs Config. Only required if the provider is 'ai21labs'.",
+      "properties": {
+        "ai21labs_api_key": {
+          "description": "The Databricks secret key reference for an AI21Labs API key."
+        }
+      }
+    },
+    "anthropic_config": {
+      "description": "Anthropic Config. Only required if the provider is 'anthropic'.",
+      "properties": {
+        "anthropic_api_key": {
+          "description": "The Databricks secret key reference for an Anthropic API key."
+        }
+      }
+    },
+    "aws_bedrock_config": {
+      "description": "AWS Bedrock Config. Only required if the provider is 'aws-bedrock'.",
+      "properties": {
+        "aws_access_key_id": {
+          "description": "The Databricks secret key reference for an AWS Access Key ID with permissions to interact with Bedrock services."
+        },
+        "aws_region": {
+          "description": "The AWS region to use. Bedrock has to be enabled there."
+        },
+        "aws_secret_access_key": {
+          "description": "The Databricks secret key reference for an AWS Secret Access Key paired with the access key ID, with permissions to interact with Bedrock services."
+        },
+        "bedrock_provider": {
+          "description": "The underlying provider in AWS Bedrock. Supported values (case insensitive) include: Anthropic, Cohere, AI21Labs, Amazon."
+        }
+      }
+    },
+    "cohere_config": {
+      "description": "Cohere Config. Only required if the provider is 'cohere'.",
+      "properties": {
+        "cohere_api_key": {
+          "description": "The Databricks secret key reference for a Cohere API key."
+        }
+      }
+    },
+    "databricks_model_serving_config": {
+      "description": "Databricks Model Serving Config. Only required if the provider is 'databricks-model-serving'.",
+      "properties": {
+        "databricks_api_token": {
+          "description": "The Databricks secret key reference for a Databricks API token that corresponds to a user or service\nprincipal with Can Query access to the model serving endpoint pointed to by this external model.\n"
+        },
+        "databricks_workspace_url": {
+          "description": "The URL of the Databricks workspace containing the model serving endpoint pointed to by this external model.\n"
+        }
+      }
+    },
     "name": {
       "description": "The name of the external model."
     },
+    "openai_config": {
+      "description": "OpenAI Config. Only required if the provider is 'openai'.",
+      "properties": {
+        "openai_api_base": {
+          "description": "This is the base URL for the OpenAI API (default: \"https://api.openai.com/v1\").\nFor Azure OpenAI, this field is required, and is the base URL for the Azure OpenAI API service\nprovided by Azure.\n"
+        },
+        "openai_api_key": {
+          "description": "The Databricks secret key reference for an OpenAI or Azure OpenAI API key."
+        },
+        "openai_api_type": {
+          "description": "This is an optional field to specify the type of OpenAI API to use.\nFor Azure OpenAI, this field is required, and adjust this parameter to represent the preferred security\naccess validation protocol. For access token validation, use azure. For authentication using Azure Active\nDirectory (Azure AD) use, azuread.\n"
+        },
+        "openai_api_version": {
+          "description": "This is an optional field to specify the OpenAI API version.\nFor Azure OpenAI, this field is required, and is the version of the Azure OpenAI service to\nutilize, specified by a date.\n"
+        },
+        "openai_deployment_name": {
+          "description": "This field is only required for Azure OpenAI and is the name of the deployment resource for the\nAzure OpenAI service.\n"
+        },
+        "openai_organization": {
+          "description": "This is an optional field to specify the organization in OpenAI or Azure OpenAI.\n"
+        }
+      }
+    },
+    "palm_config": {
+      "description": "PaLM Config. Only required if the provider is 'palm'.",
+      "properties": {
+        "palm_api_key": {
+          "description": "The Databricks secret key reference for a PaLM API key."
+        }
+      }
+    },
     "provider": {
       "description": "The name of the provider for the external model. Currently, the supported providers are 'ai21labs', 'anthropic',\n'aws-bedrock', 'cohere', 'databricks-model-serving', 'openai', and 'palm'.\",\n"
     },
@@ -4306,6 +4302,12 @@
 "instance_profile_arn": {
   "description": "ARN of the instance profile that the served entity uses to access AWS resources."
 },
+"max_provisioned_throughput": {
+  "description": "The maximum tokens per second that the endpoint can scale up to."
+},
+"min_provisioned_throughput": {
+  "description": "The minimum tokens per second that the endpoint can scale down to."
+},
 "name": {
   "description": "The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores.\nIf not specified for an external model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other\nentities, it defaults to \u003centity-name\u003e-\u003centity-version\u003e.\n"
 },
@@ -5063,7 +5065,53 @@
 "variables": {
   "description": "",
   "additionalproperties": {
-    "description": ""
+    "description": "",
+    "properties": {
+      "default": {
+        "description": ""
+      },
+      "description": {
+        "description": ""
+      },
+      "lookup": {
+        "description": "",
+        "properties": {
+          "alert": {
+            "description": ""
+          },
+          "cluster": {
+            "description": ""
+          },
+          "cluster_policy": {
+            "description": ""
+          },
+          "dashboard": {
+            "description": ""
+          },
+          "instance_pool": {
+            "description": ""
+          },
+          "job": {
+            "description": ""
+          },
+          "metastore": {
+            "description": ""
+          },
+          "pipeline": {
+            "description": ""
+          },
+          "query": {
+            "description": ""
+          },
+          "service_principal": {
+            "description": ""
+          },
+          "warehouse": {
+            "description": ""
+          }
+        }
+      }
+    }
   }
 },
 "workspace": {
@@ -33,6 +33,7 @@ import (
 	metastores "github.com/databricks/cli/cmd/workspace/metastores"
 	model_registry "github.com/databricks/cli/cmd/workspace/model-registry"
 	model_versions "github.com/databricks/cli/cmd/workspace/model-versions"
+	online_tables "github.com/databricks/cli/cmd/workspace/online-tables"
 	permissions "github.com/databricks/cli/cmd/workspace/permissions"
 	pipelines "github.com/databricks/cli/cmd/workspace/pipelines"
 	policy_families "github.com/databricks/cli/cmd/workspace/policy-families"
@@ -100,6 +101,7 @@ func All() []*cobra.Command {
 	out = append(out, metastores.New())
 	out = append(out, model_registry.New())
 	out = append(out, model_versions.New())
+	out = append(out, online_tables.New())
 	out = append(out, permissions.New())
 	out = append(out, pipelines.New())
 	out = append(out, policy_families.New())
@@ -631,7 +631,7 @@ func newUpdate() *cobra.Command {
 	// TODO: output-only field
 	// TODO: complex arg: time_series

-	cmd.Use = "update FULL_NAME ASSETS_DIR OUTPUT_SCHEMA_NAME"
+	cmd.Use = "update FULL_NAME OUTPUT_SCHEMA_NAME"
 	cmd.Short = `Update a table monitor.`
 	cmd.Long = `Update a table monitor.

@@ -651,7 +651,6 @@ func newUpdate() *cobra.Command {

   Arguments:
     FULL_NAME: Full name of the table.
-    ASSETS_DIR: The directory to store monitoring assets (e.g. dashboard, metric tables).
     OUTPUT_SCHEMA_NAME: Schema where output metric tables are created.`

 	cmd.Annotations = make(map[string]string)
@@ -660,11 +659,11 @@ func newUpdate() *cobra.Command {
 		if cmd.Flags().Changed("json") {
 			err := cobra.ExactArgs(1)(cmd, args)
 			if err != nil {
-				return fmt.Errorf("when --json flag is specified, provide only FULL_NAME as positional arguments. Provide 'assets_dir', 'output_schema_name' in your JSON input")
+				return fmt.Errorf("when --json flag is specified, provide only FULL_NAME as positional arguments. Provide 'output_schema_name' in your JSON input")
 			}
 			return nil
 		}
-		check := cobra.ExactArgs(3)
+		check := cobra.ExactArgs(2)
 		return check(cmd, args)
 	}

@@ -681,10 +680,7 @@ func newUpdate() *cobra.Command {
 		}
 		updateReq.FullName = args[0]
 		if !cmd.Flags().Changed("json") {
-			updateReq.AssetsDir = args[1]
-		}
-		if !cmd.Flags().Changed("json") {
-			updateReq.OutputSchemaName = args[2]
+			updateReq.OutputSchemaName = args[1]
 		}

 		response, err := w.LakehouseMonitors.Update(ctx, updateReq)
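Taken together, the lakehouse-monitors changes above drop the assets_dir positional argument, so an update now takes only the table's full name and the output schema. For orientation, here is a minimal sketch of the equivalent direct SDK call; it assumes the request type is catalog.UpdateMonitor (as used by the generated command) and the table and schema names are hypothetical placeholders.

// Sketch only: update a table monitor through the Go SDK with the
// two remaining required fields. Names below are hypothetical.
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// assets_dir is no longer part of the update request; only the full
	// table name and the output schema are required.
	info, err := w.LakehouseMonitors.Update(ctx, catalog.UpdateMonitor{
		FullName:         "main.sales.orders", // hypothetical table
		OutputSchemaName: "main.monitoring",   // hypothetical schema
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", info)
}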
@@ -0,0 +1,238 @@
+// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
+
+package online_tables
+
+import (
+	"github.com/databricks/cli/cmd/root"
+	"github.com/databricks/cli/libs/cmdio"
+	"github.com/databricks/cli/libs/flags"
+	"github.com/databricks/databricks-sdk-go/service/catalog"
+	"github.com/spf13/cobra"
+)
+
+// Slice with functions to override default command behavior.
+// Functions can be added from the `init()` function in manually curated files in this directory.
+var cmdOverrides []func(*cobra.Command)
+
+func New() *cobra.Command {
+	cmd := &cobra.Command{
+		Use:   "online-tables",
+		Short: `Online tables provide lower latency and higher QPS access to data from Delta tables.`,
+		Long: `Online tables provide lower latency and higher QPS access to data from Delta
+  tables.`,
+		GroupID: "catalog",
+		Annotations: map[string]string{
+			"package": "catalog",
+		},
+	}
+
+	// Apply optional overrides to this command.
+	for _, fn := range cmdOverrides {
+		fn(cmd)
+	}
+
+	return cmd
+}
+
+// start create command
+
+// Slice with functions to override default command behavior.
+// Functions can be added from the `init()` function in manually curated files in this directory.
+var createOverrides []func(
+	*cobra.Command,
+	*catalog.ViewData,
+)
+
+func newCreate() *cobra.Command {
+	cmd := &cobra.Command{}
+
+	var createReq catalog.ViewData
+	var createJson flags.JsonFlag
+
+	// TODO: short flags
+	cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
+
+	cmd.Flags().StringVar(&createReq.Name, "name", createReq.Name, `Full three-part (catalog, schema, table) name of the table.`)
+	// TODO: complex arg: spec
+
+	cmd.Use = "create"
+	cmd.Short = `Create an Online Table.`
+	cmd.Long = `Create an Online Table.
+
+  Create a new Online Table.`
+
+	cmd.Annotations = make(map[string]string)
+
+	cmd.Args = func(cmd *cobra.Command, args []string) error {
+		check := cobra.ExactArgs(0)
+		return check(cmd, args)
+	}
+
+	cmd.PreRunE = root.MustWorkspaceClient
+	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
+		ctx := cmd.Context()
+		w := root.WorkspaceClient(ctx)
+
+		if cmd.Flags().Changed("json") {
+			err = createJson.Unmarshal(&createReq)
+			if err != nil {
+				return err
+			}
+		}
+
+		response, err := w.OnlineTables.Create(ctx, createReq)
+		if err != nil {
+			return err
+		}
+		return cmdio.Render(ctx, response)
+	}
+
+	// Disable completions since they are not applicable.
+	// Can be overridden by manual implementation in `override.go`.
+	cmd.ValidArgsFunction = cobra.NoFileCompletions
+
+	// Apply optional overrides to this command.
+	for _, fn := range createOverrides {
+		fn(cmd, &createReq)
+	}
+
+	return cmd
+}
+
+func init() {
+	cmdOverrides = append(cmdOverrides, func(cmd *cobra.Command) {
+		cmd.AddCommand(newCreate())
+	})
+}
+
+// start delete command
+
+// Slice with functions to override default command behavior.
+// Functions can be added from the `init()` function in manually curated files in this directory.
+var deleteOverrides []func(
+	*cobra.Command,
+	*catalog.DeleteOnlineTableRequest,
+)
+
+func newDelete() *cobra.Command {
+	cmd := &cobra.Command{}
+
+	var deleteReq catalog.DeleteOnlineTableRequest
+
+	// TODO: short flags
+
+	cmd.Use = "delete NAME"
+	cmd.Short = `Delete an Online Table.`
+	cmd.Long = `Delete an Online Table.
+
+  Delete an online table. Warning: This will delete all the data in the online
+  table. If the source Delta table was deleted or modified since this Online
+  Table was created, this will lose the data forever!
+
+  Arguments:
+    NAME: Full three-part (catalog, schema, table) name of the table.`
+
+	cmd.Annotations = make(map[string]string)
+
+	cmd.Args = func(cmd *cobra.Command, args []string) error {
+		check := cobra.ExactArgs(1)
+		return check(cmd, args)
+	}
+
+	cmd.PreRunE = root.MustWorkspaceClient
+	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
+		ctx := cmd.Context()
+		w := root.WorkspaceClient(ctx)
+
+		deleteReq.Name = args[0]
+
+		err = w.OnlineTables.Delete(ctx, deleteReq)
+		if err != nil {
+			return err
+		}
+		return nil
+	}
+
+	// Disable completions since they are not applicable.
+	// Can be overridden by manual implementation in `override.go`.
+	cmd.ValidArgsFunction = cobra.NoFileCompletions
+
+	// Apply optional overrides to this command.
+	for _, fn := range deleteOverrides {
+		fn(cmd, &deleteReq)
+	}
+
+	return cmd
+}
+
+func init() {
+	cmdOverrides = append(cmdOverrides, func(cmd *cobra.Command) {
+		cmd.AddCommand(newDelete())
+	})
+}
+
+// start get command
+
+// Slice with functions to override default command behavior.
+// Functions can be added from the `init()` function in manually curated files in this directory.
+var getOverrides []func(
+	*cobra.Command,
+	*catalog.GetOnlineTableRequest,
+)
+
+func newGet() *cobra.Command {
+	cmd := &cobra.Command{}
+
+	var getReq catalog.GetOnlineTableRequest
+
+	// TODO: short flags
+
+	cmd.Use = "get NAME"
+	cmd.Short = `Get an Online Table.`
+	cmd.Long = `Get an Online Table.
+
+  Get information about an existing online table and its status.
+
+  Arguments:
+    NAME: Full three-part (catalog, schema, table) name of the table.`
+
+	cmd.Annotations = make(map[string]string)
+
+	cmd.Args = func(cmd *cobra.Command, args []string) error {
+		check := cobra.ExactArgs(1)
+		return check(cmd, args)
+	}
+
+	cmd.PreRunE = root.MustWorkspaceClient
+	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
+		ctx := cmd.Context()
+		w := root.WorkspaceClient(ctx)
+
+		getReq.Name = args[0]
+
+		response, err := w.OnlineTables.Get(ctx, getReq)
+		if err != nil {
+			return err
+		}
+		return cmdio.Render(ctx, response)
+	}
+
+	// Disable completions since they are not applicable.
+	// Can be overridden by manual implementation in `override.go`.
+	cmd.ValidArgsFunction = cobra.NoFileCompletions
+
+	// Apply optional overrides to this command.
+	for _, fn := range getOverrides {
+		fn(cmd, &getReq)
+	}
+
+	return cmd
+}
+
+func init() {
+	cmdOverrides = append(cmdOverrides, func(cmd *cobra.Command) {
+		cmd.AddCommand(newGet())
+	})
+}
+
+// end service OnlineTables
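The generated commands above are thin wrappers over the new OnlineTables service in the SDK. For orientation, here is a minimal sketch of driving the same Get and Delete calls directly through the Go SDK; the three-part table name is a hypothetical placeholder.

// Sketch only: call the new OnlineTables service directly, mirroring the
// generated get/delete commands. The table name is a hypothetical placeholder.
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// Fetch an existing online table and print what the API returns.
	ot, err := w.OnlineTables.Get(ctx, catalog.GetOnlineTableRequest{
		Name: "main.default.orders_online", // hypothetical name
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", ot)

	// Deleting drops the online copy of the data, as the generated delete
	// command warns above.
	err = w.OnlineTables.Delete(ctx, catalog.DeleteOnlineTableRequest{
		Name: "main.default.orders_online",
	})
	if err != nil {
		panic(err)
	}
}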
go.mod
|
@ -4,7 +4,7 @@ go 1.21
|
||||||
|
|
||||||
require (
|
require (
|
||||||
github.com/briandowns/spinner v1.23.0 // Apache 2.0
|
github.com/briandowns/spinner v1.23.0 // Apache 2.0
|
||||||
github.com/databricks/databricks-sdk-go v0.32.0 // Apache 2.0
|
github.com/databricks/databricks-sdk-go v0.33.0 // Apache 2.0
|
||||||
github.com/fatih/color v1.16.0 // MIT
|
github.com/fatih/color v1.16.0 // MIT
|
||||||
github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
|
github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
|
||||||
github.com/google/uuid v1.6.0 // BSD-3-Clause
|
github.com/google/uuid v1.6.0 // BSD-3-Clause
|
||||||
|
|
|
go.sum
@@ -28,8 +28,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
 github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
 github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
 github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
-github.com/databricks/databricks-sdk-go v0.32.0 h1:H6SQmfOOXd6x2fOp+zISkcR1nzJ7NTXXmIv8lWyK66Y=
-github.com/databricks/databricks-sdk-go v0.32.0/go.mod h1:yyXGdhEfXBBsIoTm0mdl8QN0xzCQPUVZTozMM/7wVuI=
+github.com/databricks/databricks-sdk-go v0.33.0 h1:0ldeP8aPnpKLV/mvNKsOVijOaLLo6TxRGdIwrEf2rlQ=
+github.com/databricks/databricks-sdk-go v0.33.0/go.mod h1:yyXGdhEfXBBsIoTm0mdl8QN0xzCQPUVZTozMM/7wVuI=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=