mirror of https://github.com/databricks/cli.git
Bump github.com/databricks/databricks-sdk-go from 0.38.0 to 0.39.0 (#1405)
Bumps [github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go) from 0.38.0 to 0.39.0.

**Release notes** (sourced from [github.com/databricks/databricks-sdk-go's releases](https://github.com/databricks/databricks-sdk-go/releases); the 0.39.0 entry in [CHANGELOG.md](https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md) repeats these notes verbatim):

## v0.39.0

* Ignored flaky integration tests ([#894](https://redirect.github.com/databricks/databricks-sdk-go/pull/894)).
* Added retries for "worker env WorkerEnvId(workerenv-XXXXX) not found" ([#890](https://redirect.github.com/databricks/databricks-sdk-go/pull/890)).
* Updated SDK to OpenAPI spec ([#899](https://redirect.github.com/databricks/databricks-sdk-go/pull/899)).

Note: This release contains breaking changes; see the API changes below for more details.

API changes (each symbol is documented under https://pkg.go.dev/github.com/databricks/databricks-sdk-go):

* Added `IngestionDefinition` field for `pipelines.CreatePipeline`, `pipelines.EditPipeline` and `pipelines.PipelineSpec`.
* Added `Deployment` field for `pipelines.CreatePipeline`, `pipelines.EditPipeline` and `pipelines.PipelineSpec`.
* Added `compute.ClusterStatus`.
* Added `compute.ClusterStatusResponse`.
* Added `compute.LibraryInstallStatus`.
* Added `WarehouseId` field for `jobs.NotebookTask`.
* Added `RunAs` field for `jobs.SubmitRun`.
* Added `pipelines.DeploymentKind`.
* Added `pipelines.IngestionConfig`.
* Added `pipelines.ManagedIngestionPipelineDefinition`.
* Added `pipelines.PipelineDeployment`.
* Added `pipelines.SchemaSpec`.
* Added `pipelines.TableSpec`.
* Added `GetOpenApi` method for `w.ServingEndpoints` workspace-level service.
* Added `serving.GetOpenApiRequest`.
* Added `SchemaId` field for `catalog.SchemaInfo`.
* Added `Operation` field for `catalog.ValidationResult`.
* Added `catalog.ValidationResultOperation`.
* Added `Requirements` field for `compute.Library`.
* Removed `AwsOperation` field for `catalog.ValidationResult`.
* Removed `AzureOperation` field for `catalog.ValidationResult`.
* Removed `GcpOperation` field for `catalog.ValidationResult`.
* Removed `catalog.ValidationResultAwsOperation`.
* Removed `catalog.ValidationResultAzureOperation`.
* Removed `catalog.ValidationResultGcpOperation`.
* Removed `compute.ClusterStatusRequest`.
* Removed `compute.LibraryFullStatusStatus`.
* Changed `ClusterStatus` method for `w.Libraries` workspace-level service. New request type is `compute.ClusterStatus`.
* Changed `ClusterStatus` method for `w.Libraries` workspace-level service to return `compute.ClusterStatusResponse`.
* Changed `Status` field for `compute.LibraryFullStatus` to `compute.LibraryInstallStatus`.

OpenAPI SHA: 21f9f1482f9d0d15228da59f2cd9f0863d2a6d55, Date: 2024-04-23

**Commits**

* `7672dec` Release v0.39.0 ([#901](https://redirect.github.com/databricks/databricks-sdk-go/issues/901))
* `2f56ab8` Update SDK to OpenAPI spec ([#899](https://redirect.github.com/databricks/databricks-sdk-go/issues/899))
* `fa3a5d2` Add retries for "worker env WorkerEnvId(workerenv-XXXXX) not found" ([#890](https://redirect.github.com/databricks/databricks-sdk-go/issues/890))
* `219975c` Ignore flaky integration tests ([#894](https://redirect.github.com/databricks/databricks-sdk-go/issues/894))
* See full diff in [compare view](https://github.com/databricks/databricks-sdk-go/compare/v0.38.0...v0.39.0)

**Most recent ignore conditions applied to this pull request**

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/databricks/databricks-sdk-go&package-manager=go_modules&previous-version=0.38.0&new-version=0.39.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

**Dependabot commands and options** (trigger by commenting on this PR):

* `@dependabot rebase` will rebase this PR
* `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
* `@dependabot merge` will merge this PR after your CI passes on it
* `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
* `@dependabot cancel merge` will cancel a previously requested merge and block automerging
* `@dependabot reopen` will reopen this PR if it is closed
* `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
* `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
* `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
* `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
* `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
parent a292eefc2e
commit 781688c9cb

@@ -1 +1 @@
-94684175b8bd65f8701f89729351f8069e8309c9
+21f9f1482f9d0d15228da59f2cd9f0863d2a6d55
@@ -151,6 +151,7 @@ func new{{.PascalName}}() *cobra.Command {
 		"provider-exchanges delete"
 		"provider-exchanges delete-listing-from-exchange"
 		"provider-exchanges list-exchanges-for-listing"
+		"provider-exchanges list-listings-for-exchange"
 	-}}
 	{{- $fullCommandName := (print $serviceName " " .KebabName) -}}
 	{{- $noPrompt := or .IsCrudCreate (in $excludeFromPrompts $fullCommandName) }}
@@ -46,6 +46,17 @@
       "properties": {
         "fail_on_active_runs": {
           "description": ""
+        },
+        "lock": {
+          "description": "",
+          "properties": {
+            "enabled": {
+              "description": ""
+            },
+            "force": {
+              "description": ""
+            }
+          }
         }
       }
     },
@@ -76,6 +87,9 @@
       "additionalproperties": {
         "description": ""
       }
+    },
+    "use_legacy_run_as": {
+      "description": ""
     }
   }
 },
@@ -242,7 +256,7 @@
   "description": "",
   "properties": {
     "client": {
-      "description": "*\nUser-friendly name for the client version: “client”: “1”\nThe version is a string, consisting of the major client version"
+      "description": "Client version used by the environment\nThe client is the user-facing environment of the runtime.\nEach client comes with a specific set of pre-installed libraries.\nThe version is a string, consisting of the major client version."
     },
     "dependencies": {
       "description": "List of pip dependencies, as supported by the version of pip in this environment.\nEach dependency is a pip requirement file line https://pip.pypa.io/en/stable/reference/requirements-file-format/\nAllowed dependency could be \u003crequirement specifier\u003e, \u003carchive url/path\u003e, \u003clocal project path\u003e(WSFS or Volumes in Databricks), \u003cvcs project url\u003e\nE.g. dependencies: [\"foo==0.0.1\", \"-r /Workspace/test/requirements.txt\"]",
@@ -909,10 +923,10 @@
       }
     },
     "egg": {
-      "description": "URI of the egg to be installed. Currently only DBFS and S3 URIs are supported.\nFor example: `{ \"egg\": \"dbfs:/my/egg\" }` or\n`{ \"egg\": \"s3://my-bucket/egg\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
+      "description": "URI of the egg library to install. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs.\nFor example: `{ \"egg\": \"/Workspace/path/to/library.egg\" }`, `{ \"egg\" : \"/Volumes/path/to/library.egg\" }` or\n`{ \"egg\": \"s3://my-bucket/library.egg\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
     },
     "jar": {
-      "description": "URI of the jar to be installed. Currently only DBFS and S3 URIs are supported.\nFor example: `{ \"jar\": \"dbfs:/mnt/databricks/library.jar\" }` or\n`{ \"jar\": \"s3://my-bucket/library.jar\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
+      "description": "URI of the JAR library to install. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs.\nFor example: `{ \"jar\": \"/Workspace/path/to/library.jar\" }`, `{ \"jar\" : \"/Volumes/path/to/library.jar\" }` or\n`{ \"jar\": \"s3://my-bucket/library.jar\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
     },
     "maven": {
       "description": "Specification of a maven library to be installed. For example:\n`{ \"coordinates\": \"org.jsoup:jsoup:1.7.2\" }`",
@@ -942,8 +956,11 @@
         }
       }
     },
+    "requirements": {
+      "description": "URI of the requirements.txt file to install. Only Workspace paths and Unity Catalog Volumes paths are supported.\nFor example: `{ \"requirements\": \"/Workspace/path/to/requirements.txt\" }` or `{ \"requirements\" : \"/Volumes/path/to/requirements.txt\" }`"
+    },
     "whl": {
-      "description": "URI of the wheel to be installed.\nFor example: `{ \"whl\": \"dbfs:/my/whl\" }` or `{ \"whl\": \"s3://my-bucket/whl\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
+      "description": "URI of the wheel library to install. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs.\nFor example: `{ \"whl\": \"/Workspace/path/to/library.whl\" }`, `{ \"whl\" : \"/Volumes/path/to/library.whl\" }` or\n`{ \"whl\": \"s3://my-bucket/library.whl\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
     }
   }
 }
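The new `requirements` library type above corresponds to the `Requirements` field added to `compute.Library` in SDK 0.39.0. A minimal sketch of installing a requirements.txt file as a cluster library through the Go SDK; the cluster ID and workspace path are illustrative placeholders:

```go
package main

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// Install a requirements.txt file as a cluster library. The Requirements
	// field is new in databricks-sdk-go v0.39.0; the cluster ID and path are
	// illustrative placeholders.
	err := w.Libraries.Install(ctx, compute.InstallLibraries{
		ClusterId: "0123-456789-abcdef00",
		Libraries: []compute.Library{
			{Requirements: "/Workspace/project/requirements.txt"},
		},
	})
	if err != nil {
		panic(err)
	}
}
```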
@@ -1303,6 +1320,9 @@
     },
     "source": {
       "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved from the local Databricks workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n* `WORKSPACE`: Notebook is located in Databricks workspace.\n* `GIT`: Notebook is located in cloud Git provider."
+    },
+    "warehouse_id": {
+      "description": "Optional `warehouse_id` to run the notebook on a SQL warehouse. Classic SQL warehouses are NOT supported, please use serverless or pro SQL warehouses.\n\nNote that SQL warehouses only support SQL cells; if the notebook contains non-SQL cells, the run will fail."
     }
   }
 },
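The `warehouse_id` entry above documents the `WarehouseId` field added to `jobs.NotebookTask`. A minimal sketch of a job whose notebook task runs on a SQL warehouse, assuming a workspace client `w` and a `ctx` are in scope; the job name, task key, notebook path, and warehouse ID are illustrative placeholders:

```go
_, err := w.Jobs.Create(ctx, jobs.CreateJob{
	Name: "sql-notebook-job",
	Tasks: []jobs.Task{{
		TaskKey: "run-sql-notebook",
		NotebookTask: &jobs.NotebookTask{
			NotebookPath: "/Workspace/analytics/report",
			// New in v0.39.0: run the notebook on a serverless or pro SQL
			// warehouse. The notebook may only contain SQL cells.
			WarehouseId: "1234567890abcdef",
		},
	}},
})
if err != nil {
	panic(err)
}
```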
@@ -1526,7 +1546,7 @@
       }
     },
     "file": {
-      "description": "If file, indicates that this job runs a SQL file in a remote Git repository. Only one SQL statement is supported in a file. Multiple SQL statements separated by semicolons (;) are not permitted.",
+      "description": "If file, indicates that this job runs a SQL file in a remote Git repository.",
       "properties": {
         "path": {
           "description": "Path of the SQL file. Must be relative if the source is a remote Git repository and absolute for workspace paths."
@@ -1562,7 +1582,7 @@
       "description": "An optional timeout applied to each run of this job task. A value of `0` means no timeout."
     },
     "webhook_notifications": {
-      "description": "A collection of system notification IDs to notify when runs of this task begin or complete. The default behavior is to not send any system notifications.",
+      "description": "A collection of system notification IDs to notify when runs of this job begin or complete.",
       "properties": {
         "on_duration_warning_threshold_exceeded": {
           "description": "An optional list of system notification IDs to call when the duration of a run exceeds the threshold specified for the `RUN_DURATION_SECONDS` metric in the `health` field. A maximum of 3 destinations can be specified for the `on_duration_warning_threshold_exceeded` property.",
@@ -1679,7 +1699,7 @@
       }
     },
     "webhook_notifications": {
-      "description": "A collection of system notification IDs to notify when runs of this task begin or complete. The default behavior is to not send any system notifications.",
+      "description": "A collection of system notification IDs to notify when runs of this job begin or complete.",
       "properties": {
         "on_duration_warning_threshold_exceeded": {
           "description": "An optional list of system notification IDs to call when the duration of a run exceeds the threshold specified for the `RUN_DURATION_SECONDS` metric in the `health` field. A maximum of 3 destinations can be specified for the `on_duration_warning_threshold_exceeded` property.",
@@ -2415,6 +2435,17 @@
     "continuous": {
       "description": "Whether the pipeline is continuous or triggered. This replaces `trigger`."
     },
+    "deployment": {
+      "description": "Deployment type of this pipeline.",
+      "properties": {
+        "kind": {
+          "description": "The deployment method that manages the pipeline."
+        },
+        "metadata_file_path": {
+          "description": "The path to the file containing metadata about the deployment."
+        }
+      }
+    },
     "development": {
       "description": "Whether the pipeline is in Development mode. Defaults to false."
     },
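The `deployment` block above maps to the `Deployment` field (`pipelines.PipelineDeployment`) added to `pipelines.CreatePipeline`, `pipelines.EditPipeline` and `pipelines.PipelineSpec`. A minimal sketch, assuming `w` and `ctx` are in scope; the `DeploymentKindBundle` value, pipeline name, and metadata path are assumptions for illustration:

```go
_, err := w.Pipelines.Create(ctx, pipelines.CreatePipeline{
	Name: "bundle-managed-pipeline",
	Deployment: &pipelines.PipelineDeployment{
		// Marks the pipeline as managed by a bundle deployment
		// (DeploymentKindBundle is assumed here for illustration).
		Kind:             pipelines.DeploymentKindBundle,
		MetadataFilePath: "/Workspace/bundle/state/metadata.json",
	},
})
if err != nil {
	panic(err)
}
```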
@@ -2441,6 +2472,65 @@
     "id": {
       "description": "Unique identifier for this pipeline."
     },
+    "ingestion_definition": {
+      "description": "The configuration for a managed ingestion pipeline. These settings cannot be used with the 'libraries', 'target' or 'catalog' settings.",
+      "properties": {
+        "connection_name": {
+          "description": "Immutable. The Unity Catalog connection this ingestion pipeline uses to communicate with the source. Specify either ingestion_gateway_id or connection_name."
+        },
+        "ingestion_gateway_id": {
+          "description": "Immutable. Identifier for the ingestion gateway used by this ingestion pipeline to communicate with the source. Specify either ingestion_gateway_id or connection_name."
+        },
+        "objects": {
+          "description": "Required. Settings specifying tables to replicate and the destination for the replicated tables.",
+          "items": {
+            "description": "",
+            "properties": {
+              "schema": {
+                "description": "Select tables from a specific source schema.",
+                "properties": {
+                  "destination_catalog": {
+                    "description": "Required. Destination catalog to store tables."
+                  },
+                  "destination_schema": {
+                    "description": "Required. Destination schema to store tables in. Tables with the same name as the source tables are created in this destination schema. The pipeline fails If a table with the same name already exists."
+                  },
+                  "source_catalog": {
+                    "description": "The source catalog name. Might be optional depending on the type of source."
+                  },
+                  "source_schema": {
+                    "description": "Required. Schema name in the source database."
+                  }
+                }
+              },
+              "table": {
+                "description": "Select tables from a specific source table.",
+                "properties": {
+                  "destination_catalog": {
+                    "description": "Required. Destination catalog to store table."
+                  },
+                  "destination_schema": {
+                    "description": "Required. Destination schema to store table."
+                  },
+                  "destination_table": {
+                    "description": "Optional. Destination table name. The pipeline fails If a table with that name already exists. If not set, the source table name is used."
+                  },
+                  "source_catalog": {
+                    "description": "Source catalog name. Might be optional depending on the type of source."
+                  },
+                  "source_schema": {
+                    "description": "Schema name in the source database. Might be optional depending on the type of source."
+                  },
+                  "source_table": {
+                    "description": "Required. Table name in the source database."
+                  }
+                }
+              }
+            }
+          }
+        }
+      }
+    },
     "libraries": {
       "description": "Libraries or code needed by this deployment.",
       "items": {
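The `ingestion_definition` block above corresponds to the new `pipelines.ManagedIngestionPipelineDefinition`, `pipelines.IngestionConfig`, `pipelines.SchemaSpec` and `pipelines.TableSpec` types. A minimal sketch, assuming `w` and `ctx` are in scope; all connection, catalog, schema, and table names are illustrative placeholders:

```go
_, err := w.Pipelines.Create(ctx, pipelines.CreatePipeline{
	Name: "managed-ingestion",
	// Cannot be combined with the libraries, target, or catalog settings.
	IngestionDefinition: &pipelines.ManagedIngestionPipelineDefinition{
		ConnectionName: "postgres_connection",
		Objects: []pipelines.IngestionConfig{
			{
				// Replicate every table in one source schema.
				Schema: &pipelines.SchemaSpec{
					SourceCatalog:      "pg",
					SourceSchema:       "sales",
					DestinationCatalog: "main",
					DestinationSchema:  "sales",
				},
			},
			{
				// Replicate a single table.
				Table: &pipelines.TableSpec{
					SourceSchema:       "hr",
					SourceTable:        "employees",
					DestinationCatalog: "main",
					DestinationSchema:  "hr",
				},
			},
		},
	},
})
if err != nil {
	panic(err)
}
```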
@@ -2682,6 +2772,17 @@
       "properties": {
         "fail_on_active_runs": {
          "description": ""
+        },
+        "lock": {
+          "description": "",
+          "properties": {
+            "enabled": {
+              "description": ""
+            },
+            "force": {
+              "description": ""
+            }
+          }
         }
       }
     },
@@ -2878,7 +2979,7 @@
   "description": "",
   "properties": {
     "client": {
-      "description": "*\nUser-friendly name for the client version: “client”: “1”\nThe version is a string, consisting of the major client version"
+      "description": "Client version used by the environment\nThe client is the user-facing environment of the runtime.\nEach client comes with a specific set of pre-installed libraries.\nThe version is a string, consisting of the major client version."
     },
     "dependencies": {
       "description": "List of pip dependencies, as supported by the version of pip in this environment.\nEach dependency is a pip requirement file line https://pip.pypa.io/en/stable/reference/requirements-file-format/\nAllowed dependency could be \u003crequirement specifier\u003e, \u003carchive url/path\u003e, \u003clocal project path\u003e(WSFS or Volumes in Databricks), \u003cvcs project url\u003e\nE.g. dependencies: [\"foo==0.0.1\", \"-r /Workspace/test/requirements.txt\"]",
@@ -3545,10 +3646,10 @@
       }
     },
     "egg": {
-      "description": "URI of the egg to be installed. Currently only DBFS and S3 URIs are supported.\nFor example: `{ \"egg\": \"dbfs:/my/egg\" }` or\n`{ \"egg\": \"s3://my-bucket/egg\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
+      "description": "URI of the egg library to install. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs.\nFor example: `{ \"egg\": \"/Workspace/path/to/library.egg\" }`, `{ \"egg\" : \"/Volumes/path/to/library.egg\" }` or\n`{ \"egg\": \"s3://my-bucket/library.egg\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
     },
     "jar": {
-      "description": "URI of the jar to be installed. Currently only DBFS and S3 URIs are supported.\nFor example: `{ \"jar\": \"dbfs:/mnt/databricks/library.jar\" }` or\n`{ \"jar\": \"s3://my-bucket/library.jar\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
+      "description": "URI of the JAR library to install. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs.\nFor example: `{ \"jar\": \"/Workspace/path/to/library.jar\" }`, `{ \"jar\" : \"/Volumes/path/to/library.jar\" }` or\n`{ \"jar\": \"s3://my-bucket/library.jar\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
     },
     "maven": {
       "description": "Specification of a maven library to be installed. For example:\n`{ \"coordinates\": \"org.jsoup:jsoup:1.7.2\" }`",
@@ -3578,8 +3679,11 @@
         }
       }
     },
+    "requirements": {
+      "description": "URI of the requirements.txt file to install. Only Workspace paths and Unity Catalog Volumes paths are supported.\nFor example: `{ \"requirements\": \"/Workspace/path/to/requirements.txt\" }` or `{ \"requirements\" : \"/Volumes/path/to/requirements.txt\" }`"
+    },
     "whl": {
-      "description": "URI of the wheel to be installed.\nFor example: `{ \"whl\": \"dbfs:/my/whl\" }` or `{ \"whl\": \"s3://my-bucket/whl\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
+      "description": "URI of the wheel library to install. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs.\nFor example: `{ \"whl\": \"/Workspace/path/to/library.whl\" }`, `{ \"whl\" : \"/Volumes/path/to/library.whl\" }` or\n`{ \"whl\": \"s3://my-bucket/library.whl\" }`.\nIf S3 is used, please make sure the cluster has read access on the library. You may need to\nlaunch the cluster with an IAM role to access the S3 URI."
     }
   }
 }
@@ -3939,6 +4043,9 @@
     },
     "source": {
       "description": "Optional location type of the notebook. When set to `WORKSPACE`, the notebook will be retrieved from the local Databricks workspace. When set to `GIT`, the notebook will be retrieved from a Git repository\ndefined in `git_source`. If the value is empty, the task will use `GIT` if `git_source` is defined and `WORKSPACE` otherwise.\n* `WORKSPACE`: Notebook is located in Databricks workspace.\n* `GIT`: Notebook is located in cloud Git provider."
+    },
+    "warehouse_id": {
+      "description": "Optional `warehouse_id` to run the notebook on a SQL warehouse. Classic SQL warehouses are NOT supported, please use serverless or pro SQL warehouses.\n\nNote that SQL warehouses only support SQL cells; if the notebook contains non-SQL cells, the run will fail."
     }
   }
 },
@@ -4162,7 +4269,7 @@
       }
     },
     "file": {
-      "description": "If file, indicates that this job runs a SQL file in a remote Git repository. Only one SQL statement is supported in a file. Multiple SQL statements separated by semicolons (;) are not permitted.",
+      "description": "If file, indicates that this job runs a SQL file in a remote Git repository.",
       "properties": {
         "path": {
           "description": "Path of the SQL file. Must be relative if the source is a remote Git repository and absolute for workspace paths."
@@ -4198,7 +4305,7 @@
       "description": "An optional timeout applied to each run of this job task. A value of `0` means no timeout."
     },
     "webhook_notifications": {
-      "description": "A collection of system notification IDs to notify when runs of this task begin or complete. The default behavior is to not send any system notifications.",
+      "description": "A collection of system notification IDs to notify when runs of this job begin or complete.",
      "properties": {
         "on_duration_warning_threshold_exceeded": {
           "description": "An optional list of system notification IDs to call when the duration of a run exceeds the threshold specified for the `RUN_DURATION_SECONDS` metric in the `health` field. A maximum of 3 destinations can be specified for the `on_duration_warning_threshold_exceeded` property.",
@@ -4315,7 +4422,7 @@
       }
     },
     "webhook_notifications": {
-      "description": "A collection of system notification IDs to notify when runs of this task begin or complete. The default behavior is to not send any system notifications.",
+      "description": "A collection of system notification IDs to notify when runs of this job begin or complete.",
       "properties": {
         "on_duration_warning_threshold_exceeded": {
           "description": "An optional list of system notification IDs to call when the duration of a run exceeds the threshold specified for the `RUN_DURATION_SECONDS` metric in the `health` field. A maximum of 3 destinations can be specified for the `on_duration_warning_threshold_exceeded` property.",
@@ -5051,6 +5158,17 @@
     "continuous": {
       "description": "Whether the pipeline is continuous or triggered. This replaces `trigger`."
     },
+    "deployment": {
+      "description": "Deployment type of this pipeline.",
+      "properties": {
+        "kind": {
+          "description": "The deployment method that manages the pipeline."
+        },
+        "metadata_file_path": {
+          "description": "The path to the file containing metadata about the deployment."
+        }
+      }
+    },
     "development": {
       "description": "Whether the pipeline is in Development mode. Defaults to false."
     },
@@ -5077,6 +5195,65 @@
     "id": {
       "description": "Unique identifier for this pipeline."
     },
+    "ingestion_definition": {
+      "description": "The configuration for a managed ingestion pipeline. These settings cannot be used with the 'libraries', 'target' or 'catalog' settings.",
+      "properties": {
+        "connection_name": {
+          "description": "Immutable. The Unity Catalog connection this ingestion pipeline uses to communicate with the source. Specify either ingestion_gateway_id or connection_name."
+        },
+        "ingestion_gateway_id": {
+          "description": "Immutable. Identifier for the ingestion gateway used by this ingestion pipeline to communicate with the source. Specify either ingestion_gateway_id or connection_name."
+        },
+        "objects": {
+          "description": "Required. Settings specifying tables to replicate and the destination for the replicated tables.",
+          "items": {
+            "description": "",
+            "properties": {
+              "schema": {
+                "description": "Select tables from a specific source schema.",
+                "properties": {
+                  "destination_catalog": {
+                    "description": "Required. Destination catalog to store tables."
+                  },
+                  "destination_schema": {
+                    "description": "Required. Destination schema to store tables in. Tables with the same name as the source tables are created in this destination schema. The pipeline fails If a table with the same name already exists."
+                  },
+                  "source_catalog": {
+                    "description": "The source catalog name. Might be optional depending on the type of source."
+                  },
+                  "source_schema": {
+                    "description": "Required. Schema name in the source database."
+                  }
+                }
+              },
+              "table": {
+                "description": "Select tables from a specific source table.",
+                "properties": {
+                  "destination_catalog": {
+                    "description": "Required. Destination catalog to store table."
+                  },
+                  "destination_schema": {
+                    "description": "Required. Destination schema to store table."
+                  },
+                  "destination_table": {
+                    "description": "Optional. Destination table name. The pipeline fails If a table with that name already exists. If not set, the source table name is used."
+                  },
+                  "source_catalog": {
+                    "description": "Source catalog name. Might be optional depending on the type of source."
+                  },
+                  "source_schema": {
+                    "description": "Schema name in the source database. Might be optional depending on the type of source."
+                  },
+                  "source_table": {
+                    "description": "Required. Table name in the source database."
+                  }
+                }
+              }
+            }
+          }
+        }
+      }
+    },
     "libraries": {
       "description": "Libraries or code needed by this deployment.",
       "items": {
@@ -25,6 +25,9 @@ func New() *cobra.Command {
   setting is disabled for new workspaces. After workspace creation, account
   admins can enable enhanced security monitoring individually for each
   workspace.`,
+
+		// This service is being previewed; hide from help output.
+		Hidden: true,
 	}
 
 	// Add methods
@@ -22,6 +22,9 @@ func New() *cobra.Command {
 		Short: `Controls whether automatic cluster update is enabled for the current workspace.`,
 		Long: `Controls whether automatic cluster update is enabled for the current
   workspace. By default, it is turned off.`,
+
+		// This service is being previewed; hide from help output.
+		Hidden: true,
 	}
 
 	// Add methods
@@ -25,6 +25,9 @@ func New() *cobra.Command {
   off.
 
   This settings can NOT be disabled once it is enabled.`,
+
+		// This service is being previewed; hide from help output.
+		Hidden: true,
 	}
 
 	// Add methods
@@ -27,6 +27,9 @@ func New() *cobra.Command {
 
   If the compliance security profile is disabled, you can enable or disable this
   setting and it is not permanent.`,
+
+		// This service is being previewed; hide from help output.
+		Hidden: true,
 	}
 
 	// Add methods
@@ -1513,6 +1513,7 @@ func newSubmit() *cobra.Command {
 	// TODO: complex arg: pipeline_task
 	// TODO: complex arg: python_wheel_task
 	// TODO: complex arg: queue
+	// TODO: complex arg: run_as
 	// TODO: complex arg: run_job_task
 	cmd.Flags().StringVar(&submitReq.RunName, "run-name", submitReq.RunName, `An optional name for the run.`)
 	// TODO: complex arg: spark_jar_task
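The new `run_as` TODO above tracks the `RunAs` field added to `jobs.SubmitRun` in SDK 0.39.0. A minimal sketch of a one-time run submitted under a different identity, assuming `w` and `ctx` are in scope; the run name and service principal ID are illustrative placeholders:

```go
_, err := w.Jobs.Submit(ctx, jobs.SubmitRun{
	RunName: "one-time-run",
	// New in v0.39.0: submit the run as another principal. Either
	// ServicePrincipalName or UserName can be set.
	RunAs: &jobs.JobRunAs{
		ServicePrincipalName: "9f0621ee-b52b-11ec-b909-0242ac120002",
	},
})
if err != nil {
	panic(err)
}
```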
@@ -25,18 +25,14 @@ func New() *cobra.Command {
 
   To make third-party or custom code available to notebooks and jobs running on
   your clusters, you can install a library. Libraries can be written in Python,
-  Java, Scala, and R. You can upload Java, Scala, and Python libraries and point
-  to external packages in PyPI, Maven, and CRAN repositories.
+  Java, Scala, and R. You can upload Python, Java, Scala and R libraries and
+  point to external packages in PyPI, Maven, and CRAN repositories.
 
   Cluster libraries can be used by all notebooks running on a cluster. You can
   install a cluster library directly from a public repository such as PyPI or
   Maven, using a previously installed workspace library, or using an init
   script.
 
-  When you install a library on a cluster, a notebook already attached to that
-  cluster will not immediately see the new library. You must first detach and
-  then reattach the notebook to the cluster.
-
   When you uninstall a library from a cluster, the library is removed only when
   you restart the cluster. Until you restart the cluster, the status of the
   uninstalled library appears as Uninstall pending restart.`,
@@ -75,9 +71,8 @@ func newAllClusterStatuses() *cobra.Command {
 	cmd.Short = `Get all statuses.`
 	cmd.Long = `Get all statuses.
 
-  Get the status of all libraries on all clusters. A status will be available
-  for all libraries installed on this cluster via the API or the libraries UI as
-  well as libraries set to be installed on all clusters via the libraries UI.`
+  Get the status of all libraries on all clusters. A status is returned for all
+  libraries installed on this cluster via the API or the libraries UI.`
 
 	cmd.Annotations = make(map[string]string)
 
@@ -110,13 +105,13 @@
 // Functions can be added from the `init()` function in manually curated files in this directory.
 var clusterStatusOverrides []func(
 	*cobra.Command,
-	*compute.ClusterStatusRequest,
+	*compute.ClusterStatus,
 )
 
 func newClusterStatus() *cobra.Command {
 	cmd := &cobra.Command{}
 
-	var clusterStatusReq compute.ClusterStatusRequest
+	var clusterStatusReq compute.ClusterStatus
 
 	// TODO: short flags
 
@@ -124,21 +119,13 @@ func newClusterStatus() *cobra.Command {
 	cmd.Short = `Get status.`
 	cmd.Long = `Get status.
 
-  Get the status of libraries on a cluster. A status will be available for all
-  libraries installed on this cluster via the API or the libraries UI as well as
-  libraries set to be installed on all clusters via the libraries UI. The order
-  of returned libraries will be as follows.
-
-  1. Libraries set to be installed on this cluster will be returned first.
-  Within this group, the final order will be order in which the libraries were
-  added to the cluster.
-
-  2. Libraries set to be installed on all clusters are returned next. Within
-  this group there is no order guarantee.
-
-  3. Libraries that were previously requested on this cluster or on all
-  clusters, but now marked for removal. Within this group there is no order
-  guarantee.
+  Get the status of libraries on a cluster. A status is returned for all
+  libraries installed on this cluster via the API or the libraries UI. The order
+  of returned libraries is as follows: 1. Libraries set to be installed on this
+  cluster, in the order that the libraries were added to the cluster, are
+  returned first. 2. Libraries that were previously requested to be installed on
+  this cluster or, but are now marked for removal, in no particular order, are
+  returned last.
 
   Arguments:
     CLUSTER_ID: Unique identifier of the cluster whose status should be retrieved.`
@@ -195,12 +182,8 @@ func newInstall() *cobra.Command {
 	cmd.Short = `Add a library.`
 	cmd.Long = `Add a library.
 
-  Add libraries to be installed on a cluster. The installation is asynchronous;
-  it happens in the background after the completion of this request.
-
-  **Note**: The actual set of libraries to be installed on a cluster is the
-  union of the libraries specified via this method and the libraries set to be
-  installed on all clusters via the libraries UI.`
+  Add libraries to install on a cluster. The installation is asynchronous; it
+  happens in the background after the completion of this request.`
 
 	cmd.Annotations = make(map[string]string)
 
@@ -259,9 +242,9 @@ func newUninstall() *cobra.Command {
 	cmd.Short = `Uninstall libraries.`
 	cmd.Long = `Uninstall libraries.
 
-  Set libraries to be uninstalled on a cluster. The libraries won't be
-  uninstalled until the cluster is restarted. Uninstalling libraries that are
-  not installed on the cluster will have no impact but is not an error.`
+  Set libraries to uninstall from a cluster. The libraries won't be uninstalled
+  until the cluster is restarted. A request to uninstall a library that is not
+  currently installed is ignored.`
 
 	cmd.Annotations = make(map[string]string)
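The mirror-image call, again only a sketch with placeholder values; per the reworded help text, naming a library that is not currently installed is simply ignored:

package main

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// Takes effect only after the cluster restarts; uninstalling a library
	// that is not installed is ignored rather than treated as an error.
	err := w.Libraries.Uninstall(ctx, compute.UninstallLibraries{
		ClusterId: "0123-456789-abcdef", // placeholder
		Libraries: []compute.Library{
			{Pypi: &compute.PythonPyPiLibrary{Package: "simplejson"}},
		},
	})
	if err != nil {
		panic(err)
	}
}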
@@ -940,11 +940,13 @@ func newUpdate() *cobra.Command {
 	// TODO: array: clusters
 	// TODO: map via StringToStringVar: configuration
 	cmd.Flags().BoolVar(&updateReq.Continuous, "continuous", updateReq.Continuous, `Whether the pipeline is continuous or triggered.`)
+	// TODO: complex arg: deployment
 	cmd.Flags().BoolVar(&updateReq.Development, "development", updateReq.Development, `Whether the pipeline is in Development mode.`)
 	cmd.Flags().StringVar(&updateReq.Edition, "edition", updateReq.Edition, `Pipeline product edition.`)
 	cmd.Flags().Int64Var(&updateReq.ExpectedLastModified, "expected-last-modified", updateReq.ExpectedLastModified, `If present, the last-modified time of the pipeline settings before the edit.`)
 	// TODO: complex arg: filters
 	cmd.Flags().StringVar(&updateReq.Id, "id", updateReq.Id, `Unique identifier for this pipeline.`)
+	// TODO: complex arg: ingestion_definition
 	// TODO: array: libraries
 	cmd.Flags().StringVar(&updateReq.Name, "name", updateReq.Name, `Friendly identifier for this pipeline.`)
 	// TODO: array: notifications
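The two new TODO markers correspond to the Deployment and IngestionDefinition fields the release notes add to pipelines.EditPipeline. A sketch of where the Deployment field sits on the request; the Kind field and the DeploymentKindBundle constant are assumptions inferred from the pipelines.DeploymentKind type listed in the release notes, not taken from this diff:

package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/pipelines"
)

func main() {
	req := pipelines.EditPipeline{
		Name: "my-pipeline", // placeholder
		// New in v0.39.0; not yet settable via generated CLI flags,
		// hence the TODO markers in the hunk above.
		Deployment: &pipelines.PipelineDeployment{
			Kind: pipelines.DeploymentKindBundle, // assumed constant name
		},
	}
	fmt.Printf("%+v\n", req)
}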
@@ -508,28 +508,16 @@ func newListListingsForExchange() *cobra.Command {
 
 	cmd.Annotations = make(map[string]string)
 
+	cmd.Args = func(cmd *cobra.Command, args []string) error {
+		check := root.ExactArgs(1)
+		return check(cmd, args)
+	}
+
 	cmd.PreRunE = root.MustWorkspaceClient
 	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
 		ctx := cmd.Context()
 		w := root.WorkspaceClient(ctx)
 
-		if len(args) == 0 {
-			promptSpinner := cmdio.Spinner(ctx)
-			promptSpinner <- "No EXCHANGE_ID argument specified. Loading names for Provider Exchanges drop-down."
-			names, err := w.ProviderExchanges.ExchangeListingExchangeNameToExchangeIdMap(ctx, marketplace.ListExchangesForListingRequest{})
-			close(promptSpinner)
-			if err != nil {
-				return fmt.Errorf("failed to load names for Provider Exchanges drop-down. Please manually specify required arguments. Original error: %w", err)
-			}
-			id, err := cmdio.Select(ctx, names, "")
-			if err != nil {
-				return err
-			}
-			args = append(args, id)
-		}
-		if len(args) != 1 {
-			return fmt.Errorf("expected to have ")
-		}
 		listListingsForExchangeReq.ExchangeId = args[0]
 
 		response := w.ProviderExchanges.ListListingsForExchange(ctx, listListingsForExchangeReq)
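The net effect is that the command now validates its positional argument up front instead of falling back to an interactive picker. A standalone cobra sketch of the same fail-fast pattern; cobra.ExactArgs stands in for the CLI's internal root.ExactArgs helper:

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use: "list-listings-for-exchange EXCHANGE_ID",
		// A missing EXCHANGE_ID is now a hard error rather than a prompt.
		Args: cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("exchange id:", args[0])
			return nil
		},
	}
	cmd.SetArgs([]string{}) // no args: Execute returns a usage error
	if err := cmd.Execute(); err != nil {
		fmt.Println("error:", err)
	}
}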
@@ -46,6 +46,7 @@ func New() *cobra.Command {
 	cmd.AddCommand(newDelete())
 	cmd.AddCommand(newExportMetrics())
 	cmd.AddCommand(newGet())
+	cmd.AddCommand(newGetOpenApi())
 	cmd.AddCommand(newGetPermissionLevels())
 	cmd.AddCommand(newGetPermissions())
 	cmd.AddCommand(newList())
@@ -379,6 +380,67 @@ func newGet() *cobra.Command {
 	return cmd
 }
 
+// start get-open-api command
+
+// Slice with functions to override default command behavior.
+// Functions can be added from the `init()` function in manually curated files in this directory.
+var getOpenApiOverrides []func(
+	*cobra.Command,
+	*serving.GetOpenApiRequest,
+)
+
+func newGetOpenApi() *cobra.Command {
+	cmd := &cobra.Command{}
+
+	var getOpenApiReq serving.GetOpenApiRequest
+
+	// TODO: short flags
+
+	cmd.Use = "get-open-api NAME"
+	cmd.Short = `Get the schema for a serving endpoint.`
+	cmd.Long = `Get the schema for a serving endpoint.
+
+  Get the query schema of the serving endpoint in OpenAPI format. The schema
+  contains information for the supported paths, input and output format and
+  datatypes.
+
+  Arguments:
+    NAME: The name of the serving endpoint that the served model belongs to. This
+      field is required.`
+
+	cmd.Annotations = make(map[string]string)
+
+	cmd.Args = func(cmd *cobra.Command, args []string) error {
+		check := root.ExactArgs(1)
+		return check(cmd, args)
+	}
+
+	cmd.PreRunE = root.MustWorkspaceClient
+	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
+		ctx := cmd.Context()
+		w := root.WorkspaceClient(ctx)
+
+		getOpenApiReq.Name = args[0]
+
+		err = w.ServingEndpoints.GetOpenApi(ctx, getOpenApiReq)
+		if err != nil {
+			return err
+		}
+		return nil
+	}
+
+	// Disable completions since they are not applicable.
+	// Can be overridden by manual implementation in `override.go`.
+	cmd.ValidArgsFunction = cobra.NoFileCompletions
+
+	// Apply optional overrides to this command.
+	for _, fn := range getOpenApiOverrides {
+		fn(cmd, &getOpenApiReq)
+	}
+
+	return cmd
+}
+
 // start get-permission-levels command
 
 // Slice with functions to override default command behavior.
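The generated command wraps the new SDK method directly. A sketch of the equivalent direct SDK call; the endpoint name is a placeholder, and note that in this SDK version GetOpenApi returns only an error, exactly as the generated RunE above does:

package main

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/serving"
)

func main() {
	ctx := context.Background()
	// Assumes workspace auth is configured in the environment.
	w := databricks.Must(databricks.NewWorkspaceClient())

	req := serving.GetOpenApiRequest{Name: "my-endpoint"} // placeholder name
	if err := w.ServingEndpoints.GetOpenApi(ctx, req); err != nil {
		panic(err)
	}
}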
go.mod
@@ -5,7 +5,7 @@ go 1.21
 require (
 	github.com/Masterminds/semver/v3 v3.2.1 // MIT
 	github.com/briandowns/spinner v1.23.0 // Apache 2.0
-	github.com/databricks/databricks-sdk-go v0.38.0 // Apache 2.0
+	github.com/databricks/databricks-sdk-go v0.39.0 // Apache 2.0
 	github.com/fatih/color v1.16.0 // MIT
 	github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
 	github.com/google/uuid v1.6.0 // BSD-3-Clause

go.sum
@@ -30,8 +30,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
 github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
 github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
 github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
-github.com/databricks/databricks-sdk-go v0.38.0 h1:MQhOCWTkdKItG+n6ZwcXQv9FWBVXq9fax8VSZns2e+0=
-github.com/databricks/databricks-sdk-go v0.38.0/go.mod h1:Yjy1gREDLK65g4axpVbVNKYAHYE2Sqzj0AB9QWHCBVM=
+github.com/databricks/databricks-sdk-go v0.39.0 h1:nVnQYkk47SkEsRSXWkn6j7jBOxXgusjoo6xwbaHTGss=
+github.com/databricks/databricks-sdk-go v0.39.0/go.mod h1:Yjy1gREDLK65g4axpVbVNKYAHYE2Sqzj0AB9QWHCBVM=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
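For anyone replaying this bump locally, standard Go toolchain usage (not taken from the PR itself) should reproduce both hunks:

go get github.com/databricks/databricks-sdk-go@v0.39.0
go mod tidy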