Compare commits


23 Commits

Author SHA1 Message Date
Shreyas Goenka 793e1dc124
merge 2025-01-28 11:34:53 +01:00
Denis Bilenko 3ffac80007
acc: Use real terraform when CLOUD_ENV is set (#2245)
## Changes
- If CLOUD_ENV is set, do not override terraform with a dummy value. This
allows running acceptance tests as integration tests.
- Needed for https://github.com/databricks/cli/pull/2242

## Tests
Manually run the test suite against dogfood. `CLOUD_ENV=aws go test
./acceptance`
2025-01-28 10:23:44 +00:00
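
A minimal Go sketch of the behaviour described in the commit above (names are illustrative; the real logic lives in the acceptance test setup and is visible in the diff further down): the dummy terraform path is only forced for local runs, so runs with CLOUD_ENV set use the real terraform binary.

```go
package acceptance

import (
	"os"
	"testing"
)

// configureTerraformForTests is an illustrative helper: when CLOUD_ENV is
// empty (local run), point DATABRICKS_TF_EXEC_PATH at a dummy location so
// the CLI does not download terraform for every test; when CLOUD_ENV is set,
// leave the environment alone so the real terraform is used.
func configureTerraformForTests(t *testing.T, dummyPath string) {
	if os.Getenv("CLOUD_ENV") == "" {
		t.Setenv("DATABRICKS_TF_EXEC_PATH", dummyPath)
	}
}
```
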
Denis Bilenko 11436faafe
acc: Avoid reading and applying replacements on large files; validate utf8 (#2244)
## Changes
- Do not start replacement/comparison if a file is too large or not
valid UTF-8.
- This helps prevent applying replacements when a large binary
(e.g. terraform) is accidentally present.

## Tests
Found this problem when working on
https://github.com/databricks/cli/pull/2242 -- the tests tried to
apply replacements to the terraform binary and crashed. With this change,
an error is reported instead.
2025-01-28 10:22:29 +00:00
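
A simplified Go sketch of the guard described above (the repository's actual helper, `tryReading`, appears in the diff later on this page): files that are too large or not valid UTF-8 are skipped instead of being run through replacements.

```go
package acceptance

import (
	"os"
	"unicode/utf8"
)

const maxFileSize = 100_000 // mirrors MaxFileSize in the acceptance tests

// readForComparison returns the file contents and true only if the file is
// small enough and valid UTF-8; otherwise it reports false so the caller can
// skip replacements/comparison (e.g. for an accidental terraform binary).
func readForComparison(path string) (string, bool) {
	info, err := os.Stat(path)
	if err != nil || info.Size() > maxFileSize {
		return "", false
	}
	data, err := os.ReadFile(path)
	if err != nil || !utf8.Valid(data) {
		return "", false
	}
	return string(data), true
}
```
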
Denis Bilenko 60709e3d48
acc: Restore unexpected output error (#2243)
## Changes
Restore the original behaviour of acceptance tests: any unaccounted-for
files trigger an error (not just those that start with "out"). This got
changed in
https://github.com/databricks/cli/pull/2146/files#diff-2bb968d823f4afb825e1dcea2879bdbdedf2b7c15d4e77f47905691b14246a04L196
which changed the check to only consider files starting with "out*" and
skip everything else.

## Tests
Existing tests.
2025-01-28 10:15:32 +00:00
Denis Bilenko be908ee1a1
Add acceptance test for 'experimental.scripts' (#2240) 2025-01-27 15:28:33 +00:00
Denis Bilenko 67d1413db5
Add default regex for DEV_VERSION (#2241)
## Changes

- Replace the development version with $DEV_VERSION.
- Update experimental-jobs-as-code to make use of it.

## Tests
- Existing tests.
- Using this in https://github.com/databricks/cli/pull/2213
2025-01-27 15:34:53 +01:00
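
A hedged sketch of the idea, not the CLI's actual pattern: a default regular expression recognises development version strings in test output and replaces them with the $DEV_VERSION placeholder. The pattern below is purely illustrative.

```go
package acceptance

import "regexp"

// devVersionRe is an illustrative pattern for a development build version
// (e.g. "0.0.0-dev+1234abcd"); the regex actually shipped with the CLI may
// differ.
var devVersionRe = regexp.MustCompile(`\b0\.0\.0-dev\+[0-9a-f]+\b`)

// replaceDevVersion substitutes any matching development version in test
// output with the stable placeholder used by acceptance tests.
// Note: "$$" escapes the dollar sign in Go regexp replacement strings.
func replaceDevVersion(output string) string {
	return devVersionRe.ReplaceAllString(output, "$$DEV_VERSION")
}
```
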
Denis Bilenko 52bf7e388a
acc: Propagate user's UV_CACHE_DIR to tests (#2239)
This speeds things up by 0.5s, but the test still takes 4.4s, so
something else is slow there.

Benchmarking bundle/templates/experimental-jobs-as-code:

```
# Without UV_CACHE_DIR
~/work/cli/acceptance/bundle/templates/experimental-jobs-as-code % hyperfine --warmup 2 'testme -count=1'
Benchmark 1: testme -count=1
  Time (mean ± σ):      4.950 s ±  0.079 s    [User: 2.730 s, System: 8.524 s]
  Range (min … max):    4.838 s …  5.076 s    10 runs

# With UV_CACHE_DIR
~/work/cli/acceptance/bundle/templates/experimental-jobs-as-code % hyperfine --warmup 2 'testme -count=1'
Benchmark 1: testme -count=1
  Time (mean ± σ):      4.410 s ±  0.049 s    [User: 2.669 s, System: 8.710 s]
  Range (min … max):    4.324 s …  4.467 s    10 runs
```
2025-01-27 15:25:56 +01:00
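
Since HOME points at a temporary directory during these tests, uv's default cache would otherwise start empty. A Go sketch of the propagation described above, modelled on the getUVDefaultCacheDir helper visible in the diff later on:

```go
package acceptance

import (
	"os"
	"path/filepath"
	"runtime"
	"testing"
)

// setUVCacheDir computes uv's default cache location for the current user
// and exports it as UV_CACHE_DIR, so tests reuse the existing cache even
// though HOME points at a throwaway temp dir.
func setUVCacheDir(t *testing.T) {
	cacheDir, err := os.UserCacheDir()
	if err != nil {
		t.Fatalf("cannot determine user cache dir: %s", err)
	}
	if runtime.GOOS == "windows" {
		cacheDir = filepath.Join(cacheDir, "uv", "cache")
	} else {
		cacheDir = filepath.Join(cacheDir, "uv")
	}
	t.Setenv("UV_CACHE_DIR", cacheDir)
}
```
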
Denis Bilenko 65fbbd9a7c
libs/python: Remove DetectInterpreters (#2234)
## Changes
- Remove DetectInterpreters from the DetectExecutable call: python3 or
python should always be on the PATH. We don't need to detect
non-standard situations where python3.10 is present but python3 is not.
- I moved DetectInterpreters to cmd/labs, where it is still used.

This is a follow up to https://github.com/databricks/cli/pull/2034

## Tests
Existing tests.
2025-01-27 13:22:08 +00:00
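
A minimal sketch of the simplified behaviour (names are illustrative; the actual code lives in libs/python and cmd/labs): resolve python directly from PATH rather than scanning for versioned interpreters such as python3.10.

```go
package pythonsketch

import (
	"errors"
	"os/exec"
)

// findPython resolves a python interpreter from PATH, preferring python3.
// It deliberately does not look for versioned names like python3.10.
func findPython() (string, error) {
	for _, name := range []string{"python3", "python"} {
		if path, err := exec.LookPath(name); err == nil {
			return path, nil
		}
	}
	return "", errors.New("no python3 or python found on PATH")
}
```
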
dependabot[bot] 4595c6f1b5
Bump github.com/databricks/databricks-sdk-go from 0.55.0 to 0.56.1 (#2238)
Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.55.0 to 0.56.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/releases">github.com/databricks/databricks-sdk-go's
releases</a>.</em></p>
<blockquote>
<h2>v0.56.1</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Do not send query parameters when set to zero value (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/1136">#1136</a>).</li>
</ul>
<h2>v0.56.0</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Support Query parameters for all HTTP operations (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/1124">#1124</a>).</li>
</ul>
<h3>Internal Changes</h3>
<ul>
<li>Add download target to MakeFile (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/1125">#1125</a>).</li>
<li>Delete examples/mocking module (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/1126">#1126</a>).</li>
<li>Scope the traversing directory in the Recursive list workspace test
(<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/1120">#1120</a>).</li>
</ul>
<h3>API Changes:</h3>
<ul>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/iam#AccessControlAPI">w.AccessControl</a>
workspace-level service.</li>
<li>Added <code>HttpRequest</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointsAPI">w.ServingEndpoints</a>
workspace-level service.</li>
<li>Added <code>ReviewState</code>, <code>Reviews</code> and
<code>RunnerCollaborators</code> fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/cleanrooms#CleanRoomAssetNotebook">cleanrooms.CleanRoomAssetNotebook</a>.</li>
<li>Added <code>CleanRoomsNotebookOutput</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#RunOutput">jobs.RunOutput</a>.</li>
<li>Added <code>RunAsRepl</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SparkJarTask">jobs.SparkJarTask</a>.</li>
<li>Added <code>Scopes</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/oauth2#UpdateCustomAppIntegration">oauth2.UpdateCustomAppIntegration</a>.</li>
<li>Added <code>Contents</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#GetOpenApiResponse">serving.GetOpenApiResponse</a>.</li>
<li>Added <code>Activated</code>, <code>ActivationUrl</code>,
<code>AuthenticationType</code>, <code>Cloud</code>,
<code>Comment</code>, <code>CreatedAt</code>, <code>CreatedBy</code>,
<code>DataRecipientGlobalMetastoreId</code>, <code>IpAccessList</code>,
<code>MetastoreId</code>, <code>Name</code>, <code>Owner</code>,
<code>PropertiesKvpairs</code>, <code>Region</code>,
<code>SharingCode</code>, <code>Tokens</code>, <code>UpdatedAt</code>
and <code>UpdatedBy</code> fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sharing#RecipientInfo">sharing.RecipientInfo</a>.</li>
<li>Added <code>ExpirationTime</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sharing#RecipientInfo">sharing.RecipientInfo</a>.</li>
<li>Added <code>Pending</code> enum value for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/cleanrooms#CleanRoomAssetStatusEnum">cleanrooms.CleanRoomAssetStatusEnum</a>.</li>
<li>Added <code>AddNodesFailed</code>,
<code>AutomaticClusterUpdate</code>, <code>AutoscalingBackoff</code> and
<code>AutoscalingFailed</code> enum values for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/compute#EventType">compute.EventType</a>.</li>
<li>Added <code>PendingWarehouse</code> enum value for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#MessageStatus">dashboards.MessageStatus</a>.</li>
<li>Added <code>Cpu</code>, <code>GpuLarge</code>,
<code>GpuMedium</code>, <code>GpuSmall</code> and
<code>MultigpuMedium</code> enum values for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingModelWorkloadType">serving.ServingModelWorkloadType</a>.</li>
<li>Changed <code>Update</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sharing#RecipientsAPI">w.Recipients</a>
workspace-level service to return <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sharing#RecipientInfo">sharing.RecipientInfo</a>.</li>
<li>Changed <code>Update</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sharing#RecipientsAPI">w.Recipients</a>
workspace-level service return type to become non-empty.</li>
<li>Changed <code>Update</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sharing#RecipientsAPI">w.Recipients</a>
workspace-level service to type <code>Update</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sharing#RecipientsAPI">w.Recipients</a>
workspace-level service.</li>
<li>Changed <code>Create</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointsAPI">w.ServingEndpoints</a>
workspace-level service with new required argument order.</li>
<li>Changed <code>GetOpenApi</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointsAPI">w.ServingEndpoints</a>
workspace-level service return type to become non-empty.</li>
<li>Changed <code>Patch</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointsAPI">w.ServingEndpoints</a>
workspace-level service to type <code>Patch</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointsAPI">w.ServingEndpoints</a>
workspace-level service.</li>
<li>Changed <code>Patch</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointsAPI">w.ServingEndpoints</a>
workspace-level service to return <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#EndpointTags">serving.EndpointTags</a>.</li>
<li>Changed <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#EndpointTagList">serving.EndpointTagList</a>
to.</li>
<li>Changed <code>CollaboratorAlias</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/cleanrooms#CleanRoomCollaborator">cleanrooms.CleanRoomCollaborator</a>
to be required.</li>
<li>Changed <code>CollaboratorAlias</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/cleanrooms#CleanRoomCollaborator">cleanrooms.CleanRoomCollaborator</a>
to be required.</li>
<li>Changed <code>Behavior</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#AiGatewayGuardrailPiiBehavior">serving.AiGatewayGuardrailPiiBehavior</a>
to no longer be required.</li>
<li>Changed <code>Behavior</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#AiGatewayGuardrailPiiBehavior">serving.AiGatewayGuardrailPiiBehavior</a>
to no longer be required.</li>
<li>Changed <code>Config</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#CreateServingEndpoint">serving.CreateServingEndpoint</a>
to no longer be required.</li>
<li>Changed <code>ProjectId</code> and <code>Region</code> fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#GoogleCloudVertexAiConfig">serving.GoogleCloudVertexAiConfig</a>
to be required.</li>
<li>Changed <code>ProjectId</code> and <code>Region</code> fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#GoogleCloudVertexAiConfig">serving.GoogleCloudVertexAiConfig</a>
to be required.</li>
<li>Changed <code>WorkloadType</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServedEntityInput">serving.ServedEntityInput</a>
to type <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingModelWorkloadType">serving.ServingModelWorkloadType</a>.</li>
<li>Changed <code>WorkloadType</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServedEntityOutput">serving.ServedEntityOutput</a>
to type <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingModelWorkloadType">serving.ServingModelWorkloadType</a>.</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bf617bb7a6"><code>bf617bb</code></a>
[Release] Release v0.56.1 (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1137">#1137</a>)</li>
<li><a
href="18cebf1d5c"><code>18cebf1</code></a>
[Fix] Do not send query parameters when set to zero value (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1136">#1136</a>)</li>
<li><a
href="28ff749ee2"><code>28ff749</code></a>
[Release] Release v0.56.0 (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1134">#1134</a>)</li>
<li><a
href="113454080f"><code>1134540</code></a>
[Internal] Add download target to MakeFile (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1125">#1125</a>)</li>
<li><a
href="e079db96f3"><code>e079db9</code></a>
[Fix] Support Query parameters for all HTTP operations (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1124">#1124</a>)</li>
<li><a
href="1045fb9697"><code>1045fb9</code></a>
[Internal] Delete examples/mocking module (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1126">#1126</a>)</li>
<li><a
href="914ab6b7e8"><code>914ab6b</code></a>
[Internal] Scope the traversing directory in the Recursive list
workspace tes...</li>
<li>See full diff in <a
href="https://github.com/databricks/databricks-sdk-go/compare/v0.55.0...v0.56.1">compare
view</a></li>
</ul>
</details>
<br />

<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/databricks/databricks-sdk-go&package-manager=go_modules&previous-version=0.55.0&new-version=0.56.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2025-01-27 13:11:07 +00:00
Denis Bilenko b7dd70b8b3
acc: Add a couple of error tests for 'bundle init' (#2233)
This captures how we log errors related to subprocess runs and what
the output looks like.
2025-01-27 12:22:40 +00:00
Denis Bilenko 6e8f0ea8af
CI: Move ruff to 'lint' job (#2232)
This is where it belongs, and there is no need to run it three times.
2025-01-27 10:33:16 +00:00
Denis Bilenko 1cb32eca90
acc: Support custom replacements (#2231)
## Changes
- Ability to extend the list of replacements via test.toml.
- Modify selftest to both demo this feature and get rid of sed on
Windows.

## Tests
Acceptance tests. I'm also using it in
https://github.com/databricks/cli/pull/2213 for things like the pid.
2025-01-27 09:11:06 +00:00
Denis Bilenko 82b0dd36d6
Add acceptance/selftest, showcasing basic features (#2229)
Also make TestInprocessMode use this test.
2025-01-27 09:17:22 +01:00
Denis Bilenko b3d98fe666
acc: Print replacements on error and rm duplicates (#2230)
## Changes
- When file comparison fails in an acceptance test, print the contents of
all applied replacements. Do this once per test.
- Remove duplicate entries in the replacement list.

## Tests
Manually: change the out files of an existing test and you'll get this
printed once, after the first assertion:

```
        acceptance_test.go:307: Available replacements:
            REPL /Users/denis\.bilenko/work/cli/acceptance/build/databricks => $$CLI
            REPL /private/var/folders/5y/9kkdnjw91p11vsqwk0cvmk200000gp/T/TestAccept598522733/001 => $$TMPHOME
            ...
```
2025-01-27 07:45:09 +00:00
Denis Bilenko 468660dc45
Add an acc test covering failures when reading .git (#2223)
## Changes
- New test covering failures when reading .git. One case results in an
error; some result in warnings (not shown).
- New helper withdir runs commands in a subdirectory.

## Tests
New acceptance test.
2025-01-24 15:53:06 +00:00
Pieter Noordhuis f65508690d
Update publish-winget action to use Komac directly (#2228)
## Changes

For the most recent release, I had to re-run the "publish-winget" action
a couple of times before it passed. The underlying issue that causes the
failure should be solved by the latest version of the action, but upon
inspection of the latest version, I found that it always installs the
latest version of [Komac](https://github.com/russellbanks/Komac). To
both fix the issue and lock this down further, I updated our action to
call Komac directly instead of relying on a separate action to do this
for us.

## Tests

Successful run in
https://github.com/databricks/cli/actions/runs/12951529979.
2025-01-24 15:33:54 +00:00
Denis Bilenko 959e43e556
acc: Support per-test configuration; GOOS option to disable OS (#2227)
## Changes
- Acceptance tests load test.toml to configure test behaviour.
- If the file is not found in the test directory, parent directories are
searched up to the test root.
- Currently there is one option: runtime.GOOS, to switch tests off per
OS.

## Tests
Using it in https://github.com/databricks/cli/pull/2223 to disable test
on Windows that cannot be run there.
2025-01-24 14:28:23 +00:00
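
A hedged sketch of the lookup order described above; the repository's LoadConfig may also merge settings from multiple levels. Starting from the test directory, walk towards the acceptance root and use the first test.toml found.

```go
package acceptance

import (
	"os"
	"path/filepath"
)

// findTestConfig walks from the test directory up to the acceptance root
// (".") and returns the path of the first test.toml it encounters.
func findTestConfig(testDir string) (string, bool) {
	dir := testDir
	for {
		candidate := filepath.Join(dir, "test.toml")
		if _, err := os.Stat(candidate); err == nil {
			return candidate, true
		}
		if dir == "." || dir == "" {
			return "", false
		}
		dir = filepath.Dir(dir)
	}
}
```
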
shreyas-goenka a47a058506
Limit test server to only accept GET on read endpoints (#2225)
## Changes
Now the test server will only match GET requests for these endpoints.

## Tests
Existing tests.
2025-01-24 11:05:00 +00:00
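
An illustrative sketch of the restriction, not the repository's actual test-server code: read-only endpoints are registered so that anything other than GET is rejected with 405 Method Not Allowed.

```go
package testserver

import "net/http"

// handleGET registers a handler that only responds to GET requests on the
// given pattern; other methods receive 405 Method Not Allowed.
func handleGET(mux *http.ServeMux, pattern string, h http.HandlerFunc) {
	mux.HandleFunc(pattern, func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodGet {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		h(w, r)
	})
}
```
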
Denis Bilenko b4ed235104
Include EvalSymlinks in SetPath and use SetPath on all paths (#2219)
## Changes
When adding a path, a few things need to be taken care of:
- symlink expansion
- forward/backward slashes, so that tests can do sed 's/\\\\/\//g' to
make them pass on Windows (see
acceptance/bundle/syncroot/dotdot-git/script)

The SetPath() function takes care of both.

This PR uses SetPath() on all paths consistently.

## Tests
Existing tests.
2025-01-24 10:18:44 +00:00
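
A hedged sketch of what a SetPath-style helper does, per the description above; the real method lives in libs/testdiff and may differ in detail. It registers both the symlink-expanded form and the forward-slash form of a path as replacements.

```go
package testdiffsketch

import "path/filepath"

// replacements is a tiny stand-in for the real ReplacementsContext.
type replacements struct {
	pairs [][2]string // old => new
}

func (r *replacements) set(old, repl string) {
	r.pairs = append(r.pairs, [2]string{old, repl})
}

// setPath registers a path replacement, covering the symlink-expanded form
// (e.g. C:\Users\DENIS~1.BIL -> C:\Users\denis.bilenko) and the
// forward-slash form so that outputs match on Windows as well.
func (r *replacements) setPath(path, repl string) {
	if expanded, err := filepath.EvalSymlinks(path); err == nil && expanded != path {
		r.set(expanded, repl)
		r.set(filepath.ToSlash(expanded), repl)
	}
	r.set(path, repl)
	r.set(filepath.ToSlash(path), repl)
}
```
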
Denis Bilenko d6d9b994d4
acc: only print non-zero exit codes in errcode function (#2222)
Reduces noise in the output and matches how "Exit code" is handled for
the whole script.
2025-01-24 10:47:12 +01:00
Andrew Nester d784147e99
[Release] Release v0.239.1 (#2218)
CLI:
* Added text output templates for apps list and list-deployments
([#2175](https://github.com/databricks/cli/pull/2175)).
* Fix duplicate "apps" entry in help output
([#2191](https://github.com/databricks/cli/pull/2191)).

Bundles:
* Allow yaml-anchors in schema
([#2200](https://github.com/databricks/cli/pull/2200)).
* Show an error when non-yaml files used in include section
([#2201](https://github.com/databricks/cli/pull/2201)).
* Set WorktreeRoot to sync root outside git repo
([#2197](https://github.com/databricks/cli/pull/2197)).
* fix: Detailed message for using source-linked deployment with
file_path specified
([#2119](https://github.com/databricks/cli/pull/2119)).
* Allow using variables in enum fields
([#2199](https://github.com/databricks/cli/pull/2199)).
* Add experimental-jobs-as-code template
([#2177](https://github.com/databricks/cli/pull/2177)).
* Reading variables from file
([#2171](https://github.com/databricks/cli/pull/2171)).
* Fixed an apps message order and added output test
([#2174](https://github.com/databricks/cli/pull/2174)).
* Default to forward slash-separated paths for path translation
([#2145](https://github.com/databricks/cli/pull/2145)).
* Include a materialized copy of built-in templates
([#2146](https://github.com/databricks/cli/pull/2146)).
2025-01-23 15:54:55 +00:00
Ilya Kuznetsov 0487e816cc
Reading variables from file (#2171)
## Changes

New source of default values for variables: the variable file
`.databricks/bundle/<target>/variable-overrides.json`.

The CLI tries to stat and read that file every time during the variable
initialisation phase.

## Tests

Acceptance tests
2025-01-23 14:35:33 +00:00
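
A minimal sketch of the file read described above, assuming a flat JSON object of variable names to values; the CLI's actual loader and merge semantics may differ.

```go
package bundlesketch

import (
	"encoding/json"
	"errors"
	"os"
	"path/filepath"
)

// loadVariableOverrides reads .databricks/bundle/<target>/variable-overrides.json
// relative to the bundle root. A missing file is not an error: it simply
// means there are no overrides to apply.
func loadVariableOverrides(bundleRoot, target string) (map[string]any, error) {
	path := filepath.Join(bundleRoot, ".databricks", "bundle", target, "variable-overrides.json")
	data, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	var overrides map[string]any
	if err := json.Unmarshal(data, &overrides); err != nil {
		return nil, err
	}
	return overrides, nil
}
```
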
Andrew Nester 8af9efaa62
Show an error when non-yaml files used in include section (#2201)
## Changes
The `include` section is used only to include other bundle configuration
YAML files. If any other file type is used, an error is raised guiding
users to use `sync.include` instead.

## Tests
Added acceptance test

---------

Co-authored-by: Julia Crawford (Databricks) <julia.crawford@databricks.com>
2025-01-23 13:58:18 +00:00
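
An illustrative sketch of the validation in the commit above (not the CLI's exact implementation or wording): entries matched by the include section must be YAML files, otherwise an error pointing at sync.include is returned.

```go
package bundlesketch

import (
	"fmt"
	"path/filepath"
	"strings"
)

// validateIncludePath returns an error for any non-YAML file referenced by
// the 'include' configuration section.
func validateIncludePath(path string) error {
	switch strings.ToLower(filepath.Ext(path)) {
	case ".yml", ".yaml":
		return nil
	default:
		return fmt.Errorf(
			"file %s in the 'include' configuration section is not a YAML file; "+
				"to include files to sync, specify them in the 'sync.include' section instead", path)
	}
}
```
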
102 changed files with 1542 additions and 522 deletions

View File

@ -1 +1 @@
779817ed8d63031f5ea761fbd25ee84f38feec0d 0be1b914249781b5e903b7676fd02255755bc851

View File

@ -109,16 +109,19 @@ var {{.CamelName}}Overrides []func(
{{- end }} {{- end }}
) )
{{- $excludeFromJson := list "http-request"}}
func new{{.PascalName}}() *cobra.Command { func new{{.PascalName}}() *cobra.Command {
cmd := &cobra.Command{} cmd := &cobra.Command{}
{{- $canUseJson := and .CanUseJson (not (in $excludeFromJson .KebabName )) -}}
{{- if .Request}} {{- if .Request}}
var {{.CamelName}}Req {{.Service.Package.Name}}.{{.Request.PascalName}} var {{.CamelName}}Req {{.Service.Package.Name}}.{{.Request.PascalName}}
{{- if .RequestBodyField }} {{- if .RequestBodyField }}
{{.CamelName}}Req.{{.RequestBodyField.PascalName}} = &{{.Service.Package.Name}}.{{.RequestBodyField.Entity.PascalName}}{} {{.CamelName}}Req.{{.RequestBodyField.PascalName}} = &{{.Service.Package.Name}}.{{.RequestBodyField.Entity.PascalName}}{}
{{- end }} {{- end }}
{{- if .CanUseJson}} {{- if $canUseJson}}
var {{.CamelName}}Json flags.JsonFlag var {{.CamelName}}Json flags.JsonFlag
{{- end}} {{- end}}
{{- end}} {{- end}}
@ -135,7 +138,7 @@ func new{{.PascalName}}() *cobra.Command {
{{- $request = .RequestBodyField.Entity -}} {{- $request = .RequestBodyField.Entity -}}
{{- end -}} {{- end -}}
{{if $request }}// TODO: short flags {{if $request }}// TODO: short flags
{{- if .CanUseJson}} {{- if $canUseJson}}
cmd.Flags().Var(&{{.CamelName}}Json, "json", `either inline JSON string or @path/to/file.json with request body`) cmd.Flags().Var(&{{.CamelName}}Json, "json", `either inline JSON string or @path/to/file.json with request body`)
{{- end}} {{- end}}
{{$method := .}} {{$method := .}}
@ -177,7 +180,7 @@ func new{{.PascalName}}() *cobra.Command {
{{- $hasRequiredArgs := and (not $hasIdPrompt) $hasPosArgs -}} {{- $hasRequiredArgs := and (not $hasIdPrompt) $hasPosArgs -}}
{{- $hasSingleRequiredRequestBodyFieldWithPrompt := and (and $hasIdPrompt $request) (eq 1 (len $request.RequiredRequestBodyFields)) -}} {{- $hasSingleRequiredRequestBodyFieldWithPrompt := and (and $hasIdPrompt $request) (eq 1 (len $request.RequiredRequestBodyFields)) -}}
{{- $onlyPathArgsRequiredAsPositionalArguments := and $request (eq (len .RequiredPositionalArguments) (len $request.RequiredPathFields)) -}} {{- $onlyPathArgsRequiredAsPositionalArguments := and $request (eq (len .RequiredPositionalArguments) (len $request.RequiredPathFields)) -}}
{{- $hasDifferentArgsWithJsonFlag := and (not $onlyPathArgsRequiredAsPositionalArguments) (and .CanUseJson (or $request.HasRequiredRequestBodyFields )) -}} {{- $hasDifferentArgsWithJsonFlag := and (not $onlyPathArgsRequiredAsPositionalArguments) (and $canUseJson (or $request.HasRequiredRequestBodyFields )) -}}
{{- $hasCustomArgHandler := or $hasRequiredArgs $hasDifferentArgsWithJsonFlag -}} {{- $hasCustomArgHandler := or $hasRequiredArgs $hasDifferentArgsWithJsonFlag -}}
{{- $atleastOneArgumentWithDescription := false -}} {{- $atleastOneArgumentWithDescription := false -}}
@ -239,7 +242,7 @@ func new{{.PascalName}}() *cobra.Command {
ctx := cmd.Context() ctx := cmd.Context()
{{if .Service.IsAccounts}}a := root.AccountClient(ctx){{else}}w := root.WorkspaceClient(ctx){{end}} {{if .Service.IsAccounts}}a := root.AccountClient(ctx){{else}}w := root.WorkspaceClient(ctx){{end}}
{{- if .Request }} {{- if .Request }}
{{ if .CanUseJson }} {{ if $canUseJson }}
if cmd.Flags().Changed("json") { if cmd.Flags().Changed("json") {
diags := {{.CamelName}}Json.Unmarshal(&{{.CamelName}}Req{{ if .RequestBodyField }}.{{.RequestBodyField.PascalName}}{{ end }}) diags := {{.CamelName}}Json.Unmarshal(&{{.CamelName}}Req{{ if .RequestBodyField }}.{{.RequestBodyField.PascalName}}{{ end }})
if diags.HasError() { if diags.HasError() {
@ -255,7 +258,7 @@ func new{{.PascalName}}() *cobra.Command {
return fmt.Errorf("please provide command input in JSON format by specifying the --json flag") return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
}{{- end}} }{{- end}}
{{- if $hasPosArgs }} {{- if $hasPosArgs }}
{{- if and .CanUseJson $hasSingleRequiredRequestBodyFieldWithPrompt }} else { {{- if and $canUseJson $hasSingleRequiredRequestBodyFieldWithPrompt }} else {
{{- end}} {{- end}}
{{- if $hasIdPrompt}} {{- if $hasIdPrompt}}
if len(args) == 0 { if len(args) == 0 {
@ -279,9 +282,9 @@ func new{{.PascalName}}() *cobra.Command {
{{$method := .}} {{$method := .}}
{{- range $arg, $field := .RequiredPositionalArguments}} {{- range $arg, $field := .RequiredPositionalArguments}}
{{- template "args-scan" (dict "Arg" $arg "Field" $field "Method" $method "HasIdPrompt" $hasIdPrompt)}} {{- template "args-scan" (dict "Arg" $arg "Field" $field "Method" $method "HasIdPrompt" $hasIdPrompt "ExcludeFromJson" $excludeFromJson)}}
{{- end -}} {{- end -}}
{{- if and .CanUseJson $hasSingleRequiredRequestBodyFieldWithPrompt }} {{- if and $canUseJson $hasSingleRequiredRequestBodyFieldWithPrompt }}
} }
{{- end}} {{- end}}
@ -392,7 +395,8 @@ func new{{.PascalName}}() *cobra.Command {
{{- $method := .Method -}} {{- $method := .Method -}}
{{- $arg := .Arg -}} {{- $arg := .Arg -}}
{{- $hasIdPrompt := .HasIdPrompt -}} {{- $hasIdPrompt := .HasIdPrompt -}}
{{- $optionalIfJsonIsUsed := and (not $hasIdPrompt) (and $field.IsRequestBodyField $method.CanUseJson) }} {{ $canUseJson := and $method.CanUseJson (not (in .ExcludeFromJson $method.KebabName)) }}
{{- $optionalIfJsonIsUsed := and (not $hasIdPrompt) (and $field.IsRequestBodyField $canUseJson) }}
{{- if $optionalIfJsonIsUsed }} {{- if $optionalIfJsonIsUsed }}
if !cmd.Flags().Changed("json") { if !cmd.Flags().Changed("json") {
{{- end }} {{- end }}

1
.gitattributes vendored
View File

@ -31,6 +31,7 @@ cmd/account/users/users.go linguist-generated=true
cmd/account/vpc-endpoints/vpc-endpoints.go linguist-generated=true cmd/account/vpc-endpoints/vpc-endpoints.go linguist-generated=true
cmd/account/workspace-assignment/workspace-assignment.go linguist-generated=true cmd/account/workspace-assignment/workspace-assignment.go linguist-generated=true
cmd/account/workspaces/workspaces.go linguist-generated=true cmd/account/workspaces/workspaces.go linguist-generated=true
cmd/workspace/access-control/access-control.go linguist-generated=true
cmd/workspace/aibi-dashboard-embedding-access-policy/aibi-dashboard-embedding-access-policy.go linguist-generated=true cmd/workspace/aibi-dashboard-embedding-access-policy/aibi-dashboard-embedding-access-policy.go linguist-generated=true
cmd/workspace/aibi-dashboard-embedding-approved-domains/aibi-dashboard-embedding-approved-domains.go linguist-generated=true cmd/workspace/aibi-dashboard-embedding-approved-domains/aibi-dashboard-embedding-approved-domains.go linguist-generated=true
cmd/workspace/alerts-legacy/alerts-legacy.go linguist-generated=true cmd/workspace/alerts-legacy/alerts-legacy.go linguist-generated=true

View File

@ -10,19 +10,65 @@ on:
jobs: jobs:
publish-to-winget-pkgs: publish-to-winget-pkgs:
runs-on: runs-on:
group: databricks-protected-runner-group group: databricks-deco-testing-runner-group
labels: windows-server-latest labels: ubuntu-latest-deco
environment: release environment: release
steps: steps:
- uses: vedantmgoyal2009/winget-releaser@93fd8b606a1672ec3e5c6c3bb19426be68d1a8b0 # v2 - name: Checkout repository and submodules
with: uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
identifier: Databricks.DatabricksCLI
installers-regex: 'windows_.*-signed\.zip$' # Only signed Windows releases
token: ${{ secrets.ENG_DEV_ECOSYSTEM_BOT_TOKEN }}
fork-user: eng-dev-ecosystem-bot
# Use the tag from the input, or the ref name if the input is not provided. # When updating the version of komac, make sure to update the checksum in the next step.
# The ref name is equal to the tag name when this workflow is triggered by the "sign-cli" command. # Find both at https://github.com/russellbanks/Komac/releases.
release-tag: ${{ inputs.tag || github.ref_name }} - name: Download komac binary
run: |
curl -s -L -o $RUNNER_TEMP/komac-2.9.0-x86_64-unknown-linux-gnu.tar.gz https://github.com/russellbanks/Komac/releases/download/v2.9.0/komac-2.9.0-x86_64-unknown-linux-gnu.tar.gz
- name: Verify komac binary
run: |
echo "d07a12831ad5418fee715488542a98ce3c0e591d05c850dd149fe78432be8c4c $RUNNER_TEMP/komac-2.9.0-x86_64-unknown-linux-gnu.tar.gz" | sha256sum -c -
- name: Untar komac binary to temporary path
run: |
mkdir -p $RUNNER_TEMP/komac
tar -xzf $RUNNER_TEMP/komac-2.9.0-x86_64-unknown-linux-gnu.tar.gz -C $RUNNER_TEMP/komac
- name: Add komac to PATH
run: echo "$RUNNER_TEMP/komac" >> $GITHUB_PATH
- name: Confirm komac version
run: komac --version
# Use the tag from the input, or the ref name if the input is not provided.
# The ref name is equal to the tag name when this workflow is triggered by the "sign-cli" command.
- name: Strip "v" prefix from version
id: strip_version
run: echo "version=$(echo ${{ inputs.tag || github.ref_name }} | sed 's/^v//')" >> "$GITHUB_OUTPUT"
- name: Get URLs of signed Windows binaries
id: get_windows_urls
run: |
urls=$(
gh api https://api.github.com/repos/databricks/cli/releases/tags/${{ inputs.tag || github.ref_name }} | \
jq -r .assets[].browser_download_url | \
grep -E '_windows_.*-signed\.zip$' | \
tr '\n' ' '
)
if [ -z "$urls" ]; then
echo "No signed Windows binaries found" >&2
exit 1
fi
echo "urls=$urls" >> "$GITHUB_OUTPUT"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Publish to Winget
run: |
komac update Databricks.DatabricksCLI \
--version ${{ steps.strip_version.outputs.version }} \
--submit \
--urls ${{ steps.get_windows_urls.outputs.urls }} \
env:
KOMAC_FORK_OWNER: eng-dev-ecosystem-bot
GITHUB_TOKEN: ${{ secrets.ENG_DEV_ECOSYSTEM_BOT_TOKEN }}

View File

@ -60,12 +60,6 @@ jobs:
- name: Install uv - name: Install uv
uses: astral-sh/setup-uv@887a942a15af3a7626099df99e897a18d9e5ab3a # v5.1.0 uses: astral-sh/setup-uv@887a942a15af3a7626099df99e897a18d9e5ab3a # v5.1.0
- name: Run ruff
uses: astral-sh/ruff-action@31a518504640beb4897d0b9f9e50a2a9196e75ba # v3.0.1
with:
version: "0.9.1"
args: "format --check"
- name: Set go env - name: Set go env
run: | run: |
echo "GOPATH=$(go env GOPATH)" >> $GITHUB_ENV echo "GOPATH=$(go env GOPATH)" >> $GITHUB_ENV
@ -80,7 +74,7 @@ jobs:
- name: Run tests with coverage - name: Run tests with coverage
run: make cover run: make cover
golangci: linters:
needs: cleanups needs: cleanups
name: lint name: lint
runs-on: ubuntu-latest runs-on: ubuntu-latest
@ -105,6 +99,11 @@ jobs:
with: with:
version: v1.63.4 version: v1.63.4
args: --timeout=15m args: --timeout=15m
- name: Run ruff
uses: astral-sh/ruff-action@31a518504640beb4897d0b9f9e50a2a9196e75ba # v3.0.1
with:
version: "0.9.1"
args: "format --check"
validate-bundle-schema: validate-bundle-schema:
needs: cleanups needs: cleanups

View File

@ -1,5 +1,25 @@
# Version changelog # Version changelog
## [Release] Release v0.239.1
CLI:
* Added text output templates for apps list and list-deployments ([#2175](https://github.com/databricks/cli/pull/2175)).
* Fix duplicate "apps" entry in help output ([#2191](https://github.com/databricks/cli/pull/2191)).
Bundles:
* Allow yaml-anchors in schema ([#2200](https://github.com/databricks/cli/pull/2200)).
* Show an error when non-yaml files used in include section ([#2201](https://github.com/databricks/cli/pull/2201)).
* Set WorktreeRoot to sync root outside git repo ([#2197](https://github.com/databricks/cli/pull/2197)).
* fix: Detailed message for using source-linked deployment with file_path specified ([#2119](https://github.com/databricks/cli/pull/2119)).
* Allow using variables in enum fields ([#2199](https://github.com/databricks/cli/pull/2199)).
* Add experimental-jobs-as-code template ([#2177](https://github.com/databricks/cli/pull/2177)).
* Reading variables from file ([#2171](https://github.com/databricks/cli/pull/2171)).
* Fixed an apps message order and added output test ([#2174](https://github.com/databricks/cli/pull/2174)).
* Default to forward slash-separated paths for path translation ([#2145](https://github.com/databricks/cli/pull/2145)).
* Include a materialized copy of built-in templates ([#2146](https://github.com/databricks/cli/pull/2146)).
## [Release] Release v0.239.0 ## [Release] Release v0.239.0
### New feature announcement ### New feature announcement

4
NOTICE
View File

@ -105,3 +105,7 @@ License - https://github.com/wI2L/jsondiff/blob/master/LICENSE
https://github.com/hexops/gotextdiff https://github.com/hexops/gotextdiff
Copyright (c) 2009 The Go Authors. All rights reserved. Copyright (c) 2009 The Go Authors. All rights reserved.
License - https://github.com/hexops/gotextdiff/blob/main/LICENSE License - https://github.com/hexops/gotextdiff/blob/main/LICENSE
https://github.com/BurntSushi/toml
Copyright (c) 2013 TOML authors
https://github.com/BurntSushi/toml/blob/master/COPYING

View File

@ -17,3 +17,5 @@ For more complex tests one can also use:
- `errcode` helper: if the command fails with non-zero code, it appends `Exit code: N` to the output but returns success to caller (bash), allowing continuation of script. - `errcode` helper: if the command fails with non-zero code, it appends `Exit code: N` to the output but returns success to caller (bash), allowing continuation of script.
- `trace` helper: prints the arguments before executing the command. - `trace` helper: prints the arguments before executing the command.
- custom output files: redirect output to custom file (it must start with `out`), e.g. `$CLI bundle validate > out.txt 2> out.error.txt`. - custom output files: redirect output to custom file (it must start with `out`), e.g. `$CLI bundle validate > out.txt 2> out.error.txt`.
See [selftest](./selftest) for a toy test.

View File

@ -15,6 +15,7 @@ import (
"strings" "strings"
"testing" "testing"
"time" "time"
"unicode/utf8"
"github.com/databricks/cli/internal/testutil" "github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/env" "github.com/databricks/cli/libs/env"
@ -44,6 +45,7 @@ const (
EntryPointScript = "script" EntryPointScript = "script"
CleanupScript = "script.cleanup" CleanupScript = "script.cleanup"
PrepareScript = "script.prepare" PrepareScript = "script.prepare"
MaxFileSize = 100_000
) )
var Scripts = map[string]bool{ var Scripts = map[string]bool{
@ -60,12 +62,7 @@ func TestInprocessMode(t *testing.T) {
if InprocessMode { if InprocessMode {
t.Skip("Already tested by TestAccept") t.Skip("Already tested by TestAccept")
} }
if runtime.GOOS == "windows" { require.Equal(t, 1, testAccept(t, true, "selftest"))
// - catalogs A catalog is the first layer of Unity Catalogs three-level namespace.
// + catalogs A catalog is the first layer of Unity Catalog<6F>s three-level namespace.
t.Skip("Fails on CI on unicode characters")
}
require.NotZero(t, testAccept(t, true, "help"))
} }
func testAccept(t *testing.T, InprocessMode bool, singleTest string) int { func testAccept(t *testing.T, InprocessMode bool, singleTest string) int {
@ -93,17 +90,18 @@ func testAccept(t *testing.T, InprocessMode bool, singleTest string) int {
} }
t.Setenv("CLI", execPath) t.Setenv("CLI", execPath)
repls.Set(execPath, "$CLI") repls.SetPath(execPath, "$CLI")
// Make helper scripts available // Make helper scripts available
t.Setenv("PATH", fmt.Sprintf("%s%c%s", filepath.Join(cwd, "bin"), os.PathListSeparator, os.Getenv("PATH"))) t.Setenv("PATH", fmt.Sprintf("%s%c%s", filepath.Join(cwd, "bin"), os.PathListSeparator, os.Getenv("PATH")))
tempHomeDir := t.TempDir() tempHomeDir := t.TempDir()
repls.Set(tempHomeDir, "$TMPHOME") repls.SetPath(tempHomeDir, "$TMPHOME")
t.Logf("$TMPHOME=%v", tempHomeDir) t.Logf("$TMPHOME=%v", tempHomeDir)
// Prevent CLI from downloading terraform in each test: // Make use of uv cache; since we set HomeEnvVar to temporary directory, it is not picked up automatically
t.Setenv("DATABRICKS_TF_EXEC_PATH", tempHomeDir) uvCache := getUVDefaultCacheDir(t)
t.Setenv("UV_CACHE_DIR", uvCache)
ctx := context.Background() ctx := context.Background()
cloudEnv := os.Getenv("CLOUD_ENV") cloudEnv := os.Getenv("CLOUD_ENV")
@ -118,6 +116,9 @@ func testAccept(t *testing.T, InprocessMode bool, singleTest string) int {
homeDir := t.TempDir() homeDir := t.TempDir()
// Do not read user's ~/.databrickscfg // Do not read user's ~/.databrickscfg
t.Setenv(env.HomeEnvVar(), homeDir) t.Setenv(env.HomeEnvVar(), homeDir)
// Prevent CLI from downloading terraform in each test:
t.Setenv("DATABRICKS_TF_EXEC_PATH", tempHomeDir)
} }
workspaceClient, err := databricks.NewWorkspaceClient() workspaceClient, err := databricks.NewWorkspaceClient()
@ -129,6 +130,7 @@ func testAccept(t *testing.T, InprocessMode bool, singleTest string) int {
testdiff.PrepareReplacementsUser(t, &repls, *user) testdiff.PrepareReplacementsUser(t, &repls, *user)
testdiff.PrepareReplacementsWorkspaceClient(t, &repls, workspaceClient) testdiff.PrepareReplacementsWorkspaceClient(t, &repls, workspaceClient)
testdiff.PrepareReplacementsUUID(t, &repls) testdiff.PrepareReplacementsUUID(t, &repls)
testdiff.PrepareReplacementsDevVersion(t, &repls)
testDirs := getTests(t) testDirs := getTests(t)
require.NotEmpty(t, testDirs) require.NotEmpty(t, testDirs)
@ -175,6 +177,13 @@ func getTests(t *testing.T) []string {
} }
func runTest(t *testing.T, dir, coverDir string, repls testdiff.ReplacementsContext) { func runTest(t *testing.T, dir, coverDir string, repls testdiff.ReplacementsContext) {
config, configPath := LoadConfig(t, dir)
isEnabled, isPresent := config.GOOS[runtime.GOOS]
if isPresent && !isEnabled {
t.Skipf("Disabled via GOOS.%s setting in %s", runtime.GOOS, configPath)
}
var tmpDir string var tmpDir string
var err error var err error
if KeepTmp { if KeepTmp {
@ -187,12 +196,8 @@ func runTest(t *testing.T, dir, coverDir string, repls testdiff.ReplacementsCont
tmpDir = t.TempDir() tmpDir = t.TempDir()
} }
// Converts C:\Users\DENIS~1.BIL -> C:\Users\denis.bilenko
tmpDirEvalled, err1 := filepath.EvalSymlinks(tmpDir)
if err1 == nil && tmpDirEvalled != tmpDir {
repls.SetPathWithParents(tmpDirEvalled, "$TMPDIR")
}
repls.SetPathWithParents(tmpDir, "$TMPDIR") repls.SetPathWithParents(tmpDir, "$TMPDIR")
repls.Repls = append(repls.Repls, config.Repls...)
scriptContents := readMergedScriptContents(t, dir) scriptContents := readMergedScriptContents(t, dir)
testutil.WriteFile(t, filepath.Join(tmpDir, EntryPointScript), scriptContents) testutil.WriteFile(t, filepath.Join(tmpDir, EntryPointScript), scriptContents)
@ -226,9 +231,11 @@ func runTest(t *testing.T, dir, coverDir string, repls testdiff.ReplacementsCont
formatOutput(out, err) formatOutput(out, err)
require.NoError(t, out.Close()) require.NoError(t, out.Close())
printedRepls := false
// Compare expected outputs // Compare expected outputs
for relPath := range outputs { for relPath := range outputs {
doComparison(t, repls, dir, tmpDir, relPath) doComparison(t, repls, dir, tmpDir, relPath, &printedRepls)
} }
// Make sure there are not unaccounted for new files // Make sure there are not unaccounted for new files
@ -240,26 +247,27 @@ func runTest(t *testing.T, dir, coverDir string, repls testdiff.ReplacementsCont
if _, ok := outputs[relPath]; ok { if _, ok := outputs[relPath]; ok {
continue continue
} }
t.Errorf("Unexpected output: %s", relPath)
if strings.HasPrefix(relPath, "out") { if strings.HasPrefix(relPath, "out") {
// We have a new file starting with "out" // We have a new file starting with "out"
// Show the contents & support overwrite mode for it: // Show the contents & support overwrite mode for it:
doComparison(t, repls, dir, tmpDir, relPath) doComparison(t, repls, dir, tmpDir, relPath, &printedRepls)
} }
} }
} }
func doComparison(t *testing.T, repls testdiff.ReplacementsContext, dirRef, dirNew, relPath string) { func doComparison(t *testing.T, repls testdiff.ReplacementsContext, dirRef, dirNew, relPath string, printedRepls *bool) {
pathRef := filepath.Join(dirRef, relPath) pathRef := filepath.Join(dirRef, relPath)
pathNew := filepath.Join(dirNew, relPath) pathNew := filepath.Join(dirNew, relPath)
bufRef, okRef := readIfExists(t, pathRef) bufRef, okRef := tryReading(t, pathRef)
bufNew, okNew := readIfExists(t, pathNew) bufNew, okNew := tryReading(t, pathNew)
if !okRef && !okNew { if !okRef && !okNew {
t.Errorf("Both files are missing: %s, %s", pathRef, pathNew) t.Errorf("Both files are missing or have errors: %s, %s", pathRef, pathNew)
return return
} }
valueRef := testdiff.NormalizeNewlines(string(bufRef)) valueRef := testdiff.NormalizeNewlines(bufRef)
valueNew := testdiff.NormalizeNewlines(string(bufNew)) valueNew := testdiff.NormalizeNewlines(bufNew)
// Apply replacements to the new value only. // Apply replacements to the new value only.
// The reference value is stored after applying replacements. // The reference value is stored after applying replacements.
@ -293,6 +301,15 @@ func doComparison(t *testing.T, repls testdiff.ReplacementsContext, dirRef, dirN
t.Logf("Overwriting existing output file: %s", relPath) t.Logf("Overwriting existing output file: %s", relPath)
testutil.WriteFile(t, pathRef, valueNew) testutil.WriteFile(t, pathRef, valueNew)
} }
if !equal && printedRepls != nil && !*printedRepls {
*printedRepls = true
var items []string
for _, item := range repls.Repls {
items = append(items, fmt.Sprintf("REPL %s => %s", item.Old, item.New))
}
t.Log("Available replacements:\n" + strings.Join(items, "\n"))
}
} }
// Returns combined script.prepare (root) + script.prepare (parent) + ... + script + ... + script.cleanup (parent) + ... // Returns combined script.prepare (root) + script.prepare (parent) + ... + script + ... + script.cleanup (parent) + ...
@ -308,14 +325,14 @@ func readMergedScriptContents(t *testing.T, dir string) string {
cleanups := []string{} cleanups := []string{}
for { for {
x, ok := readIfExists(t, filepath.Join(dir, CleanupScript)) x, ok := tryReading(t, filepath.Join(dir, CleanupScript))
if ok { if ok {
cleanups = append(cleanups, string(x)) cleanups = append(cleanups, x)
} }
x, ok = readIfExists(t, filepath.Join(dir, PrepareScript)) x, ok = tryReading(t, filepath.Join(dir, PrepareScript))
if ok { if ok {
prepares = append(prepares, string(x)) prepares = append(prepares, x)
} }
if dir == "" || dir == "." { if dir == "" || dir == "." {
@ -402,16 +419,33 @@ func formatOutput(w io.Writer, err error) {
} }
} }
func readIfExists(t *testing.T, path string) ([]byte, bool) { func tryReading(t *testing.T, path string) (string, bool) {
data, err := os.ReadFile(path) info, err := os.Stat(path)
if err == nil { if err != nil {
return data, true if !errors.Is(err, os.ErrNotExist) {
t.Errorf("%s: %s", path, err)
}
return "", false
} }
if !errors.Is(err, os.ErrNotExist) { if info.Size() > MaxFileSize {
t.Fatalf("%s: %s", path, err) t.Errorf("%s: ignoring, too large: %d", path, info.Size())
return "", false
} }
return []byte{}, false
data, err := os.ReadFile(path)
if err != nil {
// already checked ErrNotExist above
t.Errorf("%s: %s", path, err)
return "", false
}
if !utf8.Valid(data) {
t.Errorf("%s: not valid utf-8", path)
return "", false
}
return string(data), true
} }
func CopyDir(src, dst string, inputs, outputs map[string]bool) error { func CopyDir(src, dst string, inputs, outputs map[string]bool) error {
@ -477,3 +511,16 @@ func ListDir(t *testing.T, src string) []string {
} }
return files return files
} }
func getUVDefaultCacheDir(t *testing.T) string {
// According to uv docs https://docs.astral.sh/uv/concepts/cache/#caching-in-continuous-integration
// the default cache directory is
// "A system-appropriate cache directory, e.g., $XDG_CACHE_HOME/uv or $HOME/.cache/uv on Unix and %LOCALAPPDATA%\uv\cache on Windows"
cacheDir, err := os.UserCacheDir()
require.NoError(t, err)
if runtime.GOOS == "windows" {
return cacheDir + "\\uv\\cache"
} else {
return cacheDir + "/uv"
}
}

View File

@ -0,0 +1,2 @@
bundle:
name: git-permerror

View File

@ -0,0 +1,78 @@
=== No permission to access .git. Badness: inferred flag is set to true even though we did not infer branch. bundle_root_path is not correct in subdir case.
>>> chmod 000 .git
>>> $CLI bundle validate
Error: unable to load repository specific gitconfig: open config: permission denied
Name: git-permerror
Target: default
Workspace:
User: $USERNAME
Path: /Workspace/Users/$USERNAME/.bundle/git-permerror/default
Found 1 error
Exit code: 1
>>> $CLI bundle validate -o json
Error: unable to load repository specific gitconfig: open config: permission denied
Exit code: 1
{
"bundle_root_path": ".",
"inferred": true
}
>>> withdir subdir/a/b $CLI bundle validate -o json
Error: unable to load repository specific gitconfig: open config: permission denied
Exit code: 1
{
"bundle_root_path": ".",
"inferred": true
}
=== No permissions to read .git/HEAD. Badness: warning is not shown. inferred is incorrectly set to true. bundle_root_path is not correct in subdir case.
>>> chmod 000 .git/HEAD
>>> $CLI bundle validate -o json
{
"bundle_root_path": ".",
"inferred": true
}
>>> withdir subdir/a/b $CLI bundle validate -o json
{
"bundle_root_path": ".",
"inferred": true
}
=== No permissions to read .git/config. Badness: inferred is incorretly set to true. bundle_root_path is not correct is subdir case.
>>> chmod 000 .git/config
>>> $CLI bundle validate -o json
Error: unable to load repository specific gitconfig: open config: permission denied
Exit code: 1
{
"bundle_root_path": ".",
"inferred": true
}
>>> withdir subdir/a/b $CLI bundle validate -o json
Error: unable to load repository specific gitconfig: open config: permission denied
Exit code: 1
{
"bundle_root_path": ".",
"inferred": true
}

View File

@ -0,0 +1,26 @@
mkdir myrepo
cd myrepo
cp ../databricks.yml .
git-repo-init
mkdir -p subdir/a/b
printf "=== No permission to access .git. Badness: inferred flag is set to true even though we did not infer branch. bundle_root_path is not correct in subdir case.\n"
trace chmod 000 .git
errcode trace $CLI bundle validate
errcode trace $CLI bundle validate -o json | jq .bundle.git
errcode trace withdir subdir/a/b $CLI bundle validate -o json | jq .bundle.git
printf "\n\n=== No permissions to read .git/HEAD. Badness: warning is not shown. inferred is incorrectly set to true. bundle_root_path is not correct in subdir case.\n"
chmod 700 .git
trace chmod 000 .git/HEAD
errcode trace $CLI bundle validate -o json | jq .bundle.git
errcode trace withdir subdir/a/b $CLI bundle validate -o json | jq .bundle.git
printf "\n\n=== No permissions to read .git/config. Badness: inferred is incorretly set to true. bundle_root_path is not correct is subdir case.\n"
chmod 666 .git/HEAD
trace chmod 000 .git/config
errcode trace $CLI bundle validate -o json | jq .bundle.git
errcode trace withdir subdir/a/b $CLI bundle validate -o json | jq .bundle.git
cd ..
rm -fr myrepo

View File

@ -0,0 +1,5 @@
Badness = "Warning logs not shown; inferred flag is set to true incorrect; bundle_root_path is not correct"
[GOOS]
# This test relies on chmod which does not work on Windows
windows = false

View File

@ -0,0 +1,6 @@
bundle:
  name: non_yaml_in_includes

include:
  - test.py
  - resources/*.yml

View File

@ -0,0 +1,10 @@
Error: Files in the 'include' configuration section must be YAML files.
in databricks.yml:5:4
The file test.py in the 'include' configuration section is not a YAML file, and only YAML files are supported. To include files to sync, specify them in the 'sync.include' configuration section instead.
Name: non_yaml_in_includes
Found 1 error
Exit code: 1

View File

@ -0,0 +1 @@
$CLI bundle validate

View File

@ -0,0 +1 @@
print("Hello world")

View File

@ -1,8 +1,6 @@
 >>> $CLI bundle validate -t development -o json
-Exit code: 0
 >>> $CLI bundle validate -t error
 Error: notebook this value is overridden not found. Local notebook references are expected
 to contain one of the following file extensions: [.py, .r, .scala, .sql, .ipynb]

View File

@ -1,8 +1,6 @@
 >>> $CLI bundle validate -t development -o json
-Exit code: 0
 >>> $CLI bundle validate -t error
 Error: notebook this value is overridden not found. Local notebook references are expected
 to contain one of the following file extensions: [.py, .r, .scala, .sql, .ipynb]

View File

@ -0,0 +1,11 @@
bundle:
  name: scripts

experimental:
  scripts:
    preinit: "python3 ./myscript.py $EXITCODE preinit"
    postinit: "python3 ./myscript.py 0 postinit"
    prebuild: "python3 ./myscript.py 0 prebuild"
    postbuild: "python3 ./myscript.py 0 postbuild"
    predeploy: "python3 ./myscript.py 0 predeploy"
    postdeploy: "python3 ./myscript.py 0 postdeploy"

View File

@ -0,0 +1,8 @@
import sys
info = " ".join(sys.argv[1:])
sys.stderr.write(f"from myscript.py {info}: hello stderr!\n")
sys.stdout.write(f"from myscript.py {info}: hello stdout!\n")
exitcode = int(sys.argv[1])
sys.exit(exitcode)

View File

@ -0,0 +1,52 @@
>>> EXITCODE=0 errcode $CLI bundle validate
Executing 'preinit' script
from myscript.py 0 preinit: hello stdout!
from myscript.py 0 preinit: hello stderr!
Executing 'postinit' script
from myscript.py 0 postinit: hello stdout!
from myscript.py 0 postinit: hello stderr!
Name: scripts
Target: default
Workspace:
User: $USERNAME
Path: /Workspace/Users/$USERNAME/.bundle/scripts/default
Validation OK!
>>> EXITCODE=1 errcode $CLI bundle validate
Executing 'preinit' script
from myscript.py 1 preinit: hello stdout!
from myscript.py 1 preinit: hello stderr!
Error: failed to execute script: exit status 1
Name: scripts
Found 1 error
Exit code: 1
>>> EXITCODE=0 errcode $CLI bundle deploy
Executing 'preinit' script
from myscript.py 0 preinit: hello stdout!
from myscript.py 0 preinit: hello stderr!
Executing 'postinit' script
from myscript.py 0 postinit: hello stdout!
from myscript.py 0 postinit: hello stderr!
Executing 'prebuild' script
from myscript.py 0 prebuild: hello stdout!
from myscript.py 0 prebuild: hello stderr!
Executing 'postbuild' script
from myscript.py 0 postbuild: hello stdout!
from myscript.py 0 postbuild: hello stderr!
Executing 'predeploy' script
from myscript.py 0 predeploy: hello stdout!
from myscript.py 0 predeploy: hello stderr!
Error: unable to deploy to /Workspace/Users/$USERNAME/.bundle/scripts/default/state as $USERNAME.
Please make sure the current user or one of their groups is listed under the permissions of this bundle.
For assistance, contact the owners of this project.
They may need to redeploy the bundle to apply the new permissions.
Please refer to https://docs.databricks.com/dev-tools/bundles/permissions.html for more on managing permissions.
Exit code: 1

View File

@ -0,0 +1,3 @@
trace EXITCODE=0 errcode $CLI bundle validate
trace EXITCODE=1 errcode $CLI bundle validate
trace EXITCODE=0 errcode $CLI bundle deploy
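
The three traced commands above drive the lifecycle hooks declared in databricks.yml. As an illustration only (not the CLI's actual implementation, and Unix-only since it shells out via sh -c), a minimal Go sketch of running such hooks in order and aborting on a non-zero exit status:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runHook executes one lifecycle script such as "preinit" or "postbuild":
// the command string is run through the shell, its output is streamed
// through, and a non-zero exit status is surfaced as an error.
func runHook(name, command string) error {
	fmt.Printf("Executing '%s' script\n", name)
	cmd := exec.Command("sh", "-c", command)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("failed to execute script: %w", err)
	}
	return nil
}

func main() {
	// Hooks run in the order shown in the acceptance test output above.
	hooks := []struct{ name, cmd string }{
		{"preinit", "python3 ./myscript.py 0 preinit"},
		{"postinit", "python3 ./myscript.py 0 postinit"},
	}
	for _, h := range hooks {
		if err := runHook(h.name, h.cmd); err != nil {
			fmt.Fprintln(os.Stderr, "Error:", err)
			os.Exit(1)
		}
	}
}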

View File

@ -3,4 +3,6 @@ mkdir myrepo
 cd myrepo
 cp ../databricks.yml .
 git-repo-init
-$CLI bundle validate | sed 's/\\\\/\//g'
+errcode $CLI bundle validate
+cd ..
+rm -fr myrepo

View File

@ -0,0 +1,3 @@
[[Repls]]
Old = '\\\\myrepo'
New = '/myrepo'

View File

@ -10,6 +10,8 @@ Please refer to the README.md file for "getting started" instructions.
 See also the documentation at https://docs.databricks.com/dev-tools/bundles/index.html.
 >>> $CLI bundle validate -t dev --output json
+Warning: Ignoring Databricks CLI version constraint for development build. Required: >= 0.238.0, current: $DEV_VERSION
 {
   "jobs": {
     "my_jobs_as_code_job": {

View File

@ -3,6 +3,7 @@
 bundle:
   name: my_jobs_as_code
   uuid: [UUID]
+  databricks_cli_version: ">= 0.238.0"
 experimental:
   python:

View File

@ -3,11 +3,7 @@ trace $CLI bundle init experimental-jobs-as-code --config-file ./input.json --ou
 cd output/my_jobs_as_code
 # silence uv output because it's non-deterministic
-uv sync 2> /dev/null
+uv sync -q
-# remove version constraint because it always creates a warning on dev builds
-cat databricks.yml | grep -v databricks_cli_version > databricks.yml.new
-mv databricks.yml.new databricks.yml
 trace $CLI bundle validate -t dev --output json | jq ".resources"

View File

@ -0,0 +1,3 @@
Error: not a bundle template: expected to find a template schema file at databricks_template_schema.json
Exit code: 1

View File

@ -0,0 +1,2 @@
export NO_COLOR=1
$CLI bundle init /DOES/NOT/EXIST

View File

@ -0,0 +1 @@
Badness = 'The error message should include full path: "expected to find a template schema file at databricks_template_schema.json"'

View File

@ -0,0 +1,5 @@
Error: git clone failed: git clone https://invalid-domain-123.databricks.com/hello/world $TMPDIR_GPARENT/world-123456 --no-tags --depth=1: exit status 128. Cloning into '$TMPDIR_GPARENT/world-123456'...
fatal: unable to access 'https://invalid-domain-123.databricks.com/hello/world/': Could not resolve host: invalid-domain-123.databricks.com
Exit code: 1

View File

@ -0,0 +1,2 @@
export NO_COLOR=1
$CLI bundle init https://invalid-domain-123.databricks.com/hello/world

View File

@ -0,0 +1,7 @@
[[Repls]]
Old = '\\'
New = '/'
[[Repls]]
Old = '/world-[0-9]+'
New = '/world-123456'

View File

@ -1,7 +1,5 @@
 >>> errcode $CLI bundle validate --var a=one -o json
-Exit code: 0
 {
   "a": {
     "default": "hello",

View File

@ -1,4 +1,4 @@
-Error: no value assigned to required variable a. Assignment can be done through the "--var" flag or by setting the BUNDLE_VAR_a environment variable
+Error: no value assigned to required variable a. Assignment can be done using "--var", by setting the BUNDLE_VAR_a environment variable, or in .databricks/bundle/<target>/variable-overrides.json file
 Name: empty${var.a}
 Target: default

View File

@ -9,7 +9,7 @@
"prod-a env-var-b" "prod-a env-var-b"
>>> errcode $CLI bundle validate -t env-missing-a-required-variable-assignment >>> errcode $CLI bundle validate -t env-missing-a-required-variable-assignment
Error: no value assigned to required variable b. Assignment can be done through the "--var" flag or by setting the BUNDLE_VAR_b environment variable Error: no value assigned to required variable b. Assignment can be done using "--var", by setting the BUNDLE_VAR_b environment variable, or in .databricks/bundle/<target>/variable-overrides.json file
Name: test bundle Name: test bundle
Target: env-missing-a-required-variable-assignment Target: env-missing-a-required-variable-assignment

View File

@ -0,0 +1,5 @@
{
"cluster_key": {
"node_type_id": "Standard_DS3_v2"
}
}

View File

@ -0,0 +1,7 @@
{
"cluster": {
"node_type_id": "Standard_DS3_v2"
},
"cluster_key": "mlops_stacks-cluster",
"cluster_workers": 2
}

View File

@ -0,0 +1,3 @@
{
"cluster": "mlops_stacks-cluster"
}

View File

@ -0,0 +1,3 @@
{
"cluster_key": "mlops_stacks-cluster-from-file"
}

View File

@ -0,0 +1,4 @@
{
"cluster_key": "mlops_stacks-cluster",
"cluster_workers": 2
}

View File

@ -0,0 +1 @@
!.databricks

View File

@ -0,0 +1,53 @@
bundle:
  name: TestResolveVariablesFromFile

variables:
  cluster:
    type: "complex"
  cluster_key:
  cluster_workers:

resources:
  jobs:
    job1:
      job_clusters:
        - job_cluster_key: ${var.cluster_key}
          new_cluster:
            node_type_id: "${var.cluster.node_type_id}"
            num_workers: ${var.cluster_workers}

targets:
  default:
    default: true
    variables:
      cluster_workers: 1
      cluster:
        node_type_id: "default"
      cluster_key: "default"
  without_defaults:
  complex_to_string:
    variables:
      cluster_workers: 1
      cluster:
        node_type_id: "default"
      cluster_key: "default"
  string_to_complex:
    variables:
      cluster_workers: 1
      cluster:
        node_type_id: "default"
      cluster_key: "default"
  wrong_file_structure:
  invalid_json:
  with_value:
    variables:
      cluster_workers: 1
      cluster:
        node_type_id: "default"
      cluster_key: cluster_key_value

View File

@ -0,0 +1,82 @@
=== variable file
>>> $CLI bundle validate -o json
{
"job_cluster_key": "mlops_stacks-cluster",
"new_cluster": {
"node_type_id": "Standard_DS3_v2",
"num_workers": 2
}
}
=== variable file and variable flag
>>> $CLI bundle validate -o json --var=cluster_key=mlops_stacks-cluster-overriden
{
"job_cluster_key": "mlops_stacks-cluster-overriden",
"new_cluster": {
"node_type_id": "Standard_DS3_v2",
"num_workers": 2
}
}
=== variable file and environment variable
>>> BUNDLE_VAR_cluster_key=mlops_stacks-cluster-overriden $CLI bundle validate -o json
{
"job_cluster_key": "mlops_stacks-cluster-overriden",
"new_cluster": {
"node_type_id": "Standard_DS3_v2",
"num_workers": 2
}
}
=== variable has value in config file
>>> $CLI bundle validate -o json --target with_value
{
"job_cluster_key": "mlops_stacks-cluster-from-file",
"new_cluster": {
"node_type_id": "default",
"num_workers": 1
}
}
=== file has variable that is complex but default is string
>>> errcode $CLI bundle validate -o json --target complex_to_string
Error: variable cluster_key is not of type complex, but the value in the variable file is a complex type
Exit code: 1
{
"job_cluster_key": "${var.cluster_key}",
"new_cluster": {
"node_type_id": "${var.cluster.node_type_id}",
"num_workers": "${var.cluster_workers}"
}
}
=== file has variable that is string but default is complex
>>> errcode $CLI bundle validate -o json --target string_to_complex
Error: variable cluster is of type complex, but the value in the variable file is not a complex type
Exit code: 1
{
"job_cluster_key": "${var.cluster_key}",
"new_cluster": {
"node_type_id": "${var.cluster.node_type_id}",
"num_workers": "${var.cluster_workers}"
}
}
=== variable is required but it's not provided in the file
>>> errcode $CLI bundle validate -o json --target without_defaults
Error: no value assigned to required variable cluster. Assignment can be done using "--var", by setting the BUNDLE_VAR_cluster environment variable, or in .databricks/bundle/<target>/variable-overrides.json file
Exit code: 1
{
"job_cluster_key": "${var.cluster_key}",
"new_cluster": {
"node_type_id": "${var.cluster.node_type_id}",
"num_workers": "${var.cluster_workers}"
}
}

View File

@ -0,0 +1,30 @@
cluster_expr=".resources.jobs.job1.job_clusters[0]"
# defaults from variable file, see .databricks/bundle/<target>/variable-overrides.json
title "variable file"
trace $CLI bundle validate -o json | jq $cluster_expr
title "variable file and variable flag"
trace $CLI bundle validate -o json --var="cluster_key=mlops_stacks-cluster-overriden" | jq $cluster_expr
title "variable file and environment variable"
trace BUNDLE_VAR_cluster_key=mlops_stacks-cluster-overriden $CLI bundle validate -o json | jq $cluster_expr
title "variable has value in config file"
trace $CLI bundle validate -o json --target with_value | jq $cluster_expr
# title "file cannot be parsed"
# trace errcode $CLI bundle validate -o json --target invalid_json | jq $cluster_expr
# title "file has wrong structure"
# trace errcode $CLI bundle validate -o json --target wrong_file_structure | jq $cluster_expr
title "file has variable that is complex but default is string"
trace errcode $CLI bundle validate -o json --target complex_to_string | jq $cluster_expr
title "file has variable that is string but default is complex"
trace errcode $CLI bundle validate -o json --target string_to_complex | jq $cluster_expr
title "variable is required but it's not provided in the file"
trace errcode $CLI bundle validate -o json --target without_defaults | jq $cluster_expr

View File

@ -3,7 +3,7 @@
"abc def" "abc def"
>>> errcode $CLI bundle validate >>> errcode $CLI bundle validate
Error: no value assigned to required variable b. Assignment can be done through the "--var" flag or by setting the BUNDLE_VAR_b environment variable Error: no value assigned to required variable b. Assignment can be done using "--var", by setting the BUNDLE_VAR_b environment variable, or in .databricks/bundle/<target>/variable-overrides.json file
Name: ${var.a} ${var.b} Name: ${var.a} ${var.b}
Target: default Target: default

acceptance/config_test.go Normal file
View File

@ -0,0 +1,104 @@
package acceptance_test

import (
	"os"
	"path/filepath"
	"sync"
	"testing"

	"github.com/BurntSushi/toml"
	"github.com/databricks/cli/libs/testdiff"
	"github.com/stretchr/testify/require"
)

const configFilename = "test.toml"

var (
	configCache map[string]TestConfig
	configMutex sync.Mutex
)

type TestConfig struct {
	// Place to describe what's wrong with this test. Does not affect how the test is run.
	Badness string

	// Which OSes the test is enabled on. Each string is compared against runtime.GOOS.
	// If absent, defaults to true.
	GOOS map[string]bool

	// List of additional replacements to apply on this test.
	// Old is a regexp, New is a replacement expression.
	Repls []testdiff.Replacement
}

// FindConfig finds the closest config file.
func FindConfig(t *testing.T, dir string) (string, bool) {
	shared := false
	for {
		path := filepath.Join(dir, configFilename)
		_, err := os.Stat(path)
		if err == nil {
			return path, shared
		}
		shared = true
		if dir == "" || dir == "." {
			break
		}
		if os.IsNotExist(err) {
			dir = filepath.Dir(dir)
			continue
		}
		t.Fatalf("Error while reading %s: %s", path, err)
	}
	t.Fatal("Config not found: " + configFilename)
	return "", shared
}

// LoadConfig loads the config file. Non-leaf configs are cached.
func LoadConfig(t *testing.T, dir string) (TestConfig, string) {
	path, leafConfig := FindConfig(t, dir)
	if leafConfig {
		return DoLoadConfig(t, path), path
	}

	configMutex.Lock()
	defer configMutex.Unlock()

	if configCache == nil {
		configCache = make(map[string]TestConfig)
	}

	result, ok := configCache[path]
	if ok {
		return result, path
	}

	result = DoLoadConfig(t, path)
	configCache[path] = result
	return result, path
}

func DoLoadConfig(t *testing.T, path string) TestConfig {
	bytes, err := os.ReadFile(path)
	if err != nil {
		t.Fatalf("failed to read config: %s", err)
	}

	var config TestConfig
	meta, err := toml.Decode(string(bytes), &config)
	require.NoError(t, err)

	keys := meta.Undecoded()
	if len(keys) > 0 {
		t.Fatalf("Undecoded keys in %s: %#v", path, keys)
	}

	return config
}
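
As a sketch of how a test could consume the loaded TestConfig (hypothetical helper, not part of this file; assumes regexp and runtime imports in addition to the ones above): skip when the current OS is disabled in test.toml, and normalize captured output with a replacement rule like the five-digit CUSTOM_NUMBER_REGEX one used in the selftest fixture further down.

// exampleConfigUsage is an illustrative sketch only: gate the test by OS and
// apply a [[Repls]]-style rewrite (Old = '\b[0-9]{5}\b', New = "CUSTOM_NUMBER_REGEX")
// to the captured output before comparing it against the expected file.
func exampleConfigUsage(t *testing.T, config TestConfig, output string) string {
	if enabled, ok := config.GOOS[runtime.GOOS]; ok && !enabled {
		t.Skipf("test disabled on %s by %s", runtime.GOOS, configFilename)
	}
	re := regexp.MustCompile(`\b[0-9]{5}\b`)
	return re.ReplaceAllString(output, "CUSTOM_NUMBER_REGEX")
}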

View File

@ -6,7 +6,9 @@ errcode() {
   local exit_code=$?
   # Re-enable 'set -e' if it was previously set
   set -e
-  >&2 printf "\nExit code: $exit_code\n"
+  if [ $exit_code -ne 0 ]; then
+    >&2 printf "\nExit code: $exit_code\n"
+  fi
 }
 trace() {
@ -40,3 +42,19 @@ git-repo-init() {
   git add databricks.yml
   git commit -qm 'Add databricks.yml'
 }
+
+title() {
+  local label="$1"
+  printf "\n=== %s" "$label"
+}
+
+withdir() {
+  local dir="$1"
+  shift
+  local orig_dir="$(pwd)"
+  cd "$dir" || return $?
+  "$@"
+  local exit_code=$?
+  cd "$orig_dir" || return $?
+  return $exit_code
+}

View File

@ -0,0 +1 @@
HELLO

View File

@ -0,0 +1,39 @@
=== Capturing STDERR
>>> python3 -c import sys; sys.stderr.write("STDERR\n")
STDERR
=== Capturing STDOUT
>>> python3 -c import sys; sys.stderr.write("STDOUT\n")
STDOUT
=== Capturing exit code
>>> errcode python3 -c raise SystemExit(5)
Exit code: 5
=== Capturing exit code (alt)
>>> python3 -c raise SystemExit(7)
Exit code: 7
=== Capturing pwd
>>> python3 -c import os; print(os.getcwd())
$TMPDIR
=== Capturing subdir
>>> mkdir -p subdir/a/b/c
>>> withdir subdir/a/b/c python3 -c import os; print(os.getcwd())
$TMPDIR/subdir/a/b/c
=== Custom output files - everything starting with out is captured and compared
>>> echo HELLO
=== Custom regex can be specified in [[Repls]] section
1234
CUSTOM_NUMBER_REGEX
123456
=== Testing --version
>>> $CLI --version
Databricks CLI v$DEV_VERSION

View File

@ -0,0 +1,29 @@
printf "=== Capturing STDERR"
trace python3 -c 'import sys; sys.stderr.write("STDERR\n")'
printf "\n=== Capturing STDOUT"
trace python3 -c 'import sys; sys.stderr.write("STDOUT\n")'
printf "\n=== Capturing exit code"
trace errcode python3 -c 'raise SystemExit(5)'
printf "\n=== Capturing exit code (alt)"
errcode trace python3 -c 'raise SystemExit(7)'
printf "\n=== Capturing pwd"
trace python3 -c 'import os; print(os.getcwd())'
printf "\n=== Capturing subdir"
trace mkdir -p subdir/a/b/c
trace withdir subdir/a/b/c python3 -c 'import os; print(os.getcwd())'
printf "\n=== Custom output files - everything starting with out is captured and compared"
trace echo HELLO > out.hello.txt
printf "\n=== Custom regex can be specified in [[Repl]] section\n"
echo 1234
echo 12345
echo 123456
printf "\n=== Testing --version"
trace $CLI --version

View File

@ -0,0 +1,20 @@
# Badness = "Brief description of what's wrong with the test output, if anything"
#[GOOS]
# Disable on Windows
#windows = false
# Disable on Mac
#mac = false
# Disable on Linux
#linux = false
[[Repls]]
Old = '\b[0-9]{5}\b'
New = "CUSTOM_NUMBER_REGEX"
[[Repls]]
# Fix path with reverse slashes in the output for Windows.
Old = '\$TMPDIR\\subdir\\a\\b\\c'
New = '$$TMPDIR/subdir/a/b/c'

View File

@ -68,7 +68,7 @@ func StartServer(t *testing.T) *TestServer {
} }
func AddHandlers(server *TestServer) { func AddHandlers(server *TestServer) {
server.Handle("/api/2.0/policies/clusters/list", func(r *http.Request) (any, error) { server.Handle("GET /api/2.0/policies/clusters/list", func(r *http.Request) (any, error) {
return compute.ListPoliciesResponse{ return compute.ListPoliciesResponse{
Policies: []compute.Policy{ Policies: []compute.Policy{
{ {
@ -83,7 +83,7 @@ func AddHandlers(server *TestServer) {
}, nil }, nil
}) })
server.Handle("/api/2.0/instance-pools/list", func(r *http.Request) (any, error) { server.Handle("GET /api/2.0/instance-pools/list", func(r *http.Request) (any, error) {
return compute.ListInstancePools{ return compute.ListInstancePools{
InstancePools: []compute.InstancePoolAndStats{ InstancePools: []compute.InstancePoolAndStats{
{ {
@ -94,7 +94,7 @@ func AddHandlers(server *TestServer) {
}, nil }, nil
}) })
server.Handle("/api/2.1/clusters/list", func(r *http.Request) (any, error) { server.Handle("GET /api/2.1/clusters/list", func(r *http.Request) (any, error) {
return compute.ListClustersResponse{ return compute.ListClustersResponse{
Clusters: []compute.ClusterDetails{ Clusters: []compute.ClusterDetails{
{ {
@ -109,13 +109,13 @@ func AddHandlers(server *TestServer) {
}, nil }, nil
}) })
server.Handle("/api/2.0/preview/scim/v2/Me", func(r *http.Request) (any, error) { server.Handle("GET /api/2.0/preview/scim/v2/Me", func(r *http.Request) (any, error) {
return iam.User{ return iam.User{
UserName: "tester@databricks.com", UserName: "tester@databricks.com",
}, nil }, nil
}) })
server.Handle("/api/2.0/workspace/get-status", func(r *http.Request) (any, error) { server.Handle("GET /api/2.0/workspace/get-status", func(r *http.Request) (any, error) {
return workspace.ObjectInfo{ return workspace.ObjectInfo{
ObjectId: 1001, ObjectId: 1001,
ObjectType: "DIRECTORY", ObjectType: "DIRECTORY",
@ -124,13 +124,13 @@ func AddHandlers(server *TestServer) {
}, nil }, nil
}) })
server.Handle("/api/2.1/unity-catalog/current-metastore-assignment", func(r *http.Request) (any, error) { server.Handle("GET /api/2.1/unity-catalog/current-metastore-assignment", func(r *http.Request) (any, error) {
return catalog.MetastoreAssignment{ return catalog.MetastoreAssignment{
DefaultCatalogName: "main", DefaultCatalogName: "main",
}, nil }, nil
}) })
server.Handle("/api/2.0/permissions/directories/1001", func(r *http.Request) (any, error) { server.Handle("GET /api/2.0/permissions/directories/1001", func(r *http.Request) (any, error) {
return workspace.WorkspaceObjectPermissions{ return workspace.WorkspaceObjectPermissions{
ObjectId: "1001", ObjectId: "1001",
ObjectType: "DIRECTORY", ObjectType: "DIRECTORY",
@ -146,4 +146,8 @@ func AddHandlers(server *TestServer) {
}, },
}, nil }, nil
}) })
server.Handle("POST /api/2.0/workspace/mkdirs", func(r *http.Request) (any, error) {
return "{}", nil
})
} }
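
The handler registrations above now embed the HTTP method in the route pattern ("GET /api/...", "POST /api/..."). A minimal standalone sketch of the same idea using the standard library's ServeMux method patterns (Go 1.22+); the acceptance TestServer wraps its own Handle helper, so this is only an illustration of the routing style, not the test server's code:

package main

import (
	"encoding/json"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// With Go 1.22+ patterns, the HTTP method is part of the route, so a GET
	// and a POST on the same path dispatch to different handlers.
	mux.HandleFunc("GET /api/2.0/preview/scim/v2/Me", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{"userName": "tester@databricks.com"})
	})
	mux.HandleFunc("POST /api/2.0/workspace/mkdirs", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("{}"))
	})

	http.ListenAndServe("127.0.0.1:8080", mux)
}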

acceptance/test.toml Normal file
View File

@ -0,0 +1,2 @@
# If neither the test directory nor any of its parents has a test.toml file, this file serves as the fallback configuration.
# The configurations are not merged across parents; the closest one is used fully.

View File

@ -2,6 +2,7 @@ package loader
import ( import (
"context" "context"
"fmt"
"path/filepath" "path/filepath"
"slices" "slices"
"strings" "strings"
@ -36,6 +37,7 @@ func (m *processRootIncludes) Apply(ctx context.Context, b *bundle.Bundle) diag.
// Maintain list of files in order of files being loaded. // Maintain list of files in order of files being loaded.
// This is stored in the bundle configuration for observability. // This is stored in the bundle configuration for observability.
var files []string var files []string
var diags diag.Diagnostics
// For each glob, find all files to load. // For each glob, find all files to load.
// Ordering of the list of globs is maintained in the output. // Ordering of the list of globs is maintained in the output.
@ -60,7 +62,7 @@ func (m *processRootIncludes) Apply(ctx context.Context, b *bundle.Bundle) diag.
// Filter matches to ones we haven't seen yet. // Filter matches to ones we haven't seen yet.
var includes []string var includes []string
for _, match := range matches { for i, match := range matches {
rel, err := filepath.Rel(b.BundleRootPath, match) rel, err := filepath.Rel(b.BundleRootPath, match)
if err != nil { if err != nil {
return diag.FromErr(err) return diag.FromErr(err)
@ -69,9 +71,22 @@ func (m *processRootIncludes) Apply(ctx context.Context, b *bundle.Bundle) diag.
continue continue
} }
seen[rel] = true seen[rel] = true
if filepath.Ext(rel) != ".yaml" && filepath.Ext(rel) != ".yml" {
diags = diags.Append(diag.Diagnostic{
Severity: diag.Error,
Summary: "Files in the 'include' configuration section must be YAML files.",
Detail: fmt.Sprintf("The file %s in the 'include' configuration section is not a YAML file, and only YAML files are supported. To include files to sync, specify them in the 'sync.include' configuration section instead.", rel),
Locations: b.Config.GetLocations(fmt.Sprintf("include[%d]", i)),
})
continue
}
includes = append(includes, rel) includes = append(includes, rel)
} }
if len(diags) > 0 {
return diags
}
// Add matches to list of mutators to return. // Add matches to list of mutators to return.
slices.Sort(includes) slices.Sort(includes)
files = append(files, includes...) files = append(files, includes...)

View File

@ -3,11 +3,14 @@ package mutator
import ( import (
"context" "context"
"fmt" "fmt"
"os"
"path/filepath"
"github.com/databricks/cli/bundle" "github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config/variable" "github.com/databricks/cli/bundle/config/variable"
"github.com/databricks/cli/libs/diag" "github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn" "github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/jsonloader"
"github.com/databricks/cli/libs/env" "github.com/databricks/cli/libs/env"
) )
@ -23,7 +26,11 @@ func (m *setVariables) Name() string {
return "SetVariables" return "SetVariables"
} }
func setVariable(ctx context.Context, v dyn.Value, variable *variable.Variable, name string) (dyn.Value, error) { func getDefaultVariableFilePath(target string) string {
return ".databricks/bundle/" + target + "/variable-overrides.json"
}
func setVariable(ctx context.Context, v dyn.Value, variable *variable.Variable, name string, fileDefault dyn.Value) (dyn.Value, error) {
// case: variable already has value initialized, so skip // case: variable already has value initialized, so skip
if variable.HasValue() { if variable.HasValue() {
return v, nil return v, nil
@ -49,6 +56,26 @@ func setVariable(ctx context.Context, v dyn.Value, variable *variable.Variable,
return v, nil return v, nil
} }
// case: Set the variable to the default value from the variable file
if fileDefault.Kind() != dyn.KindInvalid && fileDefault.Kind() != dyn.KindNil {
hasComplexType := variable.IsComplex()
hasComplexValue := fileDefault.Kind() == dyn.KindMap || fileDefault.Kind() == dyn.KindSequence
if hasComplexType && !hasComplexValue {
return dyn.InvalidValue, fmt.Errorf(`variable %s is of type complex, but the value in the variable file is not a complex type`, name)
}
if !hasComplexType && hasComplexValue {
return dyn.InvalidValue, fmt.Errorf(`variable %s is not of type complex, but the value in the variable file is a complex type`, name)
}
v, err := dyn.Set(v, "value", fileDefault)
if err != nil {
return dyn.InvalidValue, fmt.Errorf(`failed to assign default value from variable file to variable %s with error: %v`, name, err)
}
return v, nil
}
// case: Set the variable to its default value // case: Set the variable to its default value
if variable.HasDefault() { if variable.HasDefault() {
vDefault, err := dyn.Get(v, "default") vDefault, err := dyn.Get(v, "default")
@ -64,10 +91,43 @@ func setVariable(ctx context.Context, v dyn.Value, variable *variable.Variable,
} }
// We should have had a value to set for the variable at this point. // We should have had a value to set for the variable at this point.
return dyn.InvalidValue, fmt.Errorf(`no value assigned to required variable %s. Assignment can be done through the "--var" flag or by setting the %s environment variable`, name, bundleVarPrefix+name) return dyn.InvalidValue, fmt.Errorf(`no value assigned to required variable %s. Assignment can be done using "--var", by setting the %s environment variable, or in %s file`, name, bundleVarPrefix+name, getDefaultVariableFilePath("<target>"))
}
func readVariablesFromFile(b *bundle.Bundle) (dyn.Value, diag.Diagnostics) {
var diags diag.Diagnostics
filePath := filepath.Join(b.BundleRootPath, getDefaultVariableFilePath(b.Config.Bundle.Target))
if _, err := os.Stat(filePath); err != nil {
return dyn.InvalidValue, nil
}
f, err := os.ReadFile(filePath)
if err != nil {
return dyn.InvalidValue, diag.FromErr(fmt.Errorf("failed to read variables file: %w", err))
}
val, err := jsonloader.LoadJSON(f, filePath)
if err != nil {
return dyn.InvalidValue, diag.FromErr(fmt.Errorf("failed to parse variables file %s: %w", filePath, err))
}
if val.Kind() != dyn.KindMap {
return dyn.InvalidValue, diags.Append(diag.Diagnostic{
Severity: diag.Error,
Summary: fmt.Sprintf("failed to parse variables file %s: invalid format", filePath),
Detail: "Variables file must be a JSON object with the following format:\n{\"var1\": \"value1\", \"var2\": \"value2\"}",
})
}
return val, nil
} }
func (m *setVariables) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics { func (m *setVariables) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
defaults, diags := readVariablesFromFile(b)
if diags.HasError() {
return diags
}
err := b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) { err := b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) {
return dyn.Map(v, "variables", dyn.Foreach(func(p dyn.Path, variable dyn.Value) (dyn.Value, error) { return dyn.Map(v, "variables", dyn.Foreach(func(p dyn.Path, variable dyn.Value) (dyn.Value, error) {
name := p[1].Key() name := p[1].Key()
@ -76,9 +136,10 @@ func (m *setVariables) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
return dyn.InvalidValue, fmt.Errorf(`variable "%s" is not defined`, name) return dyn.InvalidValue, fmt.Errorf(`variable "%s" is not defined`, name)
} }
return setVariable(ctx, variable, v, name) fileDefault, _ := dyn.Get(defaults, name)
return setVariable(ctx, variable, v, name, fileDefault)
})) }))
}) })
return diag.FromErr(err) return diags.Extend(diag.FromErr(err))
} }
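
For illustration, a standalone sketch of reading the same variable-overrides.json format with plain encoding/json instead of the CLI's dyn/jsonloader (the path and output here are hypothetical): string values map to simple variables, while objects and arrays correspond to complex ones, mirroring the type check in the mutator above.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Hypothetical location for the "default" target; the CLI resolves this
	// path relative to the bundle root.
	path := ".databricks/bundle/default/variable-overrides.json"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "no variable overrides:", err)
		return
	}

	var overrides map[string]any
	if err := json.Unmarshal(data, &overrides); err != nil {
		fmt.Fprintln(os.Stderr, "failed to parse variables file:", err)
		return
	}

	for name, value := range overrides {
		switch value.(type) {
		case map[string]any, []any:
			fmt.Printf("%s: complex value %v\n", name, value)
		default:
			fmt.Printf("%s: simple value %v\n", name, value)
		}
	}
}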

View File

@ -25,7 +25,7 @@ func TestSetVariableFromProcessEnvVar(t *testing.T) {
v, err := convert.FromTyped(variable, dyn.NilValue) v, err := convert.FromTyped(variable, dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
v, err = setVariable(context.Background(), v, &variable, "foo") v, err = setVariable(context.Background(), v, &variable, "foo", dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
err = convert.ToTyped(&variable, v) err = convert.ToTyped(&variable, v)
@ -43,7 +43,7 @@ func TestSetVariableUsingDefaultValue(t *testing.T) {
v, err := convert.FromTyped(variable, dyn.NilValue) v, err := convert.FromTyped(variable, dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
v, err = setVariable(context.Background(), v, &variable, "foo") v, err = setVariable(context.Background(), v, &variable, "foo", dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
err = convert.ToTyped(&variable, v) err = convert.ToTyped(&variable, v)
@ -65,7 +65,7 @@ func TestSetVariableWhenAlreadyAValueIsAssigned(t *testing.T) {
v, err := convert.FromTyped(variable, dyn.NilValue) v, err := convert.FromTyped(variable, dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
v, err = setVariable(context.Background(), v, &variable, "foo") v, err = setVariable(context.Background(), v, &variable, "foo", dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
err = convert.ToTyped(&variable, v) err = convert.ToTyped(&variable, v)
@ -90,7 +90,7 @@ func TestSetVariableEnvVarValueDoesNotOverridePresetValue(t *testing.T) {
v, err := convert.FromTyped(variable, dyn.NilValue) v, err := convert.FromTyped(variable, dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
v, err = setVariable(context.Background(), v, &variable, "foo") v, err = setVariable(context.Background(), v, &variable, "foo", dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
err = convert.ToTyped(&variable, v) err = convert.ToTyped(&variable, v)
@ -107,8 +107,8 @@ func TestSetVariablesErrorsIfAValueCouldNotBeResolved(t *testing.T) {
v, err := convert.FromTyped(variable, dyn.NilValue) v, err := convert.FromTyped(variable, dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
_, err = setVariable(context.Background(), v, &variable, "foo") _, err = setVariable(context.Background(), v, &variable, "foo", dyn.NilValue)
assert.ErrorContains(t, err, "no value assigned to required variable foo. Assignment can be done through the \"--var\" flag or by setting the BUNDLE_VAR_foo environment variable") assert.ErrorContains(t, err, "no value assigned to required variable foo. Assignment can be done using \"--var\", by setting the BUNDLE_VAR_foo environment variable, or in .databricks/bundle/<target>/variable-overrides.json file")
} }
func TestSetVariablesMutator(t *testing.T) { func TestSetVariablesMutator(t *testing.T) {
@ -157,6 +157,6 @@ func TestSetComplexVariablesViaEnvVariablesIsNotAllowed(t *testing.T) {
v, err := convert.FromTyped(variable, dyn.NilValue) v, err := convert.FromTyped(variable, dyn.NilValue)
require.NoError(t, err) require.NoError(t, err)
_, err = setVariable(context.Background(), v, &variable, "foo") _, err = setVariable(context.Background(), v, &variable, "foo", dyn.NilValue)
assert.ErrorContains(t, err, "setting via environment variables (BUNDLE_VAR_foo) is not supported for complex variable foo") assert.ErrorContains(t, err, "setting via environment variables (BUNDLE_VAR_foo) is not supported for complex variable foo")
} }

View File

@ -36,11 +36,12 @@ type Variable struct {
 // This field stores the resolved value for the variable. The variable are
 // resolved in the following priority order (from highest to lowest)
 //
-// 1. Command line flag. For example: `--var="foo=bar"`
-// 2. Target variable. eg: BUNDLE_VAR_foo=bar
-// 3. Default value as defined in the applicable environments block
-// 4. Default value defined in variable definition
-// 5. Throw error, since if no default value is defined, then the variable
+// 1. Command line flag `--var="foo=bar"`
+// 2. Environment variable. eg: BUNDLE_VAR_foo=bar
+// 3. Load defaults from .databricks/bundle/<target>/variable-overrides.json
+// 4. Default value as defined in the applicable targets block
+// 5. Default value defined in variable definition
+// 6. Throw error, since if no default value is defined, then the variable
 // is required
 Value VariableValue `json:"value,omitempty" bundle:"readonly"`
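
A schematic sketch of the resolution order documented above; the function signature and parameter names are hypothetical, not the CLI's API:

package main

import "fmt"

// resolveValue returns the first value available in priority order.
func resolveValue(flagValue, envValue string, fileDefault, targetDefault, defaultValue any) (any, error) {
	switch {
	case flagValue != "":
		return flagValue, nil // 1. --var flag
	case envValue != "":
		return envValue, nil // 2. BUNDLE_VAR_<name> environment variable
	case fileDefault != nil:
		return fileDefault, nil // 3. .databricks/bundle/<target>/variable-overrides.json
	case targetDefault != nil:
		return targetDefault, nil // 4. default from the matching targets block
	case defaultValue != nil:
		return defaultValue, nil // 5. default from the variable definition
	default:
		return nil, fmt.Errorf("no value assigned to required variable") // 6. error
	}
}

func main() {
	v, err := resolveValue("", "env-var-b", nil, nil, nil)
	fmt.Println(v, err) // env-var-b <nil>
}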

View File

@ -419,7 +419,7 @@ func TestBundleToTerraformModelServing(t *testing.T) {
src := resources.ModelServingEndpoint{ src := resources.ModelServingEndpoint{
CreateServingEndpoint: &serving.CreateServingEndpoint{ CreateServingEndpoint: &serving.CreateServingEndpoint{
Name: "name", Name: "name",
Config: serving.EndpointCoreConfigInput{ Config: &serving.EndpointCoreConfigInput{
ServedModels: []serving.ServedModelInput{ ServedModels: []serving.ServedModelInput{
{ {
ModelName: "model_name", ModelName: "model_name",
@ -474,7 +474,7 @@ func TestBundleToTerraformModelServingPermissions(t *testing.T) {
// and as such observed the `omitempty` tag. // and as such observed the `omitempty` tag.
// The new method leverages [dyn.Value] where any field that is not // The new method leverages [dyn.Value] where any field that is not
// explicitly set is not part of the value. // explicitly set is not part of the value.
Config: serving.EndpointCoreConfigInput{ Config: &serving.EndpointCoreConfigInput{
ServedModels: []serving.ServedModelInput{ ServedModels: []serving.ServedModelInput{
{ {
ModelName: "model_name", ModelName: "model_name",

View File

@ -17,7 +17,7 @@ func TestConvertModelServingEndpoint(t *testing.T) {
src := resources.ModelServingEndpoint{ src := resources.ModelServingEndpoint{
CreateServingEndpoint: &serving.CreateServingEndpoint{ CreateServingEndpoint: &serving.CreateServingEndpoint{
Name: "name", Name: "name",
Config: serving.EndpointCoreConfigInput{ Config: &serving.EndpointCoreConfigInput{
ServedModels: []serving.ServedModelInput{ ServedModels: []serving.ServedModelInput{
{ {
ModelName: "model_name", ModelName: "model_name",

View File

@ -353,12 +353,12 @@ github.com/databricks/cli/bundle/config/resources.MlflowModel:
github.com/databricks/cli/bundle/config/resources.ModelServingEndpoint: github.com/databricks/cli/bundle/config/resources.ModelServingEndpoint:
"ai_gateway": "ai_gateway":
"description": |- "description": |-
The AI Gateway configuration for the serving endpoint. NOTE: only external model endpoints are supported as of now. The AI Gateway configuration for the serving endpoint. NOTE: Only external model and provisioned throughput endpoints are currently supported.
"config": "config":
"description": |- "description": |-
The core config of the serving endpoint. The core config of the serving endpoint.
"name": "name":
"description": | "description": |-
The name of the serving endpoint. This field is required and must be unique across a Databricks workspace. The name of the serving endpoint. This field is required and must be unique across a Databricks workspace.
An endpoint name can consist of alphanumeric characters, dashes, and underscores. An endpoint name can consist of alphanumeric characters, dashes, and underscores.
"rate_limits": "rate_limits":
@ -1974,6 +1974,9 @@ github.com/databricks/databricks-sdk-go/service/jobs.SparkJarTask:
Parameters passed to the main method. Parameters passed to the main method.
Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs. Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs.
"run_as_repl":
"description": |-
Deprecated. A value of `false` is no longer supported.
github.com/databricks/databricks-sdk-go/service/jobs.SparkPythonTask: github.com/databricks/databricks-sdk-go/service/jobs.SparkPythonTask:
"parameters": "parameters":
"description": |- "description": |-
@ -2684,27 +2687,36 @@ github.com/databricks/databricks-sdk-go/service/pipelines.TableSpecificConfigScd
github.com/databricks/databricks-sdk-go/service/serving.Ai21LabsConfig: github.com/databricks/databricks-sdk-go/service/serving.Ai21LabsConfig:
"ai21labs_api_key": "ai21labs_api_key":
"description": |- "description": |-
The Databricks secret key reference for an AI21 Labs API key. If you prefer to paste your API key directly, see `ai21labs_api_key_plaintext`. You must provide an API key using one of the following fields: `ai21labs_api_key` or `ai21labs_api_key_plaintext`. The Databricks secret key reference for an AI21 Labs API key. If you
prefer to paste your API key directly, see `ai21labs_api_key_plaintext`.
You must provide an API key using one of the following fields:
`ai21labs_api_key` or `ai21labs_api_key_plaintext`.
"ai21labs_api_key_plaintext": "ai21labs_api_key_plaintext":
"description": |- "description": |-
An AI21 Labs API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `ai21labs_api_key`. You must provide an API key using one of the following fields: `ai21labs_api_key` or `ai21labs_api_key_plaintext`. An AI21 Labs API key provided as a plaintext string. If you prefer to
reference your key using Databricks Secrets, see `ai21labs_api_key`. You
must provide an API key using one of the following fields:
`ai21labs_api_key` or `ai21labs_api_key_plaintext`.
github.com/databricks/databricks-sdk-go/service/serving.AiGatewayConfig: github.com/databricks/databricks-sdk-go/service/serving.AiGatewayConfig:
"guardrails": "guardrails":
"description": |- "description": |-
Configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses. Configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses.
"inference_table_config": "inference_table_config":
"description": |- "description": |-
Configuration for payload logging using inference tables. Use these tables to monitor and audit data being sent to and received from model APIs and to improve model quality. Configuration for payload logging using inference tables.
Use these tables to monitor and audit data being sent to and received from model APIs and to improve model quality.
"rate_limits": "rate_limits":
"description": |- "description": |-
Configuration for rate limits which can be set to limit endpoint traffic. Configuration for rate limits which can be set to limit endpoint traffic.
"usage_tracking_config": "usage_tracking_config":
"description": |- "description": |-
Configuration to enable usage tracking using system tables. These tables allow you to monitor operational usage on endpoints and their associated costs. Configuration to enable usage tracking using system tables.
These tables allow you to monitor operational usage on endpoints and their associated costs.
github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrailParameters: github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrailParameters:
"invalid_keywords": "invalid_keywords":
"description": |- "description": |-
List of invalid keywords. AI guardrail uses keyword or string matching to decide if the keyword exists in the request or response content. List of invalid keywords.
AI guardrail uses keyword or string matching to decide if the keyword exists in the request or response content.
"pii": "pii":
"description": |- "description": |-
Configuration for guardrail PII filter. Configuration for guardrail PII filter.
@ -2713,15 +2725,14 @@ github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrailParame
Indicates whether the safety filter is enabled. Indicates whether the safety filter is enabled.
"valid_topics": "valid_topics":
"description": |- "description": |-
The list of allowed topics. Given a chat request, this guardrail flags the request if its topic is not in the allowed topics. The list of allowed topics.
Given a chat request, this guardrail flags the request if its topic is not in the allowed topics.
github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrailPiiBehavior: github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrailPiiBehavior:
"behavior": "behavior":
"description": |- "description": |-
Behavior for PII filter. Currently only 'BLOCK' is supported. If 'BLOCK' is set for the input guardrail and the request contains PII, the request is not sent to the model server and 400 status code is returned; if 'BLOCK' is set for the output guardrail and the model response contains PII, the PII info in the response is redacted and 400 status code is returned. Configuration for input guardrail filters.
github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrailPiiBehaviorBehavior: github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrailPiiBehaviorBehavior:
"_": "_":
"description": |-
Behavior for PII filter. Currently only 'BLOCK' is supported. If 'BLOCK' is set for the input guardrail and the request contains PII, the request is not sent to the model server and 400 status code is returned; if 'BLOCK' is set for the output guardrail and the model response contains PII, the PII info in the response is redacted and 400 status code is returned.
"enum": "enum":
- |- - |-
NONE NONE
@ -2737,30 +2748,32 @@ github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrails:
github.com/databricks/databricks-sdk-go/service/serving.AiGatewayInferenceTableConfig: github.com/databricks/databricks-sdk-go/service/serving.AiGatewayInferenceTableConfig:
"catalog_name": "catalog_name":
"description": |- "description": |-
The name of the catalog in Unity Catalog. Required when enabling inference tables. NOTE: On update, you have to disable inference table first in order to change the catalog name. The name of the catalog in Unity Catalog. Required when enabling inference tables.
NOTE: On update, you have to disable inference table first in order to change the catalog name.
"enabled": "enabled":
"description": |- "description": |-
Indicates whether the inference table is enabled. Indicates whether the inference table is enabled.
"schema_name": "schema_name":
"description": |- "description": |-
The name of the schema in Unity Catalog. Required when enabling inference tables. NOTE: On update, you have to disable inference table first in order to change the schema name. The name of the schema in Unity Catalog. Required when enabling inference tables.
NOTE: On update, you have to disable inference table first in order to change the schema name.
"table_name_prefix": "table_name_prefix":
"description": |- "description": |-
The prefix of the table in Unity Catalog. NOTE: On update, you have to disable inference table first in order to change the prefix name. The prefix of the table in Unity Catalog.
NOTE: On update, you have to disable inference table first in order to change the prefix name.
github.com/databricks/databricks-sdk-go/service/serving.AiGatewayRateLimit: github.com/databricks/databricks-sdk-go/service/serving.AiGatewayRateLimit:
"calls": "calls":
"description": |- "description": |-
Used to specify how many calls are allowed for a key within the renewal_period. Used to specify how many calls are allowed for a key within the renewal_period.
"key": "key":
"description": |- "description": |-
Key field for a rate limit. Currently, only 'user' and 'endpoint' are supported, with 'endpoint' being the default if not specified. Key field for a rate limit. Currently, only 'user' and 'endpoint' are supported,
with 'endpoint' being the default if not specified.
"renewal_period": "renewal_period":
"description": |- "description": |-
Renewal period field for a rate limit. Currently, only 'minute' is supported. Renewal period field for a rate limit. Currently, only 'minute' is supported.
github.com/databricks/databricks-sdk-go/service/serving.AiGatewayRateLimitKey: github.com/databricks/databricks-sdk-go/service/serving.AiGatewayRateLimitKey:
"_": "_":
"description": |-
Key field for a rate limit. Currently, only 'user' and 'endpoint' are supported, with 'endpoint' being the default if not specified.
"enum": "enum":
- |- - |-
user user
@ -2768,8 +2781,6 @@ github.com/databricks/databricks-sdk-go/service/serving.AiGatewayRateLimitKey:
endpoint endpoint
github.com/databricks/databricks-sdk-go/service/serving.AiGatewayRateLimitRenewalPeriod: github.com/databricks/databricks-sdk-go/service/serving.AiGatewayRateLimitRenewalPeriod:
"_": "_":
"description": |-
Renewal period field for a rate limit. Currently, only 'minute' is supported.
"enum": "enum":
- |- - |-
minute minute
@ -2780,26 +2791,43 @@ github.com/databricks/databricks-sdk-go/service/serving.AiGatewayUsageTrackingCo
github.com/databricks/databricks-sdk-go/service/serving.AmazonBedrockConfig: github.com/databricks/databricks-sdk-go/service/serving.AmazonBedrockConfig:
"aws_access_key_id": "aws_access_key_id":
"description": |- "description": |-
The Databricks secret key reference for an AWS access key ID with permissions to interact with Bedrock services. If you prefer to paste your API key directly, see `aws_access_key_id`. You must provide an API key using one of the following fields: `aws_access_key_id` or `aws_access_key_id_plaintext`. The Databricks secret key reference for an AWS access key ID with
permissions to interact with Bedrock services. If you prefer to paste
your API key directly, see `aws_access_key_id_plaintext`. You must provide an API
key using one of the following fields: `aws_access_key_id` or
`aws_access_key_id_plaintext`.
"aws_access_key_id_plaintext": "aws_access_key_id_plaintext":
"description": |- "description": |-
An AWS access key ID with permissions to interact with Bedrock services provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `aws_access_key_id`. You must provide an API key using one of the following fields: `aws_access_key_id` or `aws_access_key_id_plaintext`. An AWS access key ID with permissions to interact with Bedrock services
provided as a plaintext string. If you prefer to reference your key using
Databricks Secrets, see `aws_access_key_id`. You must provide an API key
using one of the following fields: `aws_access_key_id` or
`aws_access_key_id_plaintext`.
"aws_region": "aws_region":
"description": |- "description": |-
The AWS region to use. Bedrock has to be enabled there. The AWS region to use. Bedrock has to be enabled there.
"aws_secret_access_key": "aws_secret_access_key":
"description": |- "description": |-
The Databricks secret key reference for an AWS secret access key paired with the access key ID, with permissions to interact with Bedrock services. If you prefer to paste your API key directly, see `aws_secret_access_key_plaintext`. You must provide an API key using one of the following fields: `aws_secret_access_key` or `aws_secret_access_key_plaintext`. The Databricks secret key reference for an AWS secret access key paired
with the access key ID, with permissions to interact with Bedrock
services. If you prefer to paste your API key directly, see
`aws_secret_access_key_plaintext`. You must provide an API key using one
of the following fields: `aws_secret_access_key` or
`aws_secret_access_key_plaintext`.
"aws_secret_access_key_plaintext": "aws_secret_access_key_plaintext":
"description": |- "description": |-
An AWS secret access key paired with the access key ID, with permissions to interact with Bedrock services provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `aws_secret_access_key`. You must provide an API key using one of the following fields: `aws_secret_access_key` or `aws_secret_access_key_plaintext`. An AWS secret access key paired with the access key ID, with permissions
to interact with Bedrock services provided as a plaintext string. If you
prefer to reference your key using Databricks Secrets, see
`aws_secret_access_key`. You must provide an API key using one of the
following fields: `aws_secret_access_key` or
`aws_secret_access_key_plaintext`.
"bedrock_provider": "bedrock_provider":
"description": |- "description": |-
The underlying provider in Amazon Bedrock. Supported values (case insensitive) include: Anthropic, Cohere, AI21Labs, Amazon. The underlying provider in Amazon Bedrock. Supported values (case
insensitive) include: Anthropic, Cohere, AI21Labs, Amazon.
github.com/databricks/databricks-sdk-go/service/serving.AmazonBedrockConfigBedrockProvider: github.com/databricks/databricks-sdk-go/service/serving.AmazonBedrockConfigBedrockProvider:
"_": "_":
"description": |-
The underlying provider in Amazon Bedrock. Supported values (case insensitive) include: Anthropic, Cohere, AI21Labs, Amazon.
"enum": "enum":
- |- - |-
anthropic anthropic
@ -2812,10 +2840,16 @@ github.com/databricks/databricks-sdk-go/service/serving.AmazonBedrockConfigBedro
github.com/databricks/databricks-sdk-go/service/serving.AnthropicConfig: github.com/databricks/databricks-sdk-go/service/serving.AnthropicConfig:
"anthropic_api_key": "anthropic_api_key":
"description": |- "description": |-
The Databricks secret key reference for an Anthropic API key. If you prefer to paste your API key directly, see `anthropic_api_key_plaintext`. You must provide an API key using one of the following fields: `anthropic_api_key` or `anthropic_api_key_plaintext`. The Databricks secret key reference for an Anthropic API key. If you
prefer to paste your API key directly, see `anthropic_api_key_plaintext`.
You must provide an API key using one of the following fields:
`anthropic_api_key` or `anthropic_api_key_plaintext`.
"anthropic_api_key_plaintext": "anthropic_api_key_plaintext":
"description": |- "description": |-
The Anthropic API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `anthropic_api_key`. You must provide an API key using one of the following fields: `anthropic_api_key` or `anthropic_api_key_plaintext`. The Anthropic API key provided as a plaintext string. If you prefer to
reference your key using Databricks Secrets, see `anthropic_api_key`. You
must provide an API key using one of the following fields:
`anthropic_api_key` or `anthropic_api_key_plaintext`.
github.com/databricks/databricks-sdk-go/service/serving.AutoCaptureConfigInput: github.com/databricks/databricks-sdk-go/service/serving.AutoCaptureConfigInput:
"catalog_name": "catalog_name":
"description": |- "description": |-
@ -2831,42 +2865,58 @@ github.com/databricks/databricks-sdk-go/service/serving.AutoCaptureConfigInput:
The prefix of the table in Unity Catalog. NOTE: On update, you cannot change the prefix name if the inference table is already enabled. The prefix of the table in Unity Catalog. NOTE: On update, you cannot change the prefix name if the inference table is already enabled.
github.com/databricks/databricks-sdk-go/service/serving.CohereConfig: github.com/databricks/databricks-sdk-go/service/serving.CohereConfig:
"cohere_api_base": "cohere_api_base":
"description": "This is an optional field to provide a customized base URL for the Cohere API. \nIf left unspecified, the standard Cohere base URL is used.\n" "description": |-
This is an optional field to provide a customized base URL for the Cohere
API. If left unspecified, the standard Cohere base URL is used.
"cohere_api_key": "cohere_api_key":
"description": |- "description": |-
The Databricks secret key reference for a Cohere API key. If you prefer to paste your API key directly, see `cohere_api_key_plaintext`. You must provide an API key using one of the following fields: `cohere_api_key` or `cohere_api_key_plaintext`. The Databricks secret key reference for a Cohere API key. If you prefer
to paste your API key directly, see `cohere_api_key_plaintext`. You must
provide an API key using one of the following fields: `cohere_api_key` or
`cohere_api_key_plaintext`.
"cohere_api_key_plaintext": "cohere_api_key_plaintext":
"description": |- "description": |-
The Cohere API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `cohere_api_key`. You must provide an API key using one of the following fields: `cohere_api_key` or `cohere_api_key_plaintext`. The Cohere API key provided as a plaintext string. If you prefer to
reference your key using Databricks Secrets, see `cohere_api_key`. You
must provide an API key using one of the following fields:
`cohere_api_key` or `cohere_api_key_plaintext`.
github.com/databricks/databricks-sdk-go/service/serving.DatabricksModelServingConfig: github.com/databricks/databricks-sdk-go/service/serving.DatabricksModelServingConfig:
"databricks_api_token": "databricks_api_token":
"description": | "description": |-
The Databricks secret key reference for a Databricks API token that corresponds to a user or service The Databricks secret key reference for a Databricks API token that
principal with Can Query access to the model serving endpoint pointed to by this external model. corresponds to a user or service principal with Can Query access to the
If you prefer to paste your API key directly, see `databricks_api_token_plaintext`. model serving endpoint pointed to by this external model. If you prefer
You must provide an API key using one of the following fields: `databricks_api_token` or `databricks_api_token_plaintext`. to paste your API key directly, see `databricks_api_token_plaintext`. You
must provide an API key using one of the following fields:
`databricks_api_token` or `databricks_api_token_plaintext`.
"databricks_api_token_plaintext": "databricks_api_token_plaintext":
"description": | "description": |-
The Databricks API token that corresponds to a user or service The Databricks API token that corresponds to a user or service principal
principal with Can Query access to the model serving endpoint pointed to by this external model provided as a plaintext string. with Can Query access to the model serving endpoint pointed to by this
If you prefer to reference your key using Databricks Secrets, see `databricks_api_token`. external model provided as a plaintext string. If you prefer to reference
You must provide an API key using one of the following fields: `databricks_api_token` or `databricks_api_token_plaintext`. your key using Databricks Secrets, see `databricks_api_token`. You must
provide an API key using one of the following fields:
`databricks_api_token` or `databricks_api_token_plaintext`.
"databricks_workspace_url": "databricks_workspace_url":
"description": | "description": |-
The URL of the Databricks workspace containing the model serving endpoint pointed to by this external model. The URL of the Databricks workspace containing the model serving endpoint
pointed to by this external model.
github.com/databricks/databricks-sdk-go/service/serving.EndpointCoreConfigInput: github.com/databricks/databricks-sdk-go/service/serving.EndpointCoreConfigInput:
"auto_capture_config": "auto_capture_config":
"description": |- "description": |-
Configuration for Inference Tables which automatically logs requests and responses to Unity Catalog. Configuration for Inference Tables which automatically logs requests and responses to Unity Catalog.
Note: this field is deprecated for creating new provisioned throughput endpoints,
or updating existing provisioned throughput endpoints that never have inference table configured;
in these cases please use AI Gateway to manage inference tables.
"served_entities": "served_entities":
"description": |- "description": |-
A list of served entities for the endpoint to serve. A serving endpoint can have up to 15 served entities. The list of served entities under the serving endpoint config.
"served_models": "served_models":
"description": |- "description": |-
(Deprecated, use served_entities instead) A list of served models for the endpoint to serve. A serving endpoint can have up to 15 served models. (Deprecated, use served_entities instead) The list of served models under the serving endpoint config.
"traffic_config": "traffic_config":
"description": |- "description": |-
The traffic config defining how invocations to the serving endpoint should be routed. The traffic configuration associated with the serving endpoint config.
github.com/databricks/databricks-sdk-go/service/serving.EndpointTag: github.com/databricks/databricks-sdk-go/service/serving.EndpointTag:
"key": "key":
"description": |- "description": |-
@@ -2903,17 +2953,13 @@ github.com/databricks/databricks-sdk-go/service/serving.ExternalModel:
    "description": |-
      PaLM Config. Only required if the provider is 'palm'.
  "provider":
-    "description": |
-      The name of the provider for the external model. Currently, the supported providers are 'ai21labs', 'anthropic',
-      'amazon-bedrock', 'cohere', 'databricks-model-serving', 'google-cloud-vertex-ai', 'openai', and 'palm'.",
+    "description": |-
+      The name of the provider for the external model. Currently, the supported providers are 'ai21labs', 'anthropic', 'amazon-bedrock', 'cohere', 'databricks-model-serving', 'google-cloud-vertex-ai', 'openai', and 'palm'.
  "task":
    "description": |-
      The task type of the external model.
github.com/databricks/databricks-sdk-go/service/serving.ExternalModelProvider:
  "_":
-    "description": |
-      The name of the provider for the external model. Currently, the supported providers are 'ai21labs', 'anthropic',
-      'amazon-bedrock', 'cohere', 'databricks-model-serving', 'google-cloud-vertex-ai', 'openai', and 'palm'.",
    "enum":
    - |-
      ai21labs
@@ -2934,70 +2980,114 @@ github.com/databricks/databricks-sdk-go/service/serving.ExternalModelProvider:
github.com/databricks/databricks-sdk-go/service/serving.GoogleCloudVertexAiConfig:
  "private_key":
    "description": |-
-      The Databricks secret key reference for a private key for the service account which has access to the Google Cloud Vertex AI Service. See [Best practices for managing service account keys](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys). If you prefer to paste your API key directly, see `private_key_plaintext`. You must provide an API key using one of the following fields: `private_key` or `private_key_plaintext`
+      The Databricks secret key reference for a private key for the service
+      account which has access to the Google Cloud Vertex AI Service. See [Best
+      practices for managing service account keys]. If you prefer to paste your
+      API key directly, see `private_key_plaintext`. You must provide an API
+      key using one of the following fields: `private_key` or
+      `private_key_plaintext`
+
+      [Best practices for managing service account keys]: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
  "private_key_plaintext":
    "description": |-
-      The private key for the service account which has access to the Google Cloud Vertex AI Service provided as a plaintext secret. See [Best practices for managing service account keys](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys). If you prefer to reference your key using Databricks Secrets, see `private_key`. You must provide an API key using one of the following fields: `private_key` or `private_key_plaintext`.
+      The private key for the service account which has access to the Google
+      Cloud Vertex AI Service provided as a plaintext secret. See [Best
+      practices for managing service account keys]. If you prefer to reference
+      your key using Databricks Secrets, see `private_key`. You must provide an
+      API key using one of the following fields: `private_key` or
+      `private_key_plaintext`.
+
+      [Best practices for managing service account keys]: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
  "project_id":
    "description": |-
-      This is the Google Cloud project id that the service account is associated with.
+      This is the Google Cloud project id that the service account is
+      associated with.
  "region":
    "description": |-
-      This is the region for the Google Cloud Vertex AI Service. See [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations) for more details. Some models are only available in specific regions.
+      This is the region for the Google Cloud Vertex AI Service. See [supported
+      regions] for more details. Some models are only available in specific
+      regions.
+
+      [supported regions]: https://cloud.google.com/vertex-ai/docs/general/locations
github.com/databricks/databricks-sdk-go/service/serving.OpenAiConfig:
-  "_":
-    "description": |-
-      Configs needed to create an OpenAI model route.
  "microsoft_entra_client_id":
-    "description": |
-      This field is only required for Azure AD OpenAI and is the Microsoft Entra Client ID.
+    "description": |-
+      This field is only required for Azure AD OpenAI and is the Microsoft
+      Entra Client ID.
  "microsoft_entra_client_secret":
-    "description": |
-      The Databricks secret key reference for a client secret used for Microsoft Entra ID authentication.
-      If you prefer to paste your client secret directly, see `microsoft_entra_client_secret_plaintext`.
-      You must provide an API key using one of the following fields: `microsoft_entra_client_secret` or `microsoft_entra_client_secret_plaintext`.
+    "description": |-
+      The Databricks secret key reference for a client secret used for
+      Microsoft Entra ID authentication. If you prefer to paste your client
+      secret directly, see `microsoft_entra_client_secret_plaintext`. You must
+      provide an API key using one of the following fields:
+      `microsoft_entra_client_secret` or
+      `microsoft_entra_client_secret_plaintext`.
  "microsoft_entra_client_secret_plaintext":
-    "description": |
-      The client secret used for Microsoft Entra ID authentication provided as a plaintext string.
-      If you prefer to reference your key using Databricks Secrets, see `microsoft_entra_client_secret`.
-      You must provide an API key using one of the following fields: `microsoft_entra_client_secret` or `microsoft_entra_client_secret_plaintext`.
+    "description": |-
+      The client secret used for Microsoft Entra ID authentication provided as
+      a plaintext string. If you prefer to reference your key using Databricks
+      Secrets, see `microsoft_entra_client_secret`. You must provide an API key
+      using one of the following fields: `microsoft_entra_client_secret` or
+      `microsoft_entra_client_secret_plaintext`.
  "microsoft_entra_tenant_id":
-    "description": |
-      This field is only required for Azure AD OpenAI and is the Microsoft Entra Tenant ID.
+    "description": |-
+      This field is only required for Azure AD OpenAI and is the Microsoft
+      Entra Tenant ID.
  "openai_api_base":
-    "description": |
-      This is a field to provide a customized base URl for the OpenAI API.
-      For Azure OpenAI, this field is required, and is the base URL for the Azure OpenAI API service
-      provided by Azure.
-      For other OpenAI API types, this field is optional, and if left unspecified, the standard OpenAI base URL is used.
+    "description": |-
+      This is a field to provide a customized base URl for the OpenAI API. For
+      Azure OpenAI, this field is required, and is the base URL for the Azure
+      OpenAI API service provided by Azure. For other OpenAI API types, this
+      field is optional, and if left unspecified, the standard OpenAI base URL
+      is used.
  "openai_api_key":
    "description": |-
-      The Databricks secret key reference for an OpenAI API key using the OpenAI or Azure service. If you prefer to paste your API key directly, see `openai_api_key_plaintext`. You must provide an API key using one of the following fields: `openai_api_key` or `openai_api_key_plaintext`.
+      The Databricks secret key reference for an OpenAI API key using the
+      OpenAI or Azure service. If you prefer to paste your API key directly,
+      see `openai_api_key_plaintext`. You must provide an API key using one of
+      the following fields: `openai_api_key` or `openai_api_key_plaintext`.
  "openai_api_key_plaintext":
    "description": |-
-      The OpenAI API key using the OpenAI or Azure service provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `openai_api_key`. You must provide an API key using one of the following fields: `openai_api_key` or `openai_api_key_plaintext`.
+      The OpenAI API key using the OpenAI or Azure service provided as a
+      plaintext string. If you prefer to reference your key using Databricks
+      Secrets, see `openai_api_key`. You must provide an API key using one of
+      the following fields: `openai_api_key` or `openai_api_key_plaintext`.
  "openai_api_type":
-    "description": |
-      This is an optional field to specify the type of OpenAI API to use.
-      For Azure OpenAI, this field is required, and adjust this parameter to represent the preferred security
-      access validation protocol. For access token validation, use azure. For authentication using Azure Active
-      Directory (Azure AD) use, azuread.
+    "description": |-
+      This is an optional field to specify the type of OpenAI API to use. For
+      Azure OpenAI, this field is required, and adjust this parameter to
+      represent the preferred security access validation protocol. For access
+      token validation, use azure. For authentication using Azure Active
+      Directory (Azure AD) use, azuread.
  "openai_api_version":
-    "description": |
-      This is an optional field to specify the OpenAI API version.
-      For Azure OpenAI, this field is required, and is the version of the Azure OpenAI service to
-      utilize, specified by a date.
+    "description": |-
+      This is an optional field to specify the OpenAI API version. For Azure
+      OpenAI, this field is required, and is the version of the Azure OpenAI
+      service to utilize, specified by a date.
  "openai_deployment_name":
-    "description": |
-      This field is only required for Azure OpenAI and is the name of the deployment resource for the
-      Azure OpenAI service.
+    "description": |-
+      This field is only required for Azure OpenAI and is the name of the
+      deployment resource for the Azure OpenAI service.
  "openai_organization":
-    "description": |
-      This is an optional field to specify the organization in OpenAI or Azure OpenAI.
+    "description": |-
+      This is an optional field to specify the organization in OpenAI or Azure
+      OpenAI.
github.com/databricks/databricks-sdk-go/service/serving.PaLmConfig:
  "palm_api_key":
    "description": |-
-      The Databricks secret key reference for a PaLM API key. If you prefer to paste your API key directly, see `palm_api_key_plaintext`. You must provide an API key using one of the following fields: `palm_api_key` or `palm_api_key_plaintext`.
+      The Databricks secret key reference for a PaLM API key. If you prefer to
+      paste your API key directly, see `palm_api_key_plaintext`. You must
+      provide an API key using one of the following fields: `palm_api_key` or
+      `palm_api_key_plaintext`.
  "palm_api_key_plaintext":
    "description": |-
-      The PaLM API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `palm_api_key`. You must provide an API key using one of the following fields: `palm_api_key` or `palm_api_key_plaintext`.
+      The PaLM API key provided as a plaintext string. If you prefer to
+      reference your key using Databricks Secrets, see `palm_api_key`. You must
+      provide an API key using one of the following fields: `palm_api_key` or
+      `palm_api_key_plaintext`.
github.com/databricks/databricks-sdk-go/service/serving.RateLimit:
  "calls":
    "description": |-
@@ -3010,8 +3100,6 @@ github.com/databricks/databricks-sdk-go/service/serving.RateLimit:
      Renewal period field for a serving endpoint rate limit. Currently, only 'minute' is supported.
github.com/databricks/databricks-sdk-go/service/serving.RateLimitKey:
  "_":
-    "description": |-
-      Key field for a serving endpoint rate limit. Currently, only 'user' and 'endpoint' are supported, with 'endpoint' being the default if not specified.
    "enum":
    - |-
      user
@@ -3019,8 +3107,6 @@ github.com/databricks/databricks-sdk-go/service/serving.RateLimitKey:
      endpoint
github.com/databricks/databricks-sdk-go/service/serving.RateLimitRenewalPeriod:
  "_":
-    "description": |-
-      Renewal period field for a serving endpoint rate limit. Currently, only 'minute' is supported.
    "enum":
    - |-
      minute
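The RateLimitKey and RateLimitRenewalPeriod enums above describe the endpoint-level `rate_limits` entries. A hedged sketch follows; the resource key and endpoint name are made up, and whether you prefer these legacy rate limits or the AI Gateway variant is a separate choice.

```yaml
# Hypothetical endpoint-level rate limit using the RateLimit fields above:
# `calls` per `renewal_period`, keyed per user or per endpoint.
resources:
  model_serving_endpoints:
    my_endpoint:
      name: my-endpoint
      rate_limits:
        - calls: 100
          key: user              # 'user' or 'endpoint'; 'endpoint' is the default
          renewal_period: minute # only 'minute' is supported
```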
@@ -3033,21 +3119,15 @@ github.com/databricks/databricks-sdk-go/service/serving.Route:
      The percentage of endpoint traffic to send to this route. It must be an integer between 0 and 100 inclusive.
github.com/databricks/databricks-sdk-go/service/serving.ServedEntityInput:
  "entity_name":
-    "description": |
-      The name of the entity to be served. The entity may be a model in the Databricks Model Registry, a model in the Unity Catalog (UC),
-      or a function of type FEATURE_SPEC in the UC. If it is a UC object, the full name of the object should be given in the form of
-      __catalog_name__.__schema_name__.__model_name__.
-  "entity_version":
-    "description": |-
-      The version of the model in Databricks Model Registry to be served or empty if the entity is a FEATURE_SPEC.
+    "description": |-
+      The name of the entity to be served. The entity may be a model in the Databricks Model Registry, a model in the Unity Catalog (UC), or a function of type FEATURE_SPEC in the UC. If it is a UC object, the full name of the object should be given in the form of **catalog_name.schema_name.model_name**.
+  "entity_version": {}
  "environment_vars":
-    "description": "An object containing a set of optional, user-specified environment variable key-value pairs used for serving this entity.\nNote: this is an experimental feature and subject to change. \nExample entity environment variables that refer to Databricks secrets: `{\"OPENAI_API_KEY\": \"{{secrets/my_scope/my_key}}\", \"DATABRICKS_TOKEN\": \"{{secrets/my_scope2/my_key2}}\"}`"
+    "description": |-
+      An object containing a set of optional, user-specified environment variable key-value pairs used for serving this entity. Note: this is an experimental feature and subject to change. Example entity environment variables that refer to Databricks secrets: `{"OPENAI_API_KEY": "{{secrets/my_scope/my_key}}", "DATABRICKS_TOKEN": "{{secrets/my_scope2/my_key2}}"}`
  "external_model":
-    "description": |
-      The external model to be served. NOTE: Only one of external_model and (entity_name, entity_version, workload_size, workload_type, and scale_to_zero_enabled)
-      can be specified with the latter set being used for custom model serving for a Databricks registered model. For an existing endpoint with external_model,
-      it cannot be updated to an endpoint without external_model. If the endpoint is created without external_model, users cannot update it to add external_model later.
-      The task type of all external models within an endpoint must be the same.
+    "description": |-
+      The external model to be served. NOTE: Only one of external_model and (entity_name, entity_version, workload_size, workload_type, and scale_to_zero_enabled) can be specified with the latter set being used for custom model serving for a Databricks registered model. For an existing endpoint with external_model, it cannot be updated to an endpoint without external_model. If the endpoint is created without external_model, users cannot update it to add external_model later. The task type of all external models within an endpoint must be the same.
  "instance_profile_arn":
    "description": |-
      ARN of the instance profile that the served entity uses to access AWS resources.
@@ -3058,68 +3138,46 @@ github.com/databricks/databricks-sdk-go/service/serving.ServedEntityInput:
    "description": |-
      The minimum tokens per second that the endpoint can scale down to.
  "name":
-    "description": |
-      The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores.
-      If not specified for an external model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other
-      entities, it defaults to <entity-name>-<entity-version>.
+    "description": |-
+      The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores. If not specified for an external model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other entities, it defaults to entity_name-entity_version.
  "scale_to_zero_enabled":
    "description": |-
      Whether the compute resources for the served entity should scale down to zero.
  "workload_size":
-    "description": |
-      The workload size of the served entity. The workload size corresponds to a range of provisioned concurrency that the compute autoscales between.
-      A single unit of provisioned concurrency can process one request at a time.
-      Valid workload sizes are "Small" (4 - 4 provisioned concurrency), "Medium" (8 - 16 provisioned concurrency), and "Large" (16 - 64 provisioned concurrency).
-      If scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size is 0.
+    "description": |-
+      The workload size of the served entity. The workload size corresponds to a range of provisioned concurrency that the compute autoscales between. A single unit of provisioned concurrency can process one request at a time. Valid workload sizes are "Small" (4 - 4 provisioned concurrency), "Medium" (8 - 16 provisioned concurrency), and "Large" (16 - 64 provisioned concurrency). If scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size is 0.
  "workload_type":
-    "description": |
-      The workload type of the served entity. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is
-      "CPU". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others.
-      See the available [GPU types](https://docs.databricks.com/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).
+    "description": |-
+      The workload type of the served entity. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is "CPU". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others. See the available [GPU types](https://docs.databricks.com/en/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).
github.com/databricks/databricks-sdk-go/service/serving.ServedModelInput:
  "environment_vars":
-    "description": "An object containing a set of optional, user-specified environment variable key-value pairs used for serving this model.\nNote: this is an experimental feature and subject to change. \nExample model environment variables that refer to Databricks secrets: `{\"OPENAI_API_KEY\": \"{{secrets/my_scope/my_key}}\", \"DATABRICKS_TOKEN\": \"{{secrets/my_scope2/my_key2}}\"}`"
+    "description": |-
+      An object containing a set of optional, user-specified environment variable key-value pairs used for serving this entity. Note: this is an experimental feature and subject to change. Example entity environment variables that refer to Databricks secrets: `{"OPENAI_API_KEY": "{{secrets/my_scope/my_key}}", "DATABRICKS_TOKEN": "{{secrets/my_scope2/my_key2}}"}`
  "instance_profile_arn":
    "description": |-
-      ARN of the instance profile that the served model will use to access AWS resources.
+      ARN of the instance profile that the served entity uses to access AWS resources.
  "max_provisioned_throughput":
    "description": |-
      The maximum tokens per second that the endpoint can scale up to.
  "min_provisioned_throughput":
    "description": |-
      The minimum tokens per second that the endpoint can scale down to.
-  "model_name":
-    "description": |
-      The name of the model in Databricks Model Registry to be served or if the model resides in Unity Catalog, the full name of model,
-      in the form of __catalog_name__.__schema_name__.__model_name__.
-  "model_version":
-    "description": |-
-      The version of the model in Databricks Model Registry or Unity Catalog to be served.
+  "model_name": {}
+  "model_version": {}
  "name":
-    "description": |
-      The name of a served model. It must be unique across an endpoint. If not specified, this field will default to <model-name>-<model-version>.
-      A served model name can consist of alphanumeric characters, dashes, and underscores.
+    "description": |-
+      The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores. If not specified for an external model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other entities, it defaults to entity_name-entity_version.
  "scale_to_zero_enabled":
    "description": |-
-      Whether the compute resources for the served model should scale down to zero.
+      Whether the compute resources for the served entity should scale down to zero.
  "workload_size":
-    "description": |
-      The workload size of the served model. The workload size corresponds to a range of provisioned concurrency that the compute will autoscale between.
-      A single unit of provisioned concurrency can process one request at a time.
-      Valid workload sizes are "Small" (4 - 4 provisioned concurrency), "Medium" (8 - 16 provisioned concurrency), and "Large" (16 - 64 provisioned concurrency).
-      If scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size will be 0.
+    "description": |-
+      The workload size of the served entity. The workload size corresponds to a range of provisioned concurrency that the compute autoscales between. A single unit of provisioned concurrency can process one request at a time. Valid workload sizes are "Small" (4 - 4 provisioned concurrency), "Medium" (8 - 16 provisioned concurrency), and "Large" (16 - 64 provisioned concurrency). If scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size is 0.
  "workload_type":
-    "description": |
-      The workload type of the served model. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is
-      "CPU". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others.
-      See the available [GPU types](https://docs.databricks.com/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).
+    "description": |-
+      The workload type of the served entity. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is "CPU". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others. See the available [GPU types](https://docs.databricks.com/en/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).
github.com/databricks/databricks-sdk-go/service/serving.ServedModelInputWorkloadSize:
  "_":
-    "description": |
-      The workload size of the served model. The workload size corresponds to a range of provisioned concurrency that the compute will autoscale between.
-      A single unit of provisioned concurrency can process one request at a time.
-      Valid workload sizes are "Small" (4 - 4 provisioned concurrency), "Medium" (8 - 16 provisioned concurrency), and "Large" (16 - 64 provisioned concurrency).
-      If scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size will be 0.
    "enum":
    - |-
      Small
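The ServedEntityInput sizing fields reflowed above map onto a served entity in a bundle as in the hedged sketch below; the model name, version, and secret scope are placeholders, and the enum values are the documented ones.

```yaml
# Hypothetical custom-model served entity showing the sizing fields whose
# descriptions were reflowed above; the comments list the documented enums.
served_entities:
  - entity_name: main.default.churn_model
    entity_version: "3"
    workload_size: Medium        # Small | Medium | Large
    workload_type: GPU_SMALL     # CPU | GPU_SMALL | GPU_MEDIUM | GPU_LARGE | MULTIGPU_MEDIUM
    scale_to_zero_enabled: false
    environment_vars:
      OPENAI_API_KEY: "{{secrets/my_scope/my_key}}"
```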
@@ -3129,17 +3187,26 @@ github.com/databricks/databricks-sdk-go/service/serving.ServedModelInputWorkload
      Large
github.com/databricks/databricks-sdk-go/service/serving.ServedModelInputWorkloadType:
  "_":
-    "description": |
-      The workload type of the served model. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is
-      "CPU". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others.
-      See the available [GPU types](https://docs.databricks.com/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).
    "enum":
    - |-
      CPU
+    - |-
+      GPU_MEDIUM
    - |-
      GPU_SMALL
+    - |-
+      GPU_LARGE
+    - |-
+      MULTIGPU_MEDIUM
+github.com/databricks/databricks-sdk-go/service/serving.ServingModelWorkloadType:
+  "_":
+    "enum":
+    - |-
+      CPU
    - |-
      GPU_MEDIUM
+    - |-
+      GPU_SMALL
    - |-
      GPU_LARGE
    - |-
View File
@@ -197,3 +197,14 @@ github.com/databricks/databricks-sdk-go/service/pipelines.PipelineTrigger:
  "manual":
    "description": |-
      PLACEHOLDER
+github.com/databricks/databricks-sdk-go/service/serving.ServedEntityInput:
+  "entity_version":
+    "description": |-
+      PLACEHOLDER
+github.com/databricks/databricks-sdk-go/service/serving.ServedModelInput:
+  "model_name":
+    "description": |-
+      PLACEHOLDER
+  "model_version":
+    "description": |-
+      PLACEHOLDER
View File
@@ -546,7 +546,7 @@
"type": "object",
"properties": {
"ai_gateway": {
- "description": "The AI Gateway configuration for the serving endpoint. NOTE: only external model endpoints are supported as of now.",
+ "description": "The AI Gateway configuration for the serving endpoint. NOTE: Only external model and provisioned throughput endpoints are currently supported.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.AiGatewayConfig"
},
"config": {
@@ -554,7 +554,7 @@
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.EndpointCoreConfigInput"
},
"name": {
- "description": "The name of the serving endpoint. This field is required and must be unique across a Databricks workspace.\nAn endpoint name can consist of alphanumeric characters, dashes, and underscores.\n",
+ "description": "The name of the serving endpoint. This field is required and must be unique across a Databricks workspace.\nAn endpoint name can consist of alphanumeric characters, dashes, and underscores.",
"$ref": "#/$defs/string"
},
"permissions": {
@@ -575,7 +575,6 @@
},
"additionalProperties": false,
"required": [
- "config",
"name"
]
},
@@ -4142,6 +4141,10 @@
"parameters": {
"description": "Parameters passed to the main method.\n\nUse [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs.",
"$ref": "#/$defs/slice/string"
},
+ "run_as_repl": {
+ "description": "Deprecated. A value of `false` is no longer supported.",
+ "$ref": "#/$defs/bool"
+ }
},
"additionalProperties": false
@@ -5502,11 +5505,11 @@
"type": "object",
"properties": {
"ai21labs_api_key": {
- "description": "The Databricks secret key reference for an AI21 Labs API key. If you prefer to paste your API key directly, see `ai21labs_api_key_plaintext`. You must provide an API key using one of the following fields: `ai21labs_api_key` or `ai21labs_api_key_plaintext`.",
+ "description": "The Databricks secret key reference for an AI21 Labs API key. If you\nprefer to paste your API key directly, see `ai21labs_api_key_plaintext`.\nYou must provide an API key using one of the following fields:\n`ai21labs_api_key` or `ai21labs_api_key_plaintext`.",
"$ref": "#/$defs/string"
},
"ai21labs_api_key_plaintext": {
- "description": "An AI21 Labs API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `ai21labs_api_key`. You must provide an API key using one of the following fields: `ai21labs_api_key` or `ai21labs_api_key_plaintext`.",
+ "description": "An AI21 Labs API key provided as a plaintext string. If you prefer to\nreference your key using Databricks Secrets, see `ai21labs_api_key`. You\nmust provide an API key using one of the following fields:\n`ai21labs_api_key` or `ai21labs_api_key_plaintext`.",
"$ref": "#/$defs/string"
}
},
@@ -5528,7 +5531,7 @@
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrails"
},
"inference_table_config": {
- "description": "Configuration for payload logging using inference tables. Use these tables to monitor and audit data being sent to and received from model APIs and to improve model quality.",
+ "description": "Configuration for payload logging using inference tables.\nUse these tables to monitor and audit data being sent to and received from model APIs and to improve model quality.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.AiGatewayInferenceTableConfig"
},
"rate_limits": {
@@ -5536,7 +5539,7 @@
"$ref": "#/$defs/slice/github.com/databricks/databricks-sdk-go/service/serving.AiGatewayRateLimit"
},
"usage_tracking_config": {
- "description": "Configuration to enable usage tracking using system tables. These tables allow you to monitor operational usage on endpoints and their associated costs.",
+ "description": "Configuration to enable usage tracking using system tables.\nThese tables allow you to monitor operational usage on endpoints and their associated costs.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.AiGatewayUsageTrackingConfig"
}
},
@@ -5554,7 +5557,7 @@
"type": "object",
"properties": {
"invalid_keywords": {
- "description": "List of invalid keywords. AI guardrail uses keyword or string matching to decide if the keyword exists in the request or response content.",
+ "description": "List of invalid keywords.\nAI guardrail uses keyword or string matching to decide if the keyword exists in the request or response content.",
"$ref": "#/$defs/slice/string"
},
"pii": {
@@ -5566,7 +5569,7 @@
"$ref": "#/$defs/bool"
},
"valid_topics": {
- "description": "The list of allowed topics. Given a chat request, this guardrail flags the request if its topic is not in the allowed topics.",
+ "description": "The list of allowed topics.\nGiven a chat request, this guardrail flags the request if its topic is not in the allowed topics.",
"$ref": "#/$defs/slice/string"
}
},
@@ -5584,14 +5587,11 @@
"type": "object",
"properties": {
"behavior": {
- "description": "Behavior for PII filter. Currently only 'BLOCK' is supported. If 'BLOCK' is set for the input guardrail and the request contains PII, the request is not sent to the model server and 400 status code is returned; if 'BLOCK' is set for the output guardrail and the model response contains PII, the PII info in the response is redacted and 400 status code is returned.",
+ "description": "Configuration for input guardrail filters.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.AiGatewayGuardrailPiiBehaviorBehavior"
}
},
- "additionalProperties": false,
- "required": [
- "behavior"
- ]
+ "additionalProperties": false
},
{
"type": "string",
@@ -5603,7 +5603,6 @@
"oneOf": [
{
"type": "string",
- "description": "Behavior for PII filter. Currently only 'BLOCK' is supported. If 'BLOCK' is set for the input guardrail and the request contains PII, the request is not sent to the model server and 400 status code is returned; if 'BLOCK' is set for the output guardrail and the model response contains PII, the PII info in the response is redacted and 400 status code is returned.",
"enum": [
"NONE",
"BLOCK"
@@ -5643,7 +5642,7 @@
"type": "object",
"properties": {
"catalog_name": {
- "description": "The name of the catalog in Unity Catalog. Required when enabling inference tables. NOTE: On update, you have to disable inference table first in order to change the catalog name.",
+ "description": "The name of the catalog in Unity Catalog. Required when enabling inference tables.\nNOTE: On update, you have to disable inference table first in order to change the catalog name.",
"$ref": "#/$defs/string"
},
"enabled": {
@@ -5651,11 +5650,11 @@
"$ref": "#/$defs/bool"
},
"schema_name": {
- "description": "The name of the schema in Unity Catalog. Required when enabling inference tables. NOTE: On update, you have to disable inference table first in order to change the schema name.",
+ "description": "The name of the schema in Unity Catalog. Required when enabling inference tables.\nNOTE: On update, you have to disable inference table first in order to change the schema name.",
"$ref": "#/$defs/string"
},
"table_name_prefix": {
- "description": "The prefix of the table in Unity Catalog. NOTE: On update, you have to disable inference table first in order to change the prefix name.",
+ "description": "The prefix of the table in Unity Catalog.\nNOTE: On update, you have to disable inference table first in order to change the prefix name.",
"$ref": "#/$defs/string"
}
},
@@ -5674,10 +5673,10 @@
"properties": {
"calls": {
"description": "Used to specify how many calls are allowed for a key within the renewal_period.",
- "$ref": "#/$defs/int"
+ "$ref": "#/$defs/int64"
},
"key": {
- "description": "Key field for a rate limit. Currently, only 'user' and 'endpoint' are supported, with 'endpoint' being the default if not specified.",
+ "description": "Key field for a rate limit. Currently, only 'user' and 'endpoint' are supported,\nwith 'endpoint' being the default if not specified.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.AiGatewayRateLimitKey"
},
"renewal_period": {
@@ -5701,7 +5700,6 @@
"oneOf": [
{
"type": "string",
- "description": "Key field for a rate limit. Currently, only 'user' and 'endpoint' are supported, with 'endpoint' being the default if not specified.",
"enum": [
"user",
"endpoint"
@@ -5717,7 +5715,6 @@
"oneOf": [
{
"type": "string",
- "description": "Renewal period field for a rate limit. Currently, only 'minute' is supported.",
"enum": [
"minute"
]
@@ -5752,11 +5749,11 @@
"type": "object",
"properties": {
"aws_access_key_id": {
- "description": "The Databricks secret key reference for an AWS access key ID with permissions to interact with Bedrock services. If you prefer to paste your API key directly, see `aws_access_key_id`. You must provide an API key using one of the following fields: `aws_access_key_id` or `aws_access_key_id_plaintext`.",
+ "description": "The Databricks secret key reference for an AWS access key ID with\npermissions to interact with Bedrock services. If you prefer to paste\nyour API key directly, see `aws_access_key_id_plaintext`. You must provide an API\nkey using one of the following fields: `aws_access_key_id` or\n`aws_access_key_id_plaintext`.",
"$ref": "#/$defs/string"
},
"aws_access_key_id_plaintext": {
- "description": "An AWS access key ID with permissions to interact with Bedrock services provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `aws_access_key_id`. You must provide an API key using one of the following fields: `aws_access_key_id` or `aws_access_key_id_plaintext`.",
+ "description": "An AWS access key ID with permissions to interact with Bedrock services\nprovided as a plaintext string. If you prefer to reference your key using\nDatabricks Secrets, see `aws_access_key_id`. You must provide an API key\nusing one of the following fields: `aws_access_key_id` or\n`aws_access_key_id_plaintext`.",
"$ref": "#/$defs/string"
},
"aws_region": {
@@ -5764,15 +5761,15 @@
"$ref": "#/$defs/string"
},
"aws_secret_access_key": {
- "description": "The Databricks secret key reference for an AWS secret access key paired with the access key ID, with permissions to interact with Bedrock services. If you prefer to paste your API key directly, see `aws_secret_access_key_plaintext`. You must provide an API key using one of the following fields: `aws_secret_access_key` or `aws_secret_access_key_plaintext`.",
+ "description": "The Databricks secret key reference for an AWS secret access key paired\nwith the access key ID, with permissions to interact with Bedrock\nservices. If you prefer to paste your API key directly, see\n`aws_secret_access_key_plaintext`. You must provide an API key using one\nof the following fields: `aws_secret_access_key` or\n`aws_secret_access_key_plaintext`.",
"$ref": "#/$defs/string"
},
"aws_secret_access_key_plaintext": {
- "description": "An AWS secret access key paired with the access key ID, with permissions to interact with Bedrock services provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `aws_secret_access_key`. You must provide an API key using one of the following fields: `aws_secret_access_key` or `aws_secret_access_key_plaintext`.",
+ "description": "An AWS secret access key paired with the access key ID, with permissions\nto interact with Bedrock services provided as a plaintext string. If you\nprefer to reference your key using Databricks Secrets, see\n`aws_secret_access_key`. You must provide an API key using one of the\nfollowing fields: `aws_secret_access_key` or\n`aws_secret_access_key_plaintext`.",
"$ref": "#/$defs/string"
},
"bedrock_provider": {
- "description": "The underlying provider in Amazon Bedrock. Supported values (case insensitive) include: Anthropic, Cohere, AI21Labs, Amazon.",
+ "description": "The underlying provider in Amazon Bedrock. Supported values (case\ninsensitive) include: Anthropic, Cohere, AI21Labs, Amazon.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.AmazonBedrockConfigBedrockProvider"
}
},
@@ -5792,7 +5789,6 @@
"oneOf": [
{
"type": "string",
- "description": "The underlying provider in Amazon Bedrock. Supported values (case insensitive) include: Anthropic, Cohere, AI21Labs, Amazon.",
"enum": [
"anthropic",
"cohere",
@@ -5812,11 +5808,11 @@
"type": "object",
"properties": {
"anthropic_api_key": {
- "description": "The Databricks secret key reference for an Anthropic API key. If you prefer to paste your API key directly, see `anthropic_api_key_plaintext`. You must provide an API key using one of the following fields: `anthropic_api_key` or `anthropic_api_key_plaintext`.",
+ "description": "The Databricks secret key reference for an Anthropic API key. If you\nprefer to paste your API key directly, see `anthropic_api_key_plaintext`.\nYou must provide an API key using one of the following fields:\n`anthropic_api_key` or `anthropic_api_key_plaintext`.",
"$ref": "#/$defs/string"
},
"anthropic_api_key_plaintext": {
- "description": "The Anthropic API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `anthropic_api_key`. You must provide an API key using one of the following fields: `anthropic_api_key` or `anthropic_api_key_plaintext`.",
+ "description": "The Anthropic API key provided as a plaintext string. If you prefer to\nreference your key using Databricks Secrets, see `anthropic_api_key`. You\nmust provide an API key using one of the following fields:\n`anthropic_api_key` or `anthropic_api_key_plaintext`.",
"$ref": "#/$defs/string"
}
},
@@ -5864,15 +5860,15 @@
"type": "object",
"properties": {
"cohere_api_base": {
- "description": "This is an optional field to provide a customized base URL for the Cohere API. \nIf left unspecified, the standard Cohere base URL is used.\n",
+ "description": "This is an optional field to provide a customized base URL for the Cohere\nAPI. If left unspecified, the standard Cohere base URL is used.",
"$ref": "#/$defs/string"
},
"cohere_api_key": {
- "description": "The Databricks secret key reference for a Cohere API key. If you prefer to paste your API key directly, see `cohere_api_key_plaintext`. You must provide an API key using one of the following fields: `cohere_api_key` or `cohere_api_key_plaintext`.",
+ "description": "The Databricks secret key reference for a Cohere API key. If you prefer\nto paste your API key directly, see `cohere_api_key_plaintext`. You must\nprovide an API key using one of the following fields: `cohere_api_key` or\n`cohere_api_key_plaintext`.",
"$ref": "#/$defs/string"
},
"cohere_api_key_plaintext": {
- "description": "The Cohere API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `cohere_api_key`. You must provide an API key using one of the following fields: `cohere_api_key` or `cohere_api_key_plaintext`.",
+ "description": "The Cohere API key provided as a plaintext string. If you prefer to\nreference your key using Databricks Secrets, see `cohere_api_key`. You\nmust provide an API key using one of the following fields:\n`cohere_api_key` or `cohere_api_key_plaintext`.",
"$ref": "#/$defs/string"
}
},
@@ -5890,15 +5886,15 @@
"type": "object",
"properties": {
"databricks_api_token": {
- "description": "The Databricks secret key reference for a Databricks API token that corresponds to a user or service\nprincipal with Can Query access to the model serving endpoint pointed to by this external model.\nIf you prefer to paste your API key directly, see `databricks_api_token_plaintext`.\nYou must provide an API key using one of the following fields: `databricks_api_token` or `databricks_api_token_plaintext`.\n",
+ "description": "The Databricks secret key reference for a Databricks API token that\ncorresponds to a user or service principal with Can Query access to the\nmodel serving endpoint pointed to by this external model. If you prefer\nto paste your API key directly, see `databricks_api_token_plaintext`. You\nmust provide an API key using one of the following fields:\n`databricks_api_token` or `databricks_api_token_plaintext`.",
"$ref": "#/$defs/string"
},
"databricks_api_token_plaintext": {
- "description": "The Databricks API token that corresponds to a user or service\nprincipal with Can Query access to the model serving endpoint pointed to by this external model provided as a plaintext string.\nIf you prefer to reference your key using Databricks Secrets, see `databricks_api_token`.\nYou must provide an API key using one of the following fields: `databricks_api_token` or `databricks_api_token_plaintext`.\n",
+ "description": "The Databricks API token that corresponds to a user or service principal\nwith Can Query access to the model serving endpoint pointed to by this\nexternal model provided as a plaintext string. If you prefer to reference\nyour key using Databricks Secrets, see `databricks_api_token`. You must\nprovide an API key using one of the following fields:\n`databricks_api_token` or `databricks_api_token_plaintext`.",
"$ref": "#/$defs/string"
},
"databricks_workspace_url": {
- "description": "The URL of the Databricks workspace containing the model serving endpoint pointed to by this external model.\n",
+ "description": "The URL of the Databricks workspace containing the model serving endpoint\npointed to by this external model.",
"$ref": "#/$defs/string"
}
},
@@ -5919,19 +5915,19 @@
"type": "object",
"properties": {
"auto_capture_config": {
- "description": "Configuration for Inference Tables which automatically logs requests and responses to Unity Catalog.",
+ "description": "Configuration for Inference Tables which automatically logs requests and responses to Unity Catalog.\nNote: this field is deprecated for creating new provisioned throughput endpoints,\nor updating existing provisioned throughput endpoints that never have inference table configured;\nin these cases please use AI Gateway to manage inference tables.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.AutoCaptureConfigInput"
},
"served_entities": {
- "description": "A list of served entities for the endpoint to serve. A serving endpoint can have up to 15 served entities.",
+ "description": "The list of served entities under the serving endpoint config.",
"$ref": "#/$defs/slice/github.com/databricks/databricks-sdk-go/service/serving.ServedEntityInput"
},
"served_models": {
- "description": "(Deprecated, use served_entities instead) A list of served models for the endpoint to serve. A serving endpoint can have up to 15 served models.",
+ "description": "(Deprecated, use served_entities instead) The list of served models under the serving endpoint config.",
"$ref": "#/$defs/slice/github.com/databricks/databricks-sdk-go/service/serving.ServedModelInput"
},
"traffic_config": {
- "description": "The traffic config defining how invocations to the serving endpoint should be routed.",
+ "description": "The traffic configuration associated with the serving endpoint config.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.TrafficConfig"
}
},
@@ -6010,7 +6006,7 @@
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.PaLmConfig"
},
"provider": {
- "description": "The name of the provider for the external model. Currently, the supported providers are 'ai21labs', 'anthropic',\n'amazon-bedrock', 'cohere', 'databricks-model-serving', 'google-cloud-vertex-ai', 'openai', and 'palm'.\",\n",
+ "description": "The name of the provider for the external model. Currently, the supported providers are 'ai21labs', 'anthropic', 'amazon-bedrock', 'cohere', 'databricks-model-serving', 'google-cloud-vertex-ai', 'openai', and 'palm'.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.ExternalModelProvider"
},
"task": {
@@ -6035,7 +6031,6 @@
"oneOf": [
{
"type": "string",
- "description": "The name of the provider for the external model. Currently, the supported providers are 'ai21labs', 'anthropic',\n'amazon-bedrock', 'cohere', 'databricks-model-serving', 'google-cloud-vertex-ai', 'openai', and 'palm'.\",\n",
"enum": [
"ai21labs",
"anthropic",
@@ -6059,23 +6054,27 @@
"type": "object",
"properties": {
"private_key": {
- "description": "The Databricks secret key reference for a private key for the service account which has access to the Google Cloud Vertex AI Service. See [Best practices for managing service account keys](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys). If you prefer to paste your API key directly, see `private_key_plaintext`. You must provide an API key using one of the following fields: `private_key` or `private_key_plaintext`",
+ "description": "The Databricks secret key reference for a private key for the service\naccount which has access to the Google Cloud Vertex AI Service. See [Best\npractices for managing service account keys]. If you prefer to paste your\nAPI key directly, see `private_key_plaintext`. You must provide an API\nkey using one of the following fields: `private_key` or\n`private_key_plaintext`\n\n[Best practices for managing service account keys]: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys",
"$ref": "#/$defs/string"
},
"private_key_plaintext": {
- "description": "The private key for the service account which has access to the Google Cloud Vertex AI Service provided as a plaintext secret. See [Best practices for managing service account keys](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys). If you prefer to reference your key using Databricks Secrets, see `private_key`. You must provide an API key using one of the following fields: `private_key` or `private_key_plaintext`.",
+ "description": "The private key for the service account which has access to the Google\nCloud Vertex AI Service provided as a plaintext secret. See [Best\npractices for managing service account keys]. If you prefer to reference\nyour key using Databricks Secrets, see `private_key`. You must provide an\nAPI key using one of the following fields: `private_key` or\n`private_key_plaintext`.\n\n[Best practices for managing service account keys]: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys",
"$ref": "#/$defs/string"
},
"project_id": {
- "description": "This is the Google Cloud project id that the service account is associated with.",
+ "description": "This is the Google Cloud project id that the service account is\nassociated with.",
"$ref": "#/$defs/string"
},
"region": {
- "description": "This is the region for the Google Cloud Vertex AI Service. See [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations) for more details. Some models are only available in specific regions.",
+ "description": "This is the region for the Google Cloud Vertex AI Service. See [supported\nregions] for more details. Some models are only available in specific\nregions.\n\n[supported regions]: https://cloud.google.com/vertex-ai/docs/general/locations",
"$ref": "#/$defs/string"
}
},
- "additionalProperties": false
+ "additionalProperties": false,
+ "required": [
+ "project_id",
+ "region"
+ ]
},
{
"type": "string",
@@ -6087,49 +6086,50 @@
"oneOf": [
{
"type": "object",
- "description": "Configs needed to create an OpenAI model route.",
"properties": {
"microsoft_entra_client_id": {
- "description": "This field is only required for Azure AD OpenAI and is the Microsoft Entra Client ID.\n",
+ "description": "This field is only required for Azure AD OpenAI and is the Microsoft\nEntra Client ID.",
"$ref": "#/$defs/string"
},
"microsoft_entra_client_secret": {
- "description": "The Databricks secret key reference for a client secret used for Microsoft Entra ID authentication.\nIf you prefer to paste your client secret directly, see `microsoft_entra_client_secret_plaintext`.\nYou must provide an API key using one of the following fields: `microsoft_entra_client_secret` or `microsoft_entra_client_secret_plaintext`.\n",
+ "description": "The Databricks secret key reference for a client secret used for\nMicrosoft Entra ID authentication. If you prefer to paste your client\nsecret directly, see `microsoft_entra_client_secret_plaintext`. You must\nprovide an API key using one of the following fields:\n`microsoft_entra_client_secret` or\n`microsoft_entra_client_secret_plaintext`.",
"$ref": "#/$defs/string"
},
"microsoft_entra_client_secret_plaintext": {
- "description": "The client secret used for Microsoft Entra ID authentication provided as a plaintext string.\nIf you prefer to reference your key using Databricks Secrets, see `microsoft_entra_client_secret`.\nYou must provide an API key using one of the following fields: `microsoft_entra_client_secret` or `microsoft_entra_client_secret_plaintext`.\n",
+ "description": "The client secret used for Microsoft Entra ID authentication provided as\na plaintext string. If you prefer to reference your key using Databricks\nSecrets, see `microsoft_entra_client_secret`. You must provide an API key\nusing one of the following fields: `microsoft_entra_client_secret` or\n`microsoft_entra_client_secret_plaintext`.",
"$ref": "#/$defs/string"
},
"microsoft_entra_tenant_id": {
- "description": "This field is only required for Azure AD OpenAI and is the Microsoft Entra Tenant ID.\n",
+ "description": "This field is only required for Azure AD OpenAI and is the Microsoft\nEntra Tenant ID.",
"$ref": "#/$defs/string"
},
"openai_api_base": {
- "description": "This is a field to provide a customized base URl for the OpenAI API.\nFor Azure OpenAI, this field is required, and is the base URL for the Azure OpenAI API service\nprovided by Azure.\nFor other OpenAI API types, this field is optional, and if left unspecified, the standard OpenAI base URL is used.\n",
+ "description": "This is a field to provide a customized base URl for the OpenAI API. For\nAzure OpenAI, this field is required, and is the base URL for the Azure\nOpenAI API service provided by Azure. For other OpenAI API types, this\nfield is optional, and if left unspecified, the standard OpenAI base URL\nis used.",
"$ref": "#/$defs/string"
},
"openai_api_key": {
- "description": "The Databricks secret key reference for an OpenAI API key using the OpenAI or Azure service. If you prefer to paste your API key directly, see `openai_api_key_plaintext`. You must provide an API key using one of the following fields: `openai_api_key` or `openai_api_key_plaintext`.",
+ "description": "The Databricks secret key reference for an OpenAI API key using the\nOpenAI or Azure service. If you prefer to paste your API key directly,\nsee `openai_api_key_plaintext`. You must provide an API key using one of\nthe following fields: `openai_api_key` or `openai_api_key_plaintext`.",
"$ref": "#/$defs/string"
},
"openai_api_key_plaintext": {
- "description": "The OpenAI API key using the OpenAI or Azure service provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `openai_api_key`. You must provide an API key using one of the following fields: `openai_api_key` or `openai_api_key_plaintext`.",
+ "description": "The OpenAI API key using the OpenAI or Azure service provided as a\nplaintext string. If you prefer to reference your key using Databricks\nSecrets, see `openai_api_key`. You must provide an API key using one of\nthe following fields: `openai_api_key` or `openai_api_key_plaintext`.",
"$ref": "#/$defs/string"
},
"openai_api_type": {
- "description": "This is an optional field to specify the type of OpenAI API to use.\nFor Azure OpenAI, this field is required, and adjust this parameter to represent the preferred security\naccess validation protocol. For access token validation, use azure. For authentication using Azure Active\nDirectory (Azure AD) use, azuread.\n",
+ "description": "This is an optional field to specify the type of OpenAI API to use. For\nAzure OpenAI, this field is required, and adjust this parameter to\nrepresent the preferred security access validation protocol. For access\ntoken validation, use azure. For authentication using Azure Active\nDirectory (Azure AD) use, azuread.",
"$ref": "#/$defs/string"
},
"openai_api_version": {
- "description": "This is an optional field to specify the OpenAI API version.\nFor Azure OpenAI, this field is required, and is the version of the Azure OpenAI service to\nutilize, specified by a date.\n",
+ "description": "This is an optional field to specify the OpenAI API version. For Azure\nOpenAI, this field is required, and is the version of the Azure OpenAI\nservice to utilize, specified by a date.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"openai_deployment_name": { "openai_deployment_name": {
"description": "This field is only required for Azure OpenAI and is the name of the deployment resource for the\nAzure OpenAI service.\n", "description": "This field is only required for Azure OpenAI and is the name of the\ndeployment resource for the Azure OpenAI service.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"openai_organization": { "openai_organization": {
"description": "This is an optional field to specify the organization in OpenAI or Azure OpenAI.\n", "description": "This is an optional field to specify the organization in OpenAI or Azure\nOpenAI.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
} }
}, },
@ -6147,11 +6147,11 @@
"type": "object", "type": "object",
"properties": { "properties": {
"palm_api_key": { "palm_api_key": {
"description": "The Databricks secret key reference for a PaLM API key. If you prefer to paste your API key directly, see `palm_api_key_plaintext`. You must provide an API key using one of the following fields: `palm_api_key` or `palm_api_key_plaintext`.", "description": "The Databricks secret key reference for a PaLM API key. If you prefer to\npaste your API key directly, see `palm_api_key_plaintext`. You must\nprovide an API key using one of the following fields: `palm_api_key` or\n`palm_api_key_plaintext`.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"palm_api_key_plaintext": { "palm_api_key_plaintext": {
"description": "The PaLM API key provided as a plaintext string. If you prefer to reference your key using Databricks Secrets, see `palm_api_key`. You must provide an API key using one of the following fields: `palm_api_key` or `palm_api_key_plaintext`.", "description": "The PaLM API key provided as a plaintext string. If you prefer to\nreference your key using Databricks Secrets, see `palm_api_key`. You must\nprovide an API key using one of the following fields: `palm_api_key` or\n`palm_api_key_plaintext`.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
} }
}, },
@ -6170,7 +6170,7 @@
"properties": { "properties": {
"calls": { "calls": {
"description": "Used to specify how many calls are allowed for a key within the renewal_period.", "description": "Used to specify how many calls are allowed for a key within the renewal_period.",
"$ref": "#/$defs/int" "$ref": "#/$defs/int64"
}, },
"key": { "key": {
"description": "Key field for a serving endpoint rate limit. Currently, only 'user' and 'endpoint' are supported, with 'endpoint' being the default if not specified.", "description": "Key field for a serving endpoint rate limit. Currently, only 'user' and 'endpoint' are supported, with 'endpoint' being the default if not specified.",
@ -6197,7 +6197,6 @@
"oneOf": [ "oneOf": [
{ {
"type": "string", "type": "string",
"description": "Key field for a serving endpoint rate limit. Currently, only 'user' and 'endpoint' are supported, with 'endpoint' being the default if not specified.",
"enum": [ "enum": [
"user", "user",
"endpoint" "endpoint"
@ -6213,7 +6212,6 @@
"oneOf": [ "oneOf": [
{ {
"type": "string", "type": "string",
"description": "Renewal period field for a serving endpoint rate limit. Currently, only 'minute' is supported.",
"enum": [ "enum": [
"minute" "minute"
] ]
@ -6256,19 +6254,18 @@
"type": "object", "type": "object",
"properties": { "properties": {
"entity_name": { "entity_name": {
"description": "The name of the entity to be served. The entity may be a model in the Databricks Model Registry, a model in the Unity Catalog (UC),\nor a function of type FEATURE_SPEC in the UC. If it is a UC object, the full name of the object should be given in the form of\n__catalog_name__.__schema_name__.__model_name__.\n", "description": "The name of the entity to be served. The entity may be a model in the Databricks Model Registry, a model in the Unity Catalog (UC), or a function of type FEATURE_SPEC in the UC. If it is a UC object, the full name of the object should be given in the form of **catalog_name.schema_name.model_name**.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"entity_version": { "entity_version": {
"description": "The version of the model in Databricks Model Registry to be served or empty if the entity is a FEATURE_SPEC.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"environment_vars": { "environment_vars": {
"description": "An object containing a set of optional, user-specified environment variable key-value pairs used for serving this entity.\nNote: this is an experimental feature and subject to change. \nExample entity environment variables that refer to Databricks secrets: `{\"OPENAI_API_KEY\": \"{{secrets/my_scope/my_key}}\", \"DATABRICKS_TOKEN\": \"{{secrets/my_scope2/my_key2}}\"}`", "description": "An object containing a set of optional, user-specified environment variable key-value pairs used for serving this entity. Note: this is an experimental feature and subject to change. Example entity environment variables that refer to Databricks secrets: `{\"OPENAI_API_KEY\": \"{{secrets/my_scope/my_key}}\", \"DATABRICKS_TOKEN\": \"{{secrets/my_scope2/my_key2}}\"}`",
"$ref": "#/$defs/map/string" "$ref": "#/$defs/map/string"
}, },
"external_model": { "external_model": {
"description": "The external model to be served. NOTE: Only one of external_model and (entity_name, entity_version, workload_size, workload_type, and scale_to_zero_enabled)\ncan be specified with the latter set being used for custom model serving for a Databricks registered model. For an existing endpoint with external_model,\nit cannot be updated to an endpoint without external_model. If the endpoint is created without external_model, users cannot update it to add external_model later.\nThe task type of all external models within an endpoint must be the same.\n", "description": "The external model to be served. NOTE: Only one of external_model and (entity_name, entity_version, workload_size, workload_type, and scale_to_zero_enabled) can be specified with the latter set being used for custom model serving for a Databricks registered model. For an existing endpoint with external_model, it cannot be updated to an endpoint without external_model. If the endpoint is created without external_model, users cannot update it to add external_model later. The task type of all external models within an endpoint must be the same.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.ExternalModel" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.ExternalModel"
}, },
"instance_profile_arn": { "instance_profile_arn": {
@ -6284,7 +6281,7 @@
"$ref": "#/$defs/int" "$ref": "#/$defs/int"
}, },
"name": { "name": {
"description": "The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores.\nIf not specified for an external model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other\nentities, it defaults to \u003centity-name\u003e-\u003centity-version\u003e.\n", "description": "The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores. If not specified for an external model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other entities, it defaults to entity_name-entity_version.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"scale_to_zero_enabled": { "scale_to_zero_enabled": {
@ -6292,12 +6289,12 @@
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"workload_size": { "workload_size": {
"description": "The workload size of the served entity. The workload size corresponds to a range of provisioned concurrency that the compute autoscales between.\nA single unit of provisioned concurrency can process one request at a time.\nValid workload sizes are \"Small\" (4 - 4 provisioned concurrency), \"Medium\" (8 - 16 provisioned concurrency), and \"Large\" (16 - 64 provisioned concurrency).\nIf scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size is 0.\n", "description": "The workload size of the served entity. The workload size corresponds to a range of provisioned concurrency that the compute autoscales between. A single unit of provisioned concurrency can process one request at a time. Valid workload sizes are \"Small\" (4 - 4 provisioned concurrency), \"Medium\" (8 - 16 provisioned concurrency), and \"Large\" (16 - 64 provisioned concurrency). If scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size is 0.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"workload_type": { "workload_type": {
"description": "The workload type of the served entity. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is\n\"CPU\". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others.\nSee the available [GPU types](https://docs.databricks.com/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).\n", "description": "The workload type of the served entity. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is \"CPU\". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others. See the available [GPU types](https://docs.databricks.com/en/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).",
"$ref": "#/$defs/string" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.ServingModelWorkloadType"
} }
}, },
"additionalProperties": false "additionalProperties": false
@ -6314,11 +6311,11 @@
"type": "object", "type": "object",
"properties": { "properties": {
"environment_vars": { "environment_vars": {
"description": "An object containing a set of optional, user-specified environment variable key-value pairs used for serving this model.\nNote: this is an experimental feature and subject to change. \nExample model environment variables that refer to Databricks secrets: `{\"OPENAI_API_KEY\": \"{{secrets/my_scope/my_key}}\", \"DATABRICKS_TOKEN\": \"{{secrets/my_scope2/my_key2}}\"}`", "description": "An object containing a set of optional, user-specified environment variable key-value pairs used for serving this entity. Note: this is an experimental feature and subject to change. Example entity environment variables that refer to Databricks secrets: `{\"OPENAI_API_KEY\": \"{{secrets/my_scope/my_key}}\", \"DATABRICKS_TOKEN\": \"{{secrets/my_scope2/my_key2}}\"}`",
"$ref": "#/$defs/map/string" "$ref": "#/$defs/map/string"
}, },
"instance_profile_arn": { "instance_profile_arn": {
"description": "ARN of the instance profile that the served model will use to access AWS resources.", "description": "ARN of the instance profile that the served entity uses to access AWS resources.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"max_provisioned_throughput": { "max_provisioned_throughput": {
@ -6330,27 +6327,25 @@
"$ref": "#/$defs/int" "$ref": "#/$defs/int"
}, },
"model_name": { "model_name": {
"description": "The name of the model in Databricks Model Registry to be served or if the model resides in Unity Catalog, the full name of model,\nin the form of __catalog_name__.__schema_name__.__model_name__.\n",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"model_version": { "model_version": {
"description": "The version of the model in Databricks Model Registry or Unity Catalog to be served.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"name": { "name": {
"description": "The name of a served model. It must be unique across an endpoint. If not specified, this field will default to \u003cmodel-name\u003e-\u003cmodel-version\u003e.\nA served model name can consist of alphanumeric characters, dashes, and underscores.\n", "description": "The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores. If not specified for an external model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other entities, it defaults to entity_name-entity_version.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"scale_to_zero_enabled": { "scale_to_zero_enabled": {
"description": "Whether the compute resources for the served model should scale down to zero.", "description": "Whether the compute resources for the served entity should scale down to zero.",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"workload_size": { "workload_size": {
"description": "The workload size of the served model. The workload size corresponds to a range of provisioned concurrency that the compute will autoscale between.\nA single unit of provisioned concurrency can process one request at a time.\nValid workload sizes are \"Small\" (4 - 4 provisioned concurrency), \"Medium\" (8 - 16 provisioned concurrency), and \"Large\" (16 - 64 provisioned concurrency).\nIf scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size will be 0.\n", "description": "The workload size of the served entity. The workload size corresponds to a range of provisioned concurrency that the compute autoscales between. A single unit of provisioned concurrency can process one request at a time. Valid workload sizes are \"Small\" (4 - 4 provisioned concurrency), \"Medium\" (8 - 16 provisioned concurrency), and \"Large\" (16 - 64 provisioned concurrency). If scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size is 0.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.ServedModelInputWorkloadSize" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.ServedModelInputWorkloadSize"
}, },
"workload_type": { "workload_type": {
"description": "The workload type of the served model. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is\n\"CPU\". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others.\nSee the available [GPU types](https://docs.databricks.com/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).\n", "description": "The workload type of the served entity. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is \"CPU\". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others. See the available [GPU types](https://docs.databricks.com/en/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.ServedModelInputWorkloadType" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.ServedModelInputWorkloadType"
} }
}, },
@ -6371,7 +6366,6 @@
"oneOf": [ "oneOf": [
{ {
"type": "string", "type": "string",
"description": "The workload size of the served model. The workload size corresponds to a range of provisioned concurrency that the compute will autoscale between.\nA single unit of provisioned concurrency can process one request at a time.\nValid workload sizes are \"Small\" (4 - 4 provisioned concurrency), \"Medium\" (8 - 16 provisioned concurrency), and \"Large\" (16 - 64 provisioned concurrency).\nIf scale-to-zero is enabled, the lower bound of the provisioned concurrency for each workload size will be 0.\n",
"enum": [ "enum": [
"Small", "Small",
"Medium", "Medium",
@ -6388,11 +6382,28 @@
"oneOf": [ "oneOf": [
{ {
"type": "string", "type": "string",
"description": "The workload type of the served model. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is\n\"CPU\". For deep learning workloads, GPU acceleration is available by selecting workload types like GPU_SMALL and others.\nSee the available [GPU types](https://docs.databricks.com/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).\n",
"enum": [ "enum": [
"CPU", "CPU",
"GPU_SMALL",
"GPU_MEDIUM", "GPU_MEDIUM",
"GPU_SMALL",
"GPU_LARGE",
"MULTIGPU_MEDIUM"
]
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"serving.ServingModelWorkloadType": {
"oneOf": [
{
"type": "string",
"enum": [
"CPU",
"GPU_MEDIUM",
"GPU_SMALL",
"GPU_LARGE", "GPU_LARGE",
"MULTIGPU_MEDIUM" "MULTIGPU_MEDIUM"
] ]


@ -1,51 +0,0 @@
package scripts
import (
"bufio"
"context"
"strings"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/libs/exec"
"github.com/stretchr/testify/require"
)
func TestExecutesHook(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Experimental: &config.Experimental{
Scripts: map[config.ScriptHook]config.Command{
config.ScriptPreBuild: "echo 'Hello'",
},
},
},
}
executor, err := exec.NewCommandExecutor(b.BundleRootPath)
require.NoError(t, err)
_, out, err := executeHook(context.Background(), executor, b, config.ScriptPreBuild)
require.NoError(t, err)
reader := bufio.NewReader(out)
line, err := reader.ReadString('\n')
require.NoError(t, err)
require.Equal(t, "Hello", strings.TrimSpace(line))
}
func TestExecuteMutator(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Experimental: &config.Experimental{
Scripts: map[config.ScriptHook]config.Command{
config.ScriptPreBuild: "echo 'Hello'",
},
},
},
}
diags := bundle.Apply(context.Background(), b, Execute(config.ScriptPreInit))
require.NoError(t, diags.Error())
}


@ -307,6 +307,7 @@ func newUpdate() *cobra.Command {
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`) cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
// TODO: array: redirect_urls // TODO: array: redirect_urls
// TODO: array: scopes
// TODO: complex arg: token_access_policy // TODO: complex arg: token_access_policy
cmd.Use = "update INTEGRATION_ID" cmd.Use = "update INTEGRATION_ID"


@ -62,7 +62,7 @@ func makeCommand(method string) *cobra.Command {
var response any var response any
headers := map[string]string{"Content-Type": "application/json"} headers := map[string]string{"Content-Type": "application/json"}
err = api.Do(cmd.Context(), method, path, headers, request, &response) err = api.Do(cmd.Context(), method, path, headers, nil, request, &response)
if err != nil { if err != nil {
return err return err
} }


@ -15,7 +15,6 @@ import (
"github.com/databricks/cli/libs/databrickscfg/profile" "github.com/databricks/cli/libs/databrickscfg/profile"
"github.com/databricks/cli/libs/log" "github.com/databricks/cli/libs/log"
"github.com/databricks/cli/libs/process" "github.com/databricks/cli/libs/process"
"github.com/databricks/cli/libs/python"
"github.com/databricks/databricks-sdk-go" "github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/service/compute" "github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/sql" "github.com/databricks/databricks-sdk-go/service/sql"
@ -223,7 +222,7 @@ func (i *installer) setupPythonVirtualEnvironment(ctx context.Context, w *databr
feedback := cmdio.Spinner(ctx) feedback := cmdio.Spinner(ctx)
defer close(feedback) defer close(feedback)
feedback <- "Detecting all installed Python interpreters on the system" feedback <- "Detecting all installed Python interpreters on the system"
pythonInterpreters, err := python.DetectInterpreters(ctx) pythonInterpreters, err := DetectInterpreters(ctx)
if err != nil { if err != nil {
return fmt.Errorf("detect: %w", err) return fmt.Errorf("detect: %w", err)
} }


@ -1,4 +1,4 @@
package python package project
import ( import (
"context" "context"


@ -1,6 +1,6 @@
//go:build unix //go:build unix
package python package project
import ( import (
"context" "context"


@ -1,6 +1,6 @@
//go:build windows //go:build windows
package python package project
import ( import (
"context" "context"

cmd/workspace/access-control/access-control.go (generated executable file, 109 lines)

@ -0,0 +1,109 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package access_control
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/iam"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "access-control",
Short: `Rule based Access Control for Databricks Resources.`,
Long: `Rule based Access Control for Databricks Resources.`,
GroupID: "iam",
Annotations: map[string]string{
"package": "iam",
},
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newCheckPolicy())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start check-policy command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var checkPolicyOverrides []func(
*cobra.Command,
*iam.CheckPolicyRequest,
)
func newCheckPolicy() *cobra.Command {
cmd := &cobra.Command{}
var checkPolicyReq iam.CheckPolicyRequest
var checkPolicyJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&checkPolicyJson, "json", `either inline JSON string or @path/to/file.json with request body`)
// TODO: complex arg: resource_info
cmd.Use = "check-policy"
cmd.Short = `Check access policy to a resource.`
cmd.Long = `Check access policy to a resource.`
cmd.Annotations = make(map[string]string)
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := checkPolicyJson.Unmarshal(&checkPolicyReq)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
} else {
return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
}
response, err := w.AccessControl.CheckPolicy(ctx, checkPolicyReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range checkPolicyOverrides {
fn(cmd, &checkPolicyReq)
}
return cmd
}
// end service AccessControl
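
For context, a rough sketch of what the new `check-policy` command wraps: a direct SDK call to the AccessControl service's `CheckPolicy` method, as seen in the generated code above. The request fields are left empty here (the CLI requires them via `--json`), so treat this purely as a call-shape illustration under that assumption, not a working policy check.

```go
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/iam"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// The CLI populates this struct from the --json flag; a real request
	// needs the resource and access details filled in.
	req := iam.CheckPolicyRequest{}

	resp, err := w.AccessControl.CheckPolicy(ctx, req)
	if err != nil {
		fmt.Println("check-policy failed:", err)
		return
	}
	fmt.Printf("%+v\n", resp)
}
```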

cmd/workspace/cmd.go (generated, 2 lines changed)

@ -3,6 +3,7 @@
package workspace package workspace
import ( import (
access_control "github.com/databricks/cli/cmd/workspace/access-control"
alerts "github.com/databricks/cli/cmd/workspace/alerts" alerts "github.com/databricks/cli/cmd/workspace/alerts"
alerts_legacy "github.com/databricks/cli/cmd/workspace/alerts-legacy" alerts_legacy "github.com/databricks/cli/cmd/workspace/alerts-legacy"
apps "github.com/databricks/cli/cmd/workspace/apps" apps "github.com/databricks/cli/cmd/workspace/apps"
@ -96,6 +97,7 @@ import (
func All() []*cobra.Command { func All() []*cobra.Command {
var out []*cobra.Command var out []*cobra.Command
out = append(out, access_control.New())
out = append(out, alerts.New()) out = append(out, alerts.New())
out = append(out, alerts_legacy.New()) out = append(out, alerts_legacy.New())
out = append(out, apps.New()) out = append(out, apps.New())


@ -64,7 +64,7 @@ func newCreate() *cobra.Command {
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`) cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&createReq.Comment, "comment", createReq.Comment, `Description about the provider.`) cmd.Flags().StringVar(&createReq.Comment, "comment", createReq.Comment, `Description about the provider.`)
cmd.Flags().StringVar(&createReq.RecipientProfileStr, "recipient-profile-str", createReq.RecipientProfileStr, `This field is required when the __authentication_type__ is **TOKEN** or not provided.`) cmd.Flags().StringVar(&createReq.RecipientProfileStr, "recipient-profile-str", createReq.RecipientProfileStr, `This field is required when the __authentication_type__ is **TOKEN**, **OAUTH_CLIENT_CREDENTIALS** or not provided.`)
cmd.Use = "create NAME AUTHENTICATION_TYPE" cmd.Use = "create NAME AUTHENTICATION_TYPE"
cmd.Short = `Create an auth provider.` cmd.Short = `Create an auth provider.`
@ -430,7 +430,7 @@ func newUpdate() *cobra.Command {
cmd.Flags().StringVar(&updateReq.Comment, "comment", updateReq.Comment, `Description about the provider.`) cmd.Flags().StringVar(&updateReq.Comment, "comment", updateReq.Comment, `Description about the provider.`)
cmd.Flags().StringVar(&updateReq.NewName, "new-name", updateReq.NewName, `New name for the provider.`) cmd.Flags().StringVar(&updateReq.NewName, "new-name", updateReq.NewName, `New name for the provider.`)
cmd.Flags().StringVar(&updateReq.Owner, "owner", updateReq.Owner, `Username of Provider owner.`) cmd.Flags().StringVar(&updateReq.Owner, "owner", updateReq.Owner, `Username of Provider owner.`)
cmd.Flags().StringVar(&updateReq.RecipientProfileStr, "recipient-profile-str", updateReq.RecipientProfileStr, `This field is required when the __authentication_type__ is **TOKEN** or not provided.`) cmd.Flags().StringVar(&updateReq.RecipientProfileStr, "recipient-profile-str", updateReq.RecipientProfileStr, `This field is required when the __authentication_type__ is **TOKEN**, **OAUTH_CLIENT_CREDENTIALS** or not provided.`)
cmd.Use = "update NAME" cmd.Use = "update NAME"
cmd.Short = `Update a provider.` cmd.Short = `Update a provider.`


@ -91,7 +91,7 @@ func newCreate() *cobra.Command {
cmd.Long = `Create a share recipient. cmd.Long = `Create a share recipient.
Creates a new recipient with the delta sharing authentication type in the Creates a new recipient with the delta sharing authentication type in the
metastore. The caller must be a metastore admin or has the metastore. The caller must be a metastore admin or have the
**CREATE_RECIPIENT** privilege on the metastore. **CREATE_RECIPIENT** privilege on the metastore.
Arguments: Arguments:
@ -186,28 +186,16 @@ func newDelete() *cobra.Command {
cmd.Annotations = make(map[string]string) cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) { cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context() ctx := cmd.Context()
w := root.WorkspaceClient(ctx) w := root.WorkspaceClient(ctx)
if len(args) == 0 {
promptSpinner := cmdio.Spinner(ctx)
promptSpinner <- "No NAME argument specified. Loading names for Recipients drop-down."
names, err := w.Recipients.RecipientInfoNameToMetastoreIdMap(ctx, sharing.ListRecipientsRequest{})
close(promptSpinner)
if err != nil {
return fmt.Errorf("failed to load names for Recipients drop-down. Please manually specify required arguments. Original error: %w", err)
}
id, err := cmdio.Select(ctx, names, "Name of the recipient")
if err != nil {
return err
}
args = append(args, id)
}
if len(args) != 1 {
return fmt.Errorf("expected to have name of the recipient")
}
deleteReq.Name = args[0] deleteReq.Name = args[0]
err = w.Recipients.Delete(ctx, deleteReq) err = w.Recipients.Delete(ctx, deleteReq)
@ -258,28 +246,16 @@ func newGet() *cobra.Command {
cmd.Annotations = make(map[string]string) cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) { cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context() ctx := cmd.Context()
w := root.WorkspaceClient(ctx) w := root.WorkspaceClient(ctx)
if len(args) == 0 {
promptSpinner := cmdio.Spinner(ctx)
promptSpinner <- "No NAME argument specified. Loading names for Recipients drop-down."
names, err := w.Recipients.RecipientInfoNameToMetastoreIdMap(ctx, sharing.ListRecipientsRequest{})
close(promptSpinner)
if err != nil {
return fmt.Errorf("failed to load names for Recipients drop-down. Please manually specify required arguments. Original error: %w", err)
}
id, err := cmdio.Select(ctx, names, "Name of the recipient")
if err != nil {
return err
}
args = append(args, id)
}
if len(args) != 1 {
return fmt.Errorf("expected to have name of the recipient")
}
getReq.Name = args[0] getReq.Name = args[0]
response, err := w.Recipients.Get(ctx, getReq) response, err := w.Recipients.Get(ctx, getReq)
@ -384,7 +360,7 @@ func newRotateToken() *cobra.Command {
the provided token info. The caller must be the owner of the recipient. the provided token info. The caller must be the owner of the recipient.
Arguments: Arguments:
NAME: The name of the recipient. NAME: The name of the Recipient.
EXISTING_TOKEN_EXPIRE_IN_SECONDS: The expiration time of the bearer token in ISO 8601 format. This will set EXISTING_TOKEN_EXPIRE_IN_SECONDS: The expiration time of the bearer token in ISO 8601 format. This will set
the expiration_time of existing token only to a smaller timestamp, it the expiration_time of existing token only to a smaller timestamp, it
cannot extend the expiration_time. Use 0 to expire the existing token cannot extend the expiration_time. Use 0 to expire the existing token
@ -479,28 +455,16 @@ func newSharePermissions() *cobra.Command {
cmd.Annotations = make(map[string]string) cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) { cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context() ctx := cmd.Context()
w := root.WorkspaceClient(ctx) w := root.WorkspaceClient(ctx)
if len(args) == 0 {
promptSpinner := cmdio.Spinner(ctx)
promptSpinner <- "No NAME argument specified. Loading names for Recipients drop-down."
names, err := w.Recipients.RecipientInfoNameToMetastoreIdMap(ctx, sharing.ListRecipientsRequest{})
close(promptSpinner)
if err != nil {
return fmt.Errorf("failed to load names for Recipients drop-down. Please manually specify required arguments. Original error: %w", err)
}
id, err := cmdio.Select(ctx, names, "The name of the Recipient")
if err != nil {
return err
}
args = append(args, id)
}
if len(args) != 1 {
return fmt.Errorf("expected to have the name of the recipient")
}
sharePermissionsReq.Name = args[0] sharePermissionsReq.Name = args[0]
response, err := w.Recipients.SharePermissions(ctx, sharePermissionsReq) response, err := w.Recipients.SharePermissions(ctx, sharePermissionsReq)
@ -560,6 +524,11 @@ func newUpdate() *cobra.Command {
cmd.Annotations = make(map[string]string) cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) { cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context() ctx := cmd.Context()
@ -577,30 +546,13 @@ func newUpdate() *cobra.Command {
} }
} }
} }
if len(args) == 0 {
promptSpinner := cmdio.Spinner(ctx)
promptSpinner <- "No NAME argument specified. Loading names for Recipients drop-down."
names, err := w.Recipients.RecipientInfoNameToMetastoreIdMap(ctx, sharing.ListRecipientsRequest{})
close(promptSpinner)
if err != nil {
return fmt.Errorf("failed to load names for Recipients drop-down. Please manually specify required arguments. Original error: %w", err)
}
id, err := cmdio.Select(ctx, names, "Name of the recipient")
if err != nil {
return err
}
args = append(args, id)
}
if len(args) != 1 {
return fmt.Errorf("expected to have name of the recipient")
}
updateReq.Name = args[0] updateReq.Name = args[0]
err = w.Recipients.Update(ctx, updateReq) response, err := w.Recipients.Update(ctx, updateReq)
if err != nil { if err != nil {
return err return err
} }
return nil return cmdio.Render(ctx, response)
} }
// Disable completions since they are not applicable. // Disable completions since they are not applicable.


@ -49,6 +49,7 @@ func New() *cobra.Command {
cmd.AddCommand(newGetOpenApi()) cmd.AddCommand(newGetOpenApi())
cmd.AddCommand(newGetPermissionLevels()) cmd.AddCommand(newGetPermissionLevels())
cmd.AddCommand(newGetPermissions()) cmd.AddCommand(newGetPermissions())
cmd.AddCommand(newHttpRequest())
cmd.AddCommand(newList()) cmd.AddCommand(newList())
cmd.AddCommand(newLogs()) cmd.AddCommand(newLogs())
cmd.AddCommand(newPatch()) cmd.AddCommand(newPatch())
@ -153,16 +154,34 @@ func newCreate() *cobra.Command {
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`) cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
// TODO: complex arg: ai_gateway // TODO: complex arg: ai_gateway
// TODO: complex arg: config
// TODO: array: rate_limits // TODO: array: rate_limits
cmd.Flags().BoolVar(&createReq.RouteOptimized, "route-optimized", createReq.RouteOptimized, `Enable route optimization for the serving endpoint.`) cmd.Flags().BoolVar(&createReq.RouteOptimized, "route-optimized", createReq.RouteOptimized, `Enable route optimization for the serving endpoint.`)
// TODO: array: tags // TODO: array: tags
cmd.Use = "create" cmd.Use = "create NAME"
cmd.Short = `Create a new serving endpoint.` cmd.Short = `Create a new serving endpoint.`
cmd.Long = `Create a new serving endpoint.` cmd.Long = `Create a new serving endpoint.
Arguments:
NAME: The name of the serving endpoint. This field is required and must be
unique across a Databricks workspace. An endpoint name can consist of
alphanumeric characters, dashes, and underscores.`
cmd.Annotations = make(map[string]string) cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
if cmd.Flags().Changed("json") {
err := root.ExactArgs(0)(cmd, args)
if err != nil {
return fmt.Errorf("when --json flag is specified, no positional arguments are required. Provide 'name' in your JSON input")
}
return nil
}
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) { cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context() ctx := cmd.Context()
@ -179,8 +198,9 @@ func newCreate() *cobra.Command {
return err return err
} }
} }
} else { }
return fmt.Errorf("please provide command input in JSON format by specifying the --json flag") if !cmd.Flags().Changed("json") {
createReq.Name = args[0]
} }
wait, err := w.ServingEndpoints.Create(ctx, createReq) wait, err := w.ServingEndpoints.Create(ctx, createReq)
@ -233,10 +253,7 @@ func newDelete() *cobra.Command {
cmd.Use = "delete NAME" cmd.Use = "delete NAME"
cmd.Short = `Delete a serving endpoint.` cmd.Short = `Delete a serving endpoint.`
cmd.Long = `Delete a serving endpoint. cmd.Long = `Delete a serving endpoint.`
Arguments:
NAME: The name of the serving endpoint. This field is required.`
cmd.Annotations = make(map[string]string) cmd.Annotations = make(map[string]string)
@ -432,11 +449,12 @@ func newGetOpenApi() *cobra.Command {
getOpenApiReq.Name = args[0] getOpenApiReq.Name = args[0]
err = w.ServingEndpoints.GetOpenApi(ctx, getOpenApiReq) response, err := w.ServingEndpoints.GetOpenApi(ctx, getOpenApiReq)
if err != nil { if err != nil {
return err return err
} }
return nil defer response.Contents.Close()
return cmdio.Render(ctx, response.Contents)
} }
// Disable completions since they are not applicable. // Disable completions since they are not applicable.
@ -568,6 +586,77 @@ func newGetPermissions() *cobra.Command {
return cmd return cmd
} }
// start http-request command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var httpRequestOverrides []func(
*cobra.Command,
*serving.ExternalFunctionRequest,
)
func newHttpRequest() *cobra.Command {
cmd := &cobra.Command{}
var httpRequestReq serving.ExternalFunctionRequest
// TODO: short flags
cmd.Flags().StringVar(&httpRequestReq.Headers, "headers", httpRequestReq.Headers, `Additional headers for the request.`)
cmd.Flags().StringVar(&httpRequestReq.Json, "json", httpRequestReq.Json, `The JSON payload to send in the request body.`)
cmd.Flags().StringVar(&httpRequestReq.Params, "params", httpRequestReq.Params, `Query parameters for the request.`)
cmd.Use = "http-request CONNECTION_NAME METHOD PATH"
cmd.Short = `Make external services call using the credentials stored in UC Connection.`
cmd.Long = `Make external services call using the credentials stored in UC Connection.
Arguments:
CONNECTION_NAME: The connection name to use. This is required to identify the external
connection.
METHOD: The HTTP method to use (e.g., 'GET', 'POST').
PATH: The relative path for the API endpoint. This is required.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(3)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
httpRequestReq.ConnectionName = args[0]
_, err = fmt.Sscan(args[1], &httpRequestReq.Method)
if err != nil {
return fmt.Errorf("invalid METHOD: %s", args[1])
}
httpRequestReq.Path = args[2]
response, err := w.ServingEndpoints.HttpRequest(ctx, httpRequestReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range httpRequestOverrides {
fn(cmd, &httpRequestReq)
}
return cmd
}
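
As a rough illustration of what `http-request` does under the hood, here is a hedged sketch of the equivalent SDK call. The connection name and path are made-up placeholders, and the HTTP method is parsed with `fmt.Sscan` exactly as the generated command does, since the concrete enum constants are not shown in this diff.

```go
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/serving"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	var req serving.ExternalFunctionRequest
	req.ConnectionName = "my_uc_connection" // placeholder UC connection name
	req.Path = "/v1/ping"                   // placeholder relative path

	// Parse the HTTP method the same way the generated command does.
	if _, err := fmt.Sscan("GET", &req.Method); err != nil {
		fmt.Println("invalid method:", err)
		return
	}

	resp, err := w.ServingEndpoints.HttpRequest(ctx, req)
	if err != nil {
		fmt.Println("http-request failed:", err)
		return
	}
	fmt.Printf("%+v\n", resp)
}
```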
// start list command // start list command
// Slice with functions to override default command behavior. // Slice with functions to override default command behavior.
@ -849,7 +938,7 @@ func newPutAiGateway() *cobra.Command {
cmd.Long = `Update AI Gateway of a serving endpoint. cmd.Long = `Update AI Gateway of a serving endpoint.
Used to update the AI Gateway of a serving endpoint. NOTE: Only external model Used to update the AI Gateway of a serving endpoint. NOTE: Only external model
endpoints are currently supported. and provisioned throughput endpoints are currently supported.
Arguments: Arguments:
NAME: The name of the serving endpoint whose AI Gateway is being updated. This NAME: The name of the serving endpoint whose AI Gateway is being updated. This

go.mod (3 lines changed)

@ -5,9 +5,10 @@ go 1.23
toolchain go1.23.4 toolchain go1.23.4
require ( require (
github.com/BurntSushi/toml v1.4.0 // MIT
github.com/Masterminds/semver/v3 v3.3.1 // MIT github.com/Masterminds/semver/v3 v3.3.1 // MIT
github.com/briandowns/spinner v1.23.1 // Apache 2.0 github.com/briandowns/spinner v1.23.1 // Apache 2.0
github.com/databricks/databricks-sdk-go v0.55.0 // Apache 2.0 github.com/databricks/databricks-sdk-go v0.56.1 // Apache 2.0
github.com/fatih/color v1.18.0 // MIT github.com/fatih/color v1.18.0 // MIT
github.com/google/uuid v1.6.0 // BSD-3-Clause github.com/google/uuid v1.6.0 // BSD-3-Clause
github.com/hashicorp/go-version v1.7.0 // MPL 2.0 github.com/hashicorp/go-version v1.7.0 // MPL 2.0

go.sum (generated, 6 lines changed)

@ -8,6 +8,8 @@ cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1h
dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk= dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk= dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.4.0 h1:kuoIxZQy2WRRk1pttg9asf+WVv6tWQuBNVmK8+nqPr0=
github.com/BurntSushi/toml v1.4.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/Masterminds/semver/v3 v3.3.1 h1:QtNSWtVZ3nBfk8mAOu/B6v7FMJ+NHTIgUPi7rj+4nv4= github.com/Masterminds/semver/v3 v3.3.1 h1:QtNSWtVZ3nBfk8mAOu/B6v7FMJ+NHTIgUPi7rj+4nv4=
github.com/Masterminds/semver/v3 v3.3.1/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= github.com/Masterminds/semver/v3 v3.3.1/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow= github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=
@ -32,8 +34,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.5 h1:6iR5tXJ/e6tJZzzdMc1km3Sa7RRIVBKAK32O2s7AYfo= github.com/cyphar/filepath-securejoin v0.2.5 h1:6iR5tXJ/e6tJZzzdMc1km3Sa7RRIVBKAK32O2s7AYfo=
github.com/cyphar/filepath-securejoin v0.2.5/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4= github.com/cyphar/filepath-securejoin v0.2.5/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/databricks/databricks-sdk-go v0.55.0 h1:ReziD6spzTDltM0ml80LggKo27F3oUjgTinCFDJDnak= github.com/databricks/databricks-sdk-go v0.56.1 h1:sgweTRvAQaI8EPrfDnVdAB0lNX6L5uTT720SlMMQI2U=
github.com/databricks/databricks-sdk-go v0.55.0/go.mod h1:JpLizplEs+up9/Z4Xf2x++o3sM9eTTWFGzIXAptKJzI= github.com/databricks/databricks-sdk-go v0.56.1/go.mod h1:JpLizplEs+up9/Z4Xf2x++o3sM9eTTWFGzIXAptKJzI=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=


@ -158,7 +158,7 @@ func (a *syncTest) remoteFileContent(ctx context.Context, relativePath, expected
var res []byte var res []byte
a.c.Eventually(func() bool { a.c.Eventually(func() bool {
err = apiClient.Do(ctx, http.MethodGet, urlPath, nil, nil, &res) err = apiClient.Do(ctx, http.MethodGet, urlPath, nil, nil, nil, &res)
require.NoError(a.t, err) require.NoError(a.t, err)
actualContent := string(res) actualContent := string(res)
return actualContent == expectedContent return actualContent == expectedContent


@ -148,7 +148,7 @@ func (w *FilesClient) Write(ctx context.Context, name string, reader io.Reader,
overwrite := slices.Contains(mode, OverwriteIfExists) overwrite := slices.Contains(mode, OverwriteIfExists)
urlPath = fmt.Sprintf("%s?overwrite=%t", urlPath, overwrite) urlPath = fmt.Sprintf("%s?overwrite=%t", urlPath, overwrite)
headers := map[string]string{"Content-Type": "application/octet-stream"} headers := map[string]string{"Content-Type": "application/octet-stream"}
err = w.apiClient.Do(ctx, http.MethodPut, urlPath, headers, reader, nil) err = w.apiClient.Do(ctx, http.MethodPut, urlPath, headers, nil, reader, nil)
// Return early on success. // Return early on success.
if err == nil { if err == nil {
@ -176,7 +176,7 @@ func (w *FilesClient) Read(ctx context.Context, name string) (io.ReadCloser, err
} }
var reader io.ReadCloser var reader io.ReadCloser
err = w.apiClient.Do(ctx, http.MethodGet, urlPath, nil, nil, &reader) err = w.apiClient.Do(ctx, http.MethodGet, urlPath, nil, nil, nil, &reader)
// Return early on success. // Return early on success.
if err == nil { if err == nil {


@ -106,7 +106,7 @@ func (info *wsfsFileInfo) MarshalJSON() ([]byte, error) {
// as an interface to allow for mocking in tests. // as an interface to allow for mocking in tests.
type apiClient interface { type apiClient interface {
Do(ctx context.Context, method, path string, Do(ctx context.Context, method, path string,
headers map[string]string, request, response any, headers map[string]string, queryString map[string]any, request, response any,
visitors ...func(*http.Request) error) error visitors ...func(*http.Request) error) error
} }
@ -156,7 +156,7 @@ func (w *WorkspaceFilesClient) Write(ctx context.Context, name string, reader io
return err return err
} }
err = w.apiClient.Do(ctx, http.MethodPost, urlPath, nil, body, nil) err = w.apiClient.Do(ctx, http.MethodPost, urlPath, nil, nil, body, nil)
// Return early on success. // Return early on success.
if err == nil { if err == nil {
@ -341,6 +341,7 @@ func (w *WorkspaceFilesClient) Stat(ctx context.Context, name string) (fs.FileIn
http.MethodGet, http.MethodGet,
"/api/2.0/workspace/get-status", "/api/2.0/workspace/get-status",
nil, nil,
nil,
map[string]string{ map[string]string{
"path": absPath, "path": absPath,
"return_export_info": "true", "return_export_info": "true",


@ -17,7 +17,7 @@ type mockApiClient struct {
} }
func (m *mockApiClient) Do(ctx context.Context, method, path string, func (m *mockApiClient) Do(ctx context.Context, method, path string,
headers map[string]string, request, response any, headers map[string]string, queryString map[string]any, request, response any,
visitors ...func(*http.Request) error, visitors ...func(*http.Request) error,
) error { ) error {
args := m.Called(ctx, method, path, headers, request, response, visitors) args := m.Called(ctx, method, path, headers, request, response, visitors)
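
To make the signature change above concrete, here is a minimal, self-contained sketch of a type satisfying the updated `apiClient` interface. `loggingClient` and the example path and query values are hypothetical; existing call sites in this diff simply pass `nil` for the new `queryString` parameter.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
)

// apiClient mirrors the updated interface from the diff: an explicit
// queryString argument now sits between headers and request.
type apiClient interface {
	Do(ctx context.Context, method, path string,
		headers map[string]string, queryString map[string]any, request, response any,
		visitors ...func(*http.Request) error) error
}

// loggingClient is a hypothetical stand-in that only prints the call shape.
type loggingClient struct{}

func (c *loggingClient) Do(ctx context.Context, method, path string,
	headers map[string]string, queryString map[string]any, request, response any,
	visitors ...func(*http.Request) error,
) error {
	fmt.Println(method, path, "query:", queryString, "headers:", headers)
	return nil
}

func main() {
	var c apiClient = &loggingClient{}

	// Existing call sites add a nil argument for the new parameter, e.g.:
	_ = c.Do(context.Background(), http.MethodGet, "/api/2.0/workspace/get-status",
		nil, nil, map[string]string{"path": "/Workspace/example", "return_export_info": "true"}, nil)
}
```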


@ -66,6 +66,7 @@ func fetchRepositoryInfoAPI(ctx context.Context, path string, w *databricks.Work
http.MethodGet, http.MethodGet,
apiEndpoint, apiEndpoint,
nil, nil,
nil,
map[string]string{ map[string]string{
"path": path, "path": path,
"return_git_info": "true", "return_git_info": "true",


@ -39,27 +39,7 @@ func DetectExecutable(ctx context.Context) (string, error) {
// //
// See https://github.com/pyenv/pyenv#understanding-python-version-selection // See https://github.com/pyenv/pyenv#understanding-python-version-selection
out, err := exec.LookPath(GetExecutable()) return exec.LookPath(GetExecutable())
// most of the OS'es have python3 in $PATH, but for those which don't,
// we perform the latest version lookup
if err != nil && !errors.Is(err, exec.ErrNotFound) {
return "", err
}
if out != "" {
return out, nil
}
// otherwise, detect all interpreters and pick the least that satisfies
// minimal version requirements
all, err := DetectInterpreters(ctx)
if err != nil {
return "", err
}
interpreter, err := all.AtLeast("3.8")
if err != nil {
return "", err
}
return interpreter.Path, nil
} }
// DetectVEnvExecutable returns the path to the python3 executable inside venvPath, // DetectVEnvExecutable returns the path to the python3 executable inside venvPath,


@ -16,24 +16,16 @@ func TestDetectsViaPathLookup(t *testing.T) {
assert.NotEmpty(t, py) assert.NotEmpty(t, py)
} }
func TestDetectsViaListing(t *testing.T) {
t.Setenv("PATH", "testdata/other-binaries-filtered")
ctx := context.Background()
py, err := DetectExecutable(ctx)
assert.NoError(t, err)
assert.Equal(t, "testdata/other-binaries-filtered/python3.10", py)
}
func TestDetectFailsNoInterpreters(t *testing.T) { func TestDetectFailsNoInterpreters(t *testing.T) {
t.Setenv("PATH", "testdata") t.Setenv("PATH", "testdata")
ctx := context.Background() ctx := context.Background()
_, err := DetectExecutable(ctx) _, err := DetectExecutable(ctx)
assert.Equal(t, ErrNoPythonInterpreters, err) assert.Error(t, err)
} }
func TestDetectFailsNoMinimalVersion(t *testing.T) { func TestDetectFailsNoMinimalVersion(t *testing.T) {
t.Setenv("PATH", "testdata/no-python3") t.Setenv("PATH", "testdata/no-python3")
ctx := context.Background() ctx := context.Background()
_, err := DetectExecutable(ctx) _, err := DetectExecutable(ctx)
assert.EqualError(t, err, "cannot find Python greater or equal to v3.8.0") assert.Error(t, err)
} }
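
The net effect of removing `DetectInterpreters` is that interpreter discovery becomes a plain PATH lookup. A minimal standalone sketch of that behavior follows; `pythonExecutable` is a stand-in for the package's `GetExecutable` helper and is assumed to prefer `python3` on Unix-like systems and `python` on Windows.

```go
package main

import (
	"fmt"
	"os/exec"
	"runtime"
)

// pythonExecutable is a stand-in for GetExecutable(): the assumed convention
// is "python" on Windows and "python3" everywhere else.
func pythonExecutable() string {
	if runtime.GOOS == "windows" {
		return "python"
	}
	return "python3"
}

func main() {
	// With this change there is no fallback scan for python3.10, python3.11,
	// etc.; whatever resolves on PATH (or nothing) is the answer.
	path, err := exec.LookPath(pythonExecutable())
	if err != nil {
		fmt.Println("no python interpreter found on PATH:", err)
		return
	}
	fmt.Println("using interpreter at", path)
}
```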

Some files were not shown because too many files have changed in this diff.