Compare commits


4 Commits

Author SHA1 Message Date
Andrew Nester 6b4b908682
[Release] Release v0.237.0 (#2031)
Bundles:
* Allow overriding compute for non-development mode targets ([#1899](https://github.com/databricks/cli/pull/1899)).
* Show an error when using a cluster override with 'mode: production' ([#1994](https://github.com/databricks/cli/pull/1994)).

API Changes:
* Added `databricks account federation-policy` command group.
* Added `databricks account service-principal-federation-policy` command group.
* Added `databricks aibi-dashboard-embedding-access-policy delete` command.
* Added `databricks aibi-dashboard-embedding-approved-domains delete` command.

OpenAPI commit a6a317df8327c9b1e5cb59a03a42ffa2aabeef6d (2024-12-16)
Dependency updates:
* Upgrade TF provider to 1.62.0 ([#2030](https://github.com/databricks/cli/pull/2030)).
* Upgrade Go SDK to 0.54.0 ([#2029](https://github.com/databricks/cli/pull/2029)).
* Bump TF codegen dependencies to latest ([#1961](https://github.com/databricks/cli/pull/1961)).
* Bump golang.org/x/term from 0.26.0 to 0.27.0 ([#1983](https://github.com/databricks/cli/pull/1983)).
* Bump golang.org/x/sync from 0.9.0 to 0.10.0 ([#1984](https://github.com/databricks/cli/pull/1984)).
* Bump github.com/databricks/databricks-sdk-go from 0.52.0 to 0.53.0 ([#1985](https://github.com/databricks/cli/pull/1985)).
* Bump golang.org/x/crypto from 0.24.0 to 0.31.0 ([#2006](https://github.com/databricks/cli/pull/2006)).
* Bump golang.org/x/crypto from 0.30.0 to 0.31.0 in /bundle/internal/tf/codegen ([#2005](https://github.com/databricks/cli/pull/2005)).
2024-12-18 17:17:02 +01:00
Andrew Nester e3b256e753
Upgrade TF provider to 1.62.0 (#2030)
## Changes
* Added support for `IsSingleNode`, `Kind` and `UseMlRuntime` for clusters (see the sketch after this list)
* Added support for `CleanRoomsNotebookTask`
* `DaysOfWeek` for pipeline restart window is now a list
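
The sketch below is a rough, non-authoritative illustration of the new cluster fields; the two structs are trimmed local copies of `ResourceCluster` and `ResourceJobTaskCleanRoomsNotebookTask` from the generated `bundle/internal/tf/schema` code in the diff further down, and all field values (Spark version, node type, clean room and notebook names) are made-up placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed local copy of the generated ResourceCluster struct; only fields
// relevant to this upgrade are kept. The last three are new in provider 1.62.0.
type ResourceCluster struct {
	SparkVersion string `json:"spark_version"`
	NodeTypeId   string `json:"node_type_id,omitempty"`
	IsSingleNode bool   `json:"is_single_node,omitempty"`
	Kind         string `json:"kind,omitempty"`
	UseMlRuntime bool   `json:"use_ml_runtime,omitempty"`
}

// Trimmed local copy of the new ResourceJobTaskCleanRoomsNotebookTask struct.
type ResourceJobTaskCleanRoomsNotebookTask struct {
	CleanRoomName          string            `json:"clean_room_name"`
	Etag                   string            `json:"etag,omitempty"`
	NotebookBaseParameters map[string]string `json:"notebook_base_parameters,omitempty"`
	NotebookName           string            `json:"notebook_name"`
}

func main() {
	// Hypothetical values, used only to show how the new fields serialize.
	cluster := ResourceCluster{
		SparkVersion: "15.4.x-scala2.12",
		NodeTypeId:   "i3.xlarge",
		IsSingleNode: true,
		UseMlRuntime: true,
	}
	task := ResourceJobTaskCleanRoomsNotebookTask{
		CleanRoomName: "my_clean_room",
		NotebookName:  "my_notebook",
	}

	out, _ := json.MarshalIndent(map[string]any{
		"cluster":                   cluster,
		"clean_rooms_notebook_task": task,
	}, "", "  ")
	fmt.Println(string(out))
}
```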
2024-12-18 14:03:08 +00:00
Andrew Nester 59f0859e00
Upgrade Go SDK to 0.54.0 (#2029)
## Changes

* Added the [a.AccountFederationPolicy](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/oauth2#AccountFederationPolicyAPI) and [a.ServicePrincipalFederationPolicy](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/oauth2#ServicePrincipalFederationPolicyAPI) account-level services.
* Added `IsSingleNode`, `Kind` and `UseMlRuntime` fields for Cluster commands.
* Added `UpdateParameterSyntax` field for [dashboards.MigrateDashboardRequest](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#MigrateDashboardRequest).
2024-12-18 12:43:27 +00:00
Ilya Kuznetsov 042c8d88c6
Custom annotations for bundle-specific JSON schema fields (#1957)
## Changes

Adds annotations to the JSON schema for fields that are not covered by the
OpenAPI spec; a minimal sketch of the mechanism follows the list below.

Custom descriptions were copied from a documentation PR that is still WIP,
so descriptions for some fields are missing.

Further improvements:
* documentation autogen based on json-schema
* fix missing descriptions
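
A minimal, self-contained Go sketch of the annotation mechanism (assumptions: the `annotation`/`annotationFile` shapes mirror the types added in `bundle/internal/schema/annotation.go`, while `schema` here is a simplified stand-in for the real `libs/jsonschema.Schema`; the example description is taken from the `annotations.yml` file in this PR):

```go
package main

import "fmt"

// annotation mirrors the fields read from the annotation YAML files.
type annotation struct {
	Description         string
	MarkdownDescription string
}

// annotationFile maps a Go type path to per-field annotations, e.g.
// "github.com/databricks/cli/bundle/config.Bundle" -> "cluster_id" -> {...}.
type annotationFile map[string]map[string]annotation

// schema is a simplified stand-in for libs/jsonschema.Schema.
type schema struct {
	Description         string
	MarkdownDescription string
}

func main() {
	annotations := annotationFile{
		"github.com/databricks/cli/bundle/config.Bundle": {
			"cluster_id": {
				Description:         "The ID of a cluster to use to run the bundle.",
				MarkdownDescription: "The ID of a cluster to use to run the bundle. See [_](/dev-tools/bundles/settings.md#cluster_id).",
			},
		},
	}

	// During schema generation, the handler looks up the type path for each
	// config struct and copies the matching annotation onto the generated
	// JSON-schema property; fields without a description get a PLACEHOLDER
	// entry written back to annotations.yml.
	s := &schema{}
	a := annotations["github.com/databricks/cli/bundle/config.Bundle"]["cluster_id"]
	s.Description = a.Description
	s.MarkdownDescription = a.MarkdownDescription

	fmt.Printf("description: %s\nmarkdown: %s\n", s.Description, s.MarkdownDescription)
}
```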

## Tests

This script is not part of the CLI package, so I didn't test all corner
cases. A few high-level tests were added to make sure the schema
annotations are in sync with the actual config.

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-12-18 10:19:14 +00:00
32 changed files with 5270 additions and 342 deletions

View File

@@ -11,7 +11,7 @@
   "required": ["go"],
   "post_generate": [
     "go test -timeout 240s -run TestConsistentDatabricksSdkVersion github.com/databricks/cli/internal/build",
-    "go run ./bundle/internal/schema/*.go ./bundle/schema/jsonschema.json",
+    "make schema",
     "echo 'bundle/internal/tf/schema/\\*.go linguist-generated=true' >> ./.gitattributes",
     "echo 'go.sum linguist-generated=true' >> ./.gitattributes",
     "echo 'bundle/schema/jsonschema.json linguist-generated=true' >> ./.gitattributes"

View File

@@ -1 +1 @@
-7016dcbf2e011459416cf408ce21143bcc4b3a25
+a6a317df8327c9b1e5cb59a03a42ffa2aabeef6d

.gitattributes (vendored)
View File

@@ -8,6 +8,7 @@ cmd/account/custom-app-integration/custom-app-integration.go linguist-generated=
 cmd/account/disable-legacy-features/disable-legacy-features.go linguist-generated=true
 cmd/account/encryption-keys/encryption-keys.go linguist-generated=true
 cmd/account/esm-enablement-account/esm-enablement-account.go linguist-generated=true
+cmd/account/federation-policy/federation-policy.go linguist-generated=true
 cmd/account/groups/groups.go linguist-generated=true
 cmd/account/ip-access-lists/ip-access-lists.go linguist-generated=true
 cmd/account/log-delivery/log-delivery.go linguist-generated=true
@@ -19,6 +20,7 @@ cmd/account/o-auth-published-apps/o-auth-published-apps.go linguist-generated=tr
 cmd/account/personal-compute/personal-compute.go linguist-generated=true
 cmd/account/private-access/private-access.go linguist-generated=true
 cmd/account/published-app-integration/published-app-integration.go linguist-generated=true
+cmd/account/service-principal-federation-policy/service-principal-federation-policy.go linguist-generated=true
 cmd/account/service-principal-secrets/service-principal-secrets.go linguist-generated=true
 cmd/account/service-principals/service-principals.go linguist-generated=true
 cmd/account/settings/settings.go linguist-generated=true

View File

@@ -99,14 +99,19 @@ jobs:
       # By default the ajv-cli runs in strict mode which will fail if the schema
       # itself is not valid. Strict mode is more strict than the JSON schema
      # specification. See for details: https://ajv.js.org/options.html#strict-mode-options
+      # The ajv-cli is configured to use the markdownDescription keyword which is not part of the JSON schema specification,
+      # but is used in editors like VSCode to render markdown in the description field
       - name: Validate bundle schema
         run: |
           go run main.go bundle schema > schema.json
+          # Add markdownDescription keyword to ajv
+          echo "module.exports=function(a){a.addKeyword('markdownDescription')}" >> keywords.js
           for file in ./bundle/internal/schema/testdata/pass/*.yml; do
-            ajv test -s schema.json -d $file --valid
+            ajv test -s schema.json -d $file --valid -c=./keywords.js
           done
           for file in ./bundle/internal/schema/testdata/fail/*.yml; do
-            ajv test -s schema.json -d $file --invalid
+            ajv test -s schema.json -d $file --invalid -c=./keywords.js
           done

View File

@@ -1,5 +1,28 @@
 # Version changelog
+## [Release] Release v0.237.0
+Bundles:
+* Allow overriding compute for non-development mode targets ([#1899](https://github.com/databricks/cli/pull/1899)).
+* Show an error when using a cluster override with 'mode: production' ([#1994](https://github.com/databricks/cli/pull/1994)).
+API Changes:
+* Added `databricks account federation-policy` command group.
+* Added `databricks account service-principal-federation-policy` command group.
+* Added `databricks aibi-dashboard-embedding-access-policy delete` command.
+* Added `databricks aibi-dashboard-embedding-approved-domains delete` command.
+OpenAPI commit a6a317df8327c9b1e5cb59a03a42ffa2aabeef6d (2024-12-16)
+Dependency updates:
+* Upgrade TF provider to 1.62.0 ([#2030](https://github.com/databricks/cli/pull/2030)).
+* Upgrade Go SDK to 0.54.0 ([#2029](https://github.com/databricks/cli/pull/2029)).
+* Bump TF codegen dependencies to latest ([#1961](https://github.com/databricks/cli/pull/1961)).
+* Bump golang.org/x/term from 0.26.0 to 0.27.0 ([#1983](https://github.com/databricks/cli/pull/1983)).
+* Bump golang.org/x/sync from 0.9.0 to 0.10.0 ([#1984](https://github.com/databricks/cli/pull/1984)).
+* Bump github.com/databricks/databricks-sdk-go from 0.52.0 to 0.53.0 ([#1985](https://github.com/databricks/cli/pull/1985)).
+* Bump golang.org/x/crypto from 0.24.0 to 0.31.0 ([#2006](https://github.com/databricks/cli/pull/2006)).
+* Bump golang.org/x/crypto from 0.30.0 to 0.31.0 in /bundle/internal/tf/codegen ([#2005](https://github.com/databricks/cli/pull/2005)).
 ## [Release] Release v0.236.0
 **New features for Databricks Asset Bundles:**

View File

@@ -30,6 +30,10 @@ vendor:
 	@echo "✓ Filling vendor folder with library code ..."
 	@go mod vendor
+schema:
+	@echo "✓ Generating json-schema ..."
+	@go run ./bundle/internal/schema ./bundle/internal/schema ./bundle/schema/jsonschema.json
 INTEGRATION = gotestsum --format github-actions --rerun-fails --jsonfile output.json --packages "./integration/..." -- -parallel 4 -timeout=2h
 integration:
@@ -38,4 +42,4 @@ integration:
 integration-short:
 	$(INTEGRATION) -short
-.PHONY: lint lintcheck test testonly coverage build snapshot vendor integration integration-short
+.PHONY: lint lintcheck test testonly coverage build snapshot vendor schema integration integration-short

View File

@ -0,0 +1,209 @@
package main
import (
"bytes"
"fmt"
"os"
"reflect"
"regexp"
"strings"
yaml3 "gopkg.in/yaml.v3"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/cli/libs/dyn/merge"
"github.com/databricks/cli/libs/dyn/yamlloader"
"github.com/databricks/cli/libs/dyn/yamlsaver"
"github.com/databricks/cli/libs/jsonschema"
)
type annotation struct {
Description string `json:"description,omitempty"`
MarkdownDescription string `json:"markdown_description,omitempty"`
Title string `json:"title,omitempty"`
Default any `json:"default,omitempty"`
Enum []any `json:"enum,omitempty"`
}
type annotationHandler struct {
// Annotations read from all annotation files including all overrides
parsedAnnotations annotationFile
// Missing annotations for fields that are found in config that need to be added to the annotation file
missingAnnotations annotationFile
}
/**
* Parsed file with annotations, expected format:
* github.com/databricks/cli/bundle/config.Bundle:
* cluster_id:
* description: "Description"
*/
type annotationFile map[string]map[string]annotation
const Placeholder = "PLACEHOLDER"
// Adds annotations to the JSON schema reading from the annotation files.
// More details https://json-schema.org/understanding-json-schema/reference/annotations
func newAnnotationHandler(sources []string) (*annotationHandler, error) {
prev := dyn.NilValue
for _, path := range sources {
b, err := os.ReadFile(path)
if err != nil {
return nil, err
}
generated, err := yamlloader.LoadYAML(path, bytes.NewBuffer(b))
if err != nil {
return nil, err
}
prev, err = merge.Merge(prev, generated)
if err != nil {
return nil, err
}
}
var data annotationFile
err := convert.ToTyped(&data, prev)
if err != nil {
return nil, err
}
d := &annotationHandler{}
d.parsedAnnotations = data
d.missingAnnotations = annotationFile{}
return d, nil
}
func (d *annotationHandler) addAnnotations(typ reflect.Type, s jsonschema.Schema) jsonschema.Schema {
refPath := getPath(typ)
shouldHandle := strings.HasPrefix(refPath, "github.com")
if !shouldHandle {
return s
}
annotations := d.parsedAnnotations[refPath]
if annotations == nil {
annotations = map[string]annotation{}
}
rootTypeAnnotation, ok := annotations[RootTypeKey]
if ok {
assignAnnotation(&s, rootTypeAnnotation)
}
for k, v := range s.Properties {
item := annotations[k]
if item.Description == "" {
item.Description = Placeholder
emptyAnnotations := d.missingAnnotations[refPath]
if emptyAnnotations == nil {
emptyAnnotations = map[string]annotation{}
d.missingAnnotations[refPath] = emptyAnnotations
}
emptyAnnotations[k] = item
}
assignAnnotation(v, item)
}
return s
}
// Writes missing annotations with placeholder values back to the annotation file
func (d *annotationHandler) syncWithMissingAnnotations(outputPath string) error {
existingFile, err := os.ReadFile(outputPath)
if err != nil {
return err
}
existing, err := yamlloader.LoadYAML("", bytes.NewBuffer(existingFile))
if err != nil {
return err
}
missingAnnotations, err := convert.FromTyped(&d.missingAnnotations, dyn.NilValue)
if err != nil {
return err
}
output, err := merge.Merge(existing, missingAnnotations)
if err != nil {
return err
}
err = saveYamlWithStyle(outputPath, output)
if err != nil {
return err
}
return nil
}
func getPath(typ reflect.Type) string {
return typ.PkgPath() + "." + typ.Name()
}
func assignAnnotation(s *jsonschema.Schema, a annotation) {
if a.Description != Placeholder {
s.Description = a.Description
}
if a.Default != nil {
s.Default = a.Default
}
s.MarkdownDescription = convertLinksToAbsoluteUrl(a.MarkdownDescription)
s.Title = a.Title
s.Enum = a.Enum
}
func saveYamlWithStyle(outputPath string, input dyn.Value) error {
style := map[string]yaml3.Style{}
file, _ := input.AsMap()
for _, v := range file.Keys() {
style[v.MustString()] = yaml3.LiteralStyle
}
saver := yamlsaver.NewSaverWithStyle(style)
err := saver.SaveAsYAML(file, outputPath, true)
if err != nil {
return err
}
return nil
}
func convertLinksToAbsoluteUrl(s string) string {
if s == "" {
return s
}
base := "https://docs.databricks.com"
referencePage := "/dev-tools/bundles/reference.html"
// Regular expression to match Markdown-style links like [_](link)
re := regexp.MustCompile(`\[_\]\(([^)]+)\)`)
result := re.ReplaceAllStringFunc(s, func(match string) string {
matches := re.FindStringSubmatch(match)
if len(matches) < 2 {
return match
}
link := matches[1]
var text, absoluteURL string
if strings.HasPrefix(link, "#") {
text = strings.TrimPrefix(link, "#")
absoluteURL = fmt.Sprintf("%s%s%s", base, referencePage, link)
// Handle relative paths like /dev-tools/bundles/resources.html#dashboard
} else if strings.HasPrefix(link, "/") {
absoluteURL = strings.ReplaceAll(fmt.Sprintf("%s%s", base, link), ".md", ".html")
if strings.Contains(link, "#") {
parts := strings.Split(link, "#")
text = parts[1]
} else {
text = "link"
}
} else {
return match
}
return fmt.Sprintf("[%s](%s)", text, absoluteURL)
})
return result
}

View File

@ -0,0 +1,501 @@
github.com/databricks/cli/bundle/config.Artifact:
"build":
"description": |-
An optional set of non-default build commands that you want to run locally before deployment.
For Python wheel builds, the Databricks CLI assumes that it can find a local install of the Python wheel package to run builds, and it runs the command python setup.py bdist_wheel by default during each bundle deployment.
To specify multiple build commands, separate each command with double-ampersand (&&) characters.
"executable":
"description": |-
The executable type.
"files":
"description": |-
The source files for the artifact.
"markdown_description": |-
The source files for the artifact, defined as an [_](#artifact_file).
"path":
"description": |-
The location where the built artifact will be saved.
"type":
"description": |-
The type of the artifact.
"markdown_description": |-
The type of the artifact. Valid values are `wheel` or `jar`
github.com/databricks/cli/bundle/config.ArtifactFile:
"source":
"description": |-
The path of the files used to build the artifact.
github.com/databricks/cli/bundle/config.Bundle:
"cluster_id":
"description": |-
The ID of a cluster to use to run the bundle.
"markdown_description": |-
The ID of a cluster to use to run the bundle. See [_](/dev-tools/bundles/settings.md#cluster_id).
"compute_id":
"description": |-
PLACEHOLDER
"databricks_cli_version":
"description": |-
The Databricks CLI version to use for the bundle.
"markdown_description": |-
The Databricks CLI version to use for the bundle. See [_](/dev-tools/bundles/settings.md#databricks_cli_version).
"deployment":
"description": |-
The definition of the bundle deployment
"markdown_description": |-
The definition of the bundle deployment. For supported attributes, see [_](#deployment) and [_](/dev-tools/bundles/deployment-modes.md).
"git":
"description": |-
The Git version control details that are associated with your bundle.
"markdown_description": |-
The Git version control details that are associated with your bundle. For supported attributes, see [_](#git) and [_](/dev-tools/bundles/settings.md#git).
"name":
"description": |-
The name of the bundle.
"uuid":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config.Deployment:
"fail_on_active_runs":
"description": |-
Whether to fail on active runs. If this is set to true a deployment that is running can be interrupted.
"lock":
"description": |-
The deployment lock attributes.
"markdown_description": |-
The deployment lock attributes. See [_](#lock).
github.com/databricks/cli/bundle/config.Experimental:
"pydabs":
"description": |-
The PyDABs configuration.
"python_wheel_wrapper":
"description": |-
Whether to use a Python wheel wrapper
"scripts":
"description": |-
The commands to run
"use_legacy_run_as":
"description": |-
Whether to use the legacy run_as behavior
github.com/databricks/cli/bundle/config.Git:
"branch":
"description": |-
The Git branch name.
"markdown_description": |-
The Git branch name. See [_](/dev-tools/bundles/settings.md#git).
"origin_url":
"description": |-
The origin URL of the repository.
"markdown_description": |-
The origin URL of the repository. See [_](/dev-tools/bundles/settings.md#git).
github.com/databricks/cli/bundle/config.Lock:
"enabled":
"description": |-
Whether this lock is enabled.
"force":
"description": |-
Whether to force this lock if it is enabled.
github.com/databricks/cli/bundle/config.Presets:
"jobs_max_concurrent_runs":
"description": |-
The maximum concurrent runs for a job.
"name_prefix":
"description": |-
The prefix for job runs of the bundle.
"pipelines_development":
"description": |-
Whether pipeline deployments should be locked in development mode.
"source_linked_deployment":
"description": |-
Whether to link the deployment to the bundle source.
"tags":
"description": |-
The tags for the bundle deployment.
"trigger_pause_status":
"description": |-
A pause status to apply to all job triggers and schedules. Valid values are PAUSED or UNPAUSED.
github.com/databricks/cli/bundle/config.PyDABs:
"enabled":
"description": |-
Whether or not PyDABs (Private Preview) is enabled
"import":
"description": |-
The PyDABs project to import to discover resources, resource generator and mutators
"venv_path":
"description": |-
The Python virtual environment path
github.com/databricks/cli/bundle/config.Resources:
"clusters":
"description": |-
The cluster definitions for the bundle.
"markdown_description": |-
The cluster definitions for the bundle. See [_](/dev-tools/bundles/resources.md#cluster)
"dashboards":
"description": |-
The dashboard definitions for the bundle.
"markdown_description": |-
The dashboard definitions for the bundle. See [_](/dev-tools/bundles/resources.md#dashboard)
"experiments":
"description": |-
The experiment definitions for the bundle.
"markdown_description": |-
The experiment definitions for the bundle. See [_](/dev-tools/bundles/resources.md#experiment)
"jobs":
"description": |-
The job definitions for the bundle.
"markdown_description": |-
The job definitions for the bundle. See [_](/dev-tools/bundles/resources.md#job)
"model_serving_endpoints":
"description": |-
The model serving endpoint definitions for the bundle.
"markdown_description": |-
The model serving endpoint definitions for the bundle. See [_](/dev-tools/bundles/resources.md#model_serving_endpoint)
"models":
"description": |-
The model definitions for the bundle.
"markdown_description": |-
The model definitions for the bundle. See [_](/dev-tools/bundles/resources.md#model)
"pipelines":
"description": |-
The pipeline definitions for the bundle.
"markdown_description": |-
The pipeline definitions for the bundle. See [_](/dev-tools/bundles/resources.md#pipeline)
"quality_monitors":
"description": |-
The quality monitor definitions for the bundle.
"markdown_description": |-
The quality monitor definitions for the bundle. See [_](/dev-tools/bundles/resources.md#quality_monitor)
"registered_models":
"description": |-
The registered model definitions for the bundle.
"markdown_description": |-
The registered model definitions for the bundle. See [_](/dev-tools/bundles/resources.md#registered_model)
"schemas":
"description": |-
The schema definitions for the bundle.
"markdown_description": |-
The schema definitions for the bundle. See [_](/dev-tools/bundles/resources.md#schema)
"volumes":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config.Root:
"artifacts":
"description": |-
Defines the attributes to build an artifact
"bundle":
"description": |-
The attributes of the bundle.
"markdown_description": |-
The attributes of the bundle. See [_](/dev-tools/bundles/settings.md#bundle)
"experimental":
"description": |-
Defines attributes for experimental features.
"include":
"description": |-
Specifies a list of path globs that contain configuration files to include within the bundle.
"markdown_description": |-
Specifies a list of path globs that contain configuration files to include within the bundle. See [_](/dev-tools/bundles/settings.md#include)
"permissions":
"description": |-
Defines the permissions to apply to experiments, jobs, pipelines, and models defined in the bundle
"markdown_description": |-
Defines the permissions to apply to experiments, jobs, pipelines, and models defined in the bundle. See [_](/dev-tools/bundles/settings.md#permissions) and [_](/dev-tools/bundles/permissions.md).
"presets":
"description": |-
Defines bundle deployment presets.
"markdown_description": |-
Defines bundle deployment presets. See [_](/dev-tools/bundles/deployment-modes.md#presets).
"resources":
"description": |-
Specifies information about the Databricks resources used by the bundle
"markdown_description": |-
Specifies information about the Databricks resources used by the bundle. See [_](/dev-tools/bundles/resources.md).
"run_as":
"description": |-
The identity to use to run the bundle.
"sync":
"description": |-
The files and file paths to include or exclude in the bundle.
"markdown_description": |-
The files and file paths to include or exclude in the bundle. See [_](/dev-tools/bundles/)
"targets":
"description": |-
Defines deployment targets for the bundle.
"variables":
"description": |-
A Map that defines the custom variables for the bundle, where each key is the name of the variable, and the value is a Map that defines the variable.
"workspace":
"description": |-
Defines the Databricks workspace for the bundle.
github.com/databricks/cli/bundle/config.Sync:
"exclude":
"description": |-
A list of files or folders to exclude from the bundle.
"include":
"description": |-
A list of files or folders to include in the bundle.
"paths":
"description": |-
The local folder paths, which can be outside the bundle root, to synchronize to the workspace when the bundle is deployed.
github.com/databricks/cli/bundle/config.Target:
"artifacts":
"description": |-
The artifacts to include in the target deployment.
"markdown_description": |-
The artifacts to include in the target deployment. See [_](#artifact)
"bundle":
"description": |-
The name of the bundle when deploying to this target.
"cluster_id":
"description": |-
The ID of the cluster to use for this target.
"compute_id":
"description": |-
Deprecated. The ID of the compute to use for this target.
"default":
"description": |-
Whether this target is the default target.
"git":
"description": |-
The Git version control settings for the target.
"markdown_description": |-
The Git version control settings for the target. See [_](#git).
"mode":
"description": |-
The deployment mode for the target.
"markdown_description": |-
The deployment mode for the target. Valid values are `development` or `production`. See [_](/dev-tools/bundles/deployment-modes.md).
"permissions":
"description": |-
The permissions for deploying and running the bundle in the target.
"markdown_description": |-
The permissions for deploying and running the bundle in the target. See [_](#permission).
"presets":
"description": |-
The deployment presets for the target.
"markdown_description": |-
The deployment presets for the target. See [_](#preset).
"resources":
"description": |-
The resource definitions for the target.
"markdown_description": |-
The resource definitions for the target. See [_](#resources).
"run_as":
"description": |-
The identity to use to run the bundle.
"markdown_description": |-
The identity to use to run the bundle. See [_](#job_run_as) and [_](/dev-tools/bundles/run_as.md).
"sync":
"description": |-
The local paths to sync to the target workspace when a bundle is run or deployed.
"markdown_description": |-
The local paths to sync to the target workspace when a bundle is run or deployed. See [_](#sync).
"variables":
"description": |-
The custom variable definitions for the target.
"markdown_description": |-
The custom variable definitions for the target. See [_](/dev-tools/bundles/settings.md#variables) and [_](/dev-tools/bundles/variables.md).
"workspace":
"description": |-
The Databricks workspace for the target.
"markdown_description": |-
The Databricks workspace for the target. [_](#workspace)
github.com/databricks/cli/bundle/config.Workspace:
"artifact_path":
"description": |-
The artifact path to use within the workspace for both deployments and workflow runs
"auth_type":
"description": |-
The authentication type.
"azure_client_id":
"description": |-
The Azure client ID
"azure_environment":
"description": |-
The Azure environment
"azure_login_app_id":
"description": |-
The Azure login app ID
"azure_tenant_id":
"description": |-
The Azure tenant ID
"azure_use_msi":
"description": |-
Whether to use MSI for Azure
"azure_workspace_resource_id":
"description": |-
The Azure workspace resource ID
"client_id":
"description": |-
The client ID for the workspace
"file_path":
"description": |-
The file path to use within the workspace for both deployments and workflow runs
"google_service_account":
"description": |-
The Google service account name
"host":
"description": |-
The Databricks workspace host URL
"profile":
"description": |-
The Databricks workspace profile name
"resource_path":
"description": |-
The workspace resource path
"root_path":
"description": |-
The Databricks workspace root path
"state_path":
"description": |-
The workspace state path
github.com/databricks/cli/bundle/config/resources.Grant:
"principal":
"description": |-
The name of the principal that will be granted privileges
"privileges":
"description": |-
The privileges to grant to the specified entity
github.com/databricks/cli/bundle/config/resources.Permission:
"group_name":
"description": |-
The name of the group that has the permission set in level.
"level":
"description": |-
The allowed permission for user, group, service principal defined for this permission.
"service_principal_name":
"description": |-
The name of the service principal that has the permission set in level.
"user_name":
"description": |-
The name of the user that has the permission set in level.
github.com/databricks/cli/bundle/config/variable.Lookup:
"alert":
"description": |-
PLACEHOLDER
"cluster":
"description": |-
PLACEHOLDER
"cluster_policy":
"description": |-
PLACEHOLDER
"dashboard":
"description": |-
PLACEHOLDER
"instance_pool":
"description": |-
PLACEHOLDER
"job":
"description": |-
PLACEHOLDER
"metastore":
"description": |-
PLACEHOLDER
"notification_destination":
"description": |-
PLACEHOLDER
"pipeline":
"description": |-
PLACEHOLDER
"query":
"description": |-
PLACEHOLDER
"service_principal":
"description": |-
PLACEHOLDER
"warehouse":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/variable.TargetVariable:
"default":
"description": |-
PLACEHOLDER
"description":
"description": |-
The description of the variable.
"lookup":
"description": |-
The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, pipeline, query, service_principal, or warehouse object for which to retrieve an ID.
"type":
"description": |-
The type of the variable.
"markdown_description":
"description": |-
The type of the variable.
github.com/databricks/cli/bundle/config/variable.Variable:
"default":
"description": |-
PLACEHOLDER
"description":
"description": |-
The description of the variable
"lookup":
"description": |-
The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, pipeline, query, service_principal, or warehouse object for which to retrieve an ID.
"markdown_description": |-
The name of the `alert`, `cluster_policy`, `cluster`, `dashboard`, `instance_pool`, `job`, `metastore`, `pipeline`, `query`, `service_principal`, or `warehouse` object for which to retrieve an ID."
"type":
"description": |-
The type of the variable.
github.com/databricks/databricks-sdk-go/service/serving.Ai21LabsConfig:
"ai21labs_api_key":
"description": |-
PLACEHOLDER
"ai21labs_api_key_plaintext":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/serving.GoogleCloudVertexAiConfig:
"private_key":
"description": |-
PLACEHOLDER
"private_key_plaintext":
"description": |-
PLACEHOLDER
"project_id":
"description": |-
PLACEHOLDER
"region":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/serving.OpenAiConfig:
"microsoft_entra_client_id":
"description": |-
PLACEHOLDER
"microsoft_entra_client_secret":
"description": |-
PLACEHOLDER
"microsoft_entra_client_secret_plaintext":
"description": |-
PLACEHOLDER
"microsoft_entra_tenant_id":
"description": |-
PLACEHOLDER
"openai_api_base":
"description": |-
PLACEHOLDER
"openai_api_key":
"description": |-
PLACEHOLDER
"openai_api_key_plaintext":
"description": |-
PLACEHOLDER
"openai_api_type":
"description": |-
PLACEHOLDER
"openai_api_version":
"description": |-
PLACEHOLDER
"openai_deployment_name":
"description": |-
PLACEHOLDER
"openai_organization":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/serving.PaLmConfig:
"palm_api_key":
"description": |-
PLACEHOLDER
"palm_api_key_plaintext":
"description": |-
PLACEHOLDER

File diff suppressed because it is too large.

View File

@ -0,0 +1,161 @@
github.com/databricks/cli/bundle/config/resources.Cluster:
"data_security_mode":
"description": |-
PLACEHOLDER
"docker_image":
"description": |-
PLACEHOLDER
"kind":
"description": |-
PLACEHOLDER
"permissions":
"description": |-
PLACEHOLDER
"runtime_engine":
"description": |-
PLACEHOLDER
"workload_type":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Dashboard:
"embed_credentials":
"description": |-
PLACEHOLDER
"file_path":
"description": |-
PLACEHOLDER
"permissions":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Job:
"health":
"description": |-
PLACEHOLDER
"permissions":
"description": |-
PLACEHOLDER
"run_as":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.MlflowExperiment:
"permissions":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.MlflowModel:
"permissions":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.ModelServingEndpoint:
"permissions":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Pipeline:
"permissions":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.QualityMonitor:
"table_name":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.RegisteredModel:
"grants":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Schema:
"grants":
"description": |-
PLACEHOLDER
"properties":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Volume:
"grants":
"description": |-
PLACEHOLDER
"volume_type":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.AwsAttributes:
"availability":
"description": |-
PLACEHOLDER
"ebs_volume_type":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.AzureAttributes:
"availability":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
"data_security_mode":
"description": |-
PLACEHOLDER
"docker_image":
"description": |-
PLACEHOLDER
"kind":
"description": |-
PLACEHOLDER
"runtime_engine":
"description": |-
PLACEHOLDER
"workload_type":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.DockerImage:
"basic_auth":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.GcpAttributes:
"availability":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.GitSource:
"git_snapshot":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.JobEnvironment:
"spec":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.JobsHealthRule:
"metric":
"description": |-
PLACEHOLDER
"op":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.JobsHealthRules:
"rules":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.RunJobTask:
"python_named_params":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.Task:
"health":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.TriggerSettings:
"table_update":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.Webhook:
"id":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/pipelines.CronTrigger:
"quartz_cron_schedule":
"description": |-
PLACEHOLDER
"timezone_id":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/pipelines.PipelineTrigger:
"cron":
"description": |-
PLACEHOLDER
"manual":
"description": |-
PLACEHOLDER

View File

@ -0,0 +1,44 @@
package main
import (
"testing"
)
func TestConvertLinksToAbsoluteUrl(t *testing.T) {
tests := []struct {
input string
expected string
}{
{
input: "",
expected: "",
},
{
input: "Some text (not a link)",
expected: "Some text (not a link)",
},
{
input: "This is a link to [_](#section)",
expected: "This is a link to [section](https://docs.databricks.com/dev-tools/bundles/reference.html#section)",
},
{
input: "This is a link to [_](/dev-tools/bundles/resources.html#dashboard)",
expected: "This is a link to [dashboard](https://docs.databricks.com/dev-tools/bundles/resources.html#dashboard)",
},
{
input: "This is a link to [_](/dev-tools/bundles/resources.html)",
expected: "This is a link to [link](https://docs.databricks.com/dev-tools/bundles/resources.html)",
},
{
input: "This is a link to [external](https://external.com)",
expected: "This is a link to [external](https://external.com)",
},
}
for _, test := range tests {
result := convertLinksToAbsoluteUrl(test.input)
if result != test.expected {
t.Errorf("For input '%s', expected '%s', but got '%s'", test.input, test.expected, result)
}
}
}

View File

@ -5,6 +5,7 @@ import (
"fmt" "fmt"
"log" "log"
"os" "os"
"path/filepath"
"reflect" "reflect"
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
@ -43,7 +44,8 @@ func addInterpolationPatterns(typ reflect.Type, s jsonschema.Schema) jsonschema.
case jsonschema.ArrayType, jsonschema.ObjectType: case jsonschema.ArrayType, jsonschema.ObjectType:
// arrays and objects can have complex variable values specified. // arrays and objects can have complex variable values specified.
return jsonschema.Schema{ return jsonschema.Schema{
AnyOf: []jsonschema.Schema{ // OneOf is used because we don't expect more than 1 match and schema-based auto-complete works better with OneOf
OneOf: []jsonschema.Schema{
s, s,
{ {
Type: jsonschema.StringType, Type: jsonschema.StringType,
@ -55,7 +57,7 @@ func addInterpolationPatterns(typ reflect.Type, s jsonschema.Schema) jsonschema.
// primitives can have variable values, or references like ${bundle.xyz} // primitives can have variable values, or references like ${bundle.xyz}
// or ${workspace.xyz} // or ${workspace.xyz}
return jsonschema.Schema{ return jsonschema.Schema{
AnyOf: []jsonschema.Schema{ OneOf: []jsonschema.Schema{
s, s,
{Type: jsonschema.StringType, Pattern: interpolationPattern("resources")}, {Type: jsonschema.StringType, Pattern: interpolationPattern("resources")},
{Type: jsonschema.StringType, Pattern: interpolationPattern("bundle")}, {Type: jsonschema.StringType, Pattern: interpolationPattern("bundle")},
@ -113,37 +115,60 @@ func makeVolumeTypeOptional(typ reflect.Type, s jsonschema.Schema) jsonschema.Sc
} }
func main() { func main() {
if len(os.Args) != 2 { if len(os.Args) != 3 {
fmt.Println("Usage: go run main.go <output-file>") fmt.Println("Usage: go run main.go <work-dir> <output-file>")
os.Exit(1) os.Exit(1)
} }
// Directory with annotation files
workdir := os.Args[1]
// Output file, where the generated JSON schema will be written to. // Output file, where the generated JSON schema will be written to.
outputFile := os.Args[1] outputFile := os.Args[2]
generateSchema(workdir, outputFile)
}
func generateSchema(workdir, outputFile string) {
annotationsPath := filepath.Join(workdir, "annotations.yml")
annotationsOpenApiPath := filepath.Join(workdir, "annotations_openapi.yml")
annotationsOpenApiOverridesPath := filepath.Join(workdir, "annotations_openapi_overrides.yml")
// Input file, the databricks openapi spec. // Input file, the databricks openapi spec.
inputFile := os.Getenv("DATABRICKS_OPENAPI_SPEC") inputFile := os.Getenv("DATABRICKS_OPENAPI_SPEC")
if inputFile == "" { if inputFile != "" {
log.Fatal("DATABRICKS_OPENAPI_SPEC environment variable not set") p, err := newParser(inputFile)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Writing OpenAPI annotations to %s\n", annotationsOpenApiPath)
err = p.extractAnnotations(reflect.TypeOf(config.Root{}), annotationsOpenApiPath, annotationsOpenApiOverridesPath)
if err != nil {
log.Fatal(err)
}
} }
p, err := newParser(inputFile) a, err := newAnnotationHandler([]string{annotationsOpenApiPath, annotationsOpenApiOverridesPath, annotationsPath})
if err != nil { if err != nil {
log.Fatal(err) log.Fatal(err)
} }
// Generate the JSON schema from the bundle Go struct. // Generate the JSON schema from the bundle Go struct.
s, err := jsonschema.FromType(reflect.TypeOf(config.Root{}), []func(reflect.Type, jsonschema.Schema) jsonschema.Schema{ s, err := jsonschema.FromType(reflect.TypeOf(config.Root{}), []func(reflect.Type, jsonschema.Schema) jsonschema.Schema{
p.addDescriptions,
p.addEnums,
removeJobsFields, removeJobsFields,
makeVolumeTypeOptional, makeVolumeTypeOptional,
a.addAnnotations,
addInterpolationPatterns, addInterpolationPatterns,
}) })
if err != nil { if err != nil {
log.Fatal(err) log.Fatal(err)
} }
// Overwrite the input annotation file, adding missing annotations
err = a.syncWithMissingAnnotations(annotationsPath)
if err != nil {
log.Fatal(err)
}
b, err := json.MarshalIndent(s, "", " ") b, err := json.MarshalIndent(s, "", " ")
if err != nil { if err != nil {
log.Fatal(err) log.Fatal(err)

View File

@ -0,0 +1,125 @@
package main
import (
"bytes"
"fmt"
"io"
"os"
"path"
"reflect"
"strings"
"testing"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/merge"
"github.com/databricks/cli/libs/dyn/yamlloader"
"github.com/databricks/cli/libs/jsonschema"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
)
func copyFile(src, dst string) error {
in, err := os.Open(src)
if err != nil {
return err
}
defer in.Close()
out, err := os.Create(dst)
if err != nil {
return err
}
defer out.Close()
_, err = io.Copy(out, in)
if err != nil {
return err
}
return out.Close()
}
// Checks whether descriptions are added for new config fields in the annotations.yml file
// If this test fails either manually add descriptions to the `annotations.yml` or do the following:
// 1. run `make schema` from the repository root to add placeholder descriptions
// 2. replace all "PLACEHOLDER" values with the actual descriptions if possible
// 3. run `make schema` again to regenerate the schema with actual descriptions
func TestRequiredAnnotationsForNewFields(t *testing.T) {
workdir := t.TempDir()
annotationsPath := path.Join(workdir, "annotations.yml")
annotationsOpenApiPath := path.Join(workdir, "annotations_openapi.yml")
annotationsOpenApiOverridesPath := path.Join(workdir, "annotations_openapi_overrides.yml")
// Copy existing annotation files from the same folder as this test
err := copyFile("annotations.yml", annotationsPath)
assert.NoError(t, err)
err = copyFile("annotations_openapi.yml", annotationsOpenApiPath)
assert.NoError(t, err)
err = copyFile("annotations_openapi_overrides.yml", annotationsOpenApiOverridesPath)
assert.NoError(t, err)
generateSchema(workdir, path.Join(t.TempDir(), "schema.json"))
originalFile, err := os.ReadFile("annotations.yml")
assert.NoError(t, err)
currentFile, err := os.ReadFile(annotationsPath)
assert.NoError(t, err)
original, err := yamlloader.LoadYAML("", bytes.NewBuffer(originalFile))
assert.NoError(t, err)
current, err := yamlloader.LoadYAML("", bytes.NewBuffer(currentFile))
assert.NoError(t, err)
// Collect added paths.
var updatedFieldPaths []string
_, err = merge.Override(original, current, merge.OverrideVisitor{
VisitInsert: func(basePath dyn.Path, right dyn.Value) (dyn.Value, error) {
updatedFieldPaths = append(updatedFieldPaths, basePath.String())
return right, nil
},
})
assert.NoError(t, err)
assert.Empty(t, updatedFieldPaths, fmt.Sprintf("Missing JSON-schema descriptions for new config fields in bundle/internal/schema/annotations.yml:\n%s", strings.Join(updatedFieldPaths, "\n")))
}
// Checks whether types in annotation files are still present in Config type
func TestNoDetachedAnnotations(t *testing.T) {
files := []string{
"annotations.yml",
"annotations_openapi.yml",
"annotations_openapi_overrides.yml",
}
types := map[string]bool{}
for _, file := range files {
annotations, err := getAnnotations(file)
assert.NoError(t, err)
for k := range annotations {
types[k] = false
}
}
_, err := jsonschema.FromType(reflect.TypeOf(config.Root{}), []func(reflect.Type, jsonschema.Schema) jsonschema.Schema{
func(typ reflect.Type, s jsonschema.Schema) jsonschema.Schema {
delete(types, getPath(typ))
return s
},
})
assert.NoError(t, err)
for typ := range types {
t.Errorf("Type `%s` in annotations file is not found in `root.Config` type", typ)
}
assert.Empty(t, types, "Detached annotations found, regenerate schema and check for package path changes")
}
func getAnnotations(path string) (annotationFile, error) {
b, err := os.ReadFile(path)
if err != nil {
return nil, err
}
var data annotationFile
err = yaml.Unmarshal(b, &data)
return data, err
}

View File

@ -1,6 +1,7 @@
package main package main
import ( import (
"bytes"
"encoding/json" "encoding/json"
"fmt" "fmt"
"os" "os"
@ -8,6 +9,9 @@ import (
"reflect" "reflect"
"strings" "strings"
"github.com/ghodss/yaml"
"github.com/databricks/cli/libs/dyn/yamlloader"
"github.com/databricks/cli/libs/jsonschema" "github.com/databricks/cli/libs/jsonschema"
) )
@ -23,6 +27,8 @@ type openapiParser struct {
ref map[string]jsonschema.Schema ref map[string]jsonschema.Schema
} }
const RootTypeKey = "_"
func newParser(path string) (*openapiParser, error) { func newParser(path string) (*openapiParser, error) {
b, err := os.ReadFile(path) b, err := os.ReadFile(path)
if err != nil { if err != nil {
@ -89,35 +95,95 @@ func (p *openapiParser) findRef(typ reflect.Type) (jsonschema.Schema, bool) {
} }
// Use the OpenAPI spec to load descriptions for the given type. // Use the OpenAPI spec to load descriptions for the given type.
func (p *openapiParser) addDescriptions(typ reflect.Type, s jsonschema.Schema) jsonschema.Schema { func (p *openapiParser) extractAnnotations(typ reflect.Type, outputPath, overridesPath string) error {
ref, ok := p.findRef(typ) annotations := annotationFile{}
if !ok { overrides := annotationFile{}
return s
b, err := os.ReadFile(overridesPath)
if err != nil {
return err
}
err = yaml.Unmarshal(b, &overrides)
if err != nil {
return err
}
if overrides == nil {
overrides = annotationFile{}
} }
s.Description = ref.Description _, err = jsonschema.FromType(typ, []func(reflect.Type, jsonschema.Schema) jsonschema.Schema{
for k, v := range s.Properties { func(typ reflect.Type, s jsonschema.Schema) jsonschema.Schema {
if refProp, ok := ref.Properties[k]; ok { ref, ok := p.findRef(typ)
v.Description = refProp.Description if !ok {
} return s
}
basePath := getPath(typ)
pkg := map[string]annotation{}
annotations[basePath] = pkg
if ref.Description != "" || ref.Enum != nil {
pkg[RootTypeKey] = annotation{Description: ref.Description, Enum: ref.Enum}
}
for k := range s.Properties {
if refProp, ok := ref.Properties[k]; ok {
pkg[k] = annotation{Description: refProp.Description, Enum: refProp.Enum}
if refProp.Description == "" {
addEmptyOverride(k, basePath, overrides)
}
} else {
addEmptyOverride(k, basePath, overrides)
}
}
return s
},
})
if err != nil {
return err
} }
return s b, err = yaml.Marshal(overrides)
if err != nil {
return err
}
o, err := yamlloader.LoadYAML("", bytes.NewBuffer(b))
if err != nil {
return err
}
err = saveYamlWithStyle(overridesPath, o)
if err != nil {
return err
}
b, err = yaml.Marshal(annotations)
if err != nil {
return err
}
b = bytes.Join([][]byte{[]byte("# This file is auto-generated. DO NOT EDIT."), b}, []byte("\n"))
err = os.WriteFile(outputPath, b, 0o644)
if err != nil {
return err
}
return nil
} }
// Use the OpenAPI spec add enum values for the given type. func addEmptyOverride(key, pkg string, overridesFile annotationFile) {
func (p *openapiParser) addEnums(typ reflect.Type, s jsonschema.Schema) jsonschema.Schema { if overridesFile[pkg] == nil {
ref, ok := p.findRef(typ) overridesFile[pkg] = map[string]annotation{}
}
overrides := overridesFile[pkg]
if overrides[key].Description == "" {
overrides[key] = annotation{Description: Placeholder}
}
a, ok := overrides[key]
if !ok { if !ok {
return s a = annotation{}
} }
if a.Description == "" {
s.Enum = append(s.Enum, ref.Enum...) a.Description = Placeholder
for k, v := range s.Properties {
if refProp, ok := ref.Properties[k]; ok {
v.Enum = append(v.Enum, refProp.Enum...)
}
} }
overrides[key] = a
return s
} }

View File

@ -0,0 +1,5 @@
targets:
production:
variables:
myvar:
default: true

View File

@@ -1,3 +1,3 @@
 package schema
 
-const ProviderVersion = "1.61.0"
+const ProviderVersion = "1.62.0"

View File

@ -317,6 +317,8 @@ type DataSourceClusterClusterInfoSpec struct {
EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"` EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"`
IdempotencyToken string `json:"idempotency_token,omitempty"` IdempotencyToken string `json:"idempotency_token,omitempty"`
InstancePoolId string `json:"instance_pool_id,omitempty"` InstancePoolId string `json:"instance_pool_id,omitempty"`
IsSingleNode bool `json:"is_single_node,omitempty"`
Kind string `json:"kind,omitempty"`
NodeTypeId string `json:"node_type_id,omitempty"` NodeTypeId string `json:"node_type_id,omitempty"`
NumWorkers int `json:"num_workers,omitempty"` NumWorkers int `json:"num_workers,omitempty"`
PolicyId string `json:"policy_id,omitempty"` PolicyId string `json:"policy_id,omitempty"`
@ -326,6 +328,7 @@ type DataSourceClusterClusterInfoSpec struct {
SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"` SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"`
SparkVersion string `json:"spark_version"` SparkVersion string `json:"spark_version"`
SshPublicKeys []string `json:"ssh_public_keys,omitempty"` SshPublicKeys []string `json:"ssh_public_keys,omitempty"`
UseMlRuntime bool `json:"use_ml_runtime,omitempty"`
Autoscale *DataSourceClusterClusterInfoSpecAutoscale `json:"autoscale,omitempty"` Autoscale *DataSourceClusterClusterInfoSpecAutoscale `json:"autoscale,omitempty"`
AwsAttributes *DataSourceClusterClusterInfoSpecAwsAttributes `json:"aws_attributes,omitempty"` AwsAttributes *DataSourceClusterClusterInfoSpecAwsAttributes `json:"aws_attributes,omitempty"`
AzureAttributes *DataSourceClusterClusterInfoSpecAzureAttributes `json:"azure_attributes,omitempty"` AzureAttributes *DataSourceClusterClusterInfoSpecAzureAttributes `json:"azure_attributes,omitempty"`
@ -369,7 +372,9 @@ type DataSourceClusterClusterInfo struct {
EnableElasticDisk bool `json:"enable_elastic_disk,omitempty"` EnableElasticDisk bool `json:"enable_elastic_disk,omitempty"`
EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"` EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"`
InstancePoolId string `json:"instance_pool_id,omitempty"` InstancePoolId string `json:"instance_pool_id,omitempty"`
IsSingleNode bool `json:"is_single_node,omitempty"`
JdbcPort int `json:"jdbc_port,omitempty"` JdbcPort int `json:"jdbc_port,omitempty"`
Kind string `json:"kind,omitempty"`
LastRestartedTime int `json:"last_restarted_time,omitempty"` LastRestartedTime int `json:"last_restarted_time,omitempty"`
LastStateLossTime int `json:"last_state_loss_time,omitempty"` LastStateLossTime int `json:"last_state_loss_time,omitempty"`
NodeTypeId string `json:"node_type_id,omitempty"` NodeTypeId string `json:"node_type_id,omitempty"`
@ -386,6 +391,7 @@ type DataSourceClusterClusterInfo struct {
State string `json:"state,omitempty"` State string `json:"state,omitempty"`
StateMessage string `json:"state_message,omitempty"` StateMessage string `json:"state_message,omitempty"`
TerminatedTime int `json:"terminated_time,omitempty"` TerminatedTime int `json:"terminated_time,omitempty"`
UseMlRuntime bool `json:"use_ml_runtime,omitempty"`
Autoscale *DataSourceClusterClusterInfoAutoscale `json:"autoscale,omitempty"` Autoscale *DataSourceClusterClusterInfoAutoscale `json:"autoscale,omitempty"`
AwsAttributes *DataSourceClusterClusterInfoAwsAttributes `json:"aws_attributes,omitempty"` AwsAttributes *DataSourceClusterClusterInfoAwsAttributes `json:"aws_attributes,omitempty"`
AzureAttributes *DataSourceClusterClusterInfoAzureAttributes `json:"azure_attributes,omitempty"` AzureAttributes *DataSourceClusterClusterInfoAzureAttributes `json:"azure_attributes,omitempty"`

View File

@@ -176,6 +176,8 @@ type ResourceCluster struct {
 	IdempotencyToken string `json:"idempotency_token,omitempty"`
 	InstancePoolId string `json:"instance_pool_id,omitempty"`
 	IsPinned bool `json:"is_pinned,omitempty"`
+	IsSingleNode bool `json:"is_single_node,omitempty"`
+	Kind string `json:"kind,omitempty"`
 	NoWait bool `json:"no_wait,omitempty"`
 	NodeTypeId string `json:"node_type_id,omitempty"`
 	NumWorkers int `json:"num_workers,omitempty"`
@@ -188,6 +190,7 @@ type ResourceCluster struct {
 	SshPublicKeys []string `json:"ssh_public_keys,omitempty"`
 	State string `json:"state,omitempty"`
 	Url string `json:"url,omitempty"`
+	UseMlRuntime bool `json:"use_ml_runtime,omitempty"`
 	Autoscale *ResourceClusterAutoscale `json:"autoscale,omitempty"`
 	AwsAttributes *ResourceClusterAwsAttributes `json:"aws_attributes,omitempty"`
 	AzureAttributes *ResourceClusterAzureAttributes `json:"azure_attributes,omitempty"`

View File

@ -240,6 +240,8 @@ type ResourceJobJobClusterNewCluster struct {
EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"` EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"`
IdempotencyToken string `json:"idempotency_token,omitempty"` IdempotencyToken string `json:"idempotency_token,omitempty"`
InstancePoolId string `json:"instance_pool_id,omitempty"` InstancePoolId string `json:"instance_pool_id,omitempty"`
IsSingleNode bool `json:"is_single_node,omitempty"`
Kind string `json:"kind,omitempty"`
NodeTypeId string `json:"node_type_id,omitempty"` NodeTypeId string `json:"node_type_id,omitempty"`
NumWorkers int `json:"num_workers,omitempty"` NumWorkers int `json:"num_workers,omitempty"`
PolicyId string `json:"policy_id,omitempty"` PolicyId string `json:"policy_id,omitempty"`
@ -249,6 +251,7 @@ type ResourceJobJobClusterNewCluster struct {
SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"` SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"`
SparkVersion string `json:"spark_version"` SparkVersion string `json:"spark_version"`
SshPublicKeys []string `json:"ssh_public_keys,omitempty"` SshPublicKeys []string `json:"ssh_public_keys,omitempty"`
UseMlRuntime bool `json:"use_ml_runtime,omitempty"`
Autoscale *ResourceJobJobClusterNewClusterAutoscale `json:"autoscale,omitempty"` Autoscale *ResourceJobJobClusterNewClusterAutoscale `json:"autoscale,omitempty"`
AwsAttributes *ResourceJobJobClusterNewClusterAwsAttributes `json:"aws_attributes,omitempty"` AwsAttributes *ResourceJobJobClusterNewClusterAwsAttributes `json:"aws_attributes,omitempty"`
AzureAttributes *ResourceJobJobClusterNewClusterAzureAttributes `json:"azure_attributes,omitempty"` AzureAttributes *ResourceJobJobClusterNewClusterAzureAttributes `json:"azure_attributes,omitempty"`
@ -462,6 +465,8 @@ type ResourceJobNewCluster struct {
EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"` EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"`
IdempotencyToken string `json:"idempotency_token,omitempty"` IdempotencyToken string `json:"idempotency_token,omitempty"`
InstancePoolId string `json:"instance_pool_id,omitempty"` InstancePoolId string `json:"instance_pool_id,omitempty"`
IsSingleNode bool `json:"is_single_node,omitempty"`
Kind string `json:"kind,omitempty"`
NodeTypeId string `json:"node_type_id,omitempty"` NodeTypeId string `json:"node_type_id,omitempty"`
NumWorkers int `json:"num_workers,omitempty"` NumWorkers int `json:"num_workers,omitempty"`
PolicyId string `json:"policy_id,omitempty"` PolicyId string `json:"policy_id,omitempty"`
@ -471,6 +476,7 @@ type ResourceJobNewCluster struct {
SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"` SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"`
SparkVersion string `json:"spark_version"` SparkVersion string `json:"spark_version"`
SshPublicKeys []string `json:"ssh_public_keys,omitempty"` SshPublicKeys []string `json:"ssh_public_keys,omitempty"`
UseMlRuntime bool `json:"use_ml_runtime,omitempty"`
Autoscale *ResourceJobNewClusterAutoscale `json:"autoscale,omitempty"` Autoscale *ResourceJobNewClusterAutoscale `json:"autoscale,omitempty"`
AwsAttributes *ResourceJobNewClusterAwsAttributes `json:"aws_attributes,omitempty"` AwsAttributes *ResourceJobNewClusterAwsAttributes `json:"aws_attributes,omitempty"`
AzureAttributes *ResourceJobNewClusterAzureAttributes `json:"azure_attributes,omitempty"` AzureAttributes *ResourceJobNewClusterAzureAttributes `json:"azure_attributes,omitempty"`
@ -548,6 +554,13 @@ type ResourceJobSparkSubmitTask struct {
Parameters []string `json:"parameters,omitempty"`
}
type ResourceJobTaskCleanRoomsNotebookTask struct {
CleanRoomName string `json:"clean_room_name"`
Etag string `json:"etag,omitempty"`
NotebookBaseParameters map[string]string `json:"notebook_base_parameters,omitempty"`
NotebookName string `json:"notebook_name"`
}
type ResourceJobTaskConditionTask struct {
Left string `json:"left"`
Op string `json:"op"`
@ -578,6 +591,13 @@ type ResourceJobTaskEmailNotifications struct {
OnSuccess []string `json:"on_success,omitempty"`
}
type ResourceJobTaskForEachTaskTaskCleanRoomsNotebookTask struct {
CleanRoomName string `json:"clean_room_name"`
Etag string `json:"etag,omitempty"`
NotebookBaseParameters map[string]string `json:"notebook_base_parameters,omitempty"`
NotebookName string `json:"notebook_name"`
}
type ResourceJobTaskForEachTaskTaskConditionTask struct {
Left string `json:"left"`
Op string `json:"op"`
@ -814,6 +834,8 @@ type ResourceJobTaskForEachTaskTaskNewCluster struct {
EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"`
IdempotencyToken string `json:"idempotency_token,omitempty"`
InstancePoolId string `json:"instance_pool_id,omitempty"`
IsSingleNode bool `json:"is_single_node,omitempty"`
Kind string `json:"kind,omitempty"`
NodeTypeId string `json:"node_type_id,omitempty"`
NumWorkers int `json:"num_workers,omitempty"`
PolicyId string `json:"policy_id,omitempty"`
@ -823,6 +845,7 @@ type ResourceJobTaskForEachTaskTaskNewCluster struct {
SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"`
SparkVersion string `json:"spark_version"`
SshPublicKeys []string `json:"ssh_public_keys,omitempty"`
UseMlRuntime bool `json:"use_ml_runtime,omitempty"`
Autoscale *ResourceJobTaskForEachTaskTaskNewClusterAutoscale `json:"autoscale,omitempty"`
AwsAttributes *ResourceJobTaskForEachTaskTaskNewClusterAwsAttributes `json:"aws_attributes,omitempty"`
AzureAttributes *ResourceJobTaskForEachTaskTaskNewClusterAzureAttributes `json:"azure_attributes,omitempty"`
@ -963,34 +986,35 @@ type ResourceJobTaskForEachTaskTaskWebhookNotifications struct {
}
type ResourceJobTaskForEachTaskTask struct {
Description string `json:"description,omitempty"`
DisableAutoOptimization bool `json:"disable_auto_optimization,omitempty"`
EnvironmentKey string `json:"environment_key,omitempty"`
ExistingClusterId string `json:"existing_cluster_id,omitempty"`
JobClusterKey string `json:"job_cluster_key,omitempty"`
MaxRetries int `json:"max_retries,omitempty"`
MinRetryIntervalMillis int `json:"min_retry_interval_millis,omitempty"`
RetryOnTimeout bool `json:"retry_on_timeout,omitempty"`
RunIf string `json:"run_if,omitempty"`
TaskKey string `json:"task_key"`
TimeoutSeconds int `json:"timeout_seconds,omitempty"`
CleanRoomsNotebookTask *ResourceJobTaskForEachTaskTaskCleanRoomsNotebookTask `json:"clean_rooms_notebook_task,omitempty"`
ConditionTask *ResourceJobTaskForEachTaskTaskConditionTask `json:"condition_task,omitempty"`
DbtTask *ResourceJobTaskForEachTaskTaskDbtTask `json:"dbt_task,omitempty"`
DependsOn []ResourceJobTaskForEachTaskTaskDependsOn `json:"depends_on,omitempty"`
EmailNotifications *ResourceJobTaskForEachTaskTaskEmailNotifications `json:"email_notifications,omitempty"`
Health *ResourceJobTaskForEachTaskTaskHealth `json:"health,omitempty"`
Library []ResourceJobTaskForEachTaskTaskLibrary `json:"library,omitempty"`
NewCluster *ResourceJobTaskForEachTaskTaskNewCluster `json:"new_cluster,omitempty"`
NotebookTask *ResourceJobTaskForEachTaskTaskNotebookTask `json:"notebook_task,omitempty"`
NotificationSettings *ResourceJobTaskForEachTaskTaskNotificationSettings `json:"notification_settings,omitempty"`
PipelineTask *ResourceJobTaskForEachTaskTaskPipelineTask `json:"pipeline_task,omitempty"`
PythonWheelTask *ResourceJobTaskForEachTaskTaskPythonWheelTask `json:"python_wheel_task,omitempty"`
RunJobTask *ResourceJobTaskForEachTaskTaskRunJobTask `json:"run_job_task,omitempty"`
SparkJarTask *ResourceJobTaskForEachTaskTaskSparkJarTask `json:"spark_jar_task,omitempty"`
SparkPythonTask *ResourceJobTaskForEachTaskTaskSparkPythonTask `json:"spark_python_task,omitempty"`
SparkSubmitTask *ResourceJobTaskForEachTaskTaskSparkSubmitTask `json:"spark_submit_task,omitempty"`
SqlTask *ResourceJobTaskForEachTaskTaskSqlTask `json:"sql_task,omitempty"`
WebhookNotifications *ResourceJobTaskForEachTaskTaskWebhookNotifications `json:"webhook_notifications,omitempty"`
}
type ResourceJobTaskForEachTask struct { type ResourceJobTaskForEachTask struct {
@ -1205,6 +1229,8 @@ type ResourceJobTaskNewCluster struct {
EnableLocalDiskEncryption bool `json:"enable_local_disk_encryption,omitempty"`
IdempotencyToken string `json:"idempotency_token,omitempty"`
InstancePoolId string `json:"instance_pool_id,omitempty"`
IsSingleNode bool `json:"is_single_node,omitempty"`
Kind string `json:"kind,omitempty"`
NodeTypeId string `json:"node_type_id,omitempty"`
NumWorkers int `json:"num_workers,omitempty"`
PolicyId string `json:"policy_id,omitempty"`
@ -1214,6 +1240,7 @@ type ResourceJobTaskNewCluster struct {
SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"`
SparkVersion string `json:"spark_version"`
SshPublicKeys []string `json:"ssh_public_keys,omitempty"`
UseMlRuntime bool `json:"use_ml_runtime,omitempty"`
Autoscale *ResourceJobTaskNewClusterAutoscale `json:"autoscale,omitempty"`
AwsAttributes *ResourceJobTaskNewClusterAwsAttributes `json:"aws_attributes,omitempty"`
AzureAttributes *ResourceJobTaskNewClusterAzureAttributes `json:"azure_attributes,omitempty"`
@ -1354,35 +1381,36 @@ type ResourceJobTaskWebhookNotifications struct {
}
type ResourceJobTask struct {
Description string `json:"description,omitempty"`
DisableAutoOptimization bool `json:"disable_auto_optimization,omitempty"`
EnvironmentKey string `json:"environment_key,omitempty"`
ExistingClusterId string `json:"existing_cluster_id,omitempty"`
JobClusterKey string `json:"job_cluster_key,omitempty"`
MaxRetries int `json:"max_retries,omitempty"`
MinRetryIntervalMillis int `json:"min_retry_interval_millis,omitempty"`
RetryOnTimeout bool `json:"retry_on_timeout,omitempty"`
RunIf string `json:"run_if,omitempty"`
TaskKey string `json:"task_key"`
TimeoutSeconds int `json:"timeout_seconds,omitempty"`
CleanRoomsNotebookTask *ResourceJobTaskCleanRoomsNotebookTask `json:"clean_rooms_notebook_task,omitempty"`
ConditionTask *ResourceJobTaskConditionTask `json:"condition_task,omitempty"`
DbtTask *ResourceJobTaskDbtTask `json:"dbt_task,omitempty"`
DependsOn []ResourceJobTaskDependsOn `json:"depends_on,omitempty"`
EmailNotifications *ResourceJobTaskEmailNotifications `json:"email_notifications,omitempty"`
ForEachTask *ResourceJobTaskForEachTask `json:"for_each_task,omitempty"`
Health *ResourceJobTaskHealth `json:"health,omitempty"`
Library []ResourceJobTaskLibrary `json:"library,omitempty"`
NewCluster *ResourceJobTaskNewCluster `json:"new_cluster,omitempty"`
NotebookTask *ResourceJobTaskNotebookTask `json:"notebook_task,omitempty"`
NotificationSettings *ResourceJobTaskNotificationSettings `json:"notification_settings,omitempty"`
PipelineTask *ResourceJobTaskPipelineTask `json:"pipeline_task,omitempty"`
PythonWheelTask *ResourceJobTaskPythonWheelTask `json:"python_wheel_task,omitempty"`
RunJobTask *ResourceJobTaskRunJobTask `json:"run_job_task,omitempty"`
SparkJarTask *ResourceJobTaskSparkJarTask `json:"spark_jar_task,omitempty"`
SparkPythonTask *ResourceJobTaskSparkPythonTask `json:"spark_python_task,omitempty"`
SparkSubmitTask *ResourceJobTaskSparkSubmitTask `json:"spark_submit_task,omitempty"`
SqlTask *ResourceJobTaskSqlTask `json:"sql_task,omitempty"`
WebhookNotifications *ResourceJobTaskWebhookNotifications `json:"webhook_notifications,omitempty"`
}
type ResourceJobTriggerFileArrival struct {


@ -244,9 +244,9 @@ type ResourcePipelineNotification struct {
}
type ResourcePipelineRestartWindow struct {
- DaysOfWeek string `json:"days_of_week,omitempty"`
+ DaysOfWeek []string `json:"days_of_week,omitempty"`
StartHour int `json:"start_hour"`
TimeZoneId string `json:"time_zone_id,omitempty"`
}
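With days_of_week now list-typed, a restart window can span multiple days in one definition. A small sketch, assuming the same generated schema package and upper-case day names as values:

```go
package schema

// Sketch only: restart window with the new list-typed days_of_week.
// The exact accepted day-name strings are an assumption here.
func exampleRestartWindow() ResourcePipelineRestartWindow {
	return ResourcePipelineRestartWindow{
		DaysOfWeek: []string{"SATURDAY", "SUNDAY"},
		StartHour:  2,
		TimeZoneId: "UTC",
	}
}
```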
type ResourcePipelineTriggerCron struct {


@ -21,7 +21,7 @@ type Root struct {
const ProviderHost = "registry.terraform.io"
const ProviderSource = "databricks/databricks"
- const ProviderVersion = "1.61.0"
+ const ProviderVersion = "1.62.0"
func NewRoot() *Root {
return &Root{


@ -41,21 +41,21 @@ func TestJsonSchema(t *testing.T) {
resourceJob := walk(s.Definitions, "github.com", "databricks", "cli", "bundle", "config", "resources.Job")
fields := []string{"name", "continuous", "tasks", "trigger"}
for _, field := range fields {
- assert.NotEmpty(t, resourceJob.AnyOf[0].Properties[field].Description)
+ assert.NotEmpty(t, resourceJob.OneOf[0].Properties[field].Description)
}
// Assert descriptions were also loaded for a job task definition.
jobTask := walk(s.Definitions, "github.com", "databricks", "databricks-sdk-go", "service", "jobs.Task")
fields = []string{"notebook_task", "spark_jar_task", "spark_python_task", "spark_submit_task", "description", "depends_on", "environment_key", "for_each_task", "existing_cluster_id"}
for _, field := range fields {
- assert.NotEmpty(t, jobTask.AnyOf[0].Properties[field].Description)
+ assert.NotEmpty(t, jobTask.OneOf[0].Properties[field].Description)
}
// Assert descriptions are loaded for pipelines
pipeline := walk(s.Definitions, "github.com", "databricks", "cli", "bundle", "config", "resources.Pipeline")
fields = []string{"name", "catalog", "clusters", "channel", "continuous", "development"}
for _, field := range fields {
- assert.NotEmpty(t, pipeline.AnyOf[0].Properties[field].Description)
+ assert.NotEmpty(t, pipeline.OneOf[0].Properties[field].Description)
}
providers := walk(s.Definitions, "github.com", "databricks", "databricks-sdk-go", "service", "jobs.GitProvider")

File diff suppressed because it is too large.

cmd/account/cmd.go generated

@ -11,6 +11,7 @@ import (
credentials "github.com/databricks/cli/cmd/account/credentials"
custom_app_integration "github.com/databricks/cli/cmd/account/custom-app-integration"
encryption_keys "github.com/databricks/cli/cmd/account/encryption-keys"
account_federation_policy "github.com/databricks/cli/cmd/account/federation-policy"
account_groups "github.com/databricks/cli/cmd/account/groups"
account_ip_access_lists "github.com/databricks/cli/cmd/account/ip-access-lists"
log_delivery "github.com/databricks/cli/cmd/account/log-delivery"
@ -21,6 +22,7 @@ import (
o_auth_published_apps "github.com/databricks/cli/cmd/account/o-auth-published-apps"
private_access "github.com/databricks/cli/cmd/account/private-access"
published_app_integration "github.com/databricks/cli/cmd/account/published-app-integration"
service_principal_federation_policy "github.com/databricks/cli/cmd/account/service-principal-federation-policy"
service_principal_secrets "github.com/databricks/cli/cmd/account/service-principal-secrets"
account_service_principals "github.com/databricks/cli/cmd/account/service-principals"
account_settings "github.com/databricks/cli/cmd/account/settings"
@ -44,6 +46,7 @@ func New() *cobra.Command {
cmd.AddCommand(credentials.New())
cmd.AddCommand(custom_app_integration.New())
cmd.AddCommand(encryption_keys.New())
cmd.AddCommand(account_federation_policy.New())
cmd.AddCommand(account_groups.New())
cmd.AddCommand(account_ip_access_lists.New())
cmd.AddCommand(log_delivery.New())
@ -54,6 +57,7 @@ func New() *cobra.Command {
cmd.AddCommand(o_auth_published_apps.New())
cmd.AddCommand(private_access.New())
cmd.AddCommand(published_app_integration.New())
cmd.AddCommand(service_principal_federation_policy.New())
cmd.AddCommand(service_principal_secrets.New())
cmd.AddCommand(account_service_principals.New())
cmd.AddCommand(account_settings.New())


@ -0,0 +1,402 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package federation_policy
import (
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/oauth2"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "federation-policy",
Short: `These APIs manage account federation policies.`,
Long: `These APIs manage account federation policies.
Account federation policies allow users and service principals in your
Databricks account to securely access Databricks APIs using tokens from your
trusted identity providers (IdPs).
With token federation, your users and service principals can exchange tokens
from your IdP for Databricks OAuth tokens, which can be used to access
Databricks APIs. Token federation eliminates the need to manage Databricks
secrets, and allows you to centralize management of token issuance policies in
your IdP. Databricks token federation is typically used in combination with
[SCIM], so users in your IdP are synchronized into your Databricks account.
Token federation is configured in your Databricks account using an account
federation policy. An account federation policy specifies: * which IdP, or
issuer, your Databricks account should accept tokens from * how to determine
which Databricks user, or subject, a token is issued for
To configure a federation policy, you provide the following: * The required
token __issuer__, as specified in the iss claim of your tokens. The
issuer is an https URL that identifies your IdP. * The allowed token
__audiences__, as specified in the aud claim of your tokens. This
identifier is intended to represent the recipient of the token. As long as the
audience in the token matches at least one audience in the policy, the token
is considered a match. If unspecified, the default value is your Databricks
account id. * The __subject claim__, which indicates which token claim
contains the Databricks username of the user the token was issued for. If
unspecified, the default value is sub. * Optionally, the public keys
used to validate the signature of your tokens, in JWKS format. If unspecified
(recommended), Databricks automatically fetches the public keys from your
issuer's well known endpoint. Databricks strongly recommends relying on your
issuer's well known endpoint for discovering public keys.
An example federation policy is: issuer: "https://idp.mycompany.com/oidc"
audiences: ["databricks"] subject_claim: "sub"
An example JWT token body that matches this policy and could be used to
authenticate to Databricks as user username@mycompany.com is: { "iss":
"https://idp.mycompany.com/oidc", "aud": "databricks", "sub":
"username@mycompany.com" }
You may also need to configure your IdP to generate tokens for your users to
exchange with Databricks, if your users do not already have the ability to
generate tokens that are compatible with your federation policy.
You do not need to configure an OAuth application in Databricks to use token
federation.
[SCIM]: https://docs.databricks.com/admin/users-groups/scim/index.html`,
GroupID: "oauth2",
Annotations: map[string]string{
"package": "oauth2",
},
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newCreate())
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newList())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start create command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createOverrides []func(
*cobra.Command,
*oauth2.CreateAccountFederationPolicyRequest,
)
func newCreate() *cobra.Command {
cmd := &cobra.Command{}
var createReq oauth2.CreateAccountFederationPolicyRequest
createReq.Policy = &oauth2.FederationPolicy{}
var createJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&createReq.Policy.Description, "description", createReq.Policy.Description, `Description of the federation policy.`)
cmd.Flags().StringVar(&createReq.Policy.Name, "name", createReq.Policy.Name, `Name of the federation policy.`)
// TODO: complex arg: oidc_policy
cmd.Use = "create"
cmd.Short = `Create account federation policy.`
cmd.Long = `Create account federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
if cmd.Flags().Changed("json") {
diags := createJson.Unmarshal(&createReq.Policy)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
response, err := a.FederationPolicy.Create(ctx, createReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range createOverrides {
fn(cmd, &createReq)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*oauth2.DeleteAccountFederationPolicyRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq oauth2.DeleteAccountFederationPolicyRequest
// TODO: short flags
cmd.Use = "delete POLICY_ID"
cmd.Short = `Delete account federation policy.`
cmd.Long = `Delete account federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
deleteReq.PolicyId = args[0]
err = a.FederationPolicy.Delete(ctx, deleteReq)
if err != nil {
return err
}
return nil
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*oauth2.GetAccountFederationPolicyRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq oauth2.GetAccountFederationPolicyRequest
// TODO: short flags
cmd.Use = "get POLICY_ID"
cmd.Short = `Get account federation policy.`
cmd.Long = `Get account federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
getReq.PolicyId = args[0]
response, err := a.FederationPolicy.Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start list command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listOverrides []func(
*cobra.Command,
*oauth2.ListAccountFederationPoliciesRequest,
)
func newList() *cobra.Command {
cmd := &cobra.Command{}
var listReq oauth2.ListAccountFederationPoliciesRequest
// TODO: short flags
cmd.Flags().IntVar(&listReq.PageSize, "page-size", listReq.PageSize, ``)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, ``)
cmd.Use = "list"
cmd.Short = `List account federation policies.`
cmd.Long = `List account federation policies.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
response := a.FederationPolicy.List(ctx, listReq)
return cmdio.RenderIterator(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range listOverrides {
fn(cmd, &listReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*oauth2.UpdateAccountFederationPolicyRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq oauth2.UpdateAccountFederationPolicyRequest
updateReq.Policy = &oauth2.FederationPolicy{}
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&updateReq.Policy.Description, "description", updateReq.Policy.Description, `Description of the federation policy.`)
cmd.Flags().StringVar(&updateReq.Policy.Name, "name", updateReq.Policy.Name, `Name of the federation policy.`)
// TODO: complex arg: oidc_policy
cmd.Use = "update POLICY_ID UPDATE_MASK"
cmd.Short = `Update account federation policy.`
cmd.Long = `Update account federation policy.
Arguments:
POLICY_ID:
UPDATE_MASK: Field mask is required to be passed into the PATCH request. Field mask
specifies which fields of the setting payload will be updated. The field
mask needs to be supplied as a single string. To specify multiple fields in
the field mask, use a comma as the separator (no space).`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(2)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
if cmd.Flags().Changed("json") {
diags := updateJson.Unmarshal(&updateReq.Policy)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
updateReq.PolicyId = args[0]
updateReq.UpdateMask = args[1]
response, err := a.FederationPolicy.Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service AccountFederationPolicy
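The generated create command is a thin wrapper over the Go SDK's account-level FederationPolicy service. A minimal sketch of the same call made directly against the SDK; the oidc_policy payload is left out because its shape is not spelled out in the generated code (marked "TODO: complex arg"), and the description/name values are illustrative:

```go
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/oauth2"
)

// Sketch only: mirrors what `databricks account federation-policy create`
// does through the Go SDK. The oidc_policy payload is omitted because its
// fields are not shown in the generated command above.
func main() {
	ctx := context.Background()
	a, err := databricks.NewAccountClient() // reads account auth config from the environment
	if err != nil {
		panic(err)
	}
	policy, err := a.FederationPolicy.Create(ctx, oauth2.CreateAccountFederationPolicyRequest{
		Policy: &oauth2.FederationPolicy{
			Description: "Tokens issued by our corporate IdP", // illustrative
			Name:        "corp-idp",                           // illustrative
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(policy.Name)
}
```

The update subcommand follows the same pattern but additionally takes the POLICY_ID and UPDATE_MASK positional arguments shown above.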


@ -0,0 +1,445 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package service_principal_federation_policy
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/oauth2"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "service-principal-federation-policy",
Short: `These APIs manage service principal federation policies.`,
Long: `These APIs manage service principal federation policies.
Service principal federation, also known as Workload Identity Federation,
allows your automated workloads running outside of Databricks to securely
access Databricks APIs without the need for Databricks secrets. With Workload
Identity Federation, your application (or workload) authenticates to
Databricks as a Databricks service principal, using tokens provided by the
workload runtime.
Databricks strongly recommends using Workload Identity Federation to
authenticate to Databricks from automated workloads, over alternatives such as
OAuth client secrets or Personal Access Tokens, whenever possible. Workload
Identity Federation is supported by many popular services, including Github
Actions, Azure DevOps, GitLab, Terraform Cloud, and Kubernetes clusters, among
others.
Workload identity federation is configured in your Databricks account using a
service principal federation policy. A service principal federation policy
specifies: * which IdP, or issuer, the service principal is allowed to
authenticate from * which workload identity, or subject, is allowed to
authenticate as the Databricks service principal
To configure a federation policy, you provide the following: * The required
token __issuer__, as specified in the iss claim of workload identity
tokens. The issuer is an https URL that identifies the workload identity
provider. * The required token __subject__, as specified in the sub
claim of workload identity tokens. The subject uniquely identifies the
workload in the workload runtime environment. * The allowed token
__audiences__, as specified in the aud claim of workload identity
tokens. The audience is intended to represent the recipient of the token. As
long as the audience in the token matches at least one audience in the policy,
the token is considered a match. If unspecified, the default value is your
Databricks account id. * Optionally, the public keys used to validate the
signature of the workload identity tokens, in JWKS format. If unspecified
(recommended), Databricks automatically fetches the public keys from the
issuer's well known endpoint. Databricks strongly recommends relying on the
issuer's well known endpoint for discovering public keys.
An example service principal federation policy, for a Github Actions workload,
is: issuer: "https://token.actions.githubusercontent.com" audiences:
["https://github.com/my-github-org"] subject:
"repo:my-github-org/my-repo:environment:prod"
An example JWT token body that matches this policy and could be used to
authenticate to Databricks is: { "iss":
"https://token.actions.githubusercontent.com", "aud":
"https://github.com/my-github-org", "sub":
"repo:my-github-org/my-repo:environment:prod" }
You may also need to configure the workload runtime to generate tokens for
your workloads.
You do not need to configure an OAuth application in Databricks to use token
federation.`,
GroupID: "oauth2",
Annotations: map[string]string{
"package": "oauth2",
},
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newCreate())
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newList())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start create command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createOverrides []func(
*cobra.Command,
*oauth2.CreateServicePrincipalFederationPolicyRequest,
)
func newCreate() *cobra.Command {
cmd := &cobra.Command{}
var createReq oauth2.CreateServicePrincipalFederationPolicyRequest
createReq.Policy = &oauth2.FederationPolicy{}
var createJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&createReq.Policy.Description, "description", createReq.Policy.Description, `Description of the federation policy.`)
cmd.Flags().StringVar(&createReq.Policy.Name, "name", createReq.Policy.Name, `Name of the federation policy.`)
// TODO: complex arg: oidc_policy
cmd.Use = "create SERVICE_PRINCIPAL_ID"
cmd.Short = `Create service principal federation policy.`
cmd.Long = `Create service principal federation policy.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
if cmd.Flags().Changed("json") {
diags := createJson.Unmarshal(&createReq.Policy)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
_, err = fmt.Sscan(args[0], &createReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
response, err := a.ServicePrincipalFederationPolicy.Create(ctx, createReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range createOverrides {
fn(cmd, &createReq)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*oauth2.DeleteServicePrincipalFederationPolicyRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq oauth2.DeleteServicePrincipalFederationPolicyRequest
// TODO: short flags
cmd.Use = "delete SERVICE_PRINCIPAL_ID POLICY_ID"
cmd.Short = `Delete service principal federation policy.`
cmd.Long = `Delete service principal federation policy.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.
POLICY_ID: `
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(2)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
_, err = fmt.Sscan(args[0], &deleteReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
deleteReq.PolicyId = args[1]
err = a.ServicePrincipalFederationPolicy.Delete(ctx, deleteReq)
if err != nil {
return err
}
return nil
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*oauth2.GetServicePrincipalFederationPolicyRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq oauth2.GetServicePrincipalFederationPolicyRequest
// TODO: short flags
cmd.Use = "get SERVICE_PRINCIPAL_ID POLICY_ID"
cmd.Short = `Get service principal federation policy.`
cmd.Long = `Get service principal federation policy.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.
POLICY_ID: `
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(2)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
_, err = fmt.Sscan(args[0], &getReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
getReq.PolicyId = args[1]
response, err := a.ServicePrincipalFederationPolicy.Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start list command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listOverrides []func(
*cobra.Command,
*oauth2.ListServicePrincipalFederationPoliciesRequest,
)
func newList() *cobra.Command {
cmd := &cobra.Command{}
var listReq oauth2.ListServicePrincipalFederationPoliciesRequest
// TODO: short flags
cmd.Flags().IntVar(&listReq.PageSize, "page-size", listReq.PageSize, ``)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, ``)
cmd.Use = "list SERVICE_PRINCIPAL_ID"
cmd.Short = `List service principal federation policies.`
cmd.Long = `List service principal federation policies.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
_, err = fmt.Sscan(args[0], &listReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
response := a.ServicePrincipalFederationPolicy.List(ctx, listReq)
return cmdio.RenderIterator(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range listOverrides {
fn(cmd, &listReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*oauth2.UpdateServicePrincipalFederationPolicyRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq oauth2.UpdateServicePrincipalFederationPolicyRequest
updateReq.Policy = &oauth2.FederationPolicy{}
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&updateReq.Policy.Description, "description", updateReq.Policy.Description, `Description of the federation policy.`)
cmd.Flags().StringVar(&updateReq.Policy.Name, "name", updateReq.Policy.Name, `Name of the federation policy.`)
// TODO: complex arg: oidc_policy
cmd.Use = "update SERVICE_PRINCIPAL_ID POLICY_ID UPDATE_MASK"
cmd.Short = `Update service principal federation policy.`
cmd.Long = `Update service principal federation policy.
Arguments:
SERVICE_PRINCIPAL_ID: The service principal id for the federation policy.
POLICY_ID:
UPDATE_MASK: Field mask is required to be passed into the PATCH request. Field mask
specifies which fields of the setting payload will be updated. The field
mask needs to be supplied as a single string. To specify multiple fields in
the field mask, use a comma as the separator (no space).`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(3)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
if cmd.Flags().Changed("json") {
diags := updateJson.Unmarshal(&updateReq.Policy)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
_, err = fmt.Sscan(args[0], &updateReq.ServicePrincipalId)
if err != nil {
return fmt.Errorf("invalid SERVICE_PRINCIPAL_ID: %s", args[0])
}
updateReq.PolicyId = args[1]
updateReq.UpdateMask = args[2]
response, err := a.ServicePrincipalFederationPolicy.Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service ServicePrincipalFederationPolicy
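The list subcommand renders a paginated iterator via cmdio.RenderIterator. A hedged sketch of consuming that iterator directly from the Go SDK, assuming the SDK's usual HasNext/Next iterator shape and an illustrative numeric service principal id:

```go
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/oauth2"
)

// Sketch only: lists federation policies for one service principal,
// mirroring `databricks account service-principal-federation-policy list`.
func main() {
	ctx := context.Background()
	a, err := databricks.NewAccountClient()
	if err != nil {
		panic(err)
	}
	it := a.ServicePrincipalFederationPolicy.List(ctx, oauth2.ListServicePrincipalFederationPoliciesRequest{
		ServicePrincipalId: 1234567890, // illustrative numeric service principal id
	})
	for it.HasNext(ctx) {
		policy, err := it.Next(ctx)
		if err != nil {
			panic(err)
		}
		fmt.Println(policy.Name)
	}
}
```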


@ -204,6 +204,9 @@ func newCreate() *cobra.Command {
cmd.Flags().StringVar(&createReq.ClusterName, "cluster-name", createReq.ClusterName, `Cluster name requested by the user.`)
// TODO: map via StringToStringVar: custom_tags
cmd.Flags().Var(&createReq.DataSecurityMode, "data-security-mode", `Data security mode decides what data governance model to use when accessing data from a cluster. Supported values: [
DATA_SECURITY_MODE_AUTO,
DATA_SECURITY_MODE_DEDICATED,
DATA_SECURITY_MODE_STANDARD,
LEGACY_PASSTHROUGH,
LEGACY_SINGLE_USER,
LEGACY_SINGLE_USER_STANDARD,
@ -220,6 +223,8 @@ func newCreate() *cobra.Command {
// TODO: complex arg: gcp_attributes
// TODO: array: init_scripts
cmd.Flags().StringVar(&createReq.InstancePoolId, "instance-pool-id", createReq.InstancePoolId, `The optional ID of the instance pool to which the cluster belongs.`)
cmd.Flags().BoolVar(&createReq.IsSingleNode, "is-single-node", createReq.IsSingleNode, `This field can only be used with kind.`)
cmd.Flags().Var(&createReq.Kind, "kind", `The kind of compute described by this compute specification. Supported values: [CLASSIC_PREVIEW]`)
cmd.Flags().StringVar(&createReq.NodeTypeId, "node-type-id", createReq.NodeTypeId, `This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster.`)
cmd.Flags().IntVar(&createReq.NumWorkers, "num-workers", createReq.NumWorkers, `Number of worker nodes that this cluster should have.`)
cmd.Flags().StringVar(&createReq.PolicyId, "policy-id", createReq.PolicyId, `The ID of the cluster policy used to create the cluster if applicable.`)
@ -228,6 +233,7 @@ func newCreate() *cobra.Command {
// TODO: map via StringToStringVar: spark_conf
// TODO: map via StringToStringVar: spark_env_vars
// TODO: array: ssh_public_keys
cmd.Flags().BoolVar(&createReq.UseMlRuntime, "use-ml-runtime", createReq.UseMlRuntime, `This field can only be used with kind.`)
// TODO: complex arg: workload_type
cmd.Use = "create SPARK_VERSION"
@ -468,6 +474,9 @@ func newEdit() *cobra.Command {
cmd.Flags().StringVar(&editReq.ClusterName, "cluster-name", editReq.ClusterName, `Cluster name requested by the user.`)
// TODO: map via StringToStringVar: custom_tags
cmd.Flags().Var(&editReq.DataSecurityMode, "data-security-mode", `Data security mode decides what data governance model to use when accessing data from a cluster. Supported values: [
DATA_SECURITY_MODE_AUTO,
DATA_SECURITY_MODE_DEDICATED,
DATA_SECURITY_MODE_STANDARD,
LEGACY_PASSTHROUGH,
LEGACY_SINGLE_USER,
LEGACY_SINGLE_USER_STANDARD,
@ -484,6 +493,8 @@ func newEdit() *cobra.Command {
// TODO: complex arg: gcp_attributes
// TODO: array: init_scripts
cmd.Flags().StringVar(&editReq.InstancePoolId, "instance-pool-id", editReq.InstancePoolId, `The optional ID of the instance pool to which the cluster belongs.`)
cmd.Flags().BoolVar(&editReq.IsSingleNode, "is-single-node", editReq.IsSingleNode, `This field can only be used with kind.`)
cmd.Flags().Var(&editReq.Kind, "kind", `The kind of compute described by this compute specification. Supported values: [CLASSIC_PREVIEW]`)
cmd.Flags().StringVar(&editReq.NodeTypeId, "node-type-id", editReq.NodeTypeId, `This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster.`)
cmd.Flags().IntVar(&editReq.NumWorkers, "num-workers", editReq.NumWorkers, `Number of worker nodes that this cluster should have.`)
cmd.Flags().StringVar(&editReq.PolicyId, "policy-id", editReq.PolicyId, `The ID of the cluster policy used to create the cluster if applicable.`)
@ -492,6 +503,7 @@ func newEdit() *cobra.Command {
// TODO: map via StringToStringVar: spark_conf
// TODO: map via StringToStringVar: spark_env_vars
// TODO: array: ssh_public_keys
cmd.Flags().BoolVar(&editReq.UseMlRuntime, "use-ml-runtime", editReq.UseMlRuntime, `This field can only be used with kind.`)
// TODO: complex arg: workload_type
cmd.Use = "edit CLUSTER_ID SPARK_VERSION"


@ -828,6 +828,7 @@ func newMigrate() *cobra.Command {
cmd.Flags().StringVar(&migrateReq.DisplayName, "display-name", migrateReq.DisplayName, `Display name for the new Lakeview dashboard.`)
cmd.Flags().StringVar(&migrateReq.ParentPath, "parent-path", migrateReq.ParentPath, `The workspace path of the folder to contain the migrated Lakeview dashboard.`)
cmd.Flags().BoolVar(&migrateReq.UpdateParameterSyntax, "update-parameter-syntax", migrateReq.UpdateParameterSyntax, `Flag to indicate if mustache parameter syntax ({{ param }}) should be auto-updated to named syntax (:param) when converting datasets in the dashboard.`)
cmd.Use = "migrate SOURCE_DASHBOARD_ID"
cmd.Short = `Migrate dashboard.`
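The new flag corresponds to a boolean on the dashboard migrate request. A hedged sketch of the same call through the Go SDK, assuming the dashboards.MigrateDashboardRequest type and Lakeview service method that back this command; ids and names are illustrative:

```go
package main

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/dashboards"
)

// Sketch only: migrate a legacy dashboard to Lakeview and opt into the
// new parameter-syntax rewrite ({{ param }} -> :param).
func main() {
	ctx := context.Background()
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		panic(err)
	}
	_, err = w.Lakeview.Migrate(ctx, dashboards.MigrateDashboardRequest{
		SourceDashboardId:     "abc123",                   // illustrative legacy dashboard id
		DisplayName:           "Migrated sales dashboard", // illustrative
		UpdateParameterSyntax: true,                       // new flag added in this change
	})
	if err != nil {
		panic(err)
	}
}
```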

go.mod

@ -7,7 +7,7 @@ toolchain go1.23.2
require (
github.com/Masterminds/semver/v3 v3.3.1 // MIT
github.com/briandowns/spinner v1.23.1 // Apache 2.0
- github.com/databricks/databricks-sdk-go v0.53.0 // Apache 2.0
+ github.com/databricks/databricks-sdk-go v0.54.0 // Apache 2.0
github.com/fatih/color v1.18.0 // MIT
github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
github.com/google/uuid v1.6.0 // BSD-3-Clause

go.sum generated

@ -32,8 +32,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
- github.com/databricks/databricks-sdk-go v0.53.0 h1:rZMXaTC3HNKZt+m4C4I/dY3EdZj+kl/sVd/Kdq55Qfo=
- github.com/databricks/databricks-sdk-go v0.53.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
+ github.com/databricks/databricks-sdk-go v0.54.0 h1:L8gsA3NXs+uYU3QtW/OUgjxMQxOH24k0MT9JhB3zLlM=
+ github.com/databricks/databricks-sdk-go v0.54.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=


@ -34,4 +34,10 @@ type Extension struct {
// Version of the schema. This is used to determine if the schema is
// compatible with the current CLI version.
Version *int `json:"version,omitempty"`
// This field is not in JSON schema spec, but it is supported in VSCode and in the Databricks Workspace
// It is used to provide a rich description of the field in the hover tooltip.
// https://code.visualstudio.com/docs/languages/json#_use-rich-formatting-in-hovers
// Also it can be used in documentation generation.
MarkdownDescription string `json:"markdownDescription,omitempty"`
}


@ -69,6 +69,13 @@ type Schema struct {
// Schema that must match any of the schemas in the array
AnyOf []Schema `json:"anyOf,omitempty"`
// Schema that must match one of the schemas in the array
OneOf []Schema `json:"oneOf,omitempty"`
// Title of the object, rendered as inline documentation in the IDE.
// https://json-schema.org/understanding-json-schema/reference/annotations
Title string `json:"title,omitempty"`
}
// Default value defined in a JSON Schema, represented as a string.
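Together with the markdownDescription extension above, annotated schema nodes can now carry oneOf variants, an IDE title, and rich hover text. A small sketch, assuming it sits in the same package as the Schema type; field values are illustrative, and whether MarkdownDescription is reachable directly from Schema is not shown here:

```go
package jsonschema

// Sketch only: a schema node using the newly added OneOf and Title fields.
// All strings are illustrative.
func exampleAnnotatedSchema() Schema {
	return Schema{
		Title:       "cluster_spec", // rendered as inline documentation in the IDE
		Description: "Compute to use for the job.",
		OneOf: []Schema{
			{Description: "Reference an existing cluster by id."},
			{Description: "Define a new job cluster inline."},
		},
		// A markdownDescription annotation can additionally be attached via
		// the Extension type for rich hover tooltips in VSCode / the Workspace.
	}
}
```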