Commit Graph

5 Commits

Andrew Nester 66307134c1
Fixed generated YAML missing 'default' for empty values (#1765)
## Changes
Fixed generated YAML missing 'default' for empty values
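
As a hypothetical illustration of the failure mode (not the CLI's actual saver code): with `gopkg.in/yaml.v3`, an `omitempty` tag drops the `default` key entirely when the value is an empty string, while a plain tag keeps it as `default: ""`.

```
// Hypothetical illustration only; not the CLI's actual saver code.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type param struct {
	Name    string `yaml:"name"`
	Default string `yaml:"default,omitempty"` // empty value -> key omitted
}

type paramFixed struct {
	Name    string `yaml:"name"`
	Default string `yaml:"default"` // always emitted, even when ""
}

func main() {
	buggy, _ := yaml.Marshal(param{Name: "env"})
	fixed, _ := yaml.Marshal(paramFixed{Name: "env"})
	fmt.Printf("omitempty:\n%s\nalways:\n%s", buggy, fixed)
}
```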

## Tests
Added unit test
2024-09-11 09:49:58 +00:00
Pieter Noordhuis 0f4891f0fe
Add `dyn.Time` to box a timestamp with its original string value (#1732)
## Changes

If not explicitly quoted, the YAML loader interprets a value like
`2024-08-29` as a timestamp. Such a value is usually intended to be a
string instead. Our normalization logic was not able to turn a time
value back into the original string.

This change boxes the time value to include its original string
representation. Normalization of one of these values into a string can
now use the original input value.
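
A minimal sketch of the boxing idea, with illustrative names rather than the actual `dyn.Time` API: keep the parsed `time.Time` next to the literal scalar it came from, so converting back to a string is lossless.

```
// Illustrative sketch; names do not match the actual dyn.Time API.
package main

import (
	"fmt"
	"time"
)

// boxedTime keeps the parsed time alongside the literal scalar it
// came from, so normalization back to a string can round-trip.
type boxedTime struct {
	t    time.Time
	orig string // e.g. the unquoted YAML scalar "2024-08-29"
}

func parseBoxedTime(s string) (boxedTime, error) {
	t, err := time.Parse("2006-01-02", s)
	if err != nil {
		return boxedTime{}, err
	}
	return boxedTime{t: t, orig: s}, nil
}

// String returns the original representation, so converting the
// value back into a string reproduces the input exactly.
func (b boxedTime) String() string { return b.orig }

func main() {
	b, _ := parseBoxedTime("2024-08-29")
	fmt.Println(b.t.UTC()) // 2024-08-29 00:00:00 +0000 UTC
	fmt.Println(b)         // 2024-08-29
}
```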

## Tests

Unit tests in `libs/dyn/convert`.
2024-08-29 13:02:34 +00:00
Pieter Noordhuis 26094f01a0
Define `dyn.Mapping` to represent maps (#1301)
## Changes

Before this change, maps were stored as regular Go maps with string
keys. This didn't let us capture metadata (location information) for map
keys.

To address this, the change replaces the regular Go map with a
dedicated type for a dynamic map. This type stores the `dyn.Value` for
both the key and the value. It still uses a map for O(1) lookups, but
the map values are indices into a slice that holds the actual key/value
pairs.
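
A minimal sketch of the ordered-map idea, with illustrative names rather than the actual `dyn.Mapping` API: pairs live in a slice that preserves insertion order, while an index map keeps lookups O(1).

```
// Illustrative sketch; names do not match the actual dyn.Mapping API.
package main

import "fmt"

type pair struct {
	key, value string // dyn.Mapping stores dyn.Value for both
}

type mapping struct {
	pairs []pair         // preserves insertion order and key metadata
	index map[string]int // key -> position in pairs, for O(1) lookups
}

func (m *mapping) Set(k, v string) {
	if i, ok := m.index[k]; ok {
		m.pairs[i].value = v
		return
	}
	if m.index == nil {
		m.index = make(map[string]int)
	}
	m.index[k] = len(m.pairs)
	m.pairs = append(m.pairs, pair{k, v})
}

func (m *mapping) Get(k string) (string, bool) {
	i, ok := m.index[k]
	if !ok {
		return "", false
	}
	return m.pairs[i].value, true
}

func main() {
	var m mapping
	m.Set("b", "2")
	m.Set("a", "1")
	for _, p := range m.pairs { // iteration follows insertion order
		fmt.Println(p.key, p.value)
	}
}
```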

## Tests

* All existing unit tests pass (some with minor modifications due to
interface change).
* Equality assertions with `assert.Equal` no longer worked because the
new `dyn.Mapping` preserves the order in which keys are set, making
comparisons sensitive to map ordering. To fix this, I added a
`dynassert` package that forwards all assertions to `testify/assert` but
intercepts equality checks for `dyn.Value` arguments.
2024-03-25 11:01:09 +00:00
Andrew Nester e474948a4b
Generate correct YAML if custom_tags or spark_conf is used for pipeline or job cluster configuration (#1210)
These fields (keys and values) need to be double-quoted for the YAML
loader to read, parse, and unmarshal them into the Go struct correctly,
because these fields are of type `map[string]string`.
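
A minimal sketch of why the quoting matters, using `gopkg.in/yaml.v3` directly (illustrative; not the CLI's loader code): an unquoted scalar like `true` resolves to `!!bool` and fails to unmarshal into `map[string]string`, while the double-quoted form stays a string.

```
// Illustrative sketch; not the CLI's loader code.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	var tags map[string]string

	// Unquoted: the scalar resolves to !!bool and the decode fails.
	err := yaml.Unmarshal([]byte("cost-center: true"), &tags)
	fmt.Println("unquoted:", err)

	// Double-quoted: the scalar stays a string and decodes cleanly.
	err = yaml.Unmarshal([]byte(`cost-center: "true"`), &tags)
	fmt.Println("quoted:", tags, err)
}
```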

## Tests
Added regression unit and E2E tests
2024-02-15 15:03:19 +00:00
Andrew Nester 70fe0e36ef
Added `databricks bundle generate job` command (#1043)
## Changes
Now it's possible to generate a bundle configuration for an existing
job. For now it only supports jobs with notebook tasks.

It downloads the notebooks referenced in the job tasks and generates a
bundle YAML config for the job, which can be included in a larger
bundle.
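
For reference, the command takes the ID of an existing job, e.g. `databricks bundle generate job --existing-job-id 128737545467921` (flag name as in the current CLI help; shown here for illustration).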

Example of generated config:
```
resources:
  jobs:
    job_128737545467921:
      name: Notebook job
      format: MULTI_TASK
      tasks:
        - task_key: as_notebook
          existing_cluster_id: 0704-xxxxxx-yyyyyyy
          notebook_task:
            base_parameters:
              bundle_root: /Users/andrew.nester@databricks.com/.bundle/job_with_module_imports/development/files
            notebook_path: ./entry_notebook.py
            source: WORKSPACE
          run_if: ALL_SUCCESS
      max_concurrent_runs: 1
```

## Tests
Manual (on our last 100 jobs) + added end-to-end test

```
--- PASS: TestAccGenerateFromExistingJobAndDeploy (50.91s)
PASS
coverage: 61.5% of statements in ./...
ok github.com/databricks/cli/internal/bundle 51.209s coverage: 61.5% of statements in ./...
```
2024-01-17 14:26:33 +00:00