mirror of https://github.com/databricks/cli.git
33c446dadd
## Changes

The approach to do this was:

1. Iterate over all libraries in all job tasks.
2. Find references to local libraries.
3. Store a pointer to the `compute.Library` in the matching artifact file to signal that it should be uploaded.

This breaks down when introducing #1098 because we can no longer track unexported state across mutators. The approach in this PR performs the path matching twice: once in the matching mutator, where we check whether each referenced file has an artifacts section, and once during artifact upload, to rewrite the library path from a local file reference to an absolute Databricks path.

## Tests

Integration tests pass.
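The second pass (rewriting local library references to absolute remote paths during upload) can be sketched as follows. This is a minimal illustration, not the CLI's actual implementation; the `Library` struct, `isLocalLibrary` heuristic, and workspace layout are hypothetical simplifications:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// Library is a hypothetical simplification of a job task library
// reference (stands in for compute.Library).
type Library struct {
	Whl string
}

// isLocalLibrary reports whether the library path refers to a local
// file rather than a DBFS or absolute workspace path (assumed heuristic).
func isLocalLibrary(p string) bool {
	return p != "" &&
		!strings.HasPrefix(p, "dbfs:/") &&
		!strings.HasPrefix(p, "/Workspace/")
}

// rewriteLibraryPaths rewrites every local reference to an absolute
// remote path under artifactRoot, leaving remote references untouched.
func rewriteLibraryPaths(libs []Library, artifactRoot string) {
	for i := range libs {
		if isLocalLibrary(libs[i].Whl) {
			libs[i].Whl = path.Join(artifactRoot, path.Base(libs[i].Whl))
		}
	}
}

func main() {
	libs := []Library{
		{Whl: "./dist/my_wheel-0.1-py3-none-any.whl"}, // local: rewritten
		{Whl: "dbfs:/libs/other.whl"},                 // remote: untouched
	}
	rewriteLibraryPaths(libs, "/Workspace/Users/someone/artifacts")
	for _, l := range libs {
		fmt.Println(l.Whl)
	}
}
```

Because the matching is re-derived from the paths themselves on each pass, no pointer state needs to survive across mutators.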
Changed paths:

- `pipeline_glob_paths`
- `python_wheel`
- `python_wheel_dbfs_lib`
- `python_wheel_no_artifact`
- `python_wheel_no_artifact_no_setup`
- `pipeline_glob_paths_test.go`
- `wheel_test.go`