Shared caching requires a Docker registry to store and distribute the cache (for local experimentation, a registry can be started from the `registry:2` image).
To enable inline caching, pass `--ci` in your invocation of earthly in your CI, or `--use-inline-cache` on individual developers' machines. If the `--push` flag is also specified, the use of the cache will be read-write.
Inline caching uses `SAVE IMAGE --push` declarations as the source and destination for any inline cache. Additional cache sources can be specified via `SAVE IMAGE --cache-from=...`. This may be useful so that PR builds are able to use the main branch cache. Here is a simple example:
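The following Earthfile is a minimal sketch of such a setup. The base image, installed packages, and image tags are illustrative; only the use of `apt-get install`, `SAVE IMAGE --push`, and `--cache-from` come from the surrounding text:

```
build:
    FROM ubuntu:22.04
    # Compute-heavy step that benefits most from a shared cache.
    RUN apt-get update && apt-get install -y build-essential
    COPY src src
    RUN make -C src
    # The pushed image doubles as the inline cache destination.
    # --cache-from allows PR builds to also import the main branch cache.
    SAVE IMAGE --push --cache-from=mycompany/myimage:main mycompany/myimage:latest
```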
The `--ci` flag enables, among other things, both the `--use-inline-cache` and `--save-inline-cache` flags. The `--use-inline-cache` flag is required to enable importing existing caches, and the `--save-inline-cache` flag is required to enable exporting images to the remote cache.

In this example, the shared cache avoids repeating the time-consuming `apt-get install` command. Reusing the cache improves performance by a factor of 4X.

The inline cache is exported as part of the regular `SAVE IMAGE --push` commands, so there is no performance penalty on the cache upload side. The command that would be used in the CI to execute the builds together with inline caching is:
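Assuming a single target named `+build` (the target name is illustrative), the invocation might look like this:

```
earthly --ci --push +build
```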
Explicit caching is enabled via the `--remote-cache=...` flag, which is used to specify the Docker tag to use as cache. Make sure that this Docker tag is not used for anything else (e.g. DO NOT use `myimage:latest`, in case `latest` is used in a critical workflow). Assuming that the cache tag is `mycompany/myimage:cache`, the flag can be used as follows:
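For example (again assuming an illustrative `+build` target):

```
earthly --remote-cache=mycompany/myimage:cache +build
```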
If there are multiple pipelines or multiple `earthly` invocations, it is recommended to use different `--remote-cache` Docker tags for each pipeline or invocation. This prevents the cache from being overwritten in ways that make it less effective.
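For instance, two hypothetical pipelines could each use a dedicated cache tag (tag and target names are illustrative):

```
# Pipeline 1
earthly --remote-cache=mycompany/cache:pipeline1 +build

# Pipeline 2
earthly --remote-cache=mycompany/cache:pipeline2 +integration-test
```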
By default, only the targets containing `SAVE IMAGE --push ...` declarations are included in the explicit cache. If additional targets need to be added as part of the cache, it is possible to add `SAVE IMAGE --cache-hint` (no Docker tag necessary) at the end of those targets, in order to mark them for explicit caching.

It is also possible to cache all intermediate layers of the build; the `--max-remote-cache` flag can be used to enable this. Note that this results in large uploads and is usually not very effective. An example where this feature is useful, however, is when you would like to optimize CI run times in PRs and are willing to sacrifice CI run times in default branch builds. This can be achieved by enabling `--push` and `--max-remote-cache` on the default branch builds only.
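As an illustration, here is a sketch of what the Scala project's target discussed below might look like. The base image, paths, and commands are assumptions; only the `+project-files` name, the use of `sbt`, and the cache hint come from the text:

```
project-files:
    # Base image with a JDK and sbt preinstalled; the image name is illustrative.
    FROM mycompany/scala-build-base:latest
    WORKDIR /app
    # Copy only the build definition so that this target changes rarely.
    COPY build.sbt .
    COPY project project
    # Resolving and compiling dependencies is compute-intensive.
    RUN sbt update
    # Mark this target for inclusion in the explicit cache.
    SAVE IMAGE --cache-hint
```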
The target `+project-files` is perfect for introducing a cache hint via `SAVE IMAGE --cache-hint`. The processing that takes place as part of installing Scala and compiling the dependencies is sufficiently compute-intensive to save ~2 min from the total build time in CI. In addition, these dependencies change rarely enough that the cache can be utilized consistently.

In the case of inline caching, every `SAVE IMAGE --push` command adds more cacheable targets in the form of separate images. However, in the case of explicit caching, the entire cache is stored as part of a single Docker tag, and every `SAVE IMAGE --cache-hint` command adds more cacheable targets within that image. This final image containing all of the explicit cache cannot be used for anything else, so as a user you incur the performance cost of both the upload and the subsequent download.
To summarize:

* Inline cache
  * Enabled via `--use-inline-cache` and `--save-inline-cache` (or simply by adding `--ci` to your earthly invocations in CI)
  * Stored in the images produced by `SAVE IMAGE --push <docker-tag>` commands
* Explicit cache
  * Enabled via `--remote-cache=<docker-tag>`
  * Extended to additional targets via `SAVE IMAGE --cache-hint` commands
Not all build steps benefit equally from shared caching. Consider the `apk` tool shipped in `alpine` images: installing packages via `apk` is download-heavy, but usually not very compute-heavy, so using shared caching to offset `apk` download times might not be as effective. On the other hand, consider the `apt-get` tool shipped in `ubuntu` images. Besides performing downloads, `apt-get` also performs additional post-download steps which tend to be compute-intensive. For this reason, shared caching is usually very effective here.

Beyond `apk` and `apt-get`, similar remarks can be made about the various language-specific dependency management tools. Some are purely download-based (e.g. `go mod download`), while others are a mix of download and computation (e.g. `sbt`).
A related consideration is the use of a `FROM +some-target` instruction versus just using the previously built image directly. If `+some-target` contains a `SAVE IMAGE --push myimage:latest` instruction, then the performance becomes almost the same as using `FROM myimage:latest` directly.
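A minimal sketch of this pattern, reusing the names from the paragraph above (the base image and build steps are illustrative):

```
some-target:
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y build-essential
    # The pushed image doubles as an inline cache source.
    SAVE IMAGE --push myimage:latest

build:
    # With inline caching enabled, this behaves almost the same as
    # writing `FROM myimage:latest` directly.
    FROM +some-target
    COPY src src
    RUN make -C src
```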