diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball
Ian Wienand 78d389526c ubuntu: more exact match on squashfs file, containerfile: use focal
This is a squash of two changes that have unfortunately simultaneously
broken the gate.

The functests are failing with

 sha256sum: bionic-server-cloudimg-amd64.squashfs.manifest: No such file or directory

What appears to have happened is that the upstream SHA256SUMS file has
gained a new entry, "bionic-server-cloudimg-amd64.squashfs.manifest",
which also matches a grep for
"bionic-server-cloudimg-amd64.squashfs".  sha256sum then tries to
check that file as well, and fails because it was never downloaded.

To avoid this, anchor the grep with an end-of-line marker ($) so it
only matches the exact filename.
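
For illustration, a minimal sketch of the two patterns; the checksum
values and file layout below are placeholders, not the real entries
published on cloud-images.ubuntu.com:

  $ grep "bionic-server-cloudimg-amd64.squashfs" SHA256SUMS
  1111...  *bionic-server-cloudimg-amd64.squashfs
  2222...  *bionic-server-cloudimg-amd64.squashfs.manifest
  # sha256sum --check now also looks for the .manifest file, which was
  # never downloaded, so the verification fails

  $ grep "bionic-server-cloudimg-amd64.squashfs$" SHA256SUMS
  1111...  *bionic-server-cloudimg-amd64.squashfs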

Change I7fb585bc5ccc52803eea107e76dddf5e9fde8646 updated the
containerfile tests to Jammy, and it seems that cgroups v2 prevents
podman from running inside docker [1].  While we investigate, move
this testing back to focal.

[1] https://github.com/containers/podman/issues/14884
Change-Id: I1af9f5599168aadc1e7fcdfae281935e6211a597
2022-07-11 19:56:36 +10:00


#!/bin/bash
# These are useful, or at worst not harmful, for all images we build.
if [ ${DIB_DEBUG_TRACE:-1} -gt 0 ]; then
    set -x
fi
set -eu
set -o pipefail
[ -n "$ARCH" ]
[ -n "$TARGET_ROOT" ]
shopt -s extglob
DIB_RELEASE=${DIB_RELEASE:-trusty}
DIB_CLOUD_IMAGES=${DIB_CLOUD_IMAGES:-https://cloud-images.ubuntu.com/$DIB_RELEASE/current}
if [ $DIB_RELEASE != "trusty" ] ; then
    BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-$ARCH.squashfs}
else
    BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz}
fi
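# Fetch the checksum file over https even if DIB_CLOUD_IMAGES uses plain
# http; extglob (enabled above) is needed for the ?(s) pattern below.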
SHA256SUMS=${SHA256SUMS:-https://${DIB_CLOUD_IMAGES##http?(s)://}/SHA256SUMS}
CACHED_FILE=$DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
CACHED_FILE_LOCK=$DIB_LOCKFILES/$BASE_IMAGE_FILE.lock
CACHED_SUMS=$DIB_IMAGE_CACHE/SHA256SUMS.ubuntu.$DIB_RELEASE.$ARCH
function get_ubuntu_tarball() {
if [ -n "$DIB_LOCAL_IMAGE" ] ; then
echo "Using local image without download"
if [ ! -f "$DIB_LOCAL_IMAGE" ] ; then
echo "Unable to find image $DIB_LOCAL_IMAGE locally! Check path and file name to source image"
exit 1
fi
IMAGE_PATH=$DIB_LOCAL_IMAGE
else
if [ -n "$DIB_OFFLINE" -a -f "$CACHED_FILE" ] ; then
echo "Not checking freshness of cached $CACHED_FILE."
else
echo "Fetching Base Image"
$TMP_HOOKS_PATH/bin/cache-url $SHA256SUMS $CACHED_SUMS
$TMP_HOOKS_PATH/bin/cache-url \
$DIB_CLOUD_IMAGES/$BASE_IMAGE_FILE $CACHED_FILE
pushd $DIB_IMAGE_CACHE
if ! grep "${BASE_IMAGE_FILE}$" $CACHED_SUMS | sha256sum --check - ; then
$TMP_HOOKS_PATH/bin/cache-url -f \
$DIB_CLOUD_IMAGES/$BASE_IMAGE_FILE $CACHED_FILE
grep "${BASE_IMAGE_FILE}$" $CACHED_SUMS | sha256sum --check -
fi
fi
IMAGE_PATH=$DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
fi
# Extract the base image (use --numeric-owner to avoid UID/GID mismatch between
# image tarball and host OS e.g. when building Ubuntu image on an openSUSE host)
if [ "$DIB_RELEASE" != "trusty" ] ; then
sudo unsquashfs -f -d $TARGET_ROOT $IMAGE_PATH
else
sudo tar -C $TARGET_ROOT --numeric-owner -xzf $IMAGE_PATH
fi
}
(
echo "Getting $CACHED_FILE_LOCK: $(date)"
# Wait up to 20 minutes for another process to download
if ! flock -w 1200 9 ; then
echo "Did not get $CACHED_FILE_LOCK: $(date)"
exit 1
fi
get_ubuntu_tarball
) 9> $CACHED_FILE_LOCK