#!/bin/bash
#
# Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eE
# Set/override locale. This ensures consistency in sorting etc. We
# need to choose a lowest-common-denominator locale, as this is also
# applied when running in the build chroot (maybe a bug and we
# should prune this?). Thus "C" -- CentOS 7 doesn't include C.utf-8
# (Fedora does, CentOS 8 probably will). Note: use LC_ALL to really
# override this; it overrides LANG and all other LC_* vars
export LC_ALL=C
# Store our initial environment and command line args for later
export DIB_ARGS="$@"
export DIB_ENV=$(declare -p $(compgen -v | grep '^DIB_'))
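# Illustrative example only: after a run like
#   disk-image-create -a amd64 -o my-image vm ubuntu
# DIB_ARGS would hold "-a amd64 -o my-image vm ubuntu" and DIB_ENV would
# hold a 'declare' statement for each DIB_* variable present at startup.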
SCRIPTNAME=$(basename $0)
if [ -z "$_LIB" ]; then
echo "_LIB not set!"
exit 1
fi
_BASE_ELEMENT_DIR=$(${DIB_PYTHON_EXEC:-python} -c '
import diskimage_builder.paths
diskimage_builder.paths.show_path("elements")')
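# For reference, _BASE_ELEMENT_DIR resolves to the elements directory
# shipped inside the installed package, e.g. (illustrative path only)
#   /usr/lib/python3/dist-packages/diskimage_builder/elements
# or the equivalent location inside a virtualenv.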
source $_LIB/die
DIB_BLOCK_DEVICE="${DIB_PYTHON_EXEC} ${_LIB}/dib-block-device.py"
IS_RAMDISK=0
if [ "$SCRIPTNAME" == "ramdisk-image-create" ]; then
IS_RAMDISK=1
fi
function show_options () {
echo "Usage: ${SCRIPTNAME} [OPTION]... [ELEMENT]..."
echo
echo "Options:"
echo " -a amd64|armhf|arm64 -- set the architecture of the image (default amd64)"
echo " -o imagename -- set the imagename of the output image file (default image)"
echo " -t qcow2,tar,tgz,squashfs,vhd,docker,aci,raw -- set the image types of the output image files (default qcow2)"
echo " File types should be comma separated. VHD outputting requires the vhd-util"
echo " executable be in your PATH. ACI outputting requires the ACI_MANIFEST "
echo " environment variable be a path to a manifest file."
echo " -x -- turn on tracing (use -x -x for very detailed tracing)."
echo " -u -- uncompressed; do not compress the image - larger but faster"
echo " -c -- clear environment before starting work"
echo " --logfile -- save run output to given logfile (implies DIB_QUIET=1)"
echo " --checksum -- generate MD5 and SHA256 checksum files for the created image"
echo " --image-size size -- image size in GB for the created image"
echo " --image-extra-size size -- extra image size in GB for the created image"
echo " --image-cache directory -- location for cached images (default ~/.cache/image-create)"
echo " --max-online-resize size -- max number of filesystem blocks to support when resizing."
echo " Useful if you want a really large root partition when the image is deployed."
echo " Using a very large value may run into a known bug in resize2fs."
echo " Setting the value to 274877906944 will get you a 1PB root file system."
echo " Making this value unnecessarily large will consume extra disk space "
echo " on the root partition with extra file system inodes."
echo " --min-tmpfs size -- minimum size in GB needed in tmpfs to build the image"
echo " --mkfs-journal-size -- filesystem journal size in MB to pass to mkfs."
echo " --mkfs-options -- option flags to be passed directly to mkfs."
echo " Options should be passed as a single string value."
echo " --no-tmpfs -- do not use tmpfs to speed image build"
echo " --offline -- do not update cached resources"
echo " --qemu-img-options -- option flags to be passed directly to qemu-img."
echo " Options need to be comma separated, and follow the key=value pattern."
echo " --root-label label -- label for the root filesystem. Defaults to 'cloudimg-rootfs'."
echo " --ramdisk-element -- specify the main element to be used for building ramdisks."
echo " Defaults to 'ramdisk'. Should be set to 'dracut-ramdisk' for platforms such"
echo " as RHEL and CentOS that do not package busybox."
echo " --install-type -- specify the default installation type. Defaults to 'source'. Set to 'package' to use package based installations by default."
echo " --docker-target -- specify the repo and tag to use if the output type is docker. Defaults to the value of output imagename"
if [ "$IS_RAMDISK" == "0" ]; then
echo " -n -- skip the default inclusion of the 'base' element"
echo " -p package[,p2...] [-p p3] -- extra packages to install in the image. Runs once, after 'install.d' phase. Can be specified multiple times"
fi
echo " -h|--help -- display this help and exit"
echo " --version -- display version and exit"
echo
echo "Environment Variables:"
echo " (this is not a complete list)"
echo
echo " * ELEMENTS_PATH: specify external locations for the elements, as a colon-separated list (like \$PATH)"
echo " * DIB_NO_TIMESTAMP: no timestamp prefix on output. Useful if capturing output"
echo " * DIB_QUIET: 1=do not output log output to stdout; 0=always output to stdout. See --logfile"
echo
echo "NOTE: At least one distribution root element must be specified."
echo
echo "NOTE: If using the VHD output format you need to have a patched version of vhd-util installed for the image"
echo " to be bootable. The patch is available here: https://github.com/emonty/vhd-util/blob/master/debian/patches/citrix"
echo " and a PPA with the patched tool is available here: https://launchpad.net/~openstack-ci-core/+archive/ubuntu/vhd-util"
echo
echo "Examples:"
if [ "$IS_RAMDISK" == "0" ]; then
echo " ${SCRIPTNAME} -a amd64 -o ubuntu-amd64 vm ubuntu"
echo " export ELEMENTS_PATH=~/source/tripleo-image-elements/elements"
echo " ${SCRIPTNAME} -a amd64 -o fedora-amd64-heat-cfntools vm fedora heat-cfntools"
else
echo " ${SCRIPTNAME} -a amd64 -o fedora-deploy deploy fedora"
echo " ${SCRIPTNAME} -a amd64 -o ubuntu-ramdisk ramdisk ubuntu"
fi
}
function show_version() {
${DIB_PYTHON_EXEC:-python} -c "from diskimage_builder import version; print(version.version_info.version_string())"
}
DIB_DEBUG_TRACE=${DIB_DEBUG_TRACE:-0}
INSTALL_PACKAGES=""
IMAGE_TYPES=("qcow2")
COMPRESS_IMAGE="true"
DIB_GZIP_BIN=${DIB_GZIP_BIN:-"gzip"}
ROOT_LABEL="${ROOT_LABEL:-}"
DIB_DEFAULT_INSTALLTYPE=${DIB_DEFAULT_INSTALLTYPE:-"source"}
MKFS_OPTS=""
ACI_MANIFEST=${ACI_MANIFEST:-}
DOCKER_TARGET=""
LOGFILE=""
TEMP=`getopt -o a:ho:t:xucnp: -l checksum,no-tmpfs,offline,help,version,min-tmpfs:,image-size:,image-extra-size:,image-cache:,max-online-resize:,mkfs-journal-size:,mkfs-options:,qemu-img-options:,ramdisk-element:,root-label:,install-type:,docker-target:,logfile: -n $SCRIPTNAME -- "$@"`
if [ $? -ne 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-a) export ARCH=$2; shift 2 ;;
-o) export IMAGE_NAME=$2; shift 2 ;;
-t) IFS="," read -a IMAGE_TYPES <<< "$2"; export IMAGE_TYPES ; shift 2 ;;
-h|--help) show_options; exit 0;;
--version) show_version; exit 0;;
-x) shift; DIB_DEBUG_TRACE=$(( $DIB_DEBUG_TRACE + 1 ));;
-u) shift; export COMPRESS_IMAGE="";;
-c) shift ; export CLEAR_ENV=1;;
-n) shift; export SKIP_BASE="1";;
-p) IFS="," read -a _INSTALL_PACKAGES <<< "$2"; export INSTALL_PACKAGES=( ${INSTALL_PACKAGES[@]} ${_INSTALL_PACKAGES[@]} ) ; shift 2 ;;
--checksum) shift; export DIB_CHECKSUM=1;;
--image-size) export DIB_IMAGE_SIZE=$2; shift 2;;
--image-extra-size) export DIB_IMAGE_EXTRA_SIZE=$2; shift 2;;
--image-cache) export DIB_IMAGE_CACHE=$2; shift 2;;
--max-online-resize) export MAX_ONLINE_RESIZE=$2; shift 2;;
--mkfs-journal-size) export DIB_JOURNAL_SIZE=$2; shift 2;;
--mkfs-options) MKFS_OPTS=$2; shift 2;;
--min-tmpfs) export DIB_MIN_TMPFS=$2; shift 2;;
--no-tmpfs) shift; export DIB_NO_TMPFS=1;;
--offline) shift; export DIB_OFFLINE=1;;
--qemu-img-options) QEMU_IMG_OPTIONS=$2; shift 2;;
--root-label) ROOT_LABEL=$2; shift 2;;
--ramdisk-element) RAMDISK_ELEMENT=$2; shift 2;;
--install-type) DIB_DEFAULT_INSTALLTYPE=$2; shift 2;;
--docker-target) export DOCKER_TARGET=$2; shift 2 ;;
--logfile) export LOGFILE=$2; shift 2 ;;
--) shift ; break ;;
*) echo "Internal error!" ; exit 1 ;;
esac
done
export DIB_DEBUG_TRACE
# TODO: namespace this under ~/.cache/dib/ for consistency
export DIB_IMAGE_CACHE=${DIB_IMAGE_CACHE:-~/.cache/image-create}
mkdir -p $DIB_IMAGE_CACHE
# We have a couple of critical sections (touching parts of the host
# system or downloading images to a common cache) that we protect with flock.
# Use this directory for lockfiles.
export DIB_LOCKFILES=${DIB_LOCKFILES:-~/.cache/dib/lockfiles}
mkdir -p $DIB_LOCKFILES
if [ "$CLEAR_ENV" = "1" -a "$HOME" != "" ]; then
echo "Re-execing to clear environment."
echo "(note this will prevent much of the local_config element from working)"
exec -c $0 "$@"
fi
# We send stdout & stderr through "outfilter" which does timestamping,
# basic filtering and log file output.
_TS_FLAG=""
if [[ "${DIB_NO_TIMESTAMP:-0}" -eq 1 ]]; then
_TS_FLAG="--no-timestamp"
fi
# A logfile with *no* DIB_QUIET specified implies we just want output
# to the logfile. Explicitly setting DIB_QUIET=0 will override this
# and log to both.
if [[ -n "${LOGFILE}" && -z "${DIB_QUIET}" ]]; then
DIB_QUIET=1
fi
_QUIET_FLAG="-v"
if [[ "${DIB_QUIET:-0}" -eq 1 ]]; then
_QUIET_FLAG=""
fi
_LOGFILE_FLAG=""
if [[ -n "${LOGFILE}" ]]; then
echo "Output logs going to: ${LOGFILE}"
_LOGFILE_FLAG="-o ${LOGFILE}"
fi
# Save the existing stdout to fd3
exec 3>&1
exec 1> >( ${DIB_PYTHON_EXEC:-python} $_LIB/outfilter.py ${_TS_FLAG} ${_QUIET_FLAG} ${_LOGFILE_FLAG} ) 2>&1
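# For example (hypothetical combination of the flags assembled above):
# with DIB_NO_TIMESTAMP unset, DIB_QUIET unset and --logfile build.log,
# the redirection is roughly equivalent to
#   exec 1> >( python $_LIB/outfilter.py -o build.log ) 2>&1
# i.e. timestamped output written only to build.log.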
# Display the current file/function/line in the debug output
function _ps4 {
IFS=" " called=($(caller 0))
local f=$(readlink -f ${called[2]})
# As we're being run out of the python package's lib/ dir (either
# virtualenv or system), we can strip everything before
# "site-packages" to significantly shorten the line without really
# losing any information.
f=${f##*site-packages/}
printf "%-80s " "$f:${called[1]}:${called[0]}"
}
export -f _ps4
export PS4='+ $(_ps4): '
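# With this PS4, each "set -x" trace line is prefixed with
# file:line:function padded to 80 columns, e.g. (hypothetical trace line):
#   + diskimage_builder/lib/img-functions:123:mount_proc_dev_sys   : sudo mount ...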
source $_LIB/img-defaults
source $_LIB/common-functions
source $_LIB/img-functions
if [ "$IS_RAMDISK" == "1" ]; then
source $_LIB/ramdisk-defaults
source $_LIB/ramdisk-functions
fi
echo "diskimage-builder version $(show_version)"
# If no elements are specified there's no way we can succeed
if [ -z "$*" ]; then
echo "ERROR: At least one distribution root element must be specified"
exit 1
fi
arg_to_elements "$@"
# start tracing after most boilerplate
if [ ${DIB_DEBUG_TRACE} -gt 0 ]; then
set -x
fi
if [ "${#IMAGE_TYPES[@]}" = "1" ]; then
export IMAGE_NAME=${IMAGE_NAME%%\.${IMAGE_TYPES[0]}}
fi
# Check for required tools early on
for X in ${!IMAGE_TYPES[@]}; do
case "${IMAGE_TYPES[$X]}" in
qcow2)
if ! type qemu-img > /dev/null 2>&1; then
echo "qcow2 output format specified but qemu-img executable not found."
exit 1
fi
;;
tgz)
# Force tar to be created.
IMAGE_TYPES+=('tar')
;;
vhd)
if ! type vhd-util > /dev/null 2>&1; then
echo "vhd output format specified but no vhd-util executable found."
exit 1
fi
;;
squashfs)
if ! type mksquashfs > /dev/null 2>&1; then
echo "squashfs output format specified but no mksquashfs executable found."
exit 1
fi
;;
docker)
if ! type docker > /dev/null 2>&1; then
echo "docker output format specified but no docker executable found."
exit 1
fi
if [ -z "$DOCKER_TARGET" ]; then
echo "Please set --docker-target."
exit 1
fi
;;
esac
done
# NOTE: fstrim is available on most recent systems. It is provided by the
# util-linux package.
if ! type fstrim > /dev/null 2>&1; then
echo "The fstrim utility was not found. It is provided by the util-linux package."
echo "Please check that your PATH is set correctly."
exit 1
fi
# xattr support cannot be relied upon with tmpfs builds;
# some kernels support it, some don't
if [[ -n "${GENTOO_PROFILE}" ]]; then
if [[ "${GENTOO_PROFILE}" =~ "hardened" ]]; then
echo 'disabling tmpfs for gentoo hardened build'
export DIB_NO_TMPFS=1
fi
fi
mk_build_dir
# Create the YAML file with the final and raw configuration for
# the block device layer.
mkdir -p ${TMP_BUILD_DIR}/block-device
BLOCK_DEVICE_CONFIG_YAML=${TMP_BUILD_DIR}/block-device/config.yaml
block_device_create_config_file "${BLOCK_DEVICE_CONFIG_YAML}"
# Write out the parameter file
DIB_BLOCK_DEVICE_PARAMS_YAML=${TMP_BUILD_DIR}/block-device/params.yaml
export DIB_BLOCK_DEVICE_PARAMS_YAML
cat >${DIB_BLOCK_DEVICE_PARAMS_YAML} <<EOF
config: ${BLOCK_DEVICE_CONFIG_YAML}
image-dir: ${TMP_IMAGE_DIR}
root-fs-type: ${FS_TYPE}
root-label: ${ROOT_LABEL}
mount-base: ${TMP_BUILD_DIR}/mnt
build-dir: ${TMP_BUILD_DIR}
EOF
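# A rendered params.yaml might look like this (illustrative values only):
#   config: /tmp/dib_build.AbCdEf/block-device/config.yaml
#   image-dir: /tmp/dib_image.AbCdEf
#   root-fs-type: ext4
#   root-label: cloudimg-rootfs
#   mount-base: /tmp/dib_build.AbCdEf/mnt
#   build-dir: /tmp/dib_build.AbCdEf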
${DIB_BLOCK_DEVICE} init
# Need to get the real root label because it can be overwritten
# by the BLOCK_DEVICE_CONFIG.
DIB_ROOT_LABEL=$(${DIB_BLOCK_DEVICE} getval root-label)
export DIB_ROOT_LABEL
# Need to get the real fs type for the root filesystem
DIB_ROOT_FSTYPE=$(${DIB_BLOCK_DEVICE} getval root-fstype)
export DIB_ROOT_FSTYPE
# Need to get the boot device label because, if defined, we may
# need to update boot configuration in some cases
DIB_BOOT_LABEL=$(${DIB_BLOCK_DEVICE} getval boot-label)
export DIB_BOOT_LABEL
# retrieve mount points so we can reuse them in elements
DIB_MOUNTPOINTS=$(${DIB_BLOCK_DEVICE} getval mount-points)
export DIB_MOUNTPOINTS
create_base
# This variable needs to be propagated into the chroot
mkdir -p $TMP_HOOKS_PATH/environment.d
echo "export DIB_DEFAULT_INSTALLTYPE=\${DIB_DEFAULT_INSTALLTYPE:-\"${DIB_DEFAULT_INSTALLTYPE}\"}" > $TMP_HOOKS_PATH/environment.d/11-dib-install-type.bash
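# With the default install type, the file written above contains e.g.:
#   export DIB_DEFAULT_INSTALLTYPE=${DIB_DEFAULT_INSTALLTYPE:-"source"}
# so a value already present in the chroot environment takes precedence
# over the default.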
run_d extra-data
# Run pre-install scripts. These do things that prepare the chroot for package installs
run_d_in_target pre-install
# Call install scripts to pull in the software users want.
run_d_in_target install
do_extra_package_install
run_d_in_target post-install
run_d post-root
# ensure we do not have a lost+found directory in the root folder
# that could cause the copy to fail (it will be created again later,
# when the file system is created, if such a directory is needed)
if [ -e "$TMP_BUILD_DIR/mnt/lost+found" ]; then
sudo rm -rf "$TMP_BUILD_DIR/mnt/lost+found"
fi
# Free up /mnt
unmount_image
mv $TMP_BUILD_DIR/mnt $TMP_BUILD_DIR/built
# save xtrace state, as we always want to turn it off to avoid
# spamming the logs with du output below.
xtrace=$(set +o | grep xtrace)
# temp file for holding du output
du_output=${TMP_BUILD_DIR}/du_output.tmp
if [ -n "$DIB_IMAGE_SIZE" ]; then
du_size=$(echo "$DIB_IMAGE_SIZE" | awk '{printf("%d\n",$1 * 1024 *1024)}')
else
set +o xtrace
echo "Calculating image size (this may take a minute)..."
sudo du -a -c -x ${TMP_BUILD_DIR}/built > ${du_output}
# the last line is the total size from "-c".
if [ -n "$DIB_IMAGE_EXTRA_SIZE" ]; then
# add DIB_IMAGE_EXTRA_SIZE megabytes to create a bigger image as requested
du_extra_size=$(echo "$DIB_IMAGE_EXTRA_SIZE" | awk '{printf("%d\n",$1 * 1024)}')
du_size_tmp=$(tail -n1 ${du_output} | cut -f1)
du_size=$(echo "$du_size_tmp $du_extra_size" | awk '{print int($1 + $2)}')
else
# divide by 0.6 so the measured content fills only ~60% of the image,
# leaving headroom for growth
du_size=$(tail -n1 ${du_output} | cut -f1 | awk '{print int($1 / 0.6)}')
fi
$xtrace
fi
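# Worked example (illustrative numbers): with --image-size 2 the size is
# fixed at 2 * 1024 * 1024 = 2097152 (KiB); without it, a measured tree of
# 1200000 KiB becomes int(1200000 / 0.6) = 2000000 KiB, i.e. ~40% headroom.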
if [[ "${DIB_SHOW_IMAGE_USAGE:-0}" != 0 ]]; then
set +o xtrace
if [ ! -f "$du_output" ]; then
sudo du -a -c -x ${TMP_BUILD_DIR}/built > ${du_output}
fi
du_output_show="sort -nr ${du_output} |
numfmt --to=iec-i --padding=7
--suffix=B --field=1 --from-unit=1024"
# by default show the 10MiB and greater files & directories -- a
# dir with lots of little files will still show up, but this helps
# signal:noise ratio
if [[ ${DIB_SHOW_IMAGE_USAGE_FULL:-0} == 0 ]]; then
# numfmt will start giving a decimal place when < 10MiB
du_output_show+="| egrep 'MiB|GiB|TiB|PiB' | grep -v '\..MiB'"
echo "================================="
echo "Image size report (files > 10MiB)"
echo "================================="
else
echo "================="
echo "Image size report"
echo "================="
fi
eval ${du_output_show}
echo
echo "===== end image size report ====="
echo
$xtrace
fi
rm -f ${du_output}
if [ -n "$DIB_JOURNAL_SIZE" ]; then
journal_size="$DIB_JOURNAL_SIZE"
else
journal_size=64
fi
if [ "$DIB_ROOT_FSTYPE" = "ext4" ] ; then
# Very conservative to handle images being resized a lot
# We set journal size to 64M so our journal is large enough when we
# perform an FS resize.
MKFS_OPTS="-i 4096 -J size=$journal_size $MKFS_OPTS"
# NOTE(ianw) 2019-12-11 : this is a terrible hack ... if building on
# >=Bionic hosts, mkfs sets "metadata_csum" for ext4 filesystems,
# which makes broken Trusty images as that era fsck doesn't
# understand this flag. The image will stop in early boot
# complaining:
#
# Serious errors were found while checking the disk drive for /.
#
# We do not really have any suitable hook points where one of the
# ubuntu elements or block-device-* could set this override flag for
# just Trusty. We probably should, but desire to implement more
# code to support the out-of-date trusty at this point is
# non-existent. So hack in disabling this here.
if [[ ${DIB_RELEASE} == "trusty" ]]; then
MKFS_OPTS="-O ^metadata_csum $MKFS_OPTS"
fi
# Grow the image size to account for the journal, only if the user
# has not asked for a specific size.
if [ -z "$DIB_IMAGE_SIZE" ]; then
du_size=$(( $du_size + ($journal_size * 1024) ))
fi
fi
# EFI system partitions default to a fairly large 512MiB for maximum
# compatibility (see notes in
# 7fd52ba84180b4e749ccf4c9db8c49eafff46ea8). We need to increase the
# total size to account for this, or we run out of space creating the
# final image. See if we have included the block-device-efi element,
# which implies we have a large EFI partition, and then pad the final
# image size.
if [[ ${IMAGE_ELEMENT} =~ "block-device-efi" ]]; then
echo "Expanding disk for EFI partition"
du_size=$(( $du_size + (525 * 1024) ))
fi
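# (525 * 1024 KiB is 525 MiB: the 512 MiB EFI system partition plus a
# little slack.)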
# Round the size up so that it is a multiple of 64; this works around a bug
# in qemu-img that may occur when compressing raw images that aren't a
# multiple of 64k. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1180021
export DIB_IMAGE_SIZE=$(echo "$du_size" | awk ' { if ($1 % 64 != 0) print $1 + 64 - ( $1 % 64); else print $1; } ')
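# e.g. (illustrative): 1000 % 64 = 40, so 1000 becomes 1000 + 64 - 40 = 1024;
# a value already divisible by 64 is left unchanged.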
if [ -n "$MAX_ONLINE_RESIZE" ]; then
MKFS_OPTS="-E resize=$MAX_ONLINE_RESIZE $MKFS_OPTS"
fi
export TMP_IMAGE_DIR
# Try the 'old fashioned' way of calling the block device
# phase. If this gives no result, use the configuration-based approach:
eval_run_d block-device "IMAGE_BLOCK_DEVICE="
if [ -z ${IMAGE_BLOCK_DEVICE} ] ; then
# For compatibility reasons, in addition to the YAML configuration
# we also need to handle the old environment variables.
echo "image-size: ${DIB_IMAGE_SIZE}KiB" >> ${DIB_BLOCK_DEVICE_PARAMS_YAML}
if [ -n "${MKFS_OPTS}" ] ; then
echo "root-fs-opts: '${MKFS_OPTS}'" >> ${DIB_BLOCK_DEVICE_PARAMS_YAML}
fi
# After changing the parameters, we need to re-run
# ${DIB_BLOCK_DEVICE} init because some values might change
# based on the newly set parameters.
${DIB_BLOCK_DEVICE} init
# Create the block device based on the (updated) YAML configuration.
${DIB_BLOCK_DEVICE} create
# This is the device (/dev/loopX). It's where to install the
# bootloader.
IMAGE_BLOCK_DEVICE=$(${DIB_BLOCK_DEVICE} getval image-block-device)
export IMAGE_BLOCK_DEVICE
# Similar to above, but all mounted devices. This is handy for
# some bootloaders that have multi-partition layouts and want to
# copy things to different places other than just
# IMAGE_BLOCK_DEVICE. "eval" this into an array as needed
IMAGE_BLOCK_DEVICES=$(${DIB_BLOCK_DEVICE} getval image-block-devices)
export IMAGE_BLOCK_DEVICES
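# For example (hypothetical consumer; the exact format of the exported
# string is defined by "dib-block-device getval image-block-devices"),
# an element might do something like
#   eval "declare -a DEVICES=${IMAGE_BLOCK_DEVICES}"
# to turn the string back into an array it can iterate over.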
# Write the fstab
${DIB_BLOCK_DEVICE} writefstab
fi
# XXX: needed?
LOOPDEV=${IMAGE_BLOCK_DEVICE}
# At this point, ${DIB_BLOCK_DEVICE} has created the raw image file
# (IMAGE_BLOCK_DEVICE) and mounted all the partitions under
# $TMP_BUILD_DIR/mnt for us. We can now copy into the final image.
# 'mv' is not usable here - especially when a top level directory
# has the same name as a mount point of a partition. If so, 'mv'
# will complain:
# mv: inter-device move failed: '...' to '...'; \
# unable to remove target: Device or resource busy
# therefore a 'cp' and 'rm' approach is used.
sudo cp -ra ${TMP_BUILD_DIR}/built/* $TMP_BUILD_DIR/mnt
sudo rm -fr ${TMP_BUILD_DIR}/built/*
mount_proc_dev_sys
run_d pre-finalise
run_d_in_target finalise
finalise_base
for X in ${!IMAGE_TYPES[@]} ; do
if [[ " tar aci " =~ "${IMAGE_TYPES[$X]}" ]]; then
if [ "${IMAGE_TYPES[$X]}" = "aci" ]; then
sudo tar -C ${TMP_BUILD_DIR}/mnt -cf $IMAGE_NAME.aci --exclude ./sys \
--exclude ./proc --xattrs --xattrs-include=\* \
--transform 's,^.,rootfs,S' .
if [ -n "$ACI_MANIFEST" ]; then
cp $ACI_MANIFEST ${TMP_BUILD_DIR}/manifest
sudo tar -C ${TMP_BUILD_DIR} --append -f $IMAGE_NAME.aci manifest
else
echo "No ACI_MANIFEST specified. An ACI_MANIFEST must be specified for"
echo " this image to be usable."
fi
else
sudo tar -C ${TMP_BUILD_DIR}/mnt -cf $IMAGE_NAME.tar --exclude ./sys \
--exclude ./proc --exclude ./dev/* --xattrs --xattrs-include=\* .
fi
sudo chown $USER: $IMAGE_NAME.${IMAGE_TYPES[$X]}
unset IMAGE_TYPES[$X]
elif [ "${IMAGE_TYPES[$X]}" == "squashfs" ]; then
sudo mksquashfs ${TMP_BUILD_DIR}/mnt $IMAGE_NAME.squash -comp xz \
-noappend -root-becomes ${TMP_BUILD_DIR}/mnt \
-wildcards -e "proc/*" -e "sys/*" -no-recovery
elif [ "${IMAGE_TYPES[$X]}" == "docker" ]; then
sudo tar -C ${TMP_BUILD_DIR}/mnt -cf - --exclude ./sys \
--exclude ./proc --xattrs --xattrs-include=\* . \
| sudo docker import - $DOCKER_TARGET
unset IMAGE_TYPES[$X]
fi
done
# Unmount and cleanup the /mnt and /build subdirectories, to save
# space before converting the image to some other format.
# XXX ? needed?
export EXTRA_UNMOUNT=""
unmount_image
TMP_IMAGE_PATH=$(${DIB_BLOCK_DEVICE} getval image-path)
export TMP_IMAGE_PATH
# remove all mounts
${DIB_BLOCK_DEVICE} umount
${DIB_BLOCK_DEVICE} cleanup
cleanup_build_dir
if [[ (! $IMAGE_ELEMENT =~ no-final-image) && "$IS_RAMDISK" == "0" ]]; then
has_raw_type=
for IMAGE_TYPE in ${IMAGE_TYPES[@]} ; do
# We have to do raw last because it is destructive
if [ "$IMAGE_TYPE" = "raw" ]; then
has_raw_type=1
elif [ "$IMAGE_TYPE" != "squashfs" ]; then
compress_and_save_image $IMAGE_NAME.$IMAGE_TYPE
fi
done
if [ -n "$has_raw_type" ]; then
IMAGE_TYPE="raw"
compress_and_save_image $IMAGE_NAME.$IMAGE_TYPE
fi
fi
# Remove the leftovers, i.e. the temporary image directory.
cleanup_image_dir
# Restore fd 1&2 from the outfilter.py redirect back to the original
# saved fd. Note small hack that we can't really wait properly for
# outfilter.py so put in a sleep (might be possible to use coproc for
# this...?)
#
# TODO(ianw): probably better to cleanup the exit handler a bit for
# this? We really want some helper functions that append to the exit
# handler so we can register multiple things.
set +o xtrace
echo "Build completed successfully"
exec 1>&3 2>&3
sleep 1
# All done!
trap EXIT