Because we're still fundamentally a Python program calling a
shell script, there are some oddities, like the virtualenv bin/
not being in $PATH if we call disk-image-create directly.
We can detect this, however, and activate the virtualenv before we
fork the disk-image-create shell script so everything "just works".
See also nodepool change I0537cbf167bb18edf26f84ac269cbd9c8a1ea6a2
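A minimal sketch of the detection (illustrative only; names like
activate_venv_path are not from the change itself):

    import os
    import sys

    def activate_venv_path():
        # in a virtualenv, sys.prefix differs from the base prefix
        base = getattr(sys, 'base_prefix', None) or getattr(
            sys, 'real_prefix', sys.prefix)
        if base != sys.prefix:
            # prepend the virtualenv bin/ so the forked shell script
            # can find our console scripts on $PATH
            bindir = os.path.join(sys.prefix, 'bin')
            path = os.environ.get('PATH', '')
            if bindir not in path.split(os.pathsep):
                os.environ['PATH'] = bindir + os.pathsep + path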
Change-Id: Ibfea6cf6a6fd0c7f1e468d501c61ae0b58992042
After writing the basearch value to /etc/dnf/vars/basearch, the
arch value was overwriting the same file. This appears to be
incorrect, so change it to write /etc/dnf/vars/arch instead, which
matches the subsequent 'yum' code paths.
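A minimal sketch of the corrected writes (the paths are from this
change; the surrounding logic is illustrative):

    basearch, arch = 'x86_64', 'x86_64'  # example values

    # previously the second write clobbered /etc/dnf/vars/basearch
    with open('/etc/dnf/vars/basearch', 'w') as f:
        f.write(basearch + '\n')
    with open('/etc/dnf/vars/arch', 'w') as f:
        f.write(arch + '\n')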
Change-Id: I5da54f03224c11f9e286f16b68533936c4174c2a
This change was made for pre-install so it applies during the
image build, but wasn't applied to the os-refresh-config script
that would run after deployment. The same problems apply there,
so we should do the same thing.
Change-Id: I4b8534cc9586eeb588b5c358550e76e27d40556a
Closes-Bug: 1629922
It has been observed that some chroot operations spawn additional
processes which rely on chroot files. More specifically, zypper uses
gpg-agent to import and validate gpg keys for its repositories. This
gpg-agent process may stay alive for longer, which prevents unmounting
the tmpfs directory, since the gpg-agent process still uses libraries
etc. which were present in the chroot. We try to solve this by walking
all the pids in /proc to find the processes running in the chroot and
kill them gracefully. If that fails for whatever reason, we simply
keep trying to umount the tmpfs directory before we give up.
The gpg-agent process usually terminates soon after its home directory
disappears, but on fast systems we can reach the 'umount tmpfs' point
before gpg-agent terminates by itself. The solution is generic enough
that other 'chroot processes' can also be handled appropriately.
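A hedged sketch of the approach (helper names are illustrative, not
the actual implementation):

    import os
    import signal
    import time

    def kill_chroot_processes(chroot):
        # walk /proc; any process whose root link points into the
        # chroot (e.g. a lingering gpg-agent) is asked to terminate
        for pid in filter(str.isdigit, os.listdir('/proc')):
            try:
                root = os.readlink('/proc/%s/root' % pid)
            except OSError:
                continue  # process already exited, or not inspectable
            if root == chroot or root.startswith(chroot + '/'):
                os.kill(int(pid), signal.SIGTERM)

    def umount_tmpfs_with_retries(path, attempts=10, delay=1):
        # if something still holds the mount, keep retrying the
        # umount before we give up
        for _ in range(attempts):
            if os.system('umount %s' % path) == 0:
                return True
            time.sleep(delay)
        return False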
Change-Id: Iccf332678c79266113e76f062884fc5ee79e515d
For fedora/rhel/centos the main supported ARCH is x86_64. This patch
allows calling diskimage-builder for the above distros with the param
ARCH=x86_64, while retaining the same behavior when called with
ARCH=amd64, as that translates to x86_64 anyway. Doing so simplifies
the user experience.
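A minimal sketch of the translation (the function name is
illustrative; the real change lives in the shell code):

    def normalize_arch(arch):
        # accept both spellings for RPM-based distros; amd64 is
        # translated to x86_64 anyway
        return 'x86_64' if arch in ('amd64', 'x86_64') else arch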
Change-Id: I229e0912434109b1b48a030bd35ad8dc1096a629
Without the dialog package it is not possible
to use an interactive frontend properly.
debconf will print the following errors:
  debconf: unable to initialize frontend: Dialog
  debconf: (No usable dialog-like program is installed,
  so the dialog based frontend cannot be used. at
  /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
Change-Id: I0c7142f717cacf7437dbac1e1696f39b00cb4c49
We have a pkg-map entry for lsb_release, but in package-installs.yaml
we refer to the actual package name instead. This will happen to
work on Red Hat platforms, but it's actually wrong.
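As an illustration of the mechanism (the mapping below is
hypothetical, not the actual pkg-map contents):

    # pkg-map translates an abstract name to the package providing it
    PKG_MAP = {
        'lsb_release': 'redhat-lsb-core',  # assumed Red Hat mapping
    }

    def map_package(name):
        # fall back to the literal name when no mapping exists
        return PKG_MAP.get(name, name)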
Change-Id: Idb248f96e75fa1090422fa08e5fbb2385cc1f517
yumdownloader has to have all the repo XML files, etc., which adds up
to a not insignificant 150MiB or so. Currently we're leaking
this directory for every build, which adds up on regular builders
like nodepool.
Isolate the call with a separate TMPDIR so we can clean it up after
the initial download.
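A sketch of the isolation, assuming a Python wrapper (the real call
site is shell; the helper name is illustrative):

    import os
    import shutil
    import subprocess
    import tempfile

    def yumdownloader_isolated(args):
        # give yumdownloader a private TMPDIR so the ~150MiB of repo
        # metadata can be removed once the initial download is done
        tmp = tempfile.mkdtemp()
        try:
            env = dict(os.environ, TMPDIR=tmp)
            subprocess.check_call(['yumdownloader'] + args, env=env)
        finally:
            shutil.rmtree(tmp, ignore_errors=True)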
Change-Id: Ic65e8ca837cc76b7a1bb9f83027b4a5bdd270f75
Remove the x bit from lib/disk-image-create; because it's called
directly by the entry point, it doesn't need to be executable.
This should also make it clearer that you're not supposed to run it
by hand.
Also remove some boilerplate from the old file.
Change-Id: Ibb6cdae613e6c9cf21dd6aecc8e1f739bc3a2643
Move dib-run-parts from dib-utils into diskimage-builder directly.
For calling outside the chroot, we provide a standard entry-point
script. However, as noted in the warning comment, the underlying
script is still copied directly into the chroot by the dib-run-parts
element. I believe this to be the KISS approach.
This removes the dependency on dib-utils. We have discussed this
previously and nobody seemed to think retiring dib-utils was going to
be an issue.
This also updates the documentation to not mention dib-utils or
setting up $PATH to use disk-image-create, and instead gives
instructions on installing with pip into a virtualenv.
Change-Id: Ic1e22ba498d2c368da7d72e2e2b70ff34324feb8
While using diskimage-builder to build overcloud images for TripleO
using RDO, this repository is (in my opinion) wrongly disabled,
because it contains certain dependencies needed by RDO packages.
Example: python-cheetah is required by python-nova, but is not
available through the RDO repository, only from
rhel-7-server-rh-common-rpms.
Closes-Bug: #1638938
Change-Id: I76824c8ec02590397f1ff1d4f177ad061c7bf441
Signed-off-by: Luca Lorenzetto <lorenzetto.luca@gmail.com>
Mount all the usual /dev /sys /proc pseudo filesystems during the
root.d phase in order to make sure they are available for the rpm
post-installation phases.
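A sketch of the idea in Python (the element itself is shell; the
helper name is illustrative):

    import subprocess

    def mount_pseudo_filesystems(root):
        # bind-mount the host pseudo filesystems into the image root
        # so rpm post-installation scriptlets can rely on them
        for src in ('/dev', '/dev/pts', '/proc', '/sys'):
            subprocess.check_call(['mount', '--bind', src, root + src])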
Change-Id: I28221debf1036d9eb5137161757eb30811eafab1
On CentOS and RHEL 6 the init system is upstart, but networking uses
sysv compatibility, so a code path is needed to handle this situation.
We can't use DISTRO_NAME because the centos-minimal element sets it to
centos for CentOS 7, while the centos element sets it to centos for
CentOS 6.
Change-Id: Ib8e33ed78b3d6a5737eb7449bccef2d33f72b131
Closes-Bug: #1638527
It has always been a weird thing that dib is a python package, but
is totally driven by the disk-image-create script. It creates this
strange division that is hard to explain.
This moves disk-image-create to a regular python entry-point.
Currently, this simply exec()s the original disk-image-create script.
However, we now have a (private) interface between disk-image-create
written in python and the driver shell script. Here are some things
we could do, for example:
* Argument parsing is generally nicer in Python, and the end result
is mostly just setting environment variables to flag different things
in the shell script. I could see us moving the argument parsing into
diskimage_builder.disk_image_create:main() and just setting things in
os.environ before the exec().
* I7092e1845942f249175933d67ab121188f3511fd sets IMAGE_ELEMENT_YAML in
disk-image-create by calling back to element-info. We could just call
element_dependencies.find_all_elements() here and export the result to
os.environ before disk-image-create starts.
* remove need for ramdisk-image-create symlink by just exporting
IS_RAMDISK based on sys.argv[1] value
* you could even unit test some of this :)
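A minimal sketch of such an entry point (the paths helper comes from
the related change below; treat the exact lookup as an assumption):

    import os
    import sys

    from diskimage_builder import paths

    def main():
        # replace this process with the shell driver; since
        # lib/disk-image-create is not executable, run it via bash
        script = os.path.join(paths.get_path('lib'),
                              'disk-image-create')
        os.execv('/bin/bash', ['bash', script] + sys.argv[1:])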
Change-Id: I69ca3d26fede0506a6353c077c69f735c8d84d28
Currently we have all our elements and library files in a top-level
directory and install them into
<root>/share/diskimage-builder/[elements|lib] (where root is either /
or the root of a virtualenv).
The problem with this is that editable/development installs (pip -e)
do *not* install data_files. Thus we have no canonical location to
look for elements -- leading to the various odd things we do, such as
the whole bunch of guessing at the top of disk-image-create and the
special test-loader in tests/test_elements.py so we can run python
unit tests on the elements that include them.
data_files is really the wrong thing to use for what are essentially
assets of the program. data_files install works well for things like
config-files, init.d files or dropping documentation files.
By moving the elements under the diskimage_builder package, we always
know where they are relative to where we import from. In fact,
pkg_resources has an api for this which we wrap in the new
diskimage_builder/paths.py helper [1].
We use this helper to find the correct path in the couple of places we
need to find the base-elements dir, and for the paths to import the
library shell functions.
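A hedged sketch of what the helper can look like (a minimal version;
the real helper may differ):

    import os
    import pkg_resources

    def get_path(component):
        # resolve e.g. 'elements' or 'lib' relative to the installed
        # diskimage_builder package; for a normal on-disk install
        # this amounts to a join against the package's __file__
        return os.path.abspath(pkg_resources.resource_filename(
            'diskimage_builder', component))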
Elements such as svc-map and pkg-map include python unit tests, which
tests/test_elements.py no longer needs to special-case load.
They just get found automatically by the normal subunit loader.
I have a follow-on change (I69ca3d26fede0506a6353c077c69f735c8d84d28)
to move disk-image-create to a regular python entry-point.
Unfortunately, this has to move to work with setuptools. You'd think
a symlink under diskimage_builder/[elements|lib] would work, but it
doesn't.
[1] this API handles stuff like getting files out of .zip archive
modules, which we don't do. Essentially for us it's returning
__file__.
Change-Id: I5e3e3c97f385b1a4ff2031a161a55b231895df5b
If element-info fails we do not detect it, due to it being run in a
subshell. Whenever this happens it is a terminal error (there's no
way we can run without an expanded set of elements), so let's detect
it and fail early.
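For illustration, the same fail-early behavior expressed in Python
(the flag shown is the existing --expand-dependencies; the helper
name is an assumption):

    import subprocess
    import sys

    def expand_elements(elements):
        # check_output raises CalledProcessError on failure, so a
        # broken element-info run cannot be silently ignored
        try:
            out = subprocess.check_output(
                ['element-info', '--expand-dependencies'] + elements)
        except subprocess.CalledProcessError:
            sys.exit('ERROR: element-info failed to expand elements')
        return out.decode().split()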
Change-Id: Ibdeecf19bc2824982273ef5cda6d7b7b614e484e
The refresh operation must happen after the cache has been added in
order to ensure that whatever is in the cache is still relevant to
the current build and we are not using stale packages.
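A hedged sketch of the ordering, using zypper as in the earlier
change (the cache-restore helper is hypothetical):

    import subprocess

    def restore_package_cache():
        # hypothetical stand-in for putting the existing cache in place
        pass

    def prepare_repos():
        restore_package_cache()  # add the cache first
        # refresh only afterwards, so stale cached metadata/packages
        # are revalidated against the live repositories
        subprocess.check_call(
            ['zypper', '--non-interactive', 'refresh'])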
Change-Id: Iafd718e9738f85b8c235806c027665730f44d89b