This change updates the 'proliant-tools' element to
install the 'unzip' package, which is required to perform
SUM-based firmware updates for HPE ProLiant servers.
Change-Id: Ib8f6d18402439edd93d100cc7a4fb2094c863715
As described in the comment, we need to create the /etc/machine-id for
the image-based build when systemd isn't updated (as is usually the
case for a new distro)
Work on clearing this out continues, but this brings it to parity with
fedora-minimal.
Change-Id: Icbbbabb4114d4d95909648d8e39a6bae6d2a7b7b
Depends-On: I761e425f8a658669d9b8a70ce4260cec263ea51a
The URL we are using seems to have disappeared. Update this to
download.fedoraproject.org. The new URL requires a "subrelease" now;
add it, along with a note on where it comes from.
Change-Id: I761e425f8a658669d9b8a70ce4260cec263ea51a
This element was assuming that yaml was included as a package,
but there are systems that do not include it, so properly add yaml
as a dependency.
Change-Id: I72da2776674a3963657052b9a9715abcb4fab1e2
Partially-Fixes-Bug: #1715686
When used in combination with the rhel7 image, the unregistering of
repos has already happened, because it is executed at 60- ordering.
As dracut-regenerate may need to install extra packages,
this causes the step to fail, because it cannot find any repos
to pull the packages from.
Change-Id: I35e37df7990ad76a5004cb90fdd863ec743a5483
Per the bug report, these seem to be causing issues with maintaining
file capabilities. They aren't necessary so let's just remove them.
Change-Id: I06c90fdc85655986142b936cadbe04d75dd27427
Closes-Bug: 1714604
Avoid incorrect use of [ with =~ matching
I guess this doesn't trip "-e" because it's in an if-conditional. I'm
looking at making bashate detect this; maybe we can run bashate over
things we know are scripts
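For reference, the broken pattern versus the fixed one looks something
like this (the variable and value are purely illustrative):

    # The [ test builtin does not understand =~, so this prints
    # "[: =~: binary operator expected" and is always false; because
    # it sits in an if-conditional it never trips "set -e" either.
    if [ "$DIB_RELEASE" =~ "jessie" ]; then
        echo "matched"
    fi

    # Regex matching only works inside the [[ ]] compound command.
    if [[ "$DIB_RELEASE" =~ jessie ]]; then
        echo "matched"
    fi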
Change-Id: Ia3fe2b978fae5bdaadbb1789058180d3ad950d00
In Ubuntu/Debian, the default dependencies cannot be relied
upon as we enter into a cyclical dependency relationship which
prevents the unit from starting.
Added the required configuration to the systemd unit file.
This issue has also been observed in glean[0], which has a nearly
identical unit file for interface start-up.
[0]: https://review.openstack.org/#/c/485748
Closes-Bug: #1708685
Change-Id: I23ac9510d1a21c7073bd33f76ba66fa04a8be035
This provides basic LVM support to dib-block-device.
Co-Authored-By: Ian Wienand <iwienand@redhat.com>
Change-Id: Ibd624d9f95ee68b20a15891f639ddd5b3188cdf9
Under certain environments, this timeout was causing failures
because it was too short. Increase it to 10 to give enough time to
perform the specified tasks.
Change-Id: I01dd3553f38e1137b2fcb04b4ee12202be3ad1a8
Many programs rely upon /etc/protocols being present;
however, the default debian image that is generated lacks
/etc/protocols. This is observable when building an image
for use with ironic via the ironic-agent element, since
the IPA agent fails to start as python needs /etc/protocols
to open a socket connection.
Added to debian-minimal as it is inherited by the debian
element.
Change-Id: Icc81635870961943707cf6b3f61a9ddbd51cb8fd
Closes-Bug: #1708531
There is some confusion in the READMEs over what is happening. The
original change (Iaf46c8e61bf1cac9a096cbfd75d6d6a9111b701e) split out
debian-minimal and made debian "... simply be a collection of the
extra things we do to make it look like a cloud-init based cloud
image"
Make this clearer in the documentation
Change-Id: Ibe6fad9c67b70a5e31e43e06419968135174fef3
Multiple nodes deployed from the same generated image shouldn't share
the same /etc/machine-id, so clearing it and letting systemd generate
a new id on first boot seems to be the best way to achieve this.
Change-Id: I73d0577d31464521b3989312fd9d982a1312a268
Closes-bug: 1707526
Closes-bug: 1672461
Fedora 26 is now the latest release:
https://fedoraproject.org/wiki/Releases/26/Schedule
We are building and using these in infra now
Change-Id: I012c2d28255be274e88abc2751d968bafaf76fbb
Depends-On: Ieba5f69020a13681074f72cfca2955071801b63a
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Change I008f8bbc9c8414ce948c601e3907e27764e15a52 has shown that we
build redhat images without the "semanage" tool available, which comes
from the policycoreutils-python package (see also
I3f9e2c322d042a5dddba33451c0fc21a4d32a88a).
I403e7806ae10d5dd96d0727832f4da20e34b94c7 added some of the selinux
libraries to yum-minimal for ansible support, but not to others.
Given both these changes, it seems that selinux[-targeted],
libselinux[-python] and policycoreutils[-python] can reasonably be
considered part of all base images. Move the selinux-related packages
into redhat-common.
This also adds it explicitly to install_test_deps.sh. It was actually
being dragged in by the docker install, but is a required component
for building (it should be in bindep, but we're not there with that yet).
Change-Id: Idd4ae71ee6deee84604823b6b5dc4a845f316e01
Related-Bug: #1707788
The MBR Partition Table Entry (PTE) allows one to specify many
possible partition types and one of the benefits of this is being able
to specify the CHS variant or the LBA variant.
By default, LBA only creates partitions of type 0x83 (of course,
that's only because the documentation doesn't tell you how to make it
do anything else).
I will take up Ian's suggestion in patch set 2 for a more rigorous
test in an independent patch set.
Change-Id: If3068535980eac2e58d4025444c65147a8c7fedc
Closes-Bug: #1703352
Currently, the cleanup script is using the existence of the
semanage binary to check whether selinux is enabled. However,
this is misleading and can lead to problems when selinux
is disabled on a system where the binary exists.
This patch changes the detection logic to use the /sys/fs/selinux
directory, which is an in-memory filesystem created only when
selinux is really enabled.
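The new check amounts to something like this (the relabel function
name is illustrative, not the literal script):

    # Old: the binary can be installed while SELinux is disabled.
    if which semanage > /dev/null 2>&1; then
        relabel_filesystem
    fi

    # New: /sys/fs/selinux only exists when the running kernel
    # actually has SELinux enabled.
    if [ -d /sys/fs/selinux ]; then
        relabel_filesystem
    fi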
Change-Id: I008f8bbc9c8414ce948c601e3907e27764e15a52
Related-Bug: 1706386
tar is an essential package but nothing pulls it in explicitly. This
causes some issues in the openSUSE CI jobs, like the following:
"Failed to execute tar: No such file or directory", "Failed to write
file: Broken pipe", "Failed to retrieve image file. (Wrong URL?)",
"Exiting."], "stdout": "", "stdout_lines": []}
Just like 'sed', add 'tar' to the list of packages for the openSUSE
minimal builds.
Change-Id: Ia36e3d9fd6b78862a6831ba80b43d4614a349ca0
As described in the comments inline, on a selinux enabled kernel (such
as a centos build host) you need to have permissions to change the
contexts to those the kernel doesn't understand -- such as when you're
building a fedora image.
For some reason, setfiles has an arbitrary limit of 10 errors before
it stops. I believe we previously had 9 errors (this means 9
mislabeled files, which were just waiting to cause problems).
Something changed with F26 setfiles and it started erroring
immediately, which led to investigation. Infra builds, on
non-selinux Ubuntu kernels, would not have hit this issue.
This means we need to move this to run restorecon with a manual
chroot into the image.
I'm really not sure why ironic-agent removes all the selinux tools
from the image; it seems like an over-optimisation (it's been like
that since Id6333ca5d99716ccad75ea1964896acf371fa72a). Keep them so
we can run the relabel.
Change-Id: I4f5b591817ffcd776cbee0a0f9ca9f48de72aa6b
For builds inside the infra, we don't want to pack the cache
inside the image (as it might be different at the time the image
runs). In an opensuse-minimal image this saves about 10MB of image
size.
Change-Id: I5ecabd46f0a662798bda3e4468395ad8308d0055
As described in the comment and associated bugzilla, the behaviour of
setfiles has changed in Fedora 26 to require "-m" in situations where
labeled file-systems are mounted below non-labeled file-systems. Our
loopback/chroot system appears to trigger this nicely, leading to a
setfiles call that does nothing without this.
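Roughly, the added flag looks like this (the mount path and policy
file path here are illustrative, not the literal element code):

    # Without -m, Fedora 26 setfiles skips paths it considers to be
    # on non-labeled mounts, so the relabel silently does nothing
    # under our loopback/chroot setup.
    setfiles -m -r $TMP_MOUNT_PATH \
        $TMP_MOUNT_PATH/etc/selinux/targeted/contexts/files/file_contexts \
        $TMP_MOUNT_PATH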
Change-Id: I276c6f6a4fb44f4bea5004f6b4214f94757728ae
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
As described in the referenced bug, the dependency solver in yum
doesn't handle weak dependencies well and in some cases, such as
Fedora 26, can end up choosing coreutils-single (the busybox-esque
single binary) instead of actual coreutils, which then causes problems
with conflicting packages later.
Change-Id: I2907bf3b74c146986b483d52cc6ac437036330b4
On a system where the packaged pip/virtualenv is up-to-date with
upstream (such as Fedora 26 ... for now), we don't reinstall, which
then violates a bunch of assumptions later on. Force install.
Change-Id: I6ebcda0351997fa7e32f0e6e77a98b2c33764e3f
It turns out dnf argparse can't handle negative numbers without "=".
It's actually documented in the man page:
    --latest-limit <number> ... If <number> is negative skip <number>
    of latest packages. If a negative number is used use syntax
    --latest-limit=<number>
But who reads that :) This started failing with Fedora 26.
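For example (the package name is just illustrative):

    # Fails: dnf's argparse treats "-2" as an unknown option.
    dnf repoquery --latest-limit -2 kernel

    # Works: the negative value must be attached with "=".
    dnf repoquery --latest-limit=-2 kernel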
Change-Id: I884af94c07fa11b010f69863047a04711b14f21e
We expect LC_ALL for non-C locales to be working inside
images, so always install glibc-locale for openSUSE.
Change-Id: I8fe92773e377539070d9d9fe2960a6202bb80a18
In preparation for promoting the openSUSE jobs to voting ones we should
use the OpenStack mirrors. As such, the opensuse elements are modified
to make use of the DIB_DISTRIBUTION_MIRROR variable which is normally
exported by the openstack-ci-mirrors element.
Change-Id: Ie588c1c1eec13190cfb2ec718ba51f8c9878283f
We added the DIB_distro_DISTRIBUTION_MIRROR arguments with
I92964b17ec3e47cf97e3a3091f054b2a205ac768 as a way that we could
source a list of mirrors and then have the distro elements choose
which one applied to them.
However, this hasn't worked out to be so useful. The
openstack-ci-mirrors element is working as a mirror setup script -- it
translates the openstack CI mirror list variables into the generic
"DIB_DISTRIBUTION_MIRROR" as appropriate for each distro's build.
Also, it turns out there are other things that need to be done, such as
turning off gpg checking, which means the idea of "just export
variables" hasn't turned out to be valid ... you need actual code
involved to get it right.
AFAICT we never actually documented these, and they do not seem to be
in use [1]. They have caused considerable confusion when dealing with new
platforms as we try to keep consistency. Remove them.
[1] http://codesearch.openstack.org/?q=DIB_.*_DISTRIBUTION_MIRROR&i=nope&files=&repos=
Change-Id: Ifc4ab700631ffdfbe790068558f670f9a11dde5e
The code in mkfs correctly extends the command line with a '-n' for
vfat but does not currently do it for fat. This means that mkfs for
fat ends up with a '-L', which is what you'd use for filesystems like
ext[234].
This change just treats fat like vfat in the one place where this check
is required.
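The distinction amounts to something like this (sketched here in
shell; the variable names are illustrative and the real check lives
in the mkfs code path):

    case $FS_TYPE in
        fat|vfat)
            # mkfs.fat/mkfs.vfat take the volume label via -n
            MKFS_OPTS="$MKFS_OPTS -n $FS_LABEL"
            ;;
        *)
            # ext2/3/4, xfs etc. use -L
            MKFS_OPTS="$MKFS_OPTS -L $FS_LABEL"
            ;;
    esac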
Change-Id: If65dfd949acdadff33a564640fb42ea73026a786
Closes-Bug: #1703063
The purpose of the openSUSE element is to build openSUSE distribution
based images, so an additional community repo shouldn't be pulled into
the image. In addition, the dkms dependency is blacklisted for SUSE
in the dkms element anyway, so this should be a no-op.
Change-Id: I0aa06d9f4f110546032f910e3361840693d02de7
On Power systems, consoles should be added to the kernel command line
in the following order: 'console=tty0 console=hvc0'.
The first one is the graphical console and the last one is the serial
console. The kernel enables all the consoles passed on the
kernel command line; however, only the last one will receive
input/output during kernel boot. All the other consoles will be
enabled after boot.
Change-Id: I0069f608e0ab104d3778954e033fb82ed5ea7693
We replace the base resolv.conf with an "outside" copy so that
resolving works when we're in the chroot.
Installing the resolvconf package modifies the in-chroot resolv.conf
into a symlink (to /var/run) which it wants maintained in the final
image.
We have the existing "immutable" check for a created resolv.conf file,
but no equivalent for a symlink.
This adds a check to see if resolv.conf is a symlink and leaves it
alone if it is, assuming it has been re-created in the chroot.
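The added check is essentially (sketch; the saved-copy name is
illustrative):

    if [ -L $TMP_MOUNT_PATH/etc/resolv.conf ]; then
        # A package (e.g. resolvconf) re-created resolv.conf as a
        # symlink inside the chroot; leave it alone.
        :
    else
        # Restore the original file as before.
        sudo mv $TMP_MOUNT_PATH/etc/resolv.conf.ORIG \
            $TMP_MOUNT_PATH/etc/resolv.conf
    fi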
I have tested this with ubuntu-minimal+resolvconf with
dhcp-all-interfaces and the system seems to work with resolvconf
working correctly.
Change-Id: Idd5a26e9d55979bd951577d5b098ed4bfba91ad3
The python3 package actually contains some core modules (like the xml
one) which are not present in python3-base, which is what gets pulled
in by the python3-devel package. As such, it's best to have it
installed, similar to python-xml for python2.
Change-Id: I5cd5d1127ae62d6753c2ace44965179c5400bb9a
In order to support {CentOS,RHEL}7 for building cloud images we need to
handle the differences in grub packaging from Ubuntu. We also need to
populate the default location for cloud images for CentOS builds.
Change-Id: Ie0d82ff21a42b08c4cb94b7a5635f80bfabf684e
When a download redirector redirects to a broken mirror, timeout
quickly rather than waiting until the overall job is being timed out.
Change-Id: If7eb63d406aaf61f71aa9203cf708c474aa63fd0
The 'packages' variable already contains the packages we need so
use it instead of duplicating the packages.
Change-Id: Id22e1862f9654e66252d03a0fed9839cf004d750
Several people have popped up in IRC recently with failures in these
elements. Without Python 2.7 available in the image they are
unsupported (OpenStack hasn't supported it for a long time). Remove
these to avoid further confusion.
The centos/centos7 DISTRO split that has happened with centos-minimal
is unfortunate but I don't think it helps to rename centos7/rhel7 ATM.
To summarise: DISTRO=centos7 means the image-based build,
DISTRO=centos && DIB_RELEASE=7 means the minimal build.
In the future, I think it is important that the minimal builds and
image builds set the same DISTRO. This reflects that "upper" layers
shouldn't care about the exact building of the lower layers. I see
CentOS 8 going one of two ways:
1) the changes are so significant, we start separate centos8 /
centos8-minimal elements. They both set DISTRO=centos8 (and
DIB_RELEASE to point-release maybe?). This means we have to update
all "if DISTRO == centos || DISTRO == centos7" branches to also check
for "centos8". Evenually (!) "centos" goes away for versioned DISTRO
only
2) we restore centos element with DISTRO=centos and DIB_RELEASE=8, and
centos-minimal remains the same. This means we have to audit all "if
DISTRO == centos" calls to make sure they're appropriate for version 8
(stick a "&& DIB_RELEASE=7" on them all basically).
I'm not sure we can fully decide until we start to see exactly how the
distro switching/matching bits look, but (2) is consistent with Ubuntu
and probably the preferred solution.
Some "rhel" parts have been cleaned up. More could be done in
rhel-common, but given our lack of coverage of that I'd prefer to
leave it for now.
Change-Id: I6ea784116ef59ca22878c8512c963f29c815a00a
The image download tests have long been too unreliable for the gate.
We need to cache the base images similar to how devstack caches its
testing images. Let's move them to non-voting jobs for the time
being.
This means that the gate jobs are now all based on "-minimal" and are
using infra mirrors. Unfortunately, there is still some unreliability
because we currently have issues with infra mirrors being very slow
after AFS updates, leading to job timeouts. But we're on the right
path...
Also, I noticed we don't have tests of the "ubuntu" image-download
based tests, which were tacitly being tested by apt-sources before we
moved that to -minimal. Add simple tests for these.
Change-Id: Ie33ee49656872467ef68d753210032156bb6b2cb
On a system where python2 is not installed and /usr/bin/python is not
linked, the cleanup process will fail trying to invoke the python
script. Use the previously determined DIB_PYTHON_EXEC if it's available.
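i.e. something along the lines of (the script name is illustrative):

    # Fall back to plain "python" only when DIB_PYTHON_EXEC was never
    # determined earlier in the build.
    ${DIB_PYTHON_EXEC:-python} /usr/local/bin/some-cleanup-helper.py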
Change-Id: I128292808ccef92cc1803988b35caae5aa6fa541
This was previously defined as python2-devel (which is what rhel uses),
but the actual package name is python-devel. See:
https://software.opensuse.org/package/python-devel
Change-Id: Id61e5b05772d10c32b33d3e70cb64d5ebdcba6e4
I'm uncertain as to why this is using the "fedora" element for testing
... but it requires downloading the fedora .qcow on every test, which
has been shown to be unreliable. An easy thing to do is to switch it to
fedora-minimal; that will only involve downloads from local mirrors in
the gate.
Add redhat-rpm-config for minimal. I admit I have not fully gone
through why this is not pulled in. It's been an issue since
I459f2203fa145049dda185da952813118193d573 and there are all sorts of
bugs.
Change-Id: I37458e3926dae32a259bd5aa9efc645561b029a0
fedora/centos-minimal don't obey DIB_DISTRIBUTION_MIRROR currently. I
don't really want them to -- we want to be able to separate the
mirrors used during the build process from those embedded into the
final image. Add DIB_YUM_MINIMAL_BOOTSTRAP_REPOS which is a directory
with repo files to use during the install.
This introduces setup-gate-mirrors.sh which is intended to setup
repo/sources/whatever files in the openstack gate that point to the
local region mirror. It pulls the info from the mirror_info.sh script
on each CI node.
The openstack-ci-mirrors element is updated to export these variables,
and elements are updated to depend on it. Tests are restored.
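Usage from a mirror-setup script ends up looking roughly like this
(the directory path is illustrative):

    # Point the minimal yum/dnf bootstrap at .repo files for the local
    # region mirror, without baking that mirror into the final image.
    export DIB_YUM_MINIMAL_BOOTSTRAP_REPOS=/opt/dib-bootstrap-repos
    disk-image-create -o test-image centos-minimal vm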
Change-Id: I7604fc4d41cb1483be16b8d628a24e8fc764f515
This adds "openstack-ci-mirrors" element which performs various
settings to get builds using local mirrors. As a first step, we
convert ubuntu-minimal jobs
The main trick is that since infra mirrors are created with reprepro
they are not signed (they are recreated, not cloned, and not signing
is seen as a feature in that it deters external use). So we need to
instruct debootstrap to ignore signing and also turn it off for
in-chroot apt. Other than that, the existing DIB_DISTRIBUTION_MIRROR
works to redirect installs.
Remove "restricted" as it's not mirrored, and I don't think we want it
in here by default.
(I think DIB_DISTRIBUTION_MIRROR is a bit of an anti-pattern, because
it leaves the mirrors in the final image -- just because you use them
to build doesn't mean you want them at runtime. But we don't need
to fix that now, and we don't use any created images.)
This pauses fedora testing until the next change, which moves to using
local mirrors for testing on fedora/centos
Change-Id: I778bd05a1e615c27edf1c9f0a1409119a6b3a850
The gate is currently extremely unstable, and these two issues are
causing most of the problems. We need to commit them atomically so we
can get anything moving again.
---
The gate is very unstable downloading the ubuntu tarballs from
upstream at the moment. Move this to ubuntu-minimal which, in a later
change, will source files from our local mirror.
We need a caching mechanism for these large files to avoid this
instability. This is future work for the various image-based jobs.
---
Move debian to default skip lists
I don't know if it's mirrors being worked hard for the Stretch
release, but this is constantly failing the gate. I will move this to
the -nv extras job
I am working on having the voting job use local mirrors for
everything. Unfortunately debian infra mirrors don't have stretch yet
and we need to do some fiddling to get "stable" available. Once we
have all this, we can consider making it voting again.
Change-Id: Iaf7b3888ef06c7aef63cbf76a94b33f96bc9c5c2
We introduced the "settle" in
I90103b59357edebbac7a641e8980cb282d37561b thinking that maybe kpartx
had not finished writing the partition. This probably wasn't a bad
first assumption, since we used to have this -- but it seems
insufficient.
The other failure here seems to be that kpartx hasn't actually seen the
updated partition table in the image, so it has correctly (in its
mind) not mounted the partition.
Looking at strace of fdisk run manually on a loopback, it will do a
fsync on the raw device after writing and then a global sync as it
exits.
This change replicates that: we flush and fsync in mbr.py in the exit
handler after writing the partition, before closing the file (I've
updated one of the unit tests to double-check the call). In the
partitioning.py caller we execute a sync call too.
Since it does seem unlikely that the "-s" option of kpartx is not
working, I've removed the udev settle work-around too.
Change-Id: Ia77a0ffe4c76854b326ed76490479d9c691b49aa
Partial-Bug: #1698337
Debian Stretch released as stable recently, and the init system is
less tightly specified in the base dependencies (for some info, see
[1]). It seems, probably unintentionally, that in the previous
release systemd-sysv was brought in by debootstrap, but that is no
longer happening.
Add systemd as an early dependency of debian-minimal.
Remove the package-installs.yaml as that happens too late (other
things need to know the init system to write out service files, etc
and probe for systemd utils before package-installs). As mentioned, I
do not believe the "only install systemd on testing" idea was actually
working here, because it was being brought in during the initial
debootstrap.
Update some documentation to explain what's going on
[1] https://lists.debian.org/debian-boot/2015/05/msg00156.html
Change-Id: Id67c0cf08728407d234976f9807d3bd71d12f758
There was a race in diskimage-builder where the mkfs call after a
kpartx -avs for the loop device would fail because the device was
not yet ready. This adds a udevadm settle call after the kpartx
to make sure the udev event queue has cleared.
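i.e. something like (the device variable is illustrative):

    # Map the image partitions, then wait for udev to finish processing
    # events so the device nodes exist before mkfs runs.
    kpartx -avs $LOOP_DEV
    udevadm settle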
Change-Id: I90103b59357edebbac7a641e8980cb282d37561b
Closes-Bug: #1698337
This adds a devstack-inspired output filter to standardise
timestamping.
Currently, the python tools always emit timestamps (timestamping is
set up in logging_config.py) but the surrounding bash does not.
We have extra timestamps added in run_functests.sh for our own
purposes to get the bash timestamps; but this ends up giving us
double-timestamps for the python bits. Additionally, callers such as
nodepool capture our output and put their own timestamps on it, and
again have the double-timestamps.
This uses a lightly modified outfilter.py from devstack to standardise
this.
All output is run through this filter, which will timestamp it. I
have removed the places where we double-timestamp -- logging_config.py
and the prefix in dib-run-parts.
An env option is added to turn timestamps off completely (does not
seem worth taking up a command-line option for). Callers like
nodepool can set this and will just have their own timestamps as
they collect the lines.
Since all logging is going through outfilter, it's easy to add a
--logfile option. I think this will be quite handy; personally I'm
always redirecting dib runs to files for debugging.
I've also added a "quiet" option. I think this could be useful in
run_tests.sh if we were to start logging the output of each test to
individual files. This would be much easier to deal with than the
very large log files we get (especially if we wanted to turn on
parallel running...)
Change-Id: I202e1cb200bde17f6d7770cf1e2710bbf4cca64c
Using the newly exposed variables from the prior change, install the
ppc bootloader to the boot partition, not the underlying loopback
device.
Change-Id: I0918e8df8797d6dbabf7af618989ab7f79ee9580
Currently we only export "image-block-device" which is the loopback
device (/dev/loopX) for the underlying image. This is the device we
install grub to (from inside the chroot ...)
This is ok for x86, but is insufficient for some platforms like PPC
which have a separate boot partition. They do not want to install to
the loop device, but do things like dd special ELF files into special
boot partitions.
The first problem seems to be that in level1/partitioning.py we have a
whole bunch of different paths that either call partprobe on the loop
device, or kpartx. We have _all_part_devices_exist() that gates the
kpartx for unknown reasons. We have detach_loopback() that does not
seem to remove losetup-created devices. I don't think this does
cleanup if it uses kpartx correctly. It is extremely unclear what's
going to be mapped where.
This moves to us *only* using kpartx to map the partitions of the loop
device. We will *not* call partprobe and create the /dev/loopXpN
devices and will only have the devicemapper nodes kpartx creates.
This seems to be best practice. Cleanup happens inside partitioning.py.
Deeper thinking about this, and more cleanup of the
variables, will be welcome.
This adds "image-block-devices" (note the extra "s") which exports all
the block devices with name and path. This is in a string format that
can be eval'd to an array (you can't export arrays).
This is then used in a follow-on
(I0918e8df8797d6dbabf7af618989ab7f79ee9580) to pick the right
partition on PPC.
Change-Id: If8e33106b4104da2d56d7941ce96ffcb014907bc
Currently we pass a reference to a global "rollback" list to create()
to keep rollback functions. Other nodes don't need to know about
global rollback state, and by passing by reference we're giving them
the chance to mess it up for everyone else.
Add a "add_rollback()" function in NodeBase for create() calls to
register rollback calls within themselves. As they hit rollback
points they can add a new entry. lambda v arguments is much of a
muchness -- but this is similar to the standard atexit() call so with
go with that pattern. A new "rollback()" call is added that the
driver will invoke on each node as it works its way backwards in case
of failure.
On error, nodes will have rollback() called in reverse order (which
then calls registered rollbacks in reverse order).
A unit test is added to test rollback behaviour
Change-Id: I65214e72c7ef607dd08f750a6d32a0b10fe97ac3
Keep track of the mount-point ordering in a state variable, rather
than a global. This path is tested by existing unit tests.
Note a prior change inserted the MountNode objects directly into a
list in self.state, which makes sorting quite easy as it can just
implement __lt__. Unfortunately we still json dump the state, and
thus we can't have arbitrary objects in it (future work may be to
check keys inserted into the status object...). So we have to do a
bit of wrangling with tuple lists and comparison functions here, but
it's not too bad.
Change-Id: I0c51e0c53c4efdb7a65ab0efe09a6780cb1affa8
As we add file-systems, add them to global state and check the labels
are unique. Add a unit test and remove the old global value.
Bonus fixup to the length check, and a test for that too.
Change-Id: I0f5a96f687c92e000afc9c98a26c49c4b1d3f28d
With I468dbf5134947629f125504513703d6f2cdace59 each node has a
reference to the global state object. This means it gets pickled into
the node-list, which is loaded for later calls. There is no need to
reload state.json and pass it in for later cmd_* calls, as the
nodes can see it via the unpickled self.state.
Change-Id: I9e2f8910f17599d92ee33e7df8e36d8ed4d44575
Making the global state reference a defined part of the node makes
some parts of the block device processing easier and removes the need
for other global values.
The state is passed to PluginNodeBase.__init__() and expected to be
passed into all nodes as they are created. NodeBase.__init__() is
updated with the new paramater 'state'.
The parameter is removed from the create() call as nodes can simply
reference it at any point as "self.state".
This is similar to 1cdc8b20373c5d582ea928cfd7334469ff36dbce, except it
is based on I68840594a34af28d41d9522addcfd830bd203b97 which loads the
node-list from pickled state for later cmd_* calls. Thus we only
build the state *once*, at cmd_create() time as we build the node
list.
Change-Id: I468dbf5134947629f125504513703d6f2cdace59
Currently the later cmd_* calls -- umount, cleanup, delete -- all
recreate the node graph by parsing the config file using
create_graph()
There is some need, however, to have a sense of global state when
building the node list. The problem is, this is a one time operation
-- we do not want to rebuild that state for these later calls (see the
"loaded" checks in proposed
Ic3b805f9258128d5233b21ff25579c03487c7fcc).
An insight here seems to be that these cmd_* calls do not actually
want to re-parse the configuration file and rebuild the node list;
they just want to walk the node list in reverse with the state as
provided after cmd_create().
So, rather than re-creating the node list, we might as well just
pickle it, save it to disk along side the state dictionary dump and
reload it for cmd_*.
After this, I think we can safely have PluginBase.__init__() be passed
the state. We will now know that this will only be called once,
during initial creation.
Change-Id: I68840594a34af28d41d9522addcfd830bd203b97
You can't pickle a static method reference which complicates being
able to save the node graph when the "rollback" call-back wants to
hold references to these functions. The outer module (localoop.py) is
small anyway, so from an organisation point of view the difference is
minimal. Since these are really only called with parameters from the
containing class, they could be class methods with no parameters, at
the small expense of having to fiddle the mbr test-case a bit.
Change-Id: I6f9592a4295abe1b41294b79828bc2f3c2da01c6
The supported ppc ${ARCH} is "ppc64el" (at least in the gate testing
...) so move the file to that, so it gets picked up by
block_device_create_config_file.
Change-Id: I9273f35cdbfb0a62404461cbc1df9b2a92155fb0
Something seems to be going on with the ppc matching in the gate test.
Small updates to see what's going on...
Change-Id: Ie48cd4ce1f983a58932a577a43746240f6866936
Because we append the function/line info after debug lines in the gate
logs, the pretty-print ends up not looking all that pretty. Pad it.
Change-Id: Ice013428342614300cd51e8b7be56e79b75b31fc
This is code motion with some small changes to make follow-on's
easier.
test_blockdevice_mbr.py is moved alongside the other tests. It is
modified slightly to use the standard base class and remove a lot of
repeated test setup; a fixture is used for the tempdir (so it doesn't
have to be torn-down, and is removed properly on error) and the partx
args are moved into the setUp() so each test doesn't have to create
it. No functional change. It is renamed to test_mbr.py for shortness.
test_blockdevice_utils.py is merged with existing test_utils.py. No
change to the tests.
test_blockdevice.py is removed. It isn't doing anything currently; to
work it will need to take an approach based more on mocking of calls
that require elevated permissions. It's in history if we need it.
Change-Id: I87b1ea94afaaa0b44e6a57b9d073f95a63a04cf0
assertRaisesRegexp was renamed to assertRaisesRegex in Py3.2.
For more details, please check:
https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaisesRegex
Change-Id: I705c958c0dbf1daa409ed29ccbc038426298c306
Closes-Bug: #1436957
package-installs.yaml is installing python-dev, not python2-dev,
so we need to adjust the mapping accordingly.
In addition, zypper-minimal used a dpkg-specific package name,
while there is a SUSE equivalent (and zypper-minimal is anyway
SUSE family specific)
Change-Id: Ia9dd061fa46a514781808d62e5e93b03f75c6745
Ubuntu 12.04 LTS reached its regular End of Life on April 28, 2017.
Depends-On: I5e145095a10db112bb27516bfe652d2cdc052a61
Change-Id: I64af4c5183d77a75dcd062895d19b0a1330c8da8
While plugins treat the state as just a dictionary, it's nice for the
driver functions to keep state related functions encapsulated in the
state object singleton. Wrap the internal state dictionary so we can
pass the BlockDeviceState directly without dereferencing.
Change-Id: Ic0193c64d645ed1312f898cbfef87841f460799c
Currently we keep a global list of mount-points defined in the
configuration and automatically setup dependencies between mount nodes
based on their global "mount order" (i.e. higher directories mount
first).
The current method for achieving this is roughly to add the mount
points to a dictionary indexed by mount-point, then at "get_edge()"
time build the sorted list ... unless it has already been built,
because this gets called for every node.
It seems much simpler to just keep a sorted list of the
MountPointNode objects as we add them. We don't need to implement a
sorting algorithm then; we can just use sort() and implement __lt__
for the nodes.
I believe the existing mount-order unit testing is sufficient; I'm
struggling to find a valid configuration where the mount-order is
*not* correctly specified in the configuration graph.
Change-Id: Idc05cdf42d95e230b9906773aa2b4a3b0f075598
This element has not been functioning correctly for some time due to
an incorrect path to select-boot-kernel-initrd (should be /usr/local/bin).
The dracut-regenerate element can be used to regenerate dracut ramdisks
and is more flexible than this element.
Change-Id: I33d555ffd4a92b2948b2ea4a66b151f0422ccb8c
Closes-Bug: #1688546
This patch removes the ccache handling from the base element. For
almost all systems this was never used at all.
This is working towards the removal of the base element from DIB
Change-Id: Ieb16ef612ebd98470993dcd6f55b3a22d37084ba
Signed-off-by: Andreas Florath <andreas@florath.net>
In python3, the standard out data returned by
subprocess.Popen.communicate() will in most cases be bytes rather than a
string and must therefore be decoded.
Without this fix we hit the following error:
TypeError: a bytes-like object is required, not 'str'
Change-Id: I6d75f867ebfdb925970c3397175214b9050d7632
Closes-Bug: #1694463
Currently openSUSE 42.3 has entered feature freeze mode
so it is a good point in time to verify that 42.3 builds
are working successfully. Also test opensuse-minimal for
platforms that support it (need working zypper package)
Change-Id: I4c613e1e68cb7375c29d544bbf70b5da9bf21414
A couple of things going on, but I think it makes sense to do them
atomically.
The NodeBase.create() argument "results" is the global state
dictionary that will be saved to "state.json", and re-loaded in later
phases and passed to them as the argument "state". So for
consistency, call this argument "state" (this fits with the change
to start building the state dictionary earlier in the
PluginBase.__init__() calls).
Since the "state" is a pretty important part of how everything works,
move it into a separate object. This is treated as essentially a
singleton. It bundles it nicely together for some added
documentation [1].
We move instantiation of this object out of the generic
BlockDevice.__init__() call and into the actual cmd_* drivers. This
is because there's two distinct instantiation operations -- creating a
new state (during cmd_create) and loading an existing state (other
cmd_*). This is also safer -- since we know the cmd_* arguments are
looking for an existing state.json, we will fail if it somehow goes
missing.
To more fully unit test this, some testing plugins and new
entry-points are added. These add known state values which we check
for. These should be a good basis for further tests.
[1] as noted, we could probably do some fun things in the future like
make this implement a dictionary and have some safety features like
r/o keys.
Change-Id: I90eb711b3e9b1ce139eb34bdf3cde641fd06828f
As described in PEP 282 [1], the variable parts of a log message
should be passed in as parameters. That way, the parameters
are evaluated only when they need to be.
This patch fixes (unifies) this for DIB.
A check using pylint was added so that this way of passing parameters
to the logging subsystem is enforced in future. As a blueprint, a similar
(stripped-down) approach from cinder [2] was used.
[1] https://www.python.org/dev/peps/pep-0282/
[2] https://github.com/openstack/cinder/blob/master/tox.ini
Change-Id: I2d7bcc863e4e9583d82d204438b3c781ac99824e
Signed-off-by: Andreas Florath <andreas@florath.net>
This is an initial introduction of pylint with a basic indent checker.
Small issues corrected. Job added to gate with
Ib554a284e92583cc1d6a5c2219b3922852ca4c73
Change-Id: I7e24d8348db3aef79e1395d12692199a1f80161a
Co-Authored-By: Andreas Florath <andreas@florath.net>
This is consistent with how dpkg based images are configured
and minimizes the nodepool images drastically (avoid installing
texlive for example)
Change-Id: I98fb31bc0e06869e9770fae3dbd62f0d86acb879
This was suggested in a review comment in
I8a5d62a076a5a50597f2f1df3a8615afba6dadb2. It works out quite nicely
because the BlockDevice() driver now doesn't need to know anything
about stevedore or plugins, and just works on the node list. It also
simplifies the unit testing by not having to call create_graph through
a BlockDevice object.
Change-Id: I98512f6cf42e256d2ea8225a0b496d303bf357b8
This completes the transitions started in
Ic5a61365ef0132476b11bdbf1dd96885e91c3cb6
The new file plugin.py is the place to start with this change. The
abstract base classes PluginBase and NodeBase are heavily documented.
NodeBase essentially replaces Digraph.Node
The changes in level?/*.py make no functional changes, but are just
refactoring to implement the plugin and node classes consistently.
Additionally we have added asserts during parsing & generation to
ensure plugins implement PluginBase, and that get_nodes() is always
returning NodeBase objects for the graph.
Change-Id: Ie648e9224749491260dea65d7e8b8151a6824b9c
This switches the code to use networkx for the digraph implementation.
Note that the old implementation specifically isn't removed in this
change -- for review clarity. It will be replaced by a base class
that defines things properly to the API described below.
Plugins return a node object with three functions
get_name() : return the unique name of this node
get_nodes() : return a list of nodes for insertion into the graph.
Usually this is just "self". Some special things like partitioning
add extra nodes at this point, however.
get_edges() : return a tuple of two lists; edges_from and edges_to
As you would expect, the first is a list of node names that point to
us, and the second is a list of node names we point to. Usually
this is only populated as ([self.base],[]) -- i.e. our "base" node
points to us. Some plugins, such as mounting, create links both to
and from themselves, however.
Plugins have been updated, some test cases added (error cases
specifically)
Change-Id: Ic5a61365ef0132476b11bdbf1dd96885e91c3cb6
This moves to a more generic config parser that doesn't have plugins
parsing part of the tree.
I understand why it ended up that way; we have the "partitions" key
which has special semantics compared to other keys and there was a
desire to keep it isolated from the core tree->graph code. But this
isn't really isolated; you have to reverse-engineer several
module-crossing boundaries, extra classes and repetitive recursive
functions.
Ultimately, plugins should have access to the node graph, but not
participate in configuration parsing. This way we ensure that plugins
can't invent new methods of configuration parsing.
Note: unit tests produce the same tree -> graph conversion as the old
method. i.e. this is not intended to have a functional change.
Change-Id: I8a5d62a076a5a50597f2f1df3a8615afba6dadb2
Add a range of unit-testing for configuration parsing, graph
generation and mount-point generation. Unfortunately there are some
global variable hacks and some stubs, but it's a start.
Change-Id: I9e4f950c2c2ea656fc0c1a14594059fb4c62fa35
There's an increasing amount of pydoc based documentation. Output the
module reference and link it into the developers' main page.
One fixup to the rst was needed.
Change-Id: I1d43a1fe1c735eb4559e3d2b40834d1c8115c586
This is a follow-on to 57ef187632.
There are two things going on here: DIB_MANIFEST_IMAGE_DIR is *outside*
the chroot on the build host. We copy the files here for posterity, I
guess. MANIFEST_IMAGE_PATH is *inside* the chroot and holds the files
we want to ensure are locked to root.
The prior change modified the permissions on DIB_MANIFEST_IMAGE_DIR.
So the first time you build, it works -- then the second time,
assuming you're using the same output filename, it hits the root-owned
manifest directories and causes a build failure.
I have built with this and checked that the manifest files in the
image are locked to root:
$ virt-ls -a ./test.qcow2 -l /etc/dib-manifests
total 32
drwxr-xr-x 2 0 0 4096 May 24 03:39 .
drwxr-xr-x 53 0 0 4096 May 24 03:39 ..
-rw------- 1 0 0 15236 May 24 03:39 dib-manifest-dpkg-test
-rw------- 1 0 0 35 May 24 03:39 dib_arguments
-rw------- 1 0 0 137 May 24 03:39 dib_environment
Related-Bug: #1671842
Change-Id: I08319d0b5fcc461d40fe0be8427dcf0e37ad21e6
Move Partition() object creation into the actual Partition object,
rather than having the logic within the Partitioning() object
Change-Id: I833ed419a0fca38181a9e2db28e5af87500d8ba4
Split Partition() into its own file for clarity. This will be
followed up by reducing the dependence between Partitions and Partition.
Change-Id: I860f6a1787c0e4fe99f93919ac37cf7d80bfaae9
Moving the exception didn't cause problems in
I925ed62bdc808f0e07862f6e0905e80b50fbe942, but in later changes where
we split blockdevice.py up a bit more, we can get a bit tangled with
circular imports.
Change-Id: I8297483f64c4e1deecd5ec88ee40e9198bb83589
Because in some cases (e.g. partitioning) the order is needed,
add weights to the digraph to get a (somewhat) stable
topological sort.
Change-Id: I5ef1acc6338ac93c593faa0eafe26cbed42ed887
Signed-off-by: Andreas Florath <andreas@florath.net>
Per [1] this is the "official" CDN mirror, which I think is the most
appropriate for the default. I think this addresses the concerns with
the httpredir service, which I don't think ever quite got out of beta.
[1] https://wiki.debian.org/DebianGeoMirror
Change-Id: I55f2a00b8bbb0f0a20d3be229e4c2c32a7b69057
Instead, use the bash built-in "type" to ensure a command exists. Since
which is an external dep, things can fail oddly in a constrained
environment.
Also add a dib-lint test for this.
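e.g. (the probed command is just an example):

    # "which" is an external package that may be missing in a minimal
    # chroot; "type" is a bash builtin and is always available.
    if ! type -p qemu-img > /dev/null; then
        echo "qemu-img not found" >&2
        exit 1
    fi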
Change-Id: I645029f5b5bfe1198c89ce10fd3246be8636e8af
Signed-off-by: Jesse Keating <omgjlk@us.ibm.com>
This new element will allow regenerating dracut
on the produced images, to enable different modules. It
relies on a yaml blob to specify modules and packages
needed. It defaults to installing lvm and crypt.
Change-Id: I292fb70cde41ee6053b7b81a67931bcdaaa6d664
"log and throw" is arguably an anti-pattern; the error message either
bubbles up into the exception, or the handler figures it out. We have
an example where this logs, and then the handler in blockdevice.py
catches it and logs it again.
Less layers is better; just raise the exception, and use log.exception
to get tracebacks where handled.
Change-Id: I8efd94fbe52a3911253753f447afdb7565849185
Manifest files can reveal sensitive information and therefore should
have restrictive permissions.
Change-Id: I64d6c830217a7d8b0172df2dc774079dcd1e2a68
Related-Bug: #1671842
A majority of the "plugins" aren't implementing the plugin class.
Clearly we need some refactoring of the ideas here. Remove for
simplicity.
Change-Id: If399a371b171f4fd17cfa5856fe55daca4c86e60
To avoid failures with double unmount, skip unmounting
the mountpoints that are managed by block device.
Change-Id: I228779eb9bf544a27a53e5017c87573023fd375a
With the new block device definition, where the content of the image
can be mounted on different partitions, it is not enough to
execute setfiles on the root directory. Instead, expose
all the mountpoints of the image and apply setfiles on them.
Change-Id: I153f979722eaec49eab93d7cd398c5589b9bfc44
This patch finalizes the block device refactoring. It moves the three
remaining levels (filesystem creation, mount and fstab handling) into
the new python module.
Now it is possible to use any number of disk images, any number of
partitions, and to have them mounted to different directories.
Notes:
* unmount_dir : modified to only unmount the subdirs mounted by
mount_proc_sys_dev(). dib-block-device unmounts
$TMP_MOUNT_PATH/mnt (see I85e01f3898d3c043071de5fad82307cb091a64a9)
Change-Id: I592c0b1329409307197460cfa8fd69798013f1f8
Signed-off-by: Andreas Florath <andreas@florath.net>
Closes-Bug: #1664924
The args argument was only used to find the symbol for the getval
command. Have the command pass in the symbol to find directly. We
can therefore remove the args parameter from the BlockDevice() creation.
Change-Id: I8e357131b70a00e4a2c4792c009f6058d1d5ae9e
Move argument parsing to subparsers, rather than positional arguments.
This better reflects the tool's role as a driver and allows
sub-commands to deal with arguments in a natural way.
Change-Id: Iae8c368e0f3fe47abfddb9e0a1558bd5b3423aee
I accidentally dropped the clearing of this file when it moved to
cmd.py during rebase of I1919f6e865acae14ee95cd025c9c7b75ca266a9c
Change-Id: Ibe9fcde594770cb51c732cc253987308dc038083
DIB_BLOCK_DEVICE_PARAMS_YAML should be exported, and the
dib-block-device will take this as the value of --params. Remove this
to simplify the command-line
Change-Id: I6764ed223ecd36f9d24e19f164b6a927380b410f
This creates a BlockDeviceCmd object to hold the main() function.
This doesn't really do anything different right now, but sets a base
for using argparse subparsers to handle the command-line
Change-Id: I4acf95ff4d554a3b4e7e2244ab1706631b98458f
This moves the YAML parameter parsing into the command-line driver.
It makes the argument optional so it can be taken from the environment
variable directly. The parsed YAML is passed to the BlockDevice
object.
Change-Id: I6fa5e5b7d1fccfc7cf47d6e4a1fa6e560734680d
To avoid any confusion, commands passed to exec_sudo() should be a
list of "str"s. Log a message if we see unicode issues.
This also adds a debug trace of all output. stderr is captured.
This is modified to raise CalledProcessError on failure, like
check_call(). Calls that are ok to fail will need to explicitly catch
and ignore this.
The two calls that we expect to fail are wrapped.
We wish to try rolling back if one of these commands raises an
exception. Modify the create handler to initiate rollback on all
exceptions.
Change-Id: Iee4fa41ffaf243e4728bf3a5eeec5c8fa8d2dadc
The await function is essentially a non-standard check_output call.
Let's use standard calls to increase maintainability.
Change-Id: I2c25e1cd7122791fcaa86b46bd801e661471bc9e
As this method can be introduced without any dependency,
provide it in an independent change to simplify reviews.
This is a partial refactor based on
I592c0b1329409307197460cfa8fd69798013f1f8
Change-Id: Idaf3d2b3b3e23d0b9d6bc071d67b961a829ae422
Co-Authored-By: Andreas Florath <andreas@florath.net>
This is a partial refactor from change
I592c0b1329409307197460cfa8fd69798013f1f8
Change-Id: I8822e68e41c4ebd47eea9ffed4557efc130a7bf7
Co-Authored-By: Andreas Florath <andreas@florath.net>
Add a new method in the block device library called
exec_sudo, so it can be reused.
This is a partial refactor of change
I592c0b1329409307197460cfa8fd69798013f1f8
Change-Id: Id621f6d029e1275a35c4fd3f19b57c8518076134
Co-Authored-By: Andreas Florath <andreas@florath.net>
As part of the final steps, refactor the bits belonging
to block device and functions. This is a partial refactor
from I3600c6a3d663c697b59d91bd3fbb5e408af345e4
Change-Id: I7aa4fe0466e44846d8fa3194575d446fe4b5b2e6
Co-Authored-By: Andreas Florath <andreas@florath.net>
Introducing the refactors of the block device to allow a tree-like
configuration, and start using it for the partitions level.
Based on patch I3600c6a3d663c697b59d91bd3fbb5e408af345e4
Change-Id: I58bb3c256a1dfd100d29266571c333c2d43334f7
Co-Authored-By: Andreas Florath <andreas@florath.net>
It seems that the redhat nodepool job is quite reliably getting a
"floating point" error during centos image build. This happens after
03-yum-cleanup which is pruning the locales. This might be a
red-herring, since the logs are full of
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (C.UTF-8)
I think in our recent de-puppetisation of hosts, something might have
changed that is setting LC_ALL=C.UTF-8 for the jenkins user, at least
on Ubuntu. This is a problem for centos, as it doesn't have a C.UTF-8
locale. I then think using the invalid locale is what leads to the
floating-point error when doing some maths in dib-run-parts to
calculate runtimes.
We are currently overriding LANG, but we really want LC_ALL to ensure
this applies globally.
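i.e. roughly (the specific locale value shown is illustrative):

    # LC_ALL overrides LANG, so an inherited LC_ALL=C.UTF-8 from the
    # build host would still leak through if we only set LANG.
    export LC_ALL=C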
Change-Id: I8e7cae093c4b32e0d20b73ae0086f14c7cc6a9cb
Add a new getval call that allows retrieving values
from the block device. Also isolate the block device
information into a 'blockdev' dictionary entry, to better
return it with the getval command.
This is a refactor from the original code at
I3600c6a3d663c697b59d91bd3fbb5e408af345e4.
Change-Id: I93d33669a3a0ae644eab9f9b955bb5a9470cadeb
Co-Authored-By: Andreas Florath <andreas@florath.net>
The original approach was to pass each and every command
line parameter to the block device. As the block device
functionality gets extended, this is no longer practical.
Instead of passing in all the parameters separately this patch
collects these in a YAML file that is passed in to the block device
layer.
Change-Id: I9d07593a01441b62632234468ac25a982cf1a9f0
Signed-off-by: Andreas Florath <andreas@florath.net>
Some package updates are more complex and require things like --backtrack=99 to
be passed to emerge. We also try harder to ensure the system is in a consistent
state as a last step.
Change-Id: Ia5d3514e8b2a6cb2d656ade997cebb798d9c0a47
With 8e822768f9 we added the ability to
disable the EPEL repository; however, we need yum-utils to use
yum-config-manager.
Change-Id: Iea445f84494fd9a89fd93e9b35f920eb5e55211d
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Recent changes in the default configuration of cloud-init in Ubuntu
cause warnings when the Ec2 datasource is used on non-Amazon clouds,
see https://bugs.launchpad.net/cloud-init/+bug/1660385
We explicitly select the previous behavior when an Ec2 datasource is
desired.
Change-Id: Iebad8f6c0017fe08013dd5fe667c6132158b71cd
Closes-bug: 1683038
We only need dib-python when we build the image; there is no need to
leak it to
the final product. Remove it in cleanup.d outside the chroot so
nothing can be using it.
Change-Id: I1e229caad7968fb3ab8e44ecdda427e174088d2d
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
If DIB_PYTHON_VERSION is < 3 on the !redhat path, that means we're on
an older platform that may not have python3-virtualenv packages. Skip
install.
Ensure the order of operations happens by forcing the installs
Also add a note about limited platform support (patches welcome :)
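The guard amounts to roughly this (sketch; the install command shown
is illustrative):

    if [ "${DIB_PYTHON_VERSION}" -lt 3 ]; then
        # Older platforms on the python2 path may have no
        # python3-virtualenv package at all.
        echo "Skipping python3-virtualenv install"
    else
        install-packages python3-virtualenv
    fi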
Change-Id: I18412767f0ebf946d557a0a126285369e96af159
Recent changes in project-config have shown that we leave the system
in an inconsistent state when installing from source. On fedora, we
will have installed the python2 packages, but then used $DIB_PYTHON to
install python3 pip from source!
This tries to clarify the situation. As described in the document,
with package installs, we just install the $DIB_PYTHON packaged
versions.
Source installs want to take over the global namespace. This is the
price you pay for running the latest versions outside package managers
:) The only sane thing seems to be for us to normalise python2 &
python3 versions of pip, setuptools and virtualenv and then hacking
things such that "/usr/bin/pip" and "/usr/bin/virtalenv" remain
defaulted to python2 versions.
Documentation is added
Change-Id: Ibc6572b89e256d1f48b7fe7c672b8b9524dc704f
Currently we install pip/virtualenv with "/usr/local/bin/dib-python".
This means that every time you create a virtualenv, the python
interpreter inside it is called "dib-python" which is confusing.
Add an env var DIB_PYTHON that points directly to the interpreter
available during the build, for use when running scripts.
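so a script run during the build can do e.g. (the path is
illustrative):

    # Use the real build-time interpreter rather than the dib-python
    # shim, so any virtualenv it creates gets a normally named python.
    ${DIB_PYTHON} /tmp/in_target.d/some-helper.py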
Change-Id: I88ad3c9eb958d58db4631d9b27bc2c592f970345
This change move "do_extra_package_install" from pre-install to install
phase.
Extra packages are added by user request using the flag "-p", This
package should not be something the elements depend on.
The reason behind this patch is to move the extra package install to
a proper phase, Also more reasonable if base element run package update
to be before we install extra packages.
Change-Id: I68cc773aba9aa01743f0dda9f4e635e4cac2a282
Apt gets confused if it talks to a mirror with an older index than the
index currently cached by apt. This can happen when image builds use a
newer index than the booted image. Avoid these problems entirely by
removing those index caches at the end of image building.
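The cleanup boils down to something like this, run inside the image
at the end of the build (sketch; the exact element code may differ):

    # Drop the package index caches fetched during the build so the
    # booted image starts from a clean "apt-get update".
    rm -rf /var/lib/apt/lists/*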
Change-Id: I245d516ee8a44831b2c29612b782bad555c48a3f
Co-Author: Clark Boylan <clark.boylan@gmail.com>
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
This patch removes three near-copies of the debootstrap documentation
and fixes some documentation aspects.
Change-Id: Ief7794f5c1abad73788c063af6c862472cd34744
Signed-off-by: Andreas Florath <andreas@florath.net>
The current output for package-installs-v2 is inscrutable [1]
The problem starts with process_output() which is not capturing
stderr. This means that any stderr output is dislocated from any
stdout output around it. This is *really* confusing as you get a
bunch of seemingly meaningless stderr output from any calls before you
see any stdout (e.g. in [1] you can see random yum error output that
should have been with the yum call)). The simplest thing to do is to
redirect stderr to stdout which keeps everything in sync.
This causes a slight problem, however, because pkg-map outputs both
status information and errors on stderr. To work around this but
maintain compatibility, we add a "--prefix" argument that prepends
mapped packages from pkg-map with a value we can match on. The
existing status/debug output from pkg-map is low-value; modify the
call so that it will be traced only at higher debug levels (e.g. -x
-x).
The current loop is also calling pkg-map for every package in every
element (this is why in [1] the same message is repeated over and
over). This is unnecessary; it only needs to call pkg-map once for each
element, giving the package list as the arguments. Create package
lists by element and pass those to pkg-map.
As a cleanup, there is no point in printing e.output if the
process_output fails for the install because we are already tracing
it; i.e. the output, even for failures, is already in the logs.
Printing it again just duplicates the output.
[2] is an extract showing what I feel is a much more understandable
log output for a fairly complex install.
[1] http://paste.openstack.org/show/595118/
[2] http://paste.openstack.org/show/595303/
Change-Id: Ia74602a5d2db032a476481caec0e45dab013d54f
DIB_PYTHON_EXEC was added in I5fab0e192c3a2dad8f60e821c184479e24e33bcd
to export the python that disk-image-create is running under. If dib
is installed with python3 but then just calls "python" (python2) to run
sub-scripts, it fails to find itself.
This has been tested in devstack gates that use py3x
Change-Id: Ia1028972bfc0517b468b279aab9decdbcd7424ca
If the path is missing, unmount_dir currently exits with an error which
unintentionally aborts cleanup efforts early. This change makes
unmount_dir idempotent by exiting successfully if a directory doesn't
exist.
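i.e. an early-return guard along these lines (sketch of the function,
not the literal code):

    function unmount_dir {
        local dir="$1"
        if [ ! -d "$dir" ]; then
            # Nothing can be mounted under a path that does not exist;
            # treat it as already unmounted instead of failing cleanup.
            return 0
        fi
        # ... existing logic: unmount everything below $dir ...
    }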
Change-Id: I1491b4344e8569ecb2833f44baee445a89a39d61
The dib-run-parts element was copying our internal version of
dib-run-parts into /usr/local/bin to be used for running scripts inside
the target chroot. However, it never cleaned up after itself. This
means all images were left with an unmanaged local install of
dib-run-parts.
This copies dib-run-parts into the hooks directory of the chroot and
runs it from there. It is cleaned up automatically on the exit path.
The dib-run-parts element is no longer required and it has been
removed from all dependencies. It is left with a deprecation notice
in the README. For compatibility we convert it to simply install
dib-utils.
Codesearch shows no users depending on this unintentional implicit
install. Note os-refresh-config depends on dib-utils and thus will
have an explicitly installed version.
Partial-Bug: #1673144
Change-Id: Ia2e96c00a4246c04beb96c17f83b8aefb69219ca
It was an oversight during v2 development for dib to start providing
dib-run-parts. The intention was for dib to use a vendored
dib-run-parts directly from $_LIB and have no dependencies on
dib-utils at all. By exporting dib-run-parts, we created an
unintentional conflict with the dib-utils package which provides the
same script.
Tools that depend on dib-utils are unaffected by this
(os-refresh-config).
The only tool that installs diskimage-builder and then assumes
dib-run-parts is available in the path is instack. I have proposed
Ibfe972208df40fa092b11b5419043524c903f1b4 to modify that to use our
internal version.
Change-Id: I149c345d38d761a49b3a6ccc4833482f09f1cd05
Add DIB_EPEL_DISABLED flag that allows installation of the EPEL repo,
but to have it disabled by default. This will help when you have
unavoidable EPEL dependencies, but want to make sure you only pull
specific things in with "--enablerepo" calls when installing those
packages.
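Sketched in Python for illustration (the element itself is shell; the
package name below is a placeholder):

    import os
    import subprocess

    # With DIB_EPEL_DISABLED=1 the repo stays disabled by default and is
    # only pulled in for explicit installs via --enablerepo.
    if os.environ.get('DIB_EPEL_DISABLED') == '1':
        subprocess.check_call(
            ['yum', '-y', '--enablerepo=epel', 'install', 'some-package'])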
Change-Id: Iedf6167a7cd69418255ebbee095aea04c50d73fd
A code block in the README of the rhel7 element is not rendered as
expected. This patch fixes it so that it renders correctly.
Change-Id: Ie8f4c05edd1dd93314290682e4b2734622894e15
zypper on non-suse hosts does not parse the pattern repodata because
patterns are marked as an unofficial extension to the repomd
specification. This is not a big issue, as newer openSUSE distributions
meanwhile provide a pattern *package* that depends on the same packages
as the pattern would, so we can simply replace the pattern with that
package.
Change-Id: I0c8f713075bd7e5bf1d425f81933b4666654add7
Depends-On: I34e98f0f7693859ed05011b008334628adff612f
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
When glean is running on CentOS with multiple NICs, it will try to
systemctl enable network.service multiple times, once for each interface.
Because of systemd magic, it is possible for the systemctl command to
fail in a race condition.
glean shouldn't be enabling network.service during boot in
pre-networking phases (Ib2b618dd975ca44e9c6b0a2c9027642ffc46b9b0). I
have proposed I8319f1ed6498a9d447950c2b4b34bca59e7b97e4 to remove this
and document the behaviour.
This also brings across suse's version
(I20bffabd333ea290d8712ec2a467f2b2d5678f3a)
Change-Id: I89d9443cb61e287bd0d9da3f48315272218ee335
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
This patch introduces a stevedore plugin mechanism for use
with the block device layer. This makes it possible for
other projects to pass in their own block device plugins.
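A minimal stevedore lookup looks roughly like this (the namespace
string is illustrative, not necessarily the one diskimage-builder
registers):

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace='diskimage_builder.block_device.plugin',
        invoke_on_load=False)

    # Each discovered extension maps a name to a plugin class.
    for ext in mgr:
        print(ext.name, ext.plugin)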
Change-Id: Id3ea56aaf75f5a20a4e1b6ac2a68adb12c56b574
Signed-off-by: Andreas Florath <andreas@florath.net>
diskimage-builder usually provides defaults that work out of the box.
One default that does not work outside of x86 land is the Ubuntu distro
mirror URL. Considering there are only two valid default options, we can
automatically choose a better default.
This patch changes behavior only for architectures known to be using
http://ports.ubuntu.com/ubuntu-ports. All others still use
http://archive.ubuntu.com/ubuntu as the default, which provides some
assurance that we do not introduce a regression.
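The selection amounts to something like the following sketch (the set
of ports architectures is illustrative rather than the exact list the
element checks):

    # Architectures assumed to be served from ports.ubuntu.com.
    PORTS_ARCHES = {'arm64', 'armhf', 'ppc64el', 's390x'}

    def default_ubuntu_mirror(arch):
        if arch in PORTS_ARCHES:
            return 'http://ports.ubuntu.com/ubuntu-ports'
        return 'http://archive.ubuntu.com/ubuntu'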
Change-Id: If95a64bac0c88f30736da4bae7f1fdce126c0bf6
We somewhat discussed skipping qcow2 generation previously in
I9372e195913798a851c96e62eee89029e067baa1. As recent issues with PPC
testing have shown, we are not actually testing the "vm" element and
hence the bootloader path in the functional tests.
I don't think we need to test this on every element; it overlaps
somewhat with the testing done by the nodepool jobs which build full
images and boot them. I also didn't want to introduce a separate run
for this. Thus it seems valuable to at least have one element
enhanced to do this installation and conversion in our default tests
for basic sanity.
This disables qcow generation by default, as per the other change, but
allows an element to drop a file that will override the output
formats. The Xenial element is modified to produce a qcow2 using
this, and also introduces a dependency on the "vm" element so it tries
to install the bootloader.
We also now exit if the .qcow2 fails to build.
Change-Id: I1a6acefe52f8c696c39b2d592fdc7ae32a87e6fe
Add a default PPC block-device layout. I've extracted this into
separate yaml files for ease of editing and to facilitate things like
longer comments.
This is not sufficient to get PPC images working, but it is required.
Change-Id: I09e5d1ed92260bdb632333f5203dd7e70d512dc8
The stdout of the script is captured, so anything these commands print
needs to be captured as well. Move to check_process and show
the output as part of an error log in the failure case.
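An illustrative stand-in for that helper, using plain subprocess (not
the project's actual implementation):

    import logging
    import subprocess

    def check_process(cmd):
        # Capture everything the command prints and only surface it in
        # the error log when the command actually fails, keeping stdout
        # clean for the caller.
        try:
            return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        except subprocess.CalledProcessError as e:
            logging.error("%s failed:\n%s", cmd, e.output)
            raise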
Change-Id: I1150375cdc479d4f19b8ddeb49a824ab16fdf831
Sphinx 1.5 (I9e7261c4124b71eeb6bddd9e21747b61bbdc16fa) includes
"warning-is-error", which supersedes pbr's warnerrors. Enable this and
fix up the resulting failures:
- trailing lines for lists in the element_deps directive
- missing READMEs that are linked
- syntax error and highlighting in the building instructions
Change-Id: I6549551b4a9bf47076c9811a7a38a666cbea2a50
99-squash-package-install in the package-installs element does not
know which python environments the requirements were installed into.
This can cause it to select the wrong python to run the
package-installs-squash script.
Co-Authored-By: Adam Harwell <flux.adam@gmail.com>
Change-Id: I5fab0e192c3a2dad8f60e821c184479e24e33bcd
execfile() is gone in python3, and judging from various stackoverflow
results, python-dev mailing list threads and other projects, runpy is
considered one of the better replacements.
This should be py2.7 safe too.
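For illustration, the replacement pattern (the file contents and path
here are examples only):

    import runpy
    import tempfile

    # Write a tiny file and execute it the way execfile() used to,
    # getting its globals back as a dict. runpy.run_path() is available
    # on both python 2.7 and python 3.
    with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
        f.write("ANSWER = 42\n")
    module_globals = runpy.run_path(f.name)
    print(module_globals['ANSWER'])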
Change-Id: I18077ba9d603752492cc81f260e12710981f4dff
On Debian Jessie and Debian Stretch, systemctl is in /bin.
If the package systemd-sysv is not installed, the script
dib-init-system did not find the init system.
This patch fixes the problem: it also looks in /bin
for systemctl and, if it is found, selects systemd.
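The detection logic, as a simplified Python sketch (dib-init-system is
a shell script; the sysvinit fallback below is illustrative):

    import os

    def detect_init_system():
        # systemctl normally lives in /usr/bin, but on Debian
        # Jessie/Stretch without systemd-sysv it is only in /bin.
        for path in ('/usr/bin/systemctl', '/bin/systemctl'):
            if os.path.exists(path):
                return 'systemd'
        return 'sysvinit'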
Change-Id: I5a18052a070bad5e16b14672237a1e2b38513949
Signed-off-by: Andreas Florath <andreas@florath.net>
On Ubuntu, the architecture-emulation-binaries element does exactly
the same thing as `qemu-debootstrap` for a limited number of
architectures. `qemu-debootstrap` comes in the qemu-user-static
package, which is used anyway when cross-building an image.
This patch replaces the complete architecture-emulation-binaries
element with a call to qemu-debootstrap.
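For reference, the replacement boils down to a single call of the
following shape (architecture, suite, target directory and mirror are
example values):

    import subprocess

    # qemu-debootstrap wraps debootstrap and installs the matching
    # qemu-user-static binary into the target for foreign architectures.
    subprocess.check_call([
        'qemu-debootstrap', '--arch', 'arm64',
        'xenial', '/tmp/target-root',
        'http://ports.ubuntu.com/ubuntu-ports',
    ])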
Change-Id: Ib9667307bfd3ff7592444a2ec5b04aa5365a1872
Signed-off-by: Andreas Florath <andreas@florath.net>
Depending on the type of deployment (security, NFV, ...) some extra
kernel flags are needed on the images. This change exposes the
DIB_BOOTLOADER_DEFAULT_CMDLINE parameter, defaulting to the
existing 'nofb nomodeset vga=normal', which allows these boot
parameters to be modified.
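Reading the new parameter amounts to the following (shown in Python
for illustration; the element itself is shell):

    import os

    # Falls back to the previous hard-coded kernel command line.
    cmdline = os.environ.get('DIB_BOOTLOADER_DEFAULT_CMDLINE',
                             'nofb nomodeset vga=normal')
    print(cmdline)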
Change-Id: I67d191fa5ca44a57f776cb9739a02dd71212969c
Closes-Bug: #1668890
The order of the partitions is important and needs to be preserved.
With a plain dict this does not happen. As a consequence,
checks like 'the primary partition must come first' fail because the
dictionary orders the partitions arbitrarily.
Switching to an OrderedDict solves the problem, as it preserves the
ordering it gets from the yaml blob.
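A common PyYAML recipe for this, sketched for illustration (not
necessarily the exact loader used by the block-device code):

    import collections
    import yaml

    def ordered_load(stream):
        # Load YAML mappings into OrderedDict so the partition order in
        # the blob survives instead of being lost in a plain dict.
        class OrderedLoader(yaml.SafeLoader):
            pass

        def construct_mapping(loader, node):
            loader.flatten_mapping(node)
            return collections.OrderedDict(loader.construct_pairs(node))

        OrderedLoader.add_constructor(
            yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
            construct_mapping)
        return yaml.load(stream, OrderedLoader)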
Change-Id: Icfa9bd95ffd0203d7c3f6af95de3a6f848c2a954
Now that the main partitioning refactor patch is merged, there is
a small relic of partition handling still left in the disk-image-create
main script.
This patch moves that functionality from disk-image-create to the
block-device/partitioning module; it is mostly a rewrite of the
original bash code in Python.
Change-Id: Ia73baeca74180a7bc9ea487da03ff56d6a3070ce
Signed-off-by: Andreas Florath <andreas@florath.net>
It looks like the special handling for Ubuntu is no longer needed
(it's a pity that there are no detailed comments...).
The grub2 element is a second implementation of the bootloader element,
but because some features, e.g. EFI boot, are only available here, it
should keep working as long as those are not implemented in the
bootloader element.
Change-Id: I74269116ea30b84f3259805720d5cd1616f960c5
Signed-off-by: Andreas Florath <andreas@florath.net>
Closes-Bug: #1627402
Currently there is no description of dependencies in the generated
documentation of the elements; therefore a user of an element does not
know which other elements are automatically included and, e.g., which
configuration options are available. In addition, parts of the
README.rst files are copy&pasted and scattered throughout different
Ubuntu- and Debian-specific elements.
This patch adds semi-automatic generation of dependency information
for all elements. Nevertheless the information is not included
automatically: the author of an element's README.rst can decide if and
where the dependency information should appear and can use the directive
.. element_deps::
for this.
This patch adds the dependency information for some Debian and
Ubuntu elements, and creates the basis for later removing the
duplicated parts.
A call is added to element_dependencies._find_all_elements() to
populate reverse dependencies for Element objects.
(This is a reworking of I31d2b6050b6c46fefe37378698e9a330025db430 for
the feature/v2 branch)
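The reverse-dependency computation is essentially the following
(simplified sketch, not the module's actual code):

    import collections

    def reverse_dependencies(deps):
        # deps maps each element to the set of elements it depends on;
        # the result maps each element to the elements that depend on it.
        rdeps = collections.defaultdict(set)
        for element, requires in deps.items():
            for required in requires:
                rdeps[required].add(element)
        return rdeps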
Change-Id: Iebb83916fed71565071246baa550849eef40560b
In error scenarios there might be no loadable state available.
This patch adds some robustness for handling these scenarios
by checking the result and possibly bailing out.
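The bail-out amounts to something like this sketch (the file name and
JSON format are illustrative):

    import json
    import logging
    import os
    import sys

    def load_state(state_file):
        # If no loadable state exists, report it and stop instead of
        # failing later with a confusing error.
        if not os.path.exists(state_file):
            logging.error("no state available at %s", state_file)
            sys.exit(1)
        with open(state_file) as f:
            return json.load(f)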
Change-Id: I02e2731b14bec22c22db08d60d8d15db1911a84e
Signed-off-by: Andreas Florath <andreas@florath.net>