Mirror of https://github.com/rocky-linux/os-autoinst-distri-rocky.git (commit 92d588f245)
Summary:
This adds an entirely new workflow for testing distribution updates. The `ADVISORY` variable is introduced: when set, `main.pm` will load an early post-install test that sets up a repository containing the packages from the specified update, runs `dnf -y update`, and reboots.

A new templates file is added, `templates-updates`, which adds two new flavors called `updates-server` and `updates-workstation`, each containing job templates for appropriate post-install tests. The scheduler is expected to post `ADVISORY=(update ID) HDD_1=(base image) FLAVOR=updates-(server|workstation)`, where (base image) is one of the stable release base disk images produced by `createhdds` and usually used for upgrade testing. This will result in the appropriate job templates being loaded.

We rejig post-install test loading and static network config a bit so that this works for both the 'compose' and 'updates' test flows: we have to ensure we bring up networking for the tap tests before we try to install the updates, but still allow later adjustment of the configuration.

We take advantage of the openQA feature added a few months back to run the same module multiple times, so the `_advisory_update` module can reboot after installing the updates and the modules that take care of bootloader, encryption and login get run again. This looks slightly wacky in the web UI, though - it doesn't show the later runs of each module.

We also use the recently added feature to specify `+HDD_1` in the test suites which use a disk image uploaded by an earlier post-install test, so the test suite value will take priority over the value POSTed by the scheduler for those tests, and we will use the uploaded disk image (not the clean base image POSTed by the scheduler) for those tests.

My intent here is to enhance the scheduler, adding a consumer which listens for critpath updates, runs this test flow for each one, and then reports the results to ResultsDB, where Bodhi could query and display them. We could also add a list of other packages to have one or both sets of update tests run on them, I guess.

Test Plan:
Try a POST something like:

    HDD_1=disk_f25_server_3_x86_64.img DISTRI=fedora VERSION=25 FLAVOR=updates-server ARCH=x86_64 BUILD=FEDORA-2017-376ae2b92c ADVISORY=FEDORA-2017-376ae2b92c CURRREL=25 PREVREL=24

Pick an appropriate `ADVISORY` (ideally, one containing some packages which might actually be involved in the tests), and matching `FLAVOR` and `HDD_1`. The appropriate tests should run, a repo with the update packages should be created and enabled (and `dnf update` run), and the tests should work properly. Also test a regular compose run to make sure I didn't break anything.

Reviewers: jskladan, jsedlak

Reviewed By: jsedlak

Subscribers: tflink

Differential Revision: https://phab.qa.fedoraproject.org/D1143
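The `_advisory_update` module itself is not shown on this page. As a rough illustration of the flow the summary describes (create a repo with the update's packages, `dnf -y update`, reboot), a minimal sketch might look something like the following. The repo path `/opt/update_repo`, the repo file name, the timeout values, and the assumption that the update packages have already been downloaded into that directory are all illustrative, not the actual implementation; the base class and helper usage simply mirror the module shown further down.

use base "installedtest";
use strict;
use testapi;
use utils;

sub run {
    my $self = shift;
    my $advisory = get_var("ADVISORY");
    # write a .repo file pointing at a local directory that (we assume)
    # has already been populated with the packages from the update;
    # path and repo name here are placeholders
    assert_script_run "printf '[advisory]\\nname=Repo for $advisory\\nbaseurl=file:///opt/update_repo\\nenabled=1\\ngpgcheck=0\\n' > /etc/yum.repos.d/advisory.repo";
    # install the update packages, allowing plenty of time
    assert_script_run "dnf -y update", 600;
    # reboot so the later post-install tests run on the updated system;
    # don't wait for the command to return
    script_run "reboot", 0;
}

sub test_flags {
    return { fatal => 1 };
}

1;

As described above, the sketch does not handle the bootloader, disk decryption or login after the reboot: those are covered by re-running the existing post-install modules, using the openQA feature that allows the same module to run multiple times.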
59 lines · 1.9 KiB · Perl
use base "installedtest";
use strict;
use testapi;
use lockapi;
use utils;
use tapnet;

sub run {
    my $self = shift;
    # use FreeIPA server as DNS server
    assert_script_run "printf 'search domain.local\nnameserver 10.0.2.100' > /etc/resolv.conf";
    # wait for the server to be ready (do it now just to make sure name
    # resolution is working before we proceed)
    mutex_lock "freeipa_ready";
    mutex_unlock "freeipa_ready";
    # do repo setup
    repo_setup();
    # run firefox and login to cockpit
    # note: we can't use wait_screen_change, wait_still_screen or
    # check_type_string in cockpit because of that fucking constantly
    # scrolling graph
    start_cockpit(1);
    assert_and_click "cockpit_join_domain_button";
    assert_screen "cockpit_join_domain";
    send_key "tab";
    sleep 3;
    type_string("ipa001.domain.local", 4);
    type_string("\t\t", 4);
    type_string("admin", 4);
    send_key "tab";
    sleep 3;
    type_string("monkeys123", 4);
    sleep 3;
    assert_and_click "cockpit_join_button";
    # check we hit the progress screen, so we fail faster if it's
    # broken
    assert_screen "cockpit_join_progress";
    # join involves package installs, so it may take some time
    assert_screen "cockpit_join_complete", 300;
    # quit browser to return to console
    send_key "ctrl-q";
    # we don't get back to a prompt instantly and keystrokes while X
    # is still shutting down are swallowed, so wait_still_screen before
    # finishing (and handing off to freeipa_client_postinstall)
    wait_still_screen 5;
}

sub test_flags {
    # without anything - rollback to 'lastgood' snapshot if failed
    # 'fatal' - whole test suite is in danger if this fails
    # 'milestone' - after this test succeeds, update 'lastgood'
    # 'important' - if this fails, set the overall state to 'fail'
    return { fatal => 1, milestone => 1 };
}

1;

# vim: set sw=4 et: