It's not really a good idea to have the comments that explain
the test_flags in *every* test, because they can go stale and
then we either have to live with them being old or update them
all. Like, now. So let's just take 'em all out. There's always
a reference in the openQA and os-autoinst docs, and those get
updated faster.
More importantly, add the new `ignore_failure` flag to relevant
tests - all the tests that don't have the 'important' or
'fatal' flag at present. Upstream killed the 'important' flag
(making all tests 'important' by default); I got it replaced
with the 'ignore_failure' flag, so we now need to explicitly
mark all modules we want the 'ignore_failure' behaviour for.
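For reference, this is what the flag looks like in a test module
(a minimal sketch; the actual modules getting it are the ones
touched by this commit):

    sub test_flags {
        # a failure of this module alone shouldn't fail the whole job
        return { ignore_failure => 1 };
    }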
The way this was set up before, if `anaconda_main_hub` matched
immediately but some spoke was still in a 'processing' state,
it only had 30 seconds (default `assert_and_click` timeout) to
complete and allow the 'Begin Installation' button to appear.
It seems unnecessary to match on *both* needles, really, so
let's just give 300 seconds for the `begin_installation` needle
to appear. It's not going to appear on any other screen.
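Roughly, the new wait looks like this (the needle tag is the one
mentioned above; the real module may structure it differently):

    # give all spokes up to five minutes to finish processing
    assert_screen "begin_installation", 300;
    assert_and_click "begin_installation";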
This problem caused a couple of spurious failures today -
https://openqa.fedoraproject.org/tests/77839 and
https://openqa.fedoraproject.org/tests/77858 - because they
took a bit too long for the INSTALLATION DESTINATION spoke to
clear.
Summary:
This adds a new test suite, run for Workstation and KDE live
images, which does not create a user during install. It then
expects initial-setup (KDE) or gnome-initial-setup (Workstation)
to appear after install, creates a user, and proceeds with
normal boot.
Note the ARM image test already covers the initial-setup text
mode, and the ARM minimal image is the only case where that
actually matters (it's not included in Server).
Test Plan:
Run the new tests, check they work. Run all old
tests, check the changes didn't break them.
Reviewers: jsedlak, jskladan
Reviewed By: jsedlak
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1185
There's an issue where the follow-on _advisory_post test tries
to log in before the 'login failed' error has cleared. We can
easily avoid this by using tty2 for the login tests, then
_advisory_post will switch to tty3 for its stuff.
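A minimal sketch of the idea (credentials and the console_login
arguments are illustrative, assuming the suite's usual helper):

    # do the login checks on tty2, so tty3 is still pristine when
    # _advisory_post switches there later
    send_key "ctrl-alt-f2";
    console_login(user => "test", password => get_var("USER_PASSWORD", "weakpassword"));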
Summary:
For some reason, we have `USER_LOGIN` set to 'false' for the KDE
package set install test. I really don't know / remember why
that would be; I'd think we should create a user and log in as
that user to make sure it works properly when installing KDE
from the traditional installer. It's not strictly part of the
package set test, true, but still, seems worth doing.
Also, when `USER_LOGIN` is set to 'false' and the installer runs,
we create a user called 'false'. This doesn't seem like what we
wanted, so let's not do that. I dunno if there are any other
cases besides the KDE one that this commit changes, but still.
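Something along these lines in the user creation step (a sketch;
the real module's variable handling may differ):

    # treat USER_LOGIN=false as 'skip user creation' instead of
    # creating a user literally named 'false'; default to 'test'
    my $user_login = get_var("USER_LOGIN") || "test";
    if ($user_login ne "false") {
        type_very_safely $user_login;
    }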
Test Plan:
Run the full test suite and look for weirdness, check
KDE package set test works as intended (now creates a user called
'test' and logs in as that user).
Reviewers: jsedlak, jskladan
Reviewed By: jsedlak
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1182
Committing without review as this is pretty trivial and I've
had it on staging for the last few days without issue. Just gets
us somewhat better info for debugging FreeIPA issues.
We used to do this only for KDE, but I've seen the new update
tests sometimes fail at this point for no apparent reason, and
I'm thinking a wait may help (in case they're clicking the
button before it's really 'ready').
This keeps failing because the default `assert_script_run`
timeout changed from 90 to 30 in the last os-autoinst update
(an unintended consequence of a change I made). This has been
fixed upstream, but in the meantime, let's just set an
explicit timeout on the call.
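That is, roughly this (the command here is illustrative, not the
actual failing call):

    # pass an explicit timeout rather than relying on the
    # (currently reduced) assert_script_run default
    assert_script_run "dnf -y update", 300;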
Noticed in e.g. https://openqa.fedoraproject.org/tests/58798
that we're doing this wrong: `boot_decrypt` was moved into utils
as a function, but we were still calling it as a method...
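The fix is just calling it as the plain exported function it now
is (the argument is illustrative):

    use utils;

    # old, broken: $self->boot_decrypt(60);
    boot_decrypt(60);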
Summary:
This adds some logging related to the update testing workflow,
so we have some idea what we actually tested. We log precisely
which packages were actually downloaded from the update - this
is important as updates can be edited and when examining results
we'll want to know which packages actually got used. We also
add a new module which runs at the end of postinstall and tries
to figure out which packages from the update were installed in
the course of the test. This still isn't a guarantee the test
actually *tested them* in any way, but it at least means they
got installed successfully and didn't interfere with the test.
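Roughly the shape of the logging (paths and the exact rpm query
are illustrative, not necessarily what the modules really use):

    # which packages were downloaded from the update into the local repo
    assert_script_run 'ls /mnt/update_repo/*.rpm > /var/tmp/updatepkgs.txt';
    upload_logs "/var/tmp/updatepkgs.txt";

    # and, at the end of postinstall, what's actually installed
    assert_script_run 'rpm -qa --qf "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" | sort > /var/tmp/installedpkgs.txt';
    upload_logs "/var/tmp/installedpkgs.txt";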
Test Plan:
Run the update test workflow, check the logs get
uploaded and seem accurate (sometimes some RPM garbage messages
wind up in the package log; I'm not too worried about that at
present). Run the compose test workflow and check it didn't
break.
Reviewers: jsedlak
Reviewed By: jsedlak
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1149
Summary:
This adds an entirely new workflow for testing distribution
updates. The `ADVISORY` variable is introduced: when set,
`main.pm` will load an early post-install test that sets up
a repository containing the packages from the specified update,
runs `dnf -y update`, and reboots. A new templates file is
added, `templates-updates`, which adds two new flavors called
`updates-server` and `updates-workstation`, each containing
job templates for appropriate post-install tests. The scheduler
is expected to POST `ADVISORY=(update ID) HDD_1=(base image)
FLAVOR=updates-(server|workstation)`, where (base image) is one
of the stable release base disk images produced by `createhdds`
and usually used for upgrade testing. This will result in the
appropriate job templates being loaded.
We rejig postinstall test loading and static network config a
bit so that this works for both the 'compose' and 'updates' test
flows: we have to ensure we bring up networking for the tap
tests before we try and install the updates, but still allow
later adjustment of the configuration. We take advantage of the
openQA feature that was added a few months back to run the same
module multiple times, so the `_advisory_update` module can
reboot after installing the updates and the modules that take
care of bootloader, encryption and login get run again. This
looks slightly wacky in the web UI, though - it doesn't show the
later runs of each module.
We also use the recently added feature to specify `+HDD_1` in
the test suites which use a disk image uploaded by an earlier
post-install test, so the test suite value will take priority
over the value POSTed by the scheduler for those tests, and we
will use the uploaded disk image (and not the clean base image
POSTed by the scheduler) for those tests.
My intent here is to enhance the scheduler, adding a consumer
which listens out for critpath updates, and runs this test flow
for each one, then reports the results to ResultsDB where Bodhi
could query and display them. We could also add a list of other
packages to have one or both sets of update tests run on them, I
guess.
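The gist of the main.pm gating described above, simplified (the
module name is from this commit; exact loading details omitted):

    if (get_var("ADVISORY")) {
        # set up a repo with the update's packages, 'dnf -y update',
        # then reboot before the usual post-install tests run
        autotest::loadtest "tests/_advisory_update.pm";
    }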
Test Plan:
Try a POST with something like:
HDD_1=disk_f25_server_3_x86_64.img DISTRI=fedora VERSION=25
FLAVOR=updates-server ARCH=x86_64 BUILD=FEDORA-2017-376ae2b92c
ADVISORY=FEDORA-2017-376ae2b92c CURRREL=25 PREVREL=24
Pick an appropriate `ADVISORY` (ideally, one containing some
packages which might actually be involved in the tests), and
matching `FLAVOR` and `HDD_1`. The appropriate tests should run,
a repo with the update packages should be created and enabled
(and dnf update run), and the tests should work properly. Also
test a regular compose run to make sure I didn't break anything.
Reviewers: jskladan, jsedlak
Reviewed By: jsedlak
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1143
Summary:
This is to handle cases like #1414904, where the system boots
to emergency mode. We really need logs to try and debug this.
Test Plan:
Force a test to hit emergency mode somehow (right now
you can just run base_services_start on Rawhide over and over
until you hit #1414904, but there's probably an easier way to
do it; I think there's a systemd boot arg to tell it which
target to boot, for instance) and check logs get uploaded. Also check this
doesn't break log upload for a 'normal' failure.
Reviewers: garretraziel_but_actually_jsedlak_who_uses_stupid_nicknames
Reviewed By: garretraziel_but_actually_jsedlak_who_uses_stupid_nicknames
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1103
Summary:
This adds a couple of new exporter modules, renames main_common
to utils (this is a better name: openSUSE's main_common holds
functions used in main.pm, while utils is what they call their
module full of miscellaneous commonly-used functions), and moves a
bunch of utility functions that were previously needlessly
implemented as instance methods in base classes into the
exporter modules. That means we can get rid of all the annoying
$self-> syntax for calling them.
We get rid of `fedorabase` entirely, as it's no longer useful
for anything. Other base classes keep the 'standard' methods
(like `post_fail_hook`) and methods which actually need to be
methods (like `root_console`, whose behaviour is different in
anacondatest and installedtest).
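The new exporter modules look roughly like this, using
boot_decrypt as the example (the needle tag and variable name
here are illustrative):

    package utils;

    use strict;
    use base 'Exporter';
    use testapi;

    # plain exported functions - no $self-> needed to call them
    our @EXPORT = qw(boot_decrypt);

    sub boot_decrypt {
        my $timeout = shift || 60;
        assert_screen "boot_enter_passphrase", $timeout;
        type_string get_var("ENCRYPT_PASSWORD") . "\n";
    }

    1;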
Test Plan:
Do a full test suite run and check everything lines
up. There should be no functional differences from before at all;
this is just a re-org.
Reviewers: jskladan, garretraziel_but_actually_jsedlak_who_uses_stupid_nicknames
Reviewed By: garretraziel_but_actually_jsedlak_who_uses_stupid_nicknames
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1080
Summary:
Since 26.17, anaconda shows a warning when the user password
contains non-ASCII characters, and requires a second Done click
to confirm. This change should handle that.
On the 'catch cases where password typing went wrong and re-try'
bit: to keep that, but not re-type the password *every single
time* on the Russian install test, we'd have to make the needle
match the text of the warning. This is problematic because then
that needle will be able to break without us easily noticing;
that's why I wanted to keep the 'warning bar' needle text-free.
Unfortunately, that means we have to skip the protection for
switched-layout installs.
Note the protection was actually not working for any non-English
install anyhow, because the needle had `LANGUAGE-english` as a
tag. We never noticed that. Failed password typing is pretty
rare now, so we can live without the protection - it's just nice
to have it for the English install tests because there are so
many of them.
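The handling is essentially this (needle names are illustrative):

    assert_and_click "anaconda_spoke_done";
    # if the non-ASCII password warning bar shows up, Done needs a
    # second click to confirm
    if (check_screen "anaconda_warning_bar", 5) {
        assert_and_click "anaconda_spoke_done";
    }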
Test Plan:
Run the Russian install with a recent Rawhide image,
check it clicks 'Done' twice. Note that it will still fail because
of RHBZ #1413813.
Reviewers: jskladan, garretraziel_but_actually_jsedlak_who_uses_stupid_nicknames
Reviewed By: garretraziel_but_actually_jsedlak_who_uses_stupid_nicknames
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1084
Summary:
This adds a new test, memory_check, which just does a default
package set install with the `inst.debug` parameter, then uploads
the memory usage file (`/tmp/memory.dat`) at the end. We can
have check-compose use the data to analyze changes in memory
usage over time.
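The upload itself is trivial; at the end of the test it's just:

    # anaconda writes this file when booted with inst.debug
    upload_logs "/tmp/memory.dat";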
Test Plan:
Fire off the Workstation network install image tests
and make sure the memory usage test runs and works on all three
machines. This is live on staging already.
Reviewers: jskladan, garretraziel_but_actually_jsedlak_who_uses_stupid_nicknames
Reviewed By: garretraziel_but_actually_jsedlak_who_uses_stupid_nicknames
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1082
The old version wasn't working - it was passing even though two
services fail to start in Workstation currently. I'm really not
sure why the old approach wasn't working, but it wasn't, and I
rather hate `script_output` anyway, so here's a different way
of doing it which relies on `eval`ing `assert_script_output`
instead. (I really should send a PR for a non-fatal version of
assert_script_output...)
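The general shape of the eval trick, sketched with
assert_script_run (the check command and failure handling here
are illustrative, not the exact code):

    eval {
        # dies if any unit is in the failed state
        assert_script_run 'systemctl --failed --no-legend | wc -l | grep -q "^0$"';
    };
    if ($@) {
        # grab the list of failed units before failing the test
        script_run 'systemctl --failed > /tmp/failed_services.txt';
        upload_logs '/tmp/failed_services.txt';
        die 'some services failed to start';
    }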
Without this, when there are failed services, we get an extra
column to the left of the service names with a unicode dot for
each failed service, which is awkward and screws up the parsing.
Summary:
Include some basic testing of Japanese input, and split the
input testing (including Russian) into a separate module, since
it's not really part of 'login' testing.
Test Plan:
Run the test, and the Russian and French tests too to
make sure they didn't break. Tested on staging. Note the Japanese
test soft fails, intentionally, at present, as I discovered a bug
while working on it:
https://bugzilla.gnome.org/show_bug.cgi?id=776189
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1072
Summary:
This isn't in the criteria, but it's commonly used, so we ought
to test it this way. Require authentication for the iSCSI target
and have the test provide the appropriate auth info.
Test Plan:
Run the iscsi test and check it works (you need the
recent fixes for support_server to make *that* work). Nothing
else should be affected.
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1070
Summary:
The non-English tests so far did not test that graphical login
worked as expected, which is a fairly large hole. With this
change, they should do a Workstation install and test that login
to both GNOME and the console works as expected. KDE is not yet
tested.
As part of this we tweak the implementation of keyboard layout
switching in graphical environments to use a generic function
in main_common which can handle both anaconda and desktops
(just GNOME at present, but should extend easily to any desktop
with a known switcher key and a visible layout indicator),
replacing the anacondatest class method. I kinda don't like that
the test has to specifically tell the function when it's in
anaconda, but I don't think I want to start experimenting with
a global 'test phase' openQA variable or anything like that at
present.
Fixes T842.
Test Plan:
Run the French and Russian install tests and check
they work as expected. Also run an English Workstation install
if you like, and make sure that didn't break. This change is
live on staging ATM, seems to work fine.
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Maniphest Tasks: T842
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1071
support_server was failing because of #1402427. I re-generated
the disk image with latest F25 so the fixed selinux-policy is
used, but even then, it seems we have to run 'restorecon' on
rpcbind manually before starting nfs.
Committing without review as this causes failures... This tries
to make sure we only run the AVC test when it makes sense, and
fixes running it on the French install test.
Summary:
This has all console tests check for AVCs (with ausearch) and
crashes (with coredumpctl) at post-install stage. It's non-
fatal as this doesn't really mean the test failed, but we want
to spot when there are unexpected AVCs or crashes.
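One way to express a non-fatal check like this (the real modules
may instead rely on their test flags; commands and match strings
are illustrative):

    # flag problems without failing the test outright
    my $avcs = script_output 'ausearch -m avc -i 2>&1 || true';
    record_soft_failure "AVC(s) found" unless $avcs =~ /no matches/i;
    my $dumps = script_output 'coredumpctl list 2>&1 || true';
    record_soft_failure "coredump(s) found" unless $dumps =~ /no coredumps found/i;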
Test Plan:
Run some console tests, check it works right. I only
tested with one test, since so many are broken on Rawhide ATM.
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1066
Committing without review as this is pretty trivial. Running
top *after* all the other collect_data tasks leads to some
meaningless fluctuations in the output; I intended to look for
activity caused by stuff running 'as usual', not activity from
the other collect_data tasks.
Summary:
I've been wanting to do this for a while; I think it'll let us
check for some significant changes between composes. This should
cause runs of a few test cases to collect and upload info on:
* installed packages
* free memory
* disk space
* active services
* 1 minute of CPU usage info (via top)
immediately after install and initial login. In some cases this
will be useful / interesting simply to look at directly, but
we can also have check-compose analyze the data and include
significant changes in its reports.
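A sketch of the data collection (file names and exact commands
are illustrative):

    # ~1 minute of CPU activity, plus the static snapshots
    script_run 'top -b -d 1 -n 60 > /var/tmp/top.txt', 90;
    script_run 'rpm -qa | sort > /var/tmp/packages.txt';
    script_run 'free -m > /var/tmp/memory.txt';
    script_run 'df -h > /var/tmp/disk.txt';
    script_run 'systemctl list-units --type=service --state=running > /var/tmp/services.txt';
    upload_logs "/var/tmp/$_" for qw(top.txt packages.txt memory.txt disk.txt services.txt);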
Test Plan:
Run affected tests, make sure the data collection
works.
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1046
Summary:
GNOME's update notification criteria are pretty braindead: it
fires the update check timer once on login then once every hour
thereafter, but only actually checks for and notifies of updates
once a day if it's after 6am(?!?!?!). So we have to do a bunch
of fiddling around to ensure we reliably get a notification.
Move the clock to 6am if it's earlier than that, and reset the
'last update check' timer to 48 hours ago, then log in to GNOME
after that.
Note: I thought this still wasn't fully reliable, but I've looked
into all the recent failures of either test on staging and
there's only one which was really 'no update notification came
up', and the logs clearly indicate PK did run an update check,
so I don't think that was a test bug (I think something went
wrong with the update check). The other failures are all 'GNOME
did something wacky', plus one case where the needle didn't quite
match because I think the match area is slightly too tall; I'll
fix that in a second.
Test Plan:
Run the tests on both KDE and GNOME and check they
work properly now (assuming nothing unrelated breaks, like KDE
crashing...)
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1039
We're not really *testing* shutdown here, we're just shutting
down to make sure the uploaded disk image is clean. So we don't
really mind if shutdown takes a while. It often seems to take
longer than 1 minute on KDE installs and cause a soft fail, so
let's bump the timeout to 3 minutes.
Summary:
os-autoinst implements `script_run` itself now, we aren't
required to implement it ourselves any more. os-autoinst's
implementation is better than ours, as it allows for verifying
the script actually ran (via the redirect-output-to-serial-
console trick).
So this drops our implementation so we'll just use the upstream
one. Where I judged we don't want to bother with the 'check
the command actually ran' feature I've adjusted our direct
`script_run` calls to pass a wait time of 0, which skips the
'wait for command to run' stuff entirely and just does a simple
'type the string and hit enter'.
Because of how the inheritance works, our `assert_script_run`
calls already used the os-autoinst `script_run`, rather than
the one from our distribution.
This should prevent `prepare_test_packages` sometimes going
wrong right after removing the python3-kickstart package, as
we'll properly wait for that removal to complete now (before,
we weren't; we'd just start typing the next command while it
was still running, which could result in lost keypresses).
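For example (the commands are illustrative):

    # fire-and-forget: wait time 0 just types the command and hits enter
    script_run "reboot", 0;

    # where completion matters (like the python3-kickstart removal in
    # prepare_test_packages), let the upstream implementation wait for it
    assert_script_run "dnf -y remove python3-kickstart";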
Test Plan:
Check all tests still run OK (I've tried this on
staging and it seems fine).
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1034
Summary:
Since we started using `type_very_safely` for typing the root
password, we started hitting a race issue. If we complete the
root password spoke so slowly that the software deployment
process completes in the meantime, anaconda will wait until we
complete the spoke then immediately flip to the 'post-install
configuration' step, at which point access to the USER CREATION
spoke is blocked.
We don't hit this case on regular RPM installs or live installs
as the deployment phase still takes a while for both of those,
but we are sometimes hitting it for the Atomic ostree install
image, as the software deployment phase is pretty fast there.
We *could* just not bother creating a user and testing we can
log in as a user for that test, but I don't like that approach,
we *should* be testing that user creation and login works OK
for ostree installs. So instead, let's just type the root
password a bit less safely for ostree installs; this will be
more vulnerable to typing errors but hopefully will avoid the
race problem.
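The change amounts to something like this (assuming CANNED is the
variable that marks ostree installs and that the suite has a
type_safely helper; both are assumptions here):

    my $root_password = get_var("ROOT_PASSWORD") || "weakpassword";
    if (get_var("CANNED")) {
        # ostree deployment finishes fast - type less carefully so we
        # clear the spoke before post-install configuration starts
        type_safely $root_password;
    }
    else {
        type_very_safely $root_password;
    }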
Test Plan:
Run a few Atomic installs, see if they hit the race.
Might need to run other tests at the same time, and you may not
be able to hit it; this is obviously dependent on the I/O of
the worker host...
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1022
Summary:
Since we can match on multiple needles, we can drop the loop
from console_login and instead do it this way, which is simpler
and should work better on ARM (the timeouts will scale and
allow ARM to be slow here). Also move it to main_common as
there's no logical reason for it to be a class method.
Also remove the `check` arg. `check` was only set to 0 by two
tests, _console_shutdown and anacondatest's _post_fail_hook.
For _console_shutdown, I think I just wanted to give it the
best possible chance of succeeding. But we're really not going
to lose anything significant by checking; the only case where
check=>0 would've helped is if the 'good' needle had stopped
matching, and all sorts of other tests will fail in that case.
anacondatest was only using it to save a screenshot of whatever
was on the tty if it didn't reach a root console, which doesn't
seem that useful, and we'll get screenshots from check_screen
and assert_screen anyway.
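The multi-needle version is roughly this (needle tags are
illustrative):

    # match either a login prompt or an already-open root console, with
    # a timeout generous enough for slow ARM workers
    assert_screen ["text_console_login", "root_console"], 90;
    if (match_has_tag "text_console_login") {
        type_string "root\n";
        sleep 2;
        type_string get_var("ROOT_PASSWORD") . "\n";
        assert_screen "root_console", 30;
    }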
Test Plan:
Run all tests, check they behave as expected and
none inappropriately fails on console login.
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1016
As with the text install test, there's an additional choice on
the 'Time settings' path compared to whatever image garret
developed the test on; right after 'Time settings' you have to
pick 'Set timezone' or 'Configure NTP servers'. So adjust the
test to handle this, just like we did there.
Summary:
I started out wanting to fix an issue I noticed today where
graphical upgrade tests were failing because they didn't wait
for the graphical login screen properly; the test was sitting
at the 'full Fedora logo' state of plymouth for a long time,
so the current boot_to_login_screen's wait_still_screen was
triggered by it and the function wound up failing on the
assert_screen, because it was still some time before the real
login screen appeared.
So I tweaked the boot_to_login_screen implementation to work
slightly differently (look for a login screen match, *then* -
if we're dealing with a graphical login - wait_still_screen
to defeat the 'old GPU buffer showing login screen' problem
and assert the login screen again). But while working on it,
I figured we really should consolidate all the various places
that handle the bootloader -> login, we were doing it quite
differently in all sorts of different places. And as part of
that, I converted the base tests to use POSTINSTALL (and thus
go through the shared _wait_login tests) instead of handling
boot themselves. As part of *that*, I tweaked main.pm to not
require all POSTINSTALL tests have the _postinstall suffix on
their names, as it really doesn't make sense, and renamed the
tests.
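The reworked helper looks roughly like this (the needle tag and
the graphical-login check are simplified):

    sub boot_to_login_screen {
        my %args = @_;
        $args{timeout} //= 300;
        # find a login screen match first...
        assert_screen "login_screen", $args{timeout};
        if (get_var("DESKTOP")) {
            # ...then wait out the 'old GPU buffer still shows a login
            # screen' problem and check again
            wait_still_screen 10;
            assert_screen "login_screen";
        }
    }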
Test Plan: Run all tests, see if they work.
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1015
I have a better fix for this coming, but it's a big change that
requires review, so for now, do this. The tests are actually
failing because the 30 second wait_still_screen is too short(!)
- it's triggering while the test is sitting at the 'full Fedora
logo' bootsplash screen then doing a 30 second assert_screen
graphical_login, which is failing because it actually takes
more than 60 seconds to get from the 'full F' screen to the
login screen. So let's just make the wait_still_screen 45
seconds for now.
Use 'ps' output for Xorg and Xwayland. We'd need some new
openQA var to get this right by 'guessing', as it's vt1 for
Workstation when running live - so long as autologin worked -
but vt2 after install, so we'd need a var or some other signal
to detect which case we're running in. LIVE doesn't do it; it's
set even when running a post-install test from a live image.
So instead let's just do it a bit more cleverly. This also
gives us a bit of insurance against changes in GDM, SDDM etc.
behaviour, so long as Xwayland or Xorg is running (and we can
add additional processes to the list, like gnome-shell, if
needed/appropriate). We assume the *final* listed process -
i.e. the most recently-started one - will be the desktop;
this covers gdm's behaviour of starting up on vt1 then running
the user session on vt2. We can make this function more complex
and add args if we ever get to the point where we have multi-
user tests running or anything (e.g. allow to pass a username
and only look for that user's processes).
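In sketch form (the helper name and exact parsing are
illustrative):

    sub desktop_vt {
        # which VTs have an X server? take the last (most recently
        # started) one - gdm starts on vt1, the user session on vt2
        my $xout = script_output 'ps -C Xorg,Xwayland -o tty= || true';
        my @ttys = grep { /tty/ } split /\s+/, $xout;
        my $vt = 1;
        $vt = $1 if @ttys && $ttys[-1] =~ /tty(\d+)/;
        send_key "ctrl-alt-f$vt";
    }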
Landing without review as this broke the live variant of the
test on Workstation in production (kinda not sure why it worked
in testing, or I didn't notice that it failed, but never mind).
I've tested it on staging.