Summary:
I started out wanting to fix an issue I noticed today where
graphical upgrade tests were failing because they didn't wait
for the graphical login screen properly: the test sat at the
'full Fedora logo' stage of plymouth for a long time, which
satisfied the current boot_to_login_screen's wait_still_screen,
and the function then failed on the assert_screen because the
real login screen took some time longer to appear.
So I tweaked the boot_to_login_screen implementation to work
slightly differently: look for a login screen match first,
*then* - if we're dealing with a graphical login - run
wait_still_screen to defeat the 'old GPU buffer showing a login
screen' problem, and assert the login screen again. While
working on that, I figured we really should consolidate the
various places that handle the bootloader -> login path, since
we were doing it quite differently in all sorts of different
places. As part of that, I converted the base tests to use
POSTINSTALL (and thus go through the shared _wait_login tests)
instead of handling boot themselves. As part of *that*, I
tweaked main.pm to not require that all POSTINSTALL tests have
the _postinstall suffix on their names, as that really doesn't
make sense, and renamed the tests.
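For reference, the revised boot_to_login_screen flow is roughly
the following (a sketch rather than the exact code; the
'login_screen' and 'graphical_login' needle tags and the
timeouts are placeholders):

    use testapi;

    sub boot_to_login_screen {
        my %args = @_;
        $args{timeout} //= 300;
        # look for an actual login screen match first, rather than
        # waiting for the screen to settle (plymouth can sit still
        # on the 'full Fedora logo' stage for quite a while)
        assert_screen "login_screen", $args{timeout};
        if (match_has_tag "graphical_login") {
            # defeat the 'old GPU buffer showing a login screen'
            # problem: wait for the screen to stop changing, then
            # check the login screen really is still there
            wait_still_screen 10, 30;
            assert_screen "login_screen";
        }
    }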
Test Plan: Run all tests, see if they work.
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D1015
Summary:
We have a long-standing problem with all the tests that hit
the repositories. The tests are triggered as soon as a compose
completes. At that point the compose has not yet been synced to
the mirrors, where the default 'fedora' repo definition looks;
the sync happens after the compose completes, and there is also
a metadata sync step that must happen after *that* before any
operation using the 'fedora' repository definition will
actually see the packages from the new compose. Thus all net
install tests and tests that install packages have effectively
been testing the previous compose, not the current one.
We have some thoughts about how to fix this 'properly' (such
that the openQA tests wouldn't have to do anything special,
but their 'fedora' repository would somehow reflect the compose
under test), but none of them is in place right now or likely
to happen in the short term, so in the meantime this should
deal with most of the issues. With this change, everything but
the default_install tests for the netinst images should use
the compose-under-test's Everything tree instead of the 'fedora'
repository, and thus should install and test the correct
packages.
This relies on a corresponding change to openqa_fedora_tools
to set the LOCATION openQA setting (which is simply the base
location of the compose under test).
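Mechanically, the idea is roughly the following (a sketch, not
the exact main.pm change; the GRUB variable, the ARCH fallback
and the Everything path component are assumptions about how the
repo URL gets assembled from LOCATION):

    use testapi;

    # derive an install repo from the base compose location and
    # hand it to anaconda on the kernel command line, instead of
    # relying on the default 'fedora' mirror repository
    my $location = get_var("LOCATION");
    if ($location) {
        my $arch = get_var("ARCH", "x86_64");
        set_var("GRUB", "inst.repo=$location/Everything/$arch/os");
    }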
Test Plan:
Do a full test run, check (as far as you can) tests run sensibly
and use appropriate repositories.
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D989
Summary:
This makes the tests always pass --nogpgcheck to dnf, except
when running on the pre-upgrade release in the upgrade tests
(where GPG checking should always be OK).
Currently we always need to use --nogpgcheck on Rawhide, and we
must also use it on Branched prior to the Bodhi activation
point. At present we don't really have any simple way to know
when the Bodhi activation point has kicked in. We could assume
that it's safe to do GPG checking for 'candidate' (not nightly)
composes, but even that isn't 100% safe and isn't really the
*right* thing to do. So I think for now it's best to just always
use --nogpgcheck, until we come up with a decent way to check
for Bodhi enablement, or releng figures things out so we can
rely on packages being signed in Rawhide and in Branched before
Bodhi enablement.
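Concretely, the dnf calls in the tests wind up looking
something like this (illustrative only; 'PREUPGRADE' is a
hypothetical stand-in for however a test knows it is still
running on the pre-upgrade release):

    use testapi;

    # always pass --nogpgcheck, except on the pre-upgrade release
    # in the upgrade tests, where GPG checking should always be OK
    my @packages = qw(cockpit);
    my $gpgcheck = get_var("PREUPGRADE") ? "" : " --nogpgcheck";
    assert_script_run "dnf -y install$gpgcheck @packages", 300;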
Test Plan:
Check the tests all still run, make sure I didn't
miss any dnf calls.
Reviewers: jskladan, garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D964
Summary:
This adds tests for the Server_cockpit_default and cockpit_basic
test cases. Some notes: I was initially thinking of combining
these into a single test with multiple test modules and coming
up with a system for doing wiki reporting based on individual
test module status, but because we'll also want to do a cockpit
FreeIPA enrol test, I decided against it. We don't really want
to combine all three because then we would skip the cockpit
tests whenever FreeIPA server deployment failed, which isn't
ideal. So since we'll need a separate FreeIPA enrolment test
anyway, it doesn't really make sense to go to the trouble of
designing a system for loading multiple postinstall tests (though
I have an idea for that!) and a per-module wiki reporting system.
This was the most minimal and hopefully reliable method for
running Cockpit from a stock Server install that I could think
of. An alternative approach would be to have, say, the most
recent stable Workstation live as a 'stock' asset and have two
tests, one which runs a stock Server install and just waits and
another which boots the live image and accesses the cockpit
running on the other box, but that seems a bit over-complex.
In case you were wondering about having a Workstation live test
run in parallel with a Server DVD test: it is not possible to
have dependencies between tests for different ISOs, so we can't
do that. One funny thing is the font that winds up getting used
for the desktop, but I don't *think* that should be a problem.
Picking needles was a bit tricky; any improvement suggestions
are welcome. I'm hoping it turns out to be safe to rely on some
dbus log messages being present; I think logging into Cockpit
triggers activation of the realmd dbus interface, so there
*should* always be some messages related to that. An alternative
would just be to match on a sliver of the dark grey table header
and the light grey row beneath it and assume that'll always be
the first message (whatever the message is), but then we have to
find some area of the message details screen which is always
present for any message, and it just seems a tad more likely to
result in false passes. Similarly, I'm making an assumption that
auditd is always going to show up on the first page of the
Services screen and the details screen will always show that
'loaded...enabled' text.
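To give an idea of the shape of cockpit_basic, the flow is
roughly the following (a condensed sketch; the needle tag names
are placeholders, not necessarily the tags actually used in the
needles directory):

    use testapi;

    sub run {
        # log in to the Cockpit web UI as root
        assert_screen "cockpit_login";
        type_string "root";
        send_key "tab";
        type_string get_var("ROOT_PASSWORD");
        send_key "ret";
        # logging in should trigger activation of the realmd dbus
        # interface, so the journal ought to show dbus messages
        assert_and_click "cockpit_logs_link";
        assert_screen "cockpit_journal_dbus";
        # auditd is assumed to appear on the first page of the
        # Services screen, with 'loaded ... enabled' in its details
        assert_and_click "cockpit_services_link";
        assert_and_click "cockpit_service_auditd";
        assert_screen "cockpit_service_loaded_enabled";
    }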
Test Plan:
Run the tests and see if they work! See
https://openqa.stg.fedoraproject.org/tests/21373 and
https://openqa.stg.fedoraproject.org/tests/21371 for my tests.
Reviewers: garretraziel
Reviewed By: garretraziel
Subscribers: tflink
Differential Revision: https://phab.qadevel.cloud.fedoraproject.org/D874