mirror of
https://github.com/rocky-linux/os-autoinst-distri-rocky.git
synced 2024-12-07 12:06:26 +00:00
Add fifloader tests, template schemas, update README
This adds a test suite for fifloader (renamed fifloader.py for test sanity). It adds JSON Schema form schemas for both FIF and upstream openQA template data, and has fifloader (optionally, but by default) validate both input and output data against the schemas.

It also adds a tox.ini configured to run the fifloader tests, use fifloader to validate the template files, and do diff coverage and lint checks. It also adjusts the Zuul config to run tox instead of the test job. There are also some pylint cleanups, since the new tests run pylint.

fifcheck, fifconverter and tojson.pm are removed, as they were mainly only needed for one-time conversion of the old format templates; now they are in the git history we can always recover them if we need them.

Along with all this I updated the README a bit to explain some of it (and explain FIF better), and to explicitly state that this repo is GPLv2+ licensed, and added GPL headers to some of the files.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
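The 'complete' versus 'incomplete' schema split described above can be illustrated with a plain-Python sketch. This is a hand-rolled stand-in for the real jsonschema-based validation in fifloader.py, not its actual code; only the section names come from the schemas, and `sketch_validate` and the sample data are invented:

```python
# Hypothetical stand-in for fifloader.py's jsonschema-based validation:
# a 'complete' FIF document must carry all four top-level sections, while
# an 'incomplete' one needs at least one (per fif-incomplete.json's anyOf).
FIF_SECTIONS = ("Machines", "Products", "Profiles", "TestSuites")

def sketch_validate(data, complete=True):
    present = [key for key in FIF_SECTIONS if key in data]
    if complete:
        return len(present) == len(FIF_SECTIONS)
    return len(present) >= 1

# a partial file (suite name and profile name are made up)
partial = {"TestSuites": {"install_default": {"profiles": {"fedora-universal-64bit": 10}}}}
print(sketch_validate(partial, complete=False))  # True: valid 'incomplete' data
print(sketch_validate(partial, complete=True))   # False: not valid 'complete' data
```

The real loader does the same gating with `jsonschema` (validating inputs against the incomplete schema, and merged output against the complete schema when loading with `--clean`).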
This commit is contained in:
parent 7b9a4306cd
commit 214f2cc8eb
7
.gitignore
vendored
@@ -1,3 +1,10 @@
*.swp
*~
*#
**/*.pyc
**/*.pyo
.cache/
.tox/
.coverage
coverage.xml
@@ -1,8 +1,9 @@
- job:
    name: os-autoinst-test
    run: ci/run.yaml
    name: os-autoinst-tox
    run: ci/tox.yaml
    nodeset: fedora-31-vm

- project:
    check:
      jobs:
        - os-autoinst-test
        - os-autoinst-tox
29
README.md
@@ -1,17 +1,17 @@
openQA tests for the Fedora distribution
========================================
# openQA tests for the Fedora distribution

This repository contains tests and images for testing [Fedora](https://getfedora.org/) with [openQA](http://os-autoinst.github.io/openQA/). The [fedora_openqa library and CLI](https://pagure.io/fedora-qa/fedora_openqa) are used for scheduling tests, and [createhdds](https://pagure.io/fedora-qa/createhdds) is used for creating base disk images for the tests. For openQA installation instructions, see [the Fedora openQA wiki page](https://fedoraproject.org/wiki/OpenQA).

Issues
------
## Issues

[Issues](https://pagure.io/fedora-qa/os-autoinst-distri-fedora/issues) and [pull requests](https://pagure.io/fedora-qa/os-autoinst-distri-fedora/pull-requests) are tracked in [os-autoinst-distri-fedora Pagure](https://pagure.io/fedora-qa/os-autoinst-distri-fedora). Pagure uses a Github-like pull request workflow, so if you're familiar with that, you can easily submit Pagure pull requests. If not, you can read up in the [Pagure documentation](https://docs.pagure.org/pagure/usage/index.html).

Note that this repository does not use the 'gitflow' system, so the main development branch is `master`: please branch from `master` and submit diffs against it. This is not a Python repository and has no tests or linting.
## Requirements

Obviously, this repository is little use without access to an openQA installation. To load templates from this repository, you will need the upstream client tools (packaged as `openqa-client` in Fedora) and the dependencies of `fifloader.py` (see below for more on this tool) installed. Those dependencies are Python 3 and the `jsonschema` library. For running the unit tests, you will additionally need `pytest` and `tox`.

## Test development

Test development
----------------
See official documentation on:

* [basic concept](https://github.com/os-autoinst/openQA/blob/master/docs/GettingStarted.asciidoc)
@@ -21,6 +21,10 @@ See official documentation on:

See [this example repo](https://github.com/os-autoinst/os-autoinst-distri-example) on how tests should be structured.

### FIF template format

The test templates in this repository (files ending in `fif.json`) are not in the format expected by the upstream template loader, and are not directly compatible with it. They are in a format referred to as 'FIF' ('Fedora Intermediate Format'), which is parsed into the upstream format by the `fifloader.py` utility found in this repository. This format is intended to be more convenient for humans to read and edit. It is more fully explained in the docstring at the top of `fifloader.py`; please refer to this when adding new tests to the templates. A command like `./fifloader.py --load templates.fif.json templates-updates.fif.json` can be used to load templates in the FIF format (this converts them to the upstream format, and calls the upstream template loader on the converted data). See `./fifloader.py -h` for further details on `fifloader.py`.

### main.pm modular architecture

Since openQA uses a single entrypoint for all tests (main.pm), we have decided to take advantage of this and make the tests modular. A basic pass through main.pm (without any variables set) results in the most basic installation test being executed. Developers can customize it with additional variables (for example, setting `PACKAGE_SET=minimal` runs the installation with only the minimal package set).
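As a rough illustration of the FIF layout described above: the four top-level sections match the FIF schemas added in this commit, settings are plain dicts, and each test suite carries a `profiles` dict mapping profile names to priorities. All the entry values below are invented examples, not real template data:

```python
# Invented minimal FIF-shaped data, following the section names from the
# FIF schemas (fif-complete.json and friends)
fif_data = {
    "Machines": {"64bit": {"backend": "qemu", "settings": {"QEMUCPU": "Nehalem"}}},
    "Products": {"fedora-universal-x86_64-32": {
        "distri": "fedora", "flavor": "universal", "arch": "x86_64", "version": "32"}},
    "Profiles": {"fedora-universal-x86_64-32-64bit": {
        "machine": "64bit", "product": "fedora-universal-x86_64-32"}},
    "TestSuites": {"install_default": {
        "settings": {"PACKAGE_SET": "default"},
        "profiles": {"fedora-universal-x86_64-32-64bit": 10}}},
}

# a profile ties together one machine and one product, so each entry
# must reference a machine and a product defined in the same data
for profile in fif_data["Profiles"].values():
    assert profile["machine"] in fif_data["Machines"]
    assert profile["product"] in fif_data["Products"]
print("profiles are consistent")
```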
@@ -79,8 +83,9 @@ and `test_flags()` method, inheriting from one of the classes mentioned above.

5. Create needles (images) by using interactive mode and the needles editor in the WebUI.
6. Add the new test suite and profiles into the `templates.fif.json` file (and/or `templates-updates.fif.json`, if the test is applicable to the update testing workflow).
7. Add the new Test suite and Test case into the [`conf_test_suites.py`](https://pagure.io/fedora-qa/fedora_openqa/blob/master/f/fedora_openqa/conf_test_suites.py) file in the fedora_openqa repository.
8. Run `tox`. This will check that the templates are valid.
9. Open a pull request for the os-autoinst-distri-fedora changes in [Pagure](https://pagure.io/fedora-qa/os-autoinst-distri-fedora). Pagure uses a Github-style workflow (summary: fork the project via the web interface, push your changes to a branch on your fork, then use the web interface to submit a pull request). See the [Pagure documentation](https://docs.pagure.org/pagure/usage/index.html) for more details.
10. Open a pull request in [fedora_openqa Pagure](https://pagure.io/fedora-qa/fedora_openqa) for any necessary fedora_openqa changes.
### Language handling

@@ -93,3 +98,9 @@ Tests can run in different languages. To set the language which will be used for

It is very important, therefore, that needles have the correct tags. Any needle which is expected to match for tests run in *any* language must have no `LANGUAGE` tags. Other needles must have the appropriate tag(s) for the languages they are expected to match. The safest option if you are unsure is to set no `LANGUAGE` tag(s). The only danger of this is that missing translations may not be caught.

Note that tags of the form `ENV-INSTLANG-(anything)` are useless artefacts and should be removed.

## Licensing and credits

The contents of this repository are available under the GPL, version 2 or any later version. A copy is included as COPYING. Note that we do not include the full GPL header in every single test file, as they are quite short and this would waste a lot of space.

The tools and tests in this repository are maintained by the [Fedora QA team](https://fedoraproject.org/wiki/QA). We are grateful to the [openSUSE](https://opensuse.org) team for developing openQA, and for the [openSUSE tests](https://github.com/os-autoinst/os-autoinst-distri-opensuse) on which this repository was initially based (and from which occasional pieces are still borrowed).
@@ -1,6 +0,0 @@
- hosts: all
  tasks:
    - name: Run test
      command: ls -la
      args:
        chdir: '{{ zuul.project.src_dir }}'
14
ci/tox.yaml
Normal file
@@ -0,0 +1,14 @@
- hosts: all
  tasks:
    - name: Ensure tox is installed
      include_role:
        name: ensure-tox
    - name: Install all Python versions to test
      package:
        name: ['python37', 'python38', 'python39']
        state: present
      become: yes
    - name: Run tox
      command: tox
      args:
        chdir: '{{ zuul.project.src_dir }}'
43
fifcheck
@@ -1,43 +0,0 @@
#!/bin/python3

"""This is a sanity check for the Fedora Intermediate Format (fif) converter and loader. It reads
in templates.old.json and templates-updates.old.json - which are expected to be our original-format
templates in JSON format - runs them through the converter to the intermediate format, then runs
them through the loader *from* the intermediate format, and (via DeepDiff, thanks jskladan!) checks
that the results are equivalent to the input, pace a couple of expected differences.
"""

from deepdiff import DeepDiff
import json
import subprocess

with open('templates.old.json', 'r') as tempfh:
    origtemp = json.load(tempfh)
with open('templates-updates.old.json', 'r') as updfh:
    origupd = json.load(updfh)

# run the converter
subprocess.run(['./fifconverter.py'])
# run the loader on the converted files
subprocess.run(['./fifloader.py', '--write', 'templates.fif.json', 'templates-updates.fif.json'])
with open('generated.json', 'r') as generatedfh:
    generated = json.load(generatedfh)

# merge origs
origtemp['Products'].extend(origupd['Products'])
origtemp['TestSuites'].extend(origupd['TestSuites'])
origtemp['JobTemplates'].extend(origupd['JobTemplates'])

for item in generated['Products']:
    # we generate the product names in the converter, our original
    # templates don't have them
    item['name'] = ""
for item in generated['JobTemplates']:
    if item['group_name'] == 'fedora':
        # we don't explicitly specify this in our original templates,
        # but the converter adds it (rather than relying on openQA
        # to guess when loading)
        del item['group_name']
ddiff = DeepDiff(origtemp, generated, ignore_order=True, report_repetition=True)
# if this is just {}, we're good
print(ddiff)
104
fifconverter
@@ -1,104 +0,0 @@
#!/bin/python3

"""
This script takes JSON-formatted openQA template data (in the older format with a JobTemplates
dict, not the newer YAML-ish format organized by job group) and converts to an intermediate format
(Fedora Intermediate Format - 'fif') intended to be easier for human editing. It extracts all the
unique 'environment profiles' - a combination of machine and product - from the JobTemplates and
stores them in a 'Profiles' dict; it then adds a 'profiles' key to each test suite, indicating
which profiles that suite is run on. It is fairly easy to reverse this process to reproduce the
openQA loader-compatible data, but the intermediate format is more friendly to a human editor.
Adding a new test suite to run on existing 'profiles' only requires adding the suite and an
appropriate 'profiles' dict. Adding a new profile involves adding the machine and/or product,
manually adding the profile to the Profiles dict, and then adding the profile to all the test
suites which should be run on it. See also fifloader.py, which handles converting FIF input to
upstream format, and optionally can pass it through to the upstream loader.
"""

import json

with open('templates.old.json', 'r') as tempfh:
    tempdata = json.load(tempfh)
with open('templates-updates.old.json', 'r') as updfh:
    updata = json.load(updfh)

def _synthesize_product_name(product):
    """Synthesize a product name from a product dict. We do this when
    reading the templates file and also when constructing the profiles
    so use a function to make sure they both do it the same way.
    """
    return "-".join((product['distri'], product['flavor'], product['arch'], product['version']))

def read_templates(templates):
    newtemps = {}
    if 'Machines' in templates:
        newtemps['Machines'] = {}
        for machine in templates['Machines']:
            # condense the stupid settings format
            machine['settings'] = {settdict['key']: settdict['value'] for settdict in machine['settings']}
            # just use a dict, not a list of dicts with 'name' keys...
            name = machine.pop('name')
            newtemps['Machines'][name] = machine
    if 'Products' in templates:
        newtemps['Products'] = {}
        for product in templates['Products']:
            # condense the stupid settings format
            product['settings'] = {settdict['key']: settdict['value'] for settdict in product['settings']}
            # synthesize a name, as we don't have any in our templates
            # and we can use them in the scenarios. however, note that
            # openQA itself doesn't let you use the product name as a
            # key when loading templates, unlike the machine name, our
            # loader has to reverse this and provide the full product
            # dict to the upstream loader
            name = _synthesize_product_name(product)
            # this is always an empty string in our templates
            del product['name']
            newtemps['Products'][name] = product
    if 'TestSuites' in templates:
        newtemps['TestSuites'] = {}
        for testsuite in templates['TestSuites']:
            # condense the stupid settings format
            testsuite['settings'] = {settdict['key']: settdict['value'] for settdict in testsuite['settings']}
            # just use a dict, not a list of dicts with 'name' keys...
            name = testsuite.pop('name')
            newtemps['TestSuites'][name] = testsuite
    profiles = {}
    for jobtemp in templates['JobTemplates']:
        # figure out the profile for each job template and add it to
        # the dict. For Fedora, the group name is predictable based on
        # the arch and whether it's an update test; the intermediate
        # loader figures that out
        profile = {
            'machine': jobtemp['machine']['name'],
            'product': _synthesize_product_name(jobtemp['product']),
        }
        profname = '-'.join([profile['product'], profile['machine']])
        # keep track of all the profiles we've hit
        profiles[profname] = profile

        test = jobtemp['test_suite']['name']
        prio = jobtemp['prio']
        try:
            suite = newtemps['TestSuites'][test]
        except KeyError:
            # this is a templates-updates JobTemplate which refers to a
            # TestSuite defined in templates. What we do here is define
            # a partial TestSuite which contains only the name and the
            # profiles; the loader for this format knows how to combine
            # dicts (including incomplete ones) from multiple source
            # files into one big final-format lump
            suite = {}
            newtemps['TestSuites'][test] = suite
        if 'profiles' in suite:
            suite['profiles'][profname] = prio
        else:
            suite['profiles'] = {profname: prio}

    newtemps['Profiles'] = profiles
    return newtemps

with open('templates.fif.json', 'w') as newtempfh:
    json.dump(read_templates(tempdata), newtempfh, sort_keys=True, indent=4)

with open('templates-updates.fif.json', 'w') as newtempfh:
    json.dump(read_templates(updata), newtempfh, sort_keys=True, indent=4)
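The profile naming scheme the converter used survives in the templates, so it is worth seeing in miniature. The join logic below matches `_synthesize_product_name` and the `profname` construction above; the product and machine values themselves are made-up examples:

```python
def synthesize_product_name(product):
    # same field order as fifconverter's _synthesize_product_name:
    # distri, flavor, arch, version
    return "-".join((product['distri'], product['flavor'], product['arch'], product['version']))

# example values only - real templates define their own
product = {'distri': 'fedora', 'flavor': 'universal', 'arch': 'x86_64', 'version': '32'}
prodname = synthesize_product_name(product)
# a profile name is the product name joined with the machine name
profname = '-'.join([prodname, '64bit'])
print(prodname)   # fedora-universal-x86_64-32
print(profname)   # fedora-universal-x86_64-32-64bit
```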
@@ -1,4 +1,23 @@
#!/bin/python3
#!/usr/bin/python3

# Copyright (C) 2020 Red Hat
#
# This file is part of os-autoinst-distri-fedora.
#
# os-autoinst-distri-fedora is free software; you can redistribute it
# and/or modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation, either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Author: Adam Williamson <awilliam@redhat.com>

"""This is an openQA template loader/converter for FIF, the Fedora Intermediate Format. It reads
from one or more files expected to contain FIF JSON-formatted template data; read on for details
@@ -56,12 +75,48 @@ loader will combine those into a single complete TestSuite entry with the `profi

import argparse
import json
import os
import subprocess
import sys

def merge_inputs(inputs):
    """Merge multiple input files. Expects JSON file names. Returns
    a 5-tuple of machines, products, profiles, testsuites and
import jsonschema

SCHEMAPATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'schemas')

def schema_validate(instance, fif=True, complete=True, schemapath=SCHEMAPATH):
    """Validate some input against one of our JSON schemas. We have
    'complete' and 'incomplete' schemas for FIF and the upstream
    template format. The 'complete' schemas expect the validated
    input to contain a complete set of data (everything needed for
    an openQA deployment to actually run tests). The 'incomplete'
    schemas expect the validated input to contain at least *some*
    valid data - they are intended for validating input files which
    will be combined into 'complete' data, or which will be loaded
    without --clean, to add to an existing configuration.
    """
    filename = 'openqa-'
    if fif:
        filename = 'fif-'
    if complete:
        filename += 'complete.json'
    else:
        filename += 'incomplete.json'
    base_uri = "file://{0}/".format(schemapath)
    resolver = jsonschema.RefResolver(base_uri, None)
    schemafile = os.path.join(schemapath, filename)
    with open(schemafile, 'r') as schemafh:
        schema = json.load(schemafh)
    # raises an exception if it fails
    jsonschema.validate(instance=instance, schema=schema, resolver=resolver)
    return True

# you could refactor this just using a couple of dicts, but I don't
# think that would really make it *better*
# pylint:disable=too-many-locals, too-many-branches
def merge_inputs(inputs, validate=False, clean=False):
    """Merge multiple input files. Expects JSON file names. Optionally
    validates the input files before merging, and the merged output.
    Returns a 5-tuple of machines, products, profiles, testsuites and
    jobtemplates (the first four as dicts, the fifth as a list).
    """
    machines = {}
@@ -70,20 +125,25 @@ def merge_inputs(inputs):
    testsuites = {}
    jobtemplates = []

    for input in inputs:
    for _input in inputs:
        try:
            with open(input, 'r') as inputfh:
            with open(_input, 'r') as inputfh:
                data = json.load(inputfh)
        # we're just wrapping the exception a bit, so this is fine
        # pylint:disable=broad-except
        except Exception as err:
            print("Reading input file {} failed!".format(input))
            print("Reading input file {} failed!".format(_input))
            sys.exit(str(err))
        # validate against incomplete schema
        if validate:
            schema_validate(data, fif=True, complete=False)

        # simple merges for all these
        for (datatype, tgt) in (
                ('Machines', machines),
                ('Products', products),
                ('Profiles', profiles),
                ('JobTemplates', jobtemplates),
        ):
            if datatype in data:
                if datatype == 'JobTemplates':
@@ -106,9 +166,27 @@ def merge_inputs(inputs):
        except KeyError:
            testsuites[name] = newsuite

    # validate combined data, against complete schema if clean is True
    # (as we'd expect data to be loaded with --clean to be complete),
    # incomplete schema otherwise
    if validate:
        merged = {}
        if machines:
            merged['Machines'] = machines
        if products:
            merged['Products'] = products
        if profiles:
            merged['Profiles'] = profiles
        if testsuites:
            merged['TestSuites'] = testsuites
        if jobtemplates:
            merged['JobTemplates'] = jobtemplates
        schema_validate(merged, fif=True, complete=clean)
        print("Input template data is valid")

    return (machines, products, profiles, testsuites, jobtemplates)

def generate_job_templates(machines, products, profiles, testsuites):
def generate_job_templates(products, profiles, testsuites):
    """Given machines, products, profiles and testsuites (after
    merging, but still in intermediate format), generates job
    templates and returns them as a list.
@@ -127,7 +205,7 @@ def generate_job_templates(machines, products, profiles, testsuites):
        jobtemplate['arch'] = product['arch']
        jobtemplate['flavor'] = product['flavor']
        jobtemplate['distri'] = product['distri']
        jobtemplate['version']= product['version']
        jobtemplate['version'] = product['version']
        if jobtemplate['machine_name'] == 'ppc64le':
            if 'updates' in product['flavor']:
                jobtemplate['group_name'] = "Fedora PowerPC Updates"
@@ -139,8 +217,8 @@ def generate_job_templates(machines, products, profiles, testsuites):
            else:
                jobtemplate['group_name'] = "Fedora AArch64"
        elif 'updates' in product['flavor']:
            # x86_64 updates
            jobtemplate['group_name'] = "Fedora Updates"
        jobtemplates.append(jobtemplate)
    return jobtemplates
@@ -171,20 +249,24 @@ def reverse_qol(machines, products, testsuites):
            converted.append({'key': key, 'value': value})
        return converted

    # drop profiles from test suites - these are only used for job
    # template generation and should not be in final output. if suite
    # *only* contained profiles, drop it
    for suite in testsuites.values():
        del suite['profiles']
    testsuites = {name: suite for (name, suite) in testsuites.items() if suite}

    machines = to_list_of_dicts(machines)
    products = to_list_of_dicts(products)
    testsuites = to_list_of_dicts(testsuites)
    for datatype in (machines, products, testsuites):
        for item in datatype:
            item['settings'] = dumb_settings(item['settings'])
            if 'profiles' in item:
                # this is only part of the intermediate format, should
                # not be in the final output
                del item['profiles']
            if 'settings' in item:
                item['settings'] = dumb_settings(item['settings'])

    return (machines, products, testsuites)

def parse_args():
def parse_args(args):
    """Parse arguments with argparse."""
    parser = argparse.ArgumentParser(description=(
        "Alternative openQA template loader/generator, using a more "
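The settings handling being reversed in this hunk is easy to demonstrate standalone: FIF keeps settings as a plain dict, while the upstream loader expects a list of `{'key': ..., 'value': ...}` dicts. The helper below mirrors `reverse_qol`'s inner `dumb_settings`, and `condense_settings` mirrors the dict comprehension fifconverter used in the other direction; the setting names themselves are invented for illustration:

```python
def dumb_settings(settdict):
    # expand a FIF-style settings dict to the upstream list-of-dicts form,
    # as reverse_qol's inner helper does
    return [{'key': key, 'value': value} for (key, value) in settdict.items()]

def condense_settings(settlist):
    # the opposite direction, as fifconverter did when creating FIF files
    return {sett['key']: sett['value'] for sett in settlist}

fif_style = {'PACKAGE_SET': 'minimal', 'POSTINSTALL': 'base'}  # example settings
upstream_style = dumb_settings(fif_style)
assert condense_settings(upstream_style) == fif_style  # round-trips cleanly
print(upstream_style)
```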
@@ -213,25 +295,36 @@ def parse_args():
        '-u', '--update', help="If specified with --load, passed to "
        "upstream loader and behaves as documented there.",
        action='store_true')
    parser.add_argument(
        '--no-validate', help="Do not do schema validation on input "
        "or output data", action='store_false', dest='validate')
    parser.add_argument(
        'files', help="Input JSON files", nargs='+')
    return parser.parse_args()
    return parser.parse_args(args)

def run():
def run(args):
    """Read in arguments and run the appropriate steps."""
    args = parse_args()
    if not args.write and not args.load:
        sys.exit("Neither --write nor --load specified! Doing nothing.")
    (machines, products, profiles, testsuites, jobtemplates) = merge_inputs(args.files)
    jobtemplates.extend(generate_job_templates(machines, products, profiles, testsuites))
    args = parse_args(args)
    if not args.validate and not args.write and not args.load:
        sys.exit("--no-validate specified and neither --write nor --load specified! Doing nothing.")
    (machines, products, profiles, testsuites, jobtemplates) = merge_inputs(
        args.files, validate=args.validate, clean=args.clean)
    jobtemplates.extend(generate_job_templates(products, profiles, testsuites))
    (machines, products, testsuites) = reverse_qol(machines, products, testsuites)
    # now produce the output in upstream-compatible format
    out = {
        'JobTemplates': jobtemplates,
        'Machines': machines,
        'Products': products,
        'TestSuites': testsuites
    }
    out = {}
    if jobtemplates:
        out['JobTemplates'] = jobtemplates
    if machines:
        out['Machines'] = machines
    if products:
        out['Products'] = products
    if testsuites:
        out['TestSuites'] = testsuites
    if args.validate:
        # validate generated data against upstream schema
        schema_validate(out, fif=False, complete=args.clean)
        print("Generated template data is valid")
    if args.write:
        # write generated output to given filename
        with open(args.filename, 'w') as outfh:
@@ -252,7 +345,7 @@ def run():
def main():
    """Main loop."""
    try:
        run()
        run(args=sys.argv[1:])
    except KeyboardInterrupt:
        sys.stderr.write("Interrupted, exiting...\n")
        sys.exit(1)
16
main.pm
@@ -1,18 +1,20 @@
# Copyright (C) 2014 SUSE Linux GmbH
# Copyright (C) 2020 Red Hat
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
# This file is part of os-autoinst-distri-fedora.
#
# os-autoinst-distri-fedora is free software; you can redistribute it
# and/or modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation, either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

use strict;
use testapi;
10
schemas/README.md
Normal file
@@ -0,0 +1,10 @@
# FIF and openQA template schemas

This directory contains [JSON Schema](https://json-schema.org/) format schemas for the FIF and
upstream openQA template data formats. `fif-complete.json` and `fif-incomplete.json` are the FIF
schemas; `openqa-complete.json` and `openqa-incomplete.json` are the upstream schemas. The
'complete' schemas expect the input to contain a *complete* set of template data (enough for an
openQA instance to schedule and run tests); the *incomplete* schemas expect the input to contain
only *some* valid template data (these may be files that will be combined into complete data, or
files intended to be loaded without `--clean` only as supplementary data to an openQA deployment
with existing data). The other files are subcomponents of the schemas that are loaded by reference.
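How partial data from several files combines into complete data (the reason the 'incomplete' schemas exist) can be sketched in plain Python. This is a simplified, hypothetical stand-in for fifloader.py's merge step, not its real code; the suite, setting, and profile names are illustrative only:

```python
def merge_suites(existing, new):
    # fold a possibly-partial TestSuites dict into an existing one,
    # merging each suite's 'profiles' rather than overwriting them -
    # a simplified sketch of fifloader.py's merge behaviour
    for (name, suite) in new.items():
        target = existing.setdefault(name, {})
        profiles = suite.pop('profiles', {})
        target.update(suite)
        target.setdefault('profiles', {}).update(profiles)
    return existing

# templates.fif.json defines the full suite...
base = {'install_default': {'settings': {'PACKAGE_SET': 'default'},
                            'profiles': {'fedora-universal-x86_64': 10}}}
# ...templates-updates.fif.json only adds profiles to it
extra = {'install_default': {'profiles': {'fedora-updates-x86_64': 20}}}
merged = merge_suites(base, extra)
print(sorted(merged['install_default']['profiles']))
```

After merging, the combined data can satisfy the 'complete' schema even though each source file on its own only satisfied the 'incomplete' one.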
7
schemas/fif-arch.json
Normal file
@@ -0,0 +1,7 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-arch.json",
    "title": "FIF arch schema",
    "type": "string",
    "enum": [ "x86_64", "arm", "aarch64", "ppc64le" ]
}
21
schemas/fif-complete.json
Normal file
@@ -0,0 +1,21 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-complete.json",
    "type": "object",
    "title": "Schema for complete Fedora Intermediate Format (FIF) openQA job template data",
    "required": [
        "Machines",
        "Products",
        "Profiles",
        "TestSuites"
    ],
    "properties": {
        "Machines": { "$ref": "fif-machines.json" },
        "Products": { "$ref": "fif-products.json" },
        "Profiles": { "$ref": "fif-profiles.json" },
        "TestSuites": { "$ref": "fif-testsuites.json" },
        "JobTemplates": { "$ref": "openqa-jobtemplates.json" }
    },
    "additionalProperties": false
}
6
schemas/fif-distri.json
Normal file
@@ -0,0 +1,6 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-distri.json",
    "title": "FIF distri schema",
    "const": "fedora"
}
21
schemas/fif-incomplete.json
Normal file
@@ -0,0 +1,21 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-incomplete.json",
    "type": "object",
    "title": "Schema for incomplete Fedora Intermediate Format (FIF) openQA job template data",
    "anyOf": [
        { "required": [ "Machines" ]},
        { "required": [ "Products" ]},
        { "required": [ "Profiles" ]},
        { "required": [ "TestSuites" ]}
    ],
    "properties": {
        "Machines": { "$ref": "fif-machines.json" },
        "Products": { "$ref": "fif-products.json" },
        "Profiles": { "$ref": "fif-profiles.json" },
        "TestSuites": { "$ref": "fif-testsuites.json" },
        "JobTemplates": { "$ref": "openqa-jobtemplates.json" }
    },
    "additionalProperties": false
}
15
schemas/fif-machine.json
Normal file
@@ -0,0 +1,15 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-machine.json",
    "title": "FIF single machine schema",
    "type": "object",
    "required": [
        "backend",
        "settings"
    ],
    "properties": {
        "backend": { "type": "string" },
        "settings": { "$ref": "fif-settingshash.json" }
    },
    "additionalProperties": false
}
8 schemas/fif-machines.json Normal file
@ -0,0 +1,8 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-machines.json",
    "title": "FIF Machines object schema",
    "type": "object",
    "minProperties": 1,
    "additionalProperties": { "$ref": "fif-machine.json" }
}
21 schemas/fif-product.json Normal file
@ -0,0 +1,21 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-product.json",
    "title": "FIF single product schema",
    "type": "object",
    "required": [
        "arch",
        "distri",
        "flavor",
        "version"
    ],
    "properties": {
        "arch": { "$ref": "fif-arch.json" },
        "distri": { "$ref": "fif-distri.json" },
        "flavor": { "type": "string" },
        "version": { "$ref": "fif-version.json" },
        "settings": { "$ref": "fif-settingshash.json" },
        "name": { "type": "string" }
    },
    "additionalProperties": false
}
8 schemas/fif-products.json Normal file
@ -0,0 +1,8 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-products.json",
    "title": "FIF Products object schema",
    "type": "object",
    "minProperties": 1,
    "additionalProperties": { "$ref": "fif-product.json" }
}
14 schemas/fif-profile.json Normal file
@ -0,0 +1,14 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-profile.json",
    "title": "FIF single profile schema",
    "required": [
        "machine",
        "product"
    ],
    "properties": {
        "machine": { "type": "string" },
        "product": { "type": "string" }
    },
    "additionalProperties": false
}
8 schemas/fif-profiles.json Normal file
@ -0,0 +1,8 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-profiles.json",
    "title": "FIF Profiles object schema",
    "type": "object",
    "minProperties": 1,
    "additionalProperties": { "$ref": "fif-profile.json" }
}
10 schemas/fif-settingshash.json Normal file
@ -0,0 +1,10 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-settingshash.json",
    "title": "FIF settings hash schema",
    "type": "object",
    "patternProperties": {
        "^[+]?[A-Z0-9_].*$": { "type": [ "string", "number" ] }
    },
    "additionalProperties": false
}
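The `patternProperties` entry above only admits keys that look like openQA variable names (an optional leading `+`, then an uppercase letter, digit or underscore), with string or numeric values. A quick sketch of the same check with Python's `re` (a hypothetical helper for illustration, not part of fifloader):

```python
import re

# Same key pattern the fif-settingshash schema uses.
KEY_RE = re.compile(r"^[+]?[A-Z0-9_].*$")

def valid_settings_hash(settings):
    """True if every key matches the schema's key pattern and every
    value is a string or number, as the schema requires."""
    return all(
        KEY_RE.match(key) and isinstance(value, (str, int, float))
        for key, value in settings.items()
    )

print(valid_settings_hash({"QEMURAM": 4096, "+HDD_1": "disk.qcow2"}))  # True
print(valid_settings_hash({"lowercase": "no"}))  # False
```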
17 schemas/fif-testsuite.json Normal file
@ -0,0 +1,17 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-testsuite.json",
    "title": "FIF single test suite schema",
    "required": [
        "profiles"
    ],
    "properties": {
        "profiles": {
            "type": "object",
            "title": "A testsuite profile entry schema",
            "additionalProperties": { "type": "number" }
        },
        "settings": { "$ref": "fif-settingshash.json" }
    },
    "additionalProperties": false
}
8 schemas/fif-testsuites.json Normal file
@ -0,0 +1,8 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-testsuites.json",
    "title": "FIF TestSuites object schema",
    "type": "object",
    "minProperties": 1,
    "additionalProperties": { "$ref": "fif-testsuite.json" }
}
7 schemas/fif-version.json Normal file
@ -0,0 +1,7 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "fif-version.json",
    "title": "FIF version schema",
    "type": "string",
    "pattern": "^([*]|[[:digit:]]{1,3})$"
}
23 schemas/openqa-complete.json Normal file
@ -0,0 +1,23 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-complete.json",
    "type": "object",
    "title": "Schema for complete upstream format openQA job template data",
    "required": [
        "Machines",
        "TestSuites",
        "Products"
    ],
    "anyOf": [
        {"required": [ "JobTemplates" ]},
        {"required": [ "JobGroups" ]}
    ],
    "properties": {
        "Machines": { "$ref": "openqa-machines.json" },
        "TestSuites": { "$ref": "openqa-testsuites.json" },
        "Products": { "$ref": "openqa-products.json" },
        "JobTemplates": { "$ref": "openqa-jobtemplates.json" },
        "JobGroups": { "$ref": "openqa-jobgroups.json" }
    },
    "additionalProperties": false
}
21 schemas/openqa-incomplete.json Normal file
@ -0,0 +1,21 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-incomplete.json",
    "type": "object",
    "title": "Schema for incomplete upstream format openQA job template data",
    "anyOf": [
        {"required": [ "Machines" ]},
        {"required": [ "TestSuites" ]},
        {"required": [ "Products" ]},
        {"required": [ "JobTemplates" ]},
        {"required": [ "JobGroups" ]}
    ],
    "properties": {
        "Machines": { "$ref": "openqa-machines.json" },
        "TestSuites": { "$ref": "openqa-testsuites.json" },
        "Products": { "$ref": "openqa-products.json" },
        "JobTemplates": { "$ref": "openqa-jobtemplates.json" },
        "JobGroups": { "$ref": "openqa-jobgroups.json" }
    },
    "additionalProperties": false
}
15 schemas/openqa-jobgroup.json Normal file
@ -0,0 +1,15 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-jobgroup.json",
    "title": "Upstream openQA single job group schema",
    "type": "object",
    "required": [
        "group_name",
        "template"
    ],
    "properties": {
        "group_name": { "type": "string" },
        "template": { "type": "string" }
    },
    "additionalProperties": false
}
8 schemas/openqa-jobgroups.json Normal file
@ -0,0 +1,8 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-jobgroups.json",
    "title": "Upstream openQA JobGroups array schema",
    "type": "array",
    "minItems": 1,
    "items": { "$ref": "openqa-jobgroup.json" }
}
59 schemas/openqa-jobtemplate.json Normal file
@ -0,0 +1,59 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-jobtemplate.json",
    "title": "Upstream openQA single job template schema",
    "type": "object",
    "allOf": [
        {
            "oneOf": [
                { "required": [ "test_suite" ] },
                { "required": [ "test_suite_name" ] }
            ]
        },
        {
            "oneOf": [
                { "required": [ "machine" ] },
                { "required": [ "machine_name" ] }
            ]
        },
        {
            "oneOf": [
                { "required": [ "product" ] },
                { "required": [
                    "arch",
                    "distri",
                    "flavor",
                    "version"
                    ]
                }
            ]
        }
    ],
    "properties": {
        "group_name": { "type": "string" },
        "machine": {
            "type": "object",
            "required": [ "name" ],
            "properties": {
                "name": { "type": "string" }
            }
        },
        "machine_name": { "type": "string" },
        "prio": { "type": "number" },
        "product": { "$ref": "openqa-product.json" },
        "arch": { "type": "string" },
        "distri": { "type": "string" },
        "flavor": { "type": "string" },
        "version": { "type": "string" },
        "test_suite": {
            "type": "object",
            "required": [ "name" ],
            "properties": {
                "name": { "type": "string" }
            }
        },
        "test_suite_name": { "type": "string" }
    },
    "additionalProperties": false
}
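The `allOf`/`oneOf` combination above is the most involved constraint in these schemas: each job template must name its test suite, machine and product in exactly one of two ways (an inline object or a `_name` string, or for products either a `product` object or all four of `arch`/`distri`/`flavor`/`version`). A hedged stdlib sketch of that exclusivity check (illustrative only; real validation is done by jsonschema against the schema itself):

```python
def template_refs_ok(template):
    """Check the three oneOf constraints from openqa-jobtemplate.json:
    exactly one way of naming the test suite, exactly one way of naming
    the machine, and either a 'product' object or all four inline
    product fields (but not both)."""
    suite_ok = ("test_suite" in template) != ("test_suite_name" in template)
    machine_ok = ("machine" in template) != ("machine_name" in template)
    inline = all(key in template for key in ("arch", "distri", "flavor", "version"))
    product_ok = ("product" in template) != inline
    return suite_ok and machine_ok and product_ok

good = {"test_suite_name": "base_selinux", "machine_name": "64bit",
        "arch": "x86_64", "distri": "fedora", "flavor": "Server-dvd-iso",
        "version": "32"}
print(template_refs_ok(good))  # True
```

Note this sketch treats "some but not all inline product fields" as simply not satisfying the inline branch, which matches how `oneOf` with `required` behaves.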
8 schemas/openqa-jobtemplates.json Normal file
@ -0,0 +1,8 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-jobtemplates.json",
    "title": "openQA JobTemplates array schema",
    "type": "array",
    "minItems": 1,
    "items": { "$ref": "openqa-jobtemplate.json" }
}
18 schemas/openqa-machine.json Normal file
@ -0,0 +1,18 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-machine.json",
    "title": "Upstream openQA single machine schema",
    "type": "object",
    "required": [
        "name",
        "backend",
        "settings"
    ],
    "properties": {
        "name": { "type": "string" },
        "description": { "type": "string" },
        "backend": { "type": "string" },
        "settings": { "$ref": "openqa-settingsarray.json" }
    },
    "additionalProperties": false
}
8 schemas/openqa-machines.json Normal file
@ -0,0 +1,8 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-machines.json",
    "title": "Upstream openQA Machines array schema",
    "type": "array",
    "minItems": 1,
    "items": { "$ref": "openqa-machine.json" }
}
22 schemas/openqa-product.json Normal file
@ -0,0 +1,22 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-product.json",
    "title": "Upstream openQA single product schema",
    "type": "object",
    "required": [
        "arch",
        "distri",
        "flavor",
        "version"
    ],
    "properties": {
        "arch": { "type": "string" },
        "distri": { "type": "string" },
        "flavor": { "type": "string" },
        "version": { "type": "string" },
        "settings": { "$ref": "openqa-settingsarray.json" },
        "name": { "type": "string" },
        "description": { "type": "string" }
    },
    "additionalProperties": false
}
8 schemas/openqa-products.json Normal file
@ -0,0 +1,8 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-products.json",
    "title": "Upstream openQA Products array schema",
    "type": "array",
    "minItems": 1,
    "items": { "$ref": "openqa-product.json" }
}
21 schemas/openqa-settingsarray.json Normal file
@ -0,0 +1,21 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-settingsarray.json",
    "title": "openQA settings array schema",
    "type": "array",
    "items": {
        "type": "object",
        "required": [
            "key",
            "value"
        ],
        "properties": {
            "key": {
                "type": "string",
                "pattern": "^[+]?[A-Z0-9_].*$"
            },
            "value": { "type": [ "string", "number" ] }
        },
        "additionalProperties": false
    }
}
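This is the shape where FIF and the upstream format diverge most visibly: FIF stores settings as a plain hash, while the upstream loader wants this array of `key`/`value` objects. A sketch of the conversion (fifloader's `reverse_qol` presumably does something along these lines; the function name here is illustrative):

```python
def settings_hash_to_array(settings):
    """Convert a FIF-style settings hash into the upstream openQA
    settings array of {"key": ..., "value": ...} dicts. Sorting keys
    keeps the output deterministic."""
    return [{"key": key, "value": value} for (key, value) in sorted(settings.items())]

print(settings_hash_to_array({"QEMUCPU": "host", "OFW": 1}))
# [{'key': 'OFW', 'value': 1}, {'key': 'QEMUCPU', 'value': 'host'}]
```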
14 schemas/openqa-testsuite.json Normal file
@ -0,0 +1,14 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-testsuite.json",
    "title": "Upstream openQA single test suite schema",
    "required": [
        "name"
    ],
    "properties": {
        "name": { "type": "string" },
        "description": { "type": "string" },
        "settings": { "$ref": "openqa-settingsarray.json" }
    },
    "additionalProperties": false
}
8 schemas/openqa-testsuites.json Normal file
@ -0,0 +1,8 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "openqa-testsuites.json",
    "title": "Upstream openQA TestSuites array schema",
    "type": "array",
    "minItems": 1,
    "items": { "$ref": "openqa-testsuite.json" }
}
15 tojson.pm
@ -1,15 +0,0 @@
#!/bin/perl

use JSON;

my $templates = do './templates';
my $updates = do './templates-updates';

my $tempjson = JSON->new->utf8(1)->pretty(1)->encode($templates);
my $updjson = JSON->new->utf8(1)->pretty(1)->encode($updates);

open(FILE, "> templates.json");
print FILE $tempjson;

open (FILE, "> templates-updates.json");
print FILE $updjson;
21 tox.ini Normal file
@ -0,0 +1,21 @@
[tox]
skipsdist = True
envlist = py37,py38,py39
skip_missing_interpreters = true
[testenv]
deps =
    pytest
    jsonschema
    coverage
    diff-cover
    pylint
    pytest-cov

commands=
    ./fifloader.py --clean templates.fif.json templates-updates.fif.json
    py.test unittests/
    py.test --cov-report term-missing --cov-report xml --cov fifloader unittests/
    diff-cover coverage.xml --fail-under=90
    diff-quality --violations=pylint --fail-under=90
setenv =
    PYTHONPATH = {toxinidir}
48 unittests/data/templates-updates.fif.json Normal file
@ -0,0 +1,48 @@
{
    "Products": {
        "fedora-updates-server-ppc64le-*": {
            "arch": "ppc64le",
            "distri": "fedora",
            "flavor": "updates-server",
            "settings": {},
            "version": "*"
        },
        "fedora-updates-server-x86_64-*": {
            "arch": "x86_64",
            "distri": "fedora",
            "flavor": "updates-server",
            "settings": {},
            "version": "*"
        }
    },
    "Profiles": {
        "fedora-updates-server-ppc64le-*-ppc64le": {
            "machine": "ppc64le",
            "product": "fedora-updates-server-ppc64le-*"
        },
        "fedora-updates-server-x86_64-*-64bit": {
            "machine": "64bit",
            "product": "fedora-updates-server-x86_64-*"
        }
    },
    "TestSuites": {
        "advisory_boot": {
            "profiles": {
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            },
            "settings": {
                "ADVISORY_BOOT_TEST": "1",
                "BOOTFROM": "c",
                "ROOT_PASSWORD": "weakpassword",
                "USER_LOGIN": "false"
            }
        },
        "base_selinux": {
            "profiles": {
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        }
    }
}
88 unittests/data/templates.fif.json Normal file
@ -0,0 +1,88 @@
{
    "Machines": {
        "64bit": {
            "backend": "qemu",
            "settings": {
                "ARCH_BASE_MACHINE": "64bit",
                "PART_TABLE_TYPE": "mbr",
                "QEMUCPU": "Nehalem",
                "QEMUCPUS": "2",
                "QEMURAM": "2048",
                "QEMUVGA": "virtio",
                "QEMU_VIRTIO_RNG": "1",
                "WORKER_CLASS": "qemu_x86_64"
            }
        },
        "ppc64le": {
            "backend": "qemu",
            "settings": {
                "ARCH_BASE_MACHINE": "ppc64le",
                "OFW": 1,
                "PART_TABLE_TYPE": "mbr",
                "QEMU": "ppc64",
                "QEMUCPU": "host",
                "QEMURAM": 4096,
                "QEMUVGA": "virtio",
                "QEMU_VIRTIO_RNG": "1",
                "WORKER_CLASS": "qemu_ppc64le"
            }
        }
    },
    "Products": {
        "fedora-Server-dvd-iso-ppc64le-*": {
            "arch": "ppc64le",
            "distri": "fedora",
            "flavor": "Server-dvd-iso",
            "settings": {
                "TEST_TARGET": "ISO"
            },
            "version": "*"
        },
        "fedora-Server-dvd-iso-x86_64-*": {
            "arch": "x86_64",
            "distri": "fedora",
            "flavor": "Server-dvd-iso",
            "settings": {
                "TEST_TARGET": "ISO"
            },
            "version": "*"
        }
    },
    "Profiles": {
        "fedora-Server-dvd-iso-ppc64le-*-ppc64le": {
            "machine": "ppc64le",
            "product": "fedora-Server-dvd-iso-ppc64le-*"
        },
        "fedora-Server-dvd-iso-x86_64-*-64bit": {
            "machine": "64bit",
            "product": "fedora-Server-dvd-iso-x86_64-*"
        }
    },
    "TestSuites": {
        "base_selinux": {
            "profiles": {
                "fedora-Server-dvd-iso-ppc64le-*-ppc64le": 40,
                "fedora-Server-dvd-iso-x86_64-*-64bit": 40
            },
            "settings": {
                "BOOTFROM": "c",
                "HDD_1": "disk_%FLAVOR%_%MACHINE%.qcow2",
                "POSTINSTALL": "base_selinux",
                "ROOT_PASSWORD": "weakpassword",
                "START_AFTER_TEST": "install_default_upload",
                "USER_LOGIN": "false"
            }
        },
        "install_default_upload": {
            "profiles": {
                "fedora-Server-dvd-iso-ppc64le-*-ppc64le": 10,
                "fedora-Server-dvd-iso-x86_64-*-64bit": 10
            },
            "settings": {
                "PACKAGE_SET": "default",
                "POSTINSTALL": "_collect_data",
                "STORE_HDD_1": "disk_%FLAVOR%_%MACHINE%.qcow2"
            }
        }
    }
}
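The two data files above deliberately both define a `base_selinux` suite so the tests below can exercise suite merging: profiles from both inputs are combined, while for settings the last-read dict simply wins. A sketch of that merge rule in isolation (the function name and structure here are illustrative, not fifloader's actual code):

```python
def merge_testsuites(first, second):
    """Merge two FIF TestSuites dicts. For a suite defined in both
    inputs, the profiles dicts are combined; any other key (notably
    settings) is taken wholesale from the later input."""
    merged = dict(first)
    for name, suite in second.items():
        if name in merged:
            combined = dict(merged[name])
            profiles = dict(combined.get("profiles", {}))
            profiles.update(suite.get("profiles", {}))
            combined.update(suite)       # last-read settings etc. win
            combined["profiles"] = profiles  # but profiles are merged
            merged[name] = combined
        else:
            merged[name] = suite
    return merged

one = {"base_selinux": {"profiles": {"a": 40}, "settings": {"X": "1"}}}
two = {"base_selinux": {"profiles": {"b": 40}}, "advisory_boot": {"profiles": {"b": 40}}}
merged = merge_testsuites(one, two)
print(sorted(merged["base_selinux"]["profiles"]))  # ['a', 'b']
```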
181 unittests/test_fifloader.py Normal file
@ -0,0 +1,181 @@
# Copyright (C) 2020 Red Hat
#
# This file is part of os-autoinst-distri-fedora.
#
# os-autoinst-distri-fedora is free software; you can redistribute it
# and/or modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation, either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
# Author: Adam Williamson <awilliam@redhat.com>

# these are all kinda inappropriate for pytest patterns
# pylint: disable=no-init, protected-access, no-self-use, unused-argument

"""Tests for fifloader.py."""

# core imports
import json
import os
import tempfile
from unittest import mock

# third party imports
import jsonschema.exceptions
import pytest

# internal imports
import fifloader

DATAPATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data')

def _get_merged(input1='templates.fif.json', input2='templates-updates.fif.json'):
    """Convenience function as multiple tests need to do this."""
    return fifloader.merge_inputs(
        [os.path.join(DATAPATH, input1), os.path.join(DATAPATH, input2)])

def test_schema_validate():
    """Test for schema_validate."""
    with open(os.path.join(DATAPATH, 'templates.fif.json'), 'r') as tempfh:
        tempdata = json.load(tempfh)
    with open(os.path.join(DATAPATH, 'templates-updates.fif.json'), 'r') as updfh:
        updata = json.load(updfh)
    assert fifloader.schema_validate(tempdata, fif=True, complete=True) is True
    assert fifloader.schema_validate(tempdata, fif=True, complete=False) is True
    assert fifloader.schema_validate(updata, fif=True, complete=False) is True
    with pytest.raises(jsonschema.exceptions.ValidationError):
        fifloader.schema_validate(updata, fif=True, complete=True)
    with pytest.raises(jsonschema.exceptions.ValidationError):
        fifloader.schema_validate(tempdata, fif=False, complete=True)
    with pytest.raises(jsonschema.exceptions.ValidationError):
        fifloader.schema_validate(tempdata, fif=False, complete=False)
    # we test successful openQA validation later in test_run

# we test merging in both orders, because it can work in one order
# but be broken in the other
@pytest.mark.parametrize(
    "input1, input2",
    [
        ('templates.fif.json', 'templates-updates.fif.json'),
        ('templates-updates.fif.json', 'templates.fif.json'),
    ]
)
def test_merge_inputs(input1, input2):
    """Test for merge_inputs."""
    (machines, products, profiles, testsuites, jobtemplates) = _get_merged(input1, input2)
    # a few known attributes of the test data to ensure the merge worked
    assert len(machines) == 2
    assert len(products) == 4
    assert len(profiles) == 4
    assert not jobtemplates
    # testsuite merging is the most complex feature
    # len should be 3 as there is 1 unique suite in each input file,
    # and one defined in both which should be merged
    assert len(testsuites) == 3
    # check the merged suite was merged correctly
    # we should have the profiles from *both* input files...
    assert len(testsuites['base_selinux']['profiles']) == 4
    # and we should still have the settings (note, combining settings
    # is not supported, the last-read settings dict is always used)
    assert len(testsuites['base_selinux']['settings']) == 6

def test_generate_job_templates():
    """Test for generate_job_templates."""
    (machines, products, profiles, testsuites, _) = _get_merged()
    templates = fifloader.generate_job_templates(products, profiles, testsuites)
    # we should get one template per profile in merged input
    assert len(templates) == 8
    for template in templates:
        assert template['group_name'] in ['fedora', 'Fedora PowerPC', 'Fedora AArch64',
                                          'Fedora Updates', 'Fedora PowerPC Updates',
                                          'Fedora AArch64 Updates']
        assert template['machine_name'] in list(machines.keys())
        assert isinstance(template['prio'], int)
        for item in ('arch', 'distri', 'flavor', 'version'):
            assert item in template
        assert template['test_suite_name'] in list(testsuites.keys())

def test_reverse_qol():
    """Test for reverse_qol."""
    (machines, products, _, testsuites, _) = _get_merged()
    (machines, products, testsuites) = fifloader.reverse_qol(machines, products, testsuites)
    assert isinstance(machines, list)
    assert isinstance(products, list)
    assert isinstance(testsuites, list)
    assert len(machines) == 2
    assert len(products) == 4
    assert len(testsuites) == 3
    settlists = []
    for datatype in (machines, products, testsuites):
        for item in datatype:
            # all items should have one of these
            settlists.append(item['settings'])
            # no items should have one of these
            assert 'profiles' not in item
    for settlist in settlists:
        assert isinstance(settlist, list)
        for setting in settlist:
            assert list(setting.keys()) == ['key', 'value']

def test_parse_args():
    """Test for parse_args."""
    args = fifloader.parse_args(['-l', '--host', 'https://openqa.example', '--clean', '--update',
                                 '--loader', '/tmp/newloader', 'foo.json', 'bar.json'])
    assert args.load is True
    assert args.host == 'https://openqa.example'
    assert args.clean is True
    assert args.update is True
    assert args.write is False
    assert args.loader == '/tmp/newloader'
    assert args.files == ['foo.json', 'bar.json']
    args = fifloader.parse_args(['-l', '-w', 'foo.json', 'bar.json'])
    assert args.load is True
    assert not args.host
    assert args.clean is False
    assert args.update is False
    assert args.write is True
    assert args.filename == 'generated.json'
    assert args.loader == '/usr/share/openqa/script/load_templates'
    assert args.files == ['foo.json', 'bar.json']
    args = fifloader.parse_args(['-w', '--filename', 'newout.json', 'foo.json'])
    assert args.load is False
    assert args.write is True
    assert args.filename == 'newout.json'
    assert args.files == ['foo.json']

@mock.patch('subprocess.run', autospec=True)
def test_run(fakerun):
    """Test for run()."""
    with pytest.raises(SystemExit, match=r".neither --write nor --load.*"):
        fifloader.run(['--no-validate', 'foo.json'])
    with pytest.raises(SystemExit) as excinfo:
        fifloader.run(['-l'])
    assert "arguments are required: files" in str(excinfo.value)
    with tempfile.NamedTemporaryFile() as tempfh:
        # this will actually do everything and write out template data
        # parsed from the test inputs to the temporary file
        fifloader.run(['-w', '--filename', tempfh.name,
                       os.path.join(DATAPATH, 'templates.fif.json'),
                       os.path.join(DATAPATH, 'templates-updates.fif.json')])
        written = json.load(tempfh)
    # check written data matches upstream data schema
    assert fifloader.schema_validate(written, fif=False, complete=True) is True
    fifloader.run(['-l', '--loader', '/tmp/newloader', '--host',
                   'https://openqa.example', '--clean', '--update',
                   os.path.join(DATAPATH, 'templates.fif.json'),
                   os.path.join(DATAPATH, 'templates-updates.fif.json')])
    assert fakerun.call_count == 1
    assert fakerun.call_args[0][0] == ['/tmp/newloader', '--host', 'https://openqa.example',