createhdds/openqa_trigger/report_job_results.py
Adam Williamson b54aed6aa1 Use python-wikitcms and fedfind
The basic approach is that openqa_trigger gets a ValidationEvent from
python-wikitcms - either the Wiki.current_event property for
'current', or the event specified, obtained via the newly-added
Wiki.get_validation_event(), for 'event'. For 'event' it then just
goes ahead and runs the jobs and prints the IDs. For 'current' it
checks the last run compose version for each arch and runs if needed,
as before. The ValidationEvent's 'sortname' property is the value
written out to PERSISTENT to track the 'last run' - this property is
intended to always sort compose events 'correctly', so we should
always run when appropriate even when going from Rawhide to Branched,
Branched to a TC, TC to RC, RC to (next milestone) TC.
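As a rough sketch of that flow (the Wiki class is assumed to live in
wikitcms.wiki; the argument handling and the PERSISTENT comparison here
are simplified placeholders, not the actual trigger code):

    from wikitcms.wiki import Wiki

    def get_event(wiki, args):
        if args.subcommand == 'current':
            return wiki.current_event
        # 'event' mode: run for the event named on the command line
        return wiki.get_validation_event(args.event)

    def needs_run(event, last_sortname):
        # sortname is meant to sort compose events correctly across
        # Rawhide -> Branched -> TC -> RC -> next milestone's TC
        return event.sortname > last_sortname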

On both paths it gets a fedfind.Release object via the ValidationEvent
- ValidationEvents have a ff_release property which is the
fedfind.Release object that matches that event. It then queries
fedfind for image locations using a query that tries to get just *one*
generic-ish network install image for each arch. It passes the
location to download_image(), which is just download_rawhide_iso()
renamed and does the same job, only it can be simpler now.
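Roughly like this - the query helper and the image attribute names
(location, arch) are stand-ins for whatever fedfind actually exposes;
only ff_release and download_image() come from the description above:

    def fetch_netinsts(event, query_images, download_image):
        rel = event.ff_release  # the fedfind.Release matching this event
        isos = {}
        for img in query_images(rel):
            # expect one generic-ish network install image per arch
            isos[img.arch] = download_image(img.location, img.arch)
        return isos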

From there it works pretty much as before, except we use the
ValidationEvent's 'version' property as the BUILD setting for OpenQA,
and report_job_results' get_relval_commands() is tweaked slightly to
parse this properly and produce a correct relval report-auto command.
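For example (the BUILD value here is hypothetical, but shows the shape
the parsing expects):

    build = "22_Alpha_TC1"  # hypothetical BUILD / ValidationEvent version
    release, milestone, compose = build.split('_')
    cmd = 'relval report-auto --release "%s" --milestone "%s" --compose "%s"' % (
        release, milestone, compose)
    # -> relval report-auto --release "22" --milestone "Alpha" --compose "TC1"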

Probably the most likely bits to break here are the sortname thing
(see wikitcms helpers.py fedora_release_sort(); it's pretty stupid and
I should rewrite it) and the image query, which might wind up getting
more than one image depending on how exactly the F22 Alpha composes
look. I'll keep a close eye on that. We can always take the list from
fedfind and further filter it so we have just one image per arch -
Image objects have a .arch attribute, so this will be easy to do if
necessary (see the sketch below). I *could* give the fedfind query code
an 'I'm feeling lucky'-ish mode to only return one image per (whatever),
but I'm not sure whether that would be too specialized; I'll think
about it.
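If it does come to that, the filter is trivial; a minimal sketch using
only the .arch attribute mentioned above:

    def one_per_arch(images):
        # keep the first image fedfind returned for each arch
        picked = {}
        for img in images:
            picked.setdefault(img.arch, img)
        return list(picked.values())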
2015-02-16 18:04:40 +01:00


import argparse
import os
import pprint
import time

import requests

import conf_test_suites

API_ROOT = "http://localhost/api/v1"
SLEEPTIME = 60

def get_passed_testcases(job_ids):
    """
    job_ids ~ list of int (job ids)
    Returns ~ dict mapping (VERSION, FLAVOR, BUILD, ARCH) to a sorted
    list of passed testcase names
    """
    running_jobs = dict([(job_id, "%s/jobs/%s" % (API_ROOT, job_id)) for job_id in job_ids])
    finished_jobs = {}

    # poll the openQA API until every requested job has finished
    while running_jobs:
        for job_id, url in running_jobs.items():
            job_state = requests.get(url).json()['job']
            if job_state['state'] == 'done':
                print "Job %s is done" % job_id
                finished_jobs[job_id] = job_state
                del running_jobs[job_id]
        if running_jobs:
            time.sleep(SLEEPTIME)

    passed_testcases = {}  # key = (VERSION, FLAVOR, BUILD, ARCH)
    for job_id in job_ids:
        job = finished_jobs[job_id]
        if job['result'] == 'passed':
            key = (job['settings']['VERSION'], job['settings']['FLAVOR'],
                   job['settings'].get('BUILD', None), job['settings']['ARCH'])
            passed_testcases.setdefault(key, [])
            passed_testcases[key].extend(conf_test_suites.TESTSUITES[job['settings']['TEST']])

    for key, value in passed_testcases.iteritems():
        passed_testcases[key] = sorted(list(set(value)))
    return passed_testcases

def get_relval_commands(passed_testcases):
    relval_template = "relval report-auto"
    commands = []
    for key in passed_testcases:
        cmd_ = relval_template
        version, _, build, arch = key
        # BUILD encodes release, milestone and compose separated by underscores
        cmd_ += ' --release "%s" --milestone "%s" --compose "%s"' % tuple(build.split('_'))
        for tc_name in passed_testcases[key]:
            testcase = conf_test_suites.TESTCASES[tc_name]
            tc_env = arch if testcase['env'] == '$RUNARCH$' else testcase['env']
            tc_type = testcase['type']
            tc_section = testcase['section']
            commands.append('%s --environment "%s" --testtype "%s" --section "%s" --testcase "%s" pass'
                            % (cmd_, tc_env, tc_type, tc_section, tc_name))
    return commands

def report_results(job_ids):
    commands = get_relval_commands(get_passed_testcases(job_ids))
    print "Running relval commands:"
    for command in commands:
        print command
        os.system(command)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Evaluate per-testcase results from OpenQA job runs")
    parser.add_argument('jobs', type=int, nargs='+')
    parser.add_argument('--report', default=False, action='store_true')
    args = parser.parse_args()

    passed_testcases = get_passed_testcases(args.jobs)
    commands = get_relval_commands(passed_testcases)
    pprint.pprint(passed_testcases)
    if not args.report:
        print "\n\n### No reporting is done! ###\n\n"
        pprint.pprint(commands)
    else:
        for command in commands:
            print command
            os.system(command)
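
For reference, the script expects conf_test_suites to provide two
mappings shaped roughly like this (the entries below are illustrative,
not the real data):

    # Illustrative only - the real mappings live in conf_test_suites.py.
    TESTSUITES = {
        # openQA TEST name -> list of wiki testcase names it covers
        'default_install': ['QA:Testcase_Boot_default_install'],
    }
    TESTCASES = {
        # testcase name -> where relval should report it
        'QA:Testcase_Boot_default_install': {
            'env': '$RUNARCH$',   # replaced with the job's ARCH
            'type': 'Installation',
            'section': 'Default boot and install',
        },
    }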