Compare commits

20 Commits
devel ... devel

Author SHA1 Message Date
1470e590d3
add group auditor 1/?
2024-10-18 14:24:26 -07:00
546f8b4687
look at host category for ALL
2024-10-18 12:35:08 -07:00
7f3a4b4761
add shim unsigned to parser part 2
2024-10-17 15:37:44 -07:00
4906749ed0
add shim unsigned to parser
2024-10-17 15:37:15 -07:00
1a45143b00
fix rs for generators
2024-10-17 15:14:16 -07:00
fc0b738c75
add notice for 0 hosts
2024-10-17 12:25:31 -07:00
689e7aa793
mangle: separate hbac hosts by lists
2024-10-17 11:55:14 -07:00
9c1b828ab7
remove resilient storage from r10
2024-10-17 10:01:56 -07:00
448b8c035b
mangle/ipa: all hbac access supersedes everything else
2024-10-16 10:17:09 -07:00
a6f4632d66
prepare for 9.5 builds
2024-10-16 01:29:40 -07:00
08d8995344
Use label=disable to prevent context changes
2024-10-07 15:09:06 -07:00
333f3614f9
add python_freeipa support
2024-10-02 11:25:01 -07:00
dc53a5be9e
catch all category
2024-10-02 10:30:39 -07:00
678c807741
allow mangle to work on IPA DC
2024-10-02 10:12:28 -07:00
f482ef6e1f
add HBAC host list to user deep
2024-10-02 09:54:52 -07:00
eba3593cfd
ensure el10 has reposcan capabilities
2024-10-01 14:47:55 -07:00
b53afe66e2
fix: ensures skipped images are not in the list for podman
2024-10-01 14:44:32 -07:00
30a84cfed5
Add reposcan option for variants that should not look at compose
2024-10-01 13:56:56 -07:00
5e6427ea4b
some profiles do not have squashfs listed
2024-10-01 12:55:33 -07:00
8c775c308c
resolve #19
Resolves PR #19 and makes both iso and live classes consistent.

Signed-off-by: Sergei Shtepa <sshtepa@noreply@resf.org>
Signed-off-by: Louis Abel <label@rockylinux.org>
2024-09-12 09:40:24 -07:00
13 changed files with 249 additions and 61 deletions

View File

@ -31,7 +31,6 @@
- 'AppStream'
- 'CRB'
- 'HighAvailability'
- 'ResilientStorage'
- 'RT'
- 'NFV'
- 'SAP'
@ -47,6 +46,7 @@
images:
dvd:
disc: True
+reposcan: False
variant: 'AppStream'
repos:
- 'BaseOS'
@ -54,6 +54,7 @@
minimal:
disc: True
isoskip: True
+reposcan: False
repos:
- 'minimal'
- 'BaseOS'
@ -188,9 +189,6 @@
HighAvailability:
- BaseOS
- AppStream
ResilientStorage:
- BaseOS
- AppStream
RT:
- BaseOS
- AppStream

View File

@ -31,7 +31,6 @@
- 'AppStream'
- 'CRB'
- 'HighAvailability'
- 'ResilientStorage'
- 'RT'
- 'NFV'
- 'SAP'
@ -47,6 +46,7 @@
images:
dvd:
disc: True
+reposcan: True
variant: 'AppStream'
repos:
- 'BaseOS'
@ -54,6 +54,7 @@
minimal:
disc: True
isoskip: True
+reposcan: False
repos:
- 'minimal'
- 'BaseOS'
@ -188,9 +189,6 @@
HighAvailability:
- BaseOS
- AppStream
ResilientStorage:
- BaseOS
- AppStream
RT:
- BaseOS
- AppStream

View File

@ -1,10 +1,10 @@
---
'9-beta':
-fullname: 'Rocky Linux 9.4'
-revision: '9.4'
+fullname: 'Rocky Linux 9.5'
+revision: '9.5'
rclvl: 'BETA1'
major: '9'
-minor: '4'
+minor: '5'
profile: '9-beta'
disttag: 'el9'
code: "Blue Onyx"
@ -20,7 +20,7 @@
- ppc64le
- s390x
provide_multilib: True
-project_id: 'df5bcbfc-ba83-4da8-84d6-ae0168921b4d'
+project_id: 'ae163d6a-f050-484f-bbaa-100ca673f146'
repo_symlinks:
NFV: 'nfv'
renames:
@ -53,12 +53,14 @@
images:
dvd:
disc: True
+reposcan: True
variant: 'AppStream'
repos:
- 'BaseOS'
- 'AppStream'
minimal:
disc: True
+reposcan: False
isoskip: True
repos:
- 'minimal'

View File

@ -53,6 +53,7 @@
images:
dvd:
disc: True
+reposcan: True
variant: 'AppStream'
repos:
- 'BaseOS'
@ -60,6 +61,7 @@
minimal:
disc: True
isoskip: True
+reposcan: False
repos:
- 'minimal'
- 'BaseOS'

View File

@ -1,10 +1,10 @@
---
'9-lookahead':
-fullname: 'Rocky Linux 9.5'
-revision: '9.5'
+fullname: 'Rocky Linux 9.6'
+revision: '9.6'
rclvl: 'LH1'
major: '9'
-minor: '5'
+minor: '6'
profile: '9-lookahead'
disttag: 'el9'
code: "Blue Onyx"
@ -20,7 +20,7 @@
- ppc64le
- s390x
provide_multilib: True
-project_id: '6794b5a8-290b-4d0d-ad5a-47164329cbb0'
+project_id: 'ae163d6a-f050-484f-bbaa-100ca673f146'
repo_symlinks:
NFV: 'nfv'
renames:
@ -53,6 +53,7 @@
images:
dvd:
disc: True
+reposcan: True
variant: 'AppStream'
repos:
- 'BaseOS'
@ -60,6 +61,7 @@
minimal:
disc: True
isoskip: True
+reposcan: False
repos:
- 'minimal'
- 'BaseOS'

View File

@ -560,7 +560,7 @@ class RepoSync:
#print(entry_name_list)
for pod in entry_name_list:
-podman_cmd_entry = '{} run -d -it -v "{}:{}" -v "{}:{}:z" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
+podman_cmd_entry = '{} run -d -it --security-opt label=disable -v "{}:{}" -v "{}:{}" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
cmd,
self.compose_root,
self.compose_root,
@ -714,7 +714,7 @@ class RepoSync:
self.log.info('Spawning pods for %s' % repo)
for pod in repoclosure_entry_name_list:
-podman_cmd_entry = '{} run -d -it -v "{}:{}" -v "{}:{}:z" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
+podman_cmd_entry = '{} run -d -it --security-opt label=disable -v "{}:{}" -v "{}:{}" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
cmd,
self.compose_root,
self.compose_root,
@ -1509,7 +1509,7 @@ class RepoSync:
self.log.info('Spawning pods for %s' % repo)
for pod in repoclosure_entry_name_list:
-podman_cmd_entry = '{} run -d -it -v "{}:{}" -v "{}:{}:z" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
+podman_cmd_entry = '{} run -d -it --security-opt label=disable -v "{}:{}" -v "{}:{}" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
cmd,
self.compose_root,
self.compose_root,
@ -2045,7 +2045,7 @@ class SigRepoSync:
#print(entry_name_list)
for pod in entry_name_list:
-podman_cmd_entry = '{} run -d -it -v "{}:{}" -v "{}:{}:z" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
+podman_cmd_entry = '{} run -d -it --security-opt label=disable -v "{}:{}" -v "{}:{}" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
cmd,
self.compose_root,
self.compose_root,

View File

@ -81,12 +81,11 @@ class IsoBuild:
self.compose_root = config['compose_root']
self.compose_base = config['compose_root'] + "/" + major
self.current_arch = config['arch']
-self.required_pkgs = rlvars['iso_map']['lorax']['required_pkgs']
+#self.required_pkgs = rlvars['iso_map']['lorax']['required_pkgs']
self.mock_work_root = config['mock_work_root']
self.lorax_result_root = config['mock_work_root'] + "/" + "lorax"
self.mock_isolation = isolation
self.iso_map = rlvars['iso_map']
#self.livemap = rlvars['livemap']
self.cloudimages = rlvars['cloudimages']
self.release_candidate = rc
self.s3 = s3
@ -253,6 +252,7 @@ class IsoBuild:
mock_iso_path = '/var/tmp/lorax-' + self.release + '.cfg'
mock_sh_path = '/var/tmp/isobuild.sh'
iso_template_path = '/var/tmp/buildImage.sh'
+required_pkgs = self.iso_map['lorax']['required_pkgs']
rclevel = ''
if self.release_candidate:
@ -264,7 +264,7 @@ class IsoBuild:
releasever=self.release,
fullname=self.fullname,
shortname=self.shortname,
-required_pkgs=self.required_pkgs,
+required_pkgs=required_pkgs,
dist=self.disttag,
repos=self.repolist,
user_agent='{{ user_agent }}',
@ -294,7 +294,7 @@ class IsoBuild:
builddir=self.mock_work_root,
lorax_work_root=self.lorax_result_root,
bugurl=self.bugurl,
-squashfs_only=self.iso_map['lorax']['squashfs_only'],
+squashfs_only=self.iso_map['lorax'].get('squashfs_only', None),
)
with open(mock_iso_path, "w+") as mock_iso_entry:
@ -725,8 +725,7 @@ class IsoBuild:
def _extra_iso_build_wrap(self):
"""
-Try to figure out where the build is going, we only support mock for
-now.
+Try to figure out where the build is going, podman or mock.
"""
work_root = os.path.join(
self.compose_latest_dir,
@ -737,15 +736,23 @@ class IsoBuild:
if self.arch:
arches_to_build = [self.arch]
-images_to_build = self.iso_map['images']
+images_to_build = list(self.iso_map['images'].keys())
if self.extra_iso:
images_to_build = [self.extra_iso]
+images_to_skip = []
for y in images_to_build:
if 'isoskip' in self.iso_map['images'][y] and self.iso_map['images'][y]['isoskip']:
-self.log.info(Color.WARN + 'Skipping ' + y + ' image')
+self.log.info(Color.WARN + f'Skipping {y} image')
+images_to_skip.append(y)
continue
+reposcan = True
+if 'reposcan' in self.iso_map['images'][y] and not self.iso_map['images'][y]['reposcan']:
+    self.log.info(Color.WARN + f"Skipping compose repository scans for {y}")
+    reposcan = False
# Kind of hacky, but if we decide to have more than boot/dvd iso's,
# we need to make sure volname matches the initial lorax image,
# which the volid contains "dvd". AKA, file name doesn't always
@ -770,6 +777,7 @@ class IsoBuild:
a,
y,
self.iso_map['images'][y]['repos'],
+reposcan=reposcan
)
self._extra_iso_local_config(a, y, grafts, work_root, volname)
@ -782,7 +790,14 @@ class IsoBuild:
raise SystemExit()
if self.extra_iso_mode == 'podman':
-self._extra_iso_podman_run(arches_to_build, images_to_build, work_root)
+# I can't think of a better way to do this
+images_to_build_podman = images_to_build.copy()
+for item in images_to_build_podman[:]:
+    for skip in images_to_skip:
+        if item == skip:
+            images_to_build_podman.remove(item)
+self._extra_iso_podman_run(arches_to_build, images_to_build_podman, work_root)
def _extra_iso_local_config(self, arch, image, grafts, work_root, volname):
"""
@ -829,6 +844,7 @@ class IsoBuild:
isoname = f'{self.shortname}-{self.release}{rclevel}{datestamp}-{arch}-{image}.iso'
generic_isoname = f'{self.shortname}-{arch}-{image}.iso'
latest_isoname = f'{self.shortname}-{self.major_version}-latest-{arch}-{image}.iso'
+required_pkgs = self.iso_map['lorax']['required_pkgs']
lorax_pkg_cmd = '/usr/bin/dnf install {} -y {}'.format(
' '.join(required_pkgs),
@ -1006,7 +1022,7 @@ class IsoBuild:
checksum_list.append(latestname)
for pod in entry_name_list:
-podman_cmd_entry = '{} run -d -it -v "{}:{}" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
+podman_cmd_entry = '{} run -d -it --security-opt label=disable -v "{}:{}" -v "{}:{}" --name {} --entrypoint {}/{} {}'.format(
cmd,
self.compose_root,
self.compose_root,
@ -1090,6 +1106,7 @@ class IsoBuild:
arch,
iso,
variants,
+reposcan: bool = True,
):
"""
Get a list of packages for an extras ISO. This should NOT be called
@ -1119,6 +1136,8 @@ class IsoBuild:
# actually get the boot data
files = self._get_grafts([lorax_for_var, extra_files_for_var])
+# Some variants cannot go through a proper scan.
+if reposcan:
# This is to get all the packages for each repo
for repo in variants:
pkg_for_var = os.path.join(
@ -1518,7 +1537,7 @@ class LiveBuild:
self.compose_base = config['compose_root'] + "/" + major
self.current_arch = config['arch']
self.livemap = rlvars['livemap']
-self.required_pkgs = rlvars['livemap']['required_pkgs']
+#self.required_pkgs = rlvars['livemap']['required_pkgs']
self.mock_work_root = config['mock_work_root']
self.live_result_root = config['mock_work_root'] + "/lmc"
self.mock_isolation = isolation

View File

@ -47,7 +47,6 @@ class common:
'CRB': ['aarch64', 'ppc64le', 's390x', 'x86_64'],
'HighAvailability': ['aarch64', 'ppc64le', 's390x', 'x86_64'],
'NFV': ['x86_64'],
-'ResilientStorage': ['ppc64le', 's390x', 'x86_64'],
'RT': ['x86_64'],
'SAP': ['ppc64le', 's390x', 'x86_64'],
'SAPHANA': ['ppc64le', 'x86_64']

View File

@ -1,6 +1,6 @@
# To be sourced by scripts to use
-REPO=("BaseOS" "AppStream" "CRB" "HighAvailability" "ResilientStorage" "NFV" "RT" "SAP" "SAPHANA")
+REPO=("BaseOS" "AppStream" "CRB" "HighAvailability" "NFV" "RT" "SAP" "SAPHANA")
ARCH=("aarch64" "ppc64le" "s390x" "x86_64")
MAJOR="10"

View File

@ -12,6 +12,8 @@ IGNORES = [
'insights-client',
'lorax-templates-rhel',
'shim',
+'shim-unsigned-x64',
+'shim-unsigned-aarch64',
'redhat-cloud-client-configuration',
'rhc',
'rhc-worker-playbook',

View File

@ -20,6 +20,9 @@ REPOS = switcher.rlver(results.version,
# Source packages we do not ship or are rocky branded
IGNORES = [
'insights-client',
+'shim',
+'shim-unsigned-x64',
+'shim-unsigned-aarch64',
'redhat-cloud-client-configuration',
'rhc',
'rhc-worker-playbook',

mangle/ipa/ipaaudit-noipa Executable file
View File

@ -0,0 +1,30 @@
#!/bin/bash
# Wrapper for ipaauditor.py audit
source /etc/os-release
case "$ID" in
rocky|centos|rhel)
case "${VERSION_ID:0:1}" in
5|6|7)
echo "Not supported."
exit 3
;;
8)
PYTHON_EXEC="/usr/libexec/platform-python"
;;
*)
PYTHON_EXEC="/usr/bin/python3"
;;
esac ;;
ubuntu|debian)
PYTHON_EXEC="/usr/bin/python3"
;;
fedora)
PYTHON_EXEC="/usr/bin/python3"
esac
$PYTHON_EXEC ipaauditor.py --user test \
--password test \
--server test \
--library python_freeipa \
audit "$@"

View File

@ -58,6 +58,9 @@ audit_parser = subparser.add_parser('audit', epilog='Use this to perform audits
parser.add_argument('--library', type=str, default='ipalib',
help='Choose the ipa library to use for the auditor',
choices=('ipalib', 'python_freeipa'))
+parser.add_argument('--user', type=str, default='', help='Set the username (python_freeipa only)')
+parser.add_argument('--password', type=str, default='', help='Set the password (python_freeipa only)')
+parser.add_argument('--server', type=str, default='', help='Set the server (python_freeipa only)')
audit_parser.add_argument('--type', type=str, required=True,
help='Type of audit: hbac, rbac, group, user',
@ -106,7 +109,7 @@ class EtcIPADefault:
outter_info['ipa_joined_name'] = __config['global']['host']
outter_info['ipa_domain'] = __config['global']['domain']
outter_info['ipa_realm'] = __config['global']['realm']
-outter_info['registered_dc'] = __config['global']['server']
+outter_info['registered_dc'] = __config['global']['host'] if not __config['global'].get('server', None) else __config['global']['server']
return outter_info
class SssctlInfo:
@ -274,16 +277,89 @@ class IPAAudit:
@staticmethod
def user_pull(api, name, deep):
"""
-Gets requested rbac info
+Gets requested user info
"""
print()
try:
user_results = IPAQuery.user_data(api, name)
except:
print(f'Could not find {name}', sys.stderr)
sys.exit(1)
user_first = '' if not user_results.get('givenname', None) else user_results['givenname'][0]
user_last = '' if not user_results.get('sn', None) else user_results['sn'][0]
user_uid = '' if not user_results.get('uid', None) else user_results['uid'][0]
user_uidnum = '' if not user_results.get('uidnumber', None) else user_results['uidnumber'][0]
user_gidnum = '' if not user_results.get('gidnumber', None) else user_results['gidnumber'][0]
user_groups = '' if not user_results.get('memberof_group', None) else '\n '.join(user_results['memberof_group'])
user_hbachosts = '' if not user_results.get('memberof_hbacrule', None) else '\n '.join(user_results['memberof_hbacrule'])
user_indhbachosts = '' if not user_results.get('memberofindirect_hbacrule', None) else '\n '.join(user_results['memberofindirect_hbacrule'])
starter_user = {
'User name': user_uid,
'First name': user_first,
'Last name': user_last,
'UID': user_uidnum,
'GID': user_gidnum,
'Groups': user_groups,
}
print('User Information')
print('------------------------------------------')
for key, value in starter_user.items():
if len(value) > 0:
print(f'{key: <16}{value}')
print('')
if deep:
group_list = [] if not user_results.get('memberof_group', None) else user_results['memberof_group']
hbac_list = [] if not user_results.get('memberof_hbacrule', None) else user_results['memberof_hbacrule']
IPAAudit.user_deep_list(api, name, group_list, hbac_list)
@staticmethod
def group_pull(api, name, deep):
"""
Gets requested rbac info
"""
print()
try:
group_results = IPAQuery.group_data(api, name)
except:
print(f'Could not find {name}', sys.stderr)
sys.exit(1)
group_name = '' if not group_results.get('cn', None) else group_results['cn'][0]
group_gidnum = '' if not group_results.get('gidnumber', None) else group_results['gidnumber'][0]
group_members_direct = [] if not group_results.get('member_user', None) else group_results['member_user']
group_members_indirect = [] if not group_results.get('memberindirect_user', None) else group_results['memberindirect_user']
group_members = list(group_members_direct) + list(group_members_indirect)
num_of_group_members = str(len(group_members))
group_hbacs_direct = [] if not group_results.get('memberof_hbacrule', None) else group_results['memberof_hbacrule']
group_hbacs_indirect = [] if not group_results.get('memberofindirect_hbacrule', None) else group_results['memberofindirect_hbacrule']
group_hbacs = list(group_hbacs_direct) + list(group_hbacs_indirect)
num_of_hbacs = str(len(group_hbacs))
group_sudo_direct = [] if not group_results.get('memberof_sudorule', None) else group_results['memberof_sudorule']
group_sudo_indirect = [] if not group_results.get('memberofindirect_sudorule', None) else group_results['memberofindirect_sudorule']
group_sudos = list(group_sudo_direct) + list(group_sudo_indirect)
num_of_sudos = str(len(group_sudos))
starter_group = {
'Group name': group_name,
'GID': group_gidnum,
'Number of Users': num_of_group_members,
'Number of HBAC Rules': num_of_hbacs,
'Number of SUDO Rules': num_of_sudos,
}
print('Group Information')
print('------------------------------------------')
for key, value in starter_group.items():
if len(value) > 0:
print(f'{key: <24}{value}')
print('')
if deep:
IPAAudit.group_deep_list(api, name, group_members, group_hbacs, group_sudos)
@staticmethod
def hbac_pull(api, name, deep):
@ -369,7 +445,7 @@ class IPAAudit:
if perm not in starting_perms:
starting_perms.append(perm)
-print(f'Permissions Applied to this Role')
+print('Permissions Applied to this Role')
print('----------------------------------------')
for item in starting_perms:
print(item)
@ -427,13 +503,63 @@ class IPAAudit:
print(f'{key: <24}{value}')
@staticmethod
-def user_deep_list(api, user):
+def user_deep_list(api, user, groups, hbacs):
"""
Does a recursive dig on a user
"""
hbac_rule_list = list(hbacs)
hbac_rule_all_hosts = []
host_list = []
for group in groups:
group_results = IPAQuery.group_data(api, group)
hbac_list = [] if not group_results.get('memberof_hbacrule', None) else group_results['memberof_hbacrule']
hbacind_list = [] if not group_results.get('memberofindirect_hbacrule', None) else group_results['memberofindirect_hbacrule']
hbac_rule_list.extend(hbac_list)
hbac_rule_list.extend(hbacind_list)
# TODO: Add HBAC list (including services)
# TODO: Add RBAC list
hbac_host_dict = {}
for hbac in hbac_rule_list:
hbac_hosts = []
hbac_results = IPAQuery.hbac_data(api, hbac)
hbac_host_list = [] if not hbac_results.get('memberhost_host', None) else hbac_results['memberhost_host']
hbac_hostgroup_list = [] if not hbac_results.get('memberhost_hostgroup', None) else hbac_results['memberhost_hostgroup']
if hbac_results.get('hostcategory'):
hbac_rule_all_hosts.append(hbac)
for host in hbac_host_list:
hbac_hosts.append(host)
for hostgroup in hbac_hostgroup_list:
hostgroup_data = IPAQuery.hostgroup_data(api, hostgroup)
host_list = [] if not hostgroup_data.get('member_host', None) else hostgroup_data['member_host']
hbac_hosts.extend(host_list)
hbac_host_dict[hbac] = hbac_hosts
#new_hbac_hosts = sorted(set(hbac_hosts))
print('User Has Access To These Hosts')
print('------------------------------------------')
if len(hbac_rule_all_hosts) > 0:
print('!! Notice: User has access to ALL hosts from the following rules:')
hbac_rule_all_hosts = sorted(set(hbac_rule_all_hosts))
for allrule in hbac_rule_all_hosts:
print(allrule)
else:
for hrule in hbac_host_dict:
print()
print(f'HBAC Rule: {hrule}')
print('==========================================')
for h in hbac_host_dict[hrule]:
print(h)
if len(hbac_host_dict[hrule]) == 0:
print('(No hosts set for this rule)')
@staticmethod
-def group_deep_list(api, group):
+def group_deep_list(api, group, members, hbacs, sudos):
"""
Does a recursive dig on a group
"""
@ -567,7 +693,7 @@ memberOf:{groups}
return api.hbacsvcgroup_show(hbacsvcgroup)['result']
# start main
-def get_api(ipa_library='ipalib'):
+def get_api(ipa_library='ipalib', user='', password='', server=''):
"""
Gets and returns the right API entrypoint
"""
@ -586,7 +712,13 @@ def get_api(ipa_library='ipalib'):
print('WARNING: No kerberos credentials\n')
command_api = None
elif ipa_library == 'python_freeipa':
-print()
+api = ClientMeta(server)
+try:
+    api.login(user, password)
+    command_api = api
+except:
+    print('ERROR: Unable to login, check user/password/server')
+    command_api = None
else:
print('Unsupported ipa library', sys.stderr)
sys.exit(1)
@ -597,7 +729,8 @@ def main():
"""
Main function entrypoint
"""
-command_api = get_api()
+command_api = get_api(ipa_library=results.library, user=results.user,
+password=results.password, server=results.server)
if command == 'audit':
IPAAudit.entry(command_api, results.type, results.name, results.deep)
elif command == 'info':