{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Release Engineering (SIG/Core) Wiki","text":""},{"location":"#about","title":"About","text":"

The Rocky Linux Release Engineering Team (also known as SIG/Core) dedicates itself to the development, building, management, production, and release of Rocky Linux. This group combines development and infrastructure into a single, cohesive unit of individuals who ultimately make the distribution happen.

The \"SIG/Core\" reference name is not a strict Special Interest Group (as defined by the Rocky Linux wiki).

The general goals (or \"interests\") are:

"},{"location":"#mission","title":"Mission","text":"

Release Engineering strives to ensure a stable distribution is developed, built, tested, and provided to the community from the RESF as a compatible derivative of Red Hat Enterprise Linux. To achieve this goal, some of the things we do are:

See the What We Do page for a more detailed explanation of our activities.

"},{"location":"#getting-in-touch-contributing","title":"Getting In Touch / Contributing","text":"

There are various ways to get in touch with Release Engineering and provide help, assistance, or even just ideas that can benefit us or the entire community.

For a list of our members, see the Members page.

"},{"location":"#resources-and-rocky-linux-policies","title":"Resources and Rocky Linux Policies","text":""},{"location":"#general-packaging-resources","title":"General Packaging Resources","text":""},{"location":"members/","title":"Members","text":"

Release Engineering (SIG/Core) is a mix of Development and Infrastructure members who ensure a high-quality release of Rocky Linux as well as the uptime of the services provided to the community. The current members of this group are listed in the table below. Some members may also be found in various Special Interest Groups, such as SIG/AltArch and SIG/Kernel.

Role Name Email Mattermost Name IRC Name Release Engineering Co-Lead and Infrastructure Louis Abel label@rockylinux.org @nazunalika Sokel/label/Sombra Release Engineering Co-Lead Mustafa Gezen mustafa@rockylinux.org @mustafa mstg Release Engineering and Development Skip Grube skip@rockylinux.org @skip77 Release Engineering and Development Sherif Nagy sherif@rockylinux.org @sherif Release Engineering and Development Pablo Greco pgreco@rockylinux.org @pgreco pgreco Infrastructure Lead Neil Hanlon neil@resf.org @neil neil Infrastructure Lead Taylor Goodwill tg@resf.org @tgo tg"},{"location":"what_we_do/","title":"What We Do","text":"

Release Engineering (SIG/Core) was brought together as a combination of varying expertise (development and infrastructure) to fill in gaps of knowledge and to ensure that the primary goal of a stable Rocky Linux release is reached.

Some of the things we do in pursuit of our mission goals:

\"Why the name SIG/Core?\"

While not an actual Special Interest Group, the reality is that Release Engineering is ultimately the \"core\" of Rocky Linux's production. The idea of \"SIG/Core\" stemmed from the thought that without this group, Rocky Linux would not exist as it is now, so we are \"core\" to its existence. The other idea was that SIG/Core would eventually branch out elsewhere; where that will lead is still uncertain.

"},{"location":"documentation/","title":"Release General Overview","text":"

This section provides a high-level overview of how we compose releases for Rocky Linux. As most of our tools are homegrown, we have made sure they are open source and available in our git services.

This page outlines the steps we generally take, and we hope that other projects who wish to use our tools can use them in this same way, whether they want to be an Enterprise Linux derivative or another project entirely.

"},{"location":"documentation/#build-system-and-tools","title":"Build System and Tools","text":"

The tools used for the distribution are listed in the table below.

Tool Maintainer Code Location srpmproc SIG/Core at RESF GitHub empanadas SIG/Core at RESF sig-core-toolkit Peridot SIG/Core at RESF GitHub MirrorManager 2 Fedora Project MirrorManager2

To build Rocky Linux, we use Peridot as the build system and empanadas to \"compose\" the distribution. As we do not use Koji for Rocky Linux from version 9 onward, pungi can no longer be used. Peridot instead takes pungi configuration data and comps and transforms them into a format it can understand. Empanadas then comes in to do the \"compose\" and sync all the repositories down.

"},{"location":"documentation/#full-compose-major-or-minor-releases","title":"Full Compose (major or minor releases)","text":"

Step by step, it looks like this:

"},{"location":"documentation/#general-updates","title":"General Updates","text":"

Step by step, it looks like this:

"},{"location":"documentation/empanadas/","title":"Empanadas","text":"

This page goes over empanadas, which is part of the SIG/Core toolkit. Empanadas assists SIG/Core in composing repositories, creating ISOs, creating images, and various other activities in Rocky Linux. It is also used for general testing and debugging of repositories and their metadata.

"},{"location":"documentation/empanadas/#contact-information","title":"Contact Information","text":"Owner SIG/Core (Release Engineering & Infrastructure) Email Contact releng@rockylinux.org Mattermost Contacts @label @neil Mattermost Channels ~Development"},{"location":"documentation/empanadas/#general-information","title":"General Information","text":"

empanadas is a Python project using poetry, containing various built-in modules that aim to emulate the Fedora Project's pungi to an extent. While it is not perfect, it achieves the basic goals of creating repositories, images, and ISOs for consumption by the end user. It also interacts with peridot, the build system used by the RESF to build the Rocky Linux distribution.

For syncs, it relies on podman to sync repositories in parallel. This was done because it is not possible to run multiple dnf transactions at once on a single system, and looping over one repository at a time is neither sustainable nor fast. A rough sketch of the idea is shown below.
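
As a rough sketch of the idea only (this is not empanadas' actual implementation; the container image, repository file, repository names, and paths below are assumptions), parallel syncing with podman could look something like this:

# Illustrative sketch only; empanadas orchestrates this itself.\n# Assumes dnf-plugins-core (for reposync) is available in the container image\n# and that a repo file defining BaseOS/AppStream/CRB is mounted in.\nfor repo in BaseOS AppStream CRB; do\n  podman run -d --name sync-${repo} -v /mnt/compose:/mnt/compose -v /etc/yum.repos.d/sync.repo:/etc/yum.repos.d/sync.repo:z centos:stream9 dnf reposync --repoid=${repo} -p /mnt/compose/${repo} --download-metadata\ndone\n# Block until every container, and therefore every sync, has finished.\npodman wait sync-BaseOS sync-AppStream sync-CRB\n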

"},{"location":"documentation/empanadas/#requirements","title":"Requirements","text":""},{"location":"documentation/empanadas/#features","title":"Features","text":"

As of this writing, empanadas has the following abilities:

"},{"location":"documentation/empanadas/#installing-empanadas","title":"Installing Empanadas","text":"

Below is how to install empanadas from the development branch on a Fedora system.

% dnf install git podman fpart poetry mock -y\n% git clone https://git.resf.org/sig_core/toolkit.git -b devel\n% cd toolkit/iso/empanadas\n% poetry install\n
"},{"location":"documentation/empanadas/#configuring-empanadas","title":"Configuring Empanadas","text":"

How your configuration is set up depends on how you are using empanadas.

These configuration files are delicate and control a wide variety of the moving parts of empanadas. As these configurations are fairly large, we recommend checking the reference guides for deeper details on configuring for base distribution or \"SIG\" content.

"},{"location":"documentation/empanadas/#using-empanadas","title":"Using Empanadas","text":"

The most common way to use empanadas is to sync repositories from a peridot instance. This is performed upon each release or on each set of updates as they come from upstream. Below is how to use empanadas, as well as the common options.

Note that for each of these commands, it is fully expected that you run poetry run from the root of empanadas.

# Syncs all repositories for the \"9\" release\n% poetry run sync-from-peridot --release 9 --clean-old-packages\n\n# Syncs only the BaseOS repository without syncing sources\n% poetry run sync-from-peridot --release 9 --clean-old-packages --repo BaseOS --ignore-source\n\n# Syncs only AppStream for ppc64le\n% poetry run sync-from-peridot --release 9 --clean-old-packages --repo AppStream --arch ppc64le\n
Resources Account ServicesGit (RESF Git Service)Git (Rocky Linux GitHub)Git (Rocky Linux GitLab)Mail ListsContacts

URL: https://accounts.rockylinux.org

Purpose: Account Services maintains the accounts for almost all components of the Rocky ecosystem

Technology: Noggin used by Fedora Infrastructure

Contact: ~Infrastructure in Mattermost and #rockylinux-infra in Libera IRC

URL: https://git.resf.org

Purpose: General projects, code, and so on for the Rocky Enterprise Software Foundation.

Technology: Gitea

Contact: ~Infrastructure, ~Development in Mattermost and #rockylinux-infra, #rockylinux-devel in Libera IRC

URL: https://github.com/rocky-linux

Purpose: General purpose code, assets, and so on for Rocky Linux. Some content is mirrored to the RESF Git Service.

Technology: GitHub

Contact: ~Infrastructure, ~Development in Mattermost and #rockylinux-infra, #rockylinux-devel in Libera IRC

URL: https://git.rockylinux.org

Purpose: Packages and light code for the Rocky Linux distribution

Technology: GitLab

Contact: ~Infrastructure, ~Development in Mattermost and #rockylinux-infra, #rockylinux-devel in Libera IRC

URL: https://lists.resf.org

Purpose: Users can subscribe and interact with various mail lists for the Rocky ecosystem

Technology: Mailman 3 + Hyper Kitty

Contact: ~Infrastructure in Mattermost and #rockylinux-infra in Libera IRC

Name Email Mattermost Name IRC Name Louis Abel label@rockylinux.org @nazunalika Sokel/label/Sombra Mustafa Gezen mustafa@rockylinux.org @mustafa mstg Skip Grube skip@rockylinux.org @skip77 Sherif Nagy sherif@rockylinux.org @sherif Pablo Greco pgreco@rockylinux.org @pgreco pgreco Neil Hanlon neil@resf.org @neil neil Taylor Goodwill tg@resf.org @tgo tg"},{"location":"documentation/peridot/","title":"Peridot Build System","text":"

This page goes over the Peridot Build System and how SIG/Core utilizes it.

More to come.

"},{"location":"documentation/rebuild/","title":"Rebuild Version Bump","text":"

In some cases, a package has to be rebuilt. A package may be rebuilt for these reasons:

This typically applies to packages being built from a given src subgroup. Packages pulled from upstream don't fall into this category in normal circumstances. In those cases, they receive .0.1 and so on as standalone rebuilds.
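
As a purely hypothetical illustration of that suffix convention (the exact Release value and placement vary per package), a standalone rebuild bump in a spec file might look like:

# Before the standalone rebuild\nRelease: 2%{?dist}\n\n# After the standalone rebuild (hypothetical example)\nRelease: 2%{?dist}.0.1\n\n# A further standalone rebuild would become .0.2, and so on.\n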

"},{"location":"documentation/compose/","title":"Composing and Managing Releases","text":"

This section goes over the process of composing a release, from packages to repositories to images. It also goes over the basics of working with koji when necessary.

"},{"location":"documentation/compose/koji/","title":"Updates and Management in Koji, A Manual","text":"

More to come.

"},{"location":"documentation/debranding/","title":"Intro to Debranding with Rocky Linux","text":""},{"location":"documentation/debranding/#what-is-debranding","title":"What is Debranding?","text":"

Certain packages in the upstream RHEL/CentOS have logos, trademarks, and other specific text, images, or multimedia that other entities (like the Rocky Enterprise Software Foundation) are not allowed to redistribute.

A visible, simple example is the Apache web server (package httpd). If you've ever installed it and visited the default web server page, you will see a test page specific to your Linux distro, complete with a \"powered by\" logo and distro-specific information. While we are allowed to compile and redistribute the Apache web server software, Rocky Linux is NOT allowed to include these trademarked images or distro-specific text.

We must have an automated process that will strip these assets out and replace them with our own branding upon import into our Git.

"},{"location":"documentation/debranding/#how-rocky-debranding-works","title":"How Rocky Debranding Works","text":"

Rocky's method for importing packages from the upstream is a tool called srpmproc.

Srpmproc's purpose in life is to:

"},{"location":"documentation/debranding/#helping-with-debrands","title":"Helping with Debrands","text":"

There are two tasks involved with debranding: identifying packages that require debranding, and developing patches and configs to debrand the necessary packages.

If you want to help with the latter, please see the patching guide for examples and a case study. If you find something that you suspect is missing branding, you can also file a bug report instead.

"},{"location":"documentation/debranding/#debrand-packages-tracking","title":"Debrand Packages Tracking","text":"

A list of packages that need debranding is located in a metadata file in our git here. This generally does not track status and is simply a reference for what is debranded, whether it is no longer debranded (i.e., the change has been upstreamed), and other notes.

"},{"location":"documentation/debranding/debrand_info/","title":"Debranding Information","text":"

This page goes over the methodology and some packages that require changes to their material for acceptance in Rocky Linux. Usually this means there is some text or images in the package that reference upstream trademarks, and these must be swapped out before we can distribute them.

CentOS had a wiki page at one point where this was documented, but it wasn't always up to date. For example, the package nginx did not appear on their list yet still had RHEL branding in the CentOS repos. This forced us to do a deeper investigation into what needs to be changed or altered.

There are a few ways we've determined some of the changes:

When we need to make changes, they can be one or more of the following:

Current patches (for staging) are here.

"},{"location":"documentation/debranding/debrand_info/#packages-that-need-debranding-changes","title":"Packages that need debranding changes:","text":"

There is a metadata file that helps track this information for us. It can be located here and is separated by section and branch.

In essence, the file goes over these sections:

"},{"location":"documentation/debranding/debrand_info/#packages-that-need-to-become-other-packages","title":"Packages that need to become other packages:","text":"

There is a metadata file that tracks this for us. It can be located here. The section in particular is called provides.

For example, redhat-logos or system-logos is provided by, or \"becomes\", rocky-logos.

"},{"location":"documentation/debranding/debrand_info/#packages-that-exist-in-rhel-but-do-not-exist-in-most-derivatives","title":"Packages that Exist in RHEL, but do not exist in most derivatives","text":"

For the sake of completeness, here is a list of packages that are in RHEL but may not exist in derivatives. We do not need to worry about these packages:

"},{"location":"documentation/debranding/patching/","title":"Rocky Patching Guide","text":"

This explains how to debrand/patch a package for the Rocky Linux distribution.

"},{"location":"documentation/debranding/patching/#general-instructions","title":"General Instructions","text":""},{"location":"documentation/debranding/patching/#the-patch-config-language","title":"The Patch Config Language","text":"

Patching uses simple proto3 config files. The general format is:

Action {\n    file: \"OriginalFile\"\n    with_file: \"ROCKY/_supporting/RockyReplaceFile\"\n}\n

A simple example to replace a file:

replace {\n    file: \"redhatlogo.png\"\n    with_file: \"ROCKY/_supporting/rockylogo.png\"\n}\n

The file \"redhatlogo.png\" would be located in under SOURCES/ in the project's Git repository (and SRPM).

"},{"location":"documentation/debranding/patching/#patch-configuration-options","title":"Patch configuration options","text":"

Patch configuration structure:

.\n\u2514\u2500\u2500 ROCKY\n    \u251c\u2500\u2500 CFG\n    \u2514\u2500\u2500 _supporting\n
"},{"location":"documentation/debranding/patching/#case-study-nginx","title":"Case Study: Nginx","text":"

(note: all example data here is currently in the staging/ area of Rocky Linux Git. We will update it when the projects are moved to the production area)

Let's go over an example debrand, featuring the Nginx web server package.

The source repository is located here: https://git.centos.org/rpms/nginx

If we browse one of the c8-* branches, we see under SOURCES/ that there is definitely some content that needs to be debranded:

404.html\n50x.html\nindex.html\npoweredby.png  (binary file in dist-git, referred to in .nginx.metadata)\n

These files all refer to Red Hat, Inc., and must be replaced before they make it to Rocky Linux.

1: Come up with the patches: Each of these files needs a Rocky Linux counterpart, which must be created. Some of this should be done by the Design Team, especially logo work (#Design on chat)

2: Commit patches to the matching patch/PROJECT Git repository: For example, Nginx patches are located here: https://git.rockylinux.org/staging/patch/nginx (the staging/ prefix is currently used until our production repos are set up)

3: Develop a matching config file: Our example Nginx has this here: https://git.rockylinux.org/staging/patch/nginx/-/blob/main/ROCKY/CFG/pages.cfg

It looks like this:

replace {\n    file: \"index.html\"\n    with_file: \"ROCKY/_supporting/index.html\"\n}\n\nreplace {\n    file: \"404.html\"\n    with_file: \"ROCKY/_supporting/404.html\"\n}\n\nreplace {\n    file: \"50x.html\"\n    with_file: \"ROCKY/_supporting/50x.html\"\n}\n\nreplace {\n    file: \"poweredby.png\"\n    with_file: \"ROCKY/_supporting/poweredby.png\"\n}\n

4: Test the import: Now, when the upstream is imported, we can check the main Rocky nginx repository and ensure our updates were successful: https://git.rockylinux.org/staging/rpms/nginx/ (again, staging/ group is used only for now)

5: You're Done! Great! Now do the next one... ;-)

"},{"location":"documentation/debranding/patching/#more-debrand-config-language","title":"More Debrand Config Language","text":"

The Nginx example showed just the replace directive, but several more are available: add, patch, and delete.

Here they are, with examples:

# Add a file to the project (file is added to SOURCES/ folder )\nadd {\n    file: \"ROCKY/_supporting/add_me.txt\"\n}\n\n# Apply a .patch file (generated using the Linux \"patch\" utility)\npatch {\n    file: \"ROCKY/_supporting/002-test-html.patch\"\n}\n\n# Delete a file from the source project\ndelete {\n    file: \"SOURCES/dontneed.txt\"\n}\n

And the .patch file example looks like this:

diff --git a/SOURCES/test.html b/SOURCES/test.html\nindex 8d91ffd..3f76c3b 100644\n--- a/SOURCES/test.html\n+++ b/SOURCES/test.html\n@@ -1,6 +1,6 @@\n <!DOCTYPE html>\n <html>\n     <body>\n-        <h1>Replace me</h1>\n+        <h1>Replace I did!</h1>\n     </body>\n </html>\n

It also supports spec file changes when necessary. For example, from the anaconda debrand patch repo:

add {\n    file: \"ROCKY/_supporting/0002-Rocky-disable-cdn-radiobutton.patch\"\n}\n\nspec_change {\n    # Adds a Patch line with the file name as listed above\n    file {\n        name: \"0002-Rocky-disable-cdn-radiobutton.patch\"\n        type: Patch\n        add: true\n    }\n\n    # Appends to the end of a field's line, in this case the Release field gets .rocky\n    append {\n        field: \"Release\"\n        value: \".rocky\"\n    }\n\n    # Adds to the change log properly\n    changelog {\n        author_name: \"Mustafa Gezen\"\n        author_email: \"mustafa@rockylinux.org\"\n        message: \"Disable CDN and add .rocky to Release\"\n    }\n}\n

In the end, the spec file ends up changed like this:

Summary:              Graphical system installer\nName:                 anaconda\nVersion:              33.16.3.26\n                      # Our .rocky appears here\nRelease:              2%{?dist}.rocky\n\n-- snip --\n\nPatch1:               0001-network-do-not-crash-on-infiniband-devices-activated.patch\n                      # Look, our patch was added!\n                      # Luckily this RPM uses %autosetup, so no %patch lines\nPatch2:               0002-Rocky-disable-cdn-radiobutton.patch\n\n-- snip --\n\n# And below the added changelog\n%changelog\n* Thu Feb 25 2021 Mustafa Gezen <mustafa@rockylinux.org> - 33.16.3.26-2\n- Disable CDN and add .rocky to Release\n\n* Thu Oct 22 2020 Radek Vykydal <rvykydal@redhat.com> - 33.16.3.26-2\n- network: do not crash on infiniband devices activated in initramfs\n  (rvykydal)\n  Resolves: rhbz#1890261\n
"},{"location":"documentation/guidelines/","title":"Guidelines","text":"

This section primarily contains documentation and useful information pertaining to guidelines for various packages and asset usage.

Release Engineering has the final \"go/no-go\" decision on submissions, assets, images, and the general structure of the release of Rocky Linux.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/","title":"Rocky Logos Package Guidelines","text":"

This page goes over the basic guidelines for the rocky-logos package, which provides anaconda assets, wallpapers, and other artwork for the distribution.

Release Engineering has the final \"go/no-go\" decision on submissios/assets/images in the package.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#rocky-logo-assets","title":"Rocky Logo Assets","text":"

In various parts of the package, the Rocky logo will need to exist in multiple forms:

This can be in the form of PNG, JPG, or SVG files.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#anaconda","title":"anaconda","text":"

All anaconda image assets will be in PNG form. Backgrounds should be transparent with the exception of rnotes if applicable.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#backgrounds","title":"Backgrounds","text":"

See Backgrounds Section

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#fedora","title":"fedora","text":"

SVG logo assets provided as fedora_logo (color) and fedora_logo_darkbackground (white), with fedora_logo as the default.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#firstboot","title":"firstboot","text":"

First boot assets. This is generally the sidebar (like the anaconda installer) and a workstation icon. PNG format.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#iconshicolor","title":"icons/hicolor","text":"

Rocky icons will appear here in different resolutions and must be in PNG or SVG format:

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#ipa","title":"ipa","text":"

IPA-specific assets, usually text. They are generally PNG or JPG:

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#pixmaps","title":"pixmaps","text":"

PNG format, these are usually assets used within GNOME, GDM, and other desktop environments.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#plymouthcharge","title":"plymouth/charge","text":"

Typically unchanged; used for the plymouth loading screen.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#svg","title":"svg","text":"

SVG logo assets provided as fedora_logo (color) and fedora_logo_darkbackground (white).

The color file dictates the background color, if applicable.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#testpage","title":"testpage","text":"

index.html for httpd/nginx/etc

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#backgroundswallpapers","title":"Backgrounds/Wallpapers","text":""},{"location":"documentation/guidelines/rocky_logos_guidelines/#structure","title":"Structure","text":"

Wallpapers appear in PNG format with a backing XML file to list out all available resolutions if applicable, as well as defaults.

A defaults file references every other XML file that is a default background provided by the backgrounds package, along with default options if applicable.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE wallpapers SYSTEM \"gnome-wp-list.dtd\">\n<wallpapers>\n    <wallpaper deleted=\"false\">\n        <name>Rocky Linux 9 Default Background - Placeholder Mesh</name>\n        <filename>/usr/share/backgrounds/rocky-default-1-mesh.xml</filename>\n        <options>zoom</options>\n        <author>Louis Abel</author>\n        <email>label@rockylinux.org</email>\n        <license>CC-BY-SA 4.0</license>\n    </wallpaper>\n    <wallpaper deleted=\"false\">\n        <name>Rocky Linux 9 Default Background - Placeholder Target</name>\n        <filename>/usr/share/backgrounds/rocky-default-1-target.xml</filename>\n        <options>zoom</options>\n        <author>Louis Abel</author>\n        <email>label@rockylinux.org</email>\n        <license>CC-BY-SA 4.0</license>\n    </wallpaper>\n</wallpapers>\n

The wallpaper XML itself lists every applicable variant of that background.

<background>\n  <starttime>\n    <year>2021</year>\n    <month>10</month>\n    <day>29</day>\n    <hour>19</hour>\n    <minute>21</minute>\n    <second>19</second>\n  </starttime>\n\n<static>\n<duration>10000000000.0</duration>\n<file>\n  <!-- Wide 16:9 -->\n  <size width=\"1920\" height=\"1080\">/usr/share/backgrounds/rocky-default-1-mesh-16-9.png</size>\n  <!-- Wide 16:10 -->\n  <size width=\"1920\" height=\"1200\">/usr/share/backgrounds/rocky-default-1-mesh-16-10.png</size>\n  <!-- Standard 4:3 -->\n  <size width=\"2048\" height=\"1536\">/usr/share/backgrounds/rocky-default-1-mesh-4-3.png</size>\n  <!-- Normalish 5:4 -->\n  <size width=\"1280\" height=\"1024\">/usr/share/backgrounds/rocky-default-1-mesh-5-4.png</size>\n</file>\n</static>\n</background>\n

Day/Night Wallpapers have a similar configuration.

<background>\n  <starttime>\n    <year>2022</year>\n    <month>01</month>\n    <day>01</day>\n    <hour>8</hour>\n    <minute>00</minute>\n    <second>00</second>\n  </starttime>\n<!-- This animation will start at 8 AM. -->\n\n<!-- We start with day at 8 AM. It will remain up for 10 hours. -->\n<static>\n<duration>36000.0</duration>\n<file>/usr/share/backgrounds/rocky-default-1-mesh-day.png</file>\n</static>\n\n<!-- Day ended and starts to transition to night at 6 PM. The transition lasts for 2 hours, ending at 8 PM. -->\n<transition type=\"overlay\">\n<duration>7200.0</duration>\n<from>/usr/share/backgrounds/rocky-default-1-mesh-day.png</from>\n<to>/usr/share/backgrounds/rocky-default-1-mesh-night.png</to>\n</transition>\n\n<!-- It's 8 PM, we're showing the night till 6 AM. -->\n<static>\n<duration>36000.0</duration>\n<file>/usr/share/backgrounds/rocky-default-1-mesh-night.png</file>\n</static>\n\n<!-- It's 6 AM, and we're starting to transition to day. Transition completes at 8 AM. -->\n<transition type=\"overlay\">\n<duration>7200.0</duration>\n<from>/usr/share/backgrounds/rocky-default-1-mesh-night.png</from>\n<to>/usr/share/backgrounds/rocky-default-1-mesh-day.png</to>\n</transition>\n\n</background>\n
"},{"location":"documentation/guidelines/rocky_logos_guidelines/#guidelines","title":"Guidelines","text":"

This section goes over the general guidelines for the main backgrounds included in the distribution.

Note: It is highly recommended and encouraged that a submission be at the highest resolution noted below. See the note on minimum resolutions.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#minimum-resolutions","title":"Minimum Resolutions","text":"

For general submissions, we request a high resolution to make things simpler; that way, the user can use a wallpaper without having to choose \"the right one\" for their monitor size. However, in the case of extra backgrounds, this requirement is more relaxed. If a submitter does not wish to use the highest resolution and opts to provide multiple resolutions instead, they should follow the list below:

The placeholders in this commit show an example of using the minimum resolutions.

"},{"location":"documentation/guidelines/rocky_logos_guidelines/#extras-package","title":"Extras Package","text":"

If a wallpaper does not make it into the main package (whether it doesn't meet the guidelines or is simply Rocky-inspired), it can be submitted for inclusion in the rocky-backgrounds-extras package.

"},{"location":"documentation/references/","title":"References","text":"

Use this section to locate reference configuration items for the toolkit.

"},{"location":"documentation/references/empanadas_common/","title":"Empanadas common.py Configuration","text":"

The common.py configuration contains dictionaries and classes that dictate most of the functionality of empanadas.

"},{"location":"documentation/references/empanadas_common/#config-items","title":"Config Items","text":"

type: Dictionary

"},{"location":"documentation/references/empanadas_common/#configrlmacro","title":"config.rlmacro","text":"

type: String

required: True

description: Empanadas expects to run on an EL system; this is part of the general check-up. This value should not be hardcoded and should use the rpm python module.

"},{"location":"documentation/references/empanadas_common/#configdist","title":"config.dist","text":"

type: String

required: False

description: Originally the dist tag placed in mock configs. This combines el with the rpm python module expansion. It is no longer required, but the option remains available for future use.

"},{"location":"documentation/references/empanadas_common/#configarch","title":"config.arch","text":"

type: String

required: True

description: The architecture of the current running system. This is checked against the supported architectures in general release configurations. This should not be hardcoded.

"},{"location":"documentation/references/empanadas_common/#configdate_stamp","title":"config.date_stamp","text":"

type: String

required: True

description: Date time stamp in the form of YYYYMMDD.HHMMSS. This should not be hardcoded.
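
As a quick illustration of that format (the command and example output below are assumptions; empanadas generates this value itself, as seen in the reference example at the end of this page):

# Shell equivalent of the YYYYMMDD.HHMMSS format\n% date +%Y%m%d.%H%M%S\n20240318.153000\n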

"},{"location":"documentation/references/empanadas_common/#configcompose_root","title":"config.compose_root","text":"

type: String

required: True

description: Root path of composes on the system running empanadas.

"},{"location":"documentation/references/empanadas_common/#configstaging_root","title":"config.staging_root","text":"

type: String

required: False

description: For future use. Root path of staging repository location where content will be synced to.

"},{"location":"documentation/references/empanadas_common/#configproduction_root","title":"config.production_root","text":"

type: String

required: False

description: For future use. Root path of production repository location where content will be synced to from staging.

"},{"location":"documentation/references/empanadas_common/#configcategory_stub","title":"config.category_stub","text":"

type: String

required: True

description: For future use. Stub path that is appended to staging_root and production_root.

example: mirror/pub/rocky

"},{"location":"documentation/references/empanadas_common/#configsig_category_stub","title":"config.sig_category_stub","text":"

type: String

required: True

description: For future use. Stub path that is appended to staging_root and production_root for SIG content.

example: mirror/pub/sig

"},{"location":"documentation/references/empanadas_common/#configrepo_base_url","title":"config.repo_base_url","text":"

type: String

required: True

description: The base URL where the repositories live. This is typically a peridot instance and is supplemented by the project_id configuration parameter.

Note that this does not have to be a peridot instance. The combination of this value and project_id can be sufficient for empanadas to perform its work, as sketched below.
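
As a rough illustration using example values from the reference configurations in this documentation (the exact path layout beyond the project id is handled by empanadas):

# Illustration only: project_id is appended to repo_base_url\nrepo_base_url: https://yumrepofs.build.resf.org/v1/projects\nproject_id:    55b17281-bc54-4929-8aca-a8a11d628738\n# resulting repository base:\n#   https://yumrepofs.build.resf.org/v1/projects/55b17281-bc54-4929-8aca-a8a11d628738/...\n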

"},{"location":"documentation/references/empanadas_common/#configmock_work_root","title":"config.mock_work_root","text":"

type: String

required: True

description: Hardcoded path to where ISO work is performed within a mock chroot. This is the default path created by mock and it is recommended not to change this.

example: /builddir

"},{"location":"documentation/references/empanadas_common/#configcontainer","title":"config.container","text":"

type: String

required: True

description: This is the container used to perform all operations in podman.

example: centos:stream9

"},{"location":"documentation/references/empanadas_common/#configdistname","title":"config.distname","text":"

type: String

required: True

description: Name of the distribution you are building or building for.

example: Rocky Linux

"},{"location":"documentation/references/empanadas_common/#configshortname","title":"config.shortname","text":"

type: String

required: True

description: Short name of the distribution you are building or building for.

example: Rocky

"},{"location":"documentation/references/empanadas_common/#configtranslators","title":"config.translators","text":"

type: Dictionary

required: True

description: Translates Linux architectures to golang architectures. Reserved for future use.

"},{"location":"documentation/references/empanadas_common/#configaws_region","title":"config.aws_region","text":"

type: String

required: False

description: The region you are working in with AWS or an on-prem cloud that supports this variable.

example: us-east-2

"},{"location":"documentation/references/empanadas_common/#configbucket","title":"config.bucket","text":"

type: String

required: False

description: Name of the S3-compatible bucket that is used to pull images from. Requires aws_region.

"},{"location":"documentation/references/empanadas_common/#configbucket_url","title":"config.bucket_url","text":"

type: String

required: False

description: URL of the S3-compatible bucket that is used to pull images from.

"},{"location":"documentation/references/empanadas_common/#allowed_type_variants-items","title":"allowed_type_variants items","text":"

type: Dictionary

description: Key value pairs of cloud or image variants. The value is either None or a list type.

"},{"location":"documentation/references/empanadas_common/#reference-example","title":"Reference Example","text":"
config = {\n    \"rlmacro\": rpm.expandMacro('%rhel'),\n    \"dist\": 'el' + rpm.expandMacro('%rhel'),\n    \"arch\": platform.machine(),\n    \"date_stamp\": time.strftime(\"%Y%m%d.%H%M%S\", time.localtime()),\n    \"compose_root\": \"/mnt/compose\",\n    \"staging_root\": \"/mnt/repos-staging\",\n    \"production_root\": \"/mnt/repos-production\",\n    \"category_stub\": \"mirror/pub/rocky\",\n    \"sig_category_stub\": \"mirror/pub/sig\",\n    \"repo_base_url\": \"https://yumrepofs.build.resf.org/v1/projects\",\n    \"mock_work_root\": \"/builddir\",\n    \"container\": \"centos:stream9\",\n    \"distname\": \"Rocky Linux\",\n    \"shortname\": \"Rocky\",\n    \"translators\": {\n        \"x86_64\": \"amd64\",\n        \"aarch64\": \"arm64\",\n        \"ppc64le\": \"ppc64le\",\n        \"s390x\": \"s390x\",\n        \"i686\": \"386\"\n    },\n    \"aws_region\": \"us-east-2\",\n    \"bucket\": \"resf-empanadas\",\n    \"bucket_url\": \"https://resf-empanadas.s3.us-east-2.amazonaws.com\"\n}\n\nALLOWED_TYPE_VARIANTS = {\n        \"Azure\": None,\n        \"Container\": [\"Base\", \"Minimal\", \"UBI\"],\n        \"EC2\": None,\n        \"GenericCloud\": None,\n        \"Vagrant\": [\"Libvirt\", \"Vbox\"],\n        \"OCP\": None\n\n}\n
"},{"location":"documentation/references/empanadas_config/","title":"Empanadas config yaml Configuration","text":"

Each file in empanadas/config/ is a yaml file that contains configuration items for a distribution release version. The configuration can heavily dictate the functionality and which features are directly supported by empanadas when run.

See the items below for which options are mandatory and which are optional.

"},{"location":"documentation/references/empanadas_config/#config-items","title":"Config Items","text":""},{"location":"documentation/references/empanadas_config/#top-level","title":"Top Level","text":"

The Top Level is the name of the profile and starts the YAML dictionary for the release. It is alphanumeric and accepts punctuation within reason. Common examples:
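
For instance, in this trimmed excerpt from the reference example at the end of this page, a profile named '9' starts the dictionary for that release:

'9':\n  fullname: 'Rocky Linux 9.0'\n  revision: '9.0'\n  major: '9'\n  minor: '0'\n  profile: '9'\n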

"},{"location":"documentation/references/empanadas_config/#fullname","title":"fullname","text":"

type: String

required: True

description: Needed for treeinfo and discinfo generation.

"},{"location":"documentation/references/empanadas_config/#revision","title":"revision","text":"

type: String

required: True

description: Full version of a release

"},{"location":"documentation/references/empanadas_config/#rclvl","title":"rclvl","text":"

type: String

required: True

description: Release Candidate or Beta descriptor. Sets names and versions with this descriptor if enabled.

"},{"location":"documentation/references/empanadas_config/#major","title":"major","text":"

type: String

required: True

description: Major version of a release

"},{"location":"documentation/references/empanadas_config/#minor","title":"minor","text":"

type: String

required: True

description: Minor version of a release

"},{"location":"documentation/references/empanadas_config/#profile","title":"profile","text":"

type: String

required: True

description: Matches the top level of the release. This should not differ from the top level assignment.

"},{"location":"documentation/references/empanadas_config/#disttag","title":"disttag","text":"

type: String

required: True

description: Sets the dist tag for mock configs.

"},{"location":"documentation/references/empanadas_config/#bugurl","title":"bugurl","text":"

type: String

required: True

description: A URL to the bug tracker for this release or distribution.

"},{"location":"documentation/references/empanadas_config/#checksum","title":"checksum","text":"

type: String

required: True

description: Checksum type. Used when generating checksum information for images.

"},{"location":"documentation/references/empanadas_config/#fedora_major","title":"fedora_major","text":"

type: String

required: False

description: For future use with icicle.

"},{"location":"documentation/references/empanadas_config/#gpg_key","title":"gpg_key","text":"

type: List

required: False

description: List of GPG keys for a given repository

"},{"location":"documentation/references/empanadas_config/#repo_gpg_key","title":"repo_gpg_key","text":"

type: List

required: False

description: List of GPG keys for a given repository. Use this if the signing key for the repo is different from packages.

"},{"location":"documentation/references/empanadas_config/#allowed_arches","title":"allowed_arches","text":"

type: list

required: True

description: List of supported architectures for this release.

"},{"location":"documentation/references/empanadas_config/#provide_multilib","title":"provide_multilib","text":"

type: boolean

required: True

description: Sets whether the x86_64 architecture will be multilib. It is recommended that this is set to True.

"},{"location":"documentation/references/empanadas_config/#project_id","title":"project_id","text":"

type: String

required: True

description: Appended to the base repo URL in common.py. For peridot, it is the project id that is generated for the project you are pulling from. It can be set to anything else if need be for non-peridot use.

"},{"location":"documentation/references/empanadas_config/#repo_symlinks","title":"repo_symlinks","text":"

type: dict

required: False

description: For future use. Sets symlinks to repositories for backwards compatibility. Key value pairs only.

"},{"location":"documentation/references/empanadas_config/#renames","title":"renames","text":"

type: dict

required: False

description: Renames a repository to the value set. For example, renaming all to devel, as shown below. Set to {} if no renames are going to occur.
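
For instance, as in the reference example at the end of this page, renaming the all repository to devel looks like this:

renames:\n  all: 'devel'\n\n# Or, when nothing should be renamed:\nrenames: {}\n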

"},{"location":"documentation/references/empanadas_config/#all_repos","title":"all_repos","text":"

type: list

required: True

description: List of repositories that will be synced/managed by empanadas.

"},{"location":"documentation/references/empanadas_config/#structure","title":"structure","text":"

type: dict

required: True

description: Key value pairs of packages and repodata paths. These are appended appropriately during syncing and ISO actions. Setting these is mandatory.

"},{"location":"documentation/references/empanadas_config/#iso_map","title":"iso_map","text":"

type: dictionary

required: True if building ISOs and operating with lorax.

description: Controls how lorax and extra ISOs are built.

If you are not building images, set this to {}.

"},{"location":"documentation/references/empanadas_config/#xorrisofs","title":"xorrisofs","text":"

type: boolean

required: True

description: Dictates whether xorrisofs is used to build images. Setting this to false uses genisoimage. It is recommended that xorrisofs is used.

"},{"location":"documentation/references/empanadas_config/#iso_level","title":"iso_level","text":"

type: boolean

required: True

description: Set to false if you are using xorrisofs. Can be set to true when using genisoimage.

"},{"location":"documentation/references/empanadas_config/#images","title":"images","text":"

type: dict

required: True

description: Dictates the ISO images that will be made or the treeinfo that will be generated.

Note: The primary repository (for example, BaseOS) will need to be listed to ensure the treeinfo data is correctly generated. disc should be set to False and isoskip should be set to True. See the example section for an example.

"},{"location":"documentation/references/empanadas_config/#namedisc","title":"name.disc","text":"

type: boolean

required: True

description: This tells the iso builder whether this will be a generated ISO.

"},{"location":"documentation/references/empanadas_config/#nameisoskip","title":"name.isoskip","text":"

type: boolean

required: False

description: This tells the iso builder whether this image will be skipped, even if disc is set to True. Default is False.

"},{"location":"documentation/references/empanadas_config/#namevariant","title":"name.variant","text":"

type: string

required: True

description: Names the primary variant repository for the image. This is set in .treeinfo.

"},{"location":"documentation/references/empanadas_config/#namerepos","title":"name.repos","text":"

type: list

required: True

description: Names of the repositories included in the image. This is added to .treeinfo.

"},{"location":"documentation/references/empanadas_config/#namevolname","title":"name.volname","text":"

type: string

required: True

required value: dvd

description: This is required if building more than the DVD image. By default, the name dvd is hardcoded in the buildImage template.

"},{"location":"documentation/references/empanadas_config/#lorax","title":"lorax","text":"

type: dict

required: True if building lorax images.

description: Sets up lorax images and which repositories to use when building lorax images.

"},{"location":"documentation/references/empanadas_config/#loraxrepos","title":"lorax.repos","text":"

type: list

required: True

description: List of repos that are used to pull packages to build the lorax images.

"},{"location":"documentation/references/empanadas_config/#loraxvariant","title":"lorax.variant","text":"

type: string

required: True

description: Base repository for the release

"},{"location":"documentation/references/empanadas_config/#loraxlorax_removes","title":"lorax.lorax_removes","text":"

type: list

required: False

description: Excludes packages that are not needed when lorax is running.

"},{"location":"documentation/references/empanadas_config/#loraxrequired_pkgs","title":"lorax.required_pkgs","text":"

type: list

required: True

description: Required list of installed packages needed to build lorax images.

"},{"location":"documentation/references/empanadas_config/#livemap","title":"livemap","text":"

type: dict

required: False

description: Dictates what live images are built and how they are built.

"},{"location":"documentation/references/empanadas_config/#livemapgit_repo","title":"livemap.git_repo","text":"

type: string

required: True

description: The git repository URL where the kickstarts live

"},{"location":"documentation/references/empanadas_config/#livemapbranch","title":"livemap.branch","text":"

type: string

required: True

description: The branch being used for the kickstarts

"},{"location":"documentation/references/empanadas_config/#livemapksentry","title":"livemap.ksentry","text":"

type: dict

required: True

description: Key value pairs of the live images being created. Key being the name of the live image, value being the kickstart name/path.

"},{"location":"documentation/references/empanadas_config/#livemapallowed_arches","title":"livemap.allowed_arches","text":"

type: list

required: True

description: List of allowed architectures that will build for the live images.

"},{"location":"documentation/references/empanadas_config/#livemaprequired_pkgs","title":"livemap.required_pkgs","text":"

type: list

required: True

description: Required list of packages needed to build the live images.

"},{"location":"documentation/references/empanadas_config/#cloudimages","title":"cloudimages","text":"

type: dict

required: False

description: Cloud related settings.

Set to {} if not needed.

"},{"location":"documentation/references/empanadas_config/#cloudimagesimages","title":"cloudimages.images","text":"

type: dict

required: True

description: Cloud images that will be generated and placed in a bucket to be pulled, and their format.

"},{"location":"documentation/references/empanadas_config/#cloudimagesimagesname","title":"cloudimages.images.name","text":"

type: dict

required: True

description: Name of the cloud image being pulled.

Accepted key value options:

"},{"location":"documentation/references/empanadas_config/#repoclosure_map","title":"repoclosure_map","text":"

type: dict

required: True

description: Repoclosure settings. These settings are absolutely required when doing full syncs that need to check repositories for consistency.

"},{"location":"documentation/references/empanadas_config/#repoclosure_maparches","title":"repoclosure_map.arches","text":"

type: dict

required: True

description: For each architecture (key), dnf switches/settings that dictate how repoclosure will check for consistency (value, string).

example: x86_64: '--forcearch=x86_64 --arch=x86_64 --arch=athlon --arch=i686 --arch=i586 --arch=i486 --arch=i386 --arch=noarch'

"},{"location":"documentation/references/empanadas_config/#repoclosure_maprepos","title":"repoclosure_map.repos","text":"

type: dict

required: True

description: For each repository that is pulled for a given release (key), the repositories that will be included in the repoclosure check (value). A repository that only checks against itself must have a value of [].
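
For instance, excerpted from the reference example below, BaseOS only checks against itself while AppStream is checked alongside BaseOS:

repoclosure_map:\n  repos:\n    BaseOS: []\n    AppStream:\n      - BaseOS\n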

"},{"location":"documentation/references/empanadas_config/#extra_files","title":"extra_files","text":"

type: dict

required: True

description: Extra files settings and where they come from. Git repositories are the only supported method.

"},{"location":"documentation/references/empanadas_config/#extra_filesgit_repo","title":"extra_files.git_repo","text":"

type: string

required: True

description: URL to the git repository with the extra files.

"},{"location":"documentation/references/empanadas_config/#extra_filesgit_raw_path","title":"extra_files.git_raw_path","text":"

type: string

required: True

description: URL to the git repository with the extra files, but in its \"raw\" URL form.

example: git_raw_path: 'https://git.rockylinux.org/staging/src/rocky-release/-/raw/r9/'

"},{"location":"documentation/references/empanadas_config/#extra_filesbranch","title":"extra_files.branch","text":"

type: string

required: True

description: Branch where the extra files are pulled from.

"},{"location":"documentation/references/empanadas_config/#extra_filesgpg","title":"extra_files.gpg","text":"

type: dict

required: True

description: For each gpg key type (key), the relative path to the key in the git repository (value).

These keys help set up the repository configuration when doing syncs.

By default, the RepoSync class sets stable as the gpgkey that is used.
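
A hypothetical sketch of this dictionary is shown below; the key names and paths are assumptions (only stable is mentioned above as the default used by RepoSync), so check an actual release configuration for the real values:

gpg:\n  # key type (key) mapped to the relative path of the key in the git repository (value)\n  stable: 'SOURCES/RPM-GPG-KEY-Rocky-9'\n  testing: 'SOURCES/RPM-GPG-KEY-Rocky-9-Testing'\n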

"},{"location":"documentation/references/empanadas_config/#extra_fileslist","title":"extra_files.list","text":"

type: list

required: True

description: List of files from the git repository that will be used as \"extra\" files, placed in the repositories, made available to mirrors, and included on ISO images if applicable.

"},{"location":"documentation/references/empanadas_config/#reference-example","title":"Reference Example","text":"
---\n'9':\n  fullname: 'Rocky Linux 9.0'\n  revision: '9.0'\n  rclvl: 'RC2'\n  major: '9'\n  minor: '0'\n  profile: '9'\n  disttag: 'el9'\n  bugurl: 'https://bugs.rockylinux.org'\n  checksum: 'sha256'\n  fedora_major: '20'\n  allowed_arches:\n    - x86_64\n    - aarch64\n    - ppc64le\n    - s390x\n  provide_multilib: True\n  project_id: '55b17281-bc54-4929-8aca-a8a11d628738'\n  repo_symlinks:\n    NFV: 'nfv'\n  renames:\n    all: 'devel'\n  all_repos:\n    - 'all'\n    - 'BaseOS'\n    - 'AppStream'\n    - 'CRB'\n    - 'HighAvailability'\n    - 'ResilientStorage'\n    - 'RT'\n    - 'NFV'\n    - 'SAP'\n    - 'SAPHANA'\n    - 'extras'\n    - 'plus'\n  structure:\n    packages: 'os/Packages'\n    repodata: 'os/repodata'\n  iso_map:\n    xorrisofs: True\n    iso_level: False\n    images:\n      dvd:\n        disc: True\n        variant: 'AppStream'\n        repos:\n          - 'BaseOS'\n          - 'AppStream'\n      minimal:\n        disc: True\n        isoskip: True\n        repos:\n          - 'minimal'\n          - 'BaseOS'\n        variant: 'minimal'\n        volname: 'dvd'\n      BaseOS:\n        disc: False\n        isoskip: True\n        variant: 'BaseOS'\n        repos:\n          - 'BaseOS'\n          - 'AppStream'\n    lorax:\n      repos:\n        - 'BaseOS'\n        - 'AppStream'\n      variant: 'BaseOS'\n      lorax_removes:\n        - 'libreport-rhel-anaconda-bugzilla'\n      required_pkgs:\n        - 'lorax'\n        - 'genisoimage'\n        - 'isomd5sum'\n        - 'lorax-templates-rhel'\n        - 'lorax-templates-generic'\n        - 'xorriso'\n  cloudimages:\n    images:\n      EC2:\n        format: raw\n      GenericCloud:\n        format: qcow2\n  livemap:\n    git_repo: 'https://git.resf.org/sig_core/kickstarts.git'\n    branch: 'r9'\n    ksentry:\n      Workstation: rocky-live-workstation.ks\n      Workstation-Lite: rocky-live-workstation-lite.ks\n      XFCE: rocky-live-xfce.ks\n      KDE: rocky-live-kde.ks\n      MATE: rocky-live-mate.ks\n    allowed_arches:\n      - x86_64\n      - aarch64\n    required_pkgs:\n      - 'lorax-lmc-novirt'\n      - 'vim-minimal'\n      - 'pykickstart'\n      - 'git'\n  variantmap:\n    git_repo: 'https://git.rockylinux.org/rocky/pungi-rocky.git'\n    branch: 'r9'\n    git_raw_path: 'https://git.rockylinux.org/rocky/pungi-rocky/-/raw/r9/'\n  repoclosure_map:\n    arches:\n      x86_64: '--forcearch=x86_64 --arch=x86_64 --arch=athlon --arch=i686 --arch=i586 --arch=i486 --arch=i386 --arch=noarch'\n      aarch64: '--forcearch=aarch64 --arch=aarch64 --arch=noarch'\n      ppc64le: '--forcearch=ppc64le --arch=ppc64le --arch=noarch'\n      s390x: '--forcearch=s390x --arch=s390x --arch=noarch'\n    repos:\n      devel: []\n      BaseOS: []\n      AppStream:\n        - BaseOS\n      CRB:\n        - BaseOS\n        - AppStream\n      HighAvailability:\n        - BaseOS\n        - AppStream\n      ResilientStorage:\n        - BaseOS\n        - AppStream\n      RT:\n        - BaseOS\n        - AppStream\n      NFV:\n        - BaseOS\n        - AppStream\n      SAP:\n        - BaseOS\n        - AppStream\n        - HighAvailability\n      SAPHANA:\n        - BaseOS\n        - AppStream\n        - HighAvailability\n  extra_files:\n    git_repo: 'https://git.rockylinux.org/staging/src/rocky-release.git'\n    git_raw_path: 'https://git.rockylinux.org/staging/src/rocky-release/-/raw/r9/'\n    branch: 'r9'\n    list:\n      - 'SOURCES/Contributors'\n      - 'SOURCES/COMMUNITY-CHARTER'\n      - 'SOURCES/EULA'\n      - 'SOURCES/LICENSE'\n      - 
'SOURCES/RPM-GPG-KEY-Rocky-9'\n      - 'SOURCES/RPM-GPG-KEY-Rocky-9-Testing'\n...\n
"},{"location":"documentation/references/empanadas_sig_config/","title":"Empanadas SIG yaml Configuration","text":"

Each file in empanadas/sig/ is a yaml file that contains configuration items for the distribution release version. The configuration determines the structure of the SIG repositories synced from Peridot or a given repo.

Note that a release profile (for a major version) is still required for this sync to work.

See the items below for which options are mandatory and which are optional.

"},{"location":"documentation/references/empanadas_sig_config/#config-items","title":"Config Items","text":""},{"location":"documentation/references/empanadas_sig_config/#reference-example","title":"Reference Example","text":""},{"location":"events/meeting-notes/2024-03-18/","title":"Release Engineering (SIG/Core) Meeting 2024-03-18","text":""},{"location":"events/meeting-notes/2024-03-18/#attendees","title":"Attendees","text":""},{"location":"events/meeting-notes/2024-03-18/#old-business","title":"Old Business","text":"

To fill.

"},{"location":"events/meeting-notes/2024-03-18/#new-business","title":"New Business","text":"

To fill.

"},{"location":"events/meeting-notes/2024-03-18/#open-floor","title":"Open Floor","text":"

To fill.

"},{"location":"events/meeting-notes/2024-03-18/#action-items","title":"Action Items","text":"

To fill.

"},{"location":"include/resources_bottom/","title":"Resources bottom","text":"Resources Account ServicesGit (RESF Git Service)Git (Rocky Linux GitHub)Git (Rocky Linux GitLab)Mail ListsContacts

URL: https://accounts.rockylinux.org

Purpose: Account Services maintains the accounts for almost all components of the Rocky ecosystem

Technology: Noggin used by Fedora Infrastructure

Contact: ~Infrastructure in Mattermost and #rockylinux-infra in Libera IRC

URL: https://git.resf.org

Purpose: General projects, code, and so on for the Rocky Enterprise Software Foundation.

Technology: Gitea

Contact: ~Infrastructure, ~Development in Mattermost and #rockylinux-infra, #rockylinux-devel in Libera IRC

URL: https://github.com/rocky-linux

Purpose: General purpose code, assets, and so on for Rocky Linux. Some content is mirrored to the RESF Git Service.

Technology: GitHub

Contact: ~Infrastructure, ~Development in Mattermost and #rockylinux-infra, #rockylinux-devel in Libera IRC

URL: https://git.rockylinux.org

Purpose: Packages and light code for the Rocky Linux distribution

Technology: GitLab

Contact: ~Infrastructure, ~Development in Mattermost and #rockylinux-infra, #rockylinux-devel in Libera IRC

URL: https://lists.resf.org

Purpose: Users can subscribe and interact with various mail lists for the Rocky ecosystem

Technology: Mailman 3 + Hyper Kitty

Contact: ~Infrastructure in Mattermost and #rockylinux-infra in Libera IRC

Name Email Mattermost Name IRC Name Louis Abel label@rockylinux.org @nazunalika Sokel/label/Sombra Mustafa Gezen mustafa@rockylinux.org @mustafa mstg Skip Grube skip@rockylinux.org @skip77 Sherif Nagy sherif@rockylinux.org @sherif Pablo Greco pgreco@rockylinux.org @pgreco pgreco Neil Hanlon neil@resf.org @neil neil Taylor Goodwill tg@resf.org @tgo tg"},{"location":"legacy/","title":"Legacy","text":"

Legacy documentation is collected here.

Debrand List

Koji Tagging

"},{"location":"legacy/debrand_list/","title":"Rocky Debrand Packages List","text":"

This is a list of packages that require changes to their material for acceptance in Rocky Linux. Usually this means there is some text or images in the package that reference upstream trademarks, and these must be swapped out before we can distribute them.

The first items in this list are referenced from the excellent CentOS release notes here: https://wiki.centos.org/Manuals/ReleaseNotes/CentOS8.1905#Packages_modified_by_CentOS

It is assumed that we will have to modify these same packages. It is also assumed that the changes to these packages might not necessarily be debranding.

However, this list is incomplete. For example, the package Nginx does not appear on the list, and still has RHEL branding in the CentOS repos. We will need to investigate the rest of the package set and find any more packages like this that we must modify.

One way to find such changes is to look for ?centos tags in the SPEC file, while also reviewing any manual debranding done for the c8 branches.

There will be cases where a simple search and replace of ?centos to ?rocky will be sufficient, as sketched below.
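
As a quick illustrative sketch (the spec file path is a placeholder), finding and swapping those conditionals could look like:

# Find %{?centos} style conditionals in a spec file\n% grep -n '?centos' SPECS/example.spec\n\n# Swap them for ?rocky where a plain search and replace is sufficient\n% sed -i 's/?centos/?rocky/g' SPECS/example.spec\n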

Current patches (for staging) are here.

"},{"location":"legacy/debrand_list/#packages-that-need-debranding-changes","title":"Packages that need debranding changes:","text":"Package Notes Work Status abrt See here DONE anaconda See here DONE apache-commons-net AppStream module with elevating branch names NO CHANGES REQUIRED ~~basesystem~~ (does not require debranding, it is a skeleton package) NO CHANGES REQUIRED cloud-init See here DONE - NEEDS REVIEW IN GITLAB (Rich Alloway) cockpit See here DONE ~~compat-glibc~~ NOT IN EL 8 dhcp See here DONE, NEEDS REVIEW IN GITLAB (Rich Alloway) firefox See here -- Still requires a distribution.ini ID MOSTLY DONE (Louis) fwupdate NOT STARTED glusterfs Changes don't appear to be required NO CHANGES REQUIRED gnome-settings-daemon No changes required for now. NO CHANGES REQUIRED grub2 (secureboot patches not done, just debrand) See here DONE, NEEDS REVIEW IN GITLAB AND SECUREBOOT (Rich Alloway) httpd See here DONE initial-setup See here DONE ipa This is a dual change: Logos and ipaplatform. Logos are taken care of in rocky-logos and the ipaplatform is taken care of here. See here DONE ~~kabi-yum-plugins~~ NOT IN EL 8 kernel See here for a potential example NOT STARTED ~~kde-settings~~ NOT IN EL 8 libreport See here DONE oscap-anaconda-addon See here DONE Requires install QA PackageKit See here DONE ~~pcs~~ NO CHANGES REQUIRED plymouth See here DONE ~~redhat-lsb~~ NO CHANGES REQUIRED redhat-rpm-config See here DONE scap-security-guide QA is likely required to test this package as it is NO CHANGES REQUIRED, QA REQUIRED shim NOT STARTED shim-signed NOT STARTED sos See here DONE subscription-manager See here DONE, NEEDS REVIEW ~~system-config-date~~ NOT IN EL8 ~~system-config-kdump~~ NOT IN EL8 thunderbird See here DONE ~~xulrunner~~ NOT IN EL 8 ~~yum~~ NO CHANGES REQUIRED (end of CentOS list) nginx Identified changes, in staging (ALMOST) DONE"},{"location":"legacy/debrand_list/#packages-that-need-to-become-other-packages","title":"Packages that need to become other packages:","text":"

We will want to create our own versions of these packages. The full "lineage" is shown, from RHEL -> CentOS -> Rocky (where applicable).

Package Notes redhat-indexhtml -> centos-indexhtml -> rocky-indexhtml Here redhat-logos -> centos-logos -> rocky-logos Here redhat-release-* -> centos-release -> rocky-release Here centos-backgrounds -> rocky-backgrounds Provided by logos centos-linux-repos -> rocky-repos Here centos-obsolete-packages Here"},{"location":"legacy/debrand_list/#packages-that-exist-in-rhel-but-not-in-centos","title":"Packages that Exist in RHEL, but not in CentOS","text":"

For the sake of completeness, here is a list of packages that are in RHEL 8 but do not exist in CentOS 8. We do not need to worry about these packages:

"},{"location":"legacy/koji_tagging/","title":"Koji Tagging Strategy","text":"

This document covers how the Rocky Linux Release Engineering Team handles the tagging for builds in Koji and how it affects the overall build process.

"},{"location":"legacy/koji_tagging/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Mattermost Contacts @label @mustafa @neil @tgo Mattermost Channels ~Development"},{"location":"legacy/koji_tagging/#what-is-koji","title":"What is Koji?","text":"

Koji is the build system used for Rocky Linux, as well as by CentOS, Fedora, and likely others. Red Hat uses an internal variant of Koji called "Brew" with similar functionality and usage. Koji uses mock, a common RPM building utility, to build RPMs in a chroot environment.

"},{"location":"legacy/koji_tagging/#architecture-of-koji","title":"Architecture of Koji","text":""},{"location":"legacy/koji_tagging/#components","title":"Components","text":"

Koji comprises multiple components:

"},{"location":"legacy/koji_tagging/#tags","title":"Tags","text":"

Tags are the most important part of the koji ecosystem. With tags, you can have specific repository build roots for the entire distribution, or just a small subset of builds that should not pollute the main build tags (for example, for SIGs where a package or two might be newer, or even older, than what's in BaseOS/AppStream).

Using tags, you can set up what is called "inheritance". For example, you could have a tag named dist-rocky8-build that inherits dist-rocky8-updates-build, which will likely have a newer set of packages than the former. Inheritance can be thought of as setting "dnf priorities", if you have done that before. Another way to look at it is "ordering": what comes first.

Generally, build targets reference tags to determine where packages are built and where the resulting builds land.
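
A couple of read-only koji CLI commands make this easier to see in practice. This is only a sketch; it assumes a target named dist-rocky8 exists alongside the dist-rocky8-build tag discussed below:

# show the full inheritance chain of a build tag\nkoji list-tag-inheritance dist-rocky8-build\n\n# show which build tag and destination tag a target maps to\nkoji list-targets --name dist-rocky8\n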

"},{"location":"legacy/koji_tagging/#tag-strategy","title":"Tag Strategy","text":"

A question we often get is "what's the difference between a build tag and an updates-build tag?" It's all about the inheritance. For example, let's take a look at dist-rocky8-build:

  dist-rocky8-build\n    el8\n    dist-rocky8\n    build-modules\n       . . .\n

In this tag, you can see that the build tag inherits the el8 packages first, then the packages in dist-rocky8, and then build-modules. This is generally where "base" packages start out, and a lot of them won't be updated or even change during the lifecycle of the version.

dist-rocky8-updates-build\n    el8\n    dist-rocky8-updates\n        dist-rocky8\n    dist-rocky8-build\n        build-modules\n

This one is a bit different. Notice that it inherits el8 first, then dist-rocky8-updates, which in turn inherits dist-rocky8. It also pulls in dist-rocky8-build, the previous tag we were talking about. This tag is where updates for a minor release are sent.

dist-rocky8_4-updates-build\n    el8_4\n    dist-rocky8-updates\n        dist-rocky8\n    dist-rocky8-build\n        el8\n        build-modules\n

Here's a more interesting one. Notice something? It's pretty similar to the last one, but see how it inherits el8_4 instead? This is where updates during 8.4 are sent, and that's how they get tagged as .el8_4 on the RPMs. The el8_4 tag contains a build macros package that instructs the %dist tag to be set that way. When 8.5 comes out, we'll have basically the same setup.

At the end of the day, builds that happen in these updates-build tags get dropped in dist-rocky8-updates.
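
As a rough illustration (the macro contents here are an assumption, not copied from the real macros package), the el8_4 tag's macros package boils down to setting %dist, which a build inside such a buildroot can confirm:

# the macros package in the el8_4 tag effectively provides:\n#   %dist .el8_4\n# which can be confirmed from inside the buildroot with:\nrpm --eval '%{dist}'\n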

"},{"location":"legacy/koji_tagging/#what-about-modules","title":"What about modules?","text":"

Modules are a bit tricky. We generally don't touch how MBS does its tags or what's going on there. When module builds are being done, they do end up using the el8 packages in some manner or form. The modules are separated entirely from the main tags, though, so they don't pollute them. You don't want a situation where, say, you build the latest ruby, but something else builds against the default version of ruby provided in el8, and now you're in trouble with dnf filtering issues.

"},{"location":"legacy/koji_tagging/#how-do-we-determine-what-is-part-of-a-compose","title":"How do we determine what is part of a compose?","text":"

There are special tags that have a -compose suffix. These tags are used to pull down packages for repository building during the pungi compose process.
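
For example (the tag name here is only illustrative of the -compose suffix convention), the builds feeding a compose can be inspected with:

# show the latest builds currently tagged for the compose\nkoji list-tagged dist-rocky8-compose --latest\n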

"},{"location":"rpm/","title":"RPM","text":"

This section is primarily for documentation and useful information as it pertains to package building and modularity. Use the menu on the left side to find the information you're looking for.

"},{"location":"rpm/local_module_builds/","title":"Local Module Builds","text":"

Within the Fedora and Red Hat ecosystem, modularity is unfortunately both a blessing and a curse; it might be more one than the other.

This page primarily covers how to do local builds for modules, including the final formatting of the module YAML description that has to be imported into the repo via modifyrepo_c.

Note that the below is based on how lazybuilder performs module builds, which was made to be close to MBS+Koji and is not perfect. This is mostly used as a reference.

"},{"location":"rpm/local_module_builds/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Email Contact infrastructure@rockylinux.org Mattermost Contacts @label @mustafa @neil @tgo Mattermost Channels ~Development"},{"location":"rpm/local_module_builds/#building-local-modules","title":"Building Local Modules","text":"

This section explains what it's like to build local modules, what you can do, and what you can expect.

"},{"location":"rpm/local_module_builds/#module-source-transmodrification-pulling-sources","title":"Module Source, \"transmodrification\", pulling sources","text":"

The module source typically lives in a SOURCES directory in a module git repo, under the name modulemd.src.txt. This is a basic version that could be used to do a module build. Each package listed carries a reference to the stream version for that particular module.

document: modulemd\nversion: 2\ndata:\n  stream: 1.4\n  summary: 389 Directory Server (base)\n  description: >-\n    389 Directory Server is an LDAPv3 compliant server.  The base package includes\n    the LDAP server and command line utilities for server administration.\n  license:\n    module:\n    - MIT\n  dependencies:\n  - buildrequires:\n      nodejs: [10]\n      platform: [el8]\n    requires:\n      platform: [el8]\n  filter:\n    rpms:\n    - cockpit-389-ds\n  components:\n    rpms:\n      389-ds-base:\n        rationale: Package in api\n        ref: stream-1.4-rhel-8.4.0\n        arches: [aarch64, ppc64le, s390x, x86_64]\n

Notice ref? That's the reference point. When a "transmodrification" occurs, the process looks at each RPM repo in the components['rpms'] list. The branch name that this module data lives in is the basis for determining what the new references will be. In this example, the branch name is r8-stream-1.4, so when we do the "conversion", the ref should become the git commit hash of the last commit on the branch r8-stream-1.4 for that particular rpm component.

document: modulemd\nversion: 2\ndata:\n  stream: \"1.4\"\n  summary: 389 Directory Server (base)\n  description: 389 Directory Server is an LDAPv3 compliant server.  The base package\n    includes the LDAP server and command line utilities for server administration.\n  license:\n    module:\n    - MIT\n  dependencies:\n  - buildrequires:\n      nodejs:\n      - \"10\"\n      platform:\n      - el8\n    requires:\n      platform:\n      - el8\n  filter:\n    rpms:\n    - cockpit-389-ds\n  components:\n    rpms:\n      389-ds-base:\n        rationale: Package in api\n        ref: efe94eb32d597765f49b7b1528ba9881e1f29327\n        arches:\n        - aarch64\n        - ppc64le\n        - s390x\n        - x86_64\n

See the reference now? It's now a commit hash that refers directly to 389-ds-base on branch r8-stream-1.4, pointing at the last commit/tag. See the glossary at the end of this page for more information; a ref can be a commit hash, branch, or tag name.
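
A minimal way to do that resolution by hand (assuming the staging dist-git URL shown later on this page) is:

# resolve the branch head to the commit hash that becomes the new ref\ngit ls-remote https://git.rockylinux.org/staging/rpms/389-ds-base.git refs/heads/r8-stream-1.4\n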

"},{"location":"rpm/local_module_builds/#configuring-macros-and-contexts","title":"Configuring Macros and Contexts","text":"

Traditionally, within an MBS and Koji system, several macros are created that are usually unique per module stream. Certain components work together to create a unique %dist tag based on several factors. To summarize, here's what generally happens:

"},{"location":"rpm/local_module_builds/#configuring-the-macros","title":"Configuring the Macros","text":"

In koji+MBS, a module macros package is made that defines the module macros. In lazybuilder, we skip that and define the macros directly. For example, in mock, we drop a file with all the macros we need. Here's an example for 389-ds. The file name is macros.zz-modules to ensure these macros are picked up last and take precedence over macros of similar names, especially the %dist tag.

rpmbuild# cat /etc/rpm/macros.zz-modules\n\n%dist .module_el8.4.0+636+837ee950\n%modularitylabel 389-ds:1.4:8040020210810203142:866effaa\n%_module_build 1\n%_module_name 389-ds\n%_module_stream 1.4\n%_module_version 8040020210810203142\n%_module_context 866effaa\n

The %dist tag is honestly the most important piece here, but all of these macros are required regardless.
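
As a rough sketch of doing this by hand (lazybuilder automates it differently; the config and file names here are assumptions), the macros file can be copied into a mock chroot before building:

# initialize the chroot and drop the macros file in place\nmock -r module.cfg --init\nmock -r module.cfg --copyin macros.zz-modules /etc/rpm/macros.zz-modules\n\n# later builds must use --no-clean so the file survives\nmock -r module.cfg --no-clean --rebuild 389-ds-base.src.rpm\n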

"},{"location":"rpm/local_module_builds/#build-opts-macros","title":"Build Opts Macros","text":"

Some modules may have additional buildopts macros. Perl is a great example of this. When koji+MBS make their module macros package for the build, they combine the module macros and the buildopts macros together into one file. It will be the exact same file name each time.

rpmbuild# cat /etc/rpm/macros.zz-modules\n\n# Module macros\n%dist .module+el8.4.0+463+10533ad3\n%modularitylabel perl:5.24:8040020210602173155:162f5753\n%_module_build 1\n%_module_name perl\n%_module_stream 5.24\n%_module_version 8040020210602173155\n%_module_context 162f5753\n\n# Build Opts macros\n%_with_perl_enables_groff 1\n%_without_perl_enables_syslog_test 1\n%_with_perl_enables_systemtap 1\n%_without_perl_enables_tcsh 1\n%_without_perl_Compress_Bzip2_enables_optional_test 1\n%_without_perl_CPAN_Meta_Requirements_enables_optional_test 1\n%_without_perl_IPC_System_Simple_enables_optional_test 1\n%_without_perl_LWP_MediaTypes_enables_mailcap 1\n%_without_perl_Module_Build_enables_optional_test 1\n%_without_perl_Perl_OSType_enables_optional_test 1\n%_without_perl_Pod_Perldoc_enables_tk_test 1\n%_without_perl_Software_License_enables_optional_test 1\n%_without_perl_Sys_Syslog_enables_optional_test 1\n%_without_perl_Test_Harness_enables_optional_test 1\n%_without_perl_URI_enables_Business_ISBN 1\n
"},{"location":"rpm/local_module_builds/#built-module-example","title":"Built Module Example","text":"

Let's break down an example of 389-ds, a simple module. Let's start with modulemd.txt, which is generated during a module build, before packages are built. Notice how it has xmd data. That data is an integral part of making the context, though it's mostly information for koji and MBS, generated on the fly and used throughout the build process for each arch. In the context of lazybuilder, fake data is created to essentially fill the gap of not having MBS+Koji in the first place. The comments point out what's used to make the contexts.

---\ndocument: modulemd\nversion: 2\ndata:\n  name: 389-ds\n  stream: 1.4\n  version: 8040020210810203142\n  context: 866effaa\n  summary: 389 Directory Server (base)\n  description: >-\n    389 Directory Server is an LDAPv3 compliant server.  The base package includes\n    the LDAP server and command line utilities for server administration.\n  license:\n    module:\n    - MIT\n  xmd:\n    mbs:\n      # This section xmd['mbs']['buildrequires'] is used to generate the build context\n      # This is typically made before hand and is used with the dependencies section\n      # to make the context listed above.\n      buildrequires:\n        nodejs:\n          context: 30b713e6\n          filtered_rpms: []\n          koji_tag: module-nodejs-10-8030020210426100849-30b713e6\n          ref: 4589c1afe3ab66ffe6456b9b4af4cc981b1b7cdf\n          stream: 10\n          version: 8030020210426100849\n        platform:\n          context: 00000000\n          filtered_rpms: []\n          koji_tag: module-rocky-8.4.0-build\n          ref: virtual\n          stream: el8.4.0\n          stream_collision_modules:\n          ursine_rpms:\n          version: 2\n      commit: 53f7648dd6e54fb156b16302eb56bacf67a9024d\n      mse: TRUE\n      rpms:\n        389-ds-base:\n          ref: efe94eb32d597765f49b7b1528ba9881e1f29327\n      scmurl: https://git.rockylinux.org/staging/modules/389-ds?#53f7648dd6e54fb156b16302eb56bacf67a9024d\n      ursine_rpms: []\n  # Dependencies is part of the context combined with the xmd data. This data\n  # is already in the source yaml pulled for the module build in the first place.\n  # Note that in the source, it's usually `elX` rather than `elX.Y.Z` unless\n  # explicitly configured that way.\n  dependencies:\n  - buildrequires:\n      nodejs: [10]\n      platform: [el8.4.0]\n    requires:\n      platform: [el8]\n  filter:\n    rpms:\n    - cockpit-389-ds\n  components:\n    rpms:\n      389-ds-base:\n        rationale: Package in api\n        repository: git+https://git.rockylinux.org/staging/rpms/389-ds-base\n        cache: http://pkgs.fedoraproject.org/repo/pkgs/389-ds-base\n        ref: efe94eb32d597765f49b7b1528ba9881e1f29327\n        arches: [aarch64, ppc64le, s390x, x86_64]\n...\n

Below is a version meant to be imported into a repo. This is after the build's completion. You'll notice some fields that were missing from the version above, or even from the git repo source we originally pulled, and others that are now empty. You'll also notice that xmd is now an empty dictionary. This is on purpose: while the key is optional in repo module data, the build system typically sets it to {}.

---\ndocument: modulemd\nversion: 2\ndata:\n  name: 389-ds\n  stream: 1.4\n  version: 8040020210810203142\n  context: 866effaa\n  arch: x86_64\n  summary: 389 Directory Server (base)\n  description: >-\n    389 Directory Server is an LDAPv3 compliant server.  The base package includes\n    the LDAP server and command line utilities for server administration.\n  license:\n    module:\n    - MIT\n    content:\n    - GPLv3+\n  # This data is not an empty dictionary. It is required.\n  xmd: {}\n  dependencies:\n  - buildrequires:\n      nodejs: [10]\n      platform: [el8.4.0]\n    requires:\n      platform: [el8]\n  filter:\n    rpms:\n    - cockpit-389-ds\n  components:\n    rpms:\n      389-ds-base:\n        rationale: Package in api\n        ref: efe94eb32d597765f49b7b1528ba9881e1f29327\n        arches: [aarch64, ppc64le, s390x, x86_64]\n  artifacts:\n    rpms:\n    - 389-ds-base-0:1.4.3.16-19.module+el8.4.0+636+837ee950.src\n    - 389-ds-base-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - 389-ds-base-debuginfo-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - 389-ds-base-debugsource-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - 389-ds-base-devel-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - 389-ds-base-legacy-tools-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - 389-ds-base-legacy-tools-debuginfo-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - 389-ds-base-libs-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - 389-ds-base-libs-debuginfo-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - 389-ds-base-snmp-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - 389-ds-base-snmp-debuginfo-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n    - python3-lib389-0:1.4.3.16-19.module+el8.4.0+636+837ee950.noarch\n...\n
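
Before a document like this is imported anywhere, it can be sanity-checked with the modulemd-validator tool that ships with libmodulemd (assuming it is installed and the rendered file is named modules.yaml):

# validate the rendered module document against the modulemd specification\nmodulemd-validator modules.yaml\n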

The final \"repo\" of modules (per arch) is eventually made with a designation like:

module-NAME-STREAM-VERSION-CONTEXT\n\nmodule-389-ds-1.4-8040020210810203142-866effaa\n

This is what pungi and other utilities generally bring in and then combine into a single repo, taking care of the module.yaml.

"},{"location":"rpm/local_module_builds/#default-modules","title":"Default Modules","text":"

Most modules have a set default that is expected when a dnf install is called. For example, in EL8, if you run dnf install postgresql-server, the package that gets installed is version 10. If a module doesn't have a default set, a dnf install will traditionally not work. To ensure a module package installs without the module having to be enabled, and so that the default is used, you need default information. Here's the postgresql example.

---\ndocument: modulemd-defaults\nversion: 1\ndata:\n  module: postgresql\n  stream: 10\n  profiles:\n    9.6: [server]\n    10: [server]\n    12: [server]\n    13: [server]\n...\n

Even if a module only has one stream, default module information is still needed to ensure that a package can be installed without enabling the module explicitly. Here's an example.

---\ndocument: modulemd-defaults\nversion: 1\ndata:\n  module: httpd\n  stream: 2.4\n  profiles:\n    2.4: [common]\n...\n

This type of information is expected by pungi as a default modules repo that can be configured. These YAMLs do not live with the modules themselves; they are brought in when the repos are created in the first place.

In the context of lazybuilder, defaults are checked for (if enabled), and the final repo made from the results will have this information right at the top. See the references below for the jinja template that lazybuilder uses to generate this information.

As a final note, let's say an update comes in for postgresql and you want to ensure that the old version of postgresql 10 and the updated version of 10 can coexist. This is when the final module data is combined and then added into the repo using modifyrepo_c. Note, though, that you do not have to provide the modulemd-defaults again; having it once, such as the first time you made the repo, is enough and it will still work.
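
As a quick sanity check after the defaults have been imported into a repo that dnf can already see (a hedged sketch, reusing the postgresql example):

# the default stream is marked with [d] in the output\ndnf module list postgresql\n\n# this should resolve to the default stream without enabling the module first\ndnf install postgresql-server\n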

"},{"location":"rpm/local_module_builds/#building-the-packages","title":"Building the packages","text":"

So we have an idea of how the module data itself is made and managed. All that's left to do is a chain build in mock. The kicker is that you need to pay attention to the build order assigned to each package being built. If a build order isn't assigned, assume it's group 0 and will be built first. Group 0 can still be assigned explicitly; omitting buildorder simply implies group 0. See below.

    components:\n        rpms:\n            first:\n                rationale: core functions\n                ref: 3.0\n                buildorder: 0\n            second:\n                rationale: ui\n                ref: latest\n                buildorder: 0\n            third:\n                rationale: front end\n                ref: latest\n                buildorder: 1\n

What this shows is that the packages in build group 0 can be built simultaneously in the context of koji+MBS. For a local build, you'd just put them first in the list. Basically, each of these groups has to be completed and available before the next package or set of packages. Koji+MBS handle this automatically, since they have a tag/repo that gets updated on each completion and the builds are done in parallel.

For mock, a chain build will always have an internal repo that it uses, so each completed package will have a final createrepo done on it before moving on to the next package in the list. It's not parallel like koji, but it's still consistent.

Essentially a mock command would look like:

mock -r module.cfg \\\n  --chain \\\n  --localrepo /var/lib/mock/modulename \\\n  first.src.rpm \\\n  second.src.rpm \\\n  third.src.rpm\n
"},{"location":"rpm/local_module_builds/#making-the-final-yaml-and-repo","title":"Making the final YAML and repo","text":"

It's probably wise to have a template for generating the module repo data, much like having a script to properly "transmodrify" the module data in the first place. A template simplifies converting the data from git, along with the final build artifacts, into the module data that ends up in the repo. The lazybuilder template is a good starting point, though it is a bit ugly, being written in jinja; it could be improved using python or even golang.

Regardless, you should have it templated or scripted somehow. See the references in the next section.
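
As a rough sketch of that final step (paths and file names are assumptions), once the chain build has finished and the module YAML has been rendered from the template, the repo is created and the module metadata injected:

# create the repo from the chain build results\ncreaterepo_c /var/lib/mock/modulename/results\n\n# inject the rendered module (and defaults) YAML as modules metadata\nmodifyrepo_c --mdtype=modules modules.yaml /var/lib/mock/modulename/results/repodata\n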

"},{"location":"rpm/local_module_builds/#a-note-about-virtual-modules","title":"A note about virtual modules","text":"

Virtual modules are weird. They do not have a module dist tag, and they are built like... any other RPM. The difference is that while a virtual module should have an api['rpms'] list, it will not have an artifacts section.

A huge example of this is perl:5.26 in EL8; perl 5.26 is the default version. If you install perl-interpreter, you'll get perl-interpreter-5.26.3-419.el8_4.1.x86_64. Notice how it doesn't have a module tag? That's because it wasn't built directly in MBS. There are not many virtual modules, but it's important to keep in mind that they do in fact exist. The module yaml itself will not have a list of packages to build, aka a "components" section. Here's the current EL8 perl 5.26 example.

document: modulemd\nversion: 2\ndata:\n    summary: Practical Extraction and Report Language\n    description: >\n        Perl is a high-level programming language with roots in C, sed, awk\n        and shell scripting. Perl is good at handling processes and files, and\n        is especially good at handling text. Perl's hallmarks are practicality\n        and efficiency. While it is used to do a lot of different things,\n        Perl's most common applications are system administration utilities\n        and web programming.\n    license:\n        module: [ MIT ]\n    dependencies:\n        - buildrequires:\n              platform: [el8]\n          requires:\n              platform: [el8]\n    references:\n        community: https://docs.pagure.org/modularity/\n    profiles:\n        common:\n            description: Interpreter and all Perl modules bundled within upstream Perl.\n            rpms:\n                - perl\n        minimal:\n            description: Only the interpreter as a standalone executable.\n            rpms:\n                - perl-interpreter\n    api:\n        rpms:\n            - perl\n            - perl-Archive-Tar\n            - perl-Attribute-Handlers\n            - perl-autodie\n            - perl-B-Debug\n            - perl-bignum\n            - perl-Carp\n            - perl-Compress-Raw-Bzip2\n            - perl-Compress-Raw-Zlib\n            - perl-Config-Perl-V\n            - perl-constant\n            - perl-CPAN\n            - perl-CPAN-Meta\n            - perl-CPAN-Meta-Requirements\n            - perl-CPAN-Meta-YAML\n            - perl-Data-Dumper\n            - perl-DB_File\n            - perl-devel\n            - perl-Devel-Peek\n            - perl-Devel-PPPort\n            - perl-Devel-SelfStubber\n            - perl-Digest\n            - perl-Digest-MD5\n            - perl-Digest-SHA\n            - perl-Encode\n            - perl-Encode-devel\n            - perl-encoding\n            - perl-Env\n            - perl-Errno\n            - perl-experimental\n            - perl-Exporter\n            - perl-ExtUtils-CBuilder\n            - perl-ExtUtils-Command\n            - perl-ExtUtils-Embed\n            - perl-ExtUtils-Install\n            - perl-ExtUtils-MakeMaker\n            - perl-ExtUtils-Manifest\n            - perl-ExtUtils-Miniperl\n            - perl-ExtUtils-MM-Utils\n            - perl-ExtUtils-ParseXS\n            - perl-File-Fetch\n            - perl-File-Path\n            - perl-File-Temp\n            - perl-Filter\n            - perl-Filter-Simple\n            - perl-generators\n            - perl-Getopt-Long\n            - perl-HTTP-Tiny\n            - perl-interpreter\n            - perl-IO\n            - perl-IO-Compress\n            - perl-IO-Socket-IP\n            - perl-IO-Zlib\n            - perl-IPC-Cmd\n            - perl-IPC-SysV\n            - perl-JSON-PP\n            - perl-libnet\n            - perl-libnetcfg\n            - perl-libs\n            - perl-Locale-Codes\n            - perl-Locale-Maketext\n            - perl-Locale-Maketext-Simple\n            - perl-macros\n            - perl-Math-BigInt\n            - perl-Math-BigInt-FastCalc\n            - perl-Math-BigRat\n            - perl-Math-Complex\n            - perl-Memoize\n            - perl-MIME-Base64\n            - perl-Module-CoreList\n            - perl-Module-CoreList-tools\n            - perl-Module-Load\n            - perl-Module-Load-Conditional\n            - perl-Module-Loaded\n            - perl-Module-Metadata\n            - perl-Net-Ping\n        
    - perl-open\n            - perl-Params-Check\n            - perl-parent\n            - perl-PathTools\n            - perl-Perl-OSType\n            - perl-perlfaq\n            - perl-PerlIO-via-QuotedPrint\n            - perl-Pod-Checker\n            - perl-Pod-Escapes\n            - perl-Pod-Html\n            - perl-Pod-Parser\n            - perl-Pod-Perldoc\n            - perl-Pod-Simple\n            - perl-Pod-Usage\n            - perl-podlators\n            - perl-Scalar-List-Utils\n            - perl-SelfLoader\n            - perl-Socket\n            - perl-Storable\n            - perl-Sys-Syslog\n            - perl-Term-ANSIColor\n            - perl-Term-Cap\n            - perl-Test\n            - perl-Test-Harness\n            - perl-Test-Simple\n            - perl-tests\n            - perl-Text-Balanced\n            - perl-Text-ParseWords\n            - perl-Text-Tabs+Wrap\n            - perl-Thread-Queue\n            - perl-threads\n            - perl-threads-shared\n            - perl-Time-HiRes\n            - perl-Time-Local\n            - perl-Time-Piece\n            - perl-Unicode-Collate\n            - perl-Unicode-Normalize\n            - perl-utils\n            - perl-version\n    # We do not build any packages because they are already available\n    # in BaseOS or AppStream repository. We cannnot replace BaseOS\n    # packages.\n    #components:\n    #    rpms:\n
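
One rough way to spot a virtual-module package on a running EL8 system (a hedged sketch, reusing the perl example above):

# the NVR carries a plain el8-style dist tag, with no .module in the release\nrpm -q perl-interpreter\n\n# dnf still reports perl as a module, with 5.26 as the default [d] stream\ndnf module list perl\n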
"},{"location":"rpm/local_module_builds/#reference","title":"Reference","text":"

Below is a reference for what's in a module's data. Some keys are optional. There'll also be an example from lazybuilder, which uses jinja to template out the final data that is used in a repo.

"},{"location":"rpm/local_module_builds/#module-template-and-known-keys","title":"Module Template and Known Keys","text":"

Below are the keys that are expected in the YAML for both defaults and the actual module build itself. Each item has information on the type of value it is (e.g., string or list) and whether it's optional or mandatory, plus comments that may point out what's valid in source data rather than final repo data. Some of the data below may not be used in EL, but it's important to know what is possible and what could be expected.

This information was copied from: Fedora Modularity

# Document type identifier\n# `document: modulemd-defaults` describes the default stream and profiles for\n# a module.\ndocument: modulemd-defaults\n# Module metadata format version\nversion: 1\ndata:\n    # Module name that the defaults are for, required.\n    module: foo\n    # A 64-bit unsigned integer. Use YYYYMMDDHHMM to easily identify the last\n    # modification time. Use UTC for consistency.\n    # When merging, entries with a newer 'modified' value will override any\n    # earlier values. (optional)\n    modified: 201812071200\n    # Module stream that is the default for the module, optional.\n    stream: \"x.y\"\n    # Module profiles indexed by the stream name, optional\n    # This is a dictionary of stream names to a list of default profiles to be\n    # installed.\n    profiles:\n        'x.y': []\n        bar: [baz, snafu]\n    # System intents dictionary, optional. Indexed by the intent name.\n    # Overrides stream/profiles for intent.\n    intents:\n        desktop:\n            # Module stream that is the default for the module, required.\n            # Overrides the above values for systems with this intent.\n            stream: \"y.z\"\n            # Module profiles indexed by the stream name, required\n            # Overrides the above values for systems with this intent.\n            # From the above, foo:x.y has \"other\" as the value and foo:bar has\n            # no default profile.\n            profiles:\n                'y.z': [blah]\n                'x.y': [other]\n        server:\n            # Module stream that is the default for the module, required.\n            # Overrides the above values for systems with this intent.\n            stream: \"x.y\"\n            # Module profiles indexed by the stream name, required\n            # Overrides the above values for systems with this intent.\n            # From the above foo:x.y and foo:bar have no default profile.\n            profiles:\n                'x.y': []\n

Note: The glossary explains this, but remember that AUTOMATIC means the value will typically not be in the module source data itself and will likely appear in the repo data instead. There are also spots where things are marked MANDATORY but still do not show up in a lot of modules, because the implicit/default option turns that section off.

Note: A large chunk of these keys and values are marked AUTOMATIC yet do show up in the module data, as a result of the module data source and/or the build system doing its work. arch is one example, among others.

##############################################################################\n# Glossary:                                                                  #\n#                                                                            #\n# build system: The process by which a module is built and packaged. In many #\n# cases, this will be the Module Build Service tool, but this term is used   #\n# as a catch-all to describe any mechanism for producing a yum repository    #\n# containing modular content from input module metadata files.               #\n#                                                                            #\n#                                                                            #\n# == Attribute Types ==                                                      #\n#                                                                            #\n# MANDATORY: Attributes of this type must be filled in by the packager of    #\n# this module. They must also be preserved and provided in the output        #\n# metadata produced by the build system for inclusion into a repository.     #\n#                                                                            #\n# OPTIONAL: Attributes of this type may be provided by the packager of this  #\n# module, when appropriate. If they are provided, they must also be          #\n# preserved and provided in the output metadata produced by the build        #\n# system for inclusion into a repository.                                    #\n#                                                                            #\n# AUTOMATIC: Attributes of this type must be present in the repository       #\n# metadata, but they may be left unspecified by the packager. In this case,  #\n# the build system is responsible for generating an appropriate value for    #\n# the attribute and including it in the repository metadata. If the packager #\n# specifies this attribute explicitly, it must be preserved and provided in  #\n# the output metadata for inclusion into a repository.                       #\n#                                                                            #\n# The definitions above describe the expected behavior of the build system   #\n# operating in its default configuration. It is permissible for the build    #\n# system to override user-provided entries through non-default operating     #\n# modes. If such changes are made, all items indicated as being required for #\n# the output repository must still be present.                               #\n##############################################################################\n\n\n# Document type identifier\n# `document: modulemd` describes the contents of a module stream\ndocument: modulemd\n\n# Module metadata format version\nversion: 2\n\ndata:\n    # name:\n    # The name of the module\n    # Filled in by the build system, using the VCS repository name as the name\n    # of the module.\n    #\n    # Type: AUTOMATIC\n    #\n    # Mandatory for module metadata in a yum/dnf repository.\n    name: foo\n\n    # stream:\n    # Module update stream\n    # Filled in by the buildsystem, using the VCS branch name as the name of\n    # the stream.\n    #\n    # Type: AUTOMATIC\n    #\n    # Mandatory for module metadata in a yum/dnf repository.\n    stream: \"latest\"\n\n    # version:\n    # Module version, 64-bit unsigned integer\n    # If this value is unset (or set to zero), it will be filled in by the\n    # buildsystem, using the VCS commit timestamp.  
Module version defines the\n    # upgrade path for the particular update stream.\n    #\n    # Type: AUTOMATIC\n    #\n    # Mandatory for module metadata in a yum/dnf repository.\n    version: 20160927144203\n\n    # context:\n    # Module context flag\n    # The context flag serves to distinguish module builds with the\n    # same name, stream and version and plays an important role in\n    # automatic module stream name expansion.\n    #\n    # If 'static_context' is unset or equal to FALSE:\n    #   Filled in by the buildsystem.  A short hash of the module's name,\n    #   stream, version and its expanded runtime dependencies. The exact\n    #   mechanism for generating the hash is unspecified.\n    #\n    #   Type: AUTOMATIC\n    #\n    #   Mandatory for module metadata in a yum/dnf repository.\n    #\n    # If 'static_context' is set to True:\n    #   The context flag is a string of up to thirteen [a-zA-Z0-9_] characters\n    #   representing a build and runtime configuration for this stream. This\n    #   string is arbitrary but must be unique in this module stream.\n    #\n    #   Type: MANDATORY\n    static_context: false\n    context: c0ffee43\n\n    # arch:\n    # Module artifact architecture\n    # Contains a string describing the module's artifacts' main hardware\n    # architecture compatibility, distinguishing the module artifact,\n    # e.g. a repository, from others with the same name, stream, version and\n    # context.  This is not a generic hardware family (i.e. basearch).\n    # Examples: i386, i486, armv7hl, x86_64\n    # Filled in by the buildsystem during the compose stage.\n    #\n    # Type: AUTOMATIC\n    #\n    # Mandatory for module metadata in a yum/dnf repository.\n    arch: x86_64\n\n    # summary:\n    # A short summary describing the module\n    #\n    # Type: MANDATORY\n    #\n    # Mandatory for module metadata in a yum/dnf repository.\n    summary: An example module\n\n    # description:\n    # A verbose description of the module\n    #\n    # Type: MANDATORY\n    #\n    # Mandatory for module metadata in a yum/dnf repository.\n    description: >-\n        A module for the demonstration of the metadata format. Also,\n        the obligatory lorem ipsum dolor sit amet goes right here.\n\n    # servicelevels:\n    # Service levels\n    # This is a dictionary of important dates (and possibly supplementary data\n    # in the future) that describes the end point of certain functionality,\n    # such as the date when the module will transition to \"security fixes only\"\n    # or go completely end-of-life.\n    # Filled in by the buildsystem.  Service level names might have special\n    # meaning to other systems.  
Defined externally.\n    #\n    # Type: AUTOMATIC\n    servicelevels:\n        rawhide:\n            # EOL dates are the ISO 8601 format.\n            eol: 2077-10-23\n        stable_api:\n            eol: 2077-10-23\n        bug_fixes:\n            eol: 2077-10-23\n        security_fixes:\n            eol: 2077-10-23\n\n    # license:\n    # Module and content licenses in the Fedora license identifier\n    # format\n    #\n    # Type: MANDATORY\n    license:\n        # module:\n        # Module license\n        # This list covers licenses used for the module metadata and\n        # possibly other files involved in the creation of this specific\n        # module.\n        #\n        # Type: MANDATORY\n        module:\n            - MIT\n\n        # content:\n        # Content license\n        # A list of licenses used by the packages in the module.\n        # This should be populated by build tools, not the module author.\n        #\n        # Type: AUTOMATIC\n        #\n        # Mandatory for module metadata in a yum/dnf repository.\n        content:\n            - ASL 2.0\n            - GPL+ or Artistic\n\n    # xmd:\n    # Extensible metadata block\n    # A dictionary of user-defined keys and values.\n    # Defaults to an empty dictionary.\n    #\n    # Type: OPTIONAL\n    xmd:\n        some_key: some_data\n\n    # dependencies:\n    # Module dependencies, if any\n    # A list of dictionaries describing build and runtime dependencies\n    # of this module.  Each list item describes a combination of dependencies\n    # this module can be built or run against.\n    # Dependency keys are module names, dependency values are lists of\n    # required streams.  The lists can be both inclusive (listing compatible\n    # streams) or exclusive (accepting every stream except for those listed).\n    # An empty list implies all active existing streams are supported.\n    # Requiring multiple streams at build time will result in multiple\n    # builds.  Requiring multiple streams at runtime implies the module\n    # is compatible with all of them.  If the same module streams are listed\n    # in both the build time and the runtime block, the build tools translate\n    # the runtime block so that it matches the stream the module was built\n    # against.  Multiple builds result in multiple output modulemd files.\n    # See below for an example.\n    # The example below illustrates how to build the same module in four\n    # different ways, with varying build time and runtime dependencies.\n    #\n    # Type: OPTIONAL\n    dependencies:\n        # Build on all available platforms except for f27, f28 and epel7\n        # After build, the runtime dependency will match the one used for\n        # the build.\n        - buildrequires:\n              platform: [-f27, -f28, -epel7]\n          requires:\n              platform: [-f27, -f28, -epel7]\n\n        # For platform:f27 perform two builds, one with buildtools:v1, another\n        # with buildtools:v2 in the buildroot.  Both will also utilize\n        # compatible:v3.  At runtime, buildtools isn't required and either\n        # compatible:v3 or compatible:v4 can be installed.\n        - buildrequires:\n              platform: [f27]\n              buildtools: [v1, v2]\n              compatible: [v3]\n          requires:\n              platform: [f27]\n              compatible: [v3, v4]\n\n        # For platform:f28 builds, require either runtime:a or runtime:b at\n        # runtime.  
Only one build is performed.\n        - buildrequires:\n              platform: [f28]\n          requires:\n              platform: [f28]\n              runtime: [a, b]\n\n        # For platform:epel7, build against against all available extras\n        # streams and moreextras:foo and moreextras:bar.  The number of builds\n        # in this case will be 2 * <the number of extras streams available>.\n        # At runtime, both extras and moreextras will match whatever stream was\n        # used for build.\n        - buildrequires:\n              platform: [epel7]\n              extras: []\n              moreextras: [foo, bar]\n          requires:\n              platform: [epel7]\n              extras: []\n              moreextras: [foo, bar]\n\n    # references:\n    # References to external resources, typically upstream\n    #\n    # Type: OPTIONAL\n    references:\n        # community:\n        # Upstream community website, if it exists\n        #\n        # Type: OPTIONAL\n        community: http://www.example.com/\n\n        # documentation:\n        # Upstream documentation, if it exists\n        #\n        # Type: OPTIONAL\n        documentation: http://www.example.com/\n\n        # tracker:\n        # Upstream bug tracker, if it exists\n        #\n        # Type: OPTIONAL\n        tracker: http://www.example.com/\n\n    # profiles:\n    # Profiles define the end user's use cases for the module. They consist of\n    # package lists of components to be installed by default if the module is\n    # enabled. The keys are the profile names and contain package lists by\n    # component type. There are several profiles defined below. Suggested\n    # behavior for package managers is to just enable repository for selected\n    # module. Then users are able to install packages on their own. If they\n    # select a specific profile, the package manager should install all\n    # packages of that profile.\n    # Defaults to no profile definitions.\n    #\n    # Type: OPTIONAL\n    profiles:\n\n        # An example profile that defines a set of packages which are meant to\n        # be installed inside a container image artifact.\n        #\n        # Type: OPTIONAL\n        container:\n            rpms:\n                - bar\n                - bar-devel\n\n        # An example profile that delivers a minimal set of packages to\n        # provide this module's basic functionality. 
This is meant to be used\n        # on target systems where size of the distribution is a real concern.\n        #\n        # Type: Optional\n        minimal:\n            # A verbose description of the module, optional\n            description: Minimal profile installing only the bar package.\n            rpms:\n                - bar\n\n        # buildroot:\n        # This is a special reserved profile name.\n        #\n        # This provides a listing of packages that will be automatically\n        # installed into the buildroot of all component builds that are started\n        # after a component builds with its `buildroot: True` option set.\n        #\n        # The primary purpose of this is for building RPMs that change\n        # the build environment, such as those that provide new RPM\n        # macro definitions that can be used by subsequent builds.\n        #\n        # Specifically, it is used to flesh out the build group in koji.\n        #\n        # Type: OPTIONAL\n        buildroot:\n            rpms:\n                - bar-devel\n\n        # srpm-buildroot:\n        # This is a special reserved profile name.\n        #\n        # This provides a listing of packages that will be automatically\n        # installed into the buildroot of all component builds that are started\n        # after a component builds with its `srpm-buildroot: True` option set.\n        #\n        # The primary purpose of this is for building RPMs that change\n        # the build environment, such as those that provide new RPM\n        # macro definitions that can be used by subsequent builds.\n        #\n        # Very similar to the buildroot profile above, this is used by the\n        # build system to specify any additional packages which should be\n        # installed during the buildSRPMfromSCM step in koji.\n        #\n        # Type: OPTIONAL\n        srpm-buildroot:\n            rpms:\n                - bar-extras\n\n    # api:\n    # Module API\n    # Defaults to no API.\n    #\n    # Type: OPTIONAL\n    api:\n        # rpms:\n        # The module's public RPM-level API.\n        # A list of binary RPM names that are considered to be the\n        # main and stable feature of the module; binary RPMs not listed\n        # here are considered \"unsupported\" or \"implementation details\".\n        # In the example here we don't list the xyz package as it's only\n        # included as a dependency of xxx.  However, we list a subpackage\n        # of bar, bar-extras.\n        # Defaults to an empty list.\n        #\n        # Type: OPTIONAL\n        rpms:\n            - bar\n            - bar-extras\n            - bar-devel\n            - baz\n            - xxx\n\n    # filter:\n    # Module component filters\n    # Defaults to no filters.\n    #\n    # Type: OPTIONAL\n    filter:\n        # rpms:\n        # RPM names not to be included in the module.\n        # By default, all built binary RPMs are included.  In the example\n        # we exclude a subpackage of bar, bar-nonfoo from our module.\n        # Defaults to an empty list.\n        #\n        # Type: OPTIONAL\n        rpms:\n            - baz-nonfoo\n\n    # demodularized:\n    # Artifacts which became non-modular\n    # Defaults to no demodularization.\n    # Type: OPTIONAL\n    demodularized:\n        # rpms:\n        # A list of binary RPM package names which where removed from\n        # a module. 
This list explains to a package mananger that the packages\n        # are not part of the module anymore and up-to-now same-named masked\n        # non-modular packages should become available again. This enables\n        # moving a package from a module to a set of non-modular packages. The\n        # exact implementation of the demodularization (e.g. whether it\n        # applies to all modules or only to this stream) is defined by the\n        # package manager.\n        # Defaults to an empty list.\n        #\n        # Type: OPTIONAL\n        rpms:\n            - bar-old\n\n    # buildopts:\n    # Component build options\n    # Additional per component type module-wide build options.\n    #\n    # Type: OPTIONAL\n    buildopts:\n        # rpms:\n        # RPM-specific build options\n        #\n        # Type: OPTIONAL\n        rpms:\n            # macros:\n            # Additional macros that should be defined in the\n            # RPM buildroot, appended to the default set.  Care should be\n            # taken so that the newlines are preserved.  Literal style\n            # block is recommended, with or without the trailing newline.\n            #\n            # Type: OPTIONAL\n            macros: |\n                %demomacro 1\n                %demomacro2 %{demomacro}23\n\n            # whitelist:\n            # Explicit list of package build names this module will produce.\n            # By default the build system only allows components listed under\n            # data.components.rpms to be built as part of this module.\n            # In case the expected RPM build names do not match the component\n            # names, the list can be defined here.\n            # This list overrides rather then just extends the default.\n            # List of package build names without versions.\n            #\n            # Type: OPTIONAL\n            whitelist:\n                - fooscl-1-bar\n                - fooscl-1-baz\n                - xxx\n                - xyz\n\n        # arches:\n        # Instructs the build system to only build the\n        # module on this specific set of architectures.\n        # Includes specific hardware architectures, not families.\n        # See the data.arch field for details.\n        # Defaults to all available arches.\n        #\n        # Type: OPTIONAL\n        arches: [i686, x86_64]\n\n    # components:\n    # Functional components of the module\n    #\n    # Type: OPTIONAL\n    components:\n        # rpms:\n        # RPM content of the module\n        # Keys are the VCS/SRPM names, values dictionaries holding\n        # additional information.\n        #\n        # Type: OPTIONAL\n        rpms:\n            bar:\n                # name:\n                # The real name of the package, if it differs from the key in\n                # this dictionary. 
Used when bootstrapping to build a\n                # bootstrapping ref before building the package for real.\n                #\n                # Type: OPTIONAL\n                name: bar-real\n\n                # rationale:\n                # Why is this component present.\n                # A simple, free-form string.\n                #\n                # Type: MANDATORY\n                rationale: We need this to demonstrate stuff.\n\n                # repository:\n                # Use this repository if it's different from the build\n                # system configuration.\n                #\n                # Type: AUTOMATIC\n                repository: https://pagure.io/bar.git\n\n                # cache:\n                # Use this lookaside cache if it's different from the\n                # build system configuration.\n                #\n                # Type: AUTOMATIC\n                cache: https://example.com/cache\n\n                # ref:\n                # Use this specific commit hash, branch name or tag for\n                # the build.  If ref is a branch name, the branch HEAD\n                # will be used.  If no ref is given, the master branch\n                # is assumed.\n                #\n                # Type: AUTOMATIC\n                ref: 26ca0c0\n\n                # buildafter:\n                # Use the \"buildafter\" value to specify that this component\n                # must be be ordered later than some other entries in this map.\n                # The values of this array come from the keys of this map and\n                # not the real component name to enable bootstrapping.\n                # Use of both buildafter and buildorder in the same document is\n                # prohibited, as they will conflict.\n                #\n                # Note: The use of buildafter is not currently supported by the\n                # Fedora module build system.\n                #\n                # Type: AUTOMATIC\n                #\n                # buildafter:\n                #    - baz\n\n                # buildonly:\n                # Use the \"buildonly\" value to indicate that all artifacts\n                # produced by this component are intended only for building\n                # this component and should be automatically added to the\n                # data.filter.rpms list after the build is complete.\n                # Defaults to \"false\" if not specified.\n                #\n                # Type: AUTOMATIC\n                buildonly: false\n\n            # baz builds RPM macros for the other components to use\n            baz:\n                rationale: Demonstrate updating the buildroot contents.\n\n                # buildroot:\n                # If buildroot is set to True, the packages listed in this\n                # module's 'buildroot' profile will be installed into the\n                # buildroot of any component built in buildorder/buildafter\n                # batches begun after this one, without requiring that those\n                # packages are listed among BuildRequires.\n                #\n                # The primary purpose of this is for building RPMs that change\n                # the build environment, such as those that provide new RPM\n                # macro definitions that can be used by subsequent builds.\n                #\n                # Defaults to \"false\" if not specified.\n                #\n                # Type: OPTIONAL\n                buildroot: true\n\n                # 
srpm-buildroot:\n                # If srpm-buildroot is set to True, the packages listed in this\n                # module's 'srpm-buildroot' profile will be installed into the\n                # buildroot of any component built in buildorder/buildafter\n                # batches begun after this one, without requiring that those\n                # packages are listed among BuildRequires.\n                #\n                # The primary purpose of this is for building RPMs that change\n                # the build environment, such as those that provide new RPM\n                # macro definitions that can be used by subsequent builds.\n                #\n                # Defaults to \"false\" if not specified.\n                #\n                # Type: OPTIONAL\n                srpm-buildroot: true\n\n                # See component xyz for a complete description of buildorder\n                #\n                # build this component before any others so that the macros it\n                # creates are available to all of them.\n                buildorder: -1\n\n            xxx:\n                rationale: xxx demonstrates arches and multilib.\n\n                # arches:\n                # xxx is only available on the listed architectures.\n                # Includes specific hardware architectures, not families.\n                # See the data.arch field for details.\n                # Instructs the build system to only build the\n                # component on this specific set of architectures.\n                # If data.buildopts.arches is also specified,\n                # this must be a subset of those architectures.\n                # Defaults to all available arches.\n                #\n                # Type: AUTOMATIC\n                arches: [i686, x86_64]\n\n                # multilib:\n                # A list of architectures with multilib\n                # installs, i.e. both i686 and x86_64\n                # versions will be installed on x86_64.\n                # Includes specific hardware architectures, not families.\n                # See the data.arch field for details.\n                # Defaults to no multilib.\n                #\n                # Type: AUTOMATIC\n                multilib: [x86_64]\n\n            xyz:\n                rationale: xyz is a bundled dependency of xxx.\n\n                # buildorder:\n                # Build order group\n                # When building, components are sorted by build order tag\n                # and built in batches grouped by their buildorder value.\n                # Built batches are then re-tagged into the buildroot.\n                # Multiple components can have the same buildorder index\n                # to map them into build groups.\n                # Defaults to zero.\n                # Integer, from an interval [-(2^63), +2^63-1].\n                # In this example, bar, baz and xxx are built first in\n                # no particular order, then tagged into the buildroot,\n                # then, finally, xyz is built.\n                # Use of both buildafter and buildorder in the same document is\n                # prohibited, as they will conflict.\n                #\n                # Type: OPTIONAL\n                buildorder: 10\n\n        # modules:\n        # Module content of this module\n        # Included modules are built in the shared buildroot, together with\n        # other included content.  Keys are module names, values additional\n        # component information.  
Note this only includes components and their\n        # properties from the referenced module and doesn't inherit any\n        # additional module metadata such as the module's dependencies or\n        # component buildopts.  The included components are built in their\n        # defined buildorder as sub-build groups.\n        #\n        # Type: OPTIONAL\n        modules:\n            includedmodule:\n\n                # rationale:\n                # Why is this module included?\n                #\n                # Type: MANDATORY\n                rationale: Included in the stack, just because.\n\n                # repository:\n                # Link to VCS repository that contains the modulemd file\n                # if it differs from the buildsystem default configuration.\n                #\n                # Type: AUTOMATIC\n                repository: https://pagure.io/includedmodule.git\n\n                # ref:\n                # See the rpms ref.\n                #\n                # Type: AUTOMATIC\n                ref: somecoolbranchname\n\n                # buildorder:\n                # See the rpms buildorder.\n                #\n                # Type: AUTOMATIC\n                buildorder: 100\n\n    # artifacts:\n    # Artifacts shipped with this module\n    # This section lists binary artifacts shipped with the module, allowing\n    # software management tools to handle module bundles.  This section is\n    # populated by the module build system.\n    #\n    # Type: AUTOMATIC\n    artifacts:\n\n        # rpms:\n        # RPM artifacts shipped with this module\n        # A set of NEVRAs associated with this module. An epoch number in the\n        # NEVRA string is mandatory.\n        #\n        # Type: AUTOMATIC\n        rpms:\n            - bar-0:1.23-1.module_deadbeef.x86_64\n            - bar-devel-0:1.23-1.module_deadbeef.x86_64\n            - bar-extras-0:1.23-1.module_deadbeef.x86_64\n            - baz-0:42-42.module_deadbeef.x86_64\n            - xxx-0:1-1.module_deadbeef.x86_64\n            - xxx-0:1-1.module_deadbeef.i686\n            - xyz-0:1-1.module_deadbeef.x86_64\n\n        # rpm-map:\n        # The rpm-map exists to link checksums from repomd to specific\n        # artifacts produced by this module. 
Any item in this list must match\n        # an entry in the data.artifacts.rpms section.\n        #\n        # Type: AUTOMATIC\n        rpm-map:\n\n          # The digest-type of this checksum.\n          #\n          # Type: MANDATORY\n          sha256:\n\n            # The checksum of the artifact being sought.\n            #\n            # Type: MANDATORY\n            ee47083ed80146eb2c84e9a94d0836393912185dcda62b9d93ee0c2ea5dc795b:\n\n              # name:\n              # The RPM name.\n              #\n              # Type: Mandatory\n              name: bar\n\n              # epoch:\n              # The RPM epoch.\n              # A 32-bit unsigned integer.\n              #\n              # Type: OPTIONAL\n              epoch: 0\n\n              # version:\n              # The RPM version.\n              #\n              # Type: MANDATORY\n              version: 1.23\n\n              # release:\n              # The RPM release.\n              #\n              # Type: MANDATORY\n              release: 1.module_deadbeef\n\n              # arch:\n              # The RPM architecture.\n              #\n              # Type: MANDATORY\n              arch: x86_64\n\n              # nevra:\n              # The complete RPM NEVRA.\n              #\n              # Type: MANDATORY\n              nevra: bar-0:1.23-1.module_deadbeef.x86_64\n
"},{"location":"rpm/local_module_builds/#module-template-and-keys-using-jinja","title":"Module Template and Keys using jinja","text":"
{% if module_default_data is defined %}\n---\ndocument: modulemd-defaults\nversion: {{ module_default_data.version }}\ndata:\n  module: {{ module_default_data.data.module }}\n  stream: {{ module_default_data.data.stream }}\n  profiles:\n{% for k in module_default_data.data.profiles %}\n    {{ k }}: [{{ module_default_data.data.profiles[k]|join(', ') }}]\n{% endfor %}\n...\n{% endif %}\n---\ndocument: {{ module_data.document }}\nversion: {{ module_data.version }}\ndata:\n  name: {{ source_name | default(\"source\") }}\n  stream: \"{{ module_data.data.stream }}\"\n  version: {{ module_version | default(8040) }}\n  context: {{ module_context | default('01010110') }}\n  arch: {{ mock_arch | default(ansible_architecture) }}\n  summary: {{ module_data.data.summary | wordwrap(width=79) | indent(width=4) }}\n  description: {{ module_data.data.description | wordwrap(width=79) | indent(width=4) }}\n  license:\n{% for (key, value) in module_data.data.license.items() %}\n    {{ key }}:\n    - {{ value | join('\\n    - ') }}\n{% endfor %}\n  xmd: {}\n{% if module_data.data.dependencies is defined %}\n  dependencies:\n{% for l in module_data.data.dependencies %}\n{% for r in l.keys() %}\n{% if loop.index == 1 %}\n  - {{ r }}:\n{% else %}\n    {{ r }}:\n{% endif %}\n{% for (m, n) in l[r].items() %}\n      {{ m }}: [{{ n | join(', ') }}]\n{% endfor %}\n{% endfor %}\n{% endfor %}\n{% endif %}\n{% if module_data.data.filter is defined %}\n  filter:\n{% for (key, value) in module_data.data.filter.items() %}\n    {{ key }}:\n    - {{ value | join('\\n    - ') }}\n{% endfor %}\n{% endif %}\n{% if module_data.data.profiles is defined %}\n  profiles:\n{% for (key, value) in module_data.data.profiles.items() %}\n    {{ key }}:\n{% for (key, value) in value.items() %}\n{% if value is iterable and (value is not string and value is not mapping) %}\n      {{ key | indent(width=6) }}:\n      - {{ value | join('\\n      - ') }}\n{% else %}\n      {{ key | indent(width=6) }}: {{ value }}\n{% endif %}\n{% endfor %}\n{% endfor %}\n{% endif %}\n{% if module_data.data.api is defined %}\n  api:\n{% for (key, value) in module_data.data.api.items() %}\n    {{ key }}:\n    - {{ value | join('\\n    - ') }}\n{% endfor %}\n{% endif %}\n{% if module_data.data.buildopts is defined %}\n  buildopts:\n{% for (key, value) in module_data.data.buildopts.items() %}\n    {{ key }}:\n{% for (key, value) in value.items() %}\n      {{ key }}: |\n        {{ value | indent(width=8) }}\n{% endfor %}\n{% endfor %}\n{% endif %}\n{% if module_data.data.references is defined %}\n  references:\n{% for (key, value) in module_data.data.references.items() %}\n    {{ key }}: {{ value }}\n{% endfor %}\n{% endif %}\n{% if module_data.data.components is defined %}\n  components:\n{% for (key, value) in module_data.data.components.items() %}\n    {{ key }}:\n{% for (key, value) in value.items() %}\n      {{ key }}:\n{% for (key, value) in value.items() %}\n{% if value is iterable and (value is not string and value is not mapping) %}\n        {{ key | indent(width=8) }}: [{{ value | join(', ') }}]\n{% else %}\n        {{ key | indent(width=8) }}: {{ value }}\n{% endif %}\n{% endfor %}\n{% endfor %}\n{% endfor %}\n{% endif %}\n{% if artifacts is defined %}\n  artifacts:\n{% for (key, value) in artifacts.items() %}\n    {{ key }}:\n    - {{ value | join('\\n    - ') }}\n{% endfor %}\n{% endif %}\n...\n
"},{"location":"sop/","title":"SOP (Standard Operationg Procedures)","text":"

This section covers the various SOPs for SIG/Core. Please use the menu items to find the pages of interest.

"},{"location":"sop/sop_compose/","title":"SOP: Compose and Repo Sync for Rocky Linux and Peridot","text":"

This SOP covers how the Rocky Linux Release Engineering Team handles composes and repository syncs for the distribution. It contains information on the scripts that are used, and in what order, depending on the use case.

"},{"location":"sop/sop_compose/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Email Contact infrastructure@rockylinux.org Mattermost Contacts @label @mustafa @neil @tgo Mattermost Channels ~Development"},{"location":"sop/sop_compose/#related-git-repositories","title":"Related Git Repositories","text":"

There are several git repositories used in the overall composition of a repository or a set of repositories.

Pungi - This repository contains all the necessary pungi configuration files that peridot translates into its own configuration. Pungi is no longer used for Rocky Linux.

Comps - This repository contains all the necessary comps (which are groups and other data) for a given major version. Peridot (and pungi) use this information to properly build repositories.

Toolkit - This repository contains various scripts and utilities used by Release Engineering, such as syncing composes, functionality testing, and mirror maintenance.

"},{"location":"sop/sop_compose/#composing-repositories","title":"Composing Repositories","text":""},{"location":"sop/sop_compose/#mount-structure","title":"Mount Structure","text":"

There is a designated system that takes care of composing repositories. This system contains the necessary EFS/NFS mounts for the staging and production repositories, as well as for composes.

"},{"location":"sop/sop_compose/#empanadas","title":"Empanadas","text":"

Each repository or set of repositories is controlled by various comps and pungi configurations that are translated into peridot. Empanadas is used to run a reposync from peridot's yumrepofs repositories, generate ISOs, and create a pungi compose look-alike. Because of this, the comps and pungi-rocky configurations are not referenced by empanadas.

"},{"location":"sop/sop_compose/#running-a-compose","title":"Running a Compose","text":"

First, the toolkit must be cloned. In the iso/empanadas directory, run poetry install. You'll then have access to the various commands needed:
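
A minimal setup sketch is below. The clone URL is a placeholder; use the actual toolkit repository referenced in the Related Git Repositories section above.

# Illustrative only: clone the release engineering toolkit and install empanadas' dependencies\ngit clone <toolkit-repo-url> toolkit\ncd toolkit/iso/empanadas\npoetry install\n# Sanity check that the entry points are available (assumes standard --help support)\npoetry run sync-from-peridot --help\n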

"},{"location":"sop/sop_compose/#full-compose","title":"Full Compose","text":"

To perform a full compose, the following order is expected (replace X with the major version or config profile):

# This creates a brand new directory under /mnt/compose/X and symlinks it to latest-Rocky-X\npoetry run sync-from-peridot --release X --hashed --repoclosure --full-run\n\n# On each architecture, this must be run to generate the lorax images\n# !! Use --rc if the image is a release candidate or a beta image\n# Note: This is typically done using kubernetes and uploaded to a bucket\npoetry run build-iso --release X --isolation=None\n\n# The images are pulled from the bucket\npoetry run pull-unpack-tree --release X\n\n# The extra ISOs (usually just DVD) are generated\n# !! Use --rc if the image is a release candidate or a beta image\n# !! Set --extra-iso-mode to mock if desired\n# !! If there is more than the dvd, remove --extra-iso dvd\npoetry run build-iso-extra --release X --extra-iso dvd --extra-iso-mode podman\n\n# This pulls the generic and EC2 cloud images\npoetry run pull-cloud-image --release X\n\n# This ensures everything is closed out for a release. This copies ISOs, images,\n# generates metadata, and the like.\n# !! DO NOT RUN DURING INCREMENTAL UPDATES !!\npoetry run finalize-compose --release X\n
"},{"location":"sop/sop_compose/#incremental-compose","title":"Incremental Compose","text":"

It is possible to compose individual repositories if you know which ones you want to sync. This can be done when the compose is not for a brand new release.

# Set your repos as desired. --arch is also acceptable.\n# --ignore-debug and --ignore-source are also acceptable options.\npoetry run sync-from-peridot --release X --hashed --clean-old-packages --repo X,Y,Z\n
"},{"location":"sop/sop_compose/#syncing-composes","title":"Syncing Composes","text":"

Syncing utilizes the sync scripts provided in the release engineering toolkit.

When the scripts are run, they are usually run with a specific purpose, as each major version may be different.

Below are the common vars files. common_X overrides what is in common. Typically, these set which repositories exist and how they are named or laid out at the top level. They also set the current major.minor release as necessary.

.\n\u251c\u2500\u2500 common\n\u251c\u2500\u2500 common_8\n\u251c\u2500\u2500 common_9\n

These are for the releases in general. What they do is noted below.

\u251c\u2500\u2500 gen-torrents.sh                  -> Generates torrents for images\n\u251c\u2500\u2500 minor-release-sync-to-staging.sh -> Syncs a minor release to staging\n\u251c\u2500\u2500 prep-staging-X.sh                -> Preps staging updates and signs repos (only for 8)\n\u251c\u2500\u2500 sign-repos-only.sh               -> Signs the repomd (only for 8)\n\u251c\u2500\u2500 sync-file-list-parallel.sh       -> Generates file lists in parallel for mirror sync scripts\n\u251c\u2500\u2500 sync-to-prod.sh                  -> Syncs staging to production\n\u251c\u2500\u2500 sync-to-prod.delete.sh           -> Syncs staging to production (deletes artifacts that are no longer in staging)\n\u251c\u2500\u2500 sync-to-prod-sig.sh              -> Syncs a sig provided compose to production\n\u251c\u2500\u2500 sync-to-staging.sh               -> Syncs a provided compose to staging\n\u251c\u2500\u2500 sync-to-staging.delete.sh        -> Syncs a provided compose to staging (deletes artifacts that are no longer in the compose)\n\u251c\u2500\u2500 sync-to-staging-sig.sh           -> Syncs a sig provided compose to staging\n

Generally, you will only run sync-to-staging.sh or sync-to-staging.delete.sh to sync; the former is for older releases, the latter for newer releases. Optionally, if you are syncing a \"beta\" or \"lookahead\" release, you will also need to set the RLREL variable to beta or lookahead, as shown below.

# The below syncs to staging for Rocky Linux 8\nRLVER=8 bash sync-to-staging.sh Rocky\n# The below syncs to staging for Rocky Linux 9\nRLVER=9 bash sync-to-staging.delete.sh Rocky\n
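
If a beta or lookahead compose is being synced, the same invocations apply with RLREL set as described above. This is a hedged sketch; verify the exact values the scripts expect before running.

# Hedged example: sync a beta compose to staging for Rocky Linux 9\nRLREL=beta RLVER=9 bash sync-to-staging.delete.sh Rocky\n# Hedged example: sync a lookahead compose to staging for Rocky Linux 9\nRLREL=lookahead RLVER=9 bash sync-to-staging.delete.sh Rocky\n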

Once the syncs are done, staging must be tested and vetted before being sent to production. Once staging is completed, it is synced to production.

# Set X to whatever release\nRLVER=X bash sync-to-prod.delete.sh\nbash sync-file-list-parallel.sh\n

During this phase, staging is rsynced with production, the file list is updated, and the full time list is also updated so that mirrors know the repositories have been updated and can sync.

Note: If multiple releases are being updated, it is important to run the syncs to completion before running the file list parallel script.
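
For example, when both major versions receive updates, a reasonable sequence (a sketch based on the commands above; pick the delete or non-delete variant appropriate for each release) is:

# Run each release's sync to production to completion first\nRLVER=8 bash sync-to-prod.sh\nRLVER=9 bash sync-to-prod.delete.sh\n# Only after all releases are synced, regenerate the file lists once\nbash sync-file-list-parallel.sh\n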

"},{"location":"sop/sop_compose_8/","title":"SOP: Compose and Repo Sync for Rocky Linux 8","text":"

This SOP covers how the Rocky Linux Release Engineering Team handles composes and repository syncs for Rocky Linux 8. It contains information on the scripts that are used, and in what order, depending on the use case.

Please see the other SOP for Rocky Linux 9+, which is managed via empanadas and peridot.

"},{"location":"sop/sop_compose_8/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Email Contact infrastructure@rockylinux.org Mattermost Contacts @label @mustafa @neil @tgo Mattermost Channels ~Development"},{"location":"sop/sop_compose_8/#related-git-repositories","title":"Related Git Repositories","text":"

There are several git repositories used in the overall composition of a repository or a set of repositories.

Pungi - This repository contains all the necessary pungi configuration files for composes that come from koji. Pungi interacts with koji to build the composes.

Comps - This repository contains all the necessary comps (which are groups and other data) for a given major version. Pungi uses this information to properly build the repositories.

Toolkit - This repository contains various scripts and utilities used by Release Engineering, such as syncing composes, functionality testing, and mirror maintenance.

"},{"location":"sop/sop_compose_8/#composing-repositories","title":"Composing Repositories","text":"

For every stable script, there is an equivalent beta or lookahead script available.

"},{"location":"sop/sop_compose_8/#mount-structure","title":"Mount Structure","text":"

There is a designated system that takes care of composing repositories. This system contains the necessary EFS/NFS mounts for the staging and production repositories, as well as for composes.

"},{"location":"sop/sop_compose_8/#pungi","title":"Pungi","text":"

Each repository or set of repositories is controlled by various pungi configurations. For example, r8.conf controls the absolute base of Rocky Linux 8 and imports other git repository data as well as accompanying JSON and other configuration files.

"},{"location":"sop/sop_compose_8/#running-a-compose","title":"Running a Compose","text":"

Inside the pungi git repository, the scripts folder contains the scripts that are run to perform a compose. There are different types of composes:

Each script is titled appropriately:

When these scripts are run, they generate an appropriate directory under /mnt/compose/X along with an accompanying symlink. For example, if an update to Rocky was made using updates-8.sh, the following would be created:

drwxr-xr-x. 5 root  root  6144 Jul 21 17:44 Rocky-8-updates-20210721.1\nlrwxrwxrwx. 1 root  root    26 Jul 21 18:26 latest-Rocky-8 -> Rocky-8-updates-20210721.1\n

This setup also allows pungi to reuse previous package set data to reduce the time it takes to build a compose. Typically during a new minor release, all composes should be run so they can be properly combined. An example of a typical order when releasing 8.X:

produce-8.sh\nupdates-8-devel.sh\nupdates-8-extras.sh\n\n# ! OR !\nproduce-8-full.sh\n
"},{"location":"sop/sop_compose_8/#syncing-composes","title":"Syncing Composes","text":"

Syncing utilizes the sync scripts provided in the release engineering toolkit.

When the scripts are run, they are usually run for a specific purpose. They are also run in a certain order to ensure the integrity and consistency of a release.

Below are the common vars files. common_X overrides what is in common. Typically, these set which repositories exist and how they are named or laid out at the top level. They also set the current major.minor release as necessary.

.\n\u251c\u2500\u2500 common\n\u251c\u2500\u2500 common_8\n\u251c\u2500\u2500 common_9\n

These are for the releases in general. What they do is noted below.

\u251c\u2500\u2500 gen-torrents.sh                  -> Generates torrents for images\n\u251c\u2500\u2500 minor-release-sync-to-staging.sh -> Syncs a minor release to staging\n\u251c\u2500\u2500 sign-repos-only.sh               -> Signs the repomd (only)\n\u251c\u2500\u2500 sync-to-prod.sh                  -> Syncs staging to production\n\u251c\u2500\u2500 sync-to-staging.sh               -> Syncs a provided compose to staging\n\u251c\u2500\u2500 sync-to-staging-sig.sh           -> Syncs a sig provided compose to staging\n

Generally, you will only run minor-release-sync-to-staging.sh when a full minor release is being produced. For example, if 8.5 has been built out, you would run it after a compose. gen-torrents.sh would be run shortly after, as sketched below.
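
A hedged sketch of that sequence for a hypothetical 8.5 release follows; the exact arguments these scripts expect are defined in the toolkit and should be checked before running.

# Hypothetical 8.5 example; check the script headers in the toolkit for required arguments\nRLVER=8 bash minor-release-sync-to-staging.sh\nbash gen-torrents.sh\n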

When doing updates, the order of operations (preferably) would be:

* sync-to-staging.sh\n* sync-to-staging-sig.sh -> Only if sigs are updated\n* sync-to-prod.sh        -> After the initial testing, it is sent to prod.\n

An example of order:

# The below syncs to staging\nRLVER=8 bash sync-to-staging.sh Extras\nRLVER=8 bash sync-to-staging.sh Rocky-devel\nRLVER=8 bash sync-to-staging.sh Rocky\n

Once the syncs are done, staging must be tested and vetted before being sent to production. During this stage, the updateinfo.xml is also applied where necessary to the repositories to provide errata. Once staging is completed, it is synced to production.

pushd /mnt/repos-staging/mirror/pub/rocky/8.X\npython3.9 /usr/local/bin/apollo_tree -p $(pwd) -n 'Rocky Linux 8 $arch' -i Live -i Minimal -i devel -i extras -i images -i isos -i live -i metadata -i Devel -i plus -i nfv\npopd\nRLVER=8 bash sign-repos-only.sh\nRLVER=8 bash sync-to-prod.sh\nbash sync-file-list-parallel.sh\n

During this phase, staging is rsynced with production, the file list is updated, and the full time list is also updated so that mirrors know the repositories have been updated and can sync.

Note: If multiple releases are being updated, it is important to run the syncs to completion before running the file list parallel script.

"},{"location":"sop/sop_compose_8/#quicker-composes","title":"Quicker Composes","text":"

On the designated compose box, there is a script that can do all of the incremental steps.

cd /root/cron\nbash stable-updates\n

The same goes for a full production.

bash stable\n
"},{"location":"sop/sop_compose_sig/","title":"SOP: Compose and Repo Sync for Rocky Linux Special Interest Groups","text":"

This SOP covers how the Rocky Linux Release Engineering Team handles composes and repository syncs for Special Interest Groups.

"},{"location":"sop/sop_compose_sig/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Email Contact infrastructure@rockylinux.org Mattermost Contacts @label @mustafa @neil @tgo Mattermost Channels ~Development"},{"location":"sop/sop_compose_sig/#composing-repositories","title":"Composing Repositories","text":""},{"location":"sop/sop_compose_sig/#mount-structure","title":"Mount Structure","text":"

There is a designated system that takes care of composing repositories. This system contains the necessary EFS/NFS mounts for the staging and production repositories, as well as for composes.

"},{"location":"sop/sop_compose_sig/#empanadas","title":"Empanadas","text":"

Each repository or set of repositories is controlled by various comps and pungi configurations that are translated into peridot. Empanadas is used to run a reposync from peridot's yumrepofs repositories, generate ISOs, and create a pungi compose look-alike. Because of this, the comps and pungi-rocky configurations are not referenced by empanadas.

"},{"location":"sop/sop_compose_sig/#running-a-compose","title":"Running a Compose","text":"

First, the toolkit must be cloned. In the iso/empanadas directory, run poetry install. You'll then have access to the various commands needed:

To perform a compose of a SIG, it must be defined in the configuration. As an example, here is how to compose the core SIG.

# This creates a brand new directory under /mnt/compose/X and symlinks it to latest-SIG-Y-X\n~/.local/bin/poetry run sync-sig --release 9 --sig core --hashed --clean-old-packages --full-run\n\n# This assumes the directories already exist and will update in place.\n~/.local/bin/poetry run sync-sig --release 9 --sig core --hashed --clean-old-packages\n
"},{"location":"sop/sop_compose_sig/#syncing-composes","title":"Syncing Composes","text":"

Syncing utilizes the sync scripts provided in the release engineering toolkit.

When the scripts are run, they are usually run with a specific purpose, as each major version may be different.

For SIGs, the only scripts you'll need to know about are sync-to-staging-sig.sh and sync-to-prod-sig.sh. Both scripts delete packages and data that are no longer in the compose.

# The below syncs the core 8 repos to staging\nRLVER=8 bash sync-to-staging-sig.sh core\n# The below syncs the core 9 repos to staging\nRLVER=9 bash sync-to-staging-sig.sh core\n\n# The below syncs everything in staging for 8 core to prod\nRLVER=8 bash sync-to-prod-sig.sh core\n\n# The below syncs everything in staging for 9 core to prod\nRLVER=9 bash sync-to-prod-sig.sh core\n

Once staging is completed and reviewed, it is synced to production.

bash sync-file-list-parallel.sh\n

During this phase, staging is rsynced with production, the file list is updated, and the full time list is also updated so that mirrors know the repositories have been updated and can sync.

"},{"location":"sop/sop_mirrormanager2/","title":"Mirror Manager Maintenance","text":"

This SOP contains most, if not all, of the information needed for SIG/Core to maintain and operate Mirror Manager for Rocky Linux.

"},{"location":"sop/sop_mirrormanager2/#contact-information","title":"Contact Information","text":"Owner SIG/Core (Release Engineering & Infrastructure) Email Contact infrastructure@rockylinux.org Email Contact releng@rockylinux.org Mattermost Contacts @label @neil @tgo Mattermost Channels ~Infrastructure"},{"location":"sop/sop_mirrormanager2/#introduction","title":"Introduction","text":"

So you made a bad decision and now have to do things to Mirror Manager. Good luck.

"},{"location":"sop/sop_mirrormanager2/#pieces","title":"Pieces","text":"Item Runs on... Software Mirrorlist Server mirrormanager001 https://github.com/adrianreber/mirrorlist-server/ Mirror Manager 2 mirrormanager001 https://github.com/fedora-infra/mirrormanager2"},{"location":"sop/sop_mirrormanager2/#mirrorlist-server","title":"Mirrorlist Server","text":"

This runs two (2) instances. Apache/httpd is configured to send /mirrorlist to one and /debuglist to the other.

Note that the timing for restarting the mirrorlist instances is arbitrary.

"},{"location":"sop/sop_mirrormanager2/#mirror-manager-2","title":"Mirror Manager 2","text":"

This is a uwsgi service fronted by an apache/httpd instance and is responsible for everything that is not /mirrorlist or /debuglist. It allows mirror admins to, well, manage their mirrors.

"},{"location":"sop/sop_mirrormanager2/#cdn","title":"CDN","text":"

Fastly sits in front of mirror manager. VPN is required to access the /admin endpoints.

If the backend of the CDN is down, it will attempt to guess what the user wanted to access and return a result pointing at dl.rockylinux.org. For example, a request for AppStream-8 and x86_64 will result in an AppStream/x86_64/os directory on dl.rockylinux.org. Note that this isn't perfect, but it helps during potential downtime or patching.

Fastly -> www firewall -> mirrormanager server\n

In reality, the flow is a lot more complex, and a diagram should be created to map it out in a more user-friendly manner (@TODO)

User -> Fastly -> AWS NLB over TLS, passthru -> www firewall cluster (decrypt TLS) -> mirrormanager server (Rocky CA TLS)\n
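
To spot check this path end to end, a mirrorlist request can be issued directly. This is only an illustrative query; the parameter names follow the convention used in the distribution's repo files.

# Illustrative spot check of the mirrorlist endpoint for AppStream-8 on x86_64\ncurl -s 'https://mirrors.rockylinux.org/mirrorlist?arch=x86_64&repo=AppStream-8'\n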
"},{"location":"sop/sop_mirrormanager2/#tasks","title":"Tasks","text":"

Below is a list of possible tasks to perform with mirror manager, depending on the scenario.

"},{"location":"sop/sop_mirrormanager2/#new-release","title":"New Release","text":"

For the following steps, the following must be completed:

/opt/mirrormanager/scan-primary-mirror-0.4.2/target/debug/scan-primary-mirror --debug --config $HOME/scan-primary-mirror.toml --category 'Rocky Linux'\n/opt/mirrormanager/scan-primary-mirror-0.4.2/target/debug/scan-primary-mirror --debug --config $HOME/scan-primary-mirror.toml --category 'Rocky Linux SIGs'\n
  1. Update the redirects for $reponame-$releasever

    a. Use psql to connect to the mirrormanager database server: psql -U mirrormanager -W -h mirrormanager_db_host mirrormanager_db

    b. Confirm that all three columns are filled and that the second and third columns are identical:

    select rr.from_repo AS \"From Repo\", rr.to_repo AS \"To Repo\", r.prefix AS \"Target Repo\" FROM repository_redirect AS rr LEFT JOIN repository AS r ON rr.to_repo = r.prefix GROUP BY r.prefix, rr.to_repo, rr.from_repo ORDER BY r.prefix ASC;`\n

    c. Change the major version redirects to point to the new point release, for example:

    update repository_redirect set to_repo = regexp_replace(to_repo, '9\.2', '9.3') where from_repo ~ '(\w+)-9-(debug|source)';\n

    d. Insert new redirects for the major version expected by the installer

    insert into repository_redirect (from_repo,to_repo) select REGEXP_REPLACE(rr.from_repo,'9\.2','9.3'),REGEXP_REPLACE(rr.to_repo,'9\.2','9.3') FROM repository_redirect AS rr WHERE from_repo ~ '(\w+)-9.2';\n
  2. Generate the mirrorlist cache, restart the debuglist instance, and verify.

Once the bitflip is initiated, restart the mirrorlist instance and re-enable all cron jobs.

"},{"location":"sop/sop_mirrormanager2/#out-of-date-mirrors","title":"Out-of-date Mirrors","text":"
  1. Get current shasum of repomd.xml. For example: shasum=$(curl https://dl.rockylinux.org/pub/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml | sha256sum)
  2. Compare against latest propagation log:
ls -latr /var/log/mirrormanager/propagation/rocky-9.3-BaseOS-x86_64_propagation.log.*\n\nexport VER=9.3\nawk -v shasum=$(curl -s https://dl.rockylinux.org/pub/rocky/$VER/BaseOS/x86_64/os/repodata/repomd.xml | sha256sum | awk '{print $1}') -F'::' '{split($0,data,\":\")} {if ($4 != shasum) {print data[5], data[6], $2, $7}}' < $(find /var/log/mirrormanager/propagation/ -name \"rocky-${VER}-BaseOS-x86_64_propagation.log*\" -mtime -1 | tail -1)\n

This will generate a table. You can take the IDs in the first column and use the database to disable them by ID (table name: hosts) or go to https://mirrors.rockylinux.org/mirrormanager/host/ID and uncheck 'User active'.

Users can change user active, but they cannot change admin active. It is better to flip user active in this case.
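
A hedged sketch of flipping user active from the database is below. The table name follows the note above and the column name is an assumption; verify both against the actual schema, or simply use the web UI instead.

# Hedged sketch: disable an out-of-date mirror (host ID 164 in this example)\n# The user_active column name is an assumption; confirm it before running.\npsql -U mirrormanager -W -h mirrormanager_db_host mirrormanager_db -c 'update hosts set user_active = false where id = 164;'\n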

Admins can also view https://mirrors.rockylinux.org/mirrormanager/admin/all_sites if necessary.

Example of table columns:

Note

These mirrors are listed solely as an example and not to call anyone out; every mirror shows up here at some point due to natural variations in how mirrors sync.

[mirrormanager@ord1-prod-mirrormanager001 propagation]$ awk -v shasum=$(curl -s https://dl.rockylinux.org/pub/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml | sha256sum | awk '{print $1}') -F'::' '{split($0,data,\":\")} {if ($4 != shasum) {print data[5], data[6], $2, $7}}' < rocky-9.3-BaseOS-x86_64_propagation.log.1660611632 | column -t\n164  mirror.host.ag            http://mirror.host.ag/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml             404\n173  rocky.centos-repo.net     http://rocky.centos-repo.net/9.3/BaseOS/x86_64/os/repodata/repomd.xml            403\n92   rocky.mirror.co.ge        http://rocky.mirror.co.ge/9.3/BaseOS/x86_64/os/repodata/repomd.xml               404\n289  mirror.vsys.host          http://mirror.vsys.host/rockylinux/9.3/BaseOS/x86_64/os/repodata/repomd.xml      404\n269  mirrors.rackbud.com       http://mirrors.rackbud.com/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml        200\n295  mirror.ps.kz              http://mirror.ps.kz/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml               200\n114  mirror.liteserver.nl      http://rockylinux.mirror.liteserver.nl/9.3/BaseOS/x86_64/os/repodata/repomd.xml  200\n275  mirror.upsi.edu.my        http://mirror.upsi.edu.my/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml         200\n190  mirror.kku.ac.th          http://mirror.kku.ac.th/rocky-linux/9.3/BaseOS/x86_64/os/repodata/repomd.xml     404\n292  mirrors.cat.pdx.edu       http://mirrors.cat.pdx.edu/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml        200\n370  mirrors.gbnetwork.com     http://mirrors.gbnetwork.com/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml      404\n308  mirror.ihost.md           http://mirror.ihost.md/rockylinux/9.3/BaseOS/x86_64/os/repodata/repomd.xml       404\n87   mirror.freedif.org        http://mirror.freedif.org/Rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml         404\n194  mirrors.bestthaihost.com  http://mirrors.bestthaihost.com/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml   404\n30   mirror.admax.se           http://mirror.admax.se/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml            200\n195  mirror.uepg.br            http://mirror.uepg.br/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml             404\n247  mirrors.ipserverone.com   http://mirrors.ipserverone.com/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml    404\n
"},{"location":"sop/sop_release/","title":"Rocky Release Procedures for SIG/Core (RelEng/Infrastructure)","text":"

This SOP contains all the steps required by SIG/Core (a mix of Release Engineering and Infrastructure) to perform releases of all Rocky Linux versions. Work is done in full collaboration across the entire group of engineers.

"},{"location":"sop/sop_release/#contact-information","title":"Contact Information","text":"Owner SIG/Core (Release Engineering & Infrastructure) Email Contact infrastructure@rockylinux.org Email Contact releng@rockylinux.org Mattermost Contacts @label @neil @tgo @skip77 @mustafa @sherif @pgreco Mattermost Channels ~Infrastructure"},{"location":"sop/sop_release/#preparation","title":"Preparation","text":""},{"location":"sop/sop_release/#notes-about-release-day","title":"Notes about Release Day","text":"

At a minimum of two (2) days before release day, the following should be true:

  1. Torrents should be set up. All files can be synced to the seed box(es) but not yet published. The data should be verified using sha256sum and compared to the CHECKSUM files provided with the files (see the verification sketch after this list).

  2. The website should be ready (typically with an open PR in GitHub). The design and content should be verified as correct and finalized.

  3. Enough mirrors should be set up. This essentially means that all content for a release should be synced to our primary mirror with the executable bit turned off, and the content should also be hard linked. In theory, mirror manager can be queried to verify whether mirrors are, or appear to be, in sync.
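
For item 1 above, a minimal verification sketch (assuming the CHECKSUM file is in a format that sha256sum --check understands; otherwise compare the hashes manually):

# Run from the directory on the seed box that holds the images and the CHECKSUM file (path illustrative)\ncd /path/to/seedbox/release/directory\nsha256sum -c CHECKSUM\n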

"},{"location":"sop/sop_release/#notes-about-patch-days","title":"Notes about Patch Days","text":"

At a minimum of one (1) to two (2) days before patch day, the following should be true:

  1. Updates should be completed in the build system, and verified in staging.

  2. Updates should be sent to production and file lists updated to allow mirrors to sync.

"},{"location":"sop/sop_release/#prior-to-release-day-notes","title":"Prior to Release Day notes","text":"

Ensure the SIG/Core Checklist is read thoroughly and executed as listed.

"},{"location":"sop/sop_release/#release-day","title":"Release Day","text":""},{"location":"sop/sop_release/#priorities","title":"Priorities","text":"

During release day, these should be verified/completed in order:

  1. Website - The primary website and user landing page at rockylinux.org should allow the user to efficiently click through to a download link for an ISO, image, or torrent. It must be kept up.

  2. Torrent - The seed box(es) should be primed and ready to go for users downloading via torrent.

  3. Release Notes & Documentation - The release notes are often on the same website as the documentation. The main website and, where applicable, the docs should refer to the Rocky Linux release notes.

  4. Wiki - If applicable, the necessary changes and resources should be available for a release. In particular, if a major release has new repos or changed repo names, this should be documented.

  5. Everything else!

"},{"location":"sop/sop_release/#resources","title":"Resources","text":""},{"location":"sop/sop_release/#sigcore-checklist","title":"SIG/Core Checklist","text":""},{"location":"sop/sop_release/#beta","title":"Beta","text":""},{"location":"sop/sop_release/#release-candidate","title":"Release Candidate","text":""},{"location":"sop/sop_release/#final","title":"Final","text":" Resources Account ServicesGit (RESF Git Service)Git (Rocky Linux GitHub)Git (Rocky Linux GitLab)Mail ListsContacts

URL: https://accounts.rockylinux.org

Purpose: Account Services maintains the accounts for almost all components of the Rocky ecosystem

Technology: Noggin used by Fedora Infrastructure

Contact: ~Infrastructure in Mattermost and #rockylinux-infra in Libera IRC

URL: https://git.resf.org

Purpose: General projects, code, and so on for the Rocky Enterprise Software Foundation.

Technology: Gitea

Contact: ~Infrastructure, ~Development in Mattermost and #rockylinux-infra, #rockylinux-devel in Libera IRC

URL: https://github.com/rocky-linux

Purpose: General purpose code, assets, and so on for Rocky Linux. Some content is mirrored to the RESF Git Service.

Technology: GitHub

Contact: ~Infrastructure, ~Development in Mattermost and #rockylinux-infra, #rockylinux-devel in Libera IRC

URL: https://git.rockylinux.org

Purpose: Packages and light code for the Rocky Linux distribution

Technology: GitLab

Contact: ~Infrastructure, ~Development in Mattermost and #rockylinux-infra, #rockylinux-devel in Libera IRC

URL: https://lists.resf.org

Purpose: Users can subscribe and interact with various mail lists for the Rocky ecosystem

Technology: Mailman 3 + Hyper Kitty

Contact: ~Infrastructure in Mattermost and #rockylinux-infra in Libera IRC

Name Email Mattermost Name IRC Name Louis Abel label@rockylinux.org @nazunalika Sokel/label/Sombra Mustafa Gezen mustafa@rockylinux.org @mustafa mstg Skip Grube skip@rockylinux.org @skip77 Sherif Nagy sherif@rockylinux.org @sherif Pablo Greco pgreco@rockylinux.org @pgreco pgreco Neil Hanlon neil@resf.org @neil neil Taylor Goodwill tg@resf.org @tgo tg"},{"location":"sop/sop_upstream_prep_checklist/","title":"Generalized Prep Checklist for Upcoming Releases","text":"

This SOP contains general checklists required by SIG/Core to prepare and plan for the upcoming release. This work generally needs to be done on a routine basis, even months before the next major or minor release, as it requires monitoring upstream (CentOS Stream) work to ensure Rocky Linux remains ready and compatible with Red Hat Enterprise Linux.

"},{"location":"sop/sop_upstream_prep_checklist/#contact-information","title":"Contact Information","text":"Owner SIG/Core (Release Engineering & Infrastructure) Email Contact infrastructure@rockylinux.org Email Contact releng@rockylinux.org Mattermost Contacts @label @neil @tgo @skip77 @mustafa @sherif @pgreco Mattermost Channels ~Infrastructure"},{"location":"sop/sop_upstream_prep_checklist/#general-upstream-monitoring","title":"General Upstream Monitoring","text":"

SIG/Core is expected to monitor the following upstream repositories, as they indicate what is coming up for a given major or point release. These repositories are found on the Red Hat GitLab.

These repositories can be monitored by setting the notification bell icon to \"all activity\".

Upon changes to the upstream repositories, a SIG/Core member should analyze the changes and apply the same to the lookahead branches:

"},{"location":"sop/sop_upstream_prep_checklist/#general-downward-merging","title":"General Downward Merging","text":"

Repositories that generally track for LookAhead and Beta releases will flow downward to the stable branch. For example:

* rXs / rXlh\n      |\n      |----> rX-beta\n                |\n                |----> rX\n

This applies to any specific Rocky repo, such as comps, pungi, peridot-config, and so on. As it is expected that some repos will deviate in commit history, it is OK to force push, under the assumption that changes made in the lower branch already exist in the upper branch. That way you can avoid changes or functionality being reverted by accident. A sketch of this flow is below.
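
A hedged git sketch of this downward flow for a hypothetical r9 branch family follows. The branch and remote names are illustrative, and a force push should only happen once you are certain the lower branch's changes already exist in the upper branch.

# Illustrative only: flow r9-beta down into the stable r9 branch\ngit fetch origin\ngit checkout r9\ngit merge origin/r9-beta\ngit push origin r9\n# If histories have deviated and r9 must exactly match the upper branch:\n# git reset --hard origin/r9-beta\n# git push --force-with-lease origin r9\n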

"},{"location":"sop/sop_upstream_prep_checklist/#general-package-patching","title":"General Package Patching","text":"

There are packages that are typically patched for the purpose of debranding. The list of patched packages is maintained in a metadata repository. The obvious ones are listed below and should be monitored and maintained properly:

"}]}