wiki/search/search_index.json


{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Release Engineering (SIG/Core) Wiki","text":""},{"location":"#about","title":"About","text":"<p>The Rocky Linux Release Engineering Team (also known as SIG/Core) dedicates itself to the development, building, management, production, and release of Rocky Linux. This group combines development and infrastructure in a single cohesive unit of individuals that ultimately makes the distribution happen.</p> <p>Despite the reference name, \"SIG/Core\" is not a strict Special Interest Group (as defined by the Rocky Linux wiki).</p> <p>The primary goal (or \"interest\") is to ensure Rocky Linux is built and released in a complete and functional manner. The secondary goal is to ensure proper collaboration and development of the Peridot build system.</p>"},{"location":"#mission","title":"Mission","text":"<p>Release Engineering strives to ensure a stable distribution is developed, built, tested, and provided to the community from the RESF as a compatible derivative of Red Hat Enterprise Linux. 
To achieve this goal, some of the things we do are:</p> <ul> <li>Ensuring a quality and fully compatible release product</li> <li>Developing and iterating on the build systems and architecture</li> <li>Developing all code in the open</li> <li>Setting the technical direction for the build system architecture</li> <li>Release of beta and final products to the end users and mirrors</li> <li>Release of timely updates to the end users and mirrors</li> </ul> <p>See the What We Do page for a more detailed explanation of our activities.</p>"},{"location":"#getting-in-touch-contributing","title":"Getting In Touch / Contributing","text":"<p>There are various ways to get in touch with Release Engineering and provide help, assistance, or even just ideas that can benefit us or the entire community.</p> <ul> <li> <p>Chat</p> <ul> <li>Mattermost: ~development on Mattermost</li> <li>IRC: #rockylinux and #rockylinux-devel on libera.chat</li> </ul> </li> <li> <p>RESF SIG/Core Issue Tracker</p> </li> <li>Mail List</li> </ul> <p>For a list of our members, see the Members page.</p>"},{"location":"#resources-and-rocky-linux-policies","title":"Resources and Rocky Linux Policies","text":"<ul> <li>RESF Git Service</li> <li>Rocky Linux GitHub</li> <li>Rocky Linux GitLab</li> <li>Rocky Linux Image Guide</li> <li>Rocky Linux Repository Guide</li> <li>Rocky Linux Release Version Guide/Policy</li> <li>Special Interest Groups.</li> </ul>"},{"location":"#general-packaging-resources","title":"General Packaging Resources","text":"<ul> <li>RPM Packaging Guide</li> <li>Fedora Packaging Guidelines</li> <li>Basic Packaging Tutorial</li> </ul>"},{"location":"members/","title":"Members","text":"<p>Release Engineering (SIG/Core) is a mix of Development and Infrastructure members to ensure a high quality release of Rocky Linux as well as the uptime of the services provided to the community. The current members of this group are listed in the table below. 
Some members may also be found in various Special Interest Groups, such as SIG/AltArch and SIG/Kernel.</p> Role Name Email Mattermost Name IRC Name Release Engineering Co-Lead and Infrastructure Louis Abel label@rockylinux.org @nazunalika Sokel/label/Sombra Release Engineering Co-Lead Mustafa Gezen mustafa@rockylinux.org @mustafa mstg Release Engineering and Development Skip Grube skip@rockylinux.org @skip77 Release Engineering and Development Sherif Nagy sherif@rockylinux.org @sherif Release Engineering and Development Pablo Greco pgreco@rockylinux.org @pgreco pgreco Infrastructure Lead Neil Hanlon neil@resf.org @neil neil Infrastructure Lead Taylor Goodwill tg@resf.org @tgo tg"},{"location":"what_we_do/","title":"What We Do","text":"<p>Release Engineering (SIG/Core) was brought together as a combination of varying expertise (development and infrastructure) to fill in gaps of knowledge, but also to ensure that the primary goal of having a stable release of Rocky Linux is reached.</p> <p>Some of the things we do in pursuit of our mission goals:</p> <ul> <li>Continuous preparation for upcoming changes from upstream (Fedora and CentOS Stream)</li> <li>Distribution release and maintenance</li> <li>Design and collaboration for the Peridot build system</li> <li>Design and development work to integrate all components together</li> <li>Maintenance of the infrastructure used to build and maintain Rocky Linux (such as ansible roles and playbooks)</li> <li>Working with the testing team with images and a platform to test</li> <li>Providing resources for Special Interest Groups</li> <li>Providing assistance and resources for users within the community</li> </ul> <p>\"Why the name SIG/Core?\"</p> <p>While not an actual Special Interest Group, the reality is that Release Engineering is ultimately the \"core\" of Rocky Linux's production. 
The idea of \"SIG/Core\" stemmed from the thought that without this group, Rocky Linux would not exist as it is now, so we are \"core\" to its existence. The other idea was that SIG/Core would eventually branch out elsewhere. Where this would go is uncertain.</p>"},{"location":"documentation/","title":"Release General Overview","text":"<p>This section goes over, at a high level, how we compose releases for Rocky Linux. As most of our tools are home grown, we have made sure that the tools are open source and in our git services.</p> <p>This page outlines the steps we generally take, and we hope that other projects that wish to use our tools can follow them in this same way, whether they want to be an Enterprise Linux derivative or another project entirely.</p>"},{"location":"documentation/#build-system-and-tools","title":"Build System and Tools","text":"<p>The tools in use for the distribution are in the table below.</p> Tool Maintainer Code Location srpmproc SIG/Core at RESF GitHub empanadas SIG/Core at RESF sig-core-toolkit Peridot SIG/Core at RESF GitHub MirrorManager 2 Fedora Project MirrorManager2 <p>For Rocky Linux to be built, we use <code>Peridot</code> as the build system and <code>empanadas</code> to \"compose\" the distribution. As we do not use Koji for Rocky Linux beyond version 9, pungi can no longer be used. Peridot instead takes pungi configuration data and comps and transforms them into a format it can understand. 
Empanadas then comes in to do the \"compose\" and sync all the repositories down.</p>"},{"location":"documentation/#full-compose-major-or-minor-releases","title":"Full Compose (major or minor releases)","text":"<p>Step by step, it looks like this:</p> <ul> <li>Distribution is built and maintained in Peridot</li> <li>Comps and pungi configuration is converted into the peridot format for the project</li> <li>Repositories are created in yumrepofs based on the configuration provided</li> <li>A repoclosure is run against the repositories from empanadas to ensure there are no critical issues</li> <li> <p>In Parallel:</p> <ul> <li>Repositories are synced as a \"full run\" in empanadas</li> <li>Lorax is run using empanadas in the peridot cluster</li> </ul> </li> <li> <p>Lorax results are pulled down from an S3 bucket</p> </li> <li>DVD images are built for each architecture</li> <li>Compose directory is synced to staging for verification</li> <li>Staging is synced to production to allow mirror syncing</li> <li>Bit flip on release day</li> </ul>"},{"location":"documentation/#general-updates","title":"General Updates","text":"<p>Step by step, it looks like this:</p> <ul> <li>Distribution is maintained in Peridot</li> <li>Updates are built, repos are then \"hashed\" in yumrepofs</li> <li>Empanadas syncs updates as needed, either per repo or all repos at once</li> <li>Updates are synced to staging to be verified</li> <li>Staging is synced to production to allow mirror syncing</li> </ul>"},{"location":"documentation/empanadas/","title":"Empanadas","text":"<p>This page goes over <code>empanadas</code>, which is part of the SIG/Core toolkit. Empanadas assists SIG/Core in composing repositories, creating ISO's, creating images, and performing various other activities in Rocky Linux. 
It is also used for general testing and debugging of repositories and their metadata.</p>"},{"location":"documentation/empanadas/#contact-information","title":"Contact Information","text":"Owner SIG/Core (Release Engineering &amp; Infrastructure) Email Contact releng@rockylinux.org Mattermost Contacts <code>@label</code> <code>@neil</code> Mattermost Channels <code>~Development</code>"},{"location":"documentation/empanadas/#general-information","title":"General Information","text":"<p><code>empanadas</code> is a python project using poetry, containing various built-in modules with the goal of emulating the Fedora Project's pungi to an extent. While it is not perfect, it achieves the very basic goals of creating repositories, images and ISO's for consumption by the end user. It also has interactions with peridot, the build system used by the RESF to build the Rocky Linux distribution.</p> <p>For performing syncs, it relies on the use of podman to perform syncing in a parallel fashion. 
This was done because it is not possible to run multiple dnf transactions at once on a single system and looping one repository at a time is not sustainable (nor fast).</p>"},{"location":"documentation/empanadas/#requirements","title":"Requirements","text":"<ul> <li>Poetry must be installed on the system</li> <li>Podman must be installed on the system</li> <li><code>fpart</code> must be installed on the system (available in EPEL on EL systems)</li> <li>Enough storage should be available if repositories are being synced</li> <li><code>mock</code> must be installed if building live images</li> <li>System must be an Enterprise Linux system or Fedora with the <code>%rhel</code> macro set</li> </ul>"},{"location":"documentation/empanadas/#features","title":"Features","text":"<p>As of this writing, <code>empanadas</code> has the following abilities:</p> <ul> <li>Repository syncing via dnf from a peridot instance or applicable repos</li> <li>Per profile dnf repoclosure checking for all applicable repos</li> <li>Per profile dnf repoclosure checking for peridot instance repositories</li> <li>Basic ISO Building via <code>lorax</code></li> <li>Extra ISO Building via <code>xorriso</code> for DVD and minimal images</li> <li>Live ISO Building using <code>livemedia-creator</code> and <code>mock</code></li> <li>Anaconda treeinfo builder</li> <li>Cloud Image builder</li> </ul>"},{"location":"documentation/empanadas/#installing-empanadas","title":"Installing Empanadas","text":"<p>Below is how to install empanadas from the development branch on a Fedora system.</p> <pre><code>% dnf install git podman fpart poetry mock -y\n% git clone https://git.resf.org/sig_core/toolkit.git -b devel\n% cd toolkit/iso/empanadas\n% poetry install\n</code></pre>"},{"location":"documentation/empanadas/#configuring-empanadas","title":"Configuring Empanadas","text":"<p>How your configurations are set up will depend on how you are using empanadas.</p> <ul> 
<li><code>empanadas/common.py</code></li> <li><code>empanadas/config/*.yaml</code></li> <li><code>empanadas/sig/*.yaml</code></li> </ul> <p>These configuration files are delicate and can control a wide variety of the moving parts of empanadas. As these configurations are fairly massive, we recommend checking the reference guides for deeper details into configuring for base distribution or \"SIG\" content.</p>"},{"location":"documentation/empanadas/#using-empanadas","title":"Using Empanadas","text":"<p>The most common way to use empanadas is to sync repositories from a peridot instance. This is performed upon each release or on each set of updates as they come from upstream. Below are examples of how to use <code>empanadas</code>, as well as the common options.</p> <p>Note that for each of these commands, it is fully expected you are running <code>poetry run</code> in the root of empanadas.</p> <pre><code># Syncs all repositories for the \"9\" release\n% poetry run sync_from_peridot --release 9 --clean-old-packages\n\n# Syncs only the BaseOS repository without syncing sources\n% poetry run sync_from_peridot --release 9 --clean-old-packages --repo BaseOS --ignore-source\n\n# Syncs only AppStream for ppc64le\n% poetry run sync_from_peridot --release 9 --clean-old-packages --repo AppStream --arch ppc64le\n</code></pre> Resources Account ServicesGit (RESF Git Service)Git (Rocky Linux GitHub)Git (Rocky Linux GitLab)Mail ListsContacts <p>URL: https://accounts.rockylinux.org</p> <p>Purpose: Account Services maintains the accounts for almost all components of the Rocky ecosystem</p> <p>Technology: Noggin used by Fedora Infrastructure</p> <p>Contact: <code>~Infrastructure</code> in Mattermost and <code>#rockylinux-infra</code> in Libera IRC</p> <p>URL: https://git.resf.org</p> <p>Purpose: General projects, code, and so on for the Rocky Enterprise Software Foundation.</p> <p>Technology: Gitea</p> <p>Contact: <code>~Infrastructure</code>, <code>~Development</code> in Mattermost and 
<code>#rockylinux-infra</code>, <code>#rockylinux-devel</code> in Libera IRC</p> <p>URL: https://github.com/rocky-linux</p> <p>Purpose: General purpose code, assets, and so on for Rocky Linux. Some content is mirrored to the RESF Git Service.</p> <p>Technology: GitHub</p> <p>Contact: <code>~Infrastructure</code>, <code>~Development</code> in Mattermost and <code>#rockylinux-infra</code>, <code>#rockylinux-devel</code> in Libera IRC</p> <p>URL: https://git.rockylinux.org</p> <p>Purpose: Packages and light code for the Rocky Linux distribution</p> <p>Technology: GitLab</p> <p>Contact: <code>~Infrastructure</code>, <code>~Development</code> in Mattermost and <code>#rockylinux-infra</code>, <code>#rockylinux-devel</code> in Libera IRC</p> <p>URL: https://lists.resf.org</p> <p>Purpose: Users can subscribe and interact with various mail lists for the Rocky ecosystem</p> <p>Technology: Mailman 3 + Hyper Kitty</p> <p>Contact: <code>~Infrastructure</code> in Mattermost and <code>#rockylinux-infra</code> in Libera IRC</p> Name Email Mattermost Name IRC Name Louis Abel label@rockylinux.org @nazunalika Sokel/label/Sombra Mustafa Gezen mustafa@rockylinux.org @mustafa mstg Skip Grube skip@rockylinux.org @skip77 Sherif Nagy sherif@rockylinux.org @sherif Pablo Greco pgreco@rockylinux.org @pgreco pgreco Neil Hanlon neil@resf.org @neil neil Taylor Goodwill tg@resf.org @tgo tg"},{"location":"documentation/peridot/","title":"Peridot Build System","text":"<p>This page goes over the Peridot Build System and how SIG/Core utilizes it.</p> <p>More to come.</p>"},{"location":"documentation/rebuild/","title":"Rebuild Version Bump","text":"<p>In some cases, a package has to be rebuilt. 
A package may be rebuilt for these reasons:</p> <ul> <li>Underlying libraries have been rebased</li> <li>ABI changes that require a rebuild (mass rebuilds, though they are rare)</li> <li>New architecture added to a project</li> </ul> <p>This typically applies to packages being built from a given <code>src</code> subgroup. Packages pulled from upstream don't fall into this category in normal circumstances. In those cases, they receive <code>.0.1</code> and so on as standalone rebuilds.</p>"},{"location":"documentation/compose/","title":"Composing and Managing Releases","text":"<p>This section goes over the process of composing a release from a collection of packages, to repositories, to images. This section also goes over the basics of working with koji when necessary.</p>"},{"location":"documentation/compose/koji/","title":"Updates and Management in Koji, A Manual","text":"<p>More to come.</p>"},{"location":"documentation/references/","title":"References","text":"<p>Use this section to locate reference configuration items for the toolkit.</p>"},{"location":"documentation/references/empanadas_common/","title":"Empanadas common.py Configuration","text":"<p>The <code>common.py</code> configuration contains dictionaries and classes that dictate most of the functionality of empanadas.</p>"},{"location":"documentation/references/empanadas_common/#config-items","title":"Config Items","text":"<p>type: Dictionary</p>"},{"location":"documentation/references/empanadas_common/#configrlmacro","title":"config.rlmacro","text":"<p>type: String</p> <p>required: True</p> <p>description: Empanadas expects to run on an EL system. This is part of the general check up. It should not be hardcoded; use the rpm python module instead.</p>"},{"location":"documentation/references/empanadas_common/#configdist","title":"config.dist","text":"<p>type: String</p> <p>required: False</p> <p>description: This was the original tag placed in mock configs. 
This combines <code>el</code> with the rpm python module expansion. This is no longer required. The option is still available for future use.</p>"},{"location":"documentation/references/empanadas_common/#configarch","title":"config.arch","text":"<p>type: String</p> <p>required: True</p> <p>description: The architecture of the current running system. This is checked against the supported architectures in general release configurations. This should not be hardcoded.</p>"},{"location":"documentation/references/empanadas_common/#configdate_stamp","title":"config.date_stamp","text":"<p>type: String</p> <p>required: True</p> <p>description: Date time stamp in the form of YYYYMMDD.HHMMSS. This should not be hardcoded.</p>"},{"location":"documentation/references/empanadas_common/#configcompose_root","title":"config.compose_root","text":"<p>type: String</p> <p>required: True</p> <p>description: Root path of composes on the system running empanadas.</p>"},{"location":"documentation/references/empanadas_common/#configstaging_root","title":"config.staging_root","text":"<p>type: String</p> <p>required: False</p> <p>description: For future use. Root path of staging repository location where content will be synced to.</p>"},{"location":"documentation/references/empanadas_common/#configproduction_root","title":"config.production_root","text":"<p>type: String</p> <p>required: False</p> <p>description: For future use. Root path of production repository location where content will be synced to from staging.</p>"},{"location":"documentation/references/empanadas_common/#configcategory_stub","title":"config.category_stub","text":"<p>type: String</p> <p>required: True</p> <p>description: For future use. 
Stub path that is appended to <code>staging_root</code> and <code>production_root</code>.</p> <p>example: <code>mirror/pub/rocky</code></p>"},{"location":"documentation/references/empanadas_common/#configsig_category_stub","title":"config.sig_category_stub","text":"<p>type: String</p> <p>required: True</p> <p>description: For future use. Stub path that is appended to <code>staging_root</code> and <code>production_root</code> for SIG content.</p> <p>example: <code>mirror/pub/sig</code></p>"},{"location":"documentation/references/empanadas_common/#configrepo_base_url","title":"config.repo_base_url","text":"<p>type: String</p> <p>required: True</p> <p>description: Base URL where the repositories live. This typically points to a peridot instance. This is supplemented by the configuration <code>project_id</code> parameter.</p> <p>Note that this does not have to be a peridot instance. The combination of this value and <code>project_id</code> can be sufficient for empanadas to perform its work.</p>"},{"location":"documentation/references/empanadas_common/#configmock_work_root","title":"config.mock_work_root","text":"<p>type: String</p> <p>required: True</p> <p>description: Hardcoded path to where ISO work is performed within a mock chroot. 
This is the default path created by mock and it is recommended not to change this.</p> <p>example: <code>/builddir</code></p>"},{"location":"documentation/references/empanadas_common/#configcontainer","title":"config.container","text":"<p>type: String</p> <p>required: True</p> <p>description: This is the container used to perform all operations in podman.</p> <p>example: <code>centos:stream9</code></p>"},{"location":"documentation/references/empanadas_common/#configdistname","title":"config.distname","text":"<p>type: String</p> <p>required: True</p> <p>description: Name of the distribution you are building or building for.</p> <p>example: <code>Rocky Linux</code></p>"},{"location":"documentation/references/empanadas_common/#configshortname","title":"config.shortname","text":"<p>type: String</p> <p>required: True</p> <p>description: Short name of the distribution you are building or building for.</p> <p>example: <code>Rocky</code></p>"},{"location":"documentation/references/empanadas_common/#configtranslators","title":"config.translators","text":"<p>type: Dictionary</p> <p>required: True</p> <p>description: Translates Linux architectures to golang architectures. Reserved for future use.</p>"},{"location":"documentation/references/empanadas_common/#configaws_region","title":"config.aws_region","text":"<p>type: String</p> <p>required: False</p> <p>description: Region you are working in with AWS or onprem cloud that supports this variable.</p> <p>example: <code>us-east-2</code></p>"},{"location":"documentation/references/empanadas_common/#configbucket","title":"config.bucket","text":"<p>type: String</p> <p>required: False</p> <p>description: Name of the S3-compatible bucket that is used to pull images from. 
Requires <code>aws_region</code>.</p>"},{"location":"documentation/references/empanadas_common/#configbucket_url","title":"config.bucket_url","text":"<p>type: String</p> <p>required: False</p> <p>description: URL of the S3-compatible bucket that is used to pull images from.</p>"},{"location":"documentation/references/empanadas_common/#allowed_type_variants-items","title":"allowed_type_variants items","text":"<p>type: Dictionary</p> <p>description: Key value pairs of cloud or image variants. The value is either <code>None</code> or a list type.</p>"},{"location":"documentation/references/empanadas_common/#reference-example","title":"Reference Example","text":"<pre><code>config = {\n \"rlmacro\": rpm.expandMacro('%rhel'),\n \"dist\": 'el' + rpm.expandMacro('%rhel'),\n \"arch\": platform.machine(),\n \"date_stamp\": time.strftime(\"%Y%m%d.%H%M%S\", time.localtime()),\n \"compose_root\": \"/mnt/compose\",\n \"staging_root\": \"/mnt/repos-staging\",\n \"production_root\": \"/mnt/repos-production\",\n \"category_stub\": \"mirror/pub/rocky\",\n \"sig_category_stub\": \"mirror/pub/sig\",\n \"repo_base_url\": \"https://yumrepofs.build.resf.org/v1/projects\",\n \"mock_work_root\": \"/builddir\",\n \"container\": \"centos:stream9\",\n \"distname\": \"Rocky Linux\",\n \"shortname\": \"Rocky\",\n \"translators\": {\n \"x86_64\": \"amd64\",\n \"aarch64\": \"arm64\",\n \"ppc64le\": \"ppc64le\",\n \"s390x\": \"s390x\",\n \"i686\": \"386\"\n },\n \"aws_region\": \"us-east-2\",\n \"bucket\": \"resf-empanadas\",\n \"bucket_url\": \"https://resf-empanadas.s3.us-east-2.amazonaws.com\"\n}\n\nALLOWED_TYPE_VARIANTS = {\n \"Azure\": None,\n \"Container\": [\"Base\", \"Minimal\", \"UBI\"],\n \"EC2\": None,\n \"GenericCloud\": None,\n \"Vagrant\": [\"Libvirt\", \"Vbox\"],\n \"OCP\": None\n\n}\n</code></pre>"},{"location":"documentation/references/empanadas_config/","title":"Empanadas config yaml Configuration","text":"<p>Each file in <code>empanadas/config/</code> is a yaml file that contains 
configuration items for the distribution release version. The configuration can heavily dictate the functionality and what features are directly supported by empanadas when run.</p> <p>See the items below to see which options are mandatory and optional.</p>"},{"location":"documentation/references/empanadas_config/#config-items","title":"Config Items","text":""},{"location":"documentation/references/empanadas_config/#top-level","title":"Top Level","text":"<p>The Top Level is the name of the profile and starts the YAML dictionary for the release. It is alphanumeric and accepts punctuation within reason. Common examples:</p> <ul> <li><code>9</code></li> <li><code>9-beta</code></li> <li><code>8-lookahead</code></li> </ul>"},{"location":"documentation/references/empanadas_config/#fullname","title":"fullname","text":"<p>type: String</p> <p>required: True</p> <p>description: Needed for treeinfo and discinfo generation.</p>"},{"location":"documentation/references/empanadas_config/#revision","title":"revision","text":"<p>type: String</p> <p>required: True</p> <p>description: Full version of a release</p>"},{"location":"documentation/references/empanadas_config/#rclvl","title":"rclvl","text":"<p>type: String</p> <p>required: True</p> <p>description: Release Candidate or Beta descriptor. Sets names and versions with this descriptor if enabled.</p>"},{"location":"documentation/references/empanadas_config/#major","title":"major","text":"<p>type: String</p> <p>required: True</p> <p>description: Major version of a release</p>"},{"location":"documentation/references/empanadas_config/#minor","title":"minor","text":"<p>type: String</p> <p>required: True</p> <p>description: Minor version of a release</p>"},{"location":"documentation/references/empanadas_config/#profile","title":"profile","text":"<p>type: String</p> <p>required: True</p> <p>description: Matches the top level of the release. 
This should not differ from the top level assignment.</p>"},{"location":"documentation/references/empanadas_config/#disttag","title":"disttag","text":"<p>type: String</p> <p>required: True</p> <p>description: Sets the dist tag for mock configs.</p>"},{"location":"documentation/references/empanadas_config/#bugurl","title":"bugurl","text":"<p>type: String</p> <p>required: True</p> <p>description: A URL to the bug tracker for this release or distribution.</p>"},{"location":"documentation/references/empanadas_config/#checksum","title":"checksum","text":"<p>type: String</p> <p>required: True</p> <p>description: Checksum type. Used when generating checksum information for images.</p>"},{"location":"documentation/references/empanadas_config/#fedora_major","title":"fedora_major","text":"<p>type: String</p> <p>required: False</p> <p>description: For future use with icicle.</p>"},{"location":"documentation/references/empanadas_config/#allowed_arches","title":"allowed_arches","text":"<p>type: list</p> <p>required: True</p> <p>description: List of supported architectures for this release.</p>"},{"location":"documentation/references/empanadas_config/#provide_multilib","title":"provide_multilib","text":"<p>type: boolean</p> <p>required: True</p> <p>description: Sets if architecture x86_64 will be multilib. It is recommended that this is set to <code>True</code>.</p>"},{"location":"documentation/references/empanadas_config/#project_id","title":"project_id","text":"<p>type: String</p> <p>required: True</p> <p>description: Appended to the base repo URL in common.py. For peridot, it is the project id that is generated for the project you are pulling from. It can be set to anything else if need be for non-peridot use.</p>"},{"location":"documentation/references/empanadas_config/#repo_symlinks","title":"repo_symlinks","text":"<p>type: dict</p> <p>required: False</p> <p>description: For future use. Sets symlinks to repositories for backwards compatibility. 
Key value pairs only.</p>"},{"location":"documentation/references/empanadas_config/#renames","title":"renames","text":"<p>type: dict</p> <p>required: False</p> <p>description: Renames a repository to the value set. For example, renaming <code>all</code> to <code>devel</code>. Set to <code>{}</code> if no renames are going to occur.</p>"},{"location":"documentation/references/empanadas_config/#all_repos","title":"all_repos","text":"<p>type: list</p> <p>required: True</p> <p>description: List of repositories that will be synced/managed by empanadas.</p>"},{"location":"documentation/references/empanadas_config/#structure","title":"structure","text":"<p>type: dict</p> <p>required: True</p> <p>description: Key value pairs of <code>packages</code> and <code>repodata</code>. These are appended appropriately during syncing and ISO actions. Setting these is mandatory.</p>"},{"location":"documentation/references/empanadas_config/#iso_map","title":"iso_map","text":"<p>type: dictionary</p> <p>required: True if building ISO's and operating with lorax.</p> <p>description: Controls how lorax and extra ISO's are built.</p> <p>If you are not building images, set to <code>{}</code></p>"},{"location":"documentation/references/empanadas_config/#xorrisofs","title":"xorrisofs","text":"<p>type: boolean</p> <p>required: True</p> <p>description: Dictates whether xorrisofs is used to build images. Setting to false uses genisoimage. It is recommended that xorrisofs is used.</p>"},{"location":"documentation/references/empanadas_config/#iso_level","title":"iso_level","text":"<p>type: boolean</p> <p>required: True</p> <p>description: Set to false if you are using xorrisofs. 
Can be set to true when using genisoimage.</p>"},{"location":"documentation/references/empanadas_config/#images","title":"images","text":"<p>type: dict</p> <p>required: True</p> <p>description: Dictates the ISO images that will be made or the treeinfo that will be generated.</p> <p>Note: The primary repository (for example, BaseOS) will need to be listed to ensure the treeinfo data is correctly generated. <code>disc</code> should be set to <code>False</code> and <code>isoskip</code> should be set to <code>True</code>. See the example section for an example.</p>"},{"location":"documentation/references/empanadas_config/#namedisc","title":"name.disc","text":"<p>type: boolean</p> <p>required: True</p> <p>description: This tells the iso builder if this will be a generated ISO.</p>"},{"location":"documentation/references/empanadas_config/#nameisoskip","title":"name.isoskip","text":"<p>type: boolean</p> <p>required: False</p> <p>description: This tells the iso builder if this will be skipped, even if <code>disc</code> is set to <code>True</code>. Default is <code>False</code>.</p>"},{"location":"documentation/references/empanadas_config/#namevariant","title":"name.variant","text":"<p>type: string</p> <p>required: True</p> <p>description: Names the primary variant repository for the image. This is set in .treeinfo.</p>"},{"location":"documentation/references/empanadas_config/#namerepos","title":"name.repos","text":"<p>type: list</p> <p>required: True</p> <p>description: Names of the repositories included in the image. This is added to .treeinfo.</p>"},{"location":"documentation/references/empanadas_config/#namevolname","title":"name.volname","text":"<p>type: string</p> <p>required: True</p> <p>required value: <code>dvd</code></p> <p>description: This is required if building more than the DVD image. 
By default, the name <code>dvd</code> is hardcoded in the buildImage template.</p>"},{"location":"documentation/references/empanadas_config/#lorax","title":"lorax","text":"<p>type: dict</p> <p>required: True if building lorax images.</p> <p>description: Sets up lorax images and which repositories to use when building lorax images.</p>"},{"location":"documentation/references/empanadas_config/#loraxrepos","title":"lorax.repos","text":"<p>type: list</p> <p>required: True</p> <p>description: List of repos that are used to pull packages to build the lorax images.</p>"},{"location":"documentation/references/empanadas_config/#loraxvariant","title":"lorax.variant","text":"<p>type: string</p> <p>required: True</p> <p>description: Base repository for the release</p>"},{"location":"documentation/references/empanadas_config/#loraxlorax_removes","title":"lorax.lorax_removes","text":"<p>type: list</p> <p>required: False</p> <p>description: Excludes packages that are not needed when lorax is running.</p>"},{"location":"documentation/references/empanadas_config/#loraxrequired_pkgs","title":"lorax.required_pkgs","text":"<p>type: list</p> <p>required: True</p> <p>description: Required list of installed packages needed to build lorax images.</p>"},{"location":"documentation/references/empanadas_config/#livemap","title":"livemap","text":"<p>type: dict</p> <p>required: False</p> <p>description: Dictates what live images are built and how they are built.</p>"},{"location":"documentation/references/empanadas_config/#livemapgit_repo","title":"livemap.git_repo","text":"<p>type: string</p> <p>required: True</p> <p>description: The git repository URL where the kickstarts live</p>"},{"location":"documentation/references/empanadas_config/#livemapbranch","title":"livemap.branch","text":"<p>type: string</p> <p>required: True</p> <p>description: The branch being used for the 
kickstarts</p>"},{"location":"documentation/references/empanadas_config/#livemapksentry","title":"livemap.ksentry","text":"<p>type: dict</p> <p>required: True</p> <p>description: Key-value pairs of the live images being created. The key is the name of the live image, and the value is the kickstart name/path.</p>"},{"location":"documentation/references/empanadas_config/#livemapallowed_arches","title":"livemap.allowed_arches","text":"<p>type: list</p> <p>required: True</p> <p>description: List of architectures for which the live images are allowed to build.</p>"},{"location":"documentation/references/empanadas_config/#livemaprequired_pkgs","title":"livemap.required_pkgs","text":"<p>type: list</p> <p>required: True</p> <p>description: Required list of packages needed to build the live images.</p>"},{"location":"documentation/references/empanadas_config/#cloudimages","title":"cloudimages","text":"<p>type: dict</p> <p>required: False</p> <p>description: Cloud-related settings.</p> <p>Set to <code>{}</code> if not needed.</p>"},{"location":"documentation/references/empanadas_config/#cloudimagesimages","title":"cloudimages.images","text":"<p>type: dict</p> <p>required: True</p> <p>description: Cloud images that will be generated and placed in a bucket to be pulled, and their formats.</p>"},{"location":"documentation/references/empanadas_config/#cloudimagesimagesname","title":"cloudimages.images.name","text":"<p>type: dict</p> <p>required: True</p> <p>description: Name of the cloud image being pulled.</p> <p>Accepted key value options:</p> <ul> <li><code>format</code>, which is <code>raw</code>, <code>qcow2</code>, <code>vhd</code>, or <code>tar.xz</code></li> <li><code>variants</code>, which is a list</li> <li><code>primary_variant</code>, which symlinks to the \"primary\" variant in the variant list</li> </ul>"},{"location":"documentation/references/empanadas_config/#repoclosure_map","title":"repoclosure_map","text":"<p>type: dict</p> <p>required: True</p> <p>description: Repoclosure 
settings. These settings are required when doing full syncs, which need to check repositories for consistency.</p>"},{"location":"documentation/references/empanadas_config/#repoclosure_maparches","title":"repoclosure_map.arches","text":"<p>type: dict</p> <p>required: True</p> <p>description: For each architecture (key), dnf switches/settings that dictate how repoclosure will check for consistency (value, string).</p> <p>example: <code>x86_64: '--forcearch=x86_64 --arch=x86_64 --arch=athlon --arch=i686 --arch=i586 --arch=i486 --arch=i386 --arch=noarch'</code></p>"},{"location":"documentation/references/empanadas_config/#repoclosure_maprepos","title":"repoclosure_map.repos","text":"<p>type: dict</p> <p>required: True</p> <p>description: For each repository that is pulled for a given release (key), repositories that will be included in the repoclosure check. A repository that only checks against itself must have a value of <code>[]</code>.</p>"},{"location":"documentation/references/empanadas_config/#extra_files","title":"extra_files","text":"<p>type: dict</p> <p>required: True</p> <p>description: Extra files settings and where they come from. 
Git repositories are the only supported method.</p>"},{"location":"documentation/references/empanadas_config/#extra_filesgit_repo","title":"extra_files.git_repo","text":"<p>type: string</p> <p>required: True</p> <p>description: URL to the git repository with the extra files.</p>"},{"location":"documentation/references/empanadas_config/#extra_filesgit_raw_path","title":"extra_files.git_raw_path","text":"<p>type: string</p> <p>required: True</p> <p>description: URL to the git repository with the extra files, but in its \"raw\" URL form.</p> <p>example: <code>git_raw_path: 'https://git.rockylinux.org/staging/src/rocky-release/-/raw/r9/'</code></p>"},{"location":"documentation/references/empanadas_config/#extra_filesbranch","title":"extra_files.branch","text":"<p>type: string</p> <p>required: True</p> <p>description: Branch where the extra files are pulled from.</p>"},{"location":"documentation/references/empanadas_config/#extra_filesgpg","title":"extra_files.gpg","text":"<p>type: dict</p> <p>required: True</p> <p>description: For each gpg key type (key), the relative path to the key in the git repository (value).</p> <p>These keys help set up the repository configuration when doing syncs.</p> <p>By default, the RepoSync class sets <code>stable</code> as the gpgkey that is used.</p>"},{"location":"documentation/references/empanadas_config/#extra_fileslist","title":"extra_files.list","text":"<p>type: list</p> <p>required: True</p> <p>description: List of files from the git repository that will be used as \"extra\" files, placed in the repositories, made available to mirrors, and included on ISO images if applicable.</p>"},{"location":"documentation/references/empanadas_config/#reference-example","title":"Reference Example","text":"<pre><code>---\n'9':\n fullname: 'Rocky Linux 9.0'\n revision: '9.0'\n rclvl: 'RC2'\n major: '9'\n minor: '0'\n profile: '9'\n disttag: 'el9'\n bugurl: 'https://bugs.rockylinux.org'\n checksum: 'sha256'\n fedora_major: '20'\n allowed_arches:\n 
- x86_64\n - aarch64\n - ppc64le\n - s390x\n provide_multilib: True\n project_id: '55b17281-bc54-4929-8aca-a8a11d628738'\n repo_symlinks:\n NFV: 'nfv'\n renames:\n all: 'devel'\n all_repos:\n - 'all'\n - 'BaseOS'\n - 'AppStream'\n - 'CRB'\n - 'HighAvailability'\n - 'ResilientStorage'\n - 'RT'\n - 'NFV'\n - 'SAP'\n - 'SAPHANA'\n - 'extras'\n - 'plus'\n structure:\n packages: 'os/Packages'\n repodata: 'os/repodata'\n iso_map:\n xorrisofs: True\n iso_level: False\n images:\n dvd:\n disc: True\n variant: 'AppStream'\n repos:\n - 'BaseOS'\n - 'AppStream'\n minimal:\n disc: True\n isoskip: True\n repos:\n - 'minimal'\n - 'BaseOS'\n variant: 'minimal'\n volname: 'dvd'\n BaseOS:\n disc: False\n isoskip: True\n variant: 'BaseOS'\n repos:\n - 'BaseOS'\n - 'AppStream'\n lorax:\n repos:\n - 'BaseOS'\n - 'AppStream'\n variant: 'BaseOS'\n lorax_removes:\n - 'libreport-rhel-anaconda-bugzilla'\n required_pkgs:\n - 'lorax'\n - 'genisoimage'\n - 'isomd5sum'\n - 'lorax-templates-rhel'\n - 'lorax-templates-generic'\n - 'xorriso'\n cloudimages:\n images:\n EC2:\n format: raw\n GenericCloud:\n format: qcow2\n livemap:\n git_repo: 'https://git.resf.org/sig_core/kickstarts.git'\n branch: 'r9'\n ksentry:\n Workstation: rocky-live-workstation.ks\n Workstation-Lite: rocky-live-workstation-lite.ks\n XFCE: rocky-live-xfce.ks\n KDE: rocky-live-kde.ks\n MATE: rocky-live-mate.ks\n allowed_arches:\n - x86_64\n - aarch64\n required_pkgs:\n - 'lorax-lmc-novirt'\n - 'vim-minimal'\n - 'pykickstart'\n - 'git'\n variantmap:\n git_repo: 'https://git.rockylinux.org/rocky/pungi-rocky.git'\n branch: 'r9'\n git_raw_path: 'https://git.rockylinux.org/rocky/pungi-rocky/-/raw/r9/'\n repoclosure_map:\n arches:\n x86_64: '--forcearch=x86_64 --arch=x86_64 --arch=athlon --arch=i686 --arch=i586 --arch=i486 --arch=i386 --arch=noarch'\n aarch64: '--forcearch=aarch64 --arch=aarch64 --arch=noarch'\n ppc64le: '--forcearch=ppc64le --arch=ppc64le --arch=noarch'\n s390x: '--forcearch=s390x --arch=s390x --arch=noarch'\n 
repos:\n devel: []\n BaseOS: []\n AppStream:\n - BaseOS\n CRB:\n - BaseOS\n - AppStream\n HighAvailability:\n - BaseOS\n - AppStream\n ResilientStorage:\n - BaseOS\n - AppStream\n RT:\n - BaseOS\n - AppStream\n NFV:\n - BaseOS\n - AppStream\n SAP:\n - BaseOS\n - AppStream\n - HighAvailability\n SAPHANA:\n - BaseOS\n - AppStream\n - HighAvailability\n extra_files:\n git_repo: 'https://git.rockylinux.org/staging/src/rocky-release.git'\n git_raw_path: 'https://git.rockylinux.org/staging/src/rocky-release/-/raw/r9/'\n branch: 'r9'\n gpg:\n stable: 'SOURCES/RPM-GPG-KEY-Rocky-9'\n testing: 'SOURCES/RPM-GPG-KEY-Rocky-9-Testing'\n list:\n - 'SOURCES/Contributors'\n - 'SOURCES/COMMUNITY-CHARTER'\n - 'SOURCES/EULA'\n - 'SOURCES/LICENSE'\n - 'SOURCES/RPM-GPG-KEY-Rocky-9'\n - 'SOURCES/RPM-GPG-KEY-Rocky-9-Testing'\n...\n</code></pre>"},{"location":"documentation/references/empanadas_sig_config/","title":"Empanadas SIG yaml Configuration","text":"<p>Each file in <code>empanadas/sig/</code> is a yaml file that contains configuration items for the distribution release version. 
The configuration determines the structure of the SIG repositories synced from Peridot or a given repo.</p> <p>Note that a release profile (for a major version) is still required for this sync to work.</p> <p>See the items below to see which options are mandatory and optional.</p>"},{"location":"documentation/references/empanadas_sig_config/#config-items","title":"Config Items","text":""},{"location":"documentation/references/empanadas_sig_config/#reference-example","title":"Reference Example","text":""},{"location":"include/resources_bottom/","title":"Resources bottom","text":"Resources Account ServicesGit (RESF Git Service)Git (Rocky Linux GitHub)Git (Rocky Linux GitLab)Mail ListsContacts <p>URL: https://accounts.rockylinux.org</p> <p>Purpose: Account Services maintains the accounts for almost all components of the Rocky ecosystem</p> <p>Technology: Noggin used by Fedora Infrastructure</p> <p>Contact: <code>~Infrastructure</code> in Mattermost and <code>#rockylinux-infra</code> in Libera IRC</p> <p>URL: https://git.resf.org</p> <p>Purpose: General projects, code, and so on for the Rocky Enterprise Software Foundation.</p> <p>Technology: Gitea</p> <p>Contact: <code>~Infrastructure</code>, <code>~Development</code> in Mattermost and <code>#rockylinux-infra</code>, <code>#rockylinux-devel</code> in Libera IRC</p> <p>URL: https://github.com/rocky-linux</p> <p>Purpose: General purpose code, assets, and so on for Rocky Linux. 
Some content is mirrored to the RESF Git Service.</p> <p>Technology: GitHub</p> <p>Contact: <code>~Infrastructure</code>, <code>~Development</code> in Mattermost and <code>#rockylinux-infra</code>, <code>#rockylinux-devel</code> in Libera IRC</p> <p>URL: https://git.rockylinux.org</p> <p>Purpose: Packages and light code for the Rocky Linux distribution</p> <p>Technology: GitLab</p> <p>Contact: <code>~Infrastructure</code>, <code>~Development</code> in Mattermost and <code>#rockylinux-infra</code>, <code>#rockylinux-devel</code> in Libera IRC</p> <p>URL: https://lists.resf.org</p> <p>Purpose: Users can subscribe and interact with various mail lists for the Rocky ecosystem</p> <p>Technology: Mailman 3 + Hyper Kitty</p> <p>Contact: <code>~Infrastructure</code> in Mattermost and <code>#rockylinux-infra</code> in Libera IRC</p> Name Email Mattermost Name IRC Name Louis Abel label@rockylinux.org @nazunalika Sokel/label/Sombra Mustafa Gezen mustafa@rockylinux.org @mustafa mstg Skip Grube skip@rockylinux.org @skip77 Sherif Nagy sherif@rockylinux.org @sherif Pablo Greco pgreco@rockylinux.org @pgreco pgreco Neil Hanlon neil@resf.org @neil neil Taylor Goodwill tg@resf.org @tgo tg"},{"location":"legacy/","title":"Legacy","text":"<p>Legacy documentation comes here.</p> <p>Debrand List</p> <p>Koji Tagging</p>"},{"location":"legacy/debrand_list/","title":"Rocky Debrand Packages List","text":"<p>This is a list of packages that require changes to their material for acceptance in Rocky Linux. Usually this means there is some text or images in the package that reference upstream trademarks, and these must be swapped out before we can distribute them.</p> <p>The first items in this list are referenced from the excellent CentOS release notes here: https://wiki.centos.org/Manuals/ReleaseNotes/CentOS8.1905#Packages_modified_by_CentOS</p> <p>It is assumed that we will have to modify these same packages. 
It is also assumed that the changes to these packages might not be limited to debranding.</p> <p>However, this list is incomplete. For example, the package nginx does not appear on the list, and still has RHEL branding in the CentOS repos. We will need to investigate the rest of the package set and find any more packages like this that we must modify.</p> <p>One way to find said changes is to look for <code>?centos</code> tags in the SPEC file, while also looking at the manual debranding if there was any for the <code>c8</code> branches.</p> <p>There will be cases where a search and replace for <code>?centos</code> to <code>?rocky</code> will be sufficient.</p> <p>Current patches (for staging) are here.</p>"},{"location":"legacy/debrand_list/#packages-that-need-debranding-changes","title":"Packages that need debranding changes:","text":"Package Notes Work Status abrt See here DONE anaconda See here DONE apache-commons-net AppStream module with elevating branch names NO CHANGES REQUIRED ~~basesystem~~ (does not require debranding, it is a skeleton package) NO CHANGES REQUIRED cloud-init See here DONE - NEEDS REVIEW IN GITLAB (Rich Alloway) cockpit See here DONE ~~compat-glibc~~ NOT IN EL 8 dhcp See here DONE, NEEDS REVIEW IN GITLAB (Rich Alloway) firefox See here -- Still requires a distribution.ini ID MOSTLY DONE (Louis) fwupdate NOT STARTED glusterfs Changes don't appear to be required NO CHANGES REQUIRED gnome-settings-daemon No changes required for now. NO CHANGES REQUIRED grub2 (secureboot patches not done, just debrand) See here DONE, NEEDS REVIEW IN GITLAB AND SECUREBOOT (Rich Alloway) httpd See here DONE initial-setup See here DONE ipa This is a dual change: Logos and ipaplatform. Logos are taken care of in <code>rocky-logos</code> and the <code>ipaplatform</code> is taken care of here. 
See here DONE ~~kabi-yum-plugins~~ NOT IN EL 8 kernel See here for a potential example NOT STARTED ~~kde-settings~~ NOT IN EL 8 libreport See here DONE oscap-anaconda-addon See here DONE Requires install QA PackageKit See here DONE ~~pcs~~ NO CHANGES REQUIRED plymouth See here DONE ~~redhat-lsb~~ NO CHANGES REQUIRED redhat-rpm-config See here DONE scap-security-guide QA is likely required to test this package as it is NO CHANGES REQUIRED, QA REQUIRED shim NOT STARTED shim-signed NOT STARTED sos See here DONE subscription-manager See here DONE, NEEDS REVIEW ~~system-config-date~~ NOT IN EL8 ~~system-config-kdump~~ NOT IN EL8 thunderbird See here DONE ~~xulrunner~~ NOT IN EL 8 ~~yum~~ NO CHANGES REQUIRED (end of CentOS list) nginx Identified changes, in staging (ALMOST) DONE"},{"location":"legacy/debrand_list/#packages-that-need-to-become-other-packages","title":"Packages that need to become other packages:","text":"<p>We will want to create our own versions of these packages. The full \"lineage\" is shown, from RHEL -&gt; CentOS -&gt; Rocky (Where applicable)</p> Package Notes redhat-indexhtml -&gt; centos-indexhtml -&gt; rocky-indexhtml Here redhat-logos -&gt; centos-logos -&gt; rocky-logos Here redhat-release-* -&gt; centos-release -&gt; rocky-release Here centos-backgrounds -&gt; rocky-backgrounds Provided by logos centos-linux-repos -&gt; rocky-repos Here centos-obsolete-packages Here"},{"location":"legacy/debrand_list/#packages-that-exist-in-rhel-but-not-in-centos","title":"Packages that Exist in RHEL, but not in CentOS","text":"<p>For sake of complete information, here is a list of packages that are in RHEL 8, but do not exist in CentOS 8. 
We do not need to worry about these packages:</p> <ul> <li>insights-client</li> <li>Red_Hat_Enterprise_Linux-Release_Notes-8-*</li> <li>redhat-access-gui</li> <li>redhat-bookmarks</li> <li>subscription-manager-migration</li> <li>subscription-manager-migration-data</li> </ul>"},{"location":"legacy/koji_tagging/","title":"Koji Tagging Strategy","text":"<p>This document covers how the Rocky Linux Release Engineering Team handles the tagging for builds in Koji and how it affects the overall build process.</p>"},{"location":"legacy/koji_tagging/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Mattermost Contacts <code>@label</code> <code>@mustafa</code> <code>@neil</code> <code>@tgo</code> Mattermost Channels <code>~Development</code>"},{"location":"legacy/koji_tagging/#what-is-koji","title":"What is Koji?","text":"<p>Koji is the build system used for Rocky Linux, as well as CentOS, Fedora, and likely others. Red Hat uses a variant of Koji called \"brew\" with similar functionality and usage. Koji uses mock, a common RPM building utility, to build RPMs in a chroot environment.</p>"},{"location":"legacy/koji_tagging/#architecture-of-koji","title":"Architecture of Koji","text":""},{"location":"legacy/koji_tagging/#components","title":"Components","text":"<p>Koji comprises multiple components:</p> <ul> <li><code>koji-hub</code>, which is the center of all Koji operations. It runs XML-RPC and relies on other components to call it for actions. This piece will also talk to the database and is one component that has write access to the filesystem.</li> <li><code>kojid</code>, which is the daemon that runs on the builder nodes. Its responsibility is to talk to the hub for actions it can or must perform, for example, building an RPM or install images. 
But that is not all that it can do.</li> <li><code>koji-web</code> is a set of scripts that provides the web interface that anyone can see at our koji.</li> <li><code>koji</code> is the command line utility that is commonly used - it is a wrapper of the various API commands that can be called. In our environment, it requires a login via kerberos.</li> <li><code>kojira</code> is a component that ensures repodata is updated among the build tags.</li> </ul>"},{"location":"legacy/koji_tagging/#tags","title":"Tags","text":"<p>Tags are the most important part of the koji ecosystem. With tags, you can have specific repository build roots for the entire distribution or just a simple subset of builds that should not pollute the main build tags (for example, for SIGs where a package or two might be newer (or even older) than what's in BaseOS/AppStream).</p> <p>Using tags, you can set up what is called \"inheritance\". For example, you can have a tag named <code>dist-rocky8-build</code> that happens to inherit <code>dist-rocky8-updates-build</code>, which will likely have a newer set of packages than the former. Inheritance, in a way, can be considered setting \"dnf priorities\" if you've done that before. Another way to look at it is \"ordering\" and \"what comes first\".</p> <p>Generally, targets reference these tags to determine where packages are built and where the resulting builds land.</p>"},{"location":"legacy/koji_tagging/#tag-strategy","title":"Tag Strategy","text":"<p>The question that we get is \"what's the difference between a build and an updates-build tag\" - It's all about the inheritance. For example, let's take a look at <code>dist-rocky8-build</code></p> <pre><code> dist-rocky8-build\n el8\n dist-rocky8\n build-modules\n . . .\n</code></pre> <p>In this tag, you can see that this build tag inherits el8 packages first, and then the packages in dist-rocky8, and then build-modules. 
This is generally where \"base\" packages start out, and a lot of them won't be updated or even change during the lifecycle of the version.</p> <pre><code>dist-rocky8-updates-build\n el8\n dist-rocky8-updates\n dist-rocky8\n dist-rocky8-build\n build-modules\n</code></pre> <p>This one is a bit different. Notice that it inherits el8 first, and then dist-rocky8-updates, which inherits dist-rocky8. And then it also pulls in dist-rocky8-build, the previous tag we were talking about. This tag is where updates for a minor release are sent to.</p> <pre><code>dist-rocky8_4-updates-build\n el8_4\n dist-rocky8-updates\n dist-rocky8\n dist-rocky8-build\n el8\n build-modules\n</code></pre> <p>Here's a more interesting one. Notice something? It's pretty similar to the last one, but see how it's named el8_4 instead? This is where updates during 8.4 are sent to, and that's how they get tagged as <code>.el8_4</code> on the RPMs. The <code>el8_4</code> tag contains a build macros package that instructs the <code>%dist</code> tag to be set that way. When 8.5 comes out, we'll basically have the same setup.</p> <p>At the end of the day, builds that happen in these updates-build tags get dropped in dist-rocky8-updates.</p>"},{"location":"legacy/koji_tagging/#what-about-modules","title":"What about modules?","text":"<p>Modules are a bit tricky. We generally don't touch how MBS does its tags or what's going on there. When builds are being done with the modules, they do end up using the el8 packages in some manner or form. The modules are separated entirely from the main tags though, so they don't pollute the main tags. 
You don't want a situation where, say, you build the latest ruby, but something builds off the default version of ruby provided in <code>el8</code>, and now you're in trouble and get dnf filtering issues.</p>"},{"location":"legacy/koji_tagging/#how-do-we-determine-what-is-part-of-a-compose","title":"How do we determine what is part of a compose?","text":"<p>There are special tags that have a <code>-compose</code> suffix. These tags are used as a way to pull down packages for repository building during the pungi process.</p>"},{"location":"rpm/","title":"RPM","text":"<p>This section is primarily for documentation and useful information as it pertains to package building and modularity. Use the menu on the left side to find the information you're looking for.</p>"},{"location":"rpm/local_module_builds/","title":"Local Module Builds","text":"<p>Within the Fedora and Red Hat ecosystem, modularity is unfortunately both a blessing and a curse. It might be more one way or the other.</p> <p>This page is primarily about how to do local builds for modules, including the final formatting of the module yaml description that will have to be imported into the repo via <code>modifyrepo_c</code>.</p> <p>Note that the below is based on how <code>lazybuilder</code> performs module builds, which was made to be close to MBS+Koji and is not perfect. 
This is mostly used as a reference.</p>"},{"location":"rpm/local_module_builds/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Email Contact infrastructure@rockylinux.org Mattermost Contacts <code>@label</code> <code>@mustafa</code> <code>@neil</code> <code>@tgo</code> Mattermost Channels <code>~Development</code>"},{"location":"rpm/local_module_builds/#building-local-modules","title":"Building Local Modules","text":"<p>This section explains what it's like to build local modules, what you can do, and what you can expect.</p>"},{"location":"rpm/local_module_builds/#module-source-transmodrification-pulling-sources","title":"Module Source, \"transmodrification\", pulling sources","text":"<p>The module source typically lives in a <code>SOURCES</code> directory in a module git repo with the name of <code>modulemd.src.txt</code>. This is a basic version that could be used to do a module build. Each package listed is a reference to the stream version for that particular module.</p> <pre><code>document: modulemd\nversion: 2\ndata:\n stream: 1.4\n summary: 389 Directory Server (base)\n description: &gt;-\n 389 Directory Server is an LDAPv3 compliant server. The base package includes\n the LDAP server and command line utilities for server administration.\n license:\n module:\n - MIT\n dependencies:\n - buildrequires:\n nodejs: [10]\n platform: [el8]\n requires:\n platform: [el8]\n filter:\n rpms:\n - cockpit-389-ds\n components:\n rpms:\n 389-ds-base:\n rationale: Package in api\n ref: stream-1.4-rhel-8.4.0\n arches: [aarch64, ppc64le, s390x, x86_64]\n</code></pre> <p>Notice <code>ref</code>? That's the reference point. When a \"transmodrification\" occurs, the process is supposed to look at each RPM repo in the components['rpms'] list. The branch name that this module data lives in will be the basis of how it determines what the new references will be. 
In this example, the branch name is <code>r8-stream-1.4</code> so when we do the \"conversion\", it should become a git commit hash of the last commit in the branch <code>r8-stream-1.4</code> for that particular rpm component.</p> <pre><code>document: modulemd\nversion: 2\ndata:\n stream: \"1.4\"\n summary: 389 Directory Server (base)\n description: 389 Directory Server is an LDAPv3 compliant server. The base package\n includes the LDAP server and command line utilities for server administration.\n license:\n module:\n - MIT\n dependencies:\n - buildrequires:\n nodejs:\n - \"10\"\n platform:\n - el8\n requires:\n platform:\n - el8\n filter:\n rpms:\n - cockpit-389-ds\n components:\n rpms:\n 389-ds-base:\n rationale: Package in api\n ref: efe94eb32d597765f49b7b1528ba9881e1f29327\n arches:\n - aarch64\n - ppc64le\n - s390x\n - x86_64\n</code></pre> <p>See the reference now? It's now a commit hash that refers directly to 389-ds-base on branch <code>r8-stream-1.4</code>, being the last commit/tag. See the glossary at the end of this page for more information, as it can be a commit hash, branch, or tag name.</p>"},{"location":"rpm/local_module_builds/#configuring-macros-and-contexts","title":"Configuring Macros and Contexts","text":"<p>Traditionally within an MBS and Koji system, there are several macros that are created and are usually unique per module stream. There are certain components that work together to create a unique <code>%dist</code> tag based on several factors. 
To summarize, here's what generally happens:</p> <ul> <li>A module version is formed as <code>M0m00YYYYMMDDhhmmss</code>, which would be the major version, 0, minor version, 0, and then a timestamp.</li> <li> <p>Select components are brought together and a sha1 hash is made, shortened to 8 characters for the context</p> <ul> <li>The runtime context is typically the \"dependencies\" section of the module source, calculated to sha1</li> <li>The build context is the <code>xmd['mbs']['buildrequires']</code> data that koji generates and is output into <code>module.txt</code>, calculated to sha1</li> <li>The runtime and build contexts are combined <code>BUILD:RUNTIME</code>, a sha1 is calculated, and then shortened to 8</li> <li>This context is typically the one that changes less often</li> </ul> </li> <li> <p>Select components are brought together and a sha1 hash is made, shortened to 8 characters for the dist tag</p> <ul> <li>The module name, stream, version, and context are all brought together as <code>name.stream.version.context</code>, calculated to sha1</li> </ul> </li> <li> <p>The <code>%dist</code> tag is given a format of <code>.module+elX.Y.Z+000+00000000</code> (note: fedora uses <code>.module_fcXX+000+00000000</code>)</p> <ul> <li>X is the major version, Y is the minor version, Z is typically 0.</li> <li>The second number is the iteration, aka the module number. If you've done 500 module builds, the next one would be 501, regardless of module.</li> <li>The last set is a context hash generated earlier in the step above</li> </ul> </li> </ul>"},{"location":"rpm/local_module_builds/#configuring-the-macros","title":"Configuring the Macros","text":"<p>In koji+MBS, a module macros package is made that defines the module macros. In lazybuilder, we skip that and define the macros directly. For example, in mock, we drop a file with all the macros we need. Here's an example of 389-ds. 
The file name is <code>macros.zz-modules</code> to ensure these macros are picked up last and will take precedence and override macros of similar names, especially the <code>%dist</code> tag.</p> <pre><code>rpmbuild# cat /etc/rpm/macros.zz-modules\n\n%dist .module_el8.4.0+636+837ee950\n%modularitylabel 389-ds:1.4:8040020210810203142:866effaa\n%_module_build 1\n%_module_name 389-ds\n%_module_stream 1.4\n%_module_version 8040020210810203142\n%_module_context 866effaa\n</code></pre> <p>The <code>%dist</code> tag is honestly the most important piece here. But all of these tags are required regardless.</p>"},{"location":"rpm/local_module_builds/#build-opts-macros","title":"Build Opts Macros","text":"<p>Some modules may have additional buildopts macros. Perl is a great example of this. When koji+MBS make their module macros package for the build, they combine the module macros and the build opts macros together into one file. It will be the same exact file name each time.</p> <pre><code>rpmbuild# cat /etc/rpm/macros.zz-modules\n\n# Module macros\n%dist .module+el8.4.0+463+10533ad3\n%modularitylabel perl:5.24:8040020210602173155:162f5753\n%_module_build 1\n%_module_name perl\n%_module_stream 5.24\n%_module_version 8040020210602173155\n%_module_context 162f5753\n\n# Build Opts macros\n%_with_perl_enables_groff 1\n%_without_perl_enables_syslog_test 1\n%_with_perl_enables_systemtap 1\n%_without_perl_enables_tcsh 1\n%_without_perl_Compress_Bzip2_enables_optional_test 1\n%_without_perl_CPAN_Meta_Requirements_enables_optional_test 1\n%_without_perl_IPC_System_Simple_enables_optional_test 1\n%_without_perl_LWP_MediaTypes_enables_mailcap 1\n%_without_perl_Module_Build_enables_optional_test 1\n%_without_perl_Perl_OSType_enables_optional_test 1\n%_without_perl_Pod_Perldoc_enables_tk_test 1\n%_without_perl_Software_License_enables_optional_test 1\n%_without_perl_Sys_Syslog_enables_optional_test 1\n%_without_perl_Test_Harness_enables_optional_test 
1\n%_without_perl_URI_enables_Business_ISBN 1\n</code></pre>"},{"location":"rpm/local_module_builds/#built-module-example","title":"Built Module Example","text":"<p>Let's break down an example of <code>389-ds</code> - It's a simple module. Let's start with <code>modulemd.txt</code>, generated during a module build and before packages are built. Notice how it has <code>xmd</code> data. That is an integral part of making the context, though it's mostly information for koji and MBS and is generated on the fly and used throughout the build process for each arch. In the context of lazybuilder, it creates fake data to essentially fill the gap of not having MBS+Koji in the first place. The comments will point out what's used to make the contexts.</p> <pre><code>---\ndocument: modulemd\nversion: 2\ndata:\n name: 389-ds\n stream: 1.4\n version: 8040020210810203142\n context: 866effaa\n summary: 389 Directory Server (base)\n description: &gt;-\n 389 Directory Server is an LDAPv3 compliant server. 
The base package includes\n the LDAP server and command line utilities for server administration.\n license:\n module:\n - MIT\n xmd:\n mbs:\n # This section xmd['mbs']['buildrequires'] is used to generate the build context\n # This is typically made before hand and is used with the dependencies section\n # to make the context listed above.\n buildrequires:\n nodejs:\n context: 30b713e6\n filtered_rpms: []\n koji_tag: module-nodejs-10-8030020210426100849-30b713e6\n ref: 4589c1afe3ab66ffe6456b9b4af4cc981b1b7cdf\n stream: 10\n version: 8030020210426100849\n platform:\n context: 00000000\n filtered_rpms: []\n koji_tag: module-rocky-8.4.0-build\n ref: virtual\n stream: el8.4.0\n stream_collision_modules:\n ursine_rpms:\n version: 2\n commit: 53f7648dd6e54fb156b16302eb56bacf67a9024d\n mse: TRUE\n rpms:\n 389-ds-base:\n ref: efe94eb32d597765f49b7b1528ba9881e1f29327\n scmurl: https://git.rockylinux.org/staging/modules/389-ds?#53f7648dd6e54fb156b16302eb56bacf67a9024d\n ursine_rpms: []\n # Dependencies is part of the context combined with the xmd data. This data\n # is already in the source yaml pulled for the module build in the first place.\n # Note that in the source, it's usually `elX` rather than `elX.Y.Z` unless\n # explicitly configured that way.\n dependencies:\n - buildrequires:\n nodejs: [10]\n platform: [el8.4.0]\n requires:\n platform: [el8]\n filter:\n rpms:\n - cockpit-389-ds\n components:\n rpms:\n 389-ds-base:\n rationale: Package in api\n repository: git+https://git.rockylinux.org/staging/rpms/389-ds-base\n cache: http://pkgs.fedoraproject.org/repo/pkgs/389-ds-base\n ref: efe94eb32d597765f49b7b1528ba9881e1f29327\n arches: [aarch64, ppc64le, s390x, x86_64]\n...\n</code></pre> <p>Below is a version meant to be imported into a repo. This is after the build's completion. You'll notice that some fields are either empty or missing from above or even from the git repo's source that we pulled from initially. You'll also notice that xmd is now an empty dictionary. 
This is on purpose. While it is optional in the repo module data, the build system typically gives it <code>{}</code>.</p> <pre><code>---\ndocument: modulemd\nversion: 2\ndata:\n name: 389-ds\n stream: 1.4\n version: 8040020210810203142\n context: 866effaa\n arch: x86_64\n summary: 389 Directory Server (base)\n description: &gt;-\n 389 Directory Server is an LDAPv3 compliant server. The base package includes\n the LDAP server and command line utilities for server administration.\n license:\n module:\n - MIT\n content:\n - GPLv3+\n # This is intentionally an empty dictionary here. The key itself is still required.\n xmd: {}\n dependencies:\n - buildrequires:\n nodejs: [10]\n platform: [el8.4.0]\n requires:\n platform: [el8]\n filter:\n rpms:\n - cockpit-389-ds\n components:\n rpms:\n 389-ds-base:\n rationale: Package in api\n ref: efe94eb32d597765f49b7b1528ba9881e1f29327\n arches: [aarch64, ppc64le, s390x, x86_64]\n artifacts:\n rpms:\n - 389-ds-base-0:1.4.3.16-19.module+el8.4.0+636+837ee950.src\n - 389-ds-base-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - 389-ds-base-debuginfo-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - 389-ds-base-debugsource-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - 389-ds-base-devel-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - 389-ds-base-legacy-tools-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - 389-ds-base-legacy-tools-debuginfo-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - 389-ds-base-libs-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - 389-ds-base-libs-debuginfo-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - 389-ds-base-snmp-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - 389-ds-base-snmp-debuginfo-0:1.4.3.16-19.module+el8.4.0+636+837ee950.x86_64\n - python3-lib389-0:1.4.3.16-19.module+el8.4.0+636+837ee950.noarch\n...\n</code></pre> <p>The final \"repo\" of modules (per arch) is eventually made with a designation like:</p> 
<pre><code>module-NAME-STREAM-VERSION-CONTEXT\n\nmodule-389-ds-1.4-8040020210810203142-866effaa\n</code></pre> <p>This is what pungi and other utilities bring in and then combine into a single repo, generally taking care of the module.yaml along the way.</p>"},{"location":"rpm/local_module_builds/#default-modules","title":"Default Modules","text":"<p>Most modules will have a default stream that is used when a dnf install is called. For example, in EL8 if you run <code>dnf install postgresql-server</code>, the package that gets installed is version 10. If a module doesn't have a default set, a <code>dnf install</code> will typically not work. To ensure a module package installs using the default stream, without the module having to be enabled explicitly, you need default information. Here's the postgresql example.</p> <pre><code>---\ndocument: modulemd-defaults\nversion: 1\ndata:\n module: postgresql\n stream: 10\n profiles:\n 9.6: [server]\n 10: [server]\n 12: [server]\n 13: [server]\n...\n</code></pre> <p>Even if a module only has one stream, default module information is still needed to ensure that a package can be installed without enabling the module explicitly. Here's an example.</p> <pre><code>---\ndocument: modulemd-defaults\nversion: 1\ndata:\n module: httpd\n stream: 2.4\n profiles:\n 2.4: [common]\n...\n</code></pre> <p>This type of information is expected by pungi as a default modules repo that can be configured. These YAMLs are not kept with the modules themselves. They are brought in when the repos are being created in the first place.</p> <p>In the context of lazybuilder, if defaults are enabled, it checks for them, and the final repo made from the results will have that information right at the top. See the references below for the jinja template that lazybuilder uses to generate this information.</p> <p>As a final note, let's say an update comes in for postgresql and you want to ensure that the old version of postgresql 10 and the updated version of 10 can stay together. 
This is when the final module data is combined together and then added into the repo using <code>modifyrepo_c</code>. Note, though, that you do not have to provide the modulemd-defaults again. Providing it once, such as when the repo was first created, is enough, and it will still work.</p>"},{"location":"rpm/local_module_builds/#building-the-packages","title":"Building the packages","text":"<p>So we have an idea of how the module data itself is made and managed. All that's left to do is a chain build in mock. The kicker is that you need to pay attention to the build order that is assigned to each package being built. If a build order isn't assigned, assume that it's group 0 and will be built first. This does not stop 0 from being assigned explicitly, but just know that <code>buildorder</code> being omitted implies group 0. See below.</p> <pre><code> components:\n rpms:\n first:\n rationale: core functions\n ref: 3.0\n buildorder: 0\n second:\n rationale: ui\n ref: latest\n buildorder: 0\n third:\n rationale: front end\n ref: latest\n buildorder: 1\n</code></pre> <p>What this shows is that the packages in build group 0 can be built simultaneously in the context of Koji+MBS. For a local build, you'd just put them first in the list. Basically, each of these groups has to be done, completed, and available right away for the next package or set of packages. Koji+MBS does this automatically since it has a tag/repo that gets updated on each completion, and the builds are done in parallel.</p> <p>For mock, a chain build will always have an internal repo that it uses, so each completed package will have a final createrepo done on it before moving on to the next package in the list. 
It's not parallel like koji, but it's still consistent.</p> <p>Essentially a mock command would look like:</p> <pre><code>mock -r module.cfg \\\n --chain \\\n --localrepo /var/lib/mock/modulename \\\n first.src.rpm \\\n second.src.rpm \\\n third.src.rpm\n</code></pre>"},{"location":"rpm/local_module_builds/#making-the-final-yaml-and-repo","title":"Making the final YAML and repo","text":"<p>It's probably wise to have a template to build the module repo data from. This is akin to having a script to \"transmodrify\" the module data into the proper form to be used. Having a template will simplify a lot of things and will make it easier to convert the data from git, plus the final build artifacts, into the module data. The lazybuilder template is a good starting point, though it is a bit ugly, being made in jinja. It could be made better using python or even golang.</p> <p>Regardless, you should have it templated or scripted somehow. See the references in the next section.</p>"},{"location":"rpm/local_module_builds/#a-note-about-virtual-modules","title":"A note about virtual modules","text":"<p>Virtual modules are weird. They do not have a module dist tag, and they are just built like... any other RPM. The difference here is that while a virtual module should have an api['rpms'] list, it will not have an artifacts section.</p> <p>A prime example of this is perl:5.26 in EL8. perl 5.26 is the default version. If you install perl-interpreter, you'll get <code>perl-interpreter-5.26.3-419.el8_4.1.x86_64</code>. Notice how it doesn't have a module tag? That's because it wasn't built directly in MBS. There are not many virtual modules, but it is important to keep in mind that they do in fact exist. The module yaml itself will not have a list of packages to build, aka a \"components\" section. 
Here's the current EL8 perl 5.26 example.</p> <pre><code>document: modulemd\nversion: 2\ndata:\n summary: Practical Extraction and Report Language\n description: &gt;\n Perl is a high-level programming language with roots in C, sed, awk\n and shell scripting. Perl is good at handling processes and files, and\n is especially good at handling text. Perl's hallmarks are practicality\n and efficiency. While it is used to do a lot of different things,\n Perl's most common applications are system administration utilities\n and web programming.\n license:\n module: [ MIT ]\n dependencies:\n - buildrequires:\n platform: [el8]\n requires:\n platform: [el8]\n references:\n community: https://docs.pagure.org/modularity/\n profiles:\n common:\n description: Interpreter and all Perl modules bundled within upstream Perl.\n rpms:\n - perl\n minimal:\n description: Only the interpreter as a standalone executable.\n rpms:\n - perl-interpreter\n api:\n rpms:\n - perl\n - perl-Archive-Tar\n - perl-Attribute-Handlers\n - perl-autodie\n - perl-B-Debug\n - perl-bignum\n - perl-Carp\n - perl-Compress-Raw-Bzip2\n - perl-Compress-Raw-Zlib\n - perl-Config-Perl-V\n - perl-constant\n - perl-CPAN\n - perl-CPAN-Meta\n - perl-CPAN-Meta-Requirements\n - perl-CPAN-Meta-YAML\n - perl-Data-Dumper\n - perl-DB_File\n - perl-devel\n - perl-Devel-Peek\n - perl-Devel-PPPort\n - perl-Devel-SelfStubber\n - perl-Digest\n - perl-Digest-MD5\n - perl-Digest-SHA\n - perl-Encode\n - perl-Encode-devel\n - perl-encoding\n - perl-Env\n - perl-Errno\n - perl-experimental\n - perl-Exporter\n - perl-ExtUtils-CBuilder\n - perl-ExtUtils-Command\n - perl-ExtUtils-Embed\n - perl-ExtUtils-Install\n - perl-ExtUtils-MakeMaker\n - perl-ExtUtils-Manifest\n - perl-ExtUtils-Miniperl\n - perl-ExtUtils-MM-Utils\n - perl-ExtUtils-ParseXS\n - perl-File-Fetch\n - perl-File-Path\n - perl-File-Temp\n - perl-Filter\n - perl-Filter-Simple\n - perl-generators\n - perl-Getopt-Long\n - perl-HTTP-Tiny\n - perl-interpreter\n - perl-IO\n - 
perl-IO-Compress\n - perl-IO-Socket-IP\n - perl-IO-Zlib\n - perl-IPC-Cmd\n - perl-IPC-SysV\n - perl-JSON-PP\n - perl-libnet\n - perl-libnetcfg\n - perl-libs\n - perl-Locale-Codes\n - perl-Locale-Maketext\n - perl-Locale-Maketext-Simple\n - perl-macros\n - perl-Math-BigInt\n - perl-Math-BigInt-FastCalc\n - perl-Math-BigRat\n - perl-Math-Complex\n - perl-Memoize\n - perl-MIME-Base64\n - perl-Module-CoreList\n - perl-Module-CoreList-tools\n - perl-Module-Load\n - perl-Module-Load-Conditional\n - perl-Module-Loaded\n - perl-Module-Metadata\n - perl-Net-Ping\n - perl-open\n - perl-Params-Check\n - perl-parent\n - perl-PathTools\n - perl-Perl-OSType\n - perl-perlfaq\n - perl-PerlIO-via-QuotedPrint\n - perl-Pod-Checker\n - perl-Pod-Escapes\n - perl-Pod-Html\n - perl-Pod-Parser\n - perl-Pod-Perldoc\n - perl-Pod-Simple\n - perl-Pod-Usage\n - perl-podlators\n - perl-Scalar-List-Utils\n - perl-SelfLoader\n - perl-Socket\n - perl-Storable\n - perl-Sys-Syslog\n - perl-Term-ANSIColor\n - perl-Term-Cap\n - perl-Test\n - perl-Test-Harness\n - perl-Test-Simple\n - perl-tests\n - perl-Text-Balanced\n - perl-Text-ParseWords\n - perl-Text-Tabs+Wrap\n - perl-Thread-Queue\n - perl-threads\n - perl-threads-shared\n - perl-Time-HiRes\n - perl-Time-Local\n - perl-Time-Piece\n - perl-Unicode-Collate\n - perl-Unicode-Normalize\n - perl-utils\n - perl-version\n # We do not build any packages because they are already available\n # in BaseOS or AppStream repository. We cannnot replace BaseOS\n # packages.\n #components:\n # rpms:\n</code></pre>"},{"location":"rpm/local_module_builds/#reference","title":"Reference","text":"<p>Below is a reference for what's in a module's data. Some keys are optional. 
There'll also be an example from lazybuilder, which uses jinja to template out the final data that is used in a repo.</p>"},{"location":"rpm/local_module_builds/#module-template-and-known-keys","title":"Module Template and Known Keys","text":"<p>Below are the keys that are expected in the YAML for both defaults and the actual module build itself. Each item will have information on the type of value it is (eg, is it a string, list), if it's optional or mandatory, plus comments that may point out what's valid in source data rather than final repo data. Some of the data below may not be used in EL, but it's important to know what is possible and what could be expected.</p> <p>This information was copied from: Fedora Modularity</p> <pre><code># Document type identifier\n# `document: modulemd-defaults` describes the default stream and profiles for\n# a module.\ndocument: modulemd-defaults\n# Module metadata format version\nversion: 1\ndata:\n # Module name that the defaults are for, required.\n module: foo\n # A 64-bit unsigned integer. Use YYYYMMDDHHMM to easily identify the last\n # modification time. Use UTC for consistency.\n # When merging, entries with a newer 'modified' value will override any\n # earlier values. (optional)\n modified: 201812071200\n # Module stream that is the default for the module, optional.\n stream: \"x.y\"\n # Module profiles indexed by the stream name, optional\n # This is a dictionary of stream names to a list of default profiles to be\n # installed.\n profiles:\n 'x.y': []\n bar: [baz, snafu]\n # System intents dictionary, optional. 
Indexed by the intent name.\n # Overrides stream/profiles for intent.\n intents:\n desktop:\n # Module stream that is the default for the module, required.\n # Overrides the above values for systems with this intent.\n stream: \"y.z\"\n # Module profiles indexed by the stream name, required\n # Overrides the above values for systems with this intent.\n # From the above, foo:x.y has \"other\" as the value and foo:bar has\n # no default profile.\n profiles:\n 'y.z': [blah]\n 'x.y': [other]\n server:\n # Module stream that is the default for the module, required.\n # Overrides the above values for systems with this intent.\n stream: \"x.y\"\n # Module profiles indexed by the stream name, required\n # Overrides the above values for systems with this intent.\n # From the above foo:x.y and foo:bar have no default profile.\n profiles:\n 'x.y': []\n</code></pre> <p>Note: The glossary explains this, but remember that AUTOMATIC means the value will typically not be in the module data itself, and will likely be in the repo data instead. There are also spots where things are MANDATORY but do not show up in a lot of modules, because the implicit/default option turns off that section.</p> <p>Note: There is a large chunk of these keys and values that state they are AUTOMATIC, and they do show up in the module data as a result of the module data source and/or the build system doing work. An example of this is arch, among others.</p> <pre><code>##############################################################################\n# Glossary: #\n# #\n# build system: The process by which a module is built and packaged. In many #\n# cases, this will be the Module Build Service tool, but this term is used #\n# as a catch-all to describe any mechanism for producing a yum repository #\n# containing modular content from input module metadata files. #\n# #\n# #\n# == Attribute Types == #\n# #\n# MANDATORY: Attributes of this type must be filled in by the packager of #\n# this module. 
They must also be preserved and provided in the output #\n# metadata produced by the build system for inclusion into a repository. #\n# #\n# OPTIONAL: Attributes of this type may be provided by the packager of this #\n# module, when appropriate. If they are provided, they must also be #\n# preserved and provided in the output metadata produced by the build #\n# system for inclusion into a repository. #\n# #\n# AUTOMATIC: Attributes of this type must be present in the repository #\n# metadata, but they may be left unspecified by the packager. In this case, #\n# the build system is responsible for generating an appropriate value for #\n# the attribute and including it in the repository metadata. If the packager #\n# specifies this attribute explicitly, it must be preserved and provided in #\n# the output metadata for inclusion into a repository. #\n# #\n# The definitions above describe the expected behavior of the build system #\n# operating in its default configuration. It is permissible for the build #\n# system to override user-provided entries through non-default operating #\n# modes. If such changes are made, all items indicated as being required for #\n# the output repository must still be present. 
#\n##############################################################################\n\n\n# Document type identifier\n# `document: modulemd` describes the contents of a module stream\ndocument: modulemd\n\n# Module metadata format version\nversion: 2\n\ndata:\n # name:\n # The name of the module\n # Filled in by the build system, using the VCS repository name as the name\n # of the module.\n #\n # Type: AUTOMATIC\n #\n # Mandatory for module metadata in a yum/dnf repository.\n name: foo\n\n # stream:\n # Module update stream\n # Filled in by the buildsystem, using the VCS branch name as the name of\n # the stream.\n #\n # Type: AUTOMATIC\n #\n # Mandatory for module metadata in a yum/dnf repository.\n stream: \"latest\"\n\n # version:\n # Module version, 64-bit unsigned integer\n # If this value is unset (or set to zero), it will be filled in by the\n # buildsystem, using the VCS commit timestamp. Module version defines the\n # upgrade path for the particular update stream.\n #\n # Type: AUTOMATIC\n #\n # Mandatory for module metadata in a yum/dnf repository.\n version: 20160927144203\n\n # context:\n # Module context flag\n # The context flag serves to distinguish module builds with the\n # same name, stream and version and plays an important role in\n # automatic module stream name expansion.\n #\n # If 'static_context' is unset or equal to FALSE:\n # Filled in by the buildsystem. A short hash of the module's name,\n # stream, version and its expanded runtime dependencies. The exact\n # mechanism for generating the hash is unspecified.\n #\n # Type: AUTOMATIC\n #\n # Mandatory for module metadata in a yum/dnf repository.\n #\n # If 'static_context' is set to True:\n # The context flag is a string of up to thirteen [a-zA-Z0-9_] characters\n # representing a build and runtime configuration for this stream. 
This\n # string is arbitrary but must be unique in this module stream.\n #\n # Type: MANDATORY\n static_context: false\n context: c0ffee43\n\n # arch:\n # Module artifact architecture\n # Contains a string describing the module's artifacts' main hardware\n # architecture compatibility, distinguishing the module artifact,\n # e.g. a repository, from others with the same name, stream, version and\n # context. This is not a generic hardware family (i.e. basearch).\n # Examples: i386, i486, armv7hl, x86_64\n # Filled in by the buildsystem during the compose stage.\n #\n # Type: AUTOMATIC\n #\n # Mandatory for module metadata in a yum/dnf repository.\n arch: x86_64\n\n # summary:\n # A short summary describing the module\n #\n # Type: MANDATORY\n #\n # Mandatory for module metadata in a yum/dnf repository.\n summary: An example module\n\n # description:\n # A verbose description of the module\n #\n # Type: MANDATORY\n #\n # Mandatory for module metadata in a yum/dnf repository.\n description: &gt;-\n A module for the demonstration of the metadata format. Also,\n the obligatory lorem ipsum dolor sit amet goes right here.\n\n # servicelevels:\n # Service levels\n # This is a dictionary of important dates (and possibly supplementary data\n # in the future) that describes the end point of certain functionality,\n # such as the date when the module will transition to \"security fixes only\"\n # or go completely end-of-life.\n # Filled in by the buildsystem. Service level names might have special\n # meaning to other systems. 
Defined externally.\n #\n # Type: AUTOMATIC\n servicelevels:\n rawhide:\n # EOL dates are the ISO 8601 format.\n eol: 2077-10-23\n stable_api:\n eol: 2077-10-23\n bug_fixes:\n eol: 2077-10-23\n security_fixes:\n eol: 2077-10-23\n\n # license:\n # Module and content licenses in the Fedora license identifier\n # format\n #\n # Type: MANDATORY\n license:\n # module:\n # Module license\n # This list covers licenses used for the module metadata and\n # possibly other files involved in the creation of this specific\n # module.\n #\n # Type: MANDATORY\n module:\n - MIT\n\n # content:\n # Content license\n # A list of licenses used by the packages in the module.\n # This should be populated by build tools, not the module author.\n #\n # Type: AUTOMATIC\n #\n # Mandatory for module metadata in a yum/dnf repository.\n content:\n - ASL 2.0\n - GPL+ or Artistic\n\n # xmd:\n # Extensible metadata block\n # A dictionary of user-defined keys and values.\n # Defaults to an empty dictionary.\n #\n # Type: OPTIONAL\n xmd:\n some_key: some_data\n\n # dependencies:\n # Module dependencies, if any\n # A list of dictionaries describing build and runtime dependencies\n # of this module. Each list item describes a combination of dependencies\n # this module can be built or run against.\n # Dependency keys are module names, dependency values are lists of\n # required streams. The lists can be both inclusive (listing compatible\n # streams) or exclusive (accepting every stream except for those listed).\n # An empty list implies all active existing streams are supported.\n # Requiring multiple streams at build time will result in multiple\n # builds. Requiring multiple streams at runtime implies the module\n # is compatible with all of them. If the same module streams are listed\n # in both the build time and the runtime block, the build tools translate\n # the runtime block so that it matches the stream the module was built\n # against. 
Multiple builds result in multiple output modulemd files.\n # See below for an example.\n # The example below illustrates how to build the same module in four\n # different ways, with varying build time and runtime dependencies.\n #\n # Type: OPTIONAL\n dependencies:\n # Build on all available platforms except for f27, f28 and epel7\n # After build, the runtime dependency will match the one used for\n # the build.\n - buildrequires:\n platform: [-f27, -f28, -epel7]\n requires:\n platform: [-f27, -f28, -epel7]\n\n # For platform:f27 perform two builds, one with buildtools:v1, another\n # with buildtools:v2 in the buildroot. Both will also utilize\n # compatible:v3. At runtime, buildtools isn't required and either\n # compatible:v3 or compatible:v4 can be installed.\n - buildrequires:\n platform: [f27]\n buildtools: [v1, v2]\n compatible: [v3]\n requires:\n platform: [f27]\n compatible: [v3, v4]\n\n # For platform:f28 builds, require either runtime:a or runtime:b at\n # runtime. Only one build is performed.\n - buildrequires:\n platform: [f28]\n requires:\n platform: [f28]\n runtime: [a, b]\n\n # For platform:epel7, build against against all available extras\n # streams and moreextras:foo and moreextras:bar. 
The number of builds\n # in this case will be 2 * &lt;the number of extras streams available&gt;.\n # At runtime, both extras and moreextras will match whatever stream was\n # used for build.\n - buildrequires:\n platform: [epel7]\n extras: []\n moreextras: [foo, bar]\n requires:\n platform: [epel7]\n extras: []\n moreextras: [foo, bar]\n\n # references:\n # References to external resources, typically upstream\n #\n # Type: OPTIONAL\n references:\n # community:\n # Upstream community website, if it exists\n #\n # Type: OPTIONAL\n community: http://www.example.com/\n\n # documentation:\n # Upstream documentation, if it exists\n #\n # Type: OPTIONAL\n documentation: http://www.example.com/\n\n # tracker:\n # Upstream bug tracker, if it exists\n #\n # Type: OPTIONAL\n tracker: http://www.example.com/\n\n # profiles:\n # Profiles define the end user's use cases for the module. They consist of\n # package lists of components to be installed by default if the module is\n # enabled. The keys are the profile names and contain package lists by\n # component type. There are several profiles defined below. Suggested\n # behavior for package managers is to just enable repository for selected\n # module. Then users are able to install packages on their own. If they\n # select a specific profile, the package manager should install all\n # packages of that profile.\n # Defaults to no profile definitions.\n #\n # Type: OPTIONAL\n profiles:\n\n # An example profile that defines a set of packages which are meant to\n # be installed inside a container image artifact.\n #\n # Type: OPTIONAL\n container:\n rpms:\n - bar\n - bar-devel\n\n # An example profile that delivers a minimal set of packages to\n # provide this module's basic functionality. 
This is meant to be used\n # on target systems where size of the distribution is a real concern.\n #\n # Type: Optional\n minimal:\n # A verbose description of the module, optional\n description: Minimal profile installing only the bar package.\n rpms:\n - bar\n\n # buildroot:\n # This is a special reserved profile name.\n #\n # This provides a listing of packages that will be automatically\n # installed into the buildroot of all component builds that are started\n # after a component builds with its `buildroot: True` option set.\n #\n # The primary purpose of this is for building RPMs that change\n # the build environment, such as those that provide new RPM\n # macro definitions that can be used by subsequent builds.\n #\n # Specifically, it is used to flesh out the build group in koji.\n #\n # Type: OPTIONAL\n buildroot:\n rpms:\n - bar-devel\n\n # srpm-buildroot:\n # This is a special reserved profile name.\n #\n # This provides a listing of packages that will be automatically\n # installed into the buildroot of all component builds that are started\n # after a component builds with its `srpm-buildroot: True` option set.\n #\n # The primary purpose of this is for building RPMs that change\n # the build environment, such as those that provide new RPM\n # macro definitions that can be used by subsequent builds.\n #\n # Very similar to the buildroot profile above, this is used by the\n # build system to specify any additional packages which should be\n # installed during the buildSRPMfromSCM step in koji.\n #\n # Type: OPTIONAL\n srpm-buildroot:\n rpms:\n - bar-extras\n\n # api:\n # Module API\n # Defaults to no API.\n #\n # Type: OPTIONAL\n api:\n # rpms:\n # The module's public RPM-level API.\n # A list of binary RPM names that are considered to be the\n # main and stable feature of the module; binary RPMs not listed\n # here are considered \"unsupported\" or \"implementation details\".\n # In the example here we don't list the xyz package as it's only\n # 
included as a dependency of xxx. However, we list a subpackage\n # of bar, bar-extras.\n # Defaults to an empty list.\n #\n # Type: OPTIONAL\n rpms:\n - bar\n - bar-extras\n - bar-devel\n - baz\n - xxx\n\n # filter:\n # Module component filters\n # Defaults to no filters.\n #\n # Type: OPTIONAL\n filter:\n # rpms:\n # RPM names not to be included in the module.\n # By default, all built binary RPMs are included. In the example\n # we exclude a subpackage of bar, bar-nonfoo from our module.\n # Defaults to an empty list.\n #\n # Type: OPTIONAL\n rpms:\n - baz-nonfoo\n\n # demodularized:\n # Artifacts which became non-modular\n # Defaults to no demodularization.\n # Type: OPTIONAL\n demodularized:\n # rpms:\n # A list of binary RPM package names which where removed from\n # a module. This list explains to a package mananger that the packages\n # are not part of the module anymore and up-to-now same-named masked\n # non-modular packages should become available again. This enables\n # moving a package from a module to a set of non-modular packages. The\n # exact implementation of the demodularization (e.g. whether it\n # applies to all modules or only to this stream) is defined by the\n # package manager.\n # Defaults to an empty list.\n #\n # Type: OPTIONAL\n rpms:\n - bar-old\n\n # buildopts:\n # Component build options\n # Additional per component type module-wide build options.\n #\n # Type: OPTIONAL\n buildopts:\n # rpms:\n # RPM-specific build options\n #\n # Type: OPTIONAL\n rpms:\n # macros:\n # Additional macros that should be defined in the\n # RPM buildroot, appended to the default set. Care should be\n # taken so that the newlines are preserved. 
Literal style\n # block is recommended, with or without the trailing newline.\n #\n # Type: OPTIONAL\n macros: |\n %demomacro 1\n %demomacro2 %{demomacro}23\n\n # whitelist:\n # Explicit list of package build names this module will produce.\n # By default the build system only allows components listed under\n # data.components.rpms to be built as part of this module.\n # In case the expected RPM build names do not match the component\n # names, the list can be defined here.\n # This list overrides rather then just extends the default.\n # List of package build names without versions.\n #\n # Type: OPTIONAL\n whitelist:\n - fooscl-1-bar\n - fooscl-1-baz\n - xxx\n - xyz\n\n # arches:\n # Instructs the build system to only build the\n # module on this specific set of architectures.\n # Includes specific hardware architectures, not families.\n # See the data.arch field for details.\n # Defaults to all available arches.\n #\n # Type: OPTIONAL\n arches: [i686, x86_64]\n\n # components:\n # Functional components of the module\n #\n # Type: OPTIONAL\n components:\n # rpms:\n # RPM content of the module\n # Keys are the VCS/SRPM names, values dictionaries holding\n # additional information.\n #\n # Type: OPTIONAL\n rpms:\n bar:\n # name:\n # The real name of the package, if it differs from the key in\n # this dictionary. 
Used when bootstrapping to build a\n # bootstrapping ref before building the package for real.\n #\n # Type: OPTIONAL\n name: bar-real\n\n # rationale:\n # Why is this component present.\n # A simple, free-form string.\n #\n # Type: MANDATORY\n rationale: We need this to demonstrate stuff.\n\n # repository:\n # Use this repository if it's different from the build\n # system configuration.\n #\n # Type: AUTOMATIC\n repository: https://pagure.io/bar.git\n\n # cache:\n # Use this lookaside cache if it's different from the\n # build system configuration.\n #\n # Type: AUTOMATIC\n cache: https://example.com/cache\n\n # ref:\n # Use this specific commit hash, branch name or tag for\n # the build. If ref is a branch name, the branch HEAD\n # will be used. If no ref is given, the master branch\n # is assumed.\n #\n # Type: AUTOMATIC\n ref: 26ca0c0\n\n # buildafter:\n # Use the \"buildafter\" value to specify that this component\n # must be be ordered later than some other entries in this map.\n # The values of this array come from the keys of this map and\n # not the real component name to enable bootstrapping.\n # Use of both buildafter and buildorder in the same document is\n # prohibited, as they will conflict.\n #\n # Note: The use of buildafter is not currently supported by the\n # Fedora module build system.\n #\n # Type: AUTOMATIC\n #\n # buildafter:\n # - baz\n\n # buildonly:\n # Use the \"buildonly\" value to indicate that all artifacts\n # produced by this component are intended only for building\n # this component and should be automatically added to the\n # data.filter.rpms list after the build is complete.\n # Defaults to \"false\" if not specified.\n #\n # Type: AUTOMATIC\n buildonly: false\n\n # baz builds RPM macros for the other components to use\n baz:\n rationale: Demonstrate updating the buildroot contents.\n\n # buildroot:\n # If buildroot is set to True, the packages listed in this\n # module's 'buildroot' profile will be installed into the\n # 
buildroot of any component built in buildorder/buildafter\n # batches begun after this one, without requiring that those\n # packages are listed among BuildRequires.\n #\n # The primary purpose of this is for building RPMs that change\n # the build environment, such as those that provide new RPM\n # macro definitions that can be used by subsequent builds.\n #\n # Defaults to \"false\" if not specified.\n #\n # Type: OPTIONAL\n buildroot: true\n\n # srpm-buildroot:\n # If srpm-buildroot is set to True, the packages listed in this\n # module's 'srpm-buildroot' profile will be installed into the\n # buildroot of any component built in buildorder/buildafter\n # batches begun after this one, without requiring that those\n # packages are listed among BuildRequires.\n #\n # The primary purpose of this is for building RPMs that change\n # the build environment, such as those that provide new RPM\n # macro definitions that can be used by subsequent builds.\n #\n # Defaults to \"false\" if not specified.\n #\n # Type: OPTIONAL\n srpm-buildroot: true\n\n # See component xyz for a complete description of buildorder\n #\n # build this component before any others so that the macros it\n # creates are available to all of them.\n buildorder: -1\n\n xxx:\n rationale: xxx demonstrates arches and multilib.\n\n # arches:\n # xxx is only available on the listed architectures.\n # Includes specific hardware architectures, not families.\n # See the data.arch field for details.\n # Instructs the build system to only build the\n # component on this specific set of architectures.\n # If data.buildopts.arches is also specified,\n # this must be a subset of those architectures.\n # Defaults to all available arches.\n #\n # Type: AUTOMATIC\n arches: [i686, x86_64]\n\n # multilib:\n # A list of architectures with multilib\n # installs, i.e. 
both i686 and x86_64\n # versions will be installed on x86_64.\n # Includes specific hardware architectures, not families.\n # See the data.arch field for details.\n # Defaults to no multilib.\n #\n # Type: AUTOMATIC\n multilib: [x86_64]\n\n xyz:\n rationale: xyz is a bundled dependency of xxx.\n\n # buildorder:\n # Build order group\n # When building, components are sorted by build order tag\n # and built in batches grouped by their buildorder value.\n # Built batches are then re-tagged into the buildroot.\n # Multiple components can have the same buildorder index\n # to map them into build groups.\n # Defaults to zero.\n # Integer, from an interval [-(2^63), +2^63-1].\n # In this example, bar, baz and xxx are built first in\n # no particular order, then tagged into the buildroot,\n # then, finally, xyz is built.\n # Use of both buildafter and buildorder in the same document is\n # prohibited, as they will conflict.\n #\n # Type: OPTIONAL\n buildorder: 10\n\n # modules:\n # Module content of this module\n # Included modules are built in the shared buildroot, together with\n # other included content. Keys are module names, values additional\n # component information. Note this only includes components and their\n # properties from the referenced module and doesn't inherit any\n # additional module metadata such as the module's dependencies or\n # component buildopts. 
The included components are built in their\n # defined buildorder as sub-build groups.\n #\n # Type: OPTIONAL\n modules:\n includedmodule:\n\n # rationale:\n # Why is this module included?\n #\n # Type: MANDATORY\n rationale: Included in the stack, just because.\n\n # repository:\n # Link to VCS repository that contains the modulemd file\n # if it differs from the buildsystem default configuration.\n #\n # Type: AUTOMATIC\n repository: https://pagure.io/includedmodule.git\n\n # ref:\n # See the rpms ref.\n #\n # Type: AUTOMATIC\n ref: somecoolbranchname\n\n # buildorder:\n # See the rpms buildorder.\n #\n # Type: AUTOMATIC\n buildorder: 100\n\n # artifacts:\n # Artifacts shipped with this module\n # This section lists binary artifacts shipped with the module, allowing\n # software management tools to handle module bundles. This section is\n # populated by the module build system.\n #\n # Type: AUTOMATIC\n artifacts:\n\n # rpms:\n # RPM artifacts shipped with this module\n # A set of NEVRAs associated with this module. An epoch number in the\n # NEVRA string is mandatory.\n #\n # Type: AUTOMATIC\n rpms:\n - bar-0:1.23-1.module_deadbeef.x86_64\n - bar-devel-0:1.23-1.module_deadbeef.x86_64\n - bar-extras-0:1.23-1.module_deadbeef.x86_64\n - baz-0:42-42.module_deadbeef.x86_64\n - xxx-0:1-1.module_deadbeef.x86_64\n - xxx-0:1-1.module_deadbeef.i686\n - xyz-0:1-1.module_deadbeef.x86_64\n\n # rpm-map:\n # The rpm-map exists to link checksums from repomd to specific\n # artifacts produced by this module. 
Any item in this list must match\n # an entry in the data.artifacts.rpms section.\n #\n # Type: AUTOMATIC\n rpm-map:\n\n # The digest-type of this checksum.\n #\n # Type: MANDATORY\n sha256:\n\n # The checksum of the artifact being sought.\n #\n # Type: MANDATORY\n ee47083ed80146eb2c84e9a94d0836393912185dcda62b9d93ee0c2ea5dc795b:\n\n # name:\n # The RPM name.\n #\n # Type: Mandatory\n name: bar\n\n # epoch:\n # The RPM epoch.\n # A 32-bit unsigned integer.\n #\n # Type: OPTIONAL\n epoch: 0\n\n # version:\n # The RPM version.\n #\n # Type: MANDATORY\n version: 1.23\n\n # release:\n # The RPM release.\n #\n # Type: MANDATORY\n release: 1.module_deadbeef\n\n # arch:\n # The RPM architecture.\n #\n # Type: MANDATORY\n arch: x86_64\n\n # nevra:\n # The complete RPM NEVRA.\n #\n # Type: MANDATORY\n nevra: bar-0:1.23-1.module_deadbeef.x86_64\n</code></pre>"},{"location":"rpm/local_module_builds/#module-template-and-keys-using-jinja","title":"Module Template and Keys using jinja","text":"<pre><code>{% if module_default_data is defined %}\n---\ndocument: modulemd-defaults\nversion: {{ module_default_data.version }}\ndata:\n module: {{ module_default_data.data.module }}\n stream: {{ module_default_data.data.stream }}\n profiles:\n{% for k in module_default_data.data.profiles %}\n {{ k }}: [{{ module_default_data.data.profiles[k]|join(', ') }}]\n{% endfor %}\n...\n{% endif %}\n---\ndocument: {{ module_data.document }}\nversion: {{ module_data.version }}\ndata:\n name: {{ source_name | default(\"source\") }}\n stream: \"{{ module_data.data.stream }}\"\n version: {{ module_version | default(8040) }}\n context: {{ module_context | default('01010110') }}\n arch: {{ mock_arch | default(ansible_architecture) }}\n summary: {{ module_data.data.summary | wordwrap(width=79) | indent(width=4) }}\n description: {{ module_data.data.description | wordwrap(width=79) | indent(width=4) }}\n license:\n{% for (key, value) in module_data.data.license.items() %}\n {{ key }}:\n - {{ value | 
join('\\n - ') }}\n{% endfor %}\n xmd: {}\n{% if module_data.data.dependencies is defined %}\n dependencies:\n{% for l in module_data.data.dependencies %}\n{% for r in l.keys() %}\n{% if loop.index == 1 %}\n - {{ r }}:\n{% else %}\n {{ r }}:\n{% endif %}\n{% for (m, n) in l[r].items() %}\n {{ m }}: [{{ n | join(', ') }}]\n{% endfor %}\n{% endfor %}\n{% endfor %}\n{% endif %}\n{% if module_data.data.filter is defined %}\n filter:\n{% for (key, value) in module_data.data.filter.items() %}\n {{ key }}:\n - {{ value | join('\\n - ') }}\n{% endfor %}\n{% endif %}\n{% if module_data.data.profiles is defined %}\n profiles:\n{% for (key, value) in module_data.data.profiles.items() %}\n {{ key }}:\n{% for (key, value) in value.items() %}\n{% if value is iterable and (value is not string and value is not mapping) %}\n {{ key | indent(width=6) }}:\n - {{ value | join('\\n - ') }}\n{% else %}\n {{ key | indent(width=6) }}: {{ value }}\n{% endif %}\n{% endfor %}\n{% endfor %}\n{% endif %}\n{% if module_data.data.api is defined %}\n api:\n{% for (key, value) in module_data.data.api.items() %}\n {{ key }}:\n - {{ value | join('\\n - ') }}\n{% endfor %}\n{% endif %}\n{% if module_data.data.buildopts is defined %}\n buildopts:\n{% for (key, value) in module_data.data.buildopts.items() %}\n {{ key }}:\n{% for (key, value) in value.items() %}\n {{ key }}: |\n {{ value | indent(width=8) }}\n{% endfor %}\n{% endfor %}\n{% endif %}\n{% if module_data.data.references is defined %}\n references:\n{% for (key, value) in module_data.data.references.items() %}\n {{ key }}: {{ value }}\n{% endfor %}\n{% endif %}\n{% if module_data.data.components is defined %}\n components:\n{% for (key, value) in module_data.data.components.items() %}\n {{ key }}:\n{% for (key, value) in value.items() %}\n {{ key }}:\n{% for (key, value) in value.items() %}\n{% if value is iterable and (value is not string and value is not mapping) %}\n {{ key | indent(width=8) }}: [{{ value | join(', ') }}]\n{% else %}\n {{ 
key | indent(width=8) }}: {{ value }}\n{% endif %}\n{% endfor %}\n{% endfor %}\n{% endfor %}\n{% endif %}\n{% if artifacts is defined %}\n artifacts:\n{% for (key, value) in artifacts.items() %}\n {{ key }}:\n - {{ value | join('\\n - ') }}\n{% endfor %}\n{% endif %}\n...\n</code></pre>"},{"location":"sop/","title":"SOP (Standard Operating Procedures)","text":"<p>This section goes over the various SOPs for SIG/Core. Please use the menu items to find the various pages of interest.</p>"},{"location":"sop/sop_compose/","title":"SOP: Compose and Repo Sync for Rocky Linux and Peridot","text":"<p>This SOP covers how the Rocky Linux Release Engineering Team handles composes and repository syncs for the distribution. It contains information on the scripts that are utilized and in what order, depending on the use case.</p>"},{"location":"sop/sop_compose/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Email Contact infrastructure@rockylinux.org Mattermost Contacts <code>@label</code> <code>@mustafa</code> <code>@neil</code> <code>@tgo</code> Mattermost Channels <code>~Development</code>"},{"location":"sop/sop_compose/#related-git-repositories","title":"Related Git Repositories","text":"<p>There are several git repositories used in the overall composition of a repository or a set of repositories.</p> <p>Pungi - This repository contains all the necessary pungi configuration files that peridot translates into its own configuration. Pungi is no longer used for Rocky Linux.</p> <p>Comps - This repository contains all the necessary comps (which are groups and other data) for a given major version. 
Peridot (and pungi) use this information to properly build repositories.</p> <p>Toolkit - This repository contains various scripts and utilities used by Release Engineering, such as syncing composes, functionality testing, and mirror maintenance.</p>"},{"location":"sop/sop_compose/#composing-repositories","title":"Composing Repositories","text":""},{"location":"sop/sop_compose/#mount-structure","title":"Mount Structure","text":"<p>There is a designated system that takes care of composing repositories. These systems contain the necessary EFS/NFS mounts for the staging and production repositories as well as composes.</p> <ul> <li><code>/mnt/compose</code> -&gt; Compose data</li> <li><code>/mnt/repos-staging</code> -&gt; Staging</li> <li><code>/mnt/repos-production</code> -&gt; Production</li> </ul>"},{"location":"sop/sop_compose/#empanadas","title":"Empanadas","text":"<p>Each repository or set of repositories is controlled by various comps and pungi configurations that are translated into peridot. Empanadas is used to run a reposync from peridot's yumrepofs repositories, generate ISO's, and create a pungi compose look-a-like. Because of this, the comps and pungi-rocky configuration is not referenced with empanadas.</p>"},{"location":"sop/sop_compose/#running-a-compose","title":"Running a Compose","text":"<p>First, the toolkit must be cloned. In the <code>iso/empanadas</code> directory, run <code>poetry install</code>. 
You'll then have access to the various commands needed:</p> <ul> <li><code>sync_from_peridot</code></li> <li><code>build-iso</code></li> <li><code>build-iso-extra</code></li> <li><code>pull-unpack-tree</code></li> <li><code>pull-cloud-image</code></li> <li><code>finalize_compose</code></li> </ul>"},{"location":"sop/sop_compose/#full-compose","title":"Full Compose","text":"<p>To perform a full compose, this order is expected (replacing X with major version or config profile)</p> <pre><code># This creates a brand new directory under /mnt/compose/X and symlinks it to latest-Rocky-X\npoetry run sync_from_peridot --release X --hashed --repoclosure --full-run\n\n# On each architecture, this must be run to generate the lorax images\n# !! Use --rc if the image is a release candidate or a beta image\n# Note: This is typically done using kubernetes and uploaded to a bucket\npoetry run build-iso --release X --isolation=None\n\n# The images are pulled from the bucket\npoetry run pull-unpack-tree --release X\n\n# The extra ISO's (usually just DVD) are generated\n# !! Use --rc if the image is a release candidate or a beta image\n# !! Set --extra-iso-mode to mock if desired\n# !! If there is more than the dvd, remove --extra-iso dvd\npoetry run build-iso-extra --release X --extra-iso dvd --extra-iso-mode podman\n\n# This pulls the generic and EC2 cloud images\npoetry run pull-cloud-image --release X\n\n# This ensures everything is closed out for a release. This copies iso's, images,\n# generates metadata, and the like.\n# !! DO NOT RUN DURING INCREMENTAL UPDATES !!\npoetry run finalize_compose --release X\n</code></pre>"},{"location":"sop/sop_compose/#incremental-compose","title":"Incremental Compose","text":"<p>It is possible to compose individual repos if you know which ones you want to sync. This can be done when it's not for a brand new release.</p> <pre><code># Set your repos as desired. 
--arch is also acceptable.\n# --ignore-debug and --ignore-source are also acceptable options.\npoetry run sync_from_peridot --release X --hashed --clean-old-packages --repo X,Y,Z\n</code></pre>"},{"location":"sop/sop_compose/#syncing-composes","title":"Syncing Composes","text":"<p>Syncing utilizes the sync scripts provided in the release engineering toolkit.</p> <p>When the scripts are run, they are usually run with a specific purpose, as each major version may be different.</p> <p>The below are common vars files. common_X will override what's in common. Typically these set what repositories exist and how they are named or look at the top level. These also set the current major.minor release as necessary.</p> <pre><code>.\n\u251c\u2500\u2500 common\n\u251c\u2500\u2500 common_8\n\u251c\u2500\u2500 common_9\n</code></pre> <p>These are for the releases in general. What they do is noted below.</p> <pre><code>\u251c\u2500\u2500 gen-torrents.sh -&gt; Generates torrents for images\n\u251c\u2500\u2500 minor-release-sync-to-staging.sh -&gt; Syncs a minor release to staging\n\u251c\u2500\u2500 prep-staging-X.sh -&gt; Preps staging updates and signs repos (only for 8)\n\u251c\u2500\u2500 sign-repos-only.sh -&gt; Signs the repomd (only for 8)\n\u251c\u2500\u2500 sync-file-list-parallel.sh -&gt; Generates file lists in parallel for mirror sync scripts\n\u251c\u2500\u2500 sync-to-prod.sh -&gt; Syncs staging to production\n\u251c\u2500\u2500 sync-to-prod.delete.sh -&gt; Syncs staging to production (deletes artifacts that are no longer in staging)\n\u251c\u2500\u2500 sync-to-prod-sig.sh -&gt; Syncs a sig provided compose to production\n\u251c\u2500\u2500 sync-to-staging.sh -&gt; Syncs a provided compose to staging\n\u251c\u2500\u2500 sync-to-staging.delete.sh -&gt; Syncs a provided compose to staging (deletes artifacts that are no longer in the compose)\n\u251c\u2500\u2500 sync-to-staging-sig.sh -&gt; Syncs a sig provided compose to staging\n</code></pre> <p>Generally, you 
will only run <code>sync-to-staging.sh</code> or <code>sync-to-staging.delete.sh</code> to sync. The former is for older releases, the latter is for newer releases. Optionally, if you are syncing a \"beta\" or \"lookahead\" release, you will need to also provide the <code>RLREL</code> variable as <code>beta</code> or <code>lookahead</code>.</p> <pre><code># The below syncs to staging for Rocky Linux 8\nRLVER=8 bash sync-to-staging.sh Rocky\n# The below syncs to staging for Rocky Linux 9\nRLVER=9 bash sync-to-staging.delete.sh Rocky\n</code></pre> <p>Once the syncs are done, staging must be tested and vetted before being sent to production. Once staging is completed, it is synced to production.</p> <pre><code># Set X to whatever release\nRLVER=X bash sync-to-prod.delete.sh\nbash sync-file-list-parallel.sh\n</code></pre> <p>During this phase, staging is rsynced with production, the file list is updated, and the full time list is also updated to allow mirrors to know that the repositories have been updated and that they can sync.</p> <p>Note: If multiple releases are being updated, it is important to run the syncs to completion before running the file list parallel script.</p>"},{"location":"sop/sop_compose_8/","title":"SOP: Compose and Repo Sync for Rocky Linux 8","text":"<p>This SOP covers how the Rocky Linux Release Engineering Team handles composes and repository syncs for Rocky Linux 8. 
It contains information on the scripts that are utilized and in what order, depending on the use case.</p> <p>Please see the other SOP for Rocky Linux 9+, which is managed via empanadas and peridot.</p>"},{"location":"sop/sop_compose_8/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Email Contact infrastructure@rockylinux.org Mattermost Contacts <code>@label</code> <code>@mustafa</code> <code>@neil</code> <code>@tgo</code> Mattermost Channels <code>~Development</code>"},{"location":"sop/sop_compose_8/#related-git-repositories","title":"Related Git Repositories","text":"<p>There are several git repositories used in the overall composition of a repository or a set of repositories.</p> <p>Pungi - This repository contains all the necessary pungi configuration files for composes that come from koji. Pungi interacts with koji to build the composes.</p> <p>Comps - This repository contains all the necessary comps (which are groups and other data) for a given major version. Pungi uses this information to properly build the repositories.</p> <p>Toolkit - This repository contains various scripts and utilities used by Release Engineering, such as syncing composes, functionality testing, and mirror maintenance.</p>"},{"location":"sop/sop_compose_8/#composing-repositories","title":"Composing Repositories","text":"<p>For every stable script, there is an equivalent beta or lookahead script available.</p>"},{"location":"sop/sop_compose_8/#mount-structure","title":"Mount Structure","text":"<p>There is a designated system that takes care of composing repositories. 
These systems contain the necessary EFS/NFS mounts for the staging and production repositories as well as composes.</p> <ul> <li><code>/mnt/koji</code> -&gt; Koji files store</li> <li><code>/mnt/compose</code> -&gt; Compose data</li> <li><code>/mnt/repos-staging</code> -&gt; Staging</li> <li><code>/mnt/repos-production</code> -&gt; Production</li> </ul>"},{"location":"sop/sop_compose_8/#pungi","title":"Pungi","text":"<p>Each repository or set of repositories is controlled by various pungi configurations. For example, <code>r8.conf</code> will control the absolute base of Rocky Linux 8, which imports other git repository data as well as accompanying json or other configuration files.</p>"},{"location":"sop/sop_compose_8/#running-a-compose","title":"Running a Compose","text":"<p>Inside the <code>pungi</code> git repository, the <code>scripts</code> folder contains the necessary scripts that are run to perform a compose. There are different types of composes:</p> <ul> <li>produce -&gt; Generates a full compose, generally used for minor releases, which generate new ISO's</li> <li>update -&gt; Generates a smaller compose, generally used for updates within a minor release cycle - ISO's are not generated</li> </ul> <p>Each script is titled appropriately:</p> <ul> <li><code>produce-X.sh</code> -&gt; Generates a full compose for X major release, typically set to the current minor release according to <code>rX.conf</code></li> <li><code>produce-X-full.sh</code> -&gt; Generates a full compose for X major release, including extras, plus, and devel in one go.</li> <li><code>updates-X.sh</code> -&gt; Generates a smaller compose for X major release, typically set to the current minor release according to <code>rX.conf</code></li> <li><code>updates-X-NAME.sh</code> -&gt; Generates a compose for the specific compose, such as NFV, Rocky-devel, Extras, or Plus</li> <li><code>updates-X-full.sh</code> -&gt; Generates a full incremental compose for the X release, which includes extras, 
plus, and devel in one go. Does NOT make ISO's.</li> </ul> <p>When these scripts are run, they generate an appropriate directory under <code>/mnt/compose/X</code> along with an accompanying symlink. For example, if an update to <code>Rocky</code> was made using <code>updates-8.sh</code>, the below would be made:</p> <pre><code>drwxr-xr-x. 5 root root 6144 Jul 21 17:44 Rocky-8-updates-20210721.1\nlrwxrwxrwx. 1 root root 26 Jul 21 18:26 latest-Rocky-8 -&gt; Rocky-8-updates-20210721.1\n</code></pre> <p>This setup also allows pungi to reuse previous package set data to reduce the time it takes to build a compose. Typically during a new minor release, all composes should be run so they can be properly combined. Example of a typical order if releasing 8.X:</p> <pre><code>produce-8.sh\nupdates-8-devel.sh\nupdates-8-extras.sh\n\n# ! OR !\nproduce-8-full.sh\n</code></pre>"},{"location":"sop/sop_compose_8/#syncing-composes","title":"Syncing Composes","text":"<p>Syncing utilizes the sync scripts provided in the release engineering toolkit.</p> <p>When the scripts are run, they are usually run for a specific purpose. They are also run in a certain order to ensure integrity and consistency of a release.</p> <p>The below are common vars files. common_X will override what's in common. Typically these set what repositories exist and how they are named or look at the top level. These also set the current major.minor release as necessary.</p> <pre><code>.\n\u251c\u2500\u2500 common\n\u251c\u2500\u2500 common_8\n\u251c\u2500\u2500 common_9\n</code></pre> <p>These are for the releases in general. 
What they do is noted below.</p> <pre><code>\u251c\u2500\u2500 gen-torrents.sh -&gt; Generates torrents for images\n\u251c\u2500\u2500 minor-release-sync-to-staging.sh -&gt; Syncs a minor release to staging\n\u251c\u2500\u2500 sign-repos-only.sh -&gt; Signs the repomd (only)\n\u251c\u2500\u2500 sync-to-prod.sh -&gt; Syncs staging to production\n\u251c\u2500\u2500 sync-to-staging.sh -&gt; Syncs a provided compose to staging\n\u251c\u2500\u2500 sync-to-staging-sig.sh -&gt; Syncs a sig provided compose to staging\n</code></pre> <p>Generally, you will only run <code>minor-release-sync-to-staging.sh</code> when a full minor release is being produced. So for example, if 8.5 has been built out, you would run that after a compose. <code>gen-torrents.sh</code> would be run shortly after.</p> <p>When doing updates, the order of operations (preferably) would be:</p> <pre><code>* sync-to-staging.sh\n* sync-to-staging-sig.sh -&gt; Only if sigs are updated\n* sync-to-prod.sh -&gt; After the initial testing, it is sent to prod.\n</code></pre> <p>An example of order:</p> <pre><code># The below syncs to staging\nRLVER=8 bash sync-to-staging.sh Extras\nRLVER=8 bash sync-to-staging.sh Rocky-devel\nRLVER=8 bash sync-to-staging.sh Rocky\n</code></pre> <p>Once the syncs are done, staging must be tested and vetted before being sent to production. During this stage, the <code>updateinfo.xml</code> is also applied where necessary to the repositories to provide errata. 
Once staging is completed, it is synced to production.</p> <pre><code>pushd /mnt/repos-staging/mirror/pub/rocky/8.X\npython3.9 /usr/local/bin/apollo_tree -p $(pwd) -n 'Rocky Linux 8 $arch' -i Live -i Minimal -i devel -i extras -i images -i isos -i live -i metadata -i Devel -i plus -i nfv\npopd\nRLVER=8 bash sign-repos-only.sh\nRLVER=8 bash sync-to-prod.sh\nbash sync-file-list-parallel.sh\n</code></pre> <p>During this phase, staging is rsynced with production, the file list is updated, and the full time list is also updated to allow mirrors to know that the repositories have been updated and that they can sync.</p> <p>Note: If multiple releases are being updated, it is important to run the syncs to completion before running the file list parallel script.</p>"},{"location":"sop/sop_compose_8/#quicker-composes","title":"Quicker Composes","text":"<p>On the designated compose box, there is a script that can do all of the incremental steps.</p> <pre><code>cd /root/cron\nbash stable-updates\n</code></pre> <p>The same goes for a full production.</p> <pre><code>bash stable\n</code></pre>"},{"location":"sop/sop_compose_sig/","title":"SOP: Compose and Repo Sync for Rocky Linux Special Interest Groups","text":"<p>This SOP covers how the Rocky Linux Release Engineering Team handles composes and repository syncs for Special Interest Groups.</p>"},{"location":"sop/sop_compose_sig/#contact-information","title":"Contact Information","text":"Owner Release Engineering Team Email Contact releng@rockylinux.org Email Contact infrastructure@rockylinux.org Mattermost Contacts <code>@label</code> <code>@mustafa</code> <code>@neil</code> <code>@tgo</code> Mattermost Channels <code>~Development</code>"},{"location":"sop/sop_compose_sig/#composing-repositories","title":"Composing Repositories","text":""},{"location":"sop/sop_compose_sig/#mount-structure","title":"Mount Structure","text":"<p>There is a designated system that takes care of composing repositories. 
These systems contain the necessary EFS/NFS mounts for the staging and production repositories as well as composes.</p> <ul> <li><code>/mnt/compose</code> -&gt; Compose data</li> <li><code>/mnt/repos-staging</code> -&gt; Staging</li> <li><code>/mnt/repos-production</code> -&gt; Production</li> </ul>"},{"location":"sop/sop_compose_sig/#empanadas","title":"Empanadas","text":"<p>Each repository or set of repositories is controlled by various comps and pungi configurations that are translated into peridot. Empanadas is used to run a reposync from peridot's yumrepofs repositories, generate ISO's, and create a pungi compose look-a-like. Because of this, the comps and pungi-rocky configuration is not referenced with empanadas.</p>"},{"location":"sop/sop_compose_sig/#running-a-compose","title":"Running a Compose","text":"<p>First, the toolkit must be cloned. In the <code>iso/empanadas</code> directory, run <code>poetry install</code>. You'll then have access to the various commands needed:</p> <ul> <li><code>sync_sig</code></li> </ul> <p>To perform a compose of a SIG, it must be defined in the configuration. As an example, here is composing the <code>core</code> sig.</p> <pre><code># This creates a brand new directory under /mnt/compose/X and symlinks it to latest-SIG-Y-X\n~/.local/bin/poetry run sync_sig --release 9 --sig core --hashed --clean-old-packages --full-run\n\n# This assumes the directories already exist and will update in place.\n~/.local/bin/poetry run sync_sig --release 9 --sig core --hashed --clean-old-packages\n</code></pre>"},{"location":"sop/sop_compose_sig/#syncing-composes","title":"Syncing Composes","text":"<p>Syncing utilizes the sync scripts provided in the release engineering toolkit.</p> <p>When the scripts are run, they are usually run with a specific purpose, as each major version may be different.</p> <p>For SIG's, the only files you'll need to know of are <code>sync-to-staging-sig.sh</code> and <code>sync-to-prod-sig.sh</code>. 
Both scripts will delete packages and data that are no longer in the compose.</p> <pre><code># The below syncs the core 8 repos to staging\nRLVER=8 bash sync-to-staging-sig.sh core\n# The below syncs the core 9 repos to staging\nRLVER=9 bash sync-to-staging-sig.sh core\n\n# The below syncs everything in staging for 8 core to prod\nRLVER=8 bash sync-to-prod-sig.sh core\n\n# The below syncs everything in staging for 9 core to prod\nRLVER=9 bash sync-to-prod-sig.sh core\n</code></pre> <p>Once staging is completed and reviewed, it is synced to production.</p> <pre><code>bash sync-file-list-parallel.sh\n</code></pre> <p>During this phase, staging is rsynced with production, the file list is updated, and the full time list is also updated to allow mirrors to know that the repositories have been updated and that they can sync.</p>"},{"location":"sop/sop_mirrormanager2/","title":"Mirror Manager Maintenance","text":"<p>This SOP contains most if not all the information needed for SIG/Core to maintain and operate Mirror Manager for Rocky Linux.</p>"},{"location":"sop/sop_mirrormanager2/#contact-information","title":"Contact Information","text":"Owner SIG/Core (Release Engineering &amp; Infrastructure) Email Contact infrastructure@rockylinux.org Email Contact releng@rockylinux.org Mattermost Contacts <code>@label</code> <code>@neil</code> <code>@tgo</code> Mattermost Channels <code>~Infrastructure</code>"},{"location":"sop/sop_mirrormanager2/#introduction","title":"Introduction","text":"<p>So you made a bad decision and now have to do things to Mirror Manager. Good luck.</p>"},{"location":"sop/sop_mirrormanager2/#pieces","title":"Pieces","text":"Item Runs on... Software Mirrorlist Server mirrormanager001 https://github.com/adrianreber/mirrorlist-server/ Mirror Manager 2 mirrormanager001 https://github.com/fedora-infra/mirrormanager2"},{"location":"sop/sop_mirrormanager2/#mirrorlist-server","title":"Mirrorlist Server","text":"<p>This runs two (2) instances. 
Apache/httpd is configured to send <code>/mirrorlist</code> to one and <code>/debuglist</code> to the other.</p> <ul> <li> <p>Every fifteen (15) minutes: Mirrorlist cache is regenerated</p> <ul> <li>This queries the database for active mirrors and other information and writes a protobuf. The mirrorlist-server reads the protobuf and responds accordingly.</li> </ul> </li> <li> <p>Every twenty (20) minutes: Service hosting <code>/mirrorlist</code> is restarted</p> </li> <li>Every twenty-one (21) minutes: Service hosting <code>/debuglist</code> is restarted</li> </ul> <p>Note that the timing for the restart of the mirrorlist instances is arbitrary.</p>"},{"location":"sop/sop_mirrormanager2/#mirror-manager-2","title":"Mirror Manager 2","text":"<p>This is a uwsgi service fronted by an apache/httpd instance. This is responsible for everything else that is not <code>/mirrorlist</code> or <code>/debuglist</code>. This allows the mirror managers to, well, manage their mirrors.</p>"},{"location":"sop/sop_mirrormanager2/#cdn","title":"CDN","text":"<p>Fastly sits in front of mirror manager. VPN is required to access the <code>/admin</code> endpoints.</p> <p>If the backend of the CDN is down, it will attempt to guess what the user wanted to access and spit out a result on the dl.rockylinux.org website. For example, a request for AppStream-8 and x86_64 will result in a <code>AppStream/x86_64/os</code> directory on dl.rockylinux.org. 
Note that this isn't perfect, but it helps during potential downtime or patching.</p> <pre><code>Fastly -&gt; www firewall -&gt; mirrormanager server\n</code></pre> <p>In reality, the flow is a lot more complex, and a diagram should be created to map it out in a more user-friendly manner (@TODO)</p> <pre><code>User -&gt; Fastly -&gt; AWS NLB over TLS, passthru -&gt; www firewall cluster (decrypt TLS) -&gt; mirrormanager server (Rocky CA TLS)\n</code></pre>"},{"location":"sop/sop_mirrormanager2/#tasks","title":"Tasks","text":"<p>Below is a list of possible tasks to perform with mirror manager, depending on the scenario.</p>"},{"location":"sop/sop_mirrormanager2/#new-release","title":"New Release","text":"<p>For the following steps, the following must be completed:</p> <ul> <li>Production rsync endpoints should have all brand new content</li> <li>New content root should be locked down to 750 (without this, mirror manager cannot view it)</li> <li> <p>Disable mirrormanager user cronjobs</p> </li> <li> <p>Update the database with the new content. This is run on a schedule normally (see previous section) but can be done manually.</p> <p>a. As the mirror manager user, run the following:</p> </li> </ul> <pre><code>/opt/mirrormanager/scan-primary-mirror-0.4.2/target/debug/scan-primary-mirror --debug --config $HOME/scan-primary-mirror.toml --category 'Rocky Linux'\n/opt/mirrormanager/scan-primary-mirror-0.4.2/target/debug/scan-primary-mirror --debug --config $HOME/scan-primary-mirror.toml --category 'Rocky Linux SIGs'\n</code></pre> <ol> <li> <p>Update the redirects for <code>$reponame-$releasever</code></p> <p>a. Use psql to mirrormanager server: <code>psql -U mirrormanager -W -h mirrormanager_db_host mirrormanager_db</code></p> <p>b. 
Confirm that all three columns are filled and that the second and third columns are identical: <pre><code>select rr.from_repo AS \"From Repo\", rr.to_repo AS \"To Repo\", r.prefix AS \"Target Repo\" FROM repository_redirect AS rr LEFT JOIN repository AS r ON rr.to_repo = r.prefix GROUP BY r.prefix, rr.to_repo, rr.from_repo ORDER BY r.prefix ASC;\n</code></pre></p> <p>c. Change the <code>majorversion</code> redirects to point to the new point release, for example: <pre><code>update repository_redirect set to_repo = regexp_replace(to_repo, '9\\.2', '9.3') where from_repo ~ '(\\w+)-9-(debug|source)';\n</code></pre></p> <p>d. Insert new redirects for the major version expected by the installer</p> <pre><code>insert into repository_redirect (from_repo,to_repo) select REGEXP_REPLACE(rr.from_repo,'9\\.2','9.3'),REGEXP_REPLACE(rr.to_repo,'9\\.2','9.3') FROM repository_redirect AS rr WHERE from_repo ~ '(\\w+)-9.2';\n</code></pre> </li> <li> <p>Generate the mirrorlist cache and restart the debuglist and verify.</p> </li> </ol> <p>Once the bitflip is initiated, restart mirrorlist and reenable all cronjobs.</p>"},{"location":"sop/sop_mirrormanager2/#out-of-date-mirrors","title":"Out-of-date Mirrors","text":"<ol> <li>Get current shasum of repomd.xml. For example: <code>shasum=$(curl https://dl.rockylinux.org/pub/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml | sha256sum)</code></li> <li>Compare against latest propagation log:</li> </ol> <pre><code>tail -latr /var/log/mirrormanager/propagation/rocky-9.3-BaseOS-x86_64_propagation.log.*\n\nexport VER=9.3\nawk -v shasum=$(curl -s https://dl.rockylinux.org/pub/rocky/$VER/BaseOS/x86_64/os/repodata/repomd.xml | sha256sum | awk '{print $1}') -F'::' '{split($0,data,\":\")} {if ($4 != shasum) {print data[5], data[6], $2, $7}}' &lt; $(find /var/log/mirrormanager/propagation/ -name \"rocky-${VER}-BaseOS-x86_64_propagation.log*\" -mtime -1 | tail -1)\n</code></pre> <p>This will generate a table. 
You can take the IDs in the first column and use the database to disable them by ID (table name: hosts), or go to https://mirrors.rockylinux.org/mirrormanager/host/ID and uncheck 'User active'.</p> <p>Users can change user active, but they cannot change admin active. It is better to flip user active in this case.</p> <p>Admins can also view https://mirrors.rockylinux.org/mirrormanager/admin/all_sites if necessary.</p> <p>Example of table columns:</p> <p>Note</p> <p>These mirrors are listed solely as an example and not to call anyone out; every mirror shows up here at some point due to natural variations in how mirrors sync.</p> <pre><code>[mirrormanager@ord1-prod-mirrormanager001 propagation]$ awk -v shasum=$(curl -s https://dl.rockylinux.org/pub/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml | sha256sum | awk '{print $1}') -F'::' '{split($0,data,\":\")} {if ($4 != shasum) {print data[5], data[6], $2, $7}}' &lt; rocky-9.3-BaseOS-x86_64_propagation.log.1660611632 | column -t\n164 mirror.host.ag http://mirror.host.ag/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n173 rocky.centos-repo.net http://rocky.centos-repo.net/9.3/BaseOS/x86_64/os/repodata/repomd.xml 403\n92 rocky.mirror.co.ge http://rocky.mirror.co.ge/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n289 mirror.vsys.host http://mirror.vsys.host/rockylinux/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n269 mirrors.rackbud.com http://mirrors.rackbud.com/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 200\n295 mirror.ps.kz http://mirror.ps.kz/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 200\n114 mirror.liteserver.nl http://rockylinux.mirror.liteserver.nl/9.3/BaseOS/x86_64/os/repodata/repomd.xml 200\n275 mirror.upsi.edu.my http://mirror.upsi.edu.my/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 200\n190 mirror.kku.ac.th http://mirror.kku.ac.th/rocky-linux/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n292 mirrors.cat.pdx.edu http://mirrors.cat.pdx.edu/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 
200\n370 mirrors.gbnetwork.com http://mirrors.gbnetwork.com/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n308 mirror.ihost.md http://mirror.ihost.md/rockylinux/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n87 mirror.freedif.org http://mirror.freedif.org/Rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n194 mirrors.bestthaihost.com http://mirrors.bestthaihost.com/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n30 mirror.admax.se http://mirror.admax.se/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 200\n195 mirror.uepg.br http://mirror.uepg.br/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n247 mirrors.ipserverone.com http://mirrors.ipserverone.com/rocky/9.3/BaseOS/x86_64/os/repodata/repomd.xml 404\n</code></pre>"},{"location":"sop/sop_release/","title":"Rocky Release Procedures for SIG/Core (RelEng/Infrastructure)","text":"<p>This SOP contains all the steps required by SIG/Core (a mix of Release Engineering and Infrastructure) to perform releases of all Rocky Linux versions. This work is done in collaboration with the entire group of engineers.</p>"},{"location":"sop/sop_release/#contact-information","title":"Contact Information","text":"Owner SIG/Core (Release Engineering &amp; Infrastructure) Email Contact infrastructure@rockylinux.org Email Contact releng@rockylinux.org Mattermost Contacts <code>@label</code> <code>@neil</code> <code>@tgo</code> <code>@skip77</code> <code>@mustafa</code> <code>@sherif</code> <code>@pgreco</code> Mattermost Channels <code>~Infrastructure</code>"},{"location":"sop/sop_release/#preparation","title":"Preparation","text":""},{"location":"sop/sop_release/#notes-about-release-day","title":"Notes about Release Day","text":"<p>At a minimum of two (2) days prior, the following should be true:</p> <ol> <li> <p>Torrents should be set up. All files can be synced with the seed box(es) but not yet published. 
The data should be verified using sha256sum and compared to the CHECKSUM files provided with the files.</p> </li> <li> <p>The website should be ready (typically with an open PR in GitHub). The design and content should be verified as correct and finalized.</p> </li> <li> <p>Enough mirrors should be set up. This essentially means that all content for a release should be synced to our primary mirror with the executable bit turned off, and the content should also be hard linked. In theory, mirror manager can be queried to verify whether mirrors are or appear to be in sync.</p> </li> </ol>"},{"location":"sop/sop_release/#notes-about-patch-days","title":"Notes about Patch Days","text":"<p>At a minimum of one (1) to two (2) days prior, the following should be true:</p> <ol> <li> <p>Updates should be completed in the build system and verified in staging.</p> </li> <li> <p>Updates should be sent to production and file lists updated to allow mirrors to sync.</p> </li> </ol>"},{"location":"sop/sop_release/#prior-to-release-day-notes","title":"Prior to Release Day notes","text":"<p>Ensure the SIG/Core Checklist is read thoroughly and executed as listed.</p>"},{"location":"sop/sop_release/#release-day","title":"Release Day","text":""},{"location":"sop/sop_release/#priorities","title":"Priorities","text":"<p>During release day, these should be verified/completed in order:</p> <ol> <li> <p>Website - The primary website and user landing at rockylinux.org should allow the user to efficiently click through to a download link of an ISO, image, or torrent. It must be kept up.</p> </li> <li> <p>Torrent - The seed box(es) should be primed and ready to go for users downloading via torrent.</p> </li> <li> <p>Release Notes &amp; Documentation - The release notes are often on the same website as the documentation. 
The main website and, where applicable, the docs should refer to the Release Notes of Rocky Linux.</p> </li> <li> <p>Wiki - If applicable, the necessary changes and resources should be available for a release. In particular, if a major release has new repos or changed repo names, this should be documented.</p> </li> <li> <p>Everything else!</p> </li> </ol>"},{"location":"sop/sop_release/#resources","title":"Resources","text":""},{"location":"sop/sop_release/#sigcore-checklist","title":"SIG/Core Checklist","text":""},{"location":"sop/sop_release/#beta","title":"Beta","text":"<ul> <li>Compose Completed</li> <li>Repoclosure must be checked and pass</li> <li>Lorax Run</li> <li>ISOs are built</li> <li>Cloud Images built</li> <li>Live Images built</li> <li>Compose Synced to Staging</li> <li>AWS/Azure Images in Marketplace</li> <li>Vagrant Images</li> <li>Container Images</li> <li> <p>Mirror Manager</p> <ul> <li>Ready to Migrate from previous beta release (rltype=beta)</li> <li>Boot image install migration from previous beta release</li> </ul> </li> <li> <p>Pass image to Testing Team for final validation</p> </li> </ul>"},{"location":"sop/sop_release/#release-candidate","title":"Release Candidate","text":"<ul> <li>Compose Completed</li> <li>Repoclosure must be checked and pass</li> <li>Lorax Run</li> <li>ISOs are built</li> <li>Cloud Images built</li> <li>Live Images built</li> <li>Compose Synced to Staging</li> <li>AWS/Azure Images in Marketplace</li> <li>Vagrant Images</li> <li>Container Images</li> <li> <p>Mirror Manager</p> <ul> <li>Ready to Migrate from previous release</li> <li>Boot image install migration from previous release</li> </ul> </li> <li> <p>Pass image to Testing Team for validation</p> </li> </ul>"},{"location":"sop/sop_release/#final","title":"Final","text":"<ul> <li>Compose Completed</li> <li>Repoclosure must be checked and pass</li> <li>Lorax Run</li> <li>ISOs are built</li> <li>Cloud Images built</li> <li>Live Images built</li> <li>Compose Synced 
to Staging</li> <li>AWS/Azure Images in Marketplace</li> <li>Vagrant Images</li> <li>Container Images</li> <li> <p>Mirror Manager</p> <ul> <li>Ready to Migrate from previous release</li> <li>Boot image install migration from previous release</li> </ul> </li> <li> <p>Pass image to Testing Team for final validation</p> </li> <li>Sync to Production</li> <li>Sync to Europe Mirror if applicable</li> <li>Hardlink Run</li> <li>Bitflip after 24-48 Hours</li> </ul> Resources: Account Services, Git (RESF Git Service), Git (Rocky Linux GitHub), Git (Rocky Linux GitLab), Mail Lists, Contacts <p>URL: https://accounts.rockylinux.org</p> <p>Purpose: Account Services maintains the accounts for almost all components of the Rocky ecosystem</p> <p>Technology: Noggin, as used by Fedora Infrastructure</p> <p>Contact: <code>~Infrastructure</code> in Mattermost and <code>#rockylinux-infra</code> in Libera IRC</p> <p>URL: https://git.resf.org</p> <p>Purpose: General projects, code, and so on for the Rocky Enterprise Software Foundation.</p> <p>Technology: Gitea</p> <p>Contact: <code>~Infrastructure</code>, <code>~Development</code> in Mattermost and <code>#rockylinux-infra</code>, <code>#rockylinux-devel</code> in Libera IRC</p> <p>URL: https://github.com/rocky-linux</p> <p>Purpose: General purpose code, assets, and so on for Rocky Linux. 
Some content is mirrored to the RESF Git Service.</p> <p>Technology: GitHub</p> <p>Contact: <code>~Infrastructure</code>, <code>~Development</code> in Mattermost and <code>#rockylinux-infra</code>, <code>#rockylinux-devel</code> in Libera IRC</p> <p>URL: https://git.rockylinux.org</p> <p>Purpose: Packages and light code for the Rocky Linux distribution</p> <p>Technology: GitLab</p> <p>Contact: <code>~Infrastructure</code>, <code>~Development</code> in Mattermost and <code>#rockylinux-infra</code>, <code>#rockylinux-devel</code> in Libera IRC</p> <p>URL: https://lists.resf.org</p> <p>Purpose: Users can subscribe and interact with various mail lists for the Rocky ecosystem</p> <p>Technology: Mailman 3 + HyperKitty</p> <p>Contact: <code>~Infrastructure</code> in Mattermost and <code>#rockylinux-infra</code> in Libera IRC</p> Name Email Mattermost Name IRC Name Louis Abel label@rockylinux.org @nazunalika Sokel/label/Sombra Mustafa Gezen mustafa@rockylinux.org @mustafa mstg Skip Grube skip@rockylinux.org @skip77 Sherif Nagy sherif@rockylinux.org @sherif Pablo Greco pgreco@rockylinux.org @pgreco pgreco Neil Hanlon neil@resf.org @neil neil Taylor Goodwill tg@resf.org @tgo tg"},{"location":"sop/sop_upstream_prep_checklist/","title":"Generalized Prep Checklist for Upcoming Releases","text":"<p>This SOP contains general checklists required by SIG/Core to prepare and plan for the upcoming release. 
This work, in general, must be done on a routine basis, even months before the next major or minor release, as it requires monitoring upstream (CentOS Stream) work to ensure Rocky Linux remains ready and compatible with Red Hat Enterprise Linux.</p>"},{"location":"sop/sop_upstream_prep_checklist/#contact-information","title":"Contact Information","text":"Owner SIG/Core (Release Engineering &amp; Infrastructure) Email Contact infrastructure@rockylinux.org Email Contact releng@rockylinux.org Mattermost Contacts <code>@label</code> <code>@neil</code> <code>@tgo</code> <code>@skip77</code> <code>@mustafa</code> <code>@sherif</code> <code>@pgreco</code> Mattermost Channels <code>~Infrastructure</code>"},{"location":"sop/sop_upstream_prep_checklist/#general-upstream-monitoring","title":"General Upstream Monitoring","text":"<p>SIG/Core is expected to monitor the following upstream repositories, as these indicate what is coming up for a given major or point release. These repositories are found on the Red Hat GitLab.</p> <ul> <li>centos-release</li> <li>centos-logos</li> <li>pungi-centos</li> <li>comps</li> <li>module-defaults</li> </ul> <p>These repositories can be monitored by setting the bell (notifications) icon to \"all activity\".</p> <p>Upon changes to the upstream repositories, a SIG/Core member should analyze the changes and apply the same to the lookahead branches:</p> <ul> <li> <p>rocky-release</p> <ul> <li>Manual changes required</li> </ul> </li> <li> <p>rocky-logos</p> <ul> <li>Manual changes required</li> </ul> </li> <li> <p>pungi-rocky</p> <ul> <li>Run <code>sync-from-upstream</code></li> </ul> </li> <li> <p>peridot-rocky</p> <ul> <li>Configurations are generated using peridot tools</li> </ul> </li> <li> <p>comps</p> <ul> <li>Run <code>sync-from-upstream</code></li> </ul> </li> <li> <p>rocky-module-defaults</p> <ul> <li>Run <code>sync-from-upstream</code></li> </ul> </li> 
</ul>"},{"location":"sop/sop_upstream_prep_checklist/#general-downward-merging","title":"General Downward Merging","text":"<p>Repositories that generally track LookAhead and Beta releases flow downward to the stable branch. For example:</p> <pre><code>* rXs / rXlh\n |\n |----&gt; rX-beta\n |\n |----&gt; rX\n</code></pre> <p>This applies to any specific Rocky repo, such as comps, pungi, peridot-config, and so on. As some repos are expected to deviate in commit history, it is OK to force push, under the assumption that changes made in the lower branch exist in the upper branch. That way, you avoid changes or functionality being reverted by accident.</p>"},{"location":"sop/sop_upstream_prep_checklist/#general-package-patching","title":"General Package Patching","text":"<p>There are packages that are patched, typically for the purpose of debranding. The list of patched packages is maintained in a metadata repository. The obvious ones are listed below and should be monitored and maintained properly:</p> <ul> <li>abrt</li> <li>anaconda</li> <li>anaconda-user-help</li> <li>chrony</li> <li>cockpit</li> <li>dhcp</li> <li>dnf</li> <li>firefox</li> <li>fwupd</li> <li>gcc</li> <li>gnome-session</li> <li>gnome-settings-daemon</li> <li>grub2</li> <li>initial-setup</li> <li>kernel</li> <li>kernel-rt</li> <li>libdnf</li> <li>libreoffice</li> <li>libreport</li> <li>lorax-templates-rhel</li> <li>nginx</li> <li>opa-ff</li> <li>opa-fm</li> <li>openldap</li> <li>openscap</li> <li>osbuild</li> <li>osbuild-composer</li> <li>PackageKit</li> <li>pesign</li> <li>python-pip</li> <li>redhat-rpm-config</li> <li>scap-security-guide</li> <li>shim</li> <li>shim-unsigned-x64</li> <li>shim-unsigned-aarch64</li> <li>subscription-manager</li> <li>systemd</li> <li>thunderbird</li> </ul>"}]}