wiki/search/search_index.json

{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"SIG/HPC Wiki","text":"<p>This SIG is aiming to provide various HPC packages to support building HPC cluster using Rocky Linux systems</p>"},{"location":"#responsibilities","title":"Responsibilities","text":"<p>Developing and maintaining various HPC related packages, this may include porting, optimized and contributing to upstream sources to support HPC initiative</p>"},{"location":"#meetings-communications","title":"Meetings / Communications","text":"<p>We are meeting on bi-weekly bases on Google meet for now and you may check RESF community calendar here also check Contact US page to reach us</p>"},{"location":"about/","title":"About","text":"<p>TBD</p>"},{"location":"contact/","title":"Contact US","text":"<p>We hang out in our SIG/HPC Mattermost channel and #rockylinux-sig-hpc on irc.libera.chat \"bridged to our MatterMost channel\" also our SIG forums are located here</p>"},{"location":"events/","title":"SIG/HPC Meeting","text":"<p>We are meeting twice a month on bi-weekly bases on Thursday at 9:00 PM UTC here on Google meet - for now -</p>"},{"location":"installation/","title":"Repo Installation","text":"<p>\"\"\"This page is still under construction\"\"\"</p> <p>For Rocky 8 and 9, <code>dnf install rocky-release-hpc</code> will install the required repos</p>"},{"location":"installation/#slurm-installation","title":"Slurm installation:","text":"<p>For Rocky 9: <code>dnf install slurm22</code> or <code>dnf install slurm23</code></p> <p>For Rocky 8: you need to enable PowerTools repo first, then <code>dnf install slurm22</code> or <code>dnf install slurm23</code></p> <p>Slurm is divided into multiple packages, so <code>dnf search slurm</code> might be a good idea to fetch whatever packages you need</p>"},{"location":"packages/","title":"SIG/HPC Packages","text":"<p>Those are some of the packages that we are thinking to maintain and support within this SIG </p> <pre><code>* Lustre server and client\n* Slurm\n* Apptainer\n* Easybuild\n* Spack\n* opempi build slurm support\n* Lmod\n* conda\n* sstack\n* fail2ban - in EPEL not sure if it's fit in this SIG -\n* glusterfs-server - Better suited under SIG/Storage -\n* glusterfs-selinux - Better suited under SIG/Storage -\n* Cython\n* genders\n* pdsh\n* gcc (latest releases, parallel install)\n* autotools\n* cmake\n* hwloc (this really needs to support parallel versions)\n* libtool\n* valgrind (maybe)\n* charliecloud\n* Warewulf (if all config options are runtime instead of pre-compiled)\n* magpie\n* openpbs\n* pmix\n* NIS : ypserv, ypbind, yptools and a correspdonding nss_nis (took the source rpms from fedora and recompiled them for R9)\n</code></pre>"},{"location":"events/meeting-notes/2023-04-20/","title":"SIG/HPC meeting 2023-04-20","text":""},{"location":"events/meeting-notes/2023-04-20/#attendees","title":"Attendees:","text":"<pre><code>* Alan Marshall\n* Nje\n* Neil Hanlon\n* Matt Bidwell\n* David (NezSez)\n* Jonathan Andreson\n* Stack\n* Balaji\n* Sherif\n* Gregorgy Kurzer\n* David DeBonis\n</code></pre>"},{"location":"events/meeting-notes/2023-04-20/#quick-round-of-introduction","title":"Quick round of introduction","text":"<p>Everyone introduced themselves</p>"},{"location":"events/meeting-notes/2023-04-20/#definition-of-stakeholders","title":"Definition of stakeholders","text":"<p>\"still needs lots to clarification and classification since those are very wide terms\"</p> <pre><code>* HPC End-user ?maybe?\n* HPC Systems 
admins and engineers, to provide them with tools and know how to build HPC clusters using Rocky linux\n* HPC Vendors, however the SIG has to be vendor neutral and agnostic\n</code></pre>"},{"location":"events/meeting-notes/2023-04-20/#discussions","title":"Discussions:","text":"<p>Stack: we need to make sure that we are not redoing efforts that already done with other groups Greg engaged with Open HPC community and providing some core packages such as apptainer, mpi, openHPC</p> <p>Sherif: we need to have one hat to fit most of all but we c
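
The Repo Installation and Slurm installation entries indexed above can be read as a short install sequence. The sketch below is a hedged consolidation: the package names (rocky-release-hpc, slurm22, slurm23) and the dnf search tip come from the wiki text, while the specific dnf config-manager call for enabling PowerTools on Rocky 8 is an assumption about the admin's environment, not part of the original instructions.

<pre><code># Minimal sketch of the repo + Slurm install flow described in the index above.
# Assumption: PowerTools on Rocky 8 is enabled via dnf config-manager; adjust
# for your site if you manage repos differently.

# Rocky 8 and 9: install the SIG/HPC release package to enable the repos
sudo dnf install -y rocky-release-hpc

# Rocky 8 only: the Slurm packages need the PowerTools repo enabled first
sudo dnf config-manager --set-enabled powertools

# Install the Slurm release you want (the wiki offers slurm22 or slurm23)
sudo dnf install -y slurm23

# Slurm is split into several sub-packages; search to pick what each node needs
dnf search slurm
</code></pre>

Swap slurm23 for slurm22 if you want the older release; both are offered in the wiki text, and the Rocky 9 path skips the PowerTools step entirely.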