# SIG/HPC meeting 2023-04-20

## Attendees:

* Alan Marshall
* Nje
* Neil Hanlon
* Matt Bidwell
* David (NezSez)
* Jonathan Anderson
* Stack
* Balaji
* Sherif
* Gregory Kurtzer
* David DeBonis

## Quick round of introduction

Everyone introduced themselves.

## Definition of stakeholders

"Still needs lots of clarification and classification, since those are very broad terms."

* HPC end users (maybe?)
* HPC systems admins and engineers, to provide them with the tools and know-how to build HPC clusters using Rocky Linux
* HPC vendors; however, the SIG has to be vendor neutral and agnostic

## Discussions:

Stack: we need to make sure that we are not redoing efforts already done by other groups.

Greg: has engaged with the OpenHPC community, which provides some core packages such as Apptainer, MPI, and the OpenHPC stack.

Sherif: we need to have one hat that fits most, but we can't have one hat that fits all.

Stack: feedback on Sherif's idea; generic builds are not a great idea and come with poor performance.

Greg: we need to put building blocks in this repo that will make life easier and lower the barriers, such as Spack, Slurm, and EasyBuild.
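
As a rough illustration of the building-block idea, the commands below sketch how a user on a Rocky Linux cluster might pull an MPI toolchain in through Spack; the specific package names are examples only, nothing decided by the SIG, and they assume Spack itself is already installed.

```
# hypothetical usage sketch, assuming Spack is already installed and configured
spack install openmpi        # build and install an MPI implementation
spack install hdf5 +mpi      # build HDF5 with MPI support
spack load openmpi hdf5      # make both available in the current shell
```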

David (NezSez): some end users won't understand or know anything about HPC and just need to use it, for example for Maya or fluid dynamics.

Neil: some tools can easily be an entry point for organizations and teams to start using HPC, like a Jupyter playbook.

Stack: HPC is usually tuned to different needs; we can reach out to other HPC sites that are running Rocky, ask them to promote Rocky, and establish a dialogue to get an idea of what they are running on Rocky.

Matt: there are a few projects already doing HPC out of the box, and we don't need to run in circles over what we are going to do.

Balaji: a SIG for scientific applications would focus on supporting and optimizing the applications, while HPC suggests the architecture to reach maximum capabilities.

Greg: agreeing with Stack, we don't want to provide applications; there are already tools that do that.

Gregory Kurtzer (Chat):

A simple strategy might be just to start assembling a list of packages we want to include as part of SIG/HPC, and be open minded as this list expands.

Neil Hanlon (Chat):

Actually have to leave now, but if we make some sort of toolkit, it has to be quite unopinionated... OpenStack-Ansible is a good example of being unopinionated about how you run your OpenStack cluster(s), while still giving you all the tools to customize and tune to your unique situation, too.

## Remarks:

* A point was raised on whether we should rebuild some packages that are already in EPEL, and whether our repo should get higher priority or not (see the sketch after this list).
* We need to think more about conflicts with other SIGs, like Lustre and SIG/Storage.

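A rough sketch of what the repo-priority option could look like, assuming the SIG ships a dnf repo file; the repo id, name, baseurl, and priority value here are placeholders, not decided names. With dnf, a lower priority number wins, so a value below 99 (the default used by repos such as EPEL that set none) would make the SIG packages take precedence.

```
# /etc/yum.repos.d/rocky-sig-hpc.repo -- hypothetical example; repo id and URL are placeholders
[sig-hpc]
name=Rocky Linux SIG/HPC (example)
baseurl=https://example.invalid/sig-hpc/$releasever/$basearch/
enabled=1
gpgcheck=1
# lower number = higher priority; repos that set nothing default to 99
priority=50
```
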
## Action items:

* List of applications “thread on MM to post packages”
* Building blocks, treating each package as a building block, such as Lustre, OpenHPC, Slurm, etc.
* Reach out to other communities “Greg”
* Reach out to different sites that use Rocky for HPC “Stack will ping a few of them, and others as well (group effort)”
* Reach out to hardware vendors
* Statistics / a public registry for sites / HPC clusters to add themselves if they want
* Meetings will be bi-weekly “tentative: Thursday 9:00 PM UTC”
* Documentation