generated from sig_core/wiki-template
Init pages
This commit is contained in:
parent 68e1f239da
commit a4d8111e7a
1
docs/about.md
Normal file
@@ -0,0 +1 @@
TBD
2
docs/contact.md
Normal file
@@ -0,0 +1,2 @@
# Contact Us

We hang out in our [SIG/HPC Mattermost channel](https://chat.rockylinux.org/rocky-linux/channels/sig-hpc), and we will soon have an IRC bridge.
3
docs/events.md
Normal file
@@ -0,0 +1,3 @@
## SIG/HPC Meeting

We meet bi-weekly (twice a month) on Thursdays at 9:00 PM UTC, here on [Google Meet](https://meet.google.com/hsy-qnoe-dxx) - for now -
71
docs/events/meeting-notes/2023-04-20.md
Normal file
@@ -0,0 +1,71 @@
# SIG/HPC meeting 2023-04-20

## Attendees:

* Alan Marshall
* Nje
* Neil Hanlon
* Matt Bidwell
* David (NezSez)
* Jonathan Andreson
* Stack
* Balaji
* Sherif
* Gregory Kurtzer
* David DeBonis
## Quick round of introduction

Everyone introduced themselves.
## Definition of stakeholders

"Still needs a lot of clarification and classification, since these are very wide terms."

* HPC end users (maybe?)
* HPC systems admins and engineers, to provide them with the tools and know-how to build HPC clusters using Rocky Linux
* HPC vendors; however, the SIG has to be vendor-neutral and agnostic
## Discussions:

Stack: we need to make sure that we are not redoing efforts already done by other groups.

Greg: has engaged with the OpenHPC community, providing some core packages such as Apptainer, MPI, and OpenHPC.

Sherif: we need one hat that fits most, but we can't have one hat that fits all.

Stack: feedback on Sherif's idea - fully generic builds are not a great idea, and they come with poor performance.

Greg: we need to put building blocks in this repo; that will make life easier and lower the barriers - things like Spack, Slurm, and EasyBuild.

David (NezSez): some end users won't understand or know anything about HPC and just need to use it, for applications such as Maya or fluid dynamics.

Neil: some tools can very easily be an entry point for organizations and teams to use HPC, like a Jupyter playbook.

Stack: HPC is usually tuned to different needs; we can reach out to other HPC sites that are running Rocky, ask them to promote Rocky, and establish a dialogue to get an idea of what issues they are running into with Rocky.

Matt: there are a few projects already doing "HPC out of the box", and we don't need to run in circles redoing what they have done.

Balaji: a SIG for scientific applications should focus on supporting and optimizing the applications, while HPC suggests the architecture to reach maximum capabilities.

Greg: agreeing with Stack - we don't want to provide applications when there are already tools that do that.

Gregory Kurtzer (chat):
A simple strategy might be just to start assembling a list of packages we want to include as part of SIG/HPC, and be open-minded as this list expands.

Neil Hanlon (chat):
actually have to leave now, but, if we make some sort of toolkit, it has to be quite unopinionated... OpenStack-Ansible is a good example of being unopinionated about how you run your OpenStack cluster(s), but it gives you all the tools to customize and tune for your unique situation, too.
## Remarks:

* A point was raised: should we rebuild packages that are already in EPEL or not, and should our repo have higher priority or not?
* We need to think more about conflicts with other SIGs, such as Lustre and SIG/Storage.
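The repo-priority question in the remarks could, for instance, be handled with dnf's `priority` repo option (lower number wins; the default is 99, which is also what EPEL ships with). A minimal sketch; the repo id and baseurl below are hypothetical placeholders, not a real SIG/HPC repo:

```ini
# Hypothetical SIG/HPC repo definition - baseurl is a placeholder.
[sig-hpc]
name=Rocky Linux SIG/HPC (example)
baseurl=https://example.invalid/sig-hpc/$releasever/$basearch/
enabled=1
gpgcheck=1
# Lower number = higher priority; EPEL defaults to 99, so 90 wins on name conflicts.
priority=90
```

With a file like this in `/etc/yum.repos.d/`, a package that exists in both repos would be resolved from the SIG repo rather than EPEL.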
## Action items:

* List of applications "thread on Mattermost to post packages"
* Building blocks, with each package as a building block, such as Lustre, OpenHPC, Slurm, etc.
* Reach out to other communities "Greg"
* Reach out to different sites that use Rocky for HPC "Stack will ping a few of them, and others as well -group effort-"
* Reach out to hardware vendors
* Statistics / public registry for sites / HPC clusters to add themselves if they want
* Meetings will be bi-weekly "tentative: Thursdays 9:00 PM UTC"
* Documentation