wiki/search/search_index.json


{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"SIG/HPC Wiki","text":"<p>This SIG is aiming to provide various HPC packages to support building HPC cluster using Rocky Linux systems</p>"},{"location":"#responsibilities","title":"Responsibilities","text":"<p>Developing and maintaining various HPC related packages, this may include porting, optimized and contributing to upstream sources to support HPC initiative</p>"},{"location":"#meetings-communications","title":"Meetings / Communications","text":"<p>We are meeting on bi-weekly bases on Google meet for now and you may check RESF community calendar here also check Contact US page to reach us</p>"},{"location":"about/","title":"About","text":"<p>TBD</p>"},{"location":"contact/","title":"Contact US","text":"<p>We hang out in our SIG/HPC Mattermost channel and soon we will have IRC bridge</p>"},{"location":"events/","title":"Events","text":""},{"location":"events/#sighpc-meeting","title":"SIG/HPC Meeting","text":"<p>We are meeting twice a month on bi-weekly bases on Thursday at 9:00 PM UTC here on Google meet - for now -</p>"},{"location":"events/meeting-notes/2023-04-20/","title":"SIG/HPC meeting 2023-04-20","text":""},{"location":"events/meeting-notes/2023-04-20/#attendees","title":"Attendees:","text":"<pre><code>* Alan Marshall\n* Nje\n* Neil Hanlon\n* Matt Bidwell\n* David (NezSez)\n* Jonathan Andreson\n* Stack\n* Balaji\n* Sherif\n* Gregorgy Kurzer\n* David DeBonis\n</code></pre>"},{"location":"events/meeting-notes/2023-04-20/#quick-round-of-introduction","title":"Quick round of introduction","text":"<p>Everyone introduced themselves</p>"},{"location":"events/meeting-notes/2023-04-20/#definition-of-stakeholders","title":"Definition of stakeholders","text":"<p>\"still needs lots to clarification and classification since those are very wide terms\"</p> <pre><code>* HPC End-user ?maybe?\n* HPC Systems admins and engineers, to provide them with tools and know how to build HPC clusters using Rocky linux\n* HPC Vendors, however the SIG has to be vendor neutral and agnostic\n</code></pre>"},{"location":"events/meeting-notes/2023-04-20/#discussions","title":"Discussions:","text":"<p>Stack: we need to make sure that we are not redoing efforts that already done with other groups Greg engaged with Open HPC community and providing some core packages such as apptainer, mpi, openHPC</p> <p>Sherif: we need to have one hat to fit most of all but we can't have one hat that fit all Stack: Feedback regarding Sherif's idea that generic idea's are not great idea and there is a bad performance Greg: we need to put building blocks in the this repo and will make life easiest and lower the barriers like Spack, slurm and easybuild</p> <p>Devid (NezSez): Some end users won't understand / know anything about HPC and just needs to use the HPC, such as Maya or dynamic fluids</p> <p>Neil: some tools can be very easily an entry point for organization and teams to use HPC like jupiter playbook</p> <p>Stack: HPC is usually tuned to different needs, we can reach to other HPC that are running Rocky to ask them to promate rocky and establish a dialog to get an idea of what things that they are running into rocky</p> <p>Matt: HPC out of the box there are few projects that doing that and we don't need to run in circles of what we are going to </p> <p>Balaji: SIG for scientific application that focus on support the application and optimization, and HPC suggest the architecture to reach max capabilities</p> <p>Greg: 
Agreeing with Stack, we don't want to provide applications where there are already tools that do that.</p> <p>Gregory Kurtzer (Chat): A simple strategy might be just to start assembling a list of packages we want to include as part of SIG/HPC, and be open-minded as this list expands.</p> <p>Neil Hanlon (Chat): actually have to leave now, but, if we make some sort of toolkit, it has to be quite unopinionated... OpenStack-Ansible is a good example of being unopinionated about how you run your OpenStack cluster(s), but gives you all the tools to customize.</p>"}]}
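
The `config` block at the top of this index describes how the MkDocs search plugin tokenizes queries: terms are split on the separator regex `[\s\-]+` and passed through a stop-word filter. As a minimal sketch of how a client could query this file the same way, assuming the index lives at `wiki/search/search_index.json` and using an illustrative stop-word subset (not lunr's actual list):

```python
import json
import re

# Illustrative subset of stop words; lunr's real stopWordFilter list is longer.
STOP_WORDS = {"a", "an", "and", "the", "to", "of", "on", "for", "we", "is"}

SEPARATOR = re.compile(r"[\s\-]+")  # from config.separator
TAG = re.compile(r"<[^>]+>")        # docs "text" fields embed HTML; strip tags first

def tokenize(text: str) -> list[str]:
    """Lowercase, split on the configured separator, and drop stop words
    (mirroring the 'stopWordFilter' stage of config.pipeline)."""
    text = TAG.sub(" ", text.lower())
    return [t for t in SEPARATOR.split(text) if t and t not in STOP_WORDS]

def search(index: dict, query: str) -> list[dict]:
    """Return docs whose title or text contains every query token."""
    terms = tokenize(query)
    hits = []
    for doc in index["docs"]:
        haystack = set(tokenize(doc["title"] + " " + doc["text"]))
        if terms and all(t in haystack for t in terms):
            hits.append(doc)
    return hits

if __name__ == "__main__":
    with open("wiki/search/search_index.json") as f:
        idx = json.load(f)
    for hit in search(idx, "hpc packages"):
        print(hit["location"], "-", hit["title"])
```

This naive scan matches documents containing all query tokens; the real in-browser search additionally ranks results, but the tokenization rules it applies are the ones encoded in the `config` block above.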