Show HN: Virtual SLURM HPC cluster in a Docker Compose
I'm the main developer behind vHPC, a SLURM HPC cluster that runs in a Docker Compose setup.
As part of my job, I'm working on a software solution that needs to interact with one of the largest Italian HPC clusters (Cineca Leonardo, 270 PFLOPS). Developing on the production system was of course out of the question, as it would have led to unbearably long feedback loops. I started looking around for existing containerised solutions, but they were always missing some key ingredient needed to suitably mock our target system (accounting, MPI, out-of-date software, ...).
I thus decided it was worth building my own virtual cluster from scratch, learning a thing or two about SLURM in the process. Even though it was built to satisfy the particular needs of the project I'm working on, I tried to keep vHPC as simple and versatile as possible.
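To give an idea of the workflow this enables, here is the kind of integration check you can run from the host. This is a minimal sketch: the "login" service name is an assumption rather than vHPC's actual compose layout, but the SLURM commands themselves (sbatch, sacct) are standard:

    # Minimal sketch: drive the virtual cluster from the host through
    # `docker compose exec`. The "login" service name is an assumption --
    # use whatever the compose file calls the node with the SLURM client tools.
    import subprocess
    import time

    def slurm(*args: str) -> str:
        """Run a SLURM client command inside the login container."""
        result = subprocess.run(
            ["docker", "compose", "exec", "-T", "login", *args],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    # Submit a trivial job; --parsable makes sbatch print only the job id.
    job_id = slurm("sbatch", "--parsable", "--wrap", "srun hostname")

    # Poll the accounting database (this is where having accounting configured
    # matters) until the job reaches a terminal state.
    while True:
        state = slurm("sacct", "-j", job_id, "--format=State", "--noheader", "-X")
        if state and state.split()[0] not in ("PENDING", "RUNNING"):
            break
        time.sleep(1)
    print(f"job {job_id} finished in state {state.split()[0]}")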
I proposed to the company that we open-source it, and as of this morning (CET) vHPC is FLOSS for others to use and tweak. I'm around to answer any questions.
The Digital Research Alliance of Canada (formerly Compute Canada) has Terraform recipes that talk to various cloud APIs and do something similar:
* https://github.com/ComputeCanada/magic_castle
They link to various other projects that do cloud-y HPC:
* AWS ParallelCluster [AWS]
* Cluster in the cloud [AWS, GCP, Oracle]
* Elasticluster [AWS, GCP, OpenStack]
* Google Cluster Toolkit [GCP]
* illume-v2 [OpenStack]
* NVIDIA DeepOps [Ansible playbooks only]
* StackHPC Ansible Role OpenHPC [Ansible Role for OpenStack]
Nvidia also offers free licenses for its Base Command Manager (BCM, formerly Bright Cluster Manager); you can pay for enterprise support or hit up the forums:
* https://www.nvidia.com/en-us/data-center/base-command-manage...
* http://support.brightcomputing.com/manuals/10/
* http://support.brightcomputing.com/manuals/11/
Cool!
I have worked 100% on three comparable systems over the past 10 years. Can you access it with SSH?
I find it super fluid to develop methods for huge datasets by working on the HPC directly, using vim to code and tmux for sessions. I constantly print detailed log files full of debug output and use an automated monitoring script to print those logs in real time: a mixture of .out, .err, and log.txt files.
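A minimal sketch of that monitoring loop in Python (the file names are just examples; in practice it's basically a multi-file tail -f):

    # Follow several job log files at once and print new lines as they
    # appear. File names below are only examples.
    import sys
    import time
    from pathlib import Path

    def follow(paths, interval=2.0):
        """A poor man's multi-file `tail -f`: remember how far we've read
        in each file and print anything that appeared since the last poll."""
        offsets = {p: 0 for p in paths}
        while True:
            for p in paths:
                if not p.exists():
                    continue
                with p.open() as f:
                    f.seek(offsets[p])
                    new = f.read()
                    offsets[p] = f.tell()
                for line in new.splitlines():
                    print(f"[{p.name}] {line}")
            time.sleep(interval)

    if __name__ == "__main__":
        args = [Path(a) for a in sys.argv[1:]]
        follow(args or [Path("job.out"), Path("job.err"), Path("log.txt")])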
Thanks for this! I went looking for something similar a while back and found nothing much. I'm guessing that the alternative to this tidy modern repository is a gigantic broken pile of ansible/chef/puppet that hasn't been touched in 10 years.
Even surprisingly popular distributed-systems projects are really bad about this: "follow this 10-step copy/paste to deploy to EKS" instructions are obnoxious. For one thing, people want to see something basically working at small scale first, to check whether it's abandonware. But even after that, being able to prototype locally without first setting up multiple repositories, shipping multiple modified container images, and already having CI/CD for all of the above is really nice to have.
> I'm guessing that the alternative to this tidy modern repository is a gigantic broken pile of ansible/chef/puppet that hasn't been touched in 10 years.
Not quite sure how well you looked, but there are a bunch of deployment systems for HPC, Ansible or otherwise:
* https://old.reddit.com/r/HPC/comments/1p4a3fq/what_imaging_s...
* My comment listing a bunch: https://news.ycombinator.com/item?id=46037792
Interesting. I've been dealing with replacing a few on-prem HPC clusters lately, and one of the things we've been looking at is Open OnDemand. How does this compare to that? Is this primarily targeted at cluster development, or can I really just build an arbitrarily large production HPC cluster with it?
Don't you still need the HPC cluster with Open OnDemand? I thought it was a web interface for using HPC resources.
But this still runs on a single computer, so you wouldn't use it to deploy a production cluster. It's for testing in a virtual, multi-node-ish setup.
OnDemand is "just" a web frontend for a traditional HPC cluster, which of course means its architecture is deeply cursed: https://osc.github.io/ood-documentation/latest/architecture....
Yeah, OOD is a giant RoR webapp; you need to be running it on a node that can submit to your cluster.
RoR = Ruby on Rails
I wish I had had this for my master's thesis! It was a puny 64-core node, but nevertheless...