We deployed cryoSPARC on cbi-gpu-03 and we are opening access to everyone. As cryoSPARC is not compatible with the IGBMC authentication system, I have to create the user accounts by hand. To have an account, you have to [[balletn@igbmc.fr|send me an email]].

Moreover, to follow the new team/…

There is a detail about this installation:
[[https://cavarelli.cryosparc.cbi.igbmc.fr|https://cavarelli.cryosparc.cbi.igbmc.fr]]
| + | |||
| + | To check your position in the queue, log to cbi-gpu-03 | ||
| + | |||
| + | ---- | ||
| + | |||
| + | **2022-04-07** | ||
| + | |||
The cryoSPARC instances have been migrated successfully.
I am just waiting for some operations from the IGBMC IT Service, and then I will send you the links to your team instances.
| + | |||
| + | Moreover I deployed Slurm on every machine of the cbi and already prepared everything to be able to plug in some team clusters into it. | ||
| + | If you are interrested in plugging your server into the platform cluster, you can contact me so we can discuss about what's possible (team priorities and such). | ||
| + | This allows a uniform way of handling jobs and queue priorities. | ||
| + | Slurm usage is not mandatory for now on the platform servers (cbi-compute and cbi-gpu), but will be in the following weeks (the time for me to write some documentation and examples for everyone). I will notify you when it's ready. | ||
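
If you want to prepare for the switch, here is a minimal sketch of a Slurm batch script; the job name, resource values and script name are illustrative placeholders, not the platform's actual limits or defaults:

<code bash>
#!/bin/bash
#SBATCH --job-name=hello        # name shown in the queue
#SBATCH --time=00:10:00         # walltime limit
#SBATCH --cpus-per-task=2       # CPU cores for the task
#SBATCH --mem=4G                # memory for the job

# The commands below run on the node Slurm allocates.
hostname
echo "Hello from Slurm"
</code>

Save it as (for example) ''hello.sh'', submit it with ''sbatch hello.sh'' and follow it with ''squeue -u $USER''.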
| + | |||
| + | I worked with the IT Services to deploy dedicated domain names for each team. | ||
| + | You can access your cryoSPARC instance with this url: | ||
| + | |||
| + | [[https:// | ||
| + | |||
| + | It is accessible from inside and outside the lab network. | ||
| + | |||
| + | ---- | ||
| + | |||
| + | **2022-04-21** | ||
| + | |||
In order to bring the Slurm cluster to a fully featured state, I am deploying a shared home system.
This will bring you two features:
  - you can use your home inside your Slurm jobs, as files will be shared between the cluster nodes (for custom scripts, for example)
  - you will have a place to put your scripts, which was not the case before because of the "by project" storage layout
| + | |||
| + | There is three little details: | ||
| + | * Your home directory on the servers will now be: **/ | ||
| + | * Some of you already have some data inside your homes on some of the servers. Every data will still be available inside **/ | ||
| + | * A quota of 30GB will be put in place for every home. | ||
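
To see how much of that quota you are using, a generic sketch with plain coreutils (not a platform-specific tool):

<code bash>
# Total size of your home directory; compare against the 30 GB quota
du -sh ~
</code>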
| + | |||
| + | To be able to run jobs through slurm, you first have to connect to one of the servers via **ssh** (let's say cbi-compute-01). This will create a home directory on the storage (**/ | ||
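
As an illustration, a minimal session sketch; the script name ''my_analysis.sh'' is a hypothetical placeholder:

<code bash>
# The first connection creates your home on the shared storage
ssh cbi-compute-01

# Scripts kept in your home are visible from every cluster node,
# so Slurm jobs can use them directly (my_analysis.sh is hypothetical)
sbatch ~/my_analysis.sh
squeue -u $USER
</code>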
| + | |||
| + | The shared home system did not solve the recent cryoSPARC issue (// | ||
| + | I will try to make a workaround for this during the day. | ||
| + | |||
| + | The cryoSPARC issue should be fixed. | ||
| + | |||
| + | Also, now that our servers are deployed under Slurm, you should know that (when connected to one of the CBI servers) with the command: | ||
| + | < | ||
| + | You can see the node list and if they are not available, the reason why. It allows you to follow when I am debugging a specific node. | ||
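
As a standard Slurm complement (not specific to our setup), the ''-R'' flag prints the reason recorded for each unavailable node:

<code bash>
# Reason recorded for each down, drained or failing node
sinfo -R
</code>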
| + | |||