common_cryosparc
We deployed cryoSPARC on cbi-gpu-03 and we are opening access to everyone. As cryoSPARC is not compatible with the IGBMC authentication system, I have to create the user accounts by hand. To have an account, you have to [[balletn@igbmc.fr|send me an email]].
Moreover, to follow the new team/...
[[https://...]]
To check your position in the queue, log in to cbi-gpu-03 with ssh and run the squeue command.
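
For example (''<your-login>'' is a placeholder for your own account name):

<code bash>
# connect to the server hosting cryoSPARC
ssh <your-login>@cbi-gpu-03

# show the whole queue
squeue

# or only your own jobs
squeue -u $USER
</code>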
| + | ---- | ||
| **2022-04-07** | **2022-04-07** | ||
I worked with the IT Services to deploy dedicated domain names for each team.
You can access your cryoSPARC instance with this URL:

[[https://...]]

It is accessible from inside and outside the lab network.
| + | |||
| + | ---- | ||
| + | |||
| + | **2022-04-21** | ||
| + | |||
| + | In order to bring the Slurm cluster to a fully featured state I am deploying a shared home system. | ||
| + | This will bring you two features: | ||
| + | - you can use your home inside your Slurm jobs as files will be share between the cluster nodes (for custom scripts for example) | ||
| + | - you will have a place to put your scripts, which was not the case before because of the "by project" | ||
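
For example, a small job script kept in your home could look like this (the file name ''hello.sbatch'' and its options are just an illustration, not a CBI convention):

<code bash>
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --output=%x-%j.out

# this file lives in the shared home (e.g. ~/hello.sbatch), so whichever
# node Slurm picks for the job can read it
hostname
</code>

Submit it from any CBI server with ''sbatch ~/hello.sbatch''; the output file is written in the directory you submitted from.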
| + | |||
| + | There is three little details: | ||
| + | * Your home directory on the servers will now be: **/ | ||
| + | * Some of you already have some data inside your homes on some of the servers. Every data will still be available inside **/ | ||
| + | * A quota of 30GB will be put in place for every home. | ||
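
Whatever mechanism ends up enforcing the quota, you can already check your usage with standard tools:

<code bash>
# total size of everything in your home, to compare against the 30 GB quota
du -sh $HOME
</code>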
| + | |||
To be able to run jobs through Slurm, you first have to connect to one of the servers via **ssh** (let's say cbi-compute-01). This will create a home directory on the storage (**/...**).
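
For example (again, ''<your-login>'' is a placeholder):

<code bash>
# the first login on any CBI server creates your shared home on the storage
ssh <your-login>@cbi-compute-01

# once logged in, check that the home directory is there
echo $HOME
ls -ld $HOME
</code>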
| + | |||
| + | The shared home system did not solve the recent cryoSPARC issue (// | ||
| + | I will try to make a workaround for this during the day. | ||
| + | |||
| + | The cryoSPARC issue should be fixed. | ||
| + | |||
| + | Also, now that our servers are deployed under Slurm, you should know that (when connected to one of the CBI servers) with the command: | ||
| + | < | ||
| + | You can see the node list and if they are not available, the reason why. It allows you to follow when I am debugging a specific node. | ||
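
On a standard Slurm installation this is **sinfo**:

<code bash>
# list partitions, nodes and their state (idle, alloc, drain, down, ...)
sinfo

# list only unavailable nodes, together with the reason they are down or drained
sinfo -R
</code>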
| + | |||
| + | |||
common_cryosparc.1650613562.txt.gz · Last modified: (external edit)
