team_gpu

Lamour GPU node

The GPU server is currently installed at IGBMC

Specs:

  • CPU : 2x Intel Xeon-G 6330
  • RAM : 16x 32GB DDR4-2933/3200 RDIMM (512GB total)
  • GPU : 4x Nvidia A10 (4x 24GB)
  • Storage : 6x 1.92 TB SAS SSD (1 disk for OS, 5 for scratch)
  • 7 years support

It is part of the IGBMC HPC cluster, so it is accessible through SSH (with MobaXterm, for instance):

ssh -X login@hpc.igbmc.fr

and then:

ssh -X login@phantom-node39
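
The two hops above can be combined into a single command. This is a convenience sketch, assuming an OpenSSH client recent enough to support the -J (ProxyJump) option; the hostnames and login are taken from the commands above:

```
# Jump through the HPC login node directly to the GPU node (assumes OpenSSH >= 7.3)
ssh -X -J login@hpc.igbmc.fr login@phantom-node39
```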

The space2, mendel and misc storage areas can be accessed at:

/shared/space2/lamour-ruff/
/shared/mendel/projects/
/shared/mendel/teams/lamour-ruff/
/shared/misc/cbi-teams/lamour-ruff/

The CryoSPARC instance can be accessed online at:

https://cavarelli-cryosparc.igbmc.fr

The submission script has to be modified to increase the RAM allocated to some jobs (the default is not enough memory for a 2D classification job with 200 classes in the Extensive Validation EMPIAR-10025 benchmark with Advanced jobs enabled):

#SBATCH --mem={{ (ram_gb*(ram_adjust|int))|int }}M

with ram_adjust set to 1500, i.e. 1.5x the memory allocated by default (the template multiplies ram_gb by ram_adjust and emits the result in MB)
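
To illustrate the arithmetic behind the template, here is a small sketch with a hypothetical job requesting ram_gb=16 (the value 16 is only an example, not taken from the script):

```shell
# ram_gb comes from CryoSPARC's per-job request (hypothetical example value);
# ram_adjust=1500 scales it by 1.5x, since the result is expressed in MB.
ram_gb=16
ram_adjust=1500
echo "#SBATCH --mem=$(( ram_gb * ram_adjust ))M"
# prints: #SBATCH --mem=24000M  (16 GB * 1.5 = 24 GB = 24000 MB)
```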

team_gpu.txt · Last modified: by Luc Bonnefond