by Cristian V. Diaconu
Queue | Wall time limit | Wall time default | CPUs | Memory | Requirements
---|---|---|---|---|---
guscus | no limit (be careful there!) | 24 hr | 12 cores (2 × 6-core) | 24 GB | none
compute | 8 hr (not very useful for us) | 8 hr | 8 cores (2 × 8-core) | 12 GB | min 9 CPUs
Note: Request 12 CPUs with the qsub option `-l nodes=1:ppn=12`.
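For example, a full-node job on the guscus queue could be submitted like this (the script name `job.pbs` is a placeholder, not a file from this page):

```
$ qsub -q guscus -l nodes=1:ppn=12 -l mem=24000mb -l walltime=24:00:00 job.pbs
```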
Queue | Wall time limit (same as the default) |
---|---|
commons | 24 hr |
guscus-short | 4 days |
guscus-med | 1 week |
guscus-long | no limit (Be careful there!) |
Sugar nodes have 8 cores (2 × 4-core CPUs). Request all 8 cores with the qsub option `-l nodes=1:ppn=8`.
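For example, a whole-node, four-day job on Sugar might be submitted like this (`job.pbs` is again a placeholder name):

```
$ qsub -q guscus-short -l nodes=1:ppn=8 -l walltime=96:00:00 job.pbs
```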
The majority of Sugar's nodes have 16 GB of RAM. If you need more memory (32 GB instead of 16 GB), request it with the `-l` option to `qsub`: `-l nodes=1:ppn=8:bigmem` or `-l mem=31gb`. For `qgj` you can ask for more memory with `-M 31gb`.
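For example, either of the following should place the (placeholder) script `job.pbs` on a 32 GB node:

```
$ qsub -l nodes=1:ppn=8:bigmem job.pbs
$ qsub -l nodes=1:ppn=8 -l mem=31gb job.pbs
```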
The Gaussian environment can now be loaded with the `module` command. Since GDV is a Scuseria Group specific resource, it is fully confined to the `/projects/guscus` tree.
Please add the following to your shell configuration file to load the module configuration for the `/projects/guscus` specific software:

- tcsh/csh shell users: `source /projects/guscus/.cshrc`
- bash shell users: `. /projects/guscus/.bashrc`
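A minimal sketch of the bash variant, with an existence check added as an optional precaution:

```bash
# in ~/.bashrc: load the /projects/guscus module configuration, if present
if [ -f /projects/guscus/.bashrc ]; then
    . /projects/guscus/.bashrc
fi
```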
List available modules:
```
$ module avail

---------------------- /projects/guscus/apps/modulefiles -----------------------
g09-b1       gdv-h10      pgi-10.9

---------------------------- /opt/apps/modulefiles -----------------------------
R/2.11.1-gcc           matlab/2009b
amber/11               mvapich/1.1.0-intel
cilk++/8503            mvapich/1.2rc1-intel
......

$ module load gdv-h10
$ module list
Currently Loaded Modulefiles:
  1) pgi-10.9   2) gdv-h10
```
`h2o.gjf`:

```
%nproc=2
%mem=1GB
#p hf/6-311++G** pop=reg

Water HF/6-311++G**

0 1
O    0.00000000    0.00000000   -0.11084336
H    0.00000000    0.78388323    0.44332080
H    0.00000000   -0.78388323    0.44332080
```
`h2o.pbs`:

```bash
#!/bin/bash -x
#PBS -N h2o
#PBS -V
#PBS -m n
#PBS -r n
#PBS -j oe
#PBS -q guscus
#PBS -l nodes=1:ppn=12
#PBS -l mem=24000mb
#PBS -l walltime=30:00

set -o errexit                  # exit on errors
set -o pipefail                 # or failure in a pipe

cd "$PBS_O_WORKDIR"             # original working directory

NAME="h2o"                      # the base name for all files
INP="$NAME.gjf"                 # input file
OUT="$NAME.out"                 # output file
ERR="$NAME.err"                 # log file

GAUSS_MEMDEF=1610612736         # default memory for Gaussian
GAUSS_SCRDIR=$SHARED_SCRATCH/$USER/tmp/$PBS_JOBID   # Gaussian scratch dir
export GAUSS_SCRDIR GAUSS_MEMDEF

/bin/mkdir -p $GAUSS_SCRDIR
trap "rm -rf $GAUSS_SCRDIR" EXIT   # set trap to clean up at EXIT time

$GAU <"$INP" >"$OUT"            # Run Gaussian ($GAU is presumably set by the loaded module)
echo $?                         # Gaussian exit status
```
```
$ module load g09-b1
$ module list
Currently Loaded Modulefiles:
  1) pgi-10.9   2) g09-b1
```
```
$ qsub h2o.pbs
146270.sticman.stic.rice.edu
$ qstat -u $USER

sticman.stic.rice.edu:
                                                                 Req'd  Req'd   Elap
Job ID               Username Queue    Jobname          SessID NDS TSK Memory Time  S Time
-------------------- -------- -------- ---------------- ------ --- --- ------ ----- - -----
146270.sticman.s     cvd1     guscus   h2o               17925   1  12 24000m 00:30 R   --
```
```
$ cd /projects/guscus/apps/gau/gdv-h10/extra/LRS-HSE-dos
$ module load gdv-h10
$ mk
$ ls -lhtr
....
-rw-r--r-- 1 cvd1 guscus 4.4K 11-03-10,02:13.09 zfndos.o
-rw-r--r-- 1 cvd1 guscus 156K 11-03-10,02:13.09 bdam1.o
-rwxr-x--- 1 cvd1 guscus  26M 11-03-10,02:13.10 l502.exe
-rwxr-xr-x 1 cvd1 guscus  18K 11-03-10,02:13.10 gdos
```
The Gaussian environment can now be set up using the `module` command. E.g.:
```
$ module avail

---------------------- /projects/guscus/apps/modulefiles -----------------------
g09-b1       gdv-h10      pgi-10.9

---------------------------- /opt/apps/modulefiles -----------------------------
R/2.11.1-gcc           matlab/2009b
amber/11               mvapich/1.1.0-intel
cilk++/8503            mvapich/1.2rc1-intel
......

$ module load gdv-h10
$ module list
Currently Loaded Modulefiles:
  1) pgi-10.9   2) gdv-h10
```
IMPORTANT: do not run jobs from your home directory; use the `$SHARED_SCRATCH` space instead. E.g.:
```
$ mkdir -p $SHARED_SCRATCH/$USER/h2o
$ cd $SHARED_SCRATCH/$USER/h2o
```
```
cat <<EOF >h2o.gjf
%nprocshared=12
%mem=18GB
#p opt hf/6-311++G** pop=reg

Water HF/6-311++G**

0 1
O    0.00000000    0.00000000   -0.11084336
H    0.00000000    0.78388323    0.44332080
H    0.00000000   -0.78388323    0.44332080
EOF
```
```bash
# quote the delimiter so $PBS_O_WORKDIR etc. are written literally into the script
cat <<'EOF' >h2o.pbs
#!/bin/bash -x
#PBS -N H2O
#PBS -V
#PBS -m n
#PBS -r n
#PBS -o "h2o.err"
#PBS -j oe
#PBS -l nodes=1:ppn=12
#PBS -l mem=24000mb
#PBS -l walltime=120:00:00

cd $PBS_O_WORKDIR
export GAUSS_SCRDIR=/shared.scratch/$USER/tmp/$PBS_JOBID
mkdir -p $GAUSS_SCRDIR
trap "rm -rf $GAUSS_SCRDIR" EXIT

time gdv <h2o.gjf >h2o.out
echo "exit code: $?"
EOF
```
```
$ module load gdv-h10
$ qsub h2o.pbs
$ qs -u
Job Id   Job Name        User     Queue    S NDS Wallt   CPUt Spdup
-------- --------------- -------- -------- - --- ----- ------ -----
146219   LiH-1D          cvd1     guscus   R   1   0.0    0.0  0.00
```
```
$ ls /projects/guscus/apps/gau/
g09-b1/  gdv-g1/  gdv-h1/  gdv-h10/
```
As of 3/15/2011, this is the only maintained location, both on the RCSG machines and on the desktops.