Using HFSS on Supercomputers with PBS
Date: 04-04 | Compiled by: 3721RD
Hi,
I am trying to submit an HPC job for HFSS 16.1 using a PBS script. However, it only seems to be using one node and is insanely slow. Can somebody please look at my PBS script below and let me know what I am doing wrong?
It needs at least 800 GB of RAM.
Code:
#!/bin/bash -l
#PBS -N SRR_jcho
#PBS -j oe
#PBS -l walltime=24:00:00,nodes=4:ppn=16,mem=1000gb
#PBS -m abe
#PBS -M agarw071@umn.edu

module load hfss

export OptFile=${PBS_O_WORKDIR}/Options.txt
export ANSYSEM_JOB_ID=${PBS_JOBID}
export ANSYSEM_HOST_FILE=$PBS_NODEFILE
export LINUX_SSH_BINARY_PATH=/usr/bin
export ANSYSEM_LINUX_HPC_UTILS=/opt/software/AnsysEM/15.0/AnsysEM15.0/Linux64/schedulers/utils

cd ${PBS_O_WORKDIR}
# mkdir -p ${PBS_O_WORKDIR}/scratch

echo creating batch options list
echo \$begin \'Config\' > ${OptFile}
echo \'HFSS/NumCoresPerDistributedTask\'=${PBS_NUM_PPN} >> ${OptFile}
echo \'HFSS/HPCLicenseType\'=\'Pool\' >> ${OptFile}
echo \'HFSS/SolveAdaptiveOnly\'=0 >> ${OptFile}
echo \'HFSS/MPIVendor\'=\'Intel\' >> ${OptFile}
echo \'HFSS-IE/NumCoresPerDistributedTask\'=${PBS_NUM_PPN} >> ${OptFile}
echo \'HFSS-IE/HPCLicenseType\'=\'Pool\' >> ${OptFile}
echo \'HFSS-IE/SolveAdaptiveOnly\'=0 >> ${OptFile}
echo \'HFSS-IE/MPIVendor\'=\'Intel\' >> ${OptFile}
# echo \'tempdirectory\'=\'${PBS_O_WORKDIR}/scratch\' >> ${OptFile}
echo \$end \'Config\' >> ${OptFile}
chmod 777 ${OptFile}

hfss -Ng -monitor -distributed -batchoptions "HFSS/HPCLicenseType=pool" -BatchSolve vacuum_gap_10nm.hfss
Hi kritia,
I was trying something similar not long ago, and was getting the same results. It turned out that the meshing process took upwards of 24 hrs, and this process only used one node.
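One quick way to confirm whether everything is really landing on a single host is to count the distinct machines in the PBS node file before launching the solver. A minimal sketch, assuming a standard PBS environment; the sample node file and host names below are made up purely for illustration, and inside a real job script you would just read "$PBS_NODEFILE" directly:

```shell
#!/bin/bash
# Minimal sketch: check how many distinct hosts PBS actually handed the job.
# In a real job, PBS sets $PBS_NODEFILE; here we fabricate a sample file
# so the snippet runs standalone (hypothetical host names).
NODEFILE="${PBS_NODEFILE:-/tmp/sample_nodefile}"
if [ -z "${PBS_NODEFILE:-}" ]; then
    # Fake allocation: 2 hosts x 2 slots each, for illustration only.
    printf 'node01\nnode01\nnode02\nnode02\n' > "$NODEFILE"
fi

# Each line in the node file is one allocated slot; sort -u gives the hosts.
NHOSTS=$(sort -u "$NODEFILE" | wc -l | tr -d ' ')
echo "Distinct hosts allocated: $NHOSTS"
```

If this prints 1 even though you asked for nodes=4, the problem is in the allocation or the launch line, not in the solver settings.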
What I ended up doing is pre-meshing the simulation on a desktop computer and then transferring the simulation data over to the supercomputer to do the actual adaptive passes.
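If you split the work that way, the batch options file in the original script already has the relevant knob. My understanding (please check the batch-mode documentation for your HFSS version) is that setting the option below to 1 restricts the run to the adaptive passes and skips the frequency sweep, so the cluster-side options file would contain something like:

```
$begin 'Config'
'HFSS/SolveAdaptiveOnly'=1
$end 'Config'
```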
This was with a previous version of HFSS (12, I think), so as a disclaimer they may have fixed this and your problem could be entirely unrelated.
Good Luck!