vnp4.physics.ubc.ca: P4-Xeon Cluster Homepage
Important: The information contained below is subject to change without warning. Please e-mail Matthew Choptuik immediately if you encounter problems using the cluster. Before using the cluster, read ALL information on this page, as well as the three pages linked to in the System Use section below. Finally, please check here FREQUENTLY while using the system.
Index
[ Status & News | Overview | Accounts | Access | Use | Software ]
- OCTOBER 22, 2008, 4:00 PM: A new version of PBS, torque, has been installed on the cluster. In principle this change should be transparent to users: all existing batch submission scripts should work as before, as should the basic PBS-related commands qsub, qstat, qdel, etc. (basic usage of these commands is sketched after this list). Management has verified that test (parallel) jobs run correctly as user matt, but there is a chance that other users may encounter difficulties. As usual, report all problems to Matt.
- See HERE for a recent snapshot of usage on the cluster.
- See HERE
for recent node load factors.
- See HERE
for usage summary by user.
- See HERE for /home and /home2 usage summary by user. PLEASE try to keep the partitions below 80% usage.
- See HERE for /var/scratch usage summary by user on all nodes. PLEASE try to keep the partitions below 80% usage.
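For reference, the basic PBS commands mentioned in the news item above are used as follows. This is only a sketch; job.pbs and the job number 1234 are placeholders.

   head% qsub job.pbs     # submit the batch script job.pbs; prints the new job's ID
   head% qstat            # list queued and running jobs
   head% qstat -n         # as above, but also show the nodes assigned to each job
   head% qdel 1234        # delete (kill) job number 1234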
NOTE: Eligible
applicants requesting an account on this cluster will also
automatically be given an account on our old PIII
cluster. Users are encouraged to use the old cluster for
development purposes, particularly
for parallel applications.
The cluster is currently configured so that only the head node (aka the front-end node) is connected to an external network; all nodes, including the head node (which has multiple network interfaces), are on a private internal network. Within the private network the head node is known as head; to the external world it is vnfe4.physics.ubc.ca, and access to and from it is limited to ssh and scp.
Thus, e.g., from your local workstation, connect via
workstation% ssh user@vnfe4.physics.ubc.ca
replacing
user with your own account name. Once you have successfully
logged into vnfe4, you will note that, as mentioned above,
within the internal network, the head's hostname is head,
not vnfe4.
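Since scp is also permitted, files may be copied to and from the cluster in the usual way. For example (myfile.f, results.dat and the /home/user directory are placeholders; substitute your own account name for user):

workstation% scp myfile.f user@vnfe4.physics.ubc.ca:/home/user
workstation% scp user@vnfe4.physics.ubc.ca:/home/user/results.dat .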
- Submitting (Parallel) Batch Jobs on the Myrinet Nodes Using PBS (a minimal batch-script sketch follows this list)
- Running Interactive Jobs on the Gigabit Nodes
- Running I/O Intensive Jobs on the Cluster
- Other Examples (including F77/C mixed-mode examples)
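As a rough illustration of the first item above, a parallel Myrinet job is submitted from the head node with a PBS script along the following lines. This is only a sketch: the resource request, job name, executable name mympiprog and the choice of the Intel/Myrinet MPICH installation are placeholder assumptions, and the exact mpirun invocation for the Myrinet (GM) device may differ; see the page linked above for the cluster's actual conventions.

   #!/bin/sh
   # Request 4 nodes with 2 processors each, and name the job (placeholder values):
   #PBS -l nodes=4:ppn=2
   #PBS -N testjob
   # Run from the directory the job was submitted from:
   cd $PBS_O_WORKDIR
   # Number of processors PBS has assigned to the job:
   NP=`wc -l < $PBS_NODEFILE`
   /opt/gmpi.intel/bin/mpirun -np $NP -machinefile $PBS_NODEFILE ./mympiprog

The script is then submitted with qsub, e.g. head% qsub job.pbs.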
Send mail to Matthew Choptuik if there is software you wish to have installed. Please include a description of the software and, if possible, a distribution site from which it can be downloaded.
Unless indicated
otherwise, all software described below is available on all nodes;
however, development work (including compilation) should generally be
restricted to the head node.
- Linux: Redhat 7.3
- Intel F90/C/C++ compilers and Math Kernel Library: See HERE for usage information and the following links for web-based documentation:
- Portland Group HPF, F90, C and C++ compilers: See HERE for usage information and the following links for web-based documentation:
- PBS Pro Batch Queueing System (head node only): See HERE for information on using PBS to submit parallel jobs on the Myrinet nodes.
- MPI: MPICH Version 1.2.5: Because there are two types of interconnect on the cluster (Myrinet and Gigabit Ethernet) and three compiler suites (GNU, Intel and Portland Group), there are currently six distinct installed versions of the MPICH library and associated software, with installation directories as shown in the following table:

              Gigabit             Myrinet
   GNU        /opt/mpich.gcc      /opt/gmpi.gcc
   Intel      /opt/mpich.intel    /opt/gmpi.intel
   PGI        /opt/mpich.pgi      /opt/gmpi.pgi
See HERE for examples of compilation and batch submission of parallel jobs using the Myrinet interface; a brief compilation sketch also follows this item.
Additional information:
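For instance, a simple MPI code could be compiled against the Intel/Myrinet installation using that installation's MPICH compiler wrappers. This is a sketch only; mpitest.f and mpitest.c are placeholder file names, and the analogous wrappers in the other directories of the table are used in the same way.

   head% /opt/gmpi.intel/bin/mpif77 -o mpitest mpitest.f
   head% /opt/gmpi.intel/bin/mpicc  -o mpitest mpitest.c

Alternatively, the appropriate bin directory can be put at the front of one's PATH so that mpif77, mpicc, etc. resolve to the desired installation.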
- Software compiled locally from source code: Software of this type is generally installed in a location that depends on the compiler suite that was used for the build. Currently these locations are as follows:
  - /usr/local/{bin,lib,include}: GCC compilers
  - /usr/local/intel/{bin,lib,include}: Intel compilers
  - /usr/local/pgi/{bin,lib,include}: PGI compilers
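To use locally compiled software that matches a particular compiler suite, the corresponding directories can be added to one's environment. For example, for the Intel-compiled installations (a sketch only, in sh/bash syntax; csh/tcsh users would use setenv instead):

   # Put the Intel-compiled binaries and shared libraries on the search paths:
   export PATH=/usr/local/intel/bin:$PATH
   export LD_LIBRARY_PATH=/usr/local/intel/lib:$LD_LIBRARY_PATH

Headers and libraries can similarly be picked up at compile time with -I/usr/local/intel/include and -L/usr/local/intel/lib.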
Maintained by choptuik@physics.ubc.ca.
Supported by CIAR, CFI and NSERC.