Note: An applications web page lists additional applications available on NY Blue/P beyond the libraries described here; please see that page as well.
IBM's ESSL for SLES on POWER (v4.4), which supports Blue Gene/P, has been installed on the Front-End Node. ESSL provides over 150 math subroutines (linear algebra, matrix operations, sorting and searching, FFTs, random number generation, etc.) that have been specifically tuned for performance on BG/P.
The ESSL subroutines are for serial code. Parallel ESSL (PESSL) is not available on NY Blue/P.
The ESSL routines have been installed on the Front-End host under /opt/ibmmath/lib, with the header files under /opt/ibmmath/include. Thus, to link your code against the ESSL library containing the single-threaded routines, use (Fortran):
>mpixlf90_r example.f -L/opt/ibmmath/lib -lesslbg
The ESSL SMP library, which contains the multi-threaded routines, is libesslsmpbg.a in the same directory. To compile SMP code, you must also specify the -qsmp compiler option.
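For example, a link line of roughly the following form (a sketch along the lines of the serial example above; -qsmp also pulls in the SMP runtime at link time) builds against the SMP library:
>mpixlf90_r -qsmp example.f -L/opt/ibmmath/lib -lesslsmpbg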
Information on the ESSL subroutines is available on the command line of the Front-End host using the installed man pages. For example, to obtain a description of the subroutine sasum, type: man sasum
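As a minimal sketch, sasum(n, x, incx) returns the sum of the absolute values of a single-precision vector, so a serial Fortran program calling an ESSL routine might look like the following (the file and program names are just illustrative):

! Minimal sketch: sum of absolute values of a vector via the ESSL
! routine sasum (single precision, stride 1).
program essl_demo
  implicit none
  real :: x(5), s
  real, external :: sasum
  integer :: i
  do i = 1, 5
     x(i) = -real(i)        ! fill with -1, -2, ..., -5
  end do
  s = sasum(5, x, 1)        ! expected result: 15.0
  print *, 'sum of |x(i)| =', s
end program essl_demo

It can be built with the single-threaded link line shown above, e.g.:
>mpixlf90_r essl_demo.f90 -L/opt/ibmmath/lib -lesslbg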
Mathematical Acceleration Subsystem (MASS) for Blue Gene/P consists of libraries of mathematical intrinsic functions tuned specifically for optimum performance on Blue Gene/P. These libraries offer improved performance over the standard mathematical library routines, are thread-safe, and can be used in C, C++, and Fortran applications.
MASS v4.4 (scalar and vector) libraries have been installed on the Front-End Node under /opt/ibmcmp/xlmass/bg/4.4. The subdirectory bglib contains the "cross-compiled" versions of the libraries. The lib and lib64 subdirectories contain the "native" versions (for executables that will run on the Front-End Node, not on the Blue Gene Compute Nodes).
When linking with the MASS libraries, add the linker option -Wl,--allow-multiple-definition to allow multiple definitions of the math routines.
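As a minimal sketch (assuming the usual MASS library names libmass.a and libmassv.a under the bglib subdirectory), a program simply calls the ordinary math intrinsics and picks up the tuned versions at link time:

! Minimal sketch: ordinary math intrinsics; linking against MASS is
! intended to resolve them to the tuned versions instead of the
! default math library ones.
program mass_demo
  implicit none
  real(8) :: x, y
  x = 0.5d0
  y = exp(x) + sin(x)
  print *, y
end program mass_demo

A hypothetical link line might then be:
>mpixlf90_r mass_demo.f90 -L/opt/ibmcmp/xlmass/bg/4.4/bglib -lmass -lmassv -Wl,--allow-multiple-definition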
The Basic Linear Algebra Communication Subprograms (BLACS) provide a linear-algebra-oriented message-passing interface.
The archives have been compiled from the version 1.1 source code downloaded from http://www.netlib.org/blacs/mpiblacs.tgz and http://www.netlib.org/blacs/mpiblacs-patch03.tgz
For Fortran, to link with the BLACS libraries add the following at your link step:
/bgsys/apps/blacs-1.1/BLACS/LIB/blacsF77init_MPI-BGP-0.a /bgsys/apps/blacs-1.1/BLACS/LIB/blacs_MPI-BGP-0.a /bgsys/apps/blacs-1.1/BLACS/LIB/blacsF77init_MPI-BGP-0.a
For C, to link with the BLACS libraries add the following at your link step:
/bgsys/apps/blacs-1.1/BLACS/LIB/blacsCinit_MPI-BGP-0.a /bgsys/apps/blacs-1.1/BLACS/LIB/blacs_MPI-BGP-0.a /bgsys/apps/blacs-1.1/BLACS/LIB/blacsCinit_MPI-BGP-0.a
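(The initialization archive appears twice in these link lines because of circular dependencies between the two archives.) As a minimal sketch of BLACS usage, assuming the standard BLACS calling sequence, a Fortran program that sets up and releases a 1 x nprocs process grid might look like:

! Minimal sketch: create and release a 1 x nprocs BLACS process grid.
program blacs_hello
  implicit none
  integer :: iam, nprocs, ictxt, nprow, npcol, myrow, mycol
  call blacs_pinfo(iam, nprocs)           ! my rank and the process count
  nprow = 1                               ! use a 1 x nprocs grid
  npcol = nprocs
  call blacs_get(0, 0, ictxt)             ! default system context
  call blacs_gridinit(ictxt, 'Row', nprow, npcol)
  call blacs_gridinfo(ictxt, nprow, npcol, myrow, mycol)
  print *, 'process', iam, 'of', nprocs, 'is at grid position', myrow, mycol
  call blacs_gridexit(ictxt)
  call blacs_exit(0)
end program blacs_hello

Compile it with mpixlf90_r and add the Fortran BLACS archives shown above at the link step.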
The LAPACK library contains routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.
On NY Blue/P it was built from the NETLIB Fortran source code.
To link against the LAPACK library, add the following at the link step:
-L/bgsys/apps/lapack-3.2/lapack-3.2 -llapack_BGP
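As a minimal sketch, a serial Fortran program that solves a small linear system with the LAPACK driver dgesv might look like:

! Minimal sketch: solve a 3x3 system A x = b with the LAPACK driver
! dgesv; the exact solution is x = (1, 1, 1).
program lapack_demo
  implicit none
  integer, parameter :: n = 3
  real(8) :: a(n,n), b(n)
  integer :: ipiv(n), info
  a = reshape((/ 2.d0, 1.d0, 0.d0, &
                 1.d0, 3.d0, 1.d0, &
                 0.d0, 1.d0, 2.d0 /), (/ n, n /))
  b = (/ 3.d0, 5.d0, 3.d0 /)
  call dgesv(n, 1, a, n, ipiv, b, n, info)
  print *, 'info =', info, '  x =', b
end program lapack_demo

LAPACK also needs a BLAS at link time, for example the reference BLAS used in the ScaLAPACK link lines below or ESSL's -lesslbg; assuming the archive libblas_BGP.a sits in the same directory, a hypothetical build line is:
>mpixlf90_r lapack_demo.f90 -L/bgsys/apps/lapack-3.2/lapack-3.2 -llapack_BGP -lblas_BGP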
ScaLAPACK is a library of high-performance linear algebra routines for distributed-memory message-passing MIMD computers and networks of workstations supporting MPI. It is a continuation of the LAPACK project, which designed and produced analogous software for workstations, vector supercomputers, and shared-memory parallel computers. Both libraries contain routines for solving systems of linear equations, least squares problems, and eigenvalue problems. The goals of both projects are efficiency (to run as fast as possible), scalability (as the problem size and number of processors grow), reliability (including error bounds), portability (across all important parallel machines), flexibility (so users can construct new routines from well-designed parts), and ease of use (by making the interface to LAPACK and ScaLAPACK look as similar as possible).
The library is currently written in Fortran (with the exception of a few symmetric eigenproblem auxiliary routines written in C to exploit IEEE arithmetic) in a Single Program Multiple Data (SPMD) style using explicit message passing for interprocessor communication.
On NY Blue/P it was built from the NETLIB Fortran source code at http://www.netlib.org/scalapack.
Example link lines for ScaLAPACK programs (Fortran with the reference BLAS and LAPACK, Fortran with ESSL, and C, respectively):
mpixlf77_r -o myprog myprog.f -L/bgsys/apps/scalapack-1.8.0/O3qstrict/scalapack-1.8.0 -lscalapack -L/bgsys/apps/blacs-1.1/BLACS/LIB -lblacsF77init_MPI-BGP-0 -lblacs_MPI-BGP-0 -lblacsF77init_MPI-BGP-0 -L/bgsys/apps/lapack-3.2/lapack-3.2 -llapack_BGP -lblas_BGP
mpixlf77_r -o myprog myprog.f -L/bgsys/apps/scalapack-1.8.0/O3qstrict/scalapack-1.8.0 -lscalapack -L/bgsys/apps/blacs-1.1/BLACS/LIB -lblacsF77init_MPI-BGP-0 -lblacs_MPI-BGP-0 -lblacsF77init_MPI-BGP-0 -L/bgsys/apps/lapack-3.2/lapack-3.2 -llapack_BGP -L/opt/ibmmath/lib -lesslbg
mpicc -o myprog myprog.c -L/bgsys/apps/scalapack-1.8.0/O3qstrict/scalapack-1.8.0 -lscalapack -L/bgsys/apps/blacs-1.1/BLACS/LIB -lblacsCinit_MPI-BGP-0 -lblacs_MPI-BGP-0 -lblacsCinit_MPI-BGP-0 -L/bgsys/apps/lapack-3.2/lapack-3.2 -L/opt/ibmmath/lib -llapack_BGP -lblas_BGP -lesslbg -L/opt/ibmcmp/xlf/bg/11.1/bglib -lxlf90_r -lxl -lxlfmath -L/opt/ibmcmp/xlsmp/bg/1.7/bglib -lxlomp_ser -lm
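As a minimal sketch of how a ScaLAPACK program is typically set up (assuming the standard numroc and descinit calling sequences), the following Fortran fragment builds a BLACS process grid and an array descriptor for a block-cyclically distributed n x n matrix; a solver call such as pdgesv would then operate on the locally allocated piece:

! Minimal sketch: BLACS grid plus a ScaLAPACK descriptor for an
! n x n matrix distributed in nb x nb blocks.
program scalapack_demo
  implicit none
  integer, parameter :: n = 1000, nb = 64   ! illustrative size and block size
  integer :: iam, nprocs, ictxt, nprow, npcol, myrow, mycol
  integer :: locr, locc, lld, info
  integer :: desc(9)
  integer, external :: numroc
  call blacs_pinfo(iam, nprocs)
  nprow = 1                                 ! 1 x nprocs process grid
  npcol = nprocs
  call blacs_get(0, 0, ictxt)
  call blacs_gridinit(ictxt, 'Row', nprow, npcol)
  call blacs_gridinfo(ictxt, nprow, npcol, myrow, mycol)
  locr = numroc(n, nb, myrow, 0, nprow)     ! local row count on this process
  locc = numroc(n, nb, mycol, 0, npcol)     ! local column count
  lld  = max(1, locr)
  call descinit(desc, n, n, nb, nb, 0, 0, ictxt, lld, info)
  print *, 'process', iam, 'holds a', locr, 'x', locc, 'local block'
  call blacs_gridexit(ictxt)
  call blacs_exit(0)
end program scalapack_demo

Such a program can be built with one of the Fortran link lines above (adjusting the source file name).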