See also the applications page.
The Basic Linear Algebra Subprograms (BLAS) are routines that provide standard building blocks for performing basic vector and matrix operations.
The BLAS are available as part of the netlib lapack build.
There is also an older version of the BLAS available (not recommended) described on the legacy page.
BLAS routines optimized for Blue Gene/L are also available as part of ESSL, but be aware that ESSL does not contain all of the BLAS routines found in the netlib lapack build.
The Basic Linear Algebra Communication Subprograms (BLACS) provide a linear-algebra-oriented message-passing interface. The archives have been compiled from the version 1.1 source code downloaded from:
For Fortran, to link with the BLACS libraries add the following at your link step:
/bgl/apps/blacs/BLACS/LIB/blacsF77init_MPI-BGL-0.a /bgl/apps/blacs/BLACS/LIB/blacs_MPI-BGL-0.a /bgl/apps/blacs/BLACS/LIB/blacsF77init_MPI-BGL-0.a
For C, to link with the BLACS libraries add the following at your link step:
/bgl/apps/blacs/BLACS/LIB/blacsCinit_MPI-BGL-0.a /bgl/apps/blacs/BLACS/LIB/blacs_MPI-BGL-0.a /bgl/apps/blacs/BLACS/LIB/blacsCinit_MPI-BGL-0.a
There is also an older version of the BLACS available (not recommended) described on the legacy page.
The LAPACK library contains routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.
To link against the LAPACK library, add the following at the link step:
ScaLAPACK is a library of high-performance linear algebra routines for distributed-memory message-passing MIMD computers and networks of workstations supporting MPI . It is a continuation of the LAPACK project, which designed and produced analogous software for workstations, vector supercomputers, and shared-memory parallel computers. Both libraries contain routines for solving systems of linear equations, least squares problems, and eigenvalue problems. The goals of both projects are efficiency (to run as fast as possible), scalability (as the problem size and number of processors grow), reliability (including error bounds), portability (across all important parallel machines), flexibility (so users can construct new routines from well-designed parts), and ease of use (by making the interface to LAPACK and ScaLAPACK look as similar as possible).
The library is currently written in Fortran (with the exception of a few symmetric eigenproblem auxiliary routines written in C to exploit IEEE arithmetic) in a Single Program Multiple Data (SPMD) style using explicit message passing for interprocessor communication.
On NY Blue/L it was built from the NETLIB Fortran source code at http://www.netlib.org/scalapack.
mpixlf90 -o myprog myprog.f -L/bgl/apps/scalapack -lscalapack -L/bgl/apps/blacs/BLACS/LIB -lblacsF77init_MPI-BGL-0 -lblacs_MPI-BGL-0 -lblacsF77init_MPI-BGL-0 -L/bgl/apps/lapack -llapack_BGL -lblas_BGL
mpixlf90 -o myprog myprog.f -L/bgl/apps/scalapack -lscalapack -L/bgl/apps/blacs/BLACS/LIB -lblacsF77init_MPI-BGL-0 -lblacs_MPI-BGL-0 -lblacsF77init_MPI-BGL-0 -L/bgl/apps/lapack -llapack_BGL -L/opt/ibmmath/lib -lesslbg
mpicc -o myprog myprog.c -L/bgl/apps/scalapack -lscalapack -L/bgl/apps/blacs/BLACS/LIB -lblacsCinit_MPI-BGL-0 -lblacs_MPI-BGL-0 -lblacsCinit_MPI-BGL-0 -L/bgl/apps/lapack -L/opt/ibmmath/lib -llapack_BGL -lblas_BGL -lesslbg -L/opt/ibmcmp/xlf/bg/11.1/bglib -lxlf90 -lxl -lxlfmath -L/opt/ibmcmp/xlsmp/bg/1.7/bglib -lxlomp_ser -lm
IBM's ESSL for SLES on POWER (v4.2.5) that supports Blue Gene/L has been installed on the Front-End Node. ESSL provides over 150 math subroutines (Linear Algebra, Matrix operations, Sorting and Searching, FFT, Random Number Generator, etc.) that have been specifically tuned for performance on BG/L.
ESSL subroutines are for serial code. Parallel ESSL (PESSL) is not available on the Blue Gene/L; ScaLAPACK provides the parallel subroutines available on the Blue Gene/L.
The ESSL routines have been installed on the Front-End host under /opt/ibmmath/lib and the header files under /opt/ibmmathc/include. Thus, to link your code to the ESSL library you should use (Fortran):
blrts_xlf example.f -L/opt/ibmmath/lib -lesslbg
Information on the ESSL subroutines is available on the command line of the Front-End host using the installed man pages. For example, to obtain a description of the subroutine sasum, type: man sasum
Mathematical Acceleration Subsystem (MASS) for Blue Gene/L consists of libraries of mathematical intrinsic functions tuned specifically for optimum performance on Blue Gene/L. These libraries offer improved performance over the standard mathematical library routines, are thread-safe, and support 32-bit compilations in C, C++, and Fortran applications.
MASS v4.4 (scalar and vector) libraries have been installed on the Front-End node under /opt/ibmcmp/xlmass/bg/4.4/. The subdirectory blrts_lib contains the "cross-compiled" versions of the libraries. The lib and lib64 subdirectories contain the "native" versions (for executables that will run on the Front-End node, not on the Blue Gene Compute Nodes).
Chapter 3 of the Using the XL Compilers for Blue Gene Guide (pdf) provides detailed instructions on how to compile and link your code to the MASS libraries.
Add the linker option -Wl,--allow-multiple-definition to allow multiple definitions of the math routines.