
The Basic Linear Algebra Subprograms (BLAS) are routines that provide standard building blocks for performing basic vector and matrix operations.

The BLAS are available as part of the netlib lapack build.

There is also an older version of the BLAS available (not recommended) described on the legacy page.

BLAS optimized for Blue Gene/L are also available as part of ESSL, but be aware that ESSL does not contain all of the BLAS found in the netlib lapack build.

The Basic Linear Algebra Communication Subprograms (BLACS) provide a linear-algebra-oriented message-passing interface.
The archives have been compiled from the version 1.1 source code downloaded from netlib.

Read **/bgl/apps/blacs/IMPORTANT.NOTICE**

For Fortran, to link with the BLACS libraries add the following at your link step:

/bgl/apps/blacs/BLACS/LIB/blacsF77init_MPI-BGL-0.a /bgl/apps/blacs/BLACS/LIB/blacs_MPI-BGL-0.a /bgl/apps/blacs/BLACS/LIB/blacsF77init_MPI-BGL-0.a

For C, to link with the BLACS libraries add the following at your link step:

/bgl/apps/blacs/BLACS/LIB/blacsCinit_MPI-BGL-0.a /bgl/apps/blacs/BLACS/LIB/blacs_MPI-BGL-0.a /bgl/apps/blacs/BLACS/LIB/blacsCinit_MPI-BGL-0.a

There is also an older version of the BLACS available (not recommended) described on the legacy page.

The LAPACK library contains routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.

To link against the lapack library, add the following at the link step:

-L/bgl/apps/lapack -llapack_BGL

Additional Information:

- Many LAPACK routines invoke BLAS routines.

- To link against the LAPACK library and the BLAS routines in ESSL:

  -L/bgl/apps/lapack -L/opt/ibmmath/lib -llapack_BGL -lesslbg

- To link against the LAPACK library and the BLAS routines from netlib (described above):

  -L/bgl/apps/lapack -llapack_BGL -lblas_BGL

- The ESSL library contains BLAS routines, is tuned specifically for Blue Gene, and can significantly improve performance. But be aware that ESSL does not contain all of the standard BLAS routines.
- Using the GNU Fortran, or the GNU or XL C compiler, to link against the LAPACK library and the BLAS routines in the LAPACK distribution:

  -L/bgl/apps/lapack -llapack_BGL -lblas_BGL -L/opt/ibmcmp/xlf/bg/11.1/bglib -lxlf90 -lxl -lxlfmath -L/opt/ibmcmp/xlsmp/bg/1.7/bglib -lxlomp_ser -lm

- Using the GNU Fortran, or the GNU or XL C compiler, to link against the LAPACK library and the BLAS routines in the LAPACK distribution, then against the BLAS routines in ESSL:

  -L/bgl/apps/lapack -L/opt/ibmmath/lib -llapack_BGL -lblas_BGL -lesslbg -L/opt/ibmcmp/xlf/bg/11.1/bglib -lxlf90 -lxl -lxlfmath -L/opt/ibmcmp/xlsmp/bg/1.7/bglib -lxlomp_ser -lm

- Read **/bgl/apps/lapack/IMPORTANT.NOTICE**
- There is also an older version of LAPACK available (not recommended) described on the legacy page.
- Information about LAPACK, including a link to the LAPACK Users' Guide, is available on the netlib LAPACK page.

ScaLAPACK is a library of high-performance linear algebra routines for distributed-memory message-passing MIMD computers and networks of workstations supporting MPI. It is a continuation of the LAPACK project, which designed and produced analogous software for workstations, vector supercomputers, and shared-memory parallel computers. Both libraries contain routines for solving systems of linear equations, least squares problems, and eigenvalue problems. The goals of both projects are efficiency (to run as fast as possible), scalability (as the problem size and number of processors grow), reliability (including error bounds), portability (across all important parallel machines), flexibility (so users can construct new routines from well-designed parts), and ease of use (by making the interface to LAPACK and ScaLAPACK look as similar as possible).

The library is currently written in Fortran (with the exception of a few symmetric eigenproblem auxiliary routines written in C to exploit IEEE arithmetic) in a Single Program Multiple Data (SPMD) style using explicit message passing for interprocessor communication.

On NY Blue/L it was built from the NETLIB Fortran source code at http://www.netlib.org/scalapack.

Additional Information:

- To use SCALAPACK, one must link the BLACS and LAPACK libraries. Additionally, many SCALAPACK routines invoke BLAS routines.
- To link against the SCALAPACK library and the netlib BLAS routines in the lapack distribution:

mpixlf90 -o myprog myprog.f -L/bgl/apps/scalapack -lscalapack -L/bgl/apps/blacs/BLACS/LIB -lblacsF77init_MPI-BGL-0 -lblacs_MPI-BGL-0 -lblacsF77init_MPI-BGL-0 -L/bgl/apps/lapack -llapack_BGL -lblas_BGL

- To link against the SCALAPACK library and the BLAS routines in ESSL:

mpixlf90 -o myprog myprog.f -L/bgl/apps/scalapack -lscalapack -L/bgl/apps/blacs/BLACS/LIB -lblacsF77init_MPI-BGL-0 -lblacs_MPI-BGL-0 -lblacsF77init_MPI-BGL-0 -L/bgl/apps/lapack -llapack_BGL -L/opt/ibmmath/lib -lesslbg

- To link against the SCALAPACK library, the BLAS routines in the lapack distribution, and then the BLAS in ESSL, using the XL C compiler or the GNU C or Fortran compiler (this example uses the GNU C compiler). Use **blacsCinit_MPI-BGL-0** with a C compiler, as in the example below, and **blacsF77init_MPI-BGL-0** with a Fortran compiler:

mpicc -o myprog myprog.c -L/bgl/apps/scalapack -lscalapack -L/bgl/apps/blacs/BLACS/LIB -lblacsCinit_MPI-BGL-0 -lblacs_MPI-BGL-0 -lblacsCinit_MPI-BGL-0 -L/bgl/apps/lapack -L/opt/ibmmath/lib -llapack_BGL -lblas_BGL -lesslbg -L/opt/ibmcmp/xlf/bg/11.1/bglib -lxlf90 -lxl -lxlfmath -L/opt/ibmcmp/xlsmp/bg/1.7/bglib -lxlomp_ser -lm

- You may find, after linking the three BLACS libraries above, that you have to add another **-lblacs_MPI-BGL-0**.
- If you are calling a scalapack routine from C code, you need to prototype the Fortran scalapack routine in the C code. See pages 4 to 7 of the IBM XL C/C++ Programming Guide, especially the example on pages 7 and 8.
- Read **/bgl/apps/scalapack/IMPORTANT.NOTICE**
- There is also an older scalapack available (not recommended) in /bgl/local/lib/libscalapack.rts.a.
- Scalapack User Guide
- Scalapack Home Page
- Also see the README file in /bgl/apps/scalapack.

IBM's ESSL for SLES on POWER (v4.2.5) that supports Blue Gene/L has been installed on the Front-End Node. ESSL provides over 150 math subroutines (Linear Algebra, Matrix operations, Sorting and Searching, FFT, Random Number Generator, etc.) that have been specifically tuned for performance on BG/L.

ESSL subroutines are for serial code. Parallel ESSL (PESSL) is not available on the Blue Gene/L;
**ScaLAPACK** provides parallel subroutines on the Blue Gene/L.

The ESSL routines have been installed on the Front-End host under **/opt/ibmmath/lib**
and the header files under **/opt/ibmmathc/include**. Thus, to link your
code to the ESSL library you should use (Fortran):

blrts_xlf example.f -L/opt/ibmmath/lib -lesslbg

Information on the ESSL subroutines is available on the command line of the
Front-End host using the installed man pages. For example, to obtain a description of
the subroutine *sasum*, type: **man sasum**

Additional Information:

- The ESSL **Guide and Reference** is available (in **pdf**) at:
  http://publib.boulder.ibm.com/epubs/pdf/am501403.pdf
- For more information about ESSL, refer to the following web address:
  http://www-03.ibm.com/systems/p/software/essl.html

Mathematical Acceleration Subsystem (MASS) for Blue Gene/L consists of libraries of mathematical intrinsic functions tuned specifically for optimum performance on Blue Gene/L. These libraries offer improved performance over the standard mathematical library routines, are thread-safe, and support 32-bit compilations in C, C++, and Fortran applications.

The MASS v4.4 (scalar and vector) libraries have been installed on the Front-End node under:
**/opt/ibmcmp/xlmass/bg/4.4/**. The subdirectory **blrts_lib** contains the
"cross-compiled" versions of the libraries. The **lib** and **lib64** subdirectories contain the "native" versions
(for executables that will run on the Front-End node, not on the Blue Gene Compute Nodes).

Chapter 3 of the Using the XL Compilers for Blue Gene Guide (pdf) provides detailed instructions on how to compile and link your code to the MASS libraries.

Add linker option **-Wl,--allow-multiple-definition** to allow multiple definitions for the math routines.

Additional Information:

- Mathematical Acceleration Subsystem for Blue Gene/L
- Mathematical Acceleration Subsystem for Blue Gene/L - Features and Benefits
- Using the MASS Libraries on Blue Gene/L
- MASS Library slides of Blue Gene/L System and Optimization Tips by Bob Walkup of IBM
- MASS C/C++ function prototypes for Blue Gene/L
- MASS Fortran Interface blocks for Blue Gene/L
- Performance information for the MASS libraries on Blue Gene/L
- Accuracy information for the MASS libraries for Blue Gene/L