
20 March 2012

114. Nwchem 6.0 with openmpi support on debian testing

I still haven't managed to compile a working version of Nwchem 6.1 on 64-bit Debian, regardless of whether I use mpich or openmpi. The number of posts relating to compiling nwchem is steadily growing, but I'd rather have posts which are almost, but not quite, identical if that makes it unambiguous for the average user how to build and use nwchem.

Anyway, since I'm using openmpi on my rocks cluster(s), I figure I might as well start using openmpi on debian too. In addition, the only way to get nwchem 6.0 working with mpich2 on debian seems to be the old v1.2 package, which causes problems of its own (see apt-pinning).

Note: See here for information about python support: http://verahill.blogspot.com.au/2012/04/adding-python-support-to-nwchem-under.html

Long story short -- nwchem with openmpi:
mkdir ~/tmp
sudo apt-get install openmpi-bin libopenmpi-dev
cd ~/tmp
wget http://www.nwchem-sw.org/images/Nwchem-6.0.tar.gz
tar -xvf Nwchem-6.0.tar.gz
cd nwchem-6.0/

export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=/home/me/tmp/nwchem-6.0
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES=all
export USE_MPI=y
export USE_MPIF=y
export MPI_LOC=/usr/lib/openmpi/lib
export MPI_INCLUDE=/usr/lib/openmpi/include
export LIBRARY_PATH=$LIBRARY_PATH:/usr/lib/openmpi/lib
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
cd $NWCHEM_TOP/src
make clean
make nwchem_config
make FC=gfortran
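
If your openmpi lives somewhere other than /usr/lib/openmpi, you can probably recover the right LIBMPI flags from the OpenMPI compiler wrapper itself:

mpif77 -showme:link

The -l entries it prints are what goes into LIBMPI.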

This will take a good 20-30 minutes.


Your binary will be in nwchem-6.0/bin/LINUX64/

Finally, see whether openmpi is already in your LD_LIBRARY_PATH

echo $LD_LIBRARY_PATH
/lib/openmm:/usr/lib/nvidia-cuda-toolkit:/usr/lib/nvidia
If not, edit ~/.bashrc and add
export LD_LIBRARY_PATH=/usr/lib/openmpi/lib:$LD_LIBRARY_PATH
export PATH=$PATH:/home/me/tmp/nwchem-6.0/bin/LINUX64
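
A quick smoke test to confirm that the binary and openmpi play together -- a minimal sketch, where the throwaway He/sto-3g job is just a small example, not part of the build proper:

cd ~/tmp
cat > smoketest.nw << EOF
start smoketest
geometry
 He 0 0 0
end
basis
 He library sto-3g
end
task scf
EOF
mpirun -n 2 nwchem smoketest.nw

If it prints a converged SCF energy and exits cleanly, the build is good.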


113. Using ECCE to run nwchem jobs

EDIT: This post is getting messier as I'm hammering things out...but I've gotten everything to work in the end, so please persist.  The workflow described below is not the ideal one, but it'll get you started. I'll link here when I put up a newer, more reasonable tutorial.

EDIT2: I'm really warming to ECCE as I'm learning more about it. I still think it'd be nice if it was open source, and I can't understand why it has to be reliant on csh (which is pretty much broken on ROCKS, and uncomfortable at the best of times), but it's pretty neat once you've got all the details ironed out. Error feedback/report could be better though.

EDIT 3: ECCE is going open source in the (northern) summer of 2012! As users we no longer have any excuses to complain.

Here's a quick introduction to getting started with using ECCE as the interface to nwchem, similar to how gaussview can be used to set up gaussian jobs.

This presumes that you've set up ECCE and preferably compiled your own version of nwchem:
http://verahill.blogspot.com.au/2012/03/ecce-on-debian-but-not-on-rockscentos.html
http://verahill.blogspot.com.au/2012/03/nwchem-61-with-openmpi-on-rocks.html
http://verahill.blogspot.com.au/2012/01/debian-testing-64-wheezy-nwhchem.html


##Important##
Once I had figured all of this out I rebuilt nwchem and re-installed ecce in the proper locations. You might want to do the same.

A. If you're going to use several nodes you should put nwchem in the same position in the file system hierarchy on all nodes e.g.
/opt/nwchem/nwchem-6.0/bin/LINUX64/nwchem

Also, make sure you share a folder between the nodes (see how to use NFS) which you can use for run-time files, e.g. /work

EDIT 4: This (probably) isn't necessary. In fact, using NFS in the wrong way will slow things down.

Set the permissions right (chown your user and set to 777 -- 755 is enough for nfs sharing between debian nodes, but between ROCKS and Debian you seem to need 777), and open your firewall on all ports for communication between the nodes.
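
Assuming the shared folder is /work and your user is me, that boils down to something like:

sudo mkdir -p /work
sudo chown me:me /work
sudo chmod 777 /work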

B. Make sure that ECCE_HOME has been set in ~/.bashrc e.g.
export ECCE_HOME=/opt/ecce/apps

and in ~/.cshrc
setenv ECCE_HOME /opt/ecce/apps
(note that csh's setenv takes a space, not an equals sign)

C.
edit /opt/ecce/apps/siteconfig/submit.site (the location depends on where you installed ecce)
Change lines 65+ from
#NWChemCommand {
#  $nwchem $infile > $outfile
#}
to (for multiple nodes)
NWChemCommand {
mpirun -hostfile /work/hosts.list -n $totalprocs --preload-binary /opt/nwchem/nwchem-6.0/bin/LINUX64/nwchem $infile > $outfile
}
This uses mpirun for parallel job submission and assumes you have a hosts file in /work. For running on a single node you can use


NWChemCommand {
mpirun  -n $totalprocs $nwchem  $infile > $outfile
}

Use either --preload-binary /opt/nwchem/nwchem-6.0/bin/LINUX64/nwchem or $nwchem -- see what works for you. You probably can't use preload if you're running different linux distros (e.g. debian and centos).

My hosts.list looks like this:

tantalum slots=4 max_slots=4
beryllium slots=4 max_slots=5

Make sure that you don't accidentally put 2 jobs on node 0, then 2 jobs on node 1, then another 2 jobs on node 0, since the processes won't be consecutively numbered and armci will crash. You can avoid this by setting slots and max_slots to the same number.
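
You can sanity-check the placement before submitting any real jobs by running hostname through the same hostfile:

mpirun -hostfile /work/hosts.list -n 8 hostname

With the hosts.list above, the first four lines should come back from tantalum and the rest from beryllium, i.e. each node's ranks are consecutive.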


D.
You may have to edit /etc/openmpi/openmpi-mca-params.conf if you have several (real or virtual) interfaces and add e.g.


btl=tcp,sm,self
btl_tcp_if_include=eth1,eth2
btl_tcp_if_exclude=eth0,virtbr0
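
If you're not sure which interfaces exist on a given node,

/sbin/ifconfig -a

will list them, so you can decide what to include and what to exclude.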


Start ECCE:
First start the server
csh /home/me/tmp/ecce/ecce-v6.2/server/ecce-utils/start_ecce_server
then launch ecce

ecce

This will launch what the ecce people call the 'gateway':
The Gateway

0. Make sure you've got your machine set up
Click on Machine browser
Make sure that you can connect to the node e.g. by clicking on disk usage

Set the application paths. Don't fiddle with nodes -- just change the number of processors to the total across all nodes.



1. Draw SiCl4 
Click on the Builder in the Gateway, which gives you the following:
The builder window

Click on More to get the periodic table which gives you access to Si

Select Geometry -- here, Tetrahedral

Si -- with four 'nubs' (yup, that's what the ecce people call them)

Time to attach Cl atoms to the nubs. Select Cl and pick Terminal geometry.

Click on a 'nub' to replace it with a Cl

And do it until you've replaced all 'nubs'. Hold down right mouse button to rotate

Click on the broom next to the bond menu on the right to pre-optimize the structure using MM

And save. You will probably be limited to saving your jobs in folders below the ecce folder.


2. Set up your job
Click on the Organizer icon in the 'gateway', which takes you here:

Click on the first icon, Editor

Focus on selecting Theory and Run type. Here we'll do a geometry optimisation.

Click on Details for Theory

Click on Details for Run type

Constraints are optional

3. In the organizer, click on the third icon to set the basis set. Atoms covered by a particular basis set are indicated by an orange lower-right corner

You can get Details about the basis set

If you don't have a Navy Triangle you can't run. Click on Editor and see what might be wrong.

Ready to run. Click on Launch.
4. Running
I'm still working on enabling more than a single core...
Once you've clicked on launch you'll get

If you click on viewer you can monitor the job

Optimization in progress
5. Re-launch a job at higher theory
In the Organizer, select your last job and then click on Edit, Duplicate Setup with Last Geometry
You then get a copy to edit

Change the basis set, save, then click on Final Edit

This is the nwchem input file in a vim instance

Add a line to the end, saying task scf freq to calculate the vibrations (there's another job option called geovib which does optim+freq , but here we do it by hand)
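
The tail of the edited input file then looks something like this (the optimize task was already there; the freq line is the one you add):

task scf optimize
task scf freq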

Launch

Running...

You can now look at the vibrations

And you can visualise MOs -- here's the HOMO, which looks like isolated p orbitals on the chlorines

You can also calculate 'properties'

These include GIAO shielding

Performance:
Here's phenol (scf/6-31g*) across three gigabit-linked nodes. The dotted line denotes node boundaries.


Here's a number of alkanes (scf/6-31g) on 4 cores on a single node:


13 March 2012

105. Nwchem 6.1 with openmpi on ROCKS 5.4.3/CentOS 5.6


EDIT 18 May 2012: 
Compiling nwchem 6.1 with internal libs on debian: http://verahill.blogspot.com.au/2012/05/compiling-nwchem-61-with-internal-libs.html
Compiling nwchem 6.1 with openblas on debian: http://verahill.blogspot.com.au/2012/05/building-nwchem-61-on-debian.html


I can build and use nwchem on ROCKS 5.4.3 -- see instructions below.

EDIT: the gfortran version is GNU Fortran (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50)
On debian, which yields a segfaulting binary, the version is GNU Fortran (Debian 4.6.3-1) 4.6.3

I'm still having no luck building binaries which don't segfault on execution on debian, though. The openmpi versions are the same for both ROCKS and debian: 1.4.3.

--START HERE --

ROCKS 5.4.3/CentOS
The build is essentially the same as for nwchem-6.0 (http://verahill.blogspot.com.au/2012/03/building-nwchem-60-on-rocks-543centos.html) -- the single difference is that you need to define USE_MPIF4 (which makes nwchem use 4-byte integers for the MPI Fortran interface) or you get errors.

To build:

wget http://www.nwchem-sw.org/images/Nwchem-6.1-2012-Feb-10.tar.gz
tar -xvf Nwchem-6.1-2012-Feb-10.tar.gz
cd nwchem-6.1/
export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=/export/home/me/tmp/nwchem-6.1
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES=all
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/opt/openmpi
export MPI_INCLUDE=/opt/openmpi/include
export LIBRARY_PATH=$LIBRARY_PATH:/opt/openmpi/lib
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
cd $NWCHEM_TOP/src
make clean
make nwchem_config
make FC=gfortran

Building takes a little while.

Running:
Make sure that you make the reference to your openmpi libs permanent and make life easier by putting the following in your ~/.bashrc or /etc/profile:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi/lib

export NWCHEM_EXECUTABLE=/export/home/me/tmp/nwchem-6.1/bin/LINUX64/nwchem
export NWCHEM_BASIS_LIBRARY=/export/home/me/tmp/nwchem-6.1/src/basis/libraries/
export PATH=$PATH:/export/home/me/tmp/nwchem-6.1/bin/LINUX64



To run on multiple procs do
mpirun -n 3 nwchem input.nw
where 3 is the number of cores

103. Building nwchem 6.0 on Rocks 5.4.3/CentOS

I've always been a Debian man, but for various reasons I need to be able to compile various scientific packages on an HPC running ROCKS. ROCKS 5.4.3 is based on CentOS 5.6, and it turns out that debian is wonderfully easy, accommodating and robust in comparison. Well, since it's not my HPC, CentOS is what I'm stuck with.

Here's how to build nwchem on a rocks 5.4.3 (viper) cluster based on CentOS 5.6 and its ancient kernel.
(Linux  2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux )

There are three different approaches:




CASE 1.
 Using LD_LIBRARY_PATH
This method requires no root access.
Check to see whether you've installed the rocks-openmpi package from the bio roll - it should be in /opt/openmpi. Otherwise use yum to install the base-roll openmpi package, which will end up in /usr/lib64/openmpi/1.4-gcc/lib -- you'll need root or sudo to do anything with yum.

For compilation, do
export LIBRARY_PATH=$LIBRARY_PATH:/opt/openmpi/lib
or
export LIBRARY_PATH=$LIBRARY_PATH:/usr/lib64/openmpi/1.4-gcc/lib/
depending on whether there is an openmpi directory in /opt or not.

You can also put the export line in your buildconf.sh below
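
For instance, a small sketch that picks the right path automatically:

if [ -d /opt/openmpi/lib ]; then
    export LIBRARY_PATH=$LIBRARY_PATH:/opt/openmpi/lib
else
    export LIBRARY_PATH=$LIBRARY_PATH:/usr/lib64/openmpi/1.4-gcc/lib/
fi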
For execution:
in either your ~/.bashrc (per user) or /etc/profile (global) put
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi/lib






CASE 2. /opt/openmpi is present; using symlinked libs

mpicc and mpif77 are probably already symlinked, but if not:

sudo ln -s /opt/openmpi/bin/mpicc /usr/bin/mpicc
sudo ln -s /opt/openmpi/bin/mpif77 /usr/bin/mpif77


The following allows for building and running:
sudo ln -s /opt/openmpi/lib/libmpi.so /usr/lib/libmpi.so
sudo ln -s /opt/openmpi/lib/libopen-rte.so /usr/lib/libopen-rte.so
sudo ln -s /opt/openmpi/lib/libopen-pal.so /usr/lib/libopen-pal.so
sudo ln -s /opt/openmpi/lib/libmpi_f77.so /usr/lib/libmpi_f77.so
sudo ln -s /opt/openmpi/lib/libmpi.so /usr/lib64/libmpi.so.0
sudo ln -s /opt/openmpi/lib/libopen-rte.so /usr/lib64/libopen-rte.so.0
sudo ln -s /opt/openmpi/lib/libopen-pal.so /usr/lib64/libopen-pal.so.0
sudo ln -s /opt/openmpi/lib/libmpi_f77.so /usr/lib64/libmpi_f77.so.0


the /usr/lib64 symlinks are necessary for execution, or you'll get
./nwchem: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory
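
If you're unsure which libraries are still unresolved, ldd will tell you:

ldd $NWCHEM_TOP/bin/LINUX64/nwchem | grep 'not found'

An empty result means all shared libraries can be found.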



CASE 3. /opt/openmpi is NOT present; using symlinked libs

sudo yum install openmpi openmpi-devel
And then put in all the symlinks... dunno why this isn't done on install, but there you go.

sudo ln -s /usr/local/lib64/openmpi/1.4-gcc/bin/mpicc  /usr/bin/mpicc
sudo ln -s /usr/local/lib64/openmpi/1.4-gcc/bin/mpif77 /usr/bin/mpif77
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libmpi.so /usr/lib/libmpi.so
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libopen-rte.so /usr/lib/libopen-rte.so
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libopen-pal.so /usr/lib/libopen-pal.so
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libmpi_f77.so /usr/lib/libmpi_f77.so

Using the above symlinks compilation will work just fine.
However, in order to actually run nwchem you need
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libmpi.so /usr/lib64/libmpi.so.0
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libopen-rte.so /usr/lib64/libopen-rte.so.0
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libopen-pal.so /usr/lib64/libopen-pal.so.0
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libmpi_f77.so /usr/lib64/libmpi_f77.so.0

or you'll get
./nwchem: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory
Finally, make sure we can find our mpirun:
sudo ln -s /usr/lib64/openmpi/1.4-gcc/bin/mpirun /usr/bin/mpirun


ALL CASES
Continue here:
We'll be working in /export/home/me/tmp
wget http://www.nwchem-sw.org/images/Nwchem-6.0.tar.gz
tar -xvf Nwchem-6.0.tar.gz
cd nwchem-6.0

create a file called buildconf.sh and stuff it with the following:
export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=/export/home/me/tmp/nwchem-6.0
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES=all
export USE_MPI=y
export USE_MPIF=y
export MPI_LOC=/usr/lib64/openmpi/1.4-gcc/lib
export MPI_INCLUDE=/usr/lib64/openmpi/1.4-gcc/include
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
cd $NWCHEM_TOP/src
make clean
make nwchem_config
make FC=gfortran
NOTE: the above buildconf.sh works for the case where openmpi lives under /usr/lib64 (CASE 3). If it got installed with ROCKS on setup and is present in /opt/openmpi (CASE 1 or 2), change the following:

export MPI_LOC=/opt/openmpi/lib
export MPI_INCLUDE=/opt/openmpi/include
Launch the build

sh buildconf.sh

You'll end up with a binary called nwchem in nwchem-6.0/bin/LINUX64 -- you can put a PATH to it in your ~/.bashrc
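
With the paths used above, that would be:

export PATH=$PATH:/export/home/me/tmp/nwchem-6.0/bin/LINUX64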


CASE 3
For execution you will need to make sure nwchem can find the openmpi libs --
echo $LD_LIBRARY_PATH
will tell you whether the path is included by default.
Otherwise, in either your ~/.bashrc (per user) or /etc/profile (global) put
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/1.4-gcc/lib


Running
If you move nwchem out of the compilation directory (to, say, /usr/local/nwchem) you may also want to define e.g.

export NWCHEM_TOP=/usr/local/nwchem-6.0
export NWCHEM_TARGET=LINUX64
export NWCHEM_BASIS_LIBRARY=${NWCHEM_TOP}/libraries/

Again, this goes into your .bashrc or /etc/profile, depending on scope.

To use multiple cores, do
mpirun -n 4 nwchem jobname.nw
where 4 is the number of cores.


Errors and troubleshooting:
If you get errors about missing libraries or mpicc-related errors, make sure that you've symlinked everything you need into the /usr/lib folder or set LIBRARY_PATH (see above). You could probably edit /etc/ld.so.conf too, but that gets messy with time.
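
If you do go the ldconfig route, a drop-in file is tidier than editing /etc/ld.so.conf directly:

echo '/opt/openmpi/lib' | sudo tee /etc/ld.so.conf.d/openmpi.conf
sudo ldconfig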

I also tried building using mpich2-1.2 as well as 1.4, but got error messages about undefined references left and right.

24 February 2012

74. Building nwchem 6.1 on debian testing 32 bit only


EDIT 18 May 2012: 
It's now been solved on 64 bit as well
Compiling nwchem 6.1 with internal libs on debian: http://verahill.blogspot.com.au/2012/05/compiling-nwchem-61-with-internal-libs.html
Compiling nwchem 6.1 with openblas on debian: http://verahill.blogspot.com.au/2012/05/building-nwchem-61-on-debian.html


This doesn't work with the 64 bit version of nwchem 6.1 -- there's a separate post on that. Nwchem 6.1 64 bit will build just fine, but will crash when run. Again, see the other post.

Building on 32 bit debian testing:



Put a hold on your mpich2 and libmpich2-dev packages (see e.g. here for more details):
1. edit your /etc/apt/sources.list to allow packages from stable e.g.

deb ftp://ftp.au.debian.org/debian/ testing main contrib non-free
deb ftp://ftp.au.debian.org/debian/ stable main contrib non-free

2. create an /etc/apt/preferences file e.g.

Package: *
Pin: release a=testing
Pin-Priority: 990

Package: *
Pin: release a=stable
Pin-Priority: -10
3. install v 1.2 explicitly
sudo apt-get update && sudo apt-get install mpich2=1.2.1.1-5 libmpich2-dev=1.2.1.1-5

4. put a hold on the packages

sudo su
echo "mpich2 hold"|dpkg --set-selections
echo "libmpich2-dev hold"|dpkg --set-selections

exit
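
To check that the pin and the hold actually took:

apt-cache policy mpich2
dpkg --get-selections | grep mpich2

apt-cache policy should report 1.2.1.1-5 as the installed version, and dpkg should list both packages as 'hold'.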

Make sure you have the necessary packages:
sudo apt-get install build-essential gfortran fort77

I got some error messages before installing fort77; I'm not sure whether they're related.

Download the nwchem source
cd ~
wget http://www.nwchem-sw.org/images/Nwchem-6.1-2012-Feb-10.tar.gz
tar -xvf Nwchem-6.1-2012-Feb-10.tar.gz
cd nwchem-6.1

create buildconf.sh in ~/nwchem-6.1


export LARGE_FILES=TRUE
export TCGRSH=/usr/local/bin/ssh
export NWCHEM_TOP=/home/me/nwchem-6.1
export NWCHEM_TARGET=LINUX
export NWCHEM_MODULES=all
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/usr
export MPI_LIB=$MPI_LOC/lib
export MPI_INCLUDE=$MPI_LOC/include/mpich2
export LIBMPI="-lmpich -lfmpich -lpthread"
cd $NWCHEM_TOP/src
make clean
make nwchem_config
make FC=gfortran

run
sh buildconf.sh


Building takes ages, but it works. Why it works for 32 bit and not 64 bit has me a bit confused, but it's probably a good hint to the solution.

09 January 2012

43. nwchem revisited. Install on new debian machine

Here's a streamlined version of compiling and setting up nwchem with mpich2 support on a virgin debian testing (wheezy) 64 bit computer. I'm working on a build guide for nwchem 6.1 -- currently it builds fine, but all jobs end with a Segmentation Violation error and exit with status 11.

Start by running
sudo apt-get install build-essential gfortran
Edit these two files (the preferences one will most likely not exist)
/etc/apt/sources.list

deb ftp://ftp.au.debian.org/debian/ testing main contrib non-free
deb ftp://ftp.au.debian.org/debian/ stable main contrib non-free
deb ftp://ftp.au.debian.org/debian/ unstable main contrib non-free

/etc/apt/preferences

Package: *
Pin: release a=testing
Pin-Priority: 990

Package: *
Pin: release a=unstable
Pin-Priority: -10

Package: *
Pin: release a=stable
Pin-Priority: 10

IMPORTANT: the pin-priority for stable must be positive (here +10), or it won't work.

Run
sudo apt-get install mpich2=1.2.1.1-5 libmpich2-dev=1.2.1.1-5

Set the Pin-priority to -10 for stable again.

sudo su
echo "mpich2 hold"|dpkg --set-selections
echo "libmpich2-dev hold"|dpkg --set-selections
exit
mkdir ~/nwchem
cd ~/nwchem
touch buildconf.sh
chmod +x buildconf.sh

(EDIT 21/02/2012: I accidentally put a bad csh-formatted buildconf.sh file at the beginning. Then I put an incomplete bash version. It should work now.)

In buildconf.sh put
export LARGE_FILES=TRUE
export TCGRSH=/usr/local/bin/ssh
export NWCHEM_TOP=/home/myhome/nwchem/nwchem-6.0
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES=all
export USE_MPI=y
export USE_MPIF=y
export MPI_LOC=/usr
export MPI_INCLUDE=$MPI_LOC/include/mpich2

cd $NWCHEM_TOP/src
make clean
make nwchem_config
make FC=gfortran

Then download the source code for nwchem

wget http://www.nwchem-sw.org/images/Nwchem-6.0.tar.gz
tar -xvf Nwchem-6.0.tar.gz

To start building:
./buildconf.sh

Once it's built:
echo 'export PATH=$PATH:/home/myname/nwchem/nwchem-6.0/bin/LINUX64' >> ~/.bashrc
source ~/.bashrc

Prepare mpd
echo "MPD_SECRETWORD=jibberjabber" >> ~/.mpd.conf
chmod 600 ~/.mpd.conf
mpd --ncpus=3 &
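
You can check that the mpd ring is up with:

mpdtrace -l

which lists the host(s) and ports in the ring.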

Prepare for a test-run
touch nwchem.nw
Put the following in the nwchem.nw file:

start benzene 

geometry units angstroms
C  0.000  1.396  0.000
C  1.209  0.698  0.000
C  1.209 -0.698  0.000
C  0.000 -1.396  0.000
C -1.209 -0.698  0.000
C -1.209  0.698  0.000
H  0.000  2.479  0.000
H  2.147  1.240  0.000
H  2.147 -1.240  0.000
H  0.000 -2.479  0.000
H -2.147 -1.240  0.000
H -2.147  1.240  0.000
end
basis
 H library sto-3g
 C library sto-3g
end
dft
    xc b3lyp
end
task dft optimize

Launch the job:
mpdrun -n 2 nwchem nwchem.nw

And you should be ready to go


Edit: 12/02/2012 -- It looks like the version of nwchem currently in SID is built with mpi support: http://packages.debian.org/sid/nwchem . I haven't checked it out.