20 March 2012

114. Nwchem 6.0 with openmpi support on debian testing

I still haven't managed to compile a working version of Nwchem 6.1 on Debian 64 bit, regardless of whether I'm using mpich or openmpi. The number of posts relating to compiling nwchem is steadily growing, but I'd rather have posts which are almost, but not quite, identical if that makes it unambiguous for the average user how to build and use nwchem.

Anyway, since I'm using openmpi on my rocks cluster(s), I figure I might as well start using openmpi on debian too. In addition, the only way you can get nwchem 6.0 to work with mpich2 on debian seems to be by using the old v1.2 package, which causes problems of its own (see apt-pinning).

Note: See here for information about python support: http://verahill.blogspot.com.au/2012/04/adding-python-support-to-nwchem-under.html

Long story short -- nwchem with openmpi:
mkdir ~/tmp
sudo apt-get install openmpi-bin libopenmpi-dev
wget http://www.nwchem-sw.org/images/Nwchem-6.0.tar.gz
tar -xvf Nwchem-6.0.tar.gz
cd nwchem-6.0/

export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=/home/me/tmp/nwchem-6.0
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES=all
export USE_MPI=y
export USE_MPIF=y
export MPI_LOC=/usr/lib/openmpi/lib
export MPI_INCLUDE=/usr/lib/openmpi/include
export LIBRARY_PATH=$LIBRARY_PATH:/usr/lib/openmpi/lib
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
cd $NWCHEM_TOP/src
make clean
make nwchem_config
make FC=gfortran

This will take a good 20-30 minutes.


Your binary will be in nwchem-6.0/bin/LINUX64/

Finally, see whether openmpi is already in your LD_LIBRARY_PATH

echo $LD_LIBRARY_PATH
/lib/openmm:/usr/lib/nvidia-cuda-toolkit:/usr/lib/nvidia
If not, edit ~/.bashrc and add
export LD_LIBRARY_PATH=/usr/lib/openmpi/lib:$LD_LIBRARY_PATH
export PATH=$PATH:/home/me/tmp/nwchem-6.0/bin/LINUX64
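Once both lines are in place (source ~/.bashrc to reload), a quick smoke test of the parallel binary -- assuming you have a small input file, here called test.nw -- would be:

source ~/.bashrc
mpirun -n 2 nwchem test.nw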


18 March 2012

110. Compiling, installing Gnuplot 4.6 on Debian

A new version of gnuplot doesn't happen very often, and this one has an interesting new feature: support for curly braces in if/else blocks in scripts.
http://www.gnuplot.info/announce.4.6.0
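As a quick illustration of the new brace syntax (my own toy example, not from the announcement), driven from the shell via a heredoc:

gnuplot <<'EOF'
x = 2
if (x > 1) {
    print "x is greater than 1"
} else {
    print "x is not greater than 1"
}
EOF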

Building gnuplot 4.6 is similar to building 4.4.4 and is pretty straightforward:

sudo apt-get install libgd2-xpm-dev checkinstall

wget http://sourceforge.net/projects/gnuplot/files/latest/download?source=files
mv download\?source\=files gnuplot-4.6.tar.gz
tar -xvf gnuplot-4.6.tar.gz
cd gnuplot-4.6.0/

./configure --with-linux-vga
make
checkinstall --install=no
sudo rm /usr/local/share/info/dir -rf
sudo dpkg -i gnuplot_4.6.0-1_amd64.deb

You may get an error if trying to install on a system with a home-compiled version of octave (see below).

The problem with handling small numbers is not present in this version (http://verahill.blogspot.com.au/2012/02/debian-testing-wheezy-64-bug-in-debian.html).


Error:
Selecting previously unselected package gnuplot.
(Reading database ... 258722 files and directories currently installed.)
Unpacking gnuplot (from gnuplot_4.6.0-1_amd64.deb) ...
dpkg: error processing gnuplot_4.6.0-1_amd64.deb (--install):
 trying to overwrite '/usr/local/share/info/dir', which is also in package octave 3.6.1-1
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
Errors were encountered while processing:
 gnuplot_4.6.0-1_amd64.deb
Solution:

 sudo dpkg --force-overwrite -i gnuplot_4.6.0-1_amd64.deb

13 March 2012

105. Nwchem 6.1 with openmpi on ROCKS 5.4.3/CentOS 5.6


EDIT 18 May 2012: 
Compiling nwchem 6.1 with internal libs on debian: http://verahill.blogspot.com.au/2012/05/compiling-nwchem-61-with-internal-libs.html
Compiling nwchem 6.1 with openblas on debian: http://verahill.blogspot.com.au/2012/05/building-nwchem-61-on-debian.html


I can build and use nwchem on ROCKS 5.4.3 -- see instructions below.

EDIT: the gfortran version is GNU Fortran (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50)
On debian, which yields a segfaulting binary, the version is GNU Fortran (Debian 4.6.3-1) 4.6.3

I'm still having no luck building binaries that don't segfault on execution on debian, though. The openmpi versions are the same for both ROCKS and debian: 1.4.3.

--START HERE --

ROCKS 5.4.3/CentOS
The build is essentially the same as for nwchem-6.0 (http://verahill.blogspot.com.au/2012/03/building-nwchem-60-on-rocks-543centos.html) -- the single difference is that you need to define USE_MPIF4, or you get errors.

To build:

wget http://www.nwchem-sw.org/images/Nwchem-6.1-2012-Feb-10.tar.gz
tar -xvf Nwchem-6.1-2012-Feb-10.tar.gz
cd nwchem-6.1/
export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=/export/home/me/tmp/nwchem-6.1
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES=all
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/opt/openmpi
export MPI_INCLUDE=/opt/openmpi/include
export LIBRARY_PATH=$LIBRARY_PATH:/opt/openmpi/lib
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
cd $NWCHEM_TOP/src
make clean
make  nwchem_config
make  FC=gfortran

Building takes a little while.

Running:
Make the reference to your openmpi libs permanent, and make life easier, by putting the following in your ~/.bashrc or /etc/profile:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi/lib

export NWCHEM_EXECUTABLE=/export/home/me/tmp/nwchem-6.1/bin/LINUX64/nwchem
export NWCHEM_BASIS_LIBRARY=/export/home/me/tmp/nwchem-6.1/src/basis/libraries/
export PATH=$PATH:/export/home/me/tmp/nwchem-6.1/bin/LINUX64



To run on multiple procs do
mpirun -n 3 nwchem input.nw
where 3 is the number of cores
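If your cluster runs the SGE queue that ships with ROCKS, a minimal submit script might look like the following. This is a sketch only -- the parallel environment name (orte) and the core count are assumptions that depend on your setup:

#!/bin/bash
#$ -cwd
#$ -pe orte 8
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi/lib
mpirun -n $NSLOTS nwchem input.nw > output.out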

104. Building gromacs with fftw3 and openmpi on ROCKS 5.4.3/CentOS

This guide was heavily modified on 13/03/2012 to remove the need for sudo/root privileges.

Not all flavours of linux are equal. I've always been a Debian man, but have recently become a user of a ROCKS based HPC cluster on a different continent. To make sure that I don't screw things up I'm currently trying to work out how to reliably compile common computational packages under ROCKS 5.4.3, which is CentOS based.

If you installed the bio roll from the beginning you'll have openmpi in /opt/openmpi (rocks_openmpi.x86_64 package), and fftw in /opt/rocks/lib and /opt/rocks/include (fftw.x86_64 package).

If you only installed the basic rolls, you won't have either. Now, you can either download the bio roll and install from there, or you can install the regular openmpi package and compile fftw yourself. In fact, you'll need to do the latter if you want double-precision gromacs anyway.

My goal is to avoid having to use sudo or root at all. I've rewritten this guide a couple of times, so there may be weird annoying errors that I've missed.

Installing openmpi:
If you don't have openmpi in /opt, then you can install it from the base roll
sudo yum install openmpi


fftw3:
You can skip this step IF
1. you have fftw files in /opt/rocks/lib and /opt/rocks/include
AND
2. you only want single precision

Otherwise:

wget http://www.fftw.org/fftw-3.3.1.tar.gz
tar -xvf fftw-3.3.1.tar.gz
cd fftw-3.3.1


Then use --prefix to tell make where to install the files:

Single precision fftw3 libraries:
make distclean
./configure --enable-float --enable-mpi --enable-threads --with-pic --prefix=/export/home/me/.fftwsingle
make
make install

Double-precision fftw3 libraries:
make distclean
./configure --disable-float --enable-mpi --enable-threads --with-pic --prefix=/export/home/me/.fftwdouble
make 
make install

gromacs:

First download and extract:

cd ~/tmp
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.5.5.tar.gz

tar -xvf gromacs-4.5.5.tar.gz
cd gromacs-4.5.5/

Before building you need to define where the openmpi libs are i.e.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi/lib
OR
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/1.4-gcc/lib

We now have three permutations of possible builds:
1. We use the single precision fftw libs in /opt/rocks/lib and /opt/rocks/include
export LDFLAGS=-L/opt/rocks/lib
export CPPFLAGS=-I/opt/rocks/include
./configure --enable-mpi --enable-float --with-fft=fftw3 --program-suffix=_spmpi --prefix=/export/home/me/gromacs
make
make install

2. We use the single precision fftw libs in /export/home/me/.fftwsingle

export LDFLAGS=-L/export/home/me/.fftwsingle/lib
export CPPFLAGS=-I/export/home/me/.fftwsingle/include
./configure --enable-mpi --enable-float --with-fft=fftw3 --program-suffix=_spmpi --prefix=/export/home/me/gromacs
make
make install

3. We use the double precision fftw libs in /export/home/me/.fftwdouble

export LDFLAGS=-L/export/home/me/.fftwdouble/lib
export CPPFLAGS=-I/export/home/me/.fftwdouble/include
./configure --enable-mpi --disable-float --with-fft=fftw3 --program-suffix=_ddmpi --prefix=/export/home/me/gromacs
make
make install


Running

You will now have single and double-precision binaries, e.g.
grompp_spmpi and grompp_ddmpi

Make sure that you define/have defined LD_LIBRARY_PATH in /etc/profile or ~/.bashrc and included the paths to your mpi libs and your fftw libs, e.g.:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi/lib:/export/home/me/.fftwsingle:/export/home/me/.fftwdouble

Actually, it doesn't seem necessary to include the fftw path.

You may also want to include your gromacs bins in your path:
export PATH=$PATH:/export/home/me/gromacs/bin

Dynamic load-balancing seems to be disabled by default, so to use multiple cores run using e.g.
mpirun -n 4 mdrun_spmpi -s inp.tpr -o out.trr etc.

DONE


Troubleshooting

Error:
checking size of off_t... configure: error: in `/export/home/me/tmp/gromacs-4.5.5':
configure: error: cannot compute sizeof (off_t)
See `config.log' for more details
config.log:
./conftest: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory
Solution:
Set LD_LIBRARY_PATH to your openmpi libs e.g.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi/lib

Error:
/usr/local/lib/libfftw3f.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
make[3]: *** [libmd.la] Error 1
Solution:
Compile fftw3 using the --with-pic switch:
./configure --enable-float --enable-mpi --enable-threads --with-pic 




103. Building nwchem 6.0 on Rocks 5.4.3/CentOS

I've always been a Debian man, but for various reasons I need to be able to compile various scientific packages on a HPC running ROCKS. ROCKS 5.4.3 is based on CentOS 5.6, and it turns out that debian is wonderfully easy, accommodating and robust in comparison. Well, since it's not my HPC, CentOS is what I'm stuck with.

Here's how to build nwchem on a rocks 5.4.3 (viper) cluster based on CentOS 5.6 and its ancient kernel.
(Linux  2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux )

There are three different approaches:




CASE 1.
 Using LD_LIBRARY_PATH
This method requires no root access.
Check to see whether you've installed the rocks-openmpi package from the bio roll - it should be in /opt/openmpi. Otherwise use yum to install the base-roll openmpi package, which will end up in /usr/lib64/openmpi/1.4-gcc/lib -- you'll need root or sudo to do anything with yum.

For compilation, do
export LIBRARY_PATH=$LIBRARY_PATH:/opt/openmpi/lib
or
export LIBRARY_PATH=$LIBRARY_PATH:/usr/lib64/openmpi/1.4-gcc/lib/
depending on whether there is an openmpi directory in /opt or not.

You can also put the export line in your buildconf.sh below
For execution:
in either your ~/.bashrc (per user) or /etc/profile (global) put
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi/lib






CASE 2. /opt/openmpi is present; using symlinked libs

mpicc and mpif77 are probably already symlinked, but if not:

sudo ln -s /opt/openmpi/bin/mpicc /usr/bin/mpicc
sudo ln -s /opt/openmpi/bin/mpif77 /usr/bin/mpif77


The following allows for building and running:
sudo ln -s /opt/openmpi/lib/libmpi.so /usr/lib/libmpi.so
sudo ln -s /opt/openmpi/lib/libopen-rte.so /usr/lib/libopen-rte.so
sudo ln -s /opt/openmpi/lib/libopen-pal.so /usr/lib/libopen-pal.so
sudo ln -s /opt/openmpi/lib/libmpi_f77.so /usr/lib/libmpi_f77.so
sudo ln -s /opt/openmpi/lib/libmpi.so /usr/lib64/libmpi.so.0
sudo ln -s /opt/openmpi/lib/libopen-rte.so /usr/lib64/libopen-rte.so.0
sudo ln -s /opt/openmpi/lib/libopen-pal.so /usr/lib64/libopen-pal.so.0
sudo ln -s /opt/openmpi/lib/libmpi_f77.so /usr/lib64/libmpi_f77.so.0
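Equivalently, the same symlinks can be created in one go with a small loop (identical to the eight commands above, just less typing):

for lib in libmpi libopen-rte libopen-pal libmpi_f77
do
  sudo ln -s /opt/openmpi/lib/${lib}.so /usr/lib/${lib}.so
  sudo ln -s /opt/openmpi/lib/${lib}.so /usr/lib64/${lib}.so.0
done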


the /usr/lib64 symlinks are necessary for execution, or you'll get
./nwchem: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory



CASE 3. /opt/openmpi is NOT present; using symlinked libs

sudo yum install openmpi openmpi-devel
And then put in all the symlinks... dunno why this isn't done on install, but there you go.

sudo ln -s /usr/lib64/openmpi/1.4-gcc/bin/mpicc  /usr/bin/mpicc
sudo ln -s /usr/lib64/openmpi/1.4-gcc/bin/mpif77 /usr/bin/mpif77
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libmpi.so /usr/lib/libmpi.so
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libopen-rte.so /usr/lib/libopen-rte.so
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libopen-pal.so /usr/lib/libopen-pal.so
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libmpi_f77.so /usr/lib/libmpi_f77.so

Using the above symlinks compilation will work just fine.
However, in order to actually run nwchem you need
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libmpi.so /usr/lib64/libmpi.so.0
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libopen-rte.so /usr/lib64/libopen-rte.so.0
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libopen-pal.so /usr/lib64/libopen-pal.so.0
sudo ln -s /usr/lib64/openmpi/1.4-gcc/lib/libmpi_f77.so /usr/lib64/libmpi_f77.so.0

or you'll get
./nwchem: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory
Finally, make sure we can find our mpirun:
sudo ln -s /usr/lib64/openmpi/1.4-gcc/bin/mpirun /usr/bin/mpirun


ALL CASES
Continue here:
We'll be working in /export/home/me/tmp
wget http://www.nwchem-sw.org/images/Nwchem-6.0.tar.gz
tar -xvf Nwchem-6.0.tar.gz
cd nwchem-6.0

create a file called buildconf.sh and stuff it with the following:
export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=/export/home/me/tmp/nwchem-6.0
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES=all
export USE_MPI=y
export USE_MPIF=y
export MPI_LOC=/usr/lib64/openmpi/1.4-gcc/lib
export MPI_INCLUDE=/usr/lib64/openmpi/1.4-gcc/include
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
cd $NWCHEM_TOP/src
make clean
make nwchem_config
make FC=gfortran
NOTE: the above buildconf.sh works for the case where you installed openmpi yourself with yum (CASE 3). If it got installed with ROCKS on setup and is present in /opt/openmpi (CASE 1 or 2), change the following:

export MPI_LOC=/opt/openmpi/lib
export MPI_INCLUDE=/opt/openmpi/include
Launch the build

sh buildconf.sh

You'll end up with a binary called nwchem in nwchem-6.0/bin/LINUX64 -- you can put a PATH to it in your ~/.bashrc.
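To confirm that the binary actually linked against openmpi, ldd comes in handy:

ldd $NWCHEM_TOP/bin/LINUX64/nwchem | grep mpi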


CASE 3
For execution you will need to make sure nwchem can find the openmpi libs --
echo $LD_LIBRARY_PATH
will tell you whether the path is included by default.
Otherwise, in either your ~/.bashrc (per user) or /etc/profile (global) put
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/1.4-gcc/lib


Running
If you move nwchem out of the compilation directory (to say /usr/local/nwchem) you may also want to define e.g.

export NWCHEM_TOP=/usr/local/nwchem-6.0
export NWCHEM_TARGET=LINUX64
export NWCHEM_BASIS_LIBRARY=${NWCHEM_TOP}/libraries/

Again, this goes into your .bashrc or /etc/profile, depending on scope.

To use multiple cores, do
mpirun -n 4 nwchem jobname.nw
where the number of cores is 4.


Errors and troubleshooting:
If you get errors about missing libraries or mpicc-related errors, make sure that you've symlinked everything you need into the /usr/lib folder or set LIBRARY_PATH (see above). You could probably edit /etc/ld.so.conf too, but it will get messy with time.
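For the record, the ld.so.conf route would look something like this (system-wide, hence the potential for mess):

echo '/opt/openmpi/lib' | sudo tee /etc/ld.so.conf.d/openmpi.conf
sudo ldconfig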

I also tried building using mpich2-1.2 as well as 1.4, but got error messages about undefined references left and right.

24 February 2012

74. Building nwchem 6.1 on debian testing 32 bit only


EDIT 18 May 2012: 
It's now been solved on 64 bit as well
Compiling nwchem 6.1 with internal libs on debian: http://verahill.blogspot.com.au/2012/05/compiling-nwchem-61-with-internal-libs.html
Compiling nwchem 6.1 with openblas on debian: http://verahill.blogspot.com.au/2012/05/building-nwchem-61-on-debian.html


This doesn't work with the 64 bit version of nwchem 6.1. There's a separate post on that. Nwchem 6.1 64 bit will build just fine, but will crash when run. Again, see the other post.

Building on 32 bit debian testing:



Put a hold on your mpich2 and libmpich2-dev packages (see e.g. here for more details):
1. edit your /etc/apt/sources.list to allow packages from stable e.g.

deb ftp://ftp.au.debian.org/debian/ testing main contrib non-free
deb ftp://ftp.au.debian.org/debian/ stable main contrib non-free

2. create an /etc/apt/preferences file e.g.

Package: *
Pin: release a=testing
Pin-Priority: 990

Package: *
Pin: release a=stable
Pin-Priority: -10
3. install v 1.2 explicitly
sudo apt-get update && sudo apt-get install mpich2=1.2.1.1-5 libmpich2-dev=1.2.1.1-5

4. put a hold on the packages

sudo su
echo "mpich2 hold"|dpkg --set-selections
echo "libmpich2-dev hold"|dpkg --set-selections

exit
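You can verify that the holds took with:

dpkg --get-selections | grep mpich2

which should list both packages as 'hold'.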

Make sure you have the necessary packages:
sudo apt-get install build-essential gfortran fort77

I had some error messages before installing fort77; I'm not sure whether they were related.

Download the nwchem source
cd ~
wget http://www.nwchem-sw.org/images/Nwchem-6.1-2012-Feb-10.tar.gz
tar -xvf Nwchem-6.1-2012-Feb-10.tar.gz
cd nwchem-6.1

create buildconf.sh in ~/nwchem-6.1


export LARGE_FILES=TRUE
export TCGRSH=/usr/local/bin/ssh
export NWCHEM_TOP=/home/me/nwchem-6.1
export NWCHEM_TARGET=LINUX
export NWCHEM_MODULES=all
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/usr
export MPI_LIB=$MPI_LOC/lib
export MPI_INCLUDE=$MPI_LOC/include/mpich2
export LIBMPI="-lmpich -lfmpich -lpthread"
export NWCHEM_MODULES="all"
cd $NWCHEM_TOP/src
make clean
make nwchem_config
make FC=gfortran

run
sh buildconf.sh


Building takes ages. But it works. Why it works for 32 bit and not 64 bit has me a bit confused, but it's probably a good hint to the solution.

24 January 2012

56. Gromacs -- setting up and running energy minimisation of methanol in water. Using itp files

* First browse through this tutorial.
* I'm not an expert at GROMACS -- I'm writing this up as I am learning myself. Use what I show here as a starting point for your own explorations. Don't treat it like a scientifically complete piece of work.

1. Building the .gro
Draw a methanol molecule in e.g. Avogadro (available in the debian repos) and save as .xyz.
6

C         -1.60284       -0.81294        0.00066
O         -0.18719       -0.81849       -0.00206
H         -1.97028       -1.66300        0.58111
H         -1.97029       -0.87387       -1.02686
H         -1.95386        0.11552        0.45631
H          0.09473       -1.65393       -0.41207
You should edit the file to give the atoms unique names -- keep them short though, as each string field is limited to 5 chars in the .gro file. You will refer to these names later in your methanol.itp file.

Here's my new methanol.xyz:
6
meth
COH         -1.60284       -0.81294        0.00066
OHC         -0.18719       -0.81849       -0.00206
HC1         -1.97028       -1.66300        0.58111
HC2         -1.97029       -0.87387       -1.02686
HC3         -1.95386        0.11552        0.45631
HO          0.09473       -1.65393       -0.41207
Next we convert it to a .gro file and create a 2.5 nm x 2.5 nm x 2.5 nm box around it. 

editconf -f methanol.xyz -o methanol.gro -box 2.5 2.5 2.5 

methanol.gro:

meth
    6
    1 ???   COH    1   1.216   1.264   1.257
    1 ???   OHC    2   1.358   1.263   1.257
    1 ???   HC1    3   1.179   1.179   1.315
    1 ???   HC2    4   1.179   1.258   1.154
    1 ???   HC3    5   1.181   1.357   1.302
    1 ???    HO    6   1.386   1.180   1.216
   2.50000   2.50000   2.50000

2. Building the .top
We're going to make it a bit easier and more modular for ourselves by using the #include directive and by (re)using .itp files -- this way we can pull instructions and data from stock files.

step1.top:
[defaults]
1   1   yes   1.0   1.0

#include "atomtypes.itp"
#include "methanol.itp"
#include "water.itp"

[system]
wood liquor

[molecules]
meth   1
SOL   1
It now remains for us to define the molecule meth in methanol.itp, SOL in water.itp and define the proper atomtypes in atomtypes.itp.

3. Building the methanol.itp

First create a .gzmat file from the .xyz using openbabel


babel methanol.xyz methanol.gzmat

methanol.gzmat:


#Put Keywords Here, check Charge and Multiplicity.
 meth
0  1
Xx
Xx  1  r2
Xx  1  r3  2  a3
Xx  1  r4  2  a4  3  d4
Xx  1  r5  2  a5  3  d5
Ho  2  r6  1  a6  3  d6
Variables:
r2= 1.4157
r3= 1.0929
a3= 109.52
r4= 1.0929
a4= 109.52
d4= 239.22
r5= 1.0922
a5= 109.00
d5= 119.61
r6= 0.9724
a6= 107.10
d6=  60.39


The zmat format gives an easy overview over bond-lengths and angles, which is what I use it for. For something as small as methanol you don't really need a zmat file, but it can come in handy for large molecules. We'll also include #ifndef, #else and #endif -- this way we have an easy way of switching between using constraints and not doing so. ifndef checks to see whether FLEXIBLE is defined or not (ifndef=If Not Defined).

Remember that the atom order in methanol.itp should be the same as in the .gro file.

The partial charges are addressed under atomtypes. Be aware that while gzmat outputs bond lengths in Ångström, you should use nanometres in your .itp file or things will get weird.
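Since the conversion is just a factor of ten, a one-liner helps avoid transcription mistakes, e.g. for r2:

echo 1.4157 | awk '{printf "%.5f nm\n", $1/10}'
0.14157 nm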

methanol.itp:


;;;;;;;;;;;;;;;;;;
; Methanol
;;;;;;;;;;;;;;;;;;
[ moleculetype ]
meth    3
[atoms]
;nr type    resnr   res atom    cgnr    chrg    mass
1   COH 1   meth    COH   1   0.1450   12.011
2   OHC 1   meth    OHC   1   -0.683   15.9994
3   HC  1   meth    HC1   1   0.0400   1.00079
4   HC  1   meth    HC2   1   0.0400   1.00079
5   HC  1   meth    HC3   1   0.0400   1.00079
6   HO  1   meth    HO   1   0.4180   1.00079
#ifndef FLEXIBLE
[constraints]
;ai aj  f   nm
3   1   1   0.10929  ; H-C
4   1   1   0.10929  ; H-C
5   1   1   0.10929  ; H-C
1   2   1   0.14157  ; C-O
6   2   1   0.09724  ; H-O
[exclusions]
1   2   3   4   5   6
2   1   3   4   5   6
3   1   2   4   5   6
4   1   2   3   5   6
5   1   2   3   4   6
6   1   2   3   4   5
#else
[bonds]
;ai aj  f   nm  kb
3   1   1   0.10929  10  ; H-C, r3
4   1   1   0.10929  10  ; H-C, r4
5   1   1   0.10929  10  ; H-C, r5
2   1   1   0.14157  100  ; C-O, r2
6   2   1   0.09724  1  ; H-O, r6
;[ pairs ]
; i j   funct
;2   6
;3   6
;4   6
[angles]
;ai aj ak   f   angle   k_a
3   1   2   1   109.52  500   ; H-C-O a3
4   1   2   1   109.52  500   ; H-C-O a4
5   1   2   1   109.52  500   ; H-C-O a5
6   2   1   1   107.10  500   ; H-O-C a6
3   1   4   1   120.00  500
4   1   5   1   120.00  500
3   1   5   1   120.00  500

[dihedrals];proper - f=1
6   2   1   3   1   60   300    2    ;H(O)-O-C-H(C) d6
;6   2   1   4   1      
;6   2   1   5   1
[dihedrals];improper - f=2
4   1   2   3   2   239.22  300    ;H-C-O-H(C) d4
5   1   2   3   2   119.61  300    ;H-C-O-H(C) d5
#endif



3. Building the water.itp
The partial charges are addressed under atomtypes, but this is essentially a copy of the (/usr/local/gromacs/share/gromacs/)top/oplsaa.ff/spce.itp file.

water.itp:
[moleculetype]
sol 2
[atoms]
;nr type resnr res atom cgnr charge mass
1   OW  1   SOL OW  1   -0.8476 15.9994
2   HW  1   SOL HW1 1   0.4238  1.00079
3   HW  1   SOL HW2 1   0.4238  1.00079


#ifndef FLEXIBLE
[settles]
;OW fun doh dhh
1   1   0.1 0.16330
[exclusions]
1   2   3
2   1   3
3   1   2
#else
[bonds]
;i  j   f   l(nm)  kb
1   2   1   0.1 34.5e3
1   3   1   0.1 34.5e3
[angles]
;ai aj ak f angle k_a
2   1   3   1   109.47  383
#endif

4. Building the atomtypes.itp
This is the tricky bit -- you need to pick the correct parameters for your atoms, and as they are empirical it is difficult to know what is 'right'.

A quick way of getting decent starting values is to
cat /usr/local/gromacs/share/gromacs/top/oplsaa.ff/ffnonbonded.itp

and look through the OPLS-AA for different atoms.

e.g.

cat /usr/local/gromacs/share/gromacs/top/oplsaa.ff/ffnonbonded.itp | grep OH

gives 
; name  bond_type    mass    charge   ptype          sigma      epsilon
 opls_023   OH 8      15.99940    -0.700       A    3.07000e-01  7.11280e-01 ; SIG

 opls_078   OH 8     15.99940    -0.700       A    3.07000e-01  7.11280e-01 ; SIG
 opls_154   OH  8     15.99940    -0.683       A    3.12000e-01  7.11280e-01
 opls_162   OH  8     15.99940    -0.635       A    3.07000e-01  7.11280e-01
 opls_167   OH   8  15.99940    -0.585       A    3.07000e-01  7.11280e-01
 opls_169   OH  8   15.99940    -0.700       A    3.07000e-01  7.11280e-01
 opls_171   OH  8   15.99940    -0.730       A    3.07000e-01  7.11280e-01
 opls_187   OH   8  15.99940    -0.700       A    3.07000e-01  7.11280e-01
 opls_268   OH   8  15.99940    -0.530       A    3.00000e-01  7.11280e-01
 opls_420   OH   8  15.99940    -0.980       A    3.15000e-01  1.04600e+00
 opls_434   OH   8  15.99940    -1.300       A    3.20000e-01  1.04600e+00

For most definitions sigma=3.07000e-01 and epsilon=7.11280e-01, so we'll use that. However, we want to use c6 and c12 notation, where c6=4*epsilon*sigma^6 and c12=4*epsilon*sigma^12.
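The conversion is easy to script; with the opls_154 values above (sigma=3.12000e-01, epsilon=7.11280e-01) this reproduces the c6/c12 numbers used for OHC in the atomtypes.itp below:

awk -v s=3.12000e-01 -v e=7.11280e-01 'BEGIN {printf "c6 = %.5e\nc12 = %.5e\n", 4*e*s^6, 4*e*s^12}'
c6 = 2.62440e-03
c12 = 2.42079e-06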

An alternative is to look through the literature for suitable values.

An advantage with making an atomtypes.itp file yourself is that you can slowly expand it between experiments.

Here's my atomtypes.itp:
[atomtypes]
;atom no mass charge ptype c6 c12
;water
OW  8   15.9994 -0.8476 A   0.261710e-02    0.26331e-5
HW  1   1.00079 0.42380 A   0.000000e-02    0.00000e-5
;methanol
; Methanol, Jorgensen et al. JACS 118 pp. 11225 (1996)
;opls-aa/ffnonbonded.itp
COH 6     12.01100     0.145       A    0.20305e-2  0.37326e-5      ;s/e 3.50000e-01  2.76144e-01
OHC 8     15.99940    -0.683       A    0.26244e-2  0.24208e-5      ;s/e 3.12000e-01  7.11280e-01
HC  1      1.00800     0.040       A    0.012258e-2 0.0029926e-5    ;s/e 2.50000e-01  1.25520e-01
HO  1      1.00800     0.418       A    0.00000000  0.000000000     ;s/e 0.00000e+00  0.00000e+00
;CO2
OC  8   15.9994 -0.35   A   2.5438e-1   2.0478e-4
CO  6   12.0107 0.700   A   5.2044e-2   2.5080e-5
;CO3
OCO2    8   15.9994 -1.0450 A   0.261710e-2 0.26331e-05
CO3     6   12.0107 1.13500  A   0.234020e-2 0.33740e-05



5. Our em.mdp
This is from one of Justin Lemkul's tutorials


; Parameters describing what to do, when to stop and what to save
define = -DFLEXIBLE ; relaxes constraints in the .itp files
integrator = steep ; Algorithm (steep = steepest descent minimization)
emtol = 1000.0   ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep          = 0.01          ; Energy step size
nsteps = 50000   ; Maximum number of (minimization) steps to perform
; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist = 1 ; Frequency to update the neighbor list and long range forces
ns_type = grid ; Method to determine neighbor list (simple, grid)
rlist = 1.2 ; Cut-off for making neighbor list (short range forces)
coulombtype = PME ; Treatment of long range electrostatic interactions
rcoulomb = 1.2 ; Short-range electrostatic cut-off
rvdw = 1.2 ; Short-range Van der Waals cut-off
pbc = xyz ; Periodic Boundary Conditions (yes/no)

6. Energy minimisation
I'll keep it brief:

Add water
genbox_dd  -cp methanol.gro -cs /usr/local/gromacs/share/gromacs/top/spc216.gro -o step1.gro -p step1.top 

Prepare mdrun input
grompp_dd -f em.mdp -po step2.mdp -c step1.gro -p step1.top -pp step2.top -o step2.tpr 

Do energy minimisation
mdrun_dd -v -s step2.tpr -o step3.trr -x step3.xtc -c step3.gro -e step3.edr -g step3.log


Using a run.sh

I get tired of typing the same thing over and over -- especially when troubleshooting.

Create a run.sh file and call it with sh run.sh every time
editconf_dd -f methanol.xyz -o methanol.gro -box 2.5 2.5 2.5
genbox_dd  -cp methanol.gro -cs /usr/local/gromacs/share/gromacs/top/spc216.gro -o step1.gro -p step1.top
grompp_dd -f em.mdp -po step2.mdp -c step1.gro -p step1.top -pp step2.top -o step2.tpr
mdrun_dd -v -s step2.tpr -o step3.trr -x step3.xtc -c step3.gro -e step3.edr -g step3.log

7. Equilibration and production runs
See previous tutorial for mdp files.

I know it's a bit of a cop-out, but no time to make a more detailed tutorial right now.

20 January 2012

54. Compiling GROMACS with GPU support using OpenMM (OpenMM 4.0 source or 3.1.1 Linux 64 binaries) on debian testing

To compile gromacs without GPU support (and that's probably what you should do) look here:
http://verahill.blogspot.com.au/2012/05/gromacs-with-external-fftw3-and-blas-on.html
GPU support is most likely NOT going to be useful for you.

For a better way (out-of-tree) of building a newer openMM, see:
http://verahill.blogspot.com.au/2012/06/compiling-openmm-41-on-debian-testing.html

The method below describes an in-tree build of OpenMM. Do an out of tree build (e.g. see link above) to avoid headaches.

OK, enough with the 'bold'. Basically, I wrote the post below at a time when I was at a very early stage of learning how to compile my own programs. I'm obviously still learning, but I have published better methods now -- see the links above for those.
Be aware that in-tree (when you build your package in the root of the source tree) building is not supported for OpenMM and you will probably be told so if posting on forums asking for help. Out-of-tree (when you build in a directory outside the source tree structure) is supported and is described above. So, while I'm leaving this post here for posterity, use it as inspiration, but don't follow it blindly.

/25th of September 2012.


Original Post:
First read this: 1. Use EITHER the OpenMM3.1.1-Linux64 binaries
 2. OR, which is better, compile OpenMM4.0 from source -- see here http://verahill.blogspot.com/2012/01/debian-testing-64-wheezy_20.html -- if you do this you can skip step 3.

Do NOT use: the OpenMM4.0-Linux64 binaries or the OpenMM3.1.1-Source source -- at least I have had issues with those versions.



The following guide was last tested: 20/01/2012
It has not been checked for typos -- whenever you use a command with 'sudo' in it, try to understand what it does first.

Finally: if it doesn't work the first time, try a few more times. I've never managed to build on the first try, but restarting with make clean, cmake CMakeLists.txt, make works in the end every time. Sigh...

You may also want to remove CMakeCache.txt in the source root.

1. Things to install first

Install cmake and autoconf if you haven't already

sudo apt-get install cmake autoconf

2. edit ~/.bashrc or /etc/profile
Put this at the end of your ~/.bashrc (or /etc/profile for multi-user systems)

export LD_LIBRARY_PATH=/lib/openmm:/usr/lib/nvidia-cuda-toolkit:/usr/lib/nvidia:$LD_LIBRARY_PATH
export OPENMM_PLUGIN_DIR=/usr/local/openmm/lib/plugins
export OPENMM_ROOT_DIR=/usr/local/openmm 

load its content by
source ~/.bashrc

3. OpenMM
To download OpenMM you need to register with simtk.org. Registering is, all things considered, fairly easy and quick.

Either follow this guide for compiling OpenMM4.0 from source (preferred) or follow the steps below to use the 3.1.1 binaries:

Download the OpenMM3.1.1-Linux64.zip file (if applicable).

Put the .zip file in ~/tmp

unzip OpenMM3.1.1-Linux64.zip
cd ~/tmp/OpenMM3.1.1-Linux64

sudo mkdir -p /lib/openmm/plugins
sudo mkdir -p /lib/openmm/include

sudo cp lib/*.so /lib/openmm
sudo cp include/* -R /lib/openmm/include
sudo cp plugins/* -R /lib/openmm/plugins

This is the structure you're aiming for:

/lib/openmm
|-- include
|   |-- internal
|   |-- openmm
|   `-- serialization
`-- plugins


Next:

sudo mkdir /usr/local/openmm
sudo cp ~/tmp/OpenMM3.1.1/* -R /usr/local/openmm


So that you end up with

/usr/local/openmm
|-- bin
|-- docs
|   `-- api
|       `-- search
|-- include
|   `-- openmm
|       |-- internal
|       `-- serialization
|-- lib
|   `-- plugins
|-- licenses
`-- python
    |-- simtk
    |   |-- openmm
    |   `-- unit
    `-- src
        `-- swig_doxygen
            |-- doxygen
            `-- swig_lib
                `-- python
4. Gromacs-4.5.5
Get and untar gromacs:
cd ~/tmp
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.5.5.tar.gz
tar -xvf gromacs-4.5.5.tar.gz
mv gromacs-4.5.5/ gromacs_gpu/

Let's start preparing our build:
cd gromacs_gpu/
make clean
autoconf 
export OPENMM_ROOT_DIR=/usr/local/openmm
cmake CMakeLists.txt -DGMX_OPENMM=ON -DGMX_THREADS=OFF

EDIT/COMMENT: I think autoconf/automake and cmake are mutually exclusive. I only let autoconf stay here since I did invoke it when I did my build.

The output of a successful run of cmake follows below -- note the binary suffix, as well as the Found CUDA and Found OpenMM.

-- Using default binary suffix: "-gpu"
-- Using default library suffix: "_gpu"
-- No external FFT libraries needed for the OpenMM build, switching to fftpack!
-- Switching off CPU-based acceleration, the OpenMM build does not support/need any!
-- Found CUDA: /usr (Required is at least version "3.1")
-- Found OpenMM: /usr/local/openmm
-- Looking for fseeko
-- Looking for fseeko - found
-- Using internal FFT library - fftpack
-- Configuring done
-- Generating done
-- Build files have been written to: /home/me/tmp/gromacs_mopac



Next, some editing -- first we copy two of the files that the -DGMX_OPENMM switch on the cmake command above created, then we remove two offending lines.

This command should be on a single line:
cp gromacs_gpu/src/kernel/gmx_gpu_utils/CMakeFiles/gmx_gpu_utils_generated_gmx_gpu_utils.cu.o.cmake ~/tmp

This command is also a single line:
cp gromacs_gpu/src/kernel/gmx_gpu_utils/CMakeFiles/gmx_gpu_utils_generated_memtestG80_core.cu.o.cmake ~/tmp


We now have two *.cu.o.cmake files in our ~/tmp
Open each one and remove all instances of -fexcess-precision=fast in them. Then copy the files back to their original location:

(This command should be on a single line:)
cp gmx_gpu_utils_generated_gmx_gpu_utils.cu.o.cmake gromacs_gpu/src/kernel/gmx_gpu_utils/CMakeFiles/

(This command is also a single line:)
cp gmx_gpu_utils_generated_memtestG80_core.cu.o.cmake gromacs_gpu/src/kernel/gmx_gpu_utils/CMakeFiles/

We copy and edit because these files get regenerated every time you run the cmake -DGMX_OPENMM=ON command.
----------------------------------------------------------------------------------------------------------
Quick aside: some clues to -fexcess-precision:
* Here: "On x86 targets, code containing floating-point calculations may run significantly slower when compiled with GCC 4.5 in strict C99 conformance mode than they did with earlier GCC versions. This is due to stricter standard conformance of the compiler and can be avoided by using the option -fexcess-precision=fast."
* Here: "-fexcess-precision=fast # disables the GCC 4.5 strict floating point C99 standards conformance for improved floating point performance"
* I'm running gcc (Debian) 4.6.2-12 and g++ 4.6.2-4. I've tried with both -fexcess-precision=fast and -fexcess-precision=standard. Neither works.
* Not all excess-precision options work with all languages.
* Using g++ 4.5 someone got "cc1plus: sorry, unimplemented: -fexcess-precision=standard for C++" and came to the conclusion that "Bah. Still, at least the -ffloat-store workaround still helps, for this case at any rate. Also, if you get GCC to use SSE instructions, there's no issue with excess precision".
I think the conclusion is that removing -fexcess-precision=fast MAY lead to a speed penalty (up to 2x) but will not change the results of the sim/calc.
----------------------------------------------------------------------------------------------------------
OK, time to build!
cd gromacs_gpu/
make 

and then install:
sudo make install

If all went well you'll now have mdrun-gpu in your /usr/local/gromacs/bin folder together with 93 other -gpu bin files!


Files for benchmarking can be downloaded from here: http://www.gromacs.org/@api/deki/files/128/=gromacs-gpubench-dhfr.tar.gz

If you get:

Program mdrun-gpu, VERSION 4.5.5
Source code file: /home/me/tmp/gromacs_gpu/src/kernel/openmm_wrapper.cpp, line: 1324
Fatal error:
The selected GPU (#0, GeForce GT 430) is not supported by Gromacs! Most probably you have a low-end GPU which would not perform well, or new hardware that has not been tested with the current release. If you still want to try using the device, use the force-device=yes option.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

when trying to benchmark, use

mdrun-gpu -device "OpenMM:deviceid=0, force-device=yes" -v

to force execution.

Make sure to monitor your GPU temperature -- it will easily reach 80 degrees for a fan-less video card.

Early 'results':
Test                   GPU: GT 430   GPU: GT 520   CPU (6-core AMD Phenom II, 2.8 GHz)
dhfr-impl-1nm.bench    31.8          19.5          27.2 ns/day
dhfr-impl-2nm.bench    31.9          19.9           5.8 ns/day
dhfr-solv-PME.bench     2.7           1.7           8.6 ns/day


Two significant problems are present:
1. The card heats up quickly to 80 degrees Celsius. It may thus be throttled.
2. The video card is in use by GNOME at the same time. Will rerun later without X.

The difference in speed for the 2nm test is significant.



5. Optional
If you haven't already, add this to ~/.bashrc (or /etc/profile)
PATH=$PATH:/usr/local/gromacs/bin

then run
source ~/.bashrc
to load the new settings


Addendum: Errors and solutions
Do a fresh cycle of make clean, cmake, make and make install after each fix. Also, remove CMakeCache.txt.
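In other words, after each fix:

cd ~/tmp/gromacs_gpu
rm -f CMakeCache.txt
make clean
cmake CMakeLists.txt -DGMX_OPENMM=ON -DGMX_THREADS=OFF
make
sudo make install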

1. Missing libxml2-dev

Symptom:
[ 89%] Building C object src/mdlib/CMakeFiles/md.dir/gmx_qhop_xml.c.o
/home/me/tmp/gromacs_gpu/src/mdlib/gmx_qhop_xml.c:48:27: fatal error: libxml/parser.h: No such file or directory
compilation terminated.
make[3]: *** [src/mdlib/CMakeFiles/md.dir/gmx_qhop_xml.c.o] Error 1
make[2]: *** [src/mdlib/CMakeFiles/md.dir/all] Error 2
make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
make: *** [mdrun] Error 2

Solution:
sudo apt-get install libxml2-dev

2. Missing OpenMM.h
Symptom:
Linking C shared library libgmxpreprocess_gpu.so
[ 98%] Built target gmxpreprocess
Scanning dependencies of target openmm_api_wrapper
[ 98%] Building CXX object src/kernel/CMakeFiles/openmm_api_wrapper.dir/openmm_wrapper.cpp.o
/home/me/tmp/gromacs_gpu/src/kernel/openmm_wrapper.cpp:59:20: fatal error: OpenMM.h: No such file or directory
compilation terminated.
make[3]: *** [src/kernel/CMakeFiles/openmm_api_wrapper.dir/openmm_wrapper.cpp.o] Error 1
make[2]: *** [src/kernel/CMakeFiles/openmm_api_wrapper.dir/all] Error 2
make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
make: *** [mdrun] Error 2

Solution:
sudo mkdir -p /usr/local/openmm/include
sudo cp ~/tmp/OpenMM3.1.1-Source/openmmapi/include/* -R /usr/local/openmm/include/

3. Missing Kernel.h
Symptom:
Linking CXX static library libgmx_gpu_utils.a
[  0%] Built target gmx_gpu_utils
[  0%] Building CXX object src/kernel/CMakeFiles/openmm_api_wrapper.dir/openmm_wrapper.cpp.o
In file included from /usr/local/openmm/include/OpenMM.h:36:0,
                 from /home/me/tmp/gromacs_gpu/src/kernel/openmm_wrapper.cpp:59:
/usr/local/openmm/include/openmm/BrownianIntegrator.h:36:27: fatal error: openmm/Kernel.h: No such file or directory
compilation terminated.
make[3]: *** [src/kernel/CMakeFiles/openmm_api_wrapper.dir/openmm_wrapper.cpp.o] Error 1
make[2]: *** [src/kernel/CMakeFiles/openmm_api_wrapper.dir/all] Error 2
make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
make: *** [mdrun] Error 2
make: *** No rule to make target `mdrun-install'.  Stop.

Solution:
Kernel.h was missing from the openmmapi folder when I compiled OpenMM myself. Use the pre-compiled version.

4. cc1plus: Unrecognised option -fexcess-precision=fast

Symptom:
 (the % depends on whether you used make, make mdrun, make gmx_gpu_utils etc)
[  0%] Building NVCC (Device) object src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o
cc1plus: error: unrecognized command line option "-fexcess-precision=fast"
CMake Error at CMakeFiles/gmx_gpu_utils_generated_memtestG80_core.cu.o.cmake:198 (message):
  Error generating
  /home/me/tmp/gromacs_gpu/src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o

make[3]: *** [src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o] Error 1
make[2]: *** [src/kernel/gmx_gpu_utils/CMakeFiles/gmx_gpu_utils.dir/all] Error 2
make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
make: *** [mdrun] Error 2


Solution: 
You need to edit gmx_gpu_utils_generated_memtestG80_core.cu.o.cmake and gmx_gpu_utils_generated_gmx_gpu_utils.cu.o.cmake and remove all instances of -fexcess-precision=fast
The files in src/kernel/gmx_gpu_utils/ will be overwritten every time you run cmake so make copies of the two edited files to keep around if you need to re-build.
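Instead of editing the two files by hand, you should be able to strip the flag with sed (an untested sketch -- run it from the gromacs_gpu root):

sed -i 's/-fexcess-precision=fast //g' src/kernel/gmx_gpu_utils/CMakeFiles/gmx_gpu_utils_generated_*.cu.o.cmake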

Here's the diff (edit vs unedited) for gmx_gpu_utils_generated_gmx_gpu_utils.cu.o.cmake:

88c88
< set(CMAKE_HOST_FLAGS  -Wall -Wno-unused   )
---
> set(CMAKE_HOST_FLAGS  -fexcess-precision=fast -Wall -Wno-unused   )

Here's the diff (edit vs unedited) for gmx_gpu_utils_generated_memtestG80_core.cu.o.cmake:

88c88
< set(CMAKE_HOST_FLAGS  -Wall -Wno-unused   )
---
> set(CMAKE_HOST_FLAGS  -fexcess-precision=fast -Wall -Wno-unused   )



5. Nonbonded kernel
Symptom:
[ 78%] Building C object src/gmxlib/CMakeFiles/gmx.dir/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c.o
/home/me/tmp/gromacs_gpu/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c: In function ‘nb_kernel400nf_x86_64_sse’:
/home/me/tmp/gromacs_gpu/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:629:32: error: ‘gmx_invsqrt_exptab’ undeclared (first use in this function)
/home/me/tmp/gromacs_gpu/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:629:32: note: each undeclared identifier is reported only once for each function it appears in
/home/me/tmp/gromacs_gpu/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:629:59: error: ‘gmx_invsqrt_fracttab’ undeclared (first use in this function)
make[3]: *** [src/gmxlib/CMakeFiles/gmx.dir/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c.o] Error 1
make[2]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
make: *** [mdrun] Error 2



Solution: instead of just using cmake, use cmake ../gromacs_gpu/ -DGMX_OPENMM=ON -DGMX_THREADS=OFF

Links to this page:
http://lists.gromacs.org/pipermail/gmx-users/2012-February/068121.html
http://gromacs.5086.n6.nabble.com/gromacs-on-GPU-td5004295.html;cid=1358868814898-115

18 January 2012

53. GROMACS -- carbon dioxide in water. Example

I'm new to GROMACS, so don't follow what I've done blindly.

To get started you need a .top and a .gro file. The .gro file is not dissimilar to a .pdb file and contains xyz coordinates and atom names. The .gro format is inflexible in that specific numbers of chars are dedicated to each field -- if that's Greek to you, then take this advice: don't edit it by hand. If you do, only substitute a given number of characters with the same number of characters.

The .top file contains information about the atoms in each molecule and defines their properties, including partial charges, Lennard-Jones or Buckingham params, constraints etc.

Making the .top file is probably the most challenging step when you are new to GROMACS. Once you've done it a few times it becomes easy, although perhaps somewhat tedious at times. Using a zmat file as your starting point may help (we'll do that here).

For each 'experiment' you need an .mdp file. The .mdp file defines what methods are used during the run, the type of experiment, cutoffs, etc.

In the example I use the gromacs binaries I compiled in an earlier post. To avoid confusion I'll use the double-precision (_dd) binaries the entire time. If you are using the debian gromacs binaries, use _d instead.

To start you need co2.gro, step1.top, em.mdp, eq.mdp, eq2.mdp and production.mdp. The rest of the files are generated.

Comments are added to gromacs files by adding a ; in front.

Summary of commands that we'll be using after we have our .gro and .top files:

genbox_dd -cp co2.gro -o step1.gro -cs /usr/local/gromacs/share/gromacs/top/spc216.gro -p step1.top
# energy minimisation
grompp_dd -f em.mdp -po step3.mdp -p step1.top -pp step2.top -c step1.gro -o step3.tpr
mdrun_dd -v -s step3.tpr -o step4.trr -x step4.xtc -cpo step4.cpt -c step4.gro -e step4.edr -g step4.log
# equilibration step 1
grompp_dd -f eq.mdp -po step6.mdp -p step2.top -pp step6.top -c step4.gro -o step6.tpr
mdrun_dd -v -s step6.tpr -o step7.trr -x step7.xtc -cpo step7.cpt -c step7.gro -e step7.edr -g step7.log
# equilibration step 2
grompp_dd -f eq2.mdp -po step8.mdp -p step6.top -pp step8.top -c step7.gro -o step8.tpr
mdrun_dd -v -s step8.tpr -o step9.trr -x step9.xtc -cpo step9.cpt -c step9.gro -e step9.edr -g step9.log
# production run
grompp_dd -f production.mdp -po step10.mdp -p step8.top -pp step10.top -c step9.gro -o step10.tpr
mdrun_dd -v -s step10.tpr -o step11.trr -x step11.xtc -cpo step11.cpt -c step11.gro -e step11.edr -g step11.log 


START HERE
If at any point you get annoying fatal errors, your first thought should be 'formatting'. I don't know how faithful blogspot is towards tabs etc. Google -- if you're listening: please allow the upload of simple ascii files! Oh, and while we're at it -- if I'm emailing text files with unix line endings, don't effing change them to windows line endings! Ehum...so...


1. Making a .gro file
I first made an .xyz file using avogadro and optimised it using the built-in MM engine.

co2.xyz:

3

C         -5.06401        3.32301        1.92535
O         -4.89871        1.97004        1.81668
O         -5.22923        4.67528        2.03397


I edited the file to
1. Add a title card (not important)
2. Give the atoms specific names (the names aren't important -- but they should match those used in the .top). I decided to call the carbon CO and the oxygens OC, since the C is bonded to O and the Os are bonded to C. The names must be 1-5 characters long.

co2_edited.xyz:

3
CAR
CO         -5.06401        3.32301        1.92535
OC         -4.89871        1.97004        1.81668
OC         -5.22923        4.67528        2.03397

Next, we make our .gro file and set the box size to 2 by 2 by 2 nanometres:
editconf_dd -f co2_edited.xyz -o co2.gro -box 2 2 2 -label CAR -resnr 1
co2.gro:

CAR
    3
    1 ???    CO    1   1.000   1.000   1.000
    1 ???    OC    2   1.017   0.865   0.989
    1 ???    OC    3   0.983   1.135   1.011
   2.00000   2.00000   2.00000
I opened co2.gro in vim and did 
:%s/???/CAR/g
then saved using
:wq
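
The same substitution without opening an editor (sed replaces ??? with CAR, which conveniently has the same number of characters, as the fixed-width .gro format requires):

sed -i 's/???/CAR/g' co2.gro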

co2.gro:
CAR
    3
    1 CAR    CO    1   1.000   1.000   1.000
    1 CAR    OC    2   1.017   0.865   0.989
    1 CAR    OC    3   0.983   1.135   1.011
   2.00000   2.00000   2.00000

This is our starting .gro file.


2. Making our .top file
I first made a zmat file since it contains angles and bond lengths. CO2 is a simple enough molecule that this isn't necessary, but it's good to have a standard approach.

If you haven't yet installed babel, then do so:
sudo apt-get install openbabel

Generate the zmat file:
babel co2.xyz co2.gzmat

co2.gzmat:
 CAR
0  1
Co
Xx  1  r2
Xx  1  r3  2  a3
Variables:
r2= 1.3674
r3= 1.3666
a3= 179.97

For an explanation of the zmat format, look here.

OK, time to create our .top file

Open a new file and call it step1.top

First create directive headers to outline the file -- we will have two types of molecules, CAR (CO2) and SOL (water), so we have two sets of [moleculetype] directives (+ [atoms], [constraints] etc.):
[defaults]
[atomtypes]
[moleculetype]
[atoms]
[constraints]
[exclusions]
[angles]
[dihedrals]
[moleculetype]
[atoms]
[constraints]
[exclusions]
[system]
[molecules]


The order of the different directives is (sadly) important.

OK, time to add information to the different sections -- let's start with the easy ones:

[defaults]:
[defaults]
; nbfunc comb-rule gen-pairs fudgeLJ fudgeQQ
1 1 yes 1.0 1.0

This is a standard directive when you roll your own simulation independent of predefined forcefields. nbfunc=1 means we're using Lennard-Jones c6/c12 parameters.

[system]:
[system]
sparkling water

[system] is basically like a title-card

[molecules]:
[molecules]
CAR      1
SOL      1

OK, time to define our carbon dioxide molecule:
[moleculetype]:
[moleculetype]
;name nrexcl
CAR    3
CAR is the name and nrexcl=3 tells gromacs to exclude non-bonded interactions between atoms that are no further than 3 bonds away.

For atoms the order should be the same as in the .gro file. Here CO  and the two OC make up a single charge group (all have the same cgnr=1). I got the partial charges from an article -- http://pubs.acs.org/doi/full/10.1021/jp062723w.

The type (e.g. CO) must match that which will later be defined in [atomtypes]. The atom name (e.g. CO) must match that in the .gro file. mass is self-explanatory, as is nr.

[atoms]
[atoms]
;   nr   type  resnr residue  atom   cgnr     charge       mass
1 CO 1 CAR CO 1 0.70         12.0107
2 OC 1 CAR OC 1 -0.35 15.9994
3 OC 1 CAR OC 1 -0.35 15.9994

We will constrain the bond distances -- which we got from the zmat file above (r2 and r3 -- I picked 1.3674 Ångström, which is 0.13674 nm):

[constraints]:
; particles bonded if defined in bonds (1, 5, 6) or constraints
[constraints]
;ai     aj func bond
1 2 1 0.13674
1 3 1 0.13674
[exclusions]:
 [exclusions]
;ai other atoms
1 2 3
2 1 3
3 1 2
[exclusions] excludes non-bonded interactions between atom ai and the other atoms listed on the same line. It's probably not necessary in such a small molecule.

[angles]:

[angles]
2 1 3   1 180 1500

There's only one angle -- atom 1 is carbon, so the angle is across O=C=O, or 2=1=3 or, if you prefer, 3=1=2 -- look at atom nr. We know that the angle is 180 degrees, but the zmat data also gave us that. Again, useful for more complex molecules.

Comment out dihedral, since there aren't any. You should be aware of the existence of it though.
[dihedrals]:
;[dihedrals]

Water:
We do the same thing for water, just without explanations. This was copied verbatim from the gromacs manual. Look there for information about [settles].

[ moleculetype ]
; molname nrexcl
SOL 2
[ atoms ]
;   nr   type  resnr residue  atom   cgnr     charge       mass
     1     OW      1    SOL     OW      1    -0.8476   15.99940
     2     HW      1    SOL    HW1      1     0.4238    1.00800
     3     HW      1    SOL    HW2      1     0.4238    1.00800
;[ constraints ]
; ai aj funct length(b0, nm) kb(kJ mol-1 nm-2)
;  1 2 1 0.100
;  1 3 1 0.100
;  2 3 1 0.1633
[ settles ]
;ai funct doh dhh
 1 1 0.1 0.1633
[ exclusions ]
1       2       3
2       1       3
3       1       2

We've so far avoided [atomtypes], which is probably the MOST IMPORTANT directive.

[atomtypes]:
[atomtypes]
;type z       mass        pq          ptype    c6                 c12
OC 8 15.9994 -0.35 A 2.5438e-1 2.0478e-4
CO 6 12.0107 0.70 A 5.2044e-2 2.5080e-5
OW 8 15.9994 -0.834 A 0.261710e-2 0.26331e-05
HW 1 1.0080 0.417 A 0.0000000000 0.00000e-00

So how did we get here?
Let's look at the first line:

OC is the atom type which we'll refer to under the [atoms] directives.
8 is the atomic number of oxygen.
15.9994 is the average atomic mass of oxygen.
-0.35 is the partial charge. For carbon CO and oxygen OC I got it from http://pubs.acs.org/doi/full/10.1021/jp062723w. However, according to a post from 2011: "The charges from [atoms] are used. The charges in [atomtypes] are never used in any forcefield currently used with GROMACS."
A is the particle type -- A stands for Atom
2.5438e-1 is the Lennard-Jones c6 parameter for the carbon dioxide carbon -- I got it from here which gives epsilon and sigma: c6=4*epsilon*sigma^6
2.0478e-4 is the Lennard-Jones c12 parameter for the carbon dioxide carbon -- I got it from here which gives epsilon and sigma: c12=4*epsilon*sigma^12. (sigma_o=0.305 nm; epsilon_o=79 K; sigma_c=0.280 nm; epsilon_c=27 K)

I got OW and HW from a different source.

Finally, here's our entire step1.top:

[defaults]
1 1 yes 1.0 1.0
;http://pubs.acs.org/doi/full/10.1021/jp062723w
;
[atomtypes]
; sigma_o=0.305 nm; epsilon_o=79 K; sigma_c=0.280; epsilon_c=27
OC 8 15.9994 -0.35 A 2.5438e-1 2.0478e-4
CO 6 12.0107 0.70 A 5.2044e-2 2.5080e-5
OW 8 15.9994 -0.8476 A 0.261710e-2 0.26331e-05
HW 1 1.0080 0.4238 A 0.0000000000 0.00000e-00
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[moleculetype]
CAR 3
[atoms]
1 CO 1 CAR CO 1 0.70 12.0107
2 OC 1 CAR OC 1 -0.35 15.9994
3 OC 1 CAR OC 1 -0.35 15.9994
; particles bonded if defined in bonds (1, 5, 6) or constraints
[constraints]
; func bond
1 2 1 0.13674
1 3 1 0.13674
;all non-bonded interactions between atom 1 and the other atoms ignored
[exclusions]
1 2 3
2 1 3
3 1 2
[angles]
2 1 3   1 180 1500
;[dihedrals]
;no dihedrals
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[ moleculetype ]
; molname nrexcl
SOL 2
[ atoms ]
;   nr   type  resnr residue  atom   cgnr     charge       mass
     1     OW      1    SOL     OW      1    -0.8476   15.99940
     2     HW      1    SOL    HW1      1     0.4238    1.00800
     3     HW      1    SOL    HW2      1     0.4238    1.00800
;[ constraints ]
; ai aj funct length(b0, nm) kb(kJ mol-1 nm-2)
;  1 2 1 0.100
;  1 3 1 0.100
;  2 3 1 0.1633
[ settles ]
;ai funct doh dhh
 1 1 0.1 0.1633
[ exclusions ]
1       2       3
2       1       3
3       1       2
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[system]
fizzy water
[molecules]
CAR 1
SOL               212

3. Add water
We use co2.gro and step1.top as the input. We get our water molecules from /usr/local/gromacs/share/gromacs/top/spc216.gro. We write a new .gro file, step1.gro. It will have our carbon dioxide molecule (CAR) and 212 water molecules (SOL).

genbox_dd -cp co2.gro -o step1.gro -cs /usr/local/gromacs/share/gromacs/top/spc216.gro -p step1.top

Ideally genbox_dd updates step1.top and puts the correct number of SOL molecules under [molecules], but please check.

[..]
Successfully made neighbourlist
nri = 11519, nrj = 660245
Checking Protein-Solvent overlap: tested 92 pairs, removed 9 atoms.
Checking Solvent-Solvent overlap: tested 13414 pairs, removed 540 atoms.
Added 212 molecules
Generated solvent containing 636 atoms in 212 residues
Writing generated configuration to step1.gro
Back Off! I just backed up step1.gro to ./#step1.gro.1#
CAR
Output configuration contains 639 atoms in 213 residues
Volume                 :           8 (nm^3)
Density                :      811.63 (g/l)
Number of SOL molecules:    212
Processing topology
[..]
4. Do energy minimisation (EM):
We first need our em.mdp to tell mdrun what to do. Like all the other .mdp files they originally come from http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/01_pdb2gmx.html, and have been used with a minimal amount of editing.

em.mdp:

; minim.mdp - used as input into grompp to generate em.tpr
; Parameters describing what to do, when to stop and what to save
integrator = steep ; Algorithm (steep = steepest descent minimization)
emtol = 1000.0   ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep          = 0.01               ; Energy step size
nsteps = 50000   ; Maximum number of (minimization) steps to perform
; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist = 1         ; Frequency to update the neighbor list and long range forces
ns_type         = grid ; Method to determine neighbor list (simple, grid)
rlist              = 0.9         ; Cut-off for making neighbor list (short range forces)
coulombtype = PME ; Treatment of long range electrostatic interactions
rcoulomb = 0.9 ; Short-range electrostatic cut-off
rvdw = 0.9 ; Short-range Van der Waals cut-off
pbc          =  xyz         ; Periodic Boundary Conditions (yes/no)
Alright. Let's make our binary .tpr file using our step1.top and our step1.gro. We'll use the binary .tpr file with mdrun in the next step. grompp also outputs a new top file, step2.top, for us, as well as a new .mdp file, step3.mdp. The latter doesn't matter to us.

grompp_dd -f em.mdp -po step3.mdp -p step1.top -pp step2.top -c step1.gro -o step3.tpr
mdrun_dd -v -s step3.tpr -o step4.trr -x step4.xtc -cpo step4.cpt -c step4.gro -e step4.edr -g step4.log

This step is fast, and we get
Steepest Descents converged to Fmax < 1000 in 36 steps
Potential Energy  = -9.52113394667855e+03
Maximum force     =  5.44327588453194e+02 on atom 1
Norm of force     =  7.44126862763805e+01
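If you want to see the convergence rather than just the final numbers, g_energy can pull the potential energy out of the .edr file. A minimal sketch (the output name step4_potential.xvg is just mine; g_energy also accepts the term interactively instead of via the pipe):

# extract the potential energy vs step from the EM energy file
echo "Potential" | g_energy -f step4.edr -o step4_potential.xvg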

5. Equilibration part 1
eq.mdp:
title = fizzy drink
define = -DPOSRES ; position restrain the protein
; Run parameters
integrator = md ; leap-frog integrator
nsteps = 90000 ; 2 fs * 90000 = 180 ps
dt = 0.002 ; 2 fs
; Output control
nstxout = 100 ; save coordinates every 0.2 ps
nstvout = 100 ; save velocities every 0.2 ps
; no nstxtcout
nstenergy = 100 ; save energies every 0.2 ps
nstlog = 100 ; update log file every 0.2 ps
; Bond parameters
continuation = no ; first dynamics run
constraint_algorithm = lincs ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = grid ; search neighboring grid cells
nstlist = 5 ; 10 fs
rlist = 0.9 ; short-range neighborlist cutoff (in nm)
rcoulomb = 0.9 ; short-range electrostatic cutoff (in nm)
rvdw = 0.9 ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics
pme_order = 4 ; cubic interpolation
fourierspacing = 0.16 ; grid spacing for FFT
; Temperature coupling is on
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = CAR SOL ; two coupling groups - more accurate
tau_t = 0.1 0.1 ; time constant, in ps
ref_t = 300 300 ; reference temperature, one for each group, in K
;;;;;;;;;;;;;;;;;;;;;
; no pressure coupling
;;;;;;;;;;;;;;;;;;;;;;;;;
; Pressure coupling is off
pcoupl = no ; no pressure coupling in NVT
;
;
;
;
; Periodic boundary conditions
pbc = xyz ; 3-D PBC
; Dispersion correction
DispCorr = EnerPres ; account for cut-off vdW scheme
; Velocity generation
gen_vel = yes ; assign velocities from Maxwell distribution
gen_temp = 300 ; temperature for Maxwell distribution
gen_seed = -1 ; generate a random seed
We then run:
grompp_dd -f eq.mdp -po step6.mdp -p step2.top -pp step6.top -c step4.gro -o step6.tpr
mdrun_dd -v -s step6.tpr -o step7.trr -x step7.xtc -cpo step7.cpt -c step7.gro -e step7.edr -g step7.log

which gives
step 90000, remaining runtime:     0 s        
 Average load imbalance: 2.7 %
 Part of the total run time spent waiting due to load imbalance: 1.0 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %

Parallel run - timing based on wallclock.
               NODE (s)   Real (s)      (%)
       Time:     27.299     27.299    100.0
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    258.447     15.110    569.700      0.042


See this tutorial for an explanation of the two-step equilibration approach that we're using. I use 90000 steps instead of 10000, and it really doesn't matter at this stage.
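Since this is an NVT run it's worth checking that the temperature has settled around the 300 K we asked for. A minimal sketch using g_energy on the energy file from this run (the output name is just mine):

# extract the temperature vs time from the NVT energy file
echo "Temperature" | g_energy -f step7.edr -o step7_temperature.xvg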

6. Equilibration part 2
eq2.mdp:
title = fizzy drink
define = -DPOSRES ; position restrain the protein
; Run parameters
integrator = md ; leap-frog integrator
nsteps = 90000 ; 2 fs * 90000 = 180 ps
dt = 0.002 ; 2 fs
; Output control
nstxout = 100 ; save coordinates every 0.2 ps
nstvout = 100 ; save velocities every 0.2 ps
; no nstxtcout
nstenergy = 100 ; save energies every 0.2 ps
nstlog = 100 ; update log file every 0.2 ps
; Bond parameters
continuation = yes ; <---  Restarting after NVT
constraint_algorithm = lincs ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = grid ; search neighboring grid cells
nstlist = 5 ; 10 fs
rlist = 0.9 ; short-range neighborlist cutoff (in nm)
rcoulomb = 0.9 ; short-range electrostatic cutoff (in nm)
rvdw = 0.9 ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics
pme_order = 4 ; cubic interpolation
fourierspacing = 0.16 ; grid spacing for FFT
; Temperature coupling is on
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = CAR SOL ; two coupling groups - more accurate
tau_t = 0.1 0.1 ; time constant, in ps
ref_t = 300 300 ; reference temperature, one for each group, in K
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; turning on pressure coupling
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; <-- Pressure coupling is on
pcoupl = Parrinello-Rahman ; Pressure coupling on in NPT
pcoupltype = isotropic ; uniform scaling of box vectors
tau_p = 2.0 ; time constant, in ps
ref_p = 1.0 ; reference pressure, in bar
compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc = xyz ; 3-D PBC
; Dispersion correction
DispCorr = EnerPres ; account for cut-off vdW scheme
; Velocity generation
gen_vel = no ; <-- Velocity generation is off
;
;


and we run
grompp_dd -f eq2.mdp -po step8.mdp -p step6.top -pp step8.top -c step7.gro -o step8.tpr
mdrun_dd -v -s step8.tpr -o step9.trr -x step9.xtc -cpo step9.cpt -c step9.gro -e step9.edr -g step9.log

This mdrun takes 43 seconds on an Intel i5 with four cores.
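As with NVT, it may be worth confirming that pressure and density have stabilised before moving on. A minimal sketch (g_energy takes several terms, one per line, terminated by a blank line; the exact term names are listed in its interactive prompt, and the output name is just mine):

# extract pressure and density vs time from the NPT energy file
printf "Pressure\nDensity\n\n" | g_energy -f step9.edr -o step9_press_dens.xvg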

7. Production run
production.mdp:
title = fizzy water
; removed define
; Run parameters
integrator = md ; leap-frog integrator
nsteps = 500000 ; 2 * 500000 = 1000 ps, 1 ns
dt = 0.002 ; 2 fs
; Output control
nstxout = 1000 ; save coordinates every 2 ps (interval increased tenfold vs equilibration)
nstvout = 1000 ; save velocities every 2 ps (interval increased tenfold)
nstxtcout = 1000 ; compressed xtc trajectory output every 2 ps (not written during equilibration)
nstenergy = 1000 ; save energies every 2 ps (interval increased tenfold)
nstlog = 1000 ; update log file every 2 ps (interval increased tenfold)
; Bond parameters
continuation = yes ; <-- Restarting after NPT
constraint_algorithm = lincs ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = grid ; search neighboring grid cells
nstlist = 5 ; 10 fs
rlist = 0.9 ; short-range neighborlist cutoff (in nm)
rcoulomb = 0.9 ; short-range electrostatic cutoff (in nm)
rvdw = 0.9 ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics
pme_order = 4 ; cubic interpolation
fourierspacing = 0.16 ; grid spacing for FFT
; Temperature coupling is on
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = CAR SOL ; two coupling groups - more accurate
tau_t = 0.1 0.1 ; time constant, in ps
ref_t = 300 300 ; reference temperature, one for each group, in K
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; turning on pressure coupling
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; <-- Pressure coupling is on
pcoupl = Parrinello-Rahman ; Pressure coupling on in NPT
pcoupltype = isotropic ; uniform scaling of box vectors
tau_p = 2.0 ; time constant, in ps
ref_p = 1.0 ; reference pressure, in bar
compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc = xyz ; 3-D PBC
; Dispersion correction
DispCorr = EnerPres ; account for cut-off vdW scheme
; Velocity generation
gen_vel = no ; <-- Velocity generation is off
;
;

and we run
grompp_dd -f production.mdp -po step10.mdp -p step8.top -pp step10.top -c step9.gro -o step10.tpr
mdrun_dd -v -s step10.tpr -o step11.trr -x step11.xtc -cpo step11.cpt -c step11.gro -e step11.edr -g step11.log 

The last run takes four minutes on four cores.
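Before analysing anything it doesn't hurt to make sure the trajectory is intact; gmxcheck reads through it and reports the number of frames and the time range (a quick sanity check, nothing more):

gmxcheck -f step11.xtc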

8. Analysis
Again, this follows this tutorial almost verbatim:
trjconv -s step10.tpr -f step11.xtc -o step12.xtc -pbc mol

RMS during production run for CAR/SOL:
g_rms -s step10.tpr -f step12.xtc -o step13_rms.xvg -tu ns

Radius of gyration for CAR:
g_gyrate -s step10.tpr -f step12.xtc -o step14_gyrate.xvg
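The .xvg files are written for xmgrace, so the # and @ header lines confuse gnuplot. A minimal sketch for a quick look at e.g. the RMSD: strip the headers first, then plot column 2 (RMSD in nm) against column 1 (time in ns):

# drop xmgrace header/directive lines, then plot with gnuplot
grep -v "^[@#]" step13_rms.xvg > step13_rms.dat
gnuplot -persist -e "plot 'step13_rms.dat' using 1:2 with lines"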

RDF for CAR/SOL:
g_rdf -s step10.tpr -f step11.xtc -o step15_rdf.xvg 
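Since the CO2 is the whole point of fizzy water, the mean square displacement of CAR, from which g_msd estimates a diffusion coefficient, may also be worth a look. A sketch: pick the CAR group at the prompt, or build one with make_ndx and pass it via -n if it isn't offered (the output name just follows the numbering scheme here):

g_msd -s step10.tpr -f step12.xtc -o step16_msd.xvg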

And the video has, as usual, been compressed to something ugly and useless by Blogger.
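For the record, one way (not necessarily the one used for the video here) to get frames out of the trajectory for rendering a movie in e.g. VMD is to let trjconv write a multi-model pdb, keeping one frame every 10 ps:

# dump every 10th ps of the pbc-corrected trajectory as pdb models
trjconv -s step10.tpr -f step12.xtc -o movie.pdb -dt 10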