
Thursday, July 18, 2013

X-Forwarding in qlogin on rocks linux > 5

To enable X forwarding from your nodes to your original remote session:

In rocks 4.3, I get the output
# qconf -sconf | grep qlogin
qlogin_command               /opt/gridengine/bin/rocks-qlogin.sh
qlogin_daemon                /usr/sbin/sshd -i
But in rocks 5.3, I get
# qconf -sconf | grep qlogin
qlogin_command               builtin
qlogin_daemon                builtin
So I changed it in rocks 5.3:
# qconf -mconf global
qlogin_command               /opt/gridengine/bin/rocks-qlogin.sh
qlogin_daemon                /usr/sbin/sshd -i

And modify the script:

/opt/gridengine/bin/rocks-qlogin.sh

to include this part

/usr/bin/ssh -Y -p $PORT $HOST
The -Y flag enables trusted (less strictly checked) X11 forwarding, which avoids the usual X authentication problems.
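For reference, a minimal sketch of what the modified rocks-qlogin.sh could look like; the HOST/PORT argument handling below is an assumption, so keep the rest of your installed script and only change the ssh line:

#!/bin/bash
# rocks-qlogin.sh - wrapper gridengine calls for interactive qlogin sessions
# (the HOST/PORT argument order is an assumption; check your installed copy)
HOST=$1
PORT=$2

# -Y requests trusted X11 forwarding, so X programs started on the node
# display on your original remote session
/usr/bin/ssh -Y -p $PORT $HOST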
 

Wednesday, October 10, 2012

submitting all files in a directory to qsub

Recently we needed a small script to submit every file in a directory to another script, executed via qsub.

#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Missing arguments..."
    echo "Use: process.sh 'input dir' 'output dir'"
    exit 1
fi

if [ -d $1 ]; then
    for file in `ls $1`
    do
        qsub -cwd -p -512 run.sh $1$file $2
    done
else
    echo "Missing or incorrect input directory..."
    exit 1
fi

The actual run.sh script just calls a small Java program, which takes our two parameters:

java -Xmx1024m -jar $HOME/data/jars/DataExtractor-0.1.jar $1 $2
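For completeness, a minimal sketch of what run.sh could look like as a qsub-able wrapper; the shebang and the way the two arguments are passed through are assumptions, the jar path is the one from above:

#!/bin/bash
# run.sh - called by qsub with the input file and the output directory
java -Xmx1024m -jar $HOME/data/jars/DataExtractor-0.1.jar $1 $2

Since process.sh builds the file path as $1$file, the input directory has to be passed with a trailing slash, e.g. ./process.sh /data/input/ /data/results/ (paths here are just examples).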

Friday, November 4, 2011

rocks linux - virtual hosts with apache

Currently I have a cluster set up in the office with about 5 nodes and a bit over 50 CPUs, so this morning I decided to rebuild some of the nodes, since I needed to make some changes to the cluster.

Twenty minutes into this procedure I kept getting odd error messages, like file 'update.img' not found and so on.

So while backtracking the latest changes I made to the server, I realized that I had set up 20 virtual hosts in the Apache configuration, which ended up breaking the kickstart configuration. It turns out that the order of the virtual hosts is quite important: the kickstart configuration always needs to come first, and only then can you define the other virtual hosts.

Example of a virtual host configuration that allows the kickstart configuration to work:


vim /etc/httpd/conf.d/rocks.conf


actual file:


<IfModule mod_mime.c>
AddHandler cgi-script .cgi
</IfModule>

UseCanonicalName Off


DirectoryIndex index.cgi

<Directory "/var/www/html">
Options FollowSymLinks Indexes ExecCGI
AllowOverride None
Order allow,deny
Allow from all
</Directory>

<Directory "/var/www/html/proc">
Options FollowSymLinks Indexes ExecCGI
AllowOverride None
Order deny,allow
Allow from 10.1.0.0/255.255.0.0
Allow from 127.0.0.1
Deny from all
</Directory>

<Directory "/var/www/html/pxelinux">
Options FollowSymLinks Indexes ExecCGI
AllowOverride None
Order deny,allow
Allow from 10.1.0.0/255.255.0.0
Allow from 127.0.0.1
Deny from all
</Directory>

<VirtualHost *:80>
ServerName kickstart.host.com
DocumentRoot "/var/www/html"
</VirtualHost>

<VirtualHost *:80>
ServerName virtual.host.com
DocumentRoot "/var/www/cts/html"
</VirtualHost>
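After editing the file it is worth checking that Apache really picks up the kickstart virtual host first before restarting; the commands below are standard Apache/CentOS commands and assume the stock httpd service that ships with rocks:

# show the parsed virtual host order and catch syntax errors
httpd -S

# apply the configuration
service httpd restart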

Thursday, March 4, 2010

rocks linux cluster - adding a new parallel environment

By default rocks ships with a couple of parallel environments, which execute jobs across different nodes. But sometimes you just want to have a node all to yourself and take over all of its slots.

To do this you can just create a new environment, which gives you a defined number of CPUs on a single node for a specified job.


  1. Create a file which describes the parallel environment, like this:

     pe_name threaded
     slots 999
     user_lists NONE
     xuser_lists NONE
     start_proc_args /bin/true
     stop_proc_args /bin/true
     allocation_rule $pe_slots
     control_slaves FALSE
     job_is_first_task TRUE
     urgency_slots min
     accounting_summary FALSE

  2. Register this on the head node:

     qconf -Ap file.txt

  3. Add it to the list of available environments:

     qconf -mq all.q
     pe_list make mpich mpi orte threaded

  4. Test it with qlogin:

     qlogin -pe threaded 4
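The same environment also works for batch jobs. A minimal sketch, assuming a job script called myjob.sh (a hypothetical name) that benefits from several cores on one node:

# request 4 slots on a single node through the new parallel environment
qsub -cwd -pe threaded 4 myjob.sh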

Monday, March 1, 2010

scala/groovy on rocks linux

Well, since there is no scala/groovy roll for rocks, we need to install them the traditional way.

  • go into the directory /share/apps on the frontend
  • if apps doesn't exist create it
  • copy your scala/groovy tgz there
  • gunzip and untar it
  • edit your extend-compute.xml as shown here
  • add a new file modification section like this


<file name="/etc/profile" mode="append">

GROOVY_HOME=/share/apps/groovy
SCALA_HOME=/share/apps/scala

export GROOVY_HOME
export SCALA_HOME

PATH=$GROOVY_HOME/bin:$PATH
PATH=$SCALA_HOME/bin:$PATH

export PATH

</file>


  • rebuild your dist as shown here
  • reinstall your nodes as shown here
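After the nodes come back up, a quick sanity check is to log into one and ask both tools for their version; compute-0-0 is just an example node name:

ssh compute-0-0
groovy --version
scala -version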

Wednesday, February 24, 2010

rocks linux cluster - mounting an nfs share on all nodes

After setting up the latest cluster I wanted to provide a couple of NFS shares to all nodes, since users demanded this.

Well, in rocks linux it's rather simple, once you understand the concept behind it.

So here is a step-by-step tutorial.

  • go to the profile directory
  • cd /export/rocks/install/site-profiles/5.3/nodes/
  • make a copy of the skeleton file
  • cp skeleton.xml extend-compute.xml
  • edit the file to tell it that we need to create a directory and add a line to the fstab; the right place for this is the post section, as shown below


    <post>

    mkdir -p /mnt/share

    <file name="/etc/fstab" mode="append">
    server:/mount /mnt/share nfs defaults 0 0
    </file>

    </post>

  • change back to the main install dir
  • cd /export/rocks/install
  • rebuild the rocks distribution
  • rocks create distro
  • rebuild nodes
  • ssh compute-0-0 '/boot/kickstart/cluster-kickstart'

Congratulations, if you did everything right your node should now boot up with the directory mounted.
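To check that the share is really there, look at the mount table on a node; compute-0-0 is just an example node name:

ssh compute-0-0 'mount | grep /mnt/share'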