Tuesday, February 24, 2015

simple oneliner to create an index over all the massbank records

I'm quite often playing around with MassBank these days and always like to have a simple index of all its files in a single directory instead of dealing with all its subdirectories.

mkdir -p ./all && find "$PWD" -name '*.txt' -exec bash -c 'ln -fs "$1" "./all/$(basename "$1")"' _ {} \;

basically takes care of all of this for you. Using "$PWD" instead of ./ keeps the symlink targets absolute, so the links in ./all still resolve no matter where you view them from.
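If records later move or get renamed, the symlinks in ./all can go stale. A small sketch, assuming GNU find, to list and prune dangling links:

```shell
# -xtype l matches symlinks whose target no longer exists
find ./all -xtype l          # list broken symlinks
find ./all -xtype l -delete  # remove them
```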

Monday, May 19, 2014

Using grails for restful services

Well, if you use Grails for RESTful services, it saves a lot of time to make sure you add the following mappings to your controller.

class SubmitterController extends RestfulController {

    // restricting the response formats avoids the error that views are not found
    static responseFormats = ['json']

    static allowedMethods = [list: "GET", save: "POST", update: "POST", delete: "DELETE", show: "GET"]

    public SubmitterController() {
        // RestfulController needs to know the domain class it serves
        super(Submitter)
    }
}

Once you've done this, everything works as it should, and it's rather obvious. But sometimes we forget. Nearly as annoying as Font Awesome icons not working: if you use...
<i class="fa-minus-square"></i>

you will only see a stupid UTF square. Instead you need to use...
<i class="fa fa-minus-square"></i>

Friday, February 7, 2014

adding the head node of rocks as compute node

happily stolen from:


because I keep forgetting it...

 Add Frontend as a SGE Execution Host in Rocks

To setup the frontend node to also be a SGE execution host which queued jobs can be run on (like the compute nodes), do the following:

Quick Setup

# cd /opt/gridengine
# ./install_execd    (accept all of the default answers)
# qconf -mq all.q    (if needed, adjust the number of slots for [frontend.local=4] and other parameters)
# /etc/init.d/sgemaster.frontend stop
# /etc/init.d/sgemaster.frontend start
# /etc/init.d/sgeexecd.frontend stop
# /etc/init.d/sgeexecd.frontend start

Detailed Setup

1. As root, make sure $SGE_ROOT, etc. are set up correctly on the frontend:
# env | grep SGE
It should return back something like:
SGE_CELL=default
SGE_ROOT=/opt/gridengine
If not, source the file /etc/profile.d/sge-binaries.[c]sh or check whether the SGE Roll is properly installed and enabled:
# rocks list roll
sge:          5.2     x86_64 yes

2. Run the install_execd script to setup the frontend as a SGE execution host:
# cd $SGE_ROOT
# ./install_execd 
Accept all of the default answers as suggested by the script.

  • NOTE: In the examples below, the text "frontend" should be substituted with the actual "short hostname" of your frontend (as reported by the command hostname -s).
For example, if running the command hostname on your frontend returns back the "FQDN long hostname" of:
# hostname
frontend.mydomain.org
then hostname -s should return back just:
# hostname -s
frontend

3. Verify that the number of job slots for the frontend equals the number of physical processors/cores on your frontend that you wish to make available for queued jobs, by checking the value of the slots parameter of the queue configuration for all.q:
# qconf -sq all.q | grep slots
slots                 1,[compute-0-0.local=4],[frontend.local=4]
The [frontend.local=4] means that SGE can run up to 4 jobs on the frontend. Since the frontend is normally used for other tasks besides running compute jobs, it is recommended not to make all of its installed physical processors/cores available to SGE, to avoid overloading the frontend.
For example, on a 4-core frontend, to configure SGE to use only up to 3 of the 4 cores, modify the slots for frontend.local from 4 to 3 by typing:
# qconf -mattr queue slots '[frontend.local=3]' all.q
If there are additional queues besides the default all.q, repeat the above for each queue.
Read "man queue_conf" for a list of resource limit parameters such as s_cpu, h_cpu, s_vmem, and h_vmem that can be adjusted to prevent jobs from overloading the frontend.
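The slots value is a default followed by per-host overrides in brackets. A minimal sketch for pulling one host's slot count out of that string (the host name frontend.local follows the examples above; substitute your own):

```shell
# split the comma-separated overrides onto lines, then extract the
# number for frontend.local from its [host=N] entry
slots='1,[compute-0-0.local=4],[frontend.local=4]'
echo "$slots" | tr ',' '\n' | sed -n 's/^\[frontend\.local=\([0-9]*\)\]$/\1/p'
# prints: 4
```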

  • NOTE: For Rocks 5.2 or older, the frontend may have been configured during installation with only 1 job slot ([frontend.local=1]) in the default all.q queue, which will only allow up to 1 queued job to run on the frontend. To check the value of the slots parameter of the queue configuration for all.q, type:
# qconf -sq all.q | grep slots
slots                 1,[compute-0-0.local=4],[frontend.local=1]
If needed, modify the slots for frontend.local from 1 to 4 (or up to the maximum number of physical processors/cores on your frontend that you wish to use) by typing:
# qconf -mattr queue slots '[frontend.local=4]' all.q

  • NOTE: For Rocks 5.3 or older, create the file /opt/gridengine/default/common/host_aliases to contain both the .local hostname and the FQDN long hostname of your frontend:
# vi $SGE_ROOT/default/common/host_aliases
frontend.local frontend.mydomain.org

  • NOTE: For Rocks 5.3 or older, edit the file /opt/gridengine/default/common/act_qmaster to contain the .local hostname of your frontend:
# vi $SGE_ROOT/default/common/act_qmaster
frontend.local

  • NOTE: For Rocks 5.3 or older, edit the file /etc/init.d/sgemaster.frontend:
# vi /etc/init.d/sgemaster.frontend
and comment out the line:
/bin/hostname --fqdn > $SGE_ROOT/default/common/act_qmaster
by inserting a # character at the beginning, so it becomes:
#/bin/hostname --fqdn > $SGE_ROOT/default/common/act_qmaster
in order to prevent the file /opt/gridengine/default/common/act_qmaster from getting overwritten with incorrect data every time sgemaster.frontend is run during bootup.

4. Restart both qmaster and execd for SGE on the frontend:
# /etc/init.d/sgemaster.frontend stop
# /etc/init.d/sgemaster.frontend start
# /etc/init.d/sgeexecd.frontend stop
# /etc/init.d/sgeexecd.frontend start

And everything will start working. :)

Friday, September 20, 2013

postgres killing queries

Once in a while our database server gets terribly overloaded because people are running thousands of long-running queries against it.

1. How to find out who is generating queries:

select datname, client_addr from pg_stat_activity;

2. Killing queries from a specific IP:

select pg_terminate_backend(procpid) from pg_stat_activity where client_addr='IP';

This can be done as the postgres user. (On PostgreSQL 9.2 and newer the column is called pid instead of procpid.)

Tuesday, August 20, 2013

playing with logfiles...

After setting up our Zabbix system at work to monitor most of the servers, I'm still trying to analyze our database server and its logs a bit more, since the system seems to have been under quite a high load recently.

The easiest way to do this was, as always, a little awk script to pull some statistics out of our log files and point me in the right direction:

cat pg_log/postgresql-2013-08-*.log |  grep duration | awk -F ':' '{ print $7, $0 }' | grep execute | awk  '{ total += $1; count++; print $0 } END { print "average query speed: ", total/count, " count of queries: ",  count }'
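Note that the field index ($7 above) depends on your log_line_prefix setting, since the first awk splits on ':'. A self-contained sketch on two fabricated log lines (where the duration happens to land in field 5):

```shell
# two fake Postgres log lines with "duration: N ms" entries; adjust the
# -F ':' field index to match your own log_line_prefix
printf '%s\n' \
  '2013-08-20 10:00:01 PDT LOG:  duration: 12.5 ms  execute S_1: select 1' \
  '2013-08-20 10:00:02 PDT LOG:  duration: 7.5 ms  execute S_2: select 2' |
  grep duration | grep execute |
  awk -F ':' '{ print $5 }' |
  awk '{ total += $1; count++ } END { print "average query speed: ", total/count, " count of queries: ", count }'
```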

Thursday, July 18, 2013

X-Forwarding in qlogin on rocks linux > 5

to enable X forwarding from your nodes to your original remote session:

In Rocks 4.3, I get the output
# qconf -sconf | grep qlogin
qlogin_command               /opt/gridengine/bin/rocks-qlogin.sh
qlogin_daemon                /usr/sbin/sshd -i
But in Rocks 5.3, I get
# qconf -sconf | grep qlogin
qlogin_command               builtin
qlogin_daemon                builtin
So I changed it in Rocks 5.3:
# qconf -mconf global
qlogin_command               /opt/gridengine/bin/rocks-qlogin.sh
qlogin_daemon                /usr/sbin/sshd -i

And modify the script /opt/gridengine/bin/rocks-qlogin.sh to include this part:

/usr/bin/ssh -Y -p $PORT $HOST

The -Y enables trusted X11 forwarding, i.e. it skips the X11 SECURITY extension restrictions (less secure, but it makes X applications just work).

Thursday, June 20, 2013

Playing with Chef

Recently I got a bit overwhelmed with all the servers I maintain and have to update, so I decided to ease my life and start using an automation system. I don't really care that much about my physical servers, but having quite a lot of dedicated virtual machines, I guess it was time to simplify this process.

Bootstrapping a system
knife bootstrap systemname -x username -P password --sudo
Installing a cookbook from the central repository
knife cookbook site install name-of-cookbook
Uploading a cookbook to the server
knife cookbook upload name-of-cookbook
Adding a recipe to a system
knife node run_list add HOSTNAME 'recipe[recipe]'

Updating all systems
knife ssh "*:*" "sudo chef-client" -x username -P password
Updating/Adding a data bag
knife data bag from file BAG_NAME ITEM.json
Changing the environment for nodes
knife exec -E 'nodes.transform("chef_environment:dev") { |n| n.chef_environment("production") }'
Adding a role to all nodes in a certain environment
knife exec -E 'nodes.transform("chef_environment:Fiehnlab") {|n| puts n.run_list << "role[user-management]"; n.save }'
Overriding the runlist for a single run
chef-client -o 'recipe[rocks-cluster-6.1::computenode]'

Just a small overview for me to remember some commands.