Wednesday, April 1, 2015

adding program shortcuts to the context menu of windows

Ever since I got the Surface Pro 3 (loving it!), I've been doing a lot of programming under Windows and rediscovering the platform.

One of the first things I had to do was add and optimize some context menu entries to simplify the process of getting Console2 to open at specific directories, basically trying to replace the cmd program as much as possible.

regedit.exe

go to:

HKEY_CLASSES_ROOT\Folder\shell

edit 'Command Prompt here' 

and set this as the new command:

"C:\terminal\Console.exe"  -d  "%L"

Tuesday, February 24, 2015

simple one-liner to create an index over all the massbank files

I'm quite often playing around with MassBank these days, and I always like to have a simple index of all their files in a single directory instead of dealing with all their subdirectories.

find ./ -name '*.txt' -exec bash -c "f=\$(basename {} ); ln -fs {} ./all/\$f" \;

It basically takes care of all of that for you.
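
Unrolled, and with {} handed to bash as a real argument instead of being spliced into the command string (a slightly safer variant of the same idea; ./all has to exist first):

mkdir -p ./all    # make sure the flat index directory is there
find ./ -name '*.txt' -exec bash -c '
    f=$(basename "$1")       # strip the directory part of the record file
    ln -fs "$1" "./all/$f"   # (re)create the symlink in ./all
' _ {} \;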

Monday, May 19, 2014

Using grails for restful services

Well, if you use Grails for RESTful services, it saves a lot of time to make sure you add the following mappings to your controller.


class SubmitterController extends RestfulController<Submitter> {

    // restricting the response formats avoids errors about views not being found
    static responseFormats = ['json']

    static allowedMethods = [list: "GET", save: "POST", update: "POST", delete: "DELETE", show: "GET"]

    SubmitterController() {
        super(Submitter)
    }
}
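
With this in place the controller answers the usual REST verbs. For instance, assuming a standard resource mapping that exposes it under /submitters (the URL and port are assumptions, adjust to your app):

curl -H "Accept: application/json" http://localhost:8080/submitters    # list -> GET
curl -X DELETE http://localhost:8080/submitters/1                      # delete -> DELETE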

Once you do this, everything works as it should. It's rather obvious, but sometimes we forget. Nearly as stupid as Font Awesome fonts not working: if you use...
< i class="fa-minus-square"></i>

you will only see a stupid UTF placeholder square. Instead you need to use...
< i class="fa fa-minus-square"></i>

Friday, February 7, 2014

adding the head node of rocks as compute node

happily stolen from:


https://wiki.rocksclusters.org/wiki/index.php/Sun_GridEngine

cause I keep forgetting it...


Add Frontend as an SGE Execution Host in Rocks

To set up the frontend node to also be an SGE execution host on which queued jobs can run (like the compute nodes), do the following:

Quick Setup

# cd /opt/gridengine
# ./install_execd    (accept all of the default answers)
# qconf -mq all.q    (if needed, adjust the number of slots for [frontend.local=4] and other parameters)
# /etc/init.d/sgemaster.frontend stop
# /etc/init.d/sgemaster.frontend start
# /etc/init.d/sgeexecd.frontend stop
# /etc/init.d/sgeexecd.frontend start

Detailed Setup

1. As root, make sure $SGE_ROOT, etc. are set up correctly on the frontend:
# env | grep SGE
It should return back something like:
SGE_CELL=default
SGE_ARCH=lx26-amd64
SGE_EXECD_PORT=537
SGE_QMASTER_PORT=536
SGE_ROOT=/opt/gridengine
If not, source the file /etc/profile.d/sge-binaries.[c]sh or check if the SGE Roll is properly installed and enabled:
# rocks list roll
NAME          VERSION ARCH   ENABLED
sge:          5.2     x86_64 yes
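
If the variables are missing, sourcing the profile by hand looks like this (bash shown; csh users take the .csh variant), after which env | grep SGE should list them:

# . /etc/profile.d/sge-binaries.sh
# env | grep SGE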

2. Run the install_execd script to set up the frontend as an SGE execution host:
# cd $SGE_ROOT
# ./install_execd 
Accept all of the default answers as suggested by the script.


  • NOTE: In the examples below, the text frontend should be substituted with the actual "short hostname" of your frontend (as reported by the command hostname -s).
For example, if running the command hostname on your frontend returns back the "FQDN long hostname" of:
# hostname
mycluster.mydomain.org
then hostname -s should return back just:
# hostname -s
mycluster

3. Verify that the number of job slots for the frontend is equal to the number of physical processors/cores on your frontend that you wish to make available for queued jobs by checking the value of the slots parameter of the queue configuration for all.q:
# qconf -sq all.q | grep slots
slots                 1,[compute-0-0.local=4],[frontend.local=4]
The [frontend.local=4] means that SGE can run up to 4 jobs on the frontend. Be aware that since the frontend is normally used for other tasks besides running compute jobs, it is recommended that not all of the installed physical processors/cores on the frontend be made available for scheduling by SGE, to avoid overloading the frontend.
For example, on a 4-core frontend, to configure SGE to use only up to 3 of the 4 cores, you can modify the slots for frontend.local from 4 to 3 by typing:
# qconf -mattr queue slots '[frontend.local=3]' all.q
If there are additional queues besides the default all.q one, repeat the above for each queue.
Read "man queue_conf" for a list of resource limit parameters such as s_cpu, h_cpu, s_vmem, and h_vmem that can be adjusted to prevent jobs from overloading the frontend.


  • NOTE: For Rocks 5.2 or older, the frontend may have been configured by default during installation with only 1 job slot ([frontend.local=1]) in the default all.q queue, which will only allow up to 1 queued job to run on the frontend. To check the value of the slots parameter of the queue configuration for all.q, type:
# qconf -sq all.q | grep slots
slots                 1,[compute-0-0.local=4],[frontend.local=1]
If needed, modify the slots for frontend.local from 1 to 4 (or up to the maximum number of physical processors/cores on your frontend that you wish to use) by typing:
# qconf -mattr queue slots '[frontend.local=4]' all.q


  • NOTE: For Rocks 5.3 or older, create the file /opt/gridengine/default/common/host_aliases to contain both the .local hostname and the FQDN long hostname of your frontend:
# vi $SGE_ROOT/default/common/host_aliases
frontend.local frontend.mydomain.org


  • NOTE: For Rocks 5.3 or older, edit the file /opt/gridengine/default/common/act_qmaster to contain the .local hostname of your frontend:
# vi $SGE_ROOT/default/common/act_qmaster
frontend.local


  • NOTE: For Rocks 5.3 or older, edit the file /etc/init.d/sgemaster.frontend:
# vi /etc/init.d/sgemaster.frontend
and comment out the line:
/bin/hostname --fqdn > $SGE_ROOT/default/common/act_qmaster
by inserting a # character at the beginning, so it becomes:
#/bin/hostname --fqdn > $SGE_ROOT/default/common/act_qmaster
in order to prevent the file /opt/gridengine/default/common/act_qmaster from getting overwritten with incorrect data every time sgemaster.frontend is run during bootup.
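
Equivalently, the line can be commented out non-interactively with sed (same edit, just scripted):

# sed -i 's|^/bin/hostname --fqdn|#/bin/hostname --fqdn|' /etc/init.d/sgemaster.frontend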

4. Restart both qmaster and execd for SGE on the frontend:
# /etc/init.d/sgemaster.frontend stop
# /etc/init.d/sgemaster.frontend start
# /etc/init.d/sgeexecd.frontend stop
# /etc/init.d/sgeexecd.frontend start
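
To double-check that the frontend is now really known as an execution host, something like this should list it (assuming the SGE binaries are on your PATH):

# qconf -sel          (the frontend should show up in the execution host list)
# qstat -f -q all.q   (its all.q queue instance should show the configured slots)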


And everything will start working. :)

Friday, September 20, 2013

postgres killing queries

Once in a while our database server gets terribly overloaded because people are running thousands of long-running queries against it.

1. How to find out who is generating queries:

select datname, client_addr from pg_stat_activity;


2. Killing queries from a specific IP:

select pg_terminate_backend(procpid) from pg_stat_activity where client_addr='IP';

This can be done as the postgres user.
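
One caveat: PostgreSQL 9.2 renamed the procpid column to pid, so on newer servers the same kill looks like this ('IP' again being the client address):

psql -U postgres -c "select pg_terminate_backend(pid) from pg_stat_activity where client_addr = 'IP';"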

Tuesday, August 20, 2013

playing with logfiles...

After setting up our Zabbix system at work to monitor most of the servers, I'm still trying to analyze our database server and its logs a bit more, since the system seems to have been under quite a high load recently.

The easiest way to do this was, as always, a little awk script to pull some statistics out of our log files and point me in the right direction:

cat pg_log/postgresql-2013-08-*.log |  grep duration | awk -F ':' '{ print $7, $0 }' | grep execute | awk  '{ total += $1; count++; print $0 } END { print "average query speed: ", total/count, " count of queries: ",  count }'
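
Broken down with comments, the same pipeline reads like this (the duration ends up in the 7th ':'-separated field with our log_line_prefix, so adjust $7 to your own log format):

cat pg_log/postgresql-2013-08-*.log |
  grep duration |                    # keep only lines carrying timing information
  awk -F ':' '{ print $7, $0 }' |    # put the duration value in front of each line
  grep execute |                     # restrict to executed statements
  awk '{ total += $1; count++; print $0 }
       END { print "average query speed: ", total/count, " count of queries: ", count }'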

Thursday, July 18, 2013

X-Forwarding in qlogin on rocks linux > 5

To enable X forwarding from your nodes back to your original remote session:

In Rocks 4.3, I get the output:
# qconf -sconf | grep qlogin
qlogin_command               /opt/gridengine/bin/rocks-qlogin.sh
qlogin_daemon                /usr/sbin/sshd -i
But in Rocks 5.3, I get:
# qconf -sconf | grep qlogin
qlogin_command               builtin
qlogin_daemon                builtin
So I changed it in Rocks 5.3:
# qconf -mconf global
qlogin_command               /opt/gridengine/bin/rocks-qlogin.sh
qlogin_daemon                /usr/sbin/sshd -i

Then modify the script:

/opt/gridengine/bin/rocks-qlogin.sh

to include this part:

/usr/bin/ssh -Y -p $PORT $HOST
The -Y option basically enables trusted (less secure) X11 forwarding and authentication.
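
For reference, the modified wrapper then looks roughly like this (a sketch; the argument handling here is an assumption, so check your own copy of the script):

#!/bin/sh
# rocks-qlogin.sh (sketch): SGE hands the script the target node and port
HOST=$1
PORT=$2
# -Y requests trusted X11 forwarding instead of the untrusted default
exec /usr/bin/ssh -Y -p $PORT $HOST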