Monday, March 11, 2013

Lift - getting access to a database

Since I recently started to play more and more with Lift, I discovered that things are not always as easy as you would think, and yet once you find a solution it turns out to be easy indeed.

For example, I want to connect to my database server, but have no desire to actually use any form of OR mapping.

So after googling a bit and wondering how to do this, it turns out to be all rather simple.

In your Boot.scala file


// imports used below (Lift 2.x; package paths may differ slightly between Lift versions)
import net.liftweb._
import common._
import http._
import mapper._

class Boot {
  def boot {
    // where to search for snippets
    LiftRules.addToPackages("code")

    // build the SiteMap (the sitemap val is defined elsewhere in Boot)
    LiftRules.setSiteMap(sitemap)

    // rest based stuff

    /**
     * connection url and pool for lift
     */
    val vendor = new StandardDBVendor("org.postgresql.Driver",
                                      "jdbc:postgresql://****/****",
                                      Full("username"),
                                      Full("password")) with ProtoDBVendor {
      override def maxPoolSize = 100
      override def doNotExpandBeyond = 200
    }

    DB.defineConnectionManager(DefaultConnectionIdentifier, vendor)
  }
}

This basically defines what to connect to, together with a connection pool capped at 100 connections.

Actually accessing this connection and doing something with it is remarkably easy:


          DB.performQuery("select * from data")
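
performQuery hands back the column names together with the raw rows (a (List[String], List[List[Any]]) pair, if I remember the signature correctly), so printing a result could look roughly like this sketch:

          val (columns, rows) = DB.performQuery("select * from data")

          // print every row as a list of column=value pairs
          rows.foreach { row =>
            println(columns.zip(row).map { case (c, v) => c + "=" + v }.mkString(", "))
          }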

Tuesday, January 29, 2013

scala + lift + json

So since I really, really, really dislike SOAP and Java, I decided to play a bit more with Scala and REST-based web services, basically parsing some JSON strings with Scala.

Now, after browsing countless articles and refreshing my memory of the JSON structure, I finally found the 'lift-json' project, which simplifies my life as a busy developer a bit.

How to parse JSON with Scala?


package edu.ucdavis.fiehnlab.alchemy.core.communication.services.reader
import edu.ucdavis.fiehnlab.alchemy.core.process.types.LibrarySpectra
import net.liftweb.json.parse
import net.liftweb.json.DefaultFormats
/**
 * connects to the alchemy webservice and fetches all registered compounds
 */
class WebserviceLibraryReader {

  /**
   * internal converter class to simplify things
   */
  case class JsonCompound(name: String, inchikey: String, retentiontime: Double, theoretical: Double, massspectra: String)

  case class CompoundWapper(compound: Array[JsonCompound])

  implicit val formats = DefaultFormats

  /**
   * downloads the compounds for the given URL and returns them as a set of library spectra
   */
  def fetchCompounds(url: String): Set[LibrarySpectra] = {

    // for now the url is ignored and a hard-coded sample document is parsed,
    // purely to demonstrate the extraction into the case classes above
    val compounds: CompoundWapper = parse("""
        
        {"compound":[{"name":"test","inchikey":"ADVPTQAUNPRNPO-UHFFFAOYSA-N","retentiontime":11111,"theoretical":11,"massspectra":"111:1 112:2 113:3"},{"name":"test","inchikey":"ADVPTQAUNPRNPO-UHFFFAOYSA-N","retentiontime":11111,"theoretical":11,"massspectra":"111:1 112:2 113:3"}]}
        
        """).extract[CompoundWapper]

    compounds.compound.foreach { x: JsonCompound =>
      println(x)
    }

    // TODO: convert the JsonCompound instances into LibrarySpectra
    null
  }

}

object WebserviceLibraryReader {

  def main(ars: Array[String]) = {
    println(new WebserviceLibraryReader().fetchCompounds("http://localhost:8080/alchemy-admin/services/queryAllCompoundsForMethod/test"))
  }
}
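
The snippet above parses a hard-coded JSON string; to actually pull the document from the webservice, something along these lines should do the trick (just a sketch, assuming the service returns exactly the JSON structure shown above; CompoundFetcher and fetch are made-up names):

import scala.io.Source
import net.liftweb.json.{parse, DefaultFormats}

// same shape as the hard-coded sample above
case class JsonCompound(name: String, inchikey: String, retentiontime: Double, theoretical: Double, massspectra: String)
case class CompoundWapper(compound: Array[JsonCompound])

object CompoundFetcher {

  implicit val formats = DefaultFormats

  // download the raw JSON from the given url and extract it into the case classes
  def fetch(url: String): CompoundWapper =
    parse(Source.fromURL(url).mkString).extract[CompoundWapper]
}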

The complete tutorial can be found here

Thursday, January 24, 2013

using SSDs in a RAID0 over 4 drives

Recently I played a bit more with SSDs, since frankly they are getting affordable enough to use as scratch drives for our new software product.

Basically I was wondering whether a RAID0 over 4 SSDs is fast enough for us, or if I should go with a PCI-Express card.



root@****:/mnt/luna/scratch/gert# dd if=/dev/zero of=/mnt/scratch/bs.img bs=8048 count=81920
81920+0 records in
81920+0 records out
659292160 bytes (659 MB) copied, 0.529044 s, 1.2 GB/s
root@****:/mnt/luna/scratch/gert# dd of=/dev/null if=/mnt/scratch/bs.img bs=8048 count=81920
81920+0 records in
81920+0 records out
659292160 bytes (659 MB) copied, 0.129838 s, 5.1 GB/s


Sure, the sample size is not perfect, but for a first test I'm rather surprised by this result and curious to see whether the speed holds up.
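
One caveat: the 5.1 GB/s read is most likely served from the Linux page cache rather than the drives themselves. A slightly more honest pass would let dd flush the write and drop the caches before reading, for example:

dd if=/dev/zero of=/mnt/scratch/bs.img bs=8048 count=81920 conv=fdatasync
sync && echo 3 > /proc/sys/vm/drop_caches
dd of=/dev/null if=/mnt/scratch/bs.img bs=8048 count=81920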

Thursday, December 13, 2012

grails 1.3.8 and java 1.6

Since we still have a lot of Grails 1.3.8 applications, we are forced to keep using them, and apparently sometimes all the tests fail with the following exception:


java.lang.IllegalAccessError: tried to access class org.apache.xml.serializer.XSLOutputAttributes from class org.apache.xalan.transformer.TransformerImpl
    at org.apache.xalan.transformer.TransformerImpl.executeChildTemplates(TransformerImpl.java:2387)
    at org.apache.xalan.transformer.TransformerImpl.executeChildTemplates(TransformerImpl.java:2255)
    at org.apache.xalan.lib.Redirect.write(Redirect.java:212)
    at org.apache.xalan.extensions.ExtensionHandlerJavaClass.processElement(ExtensionHandlerJavaClass.java:495)
    at org.apache.xalan.templates.ElemExtensionCall.execute(ElemExtensionCall.java:230)
    at org.apache.xalan.templates.ElemApplyTemplates.transformSelectedNodes(ElemApplyTemplates.java:395)
    at org.apache.xalan.templates.ElemApplyTemplates.execute(ElemApplyTemplates.java:177)
    at org.apache.xalan.transformer.TransformerImpl.executeChildTemplates(TransformerImpl.java:2336)
    at org.apache.xalan.transformer.TransformerImpl.applyTemplateToNode(TransformerImpl.java:2202)
    at org.apache.xalan.transformer.TransformerImpl.transformNode(TransformerImpl.java:1276)
    at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:673)
    at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1192)
    at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1170)
    at org.apache.tools.ant.taskdefs.optional.TraXLiaison.transform(TraXLiaison.java:187)
    at org.apache.tools.ant.taskdefs.XSLTProcess.process(XSLTProcess.java:709)
    at org.apache.tools.ant.taskdefs.XSLTProcess.execute(XSLTProcess.java:333)
    at org.apache.tools.ant.taskdefs.optional.junit.AggregateTransformer.transform(AggregateTransformer.java:264)
    at org.apache.tools.ant.taskdefs.optional.junit.XMLResultAggregator.execute(XMLResultAggregator.java:158)
    at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
    at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
    at _GrailsEvents_groovy$_run_closure5.doCall(_GrailsEvents_groovy:58)
    at _GrailsEvents_groovy$_run_closure5.call(_GrailsEvents_groovy)
    at _GrailsTest_groovy$_run_closure1.doCall(_GrailsTest_groovy:199)
    at TestApp$_run_closure1.doCall(TestApp:82)
    at gant.Gant$_dispatch_closure5.doCall(Gant.groovy:381)
    at gant.Gant$_dispatch_closure7.doCall(Gant.groovy:415)
    at gant.Gant$_dispatch_closure7.doCall(Gant.groovy)
    at gant.Gant.withBuildListeners(Gant.groovy:427)
    at gant.Gant.this$2$withBuildListeners(Gant.groovy)
    at gant.Gant$this$2$withBuildListeners.callCurrent(Unknown Source)
    at gant.Gant.dispatch(Gant.groovy:415)
    at gant.Gant.this$2$dispatch(Gant.groovy)
    at gant.Gant.invokeMethod(Gant.groovy)
    at gant.Gant.executeTargets(Gant.groovy:590)
The fix goes into the BuildConfig.groovy file:


inherits("global") { // uncomment to disable ehcache // excludes 'ehcache' excludes 'serializer' }

git deleting remote branch

deleting a remote branch is rather clunky, but easy:

git push origin :branch

Please ensure that you put the colon (:) in front of your branch name, so that the branch will actually be deleted.
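
Newer git versions (1.7.0 and later, if I remember correctly) also offer a more readable way to do the same thing:

git push origin --delete branch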

git memorizing branches


First, you must create your branch locally:
git checkout -b your_branch
After that, you can work locally in your branch. When you are ready to share it, push it. The next command pushes the branch to the remote repository origin and tracks it:
git push -u origin your_branch
Teammates can reach your branch by doing:
git fetch
git checkout origin/your_branch
You can continue working in the branch and pushing whenever you want without passing arguments to git push (an argumentless git push will push your local master to the remote master, your local your_branch to the remote your_branch, and so on):
git push
Teammates can push to your branch by doing commits and then pushing explicitly:
... work ...
git commit
... work ...
git commit
git push origin HEAD:refs/heads/your_branch
Or by tracking the branch to avoid passing arguments to git push:
git checkout --track -b your_branch origin/your_branch
... work ...
git commit
... work ...
git commit
git push
Found at stackoverflow

Wednesday, October 10, 2012

submitting all files in a directory to qsub

Recently we needed a small script that submits every file in a directory to a processing script executed via qsub.

#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Missing arguments..."
    echo "Use: process.sh 'input dir' 'output dir'"
    exit 1
fi

if [ -d "$1" ]; then
    # note: the input dir is expected to end with a trailing slash, since $1$file is concatenated below
    for file in `ls "$1"`
    do
        qsub -cwd -p -512 run.sh "$1$file" "$2"
    done
else
    echo "Missing or incorrect input directory..."
    exit 1
fi

The actual run.sh script just calls a small Java program, which takes our two parameters:

java -Xmx1024m -jar $HOME/data/jars/DataExtractor-0.1.jar $1 $2