So much to do, so little time

Trying to squeeze sense out of chemical data

Archive for May, 2011

Accessing High Content Data from R


Over the last few months I’ve been getting involved in the informatics & data mining aspects of high content screening. While I haven’t gotten into image analysis itself (there’s a ton of good code and tools already out there), I’ve been focusing on managing image data and metadata, and asking interesting questions of the voluminous, high-dimensional data that these techniques generate.

One of our platforms is ImageXpress from Molecular Devices, which stores images in a file-based image store, with metadata and numerical image features in an Oracle database. While they do provide an API to interact with the database, it’s a Windows-only DLL. Since much of my modeling work requires that I access the data from R, I needed a more flexible solution.

So I’ve put together an R package that allows one to access numeric image data (i.e., descriptors) as well as the images themselves. It depends on the ROracle package (which in turn requires an Oracle client installation).

Currently the functionality is relatively limited, focusing on my common tasks. For example, given an assay plate barcode, we can retrieve the assay ids that the plate is associated with and then, for a given assay, obtain the cell-level image parameter data (or optionally aggregate it to well-level data). This task is easily parallelizable – in fact, when processing a high content RNAi screen, I use snow to speed up the data access and processing of 50 plates (see the sketch after the example below).

library(ncgchcs)
con <- get.connection(user='foo', passwd='bar', sid='baz')
plate.barcode <- 'XYZ1023'
plate.id <- get.plates(con, plate.barcode)
assay.name <- 'My Analysis' ## hypothetical name of the analysis settings we want

## multiple analyses could be run on the same plate - we need
## to get the correct one (MX uses 'assay' to refer to an analysis run)
## so we first get details of analyses without retrieving the actual data
details <- get.assay.by.barcode(con, barcode=plate.barcode, dry=TRUE)
details <- subset(details, PLATE_ID == plate.id & SETTINGS_NAME == assay.name)
assay.id <- details$ASSAY_ID

## finally, get the analysis data, using median to aggregate cell-level data
hcs.data <- get.assay(con, assay.id, aggregate.func=median, verbose=FALSE, na.rm=TRUE)
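
As noted above, this procedure parallelizes naturally across plates. Here’s a minimal sketch of how one might farm out the per-plate retrieval with snow – the barcode list, assay name and cluster size are made up for illustration:

library(snow)

## hypothetical plate barcodes from a screen, plus the analysis settings name
barcodes <- c('XYZ1023', 'XYZ1024', 'XYZ1025')
assay.name <- 'My Analysis'

## a local socket cluster, one worker per plate (capped at 8)
cl <- makeCluster(min(length(barcodes), 8), type='SOCK')
clusterEvalQ(cl, library(ncgchcs))
clusterExport(cl, 'assay.name')

per.plate <- parLapply(cl, barcodes, function(bc) {
  ## each worker opens its own Oracle connection, since
  ## connections can't be shared across processes
  con <- get.connection(user='foo', passwd='bar', sid='baz')
  plate.id <- get.plates(con, bc)
  details <- get.assay.by.barcode(con, barcode=bc, dry=TRUE)
  details <- subset(details, PLATE_ID == plate.id & SETTINGS_NAME == assay.name)
  get.assay(con, details$ASSAY_ID, aggregate.func=median, verbose=FALSE, na.rm=TRUE)
})
stopCluster(cl)

The result is a list with one element of well-level data per plate, which can then be combined for downstream modeling.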

Alternatively, given a plate id (the internal MetaXpress plate id) and a well location, one can obtain the paths to the relevant image(s). With the images in hand, you could use EBImage to perform image processing entirely in R.

library(ncgchcs)
## will want to set IMG.STORE.LOC to point to your image store
con <- get.connection(user='foo', passwd='bar', sid='baz')
plate.barcode <- 'XYZ1023'
plate.id <- get.plates(con, plate.barcode)
get.image.path(con, plate.id, 4, 4) ## images for well (4,4), all sites & wavelengths
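
With the paths in hand, a minimal sketch of the EBImage route might look like the following (continuing from the block above, and assuming get.image.path returns a character vector of file paths under IMG.STORE.LOC – the thresholding parameters are purely illustrative):

library(EBImage)

paths <- get.image.path(con, plate.id, 4, 4)
img <- readImage(paths[1])                    ## load the first site/wavelength
img <- normalize(img)                         ## rescale intensities to [0, 1]
mask <- thresh(img, w=15, h=15, offset=0.05)  ## adaptive thresholding
display(mask)                                 ## view the resulting mask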

Currently, you cannot get the internal plate id based on the user-assigned plate name (which is usually different from the barcode). Also, the documentation is non-existent, so you need to explore the package to learn the functions. If there’s interest I’ll put in Rd pages down the line. As a side note, we also have a Java interface to the MetaXpress database that is being used to drive a REST interface, making our imaging data accessible via the web.

Of course, this is all specific to the ImageXpress platform – we have others such as InCell and Acumen. For a comprehensive solution covering all our imaging, I’m looking at the OME infrastructure as a means of, at the very least, having a unified interface to the images and their metadata.

Written by Rajarshi Guha

May 27th, 2011 at 5:01 am

Posted in software, Uncategorized


MIOSS Workshop Wrap Up


For the last few days I’ve been at the EBI, attending the Molecular Informatics Open Source Software (MIOSS) workshop. As part of this trip to the UK, I’ve also had the opportunity to present some of the work my colleagues and I have done at the NCTT – thanks to Mark Forster for the invitation to speak at Syngenta and to John Chambers for having me speak to the ChEMBL group. At the workshop I presented my work on cheminformatics in R.

The focus of the workshop was to bring OSS developers and users from industry and academia/government together to hear about a variety of projects and discuss the issues underlying their development and use. There were some very nice presentations – I won’t go into too much detail, but some highlights for me included:

  • Kevin Lawson (Syngenta) presented his work on LICSS – integrating the CDK with Excel. While I’m not a fan of Excel, it’s a necessary evil. I was quite surprised at the performance he achieved for substructure searches within Excel and the ability to access various CDK functionality as Excel functions. While it probably won’t replace Accord or ChemOffice right now, it’s something to take a look at.
  • Mike Bodkin (Lilly) spoke about the use of KNIME at Lilly. They have built up an extensive collection of commercial and OSS nodes, and it’s clear that KNIME is capable of giving Pipeline Pilot a run for its money. Thorsten Meinl then spoke of the OSS development of KNIME and mentioned that they now support a collection of HCS and image analysis nodes (courtesy of MPI Dresden). This is quite interesting, given that we’re ramping up our HCS capabilities at the NCTT.
  • Hans de Winter of Silicos spoke about the tools and services that their company has produced on top of OpenBabel (and contributed back to the community). Quite encouraging to see a cheminformatics company making money off the OSS stack.
  • Greg Landrum spoke about RDKit, presenting the RDKit-based cartridge for PostgreSQL. He showed some impressive performance numbers, and it was good to see that they had gotten the developers who implemented the GiST indexing mechanism to implement a GiST index for binary fingerprints.

In addition to these, there were talks on OpenBabel, Cinfony, Taverna, fpocket and other projects. While I’ve known about many of these projects, it was useful to learn some of the details from the developers themselves.

A number of issues surrounding OSS development and use were discussed. For example, community development was regarded as a key factor in the success of OSS projects. Erik Lindahl, of GROMACS fame, spoke about the development model of GROMACS and how much of its success has been due to community involvement. Other issues included the importance (and frequent lack) of good documentation, what makes people contribute to OSS, and so on.

It was nice that industry participants made up about 50% of the group, and a number of industry-related issues also arose. For example, there were several discussions of business models based around OSS and how such businesses can feed back into OSS projects. A common thread seemed to be that services and customization of OSS are good approaches to building businesses around the OSS stack, Silicos and Eagle Genomics being two prime examples.

The fact that there are industry users of OSS, as well as industry members contributing back to OSS projects, was very encouraging. An idea supported by a number of participants was some form of web site / wiki where such contributors and users could list themselves (IMO, the Blue Obelisk wiki could be a candidate for this). Sure, there would usually be corporate and legal barriers to this type of thing, but if done it would have a number of benefits – encouragement for project developers, and an easily viewable precedent that would encourage other companies to use or participate in OSS projects, resulting in a positive feedback loop. With various pre-competitive collaboration efforts (e.g., the Pistoia Alliance) popping up in the pharma industry, this is certainly possible.

Finally, it’s always good to meet up with old friends and also meet people whom I’ve only known over email. The social aspects of the workshop were very nice – helped greatly by excellent food and drink! Thanks to Mark for putting together a great meeting.

Written by Rajarshi Guha

May 6th, 2011 at 6:25 pm

Posted in cheminformatics
