So much to do, so little time

Trying to squeeze sense out of chemical data

High Content Screens and Multivariate Z’

While contributing to a book chapter on high content screening I came across the problem of characterizing screen quality. In a traditional assay development scenario the Z factor (or Z’) is used as one of the measures of assay performance (using the positive and negative control samples). The definition of Z’ is based on a 1-D readout, which is the case with most non-high content screens. But what happens when we have to deal with 10 or 20 readouts, which can commonly occur in a high content screen?
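For concreteness, the usual univariate Z’ can be computed directly from the positive and negative control readouts; a minimal R sketch (the function and argument names here are mine, not from any particular package):

## Z' from positive and negative control readouts: 1 - 3*(sd.pos + sd.neg)/|mean.pos - mean.neg|
## 'pos' and 'neg' are numeric vectors of control values for a single readout
zprime <- function(pos, neg) {
  1 - 3 * (sd(pos) + sd(neg)) / abs(mean(pos) - mean(neg))
}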

Assuming one has identified a small set of biologically relevant phenotypic parameters (from the tens or hundreds spit out by HCA software), it makes sense to measure assay performance in terms of the overall biology, rather than one specific aspect of it. In other words, a useful performance measure should be able to take multiple (preferably orthogonal) readouts into account. In fact, in many high content screening assays, using the traditional Z’ with a single readout leads to very low values, suggesting a poor quality assay when that is not the case if one considers the overall biology.

One approach that has been described in the literature is an extension of the Z’, termed the multivariate Z’. It was first described by Kummel et al, who develop an LDA model trained on the positive and negative control wells. Each well is described by N phenotypic parameters, and the assumption is that one has pre-selected these parameters to be meaningful and relevant. The key to using the model for a Z’ calculation is to replace the N-dimensional values for a given well by the 1-dimensional linear projection of that well:

\(P_i = \sum_{j=1}^{N} w_j x_{ij} \)

where \(P_i\) is the 1-D projected value, \(w_j\) is the weight for the \(j\)’th phenotypic parameter and \(x_{ij}\) is the value of the \(j\)’th parameter for the \(i\)’th well.

The projected value is then used in the Z’ calculation as usual. Kummel et al showed that this approach leads to better (i.e., higher) Z’ values compared to the use of univariate Z’. Subsequently, Kozak & Csucs extended this approach and used a kernel method to project the N-dimensional well values in a non-linear manner. Unsurprisingly, they show a better Z’ than what would be obtained via a linear projection.
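A rough R sketch of this type of calculation, using MASS::lda for the projection and the zprime helper above (the data layout and names are assumptions for illustration, not the implementation from either paper):

library(MASS)

## 'controls' is a data.frame of the N phenotypic parameters for the control wells and
## 'label' a factor with levels 'neg'/'pos' marking them (assumed layout)
multivariate.zprime <- function(controls, label) {
  fit <- lda(controls, grouping = label)
  proj <- predict(fit, controls)$x[, 1]   ## the 1-D projection P_i for each well
  zprime(proj[label == 'pos'], proj[label == 'neg'])
}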

And this is where I have my beef with these methods. In fact, a number of beefs:

  • These methods are model based and so can suffer from over-fitting. No checks were made, and if over-fitting were to occur one would obtain a falsely optimistic Z’ (a simple hold-out check is sketched after this list)
  • These methods assert success when they perform better than a univariate Z’ or when a non-linear projection does better than a linear projection. But neither comparison is a true indication that they have captured the assay performance in an absolute sense. In other words, what is the “ground truth” that one should be aiming for, when developing multivariate Z’ methods? Given that the upper bound of Z’ is 1.0, one can imagine developing methods that give you increasing Z’ values – but does a method that gives Z’ close to 1 really mean a better assay?  It seems that published efforts are measured relative to other implementations and not necessarily to an actual assay quality (however that is characterized).
  • While the fundamental idea of separation of positive and negative control responses as a measure of assay performance is good, methods that are based on learning this separation are at risk of generating overly optimistic assessments of performance.
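Continuing with the made-up names from the sketch above, the over-fitting concern can at least be probed by fitting the LDA projection on one subset of the control wells and computing Z’ only on the held-out wells:

## Fit the projection on a random subset of control wells, evaluate Z' on the rest,
## and repeat; a held-out Z' much lower than the full-data Z' points to over-fitting
cv.zprime <- function(controls, label, frac = 0.5, nrep = 20) {
  replicate(nrep, {
    test <- sample(nrow(controls), size = round(frac * nrow(controls)))
    fit  <- lda(controls[-test, ], grouping = label[-test])
    proj <- predict(fit, controls[test, ])$x[, 1]
    zprime(proj[label[test] == 'pos'], proj[label[test] == 'neg'])
  })
}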

A counter-example

As an example, I looked at a recent high content siRNA screen we ran that had 104 parameters associated with it. The first figure shows the Z’ calculated using each layer individually (excluding layers with abnormally low Z’).

As you can see, the highest Z’ is about 0.2. After removing parameters with no variation and one member of each correlated pair, I ended up with a set of 15 phenotypic parameters. If we compare the per-parameter distributions of the positive and negative control responses, we see very poor separation in all layers but one, as shown in the density plots below (the scales are all independent).

I then used these 15 parameters to build an LDA model and obtain a multivariate Z’ as described by Kummel et al. The multivariate Z’ turns out to be 0.68, suggesting a well-performing assay. I also performed MDS on the 15-parameter set to get lower dimensional (3D, 4D, 5D, 6D, etc.) datasets and performed the same calculation, leading to similar Z’ values (0.41 – 0.58).
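The MDS step is straightforward with classical scaling in R; a sketch along these lines, reusing the made-up names from the earlier code (params standing in for the 15-parameter control-well matrix):

## Classical MDS to k dimensions, followed by the same LDA-based Z'
d <- dist(scale(params))
for (k in 3:6) {
  coords <-, k = k))
  cat(k, 'dimensions: Z\' =', multivariate.zprime(coords, label), '\n')
}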

But in fact, from the biological point of view, the assay performance was quite poor due to the poor performance of the positive control (we haven’t found a good one yet). In practice then, the model-based multivariate Z’ (at least as described by Kummel et al) can be misleading. One could argue that I had not chosen an appropriate set of phenotypic parameters – but I checked a variety of other subsets (though not exhaustively) and got similar Z’ values.


Of course, it’s easy to complain, and while I haven’t worked out a rigorous alternative, the idea of describing the distance between the multivariate distributions of the positive and negative controls as a measure of assay performance (as opposed to learning the separation) allows us to attack the problem in a variety of ways. There is a nice discussion on StackExchange regarding this exact question, which covers several candidate distance measures.
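As one illustration of the distribution-distance idea (my own sketch, not a method from the papers above), the Bhattacharyya distance between Gaussian approximations of the positive and negative control clouds can be computed directly:

## Bhattacharyya distance between two multivariate normal approximations;
## 'pos' and 'neg' are wells x parameters matrices for the positive and negative controls
bhattacharyya <- function(pos, neg) {
  mp <- colMeans(pos); mn <- colMeans(neg)
  Sp <- cov(pos);      Sn <- cov(neg)
  S  <- (Sp + Sn) / 2
  d  <- mp - mn
  as.numeric(0.125 * t(d) %*% solve(S) %*% d) +
    0.5 * log(det(S) / sqrt(det(Sp) * det(Sn)))
}

Nothing is learned from the data here beyond means and covariances, so there is no projection to over-fit – though one would still have to work out what a “good” value of such a distance looks like.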

It might be useful to perform a more comprehensive investigation of these methods as a way to measure assay performance.

Written by Rajarshi Guha

September 9th, 2012 at 8:03 pm

An ACS in (not so) Sunny San Diego

Another ACS National meeting is over, this time in San Diego. It was good to catch up with old friends and meet many new, interesting people. As I was there for a relatively short period, I bounced around most sessions.

MEDI and COMP had a joint session on desktop modeling and its utility in medicinal chemistry. Anthony Nicholls gave an excellent talk in which he differentiated between “strong signals” and “weak signals” – the former being obvious trends, features or facts that do not require a high degree of specialized expertise to detect, and the latter being those that do require significantly more expertise to identify. An example of a strong signal would be an empty region of a binding pocket that is not occupied by a ligand feature – it’s pretty easy to spot, and when highlighted the possible actions are also obvious. A weak signal could be a pi-stacking interaction that is difficult to identify in a crowded 3D diagram. He then highlighted how simple modifications to traditional 2D depictions can make the obvious more obvious, and make features that might be subtle in 3D more apparent in a 2D depiction. Overall, an elegant talk that focused on how simple visual cues in 2D & pseudo-3D depictions can key the mind to focus on important elements.

There were two other symposia of particular interest. On Sunday, Shuxing Zhang and Sean Ekins organized a symposium on polypharmacology with an excellent line-up of speakers including Chris Lipinski. Curt Breneman gave a nice talk highlighting best practices in QSAR modeling, and Marti Head gave a great talk on the role and value of docking in computational modeling projects.

On Tuesday, Jan Kuras and Tudor Oprea organized a session on Systems Chemical Biology. Though the session seemed to be more along the lines of drug repurposing, there were several interesting talks. Elebeoba May from Sandia Labs gave a very interesting talk on a system-level model of small molecule inhibition of M. tuberculosis and F. tularensis, combining metabolic pathway models and cheminformatics.

John Overington gave a very interesting talk on identifying drug combinations to improve safety. Contrary to much of my reading in this area, he pointed out the value of “me-too” drugs and of taking combinations of such drugs. Since such drugs hit the same target, off-targets will see reduced concentrations of the individual drugs (hopefully reducing side effects) while the on-target will see the pooled concentration (thus maintaining efficacy (?)). It’s definitely a contrasting view to the one where we identify combinations of drugs hitting different targets (which I’d guess is a tougher proposition, since identifying a truly synergistic combination requires detailed knowledge of the underlying pathways and interactions). He also pointed out that his analyses indicate that the doses in such combinations are not actually reduced, in contrast to the current dogma.

As before, we had a CINFlash session which I think went quite well – 8 diverse speakers and a pretty good audience. The slides of the talks have been made available, and we plan to have another session in Philadelphia this Fall, so consider submitting something. We also had a great Scholarships for Scientific Excellence poster session – 15 posters covering topics ranging from reaction prediction to an analysis of retractions. Excellent work, and very encouraging to see newcomers to CINF interested in getting more involved.

The only downsides to the meeting were the chilly, unsunny weather and the fact that people still think that displaying tables of numbers on a slide actually transmits any information!

Written by Rajarshi Guha

April 6th, 2012 at 2:39 am

Cheminformatics and Clam Chowder

The time has come to move again – though, in this case, it’s just a geographic move. From August I’ll be living in Manchester, CT (great cheeseburgers and lovely cycle routes) and will continue to work remotely for NCGC. I’ll be travelling to DC every month or so. The rest of the time I’ll be working from Connecticut.

Being new to the area, it’d be great to meet up over a beer, with people in the surrounding areas (NY/CT/RI) doing cheminformatics, predictive modeling and other life science related topics (any R user groups in the area?). If anybody’s interested, drop me a line (comment, mail or @rguha).

Written by Rajarshi Guha

July 25th, 2011 at 2:35 am

Accessing High Content Data from R

Over the last few months I’ve been getting involved in the informatics & data mining aspects of high content screening. While I haven’t gotten into image analysis itself (there’s a ton of good code and tools already out there), I’ve been focusing on managing image data and meta-data and asking interesting questions of the voluminous, high-dimensional data that is generated by these techniques.

One of our platforms is ImageXpress from Molecular Devices, which stores images in a file-based image store and meta-data and numerical image features in an Oracle database. While they do provide an API to interact with the database, it’s a Windows-only DLL. Since much of my modeling requires that I access the data from R, I needed a more flexible solution.

So, I’ve put together an R package that allows one to access numeric image data (i.e., descriptors) and the images themselves. It depends on the ROracle package (which in turn requires an Oracle client installation).

Currently the functionality is relatively limited, focusing on my common tasks. Thus for example, given assay plate barcodes, we can retrieve the assay ids that the plate is associated with and then for a given assay, obtain the cell-level image parameter data (or optionally, aggregate it to well-level data). This task is easily parallelizable – in fact when processing a high content RNAi screen, I make use of snow to speed up the data access and processing of 50 plates.

con <- get.connection(user='foo', passwd='bar', sid='baz')
plate.barcode <- 'XYZ1023' <- get.plates(con, plate.barcode)

## multiple analyses could be run on the same plate - we need
## to get the correct one (MX uses 'assay' to refer to an analysis run)
## so we first get details of analyses without retrieving the actual data
details <- get.assay(con,, barcode=plate.barcode, dry=TRUE)
details <- subset(details, PLATE_ID == & SETTINGS_NAME == 'mysettings') <- details$ASSAY_ID

## finally, get the analysis data, using median to aggregate cell-level data <-  get.assay(con,, aggregate.func=median, verbose=FALSE, na.rm=TRUE)

Alternatively, given a plate id (this is the internal MetaXpress plate id) and a well location, one can obtain the path to the relevant image(s). With the images in hand, you could use EBImage to perform image processing entirely in R.

## will want to set IMG.STORE.LOC to point to your image store
con <- get.connection(user='foo', passwd='bar', sid='baz')
plate.barcode <- 'XYZ1023' <- get.plates(con, plate.barcode)
get.image.path(con,, 4, 4) ## get images for all sites & wavelengths
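With the paths in hand, EBImage can take over; a minimal sketch, assuming get.image.path returns a vector of file paths:

library(EBImage)
paths <- get.image.path(con,, 4, 4)
img <- readImage(paths[1])       ## read the first site/wavelength returned
display(normalize(img))          ## quick visual sanity check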

Currently, you cannot get the internal plate id based on the user-assigned plate name (which is usually different from the barcode). Also, the documentation is non-existent, so you need to explore the package to learn the functions. If there’s interest I’ll put in Rd pages down the line. As a side note, we also have a Java interface to the MetaXpress database that is being used to drive a REST interface to make our imaging data accessible via the web.

Of course, this is all specific to the ImageXpress platform – we have others such as InCell and Acumen. To have a comprehensive solution for all our imaging, I’m looking at the OME infrastructure as a means of, at the very least, having a unified interface to the images and their meta-data.

Written by Rajarshi Guha

May 27th, 2011 at 5:01 am

ICCS 2011

A few openings are left for the International Conference on Chemical Structures (ICCS)

A little less than 40 days left until the 9th International Conference on Chemical Structures (ICCS) starts in Noordwijkerhout, The Netherlands. The conference will focus on the latest scientific and technological developments in cheminformatics and related areas in six plenary sessions:

o Cheminformatics
o Structure-Activity and Structure-Property Prediction
o Structure-Based Drug Design and Virtual Screening
o Analysis of Large Chemistry Spaces
o Integrated Chemical Information
o Dealing with Biological Complexity

34 scientific lectures and 80 posters in two poster sessions will present applications and case studies as well as method development and algorithmic work in these areas. The program will open with a presentation by Engelbert Zass, ETH Zürich, who has been awarded the CSA Trust Mike Lynch Award on the occasion of the 9th ICCS. We invite you to have a look at the scientific program, which is now available on the website.

In addition to the scientific program there will be a commercial exhibition with 16 leading cheminformatics software suppliers. The participation of scientists from more than 20 countries will make this a truly international event with ample opportunities to network and discuss science.

Free workshops will be offered before and after the official conference program by BioSolveIT, the Chemical Computing Group, Tripos, and Accelrys.

On Wednesday afternoon there is a sailing cruise on the IJsselmeer on two traditional sailing boats. They will leave from the scenic Muiderslot castle, and then sail to the picturesque fishing village Volendam where the old village can be explored. A banquet dinner will be served on the boats on the way back.

If you are planning to attend, we encourage you to register as soon as possible through the conference web site:

We are looking forward to meeting with you all in Noordwijkerhout.

Keith T Taylor, ICCS Chair
Markus Wagener, ICCS Chair

Written by Rajarshi Guha

April 27th, 2011 at 12:53 pm
