# So much to do, so little time

Trying to squeeze sense out of chemical data

## High Content Screens and Multivariate Z’

While contributing to a book chapter on high content screening I came across the problem of characterizing screen quality. In a traditional assay development scenario the Z factor (or Z’) is used as one of the measures of assay performance, computed from the positive and negative control samples. The definition of Z’ is based on a 1-D readout, which is the case with most non-high content screens. But what happens when we have to deal with 10 or 20 readouts, as commonly occurs in a high content screen?
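
For concreteness, Z’ is computed from the means and standard deviations of the control readouts. A minimal sketch in Python (the function name and use of NumPy are my own, not from any particular screening package):

```python
import numpy as np

def zprime(pos, neg):
    """Classical 1-D Z' factor from positive and negative control readouts."""
    pos = np.asarray(pos, dtype=float)
    neg = np.asarray(neg, dtype=float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
```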

Assuming one has identified a small set of biologically relevant phenotypic parameters (from the tens or hundreds spit out by HCA software), it makes sense to measure assay performance in terms of the overall biology, rather than one specific aspect of it. In other words, a useful performance measure should be able to take into account multiple (preferably orthogonal) readouts. In fact, in many high content screening assays, the use of the traditional Z’ with a single readout leads to very low values, suggesting a poor quality assay, when that is not the case if one considers the overall biology.

One approach that has been described in the literature is an extension of the Z’, termed the multivariate Z’. The approach was first described by Kummel et al, who develop an LDA model trained on the positive and negative control wells. Each well is described by N phenotypic parameters, and the assumption is that one has pre-selected these parameters to be meaningful and relevant. The key to using the model for a Z’ calculation is to replace the N-dimensional values for a given well by the 1-dimensional linear projection of that well:

$P_i = \sum_{j=1}^{N} w_j x_{ij}$

where $P_i$ is the 1-D projected value, $w_j$ is the weight for the $j$-th phenotypic parameter and $x_{ij}$ is the value of the $j$-th parameter for the $i$-th well.

The projected value is then used in the Z’ calculation as usual. Kummel et al showed that this approach leads to better (i.e., higher) Z’ values compared to the use of univariate Z’. Subsequently, Kozak & Csucs extended this approach and used a kernel method to project the N-dimensional well values in a non-linear manner. Unsurprisingly, they show a better Z’ than what would be obtained via a linear projection.
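
As I read it, the procedure is simply: fit LDA on the control wells, project every well onto the single discriminant axis, and feed those 1-D values into the usual Z’ formula. A minimal sketch using scikit-learn (the function names are mine; zprime is the hypothetical helper sketched above, and wells are assumed to be rows with phenotypic parameters as columns):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def multivariate_zprime(pos_wells, neg_wells):
    """LDA-projected Z' in the spirit of Kummel et al: project the N-dimensional
    control wells onto the 1-D discriminant axis, then compute the usual Z'."""
    X = np.vstack([pos_wells, neg_wells])
    y = np.array([1] * len(pos_wells) + [0] * len(neg_wells))
    lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
    proj = lda.transform(X).ravel()  # P_i = sum_j w_j x_ij (up to centering)
    # zprime() as defined in the earlier sketch
    return zprime(proj[y == 1], proj[y == 0])
```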

And this is where I have my beef with these methods. In fact, a number of beefs:

• These methods are model based and so can suffer from over-fitting. No checks for this were made, and if over-fitting were to occur one would obtain a falsely optimistic Z’ (a simple cross-validation check is sketched after this list).
• These methods assert success when they perform better than a univariate Z’ or when a non-linear projection does better than a linear projection. But neither comparison is a true indication that they have captured the assay performance in an absolute sense. In other words, what is the “ground truth” that one should be aiming for, when developing multivariate Z’ methods? Given that the upper bound of Z’ is 1.0, one can imagine developing methods that give you increasing Z’ values – but does a method that gives Z’ close to 1 really mean a better assay?  It seems that published efforts are measured relative to other implementations and not necessarily to an actual assay quality (however that is characterized).
• While the fundamental idea of using the separation of positive and negative control responses as a measure of assay performance is good, methods that are based on learning this separation are at risk of generating overly optimistic assessments of performance.
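
On the first point, the check I would want to see is simple: fit the projection on a subset of the control wells and compute Z’ on held-out wells. A hedged sketch, again reusing the hypothetical zprime helper from above:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cross_validated_zprime(pos_wells, neg_wells, n_splits=5, seed=42):
    """Fit the LDA projection on training folds and compute Z' on the held-out
    wells; a large gap versus the resubstitution Z' suggests over-fitting."""
    X = np.vstack([pos_wells, neg_wells])
    y = np.array([1] * len(pos_wells) + [0] * len(neg_wells))
    zs = []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in cv.split(X, y):
        lda = LinearDiscriminantAnalysis(n_components=1).fit(X[train], y[train])
        proj = lda.transform(X[test]).ravel()
        # zprime() as defined in the earlier sketch
        zs.append(zprime(proj[y[test] == 1], proj[y[test] == 0]))
    return float(np.mean(zs))
```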

## A counter-example

As an example, I looked at a recent high content siRNA screen we ran that had 104 parameters associated with it. The first figure shows the Z’ calculated using each layer individually (excluding layers with abnormally low Z’).

As you can see, the highest Z’ is about 0.2. After removing parameters with no variation and members of correlated pairs, I ended up with a set of 15 phenotypic parameters. If we compare the per-parameter distributions of the positive and negative control responses, we see very poor separation in all layers but one, as shown in the density plots below (the scales are all independent).
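
The filtering itself was nothing fancy; a rough sketch of the sort of variance and correlation filter I mean (the thresholds are arbitrary, and the data frame layout – wells as rows, parameters as columns – is an assumption):

```python
import pandas as pd

def filter_parameters(df, corr_cutoff=0.9, var_eps=1e-8):
    """Drop phenotypic parameters with (near) zero variance, then drop one
    member of each highly correlated pair among the remaining parameters."""
    df = df.loc[:, df.std() > var_eps]      # remove no-variation parameters
    corr = df.corr().abs()
    cols = list(corr.columns)
    drop = set()
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > corr_cutoff:
                drop.add(cols[j])           # keep the first member of the pair
    return df.drop(columns=sorted(drop))
```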

I then used these 15 parameters to build an LDA model and obtain a multivariate Z’ as described by Kummel et al. Now, the multivariate Z’ turns out to be 0.68, suggesting a well performing assay. I also performed MDS on the 15 parameter set to get lower dimensional (3D, 4D, 5D, 6D, etc.) datasets and performed the same calculation, leading to similar Z’ values (0.41 – 0.58).
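
For the MDS variant I simply embedded the control wells in a lower-dimensional space and repeated the projected-Z’ calculation; a minimal sketch (again reusing the hypothetical multivariate_zprime from above):

```python
import numpy as np
from sklearn.manifold import MDS

def mds_zprime(pos_wells, neg_wells, n_components=3, seed=42):
    """Embed the control wells with MDS, then compute the LDA-projected Z'
    on the lower-dimensional coordinates."""
    X = np.vstack([pos_wells, neg_wells])
    n_pos = len(pos_wells)
    emb = MDS(n_components=n_components, random_state=seed).fit_transform(X)
    # multivariate_zprime() as defined in the earlier sketch
    return multivariate_zprime(emb[:n_pos], emb[n_pos:])
```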

But in fact, from the biological point of view, the assay performance was quite poor due to poor performance of the positive control (we haven’t found a good one yet). In practice then, the model based multivariate Z’ (at least as described by Kummel et al) can be misleading. One could argue that I had not chosen an appropriate set of phenotypic parameters – but I checked a variety of other subsets (though not exhaustively) and got similar Z’ values.

## Alternatives

Of course, it’s easy to complain, and while I haven’t worked out a rigorous alternative, the idea of describing the distance between multivariate distributions as a measure of assay performance (as opposed to learning the separation) allows us to attack the problem in a variety of ways. There is a nice discussion on StackExchange regarding this exact question, which suggests a number of possibilities; one such possibility is sketched below.
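
As one concrete (and purely illustrative) possibility, one could report the distance between the two control distributions directly, with no learned projection at all – for example a Mahalanobis-type distance based on the pooled covariance of the controls:

```python
import numpy as np

def control_separation(pos_wells, neg_wells):
    """Mahalanobis-type distance between the positive and negative control
    distributions, using the pooled covariance of the two groups."""
    pos = np.asarray(pos_wells, dtype=float)
    neg = np.asarray(neg_wells, dtype=float)
    diff = pos.mean(axis=0) - neg.mean(axis=0)
    pooled = ((len(pos) - 1) * np.cov(pos, rowvar=False) +
              (len(neg) - 1) * np.cov(neg, rowvar=False)) / (len(pos) + len(neg) - 2)
    return float(np.sqrt(diff @ np.linalg.solve(pooled, diff)))
```

Unlike Z’ this has no natural upper bound, so it would need to be calibrated against assays of known quality – but it measures separation without fitting a classifier to the very data being assessed.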

It might be useful to perform a more comprehensive investigation of these methods as a way to measure assay performance.

Written by Rajarshi Guha

September 9th, 2012 at 8:03 pm

## Lots of Pretty Pictures

Yesterday I attended the High Content Analysis conference in San Francisco. Over the last few months I’ve been increasingly involved in the analysis of high content screens, both for small molecules and siRNA. This conference gave me the opportunity to meet people working in the field as well as present some of our recent work on an automated screening methodology integrating primary and secondary screens into a single workflow.

The panel discussion was interesting, though I was surprised that standards were a major issue. Data management and access is certainly a major problem in this field, given that a single screen can generate TBs of image data plus millions to even billions of rows of cell-level data. The cloud did come up, but I’m not sure how smooth a workflow involving cloud operations would be.

Some of the talks were interesting, such as the presentation on OME by Jason Swedlow. The talk that really caught my eye was by Ilya Goldberg on their work with WND-CHARM. In contrast to traditional analysis of high content screens, which involves cell segmentation and subsequent object identification, he tackles the problem by considering the image itself as the object. Thus, rather than evaluate phenotypic descriptors for individual cells, he evaluates descriptors such as textures, Haralick features, etc., for an entire image of a well. With these descriptors he then develops classification models using LDA – which does surprisingly well (in that SVMs don’t do a whole lot better!). The approach is certainly attractive as image segmentation can get quite hairy. At the same time, the method requires pretty good performance on the control wells. Currently, I’ve been following the traditional HCA workflow – which has worked quite well in terms of classification performance. However, this technique is certainly one to look into, as it could avoid some of the subjectivity involved in segmentation based workflows.
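
To make the idea concrete, here is a very rough sketch of the flavor of that approach – this is emphatically not WND-CHARM itself; I’m using scikit-image’s grey-level co-occurrence (Haralick-style) texture properties and scikit-learn’s LDA as stand-ins for the real feature set and classifier:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def image_features(img):
    """Whole-image texture descriptors from a grey-level co-occurrence matrix
    (a crude stand-in for the WND-CHARM feature set); img is a 2-D uint8 array."""
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def classify_wells(well_images, labels):
    """Build an LDA classifier on per-well (not per-cell) image descriptors."""
    X = np.array([image_features(im) for im in well_images])
    return LinearDiscriminantAnalysis().fit(X, labels)
```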

As always, San Francisco is a wonderful place – weather, food and feel. Due to my short stay I could only sample one restaurant – a tapas bar called Lalola. A fantastic place with a mind blowing mushroom tapas and the best sangria I’ve had so far. Highly recommended.

Written by Rajarshi Guha

January 13th, 2011 at 5:51 pm

Posted in software, visualization
