So much to do, so little time

Trying to squeeze sense out of chemical data

Competitive Predictive Modeling – How Useful is it?

with 6 comments

While at the ACS National Meeting in Philadelphia I attended a talk by David Thompson of Boehringer Ingelheim (BI), where he spoke about a recent competition BI sponsored on Kaggle – a website that hosts data mining competitions. In this instance, BI provided a dataset that contained only object identifiers, about 1700 numerical features, and a binary dependent variable. The contest was open to anybody, and whoever built the best classification model (as measured by log loss) was selected as the winner. You can read more in the description of the competition and in David's slides.
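
For reference, the log loss used to rank entries is simple to compute. The snippet below is a minimal sketch of the binary form in R, not the exact evaluation code used by Kaggle:

# Binary log loss (cross-entropy); a minimal sketch, not Kaggle's evaluation code
logloss <- function(actual, predicted, eps = 1e-15) {
  p <- pmin(pmax(predicted, eps), 1 - eps)  # clip probabilities to avoid log(0)
  -mean(actual * log(p) + (1 - actual) * log(1 - p))
}
logloss(c(1, 0, 1, 1), c(0.9, 0.2, 0.8, 0.6))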

But I’m curious about the utility of such a competition. During the competition, all the contestants had access to were the numerical features, so they had no idea of the domain the data came from – placing the onus on pure modeling ability, with no need for domain knowledge. In fact, the dataset provided to them, as announced by David at the ACS, was the Hansen Ames mutagenicity dataset characterized using a collection of 2D descriptors (continuous topological descriptors as well as binary fingerprints).

BI included some “default” models, and the top submissions certainly performed better (10% for the winning model). This is not surprising, as BI did not attempt to build optimized models. But we also see that the top 5 models differed only incrementally in their log loss values. Thus any one of the top 3 or 4 models could be regarded as a winner in terms of actual predictions.

What I’d really like to know is how well such an approach leads to better chemistry or biology. First, it’s clear that such an approach optimizes pure predictive performance and cannot provide insight into why the model makes an active or inactive call. In many scenarios this is sufficient, but more often than not, domain-specific diagnostics are invaluable. Second, how does the relative increase in model performance lead to better decision making? Granted, the crowd-sourced, gamified approach is a nice way to eke out the last bits of predictive performance on a dataset – but does it really matter that one model performs 1% better than the next best model? The fact that the winning model was 10% better than the “default” BI model is not too informative. So a specific question I have is: was there a benefit, in terms of model performance and downstream decision making, in asking the crowd for a better model, compared to what BI had developed using (implicit or explicit) chemical knowledge?

My motivation is to try and understand whether the winning model was an incremental improvement or a significant jump, not just in terms of numerical performance, but in terms of the predicted chemistry/biology. People have been making noises about how data trumps knowledge (or rather hypotheses and models), and I believe that in some cases this can be true. But I also wonder to what extent this holds for chemical data mining.

But it’s equally important to understand what such a model is to be used for. In a virtual screening scenario, one could probably ignore interpretability and go for pure predictive performance. In such cases, for increasingly large libraries, it might make sense to have a model that is 1% better than the state of the art. (In fact, there was a very interesting talk by Nigel Duffy of Numerate, where he spoke about a closed-form, analytical expression for the hit rate in a virtual screen, which indicates that for improvements in the overall performance of a VS workflow, the best investment is to increase the accuracy of the predictive model. Indeed, his results seem to indicate that even incremental improvements in model accuracy lead to a decent boost in the hit rate.)

I want to stress that I’m not claiming that BI (or any other organization involved in this type of activity) has the absolute best models and that nobody can do better. I firmly believe that however good you are at something, there’s likely to be someone better at it (after all, there are 6 billion people in the world). But I’d also like to know how incrementally better models fare when put to the test of real, prospective predictions.

Written by Rajarshi Guha

August 22nd, 2012 at 9:02 pm

Chunking lists in R

without comments

A common task for me is to run database queries on gene symbols or compound identifiers. This involves constructing an SQL query as a string and sending it off to the database. In the case of the ROracle package, query strings are limited to 1000 (?) or so characters, which means that directly querying for a thousand identifiers won’t work. And going through the list of identifiers one at a time is inefficient. What we need in this situation is a way to “chunk” the list (or vector) of identifiers and work on individual chunks. With the help of the itertools package, this is very easy:

library(itertools)

n <- 1:11        # vector of identifiers to be chunked
chunk.size <- 3  # maximum number of elements per chunk
it <- ihasNext(ichunk(n, chunk.size))
while (hasNext(it)) {
  achunk <- unlist(nextElem(it))  # each chunk comes back as a list, so flatten it
  print(achunk)
}
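
For completeness, here is a sketch of how the chunks might be fed back into SQL queries. The table and column names (compound, cmpd_id) are hypothetical placeholders, and con is assumed to be an open ROracle/DBI connection created elsewhere (e.g. via dbConnect):

library(itertools)

ids <- c("CHEMBL25", "CHEMBL192", "CHEMBL521", "CHEMBL1201")  # example identifiers
it <- ihasNext(ichunk(ids, 2))
results <- list()
while (hasNext(it)) {
  achunk <- unlist(nextElem(it))
  idlist <- paste(sprintf("'%s'", achunk), collapse = ", ")   # quoted, comma-separated IDs
  sql <- sprintf("SELECT * FROM compound WHERE cmpd_id IN (%s)", idlist)
  results[[length(results) + 1]] <- dbGetQuery(con, sql)      # con: an open DB connection
}
result <- do.call(rbind, results)  # combine the per-chunk result sets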

Written by Rajarshi Guha

July 5th, 2012 at 2:22 pm

Posted in software


Software for the “Federation of Independent Scientists”

without comments

A few days back, Derek Lowe posted a comment from a reader who suggested that a way to approach the current employment challenges in the pharmaceutical industry would be the formation of a Federation of Independent Scientists. Such a federation would be open to consultants, small companies, etc., and would use its size to obtain group rates on various things – journal access, health insurance and so on. Obviously, there are a lot of details left out here, and when you get into the nitty gritty a lot of issues arise that don’t have simple answers. Nevertheless, an interesting (and welcome, as evidenced by the comment thread) idea.

One aspect raised by a commenter was access to modeling and docking software by such a group. He mentioned that he’d

… like to see an open source initiative develop a free, open source drug discovery package. Why not, all the underlying force fields and QM models have been published … it would just take a team of dedicated programmers and computational chemists time and passion to create it.

This is the very essence of the Blue Obelisk movement, under whose umbrella there is now a wide variety of computational chemistry and cheminformatics software. There’s certainly no lack of passion in the Open Source chemistry software community. As most of it is based on volunteer effort, time is always an issue. This has a direct effect on the features provided by Open Source chemistry software – such software does not always match up to commercial tools. But as the commenter above pointed out, many of the algorithms underlying proprietary software have been published. It just needs somebody with the time and expertise to implement them. And the combination of those two (in the absence of funding) is not always easy to find.

Of course, having access to the software is just one step. A scientist requires (possibly significant) hardware resources to run the software. Another comment raised this issue and asked about the possibility of a cloud-based install of comp chem software.

With regards the sophisticated modelling tools – do they have to be locally installed?

How do the big pharma companies deploy the software now? I would be very surprised if it wasn’t easily packaged, although I guess the number of people using it is limited.

I’m thinking of some kind of virtual server, or remote desktop style operation. Your individual contractor can connect from wherever, and have full access to a range of tools, then transfer their data back to their own location for safekeeping.

Unlike CloudBioLinux, which provides a collection of bioinformatics and structural biology software as a prepackaged AMI for Amazon’s EC2 platform, I’m not aware of a similarly prepackaged set of Open Source tools for chemistry – and certainly not one based on the cloud. (There are some companies that host comp chem software in the cloud and provide access to these installations for a fee.) While some Linux distributions do package a number of scientific tools (UbuntuScience, for example), I don’t think these would support a computational drug discovery operation. (The above comment doesn’t necessarily focus just on Open Source software. One could consider commercial software hosted on remote servers, though I wonder what type of licensing would be involved.)

The last component would be the issue of data, primarily for cloud-based solutions. While compute cycles on such platforms are usually cheap, bandwidth can be expensive. Granted, chemical data is not as big as biological data (cf. 1000Genomes on AWS), but sending a large collection of conformers over the network may not be very cost-effective. One way to bypass this would be to generate “standard” conformer collections and other such libraries and host them on the cloud. But what is “standard”, and who would pay for hosting costs, are open questions.

But I do think there is a sufficiently rich ecosystem of Open Source software that could serve much of the computational needs of a “Federation of Independent Scientists”. It’d be interesting to put together a list of Open Source tools based on requirements from the commenters in that thread.

Written by Rajarshi Guha

April 14th, 2012 at 9:23 pm

I’d Rather Be … Reverse Engineering

with 3 comments

Gamification is a hot topic, and companies such as TunedIT and Kaggle are successfully hosting a variety of data mining competitions. These competitions employ data from a variety of domains, such as bond trading, essay scoring and so on. Recently, both platforms have hosted a QSAR challenge (though not officially denoted as such). The most recent one is the challenge hosted at Kaggle by Boehringer Ingelheim.

While it’s good to see these competitions raise the profile of “data science” (and make some money for the winners), I must admit that they are not particularly interesting to me, as they really boil down to looking at numbers with no context (aka domain knowledge). For example, in the Kaggle & BI example, there are 1,776 descriptors that have been normalized but no indication of the chemistry or biology. One could ask whether a certain mechanism of action is known to play a role in the biology being tested, which could suggest a certain class of descriptors over another. Alternatively, one could ask whether there are a few distinct chemotypes present, thus suggesting multiple local models versus a single global model. (I suppose that the supplied descriptors may lend themselves to a clustering, as sketched below, but a scaffold-based approach would be much more direct and chemically intuitive.)
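
To make the chemotype point concrete, the snippet below is a minimal sketch of the sort of quick check one might run: cluster the descriptor matrix and look at cluster sizes as a crude proxy for whether a few distinct groups dominate. The file name and column layout are assumptions, not the actual competition files:

# A crude check for distinct groups in descriptor space; the file name and
# column layout (response in column 1, descriptors after) are assumptions
d <- read.csv("train.csv")
x <- as.matrix(d[, -1])
x <- x[, apply(x, 2, sd) > 0]                    # drop constant columns before scaling
km <- kmeans(scale(x), centers = 5, nstart = 25)
table(km$cluster)                                # a few dominant clusters might argue for local models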

This is not to say that such competitions are useless. On the contrary, lack of domain knowledge doesn’t preclude one from applying sophisticated statistical and machine learning methods to unannotated data and obtaining impressive results. The issue of data versus domain knowledge has been discussed in several places.

In contrast to the currently hosted challenge at Kaggle, an interesting twist would be to try and reverse engineer the structures from their descriptor values. There have been some previous discussions on reverse engineering structures from descriptor data. Obviously, we’re not going to be able to verify our results, but it would be an interesting challenge.

Written by Rajarshi Guha

April 6th, 2012 at 4:16 am

An ACS in (not so) Sunny San Diego

with one comment

Another ACS National meeting is over, this time in San Diego. It was good to catch up with old friends and meet many new, interesting people. As I was there for a relatively short period, I bounced around most sessions.

MEDI and COMP had a joint session on desktop modeling and its utility in medicinal chemistry. Anthony Nicholls gave an excellent talk, where he differentiated between “strong signals” and “weak signals” – the former being extremely obvious trends, features or facts that do not require a high degree of specialized expertise to detect, and the latter being those that do require significantly more expertise to identify. An example of a strong signal would be an empty region of a binding pocket that is not occupied by a ligand feature – it’s pretty easy to spot this, and when highlighted, the possible actions are also obvious. A weak signal could be a pi-stacking interaction, which could be difficult to identify in a crowded 3D diagram. He then highlighted how simple modifications to traditional 2D depictions can be used to make the obvious more obvious, and to make features that might be subtle, say in 3D, more obvious in a 2D depiction. Overall, an elegant talk that focused on how simple visual cues in 2D & pseudo-3D depictions can key the mind to focus on important elements.

There were two other symposia that were of particular interest. On Sunday, Shuxing Zhang and Sean Ekins organized a symposium on polypharmacology with an excellent line-up of speakers including Chris Lipinski. Curt Breneman gave a nice talk that highlighted best practices in QSAR modeling, and Marti Head gave a great talk on the role and value of docking in computational modeling projects.

On Tuesday, Jan Kuras and Tudor Oprea organized a session on Systems Chemical Biology. Though the session appeared to be more along the lines of drug repurposing, there were several interesting talks. Elebeoba May from Sandia Labs gave a very interesting talk on a systems-level model of small molecule inhibition of M. tuberculosis and F. tularensis, combining metabolic pathway models and cheminformatics.

John Overington gave a very interesting talk on identifying drug combinations to improve safety. Contrary to much of my reading in this area, he pointed out the value of “me-too” drugs and of taking combinations of such drugs. Since such drugs hit the same target, off-targets will see reduced concentrations of the individual drugs (hopefully reducing side effects) while the on-target will see the pooled concentration (thus maintaining efficacy (?)). It’s definitely a contrasting view to the one where we identify combinations of drugs hitting different targets (which I’d guess is a tougher proposition, since identifying a truly synergistic combination requires detailed knowledge of the underlying pathways and interactions). He also pointed out that his analyses indicated that combination dosing is not actually reduced, in contrast to current dogma.

As before, we had a CINFlash session, which I think went quite well – 8 diverse speakers with a pretty good audience. The slides of the talks have been made available, and we plan to have another session in Philadelphia this fall, so consider submitting something. We also had a great Scholarships for Scientific Excellence poster session – 15 posters covering topics ranging from reaction prediction to an analysis of retractions. Excellent work, and very encouraging to see newcomers to CINF interested in getting more involved.

The only downsides to the meeting were the chilly, unsunny weather and the fact that people still think that displaying tables of numbers on a slide actually transmits any information!

Written by Rajarshi Guha

April 6th, 2012 at 2:39 am

Posted in Uncategorized
