Archive for the ‘cheminformatics’ tag
A few days back, Derek Lowe posted a comment from a reader who suggested that one way to approach the current employment challenges in the pharmaceutical industry would be the formation of a Federation of Independent Scientists. Such a federation would be open to consultants, small companies etc and would use its size to obtain group rates on various things – journal access, health insurance and so on. Obviously, a lot of details are left out here, and when you get into the nitty gritty a lot of issues arise that don’t have simple answers. Nevertheless, an interesting (and welcome, as evidenced by the comment thread) idea.
One aspect raised by a commenter was access to modeling and docking software by such a group. He mentioned that he’d
… like to see an open source initiative develop a free, open source drug discovery package. Why not, all the underlying force fields and QM models have been published … it would just take a team of dedicated programmers and computational chemists time and passion to create it.
This is the very essence of the Blue Obelisk movement, under whose umbrella there is now a wide variety of computational chemistry and cheminformatics software. There’s certainly no lack of passion in the Open Source chemistry software community. As most of it is based on volunteer effort, time is always an issue. This has a direct effect on the features provided by Open Source chemistry software – such software does not always match up to commercial tools. But as the commenter above pointed out, many of the algorithms underlying proprietary software have been published. It just needs somebody with the time and expertise to implement them. And the combination of these two (in the absence of funding) is not always easy to find.
Of course, having access to the software is just one step. A scientist requires (possibly significant) hardware resources to run the software. Another comment raised this issue and asked about the possibility of a cloud based install of comp chem software.
With regard to the sophisticated modelling tools – do they have to be locally installed?
How do the big pharma companies deploy the software now? I would be very surprised if it wasn’t easily packaged, although I guess the number of people using it is limited.
I’m thinking of some kind of virtual server, or remote desktop style operation. Your individual contractor can connect from wherever, and have full access to a range of tools, then transfer their data back to their own location for safekeeping.
Unlike CloudBioLinux, which provides a collection of bioinformatics and structural biology software as a prepackaged AMI for Amazon’s EC2 platform, I’m not aware of a similarly prepackaged set of Open Source tools for chemistry – and certainly not one based on the cloud. (There are some companies that host comp chem software on the cloud and provide access to these installations for a fee.) While some Linux distributions do package a number of scientific packages (UbuntuScience, for example), I don’t think these would support a computational drug discovery operation. (The above comment doesn’t necessarily focus just on Open Source software. One could consider commercial software hosted on remote servers, though I wonder what type of licensing would be involved.)
The last component would be the issue of data, primarily for cloud based solutions. While compute cycles on such platforms are usually cheap, bandwidth can be expensive. Granted, chemical data is not as big as biological data (cf. 1000Genomes on AWS), but sending a large collection of conformers over the network may not be very cost-effective. One way to bypass this would be to generate “standard” conformer collections and other such libraries and host them on the cloud. But what counts as “standard”, and who would pay the hosting costs, are open questions.
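To make the bandwidth point concrete, here’s a back-of-the-envelope estimate in R. All the numbers (library size, conformer size, transfer price) are illustrative assumptions on my part, not actual AWS figures:

```r
## Back-of-the-envelope transfer cost - all numbers are assumptions
n.mols <- 1e6        # molecules in the library
n.confs <- 50        # conformers per molecule
kb.per.conf <- 5     # assumed size of one conformer record (KB)

total.gb <- n.mols * n.confs * kb.per.conf / 1024^2
total.gb              # ~238 GB for the full collection
total.gb * 0.10       # ~$24 at an assumed $0.10/GB transfer price
```

Even with generous assumptions, repeatedly shipping such a collection back and forth adds up, which is what makes hosting a shared copy on the cloud side attractive.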
But I do think there is a sufficiently rich ecosystem of Open Source software that could serve much of the computational needs of a “Federation of Independent Scientists”. It’d be interesting to put together a list of Open Source tools based on the requirements raised by the commenters in that thread.
Aaron recently posted a review of the Handbook of Chemoinformatics Algorithms (HCA), noting that
… my goal for the project changed from just a review of a book, to an attempt to build a bridge between theoretical computer science and computational chemistry …
The review/bridging was a pretty thorough summary of the book, but the blog post as well as the comments raised a number of interesting issues that I think are worth discussing. Aaron notes
… Unlike the field of bioinformatics, which enjoys a rich academic literature going back many years, HCA is the first book of its kind …
While HCA may be the first compilation of cheminformatics-related algorithms in a single place, cheminformatics actually has a pretty long lineage, going back to the 1960’s. Examples include canonicalization (Morgan, 1965) and ring perception (Hendrickson, 1961). See here for a short history of cheminformatics. Granted, these papers appeared in chemistry rather than CS journals, but that doesn’t mean that cheminformatics is a new field. Bioinformatics seems to have a similar lineage (see this Biostar thread), with some seminal papers from the 1960’s (Dayhoff et al, 1962). Interestingly, much of the most-cited literature in bioinformatics (alignments etc.) comes from the 90’s.
Aaron then goes on to note that “there does not appear to be an overarching mathematical theory for any of the application areas considered in HCA“. In some ways this is correct – a number of cheminformatics topics could be considered ad hoc, rather than grounded in rigorous mathematical proofs. But there are topics, primarily in the graph theoretical areas, that are quite rigorous. I think Aaron’s choice of complexity descriptors as an example is not particularly useful – granted, it is easy to understand without a background in cheminformatics, but from a practical perspective complexity descriptors see limited use, estimation of synthetic feasibility being one of the few cases. (Indeed, there is an ongoing argument about whether topological 2D descriptors are useful at all, and much of that discussion depends on context.) All the points that Aaron notes are valid: induction from small examples, lack of a formal framework for comparison, limited explanation of utility. Indeed, these criticisms apply to many cheminformatics research reports (cf. “my FANCY-METHOD model performed 5% better on this dataset” style papers).
But this brings me to my main point – many of the real problems addressed by cheminformatics cannot be completely (or usefully) abstracted away from the underlying chemistry and biology. Yes, a proof of the lower bounds on the calculation of a molecular complexity descriptor is interesting; maybe it’d get you a paper in a TCS journal. However, it is of no use to a practising chemist in deciding which molecule to make next. The key thing is that one can certainly start with a chemical graph, but in the end the work must be tied back to the actual chemical and biological problem. There are certainly examples of this, such as the evaluation of bounds on fingerprint similarity (Swamidass & Baldi, 2007). I believe this stresses the need for real collaborations between TCS, cheminformatics and chemistry.
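The Swamidass & Baldi result is a nice example of theory paying off in practice: for binary fingerprints with a and b bits set, the Tanimoto similarity can never exceed min(a,b)/max(a,b), so comparisons that cannot possibly reach a threshold can be skipped outright. Here’s a minimal sketch of the idea using the rcdk and fingerprint packages (the SMILES and threshold are arbitrary, and I’m assuming the fingerprint object exposes its on bits via the bits slot):

```r
library(rcdk)
library(fingerprint)

query <- parse.smiles("CC(=O)Oc1ccccc1C(=O)O")[[1]]  # aspirin, as an arbitrary example
targets <- parse.smiles(c("c1ccccc1O", "CCO", "CC(=O)Nc1ccc(O)cc1"))

fp.query <- get.fingerprint(query, type = "standard")
fps <- lapply(targets, get.fingerprint, type = "standard")

nq <- length(fp.query@bits)   # number of bits set in the query fingerprint
threshold <- 0.7

for (fp in fps) {
  nt <- length(fp@bits)
  ## The bound: Tanimoto(A, B) <= min(a, b) / max(a, b), so comparisons
  ## that cannot possibly reach the threshold are skipped
  if (min(nq, nt) / max(nq, nt) < threshold) next
  sim <- distance(fp.query, fp, method = "tanimoto")
  if (sim >= threshold) print(sim)
}
```

For precomputed fingerprints (where the bit counts can be stored alongside), the pruning step avoids the full similarity calculation entirely.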
As another example, Aaron uses the similarity principle (Martin et al, 2002) to explain how cheminformatics measures similarity in different ways, and the nature of the problems tackled by cheminformatics. One anonymous commenter responds
… I refuse to believe that this is a valid form of research. Yes, it has been mentioned before. The very idea is still outrageous …
In my opinion, the commenter has either never worked on real chemical problems, or is of the belief that chemistry can be abstracted into some “pure” framework, divorced from reality. The fact of the matter is that, from a physical point of view, similar molecules do in many cases exhibit similar behaviors. Conversely, there are many cases where similar molecules exhibit significantly different behaviors (Maggiora, 2006). But this is reality, and it is what cheminformatics must address. In other words, cheminformatics in the absence of chemistry is just symbols on paper.
Aaron, as well as a number of commenters, notes that one of the things holding back cheminformatics is the limited public access to data and tools. For data, this was indeed the case for a long time. But over the last 10 years or so, a number of large public access databases have become available. While one can certainly argue about the variability in data quality, things are much better than before. In terms of tools, open source cheminformatics toolkits are also relatively recent, dating from around 2000 or so. But, as I noted in the comment thread, there is a plethora of open source tools that one can use for most cheminformatics computations, and in some areas they are equivalent to commercial implementations.
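As a small example of the sort of thing the open source stack handles routinely, here’s a descriptor calculation using the rcdk package (a minimal sketch; “constitutional” is a descriptor category I believe rcdk accepts, but get.desc.categories() will list the current ones):

```r
library(rcdk)

mols <- parse.smiles(c("CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"))

## Evaluate one class of molecular descriptors across the set
desc.names <- get.desc.names("constitutional")
desc.vals <- eval.desc(mols, desc.names)  # a data.frame, one row per molecule
head(desc.vals)
```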
My last point, which is conjecture on my part, is that one reason for the higher profile of bioinformatics in the CS community is that it has a relatively lower barrier to entry for a non-biologist (and I’ll note that this is likely not a core reason, but a reason nonetheless). After all, the bulk of bioinformatics revolves around strings. Sure, there are topics (protein structure etc.) that are more physical, and I don’t want to go down the semantic road of what is and what is not bioinformatics. But my experience as a faculty member in a department with both cheminformatics and bioinformatics suggests that, coming from a CS or math background, it is easier to get up to speed on the latter than the former. I believe that part of this is because, while both cheminformatics and bioinformatics are grounded in common, abstract data structures (sequences, graphs etc.), one very quickly runs into the nuances of chemical structure in cheminformatics. Another way to put it is that much of bioinformatics is based on a single data type – the sequence and its properties. Cheminformatics, on the other hand, has multiple data types (i.e., structural representations), and which one is best for a given task is not always apparent. (Steve Salzberg also made a comment on the higher profile of bioinformatics, which I’ll address in an upcoming post.)
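To make the multiple-representation point concrete: the same molecule can be treated as a string, a graph or a bit vector, and different tasks favor different forms. A minimal rcdk-based sketch (I’m assuming get.adjacency.matrix behaves as in current rcdk versions):

```r
library(rcdk)

mol <- parse.smiles("c1ccccc1O")[[1]]  # phenol

get.smiles(mol)                          # string representation
get.adjacency.matrix(mol)                # graph representation (atom x atom)
get.fingerprint(mol, type = "standard")  # fixed-length bit-vector representation
```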
In summary, I think Aaron’s post was very useful as an attempt at bridge building between the two communities. Some aspects could have been better articulated – but the fact is, CS topics have been a core part of cheminformatics for a long time, and there are ample problems yet to be tackled.
These last few days I’ve been in the UK for an EBI workshop on cheminformatics in R. It was a two day workshop, the first day focusing on general cheminformatics in R using the rcdk and rpubchem packages, and the second day focusing on doing mass spectrometry in R using XCMS and Rdisop, run by Steffen Neumann and Paul Benton. It was an excellent workshop, with participation from industry and academia, skill levels ranging from new R users to experts, and backgrounds ranging from minimal cheminformatics exposure to full time cheminformaticians. While I think my exercises might have been a little too difficult, we were able to cover a variety of topics, from details of how to do specific cheminformatics operations in R to more application oriented tasks such as fingerprint based analysis and benchmarking virtual screening methods. The slides from the workshop are available here – it’s a pretty big slide deck covering some introductory R (there are some mistakes in that section which I will fix in the coming days), an overview of the CDK, and then sections on usage and applications of the rcdk and rpubchem packages. It certainly helped that I had a very friendly audience!

During the course of the workshop I also learned a few things about R (thanks to Tobias Verbeke and Steffen). Given that about 40 people were exposed to the rcdk package, my (known) user base should hopefully increase. It was also nice to get a patch from Tobias during the workshop, which will be incorporated once I’m back home. And it was great to meet a number of people with whom I’d only had email or FriendFeed exchanges in the past – including Chris Swain, Mark Rijnbeek, Duncan Hull, Nico Adams (though I didn’t realize it was him when I was speaking to him – sorry Nico!), Duan Lian and Syed Asad Rahman. I also got to briefly meet some of the ChEMBL folks (John and Patricia). Monday night we had a lovely workshop dinner at The Cricketer (Clavering). Many thanks to Gabriella Rustici and Dominic Clark for organizing this and inviting me to run the first day. The only downside of this trip? It was too short. It would’ve been great to stay a day or two more and have longer discussions with various groups.
In addition to the workshop, I visited Asad and his family in Cambridge for a fantastic dinner and much useful discussion. He’s done some excellent work on SMSD, and showed me some of his recent work on enzyme classification and reaction mappings. I won’t say much more, as he’s writing this up, except to say that it was quite impressive and I’m eagerly looking forward to seeing the writeups. Hopefully we’ll be able to do some joint work in the near future. Given the speed-up that SMSD provides for subgraph isomorphism searches, I’m in the process of updating the CDK SMARTS parser to use it in place of the older UIT, which should improve SMARTS matching considerably. Down the road, the pharmacophore matching code will get a similar upgrade.
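For rcdk users the change should be transparent, since SMARTS matching from R goes through the same CDK machinery under the hood. A minimal sketch of the user-facing side (assuming rcdk’s matches() function; the SMARTS pattern is an arbitrary carboxyl-like example):

```r
library(rcdk)

mols <- parse.smiles(c("CC(=O)Oc1ccccc1C(=O)O", "CCO", "c1ccccc1"))

## Which structures contain a carboxylic acid-like group?
sapply(mols, function(m) matches("C(=O)[OX2H1]", m))
```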
I was also able to squeeze in a day trip up to Harrogate, where I grew up. It was fun to see familiar streets and places after 23 years or so. It certainly didn’t hurt to also have some pretty amazing traditional English fare (the Yorkshire curd tart at Bettys and the fish ‘n chips at Graveleys were fantastic).
Recently there have been two papers asking whether cheminformatics, or virtual screening in general, has really helped drug discovery in terms of lead discovery.
The first paper, from Muchmore et al, focuses on the utility of various cheminformatics tools in drug discovery. Their report is retrospective in nature; they note that while much research has been done in developing descriptors and predictors of various molecular properties (solubility, bioavailability etc.), it does not seem that this has contributed to increased productivity. They suggest three possible reasons for this:
- not enough time to judge the contributions of cheminformatics methods
- methods not being used properly
- methods themselves not being sufficiently accurate.
They then go on to consider how these reasons may apply to various cheminformatics methods and tools that are accessible to medicinal chemists. Examples range from molecular weight and ligand efficiency to solubility, similarity and bioisosteres. They use a 3-class scheme – known knowns, known unknowns and unknown unknowns – corresponding, respectively, to methods whose underlying principles are understood and whose results can be robustly interpreted; methods for properties that we don’t yet know how to realistically evaluate (but which we may still be able to, such as solubility); and methods for which we can get a numerical answer but whose meaning or validity is doubtful. Thus, for example, ligand binding energy calculations are placed in the “unknown unknown” category and similarity searches in the “known unknown” category.
It’s definitely an interesting read, summarizing the utility of various cheminformatics techniques, and it raises a number of interesting questions and issues. For example, a recurring theme is that many cheminformatics methods are ultimately subjective, even though the underlying implementation may be quantitative – “what is a good Tanimoto cutoff?” in similarity calculations being a classic example. The downside of the article is that it does, at times, appear specific to practices at Abbott.
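The Tanimoto cutoff issue is easy to demonstrate empirically: the distribution of pairwise similarities, and therefore any sensible cutoff, shifts with the fingerprint used. A minimal sketch using rcdk and the fingerprint package’s fp.sim.matrix (the SMILES are arbitrary examples):

```r
library(rcdk)
library(fingerprint)

smiles <- c("CCO", "CCCO", "CCN", "c1ccccc1", "c1ccccc1O",
            "CC(=O)Oc1ccccc1C(=O)O")
mols <- parse.smiles(smiles)

for (type in c("standard", "maccs")) {
  fps <- lapply(mols, get.fingerprint, type = type)
  sims <- fp.sim.matrix(fps, method = "tanimoto")
  ## The spread of pairwise similarities differs by fingerprint type,
  ## so a "good" cutoff for one is not transferable to another
  print(summary(sims[upper.tri(sims)]))
}
```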
The second paper, by Schneider, is more prospective and general in nature, and discusses some reasons why virtual screening has not played a more direct role in drug discovery projects. One of the key points that Schneider makes is that
appropriate “description of objects to suit the problem” might be the key to future success
In other words, molecular descriptors, while useful surrogates of physical reality, are probably not sufficient to get us to the next level. Schneider even states that “… the development of advanced virtual screening methods … is currently stagnated“. This statement rings true in many ways, especially if one considers the statistical modeling side of virtual screening (i.e., QSAR). Many recent papers discuss slight modifications to well known algorithms that invariably lead to incremental improvements in accuracy. Schneider suggests that advances will come from improvements in our understanding of the physics of the drug discovery problem – protein folding, allosteric effects, dynamics of complex formation and so on – rather than from a continued focus on static properties (logP etc.). Another very valid point is that future developments will need to move away from the prediction or modeling of “… one to one interactions between a ligand and a single target …” and instead consider “… many to many relationships …“. In other words, advances in virtual screening will need to address ligand non-specificity, or promiscuity. Thus activity profiles, network models and polypharmacology will all be vital aspects of successful virtual screening.
I really like Schneider’s views on the future of virtual screening, even though they are rather general. I agree with his views on the stagnation of machine learning (QSAR) methods, but at the same time I’m reminded of a paper by Halevy et al, which highlights the fact that
simple models and a lot of data trump more elaborate models based on less data
Now, they are talking about natural language processing using trillion-word corpora – not exactly the situation we face in drug discovery! But it does look like we’re slowly moving toward generating biological datasets of large size and of multiple types; a recent NIH RFP proposes this type of development. Coupled with well established machine learning methods, this could lead to some very interesting developments. (Of course, even ‘simple’ properties such as solubility could benefit from a ‘large data’ scenario, as noted by Muchmore et al.)
Overall, two interesting papers looking at the state of the field from different viewpoints.
Finally back home from another ACS National Meeting, this time in San Francisco. While the location is certainly an attraction, there were also some pretty nice talks and symposia in the CINF division, such as Visualization of Chemical Data, Metabolomics and Materials Informatics. Credit for these (and all the other) symposia goes to the organizers, who put in a lot of effort to get an excellent line up of speakers – as evidenced by packed rooms. This time I finally got round to visiting some of the other divisions – there were some excellent talks in MEDI. As in the past, there was a Blue Obelisk dinner, this time at La Briciola (a fantastic recommendation from Moses Hohman and the CDD crowd), where there was much good discussion. I got a Blue Obelisk (an actual obelisk!) from PMR (Cameron Neylon and Alex Wade were also recipients this year).
CINF had some excellent receptions, where I got to meet old faces and make some new friends – many of whom I’d previously known only through virtual exchanges via email or FriendFeed. Here’s a picture of me and Wendy Warr from one of the receptions.
With the meeting over and most of the follow-up now done, I can take a bit of a break while the last few submissions for the Boston program trickle in. Then I get down to finalizing the program for the Fall meeting. This fall we have an excellent line up of symposia, including “Data Intensive Drug Design“, “Semantic Chemistry and RDF” and “Structure Activity Landscapes“. At the Fall meeting I’ll also be chairing a COMP symposium titled “HPC on the Cheap“, where an excellent set of speakers will focus on technologies that let users access high performance computing power at a fraction of the price of supercomputers – things like FPGAs, GPUs and distributed systems such as Hadoop. This is part of the “Scripting and Programming” series, so expect to see code on the slides!
I’d also like to let people know that in Boston, CINF will be running an experimental symposium consisting of several very short (5 or 8 minute) lightning talks. Unlike traditional ACS symposia, we’re going to open submissions sometime in July and close them two to three weeks before the meeting itself. In other words, we’re looking for recent and ongoing developments in chemical information and cheminformatics. The title and exact mechanics of this symposium – dates, submission and review process, talk lengths and slide counts – will be announced in the near future in various places. If you think the early ACS deadlines suck, consider submitting a short talk to this symposium.
Overall, an excellent meeting in San Francisco, and I’m already looking forward to Boston. But in the meantime, it’s time to get back to chewing on data and finishing up some papers, book chapters and talks.