So much to do, so little time

Trying to squeeze sense out of chemical data

Archive for the ‘Literature’ tag

Drug-Target Networks & Polypharmacology

with 3 comments

I came across Takigawa et al., where they address polypharmacology by investigating drug-target pairs. Their approach is to simultaneously identify substructures from the ligand and subsequences from the target and combine this information to suggest drug-target pairs that represent some form of polypharmacology. More specifically, their hypothesis is that “polypharmacological principles” are embedded in a special set of paired fragments (substructures on the ligand side, subsequences on the target side). When you think about it, this is a more generalized (abstract?) version of a pharmacophore that makes the role of the target explicit.

Their approach originates from two assumptions

These results suggest that targets of promiscuous drugs can be dissimilar, implying that only a small part of each target is related with the principle of polypharmacology.

and

Similarly, recent research shows that smaller drugs in molecular weight are likely to be more promiscuous, suggesting that small fragments in each ligand would be a key to drug promiscuity

These lead to their hypothesis

… that paired fragments significantly shared in drug-target pairs could be crucial factors behind polypharmacology.

Based on this idea they first apply a frequent itemset algorithm to identify pairs of subgraphs (SG) and subsequences (SS) that occur frequently (in more than 5% of the drug-target pairs). After identifying about 10,000 such SG-SS pairs, they define a sparse fingerprint, where each bit corresponds to one such pair. Using these fingerprints they then cluster the drug-target pairs, ending up with a selection of clusters. They then propose that individual clusters represent distinct polypharmacologies.

Our significant substructure pairs partitioned drug-target pairs covering most of approved drugs into clusters, which were clearly separated from each other, implying that each cluster corresponds to a unique polypharmacology type
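
To make the mining step concrete, here is a minimal sketch (not the authors' code) of how one might look for SG-SS pairs that co-occur in at least 5% of drug-target pairs, using the apriori implementation from the mlxtend package. The toy data and column names are made up; only the 5% support threshold comes from the description above.

```python
# Hypothetical sketch: mine SG-SS pairs that co-occur in >= 5% of drug-target pairs.
# Each row is one drug-target pair; each column flags the presence of a ligand
# substructure (SG_*) or a target subsequence (SS_*). All names/data are made up.
import pandas as pd
from mlxtend.frequent_patterns import apriori

transactions = pd.DataFrame(
    [
        {"SG_phenyl": 1, "SG_amide": 0, "SS_GKTTL": 1, "SS_HRDLK": 0},
        {"SG_phenyl": 1, "SG_amide": 1, "SS_GKTTL": 1, "SS_HRDLK": 1},
        {"SG_phenyl": 0, "SG_amide": 1, "SS_GKTTL": 0, "SS_HRDLK": 1},
    ]
).astype(bool)

# All itemsets present in at least 5% of the drug-target pairs
itemsets = apriori(transactions, min_support=0.05, use_colnames=True)

# Keep only mixed pairs: one ligand substructure plus one target subsequence
def is_sg_ss_pair(items):
    return (len(items) == 2
            and any(i.startswith("SG_") for i in items)
            and any(i.startswith("SS_") for i in items))

print(itemsets[itemsets["itemsets"].apply(is_sg_ss_pair)])
```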

While the underlying algorithms used to obtain these results are nice, a number of things weren’t clear to me.

Foremost, given the above quote, it’s not exactly clear from the paper what is meant by a “unique polypharmacology type”. Given that a cluster will consist of multiple drugs and multiple targets, it is not apparent from the text whether a cluster highlights promiscuity of compounds or ligand preferences for a small number of targets. While I think this is a major issue, there are some other, lesser problems:

  • I get the impression that they consider promiscuity and polypharmacology as equivalent concepts. While there is a degree of similarity, I’d regard polypharmacology more as a rational, controlled type of promiscuity.
  • Most of the fragments they highlight in Figure 2 are relatively trivial paths. Certainly, reactive groups can lead to promiscuity, but none of the subgraphs listed exhibit reactive functionality, and their application of the frequent itemset method with a support of 5% could easily have filtered such groups out.
  • Given that they consider arbitrary subsequences of the target, the resulting associations could be meaningless. Again, it’d be interesting to note, in cases where a crystal structure is available, how many of the subsequences in the list of significant SG-SS pairs lie in or around the binding site. A related question would be: of the SG-SS pairs associated with a cluster, how are the individual subsequences distributed? A small number of unique subsequences could point towards a common binding site or active domain.
  • Related to the previous point, it’d be interesting to see in how many of the SG-SS paired fragments, the members correspond to actual interacting motifs (again based on crystal structure data).
  • One could argue that just using string subsequences to characterize the target misses information on important ligand-target interactions.

And while they may be the first to consider an analysis of drug-target pairs specifically, the idea of considering ligand and target simultaneously is not new. For example, the SiFT approach is quite similar and was described in 2004.

So, even though the paper seems pretty fuzzy on the supposed polypharmacology that they identify, it is overall an interesting paper (and one of the more interesting cheminformatics applications of frequent itemset methods).

Written by Rajarshi Guha

March 9th, 2011 at 6:07 am

Cheminformatics – the New World for TCS?

with 4 comments

A few weeks back Aaron Sterling posted a review of the Handbook of Cheminformatics Algorithms (in which I have a chapter). Aaron notes

… my goal for the project changed from just a review of a book, to an attempt to build a bridge between theoretical computer science and computational chemistry …

The review/bridging was a pretty thorough summary of the book, but the blog post as well as the comments raised a number of interesting issues that I think are worth discussing. Aaron notes

… Unlike the field of bioinformatics, which enjoys a rich academic literature going back many years, HCA is the first book of its kind …

While the HCA may be the first compilation of cheminformatics-related algorithms in a single place, cheminformatics actually has a pretty long lineage, starting back in the 1960s. Examples include canonicalization (Morgan, 1965) and ring perception (Hendrickson, 1961). See here for a short history of cheminformatics. Granted, these did not appear in CS journals, but that doesn’t mean that cheminformatics is a new field. Bioinformatics also seems to have a similar lineage (see this Biostar thread), with some seminal papers from the 1960s (Dayhoff et al., 1962). Interestingly, it seems that much of the most-cited literature (alignments etc.) in bioinformatics comes from the 1990s.

Aaron then goes on to note that “there does not appear to be an overarching mathematical theory for any of the application areas considered in HCA”. In some ways this is correct – a number of cheminformatics topics could be considered ad hoc, rather than grounded in rigorous mathematical proofs. But there are topics, primarily in the graph-theoretical areas, that are pretty rigorous. I think Aaron’s choice of complexity descriptors as an example is not particularly useful – granted, it is easy to understand without a background in cheminformatics, but from a practical perspective complexity descriptors tend to have limited use, estimation of synthetic feasibility being one of the few cases. (Indeed, there is an ongoing argument about whether topological 2D descriptors are useful at all, and much of the discussion depends on the context.) All the points that Aaron notes are correct: induction on small examples, lack of a formal framework for comparison, limited explanation of the utility. Indeed, these comments can be applied to many cheminformatics research reports (cf. “my FANCY-METHOD model performed 5% better on this dataset” style papers).

But this brings me to my main point – many of the real problems addressed by cheminformatics cannot be completely (usefully) abstracted away from the underlying chemistry and biology. Yes, a proof of the lower bounds on the calculation of a molecular complexity descriptor is interesting; maybe it’d get you a paper in a TCS journal. However, it is of no use to a practising chemist in deciding what molecule to make next. The key thing is that one can certainly start with a chemical graph, but in the end it must be tied back to the actual chemical & biological problem. There are certainly examples of this, such as the evaluation of bounds on fingerprint similarity (Swamidass & Baldi, 2007). I believe that this stresses the need for real collaborations between TCS, cheminformatics and chemistry.
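
To give a flavour of what such bounds buy you in practice: for binary fingerprints with a and b bits set, the Tanimoto coefficient can never exceed min(a, b)/max(a, b), so candidates whose bit counts put that bound below the query threshold can be discarded without ever computing the intersection. Below is a small sketch of this pruning idea (my own illustration; the function names are made up, and real implementations work on packed bit vectors rather than Python sets).

```python
# Sketch of bound-based pruning for Tanimoto similarity searching (my own
# illustration). For fingerprints A and B with a and b bits set and c = |A & B|,
# T = c / (a + b - c), so T <= min(a, b) / max(a, b): candidates whose bit
# counts put this bound below the threshold can be skipped outright.

def tanimoto(a_bits: set, b_bits: set) -> float:
    c = len(a_bits & b_bits)
    return c / (len(a_bits) + len(b_bits) - c)

def search(query: set, database: list, threshold: float) -> list:
    hits = []
    nq = len(query)
    for fp in database:
        nb = len(fp)
        if min(nq, nb) / max(nq, nb) < threshold:
            continue  # upper bound already below threshold; skip the intersection
        sim = tanimoto(query, fp)
        if sim >= threshold:
            hits.append((fp, sim))
    return hits

# Toy usage: fingerprints represented as sets of "on" bit positions
db = [{1, 2, 3, 4}, {1, 2}, {5, 6, 7, 8, 9, 10}]
print(search({1, 2, 3}, db, threshold=0.6))
```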

As another example, Aaron uses the similarity principle (Martin et al., 2002) to explain how cheminformatics measures similarity in different ways and the nature of the problems tackled by cheminformatics. One anonymous commenter responds

… I refuse to believe that this is a valid form of research. Yes, it has been mentioned before. The very idea is still outrageous …

In my opinion, the commenter has never worked on real chemical problems, or is of the belief that chemistry can be abstracted into some “pure” framework, divorced from reality. The fact of the matter is that, from a physical point of view, similar molecules do in many cases exhibit similar behaviors. Conversely, there are many cases where similar molecules exhibit significantly different behaviors (Maggiora, 2006). But this is reality and is what cheminformatics must address. In other words, cheminformatics in the absence of chemistry is just symbols on paper.

Aaron, as well as a number of commenters, notes that one of the things holding back cheminformatics is the lack of public access to data and tools. For data, this was indeed the case for a long time. But over the last 10 years or so, a number of large public-access databases have become available. While one can certainly argue about the variability in data quality, things are much better than before. In terms of tools, open source cheminformatics tools are also relatively recent, dating from around 2000 or so. But, as I noted in the comment thread, there is a plethora of open source tools that one can use for most cheminformatics computations, and in some areas they are equivalent to commercial implementations.

My last point, which is conjecture on my part, is that one reason for the higher profile of bioinformatics in the CS community is that it has a relatively lower barrier to entry for a non-biologist (and I’ll note that this is likely not a core reason, but a reason nonetheless). After all, the bulk of bioinformatics revolves around strings. Sure, there are topics (protein structure, etc.) that are more physical, and I don’t want to go down the semantic road of what is and what is not bioinformatics. But my experience as a faculty member in a department with both cheminformatics and bioinformatics suggests that, coming from a CS or math background, it is easier to get up to speed on the latter than the former. I believe that part of this is due to the fact that while both cheminformatics and bioinformatics are grounded in common, abstract data structures (sequences, graphs, etc.), one very quickly runs into the nuances of chemical structure in cheminformatics. An alternative way to put it is that much of bioinformatics is based on a single data type – properties of sequences. On the other hand, cheminformatics has multiple data types (aka structural representations), and which one is best for a given task is not always apparent, as the example below illustrates. (Steve Salzberg also made a comment on the higher profile of bioinformatics, which I’ll address in an upcoming post.)
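
To illustrate the “multiple data types” point: the same molecule can be handled as a line notation, a layered identifier, a connection table or a bit-vector fingerprint, and which representation suits a given task is not always obvious. A small example using RDKit (my own illustration, not tied to any of the papers discussed):

```python
# Illustration (my own example, using RDKit) of the "multiple data types" point:
# one molecule, several structural representations, each suited to different tasks.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C")  # caffeine

print(Chem.MolToSmiles(mol))    # line notation (canonical SMILES)
print(Chem.MolToInchi(mol))     # layered identifier (InChI)
print(Chem.MolToMolBlock(mol))  # connection table (MDL molblock)

fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
print(fp.GetNumOnBits(), "bits set in a Morgan fingerprint")
```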

In summary, I think Aaron’s post was very useful as an attempt at bridge building between two communities. Some aspects could have been better articulated – but the fact is, CS topics have been a core part of cheminformatics for a long time and there are ample problems yet to be tackled.

Written by Rajarshi Guha

February 13th, 2011 at 7:45 pm

What Has Cheminformatics Done for You Lately?

with 3 comments

Recently there have been two papers asking whether cheminformatics, or virtual screening in general, has really helped drug discovery in terms of lead discovery.

The first paper, from Muchmore et al, focuses on the utility of various cheminformatics tools in drug discovery. Their report is retrospective in nature; they note that while much research has been done in developing descriptors and predictors of various molecular properties (solubility, bioavailability, etc.), it does not seem that this has contributed to increased productivity. They suggest three possible reasons for this:

  • not enough time to judge the contributions of cheminformatics methods
  • methods not being used properly
  • methods themselves not being sufficiently accurate.

They then go on to consider how these reasons may apply to various cheminformatics methods and tools that are accessible to medicinal chemists. Examples range from molecular weight and ligand efficiency to solubility, similarity and bioisosteres. They use a 3-class scheme – known knowns, known unknowns and unknown unknowns – corresponding to methods whose underlying principles are understood and whose results can be robustly interpreted, methods for properties that we don’t know how to realistically evaluate (but which we may still attempt to predict – solubility, for example), and methods for which we can get a numerical answer but whose meaning or validity is doubtful. Thus, for example, ligand binding energy calculations are placed in the “unknown unknown” category and similarity searches are placed in the “known unknown” category.

It’s definitely an interesting read, summarizing the utility of various cheminformatics techniques. It raises a number of interesting questions and issues. For example, a recurring issue is that many cheminformatics methods are ultimately subjective, even though the underlying implementation may be quantitative – “what is a good Tanimoto cutoff?” in similarity calculations would be a classic example.  The downside of the article is that it does appear at times to be specific to practices at Abbott.
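
The Tanimoto-cutoff question is easy to demonstrate: the same search, run at different cutoffs, gives different hit lists, and nothing in the calculation itself tells you which cutoff is “right”. A small RDKit sketch (my own example; the molecules and cutoff values are arbitrary):

```python
# Sketch (my own example) showing how the choice of Tanimoto cutoff changes
# what counts as a "hit"; molecules and cutoff values are arbitrary.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("c1ccccc1C(=O)O")  # benzoic acid
library_smiles = ["c1ccccc1C(=O)OC", "c1ccccc1CC(=O)O", "CCCCCC(=O)O"]
library = [Chem.MolFromSmiles(s) for s in library_smiles]

fp_q = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in library]
sims = [DataStructs.TanimotoSimilarity(fp_q, fp) for fp in fps]

for cutoff in (0.5, 0.7, 0.85):
    hits = [s for s, sim in zip(library_smiles, sims) if sim >= cutoff]
    print(f"cutoff {cutoff}: {len(hits)} hit(s) -> {hits}")
```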

The second paper, by Schneider, is more prospective and general in nature, and discusses some reasons why virtual screening has not played a more direct role in drug discovery projects. One of the key points that Schneider makes is that

appropriate “description of objects to suit the problem” might be the key to future success

In other words, it may be that molecular descriptors, while useful surrogates of physical reality, are probably not sufficient to get us to the next level. Schneider even states that “… the development of advanced virtual screening methods … is currently stagnated”. This statement is true in many ways, especially if one considers the statistical modeling side of virtual screening (i.e., QSAR). Many recent papers discuss slight modifications to well-known algorithms that invariably lead to an incremental improvement in accuracy. Schneider suggests that improvements in our understanding of the physics of the drug discovery problem – protein folding, allosteric effects, dynamics of complex formation, etc. – rather than a continued focus on static properties (logP, etc.) will lead to advances. Another very valid point is that future developments will need to move away from the prediction or modeling of “… one to one interactions between a ligand and a single target …” and instead will need to consider “… many to many relationships …”. In other words, advances in virtual screening will need to address ligand non-specificity or promiscuity. Thus activity profiles, network models and polypharmacology will all be vital aspects of successful virtual screening.

I really like Schneider’s views on the future of virtual screening, even though they are rather general. I agree with his views on the stagnation of machine learning (QSAR) methods, but at the same time I’m reminded of a paper by Halevy et al., which highlights the fact that

simple models and a lot of data trump more elaborate models based on less data

Now, they are talking about natural language processing using trillion-word corpora. Not exactly the situation we face in drug discovery! But it does look like we’re slowly going in the direction of generating biological datasets of large size and of multiple types. A recent NIH RFP proposes this type of development. Coupled with well-established machine learning methods, this could lead to some very interesting developments. (Of course, even ‘simple’ properties such as solubility could benefit from a ‘large data’ scenario, as noted by Muchmore et al.)

Overall, two interesting papers looking at the state of the field from different views.

Written by Rajarshi Guha

April 5th, 2010 at 4:33 am

The Quest for Universal Descriptors – Not There Yet

with 2 comments

A major component of QSAR modeling is the choice of molecular descriptors that are used in a model. The literature is replete with descriptors and there’s lots of software (commercial and open source) to calculate them. There are many issues related to molecular descriptors (such as many descriptors being correlated with one another), but I came across a paper by Frank Burden and co-workers describing a “universal descriptor”. What is such a descriptor?

The idea derives from the fact that molecular descriptors usually characterize one specific structural feature. But in many cases, the biological activity of a molecule is a function of multiple structural features. This implies that you need multiple descriptors to capture the entire structure-activity relationship. The goal of a universal descriptor set is that it should be able to characterize a molecular structure in such a way that it (implicitly or explicitly) encodes all the structural features that might be relevant for the molecule’s activity in multiple, diverse scenarios. In other words, a true universal descriptor set could be used in a variety of QSAR models and not require additional descriptors.

One might ask whether this is feasible at all. But when we realize that in many cases biological activity is controlled by shape and electrostatics, it makes sense that a descriptor characterizing these two features simultaneously should be a good candidate. Burden et al describe “charge fingerprints”, which are claimed to be a step towards such a universal descriptor set.

These descriptors are essentially binned counts of partial charges on specific atoms. The method considers 7 atom types (H, C, N, O, P, S, Si) and for each atom type defines 3 bins. Then, for a given molecule, one simply bins the partial charges on the atoms. This results in an 18-element descriptor vector which can then be used in QSAR modeling. This is a very simple descriptor to implement (the authors’ implementation is commercially available, as far as I can see); a rough sketch of the idea is shown below. They test it out on several large and diverse datasets and also compare these descriptors to atom count descriptors and BCUTs.
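
Here is that sketch of a binned-partial-charge descriptor (my own reading of the description above, not the published implementation); the Gasteiger charge model and the bin edges are assumptions made purely for illustration.

```python
# Sketch of a binned-partial-charge descriptor in the spirit of the charge
# fingerprints described above. NOT the published implementation: the charge
# model (Gasteiger) and the bin edges are illustrative assumptions.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

ELEMENTS = ["H", "C", "N", "O", "P", "S", "Si"]
CHARGE_EDGES = [-0.1, 0.1]  # 3 bins per element: q < -0.1, -0.1 <= q < 0.1, q >= 0.1

def charge_fingerprint(smiles: str) -> np.ndarray:
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.ComputeGasteigerCharges(mol)
    fp = np.zeros(len(ELEMENTS) * (len(CHARGE_EDGES) + 1))
    for atom in mol.GetAtoms():
        sym = atom.GetSymbol()
        if sym not in ELEMENTS:
            continue  # other elements are simply ignored in this sketch
        q = atom.GetDoubleProp("_GasteigerCharge")
        bin_idx = int(np.digitize(q, CHARGE_EDGES))
        fp[ELEMENTS.index(sym) * (len(CHARGE_EDGES) + 1) + bin_idx] += 1
    return fp

# Toy usage: counts of atoms per (element, charge bin) for aspirin
print(charge_fingerprint("CC(=O)Oc1ccccc1C(=O)O"))
```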

The results indicate that while the charge fingerprints are similar in performance to things like BCUTs, in the end combinations of them with other descriptors perform best. OK, so that seems to preclude the charge fingerprints being universal in nature. The fact that the number of bins is an empirical choice based on the datasets they employed also seems like a factor that prevents them from being universal descriptors. And shape isn’t considered. Given this point, it would have been interesting to see how these descriptors compared to CPSAs. So while simple, interpretable and useful, it’s not clear why these would be considered universal.

Written by Rajarshi Guha

February 14th, 2009 at 1:09 am

Posted in Literature
