So much to do, so little time

Trying to squeeze sense out of chemical data

Archive for the ‘text mining’ Category

Exploring co-morbidities in medical case studies

with 2 comments

A previous post described a first look at the data available in casesdatabase.com, primarily looking at summaries of high level meta-data. In this post I start looking at the cases themselves. As I noted previously, BMC has performed some form of biomedical entity recognition on the abstracts (?) of the case studies, resulting in a set of keywords for each case study. The keywords belong to specific types such as Condition, Medication and so on. The focus of this post will be to explore the occurrence of co-morbidities – which conditions occur together, to what extent and whether such occurrences are different from random. The code to extract the co-morbidity data and generate the analyses below is available in co-morbidity.py

Before doing any analyses we need to do some clean up of the Condition keywords. This includes normalizing terms (replacing ‘comatose’ with ‘coma’, converting all diabetes variants such as Type 1 and Type 2 to just diabetes), fixing spelling variants (replacing ‘foetal’ with ‘fetal’), removing stopwords and so on. The Python code to perform this clean up requires that we manually identify these transformations. I haven’t done this rigorously, so it’s not a totally cleansed dataset. The cleanup code looks like

def cleanTerms(terms):
    ## map spelling and term variants to a canonical form; keys that map to
    ## themselves collapse longer phrases containing the key
    ## (e.g. 'pulmonary tuberculosis' -> 'tuberculosis')
    repMap = {'comatose':'coma',
              'seizures':'seizure',
              'foetal':'fetal',
              'haematomas':'hematoma',
              'disorders':'disorder',
              'tumour':'tumor',
              'abnormalities':'abnormality',
              'tachycardias':'tachycardia',
              'lymphomas': 'lymphoma',
              'tuberculosis':'tuberculosis',
              'hiv':'hiv',
              'anaemia':'anemia',
              'carcinoma':'carcinoma',
              'metastases':'metastasis',
              'metastatic':'metastasis',
              '?':'-'}
    stopwords = ['state', 'syndrome', 'low grade', 'fever', 'type ii', 'mellitus',
                 'type 2', 'type 1', 'systemic', 'homogeneous', 'disease']
    l = []
    terms = [x.lower().strip() for x in terms]
    for term in terms:
        for sw in stopwords: term = term.replace(sw, '')
        for key in repMap.keys():
            if term.find(key) >= 0: term = repMap[key]
        term = term.encode("ascii", "ignore").replace('\n', '').strip()
        l.append(term)
    ## drop empty terms and the '-' placeholder
    l = filter(lambda x: x not in ('', '-'), l)
    return(list(set(l)))

Since each case study can be associated with multiple conditions, we generate a set of unique condition pairs for each case, and collect these for all 28K cases I downloaded previously.

import pickle
from itertools import combinations

cases = pickle.load(open('cases.pickle'))
allpairs = []
for case in cases:
    ## get all Condition keywords for this case and clean them up
    conds = filter(lambda x: x['type'] == 'Condition', case['keywords'])
    conds = cleanTerms([x['text'] for x in conds])
    if len(conds) == 0: continue
    conds.sort()
    ## unique, ordered condition pairs for this case
    allpairs.extend(combinations(conds, 2))

It turns out that across the whole dataset, there are a total of 991,466 pairs of conditions corresponding to 576,838 unique condition pairs and 25,590 unique conditions. Now, it’s clear that some condition pairs may be causally related (some of which are trivial cases such as cough and infection), whereas others are not. In addition, it is clear that some condition pairs are related in a semantic, rather than causal, fashion – for example, carcinoma and cancer. In the current dataset we can’t differentiate between these classes. One possibility would be to code the conditions using ICD10 and collapse terms using the hierarchy.

Number of co-morbidities vs frequency of occurrence

Having said that, we work with what we currently have – and it’s quite sparse. In fact the unique co-morbidities observed across the 28K case studies represent just 0.16% of all possible condition pairs. Within the set of just under 600K unique observed co-morbidities, the bulk occur just once. For the rest of the analysis we ignore these singleton co-morbidities (leaving us with 513,997 co-morbidity occurrences). It’s interesting to see the distribution of frequencies of co-morbidities. The first figure plots the number of co-morbidities that occur at least N times – 99,369 co-morbidities occur 2 or more times in the dataset, and so on.
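
As a rough sketch (not the exact code in co-morbidity.py), this frequency distribution can be computed directly from the allpairs list built above:

from collections import Counter
import matplotlib.pyplot as plt

## frequency of each unique condition pair
counts = Counter(allpairs)
freqs = counts.values()

## number of co-morbidities occurring at least N times, for a range of N
ns = range(2, 260)
atLeastN = [sum(1 for f in freqs if f >= n) for n in ns]

plt.plot(ns, atLeastN)
plt.xlabel('Minimum frequency, N')
plt.ylabel('Number of co-morbidities occurring at least N times')
plt.show()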

Another way to visualize the data is to plot a pairwise heatmap of conditions. For pairs of conditions that occur in the cases dataset we can calculate the probability of occurrence (i.e., number of times the pair occurs divided by the number of pairs). Furthermore, using a sampling procedure we can evaluate the number of times a given pair would be selected randomly from the pool of conditions. For the current analysis, I used 1e7 samples and evaluated the probability of a co-morbidity occurring by chance. If this probability is greater than the observed probability I label that co-morbidity as not different from random (i.e., insignificant). Ideally, I would evaluate a confidence interval or else evaluate the probability analytically (?).
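
A minimal sketch of such a sampling procedure is shown below; the actual implementation lives in co-morbidity.py and may differ in detail. Here the pool of conditions is weighted by how often each condition appears in the observed pairs.

import random
from collections import Counter

## pool of conditions, weighted by how often each appears in the observed pairs
pool = [c for pair in allpairs for c in pair]
obsCounts = Counter(allpairs)
nsamp = int(1e7)

randCounts = Counter()
for i in range(nsamp):
    a, b = random.choice(pool), random.choice(pool)
    if a == b: continue
    randCounts[tuple(sorted((a, b)))] += 1

insignificant = []
for pair, obs in obsCounts.items():
    pObs = obs / float(len(allpairs))
    pRand = randCounts[pair] / float(nsamp)
    ## if random co-occurrence is at least as likely as the observed
    ## co-occurrence, flag the pair as not different from random
    if pRand > pObs: insignificant.append(pair)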

For the figure below, I considered the 48 co-morbidities (corresponding to 25 unique conditions) that occurred 250 or more times in the dataset. I display the lower triangle of the heatmap – grey indicates no occurrences for a given co-morbidity and white X’s identify co-morbidities that have a non-zero probability of occurrence but are not different from random. As noted above, some of these pairs are not particularly informative – for example, tumor and metastasis occur with a relatively high probability, but this is not too surprising.

Probability of occurrence for co-morbidities occurring more than 250 times
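
The heatmap itself is straightforward to put together. The sketch below assumes a dict probs mapping each frequent pair to its observed probability and a list topConds of the 25 conditions involved – both names are hypothetical and just stand in for the values computed earlier.

import numpy as np
import matplotlib.pyplot as plt

n = len(topConds)                 ## topConds: assumed list of the 25 conditions
mat = np.empty((n, n))
mat[:] = np.nan

## fill in the lower triangle with observed probabilities
for i in range(n):
    for j in range(i):
        pair = tuple(sorted((topConds[i], topConds[j])))
        if pair in probs: mat[i, j] = probs[pair]   ## probs: assumed pair -> probability dict

## cells with no observed co-occurrence stay masked (blank)
plt.imshow(np.ma.masked_invalid(mat), interpolation='nearest', cmap='Reds')
plt.xticks(range(n), topConds, rotation=90)
plt.yticks(range(n), topConds)
cb = plt.colorbar()
cb.set_label('Probability of co-occurrence')
plt.show()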

It’s pretty easy to modify co-morbidity.py to look at other sets of co-morbidities. Ideally, however, we would precompute probabilities for all co-morbidities and then support interactive visualization (maybe using D3).

It’s also interesting to look at co-morbidities that include a specific condition. For example, let’s consider tuberculosis (and all its variants). There are 948 unique co-morbidities that include tuberculosis as one of the conditions. While the bulk of them occur just twice, there are a number with relatively large frequencies of occurrence – lymphadenopathy co-occurs with tuberculosis 203 times. Rather than tabulate the co-occurring conditions, we can use the frequencies to generate a word cloud, as shown below. As with the co-morbidity heatmaps, this could be easily automated to support interactive exploration. On a related note, it’d be quite interesting to compare the frequencies discussed here with data extracted from a live EHR system.

A visualization of conditions most frequently co-occurring with tuberculosis
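
As a sketch of how such a word cloud could be generated (the original figure may well have been produced with a different tool), the third-party wordcloud package can be fed a frequency map. Here tbFreqs is an assumed dict mapping each co-occurring condition to the number of cases it shares with tuberculosis.

from wordcloud import WordCloud
import matplotlib.pyplot as plt

## tbFreqs is a hypothetical dict, e.g. {'lymphadenopathy': 203, ...}
wc = WordCloud(width=800, height=400, background_color='white')
wc.generate_from_frequencies(tbFreqs)

plt.imshow(wc, interpolation='bilinear')
plt.axis('off')
plt.show()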

So far this has been descriptive – given the size of the data, we should be able to try out some predictive models. Future posts will look at the possibilities of modeling the case studies dataset.

Written by Rajarshi Guha

October 12th, 2013 at 10:43 pm

Exploring medical case studies

with one comment

I recently came across http://www.casesdatabase.com/ from BMC, a collection of more than 29,000 peer-reviewed case studies collected from a variety of journals. I’ve been increasingly interested in the possibilities of mining clinical data (inspired by impressive work from Atul Butte, Nigam Shah and others), so this seemed like a great resource to explore.

The folks at BMC have provided a REST API, which is still in development – as a result, there’s no public documentation and it still has a few rough edges. However, thanks to help from Demitrakis Kavallierou, I was able to interact with the API and extract summary search information as well as 28,998 case studies as of Sept 23, 2013. I’ve made the code to extract case studies available as proc.py. Running this gives you two sets of data.

  1. A JSON file for each year between 2000 and 2014, containing the summary results for all cases in that year which includes a summary view of the case, plus facets for a variety of fields (age, condition, pathogen, medication, intervention etc.)
  2. A pickle file containing the case reports, as a list of maps. The case report contains the full abstract, case report identifier and publication meta-data.

A key feature of the case report entries is that BMC has performed some form of entity recognition, so that each case provides a list of keywords grouped into different types: ‘Condition’, ‘Symptom’, ‘Medication’ etc. Each case may have multiple occurrences of each type of keyword and, importantly, each keyword is associated with the text fragment it was extracted from. As an example consider case 10.1136/bcr.02.2009.1548. The entry extracts two conditions

{u'sentence': u'She was treated by her family physician for presumptive interscapular myositis with anti-inflammatory drugs, cold packs and rest.',
u'text': u'Myositis',
u'type': u'Condition'}

and

{u'sentence': u'The patient denied any constitutional symptoms and had no cough.',
u'text': u'Cough',
u'type': u'Condition'}

I’m no expert in biomedical entity recognition, but the fact that BMC has performed it saves me from having to become one, allowing me to dig into the data. But there are the usual caveats associated with text mining – spelling variants, term variants (insulin and insulin therapy are probably equivalent) and so on.
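
As a quick sanity check on the annotations, the sketch below tallies how often each keyword type occurs across the collection; it assumes the pickled cases follow the structure shown above, with each case carrying a 'keywords' list of dicts.

import pickle
from collections import Counter

cases = pickle.load(open('cases.pickle'))
typeCounts = Counter()
for case in cases:
    for kw in case['keywords']:
        typeCounts[kw['type']] += 1

## most common keyword types first
for ktype, count in typeCounts.most_common():
    print('%s: %d' % (ktype, count))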

Count of cases deposited per year

However, before digging into the cases themselves, we can use the summary data, and especially the facet information (which is, by definition, standardized) to get some quick summaries from the database. For example, we see a steady increase in the number of case studies deposited in the literature over the last decade or so.

Interestingly, the number of unique conditions, medications or pathogens reported for these case studies is more or less constant, though there seems to be a downward trend for conditions. The second graph highlights this trend, by plotting the number of unique facet terms (for three types of facets) per year, normalized by the number of cases deposited that year.

Normalized count of unique facet terms by year

This is a rough count, since I didn’t do any clean up of the text – so misspellings of the same term (say, acetaminophen and acetaminaphen) will be counted as two separate medication facets.
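
One cheap way to flag likely misspellings would be simple string similarity. The sketch below uses difflib and assumes facetTerms is the list of unique medication facet values pulled from the summary files (a hypothetical name for illustration).

import difflib

facetTerms = sorted(set(facetTerms))
for i, term in enumerate(facetTerms):
    ## report terms that are nearly identical to a later term in the list
    close = difflib.get_close_matches(term, facetTerms[i+1:], n=3, cutoff=0.9)
    if close:
        print('%s ~ %s' % (term, ', '.join(close)))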

Another interesting task would be to enrich the dataset with additional annotations - ICD9/ICD10 for conditions, ATC for drugs – which would allow a higher level categorization and linking of case studies. In addition, one could use the CSLS service to convert medication names to chemical structures and employ structural similarity to group case studies.

The database also records some geographical information for each case. Specifically, it lists the countries that the authors are from. While interesting to an extent, it would have been nice if the country of occurrence or country of treatment were specifically extracted from the text. Currently, one might infer that the treatment occurred in the same country as the author is from, but this is likely only true when all authors are from the same country. Certainly, multinational collaborations will hide the true number of cases occurring in a given country (especially so for tropical diseases).

But we can take a look at how the number of cases reported for specific conditions varies with geography and time. The figure below shows the cases whose conditions included the term tuberculosis.

Tuberculosis cases by country and year

The code to extract the data from the pickle file is in condition_country.py. Assuming you have cases.pickle in your current path, usage is

$ python condition_country.py condition_name

and will output the data into a CSV file, which you can then process using your favorite tools.
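
For example, a quick summary of the output with pandas might look like the sketch below; the file name and column names (country, year, count) are assumptions and may not match the actual CSV produced by condition_country.py.

import pandas as pd

df = pd.read_csv('tuberculosis.csv')
## cases per country per year, as in the figure above
byCountry = df.pivot_table(index='country', columns='year', values='count', aggfunc='sum')
print(byCountry.fillna(0).astype(int))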

In following blog posts, I’ll start looking at the actual case studies themselves. Interesting things to look at include the propensity of co-morbidities and the co-occurrence of conditions and medications (or conditions and pathogens), to see whether the set of treatments associated with a given condition (or pathogen) has changed over time. Both of these naturally lead to looking at the data with an eye towards repurposing events.

Written by Rajarshi Guha

October 10th, 2013 at 7:20 pm

Words, Sentences, Fragments & Molecules

without comments

For some time I have been thinking of the analogy between linguistics (and text mining of language data) and chemistry, specifically from the point of view of fragments (though, the relationship between the two fields is actually quite long and deep, since many techniques from IR have been employed in cheminformatics). For example, atoms and bonds can be considered an “alphabet” for chemical structures. Going one level up, one can consider fragments as words, which can be joined together to form larger structures (with the linguistic analog being sentences). In a talk I gave at the ACS sometime back I compared fragments with n-grams (though LINGOs are probably a more direct analog).

On these lines I have been playing with text mining and modeling tools in R, mainly via the excellent tm package. One of the techniques I have been exploring is Latent Dirichlet Allocation (LDA). This is a generative modeling approach that allows one to associate a document (composed of a set of words) with a “topic”. Here, a topic is a group of words that have a higher probability of being generated from that topic than from another topic. The technique assumes that a document is composed of a mixture of topics – as a result, one can assign a document to different topics with different probabilities. There have been a number of applications of LDA in bioinformatics, some focusing on topic models as a way to cluster objects such as genes [1, 2], whereas others have used it in the more traditional document grouping context [3].

In a text mining scenario, developing an LDA model for a set of documents is relatively straightforward (in R) – perform a series of pre-processing steps (mainly to standardize the text) such as converting everything to lower case, removing stopwords and so on. At the end of this one has a series of documents, each represented as a bag of words. The collection of words across all documents can be converted to a document-term matrix (documents in the rows, words in the columns), which is then used as input to the LDA routine.

Those familiar with building predictive models from keyed fingerprints will find this setup quite familiar – the individual bit positions represent structural fragments and are thus the chemical analogs of words. Based on this observation I wondered what I would get (and what it would mean) by applying a technique like LDA to a collection of structures and their fragments.

My initial thought is that the use of LDA to determine a set of topics for a collection of chemical structures is essentially a clustering of the molecules, with the terms associated with the topics being representative substructures for that “cluster”. With these topics in hand, it will be interesting to see what (or whether) properties (physical, chemical, biological) may be correlated with the clusters/topics identified. The rest of this post describes a quick first look at this, using ChEMBL as the source of structures and R for performing pre-processing and modeling.

Structures & fragments

We had previously fragmented ChEMBL (v8) in house, so obtaining the data was just a matter of running an SQL query to identify all fragments that occurred in 50 or more molecules and retrieving their structures and the molecules they were associated with. This gives us 190,252 molecules covered by 6,110 fragments. While a traditional text document-based modeling project would involve a series of pre-processing steps, the only one I need to perform in this scenario is the removal of small (and thus likely very common) fragments such as benzene – the cheminformatics equivalent of removing stopwords. (Ideally I would also remove fragments that already occur in other fragments – the cheminformatics equivalent of stemming.)

The data file I have is of the form

fragment_id, molregno, smiles, natom

where natom is the number of atoms in the fragment. The R code to generate (relatively) clean data, ready to feed to the LDA function, looks like:

frags <- read.table('chembl.data', header=TRUE, as.is=TRUE, comment.char='', sep=',')
names(frags) <- c('fid', 'molid', 'smiles', 'natom')
frags <- subset(frags, natom >= 8)
## now we create the "documents" - one per molecule, with the fragment
## SMILES concatenated into a space-separated "text"
tmp <- by(frags, frags$molid, function(x) return( c(x$molid[1], paste(x$smiles, collapse=' ')) ))
tmp <- data.frame(do.call('rbind', tmp), stringsAsFactors=FALSE)
names(tmp) <- c('title', 'text')

In the code above, we rearrange the data to create “documents” – identified by a title (the molecule identifier) with the body of the document being the space concatenated SMILES for the fragments associated with that molecule. In other words, a molecule (document) is constructed from a set of fragments (words). With the data arranged in this form we can go ahead and reuse code from the tm and topicmodels packages.

## Get a document-term matrix
library(tm)
corpus <- Corpus(VectorSource(tmp$text))
dtm <- DocumentTermMatrix(corpus, control = list(tolower=FALSE))

Finally, we’re ready to develop some models, starting off with 6 topics.

library(topicmodels)
SEED <- 1234
lda.model <- LDA(dtm, k=6, control=list(seed=SEED))

So, what are the topics that have been identified? As I noted above, each topic is really a set of “words” that have a higher probability of being generated by that topic. In the case of this model we obtain the following top 4 fragments associated with each topic (most likely fragments are at the top of the table):

Visual inspection clearly suggests distinct differences in the topics – topic 1 appears to be characterized primarily by the lack of aromaticity, whereas topic 2 appears to be characterized by quinoline and indole type structures. This is just a rough inspection of the most likely “terms” for each topic. It’s also interesting to look at how the molecules (a.k.a., documents) are assigned to the topics. The barchart indicates the distribution of molecules amongst the 6 topics.

As with other unsupervised clustering methods, the choice of k (i.e., the number of topics) is tricky. A priori there is no reason to choose one value over another. Blei in his original paper used “perplexity” as a measure of the model’s generalizability (smaller values are better). In this case, we can vary k and evaluate the perplexity: with 6 topics the perplexity is 1122, with 12 topics it drops to 786 and with 100 topics it drops to 308 – it seems to decrease continuously as the number of topics increases (which has been observed elsewhere, though in my case the hyperparameters are kept constant). Wallach et al have discussed various approaches to evaluating topic models.

Numerical evaluation of these models is useful, but we’re more interested in how these assignments correlate with chemical or biological features. First, one could look at the structural homogeneity of the molecules assigned to topics. For k = 6, this is probably not useful, as the individual groups are very large. With k = 100 one obtains a much more sensible estimate of homogeneity (but this is to be expected). Another way to evaluate the topics from a chemical point of view is to look at some property or activity. Given that ChEMBL provides assay and target information for the molecules, we have many ways to perform this evaluation. As a brief example, we can consider activity distributions derived from the molecules associated with each topic. Most ChEMBL molecules have multiple activities associated with them, as many are tested in multiple assays. To allow comparison across assays, activities within a given assay were converted to Z-scores. Then for each molecule, we identified the minimum activity, only considering those activities that were annotated as IC50 and as exact (i.e., not < or >). After removal of a few extreme outliers we obtain:

Clearly, within each group, the Z-scores cluster tightly around 0. It appears that the groups differentiate from each other in terms of the extreme values. Indeed plotting summary statistics for each group confirms this – in fact the median Z-score has a range of 0.05 and the mean Z-score a range of 0.11 across the six groups. In other words, the bulk of the groups are quite similar.

Other possibilities

The example shown here is rather simplistic and is the equivalent of unsupervised clustering. One obvious next step is to search the parameter space of the LDA model, evaluate different approaches to estimating the posterior distribution (EM or Gibbs sampling) and so on. A number of extensions to the basic LDA technique have been proposed, one of them being a supervised form of LDA.

It’d also be useful to look at this method on a slightly smaller, labeled dataset – I’ve run some preliminary experiments on the Bursi AMES dataset but those results need a little more work. More generally, smaller datasets can be problematic as the number of unique fragments can be low. In addition, fewer observations mean that the estimates of the posterior distribution become fuzzier. One way around this is to develop a model on something like the ChEMBL dataset I used here and then apply that to smaller datasets. Obviously, this goes towards ideas of applicability – but given the size of ChEMBL, it may indeed “cover” many smaller datasets.

Is this useful?

At first sight, it’s an interesting method that identifies groupings in an unsupervised manner. Of course, one could easily run k-means or any of the hierarchical clustering methods to achieve the same result. However, the generative aspect of LDA models is what is of interest to me, but it also seems to be the part that is difficult to map to a chemical setting – unlike topics in a document, which one can (usually) understand based on the likely terms for that topic, it’s not clear what a topic is for a collection of molecules in an unsupervised setting. And then, how does one infer the meaning of a topic from fragments? While it’s certainly true that certain fragments are associated with specific properties/activities, this is not a given (unlike words, where each one does have an individual meaning). Furthermore, in an unsupervised setting like the one I’ve described here, fishing for a correlation between (some set of) properties and groupings of molecules is probably not the way to go.

Written by Rajarshi Guha

January 5th, 2012 at 4:45 am

Annotating Bioassays

with 2 comments

I’ve been working for some time with the PubChem Bioassay collection – a set of 1293 assays that cover a range of techniques (enzymatic, phenotypic etc.), targets and sizes (from 20 molecules to 200,000 molecules). In addition, some assays are primary, high-throughput assays whereas a number of them are smaller, confirmatory assays. While an extremely valuable collection, one of the drawbacks is the lack of curation. This has led to some people saying that the data is too noisy to be useful. Yes, the noise is a problem, but I think there’s still useful data to extract and model.

One of the problems that I have faced is that while one can perform a full text search for assays on PubChem, there is no form of annotations on the assays themselves. One effect of this is that it is difficult to link an assay to other biological resources (though for enzymatic assays, one can determine a Pubmed protein identifier). While working on my bioassay network project, I needed annotations and I didn’t want to do it manually.


Written by Rajarshi Guha

January 25th, 2009 at 5:03 pm

Locality of References in a Paper

without comments

The other day I was reading a paper and as is my habit, while reading I flip to see what papers are being cited. Since this was an ACS journal, the references are listed in the order that they occur in the text. When the authors were discussing a point in the paper, they’d usually include a number of references. Given the ordering of the references, this implies that related references are grouped together in the bibliography.

This set me thinking – given a set of references and their citations within a paper, we can capture relationships between the references in various ways. Most obviously, one might analyze the cited papers (either in whole, or in part such as just the abstract or title) and draw conclusions.

However, the fact that the authors of the paper considered references X, Y and Z to be related to a specific point already provides us with some information. Thus, in a bibliography where references are ordered based on first occurrence, can we use the “locality” of the references in the list to draw any conclusions? One could employ some form of a sliding window and look at groups of references. The key thing here would be to have a way to characterize a reference – so it’d probably require that you can access the title (or better, the abstract or full text) of the paper being cited. I will admit that I’m not sure what sort of conclusions one might draw from such an analysis – but it was interesting to observe “local behavior” in a list of references.

Not having followed work in bibliometrics, I’m sure someone has already thought of this and looked into it. If anybody has heard of stuff like this, I’d appreciate any pointers.

(Of course this is all moot, if we can’t easily access the paper itself)

Written by Rajarshi Guha

September 30th, 2008 at 7:40 pm