# So much to do, so little time

Trying to squeeze sense out of chemical data

## Pig and Cheminformatics

Pig is a platform for analyzing large datasets. At its core is a high-level language (called Pig Latin) that focuses on specifying a series of data transformations. Scripts written in Pig Latin are executed by the Pig infrastructure in either local or map/reduce mode (the latter making use of Hadoop).

Previously I had investigated Hadoop for running cheminformatics tasks such as SMARTS matching and pharmacophore searching. While implementing such code is pretty straightforward, it’s still fairly heavyweight compared to, say, performing SMARTS matching in a database via SQL. On the other hand, being able to perform these tasks in Pig Latin lets us write much simpler code that can be integrated with other, non-cheminformatics code in a flexible manner. An example of a Pig Latin script that we might want to execute is:

    A = load 'medium.smi' as (smiles:chararray);
    B = filter A by net.rguha.dc.pig.SMATCH(smiles, 'NC(=O)C(=O)N');
    store B into 'output.txt';

The script loads a file containing SMILES strings, keeps the entries that match the specified SMARTS pattern, and writes the matching SMILES to an output file. Clearly, this is very similar to SQL. However, the above won’t work on a default Pig installation, since SMATCH is not a built-in function. Instead, we need to write a user-defined function (UDF).

UDFs are implemented in Java and can be classified into one of three types: eval, aggregate, or filter functions. For this example I’ll consider a filter function that takes two strings, a SMILES string and a SMARTS pattern, and returns true if the SMILES contains the specified pattern.

    package net.rguha.dc.pig;

    import java.io.IOException;

    import org.apache.pig.FilterFunc;
    import org.apache.pig.data.Tuple;
    import org.apache.pig.impl.util.WrappedIOException;

    import org.openscience.cdk.DefaultChemObjectBuilder;
    import org.openscience.cdk.exception.CDKException;
    import org.openscience.cdk.interfaces.IAtomContainer;
    import org.openscience.cdk.smiles.SmilesParser;
    import org.openscience.cdk.smiles.smarts.SMARTSQueryTool;

    public class SMATCH extends FilterFunc {
        // create the SMARTS matcher and SMILES parser once; the SMARTS used here
        // is a dummy, since the actual query pattern is set in exec()
        static SMARTSQueryTool sqt;
        static {
            try {
                sqt = new SMARTSQueryTool("C");
            } catch (CDKException e) {
                System.out.println(e);
            }
        }
        static SmilesParser sp = new SmilesParser(DefaultChemObjectBuilder.getInstance());

        public Boolean exec(Tuple tuple) throws IOException {
            if (tuple == null || tuple.size() < 2) return false;
            String target = (String) tuple.get(0);
            String query = (String) tuple.get(1);
            try {
                sqt.setSmarts(query);
                IAtomContainer mol = sp.parseSmiles(target);
                return sqt.matches(mol);
            } catch (CDKException e) {
                throw WrappedIOException.wrap("Error in SMARTS pattern or SMILES string " + query, e);
            }
        }
    }

A UDF for filtering must extend the FilterFunc class, which specifies a single method, exec. Within this method we check whether we have the requisite number of input arguments and, if so, simply return the value of the SMARTS match. For more details on filter functions see the UDF manual.

One of the key features of the code is the static initialization of the SMILES parser and SMARTS matcher. I’m not entirely sure how many times the UDF is instantiated during a query (once for each “row”? once for the entire query?), but if it’s more than once, we don’t want to instantiate the parser and matcher in the exec method. Note that since Hadoop does not use a multithreaded execution model, we don’t need to worry about the lack of thread safety in the CDK.

Compiling the above class and packaging it into a jar file allows us to run the above Pig Latin script from the command line (you’ll have to register the jar file at the beginning of the script by writing register /path/to/myudf.jar).
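
As a concrete sketch, the script and jar names below are just placeholders; the full script, saved as, say, match.pig, would look like:

    -- make the SMATCH UDF available to Pig
    register /path/to/myudf.jar;

    A = load 'medium.smi' as (smiles:chararray);
    B = filter A by net.rguha.dc.pig.SMATCH(smiles, 'NC(=O)C(=O)N');
    store B into 'output.txt';

and can then be run locally with pig -x local match.pig, or with plain pig match.pig to execute it on the Hadoop cluster.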

### Conclusions

While this experiment didn’t actually highlight new algorithms or methods, it does highlight the ease with which data-intensive computation can be handled. What was really cool for me was having access to massive compute resources with a few simple command line invocations. Another nice thing is that the Hadoop framework makes handling large data problems pretty much trivial, as opposed to chunking my data by hand, making sure each chunk is processed by a different node, and dealing with all the scheduling issues associated with this.

The next thing to look at is how one can access the Amazon public datasets stored on EBS from a Hadoop cluster. This will allow pharmacophore searching across the entire PubChem collection, either via the Pub3D dataset (single conformer) or via the PubChem dataset (multiple conformers). While I’ve focused on pharmacophore searching, one can consider arbitrary cheminformatics tasks.

Going further, one could consider the use of HBase, a column store built on Hadoop, as a storage system for large chemical collections and couple it to Hadoop applications. This will be useful if the use case does not involve complex relational queries. Going back to pharmacophore searches, one could imagine running searches against large collections stored in HBase and updating the database with the results – in this case, database usage is essentially simple lookups based on compound ID, as opposed to relational queries.

Finally, it’d also be useful to consider cheminformatics applications that could make use of the Map/Reduce framework at an algorithmic level, as opposed to using Map/Reduce simply to process data in chunks. Some immediate applications that come to mind include pharmacophore discovery and diversity analysis.

Written by Rajarshi Guha

May 12th, 2009 at 2:22 am

## Hadoop, Chunks and Multi-line Records

(Figure: chunking an input file)

In a previous post I described how one requires a custom RecordReader class to deal with multi-line records (such as SD files) in a Hadoop program. While it worked fine on a small input file (less than 5 MB), I had not addressed the issue of “chunking”, and that caused it to fail when dealing with larger files (the code in that post is updated now).

When a Hadoop program is run on an input file, the framework will send chunks of the input file to individual RecordReader instances. Note that it doesn’t actually read the entire file and send around portions of it – that would not scale to petabyte files! Rather, it determines the size of the file and sends start and end offsets into the original file to the RecordReaders, which then seek to the appropriate position in the original file and do their work.

The problem with this is that when a RecordReader receives a chunk (defined in terms of start and end offsets), it can start in the middle of one record and end in the middle of another. This is shown schematically in the figure, where the input file with 5 multi-line, variable-length records is divided into 5 chunks. As you can see, in the general case chunks don’t start or end on record boundaries.

My initial code failed badly when faced with chunks: rather than recognizing chunk boundaries, it simply read every record in the whole file. Alternatively (and naively), if one simply reads up to a chunk boundary, the first and last records read from each chunk will generally be invalid, since a chunk can begin and end in the middle of a record.

The correct (and simple) strategy for an arbitrary chunk is to first check whether the start position is 0. If it isn’t, we read bytes from the start position until we reach the first end-of-record marker. In general, the record we just read will be incomplete, so we discard it. We then carry on reading complete records as usual. But if, after reading a record, we note that the current file position is beyond the end position of the current chunk, we flag that the chunk is done with and just return this last record. Thus, referring to the figure, when processing the second chunk from the top we read in bytes 101 to 120 and discard that data. We then read Record 3 up to the end-of-record marker at position 250 – even though we’ve gone beyond the chunk boundary at position 200. However, we now flag that we’re done with the chunk and carry on.

When another RecordReader instance gets the next chunk, starting at position 200, it will be dropped into the middle of Record 3. But, according to our strategy, we simply read to the end-of-record marker at position 250 and discard the data. This is OK, since the RecordReader instance that handled the previous chunk already read the whole of Record 3.

The two edge cases here are when the chunk starts at position 0 (beginning of the input file) and the chunk ends at the end of file. In the former case, we don’t discard anything, but simply process the entire chunk plus a bit beyond it to get the entire last record for this chunk. For the latter case, we simply check whether we’re at the end of the file and flag it to the nextKeyValue() method.

The implementation of this strategy is shown in the SDFRecordReader class listing.
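
Condensed from that listing, the boundary handling boils down to two pieces of logic: skip the partial record at the start of a split, and let the last read run past the end of the split before flagging the split as exhausted.

    // In initialize(): if this split doesn't start at byte 0, the bytes up to the
    // first end-of-record marker belong to a record owned by the previous split,
    // so read and discard them
    if (start != 0) {
        readUntilMatch(endTag, false);
    }

    // In nextKeyValue(): readUntilMatch() returns false once we have read past the
    // end of the split; the record just read is still complete and valid, so we
    // return it, but remember that this split is done
    boolean status = readUntilMatch(endTag, true);
    ...
    if (!status) {
        stillInChunk = false;
    }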

In hindsight this is pretty obvious, but I was bashing myself for a while and hopefully this explanation saves others some effort.

Written by Rajarshi Guha

May 6th, 2009 at 5:00 am

Posted in Uncategorized, software


In my previous post I had described my initial attempts at working with Hadoop, an implementation of the map/reduce framework. Most Hadoop examples are based on line-oriented input files. In the cheminformatics domain, SMILES files are line oriented, so applying Hadoop to a variety of tasks that work with SMILES input is easy. However, a number of problems require 3D coordinates or metadata, which SMILES cannot support. Instead, we consider SD files, where each molecule is a multi-line record. So to make use of Hadoop, we’re going to need to support multi-line records.

### Handling multi-line records

The basic idea is to create a class that will accumulate text from an SD file until it reaches the $$$$ marker, indicating the end of a molecule. The resultant text is then sent to the mapper as a string. The mapper can then use the CDK to parse the string representation of the SD record into an IAtomContainer object and carry on from there.

So how do we generate multi-line records for the mapper? First, we extend TextInputFormat so that our class returns an appropriate RecordReader.
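
The class itself is tiny. A minimal sketch, assuming the SDFInputFormat name used by the driver code further down and the org.apache.hadoop.mapreduce API used in the other listings, looks like this:

    package net.rguha.dc.io;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    public class SDFInputFormat extends TextInputFormat {
        // hand each input split to our multi-line record reader
        // instead of the default line-oriented one
        @Override
        public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
            return new SDFRecordReader();
        }
    }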

The SDFRecordReader class is where all the work is done. We start off by setting up some variables:

    public class SDFRecordReader extends RecordReader<LongWritable, Text> {
        private long end;
        private boolean stillInChunk = true;

        private LongWritable key = new LongWritable();
        private Text value = new Text();

        private FSDataInputStream fsin;
        private DataOutputBuffer buffer = new DataOutputBuffer();

        // a molecule in an SD file ends with a line containing $$$$
        private byte[] endTag = "$$$$\n".getBytes();

The main method in this class is nextKeyValue(). The code simply reads bytes from the input until it hits the end-of-molecule marker ($$$$) and then sets the value to the resultant string and the key to the position in the file. At this point it doesn’t really matter what we use for the key, since the mapper will usually work with a molecule identifier and some calculated property. As a result, the reducer will likely never see the key generated in this method.

    public boolean nextKeyValue() throws IOException {
        if (!stillInChunk) return false;

        boolean status = readUntilMatch(endTag, true);

        value = new Text();
        value.set(buffer.getData(), 0, buffer.getLength());
        key = new LongWritable(fsin.getPos());
        buffer.reset();

        // status is true as long as we're still within the
        // chunk we got (i.e., fsin.getPos() < end). If we've
        // read beyond the chunk it will be false
        if (!status) {
            stillInChunk = false;
        }

        return true;
    }

The remaining methods are pretty self-explanatory and I’ve included the entire class below.

    package net.rguha.dc.io;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.DataOutputBuffer;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;

    import java.io.IOException;

    public class SDFRecordReader extends RecordReader<LongWritable, Text> {
        private long end;
        private boolean stillInChunk = true;

        private LongWritable key = new LongWritable();
        private Text value = new Text();

        private FSDataInputStream fsin;
        private DataOutputBuffer buffer = new DataOutputBuffer();

        // end-of-molecule marker in an SD file
        private byte[] endTag = "$$$$\n".getBytes();

        public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
            FileSplit split = (FileSplit) inputSplit;
            Configuration conf = taskAttemptContext.getConfiguration();
            Path path = split.getPath();
            FileSystem fs = path.getFileSystem(conf);

            fsin = fs.open(path);
            long start = split.getStart();
            end = split.getStart() + split.getLength();
            fsin.seek(start);

            // if we're not at the start of the file, skip the partial record
            // at the beginning of this split (it belongs to the previous split)
            if (start != 0) {
                readUntilMatch(endTag, false);
            }
        }

        public boolean nextKeyValue() throws IOException {
            if (!stillInChunk) return false;

            boolean status = readUntilMatch(endTag, true);

            value = new Text();
            value.set(buffer.getData(), 0, buffer.getLength());
            key = new LongWritable(fsin.getPos());
            buffer.reset();

            if (!status) {
                stillInChunk = false;
            }

            return true;
        }

        public LongWritable getCurrentKey() throws IOException, InterruptedException {
            return key;
        }

        public Text getCurrentValue() throws IOException, InterruptedException {
            return value;
        }

        public float getProgress() throws IOException, InterruptedException {
            return 0;
        }

        public void close() throws IOException {
            fsin.close();
        }

        private boolean readUntilMatch(byte[] match, boolean withinBlock) throws IOException {
            int i = 0;
            while (true) {
                int b = fsin.read();
                if (b == -1) return false;
                if (withinBlock) buffer.write(b);
                if (b == match[i]) {
                    i++;
                    if (i >= match.length) {
                        return fsin.getPos() < end;
                    }
                } else i = 0;
            }
        }
    }

### Using multi-line records

Now that we have classes to handle multi-line records, using them is pretty easy. For example, let’s rework the atom counting example to work with SD file input. In this case, the value argument of the map() method will contain the text of the SD record. We simply parse this using the MDLV2000Reader and then proceed as before.

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text haCount = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            try {
                // parse the SD record handed to us by SDFRecordReader
                StringReader sreader = new StringReader(value.toString());
                MDLV2000Reader reader = new MDLV2000Reader(sreader);
                ChemFile chemFile = (ChemFile) reader.read((ChemObject) new ChemFile());
                List<IAtomContainer> containersList = ChemFileManipulator.getAllAtomContainers(chemFile);
                IAtomContainer molecule = containersList.get(0);

                // emit (element symbol, 1) for each atom in the molecule
                for (IAtom atom : molecule.atoms()) {
                    haCount.set(atom.getSymbol());
                    context.write(haCount, one);
                }
            } catch (CDKException e) {
                e.printStackTrace();
            }
        }
    }

Before running it, we need to tell Hadoop to use our SDFInputFormat class and this is done in the main program driver by

    Configuration conf = new Configuration();
    Job job = new Job(conf, "haCount count");
    job.setInputFormatClass(SDFInputFormat.class);
    ...

After regenerating the jar file we’re ready to run. Our SD file contains 100 molecules taken from Pub3D, which we first copy into our local HDFS.
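
The copy itself is a single hadoop dfs command; the local file name below is illustrative, and input.sdf is simply the name the file gets in HDFS:

    # file names are illustrative
    hadoop dfs -copyFromLocal pub3d-100.sdf input.sdf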

We then run our program.
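
Assuming a driver class along the lines of net.rguha.dc.HeavyAtomCount (the jar and class names are placeholders for whatever was actually built; output.sdf is the HDFS output directory examined below):

    # jar and driver class names are placeholders
    hadoop jar rghadoop.jar net.rguha.dc.HeavyAtomCount input.sdf output.sdf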

Looking at the output,

    $ hadoop dfs -cat output.sdf/part-r-00000
    Br  40
    C   2683
    Cl  23
    F   15
    H   2379
    N   300
    O   370
    P   1
    S   82

we get a count of the elements across the 100 molecules.

With the SDFInputFormat and SDFRecordReader classes, we are now in a position to throw pretty much any cheminformatics task onto a Hadoop cluster. The next post will look at doing SMARTS based substructure searches using this framework. Future posts will consider the performance gains (or not) when using this approach.

Written by Rajarshi Guha

May 4th, 2009 at 8:32 pm

Over the past few months I’ve been hacking together scripts to distribute data-parallel jobs. However, it’s always nice when somebody else has done the work. In this case, Hadoop is an implementation of the map/reduce framework from Google. As Yahoo and others have shown, it’s an extremely scalable framework, and when coupled with Amazon’s EC2 it becomes an extremely powerful system for processing large datasets.

I’ve been hearing a lot about Hadoop from my brother, who is working on linking R with Hadoop, and I thought that this would be a good time to try it out for myself. So the first task was to convert the canonical word counting example to something closer to my interests – counting the occurrences of elements in a collection of SMILES. This is a relatively easy example, since SMILES files are line oriented, so it’s simply a matter of reworking the WordCount example that comes with the Hadoop distribution.

For now, I run Hadoop 0.20.0 on my MacBook Pro, following these instructions on setting up a single-node Hadoop system. I also put the bin/ directory of the Hadoop distribution in my PATH. The code employs the CDK to parse a SMILES string and identify each element.
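
The mapper is a minor variation on the WordCount mapper. As a sketch of the idea (the class name is made up for illustration, each input line is assumed to hold a single SMILES string, and the Hadoop and CDK imports are omitted for brevity):

    public static class ElementCountMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text symbol = new Text();
        private final SmilesParser parser = new SmilesParser(DefaultChemObjectBuilder.getInstance());

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            try {
                // each input line is assumed to be a single SMILES string
                IAtomContainer mol = parser.parseSmiles(value.toString().trim());
                // emit (element symbol, 1) for every atom in the molecule
                for (IAtom atom : mol.atoms()) {
                    symbol.set(atom.getSymbol());
                    context.write(symbol, one);
                }
            } catch (InvalidSmilesException e) {
                // skip lines that can't be parsed
            }
        }
    }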

This is compiled in the usual manner and converted to a jar file. For some reason, Hadoop wouldn’t run the class unless the CDK classes were also included in the jar file (i.e., the -libjars argument didn’t seem to let me specify the CDK libraries separately from my code). So the end result was to include the whole CDK in my Hadoop program jar.

OK, the next thing was to create an input file. I extracted 10,000 SMILES from PubChem and copied them into my local HDFS.
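
That’s a one-line hadoop dfs command (the file names here are illustrative):

    hadoop dfs -copyFromLocal pubchem10k.smi input.smi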

Then running my program is a single command.
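
The jar and driver class names below are placeholders for whatever was actually compiled; input.smi and output are the HDFS input and output paths:

    hadoop jar rghadoop.jar net.rguha.dc.ElementCount input.smi output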

There’s quite a bit of output, though the interesting portion is

    09/05/04 14:45:58 INFO mapred.JobClient: Counters: 17
    09/05/04 14:45:58 INFO mapred.JobClient:   Job Counters
    09/05/04 14:45:58 INFO mapred.JobClient:     Launched reduce tasks=1
    09/05/04 14:45:58 INFO mapred.JobClient:     Launched map tasks=1
    09/05/04 14:45:58 INFO mapred.JobClient:     Data-local map tasks=1
    09/05/04 14:45:58 INFO mapred.JobClient:   FileSystemCounters
    09/05/04 14:45:58 INFO mapred.JobClient:     FILE_BYTES_READ=533
    09/05/04 14:45:58 INFO mapred.JobClient:     HDFS_BYTES_READ=482408
    09/05/04 14:45:58 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1098
    09/05/04 14:45:58 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=336
    09/05/04 14:45:58 INFO mapred.JobClient:   Map-Reduce Framework
    09/05/04 14:45:58 INFO mapred.JobClient:     Reduce input groups=0
    09/05/04 14:45:58 INFO mapred.JobClient:     Combine output records=60
    09/05/04 14:45:58 INFO mapred.JobClient:     Map input records=10000
    09/05/04 14:45:58 INFO mapred.JobClient:     Reduce shuffle bytes=0
    09/05/04 14:45:58 INFO mapred.JobClient:     Reduce output records=0
    09/05/04 14:45:58 INFO mapred.JobClient:     Spilled Records=120
    09/05/04 14:45:58 INFO mapred.JobClient:     Map output bytes=1469996
    09/05/04 14:45:58 INFO mapred.JobClient:     Combine input records=244383
    09/05/04 14:45:58 INFO mapred.JobClient:     Map output records=244383
    09/05/04 14:45:58 INFO mapred.JobClient:     Reduce input records=60

So we’ve processed 10,000 records, which is good. To see what was generated, we look at the job output in HDFS.
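
Using the illustrative output directory from the run command above, that’s:

    hadoop dfs -cat output/part-r-00000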

and we get

    Ag    13
    Al    11
    Ar    1
    As    6
    Au    4
    B     49
    Ba    7
    Bi    1
    Br    463
    C     181452
    Ca    5
    Cd    2
    Cl    2427
    ....

Thus, across the entire 10,000 molecules, there were 13 occurrences of Ag, 181,452 occurrences of carbon, and so on.

### Something useful?

OK, so this is a rather trivial example. But it was quite simple to create and more importantly, I should be able to take this jar file and run it on a proper multi-node Hadoop cluster and work with the entire PubChem collection.

A more realistic use case is to do SMARTS searching. In this case, the mapper would simply emit the molecule title along with an indication of whether it matched the supplied pattern (say 1 for a match, 0 otherwise) and the reducer would simply collect the key/value pairs for which the value was 1. Since one could do this with SMILES input, this is quite simple.
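
Purely as an illustration of the idea (a sketch rather than the actual implementation – the SMARTS pattern is hard-coded, each input line is assumed to be a SMILES string followed by a title, and imports are omitted):

    public static class SmartsMatchMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final SmilesParser parser = new SmilesParser(DefaultChemObjectBuilder.getInstance());
        private SMARTSQueryTool sqt;

        protected void setup(Context context) throws IOException, InterruptedException {
            try {
                // hard-coded for illustration; the pattern could come from the job Configuration
                sqt = new SMARTSQueryTool("NC(=O)C(=O)N");
            } catch (CDKException e) {
                throw new IOException(e);
            }
        }

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            // assume each line is "SMILES title"
            String[] tokens = value.toString().trim().split("\\s+");
            try {
                IAtomContainer mol = parser.parseSmiles(tokens[0]);
                // emit (title, 1) on a match, (title, 0) otherwise
                context.write(new Text(tokens[1]), new IntWritable(sqt.matches(mol) ? 1 : 0));
            } catch (CDKException e) {
                // skip molecules that can't be parsed or matched
            }
        }
    }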

A slightly non-trivial task is to apply this framework to SD files. My motivation is that I’d like to run pharmacophore searches across large collections, without having to split up large SD files by hand. Hadoop is an excellent framework for this. The problem is that most Hadoop examples work with line-oriented data files. SD files are composed of multi-line records and so this type of input requires some extra work, which I’ll describe in my next post.

### Debugging note

When debugging it’s useful to edit the Hadoop config file to have

    <property>
      <name>mapred.job.tracker</name>
      <value>local</value>
      <description>foo</description>
    </property>

so that it runs as a single process.

Written by Rajarshi Guha

May 4th, 2009 at 7:24 pm