Today I got an email asking whether it’d be possible to speed up a fingerprint similarity matrix calculation in R. Now, pairwise similarity matrix calculations (whether they’re for molecules or sequences or anything else) are by definition quadratic in nature, so performing them for large collections isn’t always feasible; in many cases it’s worthwhile to rethink the problem.
But for those situations where you do need to evaluate it, a simple way to parallelize the calculation is to evaluate the similarity of each molecule against all the rest in parallel. This means each process/thread must have access to the entire set of fingerprints, so again, for very large collections this is not always practical. For smaller collections, however, parallel evaluation can lead to useful speed-ups.
The fingerprint package provides a method to directly get the similarity matrix for a set of fingerprints, but it’s implemented in interpreted R and so is not very fast. Given a list of fingerprints, a manual evaluation of the similarity matrix can be done using nested lapply calls:
```r
library(fingerprint)
sims <- lapply(fps, function(x) {
  unlist(lapply(fps, function(y) distance(x, y)))
})
```
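The result is a list of per-molecule similarity vectors rather than a matrix, so one more step is needed to get the actual matrix. Here’s a minimal sketch, continuing from the snippet above; the built-in route mentioned earlier is, as far as I remember, fp.sim.matrix, so treat that function name as an assumption:

```r
## Stack the per-molecule similarity vectors into an n x n matrix
sim.mat <- do.call(rbind, sims)

## The package's built-in equivalent (function name assumed to be fp.sim.matrix)
sim.mat2 <- fp.sim.matrix(fps, method = "tanimoto")
```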
For 1012 fingerprints, this takes 286s on my MacBook Pro (4GB, 2.4 GHz). Using snow, we can convert this to a parallel version, which takes 172s on two cores:
```r
library(fingerprint)
library(snow)

## Start a socket cluster and give each worker the fingerprint
## package plus a copy of the fingerprint list
cl <- makeCluster(4, type = "SOCK")
clusterEvalQ(cl, library(fingerprint))
clusterExport(cl, "fps")

## Each task evaluates one molecule against the full set
sim <- parLapply(cl, fps, function(x) {
  unlist(lapply(fps, function(y) distance(x, y)))
})
stopCluster(cl)
```
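Since the similarity matrix is symmetric, roughly half of those pairwise evaluations are redundant. Below is a sketch (my own addition, not part of the timings above) that distributes row indices instead of fingerprints, so each task only computes the upper triangle of its row and the result is mirrored afterwards:

```r
library(fingerprint)
library(snow)

cl <- makeCluster(4, type = "SOCK")
clusterEvalQ(cl, library(fingerprint))
clusterExport(cl, "fps")

n <- length(fps)

## For row i, only compute similarities against columns j >= i
upper <- parLapply(cl, seq_len(n), function(i) {
  sapply(i:n, function(j) distance(fps[[i]], fps[[j]]))
})
stopCluster(cl)

## Mirror the upper triangle into a full symmetric matrix
sim.sym <- matrix(0, n, n)
for (i in seq_len(n)) {
  sim.sym[i, i:n] <- upper[[i]]
  sim.sym[i:n, i] <- upper[[i]]
}
```

One caveat: later rows have less work to do, so the static split used by parLapply will be unbalanced; snow’s clusterApplyLB is the load-balanced alternative.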