Archive for the ‘conference’ tag
I just got back from ACoP7, the yearly meeting of the International Society of Pharmacometrics (ISoP). Now, I don’t do any PK/PD modeling (hence the “strange land”) but was invited to talk about our high throughput screening platform for drug combinations. I also hoped to learn a little more about this field as well as get an idea of the state of quantitative systems pharmacology (QSP). This post is a short summary of some aspects of the meeting and the PK/PD field that caught my eye, especially as an outsider to the field (hence the “stranger”).
The practice of PK/PD is clearly quite a bit downstream in the drug development pipeline from where I work, though it can be beneficial to keep PK/PD aspects in mind even at the lead discovery/optimization stages. However, I did come across a number of talks and posters that attempted to bridge the pre-clinical and clinical stages (and in some cases even made use of in vitro data). As a result the types of problems being considered were interesting and varied – ranging from models of feeding to predict weight loss/gain in neonates to analyzing drug exposure using mechanistic models.
A lot of PK/PD problems are addressed using model-based methods, as opposed to machine learning methods (see Breiman, 2001). I have some familiarity with the types of statistics used, but in practice much of my work is better suited to machine learning approaches. However, I did come across nice examples of some methodologies that may be useful in QSAR-type settings – including mixed effect models, IRT models and Bayesian methods. It was also nice to see a lot of people using R (ISoP even runs a Shiny server for members’ applications) and companies providing R solutions (e.g., Metrum, Mango), and I came across a nice poster (Justin Penzenstadler, UMBC) comparing various R packages for NLME modeling. I also came across Stan, which seems like a good way to get into Bayesian modeling. Certainly worth exploring more.
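For readers (like me) coming from the ML side, the appeal of mixed effect models is that they separate a population-level trend from subject-level variability. A minimal sketch in Python – using statsmodels rather than the R/NLME tools mentioned above, with entirely simulated data and made-up parameter values:

```python
# Random-intercept mixed effects model, the kind of structure underlying
# population-style analyses. Everything here is simulated: the "true"
# population intercept is 2.0 and the slope on dose is 0.5.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in range(20):                      # 20 subjects
    subj_shift = rng.normal(0.0, 1.0)       # between-subject variability
    for _ in range(8):                      # 8 observations per subject
        dose = rng.uniform(1.0, 10.0)
        resp = 2.0 + 0.5 * dose + subj_shift + rng.normal(0.0, 0.3)
        rows.append({"subject": subj, "dose": dose, "response": resp})
df = pd.DataFrame(rows)

# Fixed effect for dose, random intercept per subject
fit = smf.mixedlm("response ~ dose", df, groups=df["subject"]).fit()
print(fit.params["dose"])                   # population slope, near 0.5
```

The same random-intercept structure is what the NLME packages in that poster fit, except with nonlinear (e.g., compartmental) mean functions instead of a straight line.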
The data used in a lot of PK/PD problems is also qualitatively (and quantitatively) different from my world of HTS and virtual screening. Datasets tend to be smaller and noisier, which makes them challenging to model (hence the lesser focus on purely data-driven, distribution-free ML methods). A number of presentations showed results with quite wide CIs and significant variance in the observed properties. At the same time, models tend to be smaller in terms of features, which are usually driven by the disease state or the biology being modeled. This is in contrast to the thousands of descriptors we deal with in QSAR. However, even with smaller feature sets I got the impression that feature selection (aka covariate selection) is a challenge.
Finally, I was interested in learning more about QSP. Having followed this topic on and off (my initiation was this white paper), I wasn’t really up to date and was a bit confused about the difference between QSP and physiologically based PK (PBPK) models, and hoped this meeting would clarify things a bit. Some of the key points I was able to garner:
- QSP models could be used to model PK/PD but don’t have to. This seems to be the key distinction between QSP and PBPK approaches.
- Building a comprehensive model from scratch is daunting and, speaking to a number of presenters, it turns out many tend to reuse published models and tweak them for their specific system (which also leads one to ask what counts as a “useful” model).
- Some models can be very complex – hundreds of ODEs – and there were posters that went with such large models but also some that went with smaller, simplified models. It seems one can ask “How big a model do you need to get accurate results?” as well as “How small a model can you get away with and still get accurate results?”. Model reduction/compression seems to be an actively addressed topic.
- One of the biggest challenges for QSP models is the parametrization – which appears to be a mix of literature hunting, guesswork and some experiment. Examples where the researcher used genomic or proteomic data (e.g., Jaehee Shim, Mount Sinai) were more familiar to me, but nonetheless daunting to someone who would like to use some of this work but is not an expert in the field (or a grad student who doesn’t sleep). PK/PD models tend to require fewer parameters, though PBPK models are closer to QSP approaches in terms of their parameter space.
- Where does one find models and parameters in reusable (aka machine-readable) formats? This is an open problem, and efforts such as DDMoRe are addressing it with a repository and annotation specifications.
- Much of QSP modeling is done in Matlab (and many published models are distributed as Matlab code, rather than in a more general/abstract model specification). I didn’t really see alternatives to the ODE approach (e.g., agent-based models) for QSP models.
- ISoP has a QSP SIG which looks like an interesting place to hang out. They’ve put out some papers that clarify aspects of QSP (e.g., a QSP workflow) and lay out a roadmap for future activities.
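To make the ODE-based modeling style in the points above concrete, here is a deliberately tiny sketch in Python (the QSP models discussed had hundreds of equations, and were typically in Matlab): a one-compartment PK model with first-order absorption and elimination. All parameter values are invented for illustration.

```python
# One-compartment PK model: drug moves from the gut to a central
# compartment (absorption) and is then cleared (elimination).
import numpy as np
from scipy.integrate import solve_ivp

ka, ke, V = 1.0, 0.2, 10.0          # absorption rate (1/h), elimination rate (1/h), volume (L)
dose = 100.0                        # mg, given orally at t = 0

def rhs(t, y):
    gut, central = y
    return [-ka * gut,                  # drug leaving the gut
            ka * gut - ke * central]    # absorbed minus eliminated

sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 25)
conc = sol.sol(t)[1] / V            # plasma concentration (mg/L) over 24 h
print(round(conc.max(), 2))         # Cmax, reached around t ~ 2 h
```

A QSP model is essentially this, scaled up: many more state variables tied to specific biology, which is exactly why the parametrization and model-reduction questions above become the hard part.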
So, QSP is very attractive since it has the promise of supporting mechanistic understanding of drug effects but also allowing one to capture emergent effects. However, it appears to be very problem & condition specific and it’s not clear to me how detailed I’d need to get to reach an informative model. It’s certainly not something I can pull off-the-shelf and include in my projects. But definitely worth tracking and exploring more.
Overall, it was a nice experience and quite interesting to see the current state of the art in PK/PD/QSP and learn about the challenges and successes that people are having in this area. (Also, ISoP really should make abstracts publicly linkable).
Dear Colleagues, we are organizing a symposium at the Fall ACS meeting in Philadelphia focusing on computational, experimental and hybrid approaches to characterizing the unstudied and understudied druggable genome. In 2014 the NIH initiated a program titled “Illuminating the Druggable Genome” (IDG) with the goal of improving our understanding of the properties and functions of proteins that are currently unannotated within the four most commonly drug-targeted protein families – GPCRs, ion channels, nuclear receptors and kinases. As part of this program a Knowledge Management Center (KMC) was formed as a collaboration between six academic centers, whose goal is to develop an integrative informatics platform to collect data, develop data-driven prioritization schemes and analytical methods, and disseminate standardized/annotated information related to the unannotated proteins in the four gene families of interest.
In this symposium, members of the various components of the IDG program will present the results of ongoing work related to experimental methods, target prioritization, data aggregation and platform development. In addition, we welcome contributions related to the identification of druggable targets, approaches to quantify druggability and novel approaches to integrating disparate data sources with the goal of shedding light on the “dark genome”.
The deadline for abstract submissions is March 29, 2016. All abstracts should be submitted via MAPS at http://bit.ly/1mMqLHj. If you have any questions feel free to contact Tudor or me.
University of New Mexico
Another ACS National meeting is over, this time in San Diego. It was good to catch up with old friends and meet many new, interesting people. As I was there for a relatively short period, I bounced around most sessions.
MEDI and COMP had a joint session on desktop modeling and its utility in medicinal chemistry. Anthony Nicholls gave an excellent talk in which he differentiated between “strong signals” and “weak signals” – the former being extremely obvious trends, features or facts that do not require a high degree of specialized expertise to detect, and the latter being those that require significantly more expertise to identify. An example of a strong signal would be an empty region of a binding pocket that is not occupied by a ligand feature – it’s pretty easy to spot, and when highlighted the possible actions are also obvious. A weak signal could be a pi-stacking interaction, which can be difficult to identify in a crowded 3D diagram. He then highlighted how simple modifications to traditional 2D depictions can make the obvious more obvious, and make features that might be subtle in 3D more apparent in 2D. Overall, an elegant talk that focused on how simple visual cues in 2D & pseudo-3D depictions can key the mind to focus on important elements.
There were two other symposia that were of particular interest. On Sunday Shuxing Zhang and Sean Ekins organized a symposium on polypharmacology with an excellent line-up of speakers including Chris Lipinski. Curt Breneman gave a nice talk that highlighted best practices in QSAR modeling and Marti Head gave a great talk on the role and value of docking in computational modeling projects.
On Tuesday, Jan Kuras and Tudor Oprea organized a session on Systems Chemical Biology. Though the session appeared to be more on the lines of drug repurposing, there were several interesting talks. Elebeoba May from Sandia Labs gave a very interesting talk on a system-level model of small molecule inhibition of M. tuberculosis and F. tularensis – combining metabolic pathway models and cheminformatics.
John Overington gave a very interesting talk on identifying drug combinations to improve safety. Contrary to much of my reading in this area, he pointed out the value of “me-too” drugs and of taking combinations of such drugs. Given that such drugs hit the same target, a combination means that each off-target sees a reduced concentration of the individual drug that binds it (hopefully reducing side effects), while the shared on-target sees the pooled concentration (thus, presumably, maintaining efficacy). It’s definitely a contrasting view to the one where we identify combinations of drugs hitting different targets (which I’d guess is a tougher proposition, since identifying a truly synergistic combination requires detailed knowledge of the underlying pathways and interactions). He also pointed out that his analyses indicated that dosing in combinations is not actually reduced, in contrast to the current dogma.
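The arithmetic behind the pooling argument can be sketched in a toy example (the numbers are purely illustrative, not from the talk):

```python
# Two "me-too" drugs sharing a primary target, each given at half its
# usual dose. The shared target sees the pooled exposure; each drug's
# distinct off-targets see only that drug's halved dose.
full_dose = 1.0
drug_a = 0.5 * full_dose            # half dose of drug A
drug_b = 0.5 * full_dose            # half dose of drug B

shared_on_target = drug_a + drug_b  # pooled exposure at the common target
off_target_a = drug_a               # A-specific off-targets: half exposure
off_target_b = drug_b               # B-specific off-targets: half exposure

print(shared_on_target, off_target_a, off_target_b)
```

In other words, on-target exposure is preserved while each drug-specific off-target exposure is halved – which is the safety argument, assuming roughly additive action at the shared target.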
As before we had a CINFlash session which I think went quite well – 8 diverse speakers with a pretty good audience. The slides of the talks have been made available and we plan to have another session in Philadelphia this Fall, so consider submitting something. We also had a great Scholarships for Scientific Excellence poster session – 15 posters covering topics ranging from reaction prediction to an analysis of retractions. Excellent work, and very encouraging to see newcomers to CINF interested in getting more involved.
The only downsides to the meeting were the chilly, unsunny weather and the fact that people still think that displaying tables of numbers in a slide actually transmits any information!
A few openings are left for the International Conference on Chemical Structures (ICCS)
A little less than 40 days left until the 9th International Conference on Chemical Structures (ICCS) starts in Noordwijkerhout, The Netherlands. The conference will focus on the latest scientific and technological developments in cheminformatics and related areas in six plenary sessions:
o Structure-Activity and Structure-Property Prediction
o Structure-Based Drug Design and Virtual Screening
o Analysis of Large Chemistry Spaces
o Integrated Chemical Information
o Dealing with Biological Complexity
34 scientific lectures and 80 posters in two poster sessions will present applications and case studies as well as method development and algorithmic work in these areas. The program will open with a presentation by Engelbert Zass, ETH Zürich, who has been awarded the CSA Trust Mike Lynch Award on the occasion of the 9th ICCS. We invite you to have a look at the scientific program which is now available at the website www.int-conf-chem-structures.org.
In addition to the scientific program there will be a commercial exhibition with 16 leading cheminformatics software suppliers. The participation of scientists from more than 20 countries will make this a truly international event with ample opportunities to network and discuss science.
Free workshops will be offered before and after the official conference program by BioSolveIT (www.biosolveit.de), The Chemical Computing Group (www.chemcomp.com), Tripos (www.tripos.com), and Accelrys (www.accelrys.com).
On Wednesday afternoon there is a sailing cruise on the IJsselmeer on two traditional sailing boats. They will leave from the scenic Muiderslot castle, and then sail to the picturesque fishing village Volendam where the old village can be explored. A banquet dinner will be served on the boats on the way back.
If you are planning to attend, we encourage you to register as soon as possible through the conference web site: www.int-conf-chem-structures.org.
We are looking forward to meeting with you all in Noordwijkerhout.
Keith T Taylor, ICCS Chair
Markus Wagener, ICCS Chair
Call for Papers: High Content Screening: Exploring Relationships Between Small Molecules and Phenotypic Results
242nd ACS National Meeting
Denver, Aug 28 – Sept 1, 2011
Dear Colleagues, we are organizing an ACS symposium focusing on the use of High Content Screening (HCS) for small molecule applications. High content screens, while resource intensive, are capable of providing a detailed view of the phenotypic effects of small molecules. Traditional reporter-based screens are characterized by a one-dimensional signal. In contrast, high content screens generate rich, multi-dimensional datasets that allow wide-ranging and in-depth analysis of many aspects of chemical biology, including mechanisms of action, target identification and so on. Recent developments in high-throughput HCS pose significant challenges throughout the screening pipeline, ranging from assay design and miniaturization to data management and analysis. Underlying all of this is the desire to connect chemical structure to phenotypic effects.
We invite you to submit contributions highlighting novel work and new developments in High Content Screening (HCS), High Content Analysis (HCA), and data exploration as it relates to the field of small molecules. Topics of interest include, but are not limited to:
- Compound & in silico screening for drug discovery
- Compound profiling by high content analysis
- Chemistry & probes in imaging
- Lead discovery strategies – one size fits all or horses for courses?
- Application of HCA in discovery toxicology screening strategies
- Novel data mining approaches for HCS data that link phenotypes to chemical structures
- Software & informatics for HCS data management and integration
NIH Chemical Genomics Center