Chris Lipinski, author of the 1997 'rule of five',¹ believes that although the rule has altered medicinal chemistry for oral small molecule drugs, predicting the behavior of newer biologics might not be far off, and optimizing the delivery of RNA or protein therapeutics could be the next big opportunity for computer-based predictions.

The rule of five, developed while Lipinski was a senior research fellow at Pfizer Inc., outlines four simple criteria to help design orally available drugs. SciBX talked with Lipinski, now scientific advisor at Melior Discovery Inc., to hear his thoughts on the impact of the rule of five and how he thinks current predictive approaches might advance drug design.

 

SciBX: What was the impetus that led you to develop the rule of five?

 

Chris Lipinski: At that time, 1997, at least 90% of the small molecule medicinal chemistry efforts were directed at oral compounds. People were heavily influenced by a high throughput screening philosophy and were making large numbers of compounds that were evaluated for potency without regard for anything else.

Normally, increasing potency means making compounds that are larger and more lipophilic, which creates a lot of problems for getting good oral absorption.

The original purpose of the rule of five was to shift the physicochemical property profiles of compounds being made by medicinal chemists to increase the likelihood of getting an orally active compound (see Box 1, "Rules of engagement").
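For readers who want to apply the criteria programmatically, the check is a simple count of cutoff violations. Here is a minimal sketch in Python using the open-source RDKit toolkit; the library choice and example molecule are our illustration, not part of the original work:

    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    def rule_of_five_violations(smiles):
        """Count rule-of-five violations for a molecule given as SMILES."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            raise ValueError("could not parse SMILES: " + smiles)
        violations = 0
        if Descriptors.MolWt(mol) > 500:      # molecular weight > 500 Da
            violations += 1
        if Descriptors.MolLogP(mol) > 5:      # calculated logP > 5
            violations += 1
        if Lipinski.NumHDonors(mol) > 5:      # more than 5 H-bond donors
            violations += 1
        if Lipinski.NumHAcceptors(mol) > 10:  # more than 10 H-bond acceptors
            violations += 1
        return violations

    # Ibuprofen passes with zero violations.
    print(rule_of_five_violations("CC(C)Cc1ccc(cc1)C(C)C(=O)O"))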

 

SciBX: Do you believe the rule of five has changed how medicinal chemists create compounds?

CL: Absolutely. Based on medicinal chemistry sessions at the American Chemical Society national meeting, where some of the hottest compounds going into the clinic are presented for the first time, it seems that on average more than 50% of the effort goes toward optimizing properties other than potency.

I would say that across the board, those simple principles were successful. At Pfizer they definitely moved the medicinal chemistry profiles in a positive direction, and at most other organizations I think they did also.

 

SciBX: When you came up with the rule of five, were there compound classes you knew or suspected it would not apply to?

 

CL: Yes. At Pfizer we looked for compounds that were orally active yet broke the rule, and we found they were mostly natural products.

One of the big problems with natural products is that we don't understand shape: typically, natural products are large and can form intramolecular hydrogen bonds. We do not do well computationally with cyclic structures like these or with predicting restrained conformational mobility.

For example, cyclosporine breaks every parameter in the rule of five, but in fact with some formulation work you get acceptable oral absorption because cyclosporine is a macrocyclic peptide that behaves like a molecular chameleon.

In a lipophilic environment, the N-methyl groups, which are greasy, stick on the outside and all the polar groups are buried on the inside, whereas when the molecule is in a polar aqueous environment it reverses so all the lipophilic N-methyl groups are on the inside and all the polar ones are on the outside.

I strongly suspect that many of the natural products that are orally active and have good membrane penetration properties in fact have this molecular chameleon-type property. We could try to come up with rules for that. But we don't have any way of taking large lists of complex natural products and figuring out what the shape would be in a lipophilic or hydrophilic environment and whether the energy levels would support the compounds flipping shape.

If we understood that, then we could learn from evolution and come up with designs that might help us get into those difficult targets where the ligands are out of the rule of five space.

 

SciBX: Your original predictions related to small molecule drugs. Can we make similar predictions for other modalities?

 

CL: We are beginning to see better software for dealing with proteins. I often talk about the 'in-between-world size'.

Three years ago, if you had something that was [molecular weight] 20 kDa, the small molecule software didn't work well.

Now, we are getting to the position that we can experimentally make complex agents as single compounds that differ by post-translational modification or position of PEGylation, even though they're in the biologic size range.

That gets closer to the world of small molecules, which deals with discrete single compounds. Then the kinds of rules and discoveries about SAR [structure-activity relationship] properties and matched pairs in the small molecule world might also apply to the larger molecules.

I think the driver for progress here will be technology advancements for controlling the synthesis and manufacture of some of these complex compounds.

 

SciBX: Do you think there are newer strategies that could help improve on the rule of five?

 

CL: Matched fragment pairs add value, typically in lead optimization, and unlike the physicochemical properties used in the rule of five, this approach directly incorporates experimental data from in vitro and in vivo screening.

If you have a molecule that looks good but has one problematic property, such as clearance, the team can make a series of matched fragment pairs and look at specific ADME parameters to help optimize the molecule.
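To make the idea concrete, here is a minimal sketch in Python; the scaffolds, fragments, and clearance values are invented for illustration and do not come from the interview. It groups compounds that share a core, treats every pair that differs by a single fragment as a matched pair, and averages the property change per fragment swap:

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical lead-optimization data: (core scaffold, variable fragment,
    # measured clearance in mL/min/kg). All values invented for illustration.
    records = [
        ("core-A", "-H", 45.0),
        ("core-A", "-F", 22.0),
        ("core-A", "-CF3", 12.0),
        ("core-B", "-H", 60.0),
        ("core-B", "-F", 31.0),
    ]

    # Group compounds that share a scaffold.
    by_core = defaultdict(list)
    for core, frag, clearance in records:
        by_core[core].append((frag, clearance))

    # Within a scaffold, every pair differs only by the fragment: a matched pair.
    deltas = defaultdict(list)
    for members in by_core.values():
        for (f1, c1), (f2, c2) in combinations(members, 2):
            deltas[(f1, f2)].append(c2 - c1)

    # Average effect of each fragment swap across scaffolds.
    for (f1, f2), ds in deltas.items():
        print(f"{f1} -> {f2}: mean clearance change {sum(ds) / len(ds):+.1f}")

With enough pooled data, the same tabulation reveals which fragment swaps reliably lower clearance or toxicity, which is the point Lipinski makes about data sharing later in the interview.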

 

SciBX: There are various software packages that predict how small molecules will behave based on their chemical structures. How good do you think those programs are, and what is the most effective way to implement the information they yield?

 

CL: In general, the usefulness depends on how structurally similar your new compound is to the software's training set of molecules.
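One common way to act on this is an applicability-domain check: before trusting a prediction, measure how similar the new structure is to the model's training molecules. Here is a minimal sketch using RDKit Morgan fingerprints and Tanimoto similarity; the molecules and any similarity cutoff are our illustration, not something specified in the interview:

    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    def morgan_fp(smiles):
        """Morgan (circular) fingerprint, radius 2, 2,048 bits."""
        return AllChem.GetMorganFingerprintAsBitVect(
            Chem.MolFromSmiles(smiles), 2, nBits=2048)

    def max_similarity(query_smiles, training_smiles):
        """Highest Tanimoto similarity of a query to any training molecule."""
        query = morgan_fp(query_smiles)
        return max(DataStructs.TanimotoSimilarity(query, morgan_fp(s))
                   for s in training_smiles)

    # Hypothetical training set; a low score flags an out-of-domain query.
    train = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]
    print(max_similarity("CC(C)Cc1ccc(cc1)C(C)C(=O)O", train))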

For absorption, with good experimental input parameters you can generate fairly good simulations of the plasma profile of the compound and some of the clearance parameters. That aspect is definitely useful.

For predictive software, like Derek Nexus, the main value is in the toxicity alerts. People would seldom exclude a compound at a post-screening stage because of a general alert in a toxicity-prediction program. But if they receive an alert for a specific organ toxicity that is based on testing data in the literature [such as Derek provides], they can look at aspects of the chemistry that can be changed to resolve that issue but retain potency. The specific character of the toxicity prediction makes it easier to test whether the prediction of a problem is correct and whether the new chemistry changes have solved the problem.
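The alert mechanism itself is easy to picture: substructure patterns matched against a candidate structure. The sketch below uses two generic SMARTS alerts chosen purely for illustration; they are not Derek Nexus's actual knowledge base, which ties alerts to literature toxicity data:

    from rdkit import Chem

    # Two illustrative structural alerts (generic examples, not Derek's rules).
    ALERTS = {
        "aromatic nitro": "[c][N+](=O)[O-]",
        "acyl halide": "C(=O)[Cl,Br,I]",
    }

    def toxicity_alerts(smiles):
        """Return the names of alert substructures found in the molecule."""
        mol = Chem.MolFromSmiles(smiles)
        return [name for name, smarts in ALERTS.items()
                if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

    print(toxicity_alerts("O=[N+]([O-])c1ccccc1"))  # -> ['aromatic nitro']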

 

SciBX: Has computational prediction peaked or can it still improve?

 

CL: It could get better from two angles.

First, data quality: predictions are only as good as the data behind them. Increasing numbers of people are talking about this in both the biology and chemistry realms. In chemistry, some journals require quality parameters to confirm a compound's identity and level of purity.

Second, there is a big movement toward cooperation and collaboration between the different players, for example, by precompetitive data sharing. This can increase the amount of data being used to make the predictions and so improve the quality.

For example, it makes sense for organizations to combine datasets on matched fragment pairs as the quantity of relevant experimental data that comes out of high throughput screening is very small. Pooling data will make it more likely you'll get matched fragment pairs that enable you to deduce rules about which fragments are more metabolically stable or give you lower toxicity in specific assays.

 

SciBX: Do you believe that ultimately these predictive programs can lead to shorter drug development times and cost savings that will affect the industry?

 

CL: I'm not sure about shorter. It may reduce attrition. These kinds of tools, used efficiently and knowledgeably, can eliminate a lot of the mistakes and wasted effort that currently go on.

 

SciBX: What has surprised you most in the field of molecule design and predicting drug behavior, and what are the biggest successes and failures?

 

CL: The success of fragment screening took me completely by surprise. If someone had told me 15 years ago that you could have a small lipophilic fragment that would bind selectively, and that screening compounds in the 100 μM to [low] mM range could give you enough information to optimize and reach a clinical candidate in just 100 or 150 compounds, I wouldn't have believed it. But that's what happened, and that technology is a rare success story.

Something that really didn't work nearly as well as initially advertised was the promise of X-ray structure-based drug design. When structures of ligands docked to proteins became more accessible, companies were founded on the basis that knowing the X-ray structure would lead directly to designing a ligand for that target. Most of those companies didn't make it because people didn't understand protein flexibility.

A lot of proteins, including just about all of the kinase targets, change their shape when they bind a ligand. Any computational approach that can't take that into account is going to have a problem.

It is still a big issue, as it's very hard to predict what the eventual protein conformation will be when a protein structure comes in proximity [to] a small ligand. Part of the problem is that you have to run the computer simulations for a very long time to see the movement of the protein, which is very slow on the time scale of a molecular dynamics simulation.

Although the idea was good, the technology didn't work because not enough was known about the science. This is a general phenomenon in science: everything looks better at an early stage, before you have learned about the intricacies.

 

SciBX: What might be the next area of drug development in which in silico approaches will break through?

 

CL: One of the biggest challenges is the delivery of protein and RNA therapeutics. There are certain adjuvants that work partially, often by forming a complex between the compound and a cationic carrier.

In silico approaches could help with questions such as which compound characteristics generate a long enough residence time on the cell surface to enable cellular uptake and intracellular release. There is a window of time before proteolytic enzymes and low pH destroy the agent, and using computer approaches to optimize the biophysics of that window would be helpful.

In addition, in silico approaches could help in pharmacodynamics. Drugs such as antidepressants can take two to three weeks to have a measurable effect, but we know little about what's happening in that time.

There is a lot of interest in using systems biology to understand questions like this, especially at NIH. If we understood more about the behavior of biological networks, then our predictions about efficacy, which right now are absolutely horrible, would improve.

 

SciBX: Thank you very much for your time.

Fishburn, C.S. SciBX 6(46); doi:10.1038/scibx.2013.1309
Published online Dec. 5, 2013

REFERENCES

1.   Lipinski, C.A. et al. Adv. Drug Deliv. Rev. 23, 3-25 (1997)

2.   Lowe, D. Lipinski's anchor. Corante (Nov. 25, 2013)

COMPANIES AND INSTITUTIONS MENTIONED

American Chemical Society, Washington, D.C.

Melior Discovery Inc., Exton, Pa.

National Institutes of Health, Bethesda, Md.

Pfizer Inc. (NYSE:PFE), New York, N.Y.