Tuesday, September 5, 2000
Getting to products
Venture funding for product companies
Biotech R & D
Sidebar: Back to School: Starting from scratch
The first several generations of biotechnology product companies,
whether working on proteins, antibodies or small molecules, depended on focused
biological insights and small-scale, individual experiments to develop their
initial products and maintain and replenish their pipelines. Now a new generation
of companies, as well as some of the older companies, is looking to discover
products using the shotgun approach of large-scale, brute force experimentation,
hoping that multiple insights will fall out of massively parallel assays and
genome-wide sequence analysis.
An unspoken assumption in the industry today - particularly
among investors - is that the latter formula will triumph. In part, this assumption
is based on necessity: onesy-twosy discovery will not provide enough new chemical
entities (NCEs) per year to maintain the growth rates that investors demand.
But the reality is much more nuanced. The two models are not
mutually exclusive. Moreover, it is probably not possible for the new model
to supplant the old: no matter how targets and drug candidates are discovered,
there is (as yet) no way to avoid doing the biology. Thus a variety of corporate
models will continue to coexist, from niche companies using very few of the
large-scale technologies to large companies combining the full complement of
genomics, proteomics, arrays, informatics, screening, and combinatorial chemistry
discovery tools with focused expertise in multiple areas of biology.
In the end, the most successful companies are likely to use
the data from the new approach to fuel and speed the classical model, or conversely,
apply the biological expertise at the core of the classical model to turn data
from the new approach into useful products.
"If you go back to the classic model, the original protein
companies in some way didn't have to have biological insights because the biology
had been done over the past 50 years - there was just new technology that allowed
you to do things fast," said David Goeddel, CEO of Tularik Inc. (TLRK, South
San Francisco, Calif.). "Then came antibody and small molecule companies that
had to have the biology. They were very selective and could only take on a few targets."
Now, he said, "you have large-scale, massive data. I don't
see that as leading directly to drugs without having the biological insights.
But if it's done the right way and if companies really put all the pieces in
from the databases at one extreme to knockouts of all the genes at the other,
there will be large amounts of important information. But someone will still
have to go through with biological insights and pick targets. You can't make
leads for a thousand targets."
George Yancopoulos, senior vice president of research and chief
scientific officer of Regeneron Pharmaceuticals Inc. (REGN, Tarrytown, N.Y.),
agreed. "We are amazed at the amount of information that's been generated, but
that is the easy part," he said. "The difficult part is translating that into
valuable targets. There is no way of automating the scientific process. Traditional
genomics has not yet led to a major scientific discovery that provides insight
about a disease."
But whether companies begin from individual biological insights
or massive data-crunching, each model is adding pieces from the other.
The classic approach
The bread and butter of the classic approach is the belief that a deep understanding of the biological basis of disease is the best way to discover and develop new drugs. Virtually all of the product companies formed in the late '80s and early '90s started from these roots - and most have been very selective about adding the newer tools without losing their biological backbones.