The ins and outs of drug testing

Exploring the lengthy process through which an initial idea can become a clinically used drug.

Drugs are primarily developed by pharmaceutical companies from a variety of sources. Some are derived from micro-organisms, such as the antibiotic penicillin; others from animals, such as the anticoagulant heparin, originally isolated from animal tissue; and some from plants, such as aspirin, which is derived from salicylic acid, originally extracted from willow bark.

Beyond these natural sources, modern techniques have vastly increased the speed at which new chemical compounds can be produced, whether or not they have an obvious use: it is up to drug testing to find out. Robotics and modern combinatorial chemistry allow up to 50,000 compounds to be synthesised per day, which, together with compound libraries and natural products, gives researchers thousands of chemicals on which to base potential new drugs.

Drug development begins with pre-clinical development. First, researchers draw up a hypothesis predicting the chemical's potential use. Guided by this hypothesis, the thousands of starting compounds, both natural and synthetic, can be screened to find one that supports it. The next stage is to test the compound in an animal model, to see whether the proposed effect occurs in a living organism and to assess its potential toxicity. Should the compound pass this stage, a process of refinement takes place, with chemists subtly altering the chemical structure to find the most potent variant of the active chemical.

Further pharmacological testing then takes place to predict what the human body will do to the drug and to check for any toxic effects. This pre-clinical phase can take between five and ten years before a compound is identified as a potential drug candidate, and only then will it be allowed to enter clinical trials.

There are four stages to clinical trials, and in each, safety information and patient welfare are the primary concerns. Phase 1 involves 20-80 healthy volunteers and is used to find the appropriate dose and tolerance levels. A series of increasing doses is administered so that the body's tolerance and the drug's potential side effects can be established while keeping risk to the volunteers to a minimum.

Phase 2 involves 100-300 patients and tests the compound's efficacy at treating a specific disease, with safety still the priority. Phase 3 involves large groups of 1,000-3,000 patients, in which the new drug is compared with other drugs currently available on the market. If this stage is passed, the drug can be licensed and sold as a therapeutic treatment. The final phase, phase 4, is the continued monitoring and data collection once the drug is on the market, with an emphasis on its effects in different groups and on long-term safety.

Clinical trials can take upwards of ten years and cost between £500 million and £1,000 million. A drug may fail at any stage: only one or two out of every 100 compounds make it to phase 3, with most failing because of the severity of side effects or a lack of efficacy. Seeing an initial hypothesis through to a marketable, effective drug is therefore one of the longest, most expensive and most gruelling challenges a modern scientist can take on.

You may commit to ten years of research only to find that your data from animals does not translate to human patients, or that your compound causes unforeseen and unacceptable side effects. One example is the tragic case of thalidomide in the 1950s and 1960s, which was marketed as a morning sickness treatment and caused severe birth defects. Thanks to this thorough and rigorous process, however, modern drug development is now among the most strictly regulated forms of testing, with patient safety at its core.
