
    A brief history of clinical research

    By The doc.ai team on November 11, 2020

    Coronavirus, pandemic and COVID-19 are still relatively new additions to our daily vocabularies. We now live in a world that seems much slower, full of constant browser refreshes for updates, nostalgia for the way things were, and carefully planned outings to the grocery store. Yet, a competitive race is underway, as our futures are now tied to the breakthroughs made by medical researchers, and importantly, to the people participating in clinical trials.

    The hunt for vaccines and other therapies brings into sharp focus the importance of the human equation in medical research. Without people participating in clinical trials, there are no advancements in treating disease. 

    In this new blog series, we’re going to look at how clinical trials came about, how they have evolved, and their present-day promise.

     

    The origins of modern clinical research

    Although clinical research has been practiced for thousands of years, one of the first controlled scientific clinical studies is believed to have taken place in the mid-1700s with a sea-faring doctor and a group of sailors with scurvy. 

     

    Dr. James Lind divided his sickly sailors into small groups and treated each group with a different concoction, ranging from seawater to citrus fruits. As the tale goes, the two sailors who consumed the fruit became well enough to care for the others. Although this fresh fruit treatment was considered expensive, further ‘testing’ over the years determined that a less extravagant therapy, lime juice, was just as effective.

     

    The methods of clinical research have improved greatly since that first study, but the purpose of a clinical trial has remained the same. For a study to be rigorous, it must first be reproducible. Second, its results must be independent of the biases of both the investigators and the participants. Finally, it must be designed to answer a scientific question.

     

    Leading up to the 1900s, the United States was full of elixirs and snake oil remedies peddled by people with more moxie than medical credentials. The American Medical Association pushed the government to intervene to protect the public from medicines that posed safety risks. The result was the passage of the 1906 Pure Food and Drug Act and, with it, the formation of the U.S. Food and Drug Administration (FDA).

     

    For many years, the FDA only reviewed the safety of drugs before approving them for public use. It wasn’t until 1968 that the FDA began requiring drug manufacturers to demonstrate their product’s efficacy as part of the approval process. This led to the multi-phase system of clinical trials we have today.

     

    Clinical trials today

    The FDA has different standards and requirements for various categories of health products, including drugs, diagnostics, devices, and cell therapies. For drug approval, for instance, clinical testing generally proceeds through five phases. The drug being tested must meet each phase’s goal before progressing to the next. This rigorous design has been refined over decades to ensure both the safety and efficacy of new therapies.

     

    Phase 1: This is a drug’s first introduction to humans; it is given to a small number of healthy participants. These trials look for harmful side effects and measure properties such as how long the drug lasts in the body.

    Phase 2: In this phase, the drug is tested on several hundred patients who have the condition it is intended to treat. If the drug is determined to be effective with relatively few or minor side effects, it advances to the next stage.

    Phase 3: During Phase 3, a minimum of 300 patients are dosed with the study drug. This phase often pits a new drug against an existing one, comparing their relative effectiveness. Patients are randomly assigned to receive either the new drug or an existing control drug. To prevent bias, the patients, and often the study administrators, do not know which drug is being administered. Critical data on side effects and effectiveness is collected for submission to the FDA for approval.

    Phase 4: Once a drug receives FDA approval, the agency may mandate that it be tested more extensively in thousands of volunteers. The goal in Phase 4 is to collect a greater pool of data on a drug’s effectiveness and safety in more real-world settings. With larger populations being evaluated, rare but serious side effects can be detected, as well as potentially dangerous interactions with other drugs. It is only after a drug or treatment has been evaluated in a Phase 4 clinical trial that it is made broadly available to the market for human consumption.

    Phase 5: Following the release of the drug, the FDA conducts postmarket surveillance to look for additional signals on a drug’s risks, benefits, and optimal uses. Postmarket surveillance is moving toward using digital technologies and smart devices to better help patients track dosing, symptoms, side effects, and benefits.

    Traditionally, participants join a study through their doctor’s office and then keep a paper diary of their symptoms throughout the study. However, subjective patient diaries are notoriously unreliable for understanding a patient’s response to a medication.

    In many ways, we have not seen much advancement in how we perform clinical research since Dr. James Lind’s experiment with his sickly sailors. At doc.ai, we’re focused on reimagining the many parameters of clinical research, from how data is collected to how, and how quickly, it is interpreted. In the next article in this series, we’ll explore challenges to the way research is conducted today, along with opportunities for improvement.

     
     
