Aifred uses AI to accomplish what was previously not possible with other
methods: helping clinicians and patients make better treatment choices when faced
with a large number of treatment options that are all similarly effective at the population
level, but not at the individual patient level. Along with a network of domain experts in
psychiatry, we are harnessing the power of AI to identify patterns in patient data that can
predict the optimal treatment for each individual patient.
Aifred is also innovating in the area of
clinical interpretability, reducing the ‘black box’ nature of deep learning models and
providing physicians and patients with the rationale for the AI’s treatment
recommendations.
The quality of model predictions is critical, and we analyze it
carefully, taking only the best-performing models into clinical practice. By training our
predictive models on high-quality, reliable clinical datasets, we ensure quality input to
our models. De-identified patient outcomes collected by our tool over time will feed back
into our neural networks to continuously improve Aifred’s predictive power.
Our solution uses AI to learn from the clinical data of thousands of patients
to help tailor individual treatment, significantly reducing the time it takes for a patient
to reach remission. Specifically, we use deep learning to perform differential treatment
selection, which allows us to capture complex relationships within patient data. We have
built a clinician-patient application that allows patients to answer questionnaires about
their mental state and quality of life, and visualizes this data for both the patient and
clinician while providing key decision support for treatment.
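As a minimal sketch of what differential treatment selection can look like, the example below uses a small neural network that maps encoded patient features to a predicted remission probability for each candidate treatment and then ranks the candidates. The feature dimension, treatment names, and architecture are illustrative assumptions, not Aifred's actual model.

```python
# A minimal sketch of differential treatment selection. The architecture,
# treatment list, and feature dimension are hypothetical, not Aifred's model.
import torch
import torch.nn as nn

TREATMENTS = ["sertraline", "escitalopram", "bupropion", "venlafaxine"]  # hypothetical

class TreatmentBenefitNet(nn.Module):
    """Maps encoded patient features to one remission probability per treatment."""

    def __init__(self, n_features: int, n_treatments: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_treatments),  # one logit per candidate treatment
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))

model = TreatmentBenefitNet(n_features=30, n_treatments=len(TREATMENTS))
patient = torch.randn(1, 30)       # stand-in for encoded questionnaire/clinical data
probs = model(patient).squeeze(0)  # predicted remission probability per treatment
for name, p in sorted(zip(TREATMENTS, probs.tolist()), key=lambda t: -t[1]):
    print(f"{name}: {p:.2f}")
```

In a trained model it is this per-treatment ranking, not the untrained random output shown here, that would support the clinician's decision.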
The Aifred solution makes use of innovative and powerful machine learning techniques to predict treatment efficacy based on an array of individual patient characteristics.
Our solution includes a clinical treatment algorithm based on best-practice guidelines, which, combined with the ability to collect patient input, allows for easy implementation of gold-standard measurement-based care.
The tool tracks patient symptoms and test results to monitor outcomes or make new predictions, with banks of standardized questionnaires and data visualizations that can all be tailored to clinicians' needs.
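As an illustration of how a bank of standardized questionnaires can feed the tool, the sketch below scores the PHQ-9, a widely used nine-item depression questionnaire whose items are each scored 0-3 and whose published cut-offs map the total to a severity band. Whether Aifred uses this particular instrument is an assumption made for the example.

```python
# Scoring sketch for the PHQ-9 (a widely used standardized depression
# questionnaire); its use here is an illustrative assumption.
from typing import List, Tuple

PHQ9_BANDS = [  # published severity cut-offs for the 0-27 total score
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(responses: List[int]) -> Tuple[int, str]:
    """Sum nine 0-3 item responses and map the total to a severity band."""
    if len(responses) != 9 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("PHQ-9 expects nine responses, each scored 0-3")
    total = sum(responses)
    band = next(label for lo, hi, label in PHQ9_BANDS if lo <= total <= hi)
    return total, band

print(score_phq9([1, 2, 1, 0, 2, 1, 1, 0, 1]))  # -> (9, 'mild')
```

Tracking such scores visit over visit is the core of measurement-based care: each encounter produces a number that can be visualized for clinician and patient alike.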
Our system provides a report highlighting the features that contributed most to a personalized treatment prediction, enabling the clinician to understand what is behind the information provided and avoiding a “black-box” recommendation from our AI system.
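One simple way to surface which inputs drove a given prediction is gradient-based saliency, sketched below as a stand-in for whatever interpretability method Aifred actually employs; the model and feature names are hypothetical.

```python
# Gradient-saliency sketch: rank input features by how strongly they influence
# one prediction. A stand-in for Aifred's actual interpretability method.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
feature_names = [f"item_{i}" for i in range(30)]  # hypothetical encoded inputs

patient = torch.randn(1, 30, requires_grad=True)  # one patient's encoded features
score = model(patient)                            # predicted remission probability
score[0, 0].backward()                            # gradients w.r.t. each input
saliency = patient.grad.abs().squeeze(0)          # larger = more influential
top = torch.topk(saliency, k=5)
for idx, weight in zip(top.indices.tolist(), top.values.tolist()):
    print(f"{feature_names[idx]}: {weight:.3f}")
```

A clinician-facing report would translate such rankings back into named clinical variables rather than raw feature indices.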
Our team is conducting leading research into new interpretability techniques and into further improving our differential treatment benefit prediction. Our researchers have also conducted in-depth work to identify predictors of treatment response and side-effect burden in depression, including examining the feasibility of adding biomarker testing to routine clinical practice, although our tool does not depend on such inputs today.
Clinical research is focused on validating our model in controlled and real-world conditions. We are running pilots of our current product at three hospital sites in Canada and have also tested the product, with the AI included, in a simulation-center environment. Now we are beginning the first-ever clinical studies of an AI-powered clinical decision support system in depression that does not rely on inputs from additional tests. We are blazing the trail in the clinical validation of deep learning-based clinical decision aids, and as such are investing heavily in developing ethical principles to guide development and testing, and in ensuring the safety and usability of our tools. In fact, ethical development is so important to us that we have created our own ethical framework, known as Meticulous Transparency, to guide our work.
We strongly believe in the potential for AI to enhance, but never replace, physician decision-making. Following this principle, our tool must be user-friendly and provide clinicians and patients with the features they want and need. Initial studies have shown that the majority of clinicians who tried it trust our tool to help them make decisions, and 90% of clinicians say they would use it with their patients who are most in need.
The safety and effectiveness of the AI model must be rigorously assessed in an open-label clinical trial: a group of physicians using our model will be compared to a group practicing usual care, and patient outcomes will be evaluated. We have applied for and received approval from Health Canada to conduct this clinical trial, and we will be meeting with the US FDA to seek approval to conduct our trial in the United States.
Dr. Gustavo Turecki, MD, PhD - Genetics, Dataset Access Content Expert
Dr. Marc Miresco, MD, MSc - External Psychiatry Services Content Expert
Dr. Gail Myhr, MDCM, Dip Psy, MSc, FRCP - Cognitive Behavioral Therapy Content Expert
Dr. Marcelo Berlim, MD, MSc - Literature Review and Neuromodulation Content Expert
Dr. Howard Margolese, MD, CM, MSc, FRCPC - Clinical Trial Expert
Dr. Daniel Blumberger, MD, MSc, FRCPC - rTMS and Neuromodulation Content Expert
Dr. Simone Vigod, MD, MSc, FRCPC - Guidelines, Best Practices, Cultural Safety Content Expert
Dr. Sagar Parikh, MD - Guidelines, Best Practices, Cultural Safety Content Expert
Tristan Sylvain - Machine Learning Content Expert
Dr. Margaux Luck - Machine Learning Content Expert
Dr. Wendell Wallach - Bioethics and AI Ethics Content Expert
Dr. Jordan Karp, MD - Late Life Depression Content Expert
Dr. Ann John, PhD - Suicide and Epidemiology Content Expert
Dr. Marcos Del Pozo Baños, PhD - Machine Learning and Medical Records Content Expert
Tung Tran - Director of Mental Health and Addiction Programs