AI in Mental Healthcare

The fields of mental healthcare and AI have developed in parallel. Because the concept of AI is inspired by the workings of the human brain, psychologists and computer scientists have worked hand in hand since the days of the Dartmouth workshop to build AI that propels human growth. Psychology has contributed significantly to the advancement of AI; in fact, the first artificial neural network (ANN), the Perceptron, was invented by a psychologist, Frank Rosenblatt, and could differentiate between simple shapes such as a circle and a triangle. We have come a long way since then, and today AI can do far more. In mental healthcare specifically, it can potentially detect symptoms of mental health issues in an individual, recommend a treatment, or even deliver therapy.

Advances in AI algorithms, fuelled by exponential growth in computational power and data from the tech and internet boom, are creating a new frontier for human development. Digitally delivered mental healthcare is one of those frontiers: powered by AI, it can dramatically expand access to affordable, culturally relevant, medical-grade care across the globe. Making this our mission, Cerina has embarked on a journey to create robust AI solutions that aid mental healthcare delivery. It is about time AI gave something back to the field of psychology!

But what about the trustworthiness of AI in applications such as mental healthcare? Scientists have grappled with questions of AI ethics since the 1950s, and we need to answer them now. As the developers of a medical-grade device, we have always placed safety, reliability and clinical efficacy at the forefront of our development ethos. We constantly ask ourselves: Is AI even ready for an application as sensitive as mental health? What about bias? Will it be reliable? What if AI makes mistakes? Who will be responsible? The lack of regulations, guidelines, laws and standards surrounding this topic makes these questions challenging to answer.

In the early days, regulation did not keep pace with AI's rapid growth. Recently, however, a collective effort led by policymakers and industry leaders has expedited the discussion on this topic and resulted in robust frameworks guiding the development of AI.

Some of these frameworks and initiatives include:


Cerina AI Principles

At Cerina, we understand the importance of having a solid ethical foundation for AI development. Below are our foundational principles for AI ethics; they act as our north star when it comes to developing AI responsibly.

  • Consent, Control, Privacy & Security for Data

Cerina's robust privacy policy, inspired by the work of the Mozilla Foundation's 'Privacy Not Included' project, dictates our privacy approach. We inform our users about every data point we collect, its use cases, and how we process and delete it after it has served its purpose. We only collect data after acquiring the user's consent. We respect their data protection rights and give them complete control of their data, and we use robust encryption protocols for data in transit and at rest. We encourage you to review our industry-leading privacy policy and contact us with any queries.
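To make "encryption at rest" concrete, here is a minimal Python sketch using the widely used `cryptography` library's Fernet recipe (AES-based authenticated encryption). It is a hypothetical illustration only, not Cerina's production implementation; in practice keys would come from a dedicated secrets manager.

```python
# Minimal sketch: symmetric encryption of a user record at rest (hypothetical).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production, loaded from a secure vault
cipher = Fernet(key)

record = b'{"user_id": 42, "mood_score": 7}'   # made-up example record
encrypted = cipher.encrypt(record)   # ciphertext that would be written to storage
decrypted = cipher.decrypt(encrypted)

assert decrypted == record
print("stored ciphertext prefix:", encrypted[:16], b"...")
```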

  • Non-Discrimination & Fairness

We verify the accuracy of the datasets used to train our algorithms and ensure they are representative of the population in which they are used. In addition, we proactively pay special attention to promoting and protecting the equality of individuals and ensure that our AI does not perpetuate discrimination or unfair stereotypes. We conduct routine algorithmic audits and continuously analyse end-user feedback to minimise inaccuracies and biases. Our AI technologies are designed for universal usage, and we are working towards making the Cerina app and all its tools available in multiple languages across the globe. Rather than performing literal translation, we will adapt the app and AI tools in a culturally contextual manner.
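One simple form of algorithmic audit is comparing a model's performance across demographic groups. The sketch below is a hypothetical illustration in plain Python; the group names, labels and predictions are made up and do not reflect any real audit data.

```python
# Minimal sketch of a per-group accuracy audit (hypothetical data).
from collections import defaultdict

# (group, true_label, predicted_label) triples from a held-out audit set
audit_set = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in audit_set:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in total:
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy = {accuracy:.2f}")

# A large accuracy gap between groups would trigger a deeper bias review.
```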

  • Accuracy, Safety, Risk Minimisation & Trust

AI applications in healthcare must be as close to 100% accurate as possible, so we work to make our AI models as precise as we can. Through frequent audits, routine tests and updates, we maintain the accuracy necessary for a medical-grade device's safety and clinical efficacy. We also employ mechanisms that minimise the risk of failure by distributing it across multiple, different models: any prediction, classification or detection is made by several models based on different algorithms, and the results are combined to produce the final output. We regularly review our models' accuracy against industry norms and standards, and we take proactive measures to prevent our AI from being influenced or misused by bad actors.
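The risk-sharing approach described above is, in essence, an ensemble: several independently trained models make a prediction and their outputs are combined. The snippet below is a minimal, hypothetical sketch using scikit-learn's VotingClassifier with placeholder models and synthetic data; it illustrates the general technique rather than Cerina's actual model stack.

```python
# Minimal ensemble sketch: combine several different models so that no
# single model's failure determines the final prediction (hypothetical).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities, then pick the class
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```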

  • Autonomy, Reversibility, Accountability & Sustainability

Users can use the Cerina app with or without clinical supervision. When supervision is in place, clinical supervisors retain complete control and operational autonomy over the user's treatment pathway. Supervisors can monitor the user's progress in the app and access vital information on the user's treatment (with the user's prior consent); they can also monitor the predictions, detections and decisions made by our AI models. In this way, all of our AI's decisions are kept in check, and any unsuitable decision is immediately reversible. This human supervision promotes accountability, leading to sustainable growth of an AI ecosystem in healthcare that is inclusive of all stakeholders.
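The supervision model described above amounts to a human-in-the-loop review queue: every AI decision is logged, a clinician can inspect it, and an unsuitable decision can be reverted. The following short Python sketch is a hypothetical illustration of that pattern; the class and field names are invented for illustration and do not describe Cerina's actual system.

```python
# Hypothetical sketch of a reviewable, reversible AI decision log.
from dataclasses import dataclass, field

@dataclass
class AIDecision:
    user_id: int
    suggestion: str          # e.g. a recommended exercise or module
    reviewed: bool = False
    reverted: bool = False

@dataclass
class ReviewQueue:
    decisions: list = field(default_factory=list)

    def log(self, decision: AIDecision) -> None:
        self.decisions.append(decision)

    def revert(self, index: int) -> None:
        # A clinical supervisor can undo any unsuitable AI decision.
        self.decisions[index].reverted = True
        self.decisions[index].reviewed = True

queue = ReviewQueue()
queue.log(AIDecision(user_id=42, suggestion="breathing exercise"))
queue.revert(0)
print(queue.decisions[0])
```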

  • Compliance

As the developers of a medical-grade device, we not only follow all regulations relating to privacy and security (including HIPAA and GDPR) but also go above and beyond to create our own set of industry-leading standards. We follow all regulations, guidelines and directives issued by the relevant authorities in each of our markets, and we frequently engage with policymakers, industry leaders and other stakeholders to ensure compliance.

  • Transparency, Explainability, Intelligibility and Oversight

Under GDPR and similar laws, users of digital services have the right to know how AI is making decisions for them. Cerina is committed to being transparent about all technologies used in our AI stack. In addition, we frequently communicate with our users through social media and with stakeholders at events (e.g., the Neuro, Digital & AI Innovation Summit in Lisbon, October 2022) and groups such as IT4Anxiety.

We provide full documentation on how our AI works to our B2B partner companies (upon request) and provide information through policy documents to all individual customers and users. If any user has a query or wants to understand how AI makes decisions for them, they can reach us at [email protected].
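As a concrete example of the kind of explanation a user might receive, the sketch below shows how per-feature contributions from a simple linear model can be surfaced in plain language. It is a hypothetical illustration; the feature names, labels and model are made up and do not describe Cerina's production AI.

```python
# Hypothetical sketch: turning a linear model's weights into a
# human-readable explanation of a single prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["sleep_quality", "exercise_minutes", "journaling_streak"]
X = np.array([[3, 10, 0], [8, 30, 5], [2, 0, 1], [7, 45, 6]])
y = np.array([1, 0, 1, 0])          # 1 = "suggest a check-in" (made-up labels)

model = LogisticRegression().fit(X, y)

sample = np.array([4, 5, 2])
contributions = model.coef_[0] * sample   # per-feature contribution to the score
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
```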

Cerina works closely with advisors from healthcare, academia, research and the tech industry to ensure the latest industry practices are followed during our development cycles.

  • Advisors on AI & Data Science

Dr Nicos Savva | Professor at London Business School

Dr Ward van Breda | Director at Sentimentics BV | PhD in Machine Learning (Predictive Modelling in E-Mental Health: Exploring Applicability in Personalised Depression Treatment)

  • Psychology and Research Team

Dr Momotaj Islam | Chief Psychologist at Cerina | Chartered Clinical Psychologist

Dr Özlem Eylem-van Bergeijk | Research Lead | Beck Institute for CBT Scholar

The development and engineering teams work closely with our advisors and the Psychology and Research Team to ensure the proper adoption of AI in mental healthcare.

We are also actively engaging and collaborating with partner research institutes and projects. For example, Cerina was awarded an Innovation Support Grant by Medilink Midlands (supported by the European Union's Regional Development Fund) for AI development in the app. Cerina also recently joined the IT4Anxiety consortium, which is dedicated to implementing innovative solutions, developed with start-ups, to reduce the anxiety of patients with mental disorders.


Let's discuss how AI works best for you.

The primary objective of this article is to inform all our stakeholders about the principles we observe during the development of AI. Along with that, we also intend to start a conversation with you. We want to know your thoughts on AI in mental healthcare, the concerns you have, your ideas and solutions, and what you wish to understand better; we also want your feedback and insights on the AI features in our app so we can improve our product. In short, we want to talk about AI and mental health with you.

As required under GDPR, we have appointed a Data Protection Officer (DPO); if you have any questions or queries regarding how AI works in the Cerina app, please get in touch with our DPO.

DPO: Prasannajeet Mane; email: [email protected]

Postal address: 71-75 Shelton Street, Covent Garden, London, England, WC2H 9JQ

Alternatively, you can contact our AI/ML engineer to discuss the principles and technical details of our AI development.

AI/ML Engineer: Omkar Kadam; email: [email protected]