AI and Psychology: What are the big questions for 2025?

Last week, I attended the American Psychological Association's Mobile Health Tech Advisory Committee meeting, where we explored the evolving role of AI in mental health. Over the past few days I've shared insights in shorter posts, but here's a long-form summary of the most critical takeaways for anyone working in digital health, AI, and mental healthcare.


How Do We Show the Value of Psychologists in the Age of GenAI?

LLMs are increasingly easy to implement. With psychology self-help books and research papers available for retrieval-augmented generation (RAG), building an AI-driven mental health tool has never been simpler. Add to that the fact that some people may feel more comfortable opening up to AI than to another person because of its perceived lack of judgment.
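To make concrete why the bar is so low, here's a minimal sketch of the retrieval half of a RAG pipeline. Everything in it is illustrative: the two-document "corpus," the toy bag-of-words similarity (real systems use learned embeddings and a vector database), and the final prompt string, which would be handed to whatever LLM the builder chose.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a word-count vector. Real RAG systems use
    # learned dense embeddings, not raw word counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=1):
    # Return the k corpus passages most similar to the query.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

# Stand-in corpus of self-help passages (invented for illustration).
corpus = [
    "Cognitive restructuring helps clients identify and challenge distorted thoughts.",
    "Sleep hygiene covers routines and environment changes that improve sleep.",
]

context = retrieve("How do I challenge negative thinking?", corpus)
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: How do I challenge negative thinking?"
```

The point isn't that this is good mental health tech; it's that a working prototype of this shape takes an afternoon, which is exactly why psychologists need to articulate what they add beyond it.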

The big question: How do we, as psychologists, clearly articulate and demonstrate our unique value in this landscape?

Psychologists go beyond surface-level interventions—we ensure that AI-driven solutions are ethical, effective, and grounded in a deep understanding of human behavior and mental health principles. This is a challenge we must actively address as AI continues to evolve in mental healthcare.


What AI Tools Are Psychologists Actually Using?

As AI continues shaping healthcare, psychologists must ensure these technologies are ethical, inclusive, and aligned with patient needs. However, one of the gaps we identified is that psychologists aren't regularly using AI tools—even widely available ones like ChatGPT.

Here are a few AI-powered tools I personally use:

Boardy – A LinkedIn-based tool for professional networking.

Lex.page – A writing tool with custom GPT instructions for specific contexts.

ChatGPT for troubleshooting – AI can provide step-by-step solutions to technical issues faster than searching outdated forum posts.


Understanding Risk and Liability in AI-Driven Healthcare

For companies building AI solutions for healthcare, a major hurdle is how clinicians perceive risk and liability. Many providers hesitate to use AI-driven tools because they are unsure how these technologies impact their ethical and legal responsibilities.

Key question: If an AI-driven system makes an error in an assessment or recommendation, who is accountable? More to the point, who gets sued?

👩‍⚖️This article by Mello and Guha dives deeper into this issue: 🔗 Read more


How Do We Ethically Use AI in Mental Health?

Many of the ethical dilemmas AI presents in mental health aren't entirely new—our existing ethical frameworks already offer guidance.

📌 Tiffany Chenneville, PhD, outlines key ethical questions clinicians should consider when applying these frameworks to GenAI. 🔗 Read more

Given that business leaders and tech executives don't take ethical oaths like we do, it's on us to advocate for responsible AI development.


Can AI Improve Equity in Mental Health?

With all the noise around the rollback of DEI initiatives, how can we still push forward in improving equity? AI presents exciting opportunities to bridge gaps in access to mental health care. For example:

💡 Language Translation – AI can translate psychological reports into multiple languages, making care more accessible.

💡 Audience-Specific Reports – AI can tailor reports for different audiences, simplifying clinical findings for patients while keeping in-depth analyses for specialists.

These tools could be game-changers for underserved communities, where clinician availability is limited.
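As a sketch of the "audience-specific reports" idea above: the same clinical findings can be wrapped in different prompt instructions per audience before being sent to an LLM. The `AUDIENCE_STYLES` wording and the example findings are my own illustrative assumptions, not a validated clinical template, and the actual model call is deliberately left out.

```python
# Style instructions per audience -- illustrative, not clinically validated.
AUDIENCE_STYLES = {
    "patient": "Use plain, everyday language; avoid jargon; keep a warm tone.",
    "specialist": "Use full clinical terminology; retain scores and differential considerations.",
}

def build_report_prompt(findings, audience):
    """Wrap the same findings in audience-specific instructions for an LLM."""
    style = AUDIENCE_STYLES[audience]
    return (
        f"Rewrite the following psychological findings for a {audience}.\n"
        f"Style guidance: {style}\n\n"
        f"Findings:\n{findings}"
    )

# Hypothetical findings string, for illustration only.
findings = "WAIS-IV FSIQ = 112; PHQ-9 score of 15 (moderately severe depression)."
patient_prompt = build_report_prompt(findings, "patient")
specialist_prompt = build_report_prompt(findings, "specialist")
```

The same mechanism extends to the translation example: swap the style instruction for a target-language instruction and the one underlying report serves every reader.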


Why Explainability in AI Matters for Healthcare

With all the ethical and legal concerns mentioned earlier, explainability in AI models will become increasingly important.

🏥 DeepSeek and similar models that explicitly show their reasoning will be essential.

Clinicians are responsible for patient outcomes, so they need tools that are interpretable. If an AI suggests a diagnosis or treatment plan, providers must understand why—not just take AI output at face value.


Where Is the Ethical Line in AI-Assisted Academic Work?

One of the more nuanced discussions revolved around the ethical use of AI in academia.

🧐 There's a clear difference between using AI to debug an R script vs. using AI to fully write a research paper—but where exactly is the line?

If AI assists with organizing ideas, does that constitute academic dishonesty? AI is just another tool—no one claims they "wrote" a paper using Microsoft Word, even though it assists in formatting. As AI becomes more embedded in research, institutions must define clearer guidelines for ethical use.


How Can We Guide Patients on AI Risks If We Don't Understand Them Ourselves?

📢 Final thought: If psychologists are expected to guide patients on AI risks and benefits, we need to educate ourselves first.

AI is already embedded in mental health tools, from chatbots to diagnostic aids. If misinformation spreads, psychologists need to be the ones helping patients make informed choices. That means actively engaging with AI tools and understanding their strengths and limitations.


Final Thoughts

The discussions at the APA meeting underscored the immense opportunities AI presents in mental health, as well as the challenges that must be addressed for ethical and effective integration.

By continuing these conversations and fostering collaboration between psychologists, technologists, and policymakers, we can shape AI-driven healthcare in a way that prioritizes:

Patient well-being

Equity

Responsible innovation

I'd love to hear from others in this space—what are your thoughts? How do you see AI shaping the future of mental health?
