Background:
Artificial intelligence (AI) now enables the generation of meaningful, human-like text. As a result, chatbots have recently attracted considerable attention, and many have praised the ability of novel chat applications to produce original, human-like essays. However, few studies have discussed the use of AI chatbots in psychology.
Goal of Study:
We aimed to discuss the use of AI chatbots in the field of psychology and to summarize previous studies on ChatGPT.
Methods:
This manuscript discusses how AI can be used in academic writing. We used ChatGPT to create a brief literature review article to demonstrate the progress of OpenAI's ChatGPT application. We searched PubMed and, overall, found eight studies using the keyword "ChatGPT".
Results:
Most studies claim that ChatGPT can write original essays and that its output is difficult to distinguish from human writing. We found no study discussing the impact of ChatGPT on psychology.
Conclusion:
ChatGPT can generate essays on various topics and may be used in many fields, including psychology, medicine, engineering, philosophy, medical education, literature, and computer science.
Keywords: open artificial intelligence, ChatGPT, ChatGPT author, academic publishing, psychiatry, psychology, artificial intelligence
Commentary: The Unseen Risks of AI Chatbots in Psychology – A Cautionary Perspective
The abstract by Uludag presents a rather optimistic view of AI chatbots in psychology, highlighting their ability to generate human‑like text and suggesting their potential application across multiple fields. While the author acknowledges that few studies have examined AI in psychology, the conclusion – that ChatGPT “can be used in many fields including psychology” – glosses over several significant concerns. This commentary focuses on the potential negative effects that must be considered before embracing AI chatbots in psychological research, education, or practice.
1. Epistemic Risks: Plausible but False Content
ChatGPT and similar large language models (LLMs) are designed to produce fluent, coherent text – not truthful or accurate information. They are prone to “hallucinations”: generating confident statements that are factually wrong, unsupported, or entirely fabricated. In psychology, where diagnostic criteria, treatment protocols, and research findings require precision, such errors could have serious consequences. For example, a chatbot might provide incorrect information about mental health conditions, suggest harmful coping strategies, or misrepresent empirical evidence. The abstract itself notes that only eight studies were found using the keyword “ChatGPT” – but a proper systematic review would likely yield far more, suggesting the author’s own search was incomplete. If researchers rely on chatbots to write literature reviews, they risk propagating errors and omissions.
2. Undermining Critical Thinking and Academic Integrity
The ability of ChatGPT to produce “original and human‑like essays” raises profound concerns about academic integrity. Students, trainees, and even established researchers might be tempted to delegate writing, analysis, or even conceptual work to AI. This undermines the development of critical thinking, argumentation, and scientific writing skills – competencies that are essential in psychology. Moreover, using AI to generate manuscripts without transparent disclosure constitutes a form of plagiarism or ghostwriting. The abstract mentions using ChatGPT to “create a brief literature review article” – yet it does not specify which parts were AI‑generated, nor does it discuss the ethical implications of listing AI as a co‑author (a practice already rejected by major journals).
3. Lack of Empathy and Therapeutic Alliance
In clinical psychology, the therapeutic relationship is a cornerstone of effective treatment. Chatbots lack genuine empathy, emotional understanding, and the ability to respond to nuanced human distress. Over‑reliance on AI for mental health support could lead to depersonalised care, where vulnerable individuals receive scripted, context‑blind responses. While some chatbot interventions have shown promise for specific conditions (e.g., CBT for mild anxiety), they are adjuncts – not replacements. The abstract’s uncritical endorsement of AI in psychology ignores the risk that patients may feel unheard, misunderstood, or even harmed by an algorithm that cannot recognise suicidal ideation or trauma responses.
4. Privacy and Data Security
Psychological discussions often involve deeply personal, sensitive information. Chatbots operated by commercial entities (e.g., OpenAI) may store, analyse, or use user inputs for model training. This creates serious privacy risks. Even anonymised data can be re‑identified, and breaches could expose confidential mental health records. The abstract makes no mention of data protection, informed consent, or regulatory compliance (e.g., HIPAA, GDPR). Without robust safeguards, deploying chatbots in psychology could violate ethical standards and legal obligations.
5. Bias and Inequity
LLMs are trained on vast internet data that contain historical and societal biases. Consequently, chatbots can reproduce stereotypes related to race, gender, sexuality, and mental health. For example, an AI might associate certain demographics with aggression or assume gender‑based emotional responses. In psychological assessment or triage, such biases could lead to misdiagnosis, unequal treatment recommendations, or reinforcement of stigmatising beliefs. The abstract fails to acknowledge this risk, presenting AI as a neutral tool.
6. Erosion of Professional Judgement
If psychologists and trainees begin to rely on chatbots for diagnostic suggestions, treatment planning, or literature synthesis, there is a danger of automation bias – over‑trusting AI output and under‑using human judgement. Psychology is not a purely algorithmic discipline; it requires contextual understanding, ethical reasoning, and personalised care. Delegating cognitive tasks to AI could deskill practitioners and reduce the quality of care over time.
7. Lack of Regulation and Accountability
Who is responsible when a chatbot gives harmful psychological advice? The developer? The user? The institution? Currently, no clear liability framework exists. The abstract’s enthusiastic conclusion overlooks this regulatory vacuum. Until there are standards for validation, transparency, and oversight, using chatbots in psychology is ethically precarious.
Conclusion
The abstract by Uludag captures the excitement surrounding AI chatbots but fails to address the substantial negative effects that must be considered. Psychology – as a discipline centred on human welfare, scientific rigour, and ethical practice – cannot afford to adopt AI uncritically. Future research should focus not on whether ChatGPT can generate text, but on how to mitigate its risks: ensuring accuracy, preserving human empathy, protecting privacy, reducing bias, maintaining academic integrity, and establishing accountability. Without these safeguards, the integration of AI chatbots into psychology may do more harm than good.
Link to study: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331367
https://www.igi-global.com/chapter/the-use-of-ai-supported-chatbot-in-psychology/371637
Uludag, K. (2025). The use of AI-supported chatbot in psychology. In Chatbots and Mental Healthcare in Psychology and Psychiatry (pp. 1–20). IGI Global Scientific Publishing.
