This journal club reviews recent evidence on AI-assisted qualitative analysis: what it enables, where it fails, and how to use it responsibly. Can AI meaningfully support qualitative research without eroding interpretive depth?
Journal Club Article: Cook, D. A., Ginsburg, S., Sawatsky, A. P., Kuper, A., & D’Angelo, J. D. (2025). Artificial Intelligence to Support Qualitative Data Analysis: Promises, Approaches, Pitfalls. Academic Medicine, 10-1097.
Aim & design: The authors critically appraise how AI can support qualitative data analysis via three activities:
- an exploratory case study using ChatGPT-4 on three narrative datasets
- a scoping review of AI-supported qualitative data analysis
- a conceptual analysis of promises, pitfalls, and ethics.
Methods & Key Results:
- ChatGPT case study. Out-of-the-box prompting produced accurate brief summaries but failed for higher-order tasks (e.g., thematic analysis, cross-theme insights); after iterative prompting, some utility emerged for keyword counting and summarisation, while several tasks remained unsatisfactory.
- Scoping review (N=130 articles). Of these, 104 were original studies; publication accelerated in 2023–2024 (n=64). Common approaches included inductive topic/theme discovery (n=70), keyword detection (n=39), rubric-based coding (n=30), sentiment analysis (n=28), and discourse analysis (n=13). Many studies used unsupervised learning (n=75), with frequent use of natural language processing, pretrained transformers, and other neural methods.
- Historical/contextual synthesis: Computer assistance in qualitative data analysis predates current large language models by decades (e.g., NVivo/ATLAS.ti/MAXQDA functions like retrieval, word frequencies, coder agreement). AI’s recent wave increases accessibility and scope rather than inventing qualitative data analysis.
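To make the keyword-counting task from the case study concrete: this kind of frequency work needs no LLM at all and has long been available in tools like NVivo. A minimal Python sketch (the transcripts and keyword list here are hypothetical, for illustration only):

```python
from collections import Counter
import re

def count_keywords(texts, keywords):
    """Count case-insensitive whole-word occurrences of each keyword
    across a list of transcript strings."""
    counts = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        for kw in keywords:
            counts[kw] += tokens.count(kw.lower())
    return counts

# Hypothetical mini-corpus standing in for interview transcripts.
transcripts = [
    "I felt burnout after every night shift.",
    "Burnout was common, but mentorship helped.",
]
print(count_keywords(transcripts, ["burnout", "mentorship"]))
# → Counter({'burnout': 2, 'mentorship': 1})
```

The point of the contrast: counting is deterministic and auditable, whereas asking an LLM to "find themes" delegates interpretive judgment to a probabilistic model, which is exactly where the case study found it unsatisfactory.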
Promises (applications): AI can expedite transcription and translation at human-competitive quality; support purposeful sampling and large-scale corpus analysis; aid data cleaning; and facilitate coder training and human–human collaboration workflows.
Pitfalls (risks/constraints): The paper emphasises a necessary "human in the loop." AI tools, especially large language models, are probabilistic pattern-matchers, not meaning-makers; they risk missing rare or nuanced phenomena, hallucinating, and reproducing bias. There are dangers of "plug-and-chug" analyses by under-trained users, potential erosion of collaborative sense-making, and privacy and security concerns. Limited context windows and incomplete knowledge also constrain performance.
Ethical and methodological stance: Researchers must understand what is “under the hood,” practice reflexivity about how AI shapes findings, and be transparent about AI use in data handling and analysis.
Practice implications:
- Use AI where it is demonstrably strong (transcription, translation, summarisation, retrieval).
- Reserve interpretive/thematic inferences for trained qualitative researchers, with audit trails and triangulation.
- Plan for ethics, privacy, and model bias; document AI roles and parameters, and maintain rigorous coder calibration and reflexive practice.
Conclusion: AI has a long history of assisting qualitative data analysis and, with modern large language models and natural language processing, offers powerful yet bounded affordances. Its value is maximised when embedded in careful qualitative methodology with explicit human oversight.