The Narratives of Artificial Intelligence: A Critical View of an Emerging Tool

DOI:

https://doi.org/10.62695/RRUI1032

Keywords:

Critical analysis, Ethical considerations, Narratives, Artificial intelligence in teaching and learning, Problematising education, Teachers' work

Abstract

Artificial Intelligence has taken the world by storm, and humanity seems to be venturing into uncharted waters. The potential of Artificial Intelligence is still being explored across different sectors. In this desk research, the authors critically analyse and problematise the use of Artificial Intelligence in the educational realm. Ethical dilemmas arising from employing and relying on Artificial Intelligence are explored from contrasting perspectives. The authors raise questions about the reliability, validity, and possible hidden or silenced narratives in the information provided by large language models such as ChatGPT. The role of the educator as a trailblazer in the ethical and discerning use of Artificial Intelligence is emphasised. In parallel, the paper makes the case for revisiting core issues in education, including the need to reappropriate teachers' work and to slow the pace of education to allow for a critical undertaking.

Author Biographies

Angele Pulis, Institute for Education

Angele Pulis is a lecturer at the Institute for Education. Her research domains include educational leadership, pupil voice, and mixed methods research. She holds a Ph.D. from the University of Leicester, a Master of Philosophy from the University of Wales, and a postgraduate diploma in Educational Administration and Management and a Bachelor of Education (Hons) from the University of Malta. Her career in schools has included various roles: she was Head of a primary school and Assistant Head of a sixth form and of a secondary school, and has taught Integrated Science, Biology, and Chemistry in various schools.

Mario Mallia, Institute for Education

Mario Mallia is a lecturer at the Institute for Education, focusing on critical pedagogy, gender, and science education. He was Head of a primary and secondary school for sixteen years, a Deputy Head, and a teacher of science. He holds a Master's degree in Education, a postgraduate diploma in School Administration and Management, and a Bachelor of Education (Hons) degree from the University of Malta. He served for many years as, inter alia, a board member of the National Commission for the Promotion of Equality and of the Foundation for Educational Services, and remains active in the political and social fields.

References

Addington, A. (2024, September 23). Knowledge cutoff dates for ChatGPT, Meta AI, Copilot, Gemini, Claude. ComputerCity. https://computercity.com/artificial-intelligence/knowledge-cutoff-dates-llms

Baldini, M., & Farahi, F. (2025). Rereading the history of pedagogy between apocalyptic and integrated. A critical pedagogy in the age of ubiquity. Journal of Inclusive Methodology and Technology in Learning and Teaching, 5(1), Article 1. https://www.inclusiveteaching.it/index.php/inclusiveteaching/article/view/285

Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative AI can harm learning (SSRN Scholarly Paper 4895486). Social Science Research Network. https://doi.org/10.2139/ssrn.4895486

Blank, I. A. (2023). What are large language models supposed to model? Trends in Cognitive Sciences, 27(11), 987–989. https://doi.org/10.1016/j.tics.2023.08.006

Borenstein, J., & Howard, A. (2021). Emerging challenges in AI and the need for AI ethics education. AI and Ethics, 1, 61–65. https://doi.org/10.1007/s43681-020-00002-7

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, 1–15. https://www.media.mit.edu/publications/gender-shades-intersectional-accuracy-disparities-in-commercial-gender-classification/

Byrd, A. (2023). Truth-Telling: Critical inquiries on LLMs and the corpus texts that train them. Composition Studies, 51(1), 135–142, 217. https://www.proquest.com/docview/2841560441/abstract/126FF487D76B4041PQ/1

Chen, C., Liu, K., Chen, Z., Gu, Y., Wu, Y., Tao, M., Fu, Z., & Ye, J. (2024, May 7–11). Inside: LLMs’ internal states retain the power of hallucination detection [Conference presentation]. ICLR: The Twelfth International Conference on Learning Representations, Vienna, Austria. https://doi.org/10.48550/arXiv.2402.03744

Children’s Rights Observatory Malta. (2022). Children’s Manifesto. Salesian Press.

Chomsky, N. (2002). Understanding power: The indispensable Chomsky. The New Press.

Chugh, R. (2024, September 26). ChatGPT is changing the way we write. Here's how – and why it's a problem. The Conversation. https://theconversation.com/chatgpt-is-changing-the-way-we-write-heres-how-and-why-its-a-problem-239601

Dastin, J. (2018, October 11). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/

Elyoseph, Z., Hadar-Shoval, D., Asraf, K., & Lvovsky, M. (2023). ChatGPT outperforms humans in emotional awareness evaluations. Frontiers in Psychology, 14, Article 1199058. https://doi.org/10.3389/fpsyg.2023.1199058

Farquhar, S., Kossen, J., Kuhn, L., & Gal, Y. (2024). Detecting hallucinations in large language models using semantic entropy. Nature, 630, 625–630. https://doi.org/10.1038/s41586-024-07421-0

Franceschelli, G., & Musolesi, M. (2025). On the creativity of large language models. AI & Society, 40, 3785–3795. https://doi.org/10.1007/s00146-024-02127-3

Freire, P. (2000). Pedagogy of the oppressed (30th anniversary ed.). Bloomsbury Academic.

Foroux, D. (2024, September 23). AI writing and the illusion of progress. Darius Foroux. https://dariusforoux.com/ai-writing-illusion/

Gillespie, L., & McBain, S. (2011). A critical analysis process – Bridging the theory to practice gap in senior secondary school physical education. Teachers and Curriculum, 12(1), 65–72. https://doi.org/10.15663/tandc.v12i1.32

Giroux, H. (1988). Teachers as intellectuals: Towards a critical pedagogy of learning. Bergin & Garvey Publishers.

Harris, M. (2023, May 18). Elon Musk used to say he put $100M in OpenAI, but now it’s $50M: Here are the Receipts. Archive.Ph. https://archive.ph/YGGXQ

Ho, F. T. (2021). AI in education: A systematic literature review. Journal of Cases on Information Technology, 23(1), 1–20. https://doi.org/10.4018/JCIT.2021010101

Liu, F., Liu, Y., Shi, L., Huang, H., Wang, R., Yang, Z., Zhang, L., Li, Z., & Ma, Y. (2024). Exploring and evaluating hallucinations in LLM-powered code generation. ArXiv, Computer Science, 1–18. https://doi.org/10.48550/arXiv.2404.00971

Mayo, P., & Vittoria, P. (2021). Critical education in international perspective. Bloomsbury Academic. https://doi.org/10.5040/9781350147782

Ministry of Education and Employment. (2015). Educators' guide for pedagogy and assessment: Using a learning outcomes framework. https://www.um.edu.mt/library/oar/bitstream/123456789/119734/1/Educators_guide_for_pedagogy_and_assessment.pdf

Ministry of Education, Sport, Youth, Research and Innovation. (2024, November 6). The National Education Strategy. Edukazzjoni. https://education.gov.mt/useful-links/the-national-education-strategy-2/

Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology, 33(4), 659–684. https://doi.org/10.1007/s13347-020-00405-8

Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Barnes, N., & Milan, A. (2023). A comprehensive overview of large language models. ArXiv, Computer Science, Linguistics, 1–46. https://doi.org/10.48550/arXiv.2307.06435

Nayir, F., Sari, T., & Bozkurt, A. (2024). Reimagining education: Bridging artificial intelligence, transhumanism, and critical pedagogy. Journal of Educational Technology and Online Learning, 7(1), 102–115. https://doi.org/10.31681/jetol.1308022

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

OECD. (2021). OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots. OECD. https://doi.org/10.1787/589b283f-en

Office of the Prime Minister. (2019a). Malta: Towards trustworthy AI. https://www.mdia.gov.mt/wp-content/uploads/2023/04/Malta_Towards_Ethical_and_Trustworthy_AI.pdf

Office of the Prime Minister. (2019b). [Draft policy document]. https://malta.ai/wp-content/uploads/2019/04/Draft_Policy_document_-_online_version.pdf

Pierce, D., & Hathaway, A. (2018, August 29). The promise (and pitfalls) of AI for education. The Journal. https://thejournal.com/articles/2018/08/29/the-promise-of-ai-for-education.aspx

Plunkett, J. (2023, April 29). Freedom in the age of autonomous machines. Medium. https://medium.com/@jamestplunkett/freedom-in-the-age-of-autonomous-machines-def5d18e82d8

Rensfeldt, A. B., & Rahm, L. (2023). Automating teacher work? A history of the politics of automation and Artificial Intelligence in education. Postdigital Science and Education, 5(1), 25–43. https://doi.org/10.1007/s42438-022-00344-x

Schiff, D. (2022). Education for AI, not AI for education: The role of education and ethics in national AI policy strategies. International Journal of Artificial Intelligence in Education, 32, 527–563. https://doi.org/10.1007/s40593-021-00270-2

Vincent, J. (2023, March 15). OpenAI co-founder on company’s past approach to openly sharing research: “We were wrong”. The Verge. https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview

Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., & Du, Z. (2024). Artificial intelligence in education: A systematic literature review. Expert Systems with Applications, 252, 1–19. https://doi.org/10.1016/j.eswa.2024.124167

Xu, Z., Jain, S., & Kankanhalli, M. S. (2024). Hallucination is inevitable: An innate limitation of large language models. ArXiv, Computer Science, 1–26. https://doi.org/10.48550/arXiv.2401.11817

Zavalloni, G. (2017). La pedagogia della lumaca: Per una scuola lenta e non violenta [The pedagogy of the snail: For a slow, non-violent school] (10th ed.). EMI.

Zhang, Y., Li, Y., Cui, L., Cai, D., Liu, L., Fu, T., Huang, X., Zhao, E., Zhang, Y., Chen, Y., Wang, L., Luu, A. T., Bi, W., Shi, F., & Shi, S. (2023). Siren's song in the AI ocean: A survey on hallucination in large language models [Unpublished manuscript]. ArXiv, Computer Science. https://doi.org/10.48550/arXiv.2309.01219

Published

13-11-2025

How to Cite

Pulis, A., & Mallia, M. (2025). The Narratives of Artificial Intelligence: A Critical View of an Emerging Tool. Malta Journal of Education, 6(02), 123–136. https://doi.org/10.62695/RRUI1032
