ARTIFICIAL INTELLIGENCE AND THE CHALLENGE OF EPISTEMOLOGY: CAN MACHINES BE SAID TO 'KNOW'?
DOI: https://doi.org/10.71282/jurmie.v3i1.1623

Keywords: Epistemology, Artificial Intelligence, Knowledge, Epistemic justification, Explainable AI

Abstract
This study normatively examines the claim that Artificial Intelligence (AI), particularly large language models (LLMs), can be said to "know" in an epistemological sense. Using a qualitative approach based on conceptual-argumentative analysis and a systematic literature review, it analyzes reputable scientific literature published in the last five years on the epistemology of AI, knowledge justification, explainable AI, and epistemic trust and authority. The analysis proceeds by mapping key themes and critically evaluating the philosophical premises underlying the claim of machine knowledge. The results show that although AI is capable of producing accurate and useful outputs, these systems do not meet the normative requirements of knowledge: they lack an epistemic subject, an attitude toward truth, and epistemic responsibility. Reliabilist and explainable-AI approaches provide only functional justification, not normative justification in the classical or contemporary epistemological sense. The novelty of this research lies in its assertion that the issue of AI knowledge is conceptual and normative rather than merely technical, and in its reinforcement of a social and distributed epistemological framework for understanding the role of AI. Philosophically, the study underscores the need to maintain "knowing" as a strict normative category, in order to prevent the erosion of human epistemic responsibility in increasingly technology-mediated knowledge practices.
License
Copyright (c) 2026 Nur Oktriani Wulan Dari, Ahmad Munadhil Izzul Haq, Sumadi (Author)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.










