인공지능(AI) 기반 의사결정 시스템의 책무성 체계에 관한 연구
A Study on the Accountability Framework for AI-Based Decision-Making Systems
- First published: February 2002
- ISSN: 1598-7817 (Print), 2713-6744 (Online)
- Indexing: KCI-listed (2019.01 ~ present)
- Volume/Issue: Vol. 25, No. 4
- Publication date: 2025.12
- Articles in this issue: 19
- Type: Academic journal
- Subject: Social Sciences
- Publication period: 2002.02 ~ 2025.12
- Frequency: Quarterly (4 issues per year)
- Total issues published: 78
- Total articles published: 771
Abstract
This study examines the issue of accountability within emerging automated accountability frameworks at a time when artificial intelligence (AI) and other emerging digital technologies are fundamentally transforming society. To this end, the study employs an accountability framework to analyze three core dimensions of accountability in AI-based decision-making systems: role responsibility, explanatory responsibility, and liability.
With respect to role responsibility, the study applies the RASCI model to propose a systematic allocation of roles and responsibilities among stakeholders across the life cycle of AI-based decision-making systems. Regarding explanatory responsibility, it examines how accountability for explainability should be distributed among relevant actors at each stage of the system life cycle. Finally, in addressing liability, the study reviews major scholarly debates concerning the challenges of attributing and allocating responsibility arising from the complexity of AI systems.
Based on these analytical findings, the study evaluates recent trends in foreign AI-related legislation alongside South Korea’s AI-related laws and guidelines, identifying their limitations in ensuring accountability for AI-based decision-making systems. It concludes by offering policy recommendations on how South Korea should respond more effectively to emerging accountability challenges associated with artificial intelligence.
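To make the role-responsibility analysis concrete, the sketch below shows one way to encode a RASCI matrix over the life cycle of an AI-based decision-making system in Python. The stage names, stakeholder names, and sample assignments are hypothetical illustrations, not the paper's own allocation; the point being demonstrated is only the core RASCI constraint of exactly one Accountable party per stage.

```python
"""A minimal illustrative sketch of a RASCI responsibility matrix for an
AI-based decision-making system. All stakeholders, life-cycle stages, and
assignments below are hypothetical placeholders."""

from enum import Enum


class Role(Enum):
    RESPONSIBLE = "R"  # carries out the work at this stage
    ACCOUNTABLE = "A"  # ultimately answerable; exactly one per stage
    SUPPORTIVE = "S"   # assists the responsible party with resources
    CONSULTED = "C"    # consulted before decisions (two-way communication)
    INFORMED = "I"     # notified after decisions (one-way communication)


STAGES = ["design", "development", "deployment", "operation"]
STAKEHOLDERS = ["developer", "deploying_org", "regulator", "end_user"]

# Hypothetical allocation: (stage, stakeholder) -> role.
MATRIX = {
    ("design", "developer"): Role.RESPONSIBLE,
    ("design", "deploying_org"): Role.ACCOUNTABLE,
    ("design", "regulator"): Role.CONSULTED,
    ("design", "end_user"): Role.INFORMED,
    ("development", "developer"): Role.RESPONSIBLE,
    ("development", "deploying_org"): Role.ACCOUNTABLE,
    ("development", "regulator"): Role.INFORMED,
    ("development", "end_user"): Role.INFORMED,
    ("deployment", "developer"): Role.SUPPORTIVE,
    ("deployment", "deploying_org"): Role.RESPONSIBLE,
    ("deployment", "regulator"): Role.ACCOUNTABLE,
    ("deployment", "end_user"): Role.INFORMED,
    ("operation", "developer"): Role.SUPPORTIVE,
    ("operation", "deploying_org"): Role.RESPONSIBLE,
    ("operation", "regulator"): Role.ACCOUNTABLE,
    ("operation", "end_user"): Role.CONSULTED,
}


def validate(matrix: dict) -> None:
    """Enforce the two core RASCI constraints at every life-cycle stage:
    exactly one Accountable party and at least one Responsible party."""
    for stage in STAGES:
        roles = [matrix[(stage, s)] for s in STAKEHOLDERS]
        if roles.count(Role.ACCOUNTABLE) != 1:
            raise ValueError(f"'{stage}' needs exactly one Accountable party")
        if Role.RESPONSIBLE not in roles:
            raise ValueError(f"'{stage}' needs at least one Responsible party")


if __name__ == "__main__":
    validate(MATRIX)  # raises if the allocation breaks RASCI rules
    # Print the matrix: rows are life-cycle stages, columns are stakeholders.
    print(f"{'stage':<12}" + "".join(f"{s:>15}" for s in STAKEHOLDERS))
    for stage in STAGES:
        row = "".join(f"{MATRIX[(stage, s)].value:>15}" for s in STAKEHOLDERS)
        print(f"{stage:<12}" + row)
```

Validating the matrix before use catches allocations in which accountability is duplicated or missing at some stage, the kind of diffusion of responsibility across many hands discussed in the liability literature cited below (e.g., Nissenbaum, 1996).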
Table of Contents
Ⅰ. Introduction
Ⅱ. Theoretical Discussion of AI and Accountability
Ⅲ. Analytical Framework and Allocation of Responsibility for Ensuring the Accountability of AI Systems
Ⅳ. Directions for Ensuring the Accountability of AI Systems
Ⅴ. Conclusion
References (93)
- 김재선. (2021). A study on administrative law interpretation of algorithmic automated administrative decisions: Focusing on discussions of legislative measures under U.S. administrative law. 「법학논총」, 45(3), 77-112.
- 박희정·오윤섭·강민아. (2024). 「공공 책무성」 [Public accountability]. Seoul: 대영문화사.
- 배재권. (2023). A study on the proposed artificial intelligence act and the basic principles of explainable AI (XAI). 「한국지식정보기술학회 논문지」, 18(2), 439-448.
- 이중원. (2019). Can responsibility be attributed to artificial intelligence?: Exploring an accountability-centered ethics of AI. 「과학철학」, 22(2), 79-104.
- 정영기. (2021). The ethics of artificial intelligence: Focusing on accountability. 「인문과 예술」, 11, 295-313.
- 조성훈. (2022). Explainability of artificial intelligence: Focusing on the responsibility gap and the right to demand explanation. 「IP & Data 法」, 2(1), 61-82.
- 황용석·김기태. (2020). A study on the explainability of algorithm-based automated decision-making. 「언론정보연구」, 57(3), 41-80.
- 홍석한. (2023). A review of the U.S. Algorithmic Accountability Act of 2022. 「미국헌법연구」, 34(1), 73-107.
- Adams, S., Arel, I., Bach, J., Coop, R., Furlan, R., Goertzel, B., Hall, J. S., Samsonovich, A., Scheutz, M., Schlesinger, M., Shapiro, S. C., & Sowa, J. (2012). Mapping the landscape of human-level artificial general intelligence. AI Magazine, 33(1), 25–42. doi:10.1609/aimag.v33i1.2322
- Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989.
- Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. MIT Press.
- Bashayreh, L. (2023). AI liability and the challenge of responsibility gaps. Journal of Law, Technology & Policy, 18(2), 45–72.
- Behn, R.D. (2001). Rethinking democratic accountability. Washington, DC: Brookings Institution Press.
- Bertolini, A. (2020). Artificial intelligence and civil liability. European Journal of Comparative Law, 27(1), 31–58.
- Borning, A. (1987). Computer System Reliability and Nuclear War. Communications of the ACM, 30(2), 112–131.
- Bovens, M. (1998). The quest for responsibility: Accountability and citizenship in complex organisations. Cambridge: Cambridge University Press.
- Bovens, M.A.P., Schillemans, T. & ’t Hart, P. (2008). Does public accountability work? An assessment tool. Public Administration, 86(1): 225–242.
- Brownsword, R., & Goodwin, M. (2021). Law, technology and society: Reimagining legal governance in the algorithmic age. Cambridge University Press.
- Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Trask, A., & Anderson, H. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. OpenAI. https://arxiv.org/abs/2004.07213
- Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12.
- Calo, R. (2021). Artificial intelligence policy: A primer and roadmap. Harvard Journal of Law & Technology, 34(2), 1–45.
- Chan, T., Yu, L., & Hutter, A. (2022). Operationalizing safe AI: Standards, certification and accountability mechanisms. AI and Ethics, 2(4), 551–568.
- Custers, B. (2022). The fairness paradox in AI regulation. Computer Law & Security Review, 45, 105639.
- Davis, M. (2012). “Ain’t no one here but us social forces”: Constructing the professional responsibility of engineers. Science and Engineering Ethics, 18(1), 13-34. doi:10.1007/s11948-010-9225-3
- de Bruijn, J. (2002). Managing performance in the public sector. London: Routledge.
- Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56-62.
- Doorn, N. (2012). Responsibility ascriptions in technology development and engineering: Three perspectives. Science and Engineering Ethics, 18(1), 69-90. doi:10.1007/s11948-009-9189-3
- Doshi-Velez, F., & Kortz, M. (2017). Accountability of AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134.
- Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a “right to explanation” is unlikely to empower users or hold algorithms accountable. Duke Law & Technology Review, 16(1), 18–84.
- European Commission (2019). Liability for Artificial Intelligence and Other Emerging Digital Technologies. Report from the Expert Group on Liability and New Technologies.
- European Commission. (2022). Proposal for an AI Liability Directive. Publications Office of the European Union.
- EU. (2024). Artificial Intelligence Act. European Parliament and Council.
- Feinberg, J. (1985). Sua culpa. In D. G. Johnson & J. W. Snapper (Eds.), Ethical issues in the use of computers (pp. 102–120). Wadsworth. http://dl.acm.org/citation.cfm?id=2569.2675
- Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
- Floridi, L., & Cowls, J. (2022). The logic of distributed accountability in AI systems. Oxford Internet Institute.
- Goodin, R. E. (1985). Protecting the vulnerable: A reanalysis of our social responsibilities. Chicago: University of Chicago Press.
- Hart, H. L. A. (1968). Punishment and Responsibility. Oxford: Oxford University Press.
- Holmquist, L. E. (2017). Intelligence on tap: artificial intelligence as a new design material. Interactions, 24(4), 28-33.
- IBM. (2022). AI Fairness 360 Technical Documentation. IBM Research.
- Infocomm Media Development Authority (IMDA). (2022). Model AI Governance Framework (2nd ed.). Government of Singapore.
- ISO. (2024). ISO/IEC 42001: Artificial intelligence management systems — Requirements. International Organization for Standardization.
- Kemper, J. & Kolkman, D. (2019). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society, 22(14), 2081-2096, DOI: 10.1080/1369118X.2018.1477967
- Kutz, C. (2000). Complicity: Ethics and law for a collective age. Cambridge: Cambridge University Press.
- Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google Flu: Traps in big data analysis. Science, 343(6176), 1203-1205.
- Leveson, N. (1986). Software Safety: Why, What, and How. Computing Surveys, 18(2), 125–163.
- Leveson, N., and Turner, C. (1993). An Investigation of the Therac-25 Accidents. Computer, 26(7), 18–41.
- Littlewood, B., and Strigini, L. (1992). The Risks of Software. Scientific American, November: 62–75.
- Lonsdale, J. (1999). Impacts. In C. Pollitt, X. Girre, R. Mul, H. Summa, M. Waerness, & J. Lonsdale (Eds.), Performance or compliance? Performance audit and public management in five countries. Oxford: Oxford University Press.
- Lynch, W. T., & Kline, R. (2000). Engineering practice and engineering ethics. Science Technology & Human Values, 25(2), 195–225.
- Mackie, J. L. (1978). Ethics: Inventing right and wrong. Harmondsworth: Penguin Books.
- Marr, B. (2023). The 15 biggest risks of artificial intelligence. Forbes. https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=13bf17452706
- Marshall, G. (1987). Constitutional Conventions: The Rules and Forms of Political Accountability. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198762027.001.0001
- McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12-14.
- Neale, A., & Anderson, B. (2000). Performance reporting for accountability purposes: Lessons, issues, future. International Public Management Journal, 3(1): 93–106.
- Neumann, P. G. (1999). Inside risks: Information is a double-edged sword. Communications of the ACM, 42(7), 120. https://doi.org/10.1145/306549.306599
- Nihlén Fahlquist, J. (2006). Responsibility ascriptions and public health problems: Who is responsible for obesity and lung cancer? Journal of Public Health, 14(1), 15-19.
- Nihlén Fahlquist, J. (2009). Moral responsibility for environmental problems—individual or institutional? Journal of Agricultural and Environmental Ethics, 22(2), 109-124.
- Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2, 25-42. https://doi.org/10.1007/BF02639315
- NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF). U.S. National Institute of Standards and Technology.
- Nozick, R. (1974). Anarchy, State, and Utopia. New York: Basic Books.
- OECD. (2019). OECD Principles on Artificial Intelligence. OECD Publishing.
- OECD. (2022). OECD Framework for Classification of AI System Risk. OECD Publishing.
- OECD. (2024). AI, data governance and privacy: Synergies and areas of international co-operation. OECD Publishing.
- Ojeda, T., Murphy, S. P., Bengfort, B., & Dasgupta, A. (2014). Practical data science cookbook. Birmingham, UK: Packt Publishing.
- Papagiannidis, E., et al. (2025). Responsible artificial intelligence governance: A review and research agenda. Information & Management, 62(4), 102789.
- Parnas, D., Schouwen, J., & Kwan, S. P. (1990). Evaluation of Safety-Critical Software. Communications of the ACM, 33(6), 636–648.
- Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
- Personal Data Protection Commission (PDPC). (2020). Model AI Governance Framework (2nd ed.). Government of Singapore.
- Pielke, R. A. (1999). Who decides? Forecasts and responsibilities in the 1997 Red River flood. Applied Behavioral Science Review, 7(2), 83-101. https://doi.org/10.1016/S1068-8595(00)80012-4
- Raji, I. D., Smart, A., White, R., Mitchell, M., Gebru, T., & Buolamwini, J. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAT*).
- RASCI Responsibility Matrix. (2020). In ManagementMania.com. Wilmington, DE. Retrieved 03/04/2023 from https://managementmania.com/en/rasci-responsibility-matrix
- Responsible Data Science Initiative (2016) Responsible data science. http://www.responsibledatascience.org/.
- Rich, E., Knight, K., & Nair, S. B. (2009). Artificial intelligence (3rd ed.). New Delhi, India: Tata McGraw-Hill.
- Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
- Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29. DOI: 10.1080/1369118X.2016.1154087
- Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005.
- Ron, N., Lipshitz, R., & Popper, M. (2006). How organizations learn: Post-flight reviews in an F-16 fighter squadron. Organization Studies, 27(8): 1069–1089.
- Rosa, A., Feyereisl, J., & Team, T. G. (2016). A framework for searching for general artificial intelligence. CoRR abs/1611.00685, 1–54. Retrieved from http://arxiv.org/abs/1611.00685 [Google Scholar]
- Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2016). Automation, algorithms, and politics | When the algorithm itself is a racist: Diagnosing ethical harm in the basic components of software. International Journal of Communication, 10(19). Retrieved from https://ijoc.org/index.php/ijoc/article/view/6182
- Scott, C. (2006). Spontaneous accountability. In M. W. Dowdle (Ed.), Public accountability: Designs, dilemmas and experiences (pp. 174-194). Cambridge: Cambridge University Press.
- Schillemans, T. (2011). Does horizontal accountability work? Administration & Society, 43(4), 387-416.
- Singapore GovTech. (2023). Model AI Governance Framework (3rd edition).
- South Korea Government. (2024). AI Governance Public Sector Guidelines.
- Stone, B. K. (1973). A linear programming formulation of the general portfolio selection problem. Journal of Financial and Quantitative Analysis, 8, 621-636.
- Stoyanovich, J., Howe, B., Abiteboul, S., Miklau, G., Sahuguet, A., & Weikum, G. (2017). Fides: Towards a platform for responsible data science. In Proceedings of the 29th International Conference on Scientific and Statistical Database Management (SSDBM '17), 1-6. doi:10.1145/3085504.3085530
- Thierer, A., O’Sullivan Castillo, A., & Russell, R. (2017). Artificial intelligence and public policy. Mercatus Research, Mercatus Center at George Mason University. https://www.mercatus.org/research/research-papers/artificial-intelligence-and-public-policy
- UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence.
- Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the GDPR. International Data Privacy Law, 7(2), 76–99.
- Wagner, G. (2022). Liability for artificial intelligence and EU consumer law. Journal of European Consumer and Market Law, 11(3), 97–108.
- White House OSTP. (2023). Blueprint for an AI Bill of Rights. The White House.
- Wiener, N. (1950/1954). The human use of human beings: Cybernetics and society. Houghton Mifflin, 1950 (2nd rev. ed., Doubleday Anchor, 1954).
- Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial intelligence and the public sector—applications and challenges. International Journal of Public Administration, 42(7), 596-615.
- Wirtz, B. W., Weyerer, J. C., & Sturm, B. J. (2020). The dark sides of artificial intelligence: An integrated AI governance framework for public administration. International Journal of Public Administration, 43(9), 818-829.
Publisher: 경인행정학회