1Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
2Cardiology Division, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
3National IT Industry Promotion Agency, Jincheon, Korea
4Health Innovation Big Data Center, Asan Medical Center, Seoul, Korea
5Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
Copyright © 2019 Korean Council of Science Editors
This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Ethical issues:

- Privacy and data protection, consent for data use, and data ownership.
- Fairness and bias in data and AI algorithms. If the data underrepresent particular groups of patients (e.g., by ethnicity, gender, or economic status), the resulting AI algorithms will be biased against those groups.
- Evidence to ensure the greatest benefit to patients while avoiding any harm (i.e., rigorous clinical validation of AI).
- Equitable access (e.g., if resource-poor hospitals and patients have limited access to AI, disparities in healthcare may be exacerbated).
- Conflicts of interest (e.g., if healthcare professionals involved in patient care hold positions in AI startups or other commercial entities, there is an increased risk that professional judgment or actions regarding a primary interest will be unduly influenced by a secondary interest).
- Accountability (i.e., who should be liable for adverse events related to the use of AI?).
- The exploitation of AI for unethical purposes (e.g., manipulating AI outputs with malicious intent by covertly modifying data to perform an adversarial attack [20]).
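The adversarial attack mentioned in the last item can be illustrated with a minimal sketch. The tiny logistic-regression "model," its weights, and all numeric values below are hypothetical, chosen only to show the underlying mechanism (a fast-gradient-sign-style perturbation): a covert, imperceptibly small change to each input feature can flip a classifier's output.

```python
import numpy as np

# Hypothetical "trained" classifier: p(disease) = sigmoid(w . x + b).
# For a linear model, the gradient of the score with respect to the
# input is simply w, so an attacker who knows (or estimates) w can
# nudge every feature slightly in the direction that raises the score.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # hypothetical learned weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# An input the model classifies as negative (p < 0.5).
x = -0.02 * w
p_clean = predict(x)

# Adversarial perturbation: step each feature by a small epsilon in the
# sign of the gradient, keeping the change within an epsilon-sized box
# (too small to be noticed on inspection, yet enough to flip the output).
epsilon = 0.05
x_adv = x + epsilon * np.sign(w)
p_adv = predict(x_adv)

print(f"clean prediction:       {p_clean:.3f}")
print(f"adversarial prediction: {p_adv:.3f}")
```

Real attacks on deep networks work the same way in spirit, but obtain the gradient by backpropagation through the network rather than reading it off a linear model.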