1Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
2Cardiology Division, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
3National IT Industry Promotion Agency, Jincheon, Korea
4Health Innovation Big Data Center, Asan Medical Center, Seoul, Korea
5Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
Copyright © 2019 Korean Council of Science Editors
This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Ethical issues:
- Privacy and data protection, consent for data use, and data ownership.
- Fairness and bias in data and AI algorithms: if the data underrepresent any particular group of patients (e.g., by ethnicity, gender, or economic status), the resulting AI algorithms will be biased against those groups.
- Evidence to ensure the greatest benefit to patients while avoiding harm (i.e., rigorous clinical validation of AI).
- Equitable access (e.g., if resource-poor hospitals and patients have limited access to AI, disparities in healthcare may be exacerbated).
- Conflicts of interest (e.g., if healthcare professionals involved in patient care hold positions in AI startups or other commercial entities, the risk increases that professional judgment or actions regarding a primary interest will be unduly influenced by a secondary interest).
- Accountability (i.e., who should be liable for adverse events related to the use of AI?).
- Exploitation of AI for unethical purposes (e.g., manipulating AI outputs with malicious intent by covertly modifying data to perform an adversarial attack [20]; a code sketch follows this list).
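To make the adversarial attack mentioned in the last item concrete, the following is a minimal, hypothetical sketch (not from the article) of one well-known attack, the fast gradient sign method: it adds a small, gradient-guided perturbation to an input so that a trained classifier changes its output. The model, inputs, and epsilon value are placeholders for illustration only.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarially perturbed copy of x (pixel values assumed in [0, 1])."""
    # Track gradients with respect to the input, not the model weights.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid range.
    with torch.no_grad():
        perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Even a perturbation too small for a human reader to notice can flip a model's prediction, which is why covert modification of input data appears here as an ethical risk.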
Questions to ask:

Regarding training data
- Do the authors thoroughly explain how they collected, processed, and organized the data?
- Do the authors thoroughly describe the characteristics of the data and patients, including demographic, clinical, and technical characteristics?
- Do the authors explicitly disclose anticipated biases in the data, as well as unintended consequences and pitfalls that could result from those biases?

Regarding test data and results (in addition to the questions above)
- Do the authors clearly state whether the test data were collected prospectively or retrospectively?
- Do the authors clearly state whether the test data were a subsample of the initial dataset from which the training data were also drawn, or independent external data?
- For external data, do the authors clearly state whether the test data represent a convenience series or a clinical cohort?
- For a clinical cohort, do the authors clearly explain the patient eligibility criteria and which specific clinical settings they represent?
- If test datasets from multiple institutions were used, do the authors report results from each individual institution? (A sketch of such stratified reporting follows this table.)
- If the study was prospectively registered in a publicly accessible registry, do the authors identify it by providing the name of the registry and a study identifier?

Regarding interpretation of study results
- Do the authors interpret the results explicitly and avoid over-interpretation?

Regarding sharing of algorithms and data
- If the authors are willing to share their algorithms and data, do they explain in the report how to access them (e.g., by providing a link to a download page)?
- For shared data, do the authors explain how they have ensured patient privacy and data protection?
AI, artificial intelligence.
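One way to act on the items above about per-institution results and data bias is to report performance stratified by institution or patient subgroup rather than only pooled. The sketch below is a hypothetical illustration, assuming a pandas DataFrame with `label`, `predicted_prob`, and `institution` columns; these names are illustrative and not from the original article.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_group(df: pd.DataFrame, group_col: str = "institution") -> pd.Series:
    """Compute AUC separately within each group.

    Assumes every group contains both positive and negative labels;
    roc_auc_score raises an error otherwise.
    """
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g["label"], g["predicted_prob"])
    )

# Example usage:
#   per_site = auc_by_group(test_df, group_col="institution")
# A single pooled AUC can hide an institution or subgroup on which
# the model performs poorly.
```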