Human Accountability and Responsibility Needed to Protect Scientific Integrity in an Age of AI, Says New Editorial
In an editorial published on May 21, 2024, in the Proceedings of the National Academy of Sciences, an interdisciplinary group of experts urges the scientific community to follow five principles of human accountability and responsibility when using artificial intelligence in research. The authors also call for the establishment of a Strategic Council on the Responsible Use of AI in Science to provide ongoing guidance and oversight on responsibilities and best practices as the technology evolves.

The editorial emphasizes that advances in generative AI represent a transformative moment for science, one that will accelerate scientific discovery but also challenge core norms and values of science, such as accountability, transparency, replicability, and human responsibility.

"We welcome the advances that AI is driving across scientific disciplines, but we also need to be vigilant about upholding long-held scientific norms and values," said National Academy of Sciences President Marcia McNutt, one of the co-authors of the editorial. "We hope our paper will prompt reflection among researchers and set the stage for concerted efforts to protect the integrity of science as generative AI increasingly is used in the course of research."

The 24 authors of the editorial were convened by the National Academy of Sciences, the Annenberg Public Policy Center of the University of Pennsylvania, and the Annenberg Foundation Trust at Sunnylands to explore emerging challenges posed by the use of AI in research and to chart a path forward for the scientific community. The group included experts in...