
Ethical considerations in AI for child health and recommendations for child-centered medical AI

Published via Nature and other sources, 2025-03-10 04:08



Abstract


Unlike in adult medicine, there does not exist any previous comprehensive review on AI ethics in child health, nor any guidelines for its management. This review describes ethical principles in AI for child health and provides recommendations for child-centered medical AI. We also introduce the Pediatrics EthicAl Recommendations List for AI (PEARL-AI) framework for clinicians and AI developers to ensure ethical AI-enabled systems in healthcare for children.


Introduction


Children are not miniature versions of adults, as children undergo age-associated changes in organ function and neurodevelopment [1,2]. Even within the pediatric age group of 0–18 years, there is a large disparity between preterm neonates with immaturely developed organs and post-pubertal adolescents with adult physiology [1,3].

Artificial Intelligence (AI) is playing an increasingly important role in healthcare. In pediatrics, AI is used in a wide variety of fields, such as in radiology for the diagnosis of developmental dysplasia of the hip [4] and in genetics for the diagnosis of rare diseases [5]. AI holds much promise for improving the healthcare of children worldwide, including in less developed and underprivileged communities with limited access to specialist pediatricians.

There is widespread awareness of the importance of AI ethics and governance for adults, but less emphasis has been placed on AI ethics and governance for children. This review article aims to describe ethical principles and challenges in the use of AI in healthcare for children. Important ethical principles that will be covered include non-maleficence, beneficence, autonomy, justice and transferability, transparency and explainability, privacy, dependability, auditability, knowledge management, accountability, and trust.

The final section of this article provides recommendations for child-centered medical AI.

Methodology


A literature search in PubMed for relevant articles related to AI ethics in child health was conducted in January 2024 and repeated in September 2024. We conducted a search using “Artificial Intelligence or AI” and “Ethics” as search terms and “Guidelines”, “Practice Guidelines”, “Review”, and “Systematic Review” as the article type.


The filters for “Age: Child: birth-18 years” and “Article Language: English” were also applied. The abstracts in the returned search were reviewed for articles that either discussed AI ethics in child health or provided recommendations and guidelines on ensuring ethical AI in child health. Articles that fulfilled the above criteria were selected to have their full text reviewed.


The lists of references from the selected articles were also screened to obtain further relevant articles.

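The PubMed strategy above can be expressed as a reproducible query. The sketch below builds an NCBI E-utilities ESearch URL for it; the exact field tags and filter syntax are our assumptions based on the description in the text, not the authors' recorded query.

```python
from urllib.parse import urlencode

# Assumed reconstruction of the search described above: ("Artificial
# Intelligence" OR AI) AND Ethics, restricted to guidelines, reviews and
# systematic reviews, with the child-age and English-language filters.
term = (
    '("Artificial Intelligence"[Title/Abstract] OR "AI"[Title/Abstract]) '
    'AND "Ethics"[Title/Abstract] '
    'AND ("Guideline"[Publication Type] OR "Practice Guideline"[Publication Type] '
    'OR "Review"[Publication Type] OR "Systematic Review"[Publication Type]) '
    'AND "child"[MeSH Terms] '   # Age filter: Child: birth-18 years
    'AND "english"[Language]'
)

def build_esearch_url(term: str, retmax: int = 100) -> str:
    """Build an ESearch URL that returns matching PMIDs as JSON."""
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return base + "?" + urlencode(params)

url = build_esearch_url(term)
print(url.startswith("https://eutils.ncbi.nlm.nih.gov"))
```

Fetching this URL (e.g. with `urllib.request`) would return the matching PMIDs for screening; the screening of abstracts and reference lists described above remains a manual step.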

A literature search for relevant articles related to AI ethics in children was similarly conducted in January 2024 using the Google search engine. We conducted a search using “AI ethics in children” as the search phrase and obtained the first 40 results returned. The webpages in the returned search were reviewed for articles that either discussed AI ethics in children or provided recommendations and guidelines on ensuring ethical AI in children.


Articles that fulfilled the above criteria were selected to have their full text reviewed. The lists of references from the selected articles were also screened to obtain further relevant articles.


A search was also performed for policy documents or position statements on the websites of organizations that were deemed relevant to our review. These included UNICEF (on children), WHO (on health), International Pediatric Association, American Academy of Pediatrics, Royal College of Paediatrics and Child Health (on children’s health), International Medical Informatics Association and American Medical Informatics Association (on medical informatics).


Policy documents or position statements that fulfilled the above criteria were selected to have their full text reviewed. The lists of references from the selected documents were also screened to obtain further relevant articles.


Our search strategy revealed that there does not exist any previous comprehensive review or framework on AI ethical issues in child health, nor are there any guidelines for management, unlike in adult medicine [6,7]. There is one published 2-page review, with a framework based on only 7 references, on the ethics of AI in pediatrics, focusing mainly on the use of generative AI chatbots that utilize Large Language Models [8].

There are, however, publications that address AI ethical issues in subspecialty pediatric medicine. These include embryology [9], neonatology [10], genomic medicine [5], and radiology [11,12].

There are several guidelines on the ethics of using AI in children [13,14,15,16], but these are not specific to the practice of medicine.

Ethical considerations


First described in 1979, Beauchamp and Childress’s landmark work [17] on foundational principles of medical ethics is ever more important in considering the ethical debate surrounding AI-enabled applications and usage. The key principles highlighted then, namely autonomy, beneficence, non-maleficence, and justice, have been cornerstones of ethical discussions in healthcare.

Jobin et al. identified other ethical concerns with regard to AI [18], including transparency, privacy, and trust. The American Medical Informatics Association (AMIA) has also defined additional AI principles that include dependability, auditability, knowledge management and accountability [7]. Unfortunately, some of these ethical principles may conflict with one another, such as justice and privacy, as illustrated below.

Non-maleficence


Non-maleficence implies the need for AI to be safe and not to cause harm [18,19]. References to non-maleficence in AI ethics occur more commonly than beneficence [18], likely due to society’s concerns that AI may intentionally or unintentionally inflict harm. Prioritizing non-maleficence before beneficence when approaching AI systems by no means suggests that AI systems are fraught with risks or harm; rather, it highlights the approach to ethical issues in the context of AI.

Before an AI system is implemented for child health, there must exist convincing evidence that it results in no harm or that benefits can be confidently expected to outweigh harm, notwithstanding any benefits that it can bring to the children. Evidence-based health informatics (EBHI) supports the use of concrete scientific evidence in decision-making regarding the implementation of technological healthcare systems [20].

In embryology, the process of in-vitro fertilization involves selecting the best embryo for transfer. Ethical principles guide the selection of one embryo over another. The ‘best’ embryo has the highest potential to result in a viable pregnancy, whilst preventing the birth of children with conditions that would shorten their lifespan or significantly decrease their quality of life [9]. AI has been used to rank embryos using images and time-lapsed videos as input [21]. AI has also been used in pre-implantation genetic screening of embryos non-invasively, without the need for an embryo biopsy [22]. In 2019, scientists in a fertility clinic in Australia developed a non-invasive test (that did not use AI) for preimplantation genetic screening of embryos [23,24], and introduced it prematurely for clinical use [9]. There was a marked discrepancy in results between validation studies and real-world clinical experience [25]. Importantly and significantly, embryos erroneously deemed genetically abnormal by the novel test and unsuitable for transfer appear to have been discarded [26], resulting in a class action suit in Australia [27]. Although the non-invasive test above did not utilize AI, it nevertheless serves as a cautionary tale. Experts have argued that prioritizing embryos for transfer using novel technologies, such as AI, is acceptable [9], but discarding embryos based on unproven advances is not [9]; thereby emphasizing the need for caution and a balanced approach to ensure that the benefits of novel technologies outweigh any potential harm.

AI might deepen existing moral controversies. For example, coupled with whole genome or exome sequencing, AI could facilitate massive genomic examination of embryos for novel disorders, dispositions, or polygenic risk of disease or non-disease traits (such as intelligence). This would move beyond targeted preimplantation genetic diagnosis to massive prenatal “screening”, raising significant ethical issues and even facilitating polygenic editing.


AI systems used in healthcare are often designed to include a “human-in-the-loop”. The prediction made by the AI system is checked by a human expert, such that the AI augments but does not automate decision-making. The knowledge, skills, experience and judgment of the healthcare professional are important in case contextualization, as no case is “standard” and each child comes with his or her own unique medical, family and social history.


Although having a human in the loop decreases the risk of an AI causing harm, there is a risk of introducing human bias and decreasing justice and fairness. AI-enabled decisions are more objective and reproducible, unless the source training data was biased or derived from a population disparate from the one in which the AI is being used.

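The human-in-the-loop pattern described above can be sketched minimally as follows; the function and field names are illustrative inventions, not taken from any specific clinical system.

```python
from typing import Callable

# Minimal sketch of a human-in-the-loop decision flow: the AI output is only a
# recommendation, and a clinician must confirm or override it before it becomes
# the recorded decision. All names here are hypothetical.

def final_decision(ai_prediction: str, clinician_review: Callable[[str], str]) -> dict:
    """The AI augments, but does not automate, the decision."""
    clinician_call = clinician_review(ai_prediction)
    return {
        "ai_prediction": ai_prediction,          # what the model recommended
        "clinician_decision": clinician_call,    # what is actually recorded
        "overridden": clinician_call != ai_prediction,
    }

# Example: the clinician overrides the AI in light of the child's unique history.
decision = final_decision(
    "discharge",
    clinician_review=lambda pred: "admit for observation",
)
print(decision["overridden"])  # True: the human judgment prevailed
```

Logging the `overridden` flag also gives a simple signal for auditing how often clinicians disagree with the model in practice.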

AI systems that are used outside of healthcare settings can also have an impact on children’s health. Social media and streaming platforms are changing how children interact with content. With touchscreen technology and intuitive user interfaces, even very young children can access these applications with ease [28]. AI recommendation algorithms are optimized to keep children engaged on the platform for extended periods rather than to prioritize content quality [29]. Multiple studies have highlighted the adverse effect of prolonged screen time on the cognitive and neurobehavioral development of children [30,31], and on the development of obesity [32] and its related complications. Excessive screen time is positively associated with behavioral and conduct problems, developmental delay, speech disorder, learning disability, autism spectrum disorders and attention deficit hyperactivity disorder, especially for preschoolers and boys, and the dose-response relationships are significant [30].

Beneficence


Beneficence, or promoting good, can be seen as benefiting an individual or a group of persons collectively [18]. AI must benefit all children, including children from different ages, ethnicities, geographical regions and socioeconomic conditions. These include the most marginalized children and children from minority groups.

In healthcare, AI has demonstrated its ability to benefit the care of sick children in out-patient [33,34] and in-patient care [35]. In genomics, AI has been used in both prenatal and pediatric settings. AI can use genotypes to predict phenotypes (genotype-to-phenotype) and can also use phenotypes to predict genotypes (phenotype-to-genotype). Identifai Genetics can determine in the first trimester of pregnancy whether there is a higher chance a baby will be born with any genetic disorder, using cell-free fetal DNA circulating in the maternal blood [33], allowing in-utero treatment of some genetic diseases. Face2Gene uses deep learning and computer vision to convert patient images into de-identified mathematical facial descriptors [36,37]. The patient’s facial descriptors are compared to syndrome gestalts to quantify similarity (gestalt scores) and generate a prioritized list of syndromic diagnoses [36,37]. Face2Gene supports over 7000 genetic disorders [34] and is routinely used in clinical practice by geneticists.

An AI platform combining genomic sequencing with automated phenotyping using natural language processing prospectively diagnosed three critically ill infants in intensive care with a mean time saving of 22 h, and the early diagnosis impacted treatment in each case [35]. In these time-critical scenarios, rapid diagnosis by AI can have a meaningful impact to improve clinical outcomes for these seriously ill children with rare genetic diseases. It also allows transfer to palliative care and avoidance of invasive procedures for diagnoses that are incompatible with life.

Autonomy


Autonomy can be viewed as positive freedom or negative freedom [18]. Positive freedom is seen as the ability for self-determination [38], whereas negative freedom is the ability to be free from interference, such as from technological experimentation [39] or surveillance [40].

Unlike adults, who are able to consent for themselves, children require a parent or legal guardian to provide consent for the collection of their medical data or the use of an AI-enabled device. Decisionally competent adolescents have developing autonomy, and their consent should be sought, as well as that of their parents.


Gillick competence can be applied when determining whether a child under 16 is competent to consent [41,42]. Gillick competence is dependent on the child’s maturity and intelligence, and higher levels of competence are required for more complicated decisions. Consent obtained from a Gillick-competent child cannot be overruled by the child’s parents. However, when a Gillick-competent child refuses consent, consent can be obtained from the child’s parent or guardian.

In accordance with the United Nations Convention on the Rights of the Child, every child has the right to be informed and to express their views freely regarding matters relevant to them, and these views should be considered in accordance with the child’s maturity [43]. Although a younger child is not legally able to give consent, the child has the freedom to assent or dissent after being informed in age-appropriate language [44].

The use of AI in pediatric care should not infringe the child’s right to an open future [45]. This can occur through infringements of confidentiality and privacy, or more generally if decisions made on the basis of AI unreasonably narrow the child’s future options.

Justice and transferability


Justice is defined as fairness in terms of access to AI [18,19], data [18], and the benefits of AI [18,46], and the prevention of bias [18,19] and discrimination [18,19]. Justice encompasses equity for all, including vulnerable groups such as minority groups, mothers-to-be and children. AI must benefit all children, including children from different ages, ethnicities, geographical regions and socioeconomic conditions.

Underprivileged communities, including their children, are similarly disadvantaged in the digital world [47]. Technology (including AI) may increase inequality in under-resourced, less-connected communities [48] due to limited access to technology and lower digital literacy. This impacts the ability of the healthcare teams in these communities to leverage AI in both adult and pediatric medicine. Moreover, machine learning algorithms trained on pediatric data from developed countries may not be applicable to children in less developed countries, resulting in incorrect predictions.

AI applications that were trained on non-representative populations can potentially perpetuate rather than reduce bias [49]. AI systems thus risk compromising children’s right to equitable access to the highest attainable standard of healthcare [43].

However, AI can also promote equality by connecting under-developed communities to developed communities. The Pediatric Moonshot project was launched in 2020 in an effort to reduce healthcare inequity, lower cost and improve outcomes for children globally [50]. The Pediatric Moonshot project aims to link all the children’s hospitals in the world on the cloud by creating privacy-preserving, real-time AI applications based on access to data. Edge zones have been deployed on 3 continents (North America, South America, and Europe). There is a shortage of specialist pediatricians in underdeveloped countries, and the Pediatric Moonshot project includes Mercury, a global image-sharing network that allows non-children’s hospitals or clinics to share images with pediatricians in children’s hospitals for expert opinion.

The Pediatric Moonshot project also includes Gemini, an AI research lab for children, designed to pioneer privacy-preserving, decentralized training of AI applications in child health that can also be deployed on mobile devices for use by doctors serving under-privileged communities.

Algorithmic bias is the systematic under- or over-prediction of probabilities for a specific population, such as children. Fairness (unbiasedness) is multifaceted, has many different definitions, and can be measured by various metrics [51]. Fairness metrics used for AI models in healthcare include well-calibration, balance for the positive class, and balance for the negative class. It is important to note that these 3 conditions for fairness cannot typically be achieved at the same time by an AI model, except under very specific conditions [52]. Hence, there is no universal one-size-fits-all definition of fairness, and some definitions are incompatible with others. The appropriate definition and metric of fairness largely depends on the healthcare context.
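To make the three conditions concrete, the toy sketch below computes them for a hypothetical risk score across two groups. All data, group names and thresholds are invented for illustration; "balance for the positive (negative) class" is read here as equal mean scores among truly positive (negative) cases across groups, and well-calibration as the observed outcome rate matching the score within a score bin.

```python
# Toy illustration of the three fairness conditions named above, computed for a
# hypothetical risk model's scores across two invented patient groups.

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

def fairness_report(records):
    """records: list of (group, score, outcome) tuples with outcome in {0, 1}."""
    groups = sorted({g for g, _, _ in records})
    report = {}
    for g in groups:
        scores = [(s, y) for grp, s, y in records if grp == g]
        # Balance for the positive class: mean score among true positives
        pos = mean([s for s, y in scores if y == 1])
        # Balance for the negative class: mean score among true negatives
        neg = mean([s for s, y in scores if y == 0])
        # Well-calibration: among cases scored ~0.7, ~70% should be positive
        calib = mean([y for s, y in scores if 0.6 <= s < 0.8])
        report[g] = {"pos_balance": pos, "neg_balance": neg, "calib@0.7": calib}
    return report

data = [
    ("adult", 0.7, 1), ("adult", 0.7, 1), ("adult", 0.7, 0), ("adult", 0.2, 0),
    ("child", 0.7, 1), ("child", 0.7, 0), ("child", 0.7, 0), ("child", 0.3, 1),
]
print(fairness_report(data))
```

On this invented data the mean score among true positives differs between groups (0.7 for "adult" vs 0.5 for "child"), illustrating how a model can look acceptable on one metric and fail another; the incompatibility result cited above says such trade-offs are generally unavoidable.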

AI-enabled devices that were trained on adult data only may underperform when used in children. Several studies have investigated the use of adult AI in pediatric patients, and the results have highlighted difficulties in generalizing AI across the age spectrum [53,54,55,56,57]. For example, AI developed to detect vertebral fractures in adults was unreliable in children, with a low sensitivity of 36% for the detection of mild vertebral fractures [54]. A deep learning algorithm, EchoNet-Peds, that was trained on pediatric echocardiograms performed significantly better at estimating ejection fraction than an adult model applied to the same data [55]. As pediatric care is commonly undertaken in facilities that manage both adults and children, AI-enabled devices not evaluated in children could unwittingly be used by healthcare providers on children, resulting in adverse outcomes. Thus far, most AI-driven radiology solutions have been designed for adult patients. Of late, radiology imaging advocacy groups have appealed to the US Congress to create policies that address the lack of AI-based innovations tailored specifically for pediatric care [58].

As such, it is important to consider the transferability of AI systems to the context of pediatric healthcare. Transferability is a measure of how effectively a health intervention, initially evaluated and validated in one context, can be applied to another [59]. AI models are prone to systemic bias arising from the training data, which limits their range of application. Even if the training data originates from a diverse population, differences in quantity can greatly skew outputs. Children from diverse backgrounds may experience vastly different health challenges, due to factors such as demographic characteristics, upbringing, culture, access to healthcare services, and their surrounding environment. Failure to account for these differences could lead to bias and disparities in the quality of care, disproportionately affecting vulnerable children.

Transparency and explainability


Transparency includes both technological transparency and organizational transparency. Technological transparency refers to the communication and disclosure of the use of AI to stakeholders [18,19], including the healthcare team, pediatric patients, and their parents or guardians. Parents value transparency, and disclosure pathways should be developed to support this expectation [60]. Transparency also refers to efforts to increase the explainability and interpretability of AI-enabled devices [18].

Organizational transparency refers to the disclosure to patients and parents of conflicts of interest. It is not uncommon for AI-enabled mobile health applications to have both a diagnostic and a therapeutic arm, wherein a diagnosis made is followed by a redirection of the user to an e-commerce platform with therapeutic products, such as in esthetic medicine websites that are also used by adolescents.


Appropriate disclosure of any conflicts of interest between the developer of the AI diagnostic app and the manufacturer of the recommended therapeutic products is frequently absent.


Transparency is seen as a key enabler of the various ethical principles. Only with transparency and understanding can there be non-maleficence, autonomy [18], and trust [46,61,62,63].

Privacy


Privacy relates to the need for data protection and data security [18]. While privacy is a right of all children as per the UN Convention on the Rights of the Child [43], there is marked variability in adolescent privacy laws, not only between countries but also between states within the same country, on consent and privacy regarding substance abuse, mental health, contraception, human immunodeficiency virus infection, and other sexually transmitted infections [64]. This creates challenges for AI developers looking to build AI systems for these health conditions in older children.

Fitness trackers and wearables, and digital health apps such as menstruation-tracking, sleep-tracking, and mental health apps, are popular among adolescents [65]. These commercial apps collect sensitive data, including real-time geolocation data and reported or inferred emotional states [65]. As mobile phone apps collect a large amount of identifying data, it is almost impossible to de-identify the data in order to protect privacy [66]. Stigma and discrimination can result from leakage of sensitive health data, and while this negatively affects patients of all ages, the vulnerability and young age of children mean that any inadvertent disclosure of such data would have longer-lasting effects in children [65].

De-identified data is typically used to train AI systems. However, there is a real possibility that de-identified pediatric data may be re-identified, particularly for children with rare genetic diseases, resulting in an infringement of privacy and possible harm. Larger datasets, which include data from pediatric patients, are needed for the unbiased training of AI-enabled devices used by children. Unfortunately, this may result in not only the loss of autonomy, but also the possibility of re-identification and loss of privacy for certain children and their families.
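One simple way to gauge this re-identification risk is a k-anonymity check on quasi-identifiers: a record whose combination of attributes is unique in the released dataset, as rare-disease records often are, can be singled out and linked to an individual. A minimal sketch with invented records:

```python
from collections import Counter

# Hedged sketch of a k-anonymity check on de-identified pediatric records.
# A record whose combination of quasi-identifiers (age band, region, diagnosis)
# appears fewer than k times is a re-identification risk; rare genetic
# diagnoses tend to create exactly these singleton classes. All records and
# field names here are invented for illustration.

def at_risk_records(records, quasi_ids, k=2):
    """Return records whose quasi-identifier combination appears < k times."""
    keys = [tuple(r[q] for q in quasi_ids) for r in records]
    counts = Counter(keys)
    return [r for r, key in zip(records, keys) if counts[key] < k]

records = [
    {"age_band": "0-4", "region": "North", "diagnosis": "asthma"},
    {"age_band": "0-4", "region": "North", "diagnosis": "asthma"},
    {"age_band": "5-9", "region": "South", "diagnosis": "rare syndrome X"},
]

risky = at_risk_records(records, ["age_band", "region", "diagnosis"])
print(len(risky))  # only the rare-syndrome record is uniquely identifiable
```

In practice, mitigations such as generalizing attributes (wider age bands, coarser regions) or suppressing singleton records reduce this risk, at some cost to the data's utility for training.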

Dependability


Dependability refers to the need for AI systems to be robust, secure and resilient; in the event of a malfunction, the system must ensure that it does not put the patient or the clinical setting in an unsafe state [7]. This principle is especially important for pediatric patients, as they may be less capable of voicing concerns or understanding risks, and less likely than adults to be aware when an adverse event has occurred. Without proper supervision, such malfunctions can be catastrophic.

Auditability


Auditability is the requirement for any capable AI system to document its decision-making process via an “audit trail” which captures input and output values as well as changes in performance and model states [7]. This is a layer of transparency that is critical for understanding how the model functions and evolves over time. In pediatric care, this allows clinicians to ensure that recommendations made by an AI-enabled system align with the needs of children, and to identify any systematic error that may disproportionately affect them. The audit log is also important for clinicians to evaluate changes within the system over time. For medico-legal purposes, the audit trail for AI in pediatric patients may need to be retained until the age of maturity (18 years) plus an additional 3 years (i.e., until age 21) [7].
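As a sketch of what one such audit-trail entry might capture, including a retention date of age 18 plus 3 years, consider the following; all field names, the model name and the patient details are invented for illustration, not drawn from any real system.

```python
import json
from datetime import date

# Hedged sketch of an audit-trail entry for a pediatric AI prediction,
# recording the inputs, output, model state and human sign-off described
# above. Every identifier here is hypothetical.

def retention_until(date_of_birth: date) -> date:
    """Retain the record until the patient turns 18 + 3 = 21 years old."""
    return date_of_birth.replace(year=date_of_birth.year + 21)

def audit_entry(patient_dob, model_version, inputs, output, reviewer):
    return {
        "model_version": model_version,  # which model state produced the output
        "inputs": inputs,                # input values as seen by the model
        "output": output,                # prediction returned to the clinician
        "human_reviewer": reviewer,      # human-in-the-loop sign-off
        "retain_until": retention_until(patient_dob).isoformat(),
    }

entry = audit_entry(
    patient_dob=date(2020, 6, 1),
    model_version="pneumonia-cxr-v1.3",
    inputs={"age_months": 48, "image_id": "cxr-001"},
    output={"label": "pneumonia", "probability": 0.91},
    reviewer="dr_tan",
)
print(json.dumps(entry, indent=2))
```

An append-only store of such entries would let a clinician or auditor replay how outputs for a given model version drifted over time, which is the evaluation use the text describes.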

Knowledge management


Children’s health can be significantly impacted by a wide range of factors, from genetic to environmental. In the present day, these factors can fluctuate widely within short periods of time and vary among children. As a result, AI models for pediatric healthcare may become outdated and less effective as time goes on.

儿童的健康可能受到从遗传到环境等多种因素的显著影响。在当今时代,这些因素可能在短时间内发生大幅波动,并因儿童而异。因此,随着时间推移,用于儿科医疗的AI模型可能会变得过时且效果降低。

Accountability

问责制

Accountability is the requirement for organizations responsible for creating, deploying and maintaining the AI system to actively supervise its usage and address any concerns raised

问责制是指负责创建、部署和维护人工智能系统的组织需要积极监督其使用,并解决任何引发的问题。

7

7

. As we have mentioned above, children represent an especially vulnerable population who may be unaware of the potential risks from AI systems. It is then up to parents and clinicians to voice concerns regarding the safety of the child. Accountability ensures that any potential failures in AI systems do not disproportionately burden individual clinicians but are addressed in a way that protects both healthcare providers and the children under their care.

如上所述,儿童是一个特别脆弱的群体,可能意识不到人工智能系统的潜在风险。因此,父母和临床医生有责任对儿童的安全问题提出关切。问责制确保人工智能系统任何潜在的故障不会不成比例地加重个别临床医生的负担,而是以一种保护医疗服务提供者及他们所照顾的儿童的方式来处理。

Accountability also encompasses professional liability. The clinician in charge of the patient is potentially liable for any harm from use of the AI-enabled system on their pediatric patients, and their professional license is at risk. In future, clinicians could also be held accountable for failing to utilize AI-enabled systems on their patients if this becomes the standard of care.

问责还涵盖专业责任。负责患者治疗的临床医生可能要对使用人工智能系统对其儿科患者造成的任何伤害负责,并且其专业执照也面临风险。未来,如果使用人工智能系统成为护理标准,临床医生也可能因未能对其患者使用人工智能系统而被追究责任。

Trust

信任

Trust refers to trustworthy AI and is a byproduct of the above ethical principles. It is generally recognized that trust is needed for AI adoption and for AI to fulfill its potential for good. Conversely, it can be argued that trust is the one ethical principle that should never be absolute, in that we should never place complete trust in an AI-enabled medical device.

信任是指值得信赖的人工智能,是上述伦理原则的副产品。人们普遍认为,人工智能的采用以及充分发挥其向善的潜力需要信任。相反,可以认为信任是一个我们不应完全具备的伦理原则,因为绝不应完全信任一个启用了人工智能的医疗设备。

Recommendations for child-centric medical AI

以儿童为中心的医疗人工智能建议

At present, none of the professional bodies for child health (including the International Pediatric Association representing pediatricians from over 144 countries in over 176 member societies, the American Academy of Pediatrics, and the Royal College of Paediatrics and Child Health in the United Kingdom) have published a set of guidelines or recommendations for child-centered medical AI.

目前,所有儿童健康专业机构(包括代表超过 144 个国家/地区、176 个成员学会的儿科医生的国际儿科协会、美国儿科学会以及英国皇家儿科与儿童健康学院)均未发布针对以儿童为中心的医疗人工智能指南或建议。

Similarly, none of the medical informatics associations (including the International Medical Informatics Association and the American Medical Informatics Association) have published guidelines or recommendations for pediatric medical AI. What is currently available are 1) guidelines for AI ethics and governance in adult medicine.

同样,没有任何医学信息学协会(包括国际医学信息学协会和美国医学信息学协会)发布过针对儿科医疗人工智能的指南或建议。目前可用的只有 1) 成人医学中的人工智能伦理和治理指南。

6

6

,

7

7

and 2) policy documents from United Nations Children’s Fund (UNICEF) and the like on AI ethics and governance pertaining to children

以及2)联合国儿童基金会(UNICEF)等机构发布的与儿童相关的AI伦理和治理政策文件

13

13

,

14

14

,

15

15

,

16

16

but not specific to child health.

但并非特定于儿童健康。

In this review paper, we based our recommendations for child-centered AI on the policy guidance by UNICEF

在本综述论文中,我们基于联合国儿童基金会的政策指导,提出了以儿童为中心的人工智能建议。

13

13

, and we elaborated on these recommendations in the context of child health.

,并在儿童健康的背景下详细阐述了这些建议。

The overarching recommendations by UNICEF are to develop and deploy AI systems in a manner that upholds children’s collective rights to protection, provision and participation whilst nurturing various stakeholders and adapting to the national or local context

联合国儿童基金会的总体建议是以维护儿童的保护权、提供权和参与权的方式开发和部署人工智能系统,同时培养各利益相关者并适应国家或地方的背景。

13

13

. UNICEF has specific recommendations that are discussed below.

联合国儿童基金会有一些具体的建议,下面将进行讨论。

Ensure AI used in healthcare promotes children’s development and wellbeing

确保医疗保健领域使用的人工智能促进儿童的发展和福祉

UNICEF recognizes that AI systems can support the realization of every child’s right to good health and to flourish across mental, physical, social, and environmental spheres of life

联合国儿童基金会认识到,人工智能系统可以支持实现每个儿童在身心健康、社会和环境生活各个领域的权利和福祉。

13

13

. UNICEF recommends prioritizing how AI systems can benefit children and to leverage AI to support children’s well-being

联合国儿童基金会建议优先考虑人工智能系统如何使儿童受益,并利用人工智能支持儿童的福祉。

13

13

. AI design should adopt a child-centered approach, which should include safety-by-design, privacy-by-design and inclusion-by-design.

人工智能设计应采取以儿童为中心的方法,其中包括安全设计、隐私设计和包容性设计。

Ensure inclusion of and for children during the design and development of healthcare AI

确保在医疗人工智能的设计和开发过程中包含儿童的需求。

All four ethical principles (respect for autonomy, non-maleficence, beneficence, and justice) require high-quality evidence, and this includes AI. There must be an inclusive design approach when developing AI products that will be used by children or impact them, and there should be meaningful child participation, both in AI policies and in the design and development processes.

所有四项伦理原则(尊重自主性、不伤害、行善和正义)都需要高质量的证据,这包括人工智能。在开发供儿童使用或影响儿童的人工智能产品时,必须采用包容性设计方法,并且在人工智能政策以及设计和开发过程中应有有意义的儿童参与。

13

13

. Ideally, this should include randomized controlled trials of the use of AI in children where feasible.

理想情况下,只要可行,这应该包括在儿童中使用人工智能的随机对照试验。

Within the health care context, conducting clinical trials in children is challenging due to the heterogeneity of the subjects

在医疗保健背景下,由于受试者的异质性,在儿童中进行临床试验具有挑战性。

1

1

and ethical concerns resulting in strict laws and ethical guidelines

以及导致严格法律和道德准则的伦理问题

67

67

,

68

68

. In addition, children are collectively a smaller population than adults, and children have fewer chronic diseases, making it less financially attractive for commercial vendors to develop AI-enabled devices for children. Notwithstanding the difficulties, the National Institute of Health states that “children (i.e., individuals under the age of 18) must be included in all human subjects research, conducted or supported by the NIH, unless there are scientific and ethical reasons not to include them”.

此外,儿童总体上是一个比成年人更小的群体,而且儿童患慢性疾病的情况较少,这使得为儿童开发人工智能设备对商业供应商来说在经济上不那么有吸引力。尽管存在这些困难,美国国立卫生研究院表示,“除非有科学和伦理上的原因不包括他们,否则所有由美国国立卫生研究院进行或支持的人类受试者研究都必须包括儿童(即18岁以下的个人)”。

69

69

,

70

70

. The NIH policy was developed because “medical treatments applied to children are often based upon testing done only in adults, and scientifically evaluated treatments are less available to children”

“应用于儿童的医疗方法通常仅基于在成人身上进行的测试,且科学评估过的治疗对儿童来说较少可用”,因此制定了美国国立卫生研究院政策。

69

69

. Specifically in the context of AI systems, the White Paper by the American College of Radiology (ACR) recommends the inclusion of pediatric patients in AI models that are developed and potentially applicable to children

特别是针对人工智能系统,美国放射学会(ACR)的白皮书建议在开发可能适用于儿童的人工智能模型中纳入儿科患者。

11

11

. Developers could be incentivized to develop AI using suitable pediatric data, resulting either in separate pediatric models or in combined adult and pediatric models. The ACR also recommends the incorporation of AI into clinical practice guidelines for children when appropriate

开发者可以通过使用合适的儿科数据来获得激励,从而开发出单独的儿科模型或成人与儿科结合的模型。美国放射学会(ACR)还建议在适当的情况下将人工智能纳入儿童临床实践指南。

11

11

.

Ensure AI used in healthcare for children prioritize fairness, non-discrimination and equitable access

确保儿童医疗领域使用的人工智能优先考虑公平性、非歧视性和平等获取机会。

The most marginalized children, including children from minority groups, should be supported so that they may benefit from AI systems. Datasets should include a diversity of children’s data, including children from different regions, ages, socioeconomic conditions, and ethnicities, in order to remove prejudicial bias against children or against certain groups of children that results in discrimination and exclusion.

最边缘化的儿童,包括少数群体的儿童,应得到支持,以便他们能够从人工智能系统中受益。数据集应包含多样化的儿童数据,包括来自不同地区、年龄、社会经济状况和种族的儿童,以消除对儿童或某些儿童群体的偏见,避免导致歧视和排斥。

13

13

.

The American Medical Informatics Association Position Paper states that “AI must be subject to increased scrutiny when applied to vulnerable groups including children, particularly in cases where such groups were under-represented in the data used to train the AI

美国医学信息学协会立场文件指出:“当人工智能应用于弱势群体,包括儿童时,必须受到更加严格的审查,尤其是在用于训练人工智能的数据中,这类群体代表性不足的情况下。”

7

7

.” All AI healthcare models used in children should be tested for fairness using an appropriate definition of fairness and a suitable metric of fairness that caters to the specific context. For AI-enabled devices that are used by both adults and children, the model must not be biased against children.

“所有在儿童中使用的AI医疗模型都应该使用适当的公平性定义和适合特定背景的公平性指标进行公平性测试。对于成人和儿童都使用的AI设备,该模型不得对儿童存在偏见。”

For AI-enabled devices that are exclusively designed for children, the model must not discriminate against any group of children (such as by race, geographical location, or socioeconomic situation). Given the known limitations in conducting clinical trials with children, we may sometimes have to argue for a benign form of discrimination in favor of children.

对于专为儿童设计的AI设备,模型不得对任何儿童群体(如种族、地理位置或社会经济状况)产生歧视。鉴于在儿童中进行临床试验存在已知的局限性,有时我们可能不得不主张一种对儿童有利的良性歧视。

ACCEPT-AI is a framework designed to evaluate AI studies that include pediatric populations and can be used to check for age-related algorithmic bias throughout the AI life cycle, from study design to post-deployment

ACCEPT-AI 是一个旨在评估包含儿科人群的 AI 研究的框架,可用于检查 AI 生命周期中与年龄相关的算法偏见,从研究设计到部署后阶段。

71

71

. If needed, pre-processing, in-processing, and/or post-processing can be implemented to mitigate bias. Bias should be minimized as far as possible, but it is not usually possible to totally eliminate bias.

如果需要,可以实施预处理、过程中处理和/或后处理来减轻偏差。偏差应尽可能减小,但通常无法完全消除偏差。
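As a hedged sketch of such fairness testing (the metric choice, toy data, and tolerance are illustrative assumptions, not prescribed values), one can compare the true-positive rate between pediatric and adult subgroups and flag gaps beyond a chosen tolerance:

作为此类公平性测试的示意性草图（指标选择、示例数据和容差均为说明性假设，并非规定值），可以比较儿科和成人子组之间的真阳性率，并标记超出容差的差距：

```python
def true_positive_rate(y_true, y_pred):
    """Sensitivity: fraction of actual positives correctly identified."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return float("nan")
    return sum(p == 1 for _, p in positives) / len(positives)

def fairness_gap(y_true, y_pred, groups, group_a, group_b):
    """Absolute TPR difference between two subgroups (an equal-opportunity gap)."""
    tpr = {}
    for g in (group_a, group_b):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        tpr[g] = true_positive_rate([y_true[i] for i in idx],
                                    [y_pred[i] for i in idx])
    return abs(tpr[group_a] - tpr[group_b])

# Toy data: the model misses more positives in children than in adults.
y_true = [1, 1, 1, 0, 1, 1, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
groups = ["child"] * 4 + ["adult"] * 4
gap = fairness_gap(y_true, y_pred, groups, "child", "adult")
TOLERANCE = 0.1  # illustrative threshold, not a regulatory value
print(f"gap={gap:.2f}, fair={gap <= TOLERANCE}")  # -> gap=0.67, fair=False
```

In practice the appropriate definition of fairness and its metric must be chosen for the specific clinical context, as noted above.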

However, ensuring fairness in AI for children goes beyond addressing algorithmic bias. Economic and organizational values must also be taken into account to ensure equitable access to AI-driven healthcare systems for all children, regardless of socioeconomic status. These developments should aim to provide better healthcare outcomes for children using fewer resources, and AI system providers should aim to provide business models that offer more value to users.

然而,确保儿童人工智能的公平性不仅仅是解决算法偏见的问题。还必须考虑经济和组织价值,以确保所有儿童,不论社会经济地位如何,都能公平地获得由人工智能驱动的医疗系统。这些发展应该旨在使用更少的资源为儿童提供更好的医疗结果,人工智能系统提供商应致力于提供为用户带来更多价值的商业模式。

72

72

. This ensures that new healthcare systems remain accessible to lower-income populations and reduces the burden on healthcare providers in under-developed communities. In this sense, AI developments must be inclusive, engaging a broad range of stakeholders to ensure that the perspectives of children, caregivers, healthcare providers, policymakers, and communities are incorporated into the design and deployment processes.

这确保了新的医疗系统仍然可以为低收入人群所用，并减轻了欠发达社区医疗服务提供者的负担。从这个意义上讲，人工智能的发展必须具有包容性，吸引广泛的利益相关者参与，以确保儿童、护理人员、医疗服务提供者、政策制定者和社区的观点被纳入设计和部署过程。

73

73

. Inclusivity helps mitigate the risk of further marginalizing vulnerable populations and ensures that the benefits of AI can be equitably distributed across diverse groups of children.

包容性有助于减轻进一步边缘化弱势群体的风险,并确保人工智能的益处能够公平地分配给不同群体的儿童。

Ensure AI enabled healthcare systems protect children’s data and privacy

确保启用人工智能的医疗系统保护儿童的数据和隐私

There must be a responsible data approach to the handling of children’s data

必须有负责任的数据方法来处理儿童的数据

65

65

. A balance must be found such that there is sufficient data about children for the development of AI systems while minimizing data collection to safeguard privacy and security

必须找到一种平衡,使得在尽量减少数据收集以保障隐私和安全的同时,仍有足够的儿童数据用于人工智能系统的发展。

65

65

. AI systems should adopt a privacy-by-design approach. Not only is there a need to protect an individual child’s right to privacy, but there is also a need to protect collective groups of children (such as a racial group) to prevent profiling

人工智能系统应采用隐私设计方法。不仅需要保护个别儿童的隐私权,还需要保护儿童群体(如种族群体)以防止被画像。

13

13

.

UNICEF promotes children maintaining control over their own data with the capacity to access, securely share, understand the use of, and delete their data, in accordance with their age and maturity

联合国儿童基金会促进儿童根据其年龄和成熟程度,保持对其自身数据的控制,具备访问、安全共享、理解数据用途以及删除数据的能力。

65

65

. However, parents and guardians need to provide consent for the use of younger children’s data. Furthermore, as children’s understanding develops with age, the consent process should be revisited periodically as the child grows

然而,父母和监护人需要为使用年幼孩子的数据提供同意。此外,随着孩子年龄的增长,他们的理解能力也会发展,因此在孩子成长过程中应定期重新审视同意过程。

13

13

. As children mature and attain the age of consent, they can reverse the consent previously provided by their parent or legal guardian and exercise their ‘right to be forgotten’ and for their data to be erased

随着儿童成熟并达到同意年龄,他们可以撤销之前由父母或法定监护人提供的同意,并行使他们的“被遗忘权”以及要求删除其数据的权利。

74

74

.
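A minimal sketch of this consent life cycle (the age threshold and return values are illustrative assumptions; the age of consent varies by jurisdiction) might look like:

该同意生命周期的一个最小草图（年龄阈值和返回值为说明性假设；同意年龄因司法管辖区而异）可能如下：

```python
from datetime import date

AGE_OF_CONSENT = 18  # jurisdiction-dependent; illustrative value

def age_in_years(dob: date, today: date) -> int:
    """Completed years of age on a given day."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def consent_holder(dob: date, today: date) -> str:
    """Who currently controls consent for the child's data (simplified sketch)."""
    return "child" if age_in_years(dob, today) >= AGE_OF_CONSENT else "parent_or_guardian"

def on_maturity_review(dob: date, today: date, child_decision: str) -> str:
    """Once the child reaches the age of consent, they may reaffirm or revoke the
    parental consent and exercise the 'right to be forgotten'."""
    if consent_holder(dob, today) != "child":
        return "parental consent remains in force; revisit periodically"
    return "data retained" if child_decision == "reaffirm" else "data erased on request"

print(consent_holder(date(2010, 1, 1), date(2024, 1, 1)))                # -> parent_or_guardian
print(on_maturity_review(date(2005, 1, 1), date(2024, 1, 1), "revoke"))  # -> data erased on request
```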

Ensure safety for children when AI is used in healthcare

确保在医疗保健中使用人工智能时儿童的安全

With respect to how AI systems (including AI-enabled mobile health applications) interact with users, children should not be exposed to content targeting that could harm their mental or physical health. Additionally, in keeping with online safety recommendations, children and their parents should have access to child safety tools.

关于人工智能系统(包括启用人工智能的移动健康应用程序)如何与用户互动,儿童不应接触可能损害其身心健康的内容。此外,根据在线安全建议,儿童及其父母应能获取儿童安全工具。

These tools should include options to control the content children are exposed to, limit the public visibility of profile information, restrict other users from contacting or interacting with an account used by a child, and manage location sharing.

这些工具应包括控制儿童接触内容的选项、限制个人资料信息的公开可见性、阻止其他用户与儿童使用的帐户联系或互动,以及管理位置共享功能。

75

75

.

UNICEF advocates continuously assessing and monitoring AI’s impact on children throughout the entire AI development life cycle and testing AI systems for safety, security, and robustness

联合国儿童基金会倡导在整个AI开发生命周期中持续评估和监控AI对儿童的影响,并测试AI系统的安全性、可靠性和稳健性。

13

13

. AI systems used in healthcare, in particular those used for children, should have appropriate human agency, oversight and control measures with humans in the loop as far as possible.

特别是那些用于儿童的医疗保健领域的人工智能系统,应尽可能地保持人在回路中,并具备适当的人类决策、监督和控制措施。

In Pediatric Medicine, off-label use of medication is common

在儿科医学中,药物的标签外使用很常见。

76

76

, as fewer medicines and dosage forms are licensed for the pediatric population

，因为获得许可供儿科人群使用的药物和剂型较少

77

77

. Legal restrictions on the conduct of clinical trials in children exacerbate the lag in the regulation of medicines for pediatric use

对儿童进行临床试验的法律限制加剧了儿科用药监管的滞后。

78

78

,

79

79

. Off-label use is associated with increased uncertainty on efficacy and increased risk for adverse effects. Significantly, more off-label medicines are prescribed in the neonatal and pediatric intensive care units

超说明书使用会增加对疗效的不确定性,并增加不良反应的风险。重要的是,更多的超说明书药物被开在新生儿和儿科重症监护室。

76

76

,

80

80

, and this may reflect the dire do-or-die situation that makes off-label drug use less of an issue for clinicians. With regards to off-label use of drugs and medical devices in the United States, once a drug or device receives regulatory approval, physicians can exercise professional judgment and legally prescribe the drug or device for any indication they deem safe and effective, irrespective of official FDA-approved indications.

,这可能反映出一种严峻的要么做要么死的局面,使得超说明书用药对临床医生来说不那么成问题。关于美国药物和医疗器械的超说明书使用,一旦某种药物或器械获得监管机构的批准,医生就可以行使专业判断,并合法地为他们认为安全有效的任何适应症开处方,而不论官方FDA批准的适应症为何。

81

81

. The American Academy of Pediatrics (AAP) Policy Statement on the Off-Label Use of Medical Devices in Children states that “The clinical need for devices to diagnose and treat diseases or conditions occurring in children has led to the widespread and necessary practice in pediatric medicine and surgery of using approved devices for off-label or physician- directed applications that are not included in FDA-approved labeling.

美国儿科学会(AAP)关于儿童医疗器械标签外使用的政策声明指出:“为了诊断和治疗儿童疾病或病症,临床上对医疗器械的需求已导致在儿科医学和外科手术中广泛且必要地使用经批准的器械进行标签外使用或医生指导的应用,而这些应用并未包含在FDA批准的标签中。”

This practice is common and often appropriate, even with the highest-risk (class III) devices.”

这种做法很常见,而且通常很合适,即使是针对最高风险(III 类)设备。"

82

82

The FDA Guidance document on “Off-Label and Investigational Use Of Marketed Drugs, Biologics, and Medical Devices” states that “If physicians use a product for an indication not in the approved labeling, they have the responsibility to be well informed about the product, to base its use on firm scientific rationale and on sound medical evidence.”

FDA关于“已上市药物、生物制品和医疗器械的超说明书使用和试验性使用”的指南文件指出:“如果医生将某种产品用于未经批准的适应症,他们有责任充分了解该产品,并基于坚实的科学依据和可靠的医学证据来使用。”

83

83

The AAP Policy Statement on the Off-Label Use of Drugs in Children states that “Off-label use is neither incorrect nor investigational if based on sound scientific evidence, expert medical judgment, or published literature.”

美国儿科学会关于儿童药物标签外使用的政策声明指出:“如果基于可靠的科学证据、专家医学判断或已发表的文献,标签外使用既不是错误的,也不属于试验性使用。”

84

84

The emphasis on scientific evidence and published literature avoids experimental and potentially unsafe practices.

强调科学证据和已发表的文献,避免了实验性和潜在不安全的做法。

Drawing from the experience of off-label drug and device use, with the added knowledge that AI systems behave unpredictably when applied to patients demographically different from their training population, there is an urgent need for additional research and recommendations by key opinion leaders on the risks and benefits of off-label use of AI-enabled devices in pediatric patients and to set new standards of evidence before AI is deployed on children.

借鉴标签外药物和设备使用的经验,并结合人工智能系统在应用于与其训练人群在人口统计学上不同的患者时行为不可预测的知识,关键意见领袖亟需对儿童患者中使用人工智能设备的标签外应用进行额外研究并提出建议,评估其风险和益处,并在人工智能应用于儿童之前制定新的证据标准。

Robust informatics evaluation frameworks are also crucial when developing AI systems for pediatric care to ensure that the design prioritizes ethics and equity, assessing limitations and risks while also helping users understand system logics.

在为儿科护理开发人工智能系统时,强大的信息学评估框架也至关重要,以确保设计优先考虑伦理和公平性,在评估局限性和风险的同时帮助用户理解系统逻辑。

85

85

. Evidence-based health informatics requires the need for concrete scientific evidence in assessing performances and risks associated with AI systems

基于证据的健康信息学需要具体的科学证据来评估与人工智能系统相关的性能和风险。

20

20

.

Moreover, AI models must be also continuously updated and retrained to account for new pediatric data

此外,人工智能模型还必须不断更新和重新训练,以考虑新的儿科数据。

7

7

, reflecting such changes to ensure that the outputs from the models remain accurate, reproducible, and relevant. This prevents degradations in model effectiveness from causing harm to children. Developers must clearly document when AI models are created, revalidated, and set to expire

,反映这些变化以确保模型输出保持准确、可重复和相关性。这防止了模型效能下降对儿童造成伤害。开发者必须明确记录AI模型的创建时间、重新验证时间和到期时间。

7

7

.
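Such lifecycle documentation could be sketched as a simple model card (field names and the revalidation cadence are hypothetical, not a standard schema):

此类生命周期文档可以草拟为一个简单的模型卡（字段名和重新验证周期为假设，并非标准模式）：

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ModelCard:
    """Lifecycle metadata a developer might document for a clinical AI model
    (illustrative sketch, not a standard schema)."""
    name: str
    created: date
    last_revalidated: date
    revalidation_interval_days: int = 365  # assumed cadence

    @property
    def expires(self) -> date:
        # The model is considered stale once the revalidation interval lapses.
        return self.last_revalidated + timedelta(days=self.revalidation_interval_days)

    def is_expired(self, today: date) -> bool:
        return today > self.expires

card = ModelCard("peds-sepsis-risk",               # hypothetical model name
                 created=date(2022, 3, 1),
                 last_revalidated=date(2024, 3, 1))
print(card.expires, card.is_expired(date(2025, 6, 1)))
```

An expired card would trigger retraining on newer pediatric data before further clinical use.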

Aside from this, AI systems need to adopt a fail-safe design and be thoroughly tested for robustness to ensure that their performance does not degrade in unforeseen circumstances

除此之外,人工智能系统需要采用故障安全设计,并经过全面的鲁棒性测试,以确保其性能不会在不可预见的情况下下降。

7

7

. This prevents potential malfunctions from compromising the safety of the child.

这可以防止潜在的故障影响孩子的安全。

AI-enabled systems must have cybersecurity measures in place to protect against unauthorized access, modification and disruption, whilst maintaining confidentiality, integrity and availability. These protective mechanisms must secure both the AI system as well as the data of the children.

启用人工智能的系统必须采取网络安全措施,以防止未经授权的访问、修改和中断,同时保持保密性、完整性和可用性。这些保护机制必须既保护人工智能系统,也保护儿童的数据。

Ensure that AI in healthcare supports transparency, explainability and accountability for children

确保医疗保健领域的人工智能为儿童提供透明度、可解释性和问责制支持

Medical professionals should be informed about the use of an AI-enabled device and the limitations of the AI system including inclusion and exclusion criteria. Parents and children who interact with an AI medical system also have the right to be informed using age-appropriate language and an inclusive manner, to understand how the system works and how it uses and maintains confidential data.

医疗专业人员应被告知有关使用人工智能设备的情况,以及人工智能系统的局限性,包括纳入和排除标准。与人工智能医疗系统互动的父母和儿童也有权通过适龄的语言和包容的方式被告知,以了解系统的工作原理及其如何使用和维护机密数据。

13

13

. AI systems should be developed so that children are protected and empowered by legal and regulatory frameworks, irrespective of children’s vulnerability and understanding of the AI system

人工智能系统的发展应确保儿童受到法律和监管框架的保护与赋权,无论儿童对人工智能系统的脆弱性及理解程度如何。

13

13

. These AI governance frameworks must be regularly reviewed and updated to protect the rights of children. AI regulatory bodies must be established to continually monitor and correct any ethical infringements to the rights of children.

这些人工智能治理框架必须定期审查和更新,以保护儿童的权利。必须设立人工智能监管机构,持续监控并纠正任何对儿童权利的伦理侵犯。

Physicians should determine prior to the use of any AI system whether the device has been specifically evaluated in pediatric patients by referring to the indications for use section and the 510(k) summary in the FDA 510(k) database

医生在使用任何人工智能系统之前,应通过查阅FDA 510(k)数据库中的使用指征部分和510(k)摘要,确定该设备是否已在儿科患者中进行过专门评估。

12

12

. Alternatively, physicians can refer to the user manual or ask for such information from the device vendor

或者,医生可以查阅用户手册或向设备供应商索取此类信息。

11

11

. The American College of Radiology (ACR) has recommended including a “statement regarding authorization specifically for use in children, including a description of the evidence that does/does not support use in children, or if there is a lack of such evidence” on all FDA-authorized AI devices

美国放射学院 (ACR) 建议在所有 FDA 授权的人工智能设备上包含“专门针对儿童使用授权的声明,包括支持或不支持在儿童中使用的证据描述,或者是否存在此类证据不足”的说明。

86

86

. The ACR also recommended a highly visible nutrition label-style summary of AI-enabled device information that could include a pediatric use section

。ACR还建议提供一个高度可见的、类似营养标签的AI赋能设备信息摘要,其中可能包括儿科使用部分。

86

86

. The above would guide decision-making related to AI device acquisition, implementation, and appropriate use

上述内容将指导与人工智能设备的获取、实施和适当使用相关的决策制定。

11

11

,

86

86

and facilitate properly informed consent. If the AI-enabled device has not been evaluated in children, physicians should exercise caution and clinical judgment and disclose this to parents and competent children.

并促进充分知情同意。如果尚未在儿童中评估过该人工智能设备,医生应谨慎行使临床判断,并向父母和有行为能力的儿童披露这一点。

As far as possible, developers of AI should use interpretable and not black-box AI for building AI systems for children. This is especially important for irreversible decisions with far-reaching consequences, such as for embryo selection in in-vitro fertilization treatment

尽可能地,人工智能的开发者应该使用可解释的人工智能而不是黑箱人工智能来为儿童构建人工智能系统。这对于具有深远影响的不可逆决策尤为重要,例如在体外受精治疗中的胚胎选择。

87

87

, or for predicting futility of care in critically ill neonates and children. Developers should also provide accessible mechanisms for reporting and escalating concerns regarding the AI system to medical professionals and families, ensuring that risks are promptly assessed and mitigated, and that complaints are properly addressed.

,或用于预测危重新生儿和儿童护理的无效性。开发者还应提供可访问的机制,向医疗专业人员和家属报告并升级对人工智能系统的担忧,确保风险得到及时评估和缓解,并妥善处理投诉。

Redress should be offered in case of harm.

如果造成损害,应该提供补救措施。

7

7

.

Empower governments and businesses with knowledge of AI and children’s rights

赋予政府和企业关于人工智能和儿童权利的知识力量

Policymakers, management, and AI system developers must have awareness and knowledge of AI and children’s rights, and be committed to child-centered AI and translating this into practice

政策制定者、管理人员和人工智能系统开发人员必须了解人工智能和儿童权利的知识,并致力于以儿童为中心的人工智能,并将其付诸实践。

13

13

.

Support governments and businesses in creating an enabling environment for child-centered medical AI

支持各国政府和企业为以儿童为中心的医疗人工智能创建有利环境

Governments and businesses should invest in infrastructure development to address the digital divide and aim for equitable sharing of the benefits of AI

政府和企业应投资于基础设施建设,以解决数字鸿沟问题,并力求公平分享人工智能带来的利益。

13

13

. Not only must funding and incentives be provided for child-centered AI policies and strategies, support must be provided for rigorous research on AI for and with children across the AI system’s life cycle

不仅要为以儿童为中心的人工智能政策和战略提供资金和激励措施,还必须支持在人工智能系统生命周期内针对儿童和与儿童一起进行的严格研究。

13

13

.

The United Nations Secretary-General’s High-level Panel on Digital Cooperation recommends increasing international cooperation on AI by investment in open source software, open data, open AI models, open standards, and open content

联合国秘书长关于数字合作的高级别小组建议通过投资开源软件、开放数据、开放的人工智能模型、开放标准和开放内容来加强人工智能领域的国际合作。

88

88

. Child-centered AI systems would greatly benefit from government and private sector cooperation and from the sharing of resources and approaches

儿童为中心的人工智能系统将大大受益于政府和私营部门的合作以及资源和方法的共享。

13

13

.

The key concepts linking the ethical principles and recommendations for child-centered medical AI are summarized in Fig.

图中总结了将伦理原则与以儿童为中心的医疗人工智能建议联系起来的关键概念。

1

1

.

Fig. 1: Linking of key ethical concepts and recommendations for child-centered medical AI.

图1:以儿童为中心的医疗人工智能的关键伦理概念与建议的关联。

The key ethical considerations in AI for child health are non-maleficence, beneficence, autonomy, privacy, justice and transferability, transparency and explainability, accountability, dependability, auditability and knowledge management. Only when these ethical principles are upheld will there be trust in the AI-enabled system.

人工智能在儿童健康领域的关键伦理考量包括:无害原则、行善原则、自主性、隐私、公正与可转移性、透明性与可解释性、问责性、可靠性、可审计性以及知识管理。只有在这些伦理原则得到维护的情况下,才能建立对人工智能赋能系统的信任。

These ethical principles are linked to recommendations for action to support child-centered medical AI.

这些伦理原则与支持以儿童为中心的医疗人工智能的行动建议相联系。


PEARL-AI framework

PEARL-AI框架

From our comprehensive review of ethical principles in AI for child health and the recommendations we have collated for advancing child-centered medical AI, we present the Pediatrics EthicAl Recommendations List for AI (PEARL-AI) framework (Table

通过我们对儿童健康人工智能伦理原则的全面审查,以及我们为推进以儿童为中心的医疗人工智能所整理的建议,我们提出了儿科伦理建议清单人工智能(PEARL-AI)框架(表

1

1

). As there is an absence of both randomized controlled trials and large non-randomized trials on the use of AI in healthcare for children, all the recommendations in the PEARL-AI framework are built using Level C quality of evidence based on previously published opinions of experts. In the framework, we have also included new recommendations that we believe to be important in ethical AI for pediatric healthcare, which we elaborated on in the previous section on recommendations for child-centric medical AI.

由于在儿童医疗中使用人工智能缺乏随机对照试验和大型非随机试验,PEARL-AI框架中的所有建议均基于先前发表的专家意见,属于C级质量证据。在该框架中,我们还纳入了一些我们认为在儿科医疗伦理人工智能领域重要的新建议,这些建议已在前一节关于以儿童为中心的医疗人工智能建议中详细阐述。

Table 1 PEARL-AI framework for clinicians, academics, administrators and developers

表1 面向临床医生、学者、管理人员和开发人员的PEARL-AI框架


This framework is intended as a practical, actionable resource for clinicians, academics, healthcare administrators, and AI developers. The PEARL-AI framework will be regularly updated to reflect new evidence and developments in the field of AI in healthcare in children, to ensure that AI-enabled systems in healthcare uphold the highest standards of ethics while addressing the unique needs and vulnerabilities of children.

该框架旨在为临床医生、学者、医疗保健管理员和人工智能开发者提供一个实用的、可操作的资源。PEARL-AI框架将定期更新,以反映儿童医疗领域中人工智能的新证据和发展,确保医疗中的人工智能系统在坚持最高伦理标准的同时,满足儿童的独特需求并应对他们的脆弱性。

A systematic child-centric approach

一种系统化的以儿童为中心的方法

The PEARL-AI framework is designed to be a child-centric and structured guide that supports ethical decision-making throughout all phases of the AI lifecycle. By placing children at the core of its considerations, the framework prioritizes child health, rights, and well-being in every stage of AI development and deployment.

PEARL-AI框架旨在成为一种以儿童为中心、结构化的指南,支持在人工智能生命周期的各个阶段进行道德决策。通过将儿童置于其考虑的核心,该框架在人工智能开发和部署的每个阶段优先考虑儿童的健康、权利和福祉。

The ethical challenges inherent in AI development are magnified for pediatric populations due to their vulnerability, dependency on caregivers, and limited ability to advocate for themselves. This makes the implementation of a framework like PEARL-AI essential.

由于儿科人群的脆弱性、对护理者的依赖性以及自我维权能力的局限性,人工智能开发中的伦理挑战在这一群体中被进一步放大。这使得实施像PEARL-AI这样的框架变得至关重要。

Proactive ethical oversight

主动的伦理监督

The PEARL-AI framework supports proactive ethical oversight for identifying and addressing potential ethical breaches early in the AI development process. Key features of the framework include:

PEARL-AI框架支持主动的伦理监督,以便在人工智能开发过程的早期识别和解决潜在的伦理问题。该框架的主要特点包括:

1.

1.

Child-Centered Safeguards: Recommendations for designing algorithms and interfaces that are sensitive to the unique physical, cognitive, and emotional needs of children.

以儿童为中心的保护措施:针对儿童独特的身体、认知和情感需求,设计算法和界面的建议。

2.

2.

Ethical Risk Assessment: A structured evaluation of potential risks to children’s well-being posed by AI models, including bias, discrimination, and unintended outcomes.

伦理风险评估:对人工智能模型可能对儿童福祉构成的风险进行的结构化评估,包括偏见、歧视和意外结果。

3.

3.

Stakeholder Engagement: Mechanisms to involve a broad spectrum of stakeholders, including the children themselves, parents, clinicians, and developers, in the design and evaluation processes.

利益相关者参与:涉及广泛的利益相关者,包括儿童自身、父母、临床医生和开发者,参与到设计和评估过程的机制。

4.

4.

Iterative Validation: Emphasis on continuous testing and validation of AI systems in real-world pediatric settings to ensure safety, accuracy, and ethical alignment.

迭代验证:强调在现实世界的儿科环境中对人工智能系统进行持续的测试和验证,以确保其安全性、准确性和道德一致性。

Lifecycle ethical integration

生命周期伦理整合

A distinctive attribute of the PEARL-AI framework is its focus on integrating ethical considerations into every phase of the AI lifecycle, including:

PEARL-AI框架的一个显著特点是其专注于将伦理考量融入人工智能生命周期的每个阶段,包括:

1.

1.

Problem Definition: Ensuring that the AI initiative addresses a genuine pediatric healthcare need without introducing unnecessary risk.

问题定义:确保人工智能计划解决的是真正的儿科医疗需求,同时不引入不必要的风险。

2.

2.

Data Collection and Preparation: Advocating for transparency, informed consent (tailored to the pediatric context), and equitable data representation.

数据收集与准备:倡导透明度、知情同意(针对儿科环境量身定制)以及公平的数据代表性。

3.

3.

Algorithm Development: Prioritizing fairness, explainability, and bias mitigation in model design.

算法开发:在模型设计中优先考虑公平性、可解释性和偏差缓解。

4.

4.

Testing and Deployment: Instituting rigorous testing protocols to validate that AI tools perform reliably and safely in diverse pediatric populations.

测试与部署:制定严格的测试协议,以验证人工智能工具在不同儿科人群中的可靠性和安全性。

5.

5.

Post-Deployment Monitoring: Establishing mechanisms for ongoing surveillance of AI systems to detect and rectify issues that may emerge over time.

部署后监控:建立对人工智能系统进行持续监控的机制,以检测和纠正可能随着时间出现的问题。

The PEARL-AI framework emphasizes that AI in pediatric healthcare should not merely meet technical and clinical benchmarks but should actively protect and promote the best interests of children. By prioritizing safeguards through the AI lifecycle, the framework helps to ensure that the transformative potential of AI in child health is harnessed responsibly, with children’s rights and well-being placed at the forefront.

PEARL-AI框架强调,儿科医疗中的人工智能不仅应满足技术和临床基准,还应积极保护和促进儿童的最佳利益。通过在人工智能生命周期中优先考虑保障措施,该框架有助于确保人工智能在儿童健康领域的变革潜力被负责任地利用,将儿童的权利和福祉置于首位。

Conclusion

结论

This review article describes ethical principles and challenges in the use of AI in healthcare for children. Important AI ethical principles discussed include non-maleficence, beneficence, autonomy, justice and transferability, transparency and explainability, privacy, dependability, auditability, knowledge management, accountability, and trust.

这篇综述文章描述了在儿童医疗中使用人工智能的伦理原则和挑战。讨论的重要人工智能伦理原则包括无害、仁慈、自主、公正与可转移性、透明性和可解释性、隐私、可靠性、可审计性、知识管理、问责制和信任。

In the final section in this article, we provide recommendations for child-centered medical AI. We based our recommendations for child-centered AI on the policy guidance on AI for children by UNICEF, and we elaborated on these recommendations in the context of child health. We also introduced the Pediatrics EthicAl Recommendations List for AI (PEARL-AI) framework, which can be used by both AI developers and clinicians to ensure ethical AI-enabled systems in healthcare for children.

在本文的最后一节中,我们为以儿童为中心的医疗人工智能提供了建议。我们以联合国儿童基金会关于儿童人工智能的政策指导为基础,提出了以儿童为中心的人工智能建议,并在儿童健康背景下详细阐述了这些建议。我们还介绍了《儿科伦理人工智能建议清单》(PEARL-AI)框架,该框架可供人工智能开发者和临床医生使用,以确保儿童医疗中合乎伦理的人工智能系统。

References

参考文献

Kearns, G. L. et al. Developmental pharmacology-drug disposition, action, and therapy in infants and children.

Kearns, G. L, 等。发育药理学——婴儿和儿童的药物处置、作用与治疗。

N. Engl. J. Med.

新英格兰医学杂志

349

349

, 1157–1167 (2003).

,1157-1167页(2003年)。


Klassen, T. P., Hartling, L., Craig, J. C. & Offringa, M. Children are not just small adults: The urgent need for high-quality trial evidence in children.

克拉斯,T.P.,哈特林,L.,克雷格,J.C.,奥弗林加,M. 儿童不仅仅是小大人:儿童高质量试验证据的迫切需求。

PLoS Med.

公共科学图书馆·医学

12

12

, e172 (2008).

,e172(2008)。


Allegaert, K. & van den Anker, J. Neonates are not just little children and need more finesse in dosing of antibiotics.

阿勒加特,K. & 范登安克,J. 新生儿不仅仅是小孩子,抗生素的剂量需要更加精细。

Acta Clin. Belg.

临床比利时杂志

74

74

, 157–163 (2019).

,157-163页(2019年)。


Ghasseminia, S. et al. Interobserver variability of hip dysplasia indices on sweep ultrasound for novices, experts, and artificial intelligence.

Ghasseminia, S. 等。新手、专家和人工智能在扫查超声中髋关节发育不良指数的观察者间差异。

J. Pediatr. Orthop.

小儿骨科杂志

42

42

, e315–e323 (2022).

,e315–e323(2022)。


Coghlan, S., Gyngell, C. & Vears, D. F. Ethics of artificial intelligence in prenatal and pediatric genomic medicine.

科格兰,S.,金内尔,C.,& 维尔斯,D. F. 产前和儿科基因组医学中的人工智能伦理。

J. Community Genet.

社区遗传学杂志

15

15

, 13–24 (2023).

,13-24页(2023年)。


World Health Organisation. Ethics and Governance of Artificial Intelligence for Health.

世界卫生组织。健康领域人工智能的伦理与治理。

https://www.who.int/publications/i/item/9789240029200

https://www.who.int/publications/i/item/9789240029200

(2021).

(2021)。

Solomonides, A. E. et al. Defining AMIA's artificial intelligence principles.

所罗门尼德斯,A. E. 等。定义AMIA的人工智能原则。

J. Am. Med. Inform. Assoc.

美国医学信息学协会期刊

29

29

, 585–591 (2021).

,585-591页(2021年)。


Sisk, B. A., Antes, A. L. & DuBois, J. M. An overarching framework for the ethics of artificial intelligence in pediatrics.

西斯克,B. A., 安特斯,A. L., 杜波依斯,J. M. 儿科人工智能伦理的总体框架。

JAMA Pediatr.

美国医学会儿科杂志

178

178

, 213–214 (2024).

,213-214页(2024)。


Polyakov, A., Rozen, G., Gyngell, C. & Savulescu, J. Novel embryo selection strategies—finding the right balance.

Polyakov, A., Rozen, G., Gyngell, C. & Savulescu, J. 新型胚胎选择策略——寻找适当的平衡。

Front. Reprod. Health

生殖健康前沿

5

5

, 1287621 (2023).

,1287621(2023)。


Sullivan, B. A. et al. Transforming neonatal care with artificial intelligence: Challenges, ethical consideration, and opportunities.

沙利文,B. A. 等。利用人工智能变革新生儿护理:挑战、伦理考量与机遇。

J. Perinatol.

围产期医学杂志

44

44

, 1–11 (2023).

,1–11(2023)。


Sammer, M. B. K. et al. Use of artificial intelligence in radiology: Impact on pediatric patients, a white paper from the ACR Pediatric AI Workgroup.

Sammer, M. B. K. 等。人工智能在放射学中的应用:对儿科患者的影响,ACR儿科AI工作组白皮书。

J. Am. Coll. Radiol.

美国放射学会杂志

20

20

, 730–737 (2023).

,730-737页(2023年)。


Nelson, B. J., Zeng, R., Sammer, M. B. K., Frush, D. P. & Delfino, J. G. An FDA guide on indications for use and device reporting of artificial intelligence-enabled devices: significance for pediatric use.

Nelson, B. J., Zeng, R., Sammer, M. B. K., Frush, D. P. & Delfino, J. G. 美国食品药品监督管理局关于人工智能设备的使用指征与设备报告指南:对儿科使用的意义。

J. Am. Coll. Radiol.

美国放射学会杂志。

20

20

, 738–741 (2023).

,738-741页(2023年)。


United Nations Children’s Fund. Policy guidance on AI for children 2.0.

联合国儿童基金会。《儿童人工智能政策指导2.0》。

UNICEF

联合国儿童基金会

https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf

https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf

(2021).

(2021)。

World Economic Forum. Generation AI: establishing global standards for children and AI.

世界经济论坛。人工智能世代:为儿童和人工智能建立全球标准。

https://www.weforum.org/publications/generation-ai-establishing-global-standards-for-children-and-ai/

https://www.weforum.org/publications/generation-ai-establishing-global-standards-for-children-and-ai/

(2019).

(2019)。

World Economic Forum. Artificial intelligence for children—toolkit.

世界经济论坛。儿童人工智能——工具包。

https://www3.weforum.org/docs/WEF_Artificial_Intelligence_for_Children_2022.pdf

https://www3.weforum.org/docs/WEF_Artificial_Intelligence_for_Children_2022.pdf

(2022).

(2022)。

Mahomed, S., Aitken, M., Atabey, A., Wong, J. & Briggs, M. A. I., Children’s rights, & wellbeing: Transnational frameworks: mapping 13 frameworks at the intersections of data-intensive technologies, children’s rights, and wellbeing.

穆罕默德,S.,艾特肯,M.,阿塔贝伊,A.,黄,J.,布里吉斯,M.A.I.,儿童权利与福祉:跨国框架:在数据密集型技术、儿童权利和福祉的交汇处绘制13个框架。

The Alan Turing Institute

图灵研究所

.

https://www.turing.ac.uk/news/publications/ai-childrens-rights-wellbeing-transnational-frameworks

https://www.turing.ac.uk/news/publications/ai-childrens-rights-wellbeing-transnational-frameworks

(2023).

(2023)。

Beauchamp, T. L. & Childress, J. F. Principles of biomedical ethics. (Oxford University Press, 2001).

比彻姆,T. L. 和 查尔迪斯,J. F. 《生物医学伦理学原则》。(牛津大学出版社,2001年)。

Jobin, A., Ienca, M. & Vayena, E. The global landscape of AI ethics guidelines.

乔宾,A.,伊恩卡,M.,瓦耶纳,E. 人工智能伦理指南的全球格局。

Nat. Mach. Intell.

自然机器智能

1

1

, 389–399 (2019).

,389-399页(2019年)。


European Commission. Ethics guidelines for trustworthy AI.

欧盟委员会。可信人工智能的伦理准则。

https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

(2019).

(2019)。

Rigby, M. et al. Steps in moving evidence-based health informatics from theory to practice.

里格比,M. 等。将基于证据的健康信息学从理论付诸实践的步骤。

Healthc. Inform. Res.

医疗卫生信息研究

22

22

, 255–260 (2016).

,255-260页(2016年)。


Kragh, M. F. & Karstoft, H. Embryo selection with artificial intelligence: How to evaluate and compare methods?

克拉斯,M.F. 和 卡尔斯托夫特,H. 使用人工智能进行胚胎选择:如何评估和比较方法?

J. Assist Reprod. Genet.

生殖遗传学杂志

38

38

, 1675–1689 (2021).

,1675-1689(2021)。


Barnes, J. et al. A non-invasive artificial intelligence approach for the prediction of human blastocyst ploidy: a retrospective model development and validation study.

巴恩斯,J. 等。一种非侵入性人工智能方法预测人类囊胚多倍性:一项回顾性模型开发与验证研究。

Lancet Digit Health

柳叶刀数字健康

5

5

, e28–e40 (2023).

,e28-e40(2023)。


Biospectrum. Fertility scientists develop a revolutionary embryo screening test.

生物光谱。生育科学家开发出一种革命性的胚胎筛查测试。

https://www.biospectrumasia.com/news/47/13554/fertility-scientists-develop-a-revolutionary-embryo-screening-test.html

https://www.biospectrumasia.com/news/47/13554/fertility-scientists-develop-a-revolutionary-embryo-screening-test.html

(2019).

(2019)。

9 News. World-first DNA test to boost IVF success rates.

9 News. 世界首创的DNA测试将提高体外受精成功率。

https://www.9news.com.au/national/qld-news-ivf-gold-coast-world-first-dna-test-helping-women-fall-pregnant-monash/2157d4f2-c307-4f04-ac74-88e0dfa41a02

https://www.9news.com.au/national/qld-news-ivf-gold-coast-world-first-dna-test-helping-women-fall-pregnant-monash/2157d4f2-c307-4f04-ac74-88e0dfa41a02

(2019).

(2019)。

Therapeutic Goods Administration. Device incident report.

治疗用品管理局。设备事件报告。

https://www.tga.gov.au/sites/default/files/foi-3089-01.pdf

https://www.tga.gov.au/sites/default/files/foi-3089-01.pdf

(2020).

(2020)。

Victorian Assisted Reproductive Treatment Authority. Annual Report.

维多利亚辅助生殖治疗管理局。年度报告。

https://www.varta.org.au/sites/default/files/2021-12/varta-annual-report-2021.pdf

https://www.varta.org.au/sites/default/files/2021-12/varta-annual-report-2021.pdf

(2021).

(2021)。

Supreme Court of Victoria. Monash IVF Group Proceeding.

维多利亚最高法院。莫纳什试管婴儿集团诉讼。

https://www.supremecourt.vic.gov.au/areas/group-proceedings/monash-ivf

https://www.supremecourt.vic.gov.au/areas/group-proceedings/monash-ivf

(2021).

(2021)。

Elias, N. & Sulkin, I. YouTube viewers in diapers: An exploration of factors associated with amount of toddlers’ online viewing.

埃利亚斯,N. & 苏尔金,I. 穿尿布的YouTube观众:幼儿在线观看时长相关因素的探索。

Cyberpsychol. J. Psychosoc. Res. Cyberspace

网络心理学杂志。心理社会研究。赛博空间

11

11

, Article 2 (2017).

,第2条(2017年)。


UNICEF & U. C. Berkeley Human Rights Center. Executive Summary: Artificial intelligence and children’s rights.

联合国儿童基金会和加州大学伯克利分校人权中心。摘要:人工智能与儿童权利。

https://www.unicef.org/innovation/media/10726/file/Executive%20Summary:%20Memorandum%20on%20Artificial%20Intelligence%20and%20Child%20Rights.pdf

https://www.unicef.org/innovation/media/10726/file/Executive%20Summary:%20Memorandum%20on%20Artificial%20Intelligence%20and%20Child%20Rights.pdf

(2019).

(2019)。

Qu, G. et al. Association between screen time and developmental and behavioral problems among children in the United States: evidence from 2018 to 2020 NSCH.

屈,G. 等。美国儿童屏幕时间与发育和行为问题之间的关联:2018年至2020年NSCH的证据。

J. Psychiatr. Res.

精神病学研究杂志

161

161

, 140–149 (2023).

,140-149页(2023年)。


Santos, R. M. S., Mendes, C. G., Marques Miranda, D. & Romano-Silva, M. A. The association between screen time and attention in children: a systematic review.

桑托斯,R. M. S.,门德斯,C. G.,马克斯·米兰达,D.,罗曼诺-席尔瓦,M. A. 屏幕时间与儿童注意力之间的关联:一项系统综述。

Dev. Neuropsychol.

发展神经心理学。

47

47

, 175–192 (2022).

,175-192页(2022年)。


Haghjoo, P., Siri, G., Soleimani, E., Farhangi, M. A. & Alesaeidi, S. Screen time increases overweight and obesity risk among adolescents: a systematic review and dose-response meta-analysis.

哈格朱,P.,西里,G.,索莱伊马尼,E.,法尔汉吉,M. A.,& 艾莱赛迪,S. 屏幕时间增加青少年超重和肥胖风险:一项系统评价与剂量反应荟萃分析。

BMC Prim. Care

BMC初级保健

23

23

, 161 (2022).

,161(2022)。


Identifai Genetics.

Identifai Genetics。

https://identifai-genetics.com/

https://identifai-genetics.com/

(2024).

(2024)。

Face2Gene.

Face2Gene。

https://www.face2gene.com/

https://www.face2gene.com/

(2024).

(2024)。

Clark, M. M. et al. Diagnosis of genetic diseases in seriously ill children by rapid whole-genome sequencing and automated phenotyping and interpretation.

克拉克,M. M. 等。通过快速全基因组测序、自动表型分析和解读来诊断重症儿童的遗传疾病。

Sci. Transl. Med.

科学转化医学

11

11

, eaat6177 (2019).

,eaat6177(2019)。


Gurovich, Y. et al. Identifying facial phenotypes of genetic disorders using deep learning.

Gurovich, Y. 等。使用深度学习识别遗传疾病的面部表型。

Nat. Med.

自然医学

25

25

, 60–64 (2019).

,60-64页(2019年)。


Hsieh, T. C. et al. GestaltMatcher facilitates rare disease matching using facial phenotype descriptors.

谢,T. C. 等。GestaltMatcher 使用面部表型描述符促进罕见病匹配。

Nat. Genet.

自然遗传学

54

54

, 349–357 (2022).

,349-357页(2022年)。


The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, version 2.

IEEE关于自主和智能系统伦理的全球倡议。合乎伦理的设计:优先考虑人类福祉的自主和智能系统愿景,版本2。

IEEE

电气和电子工程师协会

.

http://standards.ieee.org/develop/indconn/ec/autonomous.systems.html

http://standards.ieee.org/develop/indconn/ec/autonomous.systems.html

(2017).

(2017)。

van Est, R. & Gerritsen, J. Human rights in the robot age: challenges arising from the use of robotics, artificial intelligence, and virtual and augmented reality.

范埃斯特,R. & 格里特森,J. 机器人时代的人权:由机器人技术、人工智能以及虚拟和增强现实的使用带来的挑战。

Rathenau Institute

拉特瑙研究所

.

https://www.rathenau.nl/en/digitalisering/human-rights-robot-age

https://www.rathenau.nl/en/digitalisering/human-rights-robot-age

(2017).

(2017)。

Directorate-General for Research and Innovation, European Group on Ethics in Science and New Technologies. Statement on artificial intelligence, robotics and 'autonomous' systems.

欧洲科学与新技术伦理小组、研究与创新总司。关于人工智能、机器人技术以及“自主”系统的声明。

European Commission

欧盟委员会

.

https://doi.org/10.2777/531856

https://doi.org/10.2777/531856

(2018).

(2018)。

House of Lords. Gillick v West Norfolk and Wisbech Area Health Authority, [1986] AC 112 (HL).

上议院。吉里克诉西诺福克和威斯贝奇地区卫生局,[1986] AC 112(上议院)。

https://www.bailii.org/uk/cases/UKHL/1985/7.html

https://www.bailii.org/uk/cases/UKHL/1985/7.html

(1986).

(1986)。

Griffith, R. What is Gillick competence?

格里菲斯,R。什么是吉尔里克能力?

Hum. Vaccin Immunother.

人用疫苗与免疫治疗。

12

12

, 244–247 (2016).

,244-247页(2016年)。


UNICEF. Convention on the Rights of the Child.

联合国儿童基金会。《儿童权利公约》。

http://www.unicef.org/child-rights-convention/convention-text

http://www.unicef.org/child-rights-convention/convention-text

(1989).

(1989)。

Adams, C., Pente, P., Lemermeyer, G. & Rockwell, G. Ethical principles for artificial intelligence in K-12 education.

亚当斯,C.,潘特,P.,勒默迈尔,G.,罗克韦尔,G. 《K-12教育中的人工智能伦理原则》。

Comput. Educ. Artif. Intell.

计算机教育与人工智能

4

4

, 100131 (2023).

,100131(2023)。

Millum, J. The foundation of the child's right to an open future.

米勒姆,J. 儿童开放未来权利的基础。

J. Soc. Philos.

《社会哲学杂志》

45

45

, 522–538 (2014).

,522-538页(2014年)。


UNI Global Union. 10 principles for ethical artificial intelligence.

联合国全球工会。人工智能伦理的十大原则。

https://uniglobalunion.org/report/10-principles-for-ethical-artificial-intelligence/

https://uniglobalunion.org/report/10-principles-for-ethical-artificial-intelligence/

(2017).

(2017)。

UNICEF. The state of the world’s children 2017: Children in a digital world.

联合国儿童基金会。《2017年世界儿童状况:数字世界中的儿童》。

https://www.unicef.org/media/48581/file/SOWC_2017_ENG.pdf

https://www.unicef.org/media/48581/file/SOWC_2017_ENG.pdf

(2017).

(2017)。

ITU. Module on setting the stage for AI governance: Interfaces, infrastructures, and institutions for policymakers and regulators.

ITU. 为人工智能治理奠定基础的模块:政策制定者和监管者的界面、基础设施与机构。

https://www.itu.int/en/ITU-D/Conferences/GSR/Documents/GSR2018/documents/AISeries_GovernanceModule_GSR18.pdf

https://www.itu.int/en/ITU-D/Conferences/GSR/Documents/GSR2018/documents/AISeries_GovernanceModule_GSR18.pdf

(2018).

(2018)。

Bhargava, H. et al. Promises, pitfalls, and clinical applications of artificial intelligence in pediatrics.

Bhargava, H. 等。人工智能在儿科中的前景、隐患及临床应用。

J. Med. Internet Res.

医学互联网研究杂志

26

26

, e49022 (2024).

,e49022(2024)。


Pediatric Moonshot.

儿科登月计划。

https://pediatricmoonshot.com/

https://pediatricmoonshot.com/

(2023).

(2023)。

Verma, S. & Rubin, J. Fairness definitions explained.

维尔马,S. & 鲁宾,J. 公平性定义解析。

Fairware

FairWare

1

1

, 1–7 (2018).

,1-7页(2018年)。


Kleinberg, J., Mullainathan, S. & Raghavan, M. Inherent trade-offs in the fair determination of risk scores.

Kleinberg, J., Mullainathan, S. & Raghavan, M. 风险评分公平判定中的固有权衡。

https://doi.org/10.48550/arXiv.1609.05807

https://doi.org/10.48550/arXiv.1609.05807

(2016).

(2016)。

Alqahtani, F. F. et al. Evaluation of a semi-automated software program for the identification of vertebral fractures in children.

阿尔卡塔尼,F. F. 等。半自动化软件程序在儿童椎体骨折识别中的评估。

Clin. Radiol.

临床放射学

72

72

, 904.e11–904.e20 (2017).

,904.e11–904.e20(2017)。


Alqahtani, F. F. et al. Diagnostic performance of morphometric vertebral fracture analysis (MXA) in children using a 33-point software program.

阿尔卡塔尼,F. F. 等。使用33点软件程序在儿童中形态计量椎体骨折分析(MXA)的诊断性能。

Bone

骨头

133

133

, 115249 (2020).

,115249(2020)。


Reddy, C. D., Lopez, L., Ouyang, D., Zou, J. Y. & He, B. Video-based deep learning for automated assessment of left ventricular ejection fraction in pediatric patients.

Reddy, C. D., Lopez, L., Ouyang, D., Zou, J. Y. & He, B. 基于视频的深度学习用于自动化评估小儿患者的左心室射血分数。

J. Am. Soc. Echocardiogr.

美国超声心动图学会期刊

36

36

, 482–489 (2023).

,482-489页(2023年)。


Shin, H. J., Son, N. H., Kim, M. J. & Kim, E. K. Diagnostic performance of artificial intelligence approved for adults for the interpretation of pediatric chest radiographs.

申,H. J.,孙,N. H.,金,M. J.,金,E. K. 适用于成人的人工智能在解读儿童胸部X光片中的诊断性能。

Sci. Rep.

科学报告

12

12

, 10215 (2022).

,10215(2022)。


Baumert, M., Hartmann, S. & Phan, H. Automatic sleep staging for the young and the old—evaluating age bias in deep learning.

鲍默特,M.,哈特曼,S.,潘,H. 青年人与老年人的自动睡眠分期——评估深度学习中的年龄偏差。

Sleep. Med.

睡眠医学。

107

107

, 18–25 (2023).

,18-25页(2023年)。


Stempniak, M. Radiology groups urge congress to address scarcity of AI solutions in pediatric care.

斯特普尼亚克,M. 放射学团体敦促国会解决儿科护理中人工智能解决方案的短缺问题。

Radiol. Business

放射学业务

.

https://www.radiologybusiness.com/topics/artificial-intelligence/radiology-congress-ai-solutions-pediatric-care

https://www.radiologybusiness.com/topics/artificial-intelligence/radiology-congress-ai-solutions-pediatric-care

(2022).

(2022)。

Schloemer, T., De Bock, F. & Schröder-Bäck, P. Implementation of evidence-based health promotion and disease prevention interventions: theoretical and practical implications of the concept of transferability for decision-making and the transfer process.

Schloemer, T., De Bock, F. & Schröder-Bäck, P. 基于证据的健康促进和疾病预防干预措施的实施:可转移性概念对决策和转移过程的理论与实践意义。

Bundesgesundheitsblatt, Gesundheitsforschung, Gesundheitsschutz

联邦健康公报,健康研究,健康保护

64

64

, 534–543 (2021).

,534-543页(2021年)。


Haley, L. C. et al. Attitudes on artificial intelligence use in pediatric care from parents of hospitalized children.

海利,L. C. 等。住院儿童父母对在儿科护理中使用人工智能的态度。

J. Surg. Res.

外科研究杂志

295

295

, 158–167 (2023).

,158-167页(2023年)。


Microsoft. Responsible bots: 10 guidelines for developers of conversational AI.

微软。负责任的机器人:对话式人工智能开发者的10条准则。

https://www.microsoft.com/en-us/research/publication/responsible-bots/

https://www.microsoft.com/en-us/research/publication/responsible-bots/

(2018).

(2018)。

Personal Data Protection Commission Singapore. Discussion paper on artificial intelligence (AI) and personal data - fostering responsible development and adoption of AI.

新加坡个人数据保护委员会。关于人工智能(AI)和个人数据的讨论文件——促进人工智能的负责任开发和应用。

https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/Discussion-Paper-on-AI-and-PD---050618.pdf

https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/Discussion-Paper-on-AI-and-PD---050618.pdf

(2018).

(2018)。

Royal College of Physicians. Artificial intelligence (AI) in health.

皇家内科医师学会。人工智能(AI)在健康领域。

https://www.rcplondon.ac.uk/projects/outputs/artificial-intelligence-ai-health

https://www.rcplondon.ac.uk/projects/outputs/artificial-intelligence-ai-health

(2018).

(2018)。

Sharko, M. et al. State-by-state variability in adolescent privacy laws.

Sharko, M. 等。青少年隐私法的州际差异。

Pediatrics

儿科

149

149

, e2021053458 (2022).

,e2021053458(2022)。


UNICEF. The case for better governance of children’s data: A manifesto.

联合国儿童基金会。《加强儿童数据治理的案例:一份宣言》。

https://www.unicef.org/globalinsight/reports/better-governance-childrens-data-manifesto

https://www.unicef.org/globalinsight/reports/better-governance-childrens-data-manifesto

(2021).

(2021)。

Moore, S. et al. Consent processes for mobile app mediated research: Systematic review.

Moore, S. 等。移动应用介导研究的同意过程:系统评价。

JMIR mHealth uHealth

JMIR mHealth uHealth

5

5

, e126 (2017).

,e126(2017)。


Matsui, K., Ibuki, T., Tashiro, S. & Nakamura, H. Study group on regulatory science for early clinical application of pediatric pharmaceuticals. principles of ethical consideration required for clinical research involving children.

松井,K.,伊吹,T.,田代,S. & 中村,H. 关于儿童药物早期临床应用的监管科学研究小组。涉及儿童的临床研究所需的伦理考量原则。

Pediatr. Int

儿科国际

63

63

, 248–259 (2021).

,248-259页(2021年)。


Krishna, S. & Fuloria, M. Ethical considerations in neonatal research.

克里希纳,S. & 富洛里亚,M. 新生儿研究中的伦理考量。

Neoreviews

新生儿评论

23

23

, e151–e158 (2022).

,e151–e158(2022)。


National Institutes of Health. NIH policy and guidelines on the inclusion of children as participants in research involving human subjects.

美国国立卫生研究院。关于在涉及人类受试者的研究中包含儿童作为参与者的NIH政策和指南。

https://grants.nih.gov/grants/guide/notice-files/not98-024.html

https://grants.nih.gov/grants/guide/notice-files/not98-024.html

(1998).

(1998)。

National Institutes of Health. Inclusion of children in clinical research: Change in NIH definition.

美国国立卫生研究院。儿童参与临床研究:NIH定义的变更。

https://grants.nih.gov/grants/guide/noticefiles/NOT-OD-16-010.html

https://grants.nih.gov/grants/guide/noticefiles/NOT-OD-16-010.html

(2015).

(2015)。

Muralidharan, V., Burgart, A., Daneshjou, R. & Rose, S. Recommendations for the use of pediatric data in artificial intelligence and machine learning ACCEPT-AI.

Muralidharan, V., Burgart, A., Daneshjou, R. & Rose, S. 关于在人工智能和机器学习中使用儿科数据的建议 ACCEPT-AI。

NPJ Digit. Med.

数字医学npj

6

6

, 166 (2023).

,166(2023)。


Silva, H. P., Lehoux, P., Miller, F. A. & Denis, J. L. Introducing responsible innovation in health: a policy-oriented framework.

席尔瓦,H. P.,勒胡,P.,米勒,F. A.,丹尼斯,J. L. 引入健康领域的负责任创新:一个政策导向的框架。

Health Res. Policy Sys.

健康研究、政策与系统

16

16

, 90 (2018).

,90(2018)。


Klaassen, P., Kupper, F., Rijnen, M., Vermeulen, S. & Broerse, J. Policy brief on the state of the art on RRI and a working definition of RRI.

克拉斯森,P.,库珀,F.,里恩,M.,费尔梅伦,S.,布罗尔塞,J. 关于RRI现状的政策简报及RRI的工作定义。

Amsterdam

阿姆斯特丹

:

VU University

阿姆斯特丹自由大学

https://rri-tools.eu/documents/10184/107098/RRITools_D1.1-RRIPolicyBrief.pdf/c246dc97-802f-4fe7-a230-2501330ba29b

https://rri-tools.eu/documents/10184/107098/RRITools_D1.1-RRIPolicyBrief.pdf/c246dc97-802f-4fe7-a230-2501330ba29b

(2014).

(2014)。

General Data Protection Regulation. Art 17 GDPR—right to erasure (‘right to be forgotten’).

通用数据保护条例。《通用数据保护条例》第17条——删除权(“被遗忘权”)。

https://gdpr-info.eu/art-17-gdpr/

https://gdpr-info.eu/art-17-gdpr/

(2018).

(2018)。

Infocomm Media Development Authority. Code of Practice for Online Safety - Singapore.

新加坡资讯通信媒体发展局。在线安全实践准则 - 新加坡。

https://www.imda.gov.sg/-/media/imda/files/regulations-and-licensing/regulations/codes-of-practice/codes-of-practice-media/code-of-practice-for-online-safety.pdf

https://www.imda.gov.sg/-/media/imda/files/regulations-and-licensing/regulations/codes-of-practice/codes-of-practice-media/code-of-practice-for-online-safety.pdf

(2023).

(2023)。

Petkova, V., Georgieva, D., Dimitrov, M. & Nikolova, I. Off-Label prescribing in pediatric population—literature review for 2012-2022.

佩特科娃,V.,乔治耶娃,D.,迪米特罗夫,M.,尼古拉耶娃,I. 儿科人群的超说明书用药—2012-2022年文献综述。

Pharmaceutics

药剂学

15

15

, 2652 (2023).

,2652(2023)。


Khan, D., Kirby, D., Bryson, S., Shah, M. & Rahman Mohammed, A. Paediatric specific dosage forms: patient and formulation considerations.

Khan, D., Kirby, D., Bryson, S., Shah, M. & Rahman Mohammed, A. 儿科特定剂型:患者与配方的考量。

Int J. Pharm.

国际药学杂志

616

616

, 121501 (2022).

,121501(2022)。


Lenk, C. Off-label drug use in paediatrics: a world-wide problem.

Lenk, C. 儿科中的超说明书用药:一个全球性问题。

Curr. Drug Targets

当前药物靶点

13

13

, 878–884 (2012).

,878-884页(2012年)。


McCune, S. & Portman, R. J. Accelerating pediatric drug development: a 2022 special issue of therapeutic innovation & regulatory science.

麦库恩,S. & 波特曼,R. J. 加速儿科药物开发:《治疗创新与监管科学》2022年特刊。

Ther. Innov. Regul. Sci.

治疗创新与监管科学

56

56

, 869–872 (2022).

,869-872页(2022年)。


Oshikoya, K. A. et al. Off-label prescribing for children with chronic diseases in Nigeria; findings and implications.

Oshikoya, K. A. 等。尼日利亚慢性病儿童的超说明书用药;研究结果与影响。

Expert Opin. Drug Saf.

专家意见。药物安全。

16

16

, 981–988 (2017).

,981-988页(2017年)。


Van Norman, G. A. Off-label use vs off-label marketing of drugs: part 1: off-label use- patient harms and prescriber responsibilities.

范·诺曼,G. A. 药物的超说明书使用与超说明书营销:第一部分:超说明书使用——患者伤害与处方医生责任。

JACC Basic Transl. Sci.

JACC基础转化科学

8

8

, 224–233 (2023).

,224-233页(2023年)。


American Academy of Pediatrics Section on Cardiology and Cardiac Surgery; Section on Orthopaedics. Off-label use of medical devices in children.

美国儿科学会心血管和心脏外科学分会;骨科学分会。儿童医疗设备的超说明书使用。

Pediatrics

儿科

139

139

, e20163439 (2017).

,e20163439(2017)。


United States Food and Drug Administration. “Off label” and investigational use of marketed drugs, biologics, and medical devices - Guidance for institutional review boards and clinical investigators.

美国食品药品监督管理局。《已上市药品、生物制品和医疗器械的“标签外”和试验性使用——机构审查委员会和临床研究者的指导》。

https://www.fda.gov/regulatory-information/search-fda-guidance-documents/label-and-investigational-use-marketed-drugs-biologics-and-medical-devices

https://www.fda.gov/regulatory-information/search-fda-guidance-documents/label-and-investigational-use-marketed-drugs-biologics-and-medical-devices

(1998).

(1998)。

Frattarelli, D. A. et al. American Academy of Pediatrics Committee on Drugs. Off-label use of drugs in children.

Frattarelli, D. A. 等。美国儿科学会药物委员会。儿童药物的超说明书使用。

Pediatrics

儿科

133

133

, 563–567 (2014).

,563-567页(2014年)。


Cresswell, K. et al. The need to strengthen the evaluation of the impact of Artificial Intelligence-based decision support systems on healthcare provision.

Cresswell, K. 等。需要加强对基于人工智能的决策支持系统对医疗提供影响的评估。

Health Policy

健康政策

136

136

, 104889 (2023).

,104889(2023)。


Fleishon, H. ACR comments to FDA on AI transparency.

弗莱希恩,H. 美国放射学会对FDA关于人工智能透明度的评论。

https://www.acr.org/-/media/ACR/Files/Advocacy/Regulatory-Issues/acr-comments_fda-ai-transparency.pdf

https://www.acr.org/-/media/ACR/Files/Advocacy/Regulatory-Issues/acr-comments_fda-ai-transparency.pdf

(2021).

(2021)。

Afnan, M. A. M. et al. Interpretable, not black-box, artificial intelligence should be used for embryo selection.

Afnan, M. A. M. 等。应使用可解释的而非黑箱的人工智能进行胚胎选择。

Hum. Reprod. Open.

人类生殖开放期刊

2021

2021

, hoab040 (2021).

,hoab040(2021)。


United Nations. Report of the Secretary-General: Roadmap for digital cooperation.

联合国秘书长报告:数字合作路线图。

https://www.un.org/en/content/digital-cooperation-roadmap/

https://www.un.org/en/content/digital-cooperation-roadmap/

(2020).

(2020)。


Author information

作者信息

Authors and Affiliations

作者与所属机构

Krsyma Medical AI Pte Ltd, Singapore, Singapore

新加坡Krsyma医疗人工智能私人有限公司,新加坡

Seo Yi Chng

Seo Yi Chng

St Helens and Knowsley NHS Foundation Trust, Merseyside, England

圣海伦斯和诺斯利NHS基金会信托,默西塞德郡,英格兰

Mark Jun Wen Tern

马克·俊·文·特恩

Department of Paediatrics, National University of Singapore, Singapore, Singapore

新加坡国立大学儿科系,新加坡,新加坡

Yung Seng Lee

Yung Seng Lee

Department of Diagnostic Radiology, Singapore General Hospital, Singapore, Singapore

新加坡,新加坡综合医院,诊断放射科

Lionel Tim-Ee Cheng

Lionel Tim-Ee Cheng

Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore

新加坡国立大学医院诊断影像科,新加坡

Jeevesh Kapur

杰夫什·卡普尔

Singapore Institute for Clinical Sciences (SICS), Agency for Science, Technology and Research (A*STAR), Singapore, Singapore

新加坡临床科学研究院(SICS),科学技术研究局(A*STAR),新加坡,新加坡

Johan Gunnar Eriksson & Yap Seng Chong

约翰·古纳尔·埃里克森和叶成忠

Department of Obstetrics & Gynaecology, Yong Loo Lin School of Medicine, National University of Singapore (NUS), Singapore, Singapore

新加坡国立大学杨潞龄医学院妇产科系,新加坡,新加坡

Johan Gunnar Eriksson & Yap Seng Chong

约翰·古纳·埃里克森 和 叶成忠

Department of General Practice and Primary Health Care, University of Helsinki and Helsinki University Hospital, Helsinki, Finland

赫尔辛基大学和赫尔辛基大学医院全科医学与初级卫生保健系,赫尔辛基,芬兰

Johan Gunnar Eriksson

约翰·古纳尔·埃里克森

Folkhälsan Research Center, Helsinki, Finland

民众健康研究中心,赫尔辛基,芬兰

Johan Gunnar Eriksson

约翰·古纳尔·埃里克松

Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore

新加坡国立大学杨潞龄医学院生物医学伦理中心,新加坡,新加坡

Julian Savulescu

朱利安·萨乌莱斯库

Biomedical Research Group, Murdoch Children’s Research Institute, Melbourne, VIC, Australia

生物医学研究组,墨尔本默多克儿童研究所,维多利亚州,澳大利亚

Julian Savulescu

朱利安·萨乌莱斯库

Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK

英国牛津大学哲学系,牛津大学实践伦理学乌希罗中心,牛津,英国

Julian Savulescu

朱利安·萨乌莱斯库


Contributions

贡献

S.Y.C., M.J.W.T., and Y.S.L. conceived the project, with input from L.T.C., J.K., J.G.E., Y.S.C., and J.S. All authors contributed to the writing of the manuscript.

S.Y.C.、M.J.W.T. 和 Y.S.L. 构思了该项目,L.T.C.、J.K.、J.G.E.、Y.S.C. 和 J.S. 提供了建议。所有作者都参与了手稿的撰写。

Corresponding author

通讯作者

Correspondence to

联系方式:

Seo Yi Chng

Seo Yi Chng

.

Ethics declarations

伦理声明

Competing interests

利益冲突

J.S. is a Bioethics Committee consultant for Bayer. J.S. is an Advisory Panel member for the Hevolution Foundation. S.Y.C. is a director at Krsyma Medical AI Pte Ltd. M.J.W.T., Y.S.L., L.T.C., J.K., J.G.E., and Y.S.C. declare no competing interests.

J.S. 是拜耳公司生物伦理委员会的顾问。J.S. 是 Hevolution 基金会咨询小组的成员。S.Y.C. 是 Krsyma Medical AI Pte Ltd 的董事。M.J.W.T.、Y.S.L.、L.T.C.、J.K.、J.G.E. 和 Y.S.C. 声明无竞争利益。

Additional information

附加信息

Publisher’s note

出版商说明

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature 对已发布地图中的管辖权主张和机构隶属关系保持中立。

Rights and permissions

权利与许可

Open Access

开放获取

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

本文根据知识共享署名4.0国际许可证获得许可,该许可证允许您在任何媒介或格式中使用、分享、改编、分发和复制,只要您对原作者和来源给予适当的署名,提供知识共享许可证的链接,并说明是否进行了修改。

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

本文中的图像或其他第三方材料包含在文章的 Creative Commons 许可证中,除非材料的信用条款另有说明。如果材料未包含在文章的 Creative Commons 许可证中,且您计划的使用方式未被法律规定允许或超出了允许的使用范围,您需要直接从版权持有人处获得许可。

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

要查看此许可证的副本,请访问 http://creativecommons.org/licenses/by/4.0/。


About this article

关于本文

Cite this article

引用这篇文章

Chng, S.Y., Tern, M.J.W., Lee, Y.S.

Chng, S.Y., Tern, M.J.W., Lee, Y.S.

et al.

等。

Ethical considerations in AI for child health and recommendations for child-centered medical AI.

儿童健康领域的人工智能伦理考量及以儿童为中心的医疗人工智能建议。

npj Digit. Med.

数字医学npj

8

8

, 152 (2025). https://doi.org/10.1038/s41746-025-01541-1

,152(2025)。https://doi.org/10.1038/s41746-025-01541-1


Received

已收到

:

20 February 2024

2024年2月20日

Accepted

已接受

:

26 February 2025

2025年2月26日

Published

已发布

:

10 March 2025

2025年3月10日

DOI

数字对象标识符

:

https://doi.org/10.1038/s41746-025-01541-1

https://doi.org/10.1038/s41746-025-01541-1
