<NOTICE>
DEPARTMENT OF COMMERCE
<SUBAGY>National Telecommunications and Information Administration</SUBAGY>
<DEPDOC>[Docket No. 240216-0052]</DEPDOC>
<RIN>RIN 0660-XC060</RIN>
<SUBJECT>Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights</SUBJECT>
<HD SOURCE="HED">AGENCY:</HD>
National Telecommunications and Information Administration, Department of Commerce.
<HD SOURCE="HED">ACTION:</HD>
Notice; request for comment.
<SUM>
<HD SOURCE="HED">SUMMARY:</HD>
On October 30, 2023, President Biden issued an Executive order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which directed the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, to conduct a public consultation process and issue a report on the potential risks, benefits, other implications, and appropriate policy and regulatory approaches to dual-use foundation models for which the model weights are widely available. Pursuant to that Executive order, the National Telecommunications and Information Administration (NTIA) hereby issues this Request for Comment on these issues. Responses received will be used to submit a report to the President on the potential benefits, risks, and implications of dual-use foundation
models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models.
</SUM>
<DATES>
<HD SOURCE="HED">DATES:</HD>
Written comments must be received on or before March 27, 2024.
</DATES>
<HD SOURCE="HED">ADDRESSES:</HD>
All electronic public comments on this action, identified by
<E T="03">Regulations.gov</E>
docket number NTIA-2023-0009, may be submitted through the Federal e-Rulemaking Portal at
<E T="03">https://www.regulations.gov.</E>
The docket established for this request for comment can be found at
<E T="03">www.Regulations.gov,</E>
NTIA-2023-0009. To make a submission, click the “Comment Now!” icon, complete the required fields, and enter or attach your comments. Additional instructions can be found in the “Instructions” section below, after
<E T="02">SUPPLEMENTARY INFORMATION</E>
.
<FURINF>
<HD SOURCE="HED">FOR FURTHER INFORMATION CONTACT:</HD>
Please direct questions regarding this Request for Comment to Travis Hall at
<E T="03">thall@ntia.gov</E>
with “Openness in AI Request for Comment” in the subject line. If submitting comments by U.S. mail, please address questions to Bertram Lee, National Telecommunications and Information Administration, U.S. Department of Commerce, 1401 Constitution Avenue NW, Washington, DC 20230. Questions submitted via telephone should be directed to (202) 482-3522. Please direct media inquiries to NTIA's Office of Public Affairs, telephone: (202) 482-7002; email:
<E T="03">press@ntia.gov.</E>
</FURINF>
<SUPLINF>
<HD SOURCE="HED">SUPPLEMENTARY INFORMATION:</HD>
<HD SOURCE="HD1">Background and Authority</HD>
Artificial intelligence (AI)
<SU>1</SU>
<FTREF/>
has had, and will have, a significant effect on society, the economy, and scientific progress. Many of the most prominent models, including the model that powers ChatGPT, are “fully closed” or “highly restricted,” with limited or no public access to their inner workings. The recent introduction of large, publicly available models, such as those from Google, Meta, Stability AI, Mistral, the Allen Institute for AI, and EleutherAI, however, has fostered an ecosystem of increasingly “open” advanced AI models, allowing developers and others to fine-tune models using widely available computing.
<SU>2</SU>
<FTREF/>
<FTNT>
<SU>1</SU>
Artificial Intelligence (AI) “has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
<E T="03">see</E>
Executive Office of the President, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 FR 75191 (November 1, 2023)
<E T="03">https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.</E>
“AI Model” means “a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.”
<E T="03">see</E>
Id.
</FTNT>
<FTNT>
<SU>2</SU>
<E T="03">See e.g.,</E>
Zoe Brammer, How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access, The Institute for Security and Technology (December 2023)
<E T="03">https://securityandtechnology.org/wp-content/uploads/2023/12/How-Does-Access-Impact-Risk-Assessing-AI-Foundation-Model-Risk-Along-A-Gradient-of-Access-Dec-2023.pdf;</E>
Irene Solaiman, The Gradient of Generative AI Release: Methods and Considerations, arXiv:2302.04844v1 (February 5, 2023)
<E T="03">https://arxiv.org/pdf/2302.04844.pdf.</E>
</FTNT>
Dual-use foundation models with widely available model weights (referred to here as open foundation models) could play a key role in fostering growth among less-resourced actors, helping to share access to AI's benefits widely.
<SU>3</SU>
<FTREF/>
Small businesses, academic institutions, underfunded entrepreneurs, and even legacy businesses have used these models to further innovate, advance scientific knowledge, and gain potential competitive advantages in the marketplace. The concentration of access to foundation models in a small subset of organizations poses the risk of hindering such innovation and advancements, a concern that could be lessened by the availability of open foundation models. Open foundation models can be readily adapted and fine-tuned to specific tasks and may make it easier for system developers to scrutinize the role foundation models play in larger AI systems, which is important for rights- and safety-impacting AI systems (
<E T="03">e.g.,</E>
healthcare, education, housing, criminal justice, and online platforms).
<SU>4</SU>
<FTREF/>
These open foundation models have the potential to help scientists make new medical discoveries or even make mundane, time-consuming activities more efficient.
<SU>5</SU>
<FTREF/>
<FTNT>
<SU>3</SU>
<E T="03">See e.g.,</E>
Elizabeth Seger et al., Open-Sourcing Highly Capable Foundation Models, Centre for the Governance of AI (2023)
<E T="03">https://cdn.governance.ai/Open-Sourcing_Highly_Capable_Foundation_Models_2023_GovAI.pdf.</E>
</FTNT>
<FTNT>
<SU>4</SU>
<E T="03">See e.g.,</E>
Executive Office of the President: Office of Management and Budget, Proposed Memorandum For the Heads of Executive Departments and Agencies (November 3, 2023)
<E T="03">https://www.whitehouse.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-draft-for-public-review.pdf;</E>
Cui Beilei et al., Surgical-DINO: Adapter Learning of Foundation Model for Depth Estimation in Endoscopic Surgery, arXiv:2401.06013v1 (January 11, 2024)
<E T="03">https://arxiv.org/pdf/2401.06013.pdf</E>
(using low-rank adaptation, or LoRA, in a foundation model to help with surgical depth estimation for endoscopic surgeries).
</FTNT>
<FTNT>
<SU>5</SU>
<E T="03">See e.g.,</E>
Shaoting Zhang, On the Challenges and Perspectives of Foundation Models for Medical Image Analysis, arXiv:2306.05705v2 (November 23, 2023),
<E T="03">https://arxiv.org/pdf/2306.05705.pdf.</E>
</FTNT>
Open foundation models have the potential to transform research, both within computer science
<SU>6</SU>
<FTREF/>
and through supporting other disciplines, such as medical, pharmaceutical, and other scientific research.
<SU>7</SU>
<FTREF/>
Historically, widely available programming libraries have given researchers the ability to both run and understand algorithms created by other programmers. Researchers and journals have supported the movement towards open science,
<SU>8</SU>
<FTREF/>
which includes sharing research artifacts like the data and code required to reproduce results.
<FTNT>
<SU>6</SU>
<E T="03">See e.g.,</E>
David Noever, Can Large Language Models Find And Fix Vulnerable Software?, arXiv:2308.10345 (August 20, 2023)
<E T="03">https://arxiv.org/abs/2308.10345;</E>
<SU>6</SU>
Andreas Stöckl, Evaluating a Synthetic Image Dataset Generated with Stable Diffusion, Proceedings of the Eighth International Congress on Information and Communication Technology, Vol. 693 (July 25, 2023)
<E T="03">https://link.springer.com/chapter/10.1007/978-981-99-3243-6_64.</E>
</FTNT>
<FTNT>
<SU>7</SU>
<E T="03">See e.g.,</E>
Kun-Hsing Yu et al., Artificial intelligence in healthcare, Nature Biomedical Engineering, Vol. 2, 719-731 (October 10, 2018)
<E T="03">https://www.nature.com/articles/s41551-018-0305-z#citeas;</E>
Kevin Maik Jablonka et al., 14 examples of how LLMs can transform materials science and chemistry: a reflection on a large language model hackathon, Digital Discovery 2 (August 8, 2023)
<E T="03">https://pubs.rsc.org/en/content/articlehtml/2023/dd/d3dd00113j.</E>
</FTNT>
<FTNT>
<SU>8</SU>
<E T="03">See e.g.,</E>
Harvey V. Fineberg et al., Consensus Study Report: Reproducibility and Replicability in Science, National Academies of Sciences (May 2019)
<E T="03">https://nap.nationalacademies.org/resource/25303/R&R.pdf;</E>
Nature, Reporting standards and availability of data, materials, code and protocols,
<E T="03">https://www.nature.com/nature-portfolio/editorial-policies/reporting-standards;</E>