Notice

Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights

Notice; request for comment.

📖 Research Context From Federal Register API

Summary:

On October 30, 2023, President Biden issued an Executive order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," which directed the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, to conduct a public consultation process and issue a report on the potential risks, benefits, other implications, and appropriate policy and regulatory approaches to dual-use foundation models for which the model weights are widely available. Pursuant to that Executive order, the National Telecommunications and Information Administration (NTIA) hereby issues this Request for Comment on these issues. Responses received will be used to submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models.

Key Dates
Citation: 89 FR 14059
Written comments must be received on or before March 27, 2024.
Comments closed: March 27, 2024
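
The research context above is pulled from the Federal Register API. As a rough sketch only (not part of the notice itself), the same summary and key dates for document 2024-03763 can be fetched from the API's public documents endpoint; the response field names used below (abstract, publication_date, comments_close_on) are assumptions that should be checked against the API documentation at https://www.federalregister.gov/developers/documentation/api/v1.

    # Minimal sketch: fetch this notice's metadata from the Federal Register API.
    # The endpoint is public and unauthenticated; the field names read from the
    # response are assumptions to verify against the API documentation.
    import requests

    DOC_NUMBER = "2024-03763"
    URL = f"https://www.federalregister.gov/api/v1/documents/{DOC_NUMBER}.json"

    resp = requests.get(URL, timeout=30)
    resp.raise_for_status()
    doc = resp.json()

    print(doc.get("title"))
    print("Published:", doc.get("publication_date"))
    print("Comments close:", doc.get("comments_close_on"))
    print("Summary:", (doc.get("abstract") or "")[:300])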

In Plain English

What is this Federal Register notice?

This is a notice published in the Federal Register by the Department of Commerce's National Telecommunications and Information Administration (NTIA). Notices communicate information, guidance, or policy interpretations but generally do not create new binding obligations.

Is this rule final?

This document is a notice and request for comment, not a rule. It does not itself create enforceable regulatory obligations; it solicits public input that will inform a report to the President.

Who does this apply to?

This is a request for comment open to the public: any interested individual or organization may submit written comments on dual-use foundation models with widely available model weights.

When does it take effect?

As a request for comment, this notice has no effective date. Written comments must be received on or before March 27, 2024; the comment period is now closed.

Why it matters: Comments submitted in response to this notice will inform NTIA's report to the President on the potential benefits, risks, and policy and regulatory approaches for dual-use foundation models with widely available model weights.

Regulatory History — 7 documents in this rulemaking

  1. Feb 26, 2024 2024-03763 Notice
    Dual Use Foundation Artificial Intelligence Models With Widely Available Mode...
  2. Apr 2, 2024 2024-06748 Notice
    Adoption of First Responder Network Authority Categorical Exclusions Under th...
  3. May 23, 2024 2024-11277 Notice
    Advancement of 6G Telecommunications Technology
  4. Sep 4, 2024 2024-19524 Notice
    Request for Comments on Bolstering Data Center Growth, Resilience, and Security
  5. Sep 12, 2024 2024-20645 Notice
    Request for Comment on Local Estimates of internet Adoption
  6. Dec 11, 2024 2024-29064 Notice
    Ethical Guidelines for Research Using Pervasive Data
  7. Dec 27, 2024 2024-30760 Notice
    Impact of L-Band MSS ‘Direct-to-Device’ Operations on GPS

Document Details

Document Number: 2024-03763
FR Citation: 89 FR 14059
Type: Notice
Published: Feb 26, 2024
Effective Date: -
RIN: 0660-XC060
Docket ID: Docket No. 240216-0052
Pages: 14059–14063 (5 pages)
Text Fetched: Yes

Agencies & CFR References

CFR References:
None

Linked CFR Parts

Part | Name | Agency
No linked CFR parts

Paired Documents

Type | Proposed | Final | Method | Conf
No paired documents

Related Documents (by RIN/Docket)

Doc # | Type | Title | Published
2024-30760 | Notice | Impact of L-Band MSS ‘Direct-to-Device’ ... | Dec 27, 2024
2024-29064 | Notice | Ethical Guidelines for Research Using Pe... | Dec 11, 2024
2024-20645 | Notice | Request for Comment on Local Estimates o... | Sep 12, 2024
2024-19524 | Notice | Request for Comments on Bolstering Data ... | Sep 4, 2024
2024-11277 | Notice | Advancement of 6G Telecommunications Tec... | May 23, 2024
2024-06748 | Notice | Adoption of First Responder Network Auth... | Apr 2, 2024


⏳ Requirements Extraction Pending

This document's regulatory requirements haven't been extracted yet. Extraction happens automatically during background processing (typically within a few hours of document ingestion).

Federal Register documents are immutable—once extracted, requirements are stored permanently and never need re-processing.
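
Because Federal Register documents never change after publication, extraction can be treated as a write-once cache keyed on the document number. The sketch below illustrates that caching pattern only; extract_requirements and the cache location are hypothetical placeholders, not this site's actual pipeline.

    # Sketch of an extract-once cache keyed on the immutable FR document number.
    # extract_requirements() is a hypothetical stand-in for the real extractor.
    import json
    from pathlib import Path

    CACHE_DIR = Path("requirements_cache")  # hypothetical location
    CACHE_DIR.mkdir(exist_ok=True)

    def extract_requirements(full_text: str) -> list[dict]:
        """Placeholder: parse regulatory requirements out of the document text."""
        return []

    def get_requirements(doc_number: str, full_text: str) -> list[dict]:
        cache_file = CACHE_DIR / f"{doc_number}.json"
        if cache_file.exists():
            # FR documents are immutable, so a cached result is always valid.
            return json.loads(cache_file.read_text())
        requirements = extract_requirements(full_text)
        cache_file.write_text(json.dumps(requirements))
        return requirements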

Full Document Text (4,619 words · ~24 min read)

Text Preserved
DEPARTMENT OF COMMERCE
National Telecommunications and Information Administration
[Docket No. 240216-0052]
RIN 0660-XC060

Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights

AGENCY: National Telecommunications and Information Administration, Department of Commerce.

ACTION: Notice; request for comment.

SUMMARY: On October 30, 2023, President Biden issued an Executive order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," which directed the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, to conduct a public consultation process and issue a report on the potential risks, benefits, other implications, and appropriate policy and regulatory approaches to dual-use foundation models for which the model weights are widely available. Pursuant to that Executive order, the National Telecommunications and Information Administration (NTIA) hereby issues this Request for Comment on these issues. Responses received will be used to submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models.

DATES: Written comments must be received on or before March 27, 2024.

ADDRESSES: All electronic public comments on this action, identified by Regulations.gov docket number NTIA-2023-0009, may be submitted through the Federal e-Rulemaking Portal at https://www.regulations.gov. The docket established for this request for comment can be found at www.Regulations.gov, NTIA-2023-0009. To make a submission, click the "Comment Now!" icon, complete the required fields, and enter or attach your comments. Additional instructions can be found in the "Instructions" section below, after SUPPLEMENTARY INFORMATION.

FOR FURTHER INFORMATION CONTACT: Please direct questions regarding this Request for Comment to Travis Hall at thall@ntia.gov with "Openness in AI Request for Comment" in the subject line. If submitting comments by U.S. mail, please address questions to Bertram Lee, National Telecommunications and Information Administration, U.S. Department of Commerce, 1401 Constitution Avenue NW, Washington, DC 20230. Questions submitted via telephone should be directed to (202) 482-3522. Please direct media inquiries to NTIA's Office of Public Affairs, telephone: (202) 482-7002; email: press@ntia.gov.

SUPPLEMENTARY INFORMATION:

Background and Authority

Artificial intelligence (AI) [1] has had, and will have, a significant effect on society, the economy, and scientific progress. Many of the most prominent models, including the model that powers ChatGPT, are "fully closed" or "highly restricted," with limited or no public access to their inner workings. The recent introduction of large, publicly-available models, such as those from Google, Meta, Stability AI, Mistral, the Allen Institute for AI, and EleutherAI, however, has fostered an ecosystem of increasingly "open" advanced AI models, allowing developers and others to fine-tune models using widely available computing. [2]

Dual use foundation models with widely available weights (referred to here as open foundation models) could play a key role in fostering growth among less resourced actors, helping to widely share access to AI's benefits. [3] Small businesses, academic institutions, underfunded entrepreneurs, and even legacy businesses have used these models to further innovate, advance scientific knowledge, and gain potential competitive advantages in the marketplace. The concentration of access to foundation models into a small subset of organizations poses the risk of hindering such innovation and advancements, a concern that could be lessened by availability of open foundation models. Open foundation models can be readily adapted and fine-tuned to specific tasks and possibly make it easier for system developers to scrutinize the role foundation models play in larger AI systems, which is important for rights- and safety-impacting AI systems (e.g., healthcare, education, housing, criminal justice, online platforms, etc.). [4] These open foundation models have the potential to help scientists make new medical discoveries or even make mundane, time-consuming activities more efficient. [5]

Open foundation models have the potential to transform research, both within computer science [6] and through supporting other disciplines such as medicine, pharmaceutical, and scientific research. [7] Historically, widely available programming libraries have given researchers the ability to simultaneously run and understand algorithms created by other programmers. Researchers and journals have supported the movement towards open science, [8] which includes sharing research artifacts like the data and code required to reproduce results.

Footnotes:

[1] Artificial Intelligence (AI) "has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action." See Executive Office of the President, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 FR 75191 (November 1, 2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence. "AI Model" means "a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs." See id.

[2] See, e.g., Zoe Brammer, How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access, The Institute for Security and Technology (December 2023), https://securityandtechnology.org/wp-content/uploads/2023/12/How-Does-Access-Impact-Risk-Assessing-AI-Foundation-Model-Risk-Along-A-Gradient-of-Access-Dec-2023.pdf; Irene Solaiman, The Gradient of Generative AI Release: Methods and Considerations, arXiv:2302.04844v1 (February 5, 2023), https://arxiv.org/pdf/2302.04844.pdf.

[3] See, e.g., Elizabeth Seger et al., Open-Sourcing Highly Capable Foundation Models, Centre for the Governance of AI (2023), https://cdn.governance.ai/Open-Sourcing_Highly_Capable_Foundation_Models_2023_GovAI.pdf.

[4] See, e.g., Executive Office of the President: Office of Management and Budget, Proposed Memorandum For the Heads of Executive Departments and Agencies (November 3, 2023), https://www.whitehouse.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-draft-for-public-review.pdf; Cui Beilei et al., Surgical-DINO: Adapter Learning of Foundation Model for Depth Estimation in Endoscopic Surgery, arXiv:2401.06013v1 (January 11, 2024), https://arxiv.org/pdf/2401.06013.pdf (using low-ranked adaptation, or LoRA, in a foundation model to help with surgical depth estimation for endoscopic surgeries).

[5] See, e.g., Shaoting Zhang, On the Challenges and Perspectives of Foundation Models for Medical Image Analysis, arXiv:2306.05705v2 (November 23, 2023), https://arxiv.org/pdf/2306.05705.pdf.

[6] See, e.g., David Noever, Can Large Language Models Find And Fix Vulnerable Software?, arXiv:2308.10345 (August 20, 2023), https://arxiv.org/abs/2308.10345; Andreas Stöckl, Evaluating a Synthetic Image Dataset Generated with Stable Diffusion, Proceedings of Eighth International Congress on Information and Communication Technology Vol. 693 (July 25, 2023), https://link.springer.com/chapter/10.1007/978-981-99-3243-6_64.

[7] See, e.g., Kun-Hsing Yu et al., Artificial intelligence in healthcare, Nature Biomedical Engineering Vol. 2, 719-731 (October 10, 2018), https://www.nature.com/articles/s41551-018-0305-z#citeas; Kevin Maik Jablonka et al., 14 examples of how LLMs can transform materials science and chemistry: a reflection on a large language model hackathon, Digital Discovery 2 (August 8, 2023), https://pubs.rsc.org/en/content/articlehtml/2023/dd/d3dd00113j.

[8] See, e.g., Harvey V. Fineberg et al., Consensus Study Report: Reproducibility and Replicability in Science, National Academies of Sciences (May 2019), https://nap.nationalacademies.org/resource/25303/R&R.pdf; Nature, Reporting standards and availability of data, materials, code and protocols, https://www.nature.com/nature-portfolio/editorial-policies/reporting-standards;

Preview showing 10k of 35k characters. Full document text is stored and available for version comparison.
This text is preserved for citation and comparison.