Foundation AI models like GPT-3 and DALL-E need release standards

Percy Liang is director of the Center for Research on Foundation Models, a faculty affiliate of the Stanford Institute for Human-Centered AI, and an associate professor of computer science at Stanford University.

Humans are not very good at predicting the future, especially when it comes to technology.

Foundation models are a new class of large-scale neural networks with the ability to generate text, audio, video, and images. These models will anchor all kinds of applications and have the power to influence many aspects of society. It is difficult for anyone, even experts, to imagine where this technology will lead in the coming years.

Foundation models are trained on broad data at scale using self-supervision, so they can be adapted to a wide variety of downstream tasks. This groundbreaking approach to AI dramatically improves accuracy and opens up new possibilities, as organizations no longer have to train a new model for each new AI application. It also carries clear risks, as the downstream consequences are difficult to predict, let alone manage. If not managed effectively, foundation models such as GPT-3, PaLM, and DALL-E 2 can cause significant harm to individuals and society, whether intended or not.

One of the most important parts of governance is establishing community standards for the release of foundation models so that a diverse group of researchers has the opportunity to scrutinize them. Currently, companies like Microsoft, Google, OpenAI, Meta, and DeepMind each take a different stance on releasing their models: some embrace fully open release, while others keep their models closed or limit access to a small group of researchers.

While we do not expect a consensus, we believe it is problematic for any foundation model developer to determine its release policy unilaterally. A single actor releasing an insecure, high-performance technology can knowingly or unknowingly cause significant harm to individuals and society. In addition, developers would benefit from sharing best practices, rather than repeatedly bearing the economic and social costs of rediscovering the same harms.

Fortunately, releasing new foundation models doesn’t have to be an all-or-nothing proposition. A multidimensional release policy framework would take into account four key questions.

  1. What to release: Papers, models, code and data can be released separately; each has an impact on expanding scientific knowledge and reducing the potential risk of harm.
  2. Who gets access to the release: Given the risks involved in releasing models, the order of who gets access matters. For example, there may be an inner circle of trusted colleagues, a middle circle of researchers requesting access, and the general public.
  3. When to release the model: The timing of a release should depend on both intrinsic properties, such as the results of safety assessments, and external circumstances, such as which other models exist and how much time has passed.
  4. How to release the model: The process of releasing new assets should include two-way communication between developers and researchers so that the release is maintained over time.

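The four questions above can be read as dimensions of a single release decision. As a minimal sketch, assuming a hypothetical `ReleaseDecision` structure (all names and tiers here are illustrative, not part of any proposed standard), the framework might be modeled like this:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical access tiers mirroring the "who" question above.
class AccessTier(Enum):
    TRUSTED_COLLEAGUES = 1
    VETTED_RESEARCHERS = 2
    GENERAL_PUBLIC = 3

# One release decision is a point in a four-dimensional space:
# what assets, to whom, when, and through what ongoing process.
@dataclass
class ReleaseDecision:
    assets: set           # what: subset of {"paper", "model", "code", "data"}
    audience: AccessTier  # who: widest tier granted access
    earliest_date: str    # when: gated on safety assessments (ISO date)
    maintained: bool      # how: two-way communication after release

    def permits(self, asset: str, tier: AccessTier) -> bool:
        """An asset is available only to the granted tier or narrower ones."""
        return asset in self.assets and tier.value <= self.audience.value

decision = ReleaseDecision(
    assets={"paper", "code"},
    audience=AccessTier.VETTED_RESEARCHERS,
    earliest_date="2022-09-01",
    maintained=True,
)
print(decision.permits("code", AccessTier.TRUSTED_COLLEAGUES))  # True
print(decision.permits("model", AccessTier.GENERAL_PUBLIC))     # False
```

Separating the dimensions makes explicit that, say, releasing a paper publicly while restricting model weights to vetted researchers is a coherent single policy rather than a contradiction.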
To help developers make more informed decisions with input from the wider community, we at the Center for Research on Foundation Models at the Stanford Institute for Human-Centered AI have proposed establishing a foundation model review board. The role of the board would be to facilitate the process of developers releasing foundation models to outside researchers. This approach will expand the group of researchers who can study and improve foundation models, while helping to manage the risks of release.

The basic workflow of the review board looks something like this:

  • A developer issues a call for proposals describing the available foundation models and what the developer believes are the most critical areas of research on these models.
  • A researcher submits a research proposal that includes the research goals, the type of access needed to achieve those goals, and a plan for managing any ethical and security risks.
  • The board assesses the research proposal and deliberates, possibly with additional input from the researcher.
  • Based on the recommendation of the board, the developer of the foundation model makes a final decision to approve, reject or postpone the proposal.
  • If the proposal is approved, the foundation model developer releases the desired assets to the researcher.

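The workflow above is essentially a small state machine. As an illustrative sketch (the state names and transitions here are our own labels, not part of the proposal), it could look like this:

```python
from enum import Enum, auto

# Hypothetical states for a proposal moving through the review board workflow.
class ProposalState(Enum):
    SUBMITTED = auto()
    UNDER_REVIEW = auto()
    APPROVED = auto()
    REJECTED = auto()
    DEFERRED = auto()
    ASSETS_RELEASED = auto()

# Allowed transitions: the board deliberates and recommends, the developer
# makes the final call, and assets are released only after approval.
TRANSITIONS = {
    ProposalState.SUBMITTED: {ProposalState.UNDER_REVIEW},
    ProposalState.UNDER_REVIEW: {
        ProposalState.APPROVED,
        ProposalState.REJECTED,
        ProposalState.DEFERRED,
    },
    ProposalState.DEFERRED: {ProposalState.UNDER_REVIEW},
    ProposalState.APPROVED: {ProposalState.ASSETS_RELEASED},
}

def advance(state: ProposalState, target: ProposalState) -> ProposalState:
    """Move a proposal to `target`, rejecting invalid transitions."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot go from {state.name} to {target.name}")
    return target

# A proposal that is reviewed, approved, and then released:
s = ProposalState.SUBMITTED
for nxt in (ProposalState.UNDER_REVIEW, ProposalState.APPROVED,
            ProposalState.ASSETS_RELEASED):
    s = advance(s, nxt)
print(s.name)  # ASSETS_RELEASED
```

Note that the machine has no edge from SUBMITTED directly to ASSETS_RELEASED: release without review is structurally impossible, which is the core property the board is meant to guarantee.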
A review board like this would ensure that release decisions are made in a highly contextual way: for a particular researcher, for a particular purpose, for a particular foundation model with a particular form of access, at a particular time. This concreteness makes it much easier to reason about the benefits and risks of any given decision. Over time, a series of these decisions would establish community standards for model release.

We must recognize that foundation models are evolving rapidly and require governance standards. The models we will see five years from now may be unrecognizable to us today, just as today’s models would have been unimaginable five years ago. Those developing foundation models should work with the community to develop best practices around the release of new models. Downstream users, including application developers and researchers, should be more aware of the models they use, what data was used to train those models, and how the models were built — and if that information isn’t available, ask for it.

An important feature of human-centered AI is transparency and openness, which enable collective governance characterized by fair processes and better outcomes. Given the enormous uncertainty and our poor ability to predict the future, we cannot make decisions based solely on expected outcomes. We must instead focus on developing a resilient process that prepares us for whatever lies ahead.

Since research on foundation models is still in its infancy, input and feedback are extremely valuable. For those working with foundation models, whether through research or development, we’d love to hear from you at [email protected]

Additional contributions to this report come from Rob Reich, Professor of Political Science and, with courtesy, Professor of Philosophy at Stanford University; Rishi Bommasani, a Ph.D. student in the computer science department at Stanford; and Kathleen Creel, HAI-EIS Embedded EthiCS Fellow at Stanford.
