What policies are proposed or considered for generative AI?



Policies proposed for generative AI:

AI transparency laws: Requiring companies and researchers developing generative AI to disclose details about how their systems work, what data they are trained on, and how outputs are produced. This could help address issues of bias and lack of explainability. Examples include proposed US legislation like the Algorithmic Accountability Act.

AI oversight committees: Establishing independent committees to review generative AI systems before and after they are deployed, assessing things like system design, training data, outputs, risk of harm or deception, and more. They could recommend changes or restrictions on the AI's use. Some companies like Anthropic have proposed internal oversight committees.

Guidelines for AI safety: Developing guidelines around properly constraining generative systems, considering ethical implications, limiting autonomy, providing human oversight, using techniques like Constitutional AI, and more. For example, researchers at Anthropic proposed Constitutional AI, an approach in which a model is trained to follow an explicit set of written principles, embedding human values and oversight into its design.

Policies on synthetic media: Enacting policies specifically targeted at "synthetic media" such as AI-generated images, video, audio, and text, governing how these outputs can be produced and shared to avoid deception. For example, researchers have proposed embedding digital watermarks or other signals in synthetic media to indicate that it is AI-generated. Some platforms have banned synthetic media that could mislead people or violate terms of use.
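As a toy illustration of the watermarking idea (not any specific proposal or deployed scheme), generated text could be tagged with invisible zero-width Unicode characters that detection tools can check for; real watermarking research uses far more robust statistical methods than this sketch:

```python
# Toy sketch: marking AI-generated text with an invisible signal so
# downstream tools can flag its synthetic origin. The marker choice here
# (three zero-width non-joiners) is arbitrary and purely illustrative.

ZWNJ = "\u200c"   # zero-width non-joiner: invisible in most text renderers
MARK = ZWNJ * 3   # hypothetical watermark signal for this demo

def add_watermark(text: str) -> str:
    """Append the invisible marker to indicate AI-generated text."""
    return text + MARK

def is_watermarked(text: str) -> bool:
    """Check whether the invisible marker is present."""
    return text.endswith(MARK)

sample = add_watermark("This paragraph was produced by a generative model.")
print(is_watermarked(sample))        # True
print(is_watermarked("Human text"))  # False
```

Note that a watermark like this is trivially stripped by copy-editing, which is why policy proposals often pair watermarking with platform-level provenance standards rather than relying on any single signal.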

International AI governance frameworks: Proposing governance frameworks at an international level on responsible development of generative AI and other advanced systems. For example, UNESCO adopted an AI ethics framework with guidelines on things like privacy, bias, and human oversight that could apply to generative AI. Some countries are discussing broader AI governance policies as well.

Funding for research on AI policy: Providing more funding and resources specifically for researchers and institutes focused on AI policy and ethics. This support can generate new ideas for governance, guidelines, and frameworks to help ensure progress in generative AI is well grounded and aimed at benefiting humanity. Groups like the AI Now Institute and the Brookfield Institute are examples of organizations doing this type of work.

Discussions on long-term issues: Continuing active discussions among researchers, policymakers, and the public on managing extremely advanced generative systems and artificial general intelligence. Engaging now with issues like transparency, value alignment, and risks to humanity can help put policies and safeguards in place well before such technology arrives, even if it remains far off. The AI safety and ethics community already fosters these discussions.

These are some promising examples, but policies for generative AI remain limited. More work is needed, and it must be approached carefully, with collaboration across fields and an adaptable stance as AI continues to evolve. With sustained time and effort focused on policy, governance for generative systems can develop further, but it will remain an ongoing grand challenge.