OpenAI bans AI tools for political campaigns

OpenAI has announced a prohibition on the use of its AI tools for political campaigning ahead of the 2024 elections. The AI startup said the aim is to prevent abuse, ensure transparency in AI-generated content, and enhance access to accurate voting information.

OpenAI stated in a blog post that it recognizes the pivotal role technology plays in society, offering tools that empower individuals to address complex issues and enhance daily life. However, the organisation is equally committed to ensuring responsible and safe deployment of its AI systems. As the global community gears up for crucial elections in 2024, OpenAI outlines its approach, underscoring the importance of collaboration across various facets of the democratic process.

Preventing abuse:

OpenAI anticipates and addresses potential abuses of its AI tools, particularly in the context of elections. The AI startup actively works to prevent misleading "deepfakes," scaled influence operations, and chatbots impersonating candidates. Rigorous testing, user engagement, and safety mitigations are integral to the development process, with specific guardrails in place for tools like DALL·E, which declines requests to generate images of real people, including candidates.

Usage Policies for ChatGPT and the API are regularly refined to align with evolving insights into technology use. Notably, OpenAI restricts the creation of applications for political campaigning and lobbying, prohibits the development of chatbots impersonating real individuals or institutions, and disallows applications that discourage participation in democratic processes.
OpenAI has also introduced a reporting flow in its new GPTs, enabling users to flag potential violations and further strengthening the accountability of its tools.

Transparency around AI-generated content:

To empower voters to assess the authenticity of AI-generated content, OpenAI focuses on improving transparency around image provenance. Efforts include the implementation of the Coalition for Content Provenance and Authenticity’s digital credentials for images generated by DALL·E 3. Additionally, OpenAI is experimenting with a provenance classifier, a tool to detect images generated by DALL·E, which will be made available to a select group of testers, including journalists, platforms, and researchers.
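
For readers who want to see what such credentials look like in practice, the sketch below shows one way a journalist or researcher might inspect an image's C2PA metadata. It assumes the open-source c2patool command-line utility from the Content Authenticity Initiative is installed and that running it on a file prints the manifest store as JSON; the exact invocation and output format are assumptions that may vary by version, and this metadata check is separate from OpenAI's own provenance classifier.

```python
# Minimal sketch: inspect an image's C2PA Content Credentials using the
# open-source c2patool CLI (https://github.com/contentauth/c2patool).
# Assumptions: c2patool is installed and on PATH, and `c2patool <file>`
# prints the manifest store as JSON; details may differ by tool version.
import json
import subprocess
import sys

def read_content_credentials(image_path: str) -> dict | None:
    """Return the C2PA manifest store for image_path, or None if absent."""
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # No manifest found, or the file could not be read.
        return None
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        print("No Content Credentials found.")
    else:
        # The active manifest records the generator claim,
        # e.g. an image produced by DALL-E 3.
        print(json.dumps(manifest, indent=2))
```
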
ChatGPT integrates with real-time news reporting globally, providing users with access to accurate information with attribution and links. This integration aims to enhance transparency around the origin of information and promote a balanced understanding of news sources.

In collaboration with the National Association of Secretaries of State (NASS) in the United States, OpenAI is working to direct users to CanIVote.org, a trusted source for authoritative US voting information. Lessons learned from this partnership will inform OpenAI's approach in other countries and regions.
