OpenAI won't let politicians use its tech for campaigning, for now

 


OpenAI is rolling out a series of initiatives to prevent its products from being used for misinformation ahead of a major year for US and global elections. 

On Monday, the artificial intelligence startup announced new tools that will attribute the sources of current-events information provided by its chatbot, ChatGPT, and help users determine whether an image was created by its AI software.

The changes come as concerns rise over the risks of so-called “deepfake” images and other AI-generated content that could mislead voters during campaigns.


The company will add digital credentials developed by a third-party coalition of AI firms that encode details about the origin of images created with its image generator tool, DALL-E 3.

The firm says it's experimenting with a new "provenance classifier" tool that can detect AI-generated images that have been made using DALL-E.

It hopes to make that tool available to its first group of testers, including journalists, researchers and other tech platforms, for feedback.

OpenAI will continue integrating its ChatGPT platform with real-time news reporting globally, "including attribution and links," it said.

That effort builds on a first-of-its-kind deal announced with German media giant Axel Springer last year that offers ChatGPT users summaries of select global news content from the company's outlets.

In the U.S., OpenAI says it's working with the nonpartisan National Association of Secretaries of State to direct ChatGPT users to CanIVote.org for authoritative information on U.S. voting and elections.

Because generative AI technology is so new, OpenAI says it's still working to understand how effective its tools might be for political persuasion.

To hedge against abuse, the firm doesn't allow people to build applications for political campaigning and lobbying, and it doesn't allow developers to create chatbots that impersonate real people, such as candidates.

Like most tech firms, it doesn't allow users to build applications for its ChatGPT platform that deter people from participating in the democratic process.
