OpenAI’s chatbot store is being inundated with spam.

OpenAI CEO Sam Altman introduced GPTs, custom chatbots powered by OpenAI’s generative AI models, at the company’s first-ever developer conference in November. He said that GPTs could help people “accomplish all sorts of tasks,” such as programming, learning about obscure scientific topics, and getting workout tips.

GPTs can be more helpful than plain ChatGPT because they combine instructions, extra knowledge, and actions, Altman said. You can create a GPT for almost anything.

He wasn’t joking when he said “anything.”

Eltrys found that OpenAI’s official marketplace for GPTs, the GPT Store, is flooded with bizarre GPTs that may well violate copyright, suggesting that OpenAI’s moderation is far from strict. Some GPTs claim to generate art in the style of Disney and Marvel properties but are really just funnels to paid third-party services, while others claim to bypass AI content detection tools like Turnitin and Copyleaks.

Insufficient moderation
To list GPTs in the GPT Store, developers have to verify their user profiles and submit their GPTs to OpenAI’s review system, which uses a mix of human and automated review. A spokesperson described the process:

We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies. Violations can lead to actions against the content or your account, such as warnings, sharing restrictions, or ineligibility for inclusion in the GPT Store or monetization.

Creating a GPT doesn’t require any coding experience, and a GPT can be as simple or as complex as its creator wishes. Creators type the capabilities they want into OpenAI’s GPT-building tool, GPT Builder, and the tool attempts to produce a GPT that delivers them.
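GPT Builder itself is a no-code, conversational interface, so there is no code to show for it directly. But for a rough sense of the ingredients involved (a name, plain-language instructions, a choice of model), here is a minimal sketch using OpenAI’s Assistants API, which exposes similar building blocks programmatically; the assistant’s name and instructions below are hypothetical, invented purely for illustration.

```python
# A minimal sketch, not GPT Builder itself: OpenAI's Assistants API
# exposes similar ingredients (name, instructions, model) in code form.
# The name and instructions here are hypothetical, for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="Workout Tips",  # hypothetical, not a real GPT Store listing
    instructions=(
        "You are a friendly fitness coach. Suggest simple workout "
        "routines and remind users to consult a doctor before starting."
    ),
    model="gpt-4-turbo-preview",
)

print(assistant.id)  # identifier of the newly created assistant
```

A GPT built through GPT Builder bundles the same kind of natural-language instructions, plus optional extra knowledge and actions, behind a chat interface.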

The GPT Store has grown very quickly, no doubt helped by that low barrier to entry; OpenAI said in January that it hosted roughly 3 million GPTs. But the growth appears to have come at the expense of quality, and of adherence to OpenAI’s own rules.

Copyright issues
The GPT Store hosts many GPTs drawn from popular movie, TV, and video game franchises that, as far as Eltrys can tell, the rights holders neither created nor authorized. One GPT creates monsters in the style of Pixar’s “Monsters, Inc.,” while another promises text-based adventures set in the “Star Wars” universe.

These GPTs, along with the ones in the GPT Store that let users talk to trademarked characters like Wario and Aang from “Avatar: The Last Airbender,” set the stage for copyright drama.

Kit Walsh, a senior staff attorney at the Electronic Frontier Foundation, explained:

These GPTs can be used both to create transformative works and to infringe. [Transformative works are a type of fair use shielded from copyright claims.] The individuals doing the infringing could be liable, of course, and the creator of an otherwise lawful tool can be liable if they encourage users to use it in unlawful ways. There can also be trademark problems with using a trademarked name to identify a good or service, since people may be confused about whether it’s endorsed or operated by the trademark owner.

The Digital Millennium Copyright Act’s safe harbor provision shields OpenAI, like other platforms that host infringing content (YouTube and Facebook among them), from liability, so long as it meets the law’s requirements and takes down specific instances of infringement when asked. That means any copyright liability would likely fall on the GPT creators themselves, not on OpenAI.

Still, it’s not a good look for a company already embroiled in intellectual property litigation.

Academic dishonesty
OpenAI’s terms of service plainly prohibit developers from building GPTs that promote academic dishonesty. Yet the GPT Store is brimming with GPTs that claim to bypass AI content detectors, including the detectors sold to educators through plagiarism-scanning platforms.

Some GPTs advertise themselves as “sophisticated” rephrasing tools that are “undetectable” by popular AI content scanners like Originality.ai and Copyleaks. Another, Humanizer Pro, ranked No. 2 in the GPT Store’s Writing category, claims to “humanize” content to evade AI detectors while preserving a text’s “meaning and quality” and delivering a “100% human” score.

Some of these GPTs are thinly veiled funnels to pricier services. Humanizer, for instance, invites users onto a “premium plan” to use “the most advanced algorithm,” which passes text entered into the GPT along to a plug-in from a third-party site called GPTInf. GPTInf subscriptions cost $12 per month for 10,000 words per month, or $8 per month on an annual plan, which is a bit steep on top of OpenAI’s $20-per-month ChatGPT Plus.

We’ve covered the limitations of AI content detectors before; plenty of academic studies, not just our own tests, show they’re neither accurate nor reliable. Yet OpenAI continues to allow tools on the GPT Store that promote academically dishonest behavior, even when that behavior doesn’t achieve the intended effect.

The OpenAI spokesperson said:

GPTs for academic dishonesty, including plagiarism, are against our policies. That includes GPTs stated to circumvent academic integrity tools like plagiarism detectors. We see some GPTs that are for “humanizing” text; there are many reasons why users might want AI-generated content that doesn’t “sound” like AI, and we’re still learning from the real-world use of these GPTs.

Impersonation
OpenAI’s policies also prohibit GPT creators from building GPTs that impersonate people or organizations without their “consent or legal right.”

Yet the GPT Store is rife with GPTs that claim to represent the views of, or outright imitate, real people.

A search for “Elon Musk,” “Donald Trump,” “Leonardo DiCaprio,” “Barack Obama,” or “Joe Rogan” turns up dozens of GPTs simulating conversations with those figures; some are obvious parodies, others less so. Still other GPTs present themselves not as people but as authorities on well-known companies’ products, like a MicrosoftGPT billing itself an “expert in all things Microsoft.”

Whether these amount to impersonation, given that many of the targets are public figures and some are obvious parodies, is unclear. That’s something for OpenAI to clarify.

The spokesperson said:

We allow creators to instruct their GPTs to respond “in the style of” a specific real person so long as the GPTs don’t impersonate them, such as being named after a real person, being instructed to fully emulate them, or using that person’s image as a profile picture.

The company recently suspended the developer of a GPT imitating long-shot Democratic presidential candidate Rep. Dean Phillips, which even carried a disclaimer explaining that it was an AI tool. But OpenAI said it removed the GPT for violating its policy on political campaigning in addition to its impersonation rules, not for impersonation alone.

Jailbreaks
The GPT Store also hosts some curious attempts at jailbreaking OpenAI’s models, though none are very successful.

There are plenty of GPTs on the store using DAN, short for “Do Anything Now,” a popular prompting method for getting models to respond without the constraints of their usual rules. The few I tested refused to answer dangerous questions, like “How do I build a bomb?”, but they were more willing to use unflattering language than vanilla ChatGPT.

The spokesperson said:

GPTs described as, or instructed to, evade OpenAI’s safeguards or break OpenAI’s policies are against our rules. GPTs attempting to steer model behavior in other ways are allowed, including generally trying to make GPT more permissive without violating our usage policies.

Growing pains
When OpenAI launched the GPT Store, it pitched the store as a curated collection of powerful, productivity-boosting AI tools. And it is that, problems aside. But it’s also quickly becoming a breeding ground for spammy, legally questionable, and perhaps even harmful GPTs, or at the very least GPTs that transparently break its rules.

If this is the state of the GPT Store today, monetization threatens to open a whole new can of worms. OpenAI has said GPT creators will eventually be able to “earn money based on how many people are using [their] GPTs,” and perhaps even charge for individual GPTs. What will Disney or the Tolkien Estate do when the makers of unsanctioned Marvel- or Lord of the Rings-themed GPTs start turning a meaningful profit?

OpenAI’s motivation for the GPT Store is clear. As my colleague Devin Coldewey has written, Apple’s App Store model has proven enormously lucrative, and OpenAI is quite simply trying to replicate it. GPTs are hosted and built on OpenAI platforms, where they’re also promoted and evaluated. And as of a few weeks ago, ChatGPT Plus users can invoke GPTs directly from the ChatGPT interface, an added incentive to subscribe.

But the GPT Store is running into the same early problems as many of the largest digital marketplaces for apps, goods, and services before it. A recent piece in The Information reported that GPT Store developers are struggling to attract users, in part because of the store’s limited back-end analytics and a subpar onboarding experience.

Given all of OpenAI’s talk of curation and the importance of safeguards, one might have assumed it would take pains to avoid these obvious pitfalls. Apparently not. The GPT Store is a mess, and if something doesn’t change soon, it may well stay that way.

Author: Juliet P.