Europcar believes ChatGPT was used to fabricate the data behind a phony breach claim.

A user on a popular hacking forum posted alleged Europcar data on Sunday, claiming to have stolen the personal data of more than 48 million Europcar customers and saying they were “listening to offers” to sell it.

According to Europcar, the data appears to be fake, possibly fabricated with ChatGPT.

After being alerted to the forum advertisement by a threat intelligence service, Europcar spokesperson Vincent Vevaud told Eltrys that the company had investigated the alleged breach.

“Thoroughly checking the data contained in the sample, we are confident that this advertisement is false,” Vevaud wrote in an email.

He added that the record count is completely inaccurate and inconsistent with Europcar’s own.

The sample data, he said, was presumably generated with ChatGPT: the addresses don’t exist, the ZIP codes don’t match, first and last names don’t match the email addresses, and the email addresses use strange TLDs.

Most importantly, he said, Europcar’s customer database does not include these email addresses.
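For illustration only, here is a minimal sketch in Python of the kind of surface-level consistency checks Europcar describes. Every name in it (the helper functions, the COMMON_TLDS set, the sample record) is hypothetical, and real address or ZIP validation would require external postal or geocoding data.

```python
import re

# Hypothetical sketch of heuristic checks like those Europcar describes.
# Real validation of addresses and ZIP codes needs external reference data.

COMMON_TLDS = {"com", "net", "org", "fr", "de", "it", "es", "co.uk"}  # assumption

def name_matches_email(first: str, last: str, email: str) -> bool:
    """Rough check that the email local part resembles the person's name."""
    local = email.split("@", 1)[0].lower()
    return first.lower() in local or last.lower() in local

def tld_looks_ordinary(email: str) -> bool:
    """Flag unusual top-level domains in the email address."""
    domain = email.rsplit("@", 1)[-1].lower()
    return any(domain.endswith("." + tld) for tld in COMMON_TLDS)

def zip_matches_country(zip_code: str, country: str) -> bool:
    """Very rough format check, e.g. five digits for a French postal code."""
    patterns = {"US": r"^\d{5}(-\d{4})?$", "FR": r"^\d{5}$", "DE": r"^\d{5}$"}
    pattern = patterns.get(country.upper())
    return bool(pattern and re.match(pattern, zip_code))

# A fabricated-looking sample record, purely for demonstration.
record = {"first": "Jane", "last": "Doe", "email": "qxv.talbot@example.xyzzy",
          "zip": "ABC123", "country": "FR"}

flags = {
    "name/email mismatch": not name_matches_email(record["first"], record["last"], record["email"]),
    "unusual TLD": not tld_looks_ordinary(record["email"]),
    "zip/country mismatch": not zip_matches_country(record["zip"], record["country"]),
}
print(flags)  # all three flags are True for this record
```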

In an online conversation, the forum user told Eltrys that “the data is real,” but offered no evidence.

The user said in the forum that the data includes usernames, passwords, full names, home addresses, ZIP codes, birth dates, passport numbers, and driving license numbers.

Europcar, Eltrys, and Troy Hunt, who runs the breach-notification site Have I Been Pwned, all believe the data sample posted online is fraudulent.

“Starting with data authenticity, several things don’t add up. The most evident is that email addresses and usernames don’t match individuals’ names,” Hunt remarked on X (formerly Twitter).

Hunt noted that many of the purported home addresses are bogus and “just don’t exist.”

The forum poster was asked to respond to Hunt’s findings but did not.

Hunt is also doubtful that ChatGPT produced the data.

“We’ve had fake breaches forever because individuals seek exposure, fame, or money. Whatever, it doesn’t matter because none of it makes it ‘AI,’” Hunt wrote.

Vevaud did not immediately answer questions about how Europcar determined that ChatGPT generated the data.

After Eltrys asked ChatGPT to produce “a dataset of fake stolen personal data,” the bot said it could not help “in creating or promoting any illegal or unethical activities.”

It is plausible that hackers could use ChatGPT or a similar text-generating AI tool to produce large amounts of bogus data, but that is difficult to prove.

Author: Eltrys Team
