What does GPT-3 “know” about me?

That’s not surprising – Mat has been online for a very long time, which means he has a bigger online footprint than I do. It could also be because he is based in the US, and most large language models are heavily US-focused. The United States does not have a federal data protection law. California, where Mat lives, does have one, but it didn’t take effect until 2020.

Mat’s claim to fame, according to GPT-3 and BlenderBot, is his “epic hack,” which he wrote about in an article for Wired back in 2012. Exploiting security flaws in Apple and Amazon systems, hackers took over and deleted Mat’s entire digital life. [Editor’s note: He did not hack the accounts of Barack Obama and Bill Gates.]

But then it gets creepier. With a little prodding, GPT-3 tells me that Mat has a wife and two young daughters (correct, apart from the names) and lives in San Francisco (correct). It also told me it wasn’t sure whether Mat has a dog: “From what we can see on social media, it doesn’t appear that Mat Honan has any pets. He has tweeted about his love of dogs in the past, but he doesn’t seem to have any of his own.” (Incorrect.)

The system also gave me his work address, a phone number (incorrect), a credit card number (also incorrect), a random phone number with an area code in Cambridge, Massachusetts (where MIT Technology Review is based), and the address of a building next to the local Social Security Administration office in San Francisco.

GPT-3 compiled its information on Mat from several sources, according to an OpenAI spokesperson. Mat’s connection to San Francisco appears in his Twitter and LinkedIn profiles, which show up on the first page of Google results for his name. His new job at MIT Technology Review was widely publicized and tweeted about. Mat’s hack went viral on social media, and he gave media interviews about it.

As for the other, more personal information, GPT-3 is most likely “hallucinating.”

GPT-3 predicts the next string of words based on the text input the user provides. Occasionally the model produces factually inaccurate information because it is trying to generate plausible text from statistical patterns in its training data and the user-provided context — this is generally known as a “hallucination,” the OpenAI spokesperson says.
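To see why “predicting the next word from statistical patterns” can produce fluent text that is confidently wrong, here is a deliberately tiny sketch. It is not GPT-3’s actual architecture (GPT-3 is a large transformer over subword tokens); this is just a bigram counter over a made-up corpus, and all the names and sentences in it are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical toy "training data" — three short sentences.
corpus = (
    "mat honan lives in san francisco . "
    "mat honan writes for wired . "
    "mat honan lives in san francisco ."
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common next word, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("in"))     # "san" — the only word ever seen after "in"
print(predict_next("honan"))  # "lives" — seen twice, vs. "writes" once
```

The model has no notion of what is true; it only knows what usually comes next. Scale that idea up by many orders of magnitude and you get text that sounds plausible — including the plausible-but-wrong phone numbers and addresses above.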

I asked Mat what he made of it all. “Several of the answers GPT-3 generated weren’t quite right. (I’ve never hacked Obama or Bill Gates!),” he said. “But most of them are pretty close, and some are spot-on. It’s a bit worrying. But I’m reassured that the AI doesn’t know where I live, so I’m not in immediate danger of Skynet sending a Terminator to knock me off. I guess we can save that for tomorrow.”
