A recent survey shows that nearly half of US citizens know what ChatGPT is, and a significant number of people already use it in their everyday lives. Read on to find out how inclined Americans are to trust the AI chatbot with their privacy and how much they doubt the correctness of ChatGPT's answers.
Contents
What is ChatGPT, and why is it so popular?
Is ChatGPT a reliable and safe tool to use?
NordVPN survey: Usage and trust of ChatGPT in the US
Using ChatGPT
ChatGPT errors noticed
Trusting ChatGPT
Methodology
What is ChatGPT, and why is it so popular?
ChatGPT is an artificial intelligence (AI) chatbot that can hold human-like conversations with people. It can accurately understand the intent of a question, grasp its context, and provide detailed answers, drawing information from a large pool of data. ChatGPT can learn from its own mistakes and continuously improves its performance through interactions with humans. ChatGPT gained widespread popularity because its creators, OpenAI, made one of its versions, ChatGPT 3.5, freely accessible to the public. The bot can be used for various purposes, including but not limited to coding in different programming languages, writing various types of texts, and even moderating discussions about philosophy or the meaning of life.
Is ChatGPT a reliable and safe tool to use?
ChatGPT can be a valuable tool in both personal and professional environments. However, its reliability depends on the version of the bot you're using. For instance, the free ChatGPT 3.5 version is not considered trustworthy enough for users to rely entirely on its answers. ChatGPT 3.5 works with data produced before September 2021 and has no knowledge of changes in the world since then. On the other hand, a more advanced version of the chatbot, ChatGPT 4, is available with a paid subscription and can provide information generated before August 2022. Nevertheless, users of both chatbot versions should be aware that the bot may draw data from unreliable sources and should always verify the information before using it.

ChatGPT doesn't pose significant risks online and is subject mainly to the threat of data breaches — a risk common to all online platforms. Though there have been concerns about its privacy policy, with some countries even going as far as banning the bot, OpenAI has since adjusted its terms, introducing an age limit and the ability to choose whether your answers are stored in ChatGPT's memory. Some cybersecurity companies have also started to provide solutions that mitigate possible ChatGPT security risks.

It's important to note that the chatbot learns from human responses. So if you accidentally leak your own or your company's sensitive data, ChatGPT can later use it, making you susceptible to profiling.
NordVPN survey: Usage and trust of ChatGPT in the US
A recent NordVPN survey revealed that almost half (49%) of 18-65-year-old Americans know what ChatGPT is. Let's look at how often people in the US use the AI chatbot, how inclined they are to trust it with their privacy, and how much they rely on its answers.
Using ChatGPT
Since the launch of ChatGPT in November 2022, slightly more than a quarter of Americans (27%) have started using the chatbot or at least tried it out. Among the Americans who've heard of ChatGPT, 43% claim to use it regularly, 13% have only used the chatbot a few times, and 44% don't use ChatGPT at all. A third (33%) of people who use ChatGPT regularly say they use it at least several times a week, and three out of five users (64%) claim to use it at least once a week.
ChatGPT errors noticed
The majority of ChatGPT users in America notice that the chatbot is inclined to make mistakes. The research team observed that the more often people use ChatGPT, the more often they notice its errors. Two out of five users (40%) often notice mistakes made by ChatGPT, and 37% report spotting them sometimes. Around 20% of American users claim to notice ChatGPT's mistakes rarely, and 4% say they've never experienced a ChatGPT error. Despite these mistakes, seven out of ten (72%) ChatGPT users intend to continue using it, and 28% say they may continue using the chatbot. Only 0.5% of users stated they'll stop using ChatGPT. Among people who often notice mistakes, the share who still intend to continue using ChatGPT is even higher, reaching 82%. The chatbot's mistakes don't seem to be a significant enough reason for frequent users to stop using ChatGPT, because 86% of them trust ChatGPT's factual correctness anyway.
Trusting ChatGPT
When asked about privacy protection, two-thirds of respondents (67%) using ChatGPT claim to trust ChatGPT and its platform with their personal data: 31% claim to trust it very much, and 36% only somewhat trust OpenAI's bot. Of the people using ChatGPT, 23% have no opinion on the subject, and only 10% of users claim to be concerned about privacy issues while using ChatGPT. Regarding the factual correctness of the content generated by ChatGPT, 75% of American users trust the bot's answers: 28% trust them very much, and 48% trust them somewhat. At the same time, 19% have no opinion on the subject, and only 6% claim not to trust ChatGPT's factual correctness. Overall, 61% of American users trust both ChatGPT's factual correctness and its data privacy protection. Researchers also noticed that people who use ChatGPT less often are inclined to have more concerns about the bot's privacy issues and the correctness of its content.
Methodology
This research was commissioned by NordVPN and carried out by Cint between June 6 and 11, 2023. A total of 1,011 respondents aged 18 to 65 were surveyed, with quotas placed on gender, age, and place of residence to achieve a nationally representative sample of internet users.