We Use AI Billions Of Times A Day But We Still Don’t Trust It

According to Axios, ChatGPT, launched in late 2022, processes approximately 2.5 billion user queries daily, roughly 13% of them originating in the United States, and is driving a rapid transformation in how people engage with the internet. It was the most downloaded application worldwide in April, and in June its App Store downloads surpassed those of TikTok, Facebook, Instagram, and X combined.

This rapid adoption positions ChatGPT as an increasingly central part of global internet use. Google, which handles approximately 5 trillion queries annually, has responded by integrating generative AI features into search, most notably AI Overviews. Perplexity, a search startup built around generative AI, introduced its own web browser, Comet, to compete with established browsers such as Chrome and Safari. Google’s AI Mode, launched in May, embeds generative AI throughout the search experience and functions much like Perplexity’s core product.

Despite the growing use of generative AI in online search, user trust remains a significant concern. A survey of more than 1,100 Americans found that only 8.5% “always trust” information from Google’s AI Overviews, while roughly 21% expressed no trust at all. The same survey found that over 40% of respondents rarely or never click the accompanying web links in AI Overviews to check the source material. In other words, users frequently encounter generative AI by default in their daily searches yet place little confidence in the accuracy of the information it returns.

Further research indicates that a majority of people trust ChatGPT more than human experts in certain domains, though that trust drops when the chatbot addresses sensitive topics such as legal or medical advice. Perceived trustworthiness is also shaped by communication tone: sycophantic AI-generated responses are often judged less trustworthy than more neutral or less explicitly flattering ones. And although large language models are typically tuned for factual accuracy, they fundamentally predict plausible text rather than verified facts, so they can “hallucinate,” presenting inaccurate information as truth.

Leading AI developers, including OpenAI and Anthropic, have launched interpretability programs to understand the complex internal processes of their AI systems. These efforts aim both to improve performance and to give users firmer grounds for trust.

