Why Some Still Don’t Use AI Daily: BYU Study Surveys the Holdouts
anthropic claude deepmind gemini google openai
Source: Mastodon
A new study from Brigham Young University has quantified why a sizable minority still steers clear of generative‑AI tools in daily routines. Researchers Jacob Steffen and Taylor Wells surveyed 2,400 adults across North America and found that 27 percent of respondents rarely or never engage with large‑language‑model (LLM) services such as ChatGPT, Claude or Gemini. Trust‑related concerns topped the list: 68 percent of non‑users said they doubted the accuracy of AI‑generated answers, while 54 percent worried about hidden biases. Practical obstacles followed, with 42 percent citing a lack of clear use cases and 31 percent feeling overwhelmed by the sheer number of available platforms.
The findings matter because generative AI has moved from novelty to backbone of many workplaces, education systems and consumer apps. Adobe’s 2025 consumer survey reported that 73 percent of UK users now rely on GenAI for personal tasks, and Harvard Business Review notes a surge in “Custom GPTs” tailored for niche workflows. If a quarter of the population remains disengaged, the industry faces a credibility gap that could slow adoption, limit data diversity for model training, and invite regulatory scrutiny over transparency and accountability.
What to watch next is how the major AI players respond. Anthropic’s Claude team has already announced a “trust‑by‑design” roadmap that will embed provenance metadata in every response, while OpenAI is piloting a real‑time fact‑checking layer for ChatGPT. Analysts expect that measurable improvements in reliability and clearer privacy guarantees will be the decisive factors in winning over the reluctant segment. Follow‑up studies slated for late 2026 will track whether these interventions shift the trust metric and shrink the “non‑user” cohort.