Large language models (LLMs) are being adopted at a breakneck pace, and children are at the forefront: a majority of kids ages 12-18 use ChatGPT, while their parents lag behind in both adoption and awareness. Despite this rapid uptake, we do not know how children use LLMs, how they conceptualize them, or how their intellectual character and beliefs are shaped by them. Past work suggests that LLMs' confident, agentive outputs diminish curiosity in users, especially children, leaving them vulnerable to adopting fabrications as established beliefs.
We address this by constructing the first platform for studying how children use LLMs, how LLMs influence children's character, and how LLMs could be redesigned to promote character development and accurate beliefs. Deliverables include a large cross-sectional and longitudinal dataset on children's LLM use and its impact on their character; best-practice design and guidance documents, developed in collaboration with Common Sense Media, for developers and stakeholders in technology and education; and the research platform itself. The project's broader impact is to enable a new generation of LLMs, along with science-informed policies and educational practices, that support children's character development, foster curiosity, and promote accurate belief adoption.