Really interesting paper here. Experimental evidence that:
People are less able to accurately assess their performance when using AI.
Using Large Language Models levels out the Dunning–Kruger effect (the cognitive bias whereby people with low ability in a specific area tend to overestimate that ability, while high performers tend to underestimate theirs).
Both higher AI literacy and higher confidence correlate with lower self-assessment accuracy.
Thanks to the ever-useful PsyBlog for bringing this to my attention.