Fear Of Judgment Deters Women From AI Tools

Researchers conducted a three-part study within a leading global technology company to investigate why adoption of a proprietary AI coding assistant remained low and unequal despite universal access, integration into workflows, and minimal training friction (Competence Penalty and Technology Adoption, 2025).

Unequal adoption despite equal access

The study analyzed digital trace data from 28,698 full-time software engineers between January and December 2024. The proprietary AI assistant, functionally similar to GitHub Copilot, was pre-installed on all company-issued devices, integrated into standard coding workflows, and promoted company-wide. It offered auto-generation and auto-completion of code with one-click activation and had been shown internally to boost productivity by up to 30% (Study 1).

After one year, only 41% of engineers had used the tool at least once. In the first month, adoption was 9% among male engineers and 5% among female engineers, a gap of 4 percentage points (χ²(1)=11.34, p<.001, OR=1.07). After 12 months, adoption had risen to 43% for males and 31% for females, widening the gap to 12 percentage points (χ²(1)=172.50, p<.001, OR=1.29). Age differences were smaller, with a gap of 3 to 5 percentage points between mature-age (≥32) and younger engineers (month 12: χ²(1)=111.17, p<.001, OR=1.23).
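The gender-gap statistics above come from χ² tests on 2×2 adoption-by-group contingency tables. A minimal sketch of that test, using hypothetical group sizes (the study's actual counts are not given in this summary, so the numbers below will not reproduce the paper's exact χ² or OR values):

```python
from scipy.stats import chi2_contingency

# Hypothetical 12-month counts chosen to match the reported adoption
# rates (43% male, 31% female); real group sizes are not reported here.
#               adopted  not adopted
table = [[8600, 11400],   # male engineers
         [2696,  6002]]   # female engineers

chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio: odds of adoption for males divided by odds for females
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3g}, OR = {odds_ratio:.2f}")
```

With groups this large, even a modest rate difference yields a highly significant χ², which is why the paper's month-12 statistics are so extreme.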

Among adopters (n=11,897), female engineers sent fewer prompts (median=222) than males (median=327, p<.001, r=0.03) and copied fewer lines of AI-generated code (median=15 vs. 34, p<.001, r=0.07). Mature-age adopters sent fewer prompts (median=240) than younger adopters (median=424, p<.001, r=0.09) and copied fewer lines (median=24 vs. 41, p<.001, r=0.08).

Experimental evidence of competence penalty

Study 2 tested whether using AI for coding incurs a perceived competence penalty, particularly for groups already subject to competence scrutiny. A pre-registered experiment used a 2×2 design: purported AI usage (yes/no) × competence scrutiny (female engineers as high-scrutiny, male engineers as low-scrutiny). Participants (n=1,026, 513 female, median age=31) reviewed identical Python code snippets attributed to either a male or female engineer, with or without AI assistance.

Purported AI usage reduced competence ratings from a mean of 6.80 (no AI) to 6.18 (AI) on an 11-point scale (F(1,1022)=25.51, p<.001, d=-0.31). Female engineers were rated less competent than males overall (M=6.21 vs. 6.77, F(1,1022)=20.40, p<.001, d=-0.28), with a larger penalty for females using AI (drop of 0.83 points, d=-0.42) compared to males (drop of 0.41 points, d=-0.21).
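The d values reported here are Cohen's d: the difference between two group means scaled by their pooled standard deviation. A minimal sketch of that computation, with hypothetical ratings standing in for the study's raw data:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (Bessel-corrected)
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical 11-point competence ratings (illustrative only)
ai_condition = [5, 6, 6, 7, 6, 5, 7, 6]
no_ai_condition = [7, 7, 6, 8, 7, 6, 8, 7]
print(round(cohens_d(ai_condition, no_ai_condition), 2))
```

By convention, |d| near 0.2 is a small effect and near 0.5 a medium one, which frames the reported d=-0.42 penalty for female engineers using AI as a small-to-medium effect.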

AI usage did not significantly affect perceived work quality (M=7.02 vs. 6.90, p=.265). However, participants attributed less contribution to female engineers in AI-assisted conditions (mean=35%) compared to males (40%, p=.010, d=-0.23), and controlling for contribution did not eliminate the gender difference in competence ratings.

Non-adopters penalized AI usage more than adopters (non-adopters: drop of 0.90 points, p<.001; adopters: no significant change, p=.826). Among male non-adopters, the penalty for female engineers using AI was especially severe (drop of 1.79 points, p<.001, d=-0.89).

Anticipated penalty deters adoption

Study 3 surveyed 919 engineers (439 female, median age=30) to measure anticipated competence penalty alongside other adoption factors such as perceived learning cost and intrinsic task motivation. Agreement with the statement “If my manager knows that I am using AI for coding, it will decrease their evaluation of my coding ability” was rated on a 7-point scale.

Higher anticipated competence penalty predicted lower AI adoption (B=-0.27, SE=0.06, p<.001, OR=0.76), including when controlling for other perceptions (B=-0.26, SE=0.08, p=.001, OR=0.77). Adoption was 61% among those anticipating minimal penalty versus 33% among those with high anticipated penalty (>4 on the scale).
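The odds ratios here follow directly from the logistic-regression coefficients: a coefficient B is a change in log-odds, so OR = exp(B). A quick check against the reported values:

```python
import math

# Exponentiating a logistic-regression coefficient (log-odds change)
# yields the odds ratio for a one-unit increase in the predictor.
for b in (-0.27, -0.26):
    print(f"B = {b}: OR = {math.exp(b):.2f}")
```

Both computations recover the reported OR=0.76 and OR=0.77, i.e. each additional scale point of anticipated penalty cuts the odds of adoption by roughly a quarter.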

Female engineers anticipated more penalty than males (M=1.96 vs. 1.75, p=.009, d=0.17), and mature-age engineers anticipated more than younger ones (M=1.96 vs. 1.77, p=.021, d=0.15). Mediation analyses indicated anticipated penalty partially explained both gender and age adoption gaps (95% CI: gender [-0.11, -0.01], age [-0.12, -0.01]).

The findings identify a paradox: technologies designed to enhance performance can become liabilities for users if they trigger competence penalties. Because these penalties are greater for groups under existing competence scrutiny, they can reinforce or widen workplace inequalities. Even with equal access, adoption rates remain unequal, and identical work output may still receive lower competence evaluations when associated with AI usage by scrutinized groups.

Mandatory disclosure of AI usage, while promoting transparency, may discourage adoption or lead to unequal penalties across demographic groups. Addressing competence penalty requires shifting evaluation criteria toward actual work quality and framing AI adoption as enhancing — not replacing — human capability.
