AI Made Executives Worse At Stock Picking

More than 300 managers and executives took part in an HBR study, which found that those who consulted ChatGPT made less accurate stock predictions and showed greater optimism and overconfidence than those who discussed their forecasts with peers.

The experiment design

The experiment began by showing participants a recent stock price chart for Nvidia (NVDA). Nvidia was chosen because of its sharp share-price rise, driven by its central role in powering AI technologies. Each participant was first asked to make a private, individual forecast of Nvidia’s stock price one month into the future.

Following their initial forecasts, participants were randomly divided into two distinct groups:

  • The control group: These executives conversed in small groups, relying solely on human interaction to share thoughts and information, emulating traditional decision-making processes.
  • The treatment group: Executives in this group consulted ChatGPT about Nvidia’s stock but were explicitly instructed not to communicate with any peers, representing an AI-assisted decision process.

After their respective consultation periods, all participants submitted a revised forecast for Nvidia’s stock price one month ahead.

Key findings: Optimism, inaccuracy, and overconfidence

The study found that AI consultation led to more optimistic forecasts. Although both groups started with similar baseline expectations, the ChatGPT group raised their one-month price estimates by approximately $5.11 on average after consulting the chatbot.

In contrast, peer discussions resulted in more conservative forecasts, with the group lowering their price estimates by approximately $2.20 on average.

A critical finding was that AI consultation hurt prediction accuracy. When the forecasts were compared against Nvidia’s actual price a month later, the analysis revealed:

  • Those who had consulted ChatGPT made predictions that were less accurate after their consultation than before.
  • Executives who engaged in peer discussions made significantly more accurate predictions following their consultation.

AI consultation also bred overconfidence. Consulting ChatGPT made participants significantly more likely to offer pinpoint predictions (forecasts specified down to the decimal), a common marker of overconfidence. The peer discussion group, by contrast, became less likely to use pinpoint predictions, indicating that their overconfidence decreased.

Why the disparity? Five key factors

The disparity in outcomes can be attributed to five key factors:

  • Extrapolation bias and “trend riding”: ChatGPT, relying on historical data of Nvidia’s rising stock, likely encouraged an extrapolation bias, assuming past upward trends would continue without real-time context.
  • Authority bias and detail overload: Many executives were impressed by the AI’s detailed, confident responses, leading to an “AI authority bias” in which they gave more weight to the AI’s suggestions than to their own judgment.
  • Absence of emotion in AI: The AI lacks human emotions like wariness or a “fear of heights” when observing a soaring price chart. This lack of a cautious “gut-check” allowed bolder, unchallenged forecasts.
  • Peer calibration and social dynamics: Peer discussions introduced diverse viewpoints and a “don’t be the sucker” mentality, where individuals moderated extreme views to avoid appearing naive, leading to a more conservative consensus.
  • The illusion of knowledge: Access to a sophisticated tool like ChatGPT gave participants an “illusion of knowing everything,” making them more susceptible to overconfidence.

Guidance for leaders and organizations

The study’s findings offer important guidance for integrating AI tools into decision-making:

  • Retain and leverage human discussion: An optimal approach may involve combining AI input with human dialogue. Use an AI for a preliminary assessment, then convene a team to debate its findings.
  • Critical thinking is fundamental: Treat AI as a starting point for inquiry, not the definitive answer. Question its data sources and potential blind spots.
  • Train teams and establish guidelines: Make clear that AI advice can foster overconfidence, and build a standard practice of combining AI and human input so that no single source dominates a decision.

The study acknowledges its limitations, including its controlled setting, focus on a single stock (Nvidia), and the use of a ChatGPT model without real-time market data. These factors suggest that results might vary in different contexts or with different AI tools.

