Can Large Language Models Produce More Accurate Analyst Forecasts?

Using textual information from a complete history of regular quarterly and annual (10-Q and 10-K) filings by U.S. corporations, we train machine learning algorithms and large language models (LLMs) to predict future earnings surprises.
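As a purely illustrative sketch of the prediction task, the snippet below builds a firm-quarter panel pairing each filing's MD&A text with an earnings-surprise label; the file name, column names, and surprise definition are hypothetical placeholders, not the paper's actual data or convention.

```python
import pandas as pd

# Hypothetical firm-quarter panel: MD&A text extracted from 10-K/10-Q filings,
# actual EPS, the pre-announcement consensus forecast, and the share price.
panel = pd.read_csv("filings_panel.csv")
# expected columns: ticker, quarter, mdna_text, eps_actual, eps_consensus, price

# Standardized unexpected earnings (one common convention; the paper's exact
# definition may differ), plus a binary "beat the consensus" label.
panel["surprise"] = (panel["eps_actual"] - panel["eps_consensus"]) / panel["price"]
panel["beat"] = (panel["surprise"] > 0).astype(int)
```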

First, the length of the MD&A section on its own is negatively associated with future earnings surprises and firm returns in the cross-section.
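This kind of association can be illustrated with a simple cross-sectional regression on the panel built above; the specification below is a hedged sketch, not the paper's regression.

```python
import numpy as np
import statsmodels.api as sm

# Log word count of the MD&A section as the sole explanatory variable.
panel["log_mdna_len"] = np.log1p(panel["mdna_text"].str.split().str.len())

X = sm.add_constant(panel["log_mdna_len"])
ols = sm.OLS(panel["surprise"], X).fit(
    cov_type="cluster", cov_kwds={"groups": panel["ticker"]}  # firm-clustered SEs
)
print(ols.params)  # a negative slope on log_mdna_len is consistent with the stated finding
```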

Second, neither sentiment-based nor classic NLP approaches are able to “learn” from past managerial discussions to forecast future earnings.
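A sketch of the kind of “classic NLP” baseline this refers to: bag-of-words (TF-IDF) features of the MD&A text feeding a linear classifier, evaluated strictly out of sample. The paper's actual baselines (e.g., dictionary-based sentiment measures) may differ; the panel and label come from the earlier sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Time-ordered split so the baseline is judged out of sample.
X_train, X_test, y_train, y_test = train_test_split(
    panel["mdna_text"], panel["beat"], test_size=0.2, shuffle=False
)

# Classic NLP baseline: bag-of-words features plus a linear classifier.
bow_model = make_pipeline(
    TfidfVectorizer(max_features=20_000, stop_words="english"),
    LogisticRegression(max_iter=1000),
)
bow_model.fit(X_train, y_train)
print("TF-IDF baseline accuracy:", bow_model.score(X_test, y_test))
```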

Third, only “finance-trained” LLMs have the capacity to “understand” the context of previous discussions and use it to predict both positive and negative earnings surprises, as well as future firm returns.
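For the LLM-based approach, a minimal fine-tuning sketch: a pretrained transformer encoder with a classification head trained on the surprise labels. The generic checkpoint, truncation to 512 tokens, and hyperparameters are stand-ins; the finance-trained models in the paper would use domain-specific checkpoints and configurations not shown here.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Generic encoder as a stand-in; a finance-domain checkpoint would replace it.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    # MD&A sections exceed the 512-token window; truncating is a simplification.
    return tokenizer(batch["mdna_text"], truncation=True,
                     padding="max_length", max_length=512)

train_ds = Dataset.from_pandas(
    panel[["mdna_text", "beat"]].rename(columns={"beat": "label"})
).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="surprise_clf", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train_ds,
)
trainer.train()
```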

Our evidence indicates that publicly disclosed corporate filings contain significant informational content, somewhat hidden in the complexity of their presentation, and that more recent AI models are better able than humans to identify it.