New technology often moves fast. Many investment firms and analysts are already experimenting with artificial intelligence and tools such as ChatGPT. But it typically takes much longer to understand where technology is changing behaviour in deeper and more subtle ways: in how we think about the data we see. Now, a study of earnings calls carried out by a group of Boston academics offers clear evidence of behavioural change. Exposure to the flood of AI output, much of it produced without real thought, is teaching us to recognise the superficial: banality dressed as insight.
Even before the new tools, frustration with customer service chatbots was teaching everyone to tell the difference between human behaviour and an imperfect simulation. Online fraud and fake personas often fall down on a tiny detail that our gut feelings pick up. In everyday experience, it is usually possible to spot the absence of humanity. But company reporting is a tougher challenge, with highly incentivised executives and tightly scripted communication. Amid a sea of impressive but bland verbiage, it is tricky to spot any nuggets of information.
Company annual reports are carefully crafted. Despite regulation on narrative reporting and forward-looking statements, words can be constructed alongside innovative metrics in ways that mislead. AI has a role to play in navigating this, searching for keywords and subtle differences from previous updates. But it is in the Q&A at company meetings and on conference calls that gut feelings can help more. Thanks to AI, analysts are now familiar with what content-poor but well-presented material looks like: re-packaged public information that adds nothing new feels meaningless no matter how clever it looks. Now, when analysts hear that sort of presentation or answer from a company, they quickly see it for what it is. If a follow-up question does not draw out a meaningful response, the message is that there is something to hide. AI has already trained analysts to spot the superficial and sharpen their challenge to management.
The recent ‘Executives vs Chatbots’ study on earnings calls looked at the difference between the actual answers provided by senior executives and those generated by ChatGPT and other AI models for the same questions. The comparison was illuminating, showing what new information emerged and how that changed forecasts. For the cynics, it is perhaps unsurprising that the more similar an executive’s answer was to a generic AI response, the more likely the market was to conclude it was bad news. A management response that looks like boilerplate text, conveying no additional information, may represent an attempt to hide bad news. When executives face potential disappointment, their language becomes more formal and less spontaneous. The study seems to confirm that human intuition is doing a good job of spotting a lack of originality, helped by a new familiarity with waffle.
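The comparison at the heart of the study can be sketched in miniature. A minimal illustration, assuming only a crude bag-of-words cosine similarity (the researchers' actual method will have been far more sophisticated, and the sample answers here are invented for illustration): a boilerplate executive answer scores closer to a generic AI response than a specific, information-rich one does.

```python
import math
import re
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity: 0 means no shared words, 1 means identical wording."""
    counts_a = Counter(re.findall(r"[a-z']+", text_a.lower()))
    counts_b = Counter(re.findall(r"[a-z']+", text_b.lower()))
    dot = sum(counts_a[w] * counts_b[w] for w in counts_a.keys() & counts_b.keys())
    norm = (math.sqrt(sum(v * v for v in counts_a.values()))
            * math.sqrt(sum(v * v for v in counts_b.values())))
    return dot / norm if norm else 0.0

# Hypothetical answers, invented for illustration only.
boilerplate = "We remain focused on delivering long-term value for our shareholders."
generic_ai = "The company remains focused on delivering long-term value for shareholders."
specific = "Gross margin fell 40 basis points on freight costs; we expect recovery by Q3."

# The content-free answer resembles the generic AI text far more closely.
print(cosine_similarity(boilerplate, generic_ai))  # high overlap
print(cosine_similarity(specific, generic_ai))     # low overlap
```

In the study's terms, a high score against the generic AI response is the red flag: the answer could have been written without knowing anything the market does not already know.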
It is an interesting experiment, when preparing an email with AI, to look at the automatically generated response. Often it looks polite but is essentially unhelpful: just the question reformatted. This encourages a clearer phrasing of the question to ensure a meaningful response. The technology has already made humans raise their game. Much has been written on the impact of AI on the process of investment management, but in time behaviour will also change. The immediate result might be many more instant experts, emboldened by AI to pontificate well beyond their skill set. This may already be familiar to clients of fund managers. But in time all will get better at recognising reporting that lacks substance. If humans are to add value, they must ensure their communication shows judgement and adds insight.