What implications do AI and ChatGPT have for reputation management and crisis communications?
“AI and ChatGPT can provide organizations with valuable tools for reputation management and crisis communications, enabling faster response times, real-time monitoring, automated responses, enhanced communication planning, and more.”
In case the American spelling didn’t give it away, those aren’t our words, but the opening line of an in-depth answer that ChatGPT gave to the question above.
Here, the humanoids at Alder look at three aspects of ChatGPT’s response that caught our attention and assess them with our own sentient expertise.
Automated response at a time of crisis
ChatGPT is of the opinion that “AI-powered chatbots can be used to provide automated responses to frequently asked questions during a crisis”. This would undoubtedly save time for PR and communications teams, but the risk is far greater than the reward.
During a crisis, even the most basic question must be answered with precision, sincerity and humanity, and that answer needs to align with an underlying strategy. To place your trust in an automated message, which may misjudge the required tone, is a highly risky move during a critical time; the resulting statement could forever be attributed to your organisation. At the moment, any time- and resource-saving gains of using ChatGPT to answer questions in a time of crisis are far outweighed by the reputational risk of using a technology that is still in its teething phase.
Legal questions
Integral to crisis communications is the navigation of legal matters, including confidentiality, liability and libel. ChatGPT cites its ability to provide “consistent messaging” when handling a crisis. Although consistency is important, the nuances associated with even the shortest statement are currently lost on ChatGPT and other AI applications, and indeed on many human consultants who aren’t crisis communications specialists. Statements require intense thought; a misplaced ‘sorry’ or use of the wrong pronoun can have major legal or reputational consequences for a person or organisation.
ChatGPT’s failure to grasp the legal consequences of statements is among its most fundamental flaws when applied in a crisis situation.
“Accuracy issues”
ChatGPT acknowledges that the use of AI in reputation management and crisis communications may present some significant “accuracy issues”. The fact it is able to make this humble admission is notable in a world where technology is frequently mischaracterised as an infallible panacea.
An Australian mayor found himself at the centre of a furore surrounding the inaccuracies of AI technology earlier this month. It is believed that Mayor Brian Hood will start a legal bid against ChatGPT’s owner, OpenAI, after he was wrongly described by the tool as having served time in prison for false accounting when, in fact, he was the one who blew the whistle on the illegal activity.
Hood’s case is just one example in an emerging series of questions around the veracity of ChatGPT’s content and the legal implications (and unknowns) of a computer program typing out inaccurate statements about a person or organisation.
Clearly, repeating ‘fake news’ or misinformation when constructing a crisis response could have calamitous consequences for a person or organisation, not to mention any PR or communications consultancy involved.
In conclusion…
While the sheer speed and scale at which ChatGPT absorbs and produces information presents an exciting prospect for communications, using it to provide responses and background information when tackling a crisis poses serious legal and reputational risks.
That doesn’t mean Alder is ruling out AI’s future application in this field entirely, given that the technology will continue to develop apace. Only time will tell whether AI technology will provide communications and reputation management professionals with a severely erroneous aid, a useful if limited assistant or a revolutionary new player, or potentially a combination of all three.