Interview Large language models stumble when trying to sway buyers with moral arguments, according to research from the SGH Warsaw School of Economics and Sakarya Business School.
"Our research suggests that people respond to an AI-generated message based on their knowledge of AI's possibilities and limitations," first author Wojciech Trzebiński told The Register, in reference to a recently published study that found moral arguments pushing buyers toward Fair Trade products were less convincing to those who did not believe in the "superiority" of artificial intelligence over humans.
"Such knowledge may be useful and valid. For example, people are likely to believe that machines (such as AI-enabled chatbots) are not capable of judging what is moral and immoral."
Trzebiński and colleagues carried out the study using an example of a real Fair Trade product and argued the case for why buyers should pick it over rival, less-fair goods. When the advice came from a human, it was generally well-received; switch the human out for a chatbot, though, and buyers were less convinced, thanks to what the researchers called an incongruity in "message-source fit" – in other words, that soulless stochastic parrots shouldn't concern themselves with matters of morality in the first place.
"When people are aware of the machine source of moral appeal, they are likely to activate their beliefs on AI, and such beliefs may diminish the persuasiveness of AI outputs that are considered as inappropriate to be produced by machines," Trzebiński told us. "Studies have revealed that pattern exists for various types of outputs, not only morally based advice, but also product recommendations related to pleasure, experience, or creative content.
"All those areas may be perceived as human-specific domains. Given that an AI agent may simulate humans, for example using informal tone and expressing empathy, people may be confused about the nature of the agent (AI or human) when it's not revealed. In such cases, people may be unsure how to react. AI systems are imperfect: they may hallucinate, and, without a human sense of morality, their moral advice may be misleading. So, I believe that people have the right to use their knowledge on AI and decide to what extent they should rely on AI."
That knowledge – or, rather, belief – surrounding artificial intelligence cuts both ways, though. The team found that a certain group was more likely to be swayed by the machine's moral arguments: the true believers in the AI revolution, who perceive AI as a fount of all knowledge and believe in the "superiority" of artificial over human intelligence.
Trzebiński still believes there's a place for AI in marketing, if used transparently and honestly. "I am optimistic about the value AI can provide," he told us. "Probably there will always be limitations, as people will remain the ones responsible for setting the task for an AI agent, and AI will use, directly or indirectly, human-generated content. However, taking the domain of morality as an example, AI may even outperform human sources, as it may be free from prejudices and easily embrace many different moral stances. So, I encourage marketers to use GenAI in their communication if they are transparent about the AI source of messaging.
"On the other hand, consumers (and people in general) should be aware that their reactions to AI outputs may depend on their AI perceptions. For example, when an individual views an AI agent as humanlike, their concerns about the agent's ability to speak about morality may disappear. In that case, the individual should reconsider how to react to such AI output. Maybe they should be more skeptical about it, realizing that it was generated by a machine even though the machine tries to resemble a human."
Less scrupulous marketers, however, may take a different message from the study: the ability to target efforts toward the true believers who are likely to take the statistical text outputs an LLM generates at face value. "Marketers can focus on audiences more likely to perceive AI agents as humanlike and believe in AI's superiority over humans," the team wrote in the paper's section on practical implications of the research, "to make such communication more effective. Marketers can, therefore, attempt to predict AI anthropomorphising tendencies and AI superiority beliefs within their target groups, e.g. using social media to analyse user characteristics."
Those on the consuming side of the table, meanwhile, are advised to check their bias and "carefully consider whether an AI product recommender is capable of formulating moral judgments that are appropriate for the consumer's morality, and discount positive impressions about such capability which may result from perceiving the AI agent as humanlike or AI as generally superior to humans."
The team's paper has been published in the Journal of Business Research under closed-access terms. ®