Today's article comes from the Sage Open journal. The authors are de Oliveira et al., from the University of Brasília, in Brazil. In this study, the authors devised an experiment to test whether an influencer could actually sway people's opinions.
DOI: 10.1177/21582440251369597
How much influence do "influencers" actually have?
Can they sway public opinion about a product? Can they get you to purchase something you would otherwise never consider buying? Can they affect how you think about and perceive a company, or fundamentally affect its financial position? Can they move the markets?
Or, are they essentially just spokespeople? No more persuasive than any other talking head, and only as influential as the script they happen to be reading?
Where's the line? Where does promotion end and influence start? And is there really a difference?
The authors of today's paper certainly think so. Their idea: get a bunch of people together and show them cold hard facts. Objective, quantitative, empirical data. Then, have them watch influencer content, in which the influencer draws conclusions that don't necessarily follow from the data.
In the end, where will the participants stand?
If they side with the influencer, and hold positions that don't follow from the evidence, then that points to one thing: influence is real. If they stick with the data, then maybe it's not.
On today's episode we'll walk through how the authors structured their study, collected the data, analyzed the results, and of course, what those results said. Let's jump in.
Months before recruiting participants or creating any content, the authors were deep-diving on social media. In fact, they spent a full year doing preliminary analysis of Brazil's leading financial digital influencers. They monitored posts across Twitter, Instagram, Facebook, and YouTube, examining language patterns, tone, emphasis, arguments, jargon, and visual features. Then, based on this analysis, they developed a script for an influencer video. One that they tuned to be as persuasive as possible. The question: who was going to perform it?
They needed a professional, so they ran a full casting process and eventually settled on the actress who would play the influencer. They worked with an audiovisual team to create the video, and professional editors to add graphics, tables, and other elements.
Importantly: the editors actually created two versions of the final video. Both of them replicated the look of a video being played back on a social media site (a like-button overlay, comments, a username label, etc.). But...one of the videos was made to look like it had low engagement (very few likes, comments, and shares), and the other was made to look popular. Why? Because this study isn't just about whether any content creator can sway opinions, it's about whether influencers can. That is: people with large followings and social proof on their side. The doctored engagement metrics are meant to test exactly that.
But what was this video about, exactly? It was a rant: a critique of a company. The actress was framed as a finance influencer, and in the video she explained why people should not invest in a particular company. The catch is: the company doesn't exist. The researchers had invented it, just for this purpose. They created a set of fictitious documents and financial statements about the company, and importantly, all of that information was positive. By the numbers, the made-up company was doing quite well. But the video took the opposite position: that the company was of low quality, had poor prospects, and wasn't investment grade. Again, this did not follow from the data at all; quite the opposite. But regardless, the influencer adamantly advised against buying the company's stock.
This idea is the most important part of this study, so it's worth repeating: the documents said one thing, but the influencer was saying another. The question the authors were asking was: what would happen when people were exposed to both messages? Who would they side with? The facts, or the influence?
Now it was time to actually conduct the study. They gathered together ~280 participants, mostly accounting students, and randomly split them into three groups: a control group that reviewed only the company's (positive) documents, and two treatment groups that reviewed the documents and then watched one of the two influencer videos (the low-engagement version or the high-engagement version).
Once they were done reviewing the materials, the participants completed a questionnaire that measured their perceptions of the company across seven dimensions: debt payment capacity, liquidity, indebtedness, profitability, cash flow, growth forecast, and overall perception. Since these items showed high correlation, the researchers used Principal Component Analysis to create a consolidated index. They also measured participants' trust in different information sources, their agreement with various growth forecasts, and their final investment intentions. The questionnaire included control variables for demographic characteristics, investment experience, confidence levels, and subject matter knowledge.
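The consolidation step the authors describe is a standard use of PCA: when several questionnaire items are highly correlated, the first principal component can serve as a single composite index. Here's a minimal sketch of that idea with simulated data; the item values, sample size, and variable names are illustrative, not taken from the paper.

```python
# Hypothetical sketch: collapsing correlated questionnaire items into one
# index via the first principal component. Data here is simulated; the
# paper's actual responses and loadings are not reproduced.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulate 280 participants answering 7 correlated perception items
# (debt payment capacity, liquidity, indebtedness, etc.)
latent = rng.normal(size=(280, 1))                  # shared underlying view
items = latent + 0.5 * rng.normal(size=(280, 7))    # 7 noisy, correlated items

# Standardize the items, then take the first principal component as the index
scaled = StandardScaler().fit_transform(items)
pca = PCA(n_components=1)
index = pca.fit_transform(scaled).ravel()           # one score per participant

# How much of the total variance the single index captures
print(round(pca.explained_variance_ratio_[0], 2))
```

Because the items are strongly correlated, the first component captures most of the shared variance, which is what justifies replacing seven scores with one.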
Once all the responses were collected and tabulated, it was time to run statistical analysis. For this they used five different regression models. Let's see what they found.
So overall, it certainly appears that the influencers won the day. And here's the thing: some participants indicated high trust in accounting information. And when asked directly they evaluated the company positively. But, in the end, even they were swayed by the influencer. And remember, many of the participants were accounting students, people more familiar with financial data than the average person. Despite consciously liking the company and valuing it highly, they seemed to subconsciously give more weight to the influencer's interpretation than their own. Cognitive dissonance to say the least.
The authors conducted several robustness checks to validate these findings. They ran the same analyses using only participants who watched the influencer videos, to compare the low-engagement vs high-engagement versions. They also conducted Tobit regressions to account for censored responses from the control group. In all cases, the core findings remained consistent.
So what can we make of this? What can we learn from this study?
Well, let's put our "communication theory" hats on. The results of this paper appear to confirm that digital influencers are fundamentally changing how corporate information flows through markets. Rather than information moving directly from companies to investors, influencers are inserting themselves as intermediary processors who translate formal corporate language into informal social media language. And this translation isn't neutral. The influencer's own opinions, biases, and communication style become embedded in the message that reaches the final audience. When that translated message conflicts with the original corporate message, it's not a fair fight. People tend to give more weight to the translated version, even when they consciously believe the original is more reliable.
"Why?" is anyone's guess. The authors propose that this might happen because influencers communicate through more accessible channels and formats. They use informal language, often deliver content through video, and employ simple, direct communication styles. This contrasts pretty sharply with the complex, formal language used in corporate financial reports.
Previous research has shown that even small changes in information presentation can significantly impact interpretation, and this study extends that body of work to show that the medium and messenger can matter as much as the message itself.
But a mystery remains: why didn't the doctored engagement metrics have the intended effect? Why wasn't the 'popular' video significantly more influential than the low-engagement version? It's not entirely clear from the data. But perhaps social proof mechanisms work differently depending on the type of decision people are making. Or...perhaps the communication style of influencers is just inherently influential, even when the messenger doesn't seem to be popular at all.
If you want to dive deeper into the authors' methodology, review their Principal Component Analysis, look at the specific language patterns they identified in influencer content, or see their regression results, then I'd highly recommend downloading the paper. They also include a pretty interesting account of how they conducted their 12 months of influencer research, and a set of recommendations for how they believe companies and regulators should respond to their findings.