Effective January 31st, Gartner Peer Insights will make some key changes to the way it handles customer reviews. According to industry expert Destrier Analyst Relations, the changes are intended to “clamp down on vendors said to be ‘gaming’ the reviews process.”
In short, Gartner is belatedly recognizing that promoting Peer Insights as a way of influencing the Magic Quadrant motivates bad behavior, because vendors perceive the stakes as very high. Encouraging technology vendors to drive reviews in an attempt to influence the Quadrant, without checks and balances, leads to a cherry-picking problem that creates inherently biased reviews (just as, in the traditional analyst model, vendors connect analysts with only their happiest customers). It’s good that Gartner is finally attempting to address this long-standing issue.
But it’s also surprising that it’s taken this long for Gartner to recognize what every economist knows and what every parent of preschoolers learns as they try to distribute evenly sized slices of birthday cake: when even one individual in a group tries to maximize their personal outcome, it creates an unlevel playing field for the entire group. Whether it’s a five-year-old fighting for the largest slice of cake or a CMO attempting to ensure that only customers with high NPS scores write reviews, the outcome is the same. One individual wins a temporary victory that lasts no longer than the sugar rush from the birthday cake.
Some vendors, whether because of the relative size of their customer base or other reasons, are better at gaming the system than others. As an example, a colleague from an enterprise software company told me that they gave their sales reps spiffs of up to $1,000 in exchange for getting customers to contribute 5-star reviews to Peer Insights. So the system rewards bad behavior, and the group as a whole, as well as the ideals of basic fairness, transparency, and trust, suffers.
Gartner has to have known this all along. I’m reminded of Captain Renault’s famous line from Casablanca: “I’m shocked – shocked! – to find that gambling is going on in here.”
At TrustRadius, we recognized that potential for bias from our earliest days, building our platform to detect review source and many other factors that could signal potential bias, and to feed them into an algorithm. We knew it would be the harder way to build the platform, but we also knew it would be necessary. Because of that architecture, we were able to create our TRScore™ in a way that adjusts for bias. Smart vendors welcome a fair and unbiased market conversation because they understand that, in a world where buyers are sick of marketing jargon, truth sells. Those vendors cast a wide net in driving reviews and are rewarded with a fair, honest representation of their products (yes, including flaws) that builds credibility and helps buyers make good decisions.
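To make the general idea concrete, selection bias of this kind is often corrected with post-stratification: average ratings within each review source, then reweight those averages by an assumed baseline mix rather than the observed (possibly gamed) mix. The sketch below is purely illustrative — the source categories, population shares, and function name are all invented for this example; TrustRadius's actual TRScore algorithm is proprietary and is not shown here.

```python
from collections import defaultdict

# Assumed baseline mix of review sources in the overall market.
# These numbers are hypothetical, chosen only to illustrate the technique.
POPULATION_SHARE = {"organic": 0.6, "vendor_invited": 0.4}

def bias_adjusted_score(reviews):
    """Post-stratified mean rating.

    reviews: list of (source, rating) tuples, where every source
    appears in POPULATION_SHARE.
    """
    by_source = defaultdict(list)
    for source, rating in reviews:
        by_source[source].append(rating)

    # Average within each source, then weight each source's average by
    # its assumed population share instead of its observed share. If a
    # vendor floods the site with invited 5-star reviews, those reviews
    # are diluted back down to their baseline proportion.
    return sum(
        POPULATION_SHARE[source] * (sum(ratings) / len(ratings))
        for source, ratings in by_source.items()
    )
```

For example, if a product has two organic reviews averaging 6.0 and eight vendor-invited reviews averaging 10.0, the raw mean is 9.2, but the post-stratified score is 0.6 × 6.0 + 0.4 × 10.0 = 7.6 — the flood of invited reviews no longer dominates.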
Not all vendors see it that way, and it’s understandable. After all, even if the Director of Analyst Relations embraces the idea of transparency, they may report to higher-ups who are uncomfortable with critical feedback given publicly. In that case, the vendor may send only the happiest customers to leave reviews. (Even though our research shows that negative feedback in reviews actually helps shorten deal cycles.) Until recently, Gartner Peer Insights happily represented those reviews as the “authentic” voice of the market. In contrast, over the past three years here at TrustRadius, we have applied a thoughtful, scientific, and transparent method to surface and correct for selection bias in reviews through TRScore.
Software buyers, of course, have known about these issues for years. As software buyer Sterling Uhler noted, “It would be too easy for a company to saturate a user site with fake reviews, pushing the negative ones down the list or making it seem as if there were mainly positive reviews and only a few negative ones.” Another buyer, Hubert Sawyers III, reflected similar sentiments, saying, “Many times, reviews can be really generic, especially the positive ones, which makes you wonder what is driving the review.”
You know that a child is maturing when they start to ask tough questions about the existence of Santa Claus and the Tooth Fairy. We can see that the market for peer reviews and customer voice is maturing when, finally, another player in the market starts attempting to correct for bias. We applaud this move and are glad to welcome Gartner Peer Insights to the real world, where trust and transparency can only be created by understanding and correcting for the bias that, otherwise, some vendors would introduce.
Gartner will still have a lot to do, of course. There are years of historical data to reconcile (if I were you, I might not trust anything I read on Gartner Peer Insights written before February 1, 2019).
Their approach may need finesse as well. Rather than pursuing the scalable approach of using an algorithm like TRScore to correct for bias, and labeling all reviews with information about how they were collected, Gartner GVP Richard Cho warns “We will catch those who try to cheat, and we will discipline them.” That sounds a bit like a fed-up parent scolding the five-year-old trying to game the birthday cake distribution, rather than a market approach designed to provide a level playing field for all participants.
And, of course, Gartner is still part of the traditional analyst model, which often overlooks conflicts of interest (such as allowing the implication that, if a vendor wishes to be considered for the Magic Quadrant, they must drive a volume of reviews on Gartner Peer Insights). I look forward to seeing whether they make moves to address those conflicts as well.