The rapid growth of e-commerce has led to a concomitant increase in consumers’ reliance on digital word-of-mouth to inform their choices. As such, sellers have an increasing incentive to solicit reviews for their products. Recent studies have examined the direct effect of receiving incentives, or of introducing an incentive policy, on review-writing behavior. However, because incentivized reviews typically make up only a small proportion of the reviews on a platform, it is important to understand whether their presence has spillover effects on the unincentivized reviews that constitute the majority. Using a state-of-the-art language model, Bidirectional Encoder Representations from Transformers (BERT), to identify incentivized reviews; a document-embedding method, Doc2Vec, to create matched pairs of Amazon- and non-Amazon-branded products; and a natural experiment created by a policy change on Amazon.com in October 2016, we conduct a difference-in-differences analysis to identify the spillover effects of banning incentivized reviews on unincentivized reviews. Our results indicate positive spillover effects of the ban on review sentiment, length, helpfulness, and frequency: the policy stimulates more reviews in the short run and more positive, lengthier, and more helpful reviews in the long run. Thus, we find that the presence of incentivized reviews on a platform poisons the well for unincentivized reviews.
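The difference-in-differences design described above can be sketched as a regression with a treatment indicator, a post-policy indicator, and their interaction, whose coefficient captures the spillover effect of the ban. The sketch below uses simulated data with hypothetical variable names (`treated`, `post`, `sentiment`); it illustrates the estimator, not the paper's actual data or specification.

```python
# Minimal difference-in-differences (DiD) sketch on simulated data.
# Column names are illustrative, not from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 = product group affected by the ban
    "post": rng.integers(0, 2, n),     # 1 = review written after the Oct 2016 policy change
})
# Simulate a review outcome (e.g., sentiment score) with a true DiD effect of +0.5
df["sentiment"] = (
    1.0
    + 0.2 * df["treated"]
    + 0.1 * df["post"]
    + 0.5 * df["treated"] * df["post"]
    + rng.normal(0, 0.3, n)
)

# The coefficient on the treated:post interaction is the DiD estimate
model = smf.ols("sentiment ~ treated * post", data=df).fit()
did_estimate = model.params["treated:post"]
```

In the paper's setting, the outcome would be review sentiment, length, helpfulness, or frequency, and the treated/control groups come from the Doc2Vec-matched Amazon- and non-Amazon-branded product pairs.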
We study the spillover effects of the online reviews of covisited products on purchases of a focal product, using clickstream data from a large retailer. The proposed spillover effects are moderated by (a) whether the related (covisited) products are complementary or substitutive, (b) the media channel used (mobile or personal computer (PC)), (c) whether the related products are from the same brand or a different brand, (d) consumer experience, and (e) the variance of the review ratings. To identify complementary and substitutive products, we develop supervised machine-learning models based on product characteristics, such as product category and brand, and on novel text-based similarity measures. We train and validate these models using product-pair labels from Amazon Mechanical Turk. Our results show that the mean rating of substitutive (complementary) products has a negative (positive) effect on purchases of the focal product. Interestingly, the magnitude of the spillover effects of the mean ratings of covisited (substitutive and complementary) products is significantly larger than the effect of the focal product's own ratings, especially for complementary products. The spillover effect of ratings is stronger for consumers who use mobile devices than for those who use PCs. The negative effect of the mean ratings of substitutive products on purchases of the focal product is significantly stronger across different brands than within the same brand. Lastly, the effect of the mean ratings is stronger for less experienced consumers and for ratings with lower variance. We discuss implications for leveraging the spillover effects of related products' online reviews to encourage online purchases.
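The supervised classification of product pairs as complementary or substitutive, using product characteristics and text-based similarity, could be sketched along the following lines. This is an illustrative stand-in, not the paper's actual model: the toy labeled pairs play the role of the Amazon Mechanical Turk annotations, and the features (title cosine similarity plus a same-category indicator) are hypothetical simplifications of the features the abstract describes.

```python
# Illustrative sketch: classify product pairs as complement vs. substitute
# from a text-similarity feature and a category-match feature.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

# (title_a, title_b, same_category, label) -- toy labeled pairs standing in
# for the Amazon Mechanical Turk product-pair labels
pairs = [
    ("usb charging cable", "usb wall charger", 1, "complement"),
    ("usb charging cable", "usb-c charging cable", 1, "substitute"),
    ("phone case", "screen protector", 1, "complement"),
    ("phone case leather", "phone case silicone", 1, "substitute"),
    ("laptop stand", "laptop sleeve", 1, "complement"),
    ("wireless mouse", "bluetooth mouse", 1, "substitute"),
]

titles = [t for a, b, _, _ in pairs for t in (a, b)]
vec = TfidfVectorizer().fit(titles)

def pair_features(title_a, title_b, same_cat):
    # Text-based similarity of the two titles plus a category-match indicator
    sim = cosine_similarity(vec.transform([title_a]), vec.transform([title_b]))[0, 0]
    return [sim, same_cat]

X = np.array([pair_features(a, b, c) for a, b, c, _ in pairs])
y = [label for *_, label in pairs]
clf = LogisticRegression().fit(X, y)
```

The intuition encoded here is that substitutes tend to have highly similar descriptions (near-duplicate titles within a category), while complements share a context but differ in wording; a real implementation would add the category, brand, and richer similarity features the abstract mentions.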