How opinions are received by online communities: A case study on Amazon.com helpfulness votes.

Cristian Danescu-Niculescu-Mizil, Gueorgi Kossinets, Jon Kleinberg, and Lillian Lee.

Proceedings of WWW, pp. 141-150, 2009.



PDF



Teaser Answer:

For a product with an average star rating of 3.5, a review with a star rating of 4 would be the most likely to be rated as helpful (ignoring the content of the review text).


In general, the best strategy is to give your review a star rating slightly above the average of the other reviews.


Interestingly, how far above the average you'd want to go depends on the level of disagreement among the other reviewers: is the 3.5 average the result of the reviewers agreeing that the product is worth about 3-4 stars (in which case 4 stars would be your best bet), or is it the result of two sides disagreeing on whether the product is worth 1-2 stars or 4-5 stars (in which case 5 stars would be a better bet)? (Check out Figure 3 in the paper for details; a toy simulation of the effect is sketched below.)
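
To see why disagreement pushes the optimum upward, here is a minimal simulation sketch of that intuition. It is not the model developed in the paper: the evaluator-tolerance rule, the opinion distributions, and the camp proportions below are all invented for illustration. Each simulated evaluator holds a latent opinion of the product (in stars) and votes a review helpful only if the review's rating falls within one star of that opinion; the two populations share the same 3.5-star mean but differ in how much the evaluators disagree.

import random

def helpfulness(review_star, opinions, tolerance=1.0):
    """Fraction of evaluators voting 'helpful': in this toy model, an
    evaluator votes helpful iff the review's star rating lies within
    `tolerance` stars of the evaluator's own opinion."""
    return sum(abs(review_star - op) <= tolerance for op in opinions) / len(opinions)

random.seed(0)
n = 100_000

# Low-disagreement product: one camp of opinions centered on 3.5 stars.
consensus = [random.gauss(3.5, 0.5) for _ in range(n)]

# High-disagreement product: a 1-2 star camp and a 4-5 star camp,
# mixed so the overall mean opinion is still 3.5 (0.375*1.5 + 0.625*4.7).
polarized = ([random.gauss(1.5, 0.5) for _ in range(375 * n // 1000)] +
             [random.gauss(4.7, 0.5) for _ in range(625 * n // 1000)])

for label, opinions in [("consensus", consensus), ("polarized", polarized)]:
    curve = {s: round(helpfulness(s, opinions), 2) for s in range(1, 6)}
    print(f"{label}: {curve}")
# consensus: 3 and 4 stars tie near the top (this toy is symmetric around
# the mean; the paper's data show a slight edge just above it), while for
# the polarized product 5 stars is the clear winner.

The polarized case captures the teaser's point: the larger camp sits at 4-5 stars, so a 5-star review satisfies nearly all of that camp, whereas a review matching the 3.5 average falls outside everyone's tolerance and satisfies almost no one.
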




Talk Slides



xkcd



ABSTRACT:

There are many on-line settings in which users publicly express opinions. A number of these offer mechanisms for other users to evaluate these opinions; a canonical example is Amazon.com, where reviews come with annotations like "26 of 32 people found the following review helpful." Opinion evaluation appears in many off-line settings as well, including market research and political campaigns. Reasoning about the evaluation of an opinion is fundamentally different from reasoning about the opinion itself: rather than asking, "What did Y think of X?", we are asking, "What did Z think of Y's opinion of X?" Here we develop a framework for analyzing and modeling opinion evaluation, using a large-scale collection of Amazon book reviews as a dataset. We find that the perceived helpfulness of a review depends not just on its content but also, in subtle ways, on how the expressed evaluation relates to other evaluations of the same product. As part of our approach, we develop novel methods that take advantage of the phenomenon of review "plagiarism" to control for the effects of text in opinion evaluation, and we provide a simple and natural mathematical model consistent with our findings. Our analysis also allows us to distinguish among the predictions of competing theories from sociology and social psychology, and to discover unexpected differences in the collective opinion-evaluation behavior of user populations from different countries.



BibTeX ENTRY:

@InProceedings{Danescu-Niculescu-Mizil+al:09a,
  author    = {Cristian Danescu-Niculescu-Mizil and Gueorgi Kossinets and Jon Kleinberg and Lillian Lee},
  title     = {How opinions are received by online communities: A case study on {Amazon.com} helpfulness votes},
  booktitle = {Proceedings of WWW},
  year      = {2009},
  pages     = {141--150}
}