Judgement devices and the evaluation of singularities: the use of performance ratings and narrative information to guide film viewer choice




5.2 Whose opinion counts most?

A further theme that emerged from our analysis concerned situations in which interviewees and IMDb users described dealing with conflicting information provided by a single judgement device, most commonly mixed reviews about a given film. As noted in the previous section, the film’s rating was sometimes used as a ‘tie-breaker’ to resolve this conflicting information. This approach was particularly useful where the rating was either very high or very low, although in such cases the majority of reviews tended to be correspondingly very positive or very negative. More difficult to resolve were cases where positive and negative reviews were fairly evenly balanced and the rating score was mid-range. Here the score did not give users a clear signal either way as to whether the film was worthwhile, and they were required to rely on other judgement devices, typically narrative reviews. The reviews considered were either user reviews provided by members of the public (in addition to IMDb, user reviews are also provided by a number of other sites, including Rotten Tomatoes, Netflix, and a popular Chinese website, Douban) or expert critics’ reviews. In addition, people often received film recommendations (solicited or unsolicited) from friends or family.

One might reasonably have expected, based on Karpik’s (2010) discussion of the role of cicerones and Shrum’s (1996) contention about the merit of critics’ opinions, that the opinions of expert critics would carry considerable weight in filmgoers’ decision processes, given these experts’ perceived credibility in the film industry relative to anonymous Internet users. Jeacle and Carter (2011), however, found, perhaps surprisingly, that with the advent of user websites like TripAdvisor, the opinion of the layperson is often privileged above that of the expert. Our results reveal a subtle difference again. While film viewers sometimes chose to view a particular film because it had received the approval of cicerones (particularly if the film had received a significant award, such as an Academy Award), in general expert opinions were not necessarily favoured over those of laypeople; nor do we see evidence that lay opinions were necessarily privileged over those of experts. The opinions that we see being privileged in our analysis are those which correspond most closely with those of the individual making the decision. People therefore tend to seek out the opinions of those whose tastes and preferences appear to match most closely with their own, whether those opinions are provided by laypeople or by experts. This similarity in taste is often assessed by taking into account the feedback that certain reviewers (expert or otherwise) have provided on previous films and comparing it to one’s own assessment of the same films. Where such a match in tastes was found, interviewees reported frequently focusing on the reviews of those users/critics and weighting them more heavily in their decision. This result is consistent with Blank’s (2007) argument that consumers of reviews gravitate toward reviewers whom they believe to have credibility, and that this perception of credibility is determined by the consumer’s perception of previous reviews by that particular reviewer.17

It might also have been expected, given Karpik’s (2010) discussion of personal networks, that individuals would place more emphasis on the recommendations of people they know well (i.e. friends and family) than on those of strangers. However, we see little evidence that this is the case. Indeed, where individuals did rely on the recommendations of people they knew well, they tended to be dissatisfied with the results, the following example from our netnographic data being representative:

I rented this movie because a friend told me it was the best movie ever, unfortunately it was pretty much the opposite, especially the whole setup. I saw Saturday morning cartoons that were more interesting.
A recommendation from a friend or family member did not necessarily mean that the individual ended up happy with the decision to see the film or, in the case of our interview data, even decided to follow the recommendation. Again, the key factor determining whether viewers were more likely to follow the recommendation, and more likely to comment positively on the outcome afterwards, was whether the friend’s or family member’s tastes in films were seen as similar to their own.
5.3 Responding to unsatisfactory outcomes

Perhaps unsurprisingly, we identified many circumstances in our data in which, despite spending what appeared to be a significant amount of time considering various judgement devices, individuals still felt disappointed with their choice of film after viewing it. As one IMDb user commented in relation to a particular ‘blockbuster’ film:

I initially had a feeling that this movie would be too much superhero overload for my taste. But then I saw the positive reviews on IMDb and Rotten Tomatoes18 and also the comments…and I felt I had been wrong and they might have pulled it off really well! So I went to watch it.
But I felt completely cheated after watching this film. How could IMDb ratings be so misleading?! The only reason this film might be remembered could be because it would become part of the case study "How a below average film could be made into [a] blockbuster by hyping it up on internet and social networks." I don't write reviews in general, but [I felt] forced to do it for this film. Such a letdown and waste of time even though it's got one of the best ever ratings on IMDb.
We identified in our data four distinct types of response to seeing a film that was deemed unsatisfactory. The first was that an individual would place less reliance on, or trust in, the particular judgement device or devices that led them to their choice, as indicated by one IMDb user:

I saw this ‘movie’ partly because of the sheer number of good reviews at Netflix, and from it I learned a valuable lesson....the lesson I learned is "Don't trust reviews".


The second response was to post a review or rating of their own. These reviews often contained a critique of the judgement devices (and those responsible for them) that the individuals had relied upon in choosing to see the film in the first place. We saw many examples of this in our netnographic data, a representative example being the following:

I rented this movie on the strength of the ratings and glowing reviews at this site [IMDb]. "Brilliant", they said. "Dark and beautiful", they wrote. 8.4 stars. Well, all I can say is, these people must have been on some serious drugs when [they] saw this totally inane movie…I give this movie 1 black hole.

Many interviewees also indicated that IMDb was just one of a number of sites on which people left comments after viewing a film. One of our interviewees offered the following observation regarding comments that they posted on YouTube:

[In the case of one particular film] I saw the good reviews and the trailer was very nice, so I went there [i.e. to the cinema] and it’s not that impressive. So then I went back to YouTube and posted. I felt it wasn’t that impressive, so I just left a comment.


The third response was to re-evaluate, ex post, the information provided by the judgement device. Some interviewees indicated that after watching a film that failed to meet their expectations, they returned to film websites and read the reviews again. The purpose of this appears to have been to gauge which information, and which users, most closely matched their own viewpoint; this knowledge was then recalled and used in assessing future films. In the (perhaps extreme) case of one interviewee, this process also led to re-watching films multiple times in an effort to better understand the viewpoints of IMDb users:

I like to go online [after watching a film] and read how other people perceive it. Because of this, I have watched Shutter Island three times now.


The final response was essentially to do nothing. Some interviewees were pragmatic, suggesting that seeing a bad film was ‘part and parcel’ of the cinematic experience, was no one’s fault, and could not be completely avoided. Those in this category noted that judgement devices were inherently imperfect, and that no amount of information gathering prior to viewing a film could guarantee enjoyment of it.


