Natali Helberger, with whom I work on the Personalised Communication project, and I wrote a blog post for the LSE Media Policy Project blog: ‘Facebook is a news editor: the real issues to be concerned about’. We argue that Facebook does, of course, employ human editors to curate the news feeds, but that this – while it should not come as a surprise at all – has serious legal consequences. [Full post]
This seems to be the week of publications on selective exposure, as a literature review I co-authored was also published. The abstract:
Some fear that personalised communication can lead to information cocoons or filter bubbles. For instance, a personalised news website could give more prominence to conservative or liberal media items, based on the (assumed) political interests of the user. As a result, users may encounter only a limited range of political ideas. We synthesise empirical research on the extent and effects of self-selected personalisation, where people actively choose which content they receive, and pre-selected personalisation, where algorithms personalise content for users without any deliberate user choice. We conclude that at present there is little empirical evidence that warrants any worries about filter bubbles.
Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1). doi:10.14763/2016.1.401 [Full text]
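To make the distinction in the abstract concrete, here is a minimal sketch of what “pre-selected personalisation” could look like: an algorithm re-ranks news items by how well their topic tags match a user’s (assumed) interest profile, without any deliberate choice by the user. The items, tags, and weights below are invented for illustration and are not taken from the article.

```python
# Hypothetical toy example of pre-selected personalisation:
# rank news items by the match between their tags and an
# assumed user interest profile.

items = [
    {"title": "Tax plan announced", "tags": {"politics", "economy"}},
    {"title": "Cup final tonight", "tags": {"sports"}},
    {"title": "Party leaders debate", "tags": {"politics"}},
]

# Assumed interests, e.g. inferred from past clicks (invented weights).
user_profile = {"politics": 1.0, "economy": 0.5, "sports": 0.2}

def relevance(item, profile):
    """Sum of the profile weights for the item's tags."""
    return sum(profile.get(tag, 0.0) for tag in item["tags"])

# The personalised feed: most "relevant" items get the most prominence.
feed = sorted(items, key=lambda it: relevance(it, user_profile), reverse=True)
```

Self-selected personalisation, by contrast, would correspond to the user explicitly setting `user_profile` (e.g. subscribing to topics) rather than the system inferring it.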
A while ago, I studied the relationship between selective exposure and political polarization together with my former UvA colleague Marijn van Klingeren (now RU Nijmegen) and Yariv Tsfati (U Haifa). The results have now been published. The abstract:
One of the main lines of reasoning in the contemporary debate on media effects is the notion that selective exposure to congruent information can lead to political polarization. Most studies are correlational, potentially plagued by self-report biases, and cannot demonstrate time order. Even less is known about the mechanisms behind such an effect. We conducted an online quasi-experiment with a sample closely matching the characteristics of the Dutch population (N = 501). We investigate how selective exposure can lead to polarized attitudes and which role frames, facts, and public opinion cues play. While we find that facts learned can help explain attitude change and that selectivity can influence the perception of public opinion, we cannot confirm that people generally polarize.
Trilling, D., van Klingeren, M., & Tsfati, Y. (2016). Selective exposure, political polarization, and possible mediators: Evidence from the Netherlands. International Journal of Public Opinion Research, online first. doi:10.1093/ijpor/edw003
Full text: HTML PDF
I wrote a short blog post for the Graduate School of Communication Science in which I try to explain why I think that – while teaching the basics is necessary – we should also try to involve students in cutting-edge research, and how sharing code and data can benefit both students and the research community.
My colleague Jelle Boumans and I just published an article outlining different approaches to automated content analysis. The focus lies on the application of computational social science approaches, which are often rooted in computer science, to the analysis of digital texts, especially journalistic texts. The article is part of a special issue with a lot of interesting related work (editorial).
The abstract of our article:
When analyzing digital journalism content, journalism scholars are confronted with a number of substantial differences compared to traditional journalistic content. The sheer amount of data and the unique features of digital content call for the application of valuable new techniques. Various other scholarly fields are already applying computational methods to study digital journalism data. Often, their research interests are closely related to those of journalism scholars. Despite the advantages that computational methods have over traditional content analysis methods, they are not commonplace in digital journalism studies. To increase awareness of what computational methods have to offer, we take stock of the toolkit and show the ways in which computational methods can aid journalism studies. Distinguishing between dictionary-based approaches, supervised machine learning, and unsupervised machine learning, we present a systematic inventory of recent applications both inside and outside journalism studies. We conclude with suggestions for how the application of new techniques can be encouraged.
Boumans, J.W., & Trilling, D. (2015). Taking Stock of the Toolkit: An overview of relevant automated content analysis approaches and techniques for digital journalism scholars. Digital Journalism, online first. doi:10.1080/21670811.2015.1096598
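Of the three families of approaches the abstract distinguishes, the dictionary-based one is the simplest to illustrate. The sketch below scores documents by the share of tokens that match a predefined word list; the word list and example texts are invented for illustration and are not taken from the article (supervised and unsupervised machine learning would instead learn categories from labeled examples or discover them from the data, respectively).

```python
# Hypothetical sketch of a dictionary-based approach to automated
# content analysis: score each document by the proportion of its
# tokens found in a predefined dictionary of topic terms.

ECONOMY_TERMS = {"budget", "tax", "economy", "inflation"}  # invented word list

def dictionary_score(text, terms):
    """Share of tokens in `text` that match the dictionary `terms`."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    # Strip common punctuation before matching each token.
    hits = sum(1 for t in tokens if t.strip(".,;:!?") in terms)
    return hits / len(tokens)

articles = [
    "The budget debate focused on tax cuts and inflation",
    "The team won the match in the final minute",
]
scores = [dictionary_score(a, ECONOMY_TERMS) for a in articles]
```

The first (economy-related) article scores higher than the second; real applications would of course use validated dictionaries and more careful tokenization.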