A while ago, my former UvA colleague Marijn van Klingeren (now RU Nijmegen), Yariv Tsfati (U Haifa), and I studied the relationship between selective exposure and political polarization. The results are now published. The abstract:
One of the main lines of reasoning in the contemporary debate on media effects is the notion that selective exposure to congruent information can lead to political polarization. Most studies are correlational, potentially plagued by self-report biases, and cannot demonstrate time order. Even less is known about the mechanisms behind such an effect. We conducted an online quasi-experiment with a sample closely matching the characteristics of the Dutch population (N = 501). We investigate how selective exposure can lead to polarized attitudes and what role frames, facts, and public opinion cues play. While we find that facts learned can help explain attitude change and that selectivity can influence the perception of public opinion, we cannot confirm that people generally polarize.
Trilling, D., van Klingeren, M., & Tsfati, Y. (2016). Selective exposure, political polarization, and possible mediators: Evidence from the Netherlands. International Journal of Public Opinion Research, online first. doi:10.1093/ijpor/edw003
Full text: HTML PDF
I wrote a short blog post for the Graduate School of Communication Science, in which I try to explain why I think that – while teaching the basics is necessary – we should also try to involve students in cutting-edge research, and how sharing code and data can benefit both students and the research community.
My colleague Jelle Boumans and I just published an article outlining different approaches to automated content analysis. The focus is on applying computational social science approaches, which are often rooted in computer science, to the analysis of digital texts, especially journalistic texts. The article is part of a special issue with a lot of interesting related work (editorial).
The abstract of our article:
When analyzing digital journalism content, journalism scholars are confronted with a number of substantial differences compared to traditional journalistic content. The sheer amount of data and the unique features of digital content call for the application of valuable new techniques. Various other scholarly fields are already applying computational methods to study digital journalism data. Often, their research interests are closely related to those of journalism scholars. Despite the advantages that computational methods have over traditional content analysis methods, they are not commonplace in digital journalism studies. To increase awareness of what computational methods have to offer, we take stock of the toolkit and show the ways in which computational methods can aid journalism studies. Distinguishing between dictionary-based approaches, supervised machine learning, and unsupervised machine learning, we present a systematic inventory of recent applications both inside as well as outside journalism studies. We conclude with suggestions for how the application of new techniques can be encouraged.
Boumans, J.W., & Trilling, D. (2015). Taking Stock of the Toolkit: An overview of relevant automated content analysis approaches and techniques for digital journalism scholars. Digital Journalism, online first. doi:10.1080/21670811.2015.1096598
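To give a flavour of the simplest of the three approaches the article distinguishes, here is a minimal sketch of a dictionary-based analysis: counting how many tokens in a text match a predefined word list. The word list and function name below are hypothetical illustrations, not taken from the article.

```python
# Minimal sketch of a dictionary-based approach to automated content
# analysis: count tokens that match a (hypothetical) category dictionary.
economy_words = {"market", "inflation", "jobs", "budget"}  # illustrative word list

def dictionary_score(text, dictionary):
    """Return the number of tokens in `text` that occur in `dictionary`."""
    tokens = text.lower().split()
    return sum(1 for token in tokens if token in dictionary)

print(dictionary_score("Inflation hurts the jobs market", economy_words))  # 3
```

Real applications would add tokenization beyond whitespace splitting and validated dictionaries, but the core logic – matching texts against researcher-defined word lists – is exactly this simple, which is both the strength and the limitation of the approach compared to supervised and unsupervised machine learning.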
Are you a UvA student who wants to work with some interesting people (one of them being me)? Apply.
For a project on automated content analysis, we are looking for a student assistant for a fixed number of hours to be agreed on and depending on the available funding (probably between 100 and 150 hours in total). The pay is 10 euro per hour.
To date, the researchers in the project use a set of self-written Python scripts to access a MongoDB on a remote server and run analyses, either on the remote server or on their personal laptop (Natural Language Processing and similar applications). These scripts are now executed by supplying command-line arguments. The student’s task is to develop a (simple) web interface to these scripts; specifications to be agreed on. Depending on time available and the student’s skills, further improvement of the existing scripts could be another task assigned.
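To make the task concrete, the sketch below shows one possible shape of such a wrapper: a tiny stdlib-only WSGI app that exposes an analysis function over HTTP. The function `run_analysis` is a placeholder for the project's existing script logic (its name and behaviour are assumptions, not the project's actual code).

```python
# Hypothetical sketch of a minimal web interface around an existing
# command-line analysis routine, using only the Python standard library.
from urllib.parse import parse_qs

def run_analysis(query):
    # Placeholder for the existing script logic (e.g. querying the
    # MongoDB and running NLP); here it simply echoes the query.
    return f"Results for: {query}"

def app(environ, start_response):
    # Read the search term from the query string, e.g. /?q=elections
    params = parse_qs(environ.get("QUERY_STRING", ""))
    query = params.get("q", [""])[0]
    body = run_analysis(query).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [body]

# To serve it locally (this call blocks):
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

Whether the student uses raw WSGI, a micro-framework, or something else entirely is open; the point is replacing hand-typed command-line arguments with a browser form.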
The ideal candidate
– is enrolled at the UvA (faculty does not matter)
– is not and has not been employed by the University of Amsterdam (any Faculty!) in the current calendar year
– has proven relevant programming skills
– can work independently without step-by-step instructions
– is willing to contribute their own ideas and input.
Knowledge of Dutch is a plus.
The project is situated at the Department of Communication Science at the Faculty of Social Sciences (FMG).
To express your interest, please send an email to Dr. Damian Trilling, firstname.lastname@example.org.
Tomorrow, I’ll give a talk (in Dutch) at the Weekend of Science, a nationwide event in which universities open their doors for a wide public. From their website:
Sanne Kruikemeier and Damian Trilling from the Department of Communication Science will give a short demonstration and lecture during the Weekend van de Wetenschap.
Damian offers a glimpse into the world of social media and news. People read news in all kinds of places, for example on Facebook and Twitter. Using Big Data, Damian will show how news spreads among people. He will show how people share news and why people forward certain news items. What do we consider important and worth passing on?
Sanne will give an eye-tracking demonstration. With the eye tracker, we can study viewing behaviour in detail. How are newspapers and magazines looked at? What stands out? Are newspapers and websites viewed in different ways? Sanne will show what we can learn from our viewing behaviour.
The academic conference season is over, but that doesn’t mean that there are no interesting workshops to attend or talks to give. Last week, I was a guest at the London School of Economics and Political Science (LSE), where we discussed in a workshop how to measure media diversity – a topic closely related to our own Personalised Communication project.
Today, I gave an invited talk at the annual meeting of the Dutch Platform for Survey Research (Nederlandstalig Platform voor Surveyonderzoek, NPSO). I tried to provide an overview of different techniques for automated content analysis, as my colleagues and I use them in our research. The slides are available here.
More to come in the next months…