Monday 17 – Friday 21 February 2019, 14:00 – 17:30 (finishing slightly earlier on Friday)
15 hours over five days
Some people think of measurement as simply assigning numbers to cases. Sometimes it is. However, often measurement gets much more complicated.
Many of the concepts we deal with in the social sciences are not immediately visible to us, and the phenomena that are visible typically do not correspond exactly to the concept we are after.
Other concepts – most obviously attitudes, meanings, and beliefs – we have to measure using statements from people who do not know our concepts and who are biased in numerous conscious and unconscious ways.
This course introduces case-based (qualitative), variance-based (quantitative), and interpretivist approaches to thinking about and handling these issues. Themes include the logic of latent variable modelling, the psychology of responding to questions, and mediating between everyday language and scholarly terminology. All, as the course makes clear, are handled somewhat differently, sometimes fundamentally differently, across approaches.
Tasks for ECTS Credits
2 credits (pass/fail grade) Attend at least 90% of course hours, participate fully in in-class activities, and carry out the necessary reading and/or other work prior to, and after, class.
3 credits (to be graded) As above, plus complete one task (tbc).
4 credits (to be graded) As above, plus complete two tasks (tbc).
Kim Sass Mikkelsen is an associate professor of public administration and politics at Roskilde University in Denmark, where he teaches public administration and management.
He is a methods pluralist, having contributed to qualitative methods development, and he teaches case study and statistical methods.
Substantively, Kim studies public sector human resource management in a number of countries, from Uganda to Estonia to Nepal to Brazil, using surveys of public servants and managers.
Broadly conceived as the link between concepts and data, measurement is indispensable for all empirical social science. Sometimes measurement is easy, but most of the time it is not.
A study of local public health may count the number of health centers or the number of staff working in such centers in a city. But what is this a measure of? Is it a good measure?
A study of employee engagement among firefighters may roll out a standard nine-item engagement battery in a survey. But why so many items? Why those nine? And to what extent can we believe what the firefighters reply?
A study of constructive deviance among nurses may examine how they construct meaning from their interactions with clients and how they work around or counteract rules. But how do we elicit such meaning from their everyday language? After all, few nurses will ever use the term constructive deviance.
This course introduces measurement issues from variance-based, case-based, and interpretivist perspectives as well as common measurement problems for social science research.
The social sciences tend to deal with concepts – like public health capacity, employee engagement, and constructive deviance – that are not immediately visible. Moreover, the sources of information we have for measuring these concepts are rarely direct: they are often statements or stories from people who may not be responding to what they are asked, who may hide what they believe from researchers or from themselves, and who do not speak the language of the social science academy.
We will examine in detail topics surrounding the often problematic business of social science measurement, including:
1. Can everything that matters be measured?
2. Content validity
3. Measurement validity
4. Social desirability bias
5. Context effects on responses in surveys and interviews
6. Latent variable modeling (confirmatory factor analysis and structural equation modeling)
7. Measurement issues in process tracing and case studies
8. Judgement and theory in historical measurement
9. Interpretative mediation between everyday language and scientific concepts
10. The transferability of qualitative and quantitative measurement standards to interpretative research.
The course is structured around three approaches: variance-based, case-based, and interpretative.
We introduce the main differences between the three approaches in terms of concepts and measurement. We talk about the extent to which each approach leans on – or can lean on – the others, and the extent to which they conflict with each other either fundamentally or in common practice.
We tackle human beings as sources of information. We draw on a large literature, mainly from survey methodology, on the sources of bias that arise when asking people questions. These biases include social desirability bias, where people report distorted views of their attitudes or behaviours, either strategically or to present themselves in a better light to interviewers or to themselves, and context effects, where prior questions influence answers to subsequent ones. We discuss how these and other biases apply to research relying on a variety of tools, including interviews and participant observation.
We dive deep into variance-based approaches to measurement. In particular, we examine in depth the latent variables perspective on measurement developed in psychology. In this approach, the notion that we cannot directly observe many of our concepts is tackled head on and modelled statistically using confirmatory factor analysis (CFA) and structural equation modelling (SEM). While the session itself will not teach you how to apply these tools, an additional, voluntary workshop during the week will provide an introduction to CFA and SEM in the lavaan package for the R statistical environment.
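To give a flavour of what the voluntary workshop covers, here is a minimal CFA sketch in lavaan syntax. It uses the `HolzingerSwineford1939` dataset and the three-factor model from lavaan's own tutorial (the variable names `x1`–`x9` belong to that example dataset, not to the course materials):

```r
# A minimal CFA sketch, adapted from the lavaan tutorial.
# The data and indicators (x1-x9) are lavaan's built-in
# HolzingerSwineford1939 example, not course materials.
library(lavaan)

# "=~" reads "is measured by": each latent variable on the left
# is measured by the observed indicators on the right.
hs_model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'

fit <- cfa(hs_model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```

The key idea the workshop builds on is visible in the `=~` lines: the unobservable concept is modelled as a latent variable, and the survey items are treated as its fallible indicators.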
We dive equally deep into case-based approaches to measurement. In particular, we look at measurement in case study research, whether this research is based on historical sources or interviews. We talk about common measurement pitfalls in process tracing research, and apply process tracing tests to attempt to get around them. Moreover, we talk about the roles that theory, historiography, and expert judgement play in historical case study measurement.
We dive into interpretative approaches to measurement. We look at the extent to which the advice from Days 3 and 4 applies to this approach, and ask what standards do apply where the interpretative approach differs. We talk about taking meaning building seriously and what this means for concepts and for measures of those concepts. Finally, we debate the feasibility and problems of quantification in interpretative work.
We strongly encourage you to bring your own research questions, concepts, hypotheses, and proposed measures – even if you are only in the early stages of research – to the course on Day 1.
This course will help you measure core concepts in your research. There will be plenty of opportunity to reflect on measurement issues in your own research and to develop ways to overcome them.
Basic understanding of statistical methods.
Some knowledge of concept formation would be useful.
Day | Topic | Details
---|---|---
1 | Approaches to measurement | We distinguish the parameters of measurement debates and approaches. You will learn important similarities and differences in how measurement is discussed in variance-based, case-based, and interpretative social science – and you will learn how these differences matter.
2 | Measuring people | We look at problems related to using people as measurement devices, as we do in interviews, surveys, and sometimes document analysis. We discuss a range of important problems related to recall, social desirability, self-deception, and more.
3 | Latent variables | We consider a very common approach that treats measurement as a link between latent constructs we cannot observe and indicators we can. We talk about confirmatory factor analysis and structural equation modelling as variance-based approaches to this problem.
4 | Measurement in case studies | We consider measurement issues in causally oriented case study methods, such as comparative historical analysis and process tracing. We talk about test types and how far the latent variables framework can get us in case study work.
5 | Measuring meaning | We consider measurement in interpretivist social science. We talk about the extent to which ‘measurement’ even applies here, and – more importantly – how interpretivists mediate between concepts and conversations with real people.
Day | Readings
---|---
1 | Goertz, G., & Mahoney, J. (2012); Yanow, D. (2003); Mosley, L. (Ed.) (2013); Ahram, A. I. (2013)
2 | Tourangeau, R., Rips, L. J., & Rasinski, K. (2000); Zaller, J., & Feldman, S. (1992); Nederhof, A. J. (1985); Hjortskov, M. (2017)
3 | Adcock, R., & Collier, D. (2001); Brown, T. A., & Moore, M. T. (2008); Raykov, T., & Marcoulides, G. A. (2012); Bozeman, B., & Su, X. (2015)
4 | Collier, D. (2011); Lustick, I. S. (1996); Schedler, A. (2012); Mosley, L. (Ed.) (2013); Fairfield, T. (2013)
5 | Yanow, D., & Schwartz-Shea, P. (2015); Bevir, M., & Kedar, A. (2008); Barkin, J. S., & Sjoberg, L. (Eds.) (2017)
For those interested in the optional introduction to lavaan, basic prior knowledge of the R environment is necessary.
A good introduction can be found here, and the software itself, plus the more user-friendly extension RStudio, can be downloaded here and here (R is required to run RStudio).
AWAITING LINKS FROM INSTRUCTOR