Wednesday, 5 August 2015

It’s complicated…

CRFR Co-Director Dr Pam Warner reviews a recently published paper and asks: what is the role of interpretive quantitative methods in the social sciences?

A recently published paper has made a plea for:

“more meaningful, more understandable, more applicable practice of quantitative methods in social science”. (Babones 2015[1])

As you read this, you are probably thinking “Hear, hear!” Certainly this paper caught my attention because it articulates a number of points that have been bubbling away in the back of my mind…

These days many published research articles report on social research studies that have applied statistical analysis methods to multivariable data sets. For many active researchers the meaning and implications of the reported findings can be difficult to fathom, and probably even more so for practitioners or policy-makers, especially if a very specialised technique has been used.

Undoubtedly it would be helpful to practitioners and policy-makers if research gave clarity as to cause and effect, because if factor F causes some happy outcome H, then finding some way to introduce or intensify F would produce or enhance H… (Or conversely, for adverse outcomes.) However, where findings of multivariate observational social research studies are unjustifiably reported in a 'causal' way, there is a real risk that the research 'consumer' will be misled.

The first thing to acknowledge is that life is complicated, of course, so it would be optimistic to expect a simple account of interrelationships among the available variables, particularly in family and early years studies. Indeed, can we even expect to discover causes from an observational research study? By way of introduction, the Babones paper reminds us that there are two main types of social science research data, qualitative (e.g. interview transcripts) and quantitative (measurements or survey question responses), and also two main research strategies, interpretive and positivist:
  • Positivist research should be based on strong prior theory, with the data being used to challenge (test) that theory. This means that positivist research can often make causal claims, if the data confirm the theory.
  • In contrast, interpretive research seeks to draw out theory (or meaning) from the data. It thus follows that interpretive approaches need minimal amounts of prior theorising, but cannot make strong claims about causality.

A general if simplistic presumption prevails that quantitative research is 'positivist', while qualitative research is 'interpretive'. However, Babones points out that this is not entirely the case, and argues that much quantitative research reported in the literature is not truly positivist. This is because the prior theorisation is inadequate or insufficiently explicit, or because the model used in the analysis was tweaked a number of times en route, so that the findings are actually more 'emergent' than supportive of prior theory. This is a crucial point, and it undermines confidence in the published causal claims. It also partly explains why different studies on the same topic so often give conflicting understandings of the associations between an outcome of interest and the many potential explanatory variables.

Regarding the role that statistical techniques can play, Babones is very positive, stating that they can lead to findings that are "meaningful, understandable and applicable" and that they offer "the incredible scope and power that can only be achieved through the application of multivariate statistical models to data derived from sample surveys". He believes that most often the problem is that quantitative sociology "unnecessarily retains the philosophical, rhetorical and methodological baggage" of a positivist approach to research. As a remedy he urges that it would be possible, and useful, to develop an interpretive approach to the analysis of multivariable quantitative data sets, and he characterises this by deploying an evocative analogy:

“…relationships among these observed variables are the measurement tip of the causal iceberg. The task of interpretive social science is to surmise what lies unobserved beneath”.

Babones concludes that more realistic policy implications could be formulated from research providing understandings of causality derived from interpretive quantitative sociology, involving triangulation of findings from different sources and approaches, together with thorough reflexivity. Let's do it!

If this blog echoes your experiences, or suggests counter-arguments, I would be interested to hear from you. Dr Pam Warner.
