Zigzagging and proud of being the infidels of science

This is the text of my keynote introductory talk at the State of the Art of Realist Methodologies (The Leeds Club, Leeds, UK, 4th November 2015).

The slides associated with this talk may be found here: methods and metaphors nde


Our concern is to reflect backwards into the future of realist methodology: to consider the insights we can gain from bringing a realist ontology that accepts the real—causal, stratified, emergent, and independent of our knowing it; a relativist epistemology—weakly constructed [theoretical] accounts of this stratified and emergent real that have enough power to direct us to methods to test these constructions and to justify our choice of cases; and a methodological strategy that couples abduction—a creative endeavour of meaning-making any pragmatist (following Charles S. Peirce) would recognise—with retroduction—the transfactual search for the essence of things—into play in a realist methodology intent on addressing real-world problems.

Ours is a practical endeavour in which methodological insights are born of empirical research practices. There is an invitation from another pragmatist, Robert Park, which guides one part of a realist research methodology:

You have been told to go grubbing in the library, thereby accumulating a mass of notes and liberal coating of grime. You have been told to choose problems where you can find musty stacks of routine records based on trivial schedules prepared by tired bureaucrats and filled out by reluctant applicants for the aid of fussy do-gooders or indifferent clerks. This is called ‘getting your hands dirty in real research’. Those who thus counsel you are wise and honourable: the reasons they offer are of great value. But one thing more is needful: first-hand observation. Go and sit in the lounges of the luxury hotels and on the doorsteps of the flophouses; sit in the Orchestra Hall and the Star and Garter Burlesque. In short […] go get the seat of your pants dirty in real research (cited in Hammersley, 1989: 76)

Such practical, hand-dirtying, pant-dirtying engagements can surprise. But for realists, mechanisms generatively shape and exert their powers on what we can observe. They may or may not be observable as material practices; they may or may not be expressed in the semiotic utterances and frames of reference of research participants. These generative mechanisms do bring about changes in material things. This does not make mechanisms ‘theoretical’ and realists of the middle range ‘idealist’, as has been suggested in a recent ‘immanent critique’ (Porter, 2015). This methodology recognises that mechanisms are real, independent of our knowing them, and causal. We will always have to relate concepts to social processes.

Claiming these properties for mechanisms is part of the methodological conundrum of explaining how we get from a feeble idea about a scientific problem to a fragile account of what works for whom in what circumstances, when, and why.

Inevitably we fall back on metaphor to describe these complex relational processes in our methodology—mine is zigzagging. Gill and Ana will draw on other metaphors—being nimble, avoiding whirlpools, levels, layers, and ladders of explanation.

In developing my account of a zigzag I am interested to understand how, through my research—investigating health and health needs in slums in Mumbai and with and on low-income populations in Leeds, explaining social exclusion and vulnerability—I wrestle with and arrive at an account that always includes in one way or another regularities leading to describable outcomes, twisted and shaped by generative mechanisms firing in particular contexts across space and time. And thinking about these practical problems of explanation plunges me into the disputation of methodology.

The challenge of the realist question—what works for whom, in what circumstances, when, and why?—is to produce in some way a context, mechanism, outcome configuration. Ray Pawson described the CMO configuration as an ‘ugly circumlocution’ (Pawson, 2013:21), which both sums up the trouble with the matter and reminds us of another ugly circumlocution, the Circumlocution Office Charles Dickens describes in Little Dorrit. Much of Dickens’ account is about getting nothing done, the wastefulness of the civil service, which, of course, is a way of doing policy realists know a great deal about. As an example, Harold Macmillan, UK Minister for Housing (and later Prime Minister), responding to angry public demands that the government do something during the great London smog of December 1952, observed that:

We cannot do very much, but we can seem to be busy—and that is half the battle nowadays (in Ascherson, 2015).

It still is, it would so often seem, and ‘doing nothing while appearing to do something’ may frequently be a generative mechanism and at least part of a programme theory. These invisible (or disguised) generative mechanisms are a challenge for realist explanation. Understanding what these mechanisms are is a regular topic of conversation on RAMESES.

Stepping back a little, there is something else in Dickens’ account of the Circumlocution Office, and a theme he wrote of often, which, I suggest, helps us along in debates like these. Do we bend to the will of authority, or bridle against its frequently impenetrable arrogance and stupidity:

Such a nursery of statesmen had the Department become in virtue of a long career of this nature, that several solemn lords had attained the reputation of being quite unearthly prodigies of business, solely from having practised, How not to do it, as the head of the Circumlocution Office. As to the minor priests and acolytes of that temple, the result of all this was that they stood divided into two classes, and, down to the junior messenger, either believed in the Circumlocution Office as a heaven-born institution that had an absolute right to do whatever it liked; or took refuge in total infidelity, and considered it a flagrant nuisance.

Realists are, of course, infidels, our methodology a flagrant nuisance. The last thing we would ever dream of doing is imposing a method (a tool, a trope) because we felt it was absolutely right to answer all realist questions. We are encouraged to be disputatious, adopting Popper’s stance that knowledge is facilitated through criticism, and Donald T. Campbell’s (1988) ‘disputatious community of scholars’, whose intellectual debates help our research, evaluation, and methodology to flourish through listening to each other’s arguments and counter-arguments. This is an open system of criticism and support that brings multiple perspectives to bear, both in getting at the real that remains independent of our knowing it and, in the crucible of substantive research, in continually developing the emergent and transitive nature of our methodological enquiry.

Which brings me to my metaphor of the zigzag. I can’t claim it to be original. I found it in Imre Lakatos’s account of the logic of mathematical discovery, which Ray Pawson suggested I read. Here the theoretical progression of maths is understood through a discussion between teacher and pupils tracing the proofs and refutations of Eulerian geometry. Through the interlocution of one of the students, Lakatos notes:

Discovery does not go up or down, but follows a zig-zag path, prodded by counterexamples, it moves from the naïve conjecture to the premises and then turns back again to delete the naïve conjecture and replace it by the theorem. Naïve conjecture and counter examples do not appear in the fully fledged [inferential] structure; the zig-zag of discovery cannot be discerned in the end-product (pg. 42)

This, I contend, is precisely the way of realist methodology. Constantly we zig-zag between theory and empirical investigation, abducting from idea to the testing of that idea, bringing to bear, like the pragmatists and through creative, innovative research, the choosing of cases, the comparing of cases, the method in the service of the idea, the teacher-learner relationship (inverted from the co-production and constructivism of much of social science). But that is only the zig; the zag is retroductive, the working out of how empirical instances are accounted for in theories of the middle range. Here we part from the pragmatists, to make causal claims (however fragile) for what works for whom in what circumstances and why.

To recognise that ideas are as important to research (and to explaining the social world) as measurements is axiomatic to this methodology. Ideas—‘concepts, meaning, and intentions’, as Joe Maxwell (2012:18) reminds us—are:

as real as rocks; they are just not as accessible to direct observation and description as rocks. In this way they are like quarks, black holes, the meteor impact that supposedly killed the dinosaurs, or William Shakespeare: we have no way of directly observing them, and our claims about them are based on a variety of sorts of indirect evidence.

Which brings me to my conclusion, the answers to the question—‘So what!’. I have three of these.

My first ‘so what!’ is explanation—so much of social science and human science research produces decorative and descriptive assemblages—some fine stories about lived experience from both quantitative and qualitative research. Realists claim to explain causal powers in systems. Realism seeks explanation beyond events and experiences through bringing evidence into a relation with history and social process.

My second ‘so what!’ leads on from the first. Our methods wrestle with and address complexity, causality, and claim. We ask what works for whom in what circumstances and why. Now one way to take this question is into a fractured account of social phenomena. Realist methodology avoids disappearing down this relativist wormhole through comparison. We mobilise comparative cases that are linked together by theory, or, put less grandly, ideas. Realism is a methodology of casing, working out what this is a case of in an open social system.

With an aim of producing such complex causal explanation, there is no heaven-born, institutionally ratified and kite-marked toolbox to dip into for the right replicable method to tell us what works. And, linked to this protocol-bound science in interesting ways, the claimed-for validity of co-produced description, however participatory, will never be enough. My third ‘so what!’ is to emphasise that we are and should be proud of being the infidels of science, promoting our flagrant nuisance of a methodology at every turn.


Posted in generative mechanisms, realism, refining theory, abduction, depth ontology | Leave a comment

From love to telling a good story: thinking about impact, research, and research training in PhD research


As is the case for most social science researchers, or indeed any researchers I imagine, I do the research I do because I think it is interesting. It’s the part of the title of my talk which I call ‘from love …’, but of course that is not the whole story.

The other part of the talk is about wanting to make a difference, and this is, I think, about ‘telling a good story’. The story of what constitutes a good story, and how what we tell makes a difference is the focus of this talk.

The latest way in which we have been asked to evaluate how our research makes a difference adopts the term impact, which in turn has led to an unpleasant neologism: our research must be impactful, evidently.

But before considering impact, let me return to love. I suspect that if I asked each of you to reflect on why you are doing the research you do the answer would go something like this: it is part of my biography, it is, in effect, part of who I am, socially, politically, culturally; and because of these dimensions, the way in which I choose what to study sits in this web of sociological account.

When I talk with undergraduates about choosing their topic for a dissertation I have a list of criteria which includes all that you would expect: is what you are proposing practical? Is the potential research ethical? How does it relate to the course you have been doing? Is it an opportunity to showcase your knowledge?

But right at the top of my list of criteria is to always ask: does this topic really interest you? This is why I research what I research. As you will know only too well, as you immerse yourself in your chosen research you will come to obsess about it. You will wake up thinking about it, go to sleep worrying about it, and, eventually, it will all become a little bit pathological: you will dream about it.

Well, this account of ‘it fascinates me’ is an individualising account of the research we choose to do. But no sociology of research can conceive of our choices without embedding these in the mechanisms, the powers and liabilities, that shape the research we do. I’ve alluded to some of these in the list of criteria for students’ choices—is the research doable, achievable with the resources available?—which speaks to the institutional powers and liabilities that define the research we do. It is a matter of fact that a PhD is three years long, an MA dissertation must fit within a year, and a research proposal has time limits, strictly imposed.

There are pressures within and beyond the institution that ‘shape’ our research. The process of ethical approval is not value neutral. There are good examples of how ethics review moulds research to ‘acceptable’ methodological approaches, limits some kinds of potentially valuable research, decrees some research off limits, makes judgements about what research is worthwhile and likely to produce valid conclusions.

The point here is not to acquiesce to these powers and liabilities unquestioningly, but to accept the existence of these extrinsic forces and, as Martyn Hammersley has recently written, to question their legitimacy. Researchers face the consequences of these forces. We must critically engage with them.

In my own research this has meant I have had to ‘train’ NHS ethics review boards in qualitative methods. How do you explain to a panel, used to reviewing randomised control trials and occasionally having to deal with the ever so risqué observational study, that you propose to sample no more than 12 cases, through gatekeeper referral? No randomisation, no power equations to calculate sample size, and the account of the sample you have at the outset is, you propose, nothing like the way you intend to describe it in your analysis!

The points, in thinking about these powers and liabilities that inevitably shape our research, are twofold. First, we are obliged to think about the primary purpose of a PhD and MA—which, I want to argue, is about training: it is about becoming an expert in a discipline. And second, part of that learning and the acquisition of expertise is, inevitably, about critically engaging with the powers of institutions, amongst many others. I have purposefully left the ‘elephant in the room’, impact, out of the conversation until now. I will consider it in a few moments.

But first, I just want to make an observation about our disciplines. The fractal division of disciplines, to use Andrew Abbott’s compelling metaphor, means we often traverse different intellectual landscapes. But, for all the subdivision in fields / disciplines that happens, we always hold to particular epistemological positions. There are, we assert, legitimate ways of understanding the social world we seek to investigate.

In my view, a key part of doing a PhD and MA is to gain the disciplinary confidence to argue for the legitimacy of an epistemological place in the social sciences and, more broadly, in the contribution to knowledge from sociology / social policy and the fractal subdivisions of these fields.

These degrees equip those who do them with the scholarly skills commensurate with doing the ‘third degree’, which, I know, etymologically comes from somewhere else, but fits quite nicely here.

And what is more, alongside honing scholarly skills, everyone hopes their research will contribute to knowledge in their discipline / field. These contributions may be substantive, methodological, theoretical—or most likely contribute in all three ways to extending knowledge. The presentations today are doing this. Each of you is no doubt thinking about how your research will contribute to the academic field in which you are engaged—through papers written, presented, and published. You will seek to make a difference.

This academic contribution, this making a difference, is not ‘impact’ from your research as the term is now in common usage across universities.

Impact, or more appropriately, ‘the pathways to impact’ must, according to RCUK, show how research will contribute to ‘fostering global economic performance, and specifically the economic performance of the United Kingdom: increasing the effectiveness of public services and policy; enhancing the quality of life and creative output’.

This would be fine if we could equate benefit with impact. Who does not, in some way, want to make services better and people’s lives more fulfilled? If our research can achieve these things, then so much the better.

But impact is not conceived of in this way. Impact is increasingly directing research to consider short-term utilitarian goals at the expense of longer-term considerations like the contribution to academic knowledge and its wider benefit to society.

We are increasingly asked to show how we will measure specific objectives that are achievable and relevant in the time-frame (generally three years at most) of a project. The ESRC calls these SMART objectives.

This SMART model assumes our research is predominantly about applied problem-based knowledge, in which university researchers clank up against users telling them how to deal with some social problem, like a snooker cue ball cannoning off another ball to crash it satisfactorily into the pocket. Cause, effect, and the forces that intervene are all easily defined, measured, and recorded.

Of course academics do research like this. As an example, I heard recently of an impact case which went something like this: academic researchers map post offices and populations and identify post offices that might be closed down; the Post Office closes those post offices. Research like this is, as John Holmwood has suggested, most likely to engage the beneficiaries of the research (in this case the Post Office management, not the communities served by the post offices, one would imagine) as co-producers of its conclusions.

But, as I have argued, doing a research degree is quite different. It is about honing skills in discipline-based knowledge. Its purpose is best described through consideration of disciplinary hegemony and a focus on internal audiences.

This does not mean, of course, that PhD research should not make a difference. It does and should. The most effective way in which we can make a difference is through the expertise we have, and the insights we gain into that which we research.

Stefan Collini has recently argued, rightly in my view, that universities should provide a home to extend and deepen human understanding.

Impact does not begin to provide an account of what you are doing in your research. It is yet another transient formula arrived at to suit the purpose of current political thinking. It is mechanistic, economistic, trying to measure ‘making a difference’ in crude ways that lack any critical account of causality or consequence.

These observations bring me to the ‘good story’ in your PhD and beyond. This story is produced through the disciplined extension of knowledge through research that provides insight and understanding into that which we choose to research. Our story will transgress the limits to existing knowledge.

I can see this in my own PhD and subsequent research. Our story gets better, the message clearer, the confidence in the ways in which we are contributing to knowledge stronger. Underlying this is not mechanically applied social science, but theory honed and refined through many iterations.

I investigate the texture of inequality and inequities and I have a particular interest in differences in health and the relationships and mechanisms that perpetuate or reduce those differences.

I have presented this research to audiences as diverse as the many groups of students I have taught, panels of Supreme Court judges in India, and Her Majesty’s Treasury Seminar Series. It has been directly cited in court rulings to stop evictions in slums in Mumbai and in current coalition policy to extend the role of health visitors and increase their numbers.

In these senses it has had an impact; a crude causal story can be told. But that is just using the vocabulary and (neo-liberal) mindset of the moment. This innovative body of research has framed a number of debates around how services should be delivered. As Carol Weiss noted in 1979 (note the date), the research makes a difference because it has had an incremental effect on policy and practice through presenting an extension of knowledge and insight.

Allow me to make these points with a further example. Many of you will have heard of this book, The Spirit Level, published in 2009 by Richard Wilkinson and Kate Pickett. You may be less familiar with Wilkinson’s other publications: Unfair Shares, published in 1991, Unhealthy Societies, published in 1996, The Impact of Inequality, published in 2005; and there were other publications before these, right back to his Masters in Medical Science thesis ‘Socio-economic factors in mortality differentials’. Each covers largely the same ground, each is pitched to a particular audience, each tests and refines the theory that more unequal societies are less healthy. Up to The Spirit Level, Wilkinson’s arguments influenced a rather focused group of academics and interested activist NGOs.

Today we would be obliged to say that The Spirit Level was ‘impactful’. But who exactly could have predicted this? Who could have charted a causal link between yet another publication on the same theme and its take-up by a then ailing New Labour Government, desperate for new ideas? Could Wilkinson and Pickett have written SMART objectives, measuring specific objectives that were achievable and relevant in a clearly defined time-frame?

I suspect not. I certainly could not in my research; although, like many of my colleagues I’m getting quite good at telling a story of impact in retrospect, and, as a result, becoming quite a crystal ball gazer when I write proposals.

There are no guidelines to steer you in producing research with impact from your degree, nor should there be. Your PhD and MA are about producing work that can be disseminated and that can make a difference.

You will find ways in which you will want to disseminate your research, to peers, to participants, to organisations that have a particular interest, to gatekeepers. When you get to your viva one question you are guaranteed to be asked is where do you plan to publish? That question rests on an evaluation of how good your research is. The story you can tell from that research depends on this.

The difference you will make from your research cannot be measured through short-term utilitarian goals and mechanistic pathways to impact. The PhD and MA are about becoming a disciplinary expert, confident that the social world can be investigated using the methods you are using to say something legitimate and useful. Each is a stepping stone along the way towards extending and deepening human understanding.




Posted in impact, PhD research | Leave a comment

The limits to theoretical saturation in realist explanation



Janice Morse (2015) is quite clear what theoretical saturation is not in her recent editorial in Qualitative Health Research. It is not the accumulation of events to a point where we have heard everything there is to be heard about a matter. Theoretical saturation happens, according to Morse, at a quite different level of abstraction: at the level of characteristics within categories. The theories Morse is thinking about are statements that in some way account for all the events recorded in the data. For realists, this reliance on events is the limitation of theoretical saturation.

The foundation of theoretical saturation in grounded theory is data. Repeating an idea she has discussed elsewhere (Morse, 1995), Morse (2015:587) points out that ‘in qualitative enquiry, the data at the tails of the curve are equally important and must be deliberately collected until adequate’. In other words, qualitative researchers seek out what quantitative researchers might refer to as outliers. These are unusual cases that present theory discovery with paradox, uncertainty, and difficulty. Account for these in a theory and it will be a stronger, more saturated theory. Buttressing this comprehensive collection of data across a putative distribution is a call to replicate data, which means ensuring ‘several participants have essential characteristics in common’ (587).

These strategies yield rich data, but for theoretical saturation to happen data must be known intimately. At the same time, the methods of grounded theory—memo writing and coding—intentionally withdraw these data from the site of their production. As Morse observes, ‘the process of coding into categories removes the experience from the individual participant and is the first step on the process of conceptualisation, synthesis, and abstraction’ (588). Theoretical saturation rests on strategies to ensure the collection of comprehensive and replicated data, and an analysis through immersion and abstraction from context.

Realist methodologists (Porter, 1993; Pawson, 2006; Maxwell, 2012) accept the notion of theoretical saturation rather uncritically. Porter (1993:599), for instance, citing Anselm Strauss (1987), contends that his research investigating racism amongst intensive care unit staff ‘continued to a point of “theoretical saturation”’. I’m not sure it did, unless Porter means theoretical saturation in the way Morse suggests it is not—when everything there is to say about the matter is said.

Even if this definition of theoretical saturation is accepted, it sits at odds with a realist methodology. The problem for realists is not that new events will be found that need bolting into the theory. This is a trivial argument. New events will always happen; the social world is dynamic and only relatively enduring. It is the nature of events that is the problem. Events are empirical accounts that may be observed, listened to, and faithfully recorded. But they are not the end of the matter in the stratified and depth ontology of a realist methodology. A claim to theoretical saturation is a claim to reify structures, the relationships that happen in particular contexts. These are expressed in some way in events and therefore in data. But in grounded theory, as Morse points out, characteristics within categories are purposefully withdrawn from the contexts of their production. This is a flat, empiricist ontology. It is quite at odds with a realist explanation, which explicitly accounts for context and for generative mechanisms as powers, liabilities, and dispositions that shape the expression of events. Most often these are not amenable to measurement (as data). We make claims for them. They are theories, which, as Joseph Maxwell (2012) contends, are as real as rocks. In grounded theory methodology theories emerge from data. In a realist methodology theories are as real as the data.

It is because of this depth of reality that theoretical saturation can never be achieved in realist research. The best we can manage is a fallible model (Ragin, 1992; Lakatos, 1976) that brings evidence into the best available relationship with theory. This model will have all of the features of good research Morse lays claim to—it will account for the evidence, surprise, excite, and extend understanding and insight about a social process—but it will never saturate the matter. It awaits the next investigation, where the model will be tested, refined, and judged once more.

References to follow

Posted in depth ontology, grounded theory, theoretical saturation | 1 Comment

I’m not dancing, I’m zigzagging

A second outing for a methodological investigation considering the metaphors we use to describe qualitative researching and the metaphors participants use to describe their experiences. My intention is to move from a metaphor of dance to a metaphor of zigzagging to more precisely explain the methodologies of realist (qualitative) research. I’m not dancing, I’m zigzagging was presented at the @LSJ_Leeds Seminar at Leeds University on the 18th March 2015.

Posted in abduction, generative mechanisms, grounded theory, qualitative longitudinal, refining theory | Leave a comment

Positivism, realism, and the experiment

Imre Lakatos wrote somewhere, and I can’t remember where, that if we are to talk of positivism then we must define what we mean before we start. In thinking about the possibilities for experimentation and realism I start by defining positivism before going on to consider Andrew Hawkins’ (2013) thoughtful methodological account of an experiment in realist research.

I draw on four ‘fundamental rules’ of positivism from Cohen (1980). First is phenomenalism: positivists banish the essence of things from any rational discussion; their focus is observable manifestations. Second is the rule of the nominal: in denominating particular instances of things, these things share the same fact, but the term does not relate to some general property. Third is an assertion that there is a unity of science, which adheres to a specified and mutually understood set of rules. This allows for the design of definable forms of enquiry (also known as protocols) that are developed before the investigation and must be adhered to throughout any experiment. Finally, the findings from any enquiry must be reducible to normative statements that describe a fact in such a way that prediction can be made from that fact, allowing technical prescriptions to be made.

For realists each of these rules is problematic. Phenomenalism is an expression of a flat ontology, an assertion that all that can be learnt of the social world is observable in an empirical domain. Pejoratively, positivism is empiricist. Realists contend that social phenomena are far richer and deeper than this. They are, as Andrew Hawkins (2014) points out, stratified in some way, with mechanisms as powers, liabilities, and dispositions (many of which are not amenable to direct observation or measurement) acting in actual ways on the empirical and observable phenomena we seek to investigate and explain.

The unwillingness in positivism to recognise (and name) the cause of things inevitably leads to nominalism. This, for realists, is an abstraction of that which can be observed (as a regularity or outcome for instance) from the ‘ugly circumlocution’ (Pawson, 2013:21) of context and mechanisms of which they are part. Nominalism is reduction. It is also reification.

Things (often called variables) shorn of their relations are essential for positivists to proceed (Byrne, 2002). Only in this state are they amenable to rule-laden investigation. There are no relationships to complicate investigation. Only that which can be acted upon and instrumentally measured in some way is permissible. Validity lies in the design of the procedure. Design, therefore, is valorised at the expense of creativity, ideas, and interpretation. For realists, the rigour of design is important, but the direction of decision making is quite different. Experiments are well designed in the service of ideas, to which they are subordinate.

Realists don’t produce normative statements from their research. They simply can’t, because the real is independent of our knowing it and, try as we might, we will never adequately account for real social phenomena. There are reasons for this that positivism simply ignores—open social systems, mechanisms that are invisible, immeasurable, or obfuscating (and maybe dormant, but not latent—see below), and a recognition that mechanisms, contexts, and outcomes in the social world are only ever relatively enduring. Realists recognise how science progresses through the incremental accretion (and occasional punctuated equilibrium) of knowledge, not in the heroic leaps of Whiggish positivism.

Now, the interesting thing for me about Andrew Hawkins’ paper is that it deals with these four fundamental rules of positivism well and elaborates the realist alternative. The experiment he has done is not positivist in the way outlined here; it is realist. The experiment is set up to test ideas. The design does not come first, but is shown to be fit for purpose to test ideas about how and why mentorship of students works. Identifying a quasi-experimental and a quasi-control group is quite acceptable because they are not groups, but cases developed to bring together particular configurations. They are purposefully selected to allow theories to be tested in each case and between cases. The use of ANOVA, Cohen’s d, and t-tests to measure variance, effect size, and fit is not at odds with realist approaches, so long as the prior purposive work remains associated with these measures.
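A minimal sketch of this direction of travel, not drawn from Hawkins’ paper and using entirely hypothetical scores for purposively composed mentored and un-mentored case configurations, might look like this:

    # A hedged illustration in Python: hypothetical outcome scores for two
    # purposively composed case configurations (not Hawkins' data).
    import numpy as np
    from scipy import stats

    mentored = np.array([62.0, 58.0, 71.0, 66.0, 69.0, 74.0, 60.0, 65.0])
    unmentored = np.array([55.0, 61.0, 52.0, 59.0, 57.0, 63.0, 50.0, 58.0])

    # Welch's t-test: is the observed difference worth carrying back into the theory?
    t_stat, p_value = stats.ttest_ind(mentored, unmentored, equal_var=False)

    # Cohen's d, using the pooled standard deviation, as a measure of effect size
    pooled_sd = np.sqrt((mentored.var(ddof=1) + unmentored.var(ddof=1)) / 2)
    cohens_d = (mentored.mean() - unmentored.mean()) / pooled_sd

    print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")

The point is that the cases are composed first, to test a theory about how and why mentorship works; the statistics then summarise the comparison rather than drive it.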

It is easy to be waylaid by the ‘cocked hat type’ distribution that is ‘sufficiently nearly normal’, which Student (1908: 18) suggested might be used for laboratory and biological experiments. Positivism and the use of experimental approaches can be conflated. Indeed it might be useful to avoid using terms normally associated with experimental trials, such as the experimental and control group, to describe the mentored and un-mentored students in the experiment. This language of the trialist has particular meaning and, with it, implicit assumptions. Random allocation to experimental and control groups allows, it is claimed, for latent variables to be assumed to be latent for one case and therefore latent for all cases. They have no value (Bollen, 2002). (A reason not to use latent to describe mechanisms, I would suggest.) Even the more sophisticated and recent techniques developed to account for multiple contexts in cluster, nested, partially nested, and multiple-membership approaches to randomised control trials (Roberts and Walwyn, 2012) hold tight to a positivist model of the world. This nod at the complexity of lived experience is at first sight alluring, but remains significantly constrained through the assumption that the identification of a variable—the number of times a patient visits a therapist for a particular treatment, for instance—adequately explains the relationships that led to the choices, however constrained, that conjoin treatment to outcome in some way. As Hawkins (2014) shows in his example of an experiment in realist research, realists do more than add a few observable mediators and moderators to the descriptive assemblage of statistical tables; they seek to explain causal generative mechanisms through testing theories about these mechanisms.

Posted in experiment, generative mechanisms, positivism, RCT | Leave a comment

The ingredients of realist research for a disputatious community of truth seekers.

Realist researchers are, to use Ray Pawson’s term, a disputatious community of truth seekers. For me, this means that we agree (most of the time) on some or all of the underlying methodological principles, or ingredients, of realist research, but we will never agree on rules for doing social research. Each research project requires that we work out the best approach to its investigation.

These disputations mean I cannot point you to the methods cookbook, ‘How to do realist research properly’. Cookbooks we use when we want to knock up a nice Spanish stew, not to design investigations.

But as I mentioned, there are a number of methodological ingredients. As researchers we are faced with a challenge similar to that faced by a professional cook on a tea-time cookery programme, who is presented with difficult-to-deal-with ingredients and told to produce an edible meal. As with the cooking, expertise, creativity, and experience are brought to bear on methodological ingredients to produce explanation. These methodological ingredients include:

  • A contention that there is a reality independent of our knowing it.

  • Neither empirical observation nor theories about the world account for social reality; it is far richer and deeper than that.

  • We most often describe reality as stratified: the empirical, the actual, and the real.

  • In explanations we want to account for real causal mechanisms. These are, according to Roy Bhaskar (2008), the powers of things, which unlike events persist or are at least relatively enduring.

  • All researchers construct accounts of social objects. But as Joe Maxwell (2012: pg.13) contends, ‘our concepts refer to real phenomena, rather than being abstractions from sense data or purely our own constructions’.

  • These weak constructions raise consciousness about that which we seek to interpret and explain.

  • Investigation zigzags between ideas and evidence.

  • Methods and samples are always chosen in the service of testing, refining, judging, elaborating, …, ideas.

  • Explanation is an effort to work out the relations between ideas and evidence.

  • These relations we present as models (or less grandly, ideas on the backs of envelopes—cf. Greenhalgh et al., 2009), which can be transferred from one complex system to another to be tested and refined.

  • Interpretations and explanations—the insiders’ perspectives and the outsiders’ understandings—cannot be separated and they are always provisional.

  • Explanations are implicated in theories of the middle range, which seek to explain what works for whom in what circumstances and why (Pawson and Tilley, 1997; Pawson, 2006; 2013).

  • For some realists the idea of C+M=O, that ‘ugly circumlocution’ (Pawson, 2013:21), is the best way of checking the explanation is complete. But all realist explanations will explain the entanglement of how generative mechanisms act on social regularities in specific contexts to produce particular outcomes.

(after Emmel, 2013)

Good luck mixing the ingredients!

Posted in generative mechanisms, realism, refining theory, social reality, weak constructions | Leave a comment

Ontology and methodology, Margaret Archer, and a far better non-ableist explanation

Last week we started the MA module Research Strategy and Design in the School of Sociology and Social Policy at Leeds. Like any methods course should, I think, we start with ontology, move on to epistemology, and by week 2 we are working out research questions. In our discussion about the relationship between ontology and methodology I put this quote up on the screen from Margaret Archer (1995:28):

‘An ontology without a methodology is deaf and dumb; a methodology without an ontology is blind. Only if the two go hand in hand can we avoid [research] in which the deaf and the blind lead in different directions, both of which end up in cul-de-sacs.’

Rightly, students in the group accused Archer of ableism. I challenged them to go away and come up with a non-ableist alternative.

Inga Reichelt came up with this, which I think is rather better than the original:

“An ontology without a methodology is like a bee queen without a bee colony to reign over – she might have a plan and idea on how to lead the bee hive, but not the power to execute any tasks. A methodology without an ontology is a colony of worker bees without a bee queen- a bundle of chaotic, disorganized tools. Only together can the two achieve their functioning and create a buzzing beehive of lively research.”

Posted in teaching realist methods | Leave a comment

An introduction to realist methods (for quantitative researchers)

Attached here is a presentation, An introduction to realist methodologies, I gave to the Partnership of Junior Health Analysts in Leeds. This is a presentation specifically for researchers more used to quantitative methods, in particular those crunching large datasets (which many of the analysts in the group collate for the NHS) and doing randomised control trials. When I asked at the beginning of the session if anyone had heard of realism, only one person confessed to knowing anything about these methodologies. The discussion afterwards focused on the potential relationships between the research these data scientists and clinical trialists are conducting and realist methods. The potential relationship between RCTs and realist methods is indeed an area that needs to be considered in much more detail, I think.

Posted in RCT, teaching realist methods | Leave a comment

The meaning of abduction


Ingo Rohlfing wrote of my post responding to Howick et al:

“The discretion and judgement emphasised here lead away from both deductive and inductive approaches. Mechanistic reasoning is the alternative, despite its accepted weaknesses.” I agree mechanistic reasoning does not (have to) rely on induction or deduction. But creativity hardly strikes me as a good standard for making causal claims about mechanisms. I would think it is abduction or inference to the best explanation (which also requires some creativity, but all social science does).

Thanks Ingo

I wonder what you mean by abduction? Do you, like Blaikie (2010), treat abduction as a self-sufficient research strategy (see Hammersley, 2014), or do you follow Peirce (1903:235), who emphasised the guessing and creativity of an abductive approach?

For Peirce, abduction:

‘… allows any flight of imagination, provided this imagination ultimately alights upon a possible practical effect; and thus many hypotheses may seem at first glance to be excluded by the pragmatical maxim that are not really so excluded.’

I agree with this version of abduction and also with your observation that ‘creativity [is not] a good standard for making causal claims about mechanisms’. In my response to Howick et al, I was concerned with two issues. First, like Cartwright and Hardie (2012), I think the imposition of the experimental model on complex phenomena is designed to remove the ‘guessing’ or ‘creativity’ from the very process of doing research. I return to Cartwright and Hardie’s point: deliberation is not second best, it is an essential part of the process. We might get it wrong quite a lot of the time, of course, but then we must invoke a second point made by C.S. Peirce. As he asks:

…. What is good abduction? What should an explanatory hypothesis be to be worthy to rank as a hypothesis? Of course, it must explain the facts. But what other conditions ought it to fulfil to be good? …. Any hypothesis, therefore, may be admissible, in the absence of any special reasons to the contrary, provided it be capable of experimental verification, and only insofar as it is capable of such verification. This is approximately the doctrine of pragmatism.

I’m not entirely sure I like the term ‘verification’; I prefer testing, refining, judging, and so on (but then much debate has happened in the social sciences since 1903). What I do like is that this definition of abduction is not self-sufficient. It is explicitly part of a process of researching that progresses through explanatory hypotheses. I think about these as theories of the middle range (after Merton, 1968). Raymond Boudon describes these as a set of statements that ‘organise a set of hypotheses and relate them to segregated observations’ (1991: 520). My point, unlike Howick and colleagues (2013), is that guesses, creativity, deliberation, and informed judgement are very much part of any research endeavour. We must valorise these theories, hypotheses, and conjectures. They are the ZIG that precedes the ZAG in any research endeavour. In short, ideas precede method. So often research is presented as otherwise.

Posted in abduction, theory of middle range | Leave a comment

Five lessons from Weick, K.E. (2007) ‘The generative properties of richness’, Academy of Management Journal, Vol. 50, No. 1, 14–19.

1. locate the scene deep inside one’s own head, catch the significance of the scene, use that significance to reanimate analysis

2. a head full of theories … increases requisite variety

3. what events mean, what is significant in their unfolding, may become clearer when compared with other events

4. Clifford Geertz said, “Social scientists are professional second guessers”

5. ‘In my own theorizing I often try to say things without using the verb to be.’


Posted in evidence, purposive work, refining theory | Leave a comment

Mechanisms, the problem and the solution: a response to Howick et al. (2013)

I cannot disagree with the conclusion of Howick and colleagues’ 2013 paper, ‘Problems with using mechanisms to solve the problem of extrapolation’. Uncertainty and scepticism will always be part of medicine, at least an account of medicine that does not succumb to myth making and a Whig history. Extrapolation of findings from a sample to a population will always be a methodological challenge that is best met through the caution of knowing that we must ‘learn to live with a much higher degree of uncertainty and scepticism about the effects of many medical interventions, even those whose effects have been established in well-controlled population studies’ (288). But, I’m not sure Howick and colleagues offer much of a solution. In fact, I think they throw the baby out with the bath water. Mechanisms may well have all the weaknesses they suggest, but mechanisms are also the solution to the scientific progress (as knowledge cumulation) they aspire to, I argue.

Wisely, Howick and colleagues (2013) do not deal with mechanisms as empirical cogs in the previously unopened black box of Evidence Based Medicine (EBM). They caution against a model that makes the ‘unwarranted ontological assumption that mechanisms are productive of regular relationships between inputs and outputs’ (284). But they also seem to be hamstrung, to a degree, by their observation that ‘the functioning of most mechanisms is discovered in tightly controlled laboratory experiments that expressly exclude as many potentially interfering variables as possible’ (283). They accept that ‘mechanisms in the human body and social world, especially those that are pertinent to clinically relevant outcomes are generally more complex than […] mechanical machines’ (284), yet their solution is to ‘roll out [randomised control] trials, modified to what is systematically observed’ (288).

In short, the hierarchy of EBM is reasserted. The epistemology of this position, which is never explicitly stated in a paper about epistemology, is that reality is forced to fit the experiment, rather than methods being chosen as a response to reality. One might conclude that this is not an epistemological treatment of mechanisms so much as an empirical justification for extending the experimental model.

Howick and colleagues’ criticisms of mechanisms are threefold and justified in my view. First, mistaken mechanisms may be identified, as knowledge of mechanisms is often lacking. Second, knowledge of mechanisms may lack external validity. The mechanism discovered in the tightly controlled experiment on the laboratory bench may well not be reproducible in a clinical setting. And third, a mechanism might produce paradoxical reactions; an anti-epileptic drug might prevent and cause seizures, for example, as these authors point out.

They offer two ways out of these limitations of mechanisms. The first is Daniel Steel’s solution of comparative process tracing. For Steel (2008:89), judgements are based on ‘inductive inferences concerning known similarities in related mechanism in a class of organism, and on the impact those differences make’. As with all analytic inductive models, the search is on for the best model to explain empirical observations. It is a process of testing cases and reformulating hypotheses, ‘until a point is reached where every new case investigated confirms the current hypothesis’ (Hammersley, 2014:18). At its most basic, once the mechanism is determined in the target population the knowledge that the mechanism happens in the study population (or model) is redundant. This for Steel is an ‘extrapolator’s circle’, which he proposes can be avoided through process tracing that reconstructs the path of mechanisms step-by-step to the end point in the model. This can then be compared with other organisms most likely to differ significantly, to identify if mechanisms (often most proximal to the end-point) are similar. Even this empirical method of process tracing cannot, however, resolve the problem of unknown mechanisms, claims to the external validity of the mechanism (beyond the experiment), or paradoxical end points. Howick and colleagues look to another approach in an attempt to address these problems.

Their second approach to mechanistic reasoning is provided by Cartwright and Hardie. I am not sure this case can be used to make claims for extrapolation. Cartwright and Hardie (2012: 45) state that ‘[external validity] is the central notion in the Randomised Control Trial (RCT) orthodoxy, and it does not do the job that it is meant to do’. Yet Howick and colleagues use Cartwright and Hardie’s case study to ‘show why [the case I discuss below] does not support mechanistic reasoning to solve the problem of extrapolation’ (286). In short, Howick and colleagues construct a straw man.

Nonetheless, it is worth rehearsing Cartwright and Hardie’s approach, because it shows how mechanistic reasoning emphasises uncertainty, invites scepticism, and undermines claims to external validity. Using RCTs of feeding programmes for infant health (i.h.) in India (Tamil Nadu, TN) and Bangladesh (BD), Cartwright and Hardie show how mechanisms must be abstracted when multiple contexts are considered. In the formulae representing the findings from these RCTs, a and a′ stand for unknown variables in the two settings. In Tamil Nadu, support factors (bm) empower mothers (em) to control resources in the food programme and feed their children. In Bangladesh, however, em does not pertain; instead mothers-in-law (eml) rule the roost and control food and, therefore, infant health, so support factors bml are necessary.

TN: i.h.(i)c = a1 + a2·i.h.0(i) + a3·bm(i)·em(i) + a4·z(i)

BD: i.h.(i)c = a1′ + a2′·i.h.0(i) + a3′·bml(i)·eml(i) + a4′·z′(i)

As Cartwright and Hardie (2012:29) note these represent ‘two causal principles with nothing in common except their abstract form’. The formula can be re-written so they do have one thing in common:

TN: i.h.(i)c = a1 + a2·i.h.0(i) + a3·bpw(i)·epw(i) + a4·z(i)

BD: i.h.(i)c = a1′ + a2′·i.h.0(i) + a3′·bpw(i)·epw(i) + a4′·z′(i)

The one thing in common is epw (educated person with power), with support factors bpw. Rewriting the equations in this way has abstracted them further from the real, in this case substituting an abstract powerful person for very real mothers and mothers-in-law. ‘This is’, to quote Cartwright and Hardie, ‘(nearly) vacuous’ (86).

These vacuous statements need bringing back to where the action is. Howick and colleagues invoke a putative World Bank consultant who prescribes particular treatments based on a knowledge of bpw and epw in each setting. They argue that this prescription may do more harm than good because of an ‘inability to identify all mechanisms’ (287). Quite so: World Bank consultants are probably not good at spotting mechanisms; programme workers are likely to be better, I’d suggest. But at least both have a theory (a mechanism) to work with, the ‘powerful person theory’. The job now is to put that theory to work and find out who the powerful person is, in what circumstances, why, and what support factors they need to improve infant health. An investigation of mechanisms might start with a statement about bpw and epw but will hopefully progress to bm and em, or bml and eml, or some other unknown combination.

Howick and colleagues do not adopt this approach. Rather, their solutions from the investigation of mechanistic reasoning are, first, and rightly, to ‘temper our confidence in all mechanistic reasoning’ (287), and second, to modify the experimental RCT design in response to that which is systematically observed. Cartwright and Hardie agree with the first part of this solution, it appears. They recognise that what works ‘over here’ might not work ‘over there’; the mechanisms that work in a feeding programme to improve infant health in Tamil Nadu may well not work in Bangladesh.

Hardie and Cartwright’s point is that this problem of external validity affects RCTs as much as any approach. The second part of Howick and colleagues’ solution does not, therefore, follow from the first, unless, as they seem to assume, all mechanistic reasoning is sophisticated induction. Most mechanistic reasoning, including Hardie and Cartwright’s approach, simply does not follow the analytic induction proposed by Daniel Steel. Cartwright and Hardie eschew an inductive approach with as much force as they do the deductive approach of the RCT. Their preferred approach is creativity:

[…] the orthodoxy [the RCT], which is a rule based system, discourages decision makers from thinking about their problems, because the aim of rules is to reduce or eliminate the use of discretion and judgement […] Deliberation is not second best, it is what you have to do, and it is not faute de mieux because there is no mieux. (Cartwright and Hardie, 2012:158)

The discretion and judgement emphasised here lead away from both deductive and inductive approaches. Mechanistic reasoning is the alternative, despite its accepted weaknesses. The challenge here is a methodological approach that can creatively zigzag between ideas and evidence to produce models that are constantly open to testing and refinement. In short, a methodology that does not ask the question ‘what works’, but ‘what works, for whom, in which circumstances, why, and when’. In addition, having outlined the weaknesses inherent in the deductive methods of the RCT, so often misbranded as the ‘gold standard’, I return to my key point: it is not the method that drives the investigation, but the question that must decide the method in knowledge cumulation for contemporary health care and beyond.

Posted in generative mechanisms | 2 Comments

‘Research relations in process’ and ‘subjects in process’: lessons from qualitative longitudinal research for sampling

Last week I gave a talk on sampling in qualitative longitudinal (QL) research as part of Bren Neale’s workshop on advanced methods in QL research for the White Rose Doctoral Training Centre. Through conducting QL research as part of Timescapes (http://www.timescapes.leeds.ac.uk/), I have learnt a great deal about sampling in qualitative research more generally. In particular, QL research foregrounds the ways in which we learn from the social processes of doing research.
Here are four key lessons learnt from bringing common ways of thinking about sampling into engagement with time in the research. First, drawing on Glaser and Strauss’s (1967) grounded theory, QL research emphasises how we must build research relations throughout the research, because we return repeatedly to participants. Maintaining a distance between observers and observed through the tabula rasa of the theoretically sensitive researcher, observation of interaction, and coding—the positivist twist in grounded theory’s methodology—is impractical. We must build ethical relationships of trust with participants if the research is to proceed.
Secondly, QL research questions the notion of theoretical saturation. All grounded theorists, from Glaser and Strauss (1967) to Kathy Charmaz (2014), advocate theoretical saturation: the point when no new properties emerge and sampling stops. Yet QL research emphasises how research is conducted in time. Barbara Adam (1990; 2008) reminds us that we rework and revoke the past, and that plans for the future inform material practices in the present. And, to paraphrase a line from Alan Bennett’s play The History Boys, life, like history, is ‘one bloody thing after another’. If this is so, then theoretical saturation cannot be achieved; conceptual density will always be provisional and ‘theoretical completeness’, as Glaser (2001) has it, can never be reached.
This means, thirdly, that our relations within the research, and therefore with participants, are always subject to revision. MQ Patton’s (2002) pragmatic strategies of purposeful sampling provide important insight here. Cases are purposefully chosen through any combination of Patton’s 14+1 strategies of sampling because they are information rich, contribute to answering research questions, and increase the credibility of the research with its audience. Drawing on the Oxford English Dictionary’s definition, pragmatism deals ‘with matters in accordance with practical rather than theoretical considerations […] aiming at what is achievable rather than ideal’. The temporality of QL research means that strategies of sampling are always under revision, as new insights are learnt and new material practices addressed. What is considered to be information rich and credible will change. Researchers must respond to these temporal flows in the research.
The idea of a temporal flow brings Jennifer Mason’s account of a theoretical or purposive sampling strategy to mind and, in particular, her description of this interpretative and inductive strategy as organic. Like Giampietro Gobo (2008), Mason recognises that the representativeness of the sample in qualitative research is something researchers learn about as the research progresses. The fourth lesson is that interpretation of representativeness must, necessarily, include an account of the historical context within which the sample speaks and the researcher interprets.
In short, the four lessons for sampling from QL research are—1) relations in research are reciprocal between researchers and participants; 2) the natural aspects of temporality play an important role in explanatory approaches; 3) the research must be responsive to the material practices of participants, researchers, and audience; and 4) biography and history intersect in the participants’ lives. An account of sampling in qualitative research (drawing on the lessons from qualitative longitudinal research) is an ongoing dialogue between ‘research relations in process’ and ‘subjects in process’, the title of my talk.

Posted in grounded theory, purposeful sampling, qualitative longitudinal, theoretical or purposive sampling | Leave a comment

Step 8: Researchers always bring prejudices, prejudgements, frames of reference, and concepts to their choice of cases


I am reading a paper by Rycroft-Malone and colleagues (2004) about what constitutes evidence in evidence-based nursing practice. This paper is interesting because it emphasises a plurality of evidence. The authors identify four sources of evidence—from research and scholarship, professional knowledge and clinical experience, local data and information, and patient experiences and preferences—while at the same time they consider this evidence, from whatever source, to be strongly constructed. My comments consider a realist response to this notion of construction of evidence, and the implications of knowing that researchers always bring prejudices, prejudgements, frames of reference, and concepts to their choice of cases.

The first point to make about realist methodology is that it accepts a broader definition of what constitutes evidence. On this point we can agree with Rycroft-Malone and colleagues. Further, we would strongly agree with the ways in which these authors move away from the propositional evidence that has traditionally informed practice: they point to the importance of professional tacit knowledge and experience and how this is crafted into understandings of particular issues, to how patient experience must be part of the account, and to the key role of context in any evidence collection.

However, we depart from Rycroft-Malone and colleagues’ conceptualisation of evidence. First, realists cannot accept the claim that evidence is strongly constructed. These authors claim that both propositional and non-propositional evidence is constructed. Certainly, we would argue, any evidence is constructed. The issue is whether that construction is strong or weak. Strong constructions are particular, emergent, and discovered from empirical evidence. If the evidence is strongly constructed then all it can do is describe what works for whom in a particular context. Evidence conceived of as strongly constructed provides opportunities for an endless stream of narrative accounts, bespoke stories to fit every patient’s, health care professional’s, or scientist’s construction in particular contexts.

If, however, the construction in the evidence is weak, as realists argue, then the realist question, ‘what works, for whom, in what circumstances, and WHY’, can be answered.

To take an example Rycroft-Malone and colleagues use, they discuss the uptake of low molecular weight heparin as antithrombotic prophylaxis for elective joint replacement. According to Ferlie, whose study Rycroft-Malone and colleagues draw on, the use of heparin ‘was influenced by the beliefs of a core group of orthopaedic surgeons, whose views were based on experiential knowledge’ (pg. 85). This can be read in two ways. For Rycroft-Malone and colleagues this is a strong construction: this insight shows how this particular group of orthopaedic surgeons view heparin; another group will think differently.

For realists, this points to a weak construction: some orthopaedic surgeons think this is an appropriate treatment, and there must be a reason or reasons why they think like this when much of the empirical evidence is equivocal. It is these underlying (often invisible or only indirectly observable) causes, which spur these consultants to choose whether or not to use heparin, that realism is interested in investigating.

The second and interrelated difference hinges on what empirical evidence provides by way of insight. Rycroft-Malone and colleagues’ account of evidence is ontologically flat, by which I mean that what is considered to be evidence consists only of what can be measured, observed, said, or recorded in some way, even when drawn from a rich variety of sources. Realists, similarly catholic about evidence, add another dimension to their account: they are interested in evidence that has a stratified and deep ontology. The surface appearance of things may well provide a rich description of what is going on, but it does not explain the powers, liabilities, and dispositions that lie beneath the surface of things (often quite deeply) and that explain the real.

In short, Rycroft-Malone and colleagues’ account of evidence is entirely horizontal. It is made up of strongly constructed accounts from science, practitioners, surveys of context, and patients. Realists add vertical stratification and depth to this account of evidence; they search for indirect, invisible, or even hypothesised evidence to get at the causes that lie beneath empirical observation.

Why does this matter? Well, the reason I am reading this paper is because we are designing a study with the provisional question ‘Is there a difference between the repositioning behaviours of patients and staff when using an alternating pressure mattress (APM) or high specification foam (HSF) mattress?’ This is a very important question to ask because the UK National Health Service spends a great deal of money (it is said to be between 4 and 5% of its budget) dealing with pressure sores. Leaving aside these costs, no one should have to endure these preventable wounds. In designing the study, we could draw on the kinds of flat empirical evidence outlined by Rycroft-Malone and colleagues: from research and scholarship, professional knowledge and clinical experience, local data and information, and patient experiences and preferences. The outcome will be a descriptive answer, ‘yes’ or ‘no’, or something in between. But we will not be able to explain why there is (or is not) a difference. To do this we need ideas about some of the assumptions that underlie and are implied in the overarching question articulated here. So, for instance, it might be that ward policy has a particular impact on the way in which patients are moved, nurses might approach the care of patients in different kinds of beds differently, or patients might find it harder to move around in one kind of bed and easier to care for themselves in another. Our investigation starts with ideas like these and uses methods (always in the service of ideas) to test them out. Our intention is to move beyond being able to describe instances towards uncovering the causal processes that explain observed differences.
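To make this way of working with ideas a little more concrete, here is a minimal sketch of how candidate ideas like those above might be written down as testable hunches before any fieldwork begins. It is an illustration only: the class, field names, and example entries are my assumptions for this post, not part of the study design, and the entries are hypotheses to be tested and refined against direct and indirect evidence, not findings.

```python
# A minimal, illustrative sketch: recording candidate ideas about why repositioning
# might differ between mattress types, so they can be tested and refined against
# direct and indirect evidence. All names and entries here are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class CandidateIdea:
    context: str                      # where or when the mechanism might fire
    mechanism: str                    # the hypothesised generative process
    expected_outcome: str             # the pattern we would expect to observe
    evidence: List[str] = field(default_factory=list)  # accrued as the study proceeds
    status: str = "untested"          # revised as ideas are tested and refined


ideas = [
    CandidateIdea(
        context="ward with a strict repositioning policy",
        mechanism="nurses follow scheduled turns regardless of mattress type",
        expected_outcome="similar repositioning frequency on APM and HSF",
    ),
    CandidateIdea(
        context="patients who can move independently",
        mechanism="the HSF surface makes self-repositioning easier",
        expected_outcome="more self-initiated movement on HSF than on APM",
    ),
]

for idea in ideas:
    print(f"[{idea.status}] {idea.context} -> {idea.mechanism} -> {idea.expected_outcome}")
```

The point of writing the hunches down in this way is simply that each one names a context, a hypothesised mechanism, and an expected outcome, so that methods can be chosen in the service of these ideas rather than the other way round.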

The same challenges arise for sampling in research. The ideas about why something is happening are always the starting point in making choices about who or what to include in the research. Researchers always bring prejudices, prejudgements, frames of reference, and concepts to their choice of cases.



Posted in evidence, purposive work, strong constructions, weak constructions | Leave a comment

Step 7: Theory (or put less grandly, ideas) always precede the choosing of cases


Source: polderfoto Mourgefile

The positivist twist in Barney Glaser and Anselm Strauss’s (1967) formulation of grounded theory was to insist on the theoretically sensitive researcher with no preconceptions. This was the working out of the implications of the philosopher John Stuart Mill’s tabula rasa: the researcher as a blank slate upon which the emergent characteristics of theory in research are written from the close observation and coding of interaction.

For realists this empiricism simply will not do. More recent versions of grounded theory (Corbin and Strauss, 2008) seek to modify theoretical sensitivity to include the agency of the social researcher. But grounded theorists remain agnostic about theory (Charmaz, 2006; Henwood and Pidgeon, 2003).

Agnosticism holds that the existence of anything beyond and behind material phenomena is unknown. But, as I argued in the last step, for realists, while real entities are independent of our knowing them, we can know real phenomena pretty well through working out the relation between ideas and (often indirect) evidence.

Necessarily then, theories, ideas, presuppositions, call this intellectual work what you will, always precede empirical work. Without theories we are blind, Michael Burawoy (2009) observes. Ideas are the expression of the social researcher’s agency in the research. They are the purposive work that precedes the purposeful choosing of cases in qualitative research.

Our aim through research is to refine theories of the middle range. These are not grand all-encompassing system theories. They are, instead, ‘special theories of greater or less scope, coupled with the historically-grounded hope that these will continue to be brought together into families of theories’ (Merton, 1968: 48). In other words, theories are fallible, the subject of revision, reinterpretation, and re-presentation.

As such we must start with ideas that we want to test. These ideas are the basis of the choices we make about who or what we want to do research with (the sample). In this model, research will be a process of constant accretion of direct and indirect evidence to test and refine ideas (Pawson, 2013). This can only happen, of course, if we start with ideas.

The ground-breaking work of grounded theory emphasises a process of theory building as active and dialogical (Byrne and Callaghan, 2013). But realist researchers reject Barney Glaser’s positivist twist to discovering emergent theories of the middle range. We bring prejudices, prejudgements, frames of reference, and concepts to the purposive acts of purposefully choosing cases. This is a very different kind of emergence, which I consider through further investigation of ideas and their construction in the next step.

Posted in grounded theory, purposive work, refining theory | Leave a comment

Step 6: Theories about social objects refer to actual features of the real world


Photograph: Rogan Josh mourgeFile

Joseph Maxwell (2012: 13) contends that:

‘our concepts refer to real phenomena, rather than being abstractions from sense data or purely our own constructions [… and more than this (on pg. 18), for realist researchers …] concepts, meanings and intentions are as real as rocks; they are just not as accessible to direct observation and description as rocks. In this way they are like quarks, black holes, the meteor impact that supposedly killed the dinosaurs, or William Shakespeare: we have no way of directly observing them, and our claims about them are based on a variety of sorts of indirect evidence.’

Let me start by dealing with the first part of this quote. Here Maxwell works out the implications of a stratified ontology for research. The focus on sense data and constructions reminds us that all we want to know about the social world is not available from empirical observation (as I discussed in Step 5). We (and indeed participants in research) simply can’t produce convenient descriptive fictions about social processes from experiences of events.

As I will explain in much more detail later, a realist methodology depends upon constantly trying to work out the relation between ideas and evidence. This is also Maxwell’s point in reminding us concepts, meanings, and intentions are as real as rocks. Once again he is emphasising the stratified nature of the real. He is also pointing to the ways in which we get at real explanations of social processes.

One challenge we face in a realist research methodology is the very problem of a stratified ontology. Knowing that empirical observation will not provide an adequate explanation of the real can be, potentially, quite paralysing. We can end up in a ceaseless ontological debate about the location of things and their causal powers.

Maxwell’s strategy, and the strategy of many realists, is to accept that indirect evidence of the causal powers of things allows us to work out, as best we can, social and institutional norms, values, and inter-relationships (Pawson and Tilley, 1997) in particular ecologically bounded cases (Harvey, 2009). The realist goal is to express these connections in particular contexts. We accept these to be actual, yet fallible, features of the real world.

As an example, Maxwell asserts in the quote above that what we know of William Shakespeare is based on indirect evidence. We know, for instance, that Shakespeare left his second best bed to his wife, Anne Hathaway. ‘I gyve unto my wife my second best bed …’ he wrote in his will. And we may assume that this was derisory and contemptible. But Carol Ann Duffy knows better, and gets nearer the real in her poem ‘Anne Hathaway’, from The World’s Wife:

The bed we loved in was a spinning world
of forests, castles, torchlight, clifftops, seas
where we would dive for pearls. My lover’s words
were shooting stars which fell to earth as kisses
on these lips; my body now a softer rhyme
to his, now echo, assonance; his touch
a verb dancing in the centre of a noun.
Some nights, I dreamed he’d written me, the bed
a page beneath his writer’s hands. Romance
and drama played by touch, by scent, by taste.
In the other bed, the best, our guests dozed on,
dribbling their prose. My living laughing love –
I hold him in the casket of my widow’s head
as he held me upon that next best bed.

To understand why Shakespeare bequeathed his wife his second best bed requires that we understand the norms of 17th-century England, as Carol Ann Duffy implies. The best bed was reserved for guests. The second best bed was the marital bed. Only when we bring theories and evidence into relation with each other can we explain the social and institutional values expressed in Shakespeare’s will.

Theories (or put less grandly, ideas) are the intellectual work we bring to bear in interpreting and explaining social phenomena. Ideas play a significant part in all realist explanation; indeed they precede any attempt to explain the social world. I turn to this precedence of ideas in my next step.

Posted in realism, social reality | Leave a comment

Step 5: Society is neither a thing that exists independent of human action, nor is it ever entirely a product of our action


source Huggy’s Eye

For realists, the goal of science is the identification of things and their causal powers. This goal cannot be achieved, Roy Bhaskar (1998) asserts, through the faithful recording of the natural and social phenomena we sense, observe, and record. Empirical observation simply does not exhaust the possibility of what really exists in the world. Realists reject this empiricist and flat ontology, preferring instead a stratified ontology, which, as I suggested in Step 2, distinguishes between events and the powers of things. We can go further and identify three domains: the empirical, the actual, and the real. In this stratified model, the domain of the empirical refers to human sensory experience and perception, the actual to events occurring in the world, and the real to the mechanisms and structures that have causal powers and shape events.

It is these causal powers, which have their expression in events, that realism gets to grips with. Of society, for instance, realists argue that people do not create society. Society is always there before we are, but we cannot act unless it is there. For Bhaskar (1998: 36) society is ‘an ensemble [of the causal powers] of structures, practices and conventions which individuals reproduce and transform, but which would not exist unless they did so.’ Society is neither a thing that exists independent of human actions, nor is it ever entirely a product of our actions.

From Step 9 onwards I will discuss the implications of this observation for the social act of choosing cases. In short, I will argue that cases are neither things in the research (an error of reification), nor are they entirely of our making (an error of voluntarism).

Recognising that this dialectical relationship flows from society to our actions in society, like doing social research for instance, provides a window onto the possibilities for interpretation and explanation in social research, I will argue. But before I consider the relation between realism and explanation, there is a methodological bridge to cross: what are ideas in social research? This will be the topic of my next post.

Posted in causal efficacy, powers, realism | Leave a comment

Step 4: Generative mechanisms are the powers that effect behaviour. They make a difference; they cause things to happen

In the previous step I made reference to William Outhwaite’s observation that social relations are as real as human actions. In this step I want to go further and bundle up social relations, human actions, and ideas under one term—generative mechanisms.

These generative mechanisms are the powers, liabilities, dispositions, and resources that fire to produce dynamic (social) processes. They act to shape social regularities. Indeed, as Pawson and Tilley (1997: 68) observe, these generative mechanisms are ‘an account of the make-up, behaviour, and interrelationships of those processes which are responsible for the regularity’.

For realists there are real connections between events that cause things to happen. The existence of causal powers, and an acceptance that these powers exist independent of our knowing them, sets realism apart from anti-realism (including social constructivism and empiricism) (Brock and Mares, 2007).

In Step 2 I suggested that to explain social reality we must get beyond the surface appearance of social objects and describe the underlying mechanisms that fashion and shape these objects. This, for Malcolm Williams (Letherby et al., 2013), is the ‘trick’ of explanation.

These 39 steps accept that social research is as much a social object as the social objects we investigate (a thought that might seem trivial to state, but is rarely acknowledged in methods textbooks). And because a realist strategy of choosing cases is a dynamic social process we are involved in a double trick of explanation. We must explain the choices researchers make through the interplay of researcher and institution and researcher and researched, as much as the historically and institutionally contingent powers, liabilities, and dispositions that fire to bring about particular social processes and outcomes.

Posted in causal efficacy, generative mechanisms, powers | Leave a comment

Step 3: Realism is a model of stratified reality being composed generatively


Photograph by Flying Pete

This observation about realism builds on my previous step. The emphasis on generative composition I leave to William Outhwaite (1998: pg. 284) who in his discussion of realism and social science observes that ‘the more interesting human actions [which I will argue in later steps include choosing cases in qualitative research] are those which presuppose a network of social relations. And if these social relations are a precondition of individual actions, it seems odd to think of them as any less real than those actions’.

Posted in generative mechanisms, realism | Leave a comment

The power of small samples in qualitative research: a seminar for students planning their dissertations in the School of Sociology and Social Policy, University of Leeds

Q- How big, or small, should a sample be in qualitative research?

A- It is not the number of cases that matters; it is what we do with them that counts. Sample size is frequently used to determine the quality of qualitative (and quantitative) research design, as Emma Uprichard (2013: 7) observes. But this criterion is rendered meaningless ‘without further explanation as to what, how and why [it] may matter in the first place.’

And here is the presentation power-of-small-samples

Posted in sample size, sampling unit | Leave a comment

Step 2: Social reality is not simply captured by description or ideas, but is richer and deeper

‘Social reality is not simply captured by description or ideas, but is richer and deeper’ (Malcolm Williams in Letherby et al., 2013: 105). Reality is stratified and our theories about the social objects we investigate refer to actual features and properties of the real world. These properties include real and relatively enduring mechanisms, which Roy Bhaskar (2008: 221 – parentheses in the original) considers to be ‘nothing but the powers of things. Things, unlike events (which are changes in them) persist’. Powers, liabilities, and dispositions are an essential part of any realist explanation of sampling in qualitative research. These have causal efficacy, they have an effect on behaviour, and they make a difference.

Posted in causal efficacy, social reality | Leave a comment

A review by Prof Ray Pawson

The starting point of all social research, at least according to the methods ‘cookbooks’, is to consider who and where to study – a decision usually known as sampling or case selection according to the researcher’s preferred paradigm. The job is done according to the first approach if one can show that the sample chosen is typical of the population to which one’s findings will apply. The second approach trades on atypicality. The researcher’s job is to understand and remain faithful to the unique, neglected characteristics of the situation under study. Stuff and nonsense, declares Emmel. He sees any individual study as a mere staging post in the wider body of social science investigation. Inquiry thus zigzags back and forth between bodies of explanation and bodies of empirical research. Accordingly, researchers should choose who and where to study according to the zigs and zags already accomplished in that body of inquiry and their efforts should add a further zig or zag. He is perfectly correct, of course, and let us hope that one day these twists and turns of study selection will be recognised in the cookbooks.

Ray Pawson

Professor of Social Research Methods, The University of Leeds

Posted in review | Leave a comment

Step 1: The terms sampling and sample are common across the methods of social research; choosing cases is better

Sampling and choosing cases in qualitative research: a realist approach starts with a bold assertion. The verb sampling does not adequately capture the work we do choosing cases in qualitative research. The noun sample is inadequate too. Our common understanding of sampling is that a sample is drawn from a predefined population and each unit within that population has a chance of being included greater than zero (Bowley, 1906; Gorard, 2007). These rules do not hold in a realist sampling strategy. In these 39 steps I will argue that cases are chosen and transformed throughout the research in a methodological strategy of casing. In short, casing is the working out of the relation between ideas and evidence to extend interpretation and explanation in the research (Ragin, 1992). But before I discuss the methodology and techniques of choosing cases in qualitative research the next six posts discuss my characterisation of realism.
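As an aside, the textbook rule quoted above is easy to state in code, and stating it makes the contrast with casing plain. The sketch below is illustrative only: the sampling frame, its size, and the sample size are invented for the example; it is not a recommendation for any study.

```python
# A minimal sketch of the textbook rule: a sample drawn from a predefined
# population in which every unit has a known, non-zero chance of inclusion.
# The frame and sizes are invented for illustration.
import random

sampling_frame = [f"household_{i}" for i in range(1, 101)]  # predefined population of 100 units
sample = random.sample(sampling_frame, k=10)                # each unit has inclusion probability 10/100

print(sample)

# Casing, by contrast, has no fixed frame: cases are chosen, dropped, and
# transformed as the relation between ideas and evidence is worked out.
```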
Posted in casing, sampling unit | Leave a comment

The role of context in case study selection (Poulis, Poulis, and Plakoyiannaki, 2013)

A lovely paper by Poulis and colleagues (2013), sent to me by one of the authors, Emmanuella Plakoyiannaki. Set in the research world of international business, it is a fine account of the application of casing and reflection on this methodology. For Charles Ragin, casing, or the evolving case in the research, is a methodological practice that seeks to work out the relation between ideas and evidence.

Casing is at the heart of the strategy I propose in my book for sampling and choosing cases in qualitative research. It is reassuring (and nice) to see other authors making the same points quite independent of my work.

Poulis and colleagues think about casing and context as two parts of the same problem. They point out that contextualisation takes place at many stages in the research process. Case sampling and contextualisation are combined and emergent decisions made throughout the research, rather than treated as discrete and separate tasks undertaken in some notional linear progression of doing research.

What is more, these authors suggest that understanding context requires the application of diverse tools and approaches. In their examples they discuss using pilot cases, direct observation, purposeful sampling strategies, and secondary data. They stress that these ‘tools assisted … towards context-sensitive case selection’ (pg. 306), emphasising that they chose these tools because they were most fitting to their study.

In my book I use slightly different language. I talk about purposive work leading to purposeful sampling, but the principle is the same. The working out of the relation between ideas and evidence drives forward the choice of cases, never evidence alone.

I also make much of MQ Patton’s observation that there is nothing convenient about purposeful sampling. So too, Poulis and colleagues insist that information rich cases ‘deal with contextual challenges and avoid a convenience-sampling logic […] diverse information from multiple contexts [represents] a response to emerging sampling challenges’ (pg. 308).

And these challenges of case selection are always creative. Methodologies are bent to the peculiarities of the research setting. As Poulis and colleagues (2013) emphasise, the notion of casing, and their paper, ‘demonstrates the iterative thinking, dynamic reflection, and multiple sources of information [that] can only lead to discovering critical dimensions’ (pg. 312) which might otherwise go unnoticed.

Poulis, K., Poulis, E., & Plakoyiannaki, E. (2013). The role of context in case study selection: an international business perspective. International Business Review, 22: 304-314.

Posted in casing, context | Leave a comment

This blog

This blog is about sampling, choosing cases in qualitative research, and realism. It accompanies my book, which zips these ideas together.

My book is published by SAGE, ISBN: 9780857025104, and available from all good academic bookshops and Amazon, of course.

This blog presents 39 steps (as 39 tweets), which accompany the publication of the book.

I wanted to write the conclusion to this book as 39 steps (with deference to John Buchan, Alfred Hitchcock, and Richard Hannay), written as 39 tweets. But my editor, Jai Seaman, wisely rejected the idea. She pointed out that tweets are only relatively enduring. Tomorrow there may be no Twitter.

In this blog I will start by adding height, and depth, and length, and breadth (with references) to 140-character microblogs.

I welcome comment, observations, experiences, case studies, and argument to test and refine the ideas presented here.

Nick Emmel

Posted in review | Leave a comment