Mechanisms, the problem and the solution: a response to Howick et al. (2013)

I cannot disagree with the conclusion of Howick and colleagues’ 2013 paper, ‘Problems with using mechanisms to solve the problem of extrapolation’. Uncertainty and scepticism will always be part of medicine, at least of an account of medicine that does not succumb to myth-making and Whig history. Extrapolation of findings from a sample to a population will always be a methodological challenge, one best met through the caution of knowing that we must ‘learn to live with a much higher degree of uncertainty and scepticism about the effects of many medical interventions, even those whose effects have been established in well-controlled population studies’ (288). But I am not sure Howick and colleagues offer much of a solution. In fact, I think they throw the baby out with the bathwater. Mechanisms may well have all the weaknesses they suggest, but, I argue, mechanisms are also the solution to the scientific progress (as knowledge cumulation) to which they aspire.

Wisely, Howick and colleagues (2013) do not deal with mechanisms as empirical cogs in the previously unopened black box of Evidence Based Medicine (EBM). They caution against a model that makes the ‘unwarranted ontological assumption that mechanisms are productive of regular relationships between inputs and outputs’ (284). But they also seem to be hamstrung, to a degree, by their observation that ‘the functioning of most mechanisms is discovered in tightly controlled laboratory experiments that expressly exclude as many potentially interfering variables as possible’ (283). They accept that ‘mechanisms in the human body and social world, especially those that are pertinent to clinically relevant outcomes are generally more complex than […] mechanical machines’ (284), yet their solution is to ‘roll out [randomised control] trials, modified to what is systematically observed’ (288).

In short, the hierarchy of EBM is reasserted. The epistemology of this position, never explicitly stated in a paper about epistemology, is that reality is forced to fit the experiment, rather than methods being chosen in response to reality. One might conclude that this is not so much an epistemological treatment of mechanisms as an empirical justification for extending the experimental model.

Howick and colleagues’ criticisms of mechanisms are threefold and, in my view, justified. First, mistaken mechanisms may be identified, because knowledge of mechanisms is often lacking. Second, knowledge of mechanisms may lack external validity: the mechanism discovered in a tightly controlled experiment on the laboratory bench may well not be reproducible in a clinical setting. And third, a mechanism might produce paradoxical reactions; as these authors point out, an anti-epileptic drug might both prevent and cause seizures.

They offer two ways out of these limitations of mechanisms. The first is Daniel Steel’s solution of comparative process tracing. For Steel (2008:89), judgements are based on ‘inductive inferences concerning known similarities in related mechanism in a class of organism, and on the impact those differences make’. As with all analytic inductive models, the search is on for the best model to explain empirical observations. It is a process of testing cases and reformulating hypotheses, ‘until a point is reached where every new case investigated confirms the current hypothesis’ (Hammersley, 2014:18). At its most basic, once the mechanism is established in the target population, knowledge that the mechanism operates in the study population (or model) becomes redundant. This, for Steel, is the ‘extrapolator’s circle’, which he proposes can be avoided through process tracing that reconstructs the path of mechanisms step by step to their end point in the model. This reconstruction can then be compared with the organisms most likely to differ significantly, to identify whether the mechanisms (often those most proximal to the end point) are similar. Even this empirical method of process tracing cannot, however, resolve the problem of unknown mechanisms, of claims to the external validity of the mechanism (beyond the experiment), or of paradoxical end points. Howick and colleagues look to another approach in an attempt to address these problems.
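As a rough illustration of what comparative process tracing involves, the Python sketch below represents a mechanism as an ordered list of stages and compares the model organism with the target at the stages most proximal to the end point. The organisms, stage names, and the flattening of a mechanism into a linear list of stages are invented for illustration only; they are not Steel’s formalism, and, as noted above, no such comparison can rescue unknown mechanisms or paradoxical end points.

```python
# A minimal, hypothetical sketch of comparative process tracing (after Steel, 2008).
# Mechanisms are represented here as ordered stages from intervention to end point.
# The organisms and stage names below are invented purely for illustration.

MODEL = {
    "organism": "rat",
    "stages": ["compound absorbed", "metabolised in liver",
               "metabolite binds receptor X", "tumour suppression"],
}

TARGET = {
    "organism": "human",
    "stages": ["compound absorbed", "metabolised in liver",
               "metabolite binds receptor X", None],  # end point not yet observed
}

def compare_downstream(model, target, n_proximal=2):
    """Compare the stages most proximal to the end point, where relevant
    differences between organisms are most likely to matter."""
    pairs = zip(model["stages"][-n_proximal:], target["stages"][-n_proximal:])
    report = []
    for m_stage, t_stage in pairs:
        if t_stage is None:
            report.append((m_stage, "unknown in target"))
        elif m_stage == t_stage:
            report.append((m_stage, "similar"))
        else:
            report.append((m_stage, f"differs: {t_stage}"))
    return report

for stage, verdict in compare_downstream(MODEL, TARGET):
    print(f"{stage:30s} -> {verdict}")
```

The sketch makes the circle visible: the downstream stage that matters most for extrapolation is exactly the one still marked unknown in the target.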

Their second approach to mechanistic reasoning is provided by Cartwright and Hardie. I am not sure this case can be used to make claims for extrapolation. Cartwright and Hardie (2012: 45) state that ‘[external validity] is the central notion in the Randomised Control Trial (RCT) orthodoxy, and it does not do the job that it is meant to do’. Yet Howick and colleagues use Cartwright and Hardie’s case study to ‘show why [the case I discuss below] does not support mechanistic reasoning to solve the problem of extrapolation’ (286). In short, Howick and colleagues construct a straw man.

Nonetheless, it is worth rehearsing Cartwright and Hardie’s approach, because it shows how mechanistic reasoning emphasises uncertainty, invites scepticism, and undermines claims to external validity. Using RCTs of feeding programmes for infant health (ih) in India (Tamil Nadu, TN) and Bangladesh (BD), Cartwright and Hardie show how mechanisms must be abstracted when multiple contexts are considered. In the formulae representing the findings from these RCTs, a and a′ stand for unknowns in the two settings. In Tamil Nadu, support factors (bm) empower mothers (em) to control resources in the food programme and feed their children. In Bangladesh, however, em does not pertain: mothers-in-law (eml) rule the roost and control food, and therefore infant health, so support factors bml are necessary.

TN: $ih_c(i) = a_1 + a_2\, ih_0(i) + a_3\, b_m(i)\, e_m(i) + a_4\, z(i)$

BD: $ih_c(i) = a_1' + a_2'\, ih_0(i) + a_3'\, b_{ml}(i)\, e_{ml}(i) + a_4'\, z'(i)$

As Cartwright and Hardie (2012:29) note, these represent ‘two causal principles with nothing in common except their abstract form’. The formulae can be re-written so that they do have one thing in common:

TN: $ih_c(i) = a_1 + a_2\, ih_0(i) + a_3\, b_{pw}(i)\, e_{pw}(i) + a_4\, z(i)$

BD: $ih_c(i) = a_1' + a_2'\, ih_0(i) + a_3'\, b_{pw}(i)\, e_{pw}(i) + a_4'\, z'(i)$

The one thing in common is epw (an educated person with power), with support factors bpw. Rewriting the equations in this way abstracts them further from the real, in this case substituting an abstract powerful person for very real mothers and mothers-in-law. ‘This is’, to quote Cartwright and Hardie, ‘(nearly) vacuous’ (86).
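To see how little the abstracted principle says on its own, the minimal Python sketch below encodes the shared abstract form as a single function and then binds it to each setting. The coefficient values, and the decision to set the support factor to zero where the programme targets someone without power over food, are my own illustrative assumptions, not figures from the Tamil Nadu or Bangladesh trials.

```python
# A sketch of Cartwright and Hardie's point: the two causal principles share
# only their abstract form. All numerical values below are invented
# placeholders, not estimates from the actual trials.

def infant_health(ih_0, b, e, z, a1, a2, a3, a4):
    """Abstract causal principle: ih_c = a1 + a2*ih_0 + a3*b*e + a4*z."""
    return a1 + a2 * ih_0 + a3 * b * e + a4 * z

# Tamil Nadu: the 'educated person with power' is the mother (e_m), and the
# programme's support factors b_m (advice plus control of food) reach her.
tn = infant_health(ih_0=0.4, b=1.0, e=1.0, z=0.1, a1=0.1, a2=0.5, a3=0.3, a4=0.2)

# Bangladesh: the same abstract slots, but power lies with the mother-in-law
# (e_ml); support factors aimed at the mother do not engage her, so b = 0.
bd = infant_health(ih_0=0.4, b=0.0, e=1.0, z=0.1, a1=0.1, a2=0.5, a3=0.3, a4=0.2)

print(f"Tamil Nadu (support factors reach the person with power): {tn:.2f}")
print(f"Bangladesh (mother targeted, but power lies elsewhere):   {bd:.2f}")
```

The abstract function is identical in both calls; everything that matters for the outcome sits in the context-specific bindings, which is precisely why the abstraction alone is ‘(nearly) vacuous’.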

These vacuous statements need bringing back to where the action is. Howick and colleagues invoke a putative World Bank consultant who prescribes particular treatments based on a knowledge of bpw and epw in each setting. They argue that this prescription may do more harm than good because of an ‘inability to identify all mechanisms’ (287). Quite so: World Bank consultants are probably not good at spotting mechanisms; programme workers, I’d suggest, are likely to be better. But at least both have a theory (a mechanism) to work with, the ‘powerful person theory’. The job now is to put that theory to work and find out who the powerful person is, in what circumstances, why, and what support factors they need to improve infant health. An investigation of mechanisms might start with a statement about bpw and epw, but it will hopefully progress to bm and em, or bml and eml, or some other as-yet-unknown combination.

Howick and colleagues do not adopt this approach. Rather, the solutions they draw from their investigation of mechanistic reasoning are, first, and rightly, to ‘temper our confidence in all mechanistic reasoning’ (287), and second, to modify the experimental RCT design in response to that which is systematically observed. Cartwright and Hardie, it appears, agree with the first part of this solution. They recognise that what works ‘over here’ might not work ‘over there’: the mechanism that works in a feeding programme to improve infant health in Tamil Nadu may well not work in Bangladesh.

Cartwright and Hardie’s point is that this problem of external validity affects RCTs as much as any other approach. The second part of Howick and colleagues’ solution does not, therefore, follow from the first, unless, as they seem to assume, all mechanistic reasoning is sophisticated induction. Most mechanistic reasoning, including Cartwright and Hardie’s approach, simply does not follow the analytic induction proposed by Daniel Steel. Cartwright and Hardie eschew an inductive approach with as much force as they do the deductive approach of the RCT. Their preferred approach is creativity:

[…] the orthodoxy [the RCT], which is a rule based system, discourages decision makers from thinking about their problems, because the aim of rules is to reduce or eliminate the use of discretion and judgement […] Deliberation is not second best, it is what you have to do, and it is not faute de mieux because there is no mieux. (Cartwright and Hardie, 2012:158)

The discretion and judgement emphasised here lead away from both deductive and inductive approaches. Mechanistic reasoning is the alternative, despite its accepted weaknesses. The challenge is a methodological approach that can creatively zigzag between ideas and evidence to produce models that are constantly open to testing and refinement: in short, a methodology that does not ask ‘what works?’, but ‘what works, for whom, in which circumstances, why, and when?’. Having outlined the weaknesses inherent in the deductive methods of the RCT, so often misbranded as the ‘gold standard’, I return to my key point: it is not the method that drives the investigation; it is the question that must decide the method in knowledge cumulation for contemporary health care and beyond.
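One way to picture that zigzag between ideas and evidence is as a conjecture that is recorded, tested, and revised rather than fixed in advance. The Python sketch below is a hypothetical illustration only: the field names, the refine step, and the feeding-programme example carried over from Cartwright and Hardie’s case are my own, not a method proposed by any of the authors discussed.

```python
# A sketch of 'what works, for whom, in which circumstances, why' as a
# conjecture that is refined as evidence accumulates. All field values are
# illustrative placeholders only.

from dataclasses import dataclass, field

@dataclass
class MechanismConjecture:
    what_works: str          # the intervention or resource offered
    for_whom: str            # whose reasoning or power the resource works through
    circumstances: str       # the context in which it is expected to operate
    why: str                 # the conjectured generative mechanism
    evidence: list = field(default_factory=list)

    def refine(self, observation, revised_why=None):
        """Zigzag between ideas and evidence: record the observation and,
        where it demands it, revise the conjectured mechanism."""
        self.evidence.append(observation)
        if revised_why:
            self.why = revised_why

conjecture = MechanismConjecture(
    what_works="supplementary feeding programme",
    for_whom="the person who controls household food",
    circumstances="rural Bangladesh",
    why="education plus power over resources (bpw, epw)",
)
conjecture.refine(
    "mothers received advice but did not control food",
    revised_why="support factors must reach the mother-in-law (bml, eml)",
)
print(conjecture.why)
```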


2 Responses to Mechanisms, the problem and the solution: a response to Howick et al. (2013)

  1. “The discretion and judgement emphasised here lead away from both deductive and inductive approaches. Mechanistic reasoning is the alternative, despite its accepted weaknesses.” I agree mechanistic reasoning does not (have to) rely on induction or deduction. But creativity hardly strikes me as a good standard for making causal claims about mechanisms. I would think it is abduction, or inference to the best explanation (which also requires some creativity, but so does all social science).

    • nickemmel says:

      I wonder what you mean by abduction. Do you, like Blaikie (2000), treat abduction as a self-sufficient research strategy (see also Hammersley, 2014), or do you follow Peirce (1903:235), who emphasised the guessing and creativity of an abductive approach?
      For Peirce, abduction:
      ‘… allows any flight of imagination, provided this imagination ultimately alights upon a possible practical effect; and thus many hypotheses may seem at first glance to be excluded by the pragmatical maxim that are not really so excluded.’
      I agree with this version of abduction and also with your observation that ‘creativity [is not] a good standard for making causal claims about mechanisms’. In my response to Howick et al., I was concerned with two issues. First, like Cartwright and Hardie (2012), I argue that the imposition of the experimental model on complex phenomena is designed to remove the ‘guessing’ or ‘creativity’ from the very process of doing research. I return to Cartwright and Hardie’s point: deliberation is not second best, it is an essential part of the process. We might get it wrong quite a lot of the time, of course. But then we must invoke a second point, made by C.S. Peirce. As he asks:
      …. What is good abduction? What should an explanatory hypothesis be to be worthy to rank as a hypothesis? Of course, it must explain the facts. But what other conditions ought it to fulfil to be good? …. Any hypothesis, therefore, may be admissible, in the absence of any special reasons to the contrary, provided it be capable of experimental verification, and only insofar as it is capable of such verification. This is approximately the doctrine of pragmatism.
      I’m not entirely sure I like the term ‘verification’; I’d prefer testing, refining, judging, and so on (but then much debate has happened in the social sciences since 1903). What I do like is that this form of abduction is not self-sufficient. It is explicitly part of a process of researching that proposes what I would term theories of the middle range (after Merton, 1968). Raymond Boudon describes these as a set of statements that ‘organise a set of hypotheses and relate them to segregated observations’ (1991:520). My point, contra Howick and colleagues (2013), is that guesses, creativity, deliberations, and informed judgement are part of any research endeavour. We must valorise these theories, hypotheses, and conjectures. They are the ZIG that precedes the ZAG in any research endeavour. In short, ideas precede method, though it is so often presented otherwise.
