Conformity & Obedience #3
Dispositional and Situational
The 2 approaches to explaining obedience were to some extent reconciled via the work of Alan Elms (Alan Elms & Stanley Milgram, 1966).
One of Milgram’s assistants, Elms tested sub-samples of Milgram’s participants – those who had been fully obedient and those who had defied the experimenter – using the F-scale measure of the authoritarian personality. The obedient participants scored significantly higher in authoritarianism than the defiant participants, suggesting a dispositional element to obedience.
In Integrated SocioPsychology terms the vMEME most likely to blindly obey the orders of a legitimate authority is BLUE. However, the ruthlessness of the authoritarian personalities – and possibly their enjoyment of inflicting pain on others – suggests that they may also be high in the Psychoticism Dimension of Temperament.
Research into situational factors in obedience
In addition to Milgram’s obedience experiments, their variations and numerous replications, there have been a number of other important studies into obedience.
A very different study from Milgram’s was that of Wim Meeus & Quinten Raaijmakers (1985) – though they overtly took their inspiration from his work. 1980s Dutch culture was much more liberal than early 1960s American culture; so the intention was to see if the power of obedience to a higher authority would still apply in a different cultural setting. They also wanted to eradicate certain ambiguities in Milgram’s study – primarily that the levels of shock appeared to be dangerous (the learner/victim went silent and the levers were labelled ‘severe shock’, etc), yet the participants had been told there would be no permanent tissue damage.
In the baseline procedure there were 39 volunteer participants aged 18-55, both male and female and all with at least a Dutch high school education. 24 of the volunteers were allocated to the experimental group while 15 were put in a control group. The experiment lasted about 30 minutes. The participants were given the role of ‘interviewer’ and ordered to harass a ‘job applicant’ (actually a confederate) to make him nervous while he sat a test to determine whether he would get the job. Although the premise of the set-up was that the experimenters were researching the relationship between psychological stress and test achievement, the participants were also told that the ‘applicant’ did not know the real purpose of the study – they heard the applicant being told that poor performance on the test would not affect his job prospects – and that the job being applied for was real. The applicant, listening via a speaker in a different room, had to answer 32 multiple-choice questions read out in 4 sets by the ‘interviewer’. The harassment consisted of 15 negative statements – 5 each for the second, third and fourth question sets. (No negative statements were made during the first question set.) These appeared on a TV screen, telling the interviewer when to make the remarks and what to say. The comments built from mild criticism – “Your answer to question 9 was wrong” – to devastating utterances such as “This job is too difficult for you. You are only suited to lower functions.” No errors were made in the first question set but 10 were made over the next 3 sets – 8 being enough to ‘fail’ the test. The applicant had been instructed to begin confidently but to protest at the negative statements – eg: “But surely…” and “My answer wasn’t wrong, was it?” He acted increasingly distressed until reaching the point – at the eighth or ninth negative statement – where he begged the interviewer to stop.
The applicant then accused the interviewer of lying to him about the study and withdrew his consent. The interviewers were told to ignore the applicant’s interruptions and were given 4 verbal prods to continue the remarks if they refused to go on. The participants were told that electrodes on the applicant’s skull were measuring tension, which was displayed numerically on a sequence panel running from 15 to 65. The experimenter, seated next to the participant, added verbal comments on the stress indicators displayed, such as “normal” or “intense”. The graphic shows how the stress level and errors were manipulated. 91.7% of the participants (22 out of 24) obeyed, disturbing and criticising the applicant with all 15 statements when told to do so by the researcher. The mean number of stress remarks given was 14.81. None of the participants in the experimental condition put up any real opposition to the experimenter’s demands.
With the control group the participants could choose when to make the negative statements and could stop making them at any time during the test. When a participant in the control group stopped the negative statements, the applicant had been instructed to stop making errors so that his ‘tension’ levels would drop. No one in the control group made the stress remarks.
Meeus & Raaijmakers made 2 variations on the baseline. Firstly, the experimenter set up the study, ordered the stress remarks and then left the room (22 participants). Secondly, 2 confederates played co-interviewers alongside the real participant – protesting after stress remark 8 (causing the experimenter to go through the 4 verbal prods) and refusing, first one and then the other, to continue after stress remark 10 (when the applicant withdrew his consent to the experiment) – though the experimenter asked the real participant to continue (19 participants). Removing the experimenter and introducing rebellious peers both led to a substantial reduction in obedience amongst the real participants – 36.4% and 15.8% respectively were fully obedient. The graphic shows the relative influence of the 2 variation conditions. The researchers explained the reduction in obedience in the experimenter-absent condition as being due to the participant having to take personal responsibility. They attributed the reduction in obedience in the rebellious-peers condition both to having to take personal responsibility and to having the peers as models of disobedience.
While Meeus & Raaijmakers did indeed demonstrate that, even in a more liberal culture than that of Milgram’s studies, people would obey an authority figure and go against their better nature to do something designed to hurt another person, the study has been heavily criticised as lacking mundane realism – and, therefore, ecological validity – since the task was hardly an everyday scenario.
A study with rather more ecological validity was that of Charles Hofling et al (1966). 3 psychiatric hospitals in the American Midwest took part in this study, with one of them acting as a control. 22 unwitting nurses from the other two hospitals were used for the experiment – 12 from public wards and 10 from private. Participants were closely matched for age, sex, race, marital status, length of working week, professional experience and area of origin. While alone on the ward on night duty – 7-9 PM, just before evening visiting or just after it, when doctors are not normally around and medication is not normally administered – they received a phone call from an unknown “Doctor Smith from the Psychiatric Department” (the authority figure) asking them to administer 20 mg of ‘Astroten’ to a patient, ‘Mr Jones’, who needed the drug urgently. The caller, who claimed to be running late, said he would sign the authorisation papers when he arrived at the hospital in about 10 minutes’ time. Amongst the ward’s drugs were bottles containing fake pills labelled “Astroten 5 mg. Maximum dose 10 mg. Do not exceed the stated dose.” (The capsules in fact contained glucose, harmless to most patients.) The researcher playing Doctor Smith used a written script to standardise the conversation and all conversations were recorded. The conversation was planned to end when one of the following occurred:-
- the nurse complied and went to administer the medication
- the nurse refused consistently to give the medication
- the nurse went to get advice
- the nurse became emotionally upset
- the call went on for more than 10 minutes
A researcher (a real doctor) stopped nurses who were moving towards the patient’s bed with the ‘medication’, and all nurses were debriefed within 30 minutes of the telephone conversation. 12 graduate nurses and 21 student nurses from the control hospital completed a questionnaire about what they would do if a doctor they didn’t know asked them to administer a medicine unauthorised for use on their ward. The telephone conversations were generally brief, without much resistance from the nurses. 21 of the 22 nurses started to administer the Astroten. In debrief, 16 nurses said they felt they should have been more resistant to the caller; none had become hostile to the caller. Only 11 nurses admitted to being aware of the dosage limits for Astroten; the other 10 had not noticed them but judged the drug must be safe anyway if a doctor had ordered them to administer it. Nearly all admitted they should not have followed the orders as they were in contravention of hospital policy. However, many of the nurses stated that obeying a doctor’s orders without question was a fairly common occurrence; 15 said they could recall similar incidents and that doctors were displeased if nurses resisted their orders. Amongst the control group, 10 of the 12 graduate nurses and all 21 students said they would not have administered the medication. Most believed other nurses would behave in the same way.
Hofling et al’s study does support Milgram’s Agency Theory: the nurses’ actions indicated they were in an agentic state, recognising and responding to the doctor’s authority. It also has high ecological validity since the nurses were unaware of the set-up and so their behaviour was natural. Comparison of the control group’s questionnaire responses with the actual practice of the nurses in the experimental situation shows the difference between what people think they would do and what they actually do.
Steven Rank & Cardell Jacobson (1977), however, queried the mundane realism of the study in that the nurses had no knowledge of the drug involved and no opportunity to seek advice from anyone of equal or higher status. (Both of these would apply in most hospital situations.) They replicated Hofling et al’s experiment, but the instruction was to administer Valium at 3 times the recommended level, the telephoned instruction came from a real, known doctor on the hospital staff and the nurses were able to consult with other nurses before proceeding. Under these conditions, only 2 out of 18 nurses prepared the medication as requested. Rank & Jacobson concluded: “nurses aware of the toxic effects of a drug and allowed to interact naturally – will not administer a medication overdose merely because a physician orders it.”
However, Eliot Smith & Diane Mackie (1995) reported that there is a daily 12% error rate in US hospitals and that “many researchers attribute such problems largely to the unquestioning deference to authority that doctors demand and nurses accept.” The same year Annamarie Krackow & Thomas Blass gave a questionnaire to 68 nurses which asked about the last time they had disagreed with a doctor’s order. 2 factors emerged as key to whether or not the nurses would obey. Most important was whether the nurses recognised the doctor as a legitimate authority with the right to make the decision in question. However, the nurses were also influenced by the consequences for the patient: if these would be serious, the nurses were more likely to take responsibility and challenge the order.
Rather less dramatic, but still debatable in terms of mundane realism and ecological validity, was Leonard Bickman’s field experiment in 1974. He had 3 male experimenters, dressed as a milkman, a uniformed guard or a civilian in a sports coat and tie, make demands of passers-by in a New York City street. They gave one of 3 orders:-
- “Pick up this bag for me” – pointing to litter
- “This fellow is overparked at the meter but doesn’t have any change. Give him a dime” – nodding in the direction of a confederate fumbling for change by a parking meter
- “Don’t you know you have to stand at the other side of the pole? This sign says ‘No standing’” – to a participant at a bus stop
The passers-by were most likely to obey the guard (38%) and least likely to obey the civilian (14%). As Bickman concluded, in support of Milgram and the concept of legitimate authority, a uniform has immense social power. In a variation of the study Bickman found people even obeyed the guard when he walked away after giving the order!
Research into dispositional factors in obedience
Robert Altemeyer (1981) worked with 3 of the authoritarian personality traits he thought constituted ‘right-wing authoritarianism’ (RWA):-
- Conventionalism – an adherence to ‘conventional’ norms and values
- Authoritarian aggression – hostility towards people who violate such norms and values
- Authoritarian submission – uncritical submission to legitimate authority
Altemeyer tested the relationship between RWA and obedience by instructing his participants to give themselves increasing levels of electric shock when they made mistakes on a learning task. He found a significant positive correlation between RWA scores and the level of shock the participants were willing to give themselves.
Education, cognitive complexity, politics and authoritarianism
In what appears to be a complex series of interlocking factors, education – with related increases in cognitive complexity – and political preferences all influence, or are influenced by, authoritarianism and how willing somebody might be to obey.
Milgram (1974) noted that less-educated people were consistently more obedient than well-educated people. Similarly, Elms found that the least-educated of Milgram’s participants were the most obedient and authoritarian. C P Middendorp & J D Meleon (1990) also found a link between poorer education and authoritarianism.
While education isn’t the only factor leading to greater cognitive complexity, it almost always does improve it. A number of developmentalists have found correlations between greater complexity and political, social and moral views. Else Frenkel-Brunswik (1951) found prejudice to be negatively correlated with cognitive complexity – ie: the more complex people’s thinking, the less prejudiced they tend to be. Lawrence Kohlberg (1963) found morality becomes more complex with greater cognitive development. Jane Loevinger (1976) stated that, as people’s thinking becomes more complex, so they become much more aware of others and their needs. In Gravesian terms, GREEN (liberal) thinking is more complex than BLUE/ORANGE (economically conservative), BLUE (rigidly conservative) and PURPLE/BLUE (socially conservative).
Quite a bit of evidence points to lesser cognitive complexity being associated with having right-wing political views. Gordon Hodson & Michael Busseri (2012) found that people with low childhood intelligence tend to grow up to have racist and anti-gay views. Jonathan Haidt, Craig Joseph & Jesse Graham note there is a “consistent difference between liberals and conservatives” on several measurements related to cognitive complexity. Emma Onraet et al (2015) offer an explanation for this: “Right-wing ideologies provide well-structured and ordered views about society and intergroup relations, thereby psychologically minimizing the complexity of the social world. Theoretically, therefore, those with fewer cognitive resources drift towards right-wing conservative ideologies in an attempt to increase psychological control over their context.”
Unsurprisingly perhaps, Laurent Bègue et al (2014) found that people who defined themselves as more ‘left-wing’ were less obedient than people who saw themselves as ‘right-wing’. In a fake game show, contestants had to give (fake) electric shocks to other contestants. The researchers found a negative correlation between the strength of contestants’ left-wing political views and the intensity of shock they were willing to administer.
However, not all research supports the association of lesser cognitive ability with right-wing political views. Luke Conway et al (2016) asked over 2,000 participants – split equally between Democrats and Republicans – to write statements about different domains in their lives, eg:-
- Climate change
- Death penalty
- Sex relations except in marriage are always wrong
- Drinking alcohol
- Separate roles for men and women
Complexity in the statements was rated on a scale of 1 (simple) to 7 (highly complex). Republicans were much more complex on some topics and Democrats on others – there was no overall direction of travel from simplicity to complexity.