Alan C. Elms

Keeping Deception Honest:

Justifying Conditions for Social Scientific Research Stratagems


The Problem of Deception: A Consequentialist Middle Ground

Deception is a word used to end arguments, not to begin them. To accuse researchers of deception is to remove them from the ranks of those with whom legitimate human relationships can be pursued. The term is so sweeping that it includes Satan's lures for lost souls, the traitor's treachery, the false lover's violation of a pure heart. How could any deception ever be considered ethically justifiable if it keeps such company?

The use of so broad a term as deception is itself deceptive when applied without qualification to certain common procedures in social scientific research. It muddies issues, biases ethical debates, lumps together a vast array of practices that differ in intent, execution, and outcome. Because of such radical differences among various practices labeled "deception," social scientists have suggested other terms for the kinds of stratagems used in their research, such as "staging" or "technical illusions."1

But stage plays and magic tricks are not quite on the same order as our research stratagems, either. The researcher hopes that subjects will not realize an illusion is being created. If the experiment is to work, they should perceive the stage scenery through which they are walking, the memorized speeches of the actors around them, as genuine. When the curtain falls, they are not likely to break into spontaneous applause—any more than they are likely to call the Bunco Squad or the Consumer Fraud Division. So "staging" and similar terms are as problematic as "deception." In lieu of a better word, I will continue to use "deception" for the practice of misleading research subjects, even though it obliterates important distinctions among forms of deception.

Certain ethicists refuse to differentiate social scientists' attempts to mislead subjects from any other kind of deception, conceptually as well as terminologically. For them, the argument is already over: there are no circumstances under which social scientific deception is ethically permissible. Non-absolutists are likely to find such an absolutist stance worth little attention, and I do not have the space to examine it closely here. For those who are interested, Sissela Bok has summarized the basic philosophical arguments against it.2

Certain others—I hesitate to call them ethicists, though they do hold down the other end of the ethical scale from the moral absolutists—insist that normal rules do not apply to science, that the end knowledge fully justifies the deceptive means. In extreme form, these people appear to us as Nazi eugenicists or as the mad scientists of Hollywood—much beloved by the moral absolutists, who need such opponents to justify their own extremist stance. In milder form, they include simple corner-cutters, Machiavellian careerists, and earnest believers in the primacy of scientific truth.

The position in the middle of the scale is the hard one to hold. Here are those who see life as filled with moral conflicts, rarely easy to resolve, and who see social scientific research as a necessary part of their ethical life. They see such research as the best route to certain ethical goals, and an element of deception as essential to certain kinds of research. They do not accept deception easily, and so they are the ones who might ask, and who need to know, what conditions make deceptions sometimes ethically tolerable in social scientific research. They are the ones to whom I am mainly speaking, and whom at the same time I am trying to represent.

In so doing, I am taking what is variously called a consequentialist, risk-benefit, or cost-benefit position. Shakespeare neatly dramatized the classic case for this position in Measure for Measure, where he presented a novice nun with a moral dilemma: should she yield her virginity to a rapacious judge in order to save her brother's life, or should she deceive the judge and thereby save both her brother and her sexual virtue? The Duke of Vienna, apparently voicing Shakespeare's own sentiments, counsels her to deceive the judge. He assures her that "the doubleness of the benefit defends the deceit from reproof."3 The Duke and Shakespeare are making a cost-benefit analysis, and they conclude that in this instance the benefits of deception considerably outweigh the costs. Most people other than the strictest moral absolutists would agree: when the value of honesty conflicts with other values, certain circumstances may make those other values more important than honesty, and deception then becomes tolerable.

"Tolerable" does not mean "ethically neutral." Deception is, as Bok argues, never a neutral practice.4 It always carries potential harm to the interests of the deceived, in this case to the research subjects who might have chosen to avoid research participation had they been fully and accurately informed. It always carries potential harm to the deceivers, in this case the researchers and their assistants, whose reputation for veracity may be harmed and whose own character may be affected negatively by repeated deceptive practices. It carries potential harm to the deceivers' profession, since social scientists in general may become less trusted as the deceptive practices of part of the profession become well known. And it carries potential harm to society, in that it may contribute to a general lack of trust and to the willingness of nonprofessionals to act deceptively themselves. Perhaps none of these potential harms will be realized, if social scientific deception remains on a small scale and is surrounded by various kinds of constraints and counteractive efforts. But given the potential for harm, deception in social scientific research is not something to be employed casually. It must be carefully justified and any negative effects must be offset as much as possible.

What, then, are the boundary conditions under which deception can be considered ethically justifiable in social scientific research? I will state the major conditions in a single sentence, and then expound upon each term: Deception is justifiable in social scientific research when (1) there is no other feasible way to obtain the desired information, (2) the likely benefits substantially outweigh the likely harms, (3) subjects are given the option to withdraw from participation at any time without penalty, (4) any physical or psychological harm to subjects is temporary, and (5) subjects are debriefed as to all substantial deceptions and the research procedures are made available for public review. All of these conditions are by now familiar to researchers and ethicists; some have already been built into federal law. Most social scientists who use deception have accepted the conditions as reasonable and even necessary components of their own ethical decision-making processes. But not all ethicists have accepted the conditions as sufficient justification. I would like to argue that these five conditions are both necessary and sufficient justifications for the use of deception in social scientific research.

Lack of Feasible Alternatives

Henry A. Murray stated the primary justification for social scientific deception some forty years ago, in the opening pages of his classic work Explorations in Personality.5 Among "the few general principles that our [research] experience invited us to adopt," he lists two that are immediately relevant:

"[A.] The experimental session should be as life-like as possible. This is important because the purpose of personological studies is to discover how a man reacts under the stress of common conditions. To know how he responds to a unique, unnatural laboratory situation is of minor interest. [B.] The subject's mind should be diverted from the true purpose of an experiment. This is usually accomplished by announcing a plausible but fictitious objective. If a subject recognizes the experimenter's aim, his responses will be modified by other motives: for instance, by the desire to conceal the very thing which the experimenter wishes to observe."

Deception is at times necessary, Murray says, in order to create a laboratory situation that will seem life-like rather than artificial, since situations that strike the subject as artificial will tell us little about human behavior and may even mislead us. We need experimental control over relevant variables because neither naturalistic observation nor the subtlest statistical manipulations of available data will in all cases allow us to sort out the crucial psychological variables; but, paradoxically, we must sometimes use deception to make an experimentally created situation seem real, so that subjects will give genuine, generalizable responses.

Elliot Aronson and J. Merrill Carlsmith make a useful distinction in this regard between "experimental realism" and "mundane realism."6 An experiment is realistic in the first sense "if the situation is realistic to the subject, if it involves him, if he is forced to take it seriously, if it has impact on him." It is realistic in the second sense "to the extent to which events occurring in a laboratory setting are likely to occur in the 'real world.' . . . The mere fact that an event is similar to events that occur in the real world does not endow it with importance. Many events that occur in the real world are boring and uninvolving." Thus an experiment may be trivial because it is unrealistic in any sense; or it may be trivial because it merely presents some version of mundane reality. But it may transcend triviality by the "stress of common conditions," through the creation of an invented but emotionally involving experimental reality. The latter kind of experiment may be an important route to valuable information about human behavior (whereas the former kinds will never be); and it may be possible to pursue such a route only through the use of deception.

But what of alternative routes? Why not, for instance, simply approach people honestly and ask them to tell us about themselves? This is in some circumstances the best procedure to follow, and I certainly find it a more comfortable procedure than deceptive experimentation. But Murray points out its weakness as an exclusive approach, in his Principle B. Wittingly or unwittingly, a subject's knowledge that particular aspects of his or her behavior are under study will almost certainly lead to modifications of that behavior. Enough data are available on the powerful effects of "demand characteristics," the subtle and unintended cues from researchers concerning their intentions and expectations, to indicate that explicit acknowledgement of such intentions and expectations could seriously disrupt normal behavior patterns. Further, subjects may have less than admirable reasons for trying intentionally to mislead researchers about their behavior—particularly about those aspects of behavior that society might have a strong interest in understanding and perhaps in working to modify. Destructive obedience, child abuse, racial and sexual prejudice, authoritarianism—the list of important psychological patterns that many people would be reluctant to admit, but that we need to understand much better if we wish to build a more satisfying society for all, could easily be extended. If individuals will not talk about such matters honestly when they are asked straightforwardly, some form of research deception may be essential in order to gain the information we need.

Moreover, people may simply not know how they would behave in certain socially important but seldom encountered situations. Concerning such matters, it may be useless to ask people what they would probably do, and impossible to observe them in relevant real-life situations where the major variables are sufficiently unconfounded to let us make sense of the psychological processes at work. Once again, some use of deception to create an experimental reality may be the only effective means to collect essential knowledge.

But what about simulation? The word here refers not to creating an experimental reality by artificial means, but to asking research subjects to pretend they are participating in a realistic experiment and having them report how they think they would behave if they really were in such an experiment. This kind of simulation has often been recommended by people who do not wish to abandon the strengths of experimental research but who find deception to be an unacceptable aspect of such research. Unfortunately, simulation has proven to be an inadequate alternative both methodologically and ethically. If the simulation is relatively undetailed, it is not much different from simply asking people directly to describe how they would behave in various circumstances in the real world, and it has the same flaws as that approach—people often don't know, or don't want to tell, how they would behave.7 If the simulation closely reproduces each step of a genuine experiment, however—if for instance, as in Don Mixon's8 or Daniel Geller's9 simulations of the Milgram obedience studies, subjects are walked through every stage of the experiment, being given only the information available to genuine experimental subjects at each stage—it may gain in accuracy of subjects' self-reports at the expense of ethical losses. Simulation subjects may undergo stresses similar in quality if not in intensity to those experienced by genuine subjects, and at the end they may feel similarly misled as to the actual scope or intent of the experiment they have helped to simulate. Using another example, the fact that Philip Zimbardo's prison study10 was a simulation does not divest it of the ethical dilemmas originally confronted in nonsimulation experiments. 
Further, even though simulation studies rendered sufficiently close in detail to the original experiment may yield similar data from their "as-if" subjects, serious doubt would always remain about the validity of a simulation study if no "real" experiment were available for comparison. The substitution of simulation studies for experiments experienced by their participants as real thus appears to be a commendable but unrealizable dream.

The Harm-Benefit Calculus

Here is where I must take an explicitly consequentialist position. Most social scientists are consequentialists, at least to some degree; otherwise they would not take the trouble to do social scientific research. The difficulty of framing and executing empirical studies, the high level of ambiguity that must be tolerated in the typical results, the ethical distress that never quite goes away—all these must be offset by the hope that some kind of social benefit will derive from the research in the long run. Otherwise, you might as well become a philosopher.

Remarkably little direct harm has ever come to subjects from academic social scientific research. I say "academic" because I am not willing to attempt any general ethical justification for the research programs of the CIA, General Mills, or the Church of Scientology, social scientific though they may be at times. They are not subject to the same kinds of regulations as academic research, and they are not open to free discussion or to the informal influence of scientific peer pressure. In terms of academic research, a potential subject is in far less physical danger during virtually any kind of research participation than in driving across town to an experimental session, or in spending the research hour playing tennis instead. Psychologically, as researchers have often pointed out to institutional review boards, the principal danger to the typical subject is boredom. The individual is at much greater psychological risk in deciding to get married, to have a baby, or to enroll as a college student—all activities typically entered without truly informed consent—than in participating in practically any academic research study ever carried out by a social scientist.

But what of the more notorious examples of psychologically stressful research? I worked behind the scenes of the most notorious of all, the Milgram obedience studies,11 and I interviewed a substantial sample of the participants later,12 as did (independently) a psychiatrist.13 The remarkable thing about the Milgram subjects was not that they suffered great persisting harm, but that they suffered so little, given the intensity of their emotional reactions during the experiment itself. Through a combination of careful debriefing and their own standard coping mechanisms, nearly all subjects were able to process the Milgram experience as interesting but as basically irrelevant to their long-term psychological comfort. Some commentators refuse to believe this, but to do so they must ignore not only the data on the Milgram subjects but also a great deal of evidence about human psychological resilience under much more traumatic conditions—from birth, through adolescence, to terminal illness. It may be possible to find an occasional individual who suffers some kind of lasting distress from an encounter with an inept experimenter, or from some unwanted self-insight induced by research participation.14 But a botched debriefing cannot be held against the bulk of responsibly conducted studies, and a psychologically fragile individual's reactions to carefully managed research participation are unlikely to be any worse than to an emotionally involving movie, a fire-and-brimstone sermon, or a disappointing job interview.

And what of the indirect harms that might come from a deceptive study? I have already mentioned the possibility that deceptive research will generate a greater distrust of social scientists and of other people in general. Researchers should take such concerns into account in limiting deceptive research practices to a necessary minimum. But these concerns are often exaggerated, at times by elevating social scientists into sacred protectors of the truth who must never be caught in even momentary deception. The general public does not see social scientists that way, according to various public opinion polls. Furthermore, abuses of public trust by politicians, physicians, lawyers, ministers, business leaders, and other supposedly trustworthy individuals touch much more directly on people's lives than the encapsulated deceptions of social scientists. Indeed, it could reasonably be argued that certain social scientific research practices, such as prompt debriefing after deception, should work to promote trust, in contrast to the attempts of these other societal leaders to maintain deceptions for as long as possible.

Given the generally minor harms of properly conducted social scientific research, what are the benefits? It must be acknowledged that few social scientific research studies will produce any immediate major benefits to participants or to society. Unless the researcher is testing a specific aspect of a carefully formulated social program, itself derived from earlier and more basic research, the findings are likely to be useful only in terms of adding to the broad body of social scientific knowledge, much of it tentative and even contradictory. That is the way of science, and it appears to be even more the way of social science, for reasons we need not examine here. Any insistence that social science research always meet criteria of immediate utility would make it a mere adjunct of business, government, and military interests and would frustrate forever its development as a source of basic scientific discoveries useful in a broad range of applications.

Such preclusion of basic social scientific research would carry its own long-term ethical costs, usually ignored or dismissed by those intent on eliminating short-range costs. It is on this point that the ethical commitment of many social scientists is often misunderstood by professional ethicists. If your planned research clearly has some short-term ethical costs in terms of subject stress or deceptive practices, say the ethicists, why not use a less intrusive methodology or change your research topic entirely? Were researchers mainly concerned with professional respectability or academic advancement, one of those alternatives would indeed be the sensible course to take, and in fact some researchers have made such a shift—or have quit doing research altogether—in the face of difficulties with critics and IRBs. But other researchers continue to feel ethically obligated to investigate serious human issues in ways that are powerful enough scientifically to contribute to the expansion of basic knowledge, not merely in ways that will generate another journal publication as inoffensively as possible. These researchers are usually concerned with the immediate welfare of their subjects, and with the potentially negative social effects of such practices as deception; their critics have no monopoly on such concerns. But these researchers also perceive the dangers in sins of omission, of failures to do the responsible basic research that may contribute to major long-run social benefits. Such commitment to the active pursuit of usable, slowly cumulative information about human behavior may not be shared either by the more urgently involved practitioner or by the more contemplative philosopher; but its ethical foundations are genuine.

Research projects do differ, however, in their degree of potential benefits, and the differences may be important for our ethical decision making. How do we decide whether a proposed study has enough potential benefits to outweigh its potential harms—given that both are potential rather than actual? If there were easy answers to this question, we would not still be debating it. Our estimates of potential harms and benefits must be very crude at best, informed to some extent by previous experience but retaining a greater margin for error than any of us would like. Unless we decide simply to close down large areas of social scientific research, we must continue making such crude estimates and acting upon them, as individual researchers or as peer reviewers of research by others. Some kind of peer review is essential in assessing potential benefits, though it need not always be as extensive or as formal as certain government agencies now insist. If, by rough estimate, a piece of proposed research may potentially yield minor harms offset by minor benefits, it is not worth much ethical agonizing by anyone. If the rough estimate suggests minor benefits and major harms, we can easily reject the research as ethically unacceptable. If the estimate suggests minor harms and major benefits, most of us would be willing to approve the research, though we might wish to assess its actual harms and benefits later and to revise our judgmental criteria accordingly. It is only when our rough estimates suggest major potential harms and major potential benefits that we really begin to worry about the crudity of our estimates—and about what specific meaning to invest in such admittedly ambiguous terms as "major potential benefit."

We have already considered the question of harm with regard to the specific example of the Milgram obedience studies. Let us look at the question of benefit in the same context, since estimates of "major benefit" have been more disputed there than in perhaps any other example. Several of Stanley Milgram's critics appear to assume that his claims for the social value of his research were post-hoc justifications intended to quiet criticisms of his deceptive and stressful experimental practices. But Milgram had made a rather detailed case for substantial potential benefit in his original research proposals, and his research was funded on that basis. He had read widely concerning the events of the Holocaust and the various attempts to explain its origins. He did not propose yet another intellectual analysis, or a psychological study of some phenomenon previously much studied and perhaps vaguely related to the Holocaust, such as conformity to peer pressures. Instead, he proposed a series of studies that would examine specific contextual variables associated with greater or lesser obedience to a realistic command to administer severe physical pain to another individual. Doubtless there are many steps between such displays of individual obedience and the occurrence of a social phenomenon as broad and intense as the Holocaust. But it is reasonable to assume that laboratory research on destructive obedience could make a useful contribution to the understanding of destructive obedience on a large scale, even though it might not be the only way or even the single best way to proceed in elucidating the genesis of Holocaust-like phenomena. 
Further, it is reasonable to assume that better and wider public understanding of the conditions most likely to promote destructive obedience on a small scale could have a prophylactic effect with regard to destructive obedience on a large scale—although, again, there are surely many forces working in a complex society to strengthen or weaken tendencies toward genocidal Final Solutions. Thus, I think Milgram made a good case concerning potential benefit, on the basis of the issues involved and the means by which he proposed to study them. It is hard to conceive how anyone could make a better case, before the fact, for major benefits from basic social scientific research.

Furthermore, I think a case can now be made that the Milgram research has actually yielded substantial benefits in the years since its publication. Most ethical discussions of deceptive social scientific research heavily stress harm and lightly sketch benefits, as if any negative effects would reverberate through all of human society, while any positive effects would hardly resound beyond laboratory walls. That is not the way the diffusion of knowledge works in our society. I would suggest that Solomon Asch's deception-based research on social conformity helped sensitize a generation of college students to the dangers of conformism. I would suggest that Asch's student, Stanley Milgram, has helped to sensitize another generation, well beyond campus boundaries, to the possibility that they themselves could under certain circumstances be as obedient as the sternest Nazis. As much as Milgram's research offends certain moral sensibilities, it has also dramatized serious ethical choices so provocatively that virtually every introductory psychology and social psychology textbook of the past decade has prominently featured Milgram's findings.15 Some social scientists and ethicists find it implausible that laboratory studies of individual psychological phenomena could yield any useful understanding of the dynamics of a Holocaust. I find it even more implausible to assume that research with the broad dissemination and emotional impact of Milgram's studies has not already generated enough introspection and discussion to diminish significantly the likelihood of another Holocaust-like phenomenon, at least in this country.

Few social scientific studies are likely to have the individual force of Milgram's obedience research. But judgments about their potential benefit can be made in similar fashion, on the basis of the researcher's serious consideration of factors likely to play a role in major social phenomena, the choice of apt research strategies, and the social implications of anticipated (or unanticipated but possible) research findings. At no time can these judgments be so definitive or so overwhelming as to outweigh certain kinds of research harm. But in combination with the remaining criteria, they may lead to a reasoned decision that the limited potential harm deriving from deception and other aspects of the research design is outweighed by the likely long-term benefits of a particular research project as a part of the ongoing social scientific research enterprise.

The Option to Withdraw

One of the objections most often raised against research deception is that it prevents subjects from deciding whether to give their fully informed consent to research participation. "Informed consent" is a concept that grew out of medical experimentation, where the only way for patients to make an effective decision about research participation is to know well in advance what kinds of physical interventions might be imposed upon them. Many medical interventions have potentially serious and virtually irrevocable consequences, and if the patient fails to say "No" before being anesthetized, cut open, injected with cancer cells, infected with bacteria, etc., there may be no way of effectively saying "No" later. The situation is usually very different in social scientific research. As already suggested, the intervention is most often minor and the consequences are temporary or reversible (as by post-research debriefing). Perhaps even more important in an ethical sense is the possibility of an ongoing process of informed consent. Even if, for purposes of conducting a study, subjects must be asked to give their consent to participation partly on the basis of misleading or incomplete information, they can continue their assessment of the study's costs to them as it proceeds, and can be guaranteed the right to quit at any point where they decide that the costs are becoming greater than they wish to bear. This process of "ongoing informed consent" is implicit in many research situations, including interviews and questionnaires where the subject is fully in control of the information he or she supplies. In circumstances where the possible harms are greater—as when a questionnaire deals with particularly sensitive issues, or when an experiment manipulates social or other pressures to continue participation beyond normally tolerable limits of stress—the subject should clearly and emphatically be informed in advance of the right to stop participating at any time without penalty.

In some instances, a research procedure may have the potential to impose upon a subject a psychological harm well outside those encountered in normal social interactions, under circumstances where the subject is misled as to what is about to happen and is unable to withdraw his ongoing consent in time to avoid the harm. Such instances more closely resemble physical intervention without informed consent in medical research than does the usual social scientific study, and they should be placed under the same constraints as medical interventions. I am thinking here of such studies as those in which a subject fills out a personality questionnaire, then is suddenly and falsely told that the questionnaire reveals hidden homosexual tendencies or other characteristics that are highly discrepant from the subject's own self-image. Most subjects appear to accept rather easily, during debriefing, the information that an apparently realistic experimental situation has been fabricated or that a recently introduced stranger is not nearly as bad a person as the experimenter has made him out to be. But I suspect that a false imputation of homosexuality or neurosis, made by a psychologist, may continue to raise self-doubts well after the psychologist has changed stories. The characterization is not a consequence of the subject's own behavior, and its sudden attribution to the subject is made without an opportunity for ongoing informed consent.

The Milgram obedience studies have been criticized on somewhat similar grounds. But I do not see the Milgram studies as falling in the same category, since subjects in those studies were never falsely characterized. Subjects who shocked the "victim" unmercifully did so with little persuasion from the experimenter and much resistance from the "victim." They had the choice throughout the experiment of quitting at any time, and in fact a substantial portion of subjects did quit. A continuing opportunity was provided subjects to make a moral decision, and no force or unusual psychological technique was brought to bear to interfere with that choice. In such instances, where research participation brings unsought self-knowledge, I do feel that the researcher has a responsibility to help the subject cope with such self-knowledge and to give the subject some guidance in integrating it satisfactorily into his or her self-concept over the long run. Milgram's debriefing procedures were designed to do that, and the follow-up research suggests that they were effective in that regard. Self-knowledge in itself, even unsought self-knowledge, does not seem to me an ethically negative "risk." Ethically concerned individuals of many persuasions and cultural roles, including preachers, teachers, novelists, and charismatic leaders, have attempted throughout history to induce such knowledge in anyone whose attention they could momentarily catch, even by deceptive devices (such as embedding lessons about human nature within an apparently innocuous entertainment). The induction of unsought self-knowledge need not be seen as a major mission of social scientists, but neither should it be seen as an evil from which research subjects must be protected at all costs.

Temporary versus Lasting Harm

Though I am primarily a consequentialist rather than a deontologist, I am unwilling to balance the certainty of lasting harm to a misinformed subject against the possibility of general benefits as a result of a particular study. But temporary discomfort, anxiety, or even pain may fairly be weighed among the harms in a harm-benefit ratio, as long as the subject is permitted to cease participation whenever the distress becomes personally intolerable and as long as no lasting scars (physical or psychological) result. Even when these conditions are met, procedures that generate temporarily intense anxiety or pain should not be employed casually; their use must be more than offset by the potential value of the research. Furthermore, as with unsought self-insight in the previous section, the researcher is obligated to take an active role in restoring the anxious or agitated subject to his or her normal emotional state. The debriefing period is usually the opportune time to do this.

Debriefing and Publicity

The debriefing period, properly used, is a time for limiting or eliminating several potential harms of deceptive research practices. First, it provides the occasion to diminish anxiety and other unpleasant emotional reactions, and to give the subject a sense of the true value of his or her participation as a research subject. Instead of leaving the subject with a sense of having been tricked, the researcher should honestly communicate the difficulty or impossibility of doing research on the topic at hand with full subject foreknowledge, and should describe the efforts necessary to give subjects a realistic—if deceptive—experience in a controlled setting. Second, the debriefing process restores a sense of honesty to the researcher, and by interrupting the role of arch-manipulator, it brings him or her back toward the human level of the subjects. Third, it provides an ethical model to researchers, subjects, and others of how a necessary deception can be limited in its consequences, how deception can be used without destroying the integrity of human social contacts or the autonomy and self-esteem of the individuals involved. Given the vast amounts of deception which occur in ordinary social life without any intentional debriefing, the use of deception linked with debriefing might even have a salutary effect upon the public sense of ethical standards, as already suggested, rather than producing the invidious effects predicted by certain critics of deceptive practices.

Finally, the requirement of debriefing is ethically advantageous in that it increases the level of publicity connected with the research. I am not referring to publicity in the usual sense of newspaper headlines and talk-show appearances, but to publicity as the term has been used by John Rawls and subsequently by Sissela Bok. As Bok puts it, "According to such a constraint, a moral principle must be capable of public statement and defense."¹⁶ The general requirement of debriefing means that a researcher must at some reasonable point publicize his or her deceptive research procedures to the individuals most likely to be at risk as a result, namely, the subjects, and must therefore be able to justify the deceptions to them or risk some kind of retaliation from them. But publicity must involve more than the researcher's interactions with the subject, as the latter part of boundary condition 5 suggests.

Peer review and reviews by institutional review boards mean more publicity, more occasions when the researcher must be able to offer an acceptable ethical defense of any deceptive practices he or she feels to be required in the chosen research area. Still other professional practices common in the social sciences involve further publicity: peer reviews for academic promotions; peer reviews by granting agencies, in addition to IRB reviews; presentations of research procedures and findings at professional meetings; journal review and publication of research papers.

Conclusion: The Salutary Consequences of Publicity

Several years ago I wrote a short piece for Psychology Today in which I compared and contrasted experimental social psychologists with professional con artists.¹⁷ The similarities, which were considerable, mainly concerned the practice of deception. The differences, which were also considerable, included such things as the principal motivations of psychologists vs. those of con artists and the attitudes of the two groups toward "subjects" or "marks." The major difference concerned the matter of publicity. Con artists avoid publicity as much as possible, and thus their deceptive practices can grow unchecked except by sheer force of law. Social psychologists, however, ordinarily seek publicity in the form of professional presentations, and have also by and large accepted its necessity in such forms as debriefing. Publicity of a perfectly ordinary professional sort was how the Milgram studies and others became the focus of a great deal of professional discussion of ethics, eventually widening to include discussion in the news media, on television drama programs, and in various circles of government. I say "publicity of a perfectly ordinary professional sort" because no scandal was involved, no hidden deceits were dramatically revealed, no damage suits came to court. Milgram talked and wrote about his research, and other people responded with their views on the ethical considerations involved, and Milgram responded in turn with his, and the dialogue continues.

The dialogue has by no means been a useless one. Deception in social science research has become much more constrained over the past fifteen years, in large part as the result of such voluntary publicity rather than through the coercion of federal regulations and financial threats. The federal government may ultimately outlaw deception in social scientific research altogether, in response to political pressures stronger than social scientists can muster—in which case I would not be surprised to see the spread of bootleg deception research on and off university campuses, conducted by researchers who feel they cannot study certain major issues effectively by any other means. That would be the ultimate ethical disaster for deception research, since in secret it would be hardly more constrained than the con artist's trade. The ultimate condition under which deception research is ethically justifiable is out in the open, where its practitioners are continually forced to present their justifications to others and where their critics must resort to reason rather than coercion. Ethical decision making is not a closed system in which a set of rules can be ordained once and applied to all situations forever after. I do not have all the answers about deception, its effects, and its reasonable limits; nor does anyone else. Continuing publicity about the kinds of deception social scientists see as necessary, and about the controlled conditions under which deception should be tolerated in research, will feed the ongoing dialogue about deception in such a way as to make our decisions about it increasingly more realistic, more sophisticated, and more ethical.


1. Stanley Milgram, "Subject Reaction: The Neglected Factor in the Ethics of Experimentation," Hastings Center Report 7, no. 5 (1977): 19.

2. Sissela Bok, Lying: Moral Choice in Public and Private Life (New York: Vintage Books, 1979), pp. 34-49.

3. Measure for Measure, act 3, scene 1. In William Shakespeare, The Comedies (New York: Heritage Press, 1958), p. 267.

4. Bok, Lying, pp. 32-33.

5. Henry A. Murray, Explorations in Personality (New York: Oxford University Press, 1938), pp. 26-28.

6. Elliot Aronson and J. Merrill Carlsmith, "Experimentation in Social Psychology," in G. Lindzey and E. Aronson, eds., The Handbook of Social Psychology, 2d ed. (Reading, Mass.: Addison-Wesley, 1968), vol. 2, pp. 22-23.

7. Jonathan L. Freedman, "Roleplaying: Psychology by Consensus," Journal of Personality and Social Psychology 13 (1969): 107-14.

8. Don Mixon, "Instead of Deception," Journal for the Theory of Social Behavior 2 (1972): 145-77.

9. Daniel M. Geller, "Involvement in Role-Playing Simulations: A Demonstration with Studies on Obedience," Journal of Personality and Social Psychology 36 (1978): 219-35.

10. Philip G. Zimbardo, "Pathology of Imprisonment," Society 9, no. 4 (1972): 4-6.

11. Stanley Milgram, Obedience to Authority (New York: Harper & Row, 1974).

12. Alan C. Elms, Social Psychology and Social Relevance (Boston: Little, Brown, 1972), pp. 153-54.

13. Stanley Milgram, "Issues in the Study of Obedience: A Reply to Baumrind," American Psychologist 19 (1964): 848-52.

14. Diana Baumrind, "Metaethical and Normative Considerations Covering the Treatment of Human Subjects in the Behavioral Sciences," in E. C. Kennedy, ed., Human Rights and Psychological Research (New York: Crowell, 1975), pp. 37-68.

15. In a recent tabulation of frequency of citations in introductory psychology textbooks, Milgram was found to be twelfth in rank among all psychologists, just below Carl Jung and higher than William James, John B. Watson, Abraham Maslow, or Leon Festinger. Daniel Perlman, "Who's Who in Psychology," American Psychologist 35 (1980): 104-6.

16. Bok, Lying, pp. 97-112.

17. Alan C. Elms, "Alias Johnny Hooker," Psychology Today 10, no. 9 (1977): 19.

[This paper was originally published in Ethical Issues in Social Science Research, edited by Tom L. Beauchamp, Ruth R. Faden, R. Jay Wallace, Jr., & Leroy Walters, pp. 232-245. Baltimore, MD: The Johns Hopkins University Press, 1982. Copyright © The Johns Hopkins University Press. Reprinted by permission.]
