Stay: A History of Suicide and the Philosophies Against It (Review)

CN: Suicide. (If you are having thoughts of suicide or self-harm, please call the 24/7 National Suicide Prevention Hotline at 1-800-273-8255.)

In Stay: A History of Suicide and the Philosophies Against It (2013), the historian Jennifer Michael Hecht passionately advances a secular argument against suicide. Tracing a thread of intellectual thought that has underpinnings in ancient philosophy, Hecht lays out an anti-suicide thesis that emphasizes the obligations each of us has to our community as well as our future self. Hecht weaves in history, philosophy, social science, and literature in her quest to uncover the factors that exacerbate suicide in our society. From public policy to private conduct, Hecht outlines a healthier approach to addressing the problem of suicide. Additionally, Hecht aims to counter the popular perception of secular philosophy as being permissive with respect to suicide. To the contrary, she argues, luminous thinkers throughout history have converged on a resolutely anti-suicide message.

Stay grew out of a blog post Hecht wrote in 2010. Deeply troubled by the suicides of two of her close friends, she issued a simple and adamant appeal to those struggling with suicidal thoughts:

Don’t kill yourself. Suffer here with us instead. We need you with us, we have not forgotten you, you are our hero. Stay. (xi, quoted from her original essay in Best American Poetry Journal)

The post went viral, leading to an op-ed in The Boston Globe, which in turn led to a publisher inviting her to develop these ideas into a book. The events that led up to the publication of Stay are noteworthy, because they account for some of the book’s shortcomings. Hecht describes her 2010 blog post as a “manifesto” written “in the heat of emotion” (xii). Indeed, the moral message of her 2010 post is piercingly clear, though the arguments are at best underdeveloped. Not to impugn Hecht’s aptitude as a scholar, but rarely does a manifesto withstand intellectual scrutiny unscathed. Stay attempts to preserve the inspirational qualities of its source while simultaneously being a work of scholarship, with limited success. Furthermore, when Hecht-the-scholar and Hecht-the-moralizer come into conflict, Hecht-the-moralizer wins decisively.

The first pillar of Hecht’s anti-suicide argument is that each of us has a responsibility to our community. We tend to think of our community as the group of people whom we see regularly: friends, family, co-workers, and neighbors. In the case of suicide, however, the scope of our influence extends beyond this small in-group. Consequently, Hecht broadly defines community as everyone who might come to know about—and hence be influenced by—your suicide. This definition of community can help us recognize cases when we are impacted by the suicide of someone living far away. As Hecht observes, “geography camouflages our spheres of influence” (154), making it difficult to recognize suicide clusters at a distance.

Suicide clustering is a phenomenon in which one suicide leads to more suicides in the community. To buttress her argument that our suicide harms the community, Hecht reviews the evidence for suicide clusters. Far from being an armchair theory, suicide clustering is a real phenomenon with strong empirical support. Suicide clustering is sometimes called suicidal influence or suicidal contagion; Hecht opts for suicide clustering since it’s more neutral.

Stay compiles many arguments against suicide. Even so, Hecht is critical of many traditional anti-suicide messages, particularly those that portray suicidal people as selfish, cowardly, or foolish. Hecht persuasively argues that these shame-based messages are not only cruel, but counterproductive. Heaping shame onto a suicidal person is likely to exacerbate, rather than alleviate, his or her despair.

In light of this, Hecht takes care to frame her anti-suicide arguments in positive terms that affirm the reality of someone’s feelings even as they undermine the soundness of his or her judgment. Instead of castigating the suicidal for being selfish, Hecht wants us to reassure them that everyone considers their life precious. Instead of equating suicidal thoughts with weakness, we should express gratitude for the bravery it takes to endure these pernicious thoughts.

However, in spite of her focus on affirming language, Hecht devotes a lot of time to a message that stubbornly resists an affirming spin: the fact that your suicide might trigger a suicide in the community that wouldn’t have happened otherwise. In other words, suicidal influence. Rather than handling this true-but-depressing corollary of suicidal influence with compassionate deftness, Hecht is blunt to the point of crassness: “[o]ne of the arguments I hope to bring to light is that suicidal influence is strong enough that a suicide might also be considered a homicide.” (5) It baffles me that Hecht would mar her overall positive message with this cruel characterization of suicide as a kind of “delayed homicide.”

When we think of suicide clusters, we tend to think of someone reacting to a real suicide in their community. However, fictional depictions of suicide also have the potential to trigger suicide clusters. To understand why the suicide of a fictional character can lead to suicides in the real world, we must examine the psychology of someone vulnerable to suicide.

First, being exposed to depictions of suicide doesn’t directly “inspire” suicides. Rather, it acts as a model for others to emulate. In other words, seeing examples of suicide “releases” latent suicidal impulses in people who are already vulnerable. According to social modeling theory, our sense of what is normal is largely shaped by the actions of those around us, particularly those we identify with. Even if we maintain our belief that suicide is wrong, seeing an example of it in our community nevertheless normalizes it to an extent. The reality of suicide clustering obliges those who write about suicide to ask themselves hard questions. For fiction writers, does your narrative demand suicide, or is it just a lazy trope? For newspaper editors, is a suicide newsworthy, or simply sensationalist? This is not to say, however, that every depiction of suicide is irresponsible. The flip-side of narrative’s capacity to inspire suicide is narrative’s capacity to model anti-suicide strategies.

When evaluating suicide through a scientific lens, scholars tend to emphasize the impact of social forces rather than specific ideas. In one sense, this is understandable. Suicide clustering is a social phenomenon, and the means of establishing cause-and-effect are fairly straightforward. At the level of individual human minds, by contrast, assessing the impact of specific ideas on suicidal behavior is difficult. There is a lacuna in our understanding of how specific memes impact people on an individual basis. What soothes one person’s despair might exacerbate someone else’s. We should not expect to find a one-size-fits-all anti-suicide message.

According to Hecht, not only do specific ideas matter, but the impact of specific ideas can be decisive. Enlisting the suicide contagion metaphor, Hecht observes:

What is contagious is an idea. Suicide begins as an idea. Remaining alive after one has contemplated suicide also begins as an idea. It may be possible to encourage anti-suicide contagion. (171)

The best way to encourage such anti-suicide contagions is by fostering an intellectual climate in which discussions of suicide are less taboo. In such a marketplace of ideas, diverse anti-suicide thoughts can emerge and take hold in people’s minds.

Some of the most fascinating passages in Stay are those in which Hecht scrutinizes the role of suicide in our literature. Her interest in literature is two-fold. First, literary works can be historical resources, reflecting the prevailing attitudes toward suicide in the time and place in which they were written. Second, literature has often been a source of suicide contagion. One of the earliest documented examples of suicidal contagion is The Sorrows of Young Werther, a novel published in 1774 by the German poet Johann Wolfgang von Goethe. In the novel, Werther is a young man who shoots himself after a romantic failure. In the years following its publication, many European men killed themselves in the same manner as the protagonist after suffering similar misfortune.

The Sorrows of Young Werther is widely regarded as the beginning of the Romantic movement. To the present day, suicide has remained a potent Romantic metaphor. Hecht perfectly encapsulates the appeal of suicide as a literary trope, writing, “a person choosing to die or to live exists in the very crucible of human morality and meaning.” (232) Hecht revisits the suicides of many famous characters in literature, highlighting how impulsive and destructive their deaths were, even in the context of the story. In my view, Hecht succeeds at denuding these fictional suicides of their poignancy.

The second pillar of Hecht’s anti-suicide argument is that people have an obligation to their future self. Much like the community we’d be forsaking, our future self has a stake in our continued existence. After all, however dire the current situation is, things might improve in the future. Our future self might be grateful that we chose to remain alive.

As someone quite interested in the philosophy of the self and free will, I find it easy to become distracted by the many arcane thought experiments and weird edge cases that typify this domain of philosophy. The crux of Hecht’s argument is that we ought not to respect the soundness of a suicidal person’s judgment. In the case of a suicidal person, the appearance of an autonomous person making a rational judgment is an illusion:

What may look like an integrated person making an impulsive move might also be seen as a person in a particular mood acting quickly so as not to allow input from him- or herself in different moods. (187)

I agree with Hecht’s description of an individual’s mind as a parliament of competing factions. Preventing someone from killing him- or herself is like intervening in a neighboring nation’s affairs upon observing that a tiny faction has seized control of the nation and is on the verge of unleashing nuclear weapons on its own people.

However, I doubt that emphasizing the rights of one’s future self will be an effective deterrent to suicide. In subjective terms, our relationship to our future self can feel as remote and inconsequential as any given relationship to someone in our community. If you’ve ever tried to exercise control over your diet, you understand how it can be a struggle to act in the best interest of your future self even on a matter as simple as resisting a piece of candy. An even more apposite analogy can be made to drug addiction, which causes someone not only to discount the interests of their future self, but to knowingly engage in self-destructive behavior demanded by their present self.

Speaking of addiction, Hecht highlights the impulsiveness at the heart of many, if not most, suicides. Nowhere is this impulsive component starker than in accounts of people who have attempted suicide. Many report regretting their decision the moment after they made it. There are even anecdotes about people who have jumped off a bridge and say they remember feeling regret in the moments before hitting the water. The fact that it is possible to commit such a final act for rash and frivolous reasons undermines the Romantic vision of suicide as a profound philosophical statement undertaken after calm contemplation. Moreover, these accounts reveal that despite our perception of suicide as the product of personal choice and innate disposition, the role of luck can be decisive. If you have a single suicidal impulse in your entire life, and you just happen to be crossing a bridge at the time, you are terribly unlucky.

On the surface, the phenomenon of suicidal influence seems to indicate that we should suppress discussion of suicide in public discourse. After all, however well-meaning you are, or however vociferously anti-suicide your message is, merely reminding people that suicide is a possible course of action will exacerbate the problem. Hecht strongly disagrees with this interpretation, arguing instead that we should talk more frankly about suicide. Hecht makes a two-pronged argument in favor of greater openness regarding suicide. The first is a refutation of the knee-jerk interpretation of suicide clusters:

From a practical standpoint, too, it makes sense to give thought to these issues. If we try to suppress the whole subject, if we quarantine suicide from our consciousness and from public conversation, we run the risk of suddenly confronting it, alone and unarmed, when we are most vulnerable. It is much better to remember that this is part of the human experience and to avail ourselves of the conceptual barriers to suicide that have been provided through history. (234)

Secondly, Hecht argues that the difference between an appropriate and an inappropriate discussion of suicide is context. The best response to bad representations is not to discourage any representations, but to insist on better representations. Hecht writes, “[j]ust as caring and realistic discussion of suicide can help curtail suicide influence, sensitive, informed depictions of suicide in media can do the population good rather than harm.” (171) Hecht provides an example: “In one study of three television movies including a suicide, suicides increased after two, both of which concentrated their attention on the suicide victim. The one that was not associated with a rise in the suicide rate concentrated on the grieving parents.” (171) It is both possible and necessary to treat suicide in a way that neither glamorizes nor trivializes the issue. Fortunately, by most standards, media representation of suicide has improved dramatically in response to evidence and advocacy.

Echoing the classic use/mention distinction in philosophy, Hecht underscores the difference between thinking suicidal thoughts and contemplating suicide as a topic. Hecht’s argument that context matters when discussing suicide had the side-effect of relieving my own guilt about being interested in this topic. Embarrassing as it sounds, it was gratifying to be reassured that my academic interest in suicide was not a macabre fascination with death, nor some nebulous internalized suicidal desire. It seems strange that an interest in suicide is regarded as unseemly in a way that an analogous curiosity about serial killers is not. There are entire television networks seemingly dedicated to true-crime documentaries about disturbing murders. Some of these programs are trashy, prurient, and exploitative—like the salacious tabloid stories about the suicides of celebrities—but others are not. Just as there are ways to tell stories about suicide that are respectful and illuminating, there are ways to study suicide without meriting worry.

Jennifer Michael Hecht is perhaps uniquely qualified to write a book on suicide. As the author of Doubt: A History, she has the secular-humanist bona fides to reassure skeptical readers that she harbors no hidden sympathy for religious nonsense. Her philosophical background notwithstanding, Hecht insists that her anti-suicide arguments apply irrespective of your beliefs about religion and the afterlife. Without granting the truth-claims of religion, Hecht concedes that religious opprobrium for suicide probably did have a deterrent effect. However, she counters, religious justification is no longer necessary, since there now exists an equally forceful secular argument against suicide:

Pythagoras taught that each of us is stationed at a guard post, responsible for attending to it until we are dismissed. Plato would borrow the idea, which remained a cogent metaphor for centuries. (25)

This metaphor is very agreeable to Hecht’s thesis, and she refers to it frequently. It spotlights the potentially horrible consequences your death will have on your community. As with staying alive, standing watch isn’t necessarily fun, but you must do it anyway.

Hecht is confident that this metaphor will prove just as efficacious as religious commandments at discouraging suicide. After all, it has all the same elements, just swapped out for secular terms. Instead of being rewarded with an eternity in heaven, you will receive gratitude from your community for having protected them. Correspondingly, the terror of hell is replaced by the fear of being posthumously dishonored by the community that you deserted. It’s worth noting that the guard post metaphor presupposes a community that you depend on to such an extent that the ignominy and material deprivation you’d experience in its absence would be devastating. Moreover, the guard post metaphor ignores the fact that suicide is often provoked by the community. A toxic community can be worse than no community at all.

It is impossible to talk about suicide without discussing the rise of individualism. Until recently in human history, one couldn’t survive as an individual. Survival depended on forgoing one’s own interests to serve the interests of the community. This changed with the development of complex modern societies. Now, the state and the market can provide the rudiments of an autonomous life, enabling someone to live independently of any community. The balance of power has shifted toward the individual. According to this analysis, any effective anti-suicide argument should appeal to one’s sense of being an individual rather than a community member. Inasmuch as the metaphor of the guard standing watch relies on the traditional view of the community as essential for one’s livelihood, it is apt to be dismissed.

Alongside her arguments about our obligations to the community and our future self, Jennifer Michael Hecht endeavors to undermine the widespread perception that “secular philosophy is without exception open to suicide.” (232) Hecht contends that this permissive attitude is not mere anti-secular propaganda but the dominant opinion in secular circles. Even if Hecht has cherry-picked quotes favorable to this interpretation, I was nevertheless surprised that these arguments existed at all. I consider myself fairly knowledgeable about secular philosophy, and I was not aware of many of these nuanced anti-suicide messages.

But why is there a consensus in modern secular culture about the acceptability of suicide? In Hecht’s assessment, the secular community’s permissive attitude toward suicide is a historical accident, rather than a logical extension of core philosophical beliefs. In Hecht’s account, Enlightenment philosophers (notably Voltaire and Hume) defended suicide as part of a wholesale rejection of religious doctrine. In essence, Hecht charges early secularists with throwing the anti-suicide baby out with the religious bathwater. At first, I was dismissive of this possibility, because in my mind the entire secular worldview—including its attitude toward suicide—seemed like a coherent system built up from basic principles, unshaped by historical contingencies. For Hecht to write that early secularists might have praised suicide for contrarian reasons aroused cognitive dissonance. But then I remembered that the same phenomenon seems to have taken place with spiritual experience, which was disparaged as part of a wholesale rejection of religious belief. Currently, Sam Harris and other secularists are trying to disentangle spiritual experience from religion. As a big supporter of the secular spirituality project, I found that the thought of Hecht being engaged in a complementary project made her position seem more credible.

This absence of credible dissenting views is one of Stay’s major flaws. “Writers through history have given us conceptual barriers to suicide with which we ought to be familiar, as a culture.” (12) While I completely agree that we should all be apprised of the best arguments against suicide, I also think we should be aware of the serious modern philosophers who have put forward nuanced arguments about how suicide is sometimes permissible. For example, consider the provocatively titled Better Never to Have Been: The Harm of Coming Into Existence (2006), by the philosopher David Benatar. I have not read Benatar’s book, but if my understanding of the thesis is correct, the advisability of suicide is somewhat ancillary to the fundamental question of whether existence is preferable to non-existence. Judging from the reviews, Benatar’s book is part of a lively debate in academic philosophy.

I am frankly unable to comment on the accuracy of the philosophy presented in Stay. I read one review of Stay that charged Hecht with misunderstanding Kant’s categorical imperative. I have no idea whether this criticism is valid, and, if so, whether it undermines her interpretation of Kant’s attitude toward suicide. I can, however, comment on the effect of the philosophical sections on a reader unfamiliar with the finer points of philosophy. The philosophy of this book is not the self-conscious, analytical, and impersonal stuff of academic treatises. Stay is philosophy in a literary style. Undoubtedly, people will derive inspiration from some of the passages, but such inspiration does not depend on having a strong understanding of the underlying philosophical principles.

Considering how most public discussion around suicide concerns terminally ill people, Hecht draws a distinction between despair suicide and what she calls end-of-life management:

this book is chiefly about despair suicide, rather than what might be called end-of-life management. People who are fatally ill and in terrible pain are dealing with different issues and may certainly be seen as altering the way that their illness kills them, rather than actually taking their own lives. (11)

It is understandable that Hecht would want to distance her thesis from the exigencies of modern politics, but I found this distinction to be perfunctory and dubious. Hecht must know that there are agonizing conditions that aren’t terminal but nevertheless deserve to be called ‘fates worse than death’. I would never dream of telling a person with locked-in syndrome, for instance, that I knew better than they did whether their life was worth living. I wish Hecht had dealt with this objection earlier, and more comprehensively. The unstated coda to the advice of “stay” is “stay, because maybe you will feel better in the future.” This is not the case for those who are locked in. The real revolution in secular morality is its emphasis on the actual determinants of suffering and well-being, rather than intuitively attractive but philosophically shallow notions of preserving life irrespective of whether it is actually worth living.

Perhaps Hecht reasoned that the subset of people seeking physician-assisted suicide represented a tiny fraction of all the people contemplating suicide, so dwelling on these cases would muddle the moral message for the bulk of (possibly suicidal) readers. It’s true that people tend to exaggerate the scope of their own suffering, and might therefore locate their despair along the same continuum as someone with a painful, degenerative, and terminal disease. If people judge that their personal despair is so irrevocable that it falls into the latter category, they might leap to the assumption that their own suicide is justified.

Toward the end of the book, Hecht reveals that she is not ignorant of this gray area between despair suicide and end-of-life management. In the final chapter, she acknowledges the nuance:

Of course, there are times when a person suffers from despair so intensely and for so long that it can seem merciful to let him or her end life. Perhaps there’s a level of emotional anguish that is more reasonably considered alongside painful fatal illness in regard to the appropriateness of suicide. There are many things that we say are wrong that yet have some exceptions. (231)

This concession did a lot to endear the book to me. Her position is scarcely different from mine at all. Consider how different this is from her attitude in the blog post that inspired this book:

I’m issuing a rule. You are not allowed to kill yourself. (x)

This is a rare case in which I forgave the author for eschewing nuance in the introduction. Hecht was put in a difficult situation. However rigorous a scholar Hecht is, she was certainly aware that vulnerable people were going to reach for her book as a life-line. Front-loading the book with sweeping injunctions against suicide has the effect of nudging such people in the right direction. To that end, Stay feels calculated to be as welcoming as possible. I have long maintained that every book has the potential to be a self-help book, and Stay is no exception.

It is generally considered illegitimate to comment on an author’s tone. Unlike facts and logic, tone is subjective. ‘Tone-trolling,’ as it is known, violates your responsibility as a reader to be charitable in your interpretation. Despite all this, Hecht’s characterization of suicide as “delayed homicide” made me bristle:

One of the best predictors of suicide is knowing a suicide. That means that every suicide is also a delayed homicide. (x, quoted from her original essay in Best American Poetry Journal)

How dare she imply that someone who has killed herself has the moral status of a murderer! (Not all homicide is murder, of course, but most people will conflate the two.) Though she never states it so plainly, the logic behind “delayed homicide” seems to be, “if you are contemplating suicide, you ought to feel so guilty about the possibility that your decision will indirectly result in the death of someone else that you should stay alive to prevent this from happening.” It resembles a threat: if you kill yourself, you will be blamed for the deaths of other people. Death will be no respite from disgrace. Even if the logic of this meme seems watertight, it remains to be seen whether such a message will actually have the intended effect.

The resemblance to the threat of punishment in the afterlife did not escape Hecht’s notice. In fact, it seems to have been her intent. In her original blog post, she writes:

In the West, in the past, the dominant religions told people suicide was against the rules, they must not do it, if they did they would be punished in the afterlife. People killed themselves anyway, of course, but the strict injunction must have helped keep a billion moments of anguish from turning into a bloodbath. (Best American Poetry Journal)

I doubt there will ever be a secular incentive to live that matches the religious fear of everlasting torment. Even so, Hecht is certainly right that an absolute rule against suicide would stop a great many of the impulsive suicides I mentioned earlier. But what about the people in excruciating, intractable pain who, rather than taking their own lives, persisted in living? I cannot help but worry about the religious believers who, holding to this injunction, endured torture, chronic pain, or degenerative disease beyond the point at which most modern people would seek physician-assisted suicide. If you are like me, and believe that certain situations are so intractably agonizing that suicide is a rational response, such religiously-motivated endurance would result in a loss of well-being.

Upon further reflection, I wonder why my reaction to the “delayed homicide” formulation was so negative. In other contexts, I am very willing to equate direct action with indirect action. When someone buys meat, for instance, I consider him or her partly culpable for the maltreatment of the slaughtered animals. In principle, I don’t think anyone should be immune from criticism—including the dead—but heaping scorn onto a person who has just killed him- or herself feels uncouth—literally adding insult to injury. Hecht should know that if the traditional shame-based messages to discourage suicide don’t work, then this sophisticated form of shaming is unlikely to work either. Public health authorities have already tried similar shaming messages to curb obesity and addiction, and such efforts have been shown to be ineffective or even counterproductive.

Another reason I doubt the effectiveness of the “delayed homicide” formulation is that it is based on probabilistic evidence. Suicide clusters can be demonstrated in a large population, but in any given case you might be lucky, and everyone in your community might remain resilient in the aftermath of your death. Furthermore, this label would successfully discourage suicide only in someone who was sensitive to the welfare of their community; someone outraged by their community might take perverse solace in the fact that their death will trigger suicides in the community. It is necessary to reckon with the phenomenon of suicide clusters when crafting public policy, but as an anti-suicide meme it may be counterproductive.

If I had to recommend a single passage from the book, it would be the conclusion. It represents the best aspects of the book while downplaying its weaknesses. It stands on its own merits as an uplifting anti-suicide message. It clarifies that in spite of the peremptory rhetoric at the beginning, the central mission of this book has been to educate people on the history of arguments against suicide:

I believe fiercely in the position I have here put forward, but rather than seeking to convince everyone that my position is the only correct one, I am seeking to make sure that alongside arguments in favor of the right to suicide, people are also aware of this argument that we must endeavor to live. (231)

This is not as banal as it first appears. In fact, this is the best encapsulation of her thesis, since it communicates the value of spreading knowledge without any moral posturing. In the end, Hecht is absolutely right that no one should die for lack of knowing these arguments.

I have no doubt that there are people for whom “it will bring solace to know that there is a philosophical thread extending over twenty-five hundred years that urges us to use our courage to stay alive.” (232) However, do not mistake this book for a disinterested overview of the history of suicide and the philosophy surrounding it. Stay is an explicitly anti-suicide book, eschewing scholarly nuance in favor of righteous passion. Hecht is unforgivably cursory in her consideration of dissenting voices. Because the important caveats are tucked away at the end of the book, one can reasonably conclude that the author wanted her audience to come away believing that despair can never become severe enough to justify suicide. I am tempted to recommend Stay solely on the basis of its writing quality. Hecht’s lucid prose, replete with delightful metaphors, is a pleasure to sift through. The book will be especially engrossing to anyone who appreciates the classics, especially the mythology and art of the ancient world. I submit that Hecht’s exploration of suicide in Shakespeare stands on its own merits as compelling literary analysis. Despite its limitations, Stay has whetted my intellectual appetite. I look forward to reading a more comprehensive and academic treatment of the topic, such as Suicide: The Philosophical Dimensions, by Michael Cholbi.


Exploring Creativity, Part IV: Brainstorming


In Exploring Creativity, Part I, I discussed how creative exercises rarely incorporate rationality. As an example of an exercise that explicitly applied creativity to overcoming bias, I spotlighted Eliezer Yudkowsky’s idea for avoiding the fallacy of the false dilemma. To disabuse oneself of the idea that a problem admits of only two options, Yudkowsky suggested, you should spend five minutes racking your mind for additional alternatives. I called this recommendation the Timer Task because Yudkowsky insisted that we measure five minutes by an actual clock, rather than our intuitive sense of time. Despite there being no experimental evidence to recommend this technique, I praised the Timer Task for at least acknowledging the synergy between creativity and rationality. Another aspect of the Timer Task that distinguishes it from most other creativity exercises is the fact that it is solitary.

The solitary nature of the Timer Task is noteworthy because most creative exercises implemented by schools and businesses are group exercises. Indeed, for the past sixty years, the prototypical creative exercise has been group brainstorming. Based on my own informal survey, I suspect most Americans are familiar with the technique. According to the cognitive psychologist and creativity expert Mark A. Runco, “brainstorming is almost definitely the most often employed [creativity] enhancement technique” (365). 

In its modern sense, ‘brainstorming’ refers to an activity in which several people work together to generate creative solutions to a given problem. However, the term ‘brainstorm’ dates back to the 1890s, when it was medical jargon that meant “a fit of mental confusion or excitement” (Random House). By the 1920s, the word ‘brainstorm’ had diverged from its clinical origin toward something analogous to an epiphany or insight. According to the Dictionary of American Slang (4th ed.), it was “[a] sudden idea, esp. one that is apt and useful.” Around the same time, we find the first use of ‘brainstorm’ in its modern sense, as a verb meaning “to examine and work on a problem by having a group sit around and utter spontaneously whatever relevant thoughts they have” (Dictionary of American Slang, 4th ed.).

It was not until the 1950s, however, that ‘brainstorming’ transitioned from little-known slang to an established member of the lexicon. The current notion of brainstorming as a formal technique can be traced back to a single source: the 1953 business management book Applied Imagination. It is from this book that we get the term “brainstorming session.” The impact of the book can be observed in the increased usage of ‘brainstorm’/‘brainstorming’ in written media after 1953, as depicted in the Google NGram chart below. The especially steep rise in the usage of ‘brainstorming’ –as opposed to other inflected forms of the verb ‘to brainstorm,’ such as ‘brainstormed’– reflects an emerging sense that the word referred to a formal technique, rather than a new label for a standard conferencing strategy.

Google NGram Viewer: 'Brainstorm', 'Brainstorming', and 'Brainstormed'

Although the process of searching your mind for creative solutions to a particular problem –i.e. the crux of brainstorming– can be accomplished just as easily by an individual as by a group, the popular meaning of brainstorming assumes it to be a group activity. Outside the scholarly literature, “group brainstorming” is a redundancy, and “individual brainstorming” a contradiction in terms. Undoubtedly, the inventors and early adopters of brainstorming regarded their method as a group activity. Even today, most dictionaries continue to define brainstorming as an inherently social enterprise. If you conducted a psychological experiment in which you brought a group of strangers together, gave them each a piece of paper, and asked them to “brainstorm creative solutions to a problem,” I strongly suspect most groups would not even consider splitting up, coming up with ideas separately, and then pooling their ideas at the end of the session. And yet, as I intend to show, this strategy of brainstorming individually and then pooling solutions would be far more effective than group brainstorming.

Modern American culture idealizes extraverted personality traits. Although recent years have seen an uptick in appreciation of introversion, the extrovert ideal still exerts a tremendous impact on all aspects of our lives, including our conception of creativity. Indeed, brainstorming’s status as the prototypical creative exercise is simply one example of a broader conflation of creativity with extraversion. Appreciation for brainstorming cuts across traditional ideological and professional boundaries: therapists, educators, corporate managers, and military planners all employ brainstorming techniques in their work. But what are the consequences of conflating creativity and social interaction?

In her fine book Quiet: The Power of Introverts in a World That Can’t Stop Talking, Susan Cain situates the enthusiastic embrace of group brainstorming in the 1950s within a broader cultural trend that glorified extraverted personality traits. One indicator of the public’s receptiveness to brainstorming was the speed of its adoption. Writing in 1958 –a mere five years after the publication of Applied Imagination– the psychologist Donald Taylor commented:

Within recent years [the use of brainstorming] has grown rapidly. A large number of major companies, units of the Army, Navy, and Air Force, and various federal, state, and local civilian agencies have employed the technique […]. (Taylor 24)

In my previous post in this series, I profiled Mihaly Csikszentmihalyi, an eminent psychologist who wrote some irrational things about creativity. One of his unfalsifiable definitions of creativity touched on the introversion-extraversion dynamic. In Creativity, he wrote that creative people “seem to harbor opposite tendencies on the continuum between extraversion and introversion” (Csikszentmihalyi, as quoted in Landrum, pg. 64). To go from reading Csikszentmihalyi to reading Susan Cain is to experience a kind of literary vertigo. Whereas Csikszentmihalyi is exuberant, vague and prone to digressions, Cain is modest, deliberative, and thesis-driven. Here is a striking, instructive (and, in my view, invidious) example. Both Cain and Csikszentmihalyi address the apparent paradox of people who manifest both introverted and extroverted traits. Here’s Csikszentmihalyi:

Creative people tend to be both extroverted and introverted. We’re usually one or the other, either preferring to be in the thick of crowds or sitting on the sidelines and observing the passing show…. Creative individuals, on the other hand, seem to exhibit both traits simultaneously. (Csikszentmihalyi, as quoted in Kaufman)

Here’s Cain:

Introverts, in contrast, may have strong social skills and enjoy parties and business meetings, but after a while wish they were home in their pajamas. They prefer to devote their social energies to close friends, colleagues, and family. They listen more than they talk, think before they speak, and often feel as if they express themselves better in writing than in conversation. They tend to dislike conflict. Many have a horror of small talk, but enjoy deep discussions. (Cain 11)

More than anything, I want to highlight the fact that both authors are making the same argument. The only difference is that Susan Cain actually resolves the paradox by providing illustrative examples; Csikszentmihalyi simply asserts that certain “[c]reative individuals” can “exhibit both traits simultaneously.”

My purpose in bringing this up is not simply to rehash my earlier critique of Csikszentmihalyi, but to show that rigorous thinking about science isn’t solely the province of scientists. Susan Cain is not a scientist, but her writing reflects a deep respect for science, and in many ways she is a better rationalist than Csikszentmihalyi. It goes to show that, regardless of profession or academic specialty, anyone is capable of making a positive contribution to the scientific conversation.

In Quiet, Susan Cain uses brainstorming as a lens through which to inspect our cultural beliefs about introversion and extraversion. In an earlier section, I mentioned the 1953 book Applied Imagination, crediting it with having launched brainstorming into public consciousness. Applied Imagination was written by the advertising executive Alex Osborn, who had been developing brainstorming techniques since 1939 in his role as a consultant to major businesses. He believed that (1) people were naturally creative, (2) creativity was the key to success in business, and (3) traditional business practices stymied creativity. Brainstorming, Osborn thought, was the optimal way to unleash this wellspring of latent creativity. Though convinced that group synergy was essential for creative achievement, Osborn was aware of pernicious group dynamics like social anxiety and diffusion of responsibility. However, he maintained that these problems could be averted through a combination of explicit instructions and expert guidance. Osborn outlined four rules for constructive brainstorming:

(1) Criticism is ruled out. Adverse judgment of ideas must be withheld until later.

(2) “Free-wheeling” is welcomed. The wilder the idea, the better; it is easier to tame down than to think up.

(3) Quantity is wanted. The greater the number of ideas, the more the likelihood of winners.

(4) Combinations and improvements are sought. In addition to contributing ideas of their own, participants should suggest how ideas of others can be turned into better ideas; or how two or more ideas can be joined into still another idea. (Applied Imagination, as quoted in Taylor, et al., 24-25)

On the surface, these rules sound plausible and comprehensive. However, if the objective is to maximize creativity, all four rules are counterproductive. The first rule is wrong in its assumption that one could subvert people’s judgmental attitudes via an explicit rule, and doubly wrong for supposing that a maximally permissive environment is the ideal incubator of creativity. The second and third rules err in their assumptions about what kind of creative solutions are worth aiming for. The fourth rule, like the first, falsely assumes that a collaborative environment is necessarily more amenable to creativity than an adversarial one. In summary, these four rules are the product of misconceptions and wishful thinking.

Osborn was particularly emphatic about his third rule, “Go for quantity.” Writing about his own experiences with brainstorming in a business setting, he enthused:

“One group produced 45 suggestions for a home-appliance promotion, 56 ideas for a money-raising campaign, 124 ideas on how to sell more blankets. In another case, 15 groups brainstormed one and the same problem and produced over 800 ideas.” (Osborn, as quoted in Quiet, pg. 87)

Large numbers such as these are only superficially impressive, because they do not take into consideration the quality of those ideas. It is no doubt possible to conjure up thousands of cockamamie solutions to any given problem; the set of possible ideas is practically infinite. Ultimately, however, a single quality idea is worth more than ten thousand terrible ones. Nor do Osborn’s numbers tell us whether the same people working in isolation would have generated more ideas. Astonishingly, Osborn does not comment on either of these possibilities.

Like many of my peers, my earliest experiences with brainstorming occurred at school. During these sessions, I remember feeling exasperated by the “be freewheeling” rule, which seemed to invite many irrelevant digressions. I can also recall a few instances in which I wanted to violate the “non-judgmental” rule. This was not (necessarily) because I was an arrogant jackass, but because I genuinely thought rebutting a peer’s point would improve the overall conversation.

When I read Quiet, I was gratified to discover that my own attitude toward class participation is the norm in East Asian culture. Cain quotes an Asian student who was astonished by the permissive attitude at the American university she attended:

“…. At UCLA, the professor would start class, saying, ‘Let’s discuss!’ I would look at my peers while they were talking nonsense, and the professors were so patient, just listening to everyone.” She nods her head comically, mimicking the overly respectful professors.

“I remember being amazed. It was a linguistics class, and that’s not even linguistics the students are talking about! I thought, ‘Oh, in the U.S., as soon as you start talking, you’re fine.’” (Cain 185)

Whereas American education emphasizes participation, East Asian culture emphasizes restraint. Cain cogently explores how this particular difference reflects a more fundamental difference in how each culture regards introverted and extraverted traits. As I alluded to earlier, contemporary American society is enamored of extraversion. Having experienced the American educational system firsthand, I am quite familiar with the ways this system sometimes fails to foster curiosity, rationality, and civic virtue. However, this may simply be an instance of the grass always being greener on the other side. Because I never personally experienced a more restrained educational environment, I cannot readily conceive of its potential downsides.

According to Cain, “Osborn’s theory had great impact, and company leaders took up brainstorming with enthusiasm” (87). The popularity of brainstorming in the business community coincided with the growing recognition of creativity as a legitimate subject of inquiry among psychologists, who were understandably eager to assess which creative exercises were most effective, and why. “There’s only one problem with Osborn’s breakthrough idea,” Cain notes witheringly, “group brainstorming doesn’t actually work” (88).

How did researchers demonstrate that group brainstorming doesn’t work? And why did the practitioners and popularizers of group brainstorming not recognize its ineffectiveness?

If you were a researcher, how would you design an experiment to test whether brainstorming actually accomplished its stated goal of enhancing creativity? It wouldn’t suffice to look at case studies of group brainstorming, as Alex Osborn did. You would have no control group, and hence no basis for concluding anything about the effects of brainstorming. The question becomes what kind of control group is best. Ideally, the only difference between the control group and the experimental group would be the presence or absence of other people. Therefore, you would need to compare group brainstormers against an equivalent number of individuals brainstorming in isolation. For example, you might compare the solutions generated by a group of six people who worked together for one hour against the compiled solutions of a nominal group consisting of six individuals who each spent an hour working on solutions in isolation. In the coming paragraphs, I will be speaking generally about how one might design an experiment, as though the project were purely hypothetical. This is educational sleight-of-hand. For all intents and purposes, I am summarizing Donald Taylor’s landmark study, “Does Group Participation When Using Brainstorming Facilitate or Inhibit Creative Thinking?” (1958), which was the first to rigorously compare group brainstorming against nominal groups of individuals who worked independently.
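The pooling step of the nominal-group comparison is easy to misread, so here is a minimal Python sketch of it. Everything below is invented for illustration (the idea lists, the deduplication rule); the actual study tallied ideas from session transcripts, not code.

```python
def nominal_group_output(individual_idea_lists):
    """Pool each member's independently generated ideas,
    counting an idea proposed by multiple members only once."""
    pooled = []
    seen = set()
    for ideas in individual_idea_lists:
        for idea in ideas:
            key = idea.strip().lower()  # crude duplicate detection
            if key not in seen:
                seen.add(key)
                pooled.append(idea)
    return pooled

# Four people brainstorm the Tourist Problem alone; overlapping
# ideas across members count only once toward the nominal total.
individuals = [
    ["cheaper flights", "tourism ads in Europe", "simplify visas"],
    ["simplify visas", "rail passes for visitors", "cheaper flights"],
    ["world's fair", "tourism ads in Europe", "currency discounts"],
    ["host the Olympics", "rail passes for visitors", "world's fair"],
]

pooled = nominal_group_output(individuals)
print(len(pooled))  # 7 unique ideas after removing cross-member duplicates
```

The nominal group’s score is then compared against the idea count of a real group of the same size, which is what makes the comparison fair.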

To compare the real groups against the nominal groups, the experimenters needed an operational definition of creativity, the mental faculty that group brainstorming allegedly enhanced. In their analysis of the results, the experimenters assessed not only the sheer number of ideas, but also their overall quality. But whereas the total number of ideas is easily measured and entirely objective (just count them), the notion of quality is not so easily judged. In part, this is because quality comprises multiple abstract attributes that might be present or absent in different degrees. In psychological research, it’s useful to subdivide quality into components with more restricted definitions. These might include novelty, generality, effectiveness, and feasibility. If you assess each of these components individually and then pool those assessments, the composite score will be a good indicator of total quality.
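As a sketch of how such a composite might be computed: the component names follow the text, but the 1–5 rating scale, the equal weighting, and the sample ratings are my own assumptions for illustration.

```python
COMPONENTS = ("novelty", "generality", "effectiveness", "feasibility")

def quality_score(ratings):
    """Average the four component ratings (each assumed to be on a
    1-5 scale) into a single composite quality score for one idea."""
    return sum(ratings[c] for c in COMPONENTS) / len(COMPONENTS)

# Hypothetical ratings for one brainstormed idea:
idea = {"novelty": 4, "generality": 2, "effectiveness": 5, "feasibility": 3}
print(quality_score(idea))  # 3.5
```

A real study might weight the components unequally or analyze them separately, but the principle is the same: several narrow, rateable attributes stand in for one vague one.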

But wait! Even if terms like novelty and feasibility are more narrowly defined than quality, aren’t such criteria still subjective? Aren’t objective facts the only solid basis for scientific generalization? Well, as it turns out, the use of subjective measures is ubiquitous in psychological research. Cognitive scientists don’t elide this problematic issue, either. To the contrary, for any given study, the researchers must establish that the subjective measures are more valid than mere opinions. One method of demonstrating this is through inter-observer reliability, meaning that different observers of the same measurements agree (if not totally, then to a significant extent) on what scores those measurements deserve. Consider the assessments of solution feasibility in the studies of brainstorming; if multiple evaluators independently give the same feasibility scores in ninety percent of cases, then you can have confidence that their judgments converge on some universal standard of “feasibility.” However, if the evaluators arrive at wildly different determinations of feasibility –say, with only a fifteen percent overlap in their judgments– then there would be no firm basis for comparing the real groups against the nominal groups.
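The simplest version of this check is raw percent agreement, which takes only a few lines to compute. The scores below are invented to match the ninety-percent scenario in the text; real studies often prefer chance-corrected statistics such as Cohen’s kappa, which penalize agreement that could arise from raters guessing.

```python
def percent_agreement(scores_a, scores_b):
    """Fraction of items on which two raters gave identical scores."""
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

# Two raters score the feasibility of the same ten ideas (1-5 scale).
rater_1 = [3, 5, 2, 4, 4, 1, 5, 3, 2, 4]
rater_2 = [3, 5, 2, 4, 3, 1, 5, 3, 2, 4]

print(percent_agreement(rater_1, rater_2))  # 0.9
```

An agreement of 0.9 would license pooling the raters’ feasibility scores; an agreement of 0.15 would not.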

In order to test the effectiveness of brainstorming as a creative exercise, the researchers needed to create a standardized creative task. The problems featured in Osborn’s case studies could not be used because those problems were specific to the type of business where the study occurred. It would not make sense, for example, to ask non-engineers to brainstorm creative solutions to an engineering problem. One would expect the quality of their solutions to be poor irrespective of whether they worked in real or nominal groups. An engineering problem presupposes a lot of knowledge and experience that the participants of most psychology experiments simply do not have. Of course this problem isn’t unique to engineering. Any problem that requires specialized knowledge is inappropriate for general experiments. Consequently, researchers needed to invent problems that didn’t demand specialized knowledge. The problems also needed to admit many possible solutions, which could be readily evaluated for feasibility, novelty, generality, etc. Here are abridged versions of the three problems used by Taylor, et al. in their 1958 study:

  1. The Tourist Problem asked, “How can the number of European tourists coming to the U.S. be increased?”
  2. The Thumb Problem asked for a list of pros and cons that would arise if people had an additional thumb.
  3. The Teacher Problem asked how to ensure continued educational efficacy, given population increases. (Runco, 366)

These particular problems have been recycled in subsequent studies of creativity. If you’d like to read the full version of these three problems –and test your own creativity– follow this link to take my Creativity Test!

I’ve already given you the upshot of this research: group brainstorming doesn’t work. But now, having sketched out the experimental methods, I can state the results more precisely: the nominal groups came up with significantly more solutions than the traditional groups, and the quality of their solutions was significantly higher. As Taylor wrote, “[t]o the extent that the results of the present experiment can be generalized, it must be concluded that group participation when using brainstorming inhibits creative thinking.” (23)

Taylor and his colleagues used undergraduates as participants. Since then, the ineffectiveness of group brainstorming has been demonstrated in a variety of experimental populations, from students to corporate managers to military strategists. Susan Cain discusses a study that examined the possibility that group brainstorming would prove effective if all the group members were extroverts. The study compared business executives, a population the researchers expected to be extroverted, with research scientists, whose inclinations were expected to tend toward introversion. Both the scientists and the executives performed better as collections of individuals than as unified groups, thereby falsifying that hypothesis.

It gets worse. Not only do groups inhibit creativity, but “performance gets worse as group size increases: groups of nine generate fewer and poorer ideas compared to groups of six, which do worse than groups of four.” (Cain 88) The social factors that undermine group brainstorming are cumulative. But what are these pernicious social factors? There are three main culprits:

  1. Social loafing: “in a group, some individuals tend to sit back and let others do the work.”
  2. Production blocking: “only one person can talk or produce an idea at once, while the other group members are forced to sit passively.”
  3. Evaluation apprehension: “the fear of looking stupid in front of one’s peers.” (Cain 89)

In the face of these three social forces, Osborn’s four rules were not really safeguards at all. Of all the sins against rationality one can convict Osborn of, his expectation that explicit rules would be a sufficient safeguard is the one I find most egregious. How hard would it have been to test this strategy by having some groups receive explicit instructions, and other groups receive null or contrary instructions? Why did he not consult with psychologists before proclaiming that his pet method was effective? Instead, he made an end-run around the scientific process, and we are now living in a world where his counterproductive method is the prototypical creativity exercise. It is supremely ironic: because of Osborn’s overwhelming enthusiasm for creativity, the world is now a less creative place than it might otherwise be.

This notion of groups performing worse than an equal number of individuals reminds me of a humorous anecdote from Jewish history. As the story goes, 70 Jewish scholars were sequestered in separate rooms and asked to translate the Torah from Hebrew into Greek. Miraculously, every scholar produced the identical translation! However, we shouldn’t be too impressed that all of the scholars independently arrived at the same translation. The real miracle would have been if they had produced the same translation after having been put in the same room.

For many people, the ultimate counterevidence to the experimental failure of group brainstorming would be recent online collaborative enterprises such as Wikipedia, Metafilter, and TV Tropes. This is a reasonable objection that is best addressed by highlighting the psychological differences between in-person and online social interaction. As Susan Cain puts it, “we fail to realize that participating in an online working group is a form of solitude all its own” (89). In my opinion, Cain’s characterization of online collaboration as “a form of solitude all its own” stretches a precise term like ‘solitude’ too far into metaphor. It would be more precise to say that, in many crucial respects, the cognitive experience of collaborating with thousands of people in cyberspace bears a closer resemblance to solitary thought than to interacting with far fewer colleagues in the physical world. Importantly, the social forces that undermine group brainstorming –social loafing, production blocking, and evaluation apprehension– are exacerbated by cues associated with occupying the same physical space as another person. If these social forces were triggered by the mere presence of another human mind, one would expect no difference between online and in-person collaboration. However, the human mind evolved in a context where all interactions were visceral, face-to-face interactions; our ancestors never had to cope with community archiving projects, social networks, or massively multiplayer online games. Insofar as modern circumstances differ from the ancestral environment, we should expect a mismatch between our evolved intuitions and how a rational actor ought to behave. As Susan Cain writes, the real problem arises when “we assume that the success of online collaborations will be replicated in the face-to-face world.” (89) Here, as elsewhere, evolutionary science points the way to enlightened personal beliefs and public policies.

It is worth noting that the more sophisticated modern advocates of “brainstorming” generally do not espouse Alex Osborn’s original rules. Indeed, “brainstorming” now refers to “not just one tactic but a method for divergent thinking in groups” (Runco 365; my italics). These modern variations on group brainstorming have been evaluated, and they, too, have been shown to be less conducive to creativity than allowing individuals to devise creative solutions in isolation. Since Donald Taylor’s 1958 study debunking group brainstorming, “[d]ozens or even hundreds of other studies have found much the same” (Runco 366), including a 1991 meta-analysis. In spite of the counterevidence, Osborn’s original methodology is still being used by business consultants and teachers. In part, this is a reflection of being uninformed about the empirical failure of group brainstorming. But it is also emblematic of our culture’s preference for extraversion over introversion, which Cain calls the extrovert ideal.

Given the lack of evidence for group brainstorming, its continued popularity is hard to fathom. According to Cain, the most compelling explanation is emotional. “Participants in brainstorming sessions usually believe that their group performed much better than it actually did, which points to a valuable reason for their continued popularity—group brainstorming makes people feel attached,” which is “a worthy goal, so long as we understand that social glue, as opposed to creativity, is the principal benefit.” (Cain 89)

To call brainstorming a “worthy goal” whose “principal benefit” is “social glue” is to damn it with faint praise. Although I agree that people enjoy group brainstorming mainly because of its social harmony, this is somewhat distinct from the impression people get that it stimulates creativity. This perception of effectiveness is real, even if it is ultimately a cognitive illusion born of a failure to imagine how much better the results would have been if the group members had pooled their results after spending an equal amount of time working in isolation.

And yet, the evidence that brainstorming fosters camaraderie is undeniable. Isn’t that reason enough to practice brainstorming? That depends, but it’s worth noting that if your principal defense of group brainstorming consists of pointing to one of its positive byproducts, you have all but conceded that brainstorming doesn’t achieve its stated goal of enhancing creativity. After all, if the evidence supported that goal, why not make it the centerpiece of your argument?

Furthermore, social psychologists have shown that there are alternative ways to trigger the cognitive processes that underlie interpersonal bonding without sacrificing creativity. Moreover, positive feelings toward another person can be promoted by arbitrary and trivial stimuli. For instance, in one highly cited study, researchers showed that tapping your finger in synchrony with a stranger engendered a sense of social affiliation, more so than if the stranger tapped asynchronously or not at all (Hove & Risen, 2009). If one can foster positive social emotions through simple activities such as tapping fingers, swaying in unison, or eating communally, why would you continue to use brainstorming, which is tantamount to squandering the creative potential of your team? To place creativity in opposition to group harmony is to construct a false dilemma in which group brainstorming is the only way to maintain both creativity and social harmony. (If you disagree, I encourage you to find a clock and spend five minutes thinking up viable alternatives.) But it is entirely possible –and indeed optimal– to make your employees feel attached while also maximizing their creative potential. To quote the psychologist Adrian Furnham, “[i]f you have talented and motivated people, they should be encouraged to work alone when creativity or efficiency is the highest priority” (Cain 88-89).

In spite of all these shortcomings, group creative efforts do have the potential to circumvent individual bias. When people feel pressure to distinguish themselves by generating useful ideas, it is in each member’s self-interest to spot the absurdities, false assumptions, and possible consequences lurking in their co-workers’ ideas. In short, the group’s ethic would need to be somewhat adversarial. Brainstorming, by contrast, was designed to be as non-adversarial as possible (“Criticism is ruled out. Adverse judgment of ideas must be withheld until later.”). Clearly, there is a sweet spot on the competitiveness scale: adversarial but not ruthlessly competitive.

When I wrote about Eliezer Yudkowsky’s five-minute Timer Task, I introduced it as the first creative exercise I had ever encountered that was explicitly geared toward promoting rationality. This, it turns out, was not true. While compiling notes for this essay, I came across a passage from Daniel Kahneman’s masterpiece, Thinking, Fast and Slow, that explored a creative exercise known as the premortem:

The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, [the psychologist Gary] Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.” (Kahneman 264)

Whereas a postmortem is an inquiry into why something failed after it has already failed, the premortem asks us to imagine that the project has already failed; the task is to explain (i.e., speculate on) what went wrong. Kahneman explains that “[t]he premortem has two main advantages: it overcomes the groupthink that affects many teams once a decision appears to have been made, and it unleashes the imagination of knowledgeable individuals in a much-needed direction.” (264) Our cognitive stance tends toward unrealistic optimism. Pessimism is so anathema to the human mind that people’s “worst case scenarios” are usually slightly better than what eventually happens. The premortem encourages maximum pessimism. In this respect, the premortem is a bit like writing a dystopian story about the worst-case outcome of your plan. For instance, is there a possibility that your company’s latest kitchen appliance will lead to a zombie apocalypse? If a member of the group can articulate a compelling scenario for how this might happen, the premortem will have succeeded in averting eldritch horror, civilizational collapse, and a decline in your company’s market value.

Unlike Yudkowsky’s Timer Task –but like group brainstorming– the premortem is a group exercise. Although the effectiveness of the premortem must ultimately await empirical verification, there are strong theoretical reasons for supposing that the social component might actually enhance the creativity of the participants by “encourag[ing] even supporters of the decision to search for possible threats that they had not considered earlier” (Kahneman 266).

Brainstorming fostered an expectation about the sort of environment that is most conducive to creativity (i.e., a participatory group). More subtly, however, it created an expectation about the kind of problem that is amenable to deliberate creativity. The typical brainstorming problem is one that has a huge pool of possible answers, all of which are underdeveloped at their inception but can be elaborated upon. Business strategies fall into this category of problem. But there are some creative problems that defy group brainstorming. One of the unacknowledged casualties of brainstorming’s popularity has been the application of creativity to our personal problems. This takes us back to Yudkowsky’s creative exercise, the Timer Task, which I praised for channeling creativity into circumventing a specific cognitive bias, the false dilemma. Yet creativity is rarely put forward as a solution for general personal distress. Some personal issues cannot be attributed to insufficient rationality. One example is mindfulness. To be sure, rationality and mindfulness share certain values, and practicing one can improve your performance in the other. Yet there is still a facet of mindfulness that is profoundly difficult to capture in rational terms. Even for the procedural practice most associated with mindfulness, meditation, there comes a point beyond which success isn’t a matter of knowing more, but of using your attention differently. That capacity to induce your mind into a more psychologically desirable state arguably involves creativity, but not the same kind as brainstorming or the Timer Task.

Brainstorming is a metaphor that says a lot about our perception of what happens in the mind when we engage in creative thought. The implication seems to be that being creative requires deviating from a calm, rational, orderly state of mind. In this view, creativity is a sort of salutary chaos that shakes up ossified patterns of thought. An alternative interpretation is that brainstorming involves tapping into a latent “storm of creativity” that exists below the surface of our awareness. Truly, anyone who has ever tried to meditate can attest to the tempestuousness of our mental baseline. And unlike the whimsical ‘tempest in a teapot,’ a storm is vast, chaotic, and unknowable. My subjective experience of meditating literally includes snatches of language floating unbidden into my visual field, not unlike debris tossed around by a storm. Despite my misgivings about group brainstorming as a creativity-enhancement technique, I find the underlying metaphor of creativity as a storm to be quite captivating.

I wondered, however, whether there were other, potentially better, metaphors for creative thought. In fact, I went and set a timer for five minutes, and tried to think of creative alternatives. Here are the only two viable options I came up with:

  • Imagine that you are standing on the ocean shore, watching the tide go in and out. The incoming tide represents new ideas floating into your head. When the ideas are in mind, it is as though they are objects floating beside your feet. If you think the idea is promising, you keep it. And if you dislike the idea, let the tide flush it away. This metaphor is compatible with meditation techniques that emphasize the breath, which flows inward and outward like a tide.
  • Searching your mind for creative solutions is analogous to digging through a junkyard, and setting aside the useful items you find. (If wading through garbage doesn’t suit your fancy, imagine doing the same thing in an arcade-style claw game.) As with junk, the vast majority of ideas you find will be useless, making it necessary to develop a systematic search process.

If you have any suggestions for alternative metaphors for creative thought, let me know in the comments. Also, for those of you who are familiar with other languages: what metaphors does your language use for creativity?



brainstormer. (n.d.). Random House Unabridged. Retrieved September 04, 2015, from website:

brainstormer. (n.d.). The Dictionary of American Slang. Retrieved September 04, 2015, from website:

Cain, S. (2012). Quiet: The power of introverts in a world that can’t stop talking. New York: Crown.

Csikszentmihalyi, M. (1996). Creativity: Flow and the Psychology of Discovery and Invention. Harper Collins.

Hove, M. J., & Risen, J. L. (2009). It’s All in the Timing: Interpersonal Synchrony Increases Affiliation. Social Cognition, 27(6), 949–960.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus, and Giroux.

Kaufman, S. (2012, January 14). After the Show: The Many Faces of the Performer. Retrieved from

Landrum, G. (2004). Entrepreneurial Genius: The Power of Passion. Brendan Kelly Publishing.

Runco, M. (2014). Creativity Theories and Themes: Research, Development, and Practice. Amsterdam: Elsevier Academic Press.

Taylor, D., Berry, P., & Block, C. (1958). Does Group Participation When Using Brainstorming Facilitate or Inhibit Creative Thinking? Administrative Science Quarterly, 3(1), 23-47.

Yudkowsky, E. (2007, May 6). The Third Alternative. Retrieved from

“Complete Opioid Biosynthesis in Yeast,” by Galanie, et al. (2015): Discussion

In the latest issue of the journal Science, a group of synthetic biologists reported that they had engineered a strain of yeast that can synthesize the opiate molecule thebaine, a precursor to many opiate drugs. The paper, “Complete Biosynthesis of Opioids in Yeast,” comes from Stephanie Galanie and her colleagues, working in the laboratory of Christina Smolke at Stanford University. This report marks not only an achievement in genetic engineering, but also a potential breakthrough in biotechnology. If there were a way to synthesize thebaine and related opiates directly, without cultivating fields upon fields of poppy plants, it would be a boon to the 5.5 billion people who, according to the World Health Organization, have “low to nonexistent access to treatment for moderate or severe pain” (quoted in Galanie, et al., 2015).

The opiates comprise a broad class of drugs that mimic endogenous opioid molecules in the brain. Opiates can relieve pain, induce euphoria, engender addiction, end life painlessly, or even block the action of other opiates. The only natural source of opiates like thebaine is the opium poppy (Papaver somniferum), from which opiates get their name. Although there are non-biological methods of synthesizing opiates, most opiates are still derived from poppies. Despite the risks and inefficiencies of agriculture, “chemical synthesis of these complex molecules is not commercially competitive” (Galanie, et al., 2015). When the poppy straw is harvested, there are two principal extracts: morphine and thebaine. Morphine is valuable in its extracted form, while thebaine is medically valuable only after it has been processed into other potent opiates, such as hydrocodone (Vicodin), oxycodone (OxyContin, Percocet), naltrexone (Revia), and buprenorphine (Suboxone).

Depressingly, most news outlets have approached this story from the angle of prescription drug abuse. Many commentators have even insinuated that this discovery portends a future in which most of the populace is stupefied by narcotics. If it is easy to imagine such a world, it is only because we have been exposed to countless dystopian narratives that depict a future in which cutting-edge technology has enabled addiction. But we need to avoid the fallacy of generalizing from fictional evidence. We need to weigh such hypothetical scenarios against the real, ongoing, and preventable pain of the 5.5 billion people who lack access to opiate medications. Moreover, these sensationalistic stories distract us from more pressing concerns about safety in biomedical research. For instance, there are experimental epidemiologists who are intentionally engineering “gain-of-function” mutations into the avian flu virus, transforming it into a virus that is transmissible between humans. On the one hand, virulent samples could escape the lab and cause an epidemic. On the other hand, if we understood the mutations that enable viruses to jump between species, we would be able to react more effectively in the event of a natural epidemic. In any case, this is research that genuinely deserves public attention and concern.

This paper by Galanie, et al., is of particular interest to me because its subject combines elements from both of the laboratories I’ve worked for. Formerly, I worked in a neuroscience lab that used opiate drugs to understand how the endogenous opioid system underlies reward-based learning. And now, my current lab makes use of transgenic mouse models, which have been endowed with genetic constructs that enable us to visualize aspects of their neurobiology.

Interestingly, much of the foundational work in molecular genetics was done at my current institution, Cold Spring Harbor Laboratory. Recombinant DNA itself, however, was developed elsewhere: in the early 1970s, Stanley Cohen of Stanford University and his colleagues demonstrated that DNA from one organism could be stably inserted into a different organism. These discoveries paved the way for genetically modified organisms, the Human Genome Project, and transgenic mice.

In a series of famous experiments, Cohen used restriction enzymes to create a strain of E. coli that expressed DNA from two separate strains of the bacterium. Each of the two original strains contained a plasmid (a circular segment of bacterial DNA that codes for nonessential functions; see Fig. 1) that conferred resistance to one of two antibiotics. Through wholly synthetic methods, Cohen engineered a strain of E. coli that possessed a fusion plasmid that made it resistant to both antibiotics. The term recombinant DNA came to describe this incorporation of exogenous genetic material into an organism under artificial conditions.

Figure 1. Plasmids are circular loops of DNA inside a bacteria. (Wikimedia Commons)

Stanley Cohen and his colleague, Herbert Boyer, went on to demonstrate that recombination was possible between two different species of bacteria. Some critics suggested that DNA introduced in an artificial manner was unstable. In response, Cohen and Boyer showed that recombinant genes were maintained through hundreds of replication cycles. With hindsight, the idea that the host bacteria would have a mechanism for distinguishing its original DNA from the recombinant DNA seems ridiculous. It is redolent of essentialism, the discredited idea that every species has a unique, non-physical essence that would render its DNA incompatible with that of another species. It is now universally acknowledged in biology that there is no life essence. Moreover, the interchangeability of DNA is a logical consequence of the fact that all extant life evolved from a common ancestor that used DNA as its genetic code.

We now know that recombination is not a quirky thing that only happens in contrived laboratory settings. It happens frequently in nature. In technical terms, recombination is a subcategory of a broader type of genetic exchange called horizontal gene transfer. Unlike vertical gene transfer, the familiar transfer of genes from parent to offspring, horizontal gene transfer is the exchange of genetic material in ways other than sexual or asexual reproduction. (Counterintuitively, it is also possible to have gene transfer that is neither vertical nor horizontal, but within an organism’s own genome. In bacteria and plants, there are segments of DNA called transposons [“jumping genes”] that shuffle around the genome via a cut-and-paste mechanism.) In contrast to the subtle instances of recombination in Stanley Cohen’s experiments, natural instances of horizontal gene transfer can be quite dramatic. According to the endosymbiotic theory, the cellular organelles known as mitochondria and chloroplasts were once free-living bacteria that became endosymbionts (“symbiotic organisms that live inside”). Insofar as they reproduce autonomously and encode the means of their own replication, the plasmids inside bacteria function more like endosymbionts than native bacterial DNA. (Fig. 1)

For retroviruses such as HIV, recombination is a replication strategy. These retroviruses convert themselves into DNA and insinuate themselves into the host’s genome; ultimately, the viral DNA is meant to leave the genome and infect other hosts. Occasionally, however, the viral DNA will mutate to such a degree that it can no longer escape the host’s genome. If this mutated viral DNA occurs in the germ line (the sperm and ova, in humans), the virus-turned-DNA will be present in the offspring, and propagate across generations like the fusion plasmid in Stanley Cohen’s E. coli experiments.
Stunning evidence for the stability of this recombinant viral DNA comes from the myriad “fossil viruses” that populate our genome. Carl Zimmer writes:

“Scientists have identified 100,000 pieces of retrovirus DNA in our genes, making up eight percent of the human genome. That’s a huge portion of our DNA when you consider that protein coding genes make up just over one percent of the genome.”

The pioneers of recombinant DNA techniques were commendable not only for their scholarship, but also for their entrepreneurship. In 1976, Herbert Boyer and the venture capitalist Robert Swanson founded the biotechnology company Genentech, which applied recombinant DNA techniques to a suite of medical problems, such as cancer and diabetes. The enterprise was wildly successful, and, in 2009, the Swiss pharmaceutical giant Roche purchased Genentech for $46.8 billion.

Additionally, the founders of recombinant DNA techniques have been praised for their ethical stewardship of the technology they brought into the world. Concerned about possible biohazards, Paul Berg organized the 1975 Asilomar Conference on Recombinant DNA to develop sensible guidelines for further biotechnology research. The consensus among the conference attendees was that the risks were high, and that research should proceed cautiously. The rules agreed upon at Asilomar were adopted by the NIH in 1976. While the Asilomar Conference is generally viewed as a success, some scholars regard these guidelines as overly conservative. Commenting on a recent controversy in bioethics, Steven Pinker wrote:

Though the Asilomar recommendations have long been a source of self-congratulation among scientists, they were opposed by a number of geneticists at the time, who correctly argued that they were an overreaction which would needlessly encumber and delay important research.

Another reason why scholars like Pinker criticize the Asilomar framework is that its “excess of caution” approach rests on an invalid premise: one cannot predict the pace of animal research by extrapolating from progress already made in bacteria. While it is relatively easy to induce recombination in bacteria, engineering recombinant DNA into animals is considerably more difficult. Bacterial DNA is far more accessible, and hence easier to manipulate. And because bacteria are unicellular, a recombination event between two bacteria is a much more salient event: one or both of the participating bacteria come away with a new genome. For a multicellular organism, like a mouse or a dandelion, such a wholesale change in genome is impossible; it would involve transforming billions of cells at once. However, all multicellular organisms have a phase in their reproductive cycle in which they are a single cell. It is at this unicellular stage that an organism is susceptible to wholesale genetic modification. In humans, this cell is called a zygote, and is the result of the fusion of the sperm and egg. If a transgene (another term for recombinant DNA) were inserted into the zygote’s genome, that embryo would develop as though the transgene had always been present. Consider the case of a zygote with two defective copies of CFTR, the gene responsible for cystic fibrosis. A transgene that substituted the healthy version of CFTR in place of one of the defective copies would spell the difference between a healthy adult and one with cystic fibrosis.

Although transgenic techniques are rarely applied to humans, transgenic animals are a staple of modern biomedical research. Mice and fruit flies with recombinant DNA have increased our understanding of development, disease, and behavior. In spite of this progress, the process of transforming an animal’s genome remains complex, expensive, time-consuming, laborious, and imprecise. As with humans, the principal impediment stems from the fact that transforming animals requires germ line manipulation. Because of this, the model organism of choice for cell biologists has traditionally been the yeast Saccharomyces cerevisiae (Fig. 2). This is the same species we use in baking and alcohol fermentation. Yeast are particularly useful in cell biology and cellular biochemistry because they combine the convenience of bacteria with the representativeness of eukaryotes. Humans, plants, and fungi are all eukaryotes, meaning that they have a nucleus, linear chromosomes, and a generally high degree of shared biochemistry. Yeast are fungi, and since fungi are eukaryotes, it is often valid to generalize from yeast to humans. Certainly it is more valid than generalizing from bacteria to humans. In other ways, yeast behave more like bacteria. Unlike most eukaryotes, yeast are single-celled. It is easier to access, isolate, and manipulate individual cells. Yeast further resemble bacteria because of their ability to undergo asexual reproduction. Although yeast cannot replicate themselves quite as rapidly as bacteria, asexual reproduction simplifies the task of maintaining stable laboratory cultures.

Figure 2. Yeast cells viewed under a microscope. (Wikimedia Commons)

If the ultimate goal of biomedical research is to apply our knowledge to humans, why not study the biochemistry of human cells? In general, the petri dish is an inhospitable environment for human tissue. If it is necessary to study animal cells, the next best option is insect cells, which are more viable in culture. The exception to the rule that human cells cannot thrive in culture is cancer cells. In cancer cells, the feedback mechanisms that regulate cell division have been compromised, resulting in wild proliferation irrespective of the environment. Unfortunately, the same mutations that make cancer cells so fecund in a petri dish also disqualify them as research subjects. The biochemistry of a cancer cell is too disordered and anomalous to be representative. In many cases, healthy insect cells are a better guide to human biology than human cancer cells. For a vivid illustration of just how unrepresentative cancer cells can be, compare the karyotypes (profiles of chromosomes) of a normal human cell and a HeLa cancer cell (Fig. 3). The HeLa cell has eleven additional chromosomes, and is missing five others, including both copies of Chromosome 13.

Figure 3. The karyotypes of normal human cells and HeLa cancer cells (Berkeley Science Review)

(Note: When I refer to the article under consideration, I’ll refer to “Galanie, et al.” When I’m referring to the general research program coordinated by Christina Smolke, which pre-dates this paper, I will refer to “Smolke” or “Smolke’s research.”)

Galanie, et al., applies techniques from yeast biochemistry and genetic engineering. By the end, the thebaine-producing yeast developed by Christina Smolke and her colleagues contained genes from four plants, an animal, and a bacterium. The yeast’s native genes were also altered. In total, 21 non-native genes were engineered into the yeast. (Fig. 4)

Figure 4. Summary of the species whose genes were engineered into the yeast in Galanie, et al. (2015). (Credit: Robert F. Service)

In the past several years, synthetic biologists have recapitulated various stages of thebaine biosynthesis in yeast, but Smolke is the first to have achieved “complete biosynthesis.” This is not to say, however, that Smolke and her collaborators merely fit together pre-fabricated pieces of a puzzle. Although certain steps had been previously worked out, unifying all the steps in a single organism required expertise, ingenuity, and perseverance in the face of endless technical challenges.

The general approach taken by Galanie, et al., consisted of working out the basic steps, and then optimizing each step. At every step, it was possible to observe the impact of their manipulation: did the yeast produce the next product in the pathway? (Fig. 5) If not, why not? If so, how would one tweak the process so that it produces more of the desired product? The complexity of this project was so much greater than earlier achievements in yeast-based biosynthesis that innovation was crucial. The chronicle of the experiment really highlights the “engineering” part of genetic engineering. There was tinkering, trial-and-error, and troubleshooting. By analogy to other unprecedented biotechnology initiatives (the Human Genome Project, the reconstruction of the woolly mammoth genome, etc.), it is reasonable to suppose that some of the techniques that seem awkward and laborious will become optimized and standardized as the technology matures.

Figure 5. The biosynthetic pathway for thebaine production in yeast. The bolded words are genes that have been inserted into the yeast’s genome. (from Galanie, et al., Figure 1A)

The most vexing technical challenge arose from an unexpected interaction between the yeast’s native cellular milieu and one of the plant proteins that the yeast was made to express. How they solved this problem was the most fascinating part of the paper.

The enzyme salutaridine synthase (SalSyn) is native to the opium poppy, where it catalyzes one of the steps in the opiate production pathway. In the opium poppy, SalSyn is always processed correctly, with the active sites that are responsible for catalyzing the conversion on the C-terminus facing outward into the cytosol. (Fig. 6, left side) When the SalSyn gene was imported into yeast, the protein was processed incorrectly, with the C-terminus facing inward, into the endoplasmic reticulum. Moreover, the upside-down SalSyn was glycosylated. Glycosylation is a process by which sugar-like chains are tacked onto the developing protein. Normally, the glyco-tags function as shipping labels that ensure the protein gets delivered to its proper cellular destination. In the context of engineering a yeast that synthesizes opiates, however, glycosylation became a problem, because the sugar-tags blocked SalSyn’s active sites. (Fig. 6, center)

Removing the glycosylation sites wasn’t viable, because it made SalSyn less efficient. Instead, the solution involved engineering a chimera protein. Smolke and her colleagues used a plant genome database to search for a plant protein that was sufficiently similar to SalSyn that it would catalyze the same chemical reaction, but not so similar that it would be glycosylated or inserted upside-down. They ended up inserting a protein that had a membrane component from a poppy plant, and an active component from a Goldthread flower. (Fig. 6, right side)

Figure 6. The correctly processed SalSyn protein (Left), the incorrectly processed and glycosylated SalSyn protein (Center), and the engineered SalSyn fusion protein (Right). (From Galanie, et al., Fig. 3A)

Although Smolke and her colleagues deserve praise for finding a suitable plant protein in the database and engineering a functional fusion protein, they were, ultimately, lucky that there existed a plant protein that satisfied their needs. There is currently no way to predict whether a protein like SalSyn will have an adverse interaction when expressed in an organism that doesn’t naturally manufacture it. How synthetic biologists resolve such cross-species interactions will determine the future of genetic engineering.

When engineering biomolecules, it is important to consider the handedness of those molecules. Most biological molecules are like a human hand: asymmetric. And, like a hand, every biomolecule has a particular handedness. A biomolecule and its counterpart with the opposite handedness are called stereoisomers. One of the principal reasons why biosynthesis of molecules is often preferable to standard chemical synthesis is that biosynthesis produces molecules with the optimal handedness for affecting our bodies.

According to the Curie Principle, asymmetric effects can only arise from asymmetric causes. Biosynthesis is an asymmetric process, but synthesis by standard industrial chemistry is not. In industrial chemistry, synthesizing biomolecules involves finding a similar molecule and then modifying it step by step until it matches the desired biomolecule. Unlike biological enzymes, standard laboratory chemicals catalyze reactions symmetrically, with no bias toward left-handed or right-handed variants. As a result, the final product of industrial synthesis is racemic, containing 50 percent right-handed molecules and 50 percent left-handed molecules.

Sometimes, the other-handed version of a biomolecule is merely inactive. One treatment for Parkinson’s Disease involves administering DOPA, the precursor to dopamine, which crosses into the patient’s brain. However, only one stereoisomer, (L)-DOPA, is biologically active. While a racemic mixture of (L)-DOPA and its counterpart, (D)-DOPA, is not medically dangerous, it would nonetheless be better to have a nonracemic solution containing only (L)-DOPA.

A more benign example of how the two different stereoisomers of a compound can have different biological effects is the organic oil carvone. Carvone’s right-handed form smells like spearmint while the left-handed form smells like caraway.

The textbook example of the tragedy of misunderstanding the difference between a stereoisomer and its mirror counterpart is the drug thalidomide. In the early 1950s, thalidomide was produced by an industrial process that resulted in a racemic mixture with a fifty-fifty ratio of thalidomide’s two stereoisomers. Thalidomide was proven safe and effective in adult populations, and marketed as a treatment for nausea. In particular, it was widely prescribed as a remedy for morning sickness. Unfortunately, the manufacturers of thalidomide neglected to test their drug in pregnant women. One of the stereoisomers did indeed relieve morning sickness, but the other produced devastating birth defects. According to the U.S. Food and Drug Administration, “[i]n the late 1950s and early 1960s, more than 10,000 children in 46 countries were born with deformities…as a consequence of thalidomide use.”

In order to optimize each step of the biosynthetic pathway, Smolke and her team needed a means to detect not only the presence of a particular biomolecule, but also its precise quantity. For this, they used liquid chromatography mass spectrometry (LC-MS), a technique for detecting and quantifying a specific molecule in a complex mixture. Standard LC-MS, however, was not sufficient, because it has no way of distinguishing between a biomolecule and its stereoisomer. Considering that one of the steps in thebaine biosynthesis is the conversion of (S)-reticuline to its stereoisomer, (R)-reticuline, the inability of standard LC-MS to distinguish the handedness of biomolecules was a significant obstacle. Therefore, Smolke relied on a more sophisticated analytical technique, chiral LC-MS, which (as the name suggests) takes the chirality of the constituent molecules into account.

In most cases in which you are trying to synthesize a biomolecule, you are trying to synthesize the naturally occurring isoform, such as (L)-DOPA. The naturally occurring stereoisomer is preferred because the other isoform is either inactive or toxic. But what if the unnatural stereoisomer were, in fact, better than the naturally occurring version? It would be as if (D)-DOPA did a better job at relieving parkinsonian symptoms than (L)-DOPA. In his outstanding book, Right Hand, Left Hand, the psychologist Chris McManus discusses an example of an other-handed molecule outperforming its natural counterpart. Certain animals synthesize reversed versions of ubiquitous biomolecules, for use as toxins against would-be predators. But what is a toxin to one species might be a mind-expanding agent for another. Humans have a long history of exploiting plant and animal toxins for medicinal or recreational purposes. Nicotine, for example, is a neurotoxin meant to deter insects from consuming tobacco plants. McManus highlights a poisonous frog species that synthesizes dermorphin and deltorphin, opioid peptides built with a rare right-handed amino acid, which act on the same receptors as the natural opioid molecules morphine and enkephalin, respectively:

“Dermorphins and deltorphins are opioid peptides because they act on the brain in the same way as natural opiates such as morphine and heroin. In fact, weight for weight, dermorphin is a thousand times more potent than morphine, and ten thousand times more potent than the proper neurotransmitter in the brain, enkephalin. From the dermorphins and deltorphins may well come morphine substitutes that are potent pain-killers but also non-addictive and without the side effects of sedation and gastro-intestinal stasis. Of course there is also the possibility of new designer drugs to feed abuse, the juice of the poppy being replaced with a simple peptide.” (McManus 134)

If McManus’s speculation about the therapeutic value of these other-handed opioids turns out to be correct, there is no reason to suppose that synthetic biologists like Smolke couldn’t engineer this frog’s genes for dermorphin and deltorphin production into yeast. Indeed, since dermorphin is a short peptide, engineering its production might prove a more modest undertaking than the 21-gene thebaine pathway.

The experiment recounted in Galanie, et al., is known as a proof of principle. In this kind of experiment, it is necessary only to demonstrate that your method (e.g. yeast-based opiate biosynthesis) is feasible, rather than safe, moral, or economically viable. Even though proof of principle experiments require a reduced burden of proof, Galanie, et al., nevertheless defend the safety, ethics, and economic value of their research.

Safety: Smolke and her colleagues ensured the safety of their laboratory by consulting with the D.E.A. before, during, and after the experiments. The lab kept meticulous records of the opiates it produced, and all samples were destroyed after testing. The lab members submitted to background checks, presumably to discover any history of drug abuse. Furthermore, the yeast were modified such that they could only grow on a particular medium, meaning that if someone were to steal a sample of the opiate-producing yeast, the yeast would die in the absence of this necessary substrate.

Ethics: According to the World Health Organization, there are 5.5 billion people who have “low to nonexistent access to treatment for moderate or severe pain.” It is scandalous that anyone has to endure needless suffering, much less a majority of human beings. Most pet-owners in the United States would be furious if their veterinarian didn’t have enough pain medication when their pet needed surgery. We should feel just as much compassion for the billions of people whose children and elderly parents are suffering because pain-killing drugs are unavailable or too expensive. Since poppy farming is not meeting the global demand, it is a moral imperative to find alternative sources of opiates. If synthetic biologists like Smolke are able to scale up opiate synthesis in yeast, it could potentially relieve the suffering of billions of people.

Economic Viability: Having a yeast that is capable of thebaine biosynthesis is significant because only a few additional genes are necessary to convert thebaine into hydrocodone (the active ingredient in Vicodin, one of the most widely prescribed drugs in the United States). A yeast that could synthesize hydrocodone directly would supplant not only agricultural production of thebaine, but also the industrial chemistry that converts raw thebaine into hydrocodone. In fact, the authors did generate a yeast that produced hydrocodone, though they emphasize that they did not optimize it for efficiency. This relatively slight modification could vastly increase the economic viability of yeast-based opiate biosynthesis.

Just as Herbert Boyer parlayed recombinant DNA into Genentech, Christina Smolke has recently founded her own company, Antheia, which seeks to push yeast-based opioid synthesis into commercial viability. It is reasonable to expect that many investors will be eager to fund Smolke’s start-up, along with many researchers eager to lend their talents to her upcoming projects. Even so, significant challenges lie ahead. In order for “yeast-based production of opioids to be a feasible alternative to poppy farming,” it will require “over a 100,000-fold improvement” in efficiency (Galanie, et al., 2015).

Is this degree of improvement achievable? One cause for optimism is the comparable improvement achieved in artemisinin-producing yeast strains. Artemisinin is an antimalarial drug that, like thebaine, was previously produced only by plant sources. And, like thebaine, the first yeast that synthesized artemisinin was orders of magnitude less productive than agricultural sources. But, within a few years, “researchers boosted the output of the artemisinin-making yeast by a similar amount” (Service, 2015). Today, yeast-based production of artemisinin accounts for one-third of global production. It is important to note that the artemisinin pathway involves only 3-6 genes. By contrast, thebaine biosynthesis requires 21 genes. This difference suggests that progress in yeast-based thebaine biosynthesis might proceed at a slower pace than that of artemisinin.
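To put the required “100,000-fold improvement” in perspective, here is a back-of-the-envelope sketch. The annual improvement rates below are my own hypothetical assumptions for illustration, not figures from the paper:

```python
import math

required_fold = 100_000  # the improvement Galanie, et al., say is needed

# Suppose each year of optimization multiplies the yield by some factor.
# How many years of compounding until the target is reached?
for annual_fold in (2, 5, 10):
    years = math.log(required_fold) / math.log(annual_fold)
    print(f"{annual_fold}x per year -> ~{years:.1f} years to a 100,000-fold gain")
```

Even at a tenfold gain per year, reaching a 100,000-fold improvement takes about five years of compounding, which is at least consistent with the few-year timescale suggested by the artemisinin comparison.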

In his fascinating book Superintelligence, the philosopher Nick Bostrom provides a theoretical framework for predicting the rate of improvement of any given technology. Although Bostrom applies this model to machine intelligence, its explanatory range is more general. For any technology, the rate of improvement is proportional to optimization power and inversely proportional to system recalcitrance. Figure 7 depicts this relation as a mathematical ratio. Optimization power refers to the effort being applied to a problem. We can suppose that Smolke and her new company will be working very hard on this problem, though the overall progression of the technology will be limited by the fact that only one group has the proprietary privileges and technical know-how to synthesize opiates from yeast. (The Human Genome Project, by contrast, was a multinational collaboration among at least twenty scientific institutions.) System recalcitrance refers to how difficult it is to make the system more productive through additional effort. We might describe a system with low recalcitrance as having “low-hanging fruit”; conversely, when a system is highly recalcitrant, further investment yields “diminishing returns.” If the analogy to artemisinin-producing yeast is apposite, then it is reasonable to expect substantial progress in yeast-based opiate synthesis within several years.

Figure 7. Nick Bostrom’s framework for predicting the development of a particular technology.
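The ratio in Figure 7 can be written out explicitly. This is a paraphrase of Bostrom’s relation, not a quotation:

```latex
\text{Rate of improvement} \;=\; \frac{\text{Optimization power}}{\text{System recalcitrance}}
```

Holding optimization power fixed, progress slows as recalcitrance grows; holding recalcitrance fixed, progress scales with the effort applied.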

After finishing Smolke’s paper, I wondered whether subsequent optimization necessarily needs to come from human ingenuity. Why couldn’t they enhance opiate output through selective breeding? Humanity has a long history of radically reshaping plants and animals through selective breeding. Although yeast are valued as model organisms for their asexual reproduction, they are fully capable of reproducing sexually. It would therefore be possible to breed generations of yeast, and select for high thebaine output.

Perhaps the trajectory of the opiate-producing yeast would resemble that of the corn plants bred for high oil content. In a dramatic illustration of the power of artificial selection to increase the output of a particular gene product, the corn’s oil yield quadrupled in fewer than eighty generations (Fig. 8).

Figure 8. The average oil content of corn selectively bred for high oil yield across 80 generations. (From Boyd and Silk, How Humans Evolved: Second Edition. [New York: W.W. Norton and Company, Inc., 2000] p. 74; retrieved from PBS)

However, the corn example might be inapposite, because selective breeding requires genetic variation. The reason those corn plants could increase in oil yield over many years was not that the plants were evolving new genes, but that the selection process was concentrating the alleles that increased oil yield while winnowing out those that didn’t. At some point, the corn plants will stop increasing in oil content. When this happens, it will not necessarily be because the oil content has begun to compromise the corn’s ability to survive, but because every corn plant will have become invariant with respect to its oil-related genes. Once oil content, the sole selection criterion, has been exhausted of variation, further breeding is akin to breeding identical clones. Smolke’s yeast are also, for all intents and purposes, identical clones. Unlike a natural population, there is simply no diversity for evolution to winnow down.
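The dependence of selection on heritable variation can be illustrated with a toy simulation. This is a minimal sketch, not Smolke’s protocol; the population size, selection fraction, and the `genetic_sd` parameter (a stand-in for heritable variation) are all illustrative assumptions:

```python
import random

random.seed(0)  # reproducible toy run

def select_generation(population, top_frac=0.2, genetic_sd=0.0):
    """One round of truncation selection: keep the highest-yielding
    fraction and form the next generation around the selected mean.
    genetic_sd stands in for heritable variation among offspring."""
    cutoff = max(1, int(len(population) * top_frac))
    survivors = sorted(population, reverse=True)[:cutoff]
    parent_mean = sum(survivors) / len(survivors)
    return [random.gauss(parent_mean, genetic_sd) for _ in range(len(population))]

def mean_yield_after(generations, genetic_sd):
    """Breed for the given number of rounds and report the final mean yield."""
    pop = [random.gauss(1.0, genetic_sd) for _ in range(500)]
    for _ in range(generations):
        pop = select_generation(pop, genetic_sd=genetic_sd)
    return sum(pop) / len(pop)

varied = mean_yield_after(20, genetic_sd=0.1)  # heritable variation present
clonal = mean_yield_after(20, genetic_sd=0.0)  # identical clones

print(varied > clonal)  # prints True: selection needs variation to act on
```

The clonal population never budges from its starting yield, however hard we select, which is the corn paragraph’s point in miniature.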

The ultimate economic impact of yeast-based opiate biosynthesis notwithstanding, this paper by Galanie et al. shows the practical value of basic research. This recent achievement was made possible by decades of exploratory research into the hidden mechanisms of bacteria, flowers, and yeast, research that, at the time, may have been difficult to justify in terms of economic utility. Nevertheless, that corpus of information is now being applied to the biomedical sciences. Investment in scientific research benefits society.

Life’s diversity is frequently extolled, and biodiversity is a key goal of conservationism. In terms of sheer economic value, however, diversity is difficult to justify. As a result, many choose to frame it as an aesthetic good: it is, in Darwin’s words, “endless forms most beautiful and most wonderful.” But life is more than life forms. It is also a vast repository of information, of strategies for wringing meaningful work out of insensate chemicals. It is this informational diversity that is worth preserving. This information is intelligible to us because it is wrought in DNA, the universal genetic code. Modern advances in synthetic biology attest to our progress in mastering this amazing, and largely untapped, natural resource.

The power to engineer organisms is the ultimate validation of our knowledge about the workings of cellular mechanisms. To have assembled all our knowledge about cell biology into a textbook is impressive, but to apply that knowledge to reconfigure and optimize another creature is deeply gratifying. The ability to engineer an organism is an external criterion of verification, the biological equivalent of an airplane designer observing that her plane flies properly instead of falling from the sky. Perhaps, like the corn that was bred for high oil content, there is a limit to how much an evolved life-form can be reshaped. However much we might praise the utility of any given model organism, there is a point after which further efficiency cuts into its viability.

There is currently a program in synthetic biology to develop single-celled “template organisms” that are loaded with gene clusters enabling the artificial cells to do a particular job without expending resources on extraneous functions. Even if synthetic biology doesn’t produce template organisms, it is likely that synthetic biologists working with single-celled organisms like yeast will converge on this solution. Along with inserting novel genes, Smolke’s team also silenced the activity of native yeast genes. This suggests that further optimizations might come from eliminating any genes that interfere, directly or indirectly, with thebaine biosynthesis. Once that is done, the genes that code for anything besides basic functions and thebaine biosynthesis could be erased, since these genes waste metabolic resources that might otherwise be dedicated to manufacturing thebaine. The ultimate product of all this culling would be an organism that resembles the notional template organism more than a wild-type yeast.

The research chronicled in Galanie et al. is a technological accomplishment. Moreover, the byproducts of optimizing this particular technique will catalyze subsequent advances. Its moral implications are no less significant; the promise of meeting demand for opiate medication in the developing world is worth our investment in this research program.


Galanie, S., Thodey, K., Trenchard, I. J., Interrante, F., & Smolke, C. D. (2015). Complete biosynthesis of opioids in yeast. Science, 349(6252), 1095–1100.

Service, R. F. (2015). Modified yeast produce opiates from sugar. Science, 349(6249), 677. [DOI:10.1126/science.349.6249.677]

Exploring Creativity, Part III: How Not to Define Creativity

“There are forty kinds of lunacy but only one kind of common sense.”

-African proverb

Before I began this essay series, it had never occurred to me to consider creativity and rationality together. On the surface, they seemed like totally separate domains. Creativity belonged to positive psychology, a field that seemed scarcely more rigorous than humanist philosophy. And rationality belonged to a sphere of discourse that emphasized the proceduralization of one’s life in the service of eradicating bias. But I now recognize that creativity is central to rationality. One need look no further than the rationalist whose creative exercise spurred this essay series in the first place, Eliezer Yudkowsky, whose prodigious creativity is visible in all his pursuits, from fiction to scholarship to advocacy.

It is edifying to juxtapose Eliezer Yudkowsky, who privileges rationality over creativity (or at least views creativity as a tool of rational thought), with Mihaly Csikszentmihalyi (pronounced mee-HAI six-cent-mee-HAI), a positive psychologist who, as we shall see, allows his enthusiasm for creativity to deaden his critical thinking skills. Csikszentmihalyi is probably best known for his work on creativity, which he encapsulated in his book Creativity: Flow and the Psychology of Discovery and Invention (1997). In the book, Csikszentmihalyi outlines the characteristics of creative individuals that he gleaned through extensive interviews:

Characteristics of the creative personality:

  1. Creative individuals have a great deal of energy, but they are also often quiet and at rest.
  2. Creative individuals tend to be smart, yet also naive at the same time.
  3. Creative individuals have a combination of playfulness and discipline, or responsibility and irresponsibility.
  4. Creative individuals alternate between imagination and fantasy at one end, and a rooted sense of reality at the other.
  5. Creative people seem to harbor opposite tendencies on the continuum between extraversion and introversion.
  6. Creative individuals are also remarkably humble and proud at the same time.
  7. Creative individuals to a certain extent escape rigid gender role stereotyping and have a tendency toward androgyny.
  8. Generally, creative people are thought to be rebellious and independent.
  9. Most creative persons are very passionate about their work, yet they can be extremely objective about it as well.
  10. The openness and sensitivity of creative individuals often exposes them to suffering and pain yet also a great deal of enjoyment. (Csikszentmihalyi 58-73)

If you’re like me, you find it easy to identify with these descriptions. Yes, that’s me! I do have these opposing tendencies! Personally, I felt a frisson of pseudo-insight when I read #5. My persistent amazement at how difficult it is to understand what it’s like to feel sociable when I am feeling introverted, and vice versa, is a perennial topic in my journal writing. These descriptions also conform to our intuitive sense that creativity involves a broad, exploratory, and boundary-defying mind. But we would be wise not to trust our intuitive sense that these descriptions point to a valid definition of creativity. Our minds are easily fooled by superficial resemblances, and are especially vulnerable to intuitive interpretations that seem psychologically comforting. Csikszentmihalyi’s descriptions of “the creative personality” are both superficially plausible and psychologically comforting. Truly, these traits don’t describe creative people. They describe people.

It is trivially easy to construct statements that apply universally, flatter everybody, and sound scholarly. Here are a few I came up with just now:

  • Creative individuals enjoy eating new and delicious foods, but also have the capacity to appreciate plain and familiar meals.
  • Creative people enjoy exploring the natural world, yet also delight in cozy indoor settings.
  • Creative individuals tend to link paper strips using tape, but sometimes they find it preferable to use a glue-stick.

Perhaps you think that the model’s generality is an advantage. After all, if creativity is a holistic concept with diverse manifestations, why shouldn’t the definition be just as general and multifaceted? The problem with this reasoning is that when your model is so elastic that it can explain everything, it actually explains nothing. Such a model is a Fake Explanation. As Yudkowsky explains, “the usefulness of a model is not what it can explain, but what it can’t.” A model of the world that is compatible with every possibility is as good as having no model at all.

But Fake Explanations are insidious. They trick us into believing that we have actually explained something when we haven’t. They are the intellectual equivalent of junk food, satisfying our desire to be knowledgeable without actually increasing our knowledge. This kind of uncritical thinking is facilitated by imprecise language, in particular the conflation of a word’s commonsense definition with its technical, scientific counterpart. In Csikszentmihalyi’s final synthesis, where he reveals the factor that unites the various “characteristics of the creative personality” enumerated above, we see an example of this irrational conflation:

“If I had to express in one word what makes their personalities different from others, it’s complexity. They show tendencies of thought and action that in most people are segregated. They contain contradictory extremes; instead of being an individual, each of them is a multitude.” (my emphasis)

This word, complexity, has a precise meaning in physics, mathematics, and computer science. Here, however, Csikszentmihalyi is not referring to that definition, but simply capitalizing on the association of the word complexity with scientific rigor. I do not mean to imply that Csikszentmihalyi is deliberately masking the vacuity of his argument with fuzzy language. That’s the most insidious part of a Fake Explanation: he probably thought that complexity was a suitable explanation for his observations. It is extra disappointing because the scientific study of complexity is a fascinating topic. Did you know, for instance, that information, randomness, and complexity all have rigorous mathematical expressions? And did you know that they are equivalent, alternative formulations of the same basic principles? To say that creative individuals display “complexity” and contain “a multitude” is tantamount to throwing up your hands and admitting your ignorance. The most egregious sin against rationality is to believe yourself to be doing science when in fact you are not. As Richard Feynman wrote, “The first principle is that you must not fool yourself, and you are the easiest person to fool.”
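To give a flavor of the rigorous sense of the word: the Kolmogorov complexity of a string x is defined as the length of the shortest program that produces x when run on a universal Turing machine U:

```latex
K_U(x) = \min \{\, |p| : U(p) = x \,\}
```

A string counts as algorithmically random precisely when no program much shorter than the string itself can generate it, which is how complexity, randomness, and information content turn out to be facets of the same quantity.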

How, then, did Csikszentmihalyi fool himself so utterly? Csikszentmihalyi is famous for his notion of Flow, a kind of “effortless attention” that is widely described as a “peak experience.” The fact that psychology textbooks, rationality resources, mindfulness texts, and psychotherapy manuals all discuss Flow is a testament to its enduring utility as a concept. And Csikszentmihalyi is not a charlatan or a pseudoscientist; his work is repeatedly cited by Daniel Kahneman, a founder of behavioral economics, and by other eminent scholars of the mind. Having researched the intellectual history of creativity, I find it plausible that Flow will someday be as much a fixture in the English lexicon as creativity currently is. My friend Mike Tizzano has written a cogent review of Csikszentmihalyi’s book on this subject.

To understand how Csikszentmihalyi could have made such patently irrational statements, we need to examine his evidence-gathering method. He interviewed 91 people who were publicly regarded as creative, and whose creative contributions had an impact on their respective fields. Full disclosure: I am a laboratory scientist, a consummate experimentalist who studies mouse behavior, physiology, and brain organization. I am just about the farthest thing from a positive psychologist, and I am eager to avoid the appearance that my critique is merely the chauvinism of an experimentalist. Truly, I am not impugning Csikszentmihalyi’s method, only the conclusions he draws from it. In psychology, there are two types of research methodologies. First, there are descriptive methods, such as naturalistic observation, case studies, or surveys. Then there are experimental methods, which involve manipulating variables to work out cause and effect. To an extent, the descriptive-experimental distinction is a false dichotomy. Surveys, for example, are usually classified as a descriptive method, but they are much more quantifiable and generalizable than other descriptive methods, such as naturalistic observation or case studies. Still, the experimental approach allows for more rigorous generalizations, while the descriptive approach provides evidence that is, at best, suggestive.

To be sure, Csikszentmihalyi’s approach was rigorous enough to cast doubt on certain popular misconceptions of creativity, such as the prevailing idea that highly creative individuals are “mad geniuses” who had distressing childhoods. In fact, Csikszentmihalyi found that most of his interviewees were not eccentric monomaniacs, but, rather, conscientious and agreeable. Moreover, Csikszentmihalyi’s sample generally reported happy or uneventful childhoods, with plenty of supportive adults to act as mentors. Csikszentmihalyi’s methodology was sufficient to demonstrate that these earlier theories of creativity were likely to be untrue. Although not maximally rigorous, this kind of study can serve as a stimulus for further, more rigorous research.

Indeed, more rigorous survey methods have vindicated Csikszentmihalyi’s impression that the “mad genius” is mostly a myth. An apposite modern parallel is the work of Temple Grandin, an animal behavior expert who has written extensively on autism. Grandin includes in her books first-hand accounts of how people with autism experience the world, including her own. In Thinking in Pictures and The Autistic Brain: Thinking Across the Spectrum, Grandin highlights these perspectives not as a way of making an end-run around the peer review process, but as a way to spotlight aspects of the autistic experience, such as the variety of ways of thinking and the role of sensory processing difficulties, that are currently being neglected by the scientific establishment.

The pursuit of a coherent vision of creativity begins with a recognition that intelligent, well-informed people will have slightly different intuitions about what constitutes creativity. This diversity is to be expected, since creativity is an abstract notion that cannot be observed directly, and no one has privileged access to it. It is fundamentally unlike hard-to-observe physical phenomena like quantum tunneling, radioactive decay, or cell division. Understanding (and, eventually, enhancing) creativity requires more than building a better microscope or a more powerful supercollider. It will require a clear and explicit definition that tracks our intuitive sense of what creativity is. We begin with the observation that creativity is a broadly subscribed notion that exists in the minds of many individuals. In that regard, it is like a meme, and a wildly popular one at that. But the popularity of a meme is orthogonal to its truth value. False and pernicious memes (vaccines cause autism; feminism entails denigrating men; existing laws ensure humane treatment of factory-farmed livestock) propagate irrespective of their scientific merit. Memes spread because they are psychologically comforting and/or politically expedient.

Creativity is often framed as an unscientific concept, and unquestionably there are formulations of creativity that are not valid (i.e. that do not track our intuitions about creativity) or, as in the case of Csikszentmihalyi, define it in such a way that it is not possible to objectively distinguish creativity from its absence. The best way to classify creativity is as an intellectual construct: a phenomenon that cannot be attributed to a distinct cause or group of causes, but that nonetheless proves a useful tool for categorizing and explaining other facts. The intellectual construct is a useful fiction, except that the users of intellectual constructs are not trying to seed propaganda or hoodwink anybody. Nor are intellectual constructs resistant to objective study. There is, however, one sacrifice you must make before embracing an intellectual construct. You must abandon the ineffable mystique that surrounds venerated notions like creativity, deign to design a contrived test that is susceptible to measurement, and, finally, accept such tests as proxies for creativity itself. This process of converting an abstract, holistic idea into a discrete, measurable form is called operationalization.

My first introduction to operationalization was in a Psychology 101 course I took when I was a junior in high school. Professor Tsiris explained that in addition to teaching, she also worked as a behavioral therapist at a special school for children with psychological problems. One day, Professor Tsiris asked us how we would make operational definitions to assess a child’s writing. The class volunteered that one should look at the legibility of the handwriting, and whether the child has written in complete sentences. No, she said. You had better make sure that the child is sitting down, has a piece of paper in front of them, is facing the paper, has a pencil in their hand, has the pencil tip pointing down toward the paper, and many other extremely basic considerations. The class was duly humbled by this explanation. “At the school where I work,” she said, “it’s my job to quantify the behavior of the children, so we can really assess the effectiveness of our interventions.” She explained that one of the children she monitored had a habit of banging his head on his desk. Matter-of-factly, Professor Tsiris explained that when he started doing this, she didn’t try to restrain or even reprimand him. Professor Tsiris lifted her hand and pantomimed holding up a clicker-counter. “Instead, in those situations, I count the number of times he hits his head against the desk.” The number of head-bangs was her operational definition of that child’s misbehavior. I remember being in awe of Professor Tsiris at that moment. Operationalization, it would seem, was a hardcore enterprise.

Like other intellectual constructs, such as intelligence, emotion, language, motivation, and attention, creativity must be defined in such a way that it is both valid and reliable. In the scientific context, valid and reliable have precise meanings. Reliability refers to whether a measure yields consistent results when it is repeated. Validity refers to whether the operationalized descriptions actually measure what they purport to measure, in this case creativity: a valid construct of creativity actually corresponds to our intuitions about who is and isn’t creative. Although there are several aspects of validity that need to be assessed, the most intuitive is face validity, as in the remark, “the idea was ridiculous on its face!” Unsurprisingly, face validity is sometimes referred to as a “sanity check.” Face validity is a way of quickly checking a new idea, but it shouldn’t be used as a bludgeon for dismissing all nuanced or counterintuitive claims. The face validity metric doesn’t zero in on a specific aspect of a theory but, rather, on the theory as a whole: its foundations, its proposed mechanisms, its falsifiability, and its overall relevance. In the case of Csikszentmihalyi, the glaring flaw is that his descriptions of creativity are unfalsifiable. His model is unable to distinguish creativity from its absence. To the extent that someone might claim to find the theory useful, it would be a case of mistakenly attributing their intuitive ability to recognize creativity to Csikszentmihalyi’s vague descriptions. Despite the challenges involved in fashioning a rigorous definition of creativity, we can all (to an extent) “know it when we see it.”

Moving a little beyond face validity, we might predict that people who a majority of impartial observers would describe as creative would perform better on the proposed tests of creativity than people designated as less creative. Put so starkly, this criterion sounds blindingly obvious. Why do researchers even bother with articulating such elementary principles? The short answer is that this component of face validity is not as blindingly obvious when encountered in the wild. And it is particularly hard to spot when it is your cherished hypothesis that stands to be eviscerated. Recall Csikszentmihalyi’s descriptions of “the creative personality” from earlier. Let’s consider #5: “Creative people seem to harbor opposite tendencies on the continuum between extraversion and introversion.” How would you design an experiment to find creative people according to this definition? Since no person is completely introverted or completely extroverted (what would such a person act like?), everyone must therefore “harbor opposite tendencies on the continuum between extraversion and introversion.” If everyone qualifies as creative according to this definition, then no one is creative.
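The unfalsifiability of criterion #5 can be made concrete with a few lines of code. This is an illustrative sketch; the 0.0-1.0 extraversion scale and the function name are my own assumptions, not Csikszentmihalyi’s:

```python
def harbors_opposite_tendencies(extraversion):
    """Criterion #5 taken literally: 'creative' means falling anywhere
    strictly between the introversion (0.0) and extraversion (1.0) poles.
    The 0.0-1.0 scale here is an illustrative assumption."""
    return 0.0 < extraversion < 1.0

# No real person sits at an exact pole, so the test admits everyone
# and therefore carries no information about who is creative.
sample_scores = [0.1, 0.35, 0.5, 0.72, 0.9]
print(all(harbors_opposite_tendencies(s) for s in sample_scores))  # prints True
```

A classifier that returns True for every realistic input makes no predictions, which is exactly the sense in which the definition fails.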

Did you just bristle at my conclusion that “no one is creative”? I hope you did. I did, too, and I wrote it! That visceral sensation of doubt is a natural reaction. Being an effective rationalist doesn’t mean you have no such feelings; it means you have the skills to recognize these feelings for what they are, and to deal with them constructively. Our incongruity-detecting mental modules are on the whole well-calibrated, but they occasionally misfire. Moreover, these misfires occur disproportionately in particular circumstances rather than at random. Part of your skill as a rationalist is to notice patterns in the circumstances that make these misfires more probable, and to then guard against them. If you can recognize the fallibility of your intuitive sense of doubt, then you are halfway to understanding why a scientific theory can propagate despite failing the test of face validity. You likely felt surprised, and perhaps aggrieved, at the conclusion that “no one is creative” because you interpreted the conclusion inductively (on the basis of your experience) even though the form of the sentence required deductive logic. The premise, I remind you, was “if everyone qualifies as creative according to this definition.” Your pre-existing knowledge of the instrumental and pragmatic value of the concept of creativity, as well as your emotional stake in conceiving of yourself as a creative individual, compelled you to grant legitimacy to Csikszentmihalyi’s irrational formulation of creativity.

I have focused on face validity because, compared to other measures of validity, it is the most fundamental as well as the easiest to understand. But there are other forms of validity that probe deeper aspects of a theory’s applicability to its stated claims, including internal, external, test, criterion, content, and construct validity. I will no doubt touch on these kinds of validity in future essays, though I probably won’t name them explicitly.

My purpose in this essay was to show, through a case study, some of the basic ways a definition of creativity can fail as a scientific theory. We saw that a definition must be falsifiable, operationalized, and stripped of any extraneous mysticism. As a rationalist, I see no principled reason for holding my personal beliefs to a different, lower standard than my scientific beliefs. Put another way, there should be no firewall between the intellectual ethic of the laboratory and the rest of the world. In my next essay, I will look at the prototypical creative exercise, brainstorming, and evaluate its theoretical foundations, its empirical support, and its intersection with the introversion-extraversion dimension of human personality.


Csikszentmihalyi, M. (1997). Creativity: Flow and the psychology of discovery and invention. New York: Harper Perennial.

Tizzano, M. (2015, March 6). Book Review: Flow by Mihaly Csikszentmihalyi. Retrieved July 18, 2015, from

Yudkowsky, E. (2007, August 20). Fake Explanations. Retrieved July 18, 2015, from

Exploring Creativity, Part II: A Brief History of Creativity

In my previous post, I began my exploration of creativity with an intriguing proposal for circumventing bias via a creative exercise. My overarching goal in this series is to showcase the methods of rationality while also explaining the modern scientific understanding of creativity. I intended to follow up that post with a brief survey of the history of the concept, from its origins in philosophy to its modern meaning, and then to discuss some of the technical definitions of creativity used in modern psychology. Disappointingly, the intellectual history of creativity is not particularly relevant or interesting. I had expected creativity to have followed a trajectory similar to that of early sciences, like optics, which underwent a more or less smooth and unidirectional course of improvement from their ancient origins to their modern incarnations. The history of creativity, by contrast, is marked by a small number of theories going in and out of fashion throughout the ages, with the major conceptual breakthroughs all happening in very modern times.

Creativity as we understand it is an astonishingly recent invention. This is not to say that until recently no one thought or acted creatively; it is rather the case that they did not conceive of their thoughts and actions in those terms. More than anything, the history of creativity emphasizes that our current conception of creativity as a singular mental faculty is not the norm. Rather, creativity comprises a cluster of phenomena: it is a faculty of mind, a personality trait, a variety of subjective experience, and a skill that can be taught and exercised. This multifaceted thing we call creativity has also experienced a change in connotation over the course of its history, from slightly untrustworthy to uniformly positive. The mystical undercurrents of creative thought have similarly subsided. Whereas the term creativity originally referred to the divine ability to engineer life, it is now a thoroughly secular concept. Whereas creative thinking was once ascribed to the intervention of supernatural agents, both good (angels, geniuses, muses) and evil (daemons, djinns), it is now recognized as being instantiated in our physical brains. Despite its secularization, creativity retains some of its original mystique. This enduring mystique manifests itself in people who still argue that human creativity is qualitatively superior to the creativity of non-human animals, rather than the two existing at different points along a spectrum of cognitive and behavioral flexibility.

The words creative and creativity have taken on a meaning distinct from their etymological origin in the word create. As I mentioned earlier, creativity used to refer exclusively to divine creation. The next step in the term’s evolution was toward something synonymous with “productivity,” with no regard to whether the production involved creative expression. We can illustrate the distinction between this sense and the modern sense with a simple thought experiment. We would not say that a person who produces a thousand origami cranes in a single day while working from an instruction manual is necessarily creative. However, we would say that the maker of the first ever origami crane was creative, even if it took him fifteen years to fold his first crane. The crane-maker in the first case is creative in the earlier sense, but not in the modern sense. The crane-maker in the second case is creative in the modern sense, but not in the earlier sense.

Before the twentieth century, it was commonly thought that creative expression was field-specific: one used separate mental faculties for mathematical versus philosophical creativity, rather than employing the same basic faculty of mind applied to different subjects. The modern conception of creativity as a phenomenon that is fundamentally the same across disciplines is attributed to the philosopher Alfred North Whitehead, who, in 1927, coined the word in its modern sense. Whitehead’s references to creativity mostly track the current meaning but, crucially, neglect the possibility that it could be an object of scientific inquiry. (Whitehead regarded creativity as a metaphysical concept.) The psychological establishment took little notice of creativity until 1950, when J. P. Guilford discussed creativity in his address to the American Psychological Association. Guilford’s message to the psychological community was that creativity was not merely a fuzzy metaphysical concept, but something that could, in principle, be studied just as rigorously and quantitatively as topics like intelligence and emotion. Once creativity was reified as a legitimate scientific topic, the serious efforts to define it began. Precise definitions are crucial in all sciences, but particularly in the psychological sciences, where most of the objects of inquiry are abstract concepts rather than physical phenomena.

When reviewing the early efforts to define creativity, I was delighted to discover another opportunity to bring in rationality. Next time, I will present another case study in the intersection of creativity and rationality. This will lead into a more technical overview of the different definitions of creativity that have been put forward by psychologists since 1950.

Exploring Creativity, Part I: A Case Study in Applying Creativity to the Goal of Overcoming Bias

In his essay The Third Alternative, the artificial intelligence theorist Eliezer Yudkowsky examines the fallacy known as the false dilemma (also known as false choice or the fallacy of the excluded middle), in which a problem is presented as a choice between two options when, in reality, other alternatives exist. As a persuasive tool, the false dilemma is an effective way for mendacious people to frame a problem so that their preferred solution seems reasonable. Political messages routinely incorporate false dilemmas, such as socialized health care vs. preserving individual freedom, or negotiating with Iran vs. endangering Israel. However, false dilemmas are not merely rhetorical tricks that others perpetrate on us. Rather, as individuals, our minds habitually construct false dilemmas for self-serving reasons: to justify irrational beliefs, baseless prejudices, and hurtful actions. Most rationality resources emphasize the motivational component of biased reasoning: we want a particular outcome, so our thinking proceeds in such a way that this desired outcome is assured. Yudkowsky, however, while not discounting the role of motivation, suggests that part of the problem stems from a lack of creativity. His suspicion is that if someone confronting a false dilemma were to think of a viable “third alternative” through a concerted burst of creativity, that person’s motivation to maintain the false dilemma would dissipate. To this end, Yudkowsky recommends an unconventional creative exercise:

Which leads into another good question to ask yourself straight out:  Did I spend five minutes with my eyes closed, brainstorming wild and creative options, trying to think of a better alternative?  It has to be five minutes by the clock, because otherwise you blink—close your eyes and open them again—and say, “Why, yes, I searched for alternatives, but there weren’t any.” Blinking makes a good black hole down which to dump your duties. An actual, physical clock is recommended.

This exercise, when performed in good faith, is intended to force you into creative thought in the hopes that the identification of another alternative will destroy your motivation to continue conceiving of the problem in a binary manner. Yudkowsky does not shy away from the major obstacle to successfully implementing this strategy: if we think that finding a creative solution will undermine our self-serving formulation of a problem, we will consequently be unmotivated in our search for creative solutions. The motivation to be biased is powerful and pervasive, but it can be counteracted by a sincerely held commitment to overcome bias. This conviction in the possibility and desirability of overcoming bias is the foundation of modern rationality.
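As an aside, the “by the clock” requirement is easy to make literal. The following is my own illustrative sketch of a strict brainstorming timer, not anything Yudkowsky provides; the function name and parameters are hypothetical. It announces the remaining time at intervals and only releases you once the full period has genuinely elapsed, which is the whole point of the exercise: no “blinking.”

```python
import time

def brainstorm_timer(minutes=5, tick_seconds=60,
                     clock=time.monotonic, sleep=time.sleep):
    """Run a strict, by-the-clock brainstorming countdown.

    Announces the time remaining at each tick and returns the number
    of ticks announced, so the caller can confirm the full interval
    actually elapsed rather than being skipped in a "blink".
    The clock and sleep functions are injectable for testing.
    """
    total = minutes * 60
    start = clock()
    ticks = 0
    while True:
        remaining = total - (clock() - start)
        if remaining <= 0:
            break
        print(f"Keep brainstorming: {int(remaining)}s left")
        ticks += 1
        # Never sleep past the end of the interval.
        sleep(min(tick_seconds, remaining))
    print("Time! Write down every alternative you thought of.")
    return ticks
```

Calling `brainstorm_timer()` with the defaults enforces the full five minutes; injecting a fake clock makes the countdown logic verifiable without waiting.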

When I previously described this five-minute timer task as “unconventional,” I did not mean to imply that it was outlandish or radical. I simply meant it was atypical by the standards of most debiasing techniques I’ve read about. The most common techniques involve “commitment devices,” which are ways of intentionally constraining your future options so as to reduce the risk of bad decisions. Examples of commitment devices include reducing your credit line if you have trouble restricting your spending, setting up recurring reminders to ensure timely submission of tax forms, or even buying an alarm clock that forces you to chase it around the room in order to turn it off. In a sense, commitment devices work by limiting creativity. Yudkowsky himself frequently extols commitment devices in his essays on rationality. In addition, the creativity-promoting aspect of the five-minute timer task seems at odds with another theme in his writings on rationality, best encapsulated by this quotation: “One of the chief pieces of advice I give to aspiring rationalists is ‘Don’t try to be clever.’” It is therefore surprising that Yudkowsky would ever endorse a debiasing exercise that called for “brainstorming wild and creative options.”

To resolve this apparent incongruity, it’s important to remember that Yudkowsky proposed the five-minute timer exercise as a remedy for a specific fallacy, the false dilemma. Perhaps creative effort is an effective debiasing strategy for false dilemmas, but not for other biases. It is, of course, an empirical question whether this exercise would actually be effective in reducing bias. And, even if it were shown to be effective, would it generalize to other kinds of biased decision-making that Yudkowsky attributes to insufficient creativity, such as the planning fallacy and scope insensitivity? It is my intention to evaluate the scientific evidence pertaining to the intentional application of creative thinking toward the problem of overcoming bias.

I wonder why this proposed solution involving deliberate creative effort is not more common. In our society, the importance of creativity is relentlessly extolled. Educators worry about how to teach it; cultural pundits despair over the deficient creativity of the younger generation; business executives pay exorbitant fees to consultants to make their employees more creative. Creativity is valued in every profession and avocation. Artists, scientists, chefs, and physicians are all praised for their creative contributions. Creativity enjoys such a singularly positive connotation that even the admonition “let’s not get too creative” is humorously mild, and the economic term creative destruction seems at first to be an oxymoron.

Regardless of whether or not the five-minute timer task works as a debiasing tool, there is broad agreement that creativity is a desirable goal. How, then, can we achieve it? Provocative questions proliferate:

Is solitude or collaboration more conducive to creative thought? How has creativity been defined in the history of psychology? Is creativity correlated with intelligence? Is creativity associated with particular personality traits? Is an individual who is creative in one domain, like songwriting, more likely to be creative in another domain, like designing scientific experiments? Which is to say, is creative potential a general trait or a domain-specific talent that doesn’t generalize to other domains? Relatedly, is there a difference between the higher-order creativity we ascribe to artistic or scientific achievement and the more mundane flexibility and resourcefulness we bring to our daily lives? What is the relationship between creativity and mindfulness meditation, or between creativity and psychopathology? Are there creativity-enhancing drugs? Finally, what is the proper role of creativity in rationality? Is there a correlation between rationality and creativity, and, if so, what is the factor (or factors) that unites them?

I will address each of these questions in subsequent posts. Next time, however, I will address the fundamental issue of how to define creativity. This has implications for whether creativity can be regarded as a tractable scientific topic, as well as how we measure it.

The Idea of the Self

This week, I finished the first and most onerous phase of the graduate school application process. The whole endeavor was dreary and stressful. Whenever I felt especially demotivated, I promised myself treats that I could enjoy once the applications were turned in. Because reading was the hobby I most neglected during the application process, the treats I promised myself were mainly books. As a result, I cannot open my Kindle without encountering an intimidating queue of unread titles.

One of the books that was hardest to resist reading was Waking Up: A Guide to Spirituality Without Religion, by Sam Harris. As the title implies, Waking Up is an introduction to meditation and mindfulness for a secular audience. The thesis of the book is that in spite of all the religious associations, transcendent experience is a real phenomenon that can be studied and practiced without sacrificing the rigorous standards of scientific argument.

(Note: I haven’t started to read Waking Up yet. The purpose of this post is to organize my thoughts about the topic so that I can engage with the book more thoughtfully when I do read it.)

One of the central arguments of Waking Up is that the experience of the self is an illusion. Moreover, it is both possible and desirable to dispel this illusion. The first part of Harris’s case against the self is philosophical, arguing that, despite its intuitive appeal, the concept of the self is incoherent.

The second part of Harris’s case is scientific, relying on recent findings in neuropsychology. This makes me bristle, because I do not fully trust Sam Harris to accurately present scientific evidence. In his previous book, Free Will, he overstated the relevance of neurological findings about free choice; I found Daniel Dennett’s critique of Free Will very persuasive on this point. To be clear, I still generally trust Sam Harris, but I will not be as reflexively accepting of his framing as I am with, say, Steven Pinker. This is unfortunate, because I want Waking Up to be a compelling book. I want it to give me the inspiration and the intellectual justification to start meditating. But I know I will not be as motivated if I have lingering doubts about his representation of the relevant research.

Therefore, I am assigning myself a prerequisite: The Self Illusion: How the Social Brain Creates Identity, by Bruce Hood. Bruce Hood is a distinguished neuropsychologist who was involved with much of the research that Sam Harris cites. I am familiar with Bruce Hood’s work from his other contributions, and I have confidence in both his fundamental knowledge and his scholarly standards.

I am aware that there is a research program in neuropsychology dedicated to understanding the self, but I know very little about it. I know even less about the history of philosophical speculation on the subject. For instance, I can’t articulate a decent definition of the self off the top of my head. That being said, in the course of my studies I have encountered some anecdotes that might be relevant to the issue of the self.

  • In my introductory psychology class, we learned about split-brain patients: people who have had the major nerve bundles connecting the two cerebral hemispheres (chiefly the corpus callosum) severed in order to ameliorate their seizures. Although split-brain patients appear to be a single, unified self, clever studies that present information to one side of the brain but not the other reveal that each side of the brain seems to operate independently, almost like separate selves. I say “almost” because these studies show that the two sides share some information between them. There’s a great ten-minute documentary with Alan Alda and Michael Gazzaniga involving a split-brain patient. Actually seeing the person perform the tests is very compelling. We see a phenomenon called confabulation, where a patient insists that he or she chose to behave in a particular way when that behavior was actually evoked by the experimenter’s manipulation. In this case we see the left hemisphere (the speaking side) confabulate reasons for choosing a particular picture, when the real reason was that the right hemisphere was exposed to the picture. The fact that the two hemispheres appear to function as independent selves, coupled with the fact that each hemisphere cannot distinguish between internally generated and externally suggested thoughts, suggests that our seamless experience of a self is illusory.
  • Another possible argument against the self is the psychological phenomenon known as the hot-cold empathy gap. In their book Willpower: Rediscovering the Greatest Human Strength, Baumeister and Tierney describe the hot-cold empathy gap as “the inability, during a cool, rational, peaceful moment, to appreciate how we’ll behave during the heat of passion and temptation.” (Baumeister & Tierney, 148) This seems to be what people mean when they refer to their sleepy, hungry, willpower-depleted self as a different person. I can relate. If I were a unitary self, why do I sometimes act erratically and counterproductively? Why does anticipating and subverting the bad behavior of my future self feel so much like anticipating and subverting another person? It is unsurprising to me that people in the past attributed their intense, maladaptive impulses to demonic possession; it really feels like another self, or at least an augmented self, is controlling your behavior. If I were a singular self, I would expect my preferences to remain more constant across time.
  • A philosophical argument that casts doubt on the reality of the self is the observation that there is no continuity of structure in our bodies. The molecular composition of your body is in constant flux. It’s possible that none of the atoms that comprised your body ten years ago are present in your current body. This constant overhaul of your physical body seems inconsistent with our single, uninterrupted experience of self.

I am looking forward to reading more about the self, its illusoriness, and its relationship to my goal of increasing my wellbeing through meditation.