Exploring Creativity, Part IV: Brainstorming

In Exploring Creativity, Part I, I discussed how creative exercises rarely incorporate rationality. As an example of an exercise that explicitly applied creativity to overcoming bias, I spotlighted Eliezer Yudkowsky’s idea for avoiding the fallacy of the false dilemma. To disabuse oneself of the idea that a problem admits of only two options, Yudkowsky suggested, you should spend five minutes racking your mind for additional alternatives. I called this recommendation the Timer Task because Yudkowsky insisted that we measure five minutes by an actual clock, rather than our intuitive sense of time. Despite there being no experimental evidence to recommend this technique, I praised the Timer Task for at least acknowledging the synergy between creativity and rationality. Another aspect of the Timer Task that distinguishes it from most other creativity exercises is the fact that it is solitary.

The solitary nature of the Timer Task is noteworthy because most creative exercises implemented by schools and businesses are group exercises. Indeed, for the past sixty years, the prototypical creative exercise has been group brainstorming. Based on my own informal survey, I suspect most Americans are familiar with the technique. According to the cognitive psychologist and creativity expert Mark A. Runco, “brainstorming is almost definitely the most often employed [creativity] enhancement technique” (365). 

In its modern sense, ‘brainstorming’ refers to an activity in which several people work together to generate creative solutions to a given problem. However, the term ‘brainstorm’ dates back to the 1890s, when it was medical jargon that meant “a fit of mental confusion or excitement” (Random House). By the 1920s, the word ‘brainstorm’ had diverged from its clinical origin, toward something analogous to epiphany or insight. According to the Dictionary of American Slang (4th ed.), it was “[a] sudden idea, esp. one that is apt and useful.” Around the same time, we find the first use of ‘brainstorm’ in its modern sense, as a verb meaning, “to examine and work on a problem by having a group sit around and utter spontaneously whatever relevant thoughts they have.” (Dictionary of American Slang, 4th ed.)

It was not until the 1950s, however, that ‘brainstorming’ transitioned from little-known slang to an established member of the lexicon. The current notion of brainstorming as a formal technique can be traced back to a single source: the 1953 business management book, Applied Imagination. It is from this book that we get the term “brainstorming session.” The impact of this book can be observed in the increased usage of ‘brainstorm’ and ‘brainstorming’ in written media after 1953, as depicted in the Google NGram chart below. The especially steep rise in the usage of ‘brainstorming’ –as opposed to other inflected forms of the verb ‘to brainstorm,’ such as ‘brainstormed’– reflected an emerging sense that the word referred to a formal technique, rather than a new label for a standard conferencing strategy.

Google NGram Viewer: 'Brainstorm', 'Brainstorming', and 'Brainstormed'

Although the process of searching your mind for creative solutions to a particular problem –i.e. the crux of brainstorming– can be accomplished just as easily by an individual as by a group, the popular meaning of brainstorming assumes it to be a group activity. Outside the scholarly literature, “group brainstorming” is a redundancy, and “individual brainstorming” is a contradiction in terms. Undoubtedly, the inventors and early adopters of brainstorming regarded their method as a group activity. Even today, most dictionaries continue to define brainstorming as an inherently social enterprise. If you conducted a psychological experiment in which you brought a group of strangers together, gave them each a piece of paper, and asked them to “brainstorm creative solutions to a problem,” I strongly suspect most groups would not even consider splitting up, coming up with ideas separately, and then pooling their ideas at the end of the session. And yet, as I intend to show, this strategy of brainstorming individually and then pooling solutions would be far more effective than group brainstorming.

Modern American culture idealizes extraverted personality traits. Although recent years have seen an uptick in appreciation of introversion, the extrovert ideal still exerts a tremendous impact on all aspects of our lives, including our conception of creativity. Indeed, brainstorming’s status as the prototypical creative exercise is simply one example of a broader conflation of creativity with extraversion. Appreciation for brainstorming cuts across traditional ideological and professional boundaries: therapists, educators, corporate managers, and military planners all employ brainstorming techniques in their work. But what are the consequences of conflating creativity and social interaction?

In her fine book Quiet: The Power of Introverts in a World that Can’t Stop Talking, Susan Cain situates the enthusiastic embrace of group brainstorming in the 1950s within a broader cultural trend that glorified extraverted personality traits. One indicator of the public’s receptiveness to brainstorming was the speed of its adoption. Writing in 1958 –a mere five years after the publication of Applied Imagination— the psychologist Donald Taylor commented:

Within recent years [the use of brainstorming] has grown rapidly. A large number of major companies, units of the Army, Navy, and Air Force, and various federal, state, and local civilian agencies have employed the technique […]. (Taylor 24)

In my previous post in this series, I profiled Mihaly Csikszentmihalyi, an eminent psychologist who wrote some irrational things about creativity. One of his unfalsifiable definitions of creativity touched on the introversion-extraversion dynamic. In Creativity, he wrote that creative people “seem to harbor opposite tendencies on the continuum between extraversion and introversion” (Csikszentmihalyi, as quoted in Landrum, pg. 64). To go from reading Csikszentmihalyi to reading Susan Cain is to experience a kind of literary vertigo. Whereas Csikszentmihalyi is exuberant, vague and prone to digressions, Cain is modest, deliberative, and thesis-driven. Here is a striking, instructive (and, in my view, invidious) example. Both Cain and Csikszentmihalyi address the apparent paradox of people who manifest both introverted and extroverted traits. Here’s Csikszentmihalyi:

Creative people tend to be both extroverted and introverted. We’re usually one or the other, either preferring to be in the thick of crowds or sitting on the sidelines and observing the passing show…. Creative individuals, on the other hand, seem to exhibit both traits simultaneously. (Csikszentmihalyi, as quoted in Kaufman)

Here’s Cain:

Introverts, in contrast, may have strong social skills and enjoy parties and business meetings, but after a while wish they were home in their pajamas. They prefer to devote their social energies to close friends, colleagues, and family. They listen more than they talk, think before they speak, and often feel as if they express themselves better in writing than in conversation. They tend to dislike conflict. Many have a horror of small talk, but enjoy deep discussions. (Cain 11)

More than anything, I want to highlight the fact that both authors are making the same argument. The only difference is that Susan Cain actually resolves the paradox by providing illustrative examples; Csikszentmihalyi simply asserts that certain “[c]reative individuals” can “exhibit both traits simultaneously.”

My purpose in bringing this up is not simply to re-hash my earlier critique of Csikszentmihalyi, but to show that rigorous thinking about science isn’t solely the province of scientists. Susan Cain is not a scientist, but her writing reflects a deep respect for science. And in many ways she is a better rationalist than Csikszentmihalyi. It goes to show that, regardless of profession or academic specialty, anyone is capable of making a positive contribution to the scientific conversation.

In Quiet, Susan Cain uses brainstorming as a lens through which to inspect our cultural beliefs about introversion and extraversion. In an earlier section, I mentioned the 1953 book Applied Imagination, crediting it with having launched brainstorming into public consciousness. Applied Imagination was written by the advertising executive Alex Osborn, who had been developing brainstorming techniques since 1939 in his role as a consultant to major businesses. He believed that (1) people were naturally creative, (2) creativity was the key to success in business, and (3) traditional business practices stymied creativity. Brainstorming, Osborn thought, was the optimal way to unleash this wellspring of latent creativity. Though convinced that group synergy was essential for creative achievement, Osborn was aware of pernicious group dynamics like social anxiety and diffusion of responsibility. However, he maintained that these problems could be averted through a combination of explicit instructions and expert guidance. Osborn outlined four rules for constructive brainstorming:

(1) Criticism is ruled out. Adverse judgment of ideas must be withheld until later.

(2) “Free-wheeling” is welcomed. The wilder the idea, the better; it is easier to tame down than to think up.

(3) Quantity is wanted. The greater number of ideas, the more the likelihood of winners.

(4) Combinations and improvements are sought. In addition to contributing ideas of their own, participants should suggest how ideas of others can be turned into better ideas; or how two or more ideas can be joined into still another idea. (Applied Imagination, as quoted in Taylor, et al., 24-25)

On the surface, these rules sound plausible and comprehensive. However, if the objective is to maximize creativity, all four rules are counterproductive. The first rule is wrong in its assumption that one could subvert people’s judgmental attitudes via an explicit rule, and doubly wrong for supposing that a maximally permissive environment is the ideal incubator of creativity. The second and third rules err in their assumptions about what kind of creative solutions are worth aiming for. The fourth rule, like the first, falsely assumes that a collaborative environment is necessarily more amenable to creativity than an adversarial one. In summary, these four rules are the product of misconceptions and wishful thinking.

Osborn was particularly emphatic about his third rule, “Go for quantity.” Writing about his own experiences with brainstorming in a business setting, he enthused:

“One group produced 45 suggestions for a home-appliance promotion, 56 ideas for a money-raising campaign, 124 ideas on how to sell more blankets. In another case, 15 groups brainstormed one and the same problem and produced over 800 ideas.” (Osborn, as quoted in Quiet, pg. 87)

Large numbers such as these are only superficially impressive, because they do not take into consideration the quality of those ideas. It is no doubt possible to conjure up thousands of cockamamie solutions to any given problem; the set of possible ideas is literally infinite. Ultimately, however, a single quality idea is worth more than ten-thousand terrible ones. Nor do Osborn’s numbers tell us whether people working in isolation would have generated more ideas. Astonishingly, Osborn does not comment on either of these two alternative possibilities.

Like many of my peers, my earliest experiences with brainstorming occurred at school. During these sessions, I remember feeling exasperated by the “be freewheeling” rule, which seemed to result in many irrelevant digressions. I can also recall a few instances in which I wanted to violate the “non-judgmental” rule. This was not (necessarily) because I was an arrogant jackass, but because I genuinely thought rebutting my peer’s point would improve the overall conversation.

When I read Quiet, I was gratified to discover that my own attitude toward class participation is the norm in East Asian culture. Cain quotes an Asian student who was astonished by the permissive attitude of the American university she had attended:

“…. At UCLA, the professor would start class, saying, ‘Let’s discuss!’ I would look at my peers while they were talking nonsense, and the professors were so patient, just listening to everyone.” She nods her head comically, mimicking the overly respectful professors.

I remember being amazed. It was a linguistics class, and that’s not even linguistics the students are talking about! I thought, “Oh, in the U.S., as soon as you start talking, you’re fine.’” (Cain 185)

Whereas American education emphasizes participation, East Asian culture emphasizes restraint. Cain cogently explores how this particular difference reflects a more fundamental difference in how each culture regards introverted and extraverted traits. As I alluded to earlier, contemporary American society is enamored of extraversion. Having experienced the American educational system firsthand, I am quite familiar with the ways this system sometimes fails to foster curiosity, rationality, and civic virtue. However, this may simply be an instance of the grass always being greener on the other side. Because I never personally experienced a more restrained educational environment, I cannot readily conceive of its potential downsides.

According to Cain, “Osborn’s theory had great impact, and company leaders took up brainstorming with enthusiasm” (87). The popularity of brainstorming in the business community coincided with the growing recognition of creativity as a legitimate subject of inquiry among psychologists, who were understandably eager to assess which creative exercises were most effective, and why. “There’s only one problem with Osborn’s breakthrough idea,” Cain notes witheringly, “group brainstorming doesn’t actually work” (88).

How did researchers demonstrate that group brainstorming doesn’t work? And why did the practitioners and popularizers of group brainstorming not recognize its ineffectiveness?

If you were a researcher, how would you design an experiment to test whether brainstorming actually accomplished its stated goal of enhancing creativity? It wouldn’t suffice to look at case studies of group brainstorming, as Alex Osborn did. You would have no control group, and hence no basis for concluding anything about the effects of brainstorming. The question becomes what kind of control group is best. Ideally, the only difference between the control group and the experimental group would be the presence or absence of other people. Therefore, you would need to compare group brainstormers against an equivalent number of individuals brainstorming in isolation. For example, you might compare the solutions generated by a group of six people who worked for one hour against the compiled solutions of a nominal group consisting of six individuals who each spent an hour working on solutions in isolation. In the coming paragraphs, I will be speaking generally about how one might design an experiment, as though the project were purely hypothetical. This is educational sleight-of-hand. For all intents and purposes, I am summarizing Donald Taylor’s landmark study, “Does Group Participation When Using Brainstorming Facilitate or Inhibit Creative Thinking?” (1958), which was the first to rigorously compare group brainstorming against nominal groups of individuals who worked independently.
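To make the comparison concrete, here is a minimal sketch (in Python, with entirely invented data) of how one might pool the output of individuals who worked alone into a nominal group and compare it against a real group’s output. The variable names, the example ideas, and the rule of counting duplicate ideas only once are my own illustrative assumptions, not Taylor’s actual scoring procedure.

```python
# Hypothetical illustration: comparing a real brainstorming group against a
# "nominal group" assembled from people who worked alone.
# All ideas below are invented for the sake of the example.

real_group_ideas = {
    "offer discount rail passes",
    "advertise in airline magazines",
    "simplify visa paperwork",
    "host international film festivals",
}

# Each set holds the ideas one person produced while working in isolation.
individual_sessions = [
    {"offer discount rail passes", "partner with European travel agencies"},
    {"simplify visa paperwork", "create multilingual tourism websites"},
    {"advertise in airline magazines", "promote national parks abroad",
     "subsidize student exchange programs"},
]

# A nominal group pools everyone's ideas, counting duplicates only once.
nominal_group_ideas = set().union(*individual_sessions)

print(f"Real group:    {len(real_group_ideas)} distinct ideas")
print(f"Nominal group: {len(nominal_group_ideas)} distinct ideas")
```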

To compare the real groups against the nominal groups, the experimenters needed an operational definition of creativity, the mental faculty that group brainstorming allegedly enhanced. In their analysis of the results, the experimenters assessed not only the sheer number of ideas, but also their overall quality. But, whereas the total number of ideas is easily measured and entirely objective (just count them), the notion of quality is not so easily judged. In part, this is because quality comprises multiple abstract attributes that might be present or absent in different degrees. In psychological research, it’s useful to subdivide quality into components with more restricted definitions. These might include novelty, generality, effectiveness, and feasibility. If you assess each of these components individually and then pool those assessments, the composite score can serve as a reasonable indicator of overall quality.
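As a rough illustration of what such pooling might look like, here is a small sketch in which each idea has already been rated on a few component dimensions; the dimension names and the equal weighting are my assumptions for the example, not the scoring scheme Taylor and his colleagues actually used.

```python
# Hypothetical composite quality score: average several narrower ratings
# (each on a 1-to-5 scale) into a single number per idea.

COMPONENTS = ("novelty", "generality", "effectiveness", "feasibility")

def composite_quality(ratings: dict) -> float:
    """Average the component ratings, assuming equal weights."""
    return sum(ratings[c] for c in COMPONENTS) / len(COMPONENTS)

idea_ratings = {"novelty": 4, "generality": 3, "effectiveness": 5, "feasibility": 2}
print(composite_quality(idea_ratings))  # 3.5
```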

But wait! Even if terms like novelty and feasibility are more narrowly defined than quality, aren’t such criteria still subjective? Aren’t objective facts the only solid basis for scientific generalization? Well, as it turns out, the use of subjective measures is ubiquitous in psychological research. Cognitive scientists don’t elide this problematic issue, either. To the contrary, for any given study, the researchers must establish that the subjective measures are more valid than mere opinions. One method of demonstrating this is through inter-observer reliability, meaning that different observers of the same measurements agree (if not totally, then to a significant extent) on what scores those measurements deserve. Consider the assessments of solution feasibility in the studies of brainstorming; if multiple evaluators independently give the same feasibility scores in ninety percent of cases, then you can have confidence that their judgments converge on some common standard of “feasibility.” However, if the evaluators arrive at wildly different determinations of feasibility –say, with only a fifteen percent overlap in their judgments– then there would be no firm basis for comparing the real groups against the nominal groups.
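A bare-bones way to quantify inter-observer reliability is the simple percent agreement between two raters; a more careful analysis would use a statistic such as Cohen’s kappa, which corrects for the agreement you would expect by chance alone. The sketch below assumes two raters scoring the same eight ideas on a 1-to-5 feasibility scale; the scores are invented.

```python
# Percent agreement and Cohen's kappa for two raters scoring the same ideas.
from collections import Counter

rater_a = [5, 3, 4, 2, 5, 1, 3, 4]  # invented feasibility scores
rater_b = [5, 3, 4, 2, 4, 1, 3, 5]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: the probability that two independent raters with these
# score distributions would happen to give the same score.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum((counts_a[s] / n) * (counts_b[s] / n)
               for s in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```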

In order to test the effectiveness of brainstorming as a creative exercise, the researchers needed to create a standardized creative task. The problems featured in Osborn’s case studies could not be used because those problems were specific to the type of business where the study occurred. It would not make sense, for example, to ask non-engineers to brainstorm creative solutions to an engineering problem. One would expect the quality of their solutions to be poor irrespective of whether they worked in real or nominal groups. An engineering problem presupposes a lot of knowledge and experience that the participants of most psychology experiments simply do not have. Of course this problem isn’t unique to engineering. Any problem that requires specialized knowledge is inappropriate for general experiments. Consequently, researchers needed to invent problems that didn’t demand specialized knowledge. The problems also needed to admit many possible solutions, which could be readily evaluated for feasibility, novelty, generality, etc. Here are abridged versions of the three problems used by Taylor, et al. in their 1958 study:

  1. The Tourist Problem asked, “How can the number of European tourists coming to the U.S. be increased?”
  2. The Thumb Problem asked for a list of the pros and cons that would arise if people had an additional thumb.
  3. The Teacher Problem asked how to ensure continued educational efficacy, given population increases. (Runco, 366)

These particular problems have been recycled in subsequent studies of creativity. If you’d like to read the full version of these three problems –and test your own creativity– follow this link to take my Creativity Test!

I’ve already given you the upshot of this research: group brainstorming doesn’t work. But now, having sketched out the experimental methods, I can state the results more precisely: the nominal groups came up with significantly more solutions than the real groups, and the quality of their solutions was significantly higher. As Taylor wrote, “[t]o the extent that the results of the present experiment can be generalized, it must be concluded that group participation when using brainstorming inhibits creative thinking.” (23)

Taylor and his colleagues used undergraduates as participants. Since then, the ineffectiveness of group brainstorming has been demonstrated in a variety of experimental populations, from students to corporate managers to military strategists. Susan Cain discusses a study that examined the possibility that group brainstorming might prove effective if all the group members were extroverts. The study compared business executives, a population the researchers expected to be extroverted, with research scientists, whose inclinations were expected to tend toward introversion. Both the scientists and the executives performed better as collections of individuals than as a single group, thereby falsifying that hypothesis.

It gets worse. Not only do groups inhibit creativity, but “performance gets worse as group size increases: groups of nine generate fewer and poorer ideas compared to groups of six, which do worse than groups of four.” (Cain 88) The social factors that undermine group brainstorming are cumulative. But what are these pernicious social factors? There are three main culprits:

  1. Social loafing: “in a group, some individuals tend to sit back and let others do the work.”
  2. Production blocking: “only one person can talk or produce an idea at once, while the other group members are forced to sit passively.”
  3. Evaluation apprehension: “the fear of looking stupid in front of one’s peers.” (Cain 89)

In the face of these three social forces, Osborn’s four rules were not really safeguards at all. Of all the sins against rationality one can convict Osborn of, the one I find most egregious is his expectation that explicit rules would be sufficient. How hard would it have been to test this strategy by having some groups receive explicit instructions, and other groups receive null or contrary instructions? Why did he not consult with psychologists before proclaiming that his pet method was effective? Instead, he made an end-run around the scientific process, and we are now living in a world where his counterproductive method is the prototypical creativity exercise. It’s supremely ironic: because of Osborn’s overwhelming enthusiasm for creativity, the world is now a less creative place than it might otherwise be.

This notion of groups performing worse than an equal number of individuals reminds me of a humorous anecdote from Jewish history. As the story goes, 70 Jewish scholars were sequestered in separate rooms and asked to translate the Torah from Hebrew to Greek. Miraculously, every scholar produced the identical translation! However, we shouldn’t be too impressed that all of the scholars independently arrived at the same translation. The real miracle would have been if they had produced the same translation after having been put in the same room.

For many people, the ultimate counterevidence to the experimental failure of group brainstorming would be recent online collaborative enterprises such as Wikipedia, Metafilter, and TV Tropes. This is a reasonable objection that is best addressed by highlighting the psychological differences between in-person and online social interaction. As Susan Cain puts it, “we fail to realize that participating in an online working group is a form of solitude all its own” (89). In my opinion, Cain’s characterization of online collaboration as “a form of solitude all its own” is stretching a precise term like ‘solitude’ too far into metaphor. It would be more precise to say that, in many crucial respects, the cognitive experience of collaborating with thousands of people in cyber-space bears a closer resemblance to solitary thought than it does to an interaction with far fewer colleagues in the physical world. Importantly, the social forces that undermine group brainstorming –social loafing, production blocking, and evaluation apprehension– are exacerbated by cues associated with occupying the same physical space as another person. If these social forces were triggered by the mere presence of another human mind, one would expect no difference between online and in-person collaboration. However, the human mind evolved in a context where all interactions were visceral interactions; our ancestors never had to cope with community archiving projects, social networks, or massive multiplayer online games. Insofar as modern circumstances differ from the ancestral environment, we should expect a mismatch between our intuition and our expectation of how a rational actor ought to behave. As Susan Cain writes, the real problem arises when “we assume that the success of online collaborations will be replicated in the face-to-face world.” (89) Here, as elsewhere, evolutionary science points the way to enlightened personal beliefs and public policies.

It is worth noting that the more sophisticated modern advocates of “brainstorming” generally do not espouse Alex Osborn’s original rules. Indeed, “brainstorming” now refers to “not just one tactic but a method for divergent thinking in groups” (Runco 365; my italics). These modern variations on group brainstorming have been evaluated, and they, too, have been shown to be less conducive to creativity than allowing individuals to devise creative solutions in isolation. Since Donald Taylor’s 1958 study debunking group brainstorming, “[d]ozens or even hundreds of other studies have found much the same” (Runco 366), including a 1991 meta-analysis. In spite of the counterevidence, Osborn’s original methodology is still being used by business consultants and teachers. In part, this is a reflection of being uninformed about the empirical failure of group brainstorming. But it is also emblematic of our culture’s preference for extraversion over introversion, which Cain calls the extrovert ideal.

Given the lack of evidence for group brainstorming, its continued popularity is hard to fathom. According to Cain, the most compelling explanation is emotional. “Participants in brainstorming sessions usually believe that their group performed much better than it actually did, which points to a valuable reason for their continued popularity—group brainstorming makes people feel attached,” which is “a worthy goal, so long as we understand that social glue, as opposed to creativity, is the principal benefit.” (Cain 89)

To call brainstorming a “worthy goal” whose “principal benefit” is “social glue” is to damn it with faint praise. Although I agree that people enjoy group brainstorming mainly because of social harmony, this is somewhat distinct from the impression people get that it stimulates creativity. This perception of effectiveness is real, even if it is ultimately the result of a cognitive illusion born of a failure to imagine how much better the results would have been if the group members had pooled their results after having spent an equal amount of time working in isolation.

And yet, the evidence that brainstorming fosters camaraderie is undeniable. Isn’t that reason enough to practice brainstorming? That depends, but it’s worth noting that if your principal defense of group brainstorming consists of pointing to one of its positive byproducts, you have all but conceded that brainstorming doesn’t achieve its stated goal of enhancing creativity. After all, if the evidence supported it, why not make that the centerpiece of your argument?

Furthermore, social psychologists have shown that there are alternative ways to trigger the cognitive processes that underlie interpersonal bonding without sacrificing creativity. Moreover, positive feelings toward another person can be promoted by arbitrary and trivial stimuli. For instance, in one highly cited study, researchers showed that tapping your finger in synchrony with a stranger engendered a sense of social affiliation, more so than if the stranger tapped asynchronously or not at all (Hove & Risen, 2009). If one can foster positive social emotions through simple activities such as tapping fingers, swaying in unison, or eating communally, why would you continue to use brainstorming, which is tantamount to squandering the creative potential of your team? To place creativity in opposition to group harmony is to construct a false dilemma in which group brainstorming is the only way to maintain both creativity and social harmony. (If you disagree, I encourage you to find a clock and spend five minutes thinking up viable alternatives.) But it is entirely possible –and indeed optimal– to make your employees feel attached while also maximizing their creative potential. To quote the psychologist Adrian Furnham, “[i]f you have talented and motivated people, they should be encouraged to work alone when creativity or efficiency is the highest priority” (Cain 88-89).

In spite of all their shortcomings, group creative efforts do have the potential to circumvent individual bias. When people feel pressure to distinguish themselves by generating useful ideas, it is in each individual’s self-interest to spot the absurdities, false assumptions, and possible consequences of their co-workers’ ideas. In short, the group’s ethic would need to be somewhat adversarial. Brainstorming, by contrast, was designed to be as non-adversarial as possible (“Criticism is ruled out. Adverse judgment of ideas must be withheld until later.”). Clearly, there is a sweet spot on the competitiveness scale: adversarial but not ruthlessly competitive.

When I wrote about Eliezer Yudkowsky’s five-minute Timer Task, I introduced it as the first creative exercise I had ever encountered that was explicitly geared toward promoting rationality. This, it turns out, was not true. While compiling notes for this essay, I came across a passage from Daniel Kahneman’s masterpiece, Thinking, Fast and Slow, that explored a creative exercise known as the premortem:

The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, [the psychologist Gary] Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.” (Kahneman 264)

Whereas a postmortem is an inquiry into why something failed after it has already failed, the premortem asks us to imagine that the project has already failed, and the task is to explain (i.e. speculate on) what went wrong. Kahneman explains that “[t]he premortem has two main advantages: it overcomes the groupthink that affects many teams once a decision appears to have been made, and it unleashes the imagination of knowledgeable individuals in a much-needed direction.” (264) Our cognitive stance tends toward unrealistic optimism. Pessimism is so anathema to the human mind that people’s “worst case scenarios” are usually slightly better than what eventually happens. The premortem encourages maximum pessimism. In this respect, the premortem is a bit like writing a dystopian story about the worst case scenario of your plan. For instance, is there a possibility that your company’s latest kitchen appliance will lead to a zombie apocalypse? If a member of the group can articulate a compelling scenario for how this might happen, the premortem would have succeeded in averting eldritch horror, civilizational collapse, and the decline in your company’s market value.

Unlike Yudkowsky’s Timer Task –but like group brainstorming– the premortem is a group exercise. Although the effectiveness of the premortem must ultimately await empirical verification, there are strong theoretical reasons for supposing that the social component might actually enhance the creativity of the participants by “encourag[ing] even supporters of the decision to search for possible threats that they had not considered earlier” (Kahneman 266).

Brainstorming fostered an expectation about the sort of environment that is most conducive to creativity (i.e. a participatory group). More subtly, however, it created an expectation for the kind of problem that is amenable to deliberate creativity. The typical brainstorming problem is one that has a huge pool of possible answers, all of which are underdeveloped at their inception but can be elaborated upon. Business strategies fall into this category of problem. There are some creative problems that defy group brainstorming. One of the unacknowledged casualties of brainstorming’s popularity has been the application of creativity to our personal problems. This takes us back to Yudkowsky’s creative exercise, the Timer Task, which I praised for channeling creativity into circumventing a specific cognitive bias, the false dilemma. But creativity is rarely put forward as a solution for general personal distress. Some personal issues cannot be attributed to insufficient rationality. One such domain is mindfulness. To be sure, rationality and mindfulness share certain values, and practicing one can improve your performance in the other. Yet there is still a facet of mindfulness that is profoundly difficult to capture in rational terms. Even for the procedural practice most associated with mindfulness, meditation, there comes a point beyond which your success isn’t a matter of knowing more, but of using your attention differently. That capacity to induce your mind into a more psychologically desirable state arguably involves creativity, but not the same kind as brainstorming or the Timer Task.

Brainstorming is a metaphor that says a lot about our perception of what happens in the mind when we engage in creative thought. The implication seems to be that being creative requires deviating from a calm, rational, orderly state of mind. In this view, creativity is a sort of salutary chaos that shakes up ossified patterns of thought. An alternative interpretation is that brainstorming involves tapping into a latent “storm of creativity” that exists below the surface of our awareness. Truly, anyone who has ever tried to meditate can attest to the tempestuousness of our mental baseline. And unlike the whimsical ‘tempest in a teapot,’ a storm is vast, chaotic, and unknowable. My subjective experience of meditating literally includes snatches of language floating unbidden into my visual field, not unlike debris tossed around by a storm. Despite my misgivings about group brainstorming as a creativity-enhancement technique, I find the underlying metaphor of creativity as a storm to be quite captivating.

I wondered, however, whether there were other, potentially better, metaphors for creative thought. In fact, I went and set a timer for five minutes, and tried to think of creative alternatives. Here are the only two viable options I came up with:

  • Imagine that you are standing on the ocean shore, watching the tide go in and out. The incoming tide represents new ideas floating into your head. When the ideas are in mind, it is as though they are objects floating beside your feet. If you think the idea is promising, you keep it. And if you dislike the idea, let the tide flush it away. This metaphor is compatible with meditation techniques that emphasize the breath, which flows inward and outward like a tide.
  • Searching your mind for creative solutions is analogous to digging through a junkyard, and setting aside the useful items you find. (If wading through garbage doesn’t suit your fancy, imagine doing the same thing in an arcade-style claw game.) As with junk, the vast majority of ideas you find will be useless, making it necessary to develop a systematic search process.

If you have any suggestions for alternative metaphors for creative thought, let me know in the comments. Also, for those of you who are familiar with other languages, what metaphors does your language use for creativity?

 

References:

brainstormer. (n.d.). Random House Unabridged. Retrieved September 04, 2015, from Dictionary.com website: http://dictionary.reference.com/browse/brainstormer

brainstormer. (n.d.). The Dictionary of American Slang. Retrieved September 04, 2015, from Dictionary.com website: http://dictionary.reference.com/browse/brainstormer

Cain, S. (2012). Quiet: The power of introverts in a world that can’t stop talking. New York: Crown.

Csikszentmihalyi, M. (1996). Creativity: Flow and the Psychology of Discovery and Invention. Harper Collins.

Hove, M. J., & Risen, J. L. (2009). It’s All in the Timing: Interpersonal Synchrony Increases Affiliation. Social Cognition, 27(6), 949–960.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus, and Giroux.

Kaufman, S. (2012, January 14). After the Show: The Many Faces of the Performer. Retrieved from http://www.creativitypost.com/psychology/after_the_show_the_many_faces_of_the_performer

Landrum, G. (2004). Entrepreneurial Genius: The Power of Passion. Brendan Kelly Publishing.

Runco, M. (2014). Creativity: Theories and Themes: Research, Development, and Practice. Amsterdam: Elsevier Academic Press.

Taylor, D., Berry, P., & Block, C. (1958). Does Group Participation When Using Brainstorming Facilitate or Inhibit Creative Thinking? Administrative Science Quarterly, 3(1), 23-47.

Yudkowsky, E. (2007, May 6). The Third Alternative. Retrieved from http://lesswrong.com/lw/hu/the_third_alternative/

“Complete Biosynthesis of Opioids in Yeast,” by Galanie, et al. (2015): Discussion

In the latest issue of the journal Science, a group of synthetic biologists reported that they had engineered a strain of yeast that could synthesize the opiate molecule thebaine, a precursor to many opiate drugs. The paper, “Complete Biosynthesis of Opioids in Yeast,” was written by Stephanie Galanie, working in the laboratory of Christina Smolke at Stanford University. This report marks not only an achievement in genetic engineering, but also a potential breakthrough in biotechnology. If there were a way to synthesize thebaine and related opiates directly, without cultivating fields upon fields of poppy plants, it would be a boon to the 5.5 billion people who, according to the World Health Organization, have “low to nonexistent access to treatment for moderate or severe pain” (quoted in Galanie, et al., 2015).

The opiates comprise a broad class of drugs that mimic endogenous opioid molecules in the brain. Opiates can relieve pain, induce euphoria, engender addiction, end life painlessly, or even block the action of other opiates. The only natural source of opiates like thebaine is the opium poppy (Papaver somniferum), from which opiates get their name. Although there are non-biological methods of synthesizing opiates, most opiates are still derived from poppies. Despite the risks and inefficiencies of agriculture, “chemical synthesis of these complex molecules is not commercially competitive.” (Galanie, et al., 2015) When the poppy straw is harvested, there are two principal extracts: morphine and thebaine. Morphine is valuable in its extracted form, while thebaine is medically valuable only after it has been processed into other potent opiates, such as oxycodone (OxyContin, Percocet), naltrexone (Revia), and buprenorphine (Suboxone).

Depressingly, most news outlets have approached this story from the angle of prescription drug abuse. Many commentators have even insinuated that this discovery portends a future in which most of the populace is stupefied by narcotics. If it is easy to imagine such a world, it is only because we have been exposed to countless dystopian narratives that depict a future in which cutting-edge technology has enabled addiction. But we need to avoid the fallacy of generalizing from fictional evidence. We need to weigh such hypothetical scenarios against the real, ongoing, and preventable pain of the 5.5 billion people who lack access to opiate medications. Moreover, these sensationalistic stories distract us from more pressing concerns about safety in biomedical research. For instance, there are experimental epidemiologists who are intentionally engineering “gain-of-function” mutations into the avian flu virus, transforming it into a virus that is transmissible between mammals. On the one hand, virulent samples could escape the lab and cause an epidemic. But, on the other hand, if we understood the mutations that enable viruses to jump between species, we’d be able to react more effectively in the event of a natural epidemic. In any case, this is research that genuinely deserves public attention and concern.

This paper by Galanie, et al., is of particular interest to me because its subject combines elements from both of the laboratories I’ve worked for. Formerly, I worked in a neuroscience lab that used opiate drugs to understand how the endogenous opioid system underlies reward-based learning. And now, my current lab makes use of transgenic mouse models, which have been endowed with genetic constructs that enable us to visualize aspects of their neurobiology.

Interestingly, my current institution, Cold Spring Harbor Laboratory, has deep roots in the molecular biology that made synthetic biology possible, though the discovery of recombinant DNA itself happened elsewhere. According to most historical accounts, recombinant DNA techniques were pioneered by Stanley Cohen and Herbert Boyer in the early 1970s. Working at Stanford and UCSF, they and their colleagues demonstrated that DNA from one organism could be stably inserted into a different organism. These discoveries paved the way for genetically modified organisms, the Human Genome Project, and transgenic mice.

In a series of famous experiments, Cohen and his collaborators used restriction enzymes to create a strain of E. coli that expressed DNA from two separate strains of the bacterium. Each of the two original strains contained a plasmid (a circular segment of bacterial DNA that codes for nonessential functions; see Fig. 1) that conferred resistance to one of two antibiotics. Through wholly synthetic methods, they engineered a strain of E. coli that possessed a fusion plasmid that made it resistant to both antibiotics. In this context, ‘recombination’ refers to the incorporation of exogenous genetic material into an organism under artificial conditions (hence the term ‘recombinant DNA’).

Figure 1. Plasmids are circular loops of DNA inside a bacterium. (Wikimedia Commons)

Stanley Cohen and his colleague, Herbert Boyer, went on to demonstrate that recombination was possible between two different species of bacteria. Some critics suggested that DNA introduced in an artificial manner was unstable. In response, Cohen and Boyer showed that recombinant genes were maintained through hundreds of replication cycles. With hindsight, the idea that the host bacterium would have a mechanism for distinguishing its original DNA from the recombinant DNA seems ridiculous. It is redolent of essentialism, the discredited idea that every species has a unique, non-physical essence that would render its DNA incompatible with that of another species. It is now universally acknowledged in biology that there is no life essence. Moreover, the interchangeability of DNA is a logical consequence of the fact that all extant life evolved from a common ancestor that used DNA as its genetic code.

We now know that recombination is not a quirky thing that only happens in contrived laboratory settings. It happens frequently in nature. In technical terms, recombination is a subcategory of a broader type of genetic exchange called horizontal gene transfer. Unlike vertical gene transfer –the familiar transfer of genes from parent to offspring– horizontal gene transfer is the exchange of genetic material in ways other than sexual or asexual reproduction. (Counterintuitively, it is also possible to have gene transfer that is neither vertical nor horizontal, but within an organism’s own genome. In bacteria and plants, there are segments of DNA called transposons [“jumping genes”] that shuffle around the genome via a cut-and-paste mechanism.) In contrast to the subtle instances of recombination in Stanley Cohen’s experiments, natural instances of horizontal gene transfer can be quite dramatic. According to the endosymbiotic theory, the cellular organelles known as mitochondria and chloroplasts were once free-living bacteria that became endosymbionts (“symbiotic organisms that live inside”). Insofar as they reproduce autonomously and encode the means of their own replication, the plasmids inside bacteria function more like endosymbionts than native bacterial DNA. (Fig. 1) For retroviruses such as HIV, recombination is a replication strategy. These retroviruses convert themselves into DNA and insinuate themselves into the host’s genome; the ultimate aim of these viruses is to leave the genome and infect other hosts. Occasionally, however, the viral DNA will mutate to such a degree that it can no longer escape the host’s genome. If this mutated viral DNA occurs in the germ line (the sperm and ova, in humans), the virus-turned-DNA will be present in the offspring, and propagate across generations like the fusion plasmid in Stanley Cohen’s E. coli experiments. Stunning evidence for the stability of this recombinant viral DNA comes from the myriad “fossil viruses” that populate our genome. Carl Zimmer writes:

“Scientists have identified 100,000 pieces of retrovirus DNA in our genes, making up eight percent of the human genome. That’s a huge portion of our DNA when you consider that protein coding genes make up just over one percent of the genome.”

The pioneers of recombinant DNA techniques were commendable not only for their scholarship, but also for their entrepreneurship. In 1976, Boyer co-founded the biotechnology company Genentech with the venture capitalist Robert Swanson, and the company applied recombinant DNA techniques to a suite of biological problems, such as cancer and diabetes. The enterprise was wildly successful, and, in 2009, the Swiss pharmaceutical giant Roche purchased Genentech for $46.8 billion.

Additionally, the founders of recombinant DNA techniques have been praised for their ethical stewardship of the technology they brought into the world. Concerned about possible biohazards, Paul Berg organized the 1975 Asilomar Conference on Recombinant DNA to develop sensible guidelines for further biotechnology research. The consensus among the conference attendees was that the risks were high, and that research should proceed cautiously. The rules agreed upon at Asilomar were adopted by the NIH in 1976. While the Asilomar Conference is generally viewed as a success, some scholars regard these guidelines as overly conservative. Commenting on a recent controversy in bioethics, Steven Pinker wrote:

Though the Asilomar recommendations have long been a source of self-congratulation among scientists, they were opposed by a number of geneticists at the time, who correctly argued that they were an overreaction which would needlessly encumber and delay important research.

Another reason scholars like Pinker criticize the Asilomar framework is that its “excess of caution” approach is founded on an invalid premise: one cannot predict the pace of animal research by extrapolating from progress already made in bacteria. While it is relatively easy to induce recombination in bacteria, the process of engineering recombinant DNA into animals is considerably more difficult. Bacterial DNA is far more accessible, and hence easier to manipulate. Because bacteria are unicellular, a recombination event between two bacteria is a much more salient event: one or both of the participating bacteria come away with a new genome. For a multicellular organism, like a mouse or a dandelion, such a wholesale change in genome is impossible; it would involve transforming billions of cells at once. However, all multicellular organisms have a phase in their reproductive cycle in which they are a single cell. It is at this unicellular stage that an organism is susceptible to wholesale genetic modification. In humans, this cell is called a zygote, and is the result of the fusion of the sperm and egg. If a transgene (another term for recombinant DNA) were inserted into the zygote’s genome, that embryo would develop as though the transgene had always been present. Consider the case of a zygote with two defective copies of CFTR, the gene responsible for cystic fibrosis. A transgene that substituted the healthy version of CFTR in place of one of the defective copies would spell the difference between a healthy adult and one with cystic fibrosis.

Although transgenic techniques are rarely applied to humans, transgenic animals are a staple of modern biomedical research. Mice and fruit flies with recombinant DNA have increased our understanding of development, disease, and behavior. In spite of this progress, the process of transforming an animal’s genome remains complex, expensive, time-consuming, laborious, and imprecise. As with humans, the principal impediment stems from the fact that transforming animals requires germ line manipulation. Because of this, the model organism of choice for cell biologists has traditionally been the yeast Saccharomyces cerevisiae (Fig. 2). This is the same species we use in baking and alcohol fermentation. Yeast are particularly useful in cell biology and cellular biochemistry because they combine the convenience of bacteria with the representativeness of eukaryotes. Humans, plants, and fungi are all eukaryotes, meaning that they have a nucleus, linear chromosomes, and a generally high degree of shared biochemistry. Yeast are fungi, and since fungi are eukaryotes, it is often valid to generalize from yeast to humans. Certainly it is more valid than generalizing from bacteria to humans. In other ways, yeast behave more like bacteria. Unlike most eukaryotes, yeast are single-celled. It is easier to access, isolate, and manipulate individual cells. Yeast further resemble bacteria because of their ability to undergo asexual reproduction. Although yeast cannot replicate themselves quite as rapidly as bacteria, asexual reproduction simplifies the task of maintaining stable laboratory cultures.

Figure 2. Yeast cells viewed under a microscope. (Wikimedia Commons)

If the ultimate goal of biomedical research is to apply our knowledge to humans, why not study the biochemistry of human cells? In general, the petri dish is an inhospitable environment for human tissue. If it is necessary to study animal cells, the next best option is insect cells, which are more viable in culture. The exception to the rule that human cells cannot thrive in culture is cancer cells. In cancer cells, the feedback mechanisms that regulate cell division have been compromised, resulting in wild proliferation irrespective of the environment. Unfortunately, the same mutations that make cancer cells so fecund in a petri dish also disqualify them as research subjects. The biochemistry of a cancer cell is too disordered and anomalous to be representative. In many cases, healthy insect cells are a better guide to human biology than human cancer cells. For a vivid illustration of just how unrepresentative cancer cells can be, compare the karyotypes (profiles of chromosomes) of a normal human cell and a HeLa cancer cell (Fig. 3). The HeLa cell has eleven additional chromosomes, and is missing five others, including both copies of Chromosome 13.

Figure 3. The karyotypes of normal human cells and HeLa cancer cells (Berkeley Science Review)

(Note: When I refer to the article under consideration, I’ll refer to “Galanie, et al.” When I’m referring to the general research program coordinated by Christina Smolke, which pre-dates this paper, I will refer to “Smolke” or “Smolke’s research.”)

Galanie, et al., apply techniques from yeast biochemistry and genetic engineering. By the end, the thebaine-producing yeast developed by Christina Smolke and her colleagues contained genes from four plants, an animal, and a bacterium. The yeast’s native genes were also altered. In total, 21 enzyme-encoding genes were engineered into the yeast. (Fig. 4)

Figure 4. Summary of the species whose genes were engineered into the yeast in Galanie, et al. (2015). (Credit: Robert F. Service)

In the past several years, synthetic biologists have recapitulated various stages of thebaine biosynthesis in yeast, but Smolke is the first to have achieved “complete biosynthesis.” This is not to say, however, that Smolke and her collaborators merely fit together pre-fabricated pieces of a puzzle. Although certain steps had been previously worked out, unifying all the steps in a single organism required expertise, ingenuity, and perseverance in the face of endless technical challenges.

The general approach taken by Galanie, et al., consisted of working out the basic steps, and then optimizing each step. At every step, it was possible to observe the impact of their manipulation: did the yeast produce the next product in the pathway? (Fig. 5) If not, why not? If so, how would one tweak the process so that it produces more of the desired product? The complexity of this project was so much greater than earlier achievements in yeast-based biosynthesis that innovation was crucial. The chronicle of the experiment really highlights the “engineering” part of genetic engineering. There was tinkering, trial-and-error, and troubleshooting. By analogy to other unprecedented biotechnology initiatives (the Human Genome Project, the reconstruction of the woolly mammoth genome, etc.), it is reasonable to suppose that some of the techniques that seem awkward and laborious will become optimized and standardized as the technology matures.

Figure 5. The biosynthetic pathway for thebaine production in yeast. The bolded words are genes that have been inserted into the yeast’s genome. (from Galanie, et al., Figure 1A)

The most vexing technical challenge arose from an unexpected interaction between the yeast’s native cellular milieu and one of the plant proteins that the yeast was made to express. How they solved this problem was the most fascinating part of the paper.

The enzyme salutaridine synthase (SalSyn) is native to the opium poppy, where it catalyzes one of the steps in the opiate production pathway. In the opium poppy, SalSyn is always processed correctly, with the active sites responsible for catalyzing the conversion, located toward the C-terminus, facing outward into the cytosol. (Fig. 6, left side) When the SalSyn gene was imported into yeast, the protein was processed incorrectly, with the C-terminus facing inward, into the endoplasmic reticulum. Moreover, the upside-down SalSyn was glycosylated. Glycosylation is a process by which sugar-like chains are tacked onto the developing protein. Normally, the glyco-tags function as shipping labels that ensure the protein gets delivered to its proper cellular destination. In the context of engineering a yeast that synthesizes opiates, however, glycosylation became a problem, because the sugar-tags blocked SalSyn’s active sites. (Fig. 6, center)

Removing the glycosylation sites wasn’t viable, because it made SalSyn less efficient. Instead, the solution involved engineering a chimera protein. Smolke and her colleagues used a plant genome database to search for a plant protein that was sufficiently similar to SalSyn that it would catalyze the same chemical reaction, but not so similar that it would be glycosylated or inserted upside-down. They ended up inserting a protein that had a membrane component from a poppy plant, and an active component from a Goldthread flower. (Fig. 6, right side)

Figure 6. The correctly processed SalSyn protein (Left), the incorrectly processed and glycosylated SalSyn protein (Center), and the engineered SalSyn fusion protein (Right). (From Galanie, et al., Fig. 3A)

Although Smolke and her colleagues deserve praise for finding a suitable plant protein in the database and engineering a functional fusion protein, they were, ultimately, lucky that there existed a plant protein that satisfied their needs. There is currently no way to predict whether a protein like SalSyn will have an adverse interaction when expressed in an organism that doesn’t naturally manufacture it. How synthetic biologists resolve such cross-species interactions will determine the future of genetic engineering.

When engineering biomolecules, it is important to consider the handedness of those molecules. Most biological molecules are like a human hand: asymmetric. And, like a hand, every biomolecule has a particular handedness. A biomolecule and its counterpart with the opposite handedness are called stereoisomers. One of the principal reasons why biosynthesis of molecules is often preferable to standard chemical synthesis is that biosynthesis produces molecules with the optimal handedness for affecting our bodies.

According to the Curie Principle, asymmetric effects can only arise from asymmetric causes. Biosynthesis is an asymmetric process, but synthesis by standard industrial chemistry is not. In industrial chemistry, synthesizing biomolecules involves finding a similar molecule and then modifying it step by step until it matches the desired biomolecule. Unlike biological enzymes, standard laboratory chemicals catalyze reactions symmetrically, with no bias toward left-handed or right-handed variants. As a result, the final product of industrial synthesis is racemic, containing 50 percent right-handed molecules and 50 percent left-handed molecules.
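One way to make that fifty-fifty point quantitative is the enantiomeric excess (ee), a standard measure from stereochemistry that the paper itself doesn’t invoke: a racemic mixture has an ee of zero, while a perfectly single-handed product has an ee of 100 percent.

$$\mathrm{ee} = \frac{\lvert [R] - [S] \rvert}{[R] + [S]} \times 100\%$$

where $[R]$ and $[S]$ are the amounts of the right- and left-handed forms in the mixture.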

Sometimes, the other-handed version of a biomolecule is merely inactive. One treatment for Parkinson’s Disease involves administering DOPA, the precursor to dopamine, which then acts on the patient’s brain. However, only one stereoisomer, (L)-DOPA, is biologically active. While a racemic mixture of (L)-DOPA and its counterpart, (D)-DOPA, is not medically dangerous, it would nonetheless be better to have a nonracemic solution containing only (L)-DOPA.

An even more benign example of how two stereoisomers of the same compound can have different biological effects is the organic oil carvone: its right-handed form smells like spearmint, while its left-handed form smells like caraway.

The textbook example of the tragedy that can follow from ignoring the difference between a stereoisomer and its mirror counterpart is the drug thalidomide. In the early 1950s, thalidomide was produced by an industrial process that resulted in a racemic mixture, with a fifty-fifty ratio of thalidomide’s two stereoisomers. Thalidomide was shown to be safe and effective in adult populations, and marketed as a treatment for nausea. In particular, it was widely prescribed as a remedy for morning sickness. Unfortunately, the manufacturers of thalidomide neglected to test their drug in pregnant women. One of the stereoisomers did indeed relieve morning sickness, but the other produced devastating birth defects. According to the U.S. Food and Drug Administration, “[i]n the late 1950s and early 1960s, more than 10,000 children in 46 countries were born with deformities…as a consequence of thalidomide use.”

In order to optimize each step of the biosynthetic pathway, Smolke and her team needed not only a means to detect the presence of a particular biomolecule, but also a way to measure its precise quantity. For this, they used liquid chromatography–mass spectrometry (LC-MS), a technique for detecting and quantifying a specific molecule within a complex mixture. Standard LC-MS, however, was not sufficient, because that technique has no way of distinguishing between a biomolecule and its stereoisomer. Considering that one of the steps in thebaine biosynthesis is the conversion of (S)-reticuline to its stereoisomer, (R)-reticuline, the inability of standard LC-MS to distinguish the handedness of biomolecules was a significant obstacle. Therefore, Smolke relied on a more sophisticated analytical technique, chiral LC-MS, which (as the name suggests) takes the chirality of the constituent molecules into account.

In most cases in which you are trying to synthesize a biomolecule, you are trying to synthesize the naturally occurring stereoisomer, such as (L)-DOPA. The naturally occurring stereoisomer is preferred because the other one is either inactive or toxic. But what if the unnatural stereoisomer were, in fact, better than the naturally occurring version? It would be as if (D)-DOPA did a better job of relieving parkinsonian symptoms than (L)-DOPA. In his outstanding book, Right Hand, Left Hand, the psychologist Chris McManus discusses an example of other-handed molecules outperforming their naturally occurring counterparts. Certain animals synthesize reversed versions of ubiquitous biomolecules, for use as toxins against would-be predators. But what is a toxin to one species might be a mind-expanding agent for another. Humans have a long history of exploiting plant and animal toxins for medicinal or recreational purposes. Nicotine, for example, is a neurotoxin meant to deter insects from consuming tobacco plants. McManus highlights a poisonous frog species that synthesizes dermorphin and deltorphin, opioid peptides built with “wrong-handed” (D-) amino acids, which act on the same receptors as the naturally occurring opioids morphine and enkephalin:

“Dermorphins and deltorphins are opioid peptides because they act on the brain in the same way as natural opiates such as morphine and heroin. In fact, weight for weight, dermorphin is a thousand times more potent than morphine, and ten thousand times more potent than the proper neurotransmitter in the brain, enkephalin. From the dermorphins and deltorphins may well come morphine substitutes that are potent pain-killers but also non-addictive and without the side effects of sedation and gastro-intestinal stasis. Of course there is also the possibility of new designer drugs to feed abuse, the juice of the poppy being replaced with a simple peptide.” (McManus 134)

If McManus’s speculation about the therapeutic value of these other-handed opioids turns out to be correct, there is no reason in principle why synthetic biologists like Smolke couldn’t engineer the frog’s machinery for dermorphin and deltorphin production into yeast as well. Indeed, a yeast platform that already synthesizes complex opioids would seem a natural starting point for such a project.

The experiment recounted in Galanie, et al., is known as a proof of principle. In this kind of experiment, it is necessary only to demonstrate that your method (e.g. yeast-based opiate biosynthesis) is feasible, rather than safe, moral, or economically viable. Even though proof-of-principle experiments carry a reduced burden of proof, Galanie, et al., nevertheless defend the safety, ethics, and economic value of their research.

Safety: Smolke and her colleagues ensured the safety of their laboratory by consulting with the D.E.A. before, during, and after the project. The lab kept meticulous records of the opiates it produced, and all samples were destroyed after testing. The lab members submitted to background checks, presumably to uncover any history of drug abuse. Furthermore, the yeast were modified such that they could only grow on a particular medium, meaning that if someone were to steal a sample of the opiate-producing yeast, the yeast would die in the absence of this necessary substrate.

Ethics: According to the World Health Organization, there are 5.5 billion people who have “low to nonexistent access to treatment for moderate or severe pain.” It is scandalous that anyone has to endure needless suffering, much less a majority of human beings. Most pet-owners in the United States would be furious if their veterinarian didn’t have enough pain medication when their pet needed surgery. We should feel just as much compassion for the billions of people whose children and elderly parents are suffering because pain-killing drugs are unavailable or too expensive. Since poppy farming is not meeting the global demand, it is a moral imperative to find alternative sources of opiates. If synthetic biologists like Smolke are able to scale up opiate synthesis in yeast, it could potentially relieve the suffering of billions of people.

Economic Viability: Having a yeast capable of thebaine biosynthesis is significant because only a few additional genes are necessary to convert thebaine into hydrocodone (the opioid in some of the most widely prescribed painkillers in the United States). A yeast that could synthesize hydrocodone directly would supplant not only agricultural production of thebaine, but also the industrial chemistry that converts raw thebaine into hydrocodone. In fact, the authors did generate a yeast that produced hydrocodone, though they emphasize that they didn’t optimize it for efficiency. This relatively slight modification would vastly increase the economic viability of yeast-based opiate biosynthesis.

Just as Herbert Boyer (who, with Stanley Cohen and Paul Berg, pioneered recombinant DNA techniques) co-founded Genentech to profit from that technology, Christina Smolke has recently founded her own company, Antheia, which seeks to push yeast-based opioid synthesis toward commercial viability. It is reasonable to expect that many investors will be eager to fund Smolke’s start-up, along with many researchers eager to lend their talents to her upcoming projects. Even so, significant challenges lie ahead. For “yeast-based production of opioids to be a feasible alternative to poppy farming,” it will require “over a 100,000-fold improvement” in efficiency (Galanie, et al., 2015).

Is this degree of improvement achievable? One cause for optimism is the comparable improvement achieved in artemisinin-producing yeast strains. Artemisinin is an antimalarial drug that –like thebaine– was previously produced only from plant sources. And, like thebaine, the first yeast engineered for artemisinin production was orders of magnitude less productive than agricultural sources. But, within a few years, “researchers boosted the output of the artemisinin-making yeast by a similar amount.” (Service, 2015) Today, yeast-based production accounts for about one-third of the global artemisinin supply. It is important to note, however, that the engineered artemisinin pathway involves only 3-6 genes, whereas thebaine biosynthesis requires 21. This difference suggests that progress in yeast-based thebaine biosynthesis might proceed at a slower pace than it did for artemisinin.

In his fascinating book, Superintelligence, the philosopher Nick Bostrom provides a theoretical framework for predicting the rate of improvement of any given technology. Although Bostrom applies this model to machine intelligence, its explanatory range is more general. For any technology, the rate of improvement is proportional to optimization power and inversely proportional to system recalcitrance. Figure 7 depicts this relation as a mathematical ratio. Optimization power refers to the effort being applied to a problem. We can suppose that Smolke and her new company will be working very hard on this problem, though the overall progression of the technology will be limited by the fact that only one group has the proprietary privileges and technical know-how to synthesize opiates from yeast. The Human Genome Project, by contrast, was a multinational collaboration among at least twenty scientific institutions. System recalcitrance refers to how resistant the system is to improvement through additional effort. We might describe a system with low recalcitrance as having “low-hanging fruit.” Conversely, when a system is highly recalcitrant, further investment yields “diminishing returns.” If the analogy to artemisinin-producing yeast is apposite, then it is reasonable to expect substantial progress in yeast-based opiate synthesis within several years.
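Written out as an equation (my own rendering of the relation described above, not Bostrom’s exact notation), the ratio that Figure 7 depicts is:

$$\text{Rate of improvement} = \frac{\text{Optimization power}}{\text{Recalcitrance}}$$

The larger the recalcitrance in the denominator, the more optimization power it takes to achieve the same rate of progress.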

Figure 7. Nick Bostrom’s framework for predicting the development of a particular technology.

After finishing Smolke’s paper, I wondered whether subsequent optimization necessarily needs to come from human ingenuity. Why couldn’t opiate output be enhanced through selective breeding? Humanity has a long history of radically reshaping plants and animals through selective breeding. Although yeast are valued as model organisms for their asexual reproduction, they are fully capable of reproducing sexually. It would therefore be possible to breed generations of yeast and select for high thebaine output.

Perhaps the trajectory of the opiate-producing yeast would resemble that of the corn plants that were bred for high oil content. In a dramatic illustration of the power of artificial selection to boost a particular trait, the corn’s oil yield quadrupled in fewer than eighty generations (Fig. 8).
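For a rough sense of scale (a back-of-the-envelope calculation of mine, not a figure from Boyd and Silk), a quadrupling of oil content spread over eighty generations corresponds to an average compounded gain of

$$4^{1/80} \approx 1.017,$$

or a little under two percent per generation –a modest per-generation improvement that nonetheless accumulates dramatically.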

Figure 8. The average oil content of corn selectively bred for high oil yield across 80 generations. (From Boyd and Silk, How Humans Evolved: Second Edition. [New York: W.W. Norton and Company, Inc., 2000] p. 74; retrieved from PBS)

However, the corn example might be inapposite, because selective breeding requires genetic variation. The reason those corn plants could increase in oil yield over so many years was not that the plants were evolving new genes, but that the selection process was concentrating the genes that increased oil yield while winnowing out the plants that lacked them. At some point, the corn plants will stop increasing in oil content. When that happens, it will not necessarily be because the oil content has begun to compromise the corn’s ability to survive, but because every corn plant will be genetically invariant with respect to its oil-related genes. Once the population is uniform at every locus the selection criterion can act on, further breeding is akin to breeding identical clones. Smolke’s yeast are also, for all intents and purposes, identical clones. Unlike a natural population, they simply have no diversity for selection to winnow down.
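To make the point concrete, here is a toy simulation (entirely my own illustration, with made-up parameters: twenty loci of equal effect, truncation selection of the top twenty percent, no new mutation) of why a genetically variable population responds to selection while a clonal one does not:

```python
import random

# Toy model of truncation selection on a quantitative trait (e.g., thebaine output).
# Parameters are invented for illustration: 20 loci of equal effect, the top 20% of
# each generation is bred, and no new mutation arises.

def breed(parents, n_offspring):
    """Produce offspring by recombining pairs of parents, locus by locus."""
    offspring = []
    for _ in range(n_offspring):
        mom, dad = random.sample(parents, 2)
        offspring.append([random.choice(pair) for pair in zip(mom, dad)])
    return offspring

def simulate(generations=40, pop_size=200, n_loci=20, clonal=False):
    if clonal:
        # An isogenic (clonal) starting population: every individual is identical.
        population = [[1] * (n_loci // 2) + [0] * (n_loci // 2) for _ in range(pop_size)]
    else:
        # A genetically variable starting population.
        population = [[random.randint(0, 1) for _ in range(n_loci)]
                      for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=sum, reverse=True)
        parents = population[: pop_size // 5]      # truncation selection: keep the top 20%
        population = breed(parents, pop_size)
    return sum(map(sum, population)) / pop_size    # mean trait value after selection

random.seed(1)
print("variable start:", simulate(clonal=False))   # climbs toward the maximum of 20
print("clonal start:  ", simulate(clonal=True))    # stays at 10: nothing to select on
```

The numbers are arbitrary, but the qualitative contrast is the point: selection can only concentrate variation that already exists, which is why a clonal laboratory strain offers breeders so little to work with.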

The ultimate economic impact of yeast-based opiate biosynthesis notwithstanding, this paper by Galanie, et al., shows the practical value of basic research. This recent achievement was made possible by decades of exploratory research into the hidden mechanisms of bacteria, flowers, and yeast –research which, at the time, may have been difficult to justify in terms of economic utility. Nevertheless, that corpus of information is now being applied to the biomedical sciences. Investment in scientific research benefits society.

Life’s diversity is frequently extolled, and biodiversity is a key goal of conservationism. In terms of sheer economic value, however, diversity is difficult to justify, so many choose to frame it as an aesthetic good: in Darwin’s words, “endless forms most beautiful and most wonderful.” But life is more than life forms. It is also a vast repository of information –strategies for wringing meaningful work out of insensate chemicals. It is this informational diversity that is worth preserving, and it is intelligible to us because it is written in DNA, the universal genetic code. Modern advances in synthetic biology attest to our progress in mastering this amazing –and largely untapped– natural resource.

The power to engineer organisms is the ultimate validation of our knowledge about the workings of cellular mechanisms. To have assembled all that knowledge about cell biology into a textbook is impressive, but to apply it in order to reconfigure and optimize another creature is deeply gratifying. The ability to engineer an organism is an outside criterion of verification –the biological equivalent of an airplane designer observing that her plane flies properly instead of falling from the sky.

Perhaps, like the corn that was bred for high oil content, there is a limit to how much an evolved life-form can be re-shaped; however much we might praise the utility of any given model organism, there is a point past which further efficiency cuts into its viability. There is currently a program in synthetic biology to develop single-celled “template organisms” that are loaded with gene clusters enabling the artificial cells to do a particular job, without expending resources on extraneous functions. Even if synthetic biology doesn’t produce template organisms, it is likely that synthetic biologists working with single-celled organisms like yeast will converge on this solution. Along with inserting novel genes, Smolke also silenced the activity of native yeast genes. This suggests that further optimizations might come from eliminating genes that interfere, directly or indirectly, with thebaine biosynthesis. After that, genes coding for anything besides basic functions and thebaine biosynthesis could be erased, since they waste metabolic resources that might otherwise be dedicated to manufacturing thebaine. The ultimate product of all this culling would be an organism that resembles the notional template organism more than it resembles a wild-type yeast.

The research chronicled in Galanie, et al., is a technological accomplishment, and the byproducts of optimizing this particular technique will catalyze subsequent advances. Its moral implications are no less significant: the promise of meeting the demand for opiate medication in the developing world is worth our investment in this research program.

References:

Galanie, S., Thodey, K., Trenchard, I. J., Interrante, F., & Smolke, C. D. (2015). Complete biosynthesis of opioids in yeast. Science, 349(6252), 1095–1100.

Service, R. F. (2015). Modified yeast produce opiates from sugar. Science, 349(6249), 677. doi:10.1126/science.349.6249.677