The Language of Anti-Reductivism

Red Root and Running Cold — two sculptures by Nancy Bowen. Each is made of glass and metal and loosely mimics a human body (or maybe the nervous system). See more of her work at http://nancybowenstudio.com/.

One of my ongoing projects is to develop a language of anti-reductivism. It’s a project I share with a number of social scientists and humanities scholars, one motivated by the turn to molecular and neurological explanations in the hard and clinical sciences. Biological reductionism circulates in popular media too — from narratives about the hereditary nature of certain kinds of behavior to science reporting on the discovery of “the gene” or the part of the brain that causes a particular disease or set of behaviors. Biological reductionism is alluring — it promises an easy explanation for a complex problem. But anyone paying attention to the influences of society on individual behavior — including the development of research questions and the interpretation of the data produced through scientific practice — can see that context is a powerful factor to consider. Reducing a complex set of behaviors to a gene or a part of the brain obscures more than it reveals, and it serves to pathologize individuals rather than motivate changes to social norms and institutions.

Wherever biological reductivism is used, individuals are pitted against dominant institutions and widespread expectations of “normal” behavior and development. One of the points I make in The Slumbering Masses (and reiterate all the time) is that certain arrangements of sleep are a problem not because of their physiological effects or origins, but because of the organizations of work, school, family life, and recreation that make certain schedules (e.g. the 9-to-5 workday) the normative basis for understanding human biology. In effect, an individual is made to be at fault when it is actually the organization of society that preferentially treats some ways of sleeping as “normal” and others as pathological. The same can be said for much more than sleeping behaviors and the temporal organization of society, and re-conceptualizing bioethics might be one avenue for developing new ways to organize institutions and — just maybe — society more generally.

You can follow my development of a language of anti-reductivism through a set of pieces in which I develop a couple of interrelated terms, “multibiologism” and the “biology of everyday life.” Multibiologism is my attempt to conceptualize a way to work against normative assumptions about biology, based in no small part on a history of medicine that takes able-bodied white men as the foundation against which other kinds of bodies are compared (and pathologized). Such an approach brings together thinkers like Georges Canguilhem, Keith Wailoo, Dorothy Roberts, and Lennard Davis, drawing together the philosophy and history of medicine, critical race studies, feminist theory, and disability studies. Multibiologism accepts human physiological plasticity as grounded in the material reality of the world we live in, but argues that “biology” is a discursive field produced through everyday action (including science & medicine). It’s this everyday action that helps constitute the “biology of everyday life,” in which toxins, diet, exercise, work, and other exposures and practices shape the body and expectations of normalcy. Which is all to say that human biology isn’t a stable or predictable thing: it changes over the course of a lifetime, differs between societies, and is not what it was for our ancestors. Making that argument builds on insights from a century of anthropological research (drawing especially on Margaret Lock and Patricia Kaufert’s work on “local biologies” and Mary Douglas’ work on disgust, and extending a way of thinking that Marcel Mauss started in his “Notion of Body Techniques” lectures) and pairs them with the history of changing attitudes toward the body (following Norbert Elias, specifically).

It was my ethnographic experiences in the sleep clinic where I spent the most time during the fieldwork for The Slumbering Masses that led me to thinking about multibiologism. I often described the clinicians I worked with there as “sociological,” in no small part due to their willingness to seek social remedies for sleep disorders (rather than resort to pharmaceuticals or surgeries). It was only when I started spending time in other sleep clinics that I began to realize just how sociological they were. That they were more likely to talk to parents and educators about rearranging school expectations than they were to prescribe a sleep drug was motivated by their interest in finding long-term solutions to the problems their patients faced. It also recognized that many of their patients were “normal” in their variation from norms of consolidated nightly sleep, and that reorganizing expectations was a better — and more sustainable — solution than prescribing a drug. But it seemed to me that there needed to be a language to do the kind of work they sought to do — a language that provided an ethical framework based on the lived realities of scientists, physicians, and patients.

(If you’re keen on following the breadcrumbs, the argument starts in the final chapter of The Slumbering Masses, moves on in ‘“Human Nature” and the Biology of Everyday Life,’ reaches its bioethical point in ‘Neurological Disorders, Affective Bioethics, and the Nervous System,’ and lays the basis for Unraveling.)

When I was finishing The Slumbering Masses — and articulating these ideas for myself before incorporating them into the book — I began to think about what the next project would be. What I wanted to do was develop a research agenda focused on an expression of human physiology that explicitly challenged how humans are thought about as humans. That led me to consider communication, and linguistic capacity more specifically, which neuroscientists, social scientists, and philosophers (and probably others too) still identify as the defining feature of humans (i.e. only humans have language). What about humans who didn’t speak (or at least didn’t speak in ways that were recognized as normative communication)? That led me first to thinking about the then-newish discourse of “neurodiversity,” which developed, in time, into a project focused on families in which a family member communicates in a non-normative way. That project eventually became Unraveling, which develops a set of terms — connectivity, facilitation, animation, and modularity — that seek to provide ways of thinking about individuals, families, communities, and institutions that strike against biologically reductive ways of conceptualizing brains and behavior.

So much of bioethical thinking reinforces reductive ways of conceptualizing the individual. But what the families at the heart of Unraveling show is that disorders of communication — and neurological disorders more generally — are disorders not strictly because of some physiological difference on the part of the individual, but because of the ordering of American society and the expectations that shape what it means to be a “normal” speaker and “neurotypical.” That might be a fairly easy point to convince most social scientists of — and maybe even many physicians — but beyond this diagnostic contribution, I wanted to provide tools for reconfiguring how we talk about what the aims of bioethical intervention are, and how we might achieve them.

It has long been apparent to me that any systemic change in the way we conceptualize medical disorders requires alliances between social scientists and clinical practitioners. Social scientists — and anthropologists especially — often make recourse to the language of complication (“it’s complicated!” or “it’s complex!”) without having the precise analytic language to describe what those complexities consist of and how they make lives livable. What Unraveling seeks to do is provide that language, drawing from the histories of psychiatry and neuroscience as well as the lived experiences of individuals with “neurological disorders.” In the lead-up to Unraveling’s release, I’ll profile some of the ideas integral to the text — connectivity, facilitation, animation, and modularity — and how they undergird a cybernetic theory of subjectivity and affective bioethics.

Biological reductivism ultimately lets those in power off the hook. Being able to target individuals through pathologization (which supports the logic of medical intervention and undergirds expectations of “compliance”) enables institutional actors — physicians, educators, parents, administrators, managers, law enforcement agents, judges, etc. — to ignore the social contexts in which particular behaviors or ways of being in the world come to be treated as disorderly. As disability studies scholars and anthropologists have been arguing for decades, changing social orders can make many more lives livable. A robust language of anti-reductivism is one step in the direction of reordering society and social expectations, but there is work to be done in building supple institutions and relations to support the diverse ways that humans inhabit the world.

Everything I Needed to Unlearn I Learned from Sid Meier’s Civilization

I’ve been playing Sid Meier’s Civilization my whole video-game-playing life. If you don’t know it, it’s a slow strategy game that models human history from the origins of “civilization” through the near future. Players choose a “civilization” to play (what anthropologists of an earlier era might have referred to as a “culture group”) and take turns conducting research, moving units around to explore the randomly generated board, engaging in diplomacy, waging war, and modifying the landscape to extract strategic resources. Players start by placing a settlement that will grow into a dynamic, urban capital city over the next 6,000+ years of gameplay. If that sounds boring, somehow the designers have managed to overcome the implicit boringness of the premise and made a game that can half-jokingly ask players, when they’ve finished the game, whether they want to play “just one more turn,” knowing that many will. Which is all to say that Civilization is slightly compulsive, and I have lost many nights to playing it into the wee hours.

The cover of the original version of Sid Meier’s Civilization from 1991. Somehow it perfectly captures a lot of what’s wrong with the game…

Civilization is almost educational. Or it would be if it didn’t fly in the face of a century of research in the social sciences (which I’ll get to shortly). I often think about having my undergraduate students play it, largely because it relies on a set of presumptions about how “civilizations” work and what differentiates “successful” ones from those that “collapse.” As a game, it attempts to model how societies move from being small-scale, early agricultural communities with minimal government to much larger, continent-spanning, industrialized nations with a “modern” form of government (i.e. democracy, communism, or fascism). All of this is driven by a player’s progress through the “tech tree,” a set of unfurling technologies that move from pottery, agriculture, and the wheel to sanitation, nuclear power, and space flight. If that sounds like unilineal evolution, that’s because it basically is; if it doesn’t, it might be because the term is unfamiliar, even if its assumptions are not.

Unilineal evolution is the idea that there are stages to social development, and societies move from a state of savagery, to barbarism, to being truly civilized. Popular in the US and Western Europe in the late 1800s, unilineal evolution was one of the underlying justifications for imperialism (the “white man’s burden” was to help all of those “half-devil half-child” “savages” move up the tech tree). As a theory, social scientists threw unilineal evolution out decades ago, pointing to the racist, colonial biases in a theory developed by a bunch of white men in the global north that posited that the features of societies in Western Europe (and, begrudgingly, the northeastern US) represented the pinnacle of civilization (secularism, representative politics, industrial capitalism, heteronormative kinship, etc.).

Over time, anthropologists and historians did a pretty good job of showing how wrong that kind of thinking is, beyond its implicit colonial racism. First, civilizations like China and Japan made it fairly clear that a society can have some of these civilizational features without having all of them, and that the development of any one of them doesn’t necessarily depend on the development of a specific preceding stage or technology (e.g. you don’t have to have polytheism before monotheism, or monotheism before secularism, or the germ theory of disease before sanitation). And second, it became increasingly clear that the idea that societies move from “simple” to “complex” forms of institutions ignored just how complex “simple” institutions can be. What looks “simple” from the outside can be exceedingly complex from the inside (e.g. kinship systems in Papua New Guinea). But some form of unilineal evolution persists in Civilization, and it’s very apparent in the biases baked into the game.

Early versions of Civilization were pretty straightforward in their biases. It was difficult to win the game with anything other than a market-driven democracy, even if you were a warmonger (you’ve got to have a market system to pay for all that military R&D and unit support, after all). Over time, Civilization has become a more modular game. It used to be that adopting a government like Democracy came with a set of predetermined features, but now Democracy has a base set of rules, and players can choose from a set of “policies” that offer a variety of bonuses. In that way, you can play a Democracy that doubles as an isolationist surveillance state, or a peaceful Communist state that provides its citizens with amenities to keep them happy. Better yet, the designers chose to separate the technological and “civic” trees, so one needn’t research the wheel before democracy (which can also allow for a civilization that is scientifically invested but ignores “civic” achievements). But one of the biases that persists is technological determinism.

It might seem silly to suggest that a society needn’t invent the wheel before inventing gunpowder, but the wheel is not a precondition for chemistry. Similarly, one needn’t understand shipbuilding to develop atomic theory. Yes, we live in a world where the wheel preceded gunpowder and shipbuilding preceded atomic theory, but on a planet with a Pangea-like mega-continent, shipbuilding would be unnecessary. And it isn’t so hard to imagine access to some bat guano, sulfur, and charcoal resulting in gunpowder before the development of the wheel. In all cases, what actually makes a technology possible are the social demands that compel research and encourage individuals and communities to put a technology to use. Hence gunpowder’s early discovery and widespread abandonment in China, or how the refrigerator got its hum. I understand why, for the sake of the game, some kind of tech tree is important, but what continues to confound me is why there are technological bottlenecks, where you have to have a specific technology before you can research further technologies (and the same goes for “civics”).
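Since that complaint is really about how prerequisites are wired together, here is a minimal sketch of a tech tree as a dependency graph, written in Python. To be clear, the technology names and prerequisite edges below are invented for illustration and are not the game's actual data or code; the sketch only shows how a single hard prerequisite (a bottleneck edge) can gate an entire line of research behind an unrelated technology, and how removing that one edge changes what is researchable.

```python
# A toy tech tree as a dependency graph. The technologies and prerequisite
# edges are invented for illustration; this is not the game's data or code.
TECH_TREE = {
    "pottery": [],
    "the_wheel": [],
    "writing": ["pottery"],
    "currency": ["pottery", "the_wheel"],  # a bottleneck: both roots converge here
    "mathematics": ["currency"],
    "gunpowder": ["mathematics"],          # gunpowder ends up gated behind the wheel
}

def researchable(tech, researched):
    """A tech can be researched once all of its prerequisites are known."""
    return tech not in researched and all(p in researched for p in TECH_TREE[tech])

def available(researched):
    """Every tech a player could start researching right now."""
    return [t for t in TECH_TREE if researchable(t, researched)]

# With nothing researched, only the root techs are open.
print(available(set()))        # ['pottery', 'the_wheel']

# In this toy model there is no path to gunpowder without the wheel, because
# "currency" requires it. Dropping that one prerequisite edge removes the bottleneck.
TECH_TREE["currency"] = ["pottery"]
print(available({"pottery"}))  # ['the_wheel', 'writing', 'currency']
```

Whether a game should keep edges like that is the design question; the sketch just makes it easy to see how few of them it takes to hard-code a single developmental path.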

A persistent feature of the game is that each of the civilizations has some set of basic benefits, which can include special units and buildings and, in some cases, suggest that there is something intrinsic about a civilization’s relationship with geography. Canada and Russia get a bonus for being near tundra tiles; Japan gets a bonus for fighting along water tiles; etc. At their best, these kinds of rules make the game dynamic. At their worst, they foster a kind of Jared Diamond-esque environmental determinism. (Which, again, historians and anthropologists had discredited long before his Pulitzer Prize-winning Guns, Germs, and Steel — but institutional racism is hard to overcome!) A more nuanced game might allow players to mix and match these bonuses to reflect the complex relationship between what societies value and the landscapes they have to make do with.

One other enduring problem is that the designers really want to focus on the positive side of civilization. These days, Great People can be recruited to join your civilization, each of whom has a positive effect (e.g. Albert Einstein gives a boost to your scientific output). But what about all the terrible “Great People” in history? What about the slave trade, on which contemporary capitalism was built? When Civilization 6 was initially released, environmental change (i.e. the Anthropocene, which is what the game is all about) wasn’t included, inspiring the rumor that it was too controversial to include. Maybe including things like racism and ethnonationalism would make the game too grim; maybe the designers simply want players to supply those narratives as they play. But if the response to my concerns above amounts to “that just isn’t realistic,” neither is a history of human civilizations stripped of the ugly side of the nation-state and everyday politics. (As I write this, I kind of wish there were a “utopia mode” that would allow players to avoid things like fossil fuel combustion, factory farms, and the gendered division of labor, to name just three.)

This is clearly not an exhaustive list of Civilization’s problems. But whatever its problems, the game provides a basis for rethinking some of the biases in history and social science — and in popular culture more generally. Working through what’s wrong with Civilization helps open up what anthropology and history have done over the 20th century to change the way social scientists think about “civilization,” what it’s composed of, and how it changes over time.

It would be amazing if Civilization 7 were more of an open sandbox, allowing players more flexibility in how they play. It would also be great if there were more of a dark side to Civilization. I don’t think Civilization drove me to become an anthropologist, but it does continue to remind me — each time I play — of what has gone wrong with social theory over the course of the 19th and 20th centuries, and how we might work against the implicit and explicit biases in the narratives that get told in video games and elsewhere. I hope the next version of Civilization catches up with contemporary social science, but, in any case, I’m not going to stop playing it…

We’re Having a Generational Transition Problem…

That’s Luke and Yoda, from “The Last Jedi,” watching the original Jedi temple burn to the ground. My apologies if that’s a spoiler in any way.

There was a moment when a senior faculty member and I were talking in a shared departmental office — just catching up, really. The faculty member was talking about their daughter, who had recently had a child, started a career, gotten married, and bought a home. The senior faculty member said the daughter was “finally getting it together” in her early 30s. And it dawned on me that my senior colleague was basically talking about me. I was the same age as their daughter, in a similar place in my career trajectory and personal life. It made me suddenly realize that part of the generational transition problem I was seeing in our institution, and in the academy more generally, was that Baby Boomers were in the position of handing things over to people who were basically their children’s age (thanks to a series of hiring freezes in the 1990s and early 2000s). When my senior colleagues looked at me, I realized, they were seeing their children or their children’s friends, with all of their career and personal foibles. Why would they hand off to those children, especially something as precious as their career’s work of institution- and discipline-building?

I’ve watched senior faculty — nearing retirement — at several institutions basically sabotage departments, programs, and centers that they’ve built rather than anoint and mentor younger faculty to take the reins. For the last several years, I’ve been trying to think through what I’ve seen in the university, particularly around the transition from an older generation of scholars (mostly Baby Boomers) to people of my generation (Gen Xers, although I think I’m on the tail end of the spread). Why not hand off rather than let things fall apart?

It comes in many forms. The benign neglect of not holding faculty meetings to talk about necessary changes to the curriculum as faculty retire. The secrecy — if not outright denial — about faculty retirements and when they’ll happen. The gatekeeping that insists on junior faculty consulting senior faculty about the classes they want to teach or the improvements they seek to make. The lack of actual mentoring on the part of senior faculty toward their juniors. The deliberate ambiguity about institutional expectations and opportunities, spanning everything from tenure requirements to the availability of resources. And then there are the more aggressive and deliberate actions that some faculty take: spiking junior faculty’s tenure cases, arguing against diversity hires as unmerited, withholding access to resources, and running centers, programs, and departments aground rather than helping steer them in a new direction.

It’s hard not to see some of this behavior as a function of the changing demographics of faculty hires, including shifts in the representation of gender, sexuality, race, ethnicity, and disability, but also a greater diversity in the institutions that are producing Ph.D.s. Visions of what the discipline of Anthropology (and probably every discipline) is and will be are changing, sometimes radically. I can imagine that for many senior faculty, seeing these changes occur is alienating and, for some, deeply distressing. Which all has me thinking that part of the generational transfer has to be some collective ego work: labor to help make evident to senior faculty that their lifetime of contributions to the field is vital, and also work on the part of younger faculty to articulate visions of Anthropology (and other disciplines) that redevelop the canon to acknowledge the generations before us while developing supple visions of the disciplines that build upon their pasts, address present needs, and develop livable futures.

Taken from the Louvre’s archives, that’s an image from an ancient Greek vase depicting a relay race (between naked Greek men, to be specific about it).

Years ago, a senior faculty member I knew well retired as soon as he could. His rationale was that he had spent the last 30+ years trying to build a specific vision of anthropology, and after decades of frustration with the institutions he was a part of and the colleagues he had, he was just done. He could have coasted for several years, teaching a set of courses he cared about, but he preferred to cut himself loose from the institution, travel, and write. I really admired his graceful exit.

Before that, a group of senior faculty I knew (different institution, different time) were dealing with the demographic and institutional shift among the faculty by thwarting junior faculty efforts to hire even more diverse faculty. It was only when a couple of the senior faculty broke ranks — acknowledging that the department wasn’t really theirs to build any longer — that the junior faculty had enough of a quorum to make the hires they wanted to make. One of the junior faculty described it all as a problem of grace — that some people couldn’t manage the intergenerational transfer gracefully.

I’ve recently become aware of many more younger faculty quitting their academic jobs. Maybe this always happened and I just didn’t see it. But I know personally of several faculty in tenure-track jobs (some tenured) who have either quit without a job lined up or made a calculated exit from academia. And the internet is littered with additional examples. It’s hard for me not to see these decisions to quit as responses to institutions in which people don’t see futures — and in which they don’t feel they have the mentoring or support to make an institutional future livable. Somehow I have a hard time seeing quitting as a form of grace.

The problem with both my impulse to read my senior colleague’s exit as graceful and that junior faculty member’s impulse to read senior faculty’s behavior as a lack of grace is that both place the onus on particular faculty to behave in particular ways. If we’re going to navigate this intergenerational moment generatively, it’s going to be through collaboration, not individual choices.

That all said, I’m not sure what the right way forward is. I do know that universities are decidedly conservative institutions, and that incrementalism is probably the only sustainable way forward. What might that look like?

Develop sustained dialogues between junior and senior faculty. That might be through workshops or conference panels or, locally, by having faculty give guest lectures in each other’s courses or discuss their work in seminars. Having a regular space to talk about ideas, one’s scholarship, and one’s place in the field keeps lines of dialogue open. It also makes it clear that, whatever else is happening, there’s a relationship being maintained between people who recognize one another as scholars (even if they disagree as respected experts).

Collaborate intergenerationally, whether in writing, funds-seeking, or conference planning. I don’t doubt that the first two can be hard, but they might be sites where very deliberate mentorship can happen. Working on conference panels together (or local workshops) can serve as a way to introduce each other to networks of scholars with shared interests.

Share writing. Writing is so central to the profession and, for better and worse, to people’s relationships with their work. Sharing writing, often without any expectation of feedback, helps keep lines of communication open and also helps develop expanding networks of connection between scholars across generations.

Create structures of care, which can range from the occasional check-in email to a meal or drinks to even a home visit if you’re familiar enough. Some of the best, most humane interactions I’ve had with other faculty have been over one-on-one meals or drinks — not dinner or department parties — and they’ve produced some of the most lasting scholarly friendships I have with people more senior than me.

I’ve been very deliberate in pitching these suggestions without presuming that invitations need to come from juniors to seniors or vice versa. Kindness is the rule, and building a sustainable future depends on actors across generations working together so there is something to hand off to the generations to come.

Other ideas? Other good experiences? Please post them in the comments.