Facilitation as a Kind of Care

Intense debates have recently emerged about Facilitated Communication (FC). Narrowly defined, FC is the process whereby an individual with a communication impairment relies on another individual’s aid in the use of a keyboard, letter board, symbol board, or tablet device with a symbolic interface. The facilitator uses his or her hand to steady the arm or hand of the communicator, making it possible for the communicator to point at a symbol or type a letter. A more expansive definition of FC would include Augmentative and Alternative Communication (AAC) and the various ways that interpreters and facilitators are employed to aid communicators who have communication impairments, which might include such diverse tactics as interpreting tapping fingers or feet and discerning the eye movements associated with a letter or symbol board. Parsing FC from AAC has been a tactic used to discredit individual FC practitioners while preserving the use of AAC for specific individuals. What became apparent to me during the process of writing Unraveling, a book that is expressly about communication impairments and their social affordances, is that all communication is facilitated, and that the distinctions between kinds of communication are ones of degree, not kind.

A low-tech flip book filled with simple symbols to use for communication. Borrowed from http://literacyforallinstruction.ca

In Unraveling, I argue that the opposition to FC is due to its chafing against dominant ways of thinking about communication, language, and subjectivity. (At the time of writing, the Wikipedia page for Facilitated Communication has been totally taken over by opponents of FC, which Wikipedia has abetted by putting the page in a series on Alternative and Pseudo-Medicine [about which the medical anthropologist in me has some additional things to say some other time].) Drawing on a history of understanding the subject as conveying his or her interior sense of self through the transparent, referential use of language, this view holds that only those who can speak their interior selves are full subjects. This is exemplified in Emile Benveniste’s “Subjectivity in Language” and apparent in thinkers like Judith Butler and others who see discourse as primarily, if not solely, restricted to language-use. The assumption is ableist: the variance of non-normative speakers from socially established norms marks some individuals as disabled — and some as more disabled than others. Such a view ignores the complex, situated, material interactions between individuals that all communication requires. It ignores how communication — and by extension subjectivity — is facilitated.

By facilitation, I mean a processual interaction between bodies; facilitation aims towards an end that can only be reached — or can be reached more immediately — through interactions between actors. In defining facilitation in that way, I’m drawing on Don Kulick and Jens Rydstrom’s Loneliness and Its Opposite, which is about the ways that caregivers aid disabled individuals in their sex lives, particularly in contexts of residential care in Denmark. In Kulick and Rydstrom’s analysis, sexual interactions between two disabled people are made possible by one or more caregivers who are able to help position bodies, put on condoms, and otherwise ensure that the disabled lovers will be successful in their interaction. Communication is not so different.

In Unraveling, I focus part of a chapter on a family — the Goddards — and their use of FC with their adult daughter, Peyton. (Peyton keeps a website here.) As Peyton and her mother recount in their memoir, I Am Intelligent, Peyton became non-verbal in her childhood, a case of what is often referred to as “regressive autism.” It was only in her early 20s, and out of desperation, that her parents turned to FC, despite having seen a television program that portrayed FC users as misguided and manipulative. Peyton’s use of FC relies on her mother or another caregiver to hold her wrist — and eventually her arm — while she uses a keyboard to type out messages. Her writing doesn’t always follow standard syntax or spelling, but her caregivers are able to discern her meaning through context and in conversation with Peyton. Aware of the criticisms of FC, Peyton’s psychiatrist devises experiments to prove that Peyton is communicating and that the facilitators are serving as a medium for her to do so.

Critics of FC often point to its inability to be replicated in laboratory conditions, though any awareness of the replication crisis in psychology should trouble the soundness of that counter-argument. Critics also — as in the case of the Wikipedia page on FC — point to specific cases of facilitators who have been accused of abuse or whose use of FC has been discredited. The challenge to both of these criticisms is that for the many users of FC who use it to get through their everyday lives without contestation by authorities or FC deniers, there’s no benefit to showing up for a potentially hostile “experiment” to test the validity of their means of communication. In other words, the more successful users of FC might never be seen in experimental contexts precisely because those in their lives see the use of FC as successful and not in need of testing. Moreover, recent research has pointed to how scientific ideologies constrain what experimental protocols see and report, suggesting that how autism — for example — has been researched and discussed is in need of significant re-conceptualization, particularly in relation to questions around social interaction and communication. Which is all to say that FC is subject to what linguistic anthropologists refer to as “language” and “semiotic” ideologies, and is due for some critical reassessment (including reassessing the work of its critics).

Consider what happens in any communicative exchange. A speaker utters a set of noises or makes a series of gestures; the speaker’s audience of one or more people register these actions and interpret them based on their tacit understandings of language within their community. The audience also works from the situation in which the act occurs in an effort to ascertain the referential content of the message. The process of communication — as symbolic interactionists and ethnomethodologists have long argued — is one of collaboration and depends not on an interior self with a transparent message conveyed through language, but rather on a process through which some operable certainty can be made between communicators. Over the course of a conversation this might become easier, as a set of shared assumptions develops, but everyone has experienced communicative interactions where referents, meanings, and intents are misunderstood and lead to confusion or tension. Smoothing out communication and overlooking all of the interpretation that occurs in a communicative interaction ignores all of the facilitation that is happening between individuals — a facilitation that is working toward an end of shared understanding.

If one accepts that all communication is necessarily facilitated, what follows is that a practice like FC is not typologically different from everyday speech, the use of sign language, communication through gestures, or reading. In each case, the speaker (or author) seeks to convey some message, but that message is constructed through an interaction with the audience. The facilitator in FC is analogous to any other medium through which communication is enabled, and when communicated with, might serve as both medium and audience.

One of the consequences of this line of thinking — and one that I work on developing in Unraveling — is that rather than see subjectivity as something that arises in the individual (which can sometimes be seen as a “natural” process and one that disabled individuals are unable to undergo completely), subjectivity is a collaborative process that relies not just on language, but on communicative interaction. Moreover, it is situationally dependent, is shaped by the material conditions through which individuals and communities are composed, and is based in the physiological capacities that individuals have and that are enabled through their worldly interactions with and through human and non-human others.

That might all sound a little abstract, but consider it in Peyton Goddard’s case. In the period when she cannot communicate with language — after she loses her ability to normatively communicate in her childhood and before she adopts FC — it is not that Peyton doesn’t have experiences that shape her subjectivity. Rather, the experiences that she has during that approximately 20-year period profoundly shape her, but she is unable to communicate about them — at least not in any normatively recognized way — and they have an outsized effect on her. It’s only when she returns to language use that she is able to tame the experiences she has had, largely in collaboration with her family and caregivers, who, with her, help to encode her experiences in a shared understanding of what has happened to her over those 20 years. I Am Intelligent is the result of that work.

In this way, seeing facilitation as a kind of care — and one that is end-focused and collaborative — helps to position the act of communication as a form of caring interaction. Listening, interpreting, and sharing all become integral to helping other people exist in the world as subjects who can be known and know the world and their social others. Shutting individuals out of these caring experiences — as those who seek to discredit FC apparently aim to do — is a violent and inhumane act. Instead, practicing careful communication and finding ways for others to communicate — normatively or not — ensures more vibrant connections between people. Ignoring this responsibility serves to maintain ableist forms of subjectivity and personhood that exclude some kinds of communicators while preserving normative kinds of subjects and persons. At its worst, this comes to naturalize certain kinds of “normal” and “pathological” human experiences and renders some individuals outside of networks of care. In Unraveling, I try and plot ways forward that acknowledge the necessity of facilitation and build animating worlds of connection and care.

(Unraveling: Remaking Personhood in a Neurodiverse Age comes out from the University of Minnesota Press in 2020.)

The Language of Anti-Reductivism

Red Root and Running Cold — two sculptures from Nancy Bowen. Each is made of glass and metal, and loosely mimics a human body (or maybe the nervous system). See more of her work at http://nancybowenstudio.com/.

One of my ongoing projects is to develop a language of anti-reductivism. It’s a project that I share with a number of social scientists and humanities scholars, and one that has been motivated by the turn to molecular and neurologic explanations in the hard and clinical sciences. Biological reductionism circulates in popular media too — from narratives about the hereditary nature of certain kinds of behavior to science reporting on the discovery of “the gene” or part of the brain that causes a particular disease or set of behaviors. Biological reductionism is alluring — it promises an easy explanation for a complex problem. But anyone paying attention to the influences of society on individual behavior — including the development of research questions and the interpretation of the data produced through scientific practice — would be able to see that context is a powerful factor to consider. Reducing a complex set of behaviors to a gene or part of the brain obscures more than it reveals and serves to pathologize individuals rather than motivate changing social norms and institutions.

Wherever biological reductivism is used, individuals are pitted against dominant institutions and widespread expectations of “normal” behavior and development. One of the points I make in The Slumbering Masses (and I reiterate it all the time) is that certain arrangements of sleep are a problem, not because of their physiological effects or origins, but due to the organizations of work, school, family life, and recreation that make certain schedules (e.g. the 9-to-5 workday) the normative basis to understand human biology. In effect, an individual is made to be at fault, when it is actually the organization of society that preferentially treats some ways of sleeping as “normal” and others as pathological. The same can be said for much more than sleeping behaviors and the temporal organization of society; re-conceptualizing bioethics might be one avenue for developing new ways to organize institutions and — just maybe — society more generally.

You can follow my development of a language of anti-reductivism through a set of pieces in which I develop a couple of interrelated terms, “multibiologism” and the “biology of everyday life.” Multibiologism is my attempt to conceptualize a way to work against normative assumptions about biology, based in no small part upon a history of medicine that takes able-bodied white men as its foundation against which other kinds of bodies are compared (and pathologized). Such an approach brings together thinkers like Georges Canguilhem, Keith Wailoo, Dorothy Roberts, and Lennard Davis, drawing together the philosophy and history of medicine, critical race studies, feminist theory, and disability studies. Multibiologism accepts human physiological plasticity as based in the material reality of the world that we live in, but argues that “biology” is a discursive field that is produced through everyday action (including science & medicine). It’s this everyday action that helps to comprise the “biology of everyday life,” where toxins, diet, exercise, work, and other exposures and practices shape the body and expectations of normalcy. Which is all to say that human biology isn’t a stable or predictable thing: it changes over the course of a lifetime, is different between societies, and is not the same as what it was for our ancestors. Making that argument builds upon insights from a century of anthropological research (drawing on Margaret Lock and Patricia Kaufert’s work on “local biologies” and Mary Douglas’ work on disgust, especially, and extending a way of thinking that Marcel Mauss started working on in his “Notion of Body Techniques” lectures) and pairs it with the history of changing attitudes to the body (following Norbert Elias, specifically).

It was my ethnographic experiences in the sleep clinic I spent the most time in during the fieldwork for The Slumbering Masses that led me to thinking about multibiologism. I often described the clinicians I worked with there as “sociological,” in no small part due to their willingness to seek social remedies for sleep disorders (rather than resort to pharmaceuticals or surgeries). It was only when I started spending time in other sleep clinics that I began to realize just how sociological that first clinic was. That its clinicians were more likely to talk to parents and educators about rearranging school expectations than they were to prescribe a sleep drug was motivated by their interests in finding long-term solutions to the problems that their patients faced. It also recognized that many of their patients were “normal” in their variation from norms of consolidated nightly sleep, and that reorganizing expectations was a better — and more sustainable — solution than prescribing a drug. But it seemed to me that there needed to be language to do the kind of work they sought to do — and language that provided an ethical framework that was based on the lived realities of scientists, physicians, and patients.

(If you’re keen on following the breadcrumbs, the argument starts in the final chapter of The Slumbering Masses, moves on in ‘“Human Nature” and the Biology of Everyday Life,’ reaches its bioethical point in ‘Neurological Disorders, Affective Bioethics, and the Nervous System,’ and lays the basis for Unraveling.)

When I was finishing The Slumbering Masses — and was articulating these ideas for myself before incorporating them into the book — I began to think about what the next project would be. What I wanted to do was develop a research agenda that focused on an expression of human physiology that explicitly challenged how humans are thought about as humans. That led me to consider communication, and linguistic capacity more specifically, which neuroscientists, social scientists, and philosophers (and probably others too) still identify as the defining feature of humans (i.e. only humans have language). What about humans that didn’t speak (or at least didn’t speak in ways that were recognized as normative communication)? That led me first to thinking about the then-newish discourse of “neurodiversity,” which developed, in time, into a project that focused on families wherein a family member communicates in a non-normative way. That project eventually became Unraveling, which develops a set of terms — connectivity, facilitation, animation, and modularity — that seek to provide ways for thinking about individuals, families, communities, and institutions that strike against biologically reductive ways of conceptualizing brains and behavior.

So much of bioethical thinking reinforces reductive ways of conceptualizing the individual. But what the families at the heart of Unraveling show is that disorders of communication — and neurological disorders more generally — are disorders not strictly because of some physiological difference on the part of the individual, but because of the ordering of American society and the expectations that shape what it means to be a “normal” speaker and “neurotypical.” That might be a fairly easy point to convince most social scientists of — and maybe even many physicians — but beyond this diagnostic contribution, I wanted to provide tools for reconfiguring how we talk about what the aims of bioethical intervention are, and how we might achieve them.

It has long been apparent to me that any systemic change in the way that we conceptualize medical disorders requires alliances between social scientists and clinical practitioners. Social scientists — and anthropologists especially — often make recourse to the language of complication (“it’s complicated!” or “it’s complex!”) without having the precise analytic language to describe what those complexities consist of and how they make lives livable. What Unraveling seeks to do is provide that language, drawing from the histories of psychiatry and neuroscience as well as the lived experiences of individuals with “neurological disorders.” In the lead-up to Unraveling being released, I’ll profile some of the ideas integral to the text — connectivity, facilitation, animation, and modularity — and how they undergird a cybernetic theory of subjectivity and affective bioethics.

Biological reductivism ultimately lets those in power off the hook. Being able to target individuals through pathologization (which supports the logic of medical intervention and undergirds expectations of “compliance”) enables institutional actors — physicians, educators, parents, administrators, managers, law enforcement agents, judges, etc. — to ignore the social contexts in which particular behaviors or ways of being in the world are accepted as disorderly. As disability studies scholars and anthropologists have been arguing for decades, changing social orders can make many more lives livable. A robust language of anti-reductivism is one step in the direction of reordering society and social expectations, but there is work to be done in building supple institutions and relations to support the diverse ways that humans inhabit the world.

Everything I Needed to Unlearn I Learned from Sid Meier’s Civilization

I’ve been playing Sid Meier’s Civilization my whole video-game-playing life. If you don’t know it, it’s a slow strategy game that models “civilization” from its origins through the near future. Players choose a “civilization” to play (what anthropologists of an earlier era might refer to as a “culture group”) and take turns conducting research, moving units around to explore the randomly-generated board, engaging in diplomacy, waging war, and modifying the landscape to extract strategic resources. Players start by placing a settlement that will grow into a dynamic, urban, capital city over the next 6000+ years of gameplay. If that sounds boring, somehow the designers of the game have managed to overcome the implicit boringness of the premise, and made a game that can half-jokingly ask players, when they’ve finished the game, if they want to play “just one more turn” and know that many will. Which is all to say that Civilization is slightly compulsive, and I have lost many nights to playing the game into the wee hours.

The cover of the original version of Sid Meier’s Civilization from 1991. Somehow it perfectly captures a lot of what’s wrong with the game…

Civilization is almost educational. Or it would be if it didn’t fly in the face of a century of research in the social sciences (which I’ll get to shortly). I often think about having my undergraduate students play it, largely because it relies on a set of presumptions about how “civilizations” work, and what differentiates “successful” ones from those that “collapse.” As a game, it attempts to model how societies move from being small-scale, early agricultural communities with a small government to a much larger, continent-spanning, industrialized nation with a “modern” form of government (i.e. democracy, communism, or fascism). All of these are based on a player’s progress through the “tech tree,” a set of unfurling technologies that move from pottery, agriculture, and the wheel, to sanitation, nuclear power, and space flight. If that sounds like unilineal evolution, that’s because it basically is; if it doesn’t sound like unilineal evolution, that might be because the term is unfamiliar, even if its assumptions are not.

Unilineal evolution is the idea that there are stages to social development, and societies move from a state of savagery, to barbarism, to being truly civilized. Popular in the US and Western Europe in the late 1800s, unilineal evolution was one of the underlying justifications for imperialism (the “white man’s burden” was to help all of those “half-devil half-child” “savages” move up the tech tree). As a theory, social scientists threw unilineal evolution out decades ago, pointing to the racist, colonial biases in a theory developed by a bunch of white men in the global north that posited that the features of societies in Western Europe (and, begrudgingly, the northeastern US) represented the pinnacle of civilization (secularism, representative politics, industrial capitalism, heteronormative kinship, etc.).

Over time, anthropologists and historians did a pretty good job of showing how wrong that kind of thinking is, beyond its implicit colonial racism. First, civilizations like China and Japan made it fairly clear that a society can have some of these civilizational features without having all of them, and that the development of any one of them doesn’t necessarily depend on the development of a specific preceding stage or technology (e.g. you don’t have to have polytheism before monotheism, or monotheism before secularism, or the germ theory of disease before sanitation). And second, it became increasingly clear that the idea that societies move from “simple” to “complex” forms of institutions ignored just how complex “simple” institutions can be. What looks to be “simple” from the outside can be exceedingly complex from the inside (e.g. kinship systems in Papua New Guinea). But some form of unilineal evolution persists in Civilization, and it’s very apparent in the biases baked into the game.

Early versions of Civilization were pretty straightforward in their biases. It was difficult to win the game with anything other than a market-driven democracy, even if you were a warmonger (you’ve got to have a market system to pay for all that military R&D and unit support, after all). Over time, Civilization has become a more modular game. It used to be that adopting a government like Democracy came with a set of predetermined features, but now Democracy has a base set of rules, and players can choose from a set of “policies” that offer a variety of bonuses. In that way, you can play a Democracy that depends upon an isolationist, surveillance state or a peaceful Communist state that provides its citizens with amenities to keep them happy. Better yet, the designers chose to separate the technological and “civic” trees, so one needn’t research the wheel before democracy (which can also allow for a civilization that is scientifically invested, but ignores “civic” achievements). But one of the biases that persists is technological determinism.
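Before getting to that, the policy modularity is worth a quick sketch, since it’s the one place the game has loosened its determinism. Here’s a minimal model of the design, with government names, policies, and numbers invented for illustration rather than taken from the game’s actual rules: a government is a base set of bonuses plus a handful of player-chosen policies.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    name: str
    bonuses: dict  # e.g. {"espionage": 2, "amenities": -1}

@dataclass
class Government:
    name: str
    base_bonuses: dict
    policy_slots: int
    policies: list = field(default_factory=list)

    def adopt(self, policy):
        # Policies are swappable within a fixed number of slots.
        if len(self.policies) >= self.policy_slots:
            raise ValueError("no free policy slots")
        self.policies.append(policy)

    def total_bonuses(self):
        # A government's character is its base rules plus its chosen policies.
        totals = dict(self.base_bonuses)
        for policy in self.policies:
            for resource, value in policy.bonuses.items():
                totals[resource] = totals.get(resource, 0) + value
        return totals

# The same base government can be steered in very different directions:
democracy = Government("Democracy", {"amenities": 1}, policy_slots=2)
democracy.adopt(Policy("Surveillance State", {"espionage": 2, "amenities": -1}))
democracy.adopt(Policy("Isolationism", {"production": 2, "trade_routes": -2}))
print(democracy.total_bonuses())
```

Mixing and matching within slots is exactly the flexibility that the older, monolithic governments lacked.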

It might seem silly to suggest that a society needn’t invent the wheel before inventing gunpowder, but the wheel is not a precondition for chemistry. Similarly, one needn’t understand shipbuilding to develop atomic theory. Yes, we live in a world where the wheel preceded gunpowder and shipbuilding preceded atomic theory, but on a planet with a Pangea-like mega-continent, shipbuilding would be unnecessary. And it isn’t so hard to imagine access to some bat guano, sulfur, and charcoal resulting in gunpowder before the development of the wheel. In all cases, what actually makes a technology possible are the social demands that compel research and encourage individuals and communities to put a technology to use. Hence gunpowder’s early discovery and widespread abandonment in China, or how the refrigerator got its hum. I understand why, for the sake of the game, some kind of tech tree is important, but what continues to confound me is why there are technological bottlenecks, where you have to have a specific technology before you can research further technologies (and the same goes for “civics”).
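To make the bottleneck complaint concrete, here is a minimal sketch of a tech tree as a prerequisite graph; the technology names and dependencies are invented for illustration, not taken from the game’s actual rules. A single unresearched bottleneck locks out everything downstream of it, however historically contingent that dependency is:

```python
# A sketch of a Civilization-style tech tree as a prerequisite graph.
# All names and dependencies here are hypothetical, for illustration only.
TECH_TREE = {
    "pottery": [],
    "the_wheel": [],
    "writing": ["pottery"],
    "currency": ["writing"],
    "chemistry": ["currency"],   # the bottleneck in this toy tree
    "gunpowder": ["chemistry"],
    "sanitation": ["chemistry"],
}

def can_research(tech, researched):
    # A technology is available once all of its prerequisites are researched.
    return all(prereq in researched for prereq in TECH_TREE[tech])

def blocked_by(tech):
    # Every technology that transitively depends on `tech`.
    blocked, frontier = set(), {tech}
    while frontier:
        frontier = {t for t, prereqs in TECH_TREE.items()
                    if frontier & set(prereqs) and t not in blocked}
        blocked |= frontier
    return blocked

print(can_research("gunpowder", {"pottery", "writing", "currency"}))  # False
print(blocked_by("chemistry"))  # {'gunpowder', 'sanitation'}
```

Loosening those hard prerequisites, or making them substitutable, would be one way for a game to model the social-demand account of technology sketched above.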

A persistent feature of the game is that each of the civilizations has some set of basic benefits, which can include special units and buildings, and, in some cases, suggest that there is something intrinsic about a civilization’s relationship with geography. Canada and Russia get a bonus for being near tundra tiles; Japan gets a bonus for fighting along water tiles; etc. At their best, these kinds of rules make the game dynamic. At their worst, they foster a kind of Jared Diamond-esque environmental determinism. (Which, again, historians and anthropologists discredited long before his Pulitzer Prize-winning Guns, Germs, and Steel — but institutional racism is hard to overcome!) A more nuanced game might allow players to mix and match these bonuses to reflect the complex relationship between what societies value and the landscapes they have to make do with.

One other enduring problem in the game is that the designers really want to focus on the positive side of civilization. These days, Great People can be recruited to join your civilization, each of whom has a positive effect (e.g. Albert Einstein gives a boost to your scientific output). But what about all the terrible “Great People” in history? What about the slave trade, on which contemporary capitalism was built? When Civilization 6 was initially released, environmental change (i.e. the Anthropocene, which is what the game is all about) wasn’t included in the game, inspiring the rumor that it was too controversial to include. Maybe including things like racism and ethnonationalism would make the game too grim; maybe the designers simply want players to provide those narratives to the game as they play it. But if the response to my concerns above amounts to “that just isn’t realistic,” neither is a history of human civilizations without the ugly side of the nation-state and everyday politics. (As I write this, I kind of wish there was a “utopia mode” that would allow players to avoid things like fossil fuel combustion, factory farms, and the gendered division of labor, to name just three.)

This is clearly not an exhaustive list of the problems with Civilization. Whatever its problems, the game provides a basis to rethink some of the biases in history and social science — and popular culture more generally. Working through what’s wrong with Civilization helps open up what anthropology and history have done over the 20th century to change the way that social scientists think about “civilization,” what it’s composed of, and how it changes over time.

It would be amazing if Civilization 7 was more of an open sandbox, allowing players more flexibility in how they play. It would also be great if there was more of a dark side to Civilization. I don’t think Civilization drove me to become an anthropologist, but it does continue to remind me — each time I play a game — of what has gone wrong with social theory over the course of the 19th and 20th centuries, and how we might work against implicit and explicit biases in the narratives that get told in video games and elsewhere. I hope the next version of Civilization gets up to date with contemporary social science, but, in any case, I’m not going to stop playing it…

We’re Having a Generational Transition Problem…

That’s Luke and Yoda, from “The Last Jedi,” watching the original Jedi temple burn to the ground. My apologies if that’s a spoiler in any way.

There was a moment when a senior faculty member and I were talking in a shared departmental office — just catching up really. The faculty member was talking about their daughter, who had recently had a child, started a career, gotten married, and bought a home. The senior faculty member said she was “finally getting it together” in her early 30s. And it dawned on me that my senior colleague was basically talking about me. I was the same age as their daughter, in a similar place in my career trajectory and personal life. It made me suddenly realize that part of the generational transition problem I was seeing in our institution and the academy more generally was that Baby Boomers were in the position of handing things over to people who were basically their children’s age (thanks to a series of hiring freezes in the 1990s and early 2000s). When my senior colleagues looked at me, I realized that they were seeing their children or their children’s friends, with all of their career and personal foibles. Why would they hand off to those children, especially something precious like their career’s work of institution- and discipline-building?

I’ve watched senior faculty — nearing retirement — at several institutions basically sabotage departments, programs, and centers that they’ve built rather than anoint and mentor younger faculty to take the reins. For the last several years, I’ve been trying to think through what I’ve seen in the university, particularly around the transition from an older generation of scholars (mostly Baby Boomers) to people of my generation (Gen Xers, although I think I’m on the tail end of the spread). Why not hand off rather than let things fall apart?

It comes in many forms. The benign neglect of not having faculty meetings to talk about necessary changes to the curriculum as faculty retire. The secrecy — if not outright denial — about faculty retirements and when they’ll happen. The gatekeeping that insists on junior faculty consulting senior faculty about the classes they want to teach or the improvements they seek to make. The lack of actual mentoring on the part of senior faculty toward their juniors. The deliberate ambiguity about institutional expectations and opportunities, spanning everything from tenure requirements to the availability of resources. And then there are the more aggressive and deliberate actions that some faculty take: spiking junior faculty’s tenure cases, arguing against diversity hires as unmerited, withholding access to resources, and running centers, programs, and departments aground rather than helping steer them in a new direction.

It’s hard not to see some of this behavior as a function of the changing demographics of faculty hires, including shifts in the representation of gender, sexuality, race, ethnicity, and disability, as well as a greater diversity in the institutions that are producing Ph.D.s. Visions of what the discipline of Anthropology (and probably every discipline) is and will be are changing, sometimes radically. I can imagine that for many senior faculty, seeing these changes occur is alienating, and, for some, deeply distressing. Which all has me thinking that part of the generational transfer has to be some collective ego work: labor to help make evident to senior faculty that their lifetime of contributions to the field are vital, and also work on the part of younger faculty to articulate visions of Anthropology (and other disciplines) that redevelop the canon to acknowledge the generations before us while developing supple visions of the disciplines that build upon their pasts, address present needs, and develop livable futures.

Taken from the Louvre’s archives, that’s an image from an ancient Greek vase depicting a relay race (between naked Greek men, to be specific about it).

Years ago, a senior faculty member I knew well retired as soon as he could. His rationale was that he had spent the last 30+ years trying to build a specific vision of anthropology, and after decades of frustration with the institutions he was a part of and the colleagues he had, he was just done. He could have coasted for several years, teaching a set of courses he cared about, but he preferred to cut himself loose from the institution, travel, and write. I really admired his graceful exit.

Before that, a group of senior faculty I knew (different institution, different time) were dealing with the demographic and institutional shift among the faculty by thwarting junior faculty efforts to hire even more diverse faculty. It was only when a couple of the senior faculty broke ranks — acknowledging that the department wasn’t really theirs to build any longer — that the junior faculty had enough of a quorum to make the hires they wanted to. One of the junior faculty described it all as a problem of grace — that some people couldn’t manage the intergenerational transfer gracefully.

I’ve recently become aware of way more younger faculty quitting their academic jobs. Maybe this always happened, and I didn’t see it. But I know personally of several faculty in tenure track jobs (some tenured) who have either quit without a job lined up or have made a calculated exit from academia. And the internet is littered with additional examples. It’s hard for me not to see quitting as a response to institutions that people don’t see futures in, institutions where they don’t feel they have the mentoring or support to make a livable future. Somehow I have a hard time seeing quitting as a form of grace.

The problem with both my impulse to interpret my senior colleague’s exit as graceful and that junior faculty member’s impulse to interpret senior faculty as lacking grace is that both place the onus on specific faculty to behave in particular ways. If we’re going to navigate this intergenerational moment generatively, it’s going to be through collaboration, not individual choices.

That all said, I’m not sure what the right way forward is. I do know that universities are decidedly conservative institutions, and that incrementalism is probably the only sustainable way forward. What might that look like?

Develop sustained dialogues between junior and senior faculty. That might be through workshops or conference panels, or, locally, by having faculty give guest lectures in each other’s courses or discuss their work in seminars. Having a regular space to come talk about ideas, one’s scholarship, and one’s place in the field keeps lines of dialogue open. It also makes it clear that whatever else is happening, there’s a relationship that’s being maintained between people who recognize one another as scholars (even if they might disagree as respected experts).

Collaborate intergenerationally, whether in writing, funds-seeking, or conference planning. I don’t doubt the first two of these can be hard, but they might be sites where very deliberate mentorship can happen. Working on panels for conferences together (or local workshops) can serve as a way to introduce each other to networks of scholars with shared interests.

Share writing. Writing is so central to the profession, and, for better and worse, to people’s relationships with their work. Sharing writing, often without the pretense of needing anything like feedback, helps to keep lines of communication open, but also to develop expanding networks of connection between scholars across generations.

Create structures of care, which can range from the occasional check-in email to a meal or drinks, or even home visits if you’re familiar enough. Some of the best, most humane interactions I’ve had with other faculty have been in one-on-one meals or drinks — not dinner or department parties — and they’ve produced some of the most lasting scholarly friendships I have with people more senior than me.

I’ve been very deliberate in pitching these suggestions without presuming that invitations need to come from juniors to seniors or vice versa. Kindness is the rule, and building a sustainable future depends on actors across generations working together to have something to hand off to the generations to come.

Other ideas? Other good experiences? Please post them in the comments.

The Mentoring Compact

Faculty mentoring of graduate students is one of those things that is rightly the subject of recurrent conversation; there are good mentors and bad, lack of clarity in faculty expectations and student responses, sustainable and deeply-broken models of graduate student training, all of which seem to perpetuate themselves (often unreflexively). The faculty who are best at mentoring recognize that it is a dynamic process, that no one model works for all students, and, moreover, that the process of mentoring students leads to new techniques and understandings of the process. Sometimes it takes graduate students to precipitate some faculty growth. That all said, this is what I’ve learned in my eight years of graduate study and eleven years of working with graduate students, which I offer as a two-sided compact: what students should do, and what faculty should provide:

That’s Yoda on Luke’s back — maybe as a metaphor of the interdependency of mentor and mentee?

Students: Keep lines of communication open with your adviser and committee. If you’re a graduate student, don’t wait for your adviser or committee to contact you. Instead, make a regular practice of keeping people up to date about what you’re doing and how things are going. I make this suggestion because I’ve found that when things start to go poorly for graduate students (during grant writing, dissertation research, dissertation writing, job seeking, etc.), many students take to not communicating with their committee, often, it seems, out of fear of communicating that things are going poorly. If you send your adviser a monthly email keeping them abreast of what’s happening, it keeps lines of communication open and ensures that when difficulties arise, there’s already a channel open. (Other committee members might receive email every four or six months.) Just answer these three questions: 1) what have you been working on?, 2) what problems have you faced?, 3) how have you addressed those problems? (#3 is a good place to ask for help, if needed.)

Students might worry that sending advisers and committee members emails obliges them to respond, thereby creating unnecessary work for faculty, but it’s okay to preface emails like this with something along the lines of “There’s no need to respond to this email; I’m just writing to keep you in the loop.” Most faculty, I’m sure, will take the opportunity to not respond, but know that faculty are keeping students in mind when they receive emails like this.

(I’ve thought about writing a contract with graduate students, part of which would give them an automatic out of the advising relationship. For example, if you’re my advisee and I don’t hear from you for six months, then I assume I’m no longer your adviser. I’ve watched students struggle with taking faculty off their committees, often because the lines of communication between faculty and student are troubled. But I’ve not gotten to an actual contract yet…)

Faculty: Have guidelines for responding to student emails. I tell my advisees that I’ll always respond to an email within 24 hours (unless it’s the weekend or I’m traveling); if I’m a committee member, it’s no more than 72 hours; if I’m just some random faculty member, it can be up to a week. If it’s an actual emergency — and I can do something about it — I’ll break these guidelines. If I’m going to be running late because I want to be thorough in my response, I always make sure to send an email to that effect. That said, I try and abide by a minimalist email policy and send as few emails as possible (if only to have a very clear and direct chain of communication). Only when students start working on their dissertations do I give them my phone number, since I assume that before that the kind of help I can provide is largely bureaucratic (i.e. email and meeting based).

Students: Do what faculty ask you to do. One of the recurrent sources of frustration voiced by faculty who work with graduate students is that students come seeking advice, faculty do a lot of work in making suggestions and providing feedback and resources, and then students don’t follow through by doing what faculty ask them to do. Even if you think the suggestion is off base, it’s better to do the work than to avoid it; showing a faculty member that you did the work and proving that the suggestion was insufficient or off base is a clearer demonstration of the weakness of a suggestion than not taking the suggestion seriously. If you can’t do something, it’s so much better to explain why you can’t than to just not do it (which open lines of communication can facilitate). If an assignment (or job task, like grading) has a clear set of instructions, follow the instructions as provided. Again, it’s better to show the shortcomings of the instructions by following them than to let faculty think you’re just lazy and trying to find workarounds.

Faculty: Be clear in your expectations and provide instructive guidelines. When I have teaching assistants grading for me, I provide them with very clear rubrics to use; when I am helping students generate reading lists, I’m very clear about how many items should be on it, and what kinds of things those readings should be (e.g. books, chapters, articles, etc.). I find that being very clear in my expectations helps students immensely, and that when they don’t follow the instructions I’ve provided, I can point to the instructions as the basis of our next interaction (see below). I try and take notes of my conversations with students and provide them with a copy of those notes after the meeting (either in writing or via email) so that I know we’re on the same page.

Students: Give faculty lead time to prepare themselves for what you need. There’s very little I find as frustrating as someone else’s deadline being imposed on my work schedule. Having students give me something they’re seeking feedback on shortly before the due date is a case in point: if you want careful reading and generative feedback, I probably need a week to fit it into my schedule and make sure I give it the time it needs. Preparing faculty for upcoming deadlines and the prospect that you’re going to send them something ensures that you get the attention you want. This might be something to communicate in a monthly email (e.g. “I have a fellowship deadline at the end of the month and plan to send you the application in two weeks.”), and is definitely something to give people at least a week or more to prepare for. If it’s big — like a dissertation draft — give them a month or more to prepare for it.

Faculty: Tell students what their windows of opportunity are. I’m pretty regimented in my work planning, and I imagine most faculty are. Because of that, I know — roughly — when I’m going to have more or less time to give students feedback, set up meetings, etc. At the beginning of the semester, I try and give my students a sense of when these windows will be, and try and set up deadlines around them. For example, when I know a student is going to be sending me a dissertation draft, I let that student know when I’ll have a week to dedicate to reading it and commenting on it. If they miss the window, I’ll still get the work done, but I’m clear that it will take me a little longer than if I have it in the window.

Students: Plan to educate faculty on standards and policies. This is especially true for faculty new to your institution or in departments other than your own: faculty tend not to know the policies that dictate student lives. If you can provide them with written documentation (i.e. from a graduate student handbook), it can go a long way to clarifying faculty expectations of your work. If standards vary from policies, then you also want proof of that (e.g. if the graduate handbook says comprehensive exams comprise 100 texts but everyone actually does 75, bring some recently defended comprehensive exam reading lists to talk through). Faculty may not vary from the policy as written, but if there is an emerging norm, you’ll want them to know about it and have proof that it exists.

Faculty: Provide a material basis for meetings. I find that having some kind of written product to talk through with students makes meetings feel much more productive than not having something to focus on. This can be a dissertation proposal, a grant application, a reading list, an annotated bibliography, an article manuscript, something you’ve both read recently, etc. Having something concrete on the table ensures that the conversation is well focused and that there’s a direct outcome of the meeting. There can be small talk too, but having a clear work plan for you and the student helps to make sure that there are deliverables and that the student has the feeling of being materially supported.

Elsewhere, I’ve provided some guidelines for thinking about how to compose a dissertation committee, and what the overall professionalization timeline might look like given today’s academic job market. The latter might be especially helpful in thinking about the material basis of meetings and to provide a trajectory for the mentoring relationship (at least during grad school). Other tips? Insight? Post them in the comments.

On Having an Ax to Grind

“Productive scholars have an ax to grind.” That was a lesson imparted to me by one of my undergraduate mentors, Brian Murphy. We were walking across campus during my senior year, and I had been talking about the possibility of pursuing some kind of graduate degree, at the time in Literature. Brian was narrating how, despite enjoying the scholarly work he had done throughout his career, he never felt particularly driven to participate in the arguments motivating many scholars in the discipline. (Little did I know that the postmodernism debate was in full rage mode at the time.) Frankly, I didn’t really know what to do with the advice at the time, but I tucked it away.

A man comes to get his ax ground, sometime in the Middle Ages? (From Married to the Sea)

While I was working on the Master’s degree that followed (in Science Fiction Studies at the University of Liverpool), I had things I was interested in, but the work was driven more by curiosity and expediency than by a real argument I wanted to make. Over time, the thesis I wrote there developed more of an argument and ended up being publishable as a couple of articles about superhero utopias and the role of law and capital in superhero comics. But to this day, I’m not sure I have much of an ax to grind when it comes to superheroes.

It was while working on revising that content that I received a second piece of advice, this time from Hai Ren, a faculty member I worked with at Bowling Green. Hai suggested that to write a dissertation, one needed “three theorists.” Hai’s point, as I understood it, was that you need some parameters on the ideas that you’re working with, and that having three theorists — whom, he suggested, one reads in their entirety (cue the qualifying exam reading list) — gave a writer the ability to play off differences and consensus between sets of theory. If I wasn’t sure what ax I had to grind, Hai gave me a way to craft one.

I’ve made the same recommendations to students over the years, but I add that the theories that one adopts should really be ontologically compatible. So monists and dualists don’t go together, nor do communists and free marketeers, nor biological determinists and social constructionists, etc. I had started thinking about this after reading Judith Butler’s The Psychic Life of Power, where she draws together Freud, Lacan, Bourdieu, Foucault, Kristeva, Irigaray, and Hegel (and others, I’m sure, but memory fails me). Butler’s “toolbox” approach struck me as eliding the profound differences between a thinker like Freud, who really believes in some form of biological determinism, and Foucault, who really does not. You can put them together, but you can’t really build a sound theory out of them because the ontologies don’t fit together. That is, unless you find ways to treat some thinkers as existing within an ontological paradigm developed by others whom you take more seriously (e.g. Freud’s use of biology is a form of Foucauldian discourse and not really materially reductive; but I’m skeptical).

If you go look at the introduction to The Slumbering Masses, I’m pretty explicit about using three sets of theory and having an ax to grind: I’m trying to work through the overlap between Bruno Latour, Bernard Stiegler, and Gilles Deleuze & Felix Guattari, and I’m trying to bring them to bear on how we conceptualize the interaction between medicine and capitalism in the U.S. That means, in part, that I’m rejecting medicalization as a way to think about human nature and its interactions with capitalist forms of medicine (which you can read about here).

That doesn’t mean that I’m only working through the theories that come out of those four, white, relatively elite, able-bodied, heterosexual French men. I’m intensely aware of who these men were (and are), and use their monism to engage with other thinkers (especially Genevieve Lloyd and Moira Gatens, two Australian Spinozist philosophers) and the subfield interests I have (especially science and technology studies and feminist medical anthropology). Those engagements helped me suss out things from the theories I was using and guided me through my interactions with those rather large fields of literature. It also gave me a way to talk about things like my “contributions” to the field and the “significance” of my research (scare quotes to denote my general skepticism of that kind of grant-speak criteria). In saturated areas of study, being clear about your theoretical commitments can also make clear what you’re doing differently than other people working on the same topic or area of study.

I try and get students to think about what they believe. Stop thinking through the ecumenical polytheism of graduate study, and consider what kind of world you want to make with your scholarship. What is the ontology that you’re committed to? And who are the right thinkers to join to that project? It shouldn’t just be people that you enjoy reading, but people (and sets of theories) that you fundamentally share a common sensibility with. In committing to a set of thinkers, what differences can you map out between them and how might they guide your interactions with key concepts in your field? That can provide a ton of grist for the mill, both in terms of the initial dissertation, but also in articles and other spin-off projects.

I have other axes to grind — especially around racism in science and medicine — and those too are informed by my theoretical commitments. Having a pretty solidly determined ontological commitment gives me a framework to engage with whatever springs up. And, over time, I’ve changed the people most central to the projects I work on now. But having a set of theoretical commitments helps to guide what and how I read as well as the kinds of questions I ask about the phenomena that I’m drawn to work on.

It took a while, but now I have axes to grind…

Experimenting with Montessori in the College Classroom

When my first child enrolled in preschool, I started to get interested in Montessori approaches to education. As a pedagogical practice, Montessori approaches have largely focused on preschool and elementary education, but there’s a growing interest in finding ways to apply Montessori approaches to higher education — through middle and high school, as well as in the university classroom. In Fall 2018, I taught a small Social Studies of Science and Technology class where I decided to experiment with a more Montessori-like approach. At its heart, Montessori education seeks to instill in students the ability to ask their own questions of the course material, and to facilitate their finding answers to those questions. Rather than impose expectations of content from above through the lecturer-expert instructor, a Montessori approach seeks to create a more symmetrical relationship between the instructor and students. Overall, it seemed to work pretty well.

That would be Theseus following Ariadne’s thread straight into the Minotaur’s lair. A bit of a metaphor, maybe?

The course ended up only having five students (down from a peak of 13), and they were sophomores and juniors, only one of whom was an Anthropology major (which matters only because it was an Anthropology class, ostensibly). I declined to put General Education distributions on the class, which likely kept the enrollment low since students are Gen Ed hungry at my institution. I have a sneaking suspicion that the format of the class might have also turned off some students, especially since my recent experience is that many students just want to be lectured to (hence my experimenting with a format like this). As a result, I thought the class was a little too small, and that a larger class (more like 12-15 students) would have worked better. As it was, one student was really engaged, and most of the other students seemed to be along for the ride… (You can see a copy of the final version of the syllabus here, as well as a list of the guiding questions and threshold prompts.)

This is how I structured the class: We started by reading a book that I hoped would set the tone for the class, and also help the students generate a list of “guiding questions.” That book was James Jones’ Bad Blood, which is about the Tuskegee syphilis “experiment” and the long history of medical and scientific abuses of racialized bodies in the United States. It raises all sorts of questions about race, objectivity, data, ethics, media, and methodology, and is written for a pretty general audience. We took a class period to watch Nova’s The Deadly Deception episode, which interviews Jones and brings the book’s project up to date (circa 1993). We then took a day in class to generate a list of guiding questions that would help frame the next section of class. This all took about a week and a half.

As we prepared for the next section of class, we read the Introductions (and sometimes first chapters) of several books — Helen Verran’s Science and an African Logic, Kim TallBear’s Native American DNA, Priscilla Song’s Biomedical Odysseys, Warwick Anderson’s Colonial Pathologies, and Mario Biagioli’s Science Studies Reader. My goals in doing so were to give the students a sense of the breadth of possible topics that might be covered, as well as the shared mission of science studies scholars. We spent a lot of time talking about methods and the citational practices of each of the authors. Who was being cited, how were they being cited, and what specific articles, chapters, or books were being discussed?

That led us to generate a list of authors and readings, shaped by our guiding questions. As much as we could, we relied on the Science Studies Reader to provide us with readings, which was easy for a lot of the most canonical content (e.g. Bruno Latour, Michel Callon, Donna Haraway, Sandra Harding, etc.). It also led us to select readings from the Science Studies Reader that I normally wouldn’t have picked, but which made sense given our guiding questions. Toward the end of this section, we again took a day to generate a list of updated guiding questions, which revised the earlier list with some nuance and added several new questions.

At this point, the students had their first assessment, a written Threshold assignment. Each of the Thresholds — and there were two more throughout the semester — posed a big question and asked the students to pair guiding questions to it in search of an answer. Students were free to use any of the course materials from the preceding section by way of providing their answer, and could pair the guiding questions in any way they wanted to. Over time, they had more and more guiding questions to draw from, and at only one point did I ask them not to use a specific guiding question in a Threshold paper (because they had all used it once already).

Since I had demonstrated to them how to go about finding readings associated with the guiding questions (through our citation tracking), the next section of the class was curated by the students. They picked guiding questions they wanted to seek answers to, and I paired them together based on their shared interests. In a larger class, the groupings would have been larger, and this section of the class would have lasted longer. Each student was responsible for presenting a reading to the class, and they could draw on any of the books we had on hand, as well as articles and book chapters they located through library searches. The result was way more diverse than I would have ever planned — they picked scholars, topics, and readings that I had never encountered.

The presentations — like all student presentations — were a mixed bag. But when I needed to, I intervened to keep things on track and get students to think about the connections between the stuff they had picked and the other course content. And throughout the course, I sometimes stepped into the role of lecturer, especially when they were encountering difficult content for the first time. This didn’t stop during the presentations, but I tended to use my interventions sparingly and definitely let students feel the pressure of being underprepared.

Because the group was rather small, the time I had set aside in the syllabus for student presentations was too much. As we approached the end of the student presentations, I asked the students what they were curious about and we generated a list of topics. It ended up resolving into a section on bodies as epistemic objects, and we covered a wide variety of kinds of bodies, from the microbial to the human to the planetary.

Opening the class to student curiosities — and supporting their labor — definitely resulted in a different class than I would have planned on my own. That said, in many ways the kinds of questions and answers the students generated were along the lines I had hoped they would be, but the ways they chose to get there varied (especially in terms of the readings they chose). A larger version of the same class would have probably been much more dynamic. I doubt such an approach would work for a class larger than ~25 students, but maybe…? If you experiment with classroom approaches like this, let me know — I’m really curious about how it might be refined.