Emily T. Troscianko 

Written after Humanities Forward: Opportunities and Challenges for the Next Twenty Years, University of Oxford, 13-14 May 2023. A 9-years-later sequel to another manifesto, here.

Passion seems to be expected of us a lot more in the humanities than in other fields. But it’s not always clear whether we’re meant to love the artefacts we’re studying or the act of studying them, or both. Maybe the assumption is that the first leads inevitably to the second? It clearly doesn’t, though. And maybe expecting either causes problems.

Like most people’s, a lot of my research choices have been guided by what I like (literature more than maths, German more than French, Kafka more than Fontane, etc.), though at age 18 I also made a point of not studying English lit in order not to “ruin the pleasure”. Increasingly, though, my research methods have been guided by a growing suspicion that liking leads us astray. Not just in the sense that I wish I hadn’t had the option to acquire no stats, coding, or other nonlinguistic competencies after age 16. But in the sense that liking makes people think they know things they don’t know.

For example, liking reading stories makes people think it must be good for them (and for everyone else). The research does not bear this out—partly because of course it’s found much more equivocal things than this, partly because it hasn’t been done much (because everyone’s so convinced they already know). The state of mental health in literature faculties is not the best anecdotal evidence for the strong version of the “literature is good” claim (though I suppose you can always roll out the “but how much worse would we be without it?” defence). And the fact that many psychotherapists—who, let’s remember, have also made career choices that reflect a high valuation and enjoyment of the spoken and written word—use or recommend aesthetic artefacts for therapeutic purposes doesn’t automatically make it true. They might do far better to give the session time to habit change than to poetry, but the A/B test is rarely done.

Also, liking reading stories (or indeed looking at paintings or listening to music) makes people think that reading a story and then saying what they like about the story they just read constitutes research—and then makes them annoyed when funding for that enjoyable pastime starts drying up.

And then, really liking reading stories makes people think we shouldn’t look too closely at them. The impulse to attribute all kinds of slightly mysterious goodness to literature coexists oddly with a prudish sense that we mustn’t call it dirty words like “useful” (I guess in part because “useful” is an attribute that encourages us to look closer and find out what the use really is). Objections to unweaving the rainbow by conducting empirical research on literature’s causes and effects get very impassioned, especially amongst those who are sure it has lots of important ones.

There’s a structural solution to this tangled knot of problems: Step up a level from using interpretation as a method of study (take a text/image, conclude what it means) to treating interpretation as the object of study (take a text/image and a bunch of responses to it, conclude what we can about correlations; even better, control for confounding variables when gathering that bunch of responses, and conclude what we can about causes and effects). 

Taking up the meta-stance on interpretation instead of just happily using it is a lot less likely to lead to unhealthy and misleading entanglements of research agendas with personal investments. This isn’t to say that the latter get ignored or denied; on the contrary, it means we can let them play the right roles. We can observe our reasons for caring about this question or this artefact (anything from “I made it” to “I experience(d) something it evokes”), and we can treat them as hypotheses to feed into our research design (e.g. “I designed it to achieve x and avoid y, so let’s see whether I succeeded” or “I experienced something this evokes, so other people are likely to respond with more of x and less of y”). And then we can broaden out beyond our own experiences and hunches and see whether they amount to anything.

If we make this move, we become researchers rather than artists or hobbyists (nothing against either, but they’re not humanities researchers). 

So we stop contributing to the endless mixup between the relative goods of the arts and the humanities, which probably arises mostly from a bad-faith inkling in the humanities that everyone likes the arts, so let’s just pretend that’s us.

We stop pretending that etymology gives us some kind of weird monopoly on knowing what it means to be human, or on making being human any easier. Various versions of psychology arguably get at both much more directly than happens if we take the detour via art.

We probably get disciplines beyond the humanities involved, because empirical question-answering that goes beyond a singular subjectivity is something other fields have a lot to teach us about. 

Then we start making ourselves really uncomfortable by reconsidering how we create knowledge and what it means to know something. (If there’s one thing I really hate when I’ve spent months designing and running an experiment, it’s someone who has never bothered to do that saying “we knew that already”. No you didn’t! You had a hunch based on anecdata, usually just your own.)

We get less invested in asking endless questions and defending their unanswerability, and expecting people to pay us for it. (Some of my more antagonistically formative moments as a grad student came thanks to eminent professors who literally shuddered behind the lectern at the idea of anyone, let alone an impertinent 25-year-old, thinking about methods for actually answering any of the questions they’d just spent 50 minutes posing.) 

We get more invested in posing questions answerably and then going ahead and doing the hard time-consuming work of finding (sure, provisional, incomplete) answers. 

We ask, fundamentally, why we love this. Or why we don’t. Or why we’re inclined to say we do—possibly to make ourselves feel like we belong, and/or to signal a particular kind of sensibility-infused status? And then we try to find out.

And, where research hits lifestyle, maybe we also get rid of the millstone of “this is my calling” and “academic stuff is all I could ever do”. Maybe we stop feeling guilty about leaving academia when we conclude that other sectors probably offer better cost/benefit contours for our remaining decades. Maybe we get more creative about carving out niches in the alt-ac borderlands.

I’m biased, but if we’re making predictions about the survival of the humanities in any recognisable form, even into the next century, I think we’ll need plenty of radical interdisciplinarity (not history meets art history, but control engineering meets cognitive literary studies, say). And I think we’ll also need a blossoming of alt-ac careers that put humanities skills to work. We’re all used to computer scientists dreaming up startups and consulting for big tech, but humanities research can feed into valuable and pleasurable careers too, and the other parts of those careers can enrich the research.

Not having a salary for doing research forces us to ask very different kinds of question about the purpose and value of doing it, and answering those questions tends to make the research better. “What am I willing to spend my actual money on?” is always a nicely refreshing question. I often wonder how much academic humanities research would melt away if it cost researchers their own money to do it, and what would survive, and why. Not because everything has to come back to money, but because a lot of things do. This relates to a thought experiment my 2014 manifesto coauthor likes to pose: What research would get done and published if you couldn’t put your own name to it, if the status games all just fell away? And to another proposed by my partner: If you could only publish 10 things in your lifetime, which would make the cut and how, at each decision point, would you make the call?

So, I’m looking forward to seeing how humanities-inflected borderlands careers will grow and change in the decades ahead. Junior researchers, i.e. those for whom the previously standard academic career model now has an obviously pretty tiny chance of working out well, have a lot to gain and rather little to lose from getting ambitious about the interplay between “what do I want to find out?” and “how do I want to live?”.

Thanks to Stephan Nitu, Dan Grimley, Gervase Rosser, Elleke Boehmer, Arlene Holmes-Henderson, and others at the roundtable for the impetus.