1. Are nonhuman animals capable of doing ethics? Don’t primates for instance share some of our ethical affordances, e.g. status hierarchies?
The concept of affordance distinguishes between possibilities and actualities. For instance, certain capacities and propensities, such as empathic responses, loyalty, fairness, impulsive sharing, and third-party norm enforcement, appear very early in child development and are therefore good candidates for being human universals. Some observers think that these propensities are already ethical. I argue against this claim. One reason is that it would, in the end, amount to making ethics automatic, much like wincing or the sexual response to pheromones. By treating ethics in deterministic terms, we eliminate two aspects that I consider definitive of the idea: judgment and historicity. These universals are necessary but not sufficient for ethical life. Rather, what they offer is affordances, possibilities that people can ignore, resist, or take up in unpredictable ways. So, granted that primates seem to share certain features with humans, such as status hierarchies, these remain merely affordances. To call them “ethical” requires some evidence that primates take them up (or not) in ways that involve a capacity to evaluate situations or actors. Although there is a range of opinions among primatologists, the general consensus seems to be that there is no evidence that higher primates have feelings such as a sense of fairness or betrayal, or are in any other way able to reflect on social relations or actions in terms of values as such.
2. Are the grammatical persons you’re describing (“I,” “You,” “They”) meant to literally be functions of language or are they analogies for styles of social interaction? (Put differently, are they meant to describe *talk about* ethics/morality or to describe types of interaction that are not necessarily discursive but still symbolic?)
I use the terms metaphorically. All languages have some version of first, second, and third person pronouns to indicate one’s place in a conversation (speaker, addressee, or outside the conversation). The pronouns therefore recognize possible stances within a social situation. But these stances need not be marked linguistically. Put simply, the stances are (1) being the subject of an action oneself, (2) being a significant other for that subject, and (3) being outside and uninvolved. The second person is someone through whose eyes I see myself, who in turn can trade places with me, becoming the first person for whom I am the addressee. The second person is someone to whom I (the first person) owe an accounting of my own actions or those of other people. The giving of accounts, such as taking responsibility or denying it, or characterizing an action by means of excuses, justifications, praise, and so forth, is a catalyst for ethical reflection. The semiotic realization of these stances can vary widely. But they are not about ethics: they are affordances for the taking of ethical stances of various sorts. For instance, utilitarianism favors the third person stance, in which the general good outweighs any personal relationships and commitments I might have as the first person subject of my own life, or toward specific second persons. The potential for taking a third person stance is important (e.g., for altruism), but it cannot account for ethical life all by itself, since it’s by virtue of the first and second person stances that people are likely to care about ethical matters in the first place.
3. Can you say some more about how participation in organizations (bureaucracies, etc.) might affect the capacity to move between ethical stances? How, more broadly, does institutionalization of norms (in things like law) work?
Institutionalized norms such as law have the status of what I’ve called “historical objects.” By this I mean they circulate in public (they aren’t just private or psychological in nature), they endure over time (they aren’t just manifest in the moment of a moral decision), and they are subject to historical changes. Being objectified, they are easy to perceive, to cognize, to teach, and to debate. It’s only after you have explicit ways of talking about sexism that people can actively criticize or defend certain ways of behaving. And it’s only once you have a legal code that you can really challenge it. At the same time, of course, objectified norms also constrain people’s perspectives, intuitions, and abilities to act. It takes more effort to resist or avoid them than not to. When formal norms and one’s ethical intuitions collide, one is faced with the options that A.O. Hirschman famously described in Exit, Voice, and Loyalty. And participation in an organization typically involves a distribution of agency and responsibility: you can, for instance, deny culpability for actions taken by an organization in which you consider yourself to be merely a cog, or a good soldier. In addition, institutionalized norms also become associated with rules, judges, and punishments that are external to the ethical subject, which is part of what virtue ethicists like Bernard Williams object to in much deontological thinking. There are two objections. First, that people might act not from a sense of values but from obedience and fear of punishment. Second, that ethics is treated as a constraint on one’s willfulness rather than as part of human flourishing.
4. Okay, that leads to a good empirical follow-up: Where do you see the possibilities for empirical work that would map these connections and make the linkages between the requirements for flourishing and the diversity of actual judgments and behavior?
Well, as an anthropologist, I am constitutionally wary of saying that there are certain specific conditions for human flourishing in all times and places, and that I can know for sure what they are. The ethnographic and historical evidence suggests not only that there are multiple ways for humans to flourish, but that some may be irreconcilable with one another. For instance, if autonomy is your highest value, can you also give equal standing to loyalty and solidarity? That’s an empirical question. But there’s also another principle at stake. If you want your empirical research to give you an ethics, that would be tantamount to asking it to guarantee that your ethics is the right one, to grant your ethics the authority to supersede the ethics of other people. I think we should be very nervous about the temptation to make that authority claim. However, that doesn’t get us off the hook; radical relativism isn’t a way out. We can’t avoid having an ethical stance, we just shouldn’t expect to ground it in some ultimate authority. So what can we ask of empirical research? One approach is to look for evidence of ethical failure and responses to it. What are the circumstances in which people themselves come to know that there are conditions for human flourishing that their own social reality fails to meet, and what do they do to change that? I give a few examples of this in my book. Among these are American feminist consciousness-raising in the 1970s, the Vietnamese communist revolution, an Islamic piety movement in Egypt, and a Melanesian community that suddenly converted to charismatic Christianity. What’s especially interesting are historical situations in which existing norms, to which no one had felt any strong objections for a long period of time, come to feel unbearable. The abolition of slavery is an obvious example.
Here the empirical work should home in on an immanent critique, rather than seeking an external platform from which to judge social worlds sub specie aeternitatis. Note that this doesn’t recommend facile cultural relativism. Relativism tends to be based on two mistaken assumptions: first, that societies lack internal ethical conflicts, and second, that self-critique and transformation aren’t normal, ongoing features of life in every society. All the ethnographic evidence suggests the contrary.
5. How do you evaluate critical realism as a meta-theory for a political ethics of human emancipation?
One attraction of critical realism (taken in rather broad terms) is that it provides an alternative to both naturalistic determinism and strong versions of cultural construction. The former seems to eliminate any serious role for humans as self-interpreting agents. The latter seems to impose insuperable barriers between different social realities, and to make it virtually impossible to offer any principled grounds for social critique. The idea of emergence seems both persuasive and promising, as long as one isn’t hoping for final guarantees: there’s no end to history.
We are very grateful to Professor Webb Keane for taking the time to follow up on these questions.