Some noise on signaling

Summary: I ask what signaling is and argue that it’s a bad idea to simply accuse people of “signaling” because signaling can mean a lot of things. I also argue that not all signaling is bad.

Sometimes when people publicly give to charity or adopt a vegan diet or support a cause like Black Lives Matter, they get accused of ‘virtue signaling’. This is a criticism that’s always bothered me for reasons that I couldn’t quite articulate. I think I’ve now identified why it bothers me and why I think that we should avoid blanket claims that someone is ‘signaling’ or ‘virtue signaling’. In order to make things clear, I’m going to give a broad definition of signaling and note the various ways that one could adjust this definition. I’m then going to explain what I think the conditions are for signaling in a way that is morally blameworthy and the difficulties involved in distinguishing blameworthy signaling from blameless behavior that is superficially similar.

In order to discuss signaling with any clarity, we need to try to give some account of what a signal is. The term was originally introduced by Spence (1973), who relied on an implicit definition. I’m not a fan of implicit definitions, so I’m going to attempt to give an explicit definition that is as broad and clear as I can muster, given that ‘signal’ and ‘signaling’ are now terms in ordinary parlance as well as in different academic fields.

A signal is, at base, a piece of evidence. But we don’t want to call any piece of evidence a signal. For one thing, a signal is usually sent by one party (the sender) and received by another (the recipient), so it’s communicated evidence. Moreover, the evidence is usually about a property of the sender: I can signal that I’m hungry or that I’m a good piano player or that I like you, but not that the sky is blue or that it will rain tomorrow (we can imagine an even broader definition of a signal that includes these, but let’s grant this restriction). And we communicate this evidence in various ways: by, for example, undertaking certain actions, saying certain things, or having certain properties. Putting all this together, let’s give the following definition of a signal:

A signal is an action, statement, or property of the sender that communicates to the receiver some evidence that the sender has some property p.

Note that, under this broad definition, ‘trivial signals’ and ‘costless signals’ are possible: we can signal that we have a property by simply having it. We can also signal things at no cost to ourselves. I don’t think this is a problem: most of the signals we’re interested in just happen to be non-trivial signals or costly signals (e.g. incurring a cost to convey private information).

Of course, one way we give information about ourselves is by simply telling people. If I’m hungry, I can turn to you and say “I’m hungry”. In doing so, I give you testimonial evidence that I’m hungry. Because you’re a good Bayesian, how much you will increase your credence that I’m hungry given this evidence depends on (a) how likely you think it is that I’m hungry (you’re less likely to believe me if I just ate a large meal) and (b) how likely you think it is that I’d say I’m hungry if I wasn’t (you’re less likely to believe me if I have a habit of lying about how hungry I am, or if I have an incentive to lie to you in this case). And sometimes my testimony that I have a given property just won’t be sufficient to convince you to a high enough degree that I have the property in question. For example, if I’m interviewing for a job, it’s probably not sufficient for me to say to you “trust me, I know python inside and out” because it’s not that common to know python inside and out, and I have a strong incentive to deceive you into thinking I know more about python than I actually do. (As a side note: this gives us a strong incentive to adopt fairly strong honesty norms: if you’re known to be honest and accurate about your abilities and properties even when you have an incentive to lie, you’ll have to rely less on non-testimonial signals of those abilities and properties.)
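
To make the Bayesian point concrete, here is a minimal sketch of the update; the probabilities are invented purely for illustration:

```python
# Toy Bayesian update on testimony (illustrative numbers only).
# p_hungry: prior probability that the speaker is hungry (point (a)).
# p_say_if_hungry: probability they'd say "I'm hungry" if they are.
# p_say_if_not: probability they'd say it anyway if they aren't (point (b)).

def posterior_hungry(p_hungry, p_say_if_hungry, p_say_if_not):
    """P(hungry | says "I'm hungry") via Bayes' rule."""
    p_say = p_say_if_hungry * p_hungry + p_say_if_not * (1 - p_hungry)
    return p_say_if_hungry * p_hungry / p_say

# A credible speaker, who rarely claims hunger falsely: testimony
# moves the receiver a long way.
print(posterior_hungry(0.3, 0.9, 0.05))  # ~0.89
# A speaker with an incentive to lie: testimony barely moves us.
print(posterior_hungry(0.3, 0.9, 0.8))   # ~0.33
```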

I know that others (like Robin Hanson, in this post) want to exclude direct testimony of the form “I have property p” as a signal. We could exclude this by adding to the definition the condition that “I have property p” isn’t the content of the agent’s assertion, but I think this is unnecessarily messy: it’s just that we’re less interested in signals that are given via direct testimony. Also, some cases of signaling do seem to involve assertions of this sort. If I find it very difficult to tell people I love them, then the act of saying “I love you” may be a very credible signal that I love you. It also happens to be the primary content of my assertion.

In cases where testimony alone can’t sufficiently raise someone’s credence that we have some property p, they require additional evidence before they can be sufficiently confident that we have it (where the property itself may be a gradational one: e.g., that I’m competent in python to degree n, and not just that I am competent in python simpliciter). For example, I can give you evidence of my competence in python by showing you my university transcripts, or simply by demonstrating my abilities. When I do so, I raise your credence that I am competent in python to the degree that you would require to give me a job, which I wasn’t able to do with testimony alone.

In scenarios like this, there’s an optimal credence for you to have in “Amanda has property p” from my perspective, and there’s an optimal credence for you to have in “Amanda has property p” from your perspective. You — the receiver — probably just want to have the most accurate credence that I have property p. Sometimes it’s going to be in my interest to communicate evidence that will give you a more accurate credence (e.g., if I genuinely know python well, I want to communicate evidence that will move you up from your low prior to one that is more accurate), but sometimes I want to make your credence less accurate (e.g., if I don’t know python that well, but I want to convince you to give me the job). Let’s say that the sender value of a signal is how valuable the resultant credence change is to the sender, and the accuracy of a signal is how much closer the signal moves the receiver towards having an accurate credence.
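
Here is a toy formalization of these two quantities; the payoff function and the particular credences are assumptions for illustration, not part of the definitions above:

```python
# Toy rendering of "accuracy" and "sender value" as defined above.
# Credences are probabilities that the sender has property p;
# `truth` is 1.0 if she actually has it, else 0.0.

def accuracy(prior, posterior, truth):
    """How much closer the signal moved the receiver to the truth.
    Positive: more accurate; negative: the signal was misleading."""
    return abs(prior - truth) - abs(posterior - truth)

def sender_value(prior, posterior, payoff):
    """Value to the sender of the credence change, given some
    (assumed) function from the receiver's credence to a payoff."""
    return payoff(posterior) - payoff(prior)

# I really do know python well (truth = 1.0): the signal is both
# accurate and valuable to me -- our interests are aligned.
print(accuracy(0.2, 0.8, 1.0))                    # 0.6
print(sender_value(0.2, 0.8, lambda c: 100 * c))  # 60.0
# I don't know python well (truth = 0.0) but send the same signal:
# still valuable to me, but it makes the receiver *less* accurate.
print(accuracy(0.2, 0.8, 0.0))                    # -0.6
```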

Hanson argues that we cannot signal that we have properties that are ‘easy to verify’, because if a property is easy to verify, then it is cheap for the receiver to check whether my signal is accurate. I think that it will often be less rational to send costly signals of properties that are easy to verify, but I don’t think we should make this part of the definition of a signal. Suppose that I am in a seminar, and I ask a naive question that any graduate student would be afraid to ask because it might make them look foolish. As a side effect of this, I might signal (or, rather, countersignal) that I am a tenured professor. Such a thing is easy enough to verify: someone could simply look up my name on a faculty list. So if my primary goal were to signal that I am a tenured professor, there are easier methods available to me than asking naive questions in seminars. But we can signal something even when doing so is not our primary goal. And this seems like a genuine instance of signaling that I am a tenured professor, despite the fact that this information is easily verifiable.

Finally, signals sometimes involve costs to the sender. Hanson argues that costly signals are required in cases where a property is more difficult to verify or cannot be verified soon. I think the details here are actually rather tricky, but one thing we can say is that the costlier it is for any receiver to verify that I have a given property, the higher the minimum absolute cost of sending a true signal will be. It doesn’t follow that sending the signal will be net costly to me, just that the absolute cost will be higher. For example, suppose that to be deemed competent as a pilot you need to do hundreds of hours of supervised flying (i.e., you can’t just take a one-time test to demonstrate that you’re a competent pilot). The property ‘is a competent pilot’ is then quite hard to verify, and so the cost of sending a true signal involves hundreds of hours of supervised flying. But if I love flying and am more than happy to pay the time and money cost to engage in supervised flying, then the net cost to me of sending the signal might be negligible or even zero, even though the absolute costs are quite high.

So far I have argued that a signal can simply be understood as an action, statement, or property of the sender that communicates to the receiver some evidence that the sender has some property p. Such signaling will be rational if the benefit that the sender acquires by sending the signal is greater than the cost of sending it. But one remaining question is whether signaling must be consciously or unconsciously motivated. By ‘motivated’ I just mean that the benefits of sending the signal are part of the agent’s reasons for undertaking a given action (e.g., doing something, speaking, acquiring a property). We might be unconsciously motivated by the signal value of something: for example, I might think that I’m playing the flute because I love it, even though I am unconsciously motivated by a desire to appear interesting or cultured. We can also be motivated to greater or lesser degrees by something: for example, it might turn out that if I could never actually demonstrate my flute-playing abilities to others, then I’d only reduce my flute-playing by 5%, in which case only 5% of my flute-playing was motivated by the signal value it generated.

I’m going to assume that signaling doesn’t require being motivated by signal value. This means that my signaling something can be a side-effect of something I would do for its own sake. Some people might think that in order for me to be ‘signaling’, sending the signal must be a sufficient part of my conscious or unconscious motivation: for example, it must be the case that I would not undertake the action were it not for the signaling value it afforded. If this is the case, then 5% of my flute playing would be signaling in the case above, while 95% of my playing would not be signaling. I can foresee difficulties for views that have either a counterfactual or threshold motivational requirement for signaling, and so I’m going to assume that I can signal without being motivated by signal value. The reader can decide whether they would want to classify unmotivated signaling as signaling (and economists seem to reserve the term for signals that are both motivated and costly).
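
As a concrete rendering of the flute example, here is a minimal sketch of one natural counterfactual measure of motivated signaling; the function and the hour figures are my own illustration, not anything the argument commits to:

```python
# A counterfactual measure of how much of a behavior is motivated by
# signal value (illustrating the flute example; figures are invented).

def motivated_fraction(amount_with_signal, amount_without_signal):
    """Share of the behavior that would disappear if it could never
    be observed, i.e., the share attributable to signal value."""
    return (amount_with_signal - amount_without_signal) / amount_with_signal

# If losing any chance to demonstrate my playing would cut my
# flute-playing from 100 hours to 95 hours a year:
print(motivated_fraction(100.0, 95.0))  # 0.05, i.e., 5% is signaling
```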

I think we can now divide signaling into four important categories that track how accurate the signal is (i.e., whether the sender actually has the property to the relevant degree) and how motivated the agent is by the signal value. I’ll label these as follows:

Innate signaling involves sending an accurate signal without being consciously or unconsciously motivated by sending the signal. If a child is hungry and eats some bread from the floor for this reason alone, then she is innately signaling hunger to anyone who sees her.

Honest signaling involves sending an accurate signal that one is consciously or unconsciously motivated by. If a child is hungry and eats some bread from the floor to show her parents that she is hungry, then she is honestly signaling hunger.

Deceptive signaling involves sending an inaccurate signal that one is consciously or unconsciously motivated by. If a child is not hungry and eats some bread from the floor to get her parents to believe that she is hungry and give her sweets, then she is deceptively signaling hunger.

Mistaken signaling involves sending an inaccurate signal that one is not consciously or unconsciously motivated by. If a child is not hungry and eats some bread from the floor because she is curious about the taste of bread that has fallen on the floor, then she is mistakenly signaling hunger to anyone who sees her.
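
Treating accuracy and motivation as simple booleans (a simplification; as the next paragraph notes, both really come in degrees), the four categories fall out of a two-by-two grid. Here is a minimal sketch:

```python
# The four categories as a function of the two dimensions.

def classify_signal(accurate: bool, motivated: bool) -> str:
    """accurate: does the sender have the property to the signaled degree?
    motivated: is the sender (consciously or unconsciously) motivated
    by the signal value?"""
    if accurate:
        return "honest" if motivated else "innate"
    return "deceptive" if motivated else "mistaken"

# The hungry-child examples:
print(classify_signal(accurate=True, motivated=False))   # innate
print(classify_signal(accurate=True, motivated=True))    # honest
print(classify_signal(accurate=False, motivated=True))   # deceptive
print(classify_signal(accurate=False, motivated=False))  # mistaken
```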

Since motivation and accuracy come in degrees, signaling behavior comes on a spectrum from more honest to more innate, and from more deceptive to more mistaken, and so on. (If you think that agents must be consciously or unconsciously motivated to send a signal in order for them to be signaling, then innate signaling and mistaken signaling will not count as signaling at all.)

So when is it unethical or blameworthy for agents to engage in signaling? It seems pretty clear that innate signaling will rarely be unethical or blameworthy. If an agent innately signals that she is selfish, then we might think that she is unethical or blameworthy for being selfish but not that she is unethical or blameworthy for signaling that she is selfish. The same is true of mistaken signaling. If an agent is not negligent, but mistakenly signals something that is not true — for example, she appears more altruistic than she is because someone mistakes a minor act of kindness on her part for a great sacrifice — then we presumably don’t think that she is responsible for accidentally sending inaccurate signals to others. We might think that she can be blamed if she is negligent (e.g., if she had the ability to correct the beliefs). But if her actions were not consciously or unconsciously motivated by their signal value, then we’re unlikely to think that she can be accused of signaling in a way that is unethical.

If this is correct, then most occasions on which we think that agents can be aptly blamed for signaling are when these agents are motivated in whole or in part by the signal value of their actions (in other words, even if we do think that innate signaling and mistaken signaling are possible, we don’t think that they’re particularly blameworthy). But things are tricky even if we focus on motivated signaling, because we have already said that an agent can be consciously or unconsciously motivated by the value of sending a signal. Let’s focus only on motivated signaling, then, and distinguish conscious from unconscious motivation.

The more that a behavior involves conscious deceptive signaling, the less ethical it is, all else being equal. This is because conscious deceptive signaling involves intentionally trying to get others to believe things that are false, which we generally consider harmful. If I become a vegetarian in order to deceive my boss into thinking that I share her values when I don’t, then the motives behind my action are blameworthy, even if the action itself is morally good.

Unconscious deceptive signaling seems less blameworthy. Suppose that I’m a deeply selfish person but help my elderly aunt once a week. Without realizing it, I’m actually doing this in order to mitigate the evidence others have that I’m selfish. This isn’t as blameworthy as conscious deception, but we might want to encourage people to avoid sending deceptive signals to others. And so here we might be inclined to point out to someone that they are in fact deceiving people, even if they are not doing so consciously.

As I mentioned above, signals can be deceptive to greater or lesser degrees. For example, suppose that I give 10% of my income to charity, but that if I suddenly gained nothing personally from being able to signal my charitable giving, I would only give 8% of my income to charity. Suppose that giving 10% signals “I am altruistic to degree n” and giving 8% signals “I am altruistic to degree m”, where n > m. Let’s call a trait ‘robust’ insofar as one would retain the trait even if one were to lose the personal gain from signaling that one has it (this is distinct from the counterfactual of not being able to signal at all, since signaling can have moral value). The deceptive signal that people receive is “Amanda is robustly altruistic to degree n” when the truth is that I am only robustly altruistic to degree m. If this is the case, then my signal is much less deceptive than the signal of someone who would give nothing to charity if it were not for the self-interested signaling value of their donations.
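
To make the degrees concrete, here is a minimal sketch of one way to score how deceptive a giving signal is, using the gap between apparent and robust giving; the measure is my own illustration:

```python
# Scoring how deceptive a charitable-giving signal is (a toy measure
# of the example above): the gap between the giving the signal
# presents as robust and the giving that would survive the loss of
# personal signaling gains, as a share of the former.

def deceptiveness(signaled_rate, robust_rate):
    """0.0 = fully robust (honest); 1.0 = giving entirely for show."""
    return (signaled_rate - robust_rate) / signaled_rate

print(deceptiveness(0.10, 0.08))  # 0.2  -- only mildly deceptive
print(deceptiveness(0.10, 0.00))  # 1.0  -- nothing would survive
```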

Finally, what about honest signaling? Honest signaling cannot be criticized on the grounds that it is deceptive, but we might still think that it can sometimes be morally blameworthy. For example, suppose that I were to give 10% of my income to charity and, when asked about it, was explicit that if I didn’t personally benefit from telling people about my giving, I’d only give 8% of my income to charity. I haven’t attempted to deceive you in this case. Nonetheless, we might think that being motivated by self-interested signaling value is morally worse than being motivated by the good that my charitable giving can do, because the latter is more robust than the former (the former is sensitive to things like the existence of Twitter or an ability to discuss giving among friends, while the latter is not). I suspect that this is why honest conscious signaling causes us to think that the agent in question has “one thought too many”, while unconscious honest signaling still makes us feel like the person’s motivations could be better, insofar as we don’t think that being motivated by signaling value is particularly laudable.

Note that this criticism only seems apt in domains where we think that self-interest should not be an undue part of one’s motivations: i.e., in the moral domain. We are not likely to chide the trainee pilot if she pays $100 to get a certificate showing that she has completed her training because this is a domain in which self-interest seems permissible. Similarly, the criticism only seems apt if the agent is motivated by the value of the signal for her. If someone advertises their charitable donation to normalize donating and encourage others to donate, then they are motivated by the moral value of their signal and not by its personal value. This motivation does not seem morally blameworthy.

If I am correct here, then critical accusations of signaling can be divided into two distinct accusations: first, that the person is being consciously or unconsciously deceptive, and second, that the person is being motivated by how much sending a signal benefits them personally, when this is worse than an alternative set of motivations: i.e., moral motivations. Since this can be consciously or unconsciously done, the underlying criticisms are as follows:

(1) Conscious deceptive signaling: you are consciously generating evidence that you have property p to degree n, when you actually have property p to degree m, where m ≠ n

(2) Unconscious deceptive signaling: you are unconsciously generating evidence that you have property p to degree n, when you actually have property p to degree m, where m ≠ n

(3) Conscious self-interested motivations: you are being consciously motivated by the personal signal value of your actions rather than by the moral value of your actions

(4) Unconscious self-interested motivations: you are being unconsciously motivated by the personal signal value of your actions rather than by the moral value of your actions

Note that if an agent is signaling honestly then she can only be accused of (3) and (4), but if she is signaling dishonestly then she can be accused of (1), (2), (1 & 3) or (2 & 4).

Claims that one is doing (3) or (4) only arise in the moral domain, and only if the agent is non-morally motivated to send a signal. Even when these conditions are satisfied, the harm of (3) or (4) can be fairly minor and forgivable, especially if the action that the person undertakes is a good one. It’s presumably better to do more good even if we are, to some small degree, motivated by the personal signaling value that doing more good affords. But let’s accept that each of (1) – (4) is, at the very least, morally suboptimal to some degree and that we can be justified in pointing this out when we see it. The question then is: how do we identify instances of (1) to (4), and how do we determine how bad they are?

In order to claim that an agent is engaging in unconscious deceptive signaling, we need to have some evidence that she doesn’t actually have the property to the degree indicated. In order to claim that she is engaging in conscious deceptive signaling, we need to have some evidence that she also knows that this is the case. And in order to claim that an agent has self-interested motives, we have to have some evidence that she is being consciously or unconsciously motivated by the personal signaling value of her actions, and not by their moral consequences (with signal value being mostly a side-effect).

I think that it’s important to note that criticisms of people for signaling must have one of these components. It’s too easy to claim that someone is “just signaling”, implying that they are doing so wrongly, and to leave the person in question feeling that they have to defend the claim “I am not signaling” rather than the claim “I am not being deceptive, nor am I unduly motivated by personal signaling value”.

The key problem we face is that whether an agent is signaling inaccurately, and whether she is being unduly motivated by self-interest, will often be underdetermined by the evidence. Suppose that you see someone tweet “I hope things get better in Syria.” If you claim that this person is merely ‘virtue signaling’, then you presumably mean that (i) they are consciously or unconsciously trying to make themselves appear more caring than they actually are, or (ii) they consciously or unconsciously sent this message because of the personal value it had for them rather than out of genuine care (or both). But we can’t really infer this from their tweet alone. The person might actually be as caring as this message indicates (i.e., the signal they send is accurate), and they might be motivated by the signal value only insofar as it is impersonally valuable (i.e., because it normalizes caring about Syria and informs people about the situation). Someone might think that if the agent actually cared about people then they would focus on some different situation where more people are in peril, but the person tweeting about Syria might also be supporting other causes, or they might simply not know how much suffering different situations involve, or they might not believe in that sort of ethical prioritization.

So what counts as evidence that someone is engaged in a morally egregious form of signaling? In support of (1) or (2), we can have independent evidence that the person lacks the property that they profess to have. For example, if someone claims that systemic social change is the most important intervention for the poor and yet does nothing to bring about systemic social change, we can infer that they are not very motivated to help the poor. Insofar as engaging in discussion about the best way to help the poor sends the signal that one helps the poor, we can infer that this signal is deceptive. In support of (3) or (4), we can have evidence that the person is unduly motivated by the personal signal value of their action. For example, if someone does the minimum that would be required to make them look good, but less than what would be required if they were genuinely motivated to do good, then it seems more likely that they are motivated by personal signaling value. An example might be a company that makes a token donation to charity in response to a PR disaster. In this kind of case, we have some evidence that the company is trying to appear good, rather than trying to genuinely correct the harm that led to the PR disaster in the first place.

I think we can take a few useful lessons from all this. The first is that it’s a bad idea to simply accuse people of “signaling” because signaling can mean a lot of things, and not all signaling is bad. The second is that if we are going to make such an accusation, then we must be more precise about whether we are objecting because we think they are sending deceptive signals, or because we think they are being unduly motivated by personal signaling value. The third is that we should be able to say why we think they are consciously or unconsciously being deceptive or unduly motivated by personal signaling value, since a lot of behavior that is consistent with blameworthy signaling is not in fact an instance of blameworthy signaling. The fourth is that we should identify how bad a given instance of signaling is and not overstate our case: if someone is only a little motivated by signaling value, whether consciously or unconsciously, then they have hardly committed a grave moral wrong that undermines the goodness of their actions. None of this nuance is captured if the name of the game is simply to see some apparently virtuous behavior and dismiss it as a mere instance of ‘virtue signaling’.
