Courtship ornaments as adversarial patches

When I read a few months back about machine learning research into tricks that would deceive image-recognition systems, I immediately thought about spider courtship. The research looked into “adversarial patches” (summary; original), little pieces of an image that could be inserted (imagine a sticker placed on an otherwise identifiable thing) to make machine learning algorithms go awry, misidentifying the overall image. The example used was an inserted patch that would make an algorithm mistake a banana for a toaster. If done subtly, the adversarial patch wouldn’t even be noticed by us humans, but the image recognition system would seize on some specially concocted detail and become delusional, thinking the image was of a toaster.
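To make the idea concrete, here is a toy sketch (not the method from the paper) of how a patch can hijack a classifier. The “classifier” is just a linear score over an 8×8 image with made-up random weights, and the patch is chosen by pushing a small corner of the image along the score’s gradient; real attacks do the same thing, with more machinery, against trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: a linear score over 8x8 images.
# Positive score -> "toaster", negative -> "banana". The weights are
# random here purely for illustration; a real attack targets a trained net.
W = rng.normal(size=(8, 8))

def toaster_score(img):
    """Higher score means the classifier leans toward 'toaster'."""
    return float(np.sum(W * img))

# A "banana" image: deliberately anti-correlated with W, so it scores negative.
img = -np.sign(W) * 0.1

# Adversarial patch: overwrite a 3x3 corner with values pushed along the
# score's gradient (for a linear model, the gradient is just W itself).
patched = img.copy()
patched[:3, :3] = np.sign(W[:3, :3]) * 5.0

print(toaster_score(img))      # negative: read as "banana"
print(toaster_score(patched))  # positive: a tiny patch flips it to "toaster"
```

The point of the sketch is that the patch occupies only 9 of 64 pixels, yet its extreme, gradient-aligned values dominate the whole-image score, much as a small sticker can dominate a real network’s decision.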

So what does this have to do with spider courtship? Evolutionary biologists have long wondered why males (usually) in many animal species have elaborate or exaggerated features that they display to the female in courtship. One long-popular theory suggests that these features indicate the male’s good genes: “I have such good genes that I can afford to divert my resources to conspicuous frippery” (picture an impractical muscle car). Females that choose those males give an advantage to their children’s genes, which will thereby be paired with good genes, getting a boost in future generations. Recently, though, there’s been a lot of talk about an antagonistic arms race between males and females for control of reproduction. Perhaps these courtship features don’t indicate the male’s high quality, but simply take advantage of a susceptibility in the female. Just like an adversarial patch.

Suppose a female has evolved to recognize a potentially threatening wasp (= toaster) and to recognize pestering males (= banana). If the male could have a courtship ornament that makes her think, at least momentarily, that he’s a wasp, she might freeze in place rather than reject him instantly in exasperation. Freezing her there might give her brain time to notice his redeeming qualities. If you look at any of the images of dancing peacock spiders (Maratus), you’ll see how many of them look like big insect heads. Many species of paradise spiders (Habronattus), which I talked about in my last post, have strange knees that look a bit like small insect heads. Here are the knees of various species of Habronattus:

Third leg of male Habronattus captiosus, H. cuspidatus, and H. viridipes. The male wiggles them when doing his courtship dance to the female.

These knees may not look convincing as insects, but search for images of “peacock spider” and you’ll see insect-like displays. The notion that males would have body parts mimicking insects may seem ridiculous, but it actually represents a respected idea in the field, called “sensory exploitation”, in which the male mimics something the female is already attuned to, such as prey or a predator. So, the idea of adversarial courtship ornaments already exists in the literature.

Where machine learning research comes in is that it might help us understand the evolution of these ornaments. Machine learning systems may not have reached the subtlety of a human brain, but they might have reached that of a spider’s. Findings in machine learning research may illuminate how tiny spider brains handle the complex images their sharp eyes supply from the messy world. For instance, could susceptibilities in our current machine learning methods help us understand the cognitive susceptibilities that affect the spiders and their evolution?

So, here are some questions that biologists studying visually-mediated courtship might ask of the field of machine learning (“ML”):

  1. Do the successful features of adversarial patches help us understand why spiders have such complex ornaments? One of the outstanding puzzles in sexual selection theory is to explain highly complex courtship. If each detail signals a male’s quality, so many quality indications would come at a high cost (cost is part of the theory). If instead the ornaments are mimicking a female’s sensory target, perhaps the complexity is needed simply to get the mimicry right. Or, could there be some direct advantage of complexity? Do ML image classifiers get distracted by edges, such that the way to make an adversarial patch is to fill it with many edges, with contrasting colours, and possibly a higher-level pattern (like a sense of stripes)?
  2. Do the details tend to matter, i.e. do small changes in an adversarial patch render it ineffective? If any slight change in detail loses effectiveness, then that could explain uniformity of ornaments in a species.
  3. For any trained ML image recognizer, is there a large series of very different patch forms that would succeed at being adversarial? This, combined with details-do-matter, would imply an uncanny valley: slight changes from a good patch fail, but big changes might succeed, by hitting a different style of patch. This could explain the peculiar observation that some Habronattus have very uniform ornaments within a species, and yet seem to be susceptible to hybridization with rather different-looking species. Thinking of the sensory bias theory of sexual selection, perhaps the shared ancestral susceptibility isn’t a single peak in an optimality landscape, but a set of peaks, with different species settling on different ones — all species are susceptible to all of these many different peaks, but there’ll be a penalty for any male with a slight change from the current peak on which a population sits.
  4. How does susceptibility to adversarial patches scale with neural net size and with input size? Does the ratio matter; i.e. are Habronattus and Maratus susceptible because their eyes or their environment give them too much information to process — many pixels being input but only small brains?
  5. Does the classifier’s number of functionally-different targets among which it has to distinguish (bananas, toasters, cars, puppies, etc.) have an effect on its susceptibility? If the female spider’s brain has to distinguish ants from wasps from prey from sticks from males, will she be more susceptible than females in environments without ants or wasps? (This might combine with the previous question to be about dimensionality. In other words, is there some equation like susceptibility = (pixels * targets)/neurons?)
  6. Does susceptibility rise or fall if there is a stronger requirement to generalize (i.e. visually very different objects are all generalized as “prey”; very different objects needing to elicit the response “sit still”)? This could add a different sort of dimensionality.
  7. Does re-training an ML image classifier to avoid susceptibility to an adversarial patch tend to open up a new vulnerability? In a small neural net with only so many “neurons” to go around, can you get a whack-a-mole scenario where retraining to avoid being fooled by adversarial patch A opens up susceptibility B, retraining against which opens up susceptibility C, and so on? This could generate a male-female arms race leading to constant change, or perhaps ever-accumulating complexity.
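The back-of-envelope equation in question 5 can be written out as a toy calculation. To be clear, this is purely a hypothetical scaling, not an established result, and every number below is a made-up round figure for illustration only.

```python
def susceptibility(pixels, targets, neurons):
    """Hypothetical scaling from question 5: more input and more target
    classes raise susceptibility; more neurons lower it."""
    return pixels * targets / neurons

# A jumping spider: a rich image from sharp principal eyes, a handful of
# categories to distinguish, and a tiny brain (all figures invented).
spider = susceptibility(pixels=10_000, targets=5, neurons=100_000)

# A large artificial classifier: more pixels and many more target classes,
# but vastly more units to process them with (figures likewise invented).
deep_net = susceptibility(pixels=150_000, targets=1_000, neurons=10_000_000)

print(spider, deep_net)
```

Whatever the real relationship turns out to be, framing it as a ratio like this at least makes the question testable: one could train nets across a grid of input sizes, class counts, and widths, and measure how easily each is patched.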

It may be that the answers to these questions are already in the machine learning image recognition literature. If so, let’s talk!