I have a new working paper with Alisa Frik about privacy risk. I had not gotten around to writing about it here until now, so here goes:
In that paper, we report results of an experiment where we exposed subjects to the risk of revealing private information to others.
The inspiration for this experiment came from our dissatisfaction with the current methods to assess the value of privacy. The most popular methods include:
- experiments asking participants for their willingness to pay to avoid revealing private information to others,
- surveys asking respondents for their feelings about a range of possible scenarios involving privacy.
While those two methods may be suitable for some applications, they suffer from two main weaknesses:
- they are not incentivized (surveys) and
- they do not correspond to the type of privacy decisions that most people face.
Indeed, in real life people are rarely offered payment for their private information, or asked to pay to protect it from a well-identified, immediate, and certain threat.
Most of the time, instead, people must decide how much to invest in protecting their information from a non-specific threat that may or may not materialize in the future, and whose consequences are uncertain.
The experiment and the hypotheses
In our experiment we elicited willingness to take risk with one’s personal information by asking people to choose between lotteries in which personal information would be disclosed with some probability.
To see whether people’s willingness to play privacy lotteries differed from their willingness to gamble with money, we also asked the same participants to play lotteries with monetary outcomes.
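To give a flavor of how lottery-based risk elicitation works in general, here is a minimal sketch in Python. The payoffs, probabilities, and the CRRA utility function are illustrative assumptions, not the design used in our paper: the idea is simply that a subject's choices between a sure option and a lottery pin down a certainty equivalent, which summarizes their risk attitude.

```python
# Illustrative sketch of lottery-based risk elicitation.
# All numbers and the CRRA functional form are assumptions for exposition,
# not the actual experimental design.

def crra_utility(x, r):
    """CRRA utility with relative risk aversion r (r != 1 assumed here).
    r > 0 means risk averse, r = 0 means risk neutral."""
    return x ** (1 - r) / (1 - r)

def certainty_equivalent(p, high, low, r):
    """Sure amount giving the same CRRA utility as a lottery that
    pays `high` with probability p and `low` otherwise."""
    eu = p * crra_utility(high, r) + (1 - p) * crra_utility(low, r)
    # Invert the utility function to recover the sure amount.
    return (eu * (1 - r)) ** (1 / (1 - r))

# A risk-neutral subject (r = 0) values a 50/50 lottery over 0 and 10
# at its expected value of 5; a risk-averse subject (r = 0.5) values it
# strictly less.
print(certainty_equivalent(0.5, 10, 0, 0.0))  # expected value, 5.0
print(certainty_equivalent(0.5, 10, 0, 0.5))  # below 5
```

In an experiment, one observes the row at which a subject switches from the safe option to the lottery and infers the certainty equivalent from that switch point, rather than computing it from an assumed utility function.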
We also tested whether, as implied by some existing research, privacy could be defined as a good that has value only in so far as one maintains control over it.
We refer to such goods as “control goods” in the paper. Unlike a house or a car, which retains its use value to us even when under threat of being stolen, privacy would, under this hypothesis, lose its value as soon as it comes under threat.
In other words, under this approach, I care about privacy only if I feel I am in control of the level of risk to which it is exposed.
To the best of our knowledge, our experiment is the first attempt to test the relation between risk and privacy attitudes in a laboratory setting, and the first to directly test a view of privacy as a “control good”.
We show that attitudes toward privacy risk differ little from attitudes toward monetary risk.
Indeed, people’s answers to survey-type questions about their privacy attitudes were only marginally useful for predicting how they played the privacy lotteries; they played privacy lotteries much as they played monetary lotteries.
We also find that depriving subjects of full control over their personal information did not lead them to lose interest in protecting it, contrary to what some research implies.
Our experiment therefore suggests that people do not stop trying to protect their private information merely because they know it is at risk. This is quite reassuring, given how frequently privacy breaches are reported in the news.