In many ways, we’ve accepted that handing over personal information is the cost of interacting online. The issue, however, is that in the U.S., the rule for businesses handling consumer information has largely been “anything goes.”

And if the deluge of privacy scandals that tech companies have faced tells us anything, it’s that consumer privacy is not exactly a top priority for many businesses. 

Because of this, it’s become clear that there needs to be some level of external policy that places limits on what data can be collected and for what purposes. 

Information Fiduciaries

The New York Privacy Act is the latest consumer-focused regulation to take steps in this direction, and it contains some innovative approaches to help protect users’ personal information.  

One such approach is the inclusion of the privacy concept of “information fiduciaries,” originated by Yale Law School professor Jack Balkin. The proposed regulation would require any organization that handles personal information to act as an information fiduciary, which must “exercise the duty of care, loyalty and confidentiality expected of a fiduciary with respect to securing the personal data of a consumer against a privacy risk; and [] act in the best interests of the consumer, without regard to the interests of the entity, controller or data broker.” 

As expected, this has many tech companies up in arms. According to an article from Wired, Facebook has argued that the line requiring companies to act “in the best interests of the consumer” is too broad: “Different consumers, Facebook argues, have different interests when it comes to the use of their data, making that a fuzzy line to draw.”  

Facebook’s argument over differing interests might seem to make sense, but, when it comes to creating sound privacy policy, it falls far short of the mark. 

The problem is that in today’s landscape, it’s becoming impossible to pinpoint consumers’ true interests. In an article titled “Privacy and human behavior in the age of information,” the authors collected empirical data and found a variety of issues with accurately locating consumers’ concern for privacy. One point they discovered was that companies have been able to effectively influence users’ privacy concerns:  

“Some entities have an interest in, and have developed expertise in, exploiting behavioral and psychological processes to promote disclosure. Such efforts play on the malleability of privacy preferences, a term we use to refer to the observation that various, sometimes subtle, factors can be used to activate or suppress privacy concerns, which in turn affect behavior.” 

Disinterested Policy 

The question then becomes: how can a privacy policy rely on individual consumers’ interests if those interests are being influenced by entities that depend on the collection, processing, and sale of personal information? 

Because the tech environment is a complex world of interactions, because of the limitations in our ability to discern attempts by others to leverage our behavioral biases, and because we cannot fully trust that the intentions of tech platforms like Facebook are congruent with our own interests, there needs to be some external baseline of privacy protections that helps create a level playing field for everyone. The question, of course, is where that baseline is drawn. That comes down to being able to answer these questions: What do we want to keep private? When do we want to keep it private? How and when do we want to share?

Sounds simple, but in a digital world of constant measurement and surveillance, it’s not so easy.
