Computers, Privacy & the Constitution

The IRB?

-- By EyzaguirreF - 26 Apr 2018

My big takeaway from this class is this: giving up our right to privacy on an individual level, by first conceding to its erosion for the sake of convenience until we are eventually left with nothing, that is, total surveillance, imposes a host of negative externalities on those around us and harms their right to privacy as well. And, as we discussed in class, using the traditional privacy-policy tools of opting in and opting out to fight this ecological problem is like using five-gallon buckets of water to put out a sweeping forest fire. We need bigger hoses; nay, we need helicopters. Privacy policies generally assume that the questions at stake can be answered through some form of meaningful consent, and that the transactions are bilateral, that is, between two parties and two parties only. This is simply not so, and the harms done to our loved ones, the third parties, are not necessarily physical in nature. Rather, they are psychological.

So, I thought more about what these claimed psychological ramifications could be. What is their nature, depth, and visibility, and what is their potential for harmfully mutating societal interaction? Or is there no harm? Could it be that the impact of technology on our society improves us as individuals? Are we forming relationships with our devices, much like the relationships we have built with the people around us: friends, family, partners? What sorts of relationships have we formed with our laptops and smartphones, all for the sake of convenience? Is it even right to characterize these countless daily interactions as a relationship? I wait for the 1 at 96th Street. I look around. Unlike before, I look at the faces in the train station differently. I see zombies everywhere, and everyone has their head buried in a screen with a camera pointed right at them. They are focused, and not to be disturbed. I see the billboards in Times Square: Make Google Do It. But if Google does it, what of the unseen, hefty price tag we must pay?

On the road toward losing our democracy we are hurting our psyche, irreversibly. The psychological harms to society need to be addressed, and I want to think outside the box. This paper, this thought of mine, much like the class, is not the end but a part of the journey. Contextualizing our loss of privacy as a collective action problem is, for me, a good start. Much like climate change, the changing technological environment is limiting our ability to exercise free will. I wanted to look to regulatory schemes in the sciences for comparison, hoping they could teach me a thing or two.

One such system that caught my eye was the use of Institutional Review Boards (IRBs), which were put in place in direct response to the research abuses of the 20th century, including the experiments of Nazi physicians, the Tuskegee Syphilis Study, and the Stanford prison experiment. Alas, society learns from its mistakes. Today, we have an opportunity to learn from the mistakes of politicians and lawmakers in their inability to rein in the tech giants.

What are IRBs? They are committees of people that oversee ethical compliance. Detailed IRB membership requirements are set out at 45 CFR 46.107. Generally speaking, an IRB must have at least five members with varying backgrounds to promote complete and adequate review of the research activities commonly conducted by the institution. At least one member must have primary concerns in scientific areas, and at least one must have primary concerns in nonscientific areas. At least one member must not be affiliated with the institution. Diversity of perspective on any problem means a more competitive marketplace of ideas and solutions addressing it, and a more diverse IRB means better regulation.

Take, for example, the mandate of the National Institute of Environmental Health Sciences (NIEHS) Institutional Review Board. One of its goals is “to provide ethical and regulatory oversight of research that involves human subjects by . . . [p]rotecting the rights, welfare and well-being of human research participants, recruited to participate in research conducted or supported by the NIEHS.” The mandate also highlights three core principles taken from the Belmont Report, a report by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Among them, the first speaks directly to a component of privacy: the mandate requires respect for persons, with the understanding that “individuals should be treated as autonomous agents.” Autonomy, as a component of privacy, is the focus here. In fashioning regulations to protect our right to privacy, similar mandates protecting our rights to secrecy and anonymity could be crafted.

I do not mean to suggest that the current IRB regulatory scheme is something that can, much less should, be copied and pasted onto the problems we discussed in class. Rather, I see it as a tool we can improve and use when we wish to protect our privacy rights. It helps reframe the issue, allowing us to think about privacy as ecological rather than transactional in nature. Reflecting on the positives and negatives of IRBs could provide guideposts for reshaping the traditional IRB into something useful.

But then again, maybe looking to the IRB will not be much help. Maybe even something like an IRB is not the answer, either. After all, merely suggesting its use presumes an acceptance that collecting people's behavioral data en masse for purposes of exploitation is acceptable in the first place.

I do think, however, that IRBs are better suited to addressing privacy concerns than privacy policies are. That said, given how ineffective privacy policies have proven to be, that may not be saying much. This is my idea, and I hope the reader will consider exploring it further.

This does seem to me a valuable beginning. Improving the draft means focusing the draft on the idea more, on the background less. What does "the IRB" stand for here? Institutional review boards mostly evaluate methodology, determining whether the conduct of research is appropriate, primarily against the background of an "informed consent" standard. If this is not a subset of what "data protection authorities" are supposed to do under the various European schemes arising from the GDPR, what are the differences in concept to which you want to call our attention? To the extent that "consent" is not a sufficient basis for privacy regulation, how do IRB-like institutions get at the larger problems? Critics of the research governance culture think that review boards are standardless bodies unaccountable to law. If they are even partly right, what sort of substantive privacy law should back whatever entities you are imagining?




r3 - 10 May 2018 - 14:26:58 - EbenMoglen