Whatever happened to informed consent?


Facebook has been experimenting on us, and getting scientific publications out of us. The company took advantage of its enormous user base to run a study on more than half a million subjects, testing how positive and negative messages affect mood. I was surprised: I know I could never get approval for such a project (if I were a psychologist, that is). But apparently they had IRB approval.

Did an institutional review board—an independent ethics committee that vets research that involves humans—approve the experiment?

Yes, according to Susan Fiske, the Princeton University psychology professor who edited the study for publication. 

“I was concerned,” Fiske told The Atlantic, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”

Wait. I thought one simple, basic criterion was this: do the subjects know that they are in an experiment? Did they voluntarily sign up to be tested? You don’t have to spell out exactly what they’re being tested for, but they do have to understand that they are entering an artificial situation in which they are going to have some sort of evaluation done.

Oh, yeah, the APA says something like that.

When psychologists conduct research or provide assessment, therapy, counseling, or consulting services in person or via electronic transmission or other forms of communication, they obtain the informed consent of the individual or individuals using language that is reasonably understandable to that person or persons except when conducting such activities without consent is mandated by law or governmental regulation or as otherwise provided in this Ethics Code.

I’m pretty sure that when I signed up for facebook, it was to be part of social media, to interact with other people who had also signed up for the service. I don’t remember agreeing to be a guinea pig for whatever manipulations the company wanted to carry out.

But then, maybe I’m just naive. Maybe we signed over our rights and privacy to the corporations when we were five years old and joined the Chuck E. Cheese Birthday Club.

Also notice that the APA rules do have an exception. Here it is:

Psychologists may dispense with informed consent only (1) where research would not reasonably be assumed to create distress or harm and involves (a) the study of normal educational practices, curricula, or classroom management methods conducted in educational settings; (b) only anonymous questionnaires, naturalistic observations, or archival research for which disclosure of responses would not place participants at risk of criminal or civil liability or damage their financial standing, employability, or reputation, and confidentiality is protected; or (c) the study of factors related to job or organization effectiveness conducted in organizational settings for which there is no risk to participants’ employability, and confidentiality is protected or (2) where otherwise permitted by law or federal or institutional regulations.

I’d have to argue that the facebook study does not meet the exception, because it was not purely observational: they manipulated the news items that their users saw. They can’t simultaneously argue that their tinkering with facebook users’ stimuli showed an effect on attitudes, and that their tinkering did not affect their subjects. That, to me, is the key problem — not that they’re analyzing users’ interactions, but that they’re now reaching out to attempt to modify what users do.

Comments

  1. rturpin says

    I hate to tell you this, PZ, but A/B interface testing by companies with web sites is as common as fleas on dogs.
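
    For anyone who hasn’t watched this from the inside, here’s a toy sketch of how a site might bucket users for an A/B test. Every detail here (the hash scheme, the bucket names, the 50/50 split) is invented for illustration; this is nobody’s actual production code:

        # Toy sketch of deterministic A/B bucketing. All names and the
        # 50/50 split are illustrative, not any real company's system.
        import hashlib

        def assign_bucket(user_id: str, experiment: str) -> str:
            """Hash user + experiment so a given user always lands in the
            same bucket for a given experiment, with no opt-in step."""
            digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
            return "A" if int(digest, 16) % 2 == 0 else "B"

        print(assign_bucket("user123", "feed_ranking_tweak"))  # prints "A" or "B"

    Notice there’s no consent step anywhere in that flow: you’re in the experiment the moment you load the page.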

  2. says

    Ok, so there are the ethical problems, but then there is the fact that A TON OF THE PROFILES ARE FAKE, so I don’t know how they can even count the reactions as valid data when there are so many people who screw w/each other for fun online.

  3. sqlrob says

    I don’t remember agreeing to be a guinea pig for whatever manipulations the company wanted to carry out.

    You might want to read the various EULAs / TOSs to websites a little more carefully then. And don’t forget to keep checking back because of the “we can change this at any time” clauses.

    From bits of the Facebook data policy:

    “to measure or understand the effectiveness of ads you and others see, including to deliver relevant ads to you;”

    “for internal operations, including troubleshooting, data analysis, testing, research and service improvement. “

  4. Al Dente says

    There were already several good reasons I refuse to sign up for Facebook. Here is yet another reason.

  5. Muse142 says

    sqlrob @ #4:
    That is a piss-poor excuse for informed consent. You know it; I know it. Don’t even play like the EULA justifies this sort of manipulation. It does not. (Speaking as a former psychology graduate student who has actually done research on human subjects, I am fucking ragingly contemptuous right now.)

  6. Gorogh, Lounging Peacromancer says

    Just submitted an IRB application last week, and I’m pretty sure I would not have gotten away with burying my research intention anywhere within pages and pages of distractor information.

    There were already several good reasons I refuse to sign up for Facebook. Here is yet another reason.

    I think I’ll just stick to Al Dente’s conclusion as well.

  7. Pteryxx says

    “for internal operations, including troubleshooting, data analysis, testing, research and service improvement. “

    This apparently was the clause in the TOS that Facebook claims constituted consent. But the research was for publication in a real journal, not for Facebook’s internal operations. (And it would still be creepy if Facebook were merely manipulating the emotions of its users for better ad targeting.)

    Besides, just because pages and pages of obfuscating text behind a click-wrapper constitute a (legally binding, so far) Terms of Service agreement, doesn’t mean a TOS passes muster as a consent form for research. It’d be as if a medical center had a small plaque behind a potted plant that said “By entering this building you consent to being a research subject”.

  8. sqlrob says

    @Muse142:

    I didn’t say it was good practice. It is wide-spread practice. If you think it isn’t, you’re incredibly naive. They’ve got a fully legal defense.

    I think this was beyond what the board should have permitted. Not all of this research is done under the geas of the research boards, or even published. As @rturpin indicated, A/B testing is very common and there’s nothing limiting the hypothesis being tested beyond what’s in the law.

  9. elly says

    I have a Facebook account, but I haven’t posted to it, or even looked at it, really, in over 18 months. Initially, it seemed innocuous enough: I touched base with a handful of high school friends, and kept in touch with acquaintances and extended family members that I rarely see due to the distances between us. After a while, however, the volume of trivial/irritating posts/pokes/messages/invitations started to grate on me. It got particularly bad around election time (2012)… I’d log in, glance at the latest comments/likes from my “friends,” feel my anger rising, and just log out again. Ugh.

    Given the time frame of the experiment, it’s entirely possible that I was one of the “guinea pigs” for this study, although my reaction wasn’t to spread the “emotional contagion”… it was simply to withdraw from participation.

  10. Gregory says

    @sqlrob #4 – I am on the community advisory board for an organization that does research into HIV vaccines. Our function is to advocate for the people enrolled in the vaccine studies, and I am quite familiar with what does, and does not, make for a valid informed consent document. I have even helped to write them. A single word in a mountain of verbal diarrhea that can be changed at any time, for any reason, without notifying the study participants, and which does not EXPLICITLY describe what, EXACTLY, will be done and why it is being done and how this may or may not affect study participants is not informed consent.

    Granted, the focus of my CAB is not behavioral studies. However, I have met and talked with people from many other CABs and IRBs, including some that deal with behavioral studies. The procedures for informed consent are the same because the underlying federal and international laws requiring informed consent are the same.

    But what can be done? Who would receive the complaint? I am pissed off enough that I do not want to just sit here wringing my hands: I want this investigated and prosecuted.

  11. Gregory says

    I started looking into the matter of informed consent with regards to market research. I found this page at the Illinois Institute of Technology titled Surveys, Focus Groups and Research with Human Subjects. That page links to the school’s boilerplate informed consent document, which provides study participants with the name of the research, the names of those conducting the research and who is supervising them, a full description of the research procedures, a description of risks and benefits associated with participation, and an outline of participant rights, including the right to withdraw from the study at any time, for any reason, without incurring penalty. There is even a statement outlining the possibility of research-related injuries. This format is very close to other informed consent documents I have worked on in vaccine trials.

    Keep in mind that “informed consent” has two words: INFORMED and CONSENT. If one or the other is deficient or missing, then there is no informed consent. As a Facebook user, I was never given anything that I could even generously describe as informed consent.

  12. neverjaunty says

    We don’t know that they had IRB approval.

    All that we know is that Susan Fiske asked the Facebook study authors if they had IRB approval, and they told her that they totes did because they were, like, pros at this stuff.

    This was apparently good enough for Professor Fiske at the time, but it appears from the article PZ linked that she may have some second thoughts:

    “Fiske added that she didn’t want “the originality of the research” to be lost, but called the experiment “an open ethical question.””

  13. Gregory says

    Another interesting read is the Marketing Research Association’s Code of Marketing Research Standards. Some highlights:

    2. Protect the rights of respondents, including the right to refuse to participate in part or all of the research process.

    7. Ensure that respondent information collected during any study will not be used for sales, solicitations, push polling or any other non-research purpose.

    10. Not represent non-research activity as research.

    11. Provide respondents with clear notice and choice about participation when passively collecting data for research purposes from non-public sources or places, where the respondent would not reasonably expect information to be collected.

    15. Consider data privacy a fundamental part of planning and the research process, and maintain a clear, concise and easy to understand privacy or terms of use policy that describes the ways respondent data is collected, used, disclosed and managed.

    16. Take special care and adhere to applicable law when conducting research across state and national borders and with vulnerable populations, including but not limited to children.

    Yeah, I see a few problems with FB’s research here.

  14. Snoof says

    I’m curious to see if this form of informed consent applies in other cases.

    Would I be able to, say, harvest Zuckerberg’s kidneys because he agreed to the terms of service for my app?

    (I’m just operating on his internals! It’s right there in paragraph seventeen!)

  15. microraptor says

    I decided against signing up for Facebook when I heard that they would keep photos that users had uploaded rather than deleting them once you decided to take them down (or something like that, it’s been years since then).

    Nothing I’ve heard about Facebook’s behavior has ever made me think about giving them a second chance.

  16. feloniousmonk says

    Ok, a while back, there was an experiment on littering and social influence where the researchers put a flyer on people’s windshields and observed what they did with it when the parking lot was clean vs. when the parking lot was littered. No informed consent necessary, despite the experimental manipulation of the environment in which the naturalistic observation occurred.

    Does anyone here have any real problems with that experiment?

    If not, what’s the real difference?

    Are we invoking the online privacy boogeyman for no reason?

  17. Gregory says

    @feloniousmonk #20 – Facebook has apparently claimed that they went through an institutional review board. Assuming this is true, then the research MUST, as a matter of federal and international law, have involved informed consent. I am a community member of a board that helps to write informed consent documents, and I can assure you that Facebook never provided anything that I would call informed consent. From where I am standing, it would appear that Facebook broke the law: whether or not harm followed is entirely beside the point.

    It may be that marketing studies have different standards than the vaccine studies I work with, but the links I provided above seem to indicate that marketing studies are held to standards very close to the ones I am familiar with. It may be that the study involved what is technically meta-data already owned by the company rather than data provided directly by users, but that enters into a realm of very, very murky ethics that the courts have been reluctant to clarify. In either situation, I believe that this research should be pulled out and considered in the light of law and ethical research standards.

  18. Gorogh, Lounging Peacromancer says

    My take on your objection @20, feloniousmonk, is that such research jeopardizes people’s informational autonomy. If you will, it is paternalistic to presume to know what people would agree to. In medical ethics this is a very long-standing issue, and the role of the doctor has evolved from the paternalistic “I know what’s best for you, so you don’t have to know exactly what your treatment is” to “you have full autonomy over yourself, therefore you will receive all the information you can and make your own treatment decisions”.

    Roughly. Just a first-glance association. Resuming breakfast now.

  19. neverjaunty says

    Are we invoking the online privacy boogeyman for no reason?

    Do you not understand the difference between observing behavior in public places and a social network with a privacy policy? Are you claiming that recording anonymous public actions in aggregate is the same as secretly preserving individual users’ actions? Is there a reason you’re begging the question instead of making a point?

    You didn’t bother to link to the actual study so that anyone could meaningfully evaluate what you’re talking about, but I’m going to assume you mean the Cialdini study. Why do you believe that is comparable to Facebook’s study?

  20. dvizard says

    I’d have to argue that the facebook study does not meet the exception, because it was not purely observational: they manipulated the news items that their users saw.

    The problem with that view is that Facebook has to have an algorithm to display news feeds anyway, and clearly it is not set in stone. They have likely changed it a number of times in the history of Facebook, and when they change it they will monitor how users react (because they want to see how they can make Facebook “better”). So following your argument, it is perfectly fine for them to manipulate the feed for the self-interested purpose of finding out how to make Facebook produce more money (since you can’t ban them from that anyway), but if they want to manipulate it with the goal of gaining scientific insight, it is suddenly not OK and needs informed consent.

    So while technically they do perform a manipulation, I think requiring informed consent here is a bit unfair. You would ban them from doing something for a scientific purpose that they already do all the time for market research.

  21. neverjaunty says

    They’ve got a fully legal defense.

    What is “fully legal defense” even supposed to mean? From a legal POV that’s gibberish. Do you mean that they have a complete defense, that they have a legally sound defense, or what? I assume you didn’t mean to say that their lawyers would not be violating their duties as officers of the court by asserting it.

    Unless there is extremely well-established precedent on what a particular term of the EULA or TOS means (such as an arbitration clause), it is flat-out wrong to say that any given term is a slam dunk. Courts are constantly weighing these issues: did the person agreeing to the terms have the opportunity to review and understand them? Is the questionable behavior actually addressed by the term, or is it so vague that nobody could possibly have agreed to it?

  22. Alverant says

    I’m with some of the others here. Whenever I think about joining the Horde and signing up for FB, I read something like this which puts me off once again.

    The problem with EULAs is that they’re too long and written in too much legalese to understand. We shouldn’t need a lawyer to sign onto a social media site. FB is abusing contract law to sneak things through in the fine print. Worse, they’re the biggest game in town, so many people who depend on social media as a form of advertising have no choice but to accept.

    When I first moved to the area I signed on with a dating company to find a girlfriend. I picked this company because they said they would mail me matches every month and my membership would be renewed for free if I didn’t find someone when the initial membership period was up. Guess what: they said they could change the terms of the contract, so they discontinued mailing me matches; and since the “free renewal” was an oral promise made by someone no longer with the company, they didn’t have to honor that either.

    Businesses use too many tricks to part people from their money, and if you fight back they can point to some vague contract. If you still complain and raise a big enough stink, you’ll have conservatives talking about the “free market,” “buyer beware,” and other such nonsense to defend the companies defrauding us.

  23. gmacs says

    And of course it had to be at Princeton.

    I can’t help but think of Zimbardo when I read this.

  24. says

    Regardless of the legal cover, it appears that they did not get informed consent (whatever the IRB review concluded). A potentially easier route would be to ask the journal for a retraction until the researchers involved can clarify the ethics of the study.

  25. Gorogh, Lounging Peacromancer says

    Btw, some experiments (such as placebo experiments) really do require some form of deception. Even so, full disclosure is required after the experiment. Especially in the Facebook case, it would have been technologically trivial to obtain fully informed consent, or to reaffirm it, after data collection: just send out a message and have people reaffirm their willingness to participate before including their data.

  26. Gregory says

    @Gorogh #29 – If a pharmaceutical study involves placebos (and almost all do), that fact must be revealed in the informed consent documentation that is read and signed by study participants BEFORE they enroll in the study. Must, as in “this is required by US and international law.” The documents must clearly state, in plain language understandable by a lay person, that placebos will be involved and that the participant may be in a cohort that does not get the product being tested. That is part of the “informed.”

    Typically, studies are double blinded: neither the participant nor the person administering the product knows which cohort the participant is in. I was in a vaccine study in the late 90s, and neither I nor the clinician who took my blood and administered the shots knew what I was getting: if the protocol called for an injection that visit, she drew blood, then picked up from a prep room a brown paper bag containing the pre-filled syringe. The whole point is to reduce, if not eliminate, the placebo effect that comes from the participant knowing what he is getting. The clinician doesn’t know, to prevent her from inadvertently communicating the information to the participant. When the study has been completed, it is unblinded and the participant learns which cohort he was in. That can happen months, even years, after the participant’s participation has ended.

  27. Gorogh, Lounging Peacromancer says

    Gregory – indeed. I was not talking about pharmaceutical studies, though, but about studies investigating the placebo effect (roughly, the effect of expectation and conditioning on the experience of symptoms). Those do not always involve active treatments (partly for ethical reasons) and frequently have to deceive participants about the fact that no active treatment is applied. I only happen to know because I perform such studies myself.

  28. Gregory says

    @Gorogh #31 – Ah, ok. Yeah, the informed consent for a study specifically looking at the placebo effect would look very different. I don’t suppose you could point me to some public documents for these kinds of studies? My CAB has an informal project of collecting them for reference, and placebo effect IC would make for an interesting addition. If you need to keep the source reasonably private, you can contact me through the website linked on my name.

  29. dmgregory says

    I don’t think this is quite as clear-cut as a lot of the takes on it I’ve seen seem to suggest.

    Expanding in a similar vein to dvizard’s remarks above…

    Imagine a theme park changes all of their wayfinding signage, and then tracks the number of people who go to each location, or come to an information counter for directions. By comparing these numbers against the pre-change stats, they’ve just performed an experiment on all of their visitors, which they can use to guide future changes. If they share that information in a press release (“Changing our signs to blue increased ice cream sales by 0.3%”), then they’ve published the results of that research.

    Would they be ethically required to inform and obtain the consent of all of their visitors in order to do this? I’d lean toward no – those visitors came with the knowledge that they would be getting a mediated experience, that much of what they saw would be subject to someone else’s decisions – there’s no such thing as a “neutral” theme park where absolutely everything is up to the visitor.

    What if we extend this to an A/B test – say the park has two different park maps, and they’re not sure which one is better, so they print both and hand one randomly to each guest on the way in, then track the numbers of people coming to information kiosks with the A version versus the B version. This is closer to the Facebook study, but again, I wouldn’t expect to be asked to consent in this case. I don’t have an a priori idea of what the park map “should” be, so whether I get version A or version B I’m still getting what I came to the park for.
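
    For concreteness, here’s a toy simulation of that map test. The visitor counts and “confusion rates” are invented, purely to show the shape of the comparison, not to model any real park or any real Facebook experiment:

        # Toy simulation of the A/B park-map test described above.
        # All numbers are invented for illustration.
        import random

        random.seed(42)
        handed = {"A": 0, "B": 0}   # maps handed out per version
        visits = {"A": 0, "B": 0}   # info-kiosk visits per version

        for _ in range(10000):
            version = random.choice(["A", "B"])   # random assignment at the gate
            handed[version] += 1
            # Pretend map B is slightly more confusing than map A.
            confusion_rate = 0.10 if version == "A" else 0.13
            if random.random() < confusion_rate:
                visits[version] += 1

        for v in ("A", "B"):
            print(f"Map {v}: {visits[v] / handed[v]:.1%} of guests asked for directions")

    The structural point: no guest knows, or plausibly cares, which version they got, and nothing about the visit changes except the map.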

    I think a lot of this discussion revolves around an assumption that there is a “neutral” version of what your Facebook feed “should” be, and that any curation constitutes a deviation from that standard. But that’s not what the Facebook feed is – it is necessarily a mediated, algorithmically-curated selection, and there’s no way to do that neutrally. Even a straight-up chronological dump ends up favouring some content over others (e.g. if you read FB early in the morning, you’ll see more of your friends’ drunken late-night party posts than their more sober mid-day news sharing. Or if it’s based on recency of updates, you’ll see more flame wars and polls than simple informative posts like “I got a new phone number.”)

    Given that Facebook is tuning these curation algorithms all the time, I’m curious, what criteria make a particular curation change rise to the level of requiring explicit consent?

    I agree that there is a threshold, but it’s not at all clear to me where that is, and on which side of that line this study falls. I note, for instance, that FB’s recent algorithms seem to de-emphasize threads with intense back-and-forth arguments, which is a choice that carries emotional consequences, but as a user I experience that as an incremental improvement, and I don’t feel it’s something that required my explicit consent for them to roll out – the content remains present and I still get notifications if I continue to participate. So I don’t think the emotional impact of this curation experiment is the key factor. I’m also not sure whether publishing data is the magic factor – OkCupid has published tons of data on their users’ activities under a EULA framework similar to Facebook’s, without this type of controversy.

    Any thoughts on what particular set of factors make this an issue, compared to, say, Google’s experimentation with algorithms on which emails to flag as “important”?

  30. says

    feloniousmonk:

    If not, what’s the real difference?

    A public parking lot is a public space. You’re free to observe people all you want. A public parking lot does not have a privacy policy or terms of use. There’s your difference. Rather surprising this needs to be pointed out.

  31. dvizard says

    A public parking lot is a public space. You’re free to observe people all you want. A public parking lot does not have a privacy policy or terms of use. There’s your difference. Rather surprising this needs to be pointed out.

    But how does this matter in terms of whether or not you need informed consent for manipulating subjects? Whether a subject is manipulated unknowingly in a public place or in a private house hardly matters to the question at hand. Even more so if the manipulator is the one who makes the rules of the private house. Remember, the question is not whether Facebook violated their own TOS when manipulating their customers (which they surely didn’t) but whether this constitutes a manipulation which, in the context of a psychological study, would require informed consent.

  32. dvizard says

    If they share that information in a press release (“Changing our signs to blue increased ice cream sales by 0.3%”), then they’ve published the results of that research.

    And that’s the bizarre part to me. They can run any kind of test or manipulation on their own servers for their own purposes and that’s perfectly fine, but the moment they publish it they suddenly need informed consent. That doesn’t make any sense to me if the goal is to protect people from being manipulated unknowingly.

  33. unclefrogy says

    I also find it a little strange that they decided to publish the study results.
    The only reason I can think of to do that would be to indicate to their customers that they are very adept at manipulating their users, so as to ensure better sales results.
    let us not forget that regardless of how empowering they declare themselves to be, they are, like Google, an aggregator of targets for the advertisers who are their customers.
    It would be unusual for advertisers and marketers not to play it close to the line of legality. Did they go over the line? Probably; the question is how far, not if.

    uncle frogy

  34. robro says

    As rturpin @#2 says, websites have been doing A/B tests for a long time, an extension of advertising research practices from long before the Internet and Facebook. However, this strikes me as a little different. The part that I think steps over the line is this:

    Facebook then analyzed the future postings of those users over the course of a week…

    It’s one thing to track click rates or purchase rates, but it’s a different matter to use my content without my explicit permission. Their EULA/TOS may be enough CYA to keep them out of legal trouble, but it does seem questionable to use material that most people would consider protected by their privacy policy.

    You have to wonder why they bothered doing this research. I’m no expert, but it seems that they were looking at whether they can “prime” their users through the feed. From my smattering of reading, there seems to be plenty of evidence that researchers can use priming to manipulate a subject’s state of mind. Apparently, psychological research uses the phenomenon all the time…with appropriate informed consent, of course. I’m sure the mechanism of delivering the priming isn’t that critical.

    You also have to wonder why they published. This seems unusual for a technology company to do, given how close to the chest most of these companies play.

    Finally, perhaps this reveals something that we should all be aware of: media sources can use a “news feed” to prime their audience, then analyze comments to modify their programming and steer their audience where they want them to go. You can bet that Fox and other major news outlets do this kind of research routinely, and your comment that they are a bunch of amoral idiots is just as useful to them as any other comment.

  35. Nick Gotts says

    I’m with some of the others here. Whenever I think about joining the Horde and signing up for FB, I read something like this which puts me off once again. – Alverant

    Me too. I was indeed on the point of giving in – political groups I’m in use it, so it’s a nuisance not to. But if they’ll do this and even boast about it, what else will they do and keep quiet about?

  36. madscientist says

    I can imagine the next excuse: Rupert Murdoch and his cronies make up ‘news’ all the time!

  37. magistramarla says

    Elly @ 12
    Your post could easily have been written by me.
    I joined FB to keep up with my adult children and see pictures of my grandchildren.
    I was quickly “found” by former high school classmates, but I enjoyed that for a while.
    Then the annoying political and religious opinions started to get on my nerves, especially those that were posted by some relatives. I felt that I couldn’t vent my true feelings without hurting some people that I cared about, so I simply withdrew.
    Like you, I haven’t gone back in over 18 months, and my nerves are better for it.

  38. comfychair says

    Until recently, I’d assumed this ‘facebook’ thing (also ‘twitter’) just had to be a made-up parody/satire, a very overused and stale meme about our insecurities and the need to have our egos continually stroked. Imagine my shock and disappointment on finding out it was a real thing that people really do for real!

  39. neverjaunty says

    dmgregory @34: As the Atlantic article points out, what Facebook did may very well fall outside ethical guidelines: first, because they may have misled the IRB by claiming to be using only pre-existing datasets, when they were actually going to be manipulating users’ feeds to generate new data; and second, because they did not notify the participants of the true nature of the ‘deceptive research.’

    There’s also the fact that Kramer, one of the study’s authors, works for Facebook and yet claimed in the research paper that he had no conflicts of interest. Perhaps I’m not up to speed on the current state of ethics in research, but my understanding is that if you are doing research at your employer’s behest which will benefit your employer, you’re supposed to disclose that if you publish the research in a peer-reviewed publication.

  40. Tony! The Queer Shoop says

    Nick & others:

    Me too. I was indeed on the point of giving in – political groups I’m in use it, so it’s a nuisance not to. But if they’ll do this and even boast about it, what else will they do and keep quiet about?

    I don’t know if it would work for you, but I know several people who create FB accounts using pseudonyms. Some people have a private FB account and a public one. I imagine you could create just one pseudonymous account that you could fill in with whatever information you chose (or none at all). You’re not required to have a picture of yourself, and you can set your account so that only the people you choose can view your profile.

  41. saganite says

    Yeah, glad I never joined Facebook. Then again, I did join YouTube and was later forced to create an (empty) Google+ page. Still, it seems that Facebook is particularly nasty when it comes to profiting off users, making deals with advertisers and generally disregarding people’s expected rights. I’m sure there are some clauses in their terms of service or something to thinly cover their asses, but usually such (basically hidden) small print doesn’t hold up in court when it goes beyond reasonable expectations of the terms.

  42. Jafafa Hots says

    I was kicked off Facebook in 2009 (I think) for using the name Jafafa Hots.
    They wouldn’t let me sign back up with my real name.
    Decided I was fine with that.

    Back around 2007 I remember saying “facebook is the new AOL.”
    I think I was right. I think a lot of Facebook users think Facebook IS the internet. Others, younger people, are leaving in droves. Facebook is just where they log in to get their birthday greeting from distant relatives.

    Anyway, FB will ban you permanently (and prevent you from signing up again) if you use a pseudonym and they figure it out.

    What’s really creepy is that they somehow know who you really are anyway, so if you try to sign up again with your REAL ID, they block you, as you were banned before.

  43. Rolan le Gargéac says

    It’s toe in the water stuff to see if the gubmint can move to the next phase in the serf-isation of the merkan peeps!

    Yeah baby !

  44. Azkyroth Drinked the Grammar Too :) says

    You might want to read the various EULAs / TOSs to websites a little more carefully then.

    One more reason the fucking things should be outlawed.

  45. Azkyroth Drinked the Grammar Too :) says

    It is wide-spread practice. If you think it isn’t, you’re incredibly naive. They’ve got a fully legal defense.

    Why do we always have to have some shitbag yes-man show up to smugly proclaim that whatever abuse of the law we’re objecting to is TOTES LEGAL as if that settled it?

  46. Artor says

    Every single time I start thinking maybe I should get a Facebook account, another incident like this happens, and I realize that would be a bad idea. This cycle has held true for… how long has Facebook been a thing? That long and counting.

  47. says

    The really egregious thing about the lack of oversight here is that this study DID cause harm. In fact, that was one of the stated goals of the study–to see if negative emotions could be elicited by manipulating messages. I am still having trouble believing that Cornell’s IRB approved this on the rationale that data mining of de-identified medical records happens all the time. This was not data mining of static records. This was real-time emotional manipulation on publicly available forums.

    The Office for Human Research Protections (OHRP) at the Department of Health and Human Services would be the logical place to lodge a complaint about this study, but it has been gutted and is essentially useless at this point. Like most other aspects of human society, research protections have been slowly eroding to favor industry over public interest. I plan to skip right to our senators and ask them WTF OHRP is doing with our tax dollars.

    The counter backlash has begun, as well. TIME magazine ran a puff piece about how we are all over-reacting to this. But if we really are going to accept clicking ‘yes’ to a privacy policy as ‘informed’ consent, then we should have no problem with treating physicians secretly exchanging their patient’s medications with experimental drugs to in order to study them, without the patient’s knowledge or consent, as long as they signed the clinic privacy form, right?