The Internet is a Psychology Experiment

Busy month, but the world doesn't stop, of course. My last post was about big data and privacy; this one is about big data and ethics. As data collection and analysis become increasingly powerful, those who control the algorithms can, quite literally, steer the behavior of populations in various ways and to varying degrees. Below are some smart takes on the subject.

Scott Adams, creator of Dilbert, writes about startups in Silicon Valley:

Every entrepreneur is now a psychologist by trade. The ONLY thing that matters to success in our anything-is-buildable Internet world is psychology. How does the customer perceive this product? What causes someone to share? What makes virality happen? What makes something sticky?

Experience and history give start-ups their ideas on what to test first. But the thing that worked for the last business often doesn’t work for the next because no two situations are identical. So psychology on the Internet is an endless series of educated guesses and quantitative testing. Every entrepreneur is a behavioral psychologist with the tools to pull it off.

Christian Sandvig writing for the Social Media Collective Research Blog has an excellent piece about why algorithms are dangerous (it’s a must-read). Here are some of his key takeaways:

  • When I express my opinion about something to my friends and family, I do not want that opinion re-sold without my knowledge or consent.
  • When I explicitly endorse something, I don’t want that endorsement applied to other things that I did not endorse.
  • If I want to read a list of personalized status updates about my friends and family, I do not want my friends and family sorted by how often they mention advertisers.
  • If a list of things is chosen for me, I want the results organized by some measure of goodness for me, not by how much money someone has paid.
  • I want paid content to be clearly identified.
  • I do not want my information technology to sort my life into commercial and non-commercial content and systematically de-emphasize the noncommercial things that I do, or turn these things toward commercial purposes.

Facebook recently revealed that in 2012 it allowed researchers to conduct a psychological experiment on how emotions spread through social media. They manipulated the News Feeds of over half a million randomly selected users, changing the proportion of positive and negative posts each one saw. They found that emotions are indeed contagious.
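To make the experiment's design concrete, here is a toy simulation of that kind of study: randomly withhold a fraction of negative posts from a treatment group's feeds, then compare average mood against an untouched control group. This is purely illustrative; the sentiment scores, the contagion model, and every function name below are invented for the sketch and have nothing to do with Facebook's actual code or data.

```python
import random
import statistics

def make_feed(n_posts=50, seed=None):
    """Generate a toy feed: each post gets a sentiment score in [-1, 1]."""
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(n_posts)]

def filter_feed(feed, suppress="negative", rate=0.3, rng=None):
    """Randomly withhold a fraction of positive or negative posts,
    mimicking the study's feed manipulation."""
    rng = rng or random.Random(0)
    kept = []
    for score in feed:
        is_target = score < 0 if suppress == "negative" else score > 0
        if is_target and rng.random() < rate:
            continue  # this post never reaches the user
        kept.append(score)
    return kept

def user_mood(feed, susceptibility=0.5):
    """Invented contagion model: a user's mood tracks the mean
    sentiment of what they read."""
    return susceptibility * statistics.mean(feed)

# Paired design: user i sees the same underlying feed in both arms,
# but the treatment arm has 30% of negative posts suppressed.
rng = random.Random(42)
control = [user_mood(make_feed(seed=i)) for i in range(500)]
treated = [user_mood(filter_feed(make_feed(seed=i), "negative", 0.3, rng))
           for i in range(500)]

print(f"control mean mood: {statistics.mean(control):+.3f}")
print(f"treated mean mood: {statistics.mean(treated):+.3f}")
```

Even in this crude model, suppressing negative content measurably lifts the treatment group's average mood, which is the shape of the result the researchers reported; the ethical question the rest of this post takes up is that the half-million real subjects never consented to being in either arm.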

The research is notable not for its results but for what it reveals about consent and ethics in social media engineering, and in algorithmic systems generally. James Grimmelmann, Professor of Law at the University of Maryland, put it this way:

The real scandal, then, is what’s considered “ethical.” The argument that Facebook already advertises, personalizes, and manipulates is at heart a claim that our moral expectations for Facebook are already so debased that they can sink no lower. I beg to differ. This study is a scandal because it brought Facebook’s troubling practices into a realm—academia—where we still have standards of treating people with dignity and serving the common good. The sunlight of academic practices throws into sharper relief Facebook’s utter unconcern for its users and for society. The study itself is not the problem; the problem is our astonishingly low standards for Facebook and other digital manipulators.

In the coming months and years, I expect ethics to become an increasingly pertinent issue. It's about time we took a deep dive into it.