Who Determines What Will Be “Forgotten” On The Internet?

Readers may have heard of the strange, tragic story of a thirty-one-year-old Italian woman named Tiziana Cantone.  According to media reports, she made sex videos with her boyfriend at the time and sent them to several other people, among them one of her ex-boyfriends.  At some point in the spring of 2015, the videos were uploaded to various porn sites.  From that point on, Cantone became a target of ridicule and mockery.  She changed her name and moved to a different part of Italy, but everywhere she went, people eventually recognized her.

Cantone, now desperate to have the videos removed, found that it was no easy task.  Unfortunately for her, at least one of the videos appeared to show her consenting to having the sex filmed.  It has also been alleged that she was involved in several “swinging” relationships.  She finally took legal action against some of the porn sites, but the process proved laborious and she incurred about 20,000 euros in court costs.  In despair, she hanged herself.

The case brought into the spotlight the so-called “right to be forgotten” on the internet.  The right has been recognized in Europe, and permits people to have information about themselves removed from search engine results unless there are “particular reasons” not to do so.  Basically, a balancing test is applied to decide whether the public interest in having the information is outweighed by the individual’s privacy interest in removing it.  Not surprisingly, this cutting-edge legal concept has attracted fierce debate.  On one side are the privacy advocates, who justifiably argue that a person should not be plagued forever by personal information that sits in search engines for years.  On the other side are those who regard such removal as censorship and claim that this sort of internet sanitizing would eventually degrade the value of search engines as a whole.

Both sides, of course, have valid points.  The internet has become a vast dumping ground of stale, outdated, and irrelevant personal information.  At some point, people should have the right to get on with their lives and not have to worry about those drunken photos they posted on Instagram, or that traffic court case from 1998.  But the search engines see it differently, at least to some extent.  They argue that a “free society” demands unrestricted access to historical information.  In a recent article about the matter in the New York Times, one commentator stated:

“When we’re talking about a broadly scoped right to be forgotten that’s about altering the historical record or making information that was lawfully public no longer accessible to people, I don’t see a way to square that with a fundamental right to access to information,” said Emma Llansó, a free expression scholar at the Center for Democracy and Technology, a tech-focused think tank that is funded in part by corporations, including Google.

Not surprisingly, big business, governments, and some globalist voices are not happy about giving individuals the ability to control what is available about them online.  They know that information is money and power, and they fear the removal of data that might affect their bottom line.  Delinking, as the practice is called, would supposedly have the net effect of scrubbing the internet of useful information.

Governments are also worried.  They know that the internet and social media are among their primary ways to gather intelligence in real time.  In fact, a conference in London in November of 2016 highlighted the growing use of social media as intelligence mines for governments.  Militaries and governments (as well as non-state actors) are tripping over themselves to use social media for their own ends.  According to the article linked to in the preceding sentence:

According to a description of the project on the Thales website, the partners have created a demonstrator tool that is currently being tested with users from security organisations. They said the “Initial feedback is very positive.”

The tool is all about “real-time surveillance”: social media information coming into the system is “immediately analysed” using Big Data algorithms and techniques “to detect changes, trends or anomalies” and “identify potentially dangerous entities”.  The tool is already so powerful, claims Thales, that it takes just 5 to 10 seconds for new information appearing on the web “to show up in the system, so intelligence analysts have up-to-the-minute insights into situations as they evolve.”  The current dataset has some 70 million documents, with 25,000 new documents added daily, and search results delivered in less than 5 seconds.

Media Miser extracts and filters data on a particular topic as soon as it is posted online. Tools developed by the NRC process this content in real time by translating and summarising the data. The information is then assigned various ratings and descriptions: a tone rating (positive, negative, neutral); signs of emotion (anger, fear, etc.); the geographic location of the source; and the identities of the individuals or groups involved in making and distributing the content.
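To make the kind of pipeline described above concrete, here is a minimal illustrative sketch in Python.  This is not the actual Thales/NRC system, whose models are proprietary; the keyword lexicons and the AnnotatedPost structure below are invented stand-ins, and a real system would use trained language models rather than word lists.

```python
from dataclasses import dataclass, field

# Toy keyword lexicons.  A production system would use trained sentiment
# and emotion models; these word lists exist only for illustration.
POSITIVE = {"great", "good", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "awful", "furious"}
EMOTION_CUES = {
    "anger": {"furious", "outraged", "enraged"},
    "fear": {"scared", "afraid", "terrified"},
}

@dataclass
class AnnotatedPost:
    text: str
    location: str                 # geographic location, from post metadata
    tone: str = "neutral"         # positive / negative / neutral
    emotions: list = field(default_factory=list)

def annotate(text: str, location: str) -> AnnotatedPost:
    """Assign a tone rating and emotion tags to one incoming post."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    tone = "positive" if pos > neg else "negative" if neg > pos else "neutral"
    emotions = sorted(e for e, cues in EMOTION_CUES.items() if words & cues)
    return AnnotatedPost(text, location, tone, emotions)

if __name__ == "__main__":
    post = annotate("I am furious about this terrible decision", "London, UK")
    print(post.tone, post.emotions)   # -> negative ['anger']
```

Even this toy version makes the privacy stakes visible: every post is tagged, located, and attributed the moment it appears, which is precisely the capability governments are reluctant to see diminished by delinking.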

We can surmise that big business and governments are not going to take the lead in protecting the privacy of the average citizen.  In addition, privacy advocates note that the fear-mongering claims of governments and big business about delinking rules are overblown.  They note that Google already practices worldwide deletion and delinking in cases of copyright infringement or piracy.  Since they are already doing it now for big business, why can’t they do it for individuals?

The answer, of course, lies in balancing the legitimate need for privacy rights with the public’s need for access to information free of undue restrictions.  When information is of such a personal nature that it falls within the scope of privacy, the individual should be able to control it.  One possible practical way to implement the “right to be forgotten” is to use something like the software tool Oblivion.
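To illustrate how such a tool might weigh a removal request, here is a minimal sketch in Python.  This is not Oblivion’s actual interface; the DelistRequest fields and the crude integer scoring are invented assumptions, standing in for the identity verification and case-law criteria a real implementation would need.

```python
from dataclasses import dataclass

@dataclass
class DelistRequest:
    url: str
    requester_name: str
    subject_name: str        # who the content is actually about
    is_public_figure: bool
    is_journalistic: bool
    years_old: int           # age of the information

def decide(request: DelistRequest) -> str:
    """Toy balancing test: privacy interest vs. public interest."""
    # Step 1: the requester must be the person the content is about
    # (the kind of verification a tool like Oblivion aims to automate).
    if request.requester_name.lower() != request.subject_name.lower():
        return "reject: identity not verified"
    # Step 2: crude interest scores.  Real criteria would come from
    # case law, not from integer weights like these.
    public_interest = 2 * request.is_public_figure + request.is_journalistic
    privacy_interest = 1 + (request.years_old >= 5)   # stale data weighs more
    if privacy_interest > public_interest:
        return "delist"
    return "escalate to review authority"  # contested cases are not auto-decided

if __name__ == "__main__":
    request = DelistRequest("https://example.com/old-story", "Jane Doe",
                            "Jane Doe", is_public_figure=False,
                            is_journalistic=False, years_old=10)
    print(decide(request))   # -> delist
```

Note that contested cases are deliberately escalated rather than decided automatically, which anticipates the appeal authority argued for below.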

It should be possible to construct a framework that fairly balances personal rights against public ones.  But who will make these decisions?  Who will decide whether something belongs more in the public or the private domain?  If it is to be the search engines themselves, then there must be some other authority to appeal to in cases where wrong decisions are made.  No single entity should have veto power over what appears online; that would place too much power in too few hands.
