We all know (and have been warned) that what you put out on the internet can come back to bite you. We hear stories all the time of people having their personal or work lives disrupted by things they’ve said or posted on Facebook.
My personal favorite (and a great example) is the following “status update” on Facebook followed by the boss’s comment:
So now we have privacy controls and the ability to specify, piece by piece, what can be seen and by whom. If you use them, they’re better than nothing. At least your boss (or potential employer) can’t see that drunken photo of you dancing on the bar top quite as easily, even if you have, in fact, friended him or her.
Now let’s talk about Twitter. Twitter is completely public and indexed by search engines. I have Google Alerts (and other services) set up to monitor certain things, and I am always fascinated by what pops up in the results.
An item popped up in one of my Google Alerts last Thursday, and while I found it amusing at first, the more I thought about it, the more it concerned me.
What I’m talking about is a website called “Amplicate.” According to the website, its purpose is this:
“Amplicate collects similar opinions in one place; making them more likely to be found by people and companies.”
This would be fine because you actually DO have the ability to input your opinion on things if you, in fact, choose to participate in and interact with this website. Your opinion is, however, limited to whether something “Sucks” or “Rocks”.
My concern is that it apparently indexes Twitter in some way and automatically generates your “opinions” for you. I don’t know how it chooses which tweets to use to form your opinions, nor do I know how it selects the specific people for whom it forms (and announces) opinions. I do know that, at least in my case, I didn’t choose to participate in or interact with this website.
It seems to take keywords from your tweets and then determine whether you think the subject of the tweet “sucks” or “rocks”. It then posts that to the world under the guise that these are YOUR opinions. It’s obviously a computer generating these, because some of the “opinions” make no sense at all and are clearly taken out of context.
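To make the risk concrete: I have no idea how Amplicate actually works, but even a crude, purely hypothetical keyword matcher along these lines (a Python sketch with made-up word lists, not Amplicate’s code) shows how easily sarcasm and context get lost:

# Hypothetical illustration only -- not Amplicate's actual algorithm.
import re
from typing import Optional

SUCKS_WORDS = {"sucks", "hate", "awful", "terrible", "worst"}
ROCKS_WORDS = {"rocks", "love", "great", "awesome", "best"}

def guess_opinion(tweet: str, topic: str) -> Optional[str]:
    """Guess whether a tweet says `topic` sucks or rocks, using only keyword matching."""
    if topic.lower() not in tweet.lower():
        return None
    words = set(re.findall(r"[a-z']+", tweet.lower()))
    if words & SUCKS_WORDS:
        return topic + " sucks"
    if words & ROCKS_WORDS:
        return topic + " rocks"
    return None

# A sarcastic tweet is classified with no understanding of context:
print(guess_opinion("Oh sure, the Oscars were great... if you love falling asleep", "Oscars"))
# Prints "Oscars rocks" -- the sarcasm is lost entirely

Anything that simply counts positive and negative keywords, as this sketch does, will cheerfully turn a joke, a quote, or a complaint about something else entirely into a published “opinion” with your name on it.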
The peril is that someone who doesn’t realize these “opinions” were generated by a computer from your tweets may simply assume that they are, in fact, your opinions. If a potential (or current) employer is doing a little research on you for whatever reason, this could genuinely harm you.
As an example, the Google Alert that returned this discovery to me shows the following (it even hijacked my photo):
So, apparently, I think the word “enough” rocks and that the Oscars and Toyota suck.
Now we have to be careful not only about what we say, but also about how a computer might interpret it.