The tiny padlock icon that sits next to many web addresses, suggesting protection of users’ most sensitive information — like passwords, stored files, bank details, even Social Security numbers — is broken.
A flaw has been discovered in one of the Internet’s key encryption methods, potentially forcing a wide swath of websites to swap out the virtual keys that generate private connections between the sites and their customers.
On Tuesday afternoon, many organizations were heeding the warning. Companies like LastPass, the password manager, and Tumblr, the social network owned by Yahoo, said they had issued fixes and warned users to immediately swap out their usernames and passwords.
Slate magazine reports on a new psychology paper from researchers at the University of Manitoba, which sought to investigate whether people who engage in trolling were characterized by personality traits that fall in the so-called Dark Tetrad:
- Machiavellianism (willingness to manipulate and deceive others),
- narcissism (egotism and self-obsession),
- psychopathy (the lack of remorse and empathy), and
- sadism (pleasure in the suffering of others).
It is hard to overstate the results: The study found correlations, some quite strong, between these traits and trolling behavior. What's more, it also found a relationship between all the Dark Tetrad traits (except narcissism) and the overall time an individual spent, per day, commenting on the Internet.
Overall, the authors found that the relationship between sadism and trolling was the strongest, and that indeed, sadists appear to troll because they find it pleasurable. “Both trolls and sadists feel sadistic glee at the distress of others,” they wrote. “Sadists just want to have fun … and the Internet is their playground!”
(So remember: When Charles breaks out the ban hammer, he’s not doing it to stifle discussion or debate — he’s merely showing the door to people who really have no interest in being part of the community.)
The NSA is a topic of discussion on social media tonight because of 60 Minutes, but here’s something I bet you didn’t know about the incredibly intrusive techniques Facebook uses to monitor everything you do on their site (and beyond): Facebook Self-Censorship: What Happens to the Posts You Don’t Publish?
We spend a lot of time thinking about what to post on Facebook. Should you argue that political point your high school friend made? Do your friends really want to see yet another photo of your cat (or baby)? Most of us have, at one time or another, started writing something and then, probably wisely, changed our minds.
Unfortunately, the code that powers Facebook still knows what you typed—even if you decide not to publish it. It turns out that the things you explicitly choose not to share aren’t entirely private.
Facebook calls these unposted thoughts “self-censorship,” and insights into how it collects these nonposts can be found in a recent paper written by two Facebookers. Sauvik Das, a Ph.D. student at Carnegie Mellon and summer software engineer intern at Facebook, and Adam Kramer, a Facebook data scientist, have put online an article presenting their study of the self-censorship behavior collected from 5 million English-speaking Facebook users. It reveals a lot about how Facebook monitors our unshared thoughts and what it thinks about them.
The study examined aborted status updates, posts on other people’s timelines, and comments on others’ posts. To collect the text you type, Facebook sends code to your browser. That code automatically analyzes what you type into any text box and reports metadata back to Facebook.
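To make the mechanism concrete, here is a hypothetical sketch of the kind of client-side logic being described. Every identifier is invented for illustration; this is not Facebook's actual code. The paper reportedly counted an entry as "self-censored" once the user typed more than a few characters (a five-character threshold) without publishing, and reported that only metadata, never the typed text, was sent back.

```javascript
// Hypothetical sketch of client-side "self-censorship" detection, loosely
// modeled on the behavior described in Das and Kramer's paper.
// All names are invented; only metadata leaves the browser in this sketch.

const MIN_CHARS = 5; // assumed threshold for "the user really typed something"

function classifySession(session) {
  const typedEnough = session.charsTyped >= MIN_CHARS;
  return {
    selfCensored: typedEnough && !session.published,
    // Metadata only -- the text itself is never included.
    metadata: {
      boxType: session.boxType, // e.g. "status", "comment", "timeline post"
      charsTyped: session.charsTyped,
      published: session.published,
    },
  };
}

// Example: a status update the user typed out and then abandoned.
const abandoned = classifySession({ boxType: "status", charsTyped: 42, published: false });
console.log(abandoned.selfCensored); // true
```

The point of the sketch is that "not posting" is itself an observable event: the browser-side code can report that a draft existed without ever transmitting its content.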
Yes, Facebook is actually keeping track of the things you don’t post. The stuff you delete because you thought better of it. The stuff you thought was gone forever, bits lost in the ether. The stuff you didn’t want anyone to see.
Facebook sees it, and records it, and analyzes it.
Twitter’s Redesigned Block Feature Is a Stalker’s Delight - Update: Twitter Reinstates the Block Feature
Twitter has made a disastrous decision about its user interface, and Leigh Honeywell has one of the best posts on what it means and how to get around it: Changes to Twitter's Block Behavior - and a Workaround.
Twitter posted an update today to their blocking functionality. In my opinion, it’s a real step backwards for the usability of Twitter for anyone with a large number of followers, or facing any kind of harassment.
It used to be that when you blocked someone, it would force them to “unfollow” you, in addition to hiding them from your mentions. This is no longer the case:
Note: If your account is public, blocking a user does not prevent that user from following you, interacting with your Tweets, or receiving your updates in their timeline. If your Tweets are protected, blocking the user will cause them to unfollow you.
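The quoted policy boils down to a tiny decision table. Here is a hypothetical sketch of it (invented names, not Twitter's code), under the changed behavior described in that note:

```javascript
// Hypothetical model of the (since-reverted) December 2013 block semantics
// quoted above. Names are invented for illustration.

function blockEffects(account) {
  if (account.protected) {
    // Protected account: blocking still forces the blocked user to unfollow
    // and cuts off their access to your tweets.
    return { stillFollows: false, seesTweets: false, canInteract: false };
  }
  // Public account: under the changed policy, blocking did NOT stop the
  // blocked user from following you, seeing your tweets, or interacting.
  return { stillFollows: true, seesTweets: true, canInteract: true };
}

console.log(blockEffects({ protected: false })); // public account: block changes nothing
```

Under the old behavior (and the behavior Twitter later restored), the public-account branch would also return `stillFollows: false`.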
The obvious objection to my objection is “well your stuff is public anyway, they could just make a new account” - the thing is, this reflects a fundamental misunderstanding of 1) how people use blocking and 2) how harassers operate.
I’m hoping Twitter will rethink this decision. There’s already an outcry (see #RestoreTheBlock), and it’s going to get worse as more people find out that the block feature basically doesn’t work any more.
The old way was far from perfect, and determined nutjobs could get around it, but this new way amounts to giving up and saying, “Oh well, we don’t want the stalkers mad at us, so you’ll just have to deal with it.”
Twitter spokesperson Jim Prosser says Twitter made the change because it thinks it will cut down on the vitriol, anger, and resentful Jezebel articles that result from knowing you’ve been blocked. “Now when you block a user, they cannot tell that you’ve blocked them,” tweeted Twitter CEO Dick Costolo. “It was a longstanding request from users of block.”
“We saw antagonistic behavior where people would see they were blocked and be mad,” says Prosser.
In my NSHO, this is one of the worst decisions by an Internet company in a while. Twitter has a big problem with targeted harassment, and they should be thinking about how to make it more difficult for abusers.
Instead they’ve made it easier for stalkers to do their dirty work, because they don’t want to lose them as customers.
Well, that didn’t take very long! Twitter has now reinstated the block feature.
We're reverting the changes to block functionality. https://t.co/LOvip2QmLX
I’ll forgo the temptation to say “what took you so long”: Google’s Eric Schmidt Announces New Blocks on Child Porn.
Google’s executive chairman Eric Schmidt has outlined how his company is introducing new measures to block child pornography from appearing in its searches. Schmidt explained the changes to Google’s search function in an op-ed in Britain’s Daily Mail newspaper following a campaign of pressure from British politicians.
Schmidt broke the new measures down into subcategories that included “cleaning up” more than 100,000 search results and introducing new warnings that appear above more than 13,000 results, warnings that reiterate that child porn and child sexual abuse are illegal and offer avenues for help. Despite these changes, Schmidt says in his op-ed that “there’s no quick technical fix when it comes to detecting child sexual abuse imagery.” Instead, Google will use human reviewers to discern the difference between “genuine abuse” and “innocent pictures of kids at bathtime.” Schmidt also details plans to send engineers to the UK’s Internet Watch Foundation and the US National Center for Missing and Exploited Children, in addition to funding internships at both organizations.
UK Prime Minister David Cameron is in the midst of an attempted crackdown on pornography in general, with a particular focus on stopping search engines from showing child porn. Earlier this year, Cameron called for “Google, Bing, Yahoo, and the rest” to censor their search results, saying in July: “If there are technical obstacles to acting on [search engines], don’t just stand by and say nothing can be done; use your great brains to help overcome them.” Google has previously shied away from censoring its results directly, choosing instead to develop an open database to which law enforcement agencies, charities, and relevant organizations could add the details of abusive imagery that could then be hidden or removed.