Monday, May 8, 2023

Twitter Criticized For Allowing Texas Shooting Images To Spread

Pat Holloway has seen her share of disasters in her 30-year career as a photojournalist: the 1993 Branch Davidian siege near Waco, Texas; the 1995 bombing of the Oklahoma City federal building by Timothy McVeigh; and the 2011 tornado in Joplin, Missouri.

But this weekend, she said in an interview, enough was enough. When graphic footage of bloodied people at a Texas mall, where at least nine people were killed, including the gunman, started circulating on Twitter, she called on Elon Musk, the owner of Twitter, to do something.

"These families don't deserve to have their dead relatives spread across Twitter," Ms. Holloway, 64, said in an interview on Sunday.

Ms. Holloway was one of a number of Twitter users who criticized the social network for allowing gruesome images, including footage of a bloodied toddler wandering the scene, to spread after Saturday's shooting. While graphic images have become commonplace on social media, where cellphone cameras and internet connections allow anyone to post anything, the stark nature of these images has drawn fresh scrutiny to Twitter's content moderation, which has been scaled back since Mr. Musk bought the company.

Like other social media companies, Twitter has once again found itself in a position familiar to traditional newspaper publishers: struggling to make tough decisions about what to show its audience. While newspapers and magazines have generally shielded their readers from the most graphic depictions of violence, they have allowed a few exceptions, as in 1955, when Jet magazine published a photo of Emmett Till, a 14-year-old Black boy beaten to death in Mississippi, to depict the horrors of the Jim Crow-era South.

Unlike newspaper and magazine publishers, tech companies like Twitter must police millions of users, relying on a combination of automated systems and content moderators to enforce their decisions at scale.

Other tech companies, such as Facebook's parent, Meta, and YouTube's parent, Alphabet, have invested in large teams to reduce the spread of violent images on their platforms. Twitter, for its part, has cut back on content moderation since Mr. Musk bought the site late last October, laying off full-time employees and contractors on the trust and safety teams that oversee content moderation. Mr. Musk, a self-proclaimed "free speech absolutist," said last November that he would create a "content moderation council" to decide which posts could stay up and which should be deleted. He later abandoned that promise.

Twitter and Meta did not respond to requests for comment. A YouTube spokeswoman said the site had begun removing videos of the massacre and promoting credible news sources.

Graphic content has never been completely banned from Twitter, even before Mr. Musk took over. The platform, for example, allowed images of people killed and injured in the war in Ukraine to remain, deeming them newsworthy and informative. The company sometimes places warning labels or pop-ups on sensitive content, requiring users to click through before viewing the images.

While many users publicly shared images of the carnage, including of the dead gunman, others retweeted them to highlight the horrors of gun violence. "NRA America," one user wrote. "It's not going anywhere," wrote another. The New York Times does not link to social media posts containing graphic images.

Claire Wardle, co-founder of Brown University's Information Futures Lab, said in an interview that technology companies have a responsibility to preserve newsworthy or otherwise important images for their users, even those that are difficult to view. As precedent, she cited news organizations' decision to publish a photo of Kim Phuc Phan Thi, known as "Napalm Girl," after she was photographed suffering from napalm burns during the Vietnam War.

She added that she would prefer graphic images of important events be left online, with an overlay requiring users to log in before viewing the content.

"This is news." "A lot of times we see pictures like this in other countries and nobody bats an eye. But it happens to Americans and people say, 'Should we see this?'

For years, social media companies have struggled to curb the spread of gory images and videos after horrific attacks. Last year, Facebook came under fire for running an ad alongside a video of a racist shooting in Buffalo, New York, which had been streamed live on the video platform Twitch. The Buffalo gunman said he had been inspired by the 2019 mass shooting in Christchurch, New Zealand, which killed at least 50 people and was broadcast live on Facebook. Over the years, Twitter has removed versions of the Christchurch video, saying the footage glorified the shooter's violent message.

While the Texas mall shooting was trending on Twitter, it was barely visible on other online platforms on Sunday. Keyword searches for the Allen, Texas, shooting on Instagram, Facebook and YouTube yielded mostly news reports and a few public eyewitness videos.

Sarah T. Roberts, a UCLA professor who studies content moderation, pointed to the differences between editors at traditional media companies and social media platforms, which are not bound by traditional journalism ethics, including minimizing harm to viewers and to victims' friends and family.

"I understand where people are coming from who want to share these images on social media," Ms Roberts said. But unfortunately, social media as a business is not built to support this. It was created to take advantage of the distribution of these images.

Ryan Mac contributed reporting.
