Twitter's new head of trust and safety said the company is leaning heavily on automation to moderate content, doing away with some manual reviews and favoring restrictions on the spread of certain content rather than removing it outright.

Ella Irwin, Twitter's vice president of trust and safety products, told Reuters that Twitter will also restrict hashtags and search results that are misused in areas such as child exploitation, regardless of the potential impact on "benign uses" of those terms.

"The most important thing that has changed is that the team is fully empowered to move quickly," Irwin said on Thursday, in the first interview given by a Twitter executive since Elon Musk's acquisition of the social media company in late October.

Her comments come as researchers escalate warnings about a rise in hate speech on the service, especially after Musk announced an amnesty for accounts suspended under the company's previous leadership.

The company has faced many questions about its ability and willingness to moderate harmful and illegal content since Musk laid off half of Twitter's employees and demanded long working hours, prompting hundreds more employees to depart.

Advertisers - Twitter's main source of revenue - have fled the platform due to concerns about brand safety.

On Friday, in a meeting with French President Emmanuel Macron, Musk pledged to "significantly strengthen content moderation and protect freedom of expression."


Safety is top priority

Irwin said Musk encouraged the team not to worry about how their actions might affect user growth or revenue, saying safety was the company's top priority.

The safety approach Irwin described reflects, at least in part, an acceleration of changes already planned since last year around Twitter's handling of offensive content, according to the former employees with knowledge of the matter.

One slogan Musk has recently invoked is "freedom of speech, not freedom of reach," under which some tweets that violate the company's policies on offensive content would be left up but prevented from appearing in places such as the home timeline and search, according to Irwin.

Twitter has long deployed "visibility filtering" tools for disinformation and had already incorporated them into its official policy for dealing with abusive behavior before Musk's acquisition.

This approach allows for more free speech while minimizing the potential harms associated with widespread offensive content.

The number of tweets containing offensive content rose sharply in the week before Musk tweeted on November 23 that impressions, or views, of hate speech were declining, according to the Center for Countering Digital Hate.

The researchers said that tweets containing anti-Black slurs that week were triple the number seen in the month before Musk took over, while tweets containing slurs against gay men rose 31%.


More risks

Irwin, who joined the company in June and previously held security positions at other companies including Amazon and Google, dismissed suggestions that Twitter does not have the resources or willingness to protect the platform.

She said the layoffs did not significantly affect the full-time and part-time employees working in what the company calls its "Health" divisions, including "critical areas" such as child safety and content moderation.

Two sources familiar with the cuts said more than 50 percent of the Health engineering unit had been laid off.

Irwin did not immediately respond to a request for comment on that assertion, but she has previously denied that the Health team was deeply affected by the layoffs.

She added that the number of people working in the child safety field had not changed since the acquisition, and that the product manager for the team was still present.

She said Twitter was refilling some positions held by people who had left the company, though she declined to provide specific figures.

She said Musk was focused on making greater use of automation, arguing that the company had erred in the past by relying on time- and labor-intensive human reviews of harmful content.

"He encouraged the team to take more risks, move quickly and keep the platform safe," she added.

On child safety, for example, Irwin said Twitter had shifted toward automatically removing tweets reported by trusted figures with a track record of accurately flagging harmful posts.

Carolina Christofoletti, a threat researcher at TRM Labs who specializes in child sexual abuse material, said she had noticed Twitter recently removing some content as quickly as 30 seconds after it was reported, without acknowledging receipt of the report.

In an interview Thursday, Irwin said Twitter had removed about 44,000 accounts implicated in child safety violations, in cooperation with cybersecurity group Ghost Data.

Twitter is also restricting hashtags and search results frequently associated with abuse, such as searches for "teen" pornography.

Irwin said earlier concerns about the impact of such restrictions on permitted uses of those terms were gone.

She added that the use of "trusted reporters" was "something we've discussed in the past on Twitter, but there's been some hesitation and frankly some delay."

"I think we now have the ability to move forward with things like that," she concluded.