Momentous Facebook Lawsuit Shows Need to Integrate Morality in Technology

Some private companies, like Gan Jing World, have already adopted an ethics-over-profit approach to developing and providing technology.
A pedestrian walks in front of a Meta sign at Facebook headquarters in Menlo Park, Calif., on Oct. 28, 2021. (Justin Sullivan/Getty Images)
Gary Bai
1/8/2024
Updated: 1/9/2024

Molly Russell was 14.

Ms. Russell was active on social media platforms including Instagram, Pinterest, and Twitter. Around the time of her death in 2017, her father found images encouraging self-harm on her computer. She "died from an act of self-harm while suffering depression and the negative effects of online content," Andrew Walker, the coroner who led the inquest into her death in the United Kingdom, concluded.

“I have no doubt Instagram helped kill my daughter,” Molly’s father, Ian, said of Meta’s social media product in an interview with the BBC in 2019.

Ms. Russell’s story was told in a lawsuit that 42 state attorneys general filed against Meta (formerly Facebook) in October. In that case, the states levied a salvo of charges at Meta, alleging that the social media giant knowingly harms children by making them addicted to its products—and concealing the fact that it’s doing so to chase profits.

“Just like Big Tobacco and vaping companies have done in years past, Meta chose to maximize its profits at the expense of public health, specifically harming the health of the youngest among us,” Phil Weiser, Colorado’s attorney general, wrote in a statement.

For those researching social media's influence on human behavior, the states' lawsuit and the whistleblower testimony behind it propel to the surface a much bigger problem: big tech companies, as large and pervasive as they are, are not serving humanity, a failure that calls for a paradigm shift toward integrating morality and humanity into technological design.

The Epoch Times has contacted Meta for comment. In response to the lawsuit in October, Meta spokesperson Liza Crenshaw said in a statement to the press that the company is “disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path.”

‘Addiction, Coercion, Manipulation’

Unprecedented in scale as a state-led legal challenge against big tech, October’s Meta lawsuit sprang from the states’ years-long probe into Meta’s business practices.

Meta-owned platforms such as Instagram, which boasted a total revenue of $51.4 billion and 2.3 billion users in 2022, "entice, engage and ultimately ensnare youth and teens" through products designed to exploit kids' psychological vulnerabilities and features aimed at addicting them to the platforms, the states allege in their lawsuit.

Frances Haugen, a whistleblower who previously worked at Meta, supplied ammunition for the case when she leaked the company's internal documents showing that Meta knows Instagram is toxic for teenage girls but downplays its harmful effects in public.

"Facebook's products harm children, stoke division, and weaken our democracy," Ms. Haugen said in a congressional hearing in October 2021. "The company's leadership knows how to make Facebook and Instagram safer but won't make the necessary changes because they have put their astronomical profits before people."

More than 95 percent of Meta's revenue comes from advertising. The more engaged people are, the more time they spend on the platform, the more advertisements they see, and the more money Meta earns.
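
As a rough, back-of-the-envelope illustration of that incentive (every number below is hypothetical, not one of Meta's actual figures), revenue in an ad-funded model scales directly with time spent:

```python
# Hypothetical ad-revenue arithmetic: in an ad-funded model,
# revenue scales directly with time spent on the platform.
daily_users = 2_000_000_000   # hypothetical daily active users
minutes_per_user = 30         # hypothetical average minutes per day
ads_per_minute = 4            # hypothetical ad impressions per minute
ecpm_dollars = 0.50           # hypothetical revenue per 1,000 impressions

impressions = daily_users * minutes_per_user * ads_per_minute
daily_revenue = impressions / 1_000 * ecpm_dollars
print(f"Daily ad revenue: ${daily_revenue:,.0f}")

# Double minutes_per_user and daily_revenue doubles with it:
# maximizing engagement is baked into the arithmetic.
```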

This business model means that there's a fundamental misalignment between the purpose driving the businesses of companies like Meta—making money—and users' intentions in using the technology, James Williams, author and technology ethicist, said in an interview with The Epoch Times on Friday. Mr. Williams is a former strategist at Google who received the Founder's Award, the company's highest honor for its employees, in 2010 for his work on search advertising.

As a result, Mr. Williams said, the “modalities of harm”—such as “addiction,” “coercion,” and “manipulation”—are built into the information environments that are ever-present in people’s lives.

“There’s a fundamental dissonance between what we want for ourselves and what these systems want for us,” he said. “The systems that we trust to shape our lives are not fit for purpose, and, indeed, aren’t even designed to help us do that—social media is not designed to help us be social.”

What Can Be Done?

According to Divya Siddarth of the Collective Intelligence Project, an incubator for transformative technologies that works with companies like OpenAI, the "systemic" problems posed by today's technological institutions call for an answer that goes beyond a cut-and-dried regulatory solution, which usually stalls in the political process.

“Duct-taping our institutions isn’t going to work. We have to grit our teeth and build something new,” Ms. Siddarth said.

As an example, Ms. Siddarth recalled one of her projects with Anthropic, an artificial intelligence safety and research company, which proposed a transparency-based model that would open companies building new technologies to public scrutiny.

“We were to ask 1,000 representative Americans what they wanted out of a constitution that governs artificial intelligence, and then we actually retrained one of the models based on that—and we committed to transparency of those rules to the process.”
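
In data terms, the first step of a process like the one Ms. Siddarth describes is straightforward aggregation: collect candidate principles from a representative panel, rank them by support, and publish both the resulting rules and how they were chosen. A toy sketch of that step (the statements, threshold, and names below are illustrative assumptions, not the project's actual methodology):

```python
from collections import Counter

# Hypothetical endorsements from a representative panel (made-up data).
endorsements = [
    "Do not encourage self-harm.",
    "Explain reasoning when asked.",
    "Do not encourage self-harm.",
    "Respect user privacy.",
    "Do not encourage self-harm.",
    "Respect user privacy.",
]

MIN_SUPPORT = 2  # illustrative threshold for inclusion in the constitution

counts = Counter(endorsements)
constitution = [p for p, n in counts.most_common() if n >= MIN_SUPPORT]

# Publishing both the rules and how they were chosen is the transparency step.
for rank, principle in enumerate(constitution, 1):
    print(rank, principle)
```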

“A lot of social media companies are famously tight-lipped about giving researchers access to data for their impact on elections, for example,” Ms. Siddarth said. “So, opening yourself up for scrutiny—and then holding yourself accountable—would look like giving researchers access to that data.”

Mr. Williams echoed Ms. Siddarth's view, noting that regulatory changes amount to a "whack-a-mole" approach that addresses only "acute" kinds of harm.

What is needed for "generational" change in the technological landscape, he said, is a cultural paradigm shift and increased technological awareness, leading more companies to develop—and more people to use—technologies that better align with people's intentions for using them in the first place.

“I’m inspired by projects where you see a more explicit alignment between the intentions of the user and the intentions of the company,” Mr. Williams said. “It’s baked into the business model—companies that don’t just say they care about user interests but will actually commit to it.”

Starting With Morality

One such technology company that promises to infuse moral considerations into developing and providing its products is Gan Jing World, an online video content platform. Its executive vice president of technology, James Qiu, escaped communist China in 1989 after facing police summons over his role in the Tiananmen Square protests. Around the same time, he was accepted into a computer engineering graduate program in Canada, allowing him to avoid imminent politicized prosecution in China. He went on to forge a career in Silicon Valley, working at tech firms including Apple and Oracle.

A major way moral considerations play out in Gan Jing's products is the platform's content. Since its inception in June 2022, Gan Jing has promised and implemented an online environment "free from violent, erotic, criminal, and harmful material," such as content advocating communism.

Logo of newly launched information platform "Gan Jing World." (Courtesy of Gan Jing World)

“The whole essence of the platform is that you’re avoiding a lot of the addictive stuff like violence, eroticism, drug use, over-sexualization, and content that aligns with the Chinese Communist Party ideology,” Nick Janicki, a spokesperson for Gan Jing, told The Epoch Times in a Friday interview.

Another aspect of this morality-based design is that the platform tries to steer users back toward their original intention for coming to the platform, based on goal-based questionnaires that users complete when they sign up, Mr. Janicki said.

“The analogy I sort of use is the cat video one: if you’re watching cat videos all day, you’re going to get a ton more cat videos on YouTube,” Mr. Janicki said. “The difference with Gan Jing World is, it doesn’t drive you down what it finds to be the most addictive necessarily. It’s going to try to always give you sort of a pattern interrupt based on what you initially selected as your interests.”

“So, if you initially selected as a nursing student, and are only interested in things that are career-related, and you start watching cat videos, you’re going to start seeing more career stuff pop up, because that was your initial intent,” Mr. Janicki said, adding that the company is developing more features toward this end.
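
In recommender-system terms, what Mr. Janicki describes amounts to re-ranking engagement-driven suggestions against the interests a user declared at sign-up, so declared intent can interrupt a run of one topic. A minimal sketch of such an intent-weighted ranker (the scoring blend, weights, and names here are illustrative assumptions, not Gan Jing World's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topic: str          # e.g. "cats", "nursing"
    engagement: float   # predicted watch-time score from recent behavior, 0..1

def rank_with_declared_intent(candidates, declared_topics, intent_weight=0.6):
    """Re-rank candidates so the user's sign-up interests ("original intent")
    can interrupt a run of purely engagement-driven recommendations."""
    def score(video):
        intent_bonus = 1.0 if video.topic in declared_topics else 0.0
        # Blend: engagement still matters, but declared intent pulls items up.
        return (1 - intent_weight) * video.engagement + intent_weight * intent_bonus
    return sorted(candidates, key=score, reverse=True)

# A nursing student who binged cat videos still sees career content surface first.
feed = [
    Video("Funny cats #27", "cats", engagement=0.9),
    Video("NCLEX study tips", "nursing", engagement=0.4),
    Video("More cats", "cats", engagement=0.8),
]
for v in rank_with_declared_intent(feed, declared_topics={"nursing"}):
    print(v.title)
```

With the hypothetical 0.6 intent weight above, the nursing video outranks the higher-engagement cat videos, which is the "pattern interrupt" behavior Mr. Janicki describes.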

That's part of a mindset shift toward treating the individual, rather than advertisers, as the client, which technology ethics organizations have called for for years, Mr. Janicki said, adding that Gan Jing is among the first to implement the practice.

Lastly, in a deeply polarized world, Gan Jing envisions creating an online community that aspires to traditional culture and the divine, which it believes can unite people.

“Traditional culture can best be described as inherent cultural uniqueness and traits. In Chinese culture, as an example, traditions are varied and diverse across multiple ethnic groups,” Mr. Janicki said. “Many cultures have unique dances, food, and attire. In many traditions, family is of utmost importance, as is a belief and reverence for the divine.”

“The mission of the entire platform is to use technology to empower human beings to go back to traditional culture and divinity,” Mr. Janicki said. “It’s appealing to people who truly feel that value structure.”