Twitter's biggest fail yet
Having a major-party Presidential candidate announce their campaign in a Twitter Space was supposed to be a huge win. It didn't go well for anyone.
On Wednesday, Florida Governor Ron DeSantis announced he was running for President. Instead of taking the traditional route of making a speech or giving an interview, he jumped into a Twitter Space with Elon Musk. It did not go well.
At first, some 600,000 people tried to listen in. I say tried, because the app kept crashing and the Twitter Space kept kicking people out--including the Governor and David Sacks, who was supposed to be hosting the event.
"We are kind of melting the servers," Sacks said at one point.
The server melting went on for 15 minutes or so before the Space ended. A new Space was opened up, this time under Sacks' account--presumably because fewer people would try to join if it weren’t Musk that started it. That’s good for the servers, but not great if you're Governor DeSantis and want to tell a lot of people you're running for President.
DeSantis calculated that making the announcement in this way would help him control the narrative. As an added bonus, he got to make a statement about how the traditional media is biased or corrupt, bypassing it altogether. Well, at least until he went on Fox News later Wednesday night.
The thing is, he didn't control the narrative. His calculation was wrong. Not only did Twitter's meltdown become the main story, but even after they got it working, DeSantis spent at least 10 minutes listening to Musk and Sacks talk about Twitter and their thoughts on what a momentous event this was for "rooms on social media," whatever that means.
If you go on Fox News or CNN to launch a campaign, you're not going to have to sit there and listen to Anderson Cooper marvel at the wonders of cable television distribution. You might face some pointed questions, but mostly the focus will be on you and the ideas you plan to campaign on.
There really are three things you can take away from the whole thing:
Gov. DeSantis chose to launch his campaign by catering to the “very online” crowd, and that says a lot about what influences his beliefs and—as a result—his campaign. Most people don’t use Twitter. They get their news from, you know, news outlets—most of which covered the event as a debacle.
Twitter has never really been known for having a great tech stack. Still, the fact that no one thought to make sure it could handle the attention this was sure to generate is a big problem for the future of the platform.
Elon Musk isn’t nearly as interested in the open exchange of ideas as he is in building and promoting his most expensive toy. He sees this as a win purely because of the attention it generated. That’s fine, but it should be a warning for anyone who might be thinking of leveraging Musk for their own benefit—you’ll probably get burned.
Social media gets a warning label
The United States surgeon general, Dr. Vivek Murthy, issued a public warning earlier this week about the risks of social media to young people. In a 19-page report, Dr. Murthy suggests steps to fully understand the possible "harm to the mental health and well-being of children and adolescents."
The report included practical recommendations to help families guide children's social media use, such as keeping mealtimes and in-person gatherings free of devices and creating a "family media plan." The report could help encourage further research to understand whether social media use and the soaring rates of distress among adolescents are related.
Surgeon General Warns That Social Media May Harm Children and Adolescents
New York Times
The report recommends social media platforms take steps to minimize the harm to youth, including:
Developing and implementing policies and practices that prioritize children’s and adolescents’ safety and well-being.
Providing greater transparency about how algorithms work and how content is moderated.
Investing in research to better understand the impact of social media on youth mental health.
Collaborating with researchers, policymakers, and other stakeholders to develop evidence-based solutions.
ChatGPT and privacy on your iPhone
OpenAI released an iOS app for ChatGPT and it quickly became the most popular free app in the App Store. That's not surprising considering some reports suggest ChatGPT had more than 100 million users in January--just two months after it launched. That would make it the fastest-growing technology product of all time.
For comparison, it took Facebook four and a half years to reach that number. Even TikTok took nine months.
The iOS app does, however, come with one important tradeoff that users should be aware of. It's a big enough deal that the app prompts you the first time you open it. In addition to a caution that ChatGPT may just make things up, there's another warning about sharing personal information because "Anonymized chats may be reviewed by our AI trainers to improve our systems."
OpenAI's privacy policy says that when you "use our Services, we may collect Personal Information that is included in the input, file uploads, or feedback that you provide." Specifically, that means that if you ask ChatGPT questions that contain personal information, that information will be sent to OpenAI. That's a big deal when you realize your chat may be read by a human reviewer.
OpenAI launches ChatGPT app for iOS, Android coming soon
CNBC
The company says it anonymizes conversations before they are seen by a human, but that just means that it removes identifying information from the metadata of the file--not the content of your prompt. If you include personal information, that information will still be included.
The internet is not dead yet
Last week, the Supreme Court of the United States released opinions in two cases that had the potential to break the internet. The cases involved lawsuits against Twitter and Google alleging the two companies were liable for the spread of terrorism-related content posted on their platforms. Historically, platforms have been immune from liability for content generated by users under a law known as Section 230.
Supreme Court rules for Google, Twitter on terror-related content
The Washington Post
The law does two things. First, it shields platforms from liability for the content posted by users. Second, it gives platforms the right to moderate content as they see fit. A user can’t sue because Twitter suspends their account for posting racist content, for example. Likewise, Twitter can’t be sued if it doesn’t delete objectionable content.
There are some exceptions, but the basic idea that platforms aren’t liable for content posted by users has been the underpinning principle that has made the internet a thing. Without it, there would be no social media or blogs, and most websites would look very different.
In the cases decided Thursday, the court was asked to hold the two companies liable for content that was in violation of anti-terrorism laws. The Supreme Court essentially declined to decide anything on the question of platform immunity, but dismissed the cases on other grounds, leaving the internet as we know it intact, for now.
Montana is the first state to ban TikTok
Montana’s Governor signed a bill that prohibits TikTok from operating in the state and prohibits users from downloading the app. Apple and Google would be required to remove the app from their app stores for users in the state, though it’s not clear how Montana expects that to happen.
The law has already been challenged by a group of creators who say it infringes on their First Amendment rights.
“The Act attempts to exercise powers over national security that Montana does not have and to ban speech Montana may not suppress,” the lawsuit says. While Congress has some leeway to impose restrictions for national security reasons, states don’t have the same power.
There are very real problems with TikTok, but it doesn’t seem like passing futile laws is the way to solve any of them. This one will almost certainly be struck down.