The 2024 social media race has begun

With the help of Mohar Chatterjee and Derek Robertson

Buckle up.

The 2024 presidential campaign may have barely begun, but we’re already getting a glimpse of just how online, freewheeling and disorienting it is likely to be.

Thank AI, Elon Musk and the increasingly digital nature of presidential campaigns.

Over the weekend, a deepfake video that superimposes Ron DeSantis’ face and voice onto The Office character Michael Scott went viral with the help of a tweet from Donald Trump Jr.

The doctored video, first released by C3PMeme, a pro-Trump video account with roughly 50,000 followers on Twitter, is notable for how convincing the DeSantis deepfake looks.

The C3PMeme account illustrates the remarkable pace at which the technical capabilities of anonymous online partisans are evolving.

Compare the obviously photoshopped faces of Democratic politicians in a video the account posted in November to the uncanny DeSantis deepfake it posted this morning (and keep in the back of your mind the static memes that Trump supporters created during Donald Trump’s first presidential campaign).

Adding to the confusion, Elon Musk’s anti-gatekeeping ethos means that videos like this will circulate in an environment with fewer controls on potentially misleading material.

Prior to Musk’s takeover, a blue checkmark on Twitter indicated a verified account belonging to a notable person. Under Musk, a blue checkmark is available to anyone who pays a subscription fee. So C3PMeme’s AI deepfakes, and those of anyone else willing to shell out $8 a month, are paired with what looks like Twitter’s seal of approval.

Then there are the campaigns themselves. Much has already been made of DeSantis’ very online choice to announce his run on Twitter Spaces last week.

But other campaigns’ reactions to DeSantis’ glitchy rollout, as they rushed to pounce on the race’s first meme-able moment, tell us just as much about the freewheeling digital environment heading into 2024.

Start with President Joe Biden. In recent memory, the norm for incumbents (not named Trump) seeking re-election has been to stay above the fray, meaning they refrain from commenting on potential rivals in the other party’s primary.

But as technical glitches delayed DeSantis’ announcement on Twitter Spaces, Biden jumped right into the social media fray, tweeting “This link works” and sending followers to a donation page for his re-election campaign.

Apparently the incentives of social media sparring now outweigh the benefits of presidential gravitas: Even Musk, whose relationship with Democrats has soured dramatically in recent months, agreed that the tweet was “total crap.”

Biden’s tweet was also notable for its speed. Posted 16 minutes into the delayed start, it essentially amounted to live commentary on a rival’s campaign event, something that was considered shocking when Trump first live-tweeted through a Democratic primary event in 2015.

The Trump campaign’s response was also notable for the speed and sophistication with which it deployed specially crafted videos to try to frame the moment as a disaster for DeSantis and Musk.

Within hours, Trump’s Instagram account posted three videos to its 23 million followers poking fun at the rollout. One contrasted screenshots of his rival’s glitchy Twitter announcement with soaring footage of supporters cheering for Trump. Another showed Musk’s SpaceX rocket, captioned “Ron! 2024,” crashing at launch. And then there was the deepfake parody of the event, in which Musk and DeSantis are joined in conversation by George Soros, Adolf Hitler and the devil.

Consider that video, in which Hitler briefly questions Satan’s sexual orientation, the starting gun for the 2024 social media scrum. Get ready for a head-spinning 18 months.

As artificial intelligence becomes an increasingly urgent issue across the global economy, conversations about its risks are becoming much more specific. Take the energy sector: AI can solve complex resource-allocation problems in real time, but integrating it can also introduce new kinds of risk to critical infrastructure. How should we think about them?

A panel of experts met on Tuesday to discuss whether AI can (safely) be used to help deliver vital utilities like electricity to Americans.

Their conclusion? A qualified maybe, and always with a human in the loop. The panelists included Nvidia’s Mark Spieler, Brown University professor emeritus John Savage, Electric Power Research Institute senior technical executive Jeremy Renshaw, and Landis+Gyr senior director Daniel Robertson. They spoke at a virtual media briefing for the U.S. Energy Association, a non-profit, non-lobbying coalition of think tanks and agencies.

Renshaw noted that EPRI has been exploring how to use AI to help operators better distribute power across the grid. In each case, however, any solution the AI offers must pass through a human before any action is taken.

Meanwhile, John Savage argued that critical infrastructure cannot be entirely entrusted to automated systems, and that regulation is needed in areas where there is a high risk of incidents that harm society. Savage said the risk of using AI to support the distribution of critical utilities should be measured as an “impact-weighted probability,” since the impact of a failure in a system like the electrical grid would be “catastrophic.” — Mohar Chatterjee

State governments are proposing their own AI policy experiments, with New Jersey considering setting up a new office to regulate the technology’s use in government.

POLITICO’s Daniel Hahn reported this morning in New Jersey Playbook on a bill proposed by state Sen. Troy Singleton that would delegate responsibility for AI regulation to an “AI officer,” along with an advisory board that would provide feedback on the officer’s proposals.

Singleton told Daniel that he doesn’t “think it’s in our best interest that I, as a state legislator, try to be overly prescriptive as far as what public policy [around artificial intelligence] looks like,” and that the bill “will allow people with extensive experience in this area to use that experience to formulate what this public policy should look like.”

Daniel says the bill is unlikely to see much action amid New Jersey’s busy budget season, but its introduction is a signal that, much as with crypto and blockchain, states can quickly become laboratories for AI policy experiments. A report from the law firm Bryan Cave Leighton Paisner finds that 22 states plus the District of Columbia are considering or have adopted AI-related policies. — Derek Robertson