Going into Election Day, there was a long list of horror stories that might have played out on social media: rampant interference by foreign governments, widespread hoaxes, a flood of deliberately false information about voting and much more.
The worst possibilities appeared not to have come to pass Tuesday, some tech researchers said, although they weren't ready to give tech platforms a sterling review just yet — especially after President Donald Trump set off a fresh wave of misinformation early Wednesday by falsely claiming that he had won.
A full accounting of how the end of the campaign played out on sites like Facebook, Twitter and YouTube is still to come as researchers and the companies themselves examine how people used the platforms. But early assessments indicated that, at least publicly, social media didn't stand out as a problem on Election Day.
"We might find out more information in the coming days, but I haven't seen any evidence of something significant," said Dipayan Ghosh, a co-director of the Digital Platforms & Democracy Project at the Harvard Kennedy School. Ghosh, a former Facebook adviser, has often been critical of the company.
"Despite the massive target that the U.S. is, we really haven't seen much, and social media has been pretty effective in addressing the problems," he said.
Any success the platforms achieved wasn't due to a lack of misinformation. There were plenty of examples of misleading information and erroneous claims, even if their precise impact on the election outcomes remains uncertain.
False information about voting in Pennsylvania appeared all over social media and right-wing websites, while in Virginia, election officials said a misleading video circulated showing a person burning sample ballots.
In one of the day's most viral videos, posted by the publisher of a conservative news website, a poll watcher appeared to be denied entry to a polling station in Philadelphia. It was shared more than 33,000 times and had racked up 3 million views by Wednesday, although there was no evidence of a widespread or deeper problem.
Some misinformation efforts ahead of the election appeared to have gained some traction, particularly those that targeted Latinos in Florida and Black voters. Private messaging apps have also been a concern, as misinformation that flows between individuals or small groups can be hard to track.
"We are not done," said Alex Stamos, a former Facebook security chief who's now director of the Stanford University Internet Observatory. This election season he helped to organize the Election Integrity Partnership, which involves more than 120 contributors at several institutions documenting misinformation.
"We will continue to operate, finding and pointing out election disinformation, as long as there's a significant opening for that due to the election's outcome being in doubt," Stamos said. He said late Wednesday that the day had been as busy as Tuesday for his team.
YouTube, for example, faced questions Wednesday over a video that claimed without basis that Democrats were committing voter fraud against Republican ballots. Misinformation was also spreading on the video app TikTok, researchers said, despite that platform's previously announced efforts to curb misleading information.
At least one example of social media misinformation was self-inflicted Tuesday. Some Instagram users reported seeing posts from the app itself telling them to remember to vote "tomorrow," a problem the company attributed to users not having restarted the app.
"It's early to declare victory in lots of respects, including whether the platforms were successful in dealing with the issues they were preparing for, but it does seem like the catastrophic scenarios didn't occur," said Matt Perault, director of Duke University's Center on Science & Technology Policy and a former policy director at Facebook.
If tech companies eventually get high marks for their handling of the election, it may provide reassurance that their services have improved after four years of unrelenting criticism from lawmakers, users and their own employees.
Almost immediately after the 2016 election, executives such as Facebook CEO Mark Zuckerberg faced questions over whether they had distorted political debates and given a generous leg up to Trump.
Since then, tech companies have imposed a series of changes to stem the flow of misinformation online, such as more aggressive investigation of secret foreign networks, limiting the kinds of targeting that advertisers can use and overhauling their policies for posts that could lead to voter suppression. They've stepped up their use of fact-checking labels, although not always in an evenhanded way.
In the weeks and months before Election Day, tech companies scrambled to inoculate their platforms against known super-spreaders of disinformation and political violence. Facebook banned accounts representing the conspiracy theory QAnon, and Twitter restricted its reach. Facebook also wiped out thousands of "militia" groups after several events planned on the platform ended in real-world violence.
They even made some last-minute changes that went to the heart of how online social networks operate, as Facebook suspended its recommendations of political groups and Instagram disabled a hashtag search function. Twitter said it would throttle some posts that included misinformation.
The full scope of how people may have used the tech platforms during the election may not be known for some time. The fact that Russian operatives bought ads on Facebook in 2016, for example, wasn't known publicly until September 2017.
Joan Donovan, research director of Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy, said the platforms are still operating largely behind the scenes with little transparency about how they enforce their policies.
"We don't know the extent of influence operations across social media platforms or the actions these companies have taken in the last several weeks," Donovan said. And even when the tech companies take down problematic content, she said, they don't always explain their actions well, adding to a narrative that they are suppressing speech.
The election, however, isn't over, and the platforms now face a sizable challenge in the president's false claims about mail-in votes.
Both Facebook and Twitter acted quickly early Wednesday when Trump first began making false claims, putting warning labels on his posts. By Wednesday evening, the platforms' actions had almost become routine.
And at the top of the platforms' feeds, the companies put up proactive information: Votes were still being counted.