If social networks and other platforms are to get a handle on disinformation, it’s not enough to know what it is — you have to know how people react to it. Researchers at MIT and Cornell have some surprising but subtle findings that may affect how Twitter and Facebook should go about treating this problematic content.
MIT’s contribution is a counter-intuitive one. When someone encounters a misleading headline in their timeline, the logical thing to do would be to put a warning before it so that the reader knows it’s disputed from the start. Turns out that’s not quite the case.
In a study, nearly 3,000 people evaluated the accuracy of headlines after receiving different warnings about them, or no warning at all.
“Going into the project, I had anticipated it would work best to give the correction beforehand, so that people already knew to disbelieve the false claim when they came into contact with it. To my surprise, we actually found the opposite,” said study co-author David Rand in an MIT news article. “Debunking the claim after they were exposed to it was the most effective.”
When a person was warned beforehand that the headline was misleading, their classification accuracy improved by 5.7 percent. When the warning came simultaneously with the headline, that improvement grew to 8.6 percent. But when the warning was shown afterwards, accuracy improved by 25 percent. In other words, debunking beat “prebunking” by a fair margin.
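To make the comparison concrete, here is a minimal restatement of the three reported improvements (the numbers come straight from the article; the labels are ours):

```python
# Reported accuracy improvements from the MIT study, in percent,
# relative to evaluating the headline with no warning at all.
improvements = {
    "before (prebunk)": 5.7,
    "simultaneous": 8.6,
    "after (debunk)": 25.0,
}

# Debunking after exposure beat both other timings by a wide margin.
best = max(improvements, key=improvements.get)
print(best)  # -> after (debunk)
```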
The team speculated as to the cause of this, suggesting that it fits with other indications that people are more likely to incorporate feedback into a preexisting judgment rather than alter that judgment as it’s being formed. They warned that the problem is far deeper than a tweak like this can fix.
“There is no single magic bullet that can cure the problem of misinformation,” said co-author Adam Berinsky. “Studying basic questions in a systematic way is a critical step toward a portfolio of effective solutions.”
The study from Cornell is equal parts reassuring and frustrating. People viewing potentially misleading information were reliably influenced by the opinions of large groups — whether or not those groups were politically aligned with the reader.
It’s reassuring because it suggests that people are willing to trust that if 80 out of 100 people thought a story was a little fishy, even if 70 of those 80 were from the other party, there might just be something to it. It’s frustrating because of how seemingly easy it is to sway an opinion simply by saying that a large group thinks it’s one way or the other.
“In a practical way, we’re showing that people’s minds can be changed through social influence independent of politics,” said graduate student Maurice Jakesch, lead author of the paper. “This opens doors to use social influence in a way that may de-polarize online spaces and bring people together.”
Partisanship still played a role, it must be said: people were about 21 percent less likely to have their view swayed if the group opinion was led by people belonging to the other party. But even so, people were still very likely to be affected by the group’s judgment.
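As a rough illustration of that effect: the 21 percent figure is the only number taken from the study here; the baseline sway probability below is a made-up placeholder, not something the researchers measured.

```python
# Hypothetical model of the reported partisanship effect. The study says
# readers were ~21% less likely to be swayed when the group opinion was
# dominated by the other party. BASELINE_SWAY is an invented placeholder.
BASELINE_SWAY = 0.50          # assumed chance of changing one's view
OUT_PARTY_DISCOUNT = 0.21     # reported reduction for out-party majorities

def sway_probability(out_party_majority: bool) -> float:
    """Chance a reader updates toward the group's judgment."""
    p = BASELINE_SWAY
    if out_party_majority:
        p *= (1 - OUT_PARTY_DISCOUNT)
    return p

print(sway_probability(False))  # 0.5
print(sway_probability(True))   # 0.395
```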
Part of why misinformation is so prevalent is that we don’t really understand why it’s so appealing to people, or what measures reduce that appeal, among other basic questions. As long as social media platforms are blundering around in the dark, they’re unlikely to stumble upon a solution, but every study like this sheds a little more light.
Vaping is a controversial habit: it certainly has its downsides, but anecdotally it’s a fantastic smoking cessation aid. The thing is, until behavioral scientists know a bit more about who does it, when, how much and other details, its use will continue to be something of a mystery. That’s where the PuffPacket comes in.
Designed by Cornell engineers, the PuffPacket is a small device that attaches to e-cigarettes (or vape pens, or whatever you call yours) and precisely measures their use, sharing that information with a smartphone app for the user, and potentially researchers, to review later.
Some vaping devices are already set up with something like this, to tell a user when the cartridge is running low or a certain limit has been reached. But studies of vaping habits generally rely on self-reported data rather than data from proprietary apps.
“The lack of continuous and objective understanding of vaping behaviors led us to develop PuffPacket to enable proper measurement, monitoring, tracking and recording of e-cigarette use, as opposed to inferring it from location and activity data, or self-reports,” said PhD student Alexander Adams, who led the creation of the device, in a Cornell news release.
The device fits a number of e-cigarette types, sitting between the mouthpiece and the heating element. It stays idle until the user inhales, which activates the e-cigarette’s circuits, and the PuffPacket’s as well. By monitoring the voltage, it can tell how much liquid is being vaporized, as well as simpler measurements like the duration and timing of each inhalation.
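A back-of-the-envelope sketch of the kind of inference just described: the coil resistance, sample rate, and energy-to-liquid factor below are illustrative assumptions, not PuffPacket's actual firmware parameters.

```python
# Sketch: infer puff duration and a vaporized-liquid estimate from
# voltage samples across the heating coil. All constants are assumed
# for illustration only.
COIL_OHMS = 1.8          # assumed coil resistance (ohms)
SAMPLE_DT = 0.01         # assumed seconds between voltage samples
MG_PER_JOULE = 0.05      # assumed liquid vaporized per joule of heat

def summarize_puff(volts: list) -> dict:
    active = [v for v in volts if v > 0.1]  # samples where the coil fires
    # Heat delivered: E = sum of (V^2 / R) * dt over active samples.
    energy = sum(v * v / COIL_OHMS * SAMPLE_DT for v in active)
    return {
        "duration_s": round(len(active) * SAMPLE_DT, 2),
        "liquid_mg": round(energy * MG_PER_JOULE, 3),
    }

# A two-second draw at a steady 3.7 V:
print(summarize_puff([3.7] * 200))
```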
This data is sent to the smartphone app via Bluetooth, where it is cross-referenced with other information, like location, motion and other metadata. This may lead to identifiable patterns, like that someone vapes frequently when they walk in the morning but not the afternoon, or after coffee but not meals, or far more at the bar than at home — that sort of thing. Perhaps even (with the proper permissions) it could track use of certain apps — Instagram and vape? Post-game puff?
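The cross-referencing step might look something like the following sketch; the event fields and context labels are invented for illustration, not taken from the app.

```python
from collections import Counter

# Hypothetical puff log: each event tagged with the context the app
# inferred from time, location, and motion metadata.
puffs = [
    {"time": "08:05", "context": "morning walk"},
    {"time": "08:12", "context": "morning walk"},
    {"time": "10:30", "context": "after coffee"},
    {"time": "21:40", "context": "bar"},
    {"time": "21:55", "context": "bar"},
    {"time": "22:10", "context": "bar"},
]

# Count puffs per context to surface habit patterns.
by_context = Counter(p["context"] for p in puffs)
print(by_context.most_common(1))  # -> [('bar', 3)]
```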
Some of these might be obvious, others not so much — but either way, it helps to have them backed up by real data rather than asking a person to estimate their own usage. They may not know, understand or wish to admit their own habits.
“Getting these correlations between time of day, place and activity is important for understanding addiction. Research has shown that if you can keep people away from the paths of their normal habits, it can disrupt them,” said Adams.
No one is expecting people to voluntarily stick these things on their vape pens and share their info, but the design — which is being released as open source — could be used by researchers performing more formal studies. You can read the paper describing PuffPacket here.