Older male chimps follow a pattern that researchers also see in humans, preferring to have positive relationships with a few good friends.
Charles M. Lieber, the chair of Harvard’s chemistry department, claimed in the lawsuit that the university turned its back on a “dedicated faculty member.”
The department says the school’s admissions practices violated the Civil Rights Act of 1964. Yale’s president says the allegation was based on “inaccurate statistics and unfounded conclusions.”
One of the biggest surveys ever of ancient DNA offers new evidence of who the Vikings were and where they went raiding and trading.
The buzzy idea is impractical, critics said. And there isn’t yet real-world data to show it will work.
While government statistics say inflation is low, the reality is that the cost of living has risen during the pandemic, especially for poorer Americans.
Impatient for a coronavirus vaccine, dozens of scientists around the world are giving themselves — and sometimes, friends and family — their own unproven versions.
In dozens of other patients who suppress the virus without drugs, it seems to have been cornered in parts of the genome where it cannot reproduce, scientists reported.
The candidates were the first ones elected through a petition campaign since 1989, when anti-apartheid activists put Archbishop Desmond Tutu on the panel.
The Trump administration’s charge that the university discriminates against Asian-American applicants was disputed by many Asian-American students and others.
Newly emboldened, many New Yorkers want to repurpose streets for walking, biking, dining and schools, even as traffic returns.
When it is safe enough to return to school, young children would benefit the most. Yet financial pressures are pushing colleges to reopen most rapidly, an economist says.
Rates of dementia have steadily fallen over the past 25 years, a new study finds. But the disease is increasingly common in some parts of the world.
A computer model of the cruise-ship outbreak found that the virus spread most readily in microscopic droplets light enough to linger in the air.
In Boston and in the Netherlands, scientists are racing to build a vaccine against the virus strangling the world.
The outcry over free speech and race takes aim at Steven Pinker, the best-selling author and well-known scholar.
The Trump administration said it would no longer require international students to attend in-person classes during the coronavirus pandemic in order to remain in the country.
The international scholars the administration was threatening to send home are vital to American innovation and competitiveness.
A legal battle between universities and the Trump administration over foreign students and online learning escalated on Monday, ahead of a critical federal court hearing.
The league is the first Division I conference to suspend football for the fall, a decision that could influence other university presidents as they consider how to handle the coronavirus pandemic.
For colleges in the middle of the pack, the financial calculus during the pandemic looks very different, with in-person classes on campus a way to survive.
To provide some semblance of the campus experience during a pandemic, colleges say large chunks of the student body will have to stay away and study remotely for all or part of the year.
Bringing millions of students back to campus would create enormous risks for society but comparatively little educational benefit, an economist says.
New rules and design will try to keep New Yorkers safe in the usually crowded plazas, parks and streets.
The new approach finds unlikely allies among some feminist scholars, who say colleges and universities were failing to sufficiently protect the rights of young men accused of sexual misconduct.
Contrary to Trump’s recent comments, specialists say, recent increases are real, and the virus is like a “forest fire” that will burn as long as there is fuel.
Contact tracing is a practice almost as old as epidemiology itself, but today’s technology is rapidly changing how we track the spread of a contagious illness within and between communities. That shift is an opportunity to learn more about the promise and the challenges of extending contact tracing and exposure notification via digital means, especially since contact tracing is likely a key ingredient in any successful reopening of the economy amid the ongoing challenges posed by COVID-19.
To that end, we’re happy to be working with the COVID-19 Technology Task Force, as well as Harvard’s Berkman Klein Center, NYU’s Alliance for Public Interest Technology, Betaworks Studios and Hangar. We’ll be playing host on TC to their live-streamed discussion around contact-tracing and exposure-notification efforts, as well as how and when businesses can safely reopen, and what tools can help them to do so. The day’s events will include panel chats and software demonstrations, beginning at 11 AM EDT (8 AM PDT) on Wednesday, June 17.
Below, we’ve included an agenda of the confirmed speakers and demonstrations for the day, and in case you missed it, here’s a roundup of the contact-tracing and exposure-notification apps demonstrated by a number of companies thus far. RSVP for tomorrow’s free event here.
Contact tracing: what it is, how it works, how tech can help [11:00 AM – 11:45 AM EDT]
Using technology to enable scaled contact tracing [11:45 AM – 12:05 PM EDT]
Contact tracing considerations for state and city government [12:05 PM – 12:30 PM EDT]
Reopening businesses safely [12:30 PM – 1:15 PM EDT]
Demos of tools business leaders can use to help reopen safely [1:15 PM – 2:00 PM EDT]
Margaret Bourdeaux, MD, MPH, is the policy liaison for Partners in Health COVID-19 Contact Tracing Program, and holds appointments at Harvard Medical School, Brigham and Women’s Hospital, and the Belfer Center for Science and International Affairs at Harvard Kennedy School of Government.
Daniel Burka is supporting New York State’s COVID-19 response efforts through Resolve to Save Lives, an initiative of Vital Strategies, a global health organization led by former CDC Director Dr. Tom Frieden.
Mike Flowers is leading implementation for contact tracing technology and data strategy for the State of New Jersey as a Senior Fellow with the NJ Office of Innovation. Over the last 25 years he has worked in data intelligence with companies and federal, state and local governments, including as New York City’s first Chief Analytics Officer under Mayor Mike Bloomberg.
Mary L. Gray is a senior principal researcher at Microsoft Research and an Edmond J. Safra Center for Ethics Fellow at Harvard University.
Jonathan Jackson is the founder and CEO at Dimagi, a social enterprise that develops innovative technology solutions for front-line workforces and underserved populations. Dimagi has an extensive background in global health and is a leader in mobile health data collection.
Irina Krechmer is the Chief Technology Officer at Blue Apron, the premier meal-kit company whose mission is to make incredible home cooking accessible to everyone. Before joining Blue Apron, Krechmer most recently served as VP of Engineering at XO Group Inc., the premier technology company with industry-leading digital brands, including The Knot, The Bump, The Nest and GigMasters. Krechmer has over 20 years of experience designing, developing and implementing customer-focused technology solutions, primarily at e-commerce, media and consumer technology companies.
Andrew McLaughlin is helping lead the Task Force’s contact tracing/exposure notification initiative. Andrew is the Chairman of Access Now, the Former Deputy U.S. CTO for the White House, and the Former Director of Global Public Policy at Google.
Andy Moss is currently a Visiting Professor at NYU Tandon teaching entrepreneurship and innovation, as well as developing the COR Methodology. He’s an active advisor/mentor to startups and business leaders, and a former Microsoft executive.
Chelsea Raiten is Of Counsel at Gunderson Dettmer Stough Villeneuve Franklin & Hachigian, LLP. Her practice primarily focuses on providing strategic advice and counseling to employers on all aspects of the employment relationship, including hiring and firing practices, layoffs and RIFs, wage and hour laws, reasonable accommodation, leaves of absence, employee discipline, restrictive covenants, and workplace policies and procedures.
Harper Reed is helping lead the Task Force’s contact-tracing/exposure-notification initiative. Harper is a Director’s Fellow at the MIT Media Lab, a Senior Fellow at the USC Annenberg Innovation Lab and was the CTO of Barack Obama’s 2012 re-election campaign.
Mona Sloane is an NYU-based sociologist working on inequality in the context of AI design and policy. At NYU, she is part of NYU’s Alliance for Public Interest Technology, and is Co-Principal Investigator on the COVID-19 Tech Project. Mona also leads the project Terra Incognita: Mapping NYC’s New Digital Public Spaces in the COVID-19 Outbreak.
Connor Spelliscy is Director of New Platforms at Hangar, a partner at Connectivity Fund, and helps lead COVID-19 Tech Task Force initiatives.
Minerva Tantoco has served in senior technology roles at Palm, Merrill Lynch, and UBS, holds four US patents on intelligent workflow, and served as New York City’s first-ever Chief Technology Officer. Most recently, Tantoco co-founded Grasshopper Bank, an OCC-chartered digital de novo commercial bank, and is currently a consultant and speaker on AI, smart cities, digital transformation, and equity in tech.
Randall Thomas is assisting Resolve to Save Lives and other stakeholders with the New York State response to COVID-19. Randall is the CTO of Geometer, a technology incubator.
Jonathan Zittrain is a professor of law and computer science, and co-founder of Harvard’s Berkman Klein Center for Internet & Society. Jonathan’s work focuses on topics including control of digital property, privacy frameworks and the roles of intermediaries in internet architecture.
Dr. Margaret (Peggy) Hamburg is the Foreign Secretary of the National Academy of Medicine (NAM). Dr. Hamburg previously held the post of Commissioner of the United States Food and Drug Administration (USFDA) and served as Commissioner of Health for the City of New York.
Two major study retractions in one month have left researchers wondering if the peer review process is broken.
As New Yorkers prepare to start riding the trains again during the city’s initial reopening, the safety of public transit is a big question.
Medical records from a little-known company were used in two studies published in major journals. The New England Journal of Medicine has asked to see the data.
As federal agencies take increasingly stringent actions to try to limit the spread of the novel coronavirus pandemic within the U.S., how can individual Americans and U.S. companies affected by these rules weigh in with their opinions and experiences? Because many of the new rules, such as travel restrictions and increased surveillance, require expansions of federal power beyond normal circumstances, our laws require the federal government to post these rules publicly and allow the public to contribute their comments to the proposed rules online. But are federal public comment websites — a vital institution for American democracy — secure in this time of crisis? Or are they vulnerable to bot attack?
In December 2019, we published a new study to see firsthand just how vulnerable the public comment process is to an automated attack. Using publicly available artificial intelligence (AI) methods, we successfully generated 1,001 comments of deepfake text, computer-generated text that closely mimics human speech, and submitted them to the Centers for Medicare & Medicaid Services’ (CMS) website for a proposed federal rule that would institute mandatory work reporting requirements for citizens on Medicaid in Idaho.
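The generation step can be illustrated with a toy sketch. The snippet below uses a crude bigram Markov chain as a stand-in for the far more capable neural language models referenced in the study; the tiny corpus and all names here are hypothetical, chosen only to show that plausible variations on a theme can be mass-produced programmatically.

```python
import random
from collections import defaultdict

# Toy bigram Markov-chain text generator: a crude stand-in for the
# neural language models actually used to produce deepfake comments.
def train(corpus_sentences):
    model = defaultdict(list)
    for sentence in corpus_sentences:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, max_words=20, seed=0):
    # Walk the chain from a start word, sampling each next word
    # from the continuations observed in the training corpus.
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and model[words[-1]]:
        words.append(rng.choice(model[words[-1]]))
    return " ".join(words)

# Hypothetical seed corpus of comment-like sentences.
corpus = [
    "I oppose the proposed work reporting requirements",
    "I oppose this rule because it harms Medicaid recipients",
    "this rule harms working families in Idaho",
]
model = train(corpus)
comment = generate(model, "I")
```

A real attack would fine-tune a large pretrained language model on previously submitted comments rather than use anything this simple; the sketch only demonstrates the mechanics of automated variation.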
The comments we produced using deepfake text constituted over 55% of the 1,810 total comments submitted during the federal public comment period. In a follow-up study, we asked people to identify whether comments were from a bot or a human. Respondents were only correct half of the time — the same probability as random guessing.
One deepfake comment generated by the bot was judged to be from a human by every survey respondent.
We ultimately informed CMS of our deepfake comments and withdrew them from the public record. But a malicious attacker would likely not do the same.
Previous large-scale fake comment attacks on federal websites have occurred, such as the 2017 attack on the FCC website regarding the proposed rule to end net neutrality regulations.
During the net neutrality comment period, firms hired by industry group Broadband for America used bots to create comments expressing support for the repeal of net neutrality. They then submitted millions of comments, sometimes even using the stolen identities of deceased voters and the names of fictional characters, to distort the appearance of public opinion.
A retroactive text analysis of the comments found that 96-97% of the more than 22 million comments on the FCC’s proposal to repeal net neutrality were likely coordinated bot campaigns. These campaigns used relatively unsophisticated and conspicuous search-and-replace methods — easily detectable even on this mass scale. But even after investigations revealed the comments were fraudulent and made using simple search-and-replace-like computer techniques, the FCC still accepted them as part of the public comment process.
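A retroactive analysis of this kind can be sketched with simple set similarity. The snippet below flags comments that are near-duplicates of earlier ones, the signature a search-and-replace campaign leaves behind; the threshold and sample comments are illustrative assumptions, not the actual method used in the FCC analysis.

```python
def jaccard(a, b):
    # Word-set overlap between two comments (1.0 = identical vocabularies).
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def flag_template_comments(comments, threshold=0.8):
    # Flag comments that closely mirror an earlier comment:
    # search-and-replace campaigns leave most of the template intact.
    flagged = set()
    for i, c in enumerate(comments):
        if any(jaccard(c, comments[j]) >= threshold for j in range(i)):
            flagged.add(i)
    return flagged

# Hypothetical examples: two template variants and one organic comment.
comments = [
    "I strongly oppose net neutrality regulations imposed on internet providers",
    "I strongly oppose net neutrality rules imposed on internet providers",
    "Please keep the open internet protections in place",
]
suspicious = flag_template_comments(comments)  # the second comment is flagged
```

Such word-overlap heuristics catch crude template campaigns at scale, but as the study shows, they would miss fluent deepfake text where each comment is generated fresh.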
Even these relatively unsophisticated campaigns were able to affect a federal policy outcome. However, our demonstration of the threat from bots submitting deepfake text shows that future attacks can be far more sophisticated and much harder to detect.
Let’s be clear: The ability to communicate our needs and have them considered is the cornerstone of the democratic model. As enshrined in the Constitution and defended fiercely by civil liberties organizations, each American is guaranteed a role in participating in government through voting, through self-expression and through dissent.
When federal agencies propose new rules that can have sweeping impacts across America, public comment periods are the legally required channel through which members of the public, advocacy groups and the corporations most affected by a proposed rule can express their concerns, and the agency must consider those comments before deciding on the final version of the rule. This requirement has been in place since the passage of the Administrative Procedure Act of 1946. In 2002, the e-Government Act required the federal government to create an online tool to receive public comments. Over the years, multiple court rulings have required federal agencies to demonstrate that they actually examined the submitted comments and to publish any analysis of relevant materials and justification of decisions made in light of public comments [see Citizens to Preserve Overton Park, Inc. v. Volpe, 401 U.S. 402, 416 (1971); Home Box Office, supra, 567 F.2d at 36 (1977); Thompson v. Clark, 741 F.2d 401, 408 (CADC 1984)].
In fact, our study had a CMS public comment website to test for vulnerability to deepfake text submissions only because, in June 2019, the U.S. Supreme Court ruled in a 7-1 decision that CMS could not skip the public comment requirements of the Administrative Procedure Act when reviewing proposals from state governments to add work reporting requirements to Medicaid eligibility rules within their states.
Political science research suggests that the impact of public comments on a federal agency’s final rule can be substantial. For example, in 2018, Harvard University researchers found that banks that commented on Dodd-Frank-related rules by the Federal Reserve obtained $7 billion in excess returns compared to non-participants. When the researchers examined the comments submitted on the “Volcker Rule” and the debit card interchange rule, they found that comments from different banks exerted significant influence during the “sausage-making process” from the initial proposed rule to the final rule.
Beyond corporations commenting directly under their official names, we’ve also seen an industry group, Broadband for America, submit millions of fake comments in 2017 in support of the FCC’s rule to end net neutrality, creating the false perception of broad political support for the rule among the American public.
While our study highlights the threat deepfake text poses to public comment websites, that doesn’t mean we should end this long-standing institution of American democracy. Rather, we need to identify how technology can enable solutions that accept public comments from real humans while rejecting deepfake text from bots.
There are two stages in the public comment process — (1) comment submission and (2) comment acceptance — where technology can be used as potential solutions.
In the first stage, comment submission, technology can prevent bots from submitting deepfake comments in the first place, raising the cost of an attack by forcing the attacker to recruit large numbers of humans instead. One familiar technological solution is the CAPTCHA box at the bottom of internet forms, which asks us to identify a word, visually or audibly, before we can click submit. CAPTCHAs add an extra step that makes the submission process substantially harder for a bot. While these tools should be improved for accessibility for disabled individuals, they would be a step in the right direction.
However, CAPTCHAs would not prevent an attacker willing to pay for low-cost labor abroad to solve any CAPTCHA tests in order to submit deepfake comments. One way to get around that may be to require strict identification to be provided along with every submission, but that would remove the possibility for anonymous comments that are currently accepted by agencies such as CMS and the Food and Drug Administration (FDA). Anonymous comments serve as a method of privacy protection for individuals who may be significantly affected by a proposed rule on a sensitive topic such as healthcare without needing to disclose their identity. Thus, the technological challenge would be to build a system that can separate the user authentication step from the comment submission step so only authenticated individuals can submit a comment anonymously.
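One way to sketch that separation is a single-use token scheme: identity is verified when a token is issued, and the token alone, carrying no identity, accompanies the comment. Everything below, the class and method names included, is a hypothetical illustration under that assumption, not an existing government system.

```python
import secrets

class CommentPortal:
    """Sketch of decoupling authentication from submission: the portal
    checks identity when issuing a single-use token, then accepts the
    comment with the token but without any identifying information."""

    def __init__(self):
        self.unused_tokens = set()
        self.comments = []

    def issue_token(self, verified_identity):
        # Identity is verified here and then discarded; only the
        # unlinkable random token survives this step.
        token = secrets.token_hex(16)
        self.unused_tokens.add(token)
        return token

    def submit(self, token, text):
        if token not in self.unused_tokens:
            return False  # rejects bots and token reuse
        self.unused_tokens.discard(token)
        self.comments.append(text)  # stored with no link to an identity
        return True
```

Note that this naive sketch still requires trusting the portal not to log which identity received which token; a production design would need blind signatures or similar cryptography so that even the issuer cannot link tokens to identities.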
Finally, in the second stage, comment acceptance, better technology can be used to distinguish deepfake text from human submissions. While the more than 100 people we surveyed could not identify the deepfake text examples, more sophisticated spam detection algorithms may be more successful in the future. As machine learning methods advance, we may see an arms race between deepfake text generation and deepfake text detection algorithms.
While future technologies may offer more comprehensive solutions, the threat of deepfake text to our American democracy is real and present today. Thus, we recommend that all federal public comment websites adopt state-of-the-art CAPTCHAs as an interim measure of security, a position that is also supported by the 2019 U.S. Senate Subcommittee on Investigations’ Report on Abuses of the Federal Notice-and-Comment Rulemaking Process.
In order to develop more robust future technological solutions, we will need to build a collaborative effort between the government, researchers and our innovators in the private sector. That’s why we at Harvard University have joined the Public Interest Technology University Network along with 20 other education institutions, New America, the Ford Foundation and the Hewlett Foundation. Collectively, we are dedicated to helping inspire a new generation of civic-minded technologists and policy leaders. Through curriculum, research and experiential learning programs, we hope to build the field of public interest technology and a future where technology is made and regulated with the public in mind from the beginning.
While COVID-19 has disrupted many parts of American society, it hasn’t stopped federal agencies under the Trump administration from continuing to propose new deregulatory rules that can have long-lasting legacies that will be felt long after the current pandemic has ended. For example, on March 18, 2020, the Environmental Protection Agency (EPA) proposed new rules about limiting which research studies can be used to support EPA regulations, which have received over 610,000 comments as of April 6, 2020. On April 2, 2020, the Department of Education proposed new rules for permanently relaxing regulations for online education and distance learning. On February 19, 2020, the FCC re-opened public comments on its net neutrality rules, which in 2017 saw 22 million comments submitted by bots, after a federal court ruled that the FCC ignored how ending net neutrality would affect public safety and cellphone access programs for low-income Americans.
Federal public comment websites offer the only way for the American public and organizations to express their concerns to the federal agency before the final rules are determined. We must adopt better technological defenses to ensure that deepfake text doesn’t further threaten American democracy during a time of crisis.
It’s a mistake to blame density for the spread of the coronavirus.
In “The Last Archive,” Lepore mixes history and 1930s-style radio drama to solve a timely whodunit.
The technique aims to make a person’s cells churn out proteins that will stimulate the body to fight the coronavirus.
Experts say that for the first time since 1998, global poverty will increase. At least half a billion people could slip into destitution by the end of the year.
Craig McFarland, the valedictorian of his high school in Jacksonville, Fla., received acceptance letters from 17 colleges and universities in all.