Guerrilla Archivists Developed an App to Save Science Data From the Trump Administration


The data rescue movement is growing up fast.

On the first Saturday morning in February, scientists, programmers, professors, and digital librarians met at New York University in New York City to save federal data sets they feared could be altered or disappear altogether under the administration of US president Donald Trump. Around 150 people turned out for the gathering, many after hearing about it through Facebook.

Enthusiasm for guerrilla archiving is skyrocketing; the day at NYU was the latest in a ballooning list of “data rescues” across the country. All-day archiving marathons have been held in Toronto, Philadelphia, Chicago, Indianapolis, Los Angeles, Boston, and Michigan, and by the time the NYU event was over, attendees from several other cities had volunteered to host their own.

“We’ve put 14 new events on the docket in the last week and a half,” says Brendan O’Brien, an independent programmer who builds tools for open-source data. O’Brien showed up to a data rescue event just before Trump’s inauguration and has, in recent weeks, devoted himself to the events full-time. “It seemed obvious that I should just stop everything and focus on this.”

Dates have been set for events at the University of California-Berkeley, MIT, Georgetown, Haverford College in Pennsylvania, and a coworking space in Austin. Structured like all-day hackathons and organized by volunteers, the events focus on downloading federal science data sets—especially those related to climate change—from government websites and uploading them to a new website, datarefuge.org, which organizers hope can serve as an alternative home for federal data during the current administration. They’re also archiving tens of thousands of government web pages and feeding them into the Internet Archive, which runs the Wayback Machine.
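Feeding a page into the Internet Archive is, at its simplest, a single web request. As a minimal sketch, the Wayback Machine’s public “Save Page Now” endpoint (web.archive.org/save/) can be scripted like this—the seed URLs below are illustrative, not taken from the events’ actual nomination lists:

```python
# Minimal sketch: nominating pages to the Wayback Machine via its public
# "Save Page Now" endpoint (https://web.archive.org/save/<url>).
# The seed list is illustrative; volunteers worked from shared
# spreadsheets of nominated government URLs.
import time
import requests

SEED_URLS = [
    "https://www.epa.gov/enforcement",  # hypothetical examples
    "https://www.energy.gov/data",
]

for url in SEED_URLS:
    # Requesting web.archive.org/save/<url> asks the archive to crawl
    # that page and store a fresh snapshot.
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
    print(url, "->", resp.status_code)
    time.sleep(5)  # be polite: the endpoint throttles aggressive clients
```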

The data rescue movement is growing up fast: What started as a project coordinated through group spreadsheets in Google Docs now has a workflow formalized through a custom-built app designed specifically for this purpose by O’Brien and Daniel Allan, a computational scientist at a national lab (Allan preferred not to indicate a specific lab, and emphasized his participation was in his free time and not on behalf of his employer). Eventually, anyone with ten minutes to spare will be able to open the app, check which government URLs have yet to be archived, see whether those can simply be fed into the Internet Archive (or need more technical attention to scrape and download any data), and “attack a quick data set” from their couch, O’Brien says. The archiving could be remote, and perpetual.
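The app’s internals aren’t described, but the triage it automates—check whether a URL is already archived, then sort it into “crawlable” versus “needs a human”—might look roughly like the sketch below. The availability lookup uses the Internet Archive’s real API at archive.org/wayback/available; the crawlability heuristic is an assumption about the kind of check such a tool could run.

```python
# Rough sketch of the triage step: is this URL already in the Wayback
# Machine, and if not, can a crawler handle it unattended?
import requests

def already_archived(url: str) -> bool:
    """Ask the Internet Archive whether any snapshot of `url` exists."""
    resp = requests.get(
        "https://archive.org/wayback/available", params={"url": url}, timeout=30
    )
    # The API returns an empty "archived_snapshots" object when nothing
    # has been captured yet.
    return bool(resp.json().get("archived_snapshots"))

def needs_human_attention(url: str) -> bool:
    """Crude, assumed heuristic: plain HTML pages can go straight to a
    crawler; anything serving raw data (CSV, zip archives, etc.) gets
    flagged for a volunteer to scrape and package by hand."""
    head = requests.head(url, allow_redirects=True, timeout=30)
    return "text/html" not in head.headers.get("Content-Type", "")

url = "https://www.epa.gov/air-emissions-inventories"  # illustrative
if not already_archived(url):
    print("scrape manually" if needs_human_attention(url) else "feed to crawler")
```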

Meanwhile, members of the Environmental Data & Governance Initiative, a group of academics and developers that has been acting informally as a liaison between the DIY events, are working on something of a starter pack for people who want to host data rescues of their own, with advice and templates gleaned from lessons learned at earlier events. “With every event we’re learning how to streamline the process,” says O’Brien. The workflow for identifying, downloading, organizing, and archiving data is becoming more seamless—so at each event, a larger volume of data gets processed than at the one before.

By the evening, the group at NYU had fed 5,000 government URLs into the Internet Archive. Most came from the Department of Energy, which houses vast amounts of energy data, and the Department of the Interior, which oversees the national parks and public lands. The programmers in attendance, who spent the day writing scripts and finding ways to download raw data sets that couldn’t easily be fed into the archive in their original format, managed to upload 100 megabytes of data pulled off government servers to datarefuge.org.

All this effort is partly to preserve federal science for researchers to use in the future. Spurred by the librarians in the crowd, the participants are going to great lengths to make sure the data is handled in a way that keeps it valid enough to be used in peer-reviewed research: they are recording who handled the data and when, and including descriptions of how the data was collected and what it describes, so that large data sets don’t become out-of-context jumbles of information.
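The article doesn’t specify DataRefuge’s metadata schema, so every field name in the sketch below is an assumption; it simply shows the kind of provenance record the practice implies—who handled a data set, when, how it was collected, and what it describes—stored alongside the data itself.

```python
# Illustrative provenance record; all field names are assumptions, not
# DataRefuge's actual schema.
import json
from datetime import datetime, timezone

record = {
    "source_url": "https://www.epa.gov/example-dataset",  # hypothetical
    "description": "Annual ambient air-quality monitoring readings",
    "collection_method": "EPA ground-station monitoring network",
    "handled_by": "volunteer@example.org",
    "handled_at": datetime.now(timezone.utc).isoformat(),
    "checksum_sha256": "<sha256 of the downloaded file>",  # integrity check
}

with open("dataset.metadata.json", "w") as f:
    json.dump(record, f, indent=2)
```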

But it’s also to ensure that towns, cities, and counties facing environmental problems can still access data that can help make their communities healthier, says Jerome Whittington, the NYU professor who organized the event. The US Environmental Protection Agency, for example, collects data on air and water pollution, contaminated soil, and toxic spills, and records when local companies violate rules such as regulations against dumping harmful waste. That EPA data is key for communities trying to take action against polluters or otherwise working to gain control over their exposure to toxins.

Under federal appointees with records of being hostile towards environmental health regulation, data may be harder to come by, Whittington worries. “If you don’t have the data, you’ll be told your problem doesn’t exist. It is in a way a struggle over what we consider reality.”