A New Marketplace For Machine Learning Researchers—And Citizen Scientists

October 21, 2019

By Deborah Borfitz

October 22, 2019 | A nonprofit devoted to citizen science has been working with biomedical researchers at Cornell University to apply crowd power and artificial intelligence (AI) to questions neither humans nor machines could answer alone. The Human Computation Institute has already succeeded in building a thriving, self-sustaining community known as Stall Catchers to help Alzheimer’s researchers search mouse brain tissue for signs of blood flow and stalls, says Executive Director Pietro Michelucci, Ph.D.

It is now paying tribute to the human in the loop with an open integration platform, called Civium, which serves as a marketplace for “online cognitive laborers” and machine learning researchers of every description. Research projects related to sickle cell anemia, life-threatening arrhythmias and sudden infant death syndrome (SIDS) will be among the first use cases, Michelucci says. The BBC just devoted an episode of its “People Fixing the World” podcast to the SIDS application, he notes.

Civium will effectively automate what up to now has been a labor-intensive process of teaching a machine to play an online citizen science game such as Stall Catchers more like a human. Crowd data would be required to train the system using different architectures until one shows promise, and then to test it on new research data to confirm the hunch.
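The train-then-confirm loop described above can be sketched in a few lines. This is an illustrative, pure-Python toy, not Civium's actual pipeline: `select_model`, the `(features, crowd_label)` data layout, and the candidate-model callables are all hypothetical names chosen for the example.

```python
import random

def select_model(crowd_data, candidates, holdout_frac=0.3, seed=0):
    """Pick the candidate model that best reproduces crowd labels,
    then report its accuracy on held-out examples.

    `crowd_data` is a list of (features, crowd_label) pairs; each
    candidate is a callable mapping features -> predicted label.
    All names here are illustrative, not part of any real Civium API.
    """
    rng = random.Random(seed)
    data = crowd_data[:]
    rng.shuffle(data)
    split = int(len(data) * (1 - holdout_frac))
    train, holdout = data[:split], data[split:]

    def accuracy(model, rows):
        return sum(model(x) == y for x, y in rows) / len(rows)

    # Try each architecture against the crowd's answers...
    best = max(candidates, key=lambda m: accuracy(m, train))
    # ...then confirm the hunch on research data the model has never seen.
    return best, accuracy(best, holdout)
```

In practice the candidates would be trained neural networks rather than fixed rules, but the shape of the workflow — fit to crowd labels, select, validate on fresh data — is the same.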

More than 20,000 individuals constitute today’s Stall Catchers community specific to Alzheimer’s research, Michelucci says, but it took several years and a lot of effort and money to create. The open science framework of Civium will allow similar human-AI platforms to be developed in one-tenth the time and at one-tenth the cost by enabling reuse of components of crowd-based systems—and allow anyone with normal human cognitive faculties to contribute to the advancement of science.

Thinking Economy

The impetus for the creation of Civium is growing reliance on crowd-powered systems for certain types of research questions, notably those involving machine learning requiring vast quantities of inputs, says Michelucci. The only way to get “ground truth data” in the requisite volume is through crowdsourcing.

Civium can be the hub for accessing that data, making it more practical to train a machine to gradually subsume analysis being done by the crowd—and leaving humans free to tackle the more challenging, interesting scientific questions, he says. “We see this happening with other platforms such as Eyewire,” which is endeavoring to map the brain.

“We already have an engaged community and it’s a part of their lifestyle to play these games,” Michelucci continues. Rather than leave these players devastated because a machine has taken over, Civium would allow them to effortlessly jump to another citizen science challenge.

The bigger picture, as Michelucci painted it during this year’s Microsoft Faculty Summit, is that the world is transitioning from a knowledge to a thinking economy. “We can already see the evidence of an online market developing around human-based cognition,” he says.

Michelucci himself helped organize the Human Computation Roadmap Summit at the Wilson Center in 2014, where one of the project ideas, called Pathways to Radiology, was a Stall Catchers-like game except the goal was to utilize unemployed and under-employed crowd workers to identify tumors on X-rays. The workers could gradually be given more complex radiographs to read, effectively “crowdsourcing analysis while building a skilled labor force.” Only images with too many mixed opinions from the crowd would be elevated to a professional radiologist. At a certain point, individuals could get credentialed for doing this work online and then start getting paid for it.

One of the people in the breakout group where the idea emerged was Markus Krause, cofounder of the nonprofit Mooqita, which has a platform connecting organizations, learners, and educators to solve real-world work tasks. Student workers can take an online course that involves analyzing data for businesses, which can lead to credentials they can leverage to do the same work for pay, Michelucci says.

New economic models are needed to keep crowdsourced projects financially viable beyond the initial burst of enthusiasm, he says. Currently, only a small proportion of such projects succeed before the money runs out.

Many researchers who have created a platform for citizen science believe in open source software, which does not mean free, Michelucci points out. With Civium, researchers will be able to make their machine learning algorithms available for licensing and create a fee structure around that, as well as name their terms. Depending on the end user, the author of the code might ask for royalties or simply credit for any derivative work.

As was recently discussed in a detailed blog post Michelucci published on Medium, a multitude of stakeholders invested in new data science methods will be able to offer potential solutions to one another via Civium, he says. The platform is being developed with support from Microsoft and in partnership with Microsoft Research and the Pestilli Lab at Indiana University.

As Michelucci explains, Civium will extend the metadata registry and protocols of a reproducibility platform called brainlife.io to accommodate crowd-based computing components as well as portal and platform development. It will also create a valuation and credit assignment system to track use of widgets and services, which will be protected by identity verification services and “blockchain-based accountability.” Civium’s licensing engine will build on a project developed by GridRepublic that applies blockchain traceability to contracts.

To call Civium a platform is a bit of a misnomer, he adds, since it will in fact be an operating system for hybrid supercomputers that use cognitive “wetware” to deploy human-AI systems. Civium seeks to employ interoperability standards being developed by the Citizen Science Association.

The Back Story

There would be no Civium had there not first been Stall Catchers. The online research game launched in 2016, shortly before Michelucci relocated the Human Computation Institute from Washington, D.C. to Ithaca, New York and became a visiting professor at Cornell University.

Michelucci was an AI subject matter expert at the Defense Advanced Research Projects Agency (DARPA)—the R&D funding branch of the Department of Defense—for 10 years, he says. In response to the cloud of doubt hanging over human-in-the-loop systems, he initiated research on the topic that resulted in the publication of the Handbook of Human Computation.


The 2013 book demonstrated that cross-disciplinary collaborations can be intentionally built to explore opportunities to combine the respective strengths of humans and machines, says Michelucci. Entomologists teamed up with computer scientists on one chapter, for example, to look at ant colonies as a model of human computation. Eventually, Michelucci felt compelled to “buckle down” and build his own crowd-powered system.

Serendipitously, Michelucci was introduced by a mutual colleague to Chris Schaffer, a biomedical engineer at Cornell involved with the Alzheimer’s project who described the bottleneck created by data analysis happening at one-tenth the speed of its generation by machines. Helping researchers catch up with the backlog became the first mission of the Human Computation Institute.

That marked the birth of the EyesOnALZ citizen science crowdsourcing project, for which Stall Catchers has been the main activity. The project is being funded by the BrightFocus Foundation.

A funding proposal previously submitted to the National Institutes of Health to validate the trustworthiness of data produced by non-experts was rejected, says Michelucci, noting that researchers at the time were highly skeptical of citizen science. While two of the reviewers gave highest praise to the idea, a third gave it the lowest score possible. But “the tides are turning,” he quickly adds.

Medical-based citizen science still represents only about 5% of the overall movement, Michelucci says. About 80% of the activity is still concentrated in conservation science.

Wisdom of the Crowd

Game players (aka “catchers”) in the Stall Catchers community watch movies from the brains of mice using a virtual microscope, an adaptation of technology previously used in the Stardust@home project, which enlisted citizen scientists to help locate interstellar dust particles brought back to Earth for study. And Stardust@home was itself a human-based extension of Berkeley’s SETI@home project searching for signs of extraterrestrial intelligence using distributed computing software that could run as a screensaver on home computers.

Playing the game involves moving a slider to scan through layers of mouse brain tissue and examine outlined vessel segments, and to step backwards and forwards in time to visualize blood flow, he explains. The imagery is captured in vivo using two-photon excitation microscopy, a technique recently improved to enable research breakthroughs in understanding Alzheimer’s disease.

In the first completed analysis, catchers helped researchers determine how many blood stalls are in an Alzheimer’s-afflicted brain and where they’re located relative to the lumps of a sticky protein (aka plaque) long associated with the disease, says Michelucci.

The job of game players is deceptively simple, a “binary decision,” he continues. “Where it gets complicated is in the nuances.” The imagery includes many sources of noise and ambiguity that can create an appearance or absence of motion that is not actually related to blood flow.

“That is why humans can do [the analysis] so much better than machines,” Michelucci says. “Not all the information you need is in the pixels; you have to bring a world knowledge context to the task to be able to do it well.”

The interactive design of Stall Catchers “motivates and engages people while ensuring data quality,” he continues. But “a key aspect to our work is finding the best way to combine answers from many people who are not experts in order to produce one expert-like answer about a single vessel. We call it our magic number,” borrowing from the wisdom-of-the-crowd notion first proposed over 100 years ago by Charles Darwin’s cousin, Francis Galton, after observing a “Guess the Weight of the Ox” competition at a country fair.

Initially, to achieve the sensitivity and specificity requirements of the lab for valid research results from Stall Catchers, the magic number was 20 people, says Michelucci. But he has since re-evaluated the approach through the lens of his nearly forgotten dissertation research, so today it only takes about four and a half people on average to come up with an expert-like answer.

The new methodology tracks the performance of individual users in real time and assigns a weight to their answers accordingly, he explains. It also assesses researchers’ confidence in the crowd answer in real time, so “once we gather enough evidence from the crowd we don’t need to look further, whereas with the old approach we’d collect 20 samples no matter what. Now, we might collect three or we might collect 15, depending on the quality of the information coming in.”
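The weighted, evidence-until-confident approach described above can be illustrated with a small sketch. This is a generic sequential log-odds accumulation, not the Institute's actual algorithm; the function name, the fixed per-user reliabilities, and the confidence threshold are all assumptions made for the example.

```python
import math

def crowd_answer(votes, reliabilities, threshold=0.95, prior=0.5):
    """Sequentially combine binary crowd votes (True = 'stalled').

    Each vote is weighted by the voter's estimated reliability (the
    probability their answer is correct), and collection stops as soon
    as confidence in either answer crosses `threshold`. Illustrative
    only; the real system tracks reliability dynamically per user.
    """
    log_odds = math.log(prior / (1 - prior))
    for vote, r in zip(votes, reliabilities):
        r = min(max(r, 0.01), 0.99)      # keep the weight well-defined
        llr = math.log(r / (1 - r))      # evidence carried by this vote
        log_odds += llr if vote else -llr
        p_stalled = 1 / (1 + math.exp(-log_odds))
        if p_stalled >= threshold or p_stalled <= 1 - threshold:
            # Enough evidence gathered: stop early, as with the
            # "might collect three" case in the new methodology.
            return p_stalled >= threshold, p_stalled
    return log_odds > 0, 1 / (1 + math.exp(-log_odds))
```

Under this scheme, two or three highly reliable voters who agree can settle a vessel immediately, while noisy or conflicting votes keep the vessel open for more answers — which is why the average sample size can fall well below a fixed quota of 20.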

Engaged Community

Michelucci says one side benefit of the new methodology is that “everyone can play confidently knowing they can’t mess up the research results,” including a subset of players believed to be in the early stages of Alzheimer’s disease, because the system adjusts to their ability at any given moment in time. “Everyone has a circadian rhythm when it comes to cognition, plus there is always the happy hour phenomenon. This ensures we’re not assuming players have a static level of skill.”

No one ever gets kicked out of the Stall Catchers game because even in the worst-case scenarios—someone is answering randomly or their cat started tapping on the keyboard—the research doesn’t suffer, he says. “It just doesn’t help it very much.”

Out of the 20,000 registered catchers, about 10% are semi-regular players and a few hundred are “super catchers” who play all the time and are part of the “glue” holding the community together, says Michelucci. The remainder may never come back if not specifically invited to do so.

Part of the appeal of playing Stall Catchers is that participants compete against other users to correctly classify vessels as flowing or stalled, he says. The Human Computation Institute also uses an online forum and newsletters to keep players engaged, and one of its super catchers, a retired marketing executive from Eastman Kodak, has volunteered his expertise to help promote events.

This year’s Citizen Science Day—an annual celebration presented by SciStarter and the Citizen Science Association—included a Stall Catchers Megathon hosted on Microsoft’s campus, with thousands of catchers across five continents joining virtually via livestreamed video. They collectively answered a key question concerning hypertension and stalls over one weekend—research that would have taken three and a half months to accomplish in a lab, notes Michelucci.

Participation is voluntary and no pay is involved, he says, although best practices in this area are evolving. “We try to set a good example, and part of that is being transparent about everything we do. Parading our failures is part of our culture.”

Catchers are regularly reminded that they’re a “material part of the team,” without which the platform would not exist, Michelucci says. In exchange, the Human Computation Institute is open about the research questions it is trying to help answer and shares all research results, including preliminary ones. “It can sometimes get tricky because we also don’t want to bias anyone.”

This came up a few years ago, when players were helping researchers understand the role of a high-fat diet on the stall rate of capillaries in the mouse brain, he says. “With basic biology education, people could probably intuit that the noisier movies might be associated with mice on a high-fat diet and that would have biased their answers.”

Marking Progress

Earlier this year, Cornell University researchers published an article in Nature Neuroscience pointing to capillary stalls, and their causative factors, as the explanation for reduced blood flow in the Alzheimer's brain, based on work done without the help of Stall Catchers, Michelucci says. But while that work was ongoing, citizen scientists started analyzing datasets related to identifying the upstream molecular mechanism implicated in the stalling phenomenon.

To date, Stall Catchers has completed analysis of six datasets (not counting the validation ones), each of which would have taken up to a year to analyze in the lab without the citizen scientists, Michelucci says. Preliminary results from analysis of the second dataset, showing that a high-fat diet increases the incidence of stalls, were announced at the British Science Festival in 2017.

Research questions, results and outcomes get reported by the EyesOnALZ team. Participants are currently working on a seventh dataset to see whether blocking the vascular endothelial growth factor molecule, involved in free radical formation and inflammation in tissues, reduces the incidence of stalls.