The Promising Future of AI in Synthetic Biology
Contributed Commentary by Sophie Peresson, DNA Script
November 3, 2023 | The synthetic biology landscape has evolved considerably to include new ways of accessing DNA for research and therapeutic development. An important advance is that DNA production is no longer limited to third-party suppliers. Scientists can purchase automated, benchtop enzymatic DNA synthesis (EDS) devices for their own labs, giving them more control and significantly reducing research and development cycles.
More recently, some in the synthetic biology community have begun exploring the possibility of incorporating artificial intelligence (AI) into their workflows. Their interest is understandable. AI-based applications are already being used in various medical sectors, for example to guide surgery-assisting robots, identify tumors from imaging data, and select clinical trial participants. Following recent developments, such as an exercise in which an AI chatbot helped graduate students identify potential pandemic pathogens, there are increasing questions about the potential of AI to “upskill” individuals without relevant subject matter expertise or experience in synthetic biology.
Our community’s interest in AI makes a lot of sense. We should embrace technological advancements that enable breakthroughs in areas such as human health, sustainable development, and agriculture, and that help us achieve our ambitious goals. Designing, building, and testing DNA sequences through a third-party supplier can be a very slow process. Pairing AI with a benchtop DNA printer in the lab can help scientists rapidly iterate and refine sequences. AI can also help scientists analyze and identify patterns in data that might once have taken days or even weeks of work.
However, alongside these opportunities, it is important to understand the potential risks of using this technology so we can put proper safeguards in place. Members of the scientific community have begun to sound alarms over specific threats emerging from the intersection of AI and synthetic biology. Too often, regulation lags behind technological innovation, leaving the door open to misuse and creating uncertainty for business. We need to take advantage of the opportunity now to get ahead of the curve.
Science thrives when creativity is supported and promoted. Currently, there are many open-source AI resources that scientists can use to design and test gene sequences in silico. Scientists can and should use these tools to accelerate their research. But the freedom to experiment with and test new theories, ideas, and solutions comes with risks. It will be important to maintain a healthy awareness of the potential hazards, as well as of what safeguards might look like in a space where AI is routinely deployed. It will therefore be key to proactively build biosafety and biosecurity requirements into the review and design of new life sciences innovations.
The sudden surge of interest has put the regulatory spotlight firmly on artificial intelligence. Currently, the EU is on track to adopt the world’s first comprehensive law on AI. In the United States, Congressional committees have heard testimony from key stakeholders in the AI field, and seven leading companies in the space recently announced their voluntary commitment to comply with guidelines designed to make AI safer and more transparent.
Meanwhile, representatives from industry, academia, government, nonprofits, and other institutions continue to have conversations about how synthetic oligonucleotides should be produced and used. In fact, the US Department of Health and Human Services (HHS) is expected to release its updated guidance on this topic later this year. Separately, as part of the reauthorization of the Pandemic and All-Hazards Preparedness Act, Congress is discussing a bill that would require HHS to prescribe regulations for mitigating risks associated with gene synthesis, and another bill that would assess risks associated with AI and biosecurity.
These discussions are a great step forward, but they are happening independently of each other. A fragmented approach to regulation creates loopholes that bad actors can exploit, as well as uncertainty for business.
Collaboration Is Key
Given that the fusion of AI with synthetic biology is still an emerging field, it currently receives less attention from policymakers. That is why we need dialogue between the different communities, so scientists know what to expect. Realistically, it is impossible to predict every possible scenario and outcome, so best practices and any legislative framework may not be comprehensive. But they should provide enough of a foundation that we can build on as technologies evolve and grow.
For those conversations to have a successful outcome, they should incorporate feedback from AI experts, scientists, policymakers, and other key stakeholders to fully capture the diversity of perspectives and needs. An important step will be deciding who should be responsible for establishing the regulations, as well as defining emerging risks and building consensus around safeguards. Soliciting participation from multiple stakeholders in the regulatory process also increases the likelihood that people in the community will be willing to adopt the guidelines that emerge from these efforts. We will need extensive international cooperation, particularly among countries vying for a leadership role in AI regulation, to ensure that future guidelines can work on a global level.
Sophie Peresson serves as the Senior Director of Public Affairs and ESG at DNA Script. She can be reached at firstname.lastname@example.org