The Practical, Near-Term Future for Synthetic Biology
September 9, 2021 | Trends from the Trenches—When Aoife Brennan left Ireland, she had no doubt that she’d be back in a year. She was a trained endocrinologist who wanted to jump start her medical research career. “I was 100% certain that I was moving back to Ireland to work in an academic medical center, to pursue some research on the side, and to continue to treat patients,” she says. “That just tells you how wrong you can be early on in your career about where you're going to end up.”
Brennan ended up dazzled by the challenging questions and exciting research opportunities available in the companies and labs in the Boston-Cambridge area. “Once I got sucked in, I knew that moving back to Ireland was probably going to be a difficult proposition, because there are just so many interesting things going on here.” She shifted to biotech and made her way to the Rare Disease Innovation Unit at Biogen before moving to Synlogic, first as Chief Medical Officer, then as CEO.
Now she views her position as scientific strategy development. She still goes to Journal Club every week, because “that's what just gives me energy.” She’s also leading the company as they develop practical applications of synthetic biology.
Brennan recently sat down with Stan Gloss, founding partner at BioTeam, to trace her career journey and talk about the future of synthetic biology—a preview of her upcoming talk at the Bio-IT World Conference & Expo being held online and in person in Boston, September 20-22.
Editor’s Note: Trends from the Trenches is a regular column from BioTeam, offering insights into market trends and their most interesting case studies. Working at the intersection of science, data and technology, BioTeam builds innovative scientific data ecosystems that close the gap between what scientists want to do with data—and what they can do. Learn more at www.bioteam.net.
Stan Gloss: What is the scientific mission at Synlogic?
Aoife Brennan: Synlogic is really focused on using synthetic biology to develop new treatments for patients with disease. We fundamentally believe that we're at this amazing tipping point now, where the tools of synthetic biology are advancing rapidly and we think that this is something that's been under-leveraged in the context of therapeutics. We'd love to be one of the leading companies to make that leap and bring that science forward into the therapeutic space. That's absolutely the mission.
We apply synthetic biology to bacterial therapeutics. Our company is very diverse from a subspecialty perspective, with people who come from synthetic biology, computational biology, chemistry, medicine, and pharmacology. We really have a very diverse group because this is a new area. We exist at the intersection of multiple different functions and scientific specialties. It's a really exciting and dynamic space right now.
What does synthetic biology mean to you?
Synthetic biology, fundamentally, is the application of engineering principles to biology. So, this idea that you can co-opt those things that exist in nature to solve specific problems by taking an engineering-based approach. You can create new cellular functions, new cells, new approaches to addressing problems using these toolkits. That's the big picture definition: you can engineer like you would write a computer program or software. You have reusable components that you're combining in new ways, but instead of using silicon, you're using biology to really engineer these components and perform specific functions from a cellular perspective.
It's a super cool area of science. While it is still evolving, synthetic biology has the potential to disrupt many different businesses and many different verticals. When I got into it initially, I was cynical about the real overlap between engineering an Impossible Burger and engineering a therapeutic for a patient. But the more you look, the more parallels you see between different applications, and there's a lot of interface with systems biology.
A few years ago, people wanted to do it, but it was really hard to do good systems biology, because modeling systems is hard. But now, I think that with the advent of some of our tools and things like cloud computing and leveraging GPUs, your laboratory is now the data center.
Absolutely. I think it's still very hard. Our approach was to start with a very simple system from a systems perspective: a bacterial cell. A single-celled organism is about as basic as it gets. Even there, it's still very difficult, because we can make genetic changes and not necessarily have the full ability to predict the impact that change is going to make on the whole cell. Even with the simplest application of systems biology, a unicellular bacterium, I think we're still only starting to get to the point where we can truly predict a perturbation and what impact it's going to have. We're starting to move there, but it's still very challenging and it's still quite iterative.
In a perfect world, when we have cracked and understood everything, we should be able to completely design in silico and then print the DNA, engineer the cell and have it perform exactly as we intended. That doesn't happen today. We have multiple cycles: design, build, test and then iterate.
But the goal is that the more we understand, the richer the data sets become, and the better the predictive algorithms become. The more data we feed in, because we've generated all of these prototypes, the more we can reduce the number of iterative cycles and the more you can truly be predictive and get it right the first time. That's the whole goal of engineering: that you can control everything to such an extent that you can design something that will function exactly as you've designed it to function.
What would be a simple example of how synthetic biology would make something that I would use?
One of the clinical programs that we're developing right now is for a devastating disease called enteric hyperoxaluria. Patients absorb too much oxalate from their diet, which can result in recurrent kidney stones or kidney failure requiring dialysis. It's very, very difficult. Oxalate is in a lot of healthy foods, so it's almost impossible to exclude from your diet completely. There's no treatment currently for the disease, and it's very difficult for patients.
We do know that there's a naturally occurring bacterium, Oxalobacter formigenes, that has a protective effect for these patients with elevated oxalate. It lives in your gut, and about 50% of us are colonized with O. formigenes. It consumes oxalate. It's one of these true symbiotic relationships between the human host and the bacteria that live on us, where these bacteria have evolved to use something that's a toxin for you as a source of food and nutrients.
But once you've lost it—because you received antibiotics, for instance—it's gone from your microbiome. Recolonizing patients with O. formigenes is really challenging because that bacterium is very, very difficult to grow. It's a strict anaerobe; it's very finicky.
We thought as a company, why don't we take the genes out of Oxalobacter formigenes and plug them into a probiotic bacterium that we know is very easy to grow and can be safely given to patients? Lots of people take probiotics, but what if we could make a specific bacterium to treat patients who have kidney damage from elevated oxalate? We were able to take the genetic module from Oxalobacter formigenes and plug it into a safe probiotic called E. coli Nissle, which is available as a probiotic supplement in Germany and Canada.
It took us a while to get the engineering right. We had to iterate to get it really working and active. Now, we have a bacterium that someone can take if they have this disease. Instead of excreting all of the oxalate in their urine and causing recurrent kidney stones, the bacterium consumes the oxalate in their GI tract and prevents them from getting recurrent kidney stones without any exposure systemically.
The bacterium acts in a way that is hopefully going to be a safe long-term option for patients with this disease. This underscores how we think about synthetic biology in terms of being able to pull from nature. We didn't make any of these components since they already exist in nature, but rather, we combined them in a unique way and used the principles of synthetic biology to engineer a new cellular therapy for patients with this disease.
That's fascinating. When you do that kind of research, you're obviously deeply steeped in biology and microbiology. Where does the digital part come into your research? Where do you leverage digital technologies to do these kinds of great things?
Technology plays a role from the initial design all the way through. Our first design step is completely in silico. We're using a computer program to pull a list of sequences of genes with known functional annotation that we’ll start to use to design and build in silico. So, access to that data and infrastructure and prior sequencing information is critical to jumpstarting the process.
Often we will engineer a specific bacterial strain and it will work well, but it may not work well enough. We then say, "Let's do a bake-off and look at a head-to-head of all of the known sequences that we think might have this function." That's only possible with digital technology, because often we're talking about 10,000 sequences. We're doing these huge screening experiments that would be absolutely impossible without digital curation and infrastructure. If that doesn't work and there's nothing that works better, we will use evolution-based approaches, where we're able to use machine learning to generate completely new sequences.
The nice thing about working with bacteria is that you can get a data set very quickly, because you can generate a library of thousands and thousands of mutants. You get a bacterial cell with the enzyme inside. You mutate it in multiple different places. You've different combinations of mutations, right? That's far too much data for any human to be able to integrate and curate. So, we rapidly establish these genotype-phenotype datasets, where you might have a couple of mutations that result in much better activity. You have another mutation in another place that gets so-so activity. What happens when you combine both of those together?
You have these huge data sets that then you can apply machine learning to and say, "these are mutants that we just generated randomly, but what happens if we take an informed approach to unique combinations of these mutations and test some of those? Do we get one plus one equals two, or do we get one plus one equals zero?" There are very few companies truly using machine learning to generate candidates and products, but we actually do that and do that successfully.
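The workflow described here, screening random mutants, recording genotype-phenotype pairs, and then prioritizing untested combinations of mutations, can be caricatured in a few lines of Python. Everything below is illustrative: the mutation names, activity numbers, and the simple additive baseline are invented for the sketch, and in a real pipeline a trained machine-learning model would stand in for the additive assumption.

```python
import math

# Illustrative genotype-phenotype data (all names and numbers are made up):
# each mutant is a frozenset of point mutations, mapped to its measured
# activity as a fold-change versus wild type.
measured = {
    frozenset(): 1.0,             # wild type
    frozenset({"A45V"}): 2.0,     # much better activity alone
    frozenset({"K97R"}): 1.2,     # so-so activity alone
    frozenset({"D210N"}): 0.5,    # deleterious alone
}

def additive_effect(mutation: str) -> float:
    """Log-scale effect of a single mutation relative to wild type."""
    return math.log(measured[frozenset({mutation})] / measured[frozenset()])

def predict(mutations: set) -> float:
    """Predict a combination's activity under a purely additive model.

    Real combinations are often non-additive ("one plus one equals zero"),
    which is exactly what the informed follow-up experiments test.
    """
    return measured[frozenset()] * math.exp(
        sum(additive_effect(m) for m in mutations)
    )

# Rank untested combinations and build the most promising ones next.
candidates = [{"A45V", "K97R"}, {"A45V", "D210N"}]
ranked = sorted(candidates, key=predict, reverse=True)
```

Under this toy model, combining the strong and the so-so mutation predicts roughly 2.4-fold activity, and the lab would then test whether the real combination is additive, synergistic, or "one plus one equals zero."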
Once we’ve found an engineered bacterium that looks, in vitro, the best it can be, we then need to work out what's going to happen when we put it into the human body. We have a quantitative biology group that will start using some known information to make predictions about how that bacterium is going to function in the human body. At the end of the day, patients don't care how good it looked in a Petri dish in the lab. What they want to know is, "What's it going to do for me?"
I want to know before we invest in taking this product forward into the clinic: how much do we know about how it's going to perform in the stomach, which is not a cozy environment? Because it's competing with all kinds of other bacteria and everything else, we've established a lot of modeling infrastructure to predict how the bacteria are going to perform in humans.
Every time we do a clinical experiment, we're feeding that back and saying, "How good was our prediction? Why did we get 50% more activity than we were predicting? Why is that? How can we tweak the model to be better at predicting what we're doing in the clinic?" What we've started to see is that we're much less reliant now on animal studies.
We used to go through hundreds and hundreds of mouse experiments to find something to convince ourselves that we had the right product to take forward. We have some GI simulation systems that we use to inform what we call in silico modeling or in silico simulation, and as we've gotten better at that, we've been able to be far less reliant on animal studies, which is important for many reasons, including cost, timeline, and just humane reasons. I think that capability is something that we're really interested in continuing to leverage.
Do you see the datasets that you're building as the equivalent of a compound library to an old pharmaceutical company? You have a dataset now that you keep enhancing and adding to and learning from?
I think there are two datasets that we view as being a source of strength. Number one is the sequencing dataset, which we continue to tap into. Some of that is proprietary to us, and the other part is plugging into datasets other people have and how we think about sharing and curating and collaborating. That sequencing space is just very important and is, I think, going to be fundamental going forward.
The other data set that I think is invaluable is the validation of what's going to work in humans. That's absolutely going to grow. The more experiments we do clinically, the stronger that ability to predict is going to become. We had a fork in the road early, maybe about four or five years ago, where we had to make a decision about using a third-party vendor to do some of the simulation and modeling for us around activity and efficacy. At the time, we spoke to a number of external vendors, but they had never really worked on an approach like ours: a microbe that was restricted to the GI tract and that could replicate. There was a lot more complexity than modeling a fixed dose of a small molecule or biologic.
I decided not to move forward.
Instead we made the decision that we were going to build that in house. We're going to own the data. We're going to own the model. As we start to go out and speak to potential big pharma partners, they're really interested, but they're really interested in the model.
I think helping small companies think about data as a source of future value is really important. I see more and more companies bringing that modeling, quantitative science, computational biology in house, because they recognize that it is valuable to curate over and above your specific technology or specific programs. I'm so glad that we made the decision to build it internally, because it's just been very, very helpful.
As a CEO of the company, what are the things that you're most challenged with today?
It's been a very challenging year for all companies and CEOs amidst COVID. What does work look like in the future? We came through a difficult year: trying to keep people safe, trying to maintain research productivity, trying to work out how to navigate a global pandemic. I think we've come out the other end of that now with different expectations about what work looks like and how you maintain the company culture and collaborative culture, while allowing employees the flexibility that they have had over the last year. People have learned new ways to work as part of the pandemic. That's one of the more human-element things.
What's most important for Synlogic as a company is showing that we can really do something to move the needle for patients. We have really interesting science. A lot of people in the ecosystem are fascinated with what we're doing. We're the leaders in this space around engineering bacterial therapeutics, but we're still very much a “prove it” story. People really want to see that you can make a patient feel better, that you can make a real difference.
So where do you see data-driven discovery going in the next three to five years? Where are we going? Where do you see you're going to need to take your company to stay on that leading edge?
I see new target identification as an area of growth. Often, you see multiple companies pursuing the same target, such as amyloid for Alzheimer's disease, for example. Finding new targets, new ways to treat diseases, is going to be important as we continue to accelerate development of solutions for human health.
There's a big data component as well, particularly when you're going after potential new microbial targets for disease. There is a microbial component in a lot of diseases that are big public health burdens, be it inflammatory bowel disease, be it cancer. You name it, there's an association between the microbiome, something bacteria are doing, and the cause of disease. But where we haven't made as much progress, I think, is really understanding the specific mechanisms. What are the targets? What exactly are the bacteria doing there that's causing the disease?
Our hypothesis as a company is that once you've found a new bacterial target and validated it to the point where you know this is really causing disease, then we’ll have built the components and the tools. We’ll have a shelf that has the components ready to go and be able to plug in that new target as a new potential therapeutic approach.
That’s what I’d love to see: more work in true drug discovery and earlier identification of new disease targets and potential drug targets. At some point, we’d have to invest in that ourselves, but right now, we have made the strategic decision not to invest in this area. There are some interesting companies and academic groups working in this space and we continue to watch it closely.