If you can optimize the genomes of underexplored microorganisms, you could significantly broaden the biomanufacturing toolbox, enabling the cost-efficient production of a far wider range of products using diverse feedstocks, says Boston-based startup Anthology.
“By combining innovations from genome engineering, hardware engineering, and computational biology, we are making a platform that basically accelerates evolution by speeding up the process of generating genome mutations and then picking the winners from our custom-built devices.
“This allows us to generate a very large data set that maps all these genotypes to downstream phenotypes, basically enabling the future of generative genome design, where given any feedstock or any product of interest, we can design a genome from scratch.”
AgFunderNews (AFN) caught up with cofounders Tzu-Chieh Tang, PhD (TT), and Jing Zhang, PhD (JZ), at the SynBioBeta conference in San Jose to find out more.
AFN: What’s potentially disruptive about your tech?
TT: When you think about biomanufacturing right now, you usually begin with E. coli and yeast, so you’re naturally limited by their capacity to do things.
The way we think about biomanufacturing, and what makes our technology enabling, is this: In five to 10 years’ time, or even sooner, you might have a feedstock you really want to get rid of, such as ag waste, animal feces, or even plastics, and then you have products that you really want to make, such as textiles or high-value proteins.
What’s connecting the two nodes is going to be a genome, an organism. We want to generate that organism using a combination of genetic engineering, hardware engineering, and AI tools, so we are not bound by species anymore.
AFN: In a nutshell, what does Anthology do to make this happen?
JZ: While other companies are trying to engineer cells like computers, Anthology takes an entirely different approach by learning from what nature does best.
So using our technology that combines innovations from genome engineering, hardware engineering, and computational biology, we are making a platform that basically accelerates evolution by speeding up the process of generating genome mutations and then picking the winners from our custom-built devices.
This allows us to generate a very large data set that maps all these genotypes to downstream phenotypes, basically enabling the future of generative genome design, where given any feedstock or any product of interest, we can design a genome from scratch.
AFN: Why are fungi potentially interesting hosts for biomanufacturing?
JZ: Engineering fungi is very time-consuming and complex, but the upside is great: we’ve seen industry-proven workhorses that can produce proteins at titers of up to 150g per liter. But they can take a long time [to produce that target protein].
Working with E. coli, for example, you can see results within a week. With yeast, you can see results in one to two weeks. But with fungi, each engineering cycle can take up to 10 weeks before you see a new phenotype. So we have to be very creative in the way we address this engineering problem: we try to generate a lot of genome variants in a short amount of time, and then use microfluidic devices and other high-throughput equipment to screen for the candidates that are the best fit for a given purpose.
We have also started to decode the language of protein secretion in fungi by applying AI tools. We are trying to learn which structural features matter for protein production, and eventually that will allow us to generate a panel of different hosts that can make different types of proteins of interest.
AFN: How do companies currently try and optimize their production strains?
TT: There are two mainstream approaches for doing this. In the first one, imagine there are 20 different control knobs and you can tune them one by one [introducing one gene at a time, one mutation at a time, hoping to find improvement through a library of thousands of different constructs]. But as you can imagine, this requires a lot of labor, a lot of money, and, of course, time.
And then there’s another approach that has been around for a while where people are trying to imitate evolution, but we have a very limited set of tools enabling us to do that. So usually people use chemicals or UV to introduce mutations across the genome. But this is a little bit like playing the lottery with extremely low odds.
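The trade-off TT describes between the two approaches can be caricatured in code. The sketch below uses a purely hypothetical toy fitness landscape (20 binary “knobs,” not a real strain model): tuning one knob at a time reliably finds improvements but costs one engineering cycle per knob, while scattering random mutations lottery-style rarely stumbles on the optimum.

```python
import random

N_KNOBS = 20            # hypothetical tunable genes ("control knobs")
TARGET = [1] * N_KNOBS  # best setting of each knob (unknown in reality)

def fitness(genome):
    # Toy fitness: how many knobs match their optimal setting.
    return sum(g == t for g, t in zip(genome, TARGET))

def one_knob_at_a_time(genome):
    """Approach 1: test each knob individually and keep improvements.
    Reliable, but costs one engineering cycle per knob."""
    rounds = 0
    for i in range(N_KNOBS):
        trial = genome.copy()
        trial[i] = 1 - trial[i]  # flip one knob
        rounds += 1
        if fitness(trial) > fitness(genome):
            genome = trial
    return genome, rounds

def random_mutagenesis(genome, attempts=100, seed=0):
    """Approach 2: scatter random mutations (UV/chemicals) and hope.
    Each attempt flips each knob with 10% probability -- a lottery ticket."""
    rng = random.Random(seed)
    best = genome
    for _ in range(attempts):
        trial = [1 - g if rng.random() < 0.1 else g for g in genome]
        if fitness(trial) > fitness(best):
            best = trial
    return best

start = [0] * N_KNOBS
tuned, rounds = one_knob_at_a_time(start)
print(fitness(tuned), rounds)  # reaches the optimum, but only after 20 rounds
print(fitness(random_mutagenesis(start)))  # typically far below the optimum
```

In this toy model the knob-by-knob search always reaches the best genome, but the number of rounds scales with the number of knobs, mirroring the labor, money, and time TT mentions; the mutagenesis lottery is cheap per ticket but rarely wins.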
AFN: So in layman’s terms, what are you doing differently?
TT: Imagine you have a book. We use genetic tools so we can change the order of the paragraphs. We can change the order of the chapters. We can remove a few pages. We can even borrow a few pages from another book.
By doing this, we can introduce diversity. And most importantly, if we see some changes in the phenotype or the traits we like, we can go back to the book, look at it through DNA sequencing and figure out what kind of traits are caused by what kind of mutation.
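TT’s book analogy maps onto classic genome-rearrangement operations. The toy sketch below (gene names and operations hypothetical, for illustration only, not Anthology’s actual method) shows how one might generate diversity by inverting, deleting, or moving segments, while logging each edit so a phenotype can later be traced back to the mutation that caused it:

```python
import random

def rearrange(genome, seed=1):
    """Apply one random rearrangement to a genome represented as an
    ordered list of gene names, and record what was done."""
    rng = random.Random(seed)
    g = genome.copy()
    op = rng.choice(["invert", "delete", "translocate"])
    i, j = sorted(rng.sample(range(len(g)), 2))
    if op == "invert":        # reverse a segment ("reorder the paragraphs")
        g[i:j] = reversed(g[i:j])
    elif op == "delete":      # drop a segment ("remove a few pages")
        del g[i:j]
    else:                     # move a segment elsewhere ("reorder the chapters")
        segment, rest = g[i:j], g[:i] + g[j:]
        k = rng.randrange(len(rest) + 1)
        g = rest[:k] + segment + rest[k:]
    return g, (op, i, j)      # the log links each variant back to its edit

genome = ["geneA", "geneB", "geneC", "geneD", "geneE"]
variant, edit = rearrange(genome)
```

Keeping the `(op, i, j)` log alongside each variant is the point of the exercise: it is a stand-in for the sequencing step TT describes, where you “go back to the book” to see which mutation produced a desirable trait.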
AFN: How are you screening to see which mutations lead to desirable phenotypes?
TT: It depends on the traits you are interested in. If you are interested in, for example, the ability to use waste as a feedstock, you can just let the cells compete against each other. Putting millions of them in a bioreactor is like running a marathon: you only care about which ones are the fastest.
But some traits cannot be analyzed this way. For example, if you have a group of singers and you want to assess their ability to sing, you cannot have them all sing together in one small room and still evaluate them individually; you would need to put them into separate rooms.
So that’s what we do with the cells. We put them in separate droplets or beads and pass them through a channel, and as each one passes through, we analyze it using lasers or other optical tools, so we know what it looks like and how much protein it is secreting, and we can do real-time scoring and ranking.
And then from there [we can pick some potential] winners and study their genotypes and then we can put them through a pipeline of scaling up and validation.
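The score-and-rank step TT describes can be illustrated with a toy simulation (the readouts, numbers, and function names below are hypothetical, purely for illustration, not Anthology’s actual pipeline): each droplet gets a simulated optical readout, variants are ranked, and the top few advance to scale-up and validation.

```python
import random

def screen_droplets(variants, top_k=3, seed=42):
    """Toy droplet screen: assign each variant a simulated optical
    readout (e.g. secreted-protein fluorescence) and keep the top_k."""
    rng = random.Random(seed)
    # Simulated readout: baseline signal plus random variant-specific effect.
    scores = {v: rng.gauss(mu=100.0, sigma=15.0) for v in variants}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k], scores

variants = [f"variant_{i}" for i in range(1000)]
winners, scores = screen_droplets(variants)
print(winners)  # the highest-scoring variants go on to scale-up and validation
```

The design choice mirrors the marathon-versus-singers distinction above: because each variant is scored in its own droplet, the ranking reflects individual performance rather than competition in a shared pool.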
AFN: You’re building on Nobel Prize winning work from Barbara McClintock on so-called ‘jumping genes’?
TT: We take mobile genetic elements, or ‘jumping DNA,’ from a distant host, and from there we use a lot of modern genetic tools to augment their functionality and, most importantly, their controllability, so we can precisely tune jumping events in our host of interest.
AFN: What is your business model at Anthology?
JZ: Currently we’re testing different business models but there are two main approaches. One is to prove the technology in an existing industrial host, and show that by using our technology, we can improve the cost margin for our partners, and we have demonstrated some success there.
At the same time, we are also exploring unmet market needs and talking to large industry partners to see what proteins they want to produce or what feedstocks they want to utilize. And from those conversations, we have generated some high-conviction protein targets. Now we are negotiating with larger commercial players to be able to produce these proteins and then push them towards the finish line.
But the whole company was built on the basis that we are addressing a real industry need instead of building a set of cool technologies.