Dyna Robotics' New Model Gives Robots One Job: Work
Its first foundation model, DYNA-1, is designed for real-world tasks with high speed, reliability, and zero setup hassle
Amazon may be the company best known for experimenting with robots in its warehouses, but Dyna Robotics wants to push the entire field forward. The startup aims to deliver commercialized robots that perform straight out of the box in any environment, producing high-quality results with exceptional throughput. Today, it takes a significant step with the introduction of Dynamism v1 (DYNA-1), its inaugural robot foundation model.
“Historically, most of the robots, as you probably have seen today, [are] not widely commercialized, and there’s been a lot of industries that are desperately in need of robots because of a lack of labor, [or] people that they can’t hire,” Lindon Gao, Dyna Robotics co-founder and chief executive, tells me.
“Initially, we thought that people wanted robot employees. What that means is really just a humanoid robot placed in your workforce, and it could do anything for you. But, in speaking to hundreds of customers, we realized that a majority of the people and workforces don’t really need a robot employee. In reality, they just want tasks to be done, and most of these are extremely repetitive, single tasks.”
This is the basis behind DYNA-1, created to help robots with “sustained autonomous execution on complex, dexterous tasks using a pair of stationary arms.” The company describes it as a large language model but declined to disclose its parameter count or the number of tokens it was trained on.
Digging Deeper Into DYNA-1
When asked what makes DYNA-1 stand out, Gao and another co-founder, York Yang, point to three pillars: quality, speed, and robustness. They emphasize that Dyna Robotics has focused heavily on ensuring high-quality training data, leading to more efficient and optimized learning. On the technical side, the team has refined the model architecture, removing most, if not all, redundancies to boost operational speed. “We are [hitting] about 60 to 70 percent of human speed,” Yang says, a pace he notes is already “close to what the clients need.”
As for robustness, Dyna Robotics leverages a special type of reinforcement learning based on research from the startup’s third co-founder, Jason Ma, formerly a scientist at Meta, Nvidia, and Google DeepMind. The resulting reward model is helpful when dealing with corner cases. No, these aren’t situations in which the robot gets stuck in a corner; a corner case, by definition, is a problem or situation that occurs only outside of normal operating parameters.
“Most of the issue with robots is, if you get into a corner case, you get stuck, and then you can never get out,” Yang explains. “These cases are very hard to detect because the data set [is] huge; it’s very hard for humans to cherry-pick all the small pieces and avoid those corner cases. But, with some reinforcement learning techniques, [the robot] can explore by itself and try to recover by itself. This part is also very innovative in the industry to get us to a better place.”
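Dyna Robotics hasn’t published implementation details, but the mechanism Yang describes (a learned reward model that flags a corner case and lets the policy explore its way out) can be sketched in a few lines. The Python below is a minimal, hypothetical illustration: policy, reward_model, and env are assumed interfaces, not the startup’s actual code.

```python
RECOVERY_THRESHOLD = 0.3  # below this predicted reward, assume a corner case
N_CANDIDATES = 8          # alternative actions sampled during recovery

def act_with_recovery(policy, reward_model, env, state):
    """One control step: execute the policy's action unless the learned
    reward model flags a likely corner case, in which case explore."""
    action = policy.act(state)
    if reward_model.score(state, action) >= RECOVERY_THRESHOLD:
        return env.step(action)
    # Predicted reward collapsed: sample perturbed actions and execute the
    # one the reward model rates highest. The recovered trajectory can then
    # be logged as fresh training data without human cherry-picking.
    candidates = [policy.act(state, noise=0.2) for _ in range(N_CANDIDATES)]
    best = max(candidates, key=lambda a: reward_model.score(state, a))
    return env.step(best)
```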
Thanks to this reward model, DYNA-1 has developed several distinguishing capabilities. The company claims it can autonomously explore different strategies, recover from mistakes mid-task, and generate high-quality training data without heavy human involvement. It has also led to zero-shot environment generalization, which Gao defines as the ability to place a robot into any new environment and have it work “straight out of the box.”
Although the model was initially trained to fold napkins, Gao says it’s the first of many tasks Dyna Robotics is working on, and the adaptation time is decreasing rapidly. “What we have realized as we improved our robot learning is that our task-to-task transfer capability is really high,” he says. “So, from something like a fold to another fine-grain task, which we will probably disclose in a few weeks or a month, the ramp is much faster. It took us about, in total, from start to end, maybe like three months, to get our existing performance. From napkin folding to our next task, it took us about a month. And the next one only took us about two weeks. So you’re actually building a compounding S-curve on generalization and performance capabilities over time.”
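Taking Gao’s timeline at face value (roughly three months, then one month, then two weeks), each new task has ramped two to three times faster than the last; the trivial Python below makes the compounding visible. Only napkin folding has been disclosed, so the other task labels are placeholders.

```python
# Ramp-up times Gao cites for successive tasks, in days (approximate).
adaptation_days = [("napkin folding", 90), ("task 2", 30), ("task 3", 14)]

durations = [days for _, days in adaptation_days]
speedups = [earlier / later for earlier, later in zip(durations, durations[1:])]
print(speedups)  # [3.0, ~2.14]: each ramp is 2-3x faster than the previous one
```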
How Many Napkins Can Your Robot Fold in a Day?
Dyna Robotics has chosen to keep its foundation model closed, at least for now. “We realized that a lot of open source…paid more attention to how to make the thing just work. They didn’t pay too much attention [to] performance,” Yang remarks. He contends that performance is exactly what customers are looking for, though he allows that DYNA-1 may be open-sourced in the long term.
But providing that level of performance was a challenge for Dyna Robotics. Should adjustments be made at the model level or through data iterations? Yang explains that his team worked to ensure that the model structure could “execute really fast for the robot” and also that the data collected and trained against was “very efficient and it can represent the actual work very well.”
He believes this gives Dyna Robotics an advantage because “there are companies doing a lot of demos that probably don’t focus on efficiency, but mostly focus on if something can be done. And then, they’re executed very slowly.”
To highlight what its model can do, Dyna Robotics tasked a robot with folding napkins over a 24-hour period. In total, over 700 napkins were folded to restaurant standards, with a success rate of at least 99.4 percent, no human intervention, and a throughput (here, the rate at which the robot completes folds) of 60 percent of human speed.
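Those figures are easy to sanity-check. The back-of-the-envelope Python below derives the implied fold rate and failure count from the numbers above; the human baseline is inferred from the 60 percent claim and is only an estimate.

```python
# Published demo numbers (values from the article).
napkins_folded = 700   # "over 700" in the 24-hour run
hours = 24
success_rate = 0.994   # "at least 99.4 percent"

folds_per_hour = napkins_folded / hours                # ~29.2 folds/hour
failures = round(napkins_folded * (1 - success_rate))  # ~4 botched folds

# 60 percent of human speed implies a human baseline of roughly
# 29.2 / 0.60 ≈ 48.6 folds/hour under the same restaurant standard.
human_equivalent = folds_per_hour / 0.60
print(f"{folds_per_hour:.1f} folds/hr, ~{failures} failures, "
      f"human baseline ≈ {human_equivalent:.1f} folds/hr")
```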
But napkin folding isn’t an industry-accepted benchmarking standard like the ones cited by OpenAI, Meta, Ai2, Google, or ServiceNow. Gao asserts that the task was a client requirement and that Dyna Robotics benchmarks “based on what people want, not necessarily on ‘we’re coming up with an arbitrary number with an arbitrary task.’” He says the demonstration was meant to show that DYNA-1 offers “commercial viability.”
“We are the only company today that is in [a] production environment with dexterous manipulation,” Gao points out. “The reason why we decided to publish this number…over [a] 24-hour period, is because typically…if you are able to perform for 24 hours, then you could truly say that this is a…model that’s robust enough to run by itself without human assistance. And that’s what we wanted to prove. The throughput is our ability to show that we’re able to start hitting commercial milestones.”
The Human Speed Goal
Ultimately, Dyna Robotics has set its sights on having its robots complete tasks at human-level speed. Yang says it currently stands at around 60 to 70 percent. However, he hopes to one day match or even surpass human speed: “That will be much more important in this production environment.”
The startup has three types of robots, though it’s only using DYNA-1 on one for now. It’s also not alone in developing a robot foundation model: Nvidia, Physical Intelligence, Covariant, Figure AI, and 1X Technologies have all produced similar technology. Be that as it may, Dyna Robotics is pitching DYNA-1 as having an advantage because it doesn’t require any modifications when a robot is in a different environment; it just works when turned on. Gao argues that an AI model today works “in the environment that you train it in, but it does not work outside of that environment.”
Interestingly, unlike the firms mentioned above, Gao, Yang, and Ma’s company is pursuing a different robotic design: simple hardware with a stationary arm that can do one or more tasks repetitively and effectively. Dyna Robotics seems to be eschewing humanoid robotics.
Investors are interested in what the startup is selling: the company has raised $23.5 million in a seed funding round led by CRV and First Round Capital.
“This is probably one of the major breakthroughs in the industry because, for the first time, you’re able to see a foundation model that works in [a] production environment and actually proves commercial viability,” Gao boasts. “And that’s what we hope to bring to…the broader community, which is now, it’s not a single shot success in the lab or just a demo…but rather, we are…bringing this into [a] real-life environment, starting to add value for humanity.”
He hopes Dyna Robotics’ research will be able to “provide some level of context and information to help the community…further the research [in] embodied AI.”
Featured Image: An AI-generated image of a single arm robot folding napkins. Image credit: Adobe Firefly