Adobe Launches Firefly Mobile App, Adds AI Models from Ideogram, Pika, Luma, and Runway
The updates bring Firefly to iOS and Android and expand its creative power with leading third-party models
Adobe’s image and video generation platform is coming to iOS and Android. On Tuesday, the company introduced its native Firefly mobile apps, giving creators more freedom to generate content on the go. The launch is part of a broader update to the platform that includes support for new AI models from Google, Black Forest Labs, Ideogram, Pika, Luma, and Runway. Adobe is also enhancing Firefly Boards with better tools for multimedia generation and editing.
Creating with AI on the Go
Although Adobe Firefly has been accessible through any browser, the company hadn’t made a serious investment in native apps until now. Zeke Koch, Adobe’s vice president of product management for gen AI, explained during a press briefing last week that the majority of Firefly usage happens on the desktop web. However, when the company released its Firefly Video Model in public beta in February, the platform saw a “massive influx” of people arriving through its mobile website, something that surprised Koch. He had expected still image generation, not video, to be the predominant use on phones.
This discovery led Adobe to launch its Firefly apps for iOS and Android. Koch highlighted one use case Adobe has seen: people on a subway photographing something with their phone and then using it as a composition reference in Firefly to “generate something new with the same structure of that photo that they took…and then they take [it] and animate it into a video. And then later on, when they get to their computer, they take that video and put it in Premiere and turn it into something that they post…”
Creators should be able to avail themselves of most, if not all, of the same features in Firefly mobile that they have on desktop web. This includes text-to-image, text-to-video, and image-to-video generation, adding or removing objects with Generative Fill, and extending the size of an image with Generative Expand. In addition, they can use any of the ever-growing roster of AI models supported by Firefly, including OpenAI’s image generator, Google’s Imagen 3 and Veo 2, and Black Forest Labs’ Flux 1.1.
Koch acknowledged that the AI models in Firefly’s mobile apps are all cloud-based, meaning they only work when the device has a Wi-Fi or cellular data connection. He revealed that Adobe has been researching how to run models locally on-device, but the company doesn’t have anything to announce yet.
“Creators continue to impress us with the breadth and artistry of the images, videos, graphics, and designs they’re dreaming up in the Firefly app using models from both Adobe and our partners,” Ely Greenfield, Adobe’s senior vice president and chief technology officer, said in a statement. “Our goal with Firefly is to deliver creators the most comprehensive destination on web and mobile, with access to the best generative models from across the industry, in a single integrated experience from ideation to generation and editing.”
New AI Models on Firefly
And speaking of models, the list of supported models is growing with new additions from Black Forest Labs (Flux.1 Kontext), Ideogram (3.0), Luma AI (Ray2), Pika (text-to-video), Runway (Gen-4 Image), and Google (Imagen 4 and Veo 3). According to Koch, these partners were brought into Firefly because they agreed to three core principles Adobe set for granting customer access to third-party models.
The first is that model providers agree not to use the data provided by creators in Firefly to train their own AI models. “To our customers—most of whom are creative professionals or business folks—it was super important that we not train on their assets,” he remarked. Adobe asserted that all of its Firefly partners have “agreed to those terms.”
The second is a seamless user experience: all models in Firefly are accessible under a single subscription. As long as creators are using the platform’s site or apps, their Adobe Firefly login gives them access to both first- and third-party models.
Lastly, to give users peace of mind, partners must agree that all generated assets will be digitally signed and carry a verified certificate identifying the model that created them. These so-called “nutrition labels” are intended to make AI-generated images and videos transparent to consumers.
Ideogram, Luma, Pika, and Runway models will initially be available through Firefly Boards. Google’s Imagen 4 can be used in text-to-image, while Veo 3 is accessible in text-to-video and image-to-video.
Video Generation Comes to Firefly Boards
The last update coming to Firefly involves the platform’s ideation and moodboard canvas. First launched in April 2025, Firefly Boards helps streamline the early stages of the creative process, starting with images. Now, it’s a multimodal offering, thanks to support for Adobe’s Firefly Video model, Google’s Veo 3, Luma AI’s Ray2, and Pika’s Video Generator.
Koch noted that some creators are using Firefly Boards to map out short videos by having it generate key frames. “What a lot of people do is they’ll use various generative models to block out what they want that key frame to be—that first image—and then they use a model to generate it,” he explained. “We said, ‘hey, it’d be great if we could just put that capability right into boards where you can take an asset and convert it into a video right there, and then get a sense of how your movie might flow while you’re there inside Boards.’”
That’s exactly what the Firefly Boards update delivers: creators can now move from ideation to video generation within a single, integrated canvas, iterating on ideas and previewing how a sequence might flow without leaving Boards.
In addition, creators can refine images within their boards using natural language prompts, organize their ideas into clean, presentation-ready layouts with a single click, and sync updates across other Adobe apps through linked boards.
The updates to Firefly Boards are available today, though the service remains in public beta.
This is the second AI-related announcement this week from Adobe. On Monday, the company launched its LLM Optimizer, a tool designed to help companies shape how their brands show up in AI-generated search experiences such as Google’s AI Overviews.