Hello, everyone! Today, I’m diving into LumaLabs Dream Machine, an AI-driven app designed to transform the way we create art videos. I’ve spent some time testing its features, and I’m here to share an honest review of its capabilities, ease of use, and the overall quality of the videos it produces. In an era of big promises and postponed release dates, I was genuinely intrigued to experiment with what Dream Machine can do.

And when I said honest review, I meant really honest.

What is Dream Machine?

Dream Machine is an AI model designed to quickly create high-quality, realistic videos from text and images. Positioned as a highly scalable and efficient transformer model, Dream Machine is trained directly on videos, which supposedly allows it to generate physically accurate, consistent, and eventful shots; each generation produces a five-second video. Dubbed the first step towards building a universal imagination engine, Dream Machine is now accessible to everyone, aiming to revolutionize the way we approach video creation with AI.

Pricing system

Here’s a breakdown of the different subscription levels available:

  1. Free Plan: This introductory tier is perfect for those just getting started or experimenting with AI video generation. It includes up to 30 generations per month at no cost, allowing users to explore basic features.
  2. Standard Plan: Priced at $29.99 per month, this plan boosts the number of available generations to 120.
  3. Pro Plan: For professional creators who need more robust capabilities, the Pro Plan offers 400 generations per month for $99.99.
  4. Premier Plan: At $499.99 per month, this is the most advanced plan, offering a substantial increase to 2,000 generations per month.
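To put those tiers in perspective, here’s a minimal Python sketch comparing the effective cost per generation of each plan. The prices and quotas come straight from the list above; the script itself is just illustrative arithmetic, not anything official from LumaLabs:

```python
# Effective cost per generation for each Dream Machine plan,
# using the monthly prices and generation quotas quoted above.
plans = {
    "Free": (0.00, 30),
    "Standard": (29.99, 120),
    "Pro": (99.99, 400),
    "Premier": (499.99, 2000),
}

for name, (price, generations) in plans.items():
    per_gen = price / generations
    print(f"{name}: ${per_gen:.2f} per generation")
```

Interestingly, all three paid tiers work out to roughly $0.25 per generation, so the higher plans buy volume rather than a bulk discount.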

Features Overview

The app has a really user-friendly interface. Beginners can navigate through its features easily. Just write your prompt or insert a reference image to start with. That’s it.

You can view your video gallery and account plan. In the video gallery you can see finished and queued videos. How long is the queue?

According to the website, 120 frames in 120 seconds, which works out to a five-second clip at 24 fps. However, if you were planning a popcorn break around that estimate, you might end up hungry.

In practice, due to the surge of eager artists testing the limits of their free credits, the system seems a tad overwhelmed. The result? A queuing time that feels more like a suspense thriller, constantly keeping you guessing. In my own experience, a video that was cheerfully promised in 9 minutes decided to take a scenic route, stretching those 9 minutes into a leisurely 2 hours. And just when you think you’ve got a handle on the timing, it’s a roller coaster of expectation management—jumping from 10 minutes to 2 hours, then perhaps back down.

While I do understand the high demand – after all, who wouldn’t rush to test out their shiny new toy? – the persistent optimism of the queue timer could use a dose of reality. A little accuracy in wait times wouldn’t hurt, unless, of course, the goal is to give us all cardiovascular problems :D

While I was writing this review, they added a Download button, which makes the interface a lot friendlier.

Now let’s get to the real thing – the videos.

Quality

In assessing the quality of Dream Machine’s output, it’s clear that this AI tool stands a notch above many others in the arena of video generation. The videos produced are impressive, particularly when you consider the relative infancy of this technology. While the results might not be ready to fill the resolution of a cinema screen (yet), they are undoubtedly transformative for the field.

The app delivers a solid performance, creating videos that, for the most part, exhibit a higher fidelity and coherence than what is typically seen from AI-driven tools currently. The textures, colors, and dynamics of the videos hold together well under a variety of artistic directions, but do best in realistic settings.

Text-to-Video

Let’s start with the text-to-video settings. This was one of my first generations, and I wanted to portray someone entering a room after a fire. I really like the camera movement because you get the shaky feeling of someone literally walking in and feeling out each step. I like the fast camera shot that makes the adrenaline go up. Overall, it’s nice, but there are a lot of artifacts that just don’t fit in, especially at the end of the shot.

Next, I tried generating some cartoons, and I have to say that I’m amazed. There are a lot of colors, a lot of movement, and they’re really coherent. What I really don’t like is how human anatomy breaks down: the changes happen very fast, and the bodies end up heavily morphed. Another problem I saw is the anatomy of hands, the everlasting problem of AI. I got some extra fingers and some weirdly shaped hands, but apart from that, the coherency of the clip was very good.

I will always be in love with landscape shots, so you can see one example of that. The prompt included a drone shot, and I really like how it flows, but I don’t like that in some parts the image is overexposed, while the rocks and cliffs seem very dark and lack any kind of texture, especially in the distance, where everything seems to merge into a big blur.

One of the elements I like to incorporate in my images, and now videos, is the use of perspective. I’m a fan of Kubrick, especially how he flows through all of those perspective shots and evokes an emotional response. Here we have my attempt at horror: a person running and then crashing into a parallel universe. I was curious whether Luma would be able to keep the person running in a straight line within a single perspective to make it look a bit more eerie. I can’t say I’m really satisfied with the result because it’s glitchy and the people are merging, but overall the feeling and atmosphere are very liminal, which is the effect I wanted to achieve.

Image-to-Video

I saw a lot of users on X and Instagram inserting their own images into Luma to get better results and the themes and imagery they want. So I put a few of my Midjourney images into Luma; let’s see how that went. First, I inserted an image of an alien landscape, and I really like how it starts animating, as if someone is hiding behind the rocks and watching. It gives me a little bit of Star Wars vibes, which wasn’t really my intention. However, I really like how Luma kept the details and textures of the stones, the buildings, and the distant space.

I used one of my favorite images and inserted it into Luma, and this one really amazed me. I didn’t invest in the prompt, so it wasn’t very descriptive. I really like how the reflection moves and how it encapsulates a whole story in 5 seconds. This is the one I’m probably most proud of.

I attempted to make a Victorian ghost come down the stairs, and it’s actually really good because it kept the tone and atmosphere of the image. However, you can see the glitchy movement of the ghost walking down the stairs. At the end of the video, the ghost’s facial expression changes, which is something I didn’t prompt. It got me thinking about how such unprompted changes could influence future stories.

I’m including some other examples and their images side by side for comparison so you can have a better grasp of how animating with Luma looks. And if you know me, you know I like my horror. But for the sake of showing multiple styles and diversity, bear with me.

Here is a preview of all used images from Midjourney, in order:

Some more examples:

Overall, I think Luma is a great tool in development with a lot of potential. In the near future, it will surely expand a lot, and we can expect many improvements because they are listening to users’ needs. However, I don’t think AI videos will replace cinematography completely in the near future because we still cannot control every movement, every logical aspect, and every detail we want. As long as we are human and want every detail controlled, AI will not give us the results we want. However, I think Luma and similar tools will drastically cut down the cost of production and give a lot more creative opportunities, enabling the creation of things we could never have imagined becoming real.