Between the dream knight and the robot: why I decided to train people to speak the language of visual AI
Imagine the scene: you ask an artificial intelligence to create an image of a valiant, victorious knight, and it comes up with a silver robot that would be more at home in The Wizard of Oz than in a Game of Thrones story. This is exactly what happened on August 27, during my training session on the art of the visual prompt, when I wanted to show my participants the difference between ChatGPT and Gemini. Let's just say that the knight generated by ChatGPT looked more like he'd just come out of the clearance section of the Jean Coutu the day after Halloween than like he'd just proudly fought a terrible red dragon.

That moment, funny as it was, is a perfect illustration of why I created this course. Because today, everyone and their mother can generate images with AI, but between you and me, there's still a big difference between clicking "generate" and actually knowing what you're doing. It's like the difference between putting your money in a slot machine at the casino and investing it wisely: technically, both can pay off, but one is pretty much left to chance.
Having the tool is one thing. Knowing how to use it is quite another!
My participants, a fine mix of the manufacturing and tech worlds, arrived with the same existential question: "Why do my AI images look cheap?" The answer? Because they've been sold the idea that AI does all the work for them. Spoiler alert: no. AI is like a super-talented trainee designer who needs a clear explanation of what you want. If you tell it "make me a beautiful picture", you'll get something. If you talk to it about framing, style and point of view, you'll get something that looks good.
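To make that contrast concrete, here's the kind of before-and-after you can try yourself (the wording below is illustrative, not the exact prompts from the session):

```
Vague:    "Make me a beautiful picture of a knight."

Specific: "A valiant knight in battered steel armor, low-angle shot,
           golden-hour backlight, cinematic fantasy style, shallow
           depth of field, looking toward a distant red dragon."
```

The second prompt doesn't use any magic words; it simply answers the questions a human designer would ask you anyway: how is it framed, in what style, and from whose point of view?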
What fascinates me most is how each tool has its own personality and its own limits. ChatGPT with DALL-E is your Swiss Army knife: accessible, integrated, perfect for quick and varied needs. MidJourney? It's the artist of the gang, the one that will create images with a real visual "wow" factor, but it requires a little more careful prompting to understand what you want. Stable Diffusion is for those who like total control - a bit like the people who refuse to buy IKEA furniture because they prefer to build things themselves.
Beyond the "generate" button: understanding for better control
What really lit a spark with my participants was when we unpacked the mechanics behind it all. Token weighting isn't exactly the stuff of dreams, but it's the art of telling the AI "hey, this part is really important, focus on it". A bit like telling your boyfriend three times NOT to forget the milk at the grocery store.
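For the curious, here's a minimal sketch of what weighting looks like in practice, assuming the `(token:weight)` emphasis syntax popularized by the Stable Diffusion WebUI (this little parser is an illustration I wrote for this post, not the actual code those tools run):

```python
import re

# Matches the "(some words:1.3)" emphasis syntax; text outside
# parentheses keeps the default weight of 1.0.
WEIGHTED = re.compile(r"\(([^():]+):([\d.]+)\)")

def parse_prompt(prompt: str) -> list[tuple[str, float]]:
    """Split a prompt into (fragment, weight) pairs."""
    parts: list[tuple[str, float]] = []
    pos = 0
    for match in WEIGHTED.finditer(prompt):
        # Plain text before a weighted group keeps weight 1.0
        plain = prompt[pos:match.start()].strip(" ,")
        if plain:
            parts.append((plain, 1.0))
        parts.append((match.group(1).strip(), float(match.group(2))))
        pos = match.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        parts.append((tail, 1.0))
    return parts

prompt = "(valiant knight:1.4) on a battlefield, (silver armor:0.8)"
print(parse_prompt(prompt))
# → [('valiant knight', 1.4), ('on a battlefield', 1.0), ('silver armor', 0.8)]
```

The numbers above 1.0 pull the model's attention toward a fragment; numbers below 1.0 push it away - which is exactly how you tell it the knight matters more than the shiny armor.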
Using sketches to guide composition? It's a complete game-changer because, as they say, a picture is worth a thousand words, even when it comes to creating one. You're not starting from scratch, you're giving a clear, visual direction instead of trying to explain a composition in vain. It's like giving an engineer's blueprint to a worker on the floor instead of just explaining how it should look.

But beyond pure technique, we also delved into the gray areas that make many people uncomfortable.
- Copyright issues when you generate an image that bears a striking resemblance to the style of a living artist.
- The environmental impact of a technology that consumes as much energy as a small town to create an image of a cat dressed as an astronaut.
It's important to talk about this, because understanding these issues is what makes the difference between using AI responsibly and just riding the wave without thinking.
Autonomy is gained one prompt at a time
For my clients who manage websites or want to create content for their social networks, this training was about giving them the keys to the kingdom. Because yes, you can always hire a designer for every little need, but sometimes you just want to create a quick visual to announce a promotion or illustrate a blog post. With the right composition reflexes and a solid understanding of the available tools, suddenly you don't have to wait three days for your image. It doesn't replace the work of a professional designer on big projects, but it does give them the autonomy to handle the 1,001 little things that, put together, define their brand image.
What makes me particularly proud is to see how this autonomy transforms the relationship they have with their digital content. It's no longer "we need an image", it becomes "we want to tell this story, visually, and here's how we're going to do it". The nuance is subtle, but it changes everything.
In the end, perhaps my failed ChatGPT knight was just what this training needed. Because it reminded us that, in 2025, artificial intelligence is an extraordinary tool, but it doesn't replace human creativity, strategic thinking and, above all, the intention behind each creation. It amplifies them, if we know how to talk to it. And that's something you have to learn. One prompt at a time.
Do you have a project?
For all your Inbound Marketing initiatives and services involving the HubSpot CRM platform, we would be delighted to collaborate with you. Contact us today to maximize your results with HubSpot and accelerate your growth.