In this series of articles, Crakmedia’s teams will discuss their experience with the AI-based tools they use daily. These tools allow the Quebec-based performance marketing company to quickly adapt to the changing conditions of this highly dynamic type of marketing, which requires near real-time adaptability.
Artificial intelligence is definitely 2024’s buzzword, with several new AI model launches from major tech players. On May 13, 2024, OpenAI launched a new version of its popular model, GPT-4o (for “omni”), a versatile model capable of handling text, audio, and images. Meta also launched the multilingual version of its Llama 3.1 model last July, adding French, Spanish, Portuguese, Italian, and Hindi to the languages it can natively understand, while remaining one of the few open-weight models available from a major player.
These large language models are well known to the public, and their free versions have been available for some time now. Although impressive, they pale in comparison to the power of their paid versions. Our communications specialist François Tremblay speaks with our creative director, David Hamel, and our web designer, Shanelle Robillard, about the impact that AI technologies have on their work.
AI: A Valuable Asset in Our Toolbox
FT: AI is omnipresent nowadays, and I wonder if that’s also the case in visual creation departments like Crakmedia’s?
DH: We actually have a whole bunch of them! We obviously use a lot of image generation and automatic retouching software. As for video generation, we’re not quite there yet, simply because it consumes a lot of resources and we can’t run it on our current infrastructure, but it’s coming. We rely heavily on Firefly, the generative AI integrated into the Adobe suite, for day-to-day photo retouching: extending backgrounds, changing the angle of an arm, adding clothing accessories…
SR: It’s really handy for quickly generating specific objects to add to an image when you have a precise idea, but can’t necessarily find images that match exactly what you had in mind. Let’s say you want to add a hat on someone’s head—AI can generate it and integrate it seamlessly, you won’t even notice it was added.
DH: Then, in terms of image generation, we rely heavily on Midjourney, one of the most powerful programs in this field when you want a really polished and perfectly integrated result. We use it to create assets for composing images: parts of logos, badges, tags… pretty much everything except text because it doesn’t work very well. We also have a Stable Diffusion instance running on our internal infrastructure, which allows us to create content by training our own model on visual assets that we own. Stable Diffusion will eventually enable us to create video content for our ad campaigns…once IT updates our servers!
SR: Another AI tool we use a lot is Remini AI, an image upscaler. It’s quite common to receive low-quality images and be unable to reshoot or find the originals. AI is particularly useful in these cases because if you go from 100 pixels per inch to 300 pixels per inch, you’re basically creating two-thirds of the image out of nothing, as that information doesn’t exist in the original. AI analyzes the image and deduces the missing pixels based on context, which would be impossible to do manually in a reasonable time. We also use AI to easily retouch lighting or even add light sources that weren’t in the original image. This allows us to adapt existing photos without going back to the studio.
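Shanelle’s upscaling figure is easy to sanity-check with some back-of-envelope arithmetic (the 100→300 ppi numbers come from her example; the helper function is our own illustration, not part of any tool mentioned above). Tripling the resolution per inch means that, along any given row of pixels, roughly two-thirds are new, and by total pixel count nearly nine-tenths of the image has to be invented:

```python
def upscale_stats(src_ppi: int, dst_ppi: int) -> tuple[float, float]:
    """Return (total pixel growth factor, fraction of output pixels that are synthesized)."""
    linear = dst_ppi / src_ppi      # growth along each dimension (3x for 100 -> 300 ppi)
    area = linear ** 2              # growth in total pixel count (9x)
    new_fraction = 1 - 1 / area     # share of output pixels absent from the original
    return area, new_fraction

area, new_frac = upscale_stats(100, 300)
print(area, round(new_frac, 3))  # prints: 9.0 0.889
```

So the “two-thirds” holds per dimension (2 of every 3 pixels in a row are interpolated), while by area the upscaler is inventing closer to 89% of the final image, which is why context-aware AI reconstruction matters so much here.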
DH: We also have tools like Revoicer, which allows us to add voiceovers to our video ads. In performance marketing, we often need to modify campaigns overnight or create multiple versions of the same video to test consumer responses. A tool like Revoicer lets us do that very quickly.
FT: You say the quality is high, but it seems to me that AI often produces results that are a bit uncanny. Are the results that much better with the paid versions?
SR: Oh yes, definitely! We can produce photorealistic images with the paid models, something the free and earlier versions simply can’t match. We don’t always have time to create content in a studio, and the results are excellent with AI… at least with the paid versions!
Adobe’s AI, Firefly, has become very powerful for generating photorealistic images or “completing” images when we need to expand backgrounds or add missing elements. We can even generate realistic faces and, yes, realistic hands that don’t have eight fingers!
DH: And the voices in the voiceover tools are really amazing. These are not the robotic voices we used to have. We can give them a specific personality or a particular emotion. There are so many more parameters, and they can sound very natural compared to what we had even a year ago.
FT: Given that you have access to all these tools, how do they influence your creative process?
SR: Honestly, it saves us so much time that we can tackle much larger projects in a fraction of the time it would take if we created everything traditionally. We no longer have to impose as many limits due to lack of time or resources. Before, I could spend hours photoshopping images, but now I can ask Firefly to do the retouching, and it does it in two seconds—it’s incredible.
DH: We can quickly generate storyboards and proofs of concept, which allows us to start working right away. We can come into the creative process of an ad campaign with visuals early on, which allows everyone to get a concrete idea of what it will look like and clarify the vision at the start of a project. We can also generate reference images to guide our creative process and recreate that in drawing, photography, or video.
FT: So, you both seem to consider AI to be mostly an advantage?
DH: Honestly, it allows us to get a polished, very finished-looking product in a fraction of the time it would have taken before. The time savings are huge! But it’s a bit of a double-edged sword because people start expecting our work to be instantaneous: we feed it to the AI, and boom, we have a result. It’s not quite like that.
SR: You have to write very precise prompts and know what you want, how you want it, because otherwise, you’ll get something kinda random. You need a good eye to spot what doesn’t work in the generated images so you can correct it. And once again, AI doesn’t handle text well at all, so we have to integrate it ourselves most of the time.
DH: Sometimes people come to us with AI-generated visuals, saying they made it in five minutes and got a great result. But when we look at it with an artist’s eye, we realize there are lots of issues, and we can’t put that online as it doesn’t meet our quality standards. At all.
SR: The thing many people don’t understand is that you can’t just let AI run and use what it generates as-is. You can use AI to support your work, but it doesn’t do everything, and you still have to do quality control and adjustments… it can’t replace your judgment and expertise.
FT: What are the limitations of AI, the things you can’t use it for?
DH: There’s not much it truly can’t do, but again, it does not handle text very well.
SR: Suppose we write a very precise prompt to create a logo that includes text—AI generally doesn’t understand what we’re asking. The text will be there, but it will rarely be in the right place, and it’ll be gibberish. We always have to rework it to integrate the text properly.
DH: Another problem is that AI often understands the concept we want, but its first draft will usually be way too detailed, with too many elements overloading the design. It doesn’t handle minimalism or very abstract concepts well, which is often what we want when generating logos or promotional materials. If you ask for a logo for a spider superhero (you know which one), AI will most likely generate a big hairy spider with too many eyes.
SR: Less is more—it hasn’t gotten that concept yet! And AI doesn’t replace your creativity or vision: you have to tell it exactly what you want, understand what you’re asking, and you can’t let it create freely. It doesn’t create—it follows instructions.
FT: So, you don’t think AI will eventually replace flesh-and-blood artists?
DH: No, never completely anyway. There are things AI does well in terms of support and facilitation… but like Shanelle just said: it doesn’t create anything by itself. There’s a lot of imagination, concept design, and visualization work to do before creating a precise prompt, and then we have to analyze the result and ask for very specific corrections or photoshop them ourselves.
It’s kind of like the work you’re doing right now: ChatGPT could technically replace you and write this article for you… but in practice, you have to supervise it and rework the result. It takes a human behind it to ensure high quality because AI can’t evaluate quality. Creative work remains human work, and AI simply helps us achieve our vision faster.
SR: I don’t think it will replace us either, but I think it could make us somewhat lazy if we take these tools for granted and don’t make an effort to learn or maintain our skills. Technically, with the right prompt, you can generate a painting in seconds and pass it off as a piece of art. Personally, I think we need to treat AI as a tool, not a replacement.
DH: I don’t think AI is capable of innovating or inventing a new, never-before-seen style. It’s good at reproducing a style it’s been trained on based on real creations, but it won’t produce anything truly new.
FT: Do you think the use of AI should be better regulated in the creative field?
DH: We will need to develop systems, and there are already some, to detect abusive use of artificial intelligence. For example, to detect someone selling AI art and passing it off as original creations—this shouldn’t be allowed.
There’s also everything related to respecting copyrights. There haven’t been any major scandals yet, but some artists are already seeing their style copied by AI. Since models are trained on existing data, it’s somewhat inevitable that what will be produced will closely resemble existing works or styles.
We will need to better regulate what generative AI creators can use to train their models and define what can be protected by copyright and what is freely usable.
FT: Do you think a work created entirely by AI can be considered an original piece of art?
SR: For me, no, it’s not an original piece of art. Maybe you came up with the concept and wrote the prompt, but you didn’t actually create it yourself.
DH: I would say it depends. In a way, even we as artists learn the craft from the knowledge and styles of other people. We get inspired by them and sometimes imitate them. It’s a bit disingenuous to say a work isn’t an original creation when AI made it, but that it is when I created it 100% with my own hands, when it’s essentially the same principle.
SR: On the other hand, when I create something, I pour a bit of myself into a project, it is a thoughtful act, and it represents hours of my life. Someone who spends a few minutes creating a painting with AI—it doesn’t have the same value… personally, it even leaves me a bit indifferent. If I go to an artist’s exhibition, I analyze their work and wonder about their intention and its meaning. An AI-generated painting doesn’t have a soul, it doesn’t raise any emotion. For me, that’s what’s missing for it to be considered true original artwork.
FT: Does AI complicate or simplify copyright management in your opinion?
DH: It’s a bit of both. What AI creates isn’t protected by copyright, so we can reuse it in our creations without any issues… as long as the models are properly trained on royalty-free data, of course. On the other hand, it’s difficult to detect if a generated image could infringe on someone’s copyright. That’s why having a Stable Diffusion instance trained on our own data is useful—it generates content based on visuals that we 100% own.
SR: It’s easy to generate content that could potentially violate copyright or appropriate someone’s image. You have to be careful and make sure not to create copyrighted content by accident or, worse, create false representation or spread misinformation. That’s the kind of thing that could get an ad company into serious trouble.
FT: How do you see AI evolving in creative fields in the future?
DH: Honestly, it’s hard to predict. Just in the past year, we’ve gone from tools producing uncanny results to AI generating images of really high quality, almost indistinguishable from reality. I can’t imagine where we’ll be in a few years.
SR: Differentiating the real from the fake will become harder and harder. It will take an expertly trained eye to spot the tiny details that make us say, “Okay, that’s AI!” It’s hard to answer the question because right now AI tools already let us do pretty much everything, so apart from saying they will improve, I can’t see what more they could do.
DH: We are at the start of a revolution, and like any revolution, there are unknowns and elements that are scary. People fear that AI will take away jobs, but we thought the same thing with the arrival of radio, TV, computers, the internet…
SR: The thing I hope is that schools will continue to teach students how to do things themselves and create without AI so that they can keep developing those skills and not be completely caught off guard if they don’t have access to it. It’s a powerful tool, but it doesn’t think for you. If you don’t learn the basic principles of art, how can you objectively judge an AI’s result? You simply can’t.
FT: So, in conclusion, AI is a tool to help you achieve your goals faster, but it will never replace human creativity?
DH: Exactly. AI allows us to stop wasting time on tasks that take forever but don’t really add value. No one’s going to look at my design and say, “Have you seen the masking job on that? It’s insane!” and yet that’s probably what will have taken me the most time. No one will notice that I spent 3 hours finding THE perfect generic image for a website, but now I can generate it in 30 seconds and focus on the design, user experience, branding, etc., things that our clients and their customers will notice and that will bring value to our ad campaigns.