Using and communicating AI – Think positive, but remain self-critical and realistic
I was recently inspired by the blog article AI generated content to take a closer look at the much-discussed topic of AI from a communication perspective. With the recent breakthroughs, the topic has gained a whole new level of traction – not least for us communication professionals. Whether it’s translations, summaries, image generation or subtitling for films, there is hardly a task for which an AI tool doesn’t exist, promising to simplify and speed up the work. However, it is also clear that, apart from pure automation, none of this currently results in finished products. People always have to go back over the output: making changes, fine-tuning, adapting and expanding. And, particularly important: verifying. AI tools are not enough for differentiation, for creative moments of surprise, for carving out stories and addressing sophisticated target groups. But AI tools can save us time and help us achieve better results faster. And that’s exactly what we’re aiming for. With human (not AI) understanding.

Realistic, self-critical presentation

The use of AI tools should be self-critical, and so should the communication around AI topics. Perception of AI constantly swings between underestimation and over-hype – and we communicators often fall into the over-hyping trap. It is therefore important to present the positive aspects of AI solutions realistically.

The linchpin: trust can only be created through honest communication of both the added value and the risks of AI use cases. This includes quality features such as data protection and data transparency. The sustainability of AI solutions – and their sustainable use – also often receives too little attention.

Don’t start communicating when everything is finished!

In a recent conversation, a German journalist told me: “What I think is missing in the communication around AI is a critical performance analysis of the respective AI development. In other words, where is the AI application worse than the previous application and in danger of becoming a real showstopper instead of a game changer? What lessons can be learned from these experiences?”

Definitely true. In the most common case, a company only starts communicating with the media once everything is finished from the company’s side. “But a press release about a finished AI tool initially says nothing at all”, he pointed out. “What I miss is the self-critical assessment. It’s easier for me if companies allow a look under the hood at an early stage and involve the media in the design process.” Why? Because this gives media professionals the opportunity to try out, in cooperation with the company or start-up, how good the respective AI solution actually is. They can then talk about possible applications, benefits and added value – and about the relevance of the solution for the market and the respective target groups.

An approach that is worth taking to heart. After all, it’s about creating trust and transparency – and that is achieved through dialog between the parties.