My first encounter with an AI hallucination

Megan Dennison

With the rise of AI tools, many of us are exploring how these innovations can support our daily tasks, particularly the more time-consuming ones. In PR, where crafting content and conducting research are central to our work but can be labour-intensive, AI offers a promising way to work more efficiently.

For example, when working on a PR campaign for a new product launch, instead of spending hours manually gathering information on current market trends or competitor launches, we can lean on ChatGPT to quickly research and summarise what we need to know. That way, we have more time to spend on strategy and implementation.

Intrigued by its potential, I decided to test AI’s capabilities with ChatGPT to find a supporting statistic for a piece I was writing. However, this simple task led to an unexpected experience: my first encounter with an AI hallucination.

In AI, a hallucination occurs when the system generates outputs that sound convincing but are either factually incorrect or irrelevant to the context. What began as an experiment to streamline my work turned into a valuable lesson about the limitations of AI.

The day AI “guessed”

When I asked ChatGPT for a specific statistic, it responded promptly with a percentage that seemed reasonable at first glance. However, it provided no source to verify the claim.

When I pressed it for the source, ChatGPT replied with a peculiar response, starting with, “your attention to detail is admirable,” as if acknowledging that I had caught it fabricating. It went on to admit that the figure was not based on factual data but was an assumption derived from general trends. Essentially, the AI had simply guessed.

This experience demonstrated a critical reality – while AI can be a powerful assistant, it is not infallible. Tools like ChatGPT do not inherently know the truth. They generate responses based on patterns in their training data. My encounter was a clear reminder that trusting AI blindly, without verification, can lead to the dissemination of misinformation.

The importance of effective prompts

One takeaway from this experience is the significant role of prompts in determining AI’s responses.

The way you frame a question or task can greatly influence the quality of the output. Vague or broad prompts can lead to inaccurate or incomplete answers, whilst clear, specific prompts help guide the AI toward more reliable results.

For instance, instead of asking ChatGPT, “What is the percentage of people who prefer X?” a better prompt might be, “Can you find a recent, reliable study or source that provides statistics showing people’s preferences for X?” This change encourages the AI tool to prioritise evidence-based responses over speculative ones.

Additionally, breaking down complex requests into smaller, more focused questions can improve accuracy. Rather than asking for a broad overview, you might ask, “What reputable sources discuss trends in X?” and then follow up with questions about specific data points. By guiding AI thoughtfully, users can reduce the likelihood of hallucinations and maximise its usefulness.

Balancing AI’s strengths and weaknesses 

AI tools are undeniably impressive and can save time, but they are not a substitute for human judgement. 

My experience reminded me of the importance of using AI as a collaborator rather than an authority. While AI can assist with brainstorming, research, or content drafting, the responsibility for accuracy and credibility ultimately rests with us.

To use AI responsibly, it’s crucial to verify its outputs, especially when dealing with data or claims that require factual accuracy. Cross-checking AI-generated information against trusted sources is not just a best practice, it’s a necessity. By treating AI as a helper rather than a definitive source, we can harness its potential without compromising on quality or ethics.

As AI continues to evolve, the responsibility lies with us to use it ethically and intelligently. Whether you’re in PR, tech, or any field, embracing AI with a balanced approach ensures that it remains a valuable asset rather than a source of confusion. 
