10 Ways You Can Royally Screw Up With AI
- Toby Martin
- Feb 25
- 6 min read
(and how to stop that from happening)
AI is brilliant. AI is also a liability if you don't know what you're doing.
And right now, a lot of agents are using it like a microwave - pressing buttons, hoping for quick results, and occasionally setting fire to something important.
The good news is that most of the traps are easy to avoid once you know they exist. The bad news is that most people don't know they exist.
Here are 10 ways AI can cause reputational damage to your business, and exactly what to do about each one.
1. It makes things up (and sounds completely confident while doing it)
AI hallucinations aren't glitches. They're a feature of how large language models work - they predict the next most plausible word, not the next most true one. Ask an AI to reference a local statistic or cite a specific law, and it will happily invent one with the authority of a university professor.
This is particularly dangerous in property. Local market data, legal compliance wording, tenancy legislation... get any of it wrong and you'll turn yourself into a professional liability.
Your action: Always fact-check. Carry out Deep Research when looking for reliable data. And check out NotebookLM for exploring dense legislation.
2. It carries bias you can't always see
AI learns from the internet and historical sources, which (as you may have heard) are not exactly a perfectly balanced representation of humanity. The result is that AI can reflect biases baked into its training data: gender assumptions, cultural stereotypes, language that skews towards certain demographics.
In estate agency, this could show up in property descriptions, tenant communications, or marketing copy. Not intentionally, but there all the same.
Your action: Read your AI outputs with fresh eyes. Would this language alienate anyone? Does it make assumptions? Better still, ask a colleague to review anything client-facing with this lens. It takes ten seconds and could save a lot of awkwardness.
3. It sounds like every other agency
Your AI has been trained on the same content as everyone else's AI. So if you ask it to write a property description or a social media post without giving it your voice, your values, or your personality, it will produce something perfectly acceptable, completely unmemorable, and virtually identical to what your competitor posted twenty minutes ago.
Average is invisible. And invisible doesn't win instructions.
Your action: Build a simple tone of voice document - a page or two describing how your agency sounds, what words you use, what you never say, and a few examples of writing you love. Feed it to AI at the start of every content session.
4. You use it as a replacement instead of an assistant
AI doesn't know your customers. It doesn't know the story behind that Victorian terrace with the slightly weird extension. It doesn't know that this particular landlord needs handling with care. It has no local knowledge, no relationship context, and no professional instinct.
When you hand the whole job to AI, you strip out the very thing that makes your agency worth choosing over a faceless online alternative.
Your action: Think of AI as a junior assistant, not the company CEO. Ten minutes spent refining an AI draft beats a generic AI output every time.
5. You're pasting sensitive data into public tools
When you copy a client's name, address, financial circumstances, or personal details into a free AI tool, you're potentially putting that data somewhere you have no control over. Many free tools use your inputs to train future models, and some store your conversations.
Under GDPR, this is not a grey area. It's a very clear one.
Your action: Check the data handling policies of every AI tool your team uses, and cross-reference them with your own data protection policy. If you're using a business plan (like ChatGPT Team or Claude for Business), your data is typically protected. If you're using a free personal account, keep it to non-identifiable, generic content only. No names, addresses, or case specifics.
6. You're using the wrong tool for the job
Not all AI platforms are created equal, and using the wrong one is a bit like using a butter knife to put up a shelf - technically possible, but not ideal. ChatGPT, Claude, Gemini, Perplexity, and the dozens of specialist tools each have genuine strengths.
Some are better at long-form writing, whilst others are better at research. Some have live web access, others don't. Some are better at following complex instructions, others are better at quick, conversational tasks.
Your action: Do a quick audit of the AI tools your team uses and what they use them for. Match the tool to the task. For content and writing, try Claude or ChatGPT. For research and up-to-date information, Perplexity is excellent. The right tool in the right hands is a different experience entirely.
7. Rubbish in, rubbish out
The quality of your AI output is almost entirely determined by the quality of your prompt. A vague instruction produces vague content. "Write me a post about this property" will get you something generic. "Write me a warm, conversational 150-word Instagram caption for a three-bed semi in [town], emphasising the south-facing garden and the school catchment, aimed at young families" will get you something usable.
Prompting is a skill. And like all skills, it improves with practice and intent.
Your action: Save your best prompts. Create a shared document of the prompts that have worked well for your most common tasks (property descriptions, social posts, vendor letters, market updates). Build your own prompt library and watch your output quality go up across the board.
8. You post it without reading it
AI is fast. That's the appeal. But fast becomes a problem when it bypasses the basic human sanity check that every piece of outgoing communication deserves.
AI content can be subtly wrong, slightly off-brand, weirdly formal, or just... a bit odd. And once it's out, it's out.
Your action: Make "always read before posting" a non-negotiable policy. Even a 60-second check will catch the obvious stuff. For anything going to clients, give it a proper once-over.
9. You use it for everything
AI is a powerful tool. So is a pressure washer, but you wouldn't use that to clean your glasses.
Some things are better done by humans... A difficult conversation with a vendor who's just had an offer fall through. A personal handwritten note to a long-standing landlord. A property description that requires nuance, or caveats. These moments are where relationships are built and won.
Your action: Don't just use AI on an ad hoc basis, as you'll get into bad habits. Consider what AI is genuinely useful for in your business - and what it isn't. Create a simple "yes / no / sometimes" list. It clarifies thinking, sets expectations, and stops the creeping habit of reaching for AI by default.
10. You ignore the legal and copyright grey areas
Who owns AI-generated content? What are your obligations if AI produces something that inadvertently mirrors copyrighted material? What are the rules around using AI-generated images of properties or people? These questions don't have fully settled answers yet; the law is playing catch-up with the technology.
That doesn't mean you can ignore it. It means you need to stay informed and make sensible decisions in the meantime.
Your action: If you're using AI-generated images commercially, understand where they came from and what licence applies. And if in doubt, seek advice - it's a much cheaper conversation now than after something goes wrong.
One Thing That Ties All of This Together
If this list has left you feeling slightly overwhelmed, here's the thing to hold onto: most of these risks aren't complicated to manage. They just require some thought.
Which is exactly why every agency using AI - that's most of you, whether you've formalised it or not - needs an AI Usage Policy.
Not a 40-page legal document. A clear, practical guide that tells your team: what tools you're approved to use, what data can and can't go into them, what always needs human review, and what AI simply isn't for in your business.
Pair that policy with some proper team training, not a five-minute explainer, but actual hands-on time learning how to use these tools well, and you've got something genuinely valuable. A team that uses AI confidently, consistently, and safely - that's a competitive advantage most agencies haven't built yet.
AI isn't going away. But the agencies that treat it like a magic button will keep getting burnt. The ones who invest a little time in doing it properly will be the ones using it to pull ahead.
See you next time,
Toby