Episode Summary
In this episode, Alan and Bhairav strip away the hype around AI and talk about what it actually is, what it can and can’t do, and how real-world business owners – especially SMEs and startups – should think about using it.
They cover the fundamentals (in plain English, not “Sanjay-level tech”), from what AI really means, to how large and small language models work, to where the data comes from and why that matters. They also dive into practical use cases: analyzing contracts, mining your own IP, using pre-trained models, and simple tools that can genuinely save time (like automated meeting notes and CRM analysis).
Just as importantly, they tackle the hard questions: bias, hallucinations, responsibility when AI gets it wrong, job impact (especially for analysts), and whether AI can actually damage your business if you use it blindly. Throughout, they come back to the same grounding principle: risk and ROI – exactly the same lens you should apply to any other investment or tool.
Key Discussion Points
1. What is AI, really?
- AI = artificial intelligence: the long-running quest to build machines that can “think” or behave like humans.
- It’s an umbrella term that includes multiple subfields, for example:
- Machine Learning – using algorithms to identify patterns and make predictions from data (e.g. weather forecasting, predictive analytics).
- Neural Networks – models inspired (loosely) by the brain, used in many modern AI systems.
- Computer Vision – systems that interpret images and video (e.g. self-driving cars deciding whether to brake, change lanes, etc.).
- ChatGPT and similar tools are only one small subset of AI – not the whole thing.
2. Why is AI suddenly everywhere?
- Many of the underlying algorithms have existed for years (decades, in some cases).
- The big shift is infrastructure:
- Massive advances in GPU chips (e.g. Nvidia) and data center capacity.
- This lets us crunch vast amounts of data quickly and cheaply enough to make tools like ChatGPT and generative AI viable for mainstream use.
- So it’s not that AI “appeared” overnight – the hardware finally caught up with the maths.
3. Data: Do you need huge datasets to use AI in your business?
- Big models (like ChatGPT, Claude, Gemini) are trained on enormous, general-purpose datasets, mostly scraped from the open internet.
- For businesses, that can be overkill and often the wrong tool:
- Like using a sledgehammer – and the wrong kind of sledgehammer – to crack a nut.
- Your use case might not need a model trained on recipes, Reddit threads, and random blog posts.
- The more useful approach for many companies:
- Use smaller, focused language models trained on your specific data – contracts, policies, case history, sales data, etc.
- This is particularly powerful for organizations with large volumes of structured or semi-structured IP (e.g. law firms, engineering firms, consultancies).
Example: Mining Legal Contracts
- Large law firms or infrastructure/energy businesses may have tens of thousands of contracts.
- AI can:
- Extract key information: parties, fees, SLAs, penalties, obligations, dates.
- Put that into a database for searching, reporting, alerts, and monitoring.
- A practical example using Microsoft Power Platform:
- Use a pre-built document model inside Power Automate / Power Apps.
- Feed it at least ~10 representative contracts.
- Manually tag fields of interest (e.g. fee amount, payment frequency, addresses, clauses).
- The model learns probabilistically what “fees” or “addresses” look like in your documents.
- On new contracts, it auto-extracts these fields with a confidence score (e.g. 90%, 95%).
- You correct low-confidence or wrong predictions, which improves the model.
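The review loop described above (auto-accept high-confidence fields, send the rest to a human) can be sketched in a few lines. This is a toy illustration, not the Power Platform API: the field names, values, and threshold are all invented for the example.

```python
# Minimal sketch of a confidence-threshold review loop for extracted
# contract fields. Fields at or above the threshold are auto-accepted;
# the rest are queued for human correction (which, in tools like this,
# also becomes new training signal for the model).

REVIEW_THRESHOLD = 0.90  # illustrative cut-off, not a vendor default

def triage_extraction(fields: dict) -> tuple:
    """Split extracted fields into auto-accepted and needs-review buckets."""
    accepted, needs_review = {}, {}
    for name, (value, confidence) in fields.items():
        if confidence >= REVIEW_THRESHOLD:
            accepted[name] = value
        else:
            needs_review[name] = (value, confidence)
    return accepted, needs_review

# Invented example output from a document-extraction model:
extracted = {
    "fee_amount":        ("£12,500", 0.97),
    "payment_frequency": ("quarterly", 0.95),
    "notice_address":    ("221B Baker St", 0.72),  # low confidence -> human check
}

accepted, needs_review = triage_extraction(extracted)
```

The point of the threshold is exactly the workflow above: the model does the bulk of the work, and humans only touch the uncertain cases.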
4. How startups and small businesses can use AI with limited data
- If you don’t have years of proprietary data yet, you still have options:
- Pre-trained models
- Cloud providers (Azure, AWS, Google Cloud) and open-source projects (e.g. Meta’s Llama) provide models already trained on:
- Images (e.g. four‑legged animals),
- General language,
- Various domains.
- You can then fine-tune these models on your smaller, specific dataset (e.g. distinguishing Friesian cows from other animals) instead of starting from scratch.
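Fine-tuning can be sketched with a deliberately tiny toy: start from "pre-trained" weights for a linear classifier and take a few gradient steps on a small, specific dataset instead of training from scratch. Everything here (features, labels, numbers) is invented purely to show the idea.

```python
import math

# "Pre-trained" starting point: generic weights we did not have to learn
# ourselves (invented numbers standing in for a downloaded model).
weights = [0.8, 0.1]

def predict(x):
    """Logistic score: probability that x is the thing we care about."""
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-z))

# Small, specific dataset: two made-up features per example, label 1 for
# "Friesian cow", 0 otherwise. Far too small to train from scratch, but
# enough to nudge pre-trained weights in the right direction.
data = [([1.0, 0.9], 1), ([1.0, 0.1], 0), ([0.9, 0.8], 1), ([0.2, 0.1], 0)]

LEARNING_RATE = 0.5
for _ in range(200):              # a few passes, not a full retraining run
    for x, y in data:
        error = predict(x) - y
        for i in range(len(weights)):
            weights[i] -= LEARNING_RATE * error * x[i]
```

After the loop, the second feature (which actually separates the classes in this toy data) carries most of the weight; that reweighting of an existing model, rather than learning everything from zero, is the essence of fine-tuning.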
- RAG (Retrieval-Augmented Generation)
- Use an existing model without fully retraining it.
- The model is good at understanding language and general semantics.
- You connect it to your own documents or knowledge base as a reference source.
- It doesn’t “absorb” your data into its core weights; it consults your data when answering questions.
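The RAG pattern above can be shown as a toy: the model's weights never change; instead, the most relevant snippet from your own documents is retrieved and placed into the prompt. The keyword-overlap scoring below stands in for a real embedding search, and the final call to a model is left out because any provider's API would do.

```python
# Toy RAG sketch: retrieve the most relevant document for a question,
# then build a prompt that asks the model to answer from that context.
# Keyword overlap is a crude stand-in for semantic/embedding search.

def retrieve(question: str, documents: list) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str, documents: list) -> str:
    context = retrieve(question, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Invented company knowledge base:
docs = [
    "Our standard notice period for employment contracts is 30 days.",
    "Invoices are payable within 14 days of receipt.",
]
prompt = build_prompt("What is the notice period?", docs)
# `prompt` now carries the notice-period document as grounding context;
# in a real system this string would be sent to an LLM.
```

Note what did *not* happen: no retraining, no change to the model. Your data stays in your documents and is consulted at question time.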
- General-purpose tools
- For very early stage startups with little data:
- You can still use ChatGPT / Gemini / Claude for drafting, research, basic analysis, and brainstorming.
- Over time, as you accumulate data, you can move into more tailored and private solutions.
5. Bias and hallucinations: Is AI unbiased?
- Traditional software: follows explicit rules, generally does exactly what it’s told.
- AI (especially large language and vision models): works on probabilities, not logic rules.
- It predicts the most likely next word, token, or label given its training.
- Bias arises from training data, not from “opinions” in the model:
- Early image models produced mostly white faces because they were trained mostly on images of white people.
- Most of the internet training data is in English, so non-English outputs lag unless separate models are trained in those languages.
- So yes, AI outputs can be biased.
- That doesn’t mean the system is “thinking” in a human sense; it’s just reflecting the skew in its data.
6. Does AI think like a human?
- No.
- It does not think or understand like we do.
- It works on probabilistic pattern-matching:
- Convert your input into numbers (tokens).
- Search patterns based on prior training.
- Output what is most likely to be a coherent answer.
- That’s why:
- The same question asked twice can generate slightly different answers.
- It can produce very confident nonsense (“hallucinations”).
Bhairav’s summary in plain language:
AI doesn’t think. It looks at information, weighs up what it has “seen” before, and gives the answer that is most likely to be right based on that.
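The mechanism described above (convert input to tokens, match against learned patterns, emit the most likely continuation) can be caricatured as a toy sampler. The probability table is hand-written and purely illustrative; the key point it demonstrates is that because output is *sampled* from probabilities, the same prompt can produce different answers on different runs.

```python
import random

# Toy "language model": a hand-written table mapping a prompt to possible
# next words with probabilities. Real models learn billions of such
# patterns from data; the sampling step at the end is the same idea.

NEXT_WORD = {
    "the cat": [("sat", 0.6), ("slept", 0.3), ("ran", 0.1)],
}

def next_word(prompt: str) -> str:
    """Sample the next word according to the learned probabilities."""
    words, weights = zip(*NEXT_WORD[prompt])
    return random.choices(words, weights=weights, k=1)[0]

# Two calls with the same prompt may disagree -- no "thinking" involved,
# just weighted dice rolls over what was seen in training:
a = next_word("the cat")
b = next_word("the cat")
```

Nothing in that table is true or false in the model's "mind"; a confident-sounding but wrong continuation (a hallucination) is just a high-probability pattern that happens not to match reality.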
7. How much coding skill do you need to adopt AI?
- Depends on what you’re trying to do. Start with the problem, not the tech.
Low / no-code uses:
- Subscribe to an AI tool (ChatGPT, Gemini, Perplexity, etc.).
- Use built-in platform features:
- Meeting transcription and summarization in Zoom, Teams, etc.
- AI assistants embedded in CRMs, office suites, email clients.
- Many document analysis tools now offer drag‑and‑drop interfaces.
Higher complexity uses:
- If you want to build a core product around AI or highly bespoke internal tooling:
- You’ll likely need developers and data/ML expertise.
- You need to decide whether to build (custom models, infrastructure) or buy (integrate existing services and platforms).
The key question:
“What problem am I solving, and is AI actually the best way to solve it?”
8. Practical, everyday business use cases
Examples discussed:
- Meeting notes
- Zoom, Teams and similar now auto-transcribe and summarize meetings.
- No more scrambling to write minutes; you get structured actions and summaries.
- CRM data analysis
- Export 9 years of customer data from your CRM.
- Feed it to an AI tool and ask:
- Who are the growers vs. the fallers?
- What patterns distinguish customers who grow vs. those who churn?
- Previously, you’d need teams of analysts and lots of time to even approximate this.
These are not massive, billion‑dollar AI projects – they’re simple, practical uses that can deliver serious insight and productivity wins for SMEs.
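The "growers vs. fallers" question above can be sketched with a trivial classifier over exported revenue histories. The data, customer names, and the first-year-vs-last-year rule are all invented for illustration; a real analysis would be richer, but the shape of the task is the same.

```python
# Minimal sketch: label each customer a "grower" or "faller" by comparing
# revenue at the start and end of the exported period. Deliberately
# simplistic -- flat customers land in "faller" here.

def classify_customers(history: dict) -> dict:
    """Map customer name -> 'grower' or 'faller' from yearly revenue."""
    labels = {}
    for customer, yearly_revenue in history.items():
        labels[customer] = (
            "grower" if yearly_revenue[-1] > yearly_revenue[0] else "faller"
        )
    return labels

# Invented CRM export (revenue per year):
crm_export = {
    "Acme Ltd":   [10_000, 14_000, 22_000],   # growing
    "Globex plc": [30_000, 18_000, 9_000],    # shrinking
}

labels = classify_customers(crm_export)
```

The value of an AI tool here isn't this arithmetic; it's asking the follow-up in plain language ("what do the growers have in common?") without a team of analysts.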
9. Why big players are spending billions on AI
- Meta, Google, Microsoft and others aren’t throwing billions at AI just to automate meeting notes.
- They want:
- Hyper-personalized ads, content and recommendations.
- On‑the‑fly generation of creative content (ads, copy, imagery) tailored “just for you” at scale.
- That’s where their ROI lies: capturing more marketing spend and attention by being much more targeted.
10. Impact on jobs – especially analysts
- Many analyst-type tasks are clearly under pressure:
- Data cleaning, basic reporting, first-pass analysis, proofreading, etc.
- However:
- We’ve already lived this with spellcheckers and grammar tools; those jobs changed rather than vanished overnight.
- AI can free analysts to focus on deeper thinking and interpretation, not the grunt work.
- Likely outcomes:
- Fewer analysts per team, but those who remain do higher value work.
- Career paths may get trickier if the “junior grunt work” step disappears; organizations need to think about how juniors get experience.
11. What if AI makes a mistake? Who’s responsible?
Two key issues:
- Will you even notice?
- Many people are using AI in a very lazy way:
- Copy/pasting from ChatGPT into emails, documents, and code without proper review.
- You must build checks and balances into your process.
- Legal / responsibility angle
- Model providers (OpenAI, Anthropic, Google, etc.) clearly state:
- The tools may hallucinate.
- The outputs may be wrong.
- They position AI as a tool, not a professional advisor.
- In practice, it’s on you, the user/business, to:
- Verify outputs.
- Take responsibility for how you apply them.
Analogy:
- If your calculator gave “1 + 1 = 11”, you’d double-check your input.
- With AI, you need the same skepticism: never assume infallibility.
12. Is AI safe for my business – or could it ruin everything?
- The short answer: yes, it can absolutely cause damage if misused.
- Main risks:
- Blind trust in outputs; not reviewing or validating them.
- Building entire products or infrastructures on shaky AI-generated code without proper engineering.
- Assuming “AI did it, therefore it’s correct.”
Example from the startup world:
- Founders using AI coding assistants to build MVPs and beyond.
- Claiming they’ve “learned to code” when in reality they’ve learned to prompt an LLM.
- Analogy: saying you know how to build a house because you hammered some planks together.
- It looks like a house… until the first strong wind.
The real danger:
- Not that AI is “evil”, but that people drop their usual due diligence because they’re dazzled by the label “AI”.
13. A sensible stance for business owners
Bhairav’s pragmatic view:
- Be aware of AI. You can’t ignore it.
- Move carefully and deliberately:
- Understand where it genuinely helps.
- Test, verify, iterate.
- Don’t fall for magical thinking:
- Spending £1,000 on some AI project will not suddenly turn you into a global powerhouse.
- As with any other decision:
- Risk and ROI still rule the day.
- Ask: What’s the realistic upside? What’s the downside if it goes wrong? What controls can we put in place?
Memorable Quotes
“What we need is less faith, more reality. Less snake oil, more solutions.”
“AI is more a concept than a thing – the idea of machines that behave like humans – and then lots of different tools trying to get us there.”
“You don’t need a model trained on Middle Eastern recipes to tell you if your employment contract is any good.”
“ChatGPT is just one small subset of AI, not the whole story.”
“People think AI is flawless for some reason. It isn’t – and they tell you that in the disclaimers.”
“It’s like saying you know how to build a house because you stuck a few planks together. It looks like a house, until the wind blows.”
“If you really believe that spending a thousand pounds on AI is going to transform your business into a global powerhouse, you’re probably a little bit mad.”
“We come back, as always, to two words: risk and ROI.”