What Are Buyer Intent Signals? A UK B2B Guide
Seven categories of buyer intent signals UK B2B businesses should track — hiring, funding, complaints, tech, regulatory, filings, news.
“Intent” in B2B sales is not a mystery. It is a trail. Every company thinking about buying something leaves public traces — hiring announcements, funding news, complaints about their current tool, stack changes, filings. The job of a modern lead gen tool is to read those traces and tell you, quietly each week, which companies are likely ready to buy what you sell.
This guide covers the seven signal categories that matter for UK B2B businesses in 2026, how to rank them, how to combine them, and what to avoid.
Why intent beats databases
Static databases — Apollo, Lusha, ZoomInfo — tell you who a company is. Intent signals tell you what they are about to do. Both are useful; one is urgent.
A company that has been on Apollo’s “UK SaaS, 10–50 FTE” list for five years is not more ready to buy today than yesterday. A company that posted its first SDR role last Tuesday, raised a Series A three weeks ago, and had its founder complain about Apollo pricing on LinkedIn on Monday — that is a different signal entirely.
The maths is simple: if you have finite outreach capacity (and every founder does), spend it on companies showing current intent, not on a list of theoretical matches.
The seven categories
1. Hiring
What it is: Public job postings on Adzuna, Reed, Ashby, Greenhouse, and LinkedIn.
Why it matters: A company hiring an SDR is scaling sales. A company hiring its first growth lead is 6–12 weeks from a marketing tools decision. A company hiring “ops manager” after a fundraise is consolidating its stack.
How to read it: Role titles and levels predict different tool purchases. Watch for:
- First-in-role titles (“first SDR”, “first growth lead”, “first ops”) — strongest signal
- Roles suggesting budget unlock (“VP revenue”, “head of marketing after a C-level hire”)
- Simultaneous hires in linked functions (a sales + marketing hire within two weeks)
Noise to filter: Agencies hiring for clients (not the end buyer). Re-posts. Unpaid internships (rarely predict buying).
2. Funding
What it is: Rounds announced via press, Crunchbase, investor blogs, Companies House share filings, Innovate UK grant awards, Tech Nation announcements.
Why it matters: Fresh capital unlocks 12–18 months of budget. The order of purchases is predictable: hires first, then tools, then agency spend, then new initiatives.
How to read it: Seed rounds open early-stage tooling (product analytics, CRM, email). Series A opens sales stack (SDR tools, lead gen). Series B opens enterprise replacement (moving off scrappy tools into mature ones). IPO and buyout trigger integration projects.
Noise to filter: Follow-on rounds (less fresh budget than initial). Bridge rounds (typically defensive, not expansion). Government grants earmarked for research (rarely tool-spend).
3. Complaints
What it is: Public LinkedIn, Reddit, G2, Capterra posts complaining about tool cost, churn, feature gaps, or support.
Why it matters: A company publicly complaining about its current tool is in buying-mode within weeks. A senior-title post carries more weight than a junior one. A complaint post with replies from other users in the same space amplifies the signal.
How to read it: Focus on named-competitor complaints (“Apollo is too expensive”, “Lusha coverage for UK is poor”). Track senior titles. Cross-reference to company size.
Noise to filter: Anonymous posts. Tool reviews not tied to a company. Complaints about unrelated products.
4. Tech-stack changes
What it is: Front-end stack changes detected by Wappalyzer and BuiltWith. Publicly disclosed SaaS cancellations. CMS migrations.
Why it matters: A company removing one tool and adding another is in procurement mode for adjacent tools. A CMS migration predicts marketing-tool churn (analytics, A/B testing, email).
How to read it: Pair stack changes with hiring signals to confirm direction. A stack change without a relevant hire may be contractor-driven and not buyer-driven.
Noise to filter: Tag manager swaps (trivial). A/B testing tools (often dev-led, not marketing-led).
5. Filings (Companies House)
What it is: UK-registered companies must file accounts, director changes, SIC code changes, and insolvency notices with Companies House.
Why it matters: SIC code changes predict business-model shifts. Director changes predict strategy changes. Insolvency filings obviously kill the deal but also predict downstream effects on suppliers and customers.
How to read it: Combine with revenue band. A £5M-turnover company adding a new SIC code for consulting is likely expanding. Micro-entity filing thresholds were raised by 2024 amendments to the Companies Act 2006, in force from April 2025 — watch for band-crossing events that unlock new buying capacity.
Noise to filter: Dormant company filings. Administrative-only changes.
6. Regulatory
What it is: FCA register updates, Law Society alerts, planning applications, construction tenders.
Why it matters: Heavily regulated businesses (finance, legal, construction) have procurement cadences tied to filings. A new FCA authorisation predicts compliance tooling purchase. A planning application predicts construction SaaS spend.
How to read it: Vertical-specific. Most relevant for customers selling into regulated markets.
Noise to filter: Bulk filings (often administrative updates without spend implications).
7. News and RSS
What it is: Publisher RSS feeds, trade journals, sector newsletters, BBC, FT, TechCrunch UK.
Why it matters: Press coverage predicts cross-functional attention (comms, marketing, senior management). A company in the FT this week is more likely to get SDR coverage next week — generic signal, useful as a modifier.
How to read it: Cross-reference with other signals. Press-alone is a weak signal; press + hiring is a strong one.
Noise to filter: Syndicated press releases. Paid placements. Wire-service duplicate coverage.
Seven categories in practice: a worked example
Abstract categories become useful when you see them combined on a single real-world company. Imagine a B2B SaaS selling revenue-ops software to UK fintechs. In one week of monitoring, a single target — call it “Company X” — lights up as follows:
- Hiring (weight 3x): Posted a “Head of Revenue Operations” role on Reed on Tuesday. First-in-role title. High-signal.
- Funding (weight 2x): Closed a £6M Series A two weeks earlier, per TechCrunch UK.
- Complaints (weight 4x): CTO posted on LinkedIn on Monday about the cost of their current CRM-adjacent tooling, explicitly naming two competitors.
- Tech-stack (weight 2x): Wappalyzer detected removal of a specific analytics tool the previous week.
- Filings (weight 1x): New director added at Companies House 10 days ago, typical of a post-fundraise board expansion.
- Regulatory (weight 1x): No signal.
- News/RSS (weight 0.5x): Brief press mention in a fintech newsletter.
Company X scores approximately 92/100 under this weighting. No cold-email tool, no static database, no LinkedIn scrape would have produced this picture in one pass. The combined signal says: budget is unlocked, the decision-maker is publicly unhappy, and a revenue-ops role will own the replacement decision. The outreach write-up takes five minutes — every paragraph references a fact the buyer has themselves put into the public record.
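That headline number can be reconstructed under a simple assumption. A minimal Python sketch, assuming the score is just the fired share of total category weight — an illustration of how the weights combine, not the production model:

```python
# Weights are taken from the worked example above; the normalisation
# (fired weight / total weight * 100) is an assumption for illustration.
WEIGHTS = {
    "hiring": 3.0, "funding": 2.0, "complaints": 4.0,
    "tech_stack": 2.0, "filings": 1.0, "regulatory": 1.0, "news": 0.5,
}

def coverage_score(fired: set) -> float:
    """Weight of the categories that fired, as a share of total weight, on 0-100."""
    hit = sum(w for cat, w in WEIGHTS.items() if cat in fired)
    return round(100 * hit / sum(WEIGHTS.values()), 1)

# Company X fired every category except regulatory: 12.5 of 13.5 -> 92.6.
score = coverage_score({"hiring", "funding", "complaints", "tech_stack", "filings", "news"})
```

Under that reading, the "approximately 92/100" falls straight out of the weights.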
This is the shape of modern UK B2B lead gen. One clear, ranked, explainable company, not a thousand contacts.
How to rank signals
Not all signals are equal. A rough ordering:
- Fresh senior-title, named-competitor complaints > first-in-role hiring > funding announcements
- Multiple signals on the same company > any single signal
- Recent signals (<14 days) > stale signals (>60 days)
- Signals in your target vertical > generic
The compounding rule is key. A company with a single hiring signal is one of 10,000. A company with a hiring signal plus a tech-stack change plus a complaint post is one of 20.
How to combine signals
A practical approach:
- Weight each category for your ICP (e.g. if you sell sales tools, hiring = 3x; complaints = 4x; filings = 1x)
- Time-decay signals exponentially (fresh matters; 60-day-old signals are noise)
- Require at least two categories for a lead to qualify (filters out accidental matches)
- Re-tune weights based on outcomes (if the “complaints” signal consistently produces high-conversion leads for you, raise its weight)
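The rules above fit in a few lines of Python. A sketch with illustrative weights for a sales-tools ICP — the 30-day half-life and the 60-day cutoff are assumptions to tune against your own outcomes:

```python
# Per-category weights, exponential time decay, and a two-category minimum.
WEIGHTS = {"hiring": 3.0, "complaints": 4.0, "funding": 2.0, "filings": 1.0}
HALF_LIFE_DAYS = 30.0
MAX_AGE_DAYS = 60  # anything older is treated as noise

def decayed_weight(category: str, age_days: float) -> float:
    """A signal loses half its weight every HALF_LIFE_DAYS."""
    return WEIGHTS.get(category, 0.0) * 0.5 ** (age_days / HALF_LIFE_DAYS)

def qualifies(signals: list) -> bool:
    """Require at least two distinct live categories before a lead counts."""
    live = {cat for cat, age in signals if age <= MAX_AGE_DAYS}
    return len(live) >= 2

signals = [("hiring", 3), ("complaints", 7), ("filings", 90)]
score = sum(decayed_weight(c, a) for c, a in signals if a <= MAX_AGE_DAYS)
```

Here the 90-day-old filing contributes nothing, but the lead still qualifies on the fresh hiring-plus-complaint pair.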
Scoring in practice: a simple, defensible model
You do not need a machine-learning team to weight signals sensibly. A 0–100 score with five inputs works well as a starting point:
- Recency (0–30 points): Linear from 30 points at 0 days old to 0 points at 60 days old. Anything older is archaeology.
- Category strength (0–25 points): Complaints and fresh hiring at the top. Funding mid-tier. News at the bottom.
- Compounding (0–25 points): 0 points for one signal, 15 for two, 25 for three or more in the last 30 days.
- Vertical match (0–10 points): Exact SIC or industry match gets full; adjacent match gets half; generic gets 0.
- Revenue band fit (0–10 points): Match your tier (e.g. £1M–£10M) for full, one band off for half, two bands off for 0.
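The five inputs above translate directly into code. A sketch — the function signature and the caller-supplied category points are illustrative assumptions; the point bands mirror the text:

```python
def intent_score(age_days: float, category_pts: float,
                 signals_30d: int, vertical: str, band_gap: int) -> float:
    recency = max(0.0, 30 * (1 - age_days / 60))          # 30 pts at day 0, 0 at day 60
    category = min(category_pts, 25)                       # 0-25, per the tiering above
    compounding = {0: 0, 1: 0, 2: 15}.get(signals_30d, 25) # one: 0, two: 15, three+: 25
    vertical_pts = {"exact": 10, "adjacent": 5}.get(vertical, 0)
    band_pts = {0: 10, 1: 5}.get(band_gap, 0)              # revenue bands away from yours
    return recency + category + compounding + vertical_pts + band_pts

# A 5-day-old senior complaint, three signals in 30 days, exact vertical
# and revenue-band match: 27.5 + 25 + 25 + 10 + 10 = 97.5.
score = intent_score(5, 25, 3, "exact", 0)
```

That 97.5 clears the "drop everything" bar; the same lead with a single stale signal would not.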
A score above 70 deserves your attention this week. A score above 85 is a “drop everything, research, outreach today” lead. Most teams we see ignore everything under 60, which is fine at early stages but worth revisiting once your highest tier is saturated.
The model is less important than the discipline. Writing it down, applying it every week, and re-weighting based on outcomes is the win. The common failure mode is running an unwritten model that drifts with the operator’s mood.
Signal decay: what fades when
Intent does not decay linearly. Each category has its own half-life, and knowing this lets you prioritise correctly.
- Hiring: Half-life roughly 21 days. Roles fill or get pulled; budget decision follows by week six.
- Funding: Half-life roughly 45 days. Tooling spend peaks at weeks four to eight after close.
- Complaints: Half-life roughly 14 days. The buyer moves fast when publicly frustrated.
- Tech-stack: Half-life roughly 30 days. Adjacent buying decisions cluster in the same quarter.
- Filings: Half-life roughly 60 days for director changes, longer for SIC-code changes.
- Regulatory: Half-life varies widely — FCA authorisations six months, planning applications twelve.
- News: Half-life roughly 7 days. Press attention is fleeting.
The operational lesson: build your weekly digest around the shortest half-life signals (complaints, hiring, news) and use the longer-lived signals (funding, filings) as context, not headlines.
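The half-lives above can be applied mechanically. A sketch — regulatory is omitted because its half-life varies too widely to pin to one number:

```python
# Per-category half-lives from the list above, in days.
HALF_LIFE = {
    "hiring": 21, "funding": 45, "complaints": 14,
    "tech_stack": 30, "filings": 60, "news": 7,
}

def remaining(category: str, age_days: float) -> float:
    """Fraction of a signal's original strength left after age_days."""
    return 0.5 ** (age_days / HALF_LIFE[category])

# After two weeks a complaint is at exactly half strength, while a
# funding signal has barely faded:
two_week_complaint = remaining("complaints", 14)  # 0.5
two_week_funding = remaining("funding", 14)       # ~0.81
```

The gap between those two numbers is the whole argument for leading the weekly digest with complaints and treating funding as context.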
What to avoid
- Single-signal lists. “All companies hiring SDRs” is 1,200 companies per week. Useless at that volume.
- Outdated signals. Anything older than 60 days is archaeology, not intent.
- Vanity metrics. “Company visited our website” is not intent if you never identified the buyer.
- Scraping LinkedIn profiles. The PECR and platform terms-of-service risks outweigh the value.
- Paid intent data from US-first vendors. Their UK coverage is patchy.
Building a minimum viable intent pipeline in-house
For a team that wants to start without a tool, the cheapest viable pipeline is three steps:
- Ingest layer. Google Alerts for ten named competitors, ten target industries, and five complaint keywords. RSS feeds from three trade publications per vertical. A Companies House watchlist on fifty target companies (free). LinkedIn follows on the same fifty companies and their key people. Time-cost: one hour a day to scan.
- Scoring layer. A spreadsheet. One column per signal, one row per company. Recency column. Compounding column. Manual score. Once a week, sort by score, take the top ten. Time-cost: two hours a week.
- Outreach layer. For each of the top ten, a one-paragraph note referencing the specific signal. Not a template. Time-cost: twenty minutes a lead, two hours a week.
Total weekly cost: roughly nine hours (five weekday hours of scanning plus four hours of scoring and outreach) and nothing else. Output: ten researched, signal-backed leads — more than most mid-market sales teams surface with tooling. The limiter is time, not insight, which is why most teams eventually adopt a tool to automate steps one and two.
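The scoring layer does not even need a spreadsheet. A Python sketch, assuming illustrative column names where each signal column holds the ISO date it last fired (rows would come from e.g. `csv.DictReader`), with score defined as the count of categories live in the last 60 days:

```python
from datetime import date

SIGNAL_COLUMNS = ["hiring", "funding", "complaints", "tech_stack", "filings"]

def weekly_top(rows: list, today: date, n: int = 10) -> list:
    def live_signals(row: dict) -> int:
        count = 0
        for col in SIGNAL_COLUMNS:
            value = row.get(col, "").strip()
            if value and (today - date.fromisoformat(value)).days <= 60:
                count += 1
        return count
    scored = [(live_signals(r), r) for r in rows]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Keep the top n, then apply the two-category minimum.
    return [r | {"score": s} for s, r in scored[:n] if s >= 2]

rows = [
    {"company": "Acme Ltd", "hiring": "2026-01-10", "complaints": "2026-01-12"},
    {"company": "Beta Ltd", "funding": "2025-09-01"},
]
top = weekly_top(rows, date(2026, 1, 20))  # only Acme qualifies
```

Sorting once a week and taking the top ten is the entire scoring layer; everything else is data entry.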
Tools landscape
In 2026, the UK-relevant tools for buyer intent are roughly:
- Free: Google Alerts, Companies House watchlists, LinkedIn company follows, RSS
- Paid intent data: Bombora, G2 intent, 6sense (US-first, UK coverage patchy)
- Native intent + discovery (our category): LeadKing and a handful of others, with per-customer AI-tailored signal models
Three worked industry lenses
Signal weighting changes by what you sell. A few concrete examples:
Selling compliance SaaS to UK accountants. Weight regulatory signals (FCA notices, ICAEW alerts) at 4x, complaints at 3x, hiring at 2x. Filings matter — a new “head of practice” at a firm often triggers stack review. News is noise. Complaint sources that matter: AccountingWeb, r/accounting, LinkedIn group posts in chartered-accountancy circles.
Selling payroll to UK SMEs. Weight hiring at 4x (a first HR hire signals payroll-software need within eight weeks), funding at 3x, filings at 2x — new employee-benefit-related director titles show up in filings. Complaints at 2x — Sage, QuickBooks, and Xero churn posts are common and actionable. Regulatory is low relevance unless you serve regulated industries.
Selling a developer tool to UK SaaS startups. Weight complaints at 5x (developers are vocal on Reddit, X, and GitHub), hiring at 3x (first-in-role DevRel, first DevOps), funding at 2x, tech-stack at 3x — every stack change is a potential adjacency. Filings and regulatory rarely apply.
The lens shifts the weights; the method stays the same.
Frequently asked questions
Can signals work without a known ICP? Technically yes, practically no. Signals without an ICP produce a messy, noisy list. Define even a rough ICP first, apply signals to that narrower pool, and the signal-to-noise ratio improves roughly tenfold.
How often should signals refresh? Daily ingest, weekly digest, monthly re-weighting. Daily to catch complaints and hires fast; weekly because reviewing every day creates reviewer fatigue; monthly re-weighting lets you feed outcomes back into the model without over-fitting.
What about negative signals? Worth tracking. Insolvency filings, mass redundancies, and customer-loss press are all strong “do not pursue” signals. Most intent models ignore them, leading to wasted outreach on sinking ships.
Is one signal ever enough? Rarely. A fresh, senior, named-competitor complaint can sometimes stand alone. So can a Series A close for exactly-your-ICP. But the single-signal hit rate is perhaps a quarter of the multi-signal hit rate.
How do we know we are looking at the right signals? Outcome tagging. Six weeks after digest delivery, review: which signal types produced replies, which produced meetings, which produced closed-won. Re-weight. Repeat. Any tool that does not support outcome-based re-weighting is outsourcing your learning to its own defaults.
A final note on signals and ethics
Intent data sits in a narrow lane. Public filings, public job posts, public company pages, public complaints — all fair game in the UK B2B context under a well-documented legitimate-interest assessment. Scraped personal data from LinkedIn profiles, purchased contact lists of dubious provenance, and anything touching consumer data without consent — all off-limits, regardless of what a vendor tells you. The difference shows up in ICO enforcement registers. Stay on the right side of the line; the leads from that side are better anyway.
What to do next
- If you want to see how we score signals in practice, see how LeadKing works.
- If you are new to UK B2B lead gen, start with UK B2B lead generation in 2026.
- If compliance is your first question, read about GDPR-compliant cold outreach.
- Or join the waitlist — we’ll show you signals in your actual market.