I’ve been on both sides of a software RFP more times than I’d like to count.
I’ve written them as a buyer — for ticketing systems, CRM platforms, workforce management tools, knowledge base software. I’ve sat through the demos, scored the responses, run the procurement process, and lived with the consequences.
What I’ve learned is that most RFPs are designed to select the best vendor presentation, not the best vendor solution. The way we typically write them — long lists of feature requirements, weighted scoring matrices, formal response templates — systematically advantages vendors with good sales teams over vendors with good products.
This post is about how to fix that. Not with a template you can copy and paste, but with a different way of thinking about what you’re actually trying to find out.
Why most support tool RFPs fail
Before you can write a better RFP, you need to understand why the standard approach produces mediocre results.
The feature list problem. Most RFPs are built around a long list of required and desired features. Vendors respond by checking boxes. The problem is that “yes, we support omnichannel case distribution” can mean anything from a deeply integrated, workflow-driven system to a checkbox in a settings menu that technically routes emails. The feature list tells you what a tool claims to do. It tells you nothing about how well it does it, or whether your team will actually use it.
The demo problem. Vendor demos are works of art. They’re built on sanitized data, pre-configured for maximum impressiveness, and presented by people whose entire job is making the product look effortless. In twelve years of evaluating support software, I have never seen a vendor demo that accurately represented the day-to-day experience of using the product at volume. The demo shows you what the product can do on its best day. You need to know what it does on a Tuesday when three agents call in sick and your queue is backed up.
The committee problem. Large RFP committees make conservative decisions. When you’re scoring responses by consensus, the tool that offends no one wins — which is usually the tool that’s been around longest, has the most brand recognition, and charges the most. The safe choice and the right choice are often different things.
The requirements problem. Most requirements lists are written in a vacuum, before anyone has deeply interrogated what the team actually needs. They’re generic — pulled from an old RFP or a framework someone found online — rather than specific to your operation, your scale, and your actual failure modes.
Start with your failure modes, not your wish list
Before you write a single line of your RFP, spend two hours answering this question: What is currently breaking, and why?
Not what features are you missing. What is actually failing — and what is that failure costing you?
In my experience, most support tool evaluations are triggered by one of a handful of real problems:
- Agents spend too much time doing manual work that should be automated
- Leaders can’t get the reporting they need to make decisions
- The system can’t scale to the volume or complexity of incoming requests
- The tool is so difficult to use that agents work around it instead of through it
- Integration with other systems is broken or nonexistent
Your RFP should be built around your specific version of these problems. If your primary failure mode is that your agents have four different systems open simultaneously because nothing integrates, then integration architecture should be the central focus of your evaluation — not feature parity across thirty categories.
Write down your top three failure modes before you do anything else. Everything in your RFP should trace back to at least one of them.
Write requirements that expose real differences
Once you know what you’re solving for, write requirements that actually differentiate between vendors — not requirements every vendor can claim to meet.
Bad requirement: “The system must support omnichannel communication.”
Better requirement: “Describe how your system handles a scenario where a customer opens a ticket via email, follows up via chat, and then calls in — and an agent needs to see the full history of that interaction in a single view, in real time, without switching screens.”
The second version can’t be answered with a checkbox. It requires the vendor to show you how their product actually works, and it will produce meaningfully different answers from different vendors.
For every major capability area, try to write at least one scenario-based requirement of this type. Force the vendor to demonstrate, not just declare.
Other high-signal questions I’ve used:
- “Walk us through what happens when the system goes down during peak hours. What does your incident response process look like, and what has your actual unplanned downtime been in the last 12 months?” (You will learn a great deal from how they answer this.)
- “How does your system handle a queue of 500 tickets that need to be rerouted because a skill group becomes unavailable? Walk us through that operationally.”
- “What does a new agent’s first week look like in your system? What training is required before they’re productive, and how do you measure that?”
- “What are the three most common complaints you hear from support managers in their first year of using your product?” (This one makes vendors uncomfortable, which is exactly the point.)
Structure your RFP in phases, not sections
A single-round RFP process is too slow for the early stages and not rigorous enough for the final stages. I run a three-phase process:
Phase 1: Qualification (2 weeks). A short document — no more than 10 questions — focused on fit. Can they handle your volume? Do they have customers at your scale and complexity? What’s their implementation timeline? This eliminates vendors who shouldn’t be in the process before you’ve invested significant time.
Phase 2: Detailed Response (3 weeks). Your full requirements document, including scenario-based questions. Aim for 20–30 meaningful requirements, not 80 checkbox items. Request specific examples of how existing customers use the features most relevant to your failure modes. Ask for references at this stage — but hold them until Phase 3.
Phase 3: Structured Demonstration (2 weeks). This is not a vendor-led demo. You control the agenda. Give the vendor your scenarios in advance — the ones that represent your hardest operational problems — and ask them to demo those specifically. Then ask the team members who will actually use the tool to participate and ask their own questions. Include the people who will administer the system, not just the people who will buy it.
At the end of Phase 3, before any decision is made, call the references — and not the references the vendor gave you. Ask the vendor who their largest customer in your industry is. Find that customer and ask for an introduction. LinkedIn works fine for this. A 20-minute call with a peer at a company similar to yours is worth more than any formal reference response.
The reference call questions that actually matter
Most reference calls are too polite to be useful. The person giving the reference knows they’re representing the vendor and calibrates accordingly.
Here are the questions that get past the surface:
- “If you were starting this evaluation again, what would you ask during the sales process that you didn’t ask?”
- “What has surprised you — in either direction — about how the product performs at volume?”
- “What does your relationship with their support team look like when something breaks? Can you give me a recent example?”
- “Would you buy this product again if you were starting fresh? Would you look at alternatives first?”
Listen for hesitation as much as content. A long pause before “yes, we’d buy it again” tells you something.
The scoring mistake most teams make
Weighted scoring matrices feel objective. They are not.
The problem is that the weights are chosen before you’ve deeply understood your failure modes, which means they often don’t reflect what actually matters. And when you’re scoring as a committee, you tend to average your way to the middle — the tool that got consistent 7s beats the tool that got 9s on the things you care about and 4s on the things you don’t.
I use a simpler final-stage framework: identify the five things that absolutely must be true for this tool to work in your environment. Call them your non-negotiables. Any vendor who can’t credibly demonstrate all five is out, regardless of their aggregate score. Among the remaining vendors, make a judgment call — because at that point, you have enough information to make one.
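To make the averaging trap concrete, here is a small sketch with made-up scores for two hypothetical vendors. The criteria names, scores, and the credibility bar of 8 are all assumptions for illustration — the point is only that a weighted average erases exactly the difference you care about, while a non-negotiables gate preserves it.

```python
# Hypothetical scores illustrating the averaging problem described above.
# Vendor A: consistent 7s everywhere. Vendor B: excellent on the criteria
# tied to our failure modes, weak on the things we don't care about.
failure_mode_criteria = {"integration", "reporting", "agent_ux"}

vendor_a = {"integration": 7, "reporting": 7, "agent_ux": 7, "admin": 7, "mobile": 7}
vendor_b = {"integration": 9, "reporting": 9, "agent_ux": 9, "admin": 4, "mobile": 4}

def average(scores):
    return sum(scores.values()) / len(scores)

# Both vendors average 7.0 — the aggregate score cannot tell them apart.
print(average(vendor_a))  # 7.0
print(average(vendor_b))  # 7.0

def passes_non_negotiables(scores, bar=8):
    """A vendor stays in only if every criterion tied to a failure mode
    clears the credibility bar; the aggregate score is never consulted."""
    return all(scores[c] >= bar for c in failure_mode_criteria)

print(passes_non_negotiables(vendor_a))  # False — out, despite the same average
print(passes_non_negotiables(vendor_b))  # True — still in
```

The gate deliberately ignores the non-critical criteria: a 4 on something that doesn’t map to a failure mode costs the vendor nothing.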
The implementation question nobody asks early enough
By the time most teams get to the implementation plan, they’ve already signed the contract.
This is backwards. Implementation is where support tool purchases succeed or fail. A product that is technically excellent but takes nine months to implement and requires three consultants to configure is a worse choice for most teams than a product that is slightly less capable but can be stood up in six weeks by your own people.
Ask every vendor, early in the process: “Who does the implementation — your team, a partner, or us? What does that cost, and what does it require from our side?”
Then ask their references: “How long did implementation actually take versus what you were told?”
The gap between those two numbers will tell you everything you need to know.
A word on total cost of ownership
The licensing fee is not the cost of the tool.
Factor in:
- Implementation costs (often 50–100% of year-one licensing)
- Training time (measure this in lost productivity, not just course fees)
- Ongoing administration (who manages the system, and what does that require?)
- Integration costs (every connection between systems is a project)
- Renewal pricing (introductory pricing and renewal pricing can be very different numbers)
I have seen organizations choose a “cheaper” tool and spend three times as much in implementation and ongoing administration as they would have with the pricier option. Get total cost of ownership projections in writing, for three years, before you decide.
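A quick worked example shows how this plays out. All of the figures below are invented for the sketch — they are not quotes from any vendor — but the shape of the calculation follows the cost categories above: year one at list price, years two and three at the renewal rate, with administration accruing every year.

```python
# Illustrative three-year TCO comparison. All numbers are assumptions.
def three_year_tco(license_per_year, implementation, training,
                   admin_per_year, integration, renewal_uplift):
    """Year 1 at list price; years 2-3 at the (possibly higher) renewal rate."""
    renewal = license_per_year * (1 + renewal_uplift)
    return (license_per_year + implementation + training + integration
            + admin_per_year * 3 + renewal * 2)

# "Cheaper" tool: low licence fee, heavy implementation and admin burden,
# and a steep renewal uplift once the introductory pricing expires.
cheap = three_year_tco(license_per_year=40_000, implementation=60_000,
                       training=25_000, admin_per_year=50_000,
                       integration=40_000, renewal_uplift=0.15)

# Pricier tool: higher licence fee, light implementation, low admin burden.
pricey = three_year_tco(license_per_year=80_000, implementation=20_000,
                        training=10_000, admin_per_year=10_000,
                        integration=15_000, renewal_uplift=0.05)

print(f"cheaper-looking tool: ${cheap:,.0f}")   # $407,000
print(f"pricier-looking tool: ${pricey:,.0f}")  # $323,000
```

With these assumed numbers, the tool with half the licence fee costs roughly 25% more over three years — which is why the projection has to cover all five categories, in writing, before the decision.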
The short version
If I had to distill this into a single piece of advice, it would be this: design your RFP to surface information, not to confirm a decision you’ve already made.
The best procurement process I ever ran started with me not knowing which tool was right. I had no preferred vendor, no internal stakeholder pushing for a particular product, and no political pressure to move quickly. We followed the process, asked hard questions, called skeptical references, and ran a structured demo with the actual end users in the room.
We didn’t buy the tool with the best sales process. We bought the tool that best handled our specific failure modes. It was in use for five years, and it performed exactly as advertised.
That’s the standard worth aiming for.
I am an ITIL Expert and extremely passionate about customer service, customer experience, best practices and process improvement. I have led support, service, help desk and IT teams as well as quality and call center teams in Canada and the UK. I know how to motivate my teams to ensure that they are putting the customer first.