AI in Pre-Sales 2026: Adoption Data and SE Workflows
AI hit pre-sales sideways. The first wave looked like chatbots inside demo platforms. The second wave was "AI-powered RFP." Most of it shipped; very little of it stuck. The workflows that did stick are mostly invisible. They are not features; they are habits.
We surveyed 412 practicing SEs in Q1 2026 and cross-referenced their answers against tool adoption data from job postings and disclosed customer counts. What follows is what SEs use, what they measured, and where the productivity claims hold up.
What SEs Use AI For
Adoption splits into three tiers. Roughly 18% of SEs report daily use of AI tools across multiple workflows. About 24% report no regular use beyond occasional ChatGPT lookups. The remaining 58% sit in the middle: weekly use for two or three repeatable tasks.
The top five workflows by reported time savings:
| Workflow | % of SEs Using | Median Time Saved per Week | Tools Most Cited |
|---|---|---|---|
| Call summaries and follow-ups | 71% | 3.5 hours | Gong, Chorus, Fathom, Granola |
| RFP and security questionnaire drafts | 52% | 5.2 hours | Loopio, Responsive, custom GPTs |
| Discovery prep and account research | 48% | 2.1 hours | Perplexity, Clay, ChatGPT |
| Demo script and talk-track drafting | 34% | 1.8 hours | ChatGPT, Claude, in-platform AI |
| POC plan and success criteria drafting | 22% | 2.4 hours | ChatGPT, Claude, internal tools |
Call summaries dominate adoption because the workflow is automatic. The tool joins the call, summarizes it, and emails the output. No habit change required. RFP drafting saves the most time per use because the underlying task is high-effort and low-stakes-per-paragraph, which is the exact shape AI handles well today.
Where the Time Savings Are Real
Three workflows produced measured, repeatable gains in the survey:
RFP first drafts. SEs using Loopio or Responsive with AI features enabled report cutting first-draft time by 40 to 60%. The gains hold up because RFP responses lean on a structured content library, so AI is retrieving and rephrasing rather than inventing. Win rates on AI-drafted responses are roughly equal to those on manually drafted responses, per practitioner-reported deal outcomes.
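The retrieve-and-rephrase pattern can be sketched in a few lines: match each incoming questionnaire item against a library of pre-approved answers and surface the closest one for the AI (or the SE) to rephrase. This is a minimal illustration of the mechanic, not any vendor's implementation; the library entries and the Jaccard scoring are illustrative assumptions.

```python
# Minimal sketch of retrieval-first RFP drafting: match each incoming
# question against a library of approved answers by token overlap.
# Library contents and scoring are illustrative, not any vendor's API.

def tokens(text: str) -> set[str]:
    # Crude normalization: lowercase, strip trailing punctuation,
    # drop very short words.
    return {w.strip(".,?").lower() for w in text.split() if len(w) > 2}

def best_match(question: str, library: dict[str, str]) -> tuple[str, float]:
    """Return the approved answer whose source question overlaps most."""
    q = tokens(question)
    scored = [
        (ans, len(q & tokens(src)) / len(q | tokens(src)))  # Jaccard similarity
        for src, ans in library.items()
    ]
    return max(scored, key=lambda pair: pair[1])

library = {
    "Do you encrypt customer data at rest?":
        "All customer data is encrypted at rest with AES-256.",
    "Describe your SSO support.":
        "We support SAML 2.0 and OIDC single sign-on.",
}

answer, score = best_match("Is data encrypted at rest?", library)
```

Real platforms use embeddings rather than token overlap, but the shape is the same: the model's job is rewording a vetted answer, not inventing one, which is why output variance stays tolerable.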
Call summaries. Gong, Chorus, and the new generation of dedicated note-takers (Fathom, Granola) reliably save 2 to 4 hours per week. The output is usable for CRM logging and internal handoffs. It is unreliable for customer-facing follow-ups without an edit pass, but the edit pass takes 5 minutes instead of 25.
Discovery research. Perplexity, Clay, and ChatGPT cut account research time from 45 minutes to 15. SEs use them to assemble the pre-call brief: recent funding, leadership changes, technology stack, public commentary on adjacent vendors. The output needs human review but the assembly time collapses.
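The pre-call brief above is effectively a fixed checklist, which is why AI assembly works: the structure is known in advance and only the content varies per account. A minimal template a team might standardize on, with illustrative field names:

```python
# Hypothetical pre-call brief template mirroring the sections listed
# above. Field names are illustrative, not from any particular tool.
from dataclasses import dataclass, field

@dataclass
class PreCallBrief:
    account: str
    recent_funding: str = ""
    leadership_changes: list[str] = field(default_factory=list)
    tech_stack: list[str] = field(default_factory=list)
    vendor_commentary: str = ""

    def is_complete(self) -> bool:
        # Review-ready only once every section has content; the human
        # review pass the article mentions happens against this check.
        return all([self.recent_funding, self.leadership_changes,
                    self.tech_stack, self.vendor_commentary])

brief = PreCallBrief(account="Acme Corp", recent_funding="Series C, 2025")
```

A fixed schema like this is also what makes the 45-to-15-minute collapse measurable: the SE reviews fields instead of composing a document from scratch.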
For tool-by-tool detail on the demo and RFP platforms above, see our demo platforms category guide and RFP automation category guide.
Where the Hype Outran Reality
Two big bets did not deliver in 2025 and still have not in 2026.
AI-generated personalized demos at scale. Several demo platforms launched features that promised to auto-generate buyer-specific demo flows from a few prompts. The output looks impressive in vendor videos. In practice, SEs report that the generated demos miss the customer-specific narrative thread, the discovery context, and the technical depth that buyers respond to. Adoption is low and concentrated in lower-stakes top-of-funnel motions.
AI sales co-pilots inside CRM. Salesforce, HubSpot, and the broader CRM ecosystem shipped AI assistants in 2024 and 2025. SEs report using them rarely. The recommendations are too generic, the context window is too narrow, and the friction of switching to a chat panel inside the CRM is higher than just keeping notes in a doc.
The pattern is consistent: AI works for tasks with structured inputs and tolerable output variance. It struggles with tasks that require deep context across many sources, where one bad sentence kills the credibility of the whole output.
The Time Savings That Disappeared
Time-saving claims are easy to overstate because reclaimed time gets reabsorbed by other work. A practical example: SEs who saved 5 hours a week on RFPs did not get 5 hours back. They got 1.5 hours of recovered focus time and 3.5 hours of new work, mostly higher-touch discovery and POC management on additional opportunities.
That reabsorption explains why AI adoption shows up in capacity-per-SE metrics (companies moving SE-to-AE ratios from 1:3 to 1:4 without losing win rates) more than in individual quality-of-life improvement. Our SE-to-AE ratio benchmarks analysis covers this dynamic in depth.
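The ratio shift is simple arithmetic, worth making explicit: moving from 1:3 to 1:4 means each SE covers a third more AEs, which shows up as fewer SEs needed for the same AE org. The headcount figures below are illustrative, not from the survey:

```python
# Back-of-the-envelope capacity math for the SE-to-AE ratio shift
# described above. The 24-AE org size is an illustrative assumption.

def ses_needed(ae_count: int, aes_per_se: int) -> int:
    """SE headcount required to cover an AE org at a fixed ratio."""
    return -(-ae_count // aes_per_se)  # ceiling division

before = ses_needed(24, 3)   # 1:3 ratio -> 8 SEs
after = ses_needed(24, 4)    # 1:4 ratio -> 6 SEs
capacity_gain = 4 / 3 - 1    # each SE covers ~33% more AEs
```

That ~33% per-SE capacity gain is where the reabsorbed 3.5 hours per week goes, which is why the benefit surfaces in org-level ratios rather than individual calendars.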
What Hiring Managers Are Looking For
AI tool fluency now appears in 31% of SE job postings in 2026, up from 4% in 2023. The phrasing is usually generic ("comfort with AI tools," "uses GenAI in workflows"). A growing minority (about 8% of postings) call out specific platforms or skills: prompt engineering, Clay workflows, custom GPT building.
For SEs interviewing in 2026, the practical move is to have two or three concrete examples ready: a workflow you built, a time-saving result you measured, and the trade-off you made. Vague enthusiasm about AI is now table stakes. Specific examples are differentiators.
See our SE interview questions guide for the framing of AI workflow questions and what hiring managers are checking for.
The Workflows That Will Matter Next
Three areas are early but credible:
Custom GPTs for product-specific demo prep. Internal SE teams are building GPTs trained on their product documentation, common objections, and competitive battlecards. The output is materially better than generic AI for the same task. The friction is the build effort, which most teams underestimate.
POC plan generation from discovery transcripts. A few SE teams have wired Gong or Chorus transcripts into prompt chains that produce a POC plan draft. The output is structured, references specific customer language from discovery, and saves 1 to 2 hours per POC kickoff. This is mostly home-built; off-the-shelf tools have not caught up.
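Since these pipelines are mostly home-built, the shape is worth sketching: a two-step prompt chain that first extracts the customer's stated requirements from the transcript, then drafts the POC plan grounded in them. Everything here is a hedged assumption about how such a chain might look; `call_llm` is a hypothetical stand-in for whatever model API a team actually wires in.

```python
# Hedged sketch of the two-step prompt chain described above. The
# prompt wording and `call_llm` stub are illustrative assumptions,
# not the API of Gong, Chorus, or any model provider.

EXTRACT_PROMPT = (
    "From this discovery transcript, list every requirement the "
    "customer stated, in their own words:\n\n{transcript}"
)
PLAN_PROMPT = (
    "Draft a POC plan with success criteria, owners, and a timeline, "
    "grounded in these customer requirements:\n\n{requirements}"
)

def call_llm(prompt: str) -> str:
    # Placeholder: a real pipeline would call a model API here.
    return f"[model output for {len(prompt)}-char prompt]"

def draft_poc_plan(transcript: str) -> str:
    # Step 1: pull requirements out of the raw transcript.
    requirements = call_llm(EXTRACT_PROMPT.format(transcript=transcript))
    # Step 2: draft the plan from the extracted requirements only,
    # which is what keeps the output referencing customer language.
    return call_llm(PLAN_PROMPT.format(requirements=requirements))

plan = draft_poc_plan("Customer: we need SSO live before the pilot ends.")
```

Splitting extraction from drafting is the design choice that matters: the second prompt never sees the full transcript, so the plan stays anchored to the requirements the first step surfaced.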
Competitive intel synthesis. SEs using Perplexity and Clay to monitor competitor product updates, pricing leaks, and customer reviews report material gains in keeping battlecards current. The maintenance burden on competitive content has dropped from a quarterly fire drill to a continuous-update workflow.
What to Take Away
AI in pre-sales is real but boring. The wins are in unglamorous places: RFP drafts, call summaries, discovery prep. The losses are in the places vendor marketing focused on: auto-generated demos, in-CRM co-pilots, end-to-end "agent" workflows that promised to replace SE judgment.
For SEs evaluating where to spend learning time in 2026, the highest-payoff areas are RFP tooling fluency, conversation intelligence integration, and one solid custom-GPT build that reflects your product and ICP. That stack covers the workflows that pay back the time invested.
For broader SE career and tooling context, see our SE tool reviews, the SE salary data for compensation benchmarks, and the SE job board for current openings that call out AI fluency requirements.