Blossom Street

33 SaaS Earnings Calls that show the real impact of AI on SaaS

by Sammy Abdullah

We’re reviewing the earnings calls of 81 publicly traded SaaS companies to understand the impact AI is having on SaaS. Below we summarize the key takeaways of 33 of those earnings calls: Snowflake, Salesforce, MongoDB, ServiceTitan, ServiceNow, Microsoft, AppFolio, Palantir, Atlassian, Paylocity, Doximity, Qualys, Bill.com, ZoomInfo, Dynatrace, Monday.com, Blackbaud, Cloudflare, Freshworks, Klaviyo, Datadog, Q2 Holdings, Shopify, HubSpot, Paycom, Unity, Twilio, Procore, JFrog, SPS Commerce, Waystar, SimilarWeb, and Amplitude. After the takeaways below, you will see full ~5 paragraph summaries of each earnings call. We will release more blogs like this as we do more work.

The big takeaways:

AI is not profitable yet. The goal for the moment is driving margin-neutral revenue. Microsoft, Salesforce, ServiceNow, and nearly every other company described margin pressures from deploying AI products. AI workloads are simply very expensive at the moment.

Direct revenue from AI is nascent for most, especially relative to total revenue. It is non-existent for some players like Doximity, and in its very early stages but growing fast at others like Freshworks. That said, enterprise customers are adopting and benefiting from the new AI products built by their already critical software providers like Datadog, Salesforce, and Monday.com. There is also a category of companies that are building usage before monetizing. No company we’ve observed is making AI product a significant part of any forecast yet, even though they talk extensively about positive AI impact.

Monetization is changing. The pricing conversation for AI is largely moving from “per seat” to “per outcome” or “per agent deployed.” Procore, for instance, wants to price its AI on construction dollar volumes, which would insulate it from employee count reductions. Some, like Atlassian, are sticking hard to the per-seat model.

Infrastructure for AI versus AI products. Some, like Snowflake, ServiceNow, MongoDB, Twilio, Datadog, and Cloudflare, will win because AI is supported infrastructurally on their platforms. Some, like Dynatrace, believe AI makes their product more compelling than ever (in their case, observability and monitoring of AI). Others, like ServiceTitan and Salesforce, are building AI products that actually execute context-aware actions. AI as an assistant or copilot has been de-emphasized.

Trust is an issue for AI. Doximity has stopped releasing AI product until they get it perfect, because it’s not acceptable for AI to misdiagnose a patient. Qualys makes a similar point in cybersecurity: being the agentic remediation layer requires a level of trust that generic AI tools can’t establish. AI errors in healthcare and cybersecurity are near unacceptable, and thus the bar for accuracy is higher.

Performance is strong. Quite a few of these companies, like MongoDB, Cloudflare, ServiceNow, and Datadog, closed some of their largest deals ever in Q4. Others, like Atlassian, had record quarters. Companies selling AI products to their customers report better growth and retention among those customer cohorts. On the other hand, there are companies like ZoomInfo experiencing serious disruption, with growth falling to near zero.

The moat is very high. Moats for enterprise software are being built around proprietary, non-public, and sensitive historical customer data, workflows, governance, security, integrations, compliance, vendor trust (especially in highly regulated industries like healthcare, which Waystar cited), and operational knowledge of the existing customer. AI cannot stand alone; it needs to sit on top of software that manages all of the above in an enterprise-friendly manner. Additionally, any of these software companies building agents have a serious edge, because those agents are trained on enormous repositories of historical customer data that an AI startup will not have. All that said, SaaS companies focused on SMB customers, which have much less internal complexity and are an easier lift, could face a serious threat from AI tools that let customers build internally, or from AI startups; Monday.com and ZoomInfo cited issues in their SMB customer bases. SMBs have simpler workflows, less institutional complexity, and lower switching costs.

Internal AI improves overall margins. Many of the companies themselves such as Blackbaud and Klaviyo are using AI in their own operations to improve margins. For SaaS, the value of AI is a margin expansion story as much as a revenue growth story, and the market is underweighting both. And given almost every company built minimal to zero AI revenue into forward guidance, the opportunity for outperformance and multiple expansion is significant.

More AI will increase the need for existing software. AI needs existing software to operate efficiently. For companies that sit at the measurement, observability, and analytics layer, like Amplitude, Datadog, Dynatrace, and Snowflake, AI-driven development could increase demand for incumbent software. AI agents are also becoming a new customer or user. The consumer of enterprise data platforms is no longer just a human analyst or developer; it’s an AI agent querying autonomously, continuously, at scale. That consumption will become monetization for software companies. Even the foundational AI companies themselves are customers: Datadog has 14 of the top 20 AI-native companies as customers. Amplitude has 25 AI-native customers above $100K ARR, with one frontier lab at seven figures.

AI is giving software companies new ways to show their ROI. Examples include Waystar ($15B in prevented denials), ServiceTitan (18-point EBITDA margin improvement for Max customers), Klaviyo (50% higher open rates, 40% higher revenue per campaign), and HubSpot (2x meetings booked for Customer Agent users). They are using those outcomes to justify both higher prices and faster expansion within accounts.

The ~5 paragraph summaries of actual calls are below. Revenue multiples in real time can be seen for all these companies at https://www.softwaremultiples.com/. Also visit https://www.blossomstreetventures.com/ for detailed financials and metrics data for all these companies.

Snowflake

Snowflake’s Q4 call told a story of a company in genuine transition, moving from being the place enterprises store and query data, to being the platform where they actually build and run AI. The numbers: product revenue grew 30% year-over-year, driven primarily by AI workloads, and accounts using AI features rose to more than 9,100, with Snowflake Intelligence, their flagship agentic product, scaling to over 2,500 accounts, nearly doubling quarter-over-quarter.

What makes Snowflake’s AI story somewhat distinct from peers is that AI is benefiting the business on both sides of the income statement. On the revenue side, larger and more strategic deals are getting done; Snowflake signed the largest deal in company history at over $400 million in total contract value, and closed seven nine-figure contracts in the quarter versus two in the same period last year. On the cost side, management reported 40% to 50% higher project margins and compressed delivery cycles through their own internal use of Snowflake Intelligence and Cortex Code, a credible proof point that software companies will be big beneficiaries of AI.

For customers, the value proposition is centered on two products. Cortex Code is used by over 4,400 customers, enabling faster development and deployment of AI workloads. Snowflake Intelligence, meanwhile, is the more agentic play, allowing enterprises to build AI-native workflows directly on top of their existing Snowflake data, without moving it elsewhere. The strategic logic is powerful: enterprises already trust Snowflake with their most sensitive data, and Snowflake is betting they’ll prefer to run AI on top of that data in place rather than pipe it out to a separate system.

The gross margin picture is the caveat. Management acknowledged that newly launched AI products currently carry lower margin profiles, and that margin expansion remains a near-term priority as efficiency improvements are realized. This is the same tension playing out across the AI infrastructure sector; serving AI workloads is compute-intensive, and the economics improve over time but aren’t fully mature yet. Snowflake guided FY27 product gross margin at 75%, roughly flat with FY26’s 75.8%.

The bigger strategic bet is on ecosystem. Snowflake expanded partnerships with Anthropic, OpenAI (a $200 million expansion), and Google Cloud, giving customers native access to leading AI models directly within the platform. Rather than picking a single model winner, Snowflake is positioning itself as the neutral data layer that works with all of them, a benefit to its enterprise customers who don’t want to be locked into a single AI vendor. If that positioning holds, Snowflake doesn’t just benefit from the AI wave; it becomes critical infrastructure beneath it.

Salesforce

Salesforce is at an inflection point with AI. Agentforce, Salesforce’s agentic AI platform, closed 29,000 deals in its first 15 months, with marquee enterprise customers like Amazon, Ford, AT&T, and Moderna signing on. Deals over $1 million were up 26% year-over-year, suggesting that AI may be driving larger, more strategic contracts.

What’s most notable is how Salesforce is trying to reframe what AI does for customers. Rather than talking about AI in terms of features or capabilities, Benioff introduced a new metric, Agentic Work Units (AWUs), to show that AI agents on the platform are completing 2.4 billion discrete units of actual work: updating records, triggering workflows, making decisions. The message was deliberate: this isn’t AI that thinks or suggests, it’s AI that executes.

On the revenue side, Agentforce ARR hit ~$800 million, up 169% year-over-year, and the broader Agentforce and Data Cloud bundle crossed $2.9 billion ARR, up over 200%. Salesforce is monetizing AI through a mix of premium SKUs, new seat additions, and consumption-based flex credits, which gives them multiple vectors to capture value as usage scales.

The candid note was around gross margins. Token costs remain a real input expense, and Salesforce acknowledged it’s working to optimize efficiency to keep AI gross margins neutral in the near term. AI offerings are not profitable at the moment.

The bigger strategic picture is that Salesforce is betting AI transforms it from a system of record into a system of action where agents handle customer service, sales workflows, and operational decisions autonomously. If that holds, it strengthens their competitive moat considerably and raises switching costs for existing customers.

MongoDB

MongoDB is positioning to be the foundational data layer for the AI era, and the company’s job is to make sure the world knows it. Customers are excited about MongoDB’s platform strength and its ability to serve as an integrated data layer for AI agents, combining search, vector search, and embeddings in a single offering.

The most important thing to understand about MongoDB’s AI story is where they sit in the stack. Unlike Salesforce or Braze, MongoDB isn’t selling AI applications to end users. It’s providing the data infrastructure that AI applications run on. At its flagship MongoDB.local San Francisco event, the company announced the integration of its core database with embedding and reranking models from Voyage AI, creating a unified data intelligence layer for production AI, allowing developers to build sophisticated applications at scale with reduced hallucination risk and no requirement to move or duplicate data. That no-data-movement angle is a meaningful competitive argument: enterprises with sensitive data in MongoDB don’t have to copy it to a separate vector store or AI pipeline, which simplifies architecture and lowers risk. For large financial institutions and regulated industries, who are among MongoDB’s most strategic customers, that matters enormously.

The candid acknowledgment from management, however, is that AI is not yet a material driver to results, though they are encouraged by the growth they are seeing with customers leveraging AI capabilities. The number of customers using vector search and Voyage embedding models nearly doubled year over year, and AI natives, digital natives, and large enterprises are all contributing to the growth across the customer base.

What’s actually driving the strong financials right now is a combination of AI-adjacent tailwinds and MongoDB’s core enterprise momentum. In Q4 they signed an approximately $90 million deal with a large tech company planning to expand both core and AI workloads on Atlas, and a greater than $100 million deal with a large financial institution for Enterprise Advanced, the largest total contract value deal in company history. The significance of that financial institution deal is hard to overstate: large banks are notoriously conservative with infrastructure decisions, and a nine-figure commitment to MongoDB speaks to how seriously enterprises are treating their data platform choices in the context of AI readiness.

AI actually amplifies the case for MongoDB’s multi-model, developer-friendly architecture. As enterprises move from traditional transactional applications toward AI-powered agentic workflows, the data requirements become more complex, requiring search, vector similarity, document storage, and real-time access all in one place. MongoDB’s goal is to become the generational data platform of choice in the AI and multi-cloud era, and if AI application development scales the way most expect, the demand for an integrated, performant, cloud-native data layer like Atlas should grow with it. The risk is that the revenue inflection from AI workloads takes longer to materialize than investors hope, but the Q4 numbers suggest the underlying business is strong enough to carry that bet comfortably while AI catches up.

ServiceTitan

The AI story here is unusually concrete. Where most SaaS companies are talking about AI in terms of platform strategy and future optionality, ServiceTitan is reporting actual customer outcomes from a live product, and those numbers are striking.

The centerpiece is Max, which ServiceTitan is positioning not as an AI feature but as an agentic operating system for the trades, meaning it’s designed to autonomously orchestrate end-to-end workflows across demand generation, dispatch, quoting, payments, and back-office operations for HVAC, plumbing, electrical, and other field service businesses. The first deployment cohort of Max customers saw a 50% increase in average ticket size, one customer achieved over 50% revenue growth in a single month, and another increased EBITDA margin from 18% to 30% while reducing office headcount. And management went further: customers on Max will about double their monthly subscription revenue when fully ramped, an effect not driven by technician expansion, meaning the value is coming from operational efficiency and higher revenue capture per job, not simply from adding more workers.

What makes ServiceTitan’s AI positioning particularly defensible is the data moat underlying it. The company leverages proprietary structured data from over $80 billion in annual transaction volume to drive automation and outcome improvements. This is a critical competitive point that CEO Ara Mahdessian leaned into when analysts asked about AI-native startups competing for the same customers. A point-solution AI tool built for the trades can optimize one workflow but it has no context on how that workflow connects to the rest of the business. ServiceTitan’s integrated platform, spanning marketing, scheduling, dispatch, invoicing, and payroll, creates a contextual data layer that generic AI tools simply can’t replicate from the outside.

The second AI product gaining traction is Virtual Agents: AI-based modules that handle inbound call management and appointment booking, especially during call surges or after normal business hours. For a trades business, missed calls during peak season or after hours are direct revenue losses. A potential customer who can’t book gets routed to a competitor. Virtual Agents directly plug that gap, and because it’s priced as a consumption product, it creates a new usage revenue stream that management believes could grow faster than GTV in fiscal 2027.

The honest caveat is that Max is still early-stage and capacity-constrained. ServiceTitan plans to double Max capacity in Q1 FY2027, with scaling tied to onboarding efficiency and customer success, which is a signal that the bottleneck right now is deployment and training, not demand. To accelerate on all fronts, the company hired Abhishek Mathur from Figma, Meta, and Microsoft as Chief Technology and Product Officer, specifically to drive organizational and technology velocity around AI initiatives. FY2027 is also expected to represent the company’s largest R&D investment ever, with explicit focus on AI inference and internal tooling. The trades are not typically an industry associated with cutting-edge software, which is precisely why ServiceTitan’s moat both in data and in customer relationships could prove so durable as AI raises the stakes for what field service software can actually do.

ServiceNow

If there’s one company in enterprise software that has the most fully-formed AI story right now, it’s ServiceNow. CEO Bill McDermott came into this call on offense, explicitly addressing the “AI will eat software” narrative that has spooked investors across the sector and flipping it on its head. His argument was direct: enterprise AI will be the largest driver of return on the multitrillion-dollar super cycle of AI infrastructure investment, and the real payoff comes when tokens move beyond pilots and get embedded directly into the workflows where business decisions are made, with ServiceNow serving as the semantic layer that makes AI ubiquitous in the enterprise.

The numbers behind Now Assist, ServiceNow’s AI product suite, are the most concrete AI monetization metrics in this cohort. Now Assist ACV surpassed $600 million, more than doubling year-over-year in Q4, with deals over $1 million nearly tripling quarter-over-quarter and 35 such deals closing in Q4 alone. Enterprises aren’t just adding AI as a line item but are making meaningful commitments to it within the ServiceNow platform. The AI control tower, which allows enterprises to govern and orchestrate AI agents across the business, grew over 4x its 2025 targets, another signal that customers are moving from individual AI use cases to thinking about AI management as a platform-level problem.

What ServiceNow is really selling is AI governance as much as AI capability. As enterprises deploy more agents someone has to be in charge of orchestrating them, monitoring them, and ensuring they don’t go rogue or create compliance exposure. ServiceNow is positioning its platform as that orchestration layer, what they call the “universal agentic network” built on MCP and Workflow Data Fabric. Monthly active users grew 25% and the number of workflows and transactions processed on the platform increased over 33% each, reaching 80 billion workflows and 6.4 trillion transactions.

ServiceNow doesn’t want to bet on any single model winning; rather, it wants to be the workflow layer that works with all of them, insulating itself from model commoditization while capturing value from enterprise adoption regardless of which AI providers customers prefer.

The honest tension in the ServiceNow story is pricing and gross margin. Management acknowledged some gross margin headwinds from hyperscaler and AI infrastructure choices, and the shift to a hybrid pricing model, combining traditional subscription with consumption-based AI usage, introduces some revenue variability that investors are still getting comfortable with. But with $15.5 billion in subscription revenue guided for 2026 at 19–20% growth, a 32% operating margin, and 36% free cash flow margins, ServiceNow is arguably the most financially powerful pure-play on enterprise AI adoption in the market right now. The core business is robust enough that they can absorb the cost of building out AI infrastructure while competitors are still figuring out their strategy.

Microsoft

Microsoft’s call was a declaration that AI is now the gravitational center of the entire business. Satya Nadella opened with a striking statement: “We are only at the beginning phases of AI diffusion and already Microsoft has built an AI business that is larger than some of our biggest franchises.” When you consider that Microsoft’s “biggest franchises” include Office, Windows, and Xbox, each multi-billion dollar businesses, that’s an extraordinary claim. And the numbers behind it are hard to argue with: Microsoft Cloud crossed $50 billion in quarterly revenue for the first time, up 26%, while Azure grew 39%, the acceleration driven explicitly by AI workloads.

The AI story at Microsoft operates on three distinct layers, each reinforcing the others. The first is infrastructure. Capital expenditures hit $37.5 billion in the quarter, with roughly two-thirds allocated to short-lived assets like GPUs and CPUs. Management introduced a new internal metric, tokens per watt per dollar, as their guiding optimization target for AI infrastructure, and reported a 50% increase in throughput on OpenAI inferencing due to infrastructure advances.

The second layer is the Copilot product suite, where AI is directly touching customers at scale. Microsoft 365 Copilot reached 15 million paid seats, with over 160% seat growth year-over-year and a tripling of the number of customers with over 35,000 seats, a signal that enterprise adoption is moving from departmental pilots to company-wide deployments. Average user conversations doubled and daily active users grew 10x. GitHub Copilot reached 4.7 million paid subscribers, up 75% year-over-year, with individual Copilot Pro Plus subscriptions growing 77% sequentially.

The honest tension is on margins. Gross margins declined slightly, driven by continued AI infrastructure investments and growing AI product usage, and management guided for operating margins to be down slightly year-over-year in Q3. Microsoft is, in effect, spending today to capture a revenue curve that extends well into the next decade, and the Q2 numbers suggest that curve is steeper than almost anyone expected.

AppFolio

AppFolio’s Q4 call told a quietly compelling story for understanding how AI actually changes the economics of a vertical SaaS business. AppFolio serves property managers, a customer base that has historically been underserved by software and skeptical of hype. The fact that 98% of AppFolio’s customers are already actively using one or more AI capabilities included in the platform, against an industry backdrop where half of AI users in property management report they can’t actually rely on the AI features in their core system, is a striking market differentiation signal. It suggests AppFolio has threaded a needle that most vertical SaaS companies are still struggling with: making AI functional and trusted at the ground level.

AppFolio is repositioning its platform around three layers: a system of record, a system of action, and a system of growth with agentic AI embedded directly into daily operations, with the explicit goal of enabling customers to evolve from property managers to performance managers. That reframing reflects a fundamental shift in what AppFolio is selling: not software that helps you manage properties, but a platform that actively improves the financial performance of your property management business. Adoption of premium tiers has already exceeded 25%, a meaningful indicator that customers are upgrading to capture AI-driven capabilities.

The business impact of AI is showing up directly in AppFolio’s financials. Non-GAAP operating margin expanded to 24.9% in Q4, up from 20.2% a year earlier, a nearly 500 basis point improvement that reflects both revenue growth and the operating leverage that comes from AI-driven efficiency gains across the platform itself.

What makes AppFolio particularly interesting from an investment lens is that their AI advantage is compounding in a way that’s structurally hard to replicate. 45% of survey respondents in the property management industry say they plan to consolidate their software solutions and AppFolio is the natural consolidation destination, precisely because they’ve embedded AI deeply enough that customers would lose significant operational capability by leaving. The moat here isn’t just the product; it’s the AI-trained workflows, the resident data, and the operational patterns that accumulate the longer a customer stays on the platform. For a vertical SaaS business approaching $1 billion in revenue, that’s a durable competitive position.

Palantir

Palantir’s Q4 2025 call was unlike any other earnings call in enterprise software this cycle partly because of the numbers, which were objectively extraordinary, and partly because Alex Karp delivers earnings calls the way a wartime general addresses troops, not the way a CFO addresses analysts. But strip away the theater and what you find is a company that has, almost overnight, become one of the defining stories of what enterprise AI actually looks like when it works at production scale.

The headline numbers require context to be fully appreciated. Q4 revenue grew 70% year-over-year, the highest growth rate since Palantir went public, and representing a 3,400 basis point acceleration versus Q4 of the prior year. This isn’t a company maintaining a high growth rate, it’s a company where the growth rate itself is accelerating sharply. Full-year 2025 revenue grew 56%, and the company is guiding full-year 2026 revenue of $7.19 billion representing 61% growth.

The company closed 61 deals greater than $10 million in the quarter, and management cited multiple examples of customers signing $80 to $96 million contracts within months of initial engagement, a compressed deal cycle that reflects genuine organizational conviction.

The US vs. international divergence on the call was striking and strategically revealing. US revenue grew 93% year-over-year and 22% sequentially, while international commercial revenue grew just 8% year-over-year, a massive gap that management was candid about.

The government side of the business is also worth understanding in the context of AI impact. US government revenue grew 66% year-over-year, driven in part by a massive $10 billion Army software contract signed last summer and a $448 million Navy contract for shipbuilding supply chain modernization. Karp was unambiguous about what these contracts represent: not just data analytics or workflow tools, but AI systems that are actively changing the operational capabilities of the US military. He argued that AI implementations in the defense context have changed what warfighters are able to do.

The risk most frequently raised by skeptical analysts on the call is whether the US commercial acceleration is sustainable or whether it represents a concentrated burst of pent-up demand that will moderate. Palantir’s answer is essentially that AI-driven enterprise transformation is still in very early innings, that their pipeline of committed deal value reached $11.2 billion (up 105% year-over-year), and that the real constraint on growth is their own capacity to onboard and deliver for customers, not demand.

Atlassian

CEO Mike Cannon-Brookes has been saying “AI is the best thing that’s ever happened to Atlassian” for several quarters, and this quarter he finally had the numbers to fully back it up. Atlassian delivered its first-ever $1 billion cloud revenue quarter, with cloud up 26% year-over-year, and surpassed $6 billion in annual run rate revenue, a milestone that felt like a culmination of years of patient investment in infrastructure that is now paying off precisely because AI workloads need exactly what Atlassian has built.

The strategic logic of Atlassian’s AI position is distinct from most others in this series. Rather than building AI features on top of an existing product, Atlassian is arguing that AI needs Atlassian; that the work tracking, planning, and organizational knowledge embedded in Jira and Confluence becomes more valuable, not less, as AI proliferates. The Teamwork Graph, which now contains well over 100 billion objects and connections across first- and third-party tools, is the context layer that enables Rovo, Atlassian’s AI assistant, to deliver business value that is actually context-aware and actionable, rather than generic. That’s a meaningful competitive claim: a new AI tool can generate text or answer questions, but it can’t tell you which Jira tickets are blocking your sprint, who owns which decision, or how a current workflow has historically performed across 350,000 customers. Atlassian can.

The proof that customers believe this argument is in the adoption metrics. Atlassian’s Teamwork Collection, the AI-powered bundle that serves as the company’s primary AI monetization vehicle, surpassed 1 million seats sold in under nine months, with more than 1,000 customers upgrading, and the company closed a record number of deals over $1 million in ACV, nearly doubling year-over-year. Critically, customers using AI code generation tools create 5% more Jira tasks, have 5% higher monthly active users, and expand Jira seats 5% faster than those not using AI tools, a clean, data-driven demonstration that AI adoption drives more usage of the core platform, not less.

The seat-based pricing debate hung over the call: analysts probed whether consumption-based pricing could erode Atlassian’s model as AI agents proliferate and “seats” become a less meaningful unit. Atlassian’s response is essentially that customers want predictability, and seat-based pricing delivers it, while the Teamwork Collection bundles AI credits on top of seats in a way that captures consumption upside without forcing customers into open-ended consumption risk. RPO grew 44% year-over-year to $3.8 billion, accelerating for the third consecutive quarter.

The competitive angle on the call was also striking. When asked about new AI tools, including tools from Anthropic, potentially challenging Jira, the CEO was notably unbothered. He noted Atlassian considers Anthropic a partner, using their models within the platform, and argued that new AI tools will emerge but Atlassian’s Teamwork Graph and deep workflow integration provide differentiation that generic AI tools can’t replicate. The focus, he said, remains on human-AI collaboration for complex work: the kind that requires organizational context, compliance, security, and integration that a standalone AI tool can’t provide.

Qualys

The cybersecurity industry is entering a new phase where the speed of attacker exploitation has outpaced the speed of human response, and the only viable answer is autonomous, AI-driven remediation. As threat actors continue to compress time-to-exploit, Qualys believes the next phase of pre-breach risk management will be defined by an agentic AI-driven risk fabric with out-of-the-box business quantification and automated remediation to respond at the speed of threats.

The most significant product announcement on the call was the launch of the AI-native Risk Operations Center which the CEO positioned explicitly as a new category in cybersecurity, designed to centralize an organization’s entire threat response posture. The argument is that the traditional SOC (Security Operations Center) is reactive: it responds to breaches that have already happened. Qualys’s ROC is designed as a pre-breach capability that unifies Continuous Threat Exposure Management with exploit confirmation, risk quantification, and automated remediation, all powered by agentic AI that can act without waiting for a human to review and approve each step. The competitive shot embedded in the CEO’s commentary was pointed and deliberate: he argued that competitors focusing on exposure management can’t win the AI fight if they’re still routing remediation through Jira tickets and ServiceNow tickets. Autonomous decision-making and execution is the actual differentiator, not just identifying vulnerabilities.

On the customer impact side, Qualys’s AI story is still more about where the market is heading than where revenue is today. The ETM (Enterprise TruRisk Management) platform, which serves as the foundation for the agentic AI capabilities, represented 10% of total bookings and 13% of new bookings, up from 8% and 9% previously, a meaningful directional signal but still a small share of the overall business. Patch Management, which is the remediation capability that AI agents actually execute against vulnerabilities, represented 8% of total bookings and 16% of new bookings. Together these metrics point toward a platform transition that’s underway but early. Customers are beginning to adopt the agentic workflow, but the majority of the revenue base is still anchored in the traditional vulnerability management and VMDR products.

The financial picture is disciplined and somewhat unusual relative to the rest of the companies in this series. Full-year 2025 revenue reached $669 million, up 10%, with a 47% adjusted EBITDA margin, exceptional profitability for a company of this size. The 2026 guidance of 7–8% revenue growth is conservative by enterprise software standards, and management was candid that it reflects investment in AI infrastructure, sales and marketing expansion, and federal sector buildout, all of which compress near-term margin slightly.

What makes Qualys particularly interesting from an AI lens is the specificity of its use case. The combination of asset discovery, vulnerability identification, exploit confirmation, risk quantification, and automated patch deployment is one of the few enterprise software workflows where agentic AI is not just useful but arguably necessary for the product to work as intended. No human team can keep up with the velocity of modern vulnerability exploitation. Qualys differentiates by offering integrated patch management and autonomous workflows that allow customers to quickly remediate vulnerabilities.

Paylocity

The most concrete AI signal on the call was deceptively simple: average monthly usage of Paylocity’s AI assistant increased over 100% quarter-over-quarter. That’s a significant sequential jump in a relatively short period. Paylocity recently expanded its AI assistant into HR rules and regulations, tapping into more than 200 IRS and Department of Labor knowledge sources to provide administrators with guidance on tax and labor regulations. For the HR administrators who are Paylocity’s primary users, typically generalists at companies of 50 to 500 employees, the ability to ask a question and get a compliance-grounded answer without calling a lawyer or spending an hour on the IRS website is genuinely valuable.

What’s particularly interesting about Paylocity’s AI strategy is the dual deployment model: AI for customers, and AI for Paylocity itself. Within the operations team, Paylocity is leveraging AI to drive down client case volumes, automate client interactions and case routing, and perform sentiment analysis to flag urgent cases for faster response. This internal AI efficiency play is showing up in the financials: adjusted gross margin expanded 60 basis points year-over-year to 74.4% in Q2, and operating expenses are growing slower than revenue. This is a company guiding to $622–630 million of adjusted EBITDA on $1.74 billion in revenue, roughly a 36% margin.

Paylocity’s position as a system of record allows it to connect data to other systems via APIs, increasing platform utilization, and the customer time savings from AI features lead directly to opportunities to upsell more modules, enhancing the experience and driving revenue growth. This is a different monetization path than many peers: rather than charging directly for AI as a premium SKU, Paylocity is betting that AI-driven engagement deepens the platform relationship, makes customers more likely to expand into adjacent modules, and reduces the churn that has historically been the primary growth constraint in HCM.

Paylocity is operating in a slower-growth regime than most of this peer group. Full-year 2026 revenue guidance of $1.73–1.74 billion represents 9% growth, and recurring revenue is expected to grow 10–11%. That’s solid but not the acceleration story investors see at Atlassian or Salesforce. The macro factor management monitors most closely is employment levels at client companies, since Paylocity’s revenue is partly tied to headcount. Management noted employment levels have been stable with no significant changes expected. AI is helping Paylocity expand revenue per client and improve efficiency, but it isn’t yet the kind of step-change growth driver that reshapes the growth profile.

Doximity

Where nearly every other company in this series was accelerating AI investment and racing to monetize, Doximity was doing something much rarer: pumping the brakes on AI commercialization deliberately, citing patient safety, and then watching its stock drop nearly 24% the next day as the market punished the caution. The divergence between what management said and what the market wanted to hear was striking.

The platform fundamentals are strong by virtually any measure. Doximity surpassed 3 million registered members, with more than 85% of all US physicians and two-thirds of NPs and PAs on the platform, and record usage across daily, weekly, monthly, and quarterly active user metrics. Over 300,000 unique prescribers used AI products in Q3, and January saw an average of four AI queries per prescriber per week for Docs GPT. More than 100 top US health systems have purchased the AI suite, granting access to over 180,000 prescribers. These are impressive adoption numbers for a product that is not yet generating any revenue.

And that last sentence is the crux of the investor tension. No AI revenue is included in current guidance; commercial AI products are expected to launch later in the year. Doximity is building substantial AI infrastructure, investing in usage that is already compressing gross margins from 93% to 91%, and has over 300,000 physicians actively using the product, but is explicitly choosing not to monetize it yet. The reason CEO Jeff Tangney gave was pointed and substantive: a recent Stanford-Harvard study found AI can cause clinical harm in up to 22% of real patient cases, and that overconfident models make those errors harder to spot. In response, Doximity built Peer Check, a clinical peer review layer co-led by renowned physician-scientists Eric Topol and Regina Benjamin, with more than 10,000 US physician experts reviewing AI-generated clinical answers before they’re deployed at scale. The message was clear: Doximity will not monetize AI until it trusts that the AI is safe enough for physicians to rely on without second-guessing every output.

This is a genuinely different philosophy. Healthcare AI occupies a unique risk category: an error in a marketing recommendation costs a company a customer; an error in a clinical AI recommendation can cost a patient their life. Doximity’s caution isn’t timidity, it’s arguably the only responsible posture for a company whose platform is used by 85% of America’s physicians. The commercial AI launch later in fiscal 2026 will be a significant test: can Doximity translate trust, physician habit, and clinical safety credibility into a monetization model that the market will reward?

The longer-arc story here is one of deliberate sequencing: build the trust, earn the habit, then charge for it. Doximity’s net revenue retention was 112% overall and 117% for the top 20 customers, evidence that clients who deepen their engagement with the platform expand their spend meaningfully. If the AI suite launches commercially later this year with genuine physician trust behind it, and if the pharma headwinds stabilize, the combination of 85% physician coverage, proven engagement habits, and a safety-validated AI product could represent a monetization inflection.

BILL

BILL’s call told a story that sits at an interesting intersection: a company that has built a genuinely useful platform for SMB financial operations, is now threading AI throughout it, and is simultaneously facing an existential investor question about whether AI startups could eventually disintermediate it entirely.

The core business is performing solidly. Core revenue grew 17% year-over-year to $375 million in Q2, total payment volume reached $95 billion, up 13%, and transactions processed grew 16% to 35 million. Nearly 500,000 businesses now use the platform, with over 9,500 accounting firms embedded in the network. Multiproduct adoption grew 28% year-over-year, with businesses using both AP/AR and Spend & Expense solutions, a meaningful indicator that customers are deepening their reliance on BILL as a financial operations platform rather than a single-purpose payments tool.

On AI, BILL is pursuing a two-track strategy that distinguishes between AI-for-customers and AI-for-BILL-itself. On the customer side, the company introduced agentic capabilities including a W-9 Agent for vendor management and a coding agent for invoice processing: specific, transactional use cases where AI can eliminate manual work that currently slows down SMB finance teams. CEO René Lacerte framed it this way: “AI will allow us to dive deeper into the stack of transactional confusion and simplify it.” For SMB owners and bookkeepers who spend hours reconciling invoices, chasing down vendor information, and coding transactions to the right GL accounts, that promise is valuable.

On the internal efficiency side, BILL developed a roadmap of AI-driven productivity initiatives, covering developer productivity, internal team automation, and go-to-market optimization, with initial benefits expected to start flowing through in fiscal 2027. This is a multi-year cost structure story.

Analysts pressed on whether AI-native startups could undercut BILL’s position by building cheaper, simpler financial operations tools. Lacerte’s response centered on three moats: deep expertise in financial operations built over two decades, a proprietary data set from processing over $1 trillion in payments that enables superior risk models, and network effects from 8 million entities connected through the BILL ecosystem. The data moat argument is particularly credible: BILL’s ability to assess payment risk, predict cash flow patterns, and detect fraud is a function of seeing an enormous volume of SMB financial transactions over time. A new entrant with a better AI model but no transaction history is structurally disadvantaged in the trust and risk management dimensions that matter most when you’re moving real money for small businesses.

The honest challenge for BILL is growth rate moderation. Full-year core revenue guidance was raised but implies 14–15% growth for the year: respectable, but a deceleration from prior years and well below the trajectory of more AI-accelerated peers in this series. The company is deliberately shifting focus toward larger SMBs and improving customer unit economics rather than maximizing new customer count, which is a sensible strategic move but one that introduces near-term headwinds. The AI monetization story, where agentic capabilities translate into higher ARPU from existing customers, is still in its early innings. If BILL can demonstrate over the next few quarters that AI-driven automation is genuinely expanding what customers are willing to pay, the multiple compression the stock has experienced could look like an opportunity in retrospect. But that proof is still ahead, not behind.

Zoominfo

ZoomInfo’s Q4 2025 call was a study in a company navigating a genuinely difficult strategic moment, caught between a legacy business model under pressure from AI disruption, a promising new product suite not yet in revenue guidance, and a market that has lost patience with the transition timeline. Q4 revenue grew just 3% year-over-year to $319 million, and 2026 guidance projects only 1% revenue growth at the midpoint, extraordinary deceleration for a company that was growing at 20%+ just a few years ago.

To understand ZoomInfo’s AI story, you have to understand its predicament. ZoomInfo built its business on selling B2B contact and intent data to sales and marketing teams; essentially, a database of who to call and when. AI has disrupted that model in two ways simultaneously: first, AI-powered outbound tools have made it easier for companies to generate their own contact intelligence; second, AI agents are replacing some of the human SDRs who were the primary users of ZoomInfo’s data. The company that was once the essential data layer for go-to-market teams is now being forced to redefine what “essential” means in an AI-first sales environment.

CEO Henry Schuck’s answer to that challenge is an explicit platform pivot. The strategic framing is that whether customers access ZoomInfo’s intelligence through the application, through an AI agent, or through something they built themselves, the data flows to where work happens, positioning ZoomInfo as the only platform delivering intelligence, orchestration, and execution for modern go-to-market teams. The new products, GTM Studio, which unifies internal and external data for audience building, and GTM Workspace, are designed to make ZoomInfo the operating system that AI sales agents plug into, rather than a database that humans search manually. Schuck noted that many of the top 50 fastest-growing AI-native companies are already ZoomInfo customers, a meaningful signal that the platform has relevance in the AI-native enterprise, not just the legacy enterprise.

The upmarket migration is the clearest positive story on the call. ZoomInfo grew its upmarket segment 6% year-over-year in Q4, tripling its year-over-year growth rate in its seasonally largest quarter, and now has 74% of ACV coming from upmarket customers, up from 70% a year ago, with ACV from the $100,000-plus customer cohort growing double digits and now representing more than 50% of total company ACV. Upmarket customers buy more of the platform, renew at higher rates, and are more strategically embedded, which means the quality of the revenue base is improving even as the headline growth rate suffers from the ongoing cleanup of the downmarket SMB book.

The most candid moment on the call was about the new AI products: ZoomInfo explicitly included no revenue contribution from GTM Studio or other new products in the 2026 revenue guidance, while embedding the associated costs.

What ZoomInfo illustrates in the context of this broader series is a distinct and underappreciated risk: the companies most likely to be hurt by AI in the near term are not necessarily those with the weakest products, but those whose primary value proposition was data that AI can now partially substitute or generate differently. ZoomInfo’s contact and intent data was extraordinarily valuable in a world where finding the right person to call required human research. In a world where AI agents can do that research, synthesize signals, and execute outreach autonomously, the question isn’t whether ZoomInfo’s data is good — it clearly is — but whether it remains uniquely necessary. Schuck’s argument is that data quality is becoming more important, not less, as AI agents proliferate: bad data fed into an AI agent produces bad outputs at scale, and ZoomInfo’s verified, constantly refreshed data set is the antidote.

Dynatrace

Dynatrace’s Q3 FY2026 call was the kind of straightforward execution story that tends to get overshadowed in an earnings season dominated by more dramatic AI narratives. While most companies are racing to build AI capabilities, Dynatrace is arguing that AI creates an urgent and growing need for exactly what it already does. The more AI proliferates, the more complex and opaque software environments become, and the more critical observability (knowing what’s happening, why, and what to do about it) becomes. CEO Rick McConnell’s central thesis at the annual Perform customer conference was that observability is entering a new era in which it is foundational to resilient software and dependable AI environments.

The financial picture reinforces that thesis. ARR stabilized at 16% growth for three consecutive quarters, net new ARR grew double digits for three consecutive quarters, and annualized log management consumption surpassed $100 million, with log management itself growing over 100% year-over-year. The logs story is particularly significant: logs are the raw observational data that flows from AI workloads at enormous volume and velocity, and Dynatrace’s ability to ingest, index, and make intelligent sense of them is a direct beneficiary of the AI infrastructure build-out happening across every enterprise customer. Platform consumption overall continued to grow over 20%, ahead of ARR growth, a usage-leading indicator that suggests the revenue trajectory has further room to run.

The most strategically important announcement on the call was Dynatrace Intelligence, a new agentic AI operations system unveiled at the Perform conference and made available to all customers. What’s notable is the deliberate pricing decision: Dynatrace Intelligence was not priced as a separate SKU but embedded into the platform for all customers. This is a conscious choice to drive adoption first and monetize through increased platform consumption and expanded footprint, rather than creating an AI premium layer that some customers might resist. It mirrors the philosophy of several other companies in this series who are using AI to deepen platform engagement before converting it to direct revenue.

The competitive differentiation Dynatrace leans on hardest is the combination of what CEO McConnell calls “trustworthy deterministic AI” with agentic AI. The argument is nuanced and important: purely probabilistic AI — LLMs making inferences about what might be wrong in a complex system — produces unreliable outputs in production environments where precision matters. Dynatrace’s approach combines its long-standing causal AI engine, which identifies root cause deterministically based on topology and dependency maps, with newer agentic capabilities that can act on those findings autonomously. For an SRE or platform engineer dealing with an incident at 2am, “we think the problem might be here” is much less valuable than “the root cause is definitively this service, and here’s what to do.” That precision is what Dynatrace is selling, and it’s a genuinely differentiated value proposition against generic AI observability tools.

The hyperscaler partnership strategy adds another dimension. In Q3, Dynatrace announced deeper technical integrations with Amazon Bedrock AgentCore, embedding with Azure’s SRE Agent, and serving as the launch partner for GCP Gemini CLI extensions and Gemini Enterprise. This is Dynatrace positioning itself as the observability layer that hyperscaler-native AI agents rely on: when a customer builds an AI agent in AWS, Azure, or GCP, Dynatrace is the system that monitors what that agent does, catches when it goes wrong, and helps remediate issues. It’s a smart wedge: rather than competing with hyperscalers for the AI workload itself, Dynatrace is becoming the essential trust and reliability layer underneath those workloads.

The honest question hanging over the call — and which analysts pressed on — is whether 16% ARR growth is the ceiling or a floor as the AI opportunity matures. The company raised full-year guidance by 125 basis points, now targeting 15.5%-16% ARR growth and putting it on track to surpass $2 billion in ARR in fiscal 2026. For a company of Dynatrace’s scale, that’s respectable, but investors hoping for a step-change acceleration driven by AI workload complexity haven’t seen it yet in the headline numbers. Management’s argument is that platform consumption growing at 20%+ is the leading indicator of that acceleration arriving in ARR terms over the next several quarters — and the logs inflection at 100%+ growth is the most concrete evidence they can point to that the flywheel is spinning.

Monday.com

monday.com’s Q4 2025 call presented a company in a genuinely interesting two-speed moment: an enterprise business accelerating meaningfully on the back of AI-driven platform expansion, and an SMB self-serve business struggling with deteriorating unit economics that management expects to persist through 2026.

Start with what’s working. Full-year revenue reached $1.232 billion, up 27%, with Q4 at $334 million, up 25%, and customers with over $500,000 in ARR grew 74% year-over-year. That enterprise acceleration is real, driven by customers standardizing on monday.com not just for project management but for CRM, service operations, software development, and now AI-powered workflows. The AI product metrics are early but directionally exciting: Monday Blocks powered over 77 million actions, Sidekick processed over 500,000 user messages, and Monday Vibe — the company’s AI-native app builder — became the fastest product in monday’s history to surpass $1 million in ARR, reaching that milestone in just 2.5 months after pricing launched in mid-October 2025.

The strategic positioning monday.com is building toward is worth understanding carefully. Co-CEO Eran Zinman described a unified AI platform with four core capabilities: Monday Sidekick (AI assistant), Monday Vibe (AI-native app builder), Monday Agents (autonomous workflow executors), and Monday Workflows (process automation). Teams are increasingly relying on monday.com not just to organize work, but to make decisions, automate outcomes, and execute faster with confidence. That reframing — from a work OS to a work execution platform — is a meaningful upward step in perceived value and justifiable pricing power. And it connects directly to the enterprise motion: larger customers with complex, interconnected workflows are the natural buyers of this expanded capability set, while SMBs may not need or want the full stack.

Which brings us to the harder conversation. The no-touch self-serve channel remains “choppy,” with higher customer acquisition costs and lower returns than historical levels — a dynamic management expects to persist throughout 2026 with no improvement assumed in guidance. This is a genuine structural headwind. The SMB market for work management software has become more competitive and more price-sensitive, partly because AI tools have lowered the barrier to building lightweight alternatives and partly because macro conditions have tightened SMB software budgets. monday.com’s response is to redirect investment toward enterprise, which makes strategic sense but compresses near-term growth rates. 2026 guidance of 18–19% revenue growth is a step down from 27% in 2025 — not alarming on its own, but a deceleration that the market had to digest.

The long-term ambition management has been articulating — a path to much larger revenue targets by 2027 — was another notable moment on the call. The CFO explicitly stated that the 2027 target number is “off the table,” with management ceasing to discuss prior long-term targets due to macroeconomic volatility and the ongoing challenges in no-touch channels. Pulling guidance is rarely received well, and combined with the SMB headwinds and deceleration in growth, it created investor concern despite the genuinely strong enterprise metrics.

The most compelling part of the monday.com story in the context of this series is Monday Vibe — an AI-native app builder that lets business users create custom applications on top of the monday.com platform without writing code. If this scales as management believes it can, it transforms monday.com from a work management platform into something closer to a business application development layer — essentially a low-code/no-code platform that is AI-native from the ground up. The competitive analogy that comes to mind is what Salesforce is trying to do with Agentforce: extend from a system of record into a system of action and creation. Monday.com is pursuing a similar expansion from a different starting point — the work coordination layer rather than the CRM layer — and Vibe’s early ARR traction suggests the market is receptive to the vision, even if the SMB headwinds cloud the near-term picture.

Blackbaud

Blackbaud is not a company most people include when they talk about AI in enterprise software. It serves nonprofits, universities, healthcare foundations, and faith organizations, institutions that are resource-constrained, often technologically cautious, and historically slow to adopt new platforms. And yet the Q4 2025 call made a surprisingly compelling case that Blackbaud may be better positioned for the AI era than its modest growth rate suggests, precisely because of the unique data moat it has built over 45 years serving the social impact sector.

CEO Mike Gianoni opened with a direct and unusually candid framing of the fundamental question facing every vertical SaaS company right now: will AI be beneficial to system-of-record vertical software firms like Blackbaud, or detrimental? Blackbaud processes nearly 30 billion donor predictions annually, manages tens of petabytes of data across its customer base, and has built the most comprehensive philanthropic dataset in existence: proprietary survey and benchmarking data, licensed datasets, identity resolution capabilities, and specialized datasets like Blackbaud Giving Search. Critically, this data is not publicly available on the internet where LLMs can access it, meaning no competitor can train a general-purpose AI model to replicate what Blackbaud knows about donor behavior, nonprofit fundraising patterns, and philanthropic outcomes.

The most concrete AI product on the call was the Development Agent, the first of Blackbaud’s “Agents for Good” released at their bbcon conference in October. The use case is beautifully specific and immediately legible in terms of ROI: a university with 190,000 alumni but a fundraising team with bandwidth to focus on only 10,000 of them can deploy the Development Agent as an additional “staff member” that cultivates relationships and raises funds from the other 180,000 alumni through email, text, and a full conversational avatar, self-learning using the data, intelligence, and workflows within Blackbaud’s system of record. That is not a marginal efficiency gain — it’s the ability to extend fundraising reach by 18x without proportionally scaling headcount.
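The 18x claim is simple arithmetic on the numbers management cited. A back-of-the-envelope check (illustrative only, using the figures from the call, not Blackbaud data or code):

```python
# Back-of-the-envelope check on the fundraising-reach math from the call.
total_alumni = 190_000
staff_capacity = 10_000                          # alumni the human team can cover
agent_coverage = total_alumni - staff_capacity   # 180,000 left to the agent

reach_multiple = agent_coverage / staff_capacity
print(f"Development Agent extends reach by {reach_multiple:.0f}x")  # prints 18x
```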

The pricing model matters here too. Blackbaud is structuring Agents for Good as an annual subscription with multiyear contracts — meaning the agent becomes a recurring revenue line rather than a one-time upsell. More than 20% of customers are already asking to move to four-year or longer renewal contracts, which speaks to the depth of platform dependency and the confidence customers have in Blackbaud’s roadmap. 2026 guidance of 4–4.5% revenue growth explicitly assumes no meaningful AI product revenue contribution.

The internal AI story is equally substantive. Every employee at Blackbaud has been required to complete AI training, the entire engineering team is using GitHub Copilot and Anthropic Claude for code generation, bug remediation, and new product development, and Blackbaud AI Chat — embedded within the system of record and leveraging customer and proprietary benchmark data — saw daily usage grow 5x since October. The company also cited three structural efficiency drivers it’s pursuing simultaneously: geographic workforce diversification through a growing India office, closure of the last two legacy data centers, and AI-driven internal productivity.

What makes Blackbaud particularly interesting in the context of this broader series is the mission-alignment dimension. Nonprofits, universities, and foundations are not just technology buyers — they have deeply held values around data privacy, ethical AI use, and donor trust. Blackbaud’s focus on cybersecurity and AI governance for ethical data use isn’t just a compliance checkbox — it’s a competitive moat in a market where the institutions writing the checks care deeply about how their donor data is used. A generic AI tool that scrapes public data and surfaces cold outreach isn’t an acceptable substitute for a Development Agent that operates within a trusted, governed, sector-specific system of record. The nonprofit sector’s inherent conservatism around technology is, in this framing, not a headwind for Blackbaud — it’s a barrier to entry for every competitor trying to break in.

Cloudflare

CEO Matthew Prince positioned the company at the intersection of two forces reshaping the internet simultaneously: the explosion of AI agents and the resulting question of who governs how those agents traverse the web.

Start with the fundamentals, which are genuinely excellent. Q4 revenue grew 34% year-over-year to $614.5 million — the third consecutive quarter of acceleration — with large customers contributing 73% of total revenue, dollar-based net retention jumping 9 percentage points to 120%, and million-dollar customers growing 55% year-over-year. New ACV bookings grew nearly 50% year-over-year with both year-over-year and sequential acceleration, and RPO grew 48% to $2.5 billion — the kind of forward revenue visibility that gives confidence in sustained growth. The largest annual contract value deal in company history — $42.5 million per year — closed in the quarter. This is a business hitting its stride.
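For readers less familiar with the metric, dollar-based net retention measures how a year-ago customer cohort’s spend has changed, expansion net of downgrades and churn. A minimal sketch of the standard calculation (generic definition with hypothetical numbers; Cloudflare’s exact methodology may differ):

```python
def net_retention(beginning_arr, expansion, contraction, churn):
    """Ending ARR of a year-ago customer cohort divided by its beginning ARR."""
    ending_arr = beginning_arr + expansion - contraction - churn
    return ending_arr / beginning_arr

# Hypothetical cohort ($M): 100 a year ago, +25 expansion, -2 downgrades, -3 churned
print(f"{net_retention(100, 25, 2, 3):.0%}")  # prints 120%
```

Anything above 100% means the existing customer base grows on its own, before any new logos are added, which is why the 9-point jump to 120% matters so much.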

But the more important conversation on the call was about what’s happening to Cloudflare’s network itself. Prince reported that over the month of January alone, the number of weekly requests generated by AI agents more than doubled across the Cloudflare network. That single data point is extraordinary in what it implies. Cloudflare sits between virtually every significant website and the internet — it processes roughly 20% of all web traffic globally. The fact that AI agent-generated traffic is doubling on a monthly basis means the traffic composition of the entire internet is changing, and Cloudflare is uniquely positioned to observe, measure, and increasingly govern that change.

This leads to the most forward-looking and genuinely novel part of the call — Prince’s framing of Cloudflare as a neutral broker between AI companies and content creators. AI models train on internet content, and the commercial relationship between those who create content and those who consume it for training purposes is still being worked out. Prince argued that AI companies and content creators alike are looking to Cloudflare as a trusted neutral third party — that both sides would rather Cloudflare figure out what the future business model looks like than have a hyperscaler do it, since hyperscalers are themselves building foundational models and may have conflicting incentives. Cloudflare’s network position — sitting in the middle of every request, trusted by both sides, with no foundational model of its own — makes it a plausible honest broker in a way that no hyperscaler or AI company can credibly claim to be.

The developer platform story adds another dimension. Cloudflare exited 2025 with more than 4.5 million human developers active on the platform — and that number will soon be joined by AI agents as first-class citizens of the Cloudflare ecosystem. Workers AI (Cloudflare’s inference-at-the-edge product), the AI Gateway (which manages and monitors AI API calls), and the growing suite of developer tools mean that Cloudflare is not just routing traffic between AI agents and the web — it’s becoming the infrastructure layer where AI applications are built, deployed, and governed. The pool-of-funds contract model, where enterprises commit a pool of spend and draw it down across any Cloudflare product, is particularly well-suited to AI workloads where consumption is hard to predict but the underlying dependency is clear.

The gross margin story requires brief acknowledgment — gross margin came in at 74.9%, slightly below the long-term target range of 75–77%, as Cloudflare allocated more network expenses to cost of revenue to better reflect AI infrastructure investment. This is the same infrastructure cost dynamic playing out across virtually every company in this series, and Cloudflare’s handling of it is conservative and transparent. With $4.1 billion in cash and full-year 2026 revenue guidance of $2.785–2.795 billion implying 28–29% growth, the balance sheet and growth profile are strong enough to absorb the investment cycle comfortably.

The bigger picture is this: Cloudflare is one of the few companies in enterprise software that isn’t just using AI or selling AI products — it’s building the infrastructure that AI itself needs to function reliably at internet scale. Every AI agent that browses the web, every model that calls an API, every application that serves AI-generated content — all of it flows through infrastructure that Cloudflare operates.

Freshworks

Freshworks’ Q4 2025 call was defined by a genuine milestone — first-ever GAAP profitability for a full year — and by a strategic clarity about where AI fits in the growth equation that was more concrete than most companies in this series have delivered. CEO Dennis Woodside’s framing was direct: “AI is not just a feature in our products, it’s a standalone revenue line delivering measurable value to our customers.”

The numbers behind that claim are still early but directionally meaningful. Freddy AI crossed the 8,000 customer mark for paying AI customers, with over $25 million in ARR that nearly doubled year-over-year. Against a total ARR base of $907 million, $25 million is still a small fraction, but doubling year-over-year from a base of paying customers (not just users) is a credible sign of genuine willingness-to-pay rather than just feature adoption. The company’s target is $100 million in AI-driven ARR over the next three years, a goal that would represent roughly 11% of their current ARR base, achievable if the doubling cadence holds.
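As a sanity check on that target (illustrative arithmetic only; the $25M starting point and the $100M three-year goal are the figures from the call, everything else follows from them):

```python
# Illustrative check of the Freddy AI ARR target:
# $25M ARR today, $100M target in three years (figures from the call).
current_arr = 25.0   # $M
target_arr = 100.0   # $M
years = 3

# Compound annual growth rate needed to hit the target exactly on schedule.
required_cagr = (target_arr / current_arr) ** (1 / years) - 1
print(f"Required CAGR: {required_cagr:.1%}")  # Required CAGR: 58.7%

# If the current ~2x/year cadence simply holds, the target is hit a year early.
arr = current_arr
for year in range(1, years + 1):
    arr *= 2
    print(f"Year {year}: ${arr:.0f}M ARR")   # $50M, $100M, $200M
```

In other words, the target tolerates substantial deceleration from the current doubling pace: roughly 59% annual growth suffices, versus the ~100% Freddy AI posted this year.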

The business model for Freddy AI deserves attention. Freshworks is scaling usage and demonstrating value through session-based pricing for AI agents, meaning customers pay per interaction or session rather than per seat. This is a deliberate departure from pure seat-based SaaS and positions Freshworks to capture more value as AI agents handle more tickets, resolve more queries, and execute more workflows autonomously. For IT service desks and customer support teams, session-based pricing maps cleanly to outcomes: if the AI agent resolves 40% of tickets without human intervention, the customer pays for those sessions and the ROI is immediately visible. That alignment between price and value is one of the cleaner AI monetization models in this series.
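A minimal sketch of how session-based pricing maps price to value, using assumed numbers (the per-session rate, ticket volume, and human cost below are hypothetical for illustration, not Freshworks’ actual pricing):

```python
# Hypothetical economics of session-based AI agent pricing.
monthly_tickets = 10_000
ai_resolution_rate = 0.40     # share of tickets the AI agent resolves alone (from the 40% example)
price_per_session = 0.75      # $ per billed AI session (assumed rate)
human_cost_per_ticket = 6.00  # $ fully-loaded cost of a human resolution (assumed)

ai_sessions = monthly_tickets * ai_resolution_rate
ai_cost = ai_sessions * price_per_session
avoided_cost = ai_sessions * human_cost_per_ticket

print(f"AI sessions billed: {ai_sessions:.0f}")                    # 4000
print(f"AI cost: ${ai_cost:,.0f} vs cost avoided: ${avoided_cost:,.0f}")
print(f"Net monthly savings: ${avoided_cost - ai_cost:,.0f}")      # $21,000
```

Because the bill scales with the sessions the agent actually handles, a customer whose resolution rate rises pays more but saves proportionally more — the price-to-outcome alignment described above.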

The two-speed nature of the Freshworks business is important context. The Employee Experience (EX) business — Freshservice for IT and employee service management — crossed $500 million in ARR with 26% year-over-year growth, clearly the growth engine. The Customer Experience (CX) business — Freshdesk for customer support — is being managed more defensively, with focus on retention and unification rather than aggressive growth. AI is more naturally embedded in the EX motion because IT service management has well-defined, repetitive workflows (ticket routing, password resets, software provisioning) where agentic AI can replace or dramatically accelerate human effort with high confidence. Customer service interactions tend to be more variable and emotionally complex, which makes autonomous AI resolution both harder and higher-stakes.

The platform buildout — ITSM, ITOM, ITAM, and ESM unified through the acquisitions of Device42 and Fire Hydrant — is the architectural foundation that makes the AI story more compelling. Bringing IT service management, IT operations management, IT asset management, and enterprise service management under one cohesive roof means AI agents operating within Freshservice have access to a much richer context layer: not just the support ticket, but the asset involved, the operational status of related systems, and the history of similar incidents. That contextual richness is what separates an AI agent that can genuinely resolve an IT issue from one that can only acknowledge it.

The honest challenge for Freshworks is the growth rate trajectory. Revenue growth decelerated from 22% in Q4 2024 to 14% in Q4 2025, and full-year 2026 guidance of $952–960 million implies 13.5–14.5% growth: respectable, but not an acceleration. The market is watching whether AI monetization can provide a growth catalyst that reverses the deceleration, and the answer isn’t yet visible in the numbers. What is visible is a company that has achieved financial discipline, first-ever GAAP profitability, a clear AI monetization model, and a coherent strategy for the mid-market ITSM space where ServiceNow is too expensive and legacy tools aren’t AI-capable. If the $100 million AI ARR target is achievable on schedule, it would represent a meaningful step toward reaccelerating growth from a stronger profitability foundation — but 2026 is the year that hypothesis gets tested.

Klaviyo

Klaviyo’s Q4 2025 call was one of the cleanest AI impact stories in this entire series — not because the AI revenue numbers are large yet, but because the customer outcome data is unusually concrete and the strategic positioning is genuinely differentiated. Co-CEO Andrew Bialecki’s framing was direct: “The future is autonomous customer experiences.”

The business fundamentals provide a strong foundation. Full-year revenue reached $1.23 billion, up 32%, with Q4 at $350 million, up 30% — and 2026 guidance of $1.50–1.51 billion implies 21.5–22.5% growth. More impressive is the operating leverage: non-GAAP operating expenses were 58% of revenue, the lowest level since the IPO, with AI driving meaningful internal productivity and enabling faster development cycles without commensurate headcount growth. This is the AI efficiency dividend theme appearing again — Klaviyo is building faster and spending less as a percentage of revenue, not because of austerity but because internal AI tools are compounding developer output.

The AI story for customers centers on what Klaviyo calls the autonomous B2C CRM — essentially, AI that can plan, create, send, and optimize marketing campaigns with minimal human intervention, using the rich first-party data that merchants have accumulated within Klaviyo about their customers’ browsing behavior, purchase history, and engagement patterns. The outcome data is striking: more than 50% of marketing campaigns from customers using Marketing Agent are now generated by AI, with some customers achieving a 50% increase in open rates and a 40% rise in revenue per campaign.

The key competitive question on the call — pressed directly by Goldman Sachs — was about what prevents an AI-native competitor from replicating Klaviyo’s advantage. Bialecki’s answer centered on the proprietary context that Klaviyo holds: the company’s advantage lies in its extensive dataset and infrastructure designed specifically for real-time use cases — allowing for real-time decisions and personalization that are difficult to replicate. This is the Klaviyo data moat argument: a general-purpose AI model doesn’t know that a specific customer browsed a particular product twice this week, abandoned their cart on Wednesday, opened a discount email last month but didn’t convert, and has a lifetime value pattern consistent with high-margin repeat buyers. Klaviyo knows all of that in real time, and its AI models are trained on the behavioral patterns of hundreds of thousands of ecommerce merchants across billions of consumer interactions. Bialecki noted that Klaviyo Attributed Value — the revenue Klaviyo attributes to its platform — came to nearly $80 billion for customers last year, which is a remarkable demonstration of how central the platform has become to ecommerce revenue generation.

The newer service product — essentially AI-powered customer service built natively on the Klaviyo platform — is an important strategic expansion. Customer resolution rates increased by 20 points at reference customers, and agent-driven sales saw up to 111% growth, with the service category noted as the fastest-growing product launch in company history. What makes this expansion particularly logical is that the same first-party data that powers marketing personalization — purchase history, browsing behavior, preferences — is equally valuable for customer service. An agent that already knows you’re a repeat buyer, knows what you purchased last month, and knows your communication preferences can resolve a service inquiry far more effectively than a generic support bot. Klaviyo is threading marketing and service on the same data layer, which creates compounding value and makes the platform significantly harder to displace.

Management was conservative with guidance; the CFO explicitly said minimal service contribution is built into 2026 guidance, framing it as embedded upside rather than assumed revenue. Given the early but strong traction signals, that conservatism looks like it could create meaningful upside through the year as the service product adoption scales. The announced global partnership with Accenture and the doubling of million-dollar ARR customers are additional signals that Klaviyo is moving credibly upmarket — and an upmarket customer deploying Klaviyo across both marketing and service is a much stickier, higher-value relationship than a mid-market email marketing customer alone.

Datadog

Datadog’s Q4 2025 call was one of the strongest in the enterprise software sector this cycle, and the stock’s 16% surge on the day reflects how comprehensively it addressed investor concerns about AI’s net impact on the observability market. The headline numbers were excellent: Q4 revenue of $953 million beat the high end of guidance, up 29% year-over-year, with bookings of $1.63 billion representing 37% growth and including 18 deals over $10 million in TCV and two over $100 million. But the more interesting story is about the structure of that growth and what it tells you about how AI is reshaping Datadog’s business.

The most analytically important disclosure on the call was the explicit breakout of AI-native vs. non-AI-native customer growth. CEO Olivier Pomel confirmed what investors had been anxious about — that AI-native companies were a large portion of Datadog’s growth but also a concentrated risk — and then gave the answer the market needed: revenue growth from the broad base of customers excluding AI natives accelerated to 23% year-over-year in Q4, up from 20% in Q3. That acceleration in the non-AI cohort is the key signal.

The AI-native customer base is itself becoming more strategically significant. About 650 AI-native customers use Datadog, including 19 spending $1 million or more annually, with 14 of the top 20 AI-native companies as customers. That last statistic is worth sitting with: 14 of the 20 most important companies building AI infrastructure at scale have chosen Datadog as their observability platform. When those companies grow — as they are, dramatically — Datadog’s revenue from them grows consumption-proportionally. It’s a high-quality, high-growth cohort with deep platform dependency and limited churn risk given how embedded observability is in production operations.

The product story adds a forward-looking dimension that the revenue numbers don’t fully capture yet. MCP server tool calls rose 11-fold quarter-over-quarter in Q4 — a staggering acceleration in AI agent-driven requests flowing through Datadog’s infrastructure. This mirrors what Cloudflare observed on its network: AI agents are creating exponentially more monitoring and observability needs than human-driven applications because they operate continuously, at scale, without the natural pauses that human users introduce. An AI agent making thousands of API calls per hour needs its performance tracked, its errors flagged, its costs monitored, and its security posture verified, all of which are Datadog capabilities.

The AI SRE agent — Datadog’s flagship agentic product for automated incident response and root cause analysis — reached general availability in December and surpassed 2,000 trial and paying customers within a month. That pace of adoption is notable: it suggests both strong market need and a well-designed product that customers can get value from quickly. The AI SRE agent pairs with Datadog’s new OnCall product (which reached 3,000 customers for incident response), creating a loop where Datadog not only detects issues but can autonomously investigate and suggest — or execute — remediation. This is Datadog moving from observability toward what it calls “autonomous operations,” a positioning that closely mirrors Dynatrace’s strategic framing and that creates a much larger TAM than traditional monitoring alone.

The product breadth story is also becoming a real competitive moat. 84% of customers use two or more products, 55% use four or more, and 33% use six or more — metrics that have improved consistently year-over-year. In a market where AI workloads require observability, security, and performance monitoring to be integrated rather than siloed, Datadog’s ability to provide all of these from a single platform — with consistent data models, unified pricing, and no integration tax — is a structural advantage that becomes more valuable as AI application complexity increases. 48% of the Fortune 500 are already Datadog customers, with median ARR per Fortune 500 customer under $500,000 — suggesting enormous expansion headroom within an already-won customer base as those enterprises deepen their AI infrastructure.

The honest nuance on the call was the full-year 2026 guidance of 18–20% growth — a deceleration from the 29% Q4 print, partly explained by concentration in the AI-native cohort and the conservative guidance philosophy CFO David Obstler has consistently maintained. The AI-native companies that drove outsized growth in 2025 create difficult comparisons in 2026, and management is transparent about modeling conservatively on that cohort. But the underlying signal — non-AI-native acceleration, deepening multi-product adoption, an inflection in AI agent-driven usage, and the early momentum of the AI SRE agent — all point toward a business where the AI tailwinds are broadening rather than narrowing as the year progresses.

Shopify

Shopify’s Q4 2025 call was unlike any other in this series because President Harley Finkelstein wasn’t just reporting on AI’s impact on the business; he was announcing that Shopify intends to be the infrastructure layer for the entire new era of AI commerce. The call opened with unusual theatrics for an earnings call, invoking Tobi Lütke’s 2015 IPO vision, and the purpose was deliberate: to signal to the market that what Shopify is building in 2026 is not an incremental feature set but a foundational bet on how commerce itself will work when AI agents replace traditional search as the primary discovery and purchasing channel.

Q4 revenue reached $3.1 billion, up 31% year-over-year, and full-year 2025 revenue of $11.6 billion grew 30% — Shopify’s highest annual growth since 2021. B2B GMV grew 84% in Q4 and 96% for the full year, international revenue grew 36%, and Shop Pay processed $43 billion in Q4 GMV, exceeding 50% of US GPV for Shopify.

Now the AI story, which is Shopify’s most strategically distinctive narrative in this entire series. Since January 2025, orders coming to Shopify stores from AI search have grown 15x. That’s from a small base, but 15x in 12 months is not a rounding error; it’s evidence that AI-powered shopping is moving from novelty to channel. What makes this uniquely important for Shopify is how the company is positioned relative to that shift. When an AI shopping agent helps a consumer find and purchase a product, the transaction still needs to flow through a checkout layer, with all the complexity that entails: inventory checks, payment processing, tax calculation, shipping logistics, fraud detection, and post-purchase fulfillment. Shopify’s checkout process is not bypassed by AI agents; the economics for merchants remain the same as if the transaction occurred in the online store, with the full backend of commerce continuing to flow through Shopify. In other words, AI is creating new discovery channels, but Shopify remains the execution layer underneath all of them.

The Universal Commerce Protocol co-developed with Google is Shopify’s bid to codify this position as a standard. UCP is described as “the only protocol that covers the full commerce journey, end-to-end,” and is payment-agnostic. The strategic intent is clear: just as HTTP became the standard that the web runs on, Shopify wants UCP to become the standard that agentic commerce runs on. When analysts pressed on the competitive threat from ACP, the competing standard proposed by OpenAI and Stripe, Finkelstein’s response emphasized that UCP is designed as a comprehensive protocol covering everything from search to post-order, maintaining the merchant’s checkout logic regardless of what AI agent surfaces the transaction. The “payment-agnostic” framing is notable: Shopify is saying it doesn’t need to own the payment to benefit from the transaction, which lowers the barrier to adoption for merchants and platforms that might otherwise resist ceding the payment relationship.

The Sidekick AI features tell a parallel story about AI’s impact on merchant operations rather than just discovery. Sidekick generated almost 4,000 custom apps, created over 29,000 automations, and edited 1.2 million photos within just three weeks of a feature launch. For a merchant managing a growing ecommerce business (product catalog, marketing campaigns, customer service, inventory management), these are capabilities that previously required a developer or a specialized agency. AI is making Shopify merchants dramatically more operationally capable without proportional headcount growth, which deepens platform dependency and expands the addressable market to entrepreneurs who previously couldn’t afford to operate at scale.

The biggest investment implication from this call is one that runs throughout this entire series but is most vivid here: Shopify is not merely an AI adopter or an AI-enhanced product; it is positioning itself as the commerce infrastructure that AI agents must connect to in order to transact in the physical economy. Agentic commerce is viewed as an evolution of existing channels, giving merchants flexibility to add or remove channels as needed, and Shopify’s platform, with its integrated suite of payments, inventory, tax, and analytics, is what makes that flexibility possible at scale. If the trajectory of AI-driven shopping continues, Shopify’s dual position as both the merchant operating system and the commerce protocol that AI agents plug into could represent one of the most structurally advantaged positions in the AI economy.

Q2 Holdings

Q2 Holdings occupies a peculiar position in the enterprise software landscape: it’s a company that has been quietly executing a multi-year transformation into a profitable growth business while serving one of the most conservative, compliance-bound industries in the economy: community and regional banks and credit unions. The Q4 2025 call was, in essence, the culmination of that three-year transformation, and the AI story running through it is less about flashy product launches and more about what becomes possible when the foundational infrastructure finally catches up to the strategic ambition.

The financial discipline story is worth leading with. Full-year 2025 revenue of $794.8 million grew 14%, Q2’s highest annual growth rate since 2021, while adjusted EBITDA expanded 49% to $186.5 million, with free cash flow conversion of 93%. Q4 subscription revenue grew 16% year-over-year, EBITDA margins expanded 400+ basis points, and the company finished with a $2.7 billion backlog, up 21% year-over-year.

The completion of Q2’s cloud migration in January 2026 is the single most important structural event in the company’s recent history, and it connects directly to the AI opportunity. For years, Q2 operated across a hybrid of legacy on-premise infrastructure and cloud environments, which constrained both gross margins and the ability to deploy AI capabilities consistently across the customer base. The cloud migration is now complete, and management described it as the single biggest lever for achieving their 60%+ gross margin target. The less-discussed implication is that a fully cloud-native architecture is also the prerequisite for embedding AI across Q2’s platform in a way that reaches all customers simultaneously rather than through a patchy rollout constrained by infrastructure heterogeneity.

The most strategically important AI claim on the call was CEO Matt Flake’s assertion that “AI innovation within financial services will flow through Q2, not around us.” That’s a bold statement, and worth unpacking. Q2’s argument is that AI in banking can’t simply be dropped in from the outside; it requires deep integration with the core digital banking platform, where customer identity, transaction history, behavioral patterns, and risk profiles live. Generic AI tools built by fintech startups or AI companies don’t have access to that data; Q2 does, because it is the system of record for the digital banking experience at hundreds of financial institutions. The Innovation Studio framework, Q2’s open development platform, is the vehicle through which fintechs and partners can build AI-powered applications on top of Q2’s infrastructure, keeping Q2 at the center of the innovation ecosystem rather than being bypassed by it.

The fraud and risk opportunity is where the AI story becomes most concrete near-term. Risk and fraud has emerged as one of the most strategically important areas in Q2’s portfolio, with fraud now described as continuous, cross-channel, and embedded in nearly every digital interaction. Q2 secured its largest fraud technology contract to date with a $200 billion bank in Q4, a landmark win that validates the platform’s relevance at the very top of the market. The argument management makes is compelling: AI-powered fraud detection that can synthesize signals across retail banking, small business, and commercial in real time is fundamentally different from the fragmented point solutions banks have historically deployed. Q2’s integrated platform, sitting across all of those channels simultaneously, is uniquely positioned to offer that cross-channel view in a way that no standalone fraud vendor can match without first solving an integration problem.

The cross-sell opportunity within the existing customer base is also notable. Only 10% of Q2’s Tier 1 customer base has all three of its core solutions — retail digital banking, commercial digital banking, and relationship pricing/fraud — and only 25–30% of digital banking customers have adopted fraud products. That penetration gap represents an enormous amount of runway for expansion revenue growth within an already-won, already-contracted customer base, exactly the kind of expansion that AI-powered products will accelerate as banks feel growing urgency around fraud prevention and competitive differentiation in digital banking experience. The financial institution sector’s conservatism, which has historically slowed Q2’s growth, is now working in its favor: banks are sticky, they don’t churn lightly, and when they do adopt AI solutions, they go deep on a trusted platform rather than experimenting broadly with unproven tools.

Hubspot

HubSpot’s Q4 2025 call was one of the more intellectually substantive in this series, because CEO Yamini Rangan was explicitly grappling with a question that most companies in this series prefer to answer only obliquely: in a world where AI can generate marketing content, qualify leads, answer customer inquiries, and analyze pipeline data autonomously, what is the role of a human-operated CRM platform like HubSpot? Her answer, that the gap between generating AI output and driving actual growth outcomes is where HubSpot wins, is the central thesis of the company’s 2026 strategy.

Full-year 2025 revenue grew 18.2% to $3.1 billion, with Q4 at 18% constant currency growth and 20% as reported, alongside a Q4 operating margin of 22.6% — the kind of growth-and-margin combination that puts HubSpot in the upper tier of scaled SaaS businesses. Net new ARR grew 24% in 2025, six points above constant currency revenue growth, a leading indicator that the demand pipeline is strengthening, not weakening, as AI proliferates.

The AI product metrics are more concrete than most companies in this series have delivered. Customer Agent, HubSpot’s AI support agent, was activated by over 8,000 customers with mid-sixties percent resolution rates, meaning it resolved roughly two-thirds of customer inquiries without human intervention. Prospecting Agent was activated by over 10,000 customers, up 57% quarter-over-quarter, and Data Agent was activated by 2,500+ customers. The usage-based credit consumption data is particularly useful for understanding where AI is actually generating value: Customer Agent accounted for approximately 60% of credits consumed in Q4, with Prospecting Agent, Data Agent, and intent monitoring each representing 10–15% of credits. That distribution tells you that the customer service use case is the most advanced and most adopted, while prospecting and data enrichment are emerging as the next wave.

The customer outcome data is where Rangan built her AI credibility argument most effectively. Customer Agent users were booking nearly twice as many meetings compared to the prior year, a striking productivity improvement that goes directly to the growth outcomes that HubSpot’s customers care about. This is Rangan’s core thesis made concrete: generic AI can generate text, but HubSpot’s AI, embedded in a platform that knows your contacts, deal history, customer interactions, and competitive context, can actually drive measurable revenue growth because it has the context to make good decisions, not just the capability to generate outputs.

CTO Dharmesh Shah’s presence on the call was notable; he appeared specifically to address the competitive threat from external AI agents and the question of whether tools like Claude or ChatGPT connectors could route around HubSpot entirely. Shah noted increased utilization of HubSpot’s Claude and ChatGPT connectors by leading-edge customers but no material uptake yet for third-party Claude Co-Work-style features that might disintermediate the platform. The implication is that HubSpot is watching this carefully and doesn’t believe the disintermediation risk is imminent; but the fact that Shah was on the call to address it signals that management takes the question seriously. HubSpot’s answer is that context and integration are the moat, not the AI model itself, and that a customer who tries to use a generic AI agent for sales and marketing quickly discovers they need the CRM data that HubSpot holds.

The internal AI adoption story adds the efficiency dividend dimension. 97% of code committed in 2025 used AI assistance, and nearly 60% of support is handled by AI internally, numbers that are among the highest cited by any company in this series.

The pricing transition is also worth understanding as context for the 2026 growth outlook. 90% of legacy customers have moved to HubSpot’s new pricing model, with nearly 50% of ARR through first renewal — meaning the structural shift that enables HubSpot to charge for AI usage (through the credit model) is nearly complete. As that base continues to expand and AI credit consumption grows, HubSpot gains a consumption-based revenue layer on top of its subscription base — the same hybrid model that several companies in this series are pursuing, and one that creates upside to guidance as AI usage scales.
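That hybrid structure can be sketched as a simple invoice calculation. All the rates and bundle sizes below are hypothetical stand-ins, since actual credit pricing wasn’t disclosed on the call:

```python
# Sketch of a hybrid subscription + consumption invoice (hypothetical rates).
BASE_SUBSCRIPTION = 1_200.0   # $/month for seats (assumed)
INCLUDED_CREDITS = 10_000     # credits bundled with the subscription (assumed)
OVERAGE_RATE = 0.01           # $ per credit beyond the bundle (assumed)

def monthly_invoice(credits_consumed: int) -> float:
    """Subscription floor plus a consumption layer for credits over the bundle."""
    overage = max(0, credits_consumed - INCLUDED_CREDITS)
    return BASE_SUBSCRIPTION + overage * OVERAGE_RATE

print(monthly_invoice(8_000))   # light AI usage: subscription only -> 1200.0
print(monthly_invoice(25_000))  # heavy AI usage adds consumption -> 1350.0
```

The subscription floor preserves revenue predictability while the overage term lets revenue grow with AI agent usage — the guidance upside described above.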

Paycom

Paycom’s Q4 2025 call was a study in the tension between a genuinely innovative automation story and a growth profile that tells a more modest tale. CEO Chad Richison has been one of the most consistent voices in HCM software around the idea that payroll and HR can be fully automated, not just assisted, and that the end state is a system that employees interact with directly to manage their own HR experiences without HR professionals as intermediaries. The 2026 guidance of 6–7% total revenue growth and 7–8% recurring revenue growth is conservative by any standard, and it sits in sharp contrast to the aspirational language around full solution automation and AI-driven decisioning that dominated the prepared remarks.

The product story centers on two flagship innovations. BETI (Better Employee Transaction Interface) pioneered the concept of employee-driven payroll — employees review and approve their own payroll before it’s processed, catching errors at the source rather than after the fact. IWant is the newer AI-powered layer on top, functioning as a command-driven natural language interface that lets managers, executives, and HR professionals ask questions and get immediate, contextual answers from within the Paycom system of record. Leaders describe IWant as a catalyst for deeper insight, with one CEO noting they can go in without training and immediately understand more about their business — and the product saw usage increase 80% in January alone from Q4 levels.

The quantified ROI story is one of the most specific in this series: managers save as many as 600 hours per year, executives up to 60 hours, HR teams up to 240 hours, and employees across the organization collectively reclaim 3,600 hours annually. For a mid-market company where HR is often a small team wearing multiple hats, 240 hours reclaimed per HR professional is a genuinely meaningful productivity improvement — equivalent to roughly six weeks of additional capacity.

The challenge the market is wrestling with is whether these product innovations translate into accelerating revenue, and the 2026 guide suggests the answer is not yet — at least not dramatically. Annual revenue retention improved to 91% from 90% — a positive directional signal, but still below where best-in-class HCM platforms tend to operate. Client count grew approximately 5% in 2025, suggesting the automation and AI narrative is not yet meaningfully pulling new customers off the sidelines at an accelerating pace.

The “full solution automation” positioning is Paycom’s most distinctive strategic bet, and one that has an interesting read-across to the broader theme running through this series. Richison’s core thesis is that most enterprise software companies offer a buffet, a collection of features that customers pick and choose from, deploying them partially and leaving significant value on the table. Paycom’s vision is closer to a tasting menu: a fully integrated, automated system where every component works together and the customer doesn’t have to decide which pieces to adopt because the system is designed to run end-to-end without human intervention. The goal is “full solution automation to where you buy it, you configure it, and it does everything else for you.” That’s an AI-native product philosophy, not an AI-enhanced legacy product — and it’s the right framework for the era we’re in. The question is whether Paycom, which estimates it has penetrated only 5% of its total addressable market, can accelerate that penetration with a message that requires enterprises to think differently about how they deploy HR software; that message still carries a slower sales cycle than the technology itself might warrant.

Unity

Unity’s Q4 2025 call was structured around a single central thesis: the IronSource story is ending and the Vector story is just beginning. Once you understand what Vector actually is and what data it has access to, the bull case becomes considerably more interesting than the headline numbers suggest. The stock’s 33% decline on earnings day reflected investor frustration with near-term revenue softness and the ongoing IronSource drag. But the underlying dynamics that management laid out on the call point toward a business in genuine strategic transition rather than structural decline.

The mechanics first. Vector delivered its third straight quarter of mid-teen sequential growth and has grown 53% in the three quarters since launch, with January 2026 setting an all-time monthly record, 72% higher than the prior January and surpassing December’s holiday peak. That January figure is particularly striking because January is typically a seasonally weak month for mobile advertising; growing 72% year-over-year in what should be a trough month signals underlying demand that isn’t just cyclical. By the end of 2026, management expects Vector’s revenue run rate to be comfortably above $1 billion on an annualized basis, a target that, if achieved, would represent one of the fastest-scaling AI advertising products in the industry.

The reason Vector is strategically distinctive in the context of this series is the data it sits on top of. Unity is the game engine used to build a substantial portion of the world’s mobile games — meaning that when games built on Unity are played, Unity can observe (with developer consent) highly granular, real-time behavioral data about how users actually interact with those games: what they click on, where they spend time, what in-app purchases they make, how their engagement patterns evolve over sessions and days. Customer opt-in rates to the developer data framework exceed 90% — meaning almost the entire Unity developer ecosystem has consented to share runtime behavioral data. Over Q1, Unity is scaling testing of runtime engine data with the expectation it will be live in Vector during Q2 — and management was explicit that this isn’t expected to be a sudden inflection but rather a compound improvement in model quality over time.

This is a data moat with a specific character: it’s not purchase history or demographic data (which many ad platforms have), it’s behavioral engagement data from the actual gameplay experience, observed in real time across hundreds of millions of devices. An ad targeting model trained on that signal can predict which users are likely to convert on an in-app purchase with significantly more precision than models trained on web browsing or social media behavior, because the signal is more direct and less noisy. Management framed the transition from IronSource to Vector explicitly as a shift from commoditized lower-margin ad network revenue to deeply differentiated AI platform revenue — and the margin improvement story confirms that framing: EBITDA margins expanded to 22% for the full year with free cash flow up 41% to over $400 million.

The Create business — Unity’s game engine and development tools — adds a second AI dimension that gets less attention but matters strategically. Unity 6 is showing the highest adoption speed of any major release, Create revenue grew 16% excluding non-strategic items, and the China business grew nearly 50% driven by ecosystem interoperability. The 2026 product roadmap for Create centers on two themes: browser-based authoring (no download required, making Unity accessible to a dramatically wider developer audience) and AI authoring tools that make game creation accessible to non-professional developers. If AI lowers the barrier to creating interactive content — games, simulations, virtual experiences — the long-run result is more content built on Unity, which generates more behavioral data, which improves Vector’s models. The two businesses are more symbiotically linked than their separate revenue reporting suggests.

The honest tension on the call — and the reason the stock reacted so harshly — is that the IronSource decline is still creating near-term revenue headwinds that mask the Vector growth story. IronSource declined to 11% of Grow revenue in Q4 and is expected to fall below 6% of total company revenue in Q1, which means the drag is largely behind them — but “largely behind” is not “fully behind,” and investors who’ve been waiting for the Vector acceleration to show up clearly in reported revenue numbers are still waiting for the noise to clear. The 2026 setup, with IronSource essentially a rounding error, should finally allow Vector’s growth to be visible in the headline numbers. Whether management’s conviction in Vector’s trajectory — and in the data moat that underlies it — proves correct is the central investment question for this company.

Twilio

After years of prioritizing growth over profitability and facing legitimate questions about whether its communications API business had the kind of defensibility that justifies a premium multiple, Twilio delivered: record quarterly revenue of $1.4 billion, $256 million of non-GAAP operating income, $256 million of free cash flow, and for the full year, $5.1 billion in revenue with $924 million of non-GAAP operating income and $945 million of free cash flow — its first full year of GAAP profitability.

But the more interesting story on the call was the AI positioning, and specifically the claim that Twilio is not being disrupted by AI but is instead becoming more essential because of it. CEO Khozema Shipchandler made this argument directly: “We are moving beyond being a provider of communications channels and data toward becoming a foundational infrastructure layer in the age of AI, providing customers with the foundational infrastructure layer that embeds persistence, memory, context, and the ability to spin up an agent, no matter what its capabilities are, all on the Twilio platform.” That’s a meaningful repositioning of what Twilio is. Not just SMS and voice APIs, but the communications backbone through which AI agents interact with the world.

The Voice AI story is the clearest near-term evidence of this transition. Voice AI revenue grew above 60% year-over-year in Q4, the fastest growing segment in the business, as enterprises deploy AI agents for customer service, sales automation, and appointment management that need to make and receive phone calls, handle conversations, and navigate real-time voice interactions. Twilio sits in the middle of those calls: it’s the infrastructure that routes the audio, handles the telephony, manages the compliance, and increasingly provides the AI scaffolding that makes those agents work. Voice AI agents are being integrated into core customer care and sales automation platforms, with broad adoption across all customer cohorts and significant interest from the ISV community.

RCS — Rich Communication Services — is the other emerging growth story worth understanding. RCS saw a 5x sequential increase in volume, with 70%+ open rates on messages — dramatically higher than traditional SMS. As carriers and device manufacturers roll out RCS more broadly, branded, interactive messages (complete with product images, carousels, and click-to-buy buttons) are becoming the standard for business-to-consumer communication. Twilio is well-positioned here because RCS requires the same carrier relationships and A2P messaging infrastructure that Twilio has spent a decade building. Management was appropriately cautious about the pace of RCS adoption — it’s still growing from a smaller base — but the 70%+ open rates versus the 20–25% typical for email marketing are a compelling data point about the channel’s value proposition.

The honest tension on the call was around gross margins. Non-GAAP gross margin declined to 49.9% in Q4 — down 200 basis points year-over-year — and the company expects an additional 170 basis point reduction in 2026 due to approximately $190 million in incremental carrier pass-through fees. Management was transparent that these fees compress margin percentages but don’t harm profit dollars or cash generation — a crucial distinction. The carrier fees are largely pass-through costs associated with revenue growth, and Twilio’s operating expense discipline (non-GAAP operating expenses declined 1% year-over-year) means the dollar-level profitability is expanding even as the percentage margin faces pressure.
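The pass-through dynamic is simple arithmetic worth seeing concretely. A minimal sketch below (the dollar figures are hypothetical, not Twilio’s actual unit economics) shows why billing carrier fees through to customers at roughly zero markup compresses the gross margin percentage while leaving gross profit dollars untouched:

```python
# Illustrative only: pass-through fees compress gross margin %
# without reducing gross profit dollars. All numbers are hypothetical.

def gross_margin(revenue, cogs):
    """Return (gross profit in dollars, gross margin as a percentage)."""
    profit = revenue - cogs
    return profit, 100 * profit / revenue

# Baseline business: $1,000 of revenue at a 50% gross margin.
base_profit, base_pct = gross_margin(1_000, 500)

# Add $190 of carrier fees billed through to customers at zero markup:
# revenue and cost of revenue both rise by the same $190.
pt_profit, pt_pct = gross_margin(1_000 + 190, 500 + 190)

print(f"baseline:          ${base_profit} profit, {base_pct:.1f}% margin")
print(f"with pass-through: ${pt_profit} profit, {pt_pct:.1f}% margin")
```

Profit dollars are identical in both cases; only the percentage falls (from 50.0% to roughly 42.0% in this toy example), which is the distinction management was drawing between margin compression and actual profit impact.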

The strategic picture Twilio is painting is one of a company that owns the communication nervous system that AI agents need to interact with humans and each other at scale. Every outbound notification, every inbound voice call, every RCS message, every two-factor authentication — as AI agents proliferate and their need to communicate across channels grows, the infrastructure layer that manages those communications becomes more valuable and higher volume. Analyst coverage noted Twilio’s combination of omnichannel communications, contextual data, AI frameworks, developer base, and technology partnerships as making it “the company to beat in CPaaS AI.” Whether that competitive position is durable against well-funded challengers — and whether the 8–9% organic revenue growth guided for 2026 can re-accelerate as AI agent deployment scales — is the central question investors are now evaluating against a much cleaner financial backdrop than Twilio has had in years.

Procore

Procore’s Q4 2025 call was anchored by a CEO who spent significant time making a case that most enterprise software companies can only aspire to make: that their core business is strong, their AI adoption is real and growing, and — crucially — their strong results were achieved *before* any material top-line contribution from AI. “Our strong results and momentum were all achieved before any material top line benefits from AI that we expect to realize in the future,” CEO Ajay Gopal stated directly.

The underlying business metrics are genuinely strong for a company in the construction technology space. Q4 revenue grew 15.6% to $349 million, $100,000+ ARR customers grew 20% year-over-year to more than 2,700, and million-dollar ARR customers grew 34% to 115 customers. Procore Pay — the embedded payments product — grew customers 70%+ year-over-year, adding a financial services revenue layer on top of the software subscription that mirrors what several other vertical SaaS companies in this series are building. RPO growth of 22% with longer average contract durations signals that customers are deepening their commitment, not hedging.

The AI story at Procore operates on two levels that deserve separate attention. The first is current adoption. 66,000 unique active users are using Procore AI, and nearly 700 customers have created thousands of agents on the Procore platform. For a construction software company serving an industry not known for rapid technology adoption, that’s a meaningful early signal. The construction sector is uniquely data-rich — every project generates enormous volumes of drawings, specifications, RFIs, submittals, safety reports, schedule updates, and financial data — and AI that can make sense of that complexity has an immediate and obvious ROI case for project managers, superintendents, and owners who are currently drowning in documents and spreadsheets.

The second and more strategic level is the Datagrid acquisition, which Procore is positioning as its AI engine for extracting intelligence from the unstructured data that pervades construction — blueprints, contracts, inspection reports, change orders. CEO Gopal called AI “an even more meaningful catalyst than any we’ve seen before” for the construction industry — a significant claim for an industry that has historically lagged in technology adoption. The argument is that AI can finally make sense of the extraordinary complexity and documentation burden of large construction projects, where a single major project can involve hundreds of subcontractors, thousands of documents, and billions of dollars flowing through an interconnected web of contracts and invoices.

The monetization strategy is still being developed, but the direction is clear. Gopal indicated Procore is likely to include AI offerings within upcoming bundles as part of new packaging, and also likely to include consumption-based components — the same hybrid approach emerging across this series. This is smart positioning: bundling AI into higher-tier packages drives ARPU expansion through upsell, while consumption-based components capture value from the most intensive users and create revenue that grows with AI workload volume rather than headcount.

There’s also a structural advantage in Procore’s pricing model that becomes more interesting in the context of AI. Unlike seat-based SaaS, Procore prices on construction volume — the dollar value of projects being managed. This insulates revenue from workforce reductions and enables efficiency-driven margin gains supported by agentic AI deployment — meaning if AI makes construction teams more productive and able to manage more projects per person, Procore actually benefits as construction volume scales, rather than being hurt by the headcount reductions that AI might cause.
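To make the insulation argument concrete, here is a toy comparison (hypothetical prices and rates, not Procore’s actual pricing) of seat-based versus volume-based revenue when AI lets a customer cut headcount while managing the same volume of construction:

```python
# Hypothetical comparison of seat-based vs. volume-based software pricing
# when AI shrinks a customer's team without shrinking its workload.

SEAT_PRICE = 1_200     # $ per seat per year (hypothetical)
VOLUME_RATE = 0.0005   # software fee per $ of construction volume (hypothetical)

def seat_revenue(seats):
    return seats * SEAT_PRICE

def volume_revenue(construction_dollars):
    return construction_dollars * VOLUME_RATE

# Before AI: 100 project staff managing $200M of construction volume.
before_seats = seat_revenue(100)
before_volume = volume_revenue(200_000_000)

# After AI: 70 staff manage the same $200M (30% headcount cut, flat volume).
after_seats = seat_revenue(70)
after_volume = volume_revenue(200_000_000)

print(f"seat-based:   ${before_seats:,} -> ${after_seats:,}")      # falls 30%
print(f"volume-based: ${before_volume:,.0f} -> ${after_volume:,.0f}")  # unchanged
```

Under seat pricing the vendor’s revenue falls one-for-one with the headcount cut; under volume pricing it is flat, and it grows if AI-assisted teams take on more project volume, which is the asymmetry Procore is betting on.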

The Procore story is also notable for a dimension unique in this series: the international hyperscaler data center win. A top international deal was secured with a UK data center hyperscaler, positioning Procore in the fast-growing AI infrastructure construction vertical. The irony is elegant — the physical construction of AI data centers is itself becoming a major construction category, and Procore is winning the software contracts to manage those builds. The companies spending $30–40 billion annually on AI infrastructure need project management software to coordinate those construction programs, and Procore is the leading platform for exactly that type of large, complex capital project.

JFrog

JFrog’s Q4 2025 call told a story that sits at a genuinely underappreciated intersection: the company that manages how software gets built and deployed is now becoming the company that manages how *AI* gets built and deployed — and the security, governance, and supply chain challenges in that transition are creating exactly the kind of complex, high-stakes problem that JFrog is purpose-built to solve.

The financial foundation is strong. Full-year revenue reached $531.8 million, up 24% year-over-year, with cloud revenue growing 45% to $243.3 million — now representing 46% of total revenue, up from 39% a year prior. Large customers spending more than $1 million annually grew 42% to 74, and customers spending over $100,000 grew 50% year-over-year — the kind of enterprise momentum that speaks to JFrog becoming more strategic to its customers, not less, despite competitive concerns about AI potentially automating away some of its core use cases.

The core thesis of JFrog’s AI positioning is worth spending time on, because it’s more specific and arguably more defensible than most. JFrog’s Artifactory platform is where enterprises store, manage, and distribute software artifacts — the compiled packages, libraries, containers, and binaries that make up modern software applications. In the AI era, a new category of artifact has emerged: AI models, training datasets, model weights, and the outputs of AI-generated code. JFrog announced the availability of AI Catalog and Agentic Remediation capabilities to address emerging challenges created by the introduction of AI models and agent-generated code — which is a direct response to the new security attack surface that AI creates. An enterprise deploying a downloaded open-source model from Hugging Face has the same supply chain security questions as one deploying a downloaded open-source library from PyPI: is this safe? Is it licensed correctly? Has it been scanned for vulnerabilities or malicious code?

The Hugging Face partnership is strategically clever precisely because it addresses this challenge at the source. CEO Shlomi Ben Haim described Hugging Face as “a top of funnel for JFrog because it is the open-source hub” — meaning the place where enterprises go to find and download AI models is now funneling those enterprises toward JFrog’s secure model registry. The NVIDIA Enterprise AI Factory partnership adds another dimension: enterprises building AI applications on NVIDIA infrastructure need their model artifacts managed, versioned, and secured in exactly the same way their software artifacts have always been managed. JFrog is positioning itself as the universal registry for both.

The security story is the fastest-growing part of the business. Security core — JFrog Advanced Security and Curation — represented over 10% of total ARR at year end, 7% of total revenue for the year, and comprised 16% of year-end RPO compared to 12% prior year. That acceleration in RPO contribution is significant: it means security is growing faster than the rest of the business and is being committed to on longer-term contracts, suggesting customers view JFrog’s security capabilities as a strategic investment rather than a discretionary add-on. The Curation capability — which screens packages for security vulnerabilities, license compliance, and operational risk *before* they reach developers — is particularly relevant in an AI context where the risk of supply chain attacks through malicious or compromised models is real and growing.

The MCP server integration mentioned on the call is one of the more forward-looking product signals. MCP server and JFrog Fly integrations support a shift toward business-to-agent market workflows, enabling code agents to interact directly with the platform. This is JFrog acknowledging that the new consumer of its artifact registry may not be a human developer fetching a package — it might be an AI coding agent automatically pulling dependencies as part of an autonomous software development workflow. Ensuring that those agent-driven interactions are secure, governed, and auditable is a new but critical requirement that JFrog’s existing infrastructure is uniquely suited to address.

The honest challenge in the JFrog story is the growth rate deceleration to 17.5% guided for 2026, from 24% in 2025. Management framed this partly as timing — some large deals pulled into 2025 — and partly as a deliberate choice to consolidate lower-value customers and focus on enterprise expansion. But it’s also a reflection of the reality that JFrog’s core DevOps market is mature, and the AI adjacency opportunity, while real and growing, hasn’t yet produced the step-change revenue acceleration that would justify a meaningfully higher growth multiple. The security and AI governance opportunity is clearly there; the question is the pace at which enterprise adoption converts to incremental JFrog revenue.

SPS Commerce

SPS Commerce’s Q4 2025 call was the story of a quietly exceptional company navigating a genuinely difficult near-term environment while laying the groundwork for what it believes is an AI-driven acceleration ahead. The milestone of 100 consecutive quarters of revenue growth is extraordinary by any standard — 25 years of unbroken quarterly revenue increases is a testament to a business model built on network effects and structural supply chain necessity rather than on macroeconomic tailwinds. That consistency matters when evaluating the current deceleration against the long-term narrative.

The financial picture tells two simultaneous stories. Full-year revenue grew 18% to $751.5 million, with recurring revenue up 20%, and adjusted EBITDA grew 24% to $231.4 million — outpacing revenue growth and demonstrating meaningful operating leverage. But the near-term guide is softer: 2026 revenue guidance of approximately 7% growth reflects several headwinds that are more timing-related than structural — delayed retailer enablement campaigns, Amazon policy changes affecting the revenue recovery business, and continued invoice scrutiny among existing customers causing some down-selling. Management was explicit that these are temporary rather than permanent, with the back half of 2026 expected to benefit from campaigns that slipped out of Q4 2025.

The AI story at SPS Commerce is distinct from most in this series because it’s rooted in the specific problem of supply chain data complexity, not general-purpose business automation. SPS operates a network of over 54,000 customers — brands, retailers, distributors, and logistics providers — who exchange electronic data interchange (EDI) documents as part of their daily commerce relationships. The volume and complexity of those transactions is enormous: purchase orders, advance ship notices, invoices, inventory feeds, compliance documents. The launch of MACS — SPS’s agentic AI capability — leverages proprietary network intelligence and billions of transactions to provide guided workflows and automated monitoring. This is a data moat argument applied to supply chain: SPS has processed so many transactions across so many trading partner relationships that its AI models have context about what “normal” looks like, what exceptions typically mean, and how to route resolution — context that no general-purpose AI tool could replicate from the outside.

The monetization strategy for MACS is still being developed. CEO Chadwick Collins stated that MACS is initially available to beta customers, and the company is monitoring usage to inform monetization strategies — while it is expected to enhance competitive positioning and retention, the monetization opportunities will become clearer as customer usage is better understood. This is a familiar pattern across this series — build usage before monetizing, learn what customers actually value, then price accordingly — but it also means MACS is currently a retention and competitive differentiation tool rather than a direct revenue driver.

The ARPU story is arguably more important near-term than the AI story. ARPU reached approximately $14,300 for the year — and management was explicit that revenue growth in 2026 will come primarily from ARPU expansion rather than net new customer additions. This is a platform maturation signal: SPS has penetrated a large portion of its natural customer universe, and the growth vector is now selling more to existing customers — analytics, revenue recovery, additional trading partner connections, and eventually AI-powered capabilities. The cross-sell opportunity within the existing base is substantial: fulfillment customers who also adopt revenue recovery, analytics customers who add compliance monitoring, and eventually all of them adopting AI-powered exception management.

The structural reason SPS Commerce is worth watching in the AI context is the network effect dimension. When a retailer demands that its suppliers use a specific EDI format or connection type, every supplier who complies joins the SPS network. That mandatory compliance-driven onboarding creates network value that compounds over time and is essentially impossible for a new entrant to replicate at scale. AI that can make sense of that network — pattern-matching across millions of transaction pairs, detecting anomalies, predicting disruptions, optimizing routing — becomes more valuable the larger the network gets. SPS’s 100-quarter track record isn’t just a historical achievement; it’s evidence that the underlying network effect is durable enough to sustain growth through multiple economic cycles, and that durability is what gives the AI investment long-term credibility.

Waystar

Waystar’s Q4 2025 call was, in the context of this series, a unique kind of earnings story: a company that can point to a specific, quantified dollar amount of AI-driven value delivered to its customers. “Waystar Altitude AI prevented more than $15 billion in denials for our clients, reduced appeal time by 90%, and drove double-digit increases in denial overturn rates.” That’s not an engagement metric or a usage number — it’s a dollar figure that directly corresponds to revenue that healthcare providers would have lost without the AI. In an industry where claim denials represent a multi-hundred-billion-dollar annual problem, $15 billion in prevented denials in a single year is a number that commands attention in every C-suite conversation Waystar has with a hospital CFO.

The business fundamentals reflect this value delivery. Q4 revenue reached $304 million, up 24% year-over-year and 12% organically, with adjusted EBITDA of $129 million and a 42.5% margin — exceeding the company’s long-term 40% target. The company crossed $1 billion in annual revenue for the first time in 2025, a milestone that marks its transition from a high-growth mid-market company to a scaled enterprise platform. Net revenue retention was 112% with 97% gross revenue retention and a net promoter score above 70 — metrics that are extraordinary for a healthcare technology company serving an industry known for long sales cycles, high switching costs, and conservative technology adoption.

The AI story at Waystar has a specific character that distinguishes it from most in this series. Healthcare revenue cycle management — the process of getting paid for clinical services — is one of the most complex and error-prone workflows in any industry. A typical US hospital submits millions of claims per year, each of which must be coded correctly, submitted to the right payer in the right format, appealed if denied, and tracked through a 90–180 day payment cycle. Payers deny roughly 60 million claims annually, and the administrative cost of managing those denials consumes a staggering share of healthcare revenue. Approximately 50 of Waystar’s solutions leverage AI, and nearly 40% of revenue is driven by AI embedded in mission-critical reimbursement workflows — which means AI is not an add-on feature at Waystar, it’s the core mechanism through which the platform delivers its primary value proposition.

The Iodine Software acquisition — which brought AI-powered clinical documentation improvement and mid-cycle denial prevention — extended Waystar’s reach into the part of the revenue cycle where clinical and financial data intersect. This is where some of the most significant denials occur: a claim denied not because of a billing error but because the clinical documentation doesn’t adequately support the care provided. Iodine adds more than 1,000 hospitals and health systems with only 35% customer overlap, expanding the addressable market and cross-sell opportunity — and together the combined platform delivers full revenue cycle visibility through a unified financial and clinical platform. The data underlying this platform — 1 in 3 US hospital discharges and more than 7 billion annual transactions — creates a proprietary intelligence layer that no new entrant can replicate quickly.

The monetization approach is worth understanding precisely because it maps value to outcome more directly than almost any company in this series. New AI agents in 2026 will launch as both new SKUs with distinct pricing and as augmentations to existing modules — meaning Waystar has a path to charge more for AI when it delivers incremental functionality, and to embed AI into existing subscriptions when it serves as a retention and competitive differentiation tool. The CEO’s framing was direct: “AI drives retention and allows for price increases reflective of the value delivered.” For a company whose customers are measurably recapturing billions in denied revenue, that pricing power argument is among the most defensible in enterprise software.

The healthcare regulatory and trust dimension also plays into Waystar’s competitive advantage in a way that mirrors our Doximity and Qualys observations. Most healthcare providers — particularly community hospitals and mid-sized health systems — lack the internal engineering talent to build or customize AI for their revenue cycle workflows. Most clients prefer integrating AI capabilities into their existing systems rather than building their own, and they value working with trusted partners like Waystar which offers a cyber-secure, integrated platform. This is a trust and complexity moat: healthcare AI that touches billing, reimbursement, and clinical documentation sits in a compliance environment (HIPAA, payer rules, CMS regulations) that makes the barrier to switching from a proven, trusted platform extraordinarily high. Waystar has spent years earning that trust, and a Black Book survey of more than 750 healthcare leaders demonstrated that Waystar leads the industry in client satisfaction with AI execution and outcomes.

Similarweb

Similarweb’s Q4 2025 call was a story of a data company caught in an interesting position — possessing what may be one of the most valuable proprietary datasets in the AI economy, but struggling to convert that asset into predictable revenue fast enough to satisfy investors expecting a cleaner growth profile. Revenue grew 11% year-over-year to $72.8 million, below guidance — “mostly due to the timing of 2 large LLM data training contracts that did not close yet, but remain active in our pipeline.” CEO Or Offer was direct about the dynamic: given the size and complexity of those AI contracts, sales cycles take longer to complete — but once closed, they represent large multiyear revenue opportunities with strong expansion potential.

The core asset is worth understanding clearly. Similarweb has spent over a decade building the most comprehensive independent dataset of digital traffic behavior on the internet — website visits, app usage, engagement patterns, referral sources, audience demographics, and competitive benchmarking across millions of digital properties globally. This data is valuable to enterprise customers for competitive intelligence, investor research, and market analysis. But in the AI era, it has acquired a second and potentially much larger use case: training data for large language models and AI systems that need to understand how the digital economy works, what consumers do online, and how web traffic flows. AI revenue grew 3x year-over-year — from a small base, but accelerating sharply — and AI-related revenue reached 11% of total sales in Q4, up from 8% in Q2 2025.

The LLM data training opportunity is structurally different from Similarweb’s traditional SaaS business in ways that create both excitement and operational challenge. Traditional customers buy subscriptions for ongoing competitive intelligence. LLM training customers buy large bulk datasets — one-time or periodic purchases that can be very large in dollar terms but are inherently irregular in timing. The 2026 guidance range was widened to a $10 million spread precisely because “the timing of them to land is not that clear,” which is a candid acknowledgment that the company is in a transitional period where a new, high-value customer category is growing fast but doesn’t yet fit neatly into the quarterly cadence that public market investors expect from a SaaS business.

The partnership signals are strategically significant. The Manus partnership — following its acquisition by Meta — represents a step-change in reach, embedding Similarweb data into agent-driven workflow environments. When AI agents are conducting market research, competitive analysis, or investment due diligence autonomously, they need a trusted source of digital traffic data. Similarweb sitting inside the tools those agents use is a distribution model that doesn’t require a traditional enterprise sales cycle — it’s usage-driven, scales with agent adoption, and positions Similarweb as infrastructure rather than a point solution. The expanded Bloomberg Terminal integration adds a parallel distribution channel into institutional finance, where web traffic data is used as an alternative data signal for investment decisions.

The core SaaS business has its own structural improvement story running underneath the AI narrative. 60% of ARR is now on multiyear contracts, up from 49% a year ago — a significant shift toward durability and revenue predictability that reduces churn risk and increases customer lifetime value. 63% of ARR comes from customers generating over $100,000 annually, with net revenue retention at 103% for that enterprise cohort — meaning the largest customers are expanding, not contracting. The AI Studio product launch — which embeds AI-powered market intelligence directly into enterprise workflows with consumption-based pricing — is designed to broaden usage within existing accounts and drive ARPU expansion without requiring new logo growth.
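For readers less familiar with the metric, net revenue retention measures how much revenue an existing customer cohort generates a year later, with expansion included. A minimal sketch of the math, using purely hypothetical dollar figures (the actual components behind Similarweb's 103% are not disclosed):

```python
# Minimal sketch of net revenue retention (NRR), the metric Similarweb
# reports at 103% for its $100K+ enterprise cohort.
# All dollar amounts below are hypothetical, for illustration only.

def net_revenue_retention(starting_arr, expansion, contraction, churn):
    """ARR retained from an existing cohort after 12 months, including
    upsells, divided by that cohort's starting ARR."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

# A cohort starting at $10.0M that adds $0.8M of expansion, loses $0.3M
# to downgrades, and churns $0.2M lands at 103%: expansion outpaces losses.
nrr = net_revenue_retention(10_000_000, 800_000, 300_000, 200_000)
print(f"{nrr:.0%}")  # → 103%
```

Anything above 100% means the cohort is expanding on net, which is why the direction of this metric matters so much for companies whose growth depends on existing accounts.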

The honest tension on the call was the combination of a revenue miss, widened guidance, acknowledged sales force execution disappointment in 2025, and a new CFO joining in December — a lot of operational uncertainty concentrated in a single quarter. Management’s response was appropriately direct: refocus the go-to-market on inbound leads, build a dedicated team to pursue large LLM contracts, and ground the 2026 guidance in the high-visibility core business rather than counting on AI deal timing. The 2026 guidance is $305-$315 million, representing 10% growth at the midpoint.

Amplitude

Amplitude’s Q4 2025 call opened with a strategic argument that reframes the entire conversation about AI’s impact on product analytics. Most software CEOs are defending against the narrative that AI will disintermediate their product. CEO Spenser Skates made the opposite case: AI coding assistance from Anthropic, OpenAI, Cursor and others has compressed development cycles dramatically, accelerating the velocity at which companies ship new products — and when software is this easy to build, it creates a gap between how fast teams can ship features and how fast they can learn whether those features are working. This shifts the pressure to the “use and learn” side of the product development loop — understanding how users behave, what works, what doesn’t, and what to do next. In other words, AI doesn’t reduce the need for product analytics — it multiplies it, because teams are shipping so much faster that the bottleneck moves from building to learning.

The financial results validated the strategic narrative. Q4 revenue reached $91.4 million, up 17% year-over-year — up from 9% growth in the prior fiscal year — with total ARR of $366 million, up 17% year-over-year and up $18 million sequentially, the highest net new ARR quarter since 2021. Customers with $100,000+ ARR grew to 698, up 18% year-over-year with the largest sequential increase on record, and million-dollar ARR customers reached 56, up 33% year-over-year. Net dollar retention improved to above 105%, up from 100% at the end of 2024 — the direction the metric needs to be moving for a company whose next phase of growth depends on expanding within enterprise accounts.

The AI native customer story is one of the most specific in this series. More than 25 of the leading AI-native companies are customers with over $100,000 in ARR, and one of the world’s largest frontier AI labs is a seven-figure customer. The frontier lab use case is illuminating: they came to Amplitude to replace a manual system built from fragmented internal tools and raw warehouse data, because they needed to understand activation, engagement, retention, and monetization end-to-end for their own products — the same problems that consumer and enterprise software companies have always faced. AI companies are, at their core, software companies, and they have the same product analytics needs as everyone else. The difference is that AI companies’ stakes are higher and their iteration cycles are faster, making the need for trusted behavioral data more acute.

The agentic usage data is the most striking near-term signal on the call. In October 2025, virtually no queries to Amplitude were triggered by AI agents. By the time of the Q4 call, agents were driving 25% of total queries — and agents also drove the vast majority of overall incremental query growth. That trajectory — from zero to 25% in a few months — suggests that AI agents are rapidly discovering Amplitude’s behavioral data as a necessary input to their workflows. When a coding agent, product agent, or marketing agent needs to understand how users are actually interacting with a product, Amplitude is the system they’re querying. This creates a new consumption layer that doesn’t require human-driven engagement — agents query the platform autonomously, continuously, at scale.

The technical reason Amplitude is positioned to capture this is worth understanding. Skates was direct: Amplitude’s agentic analytics platform reaches a 76% success rate on complex production-grade queries — seven times better than a straight text-to-SQL approach — a result that can’t be replicated accurately by pointing an LLM at a data warehouse. Amplitude has worked with thousands of companies over 13 years and amassed the world’s largest database of user behavior, purpose-built with the correct retention and funnel logic and analytical tools exposed in a way that enables AI to reason effectively. The “text-to-SQL on a data warehouse” comparison is pointed — it’s a direct acknowledgment that the obvious competitive threat (generic LLMs accessing raw data) produces inferior results precisely because behavioral analytics requires semantic understanding of what user actions *mean*, not just what they are.

The MCP integrations with Anthropic, OpenAI, Figma, GitHub, Lovable, and Slack extend this intelligence to where teams already work — embedding Amplitude’s behavioral context directly into the tools developers and product managers use daily. The Global Agent launch — which replaces the traditional dashboard paradigm with a conversational interface that can answer complex behavioral questions in natural language — and the InfiniGrow acquisition for AI-native marketing analytics round out a coherent vision: Amplitude as the behavioral intelligence system that sits underneath the entire product development and go-to-market stack, accessible by both humans and AI agents, continuously learning from every product interaction across thousands of companies.


Sammy Abdullah

Managing Partner & Co-Founder
