Blossom Street

18 SaaS Earnings Calls that Show the True Impact of AI on SaaS

by Sammy Abdullah

We’re reviewing the earnings calls of 81 publicly traded SaaS companies to understand the impact AI is having on SaaS. Below we summarize the key takeaways of 18 of those earnings calls: Snowflake, Salesforce, MongoDB, ServiceTitan, ServiceNow, Microsoft, AppFolio, Palantir, Atlassian, Paylocity, Doximity, Qualys, Bill.com, ZoomInfo, Dynatrace, Monday.com, Blackbaud, and Cloudflare. After the takeaways below, you will see full ~5 paragraph summaries of each earnings call. We will release more blogs like this as we do more work.

The big takeaways:

AI is not profitable yet. The goal for the moment is driving margin-neutral revenue. Microsoft, Salesforce, ServiceNow, and nearly every other company described margin pressure from deploying AI products. AI workloads are simply very expensive at the moment.

Direct revenue from AI is nascent for most. It is non-existent for some players like Doximity, and still in its very early stages but growing fast for others. Enterprise customers do seem to be adopting and benefiting from the new AI products built by their already-critical software providers like Salesforce and Monday.com. There is also a category of companies that are building usage before monetizing.

Infrastructure for AI versus AI products. Some, like Snowflake, ServiceNow, MongoDB, and Cloudflare, will win because their platforms provide the infrastructure AI runs on. Some, like Dynatrace, believe AI makes their product more compelling than ever (in their case, observability and monitoring of AI). Others, like ServiceTitan and Salesforce, are building AI products that actually execute context-aware actions.

Trust is an issue for AI. Doximity has stopped releasing AI products until it can get them perfect, because it’s not okay for AI to misdiagnose a patient. Qualys makes a similar point in cybersecurity: being the agentic remediation layer requires a level of trust that generic AI tools can’t establish. AI errors in healthcare and cybersecurity are nearly unacceptable, and thus the bar for accuracy is higher.

Performance is strong. Quite a few of these companies, like MongoDB, Cloudflare, and ServiceNow, closed some of their largest deals ever in Q4. Others, like Atlassian, had record quarters. Companies selling AI products to their customers report better growth and retention among those customer cohorts. On the other hand, there are companies like ZoomInfo experiencing serious disruption, with growth falling to near zero.

The moat is very high. Moats for enterprise software are being built around non-public data, workflows, governance, security, integrations, compliance, and operational knowledge of the existing customer. AI cannot stand alone; it needs to sit on top of software that manages all of the above in an enterprise-friendly manner. That said, for SMB customers, which have much less internal complexity and are an easier lift, AI is becoming a real threat to incumbent software; Monday.com cited this in their SMB customer base.

Internal AI improves overall margins. Many of the companies themselves, such as Blackbaud, are using AI in their own operations to improve margins.

The ~5 paragraph summaries of actual calls are below. Revenue multiples in real time can be seen for all these companies at https://www.softwaremultiples.com/. Also visit https://www.blossomstreetventures.com/ for detailed financials and metrics data for all these companies.

Snowflake

Snowflake’s Q4 call told a story of a company in genuine transition, moving from being the place enterprises store and query data, to being the platform where they actually build and run AI. The numbers: product revenue grew 30% year-over-year, driven primarily by AI workloads, and accounts using AI features rose to more than 9,100, with Snowflake Intelligence, their flagship agentic product, scaling to over 2,500 accounts, nearly doubling quarter-over-quarter.

What makes Snowflake’s AI story somewhat distinct from peers is that AI is benefiting the business on both sides of the income statement. On the revenue side, larger and more strategic deals are getting done; Snowflake signed the largest deal in company history at over $400 million in total contract value, and closed seven nine-figure contracts in the quarter versus two in the same period last year. On the cost side, management reported 40% to 50% higher project margins and compressed delivery cycles through their own internal use of Snowflake Intelligence and Cortex Code, a credible proof point that software companies will be big beneficiaries of AI.

For customers, the value proposition is centered on two products. Cortex Code is used by over 4,400 customers, enabling faster development and deployment of AI workloads. Snowflake Intelligence, meanwhile, is the more agentic play, allowing enterprises to build AI-native workflows directly on top of their existing Snowflake data, without moving it elsewhere. The strategic logic is powerful: enterprises already trust Snowflake with their most sensitive data, and Snowflake is betting they’ll prefer to run AI on top of that data in place rather than pipe it out to a separate system.

The gross margin picture is the caveat. Management acknowledged that newly launched AI products currently carry lower margin profiles, and that margin expansion remains a near-term priority as efficiency improvements are realized. This is the same tension playing out across the AI infrastructure sector; serving AI workloads is compute-intensive, and the economics improve over time but aren’t fully mature yet. Snowflake guided FY27 product gross margin at 75%, roughly flat with FY26’s 75.8%.

The bigger strategic bet is on ecosystem. Snowflake expanded partnerships with Anthropic, OpenAI (a $200 million expansion), and Google Cloud, giving customers native access to leading AI models directly within the platform. Rather than picking a single model winner, Snowflake is positioning itself as the neutral data layer that works with all of them, a benefit to its enterprise customers who don’t want to be locked into a single AI vendor. If that positioning holds, Snowflake doesn’t just benefit from the AI wave; it becomes critical infrastructure beneath it.

Salesforce

Salesforce is at an inflection point with AI. Agentforce, Salesforce’s agentic AI platform, closed 29,000 deals in its first 15 months, with marquee enterprise customers like Amazon, Ford, AT&T, and Moderna signing on. Deals over $1 million were up 26% year-over-year, suggesting that AI may be driving larger, more strategic contracts.

What’s most notable is how Salesforce is trying to reframe what AI does for customers. Rather than talking about AI in terms of features or capabilities, Benioff introduced a new metric, Agentic Work Units (AWUs), to show that AI agents on the platform are completing 2.4 billion discrete units of actual work: updating records, triggering workflows, making decisions. The message was deliberate: this isn’t AI that thinks or suggests, it’s AI that executes.

On the revenue side, Agentforce ARR hit ~$800 million, up 169% year-over-year, and the broader Agentforce and Data Cloud bundle crossed $2.9 billion ARR, up over 200%. Salesforce is monetizing AI through a mix of premium SKUs, new seat additions, and consumption-based flex credits, which gives them multiple vectors to capture value as usage scales.

The candid note was around gross margins. Token costs remain a real input expense, and Salesforce acknowledged it’s working to optimize efficiency to keep AI gross margins neutral in the near term. AI offerings are not profitable at the moment.

The bigger strategic picture is that Salesforce is betting AI transforms it from a system of record into a system of action where agents handle customer service, sales workflows, and operational decisions autonomously. If that holds, it strengthens their competitive moat considerably and raises switching costs for existing customers.

MongoDB

MongoDB is positioning to be the foundational data layer for the AI era, and the company’s job is to make sure the world knows it. Customers are excited about MongoDB’s platform strength and its ability to serve as an integrated data layer for AI agents, combining search, vector search, and embeddings in a single offering.

The most important thing to understand about MongoDB’s AI story is where they sit in the stack. Unlike Salesforce or Braze, MongoDB isn’t selling AI applications to end users. It’s providing the data infrastructure that AI applications run on. At its flagship MongoDB.local San Francisco event, the company announced the integration of its core database with embedding and reranking models from Voyage AI, creating a unified data intelligence layer for production AI, allowing developers to build sophisticated applications at scale with reduced hallucination risk and no requirement to move or duplicate data. That no-data-movement angle is a meaningful competitive argument: enterprises with sensitive data in MongoDB don’t have to copy it to a separate vector store or AI pipeline, which simplifies architecture and lowers risk. For large financial institutions and regulated industries, who are among MongoDB’s most strategic customers, that matters enormously.

The candid acknowledgment from management, however, is that AI is not yet a material driver to results, though they are encouraged by the growth they are seeing with customers leveraging AI capabilities. The number of customers using vector search and Voyage embedding models nearly doubled year over year, and AI natives, digital natives, and large enterprises are all contributing to the growth across the customer base.

What’s actually driving the strong financials right now is a combination of AI-adjacent tailwinds and MongoDB’s core enterprise momentum. In Q4 they signed an approximately $90 million deal with a large tech company planning to expand both core and AI workloads on Atlas, and a greater than $100 million deal with a large financial institution for Enterprise Advanced, the largest total contract value deal in company history. The significance of that financial institution deal is hard to overstate: large banks are notoriously conservative with infrastructure decisions, and a nine-figure commitment to MongoDB speaks to how seriously enterprises are treating their data platform choices in the context of AI readiness.

AI actually amplifies the case for MongoDB’s multi-model, developer-friendly architecture. As enterprises move from traditional transactional applications toward AI-powered agentic workflows, the data requirements become more complex, requiring search, vector similarity, document storage, and real-time access all in one place. MongoDB’s goal is to become the generational data platform of choice in the AI and multi-cloud era, and if AI application development scales the way most expect, the demand for an integrated, performant, cloud-native data layer like Atlas should grow with it. The risk is that the revenue inflection from AI workloads takes longer to materialize than investors hope, but the Q4 numbers suggest the underlying business is strong enough to carry that bet comfortably while AI catches up.

ServiceTitan

The AI story here is unusually concrete. Where most SaaS companies are talking about AI in terms of platform strategy and future optionality, ServiceTitan is reporting actual customer outcomes from a live product, and those numbers are striking.

The centerpiece is Max, which ServiceTitan is positioning not as an AI feature but as an agentic operating system for the trades, meaning it’s designed to autonomously orchestrate end-to-end workflows across demand generation, dispatch, quoting, payments, and back-office operations for HVAC, plumbing, electrical, and other field service businesses. The first deployment cohort of Max customers saw a 50% increase in average ticket size, one customer achieved over 50% revenue growth in a single month, and another increased EBITDA margin from 18% to 30% while reducing office headcount. And management went further: customers on Max will roughly double their monthly subscription revenue when fully ramped, an effect not driven by technician expansion, meaning the value is coming from operational efficiency and higher revenue capture per job, not simply from adding more workers.

What makes ServiceTitan’s AI positioning particularly defensible is the data moat underlying it. The company leverages proprietary structured data from over $80 billion in annual transaction volume to drive automation and outcome improvements. This is a critical competitive point that CEO Ara Mahdessian leaned into when analysts asked about AI-native startups competing for the same customers. A point-solution AI tool built for the trades can optimize one workflow but it has no context on how that workflow connects to the rest of the business. ServiceTitan’s integrated platform, spanning marketing, scheduling, dispatch, invoicing, and payroll, creates a contextual data layer that generic AI tools simply can’t replicate from the outside.

The second AI product gaining traction is Virtual Agents: AI-based modules that handle inbound call management and appointment booking, especially during call surges or after normal business hours. For a trades business, missed calls during peak season or after hours are direct revenue losses. A potential customer who can’t book gets routed to a competitor. Virtual Agents directly plug that gap, and because it’s priced as a consumption product, it creates a new usage revenue stream that management believes could grow faster than GTV in fiscal 2027.

The honest caveat is that Max is still early-stage and capacity-constrained. ServiceTitan plans to double Max capacity in Q1 FY2027, with scaling tied to onboarding efficiency and customer success, which is a signal that the bottleneck right now is deployment and training, not demand. To accelerate on all fronts, the company hired Abhishek Mathur from Figma, Meta, and Microsoft as Chief Technology and Product Officer, specifically to drive organizational and technology velocity around AI initiatives. FY2027 is also expected to represent the company’s largest R&D investment ever, with explicit focus on AI inference and internal tooling. The trades are not typically an industry associated with cutting-edge software, which is precisely why ServiceTitan’s moat both in data and in customer relationships could prove so durable as AI raises the stakes for what field service software can actually do.

ServiceNow

If there’s one company in enterprise software that has the most fully-formed AI story right now, it’s ServiceNow. CEO Bill McDermott came into this call on offense, explicitly addressing the “AI will eat software” narrative that has spooked investors across the sector and flipping it on its head. His argument was direct: enterprise AI will be the largest driver of return on the multitrillion-dollar super cycle of AI infrastructure investment, and the real payoff comes when tokens move beyond pilots and get embedded directly into the workflows where business decisions are made, with ServiceNow serving as the semantic layer that makes AI ubiquitous in the enterprise.

The numbers behind Now Assist, ServiceNow’s AI product suite, are the most concrete AI monetization metrics in this cohort. Now Assist ACV surpassed $600 million, more than doubling year-over-year in Q4, with deals over $1 million nearly tripling quarter-over-quarter and 35 such deals closing in Q4 alone. Enterprises aren’t just adding AI as a line item but are making meaningful commitments to it within the ServiceNow platform. The AI control tower, which allows enterprises to govern and orchestrate AI agents across the business, grew over 4x its 2025 targets, another signal that customers are moving from individual AI use cases to thinking about AI management as a platform-level problem.

What ServiceNow is really selling is AI governance as much as AI capability. As enterprises deploy more agents someone has to be in charge of orchestrating them, monitoring them, and ensuring they don’t go rogue or create compliance exposure. ServiceNow is positioning its platform as that orchestration layer, what they call the “universal agentic network” built on MCP and Workflow Data Fabric. Monthly active users grew 25% and the number of workflows and transactions processed on the platform increased over 33% each, reaching 80 billion workflows and 6.4 trillion transactions.

ServiceNow doesn’t want to bet on any single model winning; rather, it wants to be the workflow layer that works with all of them, insulating itself from model commoditization while capturing value from enterprise adoption regardless of which AI providers customers prefer.

The honest tension in the ServiceNow story is pricing and gross margin. Management acknowledged some gross margin headwinds from hyperscaler and AI infrastructure choices, and the shift to a hybrid pricing model, combining traditional subscription with consumption-based AI usage, introduces some revenue variability that investors are still getting comfortable with. But with $15.5 billion in subscription revenue guided for 2026 at 19–20% growth, a 32% operating margin, and 36% free cash flow margins, ServiceNow is arguably the most financially powerful pure-play on enterprise AI adoption in the market right now. The core business is robust enough that they can absorb the cost of building out AI infrastructure while competitors are still figuring out their strategy.

Microsoft

Microsoft’s call was a declaration that AI is now the gravitational center of the entire business. Satya Nadella opened with a striking statement: “We are only at the beginning phases of AI diffusion and already Microsoft has built an AI business that is larger than some of our biggest franchises.” When you consider that Microsoft’s “biggest franchises” include Office, Windows, and Xbox, each a multibillion-dollar business, that’s an extraordinary claim. And the numbers behind it are hard to argue with: Microsoft Cloud crossed $50 billion in quarterly revenue for the first time, up 26%, while Azure grew 39%, the acceleration driven explicitly by AI workloads.

The AI story at Microsoft operates on three distinct layers, each reinforcing the others. The first is infrastructure. Capital expenditures hit $37.5 billion in the quarter, with roughly two-thirds allocated to short-lived assets like GPUs and CPUs. Management introduced a new internal metric, tokens per watt per dollar, as their guiding optimization target for AI infrastructure, and reported a 50% increase in throughput on OpenAI inferencing due to infrastructure advances.

The second layer is the Copilot product suite, where AI is directly touching customers at scale. Microsoft 365 Copilot reached 15 million paid seats, with over 160% seat growth year-over-year and a tripling of the number of customers with over 35,000 seats, a signal that enterprise adoption is moving from departmental pilots to company-wide deployments. Average user conversations doubled and daily active users grew 10x. GitHub Copilot reached 4.7 million paid subscribers, up 75% year-over-year, with individual Copilot Pro Plus subscriptions growing 77% sequentially.

The honest tension is on margins. Gross margins declined slightly, driven by continued AI infrastructure investments and growing AI product usage, and management guided for operating margins to be down slightly year-over-year in Q3. Microsoft is, in effect, spending today to capture a revenue curve that extends well into the next decade, and the Q2 numbers suggest that curve is steeper than almost anyone expected.

Appfolio

AppFolio’s Q4 call told a quietly compelling story for understanding how AI actually changes the economics of a vertical SaaS business. AppFolio serves property managers, a customer base that has historically been underserved by software and skeptical of hype. The fact that 98% of AppFolio’s customers are already actively using one or more AI capabilities included in the platform, against an industry backdrop where half of AI users in property management report they can’t actually rely on the AI features in their core system, is a striking market differentiation signal. It suggests AppFolio has threaded a needle that most vertical SaaS companies are still struggling with: making AI functional and trusted at the ground level.

AppFolio is repositioning its platform around three layers: a system of record, a system of action, and a system of growth with agentic AI embedded directly into daily operations, with the explicit goal of enabling customers to evolve from property managers to performance managers. That reframing reflects a fundamental shift in what AppFolio is selling: not software that helps you manage properties, but a platform that actively improves the financial performance of your property management business. Adoption of premium tiers has already exceeded 25%, a meaningful indicator that customers are upgrading to capture AI-driven capabilities.

The business impact of AI is showing up directly in AppFolio’s financials. Non-GAAP operating margin expanded to 24.9% in Q4, up from 20.2% a year earlier, a nearly 500 basis point improvement that reflects both revenue growth and the operating leverage that comes from AI-driven efficiency gains across the platform itself.

What makes AppFolio particularly interesting from an investment lens is that their AI advantage is compounding in a way that’s structurally hard to replicate. 45% of survey respondents in the property management industry say they plan to consolidate their software solutions and AppFolio is the natural consolidation destination, precisely because they’ve embedded AI deeply enough that customers would lose significant operational capability by leaving. The moat here isn’t just the product; it’s the AI-trained workflows, the resident data, and the operational patterns that accumulate the longer a customer stays on the platform. For a vertical SaaS business approaching $1 billion in revenue, that’s a durable competitive position.

Palantir

Palantir’s Q4 2025 call was unlike any other earnings call in enterprise software this cycle, partly because of the numbers, which were objectively extraordinary, and partly because Alex Karp delivers earnings calls the way a wartime general addresses troops, not the way a CFO addresses analysts. But strip away the theater and what you find is a company that has, almost overnight, become one of the defining stories of what enterprise AI actually looks like when it works at production scale.

The headline numbers require context to be fully appreciated. Q4 revenue grew 70% year-over-year, the highest growth rate since Palantir went public, representing a 3,400 basis point acceleration versus Q4 of the prior year. This isn’t a company maintaining a high growth rate; it’s a company where the growth rate itself is accelerating sharply. Full-year 2025 revenue grew 56%, and the company is guiding full-year 2026 revenue of $7.19 billion, representing 61% growth.

The company closed 61 deals greater than $10 million in the quarter, and management cited multiple examples of customers signing $80 to $96 million contracts within months of initial engagement, a compressed deal cycle that reflects genuine organizational conviction.

The US vs. international divergence on the call was striking and strategically revealing. US revenue grew 93% year-over-year and 22% sequentially, while international commercial revenue grew just 8% year-over-year, a massive gap that management was candid about.

The government side of the business is also worth understanding in the context of AI impact. US government revenue grew 66% year-over-year, driven in part by a massive $10 billion Army software contract signed last summer and a $448 million Navy contract for shipbuilding supply chain modernization. Karp was unambiguous about what these contracts represent: not just data analytics or workflow tools, but AI systems that are actively changing the operational capabilities of the US military. He argued that AI implementations in the defense context have changed what warfighters are able to do.

The risk most frequently raised by skeptical analysts on the call is whether the US commercial acceleration is sustainable or whether it represents a concentrated burst of pent-up demand that will moderate. Palantir’s answer is essentially that AI-driven enterprise transformation is still in very early innings, that their pipeline of committed deal value reached $11.2 billion (up 105% year-over-year), and that the real constraint on growth is their own capacity to onboard and deliver for customers, not demand.

Atlassian

CEO Mike Cannon-Brookes has been saying “AI is the best thing that’s ever happened to Atlassian” for several quarters, and this quarter he finally had the numbers to fully back it up. Atlassian delivered its first-ever $1 billion cloud revenue quarter, with cloud up 26% year-over-year, and surpassed $6 billion in annual run rate revenue, a milestone that felt like a culmination of years of patient investment in infrastructure that is now paying off precisely because AI workloads need exactly what Atlassian has built.

The strategic logic of Atlassian’s AI position is distinct from most others in this series. Rather than building AI features on top of an existing product, Atlassian is arguing that AI needs Atlassian; that the work tracking, planning, and organizational knowledge embedded in Jira and Confluence becomes more valuable, not less, as AI proliferates. The Teamwork Graph, which now contains well over 100 billion objects and connections across first- and third-party tools, is the context layer that enables Rovo, Atlassian’s AI assistant, to deliver business value that is actually context-aware and actionable, rather than generic. That’s a meaningful competitive claim: a new AI tool can generate text or answer questions, but it can’t tell you which Jira tickets are blocking your sprint, who owns which decision, or how a current workflow has historically performed across 350,000 customers. Atlassian can.

The proof that customers believe this argument is in the adoption metrics. Atlassian’s Teamwork Collection, the AI-powered bundle that serves as the company’s primary AI monetization vehicle, surpassed 1 million seats sold in under nine months, with more than 1,000 customers upgrading, and the company closed a record number of deals over $1 million in ACV, nearly doubling year-over-year. Critically, customers using AI code generation tools create 5% more Jira tasks, have 5% higher monthly active users, and expand Jira seats 5% faster than those not using AI tools, a clean, data-driven demonstration that AI adoption drives more usage of the core platform, not less.

The seat-based pricing debate hung over the call: analysts probed whether consumption-based pricing could erode Atlassian’s model as AI agents proliferate and “seats” become a less meaningful unit. Atlassian’s response is essentially that customers want predictability, and seat-based pricing delivers it, while the Teamwork Collection bundles AI credits on top of seats in a way that captures consumption upside without forcing customers into open-ended consumption risk. RPO grew 44% year-over-year to $3.8 billion, accelerating for the third consecutive quarter.

The competitive angle on the call was also striking. When asked about new AI tools, including tools from Anthropic, potentially challenging Jira, the CEO was notably unbothered. He noted Atlassian considers Anthropic a partner, using their models within the platform, and argued that new AI tools will emerge but Atlassian’s Teamwork Graph and deep workflow integration provide differentiation that generic AI tools can’t replicate. The focus, he said, remains on human-AI collaboration for complex work: the kind that requires organizational context, compliance, security, and integration that a standalone AI tool can’t provide.

Qualys

The cybersecurity industry is entering a new phase where the speed of attacker exploitation has outpaced the speed of human response, and the only viable answer is autonomous, AI-driven remediation. As threat actors continue to compress time-to-exploit, Qualys believes the next phase of pre-breach risk management will be defined by an agentic AI-driven risk fabric with out-of-the-box business quantification and automated remediation to respond at the speed of threats.

The most significant product announcement on the call was the launch of the AI-native Risk Operations Center which the CEO positioned explicitly as a new category in cybersecurity, designed to centralize an organization’s entire threat response posture. The argument is that the traditional SOC (Security Operations Center) is reactive: it responds to breaches that have already happened. Qualys’s ROC is designed as a pre-breach capability that unifies Continuous Threat Exposure Management with exploit confirmation, risk quantification, and automated remediation, all powered by agentic AI that can act without waiting for a human to review and approve each step. The competitive shot embedded in the CEO’s commentary was pointed and deliberate: he argued that competitors focusing on exposure management can’t win the AI fight if they’re still routing remediation through Jira tickets and ServiceNow tickets. Autonomous decision-making and execution is the actual differentiator, not just identifying vulnerabilities.

On the customer impact side, Qualys’s AI story is still more about where the market is heading than where revenue is today. The ETM (Enterprise TruRisk Management) platform, which serves as the foundation for the agentic AI capabilities, represented 10% of total bookings and 13% of new bookings, up from 8% and 9% previously, a meaningful directional signal but still a small share of the overall business. Patch Management, which is the remediation capability that AI agents actually execute against vulnerabilities, represented 8% of total bookings and 16% of new bookings. Together these metrics point toward a platform transition that’s underway but early. Customers are beginning to adopt the agentic workflow, but the majority of the revenue base is still anchored in the traditional vulnerability management and VMDR products.

The financial picture is disciplined and somewhat unusual relative to the rest of the companies in this series. Full-year 2025 revenue reached $669 million, up 10%, with a 47% adjusted EBITDA margin; exceptional profitability for a company of this size. The 2026 guidance of 7–8% revenue growth is conservative by enterprise software standards, and management was candid that it reflects investment in AI infrastructure, sales and marketing expansion, and federal sector buildout, all of which compress near-term margin slightly.

What makes Qualys particularly interesting from an AI lens is the specificity of its use case. The combination of asset discovery, vulnerability identification, exploit confirmation, risk quantification, and automated patch deployment is one of the few enterprise software workflows where agentic AI is not just useful but arguably necessary for the product to work as intended. No human team can keep up with the velocity of modern vulnerability exploitation. Qualys differentiates by offering integrated patch management and autonomous workflows that allow customers to quickly remediate vulnerabilities.

Paylocity

The most concrete AI signal on the call was deceptively simple: average monthly usage of Paylocity’s AI assistant increased over 100% quarter-over-quarter. That’s a significant sequential jump in a relatively short period. Paylocity recently expanded its AI assistant into HR rules and regulations, tapping into more than 200 IRS and Department of Labor knowledge sources to provide administrators with guidance on tax and labor regulations. For the HR administrators who are Paylocity’s primary users, typically generalists at companies of 50 to 500 employees, the ability to ask a question and get a compliance-grounded answer without calling a lawyer or spending an hour on the IRS website is genuinely valuable.

What’s particularly interesting about Paylocity’s AI strategy is the dual deployment model: AI for customers, and AI for Paylocity itself. Within the operations team, Paylocity is leveraging AI to drive down client case volumes, automate client interactions and case routings, and perform sentiment analysis to flag urgent cases for faster response. This internal AI efficiency play is showing up in the financials: adjusted gross margin expanded 60 basis points year-over-year to 74.4% in Q2, and operating expenses are growing slower than revenue. This is a company guiding to $622–630 million of adjusted EBITDA on $1.74 billion in revenue, roughly a 36% margin.
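As a quick sanity check on that guidance, the implied margin works out as stated. A back-of-the-envelope sketch using the midpoint of the guided EBITDA range:

```python
# Back-of-the-envelope check of Paylocity's guided adjusted EBITDA margin.
ebitda_low, ebitda_high = 622, 630   # guided adjusted EBITDA, $ millions
revenue = 1_740                      # guided revenue, $ millions

midpoint = (ebitda_low + ebitda_high) / 2   # 626.0
margin = midpoint / revenue
print(f"Implied adjusted EBITDA margin: {margin:.1%}")  # prints 36.0%
```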

Paylocity’s position as a system of record allows it to connect data to other systems via APIs, increasing platform utilization, and the customer time savings from AI features lead directly to opportunities to upsell more modules, enhancing the experience and driving revenue growth. This is a different monetization path than many peers: rather than charging directly for AI as a premium SKU, Paylocity is betting that AI-driven engagement deepens the platform relationship, makes customers more likely to expand into adjacent modules, and reduces the churn that has historically been the primary growth constraint in HCM.

Paylocity is operating in a slower-growth regime than most of this peer group. Full-year 2026 revenue guidance of $1.73–1.74 billion represents 9% growth, and recurring revenue is expected to grow 10–11%. That’s solid but not the acceleration story investors see at Atlassian or Salesforce. The macro factor management monitors most closely is employment levels at client companies, since Paylocity’s revenue is partly tied to headcount. Management noted employment levels have been stable with no significant changes expected. AI is helping Paylocity expand revenue per client and improve efficiency, but it isn’t yet the kind of step-change growth driver that reshapes the growth profile.

Doximity

Where nearly every other company in this series was accelerating AI investment and racing to monetize, Doximity was doing something much rarer: pumping the brakes on AI commercialization deliberately, citing patient safety, and then watching its stock drop nearly 24% the next day as the market punished the caution. The divergence between what management said and what the market wanted to hear was striking.

The platform fundamentals are strong by virtually any measure. Doximity surpassed 3 million registered members, with more than 85% of all US physicians and two-thirds of NPs and PAs on the platform, with record usage across daily, weekly, monthly, and quarterly active user metrics. Over 300,000 unique prescribers used AI products in Q3, and January saw an average of four AI queries per prescriber per week for Docs GPT. More than 100 top US health systems have purchased the AI suite, granting access to over 180,000 prescribers. These are impressive adoption numbers for a product that is not yet generating any revenue.

And that last sentence is the crux of the investor tension. No AI revenue is included in current guidance; commercial AI products are expected to launch later in the year. Doximity is building substantial AI infrastructure, investing in usage that is already compressing gross margins from 93% to 91%, and has over 300,000 physicians actively using the product, but is explicitly choosing not to monetize it yet. The reason CEO Jeff Tangney gave was pointed and substantive: a recent Stanford-Harvard study found AI can cause clinical harm in up to 22% of real patient cases, and that overconfident models make those errors harder to spot. In response, Doximity built Peer Check, a clinical peer review layer co-led by renowned physician-scientists Eric Topol and Regina Benjamin, with more than 10,000 US physician experts reviewing AI-generated clinical answers before they’re deployed at scale. The message was clear: Doximity will not monetize AI until it trusts that the AI is safe enough for physicians to rely on without second-guessing every output.

This is a genuinely different philosophy. Healthcare AI occupies a unique risk category: an error in a marketing recommendation costs a company a customer; an error in a clinical AI recommendation can cost a patient their life. Doximity’s caution isn’t timidity; it’s arguably the only responsible posture for a company whose platform is used by 85% of America’s physicians. The commercial AI launch later in fiscal 2026 will be a significant test: can Doximity translate trust, physician habit, and clinical safety credibility into a monetization model that the market will reward?

The longer-arc story here is one of deliberate sequencing: build the trust, earn the habit, then charge for it. Doximity’s net revenue retention was 112% overall and 117% for the top 20 customers, evidence that clients who deepen their engagement with the platform expand their spend meaningfully. If the AI suite launches commercially later this year with genuine physician trust behind it, and if the pharma headwinds stabilize, the combination of 85% physician coverage, proven engagement habits, and a safety-validated AI product could represent a monetization inflection.

BILL

BILL’s call told a story that sits at an interesting intersection: a company that has built a genuinely useful platform for SMB financial operations, is now threading AI throughout it, and is simultaneously facing an existential investor question about whether AI startups could eventually disintermediate it entirely.

The core business is performing solidly. Core revenue grew 17% year-over-year to $375 million in Q2, total payment volume reached $95 billion, up 13%, and transactions processed grew 16% to 35 million. Nearly 500,000 businesses now use the platform, with over 9,500 accounting firms embedded in the network. Multiproduct adoption grew 28% year-over-year, with businesses using both AP/AR and Spend & Expense solutions, a meaningful indicator that customers are deepening their reliance on BILL as a financial operations platform rather than a single-purpose payments tool.

On AI, BILL is pursuing a two-track strategy that distinguishes between AI-for-customers and AI-for-BILL-itself. On the customer side, the company introduced agentic capabilities including a W-9 Agent for vendor management and a coding agent for invoice processing: specific, transactional use cases where AI can eliminate manual work that currently slows down SMB finance teams. The framing CEO René Lacerte used: “AI will allow us to dive deeper into the stack of transactional confusion and simplify it.” For SMB owners and bookkeepers who spend hours reconciling invoices, chasing down vendor information, and coding transactions to the right GL accounts, that promise is valuable.

On the internal efficiency side, BILL developed a roadmap of AI-driven productivity initiatives, covering developer productivity, internal team automation, and go-to-market optimization, with initial benefits expected to start flowing through in fiscal 2027. This is a multi-year cost structure story.

Analysts pressed on whether AI-native startups could undercut BILL’s position by building cheaper, simpler financial operations tools. Lacerte’s response centered on three moats: deep expertise in financial operations built over two decades, a proprietary data set from processing over $1 trillion in payments that enables superior risk models, and network effects from 8 million entities connected through the BILL ecosystem. The data moat argument is particularly credible: BILL’s ability to assess payment risk, predict cash flow patterns, and detect fraud is a function of seeing an enormous volume of SMB financial transactions over time. A new entrant with a better AI model but no transaction history is structurally disadvantaged in the trust and risk management dimensions that matter most when you’re moving real money for small businesses.

The honest challenge for BILL is growth rate moderation. Full-year core revenue guidance was raised but implies 14–15% growth for the year; respectable, but a deceleration from prior years and well below the trajectory of more AI-accelerated peers in this series. The company is deliberately shifting focus toward larger SMBs and improving customer unit economics rather than maximizing new customer count, which is a sensible strategic move but one that introduces near-term headwinds. The AI monetization story, where agentic capabilities translate into higher ARPU from existing customers, is still in its early innings. If BILL can demonstrate over the next few quarters that AI-driven automation is genuinely expanding what customers are willing to pay, the multiple compression the stock has experienced could look like an opportunity in retrospect. But that proof is still ahead, not behind.

Zoominfo

ZoomInfo’s Q4 2025 call was a study in a company navigating a genuinely difficult strategic moment, caught between a legacy business model under pressure from AI disruption, a promising new product suite not yet in revenue guidance, and a market that has lost patience with the transition timeline. Q4 revenue grew just 3% year-over-year to $319 million, and 2026 guidance projects only 1% revenue growth at the midpoint, an extraordinary deceleration for a company that was growing at 20%+ just a few years ago.

To understand ZoomInfo’s AI story, you have to understand its predicament. ZoomInfo built its business on selling B2B contact and intent data to sales and marketing teams; essentially, a database of who to call and when. AI has disrupted that model in two ways simultaneously: first, AI-powered outbound tools have made it easier for companies to generate their own contact intelligence; second, AI agents are replacing some of the human SDRs who were the primary users of ZoomInfo’s data. The company that was once the essential data layer for go-to-market teams is now being forced to redefine what “essential” means in an AI-first sales environment.

Schuck’s answer to that challenge is an explicit platform pivot. The strategic framing is that whether customers access ZoomInfo’s intelligence through the application, through an AI agent, or through something they built themselves, the data flows to where work happens, positioning ZoomInfo as the only platform delivering intelligence, orchestration, and execution for modern go-to-market teams. The new products, GTM Studio, which unifies internal and external data for audience building, and GTM Workspace, are designed to make ZoomInfo the operating system that AI sales agents plug into, rather than a database that humans search manually. Schuck noted that many of the top 50 fastest-growing AI-native companies are already ZoomInfo customers, a meaningful signal that the platform has relevance in the AI-native enterprise, not just legacy enterprise.

The upmarket migration is the clearest positive story on the call. ZoomInfo grew its upmarket segment 6% year-over-year in Q4, tripling its year-over-year growth rate in its seasonally largest quarter. Upmarket customers now account for 74% of ACV, up from 70% a year ago, with ACV from the $100,000-plus customer cohort growing double digits and now representing more than 50% of total company ACV. Upmarket customers buy more of the platform, renew at higher rates, and are more strategically embedded, which means the quality of the revenue base is improving even as the headline growth rate suffers from the ongoing cleanup of the downmarket SMB book.

The most candid moment on the call was about the new AI products: ZoomInfo explicitly included no revenue contribution from GTM Studio or other new products in the 2026 revenue guidance, while embedding the associated costs.

What ZoomInfo illustrates in the context of this broader series is a distinct and underappreciated risk: the companies most likely to be hurt by AI in the near term are not necessarily those with the weakest products, but those whose primary value proposition was data that AI can now partially substitute or generate differently. ZoomInfo’s contact and intent data was extraordinarily valuable in a world where finding the right person to call required human research. In a world where AI agents can do that research, synthesize signals, and execute outreach autonomously, the question isn’t whether ZoomInfo’s data is good (it clearly is) but whether it remains uniquely necessary. Schuck’s argument is that data quality is becoming more important, not less, as AI agents proliferate: bad data fed into an AI agent produces bad outputs at scale, and ZoomInfo’s verified, constantly refreshed data set is the antidote.

Dynatrace

Dynatrace’s Q3 FY2026 call was the kind of straightforward execution story that tends to get overshadowed in an earnings season dominated by more dramatic AI narratives. While most companies are racing to build AI capabilities, Dynatrace is arguing that AI creates an urgent and growing need for exactly what it already does. The more AI proliferates, the more complex and opaque software environments become, and the more critical observability (knowing what’s happening, why, and what to do about it) becomes. CEO Rick McConnell’s central thesis at the annual Perform customer conference was that observability is entering a new era in which it is foundational to resilient software and dependable AI environments.

The financial picture reinforces that thesis. ARR stabilized at 16% growth for three consecutive quarters, net new ARR grew double digits for three consecutive quarters, and annualized log management consumption surpassed $100 million, with log management itself growing over 100% year-over-year. The logs story is particularly significant: logs are the raw observational data that flows from AI workloads at enormous volume and velocity, and Dynatrace’s ability to ingest, index, and make intelligent sense of them is a direct beneficiary of the AI infrastructure build-out happening across every enterprise customer. Platform consumption overall continued to grow over 20%, ahead of ARR growth, a usage-leading indicator that suggests the revenue trajectory has further room to run.

The most strategically important announcement on the call was Dynatrace Intelligence, a new agentic AI operations system unveiled at the Perform conference and made available to all customers. What’s notable is the deliberate pricing decision: Dynatrace Intelligence was not priced as a separate SKU but embedded into the platform for all customers. This is a conscious choice to drive adoption first and monetize through increased platform consumption and expanded footprint, rather than creating an AI premium layer that some customers might resist. It mirrors the philosophy of several other companies in this series who are using AI to deepen platform engagement before converting it to direct revenue.

The competitive differentiation Dynatrace leans on hardest is the combination of what CEO McConnell calls “trustworthy deterministic AI” with agentic AI. The argument is nuanced and important: purely probabilistic AI — LLMs making inferences about what might be wrong in a complex system — produces unreliable outputs in production environments where precision matters. Dynatrace’s approach combines its long-standing causal AI engine, which identifies root cause deterministically based on topology and dependency maps, with newer agentic capabilities that can act on those findings autonomously. For an SRE or platform engineer dealing with an incident at 2am, “we think the problem might be here” is much less valuable than “the root cause is definitively this service, and here’s what to do.” That precision is what Dynatrace is selling, and it’s a genuinely differentiated value proposition against generic AI observability tools.

The hyperscaler partnership strategy adds another dimension. In Q3, Dynatrace announced deeper technical integrations with Amazon Bedrock AgentCore, embedding with Azure’s SRE Agent, and serving as the launch partner for GCP Gemini CLI extensions and Gemini Enterprise. This is Dynatrace positioning itself as the observability layer that hyperscaler-native AI agents rely on, meaning that when a customer builds an AI agent in AWS, Azure, or GCP, Dynatrace is the system that monitors what that agent does, catches when it goes wrong, and helps remediate issues. It’s a smart wedge: rather than competing with hyperscalers for the AI workload itself, Dynatrace is becoming the essential trust and reliability layer underneath those workloads.

The honest question hanging over the call — and which analysts pressed on — is whether 16% ARR growth is the ceiling or a floor as the AI opportunity matures. The company raised full-year guidance by 125 basis points, now targeting 15.5%-16% ARR growth and putting it on track to surpass $2 billion in ARR in fiscal 2026. For a company of Dynatrace’s scale, that’s respectable, but investors hoping for a step-change acceleration driven by AI workload complexity haven’t seen it yet in the headline numbers. Management’s argument is that platform consumption growing at 20%+ is the leading indicator of that acceleration arriving in ARR terms over the next several quarters — and the logs inflection at 100%+ growth is the most concrete evidence they can point to that the flywheel is spinning.

Monday.com

monday.com’s Q4 2025 call presented a company in a genuinely interesting two-speed moment: an enterprise business accelerating meaningfully on the back of AI-driven platform expansion, and an SMB self-serve business struggling with deteriorating unit economics that management expects to persist through 2026.

Start with what’s working. Full-year revenue reached $1.232 billion, up 27%, with Q4 at $334 million, up 25%, and customers with over $500,000 in ARR grew 74% year-over-year. That enterprise acceleration is real, driven by customers standardizing on monday.com not just for project management but for CRM, service operations, software development, and now AI-powered workflows. The AI product metrics are early but directionally exciting: Monday Blocks powered over 77 million actions, Sidekick processed over 500,000 user messages, and Monday Vibe — the company’s AI-native app builder — became the fastest product in monday’s history to surpass $1 million in ARR, reaching that milestone in just 2.5 months after pricing launched in mid-October 2025.

The strategic positioning monday.com is building toward is worth understanding carefully. Co-CEO Eran Zinman described a unified AI platform with four core capabilities: Monday Sidekick (AI assistant), Monday Vibe (AI-native app builder), Monday Agents (autonomous workflow executors), and Monday Workflows (process automation). Teams are increasingly relying on monday.com not just to organize work, but to make decisions, automate outcomes, and execute faster with confidence. That reframing — from a work OS to a work execution platform — is a meaningful upward step in perceived value and justifiable pricing power. And it connects directly to the enterprise motion: larger customers with complex, interconnected workflows are the natural buyers of this expanded capability set, while SMBs may not need or want the full stack.

Which brings us to the harder conversation. The no-touch self-serve channel remains “choppy,” with higher customer acquisition costs and lower returns than historical levels — a dynamic management expects to persist throughout 2026 with no improvement assumed in guidance. This is a genuine structural headwind. The SMB market for work management software has become more competitive and more price-sensitive, partly because AI tools have lowered the barrier to building lightweight alternatives and partly because macro conditions have tightened SMB software budgets. monday.com’s response is to redirect investment toward enterprise, which makes strategic sense but compresses near-term growth rates. 2026 guidance of 18–19% revenue growth is a step down from 27% in 2025 — not alarming on its own, but a deceleration that the market had to digest.

The long-term ambition management has been articulating — a path to much larger revenue targets by 2027 — was another notable moment on the call. The CFO explicitly stated that the 2027 target number is “off the table,” with management ceasing to discuss prior long-term targets due to macroeconomic volatility and the ongoing challenges in no-touch channels. Pulling guidance is rarely received well, and combined with the SMB headwinds and deceleration in growth, it created investor concern despite the genuinely strong enterprise metrics.

The most compelling part of the monday.com story in the context of this series is Monday Vibe — an AI-native app builder that lets business users create custom applications on top of the monday.com platform without writing code. If this scales as management believes it can, it transforms monday.com from a work management platform into something closer to a business application development layer — essentially a low-code/no-code platform that is AI-native from the ground up. The competitive analogy that comes to mind is what Salesforce is trying to do with Agentforce: extend from a system of record into a system of action and creation. Monday.com is pursuing a similar expansion from a different starting point — the work coordination layer rather than the CRM layer — and Vibe’s early ARR traction suggests the market is receptive to the vision, even if the SMB headwinds cloud the near-term picture.

Blackbaud

Blackbaud is not a company most people include when they talk about AI in enterprise software. It serves nonprofits, universities, healthcare foundations, and faith organizations, institutions that are resource-constrained, often technologically cautious, and historically slow to adopt new platforms. And yet the Q4 2025 call made a surprisingly compelling case that Blackbaud may be better positioned for the AI era than its modest growth rate suggests, precisely because of the unique data moat it has built over 45 years serving the social impact sector.

CEO Mike Gianoni opened with a direct and unusually candid framing of the fundamental question facing every vertical SaaS company right now: will AI be beneficial to system-of-record vertical software firms like Blackbaud, or detrimental? Blackbaud processes nearly 30 billion donor predictions annually, manages tens of petabytes of data across its customer base, and has built the most comprehensive philanthropic dataset in existence, proprietary survey and benchmarking data, licensed datasets, identity resolution capabilities, and specialized datasets like Blackbaud Giving Search. Critically, this data is not publicly available on the internet where LLMs can access it, meaning no competitor can train a general-purpose AI model to replicate what Blackbaud knows about donor behavior, nonprofit fundraising patterns, and philanthropic outcomes.

The most concrete AI product on the call was the Development Agent, the first of Blackbaud’s “Agents for Good” released at their bbcon conference in October. The use case is beautifully specific and immediately legible in terms of ROI: a university with 190,000 alumni but a fundraising team with bandwidth to focus on only 10,000 of them can deploy the Development Agent as an additional “staff member” that cultivates relationships and raises funds from the other 180,000 alumni through email, text, and a full conversational avatar, self-learning using the data, intelligence, and workflows within Blackbaud’s system of record. That is not a marginal efficiency gain — it’s the ability to extend fundraising reach by 18x without proportionally scaling headcount.
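The 18x figure follows directly from the numbers in the example; a quick check of the arithmetic:

```python
# Quick check of the fundraising-reach math in the Development Agent example.
total_alumni = 190_000
staff_covered = 10_000                          # alumni the human team can reach
agent_covered = total_alumni - staff_covered    # 180,000 handled by the agent

reach_multiple = agent_covered / staff_covered
print(f"Agent extends reach by {reach_multiple:.0f}x")  # prints 18x
```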

The pricing model matters here too. Blackbaud is structuring Agents for Good as an annual subscription with multiyear contracts — meaning the agent becomes a recurring revenue line rather than a one-time upsell. More than 20% of customers are already asking to move to four-year or longer renewal contracts, which speaks to the depth of platform dependency and the confidence customers have in Blackbaud’s roadmap. 2026 guidance of 4–4.5% revenue growth explicitly assumes no meaningful AI product revenue contribution.

The internal AI story is equally substantive. Every employee at Blackbaud has been required to complete AI training, the entire engineering team is using GitHub Copilot and Anthropic Claude for code generation, bug remediation, and new product development, and Blackbaud AI Chat — embedded within the system of record and leveraging customer and proprietary benchmark data — saw daily usage grow 5x since October. The company also cited three structural efficiency drivers it’s pursuing simultaneously: geographic workforce diversification through a growing India office, closure of the last two legacy data centers, and AI-driven internal productivity.

What makes Blackbaud particularly interesting in the context of this broader series is the mission-alignment dimension. Nonprofits, universities, and foundations are not just technology buyers — they have deeply held values around data privacy, ethical AI use, and donor trust. Blackbaud’s focus on cybersecurity and AI governance for ethical data use isn’t just a compliance checkbox — it’s a competitive moat in a market where the institutions writing the checks care deeply about how their donor data is used. A generic AI tool that scrapes public data and surfaces cold outreach isn’t an acceptable substitute for a Development Agent that operates within a trusted, governed, sector-specific system of record. The nonprofit sector’s inherent conservatism around technology is, in this framing, not a headwind for Blackbaud — it’s a barrier to entry for every competitor trying to break in.

Cloudflare

CEO Matthew Prince positioned the company at the intersection of two forces reshaping the internet simultaneously: the explosion of AI agents and the resulting question of who governs how those agents traverse the web.

Start with the fundamentals, which are genuinely excellent. Q4 revenue grew 34% year-over-year to $614.5 million — the third consecutive quarter of acceleration — with large customers contributing 73% of total revenue, dollar-based net retention jumping 9 percentage points to 120%, and million-dollar customers growing 55% year-over-year. New ACV bookings grew nearly 50% year-over-year with both year-over-year and sequential acceleration, and RPO grew 48% to $2.5 billion — the kind of forward revenue visibility that gives confidence in sustained growth. The largest annual contract value deal in company history — $42.5 million per year — closed in the quarter. This is a business hitting its stride.

But the more important conversation on the call was about what’s happening to Cloudflare’s network itself. Prince reported that over the month of January alone, the number of weekly requests generated by AI agents more than doubled across the Cloudflare network. That single data point is extraordinary in what it implies. Cloudflare sits between virtually every significant website and the internet — it processes roughly 20% of all web traffic globally. The fact that AI agent-generated traffic is doubling on a monthly basis means the traffic composition of the entire internet is changing, and Cloudflare is uniquely positioned to observe, measure, and increasingly govern that change.
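To see why a monthly doubling matters, compound it forward even a short time. A simple illustration (not a forecast, and not a claim the observed rate will hold):

```python
# Illustration of monthly doubling compounded forward (hypothetical, not a forecast).
volume = 1.0  # starting AI-agent request volume, normalized
for month in range(1, 7):
    volume *= 2
    print(f"Month {month}: {volume:.0f}x the starting volume")
# After six months of doubling, volume is 64x the starting level.
```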

This leads to the most forward-looking and genuinely novel part of the call — Prince’s framing of Cloudflare as a neutral broker between AI companies and content creators. AI models train on internet content, and the commercial relationship between those who create content and those who consume it for training purposes is still being worked out. Prince argued that AI companies and content creators alike are looking to Cloudflare as a trusted neutral third party — that both sides would rather Cloudflare figure out what the future business model looks like than have a hyperscaler do it, since hyperscalers are themselves building foundational models and may have conflicting incentives. Cloudflare’s network position — sitting in the middle of every request, trusted by both sides, with no foundational model of its own — makes it a plausible honest broker in a way that no hyperscaler or AI company can credibly claim to be.

The developer platform story adds another dimension. Cloudflare exited 2025 with more than 4.5 million human developers active on the platform — and that number will soon be joined by AI agents as first-class citizens of the Cloudflare ecosystem. Workers AI (Cloudflare’s inference-at-the-edge product), the AI Gateway (which manages and monitors AI API calls), and the growing suite of developer tools mean that Cloudflare is not just routing traffic between AI agents and the web — it’s becoming the infrastructure layer where AI applications are built, deployed, and governed. The pool-of-funds contract model, where enterprises commit a pool of spend and draw it down across any Cloudflare product, is particularly well-suited to AI workloads where consumption is hard to predict but the underlying dependency is clear.

The gross margin story requires brief acknowledgment — gross margin came in at 74.9%, slightly below the long-term target range of 75–77%, as Cloudflare allocated more network expenses to cost of revenue to better reflect AI infrastructure investment. This is the same infrastructure cost dynamic playing out across virtually every company in this series, and Cloudflare’s handling of it is conservative and transparent. With $4.1 billion in cash and full-year 2026 revenue guidance of $2.785–2.795 billion implying 28–29% growth, the balance sheet and growth profile are strong enough to absorb the investment cycle comfortably.

The bigger picture is this: Cloudflare is one of the few companies in enterprise software that isn’t just using AI or selling AI products — it’s building the infrastructure that AI itself needs to function reliably at internet scale. Every AI agent that browses the web, every model that calls an API, every application that serves AI-generated content — all of it flows through infrastructure that Cloudflare operates.


Sammy Abdullah

Managing Partner & Co-Founder
