Blossom Street

What Q4 Earnings Calls Reveal About AI's Impact on Software - updated

by Sammy Abdullah

We’re reviewing the earnings calls of 81 publicly traded SaaS companies to understand the impact AI is having on SaaS. Below we summarize the key takeaways from 13 of those earnings calls: Snowflake, Salesforce, MongoDB, ServiceTitan, ServiceNow, Microsoft, AppFolio, Palantir, Atlassian, Paylocity, Doximity, Qualys, and Bill.com. After the takeaways below, you will find full ~5 paragraph summaries of each earnings call. We will release more blogs like this as we do more work.

The big takeaways:

AI is not profitable yet. The goal for the moment is driving margin-neutral revenue. Microsoft, Salesforce, ServiceNow, and nearly every other company described margin pressures from deploying AI products. AI workloads are simply very expensive at the moment.

Direct revenue from AI is nascent for most. For some players, like Doximity, it is non-existent; for others it is still in its very early stages but growing fast. Enterprise customers do seem to be adopting and benefiting from the new AI products built by their already-critical software providers like Salesforce. There is also a category of companies that are building usage before monetizing.

Infrastructure for AI versus AI products. Some, like Snowflake, ServiceNow, and MongoDB, will win because AI is supported infrastructurally on their platforms. Others, like ServiceTitan and Salesforce, are building AI products that actually execute context-aware actions.

Trust is an issue for AI. Doximity has stopped releasing AI products until it gets them perfect, because it’s not OK for AI to misdiagnose a patient. Qualys makes a similar point in cybersecurity: being the agentic remediation layer requires a level of trust that generic AI tools can’t establish. AI errors in healthcare and cybersecurity are nearly unacceptable, and thus the bar for accuracy is higher.

Performance is strong. Quite a few of these companies, like MongoDB and ServiceNow, closed some of their largest deals ever in Q4. Others, like Atlassian, had record quarters. Companies that are selling AI products to their customers report better growth and retention among those customer cohorts.

The moat is deep. Moats for software are being built around data, workflows, governance, security, integrations, compliance, and operational knowledge of the existing customer. AI cannot stand alone; it needs to sit on top of software that manages all of the above in an enterprise-friendly manner.

Internal AI improves overall margins. Many of the companies themselves are using AI in their own operations to improve margins.

The ~5 paragraph summaries of actual calls are below. Revenue multiples in real time can be seen for all these companies at https://www.softwaremultiples.com/. Also visit https://www.blossomstreetventures.com/ for detailed financials and metrics data for all these companies.

Snowflake

Snowflake’s Q4 call told a story of a company in genuine transition, moving from being the place enterprises store and query data, to being the platform where they actually build and run AI. The numbers: product revenue grew 30% year-over-year, driven primarily by AI workloads, and accounts using AI features rose to more than 9,100, with Snowflake Intelligence, their flagship agentic product, scaling to over 2,500 accounts, nearly doubling quarter-over-quarter.

What makes Snowflake’s AI story somewhat distinct from peers is that AI is benefiting the business on both sides of the income statement. On the revenue side, larger and more strategic deals are getting done; Snowflake signed the largest deal in company history at over $400 million in total contract value, and closed seven nine-figure contracts in the quarter versus two in the same period last year. On the cost side, management reported 40% to 50% higher project margins and compressed delivery cycles through their own internal use of Snowflake Intelligence and Cortex Code, a credible proof point that software companies will be big beneficiaries of AI.

For customers, the value proposition is centered on two products. Cortex Code is used by over 4,400 customers, enabling faster development and deployment of AI workloads. Snowflake Intelligence, meanwhile, is the more agentic play, allowing enterprises to build AI-native workflows directly on top of their existing Snowflake data, without moving it elsewhere. The strategic logic is powerful: enterprises already trust Snowflake with their most sensitive data, and Snowflake is betting they’ll prefer to run AI on top of that data in place rather than pipe it out to a separate system.

The gross margin picture is the caveat. Management acknowledged that newly launched AI products currently carry lower margin profiles, and that margin expansion remains a near-term priority as efficiency improvements are realized. This is the same tension playing out across the AI infrastructure sector; serving AI workloads is compute-intensive, and the economics improve over time but aren’t fully mature yet. Snowflake guided FY27 product gross margin at 75%, roughly flat with FY26’s 75.8%.

The bigger strategic bet is on ecosystem. Snowflake expanded partnerships with Anthropic, OpenAI (a $200 million expansion), and Google Cloud, giving customers native access to leading AI models directly within the platform. Rather than picking a single model winner, Snowflake is positioning itself as the neutral data layer that works with all of them, a benefit to its enterprise customers who don’t want to be locked into a single AI vendor. If that positioning holds, Snowflake doesn’t just benefit from the AI wave; it becomes critical infrastructure beneath it.

Salesforce

Salesforce is at an inflection point with AI. Agentforce, Salesforce’s agentic AI platform, closed 29,000 deals in its first 15 months, with marquee enterprise customers like Amazon, Ford, AT&T, and Moderna signing on. Deals over $1 million were up 26% year-over-year, suggesting that AI may be driving larger, more strategic contracts.

What’s most notable is how Salesforce is trying to reframe what AI does for customers. Rather than talking about AI in terms of features or capabilities, Benioff introduced a new metric, Agentic Work Units (AWUs), to show that AI agents on the platform are completing 2.4 billion discrete units of actual work: updating records, triggering workflows, making decisions. The message was deliberate: this isn’t AI that thinks or suggests, it’s AI that executes.

On the revenue side, Agentforce ARR hit ~$800 million, up 169% year-over-year, and the broader Agentforce and Data Cloud bundle crossed $2.9 billion ARR, up over 200%. Salesforce is monetizing AI through a mix of premium SKUs, new seat additions, and consumption-based flex credits, which gives them multiple vectors to capture value as usage scales.
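As a back-of-the-envelope check on those figures, the growth arithmetic can be inverted to recover the implied year-ago base. This is our own sketch, not a number from the call; the helper name and the rounding are ours:

```python
def implied_prior_value(current, yoy_growth_pct):
    """Back out the year-ago figure implied by a current value and a YoY growth rate."""
    return current / (1 + yoy_growth_pct / 100)

# Agentforce ARR: ~$800M today, up 169% YoY (figures as reported on the call)
prior_arr = implied_prior_value(800, 169)  # in $M
print(f"Implied year-ago Agentforce ARR: ~${prior_arr:.0f}M")  # ~$297M
```

In other words, a 169% growth rate off a roughly $300 million base; the absolute dollars added in one year are what make the AI revenue line meaningful.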

The candid note was around gross margins. Token costs remain a real input expense, and Salesforce acknowledged it’s working to optimize efficiency to keep AI gross margins neutral in the near term. AI offerings are not profitable at the moment.

The bigger strategic picture is that Salesforce is betting AI transforms it from a system of record into a system of action where agents handle customer service, sales workflows, and operational decisions autonomously. If that holds, it strengthens their competitive moat considerably and raises switching costs for existing customers.

MongoDB

MongoDB is positioning to be the foundational data layer for the AI era, and the company’s job is to make sure the world knows it. Customers are excited about MongoDB’s platform strength and its ability to serve as an integrated data layer for AI agents, combining search, vector search, and embeddings in a single offering.

The most important thing to understand about MongoDB’s AI story is where they sit in the stack. Unlike Salesforce or Braze, MongoDB isn’t selling AI applications to end users. It’s providing the data infrastructure that AI applications run on. At its flagship MongoDB.local San Francisco event, the company announced the integration of its core database with embedding and reranking models from Voyage AI, creating a unified data intelligence layer for production AI, allowing developers to build sophisticated applications at scale with reduced hallucination risk and no requirement to move or duplicate data. That no-data-movement angle is a meaningful competitive argument: enterprises with sensitive data in MongoDB don’t have to copy it to a separate vector store or AI pipeline, which simplifies architecture and lowers risk. For large financial institutions and regulated industries, who are among MongoDB’s most strategic customers, that matters enormously.

The candid acknowledgment from management, however, is that AI is not yet a material driver to results, though they are encouraged by the growth they are seeing with customers leveraging AI capabilities. The number of customers using vector search and Voyage embedding models nearly doubled year over year, and AI natives, digital natives, and large enterprises are all contributing to the growth across the customer base.

What’s actually driving the strong financials right now is a combination of AI-adjacent tailwinds and MongoDB’s core enterprise momentum. In Q4 they signed an approximately $90 million deal with a large tech company planning to expand both core and AI workloads on Atlas, and a greater than $100 million deal with a large financial institution for Enterprise Advanced, the largest total contract value deal in company history. The significance of that financial institution deal is hard to overstate: large banks are notoriously conservative with infrastructure decisions, and a nine-figure commitment to MongoDB speaks to how seriously enterprises are treating their data platform choices in the context of AI readiness.

AI actually amplifies the case for MongoDB’s multi-model, developer-friendly architecture. As enterprises move from traditional transactional applications toward AI-powered agentic workflows, the data requirements become more complex, requiring search, vector similarity, document storage, and real-time access all in one place. MongoDB’s goal is to become the generational data platform of choice in the AI and multi-cloud era, and if AI application development scales the way most expect, the demand for an integrated, performant, cloud-native data layer like Atlas should grow with it. The risk is that the revenue inflection from AI workloads takes longer to materialize than investors hope, but the Q4 numbers suggest the underlying business is strong enough to carry that bet comfortably while AI catches up.

ServiceTitan

The AI story here is unusually concrete. Where most SaaS companies are talking about AI in terms of platform strategy and future optionality, ServiceTitan is reporting actual customer outcomes from a live product, and those numbers are striking.

The centerpiece is Max, which ServiceTitan is positioning not as an AI feature but as an agentic operating system for the trades, meaning it’s designed to autonomously orchestrate end-to-end workflows across demand generation, dispatch, quoting, payments, and back-office operations for HVAC, plumbing, electrical, and other field service businesses. The first deployment cohort of Max customers saw a 50% increase in average ticket size, one customer achieved over 50% revenue growth in a single month, and another increased EBITDA margin from 18% to 30% while reducing office headcount. And management went further: customers on Max will roughly double their monthly subscription revenue when fully ramped, an effect not driven by technician expansion, meaning the value is coming from operational efficiency and higher revenue capture per job, not simply from adding more workers.

What makes ServiceTitan’s AI positioning particularly defensible is the data moat underlying it. The company leverages proprietary structured data from over $80 billion in annual transaction volume to drive automation and outcome improvements. This is a critical competitive point that CEO Ara Mahdessian leaned into when analysts asked about AI-native startups competing for the same customers. A point-solution AI tool built for the trades can optimize one workflow but it has no context on how that workflow connects to the rest of the business. ServiceTitan’s integrated platform, spanning marketing, scheduling, dispatch, invoicing, and payroll, creates a contextual data layer that generic AI tools simply can’t replicate from the outside.

The second AI product gaining traction is Virtual Agents: AI-based modules that handle inbound call management and appointment booking, especially during call surges or after normal business hours. For a trades business, missed calls during peak season or after hours are direct revenue losses. A potential customer who can’t book gets routed to a competitor. Virtual Agents directly plug that gap, and because they’re priced as a consumption product, they create a new usage revenue stream that management believes could grow faster than GTV in fiscal 2027.

The honest caveat is that Max is still early-stage and capacity-constrained. ServiceTitan plans to double Max capacity in Q1 FY2027, with scaling tied to onboarding efficiency and customer success, which is a signal that the bottleneck right now is deployment and training, not demand. To accelerate on all fronts, the company hired Abhishek Mathur from Figma, Meta, and Microsoft as Chief Technology and Product Officer, specifically to drive organizational and technology velocity around AI initiatives. FY2027 is also expected to represent the company’s largest R&D investment ever, with explicit focus on AI inference and internal tooling. The trades are not typically an industry associated with cutting-edge software, which is precisely why ServiceTitan’s moat both in data and in customer relationships could prove so durable as AI raises the stakes for what field service software can actually do.

ServiceNow

If there’s one company in enterprise software that has the most fully-formed AI story right now, it’s ServiceNow. CEO Bill McDermott came into this call on offense, explicitly addressing the “AI will eat software” narrative that has spooked investors across the sector and flipping it on its head. His argument was direct: enterprise AI will be the largest driver of return on the multitrillion-dollar super cycle of AI infrastructure investment, and the real payoff comes when tokens move beyond pilots and get embedded directly into the workflows where business decisions are made, with ServiceNow serving as the semantic layer that makes AI ubiquitous in the enterprise.

The numbers behind Now Assist, ServiceNow’s AI product suite, are the most concrete AI monetization metrics in this cohort. Now Assist ACV surpassed $600 million, more than doubling year-over-year in Q4, with deals over $1 million nearly tripling quarter-over-quarter and 35 such deals closing in Q4 alone. Enterprises aren’t just adding AI as a line item but are making meaningful commitments to it within the ServiceNow platform. The AI control tower, which allows enterprises to govern and orchestrate AI agents across the business, grew over 4x its 2025 targets, another signal that customers are moving from individual AI use cases to thinking about AI management as a platform-level problem.

What ServiceNow is really selling is AI governance as much as AI capability. As enterprises deploy more agents someone has to be in charge of orchestrating them, monitoring them, and ensuring they don’t go rogue or create compliance exposure. ServiceNow is positioning its platform as that orchestration layer, what they call the “universal agentic network” built on MCP and Workflow Data Fabric. Monthly active users grew 25% and the number of workflows and transactions processed on the platform increased over 33% each, reaching 80 billion workflows and 6.4 trillion transactions.

ServiceNow doesn’t want to bet on any single model winning; rather, it wants to be the workflow layer that works with all of them, insulating itself from model commoditization while capturing value from enterprise adoption regardless of which AI providers customers prefer.

The honest tension in the ServiceNow story is pricing and gross margin. Management acknowledged some gross margin headwinds from hyperscaler and AI infrastructure choices, and the shift to a hybrid pricing model, combining traditional subscription with consumption-based AI usage, introduces some revenue variability that investors are still getting comfortable with. But with $15.5 billion in subscription revenue guided for 2026 at 19–20% growth, a 32% operating margin, and 36% free cash flow margins, ServiceNow is arguably the most financially powerful pure-play on enterprise AI adoption in the market right now. The core business is robust enough that they can absorb the cost of building out AI infrastructure while competitors are still figuring out their strategy.

Microsoft

Microsoft’s call was a declaration that AI is now the gravitational center of the entire business. Satya Nadella opened with a striking statement: “We are only at the beginning phases of AI diffusion and already Microsoft has built an AI business that is larger than some of our biggest franchises.” When you consider that Microsoft’s “biggest franchises” include Office, Windows, and Xbox, each a multi-billion-dollar business, that’s an extraordinary claim. And the numbers behind it are hard to argue with: Microsoft Cloud crossed $50 billion in quarterly revenue for the first time, up 26%, while Azure grew 39%, the acceleration driven explicitly by AI workloads.

The AI story at Microsoft operates on three distinct layers, each reinforcing the others. The first is infrastructure. Capital expenditures hit $37.5 billion in the quarter, with roughly two-thirds allocated to short-lived assets like GPUs and CPUs. Management introduced a new internal metric, tokens per watt per dollar, as their guiding optimization target for AI infrastructure, and reported a 50% increase in throughput on OpenAI inferencing due to infrastructure advances.

The second layer is the Copilot product suite, where AI is directly touching customers at scale. Microsoft 365 Copilot reached 15 million paid seats, with over 160% seat growth year-over-year and a tripling in the number of customers with over 35,000 seats, a signal that enterprise adoption is moving from departmental pilots to company-wide deployments. Average user conversations doubled and daily active users grew 10x. GitHub Copilot reached 4.7 million paid subscribers, up 75% year-over-year, with individual Copilot Pro Plus subscriptions growing 77% sequentially.

The honest tension is on margins. Gross margins declined slightly, driven by continued AI infrastructure investments and growing AI product usage, and management guided for operating margins to be down slightly year-over-year in Q3. Microsoft is, in effect, spending today to capture a revenue curve that extends well into the next decade, and the Q2 numbers suggest that curve is steeper than almost anyone expected.

AppFolio

AppFolio’s Q4 call told a quietly compelling story for understanding how AI actually changes the economics of a vertical SaaS business. AppFolio serves property managers, a customer base that has historically been underserved by software and skeptical of hype. The fact that 98% of AppFolio’s customers are already actively using one or more AI capabilities included in the platform, against an industry backdrop where half of AI users in property management report they can’t actually rely on the AI features in their core system, is a striking market differentiation signal. It suggests AppFolio has threaded a needle that most vertical SaaS companies are still struggling with: making AI functional and trusted at the ground level.

AppFolio is repositioning its platform around three layers: a system of record, a system of action, and a system of growth, with agentic AI embedded directly into daily operations and the explicit goal of enabling customers to evolve from property managers to performance managers. That reframing reflects a fundamental shift in what AppFolio is selling: not software that helps you manage properties, but a platform that actively improves the financial performance of your property management business. Adoption of premium tiers has already exceeded 25%, a meaningful indicator that customers are upgrading to capture AI-driven capabilities.

The business impact of AI is showing up directly in AppFolio’s financials. Non-GAAP operating margin expanded to 24.9% in Q4, up from 20.2% a year earlier, a nearly 500 basis point improvement that reflects both revenue growth and the operating leverage that comes from AI-driven efficiency gains across the platform itself.
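The basis-point framing is easy to verify. A quick sketch of the conversion (our own helper, using the reported Q4 margins):

```python
def margin_improvement_bps(new_pct, old_pct):
    """Margin change expressed in basis points (1% = 100 bps)."""
    return round((new_pct - old_pct) * 100)

# Non-GAAP operating margin: 24.9% in Q4 vs 20.2% a year earlier
print(margin_improvement_bps(24.9, 20.2))  # 470 bps, i.e. "nearly 500"
```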

What makes AppFolio particularly interesting from an investment lens is that their AI advantage is compounding in a way that’s structurally hard to replicate. 45% of survey respondents in the property management industry say they plan to consolidate their software solutions, and AppFolio is the natural consolidation destination, precisely because they’ve embedded AI deeply enough that customers would lose significant operational capability by leaving. The moat here isn’t just the product; it’s the AI-trained workflows, the resident data, and the operational patterns that accumulate the longer a customer stays on the platform. For a vertical SaaS business approaching $1 billion in revenue, that’s a durable competitive position.

Palantir

Palantir’s Q4 2025 call was unlike any other earnings call in enterprise software this cycle partly because of the numbers, which were objectively extraordinary, and partly because Alex Karp delivers earnings calls the way a wartime general addresses troops, not the way a CFO addresses analysts. But strip away the theater and what you find is a company that has, almost overnight, become one of the defining stories of what enterprise AI actually looks like when it works at production scale.

The headline numbers require context to be fully appreciated. Q4 revenue grew 70% year-over-year, the highest growth rate since Palantir went public, representing a 3,400 basis point acceleration versus Q4 of the prior year. This isn’t a company maintaining a high growth rate; it’s a company where the growth rate itself is accelerating sharply. Full-year 2025 revenue grew 56%, and the company is guiding full-year 2026 revenue of $7.19 billion, representing 61% growth.
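To make those figures concrete, the stated acceleration and guidance can be unwound back to their implied baselines. This is our own arithmetic sketch, not numbers Palantir disclosed directly:

```python
def implied_prior_growth(current_growth_pct, acceleration_bps):
    """Prior-period growth rate implied by current growth and an acceleration quoted in bps."""
    return current_growth_pct - acceleration_bps / 100

def implied_base(guided_revenue, growth_pct):
    """Current-year revenue implied by next-year guidance and its growth rate."""
    return guided_revenue / (1 + growth_pct / 100)

# 70% Q4 growth with a 3,400 bp acceleration implies ~36% growth a year ago
print(implied_prior_growth(70, 3400))        # 36.0
# A $7.19B FY2026 guide at 61% growth implies ~$4.47B of FY2025 revenue
print(round(implied_base(7.19, 61), 2))      # 4.47
```

Roughly doubling the growth rate off a multi-billion-dollar base is what makes the acceleration story unusual.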

The company closed 61 deals greater than $10 million in the quarter, and management cited multiple examples of customers signing $80 to $96 million contracts within months of initial engagement, a compressed deal cycle that reflects genuine organizational conviction.

The US vs. international divergence on the call was striking and strategically revealing. US revenue grew 93% year-over-year and 22% sequentially, while international commercial revenue grew just 8% year-over-year, a massive gap that management was candid about.

The government side of the business is also worth understanding in the context of AI impact. US government revenue grew 66% year-over-year, driven in part by a massive $10 billion Army software contract signed last summer and a $448 million Navy contract for shipbuilding supply chain modernization. Karp was unambiguous about what these contracts represent: not just data analytics or workflow tools, but AI systems that are actively changing the operational capabilities of the US military. He argued that AI implementations in the defense context have changed what warfighters are able to do.

The risk most frequently raised by skeptical analysts on the call is whether the US commercial acceleration is sustainable or whether it represents a concentrated burst of pent-up demand that will moderate. Palantir’s answer is essentially that AI-driven enterprise transformation is still in very early innings, that their pipeline of committed deal value reached $11.2 billion (up 105% year-over-year), and that the real constraint on growth is their own capacity to onboard and deliver for customers, not demand.

Atlassian

CEO Mike Cannon-Brookes has been saying “AI is the best thing that’s ever happened to Atlassian” for several quarters, and this quarter he finally had the numbers to fully back it up. Atlassian delivered its first-ever $1 billion cloud revenue quarter, with cloud up 26% year-over-year, and surpassed $6 billion in annual run rate revenue, a milestone that felt like a culmination of years of patient investment in infrastructure that is now paying off precisely because AI workloads need exactly what Atlassian has built.

The strategic logic of Atlassian’s AI position is distinct from most others in this series. Rather than building AI features on top of an existing product, Atlassian is arguing that AI needs Atlassian; that the work tracking, planning, and organizational knowledge embedded in Jira and Confluence becomes more valuable, not less, as AI proliferates. The Teamwork Graph, which now contains well over 100 billion objects and connections across first and third-party tools, is the context layer that enables Rovo, Atlassian’s AI assistant, to deliver business value that is actually context-aware and actionable, rather than generic. That’s a meaningful competitive claim: a new AI tool can generate text or answer questions, but it can’t tell you which Jira tickets are blocking your sprint, who owns which decision, or how a current workflow has historically performed across 350,000 customers. Atlassian can.

The proof that customers believe this argument is in the adoption metrics. Atlassian’s Teamwork Collection, the AI-powered bundle that serves as the company’s primary AI monetization vehicle, surpassed 1 million seats sold in under nine months, with more than 1,000 customers upgrading, and the company closed a record number of deals over $1 million in ACV, nearly doubling year-over-year. Critically, customers using AI code generation tools create 5% more Jira tasks, have 5% higher monthly active users, and expand Jira seats 5% faster than those not using AI tools, a clean, data-driven demonstration that AI adoption drives more usage of the core platform, not less.

The seat-based pricing debate hung over the call: analysts probed whether consumption-based pricing could erode Atlassian’s model as AI agents proliferate and “seats” become a less meaningful unit. Atlassian’s response is essentially that customers want predictability, and seat-based pricing delivers it, while the Teamwork Collection bundles AI credits on top of seats in a way that captures consumption upside without forcing customers into open-ended consumption risk. RPO grew 44% year-over-year to $3.8 billion, accelerating for the third consecutive quarter.

The competitive angle on the call was also striking. When asked about new AI tools, including tools from Anthropic, potentially challenging Jira, the CEO was notably unbothered. He noted Atlassian considers Anthropic a partner, using their models within the platform, and argued that new AI tools will emerge but Atlassian’s Teamwork Graph and deep workflow integration provide differentiation that generic AI tools can’t replicate. The focus, he said, remains on human-AI collaboration for complex work: the kind that requires organizational context, compliance, security, and integration that a standalone AI tool can’t provide.

Qualys

The cybersecurity industry is entering a new phase where the speed of attacker exploitation has outpaced the speed of human response, and the only viable answer is autonomous, AI-driven remediation. As threat actors continue to compress time-to-exploit, Qualys believes the next phase of pre-breach risk management will be defined by an agentic AI-driven risk fabric with out-of-the-box business quantification and automated remediation to respond at the speed of threats.

The most significant product announcement on the call was the launch of the AI-native Risk Operations Center which the CEO positioned explicitly as a new category in cybersecurity, designed to centralize an organization’s entire threat response posture. The argument is that the traditional SOC (Security Operations Center) is reactive: it responds to breaches that have already happened. Qualys’s ROC is designed as a pre-breach capability that unifies Continuous Threat Exposure Management with exploit confirmation, risk quantification, and automated remediation, all powered by agentic AI that can act without waiting for a human to review and approve each step. The competitive shot embedded in the CEO’s commentary was pointed and deliberate: he argued that competitors focusing on exposure management can’t win the AI fight if they’re still routing remediation through Jira tickets and ServiceNow tickets. Autonomous decision-making and execution is the actual differentiator, not just identifying vulnerabilities.

On the customer impact side, Qualys’s AI story is still more about where the market is heading than where revenue is today. The ETM (Enterprise TruRisk Management) platform, which serves as the foundation for the agentic AI capabilities, represented 10% of total bookings and 13% of new bookings, up from 8% and 9% previously, a meaningful directional signal but still a small share of the overall business. Patch Management, which is the remediation capability that AI agents actually execute against vulnerabilities, represented 8% of total bookings and 16% of new bookings. Together these metrics point toward a platform transition that’s underway but early. Customers are beginning to adopt the agentic workflow, but the majority of the revenue base is still anchored in the traditional vulnerability management and VMDR products.

The financial picture is disciplined and somewhat unusual relative to the rest of the companies in this series. Full-year 2025 revenue reached $669 million, up 10%, with a 47% adjusted EBITDA margin, exceptional profitability for a company of this size. The 2026 guidance of 7–8% revenue growth is conservative by enterprise software standards, and management was candid that it reflects investment in AI infrastructure, sales and marketing expansion, and federal sector buildout, all of which compress near-term margin slightly.

What makes Qualys particularly interesting from an AI lens is the specificity of its use case. The combination of asset discovery, vulnerability identification, exploit confirmation, risk quantification, and automated patch deployment is one of the few enterprise software workflows where agentic AI is not just useful but arguably necessary for the product to work as intended. No human team can keep up with the velocity of modern vulnerability exploitation. Qualys differentiates by offering integrated patch management and autonomous workflows that allow customers to quickly remediate vulnerabilities.

Paylocity

The most concrete AI signal on the call was deceptively simple: average monthly usage of Paylocity’s AI assistant increased over 100% quarter-over-quarter. That’s a significant sequential jump in a relatively short period. Paylocity recently expanded its AI assistant into HR rules and regulations, tapping into more than 200 IRS and Department of Labor knowledge sources to provide administrators with guidance on tax and labor regulations. For the HR administrators who are Paylocity’s primary users, typically generalists at companies of 50 to 500 employees, the ability to ask a question and get a compliance-grounded answer without calling a lawyer or spending an hour on the IRS website is genuinely valuable.

What’s particularly interesting about Paylocity’s AI strategy is the dual deployment model: AI for customers, and AI for Paylocity itself. Within the operations team, Paylocity is leveraging AI to drive down client case volumes, automate client interactions and case routing, and perform sentiment analysis to flag urgent cases for faster response. This internal AI efficiency play is showing up in the financials: adjusted gross margin expanded 60 basis points year-over-year to 74.4% in Q2, and operating expenses are growing slower than revenue. This is a company guiding to $622–630 million of adjusted EBITDA on $1.74 billion in revenue, roughly a 36% margin.

Paylocity’s position as a system of record allows it to connect data to other systems via APIs, increasing platform utilization, and customer time savings from AI features lead directly to opportunities for upselling more modules, enhancing the experience and driving revenue growth. This is a different monetization path than many peers: rather than charging directly for AI as a premium SKU, Paylocity is betting that AI-driven engagement deepens the platform relationship, makes customers more likely to expand into adjacent modules, and reduces the churn that has historically been the primary growth constraint in HCM.

Paylocity is operating in a slower-growth regime than most of this peer group. Full-year 2026 revenue guidance of $1.73–1.74 billion represents 9% growth, and recurring revenue is expected to grow 10–11%. That’s solid but not the acceleration story investors see at Atlassian or Salesforce. The macro factor management monitors most closely is employment levels at client companies, since Paylocity’s revenue is partly tied to headcount. Management noted employment levels have been stable with no significant changes expected. AI is helping Paylocity expand revenue per client and improve efficiency, but it isn’t yet the kind of step-change growth driver that reshapes the growth profile.

Doximity

Where nearly every other company in this series was accelerating AI investment and racing to monetize, Doximity was doing something much rarer: pumping the brakes on AI commercialization deliberately, citing patient safety, and then watching its stock drop nearly 24% the next day as the market punished the caution. The divergence between what management said and what the market wanted to hear was striking.

The platform fundamentals are strong by virtually any measure. Doximity surpassed 3 million registered members, with more than 85% of all US physicians and two-thirds of NPs and PAs on the platform, and record usage across daily, weekly, monthly, and quarterly active user metrics. Over 300,000 unique prescribers used AI products in Q3, and January saw an average of four AI queries per prescriber per week for Docs GPT. More than 100 top US health systems have purchased the AI suite, granting access to over 180,000 prescribers. These are impressive adoption numbers for a product that is not yet generating any revenue.

And that last sentence is the crux of the investor tension. No AI revenue is included in current guidance; commercial AI products are expected to launch later in the year. Doximity is building substantial AI infrastructure, investing in usage that is already compressing gross margins from 93% to 91%, and has over 300,000 physicians actively using the product, but is explicitly choosing not to monetize it yet. The reason CEO Jeff Tangney gave was pointed and substantive: a recent Stanford-Harvard study found AI can cause clinical harm in up to 22% of real patient cases, and that overconfident models make those errors harder to spot. In response, Doximity built Peer Check, a clinical peer review layer co-led by renowned physician-scientists Eric Topol and Regina Benjamin, with more than 10,000 US physician experts reviewing AI-generated clinical answers before they’re deployed at scale. The message was clear: Doximity will not monetize AI until it trusts that the AI is safe enough for physicians to rely on without second-guessing every output.

This is a genuinely different philosophy. Healthcare AI occupies a unique risk category: an error in a marketing recommendation costs a company a customer; an error in a clinical AI recommendation can cost a patient their life. Doximity’s caution isn’t timidity; it’s arguably the only responsible posture for a company whose platform is used by 85% of America’s physicians. The commercial AI launch later in fiscal 2026 will be a significant test: can Doximity translate trust, physician habit, and clinical safety credibility into a monetization model that the market will reward?

The longer-arc story here is one of deliberate sequencing: build the trust, earn the habit, then charge for it. Doximity’s net revenue retention was 112% overall and 117% for the top 20 customers, evidence that clients who deepen their engagement with the platform expand their spend meaningfully. If the AI suite launches commercially later this year with genuine physician trust behind it, and if the pharma headwinds stabilize, the combination of 85% physician coverage, proven engagement habits, and a safety-validated AI product could represent a monetization inflection.

BILL

BILL’s call told a story that sits at an interesting intersection: a company that has built a genuinely useful platform for SMB financial operations, is now threading AI throughout it, and is simultaneously facing an existential investor question about whether AI startups could eventually disintermediate it entirely.

The core business is performing solidly. Core revenue grew 17% year-over-year to $375 million in Q2, total payment volume reached $95 billion, up 13%, and transactions processed grew 16% to 35 million. Nearly 500,000 businesses now use the platform, with over 9,500 accounting firms embedded in the network. Multiproduct adoption grew 28% year-over-year, with businesses using both AP/AR and Spend & Expense solutions, a meaningful indicator that customers are deepening their reliance on BILL as a financial operations platform rather than a single-purpose payments tool.

On AI, BILL is pursuing a two-track strategy that distinguishes between AI-for-customers and AI-for-BILL-itself. On the customer side, the company introduced agentic capabilities including a W-9 Agent for vendor management and a coding agent for invoice processing: specific, transactional use cases where AI can eliminate manual work that currently slows down SMB finance teams. As CEO René Lacerte framed it: “AI will allow us to dive deeper into the stack of transactional confusion and simplify it.” For SMB owners and bookkeepers who spend hours reconciling invoices, chasing down vendor information, and coding transactions to the right GL accounts, that promise is valuable.

On the internal efficiency side, BILL developed a roadmap of AI-driven productivity initiatives, covering developer productivity, internal team automation, and go-to-market optimization, with initial benefits expected to start flowing through in fiscal 2027. This is a multi-year cost structure story.

Analysts pressed on whether AI-native startups could undercut BILL’s position by building cheaper, simpler financial operations tools. Lacerte’s response centered on three moats: deep expertise in financial operations built over two decades, a proprietary data set from processing over $1 trillion in payments that enables superior risk models, and network effects from 8 million entities connected through the BILL ecosystem. The data moat argument is particularly credible: BILL’s ability to assess payment risk, predict cash flow patterns, and detect fraud is a function of seeing an enormous volume of SMB financial transactions over time. A new entrant with a better AI model but no transaction history is structurally disadvantaged in the trust and risk management dimensions that matter most when you’re moving real money for small businesses.

The honest challenge for BILL is growth rate moderation. Full-year core revenue guidance was raised but implies 14–15% growth for the year: respectable, but a deceleration from prior years and well below the trajectory of more AI-accelerated peers in this series. The company is deliberately shifting focus toward larger SMBs and improving customer unit economics rather than maximizing new customer count, a sensible strategic move but one that introduces near-term headwinds. The AI monetization story, where agentic capabilities translate into higher ARPU from existing customers, is still in its early innings. If BILL can demonstrate over the next few quarters that AI-driven automation is genuinely expanding what customers are willing to pay, the multiple compression the stock has experienced could look like an opportunity in retrospect. But that proof is still ahead, not behind.

Sammy Abdullah

Managing Partner & Co-Founder
