Blossom Street

40 SaaS Earnings Calls show AI will be the biggest boon to the space

by

Sammy Abdullah

We’re convinced the biggest beneficiaries of AI will be enterprise software companies. To understand exactly how AI is reshaping enterprise SaaS, we reviewed the earnings calls of 40 publicly traded SaaS companies: Snowflake, Salesforce, MongoDB, ServiceTitan, ServiceNow, Microsoft, AppFolio, Palantir, Atlassian, Paylocity, Doximity, Qualys, Bill.com, ZoomInfo, Dynatrace, Monday.com, Blackbaud, Cloudflare, Freshworks, Klaviyo, Datadog, Q2 Holdings, Shopify, HubSpot, Paycom, Unity, Twilio, Procore, JFrog, SPS Commerce, Waystar, SimilarWeb, Amplitude, Figma, Weave, RingCentral, Workiva, Five9, Alarm.com, and Appian.

 

What follows is a summary of what management teams are actually saying about AI's impact on their businesses, their customers, and their competitive positioning. We'll release additional installments as our work continues (we have 81 earnings reports to get through). We believe this is the most comprehensive analysis of enterprise software earnings calls published this cycle, and what we’re learning cuts against the grain: enterprise software is going to be the biggest beneficiary of the move to agentic AI.

 

Big takeaways are below:

 

 

AI is not profitable yet. The goal for the moment is driving margin-neutral revenue. Microsoft, Salesforce, ServiceNow, and nearly every other company described margin pressure from deploying AI products. AI workloads are simply very expensive at the moment. They’re also complex; Appian reported a significant jump in services revenue because these AI deployments aren’t just copilots. Serious production-grade deployments at the enterprise level require deep integration work, data governance, and process design.

 

Direct revenue from AI is nascent for most, especially relative to total revenue. It is non-existent for some players like Doximity, and in its very early stages but growing fast at others like Freshworks. That said, enterprise customers are adopting and benefiting from the new AI products built by their already-critical software providers like Datadog, Salesforce, and Monday.com. There is also a category of companies that are building usage before monetizing, like Figma. No company we’ve observed is yet making AI products a significant part of any forecast, even though they talk extensively about positive AI impact. The software companies are being very conservative in forecasting, to their detriment in the markets, but that’s because the sales cycle is different, pricing is new, deal sizes are bigger, and both customers and software vendors are figuring out this new AI thing together.

 

Monetization is changing. The pricing conversation for AI is largely moving from "per seat" to "per outcome" or "per agent deployed.” Procore, for instance, wants to price its AI on construction dollar volume, which would insulate it from employee count reductions. Some, like Atlassian, are sticking hard to the per-seat model.
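To make the contrast concrete, here’s a toy sketch (all numbers and function names are hypothetical, not from any earnings call) of why outcome- or volume-based pricing insulates a vendor from seat reductions:

```python
def per_seat_revenue(seats: int, price_per_seat: float) -> float:
    """Traditional SaaS pricing: vendor revenue scales with customer headcount."""
    return seats * price_per_seat

def per_outcome_revenue(volume: float, take_rate: float) -> float:
    """Outcome-based pricing: vendor revenue scales with business volume
    (e.g., construction dollars processed), not with seats."""
    return volume * take_rate

# A customer cuts office staff from 100 to 80 seats after deploying AI...
before = per_seat_revenue(100, 50.0)  # $5,000/mo
after = per_seat_revenue(80, 50.0)    # $4,000/mo: the vendor loses 20%

# ...but the customer's processed volume grows as AI lifts throughput.
vol_before = per_outcome_revenue(1_000_000, 0.005)  # $5,000/mo
vol_after = per_outcome_revenue(1_200_000, 0.005)   # $6,000/mo: the vendor gains 20%
```

Under per-seat pricing, AI-driven headcount reduction is a direct revenue headwind for the vendor; tying price to volume flips that exposure.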

 

Infrastructure for AI versus AI products. Some, like Snowflake, ServiceNow, MongoDB, Twilio, Datadog, and Cloudflare, will win because AI is supported infrastructurally on their platforms. Some, like Dynatrace, believe AI makes their product more compelling than ever (in their case, observability and monitoring of AI). Others, like ServiceTitan and Salesforce, are building AI products that actually execute context-aware actions. AI as an assistant or copilot has been de-emphasized.

 

Trust is an issue for AI. Doximity has stopped releasing AI products until they get them perfect, because it’s not okay for AI to misdiagnose a patient. Qualys makes a similar point in cybersecurity: being the agentic remediation layer requires a level of trust that generic AI tools can't establish. AI errors in healthcare and cybersecurity are near unacceptable, and thus the bar for accuracy is higher.

 

Performance is strong. Quite a few of these companies, like MongoDB, Cloudflare, ServiceNow, and Datadog, closed some of their largest deals ever in Q4. Others, like Atlassian, had record quarters. Those companies selling AI products to their customers report better growth and retention among those customer cohorts. On the other hand, there are companies like ZoomInfo experiencing serious disruption, with growth falling to near zero.

 

The moat is very high. Moats for enterprise software are being built around proprietary, non-public, and sensitive historical customer data; workflows; governance; security; integrations; compliance; vendor trust, especially in highly regulated industries like healthcare (Waystar cited this); and operational knowledge of the existing customer. AI cannot stand alone; it needs to sit on top of software that manages all of the above in an enterprise-friendly manner. Additionally, any of these software companies building agents have a serious edge, because those agents are trained on enormous repositories of historical customer data that an AI startup will not have. All that said, SaaS companies focused on SMB customers, which have much less internal complexity and are an easier lift, could face a serious threat from AI that lets customers do internal builds, or from AI startups; Monday.com and ZoomInfo cited issues in their SMB customer bases. SMBs have simpler workflows, less institutional complexity, and lower switching costs.

 

Sitting near the data is especially advantageous. A number of companies, like Five9 and Appian, report their customers have to solve data hygiene and silo issues before AI can be deployed effectively. This is a prerequisite, so if you’re a software company sitting near or on top of the data already, you’ve got an edge over an upstart.

 

Internal AI improves overall margins. Many of the companies themselves, such as Blackbaud and Klaviyo, are using AI in their own operations to improve margins. For SaaS, the value of AI is a margin expansion story as much as a revenue growth story, and the market is underweighting both. And given almost every company built minimal to zero AI revenue into forward guidance, the opportunity for outperformance and multiple expansion is significant.

 

More AI will increase the need for existing software. AI needs software to operate more efficiently. As the Appian CEO put it, “AI without workflows is chaos.” For companies that sit on the measurement, observability, and analytics layers, like Amplitude, Datadog, Dynatrace, and Snowflake, AI-driven development increases demand for incumbent software. AI agents are also becoming a new customer or user. The consumer of enterprise data platforms is no longer just a human analyst or developer; it's an AI agent querying autonomously, continuously, at scale. That consumption will become monetization for software companies. Even the foundational AI companies themselves are customers: Datadog has 14 of the top 20 AI-native companies as customers, and Amplitude has 25 AI-native customers above $100K ARR, with one frontier lab at seven figures.

 

AI is giving software companies new ways to really show their ROI. Examples include Waystar ($15B in prevented denials), ServiceTitan (18-point EBITDA margin improvement for Max customers), Klaviyo (50% higher open rates, 40% higher revenue per campaign), and HubSpot (2x meetings booked for Customer Agent users). They are using those outcomes to justify both higher prices and faster expansion within accounts. Workflows that are mission-critical and financially consequential to the customer are seeing real agentic adoption from software companies.

 

Build vs. buy. Customers are quickly concluding they don’t want to build AI internally, especially those in regulated industries. RingCentral notes that engineering talent and customer compliance issues make building AI in-house unattractive; why not trust an existing software vendor? Waystar noted a similar dynamic. Customers are not building AI point solutions; rather, they’re turning to their existing software vendors.

 

The legacy business is funding the excitement. Quite a few of the publics have a dynamic whereby their legacy business, which is slow-growth but generating free cash flow, is funding an exciting but nascent AI platform or product. Dropbox, RingCentral, and SPS Commerce all fit into this. These companies are re-inventing themselves with AI, not being destroyed by it. That transition may take time, but if their theses are right, it has tremendous upside for the patient investor.

 

Summaries of actual earnings calls are below. Revenue multiples in real time can be seen for all of these companies at https://www.softwaremultiples.com/. Also visit https://www.blossomstreetventures.com/ for detailed financials and metrics data for all of these companies.

 

Thank you for your readership. Email the author directly at sammy@blossomstreetventures.com.

Snowflake

Snowflake's Q4 call told a story of a company in genuine transition, moving from being the place enterprises store and query data to being the platform where they actually build and run AI. The numbers: product revenue grew 30% year-over-year, driven primarily by AI workloads, and accounts using AI features rose to more than 9,100, with Snowflake Intelligence, their flagship agentic product, scaling to over 2,500 accounts, nearly doubling quarter-over-quarter.

 

What makes Snowflake's AI story somewhat distinct from peers is that AI is benefiting the business on both sides of the income statement. On the revenue side, larger and more strategic deals are getting done; Snowflake signed the largest deal in company history at over $400 million in total contract value, and closed seven nine-figure contracts in the quarter versus two in the same period last year. On the cost side, management reported 40% to 50% higher project margins and compressed delivery cycles through their own internal use of Snowflake Intelligence and Cortex Code, a credible proof point that software companies will be big beneficiaries of AI.

 

For customers, the value proposition is centered on two products. Cortex Code is used by over 4,400 customers, enabling faster development and deployment of AI workloads. Snowflake Intelligence, meanwhile, is the more agentic play, allowing enterprises to build AI-native workflows directly on top of their existing Snowflake data without moving it elsewhere. The strategic logic is powerful: enterprises already trust Snowflake with their most sensitive data, and Snowflake is betting they'll prefer to run AI on top of that data in place rather than pipe it out to a separate system.

 

The gross margin picture is the caveat. Management acknowledged that newly launched AI products currently carry lower margin profiles, and that margin expansion remains a near-term priority as efficiency improvements are realized. This is the same tension playing out across the AI infrastructure sector; serving AI workloads is compute-intensive, and the economics improve over time but aren't fully mature yet. Snowflake guided FY27 product gross margin at 75%, roughly flat with FY26's 75.8%.

 

The bigger strategic bet is on ecosystem. Snowflake expanded partnerships with Anthropic, OpenAI (a $200 million expansion), and Google Cloud, giving customers native access to leading AI models directly within the platform. Rather than picking a single model winner, Snowflake is positioning itself as the neutral data layer that works with all of them, a benefit to its enterprise customers who don't want to be locked into a single AI vendor. If that positioning holds, Snowflake doesn't just benefit from the AI wave; it becomes critical infrastructure beneath it.

 

Salesforce

Salesforce is at an inflection point with AI. Agentforce, Salesforce's agentic AI platform, closed 29,000 deals in its first 15 months, with marquee enterprise customers like Amazon, Ford, AT&T, and Moderna signing on. Deals over $1 million were up 26% year-over-year, suggesting that AI may be driving larger, more strategic contracts.

 

What's most notable is how Salesforce is trying to reframe what AI does for customers. Rather than talking about AI in terms of features or capabilities, Benioff introduced a new metric, Agentic Work Units (AWUs), to show that AI agents on the platform are completing 2.4 billion discrete units of actual work: updating records, triggering workflows, making decisions. The message was deliberate: this isn't AI that thinks or suggests, it's AI that executes.

 

On the revenue side, Agentforce ARR hit ~$800 million, up 169% year-over-year, and the broader Agentforce and Data Cloud bundle crossed $2.9 billion in ARR, up over 200%. Salesforce is monetizing AI through a mix of premium SKUs, new seat additions, and consumption-based flex credits, which gives them multiple vectors to capture value as usage scales.

 

The candid note was around gross margins. Token costs remain a real input expense, and Salesforce acknowledged it's working to optimize efficiency to keep AI gross margins neutral in the near term. AI offerings are not profitable at the moment.

 

The bigger strategic picture is that Salesforce is betting AI transforms it from a system of record into a system of action, where agents handle customer service, sales workflows, and operational decisions autonomously. If that holds, it strengthens their competitive moat considerably and raises switching costs for existing customers.

 

MongoDB

MongoDB is positioning to be the foundational data layer for the AI era, and the company's job is to make sure the world knows it. Customers are excited about MongoDB's platform strength and its ability to serve as an integrated data layer for AI agents, combining search, vector search, and embeddings in a single offering.

 

The most important thing to understand about MongoDB's AI story is where they sit in the stack. Unlike Salesforce or Braze, MongoDB isn't selling AI applications to end users. It's providing the data infrastructure that AI applications run on. At its flagship MongoDB.local San Francisco event, the company announced the integration of its core database with embedding and reranking models from Voyage AI, creating a unified data intelligence layer for production AI, allowing developers to build sophisticated applications at scale with reduced hallucination risk and no requirement to move or duplicate data. That no-data-movement angle is a meaningful competitive argument: enterprises with sensitive data in MongoDB don't have to copy it to a separate vector store or AI pipeline, which simplifies architecture and lowers risk. For large financial institutions and regulated industries, who are among MongoDB's most strategic customers, that matters enormously.
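As a rough illustration of that no-data-movement argument, here is a sketch of a similarity query built with Atlas's `$vectorSearch` aggregation stage, which runs against documents where they already live. The index name, field paths, and embedding values are our placeholders, not anything MongoDB disclosed:

```python
# Hypothetical sketch: an AI retrieval query against data already in Atlas,
# with no copy to a separate vector store. Index/field names are placeholders.
def build_vector_search_pipeline(query_embedding, k=5, num_candidates=100):
    """Build an aggregation pipeline returning the k documents most similar
    to the query embedding, straight from the operational database."""
    return [
        {
            "$vectorSearch": {
                "index": "embeddings_index",      # hypothetical index name
                "path": "embedding",              # hypothetical vector field
                "queryVector": query_embedding,
                "numCandidates": num_candidates,  # ANN candidate pool size
                "limit": k,
            }
        },
        # Keep only what an AI agent needs, plus the similarity score.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_vector_search_pipeline([0.12, -0.07, 0.33], k=3)
# Execute with: db.documents.aggregate(pipeline) against an Atlas cluster
# that has a vector search index defined; here we only build the stages.
```

The point of the sketch is architectural: search, vector similarity, and document storage sit in one system, which is the integration MongoDB is selling.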

 

The candid acknowledgment from management, however, is that AI is not yet a material driver of results, though they are encouraged by the growth they are seeing with customers leveraging AI capabilities. The number of customers using vector search and Voyage embedding models nearly doubled year-over-year, and AI natives, digital natives, and large enterprises are all contributing to growth across the customer base.

 

What's actually driving the strong financials right now is a combination of AI-adjacent tailwinds and MongoDB's core enterprise momentum. In Q4 they signed an approximately $90 million deal with a large tech company planning to expand both core and AI workloads on Atlas, and a greater-than-$100 million deal with a large financial institution for Enterprise Advanced, the largest total contract value deal in company history. The significance of that financial institution deal is hard to overstate: large banks are notoriously conservative with infrastructure decisions, and a nine-figure commitment to MongoDB speaks to how seriously enterprises are treating their data platform choices in the context of AI readiness.

 

AI actually amplifies the case for MongoDB's multi-model, developer-friendly architecture. As enterprises move from traditional transactional applications toward AI-powered agentic workflows, the data requirements become more complex, requiring search, vector similarity, document storage, and real-time access all in one place. MongoDB's goal is to become the generational data platform of choice in the AI and multi-cloud era, and if AI application development scales the way most expect, the demand for an integrated, performant, cloud-native data layer like Atlas should grow with it. The risk is that the revenue inflection from AI workloads takes longer to materialize than investors hope, but the Q4 numbers suggest the underlying business is strong enough to carry that bet comfortably while AI catches up.

 

ServiceTitan

 

The AI story here is unusually concrete. Where most SaaS companies are talking about AI in terms of platform strategy and future optionality, ServiceTitan is reporting actual customer outcomes from a live product, and those numbers are striking.

 

The centerpiece is Max, which ServiceTitan is positioning not as an AI feature but as an agentic operating system for the trades, meaning it's designed to autonomously orchestrate end-to-end workflows across demand generation, dispatch, quoting, payments, and back-office operations for HVAC, plumbing, electrical, and other field service businesses. The first deployment cohort of Max customers saw a 50% increase in average ticket size, one customer achieved over 50% revenue growth in a single month, and another increased EBITDA margin from 18% to 30% while reducing office headcount. And management went further: customers on Max will roughly double their monthly subscription revenue when fully ramped, an effect not driven by technician expansion, meaning the value is coming from operational efficiency and higher revenue capture per job, not simply from adding more workers.

 

What makes ServiceTitan's AI positioning particularly defensible is the data moat underlying it. The company leverages proprietary structured data from over $80 billion in annual transaction volume to drive automation and outcome improvements. This is a critical competitive point that CEO Ara Mahdessian leaned into when analysts asked about AI-native startups competing for the same customers. A point-solution AI tool built for the trades can optimize one workflow, but it has no context on how that workflow connects to the rest of the business. ServiceTitan's integrated platform, spanning marketing, scheduling, dispatch, invoicing, and payroll, creates a contextual data layer that generic AI tools simply can't replicate from the outside.

 

The second AI product gaining traction is Virtual Agents: AI-based modules that handle inbound call management and appointment booking, especially during call surges or after normal business hours. For a trades business, missed calls during peak season or after hours are direct revenue losses. A potential customer who can't book gets routed to a competitor. Virtual Agents directly plug that gap, and because it's priced as a consumption product, it creates a new usage revenue stream that management believes could grow faster than GTV in fiscal 2027.

 

The honest caveat is that Max is still early-stage and capacity-constrained. ServiceTitan plans to double Max capacity in Q1 FY2027, with scaling tied to onboarding efficiency and customer success, which is a signal that the bottleneck right now is deployment and training, not demand. To accelerate on all fronts, the company hired Abhishek Mathur from Figma, Meta, and Microsoft as Chief Technology and Product Officer, specifically to drive organizational and technology velocity around AI initiatives. FY2027 is also expected to represent the company's largest R&D investment ever, with explicit focus on AI inference and internal tooling. The trades are not typically an industry associated with cutting-edge software, which is precisely why ServiceTitan's moat, both in data and in customer relationships, could prove so durable as AI raises the stakes for what field service software can actually do.

 

ServiceNow

If there's one company in enterprise software that has the most fully formed AI story right now, it's ServiceNow. CEO Bill McDermott came into this call on offense, explicitly addressing the "AI will eat software" narrative that has spooked investors across the sector and flipping it on its head. His argument was direct: enterprise AI will be the largest driver of return on the multitrillion-dollar super cycle of AI infrastructure investment, and the real payoff comes when tokens move beyond pilots and get embedded directly into the workflows where business decisions are made, with ServiceNow serving as the semantic layer that makes AI ubiquitous in the enterprise.

 

The numbers behind Now Assist, ServiceNow's AI product suite, are the most concrete AI monetization metrics in this cohort. Now Assist ACV surpassed $600 million, more than doubling year-over-year in Q4, with deals over $1 million nearly tripling quarter-over-quarter and 35 such deals closing in Q4 alone. Enterprises aren't just adding AI as a line item but are making meaningful commitments to it within the ServiceNow platform. The AI Control Tower, which allows enterprises to govern and orchestrate AI agents across the business, grew over 4x its 2025 targets, another signal that customers are moving from individual AI use cases to thinking about AI management as a platform-level problem.

 

What ServiceNow is really selling is AI governance as much as AI capability. As enterprises deploy more agents, someone has to be in charge of orchestrating them, monitoring them, and ensuring they don't go rogue or create compliance exposure. ServiceNow is positioning its platform as that orchestration layer, what they call the "universal agentic network" built on MCP and Workflow Data Fabric. Monthly active users grew 25%, and the number of workflows and transactions processed on the platform increased over 33% each, reaching 80 billion workflows and 6.4 trillion transactions.

 

ServiceNow doesn't want to bet on any single model winning; rather, it wants to be the workflow layer that works with all of them, insulating itself from model commoditization while capturing value from enterprise adoption regardless of which AI providers customers prefer.

 

The honest tension in the ServiceNow story is pricing and gross margin. Management acknowledged some gross margin headwinds from hyperscaler and AI infrastructure choices, and the shift to a hybrid pricing model, combining traditional subscription with consumption-based AI usage, introduces some revenue variability that investors are still getting comfortable with. But with $15.5 billion in subscription revenue guided for 2026 at 19-20% growth, a 32% operating margin, and 36% free cash flow margins, ServiceNow is arguably the most financially powerful pure-play on enterprise AI adoption in the market right now. The core business is robust enough that they can absorb the cost of building out AI infrastructure while competitors are still figuring out their strategy.

 

Microsoft

Microsoft's call was a declaration that AI is now the gravitational center of the entire business. Satya Nadella opened with a striking statement: "We are only at the beginning phases of AI diffusion and already Microsoft has built an AI business that is larger than some of our biggest franchises." When you consider that Microsoft's "biggest franchises" include Office, Windows, and Xbox, each a multi-billion-dollar business, that's an extraordinary claim. And the numbers behind it are hard to argue with: Microsoft Cloud crossed $50 billion in quarterly revenue for the first time, up 26%, while Azure grew 39%, the acceleration driven explicitly by AI workloads.

 

The AI story at Microsoft operates on three distinct layers, each reinforcing the others. The first is infrastructure. Capital expenditures hit $37.5 billion in the quarter, with roughly two-thirds allocated to short-lived assets like GPUs and CPUs. Management introduced a new internal metric, tokens per watt per dollar, as their guiding optimization target for AI infrastructure, and reported a 50% increase in throughput on OpenAI inferencing due to infrastructure advances.
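Microsoft didn't define the metric on the call, but one plausible reading, sketched here with hypothetical numbers, treats it as inference throughput normalized by both power draw and infrastructure cost:

```python
def tokens_per_watt_per_dollar(tokens: float, watts: float, dollars: float) -> float:
    """One plausible reading of Microsoft's metric (our assumption, not their
    formula): tokens served, normalized by power draw and infrastructure cost."""
    return tokens / (watts * dollars)

# Hypothetical fleet: 2e12 tokens served on 1 MW of power at $1M of cost...
before = tokens_per_watt_per_dollar(2e12, 1_000_000, 1_000_000)
# ...and the same fleet after a ~50% throughput gain like the one cited for
# OpenAI inferencing: more tokens from the same watts and dollars.
after = tokens_per_watt_per_dollar(3e12, 1_000_000, 1_000_000)
print(after / before)  # prints 1.5
```

The design intent such a metric captures: efficiency gains count only when they come without proportional increases in power or spend.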

 

The second layer is the Copilot product suite, where AI is directly touching customers at scale. Microsoft 365 Copilot reached 15 million paid seats, with over 160% seat growth year-over-year and a tripling of the number of customers with over 35,000 seats, a signal that enterprise adoption is moving from departmental pilots to company-wide deployments. Average user conversations doubled and daily active users grew 10x. GitHub Copilot reached 4.7 million paid subscribers, up 75% year-over-year, with individual Copilot Pro Plus subscriptions growing 77% sequentially.

 

The honest tension is on margins. Gross margins declined slightly, driven by continued AI infrastructure investments and growing AI product usage, and management guided for operating margins to be down slightly year-over-year in Q3. Microsoft is, in effect, spending today to capture a revenue curve that extends well into the next decade, and the Q2 numbers suggest that curve is steeper than almost anyone expected.

 

AppFolio

AppFolio's Q4 call told a quietly compelling story for understanding how AI actually changes the economics of a vertical SaaS business. AppFolio serves property managers, a customer base that has historically been underserved by software and skeptical of hype. The fact that 98% of AppFolio's customers are already actively using one or more AI capabilities included in the platform, against an industry backdrop where half of AI users in property management report they can't actually rely on the AI features in their core system, is a striking market differentiation signal. It suggests AppFolio has threaded a needle that most vertical SaaS companies are still struggling with: making AI functional and trusted at the ground level.

 

AppFolio is repositioning its platform around three layers, a system of record, a system of action, and a system of growth, with agentic AI embedded directly into daily operations and the explicit goal of enabling customers to evolve from property managers to performance managers. That reframing reflects a fundamental shift in what AppFolio is selling: not software that helps you manage properties, but a platform that actively improves the financial performance of your property management business. Adoption of premium tiers has already exceeded 25%, a meaningful indicator that customers are upgrading to capture AI-driven capabilities.

 

The business impact of AI is showing up directly in AppFolio's financials. Non-GAAP operating margin expanded to 24.9% in Q4, up from 20.2% a year earlier, a nearly 500 basis point improvement that reflects both revenue growth and the operating leverage that comes from AI-driven efficiency gains across the platform itself.

 

What makes AppFolio particularly interesting from an investment lens is that their AI advantage is compounding in a way that's structurally hard to replicate. 45% of survey respondents in the property management industry say they plan to consolidate their software solutions, and AppFolio is the natural consolidation destination, precisely because they've embedded AI deeply enough that customers would lose significant operational capability by leaving. The moat here isn't just the product; it's the AI-trained workflows, the resident data, and the operational patterns that accumulate the longer a customer stays on the platform. For a vertical SaaS business approaching $1 billion in revenue, that's a durable competitive position.

 

Palantir

Palantir's Q4 2025 call was unlike any other earnings call in enterprise software this cycle, partly because of the numbers, which were objectively extraordinary, and partly because Alex Karp delivers earnings calls the way a wartime general addresses troops, not the way a CFO addresses analysts. But strip away the theater and what you find is a company that has, almost overnight, become one of the defining stories of what enterprise AI actually looks like when it works at production scale.

 

The headline numbers require context to be fully appreciated. Q4 revenue grew 70% year-over-year, the highest growth rate since Palantir went public, representing a 3,400 basis point acceleration versus Q4 of the prior year. This isn't a company maintaining a high growth rate; it's a company where the growth rate itself is accelerating sharply. Full-year 2025 revenue grew 56%, and the company is guiding full-year 2026 revenue of $7.19 billion, representing 61% growth.
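For readers unfamiliar with basis-point phrasing, the arithmetic is simple: 100 basis points equal one percentage point, so a 3,400 bps acceleration means the growth rate rose 34 percentage points, implying roughly 36% growth in the prior-year quarter (that 36% figure is implied by the numbers above, not stated on the call):

```python
def accel_bps(current_growth_pct: float, prior_growth_pct: float) -> float:
    """Change in year-over-year growth rate, expressed in basis points
    (1 percentage point = 100 bps)."""
    return (current_growth_pct - prior_growth_pct) * 100

# 70% growth this Q4 vs. an implied 36% in the prior-year quarter:
print(accel_bps(70, 36))  # prints 3400
```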

 

The company closed 61 deals greater than $10 million in the quarter, and management cited multiple examples of customers signing $80 to $96 million contracts within months of initial engagement, a compressed deal cycle that reflects genuine organizational conviction.

 

The US vs. international divergence on the call was striking and strategically revealing. US revenue grew 93% year-over-year and 22% sequentially, while international commercial revenue grew just 8% year-over-year, a massive gap that management was candid about.

 

The government side of the business is also worth understanding in the context of AI impact. US government revenue grew 66% year-over-year, driven in part by a massive $10 billion Army software contract signed last summer and a $448 million Navy contract for shipbuilding supply chain modernization. Karp was unambiguous about what these contracts represent: not just data analytics or workflow tools, but AI systems that are actively changing the operational capabilities of the US military. He argued that AI implementations in the defense context have changed what warfighters are able to do.

 

The risk most frequently raised by skeptical analysts on the call is whether the US commercial acceleration is sustainable or whether it represents a concentrated burst of pent-up demand that will moderate. Palantir's answer is essentially that AI-driven enterprise transformation is still in very early innings, that their pipeline of committed deal value reached $11.2 billion (up 105% year-over-year), and that the real constraint on growth is their own capacity to onboard and deliver for customers, not demand.

 

 

Atlassian

CEO Mike Cannon-Brookes has been saying "AI is the best thing that's ever happened to Atlassian" for several quarters, and this quarter he finally had the numbers to fully back it up. Atlassian delivered its first-ever $1 billion cloud revenue quarter, with cloud up 26% year-over-year, and surpassed $6 billion in annual run-rate revenue, a milestone that felt like a culmination of years of patient investment in infrastructure that is now paying off precisely because AI workloads need exactly what Atlassian has built.

 

The strategic logic of Atlassian's AI position is distinct from most others in this series. Rather than building AI features on top of an existing product, Atlassian is arguing that AI needs Atlassian; that the work tracking, planning, and organizational knowledge embedded in Jira and Confluence become more valuable, not less, as AI proliferates. The Teamwork Graph, which now contains well over 100 billion objects and connections across first- and third-party tools, is the context layer that enables Rovo, Atlassian's AI assistant, to deliver business value that is actually context-aware and actionable, rather than generic. That's a meaningful competitive claim: a new AI tool can generate text or answer questions, but it can't tell you which Jira tickets are blocking your sprint, who owns which decision, or how a current workflow has historically performed across 350,000 customers. Atlassian can.

 

The proof that customers believe this argument is in the adoption metrics. Atlassian's Teamwork Collection, the AI-powered bundle that serves as the company's primary AI monetization vehicle, surpassed 1 million seats sold in under nine months, with more than 1,000 customers upgrading, and the company closed a record number of deals over $1 million in ACV, nearly doubling year-over-year. Critically, customers using AI code generation tools create 5% more Jira tasks, have 5% higher monthly active users, and expand Jira seats 5% faster than those not using AI tools, a clean, data-driven demonstration that AI adoption drives more usage of the core platform, not less.

 

The seat-based pricing debate hung over the call: analysts probed whether consumption-based pricing could erode Atlassian's model as AI agents proliferate and "seats" become a less meaningful unit. Atlassian's response is essentially that customers want predictability, and seat-based pricing delivers it, while the Teamwork Collection bundles AI credits on top of seats in a way that captures consumption upside without forcing customers into open-ended consumption risk. RPO grew 44% year-over-year to $3.8 billion, accelerating for the third consecutive quarter.

 

The competitive angle on the call was also striking. When asked about new AI tools, including tools from Anthropic, potentially challenging Jira, the CEO was notably unbothered. He noted Atlassian considers Anthropic a partner, using their models within the platform, and argued that new AI tools will emerge but Atlassian's Teamwork Graph and deep workflow integration provide differentiation that generic AI tools can't replicate. The focus, he said, remains on human-AI collaboration for complex work: the kind that requires organizational context, compliance, security, and integration that a standalone AI tool can't provide.

 

Qualys

The cybersecurity industry is entering a new phase where the speed of attacker exploitation has outpaced the speed of human response, and the only viable answer is autonomous, AI-driven remediation. As threat actors continue to compress time-to-exploit, Qualys believes the next phase of pre-breach risk management will be defined by an agentic AI-driven risk fabric with out-of-the-box business quantification and automated remediation to respond at the speed of threats.

 

The most significant product announcement on the call was the launch of the AI-native Risk Operations Center (ROC), which the CEO positioned explicitly as a new category in cybersecurity, designed to centralize an organization's entire threat response posture. The argument is that the traditional SOC (Security Operations Center) is reactive: it responds to breaches that have already happened. Qualys's ROC is designed as a pre-breach capability that unifies Continuous Threat Exposure Management with exploit confirmation, risk quantification, and automated remediation, all powered by agentic AI that can act without waiting for a human to review and approve each step. The competitive shot embedded in the CEO's commentary was pointed and deliberate: he argued that competitors focusing on exposure management can't win the AI fight if they're still routing remediation through Jira tickets and ServiceNow tickets. Autonomous decision-making and execution is the actual differentiator, not just identifying vulnerabilities.

 

On the customer impact side, Qualys's AI story is still more about where the market is heading than where revenue is today. The ETM (Enterprise TruRisk Management) platform, which serves as the foundation for the agentic AI capabilities, represented 10% of total bookings and 13% of new bookings, up from 8% and 9% previously, a meaningful directional signal but still a small share of the overall business. Patch Management, which is the remediation capability that AI agents actually execute against vulnerabilities, represented 8% of total bookings and 16% of new bookings. Together these metrics point toward a platform transition that's underway but early. Customers are beginning to adopt the agentic workflow, but the majority of the revenue base is still anchored in the traditional vulnerability management and VMDR products.

 

The financial picture is disciplined and somewhat unusual relative to the rest of the companies in this series. Full-year 2025 revenue reached $669 million, up 10%, with a 47% adjusted EBITDA margin: exceptional profitability for a company of this size. The 2026 guidance of 7-8% revenue growth is conservative by enterprise software standards, and management was candid that it reflects investment in AI infrastructure, sales and marketing expansion, and federal sector buildout, all of which compress near-term margin slightly.

 

What makes Qualys particularly interesting from an AI lens is the specificity of its use case. The combination of asset discovery, vulnerability identification, exploit confirmation, risk quantification, and automated patch deployment is one of the few enterprise software workflows where agentic AI is not just useful but arguably necessary for the product to work as intended. No human team can keep up with the velocity of modern vulnerability exploitation. Qualys differentiates by offering integrated patch management and autonomous workflows that allow customers to quickly remediate vulnerabilities.

 

Paylocity

The most concrete AI signal on the call was deceptively simple: average monthly usage of Paylocity's AI assistant increased over 100% quarter-over-quarter. That's a significant sequential jump in a relatively short period. Paylocity recently expanded its AI assistant into HR rules and regulations, tapping into more than 200 IRS and Department of Labor knowledge sources to provide administrators with guidance on tax and labor regulations. For the HR administrators who are Paylocity's primary users, typically generalists at companies of 50 to 500 employees, the ability to ask a question and get a compliance-grounded answer without calling a lawyer or spending an hour on the IRS website is genuinely valuable.

 

What's particularly interesting about Paylocity's AI strategy is the dual deployment model: AI for customers, and AI for Paylocity itself. Within the operations team, Paylocity is leveraging AI to drive down client case volumes, automate client interactions and case routing, and perform sentiment analysis to flag urgent cases for faster response. This internal AI efficiency play is showing up in the financials: adjusted gross margin expanded 60 basis points year-over-year to 74.4% in Q2, and operating expenses are growing slower than revenue. This is a company guiding to $622-630 million in adjusted EBITDA on $1.74 billion in revenue, roughly a 36% margin.
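That roughly 36% figure checks out with simple arithmetic; a minimal sanity check using the midpoint of the guided range (all figures from the call):

```python
# Back-of-the-envelope check on Paylocity's implied adjusted EBITDA margin.
ebitda_low, ebitda_high = 622, 630          # guided adjusted EBITDA range, $M
revenue = 1_740                             # guided full-year revenue, $M
midpoint = (ebitda_low + ebitda_high) / 2   # 626
margin = midpoint / revenue
print(f"implied adjusted EBITDA margin: {margin:.1%}")  # prints: implied adjusted EBITDA margin: 36.0%
```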

 

Paylocity's position as a system of record allows it to connect data to other systems via APIs, increasing platform utilization, and customer time savings from AI features lead directly to opportunities for upselling more modules, enhancing the experience and driving revenue growth. This is a different monetization path than many peers: rather than charging directly for AI as a premium SKU, Paylocity is betting that AI-driven engagement deepens the platform relationship, makes customers more likely to expand into adjacent modules, and reduces the churn that has historically been the primary growth constraint in HCM.

 

Paylocity is operating in a slower-growth regime than most of this peer group. Full-year 2026 revenue guidance of $1.73-1.74 billion represents 9% growth, and recurring revenue is expected to grow 10-11%. That's solid but not the acceleration story investors see at Atlassian or Salesforce. The macro factor management monitors most closely is employment levels at client companies, since Paylocity's revenue is partly tied to headcount. Management noted employment levels have been stable with no significant changes expected. AI is helping Paylocity expand revenue per client and improve efficiency, but it isn't yet the kind of step-change growth driver that reshapes the growth profile.

 

 

 

Doximity

Where nearly every other company in this series was accelerating AI investment and racing to monetize, Doximity was doing something much rarer: deliberately pumping the brakes on AI commercialization, citing patient safety, and then watching its stock drop nearly 24% the next day as the market punished the caution. The divergence between what management said and what the market wanted to hear was striking.

 

The platform fundamentals are strong by virtually any measure. Doximity surpassed 3 million registered members, with more than 85% of all US physicians and two-thirds of NPs and PAs on the platform, and record usage across daily, weekly, monthly, and quarterly active user metrics. Over 300,000 unique prescribers used AI products in Q3, and January saw an average of four AI queries per prescriber per week for Docs GPT. More than 100 top US health systems have purchased the AI suite, granting access to over 180,000 prescribers. These are impressive adoption numbers for a product that is not yet generating any revenue.

 

And that last sentence is the crux of the investor tension. No AI revenue is included in current guidance; commercial AI products are expected to launch later in the year. Doximity is building substantial AI infrastructure, investing in usage that is already compressing gross margins from 93% to 91%, and has over 300,000 physicians actively using the product, but is explicitly choosing not to monetize it yet. The reason CEO Jeff Tangney gave was pointed and substantive: a recent Stanford-Harvard study found AI can cause clinical harm in up to 22% of real patient cases, and that overconfident models make those errors harder to spot. In response, Doximity built Peer Check, a clinical peer review layer co-led by renowned physician-scientists Eric Topol and Regina Benjamin, with more than 10,000 US physician experts reviewing AI-generated clinical answers before they're deployed at scale. The message was clear: Doximity will not monetize AI until it trusts that the AI is safe enough for physicians to rely on without second-guessing every output.

 

This is a genuinely different philosophy. Healthcare AI occupies a unique risk category: an error in a marketing recommendation costs a company a customer; an error in a clinical AI recommendation can cost a patient their life. Doximity's caution isn't timidity; it's arguably the only responsible posture for a company whose platform is used by 85% of America's physicians. The commercial AI launch later in fiscal 2026 will be a significant test: can Doximity translate trust, physician habit, and clinical safety credibility into a monetization model that the market will reward?

 

The longer-arc story here is one of deliberate sequencing: build the trust, earn the habit, then charge for it. Doximity's net revenue retention was 112% overall and 117% for the top 20 customers, evidence that clients who deepen their engagement with the platform expand their spend meaningfully. If the AI suite launches commercially later this year with genuine physician trust behind it, and if the pharma headwinds stabilize, the combination of 85% physician coverage, proven engagement habits, and a safety-validated AI product could represent a monetization inflection.

 

BILL

BILL's call told a story that sits at an interesting intersection: a company that has built a genuinely useful platform for SMB financial operations, is now threading AI throughout it, and is simultaneously facing an existential investor question about whether AI startups could eventually disintermediate it entirely.

 

The core business is performing solidly. Core revenue grew 17% year-over-year to $375 million in Q2, total payment volume reached $95 billion, up 13%, and transactions processed grew 16% to 35 million. Nearly 500,000 businesses now use the platform, with over 9,500 accounting firms embedded in the network. Multiproduct adoption grew 28% year-over-year, with businesses using both AP/AR and Spend & Expense solutions, a meaningful indicator that customers are deepening their reliance on BILL as a financial operations platform rather than a single-purpose payments tool.

 

On AI, BILL is pursuing a two-track strategy that distinguishes between AI-for-customers and AI-for-BILL-itself. On the customer side, the company introduced agentic capabilities including a W-9 Agent for vendor management and a coding agent for invoice processing: specific, transactional use cases where AI can eliminate manual work that currently slows down SMB finance teams. The framing CEO René Lacerte used: "AI will allow us to dive deeper into the stack of transactional confusion and simplify it." For SMB owners and bookkeepers who spend hours reconciling invoices, chasing down vendor information, and coding transactions to the right GL accounts, that promise is valuable.

 

On the internal efficiency side, BILL developed a roadmap of AI-driven productivity initiatives covering developer productivity, internal team automation, and go-to-market optimization, with initial benefits expected to start flowing through in fiscal 2027. This is a multi-year cost structure story.

 

Analysts pressed on whether AI-native startups could undercut BILL's position by building cheaper, simpler financial operations tools. Lacerte's response centered on three moats: deep expertise in financial operations built over two decades, a proprietary data set from processing over $1 trillion in payments that enables superior risk models, and network effects from 8 million entities connected through the BILL ecosystem. The data moat argument is particularly credible: BILL's ability to assess payment risk, predict cash flow patterns, and detect fraud is a function of seeing an enormous volume of SMB financial transactions over time. A new entrant with a better AI model but no transaction history is structurally disadvantaged in the trust and risk management dimensions that matter most when you're moving real money for small businesses.

 

The honest challenge for BILL is growth rate moderation. Full-year core revenue guidance was raised but implies 14-15% growth for the year; respectable, but a deceleration from prior years and well below the trajectory of more AI-accelerated peers in this series. The company is deliberately shifting focus toward larger SMBs and improving customer unit economics rather than maximizing new customer count, which is a sensible strategic move but one that introduces near-term headwinds. The AI monetization story, where agentic capabilities translate into higher ARPU from existing customers, is still in its early innings. If BILL can demonstrate over the next few quarters that AI-driven automation is genuinely expanding what customers are willing to pay, the multiple compression the stock has experienced could look like an opportunity in retrospect. But that proof is still ahead, not behind.

 

ZoomInfo

 

ZoomInfo's Q4 2025 call was a study in a company navigating a genuinely difficult strategic moment, caught between a legacy business model under pressure from AI disruption, a promising new product suite not yet in revenue guidance, and a market that has lost patience with the transition timeline. Q4 revenue grew just 3% year-over-year to $319 million, and 2026 guidance projects only 1% revenue growth at the midpoint, extraordinary deceleration for a company that was growing at 20%+ just a few years ago.

 

To understand ZoomInfo's AI story, you have to understand its predicament. ZoomInfo built its business on selling B2B contact and intent data to sales and marketing teams; essentially, a database of who to call and when. AI has disrupted that model in two ways simultaneously: first, AI-powered outbound tools have made it easier for companies to generate their own contact intelligence; second, AI agents are replacing some of the human SDRs who were the primary users of ZoomInfo's data. The company that was once the essential data layer for go-to-market teams is now being forced to redefine what "essential" means in an AI-first sales environment.

 

CEO Henry Schuck's answer to that challenge is an explicit platform pivot. The strategic framing is that whether customers access ZoomInfo's intelligence through the application, through an AI agent, or through something they built themselves, the data flows to where work happens, positioning ZoomInfo as the only platform delivering intelligence, orchestration, and execution for modern go-to-market teams. The new products, GTM Studio (which unifies internal and external data for audience building) and GTM Workspace, are designed to make ZoomInfo the operating system that AI sales agents plug into, rather than a database that humans search manually. Schuck noted that many of the top 50 fastest-growing AI-native companies are already ZoomInfo customers, a meaningful signal that the platform has relevance in the AI-native enterprise, not just the legacy enterprise.

 

The upmarket migration is the clearest positive story on the call. ZoomInfo grew its upmarket segment 6% year-over-year in Q4, tripling its year-over-year growth rate in its seasonally largest quarter, and now has 74% of ACV coming from upmarket customers, up from 70% a year ago, with ACV from the $100,000-plus customer cohort growing double digits and now representing more than 50% of total company ACV. Upmarket customers buy more of the platform, renew at higher rates, and are more strategically embedded, which means the quality of the revenue base is improving even as the headline growth rate suffers from the ongoing cleanup of the downmarket SMB book.

 

The most candid moment on the call was about the new AI products: ZoomInfo explicitly included no revenue contribution from GTM Studio or other new products in the 2026 revenue guidance, while embedding the associated costs.

 

What ZoomInfo illustrates in the context of this broader series is a distinct and underappreciated risk: the companies most likely to be hurt by AI in the near term are not necessarily those with the weakest products, but those whose primary value proposition was data that AI can now partially substitute or generate differently. ZoomInfo's contact and intent data was extraordinarily valuable in a world where finding the right person to call required human research. In a world where AI agents can do that research, synthesize signals, and execute outreach autonomously, the question isn't whether ZoomInfo's data is good — it clearly is — but whether it remains uniquely necessary. Schuck's argument is that data quality is becoming more important, not less, as AI agents proliferate: bad data fed into an AI agent produces bad outputs at scale, and ZoomInfo's verified, constantly refreshed data set is the antidote.

 

Dynatrace

Dynatrace's Q3 FY2026 call was the kind of straightforward execution story that tends to get overshadowed in an earnings season dominated by more dramatic AI narratives. While most companies are racing to build AI capabilities, Dynatrace is arguing that AI creates an urgent and growing need for exactly what it already does. The more AI proliferates, the more complex and opaque software environments become, and the more critical observability (knowing what's happening, why, and what to do about it) becomes. CEO Rick McConnell's central thesis at the annual Perform customer conference was that observability is entering a new era in which it is foundational to resilient software and dependable AI environments.

 

The financial picture reinforces that thesis. ARR stabilized at 16% growth for three consecutive quarters, net new ARR grew double digits for three consecutive quarters, and annualized log management consumption surpassed $100 million, with log management itself growing over 100% year-over-year. The logs story is particularly significant: logs are the raw observational data that flows from AI workloads at enormous volume and velocity, and Dynatrace's ability to ingest, index, and make intelligent sense of them is a direct beneficiary of the AI infrastructure build-out happening across every enterprise customer. Platform consumption overall continued to grow over 20%, ahead of ARR growth, a usage-leading indicator that suggests the revenue trajectory has further room to run.

 

The most strategically important announcement on the call was Dynatrace Intelligence, a new agentic AI operations system unveiled at the Perform conference and made available to all customers. What's notable is the deliberate pricing decision: Dynatrace Intelligence was not priced as a separate SKU but embedded into the platform for all customers. This is a conscious choice to drive adoption first and monetize through increased platform consumption and expanded footprint, rather than creating an AI premium layer that some customers might resist. It mirrors the philosophy of several other companies in this series who are using AI to deepen platform engagement before converting it to direct revenue.

 

The competitive differentiation Dynatrace leans on hardest is the combination of what CEO McConnell calls "trustworthy deterministic AI" with agentic AI. The argument is nuanced and important: purely probabilistic AI — LLMs making inferences about what might be wrong in a complex system — produces unreliable outputs in production environments where precision matters. Dynatrace's approach combines its long-standing causal AI engine, which identifies root cause deterministically based on topology and dependency maps, with newer agentic capabilities that can act on those findings autonomously. For an SRE or platform engineer dealing with an incident at 2am, "we think the problem might be here" is much less valuable than "the root cause is definitively this service, and here's what to do." That precision is what Dynatrace is selling, and it's a genuinely differentiated value proposition against generic AI observability tools.

 

The hyperscaler partnership strategy adds another dimension. In Q3, Dynatrace announced deeper technical integrations with Amazon Bedrock AgentCore, embedding with Azure's SRE Agent, and serving as the launch partner for GCP Gemini CLI extensions and Gemini Enterprise. This is Dynatrace positioning itself as the observability layer that hyperscaler-native AI agents rely on: when a customer builds an AI agent in AWS, Azure, or GCP, Dynatrace is the system that monitors what that agent does, catches when it goes wrong, and helps remediate issues. It's a smart wedge: rather than competing with hyperscalers for the AI workload itself, Dynatrace is becoming the essential trust and reliability layer underneath those workloads.

 

The honest question hanging over the call — and which analysts pressed on — is whether 16% ARR growth is the ceiling or a floor as the AI opportunity matures. The company raised full-year guidance by 125 basis points, now targeting 15.5%-16% ARR growth and putting it on track to surpass $2 billion in ARR in fiscal 2026. For a company of Dynatrace's scale, that's respectable, but investors hoping for a step-change acceleration driven by AI workload complexity haven't seen it yet in the headline numbers. Management's argument is that platform consumption growing at 20%+ is the leading indicator of that acceleration arriving in ARR terms over the next several quarters — and the logs inflection at 100%+ growth is the most concrete evidence they can point to that the flywheel is spinning.

 

Monday.com

monday.com's Q4 2025 call presented a company in a genuinely interesting two-speed moment: an enterprise business accelerating meaningfully on the back of AI-driven platform expansion, and an SMB self-serve business struggling with deteriorating unit economics that management expects to persist through 2026.

 

Start with what's working. Full-year revenue reached $1.232 billion, up 27%, with Q4 at $334 million, up 25%, and customers with over $500,000 in ARR grew 74% year-over-year. That enterprise acceleration is real, driven by customers standardizing on monday.com not just for project management but for CRM, service operations, software development, and now AI-powered workflows. The AI product metrics are early but directionally exciting: Monday Blocks powered over 77 million actions, Sidekick processed over 500,000 user messages, and Monday Vibe — the company's AI-native app builder — became the fastest product in monday's history to surpass $1 million in ARR, reaching that milestone just 2.5 months after pricing launched in mid-October 2025.

 

The strategic positioning monday.com is building toward is worth understanding carefully. Co-CEO Eran Zinman described a unified AI platform with four core capabilities: Monday Sidekick (AI assistant), Monday Vibe (AI-native app builder), Monday Agents (autonomous workflow executors), and Monday Workflows (process automation). Teams are increasingly relying on monday.com not just to organize work, but to make decisions, automate outcomes, and execute faster with confidence. That reframing — from a work OS to a work execution platform — is a meaningful upward step in perceived value and justifiable pricing power. And it connects directly to the enterprise motion: larger customers with complex, interconnected workflows are the natural buyers of this expanded capability set, while SMBs may not need or want the full stack.

 

Which brings us to the harder conversation. The no-touch self-serve channel remains "choppy," with higher customer acquisition costs and lower returns than historical levels — a dynamic management expects to persist throughout 2026 with no improvement assumed in guidance. This is a genuine structural headwind. The SMB market for work management software has become more competitive and more price-sensitive, partly because AI tools have lowered the barrier to building lightweight alternatives and partly because macro conditions have tightened SMB software budgets. monday.com's response is to redirect investment toward enterprise, which makes strategic sense but compresses near-term growth rates. 2026 guidance of 18-19% revenue growth is a step down from 27% in 2025 — not alarming on its own, but a deceleration that the market had to digest.

 

The long-term ambition management has been articulating — a path to much larger revenue targets by 2027 — was another notable moment on the call. The CFO explicitly stated that the 2027 target number is "off the table," with management ceasing to discuss prior long-term targets due to macroeconomic volatility and the ongoing challenges in no-touch channels. Pulling guidance is rarely received well, and combined with the SMB headwinds and deceleration in growth, it created investor concern despite the genuinely strong enterprise metrics.

 

The most compelling part of the monday.com story in the context of this series is Monday Vibe — an AI-native app builder that lets business users create custom applications on top of the monday.com platform without writing code. If this scales as management believes it can, it transforms monday.com from a work management platform into something closer to a business application development layer — essentially a low-code/no-code platform that is AI-native from the ground up. The competitive analogy that comes to mind is what Salesforce is trying to do with Agentforce: extend from a system of record into a system of action and creation. monday.com is pursuing a similar expansion from a different starting point — the work coordination layer rather than the CRM layer — and Vibe's early ARR traction suggests the market is receptive to the vision, even if the SMB headwinds cloud the near-term picture.

 

Blackbaud

Blackbaud is not a company most people include when they talk about AI in enterprise software. It serves nonprofits, universities, healthcare foundations, and faith organizations, institutions that are resource-constrained, often technologically cautious, and historically slow to adopt new platforms. And yet the Q4 2025 call made a surprisingly compelling case that Blackbaud may be better positioned for the AI era than its modest growth rate suggests, precisely because of the unique data moat it has built over 45 years serving the social impact sector.

 

CEO Mike Gianoni opened with a direct and unusually candid framing of the fundamental question facing every vertical SaaS company right now: will AI be beneficial to system-of-record vertical software firms like Blackbaud, or detrimental? Blackbaud processes nearly 30 billion donor predictions annually, manages tens of petabytes of data across its customer base, and has built the most comprehensive philanthropic dataset in existence: proprietary survey and benchmarking data, licensed datasets, identity resolution capabilities, and specialized datasets like Blackbaud Giving Search. Critically, this data is not publicly available on the internet where LLMs can access it, meaning no competitor can train a general-purpose AI model to replicate what Blackbaud knows about donor behavior, nonprofit fundraising patterns, and philanthropic outcomes.

 

The most concrete AI product on the call was the Development Agent, the first of Blackbaud's "Agents for Good," released at their bbcon conference in October. The use case is beautifully specific and immediately legible in terms of ROI: a university with 190,000 alumni but a fundraising team with bandwidth to focus on only 10,000 of them can deploy the Development Agent as an additional "staff member" that cultivates relationships and raises funds from the other 180,000 alumni through email, text, and a full conversational avatar, self-learning using the data, intelligence, and workflows within Blackbaud's system of record. That is not a marginal efficiency gain — it's the ability to extend fundraising reach by 18x without proportionally scaling headcount.
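The 18x figure follows directly from the numbers cited on the call; a quick check:

```python
# Verify the fundraising-reach multiple implied by Blackbaud's university example.
total_alumni = 190_000
staff_covered = 10_000                          # alumni the human team can focus on
agent_covered = total_alumni - staff_covered    # 180,000 reached by the agent
multiple = agent_covered / staff_covered
print(multiple)  # prints: 18.0
```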

 

The pricing model matters here too. Blackbaud is structuring Agents for Good as an annual subscription with multiyear contracts — meaning the agent becomes a recurring revenue line rather than a one-time upsell. More than 20% of customers are already asking to move to four-year or longer renewal contracts, which speaks to the depth of platform dependency and the confidence customers have in Blackbaud's roadmap. 2026 guidance of 4-4.5% revenue growth explicitly assumes no meaningful AI product revenue contribution.

 

The internal AI story is equally substantive. Every employee at Blackbaud has been required to complete AI training, the entire engineering team is using GitHub Copilot and Anthropic Claude for code generation, bug remediation, and new product development, and Blackbaud AI Chat — embedded within the system of record and leveraging customer and proprietary benchmark data — saw daily usage grow 5x since October. The company also cited three structural efficiency drivers it's pursuing simultaneously: geographic workforce diversification through a growing India office, closure of the last two legacy data centers, and AI-driven internal productivity.

 

What makes Blackbaud particularly interesting in the context of this broader series is the mission-alignment dimension. Nonprofits, universities, and foundations are not just technology buyers — they have deeply held values around data privacy, ethical AI use, and donor trust. Blackbaud's focus on cybersecurity and AI governance for ethical data use isn't just a compliance checkbox — it's a competitive moat in a market where the institutions writing the checks care deeply about how their donor data is used. A generic AI tool that scrapes public data and surfaces cold outreach isn't an acceptable substitute for a Development Agent that operates within a trusted, governed, sector-specific system of record. The nonprofit sector's inherent conservatism around technology is, in this framing, not a headwind for Blackbaud — it's a barrier to entry for every competitor trying to break in.

 

Cloudflare

CEO Matthew Prince positioned the company at the intersection of two forces reshaping the internet simultaneously: the explosion of AI agents and the resulting question of who governs how those agents traverse the web.

 

Start with the fundamentals, which are genuinely excellent. Q4 revenue grew 34% year-over-year to $614.5 million — the third consecutive quarter of acceleration — with large customers contributing 73% of total revenue, dollar-based net retention jumping 9 percentage points to 120%, and million-dollar customers growing 55% year-over-year. New ACV bookings grew nearly 50% year-over-year with both year-over-year and sequential acceleration, and RPO grew 48% to $2.5 billion — the kind of forward revenue visibility that gives confidence in sustained growth. The largest annual contract value deal in company history — $42.5 million per year — closed in the quarter. This is a business hitting its stride.

 

But the more important conversation on the call was about what's happening to Cloudflare's network itself. Prince reported that over the month of January alone, the number of weekly requests generated by AI agents more than doubled across the Cloudflare network. That single data point is extraordinary in what it implies. Cloudflare sits between virtually every significant website and the internet — it processes roughly 20% of all web traffic globally. The fact that AI agent-generated traffic is doubling on a monthly basis means the traffic composition of the entire internet is changing, and Cloudflare is uniquely positioned to observe, measure, and increasingly govern that change.

 

This leads to the most forward-looking and genuinely novel part of the call — Prince's framing of Cloudflare as a neutral broker between AI companies and content creators. AI models train on internet content, and the commercial relationship between those who create content and those who consume it for training purposes is still being worked out. Prince argued that AI companies and content creators alike are looking to Cloudflare as a trusted neutral third party — that both sides would rather Cloudflare figure out what the future business model looks like than have a hyperscaler do it, since hyperscalers are themselves building foundational models and may have conflicting incentives. Cloudflare's network position — sitting in the middle of every request, trusted by both sides, with no foundational model of its own — makes it a plausible honest broker in a way that no hyperscaler or AI company can credibly claim to be.

 

The developer platform story adds another dimension. Cloudflare exited 2025 with more than 4.5 million human developers active on the platform — and that number will soon be joined by AI agents as first-class citizens of the Cloudflare ecosystem. Workers AI (Cloudflare's inference-at-the-edge product), the AI Gateway (which manages and monitors AI API calls), and the growing suite of developer tools mean that Cloudflare is not just routing traffic between AI agents and the web — it's becoming the infrastructure layer where AI applications are built, deployed, and governed. The pool-of-funds contract model, where enterprises commit a pool of spend and draw it down across any Cloudflare product, is particularly well-suited to AI workloads where consumption is hard to predict but the underlying dependency is clear.

 

The gross margin story requires brief acknowledgment — gross margin came in at 74.9%, slightly below the long-term target range of 75-77%, as Cloudflare allocated more network expenses to cost of revenue to better reflect AI infrastructure investment. This is the same infrastructure cost dynamic playing out across virtually every company in this series, and Cloudflare's handling of it is conservative and transparent. With $4.1 billion in cash and full-year 2026 revenue guidance of $2.785-2.795 billion implying 28-29% growth, the balance sheet and growth profile are strong enough to absorb the investment cycle comfortably.

 

The bigger picture is this: Cloudflare is one of the few companies in enterprise software that isn't just using AI or selling AI products — it's building the infrastructure that AI itself needs to function reliably at internet scale. Every AI agent that browses the web, every model that calls an API, every application that serves AI-generated content — all of it flows through infrastructure that Cloudflare operates.

 

Freshworks

Freshworks' Q4 2025 call was defined by a genuine milestone — first-ever GAAP profitability for a full year — and a strategic clarity about where AI fits in the growth equation that was more concrete than most companies in this series have delivered. CEO Dennis Woodside's framing was direct: "AI is not just a feature in our products, it's a standalone revenue line delivering measurable value to our customers."

 

The numbers behind that claim are still early but directionally meaningful. Freddy AI crossed the 8,000 customer mark for paying AI customers, with over $25 million in ARR that nearly doubled year-over-year. Against a total ARR base of $907 million, $25 million is still a small fraction, but doubling year-over-year from a base of paying customers (not just users) is a credible sign of genuine willingness-to-pay rather than just feature adoption. The company's target is $100 million in AI-driven ARR over the next three years — a goal that would represent roughly 11% of their current ARR base, achievable if the doubling cadence holds.
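To sanity-check that claim, here is a back-of-the-envelope projection. This is a purely illustrative Python sketch using only the figures cited above (a $25 million base doubling each year); it is not company guidance.

```python
# Illustrative projection: Freddy AI ARR (in $M) if the year-over-year
# doubling cadence cited on the call were to hold.
def project_arr(base: float, growth: float, years: int) -> list[float]:
    """Return ARR at the start and then the end of each year, beginning from `base`."""
    arr = [base]
    for _ in range(years):
        arr.append(arr[-1] * (1 + growth))
    return arr

trajectory = project_arr(base=25.0, growth=1.0, years=3)  # growth=1.0 means doubling
print(trajectory)  # [25.0, 50.0, 100.0, 200.0]
```

On these assumptions the $100 million mark is crossed after just two doublings, which is why the three-year target reads as conservative if, and only if, the cadence holds.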

 

The business model for Freddy AI deserves attention. Freshworks is scaling usage and demonstrating value through session-based pricing for AI agents, meaning customers pay per interaction or session rather than per seat. This is a deliberate departure from pure seat-based SaaS and positions Freshworks to capture more value as AI agents handle more tickets, resolve more queries, and execute more workflows autonomously. For IT service desks and customer support teams, session-based pricing maps cleanly to outcomes: if the AI agent resolves 40% of tickets without human intervention, the customer pays for those sessions and the ROI is immediately visible. That alignment between price and value is one of the cleaner AI monetization models in this series.
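A minimal sketch of how session-based billing maps price to resolved work rather than seats, assuming a hypothetical per-session rate and ticket volume (neither is disclosed by Freshworks):

```python
# Hypothetical session-based billing for an AI support agent, per the
# model described above. The per-session price and ticket volume are
# invented for illustration, not Freshworks' actual pricing.
def monthly_ai_bill(tickets: int, autonomous_rate: float, price_per_session: float) -> float:
    """Customer pays only for the sessions the AI agent resolves autonomously."""
    ai_sessions = int(tickets * autonomous_rate)
    return ai_sessions * price_per_session

# 10,000 tickets/month, 40% resolved without human intervention, $0.50/session
bill = monthly_ai_bill(tickets=10_000, autonomous_rate=0.40, price_per_session=0.50)
print(bill)  # 2000.0
```

The design point: the bill scales directly with the work the agent actually does, so a customer can compare it against the cost of the human effort displaced.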

 

The two-speed nature of the Freshworks business is important context. The Employee Experience (EX) business — Freshservice for IT and employee service management — crossed $500 million in ARR with 26% year-over-year growth, clearly the growth engine. The Customer Experience (CX) business — Freshdesk for customer support — is being managed more defensively, with focus on retention and unification rather than aggressive growth. AI is more naturally embedded in the EX motion because IT service management has well-defined, repetitive workflows (ticket routing, password resets, software provisioning) where agentic AI can replace or dramatically accelerate human effort with high confidence. Customer service interactions tend to be more variable and emotionally complex, which makes autonomous AI resolution both harder and higher-stakes.

 

The platform buildout — ITSM, ITOM, ITAM, and ESM under one unified roof through the acquisitions of Device42 and Fire Hydrant — is the architectural foundation that makes the AI story more compelling. Bringing IT service management, IT operations management, IT asset management, and enterprise service management together means AI agents operating within Freshservice have access to a much richer context layer: not just the support ticket, but the asset involved, the operational status of related systems, and the history of similar incidents. That contextual richness is what separates an AI agent that can genuinely resolve an IT issue from one that can only acknowledge it.

 

The honest challenge for Freshworks is the growth rate trajectory. Revenue growth decelerated from 22% in Q4 2024 to 14% in Q4 2025, and full-year 2026 guidance of $952-960 million implies 13.5-14.5% growth — respectable, but not acceleration. The market is watching whether AI monetization can provide a growth catalyst that reverses the deceleration, and the answer isn't yet visible in the numbers. What is visible is a company that has achieved financial discipline, first-ever GAAP profitability, a clear AI monetization model, and a coherent strategy for the mid-market ITSM space where ServiceNow is too expensive and legacy tools aren't AI-capable. If the $100 million AI ARR target is achievable on schedule, it would represent a meaningful step toward reaccelerating growth from a stronger profitability foundation — but 2026 is the year that hypothesis gets tested.

 

Klaviyo

Klaviyo's Q4 2025 call was one of the cleanest AI impact stories in this entire series — not because the AI revenue numbers are large yet, but because the customer outcome data is unusually concrete and the strategic positioning is genuinely differentiated. Co-CEO Andrew Bialecki's framing was direct: "The future is autonomous customer experiences."

 

The business fundamentals provide a strong foundation. Full-year revenue reached $1.23 billion, up 32%, with Q4 at $350 million, up 30% — and 2026 guidance of $1.50-1.51 billion implies 21.5-22.5% growth. More impressive is the operating leverage: non-GAAP operating expenses were 58% of revenue, the lowest level since the IPO, with AI driving meaningful internal productivity and enabling faster development cycles without commensurate headcount growth. This is the AI efficiency dividend theme appearing again — Klaviyo is building faster and spending less as a percentage of revenue, not because of austerity but because internal AI tools are compounding developer output.

 

The AI story for customers centers on what Klaviyo calls the autonomous B2C CRM — essentially, AI that can plan, create, send, and optimize marketing campaigns with minimal human intervention, using the rich first-party data that merchants have accumulated within Klaviyo about their customers' browsing behavior, purchase history, and engagement patterns. The outcome data is striking: more than 50% of marketing campaigns from customers using Marketing Agent are now generated by AI, with some customers achieving a 50% increase in open rates and a 40% rise in revenue per campaign.

 

The key competitive question on the call — pressed directly by Goldman Sachs — was about what prevents an AI-native competitor from replicating Klaviyo's advantage. Bialecki's answer centered on the proprietary context that Klaviyo holds: the company's advantage lies in its extensive dataset and infrastructure designed specifically for real-time use cases — allowing for real-time decisions and personalization that are difficult to replicate. This is the Klaviyo data moat argument: a general-purpose AI model doesn't know that a specific customer browsed a particular product twice this week, abandoned their cart on Wednesday, opened a discount email last month but didn't convert, and has a lifetime value pattern consistent with high-margin repeat buyers. Klaviyo knows all of that in real time, and its AI models are trained on the behavioral patterns of hundreds of thousands of ecommerce merchants across billions of consumer interactions. Bialecki noted that Klaviyo Attributed Value — the revenue Klaviyo attributes to its platform — came to nearly $80 billion for customers last year, a remarkable demonstration of how central the platform has become to ecommerce revenue generation.

 

The newer service product — essentially AI-powered customer service built natively on the Klaviyo platform — is an important strategic expansion. Customer resolution rates increased by 20 points at reference customers, and agent-driven sales saw up to 111% growth, with the service category noted as the fastest-growing product launch in company history. What makes this expansion particularly logical is that the same first-party data that powers marketing personalization — purchase history, browsing behavior, preferences — is equally valuable for customer service. An agent that already knows you're a repeat buyer, knows what you purchased last month, and knows your communication preferences can resolve a service inquiry far more effectively than a generic support bot. Klaviyo is threading marketing and service on the same data layer, which creates compounding value and makes the platform significantly harder to displace.

 

Management was conservative with guidance; the CFO explicitly said minimal service contribution is built into 2026 guidance, framing it as embedded upside rather than assumed revenue. Given the early but strong traction signals, that conservatism could create meaningful upside through the year as service product adoption scales. The announced global partnership with Accenture and the doubling of million-dollar ARR customers are additional signals that Klaviyo is moving credibly upmarket — and an upmarket customer deploying Klaviyo across both marketing and service is a much stickier, higher-value relationship than a mid-market email marketing customer alone.

 

Datadog

Datadog's Q4 2025 call was one of the strongest in the enterprise software sector this cycle, and the stock's 16% surge on the day reflects how comprehensively it addressed investor concerns about AI's net impact on the observability market. The headline numbers were excellent: Q4 revenue of $953 million beat the high end of guidance, up 29% year-over-year, with bookings of $1.63 billion representing 37% growth and including 18 deals over $10 million in TCV and two over $100 million. But the more interesting story is about the structure of that growth and what it tells you about how AI is reshaping Datadog's business.

 

The most analytically important disclosure on the call was the explicit breakout of AI-native vs. non-AI-native customer growth. CEO Olivier Pomel confirmed what investors had been anxious about — that AI-native companies were a large portion of Datadog's growth but also a concentrated risk — and then gave the answer the market needed: revenue growth from the broad base of customers excluding AI natives accelerated to 23% year-over-year in Q4, up from 20% in Q3. That acceleration in the non-AI cohort is the key signal.

 

The AI-native customer base is itself becoming more strategically significant. About 650 AI-native customers use Datadog, including 19 spending $1 million or more annually, with 14 of the top 20 AI-native companies as customers. That last statistic is worth sitting with: 14 of the 20 most important companies building AI infrastructure at scale have chosen Datadog as their observability platform. When those companies grow — as they are, dramatically — Datadog's revenue from them grows consumption-proportionally. It's a high-quality, high-growth cohort with deep platform dependency and limited churn risk given how embedded observability is in production operations.

 

The product story adds a forward-looking dimension that the revenue numbers don't fully capture yet. MCP server tool calls rose 11-fold quarter-over-quarter in Q4 — a staggering acceleration in AI agent-driven requests flowing through Datadog's infrastructure. This mirrors what Cloudflare observed on its network: AI agents are creating exponentially more monitoring and observability needs than human-driven applications because they operate continuously, at scale, without the natural pauses that human users introduce. An AI agent making thousands of API calls per hour needs its performance tracked, its errors flagged, its costs monitored, and its security posture verified — all of which are Datadog capabilities.

 

The AI SRE agent — Datadog's flagship agentic product for automated incident response and root cause analysis — reached general availability in December and surpassed 2,000 trial and paying customers within a month. That pace of adoption is notable: it suggests both strong market need and a well-designed product that customers can get value from quickly. The AI SRE agent pairs with Datadog's new OnCall product (which reached 3,000 customers for incident response), creating a loop where Datadog not only detects issues but can autonomously investigate and suggest — or execute — remediation. This is Datadog moving from observability toward what it calls "autonomous operations," a positioning that closely mirrors Dynatrace's strategic framing and that creates a much larger TAM than traditional monitoring alone.

 

The product breadth story is also becoming a real competitive moat. 84% of customers use two or more products, 55% use four or more, and 33% use six or more — metrics that have improved consistently year-over-year. In a market where AI workloads require observability, security, and performance monitoring to be integrated rather than siloed, Datadog's ability to provide all of these from a single platform — with consistent data models, unified pricing, and no integration tax — is a structural advantage that becomes more valuable as AI application complexity increases. 48% of the Fortune 500 are already Datadog customers, with median ARR per Fortune 500 customer under $500,000 — suggesting enormous expansion headroom within an already-won customer base as those enterprises deepen their AI infrastructure.

 

The honest nuance on the call was the full-year 2026 guidance of 18-20% growth — a deceleration from the 29% Q4 print, partly explained by concentration in the AI-native cohort and the conservative guidance philosophy CFO David Obstler has consistently maintained. The AI-native companies that drove outsized growth in 2025 create difficult comparisons in 2026, and management is transparent about modeling conservatively on that cohort. But the underlying signals — non-AI-native acceleration, deepening multi-product adoption, an inflection in AI agent-driven usage, and the early momentum of the AI SRE agent — all point toward a business where the AI tailwinds are broadening rather than narrowing as the year progresses.

 

Shopify

Shopify's Q4 2025 call was unlike any other in this series because President Harley Finkelstein wasn't just reporting on AI's impact on the business; he was announcing that Shopify intends to be the infrastructure layer for the entire new era of AI commerce. The call opened with unusual theatrics for an earnings call, invoking Tobi Lütke's 2015 IPO vision, and the purpose was deliberate: to signal to the market that what Shopify is building in 2026 is not an incremental feature set but a foundational bet on how commerce itself will work when AI agents replace traditional search as the primary discovery and purchasing channel.

 

Q4 revenue reached $3.1 billion, up 31% year-over-year, and full-year 2025 revenue of $11.6 billion grew 30% — Shopify's highest annual growth since 2021. B2B GMV grew 84% in Q4 and 96% for the full year, international revenue grew 36%, and Shop Pay processed $43 billion in Q4 GMV, exceeding 50% of Shopify's US GPV.

 

Now the AI story, which is Shopify's most strategically distinctive narrative in this entire series. Since January 2025, orders coming to Shopify stores from AI search have grown 15x. That's from a small base, but 15x in 12 months is not a rounding error; it's evidence that AI-powered shopping is moving from novelty to channel. What makes this uniquely important for Shopify is how the company is positioned relative to that shift. When an AI shopping agent helps a consumer find and purchase a product, the transaction still needs to flow through a checkout layer, with all the complexity that entails: inventory checks, payment processing, tax calculation, shipping logistics, fraud detection, and post-purchase fulfillment. Shopify's checkout process is not bypassed by AI agents; the economics for merchants remain the same as if the transaction occurred in the online store, with the full backend of commerce continuing to flow through Shopify. In other words, AI is creating new discovery channels, but Shopify remains the execution layer underneath all of them.

 

The Universal Commerce Protocol co-developed with Google is Shopify's bid to codify this position as a standard. UCP is described as "the only protocol that covers the full commerce journey, end-to-end," and is payment-agnostic. The strategic intent is clear: just as HTTP became the standard that the web runs on, Shopify wants UCP to become the standard that agentic commerce runs on. When analysts pressed on the competitive threat from ACP — the competing standard proposed by OpenAI and Stripe — Finkelstein's response emphasized that UCP is designed as a comprehensive protocol covering everything from search to post-order, maintaining the merchant's checkout logic regardless of what AI agent surfaces the transaction. The "payment-agnostic" framing is notable: Shopify is saying it doesn't need to own the payment to benefit from the transaction, which lowers the barrier to adoption for merchants and platforms that might otherwise resist ceding the payment relationship.

 

The Sidekick AI features tell a parallel story about AI's impact on merchant operations rather than just discovery. Sidekick generated almost 4,000 custom apps, created over 29,000 automations, and edited 1.2 million photos within just three weeks of a feature launch. For a merchant managing a growing ecommerce business — product catalog, marketing campaigns, customer service, inventory management — these are capabilities that previously required a developer or a specialized agency. AI is making Shopify merchants dramatically more operationally capable without proportional headcount growth, which deepens platform dependency and expands the addressable market to entrepreneurs who previously couldn't afford to operate at scale.

 

The biggest investment implication from this call is one that runs throughout this entire series but is most vivid here: Shopify is not merely an AI adopter or an AI-enhanced product; it is positioning itself as the commerce infrastructure that AI agents must connect to in order to transact in the physical economy. Agentic commerce is viewed as an evolution of existing channels, giving merchants flexibility to add or remove channels as needed — and Shopify's platform, with its integrated suite of payments, inventory, tax, and analytics, is what makes that flexibility possible at scale. If the trajectory of AI-driven shopping continues, Shopify's dual position as both the merchant operating system and the commerce protocol that AI agents plug into could represent one of the most structurally advantaged positions in the AI economy.

 

Q2 Holdings

Q2 Holdings occupies a peculiar position in the enterprise software landscape: it's a company that has been quietly executing a multi-year transformation into a profitable growth business while serving one of the most conservative, compliance-bound industries in the economy — community and regional banks and credit unions. The Q4 2025 call was, in essence, the culmination of that three-year transformation, and the AI story running through it is less about flashy product launches and more about what becomes possible when the foundational infrastructure finally catches up to the strategic ambition.

 

The financial discipline story is worth leading with. Full-year 2025 revenue of $794.8 million grew 14% — Q2's highest annual growth rate since 2021 — while adjusted EBITDA expanded 49% to $186.5 million, with free cash flow conversion of 93%. Q4 subscription revenue grew 16% year-over-year, EBITDA margins expanded 400+ basis points, and the company finished with a $2.7 billion backlog, up 21% year-over-year.

 

The completion of Q2's cloud migration in January 2026 is the single most important structural event in the company's recent history, and it connects directly to the AI opportunity. For years, Q2 operated across a hybrid of legacy on-premise infrastructure and cloud environments, which constrained both gross margins and the ability to deploy AI capabilities consistently across the customer base. The cloud migration is now complete, and management described it as the single biggest lever for achieving their 60%+ gross margin target — but the less-discussed implication is that a fully cloud-native architecture is also the prerequisite for embedding AI across Q2's platform in a way that reaches all customers simultaneously, rather than a patchy rollout constrained by infrastructure heterogeneity.

 

The most strategically important AI claim on the call was CEO Matt Flake's assertion that "AI innovation within financial services will flow through Q2, not around us." That's a bold statement, and worth unpacking. Q2's argument is that AI in banking can't simply be dropped in from the outside; it requires deep integration with the core digital banking platform, where customer identity, transaction history, behavioral patterns, and risk profiles live. Generic AI tools built by fintech startups or AI companies don't have access to that data; Q2 does, because it is the system of record for the digital banking experience at hundreds of financial institutions. The Innovation Studio framework, Q2's open development platform, is the vehicle through which fintechs and partners can build AI-powered applications on top of Q2's infrastructure, keeping Q2 at the center of the innovation ecosystem rather than being bypassed by it.

 

The fraud and risk opportunity is where the AI story becomes most concrete near-term. Risk and fraud has emerged as one of the most strategically important areas in Q2's portfolio, with fraud now described as continuous, cross-channel, and embedded in nearly every digital interaction. Q2 secured its largest fraud technology contract to date with a $200 billion bank in Q4, a landmark win that validates the platform's relevance at the very top of the market. The argument management makes is compelling: AI-powered fraud detection that can synthesize signals across retail banking, small business, and commercial in real time is fundamentally different from the fragmented point solutions banks have historically deployed. Q2's integrated platform, sitting across all of those channels simultaneously, is uniquely positioned to offer that cross-channel view in a way that no standalone fraud vendor can match without first solving an integration problem.

 

The cross-sell opportunity within the existing customer base is also notable. Only 10% of Q2's Tier 1 customer base has all three of its core solutions — retail digital banking, commercial digital banking, and relationship pricing/fraud — and only 25-30% of digital banking customers have adopted fraud products. That penetration gap represents an enormous amount of runway for expansion revenue growth within an already-won, already-contracted customer base — exactly the kind of expansion that AI-powered products will accelerate as banks feel growing urgency around fraud prevention and competitive differentiation in the digital banking experience. The financial institution sector's conservatism, which has historically slowed Q2's growth, is now working in its favor: banks are sticky, they don't churn lightly, and when they do adopt AI solutions, they go deep on a trusted platform rather than experimenting broadly with unproven tools.

 

HubSpot

HubSpot's Q4 2025 call was one of the more intellectually substantive in this series, because CEO Yamini Rangan was explicitly grappling with a question that most companies in this series prefer to answer only obliquely: in a world where AI can generate marketing content, qualify leads, answer customer inquiries, and analyze pipeline data autonomously, what is the role of a human-operated CRM platform like HubSpot? Her answer — that the gap between generating AI output and driving actual growth outcomes is where HubSpot wins — is the central thesis of the company's 2026 strategy.

 

Full-year 2025 revenue grew 18.2% to $3.1 billion, with Q4 at 18% constant currency growth and 20% as reported, alongside a Q4 operating margin of 22.6% — the kind of growth-and-margin combination that puts HubSpot in the upper tier of scaled SaaS businesses. Net new ARR grew 24% in 2025, six points above constant currency revenue growth — a leading indicator that the demand pipeline is strengthening, not weakening, as AI proliferates.

 

The AI product metrics are more concrete than most companies in this series have delivered. Customer Agent, HubSpot's AI support agent, was activated by over 8,000 customers with mid-sixties percent resolution rates, meaning it resolved roughly two-thirds of customer inquiries without human intervention. Prospecting Agent was activated by over 10,000 customers, up 57% quarter-over-quarter, and Data Agent was activated by 2,500+ customers. The usage-based credit consumption data is particularly useful for understanding where AI is actually generating value: Customer Agent accounted for approximately 60% of credits consumed in Q4, with Prospecting Agent, Data Agent, and intent monitoring each representing 10-15% of credits. That distribution tells you that the customer service use case is the most advanced and most adopted, while prospecting and data enrichment are emerging as the next wave.

 

The customer outcome data is where Rangan built her AI credibility argument most effectively. Customer Agent users were booking nearly twice as many meetings compared to the prior year — a striking productivity improvement that goes directly to the growth outcomes that HubSpot's customers care about. This is Rangan's core thesis made concrete: generic AI can generate text, but HubSpot's AI — embedded in a platform that knows your contacts, your deal history, your customer interactions, and your competitive context — can actually drive measurable revenue growth because it has the context to make good decisions, not just the capability to generate outputs.

 

CTO Dharmesh Shah's presence on the call was notable; he appeared specifically to address the competitive threat from external AI agents and the question of whether tools like Claude or ChatGPT connectors could route around HubSpot entirely. Shah noted increased utilization of HubSpot's Claude and ChatGPT connectors by leading-edge customers but no material uptake yet for third-party Claude Co-Work-style features that might disintermediate the platform. The implication is that HubSpot is watching this carefully and doesn't believe the disintermediation risk is imminent — but the fact that Shah was on the call to address it signals that management takes the question seriously. HubSpot's answer is that context and integration are the moat, not the AI model itself, and that a customer who tries to use a generic AI agent for sales and marketing quickly discovers they need the CRM data that HubSpot holds.

 

The internal AI adoption story adds the efficiency dividend dimension. 97% of code committed in 2025 used AI assistance, and nearly 60% of support is handled by AI internally — numbers that are among the highest cited by any company in this series.

 

The pricing transition is also worth understanding as context for the 2026 growth outlook. 90% of legacy customers have moved to HubSpot's new pricing model, with nearly 50% of ARR through first renewal — meaning the structural shift that enables HubSpot to charge for AI usage (through the credit model) is nearly complete. As that base continues to expand and AI credit consumption grows, HubSpot gains a consumption-based revenue layer on top of its subscription base — the same hybrid model that several companies in this series are pursuing, and one that creates upside to guidance as AI usage scales.
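The hybrid shape described above (a fixed subscription layer plus consumption billed in AI credits) can be sketched as follows; the subscription price, credit price, and included allotment are all hypothetical, not HubSpot's actual pricing:

```python
# Sketch of a hybrid subscription-plus-consumption revenue model:
# a fixed subscription fee plus overage on AI credits consumed beyond
# an included allotment. All figures are hypothetical illustrations.
def monthly_revenue(subscription: float, credits_used: int,
                    price_per_credit: float, included_credits: int = 0) -> float:
    """Subscription plus overage billing on credits beyond the included allotment."""
    overage = max(0, credits_used - included_credits)
    return subscription + overage * price_per_credit

# A customer on a $1,000/mo plan that includes 2,000 credits,
# consuming 5,000 credits at $0.01 per overage credit:
print(monthly_revenue(subscription=1_000.0, credits_used=5_000,
                      price_per_credit=0.01, included_credits=2_000))  # 1030.0
```

The structural point is that the subscription floor is fixed while the consumption layer scales with AI usage, which is why rising credit consumption creates upside to guidance rather than being baked into it.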

 

Paycom

Paycom's Q4 2025 call was a study in the tension between a genuinely innovative automation story and a growth profile that tells a more modest tale. CEO Chad Richison has been one of the most consistent voices in HCM software around the idea that payroll and HR can be fully automated, not just assisted, and that the end state is a system that employees interact with directly to manage their own HR experiences without HR professionals as intermediaries. The 2026 guidance of 6-7% total revenue growth and 7-8% recurring revenue growth is conservative by any standard, and it sits in sharp contrast to the aspirational language around full solution automation and AI-driven decisioning that dominated the prepared remarks.

 

The product story centers on two flagship innovations. BETI (Better Employee Transaction Interface) pioneered the concept of employee-driven payroll — employees review and approve their own payroll before it's processed, catching errors at the source rather than after the fact. IWant is the newer AI-powered layer on top, functioning as a command-driven natural language interface that lets managers, executives, and HR professionals ask questions and get immediate, contextual answers from within the Paycom system of record. Leaders describe IWant as a catalyst for deeper insight, with one CEO noting they can go in without training and immediately understand more about their business — and the product saw usage increase 80% in January alone from Q4 levels.

 

The quantified ROI story is one of the most specific in this series: managers save as many as 600 hours per year, executives up to 60 hours, HR teams up to 240 hours, and employees across the organization collectively reclaim 3,600 hours annually. For a mid-market company where HR is often a small team wearing multiple hats, 240 hours reclaimed per HR professional is a genuinely meaningful productivity improvement — equivalent to roughly six weeks of additional capacity.
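The six-weeks equivalence is simple arithmetic, assuming a standard 40-hour work week (the work-week length is our assumption, not a figure from the call):

```python
# Arithmetic behind the "roughly six weeks" claim above,
# assuming a standard 40-hour work week.
HOURS_SAVED_PER_HR_PRO = 240   # hours reclaimed per HR professional per year
HOURS_PER_WEEK = 40            # assumed standard work week

weeks_reclaimed = HOURS_SAVED_PER_HR_PRO / HOURS_PER_WEEK
print(weeks_reclaimed)  # 6.0
```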

 

The challenge the market is wrestling with is whether these product innovations translate into accelerating revenue, and the 2026 guide suggests the answer is not yet — at least not dramatically. Annual revenue retention improved to 91% from 90% — a positive directional signal, but still below where best-in-class HCM platforms tend to operate. Client count grew approximately 5% in 2025, suggesting the automation and AI narrative is not yet meaningfully pulling new customers off the sidelines at an accelerating pace.

 

The "full solution automation" positioning is Paycom's most distinctive strategic bet, and one that has an interesting read-across to the broader theme running through this series. Richison's core thesis is that most enterprise software companies offer a buffet, a collection of features that customers pick and choose from, deploying them partially and leaving significant value on the table. Paycom's vision is closer to a tasting menu: a fully integrated, automated system where every component works together and the customer doesn't have to decide which pieces to adopt because the system is designed to run end-to-end without human intervention. The goal is "full solution automation to where you buy it, you configure it, and it does everything else for you." That's an AI-native product philosophy, not an AI-enhanced legacy product — and it's the right framework for the era we're in. The question is whether Paycom can accelerate its market penetration — it estimates it has attacked only 5% of its total addressable market — with a message that requires enterprises to think differently about how they deploy HR software, which remains a slower sales cycle than the technology itself might warrant.

Unity

Unity's Q4 2025 call was structured around a single central thesis: the IronSource story is ending, the Vector story is just beginning, and when you understand what Vector actually is and what data it has access to, the bull case becomes considerably more interesting than the headline numbers suggest. The stock's 33% decline on earnings day reflected investor frustration with near-term revenue softness and the ongoing IronSource drag. But the underlying dynamics that management laid out on the call point toward a business in genuine strategic transition rather than structural decline.

The mechanics first. Vector delivered its third straight quarter of mid-teens sequential growth and reached a 53% increase in the three quarters since launch, with January 2026 setting an all-time monthly record, 72% higher than the prior January and surpassing December's holiday peak. That January figure is particularly striking because January is typically a seasonally weak month for mobile advertising; growing 72% year-over-year in what should be a trough month signals underlying demand that isn't just cyclical. By the end of 2026, management expects Vector's annualized revenue run rate to be comfortably above $1 billion, a target that, if achieved, would represent one of the fastest-scaling AI advertising products in the industry.

The reason Vector is strategically distinctive in the context of this series is the data it sits on top of. Unity is the game engine used to build a substantial portion of the world's mobile games — meaning that when games built on Unity are played, Unity can observe (with developer consent) highly granular, real-time behavioral data about how users actually interact with those games: what they click on, where they spend time, what in-app purchases they make, how their engagement patterns evolve over sessions and days. Customer opt-in rates to the developer data framework exceed 90% — meaning almost the entire Unity developer ecosystem has consented to share runtime behavioral data. Over Q1, Unity is scaling testing of runtime engine data with the expectation it will be live in Vector during Q2 — and management was explicit that this isn't expected to be a sudden inflection but rather a compound improvement in model quality over time.

This is a data moat with a specific character: it's not purchase history or demographic data (which many ad platforms have), it's behavioral engagement data from the actual gameplay experience, observed in real time across hundreds of millions of devices. An ad targeting model trained on that signal can predict which users are likely to convert on an in-app purchase with significantly more precision than models trained on web browsing or social media behavior, because the signal is more direct and less noisy. Management framed the transition from IronSource to Vector explicitly as a shift from commoditized lower-margin ad network revenue to deeply differentiated AI platform revenue — and the margin improvement story confirms that framing: EBITDA margins expanded to 22% for the full year with free cash flow up 41% to over $400 million.

The Create business — Unity's game engine and development tools — adds a second AI dimension that gets less attention but matters strategically. Unity 6 is showing the highest adoption speed of any major release, Create revenue grew 16% excluding non-strategic items, and the China business grew nearly 50% driven by ecosystem interoperability. The 2026 product roadmap for Create centers on two themes: browser-based authoring (no download required, making Unity accessible to a dramatically wider developer audience) and AI authoring tools that make game creation accessible to non-professional developers. If AI lowers the barrier to creating interactive content — games, simulations, virtual experiences — the long-run result is more content built on Unity, which generates more behavioral data, which improves Vector's models. The two businesses are more symbiotically linked than their separate revenue reporting suggests.

The honest tension on the call — and the reason the stock reacted so harshly — is that the IronSource decline is still creating near-term revenue headwinds that mask the Vector growth story. IronSource declined to 11% of Grow revenue in Q4 and is expected to fall below 6% of total company revenue in Q1, which means the drag is largely behind them — but "largely behind" is not "fully behind," and investors who've been waiting for the Vector acceleration to show up clearly in reported revenue numbers are still waiting for the noise to clear. The 2026 setup, with IronSource essentially a rounding error, should finally allow Vector's growth to be visible in the headline numbers. Whether management's conviction in Vector's trajectory — and in the data moat that underlies it — proves correct is the central investment question for this company.

Twilio

After years of prioritizing growth over profitability and facing legitimate questions about whether its communications API business had the kind of defensibility that justifies a premium multiple, the company delivered: record quarterly revenue of $1.4 billion, $256 million of non-GAAP operating income, $256 million of free cash flow, and for the full year, $5.1 billion in revenue with $924 million of non-GAAP operating income and $945 million of free cash flow — its first full year of GAAP profitability.

But the more interesting story on the call was the AI positioning, and specifically the claim that Twilio is not being disrupted by AI but is instead becoming more essential because of it. CEO Khozema Shipchandler made this argument directly: "We are moving beyond being a provider of communications channels and data toward becoming a foundational infrastructure layer in the age of AI, providing customers with the foundational infrastructure layer that embeds persistence, memory, context, and the ability to spin up an agent, no matter what its capabilities are, all on the Twilio platform." That's a meaningful repositioning of what Twilio is: not just SMS and voice APIs, but the communications backbone through which AI agents interact with the world.

The Voice AI story is the clearest near-term evidence of this transition. Voice AI revenue grew above 60% year-over-year in Q4, the fastest-growing segment in the business, as enterprises deploy AI agents for customer service, sales automation, and appointment management that need to make and receive phone calls, handle conversations, and navigate real-time voice interactions. Twilio sits in the middle of those calls: it's the infrastructure that routes the audio, handles the telephony, manages the compliance, and increasingly provides the AI scaffolding that makes those agents work. Voice AI agents are being integrated into core customer care and sales automation platforms, with broad adoption across all customer cohorts and significant interest from the ISV community.

RCS — Rich Communication Services — is the other emerging growth story worth understanding. RCS saw a 5x sequential increase in volume, with 70%+ open rates on messages — dramatically higher than traditional SMS. As carriers and device manufacturers roll out RCS more broadly, branded, interactive messages (complete with product images, carousels, and click-to-buy buttons) are becoming the standard for business-to-consumer communication. Twilio is well-positioned here because RCS requires the same carrier relationships and A2P messaging infrastructure that Twilio has spent a decade building. Management was appropriately cautious about the pace of RCS adoption — it's still growing from a smaller base — but the 70%+ open rates versus the 20-25% typical for email marketing are a compelling data point about the channel's value proposition.

The honest tension on the call was around gross margins. Non-GAAP gross margin declined to 49.9% in Q4 — down 200 basis points year-over-year — and the company expects an additional 170 basis point reduction in 2026 due to approximately $190 million in incremental carrier pass-through fees. Management was transparent that these fees compress margin percentages but don't harm profit dollars or cash generation — a crucial distinction. The carrier fees are largely pass-through costs associated with revenue growth, and Twilio's operating expense discipline (non-GAAP operating expenses declined 1% year-over-year) means the dollar-level profitability is expanding even as the percentage margin faces pressure.

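Management's distinction (pass-through fees compress the margin percentage without harming profit dollars) is easiest to see with toy numbers. The figures below are hypothetical and purely illustrative, not Twilio's actual cost structure:

```python
# Illustrative only: revenue that arrives bundled with matching pass-through
# costs lowers gross margin % while gross profit dollars still grow.
def gross_margin(revenue: float, cogs: float) -> float:
    return (revenue - cogs) / revenue

# Baseline quarter (hypothetical): 50% gross margin, $500 of gross profit.
rev0, cogs0 = 1000.0, 500.0

# Next quarter: $100 of new messaging revenue carrying $95 of carrier
# pass-through fees, plus 5% growth in the existing business.
rev1 = rev0 * 1.05 + 100
cogs1 = cogs0 * 1.05 + 95

print(f"margin: {gross_margin(rev0, cogs0):.1%} -> {gross_margin(rev1, cogs1):.1%}")
print(f"gross profit: {rev0 - cogs0:.0f} -> {rev1 - cogs1:.0f}")
# The margin percentage falls even though gross profit dollars rise.
```

The same mechanism explains why a 170 basis point margin headwind can coexist with expanding operating income: the pass-through fee is nearly margin-neutral in dollars but dilutive to the percentage.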
The strategic picture Twilio is painting is one of a company that owns the communication nervous system that AI agents need to interact with humans and each other at scale. Every outbound notification, every inbound voice call, every RCS message, every two-factor authentication — as AI agents proliferate and their need to communicate across channels grows, the infrastructure layer that manages those communications becomes more valuable and higher volume. Analyst coverage noted Twilio's combination of omnichannel communications, contextual data, AI frameworks, developer base, and technology partnerships as making it "the company to beat in CPaaS AI." Whether that competitive position is durable against well-funded challengers — and whether the 8-9% organic revenue growth guided for 2026 can re-accelerate as AI agent deployment scales — is the central question investors are now evaluating against a much cleaner financial backdrop than Twilio has had in years.

Procore

Procore's Q4 2025 call was anchored by a CEO who spent significant time making a case that most enterprise software companies can only aspire to make: that their core business is strong, their AI adoption is real and growing, and — crucially — their strong results were achieved *before* any material top-line contribution from AI. "Our strong results and momentum were all achieved before any material top line benefits from AI that we expect to realize in the future," CEO Ajay Gopal stated directly.

The underlying business metrics are genuinely strong for a company in the construction technology space. Q4 revenue grew 15.6% to $349 million, $100,000+ ARR customers grew 20% year-over-year to more than 2,700, and million-dollar ARR customers grew 34% to 115. Procore Pay — the embedded payments product — grew customers 70%+ year-over-year, adding a financial services revenue layer on top of the software subscription that mirrors what several other vertical SaaS companies in this series are building. RPO growth of 22% with longer average contract durations signals that customers are deepening their commitment, not hedging.

The AI story at Procore operates on two levels that deserve separate attention. The first is current adoption: 66,000 unique active users are using Procore AI, and nearly 700 customers have created thousands of agents on the Procore platform. For a construction software company serving an industry not known for rapid technology adoption, that's a meaningful early signal. The construction sector is uniquely data-rich — every project generates enormous volumes of drawings, specifications, RFIs, submittals, safety reports, schedule updates, and financial data — and AI that can make sense of that complexity has an immediate and obvious ROI case for project managers, superintendents, and owners who are currently drowning in documents and spreadsheets.

The second and more strategic level is the Datagrid acquisition, which Procore is positioning as its AI engine for extracting intelligence from the unstructured data that pervades construction — blueprints, contracts, inspection reports, change orders. CEO Gopal called AI "an even more meaningful catalyst than any we've seen before" for the construction industry — a significant claim for an industry that has historically lagged in technology adoption. The argument is that AI can finally make sense of the extraordinary complexity and documentation burden of large construction projects, where a single major project can involve hundreds of subcontractors, thousands of documents, and billions of dollars flowing through an interconnected web of contracts and invoices.

The monetization strategy is still being developed, but the direction is clear. Gopal indicated Procore is likely to include AI offerings within upcoming bundles as part of new packaging, and also likely to include consumption-based components — the same hybrid approach emerging across this series. This is smart positioning: bundling AI into higher-tier packages drives ARPU expansion through upsell, while consumption-based components capture value from the most intensive users and create revenue that grows with AI workload volume rather than headcount.

There's also a structural advantage in Procore's pricing model that becomes more interesting in the context of AI. Unlike seat-based SaaS, Procore prices on construction volume — the dollar value of projects being managed. This insulates revenue from workforce reductions and enables efficiency-driven margin gains supported by agentic AI deployment — meaning if AI makes construction teams more productive and able to manage more projects per person, Procore actually benefits as construction volume scales, rather than being hurt by the headcount reductions that AI might cause.

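The insulation argument can be sketched with toy numbers. All figures below are hypothetical (Procore's actual rates are negotiated per customer); the point is only the direction each model moves when AI lets a smaller team manage more construction volume:

```python
# Toy comparison of seat-based vs construction-volume-based pricing
# under an AI-driven productivity shift (all numbers hypothetical).

def seat_revenue(seats: int, price_per_seat: float) -> float:
    return seats * price_per_seat

def volume_revenue(construction_volume: float, take_rate: float) -> float:
    return construction_volume * take_rate

# Before AI: 100 users managing $500M of annual construction volume.
before_seats, before_volume = 100, 500e6
# After AI: 20% fewer users, each managing more projects; volume up 30%.
after_seats, after_volume = 80, 650e6

print("seat-based:", seat_revenue(before_seats, 1200), "->", seat_revenue(after_seats, 1200))
print("volume-based:", volume_revenue(before_volume, 0.001), "->", volume_revenue(after_volume, 0.001))
# Seat-based revenue falls with headcount; volume-based revenue rises with output.
```

Under these assumptions the seat-priced vendor loses revenue as AI shrinks headcount, while the volume-priced vendor gains as output per person grows, which is the structural point being made above.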
The Procore story is also notable for a dimension unique in this series: the international hyperscaler data center win. A top international deal was secured with a UK data center hyperscaler, positioning Procore in the fast-growing AI infrastructure construction vertical. The irony is elegant — the physical construction of AI data centers is itself becoming a major construction category, and Procore is winning the software contracts to manage those builds. The companies spending $30-40 billion annually on AI infrastructure need project management software to coordinate those construction programs, and Procore is the leading platform for exactly that type of large, complex capital project.

JFrog

JFrog's Q4 2025 call told a story that sits at a genuinely underappreciated intersection: the company that manages how software gets built and deployed is now becoming the company that manages how *AI* gets built and deployed — and the security, governance, and supply chain challenges in that transition are creating exactly the kind of complex, high-stakes problem that JFrog is purpose-built to solve.

The financial foundation is strong. Full-year revenue reached $531.8 million, up 24% year-over-year, with cloud revenue growing 45% to $243.3 million — now representing 46% of total revenue, up from 39% a year prior. Large customers spending more than $1 million annually grew 42% to 74, and customers spending over $100,000 grew 50% year-over-year — the kind of enterprise momentum that speaks to JFrog becoming more strategic to its customers, not less, despite competitive concerns about AI potentially automating away some of its core use cases.

The core thesis of JFrog's AI positioning is worth spending time on, because it's more specific and arguably more defensible than most. JFrog's Artifactory platform is where enterprises store, manage, and distribute software artifacts — the compiled packages, libraries, containers, and binaries that make up modern software applications. In the AI era, a new category of artifact has emerged: AI models, training datasets, model weights, and the outputs of AI-generated code. JFrog announced the availability of AI Catalog and Agentic Remediation capabilities to address the emerging challenges created by the introduction of AI models and agent-generated code — a direct response to the new security attack surface that AI creates. An enterprise deploying a downloaded open-source model from Hugging Face has the same supply chain security questions as one deploying a downloaded open-source library from PyPI: is this safe? Is it licensed correctly? Has it been scanned for vulnerabilities or malicious code?

The Hugging Face partnership is strategically clever precisely because it addresses this challenge at the source. CEO Shlomi Ben Haim described Hugging Face as "a top of funnel for JFrog because it is the open-source hub" — meaning the place where enterprises go to find and download AI models is now funneling those enterprises toward JFrog's secure model registry. The NVIDIA Enterprise AI Factory partnership adds another dimension: enterprises building AI applications on NVIDIA infrastructure need their model artifacts managed, versioned, and secured in exactly the same way their software artifacts have always been managed. JFrog is positioning itself as the universal registry for both.

The security story is the fastest-growing part of the business. Security core — JFrog Advanced Security and Curation — represented over 10% of total ARR at year end, 7% of total revenue for the year, and comprised 16% of year-end RPO compared to 12% the prior year. That acceleration in RPO contribution is significant: it means security is growing faster than the rest of the business and is being committed to on longer-term contracts, suggesting customers view JFrog's security capabilities as a strategic investment rather than a discretionary add-on. The Curation capability — which screens packages for security vulnerabilities, license compliance, and operational risk *before* they reach developers — is particularly relevant in an AI context where the risk of supply chain attacks through malicious or compromised models is real and growing.

The MCP server integration mentioned on the call is one of the more forward-looking product signals. MCP server and JFrog Fly integrations support a shift toward business-to-agent workflows, enabling code agents to interact directly with the platform. This is JFrog acknowledging that the new consumer of its artifact registry may not be a human developer fetching a package — it might be an AI coding agent automatically pulling dependencies as part of an autonomous software development workflow. Ensuring that those agent-driven interactions are secure, governed, and auditable is a new but critical requirement that JFrog's existing infrastructure is uniquely suited to address.

The honest challenge in the JFrog story is the growth rate deceleration to 17.5% guided for 2026, from 24% in 2025. Management framed this partly as timing — some large deals pulled into 2025 — and partly as a deliberate choice to consolidate lower-value customers and focus on enterprise expansion. But it's also a reflection of the reality that JFrog's core DevOps market is mature, and the AI adjacency opportunity, while real and growing, hasn't yet produced the step-change revenue acceleration that would justify a meaningfully higher growth multiple. The security and AI governance opportunity is clearly there; the question is the pace at which enterprise adoption converts to incremental JFrog revenue.

SPS Commerce

SPS Commerce's Q4 2025 call was the story of a quietly exceptional company navigating a genuinely difficult near-term environment while laying the groundwork for what it believes is an AI-driven acceleration ahead. The milestone of 100 consecutive quarters of revenue growth is extraordinary by any standard — 25 years of unbroken quarterly revenue increases is a testament to a business model built on network effects and structural supply chain necessity rather than on macroeconomic tailwinds. That consistency matters when evaluating the current deceleration against the long-term narrative.

The financial picture tells two simultaneous stories. Full-year revenue grew 18% to $751.5 million, with recurring revenue up 20%, and adjusted EBITDA grew 24% to $231.4 million — outpacing revenue growth and demonstrating meaningful operating leverage. But the near-term guide is softer: 2026 revenue guidance of approximately 7% growth reflects several headwinds that are more timing-related than structural — delayed retailer enablement campaigns, Amazon policy changes affecting the revenue recovery business, and continued invoice scrutiny among existing customers causing some down-selling. Management was explicit that these are temporary rather than permanent, with the back half of 2026 expected to benefit from campaigns that slipped out of Q4 2025.

The AI story at SPS Commerce is distinct from most in this series because it's rooted in the specific problem of supply chain data complexity, not general-purpose business automation. SPS operates a network of over 54,000 customers — brands, retailers, distributors, and logistics providers — who exchange electronic data interchange (EDI) documents as part of their daily commerce relationships. The volume and complexity of those transactions is enormous: purchase orders, advance ship notices, invoices, inventory feeds, compliance documents. The launch of MACS — SPS's agentic AI capability — leverages proprietary network intelligence and billions of transactions to provide guided workflows and automated monitoring. This is a data moat argument applied to supply chain: SPS has processed so many transactions across so many trading partner relationships that its AI models have context about what "normal" looks like, what exceptions typically mean, and how to route resolution — context that no general-purpose AI tool could replicate from the outside.

The monetization strategy for MACS is still being developed. CEO Chadwick Collins stated that MACS is initially available to beta customers, and the company is monitoring usage to inform monetization strategies — while it is expected to enhance competitive positioning and retention, the monetization opportunities will become clearer as customer usage is better understood. This is a familiar pattern across this series — build usage before monetizing, learn what customers actually value, then price accordingly — but it also means MACS is currently a retention and competitive differentiation tool rather than a direct revenue driver.

The ARPU story is arguably more important near-term than the AI story. ARPU reached approximately $14,300 for the year — and management was explicit that revenue growth in 2026 will come primarily from ARPU expansion rather than net new customer additions. This is a platform maturation signal: SPS has penetrated a large portion of its natural customer universe, and the growth vector is now selling more to existing customers — analytics, revenue recovery, additional trading partner connections, and eventually AI-powered capabilities. The cross-sell opportunity within the existing base is substantial: fulfillment customers who also adopt revenue recovery, analytics customers who add compliance monitoring, and eventually all of them adopting AI-powered exception management.

The structural reason SPS Commerce is worth watching in the AI context is the network effect dimension. When a retailer demands that its suppliers use a specific EDI format or connection type, every supplier who complies joins the SPS network. That mandatory compliance-driven onboarding creates network value that compounds over time and is essentially impossible for a new entrant to replicate at scale. AI that can make sense of that network — pattern-matching across millions of transaction pairs, detecting anomalies, predicting disruptions, optimizing routing — becomes more valuable the larger the network gets. SPS's 100-quarter track record isn't just a historical achievement; it's evidence that the underlying network effect is durable enough to sustain growth through multiple economic cycles, and that durability is what gives the AI investment long-term credibility.

Waystar

Waystar's Q4 2025 call was, in the context of this series, a unique kind of earnings story: a company that can point to a specific, quantified dollar amount of AI-driven value delivered to its customers. "Waystar Altitude AI prevented more than $15 billion in denials for our clients, reduced appeal time by 90%, and drove double-digit increases in denial overturn rates." That's not an engagement metric or a usage number — it's a dollar figure that directly corresponds to revenue that healthcare providers would have lost without the AI. In an industry where claim denials represent a multi-hundred-billion-dollar annual problem, $15 billion in prevented denials in a single year is a number that commands attention in every C-suite conversation Waystar has with a hospital CFO.

The business fundamentals reflect this value delivery. Q4 revenue reached $304 million, up 24% year-over-year and 12% organically, with adjusted EBITDA of $129 million and a 42.5% margin — exceeding the company's long-term 40% target. The company crossed $1 billion in annual revenue for the first time in 2025, a milestone that marks its transition from a high-growth mid-market company to a scaled enterprise platform. Net revenue retention was 112% with 97% gross revenue retention and a net promoter score above 70 — metrics that are extraordinary for a healthcare technology company serving an industry known for long sales cycles, high switching costs, and conservative technology adoption.

The AI story at Waystar has a specific character that distinguishes it from most in this series. Healthcare revenue cycle management — the process of getting paid for clinical services — is one of the most complex and error-prone workflows in any industry. A typical US hospital submits millions of claims per year, each of which must be coded correctly, submitted to the right payer in the right format, appealed if denied, and tracked through a 90-180 day payment cycle. Payers deny roughly 60 million claims annually, and the administrative cost of managing those denials consumes a staggering share of healthcare revenue. Approximately 50 of Waystar's solutions leverage AI, and nearly 40% of revenue is driven by AI embedded in mission-critical reimbursement workflows — which means AI is not an add-on feature at Waystar; it's the core mechanism through which the platform delivers its primary value proposition.

The Iodine Software acquisition — which brought AI-powered clinical documentation improvement and mid-cycle denial prevention — extended Waystar's reach into the part of the revenue cycle where clinical and financial data intersect. This is where some of the most significant denials occur: a claim denied not because of a billing error but because the clinical documentation doesn't adequately support the care provided. Iodine adds more than 1,000 hospitals and health systems with only 35% customer overlap, expanding the addressable market and cross-sell opportunity — and together the combined platform delivers full revenue cycle visibility through a unified financial and clinical platform. The data underlying this platform — 1 in 3 US hospital discharges and more than 7 billion annual transactions — creates a proprietary intelligence layer that no new entrant can replicate quickly.

The monetization approach is worth understanding precisely because it maps value to outcome more directly than almost any company in this series. New AI agents in 2026 will launch as both new SKUs with distinct pricing and as augmentations to existing modules — meaning Waystar has a path to charge more for AI when it delivers incremental functionality, and to embed AI into existing subscriptions when it serves as a retention and competitive differentiation tool. The CEO's framing was direct: "AI drives retention and allows for price increases reflective of the value delivered." For a company whose customers are measurably recapturing billions in denied revenue, that pricing power argument is among the most defensible in enterprise software.

The healthcare regulatory and trust dimension also plays into Waystar's competitive advantage in a way that mirrors our Doximity and Qualys observations. Most healthcare providers — particularly community hospitals and mid-sized health systems — lack the internal engineering talent to build or customize AI for their revenue cycle workflows. Most clients prefer integrating AI capabilities into their existing systems rather than building their own, and they value working with trusted partners like Waystar, which offers a cyber-secure, integrated platform. This is a trust and complexity moat: healthcare AI that touches billing, reimbursement, and clinical documentation sits in a compliance environment (HIPAA, payer rules, CMS regulations) that makes the barrier to switching from a proven, trusted platform extraordinarily high. Waystar has spent years earning that trust, and a Black Book survey of more than 750 healthcare leaders demonstrated that Waystar leads the industry in client satisfaction with AI execution and outcomes.

SimilarWeb

Similarweb's Q4 2025 call was a story of a data company caught in an interesting position — possessing what may be one of the most valuable proprietary datasets in the AI economy, but struggling to convert that asset into predictable revenue fast enough to satisfy investors expecting a cleaner growth profile. Revenue grew 11% year-over-year to $72.8 million, below guidance — "mostly due to the timing of 2 large LLM data training contracts that did not close yet, but remain active in our pipeline." CEO Or Offer was direct about the dynamic: given the size and complexity of those AI contracts, sales cycles take longer to complete — but once closed, they represent large multiyear revenue opportunities with strong expansion potential.

The core asset is worth understanding clearly. Similarweb has spent over a decade building the most comprehensive independent dataset of digital traffic behavior on the internet — website visits, app usage, engagement patterns, referral sources, audience demographics, and competitive benchmarking across millions of digital properties globally. This data is valuable to enterprise customers for competitive intelligence, investor research, and market analysis. But in the AI era, it has acquired a second and potentially much larger use case: training data for large language models and AI systems that need to understand how the digital economy works, what consumers do online, and how web traffic flows. AI revenue grew 3x year-over-year — from a small base, but accelerating sharply — and AI-related revenue reached 11% of total sales in Q4, up from 8% in Q2 2025.

The LLM data training opportunity is structurally different from Similarweb's traditional SaaS business in ways that create both excitement and operational challenge. Traditional customers buy subscriptions for ongoing competitive intelligence. LLM training customers buy large bulk datasets — one-time or periodic purchases that can be very large in dollar terms but are inherently irregular in timing. The 2026 guidance was given a $10 million-wide range precisely because "the timing of them to land is not that clear" — a candid acknowledgment that the company is in a transitional period where a new, high-value customer category is growing fast but doesn't yet fit neatly into the quarterly cadence that public market investors expect from a SaaS business.

The partnership signals are strategically significant. The Manus partnership — following its acquisition by Meta — represents a step-change in reach, embedding Similarweb data into agent-driven workflow environments. When AI agents are conducting market research, competitive analysis, or investment due diligence autonomously, they need a trusted source of digital traffic data. Similarweb sitting inside the tools those agents use is a distribution model that doesn't require a traditional enterprise sales cycle — it's usage-driven, scales with agent adoption, and positions Similarweb as infrastructure rather than a point solution. The expanded Bloomberg Terminal integration adds a parallel distribution channel into institutional finance, where web traffic data is used as an alternative data signal for investment decisions.

The core SaaS business has its own structural improvement story running underneath the AI narrative. 60% of ARR is now on multiyear contracts, up from 49% a year ago — a significant shift toward durability and revenue predictability that reduces churn risk and increases customer lifetime value. 63% of ARR comes from customers generating over $100,000 annually, with net revenue retention at 103% for that enterprise cohort — meaning the largest customers are expanding, not contracting. The AI Studio product launch — which embeds AI-powered market intelligence directly into enterprise workflows with consumption-based pricing — is designed to broaden usage within existing accounts and drive ARPU expansion without requiring new logo growth.

The honest tension on the call was the combination of a revenue miss, widened guidance, acknowledged salesforce execution disappointment in 2025, and a new CFO joining in December — a lot of operational uncertainty concentrated in a single quarter. Management's response was appropriately direct: refocus the go-to-market on inbound leads, build a dedicated team for large LLM contract pursuit, and ground the 2026 guidance in the high-visibility core business rather than counting on AI deal timing. The 2026 guidance of $305-$315 million represents 10% growth at the midpoint.

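For readers who want the arithmetic behind that guidance claim, here is a quick sketch. The implied 2025 base is back-calculated from the stated midpoint growth rate; it was not disclosed on the call, so treat it as an estimate rather than a reported figure.

```python
# Back-of-the-envelope check on Similarweb's 2026 guidance.
# The $305-$315M range and "10% growth at the midpoint" imply a
# prior-year base that we derive here (not stated on the call).

def guidance_midpoint(low: float, high: float) -> float:
    """Midpoint of a guidance range, in $M."""
    return (low + high) / 2

def implied_base(midpoint: float, growth: float) -> float:
    """Prior-year revenue implied by the midpoint growth rate."""
    return midpoint / (1 + growth)

mid = guidance_midpoint(305, 315)    # 310.0
base = implied_base(mid, 0.10)       # ~281.8
print(f"midpoint ${mid:.0f}M, implied 2025 base ~${base:.1f}M")
```

The $10 million width of the range is also visible here: it is the difference between the high and low ends, which management tied directly to LLM deal timing.
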
Amplitude

Amplitude's Q4 2025 call opened with a strategic argument that reframes the entire conversation about AI's impact on product analytics. Most software CEOs are defending against the narrative that AI will disintermediate their product. CEO Spenser Skates made the opposite case: AI coding assistance from Anthropic, OpenAI, Cursor and others has compressed development cycles dramatically, accelerating the velocity at which companies ship new products — and when software is this easy to build, it creates a gap between how fast teams can ship features and how fast they can learn whether those features are working. This shifts the pressure to the "use and learn" side of the product development loop — understanding how users behave, what works, what doesn't, and what to do next. In other words, AI doesn't reduce the need for product analytics — it multiplies it, because teams are shipping so much faster that the bottleneck moves from building to learning.

The financial results validated the strategic narrative. Q4 revenue reached $91.4 million, up 17% year-over-year — up from 9% growth in the prior fiscal year — with total ARR of $366 million, up 17% year-over-year and up $18 million sequentially, the highest net new ARR quarter since 2021. Customers with $100,000+ ARR grew to 698, up 18% year-over-year with the largest sequential increase on record, and million-dollar ARR customers reached 56, up 33% year-over-year. Net dollar retention improved to above 105%, up from 100% at the end of 2024 — the direction the metric needs to be moving for a company whose next phase of growth depends on expanding within enterprise accounts.

The AI-native customer story is one of the most specific in this series. More than 25 of the leading AI-native companies are customers with over $100,000 in ARR, and one of the world's largest frontier AI labs is a seven-figure customer. The frontier lab use case is illuminating: they came to Amplitude to replace a manual system built from fragmented internal tools and raw warehouse data, because they needed to understand activation, engagement, retention, and monetization end-to-end for their own products — the same problems that consumer and enterprise software companies have always faced. AI companies are, at their core, software companies, and they have the same product analytics needs as everyone else. The difference is that AI companies' stakes are higher and their iteration cycles are faster, making the need for trusted behavioral data more acute.

The agentic usage data is the most striking near-term signal on the call. In October 2025, virtually no queries to Amplitude were triggered by AI agents. By the time of the Q4 call, agents were driving 25% of total queries — and agents also drove the vast majority of overall incremental query growth. That trajectory — from zero to 25% in a few months — suggests that AI agents are rapidly discovering Amplitude's behavioral data as a necessary input to their workflows. When a coding agent, product agent, or marketing agent needs to understand how users are actually interacting with a product, Amplitude is the system they're querying. This creates a new consumption layer that doesn't require human-driven engagement — agents query the platform autonomously, continuously, at scale.

The technical reason Amplitude is positioned to capture this is worth understanding. Skates was direct: Amplitude's agentic analytics platform reaches a 76% success rate on complex production-grade queries — seven times better than a straight text-to-SQL approach — a result that can't be replicated accurately by pointing an LLM at a data warehouse. Amplitude has worked with thousands of companies over 13 years and amassed the world's largest database of user behavior, purpose-built with the correct retention and funnel logic and analytical tools exposed in a way that enables AI to reason effectively. The "text-to-SQL on a data warehouse" comparison is pointed — it's a direct acknowledgment that the obvious competitive threat (generic LLMs accessing raw data) produces inferior results precisely because behavioral analytics requires semantic understanding of what user actions *mean*, not just what they are.

The MCP integrations with Anthropic, OpenAI, Figma, GitHub, Lovable, and Slack extend this intelligence to where teams already work — embedding Amplitude's behavioral context directly into the tools developers and product managers use daily. The Global Agent launch — which replaces the traditional dashboard paradigm with a conversational interface that can answer complex behavioral questions in natural language — and the InfiniGrow acquisition for AI-native marketing analytics round out a coherent vision: Amplitude as the behavioral intelligence system that sits underneath the entire product development and go-to-market stack, accessible by both humans and AI agents, continuously learning from every product interaction across thousands of companies.

Figma

Figma's Q4 2025 call was, in the context of this series, a landmark moment in the AI monetization story: a company that crossed $1 billion in annual revenue in its first year as a public company, with AI already embedded deeply enough in its customer base that it's about to pull the trigger on formal consumption-based AI pricing, not as a future roadmap item but as a March 2026 launch. That timing is specific, confident, and rare.

The financial picture is exceptional. Full-year revenue reached $1.056 billion, up 41% year-over-year and above the high end of guidance, with Q4 revenue of $34 million growing 40% and accelerating from prior periods. Net dollar retention for customers above $10,000 in ARR reached 136% — the highest level in ten quarters — and 67 customers now spend more than $1 million in ARR, up 68% year-over-year. Those are metrics of a platform that customers are not just renewing but aggressively expanding within.

The AI adoption story is what makes Figma's call stand out in this series. Most companies in our corpus are reporting AI feature usage as a leading indicator of future monetization. Figma is reporting something more specific and more immediately actionable: approximately 75% of paid customers with more than $10,000 in ARR now consume AI credits on a weekly basis — with no such customers the previous year. That's a complete transformation of the customer base's relationship with AI in twelve months. And crucially, those credits are currently included in existing subscriptions — meaning Figma has been building usage at scale without charging separately for it, and is now preparing to monetize that usage starting in March.

This is the most deliberately executed "build usage before monetizing" strategy in the series. CEO Dylan Field confirmed that the company will begin monetizing both seats and AI credit consumption in March, introducing a hybrid model not yet reflected in historical results, with pricing impact anticipated as renewals flow through. The guidance for 30% full-year revenue growth in 2026 — despite being well above consensus — is described as conservative precisely because the AI credit consumption monetization is only partially embedded in the numbers. As the renewal cycle catches up through the year, the contribution from AI credits grows progressively larger.

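To make the hybrid model concrete, here is a minimal sketch of how a seat-plus-credit bill might be computed. Figma has not disclosed its actual rates, so every parameter below (seat price, bundled credit allotment, per-credit overage rate) is hypothetical.

```python
# Illustrative sketch of a hybrid seat + consumption bill of the
# kind Figma described: a seat subscription plus metered AI credits,
# where credits up to a bundled allotment are included and only
# overage is charged. All names and rates here are hypothetical.

def hybrid_bill(seats: int, seat_price: float,
                credits_used: int, included_credits: int,
                credit_rate: float) -> float:
    """Seat subscription plus overage on AI credits consumed
    beyond the allotment bundled with the subscription."""
    overage = max(0, credits_used - included_credits)
    return seats * seat_price + overage * credit_rate

# 50 seats at a hypothetical $45/mo with 10,000 bundled credits;
# 14,000 credits consumed means 4,000 billable overage credits.
print(hybrid_bill(50, 45.0, 14_000, 10_000, 0.01))  # 2290.0
```

The design point worth noticing is that revenue only decouples from seats once usage exceeds the bundled allotment, which is why the reported 75% weekly credit consumption matters so much ahead of the March launch.
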
The product expansion story is equally important. Figma has grown from four to eight products in 2025, with Make — its collaborative design and prototyping tool — as the fastest-growing. Make's weekly active users grew over 70% quarter-over-quarter, with nearly 60% of Make files created by non-designers, and over half of paid customers above $100,000 in ARR building weekly in Make. The non-designer statistic is striking: it means Make is pulling Figma into entirely new user cohorts — product managers, engineers, marketers — who previously had no reason to be in a design tool. This dramatically expands Figma's seat potential within existing enterprise accounts and creates natural expansion vectors that don't require new logo acquisition.

The Weavy acquisition — rebranded as Figma Weave — adds AI-driven image, video, animation, and motion generation natively into the platform, extending Figma's creative surface area at exactly the moment when generative media is becoming a mainstream enterprise capability rather than a specialty tool.

The strategic positioning Field articulated on the call is one of the clearest in this series: "We always want to be in a place where as models get better, Figma gets better." This is a deliberate infrastructure-like posture — rather than competing with foundation model providers, Figma is building the application layer that benefits from every improvement in underlying AI capabilities. Better vision models make image editing more powerful. Better code generation makes Figma's design-to-code pipeline more reliable. Better reasoning makes AI-assisted design critique more useful. Figma doesn't need to win the model race — it needs to remain the design and collaboration layer that sits on top of whoever does.

The one honest tension on the call is the operating margin trajectory: full-year 2026 non-GAAP operating income guidance of $100-$110 million implies an 8% margin, down from 12% in 2025 — a deliberate investment in AI and infrastructure as the company scales. This fits the pattern we've seen across this series: the companies most confident in their AI position are the ones most willing to invest through the margin in 2026, because they believe the consumption revenue layer they're building will more than recover that investment in 2027 and beyond.

Weave

Weave's Q4 2025 call told the story of a company that serves nearly 40,000 small and medium-sized healthcare practice locations — dental offices, optometry clinics, veterinary practices, specialty medical — and is making an explicit bet that the AI era doesn't threaten its position but actually makes its platform more essential. The financial foundation, while modest in absolute scale, reflects consistent execution: Q4 revenue of $63.4 million grew 17% year-over-year with a record gross margin of 73.3% and operating income of $2.3 million, marking the 16th consecutive quarter of meeting or exceeding the high end of revenue guidance. That streak of execution, in a segment often characterized as disruption-vulnerable, is itself a form of credibility.

The AI positioning argument CEO Brett White made is one of the most direct counter-narratives to "AI threatens SMB SaaS" in this series. The conventional concern is that AI will allow SMBs to build their own solutions or switch to cheaper alternatives. White's answer is that AI actually *increases* Weave's value because it makes the platform an active participant in the labor budget of the practice, not just a communication tool: "Headcount reductions in a practice lead to even higher usage and dependency on the Weave platform. As we add agentic solutions to handle additional workflows, we capture share of wallet from the practice's labor budget historically allocated to administrative staff or call centers. This simultaneously reduces practice costs and improves ROI." That's a precise argument: as AI reduces administrative headcount, the practices that retain Weave are relying on it more heavily, not less — and Weave is now competing for the portion of the labor budget that previously went to front-desk staff, which is far larger than the software budget.

The differentiation argument centers on what Weave calls a "system of work" rather than a standalone chatbot. Weave differentiates by owning the telephony stack and the practice phone number, allowing for seamless context preservation when a call transitions to text or an AI agent. The platform acts as an orchestration layer that treats patient interactions as single persistent threads across voice, text, and AI agents — booking appointments, filing insurance, collecting payments — rather than just informing. The emphasis on owning the telephony stack is strategically important: it means Weave controls the practice's primary patient touchpoint — the phone — in a way that a bolt-on AI tool cannot replicate without displacing the entire communications infrastructure. Combined with trusted relationships and deep industry-specific workflows, that context retention enables seamless handoffs and optimized practice operations that are hard to replicate.

The TrueLark acquisition — rebranded as the foundational layer of Weave's AI Receptionist — is the product move that operationalizes this strategy. In the first half of 2026, Weave expects general availability of its omnichannel AI Receptionist across all vertical markets, enabling practices to answer calls 24/7 with an AI agent that can address common questions, request or book appointments, and intelligently hand off more complex interactions to staff — with conversations transcribed and posted into a unified inbox preserving full context across both AI agent and staff interactions. The 24/7 availability dimension is genuinely transformative for small practices: a solo dental office or veterinary clinic currently misses calls outside business hours that could be appointment bookings. An AI receptionist that captures those calls at near-zero marginal cost is a direct revenue impact that justifies the platform cost on its own.

The second-half 2026 roadmap extends into intake and payments — automated payment requests after claims adjudication and collection of co-pays and pretreatment deposits directly within scheduling flows — which connects the AI receptionist capability to Weave's fastest-growing business segment. Payments grew at more than twice the company's revenue growth rate in 2025, and the new CareCredit partnership adds patient financing integration — meaning the AI Receptionist, once fully deployed, becomes the front end of an integrated payments workflow rather than just a scheduling tool. The revenue capture opportunity from automated payment collection in a healthcare context is substantial: practices that currently rely on staff to chase co-pays and outstanding balances after visits can automate that entire workflow into the AI layer.

The honest tension in the Weave story is the NRR of 93%, which management attributed largely to lapping a prior-year price increase. It's not a crisis, but it's below the 100%+ threshold that signals a truly expansionary customer base. The AI Receptionist and payments automation roadmap are both directly targeted at closing this gap — by giving existing customers more reasons to expand their platform usage and by creating new consumption-based revenue streams that grow as practice volume grows, independent of seat-based pricing.

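For readers newer to the metric, net revenue retention is a simple ratio, and a small sketch shows why lapping a price increase drags it below 100%. The cohort figures below are hypothetical; Weave reported only the 93% result.

```python
# Net revenue retention (NRR), the metric behind Weave's 93% figure.
# It measures what a year-ago customer cohort spends today relative
# to what that same cohort spent a year ago. Numbers are hypothetical.

def net_revenue_retention(start_arr: float, expansion: float,
                          contraction: float, churn: float) -> float:
    """ARR from the year-ago cohort today, divided by that cohort's
    ARR a year ago. Below 1.0, the existing base is shrinking."""
    return (start_arr + expansion - contraction - churn) / start_arr

# e.g. a $100M cohort that expanded $8M but lost $10M to contraction
# (the comparison base was inflated by a price increase) and $5M to
# churn nets out to 93% retention:
nrr = net_revenue_retention(100.0, 8.0, 10.0, 5.0)
print(f"{nrr:.0%}")  # 93%
```

This is why the consumption-based AI Receptionist and payments streams matter: they add to the expansion term without depending on seat counts or further price increases.
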
RingCentral

RingCentral's Q4 2025 call was structured around a thesis that Founder and CEO Vlad Shmunis stated directly: the company's strong results are not an aberration but an early sign of good things to come as RingCentral transforms itself into an agentic voice AI company. That's a significant reframing for a business that investors have primarily understood as a cloud PBX and unified communications provider — and the financial evidence on the call suggested the transformation is more advanced than the market has recognized.

The financial picture has genuinely improved. Total revenue reached $2.52 billion for 2025, up 4.8% year-over-year, with subscription revenue growing 5.6%. Record free cash flow exceeded $500 million, up 32% year-over-year, translating to $5.81 per share. GAAP operating margin reached nearly 5% — the first full year of positive GAAP operating margin — and non-GAAP operating margin reached 22.5%, up 150 basis points. RingCentral also announced its first-ever quarterly dividend, a signal of capital allocation confidence that management has been working toward for years. The balance sheet cleanup is substantially complete: debt reduced by over $275 million, ending the year at 1.7x net leverage, with the remaining $609 million convertible maturity being addressed through the undrawn credit facility.

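A quick sanity check on those cash flow numbers: the stated free cash flow and per-share figure imply a share count that was not itself quoted on the call, so the result below is a back-of-the-envelope estimate.

```python
# Back out the share count implied by RingCentral's reported figures:
# free cash flow "exceeded $500 million ... translating to $5.81 per
# share". The share count itself was not stated; this derives it.

def implied_shares(fcf_millions: float, fcf_per_share: float) -> float:
    """Share count (in millions) implied by total FCF and FCF/share."""
    return fcf_millions / fcf_per_share

print(f"~{implied_shares(500, 5.81):.0f}M shares")  # ~86M shares
```
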
The AI story is the more interesting narrative, and it has two distinct dimensions. The first is the product suite RingCentral has built around voice AI — AIR (AI Receptionist), AVA (AI Virtual Assistant), and ACE (AI Conversation Expert) — which together constitute what the company calls its agentic voice AI platform. ARR from customers utilizing at least one monetized AI product has more than doubled year-over-year and is now approaching 10% of overall ARR, with pure AI ARR almost tripling year-over-year. Crucially, RCAI-utilizing customers have significantly better ARPU and stickier net retention exceeding 100%, and the new logo AI attach rate is meaningfully higher than the installed base — meaning customers who start with AI are more valuable and less likely to churn than the legacy base. That's the most important monetization signal on the call.

The second AI dimension is internal deployment. RingCentral is one of the few companies in this series that has deployed its own voice AI products internally at scale and can cite the operational results: internal adoption of the AI suite drove a 15% reduction in handle time for support agents. This is a direct efficiency dividend of the kind running through this series, but with the added credibility of being the vendor's own product eating its own cooking.

The strategic argument for why RingCentral wins in the agentic voice AI era is worth spelling out, because it's less obvious than Twilio's infrastructure play or Weave's vertical specialization. RingCentral's moat is the combination of carrier-grade telephony infrastructure, regulatory compliance across dozens of countries, and 500,000 business customers who have already trusted the platform with their mission-critical communications. The platform is model-agnostic, allowing use of various AI models as needed — utilizing OpenAI's latest voice models for real-time and post-processing transactions and continuously testing the best models for specific transaction types. This model-agnostic architecture is a deliberate competitive position: RingCentral isn't betting on any single AI provider but rather on its own orchestration layer, compliance infrastructure, and enterprise relationships as the durable differentiators.

The honest tension is the growth rate — 4-5% revenue growth guided for 2026, which is modest relative to the AI transformation narrative being articulated. 2026 subscription revenue growth guidance of 4.5%-5.5% with free cash flow guidance of $580-$600 million reflects a business that is generating substantial cash but not yet seeing the AI tailwind translate into meaningfully accelerating top-line growth. The bull case is that the approaching-10% RCAI ARR cohort, with its better ARPU and superior retention, will gradually pull up the blended growth rate as it becomes a larger share of the base — and that the new logo AI attach rate suggests the next generation of customers looks fundamentally different from the legacy installed base.

Workiva

Workiva's Q4 2025 call told a story that reads differently depending on how you frame the AI moment. The surface narrative is a steady, well-executed compliance and financial reporting software company accelerating into a strong growth rate. The deeper narrative is that the very thing making enterprises most anxious about AI — the integrity, traceability, and defensibility of AI-generated data — is exactly what Workiva is purpose-built to solve. In a world where AI is generating reports, financial narratives, ESG disclosures, and regulatory filings, the question of "how do you know this is accurate and auditable?" becomes more urgent, not less, and that question is Workiva's entire reason for existing.

The financial results were strong across every metric. Q4 revenue reached $239 million, up 20% year-over-year and exceeding the high end of guidance by $3 million, with subscription revenue growing 21%. Non-GAAP operating margin reached 19.1% for the quarter — 1,170 basis points higher than a year prior and 160 basis points above the high end of guidance. Large contract cohorts accelerated dramatically: contracts above $300,000 grew 42% year-over-year, and contracts above $500,000 grew 37% — both well ahead of overall revenue growth. The enterprise concentration of growth, combined with 74% of subscription revenue coming from multiproduct customers (up from 70% a year prior), reflects a platform that customers are expanding within rather than treating as a point solution.

The AI positioning is nuanced and specific in a way that distinguishes Workiva from most companies in this series. Rather than building AI features that generate content or automate workflows, Workiva's AI thesis centers on what CEO Julie Iskow described as the demand for "trusted connected data" — the ability to trace any number, narrative, or disclosure back through an auditable chain of custody to its source. "As AI adoption accelerates, the demand for trusted connected data will only grow. We believe this shift is a durable long-term tailwind for Workiva." The logic is straightforward: if an AI system generates a sustainability report or an SEC filing, the CFO, auditors, and regulators need to know that every figure in that report can be verified, traced to source systems, and defended under scrutiny. Workiva's connected data platform, with its linked spreadsheets, documents, and data sources, is the infrastructure that makes that verification possible.

Nearly 30% of customers have now enabled AI platform features — with adoption driven by the recognition that Workiva's AI operates in an environment that is safe, secure, and applies across the full portfolio of solutions, without requiring add-ins or plugins. The embedded nature of Workiva's AI — already inside the platform rather than bolted on from outside — is a significant practical differentiator in a compliance context where security and data governance concerns are paramount. A CFO considering AI for financial reporting doesn't want to route sensitive financial data to an external AI service; they want AI that works within the controlled, permissioned environment where their financial data already lives.

The sustainability and ESG dimension is worth noting separately. Workiva built its ESG reporting platform over several years, positioning it ahead of what appeared to be mandatory global disclosure requirements. While the US regulatory environment has moderated under the current administration, European ESG reporting requirements (CSRD) remain active and demanding, and international revenue grew to 27% of total revenue, up 300 basis points year-over-year — suggesting Workiva is capturing the European compliance wave even as US ESG mandates have softened. The GRC (governance, risk, and compliance) portfolio adds another dimension: as AI systems become more embedded in enterprise operations, the governance and risk management of those systems becomes a compliance requirement in its own right.

The 2026 guidance reflects genuine confidence: subscription revenue growth of approximately 19%, non-GAAP operating margin of 15%-15.5% (up 560 basis points at the top end), and free cash flow margin of approximately 19% — a combination of growth acceleration and margin expansion that is unusual and speaks to operational leverage finally materializing at scale. The new leadership hires across CRO, CPO, and CFO roles suggest the company is preparing for a larger-scale growth motion, not a steady-state optimization.

Five9

Five9's Q4 2025 call was, in several ways, the most honest accounting of the CCaaS industry's AI inflection point in this series. Outgoing CEO Mike Burkland — stepping down after 18 years, having grown the company from $10 million to a $1.2 billion run rate — used his final earnings call to make a comprehensive case for why Five9 is positioned at the center of what he described as "the early stages of a long-term transition in the CX industry." The transition he described involves two simultaneous growth vectors: the traditional CCaaS market growing at a 9% CAGR, and the generative AI customer service market projected to grow at a 34% CAGR, together constituting a combined $48 billion annual spend by 2029.

The financial results were solid. Q4 revenue reached $300 million, up 8% year-over-year, with subscription revenue growing 12% and representing 82% of total revenue. Adjusted EBITDA margin reached 26% — up 260 basis points year-over-year — and free cash flow more than doubled year-over-year to $67 million, a 22% margin. Full-year 2025 marked Five9's first year of positive GAAP EPS at $0.45 per diluted share, a milestone in the company's profitability journey.

The AI story is where the call gets structurally interesting. Enterprise AI revenue surpassed $100 million in annual run rate and grew approximately 50% year-over-year in Q4, accelerating from 41% growth in Q3. That acceleration — and management's explicit statement that it's visible in the bookings pipeline — is the central forward-looking signal. Enterprise AI bookings more than doubled year-over-year, and new CEO Amit Mathradas confirmed that both greenfield AI deals and installed-base AI penetration are growing simultaneously, which matters because it means Five9's existing 3,000+ enterprise customers represent a massive expansion opportunity even without new logo additions.

Five9's specific positioning in the AI contact center market is worth understanding precisely. The company's argument is not that it builds AI models — it doesn't; it partners with Google Cloud, Salesforce, ServiceNow, and others — but that it owns the orchestration and conversational data layer that makes AI agents effective in a contact center context. Contact center AI is harder than it looks: an AI agent handling a customer call needs to access live CRM data, understand the customer's history and intent, navigate complex escalation logic, comply with recording and disclosure regulations, integrate with billing and ticketing systems, and hand off seamlessly to a human agent when needed. Five9's platform has spent years solving exactly those integration and orchestration challenges for enterprise clients, and that operational complexity is what generic AI providers cannot replicate quickly. The Google Cloud partnership was described by Burkland as "very significant for Five9," positioning Five9 as the enterprise orchestration layer on top of Google's AI capabilities — similar to how other companies in this series are positioning themselves as the trusted execution layer beneath AI.

The honest structural challenge on the call is the one that makes Five9's story genuinely complex: AI voice agents that can handle routine contacts may reduce the number of human agents that enterprises need, which could reduce seat-based CCaaS revenue even as consumption-based AI revenue grows. Management acknowledged this directly, framing it as a transition from seat-based to usage-based revenue that they believe will be net positive — AI agents handling more volume at lower cost per interaction, driving total platform spend up even as the per-seat model evolves. The 2026 guidance projects that revenue will be back-half weighted with a return to double-digit growth in H2, driven by bookings backlog converting to revenue from both AI and core CCaaS.

The leadership transition from Burkland to Mathradas adds execution risk in the near term — new CEOs take time to put their stamp on a platform strategy — but Mathradas's Q4 framing was confident and product-oriented, focusing on AI, partner engagement, and operational execution. More than 80% of Five9's business is partner-influenced, and the company doubled the number of partners certified to implement Five9 services in 2025 — a distribution investment that pays off as enterprise AI deployments require the kind of systems integration expertise that only a deep partner ecosystem can provide at scale.

Alarm.com

Alarm.com's Q4 2025 call was the most pragmatic AI conversation in this series, and deliberately so. CEO Steve Trundle has built a $1 billion revenue platform on a thesis of steady, durable growth through deep service provider partnerships rather than venture-style growth narratives, and his treatment of AI on this call reflected that same measured disposition: AI is a meaningful product enhancer and a competitive differentiator in specific workflows, but it doesn't change Alarm.com's fundamental device-based SaaS model or create the kind of existential business model disruption that some of its peers are navigating.

The financial milestone first. Full-year 2025 SaaS and license revenue grew 9.2% to $689.4 million, and total revenue exceeded $1 billion for the first time — a meaningful threshold for a company that has grown quietly and consistently for over a decade. Adjusted EBITDA is guided to $213-$215 million for 2026, with total revenue guidance of $1.058-$1.065 billion including the Resideo Grid Services acquisition. The business model is notable in this series for its structure: Alarm.com doesn't sell to end customers directly — it sells through thousands of independent security dealer and service provider partners who install and maintain systems in homes and businesses. That channel model creates a natural flywheel: as partners add subscribers, Alarm.com collects recurring SaaS revenue without the customer acquisition cost of direct-to-consumer businesses.

The AI story at Alarm.com is specifically about computer vision and video analytics applied to physical security — a domain where AI creates genuinely measurable, demonstrable value. Management clarified that AI serves as a productivity and feature enhancer — for example, proactive deterrence — rather than a threat to their device-based SaaS pricing model. The key product advancement is AI-driven proactive deterrence: instead of recording a break-in for forensic review after the fact, AI video analytics can identify a suspicious person approaching a property in real time and trigger an automated deterrence response — a floodlight activation, a speaker warning, an alert to a remote monitoring station. Trundle described central-station-augmented remote video monitoring as a key growth driver because it shifts the value proposition from forensic review to deterrence, which is a much higher-value outcome for the property owner and justifies the premium subscription tier.

The commercial Prism Series camera lineup delivers higher-resolution imaging, color video at night, two-way audio, and support for premium video analytics including AI-driven proactive deterrence and central-station remote video monitoring — with more than 2 million active video cameras and devices deployed across Alarm.com's commercial property base. The video attachment rate in the installed base is growing steadily, and each camera upgrade tends to prompt a package upgrade to a higher SaaS tier — creating an ARPU expansion dynamic that Trundle described as roughly 70-75% of growth coming from ARPU rather than new subscriber additions.
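The ARPU-led growth mix Trundle described can be sanity-checked with simple arithmetic. The sketch below uses the figures from the call (9.2% SaaS and license revenue growth, roughly 70-75% of it from ARPU); taking the midpoint of that range is our assumption, not a disclosed metric:

```python
# Back-of-envelope decomposition of Alarm.com's 2025 SaaS growth into
# ARPU expansion vs. new-subscriber additions. The 9.2% growth figure
# and the "roughly 70-75% from ARPU" split come from the call; the
# midpoint split is our illustrative assumption.
total_growth = 0.092   # full-year SaaS & license revenue growth
arpu_share = 0.725     # midpoint of the 70-75% range (assumption)

arpu_driven = total_growth * arpu_share
subscriber_driven = total_growth - arpu_driven

print(f"ARPU-driven growth:       {arpu_driven:.1%}")
print(f"Subscriber-driven growth: {subscriber_driven:.1%}")
```

On these assumptions, roughly 6.7 points of the 9.2% growth came from existing subscribers paying more, and only about 2.5 points from net new subscribers — a useful reminder of how much of the model is upsell rather than acquisition.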

 

The EnergyHub story is structurally distinct and worth separating from the security business. EnergyHub manages grid-edge flexibility for utilities — essentially, it controls distributed energy resources (thermostats, EV chargers, batteries, water heaters) on behalf of utilities to balance grid load during peak demand periods. Connected devices under management grew 50% year-over-year, and EnergyHub currently serves utilities covering 50 million of the 130 million meters in North America, with a current enrollment rate of only 5% — with management seeing a path to 10% as device categories expand beyond thermostats. The AI angle here is in the optimization algorithms that predict demand response needs and coordinate device behavior across millions of endpoints simultaneously — a real-time AI orchestration problem at physical infrastructure scale that is both technically demanding and highly defensible.
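The meter figures above translate into a concrete sizing exercise. A back-of-envelope sketch using the call's numbers (50 million of 130 million North American meters served, roughly 5% enrolled today, a stated path to 10%); the calculation is ours, not management's:

```python
# Rough sizing of EnergyHub's enrollment opportunity using figures
# from the call. These are our illustrative calculations on disclosed
# numbers, not company-reported metrics.
meters_na = 130_000_000       # total North American meters
meters_served = 50_000_000    # meters covered by EnergyHub utilities
enrollment_now = 0.05         # current enrollment rate
enrollment_target = 0.10      # management's stated path

enrolled_now = meters_served * enrollment_now
enrolled_target = meters_served * enrollment_target

print(f"Enrolled today:  {enrolled_now:,.0f} meters")
print(f"At 10% target:   {enrolled_target:,.0f} meters")
print(f"Share of all NA meters today: {enrolled_now / meters_na:.1%}")
```

Even hitting the 10% target only enrolls about 5 million of 130 million North American meters, which is why management frames expanding device categories beyond thermostats as the growth lever.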

 

What distinguishes Alarm.com in the context of this series is its explicit acknowledgment of what AI doesn't change. Unlike several companies that are repositioning their entire identity around AI, Trundle's framing was consistent: AI makes specific features better (video analytics, deterrence, energy optimization), but the platform's durability comes from the service provider channel, the hardware ecosystem, and the subscriber relationships — all of which AI enhances but doesn't replace. For investors concerned about AI disruption, that's a reassuring posture from a management team that has earned credibility through consistent execution over many years.

 

Appian

Appian's Q4 2025 call was built around a thesis that CEO Matthew Calkins has been developing for years but that has become newly compelling in the AI era: that AI without process is chaotic, and that the value of Appian's platform isn't in competing with AI models but in giving those models the structure, governance, and orchestration they need to operate reliably in enterprise environments. Calkins stated it directly: "AI needs process, workflow is essential." That's a deceptively simple claim that has significant implications for how enterprise AI deployment actually works in practice.

 

The financial results were strong. Q4 total revenue grew 22% year-over-year to $202.9 million, cloud subscription revenue grew 18% to $117 million, and adjusted EBITDA came in at $19.7 million — above guidance. Customers with $1 million-plus ARR grew from 115 to 140 year-over-year. The quality of growth is shifting upmarket in a way that matters: seven-figure software deals were secured with a European pharmaceutical research organization and a North American aerospace manufacturer, the latter forecasting $60 million in three-year savings. Those are the kinds of quantified ROI commitments that validate premium pricing and justify multi-year enterprise contracts.

 

The headline deal was the $500 million, ten-year enterprise license agreement with a U.S. Army branch, covering deployment to more than 100,000 users. This is the largest deal in Appian's history and a landmark for the public sector strategy. The Army ELA is driven by the need to modernize legacy applications, and it allows Appian to engage with any part of the organization and other government departments. For a platform company, an ELA of this scale creates a franchise: once Appian is the standard for low-code application development across a military branch, every new workflow automation and AI-assisted process that the Army builds becomes an Appian opportunity. The public sector moat in this context is identical to the enterprise moat we've seen throughout this series — compliance requirements, data governance, security clearances, and the institutional knowledge embedded in the platform make switching extraordinarily costly.

 

The AI adoption metric was striking: AI use on the Appian platform increased 14 times year-over-year. That's the kind of growth rate that signals genuine product-market fit acceleration rather than incremental adoption. The specific way Appian embeds AI is worth understanding — it's not building a standalone AI assistant or copilot, but integrating AI reasoning and action capabilities directly into process workflows. An enterprise using Appian can build a process that routes a document, uses an AI model to classify and extract information from it, makes a governance-enforced decision based on that extraction, notifies the right people, and creates an audit trail — all within a single platform with complete data lineage. That's fundamentally different from dropping a chatbot on top of a legacy process.
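The single-platform workflow described above (route, classify, decide under governance, notify, audit) can be sketched in a few lines. This is an illustrative pattern only; every name here is hypothetical and nothing below reflects Appian's actual API:

```python
# Illustrative sketch of the governed-workflow pattern: classify a
# document with an AI model, apply a governance-enforced decision rule,
# notify the owning team, and keep a complete audit trail. All names
# are hypothetical; the classifier is a stand-in for a real model call.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditTrail:
    events: list = field(default_factory=list)

    def log(self, step: str, detail: str):
        # Every workflow step is timestamped, giving full data lineage.
        self.events.append((datetime.now(timezone.utc).isoformat(), step, detail))

def classify(document: str) -> str:
    # Stand-in for an AI model that classifies/extracts from the document.
    return "invoice" if "invoice" in document.lower() else "other"

def process_document(document: str, trail: AuditTrail) -> str:
    doc_type = classify(document)
    trail.log("classify", f"model labeled document as '{doc_type}'")

    # Governance-enforced decision: only whitelisted types are auto-routed;
    # everything else escalates to a human rather than acting unchecked.
    if doc_type == "invoice":
        decision = "route_to_accounts_payable"
    else:
        decision = "escalate_to_human_review"
    trail.log("decide", decision)

    trail.log("notify", f"sent '{decision}' notification to owning team")
    return decision

trail = AuditTrail()
outcome = process_document("Invoice #1042 from Acme Corp", trail)
print(outcome)  # route_to_accounts_payable
for event in trail.events:
    print(event)
```

The point of the pattern is the part a standalone chatbot lacks: the model's output never acts directly — it flows through an explicit decision rule and leaves an auditable record at every step.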

 

Appian is also using AI internally, both to enhance its own development capabilities and within every deployment — for acceleration, optimization, and recommendations — resulting in Appian applications that give AI structure and safety. The internal AI adoption reinforces the external product story: Appian is building what it sells, and the operational results of that internal deployment feed directly into the product development roadmap.

 

The professional services growth of 36% in Q4 deserves attention as a signal about where enterprise AI deployment actually is. The CFO attributed strong professional services growth to demand for AI implementation and changes in federal government dealings. AI deployments in regulated industries don't configure themselves — they require deep integration work, process design, data governance setup, and compliance validation. Appian's professional services business is growing because customers are buying the implementation expertise alongside the platform, which is a sign that these are serious, production-grade AI deployments rather than pilots. It also means Appian has visibility into how its customers are actually using AI in practice — intelligence that feeds back into the product and creates a competitive advantage that generalist software vendors can't easily replicate.


Sammy Abdullah

Managing Partner & Co-Founder
