Blossom Street

The Impact of AI on 24 Publicly Traded SaaS Companies

by Sammy Abdullah

We’re reviewing the earnings calls of 81 publicly traded SaaS companies to understand the impact AI is having on SaaS. Below we summarize the key take-aways of 24 of those earnings calls: Snowflake, Salesforce, MongoDB, ServiceTitan, ServiceNow, Microsoft, Appfolio, Palantir, Atlassian, Paylocity, Doximity, Qualys, Bill.com, Zoominfo, Dynatrace, Monday.com, Blackbaud, Cloudflare, Freshworks, Klaviyo, Datadog, Q2 Holdings, Shopify, and Hubspot. After the take-aways below, you will see full ~5 paragraph summaries of each earnings call. We will release more blogs like this as we do more work.

 

The big take-aways:

 

AI is not profitable yet. The goal for the moment is driving margin-neutral revenue. Microsoft, Salesforce, ServiceNow, and nearly every other company described margin pressures from deploying AI product. AI workloads are just very expensive at the moment.

 

Direct revenue from AI is nascent for most. It is non-existent for some players like Doximity, or still in its very early stages but growing fast, as at Freshworks. That said, enterprise customers do seem to be adopting and benefiting from the new AI products built by their already critical software providers like Datadog, Salesforce, and Monday.com. There is also a category of companies that are building usage before monetizing. No company we’ve observed is making AI product a significant part of any forecast yet, even though they talk extensively about positive AI impact.

 

Monetization is changing. The pricing conversation for AI is largely moving from "per seat" to "per outcome" or "per agent deployed," although some, like Atlassian, are sticking hard to the per-seat model.

 

Infrastructure for AI versus AI products. Some, like Snowflake, ServiceNow, MongoDB, Datadog, and Cloudflare, will win because AI is supported infrastructurally on their platforms. Some, like Dynatrace, believe AI makes their product more compelling than ever (in their case it’s observability and monitoring of AI). Others, like ServiceTitan and Salesforce, are building AI product which actually executes context-aware actions. AI as an assistant or Copilot has been de-emphasized.

 

Trust is an issue for AI. Doximity has stopped releasing AI product until they get it perfect, because it’s not OK for AI to misdiagnose a patient. Qualys makes a similar point in cybersecurity: being the agentic remediation layer requires a level of trust that generic AI tools can't establish. AI errors in healthcare and cybersecurity are near unacceptable, and thus the bar for accuracy is higher.

 

Performance is strong. Quite a few of these companies, like MongoDB, Cloudflare, ServiceNow, and Datadog, closed some of their largest deals ever in Q4. Others, like Atlassian, had record quarters. Those companies that are selling AI products to their customers report better growth and retention among those customer cohorts. On the other hand, there are companies like Zoominfo experiencing serious disruption, with growth falling to near zero.

 

The moat is very high. Moats for enterprise software are being built around non-public and sensitive historical customer data, workflows, governance, security, integrations, compliance, and operational knowledge of the existing customer. AI cannot stand alone, but rather needs to sit on top of software which manages all the above in an enterprise-friendly manner. That said, for SMB customers, which have much less internal complexity and are an easier lift, AI is becoming a real threat to incumbent software; Monday.com and Zoominfo cited this in their SMB customer bases. SMBs have simpler workflows, less institutional complexity, and lower switching costs.

 

Internal AI improves overall margins. Many of the companies themselves, such as Blackbaud and Klaviyo, are using AI in their own operations to improve margins. For SaaS, the value of AI is a margin expansion story as much as a revenue growth story, and the market is underweighting both. And given that almost every company built minimal to zero AI revenue into forward guidance, the opportunity for outperformance and multiple expansion is significant.

 

The ~5 paragraph summaries of actual calls are below. Revenue multiples in real time can be seen for all these companies at https://www.softwaremultiples.com/. Also visit https://www.blossomstreetventures.com/ for detailed financials and metrics data for all these companies.

 

Snowflake

Snowflake's Q4 call told a story of a company in genuine transition, moving from being the place enterprises store and query data to being the platform where they actually build and run AI. The numbers: product revenue grew 30% year-over-year, driven primarily by AI workloads, and accounts using AI features rose to more than 9,100, with Snowflake Intelligence, their flagship agentic product, scaling to over 2,500 accounts, nearly doubling quarter-over-quarter.

 

What makes Snowflake's AI story somewhat distinct from peers is that AI is benefiting the business on both sides of the income statement. On the revenue side, larger and more strategic deals are getting done; Snowflake signed the largest deal in company history at over $400 million in total contract value, and closed seven nine-figure contracts in the quarter versus two in the same period last year. On the cost side, management reported 40% to 50% higher project margins and compressed delivery cycles through their own internal use of Snowflake Intelligence and Cortex Code, a credible proof point that software companies will be big beneficiaries of AI.

 

For customers, the value proposition is centered on two products. Cortex Code is used by over 4,400 customers, enabling faster development and deployment of AI workloads. Snowflake Intelligence, meanwhile, is the more agentic play, allowing enterprises to build AI-native workflows directly on top of their existing Snowflake data, without moving it elsewhere. The strategic logic is powerful: enterprises already trust Snowflake with their most sensitive data, and Snowflake is betting they'll prefer to run AI on top of that data in place rather than pipe it out to a separate system.

 

The gross margin picture is the caveat. Management acknowledged that newly launched AI products currently carry lower margin profiles, and that margin expansion remains a near-term priority as efficiency improvements are realized. This is the same tension playing out across the AI infrastructure sector; serving AI workloads is compute-intensive, and the economics improve over time but aren't fully mature yet. Snowflake guided FY27 product gross margin at 75%, roughly flat with FY26's 75.8%.

 

The bigger strategic bet is on ecosystem. Snowflake expanded partnerships with Anthropic, OpenAI (a $200 million expansion), and Google Cloud, giving customers native access to leading AI models directly within the platform. Rather than picking a single model winner, Snowflake is positioning itself as the neutral data layer that works with all of them, a benefit to its enterprise customers who don't want to be locked into a single AI vendor. If that positioning holds, Snowflake doesn't just benefit from the AI wave; it becomes critical infrastructure beneath it.

 

Salesforce

Salesforce is at an inflection point with AI. Agentforce, Salesforce's agentic AI platform, closed 29,000 deals in its first 15 months, with marquee enterprise customers like Amazon, Ford, AT&T, and Moderna signing on. Deals over $1 million were up 26% year-over-year, suggesting that AI may be driving larger, more strategic contracts.

 

What's most notable is how Salesforce is trying to reframe what AI does for customers. Rather than talking about AI in terms of features or capabilities, Benioff introduced a new metric, Agentic Work Units (AWUs), to show that AI agents on the platform are completing 2.4 billion discrete units of actual work: updating records, triggering workflows, making decisions. The message was deliberate: this isn't AI that thinks or suggests, it's AI that executes.

 

On the revenue side, Agentforce ARR hit ~$800 million, up 169% year-over-year, and the broader Agentforce and Data Cloud bundle crossed $2.9 billion ARR, up over 200%. Salesforce is monetizing AI through a mix of premium SKUs, new seat additions, and consumption-based flex credits, which gives them multiple vectors to capture value as usage scales.

 

The candid note was around gross margins. Token costs remain a real input expense, and Salesforce acknowledged it's working to optimize efficiency to keep AI gross margins neutral in the near term. AI offerings are not profitable at the moment.

 

The bigger strategic picture is that Salesforce is betting AI transforms it from a system of record into a system of action, where agents handle customer service, sales workflows, and operational decisions autonomously. If that holds, it strengthens their competitive moat considerably and raises switching costs for existing customers.

 

MongoDB

MongoDB is positioning to be the foundational data layer for the AI era, and the company's job is to make sure the world knows it. Customers are excited about MongoDB's platform strength and its ability to serve as an integrated data layer for AI agents, combining search, vector search, and embeddings in a single offering.

 

The most important thing to understand about MongoDB's AI story is where they sit in the stack. Unlike Salesforce or Braze, MongoDB isn't selling AI applications to end users. It's providing the data infrastructure that AI applications run on. At its flagship MongoDB.local San Francisco event, the company announced the integration of its core database with embedding and reranking models from Voyage AI, creating a unified data intelligence layer for production AI, allowing developers to build sophisticated applications at scale with reduced hallucination risk and no requirement to move or duplicate data. That no-data-movement angle is a meaningful competitive argument: enterprises with sensitive data in MongoDB don't have to copy it to a separate vector store or AI pipeline, which simplifies architecture and lowers risk. For large financial institutions and regulated industries, who are among MongoDB's most strategic customers, that matters enormously.

 

The candid acknowledgment from management, however, is that AI is not yet a material driver of results, though they are encouraged by the growth they are seeing with customers leveraging AI capabilities. The number of customers using vector search and Voyage embedding models nearly doubled year over year, and AI natives, digital natives, and large enterprises are all contributing to the growth across the customer base.

 

What's actually driving the strong financials right now is a combination of AI-adjacent tailwinds and MongoDB's core enterprise momentum. In Q4 they signed an approximately $90 million deal with a large tech company planning to expand both core and AI workloads on Atlas, and a greater than $100 million deal with a large financial institution for Enterprise Advanced, the largest total contract value deal in company history. The significance of that financial institution deal is hard to overstate: large banks are notoriously conservative with infrastructure decisions, and a nine-figure commitment to MongoDB speaks to how seriously enterprises are treating their data platform choices in the context of AI readiness.

 

AI actually amplifies the case for MongoDB's multi-model, developer-friendly architecture. As enterprises move from traditional transactional applications toward AI-powered agentic workflows, the data requirements become more complex, requiring search, vector similarity, document storage, and real-time access all in one place. MongoDB's goal is to become the generational data platform of choice in the AI and multi-cloud era, and if AI application development scales the way most expect, the demand for an integrated, performant, cloud-native data layer like Atlas should grow with it. The risk is that the revenue inflection from AI workloads takes longer to materialize than investors hope, but the Q4 numbers suggest the underlying business is strong enough to carry that bet comfortably while AI catches up.

 

ServiceTitan

 

The AI story here is unusually concrete. Where most SaaS companies are talking about AI in terms of platform strategy and future optionality, ServiceTitan is reporting actual customer outcomes from a live product, and those numbers are striking.

 

The centerpiece is Max, which ServiceTitan is positioning not as an AI feature but as an agentic operating system for the trades, meaning it's designed to autonomously orchestrate end-to-end workflows across demand generation, dispatch, quoting, payments, and back-office operations for HVAC, plumbing, electrical, and other field service businesses. The first deployment cohort of Max customers saw a 50% increase in average ticket size, one customer achieved over 50% revenue growth in a single month, and another increased EBITDA margin from 18% to 30% while reducing office headcount. And management went further: customers on Max will roughly double their monthly subscription revenue when fully ramped, an effect not driven by technician expansion, meaning the value is coming from operational efficiency and higher revenue capture per job, not simply from adding more workers.

 

What makes ServiceTitan's AI positioning particularly defensible is the data moat underlying it. The company leverages proprietary structured data from over $80 billion in annual transaction volume to drive automation and outcome improvements. This is a critical competitive point that CEO Ara Mahdessian leaned into when analysts asked about AI-native startups competing for the same customers. A point-solution AI tool built for the trades can optimize one workflow, but it has no context on how that workflow connects to the rest of the business. ServiceTitan's integrated platform, spanning marketing, scheduling, dispatch, invoicing, and payroll, creates a contextual data layer that generic AI tools simply can't replicate from the outside.

 

The second AI product gaining traction is Virtual Agents: AI-based modules that handle inbound call management and appointment booking, especially during call surges or after normal business hours. For a trades business, missed calls during peak season or after hours are direct revenue losses; a potential customer who can't book gets routed to a competitor. Virtual Agents directly plug that gap, and because it's priced as a consumption product, it creates a new usage revenue stream that management believes could grow faster than GTV in fiscal 2027.

 

The honest caveat is that Max is still early-stage and capacity-constrained. ServiceTitan plans to double Max capacity in Q1 FY2027, with scaling tied to onboarding efficiency and customer success, which is a signal that the bottleneck right now is deployment and training, not demand. To accelerate on all fronts, the company hired Abhishek Mathur from Figma, Meta, and Microsoft as Chief Technology and Product Officer, specifically to drive organizational and technology velocity around AI initiatives. FY2027 is also expected to represent the company's largest R&D investment ever, with explicit focus on AI inference and internal tooling. The trades are not typically an industry associated with cutting-edge software, which is precisely why ServiceTitan's moat, both in data and in customer relationships, could prove so durable as AI raises the stakes for what field service software can actually do.

 

ServiceNow

If there's one company in enterprise software that has the most fully formed AI story right now, it's ServiceNow. CEO Bill McDermott came into this call on offense, explicitly addressing the "AI will eat software" narrative that has spooked investors across the sector and flipping it on its head. His argument was direct: enterprise AI will be the largest driver of return on the multitrillion-dollar super cycle of AI infrastructure investment, and the real payoff comes when tokens move beyond pilots and get embedded directly into the workflows where business decisions are made, with ServiceNow serving as the semantic layer that makes AI ubiquitous in the enterprise.

 

The numbers behind Now Assist, ServiceNow's AI product suite, are the most concrete AI monetization metrics in this cohort. Now Assist ACV surpassed $600 million, more than doubling year-over-year in Q4, with deals over $1 million nearly tripling quarter-over-quarter and 35 such deals closing in Q4 alone. Enterprises aren't just adding AI as a line item but are making meaningful commitments to it within the ServiceNow platform. The AI control tower, which allows enterprises to govern and orchestrate AI agents across the business, grew over 4x its 2025 targets, another signal that customers are moving from individual AI use cases to thinking about AI management as a platform-level problem.

 

What ServiceNow is really selling is AI governance as much as AI capability. As enterprises deploy more agents, someone has to be in charge of orchestrating them, monitoring them, and ensuring they don't go rogue or create compliance exposure. ServiceNow is positioning its platform as that orchestration layer, what they call the "universal agentic network" built on MCP and Workflow Data Fabric. Monthly active users grew 25%, and the number of workflows and transactions processed on the platform each increased over 33%, reaching 80 billion workflows and 6.4 trillion transactions.

 

ServiceNow doesn't want to bet on any single model winning; rather, it wants to be the workflow layer that works with all of them, insulating itself from model commoditization while capturing value from enterprise adoption regardless of which AI providers customers prefer.

 

The honest tension in the ServiceNow story is pricing and gross margin. Management acknowledged some gross margin headwinds from hyperscaler and AI infrastructure choices, and the shift to a hybrid pricing model, combining traditional subscription with consumption-based AI usage, introduces some revenue variability that investors are still getting comfortable with. But with $15.5 billion in subscription revenue guided for 2026 at 19-20% growth, a 32% operating margin, and 36% free cash flow margins, ServiceNow is arguably the most financially powerful pure-play on enterprise AI adoption in the market right now. The core business is robust enough that they can absorb the cost of building out AI infrastructure while competitors are still figuring out their strategy.

 

Microsoft

Microsoft's call was a declaration that AI is now the gravitational center of the entire business. Satya Nadella opened with a striking statement: "We are only at the beginning phases of AI diffusion and already Microsoft has built an AI business that is larger than some of our biggest franchises." When you consider that Microsoft's "biggest franchises" include Office, Windows, and Xbox, each a multi-billion dollar business, that's an extraordinary claim. And the numbers behind it are hard to argue with: Microsoft Cloud crossed $50 billion in quarterly revenue for the first time, up 26%, while Azure grew 39%, the acceleration driven explicitly by AI workloads.

 

The AI story at Microsoft operates on three distinct layers, each reinforcing the others. The first is infrastructure. Capital expenditures hit $37.5 billion in the quarter, with roughly two-thirds allocated to short-lived assets like GPUs and CPUs. Management introduced a new internal metric, tokens per watt per dollar, as their guiding optimization target for AI infrastructure, and reported a 50% increase in throughput on OpenAI inferencing due to infrastructure advances.

 

The second layer is the Copilot product suite, where AI is directly touching customers at scale. Microsoft 365 Copilot reached 15 million paid seats, with over 160% seat growth year-over-year and a tripling of the number of customers with over 35,000 seats, a signal that enterprise adoption is moving from departmental pilots to company-wide deployments. Average user conversations doubled and daily active users grew 10x. GitHub Copilot reached 4.7 million paid subscribers, up 75% year-over-year, with individual Copilot Pro Plus subscriptions growing 77% sequentially.

 

The honest tension is on margins. Gross margins declined slightly, driven by continued AI infrastructure investments and growing AI product usage, and management guided for operating margins to be down slightly year-over-year in Q3. Microsoft is, in effect, spending today to capture a revenue curve that extends well into the next decade, and the Q2 numbers suggest that curve is steeper than almost anyone expected.

 

Appfolio

AppFolio's Q4 call told a quietly compelling story for understanding how AI actually changes the economics of a vertical SaaS business. AppFolio serves property managers, a customer base that has historically been underserved by software and skeptical of hype. The fact that 98% of AppFolio's customers are already actively using one or more AI capabilities included in the platform, against an industry backdrop where half of AI users in property management report they can't actually rely on the AI features in their core system, is a striking market differentiation signal. It suggests AppFolio has threaded a needle that most vertical SaaS companies are still struggling with: making AI functional and trusted at the ground level.

 

AppFolio is repositioning its platform around three layers: a system of record, a system of action, and a system of growth, with agentic AI embedded directly into daily operations and the explicit goal of enabling customers to evolve from property managers to performance managers. That reframing reflects a fundamental shift in what AppFolio is selling: not software that helps you manage properties, but a platform that actively improves the financial performance of your property management business. Adoption of premium tiers has already exceeded 25%, a meaningful indicator that customers are upgrading to capture AI-driven capabilities.

 

The business impact of AI is showing up directly in AppFolio's financials. Non-GAAP operating margin expanded to 24.9% in Q4, up from 20.2% a year earlier, a nearly 500 basis point improvement that reflects both revenue growth and the operating leverage that comes from AI-driven efficiency gains across the platform itself.
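As a quick sanity check on that basis-point arithmetic (the snippet and its helper function are illustrative, not from the call): a basis point is one hundredth of a percentage point, so the move from 20.2% to 24.9% works out as follows.

```python
# A basis point (bps) is 1/100 of a percentage point.
def margin_change_bps(prior_pct: float, current_pct: float) -> float:
    """Change in margin between two periods, expressed in basis points."""
    return round((current_pct - prior_pct) * 100, 1)

# AppFolio non-GAAP operating margin: 20.2% a year earlier -> 24.9% in Q4.
print(margin_change_bps(20.2, 24.9))  # 470.0, i.e. "nearly 500 basis points"
```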

 

What makes AppFolio particularly interesting from an investment lens is that their AI advantage is compounding in a way that's structurally hard to replicate. 45% of survey respondents in the property management industry say they plan to consolidate their software solutions, and AppFolio is the natural consolidation destination, precisely because they've embedded AI deeply enough that customers would lose significant operational capability by leaving. The moat here isn't just the product; it's the AI-trained workflows, the resident data, and the operational patterns that accumulate the longer a customer stays on the platform. For a vertical SaaS business approaching $1 billion in revenue, that's a durable competitive position.

 

Palantir

Palantir's Q4 2025 call was unlike any other earnings call in enterprise software this cycle, partly because of the numbers, which were objectively extraordinary, and partly because Alex Karp delivers earnings calls the way a wartime general addresses troops, not the way a CFO addresses analysts. But strip away the theater and what you find is a company that has, almost overnight, become one of the defining stories of what enterprise AI actually looks like when it works at production scale.

 

The headline numbers require context to be fully appreciated. Q4 revenue grew 70% year-over-year, the highest growth rate since Palantir went public, representing a 3,400 basis point acceleration versus Q4 of the prior year. This isn't a company maintaining a high growth rate; it's a company where the growth rate itself is accelerating sharply. Full-year 2025 revenue grew 56%, and the company is guiding full-year 2026 revenue of $7.19 billion, representing 61% growth.

 

The company closed 61 deals greater than $10 million in the quarter, and management cited multiple examples of customers signing $80 to $96 million contracts within months of initial engagement, a compressed deal cycle that reflects genuine organizational conviction.

 

The US vs. international divergence on the call was striking and strategically revealing. US revenue grew 93% year-over-year and 22% sequentially, while international commercial revenue grew just 8% year-over-year, a massive gap that management was candid about.

 

The government side of the business is also worth understanding in the context of AI impact. US government revenue grew 66% year-over-year, driven in part by a massive $10 billion Army software contract signed last summer and a $448 million Navy contract for shipbuilding supply chain modernization. Karp was unambiguous about what these contracts represent: not just data analytics or workflow tools, but AI systems that are actively changing the operational capabilities of the US military. He argued that AI implementations in the defense context have changed what warfighters are able to do.

 

The risk most frequently raised by skeptical analysts on the call is whether the US commercial acceleration is sustainable or whether it represents a concentrated burst of pent-up demand that will moderate. Palantir's answer is essentially that AI-driven enterprise transformation is still in very early innings, that their pipeline of committed deal value reached $11.2 billion (up 105% year-over-year), and that the real constraint on growth is their own capacity to onboard and deliver for customers, not demand.

 

 

Atlassian

CEO Mike Cannon-Brookes has been saying "AI is the best thing that's ever happened to Atlassian" for several quarters, and this quarter he finally had the numbers to fully back it up. Atlassian delivered its first-ever $1 billion cloud revenue quarter, with cloud up 26% year-over-year, and surpassed $6 billion in annual run rate revenue, a milestone that felt like a culmination of years of patient investment in infrastructure that is now paying off precisely because AI workloads need exactly what Atlassian has built.

 

The strategic logic of Atlassian's AI position is distinct from most others in this series. Rather than building AI features on top of an existing product, Atlassian is arguing that AI needs Atlassian; that the work tracking, planning, and organizational knowledge embedded in Jira and Confluence becomes more valuable, not less, as AI proliferates. The Teamwork Graph, which now contains well over 100 billion objects and connections across first- and third-party tools, is the context layer that enables Rovo, Atlassian's AI assistant, to deliver business value that is actually context-aware and actionable, rather than generic. That's a meaningful competitive claim: a new AI tool can generate text or answer questions, but it can't tell you which Jira tickets are blocking your sprint, who owns which decision, or how a current workflow has historically performed across 350,000 customers. Atlassian can.

 

The proof that customers believe this argument is in the adoption metrics. Atlassian's Teamwork Collection, the AI-powered bundle that serves as the company's primary AI monetization vehicle, surpassed 1 million seats sold in under nine months, with more than 1,000 customers upgrading, and the company closed a record number of deals over $1 million in ACV, nearly doubling year-over-year. Critically, customers using AI code generation tools create 5% more Jira tasks, have 5% higher monthly active users, and expand Jira seats 5% faster than those not using AI tools, a clean, data-driven demonstration that AI adoption drives more usage of the core platform, not less.

 

The seat-based pricing debate hung over the call: analysts probed whether consumption-based pricing could erode Atlassian's model as AI agents proliferate and "seats" become a less meaningful unit. Atlassian's response is essentially that customers want predictability, and seat-based pricing delivers it, while the Teamwork Collection bundles AI credits on top of seats in a way that captures consumption upside without forcing customers into open-ended consumption risk. RPO grew 44% year-over-year to $3.8 billion, accelerating for the third consecutive quarter.

 

The competitive angle on the call was also striking. When asked about new AI tools, including tools from Anthropic, potentially challenging Jira, the CEO was notably unbothered. He noted Atlassian considers Anthropic a partner, using their models within the platform, and argued that new AI tools will emerge but Atlassian's Teamwork Graph and deep workflow integration provide differentiation that generic AI tools can't replicate. The focus, he said, remains on human-AI collaboration for complex work: the kind that requires organizational context, compliance, security, and integration that a standalone AI tool lacks.

 

Qualys

The cybersecurity industry is entering a new phase where the speed of attacker exploitation has outpaced the speed of human response, and the only viable answer is autonomous, AI-driven remediation. As threat actors continue to compress time-to-exploit, Qualys believes the next phase of pre-breach risk management will be defined by an agentic AI-driven risk fabric with out-of-the-box business quantification and automated remediation to respond at the speed of threats.

 

The most significant product announcement on the call was the launch of the AI-native Risk Operations Center, which the CEO positioned explicitly as a new category in cybersecurity, designed to centralize an organization's entire threat response posture. The argument is that the traditional SOC (Security Operations Center) is reactive: it responds to breaches that have already happened. Qualys's ROC is designed as a pre-breach capability that unifies Continuous Threat Exposure Management with exploit confirmation, risk quantification, and automated remediation, all powered by agentic AI that can act without waiting for a human to review and approve each step. The competitive shot embedded in the CEO's commentary was pointed and deliberate: he argued that competitors focusing on exposure management can't win the AI fight if they're still routing remediation through Jira and ServiceNow tickets. Autonomous decision-making and execution is the actual differentiator, not just identifying vulnerabilities.

 

On the customer impact side, Qualys's AI story is still more about where the market is heading than where revenue is today. The ETM (Enterprise TruRisk Management) platform, which serves as the foundation for the agentic AI capabilities, represented 10% of total bookings and 13% of new bookings, up from 8% and 9% previously, a meaningful directional signal but still a small share of the overall business. Patch Management, which is the remediation capability that AI agents actually execute against vulnerabilities, represented 8% of total bookings and 16% of new bookings. Together these metrics point toward a platform transition that's underway but early. Customers are beginning to adopt the agentic workflow, but the majority of the revenue base is still anchored in the traditional vulnerability management and VMDR products.

 

The financial picture is disciplined and somewhat unusual relative to the rest of the companies in this series. Full-year 2025 revenue reached $669 million, up 10%, with a 47% adjusted EBITDA margin, exceptional profitability for a company of this size. The 2026 guidance of 7-8% revenue growth is conservative by enterprise software standards, and management was candid that it reflects investment in AI infrastructure, sales and marketing expansion, and federal sector buildout, all of which compress near-term margin slightly.

 

What makes Qualys particularly interesting from an AI lens is the specificity of its use case. The combination of asset discovery, vulnerability identification, exploit confirmation, risk quantification, and automated patch deployment is one of the few enterprise software workflows where agentic AI is not just useful but arguably necessary for the product to work as intended. No human team can keep up with the velocity of modern vulnerability exploitation. Qualys differentiates by offering integrated patch management and autonomous workflows that allow customers to quickly remediate vulnerabilities.

 

Paylocity

The most concrete AI signal on the call was deceptively simple: average monthly usage of Paylocity's AI assistant increased over 100% quarter-over-quarter. That's a significant sequential jump in a relatively short period. Paylocity recently expanded its AI assistant into HR rules and regulations, tapping into more than 200 IRS and Department of Labor knowledge sources to provide administrators with guidance on tax and labor regulations. For the HR administrators who are Paylocity's primary users, typically generalists at companies of 50 to 500 employees, the ability to ask a question and get a compliance-grounded answer without calling a lawyer or spending an hour on the IRS website is genuinely valuable.

 

What's particularly interesting about Paylocity's AI strategy is the dual deployment model: AI for customers, and AI for Paylocity itself. Within the operations team, Paylocity is leveraging AI to drive down client case volumes, automate client interactions and case routing, and perform sentiment analysis to flag urgent cases for faster response. This internal AI efficiency play is showing up in the financials: adjusted gross margin expanded 60 basis points year-over-year to 74.4% in Q2, and operating expenses are growing slower than revenue. This is a company guiding to $622-630 million in adjusted EBITDA on $1.74 billion in revenue, roughly a 36% margin.
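The margin arithmetic implied by that guidance is easy to verify; a quick sketch (the dollar figures are from the call, and taking the midpoint of the range is our simplification):

```python
# Rough check of the adjusted EBITDA margin implied by Paylocity's guidance.
# Guidance figures are from the call; using the range midpoint is our choice.
ebitda_low, ebitda_high = 622, 630            # $M, adjusted EBITDA guidance range
revenue = 1_740                               # $M, ~$1.74B revenue guidance

ebitda_mid = (ebitda_low + ebitda_high) / 2   # 626
margin = ebitda_mid / revenue
print(f"implied adjusted EBITDA margin: {margin:.1%}")  # roughly 36%
```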

 

Paylocity's position as a system of record allows it to connect data to other systems via APIs, increasing platform utilization, and customer time savings from AI features lead directly to opportunities for upselling more modules, enhancing the experience and driving revenue growth. This is a different monetization path than many peers': rather than charging directly for AI as a premium SKU, Paylocity is betting that AI-driven engagement deepens the platform relationship, makes customers more likely to expand into adjacent modules, and reduces the churn that has historically been the primary growth constraint in HCM.

 

Paylocity is operating in a slower-growth regime than most of this peer group. Full-year 2026 revenue guidance of $1.73-1.74 billion represents 9% growth, and recurring revenue is expected to grow 10-11%. That's solid but not the acceleration story investors see at Atlassian or Salesforce. The macro factor management monitors most closely is employment levels at client companies, since Paylocity's revenue is partly tied to headcount. Management noted employment levels have been stable, with no significant changes expected. AI is helping Paylocity expand revenue per client and improve efficiency, but it isn't yet the kind of step-change driver that reshapes the growth profile.

 

 

 

Doximity

Where nearly every other company in this series was accelerating AI investment and racing to monetize, Doximity was doing something much rarer: deliberately pumping the brakes on AI commercialization, citing patient safety, and then watching its stock drop nearly 24% the next day as the market punished the caution. The divergence between what management said and what the market wanted to hear was striking.

 

The platform fundamentals are strong by virtually any measure. Doximity surpassed 3 million registered members, with more than 85% of all US physicians and two-thirds of NPs and PAs on the platform, and record usage across daily, weekly, monthly, and quarterly active user metrics. Over 300,000 unique prescribers used AI products in Q3, and January saw an average of four AI queries per prescriber per week for Docs GPT. More than 100 top US health systems have purchased the AI suite, granting access to over 180,000 prescribers. These are impressive adoption numbers for a product that is not yet generating any revenue.

 

And that last sentence is the crux of the investor tension. No AI revenue is included in current guidance; commercial AI products are expected to launch later in the year. Doximity is building substantial AI infrastructure, investing in usage that is already compressing gross margins from 93% to 91%, and has over 300,000 physicians actively using the product, but is explicitly choosing not to monetize it yet. The reason CEO Jeff Tangney gave was pointed and substantive: a recent Stanford-Harvard study found AI can cause clinical harm in up to 22% of real patient cases, and that overconfident models make those errors harder to spot. In response, Doximity built Peer Check, a clinical peer review layer co-led by renowned physician-scientists Eric Topol and Regina Benjamin, with more than 10,000 US physician experts reviewing AI-generated clinical answers before they're deployed at scale. The message was clear: Doximity will not monetize AI until it trusts that the AI is safe enough for physicians to rely on without second-guessing every output.

 

This is a genuinely different philosophy. Healthcare AI occupies a unique risk category: an error in a marketing recommendation costs a company a customer; an error in a clinical AI recommendation can cost a patient their life. Doximity's caution isn't timidity; it's arguably the only responsible posture for a company whose platform is used by 85% of America's physicians. The commercial AI launch later in fiscal 2026 will be a significant test: can Doximity translate trust, physician habit, and clinical safety credibility into a monetization model that the market will reward?

 

The longer-arc story here is one of deliberate sequencing: build the trust, earn the habit, then charge for it. Doximity's net revenue retention was 112% overall and 117% for the top 20 customers, evidence that clients who deepen their engagement with the platform expand their spend meaningfully. If the AI suite launches commercially later this year with genuine physician trust behind it, and if the pharma headwinds stabilize, the combination of 85% physician coverage, proven engagement habits, and a safety-validated AI product could represent a monetization inflection.

 

BILL

BILL's call told a story that sits at an interesting intersection: a company that has built a genuinely useful platform for SMB financial operations, is now threading AI throughout it, and is simultaneously facing an existential investor question about whether AI startups could eventually disintermediate it entirely.

 

The core business is performing solidly. Core revenue grew 17% year-over-year to $375 million in Q2, total payment volume reached $95 billion, up 13%, and transactions processed grew 16% to 35 million. Nearly 500,000 businesses now use the platform, with over 9,500 accounting firms embedded in the network. Multiproduct adoption grew 28% year-over-year, with businesses using both AP/AR and Spend & Expense solutions, a meaningful indicator that customers are deepening their reliance on BILL as a financial operations platform rather than a single-purpose payments tool.

 

On AI, BILL is pursuing a two-track strategy that distinguishes between AI-for-customers and AI-for-BILL-itself. On the customer side, the company introduced agentic capabilities including a W-9 Agent for vendor management and a coding agent for invoice processing: specific, transactional use cases where AI can eliminate manual work that currently slows down SMB finance teams. The framing CEO Lacerte used: "AI will allow us to dive deeper into the stack of transactional confusion and simplify it." For SMB owners and bookkeepers who spend hours reconciling invoices, chasing down vendor information, and coding transactions to the right GL accounts, that promise is valuable.

 

On the internal efficiency side, BILL developed a roadmap of AI-driven productivity initiatives covering developer productivity, internal team automation, and go-to-market optimization, with initial benefits expected to start flowing through in fiscal 2027. This is a multi-year cost structure story.

 

Analysts pressed on whether AI-native startups could undercut BILL's position by building cheaper, simpler financial operations tools. Lacerte's response centered on three moats: deep expertise in financial operations built over two decades, a proprietary data set from processing over $1 trillion in payments that enables superior risk models, and network effects from 8 million entities connected through the BILL ecosystem. The data moat argument is particularly credible: BILL's ability to assess payment risk, predict cash flow patterns, and detect fraud is a function of seeing an enormous volume of SMB financial transactions over time. A new entrant with a better AI model but no transaction history is structurally disadvantaged in the trust and risk management dimensions that matter most when you're moving real money for small businesses.

 

The honest challenge for BILL is growth rate moderation. Full-year core revenue guidance was raised but implies 14-15% growth for the year: respectable, but a deceleration from prior years and well below the trajectory of more AI-accelerated peers in this series. The company is deliberately shifting focus toward larger SMBs and improving customer unit economics rather than maximizing new customer count, which is a sensible strategic move but one that introduces near-term headwinds. The AI monetization story, where agentic capabilities translate into higher ARPU from existing customers, is still in its early innings. If BILL can demonstrate over the next few quarters that AI-driven automation is genuinely expanding what customers are willing to pay, the multiple compression the stock has experienced could look like an opportunity in retrospect. But that proof is still ahead, not behind.

 

ZoomInfo

 

ZoomInfo's Q4 2025 call was a study in a company navigating a genuinely difficult strategic moment, caught between a legacy business model under pressure from AI disruption, a promising new product suite not yet in revenue guidance, and a market that has lost patience with the transition timeline. Q4 revenue grew just 3% year-over-year to $319 million, and 2026 guidance projects only 1% revenue growth at the midpoint, an extraordinary deceleration for a company that was growing at 20%+ just a few years ago.

 

To understand ZoomInfo's AI story, you have to understand its predicament. ZoomInfo built its business on selling B2B contact and intent data to sales and marketing teams; essentially, a database of who to call and when. AI has disrupted that model in two ways simultaneously: first, AI-powered outbound tools have made it easier for companies to generate their own contact intelligence; second, AI agents are replacing some of the human SDRs who were the primary users of ZoomInfo's data. The company that was once the essential data layer for go-to-market teams is now being forced to redefine what "essential" means in an AI-first sales environment.

 

CEO Henry Schuck's answer to that challenge is an explicit platform pivot. The strategic framing is that whether customers access ZoomInfo's intelligence through the application, through an AI agent, or through something they built themselves, the data flows to where work happens, positioning ZoomInfo as the only platform delivering intelligence, orchestration, and execution for modern go-to-market teams. The new products, GTM Studio, which unifies internal and external data for audience building, and GTM Workspace, are designed to make ZoomInfo the operating system that AI sales agents plug into, rather than a database that humans search manually. Schuck noted that many of the top 50 fastest-growing AI-native companies are already ZoomInfo customers, a meaningful signal that the platform has relevance in the AI-native enterprise, not just the legacy enterprise.

 

The upmarket migration is the clearest positive story on the call. ZoomInfo grew its upmarket segment 6% year-over-year in Q4, tripling its year-over-year growth rate in its seasonally largest quarter, and now has 74% of ACV coming from upmarket customers, up from 70% a year ago, with ACV from the $100,000-plus customer cohort growing double digits and now representing more than 50% of total company ACV. Upmarket customers buy more of the platform, renew at higher rates, and are more strategically embedded, which means the quality of the revenue base is improving even as the headline growth rate suffers from the ongoing cleanup of the downmarket SMB book.

 

The most candid moment on the call was about the new AI products: ZoomInfo explicitly included no revenue contribution from GTM Studio or other new products in the 2026 revenue guidance, while embedding the associated costs.

 

What ZoomInfo illustrates in the context of this broader series is a distinct and underappreciated risk: the companies most likely to be hurt by AI in the near term are not necessarily those with the weakest products, but those whose primary value proposition was data that AI can now partially substitute or generate differently. ZoomInfo's contact and intent data was extraordinarily valuable in a world where finding the right person to call required human research. In a world where AI agents can do that research, synthesize signals, and execute outreach autonomously, the question isn't whether ZoomInfo's data is good — it clearly is — but whether it remains uniquely necessary. Schuck's argument is that data quality is becoming more important, not less, as AI agents proliferate: bad data fed into an AI agent produces bad outputs at scale, and ZoomInfo's verified, constantly refreshed data set is the antidote.

 

Dynatrace

Dynatrace's Q3 FY2026 call was the kind of straightforward execution story that tends to get overshadowed in an earnings season dominated by more dramatic AI narratives. While most companies are racing to build AI capabilities, Dynatrace is arguing that AI creates an urgent and growing need for exactly what it already does. The more AI proliferates, the more complex and opaque software environments become, and the more critical observability becomes: knowing what's happening, why, and what to do about it. CEO Rick McConnell's central thesis at the annual Perform customer conference was that observability is entering a new era in which it is foundational to resilient software and dependable AI environments.

 

The financial picture reinforces that thesis. ARR stabilized at 16% growth for three consecutive quarters, net new ARR grew double digits for three consecutive quarters, and annualized log management consumption surpassed $100 million, with log management itself growing over 100% year-over-year. The logs story is particularly significant: logs are the raw observational data that flows from AI workloads at enormous volume and velocity, and Dynatrace's ability to ingest, index, and make intelligent sense of them is a direct beneficiary of the AI infrastructure build-out happening across every enterprise customer. Platform consumption overall continued to grow over 20%, ahead of ARR growth, a usage-leading indicator that suggests the revenue trajectory has further room to run.

 

The most strategically important announcement on the call was Dynatrace Intelligence, a new agentic AI operations system unveiled at the Perform conference and made available to all customers. What's notable is the deliberate pricing decision: Dynatrace Intelligence was not priced as a separate SKU but embedded into the platform for all customers. This is a conscious choice to drive adoption first and monetize through increased platform consumption and expanded footprint, rather than creating an AI premium layer that some customers might resist. It mirrors the philosophy of several other companies in this series that are using AI to deepen platform engagement before converting it to direct revenue.

 

The competitive differentiation Dynatrace leans on hardest is the combination of what CEO McConnell calls "trustworthy deterministic AI" with agentic AI. The argument is nuanced and important: purely probabilistic AI — LLMs making inferences about what might be wrong in a complex system — produces unreliable outputs in production environments where precision matters. Dynatrace's approach combines its long-standing causal AI engine, which identifies root cause deterministically based on topology and dependency maps, with newer agentic capabilities that can act on those findings autonomously. For an SRE or platform engineer dealing with an incident at 2am, "we think the problem might be here" is much less valuable than "the root cause is definitively this service, and here's what to do." That precision is what Dynatrace is selling, and it's a genuinely differentiated value proposition against generic AI observability tools.

 

The hyperscaler partnership strategy adds another dimension. In Q3, Dynatrace announced deeper technical integrations with Amazon Bedrock AgentCore, embedding with Azure's SRE Agent, and serving as the launch partner for GCP Gemini CLI extensions and Gemini Enterprise. This is Dynatrace positioning itself as the observability layer that hyperscaler-native AI agents rely on, meaning when a customer builds an AI agent in AWS, Azure, or GCP, Dynatrace is the system that monitors what that agent does, catches when it goes wrong, and helps remediate issues. It's a smart wedge: rather than competing with hyperscalers for the AI workload itself, Dynatrace is becoming the essential trust and reliability layer underneath those workloads.

 

The honest question hanging over the call — and which analysts pressed on — is whether 16% ARR growth is the ceiling or a floor as the AI opportunity matures. The company raised full-year guidance by 125 basis points, now targeting 15.5%-16% ARR growth and putting it on track to surpass $2 billion in ARR in fiscal 2026. For a company of Dynatrace's scale, that's respectable, but investors hoping for a step-change acceleration driven by AI workload complexity haven't seen it yet in the headline numbers. Management's argument is that platform consumption growing at 20%+ is the leading indicator of that acceleration arriving in ARR terms over the next several quarters — and the logs inflection at 100%+ growth is the most concrete evidence they can point to that the flywheel is spinning.

 

Monday.com

monday.com's Q4 2025 call presented a company in a genuinely interesting two-speed moment: an enterprise business accelerating meaningfully on the back of AI-driven platform expansion, and an SMB self-serve business struggling with deteriorating unit economics that management expects to persist through 2026.

 

Start with what's working. Full-year revenue reached $1.232 billion, up 27%, with Q4 at $334 million, up 25%, and customers with over $500,000 in ARR grew 74% year-over-year. That enterprise acceleration is real, driven by customers standardizing on monday.com not just for project management but for CRM, service operations, software development, and now AI-powered workflows. The AI product metrics are early but directionally exciting: Monday Blocks powered over 77 million actions, Sidekick processed over 500,000 user messages, and Monday Vibe — the company's AI-native app builder — became the fastest product in monday's history to surpass $1 million in ARR, reaching that milestone just 2.5 months after pricing launched in mid-October 2025.

 

The strategic positioning monday.com is building toward is worth understanding carefully. Co-CEO Eran Zinman described a unified AI platform with four core capabilities: Monday Sidekick (AI assistant), Monday Vibe (AI-native app builder), Monday Agents (autonomous workflow executors), and Monday Workflows (process automation). Teams are increasingly relying on monday.com not just to organize work, but to make decisions, automate outcomes, and execute faster with confidence. That reframing — from a work OS to a work execution platform — is a meaningful upward step in perceived value and justifiable pricing power. And it connects directly to the enterprise motion: larger customers with complex, interconnected workflows are the natural buyers of this expanded capability set, while SMBs may not need or want the full stack.

 

Which brings us to the harder conversation. The no-touch self-serve channel remains "choppy," with higher customer acquisition costs and lower returns than historical levels — a dynamic management expects to persist throughout 2026, with no improvement assumed in guidance. This is a genuine structural headwind. The SMB market for work management software has become more competitive and more price-sensitive, partly because AI tools have lowered the barrier to building lightweight alternatives and partly because macro conditions have tightened SMB software budgets. monday.com's response is to redirect investment toward enterprise, which makes strategic sense but compresses near-term growth rates. 2026 guidance of 18-19% revenue growth is a step down from 27% in 2025 — not alarming on its own, but a deceleration the market had to digest.

 

The long-term ambition management has been articulating — a path to much larger revenue targets by 2027 — was another notable moment on the call. The CFO explicitly stated that the 2027 target number is "off the table," with management ceasing to discuss prior long-term targets due to macroeconomic volatility and the ongoing challenges in no-touch channels. Pulling guidance is rarely received well, and combined with the SMB headwinds and decelerating growth, it created investor concern despite the genuinely strong enterprise metrics.

 

The most compelling part of the monday.com story in the context of this series is Monday Vibe — an AI-native app builder that lets business users create custom applications on top of the monday.com platform without writing code. If this scales as management believes it can, it transforms monday.com from a work management platform into something closer to a business application development layer — essentially a low-code/no-code platform that is AI-native from the ground up. The competitive analogy that comes to mind is what Salesforce is trying to do with Agentforce: extend from a system of record into a system of action and creation. monday.com is pursuing a similar expansion from a different starting point — the work coordination layer rather than the CRM layer — and Vibe's early ARR traction suggests the market is receptive to the vision, even if the SMB headwinds cloud the near-term picture.

 

Blackbaud

Blackbaud is not a company most people include when they talk about AI in enterprise software. It serves nonprofits, universities, healthcare foundations, and faith organizations, institutions that are resource-constrained, often technologically cautious, and historically slow to adopt new platforms. And yet the Q4 2025 call made a surprisingly compelling case that Blackbaud may be better positioned for the AI era than its modest growth rate suggests, precisely because of the unique data moat it has built over 45 years serving the social impact sector.

 

CEO Mike Gianoni opened with a direct and unusually candid framing of the fundamental question facing every vertical SaaS company right now: will AI be beneficial to system-of-record vertical software firms like Blackbaud, or detrimental? Blackbaud processes nearly 30 billion donor predictions annually, manages tens of petabytes of data across its customer base, and has built the most comprehensive philanthropic dataset in existence: proprietary survey and benchmarking data, licensed datasets, identity resolution capabilities, and specialized datasets like Blackbaud Giving Search. Critically, this data is not publicly available on the internet where LLMs can access it, meaning no competitor can train a general-purpose AI model to replicate what Blackbaud knows about donor behavior, nonprofit fundraising patterns, and philanthropic outcomes.

 

The most concrete AI product on the call was the Development Agent, the first of Blackbaud's "Agents for Good" released at their bbcon conference in October. The use case is beautifully specific and immediately legible in terms of ROI: a university with 190,000 alumni but a fundraising team with bandwidth to focus on only 10,000 of them can deploy the Development Agent as an additional "staff member" that cultivates relationships and raises funds from the other 180,000 alumni through email, text, and a full conversational avatar, self-learning using the data, intelligence, and workflows within Blackbaud's system of record. That is not a marginal efficiency gain — it's the ability to extend fundraising reach by 18x without proportionally scaling headcount.
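The "18x" figure falls straight out of the alumni counts quoted on the call; restated as arithmetic:

```python
# The Development Agent "18x reach" claim, restated as arithmetic.
# The alumni counts are the figures quoted on Blackbaud's call.
total_alumni = 190_000
human_covered = 10_000                        # alumni the fundraising team can reach today
agent_covered = total_alumni - human_covered  # the other 180,000, left to the agent

reach_multiple = agent_covered / human_covered
print(reach_multiple)  # 18.0 — the agent extends reach 18x beyond the human team
```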

 

The pricing model matters here too. Blackbaud is structuring Agents for Good as an annual subscription with multiyear contracts — meaning the agent becomes a recurring revenue line rather than a one-time upsell. More than 20% of customers are already asking to move to four-year or longer renewal contracts, which speaks to the depth of platform dependency and the confidence customers have in Blackbaud's roadmap. 2026 guidance of 4-4.5% revenue growth explicitly assumes no meaningful AI product revenue contribution.

 

The internal AI story is equally substantive. Every employee at Blackbaud has been required to complete AI training, the entire engineering team is using GitHub Copilot and Anthropic Claude for code generation, bug remediation, and new product development, and Blackbaud AI Chat — embedded within the system of record and leveraging customer and proprietary benchmark data — saw daily usage grow 5x since October. The company also cited three structural efficiency drivers it's pursuing simultaneously: geographic workforce diversification through a growing India office, closure of the last two legacy data centers, and AI-driven internal productivity.

 

What makes Blackbaud particularly interesting in the context of this broader series is the mission-alignment dimension. Nonprofits, universities, and foundations are not just technology buyers — they have deeply held values around data privacy, ethical AI use, and donor trust. Blackbaud's focus on cybersecurity and AI governance for ethical data use isn't just a compliance checkbox — it's a competitive moat in a market where the institutions writing the checks care deeply about how their donor data is used. A generic AI tool that scrapes public data and surfaces cold outreach isn't an acceptable substitute for a Development Agent that operates within a trusted, governed, sector-specific system of record. The nonprofit sector's inherent conservatism around technology is, in this framing, not a headwind for Blackbaud — it's a barrier to entry for every competitor trying to break in.

 

Cloudflare

CEO Matthew Prince positioned the company at the intersection of two forces reshaping the internet simultaneously: the explosion of AI agents and the resulting question of who governs how those agents traverse the web.

 

Start with the fundamentals, which are genuinely excellent. Q4 revenue grew 34% year-over-year to $614.5 million — the third consecutive quarter of acceleration — with large customers contributing 73% of total revenue, dollar-based net retention jumping 9 percentage points to 120%, and million-dollar customers growing 55% year-over-year. New ACV bookings grew nearly 50% year-over-year with both year-over-year and sequential acceleration, and RPO grew 48% to $2.5 billion — the kind of forward revenue visibility that gives confidence in sustained growth. The largest annual contract value deal in company history — $42.5 million per year — closed in the quarter. This is a business hitting its stride.

 

But the more important conversation on the call was about what's happening to Cloudflare's network itself. Prince reported that over the month of January alone, the number of weekly requests generated by AI agents more than doubled across the Cloudflare network. That single data point is extraordinary in what it implies. Cloudflare sits between virtually every significant website and the internet — it processes roughly 20% of all web traffic globally. The fact that AI agent-generated traffic is doubling on a monthly basis means the traffic composition of the entire internet is changing, and Cloudflare is uniquely positioned to observe, measure, and increasingly govern that change.

 

This leads to the most forward-looking and genuinely novel part of the call — Prince's framing of Cloudflare as a neutral broker between AI companies and content creators. AI models train on internet content, and the commercial relationship between those who create content and those who consume it for training purposes is still being worked out. Prince argued that AI companies and content creators alike are looking to Cloudflare as a trusted neutral third party — that both sides would rather Cloudflare figure out what the future business model looks like than have a hyperscaler do it, since hyperscalers are themselves building foundational models and may have conflicting incentives. Cloudflare's network position — sitting in the middle of every request, trusted by both sides, with no foundational model of its own — makes it a plausible honest broker in a way that no hyperscaler or AI company can credibly claim to be.

 

The developer platform story adds another dimension. Cloudflare exited 2025 with more than 4.5 million human developers active on the platform — and that number will soon be joined by AI agents as first-class citizens of the Cloudflare ecosystem. Workers AI (Cloudflare's inference-at-the-edge product), the AI Gateway (which manages and monitors AI API calls), and the growing suite of developer tools mean that Cloudflare is not just routing traffic between AI agents and the web — it's becoming the infrastructure layer where AI applications are built, deployed, and governed. The pool-of-funds contract model, where enterprises commit a pool of spend and draw it down across any Cloudflare product, is particularly well-suited to AI workloads where consumption is hard to predict but the underlying dependency is clear.

 

The gross margin story requires brief acknowledgment — gross margin came in at 74.9%, slightly below the long-term target range of 75-77%, as Cloudflare allocated more network expenses to cost of revenue to better reflect AI infrastructure investment. This is the same infrastructure cost dynamic playing out across virtually every company in this series, and Cloudflare's handling of it is conservative and transparent. With $4.1 billion in cash and full-year 2026 revenue guidance of $2.785-2.795 billion implying 28-29% growth, the balance sheet and growth profile are strong enough to absorb the investment cycle comfortably.

 

The bigger picture is this: Cloudflare is one of the few companies in enterprise software that isn't just using AI or selling AI products — it's building the infrastructure that AI itself needs to function reliably at internet scale. Every AI agent that browses the web, every model that calls an API, every application that serves AI-generated content — all of it flows through infrastructure that Cloudflare operates.

 

Freshworks

Freshworks' Q4 2025 call was defined by a genuine milestone, first-ever GAAP profitability for a full year, and a strategic clarity about where AI fits in the growth equation that was more concrete than most companies in this series have delivered. CEO Dennis Woodside's framing was direct: "AI is not just a feature in our products, it's a standalone revenue line delivering measurable value to our customers."

 

The numbers behind that claim are still early but directionally meaningful. Freddy AI crossed the 8,000-customer mark for paying AI customers, with over $25 million in ARR that nearly doubled year-over-year. Against a total ARR base of $907 million, $25 million is still a small fraction, but doubling year-over-year from a base of paying customers (not just users) is a credible sign of genuine willingness-to-pay rather than just feature adoption. The company's target is $100 million in AI-driven ARR over the next three years, a goal that would represent roughly 11% of today's ARR base, achievable if the doubling cadence holds.
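The "achievable if the doubling cadence holds" claim is easy to sanity-check; a minimal sketch, with the ARR figures from the call and the continued-doubling cadence as our assumption:

```python
# Sanity check: starting from ~$25M of Freddy AI ARR, how many doublings
# reach the $100M goal? ARR figures are from the call; assuming the
# near-doubling cadence simply continues each year is our simplification.
arr_m = 25.0       # $M of AI-driven ARR today
target_m = 100.0   # $M, the stated three-year goal
doublings = 0
while arr_m < target_m:
    arr_m *= 2     # one more year of ~100% growth
    doublings += 1
print(doublings)   # 2 — two more doublings hit $100M, inside the three-year window
```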

 

The business model for Freddy AI deserves attention. Freshworks is scaling usage and demonstrating value through session-based pricing for AI agents, meaning customers pay per interaction or session rather than per seat. This is a deliberate departure from pure seat-based SaaS and positions Freshworks to capture more value as AI agents handle more tickets, resolve more queries, and execute more workflows autonomously. For IT service desks and customer support teams, session-based pricing maps cleanly to outcomes: if the AI agent resolves 40% of tickets without human intervention, the customer pays for those sessions and the ROI is immediately visible. That alignment between price and value is one of the cleaner AI monetization models in this series.
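To make the "price maps to outcomes" point concrete, here is an illustrative sketch. Every number is hypothetical: the call summary gives the 40% resolution example, but neither the per-session price nor a customer's human handling cost, so those are invented for illustration.

```python
# Illustrative only: how session-based pricing ties AI spend to outcomes.
# The 40% autonomous-resolution rate echoes the example above; the per-session
# price and human handling cost are hypothetical, not Freshworks' actual pricing.
monthly_tickets = 10_000
ai_resolution_rate = 0.40        # share of tickets the AI agent resolves alone
price_per_session = 0.50         # hypothetical $ charged per AI-handled session
human_cost_per_ticket = 5.00     # hypothetical fully loaded human handling cost

ai_sessions = monthly_tickets * ai_resolution_rate
ai_spend = ai_sessions * price_per_session
cost_avoided = ai_sessions * human_cost_per_ticket
print(f"pay ${ai_spend:,.0f} for sessions, avoid ${cost_avoided:,.0f} of handling cost")
```

The design point: because the customer only pays for sessions the agent actually handles, the ROI calculation is visible line by line, which is harder to show under seat-based pricing.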

 

The two-speed nature of the Freshworks business is important context. The Employee Experience (EX) business — Freshservice for IT and employee service management — crossed $500 million in ARR with 26% year-over-year growth, clearly the growth engine. The Customer Experience (CX) business — Freshdesk for customer support — is being managed more defensively, with focus on retention and unification rather than aggressive growth. AI is more naturally embedded in the EX motion because IT service management has well-defined, repetitive workflows (ticket routing, password resets, software provisioning) where agentic AI can replace or dramatically accelerate human effort with high confidence. Customer service interactions tend to be more variable and emotionally complex, which makes autonomous AI resolution both harder and higher-stakes.

 

The platform buildout — ITSM, ITOM, ITAM, and ESM under one unified roof through the acquisitions of Device42 and FireHydrant — is the architectural foundation that makes the AI story more compelling. Freshworks has brought IT service management, IT operations management, IT asset management, and enterprise service management under one cohesive roof, which means AI agents operating within Freshservice have access to a much richer context layer: not just the support ticket, but the asset involved, the operational status of related systems, and the history of similar incidents. That contextual richness is what separates an AI agent that can genuinely resolve an IT issue from one that can only acknowledge it.

 

The honest challenge for Freshworks is the growth rate trajectory. Revenue growth decelerated from 22% in Q4 2024 to 14% in Q4 2025, and full-year 2026 guidance of $952-960 million implies 13.5-14.5% growth, respectable but not an acceleration. The market is watching whether AI monetization can provide a growth catalyst that reverses the deceleration, and the answer isn't yet visible in the numbers. What is visible is a company that has achieved financial discipline, first-ever GAAP profitability, a clear AI monetization model, and a coherent strategy for the mid-market ITSM space where ServiceNow is too expensive and legacy tools aren't AI-capable. If the $100 million AI ARR target is achievable on schedule, it would represent a meaningful step toward reaccelerating growth from a stronger profitability foundation — but 2026 is the year that hypothesis gets tested.

 

Klaviyo

Klaviyo's Q4 2025 call was one of the cleanest AI impact stories in this entire series — not because the AI revenue numbers are large yet, but because the customer outcome data is unusually concrete and the strategic positioning is genuinely differentiated. Co-CEO Andrew Bialecki's framing was direct: "The future is autonomous customer experiences."

 

The business fundamentals provide a strong foundation. Full-year revenue reached $1.23 billion, up 32%, with Q4 at $350 million, up 30% — and 2026 guidance of $1.50-1.51 billion implies 21.5-22.5% growth. More impressive is the operating leverage: non-GAAP operating expenses were 58% of revenue, the lowest level since the IPO, with AI driving meaningful internal productivity and enabling faster development cycles without commensurate headcount growth. This is the AI efficiency dividend theme appearing again — Klaviyo is building faster and spending less as a percentage of revenue, not because of austerity but because internal AI tools are compounding developer output.

 

The AI story for customers centers on what Klaviyo calls the autonomous B2C CRM — essentially, AI that can plan, create, send, and optimize marketing campaigns with minimal human intervention, using the rich first-party data that merchants have accumulated within Klaviyo about their customers' browsing behavior, purchase history, and engagement patterns. The outcome data is striking: more than 50% of marketing campaigns from customers using Marketing Agent are now generated by AI, with some customers achieving a 50% increase in open rates and a 40% rise in revenue per campaign.

 

The key competitive question on the call — pressed directly by Goldman Sachs — was about what prevents an AI-native competitor from replicating Klaviyo's advantage. Bialecki's answer centered on the proprietary context that Klaviyo holds: the company's advantage lies in its extensive dataset and infrastructure designed specifically for real-time use cases, allowing for real-time decisions and personalization that are difficult to replicate. This is the Klaviyo data moat argument: a general-purpose AI model doesn't know that a specific customer browsed a particular product twice this week, abandoned their cart on Wednesday, opened a discount email last month but didn't convert, and has a lifetime value pattern consistent with high-margin repeat buyers. Klaviyo knows all of that in real time, and its AI models are trained on the behavioral patterns of hundreds of thousands of ecommerce merchants across billions of consumer interactions. Bialecki noted that Klaviyo Attributed Value — the revenue Klaviyo attributes to its platform — came to nearly $80 billion for customers last year, a remarkable demonstration of how central the platform has become to ecommerce revenue generation.

 

The newer service product — essentially AI-powered customer service built natively on the Klaviyo platform — is an important strategic expansion. Customer resolution rates increased by 20 points at reference customers, and agent-driven sales saw up to 111% growth, with the service category noted as the fastest-growing product launch in company history. What makes this expansion particularly logical is that the same first-party data that powers marketing personalization — purchase history, browsing behavior, preferences — is equally valuable for customer service. An agent that already knows you're a repeat buyer, knows what you purchased last month, and knows your communication preferences can resolve a service inquiry far more effectively than a generic support bot. Klaviyo is threading marketing and service on the same data layer, which creates compounding value and makes the platform significantly harder to displace.

 

Management was conservative with guidance; the CFO explicitly said minimal service contribution is built into 2026 guidance, framing it as embedded upside rather than assumed revenue. Given the early but strong traction signals, that conservatism looks like it could create meaningful upside through the year as service product adoption scales. The announced global partnership with Accenture and the doubling of million-dollar ARR customers are additional signals that Klaviyo is moving credibly upmarket — and an upmarket customer deploying Klaviyo across both marketing and service is a much stickier, higher-value relationship than a mid-market email marketing customer alone.

 

Datadog

Datadog's Q4 2025 call was one of the strongest in the enterprise software sector this cycle, and the stock's 16% surge on the day reflects how comprehensively it addressed investor concerns about AI's net impact on the observability market. The headline numbers were excellent: Q4 revenue of $953 million beat the high end of guidance, up 29% year-over-year, with bookings of $1.63 billion representing 37% growth and including 18 deals over $10 million in TCV and two over $100 million. But the more interesting story is about the structure of that growth and what it tells you about how AI is reshaping Datadog's business.

 

The most analytically important disclosure on the call was the explicit breakout of AI-native vs. non-AI-native customer growth. CEO Olivier Pomel confirmed what investors had been anxious about — that AI-native companies were a large portion of Datadog's growth but also a concentrated risk — and then gave the answer the market needed: revenue growth from the broad base of customers excluding AI natives accelerated to 23% year-over-year in Q4, up from 20% in Q3. That acceleration in the non-AI cohort is the key signal.

 

The AI-native customer base is itself becoming more strategically significant. About 650 AI-native customers use Datadog, including 19 spending $1 million or more annually, with 14 of the top 20 AI-native companies as customers. That last statistic is worth sitting with: 14 of the 20 most important companies building AI infrastructure at scale have chosen Datadog as their observability platform. When those companies grow — as they are, dramatically — Datadog's revenue from them grows in proportion to their consumption. It's a high-quality, high-growth cohort with deep platform dependency and limited churn risk given how embedded observability is in production operations.

 

The product story adds a forward-looking dimension that the revenue numbers don't fully capture yet. MCP server tool calls rose 11-fold quarter-over-quarter in Q4 — a staggering acceleration in AI agent-driven requests flowing through Datadog's infrastructure. This mirrors what Cloudflare observed on its network: AI agents are creating exponentially more monitoring and observability needs than human-driven applications because they operate continuously, at scale, without the natural pauses that human users introduce. An AI agent making thousands of API calls per hour needs its performance tracked, its errors flagged, its costs monitored, and its security posture verified — all of which are Datadog capabilities.

 

The AI SRE agent — Datadog's flagship agentic product for automated incident response and root cause analysis — reached general availability in December and surpassed 2,000 trial and paying customers within a month. That pace of adoption is notable: it suggests both strong market need and a well-designed product that customers can get value from quickly. The AI SRE agent pairs with Datadog's new On-Call product (which reached 3,000 customers for incident response), creating a loop where Datadog not only detects issues but can autonomously investigate and suggest — or execute — remediation. This is Datadog moving from observability toward what it calls "autonomous operations," a positioning that closely mirrors Dynatrace's strategic framing and that creates a much larger TAM than traditional monitoring alone.

 

The product breadth story is also becoming a real competitive moat. 84% of customers use two or more products, 55% use four or more, and 33% use six or more — metrics that have improved consistently year-over-year. In a market where AI workloads require observability, security, and performance monitoring to be integrated rather than siloed, Datadog's ability to provide all of these from a single platform — with consistent data models, unified pricing, and no integration tax — is a structural advantage that becomes more valuable as AI application complexity increases. 48% of the Fortune 500 are already Datadog customers, with median ARR per Fortune 500 customer under $500,000 — suggesting enormous expansion headroom within an already-won customer base as those enterprises deepen their AI infrastructure.

 

The honest nuance on the call was the full-year 2026 guidance of 18-20% growth — a deceleration from the 29% Q4 print, partly explained by concentration in the AI-native cohort and the conservative guidance philosophy CFO David Obstler has consistently maintained. The AI-native companies that drove outsized growth in 2025 create difficult comparisons in 2026, and management is transparent about modeling conservatively on that cohort. But the underlying signals — non-AI-native acceleration, deepening multi-product adoption, an inflection in AI agent-driven usage, and the early momentum of the AI SRE agent — all point toward a business where the AI tailwinds are broadening rather than narrowing as the year progresses.

 

Shopify

Shopify's Q4 2025 call was unlike any other in this series because President Harley Finkelstein wasn't just reporting on AI's impact on the business; he was announcing that Shopify intends to be the infrastructure layer for the entire new era of AI commerce. The call opened with unusual theatrics for an earnings call, invoking Tobi Lütke's 2015 IPO vision, and the purpose was deliberate: to signal to the market that what Shopify is building in 2026 is not an incremental feature set but a foundational bet on how commerce itself will work when AI agents replace traditional search as the primary discovery and purchasing channel.

 

Q4 revenue reached $3.1 billion, up 31% year-over-year, and full-year 2025 revenue of $11.6 billion grew 30% — Shopify's highest annual growth since 2021. B2B GMV grew 84% in Q4 and 96% for the full year, international revenue grew 36%, and Shop Pay processed $43 billion in Q4 GMV, exceeding 50% of Shopify's US GPV.

 

Now the AI story, which is Shopify's most strategically distinctive narrative in this entire series. Since January 2025, orders coming to Shopify stores from AI search have grown 15x. That's from a small base, but 15x in 12 months is not a rounding error; it's evidence that AI-powered shopping is moving from novelty to channel. What makes this uniquely important for Shopify is how the company is positioned relative to that shift. When an AI shopping agent helps a consumer find and purchase a product, the transaction still needs to flow through a checkout layer, with all the complexity that entails: inventory checks, payment processing, tax calculation, shipping logistics, fraud detection, and post-purchase fulfillment. Shopify's checkout process is not bypassed by AI agents; the economics for merchants remain the same as if the transaction occurred in the online store, with the full backend of commerce continuing to flow through Shopify. In other words, AI is creating new discovery channels, but Shopify remains the execution layer underneath all of them.

 

The Universal Commerce Protocol co-developed with Google is Shopify's bid to codify this position as a standard. UCP is described as "the only protocol that covers the full commerce journey, end-to-end," and is payment-agnostic. The strategic intent is clear: just as HTTP became the standard that the web runs on, Shopify wants UCP to become the standard that agentic commerce runs on. When analysts pressed on the competitive threat from ACP, the competing standard proposed by OpenAI and Stripe, Finkelstein's response emphasized that UCP is designed as a comprehensive protocol covering everything from search to post-order, maintaining the merchant's checkout logic regardless of which AI agent surfaces the transaction. The "payment-agnostic" framing is notable: Shopify is saying it doesn't need to own the payment to benefit from the transaction, which lowers the barrier to adoption for merchants and platforms that might otherwise resist ceding the payment relationship.

 

The Sidekick AI features tell a parallel story about AI's impact on merchant operations rather than just discovery. Sidekick generated almost 4,000 custom apps, created over 29,000 automations, and edited 1.2 million photos within just three weeks of a feature launch. For a merchant managing a growing ecommerce business, tasks like product catalog management, marketing campaigns, customer service, and inventory management previously required a developer or a specialized agency. AI is making Shopify merchants dramatically more operationally capable without proportional headcount growth, which deepens platform dependency and expands the addressable market to entrepreneurs who previously couldn't afford to operate at scale.

 

The biggest investment implication from this call is one that runs throughout this entire series but is most vivid here: Shopify is not merely an AI adopter or an AI-enhanced product; it is positioning itself as the commerce infrastructure that AI agents must connect to in order to transact in the physical economy. Agentic commerce is viewed as an evolution of existing channels, giving merchants flexibility to add or remove channels as needed, and Shopify's platform, with its integrated suite of payments, inventory, tax, and analytics, is what makes that flexibility possible at scale. If the trajectory of AI-driven shopping continues, Shopify's dual position as both the merchant operating system and the commerce protocol that AI agents plug into could represent one of the most structurally advantaged positions in the AI economy.

 

Q2 Holdings

Q2 Holdings occupies a peculiar position in the enterprise software landscape: it's a company that has been quietly executing a multi-year transformation into a profitable growth business while serving one of the most conservative, compliance-bound industries in the economy, namely community and regional banks and credit unions. The Q4 2025 call was, in essence, the culmination of that three-year transformation, and the AI story running through it is less about flashy product launches and more about what becomes possible when the foundational infrastructure finally catches up to the strategic ambition.

 

The financial discipline story is worth leading with. Full-year 2025 revenue of $794.8 million grew 14%, Q2's highest annual growth rate since 2021, while adjusted EBITDA expanded 49% to $186.5 million, with free cash flow conversion of 93%. Q4 subscription revenue grew 16% year-over-year, EBITDA margins expanded 400+ basis points, and the company finished with a $2.7 billion backlog, up 21% year-over-year.

 

The completion of Q2's cloud migration in January 2026 is the single most important structural event in the company's recent history, and it connects directly to the AI opportunity. For years, Q2 operated across a hybrid of legacy on-premise infrastructure and cloud environments, which constrained both gross margins and the ability to deploy AI capabilities consistently across the customer base. The cloud migration is now complete, and management described it as the single biggest lever for achieving their 60%+ gross margin target, but the less-discussed implication is that a fully cloud-native architecture is also the prerequisite for embedding AI across Q2's platform in a way that reaches all customers simultaneously rather than a patchy rollout constrained by infrastructure heterogeneity.

 

The most strategically important AI claim on the call was CEO Matt Flake's assertion that "AI innovation within financial services will flow through Q2, not around us." That's a bold statement, and worth unpacking. Q2's argument is that AI in banking can't simply be dropped in from the outside; it requires deep integration with the core digital banking platform, where customer identity, transaction history, behavioral patterns, and risk profiles live. Generic AI tools built by fintech startups or AI companies don't have access to that data; Q2 does, because it is the system of record for the digital banking experience at hundreds of financial institutions. The Innovation Studio framework, Q2's open development platform, is the vehicle through which fintechs and partners can build AI-powered applications on top of Q2's infrastructure, keeping Q2 at the center of the innovation ecosystem rather than being bypassed by it.

 

The fraud and risk opportunity is where the AI story becomes most concrete near-term. Risk and fraud has emerged as one of the most strategically important areas in Q2's portfolio, with fraud now described as continuous, cross-channel, and embedded in nearly every digital interaction. Q2 secured its largest fraud technology contract to date with a $200 billion bank in Q4, a landmark win that validates the platform's relevance at the very top of the market. The argument management makes is compelling: AI-powered fraud detection that can synthesize signals across retail banking, small business, and commercial in real time is fundamentally different from the fragmented point solutions banks have historically deployed. Q2's integrated platform, sitting across all of those channels simultaneously, is uniquely positioned to offer that cross-channel view in a way that no standalone fraud vendor can match without first solving an integration problem.

 

The cross-sell opportunity within the existing customer base is also notable. Only 10% of Q2's Tier 1 customer base has all three of its core solutions (retail digital banking, commercial digital banking, and relationship pricing/fraud), and only 25-30% of digital banking customers have adopted fraud products. That penetration gap represents an enormous amount of runway for expansion revenue growth within an already-won, already-contracted customer base, exactly the kind of expansion that AI-powered products will accelerate as banks feel growing urgency around fraud prevention and competitive differentiation in digital banking experience. The financial institution sector's conservatism, which has historically slowed Q2's growth, is now working in its favor: banks are sticky, they don't churn lightly, and when they do adopt AI solutions, they go deep on a trusted platform rather than experimenting broadly with unproven tools.

 

HubSpot

HubSpot's Q4 2025 call was one of the more intellectually substantive in this series, because CEO Yamini Rangan was explicitly grappling with a question that most companies in this series prefer to answer only obliquely: in a world where AI can generate marketing content, qualify leads, answer customer inquiries, and analyze pipeline data autonomously, what is the role of a human-operated CRM platform like HubSpot? Her answer, that the gap between generating AI output and driving actual growth outcomes is where HubSpot wins, is the central thesis of the company's 2026 strategy.

 

Full-year 2025 revenue grew 18.2% to $3.1 billion, with Q4 at 18% constant currency growth and 20% as reported, alongside a Q4 operating margin of 22.6% — the kind of growth-and-margin combination that puts HubSpot in the upper tier of scaled SaaS businesses. Net new ARR grew 24% in 2025, six points above constant currency revenue growth, a leading indicator that the demand pipeline is strengthening, not weakening, as AI proliferates.

 

The AI product metrics are more concrete than most companies in this series have delivered. Customer Agent, HubSpot's AI support agent, was activated by over 8,000 customers with mid-sixties percent resolution rates, meaning it resolved roughly two-thirds of customer inquiries without human intervention. Prospecting Agent was activated by over 10,000 customers, up 57% quarter-over-quarter, and Data Agent was activated by 2,500+ customers. The usage-based credit consumption data is particularly useful for understanding where AI is actually generating value: Customer Agent accounted for approximately 60% of credits consumed in Q4, with Prospecting Agent, Data Agent, and intent monitoring each representing 10-15% of credits. That distribution tells you that the customer service use case is the most advanced and most adopted, while prospecting and data enrichment are emerging as the next wave.

 

The customer outcome data is where Rangan built her AI credibility argument most effectively. Customer Agent users were booking nearly twice as many meetings compared to the prior year, a striking productivity improvement that goes directly to the growth outcomes that HubSpot's customers care about. This is Rangan's core thesis made concrete: generic AI can generate text, but HubSpot's AI, embedded in a platform that knows your contacts, your deal history, your customer interactions, and your competitive context, can actually drive measurable revenue growth because it has the context to make good decisions, not just the capability to generate outputs.

 

CTO Dharmesh Shah's presence on the call was notable; he appeared specifically to address the competitive threat from external AI agents and the question of whether tools like Claude or ChatGPT connectors could route around HubSpot entirely. Shah noted increased utilization of HubSpot's Claude and ChatGPT connectors by leading-edge customers but no material uptake yet for third-party Claude Co-Work-style features that might disintermediate the platform. The implication is that HubSpot is watching this carefully and doesn't believe the disintermediation risk is imminent, but the fact that Shah was on the call to address it signals that management takes the question seriously. HubSpot's answer is that context and integration are the moat, not the AI model itself, and that a customer who tries to use a generic AI agent for sales and marketing quickly discovers they need the CRM data that HubSpot holds.

 

The internal AI adoption story adds the efficiency dividend dimension. 97% of code committed in 2025 used AI assistance, and nearly 60% of support is handled by AI internally, numbers that are among the highest cited by any company in this series.

 

The pricing transition is also worth understanding as context for the 2026 growth outlook. 90% of legacy customers have moved to HubSpot's new pricing model, with nearly 50% of ARR through first renewal — meaning the structural shift that enables HubSpot to charge for AI usage (through the credit model) is nearly complete. As that base continues to expand and AI credit consumption grows, HubSpot gains a consumption-based revenue layer on top of its subscription base — the same hybrid model that several companies in this series are pursuing, and one that creates upside to guidance as AI usage scales.
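The hybrid structure described above can be sketched in a few lines. All of the figures here (seat count, seat price, credit volume, credit price) are hypothetical placeholders, not HubSpot's actual pricing; the point is only how a fixed subscription layer and a usage-based AI layer combine into one bill:

```python
# Hypothetical hybrid revenue model: fixed subscription plus AI credit
# consumption, as described on the call. All figures are illustrative.
seats = 50
price_per_seat = 45.0         # $/seat/month (hypothetical)
credits_consumed = 20_000     # AI credits used this month (hypothetical)
price_per_credit = 0.01       # $/credit (hypothetical)

subscription_revenue = seats * price_per_seat              # fixed layer
consumption_revenue = credits_consumed * price_per_credit  # scales with AI use

total = subscription_revenue + consumption_revenue
print(f"Subscription: ${subscription_revenue:,.0f}, "
      f"AI credits: ${consumption_revenue:,.0f}, total: ${total:,.0f}")
```

The attraction of this structure for vendors is visible in the code: the subscription layer is predictable and unaffected by AI adoption, while the credit layer grows automatically as agents consume more usage, which is exactly why it creates upside to guidance rather than being baked into it.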


Sammy Abdullah

Managing Partner & Co-Founder
