Datadog


  • Product: Observability platform
  • New or existing: Innovated an existing product
  • Disruption: Developed an observability platform for cloud-based computing that revolutionized DevOps by driving efficiency and collaboration.
  • Target Market: Developers, startups, small businesses, large businesses
  • Competitive Landscape: Datadog was a game changer and continues to lead with its breadth of capability and very customer-oriented, hands-on approach.

INTRODUCTION

Datadog came into the market in 2010 as the observability platform of the future. Unlike other players in the observability space at the time, Datadog’s platform was built for modern cloud infrastructure at a time when cloud services were just beginning to proliferate and take computing by storm.

As the cloud transformed computing, many functions from the old system, such as monitoring, could not simply carry over. Unlike in the older model, under the cloud, servers spin up and disappear constantly, workloads run in containers, and systems are spread across geographic regions. Furthermore, the very infrastructure undergoing these changes was itself defined in code.

Datadog became a key driver of this transformation by redefining how observability platforms operate as well as how teams collaborate in this newer environment. Datadog also made observability more accessible. Whereas observability platforms from Dynatrace and other firms had been an enterprise staple purchased through protracted negotiations with procurement, Datadog went bottom-up with a more easily scalable model, making its platform popular among developers and accessible to small businesses and startups, much as AWS did with cloud computing.

Here I discuss how Datadog transformed the observability industry, tracing the trajectory of its market position through the lens of its go-to-market strategy and usage-based pricing model, highlighting both strengths and weaknesses.

I then suggest an alternative usage-based pricing model that incorporates a risk-sharing mechanism similar to health insurance: the choice between a high-deductible plan with a low monthly premium and a much lower-risk plan in return for a higher premium. This model allows Datadog customers to opt out of the risk of unexpected autoscaling charges by paying a higher “premium.”

PRICING MODEL

Datadog entered the market with its Infrastructure Monitoring service, which will be the focus here.

Currently, Datadog offers three service tiers: Free, Pro and Enterprise. Each option is fenced to apply to its own use case.

Datadog anchors its pricing on the lower annual-plan rates, incentivizing longer-term commitments over its on-demand monthly service.

The Pro service starts at $15 per host per month (with an annual commitment), but deriving total cost is more complex because Datadog employs high-water-mark billing on the number of hosts, measured hourly. Specifically, it bills on the maximum host count within the lowest 99% of usage hours. Removing the top 1% provides a smoothing shield that protects the customer against billing spikes.

Let’s see how this works for our hypothetical customer Cha-Cha-Ching Inc:

Cha-Cha-Ching Inc. is a small SaaS business that bakes virtual cupcakes. It normally bakes only a few virtual cupcakes a day and needs only 20 hosts.
One day, an influencer publishes a story about its virtual cupcakes, and it sees a surge in virtual cupcake baking over the next five days. To handle the heightened volume over those five days, Cha-Cha-Ching's server system autoscales up to 200 hosts.

That translates to the following distribution over the 30 days:

  • 25 days @ 20 hosts
  • 5 days @ 200 hosts

Hours:

  • There are 720 hours in a month (30 x 24) and 120 hours in 5 days.
  • The top 1% removes about 7 hours (720 x 0.01 ≈ 7) of the highest host counts.
  • The remaining 113 hours at 200 hosts are fair game.

Cha-Cha-Ching is thus billed for 200 hosts at $15 each, or $3,000 for the month, even though it needed 200 hosts for only five days.
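The mechanics above can be sketched in a few lines of Python. This is a simplified model of high-water-mark billing as described here, not Datadog's actual billing code; the function name and the exact percentile handling are my assumptions:

```python
# Simplified sketch of P99 high-water-mark billing; an illustrative
# model, not Datadog's actual implementation.

def high_water_mark_bill(hourly_host_counts, rate_per_host=15.0, shield=0.01):
    """Bill on the peak host count after discarding the top `shield`
    fraction of usage hours."""
    hours = sorted(hourly_host_counts, reverse=True)
    drop = int(len(hours) * shield)   # ~7 of 720 hours at 1%
    billable_peak = hours[drop]       # high water mark of the remaining 99%
    return billable_peak * rate_per_host

# Cha-Cha-Ching's month: 25 days at 20 hosts, 5 days at 200 hosts
usage = [20] * (25 * 24) + [200] * (5 * 24)
print(high_water_mark_bill(usage))  # 3000.0 (200 hosts x $15)
```

Because the spike lasts 120 hours and the 1% shield removes only about 7 of them, the 200-host peak remains billable for the whole month.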

VIRTUES OF DATADOG’S USAGE-BASED PRICING MODEL

The virtues of usage-based pricing extend to both sides of the transaction. Research by OpenView showed that SaaS companies relying on usage-based pricing grew at a rate 30% higher than subscription-based peers. Under usage-based pricing, revenue grows with the customer's usage, often as the customer succeeds in gaining customers and expanding its own infrastructure (more servers, more revenue to Datadog).


With cloud infrastructure, servers are ephemeral and autoscale as activity surges; as the customer's usage scales, the high-water-mark approach keeps that scaled usage monetizable for Datadog. Datadog's products also drive efficiencies that enable businesses to better allocate resources, e.g. toward marketing or product development.

Usage-based pricing also lowers the customer's barriers to entry. Customers pay only for what they use, whereas under a subscription-based model they pay regardless of whether, or how minimally, the product gets used. Low cost barriers lessen friction at entry, enabling small businesses and startups to experiment at low marginal cost while keeping cumulative costs within budget. Paddle data found that this reduced friction is associated with 25% lower acquisition costs than under traditional models.

This is pertinent to a land-and-expand model like Datadog's. Customers tend to start with Infrastructure Monitoring and add more products over time. Low barriers reduce the need to commit upfront to a product whose value they have not yet identified. This drives product-led growth and lowers Datadog's customer acquisition costs (CAC).

THE RISKS

The risk of the high-water-mark approach is that customers may face higher-than-anticipated bills. In consideration of this, Datadog removes the top 1% of billable hours as a smoothing measure, but that has not prevented some infamously negative press.

In one widely circulated case, an engineer wrote some bad code that produced far more (billable) custom metrics than expected.

JUSTIFICATIONS

Aside from billing anecdotes and a chorus of crying developers on Reddit, the retention metrics Datadog reports tell a different story: its results indicate a strong propensity for customer relationships to expand over time. As of the end of 2025, the number of Datadog customers with over $100K in annual run-rate revenue (ARR) grew 19% year over year to 4,310 (about 90% of ARR). The number of customers with ARR over $1M grew 30% to 603.

Datadog's trailing 12-month dollar-based net retention rate (NRR) of 120% as of December 31, 2025 indicates that the same cohort of existing customers grew its spend by 20% over the period through service expansion. It can thus be inferred that more customers are spending more, expanding their use, and crossing higher spend thresholds over time.
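As a toy illustration of how a 120% NRR implies 20% net expansion (a simplified single-cohort formula; Datadog's actual metric is trailing-twelve-month and dollar-weighted across all customers):

```python
# Toy NRR calculation for a single cohort (simplified illustration,
# not Datadog's exact definition).
start_arr = 100.0  # cohort ARR a year ago
end_arr = 120.0    # same cohort's ARR today: expansion minus churn
nrr = end_arr / start_arr
print(f"NRR = {nrr:.0%}, net expansion = {nrr - 1:.0%}")  # NRR = 120%, net expansion = 20%
```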

Further evidence of land-and-expand success is the increase in customers using multiple Datadog products. As of December 31, 2025, 55% of customers were using four or more products, up from 50% the year before.


Switching observability platforms entails massive configuration, learning, and deployment complexity. Switching carries tremendous risk to the business and a high cost of failure, given the mission-critical nature of the systems Datadog supports. Gross revenue retention (GRR) provides a good measure of the relevance of switching costs: high GRR suggests low churn and contraction, indicating that few customers have severed ties or cut back on use. Morningstar reports that while average gross retention among wide-moat companies is 94%, Datadog comes in at 97%. Importantly, Datadog's high GRR suggests that customers have stayed on the platform despite all the noise about billing risk cited above.

ALTERNATIVE APPROACHES

As with any business, the runway before competition catches up shortens over time. Engineers familiar with Datadog now monetize migration services, which lowers switching costs. Disintermediation is also a growing threat: OpenAI, one of Datadog's largest customers, is notably moving toward building its own solution.

Datadog holds a unique position by virtue of its product- and customer-oriented approach as well as its strong developer community (I myself have run into a few outspoken members).

The glaring weakness that competitors do target is customer fear of sudden, astronomical bills.

Two ways to protect positioning amid these particular risks:

  • An alert service (a monetizable add-on) that notifies stakeholders (the number of whom is also monetizable) when billing thresholds are crossed. I believe Datadog may already have this.
  • A risk-based pricing model similar to health insurance, with two plans:
      • a high-deductible, high-risk plan at $15 per host with 1% smoothing (the high-water-mark count on the lowest 99% of hours), and
      • a high-premium plan with a wider 10% shield from billing spikes (the high-water-mark count on the lowest 90% of hours).
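To see how the two plans would diverge, consider a shorter, two-day spike (48 hours at 200 hosts): the 1% shield still bills the peak, while the 10% shield absorbs it entirely. A sketch of the comparison, where the $25 "high premium" rate and the 10% shield are my proposed parameters, not an existing Datadog offering:

```python
# Comparison of the two hypothetical plans; rates and shield widths
# are illustrative proposals, not Datadog's actual pricing.

def high_water_mark_bill(hourly_host_counts, rate_per_host, shield):
    hours = sorted(hourly_host_counts, reverse=True)
    return hours[int(len(hours) * shield)] * rate_per_host

# A month with a two-day spike: 672 hours at 20 hosts, 48 at 200
usage = [20] * (28 * 24) + [200] * (2 * 24)

# High-deductible plan: $15/host, 1% shield (drops ~7 hours)
print(high_water_mark_bill(usage, 15.0, 0.01))  # 3000.0 -- spike still bills
# High-premium plan: $25/host, 10% shield (drops 72 hours)
print(high_water_mark_bill(usage, 25.0, 0.10))  # 500.0 -- spike absorbed
```

The customer who buys the wider shield pays a higher per-host rate every month in exchange for protection against short autoscaling events, mirroring the premium-versus-deductible tradeoff in insurance.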

The alert service seems the most obvious, and if such an add-on already exists for every Datadog product that carries spiky-bill risk, then more should be done from a product and marketing standpoint to make that clear and stymie the negative chatter.

The second may be one to keep in the back pocket as the competitive landscape shifts. Datadog reports very healthy cash flows, so price reductions may become inevitable over the long run, but the main vulnerability remains the pesky spiky bill.

This model preserves the value-cost alignment of Datadog's pricing while allowing customers to opt into whichever risk-return tradeoff they can stomach.

Combined, the two protection mechanisms place all of that protection in the customer's hands, so that spiky bills associated with high-water-mark pricing can no longer be attributed to a faulty model but to choices made by the customer.