
Reducing Databricks Spend with the FinOps Framework: A Real-World Case Study

Originally Published:
July 28, 2025
Last Updated:
July 31, 2025
8 min

Introduction: Why Cloud Cost Data Needs a Common Language

Shared through the FinOps Foundation’s community stories, this case study reflects the practical strategies forward-thinking enterprises are using to reclaim control over cloud and SaaS spending.

When cloud spending spirals beyond visibility, accountability becomes elusive and optimization all but impossible. That was precisely the challenge faced by a large mining enterprise that had invested heavily in Databricks for machine learning, AI exploration, and IoT data processing across its mining operations. While Databricks’ consumption-based pricing provided flexibility, the lack of standardized cost data made it nearly impossible to allocate cloud costs accurately, identify waste, or enable business units to take ownership of their spending.

That’s where the FOCUS FinOps cloud cost data standard entered the picture. With aspirations to improve cost accountability, automate ingestion, and enable FinOps platform integration, this AI-driven organization embarked on a journey to transform Databricks cost data into the FOCUS specification, ultimately aligning its cloud strategy with business goals.

These are the exact types of problems CloudNuro.ai was built to solve, across cloud and SaaS.

FinOps Journey: Re-Mapping Chaos into the FOCUS FinOps Cloud Cost Data Standard

For this global data-and-AI enterprise operating complex analytics, machine learning, and IoT workloads, the promise of Databricks was performance at scale. But the reality of managing cost allocation for DBU-based billing across multi-cloud deployments was anything but easy. Despite Databricks offering usage tables and internal dashboards, these tools failed to deliver the standardized, business-aligned visibility the FinOps team needed.

What followed was a deeply technical and strategic effort to adopt the FOCUS FinOps cloud cost data format, operationalize ingestion, and unlock cost ownership across teams.

Challenge: SaaS Cost Data Was Not “Ready” for FinOps

The team realized early on that Databricks’ native system tables (e.g., system.billing.usage, system.billing.list_prices) offered raw billing data, but not in a form aligned with the FOCUS spec. This meant they couldn’t plug cost data into their chosen FinOps platform or track trends across clouds, products, or services.
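
To make “raw” concrete: a minimal notebook query against those system tables looks something like the sketch below. It assumes a Databricks notebook where spark is predefined; the column names follow public Databricks documentation and may vary by release.

    # Peek at the raw billing data exposed by Databricks system tables.
    # Column names follow public Databricks docs and may vary by release.
    raw = spark.sql("""
        SELECT usage_date,
               sku_name,
               usage_quantity,
               usage_unit
        FROM system.billing.usage
        WHERE usage_date >= date_sub(current_date(), 7)
    """)
    raw.show(10, truncate=False)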

Internally, frustration mounted:

  • Finance couldn’t align DBU spend to services or teams
  • Engineering teams had no visibility into workload-level cost deltas
  • Procurement lacked leverage for negotiation due to unclear usage patterns

What made it worse? Each Databricks SKU had different price points based on region, compute type, and deployment model, yet this granularity was invisible to most of the business.

Step 1: Understanding the FOCUS Schema and Translation Pain

The team began by downloading the FOCUS 1.0 and 1.1 specs. But as they discovered, the schema wasn’t plug-and-play. FOCUS expected 35+ columns such as:

  • billing_period_start and billing_period_end
  • usage_date
  • service_name, usage_type, resource_id
  • cost, currency, region

Yet Databricks tables didn’t offer many of these fields directly:

  • Date formatting issues emerged (e.g., milliseconds in ISO 8601 timestamps)
  • Field names were inconsistent or missing altogether
  • Some data, like region, resource ID, or discount types, required computed values

So, they built a translation matrix: Column A held the FOCUS spec fields, Column B what Databricks provided, and Column C the derived logic. Everything was documented in GitHub, with inline markdown explaining each SQL transformation and its reasoning.

They used GitHub Copilot to iterate on logic:

  • Parse FOCUS docs into field lists
  • Infer data types
  • Suggest transformation logic
  • Validate joins across list_prices and usage tables

By breaking down the FOCUS schema into “consumable bites,” the team built SQL queries that created 100% FOCUS-conformant datasets, including workarounds for fields like sku_id, service_id, and usage granularity.
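
As an illustration, a trimmed-down version of such a transformation query might look like the following. The snake_case target columns mirror the ones named in this article, and the list-price join derives cost from pricing.default; the team’s actual SQL, documented in their GitHub repo, covers far more fields and edge cases.

    # Sketch of a FOCUS-aligned transformation. Target column names follow
    # this article's mapping; the real pipeline handles many more fields.
    focus_df = spark.sql("""
        SELECT
            date_trunc('month', u.usage_date)     AS billing_period_start,
            last_day(u.usage_date)                AS billing_period_end,
            u.usage_date,
            u.billing_origin_product              AS service_name,
            u.sku_name                            AS usage_type,
            u.usage_quantity * p.pricing.default  AS cost,
            p.currency_code                       AS currency
        FROM system.billing.usage u
        JOIN system.billing.list_prices p
          ON u.sku_name = p.sku_name
         AND u.cloud = p.cloud
         AND u.usage_start_time >= p.price_start_time
         AND (p.price_end_time IS NULL
              OR u.usage_start_time < p.price_end_time)
    """)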

“We weren’t going to wait for the vendor. We used what they gave us, mapped it to what we needed, and built an ETL layer around it.” — Cloud Cost Architect

Step 2: Building the Notebook-Based ETL Engine

Databricks Notebooks became the orchestration layer. The logic was broken into three distinct stages:

A. Retrieval
  • A scheduled job ran daily at 12:05 AM (after billing reconciliation)
  • It queried system.billing.usage and list_prices with start_date and end_date filters
  • Data was scoped per billing account and usage type

B. Transformation
  • SQL transformed native Databricks fields into the FOCUS-aligned schema
  • Timestamps were converted to FOCUS-compliant ISO formats
  • service_name was inferred from origin product fields
  • Conditional joins filtered usage types by workload type (SQL Jobs, DLT, All-Purpose)

C. Output and Load
  • The resulting dataset was exported as CSV (required by their FinOps platform)
  • Files were saved to Unity Catalog volumes (Databricks’ internal object storage)
  • REST API scripts uploaded the files into the platform via a three-step ingestion process (a condensed sketch follows)
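
Condensed into notebook code, stages B and C might look like the sketch below. It reuses focus_df from the Step 1 sketch; the volume path, platform URL, and the three ingestion endpoints are hypothetical placeholders, since the FinOps platform is unnamed and its API is not public.

    import requests

    OUT_PATH = "/Volumes/finops/focus/exports"          # hypothetical volume
    PLATFORM = "https://finops.example.com/api/v1"      # hypothetical platform
    TOKEN = dbutils.secrets.get("finops", "api-token")  # assumes a secret scope

    # Write the FOCUS-aligned dataset to a Unity Catalog volume as one CSV.
    (focus_df.coalesce(1)
             .write.mode("overwrite")
             .option("header", True)
             .csv(f"{OUT_PATH}/daily"))

    # Locate the written part file (path handling simplified for this sketch).
    part = [f.path for f in dbutils.fs.ls(f"{OUT_PATH}/daily")
            if f.path.endswith(".csv")][0].replace("dbfs:", "")

    headers = {"Authorization": f"Bearer {TOKEN}"}

    # Hypothetical three-step ingestion: register a batch, upload the file,
    # then commit so the platform starts processing.
    batch = requests.post(f"{PLATFORM}/ingest/batches", headers=headers).json()
    with open(part, "rb") as fh:
        requests.put(batch["upload_url"], data=fh)
    requests.post(f"{PLATFORM}/ingest/batches/{batch['id']}/commit",
                  headers=headers)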

Step 3: Cross-Team Collaboration via Visualization

Before pushing to the FinOps platform, the team used Databricks' own BI tools to validate the output. They created in-notebook dashboards (the first is sketched after this list):

  • Cost by service name over time
  • DBU consumption by cluster type
  • Price skew comparisons across All-Purpose vs. Jobs vs. SQL Compute
  • Visual deltas between actual usage and reserved DBU commitments
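
The first of those views, for example, reduces to a small aggregation over the FOCUS-formatted output (again assuming focus_df from the earlier sketch):

    # Cost by service name over time: the kind of query behind the
    # in-notebook validation dashboards (assumes focus_df from above).
    by_service = (focus_df
                  .groupBy("usage_date", "service_name")
                  .agg({"cost": "sum"})
                  .withColumnRenamed("sum(cost)", "daily_cost")
                  .orderBy("usage_date"))
    display(by_service)  # Databricks renders this as a chart in-notebook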

These dashboards weren’t just internal—they were shared across:

  • Engineering teams to reduce idle cluster time
  • Finance teams to plan for discounts
  • Product owners to identify “cost per experiment” metrics

This data democratization seeded early buy-in across the business—even before chargeback was implemented.

Step 4: Community-Sourced Problem Solving

One sticking point nearly stalled the project: their ingestion platform didn’t accept timestamps with milliseconds. The FOCUS spec implied ISO 8601 compliance—including milliseconds.

A Slack message to the FinOps Foundation community led to:

  • A response from Datadog’s FinOps architect
  • Confirmation from Walmart’s engineer that milliseconds were optional
  • A community-led fix: truncate to YYYY-MM-DDTHH:MM:SSZ and proceed (sketched below)
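
The fix itself amounts to one formatting expression in the transformation stage. In PySpark it can be written as below (applied here to billing_period_start; the same pattern works for any timestamp column in the mapping):

    from pyspark.sql import functions as F

    # Format timestamps as whole-second ISO 8601, dropping the milliseconds
    # that the ingestion platform rejected.
    focus_df = focus_df.withColumn(
        "billing_period_start",
        F.date_format("billing_period_start", "yyyy-MM-dd'T'HH:mm:ss'Z'"),
    )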

This episode crystallized why FinOps isn’t just a role—it’s a movement. Standards aren’t just documents—they’re relationships, shared knowledge, and collective success.

Curious how leading enterprises fast-track FinOps with zero rework? See how CloudNuro quietly powers success behind the scenes.

Outcomes: From Raw Billing to Real Accountability

What began as a focused technical exercise—transforming Databricks billing into the FOCUS FinOps cloud cost data standard—evolved into a business-wide transformation in cost ownership, trust, and optimization.

By standardizing cost data, automating ingestion, and enabling analysis across teams, this AI-driven enterprise turned cost chaos into clarity—and realized both financial savings and behavioral change.

1. Financial Visibility Unlocked Across Databricks Workloads

Before this project, most teams could only see blended DBU spend by environment or account. After implementing the FOCUS pipeline:

  • Every DBU, job, and cluster was tagged and mapped to a service, workload, or business function.
  • “SQL Compute,” “All-Purpose,” “DLT,” and “Interactive Apps” were now traceable to cost centers.
  • 90% of spend was allocated with FOCUS metadata within 60 days.

The FinOps platform visualizations began showing:

  • Cost by business service
  • Usage by team, region, environment
  • Top contributors to unreserved DBU overages
  • Idle cluster trends and shutdown candidates

Before: Monthly CSV extracts and manual pivot tables
After: Daily dashboards driving automated alerts and budget actions

“Now when someone asks ‘why did our cloud bill spike?’ we can answer in under 10 minutes—not 10 days.” — Sr. Manager, Cloud Engineering

2. $2.1M+ in Optimization Opportunities Unlocked

The team’s primary savings came from behavioral and architectural shifts, not negotiated discounts:

  • Idle All-Purpose Compute clusters were reduced by 70%
  • Batch jobs were moved to cheaper “Jobs Compute” environments
  • Exploratory workloads were scheduled instead of left running 24/7
  • Dashboards exposed unused committed DBUs, enabling pre-buy adjustments

These shifts led to:

  • $850K/year in All-Purpose overspend eliminated
  • $1.1M in annual savings via rearchitecting high-frequency jobs
  • $200K saved through user education on cluster defaults and cost behavior

The team also gained leverage in procurement discussions—armed with precise per-SKU usage patterns.

Wondering what’s really driving your cloud costs—and how to take control? Explore how CloudNuro uncovers answers your teams can act on.

3. Cost Attribution Empowered Budget Ownership

Once cost data conformed to FOCUS, the FinOps team could apply chargeback or showback policies with confidence. Business units received reports showing:

  • Total Databricks spend per product/service
  • Breakdown by cluster type and workload
  • Trends against forecasted DBU commitments

This transparency reduced tension between engineering and finance. Teams began forecasting compute needs proactively, not reactively.

Finance finally saw unit economics per use case:

  • Cost per GenAI document indexed
  • Cost per data pipeline executed
  • Cost per sensor stream processed (IoT)

“FOCUS gave us a language to talk to the business. It was no longer about raw DBUs—it was about outcomes.” — Director, Enterprise IT Finance

4. Platform-Led Chargeback Became Possible

With FOCUS-formatted cost data ingested daily, the FinOps platform could:

  • Run cost allocation rules by tag, label, or service name
  • Align costs with GL accounts
  • Trigger alerts for anomalies, overruns, and idle capacity

This set the stage for:

  • SaaS chargeback pilots across Databricks, Snowflake, and GCP
  • Integration with budgeting tools (ERP, FP&A)
  • Adoption of FinOps KPIs as part of executive scorecards

The CIO began using cloud cost per experiment as a key metric in quarterly reviews.

CloudNuro supports both license-based and usage-based chargeback across SaaS and cloud, with policy-driven allocations and dashboard-ready outputs. Ready to see a sample?

5. Community, Not Just Code, Drove Speed

Without FinOps Foundation Slack and GitHub collaboration, this journey might’ve taken 2x longer. The team credits:

  • Open-source schema references
  • Feedback on field transformations
  • Peer validation for timestamp logic
  • Public release of their SQL in GitHub

Their GitHub repo is now used by multiple organizations looking to transform Databricks usage into FOCUS format—cutting down implementation time for others.

They even received early confirmation from Databricks that native FOCUS export was coming, validating their advocacy efforts.

Bonus Outcome: Databricks Takes Notice

Near the end of this project, Databricks announced a private preview of native FOCUS-format cost export—something that hadn’t existed when this team began.

Their feedback, code contribution, and user story helped push the vendor to prioritize this feature.

This showcases a broader lesson: when enterprises lead, vendors listen.

Lessons for the Sector: Blueprint for Operationalizing the FOCUS FinOps Standard

This case study reveals more than just a cost transformation—it exposes a repeatable, cross-domain blueprint for adopting the FOCUS FinOps cloud cost data standard. Whether you're working with Databricks, Snowflake, ServiceNow, or any SaaS/IaaS vendor, the same challenges, breakthroughs, and best practices apply.

Let’s unpack the critical learnings every FinOps team, cloud architect, and IT finance leader should take away.

1. Don’t Wait for the Vendor—Build the Translation Layer Yourself

One of the most powerful takeaways: the absence of native FOCUS support is not a blocker.

This team didn’t wait for Databricks to build FOCUS-formatted exports. Instead, they:

  • Pulled billing and usage data from native system tables
  • Aligned fields with the FOCUS spec via SQL
  • Built automation to generate, store, and upload CSVs daily

By building the bridge themselves, they:

  • Got to value 12–18 months faster
  • Reduced manual effort across teams
  • Created leverage in vendor discussions

If your FinOps journey is stuck waiting on SaaS providers, you’re not doing FinOps—you’re doing wishful thinking.

Fascinated by how it all connects? Discover how CloudNuro bridges your SaaS and cloud data, out of the box.

2. Standardization Isn’t Just Technical—It’s Cultural

The FOCUS format gave this organization a shared language between finance, engineering, and product. No more debates about:

  • What “All-Purpose Compute” really meant
  • Why usage didn’t align with billing
  • Whether a service cost more due to design or usage

Once FOCUS was in place, dashboards showed normalized costs with consistent columns—regardless of source. This meant:

  • Finance could forecast with confidence
  • Engineers could monitor cost per cluster or experiment
  • Leadership could see trends across all cloud services

The result? Less friction, more accountability, and fewer surprises.

3. Use Visualization to Accelerate Behavioral Change

Data is powerful—but visualizations drive action.

In this case, pre-FinOps charts were isolated, raw, and unreadable by business stakeholders. Post-FOCUS dashboards revealed:

  • Idle cluster usage trending up after Q2 deployments
  • SQL workloads overtaking All-Purpose clusters by job count
  • Spend spikes caused by feature testing—before production launch

Dashboards weren’t confined to IT. They were:

  • Embedded into monthly product reviews
  • Sent to cost center owners
  • Shared in sprint planning sessions

Visualization made cloud cost everyone’s problem—and opportunity.

4. Don’t Just Map Data—Define Meaningful Cost Dimensions

FOCUS helped normalize fields, but insight comes from contextual enrichment.

This team went beyond:

  • usage_type = "SQL Compute"

They added:

  • team_owner
  • business_unit
  • environment
  • project_name

This metadata was captured at ingestion or joined from other internal systems (a minimal join sketch follows this list). It enabled dashboards that answered:

  • What’s our cost per AI experiment?
  • How does team A’s job throughput compare to team B?
  • Are we using our reserved DBUs efficiently by business function?
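
A minimal version of that enrichment, assuming a hypothetical internal mapping table (finops.metadata.team_mapping is an invented name, keyed here on usage_type for simplicity), could look like:

    # Join FOCUS rows to business dimensions from an internal mapping table.
    # The table name and join key are assumptions for illustration; real
    # pipelines typically key on tags, cluster IDs, or resource IDs.
    team_map = spark.table("finops.metadata.team_mapping")  # hypothetical

    enriched = (focus_df.join(team_map, on="usage_type", how="left")
                        .select("usage_date", "service_name", "cost",
                                "team_owner", "business_unit",
                                "environment", "project_name"))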

Cloud cost became a unit of business productivity—not just an IT line item.

What if your cost reports spoke your org’s language? See how CloudNuro maps spend to the dimensions that matter—automatically.

5. ETL Isn’t Optional—It’s a FinOps Core Capability

ETL—Extract, Transform, Load—might sound technical, but it’s foundational to FinOps success.

This case proves:

  • You don’t need a complex pipeline—just disciplined, documented SQL
  • You don’t need enterprise tooling—Databricks notebooks worked just fine
  • You do need automation, scheduling, and auditability (a scheduling sketch follows this list)
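
On the scheduling point, a daily run like the one in this case study can be registered through the Databricks Jobs API. The sketch below is a minimal example with placeholder host, token, cluster ID, and notebook path; field names follow Jobs API 2.1.

    import requests

    HOST = "https://<workspace-host>"   # placeholder
    TOKEN = "<personal-access-token>"   # placeholder

    # Daily 12:05 AM run, matching the post-reconciliation window above.
    job_spec = {
        "name": "focus-cost-etl",
        "schedule": {
            # Quartz cron: second minute hour day-of-month month day-of-week
            "quartz_cron_expression": "0 5 0 * * ?",
            "timezone_id": "UTC",
        },
        "tasks": [{
            "task_key": "transform_and_upload",
            "notebook_task": {"notebook_path": "/FinOps/focus_etl"},  # placeholder
            "existing_cluster_id": "<cluster-id>",                    # placeholder
        }],
    }

    resp = requests.post(f"{HOST}/api/2.1/jobs/create",
                         headers={"Authorization": f"Bearer {TOKEN}"},
                         json=job_spec)
    resp.raise_for_status()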

By operationalizing the flow, this team ensured:

  • Consistency in daily cost data
  • Traceability across environments
  • Trust in the source of truth used for chargeback

Every FinOps team should have a data operations playbook—and a team that owns it.

6. Community Isn’t a Luxury—It’s an Accelerator

The team’s most unexpected advantage? Slack.

A timestamp formatting issue threatened to derail ingestion. Within an hour, peers from Datadog, Walmart, and the FinOps Foundation clarified the spec and confirmed a simple fix.

This saved:

  • Days of delay
  • Incorrect assumptions
  • Rework and QA friction

Community input helped:

  • Validate schema logic
  • Confirm interpretations
  • Share reusable notebooks

Open-source code, shared knowledge, and Slack channels were as critical as cloud tooling. FinOps is not just a discipline—it’s a movement.

You’re never the only one solving this problem. Ask. Share. Collaborate. That’s how you do FinOps right.

7. Requesting Vendor Support Matters—and It Works

At the start, Databricks didn’t support FOCUS-formatted exports. The team took two actions:

  1. Sent feedback via the in-product form
  2. Shared their needs with the Databricks account team

By the end of the project, Databricks had announced a private preview of FOCUS support—accelerated by community feedback.

Vendors respond to clear, repeated demand. When FinOps practitioners lead, platforms follow.

From Visibility to Value—Faster

This case study illustrates what’s possible when an enterprise doesn’t wait for perfection—but builds it. By transforming raw billing exports into standardized, FOCUS-aligned cloud cost data, this AI-first organization achieved:

  • Near real-time visibility into Databricks spend
  • Department-level cost ownership
  • 7-figure optimization wins
  • Vendor influence through practitioner-led transformation

But most importantly, they unlocked organizational confidence—finance could plan, engineering could optimize, and leadership could measure.

Now imagine doing this not just for Databricks, but across:

  • SaaS vendors with hidden usage patterns
  • IaaS providers with inconsistent metadata
  • Cloud-native services with dynamic consumption

That’s what CloudNuro.ai enables.

CloudNuro helps you:

  • Ingest cost and usage data across SaaS and cloud
  • Normalize it into FOCUS, showback, or chargeback-ready formats
  • Map cost to teams, apps, and business units
  • Automate policy-driven allocation and reporting
  • Empower CIOs, CFOs, and FinOps leaders to govern spend collaboratively

We don’t replace your cloud or SaaS tools—we help you finally understand them.

Want to replicate this transformation?
Book a free FinOps insights demo with CloudNuro.ai to identify waste, enable chargeback, and drive accountability across your tech stack.

A New Language for Cloud Accountability

“Having a clear view of who’s using what, and what it’s costing us, has changed the way we operate. It’s not just about spend anymore—it’s about strategy. We’ve gone from reactive cloud billing to proactive cost engineering.”

— Head of Cloud Finance, Global AI & Data Enterprise

This quote underscores the real transformation: FOCUS wasn’t just a data format—it was a trust enabler. It allowed teams across IT, product, and finance to collaborate using the same source of truth.

Ready to shift from reactive firefighting to confident financial strategy? Learn how CloudNuro makes that transition seamless.


This story was originally shared with the FinOps Foundation as part of their enterprise case study series.
