What Data Engineers Don’t Know About FinOps

By Srikanth

It’s not the first thing you think of when you hear the name Mark Zuckerberg, but his mantra to “move fast and break things” from Facebook’s early days is one many data engineers hold dear. After all, the freedom to “fail fast” is an approach that can open the door to innovation.


An environment where not only is it okay to fail—but encouraged—can lead to dramatic product adoption and revenue growth. The key is to understand unit economics—ensuring that data product revenue outpaces cloud analytics expenses. Cloud data spending can become an anchor instead of an accelerator without some checks and balances in place.
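Unit economics boils down to a simple ratio: revenue generated by a data product against the cloud spend required to produce it. Here is a minimal, illustrative sketch; the function name and the monthly figures are hypothetical, not drawn from any real product.

```python
def unit_margin(product_revenue: float, cloud_cost: float) -> float:
    """Return revenue earned per dollar of cloud analytics spend.

    A ratio above 1.0 means the data product's revenue outpaces its
    cloud cost; below 1.0, spend is an anchor, not an accelerator.
    """
    if cloud_cost <= 0:
        raise ValueError("cloud_cost must be positive")
    return product_revenue / cloud_cost

# Hypothetical monthly figures for one data product:
print(unit_margin(product_revenue=120_000, cloud_cost=80_000))  # 1.5
```

In practice the hard part is attribution, mapping each product's revenue to the clusters, pipelines, and queries that serve it, which is exactly the visibility a FinOps program provides.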

Enterprises want, and need, to stay competitive, but not at the risk of financial ruin. The desire to grow efficiently while maximizing ROI is the impetus behind FinOps: a financial discipline that drives measurable, predictable ROI by helping businesses get the most from usage-based cloud pricing, paying only for what they use and using only what they need. For FinOps to work, however, everyone needs to be aligned, and that’s often where organizations encounter their first stumbling block: 40% of respondents to the 2023 State of FinOps survey reported difficulty enabling engineers to take action on cost optimization initiatives.

As a data engineer, you might see efforts to rein in cloud data costs as someone else’s problem, or as a distraction from day-to-day tasks. But when it comes to cloud data platforms, costs are a growing concern: according to IDC, data management represents about 40% of the cloud bill and is its fastest-growing segment. A strong FinOps program helps data engineers make the most of the cloud’s variable cost model. Here are just a few things data engineers might not know about FinOps:

The race to leverage AI drives historic data volumes, user counts, and challenges. Industry-specific AI adoption is still nascent, but 75% of professionals surveyed by McKinsey expect GenAI will cause significant or disruptive changes to their industries. The stakes are high, and organizations are rapidly launching new pipelines to feed AI’s voracious data appetite.

New applications create an urgent need for additional data sets for training, validation, verification, and drift analysis. As data teams scramble to put the required pipelines and cloud infrastructure in place, a complex tangle of interdependencies and inefficiencies can stall innovation. Seventy-two percent of executives surveyed by MIT Technology Review say data challenges are the most likely factor to jeopardize their AI/ML goals.

AI helps predict data pipeline and data application efficiency. Consider AI-driven data observability, which according to Gartner, “goes beyond traditional monitoring and detection. It also provides robust, integrated visibility over data and data landscape.” AI-driven data observability tools improve data’s overall reliability by surfacing anomalies and unknowns that warrant a second look before they multiply or work their way into production. Leveraging FinOps, businesses can scale their analytics capabilities based on demand while optimizing cost and ultimately unlocking the full potential of cloud-based data analytics.

Time is money. Benjamin Franklin could hardly have envisioned the staggering sums that modern cloud computing systems can, and do, burn through, but his adage “time is money” still holds. The longer a cloud compute instance runs, the higher the associated data processing expenses. Achieving high performance in cloud data analytics goes a long way toward improving cost efficiency. Architecting and coding for performance, however, are just the first steps toward maximizing results; AI-powered performance tuning can dramatically boost ROI.

With so many users across the organization with varying degrees of skills, something such as not leveraging partitioning can make a job take exponentially longer to process and as a result, rack up unnecessary costs. Real-time monitoring of jobs, pipelines, projects, and even users allows for the proactive identification of bottlenecks and/or inefficiencies in resource utilization. It’s this insight that empowers organizations to make timely adjustments that enhance performance and manage spend.
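The partitioning effect is easy to see in miniature. This is an engine-agnostic sketch, plain Python rather than any specific warehouse or Spark API: without partitioning, a single-day query must examine every record; with the data pre-grouped by date, the same query touches only one partition.

```python
# Thirty days of illustrative records, one per day.
records = [
    {"date": f"2024-01-{d:02d}", "amount": d * 10} for d in range(1, 31)
]

# Without partitioning: the filter must examine all 30 records
# to answer a single-day query.
full_scan = [r for r in records if r["date"] == "2024-01-15"]

# With partitioning: records are grouped by date up front, so the
# same query reads exactly one partition and skips the rest.
partitions: dict[str, list[dict]] = {}
for r in records:
    partitions.setdefault(r["date"], []).append(r)
pruned_scan = partitions["2024-01-15"]

print(len(records), "records examined vs", len(pruned_scan))
```

On a toy dataset the difference is 30 rows versus 1; on a multi-terabyte table billed by bytes scanned, that same ratio is the gap between a cheap query and an expensive one.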

One word: Elastic. The cloud’s elasticity is a double-edged sword. On one side, it gives data engineers near-infinite capacity to spin up cloud infrastructure whenever they need it. On the other, the very pay-as-you-go nature of cloud services can lead to unpredictable costs, especially in the hands of inexperienced users. Implementing a FinOps practice helps organizations forecast and plan for their cloud expenses, and in doing so brings predictability to the variable cost model. By establishing budgeting frameworks and implementing cost-allocation strategies, businesses can better manage their cloud spending and stave off any nasty surprises.
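A budget guardrail can be as simple as a run-rate projection checked daily. The sketch below uses a naive linear forecast with made-up numbers; real FinOps tooling applies far richer models, but the guardrail idea is the same.

```python
def forecast_month_end(spend_to_date: float, day_of_month: int,
                       days_in_month: int = 30) -> float:
    """Naive linear run-rate forecast of month-end cloud spend."""
    daily_rate = spend_to_date / day_of_month
    return daily_rate * days_in_month

# Hypothetical: $21,000 spent by day 10 against a $50,000 budget.
budget = 50_000.0
projected = forecast_month_end(spend_to_date=21_000.0, day_of_month=10)
if projected > budget:
    print(f"Warning: projected ${projected:,.0f} exceeds "
          f"${budget:,.0f} budget")
```

Catching the overrun on day 10 leaves two-thirds of the month to throttle workloads or right-size clusters, rather than discovering the surprise on the invoice.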

Collaboration between data engineers and data developers is essential to optimizing cloud data spend. Many companies have instituted tools and processes that provide visibility into cluster usage, recommendations for right-sizing, and opportunities to fine-tune code for efficiency in dev environments, before inefficient code reaches production and drives up costs. It takes a shift toward a finance mindset among data teams, plus collaboration with FinOps practitioners, to focus on identifying opportunities to improve efficiency, reduce waste, and drive innovation with higher ROI.

Agile data engineering: Integrating cost optimization into CI/CD

Time spent optimizing code and query performance and efficiency in development saves troubleshooting and re-work when data pipelines and applications are in production. Applying agile methodologies, such as continuous integration and continuous deployment (CI/CD), helps you share data engineering best practices, improve code quality, and save time.

CI/CD can streamline data pipeline development and deployment, accelerating release times and frequency, while improving code quality.

Dynamic analysis of your code, queries, data layout/partitioning, job, and cluster configuration can be automated as part of your CI/CD pipeline to simplify this process and ensure data applications are optimized before you put them into production. An automated CI/CD pipeline streamlines development, enables faster time-to-market, and improves collaboration. The result is reduced manual effort and faster delivery of new data products and capabilities.
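One concrete shape such a CI/CD gate can take is a cost check that fails the build when a query's estimated scan exceeds a budget. The sketch below is hypothetical: the function name and the 1 TiB threshold are invented for illustration, and in practice the byte estimate would come from your engine's dry-run or EXPLAIN facility.

```python
MAX_SCAN_BYTES = 1024**4  # example policy: 1 TiB budget per query

def check_query_cost(query_name: str, estimated_bytes: int) -> None:
    """Fail the CI pipeline if a query's estimated scan is over budget."""
    if estimated_bytes > MAX_SCAN_BYTES:
        raise SystemExit(
            f"{query_name}: estimated scan "
            f"{estimated_bytes / 1024**4:.2f} TiB exceeds the "
            f"{MAX_SCAN_BYTES / 1024**4:.0f} TiB budget"
        )

# A 200 GiB query passes the gate; an over-budget one aborts the build.
check_query_cost("daily_rollup", estimated_bytes=200 * 1024**3)
```

Because the check runs before merge, an unpartitioned full-table scan gets caught in review rather than on next month's bill.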

Implementing a successful FinOps program isn’t without its challenges, but the rewards are great. Before setting out, make sure your company has set clear financial goals that align with its business objectives. Buy-in is also key. Fostering cross-functional collaboration between finance, IT, and operations teams to drive effective implementation is a must. It’s also important to leverage monitoring and observability tools to automatically tag your cloud resources. And lastly, it’s imperative that data usage patterns are continuously monitored and analyzed, as that allows you to increase efficiency and run more workloads on your cloud data platform.
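Tagging is what makes cost allocation possible, and enforcing it can be automated. Here is a minimal, vendor-neutral sketch; the required tag names are an example policy, not a standard.

```python
# Example tagging policy; real policies vary by organization.
REQUIRED_TAGS = {"team", "cost-center", "environment"}

def missing_tags(resource_tags: dict[str, str]) -> set[str]:
    """Return the required tags absent from a cloud resource."""
    return REQUIRED_TAGS - resource_tags.keys()

# A cluster missing its cost-center tag gets flagged for follow-up.
cluster_tags = {"team": "analytics", "environment": "prod"}
print(missing_tags(cluster_tags))  # {'cost-center'}
```

Run against every provisioned resource, a check like this keeps the "unallocated" bucket of the cloud bill from silently growing.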

Contributed By Clinton Ford, FinOps Champion, Unravel Data
