
Why the Cheapest Sensor Is Never the Cheapest Decision


Author: Srihari Maddula  •  Founder & Technical Lead, Eurth Techtronics Pvt Ltd

Category: Manufacturing Realities 

Estimated Reading Time: 18–20 minutes

Published: April 2025


The Hook: A Number That Feels Like a Win


It is a feeling every hardware engineer and product manager recognises. You are deep in a BOM review. Columns of component descriptions, quantities, and unit prices stretch across your screen. Then you find it — an alternative sensor, functionally equivalent on the datasheet, priced at forty percent less than what you originally specified. The procurement team is happy. The timeline looks tighter. You mark the swap as approved and move forward.


Twelve months later, your field support team is drowning. Calibration drift reports are coming in from deployed units. Your firmware team has spent six weeks chasing a timing sensitivity bug that only appears with the new component under certain temperature conditions. The replacement part has a sixteen-week lead time. And your original vendor — the one whose component you dropped — has moved on.


What felt like a win at the BOM stage has compounded into a cost that dwarfs the original saving. This is not an unusual story. Across embedded product development, in industrial, agricultural, medical, and consumer IoT systems, component selection decisions made on unit price alone are among the most reliably expensive mistakes a team can make.


This article is a structured framework for thinking about the true cost of a sensor or component. It is written for hardware engineers, systems architects, and technical founders who want to move beyond the BOM number and reason about the full lifecycle economics of their component choices.



Section 1: Why Unit Price Is a Misleading Signal


1.1 — The BOM Is a Snapshot, Not a System

A bill of materials is a point-in-time list. It captures what a component costs per unit at a given quantity, from a given vendor, at a given moment. It does not capture what that component will cost your engineering team to integrate, your manufacturing line to test, your quality team to characterise, or your support organisation to maintain in the field.

The unit price is visible. Everything downstream of it is not — until it materialises as an engineering escalation, a customer complaint, or a production halt.

This asymmetry between visible and hidden cost is the core problem. Teams optimise aggressively on the visible number because it is measurable, auditable, and directly linked to margin targets. The hidden costs are diffuse — they accrue slowly across different teams over months and never appear as a single line item that can be traced back to the original component decision.


1.2 — The Seven Cost Dimensions of a Component

A component carries cost across seven distinct dimensions. Only one of them — unit price — is fully visible at BOM stage. The remaining six are hidden or easily ignored, and their combined weight almost always exceeds the unit price delta you are trying to optimise.


| Cost Dimension | What It Represents | Visibility at BOM Stage |
| --- | --- | --- |
| Cost Signal | Unit price of the sensor | Very visible — quoted on every BOM |
| Calibration Cost | Engineer hours to characterise & validate | Hidden — often discovered post-prototype |
| Firmware Complexity | Driver depth, timing sensitivity, edge cases | Hidden — grows with integration |
| Field Failure Rate | Drift, sensitivity degradation, ESD exposure | Hidden — reveals itself months later |
| Replacement Logistics | Spare parts availability, requalification time | Hidden — painful during support phase |
| Second Source Risk | Single vs multi-vendor supply chain | Partially visible — ignored under time pressure |
| End-of-Life Risk | Longevity of the component in the market | Hidden — catastrophic when triggered |

The critical insight here is not that hidden costs exist — most experienced engineers know they do. The insight is that hidden costs are asymmetric in their distribution. They tend to concentrate in components that appear cheap on the surface because cheapness is often achieved by eliminating the things that make a component robust: tighter manufacturing tolerances, better characterisation data, longer longevity commitments, and stronger application engineering support.


1.3 — The Compounding Problem


These hidden cost dimensions do not exist in isolation. They interact and amplify each other.


A component with higher calibration cost typically also has higher firmware complexity, because the behaviour that needs compensating in software is the same behaviour that makes factory calibration expensive. A component with an elevated field failure rate typically generates both replacement logistics burden and requalification effort if it triggers a regulatory re-submission in regulated industries.

The result is a cost that does not add — it multiplies. A sensor that saves you eight dollars per unit in hardware cost can easily generate forty to eighty dollars per unit in total lifecycle cost if even two or three of the hidden dimensions are significantly worse than your incumbent component.


Section 2: The Calibration Cost Trap

2.1 — Factory Calibration Is Engineering Time at Scale


When you select a component that requires factory calibration — offset compensation, sensitivity normalisation, temperature coefficient correction — you are not just adding a step to your manufacturing line. You are committing to an engineering investment that scales with every unit you produce.


At prototype stage, this is invisible. Your firmware engineer runs a calibration routine, stores coefficients in non-volatile memory, and moves on. The process takes a few minutes per unit. At ten units, it is manageable. At ten thousand units, that same process — if not architected carefully — becomes a manufacturing bottleneck that limits your throughput, requires specialised test equipment, and adds a non-trivial per-unit cost to your COGS.
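The scaling effect described above can be made concrete with a back-of-envelope throughput model. Every figure here — minutes per unit, station count, engineering rate — is a hypothetical assumption for illustration, not measured data:

```python
# Illustrative sketch: how per-unit calibration time becomes a line
# bottleneck at volume. All figures are hypothetical assumptions.

CAL_MINUTES_PER_UNIT = 4        # assumed hand-run calibration routine
STATIONS = 2                    # assumed parallel calibration stations
SHIFT_MINUTES = 8 * 60          # one 8-hour shift per day
ENGINEER_RATE_PER_HOUR = 40.0   # assumed fully loaded labour cost, USD

def units_per_day(cal_minutes: int, stations: int, shift_minutes: int) -> int:
    """Daily calibration throughput of the line."""
    return (shift_minutes // cal_minutes) * stations

def cal_cost_per_unit(cal_minutes: int, rate_per_hour: float) -> float:
    """Labour cost of calibration folded into COGS, per unit."""
    return round(cal_minutes / 60 * rate_per_hour, 2)

daily = units_per_day(CAL_MINUTES_PER_UNIT, STATIONS, SHIFT_MINUTES)
days_for_10k = 10_000 / daily                 # calendar drag on a 10k run
per_unit = cal_cost_per_unit(CAL_MINUTES_PER_UNIT, ENGINEER_RATE_PER_HOUR)
```

Under these assumed numbers, a four-minute routine that was trivial at prototype stage consumes over forty production days for a ten-thousand-unit run and quietly adds a few dollars of labour to every unit's COGS.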


The question to ask during component selection is not 'does this component require calibration?' The question is 'what does calibration look like at production volumes, and who bears that cost?'. Components that ship pre-calibrated with traceable coefficients stored in internal registers shift that burden to the manufacturer. Components that require user-side calibration shift it entirely to you.



2.2 — Drift Is Calibration's Long Tail


Even a component that passes factory calibration can drift in the field. Mechanical stress during PCB assembly, thermal cycling across deployment environments, humidity ingress, and accumulated operational hours all contribute to measurement drift over time.


A component with a tight initial specification but poor long-term stability is a particularly dangerous choice for deployments where field recalibration is operationally impossible — remote agricultural monitoring nodes, embedded industrial sensors in sealed enclosures, and medical devices in clinical settings all share this characteristic.


The cost of drift in the field is not just technical. It is trust. When a system that was accurate at deployment begins reporting values that diverge from ground truth, the first casualty is user confidence. The second casualty is the engineering team's time as they attempt to diagnose whether the issue is firmware, hardware, installation, or the sensor itself. The third casualty is your support escalation pipeline.


PRINCIPLE  Always ask for long-term stability data — not just initial accuracy. The datasheet accuracy figure is a T=0 measurement under ideal conditions. What matters for a deployed product is drift over twelve, twenty-four, and thirty-six months under real operating conditions.
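The principle above can be turned into a simple drift budget. This sketch assumes a linear drift model and hypothetical accuracy and drift figures — real parts need the vendor's long-term stability data:

```python
# Back-of-envelope drift budget, assuming a simple linear drift model.
# The accuracy and drift-rate figures are hypothetical illustrations.

def worst_case_error(initial_accuracy: float, drift_per_year: float,
                     months: int) -> float:
    """Worst-case total error (%FS) after `months` in the field."""
    return initial_accuracy + drift_per_year * (months / 12)

# Assumed: ±0.5 %FS initial accuracy, ±0.4 %FS/year long-term drift
errors = {m: round(worst_case_error(0.5, 0.4, m), 2) for m in (12, 24, 36)}
```

Under these assumed figures, a part that looks tight at T=0 has more than tripled its error budget by month thirty-six — exactly the divergence from ground truth that erodes user trust.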


Section 3: Firmware Complexity as a Hidden Multiplier


3.1 — The Integration Tax

Every component you add to a system carries an integration tax — the engineering hours required to write a driver, validate timing, handle edge cases, and build confidence that the component behaves predictably across the full range of operating conditions your product will encounter.


This tax varies enormously between components that appear functionally equivalent. A sensor with a well-documented, widely-used digital interface, established open-source driver references, and an active application engineering team at the vendor will have a much lower integration tax than a cheaper alternative with a sparse datasheet, an undocumented vendor-specific register set, and no community implementation to reference.


For a single prototype, the difference might be two or three days of engineering effort. For a product that ships across multiple hardware revisions over three to five years, that difference compounds. Every new firmware engineer who joins the team inherits the complexity. Every regression after a chip revision has to be debugged against a component that was difficult to understand in the first place.


3.2 — The Interaction Problem

Component firmware complexity does not scale linearly with the number of components in a system. It scales combinatorially, because components interact.


A sensor that requires precise timing on a shared SPI bus can interfere with a display controller on the same bus. A component that generates significant switching noise on its power rail can corrupt measurements from a sensitive analog front end. A component that holds a shared I2C address with another device on your board forces an architectural workaround that ripples through your firmware design.


These interaction problems are invisible at component selection time. They reveal themselves during integration, typically at the worst possible moment in a development schedule. The components most likely to generate these problems are those selected without a deep understanding of system-level behaviour — which is precisely the class of component you tend to select when optimising purely on price.


3.3 — The Revision Risk


Vendors of low-cost components revise their silicon more frequently and with less notice than established vendors with long-term product commitments. A revision might change internal timing, modify the default state of a configuration register, or alter the power-on behaviour in a way that is technically within datasheet tolerances but breaks your existing firmware.


The cost of a silicon revision is an emergency firmware triage cycle, a revalidation run on your test bench, and potentially a field update campaign if the revision made it into deployed units. This cost is not theoretical — it is one of the most reliably recurring costs in embedded product development, and it disproportionately affects teams that have chosen components without assessing the vendor's revision history and commitment practices.


Section 4: Field Failure Rate and the Support Cost Iceberg


4.1 — MTBF Is Not a Field Failure Rate


Most component datasheets publish a Mean Time Between Failures figure. This number is calculated from accelerated life testing under controlled conditions, extrapolated using standard reliability models. It is a useful relative benchmark. It is not a prediction of the failure rate your product will experience in deployment.


Field failure rates are shaped by the actual environment your product operates in — temperature extremes, humidity cycles, vibration profiles, ESD exposure, power quality, and installation practices. A component that performs reliably in a climate-controlled server room may fail systematically in an agricultural environment where temperature swings from eight degrees at dawn to forty-eight degrees at peak afternoon, and where power from a diesel generator carries voltage transients that would not be tolerated in any standard MTBF test.


The teams that understand their deployment environment deeply — and select components accordingly — consistently achieve lower field failure rates than teams that rely on datasheet MTBF figures alone. The cost difference shows up not in the BOM, but in the warranty claims, replacement shipments, and field engineer dispatches that accumulate over the product lifecycle.


4.2 — The Support Cost Multiplier


When a component fails in the field, the direct cost of the replacement is rarely the dominant expense. The dominant expenses are the support engineer time to diagnose the issue, the logistics of getting a replacement to the deployment site, the downtime cost to the end customer, and the reputational damage if the failure is visible or repeated.


These costs scale with your deployment footprint. At fifty units deployed, a one-percent field failure rate means one support case. At five thousand units deployed, it means fifty simultaneous support cases — each consuming engineering bandwidth that could have been directed toward your next product. The component that saved you eight dollars per unit across five thousand units in BOM cost may have committed you to forty thousand dollars in annual support overhead.


PRINCIPLE  Calculate support cost at scale before locking a component. If your target deployment is N units and your expected field failure rate is F%, ask what N × F × (average support cost per failure) looks like over a three-year period. If that number exceeds your BOM savings, the decision is clear.
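The N × F × (cost per failure) check from the principle above is a one-line calculation. The inputs below are illustrative assumptions — the $800-per-case figure, for example, bundles diagnosis time, logistics, and a dispatch, and will differ for every product:

```python
# Sketch of the N x F x (cost per failure) check. All inputs are
# illustrative assumptions, not measured data.

def support_cost(n_units: int, annual_failure_rate: float,
                 cost_per_failure: float, years: int = 3) -> float:
    """Expected support spend over the warranty horizon."""
    return n_units * annual_failure_rate * cost_per_failure * years

def bom_savings(n_units: int, saving_per_unit: float) -> float:
    """Total BOM saving from the cheaper component."""
    return n_units * saving_per_unit

# Assumed: 5,000 units, 1% annual failure rate, $800 per support case
# (diagnosis + logistics + dispatch), $8/unit BOM saving.
support = support_cost(5_000, 0.01, 800.0)   # over 3 years
savings = bom_savings(5_000, 8.0)
cheaper_part_wins = savings > support
```

With these assumed numbers, the $40,000 of BOM savings is dwarfed by $120,000 of three-year support exposure — the decision is clear before the first unit ships.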


Section 5: Supply Chain Risk and the Second Source Problem


5.1 — Single-Source Components Are a Hidden Liability


A component available from only one vendor is a single point of failure in your supply chain. This is acceptable — sometimes unavoidable — for highly specialised components where no equivalent alternative exists. It is not acceptable for commodity or near-commodity components where your cost optimisation decision has landed you on a single source.


The risk materialises in several forms. The vendor may discontinue the component, triggering a last-time-buy decision and ultimately a redesign. The vendor may face their own supply chain disruptions — fabrication constraints, logistics disruptions, raw material shortages — that impose lead times incompatible with your production schedule. The vendor may be acquired, and the acquiring entity may not honour the product roadmap commitments you were counting on.


The global semiconductor supply disruption that began in 2020 and extended through 2022 taught this lesson at scale to the entire electronics industry. Companies with well-characterised second sources and design variants maintained production continuity. Companies that had optimised purely on price — selecting single-source components with no qualified alternative — faced production halts, emergency redesigns, and in some cases, permanent market share loss to competitors who had maintained supply.


5.2 — The Qualification Cost of a Switch


When a supply chain disruption forces a component switch mid-production, the cost is not just the difference in unit price between the original and the alternative. The full cost includes requalification testing, firmware adaptation, updated calibration procedures, and in regulated industries, a formal change notification process that may require regulator approval before the new component can ship in a certified product.


This qualification cost is not abstract. It is measured in engineering weeks, and it arrives exactly when your team can least afford the distraction — during a production ramp, a customer deadline, or a critical field issue. The teams that pre-qualify a second source before they need it, even if they never use it, buy themselves an insurance policy whose premium is modest compared to the cost of an unplanned switch.


Section 6: A Framework for True Cost Evaluation


6.1 — The Five Questions to Ask Before a Component Decision


Rather than a checklist, think of these as the five questions that will surface the majority of the hidden cost risk in any component selection decision:


  1. What does this component cost to integrate, not just to buy? Estimate the firmware engineering hours to write a production-quality driver, including edge case handling, error recovery, and power management. Price those hours at your true engineering cost rate.

  2. What does calibration look like at volume? Walk through the calibration process step by step at your target production volume. Identify the test equipment required, the throughput limitation, and the per-unit time cost.

  3. What is the realistic field failure profile? Do not use the MTBF figure from the datasheet. Talk to engineers who have deployed this component in similar environments. Look at community forums and application notes for known failure modes. If the component is new and that data does not exist, factor that uncertainty into your decision.

  4. Is there a qualified second source? If the answer is no, is the unique capability of this component worth the supply chain risk? Can you design the system so that a second source can be qualified in parallel, even if it is not your primary vendor?

  5. What is this component's end-of-life risk over the expected product lifetime? If your product has a ten-year service commitment and the component is in its third generation with no long-term availability commitment from the vendor, that is a risk that needs to be priced into the decision now rather than discovered during a warranty period.


6.2 — The Total Cost of Component Ownership Model


A useful mental model is Total Cost of Component Ownership — borrowed from the enterprise IT concept of Total Cost of Ownership, but applied to individual component decisions. The TCCO of a component is the sum of its unit price, integration cost, calibration cost at volume, expected support cost over the deployment lifetime, and the expected cost of supply chain disruption weighted by probability.


You do not need a precise number. You need an order-of-magnitude comparison. If Component A has a unit price of twelve dollars and Component B has a unit price of seven dollars, the question is not whether you save five dollars per unit. The question is whether Component A's TCCO is lower than Component B's TCCO despite the higher unit price. In many cases, it is — often substantially.
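The order-of-magnitude comparison can be sketched as a small model. The component names, prices, and every cost figure below are hypothetical assumptions chosen to illustrate the twelve-dollar-versus-seven-dollar case in the text:

```python
# Order-of-magnitude TCCO comparison sketch. Every figure below is a
# hypothetical assumption for illustration.

from dataclasses import dataclass

@dataclass
class Component:
    unit_price: float              # BOM unit price, USD
    integration_cost: float        # one-off engineering cost
    cal_cost_per_unit: float       # factory calibration folded into COGS
    support_cost_per_unit: float   # expected lifetime support, per unit
    disruption_cost: float         # cost of a forced switch, one-off
    disruption_prob: float         # probability over the product lifetime

def tcco_per_unit(c: Component, volume: int) -> float:
    """Total Cost of Component Ownership, amortised per unit."""
    one_off = c.integration_cost + c.disruption_prob * c.disruption_cost
    return round(c.unit_price + c.cal_cost_per_unit
                 + c.support_cost_per_unit + one_off / volume, 2)

# Assumed: a $12 part with good docs and a second source, versus a $7
# part with a sparse datasheet, user-side calibration, single source.
part_a = Component(12.0, 20_000, 0.5, 2.0, 60_000, 0.05)
part_b = Component(7.0, 60_000, 2.5, 8.0, 60_000, 0.25)

volume = 5_000
tcco_a = tcco_per_unit(part_a, volume)
tcco_b = tcco_per_unit(part_b, volume)
```

Under these assumptions, the five-dollar BOM saving inverts: the cheaper part's amortised TCCO comes out well above the premium part's. The point is not the specific numbers but that the comparison fits in a dozen lines and can sit in your standard selection process.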


Building this comparison into your standard component selection process takes time initially. Teams that practice it consistently report that it becomes faster over successive decisions, because they develop intuitions about which cost dimensions to scrutinise most carefully for different component categories and application contexts.


6.3 — When Cheap Is Actually the Right Call


This framework is not an argument that premium components are always the right choice. It is an argument for making cost decisions on the correct cost basis.


There are contexts where a lower-cost component is genuinely the right decision even after full TCCO analysis. Consumer electronics with short product lifecycles and high price sensitivity, where the deployment environment is controlled and the support model is replace-not-repair, present a different TCCO profile than industrial equipment expected to operate for a decade in harsh conditions. A prototype run of twenty units for market validation has a fundamentally different risk profile than a volume production run of ten thousand units for critical infrastructure.


The discipline is to make the analysis explicit, not to always reach a particular conclusion. Teams that run through the five questions above — even informally — make better component decisions than teams that default to the cheapest option or the familiar option without structured evaluation.


Closing: The Sensor That Costs the Most Is the One You Have to Change


In embedded product development, the most expensive component is not the one with the highest unit price. It is the one you selected incorrectly and have to change after deployment. A field rework campaign, a design revision, a requalification run — these costs are measured not in tens of dollars per unit but in hundreds of thousands of rupees in engineering time, logistics, and customer relationship repair.


The discipline of true cost evaluation is not about being conservative or risk-averse. It is about being accurate. It is about seeing the full picture of what a component decision commits you to, not just the number that appears on your BOM.


At EurthTech, every component selection in a client engagement goes through a structured evaluation that covers integration complexity, calibration requirements, field failure profile, supply chain resilience, and lifecycle risk. This process adds time to the design phase. It saves multiples of that time over the product lifecycle.


The cheapest sensor on the market is not your enemy. Making a decision based solely on that number is.


About the Author

Srihari Maddula is the Founder and Technical Lead of Eurth Techtronics Pvt Ltd, an electronics product design and IoT engineering company based in Hyderabad, India. EurthTech has delivered over 26 embedded systems products across industrial, agricultural, medical, and defence applications. This blog series shares frameworks and principles from real product development practice — without compromising client confidentiality.


Eurth Techtronics Pvt Ltd  •  www.eurthtech.com  •  Hyderabad, India

