Half of everything software teams build is wasted.
This is a consistent finding across multiple studies and decades of data, and it should be the starting point for any conversation about digital product delivery.
The Standish Group's foundational CHAOS research, tracking feature usage across mission-critical applications, found that 50% of features are used almost never, 30% are used infrequently, and only 20% are used often.
Pendo's 2024 product benchmarks, drawing on aggregated usage data from hundreds of software products, found that approximately 80% of features built never achieve meaningful adoption: only 12% of features generate 80% of average daily usage volume.
These numbers mean that the majority of engineering effort on most digital products is spent building things that users do not value enough to use.
The financial implications are significant. Across an organisation with multiple product teams, the cumulative waste can run into millions: as an illustration, ten teams each costing £1 million a year, with half their feature effort going unused, represents roughly £5 million of annual waste. This cost is also compounded, with unused features adding maintenance burden, increasing testing complexity, creating additional attack surface for security vulnerabilities, and making the product harder to learn and navigate for the users who remain.
The question of what to build is therefore the highest-leverage decision in any digital product programme. If the team builds the right things, most of the other decisions become easier. If the team builds the wrong things, it can become increasingly difficult for any amount of technical excellence to recover the investment.
This article looks at why technology projects consistently lose focus on user outcomes, what the evidence says about the impact of user-centred delivery, and how engineering leaders can build the discipline of tying every feature to genuine user value.
The Drift from User Outcomes
Technology projects rarely set out to ignore users. The initial brief is always framed around user needs: we are building this system to help these people do these things more effectively. Early conversations focus on user journeys, pain points, and desired outcomes, with the first sprints typically delivering visible, user-facing functionality.
The drift is usually driven by technical work. As the project progresses, the backlog accumulates technical tasks, such as infrastructure setup, security hardening, database optimisation, API refactoring, dependency upgrades, and performance tuning, each individually justified and often genuinely necessary. The problem is that these tasks are framed in technical language, disconnected from any user-facing outcome, and prioritised alongside (or above) user stories without a shared framework for comparison.
Sprint reviews start to demonstrate infrastructure work to stakeholders who have no way to assess whether the project is on track, because nothing they can see or experience has changed. The feedback loop that keeps the product aligned with user needs gradually weakens and can eventually break. The team is still delivering: stories are completed, velocity is maintained, deployment pipelines are green. But what is being delivered has drifted from the purpose the project was commissioned to serve.
PMI's 2025 Pulse of the Profession research, surveying nearly 3,000 project professionals, found that only 14% of employees feel aligned with the goals of their organisation. When this misalignment exists at an organisational level, it compounds within product delivery teams. Engineers build what the backlog tells them to build. Product owners prioritise what stakeholders request. Stakeholders request what they believe is important, often informed by the loudest customer complaint, the most recent executive conversation, or a competitor feature they noticed, rather than by systematic analysis of user behaviour. Without a disciplined connection to actual user outcomes, each layer of this chain can introduce drift.
The stakeholder request pattern is a particularly common source of wasted effort. Stakeholders often translate user problems into solution specifications: "we need a PDF export button" rather than "users need to share report data with colleagues who do not have system access." The first framing prescribes a specific solution, whereas the second framing describes a user need that could be addressed in multiple ways, some of which may be simpler, faster and more effective than the solution initially assumed. When the team builds the specified solution without interrogating the underlying need, they risk building something that technically works but does not solve the problem the user actually has.
Structural factors compound this drift. In many organisations, contracts define success by scope delivered rather than outcomes achieved, creating an incentive to build everything specified regardless of whether it is needed. Project sponsors request broad functionality to avoid perceived gaps. Legacy systems set a precedent for feature coverage, making modernisation teams cautious about removing functionality even where usage is low or unclear. Roadmaps are built to demonstrate activity rather than to reflect evidence of use. Each of these drivers pulls delivery priorities away from user focus, and in environments without consistent user validation, completed features accumulate even when they serve no meaningful purpose.
The Evidence for User-Centred Delivery
Forrester's widely cited research suggests that every £1 invested in user experience design yields a return of £100 – a 9,900% ROI. Other studies report similar returns:
- The Interaction Design Foundation cites a similar rule of thumb: every £1 invested in UX saves £10 in development and £100 in post-release maintenance.
- The Baymard Institute attributes high rates of drop-off and conversion loss in transactional systems to usability failures, and multiple studies have shown that consistent user input improves decision-making and delivery outcomes.
- McKinsey's Business Value of Design study, tracking 300 publicly listed companies over five years, found that organisations in the top quartile for design maturity achieved 32% higher revenue growth and 56% higher total shareholder returns.
- The Design Management Institute's Design Value Index found that design-led companies outperformed the S&P 500 by 219% over a ten-year period.
Taken together, these studies point to a consistent pattern: companies that pay sustained attention to how users interact with their systems tend to perform better overall.
PMI's 2025 research adds a more recent data point. The report found that high-performing project professionals track an average of 9.1 success factors per project, compared to 6.3 for others. Critically, the additional factors these professionals track include customer satisfaction, strategic alignment, and stakeholder trust – measures that go well beyond the traditional iron triangle of scope, schedule, and budget. The PMI data found that organisations prioritising interpersonal capabilities, including collaborative leadership, communication and empathy, achieve 72% business goal success rates, compared to 65% for those that do not. Empathy, in a project context, means understanding what users actually need and building accordingly.
Gartner reports that approximately 85% of organisations now favour a product-centric application delivery model, reflecting a broad industry shift toward continuous, outcome-oriented development and away from project-based delivery with fixed scope and predetermined features.
This raises an obvious question: if the vast majority of organisations now favour product-centric delivery, why does feature waste remain so high? The answer is that favouring a model and operating one are very different things. Many organisations have adopted the language and structures of product thinking, such as product owners, product teams, and outcome-based roadmaps, while the underlying behaviours remain project-oriented.
The evidence base is consistent: user-centred delivery produces better outcomes on every meaningful measure, including adoption, revenue, retention, satisfaction and return on investment.
Product Thinking in Practice
The discipline of keeping features focused on users is fundamentally about product thinking. This is an orientation where features are treated as experiments with hypotheses: “We believe that building this capability will deliver this outcome for this user group”. Each sprint delivers something a user can see or respond to, which creates the feedback loop that keeps the product aligned with actual needs.
This stands in contrast to project thinking, where features are treated as deliverables on a timeline: “we committed to delivering this module by this date”. Project thinking measures progress by output (stories completed, features shipped). Product thinking measures progress by outcome (user adoption, task completion rates, satisfaction scores, time saved). The distinction shapes everything from backlog prioritisation to how success is measured.
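To make the contrast concrete, here is a minimal sketch in Python, using entirely hypothetical data and field names, of how an output measure and an outcome measure answer different questions:

```python
from dataclasses import dataclass

@dataclass
class SprintResult:
    stories_completed: int   # output: volume of work shipped
    feature_users: int       # outcome: users who adopted the new capability
    active_users: int        # all users active in the same period

def output_metric(results: list[SprintResult]) -> int:
    """Project thinking: progress measured by what was delivered."""
    return sum(r.stories_completed for r in results)

def outcome_metric(results: list[SprintResult]) -> float:
    """Product thinking: progress measured by adoption of what was delivered."""
    latest = results[-1]
    return latest.feature_users / latest.active_users

results = [SprintResult(12, 40, 1000), SprintResult(14, 55, 1050)]
print(output_metric(results))             # 26 stories completed
print(f"{outcome_metric(results):.1%}")   # ~5.2% adoption of the latest feature
```

The first number says the team is busy; only the second says whether anyone values what was shipped.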
The principle is straightforward: every piece of work should link to a deliverable feature or component that serves a user need. There will always be lower-level technical requirements, such as infrastructure, security, and performance optimisation, and these are essential – the discipline is in how they are framed. For example:
- The authentication refactor exists so that users can log in reliably with single sign-on
- The caching layer exists so that the dashboard loads in under two seconds
- The API redesign exists so that the mobile app can display real-time data
This framing keeps the team oriented toward outcomes and gives stakeholders meaningful progress signals.
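One lightweight way to maintain this discipline, sketched below in Python with illustrative field names rather than any particular tool's schema, is to make the user outcome a first-class field on every backlog item and surface items where it is missing:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BacklogItem:
    title: str
    # The user-facing capability this work enables; None means unlinked.
    user_outcome: Optional[str] = None

def unlinked_items(backlog: list[BacklogItem]) -> list[BacklogItem]:
    """Technical work with no stated user outcome: candidates for the
    'does this need to happen now, or at all?' conversation."""
    return [item for item in backlog if not item.user_outcome]

backlog = [
    BacklogItem("Authentication refactor",
                user_outcome="Users can log in reliably with single sign-on"),
    BacklogItem("Add caching layer",
                user_outcome="Dashboard loads in under two seconds"),
    BacklogItem("Upgrade ORM to v4"),  # no linked outcome yet
]

for item in unlinked_items(backlog):
    print(f"Unlinked: {item.title}")
```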
When technical work cannot be connected to a specific user-facing capability, it is worth questioning whether it needs to happen now, or at all. Some technical work is genuinely foundational and has no immediate user-visible output. Infrastructure, security hardening, compliance requirements, and resilience improvements all fall into this category – they are essential to the product's viability even though users may never interact with them directly. This is acceptable, provided the team can articulate which user-facing capability it enables and when that capability will be demonstrable. A backlog where infrastructure work is connected to upcoming user stories maintains alignment. A backlog where infrastructure work accumulates without a clear path to user value is a warning sign.
The Sprint Demo as Diagnostic
There is a simple test of whether a project has maintained its focus on user outcomes. At the end of every sprint, can the team demonstrate something to a user (or a convincing proxy for a user) and get meaningful feedback?
If the answer is consistently yes, the project has a functioning feedback loop where stakeholders can see progress in terms they understand, and users can respond to real functionality, providing the kind of concrete feedback that written requirements can never capture. This means that the team can course-correct early, adjusting priorities based on what they learn from demonstrating working software.
If the answer is consistently no, and sprints are consumed by technical work that is invisible to the people the product is being built for, then the project has lost its connection to user outcomes. This does not necessarily mean the work is wasted, but it does mean that the primary mechanism for validating whether the team is building the right thing has been suspended. Every sprint without user feedback can become a sprint where assumptions go untested and drift goes undetected.
However, this is not to say that every sprint is required to deliver a polished, user-facing feature. Some sprints will focus on enabling infrastructure, and that is expected. The discipline is in maintaining the cadence of user-visible progress. If three consecutive sprints pass without anything demonstrable to a user, the team should discuss whether this is justified and plan when the next user-visible delivery will occur.
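As a sketch of how that cadence might be tracked, assuming nothing more than a per-sprint flag recording whether something user-demonstrable was shown:

```python
def sprints_since_demo(demo_history: list[bool]) -> int:
    """Count consecutive sprints, most recent first, with nothing
    demonstrable to a user."""
    count = 0
    for had_demo in reversed(demo_history):
        if had_demo:
            break
        count += 1
    return count

# True means something was demonstrated to a user (or proxy) that sprint.
history = [True, True, False, False, False]
if sprints_since_demo(history) >= 3:
    print("Three sprints without user-visible progress: discuss whether "
          "this is justified and plan the next demonstrable delivery.")
```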
The Hidden Costs: Technical Debt and Rework
The cost of building the wrong features extends well beyond the initial wasted engineering effort. Two downstream consequences, technical debt and rework, compound the damage in ways that are often invisible until they become severe.
Technical debt is typically attributed to shortcuts in architecture or code quality, but research shows it is also shaped by delivery choices, particularly around unnecessary features. When teams implement functionality that adds limited user value, they increase the system's size, surface area and maintenance cost. A study published on ScienceDirect found that 25% of engineering effort goes toward managing technical debt – a figure that grows as unused features accumulate in the codebase. Every feature that remains in the system, whether used or unused, must be maintained, tested, secured and accounted for in future architectural decisions. The system becomes harder to understand, change and extend, even when the code quality of each individual feature is high.
Rework is an equally significant cost. Further research published on ScienceDirect found that 40-50% of development effort is spent on rework, often because features did not align with user needs or because requirements shifted after implementation began. Rework becomes less expensive when it happens early – teams that gather and act on user input during discovery and prototyping reduce the likelihood of revising features after release. This is a further argument for the continuous user validation that sprint demos and embedded product ownership provide: the cost of learning that a feature misses the mark in a sprint review is a fraction of the cost of discovering the same misalignment after deployment.
The Prioritisation Discipline
The feature usage data provides the strongest argument for rigorous prioritisation. If 80% of features go unused, or 50% are used almost never, then building every feature that stakeholders request, with equal weight and priority, virtually guarantees that the majority of engineering effort will be wasted.
The remedy is to tie every feature to a user outcome with a testable hypothesis. Before a feature enters the backlog, the team should be able to articulate the following (a sketch of how this might be recorded follows the list):
- which users will benefit,
- what they will be able to do that they cannot do today,
- how the team will know if the feature is successful (through measurable adoption, task completion or satisfaction), and
- what the cost of building and maintaining this feature will be relative to its expected value.
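A minimal sketch of how such a hypothesis might be captured, in Python with illustrative field names and thresholds rather than a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class FeatureHypothesis:
    feature: str
    user_group: str           # which users will benefit
    enabled_outcome: str      # what they can do that they cannot do today
    success_measure: str      # how success will be observed after release
    success_threshold: float  # e.g. minimum share of active users adopting
    est_cost_days: int        # build effort plus expected maintenance

report_sharing = FeatureHypothesis(
    feature="Report sharing",
    user_group="Analysts reporting to colleagues without system access",
    enabled_outcome="Share report data without requiring a login",
    success_measure="Share of weekly active analysts using the feature",
    success_threshold=0.15,
    est_cost_days=20,
)

def hypothesis_holds(observed_adoption: float, h: FeatureHypothesis) -> bool:
    """Evaluate the hypothesis against adoption measured after release."""
    return observed_adoption >= h.success_threshold
```

Writing the hypothesis down before building makes the later success-or-retire decision a matter of evidence rather than opinion.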
This discipline can be uncomfortable. It means saying no to stakeholders who want features built, and accepting that some ideas which seem compelling in a workshop will not survive contact with evidence. It requires product owners to defend prioritisation decisions with data, and engineering leaders to support them in doing so. It also requires the organisational maturity to treat features as investments that must justify their cost, and to retire features that fail to deliver value, rather than allowing them to accumulate indefinitely.
The accumulation problem is visible in most digital products, which only ever add features. Features that were built, launched, found little adoption and quietly stopped being useful remain in the product, consuming maintenance effort, increasing test suite size, adding complexity to the codebase and confusing users who encounter them. The discipline of feature retirement – actively removing functionality that is not delivering value – is one of the most neglected practices in product development, and one of the most valuable. Every feature removed is maintenance effort recovered, test complexity reduced and user experience simplified.
This issue becomes especially visible during modernisation efforts. Teams are understandably cautious about removing legacy functionality, even where usage is low or unclear, and carrying features forward feels safer than evaluating their current value. Bringing evidence into the decision helps: usage analytics, service logs and user feedback can identify which legacy behaviours remain genuinely important. Where data is not available, teams can flag features for conditional review during prototyping, introduce them only when a user need is confirmed, and capture workarounds or support cases as signals of feature relevance. This approach supports leaner, more maintainable systems that evolve in response to current needs rather than historical defaults.
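As an illustration of evidence-led triage, the following sketch assumes per-feature usage counts are available from analytics; the thresholds are illustrative and would need calibrating for each product:

```python
def triage_legacy_features(
    usage: dict[str, int],        # feature -> uses in the last 90 days
    active_users: int,
    retire_below: float = 0.01,   # <1% of users: candidate to retire
    review_below: float = 0.10,   # <10%: carry forward only if need confirmed
) -> dict[str, str]:
    """Classify legacy features as keep / review / retire using usage data."""
    decisions = {}
    for feature, uses in usage.items():
        rate = uses / active_users
        if rate < retire_below:
            decisions[feature] = "retire (check support cases first)"
        elif rate < review_below:
            decisions[feature] = "review during prototyping"
        else:
            decisions[feature] = "keep"
    return decisions

usage = {"pdf_export": 4, "bulk_edit": 180, "daily_digest": 2400}
print(triage_legacy_features(usage, active_users=3000))
```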
Pendo's research found that product adoption and usage now rank as the most important metrics of product success among product leaders – a marked change from prior years, when product leaders were more likely to measure success by features shipped. The industry is moving, slowly, from measuring output to measuring outcomes. Organisations that make this shift at the delivery level, tying every feature to a user outcome and measuring whether it delivered, will waste far less of their engineering investment.
Keeping the User in the Room
The most effective mechanism for maintaining user focus is to keep the user (or a credible representation of the user) in the room throughout the project. This means embedded product ownership from someone who deeply understands user needs and has the authority to make prioritisation decisions, regular user research as a continuous practice throughout delivery, analytics on existing features informing what to build next, and real usage data validating assumptions and identifying where users are struggling.
PMI's 2025 research found that projects with a clear vision of success, defined in terms of value delivered, score +41 on PMI's Net Project Success Score, compared to -18 for those without one. A clear vision describes the value the product will deliver to the people who use it, and it provides the reference point against which every feature decision can be evaluated. When the vision is clear, prioritisation becomes easier because every feature can be assessed against a shared understanding of what the product is trying to achieve and for whom.
The technology industry has spent decades improving how software is built – better languages, frameworks, infrastructure, testing, and deployment pipelines. These improvements are genuine and valuable. However, the persistent gap in project success rates suggests that the industry has invested heavily in building things right while underinvesting in building the right things. With only around 50% of projects globally classified as successful, a figure that has barely improved despite enormous advances in delivery capability, the remaining gap is overwhelmingly about what teams choose to build, not how they build it.
User-centred feature delivery addresses this gap directly, and the evidence that it works is among the most robust in the entire field of software engineering. The organisations that close this gap will be those that bring the same rigour to deciding what to build that they already apply to how they build it – treating features as investments to be validated, measured and, when necessary, retired.
Sources referenced in this article:
- PMI — Pulse of the Profession 2025: Boosting Business Acumen (2,841 professionals surveyed)
- PMI — Step Up: Redefining the Path to Project Success With M.O.R.E. (2025)
- McKinsey — The Business Value of Design (300 companies tracked over five years)
- Pendo — 2024 Product Benchmarks: The Hidden Cost of Bad Software
- Pendo — 2019 Feature Adoption Report (80% of features rarely or never used)
- Forrester — The Business Impact of UX (every $1 invested in UX returns $100)
- Design Management Institute — Design-Led Companies Outperform S&P 500 by 219%
- ScienceDirect — Technical Debt in Software Engineering (25% of engineering effort)
- ScienceDirect — Rework in Software Development (40-50% of development effort)
- Gartner — 85% of Organisations Favour Product-Centric Delivery (cited in Orangesoft)
- Standish Group — CHAOS Report 2020: Beyond Infinity (50,000+ projects, feature usage data)
- Standish Group — CHAOS Report 1994 (user involvement as #1 success factor)


