Why investment committees default to feasibility even when strategy is sound
Most leadership teams agree on strategic direction—improve customer journeys, modernize operations, strengthen resilience, and enable faster change. Investment debates become contentious when initiatives compete for the same scarce resources and when the execution path is uncertain. In that context, executives tend to use feasibility as the deciding factor, even when strategic value is compelling, because feasibility is where delivery risk concentrates.
A scoring model that explicitly balances strategic alignment (should we do it) with feasibility and readiness (can we do it, now, without creating unmanaged risk) is not a portfolio hygiene tool. It is a strategy validation mechanism. It helps leadership determine whether ambitions can be achieved within the organization’s current digital capabilities and change capacity, and it creates a defensible basis for focus when priorities inevitably collide.
Strategic alignment as a governance question, not a narrative
Define alignment in terms executives can hold accountable
Strategic alignment becomes measurable when it is expressed as an outcome commitment and mapped to an explicit strategic objective. Strong models separate “strategic importance” from “executive enthusiasm.” They ask whether the initiative directly advances a defined priority—such as revenue growth, retention, cost-to-serve reduction, or control and resilience strengthening—and whether there is a named accountable owner for benefit realization.
Weighted scoring is useful because it forces leadership to state its priorities. A portfolio model that assigns meaningful weight to strategic alignment is making a governance statement: the organization is willing to invest in initiatives that advance strategy, but only when alignment is evidenced and owned.
Avoid false precision in alignment scoring
Executives routinely discount scoring models when “alignment” becomes a subjective catch-all. Alignment criteria should be limited and specific—typically three or four anchors that reflect the current strategy. The score should be justified with a short evidence note: which strategic objective it supports, what target metric it moves, and which executive is accountable for the outcome.
Feasibility and readiness as investment filters
Feasibility is a composite of constraints, not a single estimate
Feasibility is often reduced to a delivery estimate or a technology difficulty rating. For executive decision-making, feasibility must incorporate the constraints that determine whether an initiative can be delivered safely and sustainably: resource availability, operational change capacity, data readiness, integration complexity, third-party dependencies, control implications, and resilience obligations.
Leaders should treat feasibility as a composite score that reflects the likelihood of achieving the intended outcomes within acceptable risk. A high-value initiative with low feasibility may still be approved, but the decision should be explicit: leadership is accepting higher delivery risk or funding prerequisite work to raise feasibility.
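To make the composite concrete, the sketch below computes feasibility as a weighted average of constraint sub-scores on a 0–10 scale. The constraint dimensions follow the list above; the weights, scale, and example values are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a composite feasibility score.
# Dimensions follow the constraints discussed above; weights are illustrative.

FEASIBILITY_WEIGHTS = {
    "resource_availability":  0.20,
    "change_capacity":        0.15,
    "data_readiness":         0.15,
    "integration_complexity": 0.15,
    "third_party_dependency": 0.10,
    "control_implications":   0.15,
    "resilience_obligations": 0.10,
}

def composite_feasibility(scores: dict[str, float]) -> float:
    """Weighted average of constraint sub-scores (each 0-10).

    A low result does not automatically reject an initiative; it makes
    explicit that approval means accepting delivery risk or funding
    prerequisite work to raise feasibility.
    """
    assert abs(sum(FEASIBILITY_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(w * scores[dim] for dim, w in FEASIBILITY_WEIGHTS.items())

# Example: strong data foundations, but heavy integration and vendor reliance.
initiative = {
    "resource_availability": 7, "change_capacity": 6, "data_readiness": 8,
    "integration_complexity": 3, "third_party_dependency": 2,
    "control_implications": 5, "resilience_obligations": 6,
}
print(f"Composite feasibility: {composite_feasibility(initiative):.1f} / 10")  # 5.5 / 10
```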
Readiness is the difference between “can we do it” and “can we start”
Many initiatives are feasible in principle but not ready to begin. Readiness is the practical filter executives use to prevent premature starts that later require rework, architectural exceptions, or emergency controls. Typical readiness checks include whether dependencies are identified, whether required data domains are fit for purpose, whether operating processes are prepared for change, and whether risk and compliance impacts have been assessed early enough to avoid late-stage gating.
Feasibility must include assurance capacity
In regulated environments, delivery speed is constrained by the ability to produce and maintain control evidence. Feasibility scoring should therefore include the organization’s assurance capacity: the maturity of testing discipline, change management, model governance where relevant, and third-party oversight. Without this, scoring models systematically over-rank initiatives that look easy to build but difficult to operate safely.
Integrated scoring frameworks and how to adapt them for executive governance
Weighted scoring models as the executive default
Weighted scoring models are widely used because they can be tailored to leadership priorities and can incorporate both value and constraints. The executive advantage is transparency: criteria, weights, and scoring definitions can be governed, reviewed, and adjusted as conditions change. The risk is that flexibility becomes inconsistency unless criteria and evidence standards are controlled.
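The sketch below shows the basic mechanics of such a model, pairing three alignment/value criteria with three feasibility/readiness criteria. The criterion names, weights, and 1–10 anchors are assumptions for illustration; in practice they would be governed as described later in this section.

```python
# Illustrative weighted scoring model. Criteria and weights are
# assumptions for the sketch, not a recommended standard.

CRITERIA_WEIGHTS = {
    # alignment / value
    "strategic_alignment": 0.25,
    "outcome_value":       0.20,
    "risk_reduction":      0.10,
    # feasibility / readiness
    "resource_readiness":  0.15,
    "data_and_tech":       0.15,
    "assurance_capacity":  0.15,
}

def weighted_score(scores: dict[str, int]) -> float:
    """Each criterion is scored 1-10; the result is a 1-10 weighted total."""
    return sum(w * scores[c] for c, w in CRITERIA_WEIGHTS.items())

portfolio = {
    "journey_redesign": {"strategic_alignment": 9, "outcome_value": 8,
                         "risk_reduction": 4, "resource_readiness": 6,
                         "data_and_tech": 5, "assurance_capacity": 6},
    "core_upgrade":     {"strategic_alignment": 6, "outcome_value": 5,
                         "risk_reduction": 9, "resource_readiness": 7,
                         "data_and_tech": 8, "assurance_capacity": 7},
}

for name, scores in sorted(portfolio.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

Note how close the two totals land: transparency about criteria and weights is what lets leadership interrogate a near-tie instead of arguing about a black-box rank.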
RICE as a useful sub-model for customer and product backlogs
RICE (Reach, Impact, Confidence, Effort) is effective for comparing product and journey improvements, especially when teams can quantify reach and estimate effort reliably. As an executive investment tool, RICE typically needs augmentation: control impacts, operational readiness, and dependency concentration often matter as much as customer impact. When used, it is best treated as a component score feeding a broader portfolio model rather than as the sole ranking mechanism.
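The arithmetic itself is the standard Intercom formula, reach times impact times confidence divided by effort; the sketch below shows it with illustrative values.

```python
# Standard RICE score: (Reach x Impact x Confidence) / Effort.
# Best treated as one component feeding a broader portfolio model.

def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """reach: customers affected per quarter; impact: relative scale
    (e.g. 0.25 = minimal, 3 = massive); confidence: 0-1; effort: person-months."""
    return (reach * impact * confidence) / effort

# Example: a journey fix reaching 12,000 customers per quarter,
# medium impact, 80% confidence, four person-months of effort.
print(f"RICE: {rice(12_000, 1.0, 0.8, 4):,.0f}")  # RICE: 2,400
```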
The 1-3-9 method to force differentiation
Many scoring exercises fail because every initiative clusters in the middle. Non-linear scales such as 1-3-9 can help leadership distinguish high-potential initiatives from incremental work. The method is only credible, however, when the score anchors are defined clearly and tied to evidence rather than to advocacy.
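The sketch below shows the mechanic, assuming illustrative anchor definitions; the point is that a single well-evidenced 9 outweighs several 3s, so clustering in the middle stops being the path of least resistance.

```python
# Sketch of a 1-3-9 non-linear scale. Anchor definitions are
# illustrative and must be tied to evidence, not advocacy.

ANCHORS = {
    "incremental": 1,  # marginal or unproven contribution
    "material":    3,  # evidenced contribution to a stated objective
    "decisive":    9,  # directly moves a top strategic metric
}

def score_1_3_9(ratings: list[str]) -> int:
    """Sum non-linear anchor values across criteria."""
    return sum(ANCHORS[r] for r in ratings)

print(score_1_3_9(["decisive", "material", "incremental"]))  # 13
print(score_1_3_9(["material", "material", "material"]))     # 9
```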
Designing a scoring model that leaders will trust
Keep criteria tight and decision-relevant
Executives tend to accept models with fewer, higher-quality criteria because they are easier to govern and harder to game. A practical pattern is three criteria focused on alignment/value and three focused on feasibility/readiness. Additional considerations should be handled as gates (pass/fail) rather than as extra weighted dimensions.
Separate gates from scores
Gates prevent time being wasted scoring initiatives that are not decision-ready. Common gates include a named business owner, a defined outcome metric, identification of critical dependencies, and a preliminary risk and compliance assessment. This keeps the scoring conversation focused on comparative choices rather than on missing basics.
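A minimal sketch of gate checking before scoring is shown below, using the gates named above; the dict-based data model and field names are assumptions.

```python
# Pass/fail gates applied before any scoring effort is spent.
# Gate names follow the text; the data model is illustrative.

REQUIRED_GATES = (
    "named_business_owner",
    "defined_outcome_metric",
    "critical_dependencies_identified",
    "preliminary_risk_compliance_assessment",
)

def decision_ready(initiative: dict) -> tuple[bool, list[str]]:
    """Return (passes, failed_gates). Initiatives that fail are returned
    to sponsors rather than consuming committee scoring time."""
    failed = [g for g in REQUIRED_GATES if not initiative.get(g)]
    return (not failed, failed)

candidate = {
    "named_business_owner": True,
    "defined_outcome_metric": True,
    "critical_dependencies_identified": False,
    "preliminary_risk_compliance_assessment": True,
}
ok, failed = decision_ready(candidate)
print("proceed to scoring" if ok else f"return to sponsor; missing: {failed}")
```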
Define evidence expectations for each score
A scoring model becomes a governance artifact only when it is auditable: each score has a short justification and a reference to supporting analysis or data. Evidence can be lightweight—an estimate based on prior deliveries, dependency mapping, operational volume data, or a control impact assessment—but it must be consistent across initiatives.
Make weights a governed parameter
Weights communicate leadership priorities. If weights are not reviewed, the model quietly enforces outdated priorities even when strategy and constraints evolve. Mature governance treats weights as a reviewed parameter: adjusted annually and revisited when significant changes occur in cost pressure, resilience commitments, regulatory expectations, or capacity constraints.
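One way to treat weights as governed is to attach review metadata to the weight set itself, so staleness is detectable rather than silent. The sketch below assumes illustrative field names, dates, and triggers.

```python
# Sketch of weights as a governed, versioned parameter.
# Field names, dates, and review triggers are illustrative assumptions.

from datetime import date

WEIGHT_SET = {
    "version": "2025.1",
    "approved_by": "Investment Committee",
    "effective": date(2025, 1, 15),
    "review_due": date(2026, 1, 15),
    "review_triggers": ["cost pressure shift", "new resilience commitments",
                        "regulatory change", "capacity constraint change"],
    "weights": {"strategic_alignment": 0.35, "outcome_value": 0.20,
                "feasibility": 0.25, "readiness": 0.20},
}

def weights_current(weight_set: dict, today: date | None = None) -> bool:
    """Flag stale weights so the model cannot quietly enforce
    outdated priorities past their review date."""
    return (today or date.today()) < weight_set["review_due"]

assert abs(sum(WEIGHT_SET["weights"].values()) - 1.0) < 1e-9
print("weights current" if weights_current(WEIGHT_SET) else "review overdue")
```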
Readiness and feasibility scoring categories that improve decision quality
Resource and change capacity
Feasibility should reflect not only budget availability but also scarce change capacity in operations and technology. Initiatives that require the same expert teams, the same data domains, or the same operational units should be penalized for portfolio congestion unless sequencing is explicit.
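One possible mechanic, sketched below, counts how many scarce resources an initiative contests with other candidates and deducts a fixed penalty per contested resource. The penalty size, resource names, and set-based model are assumptions for illustration.

```python
# Sketch of a portfolio congestion penalty: initiatives that compete
# for the same scarce teams, data domains, or operational units lose
# feasibility points unless sequencing is made explicit.

from collections import Counter

def congestion_penalties(demands: dict[str, set[str]],
                         penalty: float = 0.5) -> dict[str, float]:
    """demands maps initiative -> scarce resources it needs.
    Each resource contested by two or more initiatives costs `penalty`."""
    usage = Counter(r for resources in demands.values() for r in resources)
    return {name: penalty * sum(1 for r in resources if usage[r] > 1)
            for name, resources in demands.items()}

demands = {
    "journey_redesign": {"payments_team", "customer_data_domain"},
    "fraud_analytics":  {"customer_data_domain", "ml_platform_team"},
    "core_upgrade":     {"payments_team"},
}
print(congestion_penalties(demands))
# journey_redesign contests both of its resources -> largest penalty
```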
Technology and data foundations
Leaders need a clear view of whether foundational capabilities exist for the initiative: data quality and lineage, integration patterns, identity and access controls, observability, and release discipline. Scoring should reflect whether the initiative builds on shared foundations or requires bespoke workarounds that increase long-term cost and risk.
Dependency and concentration risk
Initiatives with many cross-domain dependencies or heavy reliance on third parties tend to have hidden schedule and control risks. A feasibility filter that captures dependency concentration improves portfolio realism by discouraging highly coupled starts unless prerequisites are funded.
Control and resilience implications
Feasibility is higher when controls can be designed into the solution with clear ownership, evidence capture, and sustainable operating procedures. Initiatives that create new risk obligations (particularly around data protection, third-party oversight, and operational resilience) should be scored lower unless the organization has mature assurance processes to support pace.
Common failure modes executives should anticipate
Using scores to avoid judgment
Scoring models support judgment; they do not replace it. Leaders should expect exceptions when a strategically critical initiative scores lower due to current constraints. The governance value comes from forcing the exception to be explicit: which constraints are being accepted, which prerequisites are being funded, and what evidence will be used to reassess.
Optimism bias in feasibility scoring
Feasibility scores are prone to optimism when teams treat complexity as a build problem rather than an operate-and-assure problem. Requiring cross-functional scoring input—technology, operations, risk, compliance, and finance—reduces bias and produces a more realistic view of readiness.
Portfolio drift when weights and gates are not updated
Even a well-designed model deteriorates if it is not maintained. Outdated gates allow ill-defined work into the pipeline, and outdated weights pull funding toward yesterday’s priorities. Executives should expect scoring governance to be as continuous as financial planning governance.
Strategy validation and prioritization for focused investment decisions
When leadership is attempting to focus investment decisions, the core question is whether strategic ambitions are realistic given current digital capabilities. Strategic alignment scoring alone is insufficient because it cannot explain deliverability. Feasibility scoring alone is insufficient because it can bias toward the easiest work rather than the most important work. A combined model, grounded in readiness filters, turns the tension into a governed trade-off: which outcomes matter most now, which constraints limit execution, and which prerequisite capabilities must be strengthened to make higher-ambition initiatives achievable.
A maturity-based perspective makes the model more reliable by reducing “unknown unknowns.” Readiness and feasibility are not abstract concepts; they reflect measurable capability across technology and data foundations, operating model effectiveness, delivery discipline, and risk-and-control evidence practices. When these capabilities are benchmarked through the DUNNIXER Digital Maturity Assessment, leadership teams gain a shared baseline to validate which initiatives are truly executable, identify where sequencing must be constrained by capability gaps, and increase confidence that portfolio focus reflects both strategic intent and execution reality.
Reviewed by

Ahmed is the Founder & CEO of DUNNIXER and a former IBM Executive Architect with 26+ years in IT strategy and solution architecture. He has led architecture teams across the Middle East & Africa and globally, and served as a contract Strategy Director at EY-Parthenon. An inventor with multiple US patents and an IBM-published author, he works with CIOs, CDOs, CTOs, and Heads of Digital to replace conflicting transformation narratives with an evidence-based digital maturity baseline, a peer benchmark, and a prioritized 12–18 month roadmap, delivered consulting-led and platform-powered for repeatability and speed to decision, including an executive/board-ready readout. He writes about digital maturity, benchmarking, application portfolio rationalization, and how leaders prioritize digital and AI investments.