IM/IT Investment Evaluation Guide
May 2000
Chief Information Officer Branch
Treasury Board of Canada Secretariat
1. Planning Phase: Choosing the Best IM/IT Investments
2. Tracking and Oversight: Manage the Investments by Monitoring for Results
3. Evaluation Phase: Learn From the Process
The goal of the planning phase is to assess and organize current and proposed IM/IT projects and then create a portfolio of IM/IT projects. In doing so, this phase helps ensure that the organization:
A critical element of this phase is that a group of senior executives makes project selection and prioritization decisions based on a consistent set of decision criteria that compare costs, benefits, risks, and potential returns of the various IM/IT projects.
Once the IM/IT projects have been selected, senior executives periodically assess the progress of the projects against their projected cost, scheduled milestones, and expected mission benefits. The type and frequency of the reviews associated with this monitoring activity are usually based on the analysis of risk, complexity, and cost that went into selecting the project, and the reviews are performed at critical project milestones. If a project is late, over cost, or not meeting performance expectations, senior executives decide whether it should be continued, modified, or cancelled.
The evaluation phase provides a mechanism for constantly improving the organization's IM/IT investment process. The goal of this phase is to measure, analyze, and record results, based on the data collected throughout each phase. Senior executives assess the degree to which each project met its planned cost and schedule goals and fulfilled its projected contribution to the organization's mission. The primary tool in this phase is the post-implementation review (PIR), which should be conducted once a project has been completed. PIRs help senior managers assess whether a project's proposed benefits were achieved and refine the IM/IT selection criteria.
This is an assessment of the investment management processes that the organization is following to plan IM/IT investments, control and monitor progress of these investments, and evaluate final results. The central question to be answered is:
"Does the organization have defined, documented processes for planning, selecting, controlling, and evaluating its IM/IT investments?"
The goal in assessing an organization's processes is to identify to what extent the organization has a structure in place for managing and evaluating IM/IT investments.
An important point to remember when making an assessment of existing processes is that the evaluation should be focused solely on the organization's policies, practices, and procedures, not on actual decisions. Having institutionalized management processes, honed to work in the culture of the organization, is critical to producing consistently good results. The investment processes should accurately reflect the way the organization actually functions and makes decisions.
An IM/IT investment process cannot operate without accurate, reliable, and up-to-date data on project costs, benefits, and risks. It is the basis for informed decision-making. In addition, documentation of management decisions is essential to begin to assemble a track record of results. Evaluating the data involved in the IM/IT investment management process requires evaluating two different types of data:
All projects (proposed, under development, operational, etc.) should have complete and accurate project information – cost and benefit data, risk assessments, links to business/program goals and objectives, and performance measures, as well as up-to-date project-specific data, including current costs, implementation plans, staffing plans, and performance levels. In addition, the organization should have qualitative and quantitative project requirements and decision criteria in place to help screen IM/IT projects, assess and rank projects, and control and evaluate the projects as they move through the various phases of their life cycle.
All management actions and decisions that are made should be documented and maintained. Moreover, some decisions require that additional information be produced. For instance, after a project is selected, project-specific review schedules and risk mitigation plans should be developed.
One of the most important goals of this guide is enabling evaluators to assess the effectiveness of the organization's IM/IT investment process and the extent to which it is contributing to the improved mission performance of the organization. After evaluating the processes that the organization uses to plan, monitor, and evaluate IM/IT investments, and the data that are used to make decisions, evaluators will be in a much better position to reach conclusions about the specific decisions that the organization is making. The central focus of analysis is on whether management decisions and actions are being taken using the investment control processes and requisite project data.
The IM/IT investment portfolio should represent a mixture of those projects that best meet the mission needs of the organization. Projects in the portfolio should be consistently monitored and decisions should be made at key milestones to ensure that the project is continuing to have its expected business or programmatic impact with a focus on minimizing risk and maximizing return. Completed projects are evaluated to compare actual performance levels to estimated levels and to feed lessons learned back into the Planning phases.
The IM/IT investment management process begins with the project planning process. Projects being proposed for funding are put through a "coarse-grained" screening process to:
Proposals that pass this screening process have their costs, benefits, and risks analyzed in-depth. Once this is accomplished, all of the projects are compared against some common decision criteria and ranked based on their relative benefits, costs, and risks. Using this prioritized list as a guide, senior managers make decisions about which projects will be proposed for funding for the upcoming year. This post-prioritization decision-making on the appropriate mixture of projects is the essence of IM/IT portfolio analysis. Finally, after these funding decisions have been made, schedules for reviewing projects are established or updated.
The organization should have a process that outlines how to introduce projects for funding and how these projects will be screened for relevancy to business/program goals and objectives and technical soundness. Specifically, the organization should:
The screening process should be established in policy guidance (to ensure that it is conducted consistently) and used at all levels of the organization. As part of the initial screening process, there should be documented screening criteria (minimal requirements) that all projects are expected to meet.
The screening criteria should serve three functions. They should:
On the basis of this screening process, projects will either move on for more in-depth analysis or will be sent back to the originating program group.
The cost, risk, and benefit information of all projects (initial concept, proposed, under development, operational) should be analyzed and assessed in detail.
Each project should have a business case developed that provides the sponsor's justification for the project. The business case should identify the organizational needs that the project is meeting or proposes to meet; provide information on the benefits, costs, and risks of the project; and establish proposed project development time frames and delivery schedules. The information in the business case should be continuously updated to ensure that it always reflects the current situation.
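To make this concrete, the following is a minimal sketch, in Python, of the kind of structured record in which a business case might be captured. The field names and types are illustrative assumptions, not fields prescribed by this guide.

    # Illustrative structure for capturing core business case information.
    # Field names and types are assumptions for demonstration purposes only.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class BusinessCase:
        project_name: str
        organizational_needs: List[str]         # needs the project meets or proposes to meet
        estimated_benefits: Dict[str, float]    # e.g. {"annual savings ($)": 250000.0}
        estimated_costs: Dict[str, float]       # e.g. {"development ($)": 1200000.0, "annual O&M ($)": 150000.0}
        risks: List[str]                        # schedule, cost, technical, and organizational risks
        development_timeframe_months: int
        delivery_milestones: List[str] = field(default_factory=list)
        last_updated: str = ""                  # the business case should always reflect the current situation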
The organization should have some established group or audit function that is responsible for verifying and validating the various analyses (cost/benefit analyses including feasibility studies, risk assessments, and alternatives analyses) and information that are submitted as part of a project's business case. This validation should include:
The organization should have a management information system (MIS) or some other mechanism where all project information is collected and maintained. Such a mechanism, if kept accurate and up-to-date, can make data verification and validation easier by allowing the organization to track costs, risks, etc. over time.
This mechanism for collecting and maintaining project information will also be essential during the Control and Evaluation phases to:
After each project's cost, risk, and benefit information has been examined and validated, all of the projects should be compared against some common decision criteria in order to weigh the relative merits of the projects and develop a prioritized listing of projects.
The criteria used for assessing and ranking projects should consist of elements related to three essential areas: benefits, costs, and risks. Often organizations will establish broad categories related to these three areas and then develop more specific sub-elements under each broad category. For example, an organization may establish risk as a categorical heading and then include schedule risks, cost sensitivity, technical risks, organizational risks, and the risks of not undertaking the project as sub-elements under the risk heading.
Different organizations will break these broad categories and sub-elements out in different ways. For instance, some organizations may include a project's costs as one of several factors under risk, while others break project costs out as a separate category.
Decisions should rarely be made based on one project factor, such as the project's estimated cost or a projection of reduced cycle time. Using an assortment of decision criteria to make decisions allows an organization to take into account and compare the different subtleties of a wide variety of projects.
The organization may assign weights to each of the broad categories, as well as to any sub-elements related to each category, in order to prioritize those factors that the organization considers most significant (e.g., an organization that has limited experience developing systems may give technical risk a greater weight than projected cost). The mixture of weights among the ranking criteria will vary from organization to organization. The weights that are given should take into account the organization's unique mission, capabilities, and limitations. The weighting schema that the organization establishes should be defined and documented. Such documentation is even more important if different weighting approaches are used for different kinds of projects (operational, infrastructure, applications development projects, etc.).
To provide senior managers with an understanding of the relative costs, risks, and benefits of each project compared to the other projects, the organization may develop a scoring model or decision support tool. Such a tool compares the costs, benefits, and risks of each project against the cost, benefit, and risk criteria and assigns a score for each factor. The scores that the project receives for each factor are then added up to produce a cumulative score that establishes the project's relative worth and allows comparison against all other projects.
An important point for an organization in developing such a scoring model or decision support tool is to precisely define the scoring elements. The purpose of these definitions is to ensure more consistent and objective scoring, which helps to eliminate widely varying interpretations and implementations.
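As a minimal sketch of how such a scoring model might work, the Python fragment below weights per-criterion scores and ranks the results. The criteria, weights, raw scores, and project names are illustrative assumptions, not values recommended by this guide.

    # Illustrative weighted scoring model for ranking IM/IT project proposals.
    # Criteria weights and raw scores (0-10 scale) are assumptions for demonstration only.

    CRITERIA_WEIGHTS = {
        "benefits": 0.40,   # contribution to business/program goals
        "costs":    0.25,   # life-cycle cost relative to available funding (higher score = lower cost)
        "risks":    0.35,   # schedule, technical, organizational risk (higher score = lower risk)
    }

    def weighted_score(raw_scores):
        """Combine per-criterion scores into a single cumulative score."""
        return sum(CRITERIA_WEIGHTS[c] * s for c, s in raw_scores.items())

    proposals = {
        "Case management replacement": {"benefits": 8, "costs": 5, "risks": 4},
        "Infrastructure refresh":      {"benefits": 6, "costs": 7, "risks": 8},
        "EDI pilot":                   {"benefits": 7, "costs": 8, "risks": 6},
    }

    # Rank proposals from highest to lowest cumulative score.
    for name in sorted(proposals, key=lambda n: weighted_score(proposals[n]), reverse=True):
        print(f"{name}: {weighted_score(proposals[name]):.2f}")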
The criteria for comparing and ranking projects should be used uniformly across the organization (i.e., unit-, division-, directorate-, Region-, and Sector-level decisions should be made using criteria similar to those used for Branch- or Department-level decisions). Although different levels of the organization may use additional criteria, the organization should have a set of minimum criteria that are used enterprise-wide. Using some common decision criteria provides greater assurance that the organization is selecting projects consistently and helps to avoid "apples versus oranges" project comparison problems.
There should also be incentives to ensure compliance with the process and to dissuade gamesmanship. The organization should identify who is responsible for enforcing the process and there should be explicit consequences for noncompliance.
The organization should have a senior management decision-making body, made up of program and financial managers, that makes decisions about which projects to fund for the year based on its determination of where organizational needs are greatest. Such a determination will usually be made by analyzing the gap between the organization's goals and objectives (as highlighted in its strategic and annual performance plans) and the organization's existing capacity.
The roles and responsibilities of the IM/IT investment review group should be clearly identified and documented. The organization should also identify how this group will go about making decisions. This should include establishing how decisions will be reached, how conflicts will be handled, and how stakeholder input will be brought into the process.
The investment review group will make decisions about which projects to propose for funding, using the list of ranked projects as a key input. As the group goes about making these decisions, a number of trade-offs will have to be made. For instance, the group will need to decide how much should be spent to continue operating and maintaining existing systems, versus funding enhancements to current systems, versus funding systems that are currently under development, versus funding new projects, versus funding research projects that assess the applicability of emerging technologies. The group must also determine the proportions that will be spent on the various IM/IT types (i.e., research and development, administrative, mission critical, infrastructure, etc.) and take into account dependencies among projects.
The decision-making process should help address difficulties associated with using different units of measure for analyzing different kinds of IM/IT projects, as well as a balancing of "soft" versus "hard" quantitative data.
To aid the investment review group in making trade-offs between various project types and phases, the organization may maintain a data repository that contains historical information on expenditures in different IM/IT investment categories (operations and maintenance, enhancements to current systems, new systems development, research into developing or applying emerging technologies, etc.). By maintaining this information, the organization can review how much was spent previously and factor this into current spending decisions.
As part of the process of making trade-offs and determining spending priorities, the organization may also conduct a review (in-house or via an outside consultant or expert) of its current IM/IT spending portfolio to assess alignment with mission needs, priorities, strategic direction, major process re-engineering, etc. This review may include a trend analysis to show how patterns of investment and spending have changed, as well as an analysis to estimate how the spending pattern may change with the proposed IM/IT portfolio.
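A minimal sketch, in Python, of such a historical spending repository and trend summary is shown below. The fiscal years, investment categories, and dollar amounts are illustrative assumptions.

    # Illustrative aggregation of historical IM/IT spending by investment category.
    # Fiscal years, categories, and amounts are assumptions for demonstration only.
    from collections import defaultdict

    expenditures = [
        # (fiscal year, investment category, amount in dollars)
        (1998, "operations and maintenance",      4200000),
        (1998, "enhancements to current systems", 1100000),
        (1998, "new systems development",         2000000),
        (1999, "operations and maintenance",      4600000),
        (1999, "enhancements to current systems",  900000),
        (1999, "new systems development",         2400000),
        (1999, "emerging technology research",     300000),
    ]

    totals = defaultdict(lambda: defaultdict(int))
    for year, category, amount in expenditures:
        totals[year][category] += amount

    for year in sorted(totals):
        year_total = sum(totals[year].values())
        print(f"FY{year} total: ${year_total:,}")
        for category, amount in sorted(totals[year].items()):
            print(f"  {category}: ${amount:,} ({amount / year_total:.0%})")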
No matter how rigorous or structured the organization's decision-making process is, decisions about which projects to select for funding are ultimately managerial decisions. If senior managers select projects that score low when compared to other projects (e.g., high-risk, high-return projects), the justification for these decisions should be documented and the project's progress should be closely monitored during the Control phase. Such exceptions should be kept to a minimum, however, to preserve the integrity of the decision-making process.
The process of reviewing and selecting IM/IT projects should be explicitly linked with other business processes (e.g., planning, budgeting, acquisition). Most investment decisions should mirror a planning decision or business objective and should be reflected in related budgeting documents and decisions.
The investment review group's responsibilities will usually not end once it has decided upon the mix of projects that will be proposed to comprise the current year's investment portfolio. Instead, the group should meet on a regular basis (often quarterly) to discuss the status of projects and to make further project decisions. The group may also be responsible for reviewing investment portfolio decisions that were made by lower-level organizational units.
After making funding decisions, each project that was selected should have a review schedule established for it, or should have its current review schedule assessed and updated as needed. The time frames for these reviews will depend on various project-specific factors (amount of risk, investment size, mission importance, capability of the project team, etc.).
It is important that these reviews be conducted on a regular, scheduled basis. These reviews do not necessarily have to coincide with major project milestones. Moreover, "review triggers" should be established that automatically require a management review meeting. For example, a cost, schedule, or performance deviation of 10% or greater might require an immediate project review.
Good decisions require good data. Ensuring that each project meets established screening and ranking requirements and that the project's information is accurate and up-to-date is essential for ensuring that the most critical needs of the organization are being met by the projects and systems that are selected. In addition, the ex post information generated during this phase on the basis of the planning decisions that are made, such as project review schedules and risk mitigation plans, is critical for controlling and evaluating projects during the next two phases.
The efficiency of the investment management process depends initially upon how well the organization is ensuring that all projects meet initial project acceptance requirements and that necessary project proposal and justification steps have been performed. There should be evidence that each project that is submitted has been screened, analyzed, and evaluated according to processes and criteria established by the organization.
The information that is analyzed may include verification that all requisite planning data were submitted, that answers were received for all relevant questions, that projects met business/program goals and conformed to the agency's information technology architecture, and that projects that did not meet these requirements were not allowed to move on for further review and consideration. There should also be evidence demonstrating that all business units adhered to organizational policies and procedures regarding the screening and acceptance of projects.
Much of the evidence that will be reviewed will consist of cursory completeness and quality checks. For instance, if the organization has requirements that all projects over a certain cost threshold must
There should also be evidence that information that was submitted was validated by a quality assurance/control function. Such validation can be performed by in-house quality control/quality assurance staff, internal audit staff (e.g., inspector general), etc. The project information should also be verified to ensure that it is accurate and reflects the most up-to-date information.
All project information should be up-to-date, cost numbers should be accurate, benefits should be quantified to the extent possible, risks should be spelled out, alternatives should be identified, and sensitivity analyses should have been conducted.
Each project that is submitted should have a business case prepared that provides justification for the project. Included in the business case should be identification of the project's functional requirements and estimates of the project's life-cycle costs, benefits, and risks (to the extent possible), as well as the corresponding analyses that were conducted to develop the estimates. Making accurate cost savings estimates and benefit determinations requires having at least a rudimentary understanding of the baseline costs and benefits from existing IM/IT capabilities.
A key analysis that should almost always be submitted with project proposals is a cost/benefit analysis. A complete cost/benefit analysis should
The amount of rigor and types of analyses that are conducted will depend, in part, on the size of the investment and the amount of risk. It may not be economical to conduct an in-depth cost-benefit analysis for a low-cost, low-risk project that only affects a specific division or office or a limited number of users. The organization should have a process that outlines what project data are required given each project's type, cost, and risks; variation in the quality or type of data should not be ad hoc.
Listed below are some of the cost, risk, and benefit elements that an organization should keep in mind as it develops IM/IT project estimates.
Costs (recurring and non-recurring)
Risks
Benefits (will usually consist of both tangible and intangible benefits)
In identifying and measuring IM/IT benefits, it is important to always remember the business function or process that is being supported by the technology. For instance, the benefits that are gained from implementing EDI technology are derived from the increased capability and efficiency that the technology provides to the organization and its customers.
All of the information in the business case should be as up-to-date and accurate as possible. If the analyses are to yield meaningful results, it is essential that the project team carefully formulate assumptions, identify feasible alternatives, and provide realistic cost and benefit estimates.
Most agencies have criteria or methodologies detailing how cost/benefit analyses are to be conducted and what should be included. In addition, OMB Circular A-94 provides guidance and discount rates for conducting cost/benefit analyses.
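For illustration, the Python fragment below sketches a discounted (net present value) comparison of a project's costs and benefits. The cash flows and the 7 per cent discount rate are assumptions for demonstration only; the rate actually used should be taken from the applicable guidance.

    # Illustrative net present value (NPV) calculation for a project's costs and benefits.
    # Cash flows and the discount rate are assumptions; use the rate from applicable guidance.

    def net_present_value(net_cash_flows, discount_rate):
        """Discount a list of annual net cash flows (year 0 first) to present value."""
        return sum(cf / (1 + discount_rate) ** year
                   for year, cf in enumerate(net_cash_flows))

    annual_costs    = [1500000, 400000, 400000, 400000, 400000]   # years 0-4
    annual_benefits = [0,       900000, 900000, 900000, 900000]

    net_flows = [b - c for b, c in zip(annual_benefits, annual_costs)]
    npv = net_present_value(net_flows, discount_rate=0.07)

    print(f"Net present value over five years: ${npv:,.0f}")
    print("Benefits exceed costs." if npv > 0 else "Costs exceed benefits.")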
Because all projects (ongoing, under development, etc.) go through the Planning process (usually on an annual basis), portfolio data from previous years should be available to assess and compare previously selected projects. The spending, cost, and obligation data in this portfolio should be up-to-date and categorized in ways that are most meaningful to organization management.
An agency's cost accounting system should be able to distinguish between what has been obligated to date and what is still available, as well as identify what the incurred costs to date were for. In addition, the system may be able to split spending into more specific categories, such as development, operations, maintenance, etc. (Activity-based cost tracking, for example, should provide this detail.)
There are several pieces of information that should arise out of the Planning phase, based on the actual decisions that are made. This information includes:
The organization should also be maintaining net cost and benefit information on the complete portfolio of IM/IT investments.
Finally, all of the projects that were selected for funding should be included in the Organization Capital Plan that is submitted to OMB. Information that is submitted in this plan should include baseline cost, schedule, and performance goals for each project.
In addition to the Capital Plan, decisions that are made on the mix of existing and new projects should be clearly identified in the agency's annual performance plans. Actions described in the Capital Plan to implement the funding, procurement, and management of the IM/IT projects should also be articulated in these performance plans.
All projects that are selected for funding should have project review schedules, risk management plans, and project-specific performance measures established. All of this information will be particularly critical for assessing performance, identifying risks, and making decisions during the Control and Evaluation phases.
The timing of reviews, as well as the number of reviews that will be conducted, will depend on the investment size of the project, the amount of risk, the capability of the project team, etc.
In addition, the investment review group may identify additional project management or investment review reporting requirements (data, information, analysis), beyond what is specified by existing processes, for projects that it determines are particularly high-risk. These additional requirements should be clearly documented and communicated to the responsible project team. The project team should also be given an explanation detailing how this information, and its assessment by senior management, may influence project continuation, delay, or cancellation.
At some point the project team should develop an outline or strategy describing how any necessary acquisitions will be handled. Key tenets of a sound acquisition strategy are that it appropriately allocates risk between the organization and the contractor, effectively uses competition, ties contract payments to accomplishments, and takes maximum advantage of commercial technology.
The purpose of the Planning phase is to put the organization in the best possible position to make decisions about which IM/IT proposals or projects to fund. Getting to this final decision requires that initial decisions be made about whether proposed projects should be moved on for further consideration. It then requires decisions to be made about the relative merits of each individual project. This is followed by the most important decisions, in which tradeoffs are made between the various projects and systems in order to develop the IM/IT investment portfolio that will be funded for the upcoming year.
All new projects should have a decision made about whether the project meets all minimal project requirements, at what organizational level the project should be reviewed, and the level of analytical rigor necessary for decisions. While these screening decisions should be relatively straightforward, driven primarily by project-level data sufficiency, they should not be thought of as simply a cursory exercise. The overall efficiency and effectiveness of the entire Planning phase depends to a large extent upon these initial screening decisions.
The organization should also have a process for determining where in the organization a funding decision should be made. The efficiency of the investment management process is significantly affected by how well the organization identifies which projects should be reviewed where. Senior decision makers should not spend their time in lengthy, in-depth reviews of projects that could have been easily assessed and decided upon at lower organizational levels.
Decisions made at this stage are the most important of all. The projects that are proposed to make up the investment portfolio for the year should represent the best match with organizational needs and business objectives or, in instances where exceptions were made, an explanation should be provided detailing reasons for the exception.
In making the planning decisions, senior managers should be taking into account tradeoffs between the various projects and systems that are going to be funded. Making these tough choices requires the organization to develop and maintain an understanding that not every project or system can be funded. Spending more for operational systems may mean that there is less money for research and development. The relative merits of each project should be rigorously assessed and analyzed in order to prioritize and select those projects that best match the most critical business needs of the organization.
In addition, projects selected for the portfolio should have decisions made about how often they will be reviewed and how associated risks are going to be managed.
Achieving maximum benefits from a project, while minimizing risks, requires that the project be consistently monitored and managed for successful results. During the Tracking and Oversight phase, agency executives should be actively engaged in monitoring all of the projects in the investment portfolio, making decisions and taking actions to change the course of a project when necessary, and incorporating their experiences back into the Selection phase to further refine and improve the process.
Each project should be reviewed at key milestones in its life cycle (a project review schedule should have been approved when the initial funding decision was made). The focus of these reviews should expand as projects move from initial design and pilot through full implementation and as the dollar amounts that are expended increase.
A low-cost, small-scale research and development project being conducted to determine the applicability of a systems technology to a business process requirement might receive limited review other than assessing whether the general approach is sound and feasible. However, projects that are preparing for limited field or full-scale implementation should be reviewed in depth, including cost and performance to date, to ensure that the project delivers promised benefits within cost and risk limitations and to correct any problems before significant dollars are expended.
In addition, as the reviews are conducted, the context of the program that the system or project supports should be factored in. For instance, a project may exceed performance expectations, but if it is contributing to a program that is failing or is no longer needed, then little is gained for the organization.
The project reviews should assess several aspects of the project and its development. Below are examples of assessment categories that should be considered as part of the project reviews:
The organization should have some standard policies that help ensure that these different categories are assessed uniformly across the organization; however, the measures that are used to evaluate each project will be specific to that project. For instance, the organization may have a requirement that all projects have their schedules reviewed, but the schedules that are reviewed will be different for each project.
The problem with many progress reviews is that they focus almost exclusively on cost and schedule concerns. While these factors are important, the prime focus of progress reviews should be on ensuring that benefits are being accomplished, that risks are being managed, and that the project is still meeting strategic needs. As noted earlier, "review triggers," such as updated benefit/cost ratios or ROI thresholds, done in conjunction with schedule and spending checks, can help the organization determine when actions need to be taken.
The organization should have a documented process detailing how reviews will be conducted, what data and project information is required, and how decisions will be made based on the results of the project reviews. This process should include identifying roles and responsibilities for making decisions, as well as rules for how the project decisions will be made.
Some organizations use a traffic-light method to help make project decisions. Projects are given red, yellow, or green lights depending on how the project rated against expected performance measures. Yellow lights indicate that management action is necessary to avoid potential problems affecting project outcomes. Red lights indicate that major problems have already occurred. (As with all reporting and scoring mechanisms, it is critical that the organization define the conditions associated with each element.) The following is an example of a traffic light tracking and oversight process:
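A minimal sketch, in Python, of how such a traffic-light determination might be automated is shown below. The 10 and 20 per cent deviation thresholds and the project figures are illustrative assumptions, not thresholds prescribed by this guide.

    # Illustrative traffic-light status assessment based on cost and schedule deviation.
    # Thresholds (10% and 20%) and project figures are assumptions for demonstration only.

    def deviation(actual, planned):
        """Fractional deviation of actual from planned (positive = worse than plan)."""
        return (actual - planned) / planned

    def traffic_light(planned_cost, actual_cost, planned_months, actual_months):
        worst = max(deviation(actual_cost, planned_cost),
                    deviation(actual_months, planned_months))
        if worst >= 0.20:
            return "RED"     # major problems have occurred; senior management decision required
        if worst >= 0.10:
            return "YELLOW"  # management action needed to avoid problems affecting outcomes
        return "GREEN"       # tracking to plan; continue as is

    projects = {
        # name: (planned cost, forecast cost, planned months, forecast months)
        "Case management replacement": (2000000, 2500000, 18, 24),
        "Infrastructure refresh":      (1000000, 1050000, 12, 12),
    }

    for name, figures in projects.items():
        print(f"{name}: {traffic_light(*figures)}")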
The organization should also have an independent audit team, quality assurance group, or independent validation and verification (IV&V) contractor who is responsible for ensuring that project information is valid and verifying that corrective actions have been taken. In addition, the organization should have procedures in place to ensure that information from this quality assurance function is integrated into the project review process.
Finally, the organization should have mechanisms in place to ensure that project teams are complying with the tracking and oversight process. This may include incentives for raising problems to senior managers and disincentives for noncompliance.
Project reviews, while helping to ensure accountability, should not be viewed simply as a "gotcha" opportunity in which project managers are punished when problems are identified. Rather, the reviews should be considered opportunities for raising problems early, when they may be easier to address, rather than allowing problems to be buried, creating a risk that they will arise later when costs are higher and the potential impact is greater.
Senior managers (particularly program managers) should be actively involved in the ongoing project reviews and are responsible for making decisions about whether to continue, accelerate, modify, or cancel a project. While members of the development team can, and should, be part of the decision-making process, they should not have unilateral responsibility or authority to make all project decisions. In addition, site executives and project managers should take part in devising and approving the solution to any problems that are identified.
All of the information in the business case, including the various analyses that were conducted to justify the project, should be updated to reflect the current state as project implementation continues and dollar amounts increase.
Some leading organizations report that they often cannot accurately estimate costs or quantify benefits until a project is almost 40 percent complete.
The organization should have a uniform mechanism (e.g., management information system) for collecting, automating, and processing data on expected versus actual outcomes. Specifically, this mechanism should: provide the cost and performance data needed to monitor and evaluate investments individually and strategically, provide feedback on the project's adherence to strategic initiatives and plans, and allow for the review of unexpected costs or benefits that resulted from investment decisions.
Data in this system should be easily accessible to both the program team and senior managers.
Collecting and maintaining project information is important, not only from a project review standpoint, but also from the standpoint of establishing an organizational memory. Decisions in all three phases of the investment cycle (Planning, Tracking and Oversight, and Evaluation) will depend on this information being accessible and up-to-date.
Information learned during the Tracking and Oversight phase should be fed back into the Planning phase to help make future selection decisions and to modify and enhance the screening and selection decision criteria. To make this easier, there should be some mechanism in place for aggregating decisions and actions in order to identify patterns of problems or, conversely, patterns of excellence.
The warning signs that, in hindsight, preceded a problem should be documented, along with the remedial steps that were taken and the outcome of that approach. Such documentation will help inform future acquisition decisions and identify recurring problems on existing programs.
Because the Selection phase usually occurs only once a year during the annual budget process, project information for that phase tends to be collected and assessed on a periodic basis. In contrast, information in the Tracking and Oversight phase is continuously collected, updated, and fed to agency decision makers. The data in the Tracking and Oversight phase should consist of such items as comparisons of actual results achieved to date versus estimates and an assessment of benefits achieved as part of project pilots or prototypes. Data collected during this phase will also consist of ex post documentation such as executive decisions and changes made to projects to address risks or better meet business requirements. The type and depth of data that are collected and maintained in this phase should be commensurate with the type and size of the project.
As projects move from one phase of the project's life cycle to the next, and as the dollars that are expended increase, interim results should be compared against estimates to ensure that the project is progressing as expected and to indicate when actions should be taken as problems arise or requirements change.
The organization should have project-specific measures established to help analyze actuals versus estimates, ensure that the project is meeting business requirements, and identify where improvements may be needed. These measures will consist of items such as cost and schedule information, quantitative and qualitative benefit measures, status of deliverables, risk elements, etc. The measures should be updated as actual costs, risks, and benefits are identified.
Using these measures, the organization should identify and monitor interim results that are achieved. The following are examples of the kinds of data that should be analyzed:
Accumulation of actual cost data and comparisons to estimated cost levels.
Evidence that results for the phase (or results to date) have been compared against initial estimates for cost, schedule, performance, risk, and return.
Documentation of the change between the current number and scope of requirements and the original requirements baseline established for the project.
Documentation of the comparison between current business conditions and assumptions and the project's initial assumptions and context.
Analysis, after the release of each new increment, of each participating project to determine what interim benefits have been achieved in comparison to the previous increment.
Documentation of differences between the actual performance of the software organization or contractor and their claims at the beginning of the project (e.g., schedule, costs, functionality, technical solutions, etc.).
Aggregate data covering costs, benefits, and project performance for all IM/IT projects in the investment portfolio.
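As a minimal sketch of such a comparison, the Python fragment below reports the variance between interim actuals and original estimates for a single project. The measure names and figures are illustrative assumptions.

    # Illustrative comparison of interim actuals against original estimates for one project.
    # Measure names and values are assumptions for demonstration only.

    estimates = {"cost to date ($)": 800000, "elapsed schedule (months)": 9,  "requirements delivered": 40}
    actuals   = {"cost to date ($)": 920000, "elapsed schedule (months)": 11, "requirements delivered": 35}

    print(f"{'Measure':<28}{'Estimate':>12}{'Actual':>12}{'Variance':>10}")
    for measure, estimated in estimates.items():
        actual = actuals[measure]
        variance = (actual - estimated) / estimated
        print(f"{measure:<28}{estimated:>12,}{actual:>12,}{variance:>10.0%}")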
The cost, benefit, schedule, and risk information that was included in the business case, including the various analyses that were conducted to justify the project, should be updated as project implementation continues and as dollar amounts increase. For instance, it may have been difficult to precisely estimate costs and benefits when the project was first proposed, but such quantification may be improved as prototype and pilot project results become available.
Information and analyses in the business case should also be updated to provide justification for adding additional functional requirements to the project. This justification should weigh the costs of adding the requirement late in the development process versus the anticipated benefits that are expected from the added functionality.
Older versions of these analyses should be maintained for later comparisons and to feed lessons learned back in to the Selection phase.
The primary focus of the Tracking and Oversight phase should be on making project management decisions. Actions should be taken quickly to address problems as they are identified and senior managers should be actively involved in making decisions about all of the projects in the investment portfolio. While many of these decisions will be implicit (the project is right on course, no problems have been identified, requirements have remained the same and, thus, the decision will usually be to continue the project as is), it is critical, nonetheless, that a conscious decision be made about the future of each project.
As each project is reviewed at various stages in its life-cycle, decisions should be made about the future of the project. These decisions will be unique for each particular project and should be based on the particular merits of the project. In addition, some explanation or documentation of the decision should be included. Even implicit decisions should have some documentation to show that a conscious decision was made to continue the project.
Projects that have deficiencies or problems identified (actuals exceed estimated levels, risks are increasing, requirements have changed, etc.) should have a decision made by senior managers about what to do with the project. Decisions will usually involve one of four alternatives: modify the project; cancel the project; continue the project as is; or accelerate the project's development.
Decisions may also be made to suspend funding or to make future funding releases conditional on corrective actions being taken. These decisions should be documented, along with an explanation or documented criteria stating what must occur before funding is reinstated. The decisions should also be reflected in budget information. For instance, if a project's development is halted while the feasibility of an alternative is assessed, budgetary spending information should reflect such a halt in funding.
In addition, depending on what decisions are made about a project, future "cascading" actions resulting from these decisions should be clearly identified and delineated. For instance, halting the development of a project will impact a number of other areas, including project management, personnel and staffing decisions, budget decisions and spending priorities, etc. These cascading actions should also be reviewed and monitored to ensure that money is not continuing to be spent and that all development activities have ceased.
An independent review should be conducted prior to funding being reinstated to ensure that all corrective actions have been taken and to determine whether additional changes or modifications are still needed.
A review of past activities and decisions made about a particular project can influence both current and future managerial decisions. This is a primary reason why aggregating information is important. Aggregating allows trends to be more easily identified. Looking at projects across an agency or bureau, for example, can help pinpoint those divisions that have had repeated success at developing and managing IM/IT projects, and those that have had more trouble. This in turn can be used as inputs for decision makers when weighing organizational capability risks and determining project review schedules.
Aggregating can also help as the organization looks to refine and improve the screening and selection criteria and performance measures. Data can be aggregated by project, or can be grouped along unit, divisional, bureau, or agency lines.
Problems that are identified from this analysis may serve as an indication of specific endemic weaknesses with project management, contractor oversight, or cost-estimation practices that need revision and corrective action. In addition, positive trends that are identified can provide valuable lessons for highlighting and reinforcing organizational strengths.
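A minimal sketch, in Python, of aggregating project outcomes along organizational lines is shown below. The units, projects, and outcome flags are illustrative assumptions.

    # Illustrative aggregation of project outcomes by organizational unit to surface
    # patterns of repeated success or recurring trouble. All data are assumptions.
    from collections import defaultdict

    # (organizational unit, project, delivered on budget?, delivered on schedule?)
    project_outcomes = [
        ("Benefits Processing", "Claims portal",        True,  True),
        ("Benefits Processing", "Document imaging",     True,  False),
        ("Corporate Services",  "HR system upgrade",    False, False),
        ("Corporate Services",  "Financials interface", False, True),
    ]

    successes_by_unit = defaultdict(list)
    for unit, project, on_budget, on_schedule in project_outcomes:
        successes_by_unit[unit].append(on_budget and on_schedule)

    for unit, results in successes_by_unit.items():
        rate = sum(results) / len(results)
        print(f"{unit}: {sum(results)}/{len(results)} projects on budget and on schedule ({rate:.0%})")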
The Evaluation phase "closes the loop" on the IM/IT investment management process by comparing actuals against estimates in order to assess performance and identify areas where future decision making can be improved. Lessons that are learned during the Evaluation phase should be geared towards modifying future Selection and Control decisions. Central to this process is the post-implementation review with its evaluation of the historical record of the project.
Once a project has reached a final end point (e.g., the project is fully implemented, the project has been cancelled, etc.), a post-implementation review (or post-investment review) should be conducted. This review will usually occur about 3 to 12 months after a project has reached its final end point and should be conducted by a group other than the project development team to ensure that it is conducted independently and objectively.
Organizations often spend significant time and resources focused on selecting IM/IT projects, but less attention is given to evaluating projects after they are implemented. Yet the information gained from PIRs is critical for improving how the organization selects, manages, and uses its IM/IT resources.
Each PIR that is conducted should have a dual focus - it should:
To ensure that each project is evaluated consistently, the organization should have a documented methodology for conducting PIRs. This methodology, which should be used at all organizational levels, should spell out roles and responsibilities for conducting reviews and for taking actions based on the results. PIRs should be required on a regular basis to ensure that completed projects are reviewed in a timely manner. The organization should also have policies or procedures that document how information from the PIRs is to be relayed back to decision makers.
Finally, because there is a great deal of knowledge that can be gained from failed projects, evaluations should also be conducted for projects that were cancelled prior to being fully implemented. Although project accountability is important, these evaluations should focus on identifying what went wrong with the project, in order to learn from mistakes and minimize the chances of their being repeated.
All of the PIR information gained in the Evaluation phase should be collected and maintained with project information gathered during the other two phases. Developing this complete library of project information helps to establish an organizational memory in which both successes and failures can be used for learning.
There should be some mechanism or process to ensure that information is being aggregated and fed back in to improve the investment management process. For instance, the cost, risk, and benefit criteria (including the weights given to each) for the Selection phase may be refined to ensure greater implementation success of future projects. The organization should also determine whether there may be different or more appropriate cost, benefit, and risk measures that could be established that would help better monitor projects.
Data collected during the Evaluation phase will be primarily historical in nature, focusing on the outcome of a project as compared to executives' expectations for the project. In addition, the ex post information that is developed should include modifications made to the Selection and Control phases, as well as institutionalized lessons-learned information. This information should be used to revise the Selection and Control phases and to help make future investment decisions.
The focus of the PIR should be on evaluating a project's actual results compared to estimates in terms of cost, schedule, performance, and mission improvement outcomes. An attempt should also be made to determine the causes of major differences between planned and end results. And the PIR should be used to help identify any inappropriate systems development and project management practices.
The PIR should provide a wide range of information regarding both the project and the process for developing and implementing the project. Specific information includes the following:
Outputs of the PIR should include user evaluations of the effectiveness of the project, actual costs broken out by category, measurements used to calculate benefits, a comparison matrix of actuals to estimates, and business-as-achieved documentation.
The organization should be maintaining documentation of all decisions, changes, actions, and results that occurred throughout the project's life cycle, as well as other relevant project information, such as the business case and updated cost/benefit analyses. The organization should also be tracking recommendations (for both improving the project and refining the overall investment management process) that arise out of the PIRs.
This "track record" will be invaluable for helping the organization refine and improve its processes as more and more information is collected and aggregated.
A number of key decisions will be made during the Evaluation phase, including an assessment of how well the project met its intended objectives, a determination of what changes or modifications to the project are still needed, and an identification of ways to modify or improve the overall investment management process to better maximize results and minimize risks. In addition, the organization may assess the overall performance of its IM/IT investments in improving mission performance. To make these decisions, agency executives must gauge the degree to which past decisions have influenced the outcome of IM/IT projects, understand why these decisions had the effect that they did, and determine how changing the processes for making decisions could create a better outcome for current IM/IT projects and future IM/IT proposals.
The results and recommendations that arise out of the PIRs, combined with other project information, are a critical input for senior decision makers to use to assess the project's impact on mission performance. In making this assessment, senior managers will need to ask a number of questions about the project, including the following:
Even after a project has been implemented, decisions should be made on a regular basis about the status of the project. Senior managers should regularly question whether
In addition, because operation and maintenance (O&M) costs, for such activities as hardware upgrades, system software changes, and ongoing user training, can consume a significant amount of IM/IT resources (some have estimated that ownership costs, operations and maintenance costs, and disposition costs can consume as much as 80 percent of a project's total life-cycle costs), a plan should be developed for the continued support and operation of each IM/IT project.
An organization's investment management process will usually not remain static, but will evolve and change over time as the organization learns more and more about what has been successful and what still needs to be improved. Modifications that may be made to the process include the following:
The results from one project will often not provide enough information to allow significant modifications to be made to the agency's IM/IT decision-making processes. However, a significant, recurring system development problem found across multiple projects over time would be cause for refining or even significantly revising the organization's decision-making processes and criteria.
The causes of differences between planned and actual results should be determined, and corrective actions to the overall IM/IT management process, decision criteria, etc. should be identified and documented. Once these causes have been determined, steps should be taken to address them in order to ensure greater success in the future.
All alterations or updates that are made to the Planning and Tracking and Oversight phases, based on the results of PIRs, should be documented.
PLANNING PHASE

Process:
Screening New Projects
Analyzing and Ranking All Projects Based on Benefit, Cost and Risk Criteria
Selecting a Portfolio of Projects
Establishing Project Review Schedules

Data:
Evidence that Each Project Has Met Project Submission Requirements
Analyses of Each Project's Costs, Benefits and Risks
Data on the Existing Portfolio
Scoring and Prioritization Outcomes
Project Review Schedules

Decisions:
Determining Whether Projects Met Process-Stipulated Requirements
Deciding Upon the Mixture of Projects in the Overall IM/IT Investment Portfolio

TRACKING & OVERSIGHT PHASE

Process:
Consistently Monitoring Projects
Involving the Right People
Documenting All Major Actions and Decisions
Feeding Lessons Learned Back Into the Planning Phase

Data:
Measures of Interim Results
Updated Analyses of Each Project's Costs, Benefits, Schedule and Risks

Decisions:
Deciding Whether to Cancel, Modify, Continue or Accelerate a Project
Aggregating Data and Reviewing Collective Actions Taken to Date

EVALUATION PHASE

Process:
Conducting Post-Implementation Reviews (PIRs) Using a Standard Methodology
Feeding Lessons Learned Back Into the Planning and Tracking & Oversight Phases

Data:
Measurements of Actual vs. Projected Performance
Documented "Track Record" (Project and Process)

Decisions:
Assessing Projects' Impact on Mission Performance and Determining Future Prospects for the Project
Revising the Planning and Tracking & Oversight Phases Based on Lessons Learned