A recommended approach for developing function point counts is to first functionally decompose the software into its elementary functional components (base functional components). This decomposition can be illustrated graphically in a functional hierarchy, which provides a pictorial ‘table of contents’ or ‘map’ of the functionality of the application to be delivered. This approach has the advantage of easily conveying the scope of the application to the user, illustrating not only the number of functions delivered by each functional area but also the comparative size of each functional area measured in function points.
If the software to be developed is planned to replace existing production applications, it is useful to assess whether the business will be delivered more, less or the same functionality. The replacement system’s functionality can be mapped against that of the existing system, and a quantitative assessment of the difference can be measured in function points. Note that this comparison is only possible if the existing applications have already been sized in function points.
Multiplying the size of the application to be replaced by an estimate of the dollar cost per function point to develop enables project sponsors to develop quick estimates of replacement costs. Industry-derived costs are available and provide a ballpark figure for the likely cost. Industry figures are a particularly useful reference if the re-development is for a new software or hardware platform not previously experienced by the organisation. Ideally, organisations should establish their own ‘cost per function point’ metrics for their particular environment based on project history.
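As a sketch, the quick estimate is simply size multiplied by rate. The figures below are hypothetical; in practice they would come from your own count and cost history, or from industry sources:

```python
# Quick replacement-cost estimate: size (FP) x dollar cost per FP.
# Both figures below are hypothetical, for illustration only.
app_size_fp = 1200        # measured size of the application to be replaced
cost_per_fp = 800.0       # assumed dollar cost per function point

replacement_cost = app_size_fp * cost_per_fp
print(f"Ballpark replacement cost: ${replacement_cost:,.0f}")
```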
If you are considering implementing a ‘customised off-the-shelf’ package solution, then this calculation provides a quick comparison of the estimated package implementation costs with those of an in-house build. Package costs typically need to include the cost of re-engineering the business to adapt the current business processes to those delivered by the package. These costs are usually not a consideration for in-house developed software.
Initial project estimates often exceed the sponsor’s planned delivery date and budgeted cost. A reduction in the scope of the functionality to be delivered is often needed so that the project can be delivered within predetermined time or budget constraints. The functional hierarchy provides the ‘sketch-pad’ for scope negotiation; that is, it enables the project manager and the user to work together to identify and flag (label) those functions which are:
The scope of the different scenarios can then be quickly determined by measuring the functional size of each scenario. For example, the project size can be objectively measured to determine what the size (and cost and duration) would be if:
This allows the user to make more informed decisions on which functions will be included in each release of the application based on their relative priority compared to what is possible given the time, cost and resource constraints of the project.
Functionally sizing the requirements for the application quantifies the different types of functionality delivered by an application. The function point count assigns function points to each of the function types:
Industry figures available from the ISBSG Repository for projects measured with IFPUG function points indicate that ‘complete’ applications tend to have consistent and predictable ratios of each of the function types. The profile of functionality delivered by each of the function types in a planned application can be compared with the typical profile from implemented applications, to highlight areas where the specifications may be incomplete or where there may be anomalies.
The following pie chart illustrates the function point count profile for a planned Accounts Receivable application compared to that from the ISBSG data. The reporting functions (outputs) are lower than predicted by industry comparisons. Incomplete specification of reporting functions is a common phenomenon early in a project’s lifecycle and highlights the potential for substantial scope creep later in the project as the user identifies all their reporting needs.
The quantitative comparison below shows that the reporting requirements were about half those expected (14% of the total function points compared to the expected 23%). The project manager in this case verified with the user that the first release of the software would require all reporting functions, and the user indicated that more reports were likely to be specified. The project manager therefore increased the original count to allow for the extra 9% and based his early project estimates on the higher figure, which was more likely to reflect the size of the delivered product. The function point measurement activity enabled the project manager to quantify the potential missing functionality and justify his higher, more realistic estimate.
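The adjustment is straightforward arithmetic. The sketch below works through it using the percentages quoted above and a hypothetical total count of 500 function points:

```python
# Adjusting an early count for likely-missing reporting functionality.
# The 14% and 23% shares come from the comparison above; the 500 FP
# total is a hypothetical figure for illustration.
total_fp = 500
measured_output_share = 0.14     # outputs' share of the draft count
expected_output_share = 0.23     # typical share in the industry profile

shortfall = expected_output_share - measured_output_share   # ~9% of total
adjusted_fp = round(total_fp * (1 + shortfall))
print(f"Shortfall: {shortfall:.0%} of the count")
print(f"Estimate base adjusted from {total_fp} to {adjusted_fp} FP")
```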
Once the scope of the project is agreed, the estimates for effort, staff resources, costs and schedules need to be developed. If productivity rates (hours per function point, dollar cost per function point) from previous projects are known, then the project manager can use the function point count to develop the appropriate estimates. If your organisation has only just begun collecting these metrics and does not have sufficient data to establish its own productivity rates, then the ISBSG industry data can be used in the interim.
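Given a count and productivity rates, the estimates are simple products. The rates below are hypothetical placeholders for your organisation's own (or ISBSG) figures:

```python
# Effort and cost estimates derived from a function point count.
# All rates are hypothetical; substitute your organisation's own
# project history or ISBSG industry figures.
project_size_fp = 650
hours_per_fp = 10.0        # assumed delivery rate
cost_per_hour = 120.0      # assumed loaded labour rate

effort_hours = project_size_fp * hours_per_fp
project_cost = effort_hours * cost_per_hour
print(f"Effort: {effort_hours:,.0f} hours")
print(f"Cost:   ${project_cost:,.0f}")
```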
The functional hierarchy developed as part of the function point count during project development can assist the testing manager to identify high-complexity functional areas which may need extra attention during the testing phase. Dividing the total function points for each functional area by the number of functions allocated to that area gives the average function size, enabling an assessment of the relative complexity of each functional area.
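One way to sketch this relative-complexity check is to compute each area's average function size and rank the areas by it. The area names and counts below are hypothetical:

```python
# Average function size per functional area as a rough indicator of
# relative complexity. Area names and counts are hypothetical.
areas = {
    # area name: (total function points, number of functions)
    "Invoicing":   (180, 12),
    "Reporting":   (90, 15),
    "Maintenance": (60, 10),
}

avg_size = {name: fp / n for name, (fp, n) in areas.items()}
for name in sorted(avg_size, key=avg_size.get, reverse=True):
    print(f"{name:12s} {avg_size[name]:5.1f} FP per function")
```

Areas with a noticeably higher average function size are candidates for extra testing attention.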
The effort to perform acceptance testing and the number of test cases required are related to the number and complexity of the user functions within a functional area. Quantifying the relative size of each functional area will enable the project manager to allocate appropriate testing staff and check the relative number of test cases assigned.
Many organisations have large legacy software applications that, due to their age, cannot be quickly enhanced to respond to the needs of rapidly changing business environments. Over time these applications have been patched and expanded until they have grown to monstrous proportions. Frustrated by long delays in implementing changes, lack of support for their technical platform and expensive support costs, management will often decide to redevelop the entire application. For many organisations this strategy of rebuilding their super-large applications has proved to be a disaster, resulting in cancellation of the project mid-development. Industry figures show that the risk of project failure rapidly increases with project size: smaller projects have a risk of failure of less than 20%, whereas projects over 5,000 function points have a probability of cancellation close to 40% [7]. This level of risk [8] is unacceptable for most organisations.
Assessing planned projects for their delivered size in function points enables management to make informed decisions about the risk involved in developing large highly integrated applications or adopting a lower risk phased approach described below.
If the project manager decides on a phased approach to the project development, then related modules may be relegated to different releases. This strategy may require temporary interfacing functionality to be built in the first release, to be decommissioned later when the next module is integrated. The function point count allows project managers to develop ‘what-if’ scenarios and quantify the project scope of each phase as a means of making objective decisions. Questions to which quantitative answers can be provided are:
If it is decided to implement the application as a phased development, then the size of each release can be optimised to that which is known to be manageable [9]. This can be easily done by labelling functions with the appropriate release and performing ‘what-if’ scenarios by including and excluding functions from the scope of the count for the release.
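A minimal sketch of this labelling-and-filtering approach, with hypothetical function names, sizes and release labels:

```python
# 'What-if' release scoping: each function carries a release label, and
# a scenario's size is the sum of the FP of the functions it includes.
# All function names, sizes and labels below are hypothetical.
functions = [
    ("Enter customer order",   6, "R1"),
    ("Maintain product list",  4, "R1"),
    ("Temporary GL interface", 5, "R1"),  # decommissioned once R2 ships
    ("Post to general ledger", 7, "R2"),
    ("Management reports",     5, "R2"),
]

def release_size(release):
    """Functional size of one release scenario."""
    return sum(fp for _name, fp, rel in functions if rel == release)

print("Release 1:", release_size("R1"), "FP")
print("Release 2:", release_size("R2"), "FP")
```

Relabelling a function (or filtering it out entirely) and re-running the sum gives the size of each alternative scenario immediately.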