Activity Based Costing (ABC)
ABC vs. Traditional Costing
Benefits of ABC System
Why ABC Systems?
Costing and estimation are two important functions that determine the competitiveness of every company. To compete effectively, one has to know how much cost is being incurred on each project the organization undertakes. A company can reduce its operational costs only if it knows how much is being spent and where. It is equally important to know in advance how much a project is going to cost or how many months it is going to take; this matters for winning contracts, budget allocation, financial and manpower planning, and so on. The more accurate the estimates the better: if an estimate is too low the company will incur losses, and if it is too high the company might not win the contract or might allocate too many resources to the project. In this article we will look at costing and estimation.
Knowing the cost of producing a software system or product is very important for an organization, as it drives a host of activities like pricing, profit sharing, and so on. It is also important to monitor costs in order to keep control over them. The technological revolution is rapidly transforming traditional development and marketing methods. The result has been a dramatic change in the ratio between fixed and variable costs. To compete in today's dynamic and rapidly changing global marketplace, organizations need to understand and control their overhead costs. In this environment, organizations need to practice activity-based costing (ABC) rather than traditional costing methods. In the next few sections we will see what ABC is and how it differs from traditional costing methods.
ACTIVITY BASED COSTING (ABC)
ABC attributes variable, fixed, and overhead costs directly to each product or service by using the activities required to produce the product or service as the means of allocation. With ABC, the cost of a product or service equals the cost of consumables and other materials plus the sum of the costs of all activities used to produce it. Activity-based management (ABM) is a system that uses ABC for costing and cost control. The objective of this article is not to go into the implementation details of ABC; that is beyond its scope. We are trying to give an overview of ABC and explain why we consider it important for software organizations to implement it. For detailed information on ABC, readers are directed to the following sources (and also to the sources in the selected bibliography):
- Hicks, D. T., Activity-Based Costing: Making it Work for Small and Mid-Sized Companies (Second Edition), John Wiley & Sons, 1999.
- Forrest, E., Activity-Based Management: A Comprehensive Implementation Guide, McGraw-Hill, 1996.
- Player, R. S., and Keys, D. E., (Eds.), Activity-Based Management: Arthur Andersen’s Lessons from the ABM Battlefield (Second Edition), John Wiley & Sons, 1999.
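The ABC cost model just described (product or service cost equals materials plus the sum of all activity costs) can be sketched as a small calculation. The activity names, rates, and quantities below are hypothetical, purely for illustration.

```python
# ABC in one line: product cost = materials cost + sum of the costs
# of all activities consumed. All names and figures are hypothetical.

def abc_product_cost(materials_cost, activities):
    """activities: iterable of (name, cost_rate, units_consumed)."""
    return materials_cost + sum(rate * units for _, rate, units in activities)

cost = abc_product_cost(
    materials_cost=500.0,
    activities=[
        ("design review",    80.0,   6),   # 6 hours at 80/hour
        ("coding",           60.0, 120),   # 120 hours at 60/hour
        ("teleconferencing", 15.0,  20),   # 20 calls at 15/call
    ],
)
print(cost)  # 500 + 480 + 7200 + 300 = 8480.0
```

Because each activity is a separate line item, costing and cost control use the same data: managing any activity directly changes the product cost it feeds.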
ABC vs. Traditional Costing
Traditional costing accumulates the cost of raw materials and direct labor, then applies overhead costs using an arbitrary allocation factor such as the size of the project, the number of employees on the project, and so on. ABC relates resources to the actual activities that consume them. Conventional wisdom states that the production of a product or service produces costs; more accurately, it is the activity involved in producing the product or service that creates the cost. So, if we agree that an activity incurs cost, it follows that the actual cost of a product or service should be the sum of the costs of each activity required to produce it. By breaking down product cost according to individual activities or events, costs can be controlled by managing each of the activities and/or the events that cause the cost-consuming activity.
In both traditional costing methods and ABC, direct labor costs are allocated to each project, product, or service. So if a person has worked a certain number of hours on a project, the cost incurred by the company for that person for that time is the direct labor cost. This part is reasonably straightforward if the company has accurate records of who has worked on each project and for how many hours. The allocation of utilities proves more challenging. In the absence of a metering mechanism, utilities are allocated to the different projects based on some factor like size, head count, and so on. But this creates a problem. Suppose there is a project that uses the telephone a lot: the client is in another country and there is a daily teleconference. The team size of the project is 3. So if the organization is using team size as the allocation factor, this project will be allocated only a fraction of what it has actually spent on communication. The result is wrong cost information.
ABC does not allocate overheads based on one or two arbitrary methods such as team size, project duration and so on that have little or no relationship to how a product or project uses the overhead services. Instead, ABC systems identify how these resources are consumed by each product or project and attach values according to this consumption pattern.
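The telephone example above can be made concrete. Below, a hypothetical overhead pool is allocated once by head count (the traditional factor) and once by metered consumption (the ABC way); all figures are invented for illustration.

```python
# Allocating a 12,000 telephone overhead pool two ways.
# Traditional: in proportion to head count.
# ABC: by actual (metered) consumption. All figures hypothetical.

total_phone_cost = 12000.0
projects = {
    "A (3 people, daily overseas teleconference)": {"headcount": 3,  "metered": 9000.0},
    "B (12 people, local client)":                 {"headcount": 12, "metered": 2000.0},
    "C (15 people, local client)":                 {"headcount": 15, "metered": 1000.0},
}

total_headcount = sum(p["headcount"] for p in projects.values())
for name, p in projects.items():
    traditional = total_phone_cost * p["headcount"] / total_headcount
    print(f"{name}: traditional={traditional:.0f}  ABC={p['metered']:.0f}")
```

Project A actually consumed 9,000 of the pool but is charged only 1,200 (3/30 of 12,000) under head-count allocation; the other two projects silently absorb the difference, distorting everyone's reported costs.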
Benefits of ABC System
An ABC system has the benefit of being a highly effective control tool. In the traditional system, the theory is that the production of a product creates the cost. That view examines cost after it has been incurred, and it is easy to see that cost, once expended, cannot be controlled. So the contribution of the traditional costing system to controlling cost is limited to doing a postmortem and telling management that costs were higher than they should have been. ABC examines cost in a new and more controllable way. The cost of a product or service is the total cost of all the activities it consumes, so by monitoring the activities and the related costs, management can take corrective action before a cost overrun occurs. This is not possible with the traditional approach, where an overrun becomes known only after it has occurred.
ABC systems focus on activities rather than costs. By organizing the work process into distinct activities, a significant control advantage is gained; controlling the activity rather than the cost is the objective of ABC. In ABC systems, control begins by separating activities into value-added and non-value-added categories. If a value-added activity is being consumed, costs may be increasing, but so is value. If a non-value-added activity is increasing, so are the costs, but with no added benefit. Therefore, an important aspect of ABC is to control the non-value-added activities. This is achieved by providing managers and project leaders with reports on the cost of each activity under the two categories. Control is almost automatic, as managers are given the opportunity to see the activities in their areas of responsibility as value-added and non-value-added; they will naturally place importance on reducing the costs of the non-value-added activities.
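The value-added / non-value-added report described above is straightforward to produce once activity costs are tracked. The activities, classifications, and costs below are hypothetical.

```python
# Group activity costs into value-added and non-value-added buckets,
# as an ABC control report would. All data is hypothetical.

activities = [
    ("coding",            "value-added",     7200.0),
    ("design review",     "value-added",      480.0),
    ("rework",            "non-value-added",  900.0),
    ("waiting for build", "non-value-added",  350.0),
]

report = {}
for name, category, cost in activities:
    report[category] = report.get(category, 0.0) + cost

for category, cost in sorted(report.items()):
    print(f"{category}: {cost:.2f}")
```

The non-value-added total (1,250 in this sketch) is the figure a manager would watch and try to drive down, since reducing it cuts cost without reducing value.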
Why ABC Systems?
The purpose of ABC is to remove the distortions caused by traditional costing systems, such as absorption-based and direct costing. These traditional systems were adequate when direct labor was a large percentage of the product cost; today this is no longer true. ABC takes the best attributes of absorption-based and direct costing and applies all indirect costs to products and services by analyzing the activity that actually produces each particular cost. This method treats all costs as if they were variable costs.
The use of an ABC system provides a highly efficient means to modify the entire organizational process. It also provides a method to judge how changes in the performance of activities affect the overall cost. The ABC system not only provides a highly accurate method of costing but also promotes activity efficiencies by exposing activities that were once buried in an overhead pool. By separating the costs of these support activities, each subsystem can directly trace the effect of its efficiencies on the total project cost. Coupled with a responsibility accounting system, this information is highly motivational: each team can see the impact of its actions on the overall cost of the system or product, and each group will soon find ways to improve its efficiency and reduce its costs.
The experiences of managers who have used ABC systems indicate that a properly designed ABC system provides a strategic and tactical advantage far superior to traditional systems. ABC systems help managers understand and eliminate complexity. They provide managers with true costs and remove distorted cost information. ABC also helps managers understand the impact of decisions like increasing team size, overtime, make-or-buy, outsourcing, and so on. ABC can change the way managers decide on the project team, the hardware and software environment, the use of tools and utilities, and so on. In addition, managers will be able to identify and eliminate the non-value-added activities, thus reducing development costs.
Every organization needs to develop estimates (cost, time, etc.) to make intelligent decisions. It is necessary to have an idea, an estimate, of how much a project is going to cost. The estimation of project cost is required for sending proposals, bidding, working capital management, etc. Cost estimation and planning is basically a project management function; usually the costing is carried out along with the scheduling. The principal components of project costs are:
- Hardware costs
- Travel and training costs
- Effort costs (salary for software engineers)
- Allocation of the organizational overheads
The dominant factor among the above is the effort cost. It is also the most difficult component to estimate and control, and it has the most significant effect on overall costs. Software cost estimation is a continuing activity that starts at the proposal stage, and estimates are revised throughout the lifetime of a project. Projects usually have a budget, and continual cost estimation is necessary to ensure that there are no time and cost overruns. The different techniques of software cost estimation are:
- Algorithmic cost modeling – The most scientific, although not necessarily the most accurate, approach to costing and scheduling is to use an algorithmic costing model. Such a model can be built by analyzing the costs and attributes of completed projects. This method uses an algorithm to calculate the software cost estimate as a function of a number of variables (like lines of code (LOC), function points, complexity of the software, and so on) that are considered to be the major cost drivers. The basic difficulty with algorithmic cost modeling is that it relies on the quantification of some attribute of the finished software product, such as the number of lines of code. Cost estimation is most critical early in the software process, long before the product is completed, so the person doing the costing has to depend on experience and intuition to estimate the appropriate attribute values for input to the costing model. An example of the algorithmic model is the COCOMO model developed by Boehm. Algorithmic models are objective, free from bias, repeatable, and can be calibrated using previous experience. But these models are unable to deal with exceptional situations, exceptional personnel, exceptional teamwork, and so on.
- Expert judgment – One or more experts on the software development techniques to be used and on the application domain are consulted. They each estimate the project cost, and the final cost estimate is arrived at by discussion and consensus. This method relies on the experts making educated guesses at the required project attributes. The accuracy of such estimates clearly depends on the experts' understanding of the qualitative properties of the project being estimated, the capabilities of the organization and its software engineers, and their experience with previous projects. One way to improve the accuracy of this technique is to get the opinions of several experts.
- Estimation by analogy – This technique is applicable when other projects in the same application domain have been completed. The cost of the new project is estimated by analogy with these completed projects. The approach goes like this: "It took Mathews 3 months to design, code, and test the 'Helpdesk' system. Bob is less experienced, but the project is less complex, so it will take him 2.5 months to finish." The strength and the weakness of this technique are the same: the estimate is based on a previous project. If the conditions of the base project and the new project are the same, the estimate will be accurate; otherwise it will not.
- Parkinson's law – Parkinson's Law states that work expands to fill the time available. In software costing, this means that the cost is determined by available resources rather than by objective assessment. If the software has to be delivered in 12 months and 5 people are available, the effort required is estimated as 60 person-months. Parkinson's Law is not a recommended practice; it has no scientific basis and most often produces estimates that are grossly wrong.
- Pricing to win – The software cost is estimated to be whatever the customer has available to spend on the project. The estimated effort depends on the customer's budget and not on the software functionality. This is basically a method to win a software contract, but it often gets the organization into trouble, as the estimate in most cases turns out to be so low that the organization incurs huge losses.
- Top-down estimation – A cost estimate is established by considering the overall functionality of the product and how that functionality is provided by interacting sub-functions. Cost estimates are made on the basis of the logical function rather than the components implementing that function. The disadvantage of this approach is that it does not take into account the complexity and technical difficulties of the subsystems and sometimes misses components altogether.
- Bottom-up estimation – The cost of each component is estimated, and all these costs are added to produce a final cost estimate. Both the top-down and bottom-up approaches can be used in conjunction with the other techniques mentioned above.
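As a small illustration of algorithmic cost modeling, the basic COCOMO model by Boehm estimates effort as a power function of program size. The coefficients below are the published basic-COCOMO values for the "organic" (small, in-house) project class; the 32 KLOC input is a hypothetical example.

```python
# Basic COCOMO (Boehm, 1981), "organic" project class:
#   effort   = 2.4 * KLOC^1.05   (person-months)
#   duration = 2.5 * effort^0.38 (months)

def cocomo_organic(kloc):
    effort = 2.4 * kloc ** 1.05
    duration = 2.5 * effort ** 0.38
    return effort, duration

effort, duration = cocomo_organic(32)  # hypothetical 32 KLOC product
print(f"effort: {effort:.1f} person-months, schedule: {duration:.1f} months")
```

Note how the output depends entirely on the size estimate, which is exactly the attribute that is hardest to know early in the project, as the discussion of algorithmic models points out.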
Each technique, as we have seen, has its own advantages and disadvantages. The most important point is that no method is clearly better or worse than the others (except for Parkinson's Law and pricing-to-win estimation). It is the type of project, the amount of historical information available, and similar factors that usually decide which technique should be chosen. In the case of large projects, several cost estimation techniques should be used in parallel and their results compared. If these techniques produce radically different costs, it implies that not enough costing information is available; more information should be sought and the costing process repeated until the estimates converge. Software costing is often a matter of experience and judgment, and these skills come only with practice. Of the methods mentioned above, the most commonly used are:
- Estimation by expert judgment
- Estimation by analogy
- Algorithmic estimation
One of the more recent estimation techniques is Proxy-Based Estimation (PROBE) by Humphrey. In this technique one combines the basic design for the product with historical size data (proxies) to estimate the size of the new product. In estimating the size, the project is broken down into smaller subsystems. These subsystems will most often be similar to subsystems from previous projects, so the values for those parts can be estimated with a high degree of accuracy. The more historical data one has, the more accurate the results. Another technique, due to Putnam, combines the low, high, and most likely estimates to arrive at the final estimate.
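At the heart of PROBE is a regression step: fit the actual sizes of past products against their proxy-based size estimates, then project the new product's size from its own proxy estimate. The sketch below uses plain least squares and hypothetical historical figures.

```python
# A sketch of the regression step in proxy-based estimation:
# fit actual size against proxy-estimated size from past projects,
# then project the new product's size. Historical data hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

# proxy-based size estimates vs. actual sizes (LOC) of past projects
estimated = [600, 1200, 900, 1500]
actual    = [700, 1350, 980, 1700]

b0, b1 = fit_line(estimated, actual)
projected = b0 + b1 * 1100   # new product's proxy estimate: 1100 LOC
print(round(projected))      # projected actual size in LOC
```

The fitted line corrects for any systematic bias in past proxy estimates (here, a tendency to underestimate), which is exactly why more historical points make the projection more trustworthy.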
Here we would like to warn against over-dependence on historical data in estimation. Historical data can be used as a guideline, or to check whether estimates are way off target, but too much dependence on it can produce very inaccurate results. The reason is that in software development no two projects are the same. The technology changes every day; the people, their individual capabilities, the project team composition, the capabilities of the team, and so on will not be the same for any two projects. Technological advancements or programming practices alone can make a tremendous difference: a project done without CASE tools and one done with a CASE tool will have different time and cost estimates, even if the functionality and all other parameters are the same. So historical data should be kept and the database updated after each project, but too much reliance on historical data should be avoided.
In our experience we have found that a combination of the Delphi technique and the PERT-based Beta distribution method (using the high, low, and most likely estimates) gives the most accurate results. In this method, the first thing we do is assemble a panel of experts. The panel should have 3 to 6 members. These people should be familiar with the organization, its personnel, and the capabilities and limitations of both, in addition to the technological issues involved in the project. Senior managers, project leaders, QA personnel, etc. are ideal candidates for the panel. The project manager or the person in charge of the project briefs the panel about the project. The panel members can ask questions and clear up their doubts about the project in this meeting. The presentation gives an overview of the project, the expected functionality, and if possible the main modules of the project. Any specific questions the panel members have are discussed in the meeting. The panel members also discuss and arrive at a project breakdown structure; in other words, they decide how the project should be broken down into its various modules or subsystems. The level to which the project is broken down is a consensus decision of the panel members, as they are the people who will be estimating; stop at a level that is comfortable to all. A breakdown to the minute-component level is not required, since the group doing the estimate is an expert panel.
Once the meeting is over, the panel is asked to give the estimates. The panel should make three estimates: high (a pessimistic estimate assuming everything will go wrong), low (an optimistic estimate assuming everything will work as planned), and most likely. The three estimates should be made for each subsystem. It is important that the project is divided into subsystems (the project breakdown structure) before the estimates are made. The final estimate is calculated using the following formula:
Estimate = (High + 4*Most Likely + Low)/6.
The estimate is calculated for each subsystem using the above formula. Consider an example: the high, low, and most likely values for one subsystem are 6, 4, and 5. The estimate will be 5 ([6 + 20 + 4]/6). The estimate will also be 5 if the values are 12, 2, and 4, but in the second case the degree of uncertainty is much higher because of the wider spread between the optimistic and pessimistic values. A large difference between the high and low values signifies too many unknowns and requires more attention. We therefore calculate the percentage of error, which gives the degree of confidence in the estimate, based on the standard deviation and the estimate. The standard deviation (SD) and error percentage for each subsystem are calculated using the following formulas:
SD = (High – Low)/6
Error percentage = (SD/Estimate)*100
For example, the SD and error percentage for the first set of values (6, 4, and 5) are 0.33 and 6.67; for the second set of values (12, 2, and 4) they are 1.67 and 33.4. If the percentage of error is more than 10 to 15, the estimate needs to be reviewed. The panel should sit together, analyze the steps they took and the assumptions they made to reach the values, and find whether anything needs to be changed, added, removed, and so on. If there is difficulty in estimating a subsystem, it should be broken down into smaller components before estimating. This process should continue until the percentage of error is less than 10. The estimates of the subsystems can be added together to get the total estimate (TE) for the project. The total standard deviation (TSD) and the total error percentage are calculated using the following formulas:
- TSD = Square root(Sum(SD²))
- Total error percentage = (TSD/TE)*100
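The formulas above can be checked with a short script. It reproduces the two example subsystems from the text, with (high, most likely, low) values of (6, 5, 4) and (12, 4, 2).

```python
# PERT-based Beta distribution estimates, per-subsystem SD and
# error percentage, then the project totals, as defined above.
import math

def pert(high, likely, low):
    estimate = (high + 4 * likely + low) / 6
    sd = (high - low) / 6
    error_pct = sd / estimate * 100
    return estimate, sd, error_pct

# (high, most likely, low) for the two example subsystems in the text
subsystems = [(6, 5, 4), (12, 4, 2)]
results = [pert(*s) for s in subsystems]

total_estimate = sum(e for e, _, _ in results)
tsd = math.sqrt(sum(sd ** 2 for _, sd, _ in results))
total_error_pct = tsd / total_estimate * 100

for e, sd, err in results:
    print(f"estimate={e:.2f}  SD={sd:.2f}  error%={err:.2f}")
print(f"TE={total_estimate:.1f}  TSD={tsd:.2f}  total error%={total_error_pct:.1f}")
```

Both subsystems yield an estimate of 5, but their error percentages (6.67 versus 33.3) expose the very different spreads, and the total error of about 17 percent would send this pair of estimates back to the panel for review.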
If the total percentage of error is too high, the estimates are reworked until the error percentage is within acceptable limits. The advantage of this system is that it uses the experience and expertise of senior professionals; it takes into account the nature, strengths, capabilities, and limitations of the organization, its personnel, and the team; it takes into account the technological environment; and it does not rely heavily on historical data or past performance. Here we would like to make one thing very clear: we are not advocating against the use of historical data, but too much reliance on it in this constantly changing environment can result in inaccurate estimates. Historical data can be used as a guideline, but for each new project the estimation should be done afresh. Software development projects are different from other projects. For example, the time taken to type a 250-word page by a person with a 30-wpm typing speed will be the same every time the activity is performed. But in software development, where the number of variables and the degree to which each can change are very high, estimation based on historical data alone should be avoided.
In this article we looked at costing and estimation, two important functions that decide the success of an organization. We saw activity-based costing (ABC) and its benefits. We also gave an overview of the different estimation methods and described an estimation method that combines the Delphi technique and the PERT-based Beta distribution method.
 Boehm, B. W., Software Engineering Economics, Prentice Hall PTR, 1981.
 Parkinson, C. N., Parkinson’s Law and Other Studies in Administration, Houghton Mifflin, 1957.
 Humphrey, W. S., A Discipline for Software Engineering, Addison-Wesley, 1995.
 Putnam, L. and Myers, W., Measures of Excellence: Reliable Software on Time, Within Budget, Yourdon Press, 1992.
- Beaujon, G. J. and Singhal, V. R., “Understanding the Activity Costs in an Activity-Based Cost System,” Journal of Cost Management, Spring 1990, pp. 55-72.
- Boehm, B. W., et al., Software Cost Estimation with COCOMO II, Prentice Hall PTR, 2000.
- Brimson, J. A., Activity Accounting: An Activity-Based Costing Approach, John Wiley & Sons, 1997.
- Cokins, G., Activity-Based Cost Management Making It Work: A Manager’s Guide to Implementing and Sustaining an Effective ABC System, McGraw-Hill, 1996.
- Cooper, R., “The Rise of Activity-Based Costing—Part One: What Is an Activity-Based Cost System?” Journal of Cost Management, Summer 1988, pp. 45-54.
- Cooper, R., “The Rise of Activity-Based Costing—Part Two: When Do I Need an Activity-Based Cost System?” Journal of Cost Management, Fall 1988, pp. 41-48.
- Cooper, R., “The Rise of Activity-Based Costing—Part Three: How Many Cost Drivers Do I Need?” Journal of Cost Management, Winter 1989, pp. 34-46.
- Cooper, R., “The Rise of Activity-Based Costing—Part Four: What Do Activity-Based Cost Systems Look Like?” Journal of Cost Management, Spring 1989, pp. 38-49.
- Forrest, E., Activity-Based Management: A Comprehensive Implementation Guide, New York: McGraw-Hill, 1996.
- Londeix, B., Cost Estimation for Software Development, Reading, MA: Addison-Wesley, 1988.
- Nair, M., Activity-Based Information Systems: An Executive’s Guide to Implementation, New York: John Wiley & Sons, 1999.