Context: Backsourcing is the process of insourcing previously outsourced activities. Backsourcing can be a viable alternative when companies experience environmental or strategic changes, or challenges with outsourcing. While outsourcing and related processes have been extensively studied, few studies report experiences with backsourcing.
Objectives: We summarize the results of the research literature on backsourcing of IT, with a focus on software development. By identifying practically relevant experience, we present findings that may help companies considering backsourcing. In addition, we identify gaps in the current research literature and point out areas for future work.
Method: Our systematic literature review (SLR) started with a search for empirical studies on the backsourcing of IT. From each study, we identified the context in which backsourcing occurred, the factors leading to the decision, the backsourcing process, and the outcomes of backsourcing. We employed inductive coding to extract textual data from the papers and qualitative cross-case analysis to synthesize the evidence.
Results: We identified 17 papers that reported 26 cases of backsourcing, six of which were related to software development. The cases came from a variety of contexts. The most common reasons for backsourcing were improving quality, reducing costs, and regaining control of outsourced activities. We model the backsourcing process as containing five sub-processes: change management, vendor relationship management, competence building, organizational build-up, and transfer of ownership. We identified 14 positive outcomes and nine negative outcomes of backsourcing. We also aggregated the evidence and detailed three relationships of potential use to companies considering backsourcing. Finally, we highlight the knowledge areas of software engineering associated with the backsourcing of software development.
Conclusion: The backsourcing of IT is a complex process; its implementation depends on the prior outsourcing relationship and other contextual factors. Our systematic literature review contributes to a better understanding of this process by identifying its components and their relationships based on the peer-reviewed literature. Our results can serve as a motivation and baseline for further research on backsourcing and provide guidelines and process fragments from which practitioners can benefit when they engage in backsourcing.
Berg, Helene; Holgeid, Knut Kjetil; Jørgensen, Magne & Volden, Gro Holst
(2023).
Successful IT projects – A multiple case study of benefits management practices.
Procedia Computer Science.
ISSN 1877-0509.
219,
p. 1847–1859.
doi: 10.1016/j.procs.2023.01.482.
Full text in Research Archive.
Delivering project benefits for users and society is a key aspect of success in public IT projects; traditional success measures, such as time and cost, tell only part of the story. Furthermore, one of the main challenges in public IT projects is the inability to produce benefits. The objective of the study is to give evidence-based advice that contributes to better benefits management. This objective is achieved through increased knowledge about practices within two central aspects: the identification and planning of benefits, and how benefits management is practiced during the execution phase of IT projects. The authors collected information about 23 public IT projects, both through interviews with project personnel and by reviewing project documents, and analyzed these sources using mainly qualitative methods. Most projects had some form of cost-benefit analysis, but the quality and comprehensiveness of the analyses varied. The interview results suggested that the cost-benefit analysis played a minor role in benefits management during project execution and that its main purpose was to ensure approval of the business case. When asked about benefits management practices during the execution phase of the projects, the interviewees' answers were divided almost equally between "important" and "not important." This applied to both the general practice of benefits management and the use of the benefit plan. Having personnel with clear responsibility and sufficient authority to realize benefits was one of the most frequently mentioned contributors to benefit realization. For the later termination and evaluation phases, the findings revealed that projects used few resources to evaluate and document realized benefits.
In conclusion, the study revealed both awareness and a focus on benefit management practices in the projects represented in the dataset, but also shortcomings. Based on the results, the authors include a set of five practical recommendations for better benefits management.
Holgeid, Knut Kjetil; Jørgensen, Magne; Volden, Gro Holst & Berg, Helene
(2022).
Realising benefits in public IT projects: A multiple case study.
IET Software.
ISSN 1751-8806.
7.
doi: 10.1049/sfw2.12079.
Full text in Research Archive.
Information Technology (IT) investments in the public sector are large, and it is essential that they lead to benefits for the organisations themselves and wider society. While there is evidence suggesting a positive connection between the existence of benefits management practices and the realisation of benefits, less is known about how to implement such practices effectively. The aim of this paper is to provide insights into when benefits are most likely to be realised, and how benefits management practices and roles should be implemented, in order to have a positive effect on the success of projects in terms of realising benefits. We collected data relating to 10 public IT projects in Norway. For each project, information on benefits management was collected from project documents, through interviews with the project owners and benefits owners, and via follow-up surveys. The benefits with the highest degree of realisation were those internal to the organisation, while those with the lowest degree were societal benefits. Projects assessed as having more specific, measurable, accountable, and realistically planned benefits were more successful in terms of realising benefits. Benefit owners were most effective when they were able to attract attention to the benefits to be realised, had a strong mandate, and had the domain expertise.
Jørgensen, Magne; Halkjelsvik, Torleif & Liestøl, Knut
(2022).
When should we (not) use the mean magnitude of relative error (MMRE) as an error measure in software development effort estimation?
Information and Software Technology.
ISSN 0950-5849.
143.
doi: 10.1016/j.infsof.2021.106784.
Jørgensen, Magne & Escott, Eban
(2022).
Relative estimates of software development effort: Are they more accurate or less time-consuming to produce than absolute estimates, and to what extent are they person-independent?
Information and Software Technology.
ISSN 0950-5849.
143,
p. 1–9.
doi: 10.1016/j.infsof.2021.106782.
Context
Estimates of software development effort may be given as judgments of the relationship between the effort required for different tasks, that is, as relative estimates. The use of relative estimates has increased with the introduction of story points in agile software development contexts.
Objective
This study examines to what extent relative estimates are likely to be more accurate or less time-consuming to produce than absolute software development effort estimates and to what extent relative estimates can be considered developer-independent.
Method
We conducted two experiments. In the first experiment, we collected estimates from 102 professional software developers estimating the same tasks and randomly allocated to providing relative estimates in story points or absolute estimates in work-hours. In the second experiment, we collected the actual efforts of 20 professional software developers completing the same 5 programming tasks and used these to analyze the variance in relative efforts.
Results
The results from the first experiment indicate that the relative estimates were less accurate than the absolute estimates and that the time spent on the estimation work was higher for those using relative estimation, even when considering only developers with extensive prior experience in story point–based estimation. The second experiment revealed that relative effort was far from developer-independent, especially for the least productive developers, suggesting that relative estimates are, to a large extent, developer-dependent.
Conclusions
Although there may be good reasons for the continued use of relative estimates, we interpret our results as not supporting a connection between relative estimates and higher estimation accuracy or less time spent producing the estimates. Neither do our results support a high degree of developer-independence in relative estimates.
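The developer-dependence point can be sketched with a small, purely hypothetical calculation (the numbers and developer names below are invented, not data from the experiments): if the relative effort between two tasks differs across developers, no single story-point ratio fits everyone.

```python
# Hypothetical actual efforts (hours) for two developers on two tasks.
# All numbers are invented for illustration only.
actual_hours = {
    "dev_fast": {"task_a": 2.0, "task_b": 8.0},
    "dev_slow": {"task_a": 6.0, "task_b": 12.0},
}

# Relative effort of task_b versus task_a, per developer.
ratios = {dev: h["task_b"] / h["task_a"] for dev, h in actual_hours.items()}
print(ratios)
# If task_a is worth 1 story point, task_b "should" be 4 points for one
# developer but only 2 for the other: the relative estimate is
# developer-dependent.
```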
Holgeid, Knut Kjetil; Jørgensen, Magne; Sjøberg, Dag & Krogstie, John
(2021).
Benefits management in software development: A systematic review of empirical studies.
IET Software.
ISSN 1751-8806.
15(1),
p. 1–24.
doi: 10.1049/sfw2.12007.
Full text in Research Archive
Jørgensen, Magne; Bergersen, Gunnar R. & Liestøl, Knut
(2021).
Relations between Effort Estimates, Skill Indicators, and Measured Programming Skill.
IEEE Transactions on Software Engineering.
ISSN 0098-5589.
47(12),
p. 2892–2906.
doi: 10.1109/TSE.2020.2973638.
Halkjelsvik, Torleif & Jørgensen, Magne
(2021).
When 2 + 2 should be 5: The summation fallacy in time prediction.
Journal of Behavioral Decision Making.
ISSN 0894-3257.
doi: 10.1002/bdm.2265.
Predictions of time (e.g., work hours) are often based on the aggregation of estimates of elements (e.g., activities and subtasks). The only types of estimates that can be safely aggregated by summation are those reflecting predicted average outcomes (expected values). The sums of other types of estimates, such as bounds of confidence intervals or estimates of the mode, do not have the same interpretation as their components (e.g., the sum of the 90% upper bounds is not the appropriate 90% upper bound of the sum). The present research shows that this can be a potential source of bias in predictions of time. In Studies 1 and 2, professionals with experience in estimation provided total estimates of time that were inconsistent with their estimates of individual tasks. Study 3 shows that this inconsistency can be attributed to improper aggregation of time estimates and demonstrates how this can produce both overestimation and underestimation, as well as confidence intervals that are far too wide. Study 4 suggests that the results may reflect a more general fallacy in the aggregation of probabilistic quantities. The inconsistencies and biases appear to be largely driven by a tendency to naïvely sum (2 + 2 = 4) probabilistic (stochastic) values. This summation fallacy may be consequential in contexts where informal estimation methods (expert judgment) are used.
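The aggregation point above can be illustrated with a Monte Carlo sketch (assuming lognormally distributed task durations; the distribution and parameters are illustrative choices, not data from the paper): summing per-task 90% upper bounds overstates the 90% upper bound of the total time.

```python
# Why the sum of per-task 90% upper bounds is not the 90% upper bound
# of the sum. Task durations are drawn from an assumed lognormal.
import random

random.seed(1)
N_TASKS, N_SIMS = 10, 20_000

# One simulated project outcome = a duration for each of the 10 tasks.
sims = [[random.lognormvariate(2.0, 0.5) for _ in range(N_TASKS)]
        for _ in range(N_SIMS)]

def p90(values):
    """Empirical 90th percentile."""
    return sorted(values)[int(0.9 * len(values))]

# Naive aggregation: sum the per-task 90% upper bounds.
naive_total_upper = sum(p90([row[i] for row in sims]) for i in range(N_TASKS))

# Proper aggregation: the 90% upper bound of the simulated totals.
proper_total_upper = p90([sum(row) for row in sims])

print(naive_total_upper, proper_total_upper)
# The naive sum is clearly larger, because independent tasks rarely all
# hit their individual worst cases at the same time.
```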
Jørgensen, Magne; Welde, Morten & Halkjelsvik, Torleif
(2021).
Evaluation of Probabilistic Project Cost Estimates.
IEEE transactions on engineering management.
ISSN 0018-9391.
70(10),
p. 3481–3496.
doi: 10.1109/TEM.2021.3067050.
Evaluation of cost estimates should be fair and give incentives for accuracy. These goals, we argue, are challenged by a lack of precision in what is meant by a cost estimate and the use of evaluation measures that do not reward the most accurate cost estimates. To improve the situation, we suggest the use of probabilistic cost estimates and propose guidelines on how to evaluate such estimates. The guidelines emphasize the importance of a match between the type of cost estimate provided by the estimators and the chosen cost evaluation measure, and the need for an evaluation of both the calibration and the informativeness of the estimates. The feasibility of the guidelines is exemplified in an analysis of a set of 69 large Norwegian governmental projects. The evaluation indicated that the projects had quite accurate and unbiased P50 estimates and that the prediction intervals were reasonably well-calibrated. It also showed that the cost prediction intervals were noninformative with respect to differences in cost uncertainty and, consequently, not useful to identify projects with higher cost uncertainty. The results demonstrate the usefulness of applying the proposed cost estimation evaluation guidelines.
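The calibration check described above can be sketched as follows (the project figures are invented, not the 69 Norwegian projects from the paper): a P50 estimate is well calibrated if roughly half of the actual costs fall at or below it, and a P85 estimate if roughly 85% do.

```python
# Hypothetical (P50, P85, actual) cost triples in MNOK, invented for
# illustration of the evaluation guidelines' calibration check.
projects = [
    (100, 130, 96), (250, 310, 265), (80, 95, 78), (400, 520, 390),
    (150, 200, 171), (60, 75, 58), (300, 360, 380), (120, 150, 119),
]

def hit_rate(pairs):
    """Share of projects whose actual cost is at or below the bound."""
    hits = sum(1 for bound, actual in pairs if actual <= bound)
    return hits / len(pairs)

p50_rate = hit_rate([(p50, a) for p50, _, a in projects])
p85_rate = hit_rate([(p85, a) for _, p85, a in projects])

print(f"P50 hit rate: {p50_rate:.0%} (well calibrated if close to 50%)")
print(f"P85 hit rate: {p85_rate:.0%} (well calibrated if close to 85%)")
```

Informativeness would additionally require that wider P50–P85 intervals actually identify the projects with higher cost uncertainty, which a hit-rate check alone cannot show.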
Holgeid, Knut Kjetil & Jørgensen, Magne
(2020).
Practices connected to perceived client benefits of software projects.
IET Software.
ISSN 1751-8806.
14(6),
p. 677–683.
doi: 10.1049/iet-sen.2019.0141.
Filkukova, Petra & Jørgensen, Magne
(2020).
How to pose for a professional photo: The effect of three facial expressions on perception of competence of a software developer.
Australian Journal of Psychology.
ISSN 0004-9530.
72(3),
p. 257–266.
doi: 10.1111/ajpy.12285.
Jørgensen, Magne
(2019).
Relationships Between Project Size, Agile Practices, and Successful Software Development: Results and Analysis.
IEEE Software.
ISSN 0740-7459.
36(2),
p. 39–43.
doi: 10.1109/MS.2018.2884863.
Jørgensen, Magne
(2019).
Evaluating probabilistic software development effort estimates: Maximizing informativeness subject to calibration.
Information and Software Technology.
ISSN 0950-5849.
115,
p. 93–96.
doi: 10.1016/j.infsof.2019.08.006.
Jørgensen, Magne & Yamashita, Aiko
(2016).
Cultural Characteristics and Their Connection to Increased Risk of Software Project Failure.
Journal of Software.
ISSN 1796-217X.
11(6),
p. 606–614.
doi: 10.17706/jsw.11.6.606-614.
Jørgensen, Magne
(2016).
Better Selection of Software Providers through Trialsourcing.
IEEE Software.
ISSN 0740-7459.
33(5),
p. 48–53.
doi: 10.1109/MS.2015.24.
Context
The trustworthiness of research results is a growing concern in many empirical disciplines.
Aim
The goals of this paper are to assess how much the trustworthiness of results reported in software engineering experiments is affected by researcher and publication bias, given typical statistical power and significance levels, and to suggest improved research practices.
Method
First, we conducted a small-scale survey to document the presence of researcher and publication biases in software engineering experiments. Then, we built a model that estimates the proportion of correct results for different levels of researcher and publication bias. A review of 150 randomly selected software engineering experiments published in the period 2002–2013 was conducted to provide input to the model.
Results
The survey indicates that researcher and publication biases are quite common. This finding is supported by the observation that the actual proportion of statistically significant results reported in the reviewed papers was about twice as high as that expected in the absence of researcher and publication bias. Our model suggests a high proportion of incorrect results even under quite conservative assumptions.
Conclusion
Research practices must improve to increase the trustworthiness of software engineering experiments. A key to this improvement is to avoid conducting studies with unsatisfactorily low statistical power.
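A simplified model along these lines (illustrative parameter values and a simplified bias term, not the paper's exact model or its estimates) shows how low power and reporting bias together drive down the share of correct positive results:

```python
# Simplified model: given the prior probability that a tested hypothesis
# is true, statistical power, and significance level, what share of
# reported "significant" findings is actually correct?
def share_correct(prior, power, alpha, bias=0.0):
    """Proportion of positive results that are true positives.

    `bias` is the assumed probability that a non-significant result is
    nonetheless reported as positive (researcher/publication bias).
    """
    true_pos = prior * (power + bias * (1 - power))
    false_pos = (1 - prior) * (alpha + bias * (1 - alpha))
    return true_pos / (true_pos + false_pos)

# Low-power conditions typical of SE experiments, without and with bias.
print(share_correct(prior=0.3, power=0.4, alpha=0.05))
print(share_correct(prior=0.3, power=0.4, alpha=0.05, bias=0.3))
# Adding bias sharply reduces the share of correct positive results.
```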
Jørgensen, Magne
(2015).
The Effect of the Time Unit on Software Development Effort Estimates.
In Keshav, Dahal & Rijal, Rameshwar (Ed.),
9th International Conference on Software, Knowledge, Information Management and Applications (SKIMA) 2015.
IEEE (Institute of Electrical and Electronics Engineers).
ISBN 978-1-4673-6744-8.
doi: 10.1109/SKIMA.2015.7399992.
Jørgensen, Magne & Papatheocharous, Efi
(2015).
Believing is Seeing: Confirmation Bias Studies in Software Engineering.
In Matos, Jose Silva & Alves, José Carlos (Ed.),
41st Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2015.
IEEE Press.
ISBN 978-1-4673-7585-6. p. 92–95.
doi: 10.1109/SEAA.2015.56.
Jørgensen, Magne
(2014).
What we do and don't know about software development effort estimation.
IEEE Software.
ISSN 0740-7459.
31(2),
p. 37–40.
doi: 10.1109/MS.2014.49.
Jørgensen, Magne
(2013).
Relative Estimation of Software Development Effort: It Matters With What and How You Compare.
IEEE Software.
ISSN 0740-7459.
30(2),
p. 74–79.
doi: 10.1109/MS.2012.70.
Jørgensen, Magne
(2013).
The influence of selection bias on effort overruns in software development projects.
Information and Software Technology.
ISSN 0950-5849.
55(9),
p. 1640–1650.
doi: 10.1016/j.infsof.2013.03.001.
Jørgensen, Magne & Kitchenham, Barbara
(2012).
Interpretation problems related to the use of regression models to decide on economy of scale in software development.
Journal of Systems and Software.
ISSN 0164-1212.
85(11),
p. 2494–2503.
doi: 10.1016/j.jss.2012.05.068.
Halkjelsvik, Torleif & Jørgensen, Magne
(2012).
From Origami to Software Development: A Review of Studies on Judgment-Based Predictions of Performance Time.
Psychological bulletin.
ISSN 0033-2909.
138(2),
p. 238–271.
doi: 10.1037/a0025996.
Tamrakar, Ritesh & Jørgensen, Magne
(2012).
Does the use of Fibonacci numbers in Planning Poker affect effort estimates?
In Mendes, Emilia & Genero, Marcela (Ed.),
Proceedings, 16th International Conference on Evaluation and Assessment in Software Engineering.
IET Research Journals.
ISBN 978-1-84919-541-6. p. 228–232.
Jørgensen, Magne
(2011).
Overconfidence in the accuracy of own work effort predictions: The role of interval width.
In Brun, Wibecke; Keren, Gideon; Kirkebøen, Geir & Montgomery, Henry (Ed.),
Perspectives on Thinking, Judging, and Decision Making.
Universitetsforlaget.
ISBN 9788215018782. p. 47–56.
Jørgensen, Magne
(2011).
Contrasting ideal and realistic conditions as a means to improve judgment-based software development effort estimation.
Information and Software Technology.
ISSN 0950-5849.
53(12),
p. 1382–1390.
doi: 10.1016/j.infsof.2011.07.001.
Halkjelsvik, Torleif; Jørgensen, Magne & Teigen, Karl Halvor
(2011).
To Read Two Pages, I Need 5 Minutes, but Give Me 5 Minutes and I will Read Four: How to Change Productivity Estimates by Inverting the Question.
Applied Cognitive Psychology.
ISSN 0888-4080.
25(2),
p. 314–323.
doi: 10.1002/acp.1693.
Jørgensen, Magne
(2010).
Identification of more risks can lead to increased over-optimism of and over-confidence in software development effort estimates.
Information and Software Technology.
ISSN 0950-5849.
52(5),
p. 506–516.
doi: 10.1016/j.infsof.2009.12.002.
Jørgensen, Magne
(2009).
McGuire-programmets stammebehandling: Hva er det? Virker det? Hva kan logopeder lære fra det? [The McGuire programme's stuttering treatment: What is it? Does it work? What can speech therapists learn from it?]
Norsk tidsskrift for logopedi.
ISSN 0332-7256.
55(1),
p. 13–18.
Jørgensen, Magne & Gruschke, Tanja Milijana
(2009).
The Impact of Lessons-Learned Sessions on Effort Estimation and Uncertainty Assessments.
IEEE Transactions on Software Engineering.
ISSN 0098-5589.
35(3),
p. 368–383.
doi: 10.1109/TSE.2009.2.
Jørgensen, Magne & Grimstad, Stein
(2008).
Judgment-updating among software professionals.
In Hossain, Nazmul A. N. M. & Ouzrout, Y (Ed.),
Proceedings of the International Conference on Software, Knowledge, Information Management and Applications.
University of Bradford, School of Informatics.
ISBN 9781851432516. p. 62–67.
Hannay, Jo Erskine & Jørgensen, Magne
(2008).
The Role of Artificial Design Elements in Software Engineering Experiments.
IEEE Transactions on Software Engineering.
ISSN 0098-5589.
34(2),
p. 242–259.
Sjøberg, Dag; Dybå, Tore & Jørgensen, Magne
(2007).
The Future of Empirical Methods in Software Engineering Research.
In Briand, Lionel & Wolf, Alexander (Ed.),
Future of Software Engineering.
IEEE-CS.
ISBN 0-7695-2829-5. p. 358–378.
Jørgensen, Magne; Faugli, Bjørn & Gruschke, Tanja
(2007).
Characteristics of software engineers with optimistic prediction.
Journal of Systems and Software.
ISSN 0164-1212.
80(9),
p. 1472–1482.
This paper examines the degree to which the level of optimism in software engineers' predictions is related to optimism in previous predictions, general level of optimism (explanatory style, life orientation, and self-assessed optimism), development skill, confidence in the accuracy of their own predictions, and ability to recall effort used on previous tasks. Results from four experiments suggest that more optimistic software engineers are characterized by more optimistic previous predictions, higher confidence in the accuracy of their own predictions, lower development skills, poorer ability or willingness to recall effort on previous tasks, and higher optimism scores. However, a substantial part of the variation in the level of optimism seems to be random.
Jørgensen, Magne & Sjøberg, Dag
(2006).
Expert Estimation of Software Development Work.
In Madhavji, Nazim; Fernandez-Ramli, Juan & Perry, Dewayne (Ed.),
Software Evolution and Feedback: Theory and Practice.
IEEE Press.
ISBN 0-470-87180-6. p. 523–527.
Overoptimistic predictions are common in software engineering projects, e.g., the average software project cost overrun is about 30%. This paper examines the use of two popular general tests of optimism (the ASQ and the LOT-R test) to select software engineers that are less likely to provide overoptimistic predictions. A necessary, but not sufficient, condition for this use is that there is a strong relationship between optimism score, as measured by the ASQ and LOT-R tests, and predictions. We report from two experiments on this topic. The experiments suggest that the relation between optimism score as measured by ASQ or LOT-R and predictions is too weak to enable a use of these optimism measurement instruments to select more realistic estimators in software organizations. Our results also suggest that a person's general level of optimism and over-optimistic predictions of performance are, to a large extent, unrelated.
Grimstad, Stein & Jørgensen, Magne
(2006).
A Framework for the Analysis of Software Cost.
In Maldonado, Jose & Wohlin, Clas (Ed.),
ISESE 2006 (Fifth ACM-IEEE International Symposium on Empirical Software Engineering).
ACM Publications.
ISBN 1-59593-218-6. p. 58–65.
Moløkken-Østvold, Kjetil Johan; Jørgensen, Magne; Sørgaard, Pål & Grimstad, Stein
(2005).
Management of Public Software Projects: Avoiding Overruns.
In Burge, Andrew (Eds.),
Hawaiian International Conference on Business.
Hawaii International Conference on Business.
Jørgensen, Magne & Grimstad, Stein
(2005).
Over-optimism in Software Development Projects: "The winner's curse".
In Aquino, Vicente Alarcon (Eds.),
15th International Conference on Electronics, Communications and Computers (CONIELECOMP'05).
IEEE (Institute of Electrical and Electronics Engineers).
ISBN 0-7695-2283-1. p. 280–285.
Gruschke, Tanja Milijana & Jørgensen, Magne
(2005).
Assessing Uncertainty of Software Development Effort Estimates: The Learning From Outcome Feedback.
In Morasca, Sandro (Eds.),
11th IEEE International Software Metrics Symposium (METRICS 2005).
IEEE (Institute of Electrical and Electronics Engineers).
ISBN 0-7695-2371-4. p. 1–10.
Jørgensen, Magne; Kitchenham, B. & Dybå, Tore
(2005).
Teaching Evidence-Based Software Engineering to University Students.
In Morasca, Sandro (Eds.),
11th IEEE International Software Metrics Symposium (METRICS 2005).
IEEE (Institute of Electrical and Electronics Engineers).
ISBN 0-7695-2371-4.
Jørgensen, Magne
(2005).
The "Magic Step" of Judgment-Based Software Effort Estimation.
In Kokinov, B (Eds.),
International Conference on Cognitive Economics.
NBU Press.
ISBN 954-535-404-6. p. 105–114.
Jørgensen, Magne & Teigen, Karl Halvor
(2005).
Kan vi unngå at "så og si helt sikkert" bare betyr "60% sikkert"? [Can we avoid "virtually certain" meaning only "60% certain"?]
Prosjektledelse.
ISSN 1500-0516. p. 29–31.
Teigen, Karl Halvor & Jørgensen, Magne
(2005).
When 90% confidence intervals are 50% certain: On the credibility of credible intervals.
Applied Cognitive Psychology.
ISSN 0888-4080.
19,
p. 455–475.
Estimated confidence intervals for general knowledge items are usually too narrow. We report five experiments showing that people have much less confidence in these intervals than dictated by the assigned level of confidence. For instance, 90% intervals can be associated with an estimated confidence of 50% or less (and still lower hit rates). Moreover, interval width appears to remain stable over a wide range of instructions (high and low numeric and verbal confidence levels). This leads to a high degree of overconfidence for 90% intervals, but less for 50% intervals or for free-choice intervals (without an assigned degree of confidence). To increase interval width, one may have to ask exclusion rather than inclusion questions, for instance by soliciting "improbable" upper and lower values (Experiment 4), or by asking separate "more than" and "less than" questions (Experiment 5). We conclude that interval width and degree of confidence have different determinants and cannot be regarded as equivalent ways of expressing uncertainty.
Moløkken, Kjetil Johan & Jørgensen, Magne
(2005).
Expert Estimation of Web-Development Projects: Are Software Professionals in Technical Roles More Optimistic Than Those in Non-Technical Roles?
Empirical Software Engineering.
ISSN 1382-3256.
10(1),
p. 7–30.
Dybå, Tore; Kitchenham, B. & Jørgensen, Magne
(2005).
Evidence-based Software Engineering for Practitioners.
IEEE Software.
ISSN 0740-7459.
22(1),
p. 58–65.
Jørgensen, Magne
(2005).
Practical guidelines for better support of expert judgement-based software effort estimation.
IEEE Software.
ISSN 0740-7459.
23(3),
p. 57–63.
Moløkken-Østvold, Kjetil Johan & Jørgensen, Magne
(2005).
A Comparison of Software Project Overruns – Flexible vs. Sequential Development Models.
IEEE Transactions on Software Engineering.
ISSN 0098-5589.
31(9),
p. 754–766.
Jørgensen, Magne
(2005).
Evidence-Based Guidelines for Assessment of Software Development Cost Uncertainty.
IEEE Transactions on Software Engineering.
ISSN 0098-5589.
31(11),
p. 942–954.
Mair, C.; Shepperd, Martin & Jørgensen, Magne
(2005).
An Analysis of Data Sets Used to Train and Validate Cost Prediction Systems.
Software engineering notes.
ISSN 0163-5948.
30(4),
p. 1–6.
Welde, Morten; Jørgensen, Magne; Larssen, Per Fridtjof & Halkjelsvik, Torleif
(2019).
Estimering av kostnader i store statlige prosjekter: Hvor gode er estimatene og usikkerhetsanalysene i KS2-rapportene? [Cost estimation in large governmental projects: How good are the estimates and uncertainty analyses in the KS2 reports?]
Ex ante akademisk forlag.
ISBN 978-82-93253-81-5. 96 p. Full text in Research Archive.
Halkjelsvik, Torleif & Jørgensen, Magne
(2018).
Time Predictions: Understanding and Avoiding Unrealism in Project Planning and Everyday Life.
Springer.
ISBN 978-3-319-74952-5. 122 p. Full text in Research Archive.
Volden, Gro Holst; Jørgensen, Magne & Holgeid, Kjetil
(2022).
Successful IT projects – a multiple case study of benefits management practices.
Volden, Gro Holst; Jørgensen, Magne; Holgeid, Kjetil & Berg, Helene
(2021).
Jakten på nytte i offentlige it-prosjekter [The hunt for benefits in public IT projects].
Stat og styring.
ISSN 0803-0103.
2021(3),
p. 38–41.
Jørgensen, Magne
(2006).
A Preliminary Model of Judgment-based Project Software Effort Predictions.
Gruschke, Tanja Milijana & Jørgensen, Magne
(2006).
How much does feedback and performance review improve software development effort estimation? An Empirical Study.
Gruschke, Tanja Milijana & Jørgensen, Magne
(2006).
To know or not to know: when does feedback lead to better assessment of uncertainty of own beliefs?
Jørgensen, Magne
(2006).
Software Cost Estimation: When to Use Expert Judgment and When to Use Models.
Grimstad, Stein & Jørgensen, Magne
(2006).
A Framework for the Analysis of Software Cost.
Teigen, Karl Halvor; Jørgensen, Magne & Halberg, Anne-Marie
(2006).
The strange case of subjective uncertainty intervals.
Jørgensen, Magne; Kitchenham, B. & Dybå, Tore
(2005).
Teaching Evidence-Based Software Engineering to University Students.
Jørgensen, Magne & Gruschke, Tanja Milijana
(2005).
Industrial Use of Formal Software Cost Estimation Models: Expert Estimation in Disguise.
Jørgensen, Magne
(2005).
The "Magic Step" of Judgment-Based Software Effort Estimation.
Gruschke, Tanja Milijana & Jørgensen, Magne
(2005).
Assessing Uncertainty of Software Development Effort Estimates: The Learning From Outcome Feedback.
Jørgensen, Magne & Grimstad, Stein
(2005).
Over-optimism in Software Development Projects: "The winner's curse".
Grimstad, Stein; Jørgensen, Magne & Moløkken-Østvold, Kjetil Johan
(2005).
The Clients' Impact on Effort Estimation Accuracy in Software Development Projects.