Research Article

SOA based and Quality of Human-Centric Experiments - A Quasi-Experiment over Software Engineering

Published in October 2014 by Banu Chandra, S. Maruthu Perumal
International Conference on Advanced Computer Technology and Development
Foundation of Computer Science USA
ICACTD - Number 1
October 2014
Authors: Banu Chandra, S. Maruthu Perumal

Banu Chandra, S. Maruthu Perumal. SOA based and Quality of Human-Centric Experiments - A Quasi-Experiment over Software Engineering. International Conference on Advanced Computer Technology and Development. ICACTD, 1 (October 2014), 22-25.

@article{ chandra2014soa,
author = { Banu Chandra and S. Maruthu Perumal },
title = { SOA based and Quality of Human-Centric Experiments - A Quasi-Experiment over Software Engineering },
journal = { International Conference on Advanced Computer Technology and Development },
issue_date = { October 2014 },
volume = { ICACTD },
number = { 1 },
month = { October },
year = { 2014 },
issn = { 0975-8887 },
pages = { 22-25 },
numpages = { 4 },
url = { /proceedings/icactd/number1/18334-1407/ },
publisher = { Foundation of Computer Science (FCS), NY, USA },
address = { New York, USA }
}
%0 Proceeding Article
%1 International Conference on Advanced Computer Technology and Development
%A Banu Chandra
%A S. Maruthu Perumal
%T SOA based and Quality of Human-Centric Experiments - A Quasi-Experiment over Software Engineering
%J International Conference on Advanced Computer Technology and Development
%@ 0975-8887
%V ICACTD
%N 1
%P 22-25
%D 2014
%I International Journal of Computer Applications
Abstract

Research into how humans interact with computers has a long and rich history. Only a small fraction of this research has considered how humans interact with computers when engineering software. A similarly small amount of research has considered how humans interact with humans when engineering software. For the last forty years, we have largely taken an artifact-centric approach to software engineering research. To meet the challenges of building future software systems, I argue that we need to balance the artifact-centric approach with a human-centric approach, in which the focus is on amplifying the human intelligence required to build great software systems. A human-centric approach involves performing empirical studies to understand how software engineers work with software and with each other, developing new methods for both decomposing and composing models of software to ease the cognitive load placed on engineers, and creating computationally intelligent tools aimed at focusing humans on the tasks only humans can solve.

Context: Several textbooks and papers published between 2000 and 2002 have attempted to introduce experimental design and statistical methods to software engineers undertaking empirical studies. Objective: This paper investigates whether there has been an increase in the quality of human-centric experimental and quasi-experimental journal papers over the time period 1993 to 2010. Method: Seventy experimental and quasi-experimental papers published in four general software engineering journals in the years 1992-2002 and 2006-2010 were each assessed for quality by three empirical software engineering researchers using two quality assessment methods (a questionnaire-based method and a subjective overall assessment). Regression analysis was used to assess the relationship between paper quality and the year of publication, publication date group (before 2003 and after 2005), source journal, average co-author experience, citation of statistical textbooks and papers, and paper length. The results were validated both by removing papers for which the quality score appeared unreliable and by using an alternative quality measure. Results: Paper quality was significantly associated with year, citing general statistical texts, and paper length (p < 0.05). Paper length did not reach significance when quality was measured using an overall subjective assessment.
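
The regression analysis described in the Method can be illustrated with a minimal sketch in Python. This is not the authors' analysis: the table, column names (quality, year, cites_stats, length), and values are hypothetical placeholders chosen only to show an ordinary least squares fit of a per-paper quality score against year of publication, citation of general statistical texts, and paper length.

# Minimal sketch of the regression analysis described in the abstract,
# assuming a per-paper table of quality scores and candidate predictors.
# Column names and the toy values below are hypothetical, not study data.
import pandas as pd
import statsmodels.formula.api as smf

papers = pd.DataFrame({
    "quality":     [2.1, 3.4, 2.8, 3.9, 3.0, 4.2],    # aggregated quality score per paper
    "year":        [1994, 1998, 2001, 2006, 2008, 2010],
    "cites_stats": [0, 0, 1, 1, 0, 1],                 # cites a general statistical text (0/1)
    "length":      [8, 10, 12, 14, 11, 16],            # paper length in pages
})

# Ordinary least squares: paper quality regressed on year of publication,
# citation of statistical texts, and paper length.
model = smf.ols("quality ~ year + cites_stats + length", data=papers).fit()
print(model.summary())  # coefficient estimates, p-values, and fit statistics

In the study itself the quality score was aggregated from three reviewers' assessments; the significance of each predictor would be read from the p-values in the summary output.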

References
  1. G. Eason, B. Noble, and I. N. Sneddon, "On Certain Integrals of Lipschitz-Hankel Type Involving Products of Bessel Functions," Phil. Trans. Roy. Soc. London, vol. A247, pp. 529-551, April 1955.
  2. D. T. Campbell and J. C. Stanley, Experimental and Quasi-Experimental Designs for Research. Houghton Mifflin Company, 1966.
  3. T. D. Cook and D. T. Campbell, Quasi-Experimentation: Design and Analysis Issues for Field Settings. Rand McNally College, 1979.
  4. I. K. Crombie, The Pocket Guide to Critical Appraisal. BMJ Books, 1996.
  5. O. Dieste and A. G. Padua, "Developing Search Strategies for Detecting Relevant Experiments for Systematic Reviews," Proc. First Int'l Symp. Empirical Software Eng. and Measurement, pp. 215-224, 2007.
  6. O. Dieste, A. Grimán, N. Juristo, and H. Saxena, "Quantitative Determination of the Relationship between Internal Validity and Bias in Software Engineering: Consequences for Systematic Literature Reviews," Proc. Int'l Symp. Empirical Software Eng. and Metrics, pp. 285-288, 2011.
  7. T. Dybå, V. B. Kampenes, and D. I. K. Sjøberg, "A Systematic Review of Statistical Power in Software Engineering Experiments," Information and Software Technology, vol. 48, no. 8, pp. 745-755, 2006.
  8. L. D. Fisher, D. O. Dixon, J. Herson, R. K. Frankowski, M. S. Hearon, and K. E. Pearce, "Intention to Treat in Clinical Trials," Statistical Issues in Drug Research and Development, K. E. Pearce, ed., pp. 331-350, Marcel Dekker, 1990.
  9. A. Fink, Conducting Research Literature Reviews: From the Internet to Paper. Sage Publications, Inc., 2005.
  10. T. Greenhalgh, How to Read a Paper: The Basics of Evidence-Based Medicine. BMJ Books, 2000.
  11. A. Jedlitschka, M. Ciolkowski, and D. Pfahl, "Reporting Experiments in Software Engineering," Guide to Advanced Empirical Software Eng., F. Shull, J. Singer, and D. I. K. Sjøberg, eds., Springer-Verlag, 2008.
  12. N. Juristo and A. Moreno, Basics of Software Engineering Experimentation. Kluwer Academic Publishers, 2001.
  13. P. Jüni, A. Witschi, R. Bloch, and M. Egger, "The Hazards of Scoring the Quality of Clinical Trials for Meta-Analysis," J. Am. Medical Assoc., vol. 282, no. 11, pp. 1054-1060, 1999.
  14. H. Liu and H. B. K. Tan, "Testing Input Validation in Web Applications through Automated Model Recovery," J. Systems and Software, vol. 81, pp. 222-233, 2007.
  15. V. B. Kampenes, T. Dybå, J. E. Hannay, and D. I. K. Sjøberg, "A Systematic Review of Effect Size in Software Engineering Experiments," Information and Software Technology, vol. 49, no. 11/12, pp. 1073-1086, 2007.
  16. V. B. Kampenes, "Quality of Design, Analysis and Reporting of Software Engineering Experiments: A Systematic Review," PhD thesis, Dept. of Informatics, Univ. of Oslo, 2007.
  17. V. B. Kampenes, T. Dybå, J. E. Hannay, and D. I. K. Sjøberg, "A Systematic Review of Quasi-Experiments in Software Engineering," Information and Software Technology, vol. 51, no. 1, pp. 71-82, 2009.
  18. B. Kitchenham, S. L. Pfleeger, L. M. Pickard, P. Jones, D. Hoaglin, K. El Emam, and J. Rosenberg, "Preliminary Guidelines for Empirical Research in Software Engineering," IEEE Trans. Software Eng., vol. 28, no. 8, pp. 721-734, Aug. 2002.
  19. B. A. Kitchenham, D. I. K. Sjøberg, T. Dybå, D. Pfahl, P. Brereton, D. Budgen, M. Höst, and P. Runeson, "Three Empirical Studies on the Agreement of Reviewers about the Quality of Software Engineering Experiments," Information and Software Technology, vol. 54, pp. 804-819, 2012.
  20. B. A. Kitchenham, D. I. K. Sjøberg, O. P. Brereton, D. Budgen, T. Dybå, M. Höst, D. Pfahl, and P. Runeson, "Can We Evaluate the Quality of Software Engineering Experiments?" Proc. Conf. Empirical Software Eng. and Metrics, 2010.
  21. W. F. Rosenberger, "Dealing with Multiplicities in Pharmacoepidemiological Studies," Pharmacoepidemiology and Drug Safety, vol. 5, pp. 95-100, 1996.
  22. R. L. Rosnow and R. Rosenthal, People Studying People: Artifacts and Ethics in Behavioural Research. W. H. Freeman and Company, 1997.
  23. J. Singer, "Using the APA Style Guidelines to Report Experimental Results," Proc. Workshop Empirical Studies in Software Maintenance, pp. 71-75, 1999.
  24. D. I. K. Sjøberg, J. E. Hannay, O. Hansen, V. B. Kampenes, A. Karahasanovic, N. K. Liborg, and A. C. Rekdal, "A Survey of Controlled Experiments in Software Engineering," IEEE Trans. Software Eng., vol. 31, no. 9, pp. 733-753, Sept. 2005.
  25. W. R. Shadish, T. D. Cook, and D. T. Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin Company, 2002.
  26. P. E. Shrout and J. L. Fleiss, "Intraclass Correlations: Uses in Assessing Rater Reliability," Psychological Bull., vol. 86, no. 2, pp. 420-428, 1979.
  27. A. K. Wagner, S. B. Soumerai, F. Zhang, and D. Ross-Degnan, "Segmented Regression Analysis of Interrupted Time Series Studies in Medication Use Research," J. Clinical Pharmacy and Therapeutics, vol. 27, pp. 299-309, 2002.
  28. C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, and A. Wesslén, Experimentation in Software Engineering: An Introduction. Kluwer Academic Publishers, 2000.
  29. M. A. Wojcicki and P. Strooper, "Maximising the Information Gained by a Study of Static Analysis Technologies for Current Software," Empirical Software Eng., vol. 12, no. 6, pp. 617-645, 2007.
Index Terms

Computer Science
Information Sciences

Keywords

Component, Formatting, Style, Styling, Insert (Key Words)