International Conference on Advanced Computer Technology and Development |
Foundation of Computer Science USA |
ICACTD - Number 1 |
October 2014 |
Authors: Banu Chandra, S. Maruthu Perumal |
Banu Chandra, S. Maruthu Perumal. SOA based and Quality of Human-Centric Experiments - A Quasi-Experiment over Software Engineering. International Conference on Advanced Computer Technology and Development. ICACTD, 1 (October 2014), 22-25.
Research into how humans interact with computers has a long and rich history. Only a small fraction of this research has considered how humans interact with computers when engineering software, and a similarly small amount has considered how humans interact with each other when engineering software. For the last forty years, we have largely taken an artifact-centric approach to software engineering research. To meet the challenges of building future software systems, I argue that we need to balance the artifact-centric approach with a human-centric approach, in which the focus is on amplifying the human intelligence required to build great software systems. A human-centric approach involves performing empirical studies to understand how software engineers work with software and with each other, developing new methods for both decomposing and composing models of software to ease the cognitive load placed on engineers, and creating computationally intelligent tools aimed at focusing humans on the tasks only humans can solve.

Context: Several textbooks and papers published between 2000 and 2002 have attempted to introduce experimental design and statistical methods to software engineers undertaking empirical studies.

Objective: This paper investigates whether there has been an increase in the quality of human-centric experimental and quasi-experimental journal papers over the time period 1993 to 2010.

Method: Seventy experimental and quasi-experimental papers published in four general software engineering journals in the years 1992-2002 and 2006-2010 were each assessed for quality by three empirical software engineering researchers using two quality assessment methods (a questionnaire-based method and a subjective overall assessment).
Regression analysis was used to assess the relationship between paper quality and the year of publication, publication date group (before 2003 and after 2005), source journal, average coauthor experience, citation of statistical textbooks and papers, and paper length. The results were validated both by removing papers for which the quality score appeared unreliable and by using an alternative quality measure.

Results: Paper quality was significantly associated with year, citing general statistical texts, and paper length (p < 0.05). Paper length did not reach significance when quality was measured using an overall subjective assessment.
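The core of the method described above is an ordinary least squares regression of a quality score on candidate predictors such as publication year. The sketch below illustrates that idea on synthetic data; the variable names, the effect size, and the data itself are all hypothetical and are not taken from the study's dataset.

```python
import numpy as np

# Hypothetical illustration only: regress a synthetic "quality score" on
# publication year for 70 simulated papers (the study's sample size),
# using ordinary least squares via numpy.
rng = np.random.default_rng(0)

year = rng.integers(1993, 2011, size=70).astype(float)       # publication years
quality = 0.3 * (year - 1993) + rng.normal(0, 2, size=70)    # assumed upward trend plus noise

# Design matrix: intercept column plus the year predictor.
X = np.column_stack([np.ones_like(year), year])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)
intercept, slope = coef

print(f"estimated quality change per year: {slope:.3f}")
```

A full replication of the analysis would add the remaining predictors (journal, coauthor experience, citation of statistical texts, paper length) as further columns of the design matrix and test each coefficient for significance.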