Show simple item record

dc.contributor.author: Kazerouni, Ayaan M.
dc.contributor.author: Shaffer, Clifford A.
dc.contributor.author: Edwards, Stephen H.
dc.contributor.author: Servant-Cortés, Francisco Javier
dc.date.accessioned: 2024-10-31T13:20:08Z
dc.date.available: 2024-10-31T13:20:08Z
dc.date.issued: 2019
dc.identifier.citation: Ayaan M. Kazerouni, Clifford A. Shaffer, Stephen H. Edwards, and Francisco Servant. 2019. Assessing Incremental Testing Practices and Their Impact on Project Outcomes. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education (SIGCSE '19). Association for Computing Machinery, New York, NY, USA, 407–413. https://doi.org/10.1145/3287324.3287366
dc.identifier.uri: https://hdl.handle.net/10630/34972
dc.description.abstract: Software testing is an important aspect of the development process, one that has proven to be a challenge to formally introduce into the typical undergraduate CS curriculum. Unfortunately, existing assessment of testing in student software projects tends to focus on evaluation of metrics like code coverage over the finished software product, thus eliminating the possibility of giving students early feedback as they work on the project. Furthermore, assessing and teaching the process of writing and executing software tests is also important, as shown by the multiple variants proposed and disseminated by the software engineering community, e.g., test-driven development (TDD) or incremental test-last (ITL). We present a family of novel metrics for assessment of testing practices for increments of software development work, thus allowing early feedback before the software project is finished. Our metrics measure the balance and sequence of effort spent writing software tests in a work increment. We performed an empirical study using our metrics to evaluate the test-writing practices of 157 advanced undergraduate students, and their relationships with project outcomes over multiple projects for a whole semester. We found that projects where more testing effort was spent per work session tended to be more semantically correct and have higher code coverage. The percentage of method-specific testing effort spent before production code did not contribute to semantic correctness, and had a negative relationship with code coverage. These novel metrics will enable educators to give students early, incremental feedback about their testing practices as they work on their software projects.
dc.language.iso: eng
dc.publisher: ACM
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Software - Design
dc.subject: Software - Evaluation
dc.subject.other: Software process management
dc.subject.other: Software testing
dc.subject.other: Student assessment
dc.subject.other: Software engineering
dc.title: Assessing Incremental Testing Practices and Their Impact on Project Outcomes.
dc.type: info:eu-repo/semantics/conferenceObject
dc.relation.eventtitle: ACM Technical Symposium on Computer Science Education (SIGCSE)
dc.relation.eventplace: Minneapolis, Minnesota, U.S.A.
dc.relation.eventdate: February 2019
dc.rights.cc: Attribution-NonCommercial-NoDerivatives 4.0 International


Files in this item

This item appears in the following collection(s)


Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.