
When Can We Conclude That Treatments or Programs "Don't Work"?

By: WEISBURD, David; LUM, Cynthia M.; YANG, Sue-Ming.
Material type: Article
Publisher: Thousand Oaks : Sage Publications, May 2003
Subject(s): What Works; Null Hypothesis Significance Testing; Effect Sizes; Statistical Significance; Statistical Power
In: The Annals of The American Academy of Political and Social Science 587, p. 31-48
Abstract: In this article, the authors examine common practices of reporting statistically nonsignificant findings in criminal justice evaluation studies. They find that criminal justice evaluators often make formal errors in the reporting of statistically nonsignificant results. Instead of simply concluding that the results were not statistically significant, or that there is not enough evidence to support an effect of treatment, they often mistakenly accept the null hypothesis and state that the intervention had no impact or did not work. The authors propose that researchers define a second null hypothesis that sets a minimal threshold for program effectiveness. In an illustration of this approach, they find that more than half of the studies that had no statistically significant findings for a traditional no-difference null hypothesis evidenced a statistically significant result in the case of a minimal worthwhile treatment-effect null hypothesis.
No physical items for this record


Escola Nacional de Administração Pública

Address:

  • Biblioteca Graciliano Ramos
  • Hours: Monday to Friday, 9 a.m. to 7 p.m.
  • +55 61 2020-3139 / biblioteca@enap.gov.br
  • SPO Área Especial 2-A
  • CEP 70610-900 - Brasília/DF