000 03644nam a2200373uu 4500
999 _c13096
_d13096
001 5052515171211
003 OSt
005 20230831175256.0
008 050525s2004 xx ||||g| |0|| 0 eng d
020 _a0761908943
040 _aBR-BrENAP
_bpor
090 _a5 R831e
100 1 _99226
_aRossi, Peter H.
245 1 0 _aEvaluation :
_ba systematic approach
250 _a7. ed.
260 _aLondon :
_bSage,
_c2004
300 _a470 p.
500 _aIncludes bibliography and index
505 8 0 _t1. An overview of program evaluation
_tWhat is program evaluation?
_tA brief history of evaluation
_tThe defining characteristics of program evaluation
_tEvaluation research in practice
_tWho can do evaluations?
_t2. Tailoring evaluations
_tWhat aspects of the evaluation plan must be tailored?
_tWhat features of the situation should the evaluation plan take into account?
_tThe nature of the evaluator-stakeholder relationship
_tEvaluation questions and evaluation methods
_t3. Identifying issues and formulating questions
_tWhat makes a good evaluation question?
_tDetermining the specific questions the evaluation should answer
_tCollating evaluation questions and setting priorities
_t4. Assessing the need for a program
_tThe role of evaluators in diagnosing social conditions and service needs
_tDefining the problem to be addressed
_tSpecifying the extent of the problem: when, where, and how big?
_tDefining and identifying the targets of interventions
_tDescribing target populations
_tDescribing the nature of service needs
_t5. Expressing and assessing program theory
_tThe evaluability assessment perspective
_tDescribing program theory
_tEliciting program theory
_tAssessing program theory
_tPossible outcomes of program theory assessment
_t6. Assessing and monitoring program process
_tWhat is program process evaluation and monitoring?
_tPerspectives on program process monitoring
_tMonitoring service utilization
_tMonitoring organizational functions
_tAnalysis of program process monitoring data
_t7. Measuring and monitoring program outcomes
_tProgram outcomes
_tIdentifying relevant outcomes
_tMeasuring program outcomes
_tMonitoring program outcomes
_t8. Assessing program impact: randomized field experiments
_tWhen is an impact assessment appropriate?
_tKey concepts in impact assessment
_tRandomized field experiments
_tLimitations on the use of randomized experiments
_t9. Assessing program impact: alternative designs
_tBias in estimation of program effects
_tQuasi-experimental impact assessment
_tSome cautions about using quasi-experiments for impact assessment
_t10. Detecting, interpreting, and analyzing program effects
_tThe magnitude of a program effect
_tDetecting program effects
_tAssessing the practical significance of program effects
_tExamining variations in program effects
_tThe role of meta-analysis
_t11. Measuring efficiency
_tKey concepts in efficiency analysis
_tConducting cost-benefit analyses
_tConducting cost-effectiveness analyses
_t12. The social context of evaluation
_tThe social ecology of evaluations
_tThe profession of evaluation
_tEvaluation standards, guidelines, and ethics
_tUtilization of evaluation results
_tEpilogue: the future of evaluation
650 4 _aPolítica Pública
_912838
650 4 _911965
_aPolítica Social
650 4 _aProgramas Sociais
_913867
650 4 _912277
_aAvaliação
650 4 _aAvaliação de Desempenho
_912937
650 4 _aGestão de Projetos
_913021
650 4 _aDiagnóstico
_916347
700 1 _917794
_aLipsey, Mark W.
700 1 _aFreeman, Howard E.
_917793
909 _a201712
_bELDA
942 _cG
_2ddc
041 _aeng