Assessment

EVALUATION OVERVIEW

To determine the extent to which the goals of the Gateway Engineering Education Coalition are being met, the Coalition has placed strong emphasis on program evaluation. The evaluation has two main purposes: to gather information and draw conclusions regarding the overall extent and effectiveness of Coalition programs for use in decision making (summative evaluation), and to improve the programs as they are being developed (formative evaluation). The philosophy of the Coalition is that informed change is rooted in continuous inquiry that provides relevant information to decision makers.

The Coalition has contracted with the Office of Faculty and TA Development at The Ohio State University to coordinate program evaluation. The design for the evaluation relies on a combination of central and local support. Critical assumptions underlying the evaluation concept for this program were that the plan would:

  • be flexible enough to accommodate the uniqueness of local programs;
  • be comprehensive enough to reflect both institutional and Coalition-level decision-making needs and timetables, and provide regular, immediate feedback for project and Coalition-level improvement (formative evaluation);
  • provide technically sound information regarding outcomes and effectiveness in achieving Coalition goals (summative evaluation);
  • employ a combination of qualitative and quantitative strategies; and
  • provide for the assurance of credible, valid information.

Evaluation activity is coordinated by a Central Evaluator, Dr. Nancy Lust at The Ohio State University, who works with Local Evaluators at each of the ten Coalition schools and with Program Area Leaders to gather and analyze data and report findings on the effects of Coalition activities. The Central Evaluator serves as designer of the plan; developer of evaluation instrumentation; consultant to Local Evaluators, Program Area Leaders, and other Coalition members; synthesizer of reports; and author of the annual report.

Local Evaluators, who are individuals with experience in social science research at each Coalition school, use instruments developed by the Central Evaluator to collect information from faculty and students on a core group of questions that center on overall program effectiveness. They analyze this data and report it to their local group and to the Central Evaluator for inclusion in the annual report.

Program Area Leaders, who are Gateway faculty members who coordinate the cross-Coalition program areas, collect reports and other information about project effectiveness from project personnel within their program areas. They analyze these data and report them to their program area group and to the Central Evaluator for inclusion in the annual report.

SCOPE OF EVALUATION

The scope of the evaluation is broad, focusing on:

  • Local-Level Activities
  • Program Area Activities
  • Generic Coalition-Wide Activities (with regard to communication and networking; institutionalization of Gateway goals; and dissemination of Gateway ideas and products)

EVALUATION OBJECTIVES

The evaluation objectives are to assess the extent and effectiveness of Gateway efforts related to:

  1. curriculum innovation and development
  2. educational technology/communications development
  3. educational methodology
  4. faculty development
  5. continuous quality improvement
  6. student development
  7. student recruitment and retention
  8. overall communication
  9. faculty collaboration
  10. institutionalization of Gateway goals in member schools, and
  11. dissemination of Gateway ideas, materials, and products.

The evaluation objectives are based on the Coalition's four major program areas: Curriculum Innovation and Development; Human Potential Development; Educational Technology and Methodology; and Evaluation and Continuous Quality Improvement. Underlying each of the program areas is a conceptual framework to guide overall project design and implementation.

METHODS

Several methods, both qualitative and quantitative, are used for collecting evaluation data. These include: interviews with Gateway faculty and students using protocols developed by Central Evaluation; longitudinal tracking of samples of Gateway and non-Gateway students following a sampling plan developed by Central Evaluation; official counts of personnel involved in Gateway activities following NSF guidelines; a survey of Gateway Institutional Activities Leaders and Program Area Leaders using a questionnaire developed by Central Evaluation; and anecdotal vignettes.

EVALUATION TOOL

As an aid to Local Evaluators and Program Area Leaders in conceptualizing the overall evaluation plan and carrying out their assigned duties, the Central Evaluator developed an evaluation tool: the Evaluation Information Matrix. This matrix lays out, in logical fashion, the essential components and procedures needed to implement the evaluation.

ONGOING AND ANNUAL FEEDBACK & REPORTING

Local Evaluators provide ongoing feedback about evaluation results, orally and in writing, to their local group to improve the local program as it is being implemented. They also support faculty in developing innovative strategies for assessing their Gateway courses and projects. Program Area Leaders likewise provide evaluation feedback throughout the year, orally and in writing, to project personnel and the Gateway Board to improve their Coalition-wide programs. Both Local Evaluators and Program Area Leaders provide continuous feedback to the Central Evaluator, orally and through written annual evaluation reports. The Central Evaluator synthesizes the data from these reports for use in writing the annual Central Evaluation Report that is submitted to Gateway Central. She also reports regularly, orally and in writing, to the Gateway Board.


Copyright © 1997 by the Gateway Coalition.
Questions or comments? Contact [email protected]
Last modified: April 26th, 1998.