This is the second installment of our series on developing, implementing and institutionalizing a comprehensive assessment program in an educational setting. Here, we focus on the first of four strategies: Initiating a Structured Process.
Our experience demonstrates that the only way to build a strong assessment and continuous improvement foundation is to provide all participants with a roadmap to follow. There are two important reasons for this. First, designing and implementing assessment processes requires broad participation across the institution; without a clear structure to follow, the diversity of institutional needs cannot be adequately identified and met. Second, assessment programs require specific activities in order to be effective. If assessment methods are designed and developed haphazardly, they will not be effective and may become an unnecessary drain on institutional resources.
A Five-Step Process
To design and implement a comprehensive assessment plan for educational institutions, we identified a five-step process, illustrated in Figure 1. These five steps serve as a framework an assessment committee can follow to support its efforts. As with the four strategies, these steps should not be viewed as a strictly linear process or as a menu from which certain elements can be selected over others. We have found that many of the activities associated with these steps can be done in parallel; however, certain actions are prerequisites to others. Here is a brief description of each step.
Step 1. Identify and define educational objectives, strategies, and outcomes. This first step comprises some of the most important and challenging activities in the overall process, yet these activities are often not given the attention they deserve. Defining objectives, strategies, and measurable outcomes is of primary importance.
In our Partner institutions, assessment committees are formed to work with administrators, faculty, students, and external constituents to define institutional-, departmental-, and course-level objectives, strategies, and measurable outcomes. Institutional-level objectives, strategies, and outcomes are those that cut across all departments and programs; common examples include outcomes associated with facilities, non-academic programs, and administrative work processes such as recruitment and enrollment. Departmental-level definitions focus on the learning objectives, strategies, and outcomes of academic programs and the effect the curriculum offered has on graduates. Course-level objectives, strategies, and outcomes define what learning is expected as a result of a specific course. Each level requires its own iteration of the process.
Step 2. Identify and select assessment methods. After educational objectives and outcomes have been defined, participants begin to identify the best sources of the required data, review existing assessment efforts to ensure that the "wheel is not re-invented," and ultimately select additional assessment methods as needed. Most administrators and faculty find this step the most difficult; in our experience, however, if the first step is conducted properly, identifying what types of assessment are required goes smoothly. Both traditional and non-traditional assessment methodologies should be reviewed for each outcome. While describing the various approaches to assessment is beyond the scope of this column, alternatives for potential application include surveys, portfolios, capstone projects, embedded work samples, interviews, self- and peer-assessment, and industrial advisory boards.
In addition, most universities already have several assessment systems in place. One of the first things an assessment team can accomplish during this step is to identify all existing assessment activities, both in and out of the classroom. It is quite common to find that data are already being collected on faculty, students, and alumni, sometimes in a number of locations across the campus. An inventory of these assessment activities should be taken.
Step 3. Develop and pilot test new assessment methods. The next step focuses on the initial development and "piloting" of new assessment methods. During this step, new methods can be tested to ensure that they meet criteria for effectiveness such as reliability and validity, cost-effectiveness, ease of administration, perceived fairness, and the ability to yield information leading to improvement. We have found that once all educational outcomes are identified, two or three assessment initiatives provide an adequate start for evaluation and improvement purposes. In many of our Partner institutions, the most commonly piloted methods have been competency-based surveys for project-oriented courses, alumni surveys, and course evaluations. Assessment programs built around these three initiatives give an institution a broad range of information on student performance, course effectiveness, and alumni perceptions of curriculum effectiveness.
Step 4. Expand assessment processes. The post-pilot step takes newly developed assessment methods and expands their use to a larger audience; in effect, the methods become part of the overall educational process. One of the most important lessons we have learned during this phase is that educating all involved is critical. Training can typically remain somewhat informal during pilot testing. However, as an assessment process expands to involve many constituents, training must become more formal: there is a need to explain explicitly what the purpose of the assessment is, how it works, and how results will be reported and applied toward improvement.
Step 5. Apply results for improvement. The final step of our process involves collecting and using the information derived from implemented assessment programs. Many institutions do not take advantage of the information their assessments provide; how the data are used and applied for continuous improvement is the focus of this last step. Of course, how the information will be used for improvement is a question that must be answered in earlier steps, but implementing the continuous improvement process involves its own unique activities.
In the next issue, we will focus on developing the skills needed by faculty and students in order to make assessment and continuous improvement a reality.