
Editor’s Note: Assessment is the basis of course development and continuous quality improvement. This comprehensive program improvement strategy, developed for Ajman University of Science and Technology, revisits goals and assessment procedures on a regular basis to maintain academic programs that are relevant and high in quality.

Development of an Ongoing Assessment System
for Academic Programs

Zuhrieh Shana
United Arab Emirates

Abstract

Ajman University of Science and Technology, like other accredited and reputable higher education institutions, needs to regularly assess the effectiveness of its academic programs. This commitment is documented in its institutional mission "to guarantee pertinence and quality of educational programs through the constant assessment of learning outcomes."

It is a well-established fact that a single assessment tool may not give an accurate and reliable result. Consequently, it is recommended to use a variety of assessment tools and programs to ensure fair and objective judgments of the real achievement of graduates. The "Nine Principles of Good Practice for Assessing Student Learning" (AAHE, 1992) support the significance of a broad representation of assessment tools in order to cross traditional boundaries and take an innovative approach in pursuing excellence in the assessment of student learning. In this regard, an electronic assessment program, the Objective-Based Course Assessment Program, is being designed to be used as a systematic and ongoing process for determining whether the program is meeting its expectations.

The paper describes the justification for this assessment program, its conceptual framework, and an example of its use at the Department of Educational Technology, Ajman University of Science and Technology (AUST). When the assessment of each objective of all courses in the curriculum is completed, the degree to which program goals and objectives have been achieved is determined. Although it was designed and utilized for the Department of Educational Technology at AUST, this template-like assessment program can be adapted and used in any academic program at any educational institution.

Keywords: Objective-Based Assessment, Higher Education, Classroom-Based Assessment, Curriculum-Based Assessment, Outcome-Based Assessment, Curriculum Development, Accountability, Assessment, Program Accreditation, Learning Achievements, Teaching Evaluation, Program Evaluation, Course Evaluation

Introduction

Background

An academic program is defined by Cookson (1996) as "the organized learning activities which have been systematically planned to achieve, in a specific period of time, certain specific learning outcomes for one or more [participants]." In higher education, the rationale for program improvement is to achieve better student learning outcomes. Institutions are increasingly asked to demonstrate the effectiveness of their programs, which has led to the development of many assessment programs campus-wide.

Furthermore, since the performance of the individual students represents the performance of the institution accountable for providing the learning opportunities, the essential aim in assessment programs can be traced to a widespread interest in improving program quality and the need to respond creatively to internal and external constraints. This requires concrete proof and feedback on how well individual courses, programs, and the university as a whole are accomplishing their stated missions, goals and objectives.

Assessment in higher education has evolved over the years; hence program assessment is not new to educational institutions in general, and to AUST in particular. The majority of current assessment programs depend on collecting and reporting data to accrediting organizations. Assessment data collected by academic program directors, individual administrators and faculty members requires considerable effort. This is a good start, but the process is limited in scope, efficiency, storage, retrieval and planned function of the data.

Schilling and Schilling (1999) support the notion that “the impact of all this assessment on day-to-day functioning of the academy has been modest at best”. Moreover, the process is inefficient and results in duplication of effort and failure to collect certain relevant data. Thus, as the number of academic programs continues to grow and more data is collected, there is a need to develop a comprehensive, efficient, organized, and reliable program for the assessment of students’ experiences and accomplishments. For this purpose, a comprehensive Objective-Based Course Assessment Program was developed at AUST to pursue this goal.

Purpose

It is suggested that assessment is “first and foremost about improving student learning and secondarily about determining accountability for the quality of learning produced." (Angelo, 1999) The main purpose of the Objective-Based Course Assessment Program is derived from the above statement by focusing on continuously improved student learning through:

  • Definite proof of the students’ attainment of specified objectives;

  • Appropriate decisions on the curriculum, instruction, and an efficient strategy to overcome weaknesses of the program; and

  • A foundation to enhance teaching and learning within program courses.

Research Questions

The stated assessment program is expected to answer the following questions:

  1. What do we want our students to learn?

  2. What are we doing to help them learn it?

  3. How well are we doing what we are supposed to do?

  4. What should we do in the future for improvement?

Related Literature

Assessment

After analyzing the existing literature and research studies related to academic program assessment, the Palomba & Banta (1999) definition, “assessment … is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development,” was adopted for this study. Similar conclusions have been reached by Maki (2004).

Moreover, studies by educators and educational researchers support the claim that assessment has the responsibility of offering valid data of learning achievement in order to inform all participants in the learning process, to facilitate provision of further learning, or to attest that a required level has been attained, (Huba & Freed, 2000; Banta & Associates, 2002; Bresciani, 2003; Driscoll & Cordero De Noriega, 2006). Literature also corroborated the fact that in many academic programs “students are being evaluated, but there is no evidence that the program is being evaluated and improved as a result of these evaluations. The department needs to develop more specific objectives, with an eye toward evaluating curricular and co-curricular learning opportunities provided to the students.” (Hardy, 2004)

On the other hand, the findings of Palomba and Banta (1999) and Stassen, Doherty and Poe (2001) define four levels of assessment:

  • Classroom assessment involves the assessment of individual students at the course level, typically by the class instructor;

  • Course assessment involves the assessment of a specific course;

  • Program assessment involves the assessment of academic and support programs and is the focus of this study; and

  • Institution assessment involves the assessment of campus-wide characteristics and issues.

For the purpose of this study, the above levels of assessment have been adopted; they provide a useful framework for organizing the overview of the literature that follows.

Classroom and Course Assessment

Course assessment has always been an implied concern in higher education. The considerable growth of classroom and course assessment suggests that “classroom assessment is the purest form of assessment-for-improvement, because the information gleaned can be immediately used to improve teaching and learning …the further away from the individual classroom you get, the harder it becomes to turn assessment data into useable information.” (Miller, 1997)

As early as 1932, the distinguished scholar Ralph Tyler suggested that teachers should "formulate the course objectives, define the objectives in terms of student behavior, collect situations in which students are to indicate the presence or absence of each objective, and provide the method of evaluating the student’s reactions in the light of each objective.” Moreover, Wright (1999) confirmed that “all the curriculum reform in the world is ultimately useless if students do not learn what faculty teaches. Assessment is a response to the realization that—curricular reform, new teaching technologies, testing and grading, and ever-higher tuition notwithstanding—college graduates do not seem to be learning at a level that matches the expectations of employers, parents, or the general public. If assessment is to improve undergraduate education, it must be a faculty-defined, faculty-controlled activity."

Moreover, assessing learning achievement should be based on assessing the knowledge, skills and attitudes specified in curricula and courses. To provide a more comprehensive picture of student learning, Peggy Maki (2006) called attention to the focus on assessment as a process of investigating the efficacy of educational practices. Consequently, the wide range of activities involved in student learning is considered direct evidence of how students form meanings. She also added that “we are educating the next generation of experts in our disciplines. We should be curious about how we are educating those future experts and the pedagogy that underlies that education.”

Program Assessment

Assessment research has offered constructive information and insights on how to ensure quality in educational institutions: “the quality of an institution is marked, more than anything else, by the quality of its departments and its academic programs. Departments…are semi-autonomous organizations…and their vitality is what makes the institution tick. Without program quality, what happens in the rest of the institution makes little difference.” (Wergin, 2003)

Program Assessment can be looked upon as "any regular and systematic process by which the faculty designs, implements, and uses valid data gathering techniques for determining program effectiveness and making decisions about a program conducts and improvement." (Metzler and Tjeerdsma, 1998) Program assessment is based on a number of interrelated components and processes in place at the university to ensure institutional effectiveness in accordance with its mission. Thus the results of the assessment are to inform and often change instructional practices and courses, curriculum and program design. The relation between the institutional mission and course learning objectives is illustrated by the following:




Figure 1: Relationship between institutional mission and learning objectives

Banta (2002) reported that a successful academic program assessment begins by establishing the program’s objectives. These objectives should be based on the institutional goals and mission. However, taking into consideration the fact that institutional and program goals are often ambiguous, it is crucial to state and specify the program objectives that will serve as the basis of the program’s core curriculum courses. Subsequently, an academic program should state its curriculum course objectives as clearly and specifically as possible before any assessment methods/instruments are considered or data is collected. In Erwin’s words, "one must know what is to be assessed before one knows how to assess it" (Erwin, 1991). For this reason, course objectives must drive the selection of assessment methods and instruments.

By assessing the students’ achievements in each course, we can evaluate the course, and if we evaluate all the courses we can judge the effectiveness of the curriculum, programs, faculties and the institution as a whole. This relationship among the different components of higher education institutions shows the complexity, importance and necessity of an ongoing assessment to ensure that all educational components are focused on accomplishing the institution's mission. It emphasizes the need for an assessment plan that applies at all levels, from the institution to major programs and specific courses.

Institution Assessment

Gray (2002) stated that institutional assessment is a form of systematic investigation that results in improvement or accountability. However, there are a large number of definitions given by different researchers. Distinctions between these definitions have been summarized by Ewell (2002) as follows:

a)   Assessment initially refers to the processes used to determine an individual’s mastery of complex abilities, generally through observed performance.

b)  The performance of the individual students has been combined to reflect the performance of the institutions responsible for providing the learning opportunities.

c)   Assessment in higher education is currently seen as a special type of program evaluation whose purpose is to gather evidence to improve curricula and pedagogy with the intent of identifying means to improve the academic program's effectiveness.

To foster a greater and deeper understanding of institutional assessment, assessment guidelines provided by Driscoll & Cordero De Noriega (2006), have been examined from a logical perspective. The guidelines are:

  • Define and clarify program goals and outcomes for long-term improvement.

  • Make assessment-for-improvement a team effort.

  • Embed assessment into campus conversations about learning.

  • Use assessment to support diverse learning abilities and to understand conditions under which students learn best.

  • Connect assessment processes to questions or concerns that program decision makers or internal stakeholders really care about.

  • Make assessment protocols and results meaningful and available to internal and external stakeholders for feedback and ultimately improvement.

  • Design an assessment model that aligns with the institutional capacity to support it.

The above guidelines show that the main characteristic of assessment in higher education has shifted from being institution-centered to learner-centered. This is in accordance with Huba and Freed (1999), who "encourage us to focus on the student learning component of our teaching as it takes place within the entire system of our institution and within the smaller systems of our academic programs and courses.” In fact, much of the examined literature supported this by implying that the full cycle of assessment will not be complete, beneficial and worth the effort unless the results can be used for ongoing educational improvement (David et al., 1989; Glickman, 1991; Meier, 1987; Miles & Louis, 1990; O'Neil, 1990). Through a precise and comprehensive assessment program, vital information can be derived for the maintenance of the college’s integrity and its educational programs.

The literature provides an overview of general assessment issues, but lacks in-depth investigations to develop more reliable assessment tools and programs. Program assessment “can have negative effects - unnecessary apprehension, distraction of time from teaching and research, and unfulfilled promises and expectations." (Fulks, 2004) However, the continued existence and growth of program assessment practices suggest that results can be advantageous. Given the accumulation of program assessment at all levels of higher education, there is a need for a systematic study to develop assessment tools to support individual and collective program needs.

Communication of Assessment Data

After collecting the data, the following questions must be answered: what are we going to do with all of this data? Will anyone actually read it? Keep in mind that the activities leading to the reporting of the data may be just as valuable as, or more valuable than, the data itself (Dlugos, 2003).

Course Assessment Program (CAP) data are to be submitted in an electronic format, using University-supported software programs (spreadsheets for data and e-mail or word-processing programs for reports).

The flow of assessment information is illustrated in the chart below:
 



Figure 2. Flow of Assessment Information

Participants, Roles and Responsibilities:

1. Departmental Committee

  • Reports to the faculty committee and faculty

  • Subcommittees may be formed to enhance effectiveness and efficiency

a. Membership

§     Academic program’s chairperson

§    The selected course instructor

§    Department’s faculty members

§    Student Services Dean

§    Director of Institutional Information Technology

b. Roles and Responsibilities

§    Selection of four courses a year, one from each level

§    Submission of a portfolio for each selected course, documenting all course activities (samples of all tests, individual and group projects, assignments)

§    Specify the course's intended goals and objectives

§    Indicate the level of difficulty of each objective (easy, average or hard), based on Bloom's taxonomy and the normal distribution of IQ

§    Assign, based on the previous step, points/grades out of one hundred for each objective (based on its emphasis, complexity and importance); a sketch of such a record follows this list

§    Specify a criterion/standard of achievement for each objective

§    Assign teaching methods, strategies and evaluation instruments/exercises (examination, lab assignment, other written exercise, etc.) for each objective

§    Assess students’ performance on the specified tools for each objective

§    Assess student performance on all program courses to assess program effectiveness

§    Evaluate the assessment data collected

§    Facilitate department-wide discussions regarding the specific assessment findings and the assessment program;

§    Create an assessment report and convey the findings to the dean
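To make the committee workflow above concrete, the following minimal sketch (written in Python purely for illustration; the actual CAP program was built with Visual FoxPro and Access) shows how one course's objectives, difficulty levels, points out of one hundred, criteria and means of assessment might be recorded and checked. All class and field names are illustrative assumptions, not the program's real schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Objective:
    code: str                 # e.g. "1-1"
    statement: str
    difficulty: str           # "easy", "average" or "hard" (Bloom-based judgement)
    points: int               # weight out of 100 across the whole course
    criterion_pct: float      # minimum score (%) counted as achieving the objective
    means_of_assessment: List[str] = field(default_factory=list)

@dataclass
class CourseAssessmentPlan:
    course_no: str
    course_name: str
    objectives: List[Objective] = field(default_factory=list)

    def validate_points(self) -> None:
        # The per-objective points are grades out of one hundred, so they must sum to 100.
        total = sum(o.points for o in self.objectives)
        if total != 100:
            raise ValueError(f"Objective points sum to {total}, expected 100")

# Example: two hypothetical objectives for one course
plan = CourseAssessmentPlan(
    course_no="580111",
    course_name="Instructional Print and Audio Media",
    objectives=[
        Objective("1-1", "Select appropriate print media", "easy", 60, 75.0, ["Quiz", "Portfolio"]),
        Objective("1-2", "Produce an audio lesson", "hard", 40, 60.0, ["Lab assignment"]),
    ],
)
plan.validate_points()   # raises if the weights do not add up to 100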

2. Faculty Committee

  •  Reports to the University committee & Vice president of Academic Affairs

  •  Subcommittees may be formed to enhance effectiveness and efficiency

a. Membership

§    Faculty Dean

§    Faculty Departments’ Chairs

§    All Faculties Deans can participate

b. Roles and Responsibilities

§    Collection of the program/department specific data

§    Analysis of program/department specific and relevant assessment data

§    Evaluation of the collected data with emphasis on the difference between the desired and actual result

§    Recommending a plan for closing the gap between where we are now (according to the results) and where we need to be (according to our mission)

§    Reporting program/department specific data through appropriate channels to Vice President of Academic Affairs

3.  University Committee

  •  Reports to the University President

  •  Subcommittees may be formed to enhance effectiveness and efficiency

a. Membership

§    Vice President of Academic Affairs

§    Campus Director

§    Dean of Students Affairs

§    Dean of Records and Admission

§    Dean of Libraries and Resources Center

§    Dean of University Requirements

§    Head of IT Division

b. Roles and Responsibilities

§    Revision of the final report and discussion of any clarification, verification, suggestions or recommendations with the program/department chairperson and faculty dean

§    Acceptance of the final report and incorporating it into the annual Assessment Program report for dissemination to the University community

§    Making reports available for external accreditation committee

§    Establishing a plan for communicating and using results for improvement

§    Monitoring the improvement plan.

§    Communicating any assessment data collected through the appropriate channels

The Assessment Program

The Objective-Based Course Assessment Program requires the faculty of an academic program to have agreed-upon goals and specific objectives for the students to achieve in each course, in order to assess the students directly against those objectives. Each student’s grade in the course depends solely upon the objectives he/she has achieved. The assessment program describes how objectives and the means of attaining them contribute to a student’s achievements and to the course’s degree of success, which can be determined directly from the number of objectives met (Figure 1).

The Objective-Based CAP© Course Template allows instructors to record student performance on a variety of assessment tasks and tabulates that information over the semester (a small sketch of this tabulation appears after Figure 3). Once the data is entered and the performance criteria are set, a click of a button can generate multiple reports showing how students have performed on their objective-based assessments. The process also entails gathering, analyzing, and interpreting evidence to determine how well performance matches expectations and standards, and using the resulting information to document, explain, and improve performance.
 

Course No: ________        Course name: ________        Depart: ________
Instructor Data:   Name: ________    Phone: ________    Email: ________

Goals & Objectives:

Goal-1:

    Objective 1-1:
        Description {Teaching method – Strategy – Criteria for assessing the achievement}
        Means of Assessment {Quizzes, Portfolio, T/F, … etc.}
        Accomplishments {Students scores}

    Objective 1-2:
        Description {Teaching method – Strategy – Criteria for assessing the achievement}
        Means of Assessment {Quizzes, Portfolio, T/F, … etc.}
        Accomplishments {Students scores}

    Objective 1-3:
        Description {Teaching method – Strategy – Criteria for assessing the achievement}
        Means of Assessment {Quizzes, Portfolio, T/F, … etc.}
        Accomplishments {Students scores}

Goal-2:

    Objective 2-1:
        Description {Teaching method – Strategy – Criteria for assessing the achievement}
        Means of Assessment {Quizzes, Portfolio, T/F, … etc.}
        Accomplishments {Students scores}

    Objective 2-2:
        Description {Teaching method – Strategy – Criteria for assessing the achievement}
        Means of Assessment {Quizzes, Portfolio, T/F, … etc.}
        Accomplishments {Students scores}


Figure 3: Course Presentation in Objective-Based CAP Template
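The tabulation supported by the template can be illustrated with a short sketch. Assuming each student's percentage score on one objective is available, the hypothetical function below groups the class into letter-grade bands, producing a distribution of the kind shown later in Tables 1a and 1b. The grade boundaries used here are assumptions for illustration only, not AUST's official scale.

GRADE_BANDS = [          # (grade, minimum percentage), in descending order; assumed cutoffs
    ("A+", 90), ("A", 85), ("B+", 80), ("B", 75),
    ("C+", 70), ("C", 65), ("D+", 60), ("D", 50), ("F", 0),
]

def letter_grade(score: float) -> str:
    for grade, cutoff in GRADE_BANDS:
        if score >= cutoff:
            return grade
    return "F"

def grade_distribution(scores: list[float]) -> dict[str, int]:
    # Percentage of students in each grade band for one objective (cf. Tables 1a and 1b).
    counts = {grade: 0 for grade, _ in GRADE_BANDS}
    for s in scores:
        counts[letter_grade(s)] += 1
    n = len(scores)
    return {grade: round(100 * c / n) for grade, c in counts.items()}

# Ten students' scores on a single objective
print(grade_distribution([88, 72, 66, 77, 78, 76, 79, 68, 55, 91]))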


Objective-Based CAP© Cycle:

The Assessment Cycle applies the systematic methodology of ADDIE, the instructional design model, to the assessment, design, development, implementation and evaluation of academic programs (Ryder, 2006).

This entire discussion on objective assessment has posed some important questions that can be given serious consideration when designing an assessment program/plan. Questions regarding standards include:

  •  What level of performance and by what percentage of the students is considered adequate?

  •  At what point do we decide that there is a problem?

 

Figure 4: Objective-based CAP Cycle

There are no definite quick answers to these questions; however, experts say that at least 75% of the students must achieve at least 75% of the course objectives, for the following reasons:

1.  The Normal Distribution (Normal Curve):

Intelligence can be defined as the ability to learn or understand or to deal with new or trying situations. According to CTB/McGraw-Hill, "in a normal distribution, approximately two-thirds (68.3%) of the scores lie within the limits of one standard deviation above and one standard deviation below the mean. One-sixth of the scores lie more than one standard deviation above the mean, and one-sixth lie more than one standard deviation below the mean" (Figure 5). For example, deviation IQs are standard scores with a mean of 100 and, usually, a standard deviation of 16. (See http://www.ctb.com/articles/article_information.jsp?CONTENT )

The normal curve represents the normal distribution of IQ. It illustrates that 68% of the scores lie between -1 and +1 standard deviation (Talman, 2007).
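This 68% figure can be verified numerically. The short sketch below, using only the Python standard library, computes the share of a normal distribution with mean 100 and standard deviation 16 (an IQ-style scale) that lies within one standard deviation of the mean.

from math import erf, sqrt

def normal_cdf(x: float, mean: float, sd: float) -> float:
    # Cumulative distribution function of a normal distribution.
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

mean, sd = 100.0, 16.0
share_within_1sd = normal_cdf(mean + sd, mean, sd) - normal_cdf(mean - sd, mean, sd)
print(f"{share_within_1sd:.1%}")   # prints 68.3%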

 

Figure 5: Percentage of Cases under Portions of the Normal Curve
 

2. Objectives Performance Index

“The OPI makes test results both understandable and useful for the teacher in planning effective learning strategies and activities. The OPI is an estimate of the number of items that a student would be expected to answer correctly if there had been 100 similar items for that objective....The OPI scale runs from 0 for total lack of mastery to 100 for complete mastery. For CBT Achievement tests, OPI scores between 0 and 49 are regarded as the non-Mastery level. Scores between 50 and 74 are regarded as indications of partial Mastery. Scores of 75 and above are regarded as the Mastery level.”(CTB/McGraw-Hill, 1997)

3. AUST Grading System and Graduation Requirements

A Grade Point Average (GPA) of 2 points, which is a C or an average of 70%, is required for graduation: “Students will not be allowed to graduate unless they achieve the accumulative grade point average of 2 or above even if they have passed all subjects projected for the degree they are studying for.” (AUST website: http://www.ajman.ac.ae/aust/index.htm)

Therefore, for a course to be classified as “meets expectation”, a minimum of 75% of the students in the course must earn a grade of C+ or above (that is, reach the mastery level) in at least 75% of the objectives. Nevertheless, the CAP assessment program allows the department’s committee to choose the achievement level for each objective according to its level of difficulty, based on Bloom’s Taxonomy (Anderson & Krathwohl, 2001) or any other logical and approved rationale.

The average of students’ achievements in the academic program courses will be calculated and a detailed report will be provided for the program based on the following criteria (an illustrative sketch of this decision rule follows the list):

§    Exceeds expectation: 85% to 100% of the objectives are met

§    Meets expectation: 75% to 85% of the objectives are met

§    Needs modification: 65% to 74% of the objectives are met

§    Unsatisfactory: 64% or less of the objectives are met
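As an illustration of the decision rule described above, the following sketch classifies a course from the share of students reaching the mastery level (C+ or above) on each objective. The thresholds come directly from the criteria listed above; the function and variable names are illustrative assumptions.

def objective_met(mastery_rate: float) -> bool:
    # mastery_rate: share (0-100) of students at C+ or above on this objective.
    return mastery_rate >= 75.0

def course_rating(per_objective_mastery: list[float]) -> str:
    # Rate the course by the percentage of its objectives that are met.
    met = sum(objective_met(r) for r in per_objective_mastery)
    pct_met = 100.0 * met / len(per_objective_mastery)
    if pct_met >= 85:
        return "Exceeds expectation"
    if pct_met >= 75:
        return "Meets expectation"
    if pct_met >= 65:
        return "Needs modification"
    return "Unsatisfactory"

# Eight objectives; the share of students at C+ or above on each
print(course_rating([90, 80, 60, 75, 85, 70, 78, 82]))   # 6 of 8 met -> "Meets expectation"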

Implementation

The CAP Program was implemented as a Windows-based computer program using Microsoft Visual FoxPro version 8 as the main programming tool and Microsoft Access 2003 as the database management system. Figure 6 shows a snapshot of the CAP program interface.

Figure 6: The Interface of the CAP Program

The program accepts the raw data: the courses and the student database. The assessment process starts by identifying the course assessment criteria, the methods/means of assessment with the points allocated as a weight to each method/mean, and the students' scores for each objective. The final outputs of the program are two items: a final report on course performance and charts of the students' performance on each objective.
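The weighting step can be illustrated with a small sketch: each means of assessment carries the points allocated to it, and a student's score on an objective is the weighted average of his or her scores on those instruments. The instrument names and weights below are illustrative assumptions, not taken from an actual AUST course.

def objective_score(weights: dict[str, float], scores: dict[str, float]) -> float:
    # Combine per-instrument scores (0-100) into one objective score using their weights.
    total_weight = sum(weights.values())
    return sum(weights[m] * scores[m] for m in weights) / total_weight

weights = {"quiz": 30, "lab assignment": 50, "portfolio": 20}   # points per instrument
scores = {"quiz": 80, "lab assignment": 70, "portfolio": 90}    # one student's scores
print(objective_score(weights, scores))                         # prints 77.0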

The fundamental nature of the CAP assessment program is to integrate assessment activities that focus on the academic program’s goals and objectives, which in turn align with the institutional mission statement, goals and objectives. The CAP assessment program has been designed to ensure that all academic programs use the same assessment program elements, definitions of terms, database, and reporting designs. This unified approach will result in improved institution-wide awareness of assessment and a shared database, which will lead to ongoing improvement of program effectiveness and enhancement of student learning.

To achieve the above-mentioned aims, the objective-based CAP program was demonstrated to a group of Ajman University faculty members representing different departments, and modified according to their recommendations and suggestions. Moreover, the objective-based CAP program was tested on one of the Educational Technology Department’s courses, Instructional Print and Audio Media (580111), a three-credit-hour course (1 theory + 4 practical) offered in the first semester (20041), and on a similar course, Instructional Visual Media (580123) (1 theory + 4 practical), in the second semester (20042), with the same group of students and taught by the same faculty member.

Similar results were obtained for the two courses; however, a slight improvement was noticed in the second course (580123). This improvement can be explained by the time at which the courses were offered: the first course (580111) is taken during the first semesters of the study plan, when students are still new to the teaching/learning environment at AUST. Hatcher and others (1992) explained this by predicting that satisfaction with the university experience is linked to students’ performance (Figures 7a & 7b).

Table 1a
Instructional Print and Audio Media – First Semester

Grades   obj_1   obj_2   obj_3   obj_4   obj_5   obj_6   obj_7   obj_8
A+         0%      0%      0%     10%     10%     10%     10%      0%
A         10%     10%     10%      0%     10%     10%     10%     10%
B+         0%      0%      0%      0%      0%     10%     10%      0%
B         20%     10%     50%     40%     50%     40%     40%     20%
C+         0%      0%      0%      0%      0%      0%      0%     10%
C         60%     70%     30%     30%     10%     10%     10%     50%
D+         0%      0%      0%      0%      0%     10%     10%     10%
D         10%     10%     10%     20%     20%     10%     10%      0%
F          0%      0%      0%      0%      0%      0%      0%      0%

 

Figure 7a: Overall results of 580111: Instructional Print & Audio Media

 
Table 1b
Instructional Visual Media – Second Semester

Grades   obj_1   obj_2   obj_3   obj_4   obj_5   obj_6   obj_7   obj_8
A+         0%      0%      0%      0%      0%      0%      0%      0%
A         10%     10%     10%     10%     20%     20%     20%     10%
B+         0%      0%     50%     30%     40%     40%     40%     20%
B         20%     10%      0%     10%     10%     10%     10%      0%
C+        60%     70%     30%     30%     10%     10%     10%     60%
C          0%      0%      0%      0%      0%      0%      0%      0%
D+         0%      0%      0%      0%      0%     10%     10%     10%
D         10%     10%     10%     20%     20%     10%     10%      0%
F          0%      0%      0%      0%      0%      0%      0%      0%

 

Figure 7b: Overall results of 580123: Instructional Visual Media

The major intention in creating and implementing an assessment instrument is to eliminate, or at least limit, the factors that might have an undesirable effect on its validity, and the interpretation of the assessment outcomes has to be based on the extent to which such factors can be controlled. For this reason, and to support continuous correction and improvement of the assessment program, the Objective-Based Course Assessment Program has been used for the last three years to assess one course each semester. Results of the individual courses are listed in Table 2 below:


 
Table 2
Objective-Based Course Assessment Program

S   Course Name                            Semester   No. of Students   Final Result
1   Instructional Print and Audio Media    2nd 2004   10                Meets Expectations
2   Instructional Visual Media             2nd 2004   10                Meets Expectations
3   Computer Based Training                1st 2005   34                Meets Expectations
4   Introduction to Distance Education     1st 2005   15                Needs Improvement
5   Practicum                              1st 2006   6                 Exceeds Expectations
6   Training Strategies                    1st 2007   20                Meets Expectations


The above trial testing provided continuous and concrete feedback about the assessment program, and the results were channeled back into program development to improve its accuracy.

Timing, Findings and Improvements:

1. Timing

Since no institution or department has the resources or time to continually assess all possible aspects of each academic program, it is rational to focus the department's assessment efforts on the program's core courses. Two courses per semester will be selected for assessment, four courses each year, one from each academic level of the study program (1st, 2nd, 3rd & 4th). By the time all the core courses have been assessed, the department will be ready for the accreditation and review process.

2. Findings

The findings will be presented in one or more of the following approaches: narrative, tabular, or graphics. The result will be based upon the following rating categories, as shown in figure 5:

  •  Exceeds Expectations

Reserved for those whose achievements substantially exceed acceptable performance; all objectives and job requirements are met and the end result is outstanding.

  •  Meets Expectations

A term applied to those whose achievements meet all objectives and job requirements; competent in all responsibilities of the position; and require minimal direction.

  •  Needs Improvement

Reserved for those whose objectives and job requirements are not fully achieved; and require substantial direction.

  •  Unsatisfactory

A term applied to those who fail to achieve objectives and job requirements; requiring continuous direction. Overall performance is unacceptable.

3. Implications/Recommendations

Discuss the implications of the findings and suggest recommendations for improvement plans. These recommendations should be supported by data, often drawn from findings involving more than one student or learning objective, and by best practices from the professional literature.

4. Improvement Plan

Assessment data collected at the academic program level will form the most essential component of the program's quality assessment and improvement. The improvement plan includes:

  •  Assessment Findings: A brief description of program assessment findings.

  •  Justifications: A brief explanation of the reasons and rationale for the improvement plan.

  •  Objective: Listing of the desired objectives of the improvement plan.

  •  Actions: Listing of main activities needed to achieve the identified objectives of the plan.

  •  Responsibility: Responsibility for carrying out the plan.

  •  Duration: Time needed to conduct the plan (beginning and ending times)

  •  Budget: Budget and resources needed, if any.  

  •  Wrap-up: Examines the consequences at the end of the plan and whether the objectives were sufficiently met.

Recommendations

Objective-based assessment has been a valuable and integral part of programmatic development. It has the capability to positively influence students and faculty members in academic programs throughout the campus. Through this program, faculty will acquire useful information about student learning that may support existing educational practices or demonstrate the need for changes. It also encourages collaborative faculty teamwork to develop strategies that align with the department’s educational missions, goals, and objectives. This is considered essential for the assessment program to be successful.

At the Department of Educational Technology at AUST, the objective-based course assessment program has demonstrated its capacity to focus systematic attention on how a program has performed in relation to what was intended. Since reliable assessment programs often take years to mature and to generate exactly the type of results expected, we recommend that AUST adopt this system. In this case, certain modifications must be made to suit AUST’s needs; such modifications may include feeding in information about the AUST University, colleges, programs and courses, which will need to be updated regularly. Moreover, this template-like Objective-Based Course Assessment Program can be easily adapted and applied to any program at any higher education institution.

 

ACKNOWLEDGEMENTS

The author thanks Mr. Shubair Abed Al Kareem, the programmer of the Objective-Based Course Assessment Program; without his significant and continuous contributions, the achievements described in this paper would not have been possible.

 

References

1.   AAHE Assessment Forum (1992). Principles of Good Practice for Assessing Student Learning. American Association for Higher Education.

2.   Anderson, L. W., & Krathwohl, D. R. (Eds.) (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York: Longman.

3.   Angelo, T. A., & Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

4.   Banta, T. W., and Associates (2002). Building a Scholarship of Assessment. San Francisco, CA: Jossey-Bass. 368 pp.

5.   Bresciani, M. J. (2003). Expert-driven assessment: Making it meaningful. Educause Center for Applied Research (ECAR) Research Bulletin, (21).

6.   Cookson, P. S. (1996). Program Planning for Lifelong Education [Draft]. State College: Pennsylvania State University.

7.   CTB/McGraw-Hill (1997). Beyond the Numbers: A Guide to Interpreting and Using the Results of Standardized Achievement Tests, p. 11.

8.   David, J. L., Purkey, S., and White, P. (1989). Restructuring in Progress: Lessons from Pioneering Districts. Washington, DC: National Governors Association.

9.   Driscoll & Cordero De Noriega (2006). Taking Ownership of Accreditation: Assessment Processes that Promote Institutional Improvement and Faculty Engagement. Sterling, VA: Stylus.

10.  Ewell, P. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta and Associates (Eds.), Building a Scholarship of Assessment (pp. 3-25). San Francisco, CA: Jossey-Bass.

11.  Fulks, J. (2004). Assessing Student Learning in Community Colleges. Bakersfield College, California, USA. Retrieved May 10, 2007, from http://online.bakersfieldcollege.edu/courseassessment/

12.  Glickman, C. (1991). Pretending Not to Know What We Know. Educational Leadership, 48(8).

13.  Gray, P. J. (2002). The Roots of Assessment: Tensions, Solutions, and Research Directions. In T. W. Banta and Associates, Building a Scholarship of Assessment (pp. 49-66). San Francisco, CA: Jossey-Bass.

14.  Hardy, J. P. (2004). Chadron State College Assessment – Status Report.

15.  Hatcher, L., Kryter, K., Prus, J., & Fitzgerald, V. (1992). Predicting college student satisfaction, commitment, and attrition from investment model constructs. Journal of Applied Social Psychology, 22, 1273-1296.

16.  Huba, M. E., & Freed, J. E. (1999). Learner-Centered Assessment on College Campuses. Allyn and Bacon. Chapter 1, pp. 1-31.

17.  Huba, M. E., & Freed, J. E. (2000). Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. Needham Heights, MA: Allyn and Bacon.

18.  Maki, P. L. (2004). Assessing for Learning: Building a Sustainable Commitment Across the Institution. Sterling, VA: Stylus, pp. 2, 4.

19.  Maki, P. L. (2006). Assessing What Students Learn in Technology-Based Learning Environments. ELI Web Seminar.

20.  Miles, M. B., and Louis, K. S. (1990). Mustering the Will and Skill for Change. Educational Leadership, 47(8).

21.  Meier, D. (1987). Central Park East: An Alternative Story. Phi Delta Kappan, June 1987.

22.  Metzler, M. W., and Tjeerdsma, B. L. (1998). PETE Program Assessment Within a Development, Research, and Improvement Framework. Journal of Teaching in Physical Education, 17 (July), 468-492.

23.  Miller, M. A. (1997). Looking for results: The second decade. In American Association for Higher Education (Ed.), Assessing Impact: Evidence and Action (pp. 23-30). Washington, DC: American Association for Higher Education.

24.  O'Neil, J. (1990). Piecing Together the Restructuring Puzzle. Educational Leadership, 47(7).

25.  Palomba, C. A., & Banta, T. W. (1999). Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco, CA: Jossey-Bass, p. 4.

26.  Ryder, M. (2006). Instructional Design Models. http://carbon.cudenver.edu/~mryder/itc_data/idmodels.html

27.  Stassen, M. L. A., Doherty, K., & Poe, M. (2001). Program-Based Review and Assessment: Tools and Techniques for Program Improvement. Amherst, MA: Office of Academic Planning and Assessment, University of Massachusetts.

28.  Schilling, K. M., and Schilling, K. L. (1999). Proclaiming and Sustaining Excellence: Assessment as a Faculty Role. ASHE-ERIC Higher Education Report, 26(3). Washington, DC: The George Washington University, Graduate School of Education and Human Development. 127 pp.

29.  Tyler, R. W. (1932). The construction of examinations in botany and zoology. Service Studies in Higher Education, Ohio State University, Bureau of Educational Research Monographs, 15, 49-50.

30.  Wergin, J. F. (2003). Departments that Work: Building and Sustaining Cultures of Excellence in Academic Programs. Bolton, MA: Anker Publishing Company, Inc. 156 pp.

31.  Wright, B. D. (1999). Evaluating Learning in Individual Courses. Retrieved June 15, 2003, from http://www.cai.cc.ca.us/Resources

 

About the Author

Dr. Zuhrieh A. Shana
Deputy Head, Ed Tech Dept.
Ajman University of Science
and Technology
United Arab Emirates
fjac.zuhrieh@ajman.ac.ae

zoeshanaa@yahoo.com

Dr. Zuhrieh Shana has been the Deputy Chair of the Educational Technology Department at Ajman University of Science and Technology, UAE, since 2002. She graduated with a B.Sc. and a Master’s degree in Instructional Media from Utah State University and a Ph.D. in Educational Media from the University of Missouri, USA. She has over 20 years of teaching, training, consulting and research experience in different academic institutions in the USA, Saudi Arabia, Canada and the United Arab Emirates.

Besides lecturing at the graduate and undergraduate levels and leading various student services activities, she has worked with students individually and supervised graduate and undergraduate students’ teaching training and research. She also developed an Objective-Based Course Assessment Program to facilitate the Department’s curriculum development and to help create an environment where students and faculty can enjoy their work and be dynamic.

 
