May 2009 Index 
 

Editor’s Note: In communication theory (and practice) we use feedback to confirm that messages are correctly received and understood. It is especially important in teaching and learning for reinforcing correct responses and correcting incorrect ones. This is a comprehensive study to determine how well feedback systems in distance learning achieve the desired outcomes.

Can Students Improve Learning with
their use of an Instructor's Extensive
Feedback Assessment Process?

Ni Chang
USA

Abstract

What kinds of feedback are welcomed by e-students, and what are the reasons behind their positive or negative perceptions? Unfortunately, these questions are not well documented in the published literature. The present study was designed to fill that void by exploring pre-service teachers’ perceptions of the ways an instructor provided feedback on their assignments. The study was also intended to understand the reasons behind those perceptions. A sample of 29 students participated in the survey study. The data were analyzed quantitatively and qualitatively. The findings show that the students strongly and positively supported the way that feedback was communicated to them. The qualitative analysis identified two themes: Promptness and Helpfulness. The theme of Helpfulness comprises several categories that offer specific reasons for the participants’ strong preference for the way the instructor provided feedback on their assignments. This report also shares negative cases, recommendations for future research, and educational implications.

Keywords: online feedback, personalized instruction, interactions between instructor and student, student learning, assessment process.

Introduction

Feedback is intended to help improve one’s study or work. Assessment of students’ assignments, therefore, should move beyond the practice usually performed by instructors. In other words, awarding a summative grade is not an adequate response if an instructor wishes to facilitate student learning. Summative grading is “too little, too late,” even if the grade is accompanied by brief notes of positive feedback, such as “Good” or “Excellent,” or by a few words indicating the need for improvement. Such symbols are unlikely to prompt students’ higher-level thinking or to encourage them to genuinely reflect on their performance for enhancement. Unfortunately, this type of assessment has traditionally been employed frequently in higher education.

In a virtual learning environment, quizzes and exams are often assessed in this or a similar fashion as well. Automatic, machine-generated feedback is popular and widely utilized by many e-instructors in higher education. This relatively novel means of assessment has been endorsed by some researchers. Peat and Franklin (2002) argued that machine-generated feedback provided students with quiz or exam results without delay. Northcote (2002) agreed with Peat and Franklin in that this methodology reduced the time and cost needed by the course instructor when grading a large number of student assignments. However, this approach fails to acknowledge the diverse learning styles that pervade most classrooms today, be they traditional face-to-face meetings or e-learning environments. A lack of information appropriate to an individual student’s specific learning needs could undermine the student’s otherwise high confidence in learning (Chang & Petersen, 2006).

Feedback generated by a machine/computer signifies an interactive exchange between a learner and a machine. The human interactive elements unavoidably omitted from this assessment process are in fact crucial to effective learning (Chang, 2009), as they not only allow students to know how and where further work is needed, but also enable a course instructor to analyze obstacles to desirable student learning (Chang & Petersen, 2006). Given the pros and cons of the traditional conception of grading and the nature of automatic feedback disseminated by computers, an individualized, coaching type of feedback makes sense for promoting student learning and may be something a course instructor would like to attempt (Chang, 2009). The present study was intended to explore pre-service students’ perceptions, and the rationales behind them, of the way the instructor provided personalized feedback on their online assignments.

Theoretical Framework

Feedback, by design, is to advance student learning and to promote understanding as a communicative process (Public Broadcasting System, 2007). Thurmond and Wambach (2004) further defined feedback as information exchange between a course instructor and student about course related activities and projects for the purpose of student learning. While it is no longer a question whether or not feedback needs to be provided to students, the crucial questions to ask are how and when feedback needs to be offered for students’ assignments.

Given that immediate feedback might be appreciated by students, Bonnels (2008) questioned when feedback ought to be provided to actually facilitate learning and how often students wanted feedback from course instructors. Feedback helps allay students’ sense of isolation and informs them of the status of work completion (Billings, 2000). It is important for helping students maintain pace and schedule in the online classroom (Thurmond, 2003). Vella (2002) suggested that formative feedback be provided at scheduled points for students to improve their learning.

With respect to preferable means for an instructor to offer feedback on students’ assignments, Morgan and Toledo (2006) compared students’ reactions to handwritten and typewritten feedback. Using a Tablet PC and a computer to offer feedback, the researchers found that the students favored the handwritten feedback generated by the Tablet PC. Handwritten feedback seemed less distant to students than typewritten feedback. The drawback of this approach lies in the limited space in which to provide specific comments; a lack of detailed information would consequently result in students’ confusion and possible frustration.

Another strategy is pervasive and has been deployed by a wide array of institutions of higher education, yet it still has its shortfalls. Machine/computer-generated feedback enables students to receive rapid feedback on their quizzes and exams. Not only does it provide students with quiz or exam results without delay (Peat & Franklin, 2002), but it also reduces the cost and time needed by the course instructor to grade a large number of student assignments (Northcote, 2002). On the negative side, however, mechanical corrections may not provide the instructor with adequate information to analyze obstacles to student success (Chang & Petersen, 2006). This standardized form of evaluation and feedback does little to promote learning. Students are not encouraged to examine or change any learning behaviors based on the feedback or discussions of it. Since a grade-granting process with a few words tucked in the margin as feedback does little to promote students’ content understanding, a conscientious and conscious examination of such limited feedback to enhance academic understanding appears unlikely. Most importantly, it lacks individualized, personal interaction. It is fundamentally a one-size-fits-all policy for attending to all learners’ work, and it does not address the principle of advocating for the rights and needs of diverse learners in the educational field with the understanding that there are distinctive learning styles.

To address the deficiencies of the aforementioned approaches, an e-instructor is encouraged to consider employing an individualized, coaching type of feedback (Chang, 2009). Such feedback responds to an individual learner’s specific learning status as expressed in his or her homework. Reading and contemplating the instructor’s feedback allows learning to take place; it is therefore regarded as personalized coaching (Chang & Petersen, 2006). Most importantly, this type of interaction in a virtual learning environment is crucial to effective learning (Chang, 2009). Vasilyeva, De Bra, Pechenizkiy, and Puuronen (2008) favored personalized feedback, with the provision that tailoring feedback to a student’s preferences and responses to questions on an online test should be part of the responsibilities of faculty. These researchers took a close look at differing feedback designed for multiple-choice quizzes and concluded that providing feedback was a process of scaffolding, as it rendered assistance compatible with an individual student’s expressed level of learning: students received feedback that they believed suited the way they answered questions. El Mansour and Mupinga (2007) further endorsed the theory by surveying 34 online students. They found that the students supported quick and personalized feedback, as it kept them on the right track and allowed the instructor to gain further knowledge of each student’s learning progress. If "the teachers did not get to know the students personally," the students felt lost in cyberspace (p. 245). Even though some literature addresses the merits of feedback in facilitating student learning, specific discussion of facilitating student learning through individualized coaching has not been explored extensively (Gallien & Oomen-Early, 2008; Mason & Bruning, 2003). 
There is also a scarcity of discussion in the published work targeting undergraduate students’ perceptions of an instructor’s immediate and detailed feedback. The findings of this research study will therefore fill a void in the field by sharing pre-service teachers’ perceptions of the way that the instructor communicated with them regarding their homework, as well as the rationales behind those perceptions. The research question underlying this study was, “What are students’ preferences and related reasons for an instructor’s feedback in the process of online assessments?”

Methodology

Subject and Site

Twenty-nine pre-service teachers on a Midwestern regional university campus participated in this study. The majority of the students were seniors (69%), were between 21 and 24 years of age (66%), and had a GPA of 3.0 or above (69%). It was a convenience sample, because the researcher happened to be their instructor (hereafter referred to as the instructor). The instructor taught these students in a course entitled “Introduction to Early Childhood Education.” The course had two sections, one with 20 students and the other with 10, both meeting twice weekly.

After being admitted into the teacher preparation program, enrolled students must move through the entire program as a cohort. To graduate from this institution, students must successfully complete three semesters (three blocks—Block One, Block Two, and Block Three) as well as their student teaching. The present study took place during Block One—the first semester after enrollment in the teacher education preparation program. With respect to computer skills, every student must take a required course entitled “Using Computers in Education” (W200) before admission to the teacher preparation program. Through this course, students acquire basic knowledge and skills about computers and familiarize themselves with Oncourse CL (Oncourse Collaboration and Learning, https://oncourse.iu.edu/portal, a course management system developed by Indiana University in collaboration with other major universities). The knowledge and skills gained from this course paved the way for subsequent learning with computer technology.

Data Collection

The students and the instructor met on Tuesdays and Thursdays in accordance with the university academic calendar. The traditional face-to-face meetings included normal classroom activities such as lectures, small- and large-group discussions, and hands-on activities. All assignments, however, were submitted through Forum on Oncourse CL. The students were allowed to pace and control their learning by deciding when to submit assignments, provided that they conscientiously followed the corresponding deadlines. Upon receiving an assignment, the instructor reviewed it, making necessary comments and corrections in light of the guidelines (the protocol) using the Comment and Track Changes features available in Microsoft Word. These two features enabled the instructor to elaborate and explain in written feedback, informing the student of what had been achieved as expected or exceptionally well and why certain areas needed improvement. The feedback covered not only course-related content but also the use of APA style (the citation guidelines stipulated by the American Psychological Association) and grammatical and mechanical errors. At the end of the student’s writing, the instructor also left a summary note acknowledging the student’s effort in completing and submitting the work (e.g., “Thank you for the submission”), confirming good work (e.g., “Your introduction and objective note are well done”), and/or pointing out where, if anywhere, he or she was expected to concentrate (e.g., “There are areas for further improvement (please see the comments in the right margin of the text)”). In short, the instructor made every effort to respond to the students’ work thoroughly and promptly.

If there was a need for improvement, the instructor would mark 1 in the electronic gradebook to indicate that a revision from the student was expected. However, the student was free to decide whether to make the relevant revision or to take the grade as indicated in the feedback without revising the work. If the instructor received a student’s revised work, a grade (usually represented by points) corresponding to the quality of that revision was granted. If the work was still below expectations, the same student would again decide whether to continue revising. Once a round of revision was received and met expectations, the assignment would earn back 60% of the missing points. For example, if an assignment was awarded 80/100 in the first review, it was missing 20 points. If the revision met expectations, the grade would be changed to 92 by adding 12 points (newly gained as a result of revision) to the original 80, through this formula: 20 (missing points) × 60% (the maximum share of missing points one can earn back, provided the work is of good quality) = 12. A similar grading protocol would continue on the student’s ensuing revisions until the instructor was satisfied or until the student informed the instructor of his or her unwillingness to continue the revisions.
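As a sketch, the revision-grading formula described above can be expressed in a few lines of Python. The function name and parameters are illustrative assumptions, not from the article:

```python
def regrade(original_points: float, max_points: float = 100.0,
            recovery_rate: float = 0.60) -> float:
    """Illustrative sketch of the revision-grading formula: a revision
    that meets expectations earns back 60% of the missing points."""
    missing = max_points - original_points
    return original_points + missing * recovery_rate

# Worked example from the text: 80/100 on first review;
# 20 missing points * 0.60 = 12 points recovered, so 80 + 12 = 92.
print(regrade(80))  # 92.0
```

Applying the same rule to a subsequent round would simply start from the new score, consistent with the continuing protocol described above.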

The instructor frequently encouraged and reminded the students to email or telephone the instructor whenever they had a question or felt confused about the feedback or any assignment, to avoid unnecessary frustration that could discourage an otherwise positive learning disposition.


Instrument

The survey was administered during the last day of class. The students completed the survey in their own classrooms, and the instructor was not present in either classroom while they answered the questions. A student representative was responsible for collecting the completed surveys along with the informed consent forms. The entire packet was then stored in the Assistant Dean’s office until all grades were formally submitted online. No time limit was set for the students to respond to the survey questions.

The survey consisted of three demographic questions, four closed-ended questions, and two open-ended questions. The three demographic questions solicited the students’ age range, student status, and GPA. The four closed-ended questions asked about their preference with respect to how helpful the online feedback provided by the instructor was (noted on a 5-point Likert scale, with 5 indicating the most preferred and 1 the least preferred), whether they had easy access to computers, whether they were interested in computers, and the percentage (from 0% to 100%) of computer integration that they would like to see in their teaching and learning (the present online submission of assignments and communication between the instructor and students constituted approximately 30% integration, based on the instructor’s knowledge). The two open-ended questions asked the students to elaborate the rationales for their choice of a numeral relating to their preference for the way the instructor offered feedback, and for the percentage they believed would be the appropriate amount of computer integration in their future learning (if they were to take this course again). There was no limit on the number of reasons the students could note on the survey. Because of its scope, the present study focused only on the questions concerning the students’ insights regarding the instructor’s feedback.

Data Analysis

The data were analyzed both quantitatively and qualitatively. For the quantitative analysis, the instructor recorded frequencies for the students’ degrees of preference toward the way the instructor provided feedback on assignments submitted via Forum on Oncourse. The qualitative analysis was intended to obtain the reasons for the students’ selection of a specific numeral on the 5-point Likert scale. The instructor first read and re-read the completed surveys to get a sense of the content, focusing on the central idea of the present study. The data were then coded with abbreviations of tentative categories, which were transposed to a list. The instructor then re-read the raw data to confirm or add to the initial decisions and to combine or refine categories that might overlap. This approach ensured that all the data were saturated and sorted into a corresponding category. Identifying common threads from the coded categories was the next step in the data analysis process. The common threads were the basis for the written thematic statements and narratives concerning the various aspects of pre-service teachers’ reactions toward online feedback.

Trustworthiness

The general purpose of this research study was to obtain two measures: knowledge production (the perceptions of the pre-service teachers toward the way that the instructor provided feedback on their assignments) and the rationale (why the students liked or disliked that approach to the provision of personalized online feedback). These purposes coincided with the “trustworthiness” principles set forth by Lincoln and Guba (1985).

Results and Discussion

This study was intended to explore the viewpoints of pre-service teachers with respect to the immediate and elaborate online feedback that the instructor provided through the fall semester of 2008. The primary question underlying the study was, “What were students’ preferences and related reasons for an instructor’s feedback in the process of online assessment?” None of the participants disliked the way that the instructor provided feedback on their assignments. It was found that 66% of the students selected 5 (strongly preferred) while 34% chose 4 (preferred). None of the students selected 3 or below (see Table 1). This result demonstrates that all the participants were in favor of the way that the instructor communicated with them online in the process of assessment. With respect to the reasons explicating why they preferred the way that the feedback was given, two themes were identified as a result of the data analysis.

Table 1
Students’ preferences for the instructor’s feedback in the process of assessment

*Preference      5     4     3     2     1
Percentage      66    34     0     0     0

*5 denotes strongly preferred and 1 denotes the least preferred
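The percentages in Table 1 can be reproduced from raw response counts. The sketch below assumes counts of 19 and 10 for a class of 29; these counts are inferred from the reported percentages and are not figures from the article:

```python
from collections import Counter

# Assumed raw Likert responses: 19 students chose 5 and 10 chose 4,
# since 19/29 rounds to 66% and 10/29 rounds to 34%.
responses = [5] * 19 + [4] * 10

counts = Counter(responses)
n = len(responses)
# Percentage of respondents choosing each scale point, rounded to whole numbers.
percentages = {score: round(100 * counts.get(score, 0) / n)
               for score in (5, 4, 3, 2, 1)}
print(percentages)  # {5: 66, 4: 34, 3: 0, 2: 0, 1: 0}
```

Note that rounding each percentage independently is why the table’s figures sum to 100 here but need not in general.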

The two themes were Promptness and Helpfulness. No categories were found for the theme of Promptness, but several categories expounded the theme of Helpfulness: being essential, encouraging, stimulating thinking, reflecting, building a learning community, being personal, revising, enhancing knowledge and skills, and anytime and anywhere. This report also addresses negative cases and discrepant data as a separate theme—Confusing, but. All the students’ names used in this report are pseudonyms to protect their identities. The following provides detailed explanations.

Promptness

In response to why they felt the online feedback was helpful, 11 students particularly noted that the feedback was timely and quick. Because of the immediate feedback, the participants implied that their learning was greatly helped. Alisha wrote: “I felt it was helpful that Dr. Chang provided us with immediate feedback.” Mary and Nina chimed in:

  • There would be times when I’d email a paper and not even an hour later, I’d have feedback on it (Nina, Fall, 2008).
  • Email responses on Oncourse were given back promptly after the original email [after the instructor received my email] (Mary, Fall, 2008).

The fact that immediate feedback received a favorable vote from the participants was substantiated by Riffell and Sibley (2003). These researchers found from their survey study that even when feedback was given in a programmed standard form, the students felt that frequent and detailed hints (programmed feedback) were fundamental to significantly increasing their ability to learn. They reasoned that this was due to the fact that adequate feedback was not only helpful but also enabled them to understand the course materials. Song, Singleton, Hill, and Koh (2004) translated immediate feedback into immediacy: immediate feedback was a manifestation that the course instructor cared about student learning (Chang & Petersen, 2006). Riffell and Sibley further argued, based on the results of a survey study, that immediate feedback was tied to three components useful for effective learning: self-motivation, time management, and organization. In this sense, immediate feedback motivated students to learn and encouraged them to reexamine their ways of managing time and organizing their learning process.

Some students appreciated the effort the instructor made to send the feedback to them in a prompt fashion. Myliana wrote, “I think that feedback is great and I really think that you are very good about getting it back fast, that is really appreciated.” The students’ gratitude also alludes to the fact that delayed responses would result in varying levels of student frustration (Riffell & Sibley, 2003; Song et al., 2004). El Mansour and Mupinga (2007) confirmed this through the analysis of 34 online student surveys and found that without the quick feedback, many students would feel lost in cyberspace.

Helpfulness

All the participants were either strongly supportive or supportive of the way that the instructor provided feedback on their assignments submitted via Forum on Oncourse (see Table 1). Furthermore, nearly half of the students (14 students) specifically stated that the online feedback provided by the instructor was helpful. The participants’ specific rationales varied from student to student and are reported in several sub-categories as follows:

Being essential: Seven participants recognized that the feedback provided to them was essential, because “[it] is essential for us to improve [learning]” (Kim, 2008). Some students added,

  • It gives me a deeper understanding of the projects and helped me prepare for the next one (John, 2008).
  • It showed me the expectations for future assignments (Kathlyn, 2008).

The comments made by Kim, Kathlyn, and John positively supported the notion that detailed feedback was deemed useful, because the feedback assisted them in understanding why certain points or segments of their assignments were acceptable and why other perspectives were off track. With clear directions and support given by the instructor, the students felt confident in moving on to the next level.

Even though guidelines were spelled out for each assignment in the course syllabus, and even though the students were often reminded and encouraged to follow them carefully when completing their assignments, some students were still unclear about the expectations. Kathlyn (Fall, 2008) attributed this to a lack of time to read the guidelines owing to other time commitments. Kathlyn’s notion was echoed in a survey study conducted by Killian and Willhite (2003), which solicited students’ insights concerning online learning. It found that some non-traditional students with long commutes and multiple adult responsibilities commonly recognized that there was insufficient time for them to communicate with others online. This deficiency resulted in their dissatisfaction with online discourse in the preservice teacher preparation program. The circumstance, nevertheless, was, is, and will be pervasive on many commuter campuses. Coercing these students to read the guidelines would elicit little in the way of desirable learning outcomes. The “double dosage” tactic—encouraging the students to read the guidelines while offering detailed feedback to facilitate their learning—appears to be helpful for students’ learning, though further investigation is needed to corroborate this conclusion.

Encouraging: The instructor’s feedback worked as a propeller to “push” students to work harder and better, with assistance appropriate to their individual situations. Mary wrote, “The feedback was always positive, encouraging the success of the students, including myself.” Though only 13 words long, this sentence carries an in-depth meaning: Mary had evidently exchanged the idea with some, if not all, of her fellow classmates, and their resulting discussions on this topic attest to the impact the instructor’s feedback had on students’ learning success. The explanatory feedback explained why a student did a good job and/or why improvement was expected. Such an attentive approach to providing positive feedback was consistent with Sull’s (2008) perspective, which suggested that an instructor always heed word choices when providing feedback. After all, the purpose of feedback is to help improve students’ learning by assisting them in understanding why what they have done is up to or below expectations.

Stimulating thinking: The students recognized that the content of the feedback evoked their thinking. Anthony wrote, “Your comments make me think.” Michael went on to explain, “You make me think about what I observed and recorded. I [thus] included a lot of thoughts [in my writing].” In contemporary society, a fast-paced lifestyle is prevalent, leaving little room and time for people to think in depth about what they have encountered or experienced. Effective learning requires deepened thinking, because learned knowledge becomes one’s own only through the necessary thought processes. If there is no stimulus to stir up one’s thinking, one might simply receive information without digesting it, and only superficial knowledge is likely to take hold.

Does the way that feedback was offered on the students’ assignments provide evidence that students’ thinking was provoked, promoting their desire to extend their learning by including more in their writing or assignments? Garrison, Anderson, and Archer (2000) noted a positive link between written communication and a higher order of thinking. Assisted by the instructor’s explanatory feedback containing “good insights” (Casity, Fall, 2008) and providing “good ideas” (Michelle, Fall, 2008), students gradually learn how to think, as the dialogical communication is domain-specific and context-dependent; it directs students to focus on what to think about (Garrison et al., 2000). “Explanatory feedback becomes crucial when one’s ideas are being constructively but critically assessed” (Garrison et al., 2000, p. 25). This is a strategy to cultivate students’ tendency to question obtained information, rather than to simply translate it into words without thinking and reflection. This is a “knock-on effect,” supported by such an e-course instructor’s guidance (Hall, 2002, p. 157).

Reflecting: Some students decided, for varying reasons, not to revise some or any of their assignments. Such decisions, nonetheless, are not equivalent to abandoning the online feedback. In fact, these students still read the feedback and found it meaningful and helpful. Synthia wrote, “Due to the many assignments, I did not revise much of my work, but was happy with my grade and reflected after the comments.” Synthia’s words convey that the feedback did encourage her to think about her learning experience, thus influencing her performance; it was a helpful way for her to gain knowledge and skills. This is consistent with Garrison et al.’s (2000) notion that critical discourses are fundamental to successful attainment of knowledge and are exercised through one’s own reflection on performance. Re-examining what has been done is a process one must undertake to scrutinize all pertinent aspects for improvement.

Cultivating pre-service teachers to become reflective practitioners is strongly expected by the Interstate New Teacher Assessment and Support Consortium (INTASC, http://www.wresa.org/Pbl/The INTASC Standards overheads.htm). Standard 9 is about “Reflective practice: Professional development.” Specifically, it states, “The teacher is a reflective practitioner who continually evaluates the effects of his or her choices . . .”

Building a Learning Community: Emily commented, “I think that the communication and feedback creates a community within the classroom.” In the instructor’s feedback, the students were reminded and encouraged to read one another’s work and to comment on it afterwards. This expectation worked as an additional avenue for the students to communicate with one another outside class, enhancing their understanding of course-related materials and establishing ties to one another in and out of school. A dynamic atmosphere conducive to learning was initiated in this unique manner. When learners are left alone to work with computers, all visual body gestures are absent; the instructor’s feedback, in a sense, could favorably affect learning by making the instructor’s presence visible (Chang, 2009; Chang & Petersen, 2006). The instructor’s feedback is regarded by students as supportive of their learning (Lim & Cheah, 2003), which parallels Garrison et al.’s (2000) “social presence.” Social communication via a course management system is one of the essential means of bringing about dynamic interactivity with the guidance of a course instructor, which is beneficial to student learning in an online learning environment.

Being Personal: Students stated that personal feedback was helpful. Christina shared, “I really enjoyed getting personal feedback from you on my papers.” The feedback was perceived as personal because it targeted each individual’s paper with specific comments and notes rather than treating all papers with a one-size-fits-all approach. Christina explained her view this way: “I think by having you give feedback online, it provides individualized instruction that could not be accomplished in the classroom.” In a traditional face-to-face classroom setting, it is hard for an instructor to provide feedback specific to every student’s concrete learning status. Generalizing how the assignments were done appears to be the ordinary course for an instructor in a group face-to-face setting. An instructor might announce, “You all did well on this assignment,” or “I am proud of you for doing such a nice job.” Yet such generalized statements raise questions. Had all the students achieved a level high enough to deserve such praise? Had all the students made similar progress in uniform fashion? What would the students make of this general statement, knowing that some of them did not do well at all? What would those students feel and think of biased or untruthful praise? Moreover, as addressed earlier, even if an instructor grades a student’s paper with a few simplified comments here and there, these comments might temporarily perk up a student’s happy or displeased emotions, but they can hardly make explicit what is needed to help the learner discern the rationale behind the marks or remarks.
To address these inadequacies, detailed comments compatible with individual learning levels throughout the assessment process are one of the approaches that a course instructor should take into account and exercise (Chang, 2009), and one that is significant in students’ high-level knowledge building (Garrison et al., 2000).

Revising: The instructor’s feedback was conducive to students’ reworking of their assignments. Cheryl commented, “It [feedback allows] me to revise my work individually and [to] strive to perfect my papers.” Lena agreed, saying the feedback was helpful because “your online feedback was very clear and was helpful for me to correct my paper.” It is clear that explanatory feedback is helpful and useful because it supplies students with direction for improvement. The students reasoned that useful feedback was feedback they could clearly follow when revising their work. Furthermore, such feedback enabled them to deepen their knowledge through the revision process (Chang, 2007; Hall, 2002), as “[a]llowing the student to rework and resubmit an answer is important in the learning process” (Siew, 2003, p. 46). Although the participants’ wording, such as “helpful for me to correct my paper,” might seem to indicate an aim solely to correct papers, a close analysis of the students’ remarks points to the notion that the students had to re-examine and ruminate on the areas for improvement in order to achieve the expected conceptual understanding through the process of revision.

The explanatory feedback given by the instructor could also largely prevent learners’ unnecessary frustration and intimidation. There might be a gap between the developer of the guidelines and a user concerning how to interpret them. It could well be that the guidelines are crystal clear to the developer but confusing to a user. To abate this incongruity and to facilitate student learning, the instructor needs to explicate and reinterpret the guidelines for the user in the process of reviewing the student’s work. This type of individualized assistance and instruction was favored by the participants, e.g., “It showed me the expectations for future assignments” (Clare, fall, 2008). Garrison et al. (2000) posited that the instructor’s active intervention is a way to identify students’ misconceptions and to assist them in constructing deep levels of knowledge. It enables the instructor to remove barriers to students’ successful learning (Chang & Petersen, 2006).

Enhancing knowledge and skills: Prior to entering the teacher education preparation program, the majority of students were familiar with the MLA style (citation guidelines of the Modern Language Association), although some had been exposed to the APA style (citation guidelines of the American Psychological Association) to varying degrees. In Block One (the first semester after students are admitted into the teacher education preparation program), the APA style is the primary expectation for citations. This requirement often imposes difficulty on student learning. The explanatory feedback assisted students: “It [Feedback] helps me with the APA style” (Synthia, Fall, 2008). Even though examples and instruction on APA citation were accessible to the students online, as with the assignment guidelines, the students seemed to feel that deepening their understanding of the APA style with the assistance of the instructor’s feedback was most helpful.

Such a notion was further substantiated by a student’s voice that the instructor’s feedback was inextricable from student learning. As indicated earlier, even though several lectures, class discussions, and group practices took place prior to the students’ development of lesson plans in formal assignments, it was the explanatory feedback that enabled the students to develop a clear understanding of lesson planning. Sherry noted, “. . . [feedback] helped me learn the format of the lesson plan.” Lesson plan development is construed as one of the most difficult tasks for some education students. Students are expected to master numerous crucial aspects of lesson planning in order to execute it successfully. Feedback suited to the students’ varying levels of learning scaffolds their understanding of the seemingly complicated aspects of a lesson plan format.

All the aforementioned findings echo the study conducted by Jelfs, Nathan, and Barrett (2004) regarding when, how, and what external help students used. These researchers argued that students expected external help from a course instructor, help that would mostly derive from the instructor’s diagnostic and constructive assessment. Evaluating and diagnosing student work is a way to scaffold student learning.

Anytime and anywhere: Some students perceived the provision of feedback as helpful because corresponding with the course instructor was independent of location and time. Provided a student has Internet access, retrieving and responding to the instructor’s feedback is unconstrained. Becky wrote, “I could get the feedback when I was at home, in class, or other places that have the Internet.” Such ubiquity of course management systems is by no means a new topic in instructional technology. However, feedback that is downloadable from the Internet at a time best suited to a student’s own schedule is of great significance to discuss. When a student is ready to retrieve the feedback from a course instructor, it may also be the time when the student is mentally prepared to read, reflect, and revise the task at hand. Communicating with a course instructor can also be genuine and effective when the learner gives it full concentration. This approach to gaining and deepening knowledge might be effective because the student can be fully in earnest. In contrast, when feedback is handed back to students before, during, and/or after a face-to-face meeting, the students might, at best, give it a cursory glance. Some related questions might arise at the time the student views the feedback, but those questions might not survive if the students’ schedules are “hectic” and their imminent obligations lie elsewhere than seeking answers from the professor present in class at that moment. Hall (2002) pointed out that students attending regularly scheduled on-campus meetings once or twice a week might have a limited vision of study. It may be the students’ false perception that learning takes place only a day or two before or after the scheduled class meetings.
With respect to communication, students may not be able to have frequent dialogues with their professors due to the limitation of weekly face-to-face meetings. Online communication breaks this pattern and allows unconstrained access to materials helpful and useful to student learning.

Confusing, but

Though no student marked 3 or below on the 5-point Likert scale, a couple of students were concerned about the clarity of the feedback. These concerns fell into two categories:

  1. pointing out the insufficiency of the feedback while simultaneously acknowledging its usefulness. For example, Brian wrote, “At times, feedback was difficult to follow, but overall I found it very helpful.”
  2. expressing negative feelings toward the instructor’s feedback. For example, Terri wrote, “Sometimes it is difficult to understand her comments. They don’t make any sense to me.”

These two students’ viewpoints on how the feedback was provided stand in stark contrast to those of many pre-service teachers in this study, such as Tyler, who commented, “[I] was able to really see comments well. [The feedback] made it easier [for me] to revise [my work].” Although small, the discrepancy is nonetheless worthy of the instructor’s attention. Online communication depends largely on text. The paradigm of teaching and learning has shifted from listening and speaking to reading and writing. Those who are not accustomed to learning primarily through reading and writing will very likely face a steep learning curve. This level of discomfort was reported by Becky (fall, 2008): “It was a little hard to get comfortable with the comments on the word documents.” While these participants had taken a prerequisite technology course prior to the course under study, the major learning tasks of that course essentially comprised technology know-how skills. Rarely did these students have direct experience communicating with others or a professor in a way similar to that expected by this course. This fundamental change from listening and speaking to reading and writing is challenging and consequently causes discomfort to some students. Another reason for the confusion might be that the course under study was one of the first courses the participants undertook in the teacher education preparation program. Immersed in this learning process, new terminology, jargon, or concepts might become temporary barriers to comprehension. An initiative taken by a student to request clarification from the professor could well be one way to resolve this problem, as one of the students pointed out: “If I was unclear of her feedback meaning, I would e-mail my question.
She was very quick to respond and helpful in clarifying.” Regrettably, the instructor did not receive many such email queries. This could be caused by a lack of time on the students’ part or by unfamiliarity with this novel way of learning. Facing this circumstance, the instructor might need to modify the means currently used to communicate with students via the text-based medium in order to assist learners who have much on their plates, who have weak self-regulation, self-management, and self-organization skills, or who are intimidated by this new modality of learning.

Special effort also needs to be made to seek appropriate approaches to interacting and dialoguing with students with special needs. Terri, who wrote the second comment (see (2) above), had a learning disability. Although the instructor believed that considerable electronic assistance had been rendered to Terri over the course of the fall semester of 2008, it evidently led to an undesired outcome. Terri’s frequent absences from class might also have adversely affected her learning. At any rate, helping students with special needs to thrive in class has led the instructor to suggest a future research effort. Essentially, a content analysis comparing and contrasting the course instructor’s feedback to those who deemed the feedback beneficial and helpful with the feedback to those who held different opinions could be useful. Wanstreet (2007) found, after an ample literature review, that there was not much emphasis on how an instructor would know what a learner knows and can do, and what the learner needs to know and do. The results of such research may inform the field as to how to assist diverse learners in reaching their learning goals successfully.

Conclusion

This study was designed to explore pre-service teachers’ perceptions of the immediate and elaborate feedback that the instructor provided during the fall semester of 2008, as well as the rationales behind those perceptions. The students supported the way the instructor provided feedback on their homework. The rationales for their strong preferences involved the following: the instructor’s feedback was prompt, confirmed the expectations of the assignments, stimulated their thinking, and encouraged their reflections upon their work and observations. The instructor’s feedback was also interpreted by the students as personalized and individualized instruction, as it was tailored to their own needs and learning levels to advance their understanding. A couple of students were unable to follow the instructor’s feedback, which indicates the need for further improvement to arrive at satisfactory learning outcomes. All in all, this ideology of personalized coaching was consistent with the three presences identified by Garrison et al. (2000), namely social presence, cognitive presence, and teaching presence. Vygotsky’s (1978) theory of the zone of proximal development (ZPD) likewise points to the necessity that learning take place in a social context.

The fact that the instructor analyzed each individual’s work through the assessment process represents teaching presence. Teaching presence is also embodied in the dialogical communication that lends itself to students’ heightened understanding. Cognitive presence occurs when the instructor’s comments have positive effects on students’ levels of understanding and when the students are earnestly engaged in the revision process.

Future Research

Future research is needed to corroborate the present results with a more diverse and comprehensive sample. Additionally, although online teaching and learning is still in its infancy, a growing number of research studies examine this novel way of teaching and learning. However, there is a scarcity of literature addressing feedback to student learning (Gallien & Oomen-Early, 2008; Mason & Bruning, 2003). Therefore, effort should be made to converge on questions such as, “How can instructors interact with online learners in this novel teaching and learning environment so that students are apt to self-regulate their own learning?” More understanding is needed of how, what, and when automatic, machine-generated feedback and/or individually tailored feedback is suitably employed to accurately, authentically, and fairly assess and facilitate student learning. Further investigation should also seek ways to encourage students to ask questions freely without feeling intimidated in an online learning environment. Howland and Moore (2002) found that some students lacked initiative in asking questions online because “it was hard for me to compose a question in writing that didn’t sound rude or silly” (a student comment in Howland & Moore, p. 191). Furthermore, according to Wanstreet’s (2007) extensive literature review, social connection has been frequently addressed, whereas psychological connection has been underrepresented with respect to e-classroom instruction. Future research, in this sense, should focus on how to successfully and effectively promote students’ affective involvement in learning.
Lastly, considering that the participants were mostly seniors, young (ages ranging from 21 to 24), and somewhat academically advanced (grade point averages around 3.0), future research might specifically investigate the relationships between these variables and students’ respective preferences for the way personalized feedback is provided. Could those factors affect the outcomes of the study?

Educational Implications

This new modality might lessen the burden on an instructor, as a large chunk of work (grading papers from every student in the class all at once) can be broken into smaller, more manageable pieces. The short turnaround between receiving and returning work also promotes effective learning, because concepts just learned may still be fresh in students’ minds, which is conducive to deepened understanding. Students using this new method of online submission and this new way of interacting with a course instructor might learn how to work with computers to assist their learning. They may also learn how to organize and regulate their time more productively. While this is a rewarding and worthwhile effort, a course instructor might need to be flexible, allowing students to submit their work anytime prior to the deadline. To this end, an instructor ought to find a way to help students change their mindset by fully taking advantage of e-communication that is available 24/7.

To provide students with immediate and elaborate feedback requires a course instructor to make a large commitment, as this undertaking is time consuming. An instructor has to write detailed comments on different segments of a student’s paper, and must do so for every student’s paper. This commitment provides scaffolding for students’ learning, because some students do not yet possess the skills to communicate in a text-based context (Jelfs & Colbourn, 2002). Some have not yet been exposed to the experiences necessary for success in higher education, nor have they learned self-management skills. These students tend to depend on external assistance (Li, Lee, & Kember, 2000). Dialogues between an instructor and a learner in a constructivist manner, whereby the student learns how to construct knowledge with the use of computer technology, are consistent with Vygotsky’s (1978) notion of the zone of proximal development (ZPD). Within the ZPD, scaffolding is “an activity in which teachers or more experienced learners provide support and guidance” to the learner (in Jelfs et al., 2004, p. 87). Providing assisted learning fosters “independent and non-assisted learning” competency. Scaffolding strategies appropriate to the needs of the students and responsive to learners might help them move closer and closer to this new way of learning. This type of interaction might also establish rapport between the instructor and student.

Limitation

The participants in the study were largely first-generation college students. As this was a survey-based study, the data were drawn entirely from the participants’ insights, and participants might not have completely recorded their responses. The sample was small and drawn from a single Midwestern university, so generalization of the findings should be made with caution. However, the findings are provocative and may help interested e-instructors. Those who have recently begun a similar teaching adventure may see similarities between their own classroom situations and the context described in this study, and the findings could help them seek innovative ways to reach out to their students in an individualized manner to facilitate learning.

References

Billings, D. (2000). Framework for assessing outcomes and practices in web-based courses in nursing. Journal of Nursing Education, 39(2), 60-67.

Bonnels, W. (2008). Improving feedback to students in online courses. Nursing Education Perspectives. FindArticles.com. Retrieved Dec. 14, 2008, from http://findarticles.com/p/articles/mi_hb3317/is_5_29/ai_n29476348

Chang, N. (2009). Significance and uniqueness of personalized e-coaching. In P. Roger, G. Berg, J. Boettcher, C. Howard, L. Justice, & K. Schenk (Eds.), Encyclopedia of Distance Learning. Hershey, New York: Information Science Reference.

Chang, N., & Petersen, N. J. (2006). Cybercoaching: An emerging model of personalized online assessment. In D. D. Williams, S. L. Howell, & M. Hricko (Eds.), Online assessment, measurement, and evaluation: Emerging practices (pp. 110-130). Hershey, PA: Idea Group Inc.

El Mansour, B., & Mupinga, D. M. (2007). Students’ positive and negative experiences in hybrid and online classes. College Student Journal, 41(1), 242-248.

Gallien, T., & Oomen-Early, J. (2008). Personalized versus collective instructor feedback in the online courseroom: Does type of feedback affect student satisfaction, academic performance and perceived connectedness with the instructor? International Journal on E-Learning, 7(3), 463-476.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. Internet and Higher Education, 2(2-3), 87-105.

Hall, R. (2002). Aligning learning, teaching and assessment using the Web: An evaluation of pedagogic approaches. British Journal of Educational Technology, 33(2), 149–158.

Howland, J. L., & Moore, J. L. (2002). Student perceptions as distance learners in Internet-based courses. Distance Education, 23(2), 183-195.

Jelfs, A., & Colbourn, C. (2002). Do students’ approaches to learning affect their perceptions of using computing and information technology? Journal of Educational Media, 27, 41-53.

Jelfs, A., Nathan, R., & Barrett, C. (2004). Scaffolding students: Suggestions on how to equip students with the necessary study skills for studying in a blended learning environment. Journal of Educational Media, 29(2), 86-96.

Killian, J. & Willhite, G. L. (2003). Electronic discourse in preservice teacher preparation. Journal of Technology and Teacher Education, 11(3), 377-396.

Li, N., Lee, L., & Kember, D. (2000). Towards self-direction in study methods: The ways in which new students learn to study part-time. Distance Education, 21, 6-28.

Lim, C. P., & Cheah, P. T. (2003). The role of the tutor in asynchronous discussion boards: A case study of a pre-service teacher course. Education Media International. Retrieved November 20, 2005, from www.tandf.co.uk/journals/routledge/09523987.html

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Mason, B., & Bruning, R. (2003). Providing feedback in computer-based instruction: What the research tells us. Retrieved July 21, 2007, from http://dwb.unl.edu/Edit/MB/MasonBruning.html

Morgan, V. L., & Toledo, C. A. (2006). Online feedback and student perceptions. Journal of Interactive Online Learning, 5(3). Retrieved July 4, 2007, from http://www.ncolr.org/jiol

Northcote, M. (2002). Online assessment: Friend, foe or fix? British Journal of Educational Technology, 33(5), 623-626.

Peat, M., & Franklin, S. (2002). Supporting student learning: The use of computer–based formative assessment modules. British Journal of Educational Technology, 33(5), 515-523.

Public Broadcasting System. (2007). Media literacy glossary. Retrieved December 14, 2008, from http://wneo.org/media/glossary.htm

Riffell, S. K., & Sibley, D. H. (2003). Learning online: Student perceptions of a hybrid learning format. Journal of College Science Teaching, 32(6), 394-399.

Siew, P. F. (2003).  Flexible on-line assessment and feedback for teaching linear algebra. International Journal of Mathematical Education in Science & Technology, 34(1), 43-52.

Song, L., Singleton, E. S., Hill, J. R., & Koh, M. H. (2004). Improving online learning: Student perceptions of useful and challenging characteristics. The Internet and Higher Education, 7(1), 59-70.

Sull, E.C. (2008, February). Giving positive feedback online-Even when it’s negative. Online Classroom 6.

Thurmond, V. A. (2003). Examination of interaction variables as predictors of students' satisfaction and willingness to enroll in future Web-based courses while controlling for student characteristics. Published Dissertation. University of Kansas. Parkland, FL: Dissertation.com. Available online http://www.dissertation.com/library/1121814a.htm

Thurmond, V., & Wambach, K. (2004). Understanding interactions in distance education: A review of the literature. International Journal of Instructional Technology and Distance Learning. Retrieved December 14, 2008, from http://itdl.org/journal/jan_04/article02.htm.

Vasilyeva, E., De Bra, P., Pechenizkiy, M., & Puuronen, S. (2008). Tailoring feedback in online assessment: Influence of learning styles on the feedback preferences and elaborated feedback effectiveness. Paper presented at the Eighth IEEE International Conference, 1-5, 834-838.

Vella, J. (2002). Learning to listen, learning to teach: The power of dialogue in educating adults (Rev. ed.). San Francisco: Jossey-Bass.

Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wanstreet, C. (2007). Interaction in online learning environments: A review of the literature. The Quarterly Review of Distance Education, 7(4), 399-411.

About the Author

Dr. Ni Chang is Associate Professor of Education at Indiana University South Bend. She received her master’s and doctoral degrees from Peabody College of Vanderbilt University. She has over 12 years of web-based and hybrid teaching experiences. These experiences have been researched and transformed into numerous conference presentations, book chapters, and refereed journal articles.

Email: nchang@iusb.edu
