Editor’s Note: New media are often criticized for perceived weaknesses. When those shortcomings are overcome, the result is a superior learning tool. The virtual classroom was initially criticized for isolating students and professors; tools for community building and interactivity now favor distance learning for many subjects. The virtual classroom and asynchronous activities enable mid-career professionals to integrate meaningful learning experiences into their busy schedules; they also open up college programs to people anywhere who are unable to attend on-campus programs. Instructional designers, teachers, and administrators are eager to optimize their contributions to this new learning environment, and they have great interest in what research can tell us from the point of view of instructors and students.
Respondents rated the benefit of the following Virtual Classroom features:
- Viewing slide presentations posted by the instructor
- Using the whiteboard tools in class
- Reading messages from members in text-based chat
- Posting or replying to a message in text-based chat
- Interacting privately using text-based chat
- Talking to others using the audio chat option
- Asking the moderator questions by raising my hand
- Using the polling feature to respond to questions
- Using emoticons and other activity indicators
- Viewing archived virtual classroom sessions
- Viewing the desktop shared by other participants
- Using the breakout room in a virtual class session
- Viewing websites loaded within a session
- Being able to moderate a virtual class session
The respondents rated the features using a four-point Likert scale (4=strongly agree, 3=agree, 2=disagree, and 1=strongly disagree). Their average responses ranged from 2.49 to 3.09, indicating a fairly positive view of this learning environment. The ability to view the instructor’s slide presentations (M=3.09) and to view a shared desktop (M=3.09) were rated as more beneficial than the other features. In contrast, one-to-one private chats (M=2.49) and the breakout rooms during VC sessions (M=2.51) were rated the least beneficial. The 14 items pertaining to the features of the Virtual Classroom had a Cronbach’s alpha of .92.
It was expected that each item-total correlation would meet the set criterion (r > .30) (Ferketich, 1991). Item-total correlations for the interactivity scale revealed that only one item, “my typing hindered me,” fell below the .30 threshold. Similarly, one item on the synchrony scale, “the class was monotonous,” had a correlation below .30. The usefulness and ease of use scale and the sense of community scale each also had one item-total correlation below the criterion (r=.21 and r=.28, respectively).
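The item analysis described above can be sketched as follows. The ratings matrix is fabricated for illustration (the study's raw data are not reproduced here), and the .30 cutoff follows Ferketich (1991):

```python
import numpy as np

rng = np.random.default_rng(0)
# 40 hypothetical respondents x 6 items, 4-point Likert ratings
ratings = rng.integers(1, 5, size=(40, 6)).astype(float)

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items ratings matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    out = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        out.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(out)

alpha = cronbach_alpha(ratings)
r_it = corrected_item_total(ratings)
# Items below Ferketich's r > .30 criterion are flagged for review
flagged = [j for j, r in enumerate(r_it) if r < 0.30]
```

Using the sum of the *remaining* items (rather than the full total) avoids inflating each item's correlation with a total that includes itself.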
Items by scale ((R) indicates a reverse-scored item; the accompanying “Scale Mean if Item Deleted” statistics are not reproduced in this excerpt):

Interactivity
- Facilitated instructor-to-student interaction
- Facilitated student-to-student interaction
- The quality of class discussions was high
- I learned from my fellow students in this class
- Instructor frequently attempted to elicit student interaction
- My typing hindered me (R)
- It was easy to follow class discussions
- I could not talk freely because I could not see my classmates face to face

Synchrony
- It reduced my travel time to the campus to attend
- It reduced my travel cost
- It helped me collaborate with peers without having
- I had bandwidth limitations
- I had technical problems
- The class was monotonous

Usefulness and Ease of Use
- It enhanced my effectiveness
- It improved my performance
- It was easy for me to become skillful in using VC
- I found it easy to get the virtual classroom to do what I want it to do
- I was not confident using the VC (R)

Sense of Community
- I felt isolated
- There were not many collaborative activities
- I did not feel a sense of belonging in the classroom
- I worked on my own for most of the projects
While item analysis focuses on the individual items in a composite instrument (Ferketich, 1991), it is equally important to consider the composite scores. Scales were computed by summing the ratings for each subset of items representing a characteristic. Since the number of items varies across characteristics, the original values were converted to 0-100 scales so that the means and standard deviations of the scales could be compared with one another. Usefulness and ease of use had the highest average rating (M=70.5), followed by sense of community (M=67.5), interactivity (M=65.3), and synchrony (M=63.3). All the scales yielded a Cronbach’s alpha between .70 and .77. See Table 3.
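The 0-100 conversion can be sketched as a simple linear rescaling of each summed scale score, assuming 4-point items summed per scale (the item count below is illustrative, not the study's):

```python
def to_percent_of_range(total, n_items, low=1, high=4):
    """Linearly rescale a summed Likert score to a 0-100 scale.

    total:   sum of a respondent's item ratings for one scale
    n_items: number of items in that scale
    low/high: minimum and maximum possible rating per item
    """
    min_sum = n_items * low
    max_sum = n_items * high
    return 100 * (total - min_sum) / (max_sum - min_sum)

# For a hypothetical 9-item scale of 1-4 ratings:
# all 1s (sum 9) maps to 0, all 4s (sum 36) maps to 100
assert to_percent_of_range(9, 9) == 0
assert to_percent_of_range(36, 9) == 100
```

Because scales with different item counts map onto the same 0-100 range, their means and standard deviations become directly comparable.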
Table 3. Means (SD), 95% CI, and Cronbach’s alpha for each scale, converted to 0-100 (values not reproduced in this excerpt).
SD, standard deviation; CI, confidence interval; Alpha, Cronbach’s alpha coefficient; r, correlation coefficient.
Although the VC is growing in popularity (Arbaugh, 2000; Flatley, 2007), there are few studies of the Virtual Classroom across different populations and contexts (Arbaugh, 2000; Parker & Martin, 2010). The data from this study provide information on the validity and reliability of the Virtual Classroom Instrument (VCI), which was designed to measure students’ perceptions of the features and characteristics of this e-learning environment.
Additional improvements may strengthen the VCI further. For instance, eliminating certain items, using the criteria set for item analysis, would increase the reliability estimates of the respective scales. Two examples illustrate this point. For the interactivity scale, deleting the item “I could not talk freely because I could not see my classmates face to face” increases the reliability coefficient from .70 to .83. This item may be problematic both conceptually and in its wording. Conceptually, the item may be less about interaction and more about introversion/extroversion or individual preferences for instruction. In terms of wording, the item combines two negative statements. The reliability of the synchrony scale (α=.70) increases to .75 by removing the last item, “the class was monotonous.” A class that is boring or mundane is different from instruction that is delivered simultaneously to a group of individuals.
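The “alpha if item deleted” check behind these examples can be sketched by recomputing Cronbach’s alpha with each item removed in turn. The ratings below are fabricated for illustration; the reliability gains reported above (.70 to .83, .70 to .75) come from the study’s own data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items ratings matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items):
    """Alpha for each reduced scale formed by dropping one item."""
    return np.array([cronbach_alpha(np.delete(items, j, axis=1))
                     for j in range(items.shape[1])])

rng = np.random.default_rng(7)
ratings = rng.integers(1, 5, size=(50, 8)).astype(float)  # hypothetical data
alphas = alpha_if_deleted(ratings)
# An item whose removal raises alpha above the full-scale value is a
# candidate for deletion, as with the interactivity item discussed above.
candidates = np.where(alphas > cronbach_alpha(ratings))[0]
```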
The sense of community scale contained one item that fell outside the criterion (r > .30) recommended by Ferketich (1991). We suggest reconsidering the item “I worked on my own for most of the projects” (r=.28). On the surface this item seems related to community; on closer scrutiny it could also reflect individual work habits or preferences rather than the formation of community. Another plausible explanation for the low correlation pertains to the class in which this study took place. In the instructional technology class, students were required to submit projects individually, which runs counter to developing a sense of community. Future studies should consider the nature of the class as it relates to this domain. Regardless of the explanation, items within the sense of community scale may need to be replaced or modified before subsequent use of the instrument and then retested for reliability and validity.
Newly crafted items will need to adhere to instrument-construction guidelines. As such, they should avoid double negatives like the item suggested for deletion from the interactivity scale; statements like this often confuse respondents and can increase measurement error (Dillman, 2000). Other researchers may want to add questions relevant to their studies, such as prior online course enrollment, type of delivery method (fully online, hybrid, etc.), frequency of VC use, and students’ familiarity with other forms of technology, to see how these variables correspond with students’ perceptions of the VC.
Both face validity and content validity are limited ways of ascertaining whether a measurement tool is valid; other types of validity may provide stronger evidence. For example, instruments that address the same constructs can be administered to respondents to establish convergent validity. Alternatively, construct validity can be assessed by conducting factor analyses. While this study did not have an adequate sample size for that purpose, promoting the use of this instrument in future studies can lead to its use with larger samples, at which point this form of validation can occur. Burns and Grove (2001) recommend a minimum of 10-15 participants per item. With the 23 items representing the VC characteristics in this study, the sample should therefore consist of at least 230 respondents (10 x 23).
The researchers acknowledge that this study’s findings are not generalizable, given the nature and size of the sample. However, due to the novelty of the Virtual Classroom, obtaining larger samples is difficult. Ferketich (1991) acknowledges the difficulty of finding 200-300 subjects for item analysis when an instrument is designed for rare populations; in clinical populations, item analysis is usually conducted with far fewer subjects. While one-source and social desirability response biases may be present, these are inherent in survey research used in this capacity (Boardman & Sundquist, 2009). Because this study lacked the sample size for a factor analysis that would validate the constructs on the VCI, the authors sought experts to help determine face and content validity. Although the items were judged valid, a few appear problematic; these have been identified and can be addressed in future iterations of the instrument.
Despite these limitations, the data can be used by instructors, researchers, and practitioners who are interested in students’ perceptions of the Virtual Classroom. This study provides evidence of the validity and acceptable reliability of the VCI for measuring students’ perceptions of the VC. However, several items need more testing to further establish the instrument’s reliability and validity, and improving the existing instrument will strengthen the quality of data on the VC. In the future, researchers who collect data from larger samples may examine whether demographic characteristics such as sex, age, and previous online course enrollment reveal significant differences in perceptions of the features, characteristics, or other aspects of the VC. Other research directions include more cross-disciplinary studies of the VC and studies that investigate course outcomes.
Allen, I. E., & Seaman, J. (2009). Learning on demand: Online education in the United States. Retrieved from http://sloanconsortium.org/publications/survey/pdf/learningondemand.pdf
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2004). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.
Arbaugh, J. B. (2000). Virtual classroom characteristics and student satisfaction with online MBA courses. Journal of Management Education, 24(1), 32-54. doi:10.1177/105256290002400104
Ardichvili, A. (2008). Learning and knowledge sharing in virtual communities of practice: Motivators, barriers, and enablers. Advances in Developing Human Resources, 10(4), 541-554. doi:10.1177/1523422308319536
Bielman, V. A., Putney, L. G., & Strudler, N. (2003). Constructing community in a postsecondary virtual classroom. Journal of Educational Computing Research, 29(1), 119-144.
Boardman, C. & Sundquist, E. (2009). Toward understanding work motivation: Worker attitudes and the perception of effective public service. The American Review of Public Administration, 39(5), 519-535.
Burns, N. & Grove, S. K. (2001). The practice of nursing research conduct, critique, and utilisation (4th ed). Philadelphia, PA: W. B. Saunders Co.
Cameron, B. A., Morgan, K., Williams, K. C., & Kostelecky, K. L. (2009). Group projects: Student perceptions of the relationship between social tasks and a sense of community in online group work. American Journal of Distance Education, 23(1), 20-33. doi:10.1080/08923640802664466
Carmines, E. G. & Zeller, R. A. (1979). Reliability and validity assessment. Thousand Oaks, CA: Sage.
Clapper, D. C., & Harris, L. L. (2008). Reliability and validity of an instrument to describe burnout among collegiate athletic trainers. Journal of Athletic Training, 43(1), 62-69.
Clark, D. N., & Gibb, J. L. (2006). Virtual team learning: An introductory study team exercise. Journal of Management Education, 30(6), 765-787. doi:10.1177/1052562906287969
Clark, R. (2005, May). Four steps to effective virtual classroom teaching. Learning Solutions Magazine. Retrieved from http://www.learningsolutionsmag.com/articles/266/four-steps-to-effective-virtual-classroom-training
Constantinos, E. R., & Papadakis, S. (2009). Using LAMS to facilitate an effective synchronous virtual classroom in the teaching of algorithms to undergraduate students. Paper presented at the 2009 European LAMS and Learning Design Conference.
Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. New York: John Wiley & Sons, Inc.
Dineen, B. R. (2005). Teamxchange: A team project experience involving virtual teams and fluid team membership. Journal of Management Education, 29(4), 593-616. doi:10.1177/1052562905276275
DuFrene, D. D., Lehman, C. M., Kellermanns, F. W., & Pearson, R. A. (2009). Do business communication technology tools meet learner needs? Business Communication Quarterly, 72(2), 146-162.
Falvo, D. A., & Solloway, S. (2004). Constructing community in a graduate course about teaching with technology. TechTrends: Linking Research & Practice to Improve Learning, 48(5), 56-85.
Ferketich, S. (1991). Focus on psychometrics. Aspects of item analysis. Research in Nursing & Health, 14, 165-168.
Flatley, M. E. (2007). Teaching the virtual presentation. Business Communication Quarterly, 70(3), 301-305. doi:10.1177/1080569907305305
Gilmore, S., & Warren, S. (2007). Themed article: Emotion online: Experiences of teaching in a virtual learning environment. Human Relations, 60(4), 581-608. doi:10.1177/0018726707078351
Ioannou, A. (2008). Development and initial validation of a satisfaction scale on diversity. Paper presented at the annual meeting of American Educational Research Association, New York, NY.
Keefe, T. J. (2003). Using technology to enhance a course: The importance of interaction. EDUCAUSE Quarterly, 1, 24–34.
Kirkpatrick, G. (2005). Online 'chat' facilities as pedagogic tools: A case study. Active Learning in Higher Education, 6(2), 145-159. doi:10.1177/1469787405054239
Knupfer, N. N., Gram, T. E., & Larsen, E. Z. (1997). Participant analysis of a multiclass, multi-state, on-line, discussion list. Retrieved from ERIC Database (ED 409845)
Lee, D., & Kang, S. (2005). Perceived usefulness and outcomes of intranet-based learning (IBL): Developing asynchronous knowledge management systems in organizational settings. Journal of Instructional Psychology, 32(1), 68-73.
Lee, H., & Rha, I. (2009). Influence of Structure and Interaction on Student Achievement and Satisfaction in Web-Based Distance Learning. Educational Technology & Society, 12(4), 372-382.
Liaw, S., Huang, H., & Chen, G. (2007). Surveying instructor and learner attitudes toward e-learning. Computers & Education, 49(4), 1066-1080.
Ludlow, L. H., Enterline, S. E., & Cochran-Smith, M. (2008). Learning to teach for Social Justice Beliefs Scale: An application of Rasch measurement principles. Measurement and Evaluation in Counseling and Development, 40(4), 194-214.
Markman, K. M. (2009). So what shall we talk about: Openings and closings in chat-based virtual meetings. Journal of Business Communication, 46(1), 150-170. doi: 10.1177/0021943608325751
Mikulecky, L. (1998). Diversity, discussion, and participation: Comparing Web-based and campus-based adolescent literature classes. Journal of Adolescent & Adult Literacy, 42(2), 84–97.
McBrien, J. L., Jones, P., & Cheng, R. (2009). Virtual spaces: Employing a synchronous online classroom to facilitate student engagement in online learning. International Review of Research in Open and Distance Learning, 10(3). Retrieved from http://0-search.ebscohost.com.uncclc.coast.uncwil.edu/login.aspx?direct=true&db=eric&AN=EJ847763&site=ehost-live
McLure Wasko, M., & Faraj, S. (2005). Why should I share? Examining social capital and knowledge contribution in electronic networks of practice. MIS Quarterly, 29(1), 35-57.
Muirhead, B. (2004). Encouraging interactivity in online classes. International Journal of Instructional Technology and Distance Learning, Retrieved from http://itdl.org/Journal/Jun_04/article07.htm
Ng, K. C. & Murphy, D. (2005). Evaluating interactivity and learning in computer conferencing using content analysis techniques. Distance Education, 26(1), 89-109.
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
Parker, M. A., & Martin, F. (2010). Using virtual classrooms: Student perceptions of features and characteristics in an online and blended course. Journal of Online Learning and Teaching, 6(1), 135-147.
van Raaij, E. M., & Schepers, J. J. L. (2008). The acceptance and use of a virtual learning environment in China. Computers & Education, 50(3), 838-852.
Rhode, J. F. (2009). Interaction equivalency in self-paced online learning environments: An exploration of learner preferences. International Review of Research in Open and Distance Learning, 10(1), 1-23. Retrieved from http://0-search.ebscohost.com.uncclc.coast.uncwil.edu/login.aspx?direct=true&db=eric&AN=EJ831712&site=ehost-live
Rovai, A. P., & Wighting, M. J. (2005). Feelings of alienation and community among higher education students in a virtual classroom. Internet & Higher Education, 8(2), 97-110. doi:10.1016/j.iheduc.2005.03.001
Santos, J. A. (1999). Cronbach’s alpha: A tool for assessing the reliability of scales. Journal of Extension, 37(2). Retrieved from http://www.joe.org/joe/1999april/tt3.php
Suhonen, R., Schmidt, L. A., & Radwin, L. (2007). Measuring individualized nursing care: Assessment of reliability and validity of three scales. Journal of Advanced Nursing, 59(1), 77-85. doi:10.1111/j.1365-2648.2007.04282.x
Wimba (2009). Wimba for Higher Education. Retrieved from http://www.wimba.com/solutions/highereducation/wimba_classroom_for_higher_education
Winograd, D. (2000, October). The effects of trained moderation in online asynchronous distance learning. Paper presented at the annual meeting of Association for Educational Communication and Technology, Denver, CO.
Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S. M., & Liu, X. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76(1), 93-135. doi:10.3102/00346543076001093
Yukay-Yuksel, M. (2009). A Turkish version of the School Social Behavior Scales (SSBS). Educational Sciences: Theory & Practice, 9(3), 1633-1650.
Michele A. Parker is an Assistant Professor of Educational Leadership at the University of North Carolina at Wilmington. Her doctorate is in Research, Statistics, and Evaluation from the University of Virginia. She teaches instructional technology and research courses. Her scholarship includes the use of technology in education and in research.
Emily R. Grace is a doctoral student in Educational Leadership at the University of North Carolina at Wilmington. She works as the Project Coordinator for the Hill Center Regional Educational Model at UNC Wilmington. Her master’s degrees are in school administration and elementary education. Prior to her current position, she worked as a school administrator and teacher in grades K-12.
Florence Martin is an Assistant Professor in the Instructional Technology program at the University of North Carolina, Wilmington. She received her Doctorate and Master's in Educational Technology from Arizona State University. She has a bachelor's degree in Electronics and Communication Engineering from Bharathiyar University, India. Previously, she worked on instructional design projects for Maricopa Community College, University of Phoenix, Intel, Cisco Learning Institute, and Arizona State University. She was a co-principal investigator on the Digital Visual Literacy NSF grant working with Maricopa Community College District in Arizona. She researches technology tools that improve learning and performance (e.g., learning management systems, virtual classrooms, web 2.0 tools).