Editor’s Note: This large online staff-development program for Special Education involved federal, state, and local education agencies. It used state teams, retreats, a project web site, and studies to collect data. This study evaluates the design, production, and implementation of an online staff development module.

Beta Testing Online Staff Development for Implementation at Scale

Edward L. Meyen, Ronald J. Aust and Chien Yang

Keywords: Evaluation, Field Testing, Beta Testing, Staff Development, Professional Development, In-service, Online Instruction, e-Learning, Instructional Design, Implementation, Scalability, Dissemination

Abstract

This beta test was conducted as part of the activities carried out by a multi-state planning project to develop guidelines for implementation of large-scale online staff development programs. The beta test was designed to evaluate the features of an online staff development model and to provide participants in the planning project a common experience in understanding online staff development. Fifty-one educators, in several professional roles, from nine states participated in a series of focus groups and surveys to develop recommendations on implementation.

Background

The planning project that provided the context for this study emerged from the work of the Online Academy (Meyen, 2002) and the University of Kansas e-Learning Design Lab (eDL) (Meyen, 2003). As part of the planning effort, a needs assessment study (Meyen, Ramp, Harrod, & Bui, 2002) identified 113 topics perceived as being important for staff development programs. A supplemental award from the Office of Special Education Programs (OSEP) in the U.S. Department of Education supported a prioritization process that established topics for developing five online modules to address critical staff development needs of national significance and a planning process aimed at the development of recommendations on the delivery of online staff development by states. The following online modules were developed, beta tested, and released to state dissemination teams:

  • Curricular Design and Instructional Accommodations for Secondary Students with Mild Disabilities

  • Relating Instructional Assessments to Standards

  • Models for Collaboration

  • Transition-Focused Secondary Education for All Students

  • Developing Standards-Based IEPs

The beta testing model reported in this paper was designed to provide a basis for refining the modules and to serve as a shared experience through which planning participants could reach a common understanding of this type of online staff development. In this context, the beta test was central to the process that led to the ultimate decisions on recommendations for implementing large-scale online staff development. The final report for the supplemental project, along with the five modules, can be accessed on the eDL website at www.elearndesign.org.

The Multi-State Planning Process

External Leadership: The national advisory board was composed of nine individuals representing state education agencies (SEAs), regional resource centers (RRCs), local education agencies (LEAs), institutions of higher education (IHEs), and the Office of Special Education Programs (OSEP) in the U.S. Department of Education. The divisions of Research to Practice and Monitoring and State Improvement Planning in OSEP were both represented.

State teams: Nine state teams represented the primary source of data input for the planning process and the participants in the beta test. Each team was composed of representatives from the respective SEA and an institution of higher education, along with a principal, a staff development specialist, and at least one classroom teacher. In one case, a regional resource center representative served on a team (this person was also on the national advisory board). For all of the teams, the SEA representative served as the team leader, coordinating all team activities.

Retreat Planning Sessions: Two two-day retreats involving the state teams and members of the board represented the setting for planning. Planning sessions were also conducted by the board prior to the retreats. During the board planning sessions, the initial project goals were framed, the planning process conceptualized, and the retreat model agreed upon. OSEP awarded funds to each state to cover the costs for their team members to participate in the retreat sessions.

Project web site: The project web site served as the primary communications vehicle. All data collection was conducted electronically, and the modules beta tested by the participants were accessed through the web site. Participants also communicated with their team members, the eDL staff, and their team leader electronically.

Studies Supporting the Planning Process: Three studies were conducted as a means of collecting data to inform the planning process: (a) identification of barriers to online staff development, (b) determination of the conditions and/or parameters of successful online staff development, and (c) beta testing of online staff development modules by planning participants. This paper focuses on the beta test study (Meyen & Yang, 2003a, 2003b).

Literature Review

While there is an emerging literature base on evaluating online instruction, the focus tends to be on the perceptions of students enrolled in online instruction rather than on design, navigation, and content structure. Instructional Technology Service (ITS) at Texas A&M University (Texas A&M University, n.d.) provides a model for designing online courses that includes five phases: analysis, design, development, implementation, and evaluation (ADDIE). According to ITS, evaluation is the final phase of designing online courses and serves as the quality management component for online programs. The evaluation phase assesses the effectiveness and quality of the instructional process and materials through alpha or beta testing. It also serves to ensure that learners are comfortable with the technology used in online learning (Boyers, 1997). Bodily and Mitchell (1998) published a source book on the evaluation of Challenge Grants funded by the U.S. Department of Education. More recently, resources have begun to appear in the literature on the formative evaluation of web-based courses (Maslowski, Visscher, Collis, & Bloeman, 2000; Youngman, Gotcher, Vafa, Dinsmore, & Goucher, 2000). The beta test design for this study drew from the work of Beyer (1995) on formative evaluation. While beta testing of e-learning programs is widely applied in large corporations (e.g., Microsoft), little has been written about guidelines and considerations for interpreting beta test results.

The beta test design also drew heavily from the work of the Online Academy (Meyen, Aust, Bui, Ramp, & Smith, 2002). The Academy employed five foci in beta testing 22 online modules nationally: (1) instructional design of the online modules, (2) the production system, (3) content development, (4) usability and navigation, and (5) implementation.

Methodology

The procedures for the beta testing emerged from discussions at the first planning retreat and from the eDL staff’s experiences with previous beta testing. The instructional design for the staff development modules was the same as that developed by the Online Academy (Meyen, 2002), which had been subjected to extensive evaluation (Meyen et al., 2002). Because the content had been developed by national content experts, it was decided that while this beta test would address content, it would focus primarily on participants’ perceptions of the implications for implementing online staff development. As a result, demographic data were collected on each participant completing a module. This allowed the results to be analyzed for the total group of respondents or by the role of individual participants (i.e., SEA representatives, IHE faculty, teachers, principals, or staff development specialists).
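As an illustration only, the role-level breakdown described above amounts to a simple group-by over individual response records. The sketch below is hypothetical; the record fields and values are ours, not the project’s data or analysis code.

```python
# Hypothetical sketch: grouping 5-point ratings by respondent role so means
# can be reported for the total group or for any single role.
from collections import defaultdict
from statistics import mean

responses = [
    {"role": "teacher", "item": "Work when convenient", "rating": 5},
    {"role": "principal", "item": "Work when convenient", "rating": 4},
    {"role": "SEA representative", "item": "Work when convenient", "rating": 5},
]

by_role = defaultdict(list)
for r in responses:
    by_role[r["role"]].append(r["rating"])

print("Total group:", round(mean(r["rating"] for r in responses), 2))
for role, ratings in by_role.items():
    print(role, round(mean(ratings), 2))
```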

Procedures employed in the beta test included the following:

  • Each content expert was paired with a teacher.

  • Modules were demonstrated to participants at an orientation session.

  • Participants were required to select a module of their personal choice to beta test.

  • A web site was developed containing all the information necessary to participate in the beta testing and to report evaluative responses.

  • A response monitoring system maintaining subject anonymity was used.

  • Deadlines were set for completion of the beta testing and reporting of the results.

  • Results were tabulated electronically as they were received.

  • A summary of quantitative responses (Effectiveness Record) and narrative responses (Beta Test Audit) was developed and posted.

  • Content issues were reviewed internally to determine if they warranted revision.

  • An audit summary based on analysis of responses was developed.

  • The audit summary was reported back to the participant group during the second retreat.

Results

Context of Beta Test

The instructional design and the technology employed in developing the staff development modules had already been extensively beta tested and were being implemented with other content in preservice teacher education programs nationally. Additionally, the content had been developed by content experts selected nationally and had been subjected to review.

The beta test was targeted at seeking input from the project participants, particularly to assess the effectiveness of the modules and the implications for online staff development; participants were also asked to identify any errors in content, problems related to navigation, and organizational needs. The engagement of the project participants in the beta test process had the advantage of ensuring that each participant would personally experience online staff development as part of the project. Since few had prior personal experience with developing or participating in online staff development, this direct involvement was viewed as critical to their preparation of recommendations for the full-scale online staff development implementation. Thus, it was important to conduct the beta test early in the project to vest critical stakeholders in the evaluation and refinement for the full-scale implementation.

Demographics

Fifty-one individuals from nine states participated in the beta test. Participants included 10 state education agency staff, 6 professors, 6 principals, 14 teachers, 8 professional development specialists, and 7 individuals representing Regional Resource Centers and individuals in varied administrative roles. Forty-four were female and seven were male.

Table 1
Participants’ Experience on Computers and Internet Usage

Amount of Experience     Computers    Internet Usage
Less than 1 year            --              1
1-3 years                    2              8
3-5 years                    6             15
5+ years                    42             27
                          n=50           n=51

Relevant experiences of participants in using computers, the Internet, and in completing an online course are reported in Table 1. One participant reported less than one year of experience in using the Internet, eight 1-3 years, fifteen 3-5 years and twenty-seven reported more than five years. In terms of experience in using computers, two reported 1-3 years of experience, six 3-5 years and forty-two reported more than 5 years.

Table 2 contains data on Internet usage per week. As illustrated, 25 participants reported spending less than 5 hours a week, twenty-one 5-10 hours and five reported spending more than 10 hours a week on the Internet.

Table 2
Internet Usage Experience per Week

Hours per week          # of Participants
Less than 5 hours              25
5-10 hours                     21
More than 10 hours              5
                             n=51

When asked about their experience in taking online courses, 38 reported that they had never taken an online course. Ten reported having taken 1-2 courses, with three reporting completing 3-10.

Table 3 describes participants’ use of technology in their professional role. Twenty-one reported that they fully use technology in their daily work; 29 reported that they make moderate use of technology in their daily work. One reported rarely using technology in daily work.

Table 3
Technology Usage by Participants

Ranking            # of Participants
Rare usage                 1
Moderate usage            29
Full usage                21
                        n=51

In contrast to a typical beta test that focuses on assessing the effectiveness of design, validity of content, stability and usability of the technologies, the framework for this beta test was influenced largely by the project planning needs. The framework included two elements. The first was to establish an Effectiveness Record on the part of the participants. Specifically, this element addressed participants’ perceptions of preferred characteristics of online staff development, their experiences in testing the online staff development modules, and the contributions of specific features in the instructional design to their learning. The second element took the form of a Beta Test Audit, collecting qualitative data from open-ended questions and observations based on the participants’ experience in completing an online staff development module. This element was designed to identify data that would be helpful in revising the modules and/or contribute to the subsequent framing of recommendations on delivery models. Data for both elements were collected online.

Effectiveness Record Results

The Effectiveness Record process employed represents an attempt to assess the effectiveness of online instruction from the perception of a group of stakeholders who have a common goal. In this case the stakeholders were individuals representing different educational agencies in nine states who were engaged in making recommendations on strategies for large-scale implementation of online staff development. Thus, the specific goal was to assess the effectiveness of online instruction in the process of determining its application to online staff development.

The rationale for building an Effectiveness Record based on the perceptions of participants in the beta testing was that all participants (a) would be basing their responses on the same online instructional design and technology features, (b) had the same frame of reference in that the context was online staff development, and (c) completed the experience within the same time frame and with a similar level of support.

Each participant was asked a series of questions organized into four categories:

  • Importance of the characteristics of online instruction

  • Response to online module design

  • Contributions of Orientation and Support level features to learning

  • Contributions of Lesson level features to learning

Participants responded to each question using the following 5-point scale:

  5 points   Strongly Agree      (sa)
  4 points   Agree               (a)
  3 points   Uncertain           (u)
  2 points   Disagree            (d)
  1 point    Strongly Disagree   (sd)

Importance of the characteristics of online instruction. Seven characteristics of online instruction were identified (see Table 4). Each characteristic was embedded in the design used to create the online modules tested by the participants. As noted in Table 4, the mean scores ranged from a high of 4.86, for the flexibility of being able to work on online instruction when convenient and the ability to review a module at any time, to a low of 3.98 for the usefulness of media.
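The mean scores reported in Tables 4 through 7 appear to be weighted means over the 5-point scale above. The short sketch below is our reconstruction of that convention, not code from the study, checked against the Table 4 item “Work when convenient.”

```python
# Reconstruction (ours, not the study's code): a table mean score as a
# weighted mean of the Likert response counts on the 5-point scale.
SCALE = {"sa": 5, "a": 4, "u": 3, "d": 2, "sd": 1}

def mean_score(counts: dict) -> float:
    """Weighted mean of Likert response counts, rounded to two decimals."""
    total = sum(counts.values())
    weighted = sum(SCALE[label] * n for label, n in counts.items())
    return round(weighted / total, 2)

# Table 4, "Work when convenient": 45 sa, 5 a, 1 u, 0 d, 0 sd -> 4.86
print(mean_score({"sa": 45, "a": 5, "u": 1, "d": 0, "sd": 0}))
```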

Table 4
Importance of Characteristics of Online Instruction

Characteristic             sa    a    u    d   sd   Mean Score
Review in original form    25   23    2    1    0      4.41
Work when convenient       45    5    1    0    0      4.86
Work where I want          42    7    2    0    0      4.78
Have immediate access      38   11    2    0    0      4.71
Can make hard copies       43    8    0    0    0      4.84
Can review at any time     45    5    1    0    0      4.86
Media is useful             7   37    4    2    0      3.98

n = 51

Table 5
Responses to Selected Design, Usability and Instructional Features of Modules

Item                                  sa    a    u    d   sd   Mean Score
Were easy to set up and start         29   19    0    2    1      4.43
Were easy to use                      29   19    2    1    0      4.49
Were instructionally well designed    24   22    4    1    0      4.35
Kept my attention/interest            12   28    6    5    0      3.92
Helped me understand content          24   24    2    1    0      4.39
Had meaningful graphics                4   21   12   13    1      3.27
Would help develop my skills          22   24    3    1    0      4.34
Are effective delivery approach       22   21    6    2    0      4.24

n = 51

The consistently high rankings of the selected characteristics of online instruction reflect learner-oriented perspectives; they relate to aspects of online instruction that generalize to most models of online instruction. Although all of the mean values are comparatively high, the fact that the usefulness of media received the lowest rating warrants consideration. Developing multimedia resources is very costly. If, after further study, it is determined that the use of multimedia does not add significantly to the effectiveness of online instruction for adult learners, significant resources and time could be saved by minimizing its use. On the other hand, this rating may reflect the nature of the specific multimedia embedded in these modules rather than the general use of multimedia in online instruction.

Response to online module design. There were eight items in this series. As noted in Table 5, the mean scores ranged from a high of 4.49 for ease of use to a low of 3.27 for the contributions of graphics to their learning. The lower ranking for the instructional effectiveness of the graphics may be associated with the specific graphics or the graphic style associated with these modules. Graphics were integrated into the multimedia features of the modules; thus, this relatively low ranking for graphics may be related to the similar ranking given to the use of media, reported in Table 4. This uncertainty about the value of graphics adds to the need for further study of the value of media and/or the type of media that is most applicable to online instruction designed for adult learners.

Table 6
Perceived Value of Orientation and Support Features
to Learning by Participants

Feature               sa    a    u    d   sd   Mean Score
Introduction          16   34    0    0    1      4.25
Critical questions    18   27    6    0    0      4.24
Content map           12   28    6    5    0      3.92
Structure             17   25    6    2    0      4.14
Help                   8   26   15    2    0      3.78
Syllabus              23   26    1    1    0      4.39
Research              22   18    8    1    2      4.12

n = 51

Tables 6 and 7 report data on items that are directly related to features in the module design tested by participants. All features in the design are addressed except the Practice feature, which was not included because it is carried out following completion of the module. In this test, participants were not required to complete the Practice because the test was conducted after the end of the school year and most Practice exercises involved classroom applications.

The module design consists of four levels: Orientation, Support, Lesson, and Practice. The Orientation Level consists of the Introduction, Critical Questions, and Content Maps. These features provide information helpful to students in getting started, and students may return to the Orientation Level occasionally. The Support Level includes the Syllabus and an aggregation of selected features from the Lesson Level (i.e., Readings, Glossary, Directed Questions, and Assessments); it represents an easily accessible source for review once the module has been completed. The Lesson Level includes the primary instructional features: Outline, Notes, Glossary, Readings, Advance Organizer for the Lecture, a Mediated Lecture, Activities, Directed Questions, and Assessment. The Practice Level allows students to apply what they have learned from the module. Each module averaged about four lessons.
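As a schematic aid only, the four-level design described above can be summarized as a simple data structure. The field names below are our own choices for illustration, not the eDL implementation.

```python
# Schematic sketch of the four-level module design (Orientation, Support,
# Lesson, Practice). Field names are ours, not the eDL implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Lesson:
    outline: str
    notes: str
    glossary: List[str]
    readings: List[str]
    advance_organizer: str        # preview of the mediated lecture
    mediated_lecture: str
    activities: List[str]
    directed_questions: List[str]
    assessment: str

@dataclass
class Module:
    # Orientation Level: helps students get started
    introduction: str
    critical_questions: List[str]
    content_map: str
    # Support Level: syllabus plus aggregated Lesson features for later review
    syllabus: str
    # Lesson Level: the primary instruction (about four lessons per module)
    lessons: List[Lesson] = field(default_factory=list)
    # Practice Level: classroom application carried out after the module
    practice_activities: List[str] = field(default_factory=list)
```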

Table 7
Contributions of Lesson Features to Participants’ Learning

Feature                sa    a    u    d   sd   Mean Score
Outline                27   20    3    1    0      4.43
Notes                  30   18    2    1    0      4.51
Glossary               26   22    2    0    0      4.48
Readings               17   19    8    1    4      3.90
Preview                22   19    6    4    0      4.16
Presentation           37   12    2    0    0      4.69
Activities             20   28    2    1    0      4.31
Directed questions     25   22    4    0    0      4.41
Assessment             27   15    6    3    0      4.29

n = 51

Beta Test Audit

Like the data collected via the Effectiveness Record, the data for the Beta Test Audit were derived from an instrument available to all participants online. The primary difference is that whereas the Effectiveness Record was based on structured questions requiring quantifiable responses, the Beta Test Audit results were derived from responses to open-ended questions or unstructured opportunities to report observations on the beta test experience. Thus, the data were in a qualitative format and not subject to tabulation in the same manner as the Effectiveness Record data. The compilation of results required some knowledge of the beta testing process, the module design, and the content. Additionally, it was important to understand how the results would be used in the revision process to be able to prepare the results summary in a maximally useable form. The audit results were prepared by selected members of the eDL production staff.

The production staff first transferred the raw data verbatim to a Word document. The statements were then aligned with the modules they addressed, and each statement was coded so that it could be traced to the exact location in the appropriate module. When changes were required, a brief descriptor characterizing each comment or suggestion was inserted in the border of the record to serve as a cue to the staff reviewing the data. Once completed, the results became the focus of staff meetings to determine any revisions warranted. Figure 1 includes an excerpt from the sample Beta Test Audit report prepared on the staff development modules as a result of this beta test. The sample reflects the early stages of the audit; as actions were taken based on the data, they were recorded in the border.

Staff M1
  Comments:
    • I found the content to be well grounded in research.
    • Perhaps because the modules are trying to reach both individuals with experience and novices, I found the introductory material to be more than needed. Some of the content was very basic.
    • Adding more interactivity to the module could be helpful.
  Border notes: Research-positive; Content too basic; Review interactivity

Staff M2
  Comments:
    • Enjoyed the assessments - kept me on task and forced me to pay attention.
    • Reading so much text could be difficult for staying on task.
    • RealPlayer: I could not get it to work therefore I had to use the text version. That makes it much more tedious to work through the module.
    • I liked being able to preview the notes and outline/syllabus.
    • I would have liked more Practice activities for applying the knowledge and links to web sites.
  Border notes: Assessment – positive; RealPlayer problem; Review practice activities; Consider links

Staff M2
  Comments:
    • Look forward to making these available on a large scale.

Staff M4
  Comments:
    • This was a very easy (friendly) site to navigate with very few glitches. Makes me want to participate in more of the same.
  Border notes: Very friendly; No problem

Staff M4
  Comments:
    • Typo in glossary: Functional Vocational Rehabilitation: add “s” to “interest.”
    • Typo in glossary: Person Centered Planning: add the word “it” after “because.”
    • Where are the two handouts in Lesson II?
    • Typo: Lesson II Assessment: Number 5: add “s” to “meeting.”
    • Typo: Notes for Lesson 3: Number 5: “environment” misspelled.
    • Typo: Lesson 3: Self-Determination Section: Why is “self-advocacy” capitalized when underlined?
    • Typo: Lesson 3: Conceptual Module for Self-Determination.
  Border notes: Check all typos. Make corrections. Verify corrections. Check content; Format-positive

Figure 1. Excerpts from a sample Beta Test Audit report
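A minimal sketch of the kind of coding scheme just described, in which each verbatim comment is tagged with its module, location, and a border descriptor so recurring concerns surface during staff review. The field names and sample entries are illustrative, not the eDL system.

```python
# Minimal sketch of an audit coding record; fields and values are illustrative.
from collections import Counter
from dataclasses import dataclass

@dataclass
class AuditEntry:
    module: str        # e.g. "Staff M2" (module label)
    location: str      # code tracing the comment to its spot in the module
    comment: str       # participant's verbatim statement
    descriptor: str    # brief cue recorded in the border of the record
    action: str = ""   # revision decision added as the staff review proceeds

entries = [
    AuditEntry("Staff M2", "Lesson 1 / mediated lecture",
               "RealPlayer: I could not get it to work ...", "RealPlayer problem"),
    AuditEntry("Staff M4", "Glossary / Person Centered Planning",
               "Typo: add the word 'it' after 'because.'", "Check content"),
]

# Tallying descriptors highlights concerns that recur across modules.
print(Counter(e.descriptor for e in entries))
```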
 

Summary of Beta Test Process and Results

In any beta test, decisions must be made about how to interpret results and improve the program. Needed revisions are rarely evident from test results or quantitative data alone; each situation and each subjective comment require careful study. The revision process must also consider staff input based on their experience and access to new technologies. The goal is to improve all elements of the product with knowledge of the conditions under which it will be used. It is important to know the technology capacity of the schools where online staff development will be offered and whether teachers will do much of the work at home. While online staff development should not be produced for the lowest common denominator of available technology, it is important to strike a balance that ensures maximum access for the target audience.

It is not uncommon in beta testing for some participants to want to implement the program immediately and, in the process, to identify and share additional information. That was true in this case as well. Shortly after completing the beta test, three states opted to use selected modules for staff development. In each case they offered additional suggestions based on their early experience that resulted in minor modifications, but ones that were important to them. Beta testing, while structured, must be considered a rather open system. For example, anecdotal information may lead to modifications equal in importance to the results of systematic testing with large numbers of participants. It is difficult to predict the best sources of good information; sometimes the best suggestions come from naïve users, at other times from the most sophisticated and experienced users. In all cases, the developer must decide the significance of each suggestion or concern.

For the eDL, the combination of the Effectiveness Record, the Beta Test Audit, and staff input continues to evolve as a model; with each development project the staff continues to refine the beta testing procedures. The newness of online instruction, especially its application to staff development, presents a number of challenges. Decision makers’ expectations are sometimes ahead of the technology, and few people are experienced in developing online instruction to be taken to scale. Often the assumption is made that because a particular feature can be employed in one school or on one campus, it is applicable nationally; in reality, taking online instruction to scale introduces the most significant challenges. In addition, personal preferences enter into the testing process, along with views that are not supported by research: because certain features reflect what is seen in the popular media, many assume they are essential to mediated online instruction.

When interpreting beta testing results one is faced with having to make judgments on which suggestions or data warrant revision or addition to the original instructional program. The decision is seldom clear. There are always conditions that may explain why the concern occurred or minimize the significance of the suggestion. On the other hand, in some cases what may appear to be a minor suggestion that comes from a single user may, after careful consideration, be found to be highly significant.

When interpreting the results of beta testing, the eDL staff has found that questions such as the following must be considered in examining each suggestion or concern:

  • Is this a design, technical or content concern?

  • Is this a systematic concern or an isolated issue?

  • Is this a personal preference concern?

  • Are additional data available to support this concern?

  • If it is a content concern, can it be confirmed?

  • Is the focus of the concern something that is intentional and previously explained?

  • How important is making the correction compared to the cost and implications of not making it?

  • If it is technical, is it correctable?

  • Is the concern user-related?

  • Is the concern clearly communicated?

  • Has the concern already been addressed?

  • Does staff judgment support making the revision?

  • How does the concern relate to responses from previous beta test results?

As noted, this beta test served two purposes: (a) to improve the quality and usability of the online modules for staff development and (b) to inform the process of determining the potential effectiveness of online staff development and framing recommendations for the implementation of online staff development on a statewide basis or at scale nationally.

The revisions made as a consequence of the beta test took several forms, although most were edits of features found to be distracting; in some cases these were substantive. Some content modifications offered clarification, and incomplete citations were corrected. No instructional design changes were made, but some graphic design changes were included in the final version. While the multimedia was not changed, the results of the beta test supported the need for further research on the importance of multimedia resources in online instruction for adult learners. The results that informed the deliberations of project participants in making recommendations on implementation (i.e., delivery of online staff development at scale) were significant. They were factored into the design of the Barriers/Solutions Study and the Parameters of Online Staff Development Study, and the results of those studies eventually influenced the framing of recommendations by maximizing the information available to participants (Meyen et al., 2002).

Lessons Learned

  1. Participation in beta testing is an effective approach to understanding online staff development.

  2. Professional educators in different roles are able to reach consensus on the effectiveness of the specific features of online instruction that are applicable to online staff development.

  3. The combination of quantitative (Effectiveness Record) and qualitative (Beta Test Audit) data yields significant benefits in a planning process aimed at taking an online staff development program to scale.

  4. The process of compiling beta test results and framing reports allows production staff to gain insights from the perspectives of the respondents.

  5. Participation in beta testing facilitates communication among planning participants.

  6. Requiring beta test participants to respond to inquiries on each feature of the online instructional design results in greater understanding of the overall instructional model.

  7. Open-ended qualitative items allow the production staff to focus on individual responses that otherwise might have been overshadowed in an aggregate analysis.

  8. What may appear to content authors as minor errors of omission may be perceived by practitioners as significant and, therefore, warrant correction.

  9. Beta testing can be carried out in a timely manner, and response costs can be reduced, if the process is systematic and effectively monitored.

  10. Beta testing can yield data that result in the framing of questions for subsequent research (e.g., the value of the multimedia presentation of content versus text.)

  11. Participation in beta testing is an effective way for individuals to understand the features of an online instructional model. It also appears to build confidence to interact in discussions of online staff development.

Conclusion

The beta test study was designed to help inform the multi-state planning process. Two additional studies were conducted for the same purpose but are not reported in this paper; they addressed potential barriers to online staff development and the identification of conditions essential to successful online staff development. There were fifty-one participants in the beta test, which was conducted prior to the development of the implementation recommendations by the planning group. The focus of the beta test was on the characteristics of online staff development, features of the design, and content. Two strategies were employed in conducting the beta test: a quantitative Effectiveness Record and a qualitative Beta Test Audit. The first strategy focused on participants’ perceptions of each feature in the online model. The second employed qualitative techniques to collect data in the form of participants’ observations and insights in a narrative, anecdotal format.

ACKNOWLEDGMENTS

Preparation of this article was supported in part by the Office of Special Education Programs in the U.S. Department of Education. Recognition is due to Cheryl Harrod, Meng Yew Tee and Dan Spurgin of the e-Learning Design Lab staff and to the Center for Research on Learning and the Information & Telecommunication Technology Center whose collaboration created the e-Learning Design Lab.

References

Beyer, B.K. (1995). How to conduct a formative evaluation. Alexandria, VA: Association for Supervision and Curriculum Development. (ERIC Document Reproduction Service No. ED 391830).

Bodily, S., & Mitchell, K.J. (1998). An educator’s guide to evaluating the use of technology in schools and classrooms (on-line). Available at: http://www.ed.gov/pubs/EdTechGuide/appa.html.

Boyers, K. (1997). Lessons from your desk. Association Management, 49, 50-53.

Maslowski, R., Visscher, A.J., Collis, B., & Bloeman, P.P.M. (2000). The formative evaluation of a web-based course-management system within a university setting. Educational Technology, 40(30), 5-19.

Meyen, E.L. (2002). Final Report: The Online Academy (Office of Special Education Programs, U.S. Department of Education). Lawrence, Kansas: Center for Research on Learning, University of Kansas.

Meyen, E.L. (2003). Final Report: Online Delivery Model Project (Office of Special Education Programs, U.S. Department of Education). Lawrence, Kansas: Center for Research on Learning, University of Kansas.

Meyen, E.L., Ramp, E., Harrod, C.A., & Bui, Y.N. (2002). A national assessment of staff development needs related to the education of students with disabilities. Focus on Exceptional Children.

Meyen, E.L., & Yang, C.H. (2003a). Barriers to Implementing Large-Scale Online Staff Development Programs. Online Journal of Distance Learning Administration. In press.

Meyen, E.L., & Yang, C.H. (2003b). Parameters of Online Staff Development Study. International Journal of Instructional Technology and Distance Learning. In press.

Meyen, E.L., Aust, R.J., Bui, Y.N., Ramp, E., & Smith, S.J. (2002). The online academy formative evaluation approach to evaluating online instruction. The Internet and Higher Education, 5, 89-108.

Texas A&M University, Instructional Technology Service (ITS). (n.d.). How to design your course. Retrieved July 12, 2003, from http://www.tamu.edu/its/consult/howtodesign.htm

Youngman, T., Gotcher, L., Vafa, S., Dinsmore, S., & Goucher, O.B. (2000, February). A university design team approach: Developing courses for on-line distance education. Proceedings of the Society for Information Technology and Teacher Education International Conference, San Diego, CA.

About the Authors.

Edward L. Meyen, Ph.D., is a Budig Teaching Professor for the Department of Special Education at the University of Kansas and the Co-Director of the e-Learning Design Lab. The Lab is a joint venture with the School of Engineering and the School of Education.

Ronald J. Aust, Ph.D., is Associate Professor in the Department of Teaching and Leadership at the University of Kansas. His research and development in the e-Learning Design Lab focuses on user interfaces and instructional design.

Chien-Hui Yang is a Ph.D. student in early childhood special education and a graduate research assistant with the e-Learning Design Lab at the University of Kansas.

Contact Information:

Ed Meyen, e-Learning Design Lab
University of Kansas
1000 Sunnyside Avenue – Suite 3061
Lawrence, KS  66045-7555
Tel. 785-864-0675  E-mail:  meyen@ku.edu

 
