Friday, August 21, 2009

Follow up to USDOE Report

In June, I wrote a post about the Department of Education’s Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies (2009). In that post, I reported the main findings listed in the press release but had not yet fully evaluated the report itself. I have since taken the opportunity to do so. The report has a lot to offer, so bear with me here!

The report addresses the following four research questions (U.S. Department of Education, 2009, p. xi):
1. How does the effectiveness of online learning compare with that of face-to-face instruction?
2. Does supplementing face-to-face instruction with online instruction enhance learning?
3. What practices are associated with more effective online learning?
4. What conditions influence the effectiveness of online learning?
It is a quantitative meta-analysis of existing empirical studies contrasting the effectiveness of online, blended, and face-to-face learning. A literature search followed by thorough screening, coding, and full-text analysis yielded 51 studies for evaluation. The screening process looked for reports that (a) compared the modalities in question, (b) focused on learning outcomes, (c) demonstrated strong research methods and design, and (d) gave enough data to evaluate impact. Additionally, to keep any single study from exerting undue influence, the authors weighted studies by sample size before combining results during the data analysis. They also tested for homogeneity of effects and examined moderator variables related to practices, conditions, and study methods.
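
To make the weighting step concrete, here is a minimal sketch (my own illustration in Python; it is not code or data from the report, and the report's actual scheme may be more sophisticated than the simple sample-size weights assumed here):

    # Combine per-study effect sizes into one weighted average,
    # giving larger studies proportionally more influence.
    def weighted_mean_effect(studies):
        """studies: list of (effect_size, sample_size) tuples."""
        total_n = sum(n for _, n in studies)
        return sum(g * n for g, n in studies) / total_n

    # Hypothetical effect sizes and sample sizes for three studies
    studies = [(0.35, 120), (0.10, 60), (0.25, 200)]
    print(round(weighted_mean_effect(studies), 2))  # 0.26 - the larger studies dominate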

The results demonstrated the following:
  1. Blended (online and face-to-face) learning showed the strongest learning-outcome advantage, followed by online-only, and then face-to-face. However, this is not directly due to the delivery medium; it has more to do with increased time on task, greater access to resources, and overall pedagogical differences evident in blended and online learning.
  2. There is no significant difference in outcomes between blended and online-only conditions.
  3. Overall, studies found no significant difference between instructional approaches that included multiple media (video, audio, PowerPoint, etc.) and those that did not.
  4. However, some advantage is present when students have control over how they interact with media: for example, whether they can pause or skip through a video or narrated PowerPoint, or must watch it completely and linearly.
  5. Research regarding the effectiveness of including quizzing in online learning is inconclusive.
  6. Including simulations slightly improves student outcomes.
  7. Including tools or assignments requiring students to reflect on their learning is the most effective method for improving student learning outcomes.
  8. Online learning is an equally effective choice for undergraduates, graduate students, and professionals across a wide variety of studies. However, it is not as effective for K-12 learners (although available studies in this area were limited).
  9. Learning platforms combining asynchronous and synchronous methods seemed more effective than those using only one of the two.
For online higher education, these results are encouraging. I am also happy that the effectiveness of online learning is not constrained by the medium, but relies more on how it is utilized by educators. This is reminiscent of Gardner’s (2003) plea to ensure technology serves education, instead of the reverse.

Most significant to the development of our global society is the overwhelming effectiveness of self-reflection, self-monitoring, and the concepts of transformational learning (Palloff & Pratt, 2007). This seems to fall in line with Goleman’s emotional intelligence (O’Neil, 1996) and Pink’s (2005) arguments for the coming of a conceptual age. However, it does seem to favor constructivism or cognitivism over connectivist (Siemens, 2008) theories.

Combining the above with the report’s call to “redesign instruction to incorporate additional learning opportunities” (U.S. Department of Education, 2009, p. 51) should lead to:
  1. An increased focus on transformational and reflective learning
  2. A stronger awareness of emotional intelligence
  3. Organic course designs including (a) more learner-controlled media, (b) student choice of assessment methods, (c) less traditional quizzing, and (d) combined synchronous and asynchronous methods.
  4. An increase in hybrid or blended courses
  5. Additional research to address the field’s lack of “a coherent body of linked studies that systematically test theory-based approaches in different contexts” (U.S. Department of Education, 2009, p. 49).


References
Gardner, H. (2003, April 21). Multiple intelligences after 20 years. Paper presented to the American Educational Research Association, Chicago, IL. Retrieved July 29, 2009, from http://www.pz.harvard.edu/PIs/HG_MI_after_20_years.pdf

O'Neil, J. (1996). On emotional intelligence: A conversation with Daniel Goleman. Educational Leadership, 54(1), 6 - 11.

Palloff, R., & Pratt, K. (2007). Building online learning communities: Effective strategies for the virtual classroom. San Francisco: Jossey-Bass.

Pink, D. (2005, February). Revenge of the right brain. Wired, 13(02). Retrieved August 1, 2009, from http://www.wired.com/wired/archive/13.02/brain.html?pg=2&topic=brain&topic_set=

Siemens, G. (2008, January 27). Learning and knowing in networks: Changing roles for educators and designers. Paper presented to ITFORUM. Retrieved June 3, 2009, from http://it.coe.uga.edu/itforum/Paper105/Siemens.pdf

U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. (2009, May). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies (No. ED-04-CO-0040). Prepared by B. Means, Y. Toyama, R. Murphy, M. Bakia, & K. Jones. Retrieved June 26, 2009, from http://www.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

Wednesday, August 05, 2009

If I had only known then ...

Knowing and Doing

In my relatively short time as an academic researcher, I have explored many authors’ work. To date, the most impactful has been Albert Bandura’s Self-Efficacy: The exercise of control (1997). To normal people (non-PhD candidates), I often describe it as the foundational reason most self-help books work. Bandura (1997) defines self-efficacy as “beliefs in one’s capabilities to organize and execute the courses of action required to produce given attainments” (p. 3). It is a simply complex definition that Bandura explains over the remaining 522 pages.

For now, you need to keep in mind what you already know and feel – your belief in your ability to accomplish a given task significantly influences the outcome.

When diffusing technology to new audiences, it is critical to know how strongly they believe in their ability to adopt it. However, knowing that is not enough; you must do something about it. Driscoll (2005) does a wonderful job of combining Bandura's (1997) self-efficacy with Keller's ARCS (attention, relevance, confidence, satisfaction) model of motivation. This begins to close the gap between knowing your audience and doing something about it.

If I had only known then …

Some time ago, I developed and implemented a discussion board for my faculty. Its primary purpose was to bring together a diverse faculty that had been dispersed across several buildings as the campus grew, losing the sense of community they once enjoyed. Additionally, it would disseminate information, allow faculty to give input on policy and procedure, discuss curriculum, and mentor one another.

The system was live for about 9 months, and never realized its potential.


If I had known about Bandura’s (1997) and Keller’s (Driscoll, 2005) theories, the results would have been different. To begin with, many faculty members believed they could not operate the system or integrate it into their daily life. They were hesitant, expressed feelings of inconvenience, and were often uninterested. While we discussed these concerns, we did not address them in the systematic and effective way the ARCS model allows.


Attention: Instructors were attentive during the training, but we did not engage their curiosity or create within them Keller’s “attitude of inquiry” (Driscoll, 2005, p. 334). Their attention did not last. As Driscoll (2005) suggests, integrating more interesting problems to solve and changing how we delivered the material would have helped.


Relevance: The system was relevant to the goal of creating more community and providing the faculty a stronger voice - what Driscoll (2005) referred to as Keller's "ends-oriented" relevance. However, it lacked what Driscoll labeled Keller’s “means-oriented” relevance (p. 335) - that is, the way to achieve that goal. I think this, combined with the faculty's unfamiliarity with discussion boards, was the system’s fatal flaw. Improving instructors' familiarity with the boards was just a matter of time. Solving the means-oriented relevance would have required more focus on how a virtual community can provide strong connections.


Confidence: Many faculty members were confident in their abilities. However, for those who were not, we did make our expectations clear and provide many opportunities for them to be successful using the system - Keller’s first and second strategies for building confidence (Driscoll, 2005, p. 336). Unfortunately, we did not provide them with enough assistance outside of the training, nor did we allow them the flexibility to “control … their own learning” (p. 337). Addressing these issues would have required a less structured curriculum design that allowed exploration of key concepts and functionality. Additionally, establishing support hotlines and email addresses might have provided a safety net for their confidence.


Satisfaction: We did provide sufficient feedback to generate satisfaction, both during the training and afterward as instructors used the system. However, given the other deficiencies, this was not enough to sustain use.


Next time … results will be different.


Brad


References

Bandura, A. (1997). Self-efficacy: the exercise of control. New York: W.H. Freeman and Company.
Driscoll, M. P. (2005). Psychology of learning for instruction (3rd ed.). Boston: Pearson. (Original work published 1995)