Tuesday, October 8, 2013

Online Discussion, Student Engagement, and Critical Thinking - Annotated Bibliography

Williams, L., & Lahman, M. (2011). Online Discussion, Student Engagement, and Critical Thinking. Journal of Political Science Education, 7(2), 143–162. doi:10.1080/15512169.2011.564919

The authors, professors at Manchester College, use data from both advanced and lower-level undergraduates enrolled in traditional classroom-based general education courses to test the usefulness of their content analysis tool for identifying student engagement and critical thinking in an online discussion forum. They found the tool, merged and refined from existing content analysis protocols, to be effective: "We were able to code a large amount of written material in a reliable fashion" (p. 159). The authors also claimed to have replicated and demonstrated the link between student interaction and critical thinking.

The main focus of the article was to report the development of a content analysis tool and how it performed in its initial implementation. Although the authors demonstrated an interesting combination of different content analysis tools, it seems debatable whether the "hybrid coding scheme" (p. 150) actually retains the advantages of the existing tools developed by previous researchers while improving ease of use and reliability. Is the tool "just right" (p. 146) in specificity (mutually exclusive categories) and reliability, and does it have enough categories to reflect the characteristics of the discussion (exhaustiveness), as claimed? The following potential problems were identified:
  • Lack of uniformity in the coding scheme. Each researcher developed their tool ("coding scheme" or "protocol" as in the article) from a selected angle; that is how a scheme can meet the fundamental requirement of being exhaustive and mutually exclusive as a coding tool. When different coding schemes were merged, the uniformity of each scheme was broken. As a result, the hybrid scheme is neither fully exhaustive nor fully mutually exclusive.
  • In the hybrid tool, the dimensions of interaction (p. 150) were derived mainly from the TAT (Fahy, 2005). Unfortunately, the authors overlooked that the TAT was developed to improve discriminant capability and reliability; to achieve these goals, it reduces the number of coding categories and takes the sentence as the unit of analysis. The hybrid tool seems designed in reverse: not only are the categories intertwined, but the coding rules also contradict the purpose of using the sentence as the unit of analysis, since each sentence could be coded into as many categories as the coders wish. This eliminates the best part of the TAT and reintroduces boundary overlap into an otherwise easily identified unit, an issue that more often arises when the message or the unit of meaning is used as the unit of analysis (see the sketch after this list).
  • The intention to draw a clear line between interaction and critical thinking is questionable. First, critical thinking is a component and an outcome of interaction (Fahy, 2005), so the two are difficult to separate cleanly. Second, the categories for interaction and critical thinking described in the study are confusing; they frequently overlap with each other.
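To make the exclusivity concern concrete, here is a minimal, hypothetical sketch (the category names are invented for illustration and are not the paper's actual codes). Under a one-code-per-sentence rule, category frequencies sum to the number of sentences; once coders may attach any number of categories to a sentence, the same sentence is counted several times and category percentages are no longer directly comparable across schemes.

```python
from collections import Counter

# Hypothetical sentence-level codings; category names are invented for illustration.
exclusive_coding = ["statement", "question", "reflection", "statement"]  # one code per sentence
hybrid_coding = [                                                        # any number of codes
    {"statement", "critical_assessment"},
    {"question"},
    {"reflection", "critical_assessment", "interaction"},
    {"statement"},
]

# Exclusive rule: category frequencies sum to the number of sentences (4).
print(Counter(exclusive_coding))

# Hybrid rule: the same four sentences produce seven category hits,
# so frequencies and percentages are inflated relative to the exclusive scheme.
print(Counter(code for codes in hybrid_coding for code in codes))
```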
The mean reliability scores reported by the study were 0.55 to 0.70 (p. 154), considerably lower than the 0.70 to 0.94 reported for the TAT in previous research (Fahy, 2005). The first two potential problems listed above may partly explain the gap.
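For reference, the sketch below shows one common way such sentence-level inter-rater reliability can be computed, using Cohen's kappa; the summary above does not specify which coefficient the study actually used, and the codes shown are invented for illustration.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Chance-corrected agreement, kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    # Observed agreement: proportion of sentences given the same code.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to the same ten sentences.
coder_1 = ["referential", "question", "referential", "reflection", "referential",
           "scaffolding", "referential", "question", "referential", "reflection"]
coder_2 = ["referential", "question", "scaffolding", "reflection", "referential",
           "scaffolding", "referential", "referential", "referential", "reflection"]

print(round(cohen_kappa(coder_1, coder_2), 2))  # 0.7 on this toy data
```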

The category frequencies produced by the hybrid tool also differed greatly from the original TAT report (Fahy, 2005) in the percentage of Referential Statements: while Referential Statements comprised 60.0% of the sentences in this study, they made up only 10.2% in the original study (Fahy, 2005). The difference may stem from the different research contexts on which the two studies were based: two one-week discussion periods in this study versus a full 13-week course in Fahy's study. The former focused narrowly on providing critical comments on assigned essays, while the latter contained diverse learning situations, so it is reasonable that students discussed differently in the two contexts. This finding points to a possible conclusion: there is no single best content analysis tool for all research contexts in terms of discriminant capability, and researchers may need to modify existing tools to fit a particular context.

Despite the issues mentioned above, the study presented a concise summary of the most cited tools for transcript analysis in computer-mediated communication (CMC). It provided readers with a clear guide to who has studied student interaction in CMC, when, and what they examined.

Fahy, P. (2005). Two Methods for Assessing Critical Thinking in Computer-Mediated Communications (CMC) Transcripts. International Journal of Instructional Technology and Distance Learning.