Friday, October 11, 2013

Knowing Knowledge - A Summary of the Book

I have spent several days reading Knowing Knowledge (Siemens, 2006) - a book that conceptualizes learning and knowing in Connectivism. The cMOOC, as a form of e-learning, is built upon the theory of Connectivism; therefore, I need to read the book in order to study MOOCs. The following are concepts and quotations that might be closely related to my dissertation study - critical thinking in MOOCs.

Overview

  • We should consider learning a chaotic system and study it the way chaotic systems are studied, in terms of the meaning of knowing, interaction/communication patterns, etc.
Connectivism is
  • "the assertion that learning is primarily a network forming process" (p. 15). Connectivism considers knowledge connective.

Knowledge flow cycle (p. 6)
  • Co-creation: build on/with the work of others
  • Dissemination: analysis, evaluation, and filtering elements through the network
  • Communication of key ideas
  • Personalization: internalization, dialogue, or reflection
  • Implementation: action, feedback

Connective knowledge networks possess four traits (p. 16):
  • Diversity. Is the widest possible spectrum of points of view revealed?
  • Autonomy. Were the individual knowers contributing to the interaction of their own accord, according to their own knowledge, values and decisions, or were they acting at the behest of some external agency seeking to magnify a certain point of view through quantity rather than reason and reflection?
  • Interactivity. Is the knowledge being produced the product of an interaction between the members, or is it a (mere) aggregation of the members’ perspectives?
  • Openness. Is there a mechanism that allows a given perspective to be entered into the system, to be heard and interacted with by others?
Learning and knowledge environment
  • democratic and diverse (p.47)
  • dynamic and capable of evolving, adapting, and responding to external change


Stages of Knowledge Construction (p.45)
Connectivism's staged view of how individuals encounter and explore knowledge in a networked/ecological manner (from basic moves to more complex ones):
  1. Awareness and receptivity: acquire, access.
  2. Connection forming: form network, filter, select, add.
  3. Contribution and involvement: become a visible node, be acknowledged by others, reciprocate, share.
  4. Pattern recognition: recognize emerging patterns and trends.
  5. Meaning-making: act on and reform viewpoints/perspectives/opinions.
  6. Praxis: tweak, build, recreate one's network/meta-cognition, reflect, experiment, act, evaluate.
Components of a knowledge sharing environment (p.87)

  • Informal, not structured
  • Tool-Rich
  • Consistency and time
  • Trust
  • Simplicity
  • Decentralized, fostered, connected
  • High tolerance for experimentation and failure

Characteristics that are required in an effective ecology (p. 90):

  • a space for gurus and beginners to connect,
  • a space for self-expression,
  • a space for debate and dialogue,
  • a space to search archived knowledge,
  • a space to learn in a structured manner,
  • a space to communicate new information and knowledge indicative of changing elements within the field of practice (news, research), and 
  • a space to nurture ideas, test new approaches, prepare for new competition, pilot processes.

Ecologies are nurtured and fostered… instead of constructed and mandated.

Skills our learners need (p. 113):

  • Anchoring. Staying focused on important tasks while undergoing a deluge of distractions.
  • Filtering. Managing knowledge flow and extracting important elements.
  • Connecting with each other. Building networks in order to continue to stay current and informed.
  • Being Human together. Interacting at a human, not only utilitarian, level…to form social spaces.
  • Creating and deriving meaning. Understanding implications, comprehending meaning and impact.
  • Evaluation and authentication. Determining the value of knowledge…and ensuring authenticity.
  • Altered processes of validation. Validating people and ideas within appropriate context.
  • Critical and creative thinking. Questioning and dreaming.
  • Pattern recognition. Recognizing patterns and trends.
  • Navigating the knowledge landscape. Navigating between repositories, people, technology, and ideas while achieving intended purposes.
  • Acceptance of uncertainty. Balancing what is known with the unknown…to see how existing knowledge relates to what we do not know.
  • Contextualizing (understanding context games). Understanding the prominence of context…seeing continuums…ensuring key contextual issues are not overlooked in context-games.


Siemens, G. (2006). Knowing Knowledge. Retrieved from http://www.elearnspace.org/KnowingKnowledge_LowRes.pdf

Tuesday, October 8, 2013

Online Discussion, Student Engagement, and Critical Thinking - Annotated Bibliography

Williams, L., & Lahman, M. (2011). Online Discussion, Student Engagement, and Critical Thinking. Journal of Political Science Education, 7(2), 143–162. doi:10.1080/15512169.2011.564919

The authors, professors at Manchester College, use data from both advanced and lower-level undergraduates enrolled in traditional classroom-based general education courses to test the usefulness of their content analysis tool for identifying student engagement and critical thinking in an online discussion forum. They found the tool, merged and refined from existing content analysis protocols, effective: "We were able to code a large amount of written material in a reliable fashion" (p. 159). The authors also claimed to have replicated and demonstrated the link between student interaction and critical thinking.

The main focus of the article was to report the development of a content analysis tool and how the tool performed in its initial implementation. Although the authors demonstrated an interesting combination of different tools for content analysis, it seems debatable whether the "hybrid coding scheme" (p. 150) actually retained the advantages of the existing tools developed by previous researchers while improving ease of use and reliability. Is the tool "just right" (p. 146) in specificity (mutually exclusive categories) and reliability, and does it have enough categories to reflect the characteristics of the discussion (exhaustiveness), as claimed? The following potential problems were identified:
  • Missing uniformity in the coding scheme. Each researcher developed their tool ("coding scheme" or "protocol," as in the article) from a selected angle; that is how a scheme can meet the fundamental qualification of being exhaustive and exclusive as a coding tool. When different coding schemes were merged, the uniformity of each scheme was broken. As a result, the hybrid scheme became neither fully exhaustive nor fully exclusive.
  • In the hybrid tool, the dimensions of interaction (p. 150) were mainly derived from the TAT (Fahy, 2005). Unfortunately, the authors overlooked that the TAT was developed to improve discriminant capability and reliability. To achieve those goals, the TAT strives to reduce the number of coding categories and takes the sentence as the unit of analysis. The hybrid tool seems to be designed in reverse: not only were the categories intertwined, the coding rules also contradicted the purpose of using the sentence as the unit of analysis (each sentence could be coded in as many categories as the coders wished). This eliminated the best part of the TAT and reintroduced 'boundary overlap' into an otherwise easy-to-identify unit - an issue that often occurs when the message or the meaning is used as the unit of analysis.
  • The intention to draw a clear line between interaction and critical thinking is questionable. First, critical thinking is a component and an outcome of interaction (Fahy, 2005), so the two are difficult to divide clearly. Second, the categories for interaction and critical thinking described in the study were confusing - they often overlapped.
The mean reliability scores reported by the study were 0.55 to 0.70 (p. 154), much lower than the 0.70 to 0.94 reported for the TAT in previous research (Fahy, 2005). The first two potential problems listed above might partly explain why.
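
A note to myself on what such reliability numbers mean: below is a minimal sketch (my own illustration, not from the article, which does not detail its reliability statistic) of computing Cohen's kappa for two coders' sentence-level codes. The codes are made-up placeholders.

    from collections import Counter

    # Hypothetical sentence-level codes from two coders
    coder_a = ["Q", "S", "R", "R", "S", "Q", "R", "S"]
    coder_b = ["Q", "S", "R", "S", "S", "Q", "R", "R"]

    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Chance agreement from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2

    kappa = (observed - expected) / (1 - expected)
    print(f"observed={observed:.2f} expected={expected:.2f} kappa={kappa:.2f}")

Kappa corrects raw percent agreement for agreement expected by chance, which is one reason reliability figures are hard to compare across studies that use different statistics and units of analysis.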

The category frequencies from the hybrid model also differed greatly from the original report with the TAT (Fahy, 2005) in the percentage of Referential Statements: they comprised 60.0% of the sentences in this study but only 10.2% in the original study (Fahy, 2005). The difference might be caused by the different research contexts on which the two studies were based: two periods of one-week discussion in this study versus a 13-week full course in Fahy's study. The former focused narrowly on providing critical comments on given essays, while the latter contained diverse learning situations. It is reasonable that students discussed differently in these two learning contexts. The finding leads to a possible conclusion that there is no single best content analysis tool for all research contexts in terms of discriminant capability; researchers might need to modify existing tools to fit a particular research context.

Despite the issues mentioned above, the study presented a concise summary of the most cited tools for transcript analysis in computer-mediated communication (CMC). It provided readers with a clear guide to the who, when, and what of studying student interaction in CMC.

Fahy, P. (2005). Two Methods for Assessing Critical Thinking in Computer-Mediated Communications (CMC) Transcripts. International Journal of Instructional Technology and Distance Learning.

Saturday, October 5, 2013

Complexity

I enrolled in a MOOC - Introduction to Complexity. Complexity is one of the theoretical foundations of Connectivism, on which cMOOCs are based. So, I think I need to gain an understanding of the theory.

Besides the basic theory of Complexity, I will also learn how to use NetLogo - a modeling environment for studying complex systems. So far so good.
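
As a warm-up, here is the kind of toy model such courses explore - the logistic map, whose behavior shifts from stable to chaotic as the growth parameter r increases. This is my own sketch in Python, not course material.

    # Logistic map: x -> r * x * (1 - x); a classic minimal model of chaos
    def logistic_map(r, x0, steps):
        x = x0
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    for r in (2.5, 3.2, 3.9):  # fixed point, oscillation, chaos
        print(r, [round(logistic_map(r, 0.4, n), 4) for n in (100, 101, 102)])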


Thursday, September 26, 2013

More Thoughts on the Dissertation

After a few days of reading articles, "critical thinking" - the term I originally used in my dissertation title - doesn't seem so bad to me now. Text-based discussion forums provide a means for interaction, consensus seeking, and new knowledge construction in distance education (Fahy, 2001). The quality of interaction should be able to represent online learners' levels of critical thinking.

Buraphadeja's study (2010) suggested no statistically significant relationship between the mean level of knowledge construction found through content analysis and SNA measures, while Fahy's study (2001) found that the results from these two methods supported each other. The difference might come from the authors' different choices of content analysis tools and the target courses they investigated. My thesis aims to adopt a similar research design to study a new context - MOOCs. A possible finding is the interaction patterns in MOOCs. I also plan an in-depth study of several of the most engaged learners in a selected MOOC to further investigate their interaction patterns. The thesis would contribute to the field with suggestions on learning facilitation in MOOCs for instructors and designers.
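
To make the SNA side concrete for myself: a minimal sketch (my own, with made-up reply data, not from either cited study) of turning forum transcripts into centrality measures with the networkx Python library.

    import networkx as nx

    # Hypothetical (replier, original poster) pairs from a forum transcript
    replies = [("ann", "bob"), ("cam", "bob"), ("bob", "ann"), ("dee", "ann")]

    G = nx.DiGraph()
    G.add_edges_from(replies)

    # Degree centrality: overall involvement in the reply network
    print(nx.degree_centrality(G))
    # In-degree centrality: learners whose posts attract replies
    print(nx.in_degree_centrality(G))

Content analysis codes (e.g., TAT categories) could then be compared against these per-learner centrality scores.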

Oct. 19
I am thinking of investigating the following questions:
The depth of learning:
(1) The depth of discussion threads (how many sub-threads does one thread derive? i.e., intensity). Note: 70% of messages were posted in the first two levels - the initial post and subtopic 1 (Gibbs, 2008). See the sketch after this list.
(2) How many topics evolved from the topic given by the instructor?
(3) Have the derivative topics maintained the connection to the goals listed by the instructor at the beginning of the MOOC?
(4) Number of isolated messages
Has the discussion continued after the MOOC?
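
A minimal sketch of how thread depth and isolated messages could be computed from a forum export that records each message's parent (my own illustration; the message ids and parent links are made up):

    from collections import defaultdict

    # message_id -> parent_id (None marks an initial post)
    parents = {"m1": None, "m2": "m1", "m3": "m2", "m4": "m1", "m5": None}

    def depth(msg_id):
        # Levels from the initial post: 0 = initial post, 1 = subtopic 1, ...
        d = 0
        while parents[msg_id] is not None:
            msg_id = parents[msg_id]
            d += 1
        return d

    levels = defaultdict(int)
    for m in parents:
        levels[depth(m)] += 1
    for lvl in sorted(levels):
        print(f"level {lvl}: {levels[lvl] / len(parents):.0%} of messages")

    # Isolated messages: initial posts that drew no replies
    replied_to = set(p for p in parents.values() if p is not None)
    isolated = [m for m, p in parents.items() if p is None and m not in replied_to]
    print("isolated:", isolated)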

The self-organized grouping:
(1) Do members of groups that self-organize while taking the MOOC share homogeneous characteristics? How do learners choose whom to connect to? (Class conduct, initial impressions, and interactions (Gibbs, 2008, p. 17).)
(2) Grouping and centrality (Gibbs, 2008, p. 17)
(3) How long does it take for learners to form a stable (or somewhat stable) discussion group? How long can a group sustain itself?
(4) Power law (does the reply network's degree distribution follow one? See the sketch after this list.)
(5) Group size and stability (what group size is most stable?)
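
For question (4), a minimal sketch (my own) of eyeballing a power-law degree distribution. A scale-free random graph stands in for real forum data here; a rigorous test would need a dedicated statistical package.

    import collections
    import networkx as nx

    # Stand-in network; a real study would build this from forum replies
    G = nx.barabasi_albert_graph(1000, 2, seed=42)

    degree_counts = collections.Counter(d for _, d in G.degree())
    for degree in sorted(degree_counts):
        print(degree, degree_counts[degree])
    # Plotted on log-log axes, an approximately straight line through these
    # (degree, count) pairs is consistent with a power law.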

The characteristics of the learner-instructor interaction:
(1) Do the answers to the questions above differ between different instructors or MOOCs?

Independent variables: learner characteristics, moderating characteristics/styles (skills), types of posts (the 5+3 categories in the TAT), timeline of posts (who posts what, and when), social media adopted by learners for the MOOC.
Dependent variables: depth of discussion, number of derivative topics, consistency of the derivative topics.





Friday, September 20, 2013

Interaction in MOOCs - Students' and Instructors' Perceptions

I've found quite a bit of MOOC literature. To my disappointment, many of the articles are either not relevant or weak in argument. Here is an example.

One article (Khalil & Ebner, 2013) discussed levels of satisfaction with interaction in MOOCs. The authors deployed two web-based survey questionnaires based on the five-step model for interactivity developed by Salmon (2001): Access and Motivation, Online Socialization, Information Exchange, Knowledge Construction, and Development. Students and instructors self-reported their perceptions, and the data were collected and analyzed. The authors concluded that there is a gap between students' and instructors' perceptions of and satisfaction with interaction in MOOCs, and that there was a lack of student-instructor interaction.

I found myself not persuaded by the authors' conclusions because:

  • the return rate of the questionnaires was low (11 out of 250),
  • the results were based on participants' self-reports rather than methods, such as content analysis, that could represent participants' perceptions more directly, and
  • using Salmon's five-step model as the single measurement makes it difficult to really look into the heart of the interaction that took place in the courses.


Khalil, H., & Ebner, M. (20130624). “How satisfied are you with your MOOC?” - A Research Study on Interaction in Huge Online Courses. World Conference on Educational Multimedia, Hypermedia and Telecommunications 2013, 2013(1), 830–839. 

Literature Review of MOOC-Related Articles

Title: MOOCs: A systematic study of the published literature 2008-2012
Authors: Liyanagunawardena, Adams, and Williams (2013)

Description:

Basically, the article is more about classifying the content of published articles than about critical review. Eight categories were used: introductory, concept, case studies, educational theory, technology, participant focused, provider focused, and other.

Other notes: prolific authors on MOOCs in terms of the number of articles published; types of MOOCs (c-MOOC and AI-Stanford-like MOOC (Rodriguez, 2012) vs. cMOOC and xMOOC (Daniel, 2012)).

The authors also pointed out problems in MOOC studies, including a lack of ethical consideration in using publicly available data and the neglect of data existing in virtual spaces other than the LMS.

Data Collection

Duration: 2008-2012
No. of Articles: 46 including 2008 (1), 2009 (1), 2010 (7), 2011 (11), 2012 (26).
Sources: journals (17), conferences (13), magazines (10), reports (3), workshops (2)

My comment:

Few articles were collected - only 46 in total. This seems hard to believe, since 2012 was called "the year of the MOOC" and so many MOOCs attracted so much discussion. One reason for the small collection could be that the authors only considered articles with the term "MOOC" in the title or abstract.

Liyanagunawardena, T. R., Adams, A. A., & Williams, S. A. (2013). MOOCs: A systematic study of the published literature 2008-2012. The International Review of Research in Open and Distance Learning, 14(3), 202–227.

Title: The Maturing of the MOOC: Literature Review of Massive Open Online Courses and Other Forms of Online Distance Learning
Author: Haggard (2013), for the Department for Business, Innovation and Skills

Description:


  • The review mainly examined the topic of the literature.
  • "mere completion is not a relevant metric, that learners participate in many valid ways, and that those who do complete MOOCs have high levels of satisfaction." (Haggard, 2013, p. 6)
  • The literature review concludes that "after a phase of broad experimentation, a process of maturation is in place. MOOCs are heading to become a significant and possibly a standard element of credentialed University education, exploiting new pedagogical models, discovering revenue and lowering costs." (p. 5)

Data Collection

One hundred known and recent pieces of literature on MOOCs and open distance learning were reviewed. Three categories of literature were collected: individual polemical articles discussing the impacts of MOOCs on educational institutions and learners, formal and comprehensive surveys, and general press writing and journalism.

Department for Business, Innovation and Skills. (2013). The Maturing of the MOOC: Literature review of massive open online courses and other forms of online distance learning (BIS Research Paper No. 130) (p. 123). London, UK: Department for Business, Innovation & Skills.

Wednesday, September 18, 2013

MOOC Discussion Forums


Revisited the literature on interaction theory to refresh my memory. After all, it's been three years since I studied the topic as an assignment project in EDDE 801. The revisit did not bring me any new ideas or findings.

I routinely check the newsletters from OLDaily and MOOC.ca every day. An article from Phil Hill, an education technology consultant, led me to a list of articles. Hill argued that discussion forums in MOOCs (massive open online courses) are centralized and act as barriers to student engagement. As he quoted from Robert McGuire (Sept. 3, 2013):

"Most MOOC discussion forums have dozens of indistinguishable threads and offer no way to link between related topics or to other discussions outside the platform. Often, they can’t easily be sorted by topic, keyword, or author. As a result, conversations have little chance of picking up steam, and community is more often stifled than encouraged."

Hill also supported his argument with studies from MIT and Stanford University, which pointed out that fewer than 3% of MOOC participants posted in discussion forums unless they were credit earners.

I haven't read these articles carefully yet, but they reminded me of Rivard's report on MOOC dropout rates (March 8, 2013). Rivard argued that it may not make sense to compare the number who register with the number who finish, because different kinds of people sign up for the online classes with different goals. "Some clearly do not intend to ace or even take every test, nor want to earn a largely meaningless certificate of completion." (Rivard, 2013) MOOC participants are a diverse population who come with various goals in mind. Some faculty members enroll in MOOCs because they want to watch how other faculty teach their subject. This group of participants is unlikely to do the assignments and is likely to drop out at any time - they drop out because they never intended to complete the course in the first place.

The same reasoning might also apply to the problem of low posting rates in MOOC discussion forums. Some participants just want to be lurkers; by checking into the forums and viewing the discussions, they get what they need. What I am trying to say is that a study of MOOCs might do well to begin by dividing participants into several subgroups based on their individual goals for enrolling in the MOOC. Then it would be more meaningful to continue discussing learner behavior.

Su-Tuan Lulee