Friday, October 11, 2013

Knowing Knowledge - A Summary of the Book

I have spent several days reading Knowing Knowledge (Siemens, 2006) - a book that conceptualizes learning and knowing in Connectivism. The cMOOC, as a form of e-learning, is built upon the theory of Connectivism; therefore it is necessary for me to read the book in order to study MOOCs. The following are concepts/quotations that might be closely related to my dissertation study - critical thinking in MOOCs.

Overview

  • We should consider learning as a chaotic system and study it the way chaotic systems are studied - in terms of the meaning of knowing, interaction/communication patterns, etc.
Connectivism is
  • "the assertion that learning is primarily a network forming process" (p. 15). Connectivism considers knowledge connective.

Knowledge flow cycle (p.6)
  • Co-creation: build on/with the work of others
  • Dissemination: analysis, evaluation, and filtering elements through the network
  • Communication of key ideas
  • Personalization: internalization, dialogue, or reflection
  • Implementation: action, feedback

Connective knowledge networks possess four traits  (p. 16):
  • Diversity. Is the widest possible spectrum of points of view revealed?
  • Autonomy. Were the individual knowers contributing to the interaction of their own accord, according to their own knowledge, values and decisions, or were they acting at the behest of some external agency seeking to magnify a certain point of view through quantity rather than reason and reflection?
  • Interactivity. Is the knowledge being produced the product of an interaction between the members, or is it a (mere) aggregation of the members’ perspectives?
  • Openness. Is there a mechanism that allows a given perspective to be entered into the system, to be heard and interacted with by others?
Learning and knowledge environment
  • democratic and diverse (p.47)
  • dynamic and capable of evolving, adapting, and responding to external change


Stages of Knowledge Construction (p.45)
Connectivism's staged view of how individuals encounter and explore knowledge in a networked/ecological manner (from the most basic moves to the more complex):
  1. Awareness and receptivity: acquire, access.
  2. Connection forming: form network, filter, select, add.
  3. Contribution and involvement: become a visible node, be acknowledged by others, reciprocate, share.
  4. Pattern recognition: recognize emerging patterns and trends.
  5. Meaning-making: act/reform viewpoints/perspectives/opinions.
  6. Praxis: tweak, build, recreate one's network/meta-cognition, reflect, experiment, act, evaluate.
Components of a knowledge sharing environment (p.87)

  • Informal, not structured
  • Tool-Rich
  • Consistency and time
  • Trust
  • Simplicity
  • Decentralized, fostered, connected
  • High tolerance for experimentation and failure

Characteristics that are required in an effective ecology (p. 90):

  • a space for gurus and beginners to connect,
  • a space for self-expression,
  • a space for debate and dialogue,
  • a space to search archived knowledge,
  • a space to learn in a structured manner,
  • a space to communicate new information and knowledge indicative of changing elements within the field of practice (news, research), and 
  • a space to nurture ideas, test new approaches, prepare for new competition, pilot processes.

Ecologies are nurtured and fostered… instead of constructed and mandated.

Skills our learners need (p. 113):

  • Anchoring. Staying focused on important tasks while undergoing a deluge of distractions.
  • Filtering. Managing knowledge flow and extracting important elements.
  • Connecting with each other. Building networks in order to continue to stay current and informed.
  • Being Human together. Interacting at a human, not only utilitarian, level…to form social spaces.
  • Creating and deriving meaning. Understanding implications, comprehending meaning and impact.
  • Evaluation and authentication. Determining the value of knowledge…and ensuring authenticity.
  • Altered processes of validation. Validating people and ideas within appropriate context.
  • Critical and creative thinking. Questioning and dreaming.
  • Pattern recognition. Recognizing patterns and trends.
  • Navigate knowledge landscape. Navigating between repositories, people, technology, and ideas while achieving intended purposes.
  • Acceptance of uncertainty. Balancing what is known with the unknown…to see how existing knowledge relates to what we do not know.
  • Contextualizing (understanding context games). Understanding the prominence of context…seeing continuums…ensuring key contextual issues are not overlooked in context-games.


Siemens, G. (2006). Knowing Knowledge. Retrieved from http://www.elearnspace.org/KnowingKnowledge_LowRes.pdf

Tuesday, October 8, 2013

Online Discussion, Student Engagement, and Critical Thinking - Annotated Bibliography

Williams, L., & Lahman, M. (2011). Online Discussion, Student Engagement, and Critical Thinking. Journal of Political Science Education, 7(2), 143–162. doi:10.1080/15512169.2011.564919

The authors, professors at Manchester College, use data from both advanced and lower-level undergraduates enrolled in traditional classroom-based general education courses to test the usefulness of their content analysis tool for identifying student engagement and critical thinking in an online discussion forum. They found the tool, merged and refined from existing content analysis protocols, effective: "We were able to code a large amount of written material in a reliable fashion" (p. 159). The authors also claimed to have replicated and demonstrated the link between student interaction and critical thinking.

The main focus of the article was to report the development of a content analysis tool and how the tool performed in its initial implementation. Although the authors demonstrated an interesting combination of different tools for content analysis, it is debatable whether the "hybrid coding scheme" (p. 150) actually retained the advantages of the existing tools developed by previous researchers while improving ease of use and reliability. Is the tool "just right" (p. 146) in specificity (mutually exclusive categories) and reliability, and does it have enough categories to reflect the characteristics of the discussion (exhaustiveness), as claimed? The following potential problems were identified:
  • Loss of uniformity in the coding scheme. Each researcher developed their tool ("coding scheme" or "protocol", as in the article) from a selected angle; that is how a scheme can meet the fundamental qualifications of being exhaustive and mutually exclusive as a coding tool. When different coding schemes were merged, the uniformity of each scheme was broken. As a result, the hybrid scheme became neither very exhaustive nor very exclusive.
  • In the hybrid tool, the dimensions of interaction (p. 150) were mainly derived from the TAT (Fahy, 2005). Unfortunately, the authors overlooked that the TAT was developed to improve discriminant capability and reliability. To achieve those goals, the TAT strives to reduce the number of coding categories and takes the sentence as the unit of analysis. The hybrid tool seems to be designed in reverse: not only were the categories intertwined, but the coding rules also contradicted the purpose of using the sentence as the unit of analysis (each sentence could be coded into as many categories as the coders wished). This eliminated the best part of the TAT and reintroduced unit 'boundary overlap' into an otherwise easy-to-identify unit - an issue that often occurs when the message or the meaning is used as the unit of analysis.
  • The intention to draw a clear line between interaction and critical thinking is questionable. First, critical thinking is a component and an outcome of interaction (Fahy, 2005), so the two are difficult to divide cleanly. Second, the categorizations of interaction and critical thinking described in the study were confusing - they often overlapped with each other.
The mean reliability scores reported by the study were 0.55 to 0.70 (p. 154), much lower than the 0.70 to 0.94 reported for the TAT in previous research (Fahy, 2005). The first two potential problems listed above might explain part of the gap.
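
Side note for my own coding work later: inter-rater agreement on sentence-level codes can be checked with, for example, Cohen's kappa. The sketch below is a minimal illustration with made-up coder labels; it is not necessarily the metric Williams and Lahman used.

    # Minimal sketch: Cohen's kappa for two coders' sentence-level codes.
    # Category labels and data are hypothetical, for illustration only.
    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        """Cohen's kappa for two equal-length lists of category labels."""
        n = len(codes_a)
        observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        # Chance agreement from each coder's marginal category frequencies.
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                       for c in set(codes_a) | set(codes_b))
        return (observed - expected) / (1 - expected)

    coder1 = ["referential", "questioning", "reflective", "referential", "scaffolding"]
    coder2 = ["referential", "questioning", "referential", "referential", "scaffolding"]
    print(round(cohens_kappa(coder1, coder2), 2))  # 0.71 on this toy data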

The category frequencies produced by the hybrid model also differed greatly from the original TAT report (Fahy, 2005) in the percentage of Referential Statements. While Referential Statements comprised 60.0% of the sentences in this study, they were only 10.2% in the original study (Fahy, 2005). The difference might be caused by the different research contexts on which the two studies were based: two one-week discussion periods in this study versus a full 13-week course in Fahy's study. The former focused narrowly on providing critical comments on given essays, while the latter contained diverse learning situations, so it is reasonable that students discussed differently in the two contexts. The finding points to a possible conclusion that there is no single best content analysis tool for all research contexts in terms of discriminant capability; researchers might need to modify existing tools to fit a particular research context.

Despite the issues mentioned above, the study presented a concise summary of the most cited tools for transcript analysis in computer-mediated communication (CMC). It provided a clear guide for readers to the who, when, and what of the study of student interaction in CMC.

Fahy, P. (2005). Two Methods for Assessing Critical Thinking in Computer-Mediated Communications (CMC) Transcripts. International Journal of Instructional Technology and Distance Learning.

Saturday, October 5, 2013

Complexity

I enrolled in a MOOC - Introduction to Complexity. Complexity is one of the theoretical foundations of Connectivism, on which MOOCs are based, so I think I need to gain an understanding of the theory.

Besides the basic theory of Complexity, I will also learn how to use NetLogo - a piece of software for modeling and analyzing complex systems. So far so good.
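
One classic toy example of chaotic dynamics that introductory complexity courses typically cover is the logistic map. Below is a minimal Python sketch (r = 4.0 and the starting values are illustrative choices of my own, not from the course) showing how two nearly identical starting points quickly diverge - the sensitive dependence on initial conditions that makes such systems hard to predict.

    # Minimal sketch: the logistic map x_{n+1} = r * x_n * (1 - x_n).
    # With r = 4.0 the map is chaotic: two trajectories that start almost
    # identically diverge after a few dozen steps.
    def logistic_trajectory(x0, r=4.0, steps=30):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = logistic_trajectory(0.200000)
    b = logistic_trajectory(0.200001)  # starting point differs by only 1e-6
    for n in (0, 10, 20, 30):
        print(n, round(a[n], 6), round(b[n], 6), round(abs(a[n] - b[n]), 6))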