Sunday, March 30, 2014

Video Data Analysis

Silver & Patashnick:
At the beginning of the chapter, Silver and Patashnick point out that fidelity means different things for data and for research tools. Fidelity of data refers to how faithfully the data represent the actual situation in the research setting, whereas fidelity of tools refers to their ability to handle complex, high-fidelity data. They also raise the issue that video data may not be sufficient for capturing emotional tone. This happened often in my own data collection: the perspective of the data collector and the angle of the camera largely determine how well emotional tone is captured.

Similar to the notion of researchers' choice, Silver and Patashnick point out that it is the researcher's decision whether to transcribe before diving into analysis; some studies do not need a transcript prepared before formal analysis begins. One complication with preparing transcripts is that a single segment of video can be analyzed in multiple ways, and the research questions determine the transcription style. For example, some studies do not need to attend to non-verbal activity, whereas others do. Another example of how the research goal shapes transcription style is whether time stamps are necessary. In my research field they are: we code video data against a pre-defined theoretical framework and use the codes for quantitative analysis, which demands precision, so there are times when we need to go back to the raw video to examine something. Although time stamps make the data harder to organize, we still need them; a rough sketch of what time-stamped codes make possible follows.
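To make the need for time stamps concrete, here is a minimal, purely hypothetical sketch in Python of how time-stamped codes might be stored so that frequency counts can still point back to the exact stretch of raw video. The file names, codes, and structure are my own illustration, not the format of any particular tool.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class CodedSegment:
    video_file: str   # which raw video the segment comes from
    start: float      # time stamp (seconds) where the segment begins
    end: float        # time stamp (seconds) where the segment ends
    code: str         # code from the pre-defined theoretical framework

# Hypothetical coded segments; in practice these would come from the coding pass.
segments = [
    CodedSegment("group3_session1.mp4", 12.0, 45.5, "explanation"),
    CodedSegment("group3_session1.mp4", 45.5, 60.0, "off-task"),
    CodedSegment("group3_session2.mp4", 5.0, 30.0, "explanation"),
]

# Frequencies for the quantitative step.
frequencies = Counter(s.code for s in segments)
print(frequencies)  # Counter({'explanation': 2, 'off-task': 1})

# The time stamps let us locate the exact stretch of raw video behind any code.
for s in segments:
    if s.code == "explanation":
        print(f"check {s.video_file} from {s.start}s to {s.end}s")
```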

They also point out that qualitative research tools can do more than assist with coding; they can help with data integration, organization, exploration, and interpretation.

Coding for retrieval is a new idea for me, perhaps because we have mostly used coding to categorize data. It is a good way to organize data, and it also points to the notion that computer-assisted tools can serve purposes beyond coding and categorizing, as the sketch below illustrates.
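As a rough illustration of code-and-retrieve, the hypothetical sketch below builds a tiny set of coded excerpts and pulls back everything tagged with a given code; the excerpt IDs, texts, and codes are invented for the example.

```python
# Hypothetical coded excerpts: (excerpt_id, text, codes applied to it).
coded_excerpts = [
    ("int1_p3", "I finally saw why the graph levels off.", {"insight", "representation"}),
    ("int1_p7", "We just copied what the other group did.", {"off-task"}),
    ("int2_p2", "Drawing it out helped me explain it.", {"representation"}),
]

def retrieve(code):
    """Return every excerpt tagged with the given code."""
    return [(eid, text) for eid, text, codes in coded_excerpts if code in codes]

print(retrieve("representation"))
# [('int1_p3', 'I finally saw why the graph levels off.'),
#  ('int2_p2', 'Drawing it out helped me explain it.')]
```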

Woods & Dempster:
At the beginning of the article, they mention the complexity of qualitative research as well as the large number of analysis tools available. They highlight that although there are many qualitative analysis tools, these tools do not alleviate the complexity of qualitative analysis.

The major argument they make in the study is that Transana handles complex qualitative data sources well, which is certainly true. However, in the section on how to juxtapose two different sorts of transcripts, they leave it to researchers' authority to decide how to organize them.


In the article, the way Dr. Woods segmented the screen-capture video shed light on my own video analysis experience. Although our video data were not as complex as those discussed in the article, we used multiple data sources, such as students' worksheets and lecture notes. Our goal was a coordinated account of each student's learning trajectory as reflected in their assessments. To do this, we first located hints of progress between two assessments, then looked for similar conceptual growth in students' worksheets, and finally found corroborating evidence in the video. This was also a complex process. If I had read this article earlier, I would have segmented the video into chunks and added a memo for each part, which would at least have eased the pain of searching for the right piece of video to watch. A rough sketch of such a segment-plus-memo structure follows.
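This is only a minimal sketch of what I have in mind; the chunk boundaries, file names, and memo text are invented for illustration.

```python
# Hypothetical video chunks with memos; names are my own, not from the article.
video_chunks = [
    {"video": "lab_day1.mp4", "start": "00:00", "end": "06:30",
     "memo": "students set up the simulation; early misconception about rate"},
    {"video": "lab_day1.mp4", "start": "06:30", "end": "14:00",
     "memo": "first hint of progress matching assessment item 4"},
]

def find_chunks(keyword):
    """Return chunks whose memo mentions the keyword, narrowing the search in raw video."""
    return [c for c in video_chunks if keyword.lower() in c["memo"].lower()]

for c in find_chunks("progress"):
    print(c["video"], c["start"], "-", c["end"], ":", c["memo"])
```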

Thursday, March 27, 2014

After Class Reflection_0326

After the class, I have a deeper understanding of how different research purposes can affect tool selection. Unlike some other computer-assisted qualitative analysis tools, NVivo expects a pre-defined, hierarchical coding scheme. This tends to exclude codes that emerge from the data, and it makes the tool a poor fit for exploratory studies that begin without pre-defined codes.

In class we then discussed the risks of having pre-defined codes. Rebecca raised the point that pre-defining codes might be risky because the data may show patterns different from the codes. This does not bother me much: coding is an interpretive, trial-and-error process. I agree that coding should proceed in several rounds, with the second round complementing what the first round did not achieve. I also think the value of a pre-defined coding scheme lies in having a clear theoretical framework ready before conducting the study.


From the small-group discussion, I found that different fields favor different research methods. In the learning sciences, most work uses coding to quantify qualitative data, while some qualitative studies code for interpretive purposes. We also talked about the necessity of mixed-methods research.

Sunday, March 23, 2014

Codes and Coding

Saldana:
A code is a word or short phrase that assigns a summative attribute to a portion of qualitative data. The first cycle of coding can be coarser-grained than later rounds. Coding is not a precise science; it is an interpretive endeavor. This differs from my own view of coding: the method introduced here focuses more on summarizing the data than on categorizing it and assigning nominal codes. Another difference between my work and the coding methods introduced here is simultaneous coding. We quantify qualitative data and count code frequencies, so we rarely use embedded or overlapping codes, which would cause trouble when counting frequencies, as the small example below suggests.
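Here is a small, hypothetical example of why simultaneous codes complicate counting: when one segment carries two codes, the number of code applications no longer matches the number of segments. The segments and codes are invented for illustration.

```python
from collections import Counter

# Hypothetical transcript segments; segment 2 carries two simultaneous codes.
segments = [
    {"id": 1, "codes": ["question"]},
    {"id": 2, "codes": ["question", "explanation"]},  # simultaneous coding
    {"id": 3, "codes": ["explanation"]},
]

# Counting every code application: 4 applications across only 3 segments.
per_application = Counter(code for s in segments for code in s["codes"])
print(per_application)  # Counter({'question': 2, 'explanation': 2})
print(sum(per_application.values()), "applications vs", len(segments), "segments")

# With mutually exclusive codes the two totals would match, which is why we
# avoid embedded codes when the goal is a clean frequency table.
```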

There is one similarity between our way of coding and what qualitative researchers do: coding is a process of judgment, of assigning researchers' perceptions to data; it is a process of re-interpretation. If we approach this from the perspective of the active role researchers should take, getting more involved in the research process yields more insight into the data. Coding is therefore not a labeling process but a linking process. I like the metaphor in the article that coding generates the bones of the analysis, while the integration of codes connects the bones together. One practice qualitative researchers share with my own work is refining the content and categories of the codes with each round of coding; to refine our codes, we do some pilot coding and try them on a sample of data.

Another prominent difference from our own work is theory induction. We build coding schemes by adopting or adapting existing theoretical frameworks, which is more of a top-down approach, whereas the method introduced in this article is more bottom-up.

I fully agree with the point that some manual coding should come before moving to software; otherwise our energy is wasted on learning how to use the computer tools.

Konopasek:
The major argument of this paper is that computer tools can do more than represent what happens in researchers' minds; they do not merely replicate what researchers did before technology. They externalize researchers' thinking and help make qualitative research explicit. The traditional view, that qualitative analysis is implicit and resides in researchers' minds, makes it seem hard to teach and treats qualitative research as a practice of "reading the data". The author's main goal in this article is to explain how computer tools can change the impression of qualitative research as something implicit, conducted invisibly within researchers' minds.

I am fine with the point about externalizing the process of qualitative research, which is helpful for me as a novice. However, I wonder whether over-relying on coded data, in Konopasek's words, separating the coded data from the raw data, would cause problems of abstraction and over-generalization, which goes against the character of qualitative study as a practice embedded in rich context.

I kept making connections to our earlier discussion about the roles of technological tools.

Making qualitative research methods explicit and accessible to new researchers links back to the questions we discussed at the beginning of this course. It also risks equating qualitative research with code-and-retrieve, which in turn de-emphasizes other qualitative research methods. Moreover, I am not sure whether Konopasek used Atlas.ti only as one example to illustrate his points, given how strongly he argues in the article that Atlas.ti can do more than extend researchers' minds.

Sunday, March 9, 2014

Transcribing Data

Hammersley:
I found the discussion of whether data are obtained naturally from the environment or produced through researchers' interpretation quite interesting. It connects to our earlier readings about whether technology puts distance between researchers and data. Foundationalism assumes that data "grow" organically from the environment, in Hammersley's words, that data are "given" to researchers, and that data are "protected" from any "pollution". However, in every discipline data are used to make inferences, and in research reports researchers need to "polish" their data to make them accessible to readers. This means researchers cannot simply present raw data; data need to be refined, or produced. This connects to Hammersley's later point about constructing transcripts rather than reproducing them: transcribers impose their interpretation of cultural aspects on the data they transcribe, and they transcribe according to their understanding of what the speaker might have meant.

However, Hammersley notes later in the article that neither givenness nor construction alone captures the complexity of transcribing; transcription needs both. The interdependence is that construction relies on givenness, which is not hard to understand: researchers are exposed to the raw data first. Even strict transcription is a combination of the given and the constructed.

Extending the notion of strict transcription versus description, Hammersley points out that the two are also interrelated: for the research report, the researcher interprets meanings from strict transcriptions and turns them into descriptions.

This notion connects well to the idea that technology puts distance between researchers and data. When researchers transcribe for themselves, they construct new meanings but still have direct access to the raw data; when technology processes the raw data for them, they are fed data that have already been constructed.

Markle, West & Rich:
They argue that presenting transcripts and conversation analysis is not enough to convey the atmosphere of the real setting, especially participants' emotions. They point to a newer tool called VITAL, which allows researchers to insert video excerpts into a research report so that readers can "watch" the video rather than read secondary data produced by researchers. By the end, Markle and his co-authors point out the importance of attaching the original data to the report, which gives other researchers the opportunity to assess the analysis. However, this raises the problem of protecting participants' identities.


They also raise the issue that transcribing is never a neutral activity; transcribers inevitably bring their own understandings to it. Connecting this with Hammersley, there should be a continuum with givenness at one end, construction at the other, and transcription somewhere in between.

Sunday, March 2, 2014

Collecting Data from Internet

Carmicheal’s
At the beginning of the chapter, Carmicheal points out a limitation of secondary analysis: the secondary analyst does not share the experiences of the primary researchers. In later sections, however, he describes databases and archives that make the reanalysis of secondary data possible. Before reading this chapter closely, I thought secondary data analysis shared many similarities with meta-synthesis, but it now appears different, since secondary analysis does not involve synthesizing findings across primary studies; researchers reanalyzing a secondary data set might use one set of data but aim to answer different research questions. Still, I am a bit skeptical of secondary analysis, especially for ethnographers. In qualitative research, researchers are participants; as in a situated learning environment, ethnographic observations are not independent of the research context, yet secondary analysts cannot situate the study back in its original context.

Carmicheal also points out the value of sharing meta-codes so that other researchers can follow the analysis. Connecting this to my own experience of using Dedoose for a paper meta-synthesis, Dedoose does a good job of presenting meta-codes clearly to researchers, and the codes can be exported as separate files along with most of the data.

Hine:
At the beginning of the article, Hine suggests that the internet provides opportunities for researchers to engage in ethnographic studies. The internet should not be treated as a separate social sphere; rather, it is one dimension of a multi-dimensional social life. Ethnographers can therefore take advantage of online communities to conduct observations in natural settings. Communities organized in the virtual world present researchers with a richness and breadth of social practice that statistics alone would not reveal. In my research field, computer-supported collaborative learning, some researchers use asynchronous online discussion forums as their major data source; some do discourse analysis and some employ a code-and-count method. In an online community, participants' practices are situated within a particular cultural environment, a point Hine also raises, which requires researchers to integrate the cultural aspect into their analysis, for example by considering the cultural dimension in discourse analysis. Hine also questions content analysis that simply extracts patterns: given how social structure and culture are woven into the community, drawing patterns out of the data in isolation might be problematic.

Connecting with Carmicheal's work, in which he proposes that internet activity is one dimension of a multi-dimensional social life, Hine points out that social network analysis connects online activity well with offline activity.

Garcia et al:
Garcia and her co-authors propose that researchers act as experiencers rather than observers. Accordingly, many studies suggest that researchers should act as participants or members of a community, or take a lurking role, to study its cultural issues. However, I am a bit worried that the experiential role researchers take would affect their observational stance: although they would gain a deeper understanding of the community, acting as participants rather than observers means they report their experience of participation rather than their observations.

Garcia and her co-authors also point out the importance of integrating multiple sources of data, such as combining virtual data with textual data. This connects back to Carmicheal's notion of triangulating different data sources.