Sunday, March 30, 2014

Video Data Analysis

Silver & Patashnick:
At the beginning of their piece, Silver & Patashnick pointed out that fidelity means different things for data and for research tools. Fidelity of data refers to how faithfully the data represent the real situation in the research setting; fidelity of tools refers to their ability to handle complex, high-fidelity data. They also raised the issue that video data may not be sufficient for capturing emotional tone. This happened a lot in my data collection experience: the perspective of the data collector and the angle of the video recorder determine how well emotional tone is captured.

Similar to the notion of researchers' choice, Silver and Patashnick pointed out that it is the researchers' decision whether to prepare transcriptions before diving into the research process. Some studies don't need transcripts prepared before formal analysis begins. One problem with preparing transcripts is that there are multiple ways to analyze a single segment of video, and the research questions determine the transcription style. For example, some studies don't need to pay attention to non-verbal activity, whereas others do. Another example of how the research goal shapes transcription style is whether it is necessary to add time stamps. In my research field, it is necessary: we code video data with a pre-defined theoretical framework, and we code for quantitative analysis, which requires us to be precise about the coding. There are also times when we need to go back to the raw video to re-examine something. Although time stamps make the organization of the data harder to manage, we still need them.
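As a rough illustration of what time-stamped coding against a pre-defined framework might look like, here is a minimal Python sketch. The code names, times, and counting step are all hypothetical, not taken from our actual framework:

```python
from dataclasses import dataclass

# Hypothetical framework codes; the names are illustrative only.
FRAMEWORK = {"explains", "questions", "off-task"}

@dataclass
class CodedSegment:
    start: float  # seconds into the video
    end: float
    code: str

    def __post_init__(self):
        # Precision requirement: reject codes outside the framework.
        if self.code not in FRAMEWORK:
            raise ValueError(f"unknown code: {self.code}")

segments = [
    CodedSegment(12.0, 30.5, "explains"),
    CodedSegment(30.5, 41.0, "questions"),
    CodedSegment(95.0, 120.0, "explains"),
]

def counts(segs):
    """Tally code frequencies for quantitative analysis."""
    tally = {}
    for s in segs:
        tally[s.code] = tally.get(s.code, 0) + 1
    return tally

print(counts(segments))  # {'explains': 2, 'questions': 1}
```

Because every coded segment keeps its start and end times, going back to the raw video to verify a code is just a matter of seeking to `start`.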

They also pointed out that qualitative research tools can do more than assist with coding: they can help with data integration, organization, exploration, and interpretation.

Coding for retrieval is an innovative view for me. This might be because we have used coding mostly for categorizing data. Retrieval coding is a good way to organize the data, and it also points to the notion that computer-assisted tools can be used for purposes beyond coding and categorizing.
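The idea of coding for retrieval can be sketched as a simple index: instead of a code only labeling a category, each code maps back to the time ranges where it occurs, so you can jump straight to those moments in the raw video. The clips and code names below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical coded clips: (start_seconds, end_seconds, code).
clips = [
    (12.0, 30.5, "gesture"),
    (30.5, 41.0, "explanation"),
    (95.0, 120.0, "gesture"),
]

# Build a retrieval index: code -> list of time ranges to revisit.
index = defaultdict(list)
for start, end, code in clips:
    index[code].append((start, end))

# Retrieve every place in the video where "gesture" was coded.
print(index["gesture"])  # [(12.0, 30.5), (95.0, 120.0)]
```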

Woods & Dempster:
At the beginning of the article, they mentioned the complexities of qualitative research as well as the abundance of analysis tools. They highlighted that although there are many qualitative analytical tools, the tools do not alleviate the complexities of qualitative analysis.

The major argument they made in the study was that Transana does a good job of analyzing complex qualitative data sources, which is definitely true. Notably, in the section on how to juxtapose two different sorts of transcripts, they left it to the researchers' authority to determine how to organize them.


In the article, the way Dr. Woods segmented the screen-capture video shed light on my own video analysis experience. Although we didn't have video data as complex as what was discussed in the article, we used multiple sources of data, such as students' worksheets and lecture notes. Our goal was to build a coordinated account of students' learning trajectories, as reflected in their assessments. To do this, we first located hints of students' progress between two assessments. Next, we looked for similar conceptual growth in students' worksheets. Finally, we found corroborating evidence in the video. This was also a complex process. If I had read this article earlier, I would have segmented the video into chunks and added memos for each part. That at least would have alleviated the pain of searching for the right stretch of video to look at.
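A minimal sketch of the chunk-plus-memo organization I wish I had used, assuming invented chunk labels and memo text:

```python
from dataclasses import dataclass, field

@dataclass
class VideoChunk:
    start: float                        # seconds into the video
    end: float
    label: str                          # short description of the chunk
    memos: list = field(default_factory=list)

chunks = [
    VideoChunk(0.0, 300.0, "warm-up task"),
    VideoChunk(300.0, 900.0, "group work on worksheet"),
]

# Attach an analytic memo to a chunk as you watch.
chunks[1].memos.append("Possible conceptual growth: compare with worksheet Q3")

def search_memos(chunks, keyword):
    """Find chunks whose memos mention a keyword, instead of rewatching everything."""
    return [c.label for c in chunks if any(keyword in m for m in c.memos)]

print(search_memos(chunks, "worksheet"))  # ['group work on worksheet']
```

With memos attached to time-bounded chunks, locating the "appropriate video to look at" becomes a search over memos rather than a rewatch of the whole recording.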

1 comment:

  1. Really nice connections to your own research study, Yawen! I really enjoyed thinking through the links between your research process and the articles. So, does your research team engage in direct coding or do you typically work directly with transcripts?
