Sunday, April 20, 2014

Collaboration Between Teammates

Manderson et al
At the beginning of the article, Manderson and his co-authors made a distinction between qualitative and quantitative research instruments. They mentioned that for well-designed quantitative research, as long as instruments are reliable and interpretations are valid, there are limited opportunities for varying interpretations. For qualitative studies, this may not be true: the advantage of qualitative study is its richness, so interpretations are situational and hard to "transfer" between researchers and situations. However, this raises issues of validity, reliability, and interpretability of qualitative data, since different methods of interpreting and using raw data would produce different qualitative results. Therefore, in this article, Manderson and his co-authors proposed what they called a "structured method of analysis," which features highly structured methods of analysis with "explicit codes and categories." In this way, qualitative data that involves interpretation is partially transformed into quantitative data. The pitfall of this method is that it raises the issue of de-contextualizing qualitative data. Ross described one alternative way to address this problem: sorting the files into categories for further analysis.
Another issue caused by multiple researchers in a team is the difficulty of assuring confidentiality of data. This might require the principal investigator to ensure that every researcher has cleared IRB certification.

Sin:
As a continuation of Manderson’s notion of “structured method of analysis,” Sin described an evaluation study using mixed methods that tried to quantify qualitative data. However, for improving the validity of interpretations, quantifying qualitative data alone is not enough. In the article, Sin reflected that it is necessary to have clear codes if the project involves collaboration in a team. This doesn’t mean, however, that the codes will not change after team members dig into the data. Therefore, researchers adopt pilot coding to test whether a coding tree is reliable enough. The coding process described in Sin’s work shares many similarities with my research experiences. Given the collaborative nature of my group, preliminary and ambiguous coding schemes, especially at the beginning, would quite likely cause confusion and varying understandings between collaborators. Therefore, we tend to develop very clear codebooks together with the coding scheme. This appears to be an effective way to reduce ambiguity among researchers. After we have clear descriptions in the codebook, we move to pilot coding. What we usually do in our group is take 10% of the data and code it based on the preliminary coding scheme. This process also involves adding more “grounded” codes that emerge from the data.
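Pilot coding like this can also be checked numerically. Below is a minimal sketch, with invented codes and coders (nothing from the article), of computing Cohen's kappa, one common inter-rater agreement statistic, on a small pilot sample:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of segments with identical codes.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders apply the preliminary scheme to the same pilot segments.
coder_a = ["help-seeking", "off-task", "explaining", "explaining", "off-task"]
coder_b = ["help-seeking", "off-task", "explaining", "help-seeking", "off-task"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.71
```

A low kappa on the pilot sample would signal that the codebook descriptions still leave room for varying interpretations and need another revision round.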
How software is used is determined by research conditions rather than by the functions of the software. This relates to the point that software doesn’t analyze the data; people do. Another misplaced expectation behind software use is the assumption that computer tools will analyze the data quickly; they will not analyze it as quickly as people expect.
N6 enables researchers to combine quantitative and qualitative methods. However, researchers don’t have settled answers concerning what is being quantified and what types of quantitative analysis are most appropriate.

Barry:

Unlike the other two studies, which tried to quantify qualitative data with the aim of improving reliability among researchers in a group, Barry approached this problem from the perspective of reflexivity practices. Reflexivity is defined as the awareness of the researcher’s own presence in the research process. The major part of this study described how Barry’s team constructed reflexive accounts. Two tools were used: one was a narrative recording individual researchers’ reflexivity, and the other concerned definitions of key theoretical concepts. The first tool involved researchers sharing their positions and preferred theoretical frameworks. The second involved reflecting on theoretical stances and how those would guide study design.

Thursday, April 17, 2014

Reflections After Dr. Paulus Talk

Personally, I really enjoyed Dr. Paulus’s presentation on Tuesday. She gave us a more holistic view of carrying out a project in ATLAS. I like the point she made at the beginning that analytical tools don’t do the analysis for you; researchers are still the ones who carry out the analysis. Moreover, she emphasized that using ATLAS does not mean adopting grounded theory. ATLAS does support coding and segmenting, but this doesn’t mean researchers need to use ATLAS for coding, segmenting, and retrieving. Sometimes researchers code and segment simply to organize their data well. As a new user of ATLAS, I use it a lot for organizing data.

Dr. Paulus mentioned that using analytical tools makes the research process more transparent. I already experienced this when I was doing my skill builder assignment, where I tried to integrate analytical tools into the case study I have been working on. Although I still need to locate evidence from each data source myself to triangulate my results, linking evidence across multiple files became easier and more transparent. Being transparent means it is easier to trace back in the future.


Dr. Paulus noted that ATLAS also supports literature review. I am excited to explore this more, maybe in my newsletter column assignment. This is really important for qualitative researchers. For a literature review, there is no need to review literature that is not relevant to the study. However, to join the conversation of researchers in the field, one needs to show how the current study relates to and addresses issues in the literature. Using the function of linking multiple files in one project, connecting the literature review with the analysis process becomes easier.

Sunday, April 13, 2014

Debates of Qualitative Tools

In the section on the size of databases, Taylor and her co-authors mentioned that it is not qualitative research tools that lead to the trend of collecting large datasets; sometimes it is the research project itself that calls for a large dataset. This is quite true and evident in my daily research experiences. Our project adopted a concept called design-based research, through which we assessed students' learning not only from growth shown from pre- to post-test; we cared more about their learning process and treated the classroom as an ecology. This requires multiple sources of data, such as students’ participation patterns, students’ artifacts, and videos, which record their learning experiences. Having a large dataset doesn’t necessarily mean we adopt a shallow analytical method. We used pre- and post-tests to examine students’ overall learning patterns, and process data, such as artifacts and classroom video, to unveil the journey students took to arrive at the destination. Qualitative research tools do a nice job of organizing different kinds of data and linking between them, which would be time-consuming without analytical tools. However, it is still researchers who do the analytical work; analytical tools perform clerical and technical tasks.
Another feature of qualitative tools that amazes me most is how easy they make it to add new codes to an existing coding system when necessary. In manual coding, going back to the raw data as new codes are added is not uncommon, and it is tedious; qualitative tools expedite this process.
I agree that qualitative research tools make the analytical process more transparent. This is especially true when researchers need to link multiple datasets or make sense of a large dataset. For example, without tools, I would either need to remember the linkages between files or write them down. Given the importance of triangulating data sources, linking between files is important.

I am not worried that qualitative research tools could create distance between researchers and data. Although using qualitative research tools tends to popularize one analytical method, coding, doing coding and segmenting is sometimes a first step toward more advanced analysis. Therefore, technological tools should not be blamed. If a researcher wants to adopt simple "coding and counting" as his/her method, s/he would do that without technology.

Thursday, April 10, 2014

ATLAS.ti

I want to reflect on the features of ATLAS and how they relate to the reflexive nature of qualitative research. ATLAS supports preliminary coding activities through what it calls free codes. This is an important feature for qualitative researchers. Reflecting on my experiences of composing coding schemes, having a place to record preliminary codes is important and helpful. This might be especially true for researchers who base their analysis on grounded theory. Another feature of ATLAS that supports the coding process is that researchers don’t need to have the relationships among codes ready before they dig into the data. After they take a look at the data and develop codes, they can organize the relationships among codes using the networks ATLAS provides.
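To illustrate that idea of organizing relationships only after the codes exist, here is a small sketch. The codes and link types are invented for illustration and this is a plain data structure, not ATLAS's actual network format:

```python
# A code network: codes are nodes, typed links are (source, relation, target).
# The relation names ("is-a", "is-part-of") are illustrative only.
network = {
    ("collaboration", "is-part-of", "classroom ecology"),
    ("help-seeking", "is-a", "collaboration"),
    ("explaining", "is-a", "collaboration"),
}

def related_codes(network, code):
    """Return codes directly linked to the given code, in either direction."""
    return sorted({src for src, _, dst in network if dst == code} |
                  {dst for src, _, dst in network if src == code})

print(related_codes(network, "collaboration"))
# ['classroom ecology', 'explaining', 'help-seeking']
```

Because links are added one at a time, the structure among codes can stay open until the data has actually been explored.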

Besides, as I mentioned in my skill builder, compared with other qualitative research tools, ATLAS supports juxtaposing four documents within one project, which supports data triangulation for qualitative researchers.


The feature I love most about ATLAS is that it connects codes closely with documents. For video coding, it positions codes right next to the video. Although Dedoose highlights documents in different colors based on different codes, it doesn’t position them closely, which makes tracing back to the data especially difficult.

Sunday, March 30, 2014

Video Data Analysis

Silver & Patashnick:
At the beginning of the work, Silver & Patashnick pointed out that fidelity of the data and fidelity of research tools mean different things. Fidelity of data means how faithfully the data represent the actual situation in the research setting. Fidelity of tools refers to their ability to handle complex, high-fidelity data. They also brought up the issue that video data might not be sufficient for capturing emotional tone. This happened a lot in my data collection experience: the perspective of the data collector and the angle of the video recorder determine how well emotional tone is captured.

Similar to the notion of researchers’ choice, Silver and Patashnick pointed out that it is the researchers’ decision whether to transcribe before diving into the research process. Some studies don’t need transcripts prepared before formal research begins. One problem is associated with the preparation of transcripts: there are multiple ways to analyze one segment of video, and research questions determine different transcription styles. For example, some studies don’t need to pay attention to non-verbal activities, whereas some do. Another example of how research goals determine transcription style is whether it is necessary to add time stamps. In my research field, it is. We code video data with a pre-defined theoretical framework, and we code it for quantitative analysis, which requires us to be precise about the coding. Therefore, there are times when we need to go back to the raw data, the video, to examine something. Although time stamps make the organization of data harder to manage, we still need them.
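To show why time stamps matter for this kind of quantitative coding, here is a minimal sketch with invented segments and code names (not our actual scheme): each coded segment keeps its start and end time, so frequencies and durations can be computed while every count stays traceable back to the raw video.

```python
from collections import Counter

# Hypothetical timestamped coding log: (start_seconds, end_seconds, code).
segments = [
    (12.0,  45.5, "explaining"),
    (45.5,  80.0, "help-seeking"),
    (80.0, 130.0, "explaining"),
]

# Code frequencies for quantitative analysis.
frequencies = Counter(code for _, _, code in segments)

# Total time per code, from the time stamps themselves.
durations = {}
for start, end, code in segments:
    durations[code] = durations.get(code, 0.0) + (end - start)

print(frequencies["explaining"])  # 2
print(durations["explaining"])    # 83.5
```

The (start, end) pair behind each count points straight back to the footage, which is exactly what we need when a coded frequency looks suspicious and we want to re-examine the raw video.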

They also pointed out that qualitative research tools can do more than assist coding. They can help with data integration, organization, exploration, and interpretation.

Coding for retrieval is an innovative view for me. This might be because we have used coding mainly for categorizing data. It is a good way to organize the data, and it also points to the notion that computer-assisted tools may be used for purposes beyond coding and categorizing.

Woods & Dempster:
At the beginning of the article, they mentioned the complexities of qualitative research as well as the abundance of analytical tools. They highlighted the issue that although there are many qualitative analytical tools, these tools don’t alleviate the complexities of qualitative analysis.

In the study, the major argument they made was that Transana did a good job of analyzing complex qualitative data sources, which is definitely true. However, in the section on juxtaposing two different sorts of transcripts, they gave researchers the authority to determine how to organize them.


In the article, the way Dr. Woods segmented the screen-capture video shed light on my video analysis experiences. Although we didn’t have video data as complex as what was discussed in the article, we used multiple sources of data, such as students’ worksheets and lecture notes. Our goal was a coordinated account of students’ learning trajectory as reflected in their assessments. To do this, we first located hints of students’ progress between two assessments. Afterwards, we found similar conceptual growth in students’ worksheets. Finally, we found evidence in the video. This was also a complex process. If I had read this article earlier, I would have segmented the video into chunks and added memos for each part. This at least would have alleviated my pain of searching for the appropriate video to examine.

Thursday, March 27, 2014

After Class Reflection_0326

After the class, I have a deeper understanding of how different research purposes can affect tool selection. Unlike other computer-assisted qualitative research tools, NVivo needs pre-defined coding schemes that are hierarchical. This feature tends to exclude codes that emerge from the data, and so excludes studies that are exploratory and begin without pre-defined codes.

Then in class, we discussed the risky nature of having pre-defined codes. Rebecca raised the point that it might be risky to pre-define codes, since the data might show different patterns than the codes anticipate. I would not be bothered much by this. Coding is an interpretive, trial-and-error process. I agree that coding should happen over several rounds, which means the second round of coding should accomplish what the first round didn’t. I also think the importance of a pre-defined coding scheme lies in having a clear theoretical framework ready before conducting the study.


From the small group discussion, I found that different fields hold different research methods. In the learning sciences, most of the work does coding to quantify qualitative data, while some qualitative studies code for interpretive purposes. We also talked about the necessity of mixed methods in research.

Sunday, March 23, 2014

Codes and Coding

Saldana:
A code is a word or a short phrase that assigns a summative attribute to a certain part of qualitative data. The first cycle of the coding process can be larger-grained than later rounds. Coding is not a precise science; it is rather an interpretive endeavor. This is different from my interpretation of coding. For me, the coding method introduced here focuses more on summarizing the data rather than categorizing it and assigning nominal codes. Another difference between my research endeavor and the coding methods introduced here is simultaneous coding. We quantify qualitative data and count the frequencies, so we don’t use embedded codes much; overlapping codes would cause trouble when we count frequencies.

There is one similarity between our way of coding and what qualitative researchers do. Coding is a process of judgment, of assigning researchers’ perceptions to data; it is a process of re-interpretation. If we approach this from the perspective of the active role researchers should take, getting more involved with the research process provides more insight into the data. Therefore, coding is not a labeling process but a linking process. I like one metaphor in the article: coding generates the bones of the analysis, while the integration of codes connects the bones together. One thing qualitative researchers share with my research endeavor is refining contents and categories with each round of coding. What we do to refine the codes is pilot coding, applying the codes to a sample of the data.

Another prominent difference from our research endeavor is theory induction. We build coding schemes by adopting or adapting existing theoretical frameworks, which is more of a top-down method, while the method introduced in this article is more bottom-up.

I totally agree with the authors’ point about doing some manual coding before moving to electronic tools. Otherwise, our energy would be wasted on figuring out how to use computer tools.

Konopasek:
The major argument of this paper is that computer tools can do more than represent what happens in researchers’ minds. They are not replicating what researchers did pre-technology; they externalize researchers’ thinking and try to make qualitative research explicit. The notion that qualitative research is implicit and resides in researchers’ minds casts it as hard to teach and posits it as a practice of “reading the data.” The author’s major goal in this article is to explain how computer tools can change this impression of qualitative research as implicit, conducted within researchers’ minds, and therefore invisible.

I am fine with the point about externalizing the process of qualitative research, which is good for me as a novice in qualitative research. However, I wonder whether over-relying on coded data, in Konopasek’s words, trying to separate raw data from coded data, would cause problems of abstraction and over-generalization, which go against the nature of qualitative study as a practice embedded in rich context.

I kept making connections with our earlier discussion about the roles of technological tools.

Making qualitative research methods explicit and accessible to new researchers links back to the questions we discussed at the beginning of this course. It risks equating qualitative research with code-and-retrieve, which in turn de-emphasizes other qualitative research methods. Moreover, I am not sure whether Konopasek used Atlas.ti only as one example to illuminate his points, but he argued strongly in the article that Atlas.ti can do more than extend researchers’ minds.