Sunday, April 20, 2014

Collaboration Between Teammates

Manderson et al.:
At the beginning of the article, Manderson and his co-authors made a distinction between qualitative and quantitative research instruments. They mentioned that for well-designed quantitative research, as long as instruments are reliable and interpretations are valid, there is limited room for varying interpretations. For qualitative studies, this may not be true: the advantage of qualitative study is its richness, so interpretations are situational and hard to “transfer” between researchers and situations. However, this raises issues of validity, reliability, and interpretability of qualitative data, since different methods of interpreting and using raw data produce different qualitative results. Therefore, in this article, Manderson and his co-authors proposed a “structured method of analysis,” which features highly structured analytic procedures with “explicit codes and categories.” In this way, qualitative data that involve interpretation are partially transformed into quantitative form. The pitfall of this method is that it risks de-contextualizing qualitative data. Ross described one alternative way to address this problem: sorting the files into categories for further analysis.
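To make the idea of “explicit codes and categories” concrete, here is a minimal sketch (my own illustration, not from the article) of how coded text segments become countable data; the segment texts and code names below are invented:

```python
from collections import Counter

# Hypothetical coded segments: (segment_text, assigned_code).
# In a structured method of analysis, every segment receives a code
# drawn from an agreed, explicit codebook.
coded_segments = [
    ("I rely on my sister for childcare", "family_support"),
    ("The clinic is too far to walk", "access_barrier"),
    ("My neighbor drives me to appointments", "community_support"),
    ("The bus route was cut last year", "access_barrier"),
]

# Counting codes partially "quantifies" the qualitative data,
# at the cost of stripping away the surrounding context.
code_counts = Counter(code for _, code in coded_segments)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

Counting codes like this is exactly where the de-contextualization risk comes in: the tallies travel well between researchers, but the surrounding narrative does not.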
Another issue raised by having multiple researchers on a team is the difficulty of assuring data confidentiality. This might require the principal investigator to ensure that every researcher has completed IRB certification.

Sin:
As a continuation of Manderson’s notion of “structured method of analysis,” Sin described an evaluation study that used mixed methods to quantify qualitative data. However, for improving the validity of interpretations, quantifying qualitative data is not enough. In the article, Sin reflected that clear codes are necessary if the project involves collaboration within a team. This does not mean that codes will never change after team members dig into the data; therefore, researchers adopt pilot coding to test whether a coding tree is reliable enough. The coding process described in Sin’s work shares many similarities with my research experiences. Given the collaborative nature of my group’s work, preliminary and ambiguous coding schemes, especially at the beginning, are likely to cause confusion and varying understandings among collaborators. Therefore, we tend to develop a very clear codebook together with the coding scheme. This appears to be an effective way to reduce ambiguity among researchers. After we have clear descriptions in the codebook, we move to pilot coding: what we usually do in our group is take 10% of the data and code it based on the preliminary coding scheme. This process also involves adding more “grounded” codes that emerge from the data.
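Sin’s article does not prescribe a particular agreement statistic, but as a hypothetical sketch of what testing a coding tree on a 10% pilot sample can look like, here is Cohen’s kappa, one common inter-coder agreement measure (the coder labels below are invented):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders labeling the same pilot segments."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of segments where the coders match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical pilot coding: two coders label the same 10% sample.
coder1 = ["support", "barrier", "support", "other", "barrier", "support"]
coder2 = ["support", "barrier", "other", "other", "barrier", "support"]
print(f"kappa = {cohens_kappa(coder1, coder2):.2f}")  # 0.75 here
```

A low kappa at this stage is usually a signal to revise the codebook descriptions rather than a verdict on the coders.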
How software is used is determined by the conditions of the research rather than by the functions of the software. This relates to the point that software does not analyze the data; people do. Another expectation behind software use concerns speed: computer tools will not analyze the data as quickly as people expect.
N6 enables researchers to combine quantitative and qualitative methods. However, researchers do not have settled answers about what is being quantified and which types of quantitative analysis are most appropriate.
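Since “what is being quantified” is the open question, here is a neutral sketch (not N6’s actual interface) of one common choice: turning coded cases into a case-by-code matrix that could then feed simple quantitative analysis. The case IDs and codes are invented:

```python
# Hypothetical coded data: case id -> list of codes applied to that case.
coded_cases = {
    "interview_01": ["family_support", "access_barrier"],
    "interview_02": ["access_barrier"],
    "interview_03": ["family_support", "community_support", "access_barrier"],
}

# Build a case-by-code presence matrix. Presence/absence is only one
# quantification choice; counts or co-occurrences are equally defensible.
all_codes = sorted({c for codes in coded_cases.values() for c in codes})
print("\t".join(["case"] + all_codes))
for case, codes in coded_cases.items():
    row = [case] + ["1" if c in codes else "0" for c in all_codes]
    print("\t".join(row))
```

Whether presence/absence, raw counts, or co-occurrences is the right quantification is exactly the unsettled question noted above.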

Barry:

Unlike the other two studies, which tried to quantify qualitative data in order to improve reliability among researchers in a group, Barry approached this problem from the perspective of reflexivity practices. Reflexivity is defined as the researcher’s awareness of his or her own presence in the research process. The major part of this study described how Barry’s team constructed reflexive accounts. Two tools were used: one was a narrative recording each individual researcher’s reflexivity, and the other concerned definitions of key theoretical concepts. The first tool involved researchers sharing their positions and preferred theoretical frameworks; the second involved reflecting on theoretical stances and how those would guide study design.

1 comment:

  1. As I read your comments, one question that came to mind is how validity is being defined. I think quite often we write/talk about validity (even in published works) and assume that we are all coming at it similarly, or that these concepts are easily transferable across methodological domains. I wonder what defining validity from a qual or quant perspective does for making sense of collaboration as a validation strategy, or even just for reconsidering how collaborative analysis goes about validating findings. Thoughts?
