April 17, 2013

"Painless" publishing? Perish the thought

I need an attitude check about publishing in academe. My heart is just not in it.

My background and experience as a high school Language Arts teacher have something to do with it. Writing instruction has changed considerably over the last several decades. I am deeply indebted to the reader/writer workshop model and practitioner-scholars such as Lucy Calkins, Linda Rief, and Nancie Atwell, who have influenced my personal approach to writing as well as how I frame instruction for young writers.

So, for example, with an ulterior motive (my son insists he "hates" reading), I announced over dinner one night that he was going to write books this summer. His immediate response was, "Can we publish them?" Now, that makes my heart swell!

Publishing redefined
Where I come from, "publishing" simply means sharing your writing with an audience of at least one (and not just your teacher). My son is only seven years old, and he already grasps this notion, reinforced not only by me but by his kindergarten and first-grade teachers, too.

This is a monumental K-12 instructional reform. Just ask your parents or grandparents to recall their own memories of learning to write. They will likely regale you with horror stories of weekly themes written strictly for the English teacher, who heavily inscribed each page of the students' composition notebooks in red pen, possibly encoding her feedback using one of the cryptic, numerical hierarchies of English grammar errors, such as Warriner's or UT's own Hodges Harbrace handbook. (And you thought journal reviewers were mean!)

Those of us of a certain age can probably recall being told by well-intentioned English teachers never to use the first person and never to write in this-or-that color of ink. I suppose it's all part of being socialized into academic Discourse, which serves a purpose but, by virtue of its sheer dominance in K-12 education and beyond, has all but eclipsed other legitimate ways of being, communicating, and publishing.

For example, some in academia look askance upon digital and web-based publishing. As a matter of professional survival, graduate students and untenured faculty avoid publication in open-access, online journals for fear it will be discounted in decisions about hiring, promotion, and tenure. As Rich (2013) warns: "Remember that not all peer-reviewed journals are equal nor will these likely be evaluated equally on the job market" (p. 378).

Still, I am intrigued by online journals such as New Horizons for Learning, supported by Johns Hopkins University. It has an established history of open-access online publishing for manuscripts pertaining to all aspects of education. Part of the New Horizons mission is to create a lab of ideas through a virtual roundtable of expert professionals. The journal puts forth an open call for submissions, and participants vet the content through a "generative process" on the website. It's a complete re-visioning of peer-reviewed publication.

So, I bristle a bit upon receiving well-intentioned advice about having "a competitive publishing record" (Rich, 2013, p. 376). I guess I am just having myself a sort of "Norm Denzin" moment. But, then, I consider Tracy's (2010) argument for developing universal standards of quality for rigor and trustworthiness in qualitative research, and I can appreciate the usefulness of criteria for evaluation. As Tracy says, "...[G]uidelines and best practices regularly serve as helpful pedagogical launching pads across a variety of interpretive arts" (p. 838).

I am reminded of the old adage: you must learn the rules before you can break the rules.

Why should the writing process, including the publication stage, be any less rigorous? From a sociocultural perspective (my preferred lens), it makes sense. The writing process is a continual learning process, in which the learner, or in this case, writer, must use and internalize the external tools of the trade (i.e. "rules" and conventions of language use and representation) before she can expertly innovate and break new literary ground with them.

SOAPP
One of my favorite "pedagogical launching pads" in the classroom was a mnemonic to scaffold students' thinking as they approached new writing tasks and performance "prompts." The mnemonic was "SOAP," which stood for "subject, occasion, audience, and purpose," as in, "What is the subject? What is the occasion for writing?" and so on. Later I modified SOAP, which I came to view as not appropriately acknowledging students' social and cultural contexts, by adding a second "P" to stand for "perspective" or "position." Using the rule of SOAPP, I encouraged students to ask themselves, "What is my position?" and "What is my perspective?" It was a small step toward leaving the safe but predictable (and oftentimes inauthentic) confines of classroom discourse.

As I read Rich's publishing guide for graduate students, I thought (for the first time in a long time) about my trusty SOAPP heuristic. In my field, literacy scholars often critique academic-based discourses as limiting students’ literacy development and falling short of helping students attend to their own perspectives and identities vis-à-vis writing topics and intended audiences.

Dyson (2004), for instance, speaks of a paradox in her essay “Writing and the Sea of Voices.” During the last part of the 20th century, teachers, riding a wave of socioculturally fueled research, tried hard – too hard, it seems – to bridge sociocultural contexts by incorporating more talk and discussion in the classroom. But the pendulum swung too far, and researchers began to focus new attention on the negative impact of teacher-centered talk on student writing processes. Dyson argues that some teacher-initiated talk may create classroom cultures that are no more culturally relevant than the cognitive and behaviorist environments that preceded them.

Although her work is dedicated to the social and cultural aspects of K-12 teaching and learning, Dyson's critique of the “dyadic” apprenticeship model inspired me to re-examine Rich's (2013) article. How does his "Quick and (Hopefully) Painless Guide to Establishing Yourself as a Scholar" function as one such "dyadic encounter"? In dispensing his writing advice, how well does Rich address the components of SOAPP, particularly the last "P"?

Rich immediately launches into the writing "occasion," which refers both to the immediate situations that generate a piece and to the broader contexts that motivate its development. Rich addresses all of these, including the universal "publish or perish" mantra that prods much academic writing. He makes a good point that as soon as one starts graduate school, occasions to conduct research and to write are everywhere. These ideas should be noted or logged for future reference.

What follows is an intertwined discussion of "subject" and "purpose" for writing. Rich encourages readers to choose topics about which they are passionate, but the viability of a topic resides primarily in whether or not the author can achieve some meaningful purpose or "contribution" with it. This aligns with advice from other scholars (Kilbourn, 2006; Piantanida & Garman, 2009), and I respect Rich's avoidance of the clichéd literature "gap." Identifying one's purpose is "the most important question to address" (Rich, p. 376), but to answer the question, one must consider the "complex interplay" between subject and purpose (to use Kilbourn's expression). From a qualitative standpoint, Rich hints at the value of highly subjective, unique interpretive perspectives when he suggests that "challenging the conventional wisdom has more appeal than reaffirming what is already accepted" (p. 376).

As this article focuses on publishing, the "audience" component figures prominently, from advice on formatting (avoid jargon in subheads) to leveraging peer feedback as a gauge of broader audience appeal. Some of Rich's advice is helpful, but some tips range from superficial flourishes, at best, to downright pandering, at worst. He instructs ambitious graduate students to scan a desired journal and, "If a journal has recently published something similar on your topic in the past few years, attempt to cite this work; many journals ask the recently published authors to review articles" (p. 377).

And then this: "After you choose a journal for submission, tailor the paper based on the journal's particular theoretical, methodological, or interdisciplinary focus to encourage positive reviews or at least minimize negative remarks" (p. 377). Something about that does not settle well with me, and it has something to do with that other "P."

Save for one comment about developing a "clear and concise style" (p. 378), Rich provides little to no guidance in his "Painless Guide" on developing and maintaining one's personal voice and positionality -- dare I say "integrity"? -- during the tumultuous publishing process.

Writing for an audience is a skill. Not too long ago, being a published author was an accomplishment enjoyed by an elite few. That is still true in academia, but other genres of writing and writing communities are expanding and reaching new audiences. Authors once shut out from the publishing process are finding a voice.

It's an exciting time, but with opportunity comes responsibility. The publishing process sensitizes the writer to the needs of her audience. Ideally, through the process, the writer also develops and refines her style, voice, and perspective. It should be a mutually reinforcing dynamic, but, in academic publishing, it seems difficult to maintain a balance between the two.

References
Dyson, A. H. (2004). Writing and the sea of voices: Oral language in, around, and about writing. In R.B. Ruddell & N.J. Unrau (Eds.), Theoretical Models and Processes of Reading (5th ed., pp. 146-162). Newark, DE: International Reading Association.

Kilbourn, B. (2006). The qualitative doctoral dissertation proposal. The Teachers College Record, 108(4), 529–576.

Rich, T. S. (2013). Publishing as a graduate student: A quick and (hopefully) painless guide to establishing yourself as a scholar. PS: Political Science & Politics, 46(02), 376–379.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851.

April 11, 2013

ATLAS.ti networks as a mechanism for establishing trustworthiness

On Tuesday morning I attended a special topics webinar on ATLAS.ti network views conducted by Susanne Friese.

The network view tool in ATLAS.ti enables the researcher to create "interactive mind maps," something I have already written a bit about in a previous post.



Basically, the hermeneutic unit (the ATLAS project file) is the total network. All objects added to or created within ATLAS (documents, artifacts, memos, quotations, codes) instantly become nodes in the network when they are linked to something else (for instance, a code to a quotation, a memo to a code, and so on), and these links may then be visualized.
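One way I think about it (a loose sketch only, not ATLAS.ti's actual internals; the object names and relation labels here are hypothetical): the HU behaves like a typed graph, where the act of linking any two objects makes both of them nodes.

```python
# Illustrative sketch: a hermeneutic unit (HU) modeled as a typed graph.
# Object names and relation labels are hypothetical, not ATLAS.ti's internals.

class Network:
    def __init__(self):
        self.nodes = set()   # documents, artifacts, memos, quotations, codes
        self.links = []      # (source, relation, target) triples

    def link(self, source, relation, target):
        # Linking two objects is what makes both of them nodes in the network.
        self.nodes.update([source, target])
        self.links.append((source, relation, target))

    def neighbors(self, node):
        # Everything a given object is linked to, with the relation label.
        return [(r, t) for s, r, t in self.links if s == node]

hu = Network()
hu.link("quotation: 'I dread coding'", "is coded with", "code: anxiety")
hu.link("memo: first hunch", "comments on", "code: anxiety")
print(hu.neighbors("memo: first hunch"))
# [('comments on', 'code: anxiety')]
```

The point of the sketch is simply that visualization comes for free once the links exist: a network view is just a drawing of some subset of these triples.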

I sat in on the session in hopes of learning more about this ATLAS function, which I have come to appreciate as a scaffold for both analysis and writing (harder and harder to separate these two processes, I am learning).

As I listened to Susanne Friese and her "vondervul" German accent, I also made a few connections between network views and this week's readings in EP659, which focus on trustworthiness and issues of quality in qualitative inquiry (Anfara, Brown & Mangione, 2002; Tracy, 2010).

According to Anfara et al., "...[A] key part of qualitative research is how we account for ourselves, how we reveal that world of secrets" (p. 29). Neither Anfara and his colleagues nor Tracy specifically mentions CAQDAS as a tool for ensuring quality, but both articles discuss transparency of data management, coding, and analysis as a mechanism for strengthening research quality.

Tracy, for example, provides a more than adequate rationale for use of digital tools in her discussion of "rich rigor," one of the eight major criteria in her reconceptualization of excellent qualitative research. She writes,
Rigorous data analysis may be achieved through providing the reader with an explanation about the process by which the raw data are transformed and organized into the research report. Despite the data-analysis approach, rigorous analysis is marked by transparency regarding the process of sorting, choosing, and organizing the data. (p. 841)
And Anfara et al.’s example of “code mapping” (p. 32) is quite simply a picture of one researcher’s analytic process that could easily be depicted using ATLAS.ti’s network view. With charts and tables, though, one must be careful not merely to show a hierarchy of codes, which Friese adamantly warns against. (See "Analytic capabilities of network views" below.) The researcher builds a network view in ATLAS based on his or her interpretation of relationships across codes and categories, making the network view tool indispensable for conducting constant comparative analyses, building "audit trails," and "documenting the procedures used to generate categories" (Anfara et al., p. 33).

Creswell (2013) is more direct in his endorsement of CAQDAS. In the chapter on validation strategies in the newest edition of his Qualitative Inquiry and Research Design, he mentions the use of computer programs to assist in recording and analyzing data as one of several ways for enhancing the stability and dependability of findings, or, what is known in positivistic terms as "reliability."

Networks are compelling visual diagrams that add transparency to the researcher's process, and I am now considering how I might incorporate them into the write-up of my own research findings or my dissertation appendices.

Here are a few other notes I took from Tuesday’s webinar:

Basic concepts of network views
  • Strong and weak links are denoted by solid (strong) and dotted (weak) lines
  • Weak links between nodes are unnamed. They exist between memos and quotations, codes and quotations, memos and memos, and between families and their members.
  • Named links express relationships between two codes or two quotations
  • Named links may be "directed/transitive" ------> and "non-directed/symmetric" <-------> 
  • Background colors in network views coordinate to code colors (if you use color)
  • Hyperlinks occur on the data level, between two quotations
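As a mental model for the directed/symmetric distinction above (a sketch only; the relation and code names are made up, and this is not how ATLAS.ti stores relations): a directed relation holds in one direction only, while a symmetric one can be read from either end.

```python
# Sketch (hypothetical names): named relations between codes may be
# directed/transitive (A ---> B) or non-directed/symmetric (A <---> B).
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    label: str
    symmetric: bool = False

links = []  # (code_a, relation, code_b)

def add_link(a, rel, b):
    links.append((a, rel, b))

def related(a, b):
    """True if a is linked to b, honoring relation directionality."""
    for s, r, t in links:
        if (s, t) == (a, b):
            return True
        if r.symmetric and (s, t) == (b, a):
            return True
    return False

is_cause_of = Relation("is cause of")                  # directed
contradicts = Relation("contradicts", symmetric=True)  # symmetric

add_link("code: workload", is_cause_of, "code: burnout")
add_link("code: autonomy", contradicts, "code: scripted curriculum")

print(related("code: burnout", "code: workload"))              # False: directed
print(related("code: scripted curriculum", "code: autonomy"))  # True: symmetric
```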
How to link objects
  • You can drag and drop from anywhere, bringing objects into the network view manager. You can also import nodes into the network (any object from the HU). For instance, you can select a code and import all its quotations.
  • Take time to play with the display options under the Display drop-down menu
  • In the relations editor, you can create your own relations with colored lines of different point sizes (See “How to create relations” below.)
  • Comments can be added to relations and are denoted with a tilde just as with other comments in other areas of ATLAS
  • NEVER DELETE AN OBJECT from the network view; use the "remove from network view" option instead
  • Creating links between data opens up different kinds of relations, as opposed to linking codes, which is more conceptual
  • You can save a network as a graphic file (png, gif, jpg) and insert into a PowerPoint or MSWord doc
How to create relations
  • Open Relations Editor and expand the window until you see the Edit tab
  • Decide on the relation you want and create a unique identifier: the first three letters, then the actual relation text (e.g., REA is the identifier for "is reason for")
Working with hyperlinks

Example of "star" links
  • Hyperlinks are "stars" or "chains" of links
  • Quotes may be linked within and across documents
Analytic capabilities of network views
  • Codes are just topics and areas of interest within the data; they describe
  • Networks take it to the conceptual level, so networks are for linking across categories and depicting relationships between data, not for building code hierarchies. DO NOT USE NETWORK VIEW FUNCTION TO REPRESENT YOUR CODE STRUCTURE. IT'S TWO DIFFERENT LEVELS OF ANALYSIS.
  • Use network view after coding when you start to see relationships.
  • ATLAS.ti version 7 allows you to filter codes that you bring into the view. You can hit Reset Filter to bring back all codes.
  • Code families are just grouping mechanisms for codes. Families are filters: create family, then turn on filter.
References
Anfara, V. A., Brown, K. M., & Mangione, T. L. (2002). Qualitative analysis on stage: Making the research process more public. Educational Researcher, 31(7), 28–38.
Creswell, J. W. (2013). Qualitative inquiry and research design: Choosing among five approaches (3rd ed., Kindle version). Los Angeles: SAGE Publications, Inc.
Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851.


April 1, 2013

A new "pressure point"

I remember well Piantanida and Garman’s (2009) advice about constructing a logic-of-justification for a research genre. It is not simply a matter of locating a recipe for the study. Instead, one must recognize and sort through the “epistemological and methodological pressure points” in the literature and choose which ideas will best guide the study at hand (Ch. 7, Conventions of a Genre and Logic-of-Justification, para 2).

I had forgotten that Piantanida and Garman had specifically referenced grounded theory as one such genre framed by “contentious literature”!

So, as I began my background reading of constant comparative analysis (with its roots in grounded theory), I was not expecting to encounter a new pressure point.

The tension for me, however, is not within the grounded theory tradition itself but between grounded theory’s constant comparative method and Stake’s (1995) case study method.

I thought I had this part pretty well sewn up: case study research methods, followed by constant comparative analysis, and concluding with activity systems analysis. The path, while not necessarily easy, was at least clear.

Now, a bump in the path. 

Points of departure
On the subjects of generalization and sampling, I see a need to reconcile the different analytic stances of Strauss and Corbin versus Stake. With constant comparative method originating in the grounded theory tradition, and with the purpose of grounded theory being a systematic progression from descriptive to theoretical, is constant comparative analysis compatible with Stake's case study approach?

Time and again, Stake emphasizes that “case studies are undertaken to make the case understandable” (p. 85). He writes, “The function of research is not necessarily to map and conquer the world but to sophisticate the beholding of it” (p. 43). A skilled case researcher should organize his or her report in such a way as to stimulate a resonance with readers, who draw on their own experiences with past cases to form “naturalistic generalizations” (p. 85).


Case research, then, is never about sampling: “Our first obligation is to understand this one case” (p. 4, emphasis added).

On the other hand, Strauss and Corbin (1998) present a step-by-step process of microanalysis that moves beyond description to conceptualizing and classifying and (eventually) theory building. The basic operations of theory building are 1) asking questions and 2) making theoretical comparisons -- analytic tools that guide and direct theoretical sampling.

In grounded theory, the researcher makes theoretical comparisons when in doubt or confused by the data. "The object, then, is to become sensitive to the number and types of properties that might pertain to phenomena that otherwise might not be noticed or noticed only much later" (Strauss & Corbin, p. 82). Properties and dimensions of one thing are used as tools for examining another.

Strauss and Corbin say the point of theoretical comparison is to move beyond describing and pinning down "facts." The issue, they say, is to move from the particular to the general.

The sticking point
How does the process of theoretical comparison fit with case study analysis?

Stake is clear: case studies don't produce generalizations, they refine them. Through counter example, a case study may help modify an existing generalization, but "the real business of case study is particularization, not generalization. We take a particular case and come to know it well, not primarily as to how it is different from others but what it is, what it does" (p. 8).

Particularization is the avenue to understanding, not generalization.

Or, has constant comparative method (and coding and memo writing, for that matter) become so commonplace within the broad spectrum of qualitative inquiry that it can be logically applied in descriptive case study? Has the constant comparative method become independent of grounded theory as a thematic analysis tool? At what point did the tools of grounded theory work cross over into the mainstream?

For instance, Saldaña (2013) presents six coding methods from the grounded theory “coding canon” in the newest edition of his book: in vivo, process, initial (formerly “open”), focused, axial, and theoretical (formerly “selective”). He says they all can be used in other non-grounded theory studies (p. 51).

At my breaking point
Part of my confusion, I realize now, stems from the fact that I have never clearly resolved in my head whether my case is, in Stake's terms, an "intrinsic" one or an "instrumental" one. In an intrinsic study the unique case itself is the center of attention; in an instrumental study the case(s) are selected purposefully based on the researcher’s need to more fully understand an issue. Whatever the case, it’s not a process of sampling.

I think my study is somewhere in the middle: the case was practically handed to me on a silver platter, but my interest is instrumental and pre-existing. I embraced the opportunity to study the case in question because of my own a priori interests.

Stake warns that it is often not easy to categorize one’s work based on these distinctions. The researcher’s interest in the case (intrinsic versus instrumental) dictates the methods of analysis.

Where once I appreciated Stake’s broadminded stance (“...I encourage you readers to be alert for tactics that do not fit your style of operation or circumstances. Before you is a palette of methods” [Introduction, p. xii]), I now barely comprehend his musings on the “mystical side of analysis” (p. 72).

Stake writes,
Where thoughts come from, whence meaning, remains a mystery. The page does not write itself, but by finding, for analysis, the right ambience, the right moment, by reading and rereading the accounts, by deep thinking, then understanding creeps forward and your page is printed. (p. 73)
Whaaat?

Points on a continuum
Must I choose between Stake’s riddles and Strauss and Corbin’s rigidity?

In short, no. But I definitely need to write a clear articulation of the purpose, nature, and selection of my case as part of the logic-of-justification for my analytic methods.

For Stake, the case researcher must be equally inclined toward inductive analysis, which he calls "categorical aggregation," and interpretive analysis, or "direct interpretation." An intrinsic case study requires more direct interpretation, as there is little time or need to aggregate categorical data. Intrinsic case studies are more descriptive, with emphasis on particularization. In contrast, instrumental case studies are more theoretical, with emphasis on induction and generalization.

These analytical methods reside along a paradigmatic continuum with no hard-and-fast boundaries. Stake writes, "The quantitative side of me looked for the emergence of meaning from the repetition of phenomena. The qualitative side of me looked for the emergence of meaning in the single instance" (p. 76).

As with every other stage of the process, reflexivity is key:
Each researcher needs, through experience and reflection, to find the forms of analysis that work for him or her....The nature of the study, the focus of the research questions, the curiosities of the researcher pretty well determine what analytic strategies should be followed: categorical aggregation or direct interpretation. (p. 77)
The point, I think, is it is up to me to argue compatibility between the constant comparative method and Stake. The type and purpose of the case, the conceptual structure of the study, and reflexive management of evolving research questions will determine where I land along the analysis continuum.

References
Piantanida, M., & Garman, N. B. (2009). The qualitative dissertation: A guide for students and faculty (2nd ed., Kindle version.). Thousand Oaks, CA: Corwin.

Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Los Angeles: SAGE Publications Ltd.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: SAGE Publications.

Strauss, A., & Corbin, J. M. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: SAGE Publications.

Dropping the ball

Research is a juggling act. I think to be a successful researcher you must juggle at any given time at least four of the following: data collection, reading, writing, analysis, reflexivity -- always reflexivity and reflection.

Am I leaving anything out? Probably.

This whole semester I thought I was doing well, when, in fact, at most I was tossing around maybe two things at a time. Big deal.

After reading the coding and analysis literature this week, I realize I have dropped a major ball, the analysis one.

I also realize I have some new questions. (See end of post for questions).

Coding, analysis, and interpretation
Coding is a commonly accepted approach to analysis, and, as such, it has its share of critics, who claim it is an over-glorified form of reductionist frequency counting.

In the opening chapters of the new edition of his coding manual, Saldaña (2013) addresses the critics. He says coding is not the end-all, be-all of analysis. Nor is it a discrete stage of analysis done in isolation from other aspects of research.

Saldaña argues that coding does not distance the researcher from his or her data:
If you are doing your job right as a qualitative researcher, nothing could be further from the truth. Coding well requires that you reflect deeply on the meanings of each and every datum. Coding well requires that you read and reread and reread yet again as you code, recode, and recode yet again. Coding well leads to total immersion in your data corpus with the outcome being exponential and intimate familiarity with its detail, subtleties, and nuances. (p. 39)
I so need to do this. I haven't been handling the data well at all. I have been amassing it, stockpiling it, peering at it occasionally with a heavy heart, as I do with hampers of dirty laundry. In fact, I now have piles of neatly folded, laundered clothes and no grasp of my study. I will choose laundry any day over the challenge of data analysis.

The readings on coding and analysis cause me to feel a pit in my stomach as I realize how far removed I have become from the actual texts of my study. I haven't transcribed in weeks. I haven't touched, much less reflected on, field notes, emails, and other artifacts from last semester. I acquired a thick folder of documentation from one of the key informants for my study, and the documents still sit where I left them two weeks ago.

Worse, I have been continually conducting interviews with participants for the last three months, without the benefit of immersion in the data to guide or shape my interactions with those participants.

In their discussion on the process of line-by-line microanalysis, Strauss and Corbin (1998) write, "We are moved through microanalysis by asking questions, lots of them, some general but others more specific. Some of these questions may be descriptive, helping us ask better interview questions during the subsequent interviews" (p. 66).

I should be forming theoretical questions that probe relationships between concepts and then asking these questions during follow-up interviews.

I just concluded the first round of interviews. Time is running out for follow-ups, as my participants are classroom teachers who will not relish being interviewed over summer vacation. This is a potential problem.

Analytic memos
And because I have not been immersed in the data, I also have not generated a single analytic memo about the data, not since last semester. This is another problem.

Analytic memos are the raw materials for what will ultimately become the “theoretic text,” in which the researcher “finally sees the theoretic interpretation – core thesis – he or she wants to put forward” (Piantanida & Garman, 2009, Ch. 13, para 1). They are initiated by “aha moments” and “conceptual leaps” that put “the myriad individual, idiosyncratic, and situational details into a meaningful, coherent, theoretic perspective” (para 2).

According to Saldaña, "Virtually every qualitative research methodologist agrees: whenever anything related to and significant about the coding or analysis of the data comes to mind, stop whatever you are doing and write a memo about it immediately" (p. 42).

Richardson (1994) refers to analytic memos as "theoretical notes." These are the researcher's hunches, hypotheses, connections, and/or critiques about what is being seen and heard in the field. The researcher opens up his or her texts to interpretation and a "critical epistemological stance" (p. 526).

Memo writing goes hand-in-hand with analysis. All sorts of memos may be generated during research, but the analytic ones mark the researcher's first attempts at creating findings.

In writing workshop, instructors guide their students: "Don't get it right, get it writ." This advice applies to the research memo as well. Saldaña says just write the memo; worry about the title and category later. He explains,
I simply write what is going through my mind, then determine what type of memo I have written to title it and thus later determine its place in the data corpus. Yes, memos are data; and as such they, too, can be coded, categorized, and searched with CAQDAS programs. Dating each memo helps keep track of the evolution of your study. Giving each memo a descriptive title and evocative subtitle enables you to classify it and later retrieve it through a CAQDAS search. (p. 42)
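To make Saldaña's point concrete for myself, here is a toy sketch of memos as data, dated and titled for later retrieval. The field names and example memos are my own inventions, not Saldaña's or any CAQDAS program's.

```python
# Sketch (hypothetical fields and memos): memos treated as data,
# dated and titled so they can be classified and retrieved later.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Memo:
    title: str       # descriptive title
    subtitle: str    # evocative subtitle
    body: str
    written: date = field(default_factory=date.today)  # dating tracks the study's evolution

memos = [
    Memo("CODE CHOICE", "why 'anxiety' not 'stress'",
         "The word the teachers themselves use..."),
    Memo("PATTERN", "interviews echo field notes",
         "Across three participants..."),
]

def search(memo_list, term):
    # A crude stand-in for a CAQDAS search: match on body or subtitle.
    term = term.lower()
    return [m.title for m in memo_list
            if term in m.body.lower() or term in m.subtitle.lower()]

print(search(memos, "teachers"))   # ['CODE CHOICE']
```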
Grounded Theory and ATLAS.ti
One thing that has become more transparent to me in the coding and analysis readings is the connection between grounded theory and ATLAS.ti.

Some researchers distrust CAQDAS tools because “enduring foundationalist epistemologies are clearly being drawn on in their design and programming” (Brown, 2002). After reading portions of Strauss and Corbin’s (1998) handbook on grounded theory, I finally see what the critics are talking about. Key features of ATLAS seem to be borrowed directly from the grounded theory genre: open codes, in vivo codes, network views (Strauss and Corbin call them “diagrams”), and the integration of memos.

Strauss and Corbin did not claim to know much about computers in their 1998 volume, but they specifically mention ATLAS.ti as “more geared toward theory building” (p. 276) and reproduce a memo from ATLAS developer Heiner Legewie that says as much.

Questions:
  • Both the grounded theory guidelines as well as Saldaña’s book refer to the use of categories. How does one denote categories in ATLAS.ti? I have been using prefixes to organize codes upfront, so my prefixes are definitely not categorical. They are simply organizational/topical in nature. Is this what code families are for? Does it matter?
  • Saldaña refers to “subcodes” and “subcategories.” How can these be represented in ATLAS.ti?
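For what it's worth, the prefix scheme I mention above boils down to a simple grouping operation. Abstracted away from ATLAS.ti entirely, it looks something like this (the code names and delimiter are hypothetical):

```python
# Sketch of one common workaround: encode the organizational prefix in the
# code label itself with a delimiter, then group programmatically.
# Code names are hypothetical examples, not from my actual study.
from collections import defaultdict

codes = [
    "IDENTITY: teacher voice",
    "IDENTITY: student voice",
    "CONTEXT: school policy",
    "CONTEXT: family",
]

def group_by_prefix(code_list, sep=": "):
    groups = defaultdict(list)
    for code in code_list:
        prefix, _, rest = code.partition(sep)
        groups[prefix].append(rest)
    return dict(groups)

print(group_by_prefix(codes))
# {'IDENTITY': ['teacher voice', 'student voice'],
#  'CONTEXT': ['school policy', 'family']}
```

Whether ATLAS.ti's code families are the "right" home for such groupings is exactly the question I am left with.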
References
Brown, D. (2002). Going digital and staying qualitative: Some alternative strategies for digitizing the qualitative research process. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, 3(2). Retrieved from http://www.qualitative-research.net/index.php/fqs/article/viewArticle/851

Piantanida, M., & Garman, N. B. (2009). The qualitative dissertation: A guide for students and faculty (2nd ed., Kindle version.). Thousand Oaks, CA: Corwin.

Richardson, L. (1994). Writing: A method of inquiry. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 516–529). Thousand Oaks, CA: SAGE Publications, Inc.

Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Los Angeles: SAGE Publications Ltd.

Strauss, A., & Corbin, J. M. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: SAGE Publications.