March 12, 2013

Notes on ATLAS.ti from Friese (2012)

For my dissertation, I have created a hermeneutic unit (HU) in ATLAS.ti. It resides in a folder on my desktop, and I am gradually pulling in documents related to my project, starting with annotated literature and reading notes on Cultural-Historical Activity Theory (CHAT), identity theory, case study, and interviewing methods. I am grouping my literature using the document family function.

For now, my focus is on the document families for case study and interviewing, which I am reading, analyzing, and coding for purposes of writing a more substantial logic-of-justification (Piantanida & Garman, 2009). As part of my project contract for EP659, I will incorporate this synthesis of my readings into my Chapter 3 overhaul.

This is only the second time I have used ATLAS.ti to conduct a literature review. The first time, I found that most of my analysis occurred outside of the software, and I used ATLAS to simply organize and categorize my annotations. The software was helpful, but I did not allot enough time to achieve a true synthesis with it.

Now, as before, I feel my attempts with ATLAS are constrained by a lack of time, and I am tempted to revert to my "old ways." Why bother taking time to learn a digital tool for a process that people have conducted successfully for years with paper and pencil?

But I want my use of CAQDAS to count for something this time around. So, with that in mind, I began reading the eBook version of Susanne Friese's (2012) Qualitative Data Analysis with ATLAS.ti.

Friese's book appeals to those novices and skeptics who may ask, "...[I]f the computer doesn’t do the coding, then what is it good for?" (p. 1). She argues that CAQDAS opens up data coding to a host of new possibilities, and that new users must understand its potential value-added or risk applying it only superficially. Friese describes an all-too-familiar scenario: "...[W]hen I started to use software to analyze qualitative data in 1992, I did what most novices probably do: I looked at the features, played around a bit, muddled my way through the software and the data, gained some insights and wrote a report. It worked – somehow. But it wasn’t very orderly or systematic" (p. 2).

Friese argues that nowhere in the extant literature on CAQDAS is there a systematic guide for its use. In her book on ATLAS.ti, she refrains from being overly prescriptive; how could she be, with a system that at last count offered more than 400 sub-menus? Friese simply outlines the approach she has honed over the last 20 years. Passages laden with technical how-to are couched in methodological terms. The "how" is linked to the "why," so readers can appreciate the intended analytical rigor behind each skill-building exercise.

In consideration of my next big foray into ATLAS.ti, I read with an eye toward learning new methodological and technical practices and techniques. Here are some key take-aways:

Methodological advice and ah-ha's:


  • Learn to interpret the numerals inside brackets following each code. The first number refers to the number of times a code has been used (its "groundedness"); the second number is the code's "density," the number of links it has to other codes in the network. A code displayed as identity {12-3}, for example, has been applied to twelve quotations and linked to three other codes. I never understood what the second number signified, until now. Interestingly, ATLAS offers network views, but the networks are created manually through the user's interpretive process. This is discussed in more depth in Chapter 7.  
  • As an early analytic move, comment on each PDoc and group PDocs into families as you add them to the HU. Document families are a prerequisite for running certain kinds of data queries later on. 
  • Refer to Chapter 4 for an interesting discussion of how to use the coding tools in ATLAS. The tools are modeled after principles of Corbin and Strauss' grounded theory, but that does not limit how they are applied. Chapter 4 does an excellent job of weaving methodological advice with technical how-to, such as the use and pitfalls of in vivo coding. 
  • Be diligent about defining codes with the comment tool. Code definitions evolve over time, and sometimes over the course of a project, it is possible to forget what a code originally stood for. This is another reason to use the drag-and-drop method afforded by the Code Manager (Friese's preference) instead of the list option (what I am accustomed to using). By keeping the Code Manager open in the workspace, definitions are constantly viewable as you select and apply each code. 
  • Manage memo settings (p. 138). This step is crucial, as writing memos in ATLAS goes hand-in-hand with use of the query tool. I have been taking field notes and writing memos in Evernote, and I have been using Evernote's tagging utility to label my notes thus: methodological note, theoretical note, personal note, and observational note (based on Richardson's scheme, which I wrote about in a previous post). I like using the Evernote app on my iPad in the field; it's less intrusive. And I generally avoid using my PC laptop (which runs ATLAS) for any form of data entry because I dislike the keyboard. Following Friese's suggestion, I adjusted the memo types in ATLAS to align with the memo types/tags I already use. Now, as I begin coding and analyzing my texts with ATLAS, I can continue generating and labeling memos using my personal scheme. Next, I need to consider how to get my Evernote data into ATLAS. Copy and paste into internal text documents within ATLAS? (See pp. 55-56.) One possible shortcut is sketched after this list. 
  • Create analytic memos based on the research questions (pp. 143-145, p. 148). It is likely that most of my subsequent memos, while working in ATLAS, will be theoretical or methodological in nature. Friese suggests a special class of analytic memos called "research question" memos for when the user enters into a second level of analysis that involves querying the data and finding relations. Research question memos may be generated at the start of the project and added to and revised over the duration of the project. She explains, "In your first research question memos, the answers will probably be descriptive. But in time they will become more abstract as you get ideas for further questions, add new research question memos and basically take it one step further at a time, gaining more and more understanding, exploring more and more details of your data landscape and starting to see relations between them" (p. 145). 
  • After coding the texts/data, review Chapter 6 for ideas about how to query the data. The last half of Chapter 6 is very procedural and skills-based and will make more sense to a reader who already has a list of codes and has started to conceptualize those codes. Friese reviews the query tool, the co-occurrence explorer, and the Codes-Primary-Documents Table, which can be used to find relationships and patterns. At this stage of analysis, the research question memos can be used to keep a record of queries and results (as pictured on p. 144). Further, the memos, if set up correctly and linked to quotations, can be used to generate output that serves as "building blocks" for the findings chapter. 
  • Consider incorporating ATLAS.ti into the dissertation defense. ATLAS can support the presentation of findings in a number of ways listed on pp. 219-221.
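Since Friese does not prescribe a route from Evernote into ATLAS, here is a minimal sketch of one possibility I am considering: parsing an Evernote export (.enex, which is XML) into one plain-text file per note, ready to drop into the HU's data folder. Everything here (file names, folders, tag handling) is my own assumption, not something from the book.

```python
# Hypothetical sketch: split an Evernote .enex export into plain-text
# files that ATLAS.ti could load as primary documents. All file and
# folder names are assumptions for illustration.
import os
import re
import xml.etree.ElementTree as ET

EXPORT_FILE = "evernote_memos.enex"   # hypothetical Evernote export
OUTPUT_DIR = "memos_for_atlas"        # folder destined for the HU

os.makedirs(OUTPUT_DIR, exist_ok=True)
root = ET.parse(EXPORT_FILE).getroot()

for i, note in enumerate(root.iter("note"), start=1):
    title = note.findtext("title", default=f"note_{i}")
    tags = [t.text for t in note.findall("tag")]  # e.g. "methodological note"
    enml = note.findtext("content", default="")   # note body is ENML (XML)
    text = re.sub(r"<[^>]+>", " ", enml)          # crude markup strip
    safe = re.sub(r"[^\w-]+", "_", title).strip("_")
    with open(os.path.join(OUTPUT_DIR, f"{i:03d}_{safe}.txt"),
              "w", encoding="utf-8") as out:
        out.write(f"{title}\nTags: {', '.join(tags)}\n\n{text.strip()}\n")
```

This would keep my Richardson-style tags in the first line of each file, so the memo type stays visible after import.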
Methodologically, this passage from page 1 of Chapter 6 says it all:
A lot of the analysis happens as you write, not by clicking on some buttons and outputting some results. You need to look at what the software retrieves, read through it and write it up in your own words in order to gain an understanding of what is happening in the data. Most of the time insights need to be worked at and are not revealed to you immediately by looking at the results. Simply seeing that there are, say, 10 quotations is not enough; numbers are sometimes useful, they hint that there might be something interesting there, but the important step is to take a closer look and to see what’s behind them. (p. 133)  
Technological advice and ah-ha's:  
  • When the Code Manager is open, navigate through a long list of codes by placing the cursor in the Code Manager and typing the first few letters of the desired code. If I ever amass the 120-200 codes that Friese cites as typical, this practice will be useful.  
  • When preparing transcripts in a word processor, save them as rich text (.rtf) so that ATLAS does not need to convert them. Rich text is the standard file format in ATLAS, not .doc or .docx. Typically, after transcribing in InqScribe, I format transcripts in MS Word, so it would not be difficult for me to Save As .rtf. This is something to consider, although in the past I have used .doc files in ATLAS without incurring any problems. (A batch-conversion sketch follows this list.) 
  • Chapter 2 offers many suggestions for formatting transcripts! I have not yet loaded any transcripts into my dissertation HU, and Friese's formatting guidelines should be easy to implement. These include marking speakers with unique identifiers that do not appear elsewhere in the text (e.g., "INT" for "interviewer," and so on), double spacing between turns, and breaking up long turns with empty lines. Some of these preparations are meant to facilitate the ATLAS automatic coding tool, which I am not sure I even want or need to use, but Friese suggests getting into the habit anyway. (A formatting sketch also follows this list.)  
  • Learn the ATLAS.ti file protocol. Friese calls it "the biggest hurdle" in ATLAS project management, and on pp. 36-37 she thoroughly demystifies this aspect of the program. It's a matter of understanding "external references," a system of document storage and retrieval that prevents a project file from becoming too big and unwieldy. I now have a firmer understanding of how the "one folder for all data" rule works and why I should follow it. More importantly, the name of each data file should be analytically helpful, not generic like "transcript1," "transcript2," and so on; a name like "int_teacher01_2013-03.rtf," for example, reveals the data type, participant, and date at a glance. Choose a file-naming system early. It will add transparency and efficiency to the project. The data files (primary documents) can be sorted in the document manager, providing a quick glimpse of your sampling. 
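For the .rtf suggestion above: if I ever decide to batch-convert finished transcripts, something like the following could run on my PC laptop. It assumes MS Word is installed and the pywin32 package is available; the folder name is hypothetical.

```python
# Hypothetical sketch: batch-save Word transcripts as .rtf so ATLAS.ti
# does not need to convert them. Requires MS Word and pywin32.
import glob
import os
import win32com.client

wdFormatRTF = 6  # Word's built-in file-format constant for rich text

word = win32com.client.Dispatch("Word.Application")
try:
    for path in glob.glob(os.path.join("transcripts", "*.doc*")):
        doc = word.Documents.Open(os.path.abspath(path))
        rtf_path = os.path.splitext(os.path.abspath(path))[0] + ".rtf"
        doc.SaveAs(rtf_path, wdFormatRTF)  # SaveAs(FileName, FileFormat)
        doc.Close()
finally:
    word.Quit()
```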
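And for the transcript formatting guidelines, here is a minimal sketch of the speaker-identifier and turn-spacing rules, assuming a raw InqScribe transcript in which each turn begins with a plain-language label. The labels and file names are mine, not Friese's.

```python
# Hypothetical sketch of two formatting guidelines: replace speaker
# labels with unique identifiers that appear nowhere else in the text,
# and leave an empty line between turns.
SPEAKER_IDS = {"Interviewer:": "INT:", "Participant:": "P01:"}

with open("raw_transcript.txt", encoding="utf-8") as f:
    turns = [line.strip() for line in f if line.strip()]

formatted = []
for turn in turns:
    for label, unique_id in SPEAKER_IDS.items():
        if turn.startswith(label):
            turn = unique_id + turn[len(label):]
            break
    formatted.append(turn)

# Double spacing between turns, per the guideline above.
with open("transcript_P01.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(formatted) + "\n")
```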
One immensely telling detail: the total number of menus and submenus in ATLAS.ti equaled 443 at the time Friese published her handbook. The point is that working with ATLAS can be highly individualized. The interface is designed to provide users with multiple options for performing the same function. Thus, through practice, the user develops his or her own preferred workflow and routines.  (I like this as it basically sums up my whole approach to teaching and learning with technology.)

References 
Friese, S. (2012). Qualitative data analysis with ATLAS.ti [eBook]. Los Angeles: SAGE.

Piantanida, M., & Garman, N. B. (2009). The qualitative dissertation: A guide for students and faculty (2nd ed., Kindle ed.). Thousand Oaks, CA: Corwin.

1 comment:

  1. Another book that I found hugely helpful was Lewins & Silver (2007); even though they talk across multiple software packages, they organize the book around the "major tasks of analysis," which provides a great big-picture overview of what the software can do.

    Note that Friese wrote her book before version 7 came out, so some things may have changed. (I am not sure that .rtf is still the default file format, and the way documents are stored has changed with the introduction of the "My Library" and "Team Library.")

    The ability to run queries (what Lewins and Silver call 'interrogating the dataset') is really the heart of what makes software tools powerful. In fact, families aren't analytically useful at all until you start using the query tool (you have to use colors or prefixes to really organize your codes, and I think Friese does a pretty good job explaining this).

    I wish they wouldn't call them families (or super-codes) because their functions are not at all what you think they would be.

    Anyway, you might consider submitting a review of the book to a qualitative journal; I haven't seen one published yet.

