August 25, 2011

What is the discourse analysis and literary criticism connection?

[Note: Please see previous post for a full reflection on Neil Mercer's Words and Minds, chapters 1-4.]

Coming at discourse analysis (DA) from the perspective of an English major and high school language arts teacher, my curiosity is naturally piqued by references to "readers," "texts," and "interpretations." 

Mercer does this a lot in Words and Minds. He writes, for example, "'Context' is created anew in every interaction between a speaker and listener or writer and reader" (p. 21). When Mercer refers to written texts (as opposed to utterances) and the "interpretive efforts of the reader," I can't help but think of Louise Rosenblatt's transactional theory, which in my current field of study, reading education, has been described as "a wave in an ocean unto itself" (Ruddell & Unrau, p. 1121). It's a multidisciplinary perspective that includes comparative literature, philosophy, aesthetics, linguistics, and sociology. This lineage of thought is similar to DA's. Is there a connection?
  
Transactional theory views reader and text as bound in a "total dynamic situation," where meaning is not extracted from the text but created between reader and text (Ruddell & Unrau, p. 1121). As a preservice English teacher in the early 1990s, I came to recognize transactional theory, also referred to as "reader-response," as the primary theory informing most of the English and language arts instruction I received as both a high school student and an undergrad. (Although I did have one American literature professor who admonished us on the first day of class that we were never to interpret a rabbit in a Robert Frost poem as anything other than a rabbit.)

Last week's class notes and some of the reading indicate that DA draws from literary theory, without making specific mention of Rosenblatt.  Starks and Trinidad, for instance, say DA evolved from linguistic studies, literary criticism, and semiotics (p. 1374).  From what areas of literary criticism does DA draw, and is transactional theory one of them?  I made a cursory pass at the other required texts for the course and could not find a single mention of literary theory, literary criticism, reader-response, or Rosenblatt. 

But there are several parallels.  Take, for instance, Rosenblatt's concept of the transactional paradigm, which borrows from an overall shift in "habits of thinking" over the last century and which casts aside "the old stimulus-response, subject-object, individual-social dualisms" (pp. 1364-1365). Under the new paradigm, "the human being is seen as part of nature, continuously in transaction with an environment -- each one conditions the other" (p. 1365).

Rosenblatt describes conversations, or "linguistic transactions," as "temporal" activities. "Each person has come to the transaction with an individual history, manifested in what has been termed a linguistic-experiential reservoir" (p. 1367).  Readers of Mercer might recognize the "reservoir" as a "frame of reference."

Similar to the way American trial judges reject linguistic interpretations of recorded dialogue in favor of one-shot "common-sense" understandings (Mercer, p. 36), transactional theory has its own critics, who dismiss reader-response as too subjective, an "anything goes" approach to interpretation.

There is a competing school of thought, commonly called "close reading," in which a text is presented stripped of its contextual layers. During close readings, students are taught to attend only to textual elements (words, images, language devices) as sources of meaning. It may be too pat to suggest that conversation analysis has its own parallel form of literary theory in close reading, but the similarities are interesting.
References:
Mercer, N. (2000). Words and minds: how we use language to think together. New York: Taylor & Francis e-Library.
Rosenblatt, L.M. (2004). The transactional theory of reading and writing. In R.B. Ruddell & N.J. Unrau (Eds.), Theoretical Models and Processes of Reading (5th ed., pp. 1363-1398). Newark, DE: International Reading Association.  
Ruddell, R.B. & Unrau, N.J. (2004). Introduction: Models of reading and writing processes. In R.B. Ruddell & N.J. Unrau (Eds.), Theoretical Models and Processes of Reading (5th ed., pp. 1116-1126). Newark, DE: International Reading Association.  
Starks, H. & Trinidad, S.B. (2007). Choose your method: A comparison of phenomenology, discourse analysis, and grounded theory.  Qualitative Health Research, 17(10), 1372-1380.


August 24, 2011

Reading notes on Mercer, chapters 1-4

While reading the first few pages of Mercer's Words and Minds, I was reminded of the old cliché of "building a plane while flying it." Communication between individuals carries as much potential for being productive -- and enthralling -- as it does for being disastrous.


 "Building" origami paper airplanes. Patrick Ryan/Lifesize/Getty Images
Instead of planes or flight paths, Mercer's preferred image is a "track." He compares two people in conversation to "operators of some strange, dual-controlled track-laying vehicle called 'language.'" The process of "joint contextualizing can be done well or badly" (p. 25).  [Note: Page numbers may be off; I'm still learning how to cite from an eBook that doesn't have "fixed" page numbers.]

Hints of the epistemological foundation of discourse analysis (DA) come early and fast in Words and Minds, although Mercer doesn't actually mention DA until Chapter 4.

[Image: Photo by Samurai at FreeDigitalPhotos.net]
Mercer quickly rejects the positivist orientation that regards language as "transmitting ideas in a precise, unchanged form" (p. 5) and tells us that  "language is not only used to enable joint thinking about a problem, language use itself may create a problem to be resolved" (p. 12). Language is a tool, and one that we humans often misuse or abuse.  Mistakes happen. Problems of application or interpretation may themselves become the object of study. 

I like the empowering stance that Mercer assumes when he asserts that nothing is fixed in language, including our interpretations. There is that bleak worldview that regards language as just some rhetorical bludgeon wielded by politicians, professors, partisan media outlets,  and other powerful elites. "Fortunately," Mercer assures us, listeners and readers have their own powers of perception and "knowledge resources." Through dialogue and asking questions, we get to decide our own interpretation of "powerful" texts (p. 82).

That vision may strike some readers as a little too rosy or idealistic. I believe it is possible, but it carries a huge educational imperative.  I've long been interested in the field of critical literacies and the work of organizations like the Center for Media Literacy (CML). CML publishes a "toolkit" based on Five Key Questions that Can Change the World. These are very similar to the questions Mercer lists on page 82.

My nascent understanding of DA is that it's a methodological tradition that, when done well, informs and improves our communication skills. I detect some social justice imperatives here: DA as a route to improving living and working conditions (such as in classrooms, my area of interest). I shared my thoughts with a more advanced DA student, who said, "Well, now you are getting into critical discourse analysis." Is that where I'm going? I have no idea.

Mercer acknowledges that his claim that language provides insight into how people think may seem "dubious" because "thinking" is not observable (p. 16). But the analysis of "interthinking," as Mercer calls it, is possible, and it's a lot easier than the work of some psychological researchers who focus on interior thought processes.

Interthinking occurs through dialogue between people, and we can systematically and rigorously draw inferences from "whatever information we can" (p. 16). Concepts to examine closely include: context (a "mental phenomenon," any available information that people bring to a situation), physical artifacts (the material context), conversational ground rules for different kinds of talk (generally tacit and culturally specific), and frames of reference (past cultural experiences, perspectives, world views).

Part of the rigor and high standards of DA emanates from its tangible products -- the transcripts. In the preface Mercer tells us that when presenting transcripts in the book, he used a modified format, not the standardized Jeffersonian one. I had started asking questions about the "analytical abstractions" of DA and its intended audience in last week's reflection, and I continue to wonder about the implications for audience when using Jeffersonian transcription.

If readers of a Jeffersonian transcript already know the symbols or are provided a detailed key, they can almost "hear" the text (Evers, 2011), but it seems that only experienced transcriptionists and conversation/discourse analysts would ever willingly attempt to read such a transcript.  At first glance, these documents are quite off-putting, almost like another language, and couldn't that be a barrier to reaching the audience whose communication skills you are hoping to improve? 
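To make the point concrete, here is a short exchange I invented for illustration, rendered with a small subset of standard Jeffersonian symbols:

```
A:  so you've read the who:::le transcript (0.4) a:ll of it?
B:  we::ll (.) mo[st of it]
A:              [°mm hm° ]
B:  I skipped the appendix
```

Key: (0.4) = timed pause in seconds; (.) = micropause; [ ] = overlapping speech; :: = stretched sound; ° ° = quieter speech. Even this simplified sample takes deciphering; full Jeffersonian transcripts add markers for pitch, emphasis, in-breaths, and more.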

Even Mercer refers to the products -- the transcripts -- of conversation analysis as having "a dauntingly technical quality" (p. 57). I agree, so I guess I'm wondering about presentation of findings, and if it's common for analysts to carefully consider the desired target audience and modify their transcripts accordingly prior to publication, as did Mercer.

I'm also thinking about connections between DA and literary theory, and I will try to post those thoughts soon. [See What is the discourse analysis and literary criticism connection?]

References:

Evers, J. C. (2011). From the past into the future. How technological developments change our ways of data collection, transcription and analysis. Forum: Qualitative Social Research, 12(1). Retrieved from http://www.qualitative-research.net/index.php/fqs/article/viewArticle/1636

Mercer, N. (2000). Words and minds: how we use language to think together. New York: Taylor & Francis e-Library.


August 18, 2011

Discourse Analysis: questions, reflections, reservations

A few days ago I received an email from a fellow doc student who was considering taking Discourse Analysis, a research methods course I will be taking this semester. She was wondering if discourse analysis would be the right methodological "fit" for her.

Fact is, I'm wondering that for myself, too.

Because I would love to have my colleague in the course with me, I tried to convince her to take it. In my email reply I said, "My hunch is discourse analysis could be an overall method of study or could just inform your method of data collection, transcription, and/or analysis along the way.  I definitely recommend you keep taking methods courses, or what you learned in intro to qual will slip away."

That's all I could think to say because I really don't know much about discourse analysis as a qualitative research tradition.

I wish I could say, "I'm taking Discourse Analysis because my committee thinks it would be a good idea." Or, "It matches my dissertation proposal." Or, "It fits my research question."  I would first need a committee, a dissertation topic, and a research question for any of that to be true!

On the plus side, I was an English major and journalism minor as an undergraduate, and I'm a former high school English teacher who loves to, well, talk! (I've been told by well-meaning friends who like me that I could carry on a conversation with a tree stump.) I don't require much arm-twisting to take a course that focuses on language use.

So, it's established I am interested in language and qualitative research methods.  Is that reason enough to take a semester of Discourse Analysis? The reading selections for the first week leave me with more questions than answers. 

Silverman claims the preparation and study of transcribed conversation is an absolutely worthwhile endeavor, but "without a way of defining a research problem, even detailed transcription can be merely an empty critique. Thus we need to ask: what sort of features are we searching for in our transcripts and what approach lies behind this search?" (p. 166).

To prepare transcripts with the attention to detail required by discourse analysis, it seems you would have to have a clear sense of purpose.  A well-defined research question.  I have neither of these. What I do have is an idea for a pilot study about what would happen if literacy teachers in the UT Reading Education program were allowed to choose and adapt digital and multimedia storytelling tools for purposes of representing case study data.  The current manner in which case studies are reported and shared is fraught with technical difficulties, so I proposed an intervention in the form of a design experiment. 

In my first pass at a proposal for this research project I made no mention of language-in-use or its role in the "creation and maintenance of social norms, the construction of personal and group identities, and the negotiation of social and political interaction" (Starks & Trinidad, 2007, p. 1374). Is it possible, or even appropriate, to retrofit my research idea with a discourse analysis bent? Could I take a segment of recorded audio from my desired participant population and "get started" with "repeated, careful listening" (Silverman, p. 163)?

I am also wondering about intended audiences for discourse analysis and the level of "analytic abstraction" (Starks & Trinidad, p. 1377) required in the final write-up of these studies. I want to collaborate with real teacher/learners, using 21st-century digital tools to solve a real problem in practice and produce research that reaches a practitioner-based audience, not policy makers.


August 4, 2011

Reflection on web conferencing and Green Eggs and Ham

I came home from class last night to one sick kid and another kid who wanted Green Eggs and Ham read to her. Luckily, I have a domestic partner (who usually has both kids tucked away in bed asleep when I get home). He took on the tummy ache, while I tackled the umpteenth reading of Dr. Seuss' book.

[Image: from ManyEyes]
I'm spending too much time with this EP604 stuff because, as I was reading the story aloud, I couldn't help but relate issues of digital tools, collaborative research methods, and alternative forms of representation to Sam's persistent and steadfast pursuit of that curmudgeonly "other" guy. 
Just, "Try them! Try them!"

Thing is, sometimes I am Sam, and sometimes I am that other guy.    

In the days leading up to our demo session in Centra, which was last night, I was feeling a little like the "other." For reasons I hope I made clear in the second paragraph of this post, staying home in my jammies plugged into a router for four hours was not entirely feasible.  So, I packed up and headed off to my lonely, gray, ice-cold cubicle at the College of Education. Fun.

Online, collaborative workspaces scare me a bit. So many things can go wrong. And sure enough, over the course of the evening, my audio seemed not to work, my Internet connection failed, and my ears literally ached under my headset.  I was so tense after four hours of being plugged in, powered up, and logged in.

Minor constraints, I suppose, considering all that we accomplished in the virtual classroom: class surveys with instantaneous feedback, Internet "safaris," break-out and whole group discussion sessions that were nearly -- if not completely -- as effective as face-to-face.  

But the single greatest affordance of web conferencing I observed last night was how it can dissolve geographic and cost barriers that frequently prevent cross-disciplinary interaction and access to outside field experts.  Our guest speaker joined us from hundreds of miles away to provide a visual demonstration of her research process.  She also contributed insights and elaborated on her past experience in transforming research findings into a performative text.  I am pleased the Centra session was recorded because I hope to access it and listen to it again in the near future.  And that's another affordance.

The whole experience last night was a good example of teaching about and through digital tools in a safe, non-threatening way that stretched my thinking and pushed me out of my comfort zone (quite literally). 

The same theme carried over into the readings and examples provided on artistic and visual representations of research findings. To quote Woo (2008), "...[W]e should go ahead and challenge our own parameters to create possibilities that might not have been previously imagined" (p. 327). I realize she is speaking specifically here about winning over "traditionalists" who resist arts-based research, but her sentiment brought me full circle back to the broad mandates posed by authors of earlier articles we've read in EP604. As a student, teacher, and novice researcher I must acknowledge the implications of living and working in a digitally mediated society (Brown, 2002; Garcia, 2009).


August 2, 2011

Reflection on transcription in early literacy contexts

[Image: Photo by arztsamui at FreeDigitalPhotos.net]
Last night's hands-on work with Transana combined with my recent, background research about transcription have me wondering about my next big project with digital tools: using transcription software to prepare audio data from a Kindergarten literacy study.
  
Per my EP604 instructor's request, I have been doing some reading and reviewing of the literature on transcription, looking for methodological and theoretical insights that might inform my process. How should I approach this transcription job considering the context of the study?

The study in question is designed to assess young children’s incidental literacy learning as a result of exposure to and participation with moving picture books for children, aka "eBooks." Three examiners performed pre- and post-interviews with Kindergartners in which the students' interactions with traditional, print-based picture books were audio recorded. 

The design for this study is modeled after earlier studies performed by Sulzby (1985) and de Jong & Bus (2002, 2004).  So, I began my inquiry by taking a closer look at the methods sections within these articles. I am finding scant theoretical or procedural information provided about transcription, just as Lapadat & Lindsay (1999) asserted.

de Jong & Bus (2002) make no mention of how their transcripts were derived, saying only that they made use of "verbatim transcripts" during coding (p. 148).  However, it is clear from their description of data analysis that the transcripts included children's verbal and nonverbal cues for attending to text (for example, sounding out words as well as finger-point reading). In their 2004 study, the authors simply indicate that "verbatim transcripts" were analyzed (p. 387).  

In both studies, de Jong & Bus videotaped all book-reading sessions with the children. I can only assume that the videos (and perhaps extensive fieldnotes) were tremendously helpful for fleshing out the transcripts. To me, this suggests broad implications and limitations for the current study in which I am involved, where no video data were collected due to significant IRB hassles.

Next, I checked out the 1985 article in which Sulzby describes findings from two studies that demonstrate a developmental pattern of children's emergent pre-reading skills.  She gives a fairly concrete description of her transcription procedure, even listing the symbols used to code the transcripts to indicate such things as finger-point reading, rise and fall of intonation, and phonetic spellings of attempted words.

Sulzby worked first from audio, then expanded the transcripts based on video and fieldnotes to include "descriptions of non-verbal behaviors coded to book pages and activities" (p. 476). According to Sulzby, the examiners who conducted and recorded the emergent readings were responsible for transcribing the tapes for each session. Each transcript was then checked by two other examiners.

That sounds like an ideal context in which to apply Transana software. Had Sulzby performed her study today, I imagine she and her research assistants would have found the multi-user version of Transana to be very helpful, given their commitment to the use of video and their collaborative process in which the transcripts were "continually checked during various analyses" (p. 476).

As I have no video to work from and very poor fieldnotes, I am continuing with my initial choice of InqScribe software for purposes of transcribing the Kindergarten eBooks study.

However, I am curious about making more deliberate use of conventional transcription symbols (in addition to the slashes [ // ] I am already using to indicate the children's phonetic pronunciations). One distinct advantage that Transana offers over InqScribe is the integration of some basic symbol buttons in the text editor. I should make a more concerted effort toward delineating pauses, changes in tone, and other vocal noises. According to Bloom (1993), "Studying language in context, and studying the development of language in the context of other developments in the child, require that we preserve far more than just the spoken word in the record we make of the data we collect" (p. 164).

For that reason alone, I wonder if Transana would not be an overall better choice for any project involving children's voices, regardless of the mode of data collection.

References (I am trying out the drag-and-drop function of Zotero and Mendeley!):
Bloom, L. (1993). Transcription and coding for child language research. In J. A. Edwards & M. D. Lampert (Eds.), Talking data: Transcription and coding in discourse research (pp. 149-166). Hillsdale, NJ: Lawrence Erlbaum Associates.


De Jong, M. T., & Bus, A. G. (2002). Quality of book-reading matters for emergent readers: An experiment with the same book in a regular or electronic format. Journal of Educational Psychology, 94(1), 145. Retrieved from http://psycnet.apa.org/journals/edu/94/1/145/

De Jong, M. T., & Bus, A. G. (2004). The efficacy of electronic books in fostering kindergarten children’s emergent story understanding. Reading Research Quarterly, 39(4), 378-393. Retrieved from http://www.reading.org/Library/Retrieve.cfm?D=10.1598/RRQ.39.4.2&F=RRQ-39-4-de_Jong.html


Lapadat, J. C., & Lindsay, A. C. (1999). Transcription in research and practice: From standardization of technique to interpretive positions. Qualitative Inquiry, 5(1), 64-86.

Sulzby, E. (1985). Children’s emergent reading of favorite storybooks: A developmental study. Reading Research Quarterly, 20(4), 458-481. Retrieved from http://www.reading.org/Library/Retrieve.cfm?D=10.1598/RRQ.20.4.4&F=RRQ-20-4-Sulzby.html




August 1, 2011

Off the fence at last? Conducting a PDF metadata extraction experiment

[Image: Photo by twobee at FreeDigitalPhotos.net]
As a PhD student entering her third year of studies, I think it's time for me to get off the fence about reference management software. I've muddled through the last two years, cobbling together reference pages in APA style and exploring, but never fully committing to, Zotero.

When I started my PhD program in Fall 2009, I chose to do my first literature review with open-source Zotero instead of the proprietary EndNote software that my institution supports.

I learned to love open-source applications when I took a course from Dr. Jay Pfaffman as an Instructional Technology master's student. With Zotero's Firefox plugin, I could create and own my own bibliographic database that synced with my Zotero web account, meaning it was accessible from any computer with an Internet connection.

I finished the literature review using Zotero to organize and tag my files and then automatically generate an APA-formatted bibliography. But under the crush of my course load and the multitude of distractions and obligations that go with doctoral-level work, I never gave myself the time to explore Zotero's interface and documentation.  I never used Zotero to take notes on my resources, nor did I take advantage of its word processor plug-ins for cite-and-write functionality. It all seemed so complicated.

I was printing, photocopying, underlining, annotating, and sticky-noting mounds of literature and basically using Zotero as the digital equivalent of 3x5 bib cards, which most people of a certain age can remember from their high school and college English classes. In and of itself, the ability to automatically generate a list of works cited is a nice thing, but is that enough added value to be "worth investing money and time in" (Hensley, 2011, p. 205)? I'm pretty sure I could alphabetize my bib cards and word-process my bibliography the "old-fashioned" way in about the same amount of time it takes to fool with Zotero.

Now, in EP604 we are learning about the next generation of citation management software. These programs, including Zotero, aspire to do more and may possibly alter the entire academic research experience.  Zotero, for instance, has released an alpha version of a free-standing desktop application as part of the larger Zotero Everywhere project. Meanwhile, Mendeley, a commercial, cross-platform application, is already just about everywhere, with desktop, web, and mobile apps.

Both Zotero and Mendeley draw on social media functionality to provide a collaborative platform for public and private research groups.  But Mendeley has upped the ante with what it calls "Knowledge Discovery," which draws on readership statistics to predict research trends and to push new content out to users.  Another step toward the Googlification of everything.

Mendeley claims to be the "world's largest crowd-sourced library," and I am interested in the impact of collaboration and social networking on the research experience. But for the moment, I have more pressing needs. I want a citation management tool that will integrate seamlessly with my conversion to a paperless workflow and that will help me rein in and organize two years' worth of scattered resources.

Ideally, I would like a tool that functions both as a document reader and a citation manager, but I am not at all impressed with either Mendeley's or Zotero's annotating capabilities.  To make matters worse, the Mendeley iPad app repeatedly crashes even after being uninstalled and re-installed. (Zotero doesn't even have an iPad app, although one appears to be in the works). I've resolved this issue by using a different iPad app to annotate and export "flattened" PDFs to Dropbox, a process I will describe in more detail in a future post.

If I outsource reading and annotating to an iOS app and put networking and collaboration on the back burner for the time being, that leaves me with the same basic question about reference managers that Aaron Tay posed last year on his Musings about Librarianship blog: "How good are they at figuring out citations from PDFs?"  Tay ran a series of "non-scientific tests" to see how well EndNote, Mendeley, WizFolio, and Zotero ingested a collection of 10 PDFs he downloaded from the Internet.
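Under the hood, tools like these appear to rely on a mix of embedded PDF metadata and heuristics such as scanning the extracted text of the first page for a DOI, which can then be resolved to a full citation record. As a rough illustration of the heuristic side (the pattern and function name here are my own sketch, not anything from Zotero's or Mendeley's actual code), sniffing a DOI out of extracted text takes only a few lines of Python:

```python
import re

# Loose pattern for modern DOIs: "10.", a 4-9 digit registrant code,
# a slash, then a suffix drawn from the characters DOIs commonly use.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def find_doi(page_text):
    """Return the first DOI-like string in extracted PDF text, or None."""
    match = DOI_PATTERN.search(page_text)
    return match.group(0) if match else None
```

For example, `find_doi("doi:10.1016/j.compedu.2011.04.001 elsewhere")` returns "10.1016/j.compedu.2011.04.001". A real extractor would still have to look the DOI up in a service like CrossRef to get the authors, title, and page numbers, which is presumably where the "pass/partial/fail" variation comes from.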

Using five bibliographic fields (article title, author, publication year, journal volume and issue number, and page numbers), Tay evaluated the results for each of the ten articles. A "pass" meant the software extracted correct information for all five fields, a "partial" indicated at least one field was satisfied, and a "fail" meant no bibliographic information was found. EndNote and WizFolio each had five "fails," and Mendeley and Zotero had a respectable combination of "passes" and "partials," with Zotero having the most passes of all.
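Tay's rubric is simple enough to state as code. This sketch is my own paraphrase of it (the field names and dictionary shape are assumptions, not Tay's), assuming each tool's output has been normalized into a dictionary keyed by the five fields:

```python
FIELDS = ("title", "authors", "year", "volume_issue", "pages")

def grade_extraction(extracted, reference):
    """Apply Tay's pass/partial/fail rubric to one article:
    'pass' if all five fields match the hand-checked citation,
    'fail' if none were extracted correctly, otherwise 'partial'."""
    hits = sum(1 for field in FIELDS
               if extracted.get(field) and extracted.get(field) == reference[field])
    if hits == len(FIELDS):
        return "pass"
    if hits == 0:
        return "fail"
    return "partial"
```

Running each tool's ten records through a function like this would reproduce the tallies Tay reported.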

Since a year has gone by with many upgrades and fixes along the way, I thought it would be interesting to try a scaled-down version of Tay's metadata experiment.  I used four articles from my desktop representing a range of years (from 1989 to 2011) and what I hoped would be a range of PDF versions, with and without DOIs, etc. I focused exclusively on Mendeley and Zotero; other than that, I followed all the steps that Tay described in his original post.

To make it more interesting, I used both applications' drag-and-drop capability to insert formatted citations straight into the text editor of this blog. I had never tried this before with either Mendeley or Zotero and was eager to see how it worked. (BTW, the drag-and-drop piece was super easy with both applications, but I prefer Zotero's split-screen format, which integrates with the Firefox browser, over resizing the Mendeley Desktop window.)

For comparison purposes, I copied and pasted my manually formatted APA citations first, with the Mendeley and Zotero citations following. I changed the text color within citations to indicate deviations from APA or missing information.

Manually formatted APA citations:  
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.
Mishra, P. & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.
O’Bannon, B. W., Lubke, J. K., Beard, J. L., & Britt, V. G. (2011). Using podcasts to replace lecture: Effects on student achievement. Computers & Education, 57, 1885-1892. doi:10.1016/j.compedu.2011.04.001
The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92.

Mendeley citations (note automatic insertion of some DOIs):
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated Cognition and the Culture of Learning. Educational Researcher, 18(1), 32(missing page range). doi:10.2307/1176008
MISHRA, P., & KOEHLER, M. J. (2006). Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge. Teachers College Record, 108(6), 1017-1054. doi:10.1111/j.1467-9620.2006.00684.x
O’Bannon, B. W., Lubke, J. K., Beard, J. L., & Britt, V. G. (2011). Using podcasts to replace lecture: Effects on student achievement. Computers & Education, 57(3), 1885-1892. doi:10.1016/j.compedu.2011.04.001
No author. A pedagogy of multiliteracies : Designing social futures. (1996). Library. (No journal, volume or issue number, or page numbers)

Zotero citations (note automatic double-spacing):
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational researcher, 18(1), 32(missing page range).
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers college record, 108(6), 1017(missing page range).
O’Bannon, B. W., Lubke, J. K., Beard, J. L., & Britt, V. G. (2011). Using Podcasts to Replace Lecture: Effects on Student Achievement. Computers & Education. (No volume or page numbers)
Pedagogy+of+Multiliteracies_New+London+Group.pdf. (n.d.). . Retrieved from http://vassarliteracy.pbworks.com/f/Pedagogy+of+Multiliteracies_New+London+Group.pdf (Hmmm. Just really messed up!)

So, should I stick it out with Zotero or make the leap to Mendeley?

I like that Mendeley located and added the DOIs for three out of the four documents. I like how Zotero automatically double-spaces. The 1996 article by The New London Group is problematic all the way around and perhaps should not have been included in my little "experiment," but it is a seminal writing in the field of literacy and I will need it in my web-based library at some point. Another limitation is that I did not include at least one example of a conference proceeding to see how the two reference managers performed in that situation. I probably should run another test with a proceedings paper before I choose a tool.

Or, do I even have to choose? According to Julie Meloni at the ProfHacker blog, it is easy to import Zotero resources to Mendeley, and, "Given the syncing abilities, it would be possible (and not terribly difficult or time consuming) to, say, work with Zotero as your primary tool yet sync with Mendeley so as to increase the content in your field and just add to the community in general."

What would you do?