
Notes on: The coding manual for qualitative researchers – Saldaña Chapter 1

Fair warning – this is very much just a post for me, more about how I store my notes to search later than about publicly examining this highly regarded book about qual data analysis.

Open coding -> Axial coding

‘A code is a researcher-generated construct that symbolises or “translates” data’ (P.4)

Can be a single word capturing the primary topic of a paragraph – e.g. security – even if that word isn’t used

When a code is taken from the transcript, it is put in quotes and is an in vivo code

Eclectic coding – open ended process

Decoding – reflecting on text to decipher the core meaning
Encoding – determining and applying an appropriate label

Patterns demonstrate habits, salience and importance in people’s daily lives (routine, ritual, rules, roles, relationships) (P.6)

Simultaneous coding – multiple codes applied to the same text, indicating that one theme is part of a larger theme

Data cannot always be precisely boundaried – it is fuzzy
Characteristic patterns:
Similarity – things happen the same way
Difference – they happen in predictably different ways
Frequency – happen often or seldom
Sequence – happen in a certain order
Correspondence – happen in relation to other activities
Causation – One appears to cause another

Patterns aren’t the only show in town
Anomalies and deviations can also intrigue us
It is ok to have stray/orphan codes

My theoretical lens may shape the codes I use

Coding is a cyclical act – first cycle of coding rarely gets it right. Can be a 2nd, 3rd even 4th cycle of recoding

[I think I need to do some of my survey analysis in tandem with interview coding. I’m also curious if I have used data about blended roles in the survey – closest may be % association with a role. Quartiles/Quintiles? (or above 50%)
The whole knowledge/activity thing seems particularly relevant]

Saldaña – Codes are essence-capturing; you cluster them together by similarity and regularity (i.e. a pattern) to develop categories and thus an analysis of their connections.

Analysis is searching for patterns in data and ideas that explain why these patterns are there

[What can I take from interviewees’ survey responses to inform this analysis – working on the assumption that survey takers are trying less hard to write their own hero narrative]

Grounded theory approach to coding – Initial -> Focused -> Axial

Harding says some codes can be applied to multiple categories. This conflicts with domain or taxonomic coding but works with ‘fuzzy sets’, which acknowledge overlaps. (The risk, if overused, is weakening category boundaries.) P.11

[Did I ask enough/anything about edvisor’s personal strategies for making work relationships better? Bit of quals/accreditation maybe. This possibly should have been informed by the main survey]

Data -> Code (+ subcode) -> Category (+ subcategory) -> Themes/Concepts -> Assertion/Theory

Theory comes from the interrelation of themes and concepts
(but it doesn’t always have to – we can also apply existing theory to the process)

Themes can be outcomes of coding but shouldn’t be the code itself. Codes should be more explicit and descriptive.

I will likely see themes as I code – just put it in Scrivener as an analytical note and move on.
(Other phenomena may also emerge, depending on the approach, like participant processes, emotions and values)

Jess has recommended ‘open coding’ as my first step – Saldaña doesn’t list this specifically but I think ‘eclectic’ coding is the closest version. Also seems recommended for ECRs

Interviewer questions/prompts/comments aren’t coded
BUT
if the interactions are significant – e.g. meaning making – it may be appropriate.
[I can think of times I said something and they agreed and said it added to their thinking]

Code irrelevant sections as N/A (not applicable)

Code my own reflective notes during interviews and transcription [I probably need to find a way to make stuff from me clear]

Preparing data for coding

For manual (pen and paper) coding, format the page so there is a good wide (50%) white space on the right to add codes and notes.

Break the text into digestible stanzas

Abbreviate participant names to an initial

Put non-code bits (e.g. my questions and comments) into brackets

This can also still have value in NVivo but I should see what the software needs.

Pre-coding

So I have done some of this in highlighting/copying key quotes while fixing the transcript.
When bringing these into NVivo, I should code all of these bits as QUOTE to make them easier to find.

It may also be worth me putting all of my text into italics.

The Word doc for preliminary jottings could have 3 columns:
Raw data (transcript) | Preliminary code | Final code

Keep a page with research questions, theory framework, study goals, main issues etc at hand to stay on track

Questions to consider as I review the transcripts:

  • What are people doing/trying to accomplish?
  • How exactly do they do this? What specific strategies do they use?
  • How do they talk about, characterise and understand what is going on?
  • What assumptions are they making?
  • What do I see going on here?
  • What did I learn from these notes/transcript?
  • Why did I include them?
  • How is what is going on here similar to or different from other interviews?
  • What is the broader import or significance of this incident or event? What is it a case of?
  • What strikes me?
  • What surprised me? (to track my assumptions)
  • What intrigued me? (to track my positionality)
  • What disturbed me? (to track tensions with my values)

Coding contrasting data

The codes from the 2nd transcript may make me go back and tweak those for the 1st, so code a contrasting data source (e.g. don’t do all ETs in a cluster, go AD – ET – LD etc)

Lumper vs splitter coding – a lumper uses minimal codes for a section, catching the essence of the category; a splitter codes more line by line, giving greater detail, but this may be overwhelming.

How many final codes / categories / concepts?
Huge variance in the literature about this:
Codes – 30-40 OR 80-100 OR 50-300
Categories – 15-20 OR 25-30
Concepts/Themes – 3 or 5-7

[Do I have text questions in the survey that could or should be coded? Did I already do that informally in Survey 1? Do I need to describe that process better in my Methods section?]

Quantitizing the Qualitative

Generally reducing codes and categories to quant data isn’t needed but it can have value in corroborating quant findings from the survey (maybe from an aca/prof, role type, gender perspective?)
This is paradigmatic corroboration and can add trustworthiness. P.27
Look for quant data in the survey with statistical significance first.
Hypothesis coding is designed to test differences between 2 or more participant groups P.27
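As a minimal sketch of what that group comparison could look like once codes are quantitized – the groups, the code, and all counts below are invented placeholders, not study data, and the chi-square is hand-rolled rather than anything NVivo-specific:

```python
# Minimal sketch: quantitizing code frequencies to compare two participant
# groups (hypothesis-coding style). All counts and group labels below are
# invented placeholders, not real study data.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    cells = ((a, rows[0], cols[0]), (b, rows[0], cols[1]),
             (c, rows[1], cols[0]), (d, rows[1], cols[1]))
    # Sum of (observed - expected)^2 / expected over the four cells
    return sum((obs - row * col / n) ** 2 / (row * col / n)
               for obs, row, col in cells)

# e.g. participants tagged / not tagged with a hypothetical code
stat = chi_square_2x2(12, 8,   # group A: coded, not coded
                      5, 15)   # group B: coded, not coded
# Critical value at p = 0.05 with 1 degree of freedom is 3.841
print(f"chi2 = {stat:.2f}, significant at 0.05: {stat > 3.841}")
```

In practice I’d lean on scipy.stats.chi2_contingency (or whatever NVivo/Dedoose offer) rather than hand-rolling this, since that also returns a p-value.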

[Check what statistical tools NVivo has – Dedoose is also suggested]

Make a codebook / code list
Separate file – may be done in NVivo though
Code-Description-Data examples for reference
Could also include inclusion/exclusion criteria and atypical examples
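A minimal sketch of what that codebook could look like as a separate CSV file – the field names, the QUOTE entry’s wording, and the example text are my own placeholders:

```python
# Minimal sketch of a codebook kept as a separate CSV file.
# Field names and all wording are my own placeholders.
import csv
import io

FIELDS = ["code", "description", "inclusion", "exclusion", "example"]

codebook = [
    {"code": "QUOTE",
     "description": "Key quotation flagged while fixing the transcript",
     "inclusion": "Verbatim passages worth citing later",
     "exclusion": "My own questions and comments",
     "example": "placeholder quote"},
]

buffer = io.StringIO()  # swap for open("codebook.csv", "w", newline="")
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(codebook)
print(buffer.getvalue())
```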