Categories
how-to Interviews methodology

Thoughts on: Focus on Methodology: Eliciting rich data: A practical approach to writing semi-structured interview schedules (Bearman, 2019)

Well that’s a lot of colons.

This is just a quick post but I found this paper really insightful and accessible and wanted to share it. In a nutshell, Bearman lays out some sensible practical tips for getting the most from semi-structured interviews.

She starts with a general outline of what qualitative data offers in a research project in terms of providing insights into human experiences and behaviour that raw stats struggle to provide.

For this reason, she effectively hammers home the point that the questions used in this kind of interviewing need to be framed in such a way as to draw on a participant’s personal experiences, ideally tied to specific points in time rather than more generalised opinions, as it is the former that can yield richer descriptive data.

She spends some time outlining the hows and whys of creating more open questions that spark discussion and invite the participant to answer more in their own voice.

She neatly summarises the key ideas here:

Ten Heuristics for Interview Schedules That Elicit Rich Data
1. Know your phenomenon of interest.
2. Aim for experiences more than opinions.
3. Start with a good warm-up question.
4. Brainstorm around the experiences you want to know about.
5. Use open-ended questions.
6. Consider the valence of your questions.
7. Leave space for interviewers to improvise; probes can help.
8. Start concrete and easy, finish with abstract and hard.
9. Final reflections offer opportunities for interviewee open comment.
10. Pilot, adjust the schedule and pilot again.

As someone right at the point of working on my semi-structured interview questions, this article was immensely valuable. (Thanks Dwayne Ripley for sharing)

Categories
case study methodology

Thoughts on: Five misunderstandings about case-study research (Flyvbjerg, 2006)

One of the things that I’ve noticed as I explore the scholarly world is that there appear to be as many different ways to do research as there are researchers. Every time I’ve discussed my research with someone, they seem to have had a different take on the best way to do it. This, I guess, comes down to their experiences and how they would do it if it was their project, based on their way of seeing the world and the knowledge within. It shouldn’t surprise me then that, as people make these approaches and paradigms part of their identity, they can get strangely passionate and maybe even political about ‘the right way to do things’. (Not everyone, mind you, but more than a few.)

Which brings us to Flyvbjerg and his take on the value of case studies in qualitative research. Rather than simply talking through the nature and merits of the case study as a way of understanding something, the author positions it in opposition to common criticisms of this form of research. Kind of a Mythbusters for qualitative research, I guess.

To be frank, I’m still getting my head around what research is, so rather than follow him down this rabbit-hole in depth, I’m just going to share the parts of this that stood out the most and that got me thinking about what I want to do. A significant part of the thrust of the paper seems to lie in whether we can be confident that a case study tells us something meaningful about the world. He comes back several times to a larger philosophical tension between case studies and large-scale quantitative research that seeks to prove a hypothesis or demonstrate the existence of things that in combination add up to something meaningful.

In addition, from both an understanding-oriented and an action-oriented perspective, it is often more important to clarify the deeper causes behind a given problem and its consequences than to describe the symptoms of the problem and how frequently they occur. Random samples emphasizing representativeness will seldom be able to produce this kind of insight; it is more appropriate to select some few cases chosen for their validity. (p.229)

For me, the main points of contention are:

  • Is this simply a one-off outlier that you are describing or is this a situation that is likely to be seen repeatedly? (Generalisability)
  • What does the fact that the researcher chose this particular case to study mean in terms of its independence or representativeness? (Verification bias)
  • Is it possible to extract meaningful truths from this story? (Ability to summarise findings)

Generalisability

Flyvbjerg contends that looking at one case can indeed tell us a lot. The idea of falsification is, in essence, that it only takes one example that contradicts a stated belief to change that idea.

The case study is ideal for generalizing using the type of test that Karl Popper (1959) called “falsification,” which in social science forms part of critical reflexivity. Falsification is one of the most rigorous tests to which a scientific proposition can be subjected: If just one observation does not fit with the proposition, it is considered not valid generally and must therefore be either revised or rejected. Popper himself used the now famous example “all swans are white” and proposed that just one observation of a single black swan would falsify this proposition and in this way have general significance and stimulate further investigations and theory building. The case study is well suited for identifying “black swans” because of its in-depth approach: What appears to be “white” often turns out on closer examination to be “black.” (p.227-228)

Verification bias

In some ways, the other side of this is what we learn when the things that we didn’t expect to happen, do. Flyvbjerg seems to feel that this is a fairly compelling counter to the idea that researchers conducting case studies choose the cases that are most likely to match their hypotheses, noting that we learn much more when the unexpected occurs.

A model example of a “least likely” case is Robert Michels’s (1962) classical study of oligarchy in organizations. By choosing a horizontally structured grassroots organization with strong democratic ideals—that is, a type of organization with an especially low probability of being oligarchical—Michels could test the universality of the oligarchy thesis; that is, “If this organization is oligarchic, so are most others.” A corresponding model example of a “most likely” case is W. F. Whyte’s (1943) study of a Boston slum neighborhood, which according to existing theory, should have exhibited social disorganization but in fact, showed quite the opposite. (p.231)

Summarising findings

Life is complex and not everything can necessarily be boiled down to basic truths. Flyvbjerg largely rejects the position that this is a weakness of case studies, instead valuing ambiguity:

The goal is not to make the case study be all things to all people. The goal is to allow the study to be different things to different people. I try to achieve this by describing the case with so many facets—like life itself—that different readers may be attracted, or repelled, by different things in the case. Readers are not pointed down any one theoretical path or given the impression that truth might lie at the end of such a path. Readers will have to discover their own path and truth inside the case. Thus, in addition to the interpretations of case actors and case narrators, readers are invited to decide the meaning of the case and to interrogate actors’ and narrators’ interpretations to answer that categorical question of any case study, “What is this case a case of?” (p.238)

I’m not sure that this level of ambiguity sits comfortably with me but I can see value in the case study as a whole. In terms of my own work, there’s a final additional quote that I like that speaks to the idea of research undertaken by practitioners – something I have noticed as somewhat of a gap when it comes to research about edvisors.

Here, too, this difference between large samples and single cases can be understood in terms of the phenomenology for human learning discussed above. If one, thus, assumes that the goal of the researcher’s work is to understand and learn about the phenomena being studied, then research is simply a form of learning. If one assumes that research, like other learning processes, can be described by the phenomenology for human learning, it then becomes clear that the most advanced form of understanding is achieved when researchers place themselves within the context being studied. Only in this way can researchers understand the viewpoints and the behavior, which characterizes social actors. Relevant to this point, Giddens (1982) stated that valid descriptions of social activities presume that researchers possess those skills necessary to participate in the activities described:

“I have accepted that it is right to say that the condition of generating descriptions of social activity is being able in principle to participate in it. It involves “mutual knowledge,” shared by observer and participants whose action constitutes and reconstitutes the social world.” (Giddens, 1982, p. 15)

(p.236)


Categories
methodology reflection

Research update #59: I’m back – what did I miss?

Photo by Andrea Piacquadio from Pexels

I took a little time off – as it appears many of my fellow candidates are – due to the plague and the impact it is having on, well, everything. Work in the online education space has been frantic and it seemed like a good time not to try to do too much.

One thing that I’m very conscious of now is that the role and value (or at least, hopefully, perceptions of the value) of edvisors have changed. I know this will impact what I’m looking at but it’s not really clear yet how. Academics are absolutely far more aware that we exist and largely seem to be appreciative of this fact. What does this mean for my main research question?

What strategies are used in HE to promote understanding of the roles and demonstrate the value of edvisors among academic staff and more broadly within the institution?

To be honest, I’ve been thinking for a while now that this isn’t the right question anyway. It doesn’t explain why I’m doing this research (the problem) and it moves straight into looking for a narrow set of solutions for an assumed problem. This problem being that academics and management don’t know what edvisors do or what they contribute. It also assumes that edvisors and edvisor units have the time, energy, skill or political capital to develop and implement formal strategies to address this.

The heart of the issue is really, to put it plainly, why don’t people respect our skills, experience and knowledge and take our advice seriously? Which seems possibly a bit pointed or needy as a research question but that’s not hard to tweak. So this is something that I’m thinking seriously about at the moment.

Something else is the fact that I’ve never been entirely happy with my methodology. Unfortunately, as someone who hasn’t done a lot of research before – at least at this scale – I’m dealing with a lot of unknown unknowns. How much data do you need for a good thesis? People have said to me recently that the best PhD is a done one, so maybe the question is just how much data do you need for a thesis – but I feel like if I’m putting in the time, it needs to be good.

Generally my approach when faced with a big project is to gather up everything that seems to have some value and throw it at the wall to see what sticks. Then it is just a gradual process of filtering and refining. The problem is that the scope of “everything” has expanded to cover edvisors across three roles, academics and leaders in potentially 40+ universities around Australia, as well as policy documents, job ads and position descriptions, organisational structures and whatever else crops up along the way. Given my ties to the TELedvisors community, I’d hope that this group will also play a substantial part of what I’m doing.

But maybe this can be done more cleverly.

Could there be enough material just in the edvisor community? Even in the TELedvisor community? (486 members and counting). I’d long felt that case studies were an interesting way to tell a story but lacked something authoritative. But I’ve been reading Five misunderstandings about case study research by Flyvbjerg (2006) and I’m starting to see the possibilities. (I think I’ll do a separate post about this)

If the world’s going to change, I might as well join in.

Flyvbjerg, B. (2006). Five Misunderstandings About Case-Study Research. Qualitative Inquiry, 12(2), 219–245. https://doi.org/10.1177/1077800405284363
Categories
ethics methodology politics Professional staff

Research update #57: Curly questions in ethics

I heard back about my ethics application a few weeks back – it’s mostly fine but there is a big question that I need to respond to before I can go ahead. It’s essentially to do with whether the institution or individuals in the institution are the real participants.

I want to work with key informants in edvisor roles in most (ideally all) of the universities in Australia to learn about their perceptions and experiences in these roles. That’s the easy bit. I also want to gather some rich empirical data about the numbers of people in these roles, both in central and faculty – and other? – teams, and how these teams are structured. That’s the hard part.

The ethics committee wants to know what I am going to do in terms of getting permission from the institution to collect this data. In hindsight, this is clearly something I should have given more thought to in the research design. While to me, this data doesn’t seem particularly sensitive, there’s all manner of university politics and other sensitivities surrounding this, apparently.

My feeling is that for this data to be truly meaningful, it needs to reflect all the universities. Otherwise it is just an average or an estimate. (Which is what most of the existing research I’ve found provides.) So what happens if some institutions don’t want to share? (I don’t really expect that to be the case but people being people, who can say?)

The logistics of obtaining permission are another challenge. Am I looking at one person in the institution (maybe a DVCA – but really I have no idea) or do I need to clear this with them and leaders in each individual faculty? Assuming 6 faculties per institution on average, 280 people? Clearly this isn’t practical.

There are a few things I’m going to follow up that will hopefully shed light on this. The Council of Australasian Leaders of Learning and Teaching (CAULLT) recently released a very useful environmental scan of professional learning in HE that captured some of exactly this data – though only in central teams from what I can tell. Hopefully the report’s author Kym Fraser can offer some advice on what they did in terms of permissions.

There are also some statutory reporting requirements that HE institutions in Australia have relating to reporting on staffing numbers to the government that might also demonstrate that permission isn’t needed. From what I’ve seen so far, this data doesn’t go into the level of detail that I need though and probably doesn’t cover organisational structures either. Most unis have Business Intelligence units that manage this kind of data – more so for internal use – so I’m also going to chat to them. I don’t think they will be able to make a call on permission but they may have a better idea where to go next.

Another significant question that the ethics committee has thrown up is whether universities will have issues with their staff spending a few hours as key informants on work that is outside their ordinary duties. I really have no answer to this – though I kind of wonder if this question would have been asked if it was academic staff that I was planning to work with. (I probably won’t say that in my response.) It does bring me back to the seeking-permission question/dilemma.

Have you had any experience with these kinds of questions? Got any tips?

Categories
methodology reflection research

#Research Update 54: No more excuses

(Caution – this is very rambly and introspective and I think I largely used this to tease out some ideas that seem quite obvious in hindsight. You can pretty safely skip this post, even if you sometimes find my other ones interesting)

A couple more months have passed since I went through my confirmation and while I’ve been letting ideas percolate and developing plans, it feels like there haven’t been enough pixels put to e-paper.

I caught up with Peter, who continues to assure me that I’m not aiming too high, and he said a few times that more than anything else, I need to be taking notes about everything. That was one thing that I was using this blog for and it is the thing that I am returning to.

I actually like writing and I don’t feel like I get to do enough of it in my day-to-day – or at least I should say, I don’t get to do enough satisfying writing. Email and instruction/process writing has skyrocketed as I slowly get my head around the challenges of a shared management role in a Higher Education institution. In those cases (other than instructions and processes), a lot of what I’m writing still feels like it is wrong, because the landscape is changing so quickly that it is incredibly difficult to know the context and rationale of many of the things I’m responding to. I am quickly – though not quickly enough – learning that I’m not in a position to raise questions about decisions made at an executive level and I need to get on with just implementing them. Which is ironic, I guess, because many of the calls that I have been making are similarly questioned by my team members and I know how frustrating that is. (Because I have the full context perhaps and they don’t? Who knows – that does at least seem to be one thing I can try to do better anyway.)

The apparent binary between rational factors and emotional factors in decision making and activities at all levels is definitely something I had never given enough thought to before. Both types of factors are valid and need to be addressed; working with the emotional is a lot harder though. I feel as though I have touched on this a little in the Lit Review as far as teachers/academics go but have greatly underestimated its impact across the educational ecosystem. I do suspect that this ecosystem is relatively unique in terms of workplaces and that people accustomed to working in “normal” work environments frequently don’t make allowances for it when they try to apply typical change management strategies and tools. It feels as though I have already seen it bewilder and crush the spirits of more than a few sensible and good people. It is probably both a strength and a weakness of Higher Education and I guess I need to find some way to explore and explain it in my research. I keep coming back to the Brew, Boud, Lucas & Crawford article from 2017 about “Responding to university policies and initiatives: the role of reflexivity in the mid-career academic” as something that both shocks and enlightens me about aspects of university culture. This culture seeps through all areas of the institution.

Brew, A., Boud, D., Lucas, L., & Crawford, K. (2017). Responding to university policies and initiatives: the role of reflexivity in the mid-career academic. Journal of Higher Education Policy and Management, 39(4), 378–389. https://doi.org/10.1080/1360080X.2017.1330819

Coming back to methodology, one of my big concerns as I work out how to do the first round of interviews with Key Informants (approx. 12 across edvisor and manager roles – maybe some teachers?) has been how to find a sample reflective of Australia’s Higher Ed landscape. In broad brushstrokes, we have city and regional/rural universities, “elite” research institutions (the Group of Eight), technology oriented universities, younger research focussed ones and a large set of ‘others’ that are often considered by learners as having more of a career-gaining purpose (though quality research is also done in these ones). Some institutions are financially well-off and others struggle for survival – which could make them both more open to innovation and the teaching and learning support offered by edvisors, and less able to pay for it. Culturally, the ‘elite’ universities – and particularly the academics within (to apply a ridiculously broad brush) – might have much more restrictive internal hierarchies and cultures that downplay teaching support from ‘non-academics’ – or even teaching overall.

So how to allow for all of these factors (and so many more) in choosing which institutions to focus on in a logistically feasible study. Peter’s feeling – which surprised me but kind of makes sense – was that these distinctions fade away somewhat if I ultimately aim to gather rich data from all the Australian universities. All 40-43 of them (depending on the inclusion of private and international unis with Australian campuses).

In a separate writing practice I like to write ridiculously unfilmable science fiction and horror scripts. I used to write like a producer, only including the things that I thought were actually doable (not that I have the experience to know what that is any more). After a while though, I realised that this seriously stunted the enjoyment that I got from telling crazy stories and I decided that the first drafts needed to have everything and I could leave the problems of actually realising them as someone else’s problem. This feels a little bit like that in some ways and maybe it’s a terrible analogy as none of the scripts have ever been made but at the same time, it seems increasingly like the only way I am going to really learn what it is to be a researcher is to aim too high and then let reality whittle that down into something achievable.

So I guess I’m aiming to explore the relationships between edvisors, academics and management in all Australian Higher Ed institutions, in some way.

The key informant interviews are still as much about working out how to do this substantive piece of research – and the different avenues that I might need to follow in order to get access to institutional data – as anything else. Given that every institution is different, I guess I can only hope to get indicative insights into how this might be done rather than definitive information.

Any way I cut it, I need to actually be doing it to learn about this rather than trying to work out the perfect fully-formed solution in my head before I go and do it. Which will be a challenge but one not unlike my current new work role.

This has been my TED talk, thanks for listening. (It was really just about committing to some ideas I now realise and there is no better way to do this than have to commit them to screen.)

Categories
methodology PhD research

Research update #53: Methodology or Messodology?

I have identified around 17 different types of data that I want to collect for this research. I have been waiting for people who know more about this than I do to say – ‘you’re out of your mind’ – but as yet, nobody has.

It looks a little something like this.

More than a few of these things (edvisor numbers, quals, entry points, unit structures) don’t even necessarily answer my research questions but seem important in the journey towards them. The I.T. bit in the corner is more of a stray thought because I’ve been spending a LOT of time in my own edvisor practice lately chatting to I.T. people, and there is a wealth of research to be done on their role in edutech projects that nobody seems to have touched on yet.

Determining how, where and from whom to gather this data is my first stage and will involve working closely with a set of key informants across institutions. I would assume a mix of edvisors, edvisor unit managers (or higher level types – DVCAs maybe?) and I’d imagine teachers but that seems slightly hazier right at the moment. One of the edvisors on the review panel did note that there is a major difference between types of edvisors and while I believe I have acknowledged that, I can probably give it a lot more thought in terms of considering the relationships between edvisors (academic developers, learning designers and learning technologists) and our perceptions of each other. So that’s fun.

For now, the logical thing to do seems to focus on the interviews with key informants, which are intended (amongst other things) to provide some insights into how to go about collecting the rest of this data. I’d like to get a reasonably representative cross-section of people in a range of different types of unis (I considered TAFE and private providers but that’s just too much extra), so I figure I need Group of Eight, Australian Technology Network, Innovative Research and Regional ones. But maybe that’s overdoing it. I do think there is something to be seen in comparing teaching oriented vs research oriented ones and perhaps also (though maybe this is the same thing) well resourced vs less well resourced institutions. Then again I haven’t considered any of these things as factors in my proposal so far, so ??? Anyway, I guess that falls under the research apprenticeship side of this whole endeavour.

But, be honest, this still seems like way too much to be trying to do right?

Categories
Analysis methodology mooc SOCRMx Uncategorized

SOCRMx Week #8: The End

Well I probably said all that I needed to say on my general feelings about this MOOC in my last post so this is largely for the sake of completion. The final week of this course is a peer assessed piece of writing analysing the methods used in a sample paper. Turns out that I missed the deadline to write that – I may even have been working on my Week 7 post when that deadline fell – so this appears to be the end of the road for me. I could still go through and do the work but I found the supplied paper unrelated to my research and using methodologies that I have little interest in. The overall questions raised and things to be mindful of in the assessment instructions are enough.

  • What method of analysis was used?
  • How was the chosen method of analysis appropriate to the data?
  • What other kinds of analysis might have been used?
  • How was the analysis designed? Is the design clearly described? What were its strengths and weaknesses?
  • What kind of issues or problems might one identify with the analysis?
  • What are the key findings and conclusions, and how are they justified through the chosen analysis techniques?

And so with that, I guess I’m done with SOCRMx. In spite of my disengagement with the community, the resources and the structure really have been of a high standard and, more importantly, incredibly timely for me. As someone returning to study after some time who has not ever really had a formal research focus, there seems to be a lot of assumed knowledge about research methodology and having this opportunity to get a birds-eye view of the various options was ideal. I know I still have a long way to go but this has been a nice push in the right direction.

 

Categories
Analysis methodology qualitative quantitative research

SOCRMx Week #7: Qualitative analysis

I’m nearly at the end of Week #8 in the Social Research Methods MOOC and while I’m still finding it informative, I’ve kind of stopped caring. The lack of community and particularly of engagement from the teachers has really sucked the joy out of this one for me. If the content wasn’t highly relevant, I’d have left long ago. And I’ll admit, I haven’t been posting the wonderfully detailed and thoughtful kind of posts on the forum or in the assigned work that the other 5 or so active participants have been doing, but I’ve been contributing in a way that supports my own learning. I suspect the issue is that this is being run as a formal unit in a degree program and I’m not one of those students. Maybe it’s that I chose not to fork over the money for a verified certificate. Either way, it’s been an unwelcoming experience overall. When I compare it to the MITx MOOC I did a couple of years ago on Implementing Education Technology, it’s chalk and cheese. Maybe it’s a question of having a critical mass of active participants, who knows. But as I say, at least the content has been exactly what I’ve needed at this juncture of my journey in learning to be a researcher.

This week the focus was on Qualitative Analysis, which is where I suspect I’ll be spending a good amount of my time in the future. One of my interesting realisations early on in this though was that I’ve already tried to ‘cross the streams’ of qual and quant analysis this year when I had my first attempt at conducting a thematic analysis of job ads for edvisors. I was trying to identify specific practices and tie them to particular job titles in an attempt to clarify what these roles were largely seen to be doing. So there was coding, because clearly not every ad was going to say ‘research’ – some might say ‘stay abreast of current and emerging trends’ and others might ask the edvisor to ‘evaluate current platforms’. Whether or not that sat in “research” perfectly is a matter for discussion but I guess that’s a plus of the fuzzy nature of qualitative data, where data is more free to be about the vibe.

But then I somehow ended up applying numbers to the practices as they sat in the job ad more holistically, in an attempt to place them on a spectrum between pedagogical (1) and technological (10). Which kind of worked, in that it gave me some richer data that I could use to plot the roles on a scattergraph, but I wouldn’t be confident that this methodology would stand up to great scrutiny yet. Now, just because I was using numbers doesn’t mean that it was quantitative, but it still feels like some kind of weird fusion of the two. And I’m sure that I’ll find any number of examples of this in practice but I haven’t seen much of it so far. I guess it was mainly nice to be able to put a name to what I’d done. To be honest, as I was initially doing it, I assumed that there was probably a name for what I was doing and appropriate academic language surrounding it, I just didn’t happen to know what that was.
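To make that fusion a bit more concrete, here’s a minimal sketch of the kind of thing I mean – every practice name, job title and score below is made up for illustration, not my actual coding data:

```python
# A hypothetical sketch: each job ad is coded for the practices it mentions,
# each practice carries a score from 1 (pedagogical) to 10 (technological),
# and an ad's overall position is the mean of its practice scores.

# Made-up scores for coded practices (not a real coding scheme)
PRACTICE_SCORES = {
    "curriculum design": 2,
    "academic development": 1,
    "research": 3,
    "LMS administration": 9,
    "evaluate current platforms": 8,
}

# Made-up coded ads: job title -> practices identified in the ad text
coded_ads = {
    "Academic Developer": ["curriculum design", "academic development", "research"],
    "Learning Designer": ["curriculum design", "evaluate current platforms"],
    "Learning Technologist": ["LMS administration", "evaluate current platforms"],
}

def ad_position(practices):
    """Mean practice score: low = more pedagogical, high = more technological."""
    scores = [PRACTICE_SCORES[p] for p in practices]
    return sum(scores) / len(scores)

# One number per role, ready to plot on the pedagogical-technological axis
positions = {title: ad_position(practices) for title, practices in coded_ads.items()}
for title, pos in sorted(positions.items(), key=lambda item: item[1]):
    print(f"{title}: {pos:.1f}")
```

The qualitative judgement (which practices an ad contains, what score a practice deserves) stays up front, and the numbers only come in at the end – which is roughly why it felt like a fusion rather than genuinely quantitative work.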

I mentioned earlier that qualitative analysis can be somewhat ‘fuzzier’ than quantitative and there was a significant chunk of discussion at the beginning of this week’s resources about that. Overall I got the feeling that there was a degree of defensiveness, with the main issue being that the language and ideas used in quantitative research are far more positivist in nature – epistemologically speaking (I totally just added that because I like that I know this now) – and are perhaps easier to justify and use to validate the data. You get cold hard figures and if you did this the right way, someone else should be able to do exactly the same thing.

An attempt to map some of those quantitative qualities to the qualitative domain was somewhat poo-pooed because it was seen as missing the added nuance present in qualitative research, or something – it was a little unclear really, but I guess I’ll need to learn to at least talk the talk. It partly felt like tribalism or a turf war but I’m sure that there’s more to it than that. I guess it’s grounded in a fairly profoundly different way of seeing the world and particularly of seeing ‘knowing’. On the one side we have a pretty straightforward set of questions dealing with objective measurable reality and on the other we have people digging into perspectives and perceptions of that reality and questioning whether we can ever know or say if any of them are absolutely right.

Long story short, there’s probably much more contextualisation/framing involved in the way you analyse qual data and how you share the story that you think it tells. Your own perceptions and how they may have shaped this story also play a far more substantial part. The processes that you undertook – including member checking, asking your subject to evaluate your analysis of their interview/etc to ensure that your take reflects theirs – also play a significant role in making your work defensible.

The section on coding seemed particularly relevant so I’ll quote that directly:

Codes, in qualitative data analysis, are tags that are applied to sections of data. Often done using qualitative data analysis software such as Nvivo or Dedoose.

Codes can overlap, and a section of an interview transcript (for example) can be labeled with more than one code. A code is usually a keyword or words that represent the content of the section in some way: a concept, an emotion, a type of language use (like a metaphor), a theme.

Coding is always, inevitably, an interpretive process, and the researcher has to decide what is relevant, what constitutes a theme and how it connects to relevant ideas or theories, and discuss their implications.

Here’s an example provided by Jen Ross, of a list of codes for a project of hers about online reflective practice in higher education. These codes all relate to the idea of reflection as “discipline” – a core idea in the research:

  • academic discourse
  • developing boundaries
  • ensuring standards
  • flexibility
  • habit
  • how professionals practice
  • institutional factors
  • self assessment

Jen says: These codes, like many in qualitative projects, emerged and were refined during the process of reading the data closely. However, as the codes emerged, I also used the theoretical concepts I was working with to organise and categorise them. The overall theme of “discipline”, therefore, came from a combination of the data and the theory.

https://courses.edx.org/courses/course-v1:EdinburghX+SOCRMx+3T2017/courseware/f41baffef9c14ff488165814baeffdbb/23bec3f689e24100964f23aa3ca6ee03/?child=last

I mentioned earlier that I undertook thematic analysis of a range of job ads, which could be considered “across-case” coding. This is in contrast to “within-case” coding, where one undertakes narrative analysis by digging down into one particular resource or story. The latter involves “tagging each part of the narrative to show how it unfolds, or coding certain kinds of language use”, while thematic analysis is about coding common elements that emerge from looking at many things. In the practical exercise – I didn’t do it because time is getting away from me, but I read the blog posts of those who did – a repeated observation was that, in this thematic analysis, people would often create/discover a new code halfway through and then have to go back to the start to see if and where it appeared in the preceding resources.
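To make the coding idea concrete, here is a toy sketch (in Python) of how tags applied to segments might be tallied across cases. This is only an illustration of the data structure, not how NVivo or Dedoose actually work internally; the case names are invented, and the codes are borrowed from Jen Ross’s example list above.

```python
from collections import Counter

# Each segment of a source is labelled with one or more codes,
# and segments can share codes across cases ("across-case" coding).
coded_segments = [
    {"case": "transcript_01", "codes": ["habit", "self assessment"]},
    {"case": "transcript_02", "codes": ["habit"]},
    {"case": "transcript_02", "codes": ["institutional factors", "habit"]},
    {"case": "transcript_03", "codes": ["self assessment"]},
]

# Tally how often each code appears across all segments and cases
tally = Counter(code for seg in coded_segments for code in seg["codes"])
print(tally.most_common())
```

You can see how discovering a new code partway through would mean re-reading earlier cases: the tally is only meaningful if every segment has been checked against the full code list.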

On a side note, the practical activity did look quite interesting: it involved looking over a collection of hypothetical future reflections from school leavers in the UK in the late 1970s. They were asked to write a brief story from the perspective of themselves 40 years in the future, on the cusp of retirement, describing the life they had lived. Purely as a snapshot into the past, it is really worth a look for a revealing exploration of how some people saw life and success back in the day. Most of the stories are only a paragraph or two.

https://discover.ukdataservice.ac.uk/QualiBank/?f=CollectionTitle_School%20Leavers%20Study

And once again, there were a bunch of useful-looking resources for further reading about qualitative analysis:

  • Baptiste, I. (2001). Qualitative Data Analysis: Common Phases, Strategic Differences. Forum: Qualitative Social Research, 2/3. http://www.qualitative-research.net/index.php/fqs/article/view/917/2002
  • Markham, A. (2017). Reflexivity for interpretive researchers http://annettemarkham.com/2017/02/reflexivity-for-interpretive-researchers/
  • ModU (2016). How to Know You Are Coding Correctly: Qualitative Research Methods. Duke University’s Social Science Research Unit. https://www.youtube.com/watch?v=iL7Ww5kpnIM
  • Riessman, C.K. (2008). ‘Thematic Analysis’ [Chapter 3 preview] in Narrative Methods for the Human Sciences. SAGE Publishing https://uk.sagepub.com/en-gb/eur/narrative-methods-for-the-human-sciences/book226139#preview Sage Research Methods Database
  • Sandelowski, M. and Barroso, J. (2002). Reading Qualitative Studies. International Journal of Qualitative Methods, 1/1. https://journals.library.ualberta.ca/ijqm/index.php/IJQM/article/view/4615
  • Samsi, K. (2012). Critical appraisal of qualitative research. Kings College London. https://www.kcl.ac.uk/sspp/policy-institute/scwru/pubs/2012/conf/samsi26jul12.pdf
  • Taylor, C and Gibbs, G R (2010) How and what to code. Online QDA Web Site, http://onlineqda.hud.ac.uk/Intro_QDA/how_what_to_code.php
  • Trochim, W. (2006). Qualitative Validity. https://www.socialresearchmethods.net/kb/qualval.php
Categories
Analysis methodology mooc quantitative SOCRMx

Week #6 SOCRMx – Quantitative analysis

This section of the SOCRMx MOOC offers a fair introduction to statistics and the analysis of quantitative data – at least, enough to grasp what is needed to get meaningful data and what it looks like when statistics are misused or misrepresented. (This bit in particular should be a core unit in the mandatory media and information literacy training that everyone has to take in my imaginary ideal world.)

The more I think about my research, the more likely I think it is to be primarily qualitative but I can still see the value in proper methodology for processing the quant data that will help to contextualise the rest. I took some scattered notes that I’ll leave here to refer back to down the road.

Good books to consider – Charles Wheelan: Naked Statistics: Stripping the dread from data (2014) & Daniel Levitin: A Field Guide to Lies and Statistics: A Neuroscientist on How to Make Sense of a Complex World (2016)

Mean / Median / Mode

Mean – the straightforward average: the sum of all values divided by the count.

Median – put all the results in order and take the one in the middle. (Better for average incomes, as high earners distort the mean.)

Mode – the value (or category) that occurs most often.
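The three averages can be illustrated with Python’s standard statistics module. The income figures here are made up, with one high earner included to show why the median is often preferred for incomes.

```python
import statistics

# Made-up annual incomes (in thousands) with one very high earner
incomes = [32, 35, 38, 38, 41, 45, 250]

mean = statistics.mean(incomes)      # pulled way up by the outlier
median = statistics.median(incomes)  # the middle value when sorted
mode = statistics.mode(incomes)      # the value that occurs most often

print(mean, median, mode)
```

The mean comes out around 68, while the median and mode are both 38 – a much fairer picture of a “typical” income in this invented sample.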

Student’s t-test – a method for interpreting what can be extrapolated from a small sample of data. It is the primary way to understand the likely error of an estimate given your sample size.

It is the source of the concept of “statistical significance.”

A p-value is a probability: a measure summarising the incompatibility between a particular set of data and a proposed model for the data (the null hypothesis). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5366529/

“a significance level is an indication of the probability of an observed result occurring by chance under the null hypothesis; so the more you repeat an experiment, the higher the probability you will see a statistically significant result.”
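To make the t-test idea a bit more concrete, here is a hand-rolled one-sample t-statistic using only the standard library. The scores and the null-hypothesis mean of 50 are invented for illustration; converting t into a p-value needs the t-distribution with n−1 degrees of freedom (e.g. via scipy.stats), which I’ve left out here.

```python
import math
import statistics

# Invented pilot-study scores; the null hypothesis says the true mean is 50
sample = [54, 51, 58, 44, 55, 52, 61, 49]
null_mean = 50

n = len(sample)
sample_mean = statistics.mean(sample)
sample_sd = statistics.stdev(sample)  # sample SD (n - 1 in the denominator)

# The t-statistic: how many standard errors the sample mean
# lies from the mean proposed by the null hypothesis
standard_error = sample_sd / math.sqrt(n)
t = (sample_mean - null_mean) / standard_error

print(round(t, 2))
```

A small t means the sample looks compatible with the null hypothesis; a large one suggests the observed mean would be surprising if the null were true.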

Overall this entire domain is one where I think I’m only really going to appreciate the core concepts when I have a specific need for them. The idea of a distribution curve, where the mean of all data points sits at the high point and standard deviations (determined by a formula) show us where the majority of the other data points lie, seems potentially useful but, again, until I can practically apply it to a problem, remains just tantalisingly beyond my grasp.
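As a small concrete taster of that bell curve, Python’s standard library (3.8+) can model a normal distribution directly. The mean of 100 and standard deviation of 15 below are arbitrary choices, just to show the familiar “about 68% within one standard deviation” rule.

```python
from statistics import NormalDist

# An arbitrary bell curve: mean 100, standard deviation 15
dist = NormalDist(mu=100, sigma=15)

# Proportion of values expected within one SD either side of the mean
within_one_sd = dist.cdf(115) - dist.cdf(85)

print(round(within_one_sd, 3))
```

The result is roughly 0.683, regardless of the mean and SD chosen, which is exactly the property that makes standard deviations useful for describing where most data points sit.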

Categories
methodology SOCRMx survey

Week #4 SOCRMx – Reflecting on methods

This week in the Social Research Methods MOOC we take a moment to take a breath and consider the approaches that we currently favour.

One of the activities is to reflect in our blog – so I guess this is that. I’m looking at surveys because I still need to get my head around discourse analysis, not having really used it before.

Reflecting on your chosen methods

Choose one of the approaches you’ve explored in previous weeks, and write a reflective post in your blog that answers the following questions. Work through these questions systematically, and try to write a paragraph or two for each:

What three (good) research questions could be answered using this approach?

I’m fairly focused on my current research questions at the moment, and I would say that surveys will help me start answering them, but I certainly wouldn’t rely on surveys alone. The questions are: How do education advisors see their role and value in tertiary education? How are education advisor roles understood and valued by teachers and institutional management? What strategies are used in tertiary education to promote understanding of the roles of education advisors among teaching staff and more broadly within the institution?

What assumptions about the nature of knowledge (epistemology) seem to be associated with this approach?

The main assumption is that subjective or experience-based knowledge is sufficient. I don’t believe that this is always the case. Clearly, a survey can be useful for collecting broad data about the attitudes people claim – or even believe – they hold. However, people can have a tendency to want to see themselves in the best possible light – the heroes of their own story – and responses might be more indicative of what people would like to think they believe than of what their actions show them to believe.

What kinds of ethical issues arise?

This would depend on the design of the research. Assuming there is no need for participants to be subsequently identifiable, anonymity should enable respondents to express their opinions freely and without concern for consequences. Questions should be designed in a way that is not unnecessarily intrusive or likely to influence the way respondents answer. I’d also assume that good research design would ensure that the demographics of survey participants are reflective of the community being studied.

What would “validity” imply in a project that used this approach?

I would say that ‘validity’ would require addressing some of the issues that I’ve already raised. Primarily that the survey itself could be relied upon to collect data that accurately reflects the opinions of the survey respondents without influencing these opinions or asking ambiguous questions that could be interpreted in different ways. My overall preference would be for the survey to be one part of a larger research project that provides data from different sources that can be used to provide greater ‘validity’.

What are some of the practical or ethical issues that would need to be considered?

The survey would need to be anonymous and the data kept securely. Questions should be designed to be as clear and neutral as possible and a sufficiently representative sample of participants obtained. Given the number of surveys that people get asked to complete these days, ensuring that people have a clear understanding of the purpose and value of the research would be vital. For the same reason, I’d suggest that we have a responsibility to ask people only for the information that we need and nothing more.

And finally, find and reference at least two published articles that have used this approach (aside from the examples given in this course). Make some notes about how the approach is described and used in each paper, linking to your reflections above.

McInnis, C. (1998). Academics and Professional Administrators in Australian Universities: dissolving boundaries and new tensions. Journal of Higher Education Policy and Management, 20(2), 161–173.

A comparison of two surveys, one of academic staff (1993) and one of administrative/professional staff (1996), with analysis of the results; some additional questions were added to the second survey.

Wohlmuther, S. (2008). “Sleeping with the enemy”: how far are you prepared to go to make a difference? A look at the divide between academic and allied staff. Journal of Higher Education Policy and Management, 30(4), 325–337.

Based on an anonymous online survey completed by 29% of all staff – academic and professional – at her institution, which included questions about demographics, perceptions of the nature of their roles, the ‘divide’, and the value of different types of staff in relation to strategic priorities.

Both surveys related to workplace issues and attitudes, which meant that privacy was a significant factor. I was less impressed with the approach taken by Wohlmuther, which I felt was overly ambiguous in parts.

“Survey respondents were asked what percentage of their time they spent on allied work and what percentage of their time they should spend on allied work. The term ‘allied work’ was not defined. It was left to the respondent to interpret what they meant by allied work” (p.330)

I do still think that I’ll use surveys as a starting point but expect to then take this information and use it to help design interviews and also to inform analysis of other sources of data.