Research update #42: Proposal writing day 6. hmm

Yeah this is really going to need a few drafts. The words and the ideas are coming out but it feels less like a review of the literature and more like I’m describing the context and the issues with some supporting citations and quotes at this stage.

It’s ok, there’s time. I think I might take a quick refresher on how lit reviews really work though. One thing I don’t think I’ve done enough of yet is talk about gaps in the literature, and about which parts of the literature I question and why.

In looking at the barriers to collaboration, I added a small section discussing why and how edvisors can also create challenges – I’ve certainly known a few people who either didn’t know anywhere near enough or thought they knew far more than they actually did, and who would just barge in and tell academics that they needed to change without taking the time to understand why they used their current practices. Looking at this section in the context of the whole ‘barriers’ section, it seems disproportionately small and is making me wonder whether I’m being objective enough. Then again, this isn’t something that I’ve really come across in the literature, and the literature is what I’m meant to be discussing, so maybe it’s ok. But it doesn’t feel ok.

Research update #41: Proposal writing day 5 – barriers to collaboration and the McDonald’s solution

I’m feeling uncertain about the structural advice that I got from one of my supervisors – and also about some of my own decisions. I’d initially thought that there was a lot to say about the nature of edvisor roles as well as some of the internal tensions in the community between professional and academic ones and that 2000 words would enable me to have a sufficiently rich discussion of this in the literature. (Because I’m still on the lit review). I was advised to cut that section down to 1000 and to reallocate that to the other sections. I’m kind of feeling that there is still a fair bit to discuss that doesn’t sit well elsewhere. My quandary is whether I trust the advice from someone with far more experience in academic writing or trust my own (I believe) richer understanding of the material, which to me says that the discussion of the nature of edvisors in the literature needs a deeper dive. I’ll go with the former for now and ask for feedback based on that but I have a feeling that this section will end up needing to be bigger.

My second issue is that in looking at the relationships between edvisors and institutional leadership and edvisors and academics, I think I’ve already touched on several of the issues in the next section about barriers to collaboration. It feels a little like I’m repeating myself in this section, although given that this question is pretty much at the heart of why I’m doing this research, maybe that doesn’t matter and it’s ok to reiterate it.

Ultimately I know that the solution is simply to shut up and write and save these bigger questions for editing and redrafting. I read of a problem-solving approach to decision making once that I’ll call the McDonald’s solution. It is essentially that if you are in a group trying to work out where to go for lunch and nobody is offering suggestions, throw up the worst option (McDonald’s) so that people are forced to commit to something better. I guess this is what the first draft needs to be.

I was given a certificate once in a script writing workshop that I went to giving me permission to write badly. I should dig that out and stick it up on the wall.

One other thing I should note is an interesting blog post from one of the PhD candidates at my uni who is looking at the anthropology of higher education (to paraphrase). It discusses a lot of issues surrounding the nature of work and exploitation based on love of the career, but it also delves into the onion layers of reasons and excuses that people use to avoid owning responsibility for this sub-par situation. I’ll admit that I’ve found it fairly easy to ascribe ultimate responsibility for a number of problematic ideas and decisions to the upper echelons of institutional leadership, but this blog post has reminded me that even they will pass the buck along to macro-level neoliberal governmental and economic policy positions, and this isn’t entirely untrue. I won’t accept that ultimately nobody should be accountable, or that nobody has the capacity to make change for the better – there is still a lot going on at the top end that seems to drive some of these issues – but it is handy to remember that everyone has their masters.

 

Research update #40: Proposal writing day 4: Edvisors and teachers

scrivener screenshot

I really didn’t expect to but I’ve caught up to my schedule. It’s largely because I decided that I needed to write words rather than write well, although there is also the fact that I decided to trust my recollection of the broad ideas in the literature. Rather than painstakingly finding each citation and quote on the fly as I write, I’m going to trust that they are out there and that on my next pass I can take the time to put them in. I think this will also help because I’m building a basic scaffold that seems to be flowing nicely and which should make it easier for me to find and compartmentalise the citations and quotes. I’m also fairly confident that I’ll rediscover richer ideas that I can use to flesh out what I’ve already said. I’ll need to spend a little more time thinking about what the literature doesn’t say, and how to explain that and why it matters, but it’s been nice being able to put the pedal to the metal and just let the words come out as they want to.

In a nutshell, I covered the fact that the way that edvisor teams are structured and placed in institutions – centrally/college-based and also functionally – can be a barrier to effective work, particularly because of the tensions that exist between institutional and academics’ priorities. (Trying to remember that most good edvisors also have their own values conversation going on about ensuring the best possible learning and teaching amongst this). I moved on to the relationships between academics and edvisors and noted the difference between those from academic vs professional backgrounds. I touched on disciplinary silos, the pressures faced by academics to be the experts in all things and the fact that many of them don’t really know what we do – or can do. This can be particularly evident in the research that they write, and I think this will be a rich primary source to explore when I move into the research phase.

So I guess there is going to be a little more work than I expected when I’ve written draft one but the words are coming and draft two should not be far behind at all.

Research update #39: Proposal writing Day 3: “Rest day” but with some interesting revelations nonetheless

Well it was more a day where I’d made a previous commitment to sit on an interview panel for an Education Technologist position for a friend, followed shortly afterwards by a farewell party for a friend in the same unit.

I was able to take a couple of hours to glance over some of my earliest blog posts relating to this research, which were helpful in that I could see how much my question has evolved over time but perhaps lacked a little something in terms of direct relevance to the work that I’m doing now. Fortunately, the responses to the interview questions themselves did align more closely, albeit more in terms of gaining some additional background insights.

My assigned question to ask was along the lines of: what technology do you see having an impact on higher education in the next 3-5 years? Something calling for a little crystal ball-gazing, and my inclination was to give extra points to those who were unwilling to commit to specific products or brands. It was more about getting a sense of who is keeping an eye on things than getting the (impossible) right answer. Responses ranged from mobile devices (fairly likely, though parts of my institution seem mystifyingly resistant to this) to AI, AR and drones. One candidate tried valiantly to steer this conversation around to rubrics and assessment – points for trying, I guess.

The more revealing question was: what do you think the role of an education technologist is? This was interesting because these were all people who had applied for a position with a specified set of criteria, but the responses were still relatively varied. Clearly advising on and supporting the use of technology was a common theme in the responses, but from there we seemed to veer into whichever areas the candidates felt they were strongest in. Fair enough – the point of the interview is to sell yourself. This included research, production of resources and information management skills. When we asked some to expand on their answers by differentiating the technologist role from an ed designer or ed developer, things got more interesting. Before I started digging down into this field, my take was that a developer was more like a software or web developer than the more commonly used professional developer. One candidate felt that the ed dev would be building apps. Most got that the designer had more to do with course or curriculum design to varying degrees, but most also recognised that there is a lot of overlap between all of these roles, and the fact that they all had slightly different takes was good for me in that it reinforced what I’ve seen in the literature (and experienced in the day to day) about the fuzziness of most of these definitions.

I guess another interesting aspect of the interviews was in seeing where everyone had come from. We had people who had entered the field from graphic design, web and multimedia design, teaching and librarianship. For me, none of this disqualified anyone, though the harsh reality is that in looking for someone able to hit the ground running, it’s hard not to favour someone with experience working with academics. How you get that experience in the first place is the real challenge, I guess, and I think I can probably expand a little on the pathways/entry point ideas section – though I don’t feel that there has been a lot of discussion of this in the literature that I’ve seen to date.

So while I didn’t write much and I didn’t find a whole lot in my previous note-taking blog posts, I still feel like I came away with a few more ideas.

Research update #38: Proposal writing Day 2 – more on edvisors, less on edvisors & institutions

I’m kind of just staring at the screen now with 27 different tabs open across two browsers so I guess it’s time to take a mental break at the very least. Going by my schedule, I was meant to have knocked out 750 words on the relationship between edvisors and institutions – or more precisely, I guess, institutional management/leadership. I currently have 129.

But that’s because I only wrote about 500/1000 yesterday on edvisors more broadly. I think part of my challenge is that, first draft or not, I still like to try to turn out a moderately elegant sentence that flows smoothly into the next one and advances the story or idea. What I need to do is worry less about this and just get the brutish ugly ideas down so that they might be prettied up later.

The bigger issue though is that I didn’t put enough time into getting all my sources, quotes and ideas into a single location before I started writing. I’ve spent enough time with the literature to know broadly what it says and how I want to bring it together, and I know I have the citations to support this, but I didn’t put them all into the notes document. They are instead scattered through this blog, Zotero and assorted stacks of paper with pencil notes scrawled all through them. The point of blogging about many of these papers was to create a searchable archive of these ideas, but with the way that the question has changed over time, the way that I have tagged these posts has not quite kept pace.

I’m still enjoying the writing and being forced to commit to particular ideas and language, I’m just slightly up in the air about whether it would be more beneficial to stop and spend the time assembling everything before I proceed or if I should just press on, write what I can as a first draft and then come up with a much improved second draft by bringing all the stray elements together. The latter seems the way to go, as I’m well versed in the fine arts of procrastination, and preparation (endless preparation) is absolutely one of my go-tos in this regard. The other advantage of just writing is that it will let me work out the structure a little better, which should make the process of searching for and gathering the quotes and citations a lot simpler.

I hit the 1000 word target for the edvisors section just before lunch but later felt that a discussion of the place of credentialing might sit better in the edvisors and institutions section. I was also a little concerned that I was discussing literature without really explaining why or what I was looking for in particular, so once more I spent a little more time than planned on that section. I had initially planned on 2000 words for my discussion of edvisors in the literature but revised this to 1000 on advice from Lina. I have a feeling that I could probably hit the 2000 without too much trouble as I dig deeper into the tensions between academic and professional edvisors.

Most of my thinking until recently revolved around the bizarre love/hate triangle between academics, institutional management/leadership and edvisors and how this impacts upon collaborative relationships. I’d kind of put aside the internal tensions both between academics and professional staff – particularly in the academic developer space where there’s a big question about where scholarly research fits into edvisor practices – and also between variously located teams within institutions. Most commonly central vs college/faculty based but there is also some toe-treading that occurs between rival disciplinary teams. The good news is that it’s all just more material to work with.

So while I’m not hitting my perhaps ambitious writing targets yet, the ideas are flowing.

 

Research update #37: Proposal writing Day 1 – Edvisors lit review

writing plan dates

I’ve booked in two weeks leave from work to get at least a first draft of my thesis proposal together. There’s a loose structure in place and I’m all about just getting the words down at this stage. As a first draft, I’m allowing for it being relatively terrible – which is probably the hardest part because I do like the words that I use to work well together – and the plan is to have something to send off for feedback just before Christmas.

Given that I’m aiming for between 750-1000 words a day, I think I’ll spend the mornings pulling together the various ideas, quotes and references and do the actual writing in the afternoons.

Today the focus is on edvisors in the literature, which isn’t as easy as I’d thought given that part of the reason for the thesis is their lack of visibility in the research. Or, more to the point, the fact that a lot of what I’ve been looking at is more closely related to where they/we sit in the institution, our relationships with institutional leadership and academics and the strategies that we do and could use to improve this. What I’m left with is more the descriptive, defining kind of work. Breaking this up into the three core role types of academic developer, education designer and learning technologist should help and there’s still plenty of time to move things around.

Mostly I just need to remember that this is the literature section, so I’m really only meant to talk about what other people have been talking about. I guess I can talk briefly about what hasn’t been discussed, but that seems like a trap in some ways, as maybe it has and I just missed it. (Pretty sure this is a universal refrain among PhDers though)

If you do read this post and are aware of a strikingly significant article or book etc about the nature of edvisors (academic developers etc – I wonder how long I’m going to need to add this), please let me know.

 

SOCRMx Week #8: The End

Well I probably said all that I needed to say on my general feelings about this MOOC in my last post so this is largely for the sake of completion. The final week of this course is a peer assessed piece of writing analysing the methods used in a sample paper. Turns out that I missed the deadline to write that – I may even have been working on my Week 7 post when that deadline fell – so this appears to be the end of the road for me. I could still go through and do the work but I found the supplied paper unrelated to my research and using methodologies that I have little interest in. The overall questions raised and things to be mindful of in the assessment instructions are enough.

  • What method of analysis was used?
  • How was the chosen method of analysis appropriate to the data?
  • What other kinds of analysis might have been used?
  • How was the analysis designed? Is the design clearly described? What were its strengths and weaknesses?
  • What kind of issues or problems might one identify with the analysis?
  • What are the key findings and conclusions, and how are they justified through the chosen analysis techniques?

And so with that, I guess I’m done with SOCRMx. In spite of my disengagement with the community, the resources and the structure really have been of a high standard and, more importantly, incredibly timely for me. As someone returning to study after some time away, and without ever really having had a formal research focus, I found there to be a lot of assumed knowledge about research methodology, and having this opportunity to get a birds-eye view of the various options was ideal. I know I still have a long way to go but this has been a nice push in the right direction.

 

SOCRMx Week #7: Qualitative analysis

I’m nearly at the end of Week #8 in the Social Research Methods MOOC and while I’m still finding it informative, I’ve kind of stopped caring. The lack of community, and particularly of engagement from the teachers, has really sucked the joy out of this one for me. If the content wasn’t highly relevant, I’d have left long ago. And I’ll admit, I haven’t been posting the wonderfully detailed and thoughtful kind of posts on the forum or in the assigned work that the other 5 or so active participants have been, but I’ve been contributing in a way that supports my own learning. I suspect the issue is that this is being run as a formal unit in a degree program and I’m not one of those students. Maybe it’s that I chose not to fork over the money for a verified certificate. Either way, it’s been an unwelcoming experience overall. When I compare it to the MITx MOOC I did a couple of years ago on Implementing Education Technology, it’s chalk and cheese. Maybe it’s a question of having a critical mass of active participants, who knows. But as I say, at least the content has been exactly what I’ve needed at this juncture of my journey in learning to be a researcher.

This week the focus was on Qualitative Analysis, which is where I suspect I’ll be spending a good amount of my time in the future. One of my interesting realisations early on in this though was that I’ve already tried to ‘cross the streams’ of qual and quant analysis this year when I had my first attempt at conducting a thematic analysis of job ads for edvisors. I was trying to identify specific practices and tie them to particular job titles in an attempt to clarify what these roles were largely seen to be doing. So there was coding, because clearly not every ad was going to say research; some might say ‘stay abreast of current and emerging trends’ and others might ask the edvisor to ‘evaluate current platforms’. Whether or not that sat in “research” perfectly is a matter for discussion, but I guess that’s a plus of the fuzzy nature of qualitative data, where data is more free to be about the vibe.

But then I somehow ended up applying numbers to the practices as they sat in the job ad more holistically, in an attempt to place them on a spectrum between pedagogical (1) and technological (10). Which kind of worked, in that it gave me some richer data that I could use to plot the roles on a scattergraph, but I wouldn’t be confident that this methodology would stand up to great scrutiny yet. Just because I was using numbers doesn’t mean that it was quantitative, but it still feels like some kind of weird fusion of the two. I’m sure that I’ll find any number of examples of this in practice, but I haven’t seen much of it so far. I guess it was mainly nice to be able to put a name to what I’d done. To be honest, as I was initially doing it, I assumed that there was probably a name for what I was doing and appropriate academic language surrounding it, I just didn’t happen to know what that was.
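For what it’s worth, the mechanics of that quasi-quantitative step are simple enough to sketch. This is purely illustrative: the job titles, practices and 1–10 scores below are invented for the example, not taken from my actual coding.

```python
from statistics import mean

# Invented scores: each practice coded from a job ad is placed on a
# spectrum from pedagogical (1) to technological (10).
ads = {
    "Learning Designer": [2, 3, 4, 3],
    "Learning Technologist": [8, 7, 9, 6],
    "Academic Developer": [1, 2, 3],
}

# One crude way to position each role on the spectrum for a scattergraph:
# average the scores of all the practices listed in that role's ad.
positions = {title: mean(scores) for title, scores in ads.items()}
```

Averaging obviously throws away a lot of nuance, which is part of why I’m not sure the method would stand up to serious scrutiny.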

I mentioned earlier that qualitative analysis can be somewhat ‘fuzzier’ than quantitative and there was a significant chunk of discussion at the beginning of this week’s resources about that. Overall I got the feeling that there was a degree of defensiveness, with the main issue being that the language and ideas used in quantitative research are far more positivist in nature – epistemologically speaking (I totally just added that because I like that I know this now) – and are perhaps easier to justify and use to validate the data. You get cold hard figures and if you did this the right way, someone else should be able to do exactly the same thing.

An attempt to map some of those quantitative qualities to the qualitative domain was somewhat pooh-poohed because it was seen as missing the added nuance present in qualitative research or something – it was a little unclear really, but I guess I’ll need to learn to at least talk the talk. It partly felt like tribalism or a turf war, but I’m sure that there’s more to it than that. I guess it’s grounded in a fairly profoundly different way of seeing the world and particularly of seeing ‘knowing’. On the one side we have a pretty straightforward set of questions dealing with objective measurable reality, and on the other we have people digging into perspectives and perceptions of that reality and questioning whether we can ever know or say if any of them are absolutely right.

Long story short, there’s probably much more contextualisation/framing involved in the way you analyse qual data and how you share the story that you think it tells. Your own perceptions and how they may have shaped this story also play a far more substantial part. The processes that you undertook – including member checking, asking your subject to evaluate your analysis of their interview/etc to ensure that your take reflects theirs – also play a significant role in making your work defensible.

The section on coding seemed particularly relevant so I’ll quote it directly:

Codes, in qualitative data analysis, are tags that are applied to sections of data. Often done using qualitative data analysis software such as Nvivo or Dedoose.

Codes can overlap, and a section of an interview transcript (for example) can be labeled with more than one code. A code is usually a keyword or words that represent the content of the section in some way: a concept, an emotion, a type of language use (like a metaphor), a theme.

Coding is always, inevitably, an interpretive process, and the researcher has to decide what is relevant, what constitutes a theme and how it connects to relevant ideas or theories, and discuss their implications.

Here’s an example provided by Jen Ross, of a list of codes for a project of hers about online reflective practice in higher education. These codes all relate to the idea of reflection as “discipline” – a core idea in the research:

  • academic discourse
  • developing boundaries
  • ensuring standards
  • flexibility
  • habit
  • how professionals practice
  • institutional factors
  • self assessment

Jen says: These codes, like many in qualitative projects, emerged and were refined during the process of reading the data closely. However, as the codes emerged, I also used the theoretical concepts I was working with to organise and categorise them. The overall theme of “discipline”, therefore, came from a combination of the data and the theory.

https://courses.edx.org/courses/course-v1:EdinburghX+SOCRMx+3T2017/courseware/f41baffef9c14ff488165814baeffdbb/23bec3f689e24100964f23aa3ca6ee03/?child=last

I already mentioned that I undertook a thematic analysis of a range of job ads, which could be considered “across-case” coding. This is in comparison to “within-case” coding, where one undertakes narrative analysis by digging down into one particular resource or story. This involves “tagging each part of the narrative to show how it unfolds, or coding certain kinds of language use”, while thematic analysis is about coding common elements that emerge while looking at many things. In the practical exercise – I didn’t do it because time is getting away from me but I read the blog posts of those who did – a repeated observation was that in this thematic analysis, they would often create/discover a new code halfway through and then have to go back to the start to see if and where it appeared in the preceding resources.
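To make the across-case idea concrete, here’s a minimal sketch of keyword-triggered thematic coding. The segments, codes and keywords are all invented for illustration, and real coding is interpretive judgement rather than mechanical keyword matching, but the shape is the same.

```python
# Invented ad fragments standing in for the "cases".
segments = [
    "stay abreast of current and emerging trends",
    "evaluate current learning platforms",
    "support academics in course design",
]

# A toy codebook: codes can overlap, so one segment may carry several tags.
codebook = {
    "research": ["abreast", "evaluate"],
    "technology": ["platforms"],
    "pedagogy": ["course design"],
}

def code_segment(text, codebook):
    """Return every code whose keywords appear in the text segment."""
    return sorted(code for code, keywords in codebook.items()
                  if any(keyword in text for keyword in keywords))

coded = {segment: code_segment(segment, codebook) for segment in segments}
# Discovering a new code partway through means re-running the pass over the
# earlier segments - the back-to-the-start problem the other participants hit.
```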

On a side note, the practical activity did look quite interesting: it involved looking over a collection of hypothetical future reflections from school leavers in the UK in the late 1970s. They were asked to write a brief story from the perspective of themselves 40 years in the future, on the cusp of retirement, describing the life they had lived. Purely as a snapshot into the past, it is really worth a look for a revealing exploration of how some people saw life and success back in the day. Most of the stories are only a paragraph or two.

https://discover.ukdataservice.ac.uk/QualiBank/?f=CollectionTitle_School%20Leavers%20Study

And once again, there were a bunch of useful-looking resources for further reading about qualitative analysis:

  • Baptiste, I. (2001). Qualitative Data Analysis: Common Phases, Strategic Differences. Forum: Qualitative Social Research, 2/3. http://www.qualitative-research.net/index.php/fqs/article/view/917/2002
  • Markham, A. (2017). Reflexivity for interpretive researchers http://annettemarkham.com/2017/02/reflexivity-for-interpretive-researchers/
  • ModU (2016). How to Know You Are Coding Correctly: Qualitative Research Methods. Duke University’s Social Science Research Unit. https://www.youtube.com/watch?v=iL7Ww5kpnIM
  • Riessman, C.K. (2008). ‘Thematic Analysis’ [Chapter 3 preview] in Narrative Methods for the Human Sciences. SAGE Publishing https://uk.sagepub.com/en-gb/eur/narrative-methods-for-the-human-sciences/book226139#preview Sage Research Methods Database
  • Sandelowski, M. and Barroso, J. (2002). Reading Qualitative Studies. International Journal of Qualitative Methods, 1/1. https://journals.library.ualberta.ca/ijqm/index.php/IJQM/article/view/4615
  • Samsi, K. (2012). Critical appraisal of qualitative research. Kings College London. https://www.kcl.ac.uk/sspp/policy-institute/scwru/pubs/2012/conf/samsi26jul12.pdf
  • Taylor, C and Gibbs, G R (2010) How and what to code. Online QDA Web Site, http://onlineqda.hud.ac.uk/Intro_QDA/how_what_to_code.php
  • Trochim, W. (2006). Qualitative Validity. https://www.socialresearchmethods.net/kb/qualval.php

Research update #36: Playing well with others

The nature of my research topic, with a focus on the status of professional staff in an academic world, feels risky at times. While I know that academic staff occupy edvisor roles as well, I have a feeling that I’ll be digging into sensitive areas around the academic/professional divide that often seem to be swept under the carpet because they raise uncomfortable questions about privilege and class in the academy and some entrenched beliefs about what makes academics special. It would be incredibly presumptuous for me to think that my ideas are all necessarily right, and the point of research is to put them to the test and see where they take me, but there’s a fair chance that some of what I’m going to have to say won’t always be well received by some of the people that I work with and who pay me. The other big issue is whether, if my findings demonstrate a blind spot among academics when it comes to professional staff, those same academics responsible for assessing my research will see the value in my work.

Fortunately at this stage I don’t have my heart set on a career as an academic – I really do like doing what I do – but it seems imprudent to prematurely cut one’s options. I am conscious that I need to be more researcherly or scholarly in the language that I use in this space. I sent out a slightly provocative tweet yesterday, prompted by a separate (joke) tweet that I saw which said that the fastest way to assemble a bibliography was to publicly bemoan the lack of research in topic x. 

After 36 hours I’ve had no literature recommended but a university Pro Vice-Chancellor replied suggesting a collaboration on this area of mutual interest. Which surprised and flattered me greatly, considering that I was concerned that I’d come across as a little bolshie in my questions. Maybe it’s wrong of me to see academics as some kind of monolithic whole.

Maybe the trick is to just worry less and be honest. You can’t please everyone and if you can stand behind your work, maybe that’s enough.

I’m not sure. We seem to live in incredibly sensitive times.

Week #6 SOCRMx – Quantitative analysis

This section of the SOCRMx MOOC offers a fair introduction to statistics and the analysis of quantitative data. At least, enough to get a grasp on what is needed to get meaningful data and what it looks like when statistics are misused or misrepresented. (This bit in particular should be a core unit in the mandatory media and information literacy training that everyone has to take in my imaginary ideal world)

The more I think about my research, the more likely I think it is to be primarily qualitative but I can still see the value in proper methodology for processing the quant data that will help to contextualise the rest. I took some scattered notes that I’ll leave here to refer back to down the road.

Good books to consider – Charles Wheelan: Naked Statistics: Stripping the dread from data (2014) & Daniel Levitin: A Field Guide to Lies and Statistics: A Neuroscientist on How to Make Sense of a Complex World (2016)

Mean / Median / Mode

Mean – straightforward average.

Median – put all the results in a line and choose the one in the middle. (Better for average incomes as high-earners distort the figures)

Mode – the value that occurs most often (or, for binned data, the bin with the most hits in it)
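A quick sketch with Python’s standard library, using invented incomes, shows why the median is the better ‘typical’ value when high earners are in the mix:

```python
from statistics import mean, median, mode

incomes = [40, 45, 50, 55, 60, 65, 500]  # invented, in $k; one high earner

average = mean(incomes)    # dragged upward by the outlier (roughly 116)
typical = median(incomes)  # the middle value (55) resists the distortion
common = mode([1, 2, 2, 3, 2])  # the most frequently occurring value (2)
```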

Student’s T-Test – a method for interpreting what can be extrapolated from a small sample of data. It is the primary way to understand the likely error of an estimate, depending on your sample size.

It is the source of the concept of “statistical significance.”

A P-value is a probability. It is a measure summarizing the incompatibility between a particular set of data and a proposed model for the data (the null hypothesis). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5366529/

“a significance level is an indication of the probability of an observed result occurring by chance under the null hypothesis; so the more you repeat an experiment, the higher the probability you will see a statistically significant result.”
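Working from the definitions above, the one-sample t statistic is just the distance between the sample mean and a hypothesised population mean, measured in standard-error units. The sample values and the null hypothesis here are invented for the sake of the sketch:

```python
from math import sqrt
from statistics import mean, stdev

sample = [5.1, 4.9, 5.3, 5.2, 4.8, 5.4]  # invented measurements
mu0 = 5.0  # null hypothesis: the true population mean is 5.0

n = len(sample)
standard_error = stdev(sample) / sqrt(n)
t = (mean(sample) - mu0) / standard_error
# Compare |t| against a t-distribution with n - 1 degrees of freedom
# (via tables or software) to turn it into a p-value.
```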

Overall this entire domain is one where I think I’m only really going to appreciate the core concepts when I have a specific need for it. The idea of a distribution curve where the mean of all data points represents the high point and standard deviations (determined by a formula) show us the majority of the other data points seems potentially useful but, again, until I can practically apply it to a problem, just tantalisingly beyond my grasp.
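One small way to bring that within reach: the standard deviation is directly computable, and you can check what fraction of the data sits within one standard deviation of the mean. For large, roughly bell-shaped samples that fraction is around 68%; tiny invented samples like this one wander well away from it.

```python
from statistics import mean, stdev

data = [2, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 8]  # invented data points
m = mean(data)
s = stdev(data)  # sample standard deviation

# Proportion of points within one standard deviation of the mean.
within_one_sd = sum(1 for x in data if abs(x - m) <= s) / len(data)
```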