Category Archives: methodology

SOCRMx Week #8: The End

Well, I probably said all that I needed to say about my general feelings on this MOOC in my last post, so this is largely for the sake of completeness. The final week of this course is a peer-assessed piece of writing analysing the methods used in a sample paper. Turns out that I missed the deadline to write that – I may even have been working on my Week 7 post when that deadline fell – so this appears to be the end of the road for me. I could still go through and do the work but I found the supplied paper unrelated to my research and using methodologies that I have little interest in. The overall questions raised in the assessment instructions – and the things they ask you to be mindful of – are enough:

  • What method of analysis was used?
  • How was the chosen method of analysis appropriate to the data?
  • What other kinds of analysis might have been used?
  • How was the analysis designed? Is the design clearly described? What were its strengths and weaknesses?
  • What kind of issues or problems might one identify with the analysis?
  • What are the key findings and conclusions, and how are they justified through the chosen analysis techniques?

And so with that, I guess I’m done with SOCRMx. In spite of my disengagement with the community, the resources and the structure really have been of a high standard and, more importantly, incredibly timely for me. As someone returning to study after some time away, without ever really having had a formal research focus, I found there to be a lot of assumed knowledge about research methodology, so having this opportunity to get a bird’s-eye view of the various options was ideal. I know I still have a long way to go but this has been a nice push in the right direction.


SOCRMx Week #7: Qualitative analysis

I’m nearly at the end of Week #8 in the Social Research Methods MOOC and while I’m still finding it informative, I’ve kind of stopped caring. The lack of community and particularly of engagement from the teachers has really sucked the joy out of this one for me. If the content wasn’t highly relevant, I’d have left long ago. And I’ll admit, I haven’t been posting the wonderfully detailed and thoughtful kind of posts on the forum or in the assigned work that the other 5 or so active participants have been doing, but I’ve been contributing in a way that supports my own learning. I suspect the issue is that this is being run as a formal unit in a degree program and I’m not one of those students. Maybe it’s that I chose not to fork over the money for a verified certificate. Either way, it’s been an unwelcoming experience overall. When I compare it to the MITx MOOC I did a couple of years ago on Implementing Education Technology, it’s chalk and cheese. Maybe it’s a question of having a critical mass of active participants, who knows. But as I say, at least the content has been exactly what I’ve needed at this juncture of my journey in learning to be a researcher.

This week the focus was on Qualitative Analysis, which is where I suspect I’ll be spending a good amount of my time in the future. One of my interesting realisations early on in this though was that I’ve already tried to ‘cross the streams’ of qual and quant analysis this year when I had my first attempt at conducting a thematic analysis of job ads for edvisors. I was trying to identify specific practices and tie them to particular job titles in an attempt to clarify what these roles were largely seen to be doing. So there was coding, because clearly not every ad was going to say ‘research’ – some might say ‘stay abreast of current and emerging trends’ and others might ask the edvisor to ‘evaluate current platforms’. Whether or not that sat in “research” perfectly is a matter for discussion but I guess that’s a plus of the fuzzy nature of qualitative data, where data is more free to be about the vibe.

But then I somehow ended up applying numbers to the practices as they sat in the job ad more holistically, in an attempt to place them on a spectrum between pedagogical (1) and technological (10). This kind of worked, in that it gave me some richer data that I could use to plot the roles on a scattergraph, but I wouldn’t be confident that this methodology would stand up to great scrutiny yet. Just because I was using numbers doesn’t mean that it was quantitative, but it still feels like some kind of weird fusion of the two. I’m sure that I’ll find any number of examples of this in practice but I haven’t seen much of it so far. I guess it was mainly nice to be able to put a name to what I’d done. To be honest, as I was initially doing it, I assumed that there was probably a name for what I was doing and appropriate academic language surrounding it, I just didn’t happen to know what that was.

I mentioned earlier that qualitative analysis can be somewhat ‘fuzzier’ than quantitative and there was a significant chunk of discussion at the beginning of this week’s resources about that. Overall I got the feeling that there was a degree of defensiveness, with the main issue being that the language and ideas used in quantitative research are far more positivist in nature – epistemologically speaking (I totally just added that because I like that I know this now) – and are perhaps easier to justify and use to validate the data. You get cold hard figures and, if you did this the right way, someone else should be able to do exactly the same thing and get the same results.

An attempt to map some of those quantitative qualities to the qualitative domain was somewhat pooh-poohed because it was seen as missing the added nuance present in qualitative research or something – it was a little unclear really, but I guess I’ll need to learn to at least talk the talk. It partly felt like tribalism or a turf war but I’m sure that there’s more to it than that. I guess it’s grounded in a fairly profoundly different way of seeing the world and particularly of seeing ‘knowing’. On the one side we have a pretty straightforward set of questions dealing with objective measurable reality and on the other we have people digging into perspectives and perceptions of that reality and questioning whether we can ever know or say if any of them are absolutely right.

Long story short, there’s probably much more contextualisation/framing involved in the way you analyse qual data and how you share the story that you think it tells. Your own perceptions and how they may have shaped this story also play a far more substantial part. The processes that you undertook – including member checking (asking your subjects to evaluate your analysis of their interview etc. to ensure that your take reflects theirs) – also play a significant role in making your work defensible.

The section on coding seemed particularly relevant so I’ll quote that directly:

Codes, in qualitative data analysis, are tags that are applied to sections of data. Often done using qualitative data analysis software such as Nvivo or Dedoose.

Codes can overlap, and a section of an interview transcript (for example) can be labeled with more than one code. A code is usually a keyword or words that represent the content of the section in some way: a concept, an emotion, a type of language use (like a metaphor), a theme.

Coding is always, inevitably, an interpretive process, and the researcher has to decide what is relevant, what constitutes a theme and how it connects to relevant ideas or theories, and discuss their implications.

Here’s an example provided by Jen Ross, of a list of codes for a project of hers about online reflective practice in higher education. These codes all relate to the idea of reflection as “discipline” – a core idea in the research:

  • academic discourse
  • developing boundaries
  • ensuring standards
  • flexibility
  • habit
  • how professionals practice
  • institutional factors
  • self assessment

Jen says: These codes, like many in qualitative projects, emerged and were refined during the process of reading the data closely. However, as the codes emerged, I also used the theoretical concepts I was working with to organise and categorise them. The overall theme of “discipline”, therefore, came from a combination of the data and the theory.

https://courses.edx.org/courses/course-v1:EdinburghX+SOCRMx+3T2017/courseware/f41baffef9c14ff488165814baeffdbb/23bec3f689e24100964f23aa3ca6ee03/?child=last

I already mentioned that I undertook a thematic analysis of a range of job ads, which could be considered to be “across case” coding. This is in comparison to “within-case” coding, where one undertakes narrative analysis by digging down into one particular resource or story. This involves “tagging each part of the narrative to show how it unfolds, or coding certain kinds of language use” while thematic analysis is about coding common elements that emerge while looking at many things. In the practical exercise – I didn’t do it because time is getting away from me but I read the blog posts of those who did – a repeated observation was that in this thematic analysis, they would often create/discover a new code halfway through and then have to go back to the start to see if and where it appeared in the preceding resources.

On a side note, the practical activity did look quite interesting: it involved looking over a collection of hypothetical future reflections from school leavers in the UK in the late 1970s. They were asked to write a brief story from the perspective of themselves 40 years in the future, on the cusp of retirement, describing the life they had lived. Purely as a snapshot into the past, it is really worth a look for a revealing exploration of how some people saw life and success back in the day. Most of the stories are only a paragraph or two.

https://discover.ukdataservice.ac.uk/QualiBank/?f=CollectionTitle_School%20Leavers%20Study

And once again, there were a bunch of useful-looking resources for further reading about qualitative analysis:

  • Baptiste, I. (2001). Qualitative Data Analysis: Common Phases, Strategic Differences. Forum: Qualitative Social Research, 2/3. http://www.qualitative-research.net/index.php/fqs/article/view/917/2002
  • Markham, A. (2017). Reflexivity for interpretive researchers http://annettemarkham.com/2017/02/reflexivity-for-interpretive-researchers/
  • ModU (2016). How to Know You Are Coding Correctly: Qualitative Research Methods. Duke University’s Social Science Research Unit. https://www.youtube.com/watch?v=iL7Ww5kpnIM
  • Riessman, C.K. (2008). ‘Thematic Analysis’ [Chapter 3 preview] in Narrative Methods for the Human Sciences. SAGE Publishing https://uk.sagepub.com/en-gb/eur/narrative-methods-for-the-human-sciences/book226139#preview Sage Research Methods Database
  • Sandelowski, M. and Barroso, J. (2002). Reading Qualitative Studies. International Journal of Qualitative Methods, 1/1. https://journals.library.ualberta.ca/ijqm/index.php/IJQM/article/view/4615
  • Samsi, K. (2012). Critical appraisal of qualitative research. Kings College London. https://www.kcl.ac.uk/sspp/policy-institute/scwru/pubs/2012/conf/samsi26jul12.pdf
  • Taylor, C and Gibbs, G R (2010) How and what to code. Online QDA Web Site, http://onlineqda.hud.ac.uk/Intro_QDA/how_what_to_code.php
  • Trochim, W. (2006). Qualitative Validity. https://www.socialresearchmethods.net/kb/qualval.php

Week #6 SOCRMx – Quantitative analysis

This section of the SOCRMx MOOC offers a fair introduction to statistics and the analysis of quantitative data. At least, enough to get a grasp on what is needed to get meaningful data and what it looks like when statistics are misused or misrepresented. (This bit in particular should be a core unit in the mandatory media and information literacy training that everyone has to take in my imaginary ideal world.)

The more I think about my research, the more likely I think it is to be primarily qualitative but I can still see the value in proper methodology for processing the quant data that will help to contextualise the rest. I took some scattered notes that I’ll leave here to refer back to down the road.

Good books to consider – Charles Wheelan: Naked Statistics: Stripping the dread from data (2014) & Daniel Levitin: A Field Guide to Lies and Statistics: A Neuroscientist on How to Make Sense of a Complex World (2016)

Mean / Median / Mode

Mean – straightforward average.

Median – put all the results in a line and choose the one in the middle. (Better for average incomes as high-earners distort the figures)

Mode – the value (or bin) that occurs most often in the data.

Student’s t-test – a method for interpreting what can be extrapolated from a small sample of data. It is the primary way to understand the likely error of an estimate, depending on your sample size.

It is the source of the concept of “statistical significance.”

A P-value is a probability. It is a measure summarizing the incompatibility between a particular set of data and a proposed model for the data (the null hypothesis). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5366529/

“a significance level is an indication of the probability of an observed result occurring by chance under the null hypothesis; so the more you repeat an experiment, the higher the probability you will see a statistically significant result.”

Overall this entire domain is one where I think I’m only really going to appreciate the core concepts when I have a specific need for them. The idea of a distribution curve where the mean of all data points represents the high point and standard deviations (determined by a formula) show us where the majority of the other data points sit seems potentially useful but, again, until I can practically apply it to a problem, it remains just tantalisingly beyond my grasp.

Week #4 SOCRMx – Reflecting on methods

This week in the Social Research Methods MOOC we take a moment to take a breath and consider the approaches that we currently favour.

One of the activities is to reflect in our blog – so I guess this is that. I’m looking at surveys because I still need to get my head around discourse analysis, not having really used it before.

Reflecting on your chosen methods

Choose one of the approaches you’ve explored in previous weeks, and write a reflective post in your blog that answers the following questions. Work through these questions systematically, and try to write a paragraph or two for each:

What three (good) research questions could be answered using this approach?

I’m fairly focused on my current research questions at the moment and I would say that using surveys will help me to start answering them, but I certainly wouldn’t rely solely on surveys. The questions are: How do education advisors see their role and value in Tertiary Education? How are education advisor roles understood and valued by teachers and institutional management? What strategies are used in tertiary education to promote understanding of the roles of education advisors among teaching staff, and more broadly within the institution?

What assumptions about the nature of knowledge (epistemology) seem to be associated with this approach?

The main assumption is that subjective or experience-based knowledge is sufficient. I don’t believe that this is the case. Clearly, a survey can be useful in collecting broad data about the attitudes that people claim or even believe that they hold; however, people can have a tendency to want to see themselves in the best possible light – the heroes of their own story – and responses might be more indicative of what people would like to think they believe than what their actions show them to believe.

What kinds of ethical issues arise?

This would depend on the design of the research. Assuming there is no need for participants to be subsequently identifiable, anonymity should enable respondents to express their opinions freely and without concern for consequences. Questions should be designed in a way that is not unnecessarily intrusive or likely to influence the way that respondents answer. I’d also assume that good research design would ensure that the demographics of survey participants are reflective of the community being studied.

What would “validity” imply in a project that used this approach?

I would say that ‘validity’ would require addressing some of the issues that I’ve already raised. Primarily that the survey itself could be relied upon to collect data that accurately reflects the opinions of the survey respondents without influencing these opinions or asking ambiguous questions that could be interpreted in different ways. My overall preference would be for the survey to be one part of a larger research project that provides data from different sources that can be used to provide greater ‘validity’.

What are some of the practical or ethical issues that would need to be considered?

The survey would need to be anonymous and the data kept securely. Questions should be designed to be as clear and neutral as possible and a sufficiently representative sample of participants obtained. Given the number of surveys that people get asked to complete these days, ensuring that people have a clear understanding of the purpose and value of the research would be vital. For the same reason, I’d suggest that we have a responsibility to ask people only for the information that we need and nothing more.

And finally, find and reference at least two published articles that have used this approach (aside from the examples given in this course). Make some notes about how the approach is described and used in each paper, linking to your reflections above.

McInnis, C. (1998). Academics and Professional Administrators in Australian Universities: dissolving boundaries and new tensions. Journal of Higher Education Policy and Management, 20(2), 161–173.

Comparison of two surveys, one of academic staff (1993) and one of administrative/professional staff (1996). Analysis of results, some additional questions were added to the second survey

Wohlmuther, S. (2008). “Sleeping with the enemy”: how far are you prepared to go to make a difference? A look at the divide between academic and allied staff. Journal of Higher Education Policy and Management, 30(4), 325–337.

Based on an anonymous online survey of 29% of all staff – academic and professional – at her institution, which included questions about demographics, perceptions of the nature of their roles, the ‘divide’ and the value of different types of staff in relation to strategic priorities.

Both surveys related to workplace issues and attitudes, which meant that privacy was a significant factor. I was less impressed with the approach taken by Wohlmuther, which I felt was overly ambiguous in parts.

“Survey respondents were asked what percentage of their time they spent on allied work and what percentage of their time they should spend on allied work. The term ‘allied work’ was not defined. It was left to the respondent to interpret what they meant by allied work” (p.330)

I do still think that I’ll use surveys as a starting point but expect to then take this information and use it to help design interviews and also to inform analysis of other sources of data.

Week #3 SOCRMx – Discourse Analysis

When I first stumbled across Foucault in some paper since cast to the depths of my mind, my immediate response was that it was wanky and unhelpful theoretical tosh. I’ll admit that I struggled to get my head around it but my broad takeaway was that it sat too far in the whole post-modern create your own reality school that has since brought us ‘fake news’ and Donald Trump.

Imagine my surprise then as I worked through the resources relating to Discourse Analysis – and particularly five different theoretical approaches to doing it – only to find that Foucauldian Discourse Analysis might in fact be the closest thing to what I need in exploring the language used around Edvisors to see if and how it shapes their status and identity in tertiary education institutions. The other option is Critical Discourse Analysis, which kind of works in the same way but seems slightly angrier about it. Maybe not angrier, but you seem to need to start from the position that there is an existing problem (which there probably is) and then dig into what you’re going to do about it. Both are on the table for now anyway.

The great news is that from what I knew of this a week ago – that it existed and a couple of people had mentioned that it sounded like what I wanted to do – I now think that I can see why and how it might be valuable. Not that I know how to do it yet but that will come with time.

So once again the EdinburghX SOCRMx MOOC is coming through for me. I had hoped to have explored 2-3 additional topics by now but I came down horribly sick late last week and am only just functional again now.

For what it’s worth, here are my other scratch notes on Discourse Analysis taken from the course so far:

Qualitative approach to the study of language in use – spoken or text.

Covers diverse sources from interviews/focus groups to secondary material such as archival material, policy documents, social media and so on.

Various ways of doing it from the micro (sentence by sentence) to the macro (overall impact of how language is used) depending on the theoretical framework chosen.

References: Discourse – David Howarth and Analysing Discourse – Norman Fairclough (more practical)

Common criticisms of DA – it’s idealist (the world is just a product of our minds) and relativist (anything goes). Also that Discourse Analysts confuse changing the way that we talk about a thing with actually changing the thing itself. Maybe, maybe not.

“Critical discourse analysis is actually really interested in the ways in which systems of representation have actual material effects and asymmetrical effects on the distribution of burdens and benefits on particular social groups, access to resources and so on and so forth” (MOOC video introduction)

There are many different types of discourse analysis, including conversation analysis, which analyses talk in detail (see Charles Antaki’s excellent web site for a good introduction to conversation analysis), and critical discourse analysis, which pays particular attention to how relations of power and domination are enacted through discourse.

An important aspect of discourse analysis, for our purposes, is that it treats language as action. As Gee puts it, language “allows us to do things and be things… saying things in language never goes without also doing things and being things” (Gee, 2011, p.1). It also places importance on context: “to understand anything fully you need to know who is saying it and what the person saying it is trying to do” (ibid, p.2).

Not Conversation Analysis for my work

Critical Discourse Analysis – about power relationships and social issues. Almost seems too loaded? Documents that seek to present particular political positions

Foucauldian Discourse Analysis might be relevant – how language shapes identity

There was also an assignment for us to try it out with. One of my major interests is job advertisements, which is perhaps not the best place to start given how formalised the structures of these things are but I did it all the same. Outlaw Country!

This is the sample text:

*This is a new open-ended, part-time (0.5 FTE) post in the E-Learning Development Team, which has been created to support the development of the University’s online distance learning provision. The role holder will provide application management support to academic programme teams for the delivery of fully online courses. In the performance of these duties the role holder will coordinate the registration of courses, students and staff on the University’s Canvas learning management system (LMS).

The post will provide first-line user support to staff and second-line support to students, responding to queries on the Canvas LMS. The post requires a combination of good technological skills, awareness of course and user administration processes and expertise in delivering training and support services. Creative approaches to problem solving and the ability to learn and apply new skills quickly will be necessary, as well as good organisational skills, excellent interpersonal skills and above all, a strong commitment to customer service.

The role forms part of a small team working to the highest standards and best practices for online learning. You will be expected to work on your own initiative, leading staff training and user support services, as well as working effectively within a team.*

These are my responses.

1. Significance: The nature of the text is highly specific and directive. The requirements expected of the reader are made explicit with the use of terms like “The post requires”, “will be necessary” and “you will be expected”. As a job advertisement this is fairly standard language. The use of “and above all” gives extra weighting to the need for a “strong commitment to customer service”.

2. Practices: This text is being used to describe a recruitment process.

3. Identities: This text describes in detail the characteristics that the (suitable) reader should possess and explicitly states their relationships with other people and groups described. This positions the writer very much as the person holding the power.

4. Relationships: The text defines the relationship between the reader (if successful) and stakeholders in the university, and also the relationship between the reader and writer (employee/employer).

5. Politics: The nature of a job advertisement is to describe ‘how things should be’. It broadly pushes a line that the institution cares about quality teaching and learning and also quality customer support.

6. Connections: Everything is relevant to everything else in this piece of text because it has a singular focus on the specific goal of recruiting the right person.

7. Sign systems and knowledge: Some of the language used assumes that a certain type of knowledge relating to technology enhanced learning is possessed by the reader. It is heavily factual and not supportive of different interpretations of what is written.


I don’t know if I’m ‘doing it right’ particularly but it did make me think a little more about the nature of the power relationships expressed in job ads and the claims that they make to reflect an absolute truth about reality. So that seems like a thing.

I haven’t taken a look at the discussion posts for the other topics but the fact that there are only 3 other posts about Discourse Analysis in this MOOC after 3 weeks makes me wonder whether it’s simply a topic that people aren’t engaging with or whether people aren’t really engaging with the MOOC overall. Hopefully it’s the former, because I’m getting a lot out of this.

Week #2 SOCRMx – Surveys

Weeks 2 & 3 of the EdinburghX Social Research Methods MOOC see us starting to dig into a couple of methods from a list of about 8. Being a nerd who really wants to get my head around 4 or 5 of them (surveys, discourse analysis, interviews, focus groups and social network analysis), I think I’ve already over-committed, but the readings and the activities are great.

For surveys, I now need to design a simple survey of 6-8 questions exploring some aspect of the use of social media by a specific group of people. Big surprise, I’m going to delve into how TEL advisors (academic developers, education designers, learning technologists) use social media as part of their participation in a community of practice. Given the nature of the participants, I am assuming a reasonable level of understanding of the concepts.

I think some of these questions might be more complex than I need them to be but I figure they’re a work in progress. (And now I’m wondering if WordPress has some kind of cool survey building tool that I can put them into. Ok, looking for plugins is a rabbit hole – text is just fine.)

  1. Do you use social media platforms as a part of your professional community of practice as a TEL advisor?

[ ] Yes  [ ] No

2. If yes, which of the following social media platforms do you use to participate in your professional Community of Practice (CoP). (Choose as many as are applicable)

[ ] Twitter  [ ] Facebook [ ] LinkedIn [ ] Google+ [ ] Instagram [ ] Tumblr [ ] Wechat [ ] other ___________ (please list)

3. Of the social media platforms that you use in your professional CoP, rank them from 1 (most useful) to n (least useful), where n is the last option you rank

[ ] Twitter  [ ] Facebook [ ] LinkedIn [ ] Google+ [ ] Instagram [ ] Tumblr [ ] Wechat [ ] other ___________ (please list)

4. Approximately how long have you used social media as part of your professional CoP?

[ ] 0 years (I don’t) [ ] Less than 1 year [ ] 1 – 2 years [ ] 3-4 years [ ] 5 or more years

5. Approximately how many people are you connected with in the social media platforms that you use for your professional CoP? (Including people that you follow and those that follow you)

[ ] Under 20 [ ] 20 – 50 [ ] 51- 99 [ ] 100 – 500 [ ] 501 – 1000 [ ] More than 1000

6. Rank the reasons why you use social media with your professional CoP in order of importance to you, from 1 (most important) to n (least important)

[ ] To get help [ ] To promote your work [ ] To belong to a community [ ] To keep up to date [ ] To share ideas [ ] Other ________________ (please list)

7. How important is it for you to separate your professional life from your personal life when you use social media platforms?

[ ] Highly important [ ] somewhat important [ ] neutral [ ] somewhat unimportant [ ] highly unimportant


Ok, overall I’m reasonably happy with these questions – they’re possibly a little wordier than I’d like but I’m trying to be pretty specific. Bringing ranking in is possibly also more complex than it needs to be, particularly when I’m not asking people to rank all the options, only those that they selected or find relevant. May be overreaching there.

All in all though, I think this could result in some pretty rich data. Not sure what to do about people who don’t use social media – maybe that’s a screening question? Though it would kind of be useful to get a sense of the proportions.

Can I get a method: The EdinburghX SOCRMx Social Research Methods MOOC Week #1

MOOC Week #1 question responses

Making a blog post is part of the participation in the MOOC. I’m just going to put my answers here at the top so people don’t need to read the rest of my post about the MOOC and methods etc.

I’ve been working on this PhD for a little under two years now, so most of these questions I’ve covered in previous posts but will answer for the sake of the exercise.

  • What kind of topics are you interested in researching?

The relationships between edvisors (academic developers, education designers, learning technologists etc) and academics and institutional management

  • What initial research questions might be starting to emerge for you?

What strategies are used in HE to promote understanding of the roles and value of edvisors among academic staff, and more broadly within the institution? Which among these strategies are effective and why?

How do edvisors see their role and value in Higher Education institutions?

How are edvisor roles understood and valued by academics and HE management?

  • What are you interested in researching – people, groups, communities, documents, images, organisations?

People, groups/organisations, documents

  • Do you have any initial ideas for the kinds of methods that might help you to gather useful knowledge in your area of interest?

Currently leaning towards survey/interview and document analysis – job advertisements and organisational structures

  • What initial questions do you have about those methods? What don’t you understand yet?

Is this the best way to do what I want to do? Are there better alternatives?

  • Do you perceive any potential challenges in your initial ideas: either practical challenges, such as gaining access to the area you want to research, or the time it might take to gather data; or conceptual challenges; such as how the method you are interested in can produce ‘facts’, ‘truths’, or ‘valuable knowledge’ in your chosen area?

Not sure yet. I’m conscious that there might be sensitivities and politics to deal with.

Ok, so that’s the end of the ‘homework’ part of the blog. This next bit is what I’d already started writing about why I’m here and so on. 

One of the nice things that comes up from time to time when I discuss my research with academics is that they’ll excitedly start telling me about the methods and methodology that might be helpful. It’s a shame that no two suggested approaches to data collection or analysis have been the same and that I don’t have a rich enough understanding of all the options to be able to make a comparison. It absolutely all gets noted down though and I will give all of the options extra attention as I come to some conclusion about what I plan to do.

A couple of things strike me about this variety of opinions – chief of which being that it can seem almost ideological in some ways. I’ve had people that I’ve barely finished describing the broad research question to swear up and down that their magic potion is the only one that will possibly cure my ailments. This is before I’ve even gotten down to what kind of data I think will be helpful or what my underpinning theories are.

Now I don’t question the sincerity of these people for a second and I even find it slightly touching that they can be so supportive and encouraging to a complete stranger. I’m sure that they’ve worked through any number of methods and learnt hard lessons about what works and what doesn’t and are keen to see other people spared some of those difficulties. It does seem though overall that once you’ve landed on a methodological tribe, that’s where you live. (But honestly, this is definitely supposition, I’m sure there’s more nuance than that – or at least I hope so).

If this is the way that things work, I can see positives and negatives. On the positive side, I would hope that pretty well any method or methodology can be valid if you can make a strong enough case for it. On the negative side, if there is an ingrained tribalism to a method and your reviewer lives in a different tribe, will you get the fairest hearing? Scholarship is meant to be grounded in objectivity but if a scholar has sunk part of their soul into a particular theory or a particular approach to scholarship, might you not have to work a little harder if you choose a different angle?

Working out the angle in the first place is my real challenge. I have some ideas about where I’m going and what I want to explore, and I think there are some theories that will inform this, but I still feel that I’m very much in unknown unknowns territory when it comes to methods. There was a mandatory research methods unit when I did my Masters way back when but at the time I had no intentions of moving into further research so I left it until last. Without seeing any particular application for the unit, I did the base level of work needed to finish it – actually I’m being ungenerous there, I still managed a Distinction – and promptly forgot everything.

There are research training opportunities available at my current uni but they cater almost entirely to on-campus, full-time students, so it’s up to me to find my own way. It’s only recently that I’ve felt that I had a reasonable grasp on my topic so I’ve been happy to stay focused on the literature and put the how-to-research part on the back-burner. Which is all a very long-winded way of talking about why I’ve started the EdX EdinburghX SOCRMx Social Research Methods MOOC. From what I can see, this offers the overview of options that I need – they seem to favour creating one’s own bespoke set of methods, which suits my personal approach – and I’m hopeful that this will give me the focus that I’ve been lacking. I’ll obviously be keeping an eye out for the approaches that have already been commended to me; hopefully I’ll get a better picture of where exactly they sit on the map.

There’s a couple of other things that I’m already liking about this MOOC – there seems to be a manageable number of participants (~94 posts in the introduce yourself forum) and the MOOC moderators seem quite keen on the use of our own blogs for reflections and communication.

Oh and now I’m completely sold – I know this is a pretty basic tool but this is essentially exactly what I’ve been looking for. They’ve used a multi-choice quiz to provide detailed feedback about methods that might suit particular research preferences. (Kind of like a Buzzfeed quiz that isn’t harvesting your personal data for fun and profit). (All the answers are right)

[Screenshot: the MOOC’s methods questions quiz]

There was also a nice video explainer of epistemology – which I kind of knew was essentially about ways of knowing, but I wasn’t clear on why it mattered, or perhaps on the nature of the different ways of knowing (e.g. getting information from an authority figure vs experience/logic/science/tradition etc).

So yes, pretty happy with what I’ve seen so far here.

Thoughts on: Teaching online (in Teaching thinking: Beliefs and knowledge in Higher Education) (Goodyear, P. 2002)

Writing about work by your supervisor feels a little strange but, as adults and scholars, it really shouldn’t. Obviously there is a power dynamic and a question for me of what to do if I disagree with him. Putting aside the matter that Peter Goodyear has worked and researched in this field forever and is highly regarded internationally while I am essentially a neophyte, I’m almost certain that his worst reaction would be the slightest brow-crinkling and a kindly, interested “ok, so tell me why”. He even made the point that the research may now be dated but it could be worth following the citation trail. Fortunately none of this is an issue because, as you’d hope from your supervisor, it’s pretty great and there is much to draw from it.

In summary, this chapter focuses on understanding what and how teachers think when they are teaching online. Sadly perhaps, little has changed in the nature of online teaching in the 14 years since this was written – the online teaching activities described are largely related to students reading papers and participating in discussions on forums. This gives the chapter a degree of currency in terms of the technology (although a few questions emerged for me in terms of the impact of social media) and I imagine that little has changed in teacher thought processes in this time related to assessing and trying to engage students online.

In some ways it’s the methodology used in the study that is the most exciting part of this – it steers away from the sometimes problematic reliance on transcript analysis used often (at the time?) in research on online learning and makes more use of the opportunities for observation. Observing a teacher reading, processing and replying to discussion forum posts offers opportunities for insight into their thoughts that are far richer than one might get in observing face to face teaching. By using a combination of concurrent and retrospective verbalisation and interview, a rich picture emerges.

Concurrent verbalisation involves getting the tutor to keep up a kind of stream-of-consciousness dialogue as they work on the discussion posts, with the researcher prompting them if they fall silent for more than 10 seconds. This can prove difficult for the teacher, as they sometimes need to stop speaking to concentrate on the replies that they write, but a balance is generally found. The session is also videotaped and the researcher and teacher watch it back together (‘stimulated recall’), which gives the teacher the opportunity to discuss what they were thinking in the quiet moments as well as enabling them to expand on their recorded comments. In terms of understanding the things that are important to teachers and how they work with the students, I find this method really exciting. I’m not at all sure how or if it will align with my own research when I come to it but this rich insight seems invaluable.

The author opens the chapter by thoroughly going through the motivations for researching teaching – ranging from an abstracted interest in it as a good area for study to a more action-research-oriented focus on improving specific aspects of teaching practice. He explores the existing literature in the field – particularly in relation to online learning – and finds that (at the time) there were a number of significant gaps in research relating to practice, and he proceeds to set out six high-level research questions relating to online teaching. It seems worthwhile sharing them here:

  1. What are the essential characteristics of online teaching? What tasks are met? What actions need to be taken? Are there distinct task genres that further differentiate the space of online teaching?

  2. How do these practices and task genres vary across different educational settings (e.g between disciplines, or in undergraduate vs postgraduate teaching, or in campus based vs distance learning) and across individuals?

  3. For each significant kind of online teaching, what knowledge resources are drawn upon by effective teachers? How can we understand and represent the cognitive and other resources and processes implicated in their teaching?

  4. How do novice online teachers differ from expert and experienced online teachers? How do they make the transition? How does their thinking change? How does the knowledge on which they draw change? How closely does this resemble ‘the knowledge growth in teaching’ about which we know from studies of teaching in other, more conventional, areas?…

  5. What do teachers say about their experiences of online learning? How do they account for their intentions and actions? How do their accounts situate action in relation to hierarchies of belief about teaching and learning (generally) and about teaching and learning online?

  6. How do learners’ activities and learning outcomes interact with teaching actions? (p.86)

Skipping forward, Goodyear conducted the research with a number of teachers working online and identified several key factors that shape what and how teachers teach online. The focus of their attention – is it on the student, the content, how well the subject is going, whether students are learning, the technology, how students will respond to their feedback etc – can vary wildly from moment to moment. Their knowledge of their students – particularly when they might never meet them in person – can shape the nuance and personalisation of their communications. This also ties to “presentation of self” – also known as presence – which is equally important in forming effective online relationships. Understanding of online pedagogy and attitudes towards it are unsurprisingly a big factor in success in teaching online and this also impacts on their ability to manage communication and conflict in an online space, where normal behaviours can change due to perceived distance.

There were a lot of other noteworthy ideas in this chapter that are worth including here and it also sparked a few of my own ideas that went off on something of a tangent.

Those who foresee an easy substitution of teaching methods too frequently misunderstand the function or underestimate the complexity of that which they would see replaced (p.80)

Teaching is not an undifferentiated activity. What is involved in giving a lecture to 500 students is different from what is involved in a one-to-one, face-to-face, tutorial. Also, interactive, face-to-face, or what might be called ‘live’ teaching is different from (say) planning a course, giving feedback on an essay, designing some learning materials, or reflecting on end-of-course student evaluation reports. (James Calderhead structures his 1996 review of teachers’ cognitions in terms of ‘pre-active’, ‘interactive’ and ‘post-active reflection’ phases to help distinguish the cognitive demands of ‘live’ teaching from its prior preparation and from reflection after the event) (p.82)

The affordances of the user interface are an important factor in understanding how online tutors do what they do. This is not simply because online tutors need to understand the (relatively simple) technical procedures involved in searching, reading and writing contributions. Rather the interface helps structure the tutors’ tasks and also takes some of the cognitive load off the tutor (p.87)

Studies of ‘live’ classroom teaching in schools have tended towards the conclusion that conscious decision-making is relatively rare – much of what happens is through the following of well-tested routines (Calderhead, 1984). While swift routine action can be found in online tutoring, its curiously asynchronous nature does allow more considered problem solving to take place (p.97)

Many of these ideas crystallise thoughts that I’ve come to over recent years and which I’ve shared with Peter in our supervision meetings. I’m going to choose to believe that his inner voice is saying at these points, ‘good, you’re on track’ rather than ‘well, obviously and I wrote about this a decade and a half ago’. This is why we go with this apprenticeship model I guess.

The other random thought that emerged from reading this paper was that, as we get more comfortable with using video and asking/allowing students to submit videos as assessments, we’ll need new ways to ‘read’ videos. Clearly these will already exist in the scholarhood but they may not be as widely known as we need.

Quick reading: Five papers on Academic Development (Hannon, 2008; Hicks, 2005; Boud & Brew, 2013; Lee & McWilliam, 2008; Bath & Smith, 2004)

Academic development refers to the professional development of academics – which makes sense when you think about it. Evidently I hadn’t thought about that a lot because until I skim read these five papers, I had put academic developers in the same broad (and perhaps vague) category as education designers and learning technologists. People working with teachers/academics to support teaching and learning and developing resources.

These are the papers: Hannon (2008); Hicks (2005); Boud & Brew (2013); Lee & McWilliam (2008); and Bath & Smith (2004).

I had just assumed that given that the terminology hasn’t really been settled yet (consider blended/flexible/online/technology-enhanced/e-learning), people have been using the terms that they prefer. (I’ve been toying with Director of Education Innovation as a new title but apparently that will upset the Directors of our schools, so that won’t fly).

Anyway, this was the first of a few realisations that I’ve had in the last week of trying to get my research back on track – ironically enough perhaps while I’ve been in the midst of a major academic development project of my own. (STELLAR – which will get its own post shortly).

Recognising that I need to move on to a new topic of exploration in my holistic overview of the central elements in supporting TELT practices in Higher Ed., but also feeling that I haven’t yet covered Education Support Staff (ESS) adequately, I decided to take the temperature of ESS research via five papers. (I’ve also been concerned that while the deep reading that I’ve been doing has been valuable, I’m spending too long on individual papers and chapters in the process.) I allocated a single 25 min pomodoro period to each of these new papers, including writing notes. Admittedly, I’ve decided that I still need to read four of the five papers in full and may well come back to them in the next topic anyway. (However, I changed my initially planned ‘next topic’ from Universities as Organisations to Teachers as a result of these papers and some other thinking recently, so this still feels like progress.)

In a nutshell, as I’ve been looking at research relating to education support staff over the last couple of months, I’ve probably been in my own tribal mindset. I do still believe that there are significant cultural factors at play in higher ed. that mean that knowledge and experience aren’t always appropriately used or recognised if you’re not in the academic tribe and this is an area to work on. There are also an incredibly diverse range of reasons for this, some more understandable than others. I have to admit that I’ve not been as open to the more understandable (and valid) ones as I should’ve and that empathy is always an important part of communication and collaboration.

So after this post on the matter, I’m going to take a first pass at my lit review relating to ESSes and focus on the academic/teacher side. (Ultimately, people that teach are teachers and this is the side of academics’ work that I’m looking at – it’s also a more meaningful term in this context – but I realise that terminology is perhaps more important than I thought.)

These are my quick responses to the papers that I skimmed:

This is a particularly insightful paper that uses “the discourse analytic method of ‘interpretative repertoires’ (Potter & Wetherall, 1987)” (p.15) to consider issues in academic development with a particular focus on education technology and changing teaching practices.

Hannon essentially distills the approaches into ‘enabling’ and ‘guiding’ and interviews 25 individuals working with education technology (including academics and ESSes) about their experiences in one university in this space.

He identifies four main differences in the ways that practice is organised:

  • Developing staff or developing courses (p.19)
  • Implementing or adapting institutional strategy (p.20)
  • Drawing together – systems or community (p.22)
  • Reframing technology or reframing the user (p.23)

Ultimately, Hannon finds that:

it is neither institutional strategy nor learning technologies that impose these constraints, rather the discourse or repertoires associated with their operationalisation (p.27)

I’ll certainly be coming back to this paper in the future.

Hicks looked at issues more in relation to the role of Academic Developers – and people working in Education Support units – as ‘change agents’, caught between the strategic requirements and priorities of the university executive and the needs of teachers and learners.

She felt that the voice of academic developers is seldom heard in research in this field and takes time to address this within a Bourdieuian framework emphasising social systems by inviting developers to participate in a number of focus groups.

Hicks’ paper sits well alongside most of the other papers that I have looked at already, with a focus on the tensions between academic and professional staff as well as academic staff and ‘management’ – with the ESSes torn between the two and underutilised.

This paper may be a useful source of additional supporting quotes and could also be worth reviewing when I get to university as an organisation.

David Boud is a major figure in research into Higher Education in Australia, (Angela Brew presumably is as well but it’s Boud that I’ve heard more about to date), so I was keen to read this one.

The idea of practice theory (Kemmis) is something that I keep coming across (and has also been suggested by my supervisor) and it’s at the heart of this paper. In a nutshell, it’s about framing academic work as practice and considering three key foci:

practice development, fostering learning-conducive work and deliberately locating activity within practice. It also suggests that academic development be viewed as a practice (p.208)

Given that my new area of exploration is teachers/teaching/academics, this is a timely examination of academic practice that I will absolutely be delving into in far greater depth. It also offers a nice bridge between these two areas and I think it will also help me to inform my other (professional) work.

This paper (Bath & Smith) presents a solid overview of tribalism in academia and the emergence of Higher Education as a field of study in its own right that needs to be claimed by academic developers. (I’d wonder whether an idea of “academy developers” is more fitting here).

One thing that I’ve come to realise in this sector is that trying to take on organisational cultural issues directly is unproductive, so while I’d prefer tribalism to be replaced with the embrace of a broader notion of being part of a collaborative community of scholars, I realise that it won’t happen any time soon. I guess the real questions are: do the members of a tribe respect the knowledge of another tribe, and is teaching and learning in Higher Education something that can be owned by one tribe? Perhaps something more along the lines of tribal elders – strictly in the H.E. T&L discipline area, never the ‘academy’ itself – could work?

When it comes to the role of ESS, I note that the authors quote Rowland et al (1998), which has popped up in most of these papers and is high on my list of future reading. It’s a fairly brutal quote, however.

[a]t best, they [i.e. academics] view those in these [academic development] units as providing a service to help them teach. At worst, they ignore them as lacking academic credibility and being irrelevant to the real intellectual tasks of academic life. (Rowland, Byron, Furedi, Padfield & Smyth, 1998, p.134) (p.10)

This is certainly another paper to read in full as I explore the idea of academic work and teaching.

This final paper by Lee and McWilliam leans heavily on Foucault and “games of truth and error” and a fairly specific idea of irony. It again explores the tensions that academic developers encounter in the space between executive/management priorities and teacher needs. As someone that hasn’t yet explored Foucault, I imagine it might be of value if this is the theoretical direction that I choose but for the most part I just felt that I didn’t get the joke.

Ok, so hopefully this gives me a decent starting point for writing something about the literature as it relates to education support staff. (Obviously there is always more to explore, but the best writing is the writing that you’ve actually done, and having something to show will make it easier to find the gaps – both in the ideas covered in the research and in what I’ve been reading and not reading.)

Onwards to teachers.


Thoughts on: ‘Sleeping with the enemy’: how far are you prepared to go to make a difference? A look at the divide between academic and allied staff (Wohlmuther, 2008)

At this stage of looking at the matter of professional staff and academic staff in Higher Education, I feel that I’m somewhat flogging a dead horse and everything that needs to be said, has been said. So why am I still looking at this paper? Initially I was concerned that it grated on me because it doesn’t fit with my current narrative that there are significant cultural factors in universities that make it unnecessarily difficult for professional staff – particularly those in education support roles – to be heard when it comes to discussing teaching and learning.

If this was the case, I’d clearly not be doing my best work as a scholar – open to new information and willing to reconsider my world view in the face of it. Having looked over the paper a few times now though, I have to say that I think it’s just not that great a piece of research. A number of assertions are made that simply aren’t supported by the evidence presented and some of the reasoning seems specious. Events from four years prior to the publication date are referred to in the future tense but there is no discussion of whether they happened or what the consequences were.

Assuming that this is poor research – or perhaps poor analysis – it makes me happy that I’ve reached a point where I can identify bad work, but also a little concerned that I’m wrong or missing something, because this was still published in a peer-reviewed journal that I’ve found a lot of good work in previously. (Then again, I assume that most journals have their own favoured perspectives and maybe this was well aligned with them). I searched in vain to find other writing by the author but she appears to be a ghost, with no publications or notable online presence since the paper came out.

In a nutshell: based on an anonymous online survey of 29% of all staff – academic and professional – at her institution, which included questions about demographics, perceptions of the nature of their roles, the ‘divide’ and the value of different types of staff in relation to strategic priorities, the author concludes that there is minimal dissension between academic and “allied” staff and that most of what little there is, is felt by the allied staff.

Now it’s entirely reasonable that this may well be the case, but there are a few elements of the paper that seem to undermine the author’s argument. Wohlmuther asks survey participants about their perceptions of a divide but doesn’t dig directly into attitudes towards other kinds of staff, which McInnis (1998), Dobson (2000) and Szekeres (2004) all identified as central factors. She looks at the perceptions of the contributions of academic and allied staff members to the strategic goals of the organisation, which obliquely explores their ‘value’ within the organisation, but it seems limited. Given the ambiguous value of some higher-level strategic goals (Winslett, 2016), this would seem to tell an incomplete story.

The greatest weakness of the paper to my mind is that ‘allied’ and ‘academic’ work roles are unclear.

Survey respondents were asked what percentage of their time they spent on allied work and what percentage of their time they should spend on allied work. The term ‘allied work’ was not defined. It was left to the respondent to interpret what they meant by allied work (p.330)

With no further examination of the responses via focus groups or interviews, this alone (to me anyway) seems to make the findings murky.

She found that only 29% of staff – all staff? that is unclear – felt that there was “good understanding and respect for the significance of each others roles and all staff work well together” (p.331) across the institute, yet she doesn’t take this to be an indicator of division.

Looking over the paper again, these are probably my main quibbles and perhaps they aren’t so dramatic. This tells me that I still have a way to go before I can truly ‘read’ a paper properly, but I’m on the way.