
Research update #40: Proposal writing Day 4: Edvisors and teachers

[Image: Scrivener screenshot]

I really didn’t expect to, but I’ve caught up to my schedule. It’s largely because I decided that I needed to write words rather than write well, although there is also the fact that I decided to trust my recollection of the broad ideas in the literature. Rather than painstakingly finding each citation and quote on the fly as I write, I’m going to trust that they are out there and that on my next pass I can take the time to put them in. I think this will also help because I’m building a basic scaffold that seems to be flowing nicely and which should make it easier for me to find and compartmentalise the citations and quotes. I’m also fairly confident that I’ll rediscover richer ideas that I can use to flesh out what I’ve already said. I’ll need to spend a little more time thinking about what the literature doesn’t say, how to explain that and why it matters, but it’s been nice being able to put the pedal to the metal and just let the words come out as they want to.

In a nutshell, I covered the fact that the way edvisor teams are structured and placed in institutions – centrally/college-based and also functionally – can be a barrier to effective work, particularly because of the tensions that exist between institutional and academics’ priorities. (Trying to remember that most good edvisors also have their own values conversation going on amongst all this about ensuring the best possible learning and teaching). I moved on to the relationships between academics and edvisors and noted the difference between those from academic vs professional backgrounds. I touched on disciplinary silos, the pressure on academics to be the experts in all things and the fact that many of them don’t really know what we do – or can do. This is particularly evident in the research that they write and I think this will be a rich primary source to explore when I move into the research phase.

So I guess there is going to be a little more work than I expected when I’ve written draft one but the words are coming and draft two should not be far behind at all.

Research update #39: Proposal writing Day 3: “Rest day” but with some interesting revelations nonetheless

Well it was more a day where I’d made a previous commitment to sit on an interview panel for an Education Technologist position for a friend, followed shortly afterwards by a farewell party for a friend in the same unit.

I was able to take a couple of hours to glance over some of my earliest blog posts relating to this research, which were helpful in that I could see how much my question has evolved over time but perhaps lacked a little something in terms of direct relevance to the work that I’m doing now. Fortunately, the responses to the interview questions themselves did align more closely, albeit more in terms of gaining some additional background insights.

My assigned question was along the lines of ‘what technology do you see having an impact on higher education in the next 3-5 years?’ Something calling for a little crystal ball-gazing, and my inclination was far more to give extra points to those who were unwilling to commit to specific products or brands. It was more about getting a sense of who is keeping an eye on things than getting the (impossible) right answer. Responses ranged from mobile devices (fairly likely, though parts of my institution seem mystifyingly resistant to this) to AI, AR and drones. One candidate tried valiantly to steer the conversation around to rubrics and assessment, and points for trying I guess.

The more revealing question was ‘what do you think the role of an education technologist is?’ This was interesting because these were all people who had applied for a position with a specified set of criteria, but the responses were still relatively varied. Clearly advising on and supporting the use of technology was a common theme in the responses, but from there we seemed to veer into whichever areas the candidates felt they were strongest in. Fair enough; the point of the interview is to sell yourself. This included research, production of resources and information management skills. When we asked some to expand on their answers by differentiating the technologist role from an ed designer or ed developer, things got more interesting. Before I started digging down into this field, my take was that a developer was more like a software or web developer than the more commonly used professional developer. One candidate felt that the ed dev would be building apps. Most got that the designer had more to do with course or curriculum design to varying degrees, but most also recognised that there is a lot of overlap between all of these roles, and the fact that they all had slightly different takes was good for me in that it reinforced what I’ve seen in the literature (and experienced in the day to day) about the fuzziness of most of these definitions.

I guess another interesting aspect of the interviews was in seeing where everyone had come from. We had people that had entered the field from graphic design, web and multimedia design, teaching and librarianship. For me, none of this disqualified anyone though the harsh reality is that in looking for someone able to hit the ground running, it’s hard not to favour someone with experience working with academics. How you get that experience in the first place is the real challenge I guess and I think I can probably expand a little on the pathways/entry point ideas section – though I don’t feel that there has been a lot of discussion of this in the literature that I’ve seen to date.

So while I didn’t write much and I didn’t find a whole lot in my previous note-taking blog posts, I still feel like I came away with a few more ideas.

Research update #38: Proposal writing Day 2 – more on edvisors, less on edvisors & institutions

I’m kind of just staring at the screen now with 27 different tabs open across two browsers so I guess it’s time to take a mental break at the very least. Going by my schedule, I was meant to have knocked out 750 words on the relationship between edvisors and institutions – or more precisely, I guess, institutional management/leadership. I currently have 129.

But that’s because I only wrote about 500/1000 yesterday on edvisors more broadly. I think part of my challenge is that, first draft or not, I still like to try to turn out a moderately elegant sentence that flows smoothly into the next one and advances the story or idea. What I need to do is worry less about this and just get the brutish ugly ideas down so that they might be prettied up later.

The bigger issue, though, is that I didn’t put enough time into getting all my sources, quotes and ideas into a single location before I started writing. I’ve spent enough time with the literature to know broadly what it says and how I want to bring it together, and I know I have the citations to support this, but I didn’t put them all into the notes document. They are instead scattered through this blog, Zotero and assorted stacks of paper with pencil notes scrawled all through them. The point of blogging about many of these papers was to create a searchable archive of these ideas but, with the way that the question has changed over time, the way that I have tagged these posts has not quite kept pace.

I’m still enjoying the writing and being forced to commit to particular ideas and language; I’m just slightly up in the air about whether it would be more beneficial to stop and spend the time assembling everything before I proceed, or if I should just press on, write what I can as a first draft and then come up with a much improved second draft by bringing all the stray elements together. The latter seems the way to go as I’m well versed in the fine arts of procrastination, and preparation (endless preparation) is absolutely one of my go-tos in this regard. The other advantage of just writing is that it will let me work out the structure a little better, which should make the process of searching for and gathering the quotes and citations a lot simpler.

I hit the 1000 word target for the edvisors section just before lunch but later felt that a discussion of the place of credentialing might sit better in the edvisors and institutions section. I was also a little concerned that I was discussing literature without really explaining why or what I was looking for in particular, so once more I spent a little more time than planned on that section. I had initially planned on 2000 words for my discussion of edvisors in the literature but revised this to 1000 on advice from Lina. I have a feeling that I could probably hit the 2000 without too much trouble as I dig deeper into the tensions between academic and professional edvisors.

Most of my thinking until recently revolved around the bizarre love/hate triangle between academics, institutional management/leadership and edvisors and how this impacts upon collaborative relationships. I’d kind of put aside the internal tensions both between academics and professional staff – particularly in the academic developer space where there’s a big question about where scholarly research fits into edvisor practices – and also between variously located teams within institutions. Most commonly central vs college/faculty based but there is also some toe-treading that occurs between rival disciplinary teams. The good news is that it’s all just more material to work with.

So while I’m not hitting my perhaps ambitious writing targets yet, the ideas are flowing.

 

Research update #37: Proposal writing Day 1 – Edvisors lit review

[Image: writing plan dates]

I’ve booked in two weeks leave from work to get at least a first draft of my thesis proposal together. There’s a loose structure in place and I’m all about just getting the words down at this stage. As a first draft, I’m allowing for it being relatively terrible – which is probably the hardest part because I do like the words that I use to work well together – and the plan is to have something to send off for feedback just before Christmas.

Given that I’m aiming for between 750 and 1000 words a day, I think I’ll be spending the morning pulling together the various ideas, quotes and references and doing the writing writing in the afternoon.

Today the focus is on edvisors in the literature, which isn’t as easy as I’d thought given that part of the reason for the thesis is their lack of visibility in the research. Or, more to the point, the fact that a lot of what I’ve been looking at is more closely related to where they/we sit in the institution, our relationships with institutional leadership and academics and the strategies that we do and could use to improve this. What I’m left with is more the descriptive, defining kind of work. Breaking this up into the three core role types of academic developer, education designer and learning technologist should help and there’s still plenty of time to move things around.

Mostly I just need to remember that this is the literature section, so I’m really only meant to talk about what other people have been talking about. I guess I can talk briefly about what hasn’t been discussed but that seems like a trap in some ways, as maybe it has and I just missed it. (Pretty sure this is a universal refrain among PhDers though)

If you do read this post and are aware of a strikingly significant article or book etc about the nature of edvisors (academic developers etc – I wonder how long I’m going to need to add this), please let me know.

 

SOCRMx Week #8: The End

Well I probably said all that I needed to say on my general feelings about this MOOC in my last post so this is largely for the sake of completion. The final week of this course is a peer assessed piece of writing analysing the methods used in a sample paper. Turns out that I missed the deadline to write that – I may even have been working on my Week 7 post when that deadline fell – so this appears to be the end of the road for me. I could still go through and do the work but I found the supplied paper unrelated to my research and using methodologies that I have little interest in. The overall questions raised and things to be mindful of in the assessment instructions are enough.

  • What method of analysis was used?
  • How was the chosen method of analysis appropriate to the data?
  • What other kinds of analysis might have been used?
  • How was the analysis designed? Is the design clearly described? What were its strengths and weaknesses?
  • What kind of issues or problems might one identify with the analysis?
  • What are the key findings and conclusions, and how are they justified through the chosen analysis techniques?

And so, with that, I guess I’m done with SOCRMx. In spite of my disengagement with the community, the resources and the structure really have been of a high standard and, more importantly, incredibly timely for me. As someone returning to study after some time away, without ever really having had a formal research focus, I’ve found there is a lot of assumed knowledge about research methodology, and having this opportunity to get a bird’s-eye view of the various options was ideal. I know I still have a long way to go but this has been a nice push in the right direction.

 

SOCRMx Week #7: Qualitative analysis

I’m nearly at the end of Week #8 in the Social Research Methods MOOC and while I’m still finding it informative, I’ve kind of stopped caring. The lack of community, and particularly of engagement from the teachers, has really sucked the joy out of this one for me. If the content wasn’t highly relevant, I’d have left long ago. And I’ll admit, I haven’t been posting the wonderfully detailed and thoughtful kind of posts on the forum or in the assigned work that the other 5 or so active participants have been doing, but I’ve been contributing in a way that supports my own learning. I suspect the issue is that this is being run as a formal unit in a degree program and I’m not one of those students. Maybe it’s that I chose not to fork over the money for a verified certificate. Either way, it’s been an unwelcoming experience overall. When I compare it to the MITx MOOC I did a couple of years ago on Implementing Education Technology, it’s chalk and cheese. Maybe it’s a question of having a critical mass of active participants, who knows. But as I say, at least the content has been exactly what I’ve needed at this juncture of my journey in learning to be a researcher.

This week the focus was on Qualitative Analysis, which is where I suspect I’ll be spending a good amount of my time in the future. One of my interesting realisations early on in this, though, was that I’ve already tried to ‘cross the streams’ of qual and quant analysis this year when I had my first attempt at conducting a thematic analysis of job ads for edvisors. I was trying to identify specific practices and tie them to particular job titles in an attempt to clarify what these roles were largely seen to be doing. So there was coding, because clearly not every ad was going to say research: some might say ‘stay abreast of current and emerging trends’ and others might ask the edvisor to ‘evaluate current platforms’. Whether or not that sat in “research” perfectly is a matter for discussion but I guess that’s a plus of the fuzzy nature of qualitative data, where data is more free to be about the vibe.

But then I somehow ended up applying numbers to the practices as they sat in the job ad more holistically, in an attempt to place them on a spectrum between pedagogical (1) and technological (10). This kind of worked, in that it gave me some richer data that I could use to plot the roles on a scattergraph, but I wouldn’t be confident that this methodology would stand up to great scrutiny yet. Now maybe just because I was using numbers doesn’t mean that it was quantitative, but it still feels like some kind of weird fusion of the two. And I’m sure that I’ll find any number of examples of this in practice but I haven’t seen much of it so far. I guess it was mainly nice to be able to put a name to what I’d done. To be honest, as I was initially doing it, I assumed that there was probably a name for what I was doing and appropriate academic language surrounding it; I just didn’t happen to know what that was.
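To make that coding-plus-numbers fusion a little more concrete, here is a rough sketch of the kind of process I mean. The codebook, the ads and the pedagogical/technological ratings below are entirely invented for illustration (they are not the actual scheme or data from my analysis), but the shape of it is the idea: keyword-based codes, a 1-10 rating, a scattergraph.

```python
# Hypothetical sketch only: the codebook, ads and ratings are all invented.
import matplotlib.pyplot as plt

# Codebook: code -> keywords in an ad that would attract that code
CODEBOOK = {
    "research": ["stay abreast of current and emerging trends", "evaluate current platforms"],
    "design": ["curriculum design", "learning design"],
    "support": ["training", "support academic staff"],
}

# (job title, ad text, pedagogical (1) to technological (10) rating)
ADS = [
    ("Academic Developer", "Stay abreast of current and emerging trends in curriculum design", 3),
    ("Education Designer", "Lead learning design and training for academic staff", 4),
    ("Learning Technologist", "Evaluate current platforms and provide training", 8),
]

def code_ad(text):
    """Return the set of codes whose keywords appear in the ad text."""
    text = text.lower()
    return {code for code, keywords in CODEBOOK.items()
            if any(keyword in text for keyword in keywords)}

for title, text, rating in ADS:
    print(title, sorted(code_ad(text)), rating)

# Scatter each ad by how many coded practices it attracted vs its ped-tech rating
plt.scatter([len(code_ad(text)) for _, text, _ in ADS],
            [rating for _, _, rating in ADS])
plt.xlabel("number of coded practices (hypothetical)")
plt.ylabel("pedagogical (1) to technological (10)")
plt.show()
```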

I mentioned earlier that qualitative analysis can be somewhat ‘fuzzier’ than quantitative and there was a significant chunk of discussion at the beginning of this week’s resources about that. Overall I got the feeling that there was a degree of defensiveness, with the main issue being that the language and ideas used in quantitative research are far more positivist in nature – epistemologically speaking (I totally just added that because I like that I know this now) – and are perhaps easier to justify and use to validate the data. You get cold hard figures and if you did this the right way, someone else should be able to do exactly the same thing.

An attempt to map some of those quantitative qualities to the qualitative domain was somewhat pooh-poohed because it was seen as missing the added nuance present in qualitative research, or something – it was a little unclear really, but I guess I’ll need to learn to at least talk the talk. It partly felt like tribalism or a turf war but I’m sure that there’s more to it than that. I guess it’s grounded in a fairly profoundly different way of seeing the world and particularly of seeing ‘knowing’. On the one side we have a pretty straightforward set of questions dealing with objective, measurable reality, and on the other we have people digging into perspectives and perceptions of that reality and questioning whether we can ever know or say if any of them are absolutely right.

Long story short, there’s probably much more contextualisation/framing involved in the way you analyse qual data and how you share the story that you think it tells. Your own perceptions and how they may have shaped this story also play a far more substantial part. The processes that you undertook – including member checking, asking your subject to evaluate your analysis of their interview/etc to ensure that your take reflects theirs – also play a significant role in making your work defensible.

The section on coding seemed particularly relevant so I’ll quote that directly:

Codes, in qualitative data analysis, are tags that are applied to sections of data. Often done using qualitative data analysis software such as Nvivo or Dedoose.

Codes can overlap, and a section of an interview transcript (for example) can be labeled with more than one code. A code is usually a keyword or words that represent the content of the section in some way: a concept, an emotion, a type of language use (like a metaphor), a theme.

Coding is always, inevitably, an interpretive process, and the researcher has to decide what is relevant, what constitutes a theme and how it connects to relevant ideas or theories, and discuss their implications.

Here’s an example provided by Jen Ross, of a list of codes for a project of hers about online reflective practice in higher education. These codes all relate to the idea of reflection as “discipline” – a core idea in the research:

  • academic discourse
  • developing boundaries
  • ensuring standards
  • flexibility
  • habit
  • how professionals practice
  • institutional factors
  • self assessment

Jen says: These codes, like many in qualitative projects, emerged and were refined during the process of reading the data closely. However, as the codes emerged, I also used the theoretical concepts I was working with to organise and categorise them. The overall theme of “discipline”, therefore, came from a combination of the data and the theory.

https://courses.edx.org/courses/course-v1:EdinburghX+SOCRMx+3T2017/courseware/f41baffef9c14ff488165814baeffdbb/23bec3f689e24100964f23aa3ca6ee03/?child=last

I already mentioned that I undertook a thematic analysis of a range of job ads, which could be considered “across-case” coding. This is in comparison to “within-case” coding, where one undertakes narrative analysis by digging down into one particular resource or story. This involves “tagging each part of the narrative to show how it unfolds, or coding certain kinds of language use”, while thematic analysis is about coding common elements that emerge while looking at many things. In the practical exercise – I didn’t do it because time is getting away from me but I read the blog posts of those who did – a repeated observation was that in this thematic analysis, they would often create/discover a new code halfway through and then have to go back to the start to see if and where it appeared in the preceding resources.

On a side note, the practical activity did look quite interesting: it involved looking over a collection of hypothetical future reflections from school leavers in the UK in the late 1970s. They were asked to write a brief story from the perspective of themselves 40 years in the future, on the cusp of retirement, describing the life they had lived. Purely as a snapshot into the past, it is really worth a look for a revealing exploration of how some people saw life and success back in the day. Most of the stories are only a paragraph or two.

https://discover.ukdataservice.ac.uk/QualiBank/?f=CollectionTitle_School%20Leavers%20Study

And once again, there were a bunch of useful-looking resources for further reading about qualitative analysis:

  • Baptiste, I. (2001). Qualitative Data Analysis: Common Phases, Strategic Differences. Forum: Qualitative Social Research, 2/3. http://www.qualitative-research.net/index.php/fqs/article/view/917/2002
  • Markham, A. (2017). Reflexivity for interpretive researchers http://annettemarkham.com/2017/02/reflexivity-for-interpretive-researchers/
  • ModU (2016). How to Know You Are Coding Correctly: Qualitative Research Methods. Duke University’s Social Science Research Unit. https://www.youtube.com/watch?v=iL7Ww5kpnIM
  • Riessman, C.K. (2008). ‘Thematic Analysis’ [Chapter 3 preview] in Narrative Methods for the Human Sciences. SAGE Publishing https://uk.sagepub.com/en-gb/eur/narrative-methods-for-the-human-sciences/book226139#preview Sage Research Methods Database
  • Sandelowski, M. and Barroso, J. (2002). Reading Qualitative Studies. International Journal of Qualitative Methods, 1/1. https://journals.library.ualberta.ca/ijqm/index.php/IJQM/article/view/4615
  • Samsi, K. (2012). Critical appraisal of qualitative research. Kings College London. https://www.kcl.ac.uk/sspp/policy-institute/scwru/pubs/2012/conf/samsi26jul12.pdf
  • Taylor, C and Gibbs, G R (2010) How and what to code. Online QDA Web Site, http://onlineqda.hud.ac.uk/Intro_QDA/how_what_to_code.php
  • Trochim, W. (2006). Qualitative Validity. https://www.socialresearchmethods.net/kb/qualval.php

Research update #36: Playing well with others

The nature of my research topic, with a focus on the status of professional staff in an academic world, feels risky at times. While I know that academic staff occupy edvisor roles as well, I have a feeling that I’ll be digging into sensitive areas around the academic/professional divide that often seem to be swept under the carpet because they raise uncomfortable questions about privilege and class in the academy and some entrenched beliefs about what makes academics special. It would be incredibly presumptuous for me to think that my ideas are all necessarily right, and the point of research is to put them to the test and see where they take me, but there’s a fair chance that some of what I’m going to have to say won’t always be well received by some of the people that I work with and who pay me. The other big issue is whether, if my findings demonstrate a blind spot among academics when it comes to professional staff, those same academics responsible for assessing my research will see the value in my work.

Fortunately, at this stage I don’t have my heart set on a career as an academic – I really do like doing what I do – but it seems imprudent to cut off one’s options prematurely. I am conscious that I need to be more researcherly or scholarly in the language that I use in this space. I sent out a slightly provocative tweet yesterday, prompted by a separate (joke) tweet that I saw which said that the fastest way to assemble a bibliography was to publicly bemoan the lack of research in topic x.

After 36 hours I’ve had no literature recommended but a university Pro Vice-Chancellor replied suggesting a collaboration on this area of mutual interest. Which surprised and flattered me greatly, considering that I was concerned that I’d come across as a little bolshie in my questions. Maybe it’s wrong of me to see academics as some kind of monolithic whole.

Maybe the trick is to just worry less and be honest. You can’t please everyone and if you can stand behind your work, maybe that’s enough.

I’m not sure. We seem to live in incredibly sensitive times.

 

 

Week #6 SOCRMx – Quantitative analysis

This section of the SOCRMx MOOC offers a fair introduction to statistics and the analysis of quantitative data. At least, enough to get a grasp on what is needed to get meaningful data and what it looks like when statistics are misused or misrepresented. (This bit in particular should be a core unit in the mandatory media and information literacy training that everyone has to take in my imaginary ideal world)

The more I think about my research, the more likely I think it is to be primarily qualitative but I can still see the value in proper methodology for processing the quant data that will help to contextualise the rest. I took some scattered notes that I’ll leave here to refer back to down the road.

Good books to consider – Charles Wheelan: Naked Statistics: Stripping the dread from data (2014) & Daniel Levitin: A Field Guide to Lies and Statistics: A Neuroscientist on How to Make Sense of a Complex World (2016)

Mean / Median / Mode

Mean – straightforward average.

Median – put all the results in a line and choose the one in the middle. (Better for average incomes as high-earners distort the figures)

Mode – the value (or bucket) that occurs most often; which section has the most hits in it
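A quick, throwaway sketch with Python’s statistics module (the income figures are made up) shows the difference, and why the median is the safer bet when one big value skews things:

```python
import statistics

# Made-up income figures; the single high earner drags the mean upwards
incomes = [42_000, 45_000, 45_000, 48_000, 52_000, 250_000]

print(statistics.mean(incomes))    # 80333.33... (distorted by the outlier)
print(statistics.median(incomes))  # 46500.0 (the middle of the ordered values)
print(statistics.mode(incomes))    # 45000 (the most common value)
```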

Student’s T-Test – a method for interpreting what can be extrapolated from a small sample of data. It is the primary way to understand the likely error of an estimate depending on your sample size.

It is the source of the concept of “statistical significance.”

A P-value is a probability. It is a measure summarizing the incompatibility between a particular set of data and a proposed model for the data (the null hypothesis). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5366529/

“a significance level is an indication of the probability of an observed result occurring by chance under the null hypothesis; so the more you repeat an experiment, the higher the probability you will see a statistically significant result.”
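A minimal sketch of where a p-value actually comes from, using SciPy’s two-sample t-test on some invented numbers; the p < 0.05 threshold is just the common convention rather than anything magical:

```python
from scipy import stats

# Invented measurements for two small groups
group_a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8]
group_b = [5.6, 5.9, 5.7, 6.0, 5.8, 5.5]

result = stats.ttest_ind(group_a, group_b)
print(result.statistic, result.pvalue)  # p < 0.05 is conventionally read as "significant"
```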

Overall this entire domain is one where I think I’m only really going to appreciate the core concepts when I have a specific need for it. The idea of a distribution curve where the mean of all data points represents the high point and standard deviations (determined by a formula) show us the majority of the other data points seems potentially useful but, again, until I can practically apply it to a problem, just tantalisingly beyond my grasp.
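Still, to at least make the bell curve idea tangible for future me, here is a small sketch (with an arbitrary, made-up mean and standard deviation) that draws a pile of samples from a normal distribution and checks the familiar rule of thumb that roughly 68% of values fall within one standard deviation of the mean and roughly 95% within two:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=100, scale=15, size=100_000)  # mean 100, standard deviation 15

mean, sd = samples.mean(), samples.std()
print(np.mean(np.abs(samples - mean) < 1 * sd))  # roughly 0.68
print(np.mean(np.abs(samples - mean) < 2 * sd))  # roughly 0.95
```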

Thoughts on: Agency and stewardship in academic development: the problem of speaking truth to power (Peseta, 2014)

In some ways this is a ‘thoughts on thoughts on’ as I’m writing about Tai Peseta’s summary reflection at the end of a special issue of the International Journal of Academic Development focusing on the politics of academic development. Specifically, it asked writers to respond to this theme:

amid the array of contested and politically difficult agendas, how do academic developers enact and imagine a future for themselves (and the profession) in ways that recognise and take seriously the business of their own political power, and in particular, their responsibility to speak truth to power (p.65)

I’ve been going to IJAD a lot in my reading because, of those that I consider to be the three main edvisor roles – academic developer, education designer and learning technologist – it is academic developers that appear to dominate the research space. Which does make me wonder whether it is a role that is more dominated by people in academic (rather than professional) positions than the other two. Something I’ll be keeping an eye on.

The more time I spend looking at this particular role-type, the more I’m seeing the terms academic and educational developer used interchangeably, which doesn’t help my current line of thinking about education designers/developers primarily as people working with academics to design and build learning resources and online course sites. However it does fortunately still work with my other ideas that titles in the edvisor domain are all over the shop.

Anyway, much of this is by the by. Peseta elegantly ties together the core ideas of five papers about academic developer practice across Europe, Canada and Australia into a wider discussion about how much power or influence ADs can or should exert in their institutions. The broad tone is that this power is far more than I have personally seen but she does note that there can often be a tendency in these kinds of papers to be slightly celebratory and overstate things. 

A second reading however is that while the collaboration portrayed in this account contains all the hallmarks of a cautious victory narrative, there remains an underlying question about the possible kinds of representation of academic development initiatives. In reflecting on our modes of justification, I find myself asking who is offering this story? How is the discursive field organised to enable this particular account of it? My goal is not to be cynical but rather to open up the spaces and meanings that illustrate the spectacle of academic development’s political power (p.67)

This mention of cynicism in particular brings me to what I found to be one of the most interesting parts of the author’s reflection. I must confess that in working in an environment where cynicism seemingly abounds, it is easy to travel down the same path. When mystifying decisions are handed down from on high with minimal or laughable consultation, information is fearfully hoarded by people that lack the capacity to use it well and there is a generally pervasive belief that most people don’t care about teaching and learning (vs research), it can seem like a natural progression to simply go with the cynical flow. Fortunately my job leads me more often than not to those people who do care about education and who are capable, so this at least tempers those inclinations.

It was revealing to see today in the results of the National Tertiary Education Union survey of 13,500 university workers that only 27% expressed confidence in the people who run their various institutions. Sadly, cynicism is clearly the dominant culture. When we get to this state, I suspect that our ability to understand and empathise with the people that we work with suffers, and the cycle only worsens. Peseta discusses the Polish study in this issue where educational reform leaders described three institutional responses to change and characterised academics variously as:

…traditionalists, individualists, unaware, in pain, irrational, lazy, or inert. Each of these three logics permeates the policies of academic development in different ways with different reasons and leads to any number of reactions about the merits of institutional initiatives: pernicious, naive, neutral, welcome, celebratory and necessary. What is to be (or has been) our response to the contradictory reactions about our work as academic developers? What conceptual tools are at our disposal to understand the origins of these perceptions and to see arguments about them as a necessary part of an academic developer’s political repertoire. (p.67-68) 

 

There are some big ideas to unpack in this. The educational reform leaders in this study may well be right in their summary of many of the academics that they have tried to work with but they may equally have misunderstood what has led to these behaviours. They may be grossly oversimplifying the nature of their academics, which is a human thing to do when we find ourselves in opposition to someone who doesn’t share our vision. Their rejection of this vision then calls our own abilities into question and so rather than interrogate those, it’s far more comforting to attribute resistance to lesser personal qualities. (Which isn’t to say that they can’t be present as well, just to complicate matters).

At the heart of these issues (for ADs) I would suggest is the triangular relationship between institutional management, academics and academic developers. ADs are routinely forced into a position where they are tasked with effectively driving compliance to institutional policies and initiatives by offering training in ‘doing things the new/right way’ or trying to advocate best practices to the powers that be. This, to me, seems to be the issue of where and whether ADs should assert their political power. When things take the former route

Too heavy an emphasis on compliance without critical engagement leads to dull, bureaucratic box-ticking, and effectively hollows out academic development of its intellectual contribution. Similarly, accepting and lamenting resistance without considered debate or challenges entrenches tradition unthinkingly. Although both positions are productive and necessary for academic development to flourish as a critical encounter, they each contain an uneasy energy characteristic of Di Napoli’s (2014) agonistic spaces. Yet it is in precisely these spaces that academic developers realise and grasp the power they have to form and practise their judgement, developing a feel for the game and what it means to be in it. In these spaces, the question which usually lurks is ‘what do I do with the power and influence I have?’ (p.66)

This is also perhaps where Peseta and I diverge a little – and I’ll readily accept that my experience in Higher Ed is limited to one institution – but, as a professional staff member, I’ve never had a feeling of any political power. This may simply be a reflection of my particular context or my lack of experience in politicking, and the fact that the author and most of the authors of the papers in the special issue do feel that they have some degree of power has to make me wonder if ‘it’s not you, it’s me’. So this in itself has been something of a breakthrough in some ways and is giving me a lot to consider.

The author and the authors of the papers in the special issue spell out a number of strategic approaches to developing and exercising their power that are worth exploring. Many of them seem highly valuable but a handful I’d question.

From them we learn something about how teaching and learning issues unfold into urgent institutional problems; we develop an insight into the different ways academic developers read the rhythms of their contexts, draw on research, assemble arguments, and galvanise people and resources to reformulate and address the challenges before them. Most importantly, we get a sense of how a particular course of action is justified and argued for over others (p.67)

This to me positions ADs as providers of frank and fearless advice that draws on scholarly practices that senior academics and institutional management (generally the same thing) are more likely to respond to. It puts advocacy front and centre (alongside research) as a key practice of ADs. This is something that I’ve rarely seen specifically listed in job advertisements and position descriptions for these kinds of roles, although maybe it sits under ‘advise’. This certainly lends weight to my feeling that Peseta and the other authors largely see AD roles as being occupied by academics. This is extended in the discussion of the Norwegian paper

… we are privy to the insights of a very experienced group of academic developers and this shows in several ways: in their description of the political context and their participation in it; in their deployment of expertise (institutional know-how and educational research); their sense of what to argue for and what to withdraw from; and more generally, in the way they understand the possibilities and limits of academic development (through their choice of a sense-making framework: discursive institutionalism). This piece really shines when the sense-making apparatus kicks in: levels of ideas (policy, programme and philosophy); types of discourses (coordinative and communicative); and types of ideas (cognitive and normative)… It seems to me that one of the compelling lessons from this paper is about inducting academic developers into the scholarship of the field as an opportunity to debate and defend a set of views about higher education (p.68) (emphasis mine)

This quote leaves me a little unclear as to whether Peseta is suggesting that ADs should be inducted into the scholarship of the discipline being taught or into broader scholarship about teaching and learning. (That’ll teach me to only read a summary of a paper and not the paper itself. Fear not, it’s on the long list). One question or idea that has come up a number of times in discussions within the TELedvisor community is whether academics need to better understand what edvisors do, but I can see a strong case for going the other way. (Even when we assume that we know). If it is about delving into disciplinary scholarship (e.g. microeconomics) I’m less convinced, as much for the sheer feasibility of it all as anything. Maybe being able to ask questions about approaches to teaching and learning that align better with disciplinary practices and scholarship is a practical middle ground.

Moving on to the study in the special issue by Debowski, Peseta notes a different strategic approach being taken by Australian ADs.

We find an Australian academic development scene keen on a model of partnership with its political allies: from external quality agencies to teaching and learning funding bodies. The politicisation is plausible enough but the distributed nature of the political game carries noteworthy and worrying epistemological effects. The first is that the job of academic development shifts to one of ‘translation’ and ‘implementation’, suggesting in part that the intellectual puzzles of learning and teaching in higher education have more or less been settled. Moreover the thorny and substantial issue of what (and whose) knowledge is being ‘translated’ and ‘implemented’ is left unattended. A second effect of tying oneself too closely to the external political game is that it can divert attention away from a commitment to the project of knowledge-making. (p.68)

Part of me has to wonder whether this different approach – between Norway and Australia – is reflective of national cultural characteristics or if it is simply a matter of the specific examples being examined. If my feeling that ADs don’t carry a lot of power in Australia holds more widely, it would make more sense to lean on other authorities to help get things done.

Peseta draws her reflection to a close by reasonably asking

whether academic developers are eager to imagine themselves in the role of steward, where there is a job to be done in caring for the field – its history, ethics and politics – in ways that are future looking. It does seem to me that a condition of scholarship lies in academic developers’ disposition to scholarliness and scholarship, as well as a desire to know and immerse themselves in the peculiarities that comprise the field. If we are to better support academic developers in navigating the messy politics of the agency game, then we need more occasions to dispute, debate and deliberate on what it is that we offer learning and teaching in higher education. We need occasions to test our politics with others in and outside of the field. (p.69)

I would love to see this happening but having had a taste of institutional and academic culture where this absolutely does not happen, I can completely understand ADs wanting this but choosing to spare themselves from banging their heads against a brick wall. (And I thought I was going to be less cynical in this post). Maybe banging our heads against walls is a necessary part of a practice though.

I’ll wrap this post up with one more quote that I want to include but couldn’t find a way to fit into the discussion. I’ll certainly be reading more of this special issue as it clearly speaks directly to my research and hopefully I can also use it to spark wider discussion in the TELedvisor community.

What feels fresh and thrilling to me is that the lens of political ontology unlocks two important aspects of the work. First, it draws attention to the matter of justificatory politics, inviting us to interrupt the discourses that structure the accounts of our work as academic developers. While institutional capture provides academic development with much sought-after leverage and profile, it has the uncanny effect too of infantilising academic developers’ professional imagination such that our identities, values and actions can appear to outsiders as inseparable from what an institution requires. Second, the focus on ontology locates these interruptions as individual and collective acts of political agency, inciting us to lead more public conversations about our values at exactly the time when higher education’s purpose has multiplied. Without these conversations, there may be a temptation to position academic developers as flexible and enterprising operators advocating on behalf of greedy institutions (Sullivan, 2003) regardless of their own professional and personal values. Many of us would baulk at this suggestion while reflecting on its distinct likelihood (p.66)

No punches pulled there.

 

Research update #35 – Writing like a proper academic

My writing style in this blog is intended to be conversational and focused on using the act of writing to help me to give form to my ideas. So sometimes it can be insightful and sometimes it can be somewhat more rambling. I’ve been very conscious the whole way through that this is not the style that I will need to employ when I’m actually writing my thesis.

Interestingly (perhaps) I had a bit of a mental to-and-fro in that last sentence between using ’employ’ or ‘use’. Nine times out of ten I would’ve gone with ‘use’, as I believe in simple and concise language but maybe because I’m thinking about how I will need to write in the future, I went with the more formal ’employ’. Or maybe the rhythm of the words worked better with ’employ’ as there is something strangely musical in language that seems important when I write. Anyway, I did mention that I can sometimes be rambly.

This self-consciousness about my writing style has risen up a little lately as I’ve been reading some of the blog posts of my SOCRMx colleagues. Many of them are doing the MOOC for course credit, so it could simply be that they are writing as they believe they are expected to or perhaps have gotten into the habit of doing, but it is still a style that I feel somewhat removed from.

Which is why I was happy to come across this post from one of my two favourite PhD gurus, Inger “Thesis Whisperer” Mewburn. With a title like “Academic writing is like a painful upper-class dinner party” you can probably work out where she is going with it. In a nutshell, her argument is that to be taken seriously in academia, you need to write like an “uptight white person”.

Meaning essentially that caution, nuance and form rule the day, with the choice of words offering worlds of hidden meaning about your actual, never to be expressed feelings. Using ‘assert’ rather than ‘argue’ is effectively a headbutt to the credibility of the author that you are discussing as it suggests that they are incapable of rationally supporting their idea and instead need to resort to an appeal to authority to make their point. (I have a feeling that I’ve probably used ‘assert’ at some point when I simply felt that I’d been overusing ‘argue’ so I’ll be paying particular attention here)

All of which brings me back to something that I’ve previously reflected on here, which is that your reader’s, and more importantly your reviewer’s and assessor’s, personal tastes can carry far more importance in how your work is received than your ideas. I can appreciate that forms of communication evolve over time and become significant because they demonstrate an understanding of certain key concepts of scholarship, but overall I find it a shame that vital ideas might be disregarded because they aren’t expressed in the appropriate fashion. A few commenters at the end of the post were outraged that Inger was reinforcing this dominant paradigm and vowed never to buy her book, but I think they missed the point. Inger was talking about what is and they are focused on what should be. Her core idea was that communication should still be clear and accessible where possible, but that it will be read in particular ways by an audience and it is important to be mindful of how that audience reads if you want to communicate with them.

She also includes a link to an incredibly handy verb cheat sheet divided by whether you think the work that you are describing is awesome, neutral or poor. She makes the point that this is written for research in her domain – part social sciences and part education – and people need to find their own but given that her domain is mine, I’m pretty happy to have it as a starting point.

Thanks Thesis Whisperer