Categories
PhD reflection research

Thoughts on: “What should count as Education Research: Notes towards a new paradigm” (Anyon, 2006)

I’m on my weekly bus trip to Sydney – somewhere between 3.5 and 4 hours – to take a workshop on Thesis Proposal Writing (and also to get to know my scholarly colleagues), so it seems like a good time to do some reading and jot down some ideas. (The super chatty backpackers of last week are gone and the bus is basically a big moving quiet library – with wifi, which is great in itself.)

So I’ve diligently downloaded some of the recommended readings – in this case 

Ladson-Billings, G., & Tate, W. F. (2014). Education Research in the Public Interest: Social Justice, Action, and Policy. Teachers College Press.
– and I start reading. Very quickly I realise that while it is an interesting enough chapter, focussing on the need for bigger picture research into the social and political contexts that surround the success or otherwise of education reform in “urban” American schools, it’s pretty well irrelevant to my own research.

This at least leads me to a few thoughts and ideas for TELT practices.

When teachers provide optional readings, it would be great if there was an option to

  1. tag them (ideally by both the teacher and student)
  2. support student recommendations/ratings
  3. directly include in-line options for commenting

It would also be valuable if teachers (while I’m focussing on Higher Ed, I think I’ll go with the term teachers instead of lecturers/academics for now) provided a short abstract or even just a basic description.
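
To make that wish list a little more concrete, here’s a rough sketch of the kind of record such a reading list might keep. This is purely my own invention (in Python, not a feature of any particular LMS), but it captures the tags, ratings, comments and short description mentioned above:

```python
# A rough sketch (my own invention, not any particular LMS) of the data a
# course reading list might track to support tagging, ratings, in-line
# comments and a short teacher-supplied description.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Comment:
    author: str   # "teacher" or a student identifier
    text: str


@dataclass
class Reading:
    citation: str                                 # e.g. an APA-style reference
    description: str = ""                         # short abstract from the teacher
    tags: set = field(default_factory=set)        # added by teacher and students
    ratings: list = field(default_factory=list)   # student ratings, 1-5
    comments: list = field(default_factory=list)  # in-line comments

    def average_rating(self):
        """Mean student rating, or None if nobody has rated it yet."""
        return round(mean(self.ratings), 1) if self.ratings else None


# Example: a student tags, rates and comments on an optional reading.
anyon = Reading(citation="Anyon (2006). What should count as education research.")
anyon.tags.update({"policy", "urban schools", "optional"})
anyon.ratings.append(3)
anyon.comments.append(Comment("student", "Interesting, but not very relevant to TELT."))
print(anyon.average_rating())  # 3
```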

This got me thinking further about the informal student recommendation/rating systems that are currently in use and what we need to learn from them. Students at my university, the ANU – I guess I need to add a disclaimer on this blog about all opinions etc being my own and I don’t speak for the ANU – have created a lively Facebook space where they share information and opinions (and cat/possum pictures). These discussions often include questions about which are good (easy) courses or what lecturer x is like. I suspect that the nature of these communities – particularly the student ownership – makes officially sanctioned groups/pages less appealing, so it isn’t necessarily a matter of aping these practices, rather looking for opportunities to learn from them in our TELT practices.

My own supervisor has written about the student experience of TELT practices – I’ll be curious to see whether this question is addressed. (Reading that book is high on my list, I’m just trying to get my head around what it means to be a PhD student and researcher currently so this is the leaning of my reading to date)

The chapter does finish with a quote that I did find relevant though:

“Most educational research seeks to provide guidance into how to alter existing policies or practices deemed problematic, but the extent to which research findings effect change is small. The impotence of most research to alter established policy and practice is well recognized.”

So even when it doesn’t appear that a reading is going to be of value, I guess it can still trigger other ideas and offer more universal thoughts.

Post Script: Just looking at the citation above, it’s clear that I need to get a better grasp of how to use Zotero in the browser. Any and all advice most welcome.

Categories
cognition communication emotion

Inspired by Alda

[Image: ticket for the Alan Alda talk]

Easily one of my favourite things about working at a university is the rich range of speakers that come to share ideas with us. This week alone we have presentations for International Women’s Day and lectures on vote buying in Indonesia, public-private partnerships in infrastructure, poverty alleviation in Brazil and Argentina, the Paris climate talks, the 2016 Defence White Paper and the fertility of Aboriginal and Torres Strait Islander Australians.

Last night we also had Alan Alda talking about science communication. He was amazing.

This isn’t something that I knew about him before, but it is a long-standing passion of his. He is the co-founder of the Alan Alda Center for Communicating Science at Stony Brook University and hosted a TV series – Scientific American Frontiers – interviewing scientists around the world for more than a decade.

Funnily enough, I suspect that, like many of the 1300+ people in the audience, his science communication work wasn’t my primary reason for going to the talk (though it did seem interesting in itself). Whether for his performances as Hawkeye in M*A*S*H, Sen. Arnold Vinick in The West Wing or most recently Pete in Louis CK’s Horace and Pete, Alda is an astounding actor and communicator and has won over many fans in his long career.

While Alda spoke directly about science communication, it was clear to me that everything he said could just as easily be applied to teaching practice, particularly in higher ed where there can be a tendency to get caught up in highly complex and dry technical language. (Which isn’t to say that this isn’t needed or that academics and scholars don’t need common specific terminology to communicate sophisticated content, more that particularly when introducing new concepts, it can be helpful to think about other cognitive processes that aid in learning)

In a nutshell, what I took away from the presentation was:

  • It’s ok to use plain English to explain concepts that the audience (student) isn’t familiar with
  • People retain information far better when it is attached to an emotion that they have experienced in receiving it.
  • Presenting your information as a narrative with a degree of showmanship will enhance engagement.
  • When you know too much about something, it can be easy to forget how to see it from the perspective of a novice (and adjust your explanation accordingly)

Alda illustrated all of these core ideas with stories and demonstrations that were exciting (a desperate rush for emergency surgery in Chile), disgusting (children thrown in a river in medieval times to ensure that public events stayed in public memory), amusing (an exercise in getting the audience to guess a song by having someone tap it out on the lectern) and truly sad (doomed lovers doing a heartbeat experiment)

Much of what he had to say resonated deeply with many ideas related to cognition and learning over the years that have sparked my interest in scenarios, game based learning and gamification. While he didn’t drill down into which researcher showed what, there is a wealth of research out there that has demonstrated the value of the emotional and personal connections that presenters/scientists/teachers can add to their teaching practices to make them resonate more with an audience.

When asked which areas of science have the biggest problems with this, he made the point that what the anti-vaccination campaigners have on their side (as far as persuasion goes) is the emotion and the intimacy of their personal stories. I have no idea how to counter this, but I think he’s right.

There was also an additional point raised (timely on International Women’s Day) about how women in science sometimes feel that they have to present a more dispassionate and impersonal face to their audiences to avoid the stereotypes of “emotional” women. Again, no solutions but an interesting point.

The Q&A component of the talk was filmed and here it is

Categories
PhD proposal question research Uncategorized

Thoughts on: A general framework for developing proposals – Developing Effective Research Proposals. (Punch, 2000)

[Image: book cover of Developing Effective Research Proposals]

Writing in this format for gathering my thoughts and collecting useful quotes and ideas from articles/books/etc proved fairly useful to me while completing my Masters so I figured that I’d give it a shot here now.

(Actually it’s funny now going back to that old blog, as the final post was an overview of my thoughts about doing a research methodology subject, which seemed utterly redundant given it was the final subject in the degree and not an area that I felt I would be likely to spend any further time on.)

Anyway, while I thought the first of these posts would relate to Paul Trowler’s mini-book on “Doing Insider Research in Universities”, I’m still working my way through (and enjoying) that and in the meantime was given Chapter 3 of Punch’s book about research proposals to read at the first of the Thesis Proposal Writing Workshop sessions offered by USyd ESW. (Homework, who knew?)

Punch offers a pragmatic and seemingly reasonable (based on my limited knowledge) approach to framing a research proposal. He readily acknowledges that there can be no single perfect approach but more a broad set of guiding principles that should enable one to hone one’s area of research interest down to specific and measurable data collection questions. (This isn’t to say that it won’t be a cyclical, iterative process with some potential dead-ends but ultimately it should result in a product that is “neat, well-structured and easy to follow”)

Here are some of the key points that I took from the chapter:

  • Three key questions at the heart of the proposal – What, Why and How (how coming later and including when, who and where – i.e. the methodology)
  • Why is important – the justification for the research and will often merit multiple sections
  • Logical flow from research area -> research topic -> general research questions -> specific research questions -> data collection questions

Possible examples:
Research area: youth suicide
Research topic: factors associated with the incidence of youth suicide
General research question: “What is the relationship between family background factors and the incidence of youth suicide?”
Specific research question: “What is the relationship between family income and the incidence of youth suicide?”

The point is to move toward questions that can be directly asked and answered.
“Is it clear what data will be required to answer this question?”
The answers to the general questions are the sum and synthesis of the more specific questions.

Punch prefers the term “indicators” to “factors” (which I have been tending to use to date) because “of its wide applicability across different types of research. It applies in quantitative and qualitative contexts, whereas the terms ‘dimensions’, ‘factors’ and ‘components’ have more quantitative connotations.”

He also makes the point that the more well-considered the research questions are, the more they suggest the types of data that are necessary to answer them. “It is the idea behind the expression that ‘a question well asked is a question half-answered.'”

Punch goes on to point out that “should” questions (e.g. Should nurses wear white uniforms?) are unduly complex and require a lot of unpacking to answer. (Who’s to judge “should”?)
A more productive question might be “Do nurses think they should wear white uniforms?” – to which I would add maybe “Why do nurses think they should wear white uniforms?” – which perhaps gets more complicated but can still form a reasonable question to a nurse.

In broad terms, Punch then reiterates the importance of being clear on the what and the why of the research before moving on to methodology. There is some interesting discussion of the value of hypotheses in relation to the research questions – though at this stage I don’t think they will be relevant to my research – which links to aligning theory to the research questions.

Some reflections and questions raised.

At this early stage I’ve been concerned about my lack of a strong research background in terms of knowing what kind of methodology I plan to use. Many of my peers seem to have already mapped out the next 3-6 years and I’m still trying to figure out what I really want/need to find out.

This chapter has reminded me that figuring out the what and why – which I’ve made a modest start on in my mind at least – is vital in informing the next steps in the research.

It has also sparked a few random ideas and questions for me to pursue, which feels like a win.

Why don’t more people use TELT practices in Higher Ed / Adult Ed?
Is the learning technologist a factor? Where do we sit? Within the organisation, or separately?
(There’s some crossover with pedagogy maybe. Also compliance and innovation)
How do these factors interrelate?

What if I start out by thinking there is a gap in the literature and there actually isn’t?
What’s the difference between a learning practice and a teaching practice?
Which factors (or sets of factors) impact TELT practices and how do they interrelate?
What actions are needed at what levels & contexts to mitigate the barrier factors?

Just finally, I’ve also decided on some tools to start my documenting process – Zotero and Scrivener (probably worthy of posts in their own right). The following bibliographic entry comes from the Zotpress plugin for WordPress and seems to have done a nice job in preview. (I do need to find out what the “official” citation style is. Currently I’m going with APA because I like it.)

Punch, K. F. (2000). Developing effective research proposals (pp. 21–33). SAGE Publications Ltd.
Categories
ed tech edtech education PhD teaching

Starting a PhD

[Photo: me, 1 March 2016, 5:21 PM]

This is me, today, Tuesday the 1st of March 2016. This is the day that I officially start my PhD studies (is it studies or research?) with the Faculty of Education and Social Work at the University of Sydney.

Surprisingly enough, the exact topic is a work in progress, but broadly I will be looking into Technology Enhanced Learning and Teaching (TELT) practices in Higher Education, the factors that influence them and ways to better support them. My supervisor is Peter Goodyear and my associate supervisor is Lina Markauskaite, both decent-seeming people who have done a lot of respected work in this and related areas.

So why am I doing it?

This is the make-or-break question I suspect. The thing that will ultimately determine whether or not I finish. Happily I think my reasons are solid.

I want to know more about this field and I want to be better at my job as a learning technologist. (I used to mock the pretension of that title but it’s grown on me). I don’t necessarily aspire to a job in academia but I do think that this will help me professionally whichever path I do end up taking.

I see the questions that I have around this field as a puzzle and one which deserves to be solved. I think that technology can be better employed in adult education to create deeper and more meaningful learning experiences for students and it disappoints me that I don’t see this happening more regularly. I’d like to better understand what factors shape TELT practices in higher education and see what can be done to better support them.

I’m grateful for the opportunity that I’ve been given in being taken on as a student. I haven’t followed the more conventional academic path to get here in terms of research based study and there is certainly some catching up to do but this just makes me more determined to succeed.

The word “scholar” was mentioned a few times last week when I attended the HDR (Higher Degree by Research) induction session and while for some reason it evokes images of 12th Century monks painstakingly writing on parchment by candlelight in a dim cell, it feels special to be a (tiny) part of this history.

I should probably go read something now. (Though surely I’ve earned a break – see, proud scholar already)

Categories
ePortfolio mahara Uncategorized

Trying out Mahara – an ePortfolio adventure

Managing your online identity – particularly your professional identity – is arguably now one of the core digital literacies. This is why I’ve long taken an interest in the use of ePortfolio tools.

As the “Education Innovation Office” in a college at a major Australian university, it’s my job to keep us moving forward and to find the best ways to do so. I’d previously dabbled with ePortfolios before coming here but had never really used them for a project and thus had no direct evidence to support a case that they are worth pursuing.

A few months ago I came across CMALT – Certified Member of the Association for Learning Technology – a useful-seeming accreditation and a connection to a community of practice of my peers. The application process lends itself very well to the use of ePortfolios, so I decided to take the Mahara ePortfolio platform out for a spin.

Our IT/Web team was kind enough to install an instance on a local server – a Windows server, sadly (more on that later) – and off I went.

Using Mahara 

Mahara enables users to curate a range of files, text, links and other resources into highly customisable pages. A range of page layouts are available and content can easily be dragged and dropped onto the page.

[Screenshot: editing content in Mahara]

These pages can then be gathered into collections and private URLs generated so that the user is able to choose which pages are shared with whom.

In terms of ease of use, so far so good. My biggest concern at this stage was finding a way to provide clear connections between the evidence that I was providing and the reflective posts that I was making to respond to the selection criteria for the CMALT application. Ultimately I decided on footnote-style superscript annotations that referred to numbered sections containing files at the side of the text.

[Screenshot: footnote-style annotations in Mahara]

The good parts

Using Mahara was a highly intuitive process that made it very easy to quickly produce a professional-looking page. It certainly helped that Mahara has a lot in common with Moodle, which I have used for a number of years, but I feel confident that even a user without Moodle experience will pick it up quickly.

The file management system is similarly quick to pick up, with a simple space that can be used to upload files and organise them into folders.

The range of themes that can be used to style a portfolio (or the whole site) offer a reasonable degree of personalisation and I suspect that it is possible to do a lot more if you are happy to dive into the CSS and tinker.

The lesser parts

As I mentioned earlier, our Mahara instance was installed on a Windows server rather than the recommended platform and this generated a number of back-end error messages. Broadly the system seems to be working fairly well and when it is rolled out officially at a university level, I’m sure that it will be done according to the recommended specs.

For this reason, it’s hard to know whether some things don’t work – most notably the Open Badges plugin – because of our non-compliant server configuration or because of other issues. This was more of a nice-to-have in any case so it hasn’t been a major headache.

One thing that did cause a few more headaches though was the fact that when adding a folder of files to a page, the process for correctly selecting the folder (so that it actually displays the files) is fairly unintuitive in the latest release. A user needs to click the white space next to the folder name – but not the name itself or the folder icon – to select it. This caused me a good few hours of frustration but I have been told it will be addressed in future versions.

The user manual is fairly rich and detailed and there is also a growing community of users, so it isn’t generally too hard to find an answer when you strike a problem.

Wrapping up

I did spend a fair while tweaking my ePortfolio to get it just right but this was never laborious. My ePortfolio probably doesn’t make full use of the range of tools on offer but I think it has done the job reasonably well.

Feel free to take a look at it at http://mahara.cbenet.anu.edu.au/view/view.php?t=vex1phdEaZm2QsXRUGc9 and if you have any thoughts or suggestions, please let me know.

Categories
education design

When Schools Overlook Introverts: Why Quiet Time is Important for the Learning Process – The Atlantic

For many students, quiet time is key for the learning process.

Source: When Schools Overlook Introverts: Why Quiet Time is Important for the Learning Process – The Atlantic

I found a lot of value to consider in this article – while I can appreciate the value of group work, it has felt to me that there is perhaps an overemphasis on it in education design. While the notion of learning styles is heavily pooh-poohed now, it could well be worth considering an education design that addresses people’s preferred styles of interaction and their need for quiet, reflective time to synthesise concepts.

Categories
11.133x ed tech edtech mooc

Week 4 of the 11.133x MOOC – Bringing it on home

The final week (ok 2 weeks) of the MITx – Implementation and Evaluation of Educational Technology MOOC – is now done and dusted and it’s time for that slight feeling of “what do I do now?” to kick in.

This final section focused on sound evaluation processes – both formative and summative – during and after your ed tech implementation. This whole MOOC has had a very smooth, organic kind of flow and this brought it to a very comfortable conclusion.

Ilona Holland shared some particularly useful ideas about areas to emphasise in the evaluation stage: appeal (engagement), interest (sparking a desire to go further), comprehension, pace and usability. She and David Reider clarified the difference between evaluation and research – largely that in an evaluation you go in without a hypothesis and just note what you are seeing.

In keeping with the rest of these posts, I’ll add the assignment work that I did for this final unit as well as my overall reflections. Spoiler alert though, if you work with educational technology (and I assume you do if you are reading this blog), this is one of the best online courses that I’ve ever done and I highly recommend signing up for the next one.


 

Assessment 4 – Evaluation process.

  1. Decide why you are evaluating. Is it just to determine if your intervention is improving learner’s skills and/or performance? Is it because certain stakeholders require you to?

We will evaluate this project because it is an important part of the process of implementing any educational technology. We need to be confident that this project is worth proceeding with at a larger scale. It will also provide supporting evidence to use when approaching other colleges in the university to share the cost of a site-wide license.

  2. Tell us about your vision of success for the implementation. This step is useful for purposes of the course. Be specific. Instead of saying “All students will now be experts at quadratic equations,” consider if you would like to see a certain percentage of students be able to move more quickly through material or successfully complete more challenging problems.

Our goal in using PollEverywhere in lectures is to increase student engagement and understanding and to reduce the number of questions that students need to ask the lecturer after the lecture.

A secondary goal would be to increase the number of students attending lectures.

Engagement seems like a difficult thing to quantify but we could aim for a 10% increase in average student grades in assessments based on lecture content. We could also aim for lecturers receiving 10% fewer student questions during the week about lecture content. A 20% increase in attendance also would be a success.
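
Just to make those targets concrete, here’s a minimal sketch of how they could be checked at the end of the pilot. The baseline and pilot numbers below are completely made up for illustration; the real figures would come from the data sources listed further down.

```python
# A minimal sketch of checking the pilot's success targets (10% higher
# average grades, 10% fewer questions, 20% higher attendance).
# The baseline and pilot figures here are invented for illustration only.
def target_met(before, after, target_pct, lower_is_better=False):
    """True if the percentage change from before to after meets the target."""
    change = (before - after) if lower_is_better else (after - before)
    return (change / before) * 100 >= target_pct


baseline = {"avg_grade": 62.0, "questions_per_week": 40, "attendance": 110}
pilot = {"avg_grade": 69.5, "questions_per_week": 35, "attendance": 140}

results = {
    "grades up 10%": target_met(baseline["avg_grade"], pilot["avg_grade"], 10),
    "questions down 10%": target_met(
        baseline["questions_per_week"], pilot["questions_per_week"], 10, lower_is_better=True
    ),
    "attendance up 20%": target_met(baseline["attendance"], pilot["attendance"], 20),
}
print(results)
# {'grades up 10%': True, 'questions down 10%': True, 'attendance up 20%': True}
```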

  3. Generate questions that will guide the evaluation. What do you need and want to know regarding the efficacy of your implementation? Are there questions that other stakeholders care about that should also be included? Think about your desired goals and outcomes for the implementation.

Questions for students:

I find lectures engaging
I am more likely to attend lectures now because of the use of PollEverywhere
I find PollEverywhere easy to use
PollEverywhere works reliably for me
The use of PollEverywhere feedback in lectures has helped deepen my understanding of the content

Questions for lecturers:

I have found PollEverywhere easy to use
PollEverywhere works reliably for me in lectures
PollEverywhere has helped me evaluate and adjust my lectures
Fewer students ask me questions between lectures since I started using PollEverywhere
Students seem more engaged now

  4. Determine what data and information you need to address the questions and how you will collect it. This could be qualitative or quantitative. You might consider observing teachers and students in action or conducting surveys and interviews. You might look at test performance, participation rates, completion rates, etc. It will depend on what is appropriate for your context.

Pre-use survey of students relating to engagement in lectures and their attitudes towards lectures
Observation of classes using PollEverywhere in lectures and student activity/engagement
Lecture attendance numbers?
Use of PollEverywhere near the end of lectures to gather student feedback
Comparison of assessment grade averages
Feedback from students in tutorials
University SELS (Student Experience of Learning Support) and SET (Student Experience of Teaching) surveys
Data derived directly from Poll results

  5. Consider how you would communicate results and explain if certain results would cause you to modify the implementation. In a real evaluation, you would analyze information and draw conclusions. Since your course project is a plan, we will skip to this step.

The quantitative data (changes in grades, results from polls in lectures, student surveys, attendance estimates) could be collated and presented in a report for circulation around the college. We could also make a presentation at our annual teaching and learning day – which could incorporate use of the tool.

Qualitative data could be built into case studies and a guide to the practical use of the tool.

Evidence emerging during the trial period could be acted on quickly by discussing alternatives with the pilot group and making changes to the way that the tool is used. This might include changing the phrasing of questions, requesting that students with Twitter access use this option for responding to the poll, or exploring alternative methods of displaying the PollEverywhere results (if PowerPoint is problematic).

Part 5: Reflection

What was difficult about creating your plan? What was easy?

Generally speaking, coming up with the plan overall was a fairly painless experience. The most complicated part was developing tools to identify and evaluate the most appropriate options. This was because the guest speakers gave me so many ideas that it took a while to frame them in a way that made sense to me and which offered a comprehensive process to work through. (This ended up being 3-4 separate documents but I’m fairly happy with all of them as a starting point).

As with all of the activities, once I had discovered the approach that worked for me and was able to see how everyone else was approaching the question, things seemed to fall into place fairly smoothly.

What parts of the course were the most helpful? Why? Did you find certain course materials to be especially useful?

I think I have a fairly process oriented way of thinking – I like seeing how things fit together and how they relate to the things that come before and after. So the sections that dug down into the detail of processes – section 2 and section 4 with the evaluation plans – appealed the most to me.

I understand that the majority of people working with education technology are in the K-12 area, so it makes sense that this is where many of the guest experts came from, but it did sometimes seem slightly removed from my own experiences. I had to do a certain amount of “translating” of ideas to spark my own ideas.

What about peer feedback? How did your experiences in the 11.133x learning community help shape your project?

Peer feedback was particularly rewarding. A few people were able to help me think about things in new ways and many were just very encouraging. I really enjoyed being able to share my ideas with other people about their projects as well and to see a range of different approaches to this work.

General observations

I’ve started (and quit) a few MOOCs now and this was easily the most rewarding. No doubt partially because it has direct relevance to my existing work and because I was able to apply it in a meaningful way to an actual work task that happened to come up at the same time.

I had certain expectations of how my project was going to go and I was pleased that I ended up heading in a different direction as a result of the work that we did. This work has also helped equip me with the skills and knowledge that I need to explain to a teacher why their preferred option isn’t the best one – and provide a more feasible alternative.

While it may not necessarily work for your edX stats, I also appreciated the fact that this was a relatively intimate MOOC – it made dealing with the forum posts feel manageable. (I’ve been in MOOCs where the first time you log in you can see 100+ pages of intro posts and this just seems insurmountable.) It felt more like a community.

I liked the idea of the interest groups in the forum (and the working groups) but their purpose seemed unclear (beyond broad ideals of communities of practice) and after a short time I stopped visiting. (I also have a personal preference for individual rather than group work, so that was no doubt a part of this)

I also stopped watching the videos after a while and just read the transcripts as this was much faster. I’d think about shorter, more tightly edited videos – or perhaps shorter videos for conceptual essentials mixed with more conversational case-study videos (marked optional)

Most of the events didn’t really suit my timezone (Eastern Australia) but I liked that they were happening. The final hangout did work for me but I hadn’t had a chance to work on the relevant topic and was also a little caught up with work at the time.

All in all though, great work MOOC team and thanks.

(I also really appreciated having some of my posts highlighted – it’s a real motivator)

Categories
11.133x ed tech edtech feedback

Week 3 of the 11.133x MOOC – On fire

In comparison to last week, I ploughed through this week’s work. Probably because I have a separate education tech training event on this week – the ACODE Learning Technology Teaching Institute – and wanted to clear the decks to give that due focus.

This week in MOOC land saw us looking at the evaluation phase of a project. Once more, the videos and examples were a little (ok entirely) focussed on K-12 examples (and a detour into ed tech startups that was slightly underwhelming) but there were enough points of interest to keep my attention.

I even dipped back into GoAnimate for one of the learning activities, which was fun.

The primary focus was on considering barriers to success before starting an implementation, and it did introduce me to a very solid tool created by Jennifer Groff of MIT that provides a map of potential barriers across six areas. I highly recommend checking it out.

Groff, J., & Mouza, C. (2008). A framework for addressing challenges to classroom technology use. AACE Journal, 16(1), 21–46.

As for my contributions, as ever, I’ll add them now.


 

Barriers and Opportunities for using PollEverywhere in uni lectures. 

There are a number of potential barriers to the success of this project; however, I believe that several of them have been considered and hopefully addressed in the evaluation process that led us to our choice of PollEverywhere. One of them only came up on Friday though, and it will be interesting to see how it plays out.

1 School – Commercial Relationships

Last week I learnt that a manager in the school that has been interested in this project (my college is made up of four schools) has been speaking to a vendor and has arranged for them to come and make a presentation about their student/lecture response tool. (Pearson – Learning Catalytics). Interestingly this wasn’t a tool on my radar in the evaluation process – it didn’t come up in research at all. A brief look at the specs for the tool (without testing) indicates though that it doesn’t meet several of our key needs.

I believe that we may be talking to this vendor about some of their other products but I’m not sure what significance this has in our consideration of this specific product. The best thing that I can do is to put the new product through the same evaluation process as the one that I have selected and make the case based on selection criteria. We have also purchased a license for PollEverywhere for trialling, so this project will proceed anyway. We may just need to focus on a pilot group from other schools.

2 School – Resistance to centralisation

Another potential obstacle may come from one of our more fiercely independent schools. They have a very strong belief in maintaining their autonomy and “academic freedom” and have historically resisted ideas from the college.

There isn’t a lot that can be done about this other than inviting them to participate and showcasing the results after the piloting phase is complete.

3 School – network limitations

This is unfortunately not something that we can really prepare for. We don’t know how well our wireless network will support 300+ students trying to access a site simultaneously. This was a key factor in the decision to use a tool that enables students to participate via SMS/text, website/app and Twitter.

We will ask lecturers to encourage students to use varied submission options. If the tool proves successful, we could potentially upgrade the wireless access points.

4 Teacher – Technical ability to use the tool

While I tried to select a tool that appears to be quite user-friendly, there are still aspects of it that could be confusing. In the pilot phase, I will develop detailed how-to resources (both video and print) and provide practical training to lecturers before they use the tools.

5 Teacher – Technical

PollEverywhere offers a plug-in that enables lecturers to display polls directly in their PowerPoint slides. Lecturers don’t have permission to install software on their computers, so I will work with our IT team to ensure that this is made available.

6 Teacher – Pedagogy

Poorly worded or timed questions could reduce student engagement. During the training phase of the pilot program, I will discuss the approach that the teacher intends to take with their questions (e.g. asking “Did I explain that clearly?” vs “Do you understand that?”).

Opportunities

Beyond the obvious opportunity to enhance student engagement in lectures, I can see a few other potential benefits to this project.

Raise the profile of Educational technology

A successful implementation of a tool that meshes well with existing practice will show that change can be beneficial, incremental and manageable.

Open discussion of current practices

Providing solid evidence of improvements in practices may offer a jumping off point for wider discussion of other ways to enhance student engagement and interaction.

Showcase and share innovative practices with other colleges

A successful implementation could lead to greater collegiality by providing opportunities to share new approaches with colleagues in other parts of the university.

Timeline

This isn’t incredibly detailed yet but it is the direction I am looking at. (The numbers in brackets refer to the barriers above.)

    Develop how-to resources for both students and lecturers(3)
    Identify pilot participants (1,2)
    Train / support participants (3,4,6)
    Live testing in lectures (5)
    Gather feedback and refine
    Present results to college
    Extend pilot (repeat cycle)
    Share with university

 

Oh and here is the GoAnimate video. (Don’t judge me)

http://goanimate.com/videos/0e8QxnJgGKf0?utm_source=linkshare&utm_medium=linkshare&utm_campaign=usercontent

Categories
11.133x clicker ed tech edtech education feedback mooc Uncategorized

Week 2 of the 11.133x MOOC – getting things done. (Gradually)

The second week (well fortnight really) of the 11.133x MOOC moved us on to developing some resources that will help us to evaluate education technology and make a selection.

Because I’m applying this to an actual project (two birds with one stone and all) at work, it took me a little longer than I’d hoped but I’m still keeping up. It was actually fairly enlightening because the tool that I had assumed we would end up using wasn’t the one that was shown to be the most appropriate for our needs. I was also able to develop a set of resources (and the start of a really horribly messy flowchart) that my team will be able to use for evaluating technology down the road.

I’m just going to copy/paste the posts that I made in the MOOC – with the tools – as I think they explain what I did better than glibly trying to rehash it here on the fly.


 

Four tools for identifying and evaluating educational technology

I’ve been caught up with work things this week so it’s taken me a little while to get back to this assignment, but I’m glad, as it has enabled me to see the approaches that other people have taken and clarify my ideas a little.

My biggest challenge is that I started this MOOC with a fairly specific Ed Tech project in mind – identifying the best option in student lecture instant response systems. The assignment however asks us to consider tools that might support evaluating Ed Tech in broader terms and I can definitely see the value in this as well. This has started me thinking that there are actually several stages in this process that would probably be best supported by very different tools.

One thing that I have noticed (and disagreed with) in the approaches that some people have taken has been that the tools seem to begin with the assumption that the type of technology is selected and then the educational/pedagogical strengths of this tool are assessed. This seems completely backwards to me, as I would argue that we need to look at the educational need first and then try to map it to a type of technology.

In my case, the need/problem is that student engagement in lectures is low and a possible solution is that the lecturer/teacher would like to get better feedback about how much the students are understanding in real time so that she can adjust the content/delivery if needed.

Matching the educational need to the right tool

When I started working on this I thought that this process required three separate steps – a flowchart to point to suitable types of technology, a checklist to see whether it would be suitable and then a rubric to compare products.

As I developed these, I realised that we also need to clearly identify the teacher’s educational needs for the technology, so I have added a short survey about this here, at the beginning of this stage.

I also think that a flowchart (ideally interactive) could be a helpful tool in this stage of identifying technology. (There is a link to the beginning of the flowchart below)

I have been working on a model that covers 6 key areas of teaching and learning activity that I think could act as the starting point for this flowchart but I recognise that such a tool would require a huge amount of work so I have just started with an example of how this might look. (Given that I have already identified the type of tool that I’m looking at for my project, I’m going to focus more on the tool to select the specific application)

I also recognise that even for my scenario, the starting point could be Communication or Reflection/Feedback, so this could be a very messy and large tool.

The key activities of teaching/learning are:
• Sharing content
• Communication
• Managing students
• Assessment tasks
• Practical activities
• Reflection / Feedback

I have created a Padlet at http://padlet.com/gamerlearner/edTechFlowchart and a LucidChart at https://www.lucidchart.com/invitations/accept/6645af78-85fd-4dcd-92fe-998149cf68b2 if you are interested in sharing ideas for types of tools or questions, or if you feel like helping me to build this flowchart.

I haven’t built many flowcharts (as my example surely demonstrates) but I think that if it was possible to remove irrelevant options by clicking on sections, this could be achievable.
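
To give a sense of what I’m picturing, here’s a toy sketch of the decision logic such a flowchart might encode: pick a teaching/learning activity, answer a couple of questions about the context, and get back candidate types of tools (not specific products). The categories, questions and suggestions are my own rough guesses rather than a finished model.

```python
# A toy sketch of the flowchart's decision logic: an educational need
# (activity + context) maps to candidate *types* of tools, not products.
# The categories, questions and suggestions are illustrative guesses only.
TOOL_TYPES = {
    ("Reflection / Feedback", "during class", "large cohort"): ["live polling / student response system"],
    ("Reflection / Feedback", "after class", "large cohort"): ["online survey", "LMS forum"],
    ("Communication", "during class", "large cohort"): ["backchannel chat", "live polling / student response system"],
    ("Sharing content", "after class", "any size"): ["LMS page", "lecture recording"],
}


def suggest(activity, timing, cohort):
    """Return candidate tool types for an educational need, or an empty list."""
    return TOOL_TYPES.get((activity, timing, cohort), [])


# My lecture scenario: real-time feedback from a 350+ student lecture.
print(suggest("Reflection / Feedback", "during class", "large cohort"))
# ['live polling / student response system']
```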

Is the technology worthwhile?

The second phase of this evaluation lets us look more closely at the features of a type of technology to determine whether it is worth pursuing. I would say that there are general criteria that will apply to any type of technology and there would also need to be specific criteria for the use case. (E.g. for my lecture clicker use case, it will need to support 350+ users – not all platforms/apps will do this but as long as some can, it should be considered suitable)

Within this there are also essential criteria and nice-to-have criteria. If a tool can’t meet the essential criteria then it isn’t fit for purpose, so I would say that a simple checklist should be sufficient as a tool will either meet a need or it won’t. (This stage may require some research and understanding of the available options first). This stage should also make it possible to compare different types of platforms/tools that could address the same educational needs. (In my case, for example, providing physical hardware based “clickers” vs using mobile/web based apps)

This checklist should address general needs which I have broken down by student, teacher and organisational needs that could be applied to any educational need. It should also include scenario specific criteria.
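
As a rough illustration of how that gate might work (the criteria below are examples of the kind of thing I mean, not my full checklist):

```python
# A minimal sketch of the checklist stage: if any essential criterion fails,
# the tool type is unsuitable; nice-to-haves are only tallied once it passes.
# The criteria and answers below are illustrative, not my real checklist.
def checklist_verdict(essential, nice_to_have):
    failed = [name for name, ok in essential.items() if not ok]
    if failed:
        return "Unsuitable - fails essential criteria: " + ", ".join(failed)
    met = sum(1 for ok in nice_to_have.values() if ok)
    return "Suitable - meets essentials, {}/{} nice-to-haves".format(met, len(nice_to_have))


essential = {
    "supports 350+ concurrent users": True,
    "works on student-owned devices": True,
    "no per-student cost": True,
}
nice_to_have = {
    "SMS/text responses": True,
    "Twitter responses": True,
    "PowerPoint integration": True,
}
print(checklist_verdict(essential, nice_to_have))
# Suitable - meets essentials, 3/3 nice-to-haves
```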

Evaluating products
It’s hard to know exactly what the quality of the tool or the learning experiences will be. We need to make assumptions based on the information that is available. I would recommend some initial testing wherever possible.
I’m not convinced that it is possible to determine the quality of the learning outcomes from using the tool so I have excluded these from the rubric.
Some of the criteria could be applied to any educational technology and some are specifically relevant to the student response / clicker tool that I am investigating.
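
And a small sketch of how the comparison rubric itself could work: each product gets a score per criterion, weighted by how much that criterion matters. The criteria, weights and scores below are invented purely for illustration and don’t reflect my actual evaluation results.

```python
# A small sketch of a weighted comparison rubric. Criteria, weights and
# scores (1-5) are invented for illustration, not real evaluation results.
WEIGHTS = {
    "ease of use": 3,
    "reliability at scale": 4,
    "cost": 2,
    "variety of response channels": 3,
}


def rubric_total(scores):
    """Weighted total for one product's criterion scores."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())


candidates = {
    "Tool A": {"ease of use": 4, "reliability at scale": 5, "cost": 3, "variety of response channels": 5},
    "Tool B": {"ease of use": 5, "reliability at scale": 3, "cost": 4, "variety of response channels": 2},
}
for name, scores in sorted(candidates.items(), key=lambda kv: -rubric_total(kv[1])):
    print(name, rubric_total(scores))
# Tool A 53
# Tool B 41
```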


 

Lecture Response System pitch

This was slightly rushed but it does reflect the results of the actual evaluation that I have carried out into this technology so far. (I’m still waiting to have some questions answered about one of the products.)

I have emphasised the learning needs that we identified, looked quickly at the key factors in the evaluation and then presented the main selling points of the particular tool. From there I would encourage the teacher/lecturer to speak further to me about the finer details of the tool and our implementation plan.

Any thoughts or feedback on this would be most welcome.

Edit: I’ve realised that I missed some of the questions – well kind of.
The biggest challenge will be how our network copes with 350+ people trying to connect to something at once. The internet and phone texting options were among the appealing parts of the tool in this regard, as they will hopefully reduce this number.

Awesome would look like large numbers of responses to poll questions and the lecturer being able to adjust their teaching style – either re-explaining a concept or moving to a new one – based on the student responses.


 

These are the documents from these two assignments.

  • Lecture Response Systems Pitch
  • Clickers EdTech Evaluation Rubric
  • Educational Technology Needs Survey
  • Education Technology Checklist

Categories
11.133x ed tech edtech education design mooc

Week 1 of the 11.133x MOOC – tick.

MOOCs often take a little while to ramp up but the MITx 11.133x Implementation yada yada – I’m going to stick with 11.133x for now, that’s a long title – MOOC feels like it’s just about right.

There has been a fair whack of standard common sense in the videos so far – have a purpose, don’t choose the technology before you know what you want to do with it, stakeholders matter, etc. – but it has been well presented, and by a range of different people.

There has probably been more emphasis on ed tech for K-12 schools than for higher education than I’d like, but I guess that is a larger chunk of the audience. The ability to form/join affinity groups in the forums has at least let me connect with other uni people around the world.

In terms of practical activities, it has really REALLY helped to come to this MOOC with a project in mind. I’m looking for a live student response/feedback tool (most likely web/app based) that can be used in lectures (large lectures 350+) to poll students about their understanding of content.

This fed well into our first two activities, which involved looking at the context that this project will occur in and considering whether it sits in a general or specific domain and whether it will change procedure or instruction. (I’ll post my responses to both below)

Responding to other posts – you need to respond to at least three to complete the module – helps to clarify some of the concepts. I have a feeling that this isn’t a huge MOOC either – there aren’t hundreds of pages of responses in the forums to each question which is often kind of hellish to process.

Profile your implementation context

Target environment
I work in the College of Business and Economics in a leading Australian university. We’re relatively well resourced, so buying new tech generally isn’t an issue within reason, which allows me to focus on the suitability of the tool. We have large numbers of international students in our undergraduate cohort. The majority of students are comfortable with mobile and online technology. At an undergraduate level, the students tend to be young adults.

The college is comparatively conservative in some ways – although fortunately our leadership understands and supports the value of innovation. There is an emphasis placed on the seriousness and prestige of our brand that I need to factor into the look and feel of college associated tools.
There is a range of acceptance and engagement with learning technology from academics in the college, from enthusiasm to resistance to change. (Last week I had a long conversation with someone about why he still needs an overhead projector – we’re getting him one)
Our largest lecture theatres can hold up to 600 people (which is big for us) and the college wi-fi has recently been upgraded.

Key stakeholder

Recently one of our finance lecturers contacted me – I’m the learning technology person for the college – and asked what we have in the way of live student response/feedback systems. Tools that will enable her to post survey/understanding questions on screen during lectures and get real-time responses from students via mobile/web apps.

She is relatively new to the college and lectures to a group of 350+ students. (This is relatively large for us although some of our foundation subjects have 800+ students). She is keen to enhance the interactivity of her lectures but is also concerned about finding the right tool. She really doesn’t want any technology failures during her lectures as she believes that this will kill student trust in this kind of technology. She would also prefer not to trial multiple tools on her students as she is concerned that experimenting on students may diminish their educational experience.

Potential for the technology
There has been a lot of ongoing discussion at the university in recent years about the effectiveness of lectures. Attendance rates are around 30% in many disciplines, due to student work/life commitments, recording of lectures and a host of other reasons.

The lecture format itself is questioned however it is deeply ingrained in many parts of the culture so finding ways to augment and enhance the lecture experience seems like a more effective approach.
Student response/feedback apps can be a powerful way to instantly track understanding and engagement of material in lectures and I am keen to see what we can do with it. While some students may feel confident to ask questions in a lecture, others may feel uncomfortable with this from cultural perspectives or due to English being a second language.

The lecturer has already been in contact with a supplier of a particular platform, however I have some reservations as on a preliminary investigation, their product appears to provide much more functionality than might be needed and may be unnecessarily complicated. However, I’m hoping that this MOOC will help me to work through this process.

Domain / Approach Chart

Socrative

This seems like a bit of a cop-out given that the example given was PollEverywhere, but if you check my previous post, you’ll see that I’m looking for a tool to use for live student feedback in lectures.

Socrative is one of several tools that I am considering to meet this need. It is a basic online tool that enables a teacher to create a quiz/survey question, show it to the class through a data projector and then get the students to respond (generally via multiple choice) via an app on their phone or a web browser.

Of the ones that I’ve seen so far, it’s easy to set up and seems to work fairly well. (I blogged a comparison between it and Kahoot a while ago)

I’d say that it is Domain General because it can be used widely and it is more about changing an approach to procedure, because without it, a teacher could just ask for a show of hands instead. (This I think will get a better response though because it is less embarrassing)

My main concern with Socrative for my project is that the website says that it is best used with classes of 50 or fewer, and I am looking for something that supports 350+.