
Week 4 of the 11.133x MOOC – Bringing it on home

The final week (ok, two weeks) of the MITx Implementation and Evaluation of Educational Technology MOOC is now done and dusted, and it’s time for that slight feeling of “what do I do now?” to kick in.

This final section focused on sound evaluation processes – both formative and summative – during and after your ed tech implementation. This whole MOOC has had a very smooth, organic kind of flow and this brought it to a very comfortable conclusion.

Ilona Holland shared some particularly useful ideas about areas to emphasise in the evaluation stage: appeal (engagement), interest (sparking a desire to go further), comprehension, pace and usability. She and David Reider clarified the difference between evaluation and research – largely that in an evaluation you go in without a hypothesis and just note what you are seeing.

In keeping with the rest of these posts, I’ll add the assignment work that I did for this final unit as well as my overall reflections. Spoiler alert though, if you work with educational technology (and I assume you do if you are reading this blog), this is one of the best online courses that I’ve ever done and I highly recommend signing up for the next one.

Assessment 4 – Evaluation process.

  1. Decide why you are evaluating. Is it just to determine if your intervention is improving learners’ skills and/or performance? Is it because certain stakeholders require you to?

We will evaluate this project because it is an important part of the process of implementing any educational technology. We need to be confident that this project is worth proceeding with at a larger scale. It will also provide supporting evidence to use when approaching other colleges in the university to share the cost of a site-wide license.

  2. Tell us about your vision of success for the implementation. This step is useful for the purposes of the course. Be specific. Instead of saying “All students will now be experts at quadratic equations,” consider whether you would like to see a certain percentage of students be able to move more quickly through material or successfully complete more challenging problems.

Our goal in using PollEverywhere in lectures is to increase student engagement and understanding and to reduce the number of questions that students need to ask the lecturer after the lecture.

A secondary goal would be to increase the number of students attending lectures.

Engagement seems like a difficult thing to quantify, but we could aim for a 10% increase in average student grades in assessments based on lecture content. We could also aim for lecturers receiving 10% fewer student questions during the week about lecture content. A 20% increase in attendance would also count as a success.
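To make these targets concrete, here is a rough sketch (in Python, with entirely made-up pre/post figures) of how they could be checked at the end of the pilot; the real numbers would come from the data sources listed in step 4 below.

```python
# Hypothetical pre- and post-pilot figures, for illustration only.
pre = {"avg_grade": 62.0, "questions_per_week": 30, "avg_attendance": 85}
post = {"avg_grade": 69.5, "questions_per_week": 26, "avg_attendance": 104}

# Targets from the success criteria above: +10% grades, -10% questions, +20% attendance.
targets = {"avg_grade": 10, "questions_per_week": -10, "avg_attendance": 20}

def pct_change(before, after):
    """Percentage change from the pre-pilot figure to the post-pilot figure."""
    return (after - before) / before * 100

for measure, target in targets.items():
    change = pct_change(pre[measure], post[measure])
    met = change >= target if target >= 0 else change <= target
    print(f"{measure}: {change:+.1f}% (target {target:+d}%) -> {'met' if met else 'not met'}")
```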

  3. Generate questions that will guide the evaluation. What do you need and want to know regarding the efficacy of your implementation? Are there questions that other stakeholders care about that should also be included? Think about your desired goals and outcomes for the implementation.

Questions for students:

- I find lectures engaging
- I am more likely to attend lectures now because of the use of PollEverywhere
- I find PollEverywhere easy to use
- PollEverywhere works reliably for me
- The use of PollEverywhere feedback in lectures has helped deepen my understanding of the content

Questions for lecturers:

- I have found PollEverywhere easy to use
- PollEverywhere works reliably for me in lectures
- PollEverywhere has helped me evaluate and adjust my lectures
- Fewer students ask me questions between lectures since I started using PollEverywhere
- Students seem more engaged now
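These are phrased as statements rather than questions because I’m imagining them on a five-point agree/disagree scale. A minimal sketch of how the responses to one statement might be tallied (Python again, with invented responses) could look like this:

```python
from collections import Counter

# Invented responses to one student statement, on a 1-5 scale
# (1 = strongly disagree, 5 = strongly agree). Illustration only.
statement = ("The use of PollEverywhere feedback in lectures has helped "
             "deepen my understanding of the content")
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

counts = Counter(responses)
mean_score = sum(responses) / len(responses)
agreement = sum(1 for r in responses if r >= 4) / len(responses) * 100

print(statement)
for score in range(1, 6):
    print(f"  {score}: {'#' * counts[score]} ({counts[score]})")
print(f"  mean = {mean_score:.1f}, agreement (4 or 5) = {agreement:.0f}%")
```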

  4. Determine what data and information you need to address the questions and how you will collect it. This could be qualitative or quantitative. You might consider observing teachers and students in action or conducting surveys and interviews. You might look at test performance, participation rates, completion rates, etc. It will depend on what is appropriate for your context.

- Pre-use survey of students relating to engagement in lectures and their attitudes towards lectures
- Observation of classes using PollEverywhere in lectures and student activity/engagement
- Lecture attendance numbers?
- Use of PollEverywhere near the end of lectures to gather student feedback
- Comparison of assessment grade averages
- Feedback from students in tutorials
- University SELS (Student Experience of Learning Support) and SET (Student Experience of Teaching) surveys
- Data derived directly from Poll results
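For that last item, I’m assuming the poll results can be exported in some tabular form (a CSV, say); the column names below are hypothetical and would need checking against the real export. A rough sketch of pulling response counts and correctness rates out of such a file:

```python
import csv
from collections import defaultdict

# Hypothetical export format: one row per response, with columns
# "poll_title", "response" and "correct". Illustration only.
per_poll = defaultdict(lambda: {"responses": 0, "correct": 0})

with open("poll_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        stats = per_poll[row["poll_title"]]
        stats["responses"] += 1
        if row.get("correct", "").lower() == "true":
            stats["correct"] += 1

for title, stats in per_poll.items():
    rate = stats["correct"] / stats["responses"] * 100
    print(f"{title}: {stats['responses']} responses, {rate:.0f}% answered correctly")
```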

  5. Consider how you would communicate results and explain if certain results would cause you to modify the implementation. In a real evaluation, you would analyze information and draw conclusions. Since your course project is a plan, we will skip to this step.

The quantitative data (changes in grades, results from polls in lectures, student surveys, attendance estimates) could be collated and presented in a report for circulation around the college. We could also make a presentation at our annual teaching and learning day – which could incorporate use of the tool.

Qualitative data could be built into case studies and a guide to the practical use of the tool.

Evidence emerging during the trial period could be acted on quickly by discussing alternatives with the pilot group and making changes to the way that the tool is used. This might include changing the phrasing of questions, requesting that students with Twitter access use this option for responding to the poll, or exploring alternative methods of displaying the PollEverywhere results (if PowerPoint is problematic).

Part 5: Reflection

What was difficult about creating your plan? What was easy?

Generally speaking, coming up with the plan was a fairly painless experience. The most complicated part was developing tools to identify and evaluate the most appropriate options. This was because the guest speakers gave me so many ideas that it took a while to frame them in a way that made sense to me and offered a comprehensive process to work through. (This ended up being 3-4 separate documents, but I’m fairly happy with all of them as a starting point.)

As with all of the activities, once I had discovered the approach that worked for me and was able to see how everyone else was approaching the question, things seemed to fall into place fairly smoothly.

What parts of the course were the most helpful? Why? Did you find certain course materials to be especially useful?

I think I have a fairly process-oriented way of thinking – I like seeing how things fit together and how they relate to the things that come before and after. So the sections that dug down into the detail of processes – section 2 and section 4 with the evaluation plans – appealed to me the most.

I understand that the majority of people working with educational technology are in the K-12 area, so it makes sense that this is where many of the guest experts came from, but this did sometimes feel slightly removed from my own experience. I had to do a certain amount of “translating” of their ideas to spark my own.

What about peer feedback? How did your experiences in the 11.133x learning community help shape your project?

Peer feedback was particularly rewarding. A few people were able to help me think about things in new ways and many were just very encouraging. I really enjoyed being able to offer my own thoughts on other people’s projects as well, and to see a range of different approaches to this work.

General observations

I’ve started (and quit) a few MOOCs now and this was easily the most rewarding, no doubt partially because it has direct relevance to my existing work and because I was able to apply it in a meaningful way to an actual work task that happened to come up at the same time.

I had certain expectations of how my project was going to go and I was pleased that I ended up heading in a different direction as a result of the work that we did. This work has also helped equip me with the skills and knowledge that I need to explain to a teacher why their preferred option isn’t the best one – and provide a more feasible alternative.

While it may not necessarily work for your edX stats, I also appreciated the fact that this was a relatively intimate MOOC – it made dealing with the forum posts feel manageable. (I’ve been in MOOCs where the first time you log in you can see 100+ pages of intro posts and this just seems insurmountable.) It felt more like a community.

I liked the idea of the interest groups in the forum (and the working groups) but their purpose seemed unclear (beyond broad ideals of communities of practice) and after a short time I stopped visiting. (I also have a personal preference for individual rather than group work, so that was no doubt a part of this.)

I also stopped watching the videos after a while and just read the transcripts, as this was much faster. I’d think about shorter, more tightly edited videos – or perhaps shorter videos for the conceptual essentials mixed with more conversational case-study videos (marked optional).

Most of the events didn’t really suit my timezone (Eastern Australia) but I liked that they were happening. The final hangout did work for me but I hadn’t had a chance to work on the relevant topic and was also a little caught up with work at the time.

All in all though, great work MOOC team and thanks.

(I also really appreciated having some of my posts highlighted – it’s a real motivator.)