Sunday 13 May 2012

Methodologies

As with some of the research and use of surveys last week, this week we are presented with some evidence and data that is also augmented with audio and visual logs, as well as interviews. The surveys range from external to online to paper, at times suggested by faculty, run within the first week or at six-monthly intervals; so there is a great breadth of methods being used.
In my previous role, I was responsible for administering learner response surveys, so it’s interesting to see the different forms and approaches and compare them with my own experience. There will always be shortcomings. For me, working centrally, I could only reach those who were on email (it took too much time to pull off address data and process hand-filled documents) and who had completed their learning (about 5,000 people a year). We considered research with those in the middle of their learning, but this involved too much administration time for one person to make it worthwhile (our data systems were not sophisticated enough). On average about 70% of completed learners were contacted each year, with about 40% of those (around 1,500) completing the survey. The surveys asked them to give feedback on how they found their learning – from the general quality through to the quality of the training advisers who helped them. They included both quantitative and qualitative questions. I have learnt that you have to give people the opportunity to speak – to have their say. That way you also hear the good and the bad.
Having five years of data, with few changes to the questions, has meant that we can build up a bigger picture of trends. I am always conscious when creating reports to ensure that the contexts are right, and that we explain the actual data based on percentage representation (as highlighted well in the Kennedy reports). But we don’t just use the surveys. We also use the self-evaluations of training managers; the anecdotal feedback gathered on a quarterly basis from those supporting training; the data from the database; and, like Richardson, our own understandings based on being immersed in the environment. This allows us to get ‘triangulation’ of a sort.
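As a very rough sketch of that percentage-representation point, here is the sort of arithmetic I mean. The figures are only the approximate ones mentioned above, and the code is purely illustrative, not our actual reporting process:

# Rough sketch of the percentage-representation check described above.
# Figures are the approximate ones from this post, for illustration only.
completed_learners = 5000   # learners finishing their training in a year (approx.)
contacted = 3500            # roughly 70% of completers reachable by email
responses = 1400            # roughly 40% of those contacted (around 1,500)
contact_rate = contacted / completed_learners * 100
response_rate = responses / contacted * 100
representation = responses / completed_learners * 100
print(f"Contacted: {contact_rate:.0f}% of completers")
print(f"Responded: {response_rate:.0f}% of those contacted")
print(f"So the survey represents about {representation:.0f}% of all completers")

Seen that way, a finding based on 1,400 responses actually speaks for under a third of all completers, which is exactly why the context has to be made explicit in the reports.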
So here’s the thing – there are always going to be flaws and imperfections. There will always be an imbalance towards the objectives of the research, or a bias in the reporting. It’s very hard to be completely unbiased, so we always need to bear this in mind when looking at data and research. Yorke (2009) has an interesting discussion about this in the context of students completing surveys, pulling together other research in the area and looking at student cognition and response times. Do students say what they think, or what they think they should think? An interesting question – especially as I must have completed about 30 psychometric surveys in the last 5 years alone.
If I think about my own experience, I fill these in a lot quicker now than I used to. Certainly with the testing I do before I meet the psychiatrist for my job role, I try not to think too hard – especially as it’s timed! However, I hate end-of-course surveys and happy sheets. I need time to process what I thought and what I have learnt. I also like to give honest feedback – which can be hard if your name is at the top of the paper and you hand it to your tutor. But the timing needs to be right – leave it too long and I don’t care about the feedback any more. The most useful feedback is peer to peer, but you need to create a culture of feedback. Hence it is also good to consider when these research activities take place. Now that I understand how important research can be, I am probably more likely to take part in it. However, as an 18-year-old I probably didn’t have much time for research.
It’s interesting to see that one survey used an external company and offered rewards. I fill in more surveys if I know there is a reward; I still do it honestly, but it’s an added incentive. We also commission a lot of external research, so that there is a more ‘professional’ approach, and maybe less bias. Often this also includes sampling from outside the organisation, so that we can draw comparisons.
Questions from the activities
How can researchers ensure that the responses they get from students about their expectations are meaningful?
This will really depend on the students, their age and context, and on the students understanding why the research is being done. I would suggest that, depending on these, a different approach may be used. For example, a master’s student may be better suited to being interviewed, because they are often asked to analyse their own feelings and experience, so they may have more cognitive capacity for it. A 16-year-old may need simpler direction. One thing not really mentioned in the research is the use of focus groups. Often the sharing of experience helps students to be more open about their own experience. I have also experimented with this in an online context through conferencing. So researchers need to think about who they want to be involved in the research and what they want to get out of it, but also be clear about the limitations of the study, and not be afraid to talk about those limitations.
 Is there a need for new methodological approaches to try to elicit the real learner voice? Where the researcher is an insider-researcher, involved in the design/delivery of the module being studied, does this add an additional layer of bias? Can better ways be developed to analyse the digital traces students leave when traversing their modules?
I have lumped these questions together as they all lead to similar discussions for me. There is a new approach that has been coming to the fore in some more developed online learning, where the programmes allow students to do things like rate pages, ‘like’ them, or add notes. Behind the screen, the LMS can also help to track learners and their activities (as I am sure they are doing with us), to judge where they might get stuck or spend more time. We do this in face-to-face learning, where we watch the learners, check for understanding, change activities, hold regular feedback sessions with facilitators, and evaluate at the end. The same can be done in an online environment, as sketched below. It helps you to see what activities are being done, and the tone and emotional state of the group. Naturally, every mix of students will be different, but this analysis of the course itself can help with triangulation. I would like to see analysis of activities alongside students’ reflected experiences and alongside tutor and trainer experiences. I know of many occasions when I have thought that someone hasn’t enjoyed a course, or they have been difficult throughout – yet when they get to the end they say it was amazing.
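To make the idea of analysing those digital traces a little more concrete, here is a minimal sketch. The log format (student, page, seconds spent) is hypothetical and just illustrates the principle – it is not any particular LMS’s export or any real data:

# Minimal sketch of analysing LMS activity traces, as described above.
# The activity_log format and its values are made up for illustration.
from collections import defaultdict
from statistics import median
activity_log = [
    ("s1", "week1-intro", 120), ("s1", "week2-forum", 900),
    ("s2", "week1-intro", 150), ("s2", "week2-forum", 1100),
    ("s3", "week1-intro", 90),  ("s3", "week2-forum", 60),
]
time_by_page = defaultdict(list)
for student, page, seconds in activity_log:
    time_by_page[page].append(seconds)
# Pages where the median time spent is unusually high may be where
# students are getting stuck - a prompt for follow-up, not a conclusion.
for page, times in sorted(time_by_page.items(),
                          key=lambda item: median(item[1]), reverse=True):
    print(f"{page}: median {median(times):.0f}s across {len(times)} students")

On its own this tells you very little – it only becomes useful when set alongside what students and tutors actually say about their experience.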
When we talk specifically about the role of technology in the learner’s life, Richardson’s (2009) study was great, as he tried to control for some of the earlier anomalies – training tutors, setting student expectations, and so on. For me, all research becomes more useful when it has a depth of approach. Hence the use of audio or video logs adds a different, richer dimension – but they need to be used alongside other data.

Yorke, M. (2009) ‘“Student experience” surveys: some methodological considerations and an empirical investigation’, Assessment and Evaluation in Higher Education, vol. 34, no. 6, pp. 721–739.
