I attended the SEDA Spring Conference on Teaching, Learning and Assessment, which was held in Edinburgh on 12-13 May.
It was a great conference, as expected, and this is my account of the two days.
Ian’s keynote focussed on the challenges of implementing good practice in assessment and feedback. His slides were packed with great references to work done in the field and, hopefully, they will be online soon.
- Ian highlighted the importance of tracking students online, so that educators can evaluate what works and what doesn’t (Wayne Smutz, 2013).
- Talking about online learning, Ian stated that students want access to information online 24/7, and all in one place. Thus, designing online content should always be done with students in mind.
- Ian also highlighted the role of assessment in developing employability skills and what employers want from graduates. I have some reservations about this topic, as I still believe that universities should take what employers want with a pinch of salt. Our role is delivering academic skills and the ability to learn, not the teaching of specific skills.
- I really liked Ian’s idea that assessment should be both ‘economically efficient’ as well as ‘academically efficient’. This is a very pragmatic and effective idea: we need to make it good, but also feasible.
- The big problem with assessment is that it usually fails to meet students’ expectations, and it can drive the wrong behaviour. A few examples of what drives this:
- The symbolic use of marking (do we really need a 100-point scale? What does it mean to the students?). Students tend to have a simpler marking rubric in their minds: “I did well, I did good, I did ok, I did awful”. But often this scale is misaligned with the mark they get.
- Structured assessments, full of sub-parts and sub-marks that average each other out, do not give a clear picture of where students need improvement; they just drive a strategic behaviour based on compensation and hitting the desired threshold, stripping away the complexity of the learning process. (Grades can conceal actual performance.)
- We should keep in mind that we are assessing the work, not the person, and we should convey this to the student as much as we can.
- Sometimes the needs of good assessment design and good marking clash with institutional constraints based on turnaround time: a typical issue within a constructive alignment perspective.
- Ian said that feedback is not effective if there is no evidence of its consequences. Another powerful concept. We often talk about relating assessment to learning outcomes, but do our students know what the learning outcomes are? (Susanne Orr). A way to address this problem is to have students rewrite the learning outcomes in their own words, in partnership with staff, to contextualise them.
- Ian suggests that we should use a ‘graded learning profile’, a form of learning portfolio with minimal aggregation of marks and a clear understanding of the skills attained on the student’s side.
- Myth busting: it is not true that students will work only to meet minimum requirements, if the assessment process is constructively aligned.
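Ian’s point about sub-marks that average each other out can be illustrated with a toy example (the numbers and function are my own, not from the talk): two students with very different profiles can end up with exactly the same aggregated mark, so the aggregate conceals where each of them actually needs improvement.

```python
def aggregate(sub_marks):
    """Average the sub-marks, as a typical structured assessment does."""
    return sum(sub_marks) / len(sub_marks)

# Student A: consistently adequate across all four criteria.
student_a = [58, 62, 60, 60]
# Student B: excellent on two criteria, failing on the other two.
student_b = [85, 85, 35, 35]

print(aggregate(student_a))  # 60.0
print(aggregate(student_b))  # 60.0 -- same mark, very different learning
```

The identical marks are exactly the “compensation” behaviour Ian described: Student B can strategically ignore two criteria and still hit the threshold.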
There is much more I could write about this great keynote; I thoroughly enjoyed it.
Sally (@ProfSallyBrown) and Kay explored the use of exemplars to support students in developing assessment literacy. Their key message was that pre-emptive formative assessment can be crucial in developing unconscious and tacit knowledge about the assessment process. They found that the ‘failing exemplar’ (an example of poor-quality work) was even more helpful than a good-quality one in supporting students’ assessment literacy. My colleague Phil Long emphasized that good assessment literacy should include ideas, connections, and extensions, and that we should guide students through these stages. Sally and Kay’s slides are available here. They have also provided a useful assessment literacy bibliography.
Deena (@DeenaI) challenged us to think about the provocative idea of having students design their own assessment, something that belongs to the Learning Contract Design literature.
Deena based her talk on the idea that we can harness self-authorship (Baxter-Magolda) in assessment and use intrinsic motivation in delivering good work as a much more powerful driver than extrinsic motivators. According to the principles of self-authorship, students choose and write the criteria for assessment and the learning outcomes the assessment is going to address.
Mighty Laura (@laura_ritchie) emphasized the importance of student agency in assessment design and delivery. We need to help our students question the mode according to which they perform tasks, and of course we need to help them understand the tasks, acquire the necessary skills, and believe in themselves. (How could self-efficacy not come up when Laura is in the room?) In Laura’s talk, assessment is seen as a ‘criterion validated reflection’: what a beautiful definition! Laura asked us to define what an essay is, and she shared her view that the desired features of an essay should be: excellence, reflection, creativity, and learning. I fear creativity is the hardest one to harness, but it is also true that this feature is the most rewarding to discover in our students’ work. Laura has already written a great blog account of her session, well worth a read.
Linda presented the results of her research on how essay feedback to students relates to the mark awarded. She introduced us to Brown and Glover’s classification of feedback comments, which ranks comments motivationally (positive, negative, neutral) and practically (through indication, correction, and explanation). Linda then presented her work showing the correlation between marks and the proportion of positive and negative comments. This is fascinating research, which probably needs to be made a little more robust with larger sample sizes and more structure in the empirical analysis.
A large team of researchers (@y1feedback) from Ireland talked us through the material generated by a project aimed at evaluating different Technology Enhanced Learning (TEL) approaches to addressing the needs of first-year students. The first-year focus harnesses the research to the quest for ways to support transitions through TEL. Can digital technologies open ways to enhance feedback for first-year students? The researchers highlighted that the feedback experience of first-year students is often inconsistent. They advocate that good feedback needs to be both formal and informal, be feed-forward oriented, and be based on a dialogue. They compiled a wide range of case studies, and they observed that the challenges emerging from their surveys of TEL approaches are: (i) truly dialogic feedback is hard to implement, (ii) the potential of technologies is sometimes hard to realise, and (iii) the problem of competing priorities in feedback delivery is always present. Their website is full of great resources. I think the next stage of their project should consist of distilling what works and what doesn’t, creating a menu from which teachers can choose the pedagogies and tools best suited to their needs.
This was the end of the first day.
Margaret Price has a similar experience to mine (even though she is much more established than me, of course). She comes from the Business disciplines, but she migrated her research into education and pedagogy. Margaret started her keynote talk on the premise that feedback does not seem to have much of an effect.
- The discourse on assessment seems rather unsophisticated and superficial to her: issues with fairness, cheating, and grade inflation, are always on the agenda, but these are not the core issues.
- Margaret touched a little on anonymity, remarking that it was not even much of a debated issue. I must agree with her: it has always been imposed on me…and I hate it!
- She dug deeper when she said that there is a problem of collusion between staff and students to keep assessment and feedback as they are. More traditional forms of assessment tend to be taken for granted.
- Margaret remarked that we need to take a programme approach to feedback, not a piecemeal approach. This recalls our opening keynote by Ian Pirie. It led me to think that the constructive alignment theory can show us that sometimes learning and teaching practices are perfectly aligned…but on the wrong path!
- In line with the presentation on exemplars, we were challenged to reflect on the fact that assessment criteria are assumed to be explicit, but they are still imbued with tacit knowledge that can only be shared by exposure and experience.
- Marking consistency is another issue: there is huge mark variation, especially in essays. Phil Race tweeted that essays are good for giving feedback, but we should not mark them.
- What is the impact of feedback? What would generate good impact? Possible answers are: student engagement, understanding issues, experiencing a relational approach, and affecting self-efficacy.
- What makes good feedback ‘good’, then? Margaret suggests using student-researchers to find out. She claims there are three success factors: (i) technical (presentation), (ii) particularity (personal and engaged feedback), and (iii) recognition of student effort (including the level of detail in feedback).
- In terms of context-based criteria for good feedback, we can account for: (i) assessment design, (ii) pre-conditions, and (iii) marker predictability. (Timing is not perceived as a big issue).
- In terms of expectations, we need to account for: (i) mark expectations, (ii) student epistemology, resilience, and beliefs.
Margaret concluded her talk by suggesting that there is unexploited scope for assessment and feedback in the area of student development. This is quite a broad concept, and I will need to think it over.
This presentation discussed the role of marking rubrics in TurnitinUK. It was interesting to hear the pros and cons of using TurnitinUK. It seems that no platform is perfect. Features that caught my attention:
- Rubrics can help students to self-assess.
- TurnitinUK links feedback comments to learning outcomes.
- However, TurnitinUK doesn’t allow different rubrics on the same site: you need to set up a separate one for each rubric.
- An advantage of Blackboard is the ability to write in the rubric itself to customise it.
This presentation highlighted the problem of MOOC attrition rates: low engagement and patriotic bias in peer-marking (peers from similar background/countries tend to mark each other higher).
- Coursera addresses the problem using a “calibrated peer-assessment” system.
- MOOCs also have an element of self-assessment (after receiving the peer-assessment).
Self-assessment is still underrated in the literature. However, peer-assessment is still affected by a lot of attrition.
- A controversial question is: should ‘effort’ be included in the marking rubric? In my opinion this can be done only by specifying the evaluation criteria for this.
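The idea behind ‘calibrated peer-assessment’ can be sketched in a few lines. This is a simplified illustration of the general technique, not Coursera’s actual algorithm; the function names and numbers are my own. Each peer first marks calibration essays with known instructor marks; peers whose marks track the instructor’s closely get more weight when their marks for classmates are aggregated, which dampens the effect of biased markers.

```python
def calibration_weight(peer_marks, instructor_marks):
    """Weight a grader by mean absolute error on calibration essays."""
    errors = [abs(p, ) if False else abs(p - i)
              for p, i in zip(peer_marks, instructor_marks)]
    mae = sum(errors) / len(errors)
    return 1.0 / (1.0 + mae)  # smaller error -> weight closer to 1

def calibrated_mark(peer_marks_for_essay, weights):
    """Weighted average of peer marks for one student's essay."""
    total = sum(w * m for w, m in zip(weights, peer_marks_for_essay))
    return total / sum(weights)

# Three peers mark two calibration essays (instructor marks: 70 and 50).
weights = [calibration_weight(p, [70, 50])
           for p in ([68, 52], [90, 75], [70, 50])]

# The systematically generous grader (second peer) now counts for little,
# so the calibrated mark stays close to the two accurate peers' marks.
print(calibrated_mark([60, 85, 62], weights))
```

A grader who marks everyone from their own background higher would simply earn a low calibration weight, which is how this scheme addresses the ‘patriotic bias’ mentioned above.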
Two presentations (and a keynote) of the day were based on the development of the TESTA programme. This project articulates across audits, experience questionnaires, focus groups, case studies, workshops, and a range of resources designed for programmatic review of assessment and feedback practices. Highlights from these talks are the following:
- The shift in perspective should move us: (i) from “my module” to “our programme”, (ii) from teacher-centred to student-centred, (iii) from NSS to enhancement.
- In terms of the NSS, key concepts were knee-jerk reactions and coping with poor performance through a spit-and-polish approach.
- In terms of curriculum design, it was emphasized that content and knowledge are dominant, but there is little training on curriculum design.
- TESTA highlighted a tendency to base interventions in teaching and learning on misguided assumptions rather than on the ‘academic’ approach of metrics and data analysis. This needs more attention (and partnership with the students).
- I just observed that receiving feedback is an emotional process and we should acknowledge that.
- The issue of staff workload in marking was an important one. An interesting piece of feedback from staff: “Using online marking gave me back my Christmas holidays”.
- Reducing staff workload is a good incentive to get staff to buy into innovation, but sometimes staff do not see the advantage of technology.
- It was advocated that reducing summative assessment and increasing formative assessment can reduce workload, but I would disagree (unless we deliver one less carefully than the other).
That is all for now!
Yesterday I attended a seminar on internationalisation, discussing issues of communicating in English with international students. It proved much more useful than I had ever expected. I storified the tweets for this session, and the debate went on until late at night with some of my colleagues.
The seminar speaker, Chris Bishop, is a Learning Enhancement Tutor for the Dean of Students’ Office, here at UEA. Chris works a lot with international students with the aim of supporting their academic writing.
First of all I loved how he framed academic writing itself:
The conventions surrounding academic writing indicate ‘an institutional practice of mystery’ (Lillis, 1999:12)
I could not agree more.
Chris’ talk highlighted how frustration emerges on both sides as international students and their teachers react to coursework and feedback. Teachers’ expectations are very high. More than that, those expectations might be very biased: are we stating the norm when we comment on our students’ writing? Or are we just stating our personal preference?
Providing feedback on somebody’s writing and ways of expressing themselves puts us in a position of power. We can easily hinder student confidence, which is never for the better.
International students are aware that their English is not great. Our task is not to remind them of that, but to help them make the best of the skills they have, and of those they can acquire.
My own contribution to the discussion, based on my personal experience, is the following:
- International teachers might do as much damage as native speakers. When I started teaching in the UK I was rather strict with language, and my feeling was: if I managed to learn and master this, so should you. It took me time to revise this attitude, and nobody ever taught me to.
- International students ‘absorb’ English like a sponge. I most certainly did. The problem is that, in an international environment, one absorbs whatever one is exposed to, which might or might not be grammatically and idiomatically correct. It took me ages to correct ways of expressing myself that I had picked up from others and later found to be wrong, or at least imprecise.
As I emailed Chris after the session, I must say that my appreciation for his work has increased exponentially. I mainly fight battles on one front in my teaching. (Hopefully with the students and not against the students). But it has become evident to me that, in his line of duty, he has to fight battles simultaneously on two fronts!
We need to keep discussing these topics and raise awareness of the issues of working in internationalised academic environments. We need to find new ways, and some sort of consensus, on how to assess the work of our international students and how to compare it to the work of native speakers. We talk a lot about managing students’ expectations. But who manages ours?
I attended the Turning Technologies Users Conference held in Manchester on 22 September 2014. It was my first time at this kind of event.
I have been tweeting a lot (as usual):
These resources were made available by Turning Technologies:
- Videos of short presentations and testimonials
- Slides of the presentations:
- Introducing Classroom Interactivity – two years on
- Flipping Roles: Student Sourcing of Both Questions and Answers
(This is our own UEA Simon Lancaster in action: excellent talk!)
- From Commonwealth Games to the Common Goal of Improving Student Feedback
I enjoyed the event overall and would definitely recommend it. However, I think this conference would be even more beneficial to newcomers (rather than “users”) to the world of Student Response Systems.
The keynote by Eric Mazur was excellent. (I would not have expected any less!)
He also pointed us to resources on Peer Instruction techniques: the Peer Instruction Network.
Turning Technologies have other interesting online resources for Student Response Systems users, available from the following links:
Finally, of course there is my own resource and contact page, linked to the HEA Project that I am developing on fostering Academic Self-Efficacy with the aid of Student Response Systems.
I recently delivered a workshop on methods to integrate Student Response Systems and VLEs for formative assessment and teaching evaluation at the Anglia Ruskin University “Assessment Fiesta”. It was a great opportunity to meet colleagues, listen to interesting presentations, and showcase my project on Academic Self-Efficacy and learning technologies (When Student Confidence Clicks). So here follows my account for the day and what I am most certainly taking home from this experience.
A link to the storify tweets collection for the day is here.
First of all, I listened to Kate Little (NUS) (@katelittle)
in conversation with Prof Sally Brown (@ProfSallyBrown) on issues related to assessment and feedback, its role in the NSS and I found out about the NUS Assessment Benchmarking Tool, which I want to use to check that my assessment strategy matches with NUS objectives.
- The NUS plans to bring Student Unions up to the same level of NSS score analysis as that adopted by HEIs.
- Consistent and ‘authentic’ assessment are paramount values, whereas anonymity does not seem to be a big concern.
- Setting up the scene on plagiarism by saying during inductions ‘not to do it’ does not seem the best approach.
- Electronic feedback is fine, but students want to be able to speak to the marker in person if need be.
- Start administering assessment early, and embed tasks with your feedback to elicit further action and response to the feedback.
I then went to a workshop by Phil Race (@RacePhil). My first time: very powerful. There was so much to take in! What I learnt…
- One hasn’t learnt something until one has spoken about it. Students need to verbalise. (And this is why teachers learn even more than students do while they teach.)
- I learnt how to use post-its to generate quick and useful feedback within a group of people. (Apparently Phil is famous for this, but to me it was an epiphany).
- An important question to ask (aside from what, when, where, and how) is ELSE? It creates more inquiry and stimulates deeper analysis.
- “Show students what quality is” says Royce Sadler.
There was a showcase of good practice made of short 7-minute presentations:
- I loved Debbie Holley’s (@debbieholley1) presentation on the short videocasts she posts on her VLE, where she gets personal and reassuring to the students. Really nice! She also has a good blog with ideas.
- I also liked Matthew East’s (@mdleast) presentation on group wikis developed and delivered via VLE from students to students. Excellent work and idea.
I delivered my own workshop, which was well received (chuffed!)
The final remarks (with the help of Sally Brown):
- Students are better learners if they understand how learning occurs.
(Sally Brown always cites education literature to students).
- “Show them what quality is” says Royce Sadler.
- Read Royce Sadler, Dylan Wiliam, and David Nicol (suggested by Debbie Holley) on assessment and feedback.
Anglia Ruskin (Cambridge) had a presentation delivered by Prof Mantz Yorke, which pretty much relates to issues of student self-efficacy. I want to find out more about this. An introduction about his talk is here. (This was on 1 Oct 2014).