I attended the SEDA Spring Conference on Teaching Learning and Assessment, which was held in Edinburgh on 12-13 May.
It was a great conference, as expected, and this is my account of the two days.
Ian’s keynote was focussed on the challenges in implementing good practice in assessment and feedback. His slides were imbued with great references to work done in the field and, hopefully, they will be online soon.
- Ian highlighted the importance of tracking students online, so that educators can evaluate what works and what doesn’t (Wayne Smutz, 2013).
- Talking about online learning, Ian stated that students want access to information online 24/7, and all in one place. Thus, designing online content should always be done with students in mind.
- Ian also highlighted the role of assessment in developing employability skills and what employers want from graduates. I have some reservations about this topic, as I still believe that universities should take what employers want with a pinch of salt. Our role is to deliver academic skills and the ability to learn, not training in specific skills.
- I really liked Ian’s idea that assessment should be both ‘economically efficient’ as well as ‘academically efficient’. This is a very pragmatic and effective idea: we need to make it good, but also feasible.
- The big problem with assessment is that it usually fails to meet students’ expectations, and it can drive the wrong behaviour. A few examples:
- The symbolic use of marking (do we really need a 100-point scale? What does it mean to the students?). Students tend to have a simpler marking rubric in their minds: “I did well, I did good, I did ok, I did awful”. But often this scale is misaligned with the mark they get.
- Structured assessments, full of sub-parts and sub-marks that average each other out, do not give a clear picture of where students need improvement; they just drive strategic behaviour based on compensation and hitting desired thresholds, stripping away the complexity of the learning process. (Grades can conceal actual performance.)
- We should bear in mind that we are assessing the work, not the person, and we should convey this to the student as much as we can.
- Sometimes the needs of good assessment design and good marking clash with institutional constraints based on turnaround time: a typical issue within a constructive alignment perspective.
- Ian said that feedback is not effective if there is no evidence of its consequences. Another powerful concept. We often talk about relating assessment to learning outcomes, but do our students know what the learning outcomes are? (Susanne Orr). One way to address this problem is to have students rewrite learning outcomes in their own words, in partnership with staff, to contextualise them.
- Ian suggests that we should use a ‘graded learning profile’, a form of learning portfolio with minimal aggregation of marks and a clear understanding of the skills attained on the student’s side.
- Myth busting: it is not true that students will work to meet minimum requirements, if the assessment process is constructively aligned.
There is much more I could write about this great keynote; I thoroughly enjoyed it.
Sally (@ProfSallyBrown) and Kay explored the use of exemplars to support students in developing assessment literacy. Their key message was that the use of pre-emptive formative assessment can be crucial in the development of unconscious and tacit knowledge about the assessment process. They found that the ‘failing exemplar’ (an example of poor-quality work) was even more helpful than a good-quality one in supporting students’ assessment literacy. My colleague Phil Long emphasized that good assessment literacy should include ideas, connections, and extensions, and that we should guide students through these stages. The slides of Sally and Kay are available here. They have also provided a useful assessment literacy bibliography.
Deena (@DeenaI) challenged us with the provocative idea of having students design their own assessment, something that belongs to the Learning Contract Design literature.
Deena based her talk on the idea that we can harness self-authorship (Baxter-Magolda) in assessment, using intrinsic motivation to deliver good work as a much more powerful driver than extrinsic motivators. According to the principles of self-authorship, students choose and write the criteria for assessment and the learning outcomes the assessment is going to address.
Mighty Laura (@laura_ritchie) emphasized the importance of student agency in assessment design and delivery. We need to help our students question the mode according to which they perform tasks, and of course we need to help them understand the tasks, acquire the necessary skills, and believe in themselves. (How could self-efficacy not come up when Laura is in the room?) In Laura’s talk, assessment is seen as a ‘criterion validated reflection’: what a beautiful definition! Laura asked us to define what an essay is, and she shared her view that the desired features of an essay should be: excellence, reflection, creativity, and learning. I fear creativity is the hardest one to harness, but it is also true that this feature is the most rewarding to discover in our students’ work. Laura has already written a great blog account of her session, well worth a read.
Linda presented the results of her research on how essay feedback to students relates to the mark awarded. Linda introduced us to Brown and Glover’s classification of feedback comments, ranking comments motivationally (positive, negative, neutral) and practically (through indication, correction, and explanation). She then presented her work showing the correlation between marks and the proportion of positive and negative comments. This is fascinating research, which probably needs to be made a little more robust with larger sample sizes and more structure in the empirical analysis.
A large team of researchers (@y1feedback) from Ireland talked us through the material generated by a project aimed at evaluating different Technology Enhanced Learning (TEL) approaches to address the needs of First Year students. The First Year focus harnesses the research to the quest for ways to support transitions through TEL. Can digital technologies open ways to enhance feedback for First Year students? The researchers highlight that the feedback experience of first year students is often inconsistent. They advocate that good feedback needs to be both formal and informal, be feed-forward oriented, and be based on a dialogue. They compiled a wide range of case studies, and they observed that the challenges emerging from their surveys of TEL approaches are: (i) truly dialogic feedback is hard to implement, (ii) the potential of technologies is sometimes hard to realise, and (iii) the problem of competing priorities in feedback delivery is always present. Their website is full of great resources. I think the next stage of their project should consist of distilling what works better and what doesn’t, creating a menu from which teachers can choose the pedagogies and tools best suited to their needs.
This was the end of the first day.
Margaret Price has a similar experience to mine (even though she is much more established than me, of course). She comes from the Business disciplines, but she migrated her research into education and pedagogy. Margaret started her keynote talk on the premise that feedback does not seem to have much of an effect.
- The discourse on assessment seems rather unsophisticated and superficial to her: issues with fairness, cheating, and grade inflation, are always on the agenda, but these are not the core issues.
- Margaret touched a little on anonymity, remarking that this was not even much of a debated issue. I must agree with her; it has always been imposed on me…and I hate it!
- She dug deeper when she said that there is a problem of collusion between staff and students to keep assessment and feedback as they are. More traditional forms of assessment tend to be taken for granted.
- Margaret remarked that we need to take a programme approach to feedback, not a piecemeal approach. This recalls our opening keynote by Ian Pirie. It led me to think that the constructive alignment theory can show us that sometimes learning and teaching practices are perfectly aligned…but on the wrong path!
- In line with the presentation on exemplars, we were challenged to reflect on the fact that assessment criteria are assumed to be explicit, but they are still imbued with tacit knowledge that can only be shared by exposure and experience.
- Marking consistency is another issue: there is huge mark variation, especially in essays. Phil Race tweeted that essays are good for giving feedback, but we should not mark them.
- What is the impact of feedback? What would generate good impact? Possible answers are: student engagement, understanding issues, experiencing a relational approach, and affecting self-efficacy.
- What makes good feedback ‘good’ then? Margaret suggests using student-researchers to find out. She claims that we have 3 success factors: (i) technical (presentation), (ii) particularity (personal and engaged feedback), and (iii) recognition of student effort (including the level of detail in feedback).
- In terms of context-based criteria for good feedback, we can account for: (i) assessment design, (ii) pre-conditions, and (iii) marker predictability. (Timing is not perceived as a big issue).
- In terms of expectations, we need to account for: (i) mark expectations, (ii) student epistemology, resilience, and beliefs.
Margaret concluded her talk suggesting that there is unexploited scope for assessment and feedback in the area of student development. This is quite a broad concept, and I will need to think it over.
This presentation discussed the use of marking rubrics in TurnitinUK. It was interesting to hear the pros and cons of using TurnitinUK. It seems that no platform is perfect. Features that caught my attention:
- Rubrics can help students to self-assess.
- TurnitinUK links feedback comments to learning outcomes.
- However, TurnitinUK doesn’t allow different rubrics on the same site; a separate one needs to be set up for each rubric.
- An advantage of Blackboard is the ability to write in the rubric itself to customise it.
This presentation highlighted the problem of MOOC attrition rates: low engagement and patriotic bias in peer-marking (peers from similar background/countries tend to mark each other higher).
- Coursera addresses the problem using a “calibrated peer-assessment” system.
- MOOCs also have an element of self-assessment (after receiving the peer-assessment).
Self-assessment is still underrated in the literature. However, peer-assessment is still affected by a lot of attrition.
- A controversial question is: should ‘effort’ be included in the marking rubric? In my opinion this can be done only by specifying explicit evaluation criteria for it.
Two presentations (and a keynote) of the day were based on the development of the TESTA programme. This is a project articulated across audits, experience questionnaires, focus groups, case studies, workshops, and a range of resources designed for programmatic review of assessment and feedback practices. Highlights from these talks are the following:
- The shift in perspective should lead to move: (i) from “my module” to “our programme”, (ii) from teacher to student-centred, (iii) from NSS to enhancement.
- In terms of NSS, key concepts were: knee-jerk, and coping with poor performance through a spit and polish approach.
- In terms of curriculum design, it was emphasized that content and knowledge are dominant, but there is little training on curriculum design.
- TESTA highlighted a tendency to base teaching and learning interventions on misguided assumptions, rather than on the ‘academic’ approach of metrics and data analysis. This needs more attention (and partnership with the students).
- I just observed that receiving feedback is an emotional process and we should acknowledge that.
- The issue of staff workload in marking was an important one. An interesting piece of feedback from staff: “Using online marking gave me back my Christmas holidays”.
- Reducing staff workload is a good incentive to get staff to buy into innovation, but sometimes staff do not see the advantage of technology.
- It was advocated that reducing summative assessment and increasing formative assessment can reduce workload, but I would disagree (unless one is conducted less carefully than the other).
That is all for now!
I attended this HEA event while in full teaching mode. I had to run from a late-afternoon lecture to the station, jump on a train, get a wee sleepover in York, and take a late return journey home. Well worth the effort! HEA might be a little more money-constrained, but it can still count on enthusiastic promoters and discipline leads!
Besides, they have adopted me as part of their extended family, you know? After being awarded my Teaching Development Grant last year, I started to get more and more involved with their events. I presented at 3 of their conferences in one year, tweeted, spammed everybody with my fliers, learned, promoted, disseminated, developed…and made some great connections and friends!
Well, well, without further rambling, let’s talk about this great event.
- First of all we had a fantastic keynote talk by Mike Neary (@mikeneary) on the Lincoln “Student as Producer” model, as well as on many other great Cooperative Education initiatives. The Students as Partners theme was at the centre of the agenda, but many other issues were raised in the Q&A session, including how we can align the UKPSF framework with the principles of good students-as-partners practice.
- The first workshop I attended was led by Alex Buckley (@ajbtwit) and Camille Kandiko (@cbkandiko). They presented the first results of the UKES 2014 (UK Engagement Survey) and discussed with us issues connected with cognitive testing and the development of the UKES questionnaire. I will definitely have to read the full report, and there is an interesting blog post as well. We also had some interesting conversations about how the data process flows within each individual institution, and it was fascinating to hear about different experiences across the sector. I need to think much more about what was discussed in this workshop. More posts will follow.
- The second workshop was led by the mighty Kathy Wright (@HEAEducation). Kathy tackled issues related to student feedback and the NSS from a refreshing perspective, leading us to reflect on what can be done at an individual level (as module convenors, or leaders of specific units) to prevent bad surprises from occurring. She encouraged us to think about our teaching journey, sailing off with our crew, chasing a treasure island full of great learning objectives. We explored what could hamper our journey, how we could avoid shipwrecks, and what could help us sail safe and sound to our destination. Again, more posts will follow, as I am planning to illustrate my own journey as I thought of it during the workshop.
This was a great event, which I would recommend to everybody who is passionate and committed to good practice in learning and teaching in HE. I tweeted away for the whole duration of the day and I posted a Storify report of tweets (#HEAenhancement).
I recently delivered a workshop on methods to integrate Student Response Systems and VLEs for formative assessment and teaching evaluation at the Anglia Ruskin University “Assessment Fiesta”. It was a great opportunity to meet colleagues, listen to interesting presentations, and showcase my project on Academic Self-Efficacy and learning technologies (When Student Confidence Clicks). So here follows my account of the day and what I am most certainly taking home from this experience.
A link to the Storify tweets collection for the day is here.
First of all, I listened to Kate Little (NUS) (@katelittle) in conversation with Prof Sally Brown (@ProfSallyBrown) on issues related to assessment and feedback and its role in the NSS, and I found out about the NUS Assessment Benchmarking Tool, which I want to use to check that my assessment strategy matches NUS objectives.
- The plan of the NUS is to bring Student Unions up to the same level of NSS score analysis adopted by HEIs.
- Consistent and ‘authentic’ assessment are paramount values, whereas anonymity does not seem to be a big concern.
- Setting the scene on plagiarism by simply saying ‘don’t do it’ during inductions does not seem the best approach.
- Electronic feedback is ok, but students want the option of personal contact with the marker if need be.
- Start early with administering assessment, and embed tasks within your feedback to elicit further action and response to the feedback.
I then went to a workshop by Phil Race (@RacePhil). My first time: very powerful. There was so much to take in! What I learnt…
- One hasn’t really learnt something until she has spoken about it. Students need to verbalise. (And this is why teachers learn even more than students while they teach).
- I learnt how to use post-its to generate quick and useful feedback within a group of people. (Apparently Phil is famous for this, but to me it was an epiphany).
- An important question to ask (aside from what, when, where, and how) is: what ELSE? That creates more inquiry and stimulates deeper analysis.
- “Show students what quality is” says Royce Sadler.
There was a showcase of good practice made of short 7-minute presentations:
- I loved Debbie Holley’s (@debbieholley1) presentation on short videocasts she posts on her VLE, where she gets personal and reassuring to the students. Really nice! She has a good blog with ideas as well.
- I also liked Matthew East’s (@mdleast) presentation on group wikis developed and delivered via VLE from students to students. Excellent work and idea.
I delivered my own workshop, which was well received (chuffed!)
The final remarks (with the help of Sally Brown):
- Students are better learners if they understand how learning occurs.
(Sally Brown always cites education literature to students).
- “Show them what quality is” says Royce Sadler.
- Read Royce Sadler, Dylan Wiliam, and David Nicol (as suggested by Debbie Holley) on assessment and feedback.
Anglia Ruskin (Cambridge) had a presentation delivered by Prof Mantz Yorke, which pretty much relates to issues of student self-efficacy. I want to find out more about this. An introduction about his talk is here. (This was on 1 Oct 2014).