On Assessment Systems for the Arts
Whilst I’m no longer directly involved in assessing music in either competitive or educational settings, I still regularly interact with a variety of institutions that use assessment systems, and so still find myself thinking about how they work. The users of these systems – competitors, examination candidates, and the teachers and coaches who support them – often have a slightly conflicted relationship with them. On the one hand, they value the external validation that the systems offer; on the other, they don’t quite trust them to recognise the value of the artists they judge.
I recently saw a disgruntled teacher complain about the feedback a student had been given on the grounds that art is ‘subjective’ – and thus by implication that what the examiner had criticised could have been a legitimate choice rather than a flaw. This is one of those comments that is both totally right and maddeningly wrong; it captures an important truth but also misses a whole lot of simultaneously true things. And since it’s quite a common line of grumbling about assessment in the arts, I felt it was worth unpacking a bit.
So, there are things that assessment systems can do well (and usually do). Technical control is readily quantifiable and observable, and thus a very reliable indicator of level. You can get some quibbling around the edges (was that poor tempo control or deliberate rubato?), but mostly technical competence is clearly understood. If someone is disappointed with the outcome on technical grounds, the issue is usually a lack of awareness of what constitutes higher-level performance in that discipline. If you are the first in your family and school to reach that level, everyone around you will lead you to think you’re brilliant, when in a larger pond you’d be a small-to-medium-sized fish.
Questions of artistry – beauty, communicativeness, insight – are often thought to be more problematic to judge, though in practice assessment systems generally do a decent job with these too. They may be harder to quantify than what proportion of the notes were right and in tune, or whether you got the right number of beats in every bar, but there are still generally agreed modes of understanding within communities of practice. If this weren’t the case, people wouldn’t have shared aesthetic experiences while listening to music.
It is interesting to note, though, that these shared values are negotiated rather than mandated, and you can see this in the different places the negotiation happens in different assessment methods. In systems where the assessors are trained by an organisation (e.g. grade examinations, barbershop judging), the negotiation happens during the training process, and assessors then apply the agreed standards independently in operation. In systems where practitioner experts are brought in as part of the assessing team (e.g. conservatoire recitals and composition portfolios), the panels use structured discussion protocols about the merits of each performance or portfolio to agree artistic judgements before converting these into numerical values within the assessment system.
When I say that systems handle these elements well, what I actually mean is that they handle them consistently. The primary goal of any assessment system is standardisation of level. This stems both from a responsibility to be fair to all those being assessed, and from a need to guard the integrity of the outcomes so that they remain usefully meaningful to the artistic communities who refer to them. If your Grade 8 piano counts towards your university entrance qualifications, it needs to be clear what that Grade 8 means.
Having been built for consistency, what artistic assessment systems don’t handle well is the unexpected. They can be built to recognise creativity (as in composition degrees), but even that is always in the context of shared horizons of expectation. If a system comes across something it hasn’t been calibrated for – whether that’s personal individuality, the transfer of aesthetic practices from a different domain into the one being judged, or straight-up innovation – it is uncertain how it will handle it. The assessors might wildly over- or under-score it, or they might simply disregard the most interesting thing about the experience and grade it according to the criteria that are already established. By definition, calibration requires encountering something enough times to make comparisons and negotiate how it is to be valued, so anything genuinely new or different takes its chances on first encounter.
The main conclusion to draw from this is that no assessment system will ever really see you in your full individual artistic glory. By all means continue to use them – they provide useful practice goals and external markers relative to the specific artistic communities they serve. There will always be some useful feedback to guide you on your journey, and the fact that it is quite generalised makes it no less valid. But don’t take the process too personally – it’s not really about you. The artist asserting their right to a subjective take on their art identifies something experientially essential to the process of making meaningful music, but functionally irrelevant to the assessment process.