abcd Research Developments


Michael Bonshor's methodology diagram

The recent abcd Choral Leaders Festival in Birmingham saw the introduction of a new research stream to this annual event. This is the brainchild of Martin Ashley (whose own research is well worth reading if you don’t know it already), as part of a wider project to facilitate a research culture amongst practitioners and develop a disciplinary community amongst choral researchers. Look out for the launch of a new journal in the coming months as another part of this initiative.

As you might imagine, a morning immersed in multiple researchers’ work filled more pages of my notebook than can be digested in a single blog post. Indeed, many of the things I noted will show more in the way they nourish my thinking over the coming months than in any immediate reporting. But there are themes that need thinking about, and this blog is where I do my thinking in public.

Actually, before I start teasing out these threads, I would just comment that it was a pity we had no discussion time in which to start making these cross-references together. The sessions used the standard academic 30-minute slots, traditionally conceived as a 20-minute paper plus 10 minutes of questions. But what with not quite starting on time, making introductions, and the occasional technical glitch, you’ve probably lost 5 minutes of that, and it then only takes a short over-run to obliterate all the discussion time.

Part of the problem here, I think, is people framing their papers as reporting on a specific piece of research and then trying to fit it into 20 minutes, rather than asking: ‘what, of all the results this research produced, can I usefully share in a very short time frame?’ Part of it is also, I think, due to changes in presentational norms: very few people read papers verbatim these days, preferring instead to talk to a structure articulated on slides. On the whole this produces much more engaging presentations, but it is a more diffuse form of discourse, so it gives you much less control over time and reduces even further the amount you can cover in that time.

It should be added that all the presenters were great speakers: personable, clear, fluent. Though as at least three of them are choral conductors as well as scholars, we should not perhaps be surprised – these are essential survival skills for holding the attention of amateur singers. But that is all the more reason why I missed the chance to hear the dialogue between them as well as the content of what they’d brought to share with us.

I mention this, you understand, in the spirit of dropping a hint to any researcher who may happen across my blog. If your research is worth sharing, you must expect your audience to want to play with all the interesting ideas in it, and so give them space to do so. They will be happier, and you will be more affirmed in your work. They can read all the stuff you had to cut to achieve this in the published version, which will be better for having lived through the discussion process at the paper stage.

Anyway, having got that off my chest, I’d like to mull briefly on methodology. Michael Bonshor made this a theme for his paper, and included a helpful diagram to show the relationship between research and practice: how lived experience inspires the questions research addresses, and how the answers research produces then get applied back into real life. This is a picture of my life.

Michael introduced this diagram in his exposition of qualitative methodologies, but on reflection I think it applies equally well to quantitative work such as Richard Seaton et al’s paper on intermittent pitch drift. And for that matter, Katie Overy’s keynote paper on music and neuroscience. She spoke of the way that the research designs of hardcore experimental methods such as fMRI are underpinned by decades of anecdote and praxis, developing hypotheses and framing questions that the empirical investigations might answer. These in turn need not only experimental replication to validate them, but applied case studies back in the field to evaluate the usefulness of their conclusions.

Both of these conceptions of method seemed to me more useful than the division into ‘objective’ and ‘subjective’ methods made in one session introduction. Not merely for the implied hierarchy therein (which is even sharper in these terms than between ‘hard’ and ‘soft’, two other arguably unhelpful terms), but for its inaccuracy. Qualitative methods collect reports of subjective experience from their research subjects as data for analysis, but the processing of those data – the research itself – aims to be systematic.

Even if one regards a researcher’s membership of the musical community being studied as an impediment to ‘true’ objectivity, the more accurate term to describe such methods would be ‘intersubjective’. But even then, anyone who regards ‘true’ objectivity (without the scare quotes) as logically possible probably needs to go back to Kant’s Ding-an-Sich and get themselves sufficiently epistemologically confused to stop lording it over other people’s methods. And yes, I am using long words as an act of academic aggression there, thank you very much. Apparently I got a bit grumpy about this.

On to more cheerful methodological matters: there were some useful thoughts around the distinction between studying choirs in the lab versus in their natural habitat. One of the challenges for the work Katie Overy reported on is how to derive results that are meaningful for real-life situations from the very unnatural environment of brain scanners. (Answer: be very clear and specific in your research design, and gradually build up a global picture from the results of many individual studies.) One of the strengths, meanwhile, of Seaton et al’s empirical work on pitch drift was its collection of quantitative measures from naturalistic environments.

These are matters I explored at length in Part I of my second book, and by happy coincidence this is also an area I’ve been asked to address in a research seminar later this year, so it was a good moment to have a collection of new examples to hand to start framing my arguments for that. But enough of methodology for now; in my next post(s) I’d like to turn to content and tease out some interesting cross-references between the papers.
