Failure is an unavoidable element of any academic career. For all but a small number of ‘superstar über-scholars’, most of the research papers we submit will be rejected, our most innovative book proposals will be politely rebuffed, and our applications for grants, prizes and fellowships will come to nothing. There is, of course, a strong correlation between ambition and failure: the more innovative and risky you try to be, the bolder the claims you try to substantiate, and the ‘bigger’ the journal you try to publish in, the higher your chances of rejection.
Imagine. You are an ecologist. You recently discovered that a chemical discharged from a local manufacturing plant is threatening a bird that locals love to watch every spring. Now, imagine that you desperately want your research to be relevant and make a difference in helping to save these birds. All of your training gives you a depth of expertise that few others possess. Your training also gives you the ability to communicate and navigate things such as probabilities, uncertainty, and p-values with ease.
But as NPR’s Robert Krulwich argues, focusing on this very specialised training when you communicate policy problems could lead you in the wrong direction. While remaining true to the science and best practices of your training, you must also be able to tell a compelling story. Perhaps combine your scientific findings with a story about the little old ladies who feed the birds in their backyards on spring mornings, emphasising the beauty and majesty of these avian creatures, their role in the community, and how the toxic chemicals are not just a threat to the birds, but also a threat to the community’s understanding of itself and its sense of place. The latest social science suggests that if you tell a good story, your policy communications are likely to be more effective.
Depoliticisation is a key trend identified in the political science literature in recent years, succinctly defined by Flinders and Wood as “the narrowing of the boundaries of democratic politics”. Henrik Bang identifies “big politics” as a root cause of depoliticisation, where ‘star quality’ politicians, academics and others dominate the public debate, squeezing out the less powerful and eroding the links between authority and the public. If citizens feel constrained by such parameters of politics, then perhaps it is unsurprising when they vote for radical options such as leaving the European Union or electing Donald Trump to the US presidency. So here, depoliticisation is a means of suppressing debate, only for it to erupt at a later point in the political process.
Supranational and cross-national funding is increasingly the norm in a context characterised by international consortia. Although bringing advantages in terms of scale, it raises issues about the relative salience of national location and gender, and their implications for the funding of projects led by women and for the composition of research teams. These themes are explored in a case study of a cross-national research programme, with a broadly Nordic funding structure, based in Sweden, and with a total budget of approximately EUR 3 million. Two critical intervention points were identified: first, those relating to the relative power of the location-based Steering Group and the gender-balanced expert panels; and second, the project leaders’ attitudes to diversity.
In a recent column in The Telegraph, Allister Heath claims that the humanities and social sciences are suffering from increasing groupthink, inwardness and irrelevance – creating an environment in which certain political outlooks are suppressed and academic research rarely resonates beyond the hallowed halls of the university. Such an account simply does not square with the realities of universities in twenty-first-century Britain. Heath praises the university world of the twentieth century but neglects the golden rule that drove that work and is still present in twenty-first-century academia: make sure you have robust evidence to support your arguments. In terms of academic research, the supposed thought police of the left are little in evidence in the pluralistic university faculties that we know across the UK, places in which rich debates over theory and methods take place.
by Andrew Ryder, Fellow at the University of Bristol, Associate Fellow at the Third Sector Research Centre, University of Birmingham, and Visiting Professor at Corvinus University Budapest.
It is my contention that universities are institutions of central importance in maintaining humanist values. Alas, we live in an age where such a vision seems to be at risk through an audit culture that commodifies and tames knowledge production. I come from a background of service provision and activism, as a teacher and later community organiser working for Gypsy, Traveller and Roma (GTR) communities, and have sought to base this work on emancipatory practice. Since I started lecturing full-time in higher education five years ago, through employment at the Corvinus University Budapest and a series of fellowships at the University of Bristol and Third Sector Research Centre, Birmingham, I have sought to fuse my previous background of emancipatory work with knowledge production. This has primarily been achieved by promoting collaborative research with GTR communities. There is a growing interest in the co-production of research knowledge involving academics working in partnership with marginalised citizens and communities. However, the concept of community participation in research – certainly as equal partners – has been, and remains, contested. Is the knowledge generated ‘tainted’ by activism and engagement, or can it be critical and objective?
Early in my studies, a supervisor recommended that I replicate a key publication in my research area on the relationship between public opinion and social welfare policy. Throughout my entire dissertation I could not do it. This is how I arrived at the following conclusion:
Different researchers (or teams) who work with the same data and employ the same statistical models will not arrive at the same results.
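A deliberately invented toy sketch can illustrate how this happens (none of the numbers below come from my actual study): two analysts fit the same model – a simple ordinary least squares regression – to the same raw data, yet a single undocumented analytic choice, here how one missing value is handled, produces different coefficients.

```python
# Invented illustration: same data, same model, different undocumented
# choices about a missing value -> different results.

def ols_slope(xs, ys):
    """Closed-form OLS slope for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
    var = sum((xi - mx) ** 2 for xi in xs)
    return cov / var

# The shared raw data; None marks a missing outcome.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.1, 8.2, None]

# Analyst A: listwise deletion -- drop the incomplete case entirely.
pairs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
slope_a = ols_slope([p[0] for p in pairs], [p[1] for p in pairs])

# Analyst B: mean imputation -- fill the gap with the observed mean.
observed = [yi for yi in y if yi is not None]
mean_y = sum(observed) / len(observed)
slope_b = ols_slope(x, [yi if yi is not None else mean_y for yi in y])

print(slope_a, slope_b)  # the two 'identical' analyses disagree
```

Neither analyst is wrong, and neither choice would typically be reported in a methods section – which is precisely why two teams with the same data and the same stated model can diverge.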
My study was actually a reanalysis, not a replication, because I used the same data and methods as the original researchers. Of course, in true replication studies researchers do not expect to arrive at identical or even similar results: the subjective perceptions of the scientists and the unique observational contexts lead to variations in results. But with secondary data and a reproduction of the statistical models, how are different outcomes possible? These secondary observer effects, as I label them,