The Way Ahead—Opportunities and Possibilities



Emerging Lessons

Prioritising Science Communication

As has been discussed, the central thrust of discussion at the ‘Gover’Science’ Seminar centered on public engagement in science governance. However, it was emphasized a number of times at the Seminar that effective public engagement can only take place against a wider background of successful general science communication [Q].

It is important not to allow specific interests in participatory methods to undermine appreciation of the importance of clear, accurate, intelligible, balanced communication of relevant prevailing scientific knowledge [J].

This is especially significant with respect to the remits of agencies specifically charged with responsibilities for facilitating public engagement activities [Q].

The importance of effective science communication applies both within individual exercises in public engagement and with regard to wider, encompassing policy discourses.

Indeed, somewhat ironically, it applies equally to the communication to wider audiences of the results of participatory exercises themselves [J]. Here, reporting is often too long-winded [O] and expressed in language that is too technical [K]. With respect to the effective communication of science in wider policy discourses, outside public engagement processes themselves, particular attention must be paid to the role of the media.

Broadcast and printed media wield considerable influence on policy makers, if not on public attitudes themselves [L].
Indeed, there is evidence that policy makers often pay more attention to the reporting of science in the media than they do to more formal reporting channels [L].

Although social science research repeatedly shows that the reporting of science in the media does not determine public attitudes in the simple fashion that is often assumed by policy makers [E], the persistence of such perceptions in itself serves unjustifiably to exaggerate the mediating role of the media in wider science policy discourses.

Among some in the risk communication community, there is a concern that media representations of science tend to dwell too much on the negative connotations of technological risk [L].

Under various views, this can present dangers both of ‘warning fatigue’ over excessive media discussion of risk and of ‘lifestyle coercion’, under which spurious pressure militates unreasonably against certain consumption patterns [L].

The specifics of such opinions may vary from case to case, but what is clear is that the media often have their own very strong agendas.

Science communication by particular media outlets is often motivated by overtly political campaigning aims. Even where this is not the case, priority is persistently attached to finding interesting ‘storylines’ rather than to setting out a balanced picture [L].

The media are particularly poor at communicating the more complex aspects of uncertainty, ambiguity and ignorance discussed above (Section 2.2.4), often rendering media discussion more polarized on these matters than public attitudes themselves [E].

Taken together, these points about the role of the media in wider processes of science communication underscore the importance of direct efforts to engage the media themselves more effectively. One area of activity in this regard involves the development of codes of responsibility for journalists.

Although established career journalists are notoriously immune to influence of this kind, such initiatives stand better prospects of success where they are targeted at journalism training courses [L].

Efforts to inculcate a greater sense of ‘responsibility’ in media reporting of science do not necessarily require adoption of a simplistic understanding of ‘scientific facts’. They apply as much to the accurate reporting of uncertainties as to particular viewpoints on what constitutes the appropriate interpretation.

A final important area of discussion with respect to science communication concerns the increasingly important role of information and communication technology – especially the Internet. With the increasing ‘dematerialisation’ of communication [Q], the Internet presents an area of attention that may directly help to address some of the challenges presented by the established media formats discussed above.

This presents a particularly promising resource in relation to the challenges of ‘scaling up’ public engagement activities (see Section 3.1.4) [I].

The advent of initiatives such as ‘Wikipedia’ illustrates the creative enabling role of the Internet in the fostering of more effective processes of ‘distributed knowledge’ [Q].

However, it is important not to under-estimate the difficulties associated with effective use of information technologies in science communication.

Experience thus far has as often been negative as positive – for instance in relation to disappointing levels of usage of initiatives such as the European Commission’s ‘your voice’ portal [I]. There is presently a serious dearth of research on these pressing questions over the potential practical role of the Internet [M].

One final theme that emerged in the Seminar discussions concerning science communication was the importance of ‘not reinventing the wheel’. If science communication in the broad sense is to fully assimilate the necessity for two-way dialogue, then it must learn more fully from the experience of the science shop movement [L].

This is a field in which researchers have wrestled for decades with the dilemmas of balancing scientific discipline, rigour and clarity with respect for divergent social interests, values and perspectives [L].

Particular discussions within the science shop movement over appropriate quality criteria and incentives may have much to offer to wider consideration of effective science communication [L]. But in order for this to happen, a series of significant challenges need to be overcome.

The science shop movement needs to find ways to improve the profile and enhance the ‘prestige’ attached to its outputs [L]. It needs to find a way to resolve the longstanding question of whether science shop activities are properly part of university curricula or whether they constitute parallel or superordinate activities requiring other forms of support [I].

Greater recognition of the importance of the science shop experience for establishing wider social practices of two-way dialogue in science communication may play a role in assisting with these challenges.

The Task of Evaluation

One theme that arises repeatedly – especially in relation to the challenge of ‘embedding’ public engagement in mainstream policy making (Section 3.2.1) [F] – concerns the importance of effective provision for the evaluation of participatory processes.

Accordingly, it will be argued later that this constitutes an essential element in the development of a persuasive ‘business case’ for public engagement (Section 4.2.2) [I].

At its best, evaluation addresses the practical need on the part of practitioners, researchers, potential sponsors and prospective participants for clear, firm information on the strengths and weaknesses of different approaches in different contexts [M].

It is an important element in ensuring long term continuity and cumulative progress in public engagement, rather than the currently more ad hoc and fitful procession of disparate one-off exercises [F].

Crucially, evaluation requires that public engagement be undertaken with explicit attention to the need for follow-up after the event, as well as provision for reflection and independent engagement as part of the process itself [F].

However, despite the many benefits of more established procedures for evaluation, there are also difficulties. Many of these lie at an operational level. Clear distinctions must be made between long and short term impacts and between direct and indirect results [F].

This difficulty is compounded by the fact that (positive and negative) impacts of public engagement are often complex, indirect, delayed and ambiguous – and so very difficult to measure [I].

Care is required in generalizing from specifics: what constitutes effective or successful practice in one context need not necessarily translate to others [F].

The often highly polarized arenas within which participation takes place render it especially important to exercise caution over the acceptance of claims made on the part of sponsors or practitioners [H] or criticisms on the part of detractors.

Conclusions are hampered at a practical level by a lack of codified experience from earlier projects [E]. This underscores the need for basic mapping research as a prerequisite to effective evaluation [M] – especially in relation to successful uptake by governance institutions (such as the European Commission’s directorates general) [R].

In the absence of this, there is a tendency for deliberations over ‘best practice’ to go around in circles, or even backwards [R]. Unfortunately, however, there are limits to what such operational approaches to evaluation can actually achieve.

The deeper problem is that some of the more simplistic aspirations to basic rules concerning ‘best practice’ fail to acknowledge that there are some fundamental differences of view over the role of evaluation [K].

In short, although it is possible to agree on fundamental criteria at a general level – such as legitimacy in recruitment, fairness in dialogue and transparency in process – the form and interpretation of such criteria must to some extent depend on perspective and context [M].

Under one view, evaluation is just a matter of establishing which approaches ‘work’ and then communicating and implementing these approaches [K].

Under another perspective, the framing (and thus results) of evaluation must (like participation itself) remain dependent on the purpose or context of the initiatives in question [K].

Perhaps the most important feature of the context that bears on evaluation is whether an exercise is intended to achieve aims concerned with normative enhancement of democracy, substantive outputs in terms of sustainability or precaution, or instrumental motivations over trust or credibility (Section 3.1.4): each of these would yield different evaluative criteria [I].

If this view is accepted, then efforts to establish a single definitive scheme of evaluative criteria appear insufficiently reflective and overly instrumental. Indeed, far from promoting consensus, efforts to assert particular visions of ‘best practice’ too strongly may actually foster further tension and conflict.

Representation and Democracy

One particularly important example of a contextual factor bearing on the interpretation of evaluation concerns the view that is taken of participatory processes in relation to the established institutions and procedures of representative democracy.

Is participation a substitute for other forms of democratic deliberation and accountability, or is it a complement? Aside from the specific connotations for evaluative criteria, this question holds profoundly important implications for the general role of public engagement in the governance of science.

For its part, the Seminar working group on the issue of participation and representation developed a clear agreed ‘statement of needs’ [N]:

- clarity on the boundaries and expectations for all participants;
- clarity over the working frameworks for participation;
- responsibility to experiment with new forms of learning;
- the embedding of participatory process in mainstream European Commission programmes;
- the undertaking of participation also with respect to the process of implementing participation itself.

However, for all its value as a pointer to necessary areas for further work, this does not fully resolve the complexities of the question of how participatory process relates to representative democracy.

One reason for the difficulty here is that the answer to this question raises in turn the question of precisely what is meant by representative democracy in the first place. This can sometimes be unclear – there being a number of different views in political discourses in the industrial democracies that make up the EU [F].

Beyond this, the answer also depends on which of several different views is taken on the role and nature of participation. Is it about collecting a microcosm of socio-political perspectives? Is it concerned with staging public competition between arguments?

Does it involve the mediated balancing of negotiated interests? Or is it a more technical process of ‘preference feedback’, using settings such as focus groups or citizen panels as social scientific experiments to produce evidence in order to inform wider policy making [G]?

Each of these yields significantly different answers to the question of the relationship between participation and representative democracy. Nowhere are the implications of this question more acute than with the issue of representativeness itself.

Here – as in evaluation more generally – there are dangers in reifying simplistic notions of representation and taking this to extremes [G]. It has already been mentioned, for instance (Section 2.2.4), that the process of deliberation itself can reduce representativeness in participation and that participants’ acquisition of expertise compounds this [K]. Particular examples arise in the medical field [K].

The question of whether or not a particular process or exercise has been ‘representative’ depends on subjective and context-specific judgments over what constitutes the appropriate partitioning of relevant perspectives.

In any exercise that involves (as must necessarily be the case) fewer viewpoints than are extant in wider political discourse, there must be questions over the weighting or priority attached to those viewpoints that are included.

These difficulties with the representativeness of participatory process are somewhat alleviated if attention turns to the more practical question of the relationship with decision making itself [E].

Here, there can be little doubt over the frequently constructive value of participatory exercises as a means to build broader-based negotiated resolutions to challenging political problems. One such example arising in discussion at the Seminar concerned the case of negotiations over land contamination problems in the Italian town of Brescia [J].

Here, there emerges a very positive complementary role for participation in relation to representative democracy, but only if participation is oriented towards the systematic exploration of the detailed implications of different social perspectives for a particular science governance problem, and then explicitly conveys these implications to decision making [E].

In other words, the tensions with representative democracy are reduced when participation is used not to ‘close down’ on consensus, but to ‘open up’ the range of different equally legitimate options for decision making [G].

In this view, there emerge interesting synergies and complementarities between the role of participatory process in ‘opening up’ and representative democracy in ‘closing down’ decision making. In other words, participatory approaches are better seen as a means to ‘inform policy making’ than to ‘undertake decision taking’ [I].

As emphasized by the Seminar working group on this point, the only robust response to this problem lies in making the appropriate relationship between participatory process and representative democracy in any given context an explicit focus of attention in participation itself [N].

Ends