Audit Measurement of Crisis Communication Preparedness across Different Branches of Government – Sharing Experiences Gained in Finland

An audit measurement of crisis communication preparedness was conducted for 67 organizations, mainly on the national level, in Finland. This unique project was one of the ways to implement new crisis communication regulations that had come into force a year earlier. Its purpose was to strengthen central government communication in incidents and emergencies. The audit consisted of a digital survey in the participating organizations and a reflection meeting on the level of each of the 14 branches involved. The indicators were customized, taking as a basis the crisis communication scorecard developed in an earlier international project. In addition, several meetings were arranged to facilitate related interorganizational learning and the exchange of practices. The findings showed diversity across the different branches of government and led to an exchange of practices. The evaluation of the measurement tool confirmed that it had been good to include reputation issues in the indicators for the national government level, as crises are diverse: they can be initiated by reputation issues, and emergencies may in turn bring up reputation issues.


Introduction
Performance measurement has been utilized in communication before, but a broad measurement applied to crisis communication on the national level has not previously been reported. The research builds on work done in an earlier project, 'Developing a Crisis Communication Scorecard', supported by EU funding. The research consortium for this project had been led by a Finnish team, and Finnish experts had been represented on its Advisory Board. The project delivered an online tool for the evaluation of emergency crisis communication by public organizations before, during and after a crisis. The complete tool is available as a free download (www.crisiscommunication.fi/criscomscore). The need for communication to prevent, help prepare for and mitigate crises has often been emphasized, and there is high awareness that communication activities need to be integrated in crisis management. The crisis communication scorecard is a tool to turn this awareness into action. It builds on the 'Crisis and Emergency Risk Communication Model' (Reynolds & Seeger, 2005; Veil et al., 2008), defining communication tasks for the different phases of risk and crisis management. The use of the tool has been suggested to enhance understanding of the role of communication and the need to integrate it in crisis management (Palttala & Vos, 2011).
The tool has various quality indicators that clarify the aims of crisis communication in the respective crisis phases (Vos et al., 2011). Project research for the crisis communication scorecard demonstrated points of attention, for example the importance of connecting with media-consumption habits (Harro-Loit et al., 2012), that were included in the audit measurement. A follow-up project on public empowerment, which also served as inspiration, underlined the importance of including publics in crisis management and acknowledging their diversity (Linnell et al., 2015). In this way, insights from studies and the available literature were linked to crisis phases and stakeholder groups, and further analyzed to provide a basis for the indicators.
The methodology proposed for the audit was inspired by the self-assessment principles of the European Foundation for Quality Management (Ahaus & Diepman, 2002) and by scorecard measurement as proposed by Kaplan and Norton (2001, 2006). Following the approach of strategy maps (Kaplan & Norton, 2004), a strategy map was developed to clarify the strategic contribution of communication to crisis management. Scorecards have been developed for various areas, such as disaster management (Moe et al., 2007), corporate communication (Hering et al., 2004; Vos & Schoemaker, 2004) and municipality communication (Vos, 2009), but also for organizational functions such as human resources (Becker et al., 2001) and marketing (Peelen et al., 2000).
The aim of the crisis communication scorecard is to identify strong and weak aspects to help improve the quality of crisis communication in emergencies (Palttala & Vos, 2011). The users of this earlier general scorecard are public organizations in Europe, such as rescue organizations. This measurement tool was now customized to better suit national organizations in the Finnish context by including the new government regulations in the indicators and phrasing the indicators in the national language. The research questions focused on identifying strong and weak points, as well as opportunities for interorganizational learning.
The tool brings together indicators that represent insights gained in previous research and practice. However, as all crises differ and evolve over time, it is even more important that the tool supports reflection and discussion among the experts involved, for example, on how to monitor citizen views during evolving crises. Monitoring includes social media interaction, which shows public perceptions and reactions to communication by authorities, such as issues raised by people and the effects of warning messages (Ruggiero & Vos, 2014). Although there are models explaining what makes issues grow in social media during crises (e.g. Zhang & Vos, 2015), the unpredictable process along which crises evolve also calls for improvisation by communication experts (Falkheimer & Heide, 2010). Monitoring and improvisation go hand in hand during the process of communication strategy making (Ruggiero, 2016). As complex crises call for cooperation among various organizations and public groups (Helsloot, 2008), this requires joint learning. Thus, safety is a coproduction that benefits from learning opportunities (Palttala & Vos, 2011), such as those offered by the scorecard tool developed here.

Method
The structure of the customized measurement tool was formed by four phases of crisis management (preparedness; warning; response; reconstruction and evaluation). In each phase there were three sections related to different stakeholder groups (citizens, media, and the organizational network). For each phase, tasks were identified that were then specified per stakeholder group in quality indicators, and an explanation of each indicator was added. In total, 24 communication tasks were identified, which were measured by 55 indicators. The language of the tool was Finnish.
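The layout described above (four phases, each with three stakeholder sections holding quality indicators) can be sketched as a nested data structure. This is only an illustration of the tool's organization; the phase and group labels follow the text, but the example indicator and explanation are placeholders, not actual items from the Finnish tool.

```python
# Illustrative structure of the customized tool: 4 crisis phases x 3
# stakeholder sections, each holding quality indicators with explanations.
PHASES = ["preparedness", "warning", "response", "reconstruction_evaluation"]
STAKEHOLDERS = ["citizens", "media", "organizational_network"]

# Empty scorecard skeleton: one indicator list per phase/group combination.
scorecard = {
    phase: {group: [] for group in STAKEHOLDERS}
    for phase in PHASES
}

# Attach a hypothetical indicator together with its explanation:
scorecard["preparedness"]["citizens"].append({
    "indicator": "Media-consumption habits of citizen groups are mapped.",
    "explanation": "Helps choose channels that reach diverse publics.",
})
```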
An example of a topic and one of its indicators follows (translated). Responses were given on the scale: … been recognized, but hardly any action has been undertaken; 3 = We act on this to some extent but not systematically; 4 = This is to a large extent a systematic part of the action; 5 = This is fully a systematic part of the action; 0 = Do not know, or this indicator is not relevant for our organization (0 answers were not included in averages).
Responses were collected from the 67 participating organizations. Next, the quantitative results of the survey were used by the researchers for a comparison of the different branches, using Digium and SPSS to construct tables and graphs showing average results (color-coded tables, bar diagrams and semantic differential graphs). This was complemented by the qualitative results of the questionnaire, using thematic analysis to group answers and select clarifying quotes. Results and opportunities for inter-organizational learning were discussed in a meeting, leading to an exchange of experiences. Finally, the report was presented and commented on with a view to further development.
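The scoring rule stated for the survey (ratings 1 to 5 averaged, with "0 = do not know / not relevant" answers excluded from averages) can be sketched as follows. This is an illustrative sketch, not the project's actual Digium/SPSS analysis; the function name and the example responses are hypothetical.

```python
# Illustrative sketch: averaging 1-5 indicator ratings while excluding
# "0 = do not know / not relevant" answers, per the stated scoring rule.

def indicator_average(responses):
    """Average the 1-5 ratings, skipping 0 ('do not know / not relevant')."""
    valid = [r for r in responses if 1 <= r <= 5]
    return round(sum(valid) / len(valid), 2) if valid else None

# Hypothetical responses from organizations in one branch:
branch_responses = {
    "indicator_a": [3, 4, 0, 5, 3],  # the 0 answer is excluded
    "indicator_b": [0, 0, 0],        # no valid answers at all
}

branch_averages = {k: indicator_average(v) for k, v in branch_responses.items()}
print(branch_averages)  # {'indicator_a': 3.75, 'indicator_b': None}
```

Averages like these, computed per indicator and per branch, are what the color-coded tables and bar diagrams mentioned above would display.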

Findings
The overall results per crisis phase were higher for the indicators of the preparedness phase (3.48) and early warning (3.46), and lower for the response phase (3.29) and the phase of reconstruction and evaluation (3.36). This indicates that arrangements made in advance were rated higher than acting during and after an actual crisis. Changes in communication practices over time also need consideration: for example, social media interaction is not yet much used in crisis situations in the country, but this may change in new crisis situations. The latter was also a good example of diversity, as some branches had already gained much experience with this. In the scholarly literature it has been noted that, concerning active social media policies, in many organizations there is room for improvement (e.g. Coombs & Holladay, 2012; Coombs et al., 2015). Although providing information and clear instructions for citizens is important in crisis situations (Sellnow, 2015), listening to citizen views and concerns in particular is underused (Macnamara, 2016). Social media offer many possibilities to do so (Wright & Hinson, 2009).
The evaluation of the measurement tool confirmed that it had been good to include reputation issues in the indicators for the national government level, as crises are diverse: they can be initiated by reputation issues, and emergencies may in turn bring up reputation issues (Vos, 2017).
The tool helped to make quality criteria and their assessment concrete, but one should note that there is always a story behind the numbers. It was suggested that a benchmark measurement could be done once every three years, in the meantime supplemented with work within the branches and after-crisis evaluations using the indicators.
As a limitation of such measurements it should be noted that a process like this needs strong commitment and an open culture of learning. In this case these preconditions were met, but this cannot be assumed in all cases, as a benchmark is a sensitive matter and the outcomes may thus be difficult to compare. For example, a high level of ambition may lead to lower self-assessment scores. It would be recommended to supplement self-assessment with other measures, such as external assessment.
Where the preconditions of commitment and an open culture of learning are met, the process can be a motivating experience and help implement crisis communication regulations.
Customization of the measurement tool will be needed in many cases, and for this purpose the original crisis communication scorecard is freely available online. The measurement is not a goal in itself; rather, it is seen as a way to support reflection and invite organizational learning.
Online Journal of Communication and Media Technologies, Volume 8, Issue 1, January 2018