WHAT IS LOCALISM AND DOES IT MATTER?

Steve Martin 1
Paper presented to the Commonwealth Local Government Forum
Research Colloquium: Sustainable local governance for prosperous communities
Cardiff University, March 2011
Introduction
Over the last thirty years successive governments in the UK have exerted ever tighter
control over local authority services and spending. Leading local government figures
and commentators have railed against the erosion of local autonomy but found
themselves powerless to resist the rising tide of centralisation. Opposition parties
have commonly pledged to hand powers back to local government only to renege on
their promises once they have been safely installed in office. All this changed
however in the summer of 2010 when ministers in the newly elected government
moved swiftly to start work on sweeping away the top down targets and performance
frameworks inherited from their immediate predecessors.
This paper explores the implications of this new ‘localist agenda’. The first section
charts the loss of local autonomy under Conservative and New Labour
administrations in the UK since 1979. The second section briefly examines the
impacts this had on local services and accountability. The paper then turns to the
current government’s proposals. It argues that its plans mark a potentially decisive
break with the policies of the last thirty years and examines their prospects and
potential pitfalls and some implications for future research.
1 Centre for Local & Regional Government Research, Cardiff Business School, Colum Drive, Cardiff, UK. Email: [email protected]
A brief history of centralisation
The hollowing out of the local state under the Thatcher and Major governments has
been well documented. Local authorities in the UK were forced to hand control over
a range of functions to local appointed bodies whose activities were overseen by
boards comprising local politicians, business people and unelected representatives of
the community and voluntary sectors. Council house tenants were given a right to
buy their homes at reduced rates and remaining stocks were transferred to arm’s
length companies. Schools were given new freedoms from local authority control.
And compulsory competitive tendering (CCT) led to the externalisation of swathes of
services to the private sector (Rao and Young 1995; Walsh et al. 1997; Boyne 1998;
Choi 1999; Vincent-Jones 1999). Local government spending was also tightly
controlled and its performance increasingly subject to monitoring from the centre.
Ministers dictated how much funding councils received by capping ‘excessive’
council tax increases and ring fencing grants for use by specific services. The Audit
Commission was created in 1983 and initially given the job of checking that local
authorities were achieving economy, efficiency and effectiveness (McSweeney 1988).
A decade later it was charged with defining statutory performance indicators that all
councils were required to publish and using these data to compare councils’
performance (Humphrey 2002).
There was fierce local opposition to these moves. Public sector trades unions
organised protests against what they saw as an assault on their members’ pay and
conditions, and many Labour controlled authorities worked out ingenious routes
around CCT legislation. But the lack of local fiscal autonomy (around 80 percent of
local authority expenditure is met by grant from central government) and the absence
of formal constitutional guarantees of the role - or even the continued existence - of
local government meant that Westminster and Whitehall always held the whip hand.
And in a show of strength Parliament voted to abolish the largest authorities – the six
metropolitan county councils and the Greater London Council – that offered some of
the stiffest opposition to Thatcher’s reforms.
In some respects New Labour picked up where the New Right had left off. But there
were important differences. Thatcher’s strategy was to divest the state of
responsibility for service delivery through a process of privatisation. By contrast
Blair promised to invest in local government provided it signed up to a programme of
modernisation. But like Thatcher he and his advisers were reluctant ‘to trust their
party colleagues in local government with money or functions, or even with the
unchaperoned exercise of common party purposes’ (Walker 1998, p.4). They feared
that the actions of unfettered ‘Loony Left’ councils might be a political liability,
threatening Labour’s prospects of an all important second term. The 1998 Local
Government White Paper therefore made it clear that ‘The old culture of paternalism
and inwardness’ must ‘be swept away’ and local authorities were expected to
embrace ‘a demanding agenda for change’ (Cmnd 4014). In a speech to local council
leaders in February 1998 the Prime Minister outlined the deal telling them ‘If you are
unwilling or unable to work to the modern agenda then the government will have to
look to other partners to take on your role’ (Blair 1998). Meanwhile his senior
ministers warned that local government was ‘drinking in the last chance saloon’.
The Government pledged to abolish CCT and ‘crude and universal capping’ of local
authority budgets (Cmnd 4014, para 5.7). And over the following decade there were
large real terms increases in the level of central government grants to local authorities.
But ministers were not about to cast off all restraint. They retained powers to limit
council tax rises in order to ward off what they regarded as excessive increases and
tightened existing controls on how councils spent the funding they were allocated,
introducing a range of new ‘ring fenced’ grants, most notably in education which
accounts for almost 40% of local authority spending (Travers 2004). By 2010 more
than two thirds of central government funding to English local authorities was
earmarked by ministers for specific purposes; just 31% was given to them as a block
grant (HM Treasury 2010).
Like their predecessors, New Labour ministers were keen to see the private sector
play an increased role in the provision of local public services. In their view the
problem with CCT was the way in which it had been implemented, which had ‘led to
unimaginative tendering, and often frustrated rather than enhanced real competition’
(Cmnd 4014, para 7.22). On average contracts advertised between 1989 and 1992
attracted fewer than one external bid (Walsh and Davis 1993) and even after more
than a decade of CCT internal providers were still winning well over half of all
tenders and almost three quarters of the estimated £2.4 billion worth of business that
was covered by the legislation (LGMB 1997). The top down imposition of market
testing had, the Labour Government argued, poisoned relationships between public
and private sectors, and the emphasis on economy had led to a decline in service
quality as in-house and external bidders were forced to pare tenders to the bone
(Walsh et al. 1997; Coulson 1998; Davis and Walker 1997).
For all these reasons CCT was replaced with a new duty of ‘Best Value’ which, far
from doing away with externalisation, was designed to ‘create the conditions under
which there is likely to be greater interest from the private and voluntary sectors in
working with local government to deliver quality services at a competitive price’
(Cmnd 4014, clause 7.30). Section four of the 1999 Local Government Act required
authorities to put in place arrangements to secure continuous improvement in the
discharge of all of their functions. In order to comply with this new duty they must
use a range of ‘tests of competitiveness’ and should not deliver services in-house if
they found that ‘other more efficient and effective means are available’ (DETR, 1998
p. 20). To ensure that authorities complied with this new duty ministers introduced a
new ‘Best Value’ performance management framework. Every council was required
to conduct fundamental reviews of all of its functions over a five year period (Ball et
al. 2002) and to publish annual plans setting out details of current performance and
targets for future improvements. For the first time all local services became subject to
external inspection, and the secretary of state was given powers to intervene directly
where authorities failed to conduct sufficiently robust reviews or there was thought to
be a risk of serious or persistent underperformance – examples included failures to
agree a programme of fundamental performance reviews, consult adequately with the
public, publish prescribed performance information, make robust comparisons with
other service providers, set adequate performance targets, or the ‘unreasonable
neglect’ of options for service provision (DETR 1999). To monitor progress the
Government devised more than 200 Best Value Performance Indicators (BVPIs)
which between them covered all frontline and corporate services and at their height
required some 287 pages of guidance to try and ensure that data were collected in a
comparable form (Boyne 2002).
Within a year this highly bureaucratic regime had run into serious difficulties. Most
authorities undertook a far larger number of more narrowly focused reviews than the
Government or Audit Commission had anticipated. As a result it was impossible for
inspectors to deliver on their promise (or threat) to scrutinise every review. More
importantly, senior officials had come to the view that reviews of individual services
were unlikely to get to grips with the root causes of underperformance. The
experience of early interventions in failing services pointed to underlying problems in
the management of the authority as a whole. The Audit Commission concluded that
‘serious and sustained service failure is also a failure of corporate leadership’ (Audit
Commission, 2002: 19). Inspection of individual services was therefore of limited
value because ‘Without clear corporate leadership for change it becomes a very
negative task based process’ (Audit Commission, 2001: 14).
As a result from 2002 onwards the Government introduced a new assessment
framework known as Comprehensive Performance Assessments (CPAs). These
categorised councils as ‘poor’, ‘weak’, ‘fair’, ‘good’ or ‘excellent’ on the basis of
judgements about individual services and the overall leadership of the authority.
Services were divided into seven main ‘blocks’ (environment, housing, culture, fire
and rescue, services to children, social care and benefits administration) in the case of
upper tier and unitary authorities, and four areas in the case of districts (housing,
environment, culture and benefits administration). Scores for each block were
weighted according to their importance to national government and then aggregated to
give an overall ‘performance’ score. This was then combined with an assessment of
the council’s corporate capacity to provide the overall rating. Results for all 150
single tier and county councils in England were published annually. District councils
and fire and rescue authorities underwent similar though less frequent assessments.
Authorities placed in the bottom two categories were subject to external intervention
and support which often resulted in the removal of senior managers and in some cases
of council leaders. Over time the CPA methodology was refined to provide what the
Audit Commission called a ‘harder test’. Assessments of the main service blocks
remained largely unchanged, but the five point scale was replaced by a four point star
rating system ranging from ‘no star’ to ‘three stars’ and the criteria for assessing
corporate capacity were broadened to include the quality of an authority’s
partnerships with other local agencies, its effectiveness as a community leader and the
way in which it managed its budgets (Downe 2008).
In April 2009 CPAs were superseded by Comprehensive Area Assessments (CAAs)
which assessed the key public service outcomes in a locality, rather than focusing
exclusively on the services for which councils had direct responsibility. CAAs were
intended to provide an independent and ‘joined up’ assessment of the public service
outcomes and the quality of life in each locality. There were two main elements. An
Area Assessment focused on priorities set out in local area agreements which were
negotiated by central government departments and local public service providers. In
addition the police, health service, local authority and fire and rescue services were
the subject of separate Organisational Assessments which focused on the management
and performance of these individual agencies. Unlike CPAs, area and organisational
assessments did not involve primary data collection. They relied on existing
inspection reports, external audits and a (slightly) reduced set of around 190 statutory
performance indicators. The rationale for the introduction of CAAs was twofold.
They were supposed to enable the seven different local inspectorates (the Audit
Commission; the Commission for Social Care Inspection; the Healthcare Commission;
HM Inspectorate of Constabulary; HM Inspectorate of Prisons; HM Inspectorate of
Probation; and the Office for Standards in Education, Children's Services and Skills)
to coordinate their activities
and therefore significantly reduce the burden of inspection on local agencies. And
they were designed to encourage local service providers to work together to tackle
deep seated economic and social issues (such as economic regeneration; care for older
people; reducing the number of young people not in education, employment or
training; shortages of affordable housing; environmental sustainability; reducing
crime; preventing violent extremism; and tackling the causes of poor health) which
were not the sole preserve of any of them but which required concerted action by a
number of different sectors.
The impacts of top down performance frameworks
For the last decade, then, English local government has been subjected to
unprecedented levels of external surveillance and financial control (Lowndes 2002).
It has not been entirely one-way traffic. Local authorities were granted a new (though
little used) power to promote the well being of their areas; controls on borrowing were
relaxed; and central-local relations became less fraught. The combative rhetoric of
the early Blair years gave way to a more conciliatory tone. Ministers no longer talked
of last chances. Local government was now a ‘trusted partner’. However it is clear
that at a time when most countries have been devolving power away from the centre,
the UK moved decisively in the opposite direction.
Many UK scholars were fiercely critical of New Labour’s approach. Stewart (2003:
253) complains of ‘over-prescription; over-inspection and over-regulation’.
Wilson (2003) accuses ministers and their officials of ‘control freakery
gone mad’. Davies (2008) condemns them for what he regards as ‘double dealing’ –
talking up devolution whilst doing the opposite. And Goldsmith (2002: 109)
classifies the UK as an outlier within Europe, noting that ‘No other European country
has anything like the plethora of initiatives, special grants, powers over taxing and
spending and regional oversight as does Britain’. According to its detractors, the New
Labour approach to public services reform was wrong both in principle and practice.
There have been four main types of critique. First, some commentators see external
inspection as an infringement of local democracy. Local politicians have their own
democratic mandate which ministers should respect. Local authorities are closest to
the citizen and best placed to understand and respond to the needs of their local
communities. External regulation was therefore seen as an insult, indicating a lack of
trust in and regard for councils and councillors. Second, the imposition of national
standards and external performance regimes was said to foster a compliance mentality
in a way which stifles innovation. Accountability is channelled upwards to ministers,
robbing councils of the flexibility they need to be responsive to local priorities. Third,
external regulation was costly. According to government figures, by 2005 the direct
costs of the local government inspectorates in England amounted to £97 million per
annum (ODPM/HM Treasury 2005). In addition local authorities incurred significant
compliance costs – the staff time and other resources that local authorities need to
expend to make themselves ‘auditable’ by preparing plans, collecting performance
data, hosting inspection visits and responding to audit and inspection reports. A
survey of English local authority chief executives found that councils devoted an
average of 597 staff days per annum to preparing for and managing inspections
(Downe and Martin 2007). The time taken up on ‘paperwork’ might, it is argued,
have been better spent on core tasks such as managing ‘frontline services’ (Hood and
Peters 2004: 278), and regulation produces other ‘dysfunctional effects such as
ossification, a lack of innovation, tunnel vision and suboptimization’ (van Thiel and
Leeuw 2002: 270). Fourth, a number of scholars have questioned the rigour of the
CPA methodology (Martin et al. 2010). Andrews (2004) criticises it for failing to
take account of the impact of deprivation on performance. Jacobs and Goddard
(2007) and Cutler and Waine (2003) argue that star rating systems like CPAs are
misleading because they mask the complex and multi-faceted nature of performance
and the resulting aggregate scores are highly sensitive to the weightings which are
used. And Maclean et al. (2007) showed CPA judgements to be a poor predictor of
future performance.
However, empirical research on the impacts of New Labour’s ‘modernisation’ agenda
suggests that in spite of these concerns the combination of large increases in spending
and unprecedented focus on improvement was associated with significant and
sustained performance gains. The three main sources of longitudinal data about local
government performance in England - statutory performance indicators, inspection
reports and public satisfaction surveys – all indicate that there were significant
improvements (Martin 2009). Statutory performance indicators show variations
between services but some spectacular gains in areas such as waste management and
culture services. CPA scores confirm this. There were improvements in almost all
services between 2002 and 2008. Almost three quarters (72 percent) of authorities
moved up one or more performance categories in terms of their overall performance.
The proportion ranked in the top category rose from 15 to 42 percent, whilst the
proportion in the lowest two performance categories decreased from 23 percent to 3
percent (Audit Commission, 2009). The results of national surveys of public
satisfaction with local services based on large samples (in excess of 200,000 people)
drawn from every local authority in England are consistent with the evidence from
inspections and performance indicators. They showed that there were significant
increases in public satisfaction with almost all local government services.
Research which has investigated the causes of these improvements has demonstrated
the importance of the external challenge and stimulus provided by inspection. Both
local government managers and national policy makers see CPA as having been
particularly effective (Grace and Martin 2008). A survey of senior local authority
officers in 2006 found that a large majority reported it to be the most important external
driver of improvement in their councils and compared its effectiveness favourably
with more ‘collegiate’ and less confrontational policy instruments such as the Beacon
Council scheme and programmes of peer review initiated by local government bodies
(Downe and Martin 2006). Systematic comparisons with Wales and Scotland which
eschewed the muscular English approach in favour of less confrontational approaches
to assessment suggest that local government performance has not improved as rapidly
in these countries (Downe et al. 2008; Andrews and Martin 2010). Similarly, research
on the CAAs found that although local authorities, police and health bodies were
critical of the burdens which it placed on them, they had encouraged better
partnership working between them and focused attention on important public service
outcomes (Hayden et al. 2010). There is still much to learn about how and in what
circumstances hard edged performance regimes like Best Value, CPA and CAA
inspections work best (Hood 2007; James and Wilson 2010). However, these findings
are consistent with research which points to the effectiveness of top down ‘terror and
targets’ in other services (Bevan and Hood 2006).
The triumph of ‘localism’?
Notwithstanding the success of New Labour’s approach in encouraging performance
improvements, critics who argued that the costs and dysfunctions of external
assessments outweighed any gains eventually won the day. And by the time of the
2010 General Election all three major political parties pledged a change of direction
and a shift of power away from Whitehall to town halls. The architects of CPA and
similar performance regimes in other sectors argued that the improvements which had
been achieved meant that top down management and measurement could now be
eased back in favour of a greater role for self improvement and self regulation.
Inspections yielded diminishing marginal returns over time and according to the
former head of the Prime Minister’s Delivery Unit became less necessary as public
services moved from being ‘awful’ to ‘adequate’ (Barber 2007). Moreover, local
government now faced new challenges. The improvement apparatus of the previous
decade had been good at supporting councils to become ‘competent organisations’ in
a time of increasing resources, but they now needed to move ‘beyond
competence’ (Swann et al. 2005). The incremental improvements of the past –
essentially doing the same thing, but better – would not produce the efficiency gains
needed to meet the demands of deficit reduction. Service providers were going to
have to do some things differently, developing new services and transforming delivery
processes by working across organisational boundaries (Martin and Davis 2008).
For both of the main opposition parties top down performance assessments were
emblematic of all that was wrong with the Blair/Brown approach to public services
reform. Liberal Democrats were long standing champions of decentralisation, whilst
Conservative shadow ministers pledged to wage war on ‘Big Government’, rolling
back the frontiers of the ‘Nanny State’ by taking power away from the bureaucrats
(including inspectors) and handing it to local people who would act as ‘armchair
auditors’. The coalition government has been true to its word, going much further
and much faster than most observers imagined possible. One of the first acts of the
new Secretary of State for Communities and Local Government was to instruct the
inspectorates to stop all work on the next round of CAAs. In August 2010, without
warning or prior consultation, he announced the abolition of the Audit Commission,
accusing it of having ‘lost its way’ and of profligate waste of public money. The
Department of Health subsequently announced an end to annual performance
assessments of councils by the Care Quality Commission, and Ofsted began phasing
out its annual assessments of children’s services. Responsibility for
external auditing of local authority accounts would be transferred to the commercial
sector with accounting standards overseen by the National Audit Office. Standards for
England, which was created by Labour to oversee a framework designed to ensure
that local politicians maintain good standards of conduct, was also abolished.
The Government has recently explained that its actions are intended ‘to free up local
authorities to enable them to be innovative in the delivery of services, rather than
merely seeking to raise performance against centrally established criteria to achieve
good inspection results. Local authorities will have the freedom to deliver services in
ways that meet local needs, and will be accountable for those services to their
electorates. These principles are key elements of localism’ (CLG 2011: para 22).
Government departments will continue to have a role in specifying and aggregating
information which is ‘of national importance’ or required to ensure accountability to
Parliament. But ‘the principal aim is to reduce the burden of data collection on local
government’ (para 25). The emphasis will be on providing local residents with the
information they need to hold councils to account and it will be up to local authorities
to decide what these data are and in what form to make them available. A draft
‘Localism Bill’, published in December 2010 and currently making its passage
through Parliament, proposes a range of other changes. Some apparently increase the
level of discretion for local authorities. Councils are to be given a general power of
competence. They will be allowed to revert to the traditional committee system
which was abolished by Labour and to decide who to place on housing waiting lists.
The Government is also proposing to abandon regional housing targets. There are
though some changes which the Government plans to force on local authorities.
Twelve of the largest English cities will be made to replace existing council leaders
with directly elected mayors and all authorities will have to publish details of senior
staff pay. Many of the proposed changes are in fact designed to hand power to local
communities rather than local government. It will become easier for local groups to
take over the running of local services from authorities, to stake a claim to important
local buildings, and to trigger referendums on local issues. Referendums will also
become mandatory where councils propose to increase council tax above a ceiling set
by ministers.
It is too early to tell which of the provisions in the draft bill will make it into law or to
discern exactly what the performance framework of the future will look like but it is
perhaps worth making a number of observations of possible future directions.
One interpretation is that the localism bill is a sham. Critics have been quick to point
out that ministers plan to retain the powers which their predecessors took to intervene
in authorities. And the secretary of state has not been slow to tell authorities what he
believes they should be doing on a range of issues from the fraught issue of
fortnightly bin collections to the development of shared services. The Society of
Local Authority Chief Executives complained recently that the draft bill has a
distinctly ‘Orwellian quality’. It asks ‘how on earth can we maintain the fiction that
this is a Localism Bill when it has 142 new regulatory powers for the Secretary of
State?’ and argues ‘This is centralist, not localist, and does nothing in pursuit of the
government's desire to usher in a “post-bureaucratic age”’ (SOLACE 2011).
A second interpretation is that we are witnessing a genuine but partial and somewhat
tentative attempt to devolve power to the local level. The Communities and Local
Government Department, which is responsible for the localism bill and the changes to
the performance framework that have been described above, has always been more
positively inclined towards devolution than other key government departments. It remains
to be seen whether the Departments for Education and Health, which oversee the
largest spending local authority services, will follow suit. Moreover, the proposals
contained in the localism bill do not promise any fundamental change in control over
local authority funding. There has been a reduction in the amount of ring fencing, but
ministers will continue to determine how much funding is available to each council.
There has been talk of returning control over non-domestic rates to local government
but as yet there are no published proposals to do so and even if this were to happen
English local authorities would still enjoy less fiscal autonomy than their counterparts
in many other parts of the world.
A third interpretation is that ministers are deadly serious about devolving power but
not to local authorities. The early signs suggest that ministers are more interested in handing
control to local communities than to local councillors. As a senior civil servant
explained to me last month: ‘We are looking to leapfrog over local government and
empower local people’. Whilst the Local Government Association (unsurprisingly)
argues that it is for councils to regulate their own performance (De Groot 2006; IDeA
2009; LGA 2011), the secretary of state has repeatedly emphasised the importance of
enabling local people to take over the running of local services and keep councils in
check.
Fourth, it could be argued that the concept of ‘localism’ is very convenient for
ministers at a time when they are imposing the deepest cuts in local authority budgets
in living memory. Over the next four years English councils need to reduce their
spending by around a quarter with much of the pain having been ‘front end loaded’ in
the first and second years. Councils are being forced to shed staff and reduce
services. In this context localism might be seen as little more than the
decentralisation of penury, a means whereby ministers can put some distance in
voters’ minds between themselves and the effects on local services of the national
debt crisis. Whereas the Blair/Brown governments offered local authorities
additional investment in exchange for improvements in performance, the offer from
Cameron and Clegg is the granting of limited freedoms and flexibilities in return for
doing the government’s dirty work.
Whilst the eventual scope and significance of the Government’s version of ‘localism’
is difficult to predict at this stage, the clearest and most advanced component of its
agenda is the removal of top down targets and dismantling of external inspection, and
in itself poses questions which merit attention from both policy makers and scholars,
and might form the basis for a future research agenda.
First, there is a real risk that faced with the challenges of making large budget cuts
and without the external stimulus provided by inspection, local authorities will lose
sight of the need to continue to improve service quality. This could have a
particularly detrimental impact on vulnerable groups who are most dependent on
public services.
Second, there are questions about the effectiveness of self regulation – either by
individual authorities or the local government sector as a whole. The literature
suggests that there are a number of advantages to regulatory regimes in which there is
little relational distance between regulators and regulatees. It is suggested that there
is less incentive for gaming and a greater chance of learning from ‘best practice’.
However, the risks of regulatory capture are also higher (Ayres and Braithwaite
1992). This raises questions about the ability of the Local Government Association, as
a membership organisation whose primary purpose is to represent local government
interests, to bring failing authorities to book.
The top down performance frameworks of the past have been criticised by scholars
for leading to an ‘intensification of managerialism at the expense of local democracy’
(Davies 2008: 4) which left ‘councillors often being little more than elected managers, rather than local politicians able to articulate and act upon the wishes of the
citizenry’ (Copus 2006: 5). Effective self regulation by individual councils implies a
need for councillors to make informed and honest assessments of the performance of
their councils and challenge managers where there are problems. But the harsh
realities of local party politics make this a tall order. Councillors from controlling
groups will naturally want to defend their administration’s record, whilst opposition
groups will be tempted to seek political advantage by exaggerating problems.
Similarly, existing research on public awareness of and attitudes to the performance
of local public services cautions against inflated expectations of public scrutiny of
council performance. Surveys conducted by Ipsos MORI over the last ten years show
that there is little public appetite for greater involvement in monitoring services. Most
people are passive recipients of performance data. Only a quarter say that they would
like to have more of a say about the way in which their council runs things. Thus while
the OnePlace website (established by the Audit Commission as a vehicle for
disseminating information from CAAs) attracted large numbers of visitors, very few
of these were members of the general public. Most of the hits came from local
authorities, other public bodies, the media and third sector organisations.
And there must also be doubts about whether the public will trust performance data provided
by authorities themselves. Audit Commission inspections provided independent
judgements about a council’s capacity and performance which could be used by
elected members, businesses, voters and other local interest groups to hold their local
authority to account. It is not clear whether these groups and other local stakeholders will
regard performance information provided by authorities themselves as being similarly
reliable and impartial. In the absence of the Commission there may therefore be a
need to consider how to validate and safeguard the integrity of locally produced data.
And there are important questions about how authorities can benchmark their
performance in order to identify good practice in the absence of comparative national
indicators.
Finally, cuts in public spending make it even more important for local services to
avoid duplication and to find ways to pool their resources. But the absence of ‘joined
up’ assessments to encourage them to work together could mean that budget cuts
drive them back into traditional ‘silos’ in an effort to protect their own budgets, with
adverse consequences for both efficiency and outcomes.
All of this suggests that self regulation (whatever form it eventually takes) is unlikely
to come cheap. To be effective it will require considerable investment in gathering
robust performance data and equipping local politicians and voters with the
knowledge and skills they will need to interpret information and use it to hold services
to account. It remains to be seen whether local authorities facing massive budget
pressures will be willing to take on these additional burdens.
References
Andrews, R. (2004) ‘Analysing Deprivation and Local Authority Performance’ Public
Money and Management 24 (1) pp. 19-26.
Andrews, R. and Martin, S.J. (2010) ‘Regional variations in public service outcomes:
the impact of policy divergence in England, Scotland and Wales’ Regional Studies
44 (8) pp. 919-934.
Audit Commission (2001) Changing Gear: Best Value annual statement 2001,
London: Audit Commission.
Audit Commission (2002) Comprehensive Performance Assessment: Scores and
analysis of performance for single tier and county councils in England,
London: Audit Commission.
Audit Commission (2009) Final score: the impact of the Comprehensive Performance
Assessment of local government 2002-08, London: Audit Commission.
Ayres, I. and Braithwaite, J. (1992) Responsive regulation, Oxford: Oxford University
Press.
Ball, A., Broadbent, J. and Moore, C. (2002) ‘Best Value and the Control of Local
Government’ Public Money and Management, 22 (2) pp. 9-16.
Barber, M. (2007) Instruction to Deliver: Tony Blair, the Public Services and the
Challenge to Deliver, London: Politicos.
Bevan, G. and Hood, C. (2006) ‘What's measured is what matters: targets and gaming
in the English public health care system’, Public Administration, 84 (3) pp. 517-538.
Blair, T. (1998) A new vision for local government, London: IPPR.
Boyne, G.A. (1998) ‘Competitive tendering in local government services: a review of
theory and evidence’ Public Administration 76 (4) pp. 695-712.
Boyne, G.A. (2002) ‘Concepts and Indicators of Local Authority Performance’
Public Money and Management 22:2 pp. 17-24.
Choi, Y-C. (1999) The dynamics of public service contracting: the British experience,
Bristol: Policy Press.
Copus, C. (2006) ‘British local government: A case for a new constitutional
settlement’, Public Policy and Administration 21(1) pp. 4-21.
Coulson, A. (1998) ‘Trust and contract in public sector management’ in Coulson, A.
(ed) Trust and contracts: relationships in local government, health and public
services, 9-34, Bristol: Policy Press.
Coulson, A. (2009) ‘Targets and Terror: Government by Performance Indicators’,
Local Government Studies 35 (2) pp. 271-281.
Davies J. (2008) ‘Double devolution or double dealing? The local government white
paper and the Lyons review’, Local Government Studies 34 (1) pp. 3-22.
Davis, H. and Martin, S.J. (2008) Public Services Inspection in the UK, London:
Jessica Kingsley.
De Groot, L. (2006) ‘Generating Improvement from Within’ in Martin, S.J. (ed)
Public Service Improvement: policies, progress and prospects, Abingdon:
Routledge.
Department of the Environment, Transport and the Regions (1999) Protocol for intervention in failing
councils. London: DETR.
Downe, J. and Martin, S.J. (2006) ‘Joined Up Policy in Practice? The Coherence and
Impacts of the Local Government Modernisation Agenda’, Local Government
Studies 32 (4) pp. 465-488.
Downe, J. and Martin, S.J. (2007) ‘Regulation inside government: processes and
impacts of inspection of local public services’, Policy and Politics 35 (2) pp.
215-232.
Downe, J.D. (2008) ‘Inspection of local government services’, in Davis, H. and
Martin S.J. (Eds) Public Services Inspection in the UK, pp. 18-36. London:
Jessica Kingsley.
Downe, J.D., Grace, C.L, Martin, S.J. and Nutley, S.M. (2008) ‘Best Value Audits in
Scotland: Winning without scoring?’, Public Money and Management 28 (1)
pp. 77-84.
Downe, J.D., Grace, C.L, Martin, S.J. and Nutley, S.M. (2010) ‘Theories of public
service improvement: A comparative analysis of local performance assessment
frameworks’, Public Management Review, 12 (5) pp. 663-678.
Goldsmith, M. (2002) ‘Central control over local government – a western European
comparison’, Local Government Studies 28 (3) pp. 91-112.
Grace, C. and Martin, S.J. (2008) Getting Better all the time? Local government
improvement and its prospects, London: IDeA.
Hayden, C., Jenkins, L., Rikey, B., Martin, S.J., Downe, J., Lee Chan, D., McLartyn,
K., Pierce, A. and Collins, B. (2010) Comprehensive Area Assessment: an
evaluation of year 1, London: Shared Intelligence.
H. M. Treasury (2010) Public Expenditure Statistical Analyses 2010, London:
HMSO.
Hood, C. (2007) ‘Public service management by numbers: Why does it vary? Where
has it come from? What are the gaps and the puzzles?’ Public Money and
Management 27 (2) pp. 95-102.
Humphrey, J.C. (2002), ‘A scientific approach to politics? On the trail of the Audit
Commission’, Critical Perspectives on Accounting 13 (1) pp. 39–62.
IDeA (2009) Setting the pace: Developing a framework for sector-led help, London:
IDeA.
Ipsos MORI (2007) Frontiers of Performance in Local Government IV, London: Ipsos
MORI.
Jacobs, R. and Goddard, M. (2007) ‘How do Performance Indicators Add up?’ Public
Money and Management 27 (2) pp.103-110.
James, O. and Wilson, D. (2010) ‘Introduction: Evidence from the Comparison of
Public Service Performance’ Evaluation 16 (1) pp. 5-12.
Kelly, J. (2003) ‘The Audit Commission guiding, steering and regulating local
government’, Public Administration 81 (3) pp. 459–476.
Laffin, M. (2008) ‘Local Government Modernisation in England’ Local Government
Studies 34 (1) pp. 109-125.
Leach, S. (2010) ‘The Audit Commission’s view of politics’ Local Government
Studies 36 (3) pp. 445-461.
LGMB (1997) Flexible working: Working patterns in local authorities and the wider
economy, London: LGMB.
Local Government Association (2011) Taking the lead: self regulation and
improvement in local government, London: LGA.
Lowndes, V. (2002) ‘Between rhetoric and reality: Does the 2001 White paper reverse
the centralising trend in Britain?’, Local Government Studies, 28 (3) pp. 135-147.
Martin, S.J. (2009) The state of local services: performance improvement in local
government, London: Department of Communities and Local Government.
Martin, S.J., Downe, J.D., Grace, C.L. and Nutley, S.M. (2010) ‘Validity, Utilization
and Evidence-based Policy: the Development and Impact of Performance
Improvement Regimes in Local Public Services’ Evaluation 16 (1) pp. 31-42.
McLean, I., Haubrich, D. and Gutierrez-Romero, R. (2007) ‘The Perils and Pitfalls of
Performance Measurement’ Public Money and Management 27 (2) pp. 111-119.
McSweeney, B. (1988), ‘Accounting for the Audit Commission’, The Political
Quarterly 29 (1) pp. 28–43.
Rao, N. and Young, K. (1995) Competition, contracts and change: the local authority
experience of CCT, London: LGC Communications.
Stewart, J. (2003) Modernising British local government: an assessment of Labour’s
reform programme, Basingstoke: Palgrave Macmillan.
Swann, P., Davis, H., Kelleher, J., Ritters, K., Sullivan, F. and Hobson, M. (2005)
Beyond Competence: Driving Local Government Improvement, London: The
Local Government Association.
Travers, T. (2004) ‘Local government finance: busy going nowhere?’ in Stoker, G.
and Wilson, D. (eds) British Local Government into the 21st Century, pp. 151-164.
Basingstoke: Palgrave Macmillan.
Vincent-Jones, P. (1999) ‘Competition and contracting in the transition from CCT to
Best Value: towards a more reflexive regulation’, Public Administration 77 (2)
pp. 273-291.
Walker, D. (1998) ‘The new local government agenda: a response’, Public Money and
Management 18 (1) pp. 4-5.
Walsh, K. and H. Davis (1993) Competition and service: the impact of the Local
Government Act 1988, London: HMSO.
Walsh, K., Deakin, N. Smith, P., Spurgeon, P. and Thomas, N. (1997) Contracting for
change. Oxford: Oxford University Press.
Wilson, D. (2003) ‘Unravelling control freakery: redefining central-local government
relations’, British Journal of Politics and International Relations 5 (3) pp. 317-346.