How to fulfill user needs – metadata, administrative data and processes
Lars Thygesen, Director, Statistics Denmark
Mogens Grosen Nielsen, Chief Adviser, Statistics Denmark
Abstract: Statistical organizations disseminate statistics to an extent never seen before. We
have developed tools that handle increasing amounts of data. However, in the production of statistics we tend to look inward. Our primary focus is on whether macro figures are reasonable and published on time. We do not take the users' situation sufficiently into account, e.g. the increasing complexity of the ways statistics are used. Based on in-depth analysis
of user needs, we believe that it is an urgent task to uncover needs and to give end-users
better assistance when they use statistics or wish to find relevant statistics. Starting from the consequences of going from an industrial society to a globalized knowledge society, the paper
discusses how to handle issues related to user needs. Our users must constantly increase their
level of information in order to solve increasingly complex problems. This requires
processes at statistical organizations that handle user needs and feedback from users. The
paper suggests that quality and metadata be defined and implemented with the aim of supporting these processes. Regarding the organization of data, the paper suggests using
administrative data as “generic fulfillment of user needs”, since user needs – to a certain
extent – can be fulfilled dynamically by combining data from different sources. The paper
concludes with a suggestion of an interdisciplinary approach using the following models:
Generic Statistical Business Process Model (GSBPM), a project management model, a system development model and a metadata/quality model.
1. Introduction
The main point of this paper is that statistical quality and metadata should be defined and implemented in
such a way that users get profound help in their day-to-day work. Producers of official statistics have, so
far, been too focused on their own processes and their own concepts of quality. But today there is a strong
need to focus on users and deliver services that are less industrial, more flexible and adjusted to individual
needs.
The paper first presents results from user consultations in Denmark. It argues that business processes have changed profoundly in the transition from industrial society to knowledge society. In order to fulfil user needs we must go from “silos” to a focus on business processes and users. The definition and use of quality and metadata concepts depend on the users’ business processes for which the information is intended. Understanding of users’ business issues, needs and processes should be supported by targeted metadata models and information retrieval systems. In addition, we should establish processes (e.g. using Facebook) that give us new ways to communicate with users and to understand user needs. It is argued that statistics based on administrative registers can be organised so that many needs can be fulfilled by combining sources; we call this generic fulfilment of user needs. Used in this way, the combined sources are an under-exploited goldmine. The paper suggests an interdisciplinary approach using the following models: a process model (GSBPM), a project management model, a system development model and a metadata/quality model.
2. User consultations
2.1 Metadata should guide and give value to users
Statistics Denmark is presently working to integrate its metadata systems, with a special emphasis on making metadata available to end users, with content and in a form that supports them in their business processes, in situations where they use or consider using official statistics. To this end, we have
undertaken rather deep consultations with key user segments about their business processes. We started
with this question: How well do we know the preferences of our users? Statisticians and IT people may
have very good ideas about what will benefit users, and there is a temptation to “just do it”. Our idea was
that it would be worthwhile to try to better understand how users wish to use statistics, and which role
metadata could and should play in this process.
This means trying to see metadata not just as documentation produced for its own sake, e.g. for archiving, but as part of a process that gives value to our users of today. Some documentation might prove to be of less interest to these users than we imagined. We want to give priority to developments that users will actually use in their business processes.
As part of the consultations we presented a prototype of an integrated metadata-system. Our hypothesis
was that we could improve the situation for users by making our existing metadata-elements (quality,
concepts, classifications and variables) inter-operate so that you could go from, e.g., one concept to all
quality declarations of statistics where this concept is used. Users should be able to navigate across the
whole space of metadata. We believed that this would be valuable even if it might not be possible to
improve the basic metadata.
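As an illustration of the kind of interoperation this prototype aims at, the sketch below shows how a simple cross-reference index could let a user go from one concept to all quality declarations of statistics that use it. The class names, identifiers and in-memory index are illustrative assumptions of ours, not the actual prototype.

# Illustrative sketch only: cross-navigation between metadata elements.
# Class names and identifiers are hypothetical, not the Statistics Denmark prototype.
from dataclasses import dataclass, field

@dataclass
class QualityDeclaration:
    statistics_id: str                              # e.g. "register-based-unemployment"
    concept_ids: set = field(default_factory=set)   # concepts used by the statistic

class MetadataIndex:
    """Lets a user navigate from a concept to every quality declaration using it."""

    def __init__(self, declarations):
        self._by_concept = {}
        for decl in declarations:
            for concept_id in decl.concept_ids:
                self._by_concept.setdefault(concept_id, []).append(decl)

    def declarations_using(self, concept_id):
        return self._by_concept.get(concept_id, [])

# Usage: from the concept "unemployed person" to all statistics documenting it.
index = MetadataIndex([
    QualityDeclaration("labour-force-survey", {"unemployed-person", "labour-force"}),
    QualityDeclaration("register-based-unemployment", {"unemployed-person"}),
])
for decl in index.declarations_using("unemployed-person"):
    print(decl.statistics_id)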
We conducted three focus group meetings on documentation with various groups of users:
1. Intensive users, mostly government
2. Municipal and regional users
3. Education and the media
The focus groups each consisted of around 10-14 handpicked users, and were chaired by an external
consultant, while a number of observers from Statistics Denmark were present but not allowed to speak
unless asked (which was quite difficult). This gave very lively and frank discussions, and many aspects of
Statistics Denmark's services were constructively criticized. Observations were not limited to
documentation but also included the statistics produced, revision policy, etc.
2.2 The results of the consultations
The consultations in general confirmed that we needed a much better understanding of our users - at least
those users with a complex use of statistics. Many of the tasks that users face today are more complex
compared to traditional use of statistics. In order to give the user appropriate help it is required that we
understand the business processes of the user. An example of this is a journalist who looks for statistics to illustrate the differences between southern and northern European countries'
economies. The journalist would often prefer to have only brief and easily understood data and metadata
that can be directly included in the daily newspaper. Other examples are documentation projects where
you want to have indicators that shed light on the development of, say, conditions for elderly people. Here
it is crucial to understand not only the data but also the business processes in the municipalities where data
are collected. It is also necessary to understand the business processes in ministries and municipalities,
where the data are used in political decision making. Other highlights from the consultations are: a) need
for more global statistics; b) Statistics Denmark should be less sectorial; c) metadata should give guidance
to both less skilled and expert users.
The list below shows some key conclusions:
− The metadata prototype and the method with four interconnected components (quality declarations, concepts, variables and classifications) won strong support in all three groups. It was found
that such a development of metadata would provide a good and logical approach to documentation
− There is great need for metadata among intensive users, slightly less among municipalities, and the
media say they have almost no need – deadline is now, they have no time to read anything
− There are essentially two ways in which users search for statistics: 1) ad hoc, broad searches, and 2) deep searches in a fixed subject area that they know well. For broad searches, documentation was important, but in all groups there is a certain tendency to call a Statistics Denmark expert and ask.
− Documentation of data breaks and changes of definitions is insufficient. Each break should be
mentioned and well explained in relation to the figures
− Statistics Denmark was encouraged to produce long time series which are corrected for breaks,
otherwise each user must do it himself, which is not necessarily better, and in any event costs
resources and gives different results
− Revisions and revisions practices should be better documented
− Statistics Denmark must explain uncertainty and possible error sources, explaining what data can,
respectively cannot be used for
− Documentation of comparability across domains is often very deficient and it is a problem that
Statistics Denmark's employees often do not know enough about adjacent statistical domains
− Users are often unaware of the contents of quality declarations. They should be supplemented with
'pop up' messages in Statbank.dk, especially where there is a problematic figure
− The variables documentation system was rated as very relevant; in particular, the so-called “high-quality documentation” was praised. There is a wish to be able to distinguish between variables used in register data (micro data) and in aggregated data (StatBank). There should be a filter in the search, so that users can choose to see only the desired type of variables
− All documentation now on paper should be made digitally accessible. There is some important
documentation that exists only in books and Statistical Reports and is therefore hard to find.
Some messages on statistical production:
− Statistics Denmark is generally too sectorial. It is hard to use Statistics Denmark across internal
organisational boundaries. Adjacent statistics don’t relate to each other in publications
− It's important for many users to compare Statistics Denmark’s numbers with international figures,
typically from Eurostat or the OECD, but it is not easy. Where are the corresponding figures?
Eurostat, in particular, has much less documentation than Statistics Denmark. Users often try to find
an indicator in e.g. OECD.Stat where the number for Denmark is similar to the Statistics Denmark
number. It would be very helpful if Statistics Denmark could link to relevant sites.
− Several expressed the desire for more development in the statistics production in relation to tasks
that users have (relevance). The statistics must keep abreast of developments in society
− Users want to be informed whether statistics are preliminary or definitive numbers
− Users would like to participate actively in development groups around new statistical domains
Some messages regarding dissemination:
− Users would like to have one entrance to all documentation: from the statistics there should be access/link to the right spot within the integrated documentation model
− Be honest in the announcement if Statistics Denmark estimates that data quality is poor
− Definitions, comments to tables and figures should appear when the mouse is moved over the cells
(in StatBank); likewise, warnings if there are breaks or concerns about data quality
3. From industrial society to knowledge society
3.1 From stable to dynamic processes
This user consultation and many other surveys show that there is a need for other types of products and services than those our statistical institutions were originally set up to deliver [11]. The world has
changed. This new society has been given many names: post-industrial, post-modern, information society,
knowledge society, etc. In this paper we use the word knowledge society. One aspect of the knowledge
society is that many of our users’ business processes have changed profoundly. In the industrial society,
the processes were stable, the products produced were relatively stable and did not have a high content of
knowledge. The products were produced for a mass-market without customers interfering in the
production process. Today, the users in many places have “become part of the organisation”.
3.2 Main challenge: complexity
However, society still “talks about itself” as if we were culturally living in a world of nation states with impenetrable physical borders. This reduces the focus on cross-border topics, e.g. finance and the environment. At the organisational level many organizations still organize and talk about themselves as if
the foundation were still physical industrial production with the production of knowledge as a residual
phenomenon. The main challenge in many organisations today is how to handle social complexity related
to production of knowledge rather than how to convert raw materials to tangible products sold on a market
[6], [7], [12]. Nationally grown statistical organizations are still, to a certain extent, influenced by this ‘old days’ thinking from when they were originally formed. See Annex 1, History of statistics – from state
secrets to independence of political interests.
In brief, increasing complexity is the problem. For statistical organisations these considerations have at
least two main consequences: First, what categories (i.e. what statistics) should we use to describe the
knowledge based society in order to fulfil contemporary user-needs? Second: how should we be
organised to fulfil user-needs in a society whose production is primarily dependent on handling of
knowledge?
3.3. From “silos” to focus on business processes and users
Many discussions on initiatives on changing processes in the international statistical community are about
the problem on organisation and thinking in silos. This discussion is present in the literature on business
change: “most companies had focused on dividing processes into specific activities that were assigned to
specific departments. Each department developed its own standards and procedures to manage the
activities delegated to it. Along the way, in many cases, departments became focused on doing their own
activities in their own way, without much regard for the overall process. This is often referred to as silo
thinking, an image that suggests that each department on the organization chart is its own isolated silo” [5]. As a reaction to this we see a movement toward dynamic “social-system thinking”. Regarding
processes the change has gone from focus on optimising stable processes to focus on what users need and
adjustment of the processes accordingly. Assembly lines at the Ford T factory are an example of the
former. Dynamic production lines in the Toyota factories are an example of the latter type of business
processes. The main focus in the former is on how to split the business processes into functional units. The
main focus in the latter is how to build dynamic business processes as value-chains based on user needs
and feedback from users. The challenge is: how do we create desirable social systems, given institutions as formally decided frameworks? See Annex 2, How to create desirable social systems?
3.4 Which statistics should we produce and how should we organize?
Regarding the product, we want to capture all relevant aspects of society that users need. The existing
approach needs to be supplemented with more user-focus. We must establish processes that provide the
right knowledge for the user. But how, more precisely, should we interact with users? The GSBPM (depicted in Figure 1 below), as a high-level model, is a starting point that helps simplify the problem. Processes on quality and metadata are placed as supporting processes. Regarding feedback from the
outside world, we use a model that focuses on ensuring creation of knowledge and learning. We will
distinguish between single-loop and double-loop feedback (see note 1).
Figure 1. High level business process model.
3.5 How to benefit from business process models in statistics
There have been many discussions of the purposes of using the GSBPM. Should we industrialize
statistics (as proposed by the High-Level Group for Strategic Developments in Business Architecture in
Statistics (HLG-BAS))? Should we focus on standardisation of work-processes? Learning? Automation
with IT? How should we organize? There are many on-going initiatives, some with good results and some
with less good results. The GSBPM has given us a good framework but also given us new challenges.
It is the impression of the authors that many initiatives do not have the right starting point as described in
the chapter above. We must be ready to organise in a way that matches the external complexity and
ensures that all processes add value for the users. These ideas are expressed clearly in the value chain
thinking, described above. Users should not be offered “any colour of their car, as long as it is
black”. Statistical organisations must be prepared to react almost immediately to demands from users who
want a car personalised to their needs or taste.
We do this by creating a flexible process-oriented organisation, where users are involved in defining needs
and outputs, and where we have a non-silo organisation that focuses on the horizontal value contribution of each process. Do the processes give value to users? This also includes non-operative functions like management, IT and Human Resources.
4. The quality concept and user needs
4.1 History
In continuation of the discussion above, the definition and use of quality concepts obviously depend on
the users’ business processes for which the information is intended. In this chapter we argue that the
quality concept should be defined more precisely with user needs as the main focus. We must have a
description of quality as a way to reduce complexity in relation to users.
Historically, it is possible to see parallels between the movement from industrial society to knowledge society on the one side, and the movement of the quality concept from a mainly product focus to a mainly user focus on the other. Traditionally, “the focus of quality control is inspection and correction. From a batch
of production output, a sample is selected and each item is inspected for defects. The number of defective
items is measured and, if that number exceeds a certain predetermined maximum, the whole batch is
rejected, meaning that it is scrapped or sent back to the production line to be reworked.”[8] See annex 3.
About TQM and quality concepts defined by Eurostat.
4.2 How to integrate quality elements into the business process model?
Output quality is achieved through process quality. Eurostat emphasises two broad aspects:
“Effectiveness: which leads to the outputs of good quality; and Efficiency: which leads to production at
minimum cost for NSO’s and respondents.” [8] But how to define and implement quality more precisely?
Eurostat only gives some general guidelines referring to CoP principles.
Many NSIs define and publish quality metadata as an end product; e.g. the quality report is produced and used after the end of production. However, the quality content should be implemented in a way that ensures “fitness for use”, which is not the case when quality is defined only after production has ended. It is suggested that Relevance, Accuracy and Reliability, Timeliness and Punctuality, Accessibility and Clarity, and Coherence and Comparability should be described together with the user as an integral part of investigating user needs. The reason is that quality and methodology information is valuable in discussions with both external and internal users.
We can differentiate users along a continuum from beginner through experienced to expert user, and processes and applications should be designed accordingly. For the beginner, methodology and quality descriptions should be relatively easy to understand; for the experienced and expert user, the descriptions and applications can be more complex. The processes themselves should be designed to support the needs and levels of the users.
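As a sketch of what targeting by user level could mean in practice (the field names and the three levels are our own illustration, not an ESS or Eurostat specification), a quality description could carry several layers of the same dimension and let an application show the layer matching the user:

# Illustrative only: one quality dimension described at three levels of detail,
# so that applications can show the layer matching the user's experience.
from dataclasses import dataclass

@dataclass
class QualityDimensionDescription:
    name: str            # e.g. "Accuracy and Reliability"
    beginner: str        # short, plain-language explanation
    experienced: str     # more detail, e.g. main error sources and their size
    expert: str          # full methodological description, or a link to it

    def for_user(self, level: str) -> str:
        """Return the description layer matching the user's level."""
        return getattr(self, level)

accuracy = QualityDimensionDescription(
    name="Accuracy and Reliability",
    beginner="The figures are estimates and may change slightly when revised.",
    experienced="Main error sources are non-response and late registrations.",
    expert="See the methodology report for variance estimation and imputation details.",
)
print(accuracy.for_user("beginner"))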
5. The metadata concept and user needs
5.1 History
Metadata was first introduced by Bo Sundgren in 1972. He used the following distinctions:
(a) “the real-world phenomena that we are interested in: the object system
(b) information about the object system
(c) data representing information about the object system
.... Accordingly, the data base should contain quality information and other information about the information contents of the data base. We shall refer to such information as ‘information on information’” [15].
Later, ISO 11179 gave a systematic view on metadata. This standard has been followed by many metadata applications, e.g. in Sweden, Portugal and Canada. These systems all have different ways of interpreting ISO 11179. It has been popular to implement four subsystems. We suggest using the following high-level metadata model:
Figure 2. High-level metadata model
The model includes descriptions of content (methodology and quality declarations etc.), variables, classifications and concepts. The model has been implemented as separate systems in Portugal and other countries. Lately, the XML-based standards DDI and SDMX have been implemented in many places. The advantage of using DDI is that many elements are already defined in the DDI-Lifecycle standard, and many tools are available for it, so you do not need to build a metadata system from scratch. The development of GSIM (Generic Statistical Information Model) provides guidance at the conceptual level [16].
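As an illustration of how the four components of the model in Figure 2 can be related through identifiers (the class and attribute names below are our own sketch and do not follow the GSIM or DDI-Lifecycle schemas):

# Sketch of the high-level metadata model: content descriptions (methodology and
# quality), variables, classifications and concepts, linked by identifiers.
# Names are illustrative and do not reproduce any standard's schema.
from dataclasses import dataclass, field

@dataclass
class Concept:
    concept_id: str
    definition: str

@dataclass
class Classification:
    classification_id: str
    categories: list = field(default_factory=list)

@dataclass
class Variable:
    variable_id: str
    concept_id: str              # the concept the variable measures
    classification_id: str       # the classification used for its values

@dataclass
class ContentDescription:
    # Methodology and quality declaration for one statistic, referring to variables.
    statistics_id: str
    variable_ids: list = field(default_factory=list)

# Usage: the links make it possible to navigate from a content description to its
# variables, and from a variable to its concept and classification.
employment_status = Classification("employment-status", ["employed", "unemployed", "outside the labour force"])
unemployment = Concept("unemployment", "Being without work, available for and seeking work")
status = Variable("labour-market-status", unemployment.concept_id, employment_status.classification_id)
declaration = ContentDescription("register-based-labour-force", [status.variable_id])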
5.2 How to integrate metadata elements into the business process model?
In the chapter on quality it was suggested that quality-elements are defined during the need definition
process. Other types of metadata – concepts, classifications and variables – should be defined in a similar
way. We must match the user’s context. This means that different users (including internal users at the
NSOs) have different needs. As part of the processes in GSBPM this should be taken into account. We can
distinguish between three types of users: beginner, experienced and expert. Each group should be
supported. This means that the applications and supporting documents should be targeted at each user group. Besides targeted documentation, each user group should be supported by social media (blogs, Facebook, etc.).
6. Register-based statistics and generic fulfilment of user needs
6.1 From one survey per user need to fulfilling many user needs per survey
Traditionally, official statistics have been produced using surveys and censuses aiming at fulfilling well-defined user needs. This way of production can be well calibrated with needs that are known beforehand.
Outputs can also be adapted – to a limited extent – to emerging needs that were not taken into account
when planning the survey.
We have witnessed over the last 20 years an explosion and diversification of user needs, calling for ad hoc
production of statistics when new needs arise. In most cases, fulfilling such needs cannot wait for the NSO
to carry out a new survey, and the cost and response burden of such a survey would be a serious obstacle.
In Denmark and several other countries this has led to the development of a system allowing for linking
and reuse of statistical data across statistical domains. This information system requires the use and
storage of identification numbers for statistical units (persons, enterprises, dwellings and real estate). It is
based on Svein Nordbotten’s notion of an Archive Statistical System ([1] and [2]).
In this system, we try to take a more integrated view of the need for information as input to knowledge processes. We build an integrated model of “reality”, rather than seeing statistics as a number of isolated
surveys, or islands of information. The model builds on the most important classes of entities (or objects)
that our users wish to analyse. These are persons (and families / households), business units, and
dwellings / real estate. In most cases, users are interested only in sub-groups of one or more of these 3
classes, e.g. unemployed persons in the municipality of Copenhagen as of 1 July 2012.
The objects are interlinked by relations, most important being Person living in dwelling (Habitation) and
Person working in workplace (Employment). The model holds information about states and processes
affecting the objects, e.g. unemployed persons in the municipality of Copenhagen as of 1 July 2012 who
were hospitalized during the preceding year. A simple information model is shown here:
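In code form, such an information model can be sketched as follows; the class and attribute names are illustrative assumptions of ours and do not reproduce the actual figure or register definitions:

# Illustrative sketch of the information model: object types (persons, business
# units, dwellings) linked by relations such as Habitation and Employment.
from __future__ import annotations
from dataclasses import dataclass
from datetime import date

@dataclass
class Person:
    person_id: str
    birth_date: date

@dataclass
class Dwelling:
    dwelling_id: str
    municipality: str

@dataclass
class BusinessUnit:
    unit_id: str
    activity_code: str               # e.g. an industry classification code

@dataclass
class Habitation:
    # Relation: person living in dwelling, valid for a period of time.
    person_id: str
    dwelling_id: str
    from_date: date
    to_date: date | None = None      # open-ended while the person still lives there

@dataclass
class Employment:
    # Relation: person working in a workplace (business unit).
    person_id: str
    unit_id: str
    from_date: date
    to_date: date | None = None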
Micro data must be efficiently organized in an archive that at the same time protects the information from
any attempts to invade privacy and makes it possible to use it whenever needs arise. The data should thus
be linked, when necessary, using identifiers for linkage.
In order to fill in most of the model, we make intensive secondary use of administrative sources stemming
from all sectors of (public) administration. Cornerstones are the three basic registers keeping track of the
populations of the most important units in our statistics: Persons, business units and dwellings. A large
number of other administrative registers provide data, allowing us to make estimates of the processes or
events going on in different parts of the model (e.g. death), as well as the states that objects may be in (e.g.
alive, dead).
In estimating the variables, registers are combined with sample surveys as sources of the system. They are
linked with the other data as needed in the estimation process, and they are stored in the archive so they
can be used alongside register data for other end use purposes.
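As a stylised example of the kind of combination this enables (the column names, values and the use of pandas are illustrative assumptions, not the actual archive implementation), the ad hoc question mentioned above (unemployed persons in the municipality of Copenhagen as of 1 July 2012 who were hospitalized during the preceding year) could be answered by linking two register extracts on the person identifier:

# Illustrative sketch: linking two register extracts via the person identifier.
# Data, column names and the pandas-based approach are hypothetical.
import pandas as pd

unemployment = pd.DataFrame({
    "person_id": ["p1", "p2", "p3"],
    "municipality": ["Copenhagen", "Copenhagen", "Aarhus"],
    "unemployed_on": ["2012-07-01", "2012-07-01", "2012-07-01"],
})
hospital_admissions = pd.DataFrame({
    "person_id": ["p2", "p4"],
    "admission_date": ["2012-03-15", "2011-05-02"],
})

# Link the sources on the statistical unit identifier, then apply the ad hoc definition.
linked = unemployment.merge(hospital_admissions, on="person_id", how="inner")
in_period = (linked["admission_date"] >= "2011-07-01") & (linked["admission_date"] < "2012-07-01")
result = linked[(linked["municipality"] == "Copenhagen") & in_period]
print(len(result), "person(s) match the combined definition")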
In summary, the archive statistical system reuses and combines data from multiple sources, serving needs
that have not been foreseen at the time of collection. In addition the system offers great advantages that
are highly appreciated by end users:
− Providing for consistency across traditional statistical domains, since the same estimates of variables are used in several domains
− Allowing longitudinal studies where individual objects (e.g. persons) can be followed over a number of years in order to study inference.
6.2 Metadata should guide the users
When building such an integrated system based on multiple sources and with innumerable possible uses, it becomes a huge challenge to enable users to understand the statistics. Potential users should be able to understand whether some parts of the system can be combined in order to create useful knowledge, relevant to their decisions and possessing sufficient accuracy, timeliness, etc. The metadata we make
available to potential users must enable them to make such evaluations. This requires a generic metadata
system which allows for many different views on the metadata, so that the different user communities can
be accommodated. It is not possible for an NSI to fully know or understand the business processes of all
these diverse user communities. Therefore we must consult carefully with users in order to find out what
works and what does not work.
7. Putting the pieces together – an interdisciplinary approach
Statistics Denmark is presently conducting a pilot project as part of implementing our metadata strategy.
The purpose is to establish a common understanding of, and guidelines for, documentation that is integrated with the GSBPM. The documentation includes descriptions of content (methodology and
quality declarations etc.), variables, classifications and concepts. In addition we include documentation of
processes, user-manuals and IT.
But how should we integrate the work on documentation with the GSBPM? Our ambition with the pilot is
to integrate four disciplines/models: process model, project management model, system development
model and metadata/quality-model. Furthermore the ambition is to use the insight on how to fulfil user
needs as described in the chapters above. GSBPM gives the overall structure and guides us in
understanding how and when to produce the expected product for the user. In the pilot study we focus on how the GSBPM is used in the first three main processes: needs, design and build.
Regarding the metadata and quality we have placed these as supporting processes (see figure 1 above).
These processes must ensure that the metadata-elements mentioned above are integrated into the approach
by having guidelines on where to define and process quality (related to users) and subsequently definition
of concepts, classifications and variables. As part of the metadata pilot project and as part of the
implementation of the Register Strategy in Social Statistics, the use of registers as generic sources is
integrated. This implies that the possibilities for combining sources are discussed with users. Subsequently
the registers are used in the production of the desired result to the user.
We are using Colectica as the metadata tool and DDI-Lifecycle as the standard for storing metadata. Regarding project management, system development and processes, we use tailor-made templates that as far as possible follow established international standards (e.g. the Project Initiation Document, Business Process Model and Notation, the standard for specifying use cases, etc.). Project initiation will be finalised after the needs process. Hereafter the remaining development (design, build, etc.) must be planned and controlled following the usual PRINCE2 guidelines.
8. Concluding remarks
The paper has shown that quality and metadata should be defined and implemented in such a way that users get profound help in their day-to-day work. Solutions and models that we offer must be
linked dynamically to our users’ business issues and processes. This requires processes at statistical
organizations that handle user-needs and feedback from users. The paper concludes with a suggestion of
an interdisciplinary approach using the following models: process model (GSBPM), project management
model, system development model and metadata/quality-model.
References
[1] Nordbotten, S. (1961): Elektronmaskinene og Statistikkens Utforming i Årene Framover (Computers and the Future Form of Statistics). The Statistical Conferences of the Nordic Countries in Helsinki 1960. Statistical Reports of the Nordic Countries, Vol. 7, p. 135-141. Helsingfors 1981
[2] Nordbotten, S. (1966): A Statistical File System. Statistisk Tidsskrift 1966:2. Stockholm
[3] Sundgren B., Thygesen L. (2009): Innovative approaches to turning statistics into knowledge.
Statistical Journal of the IAOS: Journal of the International Association for Official Statistics. Amsterdam
[4] Thygesen, L. (1983): Methodological Problems Connected with a Socio-Demographic Statistical
System Based on Administrative Records. Bulletin of the International Statistical Institute, Volume L
Book 1, Madrid
[5] Harmon, Paul. (2007): Business Process Change – A Guide for Business Process Managers and BPM
and Six Sigma Professionals. Massachusetts, USA.
[6] Senge, Peter (1990): The Fifth Discipline: The Art & Practice of the Learning Organization. New
York, USA
[7] Qvortrup, Lars (2001): Det lærende samfund - hyperkompleksitet og viden. Gyldendal, København.
[8] Eurostat (2009): ESS Handbook for Quality Reports 2009 Edition, Office for Official Publications of
the European Communities, Luxembourg
[9] Jensen, Poul (2000): Dansk Statistik 1950-2000 Bind 1. Danmarks Statistik, København
[10] Espejo, Raul (2000): Self-construction of desirable social systems in Kybernetes, Vol. 29 no. 7/8,
MCB University Press
[11] Nielsen, Mogens Grosen; Thygesen, Lars (2011): How do end users of statistics want metadata?
Paper presented at Workshop on Statistical Metadata in Geneve: Implementing the GSBPM and
Combining Metadata Standards, 05 - 07 October 2011
[12] Luhmann, Niklas (1992), Europæisk rationalitet, in Autopoisis II, Politisk Revy, København
[13] Morgan, Gareth (1986), Images of Organizations, Sage Publications
[15] Sundgren, Bo; Jane Greenberg (2009), Metadata correspondence with Jane Greenberg about the first
use of the concept metadata.
[16] Working group established by HLG-BAS. (2012), Generic Statistical Information Model (GSIM).
Version 0.4, May 2012. Draft for review.
Annex 1. History of statistics – from state secrets to independence of political interests.
Historically, the first census was carried out in Denmark in 1769. At that time we did not have the problems described above. The results were not published, as they would reveal the country's strength in case of war. Later, the tasks of statistics were closely related to the political and economic administration of society. The decision on which statistics to produce was often taken by the Ministry of Finance.
According to the new law for Statistics Denmark in 1966, statistics must support democratic processes and be impartial, so that they cannot come under suspicion of being coloured by political considerations. Statistics should be available to all and must therefore be made public once available. With this approach, the
statistical work will be essentially different from that of the departments that work with direct ministerial
control, and whose agenda and resource potential is determined by political considerations. [9] In that
respect the foundation of statistics has moved from “state secret”-decisions to decisions taken by a
politically independent Board. But do we need better or additional “devices” in order to fulfil user needs?
Annex 2. How to create desirable social systems?
Seen from a knowledge production perspective, we must be careful to ensure flexible fulfilment of user needs. This suggests that it is just a question of moving away from silos toward an organisation that is structured in accordance with different kinds of user contexts. Silos cause problems. But is it just a question of silos? Could it be the overall strategy, the relation to users or a question of mentality? It is well known that change is difficult. The challenge is: how do we create desirable social systems, given institutions as formally decided frameworks? “Society relies on institutions to conserve aspects that it
considers worth conserving. But often institutions evolve as dysfunctional social systems. They lack an
appreciation of the wider framework they ought to be part of and therefore of their systemic roles in the
creation of desirable social meanings. It is socially necessary “to see” and develop social desirable social
systems beyond institutions” [10]. Social systems cannot be controlled linearly from outside. Via their own built-in complexity they can only respond to certain impulses from the outside world. The use of business process models and metadata models is an example of such built-in complexity.
Annex 3. About TQM and quality concepts defined by Eurostat.
In the 1940s and 1950s, more emphasis was placed on preventing defects occurring rather than correcting
them. This was referred to as upstream quality control, and the broader range of quality measures were
referred to as quality assurance. The main notion of quality assurance was extended to total quality
management (TQM). TQM principles are typically expressed along the following lines: a) Customer focus, b) Leadership and constancy of purpose, c) Involvement of people, d) Process approach, e) Systems approach to management, f) Continual improvement, g) Factual approach to decision making, h) Mutually beneficial supplier relationships.
A lot of work in Eurostat and statistical organisations has followed TQM or similar lines. Eurostat defines
quality as “The most general and succinct definition of product quality is fitness for use” [8]. Afterwards
they distinguish between output and process quality. The definition of quality of output and process
quality is placed under the Code of Practice (CoP) principles. These principles set the standard for
developing, producing and disseminating European statistics. In line with the ESS Quality Definition and the Code of Practice principles, output quality in the ESS is assessed in terms of the following components: Relevance, Accuracy and Reliability, Timeliness and Punctuality, Accessibility and Clarity, and Coherence and Comparability.
Notes:
1. Single-loop feedback only requires that we respond by actions within the scope of our current operating framework; there is no change in business model, organization, vision and mission. Double-loop feedback impacts and challenges more basic assumptions and commitments. It should result in a deeper inquiry into experience, examining the basis of the assumptions by which the organization governs itself, and it may change those assumptions in the process.