Maturity report baselined against SROC attendees

Data Capability self-assessment at SROC 15
Introduction
A draft set of 30 self-assessment questions was circulated at the 2015 Student Records Officers Conference
(SROC), held in York from 16 to 18 March 2015. The questionnaire was completed by 31 HE institutions, and
the results are presented here.
Feedback from this exercise was used to help refine the toolkit before publication.
We are enormously grateful to colleagues at SROC who took the time to help with this exercise.
Maturity levels
The assessment was based on the standard five-level maturity model.
The questions assessed the data capability of the institution across four broad categories: People and Culture,
Business Process, Data Activities and Technology. The results were plotted against these maturity levels. In this
analysis, Chaotic has a score of 1 and Predictive has a score of 5.
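The figures quoted in the rest of this report are medians of these 1-to-5 scores. As a rough illustration only (the questionnaire's exact scoring and aggregation method is not reproduced here, so the data shapes, the per-institution averaging and the function names below are assumptions), the following short Python sketch shows one way per-institution category scores and sector medians of this kind could be derived:

from statistics import median

# The four assessment categories used in the questionnaire.
CATEGORIES = ["People and Culture", "Business Process", "Data Activities", "Technology"]

def category_scores(responses):
    """Average the 1 (Chaotic) to 5 (Predictive) question scores within each
    category for one institution. `responses` maps category -> list of scores."""
    return {cat: sum(scores) / len(scores) for cat, scores in responses.items()}

def sector_medians(institutions):
    """Median score per category across a list of per-institution score dicts."""
    return {cat: median(inst[cat] for inst in institutions) for cat in CATEGORIES}

# Two hypothetical institutions (illustrative numbers only).
inst_a = category_scores({
    "People and Culture": [3, 3, 4],
    "Business Process": [2, 3, 2],
    "Data Activities": [3, 3, 3],
    "Technology": [2, 2, 3],
})
inst_b = category_scores({
    "People and Culture": [3, 2, 3],
    "Business Process": [3, 2, 3],
    "Data Activities": [2, 3, 3],
    "Technology": [3, 2, 2],
})
print(sector_medians([inst_a, inst_b]))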
Overall results
Median score = 2.71
We can see that the SROC attendees rated their data capability as broadly stable. This would suggest a capability
where known outputs (such as the HESA return, internal management information, financial data, etc.) are well
understood, and have sufficient priority, subject matter expertise and investment to deliver them at an
acceptable level of quality.
However, at this level there is rarely a real focus on the benefits derived from improved data capability – be that
in reducing the burden of processing data or in adding value to it through linking, analysis and analytics. Stable
does not suggest that ‘everything is okay’ – at best it means nothing is getting much worse, but things certainly
are not getting better. This suggests that there needs to be a shift in institutional thinking and priorities before
the significant and sustainable benefits of the next level can be attained.
We do see a number of outliers at either edge of the bell curve that would be expected from a self-assessed distribution.
Our view from the limited data set we have analysed is that this snapshot of institutions’ data capability is likely to
be representative of the sector as a whole.
People and culture
Median score = 3.00
Unsurprisingly, the SROC assessment group scored the People and Culture block highest across the four
dimensions. From their view of the institution, the value and profile of data are very well understood. Our
expectation is that a multi-disciplinary assessment team would probably pull these scores down. People and Culture
has such an important part to play in a data management improvement plan. Without both a top down
appreciation and backing of the value of data, and a pervasive institutional awareness of how data needs to be
managed, it is extremely difficult to effect lasting change.
Looking at the SROC data, we still only have a score that is (just) stable. This suggests that much of the work that will
need to be done in many – if not most – institutions will start with People and Culture.
Data Activities
Median score = 2.9
Here, we are assessing whether data outputs are supported by well-understood activities, whether the concept of data
quality and reporting is embedded, and how the reconciliation of many copies of primary datasets is achieved. What we
see with these mixed results close to Level 3 is that there is likely to be a clear focus on ‘getting the job done’
rather than improving the data – whether that be in terms of collection, quality, storage, use, archiving, etc.
We see in the results that timeliness (especially around statutory reporting and funding) is prioritised over the
wider aspects of data management.
Business Process
Median score = 2.6
A much more concentrated distribution is found in the Business Process assessment block. Here we are evaluating
how well processes to create known outputs are documented and shared, how – if at all – these dovetail into
wider organisational deliverables, and how change is managed around the data entities. A score of 2.6 – which is
close to reactive – suggests a low level of capability in these areas.
The key finding here was that processes exist (although sometimes only in individuals' heads) for 'important' outputs
and deliverables, but change is extremely difficult to manage – be that an external/new collection or internal/new
insights and reporting. In addition, data feels very siloed, with multi-disciplinary teams coming together only
during a 'data crisis', and root cause analysis is not formally used to prevent the same problems re-occurring.
Finally, the measurement of data – both in terms of what metrics should be used and who should be responsible
for them – is very weak in this assessment. This is not an unusual state of affairs for an institution below Level 3
maturity: the focus is on getting done what needs to be done and moving on to the next deadline. The lack of
integration with wider business change generally makes this situation worse, as workarounds and temporary fixes
(that become permanent) usurp appropriate business process and data governance.
Technology
Median score = 2.55
Technology is the last block we assessed, for good reason. We believe that while technology is – of course –
extremely important in collecting, processing, storing, extracting, archiving and purging data, its role is to support
good data management and practice. We have also seen poor investment decisions made around tooling
(especially in chasing spurious data quality metrics) which bring little benefit to the wider organisation.
Here we can see some manifestation of this. Some institutions are receiving ‘good service’ from their technology
in terms of data warehouses, data management tools and probably analytics. Others are not seeing the value and
it is likely that the technology solutions are buried in a silo of which they are unaware.
Summary
This exercise suggested a sector-wide maturity score of 2.71, based on this self-selecting sample. It showed that, in
data capability terms, the sector has to improve in order to move to a maturity level that will properly support
institutions' internal aspirations and external obligations.
The SROC assessment was of huge value to this project in terms of honing the question set, and setting
assumptions around where the focus of our signposting and gap analysis needs to be. Following the conference,
the assessment was revised to reflect lessons learned from the initial survey of institutions.