Impact assessment in Student Lifecycle Relationship Management

I’ve been thinking about how to measure the impact of Student Lifecycle Relationship Management (SLRM) projects as part of some synthesis work that I’m doing as a Critical Friend for the Student Progression, Retention and Non-completion strand of JISC’s Relationship Management Programme.

I’ve proposed 6 characteristics that might be useful to evaluate for these projects (a rough sketch of how they might be captured follows the list):

  • Capacity for action by the learner/service user: communication methods, choice, personalisation, activities/actions possible, appropriate technologies and usability
  • Clarity of service definition: identification of actors, process knowledge and understanding, service description, quality of technical service delivery
  • Extent of service control: measurement, feedback, comparison versus target levels, understanding of intervention points, quality control
  • Sustainability of tools/services: robustness, continuous improvement, resourcing, policy impact
  • Prevalence in organisation: ad hoc, pilot, in individual department(s), institution-wide, policy impact
  • Integration: data sharing across systems, common or single centralised systems, availability to staff through appropriate technology, breaking down silos
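None of the projects have produced code like this; purely as an illustration of how the framework might be pinned down, the characteristics could be captured as a simple enumeration (Python here, with names of my own choosing):

    from enum import Enum

    class Characteristic(Enum):
        """The six proposed evaluation characteristics."""
        CAPACITY_FOR_ACTION = "capacity for action by the learner/service user"
        SERVICE_DEFINITION = "clarity of service definition"
        SERVICE_CONTROL = "extent of service control"
        SUSTAINABILITY = "sustainability of tools/services"
        PREVALENCE = "prevalence in organisation"
        INTEGRATION = "integration"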

These were developed from a review of the 8 SLRM projects in the current Strand 2 of the Programme, all of which have used Service Design as their approach. Service Design is all about touchpoints and the use of technology to deliver ‘capacities for action by the learner/student/customer/client’.

I also identified 2 axes that might determine the scope of the relationship management described by the 6 characteristics (a sketch combining these axes with the characteristics follows the list):

  • Coverage of learner/student groups: all, differentiated by subject, level, special circumstances
  • Coverage of lifecycle: all or particular part(s)
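Continuing the illustrative sketch (the group and lifecycle values below are hypothetical examples, not categories fixed by the Programme), the two axes could scope each project:

    from dataclasses import dataclass

    @dataclass
    class ProjectScope:
        """Where a project sits on the two axes."""
        learner_groups: list[str]   # e.g. ["all"], or differentiated groups
        lifecycle_parts: list[str]  # e.g. ["all"], or particular parts

    # A hypothetical project covering induction for part-time students only:
    scope = ProjectScope(learner_groups=["part-time"], lifecycle_parts=["induction"])

A scoped project could then carry an assessment against each of the 6 characteristics, which is where the evaluation ideas below come in.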

I initially thought it would be useful to have simple numerical ratings against each of the 6 characteristics: something like ‘0’ = not done, ‘1’ = basic, ‘2’ = developing, ‘3’ = strategic. This would be a bit like the BCE maturity model, but my ratings would be a house of cards: there is no accurate basis for them, so I’ve scrapped that idea at this stage.

I then thought we could build an evaluation and impact assessment framework around two questions for each characteristic: “What can we learn?” and “What was the impact?”. If we assessed these, we could then look for commonalities across the projects and come up with a useful synthesis.

Each characteristic could have a quality indicator attached to it, such as ‘increase’, ‘better’ or ‘more’. Quantitative assessment cannot be carried out, however, as no baseline figures exist. We can point out areas where these have improved (or not), and relate that to the segment of students and the part of the lifecycle involved.

Perhaps we could also attach impact markers, such as ‘minor’, ‘major’ and ‘critical’, or markers indicating progress, such as ‘increasing’, ‘better’, ‘significantly better’, ‘officially recognised’, ‘implemented in policy change’ and so on. A rough sketch of how these markers and the two questions above might fit together follows.
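Again purely as an illustration (all class and field names here are my own, hypothetical ones), the markers and the per-characteristic questions could be recorded like this:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class ImpactMarker(Enum):
        MINOR = "minor"
        MAJOR = "major"
        CRITICAL = "critical"

    class ProgressMarker(Enum):
        INCREASING = "increasing"
        BETTER = "better"
        SIGNIFICANTLY_BETTER = "significantly better"
        OFFICIALLY_RECOGNISED = "officially recognised"
        POLICY_CHANGE = "implemented in policy change"

    @dataclass
    class Assessment:
        """Answers to the two questions for one characteristic of one project."""
        characteristic: "Characteristic"  # the enum from the first sketch
        lesson: str                       # What can we learn?
        impact: str                       # What was the impact?
        impact_marker: Optional[ImpactMarker] = None
        progress: Optional[ProgressMarker] = None

Collecting one Assessment per characteristic per project would make it straightforward to group the records by characteristic and look for the commonalities mentioned above.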

I’m not yet convinced this is the correct direction, but it gives an idea of my current thoughts.

Written by benthamfish

September 19, 2012 at 4:48 pm
