APS Updates

Archive for the ‘Course Data Programme’ Category

Consuming XCRI-CAP III: Skills Development Scotland


Skills Development Scotland has operated a data collection system called PROMT for many years. PROMT is a client application (not browser-based) that sits on your computer and presents you with a series of screens for each course you want to maintain. Each course may have many ‘opportunities’ (these are the same as XCRI-CAP presentations) with different start dates, visibility windows and other characteristics. Many fields in PROMT have specific requirements for content that make the experience of keying not particularly enjoyable (though it has been improved since first launch).

With OU course marketing information consisting of several hundred courses and over 1,000 opportunities, it was with some relief that we at APS (running third-party course marketing information dissemination for The OU) turned to the SDS Bulk Update facility, using XCRI-CAP 1.1. We had been nervous of using this facility initially, because PROMT data is used not only for the SDS course search service, but also links directly to a student registration and tracking service for ILAs (Individual Learning Accounts; for non-Scottish readers, ILAs continued in Scotland even though they were discontinued for a while south of the border). Students can get ILA funding only for specific types of course, so each course/opportunity has to be approved by Skills Development Scotland. Changes to the course marketing information can result in ILA approval being automatically rescinded (albeit temporarily), which can mean the provider losing student tracking details, and therefore being at risk of losing the student entirely. So naturally we decided to do some careful testing in conjunction with both SDS and our colleagues at The OU’s Scottish office.

Fortunately we discovered that when we uploaded opportunities the system added them to existing records, rather than replacing them, so student tracking was unaffected. In addition, individual fields of existing course records were overwritten, but the records remained active and their opportunities were unchanged. These features meant that data integrity was maintained for the opportunity records, and we could always revert to the existing version and delete the new one, if necessary.
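The merge behaviour we observed can be sketched roughly as follows. This is an illustrative Python model only, with hypothetical field names, not the actual SDS implementation:

```python
# Rough model of the merge behaviour we observed in the SDS Bulk Update:
# course fields are overwritten by the upload, but existing opportunities
# are preserved and uploaded ones are appended. Field names are illustrative.

def merge_course(existing, uploaded):
    """Merge an uploaded course record into an existing one."""
    merged = dict(existing)
    # Individual course fields are overwritten by the upload...
    for field, value in uploaded.items():
        if field != "opportunities":
            merged[field] = value
    # ...but opportunities are appended, never replaced, so student
    # tracking against existing opportunities is unaffected.
    merged["opportunities"] = (existing.get("opportunities", [])
                               + uploaded.get("opportunities", []))
    return merged

existing = {"title": "Introduction to Computing",
            "opportunities": [{"id": "OPP-1", "start": "2013-02-01"}]}
uploaded = {"title": "Introduction to Computing and IT",
            "opportunities": [{"id": "OPP-2", "start": "2013-10-01"}]}

merged = merge_course(existing, uploaded)
```

The key point for data integrity is in the last step: because the upload only ever appends opportunities, a bad upload can be undone by deleting the new records, leaving the originals intact.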

We were able to load new courses with new opportunities, and also existing courses with new opportunities with no significant problems. The potential ILA difficulty was somewhat reduced, because The OU’s information for an individual opportunity does not need to be updated once it has been approved for ILA; our main reason for updating opportunities themselves was to add in fees information, but cost information has to be present before an opportunity can gain ILA approval, so this type of update would not interrupt ILA approval or student tracking.

Owing to requirements for some proprietary data, for example numerical fees information and separate VAT, not everything could be captured through XCRI-CAP. However, using the PROMT interface for checking the data, adding in very small extras and deleting duplicated opportunities was comparatively light work, as the bulk of it was handled by the XCRI-CAP import.

Strikingly good parts of our Bulk Update process (apart from the obvious vast reduction in keying time):

  • Use of a vocabulary for qualification type in PROMT. This made it easy to use various rules to map from The OU data to the required qualification grouping. These rules included a close examination of the content of the qualification title in the XCRI-CAP data to make sure we mapped to the correct values.
  • For some elements, use of standardised boilerplate text in specific circumstances, again identified by business rules.
  • Good reporting back from the SDS Bulk Update system on the status (and errors) from the import. This included an online status report showing how many records of each type had been successfully uploaded, with date and time, after a few minutes from the time of loading.
  • The system permits us to download the whole data set (well, technically as much as could be mapped) in XCRI-CAP 1.1 format, so we were able to compare the whole new set of records with what we expected to have.
  • The ability to review the new data in the PROMT client interface within minutes of the Bulk Upload. This gives a great reassurance that nothing’s gone wrong, and it permits rapid checking and small tweaks if necessary.
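The first of those points, rule-based mapping from qualification titles to a controlled vocabulary, can be sketched like this. The rules and grouping values below are illustrative examples, not the real SDS vocabulary:

```python
# Illustrative sketch of rule-based mapping from qualification titles in
# the XCRI-CAP data to a PROMT-style qualification grouping. The patterns
# and groupings here are examples only, not the actual SDS terms.

RULES = [
    # checked in order, so more specific patterns come first
    ("foundation degree", "Foundation Degree"),
    ("postgraduate", "Postgraduate Qualification"),
    ("master", "Postgraduate Qualification"),
    ("certificate", "Certificate"),
    ("diploma", "Diploma"),
    ("degree", "First Degree"),
]

def map_qualification(title):
    """Map a qualification title to a grouping by keyword rules."""
    lowered = title.lower()
    for pattern, grouping in RULES:
        if pattern in lowered:
            return grouping
    return "Other"
```

Ordering the rules from specific to general matters: "Foundation Degree in Materials" must match before the bare "degree" rule does.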

I see this combination of bulk upload with a client or web-based edit and review interface as an excellent solution to course marketing information collection. This push method of data synchronisation has the advantage of maintaining the provider’s control of the supply, and it still permits fine-tuning, checking and manual editing if that is necessary. In contrast a fully automatic ‘pull’ version might leave the provider out of the loop – not knowing either whether the data has been updated, or whether any mistakes have been made. This is particularly important in cases where the collector is unfamiliar with the provider’s data.

XCRI-CAP: turn 12 days of keying into 3 hours of checking.


Written by benthamfish

March 6, 2013 at 2:50 pm

Consuming XCRI-CAP I


This post and a few later ones will be some musings on my experiences of how XCRI-CAP is or might be consumed by aggregating organisations and services. I’ll not go into the theoretical models of how it could be done, but I’ll touch on the practicalities from my perspective. Which, I admit, is not that of a ‘proper’ technical expert: I don’t write programs other than the occasional simplistic Perl script, nor do I build or manage database systems, other than very simple demonstrators in MS Access, and I dabble in MySQL and SQL Server only through the simplest of front-end tools.

My main XCRI-CAP consuming efforts have been with four systems: XXP, Trainagain, Skills Development Scotland’s Bulk Import Facility and K-Int’s Course Data Programme XCRI-CAP Aggregator.

XXP characteristics

  • Collaborative working between APS (my company) and Ingenius Solutions in Bristol
  • Service platform for multiple extra services, including provider and feed register (for discovery of feeds), AX-S subject search facility, CSV to XCRI converter, web form data capture, getCourses feed outputs (SOAP and RESTful)
  • Doesn’t yet have an auto-loader for XCRI-CAP. We can load manually or via our CSV to XCRI facility.
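The CSV to XCRI facility mentioned above might look something like the following in miniature. This is a hypothetical sketch using a simplified, non-namespaced subset of XCRI-CAP element names; a real converter would emit the full schema:

```python
# Minimal sketch of a CSV-to-XCRI converter. The element names are a
# simplified subset of XCRI-CAP (no namespaces); a real converter would
# emit the full namespaced schema with many more fields.

import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xcri(csv_text):
    """Turn CSV rows of (identifier, title, start) into a simple XCRI-like tree."""
    catalog = ET.Element("catalog")
    for row in csv.DictReader(io.StringIO(csv_text)):
        course = ET.SubElement(catalog, "course")
        ET.SubElement(course, "identifier").text = row["identifier"]
        ET.SubElement(course, "title").text = row["title"]
        # each row carries one presentation (opportunity) start date
        presentation = ET.SubElement(course, "presentation")
        ET.SubElement(presentation, "start").text = row["start"]
    return ET.tostring(catalog, encoding="unicode")

sample = "identifier,title,start\nC1,Exploring History,2013-10-01\n"
xml_out = csv_to_xcri(sample)
```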

Trainagain characteristics

  • Existing system with its own established table structure, its own reference data and own courses data
  • SQL Server technology
  • I have off-line ‘sandbox’ version for playing around with.

Skills Development Scotland Bulk Import Facility characteristics

  • XCRI-CAP 1.1 not 1.2
  • Existing live XCRI-CAP aggregation service (push architecture)
  • Works in conjunction with the PROMT data entry system

K-Int XCRI-CAP Aggregator characteristics

  • Built on existing Open Data Aggregator, a generalised XML consuming service.
  • Takes a ‘relaxed’ view of validation – data that is not well formed can still be imported.
  • Outputs JSON, XML and HTML. But not XCRI-CAP.

These are early days for data aggregation using XCRI-CAP. There’s been a chicken-and-egg situation for a while: aggregating organisations won’t readily invest in facilities to consume XCRI-CAP feeds until a large number of feeds exist, while HEIs don’t see the need for a feed if no-one is ready to consume it. The Course Data Programme tackles the second of these problems (I guess that’s the egg?) – if we have 63 XCRI-CAP feeds, then we should have a critical mass to provoke aggregating organisations into consuming them.

Some of the questions around consumption of XCRI-CAP feeds centre on technical architecture (push or pull?), what type of feed to publish (SOAP, RESTful, or just a file?), how often the feed should be updated and/or consumed (in real time? weekly? quarterly? annually? whenever stuff changes?), and how feed owners know who’s using their feed (open access versus improper usage, copyright and licensing). Some of these issues are inter-related, and there are other practical issues around consuming feeds for existing services – ensuring that reference data is taken into account, for example.
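That last point, reconciling a feed's vocabulary values with a consuming service's existing reference data, is worth a small illustration. The codes and mappings below are invented for the sketch, not any real aggregator's tables:

```python
# Sketch of mapping a feed's vocabulary codes onto an existing service's
# reference data. The codes below are illustrative, not a real system's.

STUDY_MODE_MAP = {
    "FT": "full-time",   # feed code -> aggregator reference value
    "PT": "part-time",
    "FX": "flexible",
}

def normalise_study_mode(feed_value):
    """Translate an incoming studyMode code, flagging unknown codes for
    manual review rather than silently dropping the record."""
    try:
        return STUDY_MODE_MAP[feed_value]
    except KeyError:
        return "unmapped:" + feed_value
```

Flagging rather than discarding unknown values matters when the collector is unfamiliar with the provider's data: it keeps the provider in the loop about what didn't map cleanly.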

I’ll try to tease out my impressions of the practicalities of consuming XCRI-CAP in various ways over the next few blog posts.

XCRI-CAP: turn 12 days of keying into 3 hours of checking.

Written by benthamfish

February 21, 2013 at 3:11 pm

AX-S Widget Demonstrator – Complete!


The demonstrator is now live at: http://igsl.co.uk/xxp/ax-s/ou.html.  This demonstrator provides the AX-S search for Open University XCRI-CAP 1.2 data on a mock-up of the look-and-feel of the Open University website.

As explained in an earlier post, the AX-S search facility provides concept-based subject search functionality that retrieves records matching not only the user’s selected subject search term itself, but also broader and narrower linked concepts. Records were classified with JACS3 codes, which were used to link the courses to a specially constructed thesaurus of terms. When searching, each retrieved record is ranked in the search results list according to how close its JACS3 subject is to the user’s search term within the thesaurus. This functionality can be provided via the AX-S Widget to any institution with an XCRI-CAP 1.2 feed classified with a recognised subject coding scheme (such as JACS3, LDCS, SSA and so on) for use on their website, and has the potential to be developed further with additional filters taken from the XCRI-CAP data, such as studyMode or attendancePattern.

There were three main work strands in the project: development of the widget itself, development of back-end functions, such as data loading and search functionality, and construction of our bespoke thesaurus of subject terms, on which the searching is based. Software development by InGenius Solutions was key to the success of the project. It was also dependent on classification of the data with JACS3 codes, handled by APS (who also converted the OU data to XCRI-CAP 1.2), and of course, supply of courses data and the website look-and-feel by The Open University.

The project involved more updating of our original thesaurus of terms than was initially expected, but this has now been largely finalised. Some small improvements can still be made by tidying up the detailed formatting of the thesaurus, and these are in progress. The demonstrator has been systematically user tested and refined, and the code and documentation are available on GitHub.

The AX-S Widget Demonstrator shows how standardised data and small modular software components can be combined to provide a new service that would be very expensive for a single institution to develop, but cost-effective when developed centrally for use across a larger number of institutions. We are pleased to say that several universities have already expressed interest in including this widget on their websites, and we hope to see it in live use soon.

Written by jennifermdenton

January 25, 2013 at 1:44 pm

AX-S (Advanced XCRI-CAP Search) Widget Demonstrator: Introduction


The AX-S Widget is a small chunk of code which can be embedded on any institution’s website. It provides ‘best of breed’ subject searching using a specially designed search algorithm to provide more accurate and more relevant results than can be obtained through other methods, for example UCAS’ Course Search or the National Careers Service’s course search service.

It uses a university or college XCRI-CAP 1.2 feed to populate its data source. The use of the XCRI-CAP standard enables the search data source to be kept synchronised with the live courses information on the institution’s website.

To try the AX-S search, go to the Demonstrator web page at http://igsl.co.uk/xxp/ax-s/ou.html and start typing your topic into the ‘Search for:’ box and select one of the search terms that presents itself. The system automatically matches your text with its search terms as you type. You can also optionally select an Education Level from the drop-down list. When you hit the ‘Search’ button, the widget sends your choices off to the search engine held on the XXP (XCRI eXchange Platform) server, which carries out the search. It returns a list of courses matched conceptually to your choice of search term. As well as courses that match exactly with the topic you’ve chosen, the results will include courses in topics that are broader or narrower than your topic, sorted by their relevance to your choice.

For example, using the term “software engineering” will give you results not only in Software Engineering itself at the top of the list, but also lower down the list courses in more general Computing, then in development using specific techniques, such as object-oriented approaches and Java. These results are all widening out from Software Engineering, or narrowing in to topics within the field.
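The ranking idea behind that example can be sketched as follows: courses whose subject sits closer to the chosen term in the thesaurus rank higher. The thesaurus fragment here is a tiny invented example, not the real AX-S one:

```python
# Sketch of concept-based ranking: distance between concepts in a
# broader/narrower thesaurus determines result order. The thesaurus
# below is a tiny illustrative fragment, not the real AX-S data.

from collections import deque

# broader/narrower links, stored symmetrically so distance is undirected
THESAURUS = {
    "software engineering": {"computing", "object-oriented programming"},
    "computing": {"software engineering", "java"},
    "object-oriented programming": {"software engineering"},
    "java": {"computing"},
}

def distance(term, subject):
    """Breadth-first distance between two concepts in the thesaurus."""
    seen, queue = {term}, deque([(term, 0)])
    while queue:
        node, d = queue.popleft()
        if node == subject:
            return d
        for neighbour in THESAURUS.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, d + 1))
    return None  # concepts not linked at all

def rank(courses, term):
    """Sort (title, subject) pairs by closeness to the search term,
    dropping courses whose subject is not linked to it."""
    linked = [(distance(term, subject), title) for title, subject in courses
              if distance(term, subject) is not None]
    return [title for _, title in sorted(linked)]

results = rank([("Intro to Computing", "computing"),
                ("Java Development", "java"),
                ("Software Engineering", "software engineering")],
               "software engineering")
```

An exact match ranks first (distance 0), broader subjects like general computing next, and topics two links away, such as Java, after that.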

The Widget Demonstrator uses sample data from, and the look-and-feel of, the Open University website (with their permission), but is not currently a live search on their website. For the above example the Demonstrator in its current version brings back over 30 results. The current Open University website keyword-based search brings back 5 specific courses at its top level, plus Software Engineering as a subject of research, and a link to general Computing and IT. However, it does not include conceptual matches, such as management of software projects or computing for commerce and industry, but is limited to results with the words “software” and “engineering” in them. The advanced search functionality of the AX-S demonstrator has also been tested successfully against leading web search services, such as UCAS Course Search and the National Careers Service’s search facilities.

Written by benthamfish

January 24, 2013 at 1:59 pm

Are you SITSing comfortably?


I’ve been musing for some while now on the SITS Module and Course Collaboration meeting in November, arranged by colleagues at Cranfield University and the University of Wolverhampton. The latter has implemented a Module Approval system using SITS Process Manager, and their approach had several particularly interesting characteristics:

  • An insistence that academics must deliver what’s been validated and what students have been told about, rather than permitting on-the-fly variations.
  • Academics are asked to write information for the student audience (not for validation processes) – this required some training.
  • A primary purpose of writing information was to enable it to be re-used.
  • Everyone has access to everything; nothing is filtered out so it can’t be seen.
  • It isn’t a ‘fits all needs’ solution, but it ‘does most’.

I think this highlights some particular issues for different circumstances in different institutional cultures.

‘Deliver what’s validated and what the students have been told about’ might seem like a no-brainer. However, practice varies across institutions and even within institutions, and the process of course design (rather than delivery) can be seen as a continuous one with no particular end point. As a board game designer and board game player, I see a parallel here. Game design is also an ongoing process that never finishes, as improvements to the game can always be made. But when playing an instance of the game, it’s essential that the players know the rules are fixed, or the game loses its credibility and the players’ experience is undermined. Similarly, even if you *want* to improve the instance of a course, changing aspects of the advertised and expected course arrangements or curriculum can undermine the student experience. Sitting on your hands and waiting till the next iteration might be a better approach, but does the academic culture or common practice support this approach?

‘Writing for the student audience’ and re-use of information are key aspects of maximising the advantage of process improvement and standardisation using XCRI-CAP, I feel. Implementation of this type of change may be difficult, especially in a heavily decentralised institution, because it entails engagement of the whole academic community and perhaps a change in the culture not only of how to write courses information, but also in the freedom that individuals perceive they ought to have in creating the materials. This is a good example of how an information management process can have a potentially far-reaching impact on culture.

‘Everyone has access to everything’. Everyone knows that access to information is a power-based concept. This may be a particularly high hurdle for some institutions, but if visibility is poor, then process inefficiencies, and potentially quality-destroying workarounds or breaches of regulations and guidelines, can be concealed. In many revisions of validation and approval processes, there is a tension between the perceived flexibility of ‘free form’ manual processes (even though they may take a long time) and the perceived inflexibility of digital ones (even though they may be quicker). However, these perceptions often hide the complexity of existing manual methods and cloud the ‘business rules’ that are supposed to be applied. Cultural change may be necessary, so that staff actually adhere to methods, time scales, and detailed procedures that have been formally promulgated in the past, but not necessarily fully adhered to in the present. Processes supported by digital technologies should model the agreed business rules, such that flexibility and inflexibility are reflections of the agreed processes. I suspect that this is the core technical challenge of process improvement here.

The final bullet is also important. It’s unlikely that the nirvana of a perfect solution will be reached by process improvement and associated cultural change. Expectations have to be managed. Change must be an improvement on existing methods, but each person has to be sufficiently involved in and engaged with the proposed changes that their understanding of the change process itself enables that individual to realise the limitations of the changes. And oft-times the new processes must be able to cope with, or support, valid exceptions and complexity.

Written by benthamfish

January 7, 2013 at 12:06 pm

Perils of typing


“With some trepidation” is how I started my recent email to the CourseDataStage1 mailing list, as I asked for comments on a suggestion about a vocabulary for course ‘type’. We have an ongoing robust discussion.

The type element in the course context is defined in our Data Definitions document (http://www.xcri.co.uk/KbLibrary/XCRI_CAP_Data_Definitions3.0.docx) as:

“A grouping of similar courses in terms of target audience”.

After receiving some enlightening comments from responses to my email, I’m beginning to question whether this is a useful course attribute.

The impetus for this task came from two main sources: (i) requests from demonstrator projects for a mechanism to filter out particular ‘types’, such as CPD courses from the K-Int aggregator, and (ii) requests for a mechanism for parameter-driven filters on XCRI-CAP feeds. The intention was to cover things like Undergraduate, Postgraduate and CPD – from a course perspective, not a qualification one. From these requests, I manufactured a requirement as follows:

“The course type vocabulary should provide a means by which an XCRI-CAP data source can be filtered, so that a consuming system or search function can extract groups of similar courses in terms of target audience, without repeating the vocabularies covered elsewhere (in particular study mode, attendance mode, attendance pattern, education level, qualification type).”

My first very draft course type vocabulary for the Course Data Programme feeds is given below, bearing in mind the intention was not to make each term exclusive – you can have a course that’s both ‘type=Short Course’ and ‘type=Continuing Professional Development course’. The item to the left of the slash is the key (computer-readable code), while the item to the right is the value (human readable text):

  • UG/Undergraduate course
  • PG/Postgraduate course
  • FE/Further Education course
  • CPD/Continuing Professional Development course
  • SC/Short course
  • MD/Module
  • EM/Course for people in employment

I suspect that the last one may have been a bit too controversial, and I’d happily withdraw it, if we could get somewhere with the others.

A particular point I should perhaps have been clearer about is that course type isn’t intended to be a generic ‘every type of course grouping can go in here’ thing. We already have suitable elements for subject, cost, mode of study and such like. What we don’t have is a mechanism to enable aggregators to filter out major groupings, such as those detailed above. For example, it seems reasonable to me (and others) that filtering out courses defined by the provider as CPD courses ought to be possible, and we don’t currently have a way to do this. However, I would also accept that this is quite a subjective process, and the link to target audience may be tenuous and perhaps not all that helpful.
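The kind of filtering an aggregator would need can be sketched simply. Because the draft terms are not mutually exclusive, a course carries a *set* of type keys, and filtering is membership-based; the catalogue data here is invented for illustration:

```python
# Sketch of type-based filtering for an aggregator. Courses carry a set
# of type keys (the terms are not exclusive), so filtering tests set
# membership. The catalogue below is illustrative data only.

def filter_by_type(courses, type_key):
    """Keep courses tagged with the given type key (e.g. 'CPD')."""
    return [c for c in courses if type_key in c["types"]]

def exclude_type(courses, type_key):
    """Drop courses tagged with the given type key -- e.g. an aggregator
    filtering CPD courses out of a general search service."""
    return [c for c in courses if type_key not in c["types"]]

catalogue = [
    # a course can be both a short course and CPD at once
    {"title": "Project Management Refresher", "types": {"CPD", "SC"}},
    {"title": "BA History", "types": {"UG"}},
]
```

A service aggregating only CPD provision would use `filter_by_type(catalogue, "CPD")`; a general undergraduate search might use `exclude_type` with the same key.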

This kind of discussion raises some important issues, not just concerning whether someone can find a suitable course, but also concerning setting the agenda about, or putting boundaries around, things that perhaps shouldn’t be constrained in that way. This is a common issue with vocabularies (‘cause of death’ on death certificates being probably one of the most famous / infamous examples). One of our thoughtful respondents mentioned that there is an ethical dimension to trying to define a target audience. Courses labelled as one thing may be entirely suitable for a learner not in that group, and it is preferable to let the learner decide on the suitability of the learning opportunity, because it is the learner who knows his or her specific needs.

While I can accept that argument, it still seems to me that we ought to be able to provide some facility that enables a service owner seeking to aggregate and display data about CPD courses to do so. At least we’re not describing courses as ‘vocational’ and ‘non-vocational’.

Written by benthamfish

November 30, 2012 at 3:43 pm

AX-S Widget Demonstrator – Update


There has been a slight delay due to a technical issue in loading the XCRI-CAP data, but user testing is now underway on the AX-S widget demonstrator. As part of the deliverables of the project there will be a full mock-up of how the widget would look on a university site (in this case the OU) and also a site with the widget code. While user testing is going on, the documentation for the sample code will be written.

Written by jennifermdenton

November 23, 2012 at 10:23 am