
When the courseware inspector calls

November 10, 1995

Simon Price and Jason Probert describe the way Bristol computer buffs play detective to find software defects.

Over the past few years there has been an explosion in the number of computer-assisted learning (CAL) packages developed. Unfortunately, the quality of this courseware varies widely between packages and, in some cases, within a single package.

While there are undoubtedly pockets of excellence in the field of courseware development, a significant quality assurance issue needs to be addressed before CAL can be consistently and reliably produced to a sufficiently high quality.

On the surface it may appear strange that a piece of courseware developed by a group of hard-working and dedicated individuals, all experts in their disciplines, should be anything other than satisfactory in quality. But on closer examination, it is the necessarily diverse composition of this group which lies at the heart of the quality issue.

By its nature, the development of courseware for CAL is a multidisciplinary activity. It involves bringing together subject and pedagogic expertise from the discipline being taught, development expertise from the field of software engineering, and interface expertise from the field of human-computer interaction (HCI).

It is rare to find an individual with skills in more than one of these disciplines, and therefore it is normal for CAL development to be undertaken as a collaborative venture employing specialists from each of these areas. In theory, this mixture of skills and techniques provides the ingredients for a successful development project; in practice it is a recipe for disaster. The underlying problem lies in the diverse and frequently incompatible vocabularies and methodologies employed by experts from each of these disciplines. There is a clear need for a common framework which bridges the discipline gaps and enables efficient integration of the products from each discipline.

At the University of Bristol, as part of the Teaching and Learning Technology Programme (TLTP) economics consortium's WinEcon software project, the authors have been developing a technique, called courseware inspection, which addresses the problems inherent in the multidisciplinary development of CAL courseware.

Courseware inspection has its origins in a conventional software engineering practice known as software inspection (sometimes referred to as Fagan inspection, named after Michael Fagan, who pioneered the technique at IBM in the mid-1970s). Courseware inspection merges these technically orientated software inspections with the more HCI-orientated usability inspections to produce a customised technique for assuring quality in courseware development.

Over a two-and-a-half-year period the WinEcon project has developed a single, integrated package consisting of approximately 1,000 screens of courseware, representing more than 75 hours of teaching time and covering the whole first-year economics degree syllabus. Development was a collaboration between 26 economists and 17 programmers/HCI experts geographically distributed across eight universities.

A courseware development project of this size presented a quality assurance challenge in terms of the sheer volume and scope even before taking the discipline gaps into account. From the outset there was a clear requirement for an overarching quality assurance procedure, and courseware inspections were introduced at an early stage to fulfil that role.

Courseware inspections are a formalised code and design review procedure performed by a team in which each member is assigned a specific role. The object of an inspection is to detect and log as many defects as possible before the courseware is tested or released.

Importantly, inspections do not attempt to resolve design or coding problems unless the solution is trivial. Inspections are also entirely technical in orientation, which distinguishes them from the managerially focused reviews carried out by most projects.

The inspection process is governed by a detailed checklist divided into three categories: educational content, interface and program. This checklist forms the basis for the formal inspections and serves as a guide to developers, enabling them to perform their own preparatory reviews before the official inspection.
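The article describes the checklist and defect log as a paper procedure rather than a software tool, but the bookkeeping involved is simple enough to sketch. The following is a minimal, purely illustrative model of a three-category checklist and defect log; every class, field and module name here is our own invention, not part of WinEcon or the consortium's actual practice:

```python
from dataclasses import dataclass, field
from enum import Enum

# The three checklist categories named in the article.
class Category(Enum):
    EDUCATIONAL_CONTENT = "educational content"
    INTERFACE = "interface"
    PROGRAM = "program"

@dataclass
class Defect:
    """One logged defect. Inspections log defects against a
    checklist category; they do not fix them unless the fix
    is trivial."""
    category: Category
    screen: str        # the courseware screen where it was spotted
    description: str

@dataclass
class InspectionReport:
    module: str
    defects: list[Defect] = field(default_factory=list)

    def log(self, defect: Defect) -> None:
        self.defects.append(defect)

# Example: an inspector logs an interface defect against one screen
# of a hypothetical module.
report = InspectionReport(module="Supply and Demand")
report.log(Defect(Category.INTERFACE, "screen 12",
                  "graph axis labels are truncated at low resolution"))
```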

The format and content of the checklist evolved through feedback and evaluation over consecutive rounds of inspections, and these rounds established a common vocabulary and common practices across the development team. Indeed, one of the main quality-improving features of courseware inspection is the positive educational side-effect the process has on developers, irrespective of their native discipline. This results in an improvement in the quality of their future work on the project, with consequent downstream benefits in terms of time and cost to completion, cost of testing and cost of maintenance.

Courseware inspection is also credited with bringing a number of managerial benefits to the project. Foremost among these is a commonly agreed measure of quality in the project.

Quality is highly subjective, but the inspection checklist reduces this subjectivity by assessing quality against largely objective check points. Where pure objectivity is simply not possible, the inspection process casts the inspection team in an executive editorial role, making consistency the next best option.

This common view of quality also solves one of the major problems in collaborative, multi-party development: defining what "completion" means. For the TLTP economics consortium, a completed courseware module is one which passes a courseware inspection without defects.
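Continuing the illustrative sketch above (and again assuming our hypothetical InspectionReport, not any real consortium tooling), that exit criterion reduces to a single predicate:

```python
def module_is_complete(report: InspectionReport) -> bool:
    # The consortium's completion test: an inspection
    # that logs no defects at all.
    return len(report.defects) == 0

print(module_is_complete(report))  # False until the logged defect is fixed
```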

WinEcon is published by Blackwell. Simon Price is a lecturer in the Centre for Computing in Economics, University of Bristol. Email: Simon.Price@bristol.ac.uk. Jason Probert is a lecturer at the University of Natal, South Africa.
