
Information literacy strategies

Finite Questions and Infinite Answers: Online Assessment of Information Literacy Skills

Julie Badger, Susan Roberts
Swinburne University of Technology

Information staff at Swinburne University's Lilydale campus design and deliver an information literacy module within Information Methods, a compulsory subject for all first year higher education students. The information literacy component comprises units in constructing search strategies, using electronic databases, and searching the internet and the Swinburne Intranet. The component accounts for 20% of the students' assessment. Delivering formative and summative assessment online raises a number of issues, such as the huge number of possible correct answers to many of the questions and the need to adapt procedures for both 'hands-on' and examination conditions. This paper addresses the practical and educational ramifications.

Introduction

The Information Literacy program at Swinburne University's Lilydale campus has been described and evaluated in previous conference papers, so we will not revisit issues that have been raised before; details of these papers are included in the bibliography. This paper concentrates on the aspect of our program that has given us the greatest concern over the four years it has been in use - assessing student achievement.

Evaluating the effectiveness of our student assessment procedures has been our program's most problematic feature. This paper focuses on our attempts to assess student achievement in information literacy and, in particular, our forthcoming move into large-scale online assessment. We will consider the pedagogical and practical issues that this raises for ongoing formative evaluation of student learning and for end-of-semester summative assessment of the course objectives. First, however, a little background information is necessary.

The story so far...

The Lilydale campus is now approaching its fifth year, having moved from a site in Mooroolbark at the start of 1997. The Mooroolbark campus was used as a pilot in a Multi Modal Learning project in 1992, and multi modal learning is now the accepted strategy at Lilydale: traditional teaching techniques are combined with a range of independent learning methods that rely heavily on computer and media technology. Courses offered at the campus are bachelor degrees in applied science, business, and social science. An honours program has been active for the past two years, and a graduate diploma in e-commerce is also offered.

All first year students are expected to complete four core subjects, regardless of the course they take. Library staff conduct user education sessions in all subjects but play a leading part in Information Methods, a subject that includes a module on critical and creative information literacy. Students are expected to gain skills that will lay the foundation for their learning in other subject areas and in their later careers. Library information staff design and develop the curriculum, compose the relevant section of the online learning guide, deliver two lectures, and devise, mark and grade two compulsory assignments as well as 20% of the examination. A large amount of staff time is consumed in this way.

Student characteristics

The student load, as of 31 March 2000, was 1,559 EFTSU, representing 1,992 individual students. About three-quarters of these study full-time, on campus. Fifty-two per cent are recent school leavers and, for the first time since Swinburne Lilydale's establishment, women students outnumber men: women now account for 55% of the student body, men 45% (Swinburne University of Technology, 2000, [1]). Sixty-four per cent of students at Lilydale are studying business in either generic or tagged bachelor degrees, or a dual award with TAFE.

Information literacy skills of beginning students

From 1999 onwards, students enrolling in Information Methods have been required to sit a placement test to assess their entry-level computer literacy skills. Results indicate that there are diverse levels of information technology knowledge and literacy among first year students; 25% have negligible computer skills. Older students returning to study after many years are often tentative about computer technology, but are very motivated to learn and 'catch up' with younger students. Those who demonstrate that they have basic computer skills are exempted from the preliminary part of the subject (Module A). Those who have little or no computer literacy are enrolled in Module A, which focuses on gaining confidence in the use of information technology, particularly computers and the internet (Swinburne University of Technology, 2000, [2] p. 433).

From the library point of view, it is helpful to know that students will either be exempt from this basic module or have completed and passed it. New students need to develop a high degree of computer confidence in a short time. Most of their subjects have some online component and in many cases they are obliged to submit assignments through this medium. In addition, ours is essentially a virtual library: we have a small print collection but are heavily reliant on electronic delivery of information resources. Students need to develop information technology skills before they can progress towards information literacy.

Past and current means of assessment

For the past few years, library staff have set two assignments for Information Methods. The first is a self-paced assignment that simultaneously develops and assesses skills in using the Swinburne OPAC. The second assignment is more challenging and covers advanced catalogue searching, constructing search strategies, using CD-ROM and web-based databases, and searching the internet critically and effectively. This assignment is corrected, graded and returned to the students so they can use it to revise for their exam. An average of three hundred assignments is submitted each semester, and these take vast amounts of time to assess.

At the end of semester the students take a final exam in Information Methods, and the library information staff are responsible for setting 20% of the exam questions. Both the assignment questions and the exam questions are revised each semester to keep up with changes in databases and current trends, and to fix questions we perceive as ambiguous or unclear to students. It has been an interesting experience to be part of both the formative and summative evaluation in this subject. We have been able to examine both sets of results from two points of view - what they tell us about the students' levels of information literacy, and whether we have been effective in our teaching.

Assessing the assessment

Students' scores were slightly higher this year than in previous semesters, continuing a trend observed over the four years the program has been operating. Some possible reasons for this could be:

  • Perhaps we have become better teachers. Much effort has gone into improving our skills in flexible delivery, instructional design and developing new ways of presenting material. We have made changes in the way we deliver the module, and ironed out ambiguities in the assignment and exam.
  • The students come to us with better computer skills now that they are required to sit the placement test and receive remedial teaching (Module A) if they do not meet the criteria.
  • Tertiary Entrance Rank scores to gain entry into courses at Lilydale have increased steadily, so presumably we are getting a better calibre of student.

Response patterns

We developed a database of student results for first semester 2000 with the intention of searching for a relationship between their assignment results and their examination results. The three appendices to this paper are graphs of data that were extracted from the database, and they illustrate some interesting points. The graphs only take into account the students who completed both the assignment and the exam: there were some students who only completed one or the other.
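
By way of illustration, the following minimal Python sketch shows the kind of analysis the database supports. It assumes a hypothetical CSV export of the results; the file name and column names are our own invention, and Pearson correlation is just one possible measure of the assignment-exam relationship.

    # Sketch only: assumes a hypothetical CSV export of the results database
    # with columns student_id, assignment (out of 15) and exam (out of 20).
    import csv
    from statistics import mean, mode

    def load_results(path):
        """Keep only students who completed both the assignment and the exam."""
        pairs = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["assignment"] and row["exam"]:
                    pairs.append((float(row["assignment"]), float(row["exam"])))
        return pairs

    def pearson(xs, ys):
        """Pearson correlation between two equal-length lists of scores."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    pairs = load_results("semester1_2000.csv")   # hypothetical file name
    assignment = [a for a, e in pairs]
    exam = [e for a, e in pairs]
    print("assignment: mean", mean(assignment), "mode", mode(assignment), "/ 15")
    print("exam:       mean", mean(exam), "mode", mode(exam), "/ 20")
    print("correlation:", pearson(assignment, exam))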

Looking at the overall results for Semester One, 2000:

  • The genders were equally represented in our sample.
  • Students who scored well in the exam generally did well in their assignment also.
  • On average, students did 18% better on the assignment than in the exam.
  • The mean score for the assignment was 11.5 out of 15.
  • The mean score for the exam was 12.2 out of 20.
  • Of the 50 students who received a score of 17 or above out of a possible 20, only 3 did not submit an assignment. On the other hand, of the 66 students scoring 10 or below in the exam, 19 did not submit an assignment.
  • The mode for the assignment was 14 out of 15 and for the exam, 16 out of 20. This is an improvement on the scores from previous years, and we believe it reflects improvements in the program's instructional design, teaching and assessment.

Students did better on multiple choice questions, where they only had to recognise the correct answer, than on questions that asked them to write a short paragraph. The options for one such multiple choice question were:

  1. The Age (newspaper)
  2. Business ASAP
  3. PsycLIT
  4. Academic Search Elite

A much higher percentage of students successfully answered the multiple choice question than the question that required a paragraph answer. This raises some doubts about the level of student understanding. We realise that simply putting the test online will not solve this kind of problem. So why do we want to move to online assessment of information literacy skills?

Advantages and disadvantages of online assessment

  • We are spending large amounts of time marking and grading assignments and exams at the moment - we need to find a more efficient and adaptable method of assessing students and would like to automate as much of it as we can.
  • Many subjects at Lilydale use online assessment, which reduces the amount of teacher intervention required. All students on this campus are familiar with this form of assessment.
  • The rest of the subject is destined for online assessment and our component will need to be compatible.
  • Web-based testing will mean that we can test at any time and in any place.
  • Test administration is simplified.
  • Measurement error is reduced.
  • An item bank of test questions that provides a few alternative versions at each testing session would cut down some of the 'collaboration' that currently occurs among the students (a sketch of such an item bank follows this list).
  • Questions can be randomised so that students sitting next to each other are not presented with the questions in the same order.
  • It is possible to include video clips or screen dumps in the test questions.
  • Tests can be scored instantly, so students can have immediate feedback, rather than having to wait for weeks while we correct their assignments manually.
  • We will be able to see the trouble spots clearly.
  • The data will be easier to collate, analyse and diagnose.
  • We may be able to make use of commercially available assessment packages. An example is 'The Learning Manager' (TLM), which is already used and supported by our TAFE division. It could be used to develop assessments that comprise several pre-formatted question types that can be scored, reported and analysed automatically. An alternative is 'Hot Potatoes' which is free for educational, non-commercial use and which facilitates the construction of six types of test items suitable for electronic assessment. These are just two of many packages we are considering but the availability of local support probably makes The Learning Manager the better choice. Another option is to develop our own package but we are not seriously considering this.
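
To make the item bank idea concrete, here is a minimal sketch in generic Python. It is not a representation of The Learning Manager or Hot Potatoes; the item structure and topic names are hypothetical.

    # Generic sketch of an item bank: each testing session draws alternative
    # questions per topic, shuffles their order, and scores multiple choice
    # responses automatically. Not based on TLM or Hot Potatoes.
    import random
    from dataclasses import dataclass

    @dataclass
    class Item:
        stem: str            # the question text
        options: list        # the choices offered to the student
        correct: str         # the correct option
        topic: str           # e.g. 'search strategies', 'databases'

    def build_paper(bank, per_topic=2, seed=None):
        """Draw alternative items for each topic, then shuffle the paper."""
        rng = random.Random(seed)
        paper = []
        for topic in sorted({item.topic for item in bank}):
            pool = [item for item in bank if item.topic == topic]
            paper.extend(rng.sample(pool, min(per_topic, len(pool))))
        rng.shuffle(paper)   # students sitting together see different orders
        return paper

    def score(paper, answers):
        """answers maps each stem to the option the student chose."""
        return sum(1 for item in paper if answers.get(item.stem) == item.correct)

Seeding the random generator per student would also make each sitting reproducible if a result were later queried.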

Some of our concerns about online assessment of information literacy skills are:

  • The databases we are using are growing every day, so there are often dozens or even hundreds of acceptable answers when we require students to find resources on a research topic. This means that we have to check every answer on the databases. This has been a long-standing concern and online assessment will not solve all the problems.
  • We can frame the questions in such a way that the number of possible answers is reduced but this is not easy to do without losing much of the educational value of the question. In terms of the skills we are trying to teach it would be better to have the questions as open-ended as possible.
  • Constant change in the electronic environment, such as databases moving from CD-ROM to web-based formats, makes it difficult to prepare questions very far in advance.
  • There are difficulties in developing an item bank of test questions. We would have to ensure that parallel questions are of equal difficulty and that they are unambiguous and valid. Also, we will probably find the questions are only useable for a short time.
  • Constructing test items is a specialist skill and there are many pitfalls for the inexperienced. These problems can be much worse in computer-based assessment because the computer cannot make allowances where a human being might. In a multiple choice item, for example, the possible answers have to cover every conceivable acceptable response, such as abbreviations or alternative spellings; otherwise an answer that would have been accepted by a human marker will be marked as incorrect (the sketch after this list illustrates one way of allowing for such variants).
  • Only certain types of information literacy skills can be assessed by multiple choice, true/false, short answer and cloze questions. It is difficult to provide for the higher level skills of evaluating, analysing and synthesising sources of information. However, as our current program is designed, initially at least, to give students the skills to survive in a high tech environment, this is not such a major problem for our assignment, which is completed in the first few weeks of term. It is a consideration for the longer term - by the end of their course we hope that students have reached these higher levels, and we will probably have to stay with more traditional methods of assessing them.
  • The success of any computer-based program is affected by the quality of the technology and the support staff. This is often out of the control of library staff.
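
The following sketch illustrates one way the 'allowances' problem might be softened: normalise both the student's answer and the accepted answers before comparing them. The abbreviation table here is a hypothetical example; a real list would have to be built from the variants that actually appear in student work.

    # Sketch: tolerate case, punctuation and common abbreviations when
    # marking free-text answers automatically. The table is hypothetical.
    import re

    ABBREVIATIONS = {"jnl": "journal", "aust": "australian", "lib": "library"}

    def normalise(answer):
        """Lower-case, strip punctuation and expand known abbreviations."""
        words = re.sub(r"[^\w\s]", " ", answer.lower()).split()
        return " ".join(ABBREVIATIONS.get(word, word) for word in words)

    def mark(student_answer, accepted_answers):
        """True if the normalised answer matches any accepted form."""
        return normalise(student_answer) in {normalise(a) for a in accepted_answers}

    # mark("Aust. Jnl of Education", ["Australian Journal of Education"]) -> True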

Is online assessment fair, valid and reliable?

It is sometimes argued that the mechanics of using the computer to perform assessment tasks can get in the way of assessing subject content knowledge. This is probably a decreasing problem as computer use becomes more widespread. In any case, in our context the technology skills are themselves part of the content, not just the mechanism for delivering it: in our environment, information technology skills and information literacy skills are inextricably linked.

It has also been argued that computer-based testing is biased by demographic factors. It is claimed that males do better than females and that students from lower socio-economic groups do not perform well because they have had limited experience using computers. However, studies have not supported this belief. Bicanich, Slivinski, Hardwicke and Kapes (1997) found that internet-based delivery of a test does not affect student performance: there were no differences between two groups of students, one of which took a paper-and-pencil test while the other took a computer-based version of the same test. Because all our students undergo a placement test at the beginning of the course, by the time they are doing our assignment they have received instruction, if they need it, on using computer packages.

In several case studies reported by Greenberg (1998) it was found that scores from computerised and manual tests are comparable for the individuals taking them. Students who did well on the computerised tests also did well on the manual tests and those who performed poorly on the computer-based tests fared no better on the manual tests. In relation to our information literacy program this effect can be seen by comparing the response patterns of our students on the computer-based assignment and the final examination which is performed under traditional examination conditions. Although they were not alternative versions of the same test, the assignment and the exam covered the same material. We believe that the fact that our students performed 18% better on the hands-on assignment than in the paper-based examination indicates that they will not be disadvantaged by computer-based assessment (see appendices 1-3).

Nevertheless, it is probably better to vary the type of assessment. Students have personal preferences - some like multiple choice questions and some are more comfortable with a discursive style. Online teaching and learning programs often tend to ignore the different learning styles among students, and this is even more likely to happen with online assessment procedures. We strongly believe that the human element is an important part of our teaching strategy, and this is certainly the preference expressed by many students.

We would also like to find ways of rewarding students who start from a very low level, make huge improvements and meet the required standard, even though this may not bring them up to the standard of students who arrived with an elevated level of skill. Similarly, we do not want our testing procedures to restrict a student capable of very high levels of achievement. Some computer-adaptive tests can be customised for individual users: the degree of difficulty can be set at the appropriate level for each student so that none are discouraged by constant failure or left unchallenged by a too-easy test. Such tests usually begin with questions of medium difficulty but then adapt to the ability level of the test taker. Those who answer the first few questions correctly are given more difficult questions for the rest of the test; those who cannot answer the questions are shown simplified versions of the remaining questions (Greenberg, 1998). The number of questions can also be increased so that a student has more opportunities to grasp the skill being taught or assessed.

This is better than trying to create a middle-of-the-road test that may be discouraging for some and absurdly easy for others. It would offer the better students a chance to explore some of the advanced features and functionality of the databases - something we do not usually get around to in our formal instruction, which is introductory and generic. This technique would identify students in need of extra work without having to humiliate them first. It would also be compatible with the proposed recommendation expressed in the Information Literacy Standards for higher education, as drafted at the national workshop conducted by the University of South Australia, 22-23 September 2000, for the Council of Australian University Librarians (CAUL): 'All students are expected to demonstrate all of the standards, but not everyone will demonstrate them to the same level or at the same time'.
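
As a sketch of the adaptive behaviour Greenberg describes - start at medium difficulty, step up after a correct answer, step down after an error - the following outline may help. The item bank structure and the ask() function are hypothetical stand-ins for however questions would actually be presented and answered.

    # Sketch of a computer-adaptive test: difficulty rises after correct
    # answers and falls after errors (after Greenberg, 1998). The bank
    # structure and the ask() callback are hypothetical.
    def run_adaptive_test(bank, ask, length=10):
        """bank: dict mapping difficulty 1 (easy) to 5 (hard) to item lists.
        ask(item) returns True if the student answered the item correctly."""
        difficulty = 3                      # begin at medium difficulty
        results = []
        for _ in range(length):
            if not bank.get(difficulty):
                break                       # no items left at this level
            item = bank[difficulty].pop()
            correct = ask(item)
            results.append((item, difficulty, correct))
            # adapt: harder after a correct answer, easier after an error
            difficulty = min(5, difficulty + 1) if correct else max(1, difficulty - 1)
        return results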

Our user education sessions for later year students are not so systematic and structured as our first year program. It is difficult to tell if students are building on the skills they learnt in first year. We would like to carry out comprehensive evaluation at the end of second and third year to gather information on the long-term effects of our information literacy program. We would like to know if there is a match between what we taught and what graduating students think would have been valuable. We would also like to gain a better understanding of the research behaviour of third year students compared with first years. Online assessment could play a part in such a program but would need to be supplemented by other methods.

Quality issues

When completing library exercises many students are happy to hand in the first article they retrieve that meets the search criteria - we receive many copies of the same journal article for this reason. Are they more selective when locating resources for other subjects? If they are not, all we have managed to teach them is a mechanical skill: click on the check box and be rewarded with a journal article. Online information literacy instruction and assessment can deteriorate into a matching exercise if we are not careful.

We have to find time and ways to ensure that students really understand what they are doing and are evaluating the information they retrieve for quality and relevance. We have done well to be given two weeks out of a twelve-week course but this is obviously not enough for the gradual development and continuous improvement of information literacy skills that could be developed with a lengthier, less frenetic program. Moreover, we cannot even assume that all students will transfer the skills they have learned in Information Methods to their behaviour in other subjects. We would ideally like to have a structured course not only for the students' first semester but for all of their undergraduate years and beyond.

Many lifelong learning skills are beyond the scope of online teaching, learning and assessment. For this reason, the final examination needs to be structured so that students can demonstrate their understanding of complex knowledge and skills.

Unfortunately, online assessment is not a cure-all: testing methods must be congruent with the learning objectives and desired educational outcomes. Bosseau and Martin (1999) state 'When considering campus-wide information literacy assessment, it may be difficult to separate information literacy from the overarching goals of undergraduate education and the overall assessment of student learning. So do not separate it!' We are incorporating information literacy skills into many of our other subjects but not yet in a systematic way. This is the next challenge for us, and if we can reduce the amount of effort we currently spend on student evaluation by introducing online assessment, we will be able to address this issue.

Conclusion - so what will online assessment do?

Our information literacy program has been revised eight times in four years. Three of these revisions were major re-writes. The need for such extensive and continual revision has been brought about mainly by the volatile nature of the electronic database environment. This is what has created the need for the infinite number of questions we have had to devise to monitor student progress. It is also responsible for the infinite number of acceptable answers to many of these questions. We are aware that online assessment is not going to solve all our problems. We hope it will streamline our formative assessment procedures, make them more objective and enable us to give more timely feedback than is currently the case.

Assessing the development of higher level skills, such as evaluating the quality of resources, is difficult to do online. Computer-based assessment works best for criterion-referenced testing, i.e. does the result meet the specified criteria? In our case it will also work best for formative assessment. Summative assessment of students' first year achievement in developing an understanding of the concepts and abstractions involved in information literacy will continue to be done under exam conditions.

We are satisfied that an online version of our assignment will be effective as both a learning tool and a practical test, and that it will relieve but not eliminate the heavy work load involved in assessing student achievement in information literacy.

Bibliography

