Information literacy strategies
Finite Questions and Infinite Answers: Online Assessment of Information Literacy Skills
Julie Badger, Susan Roberts
Information staff at Swinburne University's Lilydale campus design and deliver an information literacy module within Information Methods, a compulsory subject for all first year higher education students. The information literacy component comprises units in constructing search strategies, using electronic databases, and searching the internet and the Swinburne Intranet. The component accounts for 20% of the students' assessment. Formative and summative assessment online raises a number of issues such as the huge number of possible correct answers to many of the questions, and the need to adapt procedures for both 'hands-on' and examination conditions. This paper addresses practical and educational ramifications.
The Information Literacy program at Swinburne University's Lilydale campus has been described and evaluated in previous conference papers, details of which are included in the bibliography, so we will not revisit issues raised there. This paper concentrates on the aspect of the program that has given us the greatest concern over the four years it has been in use - assessing student achievement - and in particular on our forthcoming move into large-scale online assessment. We will consider the pedagogical and practical issues this raises, both for ongoing formative evaluation of student learning and for end-of-semester summative assessment against the course objectives. Firstly, however, a little background information is necessary.
The story so far...
The Lilydale campus is now approaching its fifth year, having moved from a site in Mooroolbark at the start of 1997. The Mooroolbark campus was used to pilot a Multi Modal Learning project in 1992, and multi-modal learning is now the accepted strategy at Lilydale. Traditional teaching techniques are combined with a range of independent learning methods that rely heavily on computer and media technology. Courses offered at the campus are bachelor degrees in applied science, business, and social science. An honours program has been active for the past two years, and a graduate diploma in e-commerce is also offered.
All first year students are expected to complete four core subjects, regardless of the course they take. Library staff conduct user education sessions in all subjects but play a leading part in Information Methods, a subject that includes a module on critical and creative information literacy. Students are expected to gain skills that will lay the foundation for their learning in other subject areas and also in their later careers. Library information staff design and develop the curriculum, compose the relevant section of the online learning guide, deliver two lectures, devise, mark and grade two compulsory assignments and 20% of the examination. A large amount of staff time is consumed in this way.
The student load, as of 31 March 2000, was 1,559 EFTSU, manifested as 1,992 individual students. About three-quarters of these study full-time, on campus. Fifty-two per cent are recent school leavers, and, for the first time since Swinburne Lilydale's establishment, women students outnumber men. Women now account for 55% of the student body, men 45% (Swinburne University of Technology, 2000). Sixty-four per cent of students at Lilydale are studying business in either generic or tagged bachelor degrees, or a dual award with TAFE.
Information literacy skills of beginning students
From 1999 onwards, students enrolling for Information Methods have been required to sit a placement test to assess their entry-level skills in computer literacy. Results indicate that there are diverse levels of information technology knowledge and literacy among the first year students; 25% of these students have negligible computer skills. Older students returning to study after many years are often tentative about computer technology, but are very motivated to learn and 'catch up' with younger students. Those who demonstrate that they have basic computer skills are exempted from the preliminary part of the subject (Module A). Those students who have little or no computer literacy are enrolled in Module A, which focuses on gaining confidence in the use of information technology, particularly computers and the internet (Swinburne University of Technology, 2000, p. 433).
From the library point of view, it is helpful to know that students will be either exempt from this basic module, or have done it and passed. New students need to develop a high degree of computer confidence in a short time. Most of their subjects have some online component and in many cases they are obliged to submit assignments through this medium. In addition, ours is essentially a virtual library. We have a small print collection but are heavily reliant on electronic delivery of information resources. Students need to develop information technology skills before they can progress towards information literacy.
Past and current means of assessment
For the past few years, library staff have set two assignments for Information Methods. The first is a self-paced assignment that simultaneously develops and assesses skills in using the Swinburne OPAC. The second assignment is more challenging and covers advanced catalogue searching, constructing search strategies, using CD-ROM and web-based databases, and searching the internet critically and effectively. This assignment is corrected, graded and returned to the students so they can use it to revise for their exam. On average, three hundred assignments are submitted each semester, and these take a vast amount of time to assess.
At the end of semester the students take a final exam in Information Methods. The library information staff are responsible for setting 20% of the exam questions. Both the assignment and exam questions are revised each semester, to keep up with changes in databases and current trends and to reword questions that students found ambiguous or unclear. It has been an interesting experience to be part of both the formative and summative evaluation in this subject. We have been able to examine both sets of results from two points of view - what they tell us about the students' levels of information literacy, and whether we have been effective in our teaching.
Assessing the assessment
We have observed that the students' scores were slightly higher this year than in previous semesters. This has been the trend for the four years that the program has been operating. Some possible reasons for this could be:
We developed a database of student results for first semester 2000 with the intention of searching for a relationship between their assignment results and their examination results. The three appendices to this paper are graphs of data that were extracted from the database, and they illustrate some interesting points. The graphs only take into account the students who completed both the assignment and the exam: there were some students who only completed one or the other.
Looking at the overall results for Semester One, 2000:
Students did better on multiple choice questions where they only had to recognise the correct answer than they did on questions that asked them to write a short paragraph. Here is an example of each type:
A much higher percentage of students successfully answered the multiple-choice question than the question that required a paragraph answer. This raises some doubts about the level of student understanding. We realise that simply putting the test online will not solve this kind of problem. So why do we want to move to online assessment of information literacy skills?
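The gap between the two question types also explains why multiple choice is so attractive for online marking. A minimal sketch, with invented questions, options and rubric keywords (none of these are our actual exam items):

```python
# Why multiple-choice items are easy to mark online while paragraph
# answers are not. The keyword rubric below is a crude proxy only.

def mark_multiple_choice(selected, correct):
    """A multiple-choice item has exactly one right answer to recognise."""
    return 1 if selected == correct else 0

def mark_paragraph(answer, rubric_keywords, pass_mark=2):
    """Crude keyword rubric: counts rubric concepts the answer mentions.
    This rewards recall of terms rather than genuine understanding,
    which is exactly the limitation discussed above."""
    text = answer.lower()
    hits = sum(1 for kw in rubric_keywords if kw in text)
    return 1 if hits >= pass_mark else 0

print(mark_multiple_choice("b", "b"))  # 1
print(mark_paragraph(
    "Truncation broadens a search; Boolean AND narrows it by "
    "requiring both terms.",
    ["truncation", "boolean", "narrow"],
))  # 1
```

The multiple-choice marker is exact; the paragraph marker can only approximate a human reader, which is why understanding-level questions remain hard to assess automatically.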
Advantages and disadvantages of online assessment
Some of our concerns about on-line assessment of information literacy skills are:
Is online assessment fair, valid and reliable?
It is sometimes argued that the mechanics of using the computer to perform assessment tasks can get in the way of assessing subject content knowledge. This is probably a decreasing problem as computer use becomes more widespread. In any case, in our context the technology skills ARE part of the content and not just the mechanism for delivering it. In our environment information technology skills and information literacy skills are inextricably linked.
It has also been argued that computer-based testing is biased by demographic factors. It is claimed that males do better than females and that students from lower socio-economic groups do not perform well because they have had limited experience using computers. However, studies have not supported this belief. Bicanich, Slivinski, Hardwicke and Kapes (1997) found that Internet-based delivery of a test does not affect student performance. They found no differences between two groups of students, one of whom took a paper-and-pencil test while the other took a computer-based version of the same test. Because all our students undergo a placement test at the beginning of the course, by the time they are doing our assignment they have received instruction, if they need it, on using computer packages.
In several case studies reported by Greenberg (1998) it was found that scores from computerised and manual tests are comparable for the individuals taking them. Students who did well on the computerised tests also did well on the manual tests and those who performed poorly on the computer-based tests fared no better on the manual tests. In relation to our information literacy program this effect can be seen by comparing the response patterns of our students on the computer-based assignment and the final examination which is performed under traditional examination conditions. Although they were not alternative versions of the same test, the assignment and the exam covered the same material. We believe that the fact that our students performed 18% better on the hands-on assignment than in the paper-based examination indicates that they will not be disadvantaged by computer-based assessment (see appendices 1-3).
Nevertheless, it is probably better to vary the type of assessment. Students have personal preferences - some like multiple choice questions and some are more comfortable with a discursive style. Online teaching and learning programs often tend to ignore the different learning styles among students - and this is even more likely to happen with online assessment procedures. We strongly believe that the human element is an important part of our teaching strategy and this is certainly the preference expressed by many students.
We would also like to find ways of rewarding students who start from a very low level, make huge improvements and meet the required standard, although this may not bring them up to the standard of students who brought an elevated level of skill to the course with them. Similarly we do not want our testing procedures to restrict a student capable of obtaining very high levels of achievement. Some computer-adaptive tests can be customised for individual users. The degree of difficulty of a test can be set at the appropriate level for each student so that none of them are discouraged by constant failure or left unchallenged by a too-easy test. Such tests usually begin with questions of medium difficulty but then adapt to the ability level of the test taker. Those who answer the first few questions correctly are given more difficult questions for the rest of the test. Those who cannot answer the questions are shown simplified versions of the remaining questions. (Greenberg, 1998) The number of questions can also be increased so that a student has more opportunities to grasp the skill being taught or assessed.
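The adaptive approach Greenberg describes can be sketched in a few lines: start at medium difficulty, step up after a correct answer, step down after an incorrect one. The question bank, levels and answers below are all invented for illustration.

```python
# A minimal sketch of a computer-adaptive test: difficulty rises after
# a correct answer and falls after an incorrect one. Hypothetical bank:
# difficulty level -> list of (question, answer) pairs.
import random

BANK = {
    1: [("Which operator broadens a search: AND or OR?", "or")],
    2: [("What symbol commonly truncates a search term?", "*")],
    3: [("Which field code limits a Boolean search to the title?", "ti")],
}

def run_adaptive_test(responder, n_items=5, start_level=2):
    """responder(question) -> student's answer. Returns (level, correct)
    for each item presented."""
    level, results = start_level, []
    for _ in range(n_items):
        question, answer = random.choice(BANK[level])
        correct = responder(question).strip().lower() == answer
        results.append((level, correct))
        # Adapt: harder after success, easier after failure.
        level = min(level + 1, 3) if correct else max(level - 1, 1)
    return results
```

Simulating a student who knows every answer shows the difficulty climbing from the medium starting level to the hardest level and staying there, while a struggling student would be stepped down to simpler items rather than facing constant failure.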
This is better than trying to create a middle-of-the-road test that may be discouraging for some and absurdly easy for others. It would offer the better students a chance to explore some of the advanced features and functionality of the databases - something we do not usually get around to in our formal instruction, which is introductory and generic. This technique would identify students in need of extra work without having to humiliate them first. It would also be compatible with the recommendation expressed in the Information Literacy Standards for higher education as drafted at the national workshop conducted by the University of South Australia, 22-23 September 2000, for the Council of Australian University Librarians (CAUL): 'All students are expected to demonstrate all of the standards, but not everyone will demonstrate them to the same level or at the same time'.
Our user education sessions for later year students are not so systematic and structured as our first year program. It is difficult to tell if students are building on the skills they learnt in first year. We would like to carry out comprehensive evaluation at the end of second and third year to gather information on the long-term effects of our information literacy program. We would like to know if there is a match between what we taught and what graduating students think would have been valuable. We would also like to gain a better understanding of the research behaviour of third year students compared with first years. Online assessment could play a part in such a program but would need to be supplemented by other methods.
When completing library exercises many students are happy to hand in the first article they retrieve that meets the search criteria - we receive many copies of the same journal article for this reason. Are they more selective when locating resources for other subjects? If they are not, all we have managed to teach them is a mechanical skill: click on the check box and be rewarded with a journal article. Online information literacy instruction and assessment can deteriorate into a matching exercise if we are not careful.
We have to find time and ways to ensure that students really understand what they are doing and are evaluating the information they retrieve for quality and relevance. We have done well to be given two weeks out of a twelve-week course but this is obviously not enough for the gradual development and continuous improvement of information literacy skills that could be developed with a lengthier, less frenetic program. Moreover, we cannot even assume that all students will transfer the skills they have learned in Information Methods to their behaviour in other subjects. We would ideally like to have a structured course not only for the students' first semester but for all of their undergraduate years and beyond.
Many lifelong learning skills are beyond the scope of online teaching, learning and assessment. For this reason, the final examination needs to be structured in such a way that students can demonstrate their understanding of complex knowledge and skills.
Unfortunately online assessment is not a cure-all - testing methods must be congruent with the learning objectives and desired educational outcomes. Bosseau and Martin (1999) state 'When considering campus-wide information literacy assessment, it may be difficult to separate information literacy from the overarching goals of undergraduate education and the overall assessment of student learning. So do not separate it!' We are incorporating information literacy skills into many of our other subjects but not yet in a systematic way. This is the next challenge for us and if we can reduce the amount of effort we are currently spending on student evaluation by introducing online assessment, we will be able to address this issue.
Conclusion - so what will online assessment do?
Our information literacy program has been revised eight times in four years. Three of these revisions were major re-writes. The need for such extensive and continual revision has been brought about mainly by the volatile nature of the electronic database environment. This is what has created the need for the infinite number of questions we have had to devise to monitor student progress. It is also responsible for the infinite number of acceptable answers to many of these questions. We are aware that online assessment is not going to solve all our problems. We hope it will streamline our formative assessment procedures, make them more objective and enable us to give more timely feedback than is currently the case.
Assessing the development of higher-level skills such as evaluating the quality of resources is difficult to do online. Computer-based assessment works best for criterion-referenced testing, e.g. does the result meet the criteria specified? In our case it will also work best for formative assessment. Summative assessment of students' first-year achievement in developing an understanding of the concepts and abstractions involved in information literacy will continue to be done under exam conditions.
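Criterion-referenced checking is also how we hope to cope with the 'infinite answers' problem: rather than comparing a submission against one model answer, the marker checks that whatever article the student found meets the stated criteria. A minimal sketch - the record fields and criteria below are assumptions for illustration, not a real database schema:

```python
# Criterion-referenced marking: any of the many acceptable answers
# passes, provided it satisfies the task's stated criteria.

def meets_criteria(record, required_term, year_from, year_to):
    """Accept any journal article on the topic within the date range."""
    term = required_term.lower()
    on_topic = (term in record["title"].lower()
                or term in record.get("abstract", "").lower())
    in_range = year_from <= record["year"] <= year_to
    return record["type"] == "journal article" and on_topic and in_range

# Two different student submissions, both acceptable:
a = {"type": "journal article", "title": "Information literacy online",
     "year": 1999}
b = {"type": "journal article", "title": "Assessment in higher education",
     "abstract": "Online information literacy testing...", "year": 2000}

print(meets_criteria(a, "information literacy", 1995, 2000))  # True
print(meets_criteria(b, "information literacy", 1995, 2000))  # True
```

Two students submitting entirely different articles can both be marked correct, which a single-model-answer marker could never do - though, as noted above, this still only verifies the mechanical criteria, not the quality of the student's judgement.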
We are satisfied that an online version of our assignment will be effective as both a learning tool and a practical test, and that it will relieve but not eliminate the heavy work load involved in assessing student achievement in information literacy.