EGovernance and Cloud Computing Services - 2012
Foundation of Computer Science USA
EGOV - Number 3
December 2012
Authors: Kavitha Rajamani, Vijaya Kathiravan
Kavitha Rajamani and Vijaya Kathiravan. Composing Sequential Test Items with Multipart Criteria in Adaptive Testing. EGovernance and Cloud Computing Services - 2012. EGOV, 3 (December 2012), 3-7.
The traditional learning environment is rapidly being supplemented by E-Learning, particularly Computer Assisted Instruction (CAI). Each learner has a different learning status and should therefore be evaluated with different test items. A Computerized Adaptive Test (CAT) can dynamically adjust the difficulty of test items according to the learner's ability. A good test not only helps the instructor evaluate the learning status of the students, but also facilitates the diagnosis of problems embedded in the students' learning process. One of the most important and challenging issues in conducting a good test is the construction of test sheets that meet various criteria. Several measures have therefore been proposed to represent the quality of each test item, such as degree of difficulty and degree of discrimination. However, the quality of a test depends not only on the quality of the item bank, but also on the way the assessment sheet is constructed. Selecting appropriate test items is important when constructing an assessment sheet that must satisfy multiple assessment criteria, such as the expected difficulty degree, the expected discrimination degree, the number of test items, the estimated testing time, and a specified distribution of relevant concept weights. This paper proposes dynamic question generation based on Particle Swarm Optimization (PSO), which improves the efficiency of composing near-optimal serial test items that meet multiple assessment criteria. The proposed approach is compared with existing methods in terms of efficiency.
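To make the idea concrete, the multi-criteria item selection described above can be sketched as a binary PSO over an item bank: each particle is a bit vector marking which items appear on the sheet, and fitness penalizes deviation from a target difficulty, the wrong item count, and an exceeded time budget. This is a minimal illustrative sketch, not the paper's actual formulation; the item bank, the penalty weights, and all parameter values are assumptions chosen for demonstration.

```python
import math
import random

random.seed(7)

# Hypothetical item bank: each item is (difficulty, discrimination, time in minutes).
# Values are randomly generated for illustration only.
BANK = [(random.uniform(0.2, 0.9), random.uniform(0.1, 0.6), random.uniform(1.0, 4.0))
        for _ in range(40)]

TARGET_DIFFICULTY = 0.5   # expected average difficulty of the sheet (assumed)
TARGET_ITEMS = 10         # desired number of test items (assumed)
TIME_BUDGET = 30.0        # estimated testing-time limit in minutes (assumed)

def fitness(selection):
    """Lower is better: deviation from the target difficulty, plus
    penalties for the wrong item count and for exceeding the time budget."""
    chosen = [BANK[i] for i, bit in enumerate(selection) if bit]
    if not chosen:
        return float("inf")
    avg_difficulty = sum(item[0] for item in chosen) / len(chosen)
    total_time = sum(item[2] for item in chosen)
    return (abs(avg_difficulty - TARGET_DIFFICULTY)
            + 0.1 * abs(len(chosen) - TARGET_ITEMS)
            + 0.1 * max(0.0, total_time - TIME_BUDGET))

def compose_sheet(n_particles=20, iterations=100, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO: velocities stay real-valued; each bit is resampled
    with probability sigmoid(velocity) every iteration."""
    dim = len(BANK)
    pos = [[random.randint(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = min(range(n_particles), key=pbest_fit.__getitem__)
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-6.0, min(6.0, vel[i][d]))  # clamp velocity
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if random.random() < prob else 0
            f = fitness(pos[i])
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f < gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

best_sheet, best_cost = compose_sheet()
```

The same fitness function extends naturally to the other criteria listed in the abstract (expected discrimination, concept-weight distribution) by adding further penalty terms; PSO itself is unchanged, which is what makes the swarm approach attractive for multi-criteria sheet composition.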