Computerized adaptive testing (CAT) is a form of computer-based testing that adapts to the examinee's ability level. For this reason, it has also been called tailored testing.
CAT successively selects questions so as to maximize the precision of the exam based on what is known about the examinee from previous questions. From the examinee's perspective, the difficulty of the exam seems to tailor itself to their level of ability. For example, if an examinee performs well on an item of intermediate difficulty, they will then be presented with a more difficult question. Or, if they performed poorly, they would be presented with a simpler question. Compared to static multiple-choice tests that nearly everyone has experienced, with a fixed set of items administered to all examinees, CAT requires fewer test items to arrive at equally accurate scores. However, nothing about the CAT methodology requires the items to be multiple-choice; just as most exams are multiple-choice, most CAT exams also use this format.
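The select-harder-after-a-correct-response, select-easier-after-an-incorrect-response cycle can be sketched with a simple one-parameter (Rasch) IRT model. The item bank, its difficulty values, and the grid-search ability estimator below are all illustrative assumptions, not any operational program's actual algorithm:

```python
import math

# Hypothetical item bank: each item has a Rasch difficulty parameter b.
ITEM_BANK = {"q1": -2.0, "q2": -1.0, "q3": 0.0, "q4": 1.0, "q5": 2.0}

def prob_correct(theta, b):
    """Rasch (1PL) model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, items):
    """Crude maximum-likelihood ability estimate on a coarse grid."""
    grid = [g / 10.0 for g in range(-40, 41)]  # -4.0 .. 4.0
    def log_lik(theta):
        ll = 0.0
        for item, correct in zip(items, responses):
            p = prob_correct(theta, ITEM_BANK[item])
            ll += math.log(p) if correct else math.log(1.0 - p)
        return ll
    return max(grid, key=log_lik)

def next_item(theta, administered):
    """Select the unadministered item whose difficulty is closest to theta
    (for the Rasch model this maximizes Fisher information)."""
    candidates = {k: b for k, b in ITEM_BANK.items() if k not in administered}
    return min(candidates, key=lambda k: abs(candidates[k] - theta))

# An examinee answers a medium item (q3) correctly and a hard item (q5)
# incorrectly, so the ability estimate lands between them and the next
# item selected matches that intermediate level.
administered, responses = ["q3", "q5"], [True, False]
theta = estimate_theta(responses, administered)
print(next_item(theta, administered))  # -> q4
```

In a real program the estimator would be more refined (e.g., Newton-Raphson maximum likelihood or a Bayesian estimator) and the bank would contain hundreds of items, but the loop is the same: estimate ability, select the most informative remaining item, administer, repeat.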
Adaptive tests can provide uniformly precise scores for most test-takers. In contrast, standard fixed tests almost always provide the best precision for test-takers of average ability and increasingly poorer precision for test-takers with more extreme test scores.
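This precision pattern follows from test information: a fixed form built around average difficulty is most informative near average ability, and the standard error of measurement grows toward the extremes. A minimal sketch under the Rasch model, with an invented 30-item fixed form clustered near difficulty 0:

```python
import math

def p(theta, b):
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def standard_error(theta, difficulties):
    # Rasch test information is the sum of p*(1-p) over items;
    # the standard error of the ability estimate is 1/sqrt(information).
    info = sum(p(theta, b) * (1 - p(theta, b)) for b in difficulties)
    return 1.0 / math.sqrt(info)

# A hypothetical fixed form: 30 items clustered around average difficulty.
fixed_form = [-0.5, -0.25, 0.0, 0.0, 0.25, 0.5] * 5

for theta in (-3.0, 0.0, 3.0):
    print(f"theta={theta:+.1f}  SE={standard_error(theta, fixed_form):.2f}")
```

Running this shows the standard error is smallest at theta = 0 and roughly doubles at theta = ±3: exactly the "good in the middle, poor at the extremes" behavior described above. An adaptive test avoids this by routing extreme examinees to items matched to their level.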
An adaptive test can typically be shortened by 50% and still maintain a higher level of precision than a fixed version. This translates into time savings for the test-taker: they do not waste time attempting items that are too hard or trivially easy. The testing organization also benefits from the time savings, since the cost of examinee seat time is substantially reduced. However, because developing a CAT involves much more expense than a standard fixed-form test, a large examinee population is needed for a CAT program to be financially viable.
The first issue encountered in CAT is the calibration of the item pool. In order to model the characteristics of the items (e.g., to pick the optimal items), all items of the test must be pre-administered to a sizable sample and then analyzed. To achieve this, new items must be mixed into the operational items of an exam (the responses are recorded but do not contribute to test-takers' scores), a process known as pre-testing, pilot testing, or seeding. This presents logistical, ethical, and security issues. For example, it is impossible to field an operational adaptive test with brand-new, unseen items; all items must be pretested with a large enough sample to obtain stable item statistics. This sample may need to be as large as 1,000 examinees. Each program must decide what percentage of the test can reasonably be composed of unscored pilot items.
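The seeding mechanics can be sketched as follows. All pool sizes, item IDs, and the 10% pilot share below are invented for illustration; the point is only that pilot items are shuffled in so they are not identifiable, and that their responses are logged for calibration without affecting the reported score:

```python
import random

# Hypothetical pools: operational (scored) and pilot (unscored) item IDs.
OPERATIONAL = [f"op{i}" for i in range(1, 41)]
PILOT = [f"new{i}" for i in range(1, 11)]

def assemble_form(n_operational=18, n_pilot=2, seed=None):
    """Mix a few unscored pilot items into an operational form.

    Here 2 of 20 items (10%) are pilot items; each program must decide
    what share of the test can reasonably be unscored."""
    rng = random.Random(seed)
    form = rng.sample(OPERATIONAL, n_operational) + rng.sample(PILOT, n_pilot)
    rng.shuffle(form)  # pilot items should not be identifiable by position
    return form

def score(form, responses):
    """Only operational items contribute to the reported score; pilot
    responses are recorded separately for later item calibration."""
    scored = [r for item, r in zip(form, responses) if item.startswith("op")]
    pilot_log = {item: r for item, r in zip(form, responses)
                 if item.startswith("new")}
    return sum(scored), pilot_log
```

Once a pilot item has accumulated responses from enough examinees (on the order of the 1,000 mentioned above), its parameters can be estimated and it can graduate into the operational pool.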
Although adaptive tests have exposure control algorithms to prevent overuse of a few items, the exposure conditioned upon ability is often not controlled and can easily approach 1. That is, it is common for some items to appear on the tests of nearly all examinees of similar ability. This is a serious security concern, because a group sharing items may well have a similar ability level. In fact, a completely randomized exam is the most secure, but also the least efficient.
Computerized adaptive tests are potentially the future of assessment. They work by adapting both the difficulty and the number of items presented to each examinee.