WO2006131819A2 - Method and system for automated adaptive skill practice - Google Patents

Method and system for automated adaptive skill practice

Info

Publication number
WO2006131819A2
Authority
WO
WIPO (PCT)
Prior art keywords
task
user
response
actuation
determined
Prior art date
Application number
PCT/IB2006/001506
Other languages
French (fr)
Other versions
WO2006131819A3 (en)
Inventor
Vijay Jha
Original Assignee
Vijay Jha
Priority date
Filing date
Publication date
Application filed by Vijay Jha filed Critical Vijay Jha
Publication of WO2006131819A2 publication Critical patent/WO2006131819A2/en
Publication of WO2006131819A3 publication Critical patent/WO2006131819A3/en

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00 - Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02 - Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Abstract

The present invention relates to dedicated electronic apparatus and methods for improving skills (speed with accuracy) of candidates for success in job-tests and other career-oriented tests in Mathematics, Reasoning, Grammar, numerical problems in Science, etc. More particularly, the present invention relates to electronic apparatus and methods which generate and/or automatically simulate enough random practice tasks or stimuli in a group of topics, evaluate the user's response time for speed and accuracy in each task, wait for a predetermined interval between two tasks, smoothen the response time measurement by averaging, verify whether the user has achieved a prescribed speed/accuracy level before starting precise response-time analysis, and give a performance report. The present invention further relates to those electronic apparatus and methods which detect weak topics, generate or simulate enough practice tasks targeted at improving the weak spots, and report errors due to slips and impulses in speedy response tests to improve the user's self-error-monitoring.

Description

METHOD AND SYSTEM FOR AUTOMATED ADAPTIVE SKILL PRACTICE
FIELD OF THE INVENTION
The present invention relates to dedicated electronic apparatus and methods for improving skills (speed with accuracy) of candidates for success in job-tests and other career-oriented tests in Mathematics, Reasoning, Grammar, numerical problems in Science, etc. It relates to electronic apparatus and methods which generate and/or automatically simulate enough random practice tasks or stimuli in a group of topics, evaluate the user's response time for speed and accuracy in each task, wait for a predetermined interval between two tasks, smoothen the response time measurement by averaging, verify whether the user has achieved a prescribed speed/accuracy level before starting precise response-time analysis, and give a performance report. It further relates to those electronic apparatus and methods which detect weak topics, generate or simulate enough practice tasks targeted at improving the weak spots, and report errors due to slips and impulses in speedy response tests to improve the user's self-error-monitoring.
PRIOR ART AND BACKGROUND OF THE INVENTION
Due to the increasing importance of the development of intellectual skills in the 21st century, there is more emphasis on specialized coaching to develop specific skills for many high-stakes examinations. The market has also been flooded with numerous electronic systems and tools catering to individual students or coaching centers. While existing dedicated electronic gadgets cater to kids only, some Internet systems and software cater to students preparing for high-stakes tests like the SAT. Some existing network software functions in a LAN environment and caters to coaching centers providing services to students preparing for high-stakes tests. These systems usually include: (i) presentation of timed "sample tests" and practice tests, (ii) scoring of responses from these tests, (iii) question-specific feedback (e.g., response chosen, correct answer, explanation, etc.), and (iv) some test-taking tips like skipping questions. Some of these use audio and/or graphics and/or have the provision of explanations for each of the response alternatives for each item, while others allow the user to mark items to be skipped and returned to. Some of these also offer feedback of a study plan based upon the results of a "sample test". A recent development in this direction is tools based upon cognitive diagnostic assessment (CDA), built around an item matrix (Q-matrix) (for details and more references, see the background of United States Patent 6688889). By "task instance" we mean a particular example of a task; calculating the area of a triangle with variable sides a, b, and c is a task, while calculating the area of a triangle with sides 3, 4, and 5 is a task instance. The task, or task type in general, can be of two kinds - multiple-choice tasks and non-multiple-choice tasks, in which the user is asked to type in the correct answer.
In both kinds of tasks we have further subdivisions - tasks with more than one value in the answer, and tasks with a single value in the answer; a task requiring the user to derive the area and perimeter of a triangle with given sides has two values in its answer. At the third layer, each of the above four subdivisions can be further subdivided into two categories each - Mathematical and Numerical tasks, and tasks belonging to other subjects like Logical Reasoning, Grammar, etc. In multiple-choice tasks a problem is stated together with more than one choice, at least one of which is correct. In Mathematical and Numerical tasks exactly one choice is correct. In systems and methods of the present invention related to problem simulation, we assume Mathematical and Numerical tasks, while in systems and methods related to multiple-choice simulation for response, we assume Mathematical and Numerical multiple-choice tasks in which exactly one choice is correct. In all other modules we assume any general task as defined above. The latest research in psychophysics and the psychology of learning shows that attentive repetition of a task physically enlarges the areas of the brain responsible for it (B. Draganski, C. Gaser, V. Busch, G. Schuierer, U. Bogdahn & A. May, "Neuroplasticity: Changes in grey matter induced by training", Nature 427, 311-312), that all knowledge is basically performatory (David A. Rosenbaum, Richard A. Carlson, & Rick O. Gilmore, "Acquisition of intellectual and perceptual-motor skills", Annual Review of Psychology, 2001, Vol. 52; later referred to as [RC]), and that methodical practice can turn an average user into a world-class performer (Delaney P. F., Reder L. M., Staszewski J. J., Ritter F. E., "The strategy specific nature of improvement: The power law applies by strategy within task", Psychological Science 9(1): 1-7, 1998; later referred to as [DRSR]).
Stressing only upon understanding, without speed, has also been the main reason for the gradual loss of the user's motivation in the subject, which in many cases leads to a lack of understanding at higher levels (Binder, C., Precision Teaching: Measuring and attaining exemplary academic achievement. Youth Policy, 1988, 10(7), 12-15; later referred to as [BC]). So, understanding alone, without appropriate skill development, cannot survive for long. Intellectual skill development should therefore follow the principles of physical skill development - regular, meticulously planned exercises and continuous performance monitoring; this is how an experienced coach trains athletes. It requires randomly generating enough task instances in a group of topics, evaluating the user's response in a task for speed and accuracy to measure skill, detecting weak spots, generating tasks targeted at improving those weak spots, and automatically generating a performance report.
The first generation of educational tools presented the user with task instances from databases, recorded the user's response, and compared it with a pre-stored correct answer. The next generation of educational software, based upon Computerized Adaptive Tests (CAT) (H. Wainer et al. (Ed.), "Computerized Adaptive Testing: A Primer", New Jersey: Lawrence Erlbaum Associates, Publishers, 1990), considered a correct or wrong response, and based upon it moved to higher or lower levels. Some recent CAT software tools have numerous learning activity levels for each task (or games), and can record students' activity patterns and generate performance reports. All these tools aid in mere knowledge acquisition, and not in skill mastery, which, in contrast to mere knowledge acquisition, means fluency, i.e., accuracy with speed [BC]. These tools neither incorporate any research in the psychology of skill development, nor distinguish skill mastery from mere knowledge acquisition. These tools do not analyze the user's response time in each task, without which one cannot distinguish skill level from mere knowledge, since the former requires speed with accuracy. Users have distinct biological clocks and peaks of motivation. For best results they should work at their best times of motivation. Given the size and cost/performance ratio of the available systems, it is desirable to have an inexpensive gadget personalized to the user's needs. Available dedicated gadgets are addressed only to kids (see http://www.vtechkids.com/).
To store a database of hundreds of instances of each task for each of the possible digit sizes of operands would require much memory storage; its compilation is an error-prone process, and it would increase the cost of the device. So, one needs some mechanism to store as many tasks as possible as templates, and to automatically and randomly simulate hundreds of instances of each task on the fly. This problem can be solved for Mathematical and Numerical tasks. Job-tests require multiple choices, in which incorrect alternatives, known as distractors, are obtained by cleverly adding small errors to the correct choice. To reduce the cost of storage of the content in an inexpensive gadget without the traditional database capabilities, it is desirable to have a mechanism to simulate multiple choices for response on the fly for numerous tasks. For this, a mechanism is required to generate distractors which are not easy to guess, i.e., which work. This problem can be solved for a wide class of multiple-choice mathematical and numerical tasks. In case there is no mechanism to automate the elimination of weak distractors for some task, one needs to store a sufficient number of its instances, i.e., values of the variables figuring in the task and the corresponding sets of responses to be presented to the user, as in the traditional tools.
Contrary to the simple alternatives of "correct" or "incorrect" required to test "knowledge", "skill" is scored on a much wider continuous scale by response-time measurement, and the difficulty lies in evaluating skills on an objective continuous scale, so that one's scores of skill in two distinct tasks can be compared to detect the weak spot. For example, suppose one takes one second in adding and two seconds in multiplying 344 and 488. Which is his or her weak spot - addition or multiplication? So, with the shifting of focus from the manual to the mental, due to the brain's quicker reaction time, efficient progress monitoring through orthodox coaching becomes very difficult; even many dedicated tutors cannot perform it manually with efficiency in subjects like mathematics at the secondary school level. From common tests meant for a group of students one gets one's average performance over a group of topics, but not in each and every subtopic. This does not help in determining accurately in which task the user is least skilled. So, it is desirable to have a mechanism to compare the user's skill in two distinct tasks, so as to detect which task is the weak spot of the user. On starting the device, it is desirable to offer the user a menu to select a single task or a group of tasks for the current practice session. Once these mechanisms are in place, the system can evaluate and present the user practice tasks within the selected group of tasks in increasing order of the user's relative performance in tasks, the weakest being the first.
Since an appropriate time duration (called waittime), dependent upon the difficulty-level of the task, between a user's response to a task and the presentation of the next task should improve performance by giving the user time to relax, this feature is desirable. To get rid of chance fluctuations in the user's response time measurement, one needs to average the response time of a few instances of a task in sequence (see [DRSR]). Also, since the user can increase his or her speed in a task at the cost of accuracy, to avoid problems with incorrect trials in response time measurement (see page 5 of [DRSR]), it is desirable to have a mechanism to analyze response time only after the user achieves a certain level of speed and accuracy.
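The averaging and qualification gates described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, the 90% accuracy threshold, and the trial representation are assumptions made for the example.

```python
def smoothed_response_time(trials, accuracy_threshold=0.9):
    """trials: list of (response_time_seconds, was_correct) pairs for
    successive instances of one task. Returns the averaged response time
    over the correct trials, or None if the user has not yet crossed the
    prescribed accuracy level (precise timing analysis is deferred)."""
    if not trials:
        return None
    correct_times = [t for t, ok in trials if ok]
    accuracy = len(correct_times) / len(trials)
    if accuracy < accuracy_threshold:
        return None  # below the accuracy threshold: do not analyze timing yet
    # averaging smooths out chance fluctuations in individual trials
    return sum(correct_times) / len(correct_times)
```

In this sketch, the waittime between tasks would simply be a pause inserted by the caller between successive trials; it does not affect the averaging itself.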
Further, recent results on error-related negativity (ERN) show that when one makes an error in a speeded response task, there is a distinct negative deflection in the scalp electrical field over the medial frontal cortex, which peaks at approximately 80 milliseconds after the error (Scheffers M. K., Coles M. G., "Performance monitoring in a confusing world: error-related brain activity, judgments of response accuracy, and types of errors", J Exp Psychol Hum Percept Perform. 2000 Feb; 26(1), 141-51; later referred to as [SC]). This negative deflection is called the error-related negativity, or the ERN. It depends upon awareness, and the subject's motivation to respond correctly. A faster ERN is associated with smaller error rates (Patricia E. Pailing, Sydney J. Segalowitz, Jane Dywan, and Patricia L. Davies, "Error negativity and response control", Psychophysiology, 39 (2002), 198-206, Cambridge University Press; later referred to as [PS]). So, by quickening the ERN one can decrease error rates. It is also known that each time an answer is produced under external reinforcement, the association between the problem and the answer is increased, and the increment is much greater for correct answers than for wrong answers [RC]. In fact, this principle lies behind all reward- and punishment-based learning methods. So, it is desirable to implement a simple mechanism to provide external reinforcement to quicken the user's ERN during skill practice, which may eventually improve his or her self-error-monitoring, decrease the error rates, and thus shorten the skill development process. It would also give two added quantitative measures for performance evaluation in a task - the percentage of errors due to slips and impulses, and the user's response time for correction of such errors in a task.
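The slip-correction mechanism described above might be sketched as follows; the event-list representation, the returned dictionary shape, and the 0.5-second default ceiling are illustrative assumptions, not values from the specification.

```python
def record_response(actuations, is_correct, maxerntime=0.5):
    """actuations: chronological (answer, timestamp) events for one task
    instance. A wrong first actuation followed, within `maxerntime`
    seconds, by a correct second actuation is scored as a corrected slip,
    and the gap between the two actuations is recorded as the erntime."""
    first_answer, t1 = actuations[0]
    if is_correct(first_answer):
        return {"answer": first_answer, "correct": True, "erntime": None}
    if len(actuations) > 1:
        second_answer, t2 = actuations[1]
        if t2 - t1 <= maxerntime and is_correct(second_answer):
            # corrected slip/impulse: count it and record the correction time
            return {"answer": second_answer, "correct": True,
                    "erntime": t2 - t1}
    return {"answer": first_answer, "correct": False, "erntime": None}
```

Accumulating the `erntime` values and the count of corrected slips over a practice session would yield the two added performance measures mentioned above.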
Some Internet software (LearningSoft etc.) is better in diagnostics. However, these, despite the added communication cost, are for kids. United States Patent Application (abbreviated USPA) 20040005536 claims to provide a universal placement system to diagnose and place students for various courses, and can be linked to software of distinct companies. USPA 20030198929 is software to automate the testing, administration of test batteries, assembly, and delivery of individualized instructional materials. USPA 20020161732 is software for writing educational quizzes for learning. USPA 20040009462 relates to on-line learning, electronic development, storage, retrieval and delivery of customized study courses in a multi-sensory environment. United States Patent (abbreviated USP) 6840774 gives systems and methods for improving the Math skills of kids in four basic arithmetic operations with single-digit operands, and begins with a database of problems. USP 6186794 is software for adaptive learning using artificial intelligence techniques of rule-based systems. USPA 20040126745 gives a system and method for improving math skills along entirely different directions. USP 6688889 gives a computerized test preparation system employing individually tailored diagnostics and remediation, which considers tasks with multiple choices for response, but follows the tradition of cognitive diagnostic assessment (CDA). None of these tools mention the ERN-related mechanism for detecting errors due to slips and impulses, or mechanisms for detecting weak tasks based upon response time analysis, or for crossing the accuracy/speed threshold before starting response time analysis, or for smoothening the response time measurement by averaging, or a waittime dependent upon the difficulty-level of the task. Most of these are software, and not independent gadgets; the few gadgets are only for kids.
These use databases of tasks and pre-stored answers, instead of using any mechanism for random simulation of tasks wherever possible, which makes them expensive to incorporate in smaller inexpensive devices. So, one needs a system which, assuming the basic knowledge, should aid in developing skills, thus going beyond the CAT. A dedicated system is required to embody all the features and methods outlined above.
OBJECTS AND SUMMARY OF THE INVENTION
It is an object of the invention to obviate the above drawbacks and provide a dedicated system and methods that provide automated and adaptive skill practice. It is another object of the present invention to improve the speed with accuracy of the user in any desired subject by implementing a scientific mechanism for detecting his or her weak tasks and giving him or her an opportunity to improve these weak tasks. It is yet another object of the present invention to improve the user's self-error-monitoring by giving him or her an opportunity to correct errors due to slips and impulses and providing him or her with detailed performance statistics related to it.
SUMMARY OF THE INVENTION
The instant invention provides an Automated Adaptive Skill Practice (AASP) electronic device (see fig 1) and method (Automated Adaptive Skill Practice). The AASP gadget and method are used for developing skills for success in job-tests and other career-oriented tests in subjects like mathematics, logical reasoning, grammar, numerical problems in science, etc. at school levels. All a user has to do is start the device.
In accordance with the invention, a processor-based apparatus for implementing AASP is provided. In the preferred embodiment the apparatus (see fig 1) consists of a processor, Random Access Memory (RAM), the AASP-ROM, the content ROM (preferably a write-protected memory), a non-volatile User Data Memory (UDM), a display and input devices, and it executes the modules mentioned below. The processor fetches the relevant steps of the methods of these modules from the AASP-ROM and executes them. It gets all objects mentioned in these modules from the content ROM or the UDM, uses the RAM for processing, updates statistics in the UDM, and displays performance statistics. In another embodiment several or all of the above components of the apparatus may occupy one chip. In another embodiment the gadget may exclude the display and input devices, but include an interface to a computer to use the latter's display and input devices. In another embodiment the gadget may exclude the display, and use an external video terminal, like a television, through an interface.
The Task Practice Module (TPM) (see fig 12) consists of the following modules in one embodiment. The Module to Simulate Random Stimulus Instances of a task (SRSI) (see fig 14) gets a task object from the content ROM, and simulates a random stimulus instance of that task using the latter's stimulus and constraint recipes. The Multiple-choice Response Simulator (see fig 16) simulates multiple choices for response by executing the correct choice generator module, and using the response recipe and the Task Distractor Code of the task. The Response Recorder (RR) (see fig 18) records the user's response, waits for a prescribed time period, then executes the ERN Recorder module (ER) (see fig 19), which gives the user a chance to correct errors due to slips and impulses, and records the status and timing of the correction, the user's response time, the error status and the erroneous response. Then the RR displays the correct choice and the erroneous response if so set, and sends the values to the TPM to update statistics.
The Task Practice Unit Module (TPUM) (see fig 11) gets a task, and the unit size N from the "Task Practice Constants" (TPC) (see fig 3) of the content ROM, executes the TPM repeatedly while the total number of correct responses is less than N, then updates statistics in the "User's Performance Statistics for Tasks" (UPS) (see fig 4) of that task in the UDM.
The Skill Practice Module (SPM) (see fig 10) gets a task from the content ROM and its "qualified" status field from its User's Initial State record (UISR) (see fig 4). If the first field of the UISR contains "True", then it executes the TPUM till the user quits, and displays the user's performance statistics. If this field contains "False", then it gets the "error-threshold %" x, the "minimal number of units for accuracy" y, and the "weakness-threshold" z from the TPC, and executes the TPUM at least y times, or till the user's average weakness and percent error in that session exceed z and x respectively. Then it records "True" in the "qualified" status field and the initial values of the user's parameters before this stage in the UISR of that task, and initializes all fields except the "best response time" of the UPS of that task to zero.
The Module to Adjust Maxerntime (AMAX) (see fig 6 - fig 7) gets the timing of correction of errors due to slips and impulses for each task with "qualified" status "True" from its UISR, and finds their maximum. If this maximum is less than the upper limit of such allowable timing, "maxerntime", then it appropriately decreases and updates "maxerntime".
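One plausible reading of the AMAX update is the following sketch; the floor value and the exact tightening rule are assumptions made for illustration, since the specification only says the ceiling is "appropriately" decreased.

```python
def adjust_maxerntime(correction_times, maxerntime, floor=0.1):
    """correction_times: observed slip-correction timings (seconds) across
    all tasks with "qualified" status True. If the slowest observed
    correction is faster than the current ceiling, tighten the ceiling
    down to it (but never below a small floor)."""
    if not correction_times:
        return maxerntime
    observed_max = max(correction_times)
    if observed_max < maxerntime:
        return max(observed_max, floor)
    return maxerntime
```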
The Module to Detect Weakest Task (DWT) sequentially takes tasks with "qualified" status "True" and compares each with the weakest of all tasks already compared by the given comparison mechanism; it executes the Weakness Comparison Module (see fig 9), which executes the WEAKNESS module to derive the weakness value of the task (see fig 8), and then the DWT outputs the task in which the user is weakest. The comparison mechanism compares the ratio of the average response time to the predetermined standard response time of one task object to that of another. In particular, in one embodiment the standard response time is taken as the average asymptote of the task, well known from the theory of learning curves.
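The ratio-based comparison can be sketched as below; the task-record field names are illustrative, not from the specification.

```python
def weakness(avg_response_time, standard_response_time):
    # weakness value: the user's average response time relative to the
    # task's predetermined standard response time (e.g. the learning-curve
    # asymptote)
    return avg_response_time / standard_response_time

def weakest_task(tasks):
    """tasks: records with 'name', 'avg_rt', 'std_rt', 'qualified'.
    Only tasks the user has qualified in are compared."""
    qualified = [t for t in tasks if t["qualified"]]
    return max(qualified, key=lambda t: weakness(t["avg_rt"], t["std_rt"]))
```

For example, if addition averages 1.0 s against a 0.8 s standard (ratio 1.25) while multiplication averages 2.0 s against a 1.9 s standard (ratio about 1.05), addition is detected as the weak spot even though it is the faster task in absolute terms.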
The Automated Adaptive Skill Practice method (AASP) (see fig 5) executes the Module AMAX, and also gives the user the weakest task for practice to strengthen it, by successively executing the DWT and the SPM modules.
BRIEF DESCRIPTIONS OF THE DRAWINGS
In the drawings accompanying the specification,
Figure 1 describes the hardware block diagram of the preferred embodiment.
Figure 2 describes the brief content of the AASP-ROM.
Figure 3 describes the brief content of the Content ROM.
Figure 4 describes the brief content of the user data memory.
Figure 5 describes the AASP.
Figure 6 describes the module to adjust "maxerntime" (AMAX).
Figure 7 describes the function used in the module AMAX.
Figure 8 describes the module to derive the weakness value of a task (WEAKNESS).
Figure 9 describes a comparison mechanism WCMP.
Figure 10 describes the Skill practice module (SPM).
Figure 11 describes the task practice unit module (TPUM).
Figure 12 describes the process diagram of the Task practice module (TPM).
Figure 13 describes the block diagram of the Task practice module (TPM).
Figure 14 describes the Module to simulate random stimulus instances of a task (SRSI).
Figure 15 describes the block diagram of the Module SRSI.
Figure 16 describes the Module to simulate multiple choices for response (SMCR).
Figure 17 describes the block diagram of the Module SMCR.
Figure 18 describes the response recorder module (RR) to record user's response.
Figure 19 describes the ERN Recorder module (ER).
DETAILED DESCRIPTION
Accordingly, the present invention provides an automated method for adaptive testing of the skill of a user, said method comprising the steps of:
(a) generating and displaying a task instance to the user;
(b) detecting a time at which the task instance is displayed to the user;
(c) detecting a first actuation and optionally a second actuation of the input device by the user, and a corresponding first actuation time and an optional second actuation time;
(d) determining the user's response to the task instance from the said first and the optional second actuation of the input device and analyzing the correctness of the user's response to the displayed task instance;
(e) evaluating the skill of the user based on the correctness of the user's response and the time period taken to provide the correct response.
In an embodiment of the present invention, the said first actuation of the input device is determined as the user's response to the task instance if the said first actuation is determined to be correct.
In another embodiment of the present invention, if the first actuation of the input device is determined as the user's response, the corresponding first actuation time is determined as the time period taken to provide the correct response.
In yet another embodiment of the present invention, if the said first actuation of the input device is determined to be incorrect, an opportunity is provided to the user to actuate the input device within a first predetermined amount of time period called "maxerntime".
In still another embodiment of the present invention, if the user actuates the input device within the predetermined amount of time period, i.e. within the "maxerntime", the actuation is treated as a second actuation of the input device.
In one more embodiment of the present invention, the said second actuation of the input device by the user is determined as the user's response to the task instance if the said second actuation is determined to be correct.
In one another embodiment of the present invention, if the second actuation of the input device is determined as the user's response, the corresponding second actuation time is determined as the time period taken to provide the correct response and the time difference between the first actuation time and the second actuation time called the "erntime" is determined.
In an embodiment, the method of the present invention further comprises displaying the first and/or the second actuation of the input device and the correct answer to the task instance on a display means in a distinguishable manner.
In another embodiment, the method of the present invention further comprises repeating the steps (a) to (e) using multiple task instances of the same type of task object.
In yet another embodiment, the method of the present invention further comprises (a) determining the following: (i) the number of times the first actuation of the input device is determined to be incorrect; (ii) the number of times such a second actuation of the input device is determined as a correct response to the task instance and (iii) a total "erntime" for all the second actuations of the input device received from the user, and (b) displaying to the user the said determined values for supporting the user to improve the skills.
In still another embodiment of the present invention, prior to repeating steps (a) to (e), the user is provided a predetermined amount of time period called "waittime" for relaxing.
In one more embodiment, the method of the present invention further comprises determining an average time period taken to provide the correct response for multiple task instances of the same type of task object, thereby eliminating chance fluctuations.
In one another embodiment, the method of the present invention further comprises repeating steps (a) to (e) at least "m" number of times or till the "percentage correct response" of the user is below a predetermined value "n".
In a further embodiment of the present invention, based on the average time period taken to provide the correct response for multiple task instances of the same type of task object, a weakness value for the type of task object is determined.
In an embodiment, the method of the present invention further comprises repeating steps (a) to (e) for the task object for which the weakness value is determined to be above a first threshold value.
In another embodiment, the method of the present invention further comprises repeating the steps (a) to (e) using task instances of different types of task object.
In yet another embodiment, the method of the present invention further comprises determining an average time for each type of task object, determining the weakness for each type of task object and determining the task object having the highest weakness.
In still another embodiment, the method of the present invention further comprises repeating steps (a) to (e) for the task object having the highest weakness.
In a further embodiment of the present invention, the value of the first predetermined amount of time period, i.e. "maxerntime", is set to decrease when the steps (a) to (e) are being repeated using multiple task instances derived from the same type of task object.
In a still further embodiment of the present invention, the said task instance is randomly selected from a memory device having a plurality of task instances, or the said task instance is randomly generated/simulated from a single task object, thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
In an embodiment of the present invention, the said task instance is selected from the group consisting of (a) multiple-choice based task instances and (b) non-multiple-choice based task instances.
In another embodiment of the present invention, each of the said multiple-choice based task instances has at least one correct answer, and wherein each correct answer has a single value or multiple values.
In yet another embodiment of the present invention, each of the said multiple-choice based task instances and/or non-multiple-choice based task instances is selected from subjects selected from the group consisting of (a) mathematics and numerical problems in science and (b) non-mathematical problems.
In still another embodiment of the present invention, the said task instances in subjects selected from the group consisting of mathematics and numerical problems in Science are randomly generated/simulated from a single task object, thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
In one more embodiment of the present invention, the task object comprises at least one variable and a provision for substituting the said at least one variable by a numerical number.
In one another embodiment of the present invention, the multiple-choice based task instance comprises a stimulus and a corresponding multiple-choice response.
In a further embodiment of the present invention, the stimulus is generated by:
(a) obtaining a task object having at least one variable; and
(b) generating a numerical number randomly and substituting the said randomly generated numerical number in place of the said at least one variable contained in the said task object thereby generating the stimulus.
In a further more embodiment of the present invention, in step (b), the said randomly simulated numerical number is simulated based on a predetermined constraint.
In another embodiment of the present invention, the said predetermined constraint is a variable constraint.
In yet another embodiment of the present invention, the multiple choice response for a stimulus is generated by:
(a) processing the stimulus using a mathematical tool kit to obtain one correct answer for the said stimulus;
(b) generating a "N" sized matrix and feeding the said correct answer thus obtained as a first input to the said matrix;
(c) generating "N-1" incorrect answers for the said stimulus based on the said correct answer and feeding the same to the "N" sized matrix as further inputs to complete the "N" sized matrix; and
(d) performing random permutation of the completed "N" sized matrix to obtain the multiple choice response for the stimulus.
In still another embodiment of the present invention, "N" indicates the number of multiple choices to be simulated for the stimulus.
In one more embodiment of the present invention, in step (c), the "N-1" incorrect answers are generated based on (a) a margin of error and/or (b) details regarding sensitive digits of the said correct answer and/or (c) a task distractor code that uses a distractor-generating method for generating "N-1" distractors.
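The stimulus-generation and multiple-choice-generation steps described above can be sketched in Python. This is only an illustrative sketch: the names `make_stimulus` and `make_choices`, the `{x}` template notation, and the simple margin-of-error distractor rule are assumptions made here for illustration, not the embodiment's own identifiers or its full distractor method.

```python
import random

def make_stimulus(template, low=2, high=99):
    # Obtain a task object (here a text template with a variable slot
    # "{x}") and substitute a randomly generated number for the
    # variable, yielding the stimulus.
    value = random.randint(low, high)
    return template.format(x=value), value

def make_choices(correct, n=4, margin=0.1):
    # Seed an "N" sized list with the correct answer, generate N-1
    # nearby incorrect answers (a crude margin-of-error rule), then
    # randomly permute the completed list.
    choices = [correct]
    while len(choices) < n:
        delta = max(1, round(abs(correct) * margin))
        cand = correct + random.choice([-2, -1, 1, 2]) * delta
        if cand not in choices:     # keep all N entries distinct
            choices.append(cand)
    random.shuffle(choices)
    return choices

stimulus, x = make_stimulus("What is {x} + {x}?")
options = make_choices(2 * x, n=4)
```

A working distractor generator would additionally apply the sensitive-digit and Task Distractor Code rules discussed elsewhere in this specification.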
The present invention further provides an automated apparatus for adaptive testing of skill of a user, said apparatus comprising:
(a) a means for generating and displaying task instance to the user; (b) a means for detecting a time at which the task instance is displayed to the user;
(c) a means for detecting a first actuation and optionally a means for detecting a second actuation of the input device by the user and a means for detecting a corresponding first actuation time and a means for detecting a corresponding second actuation time; (d) a means for determining the user's response to task instance from the said first and the optional second actuation of the input device and a means for analyzing the correctness of the user's response to the displayed task instance;
(e) a means for evaluating the skill of the user based on the correctness of the user's response and a time period taken to provide the correct response. In an embodiment of the present invention, the said means for determining the user's response to the task instance determines the said first actuation of the input device as the user's response to the task instance if the said first actuation is determined to be correct. In another embodiment of the present invention, if the first actuation of the input device is determined as the user's response, the corresponding first actuation time is determined as the time period taken to provide the correct response. In yet another embodiment of the present invention, if the said first actuation of the input device is determined to be incorrect, an opportunity is provided to the user to actuate the input device within a first predetermined amount of time period called "maxerntime". In still another embodiment of the present invention, if the user actuates the input device within the predetermined amount of time period i.e. within the "maxerntime", the actuation is treated as a second actuation of the input device. In one more embodiment of the present invention, the said second actuation of the input device by the user is determined as the user's response to the task instance if the said second actuation is determined to be correct.
In one another embodiment of the present invention, if the second actuation of the input device is determined as the user's response, the corresponding second actuation time is determined as the time period taken to provide the correct response and the time difference between the first actuation time and the second actuation time called the "erntime" is determined.
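As an illustrative sketch, the first/second-actuation logic of the preceding paragraphs can be written as follows; the function name `score_response` and the use of seconds as the time unit are assumptions made here for illustration.

```python
def score_response(first_ok, t_first, second_ok=False, t_second=None,
                   maxerntime=2.0):
    # Returns (correct, response_time, erntime). Times are measured
    # in seconds from the moment the task instance is displayed.
    if first_ok:
        # First actuation is correct: it is the user's response and
        # the first actuation time is the time taken.
        return True, t_first, None
    if (second_ok and t_second is not None
            and (t_second - t_first) <= maxerntime):
        # A correct second actuation within "maxerntime" is accepted;
        # the correction delay t_second - t_first is the "erntime".
        return True, t_second, t_second - t_first
    return False, None, None
```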
In an embodiment, the apparatus of the present invention further comprises a display means for displaying the first and/or the second actuation of the input device and the correct answer to the task instance in a distinguishable manner. In another embodiment, the apparatus of the present invention further comprises a means for generating multiple task instances of the same type of task object. In yet another embodiment, the apparatus of the present invention further comprises (i) a means for determining the number of times the first actuation of the input device is determined to be incorrect; (ii) a means for determining the number of times such a second actuation of the input device is determined as a correct response to the task instance and (iii) a means for determining a total "erntime" for all the second actuations of the input device received from the user, and (iv) a means for displaying to the user the said determined values for supporting the user to improve the skills. In a further embodiment of the present invention, prior to repeating steps (a) to (e), the user is provided a predetermined amount of time period called wait time for relaxing.
In another embodiment, the apparatus of the present invention further comprises a means for determining an average time period taken to provide the correct response for multiple task instances of the same type of task object thereby eliminating chance fluctuations.
In yet another embodiment, the apparatus of the present invention further comprises a means for displaying task instances at least "m" number of times or till "percentage correct response" of the user is below a predetermined value "n", wherein the percentage correct response is determined by a percentage calculating means. In still another embodiment, the apparatus of the present invention further comprises a means for calculating weakness value for the type of task object based on the average time period taken to provide the correct response for multiple task instances of the same type of task object.
In one more embodiment, the apparatus of the present invention further comprises a means for displaying task instances for the task object for which the weakness value is determined to be above a first threshold value.
In one another embodiment, the apparatus of the present invention further comprises a storage device for storing different types of task objects.
In another embodiment, the apparatus of the present invention further comprises a means for determining an average time for each type of task object, a means for determining the weakness for each type of task object and a means for determining the task object having the highest weakness.
In yet another embodiment, the apparatus of the present invention further comprises a means for displaying task instances for the task object having the highest weakness. In still another embodiment, the apparatus of the present invention further comprises a means for decreasing the value of the first predetermined amount of time period i.e. "maxerntime".
In a further embodiment of the present invention, the said task instance is randomly selected from a memory device having a plurality of task instances, or the said task instance is randomly generated / simulated from a single task object using a task instance generating means, thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
In a further more embodiment of the present invention, the said task instance is selected from the group consisting of (a) multiple choice based task instances and (b) non-multiple choice based task instances. In an embodiment of the present invention, each of the said multiple choice based task instances has at least one correct answer, wherein each correct answer has either a single value or multiple values. In another embodiment of the present invention, each of the said multiple choice based task instances and/or the non-multiple choice based task instances is selected from subjects selected from the group consisting of (a) mathematics and numerical problems in science and (b) non-mathematical based problems. In yet another embodiment of the present invention, the said task instance in subjects selected from the group consisting of mathematics and numerical problems in Science is randomly generated / simulated from a single task object using the task instance generating means, thereby eliminating the need to store multiple task instances of the same type of task object on a memory device. In still another embodiment of the present invention, the task object comprises at least one variable and a provision for substituting the said at least one variable by a numerical number.
In one more embodiment of the present invention, the task instance generating means is a multiple choice based task instance generating means which comprises a means for generating a stimulus and a means for generating a corresponding multiple choice response.
In one another embodiment of the present invention, the means for generating the stimulus comprises: (a) a means for obtaining a task object having at least one variable; and
(b) a means for generating a numerical number randomly and substituting the said randomly generated numerical number in place of the said at least one variable contained in the said task object thereby generating the stimulus.
In a further embodiment of the present invention, the means for generating a numerical number comprises a means for generating a random number and a means for placing a predetermined constraint on the selected random number.
In a further more embodiment of the present invention, the said predetermined constraint is a variable constraint.
In one more embodiment of the present invention, the means for generating multiple choice response for a stimulus comprises:
(a) a mathematical tool kit for processing the stimulus to obtain one correct answer for the said stimulus;
(b) a means for generating a "N" sized matrix and feeding the said correct answer thus obtained as a first input to the said matrix;
(c) a means for generating "N-1" incorrect answers for the said stimulus based on the said correct answer and feeding the same to the "N" sized matrix as further inputs to complete the "N" sized matrix; and
(d) a means for obtaining random permutation of the completed "N" sized matrix thereby generating the multiple choice response for the stimulus. In one another embodiment of the present invention, "N" indicates the number of multiple choices to be simulated for the stimulus.
In a further embodiment of the present invention, the "N-1" incorrect answers are generated based on (a) a margin of error and/or (b) details regarding sensitive digits of the said correct answer and/or (c) a task distractor code that uses a distractor-generating method for generating "N-1" distractors.
The present invention further provides an automated method for adaptive testing of skill of a user, said method comprising the steps: (a) displaying a task instance to the user;
(b) detecting a response from the user to the displayed task instance and a time at which the user responds to the task instance;
(c) analyzing the correctness of the user's response to the displayed task instance; (d) repeating steps (a) to (c) using task instance of at least two distinct types of task objects;
(e) evaluating a weakness value for each of the said at least two types of task objects based on the correctness of the user's response and a time period taken to provide the correct response; and (f) determining the task object for which the weakness value is highest.
In an embodiment of the present invention, in step (d), the steps (a) to (c) are repeated preferably using multiple task instances of the same type of task object and an average time period taken by the user to provide the correct response for the multiple task instances of the same type of task object is determined thereby eliminating chance fluctuations.
In another embodiment of the present invention, the weakness value for a particular task object is determined based on the average time period taken by the user to provide correct responses for the multiple task instances of that type of task object. In yet another embodiment of the present invention, in step (d), prior to repeating the steps (a) to (c), the user is provided a predetermined amount of time period called wait time for relaxing.
In still another embodiment of the present invention, in step (d), the steps (a) to (c) are repeated at least "m" number of times or till "percentage correct response" of the user is below a predetermined value "n". In an embodiment, the method of the present invention further comprises repeating steps (a) to (c) for the task object for which the weakness value is determined to be above a first threshold value.
In another embodiment, the method of the present invention further comprises repeating steps (a) to (c) for the task object for which the weakness value is determined in step (f) as being the highest.
In yet another embodiment, the step (a) further comprises determining a time at which the task instance is displayed to the user.
In still another embodiment of the present invention, in step (b), detecting a response from the user to the displayed task instance preferably comprises detecting a first actuation and optionally a second actuation of the input device by the user and a corresponding first actuation time and an optional second actuation time. In a further embodiment of the present invention, the response from the user to the displayed task instance is detected preferably from the said first and the optional second actuation of the input device.
In a further more embodiment of the present invention, the said first actuation of the input device is determined as the user's response to the task instance if the said first actuation is determined to be correct.
In an embodiment of the present invention, if the first actuation of the input device is determined as the user's response, the corresponding first actuation time is determined as the time period taken to provide the correct response. In another embodiment of the present invention, if the said first actuation of the input device is determined to be incorrect, an opportunity is provided to the user to actuate the input device within a first predetermined amount of time period called "maxerntime". In yet another embodiment of the present invention, if the user actuates the input device within the predetermined amount of time period i.e. within the "maxerntime", the actuation is treated as a second actuation of the input device. In still another embodiment of the present invention, the said second actuation of the input device by the user is determined as the user's response to the task instance if the said second actuation is determined to be correct. In one more embodiment of the present invention, if the second actuation of the input device is determined as the user's response, the corresponding second actuation time is determined as the time period taken to provide the correct response and the time difference between the first actuation time and the second actuation time called the "erntime" is determined. In one another embodiment, the method of the present invention further comprises displaying the first and/or the second actuation of the input device and the correct answer to the task instance on a display means in a distinguishable manner. 
In an embodiment, the method of the present invention further comprises (a) determining the following: (i) the number of times the first actuation of the input device is determined to be incorrect; (ii) the number of times such a second actuation of the input device is determined as a correct response to the task instance and (iii) a total "erntime" for all the second actuations of the input device received from the user, and (b) displaying to the user the said determined values for supporting the user to improve the skills. In another embodiment of the present invention, the value of the first predetermined amount of time period i.e. "maxerntime" is set to decrease when the steps (a) to (c) are being repeated using multiple task instances derived from a same type of task object.
In yet another embodiment of the present invention, the said task instance being displayed to the user is randomly selected from a memory device having a plurality of task instances, or is randomly generated / simulated from a single task object, thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
In still another embodiment of the present invention, the said task instance is selected from the group consisting of (a) multiple choice based task instances and (b) non- multiple choice based task instances.
In one more embodiment of the present invention, each of the said multiple choice based task instances has at least one correct answer, wherein each correct answer has either a single value or multiple values. In one another embodiment of the present invention, each of the said multiple choice based task instances and/or the non-multiple choice based task instances is selected from subjects selected from the group consisting of (a) mathematics and numerical problems in science and (b) non-mathematical based problems.
In a further embodiment of the present invention, the said task instance in subjects selected from the group consisting of mathematics and numerical problems in Science is randomly generated / simulated from a single task object, thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
In a further more embodiment of the present invention, the task object comprises at least one variable and a provision for substituting the said at least one variable by a numerical number.
In an embodiment of the present invention, the multiple choice based task instance comprises a stimulus and a corresponding multiple choice response.
In another embodiment of the present invention, the stimulus is generated by: (a) obtaining a task object having at least one variable; and
(b) generating a numerical number randomly and substituting the said randomly generated numerical number in place of the said at least one variable contained in the said task object thereby generating the stimulus.
In yet another embodiment of the present invention, in step (b), the said randomly simulated numerical number is simulated based on a predetermined constraint.
In still another embodiment of the present invention, the said predetermined constraint is a variable constraint. In a further embodiment of the present invention, the multiple choice response for a stimulus is generated by:
(a) processing the stimulus using a mathematical tool kit to obtain one correct answer for the said stimulus; (b) generating a "N" sized matrix and feeding the said correct answer thus obtained as a first input to the said matrix;
(c) generating "N-I" incorrect answers for the said stimulus based on the said correct answer and feeding the same to the "N" sized matrix as further inputs to complete the "N" sized matrix; and (d) performing random permutation of the completed "N" sized matrix to obtain the multiple choice response for the stimulus.
In one more embodiment of the present invention, "N" indicates the number of multiple choices to be simulated for the stimulus.
In a further more embodiment of the present invention, in step (c), the "N-1" incorrect answers are generated based on (a) a margin of error and/or (b) details regarding sensitive digits of the said correct answer and/or (c) a task distractor code that uses a distractor-generating method for generating "N-1" distractors.
Explanations on Skill Improvement Modules related to Learning Curves

Since response time (RT) alone is not sufficient to compare a user's speed in an easier task to that in a more difficult task, we need a new mechanism to compare the user's speed in two distinct tasks. One way is to set some kind of predetermined Standard Response Time (SRT) for each task in the apparatus during the manufacturing process, and then define the user's weakness in a task as the ratio RT/SRT, where RT is his/her average response time. It signifies how many times the user is slower in comparison to the standard. Since the RT of a user in a more difficult task is generally greater than his/her RT in an easier task, the SRT should follow the same pattern. It is then realistic to assume that the user's actual weakness in a task is proportional to the user's weakness parameter in that task. To determine the SRT, we use the theory of learning curves. From the theory of three-parameter learning curves (See [RC], [DRSR]), the response time RT = a + b·f(n, c), where a, b, c are constants for that user and that task, such that 0 < c < 1, and the function f(n, c) is n^(-c) for the Power Learning Curve and c^n for the Exponential Learning Curve. The parameters b and c may vary depending upon which curve is used. The constant "a" is called the asymptote of the learning curve; it is the user's response time after practically infinite practice. In the kind of tasks considered here, it is not difficult to see that the user's asymptote in a more difficult task is not less than his asymptote in an easier task, and if one breaks a task into two subtasks done one immediately after the other, then one's asymptote in the combined task will be the sum of one's asymptotes in the two subtasks. So, the asymptote can be taken as the SRT. For this, we determine it as the average asymptote of several test-users in that task, determined before manufacturing.
These test-users are selected in such a way that they can all qualify in the job-test considered in the given embodiment of the invention with almost 100% marks, and they all practice all the tasks available in the apparatus a sufficient number of times to permit computation of the asymptote of each test-user in each task. We call this average the task-asymptote, and for any user the user's weakness in a task is the ratio of his or her current average response time to the task asymptote, i.e. weakness = RT/asymptote. In the present embodiment we selected the exponential learning curve to determine the asymptote. Other embodiments may select other types of learning curves. During the manufacturing stage the task header of each task is filled with the value called the task-asymptote. The weakness signifies how many times the user is slower in comparison to the average peak possibility of the test-user group. Now a user's speed in two tasks can be compared by comparing his or her weakness in these tasks - the higher the weakness in a task, the weaker the user in that task.
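The weakness computation can be sketched as follows; the exponential-curve fit itself (estimating a, b, c from observed response times) is omitted, and the function names are illustrative assumptions.

```python
def exp_learning_curve(n, a, b, c):
    # Three-parameter exponential learning curve: RT = a + b * c**n,
    # with 0 < c < 1, so RT decreases toward the asymptote "a" as
    # the number of practice trials n grows.
    return a + b * c ** n

def weakness(avg_rt, task_asymptote):
    # weakness = RT / asymptote: how many times slower the user is
    # than the task-asymptote stored in the task header.
    return avg_rt / task_asymptote
```

For example, a user averaging 14 seconds on a task whose task-asymptote is 2 seconds has weakness 7, exactly at the weakness-threshold used in the present embodiment.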
Explanations on Modules related to Correction of Errors due to Slips and Impulses
Since it would be too costly, and of little use to measure the time duration after which the Error-related Negativity of a user peaks during errors due to slips and impulses in a speedy response problem-solving, we have to find an alternate mechanism for this purpose. If user's first response to a task is incorrect, he is given a chance to respond once more without providing him with any information regarding his error-status. For this the system waits for a time period called the "maxerntime", and if he responds correctly during the second time within that time period, then the time duration between the first and the second response is defined as the erntime (in that task instance), his response is considered correct, the response time is counted as the total time of the first and the second attempts, and his performance related to corrections of errors due to slips and impulses is measured by a quantity total ern, which increases by one each time he corrects one such error. Since in the kind of tasks under considerations the user is generally assumed to know how to solve the problem and it is only the skill improvement that he is concerned with, we can assume that all his errors are errors in haste, and most of these errors are due to slips and impulses. So, the ratio of the total ern to the total errors accurately gives his or her performance towards corrections of errors due to slips and impulses; it should decrease with time. In addition, after errors not corrected within the time interval limited by "maxerntime", the system displays user's response and the correct response in distinguishable manner, for example in two distinct fonts. It improves the learning in the initial phase. Initially we assume higher value for the maxerntime to accommodate all kind of users, store it in the apparatus during the manufacturing process, and call it the "initial maxerntime". Initially this value is used in the place of maxerntime. 
Since it is known that eventually the time taken to correct errors due to slips and impulses should come down to 130 - 150 milliseconds (See [SC]; we added the time to press the key), there should be some mechanism so that this maxerntime decreases with practice. For this, each time the device is started, it executes the Module to Adjust Maxerntime, which derives the maximum of all average current erntime of all admissible tasks, and sets the new maxerntime as a suitable decreasing function of this maximum and the current maxerntime. In this way the user has more and more challenge to decrease his time to correct errors due to slips and impulses. The quantities "average current erntime" and "admissible tasks", and the above function shall be defined later at appropriate place.
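Since the exact decreasing function is defined later in the specification, the sketch below uses one plausible choice (halving the gap toward the worst average current erntime, with a floor near the 150 millisecond limit); it is an assumption for illustration only.

```python
def adjust_maxerntime(current_max, avg_erntimes, floor=0.15):
    # avg_erntimes: average current erntime (seconds) of each
    # admissible task. Move maxerntime toward the largest of these,
    # never increasing it and never going below the ~150 ms that
    # slip/impulse corrections eventually require.
    worst = max(avg_erntimes) if avg_erntimes else current_max
    new_max = (current_max + worst) / 2.0
    return max(floor, min(new_max, current_max))
```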
The AASP Overall
On starting the device for the first time the system initializes by copying necessary data into the user data memory and asks the user to set up his or her preferences. On each subsequent start of the device (See fig 5), the "Adjust Maxerntime" submodule executes the module to adjust Maxerntime (See fig 6).
Then the "Display Stats?" submodule prompts the user if he or she wants to see detailed statistics, and if the user so desires then the submodule "Display Stats" gives the user an opportunity to select a task, gives detailed performance statistics corresponding to that task, prompts the user if he or she wants to quit, and if he/she doesn't want to quit then it once more repeats the "Display Stats?" submodule. If the user chooses not to see detailed statistics, the submodule "Present new tasks?" prompts the user if he or she wants to practice new task or to practice previously practiced task. In the first case in the "User" mode, it presents the user list of new tasks to practice, if there are any new task left. In the "Auto" mode, the device automatically selects the new task, if there are any. If there are no new tasks, it displays such message and presents the option to practice already practiced tasks. New task is practiced by executing the Skill practice module (SPM) (See fig 10) till the user achieves the predetermined speed and accuracy level, where speed is characterized by the weakness parameter (See fig 8). Then it displays the statistics for that task, and once more prompts the user if he or she wants to practice new task or already practiced tasks.
If the user selects to practice previously practiced tasks, the "Get ID of weakest task" submodule executes the module to Detect the Weakest Task (DWT), which uses weakness derivation module WEAKNESS (See fig 8) to derive the weakest task, and then lets the user practice the weakest task. Once the user practices the weakest task by SPM, and wants a change of task, it once more presents him/her the weakest task at that moment. The user continues till he or she wants. In another embodiment, the user may have the option to select one of the previously practiced tasks through a menu, bypassing the module DWT altogether. Detailed definitions and working of the modules mentioned above shall follow later.
The AASP Content Objects and Modules
AASP content modules are embedded in the AASP-ROM, and objects are embedded in the content-ROM. Figure 1 shows the constructional features of an embodiment of the invention. In the preferred embodiment the apparatus consists of a processor, Random Access Memory (RAM), the AASP-ROM, the content ROM (preferably write-protected), a non-volatile User Data Memory (UDM), a display and an input device, and it executes all the modules of the AASP-ROM. The processor fetches the methods contained in these modules from the AASP-ROM and executes them. It gets all objects mentioned in these modules from the content ROM or the UDM, uses the RAM for processing, updates statistics into the UDM, and displays statistics. In another embodiment several or all of the above components of the apparatus may occupy one chip. In another embodiment the gadget may exclude the display and input devices, but include an interface to a computer to use the latter's display and input devices. In another embodiment the gadget may exclude the display, and use an external video terminal, like a television, through an interface.
The Content ROM stores the Task objects, "Task Practice Constants", and "Initial System's Global Preferences" (See fig 3). Its methods are to get any named object or its components into the RAM. These methods lie in the AASP utility module (See fig 2) of the AASP-ROM. Basic system methods of the AASP ROM contain, apart from those described below, methods to format and display strings, and a method to record user responses with millisecond precision. In the place of the processor, RAM and the AASP ROM, we can use similar components of the Philips LPC 2104 chip. An Atmel Dataflash or ST Micro's flash can be used for the content ROM, whose first part can be write-protected and whose second part can be made read/write for the user data memory. The AASP utility has a method to search tasks by their unique codes called the Task ID. "Task Practice Constants" (TPC) contains constants used in the Task practice module. These are "waittime", "initial maxerntime", "unit size", "error-threshold", "Minimal number of units for accuracy", the initial default "group", and the "weakness-threshold". The user can change these settings. These constants may have different values in other embodiments of the invention, and should not be considered restrictive. The "Group" field shows the initial factory settings for the current group of tasks selected for practice; all subsequent selection of practice tasks is limited to tasks within that group. In the present embodiment its value is the "whole" group, where "whole" means all the tasks. Below we define the other constants.
The first field of the TPC is a table of pairs of values, whose first field gives the total number of pairs of values in the record, and whose second through last fields are the pairs of asymptote and waittime corresponding to that asymptote. To get the waittime after the completion of a task, i.e. the time duration the device waits after receiving the user's first correct response or the second response on a task stimulus, the device gets the task asymptote from the task-header, finds out from the above table which asymptote value is equal to or less than the task asymptote, and gets the waittime corresponding to that asymptote in the pair. The values in the present embodiment are 1, 0, and 4 seconds, i.e. the record contains only one pair. Other embodiments can have more pairs. Depending upon the task, the values of the waittime are between 0.3 and 8 seconds. Since it depends upon the task asymptote, which itself depends upon the difficulty level of the task, the "waittime" naturally depends upon the difficulty level of the task. This is desirable since the time to relax after the completion of a task should be almost proportional to the difficulty level of the task. In other embodiments one can take any Standard Response Time in the place of the task-asymptote.
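The waittime lookup described above can be sketched as follows; the tuple-based table layout is an illustrative assumption, with the present embodiment's single (asymptote, waittime) pair shown as the default.

```python
def get_waittime(task_asymptote, table=((0.0, 4.0),)):
    # "table" holds (asymptote, waittime) pairs. Pick the waittime
    # of the largest table asymptote that is equal to or less than
    # the task's asymptote (taken from the task-header).
    best = None
    for asym, wait in table:
        if asym <= task_asymptote and (best is None or asym >= best[0]):
            best = (asym, wait)
    return best[1] if best else 0.0
```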
Since a user can increase speed at the cost of accuracy, to avoid problems with incorrect trials (See page 5 of [DRSR]) in response time (RT) measurement, and to detect the weakest task, the apparatus uses the non-negative numeric values "Error-threshold" and "Weakness-threshold". A user achieves "qualified" status in a task the moment his or her average percent error and average weakness in a practice-session of that task in a stretch do not exceed the error-threshold and weakness-threshold respectively. In the present embodiment their values are 10% and 7 respectively; this means that in order to qualify in a task, the user must achieve at most a 10% error-rate and his average response time should not exceed 7 times the predetermined standard response time, which in the present embodiment is the average task asymptote. It is needless to say that the weakness-threshold depends upon the test-user group, and can vary with other embodiments. The "Qualified" field shows that the user has already crossed the requisite error/weakness thresholds for later response time (RT) analysis. Then the device updates the first field of the User initial state record for tasks to "true" (See fig 4). "Minimal number of units for accuracy" is the least number of units a task is to be practiced in a stretch to be eligible for verification against the error-threshold and weakness-threshold as above; in the present embodiment its value is 2 units per task. In case both the threshold values are zero, the user always has a "qualified" status after practicing the least number of units for accuracy. "Initial maxerntime" is the initial value of the maximum time given to the user for correcting errors due to slips and impulses. On system's initialization, it is recorded in the system's state. In embodiments without the ERN Recorder module, there is no "Initial maxerntime". Its value in the present embodiment is 2000 milliseconds.
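The qualification test can be sketched as follows, using the present embodiment's values as defaults; the function name is an illustrative assumption.

```python
def is_qualified(pct_error, avg_weakness, units_done,
                 error_threshold=10.0, weakness_threshold=7.0,
                 min_units=2):
    # At least the "Minimal number of units for accuracy" must be
    # practiced in a stretch before the thresholds are checked.
    if units_done < min_units:
        return False
    if error_threshold == 0 and weakness_threshold == 0:
        # Zero thresholds: always "qualified" after the minimum units.
        return True
    return (pct_error <= error_threshold
            and avg_weakness <= weakness_threshold)
```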
The "Unit size" is the number of times one needs to practice a task correctly in a sequence so that one's average statistics for all task instances in the sequence are recorded. It is required for smoothening the response time (RT) measurement so as to minimize chance fluctuations (See [DRSR]). Its value in the present embodiment is 3. The Initial System's Global Preferences are "showerr" and "auto/user" mode. In the present embodiment the first is set to "True", which means that after an error not corrected within the time interval limited by "maxerntime", the system displays the user's response and the correct response in a distinguishable manner. The second preference is set to "auto" mode by default. The user has the option to change these settings. The changed settings are stored in the global user setup preferences (See fig 4) and can be changed during system's initialization.
A Task object consists of the task-header, the stimulus recipe, the constraint recipe, and the response recipe. It is used to generate/simulate hundreds of random task stimuli and multiple choices for response. The recipes contain instructions for such simulations. The purpose of simulation, whenever possible, is to replace a database of hundreds of task instances with this one task object for each type of task.
The task header consists of the "task name" field for user's selection, the "task number type" to indicate whether the numeric values displayed are real numbers, integers, or fractions of integers, the task "asymptote", the "display type", the "response type", and the "Task Type". The asymptote is set during manufacturing. The display type indicates the display types for presentation, depending upon the number of multiple choices of response, two-column or four-column positioning of multiple choices on screen, etc. In scientific/technical subjects multiple choices differ from the correct choice by error terms. The response type is a table which contains as subfields the number of multiple choices for response, the position of sensitive digits of the correct choice for adding errors, the size of errors used to generate incorrect choices, and the "Task Distractor Code" to generate good working distractors. As known from the theory of Random Number Generators, sensitive digits of the value of a function are those digits which are most likely to change with any change in the input digits. The Task Distractor Code contains certain distractor codes, and the system switches to the corresponding distractor generator module to generate working distractors. The code "0" means that no special method is required. The code "1" means that the first two of the incorrect random responses generated by the module SMCR are modified to their nearest values distinct from the correct choice so that the outputs give the same remainder after division by 3 as the correct choice, and the third and fourth incorrect choices are modified to their nearest values distinct from the correct choice so that the outputs give the same remainders after division by 7 and 11 respectively as the correct choice.
These distractors almost always work when the task response recipe is a polynomial expression containing only the operations of addition, subtraction, and multiplication, and the correct choice is at least a 3-digit number. Finally, the Boolean field "task type" is set to "False" during manufacturing if the task is too complicated to be simulated at random. Then task instances are not simulated, but randomly selected from a pre-stored location. Otherwise it is set to "True".
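One plausible reading of the code-"1" rule above can be sketched as follows (the nearest-value search and the function names are our assumptions; the moduli 3, 3, 7, 11 follow the text):

```python
def nearest_with_same_remainder(value, correct, modulus):
    """Return the integer nearest to `value`, distinct from `correct`,
    whose remainder modulo `modulus` equals that of `correct`."""
    target = correct % modulus
    offset = 0
    while True:
        for candidate in (value - offset, value + offset):
            if candidate % modulus == target and candidate != correct:
                return candidate
        offset += 1

def make_code1_distractors(raw_incorrect, correct):
    """Apply the code-"1" rule: the first two raw incorrect choices are
    aligned with the correct choice modulo 3, the third modulo 7, and
    the fourth modulo 11."""
    moduli = [3, 3, 7, 11]
    return [nearest_with_same_remainder(v, correct, m)
            for v, m in zip(raw_incorrect, moduli)]
```

The point of the rule is that casting-out-threes (or sevens, elevens) then fails to eliminate these distractors, since each shares the correct choice's remainder.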
The task stimulus object consists of the stimulus recipe and the constraint recipe. The stimulus recipe contains the statement of the visual stimulus, with slots for variables to initialize hundreds of random stimulus instances, i.e. task problem statement instances.
When the task type is "True", the constraint recipe is a string containing the recipe for simulation of a task instance (only in the case of mathematical and numerical tasks). It contains the boundary conditions for all independent variables to be simulated randomly within those boundaries, and the necessary relations and arithmetic expressions for simulating dependent variables. For example, to simulate the task instance "To find the area of a triangle with sides a, b, and c", we take the two smaller sides a and b as independent variables within a boundary (2 < a, b < 10), and take the third, larger side c to be dependent, with the variable boundary condition (max(a, b) < c < a + b). First we simulate two random numbers a and b within the boundaries (2, 10), then we compute the maximum of a and b, and finally we simulate the third random number c within the boundary (max(a, b) < c < a + b). Similar recipes exist for other tasks. If the Task Type flag is "False", then the constraint recipe contains the total number of task instances stored in the system's content ROM, followed by the starting address of the first task instance values. In this case, the task instance values consist of the requisite number of values required in the task stimulus recipe, followed by the correct response and then by the requisite number of distractors.
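The triangle recipe above can be sketched directly (an illustrative sketch; the function name and seeding are ours, the boundaries follow the text):

```python
import random

def simulate_triangle_sides(seed=None):
    """Simulate sides a, b, c of a triangle as described above:
    a and b are independent variables in (2, 10); c is dependent,
    drawn from (max(a, b), a + b) so the triangle inequality holds."""
    rng = random.Random(seed)
    a = rng.uniform(2, 10)
    b = rng.uniform(2, 10)
    c = rng.uniform(max(a, b), a + b)   # dependent variable, simulated last
    return a, b, c
```

Note that the ordering matters: the dependent variable c can only be simulated after a and b have been initialized, exactly as the recipe prescribes.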
If the task type is "True", the response recipe is a recipe to get the correct choice. It is an arithmetic expression in postfix form, in which values of the above stimulus variables are substituted, and the resulting expression is evaluated by the expression evaluator, a traditional method used in scientific calculators. This gives the correct answer to the task stimulus instance. If the task type is "False", the response recipe is an empty string.
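A minimal postfix evaluator of the kind referred to above might look like this (an illustrative sketch supporting only the four basic binary operators, not the patent's full EVAL):

```python
def eval_postfix(tokens, env):
    """Evaluate a postfix token list, substituting variable values from
    the mapping `env` (standing in for the global RAM locations)."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()           # right operand was pushed last
            a = stack.pop()
            stack.append(ops[tok](a, b))
        elif tok in env:
            stack.append(env[tok])    # substitute a simulated variable value
        else:
            stack.append(float(tok))  # numeric literal
    return stack.pop()
```

Because the recipes are verified in advance, no parser or error handling is needed, matching the "evaluator without parser" assumption in the text.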
The User Data Object and Methods
User data objects are "Global user setup preferences" (GUSP), "System's state", "User's Initial state record for tasks" (UISR), and "User's performance statistics for tasks" (UPS). These reside in the User Data Memory. Its methods get values of any field of the above objects into RAM, and update any value in them from a corresponding value in RAM. GUSP has three fields - "showerr", "Auto/user" mode, and the "current group", which shows the current group of tasks selected for practice. All subsequent selection of practice tasks is limited to tasks within that group. The initial values of the first two are copied from the Initial system's global preferences, and the initial value of the "current group" is copied from the "Group" field of the Task Practice Constants. These fields are set by the user during system's setup, and can be modified by invoking the same.
The first field of the System's State is the "maxerntime". The system sets it during a practice session (See fig 6). Its second field, "System's initialization status", shows if the user has already initialized the system. The first field of the UISR shows the "qualified" status for that task. Its second field, "Initial total Number of practice units", contains the value of the initial total number of practice units before the "qualified" status flag was set to "True". Its 3rd - 6th fields are named "Initial RT", "Initial err", "Initial ern", and "Initial erntime", containing values of the average of the five latest response times (RT), the total number of errors, the total number of trials with the "ern" flag set, and the total "erntime" respectively, before the "qualified" status flag was set to "True". Initially the 1st field is set to "False", while the remaining are set to zero. In other embodiments without the ER module, there are no fields related to "ern" or "erntime". The first field "Best Unit RT" of the UPS denotes the least value of the response time in that task since the beginning, and its sixth field "Five latest unit trial RT" has five subfields containing unit averages of RTs of the five latest practice units. The subfields of the sixth field are updated like a queue (FIFO). The 7th - 9th fields are "Current err", "Current ern", and "Current erntime", which contain the total number of errors, the total number of trials with the "ern" flag set, and the total "erntime" in the latest practice unit respectively. The 2nd - 5th fields are "The number of practice units", "Past total err", "Past total ern", and "Past total erntime", which contain the total number of units practiced, the total number of errors, the total number of trials with the "ern" flag set, and the total "erntime" in all previous tasks within the period under consideration respectively. The number of subfields in the sixth field may change with other embodiments of the invention.
The periods of consideration are till the "qualified" status of the UISR is "False", and the period after that. In embodiments without the ER module, there are no fields related to "ern" or "erntime".
The Task Practice Object and Modules
An Example: The task object has been explained earlier. As an example, consider simulating the task "Find the area of a rectangle with adjacent sides x and y". Here the task stimulus is "find the area of a rectangle with adjacent sides x and y". While expanding it, the processor will first simulate values for the variables x and y, and then substitute these values in the place of x and y in the stimulus. Then it will be displayed to the user. The constraint recipe will be "(x = (0, D))(y = (x, D))", where D > 0 is a given number indicating the upper limit of the lengths of the sides. The above means to randomly choose the number x in the interval (0, D), and then the number y in the interval (x, D). The response recipe is the string of the postfix form of the expression "xy", which means the product of x and y. As known from the theory of Random number generators, the sensitive digits of the product are its middle digits, which should be changed in incorrect choices. The task distractor code (TDC) contains "0" if the number of digits in the correct choice in the task response type is 1 or 2, and "1" otherwise.
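Under the stated recipe, one instance of this rectangle task could be simulated as follows (a hedged sketch; the function name and seeding are ours):

```python
import random

def simulate_rectangle_task(D, seed=None):
    """Simulate one instance of the rectangle-area task per the recipe
    "(x = (0, D))(y = (x, D))": x in (0, D), then y in (x, D); the
    correct answer is x*y, i.e. the postfix recipe "x y *"."""
    rng = random.Random(seed)
    x = rng.uniform(0, D)          # independent variable
    y = rng.uniform(x, D)          # dependent variable, bounded below by x
    correct = x * y                # value of the response recipe
    return x, y, correct
```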
The Task Practice Module (TPM)
Its main components are the Stimulus Simulator, the Multiple Choice Response Simulator, the Response Recorder (RR), the ERN Recorder (ER), and the Display Formatter, which process the task object. We define these in detail after the Task Practice Module (See fig 12 - fig 13).
The "Gets Task ID" submodule of the TPM inputs the ID of a task object and gets the corresponding task object from the Content ROM. Then its "Simulates Stimulus" submodule simulates random task instances of that task for the user by executing the Module to simulate random stimulus instances of a task. Then its "Simulates Multiple-Choice" submodule simulates multiple choices for response to the user by executing the Multiple-choice Response Simulator, and if the task is not a multiple-choice type, it provides an opportunity to the user to input the answer. Then the "Sends correct to RR" submodule sends the value of the correct choice located in the variable "correct" to the Response Recorder RR (fig 18). Then its "Record Response" submodule executes the RR and gets the latter's output. The RR records the user's responses and their timings, and gives him or her a chance to correct errors due to slips and impulses by invoking the ER, which records the time period required to correct the error due to slips and impulses, and its error status, i.e. whether such errors have been corrected at all.
Then the RR passes these values to the TPM, which outputs values of response time, erntime, err, and ern for further processing to generate and display the user's performance statistics, and to detect weaker tasks.
The Display Formatter (DF) module receives the stimulus recipe, gets from it the variable names and fetches their simulated values from the global RAM of the TPM, substitutes these values in the place of the corresponding variable names, gets the Display type and the Response type from the Module to simulate random stimulus instances of a task, and sends the resulting stimulus instance to display. Then it receives the display type from the task header, gets the number of multiple choices from the response type of the task, and gets multiple choices for response from the Multiple-choice Response Simulator. Then it orders multiple choices as per the display type for multiple choice, formats these as per the display type and the response type, and sends these to display. Finally, if it receives the variables "correct" and "response" from the response recorder RR, it highlights the correct choice and the user's incorrect response in distinct fonts.
The Stimulus Simulator Object and Module The Stimulus Simulator consists of the constraint simulator module and the module to Simulate Random Stimulus Instances (SRSI) (See fig 14 - 15). It also uses the Expression Evaluator (EVAL), the Random Number Generator (RNG), and the Display Formatter. The EVAL evaluates arithmetic expressions in constraint and response recipes. Since our expressions are known, we use an evaluator without a parser to evaluate expressions containing variables, necessary functions and operators, including the assignment operator. In the preferred embodiment we assume that expressions figuring in all recipes are in postfix form, are verified to be syntactically and semantically correct, and that the EVAL can evaluate any expression, with or without substitution, figuring in any recipe of tasks contained in the content ROM. We also assume that for any variable occurring in a task recipe, it reserves separate RAM locations, which are global to the entire TPM module, i.e. these can be used and modified by any method of the task object, and are distinct from any other global or local variable figuring in any other method of the task object. For details on the construction of expression evaluators see chapter 3 of "Fundamentals of Data Structures", E. Horowitz, S. Sahni, Computer Science Press Inc., 1983 (referred to as [HS]), and "Data structures and algorithm analysis in C", 2nd edition, M. E. Weiss, pp. 72-77, Addison Wesley Longman, 2001 (referred to as [MEW]).
RNGs are well known for generating random numbers. Such generators receive numbers a and b such that a < b, and generate a random number x such that a < x < b. Similarly, the Random Permutation Generator (RPG) is a standard object, which we assume to be available. It receives a positive integer N and outputs a randomly permuted array of all integers from 1 to N.
The Constraint simulator sequentially fetches each constraint of the constraint recipe from left to right. If the boundary values of these constraints are arithmetic expressions containing only constants, then it evaluates these expressions using the EVAL, sends the resulting constants to the RNG in case the constraint is not an assignment to a constant, and finally initializes the variable on the left-hand side of the substitution operator of the constraint to the value obtained above. When the constraint contains a dependent variable, such a constraint should occur in the constraint recipe only after all its constituent variables on the right-hand side of the substitution have already been initialized; it is then initialized by substituting the values of the variables occurring in the right-hand side of the constraint, and then applying the above procedure. These variables are located in the global RAM of the TPM. If the "Task Type" flag of the task header contains "True", then the submodule "Initialize SRSI" of the SRSI (See fig 14 - fig 15) gets the stimulus recipe of the task, gets its display type and response type from the task header, and then its "Solve Constraints" submodule executes the constraint simulator module to initialize its variables in the global RAM. Finally the "Display Stimulus" submodule sends the stimulus recipe, the display type, the response type, and a message to start a new task to the display formatter, which fetches values of the variables figuring in the stimulus recipe from the global RAM, substitutes these values for the variables in the stimulus recipe to get the stimulus instance, and finally displays the task stimulus to the user. If the "Task Type" flag of the task header contains "False", then the SRSI randomly selects a task instance from the memory storage device and displays it to the user.
For this, it gets the number NN from its constraint recipe, sends NN to the RNG, gets back its output MM, then gets the start address of the task instances from the constraint recipe, and jumps to that address. Then it gets the task number type from the task header, the number of task variables k from its stimulus recipe, and the number of responses N from its response type. Then it skips N+k values of the task number type MM-1 times, fetches the first k values and substitutes these in the global RAM for those variables, and then invokes the "Display Stimulus" submodule as above.
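The skip-and-fetch arithmetic above can be sketched as follows (the flat-list layout and names are our assumptions standing in for the content ROM addressing):

```python
import random

def select_stored_instance(flat_values, NN, k, N, seed=None):
    """Mirror the random-selection step described above: pick an instance
    index MM in 1..NN, skip (N+k) values MM-1 times, then read k stimulus
    values followed by N response choices (correct choice first)."""
    rng = random.Random(seed)
    MM = rng.randint(1, NN)            # the RNG output MM
    record_size = N + k                # k stimulus values + N choices per instance
    start = (MM - 1) * record_size     # address after skipping MM-1 records
    stimulus = flat_values[start:start + k]
    choices = flat_values[start + k:start + record_size]
    return MM, stimulus, choices
```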
The Multiple Choice Response Simulator
The Multiple-choice Response Simulator consists of the correct-choice generator (CCG), and the Module to simulate multiple choices for response (SMCR) (See fig 16 - 17). It also uses the standard Random Permutation Generator (RPG).
The CCG gets the response recipe of a task object into a named RAM location, sends it to the EVAL, gets back the value of the correct choice from the EVAL, and outputs it. If the "Task Type" flag of the task header contains "True", then the submodule "Gets number of multiple choices" of the SMCR gets the number of multiple choices N from the response type of the task header, and puts it into a variable N. Then the submodule "Initialize SMCR" gets the response recipe of the task from the content ROM, and puts it into a string variable r. Then its "Gets Correct Choice" submodule sends the string r to the CCG, and stores its output into a variable x. Then its "Fills first Matrix slot" submodule reserves a numeric array A of size N, and puts the value of the above x, i.e. the value of the correct choice, into the first location of the array. Then its "Gets position of sensitive digits" submodule gets the position of the sensitive digits of the answer from the response type of the Task header, the "Gets error-margin" submodule gets the size (margin) of errors from the response type of the Task header, and the "Generate N-1 incorrect answers" submodule generates N-1 incorrect answers by adding errors of sizes as per the above margin of errors in the sensitive digits of the correct choice.
However, these answers still might be easy to guess, i.e. may not be working distractors. So, next the "Gets task-distractor code" submodule gets the task-distractor code from the task header, and then the "Generates N-1 distractors" submodule uses this code to execute the appropriate distractor-generating module to generate N-1 distractors. Next the "Fills N-1 matrix-slots" submodule fills the remaining N-1 locations of the array A with these distractors, and then the "Permute Multiple-Choices" submodule sends the number N to the RPG, stores its output into another array M, and puts the position of the correct choice, i.e. the value in the first location M[1] of the array M, into the variable "correct" in the global RAM to indicate the position of the correct choice. Finally the "Displays Multiple Choices" submodule sends the permuted array A[M[1]], A[M[2]], ..., A[M[N]] to the Display Formatter, which obtains the display type from the task header, orders multiple choices as per the display type of that task, formats multiple choices as per the display type and the response type, and displays them to the user. If the "Task Type" flag of the task header contains "False", then the SMCR gets N values of choices, the first being the correct choice, from the memory storage, and then invokes the submodules "Permute Multiple-Choices" and "Displays Multiple Choices" in order. In another embodiment of the invention, to simulate mathematical and numerical non-multiple-choice tasks, we have another code in the task type, which indicates this type; in this case the Stimulus Simulator simulates the stimulus instance, and the SMCR only derives the correct answer using the CCG, and passes it to the Task Practice Module.
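The permutation step can be illustrated as follows (one reading of the bookkeeping; here we simply track the 1-based display position at which the correct choice, originally in slot 1 of A, lands):

```python
import random

def simulate_multiple_choices(correct, distractors, seed=None):
    """Permute the correct choice and its distractors, as the SMCR does:
    return the displayed choice list and the 1-based position of the
    correct choice (the value stored in the variable "correct")."""
    rng = random.Random(seed)
    A = [correct] + list(distractors)        # slot 1 holds the correct choice
    N = len(A)
    M = rng.sample(range(1, N + 1), N)       # RPG: a random permutation of 1..N
    displayed = [A[M[i] - 1] for i in range(N)]
    correct_position = M.index(1) + 1        # where slot 1 (the correct choice) landed
    return displayed, correct_position
```

Recording the correct choice's position rather than its value lets the Response Recorder compare a multiple-choice key press against a single stored number.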
The object "err" is a Boolean flag, set to "True" if user's first response is incorrect, and either there is no second response or it is incorrect.
The Response Recorder (RR)
The Response Recorder (See fig 18) records the user's responses and their timings. Its InitializeRR submodule gets the value of "waittime" from the "task practice constants" of the content ROM into a variable waittime in the RAM, initializes the variables "response", RT, and "erntime" to zero, and the "err" and "ern" flags to "False". Then its "Starts a timer and waits" submodule sets a timer to zero, starts it, and waits for the user's first response. The moment the user responds, its "Gets user's response" submodule puts the user's first response into the variable response, and puts the timer value into the variable RT (response time). Then its "Is response correct" submodule verifies if the user's first response is correct. If the user's first response is incorrect, the "Puts True into err" submodule puts the value "True" into the variable "err" in the global RAM to show the error status, and then its "Record ERN" submodule executes the ERN recorder module and gets back. Next, its "Display Error?" submodule gets the value of "showerr" from the Global user setup preferences in the Content ROM and verifies if the error status "err" and "showerr" are both "True"; in that case its "Sends response, correct to DF" submodule sends the values of the user's response and the correct response, contained in the RAM locations "response" and "correct" respectively, to the display formatter. Finally, in this case the "Display Distinguishably" submodule uses the Display formatter to display "response" and "correct" in a distinguishable manner in distinct fonts, and then the "Waits till the user responds" submodule waits till the user responds to continue. Otherwise, if the first response is correct or if at least one of "err" and "showerr" is "False", then the "Starts timer afresh" submodule once more sets the timer to zero and starts counting time, and its "Waits for waittime" submodule waits until the timer value reaches the "waittime".
Finally it outputs the values of RT, "erntime", "err", and "ern". In another embodiment there is no ER module; the "Record ERN" submodule does not execute the ER module, and the RR outputs only the values of RT and "err".
The ERN Object and Methods
The ERN Object and Methods consists of the object "maxerntime", the ERN recorder module (ER), and the module to adjust the "maxerntime" (AMAX). The ER gives the user a chance to correct errors due to slips and impulses, and records the latter's timing and error status, which are then passed to appropriate modules for further processing. These modules provide a mechanism for external reinforcement to quicken the user's ERN during skill practice. The proposed mechanism is simple - if the user corrects the error due to slips and impulses within the time interval "maxerntime", it is no longer considered an error, and the performance report improves. Since the interval "maxerntime" decreases with improvement in the user's "erntime", the user is continuously challenged to improve his or her error-response control.

The object "ern" is a Boolean flag set to "true" if the user's first response is wrong but the second response, given within the time limit of "maxerntime", is correct. In this case, "erntime" is the time elapsed between the first and the second response. Otherwise "erntime" is zero.
The ERN Recorder Module (ER) If the user's first response is incorrect, the device executes the ERN recorder module from within the Response Recorder module (RR). It inputs the value of "maxerntime" from the system's state, uses the value of "correct" already in its RAM location, and updates the values of the response time (RT), "erntime", response, "err", and "ern". The variable RT initially contains the duration of the user's first response, and "erntime" is initially set to zero. The variable "correct" contains the position of the correct choice among multiple choices, and the variable "response2" initially contains the user's first response, and is updated to contain the user's second response, if there is any such response at all. In the case of non-multiple-choice tasks, the variable "correct" contains the correct answer.
The submodule "Gets maxerntime" of the ERN recorder (ER) (See fig 19) gets "maxerntime" from the System's state of the user data memory. Then its "Puts first response in response2" submodule initializes the variable "response2" to the value of the variable "response", which is the user's first response, and finally its "Starts a timer, timer2" submodule sets a distinct timer, "timer2", to zero and starts it. Then its submodule "Gets Second Response" waits till the user responds, or the timer value exceeds maxerntime, whichever happens earlier. Then its submodule "Did User Respond?" verifies if the user did respond, and if so, its "Is User Response Correct?" submodule puts the user's second response into the variable response2, compares it with the value of the variable "correct" in the global RAM to see if the second response is correct, and if it is correct then its "Puts "True" into ern" submodule puts "True" into the flag "ern", its "Puts "False" into err" submodule puts "False" into the flag "err", its "Puts timer2 into erntime" submodule puts the duration between the first and the second response, i.e. the value of timer2, into the "erntime", and finally its "Adds erntime to RT" submodule adds "erntime" to RT before returning back to the module RR. If the user didn't respond at all within the time limited by maxerntime, or if the second response is not correct, it does nothing. Finally it returns back to the module RR. The updated values are available to the RR through the global RAM.
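The decision logic of the ER can be condensed as follows (a sketch with a hypothetical calling convention: the timers are replaced by measured durations passed in as arguments, and `second_response`/`second_rt` are None when the user did not respond in time):

```python
def record_ern(first_response, second_response, second_rt, correct, maxerntime, rt):
    """Sketch of the ER decision logic: the first response was incorrect;
    if a second response arrives within maxerntime and is correct, the
    trial is no longer counted as an error, and the correction time
    (erntime) is added to the response time RT."""
    err, ern, erntime = True, False, 0.0
    if second_response is not None and second_rt <= maxerntime:
        if second_response == correct:
            ern = True               # an error due to a slip/impulse, corrected in time
            err = False              # ...so it is no longer counted as an error
            erntime = second_rt      # time between the first and second response
            rt += erntime            # correction time is added to the response time
    return rt, erntime, err, ern
```

A late or incorrect second response leaves all values untouched, exactly as the "it does nothing" clause above states.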
Task Practice Unit Module (TPUM)
To smoothen the response time measurement for a task and get rid of chance fluctuations, it is required to give the user several task instances for practice, and then average all these response times (See fig 11 and [DRSR]). For this, its "Initialize Unit Practice" submodule gets the task ID from the content ROM, reserves and initializes to zero the numeric variables e, x, r, y, and t in the RAM, and gets the "unit size" from the Task Practice Constants and stores it in the variable N. Then its submodule "Unit Practice" executes the TPM for that task repeatedly while the total number of correct responses remains less than N, and accumulates the total number of trials in t, the total number of errors in e, the total number of responses with the "ern" flag "True" in x, the sum of all "erntime" in r, and the average of all RT with correct response in y. Then its submodule "Unit Update" adds 1 to the content of the second field of the User's Performance Statistics (UPS) of that task, adds the content of the seventh field to the content of the third field, the content of the eighth field to the content of the fourth field, and the content of the ninth field to the content of the fifth field of the UPS of that task. Then it puts e in the seventh field, x in the eighth field, r in the ninth field, and pushes y into the queue of the sixth field of the UPS of that task respectively. Then if y is less than the content of the first field of the UPS of that task, it puts y into the latter to update the "Best response time". Thus it updates the total number of trials, the total number of errors, the total number of errors due to slips and impulses, the total timings of all corrections of the latter errors, and the "Best response time" in the UPS of that task, in the user data memory of the Content ROM. Then the "Stop" submodule returns to the calling module.
In case the predetermined unit size is one, the TPUM does nothing more than the TPM apart from updating the values of the total number of trials, the total number of errors, the total number of errors due to slips and impulses, the total timings of all correction of latter errors, and the "Best response time" into UPS of that task.
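The unit accumulation performed by the TPUM can be sketched as follows (trial results are abstracted into dictionaries; the representation and names are ours):

```python
def practice_unit(trial_results, unit_size):
    """Accumulate one practice unit as the TPUM does: run trials until
    `unit_size` correct responses are collected.  Each trial result is a
    dict with keys rt, err, ern, erntime (hypothetical representation).
    Returns the totals t (trials), e (errors), x (ern trials), r (sum of
    erntime), and the unit-average RT y over correct responses."""
    t = e = x = 0
    r = 0.0
    correct_rts = []
    for trial in trial_results:
        t += 1
        if trial['err']:
            e += 1
        if trial['ern']:
            x += 1
        r += trial['erntime']
        if not trial['err']:
            correct_rts.append(trial['rt'])
        if len(correct_rts) == unit_size:
            break                      # enough correct responses for one unit
    y = sum(correct_rts) / len(correct_rts) if correct_rts else 0.0
    return t, e, x, r, y
```

Averaging y over `unit_size` correct responses is what smooths out the chance fluctuations in single-trial RT measurements.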
The Module to Adjust the "Maxerntime" Object (AMAX) This module executes once at the beginning of each practice session (See fig 6). It gets the current maxerntime from the System's State into the RAM. We define an admissible task as a task whose qualified status is "True" and whose total number of responses with the "ern" flag "True" in its current practice unit, located in the UPS of that task, is not zero, i.e. there is at least one error due to slips and impulses corrected by the user in the most recent practice unit of that task. We further define the current average erntime of an admissible task as the ratio r/x, where r is the "sum of all erntime" in the current practice unit of that task, and x is the "total number of responses with ern flag True" in that practice unit of that task. These quantities are located in the UPS of that task. The "Do Exist Admissible Tasks?" submodule of AMAX verifies if there exists at least one admissible task with qualified status "True". If there are no admissible tasks then the module AMAX does nothing, and goes back to the calling module. Otherwise, its "Gets Maximum Erntime" submodule finds the maximum of the current average erntime of all admissible tasks by any maximum-finding method, and then the submodule "Get maxerntime" gets the value of maxerntime from the System's State. Then its "Adjust maxerntime" submodule executes the function (See fig 7) to modify this maxerntime in the RAM, and finally its "Update maxerntime" submodule updates maxerntime from the RAM into the corresponding location of the System's State and returns back to the calling module. This latter function should output a value between its input values, which should rapidly decrease to the above maximum of all erntimes with each practice session. Its purpose is to decrease this upper limit (maxerntime) with practice, so that the user is continually motivated to improve the error-response control on errors due to slips and impulses.
In the present embodiment, the function is as calculated in figure 7, and is the average of maxerntime and e, where e is the maximum of all current average erntimes of all admissible tasks. In other embodiments there could be other functions, provided they output a value between the input values which, with time, goes on decreasing to the maximum of all current average "erntime" values.
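The averaging rule of the present embodiment can be written out directly (a transcription of the rule above; the function name is ours):

```python
def adjust_maxerntime(maxerntime, current_avg_erntimes):
    """AMAX update for the present embodiment: the new maxerntime is the
    average of the old value and e, the maximum of the current average
    erntimes of all admissible tasks.  With no admissible tasks the
    value is left unchanged."""
    if not current_avg_erntimes:
        return maxerntime
    e = max(current_avg_erntimes)
    return (maxerntime + e) / 2.0
```

Repeated application halves the gap between maxerntime and e each session, so the allowance decreases rapidly toward the user's own best correction times, which is exactly the continual-challenge property the text requires.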
User's Weakness in a Task (WEAKNESS) As explained earlier, in this embodiment we define a user's weakness in a task as the ratio of his or her current response time (RT) to the task asymptote, i.e. weakness = RT / asymptote. Here RT is the average current response time as determined in the TPUM module, and asymptote is the task asymptote located in the task header. As mentioned earlier, in the place of the asymptote we could have taken any standard response time. Though, like RT, we can define weakness for a single task instance, in the present embodiment we only mean weakness derived from the average RT, which is determined by the TPUM. We could have also called it the average weakness, since the denominator in the above relation is constant for a given task. We determine it as follows (See fig 8). The "Inputs Task ID" submodule inputs the task ID. Then the "Gets Response Time" submodule gets the average current response time for this task from its User's Performance Statistics for Tasks. Then the "Gets task asymptote" submodule gets the task asymptote corresponding to this task ID from its Task header. Then its "Weakness = Response Time / Asymptote" submodule computes the ratio of RT and asymptote, assigns it to the variable "weakness", and outputs the value of weakness.
Display of Performance Statistics (DisplayStats Module)
The apparatus inputs the task ID, and displays the value of the best RT from the User's Performance Statistics for tasks (UPS) of that task. Then it displays the total number of practice units, the average response time (RT), the error percentage as 100X(past total err)/(past total err + (unit size)X(number of practice units)), the "ern" percentage as 100X(past total "ern")/(past total "ern" + past total err), and the average "erntime" of the initial values obtained from the User's Initial State record (UISR) of that task, and then displays the later values (after the user obtained qualifying status in that task) from the UPS of that task. Then it displays the five current RTs, and the current "err", "ern" and "erntime" from its UPS. The "Qualified" status is derived below.
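The two percentage formulas quoted above can be written out as follows (variable names are ours; the sketch assumes at least one practice unit and at least one error or "ern" trial, so the denominators are nonzero):

```python
def performance_percentages(past_total_err, past_total_ern, unit_size, num_units):
    """The DisplayStats percentage formulas: the error percentage counts
    errors against errors plus correct responses (unit size x number of
    units), and the ern percentage counts corrected slips against all
    trials that started as errors (ern + err)."""
    error_pct = 100.0 * past_total_err / (past_total_err + unit_size * num_units)
    ern_pct = 100.0 * past_total_ern / (past_total_ern + past_total_err)
    return error_pct, ern_pct
```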
The Skill Practice Module (SPM) At the beginning of a task practice, to avoid problems with incorrect trials (see fig 10), the apparatus gives the user practice units of that task in a stretch for as long as the average error exceeds x% or the average weakness exceeds the threshold z, where x and z are the predetermined non-negative "error-threshold" and "weakness-threshold" located in the Task Practice Constants (TPC) of that task. For the procedure to make sense, a "minimal number of units for accuracy" of that task must be practiced; this predetermined number is also located in the TPC. After that, the task is defined to be "Qualified" and its "Qualified" status flag is set to "True" in the first field of the UISR of this task, which is initially set to "False". The aim is to enforce a certain basic speed/accuracy in each task before moving to its response-time analysis.
The submodule "Initialize SPM" receives the task ID from the calling module and initializes the numeric variables n, x, y, and z to zero. Its "Is Qualified" submodule then gets the first field (the "Qualified" status) of its UISR and verifies whether this flag is "True". If the flag is "True", the "Gives Practice Tasks" submodule executes the TPUM module for as long as the user wants. If the user wants to quit, the "Displays task statistics" submodule executes the DisplayStats module to display to the user his or her comprehensive performance report in that task, and control then returns to the calling module. If the "Qualified" status flag is "False", the submodules "Gets unit size", "Gets error-threshold", "Gets Min number of units", and "Gets weakness-threshold" sequentially get the unit size, the error-threshold, the minimal number of units for accuracy, and the weakness-threshold from the TPC into the variables n, x, y, and z respectively. The submodule "Initializes m, E, and w" then initializes the variables m, E, and w to zero. These variables denote, respectively, the total number of units practiced in the following (Loop) before the user obtains the "Qualified" status for that task, the total number of errors in these practices, and the current average weakness in the most recent practice unit. It then executes the following Loop until "Qualifying Criteria Satisfied?" (explained below) is true. (Loop) The submodule "Gives Practice Tasks" executes the TPUM as above; the "Adds current err to E" submodule then adds the current error to the variable E, and the "Increment counter" submodule increases the value of the variable m by one. The "Get weakness" submodule then sends the task ID to the WEAKNESS module to determine the weakness in the latest task practice unit and receives this weakness in the variable w. The following condition is then verified, and if it is not satisfied, the Loop repeats.
The submodule "Qualifying Criteria Satisfied?" verifies whether the number m of units already practiced is less than y, whether the percentage error during the above practice, determined as 100E/(E + nm), is at least x, or whether the weakness w is at least z. If any one of these conditions is true, then "Qualifying Criteria Satisfied?" is considered not true, and the (Loop) repeats.
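For illustration only, the qualification loop of the SPM may be sketched as follows, assuming callback functions for running a practice unit and for querying the WEAKNESS module (both assumed interfaces, not the embodiment's modules):

```python
def qualify(task_id, tpc, run_practice_unit, get_weakness):
    """Repeat practice units until at least y units are done, the running
    error percentage 100E/(E + nm) is below the error-threshold x, and
    the weakness is below the weakness-threshold z.  `tpc` stands in for
    the Task Practice Constants; its field names are assumptions."""
    n = tpc["unit_size"]           # number of task instances per practice unit
    x = tpc["error_threshold"]     # maximum acceptable error percentage
    y = tpc["min_units"]           # minimal number of units for accuracy
    z = tpc["weakness_threshold"]  # maximum acceptable weakness
    m, E = 0, 0                    # units practiced, total errors so far
    while True:
        E += run_practice_unit(task_id)   # returns the errors in this unit (TPUM)
        m += 1
        w = get_weakness(task_id)         # latest weakness (WEAKNESS module)
        percent_error = 100.0 * E / (E + n * m)
        if m >= y and percent_error < x and w < z:
            return m   # at this point the "Qualified" flag would be set "True"
```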
Otherwise, the Update submodule puts the "total number of practice units" from the UPS of that task into the second field of the UISR of that task, the average of all subfields of the sixth field from the UPS into the third field of the UISR, and the sum of the third and seventh fields from the UPS into the fourth field of the UISR of that task. It then puts the sum of the fourth and eighth fields from the UPS of that task into the fifth field of its UISR, and the sum of the fifth and ninth fields from the UPS of that task into the sixth field of its UISR. It then initializes all except the first field of the UPS of that task to 0 and puts "True" in the first field of the UISR of that task. The purpose of updating is to collect the initial values, i.e. the values just before achieving "Qualified" status in that task, put these in the UISR, and refresh all except the first field of the UPS of that task so as to record fresh values therein after "Qualified" status is obtained. In this way the initial statistics are separated from the later statistics, recorded when the user has become more skilled in that task.
Detection of Weak Tasks (DWT)
In order to support the user in strengthening his or her skill, we need a mechanism to detect the weakest task among all practiced tasks, so that the user can practice the currently weakest task and strengthen his or her weak spots. To detect the weakest task we need a way to compare two distinct tasks by some numeric attribute. We have already derived such a numeric attribute: the user's weakness in a task, as determined by the WEAKNESS module earlier. We therefore compare two tasks by their weakness values; the higher the weakness value, the weaker we consider the user in that task. Such a task comparison mechanism is given by the Weakness Comparison Module (WCMP) below. Using that mechanism, the apparatus starts from the first task ID and finds the ID of the task with the maximum weakness value by successively comparing tasks via the WCMP module. The task with the highest weakness is declared to be the weakest and is given to the user to practice in the AASP module. Needless to say, the weakest task is selected only within the group of tasks set by the user.
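For illustration only, the weakness comparison and the scan for the weakest task may together be sketched as follows; `weakness_of` stands for a call to the WEAKNESS module and is an assumed interface, not the embodiment's module:

```python
def is_weaker(task_i, task_j, weakness_of):
    # Comparison sketch: "True" if and only if the first task is weaker,
    # i.e. has the higher weakness value
    return weakness_of(task_i) > weakness_of(task_j)

def weakest_task(task_ids, weakness_of):
    # Detection sketch: start from the first task ID and keep whichever
    # task compares weaker in successive comparisons
    weakest = task_ids[0]
    for task_id in task_ids[1:]:
        if is_weaker(task_id, weakest, weakness_of):
            weakest = task_id
    return weakest
```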
The Weakness Comparison Module (WCMP) This module (see fig 9) compares the user's weakness in two tasks. Its "Inputs both Task IDs" submodule inputs the task IDs into the variables I and J respectively. Its "Initialize flag to False" submodule then initializes a flag variable z to "False". Its "Gets first weakness" submodule then sends task ID I to the WEAKNESS module (see fig 8) and stores its output in a variable w in RAM. Its "Gets second weakness" submodule then sends task ID J to the WEAKNESS module and stores its output in the variable x. Its "Is first task weaker?" submodule then verifies whether w is more than x. If this condition is true, its "Initialize flag to "True"" submodule puts "True" in the flag z. Finally, its "Output flag" submodule returns the value of the flag z. The flag z is "True" if and only if the first task is weaker, i.e. it has the higher weakness value.

The order of the components of the Task object, the Task Practice Constants, and the initial system global preferences can change in other embodiments of the invention. The constants assumed in the fields of the Task header, the Task Practice Constants, and the initial system's global preferences, as well as the fields and subfields of the records of fig 4 and the number of subfields of the sixth field of the User's Performance Statistics for Tasks of fig 4, can change in embodiments of the invention and should be considered illustrative and not restrictive. In another embodiment, the task instances, messages, performance reports, etc. can be displayed to the user on his local system, which might be a PC, a handheld, or a mobile connected to a server; all the timings of events, such as the time of display of the task instance, the first actuation time, the optional second actuation time, etc., can be recorded on the local system, while the entire processing, such as generating the task instances, processing and analyzing the above timings, and generating performance reports, can be carried out at the remote server.
In another embodiment, the local system is preferably used for display and for obtaining the user's inputs, while the entire processing, and even the approximate timing of events, may be carried out on the remote server.

Claims

CLAIMS:
1. An automated method for adaptive testing of skill of a user, said method comprising the steps:
(a) generating and displaying a task instance to the user; (b) detecting a time at which the task instance is displayed to the user;
(c) detecting a first actuation and optionally a second actuation of the input device by the user and a corresponding first actuation time and an optional second actuation time;
(d) determining the user's response to the task instance from the said first and the optional second actuation of the input device and analyzing the correctness of the user's response to the displayed task instance;
(e) evaluating the skill of the user based on the correctness of the user's response and a time period taken to provide the correct response.
2. The method of claim 1, wherein the said first actuation of the input device is determined as the user's response to the task instance if the said first actuation is determined to be correct.
3. The method of claim 2, wherein if the first actuation of the input device is determined as the user's response, the corresponding first actuation time is determined as the time period taken to provide the correct response.
4. The method of claim 1, wherein if the said first actuation of the input device is determined to be incorrect, an opportunity is provided to the user to actuate the input device within a first predetermined amount of time period called "maxerntime".
5. The method of claim 4, wherein if the user actuates the input device within the predetermined amount of time period i.e. within the "maxerntime", the actuation is treated as a second actuation of the input device.
6. The method of claim 1, wherein the said second actuation of the input device by the user is determined as the user's response to the task instance if the said second actuation is determined to be correct.
7. The method of claim 6, wherein if the second actuation of the input device is determined as the user's response, the corresponding second actuation time is determined as the time period taken to provide the correct response and the time difference between the first actuation time and the second actuation time called the "erntime" is determined.
8. The method of claim 1, further comprising displaying the first and/or the second actuation of the input device and the correct answer to the task instance on a display means in a distinguishable manner.
9. The method of claim 1, further comprising repeating the steps (a) to (e) using multiple task instances of the same type of task object.
10. The method of claim 9, further comprising (a) determining the following: (i) the number of times the first actuation of the input device is determined to be incorrect; (ii) the number of times such a second actuation of the input device is determined as a correct response to the task instance and (iii) a total "erntime" for all the second actuations of the input device received from the user, and (b) displaying to the user the said determined values for supporting the user to improve the skills.
11. The method of claim 9, wherein prior to repeating steps (a) to (e), the user is provided a predetermined amount of time period called wait time for relaxing.
12. The method of claim 1, further comprising determining an average time period taken to provide the correct response for multiple task instances of the same type of task object thereby eliminating chance fluctuations.
13. The method of claim 1, further comprising repeating steps (a) to (e) at least "m" number of times or till "percentage correct response" of the user is below a predetermined value "n".
14. The method of claim 12, wherein based on the average time period taken to provide the correct response for multiple task instances of the same type of task object, weakness value for the type of task object is determined.
15. The method of claim 14, further comprising repeating steps (a) to (e) for the task object for which the weakness value is determined to be above a first threshold value.
16. The method of claim 15, further comprising repeating the steps (a) to (e) using task instances of different types of task object.
17. The method of claim 14, further comprising determining an average time for each type of task object, determining the weakness for each type of task object and determining the task object having the highest weakness.
18. The method of claim 1, further comprising repeating steps (a) to (e) for the task object having the highest weakness.
19. The method of claim 4, wherein the value of the first predetermined amount of time period i.e. "maxerntime" is set to decrease when the steps (a) to (e) are being repeated using multiple task instances derived from a same type of task object.
20. The method of claim 1, wherein the said task instance is randomly selected from a memory device having plurality of task instances or the said task instance is randomly generated / simulated from a single task object, thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
21. The method of claim 20, wherein the said task instance is selected from the group consisting of (a) multiple choice based task instances and (b) non- multiple choice based task instances.
22. The method of claim 21, wherein each of the said multiple choice based task instances has at least one correct answer and wherein each correct answer has a single value or multiple values.
23. The method of claim 21, wherein each of the said multiple choice based task instances and/or the non-multiple choice based task instances are selected from subjects selected from the group consisting of (a) mathematics and numerical problems in science and (b) non-mathematical based problems.
24. The method of claim 20, wherein the said task instance in subjects selected from the group consisting of mathematics and numerical problems in Science are randomly generated / simulated from a single task object thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
25. The method of claim 24, wherein the task object comprises at least one variable and a provision for substituting the said at least one variable by a numerical number.
26. The method of claim 25, wherein the multiple choice based task instance comprises a stimulus and a corresponding multiple choice response.
27. The method of claim 24, wherein the stimulus is generated by:
(a) obtaining a task object having at least one variable; and
(b) generating a numerical number randomly and substituting the said randomly generated numerical number in place of the said at least one variable contained in the said task object thereby generating the stimulus.
28. The method of claim 27, wherein in step (b), the said randomly simulated numerical number is simulated based on a predetermined constraint.
29. The method of claim 28, wherein the said predetermined constraint is a variable constraint.
30. The method of claim 26, wherein the multiple choice response for a stimulus is generated by:
(a) processing the stimulus using a mathematical tool kit to obtain one correct answer for the said stimulus; (b) generating a "N" sized matrix and feeding the said correct answer thus obtained as a first input to the said matrix;
(c) generating "N-1" incorrect answers for the said stimulus based on the said correct answer and feeding the same to the "N" sized matrix as further inputs to complete the "N" sized matrix; and (d) performing random permutation of the completed "N" sized matrix to obtain the multiple choice response for the stimulus.
31. The method of claim 30, wherein "N" indicates the number of multiple choices to be simulated for the stimulus.
32. The method of claim 30, wherein in step (c), the "N-1" incorrect answers are generated based on (a) a margin of error and/or (b) details regarding sensitive digits of the said correct answer and/or (c) a task distractor code that uses a distractor-generating method for generating "N-1" distractors.
33. An automated apparatus for adaptive testing of skill of a user, said apparatus comprising:
(a) a means for generating and displaying a task instance to the user;
(b) a means for detecting a time at which the task instance is displayed to the user;
(c) a means for detecting a first actuation and optionally a means for detecting a second actuation of the input device by the user and a means for detecting a corresponding first actuation time and a means for detecting a corresponding second actuation time; (d) a means for determining the user's response to the task instance from the said first and the optional second actuation of the input device and a means for analyzing the correctness of the user's response to the displayed task instance;
(e) a means for evaluating the skill of the user based on the correctness of the user's response and a time period taken to provide the correct response.
34. The apparatus of claim 33, wherein the said means for determining the user's response to the task instance determines the said first actuation of the input device as the user's response to the task instance if the said first actuation is determined to be correct.
35. The apparatus of claim 34, wherein if the first actuation of the input device is determined as the user's response, the corresponding first actuation time is determined as the time period taken to provide the correct response.
36. The apparatus of claim 33, wherein if the said first actuation of the input device is determined to be incorrect, an opportunity is provided to the user to actuate the input device within a first predetermined amount of time period called "maxerntime".
37. The apparatus of claim 36, wherein if the user actuates the input device within the predetermined amount of time period i.e. within the "maxerntime", the actuation is treated as a second actuation of the input device.
38. The apparatus of claim 33, wherein the said second actuation of the input device by the user is determined as the user's response to the task instance if the said second actuation is determined to be correct.
39. The apparatus of claim 38, wherein if the second actuation of the input device is determined as the user's response, the corresponding second actuation time is determined as the time period taken to provide the correct response and the time difference between the first actuation time and the second actuation time called the "erntime" is determined.
40. The apparatus of claim 33, further comprising a display means for displaying the first and/or the second actuation of the input device and the correct answer to the task instance in a distinguishable manner.
41. The apparatus of claim 33, further comprising a means for generating multiple task instances of the same type of task object.
42. The apparatus of claim 41, further comprising (i) a means for determining the number of times the first actuation of the input device is determined to be incorrect; (ii) a means for determining the number of times such a second actuation of the input device is determined as a correct response to the task instance and (iii) a means for determining a total "erntime" for all the second actuations of the input device received from the user, and (iv) a means for displaying to the user the said determined values for supporting the user to improve the skills.
43. The apparatus of claim 41, wherein prior to repeating steps (a) to (e), the user is provided a predetermined amount of time period called wait time for relaxing.
44. The apparatus of claim 33, further comprising a means for determining an average time period taken to provide the correct response for multiple task instances of the same type of task object thereby eliminating chance fluctuations.
45. The apparatus of claim 33, further comprising a means for displaying task instances at least "m" number of times or till "percentage correct response" of the user is below a predetermined value "n", wherein the percentage correct response is determined by a percentage calculating means.
46. The apparatus of claim 44, further comprising a means for calculating weakness value for the type of task object based on the average time period taken to provide the correct response for multiple task instances of the same type of task object.
47. The apparatus of claim 46, further comprising a means for displaying task instances for the task object for which the weakness value is determined to be above a first threshold value.
48. The apparatus of claim 47, further comprising a storage device for storing different types of task objects.
49. The apparatus of claim 46, further comprising a means for determining an average time for each type of task object, a means for determining the weakness for each type of task object and a means for determining the task object having the highest weakness.
50. The apparatus of claim 33, further comprising a means for displaying task instances for the task object having the highest weakness.
51. The apparatus of claim 36, further comprising a means for decreasing the value of the first predetermined amount of time period i.e. "maxerntime".
52. The apparatus of claim 33, wherein the said task instance is randomly selected from a memory device having plurality of task instances or the said task instance is randomly generated / simulated from a single task object using a task instance generating means, thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
53. The apparatus of claim 52, wherein the said task instance is selected from the group consisting of (a) multiple choice based task instances and (b) non- multiple choice based task instances.
54. The apparatus of claim 53, wherein each of the said multiple choice based task instances has at least one correct answer and wherein each correct answer has a single value or multiple values.
55. The apparatus of claim 53, wherein each of the said multiple choice based task instances and/or the non-multiple choice based task instances are selected from subjects selected from the group consisting of (a) mathematics and numerical problems in science and (b) non-mathematical based problems.
56. The apparatus of claim 52, wherein the said task instance in subjects selected from the group consisting of mathematics and numerical problems in Science are randomly generated / simulated from a single task object using the task instance generating means thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
57. The apparatus of claim 56, wherein the task object comprises at least one variable and a provision for substituting the said at least one variable by a numerical number.
58. The apparatus of claim 57, wherein the task instance generating means is a multiple choice based task instance generating means which comprises a means for generating a stimulus and a means for generating a corresponding multiple choice response.
59. The apparatus of claim 56, wherein the means for generating the stimulus comprises:
(a) a means for obtaining a task object having at least one variable; and
(b) a means for generating a numerical number randomly and substituting the said randomly generated numerical number in place of the said at least one variable contained in the said task object thereby generating the stimulus.
60. The apparatus of claim 59, wherein the means for generating a numerical number comprises a means for generating a random number and a means for placing a predetermined constraint on the selected random number.
61. The apparatus of claim 60, wherein the said predetermined constraint is a variable constraint.
62. The apparatus of claim 58, wherein the means for generating multiple choice response for a stimulus comprises:
(a) a mathematical tool kit for processing the stimulus to obtain one correct answer for the said stimulus;
(b) a means for generating a "N" sized matrix and feeding the said correct answer thus obtained as a first input to the said matrix; (c) a means for generating "N-1" incorrect answers for the said stimulus based on the said correct answer and feeding the same to the "N" sized matrix as further inputs to complete the "N" sized matrix; and
(d) a means for obtaining random permutation of the completed "N" sized matrix thereby generating the multiple choice response for the stimulus.
63. The apparatus of claim 62, wherein "N" indicates the number of multiple choices to be simulated for the stimulus.
64. The apparatus of claim 62, wherein the "N-1" incorrect answers are generated based on (a) a margin of error and/or (b) details regarding sensitive digits of the said correct answer and/or (c) a task distractor code that uses a distractor-generating method for generating "N-1" distractors.
65. An automated method for adaptive testing of skill of a user, said method comprising the steps: (a) displaying a task instance to the user and determining a time at which the task instance is displayed to the user;
(b) detecting a response from the user to the displayed task instance and a time at which the user responds to the task instance;
(c) analyzing the correctness of the user's response to the displayed task instance;
(d) repeating steps (a) to (c) using task instance of at least two distinct types of task objects;
(e) evaluating a weakness value for each of the said at least two types of task objects based on the correctness of the user's response and a time period taken to provide the correct response; and
(f) determining the task object for which the weakness value is highest.
66. The method of claim 65, wherein in step (d), the steps (a) to (c) are repeated preferably using multiple task instances of the same type of task object and an average time period taken by the user to provide the correct response for the multiple task instances of the same type of task object is determined thereby eliminating chance fluctuations.
67. The method of claim 66, wherein the weakness value for a particular task object is determined based on the average time period taken by the user to provide correct responses for the multiple task instances of that type of task object.
68. The method of claim 65, wherein in step (d), prior to repeating the steps (a) to (c), the user is provided a predetermined amount of time period called wait time for relaxing.
69. The method of claim 65, wherein in step (d), the steps (a) to (c) are repeated for the same type of task object at least "m" number of times or till "percentage correct response" of the user is below a predetermined value "n".
70. The method of claim 65, further comprising repeating steps (a) to (c) for the task object for which the weakness value is determined to be above a first threshold value.
71. The method of claim 65, further comprising repeating steps (a) to (c) for the task object for which the weakness value is determined in step (f) as being the highest.
72. The method of claim 65, wherein in step (b), detecting a response from the user to the displayed task instance preferably comprises detecting a first actuation and optionally a second actuation of the input device by the user and a corresponding first actuation time and an optional second actuation time.
73. The method of claim 65, wherein the response from the user to the displayed task instance is detected preferably from the said first and the optional second actuation of the input device.
74. The method of claim 72, wherein the said first actuation of the input device is determined as the user's response to the task instance if the said first actuation is determined to be correct.
75. The method of claim 74, wherein if the first actuation of the input device is determined as the user's response, the corresponding first actuation time is determined as the time period taken to provide the correct response.
76. The method of claim 72, wherein if the said first actuation of the input device is determined to be incorrect, an opportunity is provided to the user to actuate the input device within a first predetermined amount of time period called "maxerntime".
77. The method of claim 76, wherein if the user actuates the input device within the predetermined amount of time period i.e. within the "maxerntime", the actuation is treated as a second actuation of the input device.
78. The method of claim 72, wherein the said second actuation of the input device by the user is determined as the user's response to the task instance if the said second actuation is determined to be correct.
79. The method of claim 78, wherein if the second actuation of the input device is determined as the user's response, the corresponding second actuation time is determined as the time period taken to provide the correct response and the time difference between the first actuation time and the second actuation time called the "erntime" is determined.
80. The method of claim 65, further comprising displaying the first and/or the second actuation of the input device and the correct answer to the task instance on a display means in a distinguishable manner.
81. The method of claim 65, further comprising (a) determining the following: (i) the number of times the first actuation of the input device is determined to be incorrect; (ii) the number of times such a second actuation of the input device is determined as a correct response to the task instance and (iii) a total
"erntime" for all the second actuations of the input device received from the user, and (b) displaying to the user the said determined values for supporting the user to improve the skills.
82. The method of claim 76, wherein the value of the first predetermined amount of time period i.e. "maxerntime" is set to decrease when the steps (a) to (c) are being repeated using multiple task instances derived from a same type of task object.
83. The method of claim 65, wherein the said task instance being displayed to the user is randomly selected from a memory device having plurality of task instances or is randomly generated / simulated from a single task object, thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
84. The method of claim 83, wherein the said task instance is selected from the group consisting of (a) multiple choice based task instances and (b) non- multiple choice based task instances.
85. The method of claim 84, wherein each of the said multiple choice based task instances has at least one correct answer and wherein each correct answer has a single value or multiple values.
86. The method of claim 84, wherein each of the said multiple choice based task instances and/or the non-multiple choice based task instances are selected from subjects selected from the group consisting of (a) mathematics and numerical problems in science and (b) non-mathematical based problems.
87. The method of claim 83, wherein the said task instance in subjects selected from the group consisting of mathematics and numerical problems in Science are randomly generated / simulated from a single task object thereby eliminating the need to store multiple task instances of the same type of task object on a memory device.
88. The method of claim 87, wherein the task object comprises at least one variable and a provision for substituting the said at least one variable by a numerical number.
89. The method of claim 88, wherein the multiple choice based task instance comprises a stimulus and a corresponding multiple choice response.
90. The method of claim 87, wherein the stimulus is generated by:
(a) obtaining a task object having at least one variable; and
(b) generating a numerical number randomly and substituting the said randomly generated numerical number in place of the said at least one variable contained in the said task object thereby generating the stimulus.
91. The method of claim 90, wherein in step (b), the said randomly simulated numerical number is simulated based on a predetermined constraint.
92. The method of claim 91, wherein the said predetermined constraint is a variable constraint.
93. The method of claim 87, wherein the multiple choice response for a stimulus is generated by:
(a) processing the stimulus using a mathematical tool kit to obtain one correct answer for the said stimulus;
(b) generating an "N" sized matrix and feeding the said correct answer thus obtained as a first input to the said matrix;
(c) generating "N-1" incorrect answers for the said stimulus based on the said correct answer and feeding the same to the "N" sized matrix as further inputs to complete the "N" sized matrix; and
(d) performing random permutation of the completed "N" sized matrix to obtain the multiple choice response for the stimulus.
94. The method of claim 93, wherein "N" indicates the number of multiple choices to be simulated for the stimulus.
95. The method of claim 93, wherein in step (c), the "N-1" incorrect answers are generated based on (a) a margin of error and/or (b) details regarding sensitive digits of the said correct answer and/or (c) a task distractor code that uses a distractor-generating method for generating "N-1" distractors.
PCT/IB2006/001506 2005-06-10 2006-06-08 Method and system for automated adaptive skill practice WO2006131819A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN1511DE2005 2005-06-10
IN1511/DEL/2005 2005-06-10

Publications (2)

Publication Number Publication Date
WO2006131819A2 true WO2006131819A2 (en) 2006-12-14
WO2006131819A3 WO2006131819A3 (en) 2009-06-04

Family

ID=37498815

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/001506 WO2006131819A2 (en) 2005-06-10 2006-06-08 Method and system for automated adaptive skill practice

Country Status (1)

Country Link
WO (1) WO2006131819A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011028422A1 (en) * 2009-09-05 2011-03-10 Cogmed America Inc. Method for measuring and training intelligence

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5885087A (en) * 1994-09-30 1999-03-23 Robolaw Corporation Method and apparatus for improving performance on multiple-choice exams

Also Published As

Publication number Publication date
WO2006131819A3 (en) 2009-06-04

Similar Documents

Publication Publication Date Title
McGill Learning to program with personal robots: Influences on student motivation
Weiss Computerized adaptive testing for effective and efficient measurement in counseling and education
TWI270023B (en) Computerized teaching, practice, and diagnosis system
Rink et al. Foundations for the learning and instruction of sport and games
US20050191605A1 (en) Method and apparatus for improving math or other educational skills
US20040018479A1 (en) Computer implemented tutoring system
US20110244435A1 (en) Method and apparatus for improving math skills
US20130224697A1 (en) Systems and methods for generating diagnostic assessments
Goudas et al. Self-regulated learning and students’ metacognitive feelings in physical education
Dugdale The design of computer-based mathematics instruction
US20060003296A1 (en) System and method for assessing mathematical fluency
US8545232B1 (en) Computer-based student testing with dynamic problem assignment
US7395027B2 (en) Computer-aided education systems and methods
Lau et al. Using microanalysis to examine how elementary students self-regulate in math: A case study
Boyce et al. Effective practices in game tutorial systems
US20190362138A1 (en) System for Adaptive Teaching Using Biometrics
Faessler et al. Evaluating student motivation in constructivistic, problem-based introductory computer science courses
Belka What preservice physical educators observe about lessons in progressive field experiences
Evans et al. Designing personalized learning products for middle school mathematics: The case for networked learning games
WO2006131819A2 (en) Method and system for automated adaptive skill practice
Popham Minimal competencies for objectives-oriented teacher education programs
Whitaker et al. Intelligent tutoring design alternatives in a serious game
Mohr et al. Changes in pre-service teachers’ beliefs about mathematics teaching and learning during teacher preparation and effects of video-enhanced analysis of practice
US20090017434A1 (en) Method And System Of Computerized Examination Strategy Analyzer
KR100432148B1 (en) A Trial Examination of Automatic Making questions System Using Internet

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06765480

Country of ref document: EP

Kind code of ref document: A2