Search results for author:"Brent Bridgeman"
Total records matched: 10
Journal of Educational Measurement Vol. 29, No. 3 (1992) pp. 253–271
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of ...
International Journal of Testing Vol. 13, No. 2 (2013) pp. 105–122
To explore the potential effect of computer type on the Test of English as a Foreign Language-Internet-Based Test (TOEFL iBT) Writing Test, a sample of 444 international students was used. The students were randomly assigned to either a laptop or a...
Journal of Educational Measurement Vol. 41, No. 2 (June 2004) pp. 137–148
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that...
Relationship of TOEFL iBT® Scores to Academic Performance: Some Evidence from American Universities
Language Testing Vol. 29, No. 3 (July 2012) pp. 421–442
This study examined the relationship between scores on the TOEFL Internet-Based Test (TOEFL iBT®) and academic performance in higher education, defined here in terms of grade point average (GPA). The academic records for 2594 undergraduate and...
Effects of an On-Screen versus Bring-Your-Own Calculator Policy on Performance on the Computerized SAT I: Reasoning Test in Mathematics
Annual Meeting of the National Council on Measurement in Education (NCME) 1998 (April 1998)
Students taking the paper-based Scholastic Assessment Test (SAT) mathematics test are permitted to bring and use their own hand-held calculators, and this policy was continued for the computer-adaptive tests (CATs) designed for use in talent search...
Journal of Educational Measurement Vol. 39, No. 2 (2002) pp. 133–147
This study examined data from several national testing programs to determine whether the change from paper-based to computer-based administration influences group differences in performance. Results from four college and graduate entrance examinations...
Applied Measurement in Education Vol. 16, No. 3 (2003) pp. 191–205
Studied the effects of variations in screen size, resolution, and presentation delay on verbal and mathematics scores on a computerized test for 357 high school juniors. No significant differences were found for mathematics scores, but verbal scores ...
Journal of Technology, Learning, and Assessment Vol. 10, No. 3 (August 2010)
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent...
International Journal of Testing Vol. 6, No. 3 (2006) pp. 255–268
The aim of this study was to assess test takers' attitudes and beliefs about an admissions test used extensively in graduate schools of business in the United States, the Graduate Management Admission Test (GMAT), and the relationships of these...
Language Testing Vol. 29, No. 1 (January 2012) pp. 91–108
Scores assigned by trained raters and by an automated scoring system (SpeechRater™) on the speaking section of the TOEFL iBT™ were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students...