
JALT 2006

Conference Website

The following proposal was accepted. The presentation will be on ???

Machine-Aided Spoken Language Evaluation: The Rater-Jukebox for content classes / The Rater Jukebox / Pronunciation Module / Machine Rating

Presentation Title: MASLE: Speech recognition software for testing
Promotional Presentation: no
Content A - Context: Universal (U)
Content B - Area: Language and Technology (CALL)
Focus: III - A balance of I (Classroom Activities) and II (Research/Theory)
Format: Short Paper
Language: English
Length: 25 minutes
Equipment: (Equip. 4) PC Projector

Short summary: This presentation will demonstrate the Machine-Aided Spoken Language Evaluation (MASLE) project, which consists of three modules: a recorder, an interface for a human rater, and an automatic speech recognition rater. The testing of speech is rarely done in regular classrooms; in contrast, these tools enable teachers to evaluate student speech on a regular basis. Data will be presented showing the promise and limitations of using a machine to rate human speech.
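All three modules read from and write to a shared store of recordings and ratings. As a rough sketch only, here is one way such storage could be laid out in SQLite; the table and column names are assumptions made for illustration, not the actual MASLE schema.

# Hypothetical storage sketch for the three MASLE modules.
# Table and column names are illustrative assumptions only.
import sqlite3

conn = sqlite3.connect("masle_demo.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS responses (
    response_id INTEGER PRIMARY KEY,
    speaker_id  TEXT NOT NULL,   -- the learner who spoke
    test_id     TEXT NOT NULL,   -- which test the response belongs to
    item_id     TEXT NOT NULL,   -- the individual stimulus item
    audio_path  TEXT NOT NULL    -- where the recorder module saved the audio
);

CREATE TABLE IF NOT EXISTS ratings (
    rating_id   INTEGER PRIMARY KEY,
    response_id INTEGER NOT NULL REFERENCES responses(response_id),
    rater_type  TEXT CHECK (rater_type IN ('human', 'machine')),
    rater_id    TEXT NOT NULL,   -- a person's ID or the ASR engine name
    score       REAL NOT NULL    -- the assigned rating
);
""")
conn.commit()

Keeping human and machine ratings in one table, distinguished only by a rater_type field, would make the later comparison step a single query; whether the project actually stores them this way is not stated in the proposal.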

Abstract: This presentation will demonstrate the Machine-Aided Spoken Language Evaluation (MASLE) project, which consists of three modules: a recorder, an interface for a human rater, and an automatic speech recognition rater. The testing of speech is rarely done in regular classrooms; in contrast, these tools enable teachers to evaluate student speech on a regular basis. Testing as part of normal coursework can reduce the anxiety that students often experience under usual test conditions. In addition, this type of testing is both more efficient and fairer. One advantage of this system of evaluation is that large classes are easily accommodated. The first component, the data collection module, presents stimuli to language learners and records their spoken responses over the Internet. A rater can choose from the database the subset of tests and/or speakers to be rated. The second module then allows the rater to listen to the recordings and assign ratings to the spoken language. The third and final module uses automatic speech recognition software instead of a human rater to grade the speech. Both human and machine ratings are stored in the database and can be retrieved later for other tasks, such as assigning grades to the speakers, checking the reliability of the judgments, or comparing human and machine ratings. Data will be presented showing the promise and limitations of using a machine to rate human speech.
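Once both sets of ratings are stored, comparing human and machine scores is a paired calculation over the same responses. The sketch below uses Pearson correlation on invented scores purely to show the step; the abstract does not say which agreement statistic the project reports, so treat this as an assumption.

# Hypothetical sketch of the human-vs-machine comparison step.
# The scores below are made up to demonstrate the calculation, not real data.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Invented example: paired scores for the same five responses.
human_scores   = [4.0, 3.5, 2.0, 5.0, 3.0]
machine_scores = [3.8, 3.2, 2.5, 4.6, 3.1]
print(f"human-machine agreement r = {pearson_r(human_scores, machine_scores):.2f}")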
