WO2000067111A1 - An animated, interactive computer interface system - Google Patents

An animated, interactive computer interface system

Info

Publication number
WO2000067111A1
Authority
WO
WIPO (PCT)
Prior art keywords
interactive
interface system
animated
computer interface
audio input
Application number
PCT/US2000/011839
Other languages
French (fr)
Inventor
Craig Huish
Original Assignee
Screenfriends Corporation
Application filed by Screenfriends Corporation
Priority to AU46898/00A
Publication of WO2000067111A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output

Abstract

An animated, interactive computer interface system including a computer, having a display monitor and storage medium, a sound receiving device, a sound generating device, at least one machine executable file stored in the storage medium, a means for converting audio input into machine readable code, and an animated, interactive software program. The animated, interactive software program produces a visual and audible response to the audio input and/or performs tasks responsive to the audio input by interpreting the machine readable code.

Description

AN ANIMATED, INTERACTIVE COMPUTER INTERFACE SYSTEM
Reference to Related Application and Priority Claim
This application expressly claims the benefit of the earlier filing date and right of priority from the following patent application: U.S. Provisional Application Serial No. 60/132,249, filed on May 3, 1999 in the name of Craig Huish and entitled "An Animated,
Interactive Computer Interface System." The entirety of that earlier-filed, co-pending provisional patent application is hereby expressly incorporated herein by reference.
Field of Invention
The present invention relates to user interface in computer systems and more specifically, relates to animated, interactive computer interfaces.
Background of Invention
Speech recognition software enables users to interact with personal computers. These speech recognition software programs often integrate with operating software programs and other application software programs so that speech recognition takes place in the background, thereby making speech simply another way of giving the computer commands and instructions. Some speech recognition software programs are designed to control simple functions of the computer's operating program, such as opening and closing of programs, switching from one program to another, moving windows around on-screen, and issuing menu commands. Other speech recognition software programs allow the user to create a library of verbal commands, such as "GET MAIL" to launch an Inbox mail program, dial an e-mail server, and check for new messages on-line, or "FAX" to create and address a new fax form.
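The library of verbal commands described above can be sketched, purely as an illustration, in the following Python fragment; the class, the phrases, and the action names are hypothetical and are not drawn from any particular speech recognition product.

```python
class CommandLibrary:
    """Illustrative library of verbal commands: each spoken phrase
    maps to an ordered list of actions the computer should carry out."""

    def __init__(self):
        self._commands = {}

    def register(self, phrase, actions):
        # Store phrases case-insensitively, since spoken input has no case.
        self._commands[phrase.upper()] = list(actions)

    def lookup(self, phrase):
        # Return the registered actions, or an empty list if unrecognized.
        return self._commands.get(phrase.upper(), [])


library = CommandLibrary()
library.register("GET MAIL",
                 ["launch inbox program", "dial e-mail server",
                  "check for new messages"])
library.register("FAX", ["create new fax form", "address fax form"])
```

A recognizer front end would pass its transcription to `lookup`; a registered phrase such as "GET MAIL" returns its action sequence, while an unregistered phrase returns an empty list.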
Even though most speech recognition software programs are easier to use, more accurate and include greater productivity enhancing tools than the programs of the past, it is believed that many computer users still refuse to use them because they are uncomfortable speaking to a machine. It is believed that one way to overcome this discomfort is to give the computer more human or animal-like characteristics by displaying an animated character on the computer monitor that receives and responds to audio commands inputted by the user.
Some computer application programs employ animated characters to interact with the user. For example, animated characters are used in the word processing program sold under the trademark, WORD 97, by Microsoft Corporation. In this program, a "help" feature known as the "Office Assistant," which displays an animated character when the "help" feature is requested, is provided. However, the activity of the animated character is limited, and the animated character can only perform a simple human or animal-like body movement or facial expression. It is believed that one drawback with application programs that use animated characters is that they do not have speech recognition capabilities. In addition, it is believed that these programs are not designed to provide an audible response to the user's comments or actions. The activities of the animated characters are also not integrated with other operating and application programs to perform functions in those programs.
Summary of the Invention
The present invention provides an interactive computer interface system that includes a computer, which has a display monitor and storage medium, a sound receiving device, a sound generating device, and at least one machine executable file stored in the storage medium. A means for converting audio input into machine readable code and an animated, interactive software program are also included in the system. The animated, interactive software program produces a visual and audible response to the audio input and/or performs tasks responsive to the audio input by interpreting the machine readable code.
One embodiment of the animated, interactive software program includes a brain component and an animated character component. The brain component receives and interprets the machine readable code through an operating system and generates a response to the audio input. The animated character component, which is coupled to the brain component, receives the response to the audio input, and then produces the visual and/or audible responses to the audio input.
The present invention also provides an interactive user interface system for a computer that has memory to run computer operating system software and a plurality of software applications. The computer also includes a device that interacts with the memory to create a user input file and an interactive software program resident in the computer memory. The interactive software program has a sensory output component for providing visual and audible output and a brain component for interpreting the user input file, delivering commands for the sensory output component responsive to the user input file, and initiating user-defined tasks for the computer.
The present invention also includes a method of operating an interactive computer interface system. The method includes: providing a computer with a display monitor and a storage medium, a sound generating device, and at least one machine executable file stored in the storage medium; inputting audio to a sound receiving device; converting the audio input into machine readable code; delivering the machine readable code to an animated interactive software program; and executing the at least one machine executable file associated with the machine readable code.
Brief Description of the Drawings
The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate the presently preferred embodiment of the invention, and, together with the general description given above and the detailed description given below, serve to explain the features of the invention.
Fig. 1 is an illustration of the computer system of the interactive computer interface system of the present invention. Fig. 2 is a block diagram of the interactive computer interface system of the present invention.
Fig. 3 is a block diagram of the interactive computer interface system of Fig. 2 connected to a network server.
Detailed Description of the Preferred Embodiment(s)
As shown in Fig. 1, the physical aspects of the interactive computer interface system include a computer 1 with a display monitor 2 and a storage medium 3, a sound generating device, such as a speaker 4, and a sound receiving device, such as a microphone 5. The microphone 5 may either be built into the storage medium 3 or stand alone to receive audio input from a user. The system is designed for a computer user to use an animated character, or graphic image, 6 as an assistant when operating the computer 1. As such, the system enables the computer user to operate the computer 1 by inputting speech commands that appear to be received and acted upon by an animated character 6 displayed on the monitor 2.
As shown in Fig. 2, the interactive computer interface system also includes an animated, interactive software program 10, at least one machine executable file 11 stored in the storage medium 3, or computer memory, and a means 12 for converting, preferably, audio input 13 into machine readable code 14. Audio input 13 received by the sound receiving device 5 is preferred, but any other input and devices, such as a mouse, keyboard and touch-sensitive screen, may be used. Preferably, the storage medium 3 has a hard drive (not shown) with sufficient memory to run an operating system 15 resident with the animated, interactive software program 10, the means 12 for converting the audio input 13 into machine readable code 14, and a plurality of the machine executable files 11, 16, 17, 18 and 19, which are, preferably, compatible with the operating system 15 and the means 12. The machine readable code 14 may be ASCII text files or any other user input file compatible with the operating system 15 on the computer 1. The machine executable files 11, 16, 17, 18 and 19 may be application software programs or executable files associated with the application software programs.
In the preferred embodiment, the animated, interactive software program 10 performs tasks 22 or 23, which may be user-defined, responsive to the audio input 13 by interpreting the machine readable code 14 and/or produces a visual response 25 and audible response 26 to the audio input 13. In the preferred embodiment, the animated, interactive program 10 includes two components: a brain component 20 and an animated character component, or sensory output component, 30. The brain component 20 acts as a central hub between the operating system 15, means 12, or speech recognition program, and the plurality of application software programs 11, 17 and 18 and executable files 16 and 19. The hierarchical structure of the brain component 20, which is similar to the hierarchical structure used in operating systems, accepts proprietary software programs requiring minimal storage space, known as "plug-ins," 17 and 18. These plug-ins 17 and 18 plug into the brain component 20 to provide added functionality. Using such a hierarchical structure makes the program 10 expandable or scalable for use with future application programs.
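The hub-and-plug-in structure attributed to the brain component 20 might be sketched as follows; this is a minimal illustration, and every class, keyword, and file name in it is a hypothetical stand-in rather than part of the disclosed implementation.

```python
class PlugIn:
    """A small add-on module (cf. plug-ins 17 and 18): it recognizes
    a family of commands and may name an executable file to launch."""

    def __init__(self, name, keywords, executable=None):
        self.name = name
        self.keywords = set(keywords)
        self.executable = executable

    def matches(self, text):
        # A plug-in claims a command when any of its keywords appears.
        return bool(self.keywords & set(text.lower().split()))


class Brain:
    """Central hub (cf. brain component 20) routing recognized text
    to whichever registered plug-in claims it."""

    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        self.plugins.append(plugin)

    def route(self, text):
        for plugin in self.plugins:
            if plugin.matches(text):
                return plugin
        return None  # no plug-in claimed the command


brain = Brain()
brain.register(PlugIn("clock", ["time"], executable="clock.exe"))
brain.register(PlugIn("mail", ["mail", "inbox"], executable="mail.exe"))
```

Because plug-ins are registered rather than hard-wired, new ones can be added later without changing the hub, which is one way to read the scalability claim above.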
When the computer 1 is booted, or turned on, the operating system 15, the speech recognition program, or means 12, and the brain component 20 are loaded into the computer's random access memory. In the preferred embodiment, speech commands, or audio input 13, are delivered to the computer 1 through the sound receiving device 5, and the speech recognition software program, or means 12, converts the speech, or audio input 13, into machine readable code 14, or standard text files, which are then further processed by the brain component 20. Preferably, the audio input 13 operates through the operating system 15.
When a suitable text file 14 is delivered to the brain component 20 and presented to the application software program 11 and plug-in programs 17 and 18, the application software program 11 or the single plug-in 17 or 18 associated with the text 14 is activated. In turn, the executable file 16 or 19 associated with the activated plug-in 17 or 18, respectively, may be automatically executed. After the application software program 11 is activated or the executable file 16 or 19 is executed, a set of task instructions 22 or 23 may be generated by the brain component 20. The task instructions 22 may be delivered to the operating system 15, or the task instructions 23 may be delivered to other application programs 11. The set of task instructions 22 or 23 may also be delivered without execution of the application software program 11 or executable file 16 or 19 if the text file 14 is not associated with a plug-in 17 or 18, but associated only with the task instructions 22 or 23. The task instructions 22 or 23 may inform the operating system 15, the application software program 11, or any other program or file to carry out specific tasks, such as opening files or executing software applications. For example, if the user's audio input 13 is "What time is it?", then the task instruction 22 or 23 will be an audible response 26 with the current time and a visual response 25 with visual movements corresponding to the audible response 26. In addition, if the user's audio input 13 is "Open WordPerfect," the program will be opened, and the audible response 26, along with the visual response 25, may be "OK" or "I am opening the program," or something similar with corresponding gestures from the animated character 6. Simultaneously, a set of character instructions 24 are generated by the brain component 20 and delivered to the animated character component 30. 
These character instructions 24 are used to produce the animated character 6, or graphic image, on the monitor 2 and the audible output 26 from the sound generating device 4. In the preferred embodiment, the visual response 25 and audible response 26 operate through the operating system 15. Alternatively, the visual response 25 and audible response 26 operate stand-alone.
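The flow just described, in which recognized text yields task instructions 22 or 23 for the operating system or applications together with character instructions 24 for the animated character, could be approximated as below; the function, the commands it recognizes, and the instruction shapes are illustrative assumptions, not the disclosed implementation.

```python
import datetime


def handle_text(text):
    """Map one recognized text command to a pair:
    (task instructions for the OS/applications, character instructions)."""
    text = text.strip().lower()
    if text == "what time is it?":
        now = datetime.datetime.now().strftime("%H:%M")
        # No OS task is needed; the character simply answers aloud.
        return [], {"say": "It is " + now, "gesture": "glance at watch"}
    if text.startswith("open "):
        app = text[len("open "):]
        # A task instruction asks the OS to launch the named program,
        # while the character confirms with a gesture.
        return [("launch", app)], {"say": "I am opening " + app,
                                   "gesture": "nod"}
    # Unrecognized commands produce only a character response.
    return [], {"say": "I did not understand that", "gesture": "shrug"}


tasks, character = handle_text("Open WordPerfect")
```

The "Open WordPerfect" example from the text thus yields one launch task plus a spoken confirmation, while "What time is it?" yields no task at all, only an audible and visual response.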
In the preferred embodiment, the animated character component 30 includes a plurality of graphic image files 33 and a plurality of sound files 35. The graphic image files 33 are used to present the human-like body and facial movement features of the animated character 6 on the monitor 2. This animated character 6 may be large enough to fill the entire display of the monitor 2. The animated character 6 may be displayed in two-dimensional or three-dimensional animation or photo-realistic images and, preferably, has shoulders, a neck, and a head. The sound files 35 are used to provide life-like audible responses 26 from the animated character component 30. The graphic image files 33 and sound files 35 may be provided from any program designed to operate with a means 12 to convert audio input 13 into machine readable code 14 and that can be integrated with the operating system 15 and other machine executable files 11, 16, 17, 18 and 19 that may be used. In the preferred embodiment, the animated character component 30 includes a runtime version of an animation program known as DIRECTOR 6.0, sold by Macromedia Inc. of San Francisco, California, and a set of selected graphic image files 33 capable of being executed thereunder. Preferably, the graphic image files 33 are BMP files or PIC files that present a relatively large animated character 6 having human-like body and facial movement features on the monitor 2.
The animated character component 30 also coordinates and generates the audible responses 26 made by the animated character 6. In the preferred embodiment, the audible responses 26 are produced by pre-recorded sound files 35. These sound files 35 may be recorded QUICKTIME files, sold by Apple Corporation of California, or MIDI or WAVE formatted files. Alternatively, the audible responses 26 are produced by generic text-to-speech (TTS) audio technology. When the set of character instructions 24 is delivered to the animated character component 30, the sound files 35 associated with the animated activity are automatically played in time with the corresponding animated character facial (mouth) movements. In the preferred embodiment, the brain component 20 also responds to selected speech commands in the audio input 13. If an unrecognized or a recognized but inappropriate speech command is inputted, the brain component 20 may instruct the animated character component 30 to respond visually or audibly that the input command was unrecognized or inappropriate. When the system is activated and waiting for a speech command to be inputted, the animated character component 30 may remain active by displaying an animated character 6 with a slight grin or other small facial movements (e.g., blinking, yawning, etc.). When a specified time period elapses, both the brain component 20 and the animated character component 30 may become inactive. This may be indicated by the eyes of the animated character 6 being closed or by the generation of snoring sounds, thereby indicating that the computer 1 is idle or in a standby mode.
In the preferred embodiment, the operating system 15 is WINDOWS 95, WINDOWS 98, or WINDOWS NT, all sold by Microsoft, Inc. of Redmond, WA. The speech recognition program, or means 12, may also have "command and control" capability so that any function on the computer may be carried out with audio input 13, or speech commands. Some software programs that operate in this manner include DRAGON NATURAL SPEAKING 3.0, sold by Dragon Systems, Inc. of Newton, MA, IBM VIAVOICE 98 EXECUTIVE, sold by IBM Corp. of West Palm Beach, FL, and VOICE XPRESS PLUS 1.01, sold by Lernout & Hauspie Speech Products USA Inc., of Burlington, MA. However, any compatible program may be employed.
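The activity states mentioned above — attentive while awaiting a command, small idle motions after a pause, and a sleeping display after a longer silence — suggest a simple timeout scheme like the following sketch; the thresholds and state strings are invented for illustration, as the text specifies no particular time period.

```python
class CharacterState:
    """Derives the animated character's display state from the time
    elapsed since the last recognized command (times in seconds)."""

    IDLE_AFTER = 30.0    # assumed pause before small idle motions
    SLEEP_AFTER = 120.0  # assumed pause before the standby display

    def __init__(self):
        self.last_command_at = 0.0

    def on_command(self, now):
        # Any recognized command resets the inactivity clock.
        self.last_command_at = now

    def display(self, now):
        elapsed = now - self.last_command_at
        if elapsed >= self.SLEEP_AFTER:
            return "eyes closed, snoring"    # computer idle / standby
        if elapsed >= self.IDLE_AFTER:
            return "slight grin, blinking"   # waiting for input
        return "attentive"


state = CharacterState()
state.on_command(0.0)
```

Passing the state object increasing clock values walks it through the three displays, matching the grin, blink, and snore behavior described for components 20 and 30.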
The animated character component 30 may be exchanged by the user to display different animated characters 6 with different voices, depending on the user's preferences. Initially, the brain component 20 and the animated character component 30 are loaded into the computer at the same time. Later, when a new animated character 6 is desired, the user simply removes the old animated character component 30 from the computer 1 and downloads a new animated character component 30a, 30b, or 30c for a new animated character 6 from a CD-ROM or floppy disc or from a network server 50 connected to a wide area network 60, as shown in Fig. 3. It should be understood that one or both components 20 and 30 may be resident in the computer 1 or network based, where one or both components 20 and 30 are stored on the network server and remotely controlled by a client machine.
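Swapping character components 30, 30a, 30b, or 30c while keeping the brain component in place might look like this sketch; the classes, persona names, and file names are hypothetical.

```python
class AnimatedCharacter:
    """A character component (cf. 30, 30a-30c) bundling the graphic
    image files 33 and sound files 35 for one persona."""

    def __init__(self, name, image_files, sound_files):
        self.name = name
        self.image_files = image_files
        self.sound_files = sound_files


class InterfaceSystem:
    """Holds one brain (unchanged across swaps) and one character."""

    def __init__(self, character):
        self.character = character

    def swap_character(self, new_character):
        # Remove the old component and install the new one in its place.
        old, self.character = self.character, new_character
        return old


system = InterfaceSystem(
    AnimatedCharacter("butler", ["butler.bmp"], ["greeting.wav"]))
old = system.swap_character(
    AnimatedCharacter("robot", ["robot.bmp"], ["beep.wav"]))
```

Because the brain never changes, a swapped-in character keeps all registered commands and plug-ins; only the images and voice differ, which mirrors the interchangeability described above.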
While the invention has been disclosed with reference to certain preferred embodiments, numerous modifications, alterations, and changes to the described embodiments are possible without departing from the spirit and scope of the invention, as defined in the appended claims and equivalents thereof. Accordingly, it is intended that the invention not be limited to the described embodiments, but that it have the full scope defined by the language of the following claims.

Claims

What we claim is:
1. An interactive computer interface system comprising: a computer having a display monitor and a storage medium; a sound receiving device; a sound generating device; at least one machine executable file stored in the storage medium; a means for converting audio input into machine readable code; and an animated, interactive software program producing at least one of a visual and audible response to the audio input and performing tasks responsive to the audio input by interpreting the machine readable code.
2. The interactive computer interface system of claim 1, the computer including an operating system resident with at least one machine executable file, wherein the animated, interactive software program further comprises: a brain component for receiving and interpreting the machine readable code through the operating system and generating a response to the audio input; and an animated character component coupled to the brain component for receiving the response to the audio input and producing the at least one of a visual and audible response to the audio input.
3. The interactive computer interface system of claim 2 wherein the animated character component comprises a plurality of interchangeable graphic image files and sound files.
4. The interactive computer interface system of claim 3 wherein the sound files provide the audible responses.
5. The interactive computer interface system of claim 3 wherein the graphic image files are files selected from the group consisting of BMP files, PIC files, and combinations thereof.
6. The interactive computer interface system of claim 2 wherein the brain component comprises a plurality of software programs.
7. The interactive computer interface system of claim 6 wherein the software programs comprise plug-ins.
8. The interactive computer interface system of claim 6 wherein the brain component generates a set of task instructions and delivers the task instructions to one of the software programs.
9. The interactive computer interface system of claim 2 wherein the brain component and animated character component are located on a resident computer.
10. The interactive computer interface system of claim 2 wherein the brain component and animated character component are located on a network server.
11. The interactive computer interface system of claim 1 wherein the visual and audible response comprises a sound file playing to corresponding graphic image movements.
12. The interactive computer interface system of claim 1 wherein the at least one machine executable file comprises at least one of an operating program and an application software program.
13. The interactive computer interface system of claim 1 wherein the tasks comprise opening files and executing software applications.
14. The interactive computer interface system of claim 1 wherein the means for converting audio input into machine readable code comprises a speech recognition software program.
15. The interactive computer interface system of claim 14 wherein the speech recognition software program is equipped to substantially carry out any function on the computer.
16. An interactive user interface system for a computer, having memory to run computer operating system software and a plurality of software applications with the computer operating system software, comprising: a device that interacts with the memory to create a user input file; an interactive software program resident in the computer memory, including: a sensory output component for providing visual and audible output; and a brain component for interpreting the user input file, delivering commands for the sensory output component responsive to the user input file, and initiating user-defined tasks for the computer.
17. The interactive user interface system of claim 16 wherein the device comprises a speech recognition software program for converting user audio input into the user input file.
18. A method of operating an interactive computer interface system comprising: providing a computer having a display monitor and a storage medium, a sound generating device, and at least one machine executable file stored in the storage medium; inputting audio to a sound receiving device; converting the audio input into machine readable code; delivering the machine readable code to an animated interactive software program; and executing the at least one machine executable file associated with the machine readable code.
19. The method of claim 18 wherein the executing comprises: generating a visual and audible response.
20. The method of claim 18 wherein the executing comprises: performing tasks responsive to the audio input.
21. The method of claim 18 wherein the inputting comprises: speaking to an animated character displayed on the display monitor.
22. The method of claim 18 wherein the converting comprises: processing the audio input with a speech recognition program.
23. The method of claim 18 further comprising: generating at least one of a set of task instructions and a set of character instructions; and delivering any task instructions to the at least one machine executable file to carry out tasks associated with the instructions and any character instructions to the animated interactive software program to produce an animated character on the display monitor and audio output from a sound generating device.
24. The method of claim 18 further comprising: displaying a set of user commands; choosing a user command manually; delivering the user command to the animated interactive software program; and executing the at least one machine executable file associated with the user command.
PCT/US2000/011839 1999-05-03 2000-05-03 An animated, interactive computer interface system WO2000067111A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU46898/00A AU4689800A (en) 1999-05-03 2000-05-03 An animated, interactive computer interface system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13224999P 1999-05-03 1999-05-03
US60/132,249 1999-05-03

Publications (1)

Publication Number Publication Date
WO2000067111A1 true WO2000067111A1 (en) 2000-11-09

Family

ID=22453152

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/011839 WO2000067111A1 (en) 1999-05-03 2000-05-03 An animated, interactive computer interface system

Country Status (2)

Country Link
AU (1) AU4689800A (en)
WO (1) WO2000067111A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002057896A2 (en) * 2001-01-22 2002-07-25 Digital Animations Group Plc Interactive virtual assistant
WO2003032152A2 (en) * 2001-10-04 2003-04-17 Koninklijke Philips Electronics N.V. Device running a user interface application

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5377319A (en) * 1992-03-10 1994-12-27 Hitachi, Ltd. Help guidance method utilizing an animated picture
EP0691609A1 (en) * 1994-07-08 1996-01-10 Microsoft Corporation Software platform having a real world interface with animated characters
US5657462A (en) * 1993-11-17 1997-08-12 Collegeview Partnership Method and apparatus for displaying animated characters upon a computer screen in which a composite video display is merged into a static background such that the border between the background and the video is indiscernible
WO1999005671A1 (en) * 1997-07-24 1999-02-04 Knowles Electronics, Inc. Universal voice operated command and control engine



Also Published As

Publication number Publication date
AU4689800A (en) 2000-11-17


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP