Publication number: US 20050137877 A1
Publication type: Application
Application number: US 10/738,460
Publication date: 23 Jun 2005
Filing date: 17 Dec 2003
Priority date: 17 Dec 2003
Also published as: US8751241, US20080215336
Inventors: Christopher Oesterling, William Mazzara, Jeffrey Stefan
Original assignee: General Motors Corporation
External links: USPTO, USPTO assignment, Espacenet
Method and system for enabling a device function of a vehicle
US 20050137877 A1
Abstract
The current invention provides a method and system for enabling a device function of a vehicle. A speech input stream is received at a telematics unit. A speech input context is determined for the received speech input stream. The received speech input stream is processed based on the determination and the device function of the vehicle is enabled responsive to the processed speech input stream. A vehicle device in control of the enabled device function of the vehicle is directed based on the processed speech input stream. A computer usable medium with suitable computer program code is employed for enabling a device function of a vehicle.
Images (6)
Claims (18)
1. A method for enabling a device function of a vehicle, the method comprising:
receiving a speech input stream at a telematics unit;
determining a speech input context for the received speech input stream;
processing the received speech input stream based on the determination; and
enabling the device function of the vehicle responsive to the processed speech input stream.
2. The method of claim 1 wherein determining a speech input context for the received speech input stream comprises:
monitoring the speech input stream at a context recognizer, the context recognizer comprising a context verbiage;
comparing the speech input stream to the context verbiage; and
selecting one of a plurality of domain specific actuators based on the determined speech input context.
3. The method of claim 1 wherein processing the received speech input stream comprises:
accessing a set of rules and structures for formatting the speech input stream according to the determined speech input context; and
formatting the received speech input stream based on the set of rules and the structures.
4. The method of claim 3, wherein the set of rules and structures are contained in a domain specific actuator.
5. The method of claim 1 wherein enabling the device function of the vehicle comprises:
writing the processed speech input stream in an activation cache;
activating a vehicle device corresponding to the device function of the vehicle; and
supplying the processed speech input stream from the activation cache to the vehicle device.
6. The method of claim 1 further comprising:
directing a vehicle device in control of the enabled device function of the vehicle based on the processed speech input stream.
7. A computer usable medium including computer program code for enabling a device function of a vehicle comprising:
computer program code for receiving a speech input stream at a telematics unit;
computer program code for determining a speech input context for the received speech input stream;
computer program code for processing the received speech input stream based on the determination; and
computer program code for enabling the device function of the vehicle responsive to the processed speech input stream.
8. The computer usable medium of claim 7 wherein computer program code for determining a speech input context for the received speech input stream comprises:
computer program code for monitoring the speech input stream at a context recognizer, the context recognizer comprising a context verbiage;
computer program code for comparing the speech input stream to the context verbiage; and
computer program code for selecting one of a plurality of domain specific actuators based on the determined speech input context.
9. The computer usable medium of claim 7 wherein processing the received speech input stream comprises:
computer program code for accessing a set of rules and structures for formatting the speech input stream according to the determined speech input context; and
computer program code for formatting the received speech input stream based on the set of rules and the structures.
10. The computer usable medium of claim 9 wherein the set of rules and structures are contained in a domain specific actuator.
11. The computer usable medium of claim 7 wherein enabling the device function of the vehicle comprises:
computer program code for writing the processed speech input stream in an activation cache;
computer program code for activating a vehicle device corresponding to the enabled device function of the vehicle; and
computer program code for supplying the processed speech input stream from the activation cache to the vehicle device.
12. The computer usable medium of claim 7 further comprising:
computer program code for directing a vehicle device in control of the enabled device function of the vehicle based on the processed speech input stream.
13. A system for enabling a device function of a vehicle, the system comprising:
means for receiving a speech input stream at a telematics unit;
means for determining a speech input context for the received speech input stream;
means for processing the received speech input stream based on the determination; and
means for enabling the device function of the vehicle responsive to the processed speech input stream.
14. The system of claim 13 wherein determining a speech input context for the received speech input stream comprises:
means for monitoring the speech input stream at a context recognizer, the context recognizer comprising a context verbiage;
means for comparing the speech input stream to the context verbiage; and
means for selecting one of a plurality of domain specific actuators based on the determined speech input context.
15. The system of claim 13 wherein processing the received speech input stream comprises:
means for accessing a set of rules and structures for formatting the speech input stream according to the determined speech input context; and
means for formatting the received speech input stream based on the set of rules and the structures.
16. The system of claim 15 wherein the set of rules and structures are contained in a domain specific actuator.
17. The system of claim 13 wherein enabling the device function of the vehicle comprises:
means for writing the processed speech input stream in an activation cache;
means for activating a vehicle device corresponding to the enabled device function of the vehicle; and
means for supplying the processed speech input stream from the activation cache to the vehicle device.
18. The system of claim 13 further comprising:
means for directing a vehicle device in control of the enabled device function of the vehicle based on the processed speech input stream.
Description
    FIELD OF THE INVENTION
  • [0001]
    This invention relates generally to telematics systems. In particular the invention relates to a method and system for enabling a device function of a vehicle.
  • BACKGROUND OF THE INVENTION
  • [0002]
    One of the fastest growing areas of communications technology is related to automobile network solutions. The demand and potential for wireless vehicle communication, networking and diagnostic services have recently increased. Although many vehicles on the road today have limited wireless communication functions, such as unlocking a door and setting or disabling a car alarm, new vehicles offer additional wireless communication systems that help personalize comfort settings, run maintenance and diagnostic functions, place telephone calls, access call-center information, update controller systems, determine vehicle location, assist in tracking a vehicle after it is stolen, and provide other vehicle-related services. Drivers can call telematics call centers and receive navigational, concierge, emergency, and location services, as well as other specialized help such as locating the geographical position of a stolen vehicle and honking the horn of a vehicle when the owner cannot locate it in a large parking garage. Telematics service providers can offer enhanced telematics services by supplying a subscriber with a digital handset.
  • [0003]
    With speech recognition available in today's vehicles, a driver can control devices within the vehicle without removing his or her hands from the steering wheel. Drivers receive various forms of information while operating a vehicle, such as phone numbers or destination addresses. While a driver is on the road, it is not convenient to record that information and then input it to a vehicle device such as an in-vehicle phone or navigation system. Information of interest to a driver can be part of a conversation the driver has with another person and not in a format directly usable by a vehicle device.
  • [0004]
    The driver can receive a business address as part of a conversation with a person at the business. To use that address with the vehicle's navigation system, the driver must remember or record the address, enable the navigation system and input the address to the navigation system. This requirement is both an inconvenience for the driver and a limitation that decreases the driver's satisfaction with the capabilities of the navigation system.
  • [0005]
    It is desirable therefore, to provide a method and system for enabling a device function of a vehicle, that overcomes the challenges and obstacles described above.
  • SUMMARY OF THE INVENTION
  • [0006]
    The current invention provides a method for enabling a device function of a vehicle. A speech input stream is received at a telematics unit. A speech input context is determined for the received speech input stream. The received speech input stream is processed based on the determination and the device function of the vehicle is enabled responsive to the processed speech input stream. The method further comprises directing a vehicle device in control of the device function based on the processed speech input stream.
  • [0007]
    Another aspect of the current invention provides a computer usable medium including computer program code for enabling a device function of a vehicle. The computer usable medium comprises: computer program code for receiving a speech input stream at a telematics unit; computer program code for determining a speech input context for the received speech input stream; computer program code for processing the received speech input stream based on the determination; and computer program code for enabling the device function of the vehicle responsive to the processed speech input stream. The computer usable medium further comprises computer program code for directing a vehicle device in control of the device function based on the processed speech input stream.
  • [0008]
    Another aspect of the current invention provides a system for enabling a device function of a vehicle. The system comprises: means for receiving a speech input stream at a telematics unit; means for determining a speech input context for the received speech input stream; means for processing the received speech input stream based on the determination; and means for enabling the device function of the vehicle responsive to the processed speech input stream. The system further comprises means for directing a vehicle device in control of the device function based on the processed speech input stream.
  • [0009]
    The aforementioned and other features and advantages of the invention will become further apparent from the following detailed description of the presently preferred embodiment, read in conjunction with the accompanying drawings. The detailed description and drawings are merely illustrative of the invention rather than limiting, the scope of the invention being defined by the appended claims and equivalents thereof.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0010]
    FIG. 1 is a schematic diagram of a system for enabling a device function of a vehicle in accordance with one embodiment of the current invention;
  • [0011]
    FIG. 2 is a flow diagram of a method for enabling a device function of a vehicle in accordance with one embodiment of the current invention;
  • [0012]
    FIG. 3 is a flow diagram detailing the step of determining the speech input context at block 220 of FIG. 2;
  • [0013]
    FIG. 4 is a flow diagram detailing the step of processing the received speech input stream at block 230 of FIG. 2; and
  • [0014]
    FIG. 5 is a flow diagram detailing the step of enabling the device function of the vehicle at block 240 of FIG. 2.
  • DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EMBODIMENTS
  • [0015]
    FIG. 1 is a schematic diagram of a system for enabling a device function of a vehicle in accordance with one embodiment of the current invention at 100. The system for enabling a device function of a vehicle at 100 comprises: a mobile vehicle 110, a telematics unit 120, one or more wireless carrier systems 140, or one or more satellite carrier systems 141, one or more communication networks 142, and one or more call centers 180. Mobile vehicle 110 is a vehicle such as a car or truck equipped with suitable hardware and software for transmitting and receiving speech and data communications. Vehicle 110 has a multimedia system 118 having one or more speakers 117.
  • [0016]
    In one embodiment of the invention, telematics unit 120 comprises: a digital signal processor (DSP) 122 connected to a wireless modem 124; a global positioning system (GPS) receiver or GPS unit 126; an in-vehicle memory 128; a microphone 130; one or more speakers 132; an embedded or in-vehicle phone 134 or an email access appliance 136; and a display 138. DSP 122 is also referred to as a microcontroller, controller, host processor, ASIC, or vehicle communications processor. GPS unit 126 provides longitude and latitude coordinates of the vehicle, as well as a time stamp and a date stamp. In-vehicle phone 134 is an analog, digital, dual-mode, dual-band, multi-mode or multi-band cellular phone.
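    The GPS data mentioned above (longitude and latitude coordinates plus a time stamp and a date stamp) can be pictured as a small record. The Python sketch below is illustrative only; the class and field names are assumptions, not part of the disclosure.
```python
from dataclasses import dataclass
import datetime

@dataclass
class GpsFix:
    """Position report from a GPS unit such as GPS unit 126: vehicle
    coordinates plus a time stamp and a date stamp (names assumed)."""
    latitude_deg: float
    longitude_deg: float
    time_stamp: datetime.time
    date_stamp: datetime.date

# Example with arbitrary values:
# fix = GpsFix(latitude_deg=42.33, longitude_deg=-83.05,
#              time_stamp=datetime.time(12, 0), date_stamp=datetime.date(2003, 12, 17))
```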
  • [0017]
    Telematics unit 120 can store a processed speech input stream, GPS location data, and other data files in in-vehicle memory 128. Telematics unit 120 can set or reset calling-state indicators and can enable or disable various cellular-phone functions, telematics-unit functions and vehicle functions when directed by program code running on DSP 122. Telematics unit 120 can send and receive over-the-air messages using, for example, a pseudo-standard air-interface function or other proprietary and non-proprietary communication links.
  • [0018]
    DSP 122 executes various computer programs and computer program code, within telematics unit 120, which control programming and operational modes of electronic and mechanical systems. DSP 122 controls communications between telematics unit 120, wireless carrier system 140 or satellite carrier system 141 and call center 180. A speech-recognition engine 119, which can translate human speech input through microphone 130 to digital signals used to control functions of telematics unit, is installed in telematics unit 120. The interface to telematics unit 120 includes one or more buttons (not shown) on telematics unit 120, on multimedia system 118, or on an associated keyboard or keypad that are also used to control functions of telematics unit. A text to speech synthesizer 121 can convert text strings to audible messages that are played through speaker 132 of telematics unit 120 or through speakers 117 of multimedia system 118.
  • [0019]
    Speech recognition engine 119 and buttons are used to activate and control various functions of telematics unit 120. For example, programming of in-vehicle phone 134 is controlled with verbal commands that are translated by speech-recognition software executed by DSP 122. Alternatively, pushing buttons on the interface of telematics unit 120 or on in-vehicle phone 134 is used to program in-vehicle phone 134. In another embodiment, the interface to telematics unit 120 includes other forms of preference and data entry including touch-screens, wired or wireless keypad remotes, or other wirelessly connected devices such as Bluetooth-enabled devices or 802.11-enabled devices.
  • [0020]
    In one embodiment of the current invention, speech recognition engine 119 comprises a configurable listener automaton 111 that receives a speech input stream and processes the speech input stream according to a set of rules and structures defined in a domain specific actuator. The listener automaton 111 writes the processed speech input stream to an activation cache that is a portion of in-vehicle memory 128. DSP 122 executes computer program code comprising a context recognizer and associated domain specific actuators, within telematics unit 120, which control operation and configuration of the listener automaton 111. DSP 122 controls communications between telematics unit 120, listener automaton 111, and activation cache in in-vehicle memory 128. Data in the activation cache is supplied to the vehicle devices 115 through vehicle bus 112.
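    The relationship between the listener automaton, the context recognizer, the domain specific actuators and the activation cache described in this paragraph can be sketched in a few classes. The Python sketch below is a minimal illustration under assumed names, rule formats and a dictionary-backed cache; it is not the disclosed implementation.
```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DomainSpecificActuator:
    """Rules and structures for formatting speech for one device function."""
    device_function: str                  # e.g. "navigation" or "personal_calling"
    keywords: List[str]                   # context verbiage (lower case) for this domain
    format_rule: Callable[[str], dict]    # turns raw speech text into device-ready data

class ContextRecognizer:
    """Compares the speech input stream to context verbiage and picks an actuator."""
    def __init__(self, actuators: List[DomainSpecificActuator]):
        self.actuators = actuators

    def select(self, speech_text: str) -> DomainSpecificActuator:
        lowered = speech_text.lower()
        for actuator in self.actuators:
            if any(word in lowered for word in actuator.keywords):
                return actuator
        raise LookupError("no matching speech input context")

class ListenerAutomaton:
    """Processes the stream per the selected actuator and fills the activation cache."""
    def __init__(self, recognizer: ContextRecognizer, activation_cache: Dict[str, dict]):
        self.recognizer = recognizer
        self.activation_cache = activation_cache   # stands in for a region of memory 128

    def handle(self, speech_text: str) -> str:
        actuator = self.recognizer.select(speech_text)
        self.activation_cache[actuator.device_function] = actuator.format_rule(speech_text)
        return actuator.device_function
```
    A vehicle device in control of the selected device function would then read its entry from the cache, much as the paragraph describes data flowing from the activation cache to vehicle devices 115 over vehicle bus 112.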
  • [0021]
    DSP 122 controls, generates and accepts digital signals transmitted between telematics unit 120 and a vehicle communication bus 112 that is connected to various vehicle components 114, vehicle devices 115, various sensors 116, and multimedia system 118 in mobile vehicle 110. DSP 122 can activate various programming and operation modes, as well as provide for data transfers. In facilitating interactions among the various communication and electronic modules, vehicle communication bus 112 utilizes bus interfaces such as controller-area network (CAN), J1850, International Organization for Standardization (ISO) Standard 9141, ISO Standard 11898 for high-speed applications, and ISO Standard 11519 for lower speed applications.
  • [0022]
    Mobile vehicle 110 via telematics unit 120 sends and receives radio transmissions from wireless carrier system 140, or satellite carrier system 141. Wireless carrier system 140, or satellite carrier system 141 is any suitable system for transmitting a signal from mobile vehicle 110 to communication network 142.
  • [0023]
    Communication network 142 includes services from mobile telephone switching offices, wireless networks, public-switched telephone networks (PSTN), and Internet protocol (IP) networks. Communication network 142 comprises a wired network, an optical network, a fiber network, another wireless network, or any combination thereof. Communication network 142 connects to mobile vehicle 110 via wireless carrier system 140, or satellite carrier system 141.
  • [0024]
    Communication network 142 can send and receive short messages according to established protocols such as dedicated short range communication standard (DSRC), IS-637 standards for short message service (SMS), IS-136 air-interface standards for SMS, and GSM 03.40 and 09.02 standards. In one embodiment of the invention, similar to paging, an SMS communication is posted along with an intended recipient, such as a communication device in mobile vehicle 110.
  • [0025]
    Call center 180 is a location where many calls are received and serviced at the same time, or where many calls are sent at the same time. In one embodiment of the invention, the call center is a telematics call center, facilitating communications to and from telematics unit 120 in mobile vehicle 110. In another embodiment, the call center 180 is a voice call center, providing verbal communications between a communication services advisor 185 in call center 180 and a subscriber. In another embodiment, call center 180 contains each of these functions.
  • [0026]
    Communication services advisor 185 is a real advisor or a virtual advisor. A real advisor is a human being in verbal communication with a user or subscriber. A virtual advisor is a synthesized speech interface responding to requests from user or subscriber. In one embodiment, the virtual advisor includes one or more recorded messages. In another embodiment, the virtual advisor generates speech messages using a call center based text to speech synthesizer (TTS). In another embodiment, the virtual advisor includes both recorded and TTS generated messages.
  • [0027]
    Call center 180 provides services to telematics unit 120. Communication services advisor 185 provides one of a number of support services to a subscriber. Call center 180 can transmit and receive data via a data signal to telematics unit 120 in mobile vehicle 110 through wireless carrier system 140, satellite carrier systems 141, or communication network 142.
  • [0028]
    Call center 180 can determine mobile identification numbers (MINs) and telematics unit identifiers associated with a telematics unit access request, compare MINs and telematics unit identifiers with a database of identifier records, and send calling-state messages to the telematics unit 120 based on the request and identification numbers.
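    As a rough illustration of this check, the comparison of a request's MIN and telematics unit identifier against a database of identifier records can be sketched as a lookup followed by a calling-state reply; the record layout and message fields below are assumptions made for illustration.
```python
from typing import Dict, Tuple

# Identifier records keyed by (MIN, telematics unit identifier); contents assumed.
IdentifierDb = Dict[Tuple[str, str], dict]

def calling_state_message(db: IdentifierDb, min_number: str, unit_id: str) -> dict:
    """Compare the request identifiers with the database and build a calling-state reply."""
    record = db.get((min_number, unit_id))
    if record is None:
        return {"state": "denied", "reason": "unknown MIN or telematics unit identifier"}
    return {"state": "allowed", "subscriber": record.get("subscriber", "")}
```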
  • [0029]
    Communication network 142 connects wireless carrier system 140 or satellite carrier system 141 to a user computer 150, a wireless or wired phone 160, a handheld device 170, such as a personal digital assistant, and call center 180. User computer 150 or handheld device 170 has a wireless modem to send data through wireless carrier system 140, or satellite carrier system 141, which connects to communication network 142. In another embodiment, user computer 150 or handheld device 170 has a wired modem that connects to communications network 142. Data is received at call center 180. Call center 180 has any suitable hardware and software capable of providing web services to help transmit messages and data signals from user computer 150 or handheld device 170 to telematics unit 120 in mobile vehicle 110.
  • [0030]
    FIG. 2 is a flow diagram of a method for enabling a device function of a vehicle in accordance with one embodiment of the current invention at 200. The method for enabling a device function of a vehicle at 200 begins (block 205) when a speech-input stream is received at a telematics unit from a speech source (block 210). The speech source can be human speech or speech generated by a speech synthesizer. A speech input context is determined for the received speech input stream (block 220). The speech input context identifies the framework in which to interpret the received speech input stream. The speech input context associates the speech input stream with a specific device function of the vehicle, such as navigation or personal calling.
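    The blocks of FIG. 2 can be strung together as a short procedural sketch. The function below is illustrative only; the callables it takes stand in for blocks 220 through 250, and their names are assumptions.
```python
from typing import Callable, Dict

def run_speech_flow(
    speech_input_stream: str,                              # block 210: received stream
    determine_context: Callable[[str], str],               # block 220
    process_stream: Callable[[str, str], dict],            # block 230
    vehicle_devices: Dict[str, Callable[[dict], None]],    # devices keyed by device function
) -> None:
    """Walk the FIG. 2 flow once for a received speech input stream."""
    context = determine_context(speech_input_stream)            # block 220
    processed = process_stream(speech_input_stream, context)    # block 230
    device = vehicle_devices[context]                           # block 240: enable the function
    device(processed)                                           # block 250: direct the device
```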
  • [0031]
    The received speech input is processed based on the determined speech input context (block 230). The device function of the vehicle is enabled responsive to the processed speech input stream (block 240). The vehicle device in control of the enabled device function of the vehicle is directed based on the processed speech input stream (block 250). An example of a vehicle device is the navigation system of the vehicle and the corresponding device function of the vehicle is navigation. The method ends (block 295).
  • [0032]
    FIG. 3 is a flow diagram detailing the step of determining the speech input context at block 220 of FIG. 2. The step of determining the speech input context at 300 begins (block 305) with monitoring the speech input stream at a context recognizer (block 310). The context recognizer comprises a context verbiage. The speech input stream is compared to the context verbiage (block 320). An example of verbiage contained in the context recognizer is the word “street” preceded by a text string. This verbiage is used to identify an address as a component of the speech input stream.
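    The “street” example above is essentially a keyword-preceded-by-text check. A regular-expression sketch is shown below; the pattern and function name are assumptions for illustration only.
```python
import re

# Verbiage check: the word "street" preceded by some text marks an address component.
_ADDRESS_VERBIAGE = re.compile(r"\b\S+\s+street\b", re.IGNORECASE)

def matches_address_context(speech_text: str) -> bool:
    """Return True when the stream contains verbiage matching an address context."""
    return _ADDRESS_VERBIAGE.search(speech_text) is not None

# Example: matches_address_context("twelve hundred Main street") -> True
#          matches_address_context("street") -> False (no preceding text string)
```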
  • [0033]
    In one embodiment, a speech input stream comprised of numerical utterances followed by non-numerical utterances is associated with a navigation destination address context. In another embodiment, a speech input stream comprised of numerical utterances is associated with a directory assistance context.
  • [0034]
    Each device function of the vehicle is assigned a domain specific actuator. The domain specific actuator contains a set of rules and structures that determine how to format the speech input stream for the corresponding vehicle device that controls the particular device function of the vehicle. One of a plurality of domain specific actuators is selected based on the comparison of the speech input stream to the context verbiage (block 330) and the step ends (block 395).
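    Combining the embodiments of the preceding paragraph with the per-function actuators described here, context selection reduces to classifying the utterances in the stream and looking up the matching domain specific actuator. The classification test and the actuator table in the Python sketch below are assumptions made for illustration, not the disclosed rules.
```python
from typing import Callable, Dict, Optional

def classify_utterances(speech_text: str) -> str:
    """Crude split of the stream into numerical vs. non-numerical utterances."""
    tokens = speech_text.split()
    numeric = any(tok.isdigit() for tok in tokens)
    non_numeric = any(not tok.isdigit() for tok in tokens)
    if numeric and non_numeric:
        return "navigation_destination_address"   # first embodiment above
    if numeric:
        return "directory_assistance"             # second embodiment above
    return "unknown"

# One actuator per device function of the vehicle; formatting rules are placeholders.
DOMAIN_SPECIFIC_ACTUATORS: Dict[str, Callable[[str], dict]] = {
    "navigation_destination_address": lambda text: {"destination": text.strip()},
    "directory_assistance": lambda text: {"digits": [t for t in text.split() if t.isdigit()]},
}

def select_actuator(speech_text: str) -> Optional[Callable[[str], dict]]:
    """Block 330: select one of the domain specific actuators from the determined context."""
    return DOMAIN_SPECIFIC_ACTUATORS.get(classify_utterances(speech_text))
```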
  • [0035]
    In one example of the system and method for enabling a device function of a vehicle, a subscriber contacts directory assistance to obtain a phone number for a business. The directory assistance operator speaks the phone number for the business. The spoken phone number is the speech input stream in this example. The context recognizer identifies the string of numbers as a phone number by matching the received phone number to context verbiage corresponding to a phone number string. The context recognizer, having determined that a phone number is being received, selects a domain specific actuator for personal calling. The speech input stream is then formatted so that the phone number is available for use by the subscriber's in-vehicle phone or personal phonebook. The phone number is written to the activation cache, and the personal calling device function is thereby enabled with the phone number data.
  • [0036]
    In another example, continuing the previous one, the subscriber's personal calling device is directed to request what action the subscriber would like to take regarding the received phone number. The personal calling device sends the subscriber a prompt asking whether to dial or to store the phone number.
  • [0037]
    FIG. 4 is a flow diagram detailing the step of processing the received speech input stream at block 230 of FIG. 2. The step of processing the received speech input stream at 400 begins (block 405) by accessing a set of rules and structures for formatting the speech input stream according to the determined speech input context (block 410). The set of rules and structures are contained in the domain specific actuator. The received speech input stream is formatted based on the set of rules and structures (block 420). For example, if the speech input stream includes a phone number, the speech input stream is formatted so that the phone number and other relevant data, such as the entity associated with the phone number, are available to, and in the proper format for, personal calling. The step ends (block 495).
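    For the phone-number case described here, the formatting rules of a personal-calling actuator might look like the sketch below; the spoken-digit vocabulary and the output fields are assumptions, not taken from the disclosure.
```python
# Spoken-digit vocabulary assumed for illustration.
_DIGIT_WORDS = {
    "zero": "0", "oh": "0", "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
}

def format_for_personal_calling(speech_text: str, entity: str = "") -> dict:
    """Block 420: format the stream so the phone number, and the entity associated
    with it when known, are in a form the personal calling function can use."""
    digits = "".join(
        _DIGIT_WORDS.get(tok, tok if tok.isdigit() else "")
        for tok in speech_text.lower().split()
    )
    return {"phone_number": digits, "entity": entity}

# Example: format_for_personal_calling("five five five one two one two", "Acme Hardware")
#          -> {"phone_number": "5551212", "entity": "Acme Hardware"}
```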
  • [0038]
    FIG. 5 is a flow diagram detailing the step of enabling the device function of the vehicle at block 240 of FIG. 2. The step of enabling the device function of the vehicle at 500 begins (block 505) with writing the processed speech input stream in an activation cache (block 510). The activation cache is a memory location where a vehicle device can access the processed speech input stream. The vehicle device corresponding to the enabled device function of the vehicle is activated (block 520). The processed speech input stream from the activation cache is supplied to the vehicle device (block 530) and the step ends (block 595). In the example where the device function of the vehicle is personal calling, the vehicle device corresponding to personal calling is the in-vehicle phone. A phone number processed from the speech input stream and written to the activation cache would be supplied to the in-vehicle phone for dialing or storing.
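    The write, activate and supply steps of FIG. 5 map onto a short sequence against shared memory. The dictionary-backed cache and the VehicleDevice stub in the sketch below are assumptions used only to illustrate that sequence.
```python
from typing import Dict

class VehicleDevice:
    """Stand-in for a vehicle device such as the in-vehicle phone (illustrative only)."""
    def __init__(self, name: str):
        self.name = name
        self.active = False
        self.last_input: dict = {}

    def activate(self) -> None:             # block 520
        self.active = True

    def consume(self, data: dict) -> None:  # block 530
        self.last_input = data

def enable_via_activation_cache(activation_cache: Dict[str, dict],
                                device_function: str,
                                processed_stream: dict,
                                device: VehicleDevice) -> None:
    activation_cache[device_function] = processed_stream    # block 510: write to the cache
    device.activate()                                        # block 520: activate the device
    device.consume(activation_cache[device_function])        # block 530: supply the data

# Example: a processed phone number supplied to the in-vehicle phone for dialing or storing.
# phone = VehicleDevice("in-vehicle phone")
# enable_via_activation_cache({}, "personal_calling", {"phone_number": "5551212"}, phone)
```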
  • [0039]
    While embodiments of the invention disclosed herein are presently considered to be preferred, various changes and modifications can be made without departing from the spirit and scope of the invention. The scope of the invention is indicated in the appended claims, and all changes that come within the meaning and range of equivalents are intended to be embraced therein.
Classifications
U.S. Classification: 704/275, 704/E15.045
International Classification: G10L15/26
Cooperative Classification: G10L15/26
European Classification: G10L15/26A
Legal events
Date          Code  Event       Description
17 Dec 2003   AS    Assignment
Owner name: GENERAL MOTORS CORPORATION, MICHIGAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OSTERLING, CHRISTOPHER;MAZZARA, WILLIAM E.;STEFAN, JEFFREY M.;REEL/FRAME:014825/0765
Effective date: 20031201