US20040077285A1 - Method, apparatus, and system for simulating visual depth in a concatenated image of a remote field of action - Google Patents

Method, apparatus, and system for simulating visual depth in a concatenated image of a remote field of action

Info

Publication number
US20040077285A1
US20040077285A1 (application US10/421,374)
Authority
US
United States
Prior art keywords
field, view, image, action, video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/421,374
Inventor
Victor Bonilla
James McCabe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Racing Visions Investments Inc
Original Assignee
Bonilla Victor G.
McCabe James W.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bonilla Victor G. and McCabe James W.
Priority to US10/421,374
Publication of US20040077285A1
Assigned to RACING VISIONS INVESTMENTS, INC. Assignors: BONILLA, VICTOR G.; MCCABE, JAMES W.
Status: Abandoned

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63H: TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H30/00: Remote-control arrangements specially adapted for toys, e.g. for toy vehicles
    • A63H30/02: Electrical arrangements
    • A63H30/04: Electrical arrangements using wireless transmission
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63H: TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H33/00: Other toys
    • A63H33/42: Toy models or toy scenery not otherwise covered
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63J: DEVICES FOR THEATRES, CIRCUSES, OR THE LIKE; CONJURING APPLIANCES OR THE LIKE
    • A63J13/00: Panoramas, dioramas, stereoramas, or the like
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Definitions

  • FIG. 8 is a schematic top view of one embodiment of multiple, overlapping fields of view 800 of a plurality of video cameras 520 of the present invention.
  • The depicted schematic shows coverage of a field of action by a plurality of video camera 520 fields of view 740.
  • Although the fields of view 740 are depicted using eight video cameras 520, other quantities, orientations, or combinations of video cameras 520 may be employed.
  • The depicted arrangement 800 includes one or more video cameras 520, one or more fields of view 740, one or more clockwise sub fields of view 720, and one or more counterclockwise sub fields of view 730.
  • The field of view 740 of each video camera 520 is divided into a clockwise sub field of view 720 and a counterclockwise sub field of view 730.
  • Two or more clockwise sub fields of view 720 may be concatenated together to form a clockwise field of action image.
  • Two or more counterclockwise sub fields of view 730 may similarly be concatenated to form a counterclockwise field of action image.
  • The clockwise and the counterclockwise field of action images may be used to simulate three-dimensional visual depth.
  • In one embodiment, clockwise and counterclockwise field of action images are alternately displayed to a user's right and left eyes to provide three-dimensional visual depth in the field of action image.
  • For example, a clockwise field of action image is displayed to a user's right eye and a counterclockwise field of action image is displayed to a user's left eye.
  • The clockwise and counterclockwise field of action images may also be combined for display in a polarized three-dimensional display.
  • FIG. 9 is a flow chart illustrating one embodiment of a visual depth image generation method 900 of the present invention.
  • The method 900 generates alternating clockwise and counterclockwise images of a field of action.
  • The visual depth image generation method 900 includes a process clockwise sub fields of view step 910, a process counterclockwise sub fields of view step 920, a terminate test 930, and an end step 940.
  • The process clockwise sub fields of view step 910 prepares a clockwise sub field of view 720 for display.
  • The step 910 may employ the field of view concatenation method 600 to concatenate clockwise sub fields of view 720 into a clockwise field of action image.
  • The process counterclockwise sub fields of view step 920 prepares counterclockwise sub fields of view 730 for display.
  • The step 920 may also employ the field of view concatenation method 600 to concatenate counterclockwise sub fields of view 730 into a counterclockwise field of action image.
  • In one embodiment, the process clockwise sub fields of view step 910 displays the clockwise field of action image and the process counterclockwise sub fields of view step 920 displays the counterclockwise field of action image.
  • FIG. 10 is a block diagram of one embodiment of a field of view processing system 1000 of the present invention.
  • The depicted system 1000 prepares video images for transmission to a display device.
  • Although the field of view processing system 1000 is illustrated using a network to transmit images, other transmission mechanisms may be employed.
  • The field of view processing system 1000 includes one or more video cameras 1010, a video splitting module 1020, a video processing module 1030, a video memory module 1040, a packet transmission module 1050, a packet receipt module 1070, and an image display module 1080.
  • The video camera 1010 captures a video image of a field of view 740.
  • The video splitting module 1020 splits the video image into a clockwise sub field of view 720 and a counterclockwise sub field of view 730.
  • In one embodiment, the clockwise and counterclockwise sub fields of view cover distinct, separate visual spaces. In an alternate embodiment, the clockwise and counterclockwise sub fields of view share portions of visual space.
  • The video processing module 1030 prepares the video camera field of view for display.
  • The video processing module 1030 concatenates two or more clockwise sub fields of view 720 into a clockwise field of action image.
  • The video processing module 1030 also concatenates two or more counterclockwise sub fields of view 730 into a counterclockwise field of action image.
  • The video memory module 1040 stores a video image and a video algorithm. The video memory module 1040 may store the video image and the video algorithm for concatenating the field of action image.
  • The packet transmission module 1050 prepares the field of action image for transmission as an image data packet over a network (see the sketch following this list).
  • In one embodiment, the clockwise and counterclockwise field of action image data are compressed and transmitted in separate data packets. In an alternate embodiment, the clockwise and counterclockwise image data are compressed and transmitted in shared data packets.
  • The packet receipt module 1070 receives the image data packet. The packet receipt module 1070 decompresses the image data packet into a displayable format of the field of action image.
  • The image display module 1080 displays the field of action image.
  • The clockwise and the counterclockwise field of action images may be displayed to the right and left eyes of a user, simulating three-dimensional visual depth. In an alternate embodiment, the clockwise and the counterclockwise field of action images are combined in a polarized display with simulated three-dimensional visual depth.
  • FIG. 11 is a simplified side view drawing of one embodiment of a video camera/mirror system 1100 of the present invention.
  • The system 1100 includes a first video camera 1110, a second video camera 1120, one or more mirrors 1130, and a common point 1140. Although for purposes of clarity only two video cameras and two mirrors are illustrated, any number of cameras and mirrors may be employed.
  • The first video camera 1110 is positioned to capture an image of a portion of a visual space of a field of action as reflected by the mirror 1130.
  • The mirror 1130 is positioned to locate the center of the virtual focal plane of the first camera 1110 at approximately the common point 1140 in space shared by the center of the virtual focal plane of the second video camera 1120. Positioning the virtual focal planes of the first camera 1110 and the second camera 1120 at the common point 1140 may eliminate parallax effects when images from the cameras 1110, 1120 are concatenated.
  • The present invention allows a user to maneuver a vehicle over a digital data network using visual feedback from an image covering a visual space of the vehicle's field of action.
  • Two or more field of view images are concatenated into a field of action image with consistent visual feedback cues.
  • Multiple field of action images are generated to allow visual feedback with simulated three-dimensional visual depth, improving the visual cues provided to the user.
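The pipeline of FIGS. 9 and 10, referenced above, can be illustrated with a short Python sketch: concatenate the clockwise sub fields of view into one eye's image and the counterclockwise sub fields into the other's, then frame each compressed image as a data packet. The function names and header layout are assumptions made for the example, and zlib stands in for whatever video compression the system would actually use.

```python
# Illustrative packetization of per-eye field of action images (FIGS. 9
# and 10). Header layout, eye flag convention, and zlib compression are
# all assumptions made for the example.
import struct
import zlib
import numpy as np

def build_eye_image(sub_fields):
    # Stand-in for concatenation method 600: horizontal concatenation of
    # already-aligned sub fields of view.
    return np.hstack(sub_fields)

def to_packet(eye: int, image: np.ndarray) -> bytes:
    # Header: eye flag (0 = left/counterclockwise, 1 = right/clockwise),
    # then image height and width.
    payload = zlib.compress(image.tobytes())
    h, w = image.shape
    return struct.pack(">BHH", eye, h, w) + payload

def from_packet(packet: bytes):
    eye, h, w = struct.unpack(">BHH", packet[:5])
    data = zlib.decompress(packet[5:])
    return eye, np.frombuffer(data, dtype=np.uint8).reshape(h, w)

# Example: two cameras' clockwise sub fields form the right-eye image.
cw = [np.zeros((240, 192), np.uint8), np.zeros((240, 192), np.uint8)]
eye, image = from_packet(to_packet(1, build_eye_image(cw)))
```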

Abstract

A method, apparatus, and system are disclosed for simulating visual depth in a concatenated image of a remote field of action. A vision system provides multiple video camera fields of view covering a visual space of a field of action. Video image fields of view are divided into clockwise and counterclockwise sub fields of view. The clockwise sub fields of view and the counterclockwise sub fields of view cover the visual space of a field of action. Sub fields of view are concatenated into clockwise and counterclockwise images of the field of action capable of simulating visual depth.

Description

    BACKGROUND OF THE INVENTION
1. The Field of the Invention [0001]

The invention relates to concatenating images covering a visual field of action. Specifically, the invention relates to simulating visual depth in a concatenated image of a field of action of a remotely controlled vehicle. [0002]

2. The Relevant Art [0003]

Remote control enthusiasts regularly maneuver remotely controlled vehicles over challenging courses and in sophisticated racing events. Radio controllers facilitate the control of a vehicle through radio transmissions. By breaking the physical link between the vehicle and controller, R/C enthusiasts are able to participate in organized group events such as racing or in what is known as "backyard bashing." Additionally, R/C controllers have allowed scaled vehicles to travel over and under water, and through the air, which was not previously possible with a cabled control mechanism. [0004]

Racing scaled versions of NASCAR™, Formula 1™, and Indy™ series racecars has become very popular because, unlike other sports, the public generally does not have the opportunity to race these cars. Although scaled racecars give the hobbyist the feeling of racing, for example, a stock car, remotely racing a scaled racecar may lack realism. In order to make a racecar visually interesting from the point of view of the racer, the racecar is normally operated at speeds that, if scaled, are unrealistic. Additionally, R/C is limited by the number of channels or frequencies available for use. Currently, operators of racing tracks or airplane parks must track each user's frequency, and when all of the available channels are in use, no new users are allowed to participate. [0005]

A solution to this problem has been to assign a binary address to each vehicle in a system. Command data is then attached to the binary address and transmitted to all vehicles in the system. In an analog R/C environment, commands to multiple vehicles must be placed in a queue and transmitted sequentially; this introduces a slight lag between a user command and the vehicle's response. Each vehicle constantly monitors transmitted commands and waits for a command with its assigned binary address. Limitations of this approach include the loss of fine control of vehicles due to transmission lag, and an upper bound on the number of vehicles, since the time lag could otherwise become too great. [0006]

Users typically must maneuver their vehicles with only the visual input from an observation viewpoint removed from the vehicle and track. Removed observation viewpoints often obscure important visual information needed to maneuver a remotely controlled vehicle with a constantly changing position and orientation. [0007]

Users have attempted to attain the visual perspective of the remotely controlled vehicle with vision systems that mount a video camera on the actual vehicle. However, the field of view of a vehicle-mounted video camera does not cover the visual space of the entire field of action of the remotely controlled vehicle. Additionally, video images lack depth cues vital to maneuvering a remotely controlled vehicle in difficult, high-performance situations. [0008]

Users have compensated for the visual feedback limitations of a single video camera image with vision systems displaying images from multiple cameras, providing a user with a mosaic of images of the visual space of a field of action. However, the various images covering the field of action may display mutually inconsistent visual feedback, reducing the effectiveness of visual cues. Multiple images of the field of action also lack visual depth information. [0009]

Users have compensated for the lack of visual depth in images of remote fields of action by mounting stereoscopic vision system cameras on a remote vehicle. However, stereoscopic cameras have a limited field of view. Stereoscopic cameras also cost more for a given viewing angle. [0010]

Accordingly, it is apparent that a need exists for an improved system of controlling vehicles remotely. The need further exists for an improved system of controlling vehicles that provides a vision system for concatenating a consistent image of a remotely controlled vehicle's field of action. More specifically, what are needed are a method, apparatus, and system for simulating visual depth in a concatenated image covering the visual space of a field of action. [0011]
BRIEF SUMMARY OF THE INVENTION

The various elements of the present invention have been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available remote controlled vehicles, and more particularly by currently available vision systems for controlling such vehicles. Accordingly, the present invention provides an improved method, apparatus, and system for displaying an integrated three-dimensional image of a remote field of action. [0012]

In accordance with the invention as embodied and broadly described herein in the preferred embodiments, an improved remote control vehicle is provided and configured to move in a direction selectable remotely by a user. The vehicle comprises a chassis configured to move about in response to vehicle control data from a user; a controller residing within the chassis configured to receive network switched packets containing the vehicle control data; and an actuator interface module configured to operate an actuator in response to the vehicle control data received by the controller. The controller is configured to transmit vehicle feedback data to a user. Additionally, the controller may comprise a wireless network interface connection configured to transmit and receive network switched packets containing vehicle control data. [0013]

The present invention comprises a method of controlling a vehicle over a digital data network, including but not limited to LAN, WAN, satellite, and digital cable networks. The method comprises providing a mobile vehicle configured to transmit and receive vehicle control data over the network, providing a central server configured to transmit and receive vehicle control data, transmitting vehicle control data, controlling the mobile vehicle in response to the transmitted vehicle control data, and receiving vehicle feedback data from the vehicle. Transmitting vehicle control data may comprise transmitting network switched packets in a peer-to-peer environment or in an infrastructure environment. [0014]

In one aspect of the present invention, a method for simulating three-dimensional visual depth in an image of a remote field of action is presented. The method comprises concatenating multiple video image fields of view covering a visual space of a field of action. Video images comprising visual spaces covered by multiple fields of view are aligned and concatenated into an image of a field of action. [0015]

The method divides a field of view into two sub fields of view. All portions of a visual field of action are covered by at least two video image sub fields of view. The method concatenates sub fields of view from multiple cameras into two distinct images of a visual field of action, with a point of view of a first image offset from a point of view of a second image. Each image of a concatenated field of action may be displayed separately to the right and left eyes of a user, simulating three-dimensional visual depth. In one embodiment, concatenated images are organized in data packets for transmission over a network. [0016]

In another aspect of the present invention, an apparatus is also presented for simulating three-dimensional visual depth in a concatenated image of a remote field of action. The apparatus includes multiple video cameras covering a single visual field of action. Each portion of a visual field of action is captured by a field of view of at least two video cameras. The apparatus divides a field of view into a clockwise and a counterclockwise sub field of view. Clockwise and counterclockwise sub fields of view are concatenated into clockwise and counterclockwise images of a visual field of action. In one embodiment, the clockwise and counterclockwise images are displayed to simulate three-dimensional visual depth. [0017]

In one embodiment, video cameras capture images reflected off mirrors. The mirrors may be positioned to locate the virtual center of each camera's focal plane at the same point to reduce parallax effects. [0018]

Various elements of the present invention are combined into a system for simulating three-dimensional visual depth in a concatenated image of a remote field of action. The system includes multiple video cameras capturing multiple video images covering one or more fields of view. Each portion of a visual space covered by a first video camera field of view is also covered by a second video camera field of view. The system divides each camera's field of view into at least two sub fields of view, a clockwise sub field of view and a counterclockwise sub field of view. The system concatenates multiple clockwise sub fields of view into a single clockwise image of a field of action. Similarly, the system concatenates multiple counterclockwise sub fields of view into a single counterclockwise image of a field of action. The clockwise and counterclockwise images may be used to display an image of the field of action with three-dimensional visual depth. [0019]

The various elements and aspects of the present invention facilitate controlling a vehicle over a digital data network with control feedback that includes the simulation of visual depth in a concatenated image of a field of action. These and other features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter. [0020]
BRIEF DESCRIPTION OF THE DRAWINGS

In order that the manner in which the advantages and objects of the invention are obtained will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which: [0021]

FIG. 1 is a perspective view of one embodiment of a network controlled vehicle of the present invention; [0022]

FIG. 2 is a schematic block diagram illustrating one embodiment of a vehicle control module of the present invention; [0023]

FIG. 3 is a schematic top view of one embodiment of a remotely controlled vehicle with video cameras in accordance with the prior art; [0024]

FIG. 4 is a schematic top view of one embodiment of a remotely controlled vehicle with stereoscopic video cameras in accordance with the prior art; [0025]

FIG. 5 is a schematic top view diagram illustrating one embodiment of a remotely controlled vehicle with video cameras in accordance with the present invention; [0026]

FIG. 6 is a flow chart illustrating one embodiment of a field of view concatenation method in accordance with the present invention; [0027]

FIG. 7 is a schematic top view of one embodiment of a video camera field of view in accordance with the present invention; [0028]

FIG. 8 is a schematic top view of one embodiment of multiple, overlapping video camera fields of view in accordance with the present invention; [0029]

FIG. 9 is a flow chart illustrating one embodiment of a visual depth image generation method in accordance with the present invention; [0030]

FIG. 10 is a block diagram of one embodiment of a field of view processing system in accordance with the present invention; and [0031]

FIG. 11 is a simplified side view of one embodiment of a video camera and mirror system in accordance with the present invention. [0032]
DETAILED DESCRIPTION OF THE INVENTION

Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like. [0033]

Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module. [0034]

Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. [0035]
FIG. 1 shows a vehicle 100 that is controllable over a network. As depicted, the vehicle 100 comprises a video camera module 102 and a vehicle control module 104. The vehicle 100 is in one embodiment replicated at one-quarter scale, but may be of other scales also, including one-tenth scale, one-fifth scale, and one-third scale. Additionally, the network controlled vehicle 100 may embody scaled versions of airplanes, monster trucks, motorcycles, boats, buggies, and the like. In one embodiment, the vehicle 100 is a standard quarter scale vehicle 100 with a centrifugal clutch and gasoline engine, and all of the data for the controls and sensors is communicated across the local area network. Alternatively, the vehicle 100 may be electric or liquid propane or otherwise powered. Quarter scale racecars are available from New Era Models of Nashua, N.H. as well as from other vendors, such as Danny's ¼ Scale Cars of Glendale, Ariz. [0036]

The vehicle 100 is operated by remote control, and in one embodiment an operator need not be able to see the vehicle 100 to operate it. Rather, a video camera module 102 is provided with one or more cameras 106 connected to the vehicle control module 104 for displaying the points of view of the vehicle 100 to an operator. The operator may control the vehicle 100 from a remote location at which the operator receives vehicle control data and optionally audio and streaming video. In one embodiment, the driver receives the vehicle control data over a local area network. Under a preferred embodiment of the present invention, the video camera module 102 is configured to communicate to the operator using the vehicle control module 104. Alternatively, the video camera module 102 may be configured to transmit streaming visual data directly to an operator station. [0037]
FIG. 2 shows one embodiment of the vehicle control module 104 of FIG. 1. The vehicle control module 104 preferably comprises a network interface module 202, a central processing unit (CPU) 204, a servo interface module 206, a sensor interface module 208, and the video camera module 102. In one embodiment, the network interface module 202 is provided with a wireless transmitter and receiver 205. The transmitter and receiver 205 may be custom designed or may be a standard, off-the-shelf component such as those found in laptops or electronic handheld devices. Indeed, a simplified computer similar to a Palm™ or Pocket PC™ may be provided with wireless networking capability, as is well known in the art, and placed in the vehicle 100 for use as the vehicle control module 104. [0038]

In one embodiment of the present invention, the CPU 204 is configured to communicate with the servo interface module 206, the sensor interface module 208, and the video camera module 102 through a data channel 210. The various controls and sensors may be made to interface through any type of data channel 210 or communication ports, including PCMCIA ports. The CPU 204 may also be configured to select from a plurality of performance levels upon input from an administrator received over the network. Thus, an operator may use the same vehicle 100 and may progress from lower to higher performance levels. The affected vehicle performance may include steering sensitivity, acceleration, and top speed. This feature is especially efficacious in driver education and training applications. The CPU 204 may also provide a software failsafe that limits what an operator is allowed to do in controlling the vehicle 100. [0039]
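By way of illustration, the following minimal Python sketch shows how such performance levels and a software failsafe might be enforced in the vehicle's CPU. The level names and numeric limits are invented for the example and are not taken from the patent.

```python
# A sketch of administrator-selectable performance levels (steering
# sensitivity, throttle authority, top speed) enforced as a software
# failsafe. All names and limits are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PerformanceLevel:
    steering_gain: float   # fraction of full steering sensitivity
    throttle_cap: float    # fraction of full throttle authority
    top_speed_mph: float   # software-enforced top speed

LEVELS = {
    "trainee": PerformanceLevel(0.5, 0.4, 15.0),
    "amateur": PerformanceLevel(0.8, 0.7, 30.0),
    "expert":  PerformanceLevel(1.0, 1.0, 60.0),
}

def limit_command(level_name: str, steering: float, throttle: float,
                  current_speed_mph: float):
    """Clamp an operator command to the active performance level."""
    level = LEVELS[level_name]
    steering = max(-1.0, min(1.0, steering)) * level.steering_gain
    throttle = max(0.0, min(1.0, throttle)) * level.throttle_cap
    if current_speed_mph >= level.top_speed_mph:
        throttle = 0.0  # hold the top speed by cutting throttle
    return steering, throttle
```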
In one embodiment, the CPU 204 comprises a Simple Network Management Protocol (SNMP) server module 212. SNMP provides an extensible, low-overhead solution for managing multiple devices over a network. SNMP is well known to those skilled in the art. In an alternate embodiment not depicted, the CPU 204 may comprise a web-based protocol server module configured to implement a web-based technology, such as Java™, for network data communications. [0040]

The SNMP server module 212 is configured to communicate vehicle control data to the servo interface module 206. The servo interface module 206 communicates the vehicle control data to the corresponding servo. For example, the network interface module 202 receives vehicle control data that indicates a new position for a throttle servo 214. The network interface module 202 communicates the vehicle control data to the CPU 204, which passes the data to the SNMP server 212. The SNMP server 212 receives the vehicle control data and routes the setting that is to be changed to the servo interface module 206. The servo interface module 206 then communicates a command to the throttle servo 214 to accelerate or decelerate. [0041]

The SNMP server 212 is also configured to control a plurality of servos through the servo interface module 206. Examples of servos that may be utilized, depending upon the type of vehicle, are the throttle servo 214, a steering servo 216, a camera servo 218, and a brake servo 220. Additionally, the SNMP server 212 may be configured to retrieve data by communicating with the sensor interface module 208. Examples of some desired sensors for a gas vehicle 100 are a head temperature sensor 222, a tachometer 224, an oil pressure sensor 226, a speedometer 228, and one or more accelerometers 230. In addition, other appropriate sensors and actuators can be controlled in a similar manner. Actuators specific to an airplane, boat, submarine, or robot may be controlled in this manner. For instance, the arms of a robot may be controlled remotely over the network. [0042]
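The routing described in the two preceding paragraphs might be modeled as in the sketch below. The class and object names are hypothetical stand-ins; a real implementation would expose the servos and sensors as SNMP managed objects through an agent library rather than as plain Python methods.

```python
# A stand-in model of the SNMP server routing control data to the servo
# interface module and reading feedback through the sensor interface
# module. Names are illustrative; no real SNMP stack is used here.
class ServoInterface:
    def set_position(self, servo: str, position: float) -> None:
        print(f"servo {servo} -> {position}")  # command the physical servo

class SensorInterface:
    def read(self, sensor: str) -> float:
        return 0.0  # placeholder for a real sensor reading

class VehicleAgent:
    SERVOS = {"throttle", "steering", "camera", "brake"}
    SENSORS = {"head_temperature", "tachometer", "oil_pressure",
               "speedometer", "accelerometer"}

    def __init__(self) -> None:
        self.servo_if = ServoInterface()
        self.sensor_if = SensorInterface()

    def handle_set(self, name: str, value: float) -> None:
        # Vehicle control data arriving over the network interface is
        # routed to the servo interface module.
        if name not in self.SERVOS:
            raise KeyError(f"unknown servo: {name}")
        self.servo_if.set_position(name, value)

    def handle_get(self, name: str) -> float:
        # Feedback data is retrieved through the sensor interface module.
        if name not in self.SENSORS:
            raise KeyError(f"unknown sensor: {name}")
        return self.sensor_if.read(name)

agent = VehicleAgent()
agent.handle_set("throttle", 0.35)     # new throttle servo position
rpm = agent.handle_get("tachometer")   # feedback to the operator
```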
FIG. 3 is a schematic top view of a remotely controlled vehicle 310 with video cameras 320 illustrated in accordance with the prior art. The remotely controlled vehicle 310 includes one or more video cameras 320 and a transmitter 330. The video cameras 320 in one embodiment are mounted on the vehicle 310. Each of the video cameras 320 captures a video field of view according to video processing commonly known in the art. The transmitter 330 broadcasts a video signal from the cameras 320 to a user maneuvering the remotely controlled vehicle 310. In one embodiment, the transmitter 330 also broadcasts control signals used for control of the vehicle 310. In a further embodiment, the transmitter 330 may also transmit feedback data corresponding to performance parameters of the vehicle 310 during operation. [0043]

FIG. 4 is a schematic top view of a remotely controlled vehicle 310 with stereoscopic video cameras 420 in accordance with the prior art. The remotely controlled vehicle 310 includes two video cameras 420 and a transmitter 330. [0044]

The cameras 420 are substantially similar to the cameras 320 of FIG. 3 and are mounted on the remotely controlled vehicle 310. The cameras 420 are mounted in an orientation that allows the cameras 420 to capture video images of approximately the same field of view from two slightly offset points of view. The transmitter 330 broadcasts a video signal from each camera 420 to a user maneuvering a remotely controlled vehicle. In one embodiment, both video images are displayed on a single display unit. In an alternative embodiment, each video image may be displayed on an individual display unit and processed so as to simulate three-dimensional visual depth in the displayed image. [0045]
FIG. 5 is a schematic top view illustrating one embodiment of a remotely controlled vehicle 510 with video cameras 520 of the present invention. The remotely controlled vehicle 510 includes two or more video cameras 520 and a transmitter 330. Although the vehicle 510 is depicted with eight video cameras 520, other quantities, orientations, or combinations of video cameras 520 may be employed. [0046]

The video cameras 520 are mounted on the remotely controlled vehicle 510 and configured to provide a remote user with multiple video images of the fields of action for the vehicle 510. The transmitter 330 in one embodiment broadcasts one or more of the video images to the user. The images from two or more video cameras 520 may be concatenated together to form a larger image of a single field of action. [0047]

FIG. 6 is a flow chart illustrating one embodiment of a field of view concatenation method 600 of the present invention. The concatenation method 600 combines fields of view from two video images. Although for clarity purposes the steps of the concatenation method 600 are depicted in a certain sequential order, execution of the individual steps within an actual process may be conducted in parallel or in an order distinct from the depicted order. [0048]

The depicted concatenation method 600 includes an input fields of view step 610, a fields of view difference step 620, a match complete test 630, a shift and scale step 640, a calculate algorithm step 650, an apply algorithm step 660, a combine fields of view step 670, a continue test 680, a terminate test 690, and an end step 695. [0049]
  • The input fields of [0050] view step 610 samples two distinct fields of view. In one embodiment, the two fields of view are obtained from two distinct video cameras 520 mounted to a vehicle 510. Portions of the first and second fields of view have an overlapping visual space and the corresponding video images have overlapping pixels that are captured simultaneously. Portions of each field of view that cover the overlapping visual space may be culled for comparison.
[0051] The fields of view difference step 620 compares pixels from the first and the second fields of view. The fields of view difference step 620 compares a pixel pair, with one pixel culled from the first field of view and one pixel culled from the second field of view. In one embodiment, each pixel in the pixel pair represents a target pixel in a field of action. The fields of view difference step 620 may calculate a mathematical sum of the differences of all pixel pairs in the overlapping fields of view. The sum diminishes as the first and second fields of view are more precisely aligned.
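For illustration only, since the patent text prescribes no particular implementation, the difference sum of step 620 might be computed as in the following minimal Python sketch; the function name, the use of NumPy, and the assumption that the culled overlap regions are equally sized arrays of integer pixel values are all assumptions made here:

```python
import numpy as np

def alignment_score(overlap_a, overlap_b):
    """Sum the differences of all pixel pairs culled from the overlapping
    portions of the first and second fields of view (step 620). The score
    diminishes as the two fields of view come into closer alignment.
    Assumes both inputs are equally shaped arrays of integer pixels."""
    a = overlap_a.astype(np.int32)
    b = overlap_b.astype(np.int32)
    return int(np.abs(a - b).sum())
```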
[0052] The match complete test 630 in one embodiment uses the calculated sum of differences to determine if an alignment of the first and second fields of view is satisfactory. If the alignment is satisfactory, the method 600 proceeds to the calculate algorithm step 650. If the alignment is unsatisfactory, the method 600 loops to the shift and scale step 640.
[0053] The shift and scale step 640 shifts and scales the alignment of the first field of view relative to the second field of view. The shift and scale step 640 in one embodiment shifts pixels in the first field of view horizontally and vertically to improve the alignment of the first and second fields of view. The shift and scale step 640 may also scale the first and second fields of view to improve the alignment of the fields of view.
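Continuing the sketch above, the shift portion of the loop through steps 620-640 could be approximated by a brute-force search over integer offsets; the search window size and the alignment_score helper are again illustrative assumptions, and scaling is omitted for brevity:

```python
def best_shift(view_a, view_b, max_shift=8):
    """Search horizontal and vertical shifts of view_a against view_b and
    return the (dx, dy) offset that minimizes the pixel-pair difference,
    looping over steps 620-640 until the best match is found. Assumes the
    two views are equally shaped NumPy arrays; scores are normalized by
    overlap area so larger shifts with smaller overlaps are not favored."""
    h, w = view_b.shape[:2]
    best, best_score = (0, 0), None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = view_a[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = view_b[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            score = alignment_score(a, b) / a.size
            if best_score is None or score < best_score:
                best_score, best = score, (dx, dy)
    return best
```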
[0054] The calculate algorithm step 650 uses a best alignment between the first and second fields of view as calculated by the fields of view difference step 620 to determine a concatenation algorithm for concatenating the first and second fields of view. In one embodiment, the calculate algorithm step 650 creates a video mask storing the concatenation algorithm.
[0055] The apply algorithm step 660 applies the concatenation algorithm of the calculate algorithm step 650 to the first and second fields of view. The apply algorithm step 660 may modify a pixel value in preparation for concatenation. The step 660 may also delete a pixel value. The combine fields of view step 670 concatenates the first and second fields of view. In one embodiment, a pixel value from the first field of view is added to a pixel value of the second field of view.
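One plausible reading of steps 660-670, sketched below, is a per-pixel weighting mask: a weight of one keeps the first view's pixel, a weight of zero effectively deletes it in favor of the second view, and intermediate weights blend across the seam. The weighting scheme is an assumption made here, not the patent's prescribed method:

```python
import numpy as np

def combine_fields_of_view(view_a, view_b, mask):
    """Apply a precomputed video mask (step 660) and combine the two
    aligned fields of view (step 670). mask holds per-pixel weights in
    [0, 1] and is assumed to have the same shape as the two views."""
    a = view_a.astype(np.float32)
    b = view_b.astype(np.float32)
    return (mask * a + (1.0 - mask) * b).astype(view_a.dtype)
```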
[0056] The continue test 680 determines if the field of view concatenation method 600 will continue to use the current concatenation algorithm in concatenating the first and second fields of view. If the continue test 680 determines to continue using the concatenation algorithm, the field of view concatenation method 600 loops to the apply algorithm step 660. If the continue test 680 determines to recalculate the concatenation algorithm, the method 600 proceeds to the terminate test 690.
[0057] The terminate test 690 determines if the field of view concatenation method 600 should terminate. If the terminate test 690 determines the method 600 should not terminate, the method 600 loops to the input fields of view step 610. If the terminate test 690 determines the method 600 should terminate, the field of view concatenation method 600 proceeds to the end step 695.
[0058] FIG. 7 is a schematic top view of one embodiment of a video camera field of view 700 of the present invention. The video camera field of view 700 includes a video camera 710, a field of view 740, a clockwise sub field of view 720, and a counterclockwise sub field of view 730.
[0059] The field of view 740 of the video camera 710 includes a visual space captured by the video camera 710. The field of view 740 may be divided into the clockwise sub field of view 720 and the counterclockwise sub field of view 730. The clockwise sub field of view 720 and the counterclockwise sub field of view 730 may cover completely distinct visual spaces. Alternatively, the clockwise sub field of view 720 and the counterclockwise sub field of view 730 may include overlapping portions of the same visual space.
[0060] FIG. 8 is a schematic top view of one embodiment of multiple, overlapping fields of view 800 of a plurality of video cameras 520 of the present invention. The depicted schematic shows coverage of a field of action by the fields of view 740 of a plurality of video cameras 520. Although the fields of view 740 are depicted using eight video cameras 520, other quantities, orientations, or combinations of video cameras 520 may be employed. The overlapping fields of view 800 include one or more video cameras 520, one or more fields of view 740, one or more clockwise sub fields of view 720, and one or more counterclockwise sub fields of view 730.
[0061] The field of view 740 of each video camera 520 is divided into the clockwise sub field of view 720 and the counterclockwise sub field of view 730. Two or more clockwise sub fields of view 720 may be concatenated together to form a clockwise field of action image. Two or more counterclockwise sub fields of view 730 may similarly be concatenated to form a counterclockwise field of action image. The clockwise and the counterclockwise field of action images may be used to simulate three-dimensional visual depth. In one embodiment, the clockwise and counterclockwise field of action images are alternately displayed to a user's right and left eyes to provide three-dimensional visual depth in the field of action image. In an alternate embodiment, the clockwise field of action image is displayed to a user's right eye and the counterclockwise field of action image is displayed to a user's left eye. The clockwise and counterclockwise field of action images may also be combined for display in a polarized three-dimensional display.
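The eye-to-image pairings described here can be made concrete with a small scheduling sketch; the generator form, function name, and use of strings as stand-in images are illustrative assumptions:

```python
from itertools import islice

def stereo_schedule(cw_image, ccw_image, alternate=True):
    """Yield (eye, image) pairs. With alternate=True the clockwise and
    counterclockwise field of action images swap between the right and
    left eyes on successive frames; otherwise the clockwise image stays
    on the right eye and the counterclockwise image on the left."""
    flip = False
    while True:
        right, left = (ccw_image, cw_image) if flip else (cw_image, ccw_image)
        yield ("right", right)
        yield ("left", left)
        if alternate:
            flip = not flip

# First four frame assignments in alternating mode:
for eye, img in islice(stereo_schedule("CW", "CCW"), 4):
    print(eye, img)   # right CW, left CCW, right CCW, left CW
```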
[0062] FIG. 9 is a flow chart illustrating one embodiment of a visual depth image generation method 900 of the present invention. The method 900 generates alternating clockwise and counterclockwise images of a field of action. The visual depth image generation method 900 includes a process clockwise sub fields of view step 910, a process counterclockwise sub fields of view step 920, a terminate test 930, and an end step 940.
[0063] The process clockwise sub fields of view step 910 prepares a clockwise sub field of view 720 for display. The step 910 may employ the field of view concatenation method 600 to concatenate clockwise sub fields of view 720 into a clockwise field of action image. The process counterclockwise sub fields of view step 920 prepares counterclockwise sub fields of view 730 for display. The step 920 may also employ the field of view concatenation method 600 to concatenate counterclockwise sub fields of view 730 into a counterclockwise field of action image. In one alternate embodiment, the process clockwise sub fields of view step 910 displays a counterclockwise field of action image and the process counterclockwise sub fields of view step 920 displays a clockwise field of action image.
[0064] FIG. 10 is a block diagram of one embodiment of a field of view processing system 1000 of the present invention. The depicted system 1000 prepares video images for transmission to a display device. Although the field of view processing system 1000 is illustrated using a network to transmit images, other transmission mechanisms may be employed. The field of view processing system 1000 includes one or more video cameras 1010, a video splitting module 1020, a video processing module 1030, a video memory module 1040, a packet transmission module 1050, a packet receipt module 1070, and an image display module 1080.
[0065] The video camera 1010 captures a video image of a field of view 740. The video splitting module 1020 splits the video image into a clockwise sub field of view 720 and a counterclockwise sub field of view 730. In one embodiment, the clockwise and counterclockwise sub fields of view cover distinct, separate visual spaces. In an alternate embodiment, the clockwise and counterclockwise sub fields of view share portions of visual space.
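A minimal sketch of such a splitting module follows, assuming frames arrive as NumPy arrays and that "clockwise" corresponds to one lateral half of the frame; both of these are assumptions made here rather than details stated in the patent:

```python
def split_field_of_view(frame, overlap=0):
    """Split a captured field of view into clockwise and counterclockwise
    sub fields of view (module 1020). With overlap=0 the halves cover
    distinct, separate visual spaces; a positive overlap makes them share
    a strip of pixels in the middle of the frame."""
    mid = frame.shape[1] // 2
    clockwise = frame[:, :mid + overlap]
    counterclockwise = frame[:, mid - overlap:]
    return clockwise, counterclockwise
```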
[0066] The video processing module 1030 prepares the video camera field of view for display. The video processing module 1030 concatenates two or more clockwise sub fields of view 720 into a clockwise field of action image. The video processing module 1030 also concatenates two or more counterclockwise sub fields of view 730 into a counterclockwise field of action image.
[0067] The video memory module 1040 stores a video image and a video algorithm. The video memory module 1040 may store the video image and the video algorithm for concatenating the field of action image. The packet transmission module 1050 prepares the field of action image for transmission as an image data packet over a network. In one embodiment, the clockwise and counterclockwise field of action image data are compressed and transmitted in separate data packets. In an alternate embodiment, clockwise and counterclockwise data packets are compressed and transmitted using shared data packets.
[0068] The packet receipt module 1070 receives the image data packet. The packet receipt module 1070 decompresses the image data packet into a displayable format of the field of action image. The image display module 1080 displays the field of action image. The clockwise and the counterclockwise field of action images may be displayed to a right and a left eye of a user, simulating three-dimensional visual depth. In an alternate embodiment, the clockwise and the counterclockwise field of action images are combined in a polarized display with simulated three-dimensional visual depth.
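As a rough sketch of the transmit and receive path (modules 1050 and 1070), the following uses zlib compression and a five-byte packet header; the header layout, payload size, and stream-id convention are invented purely for illustration:

```python
import struct
import zlib

def image_to_packets(image_bytes, stream_id, max_payload=1400):
    """Compress a field of action image and split it into data packets
    (module 1050). Each packet carries a stream id (e.g. clockwise vs
    counterclockwise), a packet index, and the total packet count."""
    data = zlib.compress(image_bytes)
    chunks = [data[i:i + max_payload] for i in range(0, len(data), max_payload)]
    return [struct.pack("!BHH", stream_id, i, len(chunks)) + c
            for i, c in enumerate(chunks)]

def packets_to_image(packets):
    """Reorder received packets by index and decompress them back into a
    displayable image (module 1070)."""
    ordered = sorted(packets, key=lambda p: struct.unpack("!BHH", p[:5])[1])
    return zlib.decompress(b"".join(p[5:] for p in ordered))
```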
[0069] FIG. 11 is a simplified side view drawing of one embodiment of a video camera/mirror system 1100 of the present invention. The system 1100 includes a first video camera 1110, a second video camera 1120, one or more mirrors 1130, and a common point 1140. Although for purposes of clarity only two video cameras and two mirrors are illustrated, any number of cameras and mirrors may be employed.
[0070] The first video camera 1110 is positioned to capture an image of a portion of a visual space of a field of action as reflected by the mirror 1130. The mirror 1130 is positioned to locate the center of the virtual focal plane of the first camera 1110 at approximately the common point 1140 in space shared by the center of the virtual focal plane of the second video camera 1120. Positioning the virtual focal planes of the first camera 1110 and the second camera 1120 at the common point 1140 may eliminate parallax effects when images from the cameras 1110, 1120 are concatenated.
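The parallax that the common point 1140 removes can be quantified with the standard pinhole disparity relation, disparity = focal length × baseline / depth; the focal length and geometry values in the sketch below are chosen purely for illustration:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Pixel disparity between two cameras whose focal points are
    separated by baseline_m, for a target at depth depth_m. Collapsing
    the virtual focal planes onto a common point drives the baseline,
    and with it the disparity, to zero at every depth."""
    return focal_px * baseline_m / depth_m

print(disparity_px(0.05, 800, 1.0))  # 5 cm offset: 40.0 px mismatch at 1 m
print(disparity_px(0.0, 800, 1.0))   # common focal point: 0.0 px mismatch
```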
[0071] The present invention allows a user to maneuver a vehicle over a digital data network using visual feedback from an image covering a visual space of the vehicle's field of action. Two or more field of view images are concatenated into a field of action image with consistent visual feedback cues. Multiple field of action images are generated to allow visual feedback with simulated three-dimensional visual depth, improving the visual cues provided to the user.
[0072] The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (33)

What is claimed is:
1. A method for simulating visual depth using a concatenated image of a remote field of action, the method comprising:
receiving a first video image and a second video image of a remote field of action;
dividing the first video image field of view into a first clockwise sub field of view and a first counterclockwise sub field of view;
dividing the second video image field of view into a second clockwise sub field of view and a second counterclockwise sub field of view;
concatenating the first and second clockwise sub fields of view into a clockwise field of action image; and
concatenating the first and second counterclockwise sub fields of view into a counterclockwise field of action image.
2. The method of claim 1, further comprising aligning a first field of action image with a second field of action image.
3. The method of claim 1, further comprising displaying two or more field of action images to simulate three-dimensional visual depth.
4. The method of claim 1, further comprising providing an unbroken 360° field of action image.
5. The method of claim 1, further comprising controlling a remote vehicle, the remote vehicle providing the locus of the field of action image.
6. The method of claim 5, further comprising generating network switched packets containing vehicle control data.
7. A method for simulating visual depth using a concatenated image of a remote field of action, the method comprising:
receiving a first video image and a second video image of a remote field of action;
dividing the first video image field of view into a first clockwise sub field of view and a first counterclockwise sub field of view;
dividing the second video image field of view into a second clockwise sub field of view and a second counterclockwise sub field of view;
concatenating the first and second clockwise sub fields of view into a clockwise field of action image;
concatenating the first and second counterclockwise sub fields of view into a counterclockwise field of action image;
aligning a first field of action image with a second field of action image;
displaying two or more field of action images to simulate visual depth; and
providing an unbroken 360° field of action image.
8. The method of claim 7, further comprising controlling a remote vehicle, the remote vehicle providing the locus of the field of action image.
9. The method of claim 8, further comprising controlling network switched packets containing vehicle control data.
10. An apparatus for simulating visual depth using a concatenated image of a remote field of action, the apparatus comprising:
a first video camera configured to capture a first video image field of view and a second video camera configured to capture a second video image field of view;
a video splitting module configured to divide the first video image field of view into a clockwise sub field of view and a counterclockwise sub field of view;
the video splitting module further configured to divide the second video image field of view into a clockwise sub field of view and a counterclockwise sub field of view;
a video processing module configured to concatenate the first and second clockwise sub fields of view into a clockwise field of action image; and
the video processing module further configured to concatenate the first and second counterclockwise sub fields of view into a counterclockwise field of action image.
11. The apparatus of claim 10, further configured with a display module to selectively display two or more field of action images to simulate three-dimensional visual depth.
12. The apparatus of claim 10, further configured with a mirror positioned to locate a center of a virtual focal plane of the video camera in a common point.
13. The apparatus of claim 10, further configured to vertically orient the axis of the video camera with the greatest pixel density.
14. The apparatus of claim 10, further comprising a remotely controlled vehicle, at least one of the first and second video cameras disposed on the remotely controlled vehicle.
15. The apparatus of claim 14, wherein the remotely controlled vehicle is the locus of the field of action image.
16. An apparatus for simulating visual depth in a concatenated image of a remote field of action, the apparatus comprising:
means for receiving a first video image and a second video image of a remote field of action;
means for dividing the first video image into a first clockwise sub field of view and a first counterclockwise sub field of view;
means for dividing the second video image into a second clockwise sub field of view and a second counterclockwise sub field of view;
means for concatenating the first and second clockwise sub fields of view into a clockwise field of action image; and
means for concatenating the first and second counterclockwise sub fields of view into a counterclockwise field of action image.
17. The apparatus of claim 16, the apparatus further comprising means for aligning a first field of action image and a second field of action image.
18. The apparatus of claim 16, the apparatus further comprising means for displaying the two or more field of action images to simulate three-dimensional visual depth.
19. The apparatus of claim 16, the apparatus further comprising means for locating the center of the focal plane of a video camera in a common point.
20. The apparatus of claim 16, the apparatus further comprising means for displaying an unbroken 360° field of action image.
21. A system for simulating visual depth using a concatenated image of a remote field of action, the system comprising:
a remotely controlled vehicle;
a first video camera and a second video camera each mounted on the remotely controlled vehicle and configured to scan a field of action;
a video splitting module configured to divide the video camera field of view into a clockwise sub field of view and a counterclockwise sub field of view;
a video processing module configured to combine two or more clockwise sub fields of view into a clockwise field of action image and two or more counterclockwise sub fields of view into a counterclockwise field of action image;
a data network configured to transmit the video images; and
a data storage server configured to store the video images.
22. The system of claim 21, further comprising an image display module to display two or more field of action images to simulate visual depth.
23. The system of claim 21, further comprising a mirror configured to locate the center of the virtual focal plane of the video camera at a common point.
24. The system of claim 21, further comprising the video cameras oriented around a vertical axis.
25. The system of claim 21, further comprising the video cameras oriented around a horizontal axis.
26. The system of claim 21, further comprising a video image transmission module configured to transmit the video images.
27. The system of claim 26, further comprising the video transmission module configured to transmit the video images over the data network.
28. The system of claim 21, further comprising a video image transmission module configured to transmit the video images over the data network using data packets.
29. The system of claim 21, further comprising a remotely controlled vehicle, the remotely controlled vehicle providing the locus of the field of action image.
30. A computer readable storage medium comprising computer readable program code configured to carry out a method for simulating visual depth using a concatenated image of a remote field of action, the method comprising:
receiving a first video image and a second video image;
dividing the first video image into a first clockwise sub field of view and a first counterclockwise sub field of view;
dividing the second video image into a second clockwise sub field of view and a second counterclockwise sub field of view;
concatenating the first and second clockwise sub fields of view into a clockwise field of action image; and
concatenating the first and second counterclockwise sub fields of view into a counterclockwise field of action image.
31. The computer readable storage medium of claim 30, wherein the method further comprises aligning a first field of action image and a second field of action image.
32. The computer readable storage medium of claim 30, wherein the method further comprises providing an unbroken 360° field of action image.
33. The computer readable storage medium of claim 30, wherein the method further comprises displaying two or more field of action images to simulate three-dimensional visual depth.
US10/421,374 2002-04-22 2003-04-22 Method, apparatus, and system for simulating visual depth in a concatenated image of a remote field of action Abandoned US20040077285A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/421,374 US20040077285A1 (en) 2002-04-22 2003-04-22 Method, apparatus, and system for simulating visual depth in a concatenated image of a remote field of action

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US37444002P 2002-04-22 2002-04-22
US10/421,374 US20040077285A1 (en) 2002-04-22 2003-04-22 Method, apparatus, and system for simulating visual depth in a concatenated image of a remote field of action

Publications (1)

Publication Number Publication Date
US20040077285A1 true US20040077285A1 (en) 2004-04-22

Family

ID=32095823

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/421,374 Abandoned US20040077285A1 (en) 2002-04-22 2003-04-22 Method, apparatus, and system for simulating visual depth in a concatenated image of a remote field of action

Country Status (1)

Country Link
US (1) US20040077285A1 (en)

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4405943A (en) * 1981-08-19 1983-09-20 Harris Corporation Low bandwidth closed loop imagery control and communication system for remotely piloted vehicle
US4817948A (en) * 1983-09-06 1989-04-04 Louise Simonelli Reduced-scale racing system
US5481257A (en) * 1987-03-05 1996-01-02 Curtis M. Brubaker Remotely controlled vehicle containing a television camera
US5016004A (en) * 1987-12-24 1991-05-14 Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of National Defence Gas operated vehicular control system
US4986187A (en) * 1988-12-27 1991-01-22 Lionel Trains, Inc. Toy vehicle assembly with video display capability
US5044956A (en) * 1989-01-12 1991-09-03 Atari Games Corporation Control device such as a steering wheel for video vehicle simulator with realistic feedback forces
US5015189A (en) * 1989-10-20 1991-05-14 Doron Precision Systems, Inc. Training apparatus
US5130794A (en) * 1990-03-29 1992-07-14 Ritchey Kurtis J Panoramic display system
US5338247A (en) * 1992-10-30 1994-08-16 Miles Jeffrey A Battery powered model car
US5707237A (en) * 1993-04-20 1998-01-13 Kabushiki Kaisha Ace Denken Driving simulation system
US5625408A (en) * 1993-06-24 1997-04-29 Canon Kabushiki Kaisha Three-dimensional image recording/reconstructing method and apparatus therefor
US5456604A (en) * 1993-10-20 1995-10-10 Olmsted; Robert A. Method and system for simulating vehicle operation using scale models
US5596319A (en) * 1994-10-31 1997-01-21 Spry; Willie L. Vehicle remote control system
US5989096A (en) * 1997-02-11 1999-11-23 Rokenbok Toy Company Toy fork lift vehicle with improved steering
US6747610B1 (en) * 1997-07-22 2004-06-08 Sanyo Electric Co., Ltd. Stereoscopic image display apparatus capable of selectively displaying desired stereoscopic image
US6074271A (en) * 1997-08-26 2000-06-13 Derrah; Steven Radio controlled skateboard with robot
US6247994B1 (en) * 1998-02-11 2001-06-19 Rokenbok Toy Company System and method for communicating with and controlling toy accessories
US6215519B1 (en) * 1998-03-04 2001-04-10 The Trustees Of Columbia University In The City Of New York Combined wide angle and narrow angle imaging system and method for surveillance and monitoring
US6141145A (en) * 1998-08-28 2000-10-31 Lucent Technologies Stereo panoramic viewing system
US6113459A (en) * 1998-12-21 2000-09-05 Nammoto; Mikio Remote toy steering mechanism
US6309306B1 (en) * 1999-03-03 2001-10-30 Disney Enterprises, Inc. Interactive entertainment attraction using telepresence vehicles
US20010026386A1 (en) * 2000-03-30 2001-10-04 Takashi Yamamoto Communication system, communication apparatus, and communication method
US20010045978A1 (en) * 2000-04-12 2001-11-29 Mcconnell Daniel L. Portable personal wireless interactive video device and method of using the same

Cited By (187)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040019413A1 (en) * 2002-01-31 2004-01-29 Bonilla Victor G. Apparatus system and method for remotely controlling a vehicle over a network
US6954695B2 (en) * 2002-01-31 2005-10-11 Racing Visions, Llc Apparatus system and method for remotely controlling a vehicle over a network
US20070004311A1 (en) * 2005-06-03 2007-01-04 Mark Trageser Toy vehicle with on-board electronics
US7275975B2 (en) 2005-06-03 2007-10-02 Mattel, Inc. Toy vehicle with on-board electronics
US20070173174A1 (en) * 2005-11-01 2007-07-26 Mattel, Inc. Toys with view ports
US8376806B2 (en) * 2005-11-01 2013-02-19 Mattel, Inc. Toys with view ports
US20120009845A1 (en) * 2010-07-07 2012-01-12 Juniper Holding Corp. Configurable location-aware toy capable of communicating with like toys and associated system infrastructure for communicating with such toys
US9478150B1 (en) * 2013-03-15 2016-10-25 State Farm Mutual Automobile Insurance Company Real-time driver observation and scoring for driver's education
US10598357B2 (en) 2013-03-15 2020-03-24 The Coca-Cola Company Display devices
US9269283B2 (en) 2013-03-15 2016-02-23 The Coca-Cola Company Display devices
US9275552B1 (en) * 2013-03-15 2016-03-01 State Farm Mutual Automobile Insurance Company Real-time driver observation and scoring for driver'S education
US9342993B1 (en) 2013-03-15 2016-05-17 State Farm Mutual Automobile Insurance Company Real-time driver observation and scoring for driver's education
US9257061B2 (en) 2013-03-15 2016-02-09 The Coca-Cola Company Display devices
US9530333B1 (en) * 2013-03-15 2016-12-27 State Farm Mutual Automobile Insurance Company Real-time driver observation and scoring for driver's education
US9640118B2 (en) 2013-03-15 2017-05-02 The Coca-Cola Company Display devices
US9885466B2 (en) 2013-03-15 2018-02-06 The Coca-Cola Company Display devices
US10311750B1 (en) * 2013-03-15 2019-06-04 State Farm Mutual Automobile Insurance Company Real-time driver observation and scoring for driver's education
US10446047B1 (en) * 2013-03-15 2019-10-15 State Farm Mutual Automotive Insurance Company Real-time driver observation and scoring for driver'S education
US10208934B2 (en) 2013-03-15 2019-02-19 The Coca-Cola Company Display devices
US8876535B2 (en) * 2013-03-15 2014-11-04 State Farm Mutual Automobile Insurance Company Real-time driver observation and scoring for driver's education
US9934667B1 (en) 2014-03-07 2018-04-03 State Farm Mutual Automobile Insurance Company Vehicle operator emotion management system and method
US10121345B1 (en) 2014-03-07 2018-11-06 State Farm Mutual Automobile Insurance Company Vehicle operator emotion management system and method
US10593182B1 (en) 2014-03-07 2020-03-17 State Farm Mutual Automobile Insurance Company Vehicle operator emotion management system and method
US9908530B1 (en) 2014-04-17 2018-03-06 State Farm Mutual Automobile Insurance Company Advanced vehicle operator intelligence system
US11288751B1 (en) 2014-05-20 2022-03-29 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature monitoring and evaluation of effectiveness
US11710188B2 (en) 2014-05-20 2023-07-25 State Farm Mutual Automobile Insurance Company Autonomous communication feature use and insurance pricing
US9852475B1 (en) 2014-05-20 2017-12-26 State Farm Mutual Automobile Insurance Company Accident risk model determination using autonomous vehicle operating data
US9858621B1 (en) 2014-05-20 2018-01-02 State Farm Mutual Automobile Insurance Company Autonomous vehicle technology effectiveness determination for insurance pricing
US11669090B2 (en) 2014-05-20 2023-06-06 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature monitoring and evaluation of effectiveness
US11580604B1 (en) 2014-05-20 2023-02-14 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature monitoring and evaluation of effectiveness
US11436685B1 (en) 2014-05-20 2022-09-06 State Farm Mutual Automobile Insurance Company Fault determination with autonomous feature use monitoring
US11386501B1 (en) 2014-05-20 2022-07-12 State Farm Mutual Automobile Insurance Company Accident fault determination for autonomous vehicles
US9792656B1 (en) 2014-05-20 2017-10-17 State Farm Mutual Automobile Insurance Company Fault determination with autonomous feature use monitoring
US11282143B1 (en) 2014-05-20 2022-03-22 State Farm Mutual Automobile Insurance Company Fully autonomous vehicle insurance pricing
US11127086B2 (en) 2014-05-20 2021-09-21 State Farm Mutual Automobile Insurance Company Accident fault determination for autonomous vehicles
US9972054B1 (en) 2014-05-20 2018-05-15 State Farm Mutual Automobile Insurance Company Accident fault determination for autonomous vehicles
US11080794B2 (en) 2014-05-20 2021-08-03 State Farm Mutual Automobile Insurance Company Autonomous vehicle technology effectiveness determination for insurance pricing
US11062396B1 (en) 2014-05-20 2021-07-13 State Farm Mutual Automobile Insurance Company Determining autonomous vehicle technology performance for insurance pricing and offering
US11023629B1 (en) 2014-05-20 2021-06-01 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature evaluation
US10026130B1 (en) 2014-05-20 2018-07-17 State Farm Mutual Automobile Insurance Company Autonomous vehicle collision risk assessment
US11010840B1 (en) 2014-05-20 2021-05-18 State Farm Mutual Automobile Insurance Company Fault determination with autonomous feature use monitoring
US10055794B1 (en) 2014-05-20 2018-08-21 State Farm Mutual Automobile Insurance Company Determining autonomous vehicle technology performance for insurance pricing and offering
US10963969B1 (en) 2014-05-20 2021-03-30 State Farm Mutual Automobile Insurance Company Autonomous communication feature use and insurance pricing
US10089693B1 (en) 2014-05-20 2018-10-02 State Farm Mutual Automobile Insurance Company Fully autonomous vehicle insurance pricing
US10748218B2 (en) 2014-05-20 2020-08-18 State Farm Mutual Automobile Insurance Company Autonomous vehicle technology effectiveness determination for insurance pricing
US9805423B1 (en) 2014-05-20 2017-10-31 State Farm Mutual Automobile Insurance Company Accident fault determination for autonomous vehicles
US10726499B1 (en) 2014-05-20 2020-07-28 State Farm Mutual Automoible Insurance Company Accident fault determination for autonomous vehicles
US11869092B2 (en) 2014-05-20 2024-01-09 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature monitoring and evaluation of effectiveness
US10726498B1 (en) 2014-05-20 2020-07-28 State Farm Mutual Automobile Insurance Company Accident fault determination for autonomous vehicles
US10719886B1 (en) 2014-05-20 2020-07-21 State Farm Mutual Automobile Insurance Company Accident fault determination for autonomous vehicles
US10719885B1 (en) 2014-05-20 2020-07-21 State Farm Mutual Automobile Insurance Company Autonomous feature use monitoring and insurance pricing
US10599155B1 (en) 2014-05-20 2020-03-24 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature monitoring and evaluation of effectiveness
US9767516B1 (en) 2014-05-20 2017-09-19 State Farm Mutual Automobile Insurance Company Driver feedback alerts based upon monitoring use of autonomous vehicle
US9754325B1 (en) 2014-05-20 2017-09-05 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature monitoring and evaluation of effectiveness
US10181161B1 (en) 2014-05-20 2019-01-15 State Farm Mutual Automobile Insurance Company Autonomous communication feature use
US10529027B1 (en) 2014-05-20 2020-01-07 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature monitoring and evaluation of effectiveness
US10185998B1 (en) 2014-05-20 2019-01-22 State Farm Mutual Automobile Insurance Company Accident fault determination for autonomous vehicles
US10185999B1 (en) 2014-05-20 2019-01-22 State Farm Mutual Automobile Insurance Company Autonomous feature use monitoring and telematics
US10185997B1 (en) 2014-05-20 2019-01-22 State Farm Mutual Automobile Insurance Company Accident fault determination for autonomous vehicles
US9715711B1 (en) 2014-05-20 2017-07-25 State Farm Mutual Automobile Insurance Company Autonomous vehicle insurance pricing and offering based upon accident risk
US10223479B1 (en) 2014-05-20 2019-03-05 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature evaluation
US10510123B1 (en) 2014-05-20 2019-12-17 State Farm Mutual Automobile Insurance Company Accident risk model determination using autonomous vehicle operating data
US10504306B1 (en) 2014-05-20 2019-12-10 State Farm Mutual Automobile Insurance Company Accident response using autonomous vehicle monitoring
US9646428B1 (en) 2014-05-20 2017-05-09 State Farm Mutual Automobile Insurance Company Accident response using autonomous vehicle monitoring
US10373259B1 (en) 2014-05-20 2019-08-06 State Farm Mutual Automobile Insurance Company Fully autonomous vehicle insurance pricing
US10354330B1 (en) 2014-05-20 2019-07-16 State Farm Mutual Automobile Insurance Company Autonomous feature use monitoring and insurance pricing
US10319039B1 (en) 2014-05-20 2019-06-11 State Farm Mutual Automobile Insurance Company Accident fault determination for autonomous vehicles
US10102587B1 (en) 2014-07-21 2018-10-16 State Farm Mutual Automobile Insurance Company Methods of pre-generating insurance claims
US11634103B2 (en) 2014-07-21 2023-04-25 State Farm Mutual Automobile Insurance Company Methods of facilitating emergency assistance
US10832327B1 (en) 2014-07-21 2020-11-10 State Farm Mutual Automobile Insurance Company Methods of providing insurance savings based upon telematics and driving behavior identification
US9786154B1 (en) 2014-07-21 2017-10-10 State Farm Mutual Automobile Insurance Company Methods of facilitating emergency assistance
US10825326B1 (en) 2014-07-21 2020-11-03 State Farm Mutual Automobile Insurance Company Methods of facilitating emergency assistance
US10974693B1 (en) 2014-07-21 2021-04-13 State Farm Mutual Automobile Insurance Company Methods of theft prevention or mitigation
US10997849B1 (en) 2014-07-21 2021-05-04 State Farm Mutual Automobile Insurance Company Methods of facilitating emergency assistance
US11030696B1 (en) 2014-07-21 2021-06-08 State Farm Mutual Automobile Insurance Company Methods of providing insurance savings based upon telematics and anonymous driver data
US11069221B1 (en) 2014-07-21 2021-07-20 State Farm Mutual Automobile Insurance Company Methods of facilitating emergency assistance
US10723312B1 (en) 2014-07-21 2020-07-28 State Farm Mutual Automobile Insurance Company Methods of theft prevention or mitigation
US11068995B1 (en) 2014-07-21 2021-07-20 State Farm Mutual Automobile Insurance Company Methods of reconstructing an accident scene using telematics data
US10387962B1 (en) 2014-07-21 2019-08-20 State Farm Mutual Automobile Insurance Company Methods of reconstructing an accident scene using telematics data
US11257163B1 (en) 2014-07-21 2022-02-22 State Farm Mutual Automobile Insurance Company Methods of pre-generating insurance claims
US9783159B1 (en) 2014-07-21 2017-10-10 State Farm Mutual Automobile Insurance Company Methods of theft prevention or mitigation
US10540723B1 (en) 2014-07-21 2020-01-21 State Farm Mutual Automobile Insurance Company Methods of providing insurance savings based upon telematics and usage-based insurance
US11565654B2 (en) 2014-07-21 2023-01-31 State Farm Mutual Automobile Insurance Company Methods of providing insurance savings based upon telematics and driving behavior identification
US10475127B1 (en) 2014-07-21 2019-11-12 State Farm Mutual Automobile Insurance Company Methods of providing insurance savings based upon telematics and insurance incentives
US11634102B2 (en) 2014-07-21 2023-04-25 State Farm Mutual Automobile Insurance Company Methods of facilitating emergency assistance
US11720968B1 (en) 2014-11-13 2023-08-08 State Farm Mutual Automobile Insurance Company Autonomous vehicle insurance based upon usage
US10007263B1 (en) 2014-11-13 2018-06-26 State Farm Mutual Automobile Insurance Company Autonomous vehicle accident and emergency response
US10416670B1 (en) 2014-11-13 2019-09-17 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US11532187B1 (en) 2014-11-13 2022-12-20 State Farm Mutual Automobile Insurance Company Autonomous vehicle operating status assessment
US11500377B1 (en) 2014-11-13 2022-11-15 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US11494175B2 (en) 2014-11-13 2022-11-08 State Farm Mutual Automobile Insurance Company Autonomous vehicle operating status assessment
US10241509B1 (en) 2014-11-13 2019-03-26 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US9944282B1 (en) 2014-11-13 2018-04-17 State Farm Mutual Automobile Insurance Company Autonomous vehicle automatic parking
US11645064B2 (en) 2014-11-13 2023-05-09 State Farm Mutual Automobile Insurance Company Autonomous vehicle accident and emergency response
US9946531B1 (en) 2014-11-13 2018-04-17 State Farm Mutual Automobile Insurance Company Autonomous vehicle software version assessment
US10831204B1 (en) 2014-11-13 2020-11-10 State Farm Mutual Automobile Insurance Company Autonomous vehicle automatic parking
US11247670B1 (en) 2014-11-13 2022-02-15 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US10166994B1 (en) 2014-11-13 2019-01-01 State Farm Mutual Automobile Insurance Company Autonomous vehicle operating status assessment
US11173918B1 (en) 2014-11-13 2021-11-16 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US11175660B1 (en) 2014-11-13 2021-11-16 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US11127290B1 (en) 2014-11-13 2021-09-21 State Farm Mutual Automobile Insurance Company Autonomous vehicle infrastructure communication device
US10157423B1 (en) 2014-11-13 2018-12-18 State Farm Mutual Automobile Insurance Company Autonomous vehicle operating style and mode monitoring
US10431018B1 (en) 2014-11-13 2019-10-01 State Farm Mutual Automobile Insurance Company Autonomous vehicle operating status assessment
US11726763B2 (en) 2014-11-13 2023-08-15 State Farm Mutual Automobile Insurance Company Autonomous vehicle automatic parking
US10246097B1 (en) 2014-11-13 2019-04-02 State Farm Mutual Automobile Insurance Company Autonomous vehicle operator identification
US10266180B1 (en) 2014-11-13 2019-04-23 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US10353694B1 (en) 2014-11-13 2019-07-16 State Farm Mutual Automobile Insurance Company Autonomous vehicle software version assessment
US11014567B1 (en) 2014-11-13 2021-05-25 State Farm Mutual Automobile Insurance Company Autonomous vehicle operator identification
US11740885B1 (en) 2014-11-13 2023-08-29 State Farm Mutual Automobile Insurance Company Autonomous vehicle software version assessment
US10336321B1 (en) 2014-11-13 2019-07-02 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US10940866B1 (en) 2014-11-13 2021-03-09 State Farm Mutual Automobile Insurance Company Autonomous vehicle operating status assessment
US10943303B1 (en) 2014-11-13 2021-03-09 State Farm Mutual Automobile Insurance Company Autonomous vehicle operating style and mode monitoring
US10824415B1 (en) 2014-11-13 2020-11-03 State Farm Automobile Insurance Company Autonomous vehicle software version assessment
US10821971B1 (en) 2014-11-13 2020-11-03 State Farm Mutual Automobile Insurance Company Autonomous vehicle automatic parking
US11748085B2 (en) 2014-11-13 2023-09-05 State Farm Mutual Automobile Insurance Company Autonomous vehicle operator identification
US10915965B1 (en) 2014-11-13 2021-02-09 State Farm Mutual Automobile Insurance Company Autonomous vehicle insurance based upon usage
US10824144B1 (en) 2014-11-13 2020-11-03 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US11954482B2 (en) 2014-11-13 2024-04-09 State Farm Mutual Automobile Insurance Company Autonomous vehicle control assessment and selection
US10163350B1 (en) 2015-08-28 2018-12-25 State Farm Mutual Automobile Insurance Company Vehicular driver warnings
US10026237B1 (en) 2015-08-28 2018-07-17 State Farm Mutual Automobile Insurance Company Shared vehicle usage, monitoring and feedback
US10325491B1 (en) 2015-08-28 2019-06-18 State Farm Mutual Automobile Insurance Company Vehicular traffic alerts for avoidance of abnormal traffic conditions
US10343605B1 (en) 2015-08-28 2019-07-09 State Farm Mutual Automotive Insurance Company Vehicular warning based upon pedestrian or cyclist presence
US9805601B1 (en) 2015-08-28 2017-10-31 State Farm Mutual Automobile Insurance Company Vehicular traffic alerts for avoidance of abnormal traffic conditions
US9870649B1 (en) 2015-08-28 2018-01-16 State Farm Mutual Automobile Insurance Company Shared vehicle usage, monitoring and feedback
US10242513B1 (en) 2015-08-28 2019-03-26 State Farm Mutual Automobile Insurance Company Shared vehicle usage, monitoring and feedback
US10950065B1 (en) 2015-08-28 2021-03-16 State Farm Mutual Automobile Insurance Company Shared vehicle usage, monitoring and feedback
US11450206B1 (en) 2015-08-28 2022-09-20 State Farm Mutual Automobile Insurance Company Vehicular traffic alerts for avoidance of abnormal traffic conditions
US10977945B1 (en) 2015-08-28 2021-04-13 State Farm Mutual Automobile Insurance Company Vehicular driver warnings
US10769954B1 (en) 2015-08-28 2020-09-08 State Farm Mutual Automobile Insurance Company Vehicular driver warnings
US9868394B1 (en) 2015-08-28 2018-01-16 State Farm Mutual Automobile Insurance Company Vehicular warnings based upon pedestrian or cyclist presence
US11107365B1 (en) 2015-08-28 2021-08-31 State Farm Mutual Automobile Insurance Company Vehicular driver evaluation
US10748419B1 (en) 2015-08-28 2020-08-18 State Farm Mutual Automobile Insurance Company Vehicular traffic alerts for avoidance of abnormal traffic conditions
US10106083B1 (en) 2015-08-28 2018-10-23 State Farm Mutual Automobile Insurance Company Vehicular warnings based upon pedestrian or cyclist presence
US10019901B1 (en) 2015-08-28 2018-07-10 State Farm Mutual Automobile Insurance Company Vehicular traffic alerts for avoidance of abnormal traffic conditions
US10168703B1 (en) 2016-01-22 2019-01-01 State Farm Mutual Automobile Insurance Company Autonomous vehicle component malfunction impact assessment
US10065517B1 (en) 2016-01-22 2018-09-04 State Farm Mutual Automobile Insurance Company Autonomous electric vehicle charging
US10086782B1 (en) 2016-01-22 2018-10-02 State Farm Mutual Automobile Insurance Company Autonomous vehicle damage and salvage assessment
US11016504B1 (en) 2016-01-22 2021-05-25 State Farm Mutual Automobile Insurance Company Method and system for repairing a malfunctioning autonomous vehicle
US11062414B1 (en) 2016-01-22 2021-07-13 State Farm Mutual Automobile Insurance Company System and method for autonomous vehicle ride sharing using facial recognition
US11015942B1 (en) 2016-01-22 2021-05-25 State Farm Mutual Automobile Insurance Company Autonomous vehicle routing
US10134278B1 (en) 2016-01-22 2018-11-20 State Farm Mutual Automobile Insurance Company Autonomous vehicle application
US10156848B1 (en) 2016-01-22 2018-12-18 State Farm Mutual Automobile Insurance Company Autonomous vehicle routing during emergencies
US10042359B1 (en) 2016-01-22 2018-08-07 State Farm Mutual Automobile Insurance Company Autonomous vehicle refueling
US11119477B1 (en) 2016-01-22 2021-09-14 State Farm Mutual Automobile Insurance Company Anomalous condition detection and response for autonomous vehicles
US10691126B1 (en) 2016-01-22 2020-06-23 State Farm Mutual Automobile Insurance Company Autonomous vehicle refueling
US11126184B1 (en) 2016-01-22 2021-09-21 State Farm Mutual Automobile Insurance Company Autonomous vehicle parking
US10295363B1 (en) 2016-01-22 2019-05-21 State Farm Mutual Automobile Insurance Company Autonomous operation suitability assessment and mapping
US11124186B1 (en) 2016-01-22 2021-09-21 State Farm Mutual Automobile Insurance Company Autonomous vehicle control signal
US10679497B1 (en) 2016-01-22 2020-06-09 State Farm Mutual Automobile Insurance Company Autonomous vehicle application
US10308246B1 (en) 2016-01-22 2019-06-04 State Farm Mutual Automobile Insurance Company Autonomous vehicle signal control
US11181930B1 (en) 2016-01-22 2021-11-23 State Farm Mutual Automobile Insurance Company Method and system for enhancing the functionality of a vehicle
US11189112B1 (en) 2016-01-22 2021-11-30 State Farm Mutual Automobile Insurance Company Autonomous vehicle sensor malfunction detection
US11242051B1 (en) 2016-01-22 2022-02-08 State Farm Mutual Automobile Insurance Company Autonomous vehicle action communications
US10828999B1 (en) 2016-01-22 2020-11-10 State Farm Mutual Automobile Insurance Company Autonomous electric vehicle charging
US10579070B1 (en) 2016-01-22 2020-03-03 State Farm Mutual Automobile Insurance Company Method and system for repairing a malfunctioning autonomous vehicle
US10545024B1 (en) 2016-01-22 2020-01-28 State Farm Mutual Automobile Insurance Company Autonomous vehicle trip routing
US10185327B1 (en) 2016-01-22 2019-01-22 State Farm Mutual Automobile Insurance Company Autonomous vehicle path coordination
US11348193B1 (en) 2016-01-22 2022-05-31 State Farm Mutual Automobile Insurance Company Component damage and salvage assessment
US9940834B1 (en) 2016-01-22 2018-04-10 State Farm Mutual Automobile Insurance Company Autonomous vehicle application
US10747234B1 (en) 2016-01-22 2020-08-18 State Farm Mutual Automobile Insurance Company Method and system for enhancing the functionality of a vehicle
US11441916B1 (en) 2016-01-22 2022-09-13 State Farm Mutual Automobile Insurance Company Autonomous vehicle trip routing
US11022978B1 (en) 2016-01-22 2021-06-01 State Farm Mutual Automobile Insurance Company Autonomous vehicle routing during emergencies
US10503168B1 (en) 2016-01-22 2019-12-10 State Farm Mutual Automotive Insurance Company Autonomous vehicle retrieval
US10802477B1 (en) 2016-01-22 2020-10-13 State Farm Mutual Automobile Insurance Company Virtual testing of autonomous environment control system
US11513521B1 (en) 2016-01-22 2022-11-29 State Farm Mutual Automobile Insurance Copmany Autonomous vehicle refueling
US11526167B1 (en) 2016-01-22 2022-12-13 State Farm Mutual Automobile Insurance Company Autonomous vehicle component maintenance and repair
US10493936B1 (en) 2016-01-22 2019-12-03 State Farm Mutual Automobile Insurance Company Detecting and responding to autonomous vehicle collisions
US10482226B1 (en) 2016-01-22 2019-11-19 State Farm Mutual Automobile Insurance Company System and method for autonomous vehicle sharing using facial recognition
US10818105B1 (en) 2016-01-22 2020-10-27 State Farm Mutual Automobile Insurance Company Sensor malfunction detection
US11600177B1 (en) 2016-01-22 2023-03-07 State Farm Mutual Automobile Insurance Company Autonomous vehicle application
US11625802B1 (en) 2016-01-22 2023-04-11 State Farm Mutual Automobile Insurance Company Coordinated autonomous vehicle automatic area scanning
US10469282B1 (en) 2016-01-22 2019-11-05 State Farm Mutual Automobile Insurance Company Detecting and responding to autonomous environment incidents
US10249109B1 (en) 2016-01-22 2019-04-02 State Farm Mutual Automobile Insurance Company Autonomous vehicle sensor malfunction detection
US10395332B1 (en) 2016-01-22 2019-08-27 State Farm Mutual Automobile Insurance Company Coordinated autonomous vehicle automatic area scanning
US11656978B1 (en) 2016-01-22 2023-05-23 State Farm Mutual Automobile Insurance Company Virtual testing of autonomous environment control system
US11920938B2 (en) 2016-01-22 2024-03-05 Hyundai Motor Company Autonomous electric vehicle charging
US11682244B1 (en) 2016-01-22 2023-06-20 State Farm Mutual Automobile Insurance Company Smart home sensor malfunction detection
US10386192B1 (en) 2016-01-22 2019-08-20 State Farm Mutual Automobile Insurance Company Autonomous vehicle routing
US10386845B1 (en) 2016-01-22 2019-08-20 State Farm Mutual Automobile Insurance Company Autonomous vehicle parking
US11719545B2 (en) 2016-01-22 2023-08-08 Hyundai Motor Company Autonomous vehicle component damage and salvage assessment
US10384678B1 (en) 2016-01-22 2019-08-20 State Farm Mutual Automobile Insurance Company Autonomous vehicle action communications
US10824145B1 (en) 2016-01-22 2020-11-03 State Farm Mutual Automobile Insurance Company Autonomous vehicle component maintenance and repair
US10829063B1 (en) 2016-01-22 2020-11-10 State Farm Mutual Automobile Insurance Company Autonomous vehicle damage and salvage assessment
US10324463B1 (en) 2016-01-22 2019-06-18 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation adjustment based upon route
US11879742B2 (en) 2016-01-22 2024-01-23 State Farm Mutual Automobile Insurance Company Autonomous vehicle application
US10937343B2 (en) 2016-09-26 2021-03-02 The Coca-Cola Company Display device
IT201600101337A1 (en) * 2016-11-03 2018-05-03 Srsd Srl MOBILE TERRESTRIAL OR NAVAL SYSTEM, WITH REMOTE CONTROL AND CONTROL, WITH PASSIVE AND ACTIVE DEFENSES, EQUIPPED WITH SENSORS AND COMPLETE ACTUATORS CONTEMPORARY COVERAGE OF THE SURROUNDING SCENARIO

Similar Documents

Publication Publication Date Title
US20040077285A1 (en) Method, apparatus, and system for simulating visual depth in a concatenated image of a remote field of action
US20030231244A1 (en) Method and system for manipulating a field of view of a video image from a remote vehicle
US6954695B2 (en) Apparatus system and method for remotely controlling a vehicle over a network
US20040005927A1 (en) Facility for remote computer controlled racing
US7050889B2 (en) Method and system for a computer controlled racing network
US10617963B2 (en) Method and system for controlling virtual reality attraction
CN109076249B (en) System and method for video processing and display
JP2022533637A (en) Metabirth data fusion system
CN102356417B (en) Teleoperation method and human robot interface for remote control of machine by human operator
CN105080134A (en) Realistic remote-control experience game system
US5616079A (en) Three-dimensional games machine
KR102042232B1 (en) System for providing augmented reality interactive game contents using a drones
CN108126340A (en) A kind of outdoor scene model for simulation or game station is combined the augmented reality system for being shown and being operated with virtual software
WO2007128949A1 (en) Display apparatus and method
CN206762241U (en) A kind of e-sports analogue system based on mixed reality
CA2406000A1 (en) Interactive video device and method of use
CN108650494B (en) Live broadcast system capable of instantly obtaining high-definition photos based on voice control
CN108650522B (en) Live broadcast system capable of instantly obtaining high-definition photos based on automatic control
US20030011619A1 (en) Synchronization and blending of plural images into a seamless combined image
CN108646776B (en) Imaging system and method based on unmanned aerial vehicle
CN113022884A (en) Unmanned aerial vehicle load test simulation method and system
KR20160102845A (en) Flight possible omnidirectional image-taking camera system
CN104700683A (en) Real-scene two-way interactive type driving platform
CN117440130A (en) Highway inspection system
US20030220723A1 (en) Apparatus system and method for remotely controlling a vehicle over a peer-to-peer network

Legal Events

Date Code Title Description
AS Assignment

Owner name: RACING VISIONS INVESTMENTS, INC., ARIZONA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BONILLA, VICTOR G.;MCCABE, JAMES W.;REEL/FRAME:015122/0575

Effective date: 20040508

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION