US20100198876A1 - Apparatus and method of embedding meta-data in a captured image - Google Patents

Apparatus and method of embedding meta-data in a captured image

Info

Publication number: US20100198876A1
Authority: US (United States)
Prior art keywords: data, meta, image, format, combined
Legal status: Abandoned (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number: US12/363,966
Inventor: Slavomir Estok
Current Assignee: Honeywell International Inc; Hand Held Products Inc (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Honeywell International Inc
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Honeywell International Inc
Priority to US12/363,966
Assigned to HAND HELD PRODUCTS, INC. (assignment of assignors interest; see document for details). Assignors: ESTOK, SLAVOMIR
Publication of US20100198876A1
Priority to US15/816,541 (published as US10942964B2)
Legal status: Abandoned

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
                    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
                        • G06F 16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
                    • G06F 16/50 Information retrieval of still image data
                        • G06F 16/51 Indexing; Data structures therefor; Storage structures
                        • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
                            • G06F 16/583 Retrieval using metadata automatically derived from the content
        • G11 INFORMATION STORAGE
            • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
                • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
                    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
                        • G11B 27/19 Indexing etc. by using information detectable on the record carrier
                            • G11B 27/28 Indexing etc. by using information signals recorded by the same method as the main recording
                                • G11B 27/30 Indexing etc. on the same track as the main recording
                                    • G11B 27/3027 Indexing etc. where the used signal is digitally coded
                                    • G11B 27/3081 Indexing etc. where the used signal is a video-frame or a video-field (P.I.P.)

Definitions

  • aspects of the present invention relate to a method of embedding meta-data to an image, and an apparatus to embed the meta-data to the image. More specifically, aspects of the present invention relate to a method of embedding an associated meta-data of an image into the image, and an apparatus to embed the associated meta-data into the image so that the associated meta-data and the image may be visually perceived by a human user or programmatically perceived by a device, or both.
  • Various devices and peripherals allow a user to capture an image of various locations or items in the form of image files (such as Joint Photographic Experts Group (JPEG)) or video (such as Moving Picture Experts Group (MPEG4)), or capture sounds in the form of audio files (such as MP3).
  • the various devices and peripherals may further collect additional information for the captured image, video, or sounds contemporaneously. Examples of such additional information include time, geographical location, temperature, identification, and other properties of interest to the user.
  • the additional information should be transformed into a usable format, and embedded into a file of the captured images, videos, or sounds in the usable format.
  • the additional information is converted into meta-data having the same format as the captured images, videos, or sounds, and is then embedded into the captured images, videos, or sounds.
  • the embedded additional information and the file may be rendered and perceived by a human user, a device, or both.
  • an apparatus to generate and render data embedded with associated meta-data in a human or machine recognizable format includes an obtaining device to acquire first data in a predetermined format and associated second data comprising information of the first data, and to output the first data and the associated second data; a processing device to receive the first data and the associated second data from the obtaining device, to process the first data and the associated second data to thereby generate meta-data based on the first data and/or the associated second data, to convert the meta-data into the predetermined format of the first data, and to embed the converted meta-data into the first data as a combined data in the predetermined format; and a rendering device to receive the combined data from the processing device, and to render the combined data in the human or machine recognizable format.
  • an apparatus to generate and render an image embedded with associated meta-data includes a data acquisition driver to acquire an image, and associated data comprising information of the image; a data processor to process the acquired image and the associated data from the data acquisition driver in order to generate meta-data based on the image, the associated data, or both, to convert the meta-data into a same format as that of the image, and to embed the converted meta-data into the image; and a rendering driver to receive the image embedded with the converted meta-data from the data processor, and to render the image with the embedded converted meta-data as an output image.
  • a method of generating and rendering data embedded with associated meta-data in a human or machine recognizable format includes obtaining first data in a predetermined format, and second data that is associated with the first data; processing the obtained first data and the associated second data to generate meta-data based on the first data, and/or the associated second data, converting the meta-data into the predetermined format of the first data; embedding the converted meta-data into the first data to obtain a combined data; and rendering the combined data in the predetermined format of the first data, the predetermined format of the first data being the human or machine recognizable format.
  • a method of embedding meta-data in an image to be rendered together includes obtaining the image; obtaining associated data contemporaneously with the image, the associated data being additional information of the image and including a user input data, environmental data, and/or collected data that is associated with the image; extracting meta-data from the image that characterizes the image according to user selection as an extracted meta-data; generating a consolidated meta-data by consolidating the associated data and the extracted meta-data; generating a meta-data sub-image, which is a visual representation of the consolidated meta-data, by converting the consolidated meta-data into the same format as the image; embedding the meta-data sub-image into the image to generate a combined image in the same format as the image; and visually rendering the combined image on a medium or a display device.
  • a visually rendered combined image formed on a medium, the combined image comprising: a photographic image in a predetermined format; and an embedded meta-data sub-image that is positioned in a predetermined position of the photographic image, the embedded meta-data sub-image being an image representation of information of the photographic image, and being in the same predetermined format as the photographic image.
  • FIG. 1 illustrates an input device to obtain an image and associated information according to an aspect of the present invention
  • FIG. 2 illustrates a schematic of an apparatus 200 to embed meta-data to an obtained data according to an aspect of the present invention
  • FIG. 3 illustrates a schematic of an apparatus to embed meta-data in an image according to an aspect of the present invention
  • FIG. 4 illustrates a method of embedding meta-data to an obtained data according to an aspect of the present invention
  • FIG. 5 illustrates a method of embedding meta-data in an image according to an aspect of the present invention.
  • FIG. 6A illustrates an image with an embedded meta-data, referred to as a combined image, and FIG. 6B illustrates a standalone meta-data representation, which may be affixed to a separately rendered image, according to aspects of the present invention.
  • a method is herein conceived to be a sequence of steps or actions leading to a desired result and may be implemented as software. While it may be convenient to discuss such software as if embodied by a single program, most implementations will distribute the described functions among discrete (and some not so discrete) pieces of software. These pieces are often described using such terms of art as “programs,” “objects,” “functions,” “subroutines,” “libraries,” “.dlls,” “APIs,” and “procedures.” While one or more of these terms may be described in aspects of the present invention, there is no intention to limit the scope of the claims.
  • FIG. 1 illustrates an input device to obtain an image and associated information according to an aspect of the present invention.
  • a type of data collection device referred to as a portable data terminal (PDT) is described as an example of the input device.
  • a PDT generally integrates a mobile computer, one or more data transport paths, and one or more data collection subsystems.
  • the mobile computer portion is generally similar to typical touch-screen, consumer-oriented portable computing devices (e.g., “Pocket PCs” or “PDAs”), such as those available from PALM®, HEWLETT PACKARD®, and DELL®.
  • the data transport paths include wired and wireless paths, such as 802.11, IrDA, BLUETOOTH, RS-232, USB, CDMA, and GSM (including GPRS), and so forth.
  • the data collection subsystem generally comprises a device that captures data from an external source, for example, touches, keystrokes, RFID signals, images, and bar codes.
  • the PDT is distinguished from typical consumer oriented portable computing devices through the use of “industrial” components integrated into a housing that provide increased durability, ergonomics, and environmental independence over the typical consumer oriented devices. Additionally, the PDT tends to provide improved battery life by utilizing superior batteries and power management systems.
  • Referring back to FIG. 1, the PDT 100 utilizes an elongated body 102 supporting a variety of components, including: a battery (not illustrated); a touch screen 106 (generally comprising an LCD screen under a touch sensitive panel); a keypad 108 (including a scan button 108 a); a scan engine (not illustrated); and a data/charging port (also not illustrated).
  • the scan engine may comprise, for example, one or more of an image engine, a laser engine, or an RFID engine.
  • the scan engine is generally located near a top end 110 of the PDT 100 and is used to scan markings such as product codes.
  • the data/charging port typically comprises a proprietary mechanical interface with one set of pins or pads for transmitting and receiving data (typically via a serial interface standard such as USB or RS-232) and a second set of pins or pads for receiving power for operating the system and/or charging the battery.
  • the data charging port is generally located near a bottom end 111 of the PDT 100 .
  • the user presses the scan key 108 a to initiate data capture via the scan engine.
  • the captured data is analyzed, e.g., decoded to identify the information represented, stored, and displayed on the touch screen 106 . Additional processing of the data may take place on the PDT 100 and/or an external data processing resource to which the data is transmitted.
  • the scan key 108 a or another key may be used to initiate image capture via the image engine for further processing.
  • the image engine may be used to obtain an image of a subject, such as merchandise.
  • the PDT 100 may have a microphone to capture sounds, sensors to measure temperature or other environmental information, a GPS system to obtain position information of a location, and a wireless connection to connect to a network or the internet, for example.
  • FIG. 2 illustrates a schematic of an apparatus to embed meta-data to an obtained data according to an aspect of the present invention.
  • the apparatus 200 includes an obtaining device 210 , a processing device 220 , and a rendering device 230 .
  • the obtaining device 210 acquires, or receives input of first data, and second data that is associated or corresponds to the first data, and outputs the first data and the associated second data to the processing device 220 .
  • the processing device 220 processes the acquired or received first data and the associated second data, generates meta-data based on the first data, the associated second data, or both, converts the meta-data into a format of the first data (referred to as a sub meta-data) or another format (referred to as a block meta-data), and either embeds the meta-data into the first data as a combined data or provides the block meta-data for later rendering as a standalone meta-data.
  • the processing device 220 then provides the combined data and the block meta-data to the rendering device 230 for rendering.
  • the rendering device 230 receives the combined data and/or the block meta-data from the processing device 220 , and outputs the combined data in a predetermined format, and/or renders the block meta-data as standalone meta-data in the same or different format as the predetermined format.
  • the first and second data may be an image, video, audio, sound, text, music, or in other human user or device perceivable formats.
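  • As an illustration only (not part of the patent disclosure), the obtain/process/render flow of the apparatus 200 may be sketched in Python roughly as follows; all names and the format conversion are hypothetical stand-ins:

```python
from dataclasses import dataclass

@dataclass
class CombinedData:
    first_data: bytes     # e.g. an image, audio clip, or text in its native format
    embedded_meta: bytes  # meta-data converted into that same format

def obtain():
    # Obtaining device 210: acquire first data plus associated second data.
    first_data = b"...captured image bytes..."
    second_data = {"time": "10:00", "location": "warehouse 7"}
    return first_data, second_data

def process(first_data, second_data):
    # Processing device 220: generate meta-data from either input, convert it
    # into the format of the first data, and embed it as a combined data.
    meta = "; ".join(f"{k}={v}" for k, v in second_data.items())
    return CombinedData(first_data, meta.encode())  # stand-in conversion

def render(combined):
    # Rendering device 230: output the combined data in a perceivable format.
    print(combined.first_data, combined.embedded_meta)

render(process(*obtain()))
```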
  • FIG. 3 illustrates a schematic of an apparatus to embed meta-data in an image according to an aspect of the present invention.
  • the apparatus 1000 includes a data acquisition driver 1100 , a data processor 1200 , and a data rendering driver 1300 .
  • the data acquisition driver 1100 acquires, or receives input of, image data, and other data corresponding to the image data (referred to as an associated data), and outputs the image data and the associated data to the data processor 1200 .
  • the data processor 1200 processes the acquired or received image data and the associated data, generates meta-data based on the image data, the associated data, or both, converts the meta-data into a visual format (referred to as a sub-visual meta-data) or a data block format (referred to as a block meta-data), and either embeds the sub-visual meta-data into the image data or provides the block meta-data for later rendering.
  • the rendering driver 1300 receives the image data that is embedded with the sub-visual meta-data and/or the block meta-data from the data processor 1200 , and outputs the image data that is embedded with the sub-visual meta-data as an output image, or renders the block meta-data as a visual standalone meta-data, or both.
  • the data acquisition driver 1100 includes a visual image capture driver 1110 and a meta-data capture driver 1120 .
  • the visual image capture driver 1110 may be a device, software (SW), or firmware (FW) of the apparatus 1000 , and may simply obtain an image from a visual capture device 1130 , or may trigger the visual capture device 1130 to capture the image and output the obtained or captured image back to the visual image capture driver 1110 .
  • the visual image capture driver 1110 may also process the obtained or captured image, which may be in a predetermined data format, and convert the image into another predetermined data format or may preprocess the image for later processing in the data processor 1200 .
  • the visual image capture driver 1110 then outputs the image in the predetermined data format, or which has been preprocessed, to the data processor 1200 .
  • the visual capture device 1130 may be any device capable of capturing or obtaining an image.
  • the visual capture device 1130 may be a built-in and/or externally connected video capture device, which is used to take a picture, a video, or a sequence of pictures of an item or a location, such as inventory or a warehouse.
  • Other examples of the visual capture device 1130 include a digital camera, a scanner, a camcorder, a cellphone having a camera function, a portable data terminal (PDT), and a webcam, and may also encompass a built-in optical imager device and an external video camera connected over BLUETOOTH.
  • the image obtained by the visual capture device 1130 may be in any image format, including JPEG (Joint Photographic Experts Group), PDF (Portable Document Format), TIFF (Tagged Image File Format), or MPEG (Moving Picture Experts Group), for example.
  • the meta-data capture driver 1120 obtains a user input, environmental data, and/or collected data to associate with the obtained image, from a user input device 1140 , one or more environment sensors 1150 , and/or a smart meta-data collection agent 1160 . Instead of simply obtaining the user input, the environmental data, and/or the collected data, the meta-data capture driver 1120 may trigger the user input device 1140 to obtain and output the user input, the one or more environment sensors 1150 to obtain and output the environmental data, and the smart meta-data collection agent 1160 to obtain and output the collected data.
  • Each of the user input device 1140 , the one or more environment sensors 1150 , and the smart meta-data collection agent 1160 outputs the respective user input, environmental data, or collected data to the meta-data capture driver 1120 , which in turn forwards the resulting associated data to the data processor 1200 .
  • the user input device 1140 may be any device capable of obtaining an input from a user.
  • the user input device 1140 may be a built-in and/or externally connected user-system interaction device, which is used to obtain preferences and various data input from the user.
  • Examples of the user input device 1140 include a keyboard, a key pad, a mouse, a touch screen, a touch pad, a scanner, and a trackball, for example.
  • the user input device 1140 may include an optical imager device to scan a 2D barcode representation or optical character recognition (OCR)-recognizable text presented by the user on plain paper.
  • the user input device 1140 may be a wired or wireless device. Wireless devices may be BLUETOOTH devices, for example.
  • the one or more environment sensors 1150 may be any device capable of obtaining data from the environment that is associated with the image.
  • the one or more environment sensors 1150 may be built-in and/or externally connected sensors, which are used to gather complementary information from the environment in which the image was taken.
  • Examples of the one or more environment sensors 1150 include thermometers or other meteorological sensors, a body-temperature and blood pressure meter, a GPS locator, an electronic compass, a movement sensor, a speech recognizer, a radio frequency identification (RFID) sensor, a facial biometry reader, a fingerprint reader, an electronic key reader, and a timepiece, for example.
  • the environment sensors 1150 may be thermometers or other meteorological sensors if the associated image is of a cloud; a body-temperature and blood pressure meter if the associated image is of a patient in a hospital; a GPS locator if the associated image is of a historic building; and so on.
  • the one or more environment sensors 1150 may be wired or wireless devices. Wireless devices may be BLUETOOTH devices, for example.
  • the smart meta-data collection agent 1160 may be any device capable of obtaining data of, or for use with, the image.
  • the smart meta-data collection agent 1160 may be used to perform a database query, run internet searches, and/or mine databases for data relating to the image.
  • the smart meta-data collection agent 1160 may be an on-device database query agent, a remotely running internet search agent connected over TCP/IP, or a remote database mining agent connected over a Global System for Mobile communications/General Packet Radio Service (GSM/GPRS) connection, in aspects of the present invention, though not limited thereto.
  • the user input device 1140 , the one or more environment sensors 1150 , and the smart meta-data collection agent 1160 may be devices, such as hardware, or may be software (SW) or firmware (FW) that runs on a processor or a dedicated device.
  • the user input, the environmental data, and/or the collected data respectively output from the user input device 1140 , the one or more environment sensors 1150 , and the smart meta-data collection agent 1160 are collected in the meta-data capture driver 1120 .
  • the meta-data capture driver 1120 processes the user input, the environmental data, and/or the collected data, and generates an associated data from the user input, the environmental data, the collected data, and/or portions thereof.
  • the associated data may already be a meta-data in a predetermined format.
  • the associated data is then output to the data processor 1200 .
  • the associated data corresponds to the obtained or captured image from the visual capture device 1130 , and provides additional information about the image.
  • the associated data may be a jargon term for an item that is input by the user if the image is of the item that is part of an inventory, or may be descriptive information of a location that is input by the user if the image is of that location, such as a warehouse.
  • the associated data may be the environmental data, such as temperature and/or humidity, obtained by the one or more environment sensors 1150 if the image is of a warehouse containing certain inventory, such as, ice cream.
  • the associated data may be the collected data, such as recall information, obtained by the smart meta-data collection agent 1160 if the image is of merchandise that has been found defective.
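  • A minimal sketch (hypothetical, not from the patent) of how the meta-data capture driver 1120 might assemble the associated data from the three sources of FIG. 3; a real implementation would read hardware and network sources rather than return literals:

```python
def read_user_input():
    # user input device 1140: e.g. a jargon term typed for an inventory item
    return {"item": "pallet 42 (ice cream)"}

def read_environment_sensors():
    # environment sensors 1150: e.g. warehouse temperature and humidity
    return {"temperature_c": -18.5, "humidity_pct": 41}

def query_collection_agent(item):
    # smart meta-data collection agent 1160: e.g. a recall-database lookup
    return {"recall_status": f"none on file for {item}"}

def build_associated_data():
    # meta-data capture driver 1120: consolidate the sources into one record
    data = read_user_input()
    data.update(read_environment_sensors())
    data.update(query_collection_agent(data["item"]))
    return data

print(build_associated_data())
```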
  • the data processor 1200 processes the various acquired or received data (including the image and the associated data) from the data acquisition driver 1100 , namely, from the visual image capture driver 1110 and the meta-data capture driver 1120 .
  • the data processor 1200 uses the acquired or received data to generate meta-data that is associated with the image, and embeds the meta-data to the image or provides the meta-data for later rendering.
  • the data processor 1200 includes a visual meta-data extractor 1210 , a meta-data processor 1220 , a smart meta-data miner 1230 , a raw-image merger 1240 , a meta-data to raw-image formatter 1250 , and a meta-data to data-block formatter 1260 , for example.
  • the visual meta-data extractor 1210 may be a device, software (SW), or firmware (FW), and receives input of the obtained or captured image (referred to simply as an image) in a predetermined data format or which has been preprocessed, analyzes the image, and extracts one or more meta-data that characterizes the image. Further, the visual meta-data extractor 1210 obtains additional meta-data from, or provides the meta-data to, one or more other components of the data processor 1200 , such as the meta-data processor 1220 .
  • the visual meta-data extractor 1210 outputs, to the raw-image merger 1240 , the image either as is (referred to as a raw image), or enhanced by highlighting one or more characteristics in the image that were extracted and turned into meta-data (referred to as an enhanced image) according to a user preference.
  • the visual meta-data extractor 1210 may be a logical component implemented in software (SW) and/or firmware (FW), such as a field programmable gate array (FPGA) or a complex programmable logic device (CPLD), and is used to analyze the image in the predetermined data format or which has been preprocessed, and extracts (or generates) meta-data from the image.
  • the extracted meta-data may include information as to face recognition, biometry data extraction, OCR recognition, object recognition, and position/distance measurement, for example, from the image.
  • the image may be output in an enhanced format to indicate that meta-data has been extracted from the image. For example, a face in the image may be highlighted.
  • the selection of which characteristics of the image to extract as meta-data may be based on user preference.
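  • As a concrete, hypothetical example of such extraction, the face-recognition case can be approximated with OpenCV: detected faces become extracted meta-data, and the image is re-saved with the detected characteristics highlighted, i.e., the enhanced image. The file names and the choice of a Haar cascade are assumptions:

```python
import cv2  # pip install opencv-python; "subject.jpg" is an assumed input file

img = cv2.imread("subject.jpg")
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Extracted meta-data: one entry per detected face characteristic.
extracted_meta = [{"feature": "face", "bbox": [int(v) for v in f]} for f in faces]

# Enhanced image: highlight each characteristic that was turned into meta-data.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("enhanced.jpg", img)
```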
  • the meta-data processor 1220 may be a device, software (SW), or firmware (FW), and receives the associated data (which may be meta-data in a predetermined format) from the meta-data capture driver 1120 , the image from the visual meta-data extractor 1210 , and/or the extracted meta-data from the visual meta-data extractor 1210 .
  • the meta-data processor 1220 analyzes the received associated data, the extracted meta-data, and/or the image, extracts additional meta-data from the associated data and/or the image, consolidates or generates a consolidated meta-data from the extracted meta-data and the additional meta-data that characterizes the image and information related to the image data, respectively, and outputs the consolidated meta-data to the meta-data to RAW image formatter 1250 and/or the meta-data to data-block formatter 1260 .
  • the meta-data processor 1220 may provide one or more meta-data that have been consolidated or generated to the visual meta-data extractor 1210 . Additionally, the meta-data processor 1220 may obtain supplementary meta-data from the smart meta-data miner 1230 so that the meta-data processor 1220 may generate or consolidate the consolidated meta-data by also using the supplementary meta-data or portions thereof. Further, the meta-data processor 1220 may provide the consolidated meta-data to the smart meta-data miner 1230 .
  • the supplementary meta-data is additional data that is obtained from an outside source, and may be obtained based on the associated data, the extracted meta-data, and/or the image.
  • the consolidated meta-data need not be generated or consolidated from the entire associated data, the extracted meta-data, the image, and/or supplementary meta-data by the meta-data processor 1220 . Rather, the consolidated meta-data may be consolidated or generated from one or more portions of the associated data, the extracted meta-data, the image, and/or the supplementary meta-data, respectively.
  • Examples of one or more portions of the associated data, the extracted meta-data, the image, and/or the supplementary meta-data may include information that relates to the image, such as the time the image was taken, the geographical position at which the image was taken, the types of objects depicted in the image, properties of the subjects (or depicted items) of the image, temperatures of the subject or the environment at the time the image was taken, and/or skin tone, if the image is of a person.
  • the output of the meta-data processor 1220 is consolidated meta-data that is generated or consolidated from all the associated data, the extracted meta-data, the image, the supplementary meta-data, or portions thereof that were obtained from the meta-data capture driver 1120 , the visual meta-data extractor 1210 , and the smart meta-data miner 1230 .
  • the consolidated meta-data may be generated or consolidated also by combining or synthesizing the associated data, the extracted meta-data, the image, and/or the supplementary meta-data. Then, the consolidated meta-data is output from the meta-data processor 1220 to the meta-data to RAW image formatter 1250 and/or the meta-data to data-block formatter 1260 .
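  • The consolidation step itself might look like the following sketch (illustrative only), where a user-preference whitelist selects which portions of the associated data, extracted meta-data, and supplementary meta-data enter the consolidated meta-data:

```python
def consolidate(associated, extracted, supplementary, preferences):
    # Merge the meta-data sources, keeping only the fields the user selected.
    merged = {**associated, **supplementary, "extracted": extracted}
    return {k: v for k, v in merged.items() if k in preferences}

consolidated = consolidate(
    associated={"temperature_c": -18.5, "operator": "jdoe"},
    extracted=[{"feature": "face", "bbox": [40, 32, 120, 120]}],
    supplementary={"recall_status": "none"},
    preferences={"temperature_c", "recall_status", "extracted"},
)
print(consolidated)  # only the user-selected portions remain
```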
  • the smart meta-data miner 1230 may be a logical component implemented in software (SW), a device, or firmware (FW), and connects to remote information sources, such as databases, search engines, and/or search agents that are accessible via a network and/or the internet.
  • the smart meta-data miner 1230 searches and retrieves the supplementary meta-data that is additional to a base meta-data provided or queried by the meta-data processor 1220 .
  • the smart meta-data miner 1230 provides the retrieved supplementary meta-data to the meta-data processor 1220 .
  • the meta-data to RAW image formatter 1250 is an image processing component implemented in software (SW) and/or firmware (FW), and projects or converts the consolidated meta-data into a visual representation according to user preference, such as a 2D barcode.
  • the visual representation of the consolidated meta-data is referred to as a meta-data sub-image.
  • the meta-data to RAW image formatter 1250 outputs the meta-data sub-image to the RAW-image merger 1240 .
  • the RAW-image merger 1240 is an image processing component implemented in software (SW) and/or firmware (FW), and merges the raw image or the enhanced image from the visual meta-data extractor 1210 with the meta-data sub-image from the meta-data to RAW-image formatter 1250 , into one overall merged image according to user preferences.
  • the RAW-image merger 1240 embeds the meta-data sub-image into the raw image or the enhanced image to generate an embedded image.
  • the merged image and embedded image may be referred to as simply a combined image. Accordingly, the combined image is output from the data processor 1200 to the data rendering driver 1300 .
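  • The formatter/merger pair can be approximated with off-the-shelf Python packages; a QR code stands in below for the patent's 2D barcode examples (Aztec encoders are less commonly packaged), and the file names are assumptions:

```python
import json
import qrcode              # pip install qrcode[pil]
from PIL import Image      # pip install pillow

# Meta-data to RAW image formatter 1250: project the consolidated meta-data
# into a visual representation, the meta-data sub-image.
payload = json.dumps({"time": "10:00", "temperature_c": -18.5})
qrcode.make(payload).save("meta_sub_image.png")

# RAW-image merger 1240: embed the sub-image into the raw or enhanced image.
base = Image.open("subject.jpg").convert("RGB")
sub = Image.open("meta_sub_image.png").convert("RGB")
sub = sub.resize((base.width // 5, base.width // 5))   # keep the overlay small
base.paste(sub, (base.width - sub.width, 0))           # upper-right corner
base.save("combined.jpg")                              # the combined image
```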
  • the meta-data to data-block formatter 1260 is a data processing component implemented in software (SW) and/or firmware (FW), and formats the consolidated meta-data into a standalone user or machine recognizable representation of the consolidated meta-data according to user preferences.
  • the meta-data to data-block formatter 1260 outputs the formatted consolidated meta-data representation to the data rendering driver 1300 .
  • the formatted consolidated meta-data representation is output from the meta-data to data-block formatter 1260 to the data renderer 1320 of the data rendering driver 1300 .
  • the formatted consolidated meta-data representation will be referred to as a standalone meta-data representation.
  • the RAW-image merger 1240 receives the raw image or the enhanced image from the visual meta-data extractor 1210 , and simply outputs the raw or the enhanced image to the image renderer 1310 , without creating a combined image, according to user preference. If the raw or the enhanced image is simply output, the standalone meta-data representation from the data renderer 1320 may be output to be physically attached or adhered to the raw or the enhanced image that is rendered by the image renderer 1310 .
  • the data rendering driver 1300 receives the combined image and/or the standalone meta-data representation from the data processor 1200 , and outputs a primary output 1330 that is a visually rendered combined image and/or a secondary output 1340 that is a rendered standalone meta-data representation.
  • the data rendering driver 1300 includes the image renderer 1310 and the data renderer 1320 .
  • the image renderer 1310 receives the combined image and outputs the rendered combined image, and the data renderer 1320 receives the standalone meta-data representation and outputs the rendered standalone meta-data representation.
  • one or both of the primary output 1330 and the secondary output 1340 may be saved or stored in an electronic form or other format in the apparatus 1000 , and/or transferred to an outside device for later processing, rendering, or publishing.
  • the storing, processing, rendering, and publishing of the primary output 1330 and the secondary output 1340 may include use of any storage medium or distribution network such that information systems or media broadcasting services are able to search for, process, and render primary output 1330 in the form of the combined image and/or the secondary output 1340 in the form of the standalone meta-data representation.
  • the image renderer 1310 is a component implemented in software (SW) and/or firmware (FW), and outputs a primary output 1330 that results from merging or embedding the meta-data sub-image into the raw image or the enhanced image, and then rendering the combined image.
  • the primary output 1330 is a visually readable image or picture, and has the meta-data sub-image overlying a selected portion of the image or picture according to user preference.
  • the location of the meta-data sub-image in the image, when rendered, may be in a predetermined portion of the image.
  • locations include a particular corner of a rectangular image, or a border of the rectangular image that may be added or augmented to an existing or standard border.
  • the meta-data sub-image, when rendered with the image, may be a watermark that is semi-visible, semi-transparent, or translucent.
  • the meta-data sub-image may be rendered as a symbol, text, script, or barcode, where examples of such barcodes include linear barcodes, such as, universal product codes (UPC), or matrix or 2D barcodes, such as, an Aztec code.
  • the usable type of barcode or other symbols to render the meta-data sub-image is not limited.
  • the meta-data sub-image may be rendered to be located in the upper-right corner of the image.
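  • For the watermark variant mentioned above, a semi-transparent rendering of the sub-image can be alpha-composited over the chosen corner; this Pillow sketch is illustrative, and the 96/255 opacity is an arbitrary assumption:

```python
from PIL import Image

base = Image.open("subject.jpg").convert("RGBA")
sub = Image.open("meta_sub_image.png").convert("RGBA")
sub.putalpha(96)   # ~38% opacity: a semi-visible, translucent watermark

# Paste the translucent sub-image onto a transparent layer, then composite.
layer = Image.new("RGBA", base.size, (0, 0, 0, 0))
layer.paste(sub, (base.width - sub.width, 0))
Image.alpha_composite(base, layer).convert("RGB").save("watermarked.jpg")
```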
  • the combined image may be rendered or output by being printed on a medium.
  • the medium may be paper, a plastic card, a label, for example.
  • the combined image may be implemented as a photograph, a PDF file, an x-ray image, or an image that is displayed on a display screen, for example. If the output is printed on a medium, a printer may be further utilized to print the combined image to the medium.
  • the data renderer 1320 is a component implemented in software (SW) and/or firmware (FW), and outputs a secondary output 1340 that is the standalone meta-data representation.
  • the standalone meta-data representation is rendered in a selected structured format, which can be used to physically attach the rendered standalone meta-data representation to a rendered raw image or enhanced image. Additionally, the standalone meta-data representation may be electronically attached to the raw image or the enhanced image in their electronic formats. Additionally, the standalone meta-data representation may simply be stored electronically.
  • the physical attachment of the rendered standalone meta-data representation to the rendered raw image or the enhanced image may be exemplified by physically attaching a printed barcode to a separately printed image.
  • the separate printing of the image related to the standalone meta-data representation is not required.
  • the standalone meta-data representation may be in a visual picture file format, such as a bitmap, that contains a 2D barcode to be printed as a label.
  • the label can then be physically attached onto an item that was the subject of the image used to generate the 2D barcode.
  • the item that is the subject of the image may be a mail parcel, and the label may be an address label for the mail parcel.
  • Other items that may be the subject of the image may include a rental car, warehouse inventory items, or air passenger luggage.
  • the electronic attachment of the standalone meta-data representation to the raw image or the enhanced image may be exemplified by associating date/time information, shutter speed information, and/or light conditions information to a regular digital camera JPEG file.
  • the electronic storing of the standalone meta-data representation may be exemplified by storage of the standalone meta-data representation in a computer database. If stored in such a manner, the standalone meta-data representation and information thereof can be published to be searchable by smart data search engines, information systems, and/or media broadcasting services.
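  • Electronic attachment of such meta-data to a JPEG can be illustrated with the piexif package, which writes EXIF fields (date/time, shutter speed) into the file rather than into the pixels; the field values below are placeholders:

```python
import piexif  # pip install piexif

exif_dict = {
    "0th": {piexif.ImageIFD.ImageDescription: b"warehouse 7, aisle 3"},
    "Exif": {
        piexif.ExifIFD.DateTimeOriginal: b"2009:02:02 10:00:00",
        piexif.ExifIFD.ExposureTime: (1, 250),   # 1/250 s shutter speed
    },
}
piexif.insert(piexif.dump(exif_dict), "combined.jpg")  # edits the file in place
```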
  • the image renderer 1310 outputs the primary output 1330 that results from merging or embedding the meta-data sub-image into the raw image or the enhanced image, and which is the combined image, and the data renderer 1320 outputs the secondary output 1340 that is the standalone meta-data representation.
  • one or both of the primary output 1330 in the form of the combined image and the secondary output 1340 in the form of the standalone meta-data representation may be saved or stored in an electronic form or other format in the apparatus 1000 , and/or transferred to an outside device or renderer for later processing, rendering, or publishing.
  • the transfer to the outside device or renderer to be processed, rendered, or published may be by way of a network or the internet, and the outside device or renderer may be a remote network server or a mobile device.
  • the combined image and/or the standalone meta-data representation may be searchable or searched by information systems or media broadcasting services.
  • one or both of the primary output 1330 in the form of the combined image and the secondary output 1340 in the form of the standalone meta-data representation may be output in the electronic form or the other format to a virtual medium, or may be broadcast.
  • the virtual medium may be a networked server hard-drive, or a remote mobile device file-system.
  • the primary and/or the secondary output to be broadcast may be forwarded to a public-internet or company-intranet web page, output as a broadcast video stream for viewing via a display or a television, output as a web-page RSS feed to be distributed over a community of mobile devices, or any combination thereof.
  • Additional types of publishing or broadcasting formats may include JPEG and MPEG formats for displays, such as on LCD-TV screens, or BITMAP format for mobile devices, such as personal digital assistants (PDAs) or cell phones.
  • FIG. 3 discusses the apparatus 1000 in terms of images, so that the apparatus 1000 acquires image data and other data corresponding to the image data; generates a meta-data based on the image data and the corresponding other data; merges or embeds the meta-data into the image and/or provides a standalone meta-data; and renders a combined image and/or the standalone meta-data.
  • the apparatus need not be limited to handling images, and the inputs for the apparatus 1000 can be non-visual inputs, such as, sound or machine readable data. In general, the inputs for the apparatus 1000 may simply be any data that can be merged or embedded with a corresponding meta-data.
  • the data acquisition driver 1100 acquires, or receives input of, music data and other data corresponding to the music data (referred to as the associated data), and outputs the music data and the associated data to the data processor 1200 .
  • the data processor 1200 processes the acquired or received music data and the associated data, generates meta-data based on the music data, the associated data, or both, converts the meta-data into an audible format (referred to as a sub-aural meta-data) or a data block format (referred to as a block meta-data), and either embeds the sub-aural meta-data into the music data or provides the block meta-data for later standalone rendering.
  • the rendering driver 1300 receives the music data embedded with the sub-aural meta-data and/or the block meta-data from the data processor 1200 , and outputs the music data embedded with the sub-aural meta-data as output sound, or renders the block meta-data as an aural standalone meta-data, or both.
  • the embedded sub-aural meta-data may be reproduced before or after the music data, in aspects of the present invention.
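  • A toy sketch of such sub-aural embedding (purely illustrative; the byte-to-tone modulation scheme is invented for this example) appends a tone-encoded meta-data segment after placeholder music samples:

```python
import numpy as np

RATE = 44100

def tone(freq, dur=0.08):
    t = np.arange(int(RATE * dur)) / RATE
    return 0.3 * np.sin(2 * np.pi * freq * t)

def encode_meta_as_audio(text):
    # Map each byte to a distinct audible frequency (a toy modulation scheme).
    return np.concatenate([tone(400 + 4 * b) for b in text.encode()])

music = tone(440, dur=1.0)                      # 1 s placeholder for music data
sub_aural = encode_meta_as_audio("take=7;mic=SM58")
combined = np.concatenate([music, sub_aural])   # meta-data after the music
```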
  • FIG. 4 illustrates a method of embedding meta-data to an obtained data according to an aspect of the present invention.
  • the method of FIG. 4 may be practiced using the apparatus 200 as shown in FIG. 2 .
  • first data, and second data that is associated with or corresponds to the first data, are acquired or received in operation S 410 .
  • the acquired or received first data and the associated second data is processed to generate meta-data based on the first data, the associated second data, or both, in operation S 420 .
  • Then, the meta-data is converted into a format of the first data (referred to as a sub meta-data) or another format (referred to as a block meta-data), and the meta-data is embedded into the first data as a combined data. Accordingly, the first data and the meta-data are integrated into a combined data in operation S 430 .
  • the meta-data may also be provided additionally as a standalone meta-data for later rendering.
  • the combined data is output in a predetermined format, and/or standalone meta-data is rendered in the same or different format as the predetermined format in operation S 440 .
  • the first and second data may be an image, video, audio, sound, text, music, or in other human user or device perceivable formats.
  • FIG. 5 illustrates a method of embedding meta-data in an image according to an aspect of the present invention.
  • the method of FIG. 5 may be practiced using the apparatus 1000 as shown in FIG. 3 .
  • an image is obtained, for example, by being input or captured in operation S 510 .
  • the obtained image may be a raw image in a predetermined data format, or may have been preprocessed for ease of later processing.
  • additional data that relates to the obtained image, referred to as associated data, is obtained in operation S 520 .
  • the associated data includes a user input data, environmental data, and/or collected data that is associated with the obtained image.
  • the user input data includes preferences and various data input by the user.
  • Examples of the user input may be an image, video, audio, sound, and/or text, and may be obtained from a key pad, a mouse, a touch screen, a touch pad, a scanner, a microphone, a digital camera, a video recorder, and/or a trackball, for example.
  • Other examples of the user input may include a 2D barcode representation or optical character recognition (OCR)-recognizable text.
  • the environmental data includes data from the environment that is associated with the image.
  • the environmental data may be temperature, humidity, pressure, GPS location, or time, and may be obtained from a thermometer, a pressure meter, a GPS locator, or a timepiece, for example.
  • the collected data includes any data for use with the image that is obtained from an outside source.
  • Examples of the collected data may be recall data, for example, and may be obtained from a database query, an internet search, and/or database mining for data relating to the image.
  • the image in the predetermined data format, or which has been preprocessed is analyzed, and meta-data is extracted (or generated) from the image in operation S 530 .
  • the extracted meta-data may include information as to face recognition, biometry data extraction, OCR recognition, object recognition, and position/distance measurement, for example, from the image.
  • the image may be enhanced to indicate that meta-data has been extracted from the image. For example, a face in the image may be highlighted.
  • the above-noted associated data, the extracted meta-data from the image, and/or the image itself are consolidated to generate a consolidated meta-data in operation S 540 .
  • supplementary meta-data may further be obtained in operation S 540 and may be included to generate the consolidated meta-data.
  • the supplementary meta-data is obtained from an outside source, and may be obtained based on the associated data, the extracted meta-data, and/or the image.
  • the consolidated meta-data need not be generated from the entire associated data, the extracted meta-data, the image, and/or supplementary meta-data. Rather, the consolidated meta-data may be generated from one or more portions of the associated data, the extracted meta-data, the image, and/or the supplementary meta-data.
  • a visual representation of the consolidated meta-data is generated by converting the consolidated meta-data into the same format as the image, for example, according to user preference in operation S 550 .
  • the visual representation of the consolidated meta-data is referred to as a meta-data sub-image, and may be a 2D barcode.
  • the consolidated meta-data may be converted into a standalone meta-data representation to be later output in a desired format.
  • the meta-data sub-image is embedded into the image to generate a combined image in the same format as the image in operation S 560 . The combined image may then be output, or optionally, the image and a corresponding standalone meta-data representation may be output in operation S 570 .
  • the output combined image and/or the image and a corresponding standalone meta-data representation may be published through use of any storage medium or distribution network to be searchable or searched by information systems or media broadcasting services. Additionally, the output combined image and/or the image and a corresponding standalone meta-data representation may be further output as a broadcast audio and/or video stream to be broadcast for viewing via a display or a television, or output as a web page RSS-feed to be distributed over a community of mobile devices.
  • FIG. 6A illustrates an image with an embedded meta-data, referred to as a combined image, and FIG. 6B illustrates a standalone meta-data representation, which may be affixed to a separately rendered image, according to aspects of the present invention.
  • the combined image 600 according to an aspect of the present invention is composed of a photographic image of a person 601 , and an embedded meta-data sub-image 602 that is positioned in the upper right corner of the combined image 600 .
  • the person's image 601 is based on the obtained image from a digital camera (used as the visual capture device 1130 as shown in FIG. 3 ).
  • the meta-data sub-image 602 is one that is generated from a consolidated meta-data using at least portions of the associated data, the extracted meta-data, the image, and/or supplementary meta-data, as discussed above with reference to FIGS. 3 and 5 .
  • the embedded meta-data sub-image 602 is shown as a 2D barcode, i.e., as an Aztec code.
  • the combined image is visually readily perceivable by a human user, programmatically perceivable by a device, or both.
  • when the combined image 600 is implemented as a passport photograph, the combined image 600 includes the passport holder's image 601 as the subject image, and the embedded meta-data sub-image 602 that is associated with the passport holder.
  • the entire combined image 600 is one that a human user can recognize immediately as a photograph having the Aztec code. Further, the human user can obtain information contained in the Aztec code by using an appropriate reader. Additionally, the appropriate reader can programmatically perceive both the photograph and the Aztec code of the entire combined image 600 .
  • the embedded meta-data sub-image 602 is an image representation of the consolidated meta-data, which in turn is generated based on the associated data (including a user input data, the environmental data, and/or the collected data that is associated with the image), the extracted meta-data (based on the image), the image, and/or supplementary meta-data. That is, in the case of FIG. 6A , the consolidated meta-data may include personal identifying information about the passport holder shown in the image 601 , such as, name and birthday, as user input data; GPS coordinates of where the image was taken as environmental data; and/or number of previous passports issued as collected data.
  • the consolidated meta-data may include visually identifying information, such as hair color or eye color, as extracted meta-data based on the image; and a confidential State Department code to ensure authenticity of the image as the supplementary meta-data. Accordingly, an image and a corresponding meta-data are rendered together, whereby the corresponding meta-data (the consolidated meta-data) is embedded in the image.
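  • For concreteness, the consolidated meta-data of this passport example might be serialized as follows before being encoded into the Aztec sub-image 602; every field name and value here is illustrative, not a real passport schema:

```python
import json

consolidated = {
    "holder": {"name": "J. Doe", "birthday": "1980-05-17"},  # user input data
    "capture_gps": [48.8566, 2.3522],                        # environmental data
    "prior_passports": 2,                                    # collected data
    "extracted": {"hair": "brown", "eyes": "green"},         # from the image
    "authenticity_code": "<issuing-authority code>",         # supplementary
}
payload = json.dumps(consolidated)  # what the 2D barcode sub-image would encode
```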
  • the standalone meta-data representation 650 which may be affixed to a separately rendered image is shown.
  • the standalone meta-data representation 650 when rendered, may be a 2D barcode, such as, an Aztec code, as shown.
  • the standalone meta-data representation 650 may be embodied as a self-adhering strip, such as a sticker, which may be affixed to a corresponding image, which may appear similar to FIG. 6A .
  • the standalone meta-data representation 650 as a sticker, may be affixed to an item, such as a package, in order to provide information about the package, such as to identify the content and the destination.
  • aspects of the present invention relate to a method of embedding a meta-data that corresponds to an image into the image, and an apparatus to embed the corresponding meta-data into the image. Accordingly, additional information for an image is effectively collected when the image is captured, transformed into a usable format as the corresponding meta-data, and embedded into a file of the captured image.
  • the embedded corresponding meta-data and the captured image may be rendered and perceived by a human user, a device, or both.
  • aspects of the present invention are not limited to an image, and are applicable to video, audio, sound, text, music, or other human user or device perceivable formats.
  • meta-data is information about the subject data, such as image, video, audio, sound, text, music, or data in other human user or device perceivable formats.
  • Meta-data may document data about elements or attributes (name, size, data type, etc.); about records or data structures (length, fields, columns, etc.); and about the data itself (where it is located, how it is associated, ownership, etc.) of a subject data.
  • meta-data may include descriptive information about the context, quality and condition, or characteristics of the subject data. Accordingly, meta-data is usable to facilitate the understanding, characterization, and management of the subject data.
  • the meta-data sub-image is a code, and is neither text nor script.

Abstract

Methods and apparatuses generate and render data embedded with associated meta-data in a human or machine recognizable format. The apparatus includes an obtaining device to acquire first data in a predetermined format and associated second data comprising information of the first data, and to output the first data and the associated second data; a processing device to receive the first data and the associated second data from the obtaining device, to process the first data and the associated second data to thereby generate meta-data based on the first data and/or the associated second data, to convert the meta-data into the predetermined format of the first data, and to embed the converted meta-data into the first data as a combined data in the predetermined format; and a rendering device to receive the combined data from the processing device, and to render the combined data in the human or machine recognizable format.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • Aspects of the present invention relate to a method of embedding meta-data to an image, and an apparatus to embed the meta-data to the image. More specifically, aspects of the present invention relate to a method of embedding an associated meta-data of an image into the image, and an apparatus to embed the associated meta-data into the image so that the associated meta-data and the image may be visually perceived by a human user or programmatically perceived by a device, or both.
  • 2. Description of the Related Art
  • Various devices and peripherals allow a user to capture an image of various locations or items in the form of image files (such as Joint Photographic Experts Group (JPEG)) or video (such as Moving Picture Experts Group (MPEG4)), or capture sounds in the form of audio files (such as MP3). The various devices and peripherals may further collect additional information for the captured image, video, or sounds contemporaneously. Examples of such additional information include time, geographical location, temperature, identification, and other properties of interest to the user.
  • SUMMARY OF THE INVENTION
  • To effectively use additional information that is collected with captured images, videos, or sounds, the additional information should be transformed into a usable format, and embedded into a file of the captured images, videos, or sounds in the usable format. Specifically, the additional information is converted into meta-data having the same format as the captured images, videos, or sounds, and then embedded into the captured images, videos, or sounds. Once the additional information is transformed and embedded, the embedded additional information and the file may be rendered and perceived by a human user, a device, or both.
  • According to an aspect of the present invention, an apparatus to generate and render data embedded with associated meta-data in a human or machine recognizable format, includes an obtaining device to acquire first data in a predetermined format and associated second data comprising information of the first data, and to output the first data and the associated second data; a processing device to receive the first data and the associated second data from the obtaining device, to process the first data and the associated second data to thereby generate meta-data based on the first data and/or the associated second data, to convert the meta-data into the predetermined format of the first data, and to embed the converted meta-data into the first data as a combined data in the predetermined format; and a rendering device to receive the combined data from the processing device, and to render the combined data in the human or machine recognizable format.
  • According to an aspect of the present invention, an apparatus to generate and render an image embedded with associated meta-data, includes a data acquisition driver to acquire an image, and associated data comprising information of the image; a data processor to process the acquired image and the associated data from the data acquisition driver in order to generate meta-data based on the image, the associated data, or both, to convert the meta-data into a same format as that of the image, and to embed the converted meta-data into the image; and a rendering driver to receive the image embedded with the converted meta-data from the data processor, and to render the image with the embedded converted meta-data as an output image.
  • According to an aspect of the present invention, a method of generating and rendering data embedded with associated meta-data in a human or machine recognizable format, includes obtaining first data in a predetermined format, and second data that is associated with the first data; processing the obtained first data and the associated second data to generate meta-data based on the first data and/or the associated second data; converting the meta-data into the predetermined format of the first data; embedding the converted meta-data into the first data to obtain a combined data; and rendering the combined data in the predetermined format of the first data, the predetermined format of the first data being the human or machine recognizable format.
  • According to an aspect of the present invention, a method of embedding meta-data in an image to be rendered together, includes obtaining the image; obtaining associated data contemporaneously with the image, the associated data being additional information of the image and including user input data, environmental data, and/or collected data that is associated with the image; extracting meta-data from the image that characterizes the image according to user selection as an extracted meta-data; generating a consolidated meta-data by consolidating the associated data and the extracted meta-data; generating a meta-data sub-image, which is a visual representation of the consolidated meta-data, by converting the consolidated meta-data into the same format as the image; embedding the meta-data sub-image into the image to generate a combined image in the same format as the image; and visually rendering the combined image on a medium or a display device.
  • According to an aspect of the present invention, a visually rendered combined image formed on a medium, the combined image comprising: a photographic image in a predetermined format; and an embedded meta-data sub-image that is positioned in a predetermined position of the photographic image, the embedded meta-data sub-image being an image representation of information of the photographic image, and being in the same predetermined format as the photographic image.
  • Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
  • FIG. 1 illustrates an input device to obtain an image and associated information according to an aspect of the present invention;
  • FIG. 2 illustrates a schematic of an apparatus 200 to embed meta-data in obtained data according to an aspect of the present invention;
  • FIG. 3 illustrates a schematic of an apparatus to embed meta-data in an image according to an aspect of the present invention;
  • FIG. 4 illustrates a method of embedding meta-data in obtained data according to an aspect of the present invention;
  • FIG. 5 illustrates a method of embedding meta-data in an image according to an aspect of the present invention; and
  • FIG. 6A illustrates an image with an embedded meta-data, referred to as a combined image, and FIG. 6B illustrates a standalone meta-data representation, which may be affixed to a separately rendered image, according to aspects of the present invention.
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • Reference will now be made in detail to aspects of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The aspects are described below in order to explain the present invention by referring to the figures.
  • A method is herein conceived to be a sequence of steps or actions leading to a desired result and may be implemented as software. While it may be convenient to discuss such software as if embodied by a single program, most implementations will distribute the described functions among discrete (and some not so discrete) pieces of software. These pieces are often described using such terms of art as “programs,” “objects,” “functions,” “subroutines,” “libraries,” “.dlls,” “APIs,” and “procedures.” While one or more of these terms may be described in aspects of the present invention, there is no intention to limit the scope of the claims.
  • With respect to the software described herein, those of ordinary skill in the art will recognize that there exist a variety of platforms and languages for creating software for performing the methods outlined herein. Aspects of the present invention can be implemented using MICROSOFT VISUAL STUDIO or any number of varieties of C. However, those of ordinary skill in the art also recognize that the choice of the exact platform and language is often dictated by the specifics of the actual system constructed, such that what may work for one type of system may not be efficient on another system. It should also be understood that the methods described herein are not limited to being executed as software on a microprocessor, but may be executed using other circuits. For example, the methods could be implemented on a digital signal processor, an FPGA, or with an HDL (Hardware Description Language) in an ASIC.
  • FIG. 1 illustrates an input device to obtain an image and associated information according to an aspect of the present invention. As shown in FIG. 1, illustrated is a type of data collection device referred to as a portable data terminal (PDT) as an example of the input device. A PDT generally integrates a mobile computer, one or more data transport paths, and one or more data collection subsystems. The mobile computer portion is generally similar to typical touch-screen consumer-oriented portable computing devices (e.g. “Pocket PCs” or “PDAs”), such as those available from PALM®, HEWLETT PACKARD®, and DELL®. The data transport paths include wired and wireless paths, such as 802.11, IrDA, BLUETOOTH, RS-232, USB, CDMA, GSM (incl. GPRS), and so forth. The data collection subsystem generally comprises a device that captures data from an external source, for example, touches, keystrokes, RFID signals, images, and bar codes. The PDT is distinguished from typical consumer-oriented portable computing devices through the use of “industrial” components integrated into a housing that provide increased durability, ergonomics, and environmental independence over the typical consumer-oriented devices. Additionally, the PDT tends to provide improved battery life by utilizing superior batteries and power management systems. Referring back to FIG. 1, the PDT 100 utilizes an elongated body 102 supporting a variety of components, including: a battery (not illustrated); a touch screen 106 (generally comprising an LCD screen under a touch sensitive panel); a keypad 108 (including a scan button 108 a); a scan engine (not illustrated); and a data/charging port (also not illustrated). The scan engine may comprise, for example, one or more of an image engine, a laser engine, or an RFID engine. The scan engine is generally located near a top end 110 of the PDT 100 and is used to scan markings such as product codes. The data/charging port typically comprises a proprietary mechanical interface with one set of pins or pads for transmitting and receiving data (typically via a serial interface standard such as USB or RS-232) and a second set of pins or pads for receiving power for operating the system and/or charging the battery. The data/charging port is generally located near a bottom end 111 of the PDT 100.
  • In use, the user presses the scan key 108 a to initiate data capture via the scan engine. The captured data is analyzed, e.g. decoded to identify the information represented, stored, and displayed on the touch screen 106. Additional processing of the data may take place on the PDT 100 and/or an external data processing resource to which the data is transmitted.
  • In other aspects of the present invention, the scan key 108 a or another key may be used to initiate image capture via the image engine for further processing. In such a case, the image engine may be used to obtain an image of a subject, such as merchandise. Additionally, the PDT 100 may have a microphone to capture sounds, sensors to measure temperature or other environmental information, a GPS system to obtain position information of a location, and a wireless connection to connect to a network or the internet, for example.
  • FIG. 2 illustrates a schematic of an apparatus to embed meta-data in obtained data according to an aspect of the present invention. As shown in FIG. 2, the apparatus 200 includes an obtaining device 210, a processing device 220, and a rendering device 230.
  • Specifically, the obtaining device 210 acquires, or receives input of, first data, and second data that is associated with or corresponds to the first data, and outputs the first data and the associated second data to the processing device 220. In turn, the processing device 220 processes the acquired or received first data and the associated second data, generates meta-data based on the first data, the associated second data, or both, converts the meta-data into the format of the first data (referred to as a sub meta-data) or another format (referred to as a block meta-data), and either embeds the sub meta-data into the first data as a combined data or provides the block meta-data for later rendering as a standalone meta-data.
  • The processing device 220 then provides the combined data and/or the block meta-data to the rendering device 230 for rendering. The rendering device 230 receives the combined data and/or the block meta-data from the processing device 220, outputs the combined data in a predetermined format, and/or renders the block meta-data as a standalone meta-data in the same format as, or a different format from, the predetermined format. In aspects of the present invention, the first and second data may be an image, video, audio, sound, text, music, or data in other human user or device perceivable formats. A minimal sketch of this obtain-process-render flow is given below.
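  • By way of illustration only, the following Python sketch traces the obtain, process, and render stages of the apparatus 200. All names and sample values are assumptions introduced for the example (a text string stands in for the first data, and a small dictionary stands in for the associated second data); the patent does not prescribe any particular API.

      import json

      def obtaining_device():
          # Acquire first data in a predetermined format (here, text standing
          # in for image bytes) and associated second data collected with it.
          first_data = "captured-image-bytes"
          second_data = {"time": "2009-02-01T12:00:00Z", "gps": [48.15, 17.11]}
          return first_data, second_data

      def processing_device(first_data, second_data):
          # Generate meta-data, convert it into the format of the first data,
          # and embed it into the first data to form the combined data.
          meta = json.dumps(second_data, sort_keys=True)
          sub_meta = "[META:" + meta + "]"
          return first_data + sub_meta

      def rendering_device(combined_data):
          # Render the combined data in a human or machine recognizable format.
          print(combined_data)

      first, second = obtaining_device()
      rendering_device(processing_device(first, second))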
  • FIG. 3 illustrates a schematic of an apparatus to embed meta-data in an image according to an aspect of the present invention. As shown in FIG. 3, the apparatus 1000 includes a data acquisition driver 1100, a data processor 1200, and a data rendering driver 1300.
  • In aspects of the present invention, the data acquisition driver 1100 acquires, or receives input of, image data, and other data corresponding to the image data (referred to as an associated data), and outputs the image data and the associated data to the data processor 1200. In turn, the data processor 1200 processes the acquired or received image data and the associated data, generates meta-data based on the image data, the associated data, or both, converts the meta-data into a visual format (referred to as a sub-visual meta-data) or a data block format (referred to as a block meta-data), and either embeds the sub-visual meta-data into the image data or provides the block meta-data for later rendering. Additionally, the rendering driver 1300 receives the image data that is embedded with the sub-visual meta-data and/or the block meta-data from the data processor 1200, and outputs the image data that is embedded with the sub-visual meta-data as an output image, or renders the block meta-data as a visual standalone meta-data, or both.
  • Referring to FIG. 3 in greater detail, the data acquisition driver 1100 includes a visual image capture driver 1110 and a meta-data capture driver 1120. In aspects of the present invention, the visual image capture driver 1110 may be a device, software (SW), or firmware (FW) of the apparatus 1000, and may simply obtain an image from a visual capture device 1130, or may trigger the visual capture device 1130 to capture the image, and output the obtained or captured image to the visual image capture driver 1110. The visual image capture driver 1110 may also process the obtained or captured image, which may be in a predetermined data format, and convert the image into another predetermined data format or may preprocess the image for later processing in the data processor 1200. The visual image capture driver 1110 then outputs the image in the predetermined data format, or which has been preprocessed, to the data processor 1200.
  • In aspects of the present invention, the visual capture device 1130 may be any device capable of capturing or obtaining an image. For example, the visual capture device 1130 may be a built-in and/or externally connected video capture device, which is used to take a picture, a video, or a sequence of pictures of an item or a location, such as inventory or a warehouse. Other examples of the visual capture device 1130 include a digital camera, a scanner, a camcorder, a cellphone having a camera function, a portable data terminal (PDT), and a webcam, for example, and may also encompass a built-in optical imager device and an external video camera connected over BLUETOOTH. The image obtained by the visual capture device 1130 may be in any image format, including JPEG (Joint Photographic Experts Group), PDF (Portable Document Format), TIFF (Tagged Image File Format), or MPEG (Moving Picture Experts Group), for example.
  • While the visual image capture driver 1110 obtains an image from the visual capture device 1130, the meta-data capture driver 1120 obtains a user input, environmental data, and/or collected data to associate with the obtained image, from a user input device 1140, one or more environment sensors 1150, and/or a smart meta-data collection agent 1160. Instead of simply obtaining the user input, the environmental data, and/or the collected data, the meta-data capture driver 1120 may trigger the user input device 1140 to obtain and output the user input, the one or more environment sensors 1150 to obtain and output the environmental data, and the smart meta-data collection agent 1160 to obtain and output the collected data. Each of the user input device 1140, the one or more environment sensors 1150, and the smart meta-data collection agent 1160 outputs the respective user input, environmental data, and/or collected data to the data acquisition driver 1100, and especially, to the meta-data capture driver 1120.
  • In aspects of the present invention, the user input device 1140 may be any device capable of obtaining an input from a user. For example, the user input device 1140 may be a built-in and/or externally connected user-system interaction device, which is used to obtain preferences and various data input from the user. Examples of the user input device 1140 include a keyboard, a key pad, a mouse, a touch screen, a touch pad, a scanner, and a trackball, for example. Additionally, the user input device 1140 may include an optical imager device to scan a 2D barcode representation or optical character recognition (OCR)-recognizable text presented by the user on plain paper. In aspects of the present invention, the user input device 1140 may be a wired or a wireless device. A wireless device may be a BLUETOOTH device, for example.
  • In aspects of the present invention, the one or more environment sensors 1150 may be any device capable of obtaining data from the environment that is associated with the image. For example, the one or more environment sensors 1150 may be built-in and/or externally connected sensors, which are used to gather complementary information from the environment in which the image was taken. Examples of the one or more environment sensors 1150 include thermometers or other meteorological sensors, a body-temperature and blood pressure meter, a GPS locator, an electronic compass, a movement sensor, a speech recognizer, a radio frequency identification (RFID) sensor, a facial biometry reader, a fingerprint reader, an electronic key reader, and a timepiece, for example.
  • In further illustrating the different types and use of the above environment sensors 1150, it should be understood that the following examples are non-limiting. For example, the environment sensors 1150 may be thermometers or other meteorological sensors if the associated image is of a cloud; a body-temperature and blood pressure meter if the associated image is of a patient in a hospital; or a GPS locator if the associated image is of a historic building; and so on. In aspects of the present invention, the one or more environment sensors 1150 may be wired or wireless devices. Wireless devices may be BLUETOOTH devices, for example.
  • In aspects of the present invention, the smart meta-data collection agent 1160 may be any device capable of obtaining data of, or for use with, the image. For example, the smart meta-data collection agent 1160 may be used to perform a database query, run internet searches, and/or mine databases for data relating to the image. Accordingly, the smart meta-data collection agent 1160 may be an on-device database query agent, a remotely running internet search agent connected over TCP/IP, or a remote database mining agent connected over a Global System for Mobile communications/General Packet Radio Service (GSM/GPRS) connection, in aspects of the present invention, though not limited thereto. A rough sketch of an on-device query agent follows.
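  • As a rough illustration of an on-device database query agent (one form of the smart meta-data collection agent 1160), the following sketch looks up recall data for an imaged item. The table, column names, sample row, and item identifier are all invented for the example.

      import sqlite3

      # In-memory database standing in for an on-device data store.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE recalls (item_id TEXT, notice TEXT)")
      conn.execute("INSERT INTO recalls VALUES ('SKU-1234', 'Recalled 2009-01-15')")

      def collect_meta(item_id):
          # Return collected data relating to the imaged item, if any.
          row = conn.execute(
              "SELECT notice FROM recalls WHERE item_id = ?", (item_id,)
          ).fetchone()
          return {"recall": row[0]} if row else {}

      print(collect_meta("SKU-1234"))  # {'recall': 'Recalled 2009-01-15'}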
  • In various aspects, the user input device 1140, the one or more environment sensors 1150, and the smart meta-data collection agent 1160 may be devices, such as hardware, or may be software (SW) or firmware (FW) that runs on a processor or a dedicated device. The user input, the environmental data, and/or the collected data respectively output from the user input device 1140, the one or more environment sensors 1150, and the smart meta-data collection agent 1160 are collected in the meta-data capture driver 1120. The meta-data capture driver 1120 processes the user input, the environmental data, and/or the collected data, and generates associated data from the user input, the environmental data, the collected data, and/or portions thereof. The associated data may already be meta-data in a predetermined format. The associated data is then output to the data processor 1200.
  • In aspects of the present invention, the associated data corresponds to the obtained or captured image from the visual capture device 1130, and provides additional information about the image. For example, the associated data may be a jargon term for an item that is input by the user if the image is of the item that is part of an inventory, or may be descriptive information of a location that is input by the user if the image is of that location, such as a warehouse. On the other hand, the associated data may be the environmental data, such as temperature and/or humidity, obtained by the one or more environment sensors 1150 if the image is of a warehouse containing certain inventory, such as ice cream. Additionally, the associated data may be the collected data, such as recall information, obtained by the smart meta-data collection agent 1160 if the image is of merchandise that has been found defective.
  • Referring back to FIG. 3, the data processor 1200 processes the various acquired or received data (including the image and the associated data) from the data acquisition driver 1100, namely, from the visual image capture driver 1110 and the meta-data capture driver 1120. The data processor 1200 uses the acquired or received data to generate meta-data that is associated with the image, and embeds the meta-data to the image or provides the meta-data for later rendering. The data processor 1200 includes a visual meta-data extractor 1210, a meta-data processor 1220, a smart meta-data miner 1230, a raw-image merger 1240, a meta-data to raw-image formatter 1250, and a meta-data to data-block formatter 1260, for example.
  • In aspects of the present invention, the visual meta-data extractor 1210 may be a device, software (SW), or firmware (FW), and receives input of the obtained or captured image (referred to hereinafter simply as an image) in a predetermined data format, or which has been preprocessed, analyzes the image, and extracts one or more meta-data that characterize the image. Further, the visual meta-data extractor 1210 obtains additional meta-data from, or provides the meta-data to, one or more other components of the data processor 1200, such as the meta-data processor 1220. Additionally, the visual meta-data extractor 1210 outputs, to the raw-image merger 1240, the image either as is (referred to as a raw image), or enhanced by highlighting one or more characteristics in the image that were extracted and turned into meta-data (referred to as an enhanced image), according to a user preference.
  • In aspects of the present invention, the visual meta-data extractor 1210 may be a logical component implemented in software (SW) and/or firmware (FW), such as a field programmable gate array (FPGA) or a complex programmable logic device (CPLD), and is used to analyze the image in the predetermined data format, or which has been preprocessed, and to extract (or generate) meta-data from the image. For example, the extracted meta-data may include information as to face recognition, biometry data extraction, OCR recognition, object recognition, and position/distance measurement, for example, from the image. Once the meta-data is extracted, the image may be output in an enhanced format to indicate that meta-data has been extracted from the image. For example, a face in the image may be highlighted. In aspects of the present invention, which characteristics of the image to extract as meta-data may be selected according to user preference. One possible sketch of such extraction is given below.
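  • One way such extraction could be realized is sketched below with the OpenCV library (an assumed choice; the patent names no library). The sketch detects faces, records their positions as extracted meta-data, and highlights them to produce the enhanced image; the file path is illustrative.

      import cv2  # assumes the opencv-python package is installed

      def extract_and_enhance(path):
          # Analyze the image and extract meta-data characterizing it (here,
          # face positions), highlighting each extracted characteristic.
          image = cv2.imread(path)
          gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
          cascade = cv2.CascadeClassifier(
              cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
          faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          extracted_meta = [{"face_box": [int(x), int(y), int(w), int(h)]}
                            for (x, y, w, h) in faces]
          for (x, y, w, h) in faces:
              cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
          return image, extracted_meta  # enhanced image and extracted meta-data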
  • In aspects of the present invention, the meta-data processor 1220 may be a device, software (SW), or firmware (FW), and receives the associated data (which may be meta-data in a predetermined format) from the meta-data capture driver 1120, the image from the visual meta-data extractor 1210, and/or the extracted meta-data from the visual meta-data extractor 1210. The meta-data processor 1220 analyzes the received associated data, the extracted meta-data, and/or the image; extracts additional meta-data from the associated data and/or the image; consolidates, or generates, a consolidated meta-data from the extracted meta-data and the additional meta-data, which characterize the image and information related to the image, respectively; and outputs the consolidated meta-data to the meta-data to raw-image formatter 1250 and/or the meta-data to data-block formatter 1260.
  • In addition to the meta-data processor 1220 obtaining the extracted meta-data, or the image as is or in an enhanced format, from the visual meta-data extractor 1210, the meta-data processor 1220 may provide one or more meta-data that have been consolidated or generated to the visual meta-data extractor 1210. Additionally, the meta-data processor 1220 may obtain supplementary meta-data from the smart meta-data miner 1230 so that the meta-data processor 1220 may generate or consolidate the consolidated meta-data by also using the supplementary meta-data or portions thereof. Further, the meta-data processor 1220 may provide the consolidated meta-data to the smart meta-data miner 1230. The supplementary meta-data is additional data that is obtained from an outside source, and may be obtained based on the associated data, the extracted meta-data, and/or the image.
  • In aspects of the present invention, the consolidated meta-data need not be generated or consolidated from the entire associated data, the extracted meta-data, the image, and/or supplementary meta-data by the meta-data processor 1220. Rather, the consolidated meta-data may be consolidated or generated from one or more portions of the associated data, the extracted meta-data, the image, and/or the supplementary meta-data, respectively. Examples of one or more portions of the associated data, the extracted meta-data, the image, and/or the supplementary meta-data may include information that relates to the image, such as the time the image was taken, the geographical position where the image was taken, the types of objects depicted in the image, properties of the subjects (or depicted items) of the image, temperatures of the subject or the environment when the image was taken, and/or skin tone, if the image is of a person.
  • The output of the meta-data processor 1220 is the consolidated meta-data that is generated or consolidated from all of the associated data, the extracted meta-data, the image, the supplementary meta-data, or portions thereof, that were obtained from the meta-data capture driver 1120, the visual meta-data extractor 1210, and the smart meta-data miner 1230. The consolidated meta-data may also be generated or consolidated by combining or synthesizing the associated data, the extracted meta-data, the image, and/or the supplementary meta-data. Then, the consolidated meta-data is output from the meta-data processor 1220 to the meta-data to raw-image formatter 1250 and/or the meta-data to data-block formatter 1260. A minimal consolidation sketch is given below.
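  • A minimal consolidation sketch, assuming dictionary-shaped inputs and invented field names (none of which are specified in the patent), might be:

      def consolidate(associated, extracted, supplementary=None, keep=None):
          # Merge the sources; optionally retain only selected portions.
          consolidated = {}
          for source in (associated, extracted, supplementary or {}):
              consolidated.update(source)
          if keep is not None:
              consolidated = {k: v for k, v in consolidated.items() if k in keep}
          return consolidated

      meta = consolidate(
          {"time": "2009-02-01T12:00:00Z", "temp_c": -18.0},  # associated data
          {"objects": ["ice cream pallet"]},                  # extracted meta-data
          {"recall": "none on file"},                         # supplementary
          keep={"time", "temp_c", "objects"},
      )
      print(meta)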
  • In aspects of the present invention, the smart meta-data miner 1230 may be a logical component implemented in software (SW) or firmware (FW), or a device, and connects to remote information sources, such as databases, search engines, and/or search agents that are accessible via a network and/or the internet. The smart meta-data miner 1230 searches for and retrieves the supplementary meta-data that is additional to a base meta-data provided or queried by the meta-data processor 1220. As discussed above, the smart meta-data miner 1230 provides the retrieved supplementary meta-data to the meta-data processor 1220.
  • In aspects of the present invention, the meta-data to raw-image formatter 1250 is an image processing component implemented in software (SW) and/or firmware (FW), and projects, or converts, the consolidated meta-data onto a visual representation according to user preference, such as a 2D barcode. Hereinafter, the visual representation of the consolidated meta-data is referred to as a meta-data sub-image. The meta-data to raw-image formatter 1250 outputs the meta-data sub-image to the raw-image merger 1240. A sketch of such a formatter is given below.
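  • The conversion of consolidated meta-data into a meta-data sub-image can be sketched with the third-party qrcode package; a QR code stands in here for the patent's 2D barcode example (FIG. 6A shows an Aztec code), since readily available libraries generate QR codes. The file name is illustrative.

      import json
      import qrcode  # assumes the "qrcode" package (which uses Pillow)

      def to_sub_image(consolidated_meta):
          # Project the consolidated meta-data onto a visual representation.
          payload = json.dumps(consolidated_meta, sort_keys=True)
          return qrcode.make(payload)  # an image object for the 2D barcode

      sub_image = to_sub_image({"time": "2009-02-01T12:00:00Z", "temp_c": -18.0})
      sub_image.save("meta_sub_image.png")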
  • In aspects of the present invention, the raw-image merger 1240 is an image processing component implemented in software (SW) and/or firmware (FW), and merges the raw image or the enhanced image from the visual meta-data extractor 1210 with the meta-data sub-image from the meta-data to raw-image formatter 1250 into one overall merged image, according to user preferences. In other aspects, the raw-image merger 1240 embeds the meta-data sub-image into the raw image or the enhanced image to generate an embedded image. In aspects of the present invention, the merged image and the embedded image may be referred to simply as a combined image. Accordingly, the combined image is output from the data processor 1200 to the data rendering driver 1300. Particularly, the combined image from the raw-image merger 1240 is output to the image renderer 1310 in the data rendering driver 1300. A sketch of the merging step is given below.
  • In aspects of the present invention, the meta-data to data-block formatter 1260 is a data processing component implemented in software (SW) and/or firmware (FW), and formats the consolidated meta-data into a standalone user or machine recognizable representation of the consolidated meta-data according to user preferences. Once formatted, the meta-data to data-block formatter 1260 outputs the formatted consolidated meta-data representation to the data rendering driver 1300. Particularly, the formatted consolidated meta-data representation is output from the meta-data to data-block formatter 1260 to the data renderer 1320 of the data rendering driver 1300. Hereinafter, the formatted consolidated meta-data representation will be referred to as a standalone meta-data representation.
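  • Returning to the raw-image merger, embedding the meta-data sub-image at a predetermined position (the upper-right corner, as in FIG. 6A) can be sketched with the Pillow imaging library. The library choice, file names, and margin are assumptions for the example.

      from PIL import Image  # assumes the Pillow package

      def merge(image_path, sub_image_path, margin=10):
          # Embed the meta-data sub-image into the raw or enhanced image.
          image = Image.open(image_path).convert("RGB")
          sub = Image.open(sub_image_path).convert("RGB")
          image.paste(sub, (image.width - sub.width - margin, margin))
          return image  # the combined image

      merge("capture.jpg", "meta_sub_image.png").save("combined.jpg")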
  • It should be understood that, in some aspects of the present invention, the raw-image merger 1240 receives the raw image or the enhanced image from the visual meta-data extractor 1210, and simply outputs the raw or the enhanced image to the image renderer 1310, without creating a combined image, according to user preference. If the raw or the enhanced image is simply output, the standalone meta-data representation from the data renderer 1320 may be output to be physically attached or adhered to the raw or the enhanced image that is rendered by the image renderer 1310.
  • Referring back to FIG. 3, the data rendering driver 1300 receives the combined image and/or the standalone meta-data representation from the data processor 1200, and outputs a primary output 1330 that is a visually rendered combined image and/or a secondary output 1340 that is a rendered standalone meta-data representation. The data rendering driver 1300 includes the image renderer 1310 and the data renderer 1320. The image renderer 1310 receives the combined image and outputs the rendered combined image, and the data renderer 1320 receives the standalone meta-data representation and outputs the rendered standalone meta-data representation. Once output, one or both of the primary output 1330 and the secondary output 1340 may be saved or stored in an electronic form or other format in the apparatus 1000, and/or transferred to an outside device for later processing, rendering, or publishing. The storing, processing, rendering, and publishing of the primary output 1330 and the secondary output 1340 may include use of any storage medium or distribution network such that information systems or media broadcasting services are able to search for, process, and render primary output 1330 in the form of the combined image and/or the secondary output 1340 in the form of the standalone meta-data representation.
  • In aspects of the present invention, the image renderer 1310 is a component implemented in software (SW) and/or firmware (FW), and outputs a primary output 1330 that results from merging or embedding the meta-data sub-image into the raw image or the enhanced image, and then rendering the combined image. In aspects of the present invention, the primary output 1330 is a visually readable image or a picture, with the meta-data sub-image overlying a selected portion of the image or the picture according to user preference. In aspects of the present invention, the location of the meta-data sub-image in the image, when rendered, may be in a predetermined portion of the image. Examples of such locations include a particular corner of a rectangular image, or a border of the rectangular image that may be added or augmented to an existing or standard border. Additionally, the meta-data sub-image, when rendered with the image, may be a watermark that is semi-visible, semi-transparent, or translucent.
  • Additionally, in an aspect of the present invention, the meta-data sub-image may be rendered as a symbol, text, script, or barcode, where examples of such barcodes include linear barcodes, such as universal product codes (UPC), or matrix or 2D barcodes, such as an Aztec code. In aspects of the present invention, the usable type of barcode or other symbols to render the meta-data sub-image is not limited. Additionally, in aspects of the present invention, the meta-data sub-image may be rendered to be located in the upper-right corner of the image.
  • In aspects of the present invention, the combined image may be rendered or output by being printed on a medium. In aspects of the present invention, the medium may be paper, a plastic card, or a label, for example. Accordingly, the combined image may be implemented as a photograph, a PDF file, an x-ray image, or an image that is displayed on a display screen, for example. If the output is printed on a medium, a printer may further be utilized to print the combined image on the medium.
  • In aspects of the present invention, the data renderer 1320 is a component implemented in software (SW) and/or firmware (FW), and outputs the secondary output 1340 that is the standalone meta-data representation. The standalone meta-data representation is rendered in a selected structured format, which can be used to physically attach the rendered standalone meta-data representation to a rendered raw image or enhanced image. Additionally, the standalone meta-data representation may be electronically attached to the raw image or the enhanced image in their electronic formats. Additionally, the standalone meta-data representation may simply be stored electronically.
  • To elaborate, the physical attachment of the rendered standalone meta-data representation to the rendered raw image or the enhanced image may be exemplified by physically attaching a printed barcode to a separately printed image. However, the separate printing of the image related to the standalone meta-data representation is not required. For example, in aspects of the present invention, the standalone meta-data representation may be in a format, namely, a visual picture file, such as bitmap, that contains a 2D barcode to be printed as a label. The label can then be physically attached onto an item that was the subject of the image used to generate the 2D barcode. The item that is the subject of the image may be a mail parcel, and the label may be an address label for the mail parcel. Other items that may be the subject of the image include a rental car, warehouse inventory items, or air passenger luggage.
  • On the other hand, the electronic attachment of the standalone meta-data representation to the raw image or the enhanced image may be exemplified by associating date/time information, shutter speed information, and/or light conditions information with a regular digital camera JPEG file. Finally, the electronic storage of the standalone meta-data representation may be exemplified by storage of the standalone meta-data representation in a computer database. If stored in such a manner, the standalone meta-data representation and information thereof can be published to be searchable by smart data search engines, information systems, and/or media broadcasting services.
  • As discussed above, the image renderer 1310 outputs the primary output 1330, which results from merging or embedding the meta-data sub-image into the raw image or the enhanced image and is the combined image, and the data renderer 1320 outputs the secondary output 1340, which is the standalone meta-data representation. In aspects of the present invention, one or both of the primary output 1330 in the form of the combined image and the secondary output 1340 in the form of the standalone meta-data representation may be saved or stored in an electronic form or other format in the apparatus 1000, and/or transferred to an outside device or renderer for later processing, rendering, or publishing. In aspects of the present invention, the transfer to the outside device or renderer to be processed, rendered, or published may be by way of a network or the internet, and the outside device or renderer may be a remote network server or a mobile device. Once stored or published through use of any storage medium or distribution network, the combined image and/or the standalone meta-data representation may be searchable or searched by information systems or media broadcasting services.
  • Additionally, one or both of the primary output 1330 in the form of the combined image and the secondary output 1340 in the form of the standalone meta-data representation may be output in the electronic form or the other format to a virtual medium, or may be broadcast. In aspects of the present invention, the virtual medium may be a networked server hard-drive, or a remote mobile device file-system. The primary and/or the secondary output to be broadcast may be forwarded to a public-internet or company-intranet web page, output as a broadcast video stream for viewing via a display or a television, output as a web page RSS-feed to be distributed over a community of mobile devices, or any combination thereof. Additional publishing or broadcasting formats may include JPEG and MPEG formats for displays, such as LCD-TV screens, or BITMAP format for mobile devices, such as personal digital assistants (PDAs) or cell phones.
  • The above aspects of FIG. 3 discuss the apparatus 1000 in terms of images, such that the apparatus 1000 acquires image data and other data corresponding to the image data; generates meta-data based on the image data and the corresponding other data; merges or embeds the meta-data into the image and/or provides a standalone meta-data; and renders a combined image and/or the standalone meta-data. However, it should be understood that the apparatus need not be limited to handling images, and the inputs for the apparatus 1000 can be non-visual inputs, such as sound or machine readable data. In general, the inputs for the apparatus 1000 may simply be any data that can be merged or embedded with a corresponding meta-data.
  • Accordingly, if the input for the apparatus 1000 is music, the data acquisition driver 1100 acquires, or receives input of, music data and other data corresponding to the music data (referred to as the associated data), and outputs the music data and the associated data to the data processor 1200. In turn, the data processor 1200 processes the acquired or received music data and the associated data, generates meta-data based on the music data, the associated data, or both, converts the meta-data into an audible format (referred to as a sub-aural meta-data) or a data block format (referred to as a block meta-data), and either embeds the sub-aural meta-data into the music data or provides the block meta-data for later standalone rendering. Additionally, the rendering driver 1300 receives the music data embedded with the sub-aural meta-data and/or the block meta-data from the data processor 1200, and outputs the music data embedded with the sub-aural meta-data as output sound, renders the block meta-data as an aural standalone meta-data, or both. The embedded sub-aural meta-data may be reproduced before or after the music data, in aspects of the present invention. A speculative sketch of one such aural encoding is given below.
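  • Purely as a speculative sketch (the patent does not specify how a sub-aural meta-data block would be encoded), serialized meta-data bytes could be rendered as short two-frequency tones, FSK-style, and written out for reproduction after the music data. The sample rate, frequencies, tone duration, and file name are invented for the example.

      import json
      import wave
      import numpy as np  # assumes the numpy package

      RATE = 8000

      def tone(freq, dur=0.01):
          # A short sine burst at the given frequency.
          t = np.arange(int(RATE * dur)) / RATE
          return 0.5 * np.sin(2 * np.pi * freq * t)

      def encode_sub_aural(meta):
          # One tone per bit of the serialized meta-data.
          bits = "".join(format(b, "08b") for b in json.dumps(meta).encode())
          return np.concatenate([tone(1200 if b == "1" else 2200) for b in bits])

      samples = encode_sub_aural({"title": "demo", "temp_c": 21.0})
      with wave.open("sub_aural_meta.wav", "wb") as w:
          w.setnchannels(1)
          w.setsampwidth(2)
          w.setframerate(RATE)
          w.writeframes((samples * 32767).astype(np.int16).tobytes())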
  • FIG. 4 illustrates a method of embedding meta-data in obtained data according to an aspect of the present invention. In aspects of the present invention, the method of FIG. 4 may be practiced using the apparatus 200 as shown in FIG. 2.
  • Referring to FIG. 4, first data, and second data that is associated with or corresponds to the first data, are acquired or received in operation S410. The acquired or received first data and the associated second data are processed to generate meta-data based on the first data, the associated second data, or both, in operation S420. Then, the meta-data is converted into the format of the first data (referred to as a sub meta-data) or another format (referred to as a block meta-data), and the sub meta-data is embedded into the first data as a combined data. Accordingly, the first data and the meta-data are integrated into a combined data in operation S430. In other aspects of the present invention, the meta-data may also be provided additionally as a standalone meta-data for later rendering.
  • Once the first data and the meta-data are integrated into the combined data, the combined data is output in a predetermined format, and/or the standalone meta-data is rendered in the same format as, or a different format from, the predetermined format, in operation S440. In aspects of the present invention, the first and second data may be an image, video, audio, sound, text, music, or data in other human user or device perceivable formats.
  • FIG. 5 illustrates a method of embedding meta-data in an image according to an aspect of the present invention. The method of FIG. 5 may be practiced using the apparatus 1000 as shown in FIG. 3. Referring to FIG. 5, an image is obtained, for example, by being input or captured, in operation S510. The obtained image may be a raw image in a predetermined data format, or may have been preprocessed for ease of later processing. Contemporaneously, or after the image is obtained, additional data of the obtained image, referred to as associated data, is obtained in operation S520. The associated data includes user input data, environmental data, and/or collected data that is associated with the obtained image.
  • In aspects of the present invention, the user input data includes preferences and various data input by the user. Examples of the user input may be an image, video, audio, sound, and/or text, and the user input may be obtained from a key pad, a mouse, a touch screen, a touch pad, a scanner, a microphone, a digital camera, a video recorder, and/or a trackball, for example. Other examples of the user input may include a 2D barcode representation or optical character recognition (OCR)-recognizable text.
  • In aspects of the present invention, the environmental data includes data from the environment that is associated with the image. Examples of the environmental data may be temperature, humidity, pressure, GPS location, or time, and the environmental data may be obtained from a thermometer, a pressure meter, a GPS locator, or a timepiece, for example.
  • In aspects of the present invention, the collected data includes any data for use with the image that is obtained from an outside source. Examples of the collected data may be recall data, for example, and the collected data may be obtained from a database query, internet searches, and/or mined databases for data relating to the image.
  • Referring back to FIG. 5, the image in the predetermined data format, or which has been preprocessed, is analyzed, and meta-data is extracted (or generated) from the image in operation S530. In aspects of the present invention, the extracted meta-data may include information as to face recognition, biometry data extraction, OCR recognition, object recognition, and position/distance measurement, for example, from the image. When the meta-data is extracted from the image, the image may be enhanced to indicate that meta-data has been extracted from the image. For example, a face in the image may be highlighted.
  • The above-noted associated data, the extracted meta-data from the image, and/or the image itself are consolidated to generate a consolidated meta-data in operation S540. Optionally, supplementary meta-data may further be obtained in operation S540 and may be included to generate the consolidated meta-data. The supplementary meta-data is obtained from an outside source, and may be obtained based on the associated data, the extracted meta-data, and/or the image. In aspects of the present invention, the consolidated meta-data need not be generated from the entire associated data, the extracted meta-data, the image, and/or supplementary meta-data. Rather, the consolidated meta-data may be generated from one or more portions of the associated data, the extracted meta-data, the image, and/or the supplementary meta-data.
  • Referring back to FIG. 5, a visual representation of the consolidated meta-data is generated by converting the consolidated meta-data into the same format as the image, for example, according to user preference in operation S550. The visual representation of the consolidated meta-data is referred to as a meta-data sub-image, and may be a 2D barcode. Optionally, the consolidated meta-data may be converted into a standalone meta-data representation to be later output in a desired format. Once the meta-data sub-image is generated, the raw image or the enhanced image is merged or embedded with the meta-data sub-image into a combined image according to user preferences in operation S560. Finally, the combined image may be output, or optionally, the image and a corresponding standalone meta-data representation may be output in operation S570. The output combined image and/or the image and a corresponding standalone meta-data representation may be published through use of any storage medium or distribution network to be searchable or searched by information systems or media broadcasting services. Additionally, the output combined image and/or the image and a corresponding standalone meta-data representation may be further output as a broadcast audio and/or video stream to be broadcast for viewing via a display or a television, or output as a web page RSS-feed to be distributed over a community of mobile devices. An end-to-end sketch of this method is given below.
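  • Under the same assumptions as the earlier sketches (the qrcode and Pillow packages, invented file names, and a stubbed extraction step), operations S510 through S570 can be composed end to end:

      import json
      import qrcode                  # assumed package
      from PIL import Image          # assumed package (Pillow)

      image = Image.open("capture.jpg").convert("RGB")         # S510: obtain image
      associated = {"gps": [48.15, 17.11], "temp_c": -18.0}    # S520: associated data
      extracted = {"objects": ["pallet"]}                      # S530: stubbed extraction
      consolidated = {**associated, **extracted}               # S540: consolidate
      qrcode.make(json.dumps(consolidated)).save("sub.png")    # S550: meta-data sub-image
      sub = Image.open("sub.png").convert("RGB")
      image.paste(sub, (image.width - sub.width - 10, 10))     # S560: embed into image
      image.save("combined.jpg")                               # S570: output combined image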
  • FIG. 6A illustrates an image with an embedded meta-data, referred to as a combined image, and FIG. 6B illustrates a standalone meta-data representation, which may be affixed to a separately rendered image, according to aspects of the present invention. As shown in FIG. 6A, the combined image 600 according to an aspect of the present invention is composed of a photographic image of a person 601, and an embedded meta-data sub-image 602 that is positioned in the upper right corner of the combined image 600. The person's image 601 is based on the image obtained from a digital camera (used as the visual capture device 1130 as shown in FIG. 3). The meta-data sub-image 602 is generated from a consolidated meta-data using at least portions of the associated data, the extracted meta-data, the image, and/or supplementary meta-data, as discussed above with reference to FIGS. 3 and 5. The embedded meta-data sub-image 602 is shown as a 2D barcode, i.e., as an Aztec code.
  • As shown in FIG. 6A, the combined image is readily visually perceivable by a human user, programmatically perceivable by a device, or both. For example, when implemented as a passport photograph, the combined image 600 includes the passport holder's image 601 as the subject image, and the embedded meta-data sub-image 602 that is associated with the passport holder. The entire combined image 600 is one that a human user can recognize immediately as a photograph having the Aztec code. Further, the human user can obtain information contained in the Aztec code by using an appropriate reader. Additionally, the entire combined image 600 is one in which an appropriate reader can programmatically perceive both the photograph and the Aztec code.
  • In the aspect shown in FIG. 6A, the embedded meta-data sub-image 602 is an image representation of the consolidated meta-data, which in turn is generated based on the associated data (including user input data, environmental data, and/or collected data that is associated with the image), the extracted meta-data (based on the image), the image, and/or supplementary meta-data. That is, in the case of FIG. 6A, the consolidated meta-data may include personal identifying information about the passport holder shown in the image 601, such as name and birthday, as user input data; GPS coordinates of where the image was taken as environmental data; and/or the number of previous passports issued as collected data. Further, the consolidated meta-data may include visually identifying information, such as hair color or eye color, as extracted meta-data based on the image; and a confidential State Department code to ensure authenticity of the image as the supplemental meta-data. Accordingly, an image and a corresponding meta-data are rendered together, whereby the corresponding meta-data (the consolidated meta-data) is embedded in the image. A hypothetical example of such a consolidated record is sketched below.
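  • For concreteness, a hypothetical consolidated meta-data record for this passport example, with every field name and value invented for illustration, could be serialized before conversion into the sub-image as follows:

      import json

      consolidated = {
          "name": "J. Doe",             # user input data
          "birthday": "1970-01-01",     # user input data
          "gps": [38.889, -77.049],     # environmental data
          "prior_passports": 2,         # collected data
          "hair_color": "brown",        # extracted meta-data from the image
          "authenticity_code": "XXXX",  # supplemental meta-data (placeholder)
      }
      payload = json.dumps(consolidated, sort_keys=True)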
  • As shown in FIG. 6B, a standalone meta-data representation 650 may be affixed to a separately rendered image. The standalone meta-data representation 650, when rendered, may be a 2D barcode, such as an Aztec code, as shown. In aspects of the present invention, the standalone meta-data representation 650 may be embodied as a self-adhering strip, such as a sticker, which may be affixed to a corresponding image, so that the result may appear similar to FIG. 6A. In other aspects, the standalone meta-data representation 650, as a sticker, may be affixed to an item, such as a package, in order to provide information about the package, such as to identify the content and the destination.
  • As discussed above, aspects of the present invention relate to a method of embedding meta-data that corresponds to an image into the image, and an apparatus to embed the corresponding meta-data into the image. Accordingly, additional information for an image is effectively collected when the image is captured, transformed into a usable format as the corresponding meta-data, and embedded into a file of the captured image. The embedded corresponding meta-data and the captured image may be rendered and perceived by a human user, a device, or both. Further, aspects of the present invention are not limited to an image, and are applicable to video, audio, sound, text, music, or other human user or device perceivable formats.
  • In various aspects of the present invention, meta-data is information about the subject data, such as image, video, audio, sound, text, music, or data in other human user or device perceivable formats. Meta-data may document data about elements or attributes (name, size, data type, etc.); about records or data structures (length, fields, columns, etc.); and about the data itself (where it is located, how it is associated, ownership, etc.) of the subject data. Also, meta-data may include descriptive information about the context, quality and condition, or characteristics of the subject data. Accordingly, meta-data is usable to facilitate the understanding, characterization, use, and management of the subject data.
  • In aspects of the present invention, the meta-data sub-image is a code, and is neither text nor script.
  • In various aspects, “and/or” refers to alternatives chosen from among available elements so as to include one or more of the elements. For example, if the available elements include elements X, Y, and/or Z, then “and/or” refers to X, Y, Z, or any combination thereof.
  • Although a few aspects of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in the aspects without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims (20)

1. An apparatus to generate and render data embedded with associated meta-data in a human or machine recognizable format, comprising:
an obtaining device to acquire first data in a predetermined format and associated second data comprising information of the first data, and to output the first data and the associated second data;
a processing device to receive the first data and the associated second data from the obtaining device, to process the first data and the associated second data to thereby generate meta-data based on the first data and/or the associated second data, to convert the meta-data into the predetermined format of the first data, and to embed the converted meta-data into the first data as a combined data in the predetermined format; and
a rendering device to receive the combined data from the processing device, and to render the combined data in the human or machine recognizable format.
2. The apparatus of claim 1, wherein the processing device further converts the meta-data into a block meta-data of another format.
3. The apparatus of claim 2, wherein the rendering device further renders the first data in the human or machine recognizable format, and renders the block meta-data as a standalone meta-data in the human or machine recognizable format.
4. The apparatus of claim 1, wherein the human or machine recognizable format includes a format that can be rendered, published, searched, or broadcast as an image, video, audio, sound, text, music, broadcast video stream, a web page RSS-feed, or a combination thereof.
5. The apparatus of claim 1, wherein the human or machine recognizable format includes JPEG (Joint Photographic Experts Group), MPEG (Moving Picture Experts Group), PDF (Portable Document Format), and TIFF (Tagged Image File Format).
6. The apparatus of claim 1, wherein the converted meta-data is a 2D barcode, and the first data is a photographic image.
7. An apparatus to generate and render an image embedded with associated meta-data, comprising:
a data acquisition driver to acquire an image, and associated data comprising information of the image;
a data processor to process the acquired image and the associated data from the data acquisition driver in order to generate meta-data based on the image, the associated data, or both, to convert the meta-data into a same format as that of the image, and to embed the converted meta-data into the image; and
a rendering driver to receive the image embedded with the converted meta-data from the data processor, and to render the image with the embedded converted meta-data as an output image.
8. The apparatus of claim 7, wherein the data processor further converts the meta-data into a data block format, and the rendering driver renders the meta-data in the data block format as a standalone meta-data in the same format as that of the image.
9. The apparatus of claim 7, wherein the converted meta-data is a 2D barcode, and the image embedded with the converted meta-data is a photographic image containing the 2D barcode.
10. The apparatus of claim 7, wherein the data acquisition driver further comprises:
a visual image capture driver to capture the image in a predetermined format, and to output the captured image as the acquired image to the data processor; and
a meta-data capture driver to obtain at least one of user input data, environmental data, and collected data for the acquired image, and to output to the data processor the obtained at least one of user input data, environmental data, and collected data as associated data comprising information of the acquired image.
11. The apparatus of claim 7, wherein the data processor further comprises:
a visual meta-data extractor to analyze the image from the data acquisition driver and extract one or more extracted meta-data that characterizes the image, and to output the image or an enhanced image according to a user preference, wherein the enhanced image includes one or more highlighted characteristics of the enhanced image that have been extracted into meta-data;
a smart meta-data miner to connect to remote information sources and to obtain supplementary meta-data that are additional information of the image from the remote information sources;
a meta-data processor to extract additional meta-data from the associated data, the image, and/or the enhanced image, and to generate consolidated meta-data from the extracted meta-data, the additional meta-data, and/or the supplementary meta-data;
a meta-data to raw-image formatter to convert the consolidated meta-data from the meta-data processor into a meta-data sub-image, which is a visual representation of the consolidated meta-data;
a raw-image merger to combine the image or the enhanced image from the visual meta-data extractor with the meta-data sub-image from the meta-data to raw-image formatter into a combined image; and
a meta-data to data-block formatter to selectively format the consolidated meta-data from the meta-data processor into a standalone meta-data representation that is user or machine recognizable, according to user preference.
12. The apparatus of claim 11, wherein the meta-data processor further provides the consolidated meta-data to the visual meta-data extractor.
13. The apparatus of claim 11, wherein the meta-data processor provides the extracted meta-data and/or the additional meta-data to the smart meta-data miner so that the smart meta-data miner obtains the supplementary meta-data of the image from the remote information sources based on information provided by the image, the extracted meta-data, and/or the additional meta-data.
14. The apparatus of claim 11, wherein the rendering driver further comprises:
an image renderer to receive the combined image from the raw-image merger, and to output the rendered combined image as a photographic image having the meta-data sub-image visually rendered and overlying a predetermined portion of the photographic image; and
a data renderer to receive the standalone meta-data representation, and to selectively output a visually rendered standalone meta-data representation.
15. The apparatus of claim 14, wherein the image renderer further receives the image and/or the enhanced image, and selectively renders the image and/or the enhanced image as a photographic image, and the data renderer renders the standalone meta-data representation visually to be physically attached to the rendered image or enhanced image.
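The formatter/merger/renderer chain of claims 11 through 15 can be illustrated as follows, with a plain text rendering standing in for any visual encoding of the consolidated meta-data. This is a minimal sketch assuming Pillow; the file names and meta-data values are hypothetical.

from PIL import Image, ImageDraw

def metadata_to_raw_image(meta: dict, width: int) -> Image.Image:
    """Meta-data to raw-image formatter: a visual representation of the meta-data."""
    lines = [f"{k}: {v}" for k, v in meta.items()]
    sub = Image.new("RGB", (width, 14 * len(lines) + 8), "white")
    draw = ImageDraw.Draw(sub)
    for i, line in enumerate(lines):
        draw.text((4, 4 + 14 * i), line, fill="black")
    return sub

def raw_image_merger(image: Image.Image, sub: Image.Image) -> Image.Image:
    """Raw-image merger: overlay the sub-image on a predetermined portion of the image."""
    combined = image.copy()
    combined.paste(sub, (0, combined.height - sub.height))  # bottom-left corner
    return combined

combined = raw_image_merger(Image.open("capture.jpg"),
                            metadata_to_raw_image({"site": "plant-3"}, 160))
combined.save("combined.jpg")  # what the image renderer would output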
16. The apparatus of claim 7, wherein the image is in a Joint Photographic Experts Group (JPEG) format, and the converted meta-data is a 2D barcode.
17. A method of generating and rendering data embedded with associated meta-data in a human or machine recognizable format, comprising:
obtaining first data in a predetermined format, and second data that is associated with the first data;
processing the obtained first data and the associated second data to generate meta-data based on the first data and/or the associated second data;
converting the meta-data into the predetermined format of the first data;
embedding the converted meta-data into the first data to obtain a combined data; and
rendering the combined data in the predetermined format of the first data, the predetermined format of the first data being the human or machine recognizable format.
18. The method of claim 17, wherein the human or machine recognizable format includes a format that can be rendered, published, searched, or broadcast as an image, video, audio, sound, text, music, broadcast video stream, a web page RSS-feed, or a combination thereof.
19. The method of claim 17, wherein the first data is a photographic image and the converted meta-data is a 2D barcode positioned in a predetermined position of the photographic image.
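Claims 17 and 18 leave the format of the first data open, so the conversion step is format-generic: the meta-data is converted into whatever human or machine recognizable format the first data already uses. A hypothetical Python dispatch for two of the formats named in claim 18 (text and image), assuming Pillow; the example data are illustrative only.

from PIL import Image, ImageDraw

def embed_metadata(first_data, meta: dict):
    """Convert meta-data into the first data's own format, then embed it."""
    rendered = "; ".join(f"{k}={v}" for k, v in meta.items())
    if isinstance(first_data, str):  # text: append the meta-data as a footer line
        return first_data + "\n-- " + rendered
    if isinstance(first_data, Image.Image):  # image: overlay the meta-data as a caption
        combined = first_data.copy()
        ImageDraw.Draw(combined).text((4, combined.height - 16), rendered, fill="white")
        return combined
    raise TypeError("unsupported format")

print(embed_metadata("Inspection report for unit 7.", {"ts": "2009-02-02"}))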
20. A method of embedding meta-data in an image to be rendered together, comprising:
obtaining the image;
obtaining associated data contemporaneously with the image, the associated data being additional information of the image and including user input data, environmental data, and/or collected data that is associated with the image;
extracting meta-data from the image that characterizes the image according to user selection as an extracted meta-data;
generating a consolidated meta-data by consolidating the associated data and the extracted meta-data;
generating a meta-data sub-image, which is a visual representation of the consolidated meta-data, by converting the consolidated meta-data into the same format as the image;
embedding the meta-data sub-image into the image to generate a combined image in the same format as the image; and
visually rendering the combined image on a medium or a display device.
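A sketch of the extraction and consolidation steps of claim 20, using simple measurable properties of the image to stand in for the extracted meta-data; the associated data values and file name are hypothetical. The consolidated record would then be formatted into a meta-data sub-image and embedded into the image as sketched above.

from PIL import Image

def extract_metadata(image: Image.Image) -> dict:
    """Extracted meta-data that characterizes the image itself."""
    return {"width": image.width, "height": image.height, "mode": image.mode}

image = Image.open("capture.jpg")  # hypothetical captured image
associated = {"note": "unit 7 inlet", "lat": 48.15}  # user input / environmental data
consolidated = {**associated, **extract_metadata(image)}  # consolidated meta-data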
US12/363,966 2009-02-02 2009-02-02 Apparatus and method of embedding meta-data in a captured image Abandoned US20100198876A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/363,966 US20100198876A1 (en) 2009-02-02 2009-02-02 Apparatus and method of embedding meta-data in a captured image
US15/816,541 US10942964B2 (en) 2009-02-02 2017-11-17 Apparatus and method of embedding meta-data in a captured image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/363,966 US20100198876A1 (en) 2009-02-02 2009-02-02 Apparatus and method of embedding meta-data in a captured image

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/816,541 Continuation US10942964B2 (en) 2009-02-02 2017-11-17 Apparatus and method of embedding meta-data in a captured image

Publications (1)

Publication Number Publication Date
US20100198876A1 (en) 2010-08-05

Family

ID=42398569

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/363,966 Abandoned US20100198876A1 (en) 2009-02-02 2009-02-02 Apparatus and method of embedding meta-data in a captured image
US15/816,541 Active US10942964B2 (en) 2009-02-02 2017-11-17 Apparatus and method of embedding meta-data in a captured image

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/816,541 Active US10942964B2 (en) 2009-02-02 2017-11-17 Apparatus and method of embedding meta-data in a captured image

Country Status (1)

Country Link
US (2) US20100198876A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10991064B1 (en) 2018-03-07 2021-04-27 Adventure Soup Inc. System and method of applying watermark in a digital image
US20240005117A1 (en) * 2022-06-29 2024-01-04 Zebra Technologies Corporation Systems and Methods for Encoding Hardware-Calculated Metadata into Raw Images for Transfer and Storage and Imaging Devices

Family Cites Families (190)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2524651A (en) 1947-01-09 1950-10-03 Times Facsimile Corp Electrooptical scanning method and apparatus
US2949071A (en) 1956-03-19 1960-08-16 Foures Andre Endoscopic camera system
US3849632A (en) 1972-06-19 1974-11-19 Pitney Bowes Inc Reading apparatus for optical bar codes
US3969612A (en) 1974-06-11 1976-07-13 Recognition Equipment Incorporated Bar code reader enhancement
US4044227A (en) 1975-08-07 1977-08-23 The Upjohn Company Bar code reader
US4042823A (en) 1976-03-17 1977-08-16 The United States Of America As Represented By The Secretary Of The Navy Optical scanner
USRE31289E (en) 1978-10-16 1983-06-28 Welch Allyn, Inc. Color endoscope with charge coupled device and television viewing
US4298312A (en) 1979-07-24 1981-11-03 Purex Corporation Damaged vane locating method and apparatus
JPS58105517U (en) 1982-01-07 1983-07-18 住友電気工業株式会社 Pipe monitoring device
JPS60179713A (en) 1984-02-28 1985-09-13 Olympus Optical Co Ltd Endoscope device
US4621286A (en) 1984-05-29 1986-11-04 Rca Corporation Spatial-temporal frequency interleaved processing of a television signal with reduced amplitude interleaved sections
US4680457A (en) 1985-03-07 1987-07-14 Telesis Controls Corporation Code reader
US5053956A (en) 1985-06-17 1991-10-01 Coats Viyella Interactive system for retail transactions
US4700693A (en) 1985-12-09 1987-10-20 Welch Allyn, Inc. Endoscope steering section
EP0277778B1 (en) 1987-01-29 1993-12-29 Sony Magnescale, Inc. Improved production of pre-recorded tape cassettes
JPS63294509A (en) 1987-05-27 1988-12-01 Olympus Optical Co Ltd Stereoscopic endoscope device
US4790294A (en) 1987-07-28 1988-12-13 Welch Allyn, Inc. Ball-and-socket bead endoscope steering section
US4787369A (en) 1987-08-14 1988-11-29 Welch Allyn, Inc. Force relieving, force limiting self-adjusting steering for borescope or endoscope
US4887154A (en) 1988-06-01 1989-12-12 Welch Allyn, Inc. Lamp assembly and receptacle
US4862253A (en) 1988-07-20 1989-08-29 Welch Allyn, Inc. Apparatus for converting a video processor
JPH0681614B2 (en) 1989-04-12 1994-10-19 株式会社東芝 Electronic endoscopic device
US4962751A (en) 1989-05-30 1990-10-16 Welch Allyn, Inc. Hydraulic muscle pump
US4980763A (en) 1989-06-12 1990-12-25 Welch Allyn, Inc. System for measuring objects viewed through a borescope
FR2648202B1 (en) 1989-06-12 1994-05-20 Valeo TORSION DAMPING DEVICE WITH PERIPHERAL ELASTIC MEANS PROVIDED IN A SEALED HOUSING, PARTICULARLY FOR A MOTOR VEHICLE
US4941454A (en) 1989-10-05 1990-07-17 Welch Allyn, Inc. Servo actuated steering mechanism for borescope or endoscope
US4941456A (en) 1989-10-05 1990-07-17 Welch Allyn, Inc. Portable color imager borescope
US4979498A (en) 1989-10-30 1990-12-25 Machida Incorporated Video cervicoscope system
US5052803A (en) 1989-12-15 1991-10-01 Welch Allyn, Inc. Mushroom hook cap for borescope
US5070401A (en) 1990-04-09 1991-12-03 Welch Allyn, Inc. Video measurement system with automatic calibration and distortion correction
US5140319A (en) 1990-06-15 1992-08-18 Westech Geophysical, Inc. Video logging system having remote power source
US5047848A (en) 1990-07-16 1991-09-10 Welch Allyn, Inc. Elastomeric gage for borescope
US5061995A (en) 1990-08-27 1991-10-29 Welch Allyn, Inc. Apparatus and method for selecting fiber optic bundles in a borescope
US5066122A (en) 1990-11-05 1991-11-19 Welch Allyn, Inc. Hooking cap for borescope
US5140975A (en) 1991-02-15 1992-08-25 Welch Allyn, Inc. Insertion tube assembly for probe with biased bending neck
US5222477A (en) 1991-09-30 1993-06-29 Welch Allyn, Inc. Endoscope or borescope stereo viewing system
JP3631257B2 (en) 1992-08-28 2005-03-23 オリンパス株式会社 Electronic endoscope device
EP0587514A1 (en) 1992-09-11 1994-03-16 Welch Allyn, Inc. Processor module for video inspection probe
US5347989A (en) 1992-09-11 1994-09-20 Welch Allyn, Inc. Control mechanism for steerable elongated probe having a sealed joystick
WO1994009694A1 (en) 1992-10-28 1994-05-11 Arsenault, Dennis, J. Electronic endoscope
US5365331A (en) 1993-01-27 1994-11-15 Welch Allyn, Inc. Self centering device for borescopes
EP0682795A1 (en) 1993-02-02 1995-11-22 Label Vision Systems, Inc. Method and apparatus for decoding bar code data from a video signal and applications thereof
US5373317B1 (en) 1993-05-28 2000-11-21 Welch Allyn Inc Control and display section for borescope or endoscope
US5323899A (en) 1993-06-01 1994-06-28 Welch Allyn, Inc. Case for video probe
US20010016825A1 (en) 1993-06-08 2001-08-23 Pugliese, Anthony V. Electronic ticketing and reservation system and method
US5435296A (en) 1993-06-11 1995-07-25 Welch Allyn, Inc. Endoscope having crimped and soldered cable terminator
US5802274A (en) 1994-05-04 1998-09-01 International Business Machines Corporation Cartridge manufacturing system for game programs
US6164534A (en) 1996-04-04 2000-12-26 Rathus; Spencer A. Method and apparatus for accessing electronic data via a familiar printed medium
ATE169417T1 (en) 1994-10-14 1998-08-15 United Parcel Service Inc MULTI-LEVEL PACKAGE TRACKING SYSTEM
US6184923B1 (en) 1994-11-25 2001-02-06 Olympus Optical Co., Ltd. Endoscope with an interchangeable distal end optical adapter
JP3153720B2 (en) 1994-12-20 2001-04-09 富士通株式会社 Video presentation system
US5699262A (en) 1995-07-18 1997-12-16 Dralco, Inc. Video rental processing system
US5825982A (en) 1995-09-15 1998-10-20 Wright; James Head cursor control interface for an automated endoscope system for optimal positioning
US5770841A (en) 1995-09-29 1998-06-23 United Parcel Service Of America, Inc. System and method for reading package information
US20020014533A1 (en) 1995-12-18 2002-02-07 Xiaxun Zhu Automated object dimensioning system employing contour tracing, vertice detection, and forner point detection and reduction methods on 2-d range data maps
US6139490A (en) 1996-02-22 2000-10-31 Precision Optics Corporation Stereoscopic endoscope with virtual reality viewing
ES2140289B1 (en) 1996-04-02 2000-08-16 Windmoeller & Hoelscher BUSHING FOR PRINTER CYLINDERS.
US5918211A (en) 1996-05-30 1999-06-29 Retail Multimedia Corporation Method and apparatus for promoting products and influencing consumer purchasing decisions at the point-of-purchase
US6432046B1 (en) 1996-07-15 2002-08-13 Universal Technologies International, Inc. Hand-held, portable camera for producing video images of an object
US6133908A (en) 1996-12-04 2000-10-17 Advanced Communication Design, Inc. Multi-station video/audio distribution apparatus
US6106457A (en) 1997-04-04 2000-08-22 Welch Allyn, Inc. Compact imaging instrument system
EP0888019A1 (en) 1997-06-23 1998-12-30 Hewlett-Packard Company Method and apparatus for measuring the quality of a video transmission
US6097848A (en) 1997-11-03 2000-08-01 Welch Allyn, Inc. Noise reduction apparatus for electronic edge enhancement
US6220513B1 (en) 1997-12-31 2001-04-24 Ncr Corporation Methods and apparatus for determining bar code label location information
US6394351B1 (en) 1997-12-31 2002-05-28 Ncr Corporation Methods and apparatus for enhanced scanner operation employing bar code and bar code fragment time and position of data collection
US6512919B2 (en) 1998-12-14 2003-01-28 Fujitsu Limited Electronic shopping system utilizing a program downloadable wireless videophone
US6083152A (en) 1999-01-11 2000-07-04 Welch Allyn, Inc. Endoscopic insertion tube
US20010011233A1 (en) 1999-01-11 2001-08-02 Chandrasekhar Narayanaswami Coding system and method for linking physical items and corresponding electronic online information to the physical items
US6518881B2 (en) 1999-02-25 2003-02-11 David A. Monroe Digital communication system for law enforcement use
US6411963B1 (en) 1999-07-09 2002-06-25 Junot Systems, Inc. External system interface method and system
JP2001108916A (en) 1999-10-08 2001-04-20 Olympus Optical Co Ltd Solid mirror optical system
US6959235B1 (en) 1999-10-28 2005-10-25 General Electric Company Diagnosis and repair system and method
US6668272B1 (en) 1999-11-05 2003-12-23 General Electric Company Internet-based process optimization system and method
US6556273B1 (en) 1999-11-12 2003-04-29 Eastman Kodak Company System for providing pre-processing machine readable encoded information markings in a motion picture film
US6483535B1 (en) 1999-12-23 2002-11-19 Welch Allyn, Inc. Wide angle lens system for electronic imagers having long exit pupil distances
US6764009B2 (en) 2001-05-30 2004-07-20 Lightwaves Systems, Inc. Method for tagged bar code data interchange
US6487479B1 (en) 2000-01-07 2002-11-26 General Electric Co. Methods and systems for aviation component repair services
US7236596B2 (en) 2000-02-07 2007-06-26 Mikos, Ltd. Digital imaging system for evidentiary use
WO2001093473A2 (en) 2000-05-31 2001-12-06 Optinetix (Israel) Ltd. Systems and methods for distributing information through broadcast media
US6590470B1 (en) 2000-06-13 2003-07-08 Welch Allyn, Inc. Cable compensator circuit for CCD video probe
US6810406B2 (en) 2000-08-23 2004-10-26 General Electric Company Method and system for servicing a selected piece of equipment having unique system configurations and servicing requirements
US6763175B1 (en) 2000-09-01 2004-07-13 Matrox Electronic Systems, Ltd. Flexible video editing architecture with software video effect filter components
US6568596B1 (en) 2000-10-02 2003-05-27 Symbol Technologies, Inc. XML-based barcode scanner
US6746164B1 (en) 2000-10-27 2004-06-08 International Business Machines Corporation Method and system using carrier identification information for tracking printed articles
US7540424B2 (en) 2000-11-24 2009-06-02 Metrologic Instruments, Inc. Compact bar code symbol reading system employing a complex of coplanar illumination and imaging stations for omni-directional imaging of objects within a 3D imaging volume
US6429924B1 (en) 2000-11-30 2002-08-06 Eastman Kodak Company Photofinishing method
US6614872B2 (en) 2001-01-26 2003-09-02 General Electric Company Method and apparatus for localized digital radiographic inspection
US6494739B1 (en) 2001-02-07 2002-12-17 Welch Allyn, Inc. Miniature connector with improved strain relief for an imager assembly
US7126630B1 (en) 2001-02-09 2006-10-24 Kujin Lee Method and apparatus for omni-directional image and 3-dimensional data acquisition with data annotation and dynamic range extension method
US20020128790A1 (en) 2001-03-09 2002-09-12 Donald Woodmansee System and method of automated part evaluation including inspection, disposition recommendation and refurbishment process determination
US6468201B1 (en) 2001-04-27 2002-10-22 Welch Allyn, Inc. Apparatus using PNP bipolar transistor as buffer to drive video signal
US7111787B2 (en) 2001-05-15 2006-09-26 Hand Held Products, Inc. Multimode image capturing and decoding optical reader
US6942151B2 (en) 2001-05-15 2005-09-13 Welch Allyn Data Collection, Inc. Optical reader having decoding and image capturing functionality
US7231135B2 (en) 2001-05-18 2007-06-12 Pentax Of American, Inc. Computer-based video recording and management system for medical diagnostic equipment
US6775602B2 (en) 2001-07-09 2004-08-10 Gordon-Darby Systems, Inc. Method and system for vehicle emissions testing through on-board diagnostics unit inspection
US6772098B1 (en) 2001-07-11 2004-08-03 General Electric Company Systems and methods for managing inspections
US6834807B2 (en) 2001-07-13 2004-12-28 Hand Held Products, Inc. Optical reader having a color imager
DE10133975C1 (en) 2001-07-17 2002-10-17 Fachhochschule Dortmund Discount provision method for products and/or services allows customer to be provided with free telecommunications services corresponding to value of obtained discount
US6937154B2 (en) 2001-08-21 2005-08-30 Tabula Rasa, Inc. Method and apparatus for facilitating personal attention via wireless links
US20030043042A1 (en) 2001-08-21 2003-03-06 Tabula Rasa, Inc. Method and apparatus for facilitating personal attention via wireless networks
US20030043041A1 (en) 2001-08-21 2003-03-06 Rob Zeps Method and apparatus for facilitating personal attention via wireless networks
SE523426C2 (en) 2001-08-23 2004-04-20 Akzo Nobel Nv A nitrogen-containing orthoester-based surfactant, its manufacture and use
US6758403B1 (en) 2001-09-11 2004-07-06 Psc Scanning, Inc. System for editing data collection device message data
US20030105565A1 (en) 2001-12-03 2003-06-05 Loda David C. Integrated internet portal and deployed product microserver management system
US6908034B2 (en) 2001-12-17 2005-06-21 Zih Corp. XML system
KR100405828B1 (en) * 2002-02-01 2003-11-14 주식회사 마크애니 Apparatus and method for producing a document which is capable of preventing a forgery or an alteration of itself, and apparatus and method for authenticating the document
FR2835943B1 (en) 2002-02-14 2005-01-28 Eastman Kodak Co METHOD AND SYSTEM FOR CONTROLLING PHOTOGRAPHIC WORK FROM A PORTABLE TERMINAL
US6786405B2 (en) 2002-02-28 2004-09-07 Curt Wiedenhoefer Tissue and implant product supply system and method
US6830545B2 (en) 2002-05-13 2004-12-14 Everest Vit Tube gripper integral with controller for endoscope of borescope
US20040193016A1 (en) 2002-06-17 2004-09-30 Thomas Root Endoscopic delivery system for the non-destructive testing and evaluation of remote flaws
US7234106B2 (en) 2002-09-10 2007-06-19 Simske Steven J System for and method of generating image annotation information
US7172113B2 (en) * 2002-09-16 2007-02-06 Avery Dennison Corporation System and method for creating a display card
US20040125077A1 (en) 2002-10-03 2004-07-01 Ashton Jason A. Remote control for secure transactions
US7252633B2 (en) 2002-10-18 2007-08-07 Olympus Corporation Remote controllable endoscope system
US7121469B2 (en) 2002-11-26 2006-10-17 International Business Machines Corporation System and method for selective processing of digital images
US20040155783A1 (en) 2003-01-03 2004-08-12 Zaher Al-Sheikh Automatic confined space monitoring and alert system
WO2004068840A2 (en) 2003-01-29 2004-08-12 Everest Vit, Inc. Remote video inspection system
DE10305384A1 (en) 2003-02-11 2004-08-26 Kuka Roboter Gmbh Method and device for visualizing computer-aided information
US20040155109A1 (en) 2003-02-12 2004-08-12 Sears Brands, Llc Digital assistant for use in a commercial environment
JP4117550B2 (en) * 2003-03-19 2008-07-16 ソニー株式会社 Communication system, payment management apparatus and method, portable information terminal, information processing method, and program
US20040183900A1 (en) 2003-03-20 2004-09-23 Everest Vit Method and system for automatically detecting defects in remote video inspection applications
US20050119786A1 (en) * 2003-04-22 2005-06-02 United Parcel Service Of America, Inc. System, method and computer program product for containerized shipping of mail pieces
US20040223649A1 (en) 2003-05-07 2004-11-11 Eastman Kodak Company Composite imaging method and system
US6953432B2 (en) 2003-05-20 2005-10-11 Everest Vit, Inc. Imager cover-glass mounting
JP2005020654A (en) 2003-06-30 2005-01-20 Minolta Co Ltd Imaging device and method for imparting comment information to image
US6892947B1 (en) * 2003-07-30 2005-05-17 Hewlett-Packard Development Company, L.P. Barcode embedding methods, barcode communication methods, and barcode systems
US7328847B1 (en) * 2003-07-30 2008-02-12 Hewlett-Packard Development Company, L.P. Barcode data communication methods, barcode embedding methods, and barcode systems
US7523315B2 (en) * 2003-12-22 2009-04-21 Ingeo Systems, Llc Method and process for creating an electronically signed document
US20050162643A1 (en) 2004-01-22 2005-07-28 Thomas Karpen Automotive fuel tank inspection device
AR043357A1 (en) 2004-01-23 2005-07-27 Salva Calcagno Eduardo Luis Method of identifying persons by converting fingerprints and genetic codes into barcodes, and device used in this method
US7134993B2 (en) 2004-01-29 2006-11-14 Ge Inspection Technologies, Lp Method and apparatus for improving the operation of a remote viewing device by changing the calibration settings of its articulation servos
US20050187739A1 (en) 2004-02-24 2005-08-25 Christian Baust Method and apparatus for creating and updating maintenance plans of an aircraft
US7779355B1 (en) 2004-03-30 2010-08-17 Ricoh Company, Ltd. Techniques for using paper documents as media templates
US20050219263A1 (en) 2004-04-01 2005-10-06 Thompson Robert L System and method for associating documents with multi-media data
US7463380B2 (en) 2004-04-23 2008-12-09 Sharp Laboratories Of America, Inc. Spooling/despooling subsystem job fingerprinting
US20050259289A1 (en) 2004-05-10 2005-11-24 Sharp Laboratories Of America, Inc. Print driver job fingerprinting
US7734093B2 (en) 2004-05-20 2010-06-08 Ricoh Co., Ltd. Paper-based upload and tracking system
US7150399B2 (en) 2004-06-09 2006-12-19 Ricoh Co., Ltd. Embedding barcode data in an auxiliary field of an image file
US7422559B2 (en) 2004-06-16 2008-09-09 Ge Inspection Technologies, Lp Borescope comprising fluid supply system
WO2005124594A1 (en) 2004-06-16 2005-12-29 Koninklijke Philips Electronics, N.V. Automatic, real-time, superimposed labeling of points and objects of interest within a view
US7502344B2 (en) 2004-06-25 2009-03-10 Fujifilm Corporation Communications terminal, server, playback control method and program
US20060026217A1 (en) 2004-06-25 2006-02-02 Lindner James A Method and system for automated migration of media archives
US7315521B2 (en) 2004-06-29 2008-01-01 Intel Corporation Mobile computing device to provide virtual office usage model
US7209035B2 (en) 2004-07-06 2007-04-24 Catcher, Inc. Portable handheld security device
WO2006008501A1 (en) 2004-07-20 2006-01-26 Prevx Ltd. Host intrusion prevention system and method
US7293711B2 (en) 2004-08-30 2007-11-13 Symbol Technologies, Inc. Combination barcode imaging/decoding and real-time video capture system
WO2006029681A2 (en) 2004-09-17 2006-03-23 Accenture Global Services Gmbh Personalized marketing architecture
US20060095950A1 (en) 2004-10-29 2006-05-04 Coonce Charles K Methods and multi-screen systems for real time response to medical emergencies
US20060094949A1 (en) 2004-10-29 2006-05-04 Coonce Charles K Methods and systems for real time response to medical emergencies
US8977385B2 (en) 2004-11-22 2015-03-10 Bell And Howell, Llc System and method for tracking a mail item through a document processing system
US7263205B2 (en) 2004-12-06 2007-08-28 Dspv, Ltd. System and method of generic symbol recognition and user authentication using a communication device with imaging capabilities
US7506817B2 (en) 2004-12-14 2009-03-24 Ricoh Co., Ltd. Location of machine readable codes in compressed representations
US7434226B2 (en) 2004-12-14 2008-10-07 Scenera Technologies, Llc Method and system for monitoring a workflow for an object
WO2006069347A2 (en) 2004-12-21 2006-06-29 Oceaneering International, Inc. Robotic animal handling system for biosafety laboratories
US7788575B2 (en) 2005-01-31 2010-08-31 Hewlett-Packard Development Company, L.P. Automated image annotation
JP2006217545A (en) * 2005-02-07 2006-08-17 Ricoh Co Ltd Image processing system and image processor
WO2006089247A2 (en) 2005-02-16 2006-08-24 Pisafe, Inc. Method and system for creating and using redundant and high capacity barcodes
US20060206245A1 (en) 2005-03-08 2006-09-14 Camper Mark H Creation of use of flight release information
US20060212794A1 (en) 2005-03-21 2006-09-21 Microsoft Corporation Method and system for creating a computer-readable image file having an annotation embedded therein
US20060215023A1 (en) 2005-03-23 2006-09-28 Coonce Charles K Method and system of displaying user interest data at a surveillance station
US20060265590A1 (en) 2005-05-18 2006-11-23 Deyoung Dennis C Digital signature/certificate for hard-copy documents
US20060263789A1 (en) 2005-05-19 2006-11-23 Robert Kincaid Unique identifiers for indicating properties associated with entities to which they are attached, and methods for using
US7450740B2 (en) * 2005-09-28 2008-11-11 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
US7970738B2 (en) 2005-12-29 2011-06-28 Ricoh Co., Ltd. Always on and updated operation for document logs
US7865042B2 (en) 2006-01-31 2011-01-04 Konica Minolta Systems Laboratory, Inc. Document management method using barcode to store access history information
US20070176000A1 (en) 2006-01-31 2007-08-02 Konica Minolta Systems Laboratory, Inc. Selective image encoding and replacement
US20070226321A1 (en) 2006-03-23 2007-09-27 R R Donnelley & Sons Company Image based document access and related systems, methods, and devices
US8310533B2 (en) 2006-03-27 2012-11-13 GE Sensing & Inspection Technologies, LP Inspection apparatus for inspecting articles
US7577516B2 (en) 2006-05-09 2009-08-18 Hand Held Products, Inc. Power management apparatus and methods for portable data terminals
US7714908B2 (en) 2006-05-26 2010-05-11 Lifetouch Inc. Identifying and tracking digital images with customized metadata
US20080247629A1 (en) 2006-10-10 2008-10-09 Gilder Clark S Systems and methods for check 21 image replacement document enhancements
US8577773B2 (en) 2006-12-01 2013-11-05 Acupay System Llc Document processing systems and methods for regulatory certifications
US7839538B2 (en) 2006-12-18 2010-11-23 Pitney Bowes Inc. Method and system for applying an image-dependent dynamic watermark to postal indicia
US20080163364A1 (en) 2006-12-27 2008-07-03 Andrew Rodney Ferlitsch Security method for controlled documents
US8155427B2 (en) 2007-01-12 2012-04-10 Nanoark Corporation Wafer-scale image archiving and receiving system
US7571857B2 (en) 2007-01-12 2009-08-11 Hand Held Products, Inc. Apparatus and methods for acquiring GPS data for use with portable data terminals
US20080183852A1 (en) 2007-01-26 2008-07-31 Pramer David M Virtual information technology assistant
EP1986132A3 (en) 2007-04-26 2009-02-18 Bowe Bell + Howell Company Apparatus, method and programmable product for identification of a document with feature analysis
US8073795B2 (en) 2008-01-07 2011-12-06 Symbol Technologies, Inc. Location based services platform using multiple sources including a radio frequency identification data source
US20090238626A1 (en) 2008-03-18 2009-09-24 Konica Minolta Systems Laboratory, Inc. Creation and placement of two-dimensional barcode stamps on printed documents for storing authentication information
US20090292930A1 (en) 2008-04-24 2009-11-26 Marano Robert F System, method and apparatus for assuring authenticity and permissible use of electronic documents
US8379261B2 (en) 2008-12-18 2013-02-19 Konica Minolta Laboratory U.S.A., Inc. Creation and placement of two-dimensional barcode stamps on printed documents for storing authentication information
US8544748B2 (en) 2008-12-22 2013-10-01 Konica Minolta Laboratory U.S.A., Inc. Creation and placement of two-dimensional barcode stamps on printed documents for storing authentication information
US8260666B2 (en) 2009-01-14 2012-09-04 Yahoo! Inc. Dynamic demand calculation using captured data of real life objects
US20100198876A1 (en) 2009-02-02 2010-08-05 Honeywell International, Inc. Apparatus and method of embedding meta-data in a captured image
US8301297B2 (en) 2009-03-04 2012-10-30 Bell And Howell, Llc System and method for continuous sorting operation in a multiple sorter environment
US9519814B2 (en) 2009-06-12 2016-12-13 Hand Held Products, Inc. Portable data terminal
US20110282942A1 (en) 2010-05-13 2011-11-17 Tiny Prints, Inc. Social networking system and method for an online stationery or greeting card service
US9020834B2 (en) 2010-05-14 2015-04-28 Xerox Corporation System and method to control on-demand marketing campaigns and personalized trajectories in hyper-local domains
US20120076297A1 (en) 2010-09-24 2012-03-29 Hand Held Products, Inc. Terminal for use in associating an annotation with an image
US8791795B2 (en) 2010-09-28 2014-07-29 Hand Held Products, Inc. Terminal for line-of-sight RFID tag reading

Patent Citations (104)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3561432A (en) * 1967-07-29 1971-02-09 Olympus Optical Co Endoscope
US4271344A (en) * 1976-06-16 1981-06-02 Matsushita Electric Industrial Co., Ltd. High frequency heating oven with cooking vessel
US4078864A (en) * 1976-07-08 1978-03-14 United Technologies Corporation Method and apparatus for viewing and measuring damage in an inaccessible area
US4139822A (en) * 1977-06-14 1979-02-13 General Electric Company Eddy current probe for inspecting interiors of gas turbines, said probe having pivotal adjustments and a borescope
US4253447A (en) * 1978-10-16 1981-03-03 Welch Allyn, Inc. Color endoscope with charge coupled device and television viewing
US4576147A (en) * 1981-07-16 1986-03-18 Olympus Optical Co., Ltd. Hard endoscope with improved light dispersion
US4573450A (en) * 1983-11-11 1986-03-04 Fuji Photo Optical Co., Ltd. Endoscope
US4651201A (en) * 1984-06-01 1987-03-17 Arnold Schoolman Stereoscopic endoscope arrangement
US4656508A (en) * 1984-06-08 1987-04-07 Olympus Optical Co., Ltd. Measuring endoscope
US4588294A (en) * 1984-06-27 1986-05-13 Warner-Lambert Technologies, Inc. Searching and measuring endoscope
US4667656A (en) * 1984-10-26 1987-05-26 Olympus Optical Co., Ltd. Endoscope apparatus having nozzle angularly positioned image sensor
US4659195A (en) * 1986-01-31 1987-04-21 American Hospital Supply Corporation Engine inspection system
US4735501A (en) * 1986-04-21 1988-04-05 Identechs Corporation Method and apparatus for fluid propelled borescopes
US4735501B1 (en) * 1986-04-21 1990-11-06 Identechs Inc
US5010876A (en) * 1986-06-02 1991-04-30 Smith & Nephew Dyonics, Inc. Arthroscopic surgical practice
US4733937A (en) * 1986-10-17 1988-03-29 Welch Allyn, Inc. Illuminating system for endoscope or borescope
US4926257A (en) * 1986-12-19 1990-05-15 Olympus Optical Co., Ltd. Stereoscopic electronic endoscope device
US4727859A (en) * 1986-12-29 1988-03-01 Welch Allyn, Inc. Right angle detachable prism assembly for borescope
US4827909A (en) * 1987-03-31 1989-05-09 Kabushiki Kaisha Toshiba Endoscopic apparatus
US4796607A (en) * 1987-07-28 1989-01-10 Welch Allyn, Inc. Endoscope steering section
US4794912A (en) * 1987-08-17 1989-01-03 Welch Allyn, Inc. Borescope or endoscope with fluid dynamic muscle
US4909600A (en) * 1988-10-28 1990-03-20 Welch Allyn, Inc. Light chopper assembly
US5014515A (en) * 1989-05-30 1991-05-14 Welch Allyn, Inc. Hydraulic muscle pump
US4913369A (en) * 1989-06-02 1990-04-03 Welch Allyn, Inc. Reel for borescope insertion tube
US5014600A (en) * 1990-02-06 1991-05-14 Welch Allyn, Inc. Bistep terminator for hydraulic or pneumatic muscle
US4998182A (en) * 1990-02-08 1991-03-05 Welch Allyn, Inc. Connector for optical sensor
US5019121A (en) * 1990-05-25 1991-05-28 Welch Allyn, Inc. Helical fluid-actuated torsional motor
US4989581A (en) * 1990-06-01 1991-02-05 Welch Allyn, Inc. Torsional strain relief for borescope
US5018506A (en) * 1990-06-18 1991-05-28 Welch Allyn, Inc. Fluid controlled biased bending neck
US5203319A (en) * 1990-06-18 1993-04-20 Welch Allyn, Inc. Fluid controlled biased bending neck
US5018436A (en) * 1990-07-31 1991-05-28 Welch Allyn, Inc. Folded bladder for fluid dynamic muscle
US5114636A (en) * 1990-07-31 1992-05-19 Welch Allyn, Inc. Process for reducing the internal cross section of elastomeric tubing
US5191879A (en) * 1991-07-24 1993-03-09 Welch Allyn, Inc. Variable focus camera for borescope or endoscope
US5202758A (en) * 1991-09-16 1993-04-13 Welch Allyn, Inc. Fluorescent penetrant measurement borescope
US5278642A (en) * 1992-02-26 1994-01-11 Welch Allyn, Inc. Color imaging system
US5275152A (en) * 1992-07-27 1994-01-04 Welch Allyn, Inc. Insertion tube terminator
US5314070A (en) * 1992-12-16 1994-05-24 Welch Allyn, Inc. Case for flexible borescope and endoscope insertion tubes
US5751341A (en) * 1993-01-05 1998-05-12 Vista Medical Technologies, Inc. Stereoscopic endoscope system
US5633675A (en) * 1993-02-16 1997-05-27 Welch Allyn, Inc, Shadow probe
USD358417S (en) * 1993-04-30 1995-05-16 Hewlett-Packard Company Printer platen
US5663552A (en) * 1993-10-19 1997-09-02 Matsushita Electric Industrial Co., Ltd. Portable information terminal apparatus having image processing function
US6221007B1 (en) * 1996-05-03 2001-04-24 Philip S. Green System and method for endoscopic imaging and endosurgery
US5754313A (en) * 1996-07-17 1998-05-19 Welch Allyn, Inc. Imager assembly
US5857963A (en) * 1996-07-17 1999-01-12 Welch Allyn, Inc. Tab imager assembly for use in an endoscope
US5734418A (en) * 1996-07-17 1998-03-31 Welch Allyn, Inc. Endoscope with tab imager package
US6015088A (en) * 1996-11-05 2000-01-18 Welch Allyn, Inc. Decoding of real time video imaging
US6066090A (en) * 1997-06-19 2000-05-23 Yoon; Inbae Branched endoscope system
US6538732B1 (en) * 1999-05-04 2003-03-25 Everest Vit, Inc. Inspection system and method
US6851610B2 (en) * 1999-06-07 2005-02-08 Metrologic Instruments, Inc. Tunnel-type package identification system having a remote image keying station with an ethernet-over-fiber-optic data communication link
US20060031486A1 (en) * 2000-02-29 2006-02-09 International Business Machines Corporation Method for automatically associating contextual input data with available multimedia resources
US20040096123A1 (en) * 2000-03-28 2004-05-20 Shih Willy C. Method and system for locating and accessing digitally stored images
US6697805B1 (en) * 2000-04-14 2004-02-24 Microsoft Corporation XML methods and systems for synchronizing multiple computing devices
US20020039099A1 (en) * 2000-09-30 2002-04-04 Hand Held Products, Inc. Method and apparatus for simultaneous image capture and image display in an imaging device
US20060072903A1 (en) * 2001-02-22 2006-04-06 Everest Vit, Inc. Method and system for storing calibration data within image files
US6697794B1 (en) * 2001-02-28 2004-02-24 Ncr Corporation Providing database system native operations for user defined data types
US20040064323A1 (en) * 2001-02-28 2004-04-01 Voice-Insight, Belgian Corporation Natural language query system for accessing an information system
USD473306S1 (en) * 2001-05-07 2003-04-15 Olympus Optical Co., Ltd. Remote control apparatus for industrial endoscope
US20030004397A1 (en) * 2001-06-28 2003-01-02 Takayuki Kameya Endoscope system
US7346221B2 (en) * 2001-07-12 2008-03-18 Do Labs Method and system for producing formatted data related to defects of at least an appliance of a set, in particular, related to blurring
US20030046192A1 (en) * 2001-08-29 2003-03-06 Mitsubishi Denki Kabushiki Kaisha Distribution management system, distribution management method, and program
US6982765B2 (en) * 2001-09-14 2006-01-03 Thomson Licensing Minimizing video disturbance during switching transients and signal absence
US7508419B2 (en) * 2001-10-09 2009-03-24 Microsoft, Corp Image exchange with image annotation
US20030097042A1 (en) * 2001-10-31 2003-05-22 Teruo Eino Endoscopic system
US7321673B2 (en) * 2001-12-03 2008-01-22 Olympus Corporation Endoscope image filing system and endoscope image filing method
US20040126038A1 (en) * 2002-12-31 2004-07-01 France Telecom Research And Development Llc Method and system for automated annotation and retrieval of remote digital content
US20050027750A1 (en) * 2003-04-11 2005-02-03 Cricket Technologies, Llc Electronic discovery apparatus, system, method, and electronically stored computer program product
US20050015480A1 (en) * 2003-05-05 2005-01-20 Foran James L. Devices for monitoring digital video signals and associated methods and systems
US20050001909A1 (en) * 2003-07-02 2005-01-06 Konica Minolta Photo Imaging, Inc. Image taking apparatus and method of adding an annotation to an image
US20070106536A1 (en) * 2003-08-01 2007-05-10 Moore James F Opml-based patient records
US7685428B2 (en) * 2003-08-14 2010-03-23 Ricoh Company, Ltd. Transmission of event markers to data stream recorder
US20050041097A1 (en) * 2003-08-19 2005-02-24 Bernstein Robert M. Non-medical videoscope
US20050050707A1 (en) * 2003-09-05 2005-03-10 Scott Joshua Lynn Tip tool
US20060015919A1 (en) * 2004-07-13 2006-01-19 Nokia Corporation System and method for transferring video information
US20060050983A1 (en) * 2004-09-08 2006-03-09 Everest Vit, Inc. Method and apparatus for enhancing the contrast and clarity of an image captured by a remote viewing device
US20060053088A1 (en) * 2004-09-09 2006-03-09 Microsoft Corporation Method and system for improving management of media used in archive applications
US7526812B2 (en) * 2005-03-24 2009-04-28 Xerox Corporation Systems and methods for manipulating rights management data
US20070018229A1 (en) * 2005-07-25 2007-01-25 Freescale Semiconductor, Inc. Electronic device including discontinuous storage elements and a process for forming the same
US20070033109A1 (en) * 2005-08-05 2007-02-08 Microsoft Corporation Informal trust relationship to facilitate data sharing
US20070047816A1 (en) * 2005-08-23 2007-03-01 Jamey Graham User Interface for Mixed Media Reality
US20070106754A1 (en) * 2005-09-10 2007-05-10 Moore James F Security facility for maintaining health care data pools
US7712670B2 (en) * 2005-09-28 2010-05-11 Sauerwein Jr James T Data collection device and network having radio signal responsive mode switching
US20070124278A1 (en) * 2005-10-31 2007-05-31 Biogen Idec Ma Inc. System and method for electronic record keeping
US20090177495A1 (en) * 2006-04-14 2009-07-09 Fuzzmed Inc. System, method, and device for personal medical care, intelligent analysis, and diagnosis
US20080033983A1 (en) * 2006-07-06 2008-02-07 Samsung Electronics Co., Ltd. Data recording and reproducing apparatus and method of generating metadata
US20080027983A1 (en) * 2006-07-31 2008-01-31 Berna Erol Searching media content for objects specified using identifiers
US20080039206A1 (en) * 2006-08-11 2008-02-14 Jonathan Ackley Interactive installation for interactive gaming
US20080052205A1 (en) * 2006-08-24 2008-02-28 Vision Chain System and method for identifying implicit events in a supply chain
US20080071143A1 (en) * 2006-09-18 2008-03-20 Abhishek Gattani Multi-dimensional navigation of endoscopic video
US7865957B1 (en) * 2007-02-26 2011-01-04 Trend Micro Inc. Apparatus and methods for updating mobile device virus pattern data
US8014665B2 (en) * 2007-05-22 2011-09-06 International Business Machines Corporation Method, apparatus and software for processing photographic image data using a photographic recording medium
US8090462B2 (en) * 2007-12-19 2012-01-03 Mobideo Technologies Ltd Maintenance assistance and control system method and apparatus
US20130002890A1 (en) * 2007-12-21 2013-01-03 Hand Held Products, Inc. Using metadata tags in video recordings produced by portable terminals
US20100065636A1 (en) * 2008-04-29 2010-03-18 Java Information Technology Ltd. Ontology-Based EPC Automatic Conversion Method and System
US20100046842A1 (en) * 2008-08-19 2010-02-25 Conwell William Y Methods and Systems for Content Processing
US20100076976A1 (en) * 2008-09-06 2010-03-25 Zlatko Manolov Sotirov Method of Automatically Tagging Image Data
US20100071003A1 (en) * 2008-09-14 2010-03-18 Modu Ltd. Content personalization
US20100075292A1 (en) * 2008-09-25 2010-03-25 Deyoung Dennis C Automatic education assessment service
US20100086192A1 (en) * 2008-10-02 2010-04-08 International Business Machines Corporation Product identification using image analysis and user interaction
US20100088123A1 (en) * 2008-10-07 2010-04-08 Mccall Thomas A Method for using electronic metadata to verify insurance claims
US20110058187A1 (en) * 2009-09-10 2011-03-10 Bentley Systems, Incorporated Augmented reality dynamic plots
US20110066281A1 (en) * 2009-09-15 2011-03-17 Bowe Bell + Howell Company Method and system for referencing a specific mail target for enhanced mail owner customer intelligence
US20110079639A1 (en) * 2009-10-06 2011-04-07 Samsung Electronics Co. Ltd. Geotagging using barcodes
US20110107370A1 (en) * 2009-11-03 2011-05-05 At&T Intellectual Property I, L.P. System for media program management
US20110121066A1 (en) * 2009-11-23 2011-05-26 Konica Minolta Systems Laboratory, Inc. Document authentication using hierarchical barcode stamps to detect alterations of barcode

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10942964B2 (en) 2009-02-02 2021-03-09 Hand Held Products, Inc. Apparatus and method of embedding meta-data in a captured image
US20110158469A1 (en) * 2009-12-29 2011-06-30 Mastykarz Justin P Methods and apparatus for management of field operations, projects and/or collected samples
US20120036538A1 (en) * 2010-08-04 2012-02-09 Nagravision S.A. Method for sharing data and synchronizing broadcast data with additional information
US8719869B2 (en) * 2010-08-04 2014-05-06 Nagravision S.A. Method for sharing data and synchronizing broadcast data with additional information
US20160343170A1 (en) * 2010-08-13 2016-11-24 Pantech Co., Ltd. Apparatus and method for recognizing objects using filter information
US8666978B2 (en) 2010-09-16 2014-03-04 Alcatel Lucent Method and apparatus for managing content tagging and tagged content
US8655881B2 (en) * 2010-09-16 2014-02-18 Alcatel Lucent Method and apparatus for automatically tagging content
US20120072419A1 (en) * 2010-09-16 2012-03-22 Madhav Moganti Method and apparatus for automatically tagging content
US8849827B2 (en) 2010-09-16 2014-09-30 Alcatel Lucent Method and apparatus for automatically tagging content
US8533192B2 (en) 2010-09-16 2013-09-10 Alcatel Lucent Content capture device and methods for automatically tagging content
US8579198B2 (en) * 2010-12-01 2013-11-12 Symbol Technologies, Inc. Enhanced laser barcode scanning
US10133653B2 (en) * 2012-02-23 2018-11-20 Cadence Design Systems, Inc. Recording and playback of trace and video log data for programs
US20140313372A1 (en) * 2012-07-10 2014-10-23 Sony Corporation Image distribution system and methods
US10109210B2 (en) * 2012-09-21 2018-10-23 Justin Shelby Kitch Embeddable video playing system and method
US20140087349A1 (en) * 2012-09-21 2014-03-27 Justin Shelby Kitch Embeddable video playing system and method
US20140249885A1 (en) * 2013-03-04 2014-09-04 Catalina Marketing Corporation System and method for customized search results based on a shopping history of a user, retailer identifications, and items being promoted by retailers
US9723253B2 (en) * 2015-03-11 2017-08-01 Sony Interactive Entertainment Inc. Apparatus and method for automatically generating an optically machine readable code for a captured image
US20170310924A1 (en) * 2015-03-11 2017-10-26 Sony Interactive Entertainment Inc. Apparatus and method for automatically generating an optically machine readable code for a captured image
US10284807B2 (en) * 2015-03-11 2019-05-07 Sony Interactive Entertainment Inc. Apparatus and method for automatically generating an optically machine readable code for a captured image
EP3142344A1 (en) * 2015-09-12 2017-03-15 Uniwersytet Warszawski A system and method for steganographic coding of metadata on images
US11609887B2 (en) * 2018-02-13 2023-03-21 Omron Corporation Quality check apparatus, quality check method, and program
US11336968B2 (en) 2018-08-17 2022-05-17 Samsung Electronics Co., Ltd. Method and device for generating content
WO2020229995A1 (en) * 2019-05-10 2020-11-19 Roderick Victor Kennedy Reduction of the effects of latency for extended reality experiences
US11113491B2 (en) * 2020-01-02 2021-09-07 The Boeing Company Methods for virtual multi-dimensional quick response codes
US11625552B2 (en) 2020-01-02 2023-04-11 The Boeing Company Virtual multi-dimensional quick response codes
US11961178B2 (en) 2020-05-11 2024-04-16 Roderick V. Kennedy Reduction of the effects of latency for extended reality experiences by split rendering of imagery types

Also Published As

Publication number Publication date
US10942964B2 (en) 2021-03-09
US20180075033A1 (en) 2018-03-15

Similar Documents

Publication Title
US10942964B2 (en) Apparatus and method of embedding meta-data in a captured image
US8565815B2 (en) Methods and systems responsive to features sensed from imagery or other data
US8156427B2 (en) User interface for mixed media reality
US9442957B2 (en) System and method of identifying visual objects
KR100641791B1 (en) Tagging Method and System for Digital Data
WO2017087568A1 (en) A digital image capturing device system and method
US20120083294A1 (en) Integrated image detection and contextual commands
US20020138476A1 (en) Document managing apparatus
US20020102966A1 (en) Object identification method for portable devices
CN109189879B (en) Electronic book display method and device
EP1783681A1 (en) Retrieval system and retrieval method
EP2003611A2 (en) Information presentation system, information presentation terminal, and server
KR101844604B1 (en) Apparatus and method for context detection in a mobile terminal
US9390089B2 (en) Distributed capture system for use with a legacy enterprise content management system
EP1917638A1 (en) System and methods for creation and use of a mixed media environment
MXPA05000422A (en) Universal computing device
US20060055804A1 (en) Picture taking device
CN100430957C (en) Image processing device, image processing method, and storage medium storing image processing program
EP2482210A2 (en) System and methods for creation and use of a mixed media environment
CN112232260A (en) Subtitle region identification method, device, equipment and storage medium
US20080147687A1 (en) Information Management System and Document Information Management Method
US20050143126A1 (en) Electronic device
US9813567B2 (en) Mobile device and method for controlling the same
US8704939B1 (en) Mobile device and method for controlling the same
KR101135222B1 (en) Method of managing multimedia file and apparatus for generating multimedia file

Legal Events

Date Code Title Description
AS Assignment

Owner name: HAND HELD PRODUCTS, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ESTOK, SLAVOMIR;REEL/FRAME:023577/0745

Effective date: 20091127

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION