WO2008087626A2 - An apparatus, system and method for decoding optical symbols - Google Patents

An apparatus, system and method for decoding optical symbols

Info

Publication number
WO2008087626A2
Authority
WO
WIPO (PCT)
Prior art keywords
data
symbol
display
present
data stored
Prior art date
Application number
PCT/IL2008/000048
Other languages
French (fr)
Other versions
WO2008087626A3 (en)
Inventor
Ran Dvir
Arik Litinsky
Original Assignee
Symlink Technologies
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/IL2007/001306 external-priority patent/WO2008072219A2/en
Application filed by Symlink Technologies filed Critical Symlink Technologies
Publication of WO2008087626A2 publication Critical patent/WO2008087626A2/en
Publication of WO2008087626A3 publication Critical patent/WO2008087626A3/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light

Definitions

  • the present invention generally relates to the field of encoding and decoding of optical symbols. More specifically, the invention discloses a novel method and system for encoding and decoding optical information (i.e. 2D and/or 3D optical symbols).
  • a conventional 1D bar code (one-dimensional UPC bar code) is just a different way of encoding numbers and letters by using a combination of bars and spaces of varying widths, which in essence is just another manner of entering data into a computer.
  • a bar code generally does not contain descriptive data. It is a reference number that a computer uses to look up an associated record that contains descriptive data and other important information.
  • a barcode found on a soda can does not contain the product name, type of soda, or price; instead, it contains a 12-digit product number.
  • when this number is scanned by the cashier at the checkout, it is transmitted to the computer, which finds the record associated with that item number in the database.
  • the matching item record contains a description of the product, vendor name, price, quantity-on-hand, etc.
  • the computer instantly does a "price lookup" and displays the price on the cash register. It also subtracts the quantity purchased from the quantity-on-hand. This entire transaction is done instantly.
  • a bar code typically has ID data encoded in it, and that data is used by the computer to look up all the specific information associated with it.
  • since computers cannot "read" bar codes, for a computer to make use of the information contained in a bar code, the bar code data must be captured and decoded into a data format that the computer can process.
  • the device that reads or captures the bar code information and sends it to the decoder is known as the bar code reader, generally called a bar code scanner.
  • a typical bar code reader kit consists of a scanner, decoder, and cable which interfaces the decoder to the computer.
  • the scanner scans the bar code symbol, captures the bars and spaces of the bar code and sends them to the decoder.
  • the decoder translates the bars and spaces into a corresponding electrical output and transmits that data to the computer in a traditional data format.
  • a bar code scanner can either have the decoder built into it, or have an interface between it and the computer.
  • the 2D barcode, unlike linear codes, can store the data within the code itself, eliminating the need to access a database to get the information. Large amounts of text and data can be stored securely and inexpensively.
  • Some 2D bar codes are like a set of linear bar codes literally stacked on top of each other.
  • the PDF417 is the best example of a stacked-bar symbol and is the most common of all 2D bar codes in use today.
  • 2D bar codes also use advanced error correction instead of a check-digit system. This error correction allows the symbol to withstand some physical damage without causing loss of data. This level of error correction is far more advanced than that of conventional 1D linear bar codes with check digits.
  • bar code readers include pen type readers (bar code wands), laser bar code scanners, CCD (Charge Coupled Device) barcode readers and camera-based barcode readers; the last of these are used for most two dimensional (2D) bar codes, which contain much more information than standard vertical line bar codes.
  • Pen type barcode readers have a light source and a photo diode placed next to each other in the tip of a pen or wand. To read a bar code, a user drags the tip of the pen across all the bars, in a steady even motion. The photo diode measures the intensity of the light reflected back from the light source and generates a waveform corresponding to the widths of the bars and spaces in the bar code. The barcode reader sends the waveform to the decoder, which decodes the waveform and sends it to the computer in a traditional data format.
  • Laser barcode scanners work the same way as pen type barcode readers. The only main difference is that Laser barcode scanners use a laser beam as their light source, and typically employ either a reciprocating mirror or a rotating prism to scan the laser beam back and forth across the bar code. As with the pen type bar code reader, a photo diode is used to measure the intensity of the light reflected back from the bar code.
  • CCD barcode scanners use an array of tiny light sensors lined up in a row in the head of the barcode reader. A voltage waveform corresponding to the bars and spaces of the bar code is generated and sent to the decoder, which decodes the data and sends it to the computer.
  • the main difference between a CCD barcode scanner, a pen type barcode scanner and a laser barcode scanner is that the CCD barcode scanner measures ambient light emitted from the bar code, whereas pen or laser barcode scanners measure reflected light of a specific frequency originating from the scanner itself.
  • Linear bar codes are decoded along one axis or direction and generally encode data characters as parallel arrangements of alternating, multiple-width strips of lower reflectivity or "bars" separated by absences of such strips having higher reflectivity or "spaces.” Each unique pattern of bars and spaces within a predetermined width defines a particular data character. A given linear symbol encodes several data characters along its length as several groups of unique bar and space patterns.
  • Newer data collection symbologies have departed from the typical linear symbologies to create 2D stacked or area symbologies in order to increase the amount of information encoded within a given area.
  • Stacked symbologies or "multi-row symbologies” employ several adjacent rows of multiple-width bars and spaces.
  • "Area symbologies" or 2D matrix symbologies employ arrangements of regular polygonal data cells where the center-to-center distance of adjacent cells is uniform
  • Reading stacked symbologies and 2D area technologies with scanning beam-type detectors typically involves a scanning approach in which the user scans the beam by hand, horizontally across the large object, a number of times to capture the image line by line.
  • the sensor output is converted to a digital signal.
  • the digital signal is then mapped into a two-dimensional character array and processed by the computer as a whole to decode the symbol or symbols.
  • Such line by line scanning is very time consuming, and frequently hard to accomplish as the user may shift the reader. Thus the reader will then have an incorrect indication of the relative locations of light and dark regions, thereby impairing decoding. If the card on which the bar code is resident is bent, the problems of the resulting image are increased.
  • two-dimensional readers have been employed that use cameras, or semiconductor or other suitable light receiving elements, that image the entire two-dimensional area substantially simultaneously. This is a memory-intensive operation for the processor. Further, due to optical limitations inherent in such imaging devices, these readers have a relatively small depth of field within which symbols can be read. To increase the reader's depth of field, some two-dimensional readers employ auto focus systems, which are costly and relatively slow. Moreover, even readers with auto focus systems are limited by the depth-of-field of the auto focus system. Also, bent cards with resulting bent code strips can exacerbate the reading problems. Still further, even when reading linear or stacked symbologies, such systems employ relatively complex area-type processing for finding, identifying and decoding. The complexity of such processing makes these readers undesirably slow, and large as a system, for many linear and stacked technology applications.
  • the optical data bearing symbol may consist of one or more optical data cells, wherein one or more of the optical data cells is associated with a logic value and may be decoded based on a data bearing region and a reference region of the data cell.
  • the decoding system may consist of two main subsystems: (1) image acquisition subsystem and (2) a decoding subsystem.
  • the image acquisition subsystem may include: (1) an image acquisition device, (2) a DSP module, (3) a display memory buffer (4) a display and (5) a controller.
  • the decoding subsystem may include: (1) a controller, (2) an input interface module, (3) a graphical processing module, (4) an output interface module and (5) a logic matrix parser module.
  • image acquisition device may be any device adapted to capture images using an image sensor (e.g. CMOS, CCD), as still photographs or as sequences of moving images (video).
  • image acquisition device may be a digital camera, a web camera, a video camera, a mobile phone camera and/or any other camera known today or to be devised in the future.
  • the DSP (Digital Signal Processing) module may be adapted to receive image data from the image acquisition device and convert the raw data from the image sensor into a color-corrected image in a standard image file format (e.g. GIF, JPEG, TIFF).
  • DSP module may be further adapted to process images to improve their quality (e.g. contrast, brightness, additional filtering).
  • the DSP module may be implemented as (1) internal dedicated hardware (e.g. a dedicated chip included in the image acquisition device), or as (2) external dedicated hardware, or as (3) a set of software algorithms included in the controller.
  • the display may be a device adapted to show images and /or information (e.g. an LCD display).
  • the display may be adapted to receive image data from the DSP module or from a display memory buffer.
  • a display memory buffer may be a storage device adapted to store image data received from the DSP module.
  • the display memory buffer may store the data being transferred from the DSP module to the display. It may act as a buffer allowing the DSP module and display to act independently without being affected by minor differences in operation (e.g. timing etc.).
  • ready-for-use data will be copied to the DMB, where it can be either (1) used by the display, (2) stored in main memory or (3) retrieved by the decoding subsystem. Accordingly, when the image acquisition device is active, the DMB may be associated with a "refresh rate" (i.e. the frequency of the data loading and/or retrieval operations performed on the DMB).
  • the image acquisition subsystem, upon activation of the image acquisition device, may operate in different modes: (1) passive, (2) snap-shot and (3) video.
  • the passive operational mode may be generally characterized by the following steps: (1) acquisition of image data by the image acquisition device, (2) processing image data by the DSP module, (3) loading processed image data to the DMB, (4) displaying the image.
  • snap-shot and video operational modes may be generally characterized by the following steps: (1) triggering an image acquisition, (2) acquisition of an image data by the image acquisition device, (3) processing image data by the DSP module, (4) storing processed image data on the main memory unit, and (5) displaying the image (optional).
  • image data acquired in the snap-shot and/or video operational modes has better image quality (e.g. resolution, pixel density) than image data acquired in the passive operational mode (e.g. image data which was loaded to the DMB).
  • input interface module may be adapted to receive and/or retrieve data from (1) a display memory buffer or from (2) a main memory unit.
  • input interface module may be adapted to sense when an image acquisition device is active.
  • input interface module may be adapted to retrieve data from a DMB.
  • data may be retrieved from a DMB at a rate associated with the refresh rate of the DMB and with the processing capabilities of the decoding subsystem; e.g. the input interface module may retrieve a frame every X milliseconds from the DMB, wherein X is a sufficient amount of time for (1) loading the data of a complete frame to the DMB and (2) analyzing the data by the decoding subsystem (as described herein below).
  • the input interface module may further include an internal memory unit adapted to store data retrieved from the image acquisition subsystem.
  • the decoding subsystem may include a graphical processing module adapted to: (1) determine whether a given image data consists of an optical data bearing symbol, and (2) extract from the optical data bearing symbol a logic data matrix.
  • the graphical processing module may be adapted to determine whether a given image data set (i.e. data representing an image) consists of an optical data bearing symbol based on the detection of pivot corners in the given image data.
  • the graphical processing module may be further adapted to detect "potential pivot corners" based on general parameters that may suggest that the shape is a pivot corner (e.g. shape's coordinates, size etc.).
  • the graphical processing module, upon detection of an optical data bearing symbol, is further adapted to determine whether the image data consisting of the optical data bearing symbol is of sufficient quality for further processing and decoding (e.g. signal to noise ratio, resolution etc.).
  • the graphical processing module is further adapted to trigger the image acquisition device to acquire an image (e.g. trigger a snap-shot).
  • the graphical processing module may be adapted to detect and tag a pivot corner based on a comparison of: (1) parameters of a potential pivot corner (e.g. radius, area, diameter), and (2) parameters of a shape framed within the potential pivot corner (e.g. framed circle, rectangle, octagon etc.).
  • the graphical processing module may be further adapted to project a data bearing symbol onto a normalized plane.
  • the graphical processing module may be adapted to detect that a data bearing symbol is positioned on a non-normalized plane based on the distances between the data symbol's pivot corners.
  • the graphical processing module may be adapted to project the data bearing symbol to a normalized plane based on the distances between the data symbol's pivot corners.
  • the graphical processing module may be adapted to determine the logic values (e.g. 0, 1) of one or more data cells associated with the data bearing symbol. According to some embodiments of the present invention, the graphical processing module may be further adapted to estimate a histogram of a data cell based on at least one sample taken from the data cell reference region and from at least one sample taken from the data cell data bearing region. According to further embodiments of the present invention, the graphical processing module may be adapted to determine the logic value of a data cell based on: (1) the histogram associated with the data cell, and (2) a sample taken from the data bearing region of the reference cell.
  • the graphical processing module may be adapted to extract a logic data matrix from the data bearing symbol based on the determined logic values of the data cells and the orientation regions.
  • the logic matrix parser module may be adapted to decode a logic data matrix and retrieve the data encoded in the matrix (e.g. CRC type, opcode, data).
  • the output interface module may be adapted to send data to an output device/network based on the decoded information from the data bearing symbol.
  • FIG. 1 is a diagram of an exemplary decoding device with its display showing an optical data bearing symbol acquired from different sources, wherein the optical data bearing symbol consists of one or more optical data cells;
  • FIG. 2 is a block diagram of an exemplary decoding system according to some embodiments of the present invention, wherein an image acquisition subsystem is adapted to acquire images and wherein a graphical processing module is adapted to: determine whether a given image data consists of an optical data bearing symbol, and extract from the optical data bearing symbol a logic data matrix;
  • Fig. 3 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by an image acquisition subsystem adapted to acquire images in a passive operational mode;
  • Fig. 4 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by an image acquisition subsystem adapted to acquire images in a snap-shot and/or video operational mode;
  • Fig. 5 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a decoding system adapted to sense whether an image acquisition device is active and convert a data bearing symbol into digital data;
  • Fig. 6 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a decoding system adapted to sense whether an image acquisition device is active, trigger an image acquisition operation and convert a data bearing symbol into digital data;
  • Fig. 7 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a graphical processing module adapted to detect a data bearing symbol within an acquired image;
  • Fig. 8 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a graphical processing module adapted to extract a logic data matrix from a data bearing symbol;
  • Fig. 9 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a logic matrix parser adapted to decode a logic data matrix and retrieve data encoded in the matrix (e.g. CRC type, opcode, data).
  • FIG. 10 is a diagram showing an exemplary graphical processing module adapted to decode received data using the data bearing regions of an optical data cell of the data mask layer in accordance with some embodiments of the present invention, wherein the graphical processing module may be adapted to determine the logic values (e.g. 0, 1) of one or more data cells associated with the data bearing symbol.
  • Fig. 11 is a schematic showing a server adapted to send and/or distribute an application via a network to one or more computers adapted to process a computer program product in accordance with some embodiments of the present invention.
  • It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
  • Embodiments of the present invention may include apparatuses for performing the operations herein.
  • This apparatus may be specially constructed for the desired purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions, and capable of being coupled to a computer system bus.
  • FIG.1 there is shown a diagram of an exemplary decoding device 1000 with its display showing an optical data bearing symbol 1100 acquired from different sources such as: (1) a newspaper 1200, (2) a Television display 1300 and (3) a collection card.
  • Referring now to FIG. 2, there is shown a block diagram of an exemplary decoding system according to some embodiments of the present invention, wherein an image acquisition subsystem may be adapted to acquire images and wherein a graphical processing module may be adapted to: determine whether a given image data consists of an optical data bearing symbol, and extract from the optical data bearing symbol a logic data matrix.
  • a graphical processing module may be adapted to: determine whether a given image data consists of an optical data bearing symbol, and extract from the optical data bearing symbol a logic data matrix.
  • system 2000 may be adapted to acquire an image and decode an optical data bearing symbol which may consist of one or more optical data cells, wherein one or more of the optical data cells is associated with a logic value and may be decoded based on a data bearing region and a reference region of the data cell.
  • decoding system 2000 may consist of two main subsystems: (1) an image acquisition subsystem 2020 and (2) a decoding subsystem 2010.
  • the image acquisition subsystem 2020 may include: (1) an image acquisition device (e.g. a camera) 2500, (2) a DSP module 2800, (3) a display memory buffer 2600, (4) a display 2900 and (5) a controller.
  • the decoding subsystem may include: (1) a controller 2100, (2) an input interface module 2400, (3) a graphical processing module 2300, (4) an output interface module 2700 and (5) a logic matrix parser module 2200.
  • image acquisition device (“camera”) 2500 may be any device adapted to capture images using an image sensor (e.g. CMOS, CCD), as still photographs or as sequences of moving images (video).
  • the image acquisition device may be a digital camera, a web camera, a video camera, a mobile phone camera and/or any other camera known today or to be devised in the future.
  • the DSP (Digital Signal Processing) module 2800 may be adapted to receive image data from the image acquisition device 2500 and convert the raw data from the image sensor into a color-corrected image in a standard image file format (e.g. GIF, JPEG, TIFF).
  • the DSP module may be further adapted to process images to improve their quality (e.g. contrast, brightness, additional filtering).
  • DSP module 2800 may be implemented as (1) internal dedicated hardware (e.g. a dedicated chip included in the image acquisition device), or as (2) external dedicated hardware, or as (3) a set of software algorithms included in the controller.
  • display 2900 may be a device adapted to show images and /or information (e.g. an LCD display). According to some further embodiments of the present invention, display 2900 may be adapted to receive image data from the DSP module 2800 or from a display memory buffer 2600.
  • a display memory buffer (“DMB”) 2600 may be a storage device adapted to store image data received from DSP module 2800. According to some further embodiments of the present invention, display memory buffer 2600 may store the data being transferred from the DSP module 2800 to the display 2900. It may act as a buffer allowing the DSP module and display to act independently without being affected by minor differences in operation (timing etc.).
  • ready-for-use data will be copied to the DMB, where it can be either (1) used by the display, (2) stored in the main memory unit 2650 or (3) retrieved by the decoding subsystem. Accordingly, when the image acquisition device is active, the DMB may be associated with a "refresh rate" (e.g. the frequency of the data loading and/or retrieval operations performed on the DMB).
  • a "refresh rate” e.g. the frequency of loading and/or retrieving data operations which are performed on the DMB.
  • image acquisition subsystem 2020 may operate in different modes: (1) passive, (2) snap-shot and (3) video.
  • the functionality of the operational modes of image acquisition subsystem 2020 may best be described in conjunction with Figs. 3 and 4.
  • Referring now to Fig. 3, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by an image acquisition subsystem adapted to acquire images in a passive operational mode.
  • the passive operational mode may be generally characterized by the following steps: (1) acquisition of image data by the image acquisition device 2500 (step 3000), (2) processing image data by the DSP module 2800 (step 3100), (3) loading processed image data to the DMB (step 3300) and (4) displaying the image.
  • snap-shot and/or video operational modes may be generally characterized by the following steps: (1) triggering an image acquisition (step 4000), (2) acquisition of image data by the image acquisition device (step 4100), (3) processing image data by the DSP module (step 4300), (4) storing processed image data on the main memory unit 2650 (step 4400), and (5) displaying the image (optional).
  • image data acquired in the snap-shot and/or video operational modes has better image quality (e.g. resolution, pixel density) than image data acquired in the passive operational mode (e.g. image data which was loaded to the DMB 2600).
  • input interface module 2400 may be adapted to sense when the image acquisition device is active (step 5000). According to some further embodiments of the present invention, upon sensing that the image acquisition device is active, input interface module 2400 may be adapted to retrieve data from the DMB 2600.
  • data may be retrieved from the DMB at a rate associated with the refresh rate of the DMB and with the processing capabilities of the decoding subsystem, e.g. the input interface module may retrieve a frame every X milliseconds from the DMB, wherein X is a sufficient amount of time for (1) loading the data of a complete frame to the DMB and (2) analyzing the data by the decoding subsystem (as described herein below).
  • the input interface module may further include an internal memory unit adapted to store data retrieved from image acquisition subsystem 2020.
  • system 2000 may be adapted to determine whether the image data consists of an optical symbol (step 5200); the functionality of this step may best be described in conjunction with Fig. 7, described herein below.
  • system 2000 may be adapted to convert the optical data symbol to a logic data matrix (step 5300); the functionality of this step may best be described in conjunction with Fig. 8, described herein below.
  • system 2000 may be adapted to decode the logic data matrix (step 5400); the functionality of this step may best be described in conjunction with Fig. 9, described herein below.
  • graphical processing module 2300 may be adapted to: (1) determine whether a given image data consists of an optical data bearing symbol, and (2) extract from the optical data bearing symbol a logic data matrix.
  • the functionality of the graphical processing module 2300 may best be described in conjunction with Figs. 7 and 8, described herein below.
  • Referring now to Fig. 6, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a decoding system 2000 adapted to (1) sense whether an image acquisition device 2500 is active, (2) trigger an image acquisition operation and (3) convert a data bearing symbol into digital data.
  • input interface module may be adapted to sense when an image acquisition device is active (step 6000). According to some further embodiments of the present invention, upon sensing that an image acquisition device is active, input interface module may be adapted to retrieve data from a DMB 2600. According to some further embodiments of the present invention, the input interface module 2400 may further include an internal memory unit adapted to store data retrieved from the image acquisition subsystem.
  • input interface module may be adapted to retrieve data from the display memory buffer (step 6100) or may be directly connected to the camera and analyze the data stream feed of the camera before it is stored on the display memory buffer.
  • Retrieving data from the Display Memory Buffer may be important when the architecture of the image acquisition subsystem is such that the camera is an exclusive camera, e.g. there is no option for connecting directly to the data stream produced by the camera.
  • system 2000 may be adapted to determine whether the image data consists of an optical symbol (step 6200); the functionality of this step may best be described in conjunction with Fig. 7, described herein below.
  • graphical processing module 2300 may be further adapted to determine whether the image data consisting of the optical data bearing symbol is of sufficient quality for further processing and decoding (step 6300). According to some further embodiments of the present invention, if the image data is of insufficient quality, graphical processing module 2300 is further adapted to trigger the image acquisition device to (1) switch from passive operational mode to snap-shot operational mode (step 6600) and (2) trigger the image acquisition device 2500 to acquire an image in snap-shot mode (step 6700).
  • image data which was captured in snap-shot operational mode is stored on the main memory 2650.
  • input interface module 2400 may be adapted to receive and/or retrieve data from main memory unit 2650 (step 6800).
  • system 2000 may be adapted to convert the optical data symbol to a logic data matrix (step 6400); the functionality of this step may best be described in conjunction with Fig. 8, described herein below.
  • system 2000 may be adapted to decode the logic data matrix (step 6500); the functionality of this step may best be described in conjunction with Fig. 9, described herein below.
  • graphical processing module 2300 may be adapted to: (1) determine whether a given image data consists of an optical data bearing symbol, and (2) extract from the optical data bearing symbol a logic data matrix.
  • the functionality of the graphical processing module 2300 may best be described in conjunction with Figs. 7 and 8, described herein below.
  • Referring now to Fig. 7, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a graphical processing module 2300 adapted to detect a data bearing symbol within an acquired image.
  • system 2000 may be adapted to retrieve image data from a display memory buffer 2600 or from a main memory unit 2650 (step 7000) as was described herein above.
  • graphical processing module 2300 may be adapted to detect a data bearing symbol within the image data.
  • graphical processing module 2300 may be adapted to detect a potential "pivot corner" shape (step 7100).
  • the graphical processing module may be adapted to determine whether a given image data set (i.e. data representing an image) consists of an optical data bearing symbol based on the detection of pivot corners in the given image data.
  • the graphical processing module may be further adapted to detect "potential pivot corners" based on general parameters that may suggest that the shape is a pivot corner (e.g. shape's coordinates, size etc.).
  • graphical processing module 2300 may be adapted to locate one or more points of reference (e.g. center of mass) within the potential "pivot corner" shape (step 7200).
  • graphical processing module 2300 may be adapted to calculate parameters of a shape framed within the potential "pivot corner", e.g. the radius of a framed circle (step 7300).
  • graphical processing module 2300 may be adapted to compare parameters associated with the framed shape and parameters associated with the potential pivot corner (step 7400) and determine whether the potential pivot corner is actually a pivot corner based on that comparison.
  • the graphical processing module 2300 may be adapted to detect and tag a pivot corner based on a comparison of: (1) parameters of a potential pivot corner (e.g. radius, area, diameter), and (2) parameters of a shape framed within the potential pivot corner (e.g. framed circle, rectangle, octagon etc.); an illustrative sketch of this comparison and of the subsequent plane normalization is given after this list.
  • graphical processing module 2300 may be adapted to determine whether the image data includes additional potential pivot corners (step 7500).
  • graphical processing module 2300 may be adapted to determine whether the image data consists of an optical data symbol (step 7600) based on the analysis of the pivot corners described hereinabove.
  • Referring now to Fig. 8, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a graphical processing module 2300 adapted to extract a logic data matrix from a data bearing symbol.
  • graphical processing module 2300 upon detection of an optical symbol (step 8000) may be adapted to extract from the optical symbol a logic data matrix.
  • graphical processing module 2300 may be adapted to project the detected optical symbol to a normalized plane (step 8100).
  • the graphical processing module may be adapted to detect that a data bearing symbol is positioned on a distorted ("non-normalized") plane based on the distances between the data symbol's detected pivot corners.
  • the graphical processing module may be adapted to project the data bearing symbol to a normalized plane based on the distances between the data symbol's pivot corners.
  • graphical processing module 2300 may be adapted to detect clusters of data cells based on the position of one or more orientation regions (step 8200), which orientation regions may be associated with the optical symbol orientation layer.
  • graphical processing module 2300 may be adapted to calculate parameters associated with one or more optical data cells based on samples taken from one or more reference regions and one or more data bearing regions of the data cell (step 8300).
  • graphical processing module 2300 may be adapted to generate a histogram of one or more optical data cells based on the calculated data cell parameters (step 8400).
  • the graphical processing module may be adapted to estimate a histogram of a data cell based on at least one sample taken from the data cell reference region and from at least one sample taken from the data cell data bearing region.
  • graphical processing module 2300 may be adapted to determine the logic value of an optical data cell ("data cell") (step 8500).
  • the graphical processing module may be adapted to determine the logic value of a data cell based on: (1) the histogram associated with the data cell, and (2) a sample taken from the data bearing region of the reference cell.
  • step 8500 may best be described in conjunction with Fig. 10, in which there is shown a diagram of a graphical processing module 2300 adapted to decode an optical symbol by determining the logic values of one or more optical data cells (200, 300).
  • the logic value of a data cell is determined based on (1) a histogram and (2) a sample taken from the data cell bearing region 320.
  • a data cell histogram may be generated using at least one sample taken from the data cell reference region 310 and at least one sample taken from the data cell data bearing region 320; an illustrative sketch of this cell-decoding step and of the subsequent matrix parsing is given after this list.
  • graphical processing module 2300 may be adapted to extract a logic data matrix from the optical data symbol based on the determined logic values of the optical data cells (step 8600).
  • Referring now to Fig. 9, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a logic matrix parser 2200 adapted to decode a logic data matrix and retrieve data encoded in the matrix (e.g. CRC type, opcode, data).
  • logic matrix parser 2200 may be adapted to detect clusters of data bits based on the location of one or more data orientation bits associated with the logic data matrix (step 9000).
  • logic matrix parser 2200 may be adapted to decode clusters of data bits consisting of information associated with the metadata of information encoded within the optical symbol (step 9100).
  • logic matrix parser 2200 may be adapted to decode a data bearing cluster consisting of information associated with the CRC used with the optical symbol (step 9200). According to some embodiments of the present invention, logic matrix parser 2200 may be adapted to decode a data bearing cluster consisting of information associated with the opcode and/or additional data encoded by the optical symbol (step 9300).
  • output interface module 2700 may be adapted to send data to an output device/network based on the decoded information from the data bearing symbol.
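As referenced in the steps above, the following is a rough, illustrative sketch of the pivot-corner test (steps 7300-7400) and of projecting the detected symbol onto a normalized plane (step 8100). The expected area ratio, the tolerance, and the use of an affine mapping from three corners (rather than a full four-corner perspective projection) are assumptions made for illustration; the excerpt above does not give numeric details.

```python
import math


def looks_like_pivot_corner(corner, framed_shape, tolerance=0.25):
    """Compare a potential pivot corner with the shape framed inside it.

    Both arguments are dicts carrying an 'area'; the corner is accepted when
    the framed shape fills roughly the expected fraction of the corner.  The
    expected ratio (a circle framed in a square) and the tolerance are
    illustrative values only.
    """
    expected_area_ratio = math.pi / 4.0
    ratio = framed_shape["area"] / corner["area"]
    return abs(ratio - expected_area_ratio) <= tolerance


def affine_from_pivot_corners(src, dst):
    """Build a mapping that sends three detected pivot corners onto a normalized plane.

    A full perspective projection would use four corners; three points and an
    affine transform are enough for this simplified sketch.
    """
    (x0, y0), (x1, y1), (x2, y2) = src
    (u0, v0), (u1, v1), (u2, v2) = dst
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    a = ((u1 - u0) * (y2 - y0) - (u2 - u0) * (y1 - y0)) / det
    b = ((u2 - u0) * (x1 - x0) - (u1 - u0) * (x2 - x0)) / det
    c = ((v1 - v0) * (y2 - y0) - (v2 - v0) * (y1 - y0)) / det
    d = ((v2 - v0) * (x1 - x0) - (v1 - v0) * (x2 - x0)) / det
    return lambda x, y: (a * (x - x0) + b * (y - y0) + u0,
                         c * (x - x0) + d * (y - y0) + v0)


# Illustrative values: a 10x10 potential corner framing a circle of radius 5.
corner = {"area": 100.0}
framed_circle = {"area": math.pi * 5.0 ** 2}
print(looks_like_pivot_corner(corner, framed_circle))   # -> True

# Map three detected pivot corners onto the unit square of a normalized plane.
to_normalized = affine_from_pivot_corners(
    [(12.0, 8.0), (112.0, 18.0), (7.0, 108.0)],
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print(to_normalized(12.0, 8.0))                          # -> (0.0, 0.0)
```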
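Likewise, here is a hedged sketch of determining the logic value of a data cell from its reference and data bearing regions (steps 8300-8500) and of splitting the resulting logic matrix into the fields named above (steps 9000-9300). The 0.5 darkness factor, the field widths and the sample values are placeholders, since the excerpt does not specify them.

```python
def cell_logic_value(reference_samples, data_samples, darkness_factor=0.5):
    """Decide the logic value of one optical data cell.

    The reference region supplies the local 'light' level; the data bearing
    region is read as logic 1 when it is markedly darker than that reference.
    """
    reference_level = sum(reference_samples) / len(reference_samples)
    data_level = sum(data_samples) / len(data_samples)
    return 1 if data_level < darkness_factor * reference_level else 0


def extract_logic_matrix(cells):
    """Turn a 2-D arrangement of (reference_samples, data_samples) pairs into a logic matrix."""
    return [[cell_logic_value(ref, data) for ref, data in row] for row in cells]


def parse_logic_matrix(matrix, orientation_bits=1, crc_bits=2):
    """Split a flattened logic matrix into orientation, CRC and opcode/data fields.

    The field widths used here are placeholders; the excerpt above does not
    specify how the matrix is partitioned.
    """
    bits = [b for row in matrix for b in row]
    orientation, rest = bits[:orientation_bits], bits[orientation_bits:]
    crc, payload = rest[:crc_bits], rest[crc_bits:]
    return {"orientation": orientation, "crc": crc, "opcode_and_data": payload}


# Illustrative 2x2 symbol: bright reference regions, two dark data bearing regions.
cells = [
    [([200, 210], [40, 35]), ([205, 195], [190, 200])],
    [([198, 202], [45, 50]), ([200, 200], [198, 205])],
]
matrix = extract_logic_matrix(cells)
print(matrix)                        # -> [[1, 0], [1, 0]]
print(parse_logic_matrix(matrix))    # -> {'orientation': [1], 'crc': [0, 1], 'opcode_and_data': [0]}
```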

Abstract

There is provided a novel system and method for decoding digital data. The data may be encoded into an optical data bearing symbol, which optical data bearing symbol may consist of one or more optical data cells, wherein at least one data cell may include both a data bearing region and a reference region and wherein the data may be retrieved from a display memory buffer.

Description

AN APPARATUS, SYSTEM AND METHOD FOR DECODING OPTICAL SYMBOLS
FIELD OF THE INVENTION
[001] The present invention generally relates to the field of encoding and decoding of optical symbols. More specifically, the invention discloses a novel method and system for encoding and decoding optical information (i.e. 2D and/or 3D optical symbols).
BACKGROUND OF THE INVENTION
[002] Ever prevalent in the lives of people in most industrialized countries is the use of bar codes to identify products and memorialize other information. There are typically two types of bar codes in commercial use, a linear (1D) barcode and a two dimensional (2D) barcode. A conventional 1D bar code (one-dimensional UPC bar code) is just a different way of encoding numbers and letters by using a combination of bars and spaces of varying widths, which in essence is just another manner of entering data into a computer. A bar code generally does not contain descriptive data. It is a reference number that a computer uses to look up an associated record that contains descriptive data and other important information. For example, a barcode found on a soda can does not contain the product name, type of soda, or price; instead, it contains a 12-digit product number. When this number is scanned by the cashier at the checkout, it is transmitted to the computer, which finds the record associated with that item number in the database. The matching item record contains a description of the product, vendor name, price, quantity-on-hand, etc. The computer instantly does a "price lookup" and displays the price on the cash register. It also subtracts the quantity purchased from the quantity-on-hand. This entire transaction is done instantly. In a nutshell, a bar code typically has ID data encoded in it, and that data is used by the computer to look up all the specific information associated with it. [003] Since computers cannot "read" bar codes, for a computer to make use of the information contained in a bar code, the bar code data must be captured and decoded into a data format that the computer can process. The device that reads or captures the bar code information and sends it to the decoder is known as the bar code reader, generally called a bar code scanner. A typical bar code reader kit consists of a scanner, a decoder, and a cable which interfaces the decoder to the computer. The scanner scans the bar code symbol, captures the bars and spaces of the bar code and sends them to the decoder. The decoder translates the bars and spaces into a corresponding electrical output and transmits that data to the computer in a traditional data format. A bar code scanner can either have the decoder built into it, or have an interface between it and the computer.
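As a minimal, hedged illustration of the price-lookup flow described above (the product number and record below are hypothetical and not taken from the patent), the scanned code serves only as a key into a database record:

```python
# Hypothetical product database keyed by the 12-digit number encoded in a UPC barcode.
PRODUCT_DB = {
    "012345678905": {"description": "Cola, 330 ml can", "vendor": "Acme Beverages",
                     "price": 1.25, "quantity_on_hand": 144},
}

def price_lookup(scanned_code: str) -> dict:
    """Resolve a scanned barcode value to its item record and update stock."""
    record = PRODUCT_DB[scanned_code]      # the barcode itself carries no description or price
    record["quantity_on_hand"] -= 1        # subtract the quantity purchased
    return record

if __name__ == "__main__":
    item = price_lookup("012345678905")
    print(item["description"], item["price"])
```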
[004] The 2D barcode, unlike linear codes, can store the data within the code itself, eliminating the need to access a database to get the information. Large amounts of text and data can be stored securely and inexpensively. Some 2D bar codes are like a set of linear bar codes literally stacked on top of each other. Conventionally, the PDF417 is the best example of a stacked-bar symbol and is the most common of all 2D bar codes in use today. 2D bar codes also use advanced error correction instead of a check-digit system. This error correction allows the symbol to withstand some physical damage without causing loss of data. This level of error correction is far more advanced than that of conventional 1D linear bar codes with check digits.
[005] Currently, the four different types of bar code readers available include pen type readers (bar code wands), laser bar code scanners, CCD (Charge Coupled Device) barcode readers and camera-based barcode readers; the last of these are used for most two dimensional (2D) bar codes, which contain much more information than standard vertical line bar codes. Each of these types uses a slightly different technology for reading and decoding a bar code.
[006] Pen type barcode readers have a light source and a photo diode placed next to each other in the tip of a pen or wand. To read a bar code, a user drags the tip of the pen across all the bars, in a steady even motion. The photo diode measures the intensity of the light reflected back from the light source and generates a waveform corresponding to the widths of the bars and spaces in the bar code. The barcode reader sends the waveform to the decoder, which decodes the waveform and sends it to the computer in a traditional data format.
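A minimal sketch of the waveform-to-widths step described above, assuming the reflected-intensity samples have already been digitized; the threshold value and the example waveform are illustrative assumptions, not part of the patent:

```python
def waveform_to_widths(samples, threshold=0.5):
    """Convert a reflected-intensity waveform into alternating bar/space widths.

    A low reflected intensity is treated as a dark bar, a high intensity as a
    light space; consecutive samples of the same kind are collapsed into one
    run whose length approximates the element width.
    """
    runs = []
    current_is_bar = samples[0] < threshold
    length = 0
    for s in samples:
        is_bar = s < threshold
        if is_bar == current_is_bar:
            length += 1
        else:
            runs.append(("bar" if current_is_bar else "space", length))
            current_is_bar, length = is_bar, 1
    runs.append(("bar" if current_is_bar else "space", length))
    return runs

# Illustrative waveform: two narrow bars separated by a wide space.
print(waveform_to_widths([0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.2, 0.2]))
# -> [('bar', 2), ('space', 4), ('bar', 2)]
```

The decoder would then match the resulting run lengths against the character table of whatever symbology is in use.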
[007] Laser barcode scanners work the same way as pen type barcode readers. The main difference is that laser barcode scanners use a laser beam as their light source, and typically employ either a reciprocating mirror or a rotating prism to scan the laser beam back and forth across the bar code. As with the pen type bar code reader, a photo diode is used to measure the intensity of the light reflected back from the bar code.
[008] CCD barcode scanners use an array of tiny light sensors lined up in a row in the head of the barcode reader. A voltage waveform corresponding to the bars and spaces of the bar code is generated and sent to the decoder, which decodes the data and sends it to the computer. The main difference between a CCD barcode scanner, a pen type barcode scanner and a laser barcode scanner is that the CCD barcode scanner measures ambient light emitted from the bar code, whereas pen or laser barcode scanners measure reflected light of a specific frequency originating from the scanner itself.
[009] Camera-based barcode readers, used for the majority of 2D bar codes, which are becoming more popular due to their increased data-carrying ability, use a small video camera to capture an image of a bar code. The barcode reader then transmits that information to a computer and uses sophisticated digital image processing techniques to decode the bar code. Unfortunately this type of image processing of the entire 2D bar code is time consuming, requires the aiming of a camera to properly capture the image, and consumes large amounts of computer processing and memory, as well as requiring substantial electrical power to run the camera. [0010] Linear bar codes are decoded along one axis or direction and generally encode data characters as parallel arrangements of alternating, multiple-width strips of lower reflectivity or "bars" separated by absences of such strips having higher reflectivity or "spaces." Each unique pattern of bars and spaces within a predetermined width defines a particular data character. A given linear symbol encodes several data characters along its length as several groups of unique bar and space patterns.
[0011] Newer data collection symbologies have departed from the typical linear symbologies to create 2D stacked or area symbologies in order to increase the amount of information encoded within a given area. Stacked symbologies or "multi-row symbologies" employ several adjacent rows of multiple-width bars and spaces. "Area symbologies" or 2D matrix symbologies employ arrangements of regular polygonal data cells where the center-to-center distance of adjacent cells is uniform. Reading stacked symbologies and 2D area technologies with scanning beam-type detectors typically involves a scanning approach in which the user scans the beam by hand, horizontally across the large object, a number of times to capture the image line by line. The user must be very careful as to the distance the card is held from the scanner or it won't work correctly. Also, ambient light and reflections from the card surface itself can interfere with the imaging. For each sweep, the sensor output is converted to a digital signal. The digital signal is then mapped into a two-dimensional character array and processed by the computer as a whole to decode the symbol or symbols. Such line by line scanning is very time consuming, and frequently hard to accomplish as the user may shift the reader. Thus the reader will then have an incorrect indication of the relative locations of light and dark regions, thereby impairing decoding. If the card on which the bar code is resident is bent, the problems of the resulting image are increased.
[0012] To overcome such problems, two-dimensional readers have been employed that use cameras, or semiconductor or other suitable light receiving elements, that image the entire two-dimensional area substantially simultaneously. This is a memory-intensive operation for the processor. Further, due to optical limitations inherent in such imaging devices, these readers have a relatively small depth of field within which symbols can be read. To increase the reader's depth of field, some two-dimensional readers employ auto focus systems, which are costly and relatively slow. Moreover, even readers with auto focus systems are limited by the depth-of-field of the auto focus system. Also, bent cards with resulting bent code strips can exacerbate the reading problems. Still further, even when reading linear or stacked symbologies, such systems employ relatively complex area-type processing for finding, identifying and decoding. The complexity of such processing makes these readers undesirably slow, and large as a system, for many linear and stacked technology applications.
SUMMARY OF THE INVENTION
[0013] According to some embodiments of the present invention there is provided a system and method for decoding an optical data bearing symbol. According to some embodiments of the present invention, the optical data bearing symbol may consist of one or more optical data cells, wherein one or more of the optical data cells is associated with a logic value and may be decoded based on a data bearing region and a reference region of the data cell. [0014] According to some embodiments of the present invention, the decoding system may consist of two main subsystems: (1) an image acquisition subsystem and (2) a decoding subsystem. According to some further embodiments of the present invention, the image acquisition subsystem may include: (1) an image acquisition device, (2) a DSP module, (3) a display memory buffer, (4) a display and (5) a controller. According to some further embodiments of the present invention, the decoding subsystem may include: (1) a controller, (2) an input interface module, (3) a graphical processing module, (4) an output interface module and (5) a logic matrix parser module.
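A structural sketch of how the two subsystems and their modules might be wired together, based only on the summary above; every class and method name here is an illustrative assumption, and the bodies are trivial placeholders rather than the patented processing:

```python
class ImageAcquisitionDevice:
    """Stand-in for a camera or other image sensor (e.g. CMOS, CCD)."""
    def capture(self):
        # Illustrative 'raw' frame: a flat list of pixel intensities.
        return [0.2, 0.8, 0.3, 0.9]


class DSPModule:
    """Converts raw sensor data into a corrected image (placeholder processing)."""
    def process(self, raw):
        return [min(1.0, p * 1.1) for p in raw]


class DisplayMemoryBuffer:
    """Holds the frame being transferred from the DSP module to the display."""
    def __init__(self):
        self.frame = None

    def load(self, frame):
        self.frame = frame

    def retrieve(self):
        return self.frame


class GraphicalProcessingModule:
    """Detects the symbol and extracts a logic matrix (placeholders only)."""
    def contains_symbol(self, frame):
        return frame is not None

    def extract_logic_matrix(self, frame):
        return [[1, 0], [0, 1]]


class LogicMatrixParser:
    """Splits the logic matrix into the fields named in the summary."""
    def decode(self, matrix):
        return {"crc_type": None, "opcode": None, "data": matrix}


class DecodingSystem:
    """Wires the image acquisition subsystem to the decoding subsystem."""
    def __init__(self):
        self.camera = ImageAcquisitionDevice()
        self.dsp = DSPModule()
        self.dmb = DisplayMemoryBuffer()
        self.gpu = GraphicalProcessingModule()
        self.parser = LogicMatrixParser()

    def run_once(self):
        # Image acquisition subsystem: capture, DSP processing, load to the DMB.
        self.dmb.load(self.dsp.process(self.camera.capture()))
        # Decoding subsystem: the input interface retrieves the frame from the DMB.
        frame = self.dmb.retrieve()
        if self.gpu.contains_symbol(frame):
            return self.parser.decode(self.gpu.extract_logic_matrix(frame))
        return None


print(DecodingSystem().run_once())
```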
[0015] According to some embodiments of the present invention, image acquisition device may be any device adapted to capture images using an image sensor (e.g. CMOS, CCD), as still photographs or as sequences of moving images (video). According to further embodiments of the present invention, image acquisition device may be a digital camera, a web camera, a video camera, a mobile phone camera and/or any other camera known today or to be devised in the future.
[0016] According to some embodiments of the present invention, DSP (Digital Signal Processing) module may be adapted to receive image data from the image acquisition device and convert the raw data from the image sensor into a color-corrected image in a standard image file (e.g. GIF, JPEG, TIFF). According to some embodiments of the present invention, DSP module may be further adapted to process images to improve their quality (e.g. contrast, brightness, additional filtering).
[0017] According to yet further embodiments of the present invention, DSP module may be implemented as (1) internal dedicated hardware (e.g. the image acquisition device includes a dedicated chip), or as (2) external dedicated hardware, or as (3) a set of software algorithms included in the controller.
[0018] According to some embodiments of the present invention, the display may be a device adapted to show images and/or information (e.g. an LCD display). According to some further embodiments of the present invention, the display may be adapted to receive image data from the DSP module or from a display memory buffer.
[0019] According to some embodiments of the present invention, a display memory buffer ("DMB") may be a storage device adapted to store image data received from the DSP module. According to some further embodiments of the present invention, the display memory buffer may store the data being transferred from the DSP module to the display. It may act as a buffer allowing the DSP module and display to act independently without being affected by minor differences in operation (e.g. timing etc.). According to some embodiments of the present invention, ready-for-use data will be copied to the DMB when it can be either (1) used by the display, (2) stored in main memory or (3) retrieved by the decoding subsystem. Accordingly, when the image acquisition device is active, the DMB may be associated with a "refresh rate" (i.e. the frequency of loading and/or retrieving data operations which are performed on the DMB).
[0020] According to some embodiments of the present invention, upon activation of the image acquisition device, the image acquisition subsystem may operate in different modes: (1) passive, (2) snap-shot and (3) video. According to yet further embodiments of the present invention, the passive operational mode may be generally characterized by the following steps: (1) acquisition of image data by the image acquisition device, (2) processing image data by the DSP module, (3) loading processed image data to the DMB, and (4) displaying the image. According to yet further embodiments of the present invention, the snap-shot and video operational modes may be generally characterized by the following steps: (1) triggering an image acquisition, (2) acquisition of image data by the image acquisition device, (3) processing image data by the DSP module, (4) storing processed image data on the main memory unit, and (5) displaying the image (optional). According to some embodiments of the present invention, image data which was acquired in the snap-shot and/or video operational modes (e.g. image data which was stored on the main memory) has better image quality (e.g. resolution, pixel density etc.) than image data which was acquired in the passive operational mode (e.g. image data which was loaded to the DMB).
[0021] According to some embodiments of the present invention, the input interface module may be adapted to receive and/or retrieve data from (1) a display memory buffer or from (2) a main memory unit. According to some embodiments of the present invention, the input interface module may be adapted to sense when an image acquisition device is active. According to some further embodiments of the present invention, upon sensing that an image acquisition device is active, the input interface module may be adapted to retrieve data from a DMB. According to some further embodiments of the present invention, data may be retrieved from a DMB at a rate associated with the refresh rate of the DMB and with the processing capabilities of the decoding subsystem, e.g. the input interface module may retrieve a frame every X milliseconds from the DMB, wherein X is a sufficient amount of time for (1) loading data of a complete frame to the DMB and (2) analyzing the data by the decoding subsystem (as described hereinbelow). According to some further embodiments of the present invention, the input interface module may further include an internal memory unit adapted to store data retrieved from the image acquisition subsystem.
[0022] According to some embodiments of the present invention, the decoding subsystem may include a graphical processing module adapted to: (1) determine whether a given image data consists of an optical data bearing symbol, and (2) extract from the optical data bearing symbol a logic data matrix.
[0023] According to some embodiments of the present invention, the graphical processing module may be adapted to determine whether a given image data set (i.e. data representing an image) consists of an optical data bearing symbol based on the detection of pivot corners in the given image data. According to some embodiments of the present invention, the graphical processing module may be further adapted to detect "potential pivot corners" based on general parameters that may suggest that the shape is a pivot corner (e.g. shape's coordinates, size etc.).
[0024] According to some further embodiments of the present invention, upon detection of an optical data bearing symbol, the graphical processing module is further adapted to determine whether the image data consisting of the optical data bearing symbol is of sufficient quality for further processing and decoding (e.g. signal to noise ratio, resolution etc.). According to some further embodiments of the present invention, if the image data is of insufficient quality, the graphical processing module is further adapted to trigger the image acquisition device to acquire an image (e.g. trigger a snap-shot).
[0025] According to some further embodiments of the present invention, the graphical processing module may be adapted to detect and tag a pivot corner based on a comparison of: (1) parameters of a potential pivot corner (e.g. radius, area, diameter), and (2) parameters of a shape framed within the potential pivot corner (e.g. framed circle, rectangle, octagon etc.).
[0026] According to some embodiments of the present invention, the graphical processing module may be further adapted to project a data bearing symbol onto a normalized plane. According to some further embodiments of the present invention, the graphical processing module may be adapted to detect that a data bearing symbol is positioned on a non-normalized plane based on the distances between the data symbol's pivot corners. According to yet further embodiments of the present invention, the graphical processing module may be adapted to project the data bearing symbol to a normalized plane based on the distances between the data symbol's pivot corners.
[0027] According to some embodiments of the present invention, the graphical processing module may be adapted to determine the logic values (e.g. 0, 1) of one or more data cells associated with the data bearing symbol. According to some embodiments of the present invention, the graphical processing module may be further adapted to estimate a histogram of a data cell based on at least one sample taken from the data cell reference region and from at least one sample taken from the data cell data bearing region. According to further embodiments of the present invention, the graphical processing module may be adapted to determine the logic value of a data cell based on: (1) the histogram associated with the data cell, and (2) a sample taken from the data bearing region of the reference cell.
[0028] According to some embodiments of the present invention, the graphical processing module may be adapted to extract a logic data matrix from the data bearing symbol based on the determined logic values of the data cells and the orientation regions.
[0029] According to some embodiments of the present invention, the logic matrix parser module may be adapted to decode a logic data matrix and retrieve the data encoded in the matrix (e.g. CRC type, opcode, data).
[0030] According to some embodiments of the present invention, the output interface module may be adapted to send data to an output device/network based on the decoded information from the data bearing symbol.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
[0032] Fig. 1 is a diagram of an exemplary decoding device with its display showing an optical data bearing symbol acquired from different sources, wherein the optical data bearing symbol consists of one or more optical data cells;
[0033] Fig. 2 is a block diagram of an exemplary decoding system according to some embodiments of the present invention, wherein an image acquisition subsystem is adapted to acquire images and wherein a graphical processing module is adapted to: determine whether a given image data consists of an optical data bearing symbol, and extract from the optical data bearing symbol a logic data matrix;
[0034] Fig. 3 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by an image acquisition subsystem adapted to acquire images in a passive operational mode;
[0035] Fig. 4 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by an image acquisition subsystem adapted to acquire images in snap-shot and/or video operational modes;
[0036] Fig. 5 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a decoding system adapted to sense whether an image acquisition device is active and convert a data bearing symbol into digital data;
[0037] Fig. 6 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a decoding system adapted to sense whether an image acquisition device is active, trigger an image acquisition operation and convert a data bearing symbol into digital data;
[0038] Fig. 7 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a graphical processing module adapted to detect a data bearing symbol within an acquired image;
[0039] Fig. 8 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a graphical processing module adapted to extract a logic data matrix from a data bearing symbol;
[0040] Fig. 9 is a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a logic matrix parser adapted to decode a logic data matrix and retrieve data encoded in the matrix (e.g. CRC type, opcode, data);
[0041] Fig. 10 is a diagram showing an exemplary graphical processing module adapted to decode received data using the data bearing regions of an optical data cell of the data mask layer in accordance with some embodiments of the present invention, wherein the graphical processing module may be adapted to determine the logic values (e.g. 0, 1) of one or more data cells associated with the data bearing symbol; and
[0042] Fig. 11 is a schematic showing a server adapted to send and/or distribute an application via a network to one or more computers adapted to process a computer program product in accordance with some embodiments of the present invention.
[0043] It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
DETAILED DESCRIPTION
[0044] In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
[0045] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing", "computing", "calculating", "determining", or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
[0046] Embodiments of the present invention may include apparatuses for performing the operations herein. This apparatus may be specially constructed for the desired purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions, and capable of being coupled to a computer system bus.
[0047] The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the desired method. The desired structure for a variety of these systems will appear from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the inventions as described herein. One of ordinary skill in the art should understand that the described invention may be used for all kinds of wireless or wire-line systems.
[0048] According to some embodiments of the present invention there is provided a system and method for decoding an optical data bearing symbol. According to some embodiments of the present invention, the optical data bearing symbol may consist of one or more optical data cells, wherein one or more of the optical data cells is associated with a logic value and may be decoded based on a data bearing region and a reference region of the data cell.
[0049] According to some embodiments of the present invention, the decoding system may consist of two main subsystems: (1) an image acquisition subsystem and (2) a decoding subsystem. According to some further embodiments of the present invention, the image acquisition subsystem may include: (1) an image acquisition device, (2) a DSP module, (3) a display memory buffer, (4) a display and (5) a controller. According to some further embodiments of the present invention, the decoding subsystem may include: (1) a controller, (2) an input interface module, (3) a graphical processing module, (4) an output interface module and (5) a logic matrix parser module.
[0050] According to some embodiments of the present invention, image acquisition device may be any device adapted to capture images using an image sensor (e.g. CMOS, CCD), as still photographs or as sequences of moving images (video). According to further embodiments of the present invention, image acquisition device may be a digital camera, a web camera, a video camera, a mobile phone camera and/or any other camera known today or to be devised in the future.
[0051] According to some embodiments of the present invention, DSP (Digital Signal Processing) module may be adapted to receive image data from the image acquisition device and convert the raw data from the image sensor into a color-corrected image in a standard image file (e.g. GIF, JPEG, TIFF). According to some embodiments of the present invention, DSP module may be further adapted to process images to improve their quality (e.g. contrast, brightness, additional filtering).
[0052] According to yet further embodiments of the present invention, DSP module may be implemented as (1) internal dedicated hardware (e.g. the image acquisition device includes a dedicated chip), or as (2) external dedicated hardware, or as (3) a set of software algorithms included in the controller.
[0053] According to some embodiments of the present invention, the display may be a device adapted to show images and/or information (e.g. an LCD display). According to some further embodiments of the present invention, the display may be adapted to receive image data from the DSP module or from a display memory buffer.
[0054] According to some embodiments of the present invention, a display memory buffer ("DMB") may be a storage device adapted to store image data received from the DSP module. According to some further embodiments of the present invention, the display memory buffer may store the data being transferred from the DSP module to the display. It may act as a buffer allowing the DSP module and display to act independently without being affected by minor differences in operation (e.g. timing etc.). According to some embodiments of the present invention, ready-for-use data will be copied to the DMB when it can be either (1) used by the display, (2) stored in main memory or (3) retrieved by the decoding subsystem. Accordingly, when the image acquisition device is active, the DMB may be associated with a "refresh rate" (i.e. the frequency of loading and/or retrieving data operations which are performed on the DMB).
[0055] According to some embodiments of the present invention, upon activation of the image acquisition device, the image acquisition subsystem may operate in different modes: (1) passive, (2) snap-shot and (3) video. According to yet further embodiments of the present invention, the passive operational mode may be generally characterized by the following steps: (1) acquisition of image data by the image acquisition device, (2) processing image data by the DSP module, (3) loading processed image data to the DMB, and (4) displaying the image. According to yet further embodiments of the present invention, the snap-shot and video operational modes may be generally characterized by the following steps: (1) triggering an image acquisition, (2) acquisition of image data by the image acquisition device, (3) processing image data by the DSP module, (4) storing processed image data on the main memory unit, and (5) displaying the image (optional). According to some embodiments of the present invention, image data which was acquired in the snap-shot and/or video operational modes (e.g. image data which was stored on the main memory) has better image quality (e.g. resolution, pixel density etc.) than image data which was acquired in the passive operational mode (e.g. image data which was loaded to the DMB).
[0056] According to some embodiments of the present invention, the input interface module may be adapted to receive and/or retrieve data from (1) a display memory buffer or from (2) a main memory unit. According to some embodiments of the present invention, the input interface module may be adapted to sense when an image acquisition device is active. According to some further embodiments of the present invention, upon sensing that an image acquisition device is active, the input interface module may be adapted to retrieve data from a DMB. According to some further embodiments of the present invention, data may be retrieved from a DMB at a rate associated with the refresh rate of the DMB and with the processing capabilities of the decoding subsystem, e.g. the input interface module may retrieve a frame every X milliseconds from the DMB, wherein X is a sufficient amount of time for (1) loading data of a complete frame to the DMB and (2) analyzing the data by the decoding subsystem (as described hereinbelow). According to some further embodiments of the present invention, the input interface module may further include an internal memory unit adapted to store data retrieved from the image acquisition subsystem.
[0057] According to some embodiments of the present invention, the decoding subsystem may include a graphical processing module adapted to: (1) determine whether a given image data consists of an optical data bearing symbol, and (2) extract from the optical data bearing symbol a logic data matrix.
[0058] According to some embodiments of the present invention, the graphical processing module may be adapted to determine whether a given image data set (i.e. data representing an image) consists of an optical data bearing symbol based on the detection of pivot corners in the given image data. According to some embodiments of the present invention, the graphical processing module may be further adapted to detect "potential pivot corners" based on general parameters that may suggest that the shape is a pivot corner (e.g. shape's coordinates, size etc.).
[0059] According to some further embodiments of the present invention, upon detection of an optical data bearing symbol, the graphical processing module is further adapted to determine whether the image data consisting of the optical data bearing symbol is of sufficient quality for further processing and decoding (e.g. signal to noise ratio, resolution etc.). According to some further embodiments of the present invention, if the image data is of insufficient quality, the graphical processing module is further adapted to trigger the image acquisition device to acquire an image (e.g. trigger a snap-shot).
[0060] According to some further embodiments of the present invention, the graphical processing module may be adapted to detect and tag a pivot corner based on a comparison of: (1) parameters of a potential pivot corner (e.g. radius, area, diameter), and (2) parameters of a shape framed within the potential pivot corner (e.g. framed circle, rectangle, octagon etc.).
[0061] According to some embodiments of the present invention, the graphical processing module may be further adapted to project a data bearing symbol onto a normalized plane. According to some further embodiments of the present invention, the graphical processing module may be adapted to detect that a data bearing symbol is positioned on a non-normalized plane based on the distances between the data symbol's pivot corners. According to yet further embodiments of the present invention, the graphical processing module may be adapted to project the data bearing symbol to a normalized plane based on the distances between the data symbol's pivot corners.
[0062] According to some embodiments of the present invention, the graphical processing module may be adapted to determine the logic values (e.g. 0, 1) of one or more data cells associated with the data bearing symbol. According to some embodiments of the present invention, the graphical processing module may be further adapted to estimate a histogram of a data cell based on at least one sample taken from the data cell reference region and from at least one sample taken from the data cell data bearing region. According to further embodiments of the present invention, the graphical processing module may be adapted to determine the logic value of a data cell based on: (1) the histogram associated with the data cell, and (2) a sample taken from the data bearing region of the reference cell.
[0063] According to some embodiments of the present invention, the graphical processing module may be adapted to extract a logic data matrix from the data bearing symbol based on the determined logic values of the data cells and the orientation regions.
[0064] According to some embodiments of the present invention, the logic matrix parser module may be adapted to decode a logic data matrix and retrieve the data encoded in the matrix (e.g. CRC type, opcode, data).
[0065] According to some embodiments of the present invention, the output interface module may be adapted to send data to an output device/network based on the decoded information from the data bearing symbol.
[0066] Turning now to Fig. 1, there is shown a diagram of an exemplary decoding device 1000 with its display showing an optical data bearing symbol 1100 acquired from different sources such as: (1) a newspaper 1200, (2) a television display 1300 and (3) a collection card.
[0067] Turning now to Fig. 2, there is shown a block diagram of an exemplary decoding system according to some embodiments of the present invention, wherein an image acquisition subsystem may be adapted to acquire images and wherein a graphical processing module may be adapted to: determine whether a given image data consists of an optical data bearing symbol, and extract from the optical data bearing symbol a logic data matrix. The functionality of the block diagram depicted in Fig. 2 may best be described in conjunction with Figs. 3, 4, 5, 6, 7, 8, 9 and 10.
[0068] According to some embodiments of the present invention, system 2000 may be adapted to acquire an image and decode an optical data bearing symbol which may consist of one or more optical data cells, wherein one or more of the optical data cells is associated with a logic value and may be decoded based on a data bearing region and a reference region of the data cell.
[0069] According to some embodiments of the present invention, decoding system 2000 may consist of two main subsystems: (1) an image acquisition subsystem 2020 and (2) a decoding subsystem 2010. According to some further embodiments of the present invention, the image acquisition subsystem may include: (1) an image acquisition device (e.g. camera) 2500, (2) a DSP module 2800, (3) a display memory buffer 2600, (4) a main memory unit 2650, (5) a display 2900 and (6) a controller 2100. According to some further embodiments of the present invention, the decoding subsystem may include: (1) a controller 2100, (2) an input interface module 2400, (3) a graphical processing module 2300, (4) an output interface module 2700 and (5) a logic matrix parser module 2200.
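By way of illustration only, the block structure of Fig. 2 can be sketched as a simple composition of objects. The Python below is a non-authoritative sketch; the class and attribute names are hypothetical and merely mirror the reference numerals listed above.
```python
# Hypothetical sketch of the Fig. 2 block structure; names are illustrative only
# and simply mirror the reference numerals used in the description.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ImageAcquisitionSubsystem:      # 2020
    camera: Any = None                # image acquisition device 2500
    dsp: Any = None                   # DSP module 2800
    display_buffer: Any = None        # display memory buffer (DMB) 2600
    main_memory: Any = None           # main memory unit 2650
    display: Any = None               # display 2900


@dataclass
class DecodingSubsystem:              # 2010
    input_interface: Any = None       # input interface module 2400
    graphical_processing: Any = None  # graphical processing module 2300
    logic_matrix_parser: Any = None   # logic matrix parser module 2200
    output_interface: Any = None      # output interface module 2700


@dataclass
class DecodingSystem:                 # 2000
    controller: Any = None            # controller 2100, shared by both subsystems
    acquisition: ImageAcquisitionSubsystem = field(default_factory=ImageAcquisitionSubsystem)
    decoding: DecodingSubsystem = field(default_factory=DecodingSubsystem)


system = DecodingSystem()
```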
[0070] According to some embodiments of the present invention, image acquisition device ("camera") 2500 may be any device adapted to capture images using an image sensor (e.g. CMOS, CCD), as still photographs or as sequences of moving images (video). According to further embodiments of the present invention, the image acquisition device may be a digital camera, a web camera, a video camera, a mobile phone camera and/or any other camera known today or to be devised in the future.
[0071] According to some embodiments of the present invention, DSP (Digital Signal Processing) module 2800 may be adapted to receive image data from the image acquisition device 2500 and convert the raw data from the image sensor into a color-corrected image in a standard image file (e.g. GIF, JPEG, TIFF). According to some embodiments of the present invention, the DSP module may be further adapted to process images to improve their quality (e.g. contrast, brightness, additional filtering).
[0072] According to yet further embodiments of the present invention, DSP module 2800 may be implemented as (1) internal dedicated hardware (e.g. the image acquisition device includes a dedicated chip), or as (2) external dedicated hardware, or as (3) a set of software algorithms included in the controller.
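The raw-to-standard-file conversion attributed to the DSP module above might look roughly like the following minimal sketch. It assumes a single raw frame held in a NumPy array and uses a purely illustrative gray-world white balance and gamma step; the demosaicing, color correction and filtering performed by a real DSP module are not specified by the text.
```python
# Minimal, illustrative raw-to-file conversion; a real DSP module 2800 would add
# demosaicing, denoising and vendor-specific colour correction not described here.
import numpy as np
from PIL import Image  # Pillow, assumed available


def raw_to_standard_file(raw: np.ndarray, path: str = "frame.jpg") -> None:
    frame = raw.astype(np.float32)
    if frame.ndim == 2:                              # treat a mono sensor as grey RGB
        frame = np.stack([frame] * 3, axis=-1)
    frame /= max(float(frame.max()), 1.0)            # normalise to [0, 1]
    means = frame.reshape(-1, 3).mean(axis=0)        # gray-world white balance
    frame *= means.mean() / np.maximum(means, 1e-6)
    frame = np.clip(frame, 0.0, 1.0) ** (1 / 2.2)    # display gamma
    Image.fromarray((frame * 255).astype(np.uint8)).save(path)


# Example with a synthetic 10-bit raw frame:
raw = np.random.default_rng(0).integers(0, 1024, (480, 640)).astype(np.uint16)
raw_to_standard_file(raw, "frame.jpg")
```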
[0073] According to some embodiments of the present invention, display 2900 may be a device adapted to show images and/or information (e.g. an LCD display). According to some further embodiments of the present invention, display 2900 may be adapted to receive image data from the DSP module 2800 or from a display memory buffer 2600.
[0074] According to some embodiments of the present invention, a display memory buffer ("DMB") 2600 may be a storage device adapted to store image data received from DSP module 2800. According to some further embodiments of the present invention, display memory buffer 2600 may store the data being transferred from the DSP module 2800 to the display 2900. It may act as a buffer allowing the DSP module and display to act independently without being affected by minor differences in operation (timing etc.). According to some embodiments of the present invention, ready-for-use data will be copied to the DMB when it can be either (1) used by the display, (2) stored in the main memory unit 2650 or (3) retrieved by the decoding subsystem. Accordingly, when the image acquisition device is active, the DMB may be associated with a "refresh rate" (e.g. the frequency of loading and/or retrieving data operations which are performed on the DMB).
[0075] According to some embodiments of the present invention, upon activation of the image acquisition device 2500, image acquisition subsystem 2020 may operate in different modes: (1) passive, (2) snap-shot and (3) video. The functionality of the operational modes of image acquisition subsystem 2020 may best be described in conjunction with Figs. 3 and 4.
[0076] Turning now to Fig. 3, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by an image acquisition subsystem adapted to acquire images in a passive operational mode. According to further embodiments of the present invention, the passive operational mode may be generally characterized by the following steps: (1) acquisition of image data by the image acquisition device 2500 (step 3000), (2) processing image data by the DSP module 2800 (step 3100), (3) loading processed image data to the DMB (step 3300) and (4) displaying the image.
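A minimal sketch of the passive flow of Fig. 3, together with the snap-shot flow discussed next in connection with Fig. 4, is given below. The class, method and buffer names are hypothetical; the point is only that passive frames land in a small display memory buffer while triggered captures are kept in main memory.
```python
# Illustrative model of the passive flow of Fig. 3 and the snap-shot flow of
# Fig. 4; class, method and buffer names are hypothetical.
from collections import deque


class AcquisitionSubsystem:
    def __init__(self):
        self.dmb = deque(maxlen=1)   # display memory buffer: holds only the latest frame
        self.main_memory = []        # higher-quality stills / video frames

    def _acquire_and_process(self, high_quality: bool) -> dict:
        # Placeholder for sensor readout (steps 3000/4100) + DSP processing (3100/4300).
        return {"pixels": b"", "high_quality": high_quality}

    def passive_tick(self) -> dict:
        """Fig. 3: acquire, process, load to the DMB, display (steps 3000-3300)."""
        frame = self._acquire_and_process(high_quality=False)
        self.dmb.append(frame)       # step 3300: processed frame goes to the DMB
        return frame                 # the display would read it back from the DMB

    def snapshot(self) -> dict:
        """Fig. 4: triggered acquisition stored on main memory (steps 4000-4400)."""
        frame = self._acquire_and_process(high_quality=True)
        self.main_memory.append(frame)
        return frame


acq = AcquisitionSubsystem()
acq.passive_tick()
acq.snapshot()
```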
[0077] Turning now to Fig. 4, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by an image acquisition subsystem 2020 adapted to acquire images in snap-shot and/or video operational modes. According to yet further embodiments of the present invention, snap-shot and/or video operational modes may be generally characterized by the following steps: (1) triggering an image acquisition (step 4000), (2) acquisition of image data by the image acquisition device (step 4100), (3) processing image data by the DSP module (step 4300), (4) storing processed image data on the main memory unit 2650 (step 4400), and (5) displaying the image (optional).
[0078] According to some embodiments of the present invention, image data which was acquired in the snap-shot and/or video operational modes (e.g. image data which was stored on the main memory unit 2650) has better image quality (e.g. resolution, pixel density etc.) than image data which was acquired in the passive operational mode (e.g. image data which was loaded to the DMB 2600).
[0079] Turning now to Fig. 5, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a decoding system adapted to sense whether an image acquisition device is active and convert a data bearing symbol into digital data.
[0080] According to some embodiments of the present invention, input interface module 2400 may be adapted to sense when the image acquisition device is active (step 5000). According to some further embodiments of the present invention, upon sensing that the image acquisition device is active, input interface module 2400 may be adapted to retrieve data from the DMB (step 5100). According to some further embodiments of the present invention, data may be retrieved from the DMB at a rate associated with the refresh rate of the DMB and with the processing capabilities of the decoding subsystem, e.g. the input interface module may retrieve a frame every X milliseconds from the DMB, wherein X is a sufficient amount of time for (1) loading data of a complete frame to the DMB and (2) analyzing the data by the decoding subsystem (as described hereinbelow). According to some further embodiments of the present invention, the input interface module may further include an internal memory unit adapted to store data retrieved from image acquisition subsystem 2020.
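One plausible shape for the polling behaviour described above is sketched below; the refresh period and decode budget are placeholder numbers, not values taken from the text.
```python
# Illustrative polling loop for input interface module 2400; the period X is the
# larger of the assumed DMB refresh period and the assumed decode budget. Both
# numbers are placeholders, not values taken from the text.
import time

DMB_REFRESH_MS = 40     # assumed: DMB refreshed roughly every 40 ms (~25 fps)
DECODE_BUDGET_MS = 60   # assumed: worst-case analysis time of the decoding subsystem


def poll_dmb(read_dmb, analyse, iterations: int = 3) -> None:
    period_s = max(DMB_REFRESH_MS, DECODE_BUDGET_MS) / 1000.0   # "X milliseconds"
    for _ in range(iterations):
        frame = read_dmb()          # retrieve the most recent complete frame
        if frame is not None:
            analyse(frame)          # hand the frame to the decoding subsystem
        time.sleep(period_s)


# Example with trivial stand-ins for the DMB reader and the decoder:
poll_dmb(read_dmb=lambda: b"frame", analyse=lambda frame: None)
```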
[0081] According to some embodiments of the present invention, system 2000 may be adapted to determine whether the image data consists of an optical symbol (step 5200); the functionality of this step may best be described in conjunction with Fig. 7, described hereinbelow.
[0082] According to some embodiments of the present invention, system 2000 may be adapted to convert the optical data symbol to a logic data matrix (step 5300); the functionality of this step may best be described in conjunction with Fig. 8, described hereinbelow.
[0083] According to some embodiments of the present invention, system 2000 may be adapted to decode the logic data matrix (step 5400); the functionality of this step may best be described in conjunction with Fig. 9, described hereinbelow.
[0084] According to some further embodiments of the present invention, graphical processing module 2300 may be adapted to: (1) determine whether a given image data consists of an optical data bearing symbol, and (2) extract from the optical data bearing symbol a logic data matrix. The functionality of the graphical processing module 2300 may best be described in conjunction with Figs. 7 and 8, described hereinbelow.
[0085] Turning now to Fig. 6, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a decoding system 2000 adapted to (1) sense whether an image acquisition device 2500 is active, (2) trigger an image acquisition operation and (3) convert a data bearing symbol into digital data.
[0086] According to some embodiments of the present invention, input interface module may be adapted to sense when an image acquisition device is active (step 6000). According to some further embodiments of the present invention, upon sensing that an image acquisition device is active, input interface module may be adapted to retrieve data from a DMB 2600. According to some further embodiments of the present invention, the input interface module 2400 may further include an internal memory unit adapted to store data retrieved from the image acquisition subsystem.
[0087] According to some embodiments of the present invention, the input interface module may be adapted to retrieve data from the display memory buffer (step 6100), or it may be directly connected to the camera and analyze the data stream fed from the camera before it is stored on the display memory buffer. Retrieving data from the display memory buffer may be important when the architecture of the image acquisition subsystem is such that the camera is exclusive, e.g. there is no option for connecting directly to the data stream produced by the camera.
[0088] According to some embodiments of the present invention, system 2000 may be adapted to determine whether the image data consists of an optical symbol (step 6200); the functionality of this step may best be described in conjunction with Fig. 7, described hereinbelow.
[0089] According to some further embodiments of the present invention, upon detection of an optical data bearing symbol, graphical processing module 2300 may be further adapted to determine whether the image data consisting of the optical data bearing symbol is of sufficient quality for further processing and decoding (step 6300). According to some further embodiments of the present invention, if the image data is of insufficient quality, graphical processing module 2300 is further adapted to (1) switch the image acquisition device from passive operational mode to snap-shot operational mode (step 6600) and (2) trigger the image acquisition device 2500 to acquire an image in snap-shot mode (step 6700).
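The quality gate of steps 6300/6600/6700 might be approximated as below. The variance-of-Laplacian sharpness proxy and both thresholds are assumptions; the text only names signal-to-noise ratio and resolution as example criteria.
```python
# Illustrative quality gate for steps 6300/6600/6700; the variance-of-Laplacian
# sharpness proxy and both thresholds (tuned for 8-bit grey levels) are assumptions.
import numpy as np

MIN_SIDE = 120          # assumed minimum usable symbol resolution, in pixels
MIN_SHARPNESS = 50.0    # assumed variance-of-Laplacian threshold


def sharpness(gray: np.ndarray) -> float:
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])       # 4-neighbour Laplacian
    return float(lap.var())


def good_enough(gray: np.ndarray) -> bool:
    return min(gray.shape) >= MIN_SIDE and sharpness(gray) >= MIN_SHARPNESS


def decode_or_retrigger(dmb_frame: np.ndarray, trigger_snapshot, decode):
    gray = dmb_frame.astype(np.float32)
    if good_enough(gray):                            # step 6300: quality sufficient
        return decode(gray)
    better = trigger_snapshot()                      # steps 6600-6700: snap-shot mode
    return decode(better.astype(np.float32))
```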
[0090] According to some embodiments of the present invention, image data which was captured in snap-shot operational mode is stored on the main memory 2650. According to some embodiments of the present invention, input interface module 2400 may be adapted to receive and/or retrieve data from main memory unit 2650 (step 6800).
[0091] According to some embodiments of the present invention, system 2000 may be adapted to convert the optical data symbol to a logic data matrix (step 6400); the functionality of this step may best be described in conjunction with Fig. 8, described hereinbelow.
[0092] According to some embodiments of the present invention, system 2000 may be adapted to decode the logic data matrix (step 6500); the functionality of this step may best be described in conjunction with Fig. 9, described hereinbelow.
[0093] According to some further embodiments of the present invention, graphical processing module 2300 may be adapted to: (1) determine whether a given image data consists of an optical data bearing symbol, and (2) extract from the optical data bearing symbol a logic data matrix. The functionality of the graphical processing module 2300 may best be described in conjunction with Figs. 7 and 8, described hereinbelow.
[0094] Turning now to Fig. 7, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a graphical processing module 2300 adapted to detect a data bearing symbol within an acquired image. According to some embodiments of the present invention, system 2000 may be adapted to retrieve image data from a display memory buffer 2600 or from a main memory unit 2650 (step 7000) as was described hereinabove. According to yet further embodiments of the present invention, graphical processing module 2300 may be adapted to detect a data bearing symbol within the image data.
[0095] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to detect a potential "pivot corner" shape (step 7100). According to some embodiments of the present invention, the graphical processing module may be adapted to determine whether a given image data set (i.e. data representing an image) consists of an optical data bearing symbol based on the detection of pivot corners in the given image data. According to some embodiments of the present invention, the graphical processing module may be further adapted to detect "potential pivot corners" based on general parameters that may suggest that the shape is a pivot corner (e.g. shape's coordinates, size etc.).
[0096] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to locate one or more points of reference (e.g. center of mass) within the potential "pivot corner" shape (step 7200).
[0097] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to calculate parameters of a framed shape within the potential "pivot corner", e.g. the radius of a framed circle (step 7300).
[0098] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to compare parameters associated with the framed shape and parameters associated with the potential pivot corner (step 7400) and determine whether the potential pivot corner is actually a pivot corner based on that comparison. According to some further embodiments of the present invention, the graphical processing module 2300 may be adapted to detect and tag a pivot corner based on a comparison of: (1) parameters of a potential pivot corner (e.g. radius, area, diameter), and (2) parameters of a shape framed within the potential pivot corner (e.g. framed circle, rectangle, octagon etc.).
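Steps 7100-7400 could be prototyped roughly as follows, assuming a binary mask of a candidate pivot-corner blob is already available; the framed-circle estimate and the acceptance band are illustrative assumptions, since the text does not fix the compared parameters or thresholds.
```python
# Rough prototype of steps 7100-7400 for one candidate blob given as a binary
# mask; the framed-circle estimate and the acceptance band are assumptions.
import numpy as np


def centre_of_mass(mask: np.ndarray):
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())         # step 7200: point of reference


def framed_circle_radius(mask: np.ndarray) -> float:
    """Largest circle around the centre of mass that stays inside the blob (step 7300)."""
    cy, cx = centre_of_mass(mask)
    ys, xs = np.nonzero(~mask.astype(bool))            # background pixels
    if len(ys) == 0:
        return float(min(mask.shape)) / 2.0
    return float(np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).min())


def looks_like_pivot_corner(mask: np.ndarray, min_fill=0.5, max_fill=0.95) -> bool:
    """Step 7400: compare framed-shape parameters against the candidate blob."""
    r = framed_circle_radius(mask)
    blob_area = float(mask.sum())
    fill = (np.pi * r * r) / blob_area if blob_area else 0.0
    return min_fill <= fill <= max_fill


# Example: a filled 21x21 square blob surrounded by background
blob = np.pad(np.ones((21, 21), dtype=np.uint8), 5)
print(looks_like_pivot_corner(blob))
```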
[0099] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to determine whether the image data includes additional potential pivot corners (step 7500).
[00100] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to determine whether the image data consists of an optical data symbol (step 7600) based on the analysis of the pivot corners described hereinabove.
[00101] Turning now to Fig. 8, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a graphical processing module 2300 adapted to extract a logic data matrix from a data bearing symbol. According to some embodiments of the present invention, upon detection of an optical symbol (step 8000), graphical processing module 2300 may be adapted to extract from the optical symbol a logic data matrix.
[00102] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to project the detected optical symbol to a normalized plane (step 8100). According to some further embodiments of the present invention, the graphical processing module may be adapted to detect that a data bearing symbol is positioned on a distorted ("non-normalized") plane based on the distances between the data symbol's detected pivot corners. According to yet further embodiments of the present invention, the graphical processing module may be adapted to project the data bearing symbol to a normalized plane based on the distances between the data symbol's pivot corners.
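One way to realise the projection of step 8100, assuming the four pivot-corner centres have already been located, is to solve the perspective transform that maps a unit square onto the detected quadrilateral and sample the image through it. The sketch below is pure NumPy; the grid size and corner ordering are assumptions.
```python
# One possible realisation of step 8100: solve the perspective transform mapping
# a unit square onto the four detected pivot-corner centres and sample the image
# through it. Grid size and corner ordering (TL, TR, BR, BL) are assumptions.
import numpy as np


def perspective_from_corners(corners) -> np.ndarray:
    """corners: four (x, y) image points ordered TL, TR, BR, BL."""
    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float64)  # unit square
    dst = np.asarray(corners, dtype=np.float64)
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)            # maps unit square -> image plane


def sample_normalized(img: np.ndarray, corners, grid: int = 64) -> np.ndarray:
    """Return a grid x grid view of the symbol on the normalized plane."""
    H = perspective_from_corners(corners)
    s = (np.arange(grid) + 0.5) / grid                # normalized coordinates of samples
    xs, ys = np.meshgrid(s, s)
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ H.T
    px = np.clip((pts[..., 0] / pts[..., 2]).round().astype(int), 0, img.shape[1] - 1)
    py = np.clip((pts[..., 1] / pts[..., 2]).round().astype(int), 0, img.shape[0] - 1)
    return img[py, px]
```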
[00103] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to detect clusters of data cells based on the position of one or more orientation regions (step 8200), which orientation regions may be associated with the optical symbol orientation layer.
[00104] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to calculate parameters associated with one or more optical data cells based on samples taken from one or more reference regions and one or more data bearing regions of the data cell (step 8300).
[00105] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to generate a histogram of one or more optical data cells based on the data cell's calculated parameters (step 8400). According to some further embodiments of the present invention, the graphical processing module may be adapted to estimate a histogram of a data cell based on at least one sample taken from the data cell reference region and from at least one sample taken from the data cell data bearing region.
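A per-cell histogram in the spirit of step 8400 might be pooled as follows; the bin count and the number of samples per region are assumptions, since the text only requires at least one sample from each region.
```python
# Illustrative per-cell histogram for step 8400, pooling intensity samples from a
# cell's reference region and data bearing region; the bin count is an assumption.
import numpy as np


def cell_histogram(reference_samples, data_samples, bins: int = 16):
    """Return (histogram, bin_edges) over the pooled samples of one data cell."""
    pooled = np.concatenate([np.ravel(reference_samples), np.ravel(data_samples)])
    return np.histogram(pooled, bins=bins, range=(0.0, 1.0))


# Example: a dark reference region and a bright data bearing region
hist, edges = cell_histogram(np.full(9, 0.15), np.full(9, 0.85))
```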
[00106] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to determine the logic value of an optical data cell ("data cell") (step 8500). According to further embodiments of the present invention, the graphical processing module may be adapted to determine the logic value of a data cell based on: (1) the histogram associated with the data cell, and (2) a sample taken from the data bearing region of the reference cell.
[00107] According to some embodiments of the present invention, step 8500 may best be described in conjunction with Fig. 10, in which there is shown a diagram of a graphical processing module 2300 adapted to decode an optical symbol by determining the logic values of one or more optical data cells (200, 300). According to some embodiments of the present invention, the logic value of a data cell is determined based on (1) a histogram and (2) a sample taken from the data cell data bearing region 320. According to further embodiments of the present invention, a data cell histogram may be generated using at least one sample taken from the data cell reference region 310 and from at least one sample taken from the data cell data bearing region 320.
[00108] According to some embodiments of the present invention, graphical processing module 2300 may be adapted to extract a logic data matrix from the optical data symbol based on the determined logic values of the optical data cells (step 8600).
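One plausible reading of steps 8500-8600 is sketched below: a per-cell threshold is derived from the pooled histogram, each data-bearing sample is compared against it, and the resulting bits are assembled into a logic data matrix. The threshold rule, the dark-cell-is-one convention and the row-major ordering are assumptions.
```python
# One plausible reading of steps 8500-8600; the mid-point threshold rule, the
# dark-cell-is-one convention and the row-major ordering are assumptions.
import numpy as np


def cell_threshold(hist, edges) -> float:
    """Midpoint between the darkest and brightest occupied histogram bins."""
    occupied = np.nonzero(hist)[0]
    lo, hi = edges[occupied[0]], edges[occupied[-1] + 1]
    return (lo + hi) / 2.0


def cell_logic_value(data_sample: float, hist, edges) -> int:
    return int(data_sample < cell_threshold(hist, edges))   # dark cell -> logic 1


def extract_logic_matrix(cells, shape) -> np.ndarray:
    """cells: iterable of (data_sample, hist, edges) in row-major symbol order."""
    bits = [cell_logic_value(s, h, e) for s, h, e in cells]
    return np.array(bits, dtype=np.uint8).reshape(shape)


# Example with two synthetic cells sharing one histogram:
hist, edges = np.histogram([0.1, 0.1, 0.9, 0.9], bins=16, range=(0.0, 1.0))
print(extract_logic_matrix([(0.05, hist, edges), (0.95, hist, edges)], (1, 2)))  # [[1 0]]
```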
[00109] Turning now to Fig. 9, there is shown a flow chart depicting the steps of an exemplary embodiment of the present invention which may be executed by a logic matrix parser 2200 adapted to decode a logic data matrix and retrieve data encoded in the matrix (e.g. CRC type, opcode, data).
[00110] According to some embodiments of the present invention, logic matrix parser 2200 may be adapted to detect data bit clusters based on the location of one or more data orientation bits associated with the logic data matrix (step 9000).
[00111] According to some embodiments of the present invention, logic matrix parser 2200 may be adapted to decode clusters of data bits consisting of metadata associated with the information encoded within the optical symbol (step 9100).
[00112] According to some embodiments of the present invention, logic matrix parser 2200 may be adapted to decode a data bearing cluster consisting of information associated with the CRC used with the optical symbol (step 9200). According to some embodiments of the present invention, logic matrix parser 2200 may be adapted to decode a data bearing cluster consisting of information associated with the opcode and/or additional data encoded by the optical symbol (step 9300).
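A hypothetical parse of the logic data matrix along the lines of Fig. 9 is sketched below. The bit layout of the metadata, CRC and opcode/data clusters, as well as the CRC-8 polynomial, are illustrative assumptions; the actual matrix layout is not disclosed in this section.
```python
# Hypothetical parse of a logic data matrix in the spirit of Fig. 9; the cluster
# layout and the CRC-8 polynomial are illustrative assumptions only.
import numpy as np


def bits_to_int(bits) -> int:
    return int("".join(str(int(b)) for b in bits), 2) if len(bits) else 0


def crc8(payload_bits, poly: int = 0x07) -> int:
    """Toy bitwise CRC-8; not claimed to be the CRC actually used by the symbol."""
    reg = 0
    for b in payload_bits:
        reg ^= int(b) << 7
        reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return reg


def parse_logic_matrix(matrix: np.ndarray) -> dict:
    flat = matrix.ravel()
    meta = bits_to_int(flat[:4])         # step 9100: metadata cluster (assumed 4 bits)
    crc = bits_to_int(flat[4:12])        # step 9200: CRC cluster (assumed 8 bits)
    payload = flat[12:]                  # step 9300: opcode / data cluster
    return {"metadata": meta,
            "crc_ok": crc == crc8(payload),
            "opcode": bits_to_int(payload[:4]),
            "data_bits": payload[4:]}


print(parse_logic_matrix(np.zeros((4, 6), dtype=np.uint8)))
```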
[00113] According to some embodiments of the present invention, output interface module 2700 may be adapted to send data to an output device/network based on the decoded information from the data bearing symbol.
[00114] While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

Claims

CLAIMS
What is claimed:
1. An electronic device comprising: an image acquisition unit adapted to generate one or more frames of image data; a display buffer functionally associated with said image acquisition unit and adapted to store data derived from said one or more frames in a format corresponding to a display associated with said device; and a symbol decoding unit adapted to receive data from data stored on said display buffer.
2. The device according to claim 1, wherein said image acquisition unit comprises an image acquisition device and controlling circuitry.
3. The device according to claim 2, wherein said controlling circuitry is adapted to receive raw image data from the image acquisition device and convert raw data into a color-corrected image in a standard image file.
4. The device according to claim 2, wherein said display is adapted to receive image data from said controlling circuitry.
5. The device according to claim 2, wherein said display buffer is adapted to store image data received from said controlling circuitry.
6. The device according to claim 1, wherein the data stored on said display buffer is retrieved by a component selected from the group consisting of said display, a memory module and said decoding subsystem.
7. The device according to claim 1, wherein the image acquisition unit is adapted to operate in a mode selected from the group consisting of a passive mode, a snapshot mode and a video mode.
8. The device according to claim 1, wherein said symbol decoding unit is further adapted to analyze data stored on said display buffer.
9. The device according to claim 1, wherein said symbol decoding unit is further adapted to analyze data stored on a memory module.
10. The device according to claim 8, wherein said symbol decoding unit is further adapted to analyze data stored on a memory module based on the analysis results of data stored on said display buffer.
11. A computer program product embodied on at least one computer-readable medium, said program configured to cause a device processor to decode image acquisition unit derived data received from a display buffer of said device.
12. The computer program product according to claim 11, further adapted to cause retrieval of data stored on said display buffer by a component selected from the group consisting of a display, a memory module and a decoding subsystem.
13. The computer program product according to claim 11, further adapted to cause a symbol decoding unit to analyze data stored on said display buffer.
14. The computer program product according to claim 11, further adapted to cause a symbol decoding unit to analyze data stored on a memory module.
15. The computer program product according to claim 13, further adapted to cause said symbol decoding unit to analyze data stored on a memory module based on the analysis results of data stored on said display buffer.
16. A server adapted to transmit a computer program product embodied on at least one computer-readable medium, said program configured to cause a device processor to decode image acquisition unit derived data received from a display buffer of said device.
17. The server according to claim 16, wherein said program is further configured to cause retrieval of data stored on said display buffer by a component selected from the group consisting of a display, a memory module and a decoding subsystem.
18. The server according to claim 16, wherein said program is further configured to cause a symbol decoding unit to analyze data stored on said display buffer.
19. The server according to claim 16, wherein said program is further configured to cause a symbol decoding unit to analyze data stored on a memory module.
20. The server according to claim 18, wherein said program is further adapted to cause said symbol decoding unit to analyze data stored on a memory module based on the analysis results of data stored on said display buffer.
21. A method comprising: generating one or more frames of image data; storing data derived from said one or more frames in a format corresponding to a display; analyzing data stored on a display memory buffer; and decoding image derived data received from a display buffer of said device.
22. The method according to claim 21, further comprising:
converting raw data into a color-corrected image in a standard image file.
23. The method according to claim 21, further comprising: storing raw data on said display memory buffer.
24. The method according to claim 21, further comprising: retrieving data stored on said display buffer by a component selected from the group consisting of a display, a memory module and a decoding subsystem.
25. The method according to claim 21, further comprising: analyzing data stored on said display buffer.
26. The method according to claim 21, further comprising: analyzing data stored on a memory module.
27. The method according to claim 25, further comprising: analyzing data stored on a memory module based on the analysis results of data stored on said display buffer.
28. A method for decoding an optical data bearing symbol comprising: a. polling a display memory buffer; b. determining whether data stored on said display memory buffer consists of an optical symbol.
29. The method according to claim 28, further comprising: a. determining the logic value of one or more optical data bearing symbol based on its data bearing region and reference region.
30. The method according to claim 28, wherein said optical data bearing symbol comprises two or more pivot corners.
31. The method according to claim 28, wherein said optical data bearing symbol comprises one or more orientation regions.
32. The method according to claim 28 further comprising the step of: a. projecting said optical data symbol to a normalized plane.
33. The method according to claim 29 further comprising the steps of: a. creating a histogram of one or more optical data cell based on at least one sample taken from the reference region and from at least one sample taken from the data bearing region; and b. determining the logic value of the optical data symbol based on said optical data cell histogram.
34. The method according to claim 29 further comprising the steps of: a. extracting a logic data matrix based on said optical data symbol logic value.
35. A system for decoding an optical data bearing symbol comprising a controller adapted to perform polling on a display memory buffer; and determine whether the data in the display memory buffer consists of an optical symbol.
36. The system according to claim 35, wherein the optical data symbol comprises one or more data cells.
37. The system according to claim 36, wherein the data cell comprises a data bearing region and a reference region.
38. The system according to claim 37, wherein the controller is further adapted to determine the logic value of a data cell based on the data bearing region and reference region.
39. The system according to claim 35, wherein the optical data symbol comprises two or more pivot corners.
40. The system according to claim 36, wherein said data cell comprises one or more orientation regions.
PCT/IL2008/000048 2007-01-17 2008-01-10 An apparatus system and method for decoding optical symbols WO2008087626A2 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US88524807P 2007-01-17 2007-01-17
US88525107P 2007-01-17 2007-01-17
US60/885,248 2007-01-17
US60/885,251 2007-01-17
US90946307P 2007-04-01 2007-04-01
US60/909,463 2007-04-01
PCT/IL2007/001306 WO2008072219A2 (en) 2006-12-14 2007-10-28 An apparatus system and method for encoding and decoding optical symbols
ILPCT/IL2007/001306 2007-10-28

Publications (2)

Publication Number Publication Date
WO2008087626A2 true WO2008087626A2 (en) 2008-07-24
WO2008087626A3 WO2008087626A3 (en) 2010-02-04

Family

ID=39636459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2008/000048 WO2008087626A2 (en) 2007-01-17 2008-01-10 An apparatus system and method for decoding optical symbols

Country Status (1)

Country Link
WO (1) WO2008087626A2 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5949057A (en) * 1996-03-29 1999-09-07 Telxon Corporation Portable data collection device with crosshair targeting illumination assembly
US6318637B1 (en) * 1997-12-02 2001-11-20 Telxon Corporation Multi-focal length imaging based portable dataform reader
US6749120B2 (en) * 2000-12-11 2004-06-15 Cpo Technologies Corp. Method and apparatus for scanning electronic barcodes

Also Published As

Publication number Publication date
WO2008087626A3 (en) 2010-02-04


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 08702629; Country of ref document: EP; Kind code of ref document: A2)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 08702629; Country of ref document: EP; Kind code of ref document: A2)