US20060175530A1 - Sub-pixel resolution and wavefront analyzer system
- Publication number
- US20060175530A1 (application US 11/051,850)
- Authority
- US
- United States
- Prior art keywords
- optical sensors
- output
- coupled
- sub
- cartridge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G — PHYSICS
- G02 — OPTICS
- G02B — OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B26/00 — Optical devices or arrangements for the control of light using movable or deformable optical elements
- G02B26/06 — Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the phase of light
- G02B26/08 — Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
- G02B26/0816 — Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements
- G02B26/0825 — Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements, the reflecting element being a flexible sheet or membrane, e.g. for varying the focus
- G02B26/0833 — Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements, the reflecting element being a micromechanical device, e.g. a MEMS mirror, DMD
Definitions
- The present invention relates generally to the field of optical sensors and more particularly to a method for extracting sub-pixel resolution in real-time and a wavefront sensor for an adaptive optics system.
- Optical tracking systems face two major challenges.
- The first challenge is that each pixel has a finite field of view and its sensitivity is uniform across its surface. As a result, the photosensing element cannot determine the location of an object or a feature in an image that is smaller than a single pixel.
- A number of solutions have been tried to overcome this limitation.
- One solution is to purposely blur the image over multiple pixels and calculate the centroid of the blurred image. This solution has had limited success: it requires computation and serial sampling and, therefore, no longer operates in real-time.
- The approach also comes at the price of blurring any other objects in the field of view.
- Another approach is to optically magnify the image until the feature is larger than a single pixel.
- Even in fast sensor systems with few pixels, sampling is still required, and the rate-limiting step is the transfer of each time sample to a memory space in a computer so that the information from the sensors can be manipulated by the computer's central processor.
- Still other systems extract sub-pixel resolution by calculating the centroid of an object at different time points and then computing the displacement distance with higher accuracy than a single pixel dimension or the pixel spacing.
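The centroid approach described above can be sketched in a few lines. A minimal illustration (the 5x5 grid, the spot position, and the Gaussian blur width are assumptions chosen for the example, not values from the patent):

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid of a 2-D pixel frame, in pixel units."""
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    return (xs * frame).sum() / total, (ys * frame).sum() / total

# A blurred spot centered at x = 2.3, y = 1.7 on a coarse 5x5 pixel grid.
ys, xs = np.indices((5, 5))
spot = np.exp(-((xs - 2.3) ** 2 + (ys - 1.7) ** 2) / (2 * 0.8 ** 2))

cx, cy = centroid(spot)
# The centroid recovers the spot position to a small fraction of a pixel,
# even though each pixel only reports a single summed intensity.
```

Differencing such centroids between successive frames gives the displacement tracking described above; both steps require sampled frames and computation, which is the real-time limitation the text identifies.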
- A sub-pixel resolution system that overcomes these sensor shortfalls and other problems has a number of optical sensors.
- Each of the optical sensors has a field of view that overlaps a neighboring optical sensor's field of view.
- A number of contrast enhancement circuits are coupled between neighboring optical sensors.
- An induced current circuit is coupled to a group of the optical sensors.
- Each of the optical sensors may be a real-time current generator with a Gaussian far field sensitivity. The Gaussian far field sensitivity may be created by a ball lens optically coupled to each of the optical sensors.
- Alternatively, the Gaussian far field sensitivity may be created by depositing a thin mask at the edge of each sensor, by electrically coupling together the bases of the bipolar transistors of the photosensors in a single cartridge, or by using electronic weighting approaches.
- In one embodiment, a sub-pixel resolution system has a number of optical sensors, each with a Gaussian or other linear or nonlinear far-field angular sensitivity.
- A number of contrast enhancement circuits are coupled between the optical sensors.
- An induced current circuit is coupled to a group of the optical sensors.
- The optical sensors may have a field of view that overlaps a neighboring optical sensor's field of view.
- An optical filter may cover one of the optical sensors.
- The optical sensors may form two or more cartridges, with an output of a first cartridge coupled to a first input of a comparator and an output of a second cartridge coupled to a second input of the comparator.
- A digital processor may be coupled to an output of the optical sensors.
- The digital processor may be coupled to an output of the comparator.
- In one embodiment, a wavefront analyzer system has an analog sub-pixel resolution system.
- A wavefront analyzer has an input coupled to an output of the analog sub-pixel resolution system.
- The analog sub-pixel resolution system may have a number of optical sensors.
- The field of view of an optical sensor may overlap a neighboring optical sensor's field of view.
- The sensors may have Gaussian far field sensitivity.
- The Gaussian far field sensitivity may be created by a ball lens in an optical path between the deformable mirror and the optical sensors.
- The sub-pixel resolution system may have a number of contrast enhancement circuits.
- Each of the contrast enhancement circuits is coupled between two of the optical sensors.
- An induced current circuit is coupled to a group of the optical sensors.
- The optical sensors form at least two cartridges, and an output of a first cartridge is compared with an output of a second cartridge.
- FIG. 1 is an electrical schematic diagram of a sub-pixel resolution system in accordance with one embodiment of the invention.
- FIG. 2A is a three dimensional diagram of the far field Gaussian sensitivity of the optical sensors in accordance with one embodiment of the invention.
- FIG. 2B is a two dimensional diagram of the overlapping field of views of optical sensors in accordance with one embodiment of the invention.
- FIG. 3A is a cross sectional view of the optical sensors and associated optics in accordance with one embodiment of the invention.
- FIG. 3B is a cross sectional view of a single ball lens and optical fiber in accordance with one embodiment of the invention.
- FIG. 4A is a dimension diagram of three cartridges of optical sensors and the associated comparator circuitry in accordance with one embodiment of the invention.
- FIG. 4B is a two dimensional diagram of a comparator element called L4. The circle represents the photodetector input field of view of a single cartridge.
- FIG. 4C shows a network of seven cartridges of optical sensors and the associated L4 comparator circuitry in accordance with one embodiment of the invention.
- FIG. 5 is a block diagram of a sub-pixel resolution system and digital processing array in accordance with one embodiment of the invention.
- FIG. 6 is an adaptive optics system in accordance with one embodiment of the invention.
- The present invention describes a sub-pixel resolution system that uses an array of analog optical detectors with overlapping fields of view to obtain sub-pixel resolution.
- The optical detectors are coupled to analog processing circuits that enhance contrast between optical detectors and induce current to detect low-light images. Because the processing is performed using analog circuits and the optical detectors are analog devices, the system is essentially a real-time resolution system. Some applications may require digitizing of outputs and post-processing that may slow down the resolution system, but all the initial detection and processing is essentially real-time.
- For a vision system, objects in an image consist of features of low and high spatial frequency.
- High spatial frequencies down to the diffraction limit (2 times the wavelength of light being imaged) are smaller than the physical size of an individual pixel.
- The invention provides improved tracking of targets with high accuracy and resolution of small features, smaller than the size of the optical detector, the picture element or pixel.
- FIG. 1 is an electrical schematic diagram of a sub-pixel resolution system 10 in accordance with one embodiment of the invention.
- The system 10 has six optical sensors 12, 14, 16, 18, 20, 22 that are modeled as current sources.
- The current sources 12, 14, 16, 18, 20, 22 are coupled to amplifiers 24, 26, 28, 30, 32, 34.
- All the outputs 36, 38, 40, 42, 44, 46 of the amplifiers 24, 26, 28, 30, 32, 34 are coupled to a cartridge resistor 48 and cartridge capacitor 50 that generate a voltage (Vecs). This voltage provides a reference for the activity within the cartridge.
- A programmable variable (K1) multiplied by Vecs controls a current mirror or voltage-dependent current source (diamond symbol with arrow; 76, 78, 80, 82, 84, 86 in FIG. 1) that provides contrast enhancement by pulling current away from the input nodes of amplifiers 24, 26, 28, 30, 32, 34 when K1 has a negative value. That contrast enhancement is based on activity among the contributing sensors 12, 14, 16, 18, 20, 22.
- Conversely, any activity from any of the sensors 12, 14, 16, 18, 20, 22 will be augmented, amplified by the value of K1, and injected into the processing circuitry, essentially multiplying the input from any of the sensors in the cartridge. This action provides enhanced sensitivity of the sensors at low illumination levels.
- The contrast enhancement circuit for the first optical detector 12 has a first current source 52 and a second current source 54.
- The first current source 52 generates a current equivalent to the output current 46 (I6) times a constant K2.
- The second current source 54 generates a current equivalent to the output current 38 (I2) times a constant K2.
- Note that these two current sources 52, 54 are a function of the output currents 38, 46 of the neighboring optical detectors 14, 22.
- By selecting the correct value for K2, we can cause the output current (I1) 36 to be decreased when a current is sensed at the neighboring optical detector 14.
- This increases the difference between the two currents 36, 38, which increases the contrast between the two detectors. This is not a winner-take-all circuit (as in a current shunting or inhibitory circuit, although it could be) but rather a proportional contrast sensor.
- The second optical detector 14 has a first current source 56 and a second current source 58.
- The first current source 56 generates a current equivalent to the output current 36 (I1) times a constant K2.
- The second current source 58 generates a current equivalent to the output current 40 (I3) times a constant K2.
- The third optical detector 16 has a first current source 60 and a second current source 62.
- The first current source 60 generates a current equivalent to the output current 38 (I2) times a constant K2.
- The second current source 62 generates a current equivalent to the output current 42 (I4) times a constant K2.
- The fourth optical detector 18 has a first current source 64 and a second current source 66.
- The first current source 64 generates a current equivalent to the output current 40 (I3) times a constant K2.
- The second current source 66 generates a current equivalent to the output current 44 (I5) times a constant K2.
- The fifth optical detector 20 has a first current source 68 and a second current source 70.
- The first current source 68 generates a current equivalent to the output current 42 (I4) times a constant K2.
- The second current source 70 generates a current equivalent to the output current 46 (I6) times a constant K2.
- The sixth optical detector 22 has a first current source 72 and a second current source 74.
- The first current source 72 generates a current equivalent to the output current 44 (I5) times a constant K2.
- The second current source 74 generates a current equivalent to the output current 36 (I1) times a constant K2.
- Each of the optical detectors 12, 14, 16, 18, 20, 22 also has an induced current circuit 76, 78, 80, 82, 84, 86.
- The induced current circuits 76, 78, 80, 82, 84, 86 are current sources whose output is the product of the constant K1 and the voltage (Vecs) across the cartridge resistor 48.
- By selecting the correct value for K1, the cartridge of optical detectors is able to sense the presence of light that might not be sensed by any of the individual optical detectors 12, 14, 16, 18, 20, 22. Note that the output from each detector is of interest as well as the cartridge output.
- K1 and K2 can be programmed in, or adaptive circuitry can be used to determine their values; these values can be used to extract camouflaged features of objects with low contrast.
- Voltage gain and offset can be applied to the current mirrors or to the operational amplifiers to control the working range and dynamic range of the detectors.
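The K1 and K2 interactions above can be summarized as a signal-flow sketch. This is one settling step of a simplified model, not a circuit simulation; the six-sensor ring, the summed-current stand-in for Vecs, and the sample K values are all assumptions for illustration:

```python
import numpy as np

def cartridge_outputs(sensor_currents, k1=-0.1, k2=-0.2):
    """Sketch of the FIG. 1 cartridge: each output is adjusted by K2 times
    its two ring neighbors (lateral contrast enhancement) and by K1 times
    the shared cartridge signal Vecs (activity-dependent term)."""
    i = np.asarray(sensor_currents, dtype=float)
    vecs = i.sum()          # stand-in for the cartridge reference voltage
    left = np.roll(i, 1)    # neighbors on the six-sensor ring
    right = np.roll(i, -1)
    return i + k2 * (left + right) + k1 * vecs

uniform = cartridge_outputs([1, 1, 1, 1, 1, 1])   # flat background
with_spot = cartridge_outputs([1, 3, 1, 1, 1, 1]) # one brighter sensor
# With negative K2, the sensor under the spot keeps most of its signal
# while its neighbors are pulled down, widening the contrast between them.
```

With these sample constants a uniform background is suppressed toward zero while the bright-sensor-to-neighbor difference grows beyond the raw input difference, which is the proportional (not winner-take-all) contrast behavior the text describes.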
- FIG. 2A is a three-dimensional diagram of the far field sensitivity 100 of three optical sensors in accordance with one embodiment of the invention.
- This three-dimensional graph shows the overlap of the far field sensitivity of three of the seven optical detectors in a cartridge, each having a Gaussian or other nonlinear sensitivity profile.
- The optical detectors have 50% overlap. It has been shown mathematically that there is no spatial resolution limit between two adjacent detectors if the contrast ratio of the object being detected is high enough; however, contrast ratio limitations can restrict the spatial resolution actually achieved.
- The advantage of using a Gaussian or other continuous function is that the position of an object within the detector's field of view can be sensed with higher resolution than either the detector's physical width or the spacing between detectors.
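Why a Gaussian profile yields position below the detector pitch can be seen with just two detectors: the log-ratio of their outputs is linear in source position, so the ratio can be inverted exactly. A sketch (unit pitch, unit Gaussian width, and noiseless outputs are assumptions):

```python
import numpy as np

SIGMA, SPACING = 1.0, 1.0   # assumed Gaussian width and detector pitch

def responses(x):
    """Analog outputs of two detectors at positions 0 and SPACING,
    each with a Gaussian far field sensitivity profile."""
    return (np.exp(-x**2 / (2 * SIGMA**2)),
            np.exp(-(x - SPACING)**2 / (2 * SIGMA**2)))

def locate(r1, r2):
    """Invert the response ratio to recover the source position. Because
    the profiles are smooth and overlapping, the ratio varies continuously
    with position, so resolution is not limited by the detector pitch."""
    return SPACING / 2 - (SIGMA**2 / SPACING) * np.log(r1 / r2)

x_est = locate(*responses(0.37))   # a position well inside one "pixel"
```

In practice noise and the object's contrast ratio set the achievable precision, which matches the contrast-ratio caveat in the text.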
- FIG. 2B is a two-dimensional diagram 102 of the overlapping fields of view of optical sensors in accordance with one embodiment of the invention.
- The circles show where the far field sensitivity is 50% of the peak sensitivity for a cartridge of seven optical detectors in a close-packed hexagonal arrangement.
- A trace 104 represents the path of a point source across the detectors.
- The overlapping arrangement allows for 2^n zero crossings, where n is the number of pixels or detectors; with seven detectors there are 2^7 = 128 zero crossings in this optical detector arrangement. Zero crossings are often used to determine the path of a point source, and the more zero crossings there are, the better the imaging system can determine the path of an object.
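The zero-crossing idea can be illustrated for a single row of detectors. This sketch counts only the crossings of adjacent-pair difference signals as a point source sweeps across; the detector width and sweep range are assumptions, and the 2^n total quoted in the text presumably counts crossings over all detector pairs rather than adjacent pairs only:

```python
import numpy as np

# Seven detector centers along one axis with Gaussian sensitivity.
centers = np.arange(7, dtype=float)
sigma = 0.425  # assumed width: sensitivity ~50% of peak midway between neighbors

xs = np.linspace(-1, 7, 2000)            # a point source sweeping across
resp = np.exp(-(xs[:, None] - centers)**2 / (2 * sigma**2))

# Each adjacent pair contributes a difference signal; its zero crossing
# marks the moment the source passes the midpoint between the pair.
diffs = resp[:, :-1] - resp[:, 1:]
crossings = int((np.diff(np.sign(diffs), axis=0) != 0).sum())
```

Each of the six adjacent pairs produces exactly one crossing on this sweep; every crossing pins the source to a midpoint with precision far better than the detector spacing.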
- FIG. 3A is a cross-sectional view of the optical sensors and associated optics 110 in accordance with one embodiment of the invention.
- Three optical detectors 114, 116, 118 are optically coupled through fiber optic cables 120, 122, 124 to three ball lenses 126, 128, 130.
- The ball lens 126, in combination with the optical fiber 120, inherently produces an overlapping field of view among the optical detectors 114, 116, 118.
- The optical system has an imaging lens system 131 in front of the ball lenses 126, 128, 130.
- The ball lenses 126, 128, 130 result in an essentially Gaussian far field intensity.
- A filter 132 is shown in front of one of the ball lenses 126.
- The optical detectors are phototransistors that have a very broad spectral range. Filters can be used to select particular wavelengths of interest.
- Another method of obtaining a Gaussian-like far field sensitivity pattern is to use relatively small photodiodes, for instance photodiodes smaller than about five microns; the reason is that the edge of a photodiode is more sensitive than its center.
- Conductive traces may be placed over the optical sensors; these have a masking effect that contributes to a Gaussian-like far field sensitivity.
- The ball lenses may be placed directly on the optical sensors.
- The imaging lens may be a facet lens from a compound eye or a regular lens from a camera.
- FIG. 3B is a cross-sectional view of a single ball lens 132 and optical fiber 133 in accordance with one embodiment of the invention.
- The image 134 is focused in front of the ball lens 132.
- If the image is moved up or down, some of the light falls outside the optical fiber 133 and onto an adjacent fiber.
- The overlapping fields of view may also be created by placing thin traces or masks between the optical sensors, which diffuse the light between two adjacent sensors.
- The traces are commonly thinner (in Z-axis or deposition layer thickness) than the diffraction limit of the light being imaged, so they do not impair light transmission but allow diffusion of light into the neighboring photosensor.
- A modified sensor with the bases of the bipolar transistors connected to each other, or an electronic weighting approach, may be used to create a position-dependent output for the sensor.
- The doping of the sensor may be non-uniform, which results in a position-dependent photosensitivity for the sensor.
- The ball lens, coupled sensors, mask or trace (141 of FIG. 3), and non-uniform doping are all image position systems.
- FIGS. 4A and 4B are diagrams of an element (called L4) that connects adjacent cartridges.
- The circle represents the seven photoreceptor inputs of a single cartridge.
- The limbs a, b, and c represent bidirectional inputs and outputs to and from neighboring L4 elements, used to compare inputs from their parent cartridges in order to segment objects in an image.
- The output of each cartridge is compared by its L4 element, which provides local information processing within its own cartridge field of view and adds redundancy to the network of optical sensors 140, 142, 144, 146, 148, 150 and the associated comparator circuitry in accordance with one embodiment of the invention.
- FIG. 4C shows a network of seven L4 elements. Such a network provides a cooperative approach, using comparators to extract coherent information about an object in an image that is larger than a single cartridge.
- The cartridges 140, 142, 144, 146, 148, 150 have seven optical detectors each in a hexagonal close-packed structure.
- The output of each cartridge 140, 142, 144, 146, 148, 150, which is the voltage Vecs shown in FIG. 1, is coupled into the comparators 152, 154, 156.
- The outputs 158, 160, 162 of the comparators 152, 154, 156 are used to share information across the cartridges.
- Each L4 compares its own cartridge input to the outputs of adjacent L4 elements. Information processing is local, within each cartridge. The output of each cartridge is compared to the outputs of each of its 2, 4, 6, or 8 neighbors, depending on the packing arrangement of the photodetectors. There is no leakage of current through resistors, as Langan and others have used. There is no output current coupling between cartridges, as this would eliminate the sub-pixel resolution information contained in each cartridge. Using a comparator, all ideal resistances are high enough that any one output does not alter a neighbor's processing.
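The comparator-only coupling can be sketched as pure sign comparisons. The Vecs values and the neighbor labels below are invented for illustration; the point, as in the passage, is that only comparison results leave a cartridge, never current:

```python
def l4_compare(own_vecs, neighbor_vecs):
    """Sketch of one L4 element: emit +1/0/-1 per neighboring cartridge.
    Only these comparison flags are shared; the cartridge's own analog
    output is never loaded, so its sub-pixel information is preserved."""
    return [(own_vecs > n) - (own_vecs < n) for n in neighbor_vecs]

# Center cartridge of a seven-cartridge hexagonal patch, with a bright
# feature covering the center and two neighbors on one side.
center_vecs = 2.4
ring_vecs = [1.1, 1.0, 2.5, 2.6, 1.2, 1.0]
flags = l4_compare(center_vecs, ring_vecs)
# flags -> [1, 1, -1, -1, 1, 1]: the run of -1 marks the side shared with
# the bright feature, segmenting an object larger than one cartridge.
```

A network of such elements gives the cooperative, parallel segmentation of large objects described for FIG. 4C.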
- FIG. 5 is a block diagram of a sub-pixel resolution system and digital processing array 160 in accordance with one embodiment of the invention.
- The system 160 has an array of analog optical detectors or pixels 162 in a hexagonal close-packed structure.
- The optical sensors 162 have overlapping fields of view as described above.
- Below the optical sensors 162 is the analog readout and processing circuitry 164, such as the circuits shown in FIG. 1 and FIG. 4.
- The readout circuitry 164 is coupled to a processor or processor array 166.
- The processor array converts the analog output signals into information that can be used by a larger system, such as the system in FIG. 6.
- FIG. 6 is an adaptive optics system 180 in accordance with one embodiment of the invention.
- The system 180 receives light from a telescope 182 into a collimating lens 184.
- The collimated light 186 impinges upon a deformable mirror 188.
- The deformable mirror 188 is mounted on a tip/tilt stage 190.
- The light then passes through a pair of lenses 192, 194 and is collimated again.
- A beam splitter 196 transfers part of the light to a mirror 198 and an imaging lens 200, and part of the light to a sub-pixel resolution system 202, such as that described above.
- The output 204 of the sub-pixel resolution system 202 is coupled to a wavefront analyzer and deformable mirror controller 206.
- The analyzer determines the shape of the wavefront, and the controller has an output 208 that directs the deformable mirror 188 to adjust its surface to remove any aberrations, such as those caused by atmospheric conditions.
- Because the sub-pixel resolution system 202 has analog optical detectors and analog front-end processing, the system 180 is able to adjust more quickly to changes in the wavefront. This allows the system to significantly reduce the time necessary to form an image of a faint star, since the wavefront is continuously being updated. For faint stars this can cut the exposure time in half or better, making the telescope system up to twice as productive as present adaptive optics systems.
- The tip/tilt stage removes low-order aberration and the deformable mirror actuators correct the high-order aberration, as in an adaptive optics system using a Shack-Hartmann wavefront sensor.
- Our fly-eye sensor replaces the Shack-Hartmann wavefront sensor and operates in real-time without requiring a CCD to sense the optical signal from different parts of the beam.
- The advantage in this application is that the fly-eye sensor provides much higher resolution and operates in real-time without sampling the photodetector array. In addition to the computational savings of not having to sample and move data to a central processor, no numerical computation is required, as there is when using a CCD array.
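As a sketch of what the downstream wavefront analyzer does with those spot measurements, in the Shack-Hartmann style the fly-eye sensor replaces: each subaperture's spot displacement maps to a local wavefront slope, which the controller then nulls with the mirror. The focal length and the 2-micron shift below are assumed values, not from the patent:

```python
FOCAL_LENGTH_MM = 5.0   # assumed subaperture focal length

def local_slope(dx_mm, dy_mm):
    """Local wavefront tilt (radians, small-angle approximation) implied
    by a spot displacement measured behind one subaperture."""
    return dx_mm / FOCAL_LENGTH_MM, dy_mm / FOCAL_LENGTH_MM

# A 2-micron spot shift -- far smaller than a typical pixel, but
# resolvable by the overlapping-detector scheme above -- implies a
# 0.4 mrad local tilt for the deformable mirror controller to remove.
sx, sy = local_slope(0.002, 0.0)
```

Because the fly-eye sensor delivers these displacements continuously in analog form, the slope update is limited by the control loop rather than by a CCD readout cycle, which is the speed advantage the text claims.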
Description
- None.
- However, magnification produces the complementary problem: expanding the image changes the contrast ratio and also narrows the entire field of view of the pixel array by the optical magnification factor. The significance of blurring or magnifying a feature is that the light flux (photons per square centimeter per second, the very quantity being detected) is drastically reduced: doubling the linear image size cuts the flux per unit area to one quarter, tripling it to one ninth, and so on, compromising the ability of the detector array to resolve the change in luminance. As Bucklew and Saleh showed in 1985, resolution is a matter of contrast sensitivity; degrading the contrast sensitivity therefore compromises system detection ability.
- Another problem is that most image devices accumulate charge and then digitize the magnitude of the stored charge. The charge is periodically sampled and then drained to start a new charge accumulation period, a process inherently limited to a relatively slow update rate compared to the speed of analog electronic signals. This accumulate, digitize and drain cycle is thus a rate-limiting process; the readout of a sensor array and transfer to a computer's memory space constitutes a large delay before any subsequent processing. Furthermore, if an image moves across multiple pixels during an update cycle, it is hard to distinguish this from a large image or to determine the track of the image. There have been attempts to perform the accumulate, digitize and drain process only for the pixels near the image of interest in order to speed up these systems; unfortunately, this remains relatively slow and blinds the other parts of the imaging system.
- Other approaches to sub-pixel resolution involve first identifying the edge of an object and then calculating the position of the center of mass of that object at subsequent time points. Alternatively, wavelet encoding of an edge from the partial values in adjacent pixels can be used to infer the position of the edge of an object; recomputing that position at subsequent time points can then locate a moving edge with higher resolution than the pixel spacing. However, both approaches require computation and, therefore, no longer operate in real-time.
- Thus there exists a need for a sub-pixel resolution system that operates without blurring or magnifying the image and has a much faster update rate than present imaging systems.
- The advantage of such a circuit is that local processing increases speed using a parallel approach, while a cooperative process between cartridges helps detect features that are larger than the field of view of a single cartridge. All this processing is accomplished without first requiring a global sampling process and moving that data to a memory space for a central processor to manipulate.
- In one embodiment, a wavefront analyzer system has a sub-pixel analog resolution system. A wavefront analyzer has an input coupled to an output of the analog sub-pixel resolution system. The analog sub-pixel resolution system may have a number of optical sensors. The field of view an optical sensor may overlap a neighboring optical sensors field of view. The sensors may have Gaussian far field sensitivity. The Gaussian far field sensitivity may be created by a ball lens in an optical path between the deformable mirror and the optical sensors. The sub-pixel resolution system may have a number of contrast enhancement circuits. Each of the contrast enhancement circuits is coupled between two the optical sensors. An induced current circuit is coupled to a group of the optical sensors. The optical sensors form at least two cartridges and an output of a first cartridge is compared with an output of a second cartridge.
-
FIG. 1 is an electrical schematic diagram of a sub-pixel resolution system in accordance with one embodiment of the invention; -
FIG. 2A is a three dimensional diagram of the far field Gaussian sensitivity of the optical sensors in accordance with one embodiment of the invention; -
FIG. 2B is a two dimensional diagram of the overlapping field of views of optical sensors in accordance with one embodiment of the invention; -
FIG. 3A is a cross sectional view of the optical sensors and associated optics in accordance with one embodiment of the invention; -
FIG. 3B is a cross sectional view of a single ball lens and optical fiber in accordance with one embodiment of the invention; -
FIG. 4A is a dimension diagram of three cartridges of optical sensors and the associated comparator circuitry in accordance with one embodiment of the invention; -
FIG. 4B is a two dimensional diagram of a comparator element called L4. The circle represents the photodetector input field of view of a single cartridge. The limbs labeled a, b, and c or bidirectional inputs and outputs to neighboring L4 elements, used to compare inputs from their parent cartridges in order to segment objects in an image. -
FIG. 4C shows a network of seven cartridges of optical sensors and the associated L4 comparator circuitry in accordance with one embodiment of the invention; -
FIG. 5 is a block diagram of a sub-pixel resolution system and digital processing array in accordance with one embodiment of the invention; and -
FIG. 6 is an adaptive optics system in accordance with one embodiment of the invention. - The present invention describes a sub-pixel resolution system that uses an array of analog optical detectors with overlapping fields of view to obtain sub-pixel resolution. The optical detectors are coupled to analog processing circuits that enhance contrast between optical detectors and induce current to detect low light images. Because the processing is performed using analog circuits and the optical detectors are analog circuits, the system is essentially a real time resolution system. Some applications may require digitizing of outputs and post processing that may slow down the resolution system, but all the initial detection and processing is essentially real time.
- For a vision system, objects in an image consist of features of low and high spatial frequency. High spatial frequencies down to the diffraction limit (2 times the wavelength of light being imaged) are smaller than the physical size of an individual pixel. The invention provides improved tracking of targets with high accuracy and resolves features smaller than the size of the optical detector, the picture element or pixel.
-
FIG. 1 is an electrical schematic diagram of a sub-pixel resolution system 10 in accordance with one embodiment of the invention. The system 10 has six optical sensors, each coupled through a current source to an amplifier. The outputs 36, 38, 40, 42, 44, 46 of the amplifiers are coupled to a cartridge resistor 48 and cartridge capacitor 50 that generate a voltage (Vecs). This voltage provides a reference for the activity within the cartridge. A programmable variable (K1) multiplied by Vecs controls a current mirror or voltage dependent current source (diamond symbol with arrow, 76, 78, 80, 82, 84, 86 in FIG. 1) that provides contrast enhancement by pulling current away from the input nodes of the amplifiers. The first optical detector 12 has a first current source 52 and a second current source 54. The first current source 52 generates a current that is equivalent to the output current 46 (I6) times a constant K2. The second current source 54 generates a current that is equivalent to the output current 38 (I2) times a constant K2. Note that these two current sources are driven by the output currents 38, 46 of the two neighboring optical detectors. By selecting the correct value for K2, the output current (I1) 36 is decreased when a current is sensed at the neighboring optical detector 14. This increases the difference between the two currents 36, 38, which increases the contrast between the two detectors. This is not a winner-take-all circuit (as in a current shunting or inhibitory circuit, although it could be) but rather a proportional contrast sensor. - The second
optical detector 14 has a first current source 56 and a second current source 58. The first current source 56 generates a current that is equivalent to the output current 36 (I1) times a constant K2. The second current source 58 generates a current that is equivalent to the output current 40 (I3) times a constant K2. The third optical detector 16 has a first current source 60 and a second current source 62. The first current source 60 generates a current that is equivalent to the output current 38 (I2) times a constant K2. The second current source 62 generates a current that is equivalent to the output current 42 (I4) times a constant K2. The fourth optical detector 18 has a first current source 64 and a second current source 66. The first current source 64 generates a current that is equivalent to the output current 40 (I3) times a constant K2. The second current source 66 generates a current that is equivalent to the output current 44 (I5) times a constant K2. The fifth optical detector 20 has a first current source 68 and a second current source 70. The first current source 68 generates a current that is equivalent to the output current 42 (I4) times a constant K2. The second current source 70 generates a current that is equivalent to the output current 46 (I6) times a constant K2. The sixth optical detector 22 has a first current source 72 and a second current source 74. The first current source 72 generates a current that is equivalent to the output current 44 (I5) times a constant K2. The second current source 74 generates a current that is equivalent to the output current 36 (I1) times a constant K2. - Each of the
optical detectors is also coupled to a voltage dependent current circuit. These current circuits are controlled by K1 times the voltage (Vecs) across the cartridge resistor 48. By setting the value of K1 correctly, the cartridge of optical detectors is able to sense the presence of light that might not be sensed by any of the individual optical detectors. - Having two sources of contrast enhancement, based on either overall activity within the cartridge or activity in the nearest neighbors, allows different levels of contrast enhancement and helps in subsequent post-processing used to identify features detected within a given cartridge and to share that information with neighboring cartridges. The values of K1 and K2 can be programmed in, or adaptive circuitry can be used to determine them and to extract camouflaged features of objects with low contrast. Voltage gain and offset can be applied to the current mirrors or to the operational amplifiers to control the working range and dynamic range of the detectors. These variables (K1, K2, offset, and gain) can be controlled by adaptive circuitry that allows well-camouflaged objects to be extracted from the background.
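The proportional contrast scheme described above can be illustrated numerically. The sketch below is not the patent's analog circuit; it is a hypothetical discrete model in which each of six detectors in a ring loses K2 times each neighbor's output and K1 times a cartridge-wide activity reference (standing in for Vecs). All variable names and the choice of modeling Vecs as the mean activity are illustrative assumptions.

```python
import numpy as np

def contrast_enhance(I, K1=0.3, K2=0.2):
    """Proportional contrast enhancement for one cartridge of six detectors.

    I: raw photocurrents, arranged in a ring so each detector's neighbors
    are the adjacent array entries (wrapping around, as I1..I6 in FIG. 1).
    K2 scales the current pulled away by each nearest neighbor; K1 scales
    the current pulled away in proportion to cartridge-wide activity.
    """
    I = np.asarray(I, dtype=float)
    left = np.roll(I, 1)    # neighbor on one side (ring topology)
    right = np.roll(I, -1)  # neighbor on the other side
    vecs = I.sum()          # cartridge activity reference (Vecs stand-in)
    out = I - K2 * (left + right) - K1 * vecs / len(I)
    return np.clip(out, 0.0, None)  # photocurrents cannot go negative

raw = [1.0, 1.2, 1.0, 1.0, 1.0, 1.0]   # detector 2 sees a slightly brighter spot
enhanced = contrast_enhance(raw)
# the difference between detector 2 and its neighbors grows after enhancement
```

The key property, matching the text, is that the output is reduced in proportion to neighbor activity rather than shunted winner-take-all, so the gap between a bright detector and its neighbors widens while both stay active.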
-
FIG. 2A is a three dimensional diagram of the far field sensitivity 100 of three optical sensors in accordance with one embodiment of the invention. This three dimensional graph shows the overlap of the far field sensitivities of three of the seven optical detectors in a cartridge, each having a Gaussian or other nonlinear sensitivity profile. The optical detectors have 50% overlap. It has been shown mathematically that there is no spatial resolution limit between two adjacent detectors if the contrast ratio of the object being detected is high enough. However, there are contrast ratio limitations that can limit the achievable spatial resolution. The advantage of using a Gaussian or other continuous function is that the position of an object within the detector's field of view can be sensed with higher resolution than either the detector's physical width or the spacing between detectors. Overlapping the optical sensors' fields of view allows for sub-pixel resolution. FIG. 2B is a two dimensional diagram 102 of the overlapping fields of view of optical sensors in accordance with one embodiment of the invention. The circles show where the far field sensitivity is 50% of the peak sensitivity for a cartridge of seven optical detectors in a close packed hexagonal arrangement. A trace 104 represents the path of a point source across the detectors. The overlapping arrangement allows for 2^n zero crossings, where n is the number of pixels or detectors; so in this case there are 128 zero crossings for the seven-detector arrangement. Zero crossings are often used to determine the path of a point source. The more zero crossings, the better the imaging system is able to determine the path of an object. Standard CCDs (Charge Coupled Devices) do not have pixels with overlapping fields of view. As a result, the number of zero crossings is n+1. 
As a result, the overlapping fields of view significantly improve the performance of the present sub-pixel resolution system over previous resolution systems. -
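To make the sub-pixel claim concrete, the sketch below shows why two overlapping Gaussian sensitivity profiles localize a point source more finely than the detector pitch: for equal-width Gaussians, the log of the response ratio is linear in the source position and can be inverted exactly. This is an illustrative model, not the patent's circuit; the width SIGMA and the detector centers are assumed values.

```python
import numpy as np

SIGMA = 0.6        # width of each Gaussian sensitivity profile (assumed)
C1, C2 = 0.0, 1.0  # detector centers, one pixel pitch apart (assumed)

def responses(x):
    """Responses of two detectors with overlapping Gaussian sensitivity."""
    r1 = np.exp(-(x - C1) ** 2 / (2 * SIGMA ** 2))
    r2 = np.exp(-(x - C2) ** 2 / (2 * SIGMA ** 2))
    return r1, r2

def estimate_position(r1, r2):
    # For equal-width Gaussians, log(r2/r1) = (C2-C1)(x - midpoint)/sigma^2,
    # so the source position follows directly from the response ratio.
    return 0.5 * (C1 + C2) + SIGMA ** 2 * np.log(r2 / r1) / (C2 - C1)

x_true = 0.37                               # source well inside one "pixel"
x_est = estimate_position(*responses(x_true))
```

In this noiseless model the recovery is exact for any position between the two centers; in practice, as the text notes, contrast ratio and noise set the achievable resolution.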
FIG. 3A is a cross sectional view of the optical sensors and associated optics 110 in accordance with one embodiment of the invention. Three optical detectors are coupled by optical cables to ball lenses. The Gaussian far field sensitivity of each ball lens, coupled to an optical fiber 120, inherently produces an overlapping field of view among the optical detectors. An imaging lens system 131 is placed in front of the ball lenses, and a filter 132 is shown in front of one of the ball lenses 126. In one embodiment, the optical detectors are photo-transistors that have a very broad spectral range. Filters can be used to select for particular wavelengths of interest. Note that another method of obtaining a Gaussian like far field sensitivity pattern is to use relatively small photodiodes, for instance photodiodes that are less than about five microns across. The reason for this is that the edge of the photodiode is more sensitive than the center. In addition, conductive traces may be placed over the optical sensors; these have a masking effect that contributes to a Gaussian like far field sensitivity. In one embodiment, the ball lenses are placed on the optical sensors. The imaging lens may be a facet lens from a compound eye or a regular lens from a camera. -
FIG. 3B is a cross sectional view of a single ball lens 132 and optical fiber 133 in accordance with one embodiment of the invention. The image 134 is focused in front of the ball lens 132. When the image is moved up or down, some of the light falls outside the optical fiber 133 and onto an adjacent fiber. - The overlapping fields of view may also be created by placing thin traces or masks between the optical sensors, which diffuse the light between the two adjacent sensors. The traces are commonly thinner (Z-axis or deposition layer thickness) than the diffraction limit of the light being imaged, so they do not impair light transmission but allow diffusion of light into the neighboring photosensor. Alternatively, a modified sensor with the bases of the bipolar transistors connected to each other, or an electronic weighting approach, may be used to create a position dependent output for the sensor. In another embodiment, the doping of the sensor may be non-uniform, which results in a position dependent photosensitivity for the sensor. The ball lens, coupled sensors, mask or trace (141 of
FIG. 3) and non-uniform doping are all image position systems. -
FIGS. 4B and 4C are diagrams of an element (called L4) that connects adjacent cartridges. The circle represents the seven photoreceptor inputs of a single cartridge. The limbs a, b, and c represent inputs and outputs to and from neighboring L4 elements. The output of each cartridge is compared by L4, providing local information processing within its own cartridge field of view and providing redundancy to the network of optical sensors. FIG. 4C shows a network of seven L4 elements. Such a network provides a cooperative approach using comparators to extract coherent information about an object in an image that is larger than a single cartridge. Thus, a local process is used to isolate or segment an object with arbitrary geometry from the background in an image. In one embodiment of the invention, the output of each cartridge, such as the circuit shown in FIG. 1, is coupled into the comparators, and the outputs of the comparators are exchanged with neighboring L4 elements. -
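The cooperative segmentation the L4 network performs can be sketched as a graph algorithm: each cartridge compares its activity with its neighbors', and cartridges whose activities agree within a tolerance are grouped into the same object. This is a hypothetical software analogue of the analog comparator network, not the patent's circuit; the hexagonal adjacency, activity values, and tolerance below are illustrative assumptions.

```python
def segment(activities, neighbors, tol=0.2):
    """Flood-fill over the cartridge graph, mimicking L4 comparisons.

    activities: dict cartridge_id -> scalar activity level
    neighbors:  dict cartridge_id -> list of adjacent cartridge ids
    Returns dict cartridge_id -> label; connected cartridges with
    similar activity share a label (one label per segmented object).
    """
    labels, next_label = {}, 0
    for start in activities:
        if start in labels:
            continue
        labels[start] = next_label
        stack = [start]
        while stack:
            c = stack.pop()
            for n in neighbors[c]:
                # comparator step: neighbors agreeing within tol join the group
                if n not in labels and abs(activities[n] - activities[c]) <= tol:
                    labels[n] = labels[c]
                    stack.append(n)
        next_label += 1
    return labels

# Seven cartridges in a hexagonal patch: center 0, ring 1..6.
# A bright object covers cartridges 0-2; the rest see background.
acts = {0: 1.0, 1: 0.9, 2: 1.1, 3: 0.1, 4: 0.0, 5: 0.1, 6: 0.0}
nbrs = {0: [1, 2, 3, 4, 5, 6], 1: [0, 2, 6], 2: [0, 1, 3],
        3: [0, 2, 4], 4: [0, 3, 5], 5: [0, 4, 6], 6: [0, 5, 1]}
seg = segment(acts, nbrs)
```

As in the text, the decision is purely local (each comparison involves only adjacent elements), yet the labels that emerge delimit an object larger than any single cartridge.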
FIG. 5 is a block diagram of a sub-pixel resolution system and digital processing array 160 in accordance with one embodiment of the invention. The system 160 has an array of analog optical detectors or pixels 162 in a hexagonal close packed structure. The optical sensors 162 have overlapping fields of view as described above. Below the optical sensors 162 is the analog readout and processing circuit 164, such as the circuits shown in FIG. 1 and FIG. 4. The readout circuitry 164 is coupled to a processor or processor array 166. The processor array converts the analog output signals into information that can be used by a larger system such as the system in FIG. 6. -
FIG. 6 is an adaptive optics system 180 in accordance with one embodiment of the invention. The system 180 receives light from a telescope 182 into a collimating lens 184. The collimated light 186 impinges upon a deformable mirror 188. The deformable mirror 188 is mounted on a tip/tilt stage 190. The light then passes through a pair of lenses. A beam splitter 196 transfers part of the light to a mirror 198 and an imaging lens 200, and part of the light to a sub-pixel resolution system 202, such as that described above. The output 204 of the sub-pixel resolution system 202 is coupled to a wavefront analyzer and deformable mirror controller 206. The analyzer determines the shape of the wavefront, and the controller has an output 208 that directs the deformable mirror to adjust its surface to remove any aberrations, such as those caused by atmospheric conditions. Since the sub-pixel resolution system 202 has analog optical detectors and analog front end processing, the system 180 is able to adjust more quickly for changes in the wavefront. This significantly reduces the time necessary to form an image of a faint star, since the wavefront is continuously being updated. For faint stars this can reduce the exposure time to half or less, making the telescope system twice as productive as present adaptive optics systems. - The tip-tilt stage removes low order aberrations and the deformable mirror actuators correct the high order aberrations, as in an adaptive optics system using a Shack-Hartmann wavefront sensor. Our fly-eye sensor replaces the Shack-Hartmann wavefront sensor and operates in real-time without requiring a CCD to sense the optical signal from different parts of the beam. The advantage in this application is that the fly-eye sensor provides much higher resolution and operates in real-time without sampling the photodetector array. 
In addition to the computational savings of not having to sample and move data to a central processor, no numerical computation is required, as there is when using a CCD array.
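The closed-loop behavior described for system 180 can be sketched as a simple integrator control law: at each iteration the analyzer measures the residual aberration and the controller moves the mirror a fraction (the loop gain) of the way toward cancelling it. This is a minimal scalar model for illustration, not the patent's controller 206; the gain, step count, and single-mode aberration are assumed values.

```python
def correct_wavefront(aberration, gain=0.5, steps=20):
    """Integrator loop: mirror command chases the measured aberration.

    aberration: magnitude of a single (static) wavefront error mode
    gain:       fraction of the measured residual applied per iteration
    Returns the residual wavefront error after the given number of steps.
    """
    mirror = 0.0                        # mirror surface command
    for _ in range(steps):
        residual = aberration - mirror  # what the wavefront sensor measures
        mirror += gain * residual       # integrator update toward cancellation
    return aberration - mirror          # final residual error

res = correct_wavefront(aberration=1.0)
# the residual shrinks geometrically, by (1 - gain) per iteration
```

The speed advantage claimed for the analog fly-eye sensor corresponds, in this model, to running more iterations per unit time: a faster sensor permits a given residual to be reached sooner, or a changing aberration to be tracked more closely.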
- Thus there has been described a high resolution system that has sub-pixel resolution without blurring the image or magnifying the image and has a much faster update rate than present resolution systems. Note that while the description has focused on detecting electromagnetic energy, the system may be used for sound energy, radio waves, infrared waves, particles or other types of energy.
- While the invention has been described in conjunction with specific embodiments thereof, it is evident that many alterations, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alterations, modifications, and variations in the appended claims.
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/051,850 US20060175530A1 (en) | 2005-02-04 | 2005-02-04 | Sub-pixel resolution and wavefront analyzer system |
PCT/US2006/003704 WO2006084049A2 (en) | 2005-02-04 | 2006-02-02 | Sub-pixel resolution and wavefront analyzer system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060175530A1 true US20060175530A1 (en) | 2006-08-10 |
Family
ID=36777916
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/051,850 Abandoned US20060175530A1 (en) | 2005-02-04 | 2005-02-04 | Sub-pixel resolution and wavefront analyzer system |
Country Status (2)
Country | Link |
---|---|
US (1) | US20060175530A1 (en) |
WO (1) | WO2006084049A2 (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4286760A (en) * | 1978-03-14 | 1981-09-01 | Thomson-Csf | Photoelectric direction finder |
US5206499A (en) * | 1989-06-22 | 1993-04-27 | Northrop Corporation | Strapdown stellar sensor and holographic multiple field of view telescope therefor |
US5214492A (en) * | 1991-08-02 | 1993-05-25 | Optical Specialties, Inc. | Apparatus for producing an accurately aligned aperture of selectable diameter |
US5220398A (en) * | 1990-09-28 | 1993-06-15 | Massachusetts Institute Of Technology | Analog VLSI microchip for object position and orientation |
US5847398A (en) * | 1997-07-17 | 1998-12-08 | Imarad Imaging Systems Ltd. | Gamma-ray imaging with sub-pixel resolution |
US5909967A (en) * | 1997-11-12 | 1999-06-08 | Lg Electronics Inc. | Bearing engagement structure for hermetic compressor |
US6765195B1 (en) * | 2001-05-22 | 2004-07-20 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Method and apparatus for two-dimensional absolute optical encoding |
US6781694B2 (en) * | 2002-07-16 | 2004-08-24 | Mitutoyo Corporation | Two-dimensional scale structures and method usable in an absolute position transducer |
US20040193670A1 (en) * | 2001-05-21 | 2004-09-30 | Langan John D. | Spatio-temporal filter and method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4271355A (en) * | 1979-08-30 | 1981-06-02 | United Technologies Corporation | Method for mitigating 2πN ambiguity in an adaptive optics control system |
-
2005
- 2005-02-04 US US11/051,850 patent/US20060175530A1/en not_active Abandoned
-
2006
- 2006-02-02 WO PCT/US2006/003704 patent/WO2006084049A2/en active Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008131027A1 (en) * | 2007-04-20 | 2008-10-30 | Samsung Electronics Co., Ltd. | Subpixel rendering area resample functions for display devices |
US20100045695A1 (en) * | 2007-04-20 | 2010-02-25 | Candice Hellen Brown Elliott | Subpixel rendering area resample functions for display device |
US8508548B2 (en) | 2007-04-20 | 2013-08-13 | Samsung Display Co., Ltd. | Subpixel rendering area resample functions for display device |
US10247811B2 (en) * | 2014-10-16 | 2019-04-02 | Harris Corporation | Modulation of input to Geiger mode avalanche photodiode LIDAR using digital micromirror devices |
US20180176739A1 (en) * | 2016-12-15 | 2018-06-21 | Wisconsin Alumni Research Foundation | Navigation System Tracking High-Efficiency Indoor Lighting Fixtures |
US10251027B2 (en) * | 2016-12-15 | 2019-04-02 | Wisconsin Alumni Research Foundation | Navigation system tracking high-efficiency indoor lighting fixtures |
Also Published As
Publication number | Publication date |
---|---|
WO2006084049A3 (en) | 2009-09-11 |
WO2006084049A2 (en) | 2006-08-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: LAMINA SYSTEMS INC., COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:THELEN, DONALD C., JR.;WILCOX, MICHAEL J.;SIGNING DATES FROM 20111025 TO 20120104;REEL/FRAME:028378/0547 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: LAMINA SYSTEMS INC., COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THELEN, DONALD C., JR.;REEL/FRAME:034579/0834 Effective date: 20120104 Owner name: LAMINA SYSTEMS INC., COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WILCOX, MICHAEL J.;REEL/FRAME:034579/0780 Effective date: 20111025 |