WO2008047157A1 - Dual waveband single axis camera for machine vision - Google Patents


Info

Publication number
WO2008047157A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
gathering means
image gathering
imaging system
waveband
Prior art date
Application number
PCT/GB2007/050607
Other languages
French (fr)
Inventor
Tom Heseltine
Original Assignee
Aurora Computer Systems Limited
Priority date
Filing date
Publication date
Application filed by Aurora Computer Systems Limited filed Critical Aurora Computer Systems Limited
Publication of WO2008047157A1 publication Critical patent/WO2008047157A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/41 Extracting pixel data from a plurality of image sensors simultaneously picking up an image, e.g. for increasing the field of view by combining the outputs of a plurality of sensors

Definitions

  • FIGURE 1 shows a prior art camera system
  • FIGURE 2 shows an image gathered by a first camera of the Figure 1 system
  • FIGURE 3 shows an image gathered by a second camera of the Figure 1 system
  • FIGURE 4 shows a modification of the Figure 1 system
  • FIGURE 5 illustrates how displacement errors occur in the Figure 1 system
  • FIGURE 6 shows a first embodiment of the present invention
  • FIGURE 7 shows a second embodiment of the present invention
  • FIGURE 8 shows a third embodiment of the present invention
  • FIGURE 9 illustrates how occlusion problems may occur
  • FIGURE 10 shows a fourth embodiment of the present invention.
  • FIGURE 11 shows a fifth embodiment of the present invention.
  • Figure 6 shows a first embodiment in which the two cameras 1, 2 are mounted at right angles to each other, and a beam splitter 5 is provided so that the cameras 1, 2 gather coincident images of the object 3 despite not being spatially co-located.
  • camera 1 is a normal colour camera
  • camera 2 is a near-IR monochrome camera
  • the beam splitter 5 splits light across all wavelengths. Due to the beam splitter 5, the image gathered by camera 2 may be a mirror reflection of the image gathered by camera 1, but this can be corrected in hardware or software, resulting in the two images being substantially identical and coincident.
  • Figure 7 shows a second embodiment in which the beam splitter 5 is adapted to separate visible from near-IR light, passing visible light to camera 1 and near-IR light to camera 2.
  • the near-IR pattern image is removed from camera 1
  • the visible light image is removed from camera 2. This is done with very low loss of light to the cameras 1, 2, as a very high proportion of both the visible and the near-IR light is passed to the appropriate camera.
  • each camera 1, 2 is provided with its own lens system (not shown).
  • a single lens system 6 is provided, and the beam splitter 5 is located behind the lens and arranged to split light between two CCDs 7, 8. This embodiment allows easy control of the image magnification, since a zoom lens may be employed, as well as auto-iris, auto-focus etc.
  • in Figure 9, a flat object 9 is placed in front of a larger flat object 10.
  • the distance between the two objects 9, 10 is such that the vertical displacement of the line patterns A to G on the nearest object 9 is one line with respect to the far object 10.
  • line-tracing software may assume that line D, for example, is a continuous line across the image. In fact line D becomes displaced and becomes line E on the nearest object 9. Line tracing will therefore assume the image is a single flat object, not a flat object 9 in front of a second flat object 10.
  • Techniques to resolve this problem have been developed in such patent applications as GB2410794, the full disclosure of which is hereby incorporated into the present application by reference. Alternatively, many systems of encoding the lines have been developed.
  • Figure 10 shows an embodiment with multiple beam splitters 5, 5', 5'', ..., and multiple cameras C1 to Cn.
  • the beam splitters are each configured to allow passage of light only above or below a predetermined wavelength, effectively acting as high or low pass filters for light of progressively decreasing or increasing wavelength.
  • the cameras may be configured to be sensitive only to the appropriate wavelengths of light. This allows recovery of patterns, projected in the corresponding wavelengths in, say, the IR spectrum, together with a visible image.
  • Figure 11 shows a generic form in which a projector or projectors P1 to Pn project light with pattern structures in the bands L1, L2, ..., Ln+1.
  • the projected light reflected off the subject or object 3 is optionally band-pass filtered to remove light outside the wavelengths of interest.
  • each wavelength of interest is separated from the combined reflected light such that each camera receives light in its respective band L1, L2, ..., Ln+1.
  • beam splitters BS1 to BSn operate as low-pass filters, reflecting light above the cut-off wavelength of each band of interest.
  • the cascaded arrangement results in band-pass filtered light of the specific band of interest reaching each camera.
  • each projector may project a pattern of a specific wavelength corresponding to the response of a specific camera. This allows a distribution of projectors to remove the effect of the occlusions whilst avoiding the problem of time-multiplexed systems where motion of the subject creates distortion of the final image.

Abstract

There is disclosed an imaging system comprising at least first and second image gathering means angularly disposed relative to each other and provided with beam splitter means such that both image gathering means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object. The first image gathering means is configured to gather image information in a first waveband, and the second image gathering means is configured to gather image information in a second waveband different to the first waveband. There is further provided means for projecting a predetermined pattern onto the object which pattern is detected by the second image gathering means and not by the first image gathering means. Embodiments of the present invention are particularly useful in human face recognition applications, especially in relation to identity verification where fast throughput is required, for example at airport check-in desks.

Description

DUAL WAVEBAND SINGLE AXIS CAMERA FOR MACHINE VISION
BACKGROUND
Significant effort and interest has been applied to automatic recognition of objects for a wide range of applications. The present application relates to the use of human face recognition as an example of recognition systems, but the techniques detailed within the application have application in a wide range of machine vision applications.
Recognition systems mainly but not exclusively work based on 2-dimensional images or 3-dimensional images. 3-dimensional images are built by techniques such as stereoscopic vision or laser line scanning of a rotating object. An alternative is the use of 2-dimensional images with the addition of depth information. An academic paper which explains these techniques in detail is 'Capturing 2½-D Depth and Texture of Time-Varying Scenes Using Structured Infrared Light', Christian Frueh and Avideh Zakhor, Department of Computer Science and Electrical Engineering, University of California, Berkeley. There is also known patent application GB 2410794 A, 'Apparatus and methods for three dimensional scanning', Marcos A Rodrigues, Alan Robinson and Lyuba Alboul, Sheffield Hallam University. This technique produces two 2-d images, a normal image from a standard camera system operating in the visible spectrum, and an image with the projected line structure superimposed on it. The normal image may be monochrome or colour but in face recognition systems is usually colour. Systems for product or object recognition may use monochrome where colour information is less important. A line projection system is used to project a line structure onto the target object. The projector is placed on a different axis to the camera such that distortion in the pattern as seen by the camera provides depth information (Fig 1). This is explained in depth by GB 2410794. The depth information then provides a framework on which the 2-d image may be rendered, providing a limited 3-d model of the subject. The main limitation of this modelling method is that the model contains only information available from the 2-d camera, and no data on the obscured parts of the subject is available from a single picture. Multiple pictures taken of the object rotated in the viewing region may be used to build a more detailed 3-d model.
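The depth-from-distortion principle above can be sketched with a toy triangulation model. This is an illustrative simplification, not the method of GB 2410794; the function name, focal length, baseline and offset values are assumptions chosen only to show the arithmetic.

```python
def depth_from_line_offset(focal_length_mm, baseline_mm, offset_on_sensor_mm):
    """Toy parallel-axis triangulation: a line projected from an off-axis
    projector lands on the camera sensor with a lateral offset that is
    proportional to the projector-camera baseline and inversely
    proportional to the depth of the surface it strikes."""
    return focal_length_mm * baseline_mm / offset_on_sensor_mm

# Example (assumed values): 8 mm lens, 100 mm projector-camera baseline,
# line observed 0.8 mm away from its reference position on the sensor.
depth = depth_from_line_offset(8.0, 100.0, 0.8)
print(depth)  # 1000.0, i.e. the surface is roughly 1 metre away
```

Scanning the offset of every projected line across the image in this way yields the depth framework onto which the 2-d image is rendered.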
The system described in 'Capturing 2½-D Depth and Texture of Time-Varying Scenes Using Structured Infrared Light' shows how an infrared projection system may be used, with a second camera being sensitive to infrared. The visible image may then be used without a visible projected structure being present. In facial recognition systems, the visible image may then be used for 2-d recognition and operator monitoring. The two images from the two cameras do, however, have distortion problems that become significant as the distance of the object from the camera changes. The present application describes embodiments of an invention that removes this image distortion such that the pattern-generated framework remains essentially spatially identical to the normal visible image, irrespective of the distance of the object from the cameras.
Fig 1 shows a typical configuration where two cameras (1, 2) are mounted close to each other and a pattern projector (3), or laser pattern generator (3), is placed off axis. It can clearly be seen (Figures 2, 3) that differences will exist within the images from the two cameras, simply because of the different camera positions. This can be improved at a fixed distance by rotating the cameras as in Figure 4.
This configuration gives a better result at a fixed distance, however the object is viewed from two different locations and results in some features observed by camera (1 ) not being seen by camera (2). In addition, image alignment only occurs at a fixed distance. If the object moves towards or away from the camera system the two images will separate out.
Figure 5 shows that a displacement error of ±12.5mm occurs when the object moves ±250mm about a distance of 1 metre from two cameras that are separated by 50mm. This means that when an object such as a face is at a 1 metre distance from the cameras, the visible colour image can be overlaid onto the IR pattern derived depth data with a high degree of accuracy. This is particularly important when considering the effect of rendering key features such as the eyes onto the depth data.
If the object moves forwards or backwards by 250mm, the two images separate out by 12.5mm. In facial terms this is the equivalent of moving the eye and placing it on the bridge of the nose. This causes significant problems in achieving accurate rendering of the colour image onto the depth data. Software algorithms may be employed to manipulate the two images to correct for this error, but this requires knowing the distance of the object to the camera in order to work out the correction. In machine vision applications this may become more of a problem if the object has large depth variability with respect to distance from the camera. In these instances, significant errors in both directions may occur on the same image.
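The Figure 5 numbers follow from a small-angle approximation: the two converged views drift apart in proportion to the camera baseline and the fractional depth change. A minimal check of that arithmetic (the formula is our reading of the geometry, not stated explicitly in the application):

```python
def image_separation_mm(baseline_mm, depth_change_mm, nominal_distance_mm):
    """Approximate lateral separation between the two camera images when
    the object moves depth_change_mm away from the distance at which the
    converged cameras were aligned (small-angle approximation)."""
    return baseline_mm * depth_change_mm / nominal_distance_mm

# 50 mm baseline, object moves 250 mm about a 1 metre nominal distance.
print(image_separation_mm(50, 250, 1000))  # 12.5, matching Figure 5
```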
BRIEF SUMMARY OF THE DISCLOSURE
According to a first aspect of the present invention, there is provided an imaging system comprising at least first and second image gathering means angularly disposed relative to each other and provided with beam splitter means such that both image gathering means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object; wherein the first image gathering means is configured to gather image information in a first waveband, wherein the second image gathering means is configured to gather image information in a second waveband different to the first waveband, and wherein there is further provided means for projecting a predetermined pattern onto the object which pattern is detected by the second image gathering means and not by the first image gathering means.
According to a second aspect of the present invention, there is provided a method of generating image data, wherein at least first and second image gathering means angularly disposed relative to each other and provided with beam splitter means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object; wherein the first image gathering means gathers image information in a first waveband, and the second image gathering means gathers image information in a second waveband different to the first waveband, and wherein a predetermined pattern is projected onto the object which pattern is detected by the second image gathering means and not by the first image gathering means.
The first and second images may be mirror images of each other, but in all cases will effectively be coaxial and have substantially identical perspectives. This means that there is substantially no distortion of one image relative to the other, regardless of the distance of the object from the image gathering means.
The image gathering means may be cameras or video cameras or CCD devices or the like. In other words, embodiments of the present invention use a beam splitter to combine two cameras, such that they appear on the same axis, removing image errors irrespective of distance.
The use of a beam splitter combines the viewing regions of the two cameras so that they are coaxial, given correct adjustment of the lens/magnification of the image. Two images are produced that are essentially identical, with the exception that one may be a mirror image of the other. This mirror image may be removed either in software or by camera hardware.
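Removing the mirror image in software amounts to a left-right flip. A minimal sketch using NumPy (an assumed implementation detail; the application leaves the correction method open):

```python
import numpy as np

def unmirror(image):
    """Undo the left-right reversal introduced by the reflected
    beam-splitter path, so the two camera images register
    pixel-for-pixel. Works for monochrome (H, W) or colour (H, W, C)
    arrays."""
    return image[:, ::-1]

# Flipping twice restores the original, as expected of a mirror correction.
img = np.arange(12).reshape(3, 4)
assert np.array_equal(unmirror(unmirror(img)), img)
```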
The first image gathering means may be a colour camera giving a normal 2-d image, and the second image gathering means may be an infrared camera providing an image of the projected pattern structure. In this instance, the depth information provided by the second image gathering means provides an accurate frame onto which the 2-d image may be rendered, irrespective of the distance of the object from the cameras.
An embodiment of this technique allows the use of low cost CCTV-style cameras, where one camera is a colour camera and the other camera is a monochrome camera with a response into the IR or near-IR region. In this embodiment, a waveband selective beam splitter may be employed to separate the visible image from the infrared.
In an alternative embodiment, the IR or near-IR pattern image is removed from the visible colour camera and the visible image removed from the monochrome camera. This is done with minimal loss of light to the camera(s) as a very high proportion of both the visible and IR/near-IR light is passed to the appropriate camera. This is in contrast to the first embodiment where all wavebands are shared between the cameras, resulting in lower light intensity availability at each camera.
In both the first and second embodiments, each camera may be provided with its own imaging lens.
Alternatively, the beam splitter may be placed behind an imaging lens or compound lens that acts as a lens for both cameras. This embodiment allows easy control of image magnification as a zoom lens may be employed. This creates an imaging system that behaves as a single camera but generates both visible and infrared pattern information. This allows the use of a wide range of lens options such as zoom, auto iris, auto focus etc., without creating images that separate out under different lens configurations.
In systems that require depth information only, the pattern-generated image is all that is needed. However positioning the target object correctly within the viewing region may be important in some applications. In these applications, visual feedback may be achieved by viewing the image seen by the camera(s) on a display system. The use of the pattern image in facial recognition systems for visual feedback may prove to be quite disconcerting, particularly to those using the system for the first time. As infrequent use may be the norm in security applications, the use of this image may reduce the acceptability of the system. The use of the dual camera system whereby a plain monochrome or colour image is used as the visual feedback significantly improves the public's acceptance of the system. If the configuration described in this connection is employed, the improved feedback performance is achieved without reducing system accuracy.
In an embodiment combining a normal image with a non-encoded pattern structure for 3-d data as described above, problems with occlusions may arise. This problem occurs when features on the object cast shadows seen by the camera(s). Step changes within the object may also cause patterns to be displaced such that incorrect connections are made between patterns.
In order to demonstrate this problem, one may consider a flat object placed in front of a larger flat object. The distance between the two objects is such that the vertical displacement of the patterns on the nearest object is one line with respect to the far object. As there is no coding on the lines, line-tracing software may assume that a particular given line is a continuous line across the image. In fact the given line becomes displaced and becomes an adjacent or near-adjacent line on the nearest object. Line tracing will therefore assume the image is a single flat object, not a flat object in front of a second flat object. Techniques to resolve this problem have been developed in such patent applications as GB2410794 by Sheffield Hallam University. Alternatively, many other systems of encoding the lines have been developed.
Coded patterns make each line unique by colour, shape or time displacement. In the case of colour, multi-coloured stripes are projected, allowing unique identification of lines. However, these produce visible patterns on the image. In order to obtain both a normal colour image and depth data, two time-displaced images are captured: the normal colour image followed by the pattern-projected image. The time displacement causes motion-related problems, with the subject being in two different locations in the two images.
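One well-known way to make stripes locally unique, offered here only as an illustrative example of such coded patterns (the application does not prescribe this scheme), is to colour them with a de Bruijn sequence so that every short run of adjacent stripes identifies its absolute position:

```python
def de_bruijn(k, n):
    """de Bruijn sequence B(k, n): every length-n window over an alphabet
    of k symbols occurs exactly once per cycle (standard Lyndon-word
    construction)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

colours = ["red", "green", "blue"]
stripes = [colours[s] for s in de_bruijn(3, 3)]  # 27 coloured stripes
# Every window of 3 consecutive stripes is unique, so a line tracer can
# recover the stripe index from purely local colour information.
windows = [tuple(stripes[i:i + 3]) for i in range(len(stripes) - 2)]
assert len(windows) == len(set(windows))
```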
The use of the beam splitter allows the use of colour-encoded patterns to be used at the same instant in time as capturing a normal colour image seen without the pattern projection.
Accordingly, further embodiments of the present invention may employ a plurality of beam splitters, each adapted to separate out a predetermined waveband of incident image light. For example, a first beam splitter (closest to the object) may be configured to separate out light above a first given wavelength and to allow passage of light below that wavelength. One or more subsequent beam splitters may then separate out light above progressively shorter wavelength thresholds, in each case allowing passage of light below the respective threshold. Alternatively, the beam splitters may separate out light below progressively longer wavelength thresholds, in each case allowing passage of light above the respective threshold.
This technique can therefore recover patterns, projected at corresponding wavelengths in the IR, near-IR or other spectra, simultaneously with a visible image. The number of cameras, wavelengths and pattern colours is determined by the total spectral bandwidth employed and by the bandwidth of the individual patterns/beam splitters. The above discusses the spectrum in terms of IR and visible; however, this distinction is arbitrary, and the visible spectrum may be divided in the same way as the IR.
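The bandwidth budget can be illustrated with a simple calculation (the figures below are invented for the example, not taken from this application):

```python
# Hypothetical figures: a silicon sensor responds over roughly 400-1000 nm.
total_band_nm = 1000 - 400       # usable spectral bandwidth
visible_feed_nm = 300            # ~400-700 nm reserved for the plain colour image
pattern_channel_nm = 50          # per-pattern bandwidth, incl. splitter roll-off

pattern_channels = (total_band_nm - visible_feed_nm) // pattern_channel_nm
print(pattern_channels)  # 6 near-IR pattern channels alongside one visible image
```

Narrower pattern channels (or dividing the visible band itself) would raise the channel count, at the cost of steeper splitter roll-off requirements and less light per camera.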
Moreover, multiple projecting means may be provided to project multiple patterns at different wavelengths or wavebands corresponding to multiple cameras or beam splitters. This allows a distribution of projectors to remove the effect of occlusions whilst avoiding the problem of time-multiplexed systems, where motion of the subject creates distortion of the final image.

Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of the words, for example "comprising" and "comprises", mean "including but not limited to", and are not intended to (and do not) exclude other moieties, additives, components, integers or steps.
Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
Features, integers, characteristics, compounds, chemical moieties or groups described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention, and to show how it may be carried into effect, reference shall now be made to the accompanying drawings, in which:
FIGURE 1 shows a prior art camera system;
FIGURE 2 shows an image gathered by a first camera of the Figure 1 system;
FIGURE 3 shows an image gathered by a second camera of the Figure 1 system;
FIGURE 4 shows a modification of the Figure 1 system;
FIGURE 5 illustrates how displacement errors occur in the Figure 1 system;
FIGURE 6 shows a first embodiment of the present invention;
FIGURE 7 shows a second embodiment of the present invention;
FIGURE 8 shows a third embodiment of the present invention;
FIGURE 9 illustrates how occlusion problems may occur;
FIGURE 10 shows a fourth embodiment of the present invention; and
FIGURE 11 shows a fifth embodiment of the present invention.
DETAILED DESCRIPTION
Figures 1 to 5 have been discussed in the introduction to the present application.
Figure 6 shows a first embodiment in which the two cameras 1, 2 are mounted at right angles to each other, and a beam splitter 5 is provided so that the cameras 1, 2 gather coincident images of the object 3 despite not being spatially co-located. In this embodiment, camera 1 is a normal colour camera, camera 2 is a near-IR monochrome camera, and the beam splitter 5 splits light across all wavelengths. Due to the beam splitter 5, the image gathered by camera 2 may be a mirror reflection of the image gathered by camera 1, but this can be corrected in hardware or software, resulting in the two images being substantially identical and coincident.
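The software correction of the mirror reflection can be as simple as reversing the column order of the reflected camera's frame; a minimal sketch using NumPy:

```python
import numpy as np

def unmirror(image: np.ndarray) -> np.ndarray:
    """Undo the left-right reversal introduced on the beam splitter's
    reflected path, so the two camera images register directly."""
    return image[:, ::-1]  # reverse the column order of every row

# Toy 2x3 "frame": after correction the columns are reversed.
frame = np.array([[1, 2, 3],
                  [4, 5, 6]])
print(unmirror(frame).tolist())  # [[3, 2, 1], [6, 5, 4]]
```

The slice returns a view rather than a copy, so the correction itself costs no pixel traffic; only downstream code that needs contiguous memory would force a copy.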
Figure 7 shows a second embodiment in which the beam splitter 5 is adapted to separate visible from near-IR light, passing visible light to camera 1 and near-IR light to camera 2. In this way, the near-IR pattern image is removed from camera 1, and the visible light image is removed from camera 2. This is done with very low loss of light to the cameras 1, 2, as a very high proportion of both the visible and the near-IR light is passed to the appropriate camera.
In Figures 6 and 7, each camera 1, 2 is provided with its own lens system (not shown). In a third embodiment, shown in Figure 8, a single lens system 6 is provided, and the beam splitter 5 is located behind the lens and arranged to split light between two CCDs 7, 8. This embodiment allows easy control of the image magnification, since a zoom lens may be employed, as well as auto-iris, auto-focus, etc.
In systems that require depth information only, the pattern-generated image is all that is needed. However, positioning the target object correctly within the viewing region may be important in some applications. In these applications, visual feedback may be achieved by viewing the image seen by the camera on a display system. The use of the pattern image in facial recognition systems for visual feedback may prove to be quite disconcerting, particularly to those using the system for the first time. As infrequent use may be the norm in security applications, the use of this image may reduce the acceptability of the system. The use of the dual camera system, whereby a plain monochrome or colour image is used as the visual feedback, significantly improves the public's acceptance of the system. If the configuration described in this application is employed, the improved feedback performance is achieved without reducing system accuracy.
In the embodiment combining a normal image with a non-encoded pattern structure for 3D data as described above, problems with occlusions arise. This problem occurs when features on the object cast shadows seen by the camera. Step changes within the object may also cause patterns to be displaced such that incorrect connections are made between patterns.
In Figure 9 a flat object 9 is placed in front of a larger flat object 10. The distance between the two objects 9, 10 is such that the vertical displacement of the line patterns A to G on the nearest object 9 is one line with respect to the far object 10. As there is no coding on the lines, line-tracing software may assume that line D, for example, is a continuous line across the image. In fact line D becomes displaced and becomes E on the nearest object 9. Line tracing will therefore assume the image is a flat single object, not a flat object 9 in front of a second flat object 10. Techniques to resolve this problem have been developed in such patent applications as GB2410794, the full disclosure of which is hereby incorporated into the present application by reference. Alternatively, many systems of encoding the lines have been developed.
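The ambiguity of Figure 9 can be reproduced with a toy numeric model (the line spacing and parallax values below are invented for illustration): an uncoded tracer following the row on which line D starts silently picks up line E over the nearer object and reports a flat scene:

```python
LINES = "ABCDEFG"        # identities of the projected stripes
SPACING = 10             # image rows between stripes on the far object
PARALLAX = 10            # rows a stripe shifts per unit of depth (toy value)

def image_row(line_idx, depth):
    """Row at which projected stripe `line_idx` appears on a surface at `depth`."""
    return line_idx * SPACING - depth * PARALLAX

depth = lambda col: 1 if col >= 30 else 0   # near object 9 occupies columns 30-59

# Uncoded tracer: follow whatever stripe sits on the row where line D starts.
start_row = image_row(LINES.index("D"), 0)
traced = []
for col in range(60):
    for i, name in enumerate(LINES):
        if image_row(i, depth(col)) == start_row:
            traced.append(name)

# The trace is perfectly continuous in the image, so the scene is reported
# as flat -- yet the stripe identity silently jumps from D to E.
print(sorted(set(traced)))  # ['D', 'E']
```

Because the depth step happens to equal one stripe spacing, no row discontinuity exists for the tracer to detect; only a coding on the stripes themselves (colour, shape or time) reveals the identity change.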
Figure 10 shows an embodiment with multiple beam splitters 5, 5', 5" ... 5n and multiple cameras C1 to Cn. The beam splitters are each configured to allow passage of light only above or below a predetermined wavelength, effectively acting as high or low pass filters for light of progressively decreasing or increasing wavelength. The cameras may be configured to be sensitive only to the appropriate wavelengths of light. This allows recovery of patterns, projected in the corresponding wavelengths in, say, the IR spectrum, together with a visible image.
Figure 11 shows a generic form in which a projector or projectors P1 ... Pn project a spectrum of light with pattern structures in the bands L1, L2, ... Ln+1. The projected light reflected off the subject or object 3 is optionally band-pass filtered to remove light outside the wavelengths of interest. Each wavelength of interest is separated from the combined reflected light such that the cameras receive light in the ranges L1, L2, ... Ln+1. Beam splitters BS1 to BSn operate as low-pass filters, reflecting light above the cut-off frequency for each bandwidth of interest. The cascaded arrangement results in band-pass filtered light, of the specific band of interest, reaching each camera respectively. Where multiple projectors are used, each projector may project a pattern of a specific wavelength corresponding to the response of a specific camera. This allows a distribution of projectors to remove the effect of occlusions whilst avoiding the problem of time-multiplexed systems, where motion of the subject creates distortion of the final image.
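The cascade of Figure 11 can be sketched as an idealised filter model (the cut-off values below are hypothetical): each splitter reflects wavelengths above its cut-off to one camera and transmits the remainder down the chain, so the cameras collectively receive contiguous, non-overlapping bands:

```python
def camera_bands(cutoffs_nm, upper_nm=1000):
    """Bands delivered by a cascade of dichroic beam splitters, each
    reflecting wavelengths above its cut-off to one camera and transmitting
    the rest down the chain (an idealised, lossless filter model)."""
    bands, top = [], upper_nm
    for cut in sorted(cutoffs_nm, reverse=True):
        bands.append((cut, top))   # reflected band goes to this splitter's camera
        top = cut                  # transmitted remainder continues down the chain
    bands.append((0, top))         # the final camera sees whatever is left
    return bands

# Hypothetical cut-offs: 850 nm and 700 nm splitters feeding three cameras.
print(camera_bands([850, 700]))
# [(850, 1000), (700, 850), (0, 700)] -> IR pattern 2, IR pattern 1, visible
```

With n splitters this yields n+1 bands, matching the n beam splitters and n+1 image gathering means of claims 9 and 10; each projected pattern is then chosen to fall inside exactly one band.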

Claims

1. An imaging system comprising at least first and second image gathering means angularly disposed relative to each other and provided with beam splitter means such that both image gathering means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object; wherein the first image gathering means is configured to gather image information in a first waveband, wherein the second image gathering means is configured to gather image information in a second waveband different to the first waveband, and wherein there is further provided means for projecting a predetermined pattern onto the object which pattern is detected by the second image gathering means and not by the first image gathering means.
2. An imaging system as claimed in claim 1, wherein the first and second images are mirror images of each other, and in which there is further provided means for reflecting one image.
3. An imaging system as claimed in any preceding claim, wherein the beam splitter is adapted to split light across all wavebands of interest.
4. An imaging system as claimed in claim 1 or 2, wherein the beam splitter is adapted to reflect light in the second waveband and to allow passage of light in the first waveband.
5. An imaging system as claimed in any preceding claim, wherein the first image gathering means collects visible colour light, and the second image gathering means collects light in a non-visible waveband, e.g. infra-red or near-infra-red.
6. An imaging system as claimed in any preceding claim, wherein each image gathering means is a video camera with its own lens system.
7. An imaging system as claimed in any one of claims 1 to 5, wherein each image gathering means is a CCD or CMOS or other photosensitive array, and in which the beam splitter is located between the photosensitive arrays and a single, common lens system.
8. An imaging system as claimed in any preceding claim, wherein there are provided more than two image gathering means and more than one beam splitter.
9. An imaging system as claimed in claim 8, comprising n beam splitters and n+1 image gathering means, where n is a positive integer.
10. An imaging system as claimed in claim 9, wherein the n beam splitters are arranged so as to split light into n+1 different wavebands, and wherein each image gathering means is arranged to image light in a different one of the n+1 wavebands.
11. A method of generating image data, wherein at least first and second image gathering means angularly disposed relative to each other and provided with beam splitter means are arranged so as to be effectively coaxial and to gather first and second substantially coincident images of a given object; wherein the first image gathering means gathers image information in a first waveband, and the second image gathering means gathers image information in a second waveband different to the first waveband, and wherein a predetermined pattern is projected onto the object which pattern is detected by the second image gathering means and not by the first image gathering means.
12. A method as claimed in claim 11, using an imaging system as claimed in any one of claims 1 to 10.
13. An imaging system substantially as hereinbefore described with reference to or as shown in Figures 6 to 11 of the accompanying drawings.
14. A means of generating image data substantially as hereinbefore described with reference to or as shown in Figures 6 to 11 of the accompanying drawings.
PCT/GB2007/050607 2006-10-16 2007-10-03 Dual waveband single axis camera for machine vision WO2008047157A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0620380A GB2443004A (en) 2006-10-16 2006-10-16 Multi camera and waveband imaging apparatus
GB0620380.6 2006-10-16

Publications (1)

Publication Number Publication Date
WO2008047157A1 true WO2008047157A1 (en) 2008-04-24

Family

ID=37491496

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2007/050607 WO2008047157A1 (en) 2006-10-16 2007-10-03 Dual waveband single axis camera for machine vision

Country Status (2)

Country Link
GB (1) GB2443004A (en)
WO (1) WO2008047157A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101145249B1 (en) * 2008-11-24 2012-05-25 한국전자통신연구원 Apparatus for validating face image of human being and method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6411918B1 (en) * 1998-12-08 2002-06-25 Minolta Co., Ltd. Method and apparatus for inputting three-dimensional data
US6421114B1 (en) * 1999-03-30 2002-07-16 Minolta Co., Ltd. Three-dimensional information measuring apparatus
US20030067538A1 (en) * 2001-10-04 2003-04-10 Myers Kenneth J. System and method for three-dimensional data acquisition
US6754370B1 (en) * 2000-08-14 2004-06-22 The Board Of Trustees Of The Leland Stanford Junior University Real-time structured light range scanning of moving scenes
WO2006103191A1 (en) * 2005-03-30 2006-10-05 Siemens Aktiengesellschaft Device for determining spatial co-ordinates of object surfaces

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2880158B1 (en) * 2004-12-23 2007-02-23 Sagem METHOD OF IDENTIFYING AN INDIVIDUAL FROM INDIVIDUAL CHARACTERISTICS, WITH DETECTION OF FRAUD



Also Published As

Publication number Publication date
GB0620380D0 (en) 2006-11-22
GB2443004A (en) 2008-04-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 07824819; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 07824819; Country of ref document: EP; Kind code of ref document: A1)