CN102314683A - Computational imaging method and imaging system based on nonplanar image sensor - Google Patents
- Publication number
- CN102314683A (application number CN201110199561A)
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- focuses
- mode
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a computational imaging method and an imaging system based on a non-planar image sensor, in which the light-receiving surface of the image sensor is arranged in a non-planar manner. The imaging method includes the following steps: extracting the image information acquired by the non-planar light-receiving surface in a single exposure, and generating images focused at different depths from that information; combining the depth-from-focus (DFF) method and the depth-from-defocus (DFD) method to estimate depth from the images focused at different depths and obtain a depth map; generating an all-in-focus image from the images focused at different depths and the depth map; and generating refocused images, focused at a chosen position with a chosen depth of field, from the obtained depth map and the all-in-focus image. The method and system can effectively acquire depth information in a single exposure, and can recover depth and control the depth of field from a single image.
Description
Technical field
The present invention relates to the field of computational photography, and in particular to computational sensing; specifically, it relates to a computational imaging method and system based on a non-planar image sensor.
Background technology
Traditional digital photography essentially records the projection of a three-dimensional scene onto a two-dimensional plane. A single image obtained in this way cannot accurately recover the depth and structure of the scene, and cannot convey depth or stereoscopic perception to the human eye. The traditional camera imaging model must trade off between depth of field and signal-to-noise ratio (SNR): the depth of field is adjusted through the aperture, and the larger the aperture, the shallower the depth of field, while a smaller aperture gives a larger depth of field but a low SNR. Raising the SNR then requires a longer exposure, and a long exposure introduces motion blur; the traditional camera imaging model therefore cannot satisfy the demands of depth of field and SNR simultaneously. In addition, a traditional camera cannot image with a very large depth of field. In summary, traditional single-exposure digital photography cannot acquire a full understanding of the scene.
In computer vision, the depth and structure of a scene are usually recovered from multiple images, using methods such as multi-view imaging, capturing images under different camera parameters, or projecting active illumination onto the scene. These methods, however, typically require a multi-view camera rig, an additional active light source, or multiple exposures with a single camera.
Computational photography, which has risen in recent years, designs novel acquisition mechanisms to gather richer visual information. Based on light-field theory, single-camera light-field acquisition has been designed: the classical light-field camera sacrifices spatial resolution to capture, in a single exposure, the angular information that a traditional camera loses, and the captured light field can be used for refocusing and depth-of-field extension, and to infer the depth and structure of the scene. Other single-exposure approaches to depth-of-field control have also received wide attention, such as coded apertures, which make the blur kernels of different depths more distinguishable and thereby enable effective depth-of-field control and rough depth estimation.
However, current acquisition schemes still cannot effectively capture the depth information of a scene. Acquisition systems based on light-field theory in fact capture only the angular information of the scene rather than its depth, and the angular information exchanged for spatial resolution is highly redundant. Other single-exposure acquisition systems can only infer depth from depth cues in the captured image, and cannot accurately and effectively recover the depth and structure of the scene. There is therefore a need to explore methods that effectively acquire depth information in a single exposure, and thereby enable accurate depth estimation and depth-of-field control.
Summary of the invention
The object of the invention is to provide a computational imaging method and system based on a non-planar image sensor, solving the problem that traditional imaging cannot effectively acquire scene depth information, and making it possible, from a single exposure, to decouple an image sequence focused at different scene depths, thereby effectively recovering scene depth and controlling the depth of field.
To solve the above technical problem, the invention provides a computational imaging method based on an image sensor, characterized in that the light-receiving surface of the image sensor is arranged in a non-planar manner, the imaging method comprising the following steps: an image-information acquisition step, in which the image information gathered in a single exposure by the non-planar light-receiving surface is extracted, and images focused at different depths are formed from that information; a depth-map generation step, in which the depth-from-focus method and the depth-from-defocus method are combined to estimate depth from the images focused at different depths and obtain a depth map; and an image depth-of-field control step, in which an all-in-focus image is generated from the images focused at different depths and the depth map, and a refocused image, focused at a specified position with a specified depth of field, is generated from the obtained depth map and the all-in-focus image.
Further, in this method the non-planar arrangement of the light-receiving surface is as follows: the light-receiving surface consists of the sensor pixels, and the pixels of the image sensor are arranged at different height levels.
Further, in this method the non-planar arrangement of the light-receiving surface is as follows: a fiber optic faceplate is provided to guide light to the sensor plane, and the end of the faceplate facing away from the sensor plane is shaped into a non-planar surface distributed over different height levels.
Further, in this method the light-receiving surface is divided into a plurality of local regions, each containing all of the height levels; each height level corresponds to one focal plane of the scene, and the image information gathered at each height level is assembled into an image focused at one depth.
Further, in the depth-map generation step of this method, initial depth information is obtained by depth from focus, and the depth map is then derived by depth from defocus using this initial depth information as the starting estimate.
Further, in the depth-map generation step of this method, initial depth information is obtained by depth from defocus, and the depth map is then derived by depth from focus using this initial depth information as the starting estimate.
Further, in the image depth-of-field control step of this method, an all-in-focus image is formed by taking, for each pixel position, the image information from whichever image in the sequence is most sharply focused at that position; alternatively, a sharp all-in-focus image is obtained by deconvolution.
Further, in the image depth-of-field control step of this method, the globally non-uniform blur kernel of the refocused image is computed, and the refocused image is derived from this blur kernel and the all-in-focus image.
The invention also provides a computational imaging device based on an image sensor, characterized by comprising the following units: an image sensor unit, whose light-receiving surface is arranged in a non-planar manner, and which extracts the image information gathered in a single exposure by the non-planar light-receiving surface and forms images focused at different depths from that information; a depth-map generation unit, which combines the depth-from-focus method and the depth-from-defocus method to estimate depth from the images focused at different depths and obtain a depth map; and an image depth-of-field control unit, which generates an all-in-focus image from the images focused at different depths and the depth map, and generates a refocused image, focused at a specified position with a specified depth of field, from the obtained depth map and the all-in-focus image.
Further, in this system the non-planar arrangement of the light-receiving surface is as follows: the light-receiving surface consists of the sensor pixels, and the pixels of the image sensor are arranged at different height levels.
Further, in this system the non-planar arrangement of the light-receiving surface is as follows: a fiber optic faceplate is provided to guide light to the pixel plane, and the end of the faceplate facing away from the pixel plane is shaped into a non-planar surface distributed over different height levels.
Further, in this system the light-receiving surface is divided into a plurality of local regions, each containing all of the height levels; each height level corresponds to one focal plane of the scene, and the image information gathered at each height level is assembled into an image of one depth.
Compared with the prior art, the present invention has the following advantages:
The invention realizes a computational imaging method and imaging device based on a non-planar image sensor, which effectively acquires scene depth information in a single exposure; this is of great significance for algorithms that recover scene depth and control the depth of field from a single image. Compared with light-field acquisition methods, which trade spatial resolution for angular resolution and then infer the depth, the non-planar sensor sacrifices spatial resolution to obtain depth resolution directly, making more effective use of the spatial resolution. With a specifically designed non-planar arrangement of the light-receiving surface, an image sequence is decoupled from the single image captured by the non-planar sensor imaging device, and depth is estimated jointly by depth from defocus and depth from focus; the resulting depth estimates are more accurate than those of traditional single-image methods, allowing more accurate depth-of-field control.
Further, images focused at different depths are obtained with only a single exposure, so that the traditional depth-from-focus (DFF) and depth-from-defocus (DFD) algorithms can be applied to dynamic scenes.
Further, the effect of a non-planar sensor light-receiving surface is achieved by adding a non-planar fiber optic faceplate in front of the sensor without changing the original camera hardware, which greatly reduces cost; the depth estimation and depth-of-field control algorithms can run on ordinary hardware such as a PC or workstation, and are convenient and flexible to use.
Fiber optic faceplates of the kind used in the present invention are widely applied to CCD coupling in fields such as the military, criminal investigation, surveillance, aerospace, navigation, mining, and medicine, as well as to image-intensifier coupling, high-definition television imaging, and advanced office imaging equipment. By coupling a faceplate whose end face is non-planar to a sensor, the resulting non-planar sensor imaging device can be applied wherever large depth-of-field imaging, depth-of-field control, or recovery of three-dimensional scene structure is required; the invention therefore has a wide range of applications.
Other features and advantages of the invention will be set forth in the description that follows, will in part be apparent from the description, or may be learned by practicing the invention. The objects and other advantages of the invention can be realized and attained by the structure particularly pointed out in the description, the claims, and the accompanying drawings.
Description of drawings
The accompanying drawings provide a further understanding of the invention and constitute part of the description; together with the embodiments of the invention, they serve to explain the invention and do not limit it. In the drawings:
Fig. 1 is a flowchart of the computational imaging method based on an image sensor according to Embodiment one of the invention;
Fig. 2 is an example arrangement of sensor-surface pixels divided into different height levels according to Embodiment one;
Fig. 3 shows an example non-planar fiber optic faceplate according to Embodiment one and a schematic of its coupling to the sensor;
Fig. 4 is a schematic of an alternative non-planar fiber optic faceplate according to Embodiment one;
Fig. 5 is a schematic of the imaging model used for depth from defocus and depth from focus according to Embodiment one;
Fig. 6 is a schematic of the imaging model used to form the refocused image according to Embodiment one;
Fig. 7 is a structural diagram of the computational imaging device based on an image sensor according to Embodiment two of the invention.
Embodiment
Embodiments of the present invention are described in detail below with reference to the accompanying drawings and examples, so that the way in which the invention applies technical means to solve the technical problem and achieve the technical effect can be fully understood and carried out. It should be noted that, as long as no conflict arises, the embodiments of the invention and the features of the embodiments may be combined with one another, and all resulting technical solutions fall within the protection scope of the invention.
In addition, the steps shown in the flowcharts of the drawings may be executed in a computer system, for example as a set of computer-executable instructions; and, although a logical order is shown in the flowcharts, the steps shown or described may in some cases be performed in a different order.
Embodiment one
Fig. 1 is a flowchart of the computational imaging method based on an image sensor according to Embodiment one of the invention; the steps of the method are described below with reference to Fig. 1.
Step S110: extract the image information gathered by the non-planar light-receiving surface, and form images focused at different depths from this information.
The image sensor used in the computational imaging method of this embodiment adopts a non-planar arrangement of its light-receiving surface. In general, the light-receiving surface of the sensor consists of its pixels. To capture, in a single exposure, the imaging of the scene under different focal planes, the pixels of the non-planar sensor surface are preferably divided into a specific number of height levels; the number of height levels is chosen according to the number of scene focal planes desired, and may also be determined by the required precision of the depth resolution being traded for. Preferably, the image sensor is divided row-wise into a plurality of local regions of equal size, each containing all of the predefined height levels, with each height level corresponding to one or more rows of pixels; the pixels at the different height levels within one local region are regarded as corresponding to the same position in the three-dimensional scene. Fig. 2 is a schematic example of such an arrangement: the sensor-surface pixels are divided into local regions 1, 2, 3, ..., each containing 3 rows of pixels; pixel rows 1.1, 2.1, 3.1, ... lie at one height, rows 1.2, 2.2, 3.2, ... at another, and rows 1.3, 2.3, 3.3, ... at a third, corresponding to three different depths of the scene, and local pixel rows 1.1, 1.2, 1.3 are regarded as corresponding to the same scene point, as are rows 2.1, 2.2, 2.3 and rows 3.1, 3.2, 3.3. In this manner, only a single exposure is needed at imaging time, and the scene can be imaged on different focal planes according to the pixels at the different height levels, obtaining images with different planes of focus.
Because directly fabricating a sensor with pixels arranged at different height levels is costly, it is preferable to place a fiber optic faceplate tightly against the sensor pixel plane to act as the light-receiving surface, with the end face of the faceplate away from the pixel plane made non-planar as in Fig. 2; the faceplate then guides light to the sensor, which is equivalent to obtaining a non-planar sensor. The faceplate can be made of a series of neatly and tightly packed optical fibers; preferably, the minimum center-to-center fiber distance can reach 3 um, and the numerical aperture can be made greater than or equal to 1.
Preferably, the fiber diameter of the faceplate is made equal to the pixel size, and the non-planar fiber arrangement on the faceplate surface can be fabricated to the precision of individual fibers; Fig. 3 shows an example of such a non-planar faceplate and a schematic of its coupling to the sensor. In Fig. 3 the fiber end faces lie on the imaging plane and guide visible light to the sensor pixel plane, so the fiber end faces are equivalent to the original sensor plane. The end faces labeled 1 in the figure lie in one plane, corresponding to focal plane a in the scene; the end faces labeled 2 and 3 lie in two other planes, corresponding to focal planes b and c.
To recover the differently focused image sequence at imaging time, taking out all pixels labeled 1 and assembling them into one image yields an image focused on plane a in Fig. 3; likewise, taking out all pixels labeled 2 and 3 respectively yields the images focused on planes b and c. A sequence of low-resolution images focused at different scene depths is thus obtained. Preferably, non-planar faceplates with other numbers of depth levels and other fiber arrangements can also be used, as shown in Fig. 4; equally, a non-planar pixel surface can use the arrangement of Fig. 4.
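The decoupling of the single exposure into a focal stack described above can be sketched as follows; this is a minimal illustration assuming a row-interleaved layout with three height levels, as in Fig. 2, and the array contents are toy values:

```python
import numpy as np

def decouple_focal_stack(raw, n_levels=3):
    """Split a raw non-planar sensor readout into a focal stack.

    Rows are assumed to cycle through the height levels (rows 1.1,
    1.2, 1.3, 2.1, ... in Fig. 2), so taking every n_levels-th row
    yields one lower-resolution image focused at a single depth.
    """
    return [raw[k::n_levels, :] for k in range(n_levels)]

# Toy 6x4 readout whose rows alternate between 3 height levels.
raw = np.arange(24).reshape(6, 4)
stack = decouple_focal_stack(raw, n_levels=3)
# Each sub-image keeps the full width but only 1/3 of the rows.
```

Each element of `stack` corresponds to one focal plane (a, b, or c in Fig. 3) at one third of the vertical resolution, matching the resolution-for-depth trade described in the text.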
According to the imaging model 1/f = 1/v + 1/u (where f is the focal length and u, v are the object and image distances respectively), when the focal length is on the order of millimeters, a subtle change in image distance causes a large change in object distance; for this embodiment, a small depth variation of the non-planar sensor surface can correspond to a large change of the focal plane. Suppose the focal length is 9 mm: moving the focal plane of the scene from 1 m to infinity requires a sensor displacement of only 81.7 um, and moving it from 0.5 m to infinity requires a displacement of only 164.9 um. Therefore, according to the imaging model, the feasibility of this embodiment is guaranteed at the scale of the sensor.
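The sensor displacements quoted above follow directly from the thin-lens model; a small sketch reproducing the numbers (all values taken from the text, units in mm and um):

```python
def image_distance(f_mm, u_mm):
    """Thin-lens model 1/f = 1/v + 1/u, solved for the image distance v."""
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

f = 9.0                               # focal length, mm
v_near = image_distance(f, 1000.0)    # focal plane at 1 m
v_inf = f                             # focal plane at infinity: v -> f
shift_um = (v_near - v_inf) * 1000.0  # required sensor displacement, um
# shift_um evaluates to about 81.7 um; repeating the computation with
# u = 0.5 m gives about 164.9 um, matching the figures quoted above.
```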
Step S120: combine the depth-from-focus (DFF) method and the depth-from-defocus (DFD) method to estimate depth from the images focused at different depths and obtain a depth map.
After the image sequence focused at different depths is obtained in step S110, this step estimates the scene depth (the depth map) using the two methods of depth from focus (DFF) and depth from defocus (DFD). As shown in Fig. 5, suppose the non-planar sensing yields 5 equivalent imaging planes 1, 2, 3, 4, 5; the 5 resulting images are focused at scene positions 1', 2', 3', 4', 5' respectively.
An initial depth map is obtained with the depth-from-focus (DFF) method as follows:
Depth from focus infers scene depth mainly by measuring the degree of focus across the image sequence. For each image, a preliminary focus measure of each pixel is obtained as the difference between the original image and the original image after local non-mean filtering; each image is then segmented (e.g. with mean shift) into blocks, and the focus measures within each block are averaged to give the focus measure of the pixels in that block. Once the focus measure of every pixel of the whole sequence is obtained, the variation of the focus measure of corresponding pixels with image distance is fitted with a Gaussian function; the peak of the Gaussian gives the image distance at which each pixel's focus measure is maximal, for example the image distance v of a given pixel at the point of focus d in Fig. 5. The object distance of the corresponding scene point d' then follows from the lens imaging model, i.e. the depth s of the scene point is obtained.
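The Gaussian-fit step of the DFF procedure can be sketched as follows; this is a simplified illustration for a single pixel, the image distances and focus measures are synthetic, and the focus-measure computation itself is assumed to have been done already:

```python
import numpy as np

def gaussian_peak(v, m):
    """Fit log(m) with a parabola in v (i.e. fit m with a Gaussian)
    and return the image distance at which the focus measure peaks."""
    a, b, _c = np.polyfit(v, np.log(np.maximum(m, 1e-12)), 2)
    return -b / (2.0 * a)

# Image distances of 5 equivalent sensor planes (mm, illustrative).
v = np.array([9.02, 9.05, 9.08, 9.11, 9.14])
# Synthetic focus measures for one pixel, sharpest near v = 9.08.
m = np.exp(-((v - 9.08) ** 2) / (2 * 0.02 ** 2))
v_star = gaussian_peak(v, m)                  # peak image distance
depth_mm = 1.0 / (1.0 / 9.0 - 1.0 / v_star)   # lens model gives depth s
```

With a 9 mm focal length and a peak near v = 9.08 mm, the recovered depth lands near 1 m, consistent with the displacement figures in step S110.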
In practice, if high accuracy is not required of the depth estimate, the depth map estimated by DFF can be used directly. If a more accurate depth map is needed, and the number of height levels of the sensor's light-receiving surface is small, the DFF estimate can be used as the initial depth and refined by depth from defocus (DFD).
The depth map is optimized with the depth-from-defocus (DFD) method as follows:
Depth from defocus infers scene depth mainly by measuring the relative blur between the images of the sequence.
Suppose the image in which the whole scene is sharply focused (the all-in-focus image) is I; then a defocused image focused at a particular depth is:

I_σ(x) = Σ_y I(y)·h_σ(y, x)    (1)

where x and y denote two-dimensional pixel coordinates and the sum runs over the pixel domain. The blur kernel h_σ(y, x) can be approximated with a Gaussian model:

h_σ(y, x) = 1/(2πσ(y)²) · exp(−‖x − y‖²/(2σ(y)²))    (2)

where σ(y) is the depth-dependent blur amount at pixel y:

σ(y) = γ·(D·v/2)·|1/F − 1/v − 1/s(y)|    (3)

where F is the focal length, D the aperture diameter, v the image distance, s the object distance (i.e. the depth), and γ a calibration parameter.

With the above convolution model, the relative-blur convolution model between two images I_1, I_2 focused at different depths is:

I_2 = I_1 * h_Δσ    (4)

with the relative blur amount

Δσ(y)² = σ_2(y)² − σ_1(y)²    (5)

whose relation with the scene depth is obtained by substituting (3), evaluated at the respective image distances v_1 and v_2, into (5).

The energy term to be optimized in depth from defocus is:

E(s) = E_d(s) + α·E_m(s)    (7)

where α is the regularization coefficient and E_m(s) is the smoothness term, which admits multiple choices, e.g. the l_1 norm E_m(s) = ‖∇s‖_1.

For two images I_1, I_2 focused at different depths, the data term is:

E_12(s) = ‖H(Δσ²)·(I_1 * h_Δσ − I_2) + H(−Δσ²)·(I_2 * h_−Δσ − I_1)‖²    (8)

where H(·) is the step function, which selects the direction of the relative blur (the sharper image is blurred toward the blurrier one).

Since the non-planar imaging process yields many images with different focus, the constraints from multiple images allow depth from defocus to achieve a superior result. For I_1, I_2, I_3, I_4, I_5 in the figure, the relative blur is estimated pairwise, i.e. the data term is:

E_d = E_12 + E_23 + E_34 + E_45    (9)
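A minimal numerical sketch of the relative-blur idea underlying DFD: given two observations of the same scene with different defocus, the relative blur is the extra Gaussian blur that carries the sharper image onto the blurrier one. Here it is found by brute-force search over candidate values on synthetic data; the per-pixel, depth-dependent optimization of the energy above is not implemented:

```python
import numpy as np

def gauss_blur(img, sigma):
    """Separable Gaussian blur (a minimal stand-in for the kernel h)."""
    if sigma <= 0:
        return img.copy()
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def relative_blur(i1, i2, sigmas):
    """Brute-force the relative blur: the extra Gaussian blur that
    carries the sharper image i1 onto the blurrier image i2."""
    errs = [np.mean((gauss_blur(i1, s) - i2) ** 2) for s in sigmas]
    return sigmas[int(np.argmin(errs))]

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
i1 = gauss_blur(sharp, 0.5)   # mildly defocused view
i2 = gauss_blur(sharp, 1.5)   # more defocused view
# For Gaussians, blurs add in quadrature, so the true relative blur
# is sqrt(1.5**2 - 0.5**2), about 1.414.
d_sigma = relative_blur(i1, i2, np.linspace(0.5, 2.5, 21))
```

In the full method, the recovered relative blur is tied back to depth through the sigma-depth relation, rather than read off directly as here.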
During iterative optimization, the depth map estimated by DFF is used as the initial depth, and formula (7) is minimized to obtain the final depth estimate, i.e. the depth map.
In practice, if the light-receiving surface of the sensor contains a large number of height levels, the depth obtained by DFD is less accurate than that obtained by DFF. In that case, it is preferable to obtain initial depth information by DFD, use it as the initial data for DFF, and derive the depth map by DFF.
Step S130: generate an all-in-focus image from the images focused at different depths and the depth map, and generate an image focused at a specified position with a specified depth of field from the obtained depth map and the all-in-focus image.
The all-in-focus image (an extension of the scene's depth of field) is obtained as follows:
Two methods of obtaining the all-in-focus image are used in this embodiment:
The first method assembles an all-in-focus image by taking, for each pixel position, the image information from whichever image in the sequence is most sharply focused at that position. After the focus measure of every pixel of the whole sequence has been obtained in the depth-from-focus (DFF) part of step S120, each pixel of the all-in-focus image is taken from the image whose focus measure is highest at that position. Some regions of an all-in-focus image obtained this way may still exhibit defocus blur.
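The first (per-pixel composite) method can be sketched as follows, assuming the focus measures from step S120 are already available; the two-image stack and measures are toy values:

```python
import numpy as np

def composite_all_in_focus(stack, measures):
    """Per-pixel composite: copy each pixel from whichever image of the
    stack has the highest focus measure at that position."""
    best = np.argmax(measures, axis=0)                 # sharpest layer index
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Toy two-image stack and hand-made focus measures.
stack = np.stack([np.full((2, 2), 10.0), np.full((2, 2), 20.0)])
measures = np.array([[[1.0, 0.0], [0.0, 1.0]],
                     [[0.0, 1.0], [1.0, 0.0]]])
fused = composite_all_in_focus(stack, measures)
# fused takes 10.0 where layer 0 was sharper and 20.0 where layer 1 was.
```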
The second method obtains a sharp all-in-focus image by deconvolution:
Since an image with a limited depth of field can be regarded as the convolution of a sharp all-in-focus image with a spatially non-uniform blur kernel (formula (1)), it can be abbreviated as I_b = I * h, where I is the sharp image and h is the blur kernel. The shape of the blur kernel h(y, x) at any pixel y is the same as the shape of the aperture; it is generally approximated with a Gaussian or a box function and regarded as known. Its size is determined by how far the corresponding scene point departs from the focal plane (formulas (2) and (3)). If the corresponding scene point is focused on the imaging plane, the blur kernel h(y, x) can be regarded as a unit impulse; the farther a scene point departs from the focal plane, the larger the kernel. Once the scene depth has been obtained in step S120, the distance between the scene point corresponding to each image pixel and the focal plane is known, so the kernel size of every pixel can be determined for a given focal plane. A globally non-uniform blur kernel can therefore be obtained for each image of the sequence focused at different depths.

Likewise, suppose the image sequence focused at different depths is I_1, I_2, I_3, I_4, I_5, with corresponding globally non-uniform blur kernels h_1, h_2, h_3, h_4, h_5, and let the latent all-in-focus image be I. The all-in-focus image can then be obtained by minimizing the energy function:

E(I) = Σ_i E_i + α·E_m(I),  with E_i = ‖I * h_i − I_i‖²

where α is the regularization coefficient. As before, the smoothness term E_m(I) admits multiple choices, e.g. the l_1 norm E_m(I) = ‖∇I‖_1. Some regions of an all-in-focus image obtained this way may exhibit ringing artifacts.
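A toy version of the deconvolution approach, with two simplifications loudly noted: the blur is taken as spatially uniform per image (the kernels in the text are globally non-uniform), and the smoothness term is replaced by simple Tikhonov (l_2) regularization so that plain gradient descent suffices:

```python
import numpy as np

def blur_fft(img, sigma):
    """Circular Gaussian blur applied in the Fourier domain."""
    n = img.shape[0]
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    otf = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def all_in_focus(obs, sigmas, alpha=1e-3, iters=200, lr=0.5):
    """Gradient descent on sum_i ||I * h_i - I_i||^2 + alpha * ||I||^2."""
    est = np.mean(obs, axis=0)
    for _ in range(iters):
        grad = alpha * est
        for img, s in zip(obs, sigmas):
            # Gaussian kernels are symmetric, so h^T acts as another blur.
            grad += blur_fft(blur_fft(est, s) - img, s)
        est -= lr * grad
    return est

rng = np.random.default_rng(1)
truth = rng.random((16, 16))
sigmas = [0.5, 1.0, 1.5]
obs = np.stack([blur_fft(truth, s) for s in sigmas])
rec = all_in_focus(obs, sigmas)   # sharper than any single observation
```

The multiple observations constrain different frequency bands, which is the same reason the multi-image data term E_d above helps: each blur level preserves a different portion of the spectrum.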
The refocused image is obtained as follows:
Although many images focused at different depths are obtained in step S110, the depth map of the scene and the all-in-focus image allow the scene's depth of field to be controlled flexibly.
As shown in Fig. 6, suppose a refocused image is to be produced whose focal plane lies at depth s with a depth of field of size [s_1, s_2], and let the corresponding image distances be v and [v_1, v_2] respectively (via the lens model 1/F = 1/v + 1/s). The synthetic aperture diameter is then determined as the largest aperture for which the blur of scene points at the depth-of-field limits s_1 and s_2 does not exceed c, the maximum acceptable blur diameter. Since the depth of the scene is known, the globally non-uniform blur kernel h' of the refocused image can be computed according to formulas (2) and (3).

The refocused image I_b can therefore be obtained from the all-in-focus image I and the blur kernel h', as follows:

I_b = I * h'
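A direct (and deliberately slow) reading of I_b = I * h' with a spatially varying Gaussian kernel; the blur amount follows the sigma expression of formula (3), and all numeric parameters (F, D, v, gamma, the depths) are illustrative assumptions:

```python
import numpy as np

def blur_sigma(depth_mm, F=9.0, D=3.0, v=9.08, gamma=1.0):
    """Blur amount sigma = gamma*(D*v/2)*|1/F - 1/v - 1/s|, as in (3)."""
    return gamma * (D * v / 2.0) * np.abs(1.0 / F - 1.0 / v - 1.0 / depth_mm)

def refocus(all_in_focus_img, depth_mm, **kw):
    """Naive spatially varying refocus: each output pixel is a Gaussian-
    weighted average of the all-in-focus image, with a per-pixel sigma
    taken from the depth map (a direct, slow reading of I_b = I * h')."""
    h, w = all_in_focus_img.shape
    out = np.empty_like(all_in_focus_img)
    yy, xx = np.mgrid[0:h, 0:w]
    for y in range(h):
        for x in range(w):
            s = max(blur_sigma(depth_mm[y, x], **kw), 1e-3)
            wgt = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * s * s))
            out[y, x] = np.sum(wgt * all_in_focus_img) / np.sum(wgt)
    return out

img = np.zeros((8, 8))
img[4, 4] = 1.0
depth = np.full((8, 8), 1021.5)   # whole scene on the chosen focal plane
sharp = refocus(img, depth)       # in-focus depth: image is left unchanged
```

Scene points whose depth matches the chosen focal plane receive a near-zero sigma and stay sharp, while points far from it are spread out, which is exactly the depth-of-field control described above.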
Embodiment two
Fig. 7 is a structural diagram of the computational imaging device according to Embodiment two of the invention; the composition of the device is described below with reference to Fig. 7.
The computational imaging device comprises the following units:
An image sensor unit, whose surface is arranged in a non-planar manner, and which extracts the image information gathered in a single exposure by the non-planar surface and forms images focused at different depths from the gathered information.
Nonplanar arrangement mode has been adopted on the surface of image sensor cell.Generally speaking; The surface of sensor is the pixel of sensor; Obtain the imaging results of scene under different focusing surfaces for being implemented in single exposure collection; Be preferably the height level of branch specific quantity for the pixel arrangement mode of on-plane surface sensor surface, choose the number of the different focusing surfaces of the corresponding scene of other number of different height level, also can confirm according to the precision of the required scene depth resolution that exchanges for.Preferably; Imageing sensor on average is divided into a plurality of regional areas with behavior unit; The pixel that all comprises predefined all height level in each regional area; Corresponding delegation of each height level or multirow pixel, other pixel of differing heights level that sets in each regional area is thought the same position of corresponding three-dimensional scene.Fig. 2 is the example schematic of other sensor surface pixel arrangement of differing heights level in a kind of minute; The image sensor surface pixel is divided into regional area 1,2,3...; Each regional area comprises 3 row pixels, and pixel column 1.1,2.1,3.1... are on a height, and pixel column 1.2,2.2,3.2... are on a height; Pixel column 1.3,2.3,3.3... are on a height; It is three different depths of corresponding scene respectively, and think local pixel capable 1.1,1.2,1.3 corresponding the same point in the scene, same corresponding the same point of scene of local pixel capable 2.1,2.2,2.3 and local pixel capable 3.1,3.2,3.3.In this manner, when imaging, only need single exposure, can come the scene on the different focussing planes is carried out to picture according to other pixel of differing heights level, to obtain the image of the different depth of field.
Because directly fabricating a sensor with pixels arranged at different height levels is costly, the image sensor unit is preferably configured as follows: a fiber-optic faceplate is placed flush against the pixel plane of the sensor to serve as the sensor surface, and the end face of the faceplate opposite the pixel plane is made nonplanar as shown in Fig. 2; since the faceplate guides light to the sensor, this is equivalent to obtaining a nonplanar sensor. The faceplate may be formed of a series of neatly and densely packed optical fibers; preferably, the minimum center-to-center fiber distance can reach 3 μm, and the numerical aperture can be made greater than or equal to 1. Preferably, a faceplate whose fiber diameter equals the pixel size is fabricated, and the nonplanar fiber arrangement on the faceplate surface can be made accurate to individual fibers; Fig. 3 is a schematic diagram of one such nonplanar faceplate and its coupling to the sensor. In Fig. 3 the fiber cross-sections lie on the imaging plane and guide visible light to the sensor pixel plane, so that the fiber cross-sections are equivalent to the original sensor plane. The fiber cross-sections labeled 1 lie in one plane, corresponding to a focal plane a in the scene; those labeled 2 and 3 lie in two other planes, corresponding to two other focal planes b and c.
At imaging time, to recover the differently focused image sequence, taking out all pixels labeled 1 in Fig. 3 and composing them into one image yields an image focused on plane a. Likewise, taking out all pixels labeled 2 and 3 and composing them into separate images yields images focused on planes b and c, respectively. A sequence of low-resolution images focused at different depths of the scene is thus obtained. Preferably, the nonplanar faceplate may also be given a different number of depth levels and a different fiber arrangement, as shown in Fig. 4; similarly, a nonplanarly arranged pixel surface may use the arrangement shown in Fig. 4.
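The pixel take-out described above can be sketched in a few lines. The following is a minimal NumPy sketch, not part of the patent: it assumes the simplest row-interleaved layout, in which every n-th row of the raw readout belongs to the same height level (mirroring the labeled fiber cross-sections 1, 2, 3 of Fig. 3); the function name and array shapes are illustrative.

```python
import numpy as np

def extract_focal_stack(raw, n_levels):
    """Split a raw readout from a row-interleaved nonplanar sensor into
    a stack of low-resolution images, one per height level.

    Rows 0, n_levels, 2*n_levels, ... are assumed to sit at height
    level 0 (focal plane a), rows 1, n_levels+1, ... at level 1, etc.
    """
    assert raw.shape[0] % n_levels == 0, "rows must divide evenly into local regions"
    # raw[k::n_levels] gathers every row belonging to height level k
    return [raw[k::n_levels] for k in range(n_levels)]

# A 6x4 raw frame with 3 height levels yields three 2x4 sub-images,
# i.e. a focal stack at one third of the vertical resolution.
raw = np.arange(24).reshape(6, 4)
stack = extract_focal_stack(raw, 3)
```

The loss of vertical resolution by the factor `n_levels` matches the "low resolution" image sequence mentioned in the text.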
A depth map generation unit, which combines the depth-from-focus (DFF) method and the depth-from-defocus (DFD) method to perform depth estimation on said images focused at different depths and obtain a depth map.
In the depth map generation unit, the depth-from-focus (DFF) method is first used to estimate an initial depth; with this initial depth as the starting point for the depth-from-defocus (DFD) method, DFD then derives the estimated depth map. Alternatively, the depth estimated by DFD may serve as the initial depth, and DFF then derives the estimated depth map. The detailed process is identical to step S120 of Embodiment 1 and is not described further here.
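The DFF stage that produces the initial depth can be sketched as follows. This is a hedged illustration, not the patent's step S120: the squared-Laplacian focus measure and the 3x3 kernel are common but illustrative choices, and the output is simply the index of the sharpest focal slice per pixel, which a DFD refinement would then use as initial data.

```python
import numpy as np

def depth_from_focus(stack):
    """Coarse depth-from-focus: for each pixel, pick the focal-stack
    slice where a local sharpness measure peaks; the winning slice
    index serves as the initial depth label."""
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    sharpness = []
    for img in stack:
        img = img.astype(float)
        # 'same'-size Laplacian response via edge padding + direct correlation
        p = np.pad(img, 1, mode='edge')
        resp = sum(lap[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3))
        sharpness.append(resp ** 2)
    # index of the sharpest slice per pixel = initial depth map
    return np.argmax(np.stack(sharpness), axis=0)

# Synthetic check: a step edge appears only in slice 1, so pixels near
# the edge are assigned depth index 1, flat pixels default to index 0.
flat = np.full((8, 8), 5.0)
edged = flat.copy()
edged[:, 4:] = 9.0
depth = depth_from_focus([flat, edged, flat])
```

In textureless regions every slice scores zero, which is exactly where DFF is unreliable and the DFD refinement described above is needed.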
An image depth-of-field control unit, which generates an all-in-focus image based on said images focused at different depths and said depth map, and generates, from the obtained depth map and the all-in-focus image, a refocused image focused at a specified position with a specified depth of field.
In the image depth-of-field control module, an all-in-focus image is composed either by comparing corresponding pixels across the image sequence and taking out the image information from the most sharply focused pixel, or by adopting a deconvolution approach to obtain a sharp all-in-focus image. With the depth map of the scene and the all-in-focus image, the depth of field can be controlled flexibly, producing a refocused image focused at a specified position with a specified depth of field. The detailed process is identical to step S130 of Embodiment 1 and is not described further here.
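The pixel-selection variant of all-in-focus compositing reduces to a per-pixel gather from the focal stack, steered by the depth map. A minimal NumPy sketch (assuming the depth map already stores, for each pixel, the index of the in-focus slice; the deconvolution variant mentioned above is not shown):

```python
import numpy as np

def all_in_focus(stack, depth_map):
    """Compose an all-in-focus image by taking, at every pixel, the
    value from the focal-stack slice that the depth map marks as in
    focus (integer slice index per pixel)."""
    stack = np.stack([s.astype(float) for s in stack])  # shape (n, H, W)
    rows, cols = np.indices(depth_map.shape)
    # fancy indexing: per-pixel slice selection
    return stack[depth_map, rows, cols]

# Tiny 2x2 example: slice 0 is all zeros, slice 1 is all sevens;
# the depth map picks one slice per pixel.
a = np.zeros((2, 2))
b = np.full((2, 2), 7.0)
depth = np.array([[0, 1], [1, 0]])
aif = all_in_focus([a, b], depth)
```

The same gather, run with a depth map thresholded around a chosen depth, is one way refocusing with a particular depth of field could be driven from the two outputs named in the text.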
Each unit in this embodiment can likewise be used to realize the preferred variants of the corresponding steps of Embodiment 1, which are also not described further here.
The above are merely embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within its scope of protection.
Those skilled in the art should understand that the modules and steps of the present invention described above can be realized with general-purpose computing devices: they may be concentrated on a single computing device or distributed over a network formed by a plurality of computing devices. Optionally, they may be realized with program code executable by computing devices, so that they can be stored in a storage device and executed by a computing device; alternatively, they may each be fabricated as individual integrated-circuit modules, or a plurality of the modules or steps may be fabricated as a single integrated-circuit module. The present invention is thus not restricted to any specific combination of hardware and software.
Although the present invention has disclosed the embodiments above, the described content is adopted merely to facilitate understanding of the invention and is not intended to limit it. Any person skilled in the art to which the present invention pertains may make modifications and variations in the form and details of implementation without departing from the spirit and scope disclosed herein, but the scope of patent protection of the present invention must still be defined by the scope of the appended claims.
Claims (12)
1. A computational imaging method based on an image sensor, characterized in that the light-receiving surface of said image sensor is arranged in a nonplanar manner, said imaging method comprising the following steps:
an image information acquisition step: extracting the image information captured in a single exposure by the nonplanarly arranged light-receiving surface, and forming images focused at different depths from said image information;
a depth map generation step: combining the depth-from-focus method and the depth-from-defocus method, performing depth estimation on said images focused at different depths to obtain a depth map;
an image depth-of-field control step: generating an all-in-focus image based on said images focused at different depths and said depth map, and generating, from the obtained depth map and the all-in-focus image, a refocused image focused at a specified position with a specified depth of field.
2. The imaging method according to claim 1, characterized in that the nonplanar arrangement of said light-receiving surface is: said light-receiving surface consists of the sensor pixels, and the pixels of said image sensor are arranged at different height levels.
3. The imaging method according to claim 1, characterized in that the nonplanar arrangement of said light-receiving surface is: a fiber-optic faceplate is provided to guide light to the sensor plane, and the end of said faceplate facing away from said sensor plane is formed as a nonplanar surface distributed over different height levels.
4. The imaging method according to claim 2 or 3, characterized in that said light-receiving surface is divided into a plurality of local regions, each of which contains all of the height levels; each height level corresponds to one focal plane of the scene, and the image information captured at each height level is composed into an image focused at one depth.
5. The imaging method according to claim 1, characterized in that, in said depth map generation step, initial depth information is obtained by the depth-from-focus method, and said depth map is derived by the depth-from-defocus method with said initial depth information as its initial data.
6. The imaging method according to claim 1, characterized in that, in said depth map generation step, initial depth information is obtained by the depth-from-defocus method, and said depth map is derived by the depth-from-focus method with said initial depth information as its initial data.
7. The imaging method according to claim 1, characterized in that, in said image depth-of-field control step, the image information from the most sharply focused pixel among corresponding pixels across the image sequence is taken out and composed into an all-in-focus image, or a deconvolution approach is adopted to obtain a sharp all-in-focus image.
8. The imaging method according to claim 7, characterized in that, in said image depth-of-field control step, a globally non-uniform blur kernel of the refocused image is computed, and the refocused image is derived from said blur kernel and said all-in-focus image.
9. A computational imaging apparatus, characterized by comprising the following units:
an image sensor unit, whose light-receiving surface is arranged in a nonplanar manner, the image information captured in a single exposure by the nonplanarly arranged light-receiving surface being extracted and images focused at different depths being formed from said image information;
a depth map generation unit, which combines the depth-from-focus method and the depth-from-defocus method to perform depth estimation on said images focused at different depths and obtain a depth map;
an image depth-of-field control unit, which generates an all-in-focus image based on said images focused at different depths and said depth map, and generates, from the obtained depth map and the all-in-focus image, a refocused image focused at a specified position with a specified depth of field.
10. The imaging apparatus according to claim 9, characterized in that the nonplanar arrangement of said light-receiving surface is: said light-receiving surface consists of the sensor pixels, and the pixels of said image sensor are arranged at different height levels.
11. The imaging apparatus according to claim 9, characterized in that the nonplanar arrangement of said light-receiving surface is: a fiber-optic faceplate is provided to guide light to the pixel plane, and the end of said faceplate facing away from said pixel plane is formed as a nonplanar surface distributed over different height levels.
12. The imaging apparatus according to claim 10 or 11, characterized in that said light-receiving surface is divided into a plurality of local regions, each of which contains all of the height levels; each height level corresponds to one focal plane of the scene, and the image information captured at each height level is composed into an image focused at one depth.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110199561 CN102314683B (en) | 2011-07-15 | 2011-07-15 | Computational imaging method and imaging system based on nonplanar image sensor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110199561 CN102314683B (en) | 2011-07-15 | 2011-07-15 | Computational imaging method and imaging system based on nonplanar image sensor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102314683A true CN102314683A (en) | 2012-01-11 |
CN102314683B CN102314683B (en) | 2013-01-16 |
Family
ID=45427821
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110199561 Active CN102314683B (en) | 2011-07-15 | 2011-07-15 | Computational imaging method and imaging system based on nonplanar image sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102314683B (en) |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663718A (en) * | 2012-03-19 | 2012-09-12 | 清华大学 | Method and system for deblurring of gloablly inconsistent image |
CN102663721A (en) * | 2012-04-01 | 2012-09-12 | 清华大学 | Defocus depth estimation and full focus image acquisition method of dynamic scene |
CN103198475A (en) * | 2013-03-08 | 2013-07-10 | 西北工业大学 | Full-focus synthetic aperture perspective imaging method based on multilevel iteration visualization optimization |
CN103369233A (en) * | 2012-03-28 | 2013-10-23 | 索尼公司 | System and method for performing depth estimation by utilizing adaptive kernel |
CN103440662A (en) * | 2013-09-04 | 2013-12-11 | 清华大学深圳研究生院 | Kinect depth image acquisition method and device |
CN103516976A (en) * | 2012-06-25 | 2014-01-15 | 佳能株式会社 | Image pickup apparatus and method of controlling the same |
CN104243823A (en) * | 2014-09-15 | 2014-12-24 | 北京智谷技术服务有限公司 | Light field acquisition control method and device and light field acquisition device |
CN104254768A (en) * | 2012-01-31 | 2014-12-31 | 3M创新有限公司 | Method and apparatus for measuring the three dimensional structure of a surface |
CN104281397A (en) * | 2013-07-10 | 2015-01-14 | 华为技术有限公司 | Refocusing method and device for multiple depth sections and electronic device |
CN104410784A (en) * | 2014-11-06 | 2015-03-11 | 北京智谷技术服务有限公司 | Light field collecting control method and light field collecting control device |
WO2015032289A1 (en) * | 2013-09-05 | 2015-03-12 | 华为技术有限公司 | Method for displaying focus picture and image processing device |
CN104463964A (en) * | 2014-12-12 | 2015-03-25 | 英华达(上海)科技有限公司 | Method and equipment for acquiring three-dimensional model of object |
CN104599283A (en) * | 2015-02-10 | 2015-05-06 | 南京林业大学 | Image depth improvement method for camera height recovery based on depth difference |
CN104798370A (en) * | 2012-11-27 | 2015-07-22 | 高通股份有限公司 | System and method for generating 3-D plenoptic video images |
CN104899870A (en) * | 2015-05-15 | 2015-09-09 | 清华大学深圳研究生院 | Depth estimation method based on light-field data distribution |
WO2015180659A1 (en) * | 2014-05-28 | 2015-12-03 | 华为技术有限公司 | Image processing method and image processing device |
CN105247572A (en) * | 2012-10-05 | 2016-01-13 | 奥利亚医疗公司 | System and method for estimating a quantity of interest in a kinematic system by contrast agent tomography |
CN105301864A (en) * | 2014-07-29 | 2016-02-03 | 深圳市墨克瑞光电子研究院 | Liquid crystal lens imaging device and liquid crystal lens imaging method |
CN105474622A (en) * | 2013-08-30 | 2016-04-06 | 高通股份有限公司 | Method and apparatus for generating an all-in-focus image |
CN105554369A (en) * | 2014-10-23 | 2016-05-04 | 三星电子株式会社 | Electronic device and method for processing image |
CN105657394A (en) * | 2014-11-14 | 2016-06-08 | 东莞宇龙通信科技有限公司 | Photographing method based on double cameras, photographing device and mobile terminal |
CN105721768A (en) * | 2014-12-19 | 2016-06-29 | 汤姆逊许可公司 | Method and apparatus for generating adapted slice image from focal stack |
CN106105193A (en) * | 2014-03-13 | 2016-11-09 | 三星电子株式会社 | For producing image pick up equipment and the method for the image with depth information |
CN106231177A (en) * | 2016-07-20 | 2016-12-14 | 成都微晶景泰科技有限公司 | Scene depth measuring method, equipment and imaging device |
CN106225765A (en) * | 2016-07-25 | 2016-12-14 | 浙江大学 | A kind of many line scan image sensors obtain device and the formation method of hyperfocal distance scanning imagery |
CN106610553A (en) * | 2015-10-22 | 2017-05-03 | 深圳超多维光电子有限公司 | A method and apparatus for auto-focusing |
CN106895793A (en) * | 2015-12-21 | 2017-06-27 | 财团法人工业技术研究院 | The method and apparatus of double mode depth survey |
CN108459417A (en) * | 2018-02-05 | 2018-08-28 | 华侨大学 | A kind of monocular narrow-band multispectral stereo visual system and its application method |
CN108876839A (en) * | 2018-07-18 | 2018-11-23 | 清华大学 | A kind of field depth extending method of structured light three-dimensional imaging system, device and system |
US10375292B2 (en) | 2014-03-13 | 2019-08-06 | Samsung Electronics Co., Ltd. | Image pickup apparatus and method for generating image having depth information |
CN112001958A (en) * | 2020-10-28 | 2020-11-27 | 浙江浙能技术研究院有限公司 | Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation |
CN112669355A (en) * | 2021-01-05 | 2021-04-16 | 北京信息科技大学 | Method and system for splicing and fusing focusing stack data based on RGB-D super-pixel segmentation |
WO2021102716A1 (en) * | 2019-11-27 | 2021-06-03 | 深圳市晟视科技有限公司 | Depth-of-field synthesis system, camera, and microscope |
CN115226417A (en) * | 2021-02-20 | 2022-10-21 | 京东方科技集团股份有限公司 | Image acquisition device, image acquisition apparatus, image acquisition method, and image production method |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106254855B (en) * | 2016-08-25 | 2017-12-05 | 锐马(福建)电气制造有限公司 | A kind of three-dimensional modeling method and system based on zoom ranging |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101048691A (en) * | 2004-09-03 | 2007-10-03 | 自动识别与控制公司 | Extended depth of field using a multi-focal length lens with a controlled range of spherical aberration and centrally obscured aperture |
US20090103792A1 (en) * | 2007-10-22 | 2009-04-23 | Visiongate, Inc. | Depth of Field Extension for Optical Tomography |
- 2011-07-15 CN CN 201110199561 patent/CN102314683B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101048691A (en) * | 2004-09-03 | 2007-10-03 | 自动识别与控制公司 | Extended depth of field using a multi-focal length lens with a controlled range of spherical aberration and centrally obscured aperture |
US20090103792A1 (en) * | 2007-10-22 | 2009-04-23 | Visiongate, Inc. | Depth of Field Extension for Optical Tomography |
Non-Patent Citations (1)
Title |
---|
Xu Shukui et al., "A Survey of Computational Photography", Application Research of Computers (《计算机应用研究》), vol. 27, no. 11, 30 November 2010 (2010-11-30), pages 4032 - 4039 *
Cited By (68)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104254768A (en) * | 2012-01-31 | 2014-12-31 | 3M创新有限公司 | Method and apparatus for measuring the three dimensional structure of a surface |
EP2810054A4 (en) * | 2012-01-31 | 2015-09-30 | 3M Innovative Properties Co | Method and apparatus for measuring the three dimensional structure of a surface |
CN102663718A (en) * | 2012-03-19 | 2012-09-12 | 清华大学 | Method and system for deblurring of gloablly inconsistent image |
CN102663718B (en) * | 2012-03-19 | 2015-06-24 | 清华大学 | Method and system for deblurring of gloablly inconsistent image |
CN103369233B (en) * | 2012-03-28 | 2016-12-28 | 索尼公司 | For by utilizing self-adaptive kernel to perform the system and method for estimation of Depth |
CN103369233A (en) * | 2012-03-28 | 2013-10-23 | 索尼公司 | System and method for performing depth estimation by utilizing adaptive kernel |
CN102663721B (en) * | 2012-04-01 | 2015-04-15 | 清华大学 | Defocus depth estimation and full focus image acquisition method of dynamic scene |
CN102663721A (en) * | 2012-04-01 | 2012-09-12 | 清华大学 | Defocus depth estimation and full focus image acquisition method of dynamic scene |
CN103516976B (en) * | 2012-06-25 | 2016-12-28 | 佳能株式会社 | Image-pickup device and control method thereof |
CN103516976A (en) * | 2012-06-25 | 2014-01-15 | 佳能株式会社 | Image pickup apparatus and method of controlling the same |
CN105247572A (en) * | 2012-10-05 | 2016-01-13 | 奥利亚医疗公司 | System and method for estimating a quantity of interest in a kinematic system by contrast agent tomography |
CN104798370A (en) * | 2012-11-27 | 2015-07-22 | 高通股份有限公司 | System and method for generating 3-D plenoptic video images |
CN104798370B (en) * | 2012-11-27 | 2017-05-03 | 高通股份有限公司 | System and method for generating 3-D plenoptic video images |
CN103198475B (en) * | 2013-03-08 | 2016-01-13 | 西北工业大学 | Based on the total focus synthetic aperture perspective imaging method that multilevel iteration visualization is optimized |
CN103198475A (en) * | 2013-03-08 | 2013-07-10 | 西北工业大学 | Full-focus synthetic aperture perspective imaging method based on multilevel iteration visualization optimization |
CN104281397A (en) * | 2013-07-10 | 2015-01-14 | 华为技术有限公司 | Refocusing method and device for multiple depth sections and electronic device |
US10203837B2 (en) | 2013-07-10 | 2019-02-12 | Huawei Technologies Co., Ltd. | Multi-depth-interval refocusing method and apparatus and electronic device |
WO2015003544A1 (en) * | 2013-07-10 | 2015-01-15 | 华为技术有限公司 | Method and device for refocusing multiple depth intervals, and electronic device |
CN105474622A (en) * | 2013-08-30 | 2016-04-06 | 高通股份有限公司 | Method and apparatus for generating an all-in-focus image |
CN103440662A (en) * | 2013-09-04 | 2013-12-11 | 清华大学深圳研究生院 | Kinect depth image acquisition method and device |
CN103440662B (en) * | 2013-09-04 | 2016-03-09 | 清华大学深圳研究生院 | Kinect depth image acquisition method and device |
WO2015032289A1 (en) * | 2013-09-05 | 2015-03-12 | 华为技术有限公司 | Method for displaying focus picture and image processing device |
CN104427237A (en) * | 2013-09-05 | 2015-03-18 | 华为技术有限公司 | Display method and image processing equipment of focusing picture |
CN104427237B (en) * | 2013-09-05 | 2018-08-21 | 华为技术有限公司 | A kind of display methods and image processing equipment focusing picture |
CN106105193A (en) * | 2014-03-13 | 2016-11-09 | 三星电子株式会社 | For producing image pick up equipment and the method for the image with depth information |
US10375292B2 (en) | 2014-03-13 | 2019-08-06 | Samsung Electronics Co., Ltd. | Image pickup apparatus and method for generating image having depth information |
CN106105193B (en) * | 2014-03-13 | 2019-01-11 | 三星电子株式会社 | For generating the image pick up equipment and method of the image with depth information |
CN105335950A (en) * | 2014-05-28 | 2016-02-17 | 华为技术有限公司 | Image processing method and image processing apparatus |
CN105335950B (en) * | 2014-05-28 | 2019-02-12 | 华为技术有限公司 | Image processing method and image processing apparatus |
WO2015180659A1 (en) * | 2014-05-28 | 2015-12-03 | 华为技术有限公司 | Image processing method and image processing device |
CN105301864B (en) * | 2014-07-29 | 2018-01-30 | 深圳市墨克瑞光电子研究院 | Liquid crystal lens imaging device and liquid crystal lens imaging method |
CN105301864A (en) * | 2014-07-29 | 2016-02-03 | 深圳市墨克瑞光电子研究院 | Liquid crystal lens imaging device and liquid crystal lens imaging method |
US10341594B2 (en) | 2014-09-15 | 2019-07-02 | Beijing Zhigu Tech Co., Ltd. | Light field capture control methods and apparatuses, light field capture devices |
CN104243823A (en) * | 2014-09-15 | 2014-12-24 | 北京智谷技术服务有限公司 | Light field acquisition control method and device and light field acquisition device |
CN105554369A (en) * | 2014-10-23 | 2016-05-04 | 三星电子株式会社 | Electronic device and method for processing image |
US10430957B2 (en) | 2014-10-23 | 2019-10-01 | Samsung Electronics Co., Ltd. | Electronic device for processing images obtained using multiple image sensors and method for operating the same |
CN105554369B (en) * | 2014-10-23 | 2020-06-23 | 三星电子株式会社 | Electronic device and method for processing image |
US10970865B2 (en) | 2014-10-23 | 2021-04-06 | Samsung Electronics Co., Ltd. | Electronic device and method for applying image effect to images obtained using image sensor |
US11455738B2 (en) | 2014-10-23 | 2022-09-27 | Samsung Electronics Co., Ltd. | Electronic device and method for applying image effect to images obtained using image sensor |
CN104410784B (en) * | 2014-11-06 | 2019-08-06 | 北京智谷技术服务有限公司 | Optical field acquisition control method and device |
CN104410784A (en) * | 2014-11-06 | 2015-03-11 | 北京智谷技术服务有限公司 | Light field collecting control method and light field collecting control device |
US10269130B2 (en) | 2014-11-06 | 2019-04-23 | Beijing Zhigu Tech Co., Ltd. | Methods and apparatus for control of light field capture object distance adjustment range via adjusting bending degree of sensor imaging zone |
CN105657394A (en) * | 2014-11-14 | 2016-06-08 | 东莞宇龙通信科技有限公司 | Photographing method based on double cameras, photographing device and mobile terminal |
CN105657394B (en) * | 2014-11-14 | 2018-08-24 | 东莞宇龙通信科技有限公司 | Image pickup method, filming apparatus based on dual camera and mobile terminal |
CN104463964A (en) * | 2014-12-12 | 2015-03-25 | 英华达(上海)科技有限公司 | Method and equipment for acquiring three-dimensional model of object |
TWI607862B (en) * | 2014-12-12 | 2017-12-11 | 英華達股份有限公司 | Method and apparatus of generating a 3-d model from a, object |
CN105721768A (en) * | 2014-12-19 | 2016-06-29 | 汤姆逊许可公司 | Method and apparatus for generating adapted slice image from focal stack |
CN104599283B (en) * | 2015-02-10 | 2017-06-09 | 南京林业大学 | A kind of picture depth improved method for recovering camera heights based on depth difference |
CN104599283A (en) * | 2015-02-10 | 2015-05-06 | 南京林业大学 | Image depth improvement method for camera height recovery based on depth difference |
CN104899870B (en) * | 2015-05-15 | 2017-08-25 | 清华大学深圳研究生院 | The depth estimation method being distributed based on light field data |
CN104899870A (en) * | 2015-05-15 | 2015-09-09 | 清华大学深圳研究生院 | Depth estimation method based on light-field data distribution |
US10346997B2 (en) | 2015-05-15 | 2019-07-09 | Graduate School At Shenzhen, Tsinghua University | Depth estimation method based on light-field data distribution |
WO2016184099A1 (en) * | 2015-05-15 | 2016-11-24 | 清华大学深圳研究生院 | Depth estimation method based on light field data distribution |
CN106610553A (en) * | 2015-10-22 | 2017-05-03 | 深圳超多维光电子有限公司 | A method and apparatus for auto-focusing |
CN106610553B (en) * | 2015-10-22 | 2019-06-18 | 深圳超多维科技有限公司 | A kind of method and device of auto-focusing |
CN106895793A (en) * | 2015-12-21 | 2017-06-27 | 财团法人工业技术研究院 | The method and apparatus of double mode depth survey |
CN106231177A (en) * | 2016-07-20 | 2016-12-14 | 成都微晶景泰科技有限公司 | Scene depth measuring method, equipment and imaging device |
CN106225765A (en) * | 2016-07-25 | 2016-12-14 | 浙江大学 | A kind of many line scan image sensors obtain device and the formation method of hyperfocal distance scanning imagery |
CN108459417A (en) * | 2018-02-05 | 2018-08-28 | 华侨大学 | A kind of monocular narrow-band multispectral stereo visual system and its application method |
CN108459417B (en) * | 2018-02-05 | 2020-06-26 | 华侨大学 | Monocular narrow-band multispectral stereoscopic vision system and using method thereof |
CN108876839A (en) * | 2018-07-18 | 2018-11-23 | 清华大学 | A kind of field depth extending method of structured light three-dimensional imaging system, device and system |
CN108876839B (en) * | 2018-07-18 | 2021-05-28 | 清华大学 | Depth of field extension method, device and system of structured light three-dimensional imaging system |
WO2021102716A1 (en) * | 2019-11-27 | 2021-06-03 | 深圳市晟视科技有限公司 | Depth-of-field synthesis system, camera, and microscope |
CN113795862A (en) * | 2019-11-27 | 2021-12-14 | 深圳市晟视科技有限公司 | Depth of field synthesis system, camera and microscope |
CN112001958A (en) * | 2020-10-28 | 2020-11-27 | 浙江浙能技术研究院有限公司 | Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation |
CN112669355A (en) * | 2021-01-05 | 2021-04-16 | 北京信息科技大学 | Method and system for splicing and fusing focusing stack data based on RGB-D super-pixel segmentation |
CN112669355B (en) * | 2021-01-05 | 2023-07-25 | 北京信息科技大学 | Method and system for splicing and fusing focusing stack data based on RGB-D super pixel segmentation |
CN115226417A (en) * | 2021-02-20 | 2022-10-21 | 京东方科技集团股份有限公司 | Image acquisition device, image acquisition apparatus, image acquisition method, and image production method |
Also Published As
Publication number | Publication date |
---|---|
CN102314683B (en) | 2013-01-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102314683B (en) | Computational imaging method and imaging system based on nonplanar image sensor | |
US9204067B2 (en) | Image sensor and image capturing apparatus | |
JP5105482B2 (en) | Optical condition design method and compound eye imaging apparatus | |
US8547421B2 (en) | System for adaptive displays | |
CN102917235B (en) | Image processing apparatus and image processing method | |
JP4673202B2 (en) | Image input device | |
CN102783162B (en) | Camera head | |
JP6590792B2 (en) | Method, apparatus and display system for correcting 3D video | |
KR102219624B1 (en) | Virtual ray tracing method and light field dynamic refocusing display system | |
US9581787B2 (en) | Method of using a light-field camera to generate a three-dimensional image, and light field camera implementing the method | |
US20130286170A1 (en) | Method and apparatus for providing mono-vision in multi-view system | |
CN104867125B (en) | Obtain the method and device of image | |
US20090245696A1 (en) | Method and apparatus for building compound-eye seeing displays | |
CN104050662A (en) | Method for directly obtaining depth image through light field camera one-time imaging | |
JP6585938B2 (en) | Stereoscopic image depth conversion apparatus and program thereof | |
KR20160074223A (en) | Image pick-up apparatus, portable terminal including the same and image pick-up method using the apparatus | |
CN104635337A (en) | Design method of honeycomb type lens array capable of improving stereo image display resolution | |
Shin et al. | Computational implementation of asymmetric integral imaging by use of two crossed lenticular sheets | |
CN105939443A (en) | Disparity in plenoptic systems | |
CN106060376A (en) | Display control apparatus, display control method, and image capturing apparatus | |
JP6125201B2 (en) | Image processing apparatus, method, program, and image display apparatus | |
CN113436130A (en) | Intelligent sensing system and device for unstructured light field | |
JP4729011B2 (en) | Element image group conversion device, stereoscopic image display device, and element image group conversion program | |
JP5741353B2 (en) | Image processing system, image processing method, and image processing program | |
CN102780900B (en) | Image display method of multi-person multi-view stereoscopic display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |