WO2001059751A1 - Image processing device and method, and recording medium - Google Patents
- Publication number
- WO2001059751A1 (PCT/JP2001/000895)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image signal
- class
- pixel
- input image
- pixels
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/149—Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/89—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
- H04N19/895—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder in combination with error concealment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
- H04N7/0137—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes dependent on presence/absence of motion, e.g. of motion zones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0117—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
- H04N7/0122—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal the input and the output signals having different aspect ratios
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
- H04N7/0145—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes the interpolation being class adaptive, i.e. it uses the information of class which is determined for a pixel based upon certain characteristics of the neighbouring pixels
Definitions
- the present invention relates to an image processing apparatus and method, and a recording medium, and more particularly, to an image processing apparatus and method for processing or converting an image, and a recording medium.
- In the class classification adaptive processing, for each pixel of interest of the input image signal, a class tap and a prediction tap are obtained from the input image signal, and the pixel of interest is classified into one of preset classes based on the class tap. Then, by performing an operation using the prediction tap and a prediction coefficient set generated in advance by learning for each class and selected according to the determined class, an output image signal higher in quality than the input image signal is generated.
- The class tap or prediction tap may include pixels located outside the effective range of the image, and the pixel value of a pixel outside the effective range is unlikely to have a normal value. Therefore, in the conventional classification adaptive processing, if a class tap or prediction tap is located outside the effective range of the image, it is masked and not used, as shown in FIG. 1.
- Some systems generate pixels that cannot be corrected by error correction codes, or that are missing due to packet loss or the like, by performing classification adaptive processing using surrounding non-missing pixels as class taps and prediction taps.
- the pixel value can be set by the class classification adaptive processing using the pixel values of the pixels located around the target pixel.
- Pixels whose positional relationship with the missing pixel is the same relative arrangement are used; that is, the same processing is performed with a so-called fixed tap structure.
- The set pixel value will not have a normal value. Therefore, in the generated image, pixels located at the edge of the image are masked and not used, as shown in FIG. 1.
- The processing content does not change according to the position of the pixel on the screen, and the same processing is executed regardless of the pixel's physical position on the screen, so the image quality is sometimes not improved much.
- The present invention has been made in view of such a situation, and it is an object of the present invention to always generate a higher quality image regardless of the position of a pixel on the screen.
- An image processing apparatus includes: position detection means for detecting position information indicating a position of a target pixel in a frame of an input image signal including a plurality of pixels; class determination means for determining the class of the pixel of interest from a plurality of classes based on the position information; prediction tap selection means for selecting a plurality of pixels from the input image signal as prediction taps; and calculation means for outputting an output image signal higher in quality than the input image by performing a calculation process based on the prediction taps and conversion data obtained in advance by learning for each class.
- The image processing method includes: a position detection step of detecting position information indicating a position of a pixel of interest in a frame of an input image signal including a plurality of pixels; a class determination step of determining the class of the pixel of interest from a plurality of classes based on the position information; a prediction tap selection step of selecting a plurality of pixels from the input image signal as prediction taps; and a calculation step of outputting an output image signal higher in quality than the input image by performing a calculation process based on the prediction taps and conversion data obtained in advance by learning for each class.
- The recording medium records a program including: a position detection step of detecting position information indicating a position of a target pixel in a frame of an input image signal including a plurality of pixels; a class determination step of determining the class of the pixel of interest from a plurality of classes based on the position information; a prediction tap selection step of selecting a plurality of pixels from the input image signal as prediction taps; and a calculation step of outputting an output image signal higher in quality than the input image by performing a calculation process based on the prediction taps and conversion data obtained in advance by learning for each class.
- The image processing apparatus includes: position detection means for detecting position information indicating a position of a pixel of interest in a frame of an input image signal including a plurality of pixels; class tap selection means for selecting from the input image signal, as class taps, a plurality of pixels whose positional relationship with the pixel of interest is varied according to the position information; class determination means for determining the class of the target pixel from a plurality of classes based on the class taps; prediction tap selection means for selecting a plurality of pixels from the input image signal as prediction taps; and calculation means for outputting an output image signal higher in quality than the input image by performing a calculation process based on the prediction taps and conversion data obtained in advance by learning for each class.
- The image processing method includes: a position detection step of detecting position information indicating a position of the target pixel in a frame of an input image signal including a plurality of pixels; and a class tap selection step of selecting, as class taps, a plurality of pixels whose positional relationship with the target pixel is varied according to the position information.
- The recording medium records a program including: a position detection step of detecting position information indicating a position of the pixel of interest in a frame of an input image signal including a plurality of pixels; and a class tap selection step of selecting, as class taps, a plurality of pixels whose positional relationship with the pixel of interest is varied according to the position information.
- An image processing apparatus includes: position detection means for detecting position information indicating a position of a target pixel in a frame of an input image signal including a plurality of pixels; class tap selection means for selecting a plurality of pixels from the input image signal as class taps; class determination means for determining the class of the pixel of interest from a plurality of classes based on the class taps; prediction tap selection means for selecting from the input image signal, as prediction taps, a plurality of pixels whose positional relationship with the pixel of interest is varied according to the position information; and calculation means for outputting an output image signal higher in quality than the input image by performing a calculation process based on the prediction taps and conversion data obtained in advance by learning for each class.
- The image processing method includes: a position detection step of detecting position information indicating a position of a target pixel in a frame of an input image signal including a plurality of pixels; a class tap selection step of selecting a plurality of pixels from the input image signal as class taps; a class determination step of determining the class of the target pixel from a plurality of classes based on the class taps; a prediction tap selection step of selecting from the input image signal, as prediction taps, a plurality of pixels whose positional relationship with the target pixel is varied according to the position information; and a calculation step of outputting an output image signal higher in quality than the input image by performing a calculation process based on the prediction taps and conversion data obtained in advance by learning for each class.
- The recording medium records a program that causes a computer to execute processing including a position detection step of detecting position information indicating a position of a target pixel in a frame of an input image signal including a plurality of pixels, and a class tap selection step of selecting a plurality of pixels from the input image signal as class taps.
- The image processing apparatus includes: provisional class tap selection means for selecting, for each pixel of interest of an input image signal including a plurality of pixels, a plurality of pixels from the input image signal as provisional class taps; true class tap selection means for selecting from the input image signal, as true class taps, a plurality of pixels whose positional relationship with the pixel of interest is changed according to the position of the provisional class taps in the frame; class determination means for determining the class of the pixel of interest from a plurality of classes based on the true class taps; prediction tap selection means for selecting a plurality of pixels from the input image signal as prediction taps; and calculation means for outputting an output image signal higher in quality than the input image by performing a calculation process based on the prediction taps and conversion data obtained in advance by learning for each class.
- The image processing method includes: a provisional class tap selection step of selecting, for each target pixel of an input image signal including a plurality of pixels, a plurality of pixels from the input image signal as provisional class taps; a true class tap selection step of selecting from the input image signal, as true class taps, a plurality of pixels whose positional relationship with the pixel of interest is changed according to the position of the provisional class taps in the frame; a class determination step of determining the class of the pixel of interest from a plurality of classes based on the true class taps; a prediction tap selection step of selecting a plurality of pixels from the input image signal as prediction taps; and a calculation step of outputting an output image signal higher in quality than the input image by performing a calculation process based on the prediction taps and conversion data obtained in advance by learning for each class.
- The recording medium records a program that causes a computer to execute image processing including: a provisional class tap selection step of selecting, for each pixel of interest of an input image signal composed of a plurality of pixels, a plurality of pixels from the input image signal as provisional class taps; a true class tap selection step of selecting from the input image signal, as true class taps, a plurality of pixels whose positional relationship with the pixel of interest is varied according to the position of the provisional class taps in the frame; a class determination step of determining the class of the pixel of interest from a plurality of classes based on the true class taps; a prediction tap selection step of selecting a plurality of pixels from the input image signal as prediction taps; and a calculation step of outputting an output image signal higher in quality than the input image by performing a calculation process based on the prediction taps and conversion data obtained in advance by learning for each class.
- An image processing apparatus includes: class tap selection means for selecting, for each pixel of interest of an input image signal composed of a plurality of pixels, a plurality of pixels from the input image signal as class taps; class determination means for determining the class of the pixel of interest from a plurality of classes based on the class taps; and tentative prediction tap selection means for selecting, for each pixel of interest, a plurality of pixels from the input image signal as tentative prediction taps.
- The image processing method includes: a class tap selection step of selecting, for each pixel of interest of an input image signal including a plurality of pixels, a plurality of pixels from the input image signal as class taps; a class determination step of determining the class of the target pixel from a plurality of classes based on the class taps; a tentative prediction tap selection step of selecting, for each target pixel, a plurality of pixels from the input image signal as tentative prediction taps; a true prediction tap selection step of selecting from the input image signal, as true prediction taps, a plurality of pixels whose positional relationship with the target pixel is changed according to the position of the tentative prediction taps in the frame; and a calculation step of outputting an output image signal higher in quality than the input image by performing a calculation process based on the true prediction taps and conversion data obtained in advance by learning for each class.
- The recording medium records a program including: a class tap selection step of selecting, for each target pixel of an input image signal including a plurality of pixels, a plurality of pixels from the input image signal as class taps; a class determination step of determining the class of the pixel of interest from a plurality of classes based on the class taps; a tentative prediction tap selection step of selecting, for each pixel of interest, a plurality of pixels from the input image signal as tentative prediction taps; a true prediction tap selection step of selecting from the input image signal, as true prediction taps, a plurality of pixels whose positional relationship with the pixel of interest is changed according to the position of the tentative prediction taps in the frame; and a calculation step of outputting an output image signal higher in quality than the input image by performing a calculation process based on the true prediction taps and conversion data obtained in advance by learning for each class.
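The class classification adaptive processing summarized above can be sketched in code. This is a minimal illustration only: the 1-bit ADRC class code, the tap sizes, and the flat coefficient table are assumptions made for the example, not the coefficient sets the patent obtains by learning.

```python
import numpy as np

def adrc_class_code(class_tap):
    """1-bit ADRC: threshold each class-tap pixel at the midpoint of the
    tap's dynamic range, then pack the bits into an integer class code."""
    lo, hi = class_tap.min(), class_tap.max()
    if hi == lo:
        return 0
    bits = (class_tap >= (lo + hi) / 2).astype(int)
    return int("".join(map(str, bits)), 2)

def predict_pixel(class_tap, pred_tap, coeff_table):
    """Classify the pixel of interest, then form the weighted sum of the
    prediction tap with the coefficient set associated with that class."""
    cls = adrc_class_code(class_tap)
    return float(np.dot(coeff_table[cls], pred_tap))

# Toy usage: a 4-pixel class tap yields 2**4 = 16 classes.
coeff_table = np.full((16, 4), 0.25)    # stand-in for learned coefficients
class_tap = np.array([10.0, 200.0, 30.0, 180.0])
pred_tap = np.array([10.0, 200.0, 30.0, 180.0])
out = predict_pixel(class_tap, pred_tap, coeff_table)   # class 5, value 105.0
```

A real implementation would hold a separately learned coefficient set per class; here every class shares the same averaging filter purely to keep the sketch self-contained.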
- FIG. 1 is a diagram illustrating a pixel mask.
- FIG. 2 is a diagram illustrating a configuration of an embodiment of an image processing apparatus according to the present invention.
- FIG. 3 is a diagram illustrating a configuration example of an effective pixel area calculation circuit.
- FIG. 4 is a diagram illustrating an effective pixel area vertical flag VF and an effective pixel area horizontal flag HF.
- FIG. 5 is a diagram illustrating peripheral pixels to be created.
- FIG. 6 is a diagram showing an example of constructing a tap at an end of an image.
- FIG. 7 is a diagram illustrating an example of constructing a tap at an end of an image.
- FIG. 8 is a block diagram showing the configuration of the missing pixel creation circuit.
- FIG. 9 is a flowchart illustrating the processing of the preprocessing circuit.
- FIG. 10 is a diagram showing a configuration of the motion class generation circuit.
- FIG. 11 is a diagram illustrating a configuration of the motion detection circuit.
- FIG. 12 is a diagram showing taps used for calculating time activity.
- FIG. 13 is a diagram showing taps used for calculating a spatial activity.
- FIG. 14 is a diagram illustrating a threshold value for motion determination.
- FIG. 15 is a flowchart illustrating a process of setting a motion class code MCC of the motion determination circuit.
- FIG. 16 is a diagram illustrating pixels used for majority decision of the motion class code MCC.
- FIG. 17 is a flowchart illustrating a process of setting a motion class code MCC of the motion detection circuit.
- FIG. 18 is a diagram illustrating an example of the construction of taps at an end of an image.
- FIG. 19 is a diagram showing an example of constructing taps at an end of an image.
- FIG. 20 is a diagram illustrating pixels used for the interpolation processing.
- FIG. 21 is a diagram illustrating pixels whose pixel values are replaced.
- FIG. 22 is a block diagram showing another configuration of the missing pixel creating circuit.
- FIG. 23 is a diagram illustrating a configuration of an embodiment of an image processing apparatus that generates a coefficient set used in an image processing device that selectively performs one or more of the image processing mode that creates missing pixels, the image processing mode that takes chromatic aberration into account, and the image processing mode that takes the telop position into account.
- FIG. 24 is a diagram illustrating chromatic aberration.
- FIG. 25 is a diagram illustrating chromatic aberration.
- FIG. 26 is a diagram for explaining switching of taps.
- FIG. 27 is a diagram illustrating a configuration of an embodiment of an image processing apparatus that selectively performs one or a plurality of image processing modes in consideration of chromatic aberration and the telop position.
- FIG. 28 is a flowchart illustrating the process of switching taps corresponding to chromatic aberration.
- FIG. 29 is a diagram illustrating an example of a screen on which a telop or the like is displayed.
- FIG. 30 is a flowchart illustrating the process of switching taps corresponding to the position of the telop.
- FIG. 31 is a diagram illustrating a recording medium.
- FIG. 2 is a diagram illustrating a configuration of an embodiment of an image processing apparatus according to the present invention.
- The effective pixel area calculation circuit 11 determines, based on the vertical synchronization signal and the horizontal synchronization signal synchronized with the image input to the missing pixel creation circuit 12, whether each pixel of that image is located in the effective pixel area, and outputs an effective pixel area vertical flag VF and an effective pixel area horizontal flag HF indicating the result to the missing pixel creation circuit 12 (hereinafter, a pixel is also referred to as a tap).
- The missing pixel creation circuit 12 creates a pixel corresponding to each missing pixel included in the input image, based on the input image, the missing flag LF corresponding to each pixel of the image, and the effective pixel area vertical flag VF and effective pixel area horizontal flag HF supplied from the effective pixel area calculation circuit 11, and replaces the missing pixel with the created pixel for output.
- FIG. 3 is a diagram illustrating an example of the configuration of the effective pixel area calculation circuit 11.
- The vertical synchronization signal detection circuit 41 generates, based on the input vertical synchronization signal, data indicating whether each pixel of the image is within the effective pixel area in the vertical direction of the screen (hereinafter referred to as vertical effective pixel area data), and supplies it to the effective area calculation circuit 43.
- The horizontal synchronization signal detection circuit 42 generates, based on the input horizontal synchronization signal, data indicating whether each pixel of the image is within the effective pixel area in the horizontal direction of the screen (hereinafter referred to as horizontal effective pixel area data), and supplies it to the effective area calculation circuit 43.
- the effective area calculation circuit 43 corrects the vertical effective pixel area data supplied from the vertical synchronization signal detection circuit 41, and outputs the corrected data to the missing pixel creation circuit 12 as an effective pixel area vertical flag VF.
- In the effective pixel area vertical flag VF, for example, as shown in FIG. 4, a value of 0 is set within the effective range of the display, and a value of 1 is set outside the effective range of the display.
- the effective area calculation circuit 43 corrects the horizontal effective pixel area data supplied from the horizontal synchronization signal detection circuit 42, and outputs the corrected data to the missing pixel creation circuit 12 as an effective pixel area horizontal flag HF.
- In the effective pixel area horizontal flag HF, a value of 0 is set within the effective range of the display, and a value of 1 is set outside the effective range of the display.
- It can thus be determined whether or not each pixel of the input image falls within the effective pixel area.
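As a rough illustration, the flags VF and HF can be thought of as comparisons of line and pixel counters (derived from the synchronization signals) against the effective range. The active-area boundaries below are invented for the example; the patent derives them from the detected sync signals.

```python
# Assumed effective-area boundaries (illustrative values only).
ACTIVE_TOP, ACTIVE_BOTTOM = 20, 500     # first/last effective line
ACTIVE_LEFT, ACTIVE_RIGHT = 16, 720     # first/last effective column

def effective_area_flags(line, col):
    """Return (VF, HF): 0 inside the effective range of the display,
    1 outside it, matching the convention of FIG. 4."""
    vf = 0 if ACTIVE_TOP <= line <= ACTIVE_BOTTOM else 1
    hf = 0 if ACTIVE_LEFT <= col <= ACTIVE_RIGHT else 1
    return vf, hf
```

A pixel is inside the effective pixel area exactly when both flags are 0; for example, a pixel in the vertical blanking region yields VF = 1 regardless of its column.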
- When the image input to the missing pixel creation circuit 12 is interlaced, the position of a pixel in the field of interest is vertically offset by 1/2 pixel from the position of the corresponding pixel in the field one before or one after the field of interest.
- The missing pixel creation circuit 12 creates the pixel value of a missing pixel by class classification adaptive processing, based on the pixel values of peripheral pixels in the same field as the pixel to be created (field k in the figure) and the pixel values of pixels existing in the immediately preceding field (field k-1 in the figure), as shown in FIG. 5.
- The missing pixel creation circuit 12, based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF supplied from the effective pixel area calculation circuit 11, selects as taps only pixels located within the effective range of the image (pixels outside the effective range are discarded) and creates the pixel values of the missing pixels based on the selected pixels.
- Alternatively, the missing pixel creation circuit 12 may, based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF supplied from the effective pixel area calculation circuit 11, adaptively switch to a tap structure that targets pixels located in the effective area of the image, so that only effective pixels are selected as taps, and create the pixel values.
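One way to realize such an adaptive tap structure is to fold tap positions that would fall outside the effective area back onto the nearest effective pixel, so that every selected tap is an effective pixel. The cross-shaped tap and the clamping policy here are assumptions for this sketch, not the structures disclosed in the embodiments.

```python
def clamp(v, lo, hi):
    """Restrict v to the closed interval [lo, hi]."""
    return max(lo, min(hi, v))

def build_taps(y, x, height, width):
    """Return tap coordinates for the pixel of interest at (y, x),
    with each tap folded back inside the effective area of the image."""
    offsets = [(-1, 0), (0, -1), (0, 0), (0, 1), (1, 0)]   # assumed shape
    return [(clamp(y + dy, 0, height - 1), clamp(x + dx, 0, width - 1))
            for dy, dx in offsets]
```

At an interior pixel the tap keeps its nominal cross shape, while at a corner the out-of-range positions collapse onto the corner pixel, which is one simple form of switching the tap structure by position.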
- FIG. 8 is a block diagram showing an example of the configuration of the missing pixel creation circuit 12.
- the pixel values input to the missing pixel creation circuit 12 and the missing flag LF indicating missing pixels are supplied to the preprocessing circuit 101 and the tap construction circuit 102-1.
- the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF input from the effective pixel area calculation circuit 11 are supplied to the preprocessing circuit 101, the tap construction circuits 102-1 to 102-5, the class synthesis circuit 107, and the coefficient holding class code selection circuit 109.
- the preprocessing circuit 101 sets a missing flag LF of a pixel located outside the effective pixel area based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF. For example, a missing flag LF of "1" indicates that the pixel value is missing, and a missing flag LF of "0" indicates that the pixel value is not missing.
- the pre-processing circuit 101 generates the value of the missing pixel in the effective pixel area by a linear interpolation filter based on the pixel value and the missing flag LF corresponding to the pixel, and assigns the value to the missing pixel.
- the value is set to the missing pixel and supplied to the tap construction circuits 102-2 through 102-5. That is, when pixels are missing, the preprocessing circuit 101 increases the number of usable prediction taps by the number of missing pixels.
- the class tap does not include a missing pixel, and the class classification process does not use the pixel value generated by the preprocessing circuit 101.
- in step S11, the preprocessing circuit 101 determines, based on the missing flag LF, whether or not the target pixel is missing. If it is determined that the target pixel is not missing, the process proceeds to step S12, where the input pixel value is set as the pixel value of the target pixel, and the process ends.
- if it is determined in step S11 that the target pixel is missing, the process proceeds to step S13, where the preprocessing circuit 101 determines, based on the missing flag LF, whether or not either of the two pixels horizontally adjacent to the target pixel is missing.
- if it is determined in step S13 that neither of the two pixels horizontally adjacent to the target pixel is missing, the process proceeds to step S14, where the preprocessing circuit 101 sets the average of the pixel values of the two horizontally adjacent pixels as the pixel value of the target pixel, and the process ends.
- if it is determined in step S13 that one of the two pixels horizontally adjacent to the target pixel is missing, the process proceeds to step S15, where the preprocessing circuit 101 determines whether both of the two horizontally adjacent pixels are missing. If it is determined in step S15 that one of the two horizontally adjacent pixels is not missing, the process proceeds to step S16, where the preprocessing circuit 101 sets the pixel value of the horizontally adjacent pixel that is not missing as the pixel value of the target pixel, and the process ends.
- if it is determined in step S15 that both of the pixels horizontally adjacent to the target pixel are missing, the process proceeds to step S17, where the preprocessing circuit 101 determines, based on the missing flag LF, whether or not either of the two pixels vertically adjacent to the target pixel is missing.
- if it is determined in step S17 that neither of the two pixels vertically adjacent to the target pixel is missing, the process proceeds to step S18, where the preprocessing circuit 101 sets the average of the pixel values of the two vertically adjacent pixels as the pixel value of the target pixel, and the process ends. If it is determined in step S17 that either of the two pixels vertically adjacent to the target pixel is missing, the process proceeds to step S19, where the preprocessing circuit 101 determines, based on the missing flag LF, whether or not all pixels adjacent to the target pixel are missing.
- if it is determined in step S19 that not all pixels adjacent to the target pixel are missing, the process proceeds to step S20, where the preprocessing circuit 101 sets the pixel value of a non-missing pixel adjacent to the target pixel as the pixel value of the target pixel, and the process ends.
- if it is determined in step S19 that all pixels adjacent to the target pixel are missing, the preprocessing circuit 101 sets the pixel value of the pixel of the past frame at the same position as the target pixel as the pixel value of the target pixel, and the process ends.
- the preprocessing circuit 101 linearly interpolates the pixel value of the processing target pixel in the effective pixel area from the pixel values of the peripheral pixels.
- the interpolation processing by the preprocessing circuit 101 can expand the range of taps that can be used in subsequent processing.
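The fallback order of the interpolation described in steps S11 through S20 (horizontal average, then a single horizontal neighbor, then vertical average, then any adjacent pixel, and finally the same position in the past frame) can be sketched as follows. This is an illustrative sketch, not the patent's implementation; all names are hypothetical, and out-of-bounds neighbors are treated as missing.

```python
def interpolate_missing(cur, prev, missing, y, x):
    """Fallback interpolation of one pixel (steps S11-S20 style).

    cur, prev: 2-D lists of pixel values (current and past frame);
    missing:   2-D list of flags (1 = missing). Names are illustrative.
    """
    if not missing[y][x]:                      # S11/S12: pixel is present
        return cur[y][x]
    h = [(y, x - 1), (y, x + 1)]               # horizontal neighbours
    v = [(y - 1, x), (y + 1, x)]               # vertical neighbours

    def present(points):
        return [cur[j][i] for j, i in points
                if 0 <= j < len(cur) and 0 <= i < len(cur[0])
                and not missing[j][i]]

    ph = present(h)
    if len(ph) == 2:                           # S14: average of both
        return (ph[0] + ph[1]) / 2
    if len(ph) == 1:                           # S16: the surviving neighbour
        return ph[0]
    pv = present(v)
    if len(pv) == 2:                           # S18: vertical average
        return (pv[0] + pv[1]) / 2
    pa = present(h + v)                        # S20: any non-missing neighbour
    if pa:
        return pa[0]
    return prev[y][x]                          # all adjacent missing: past frame
```

The order of the fallbacks mirrors the flowchart: horizontal neighbors are preferred because, in interlaced material, they belong to the same line as the missing pixel.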
- based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF, the tap construction circuit 102-1 sets the missing flag LF of pixels located outside the effective pixel area, and supplies the missing flag LF to the motion class generation circuit 103 as the missing flag tap SLFT1.
- the tap construction circuit 102-1 selects a motion class tap TD1 composed of non-missing pixels in the effective pixel area, and supplies the selected motion class tap TD1 to the motion class generation circuit 103.
- the motion class generation circuit 103 generates a motion class code MCC and a static/motion flag SMF based on the parameters supplied from the initialization circuit 111, the missing flag tap SLFT1 supplied from the tap construction circuit 102-1, and the selected motion class tap TD1, and outputs them to the tap construction circuits 102-2 to 102-5 and the class synthesis circuit 107.
- the motion class code MCC carries two bits of information indicating the amount of motion, and the static/motion flag SMF indicates the presence or absence of motion with one bit.
- FIG. 10 shows the configuration of the motion class generation circuit 103.
- the missing flag tap SLFT1 and the motion class tap TD1 supplied from the tap construction circuit 102-1 are supplied to the motion detection circuit 151.
- the motion detection circuit 151 generates and outputs a motion class code MCC based on the missing flag tap SLFT1 and the motion class tap TD1, and supplies the generated motion class code MCC to the static/motion determination circuit 152.
- the temporal activity calculation circuit 181 calculates the temporal activity based on the missing flag tap SLFT1 and the motion class tap TD1 supplied from the tap construction circuit 102-1: for example, it sums the absolute values of the differences between the pixel values of the non-missing 3×3 pixels centered on the target pixel of creation (included in the motion class tap TD1) and the pixel values of the corresponding non-missing 3×3 pixels of the previous frame in the effective area (also included in the motion class tap TD1), and supplies the result to the motion determination circuit 184.
- the temporal activity calculation circuit 181 calculates the temporal activity using only non-missing pixels, without using missing pixels.
- FIG. 12 (A) is a diagram showing an example of the 3×3 pixels centered on the pixel to be created, which are used for calculating the temporal activity.
- "Era-" indicates a missing pixel.
- FIG. 12 (B) is a diagram showing an example of the 3×3 pixels of the immediately preceding frame corresponding to the pixels shown in FIG. 12 (A).
- L1 to L3 shown in FIGS. 12 (A) and 12 (B) denote lines, and the same line numbers indicate the same positions in the vertical direction.
- H1 to H3 shown in FIGS. 12 (A) and 12 (B) indicate the horizontal positions of the pixels, and the same numbers indicate the same positions in the horizontal direction.
- the temporal activity is calculated by equation (1):
- Temporal activity = Σ |P(pi) - P(qi)| ... (1)
- where the sum is taken over the V non-missing pixel pairs, P() represents the pixel value of a pixel, | | represents the absolute value, and V represents the number obtained by subtracting the number of missing pixels from the number of pixels in the frame in which the pixel to be created exists.
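The temporal activity computation described above can be sketched as follows. This is an illustrative sketch under the assumption that the sum of absolute frame differences is taken only over pixel pairs whose current-frame pixel is not missing; function and variable names are not from the patent.

```python
def temporal_activity(cur, prev, missing):
    """Sum of |cur - prev| over the non-missing pixel pairs of a
    block (equation (1) style). Returns the sum and the count V of
    usable pairs. Names are illustrative."""
    total = 0
    v = 0                               # count of usable pixel pairs
    for c_row, p_row, m_row in zip(cur, prev, missing):
        for c, p, m in zip(c_row, p_row, m_row):
            if not m:                   # skip missing pixels entirely
                total += abs(c - p)
                v += 1
    return total, v
```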
- the spatial activity calculation circuit 182 calculates the spatial activity based on the missing flag tap SLFT1 and the motion class tap TD1 supplied from the tap construction circuit 102-1, for example, by adding 1 to the difference between the maximum value and the minimum value of the 3×3 pixels centered on the pixel to be created, and supplies the result to the threshold value setting circuit 183.
- FIG. 13 is a diagram showing an example of the 3×3 pixels centered on a missing pixel to be created, which are used for calculating the spatial activity.
- the spatial activity is calculated by equation (2).
- Spatial activity = Max(qi) - Min(qi) + 1 ... (2)
- Max(qi) indicates the maximum of the pixel values q1 to q9.
- Min(qi) indicates the minimum of the pixel values q1 to q9.
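Equation (2) can be sketched directly. The function below is illustrative; `block` stands for the flat list of pixel values q1 to q9.

```python
def spatial_activity(block):
    """Max - Min + 1 over a block of pixel values, per equation (2).
    `block` is a flat list q1..q9; the name is illustrative."""
    return max(block) - min(block) + 1
```

Adding 1 keeps the spatial activity strictly positive even for a perfectly flat block.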
- the threshold value setting circuit 183 selects a motion judgment threshold value stored in advance in the threshold value setting circuit 183 based on the spatial activity supplied from the spatial activity calculation circuit 182, and supplies the selected threshold value to the motion determination circuit 184.
- a threshold having a different value is selected depending on the value of the spatial activity.
- the motion determination circuit 184 sets the motion class code MCC based on the motion judgment threshold supplied from the threshold value setting circuit 183 and the temporal activity supplied from the temporal activity calculation circuit 181, and supplies it to the majority decision circuit 185.
- FIG. 14 is a diagram illustrating the thresholds for motion determination. Different values are used for the motion determination threshold depending on the value of the spatial activity: as the spatial activity increases, larger thresholds are used. This takes into account that, for a pixel with a large spatial activity, even a small amount of movement produces a large temporal activity.
- in step S31, the motion determination circuit 184 determines whether or not the temporal activity is equal to or less than threshold 1. If it is determined that the temporal activity is equal to or less than threshold 1, the process proceeds to step S32, where the motion class code MCC is set to 0, and the process ends.
- if it is determined in step S31 that the temporal activity exceeds threshold 1, the process proceeds to step S33, where the motion determination circuit 184 determines whether or not the temporal activity is equal to or less than threshold 2. If it is determined that the temporal activity is equal to or less than threshold 2, the process proceeds to step S34, where the motion class code MCC is set to 1, and the process ends.
- if it is determined in step S33 that the temporal activity exceeds threshold 2, the process proceeds to step S35, where the motion determination circuit 184 determines whether or not the temporal activity is equal to or less than threshold 3. If it is determined that the temporal activity is equal to or less than threshold 3, the process proceeds to step S36, where the motion class code MCC is set to 2, and the process ends.
- if it is determined in step S35 that the temporal activity exceeds threshold 3, the process proceeds to step S37, where the motion determination circuit 184 sets the motion class code MCC to 3, and the process ends.
- as described above, the motion determination circuit 184 sets the motion class code MCC based on the thresholds and the temporal activity.
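The threshold cascade of steps S31 through S37 amounts to comparing the temporal activity against thresholds 1 to 3 in ascending order. A minimal sketch, with illustrative names:

```python
def motion_class_code(temporal, thresholds):
    """Steps S31-S37 style cascade: `thresholds` is the ascending
    list [threshold1, threshold2, threshold3] selected according to
    the spatial activity; returns MCC in 0..3. Names are illustrative."""
    for mcc, th in enumerate(thresholds):
        if temporal <= th:              # S31/S33/S35 comparisons
            return mcc                  # S32/S34/S36
    return 3                            # S37: exceeds all thresholds
```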
- the majority decision circuit 185 sets a final motion class code MCC based on the motion class codes MCC of a plurality of pixels. For example, as shown in FIG. 16, the majority decision circuit 185 sets the motion class code MCC of the target pixel based on the motion class codes MCC of 14 pixels around the target pixel of creation.
- in step S51, the motion detection circuit 151 determines whether or not to execute the majority decision in accordance with the parameter setting from the initialization circuit 111. If it is determined that the majority decision is not to be performed, the process proceeds to step S52, where the selector 187 selects the motion class code MCC of the target pixel output from the motion determination circuit 184 and sets it as the final motion class code MCC, and the process ends.
- if it is determined in step S51 that the majority decision is to be performed, the process proceeds to step S53, where the majority decision circuit 185 determines whether the number of pixels, out of the 14 pixels, for which the motion class code MCC of 3 is set is greater than threshold 3. If it is determined that the number of pixels for which the motion class code MCC of 3 is set is greater than threshold 3, the process proceeds to step S54, where the motion class code MCC is set to 3.
- the selector 187 outputs the output of the majority decision circuit 185 as the final motion class code MCC, and the process ends.
- if it is determined in step S53 that the number of pixels for which the motion class code MCC of 3 is set is equal to or less than threshold 3, the process proceeds to step S55, where the majority decision circuit 185 determines whether the sum of the number of pixels, out of the 14 pixels, for which the motion class code MCC of 3 is set and the number of pixels for which the motion class code MCC of 2 is set is greater than threshold 2. If it is determined that this sum is greater than threshold 2, the process proceeds to step S56, where the motion class code MCC is set to 2. The selector 187 outputs the output of the majority decision circuit 185 as the final motion class code MCC, and the process ends.
- if it is determined in step S55 that the sum of the number of pixels for which the motion class code MCC of 3 is set and the number of pixels for which the motion class code MCC of 2 is set is equal to or less than threshold 2, the process proceeds to step S57, where the majority decision circuit 185 determines whether the sum of the number of pixels, out of the 14 pixels, for which the motion class code MCC of 3 is set, the number of pixels for which the motion class code MCC of 2 is set, and the number of pixels for which the motion class code MCC of 1 is set is greater than threshold 1. If it is determined that this sum is greater than threshold 1, the motion class code MCC is set to 1, the selector 187 outputs the output of the majority decision circuit 185 as the final motion class code MCC, and the process ends.
- if it is determined in step S57 that the sum of the number of pixels for which the motion class code MCC of 3 is set, the number of pixels for which the motion class code MCC of 2 is set, and the number of pixels for which the motion class code MCC of 1 is set is equal to or less than threshold 1, the process proceeds to step S59, where the majority decision circuit 185 sets the motion class code MCC to 0, the selector 187 outputs the output of the majority decision circuit 185 as the final motion class code MCC, and the process ends.
- as described above, the motion detection circuit 151 sets the final motion class code MCC based on the motion class codes MCC of a plurality of pixels and the thresholds stored in advance.
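The majority decision of steps S53 through S59 can be sketched as a cascade of counts over the surrounding pixels' motion class codes. This is an illustrative sketch; the function name and threshold parameters are not from the patent.

```python
def majority_decision(codes, th1, th2, th3):
    """Steps S53-S59 style majority decision. `codes` holds the MCCs
    of the surrounding pixels (e.g. 14 of them); th1..th3 are the
    count thresholds. Names are illustrative."""
    n3 = codes.count(3)
    n2 = codes.count(2)
    n1 = codes.count(1)
    if n3 > th3:                       # S53/S54: enough class-3 pixels
        return 3
    if n3 + n2 > th2:                  # S55/S56: enough class-3 or -2
        return 2
    if n3 + n2 + n1 > th1:             # S57/S58 style: any motion at all
        return 1
    return 0                           # S59: predominantly still
```

Note that the counts are cumulative from the largest motion class downward, so a strong motion class is only assigned when enough neighbors agree.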
- the motion class generation circuit 103 sets the motion class code MCC from the pixel values of a plurality of pixels, and outputs the motion class code MCC to the static / motion determination circuit 152 and the missing pixel creation circuit 12. Output.
- the static/motion determination circuit 152 sets and outputs the static/motion flag SMF based on the motion class code MCC. For example, when the motion class code MCC is 0 or 1, the static/motion flag SMF is set to 0, and when the motion class code MCC is 2 or 3, the static/motion flag SMF is set to 1.
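The mapping from the motion class code to the static/motion flag given in the example above is a simple two-way split:

```python
def static_motion_flag(mcc):
    """SMF = 0 for MCC 0 or 1 (little or no motion), 1 for MCC 2 or 3,
    per the example mapping in the text."""
    return 0 if mcc in (0, 1) else 1
```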
- the tap construction circuit 102-2 selects the all-class prediction tap VET covering all class structures (not including pixels outside the effective pixel area) and supplies it to the variable tap selection circuit 108.
- based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF, the tap construction circuit 102-3 sets the missing flag LF of pixels located outside the effective pixel area, and supplies the missing flag LF to the DR class generation circuit 104 as the missing flag tap SLFT2.
- based on the motion class code MCC and the static/motion flag SMF supplied from the motion class generation circuit 103 and on the missing flag LF, the tap construction circuit 102-3 selects a DR class tap TD2 composed of non-missing pixels in the effective pixel area, and supplies the selected DR class tap TD2 to the DR class generation circuit 104.
- the DR class generation circuit 104 generates, based on the missing flag tap SLFT2 and the DR class tap TD2 supplied from the tap construction circuit 102-3, a DR class code DRCC determined according to the dynamic range, which is the difference between the maximum pixel value and the minimum pixel value of the non-missing pixels included in the DR class tap TD2, and outputs it to the class synthesis circuit 107.
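The DR class code can be sketched as follows. The patent only states that the code is determined according to the dynamic range of the non-missing tap pixels; the quantization into 2**bits bands over an assumed 8-bit pixel range is an illustrative assumption, and all names are hypothetical.

```python
def dr_class_code(tap, missing, bits=2):
    """Sketch: compute the dynamic range (max - min of the non-missing
    tap pixels) and quantize it into a small DR class code. The
    banding over an 8-bit range is an assumption for illustration."""
    vals = [p for p, m in zip(tap, missing) if not m]
    dr = max(vals) - min(vals)              # dynamic range of usable pixels
    # map a 0..255 dynamic range onto codes 0 .. 2**bits - 1
    return min(dr * (2 ** bits) // 256, 2 ** bits - 1)
```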
- based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF supplied from the effective pixel area calculation circuit 11, the tap construction circuit 102-4 sets the missing flag LF of pixels located outside the effective pixel area, and supplies the missing flag LF to the space class generation circuit 105 as the missing flag tap SLFT3.
- based on the motion class code MCC and the static/motion flag SMF supplied from the motion class generation circuit 103 and on the missing flag LF, the tap construction circuit 102-4 selects a space class tap TD3 composed of non-missing pixels in the effective pixel area, and supplies the selected space class tap TD3 to the space class generation circuit 105.
- the space class generation circuit 105 generates a space class code SCC corresponding to the pixel value pattern based on the missing flag tap SLFT3 and the space class tap TD3 supplied from the tap construction circuit 102-4, and outputs it to the class synthesis circuit 107.
- the tap construction circuit 102-5 selects the missing flag LF based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF supplied from the effective pixel area calculation circuit 11, and supplies the selected missing flag LF to the missing class generation circuit 106 as the missing flag tap SLFT4.
- the missing class generation circuit 106 generates a missing class code LCC based on the missing flag tap SLFT4 supplied from the tap construction circuit 102-5, and outputs it to the class synthesis circuit 107.
- based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF supplied from the effective pixel area calculation circuit 11, the class synthesis circuit 107 integrates the motion class code MCC, the static/motion flag SMF, the DR class code DRCC, the space class code SCC, and the missing class code LCC into one final class code CC, and outputs the class code CC to the coefficient holding class code selection circuit 109.
- the coefficient holding class code selection circuit 109 generates a prediction tap selection signal VT based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF supplied from the effective pixel area calculation circuit 11, the coefficient set and prediction structure supplied from the initialization circuit 111, and the class code CC supplied from the class synthesis circuit 107, and supplies the generated prediction tap selection signal VT to the variable tap selection circuit 108.
- the coefficient holding class code selection circuit 109 also outputs the prediction coefficient W selected from the coefficient set based on the class code CC to the estimation prediction operation circuit 110.
- the coefficient set supplied from the initialization circuit 111 is generated in advance for each class classified by the class code CC, and is stored in the initialization circuit 111.
- the variable tap selection circuit 108 selects a prediction tap based on the all-class prediction tap VET supplied from the tap construction circuit 102-2 and the prediction tap selection signal VT supplied from the coefficient holding class code selection circuit 109, and supplies the selected prediction tap to the estimation prediction operation circuit 110. For example, the variable tap selection circuit 108 selects the taps specified by the prediction tap selection signal VT from among the taps included in the all-class prediction tap VET, and sets the selected taps as the prediction tap.
- the product-sum unit 121 of the estimation prediction operation circuit 110 calculates the pixel value of the missing pixel using a linear estimation formula, based on the prediction tap supplied from the variable tap selection circuit 108 and the prediction coefficient W supplied from the coefficient holding class code selection circuit 109. Note that the product-sum unit 121 of the estimation prediction operation circuit 110 may instead calculate the pixel value of the missing pixel based on the prediction coefficient W using a nonlinear estimation formula.
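The linear estimation performed by the product-sum operation is the weighted sum of the prediction-tap pixel values with the class's prediction coefficients W. A minimal sketch, with illustrative names (a real implementation would also clip the result to the valid pixel range):

```python
def estimate_pixel(prediction_tap, coefficients):
    """Linear estimation formula: the missing pixel value is the
    inner product of the prediction-tap pixel values with the
    prediction coefficients W selected for the class."""
    return sum(w * x for w, x in zip(coefficients, prediction_tap))
```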
- the filter 122 of the estimation prediction operation circuit 110 calculates the pixel value of the missing pixel from the prediction tap supplied from the variable tap selection circuit 108.
- the estimation prediction operation circuit 110 selects and outputs the output of the filter 122 or the output of the product-sum unit 121 based on the output mode set by the initialization circuit 111, obtaining the result according to the output mode.
- the missing pixel creation circuit 12 uses the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF to calculate the dynamic range, motion, missing area, pixel value pattern, and the like from the pixels in the effective pixel area. Then, the missing pixel value is calculated based on the pixel values around the missing pixel (not including the pixel values of the pixels outside the effective pixel area).
- the missing pixel creation circuit 12 can also switch the output mode of the estimation prediction operation circuit 110 and execute the above-described processing on all pixels to improve the image quality of the input image (for example, increasing the gradation (increasing the number of bits of the Y data, U data, and V data), removing noise, removing quantization distortion (including removal of distortion in the time direction), creating quadruple-density resolution, and so on).
- when the missing pixel to be created is located at the edge of the image, the missing pixel creation circuit 12 may interpolate the pixel value of the missing pixel by linear interpolation based on the pixel values of adjacent pixels instead of by the class classification adaptive processing.
- when the missing pixel to be created is located at the edge of the image and all of its adjacent pixels are missing, the missing pixel creation circuit 12 may set a value corresponding to a predetermined inconspicuous color (for example, a value corresponding to gray) as the pixel value of the missing pixel.
- FIG. 22 is a block diagram showing another configuration of the missing pixel creation circuit 12 that executes the processing shown in FIG. 20 or FIG. 21.
- the pixel values and the missing flag LF indicating missing pixels, which are input to the missing pixel creation circuit 12, are supplied to the preprocessing circuit 201 and the tap construction circuit 202-1.
- the preprocessing circuit 201 executes the same processing as the preprocessing circuit 101: based on the input pixel values and the missing flag LF indicating missing pixels, it generates the value of a missing pixel with a linear interpolation filter, sets the value to the missing pixel, and supplies the result to the tap construction circuits 202-2 to 202-5.
- the tap construction circuit 202-1 supplies the missing flag LF of the selected pixels to the motion class generation circuit 203 as the missing flag tap SLFT1.
- the tap construction circuit 202-1 selects a motion class tap TD1 composed of non-missing pixels in the effective pixel area, and supplies the selected motion class tap TD1 to the motion class generation circuit 203.
- the motion class generation circuit 203 generates a motion class code MCC and a static/motion flag SMF based on the parameters supplied from the initialization circuit 211, the missing flag tap SLFT1 supplied from the tap construction circuit 202-1, and the selected motion class tap TD1, and outputs them to the tap construction circuits 202-2 to 202-5 and the class synthesis circuit 207.
- the motion class code MCC carries 2-bit information indicating the amount of motion, and the static/motion flag SMF indicates the presence or absence of motion with 1 bit. For example, when the motion class code MCC is 0 or 1, the static/motion flag SMF is set to 0, and when the motion class code MCC is 2 or 3, the static/motion flag SMF is set to 1.
- the tap construction circuit 202-2 selects the all-class prediction tap VET covering all class structures (excluding pixels outside the effective pixel area) based on the motion class code MCC, the static/motion flag SMF, and the missing flag LF supplied from the motion class generation circuit 203, and supplies it to the variable tap selection circuit 208.
- the tap construction circuit 202-3 supplies the selected missing flag LF to the DR class generation circuit 204 as the missing flag tap SLFT2. Based on the motion class code MCC, the static/motion flag SMF, and the missing flag LF supplied from the motion class generation circuit 203, the tap construction circuit 202-3 selects a DR class tap TD2 composed of non-missing pixels in the effective pixel area, and supplies the selected DR class tap TD2 to the DR class generation circuit 204.
- the DR class generation circuit 204 generates, based on the missing flag tap SLFT2 and the DR class tap TD2, a DR class code DRCC determined according to the dynamic range, which is the difference between the maximum pixel value and the minimum pixel value of the non-missing pixels, and outputs it to the class synthesis circuit 207.
- the tap construction circuit 202-4 supplies the selected missing flag LF to the space class generation circuit 205 as the missing flag tap SLFT3.
- based on the motion class code MCC, the static/motion flag SMF, and the missing flag LF supplied from the motion class generation circuit 203, the tap construction circuit 202-4 selects a space class tap TD3 composed of non-missing pixels in the effective pixel area, and supplies the selected space class tap TD3 to the space class generation circuit 205.
- the space class generation circuit 205 generates a space class code SCC corresponding to the pixel value pattern based on the missing flag tap SLFT3 and the space class tap TD3 supplied from the tap construction circuit 202-4, and outputs it to the class synthesis circuit 207.
- the tap construction circuit 202-5 selects the missing flag LF and supplies the selected missing flag LF to the missing class generation circuit 206 as the missing flag tap SLFT4.
- the missing class generation circuit 206 generates a missing class code LCC based on the missing flag tap SLFT4 supplied from the tap construction circuit 202-5, and outputs it to the class synthesis circuit 207.
- the class synthesis circuit 207 integrates the motion class code MCC, the static/motion flag SMF, the DR class code DRCC, the space class code SCC, and the missing class code LCC into one final class code CC, and outputs the class code CC to the coefficient holding class code selection circuit 209.
- the coefficient holding class code selection circuit 209 generates a prediction tap selection signal VT based on the previously learned coefficient set and prediction structure supplied from the initialization circuit 211 and the class code CC supplied from the class synthesis circuit 207, supplies the generated prediction tap selection signal VT to the variable tap selection circuit 208, and outputs the prediction coefficient W selected from the coefficient set based on the class code CC to the estimation prediction operation circuit 210.
- the variable tap selection circuit 208 selects a prediction tap based on the all-class prediction tap VET supplied from the tap construction circuit 202-2 and the prediction tap selection signal VT supplied from the coefficient holding class code selection circuit 209, and supplies the selected prediction tap to the estimation prediction operation circuit 210.
- the estimation prediction operation circuit 210 calculates the pixel value of the missing pixel using a linear estimation formula, based on the prediction tap supplied from the variable tap selection circuit 208 and the prediction coefficient W supplied from the coefficient holding class code selection circuit 209, and outputs it to the selection circuit 214.
- the estimation / prediction operation circuit 210 corresponds to the product-sum device 121 in FIG.
- the replacement circuit 212 sets a value corresponding to a predetermined inconspicuous color (for example, a value corresponding to gray) as the pixel value of the missing pixel based on the missing flag LF indicating missing pixels, and supplies it to the selection circuit 214.
- the linear interpolation circuit 213 performs the same processing as the preprocessing circuit 201: based on the input pixel values and the missing flag LF indicating missing pixels, it generates the value of a missing pixel with a linear interpolation filter, sets the value to the missing pixel, and supplies it to the selection circuit 214.
- the replacement circuit 212 and the linear interpolation circuit 213 correspond to the filter 122 in FIG.
- the selection circuit 214 selects the output of the estimation prediction operation circuit 210, the output of the replacement circuit 212, or the output of the linear interpolation circuit 213 based on the effective pixel area vertical flag VF and the effective pixel area horizontal flag HF supplied from the effective pixel area calculation circuit 11, and outputs it as the output of the missing pixel creation circuit 12.
- the missing pixel creation circuit 12 generates the value of the missing pixel from the pixel values of the pixels around the missing pixel by class classification adaptive processing based on the dynamic range, the motion, the missing state, and the change in pixel value, and can interpolate or replace a missing pixel located at the edge of the effective pixel area and output the result.
- the missing pixel creation circuit 12 may appropriately switch between the processes described with reference to FIGS. 6 and 7 and FIGS. 18 to 21.
- the class classification process may include, in the class tap, pixel values generated by the preprocessing circuit 101 as described above. As a result, the image processing apparatus according to the present invention always generates a higher-quality image regardless of the position of the pixel on the screen; for example, a missing pixel can be created with less error regardless of its position on the screen.
- FIG. 23 is a diagram illustrating a configuration of an embodiment of an image processing apparatus for generating a coefficient set in advance.
- the image input to the image processing apparatus is supplied to the down filter 303 and the normal equation calculation circuit 310.
- the display position calculation circuit 301 calculates the distance of each pixel of the image from the center of the screen, and supplies position information indicating that distance to the tap construction circuits 304-1 to 304-N.
- the display position calculation circuit 301 may supply position information indicating the distance of each pixel of the image from the center of the screen to the structure switching control circuit 302.
- the initialization circuit 312 supplies the structure switching control circuit 302 with image edge information, aberration information, processing mode, and telop position information.
- when the processing mode indicates the image edge mode, the structure switching control circuit 302 supplies the tap selection signal TS1, the tap selection signal TS2, and the tap selection signals TS3-1 to TS3-(N-2) corresponding to the image edge information to each of the tap construction circuits 304-1 to 304-N; when the processing mode indicates the aberration mode, it supplies the tap selection signal TS1, the tap selection signal TS2, and the tap selection signals TS3-1 to TS3-(N-2) corresponding to the aberration information to each of the tap construction circuits 304-1 to 304-N; and when the processing mode indicates the telop mode, it supplies the tap selection signal TS1, the tap selection signal TS2, and the tap selection signals TS3-1 to TS3-(N-2) corresponding to the telop position information to each of the tap construction circuits 304-1 to 304-N.
- the structure switching control circuit 302 may select a plurality of processing modes among the above three processing modes.
- each of the tap selection signal TS1, the tap selection signal TS2, and the tap selection signals TS3-1 to TS3-(N-2) is composed of a signal corresponding to red, a signal corresponding to green, and a signal corresponding to blue, that is, signals corresponding to RGB.
- the structure switching control circuit 302 calculates the distance of each pixel from the center of the screen from the physical address on the screen of each pixel supplied from the display position calculation circuit 301, and generates, based on the calculated distance from the center of the screen and the aberration information input from the initialization circuit 312, an aberration class code CCA including a class code corresponding to red, a class code corresponding to green, and a class code corresponding to blue.
- the structure switching control circuit 302 supplies the generated aberration class code CCA to the class synthesis circuit 307.
- the structure switching control circuit 302 may store in advance the relationship between the physical address of each pixel on the screen and its distance from the center of the screen, and obtain the distance of each pixel from the center of the screen from the stored relationship and the physical address of each pixel on the screen supplied from the display position calculation circuit 301.
- the structure switching control circuit 302 may generate an aberration class code CCA including a class code corresponding to red, a class code corresponding to green, and a class code corresponding to blue, and supply the generated aberration class code CCA to the class synthesis circuit 307.
- the structure switching control circuit 302 quantizes the amount of aberration to generate the aberration class code CCA.
- the blue light image included in white light is imaged at a position shifted with respect to the yellow light image; the yellow light image included in white light is imaged at a position closer to the optical axis than the blue light image and closer to the optical axis than the red light image; and the red light image included in white light is imaged at a position farther from the optical axis than the yellow light image.
- large chromatic aberration means that the distances between the imaging positions of the blue light image, the yellow light image, and the red light image are long.
- Fig. 25 (B) shows the relationship between the distance from the center of the screen and the magnitude of chromatic aberration. That is, chromatic aberration increases nonlinearly with distance from the center of the screen.
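The quantization of the aberration amount into an aberration class code can be sketched as below; the quadratic growth model and the thresholds are illustrative assumptions standing in for the measured relationship of Fig. 25(B).

```python
def aberration_class(distance, thresholds=(0.25, 0.5, 0.75)):
    """Quantize a normalized aberration amount into a small class code.
    The aberration is modeled here as growing nonlinearly (quadratically)
    with the normalized distance from the screen center; both the model
    and the threshold values are assumptions for illustration."""
    amount = distance ** 2          # nonlinear growth with distance
    code = 0
    for t in thresholds:
        if amount >= t:
            code += 1
    return code
```

A pixel at the screen center thus falls into class 0, while a corner pixel (normalized distance 1.0) falls into the highest class.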
- the down filter 303 applies, to the input image, a process corresponding to the aberration or a process such as the addition of jitter or noise, and supplies the resulting image, which has pixel values corresponding to the aberration or to which jitter or noise has been added, to the tap construction circuits 304-1 to 304-N.
- the tap construction circuit 304-1 switches the tap structure for each of red, green, and blue based on the position information supplied from the display position calculation circuit 301 and the tap selection signal TS1 supplied from the structure switching control circuit 302, selects pixels included in the image supplied from the down filter 303 as the motion class tap TD1 corresponding to red, green, and blue, and supplies the selected motion class tap TD1 to the motion class generation circuit 305.
- the motion class tap TD1 output by the tap construction circuit 304-1 comprises a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue.
- FIG. 26 is a diagram for explaining the tap structure for each of red, green, and blue in the tap construction circuit 304-1.
- the tap corresponding to green is constituted by taps centered on the pixel of interest, as shown in FIG. 26 (A).
- the structure switching control circuit 302 generates a correction vector, for example as shown in FIG., and supplies the tap selection signal TS1 including the generated correction vector to the tap construction circuit 304-1.
- the tap construction circuit 304-1 selects the correction target pixel for red with reference to the pixel of interest, based on the position information indicating the distance of each pixel from the center of the screen supplied from the display position calculation circuit 301 and the correction vector for red included in the tap selection signal TS1, and configures taps corresponding to red centered on the correction target pixel, as shown in FIG. 26(C).
- the tap construction circuit 304-1 selects the correction target pixel for blue with reference to the pixel of interest, based on the position information indicating the distance of each pixel from the center of the screen supplied from the display position calculation circuit 301 and the correction vector for blue included in the tap selection signal TS1, and configures a tap corresponding to blue centered on the correction target pixel.
- the tap selection signal TS1 is configured to include a correction vector for red, a correction vector for green, and a correction vector for blue, each based on the pixel of interest.
- the tap construction circuit 304-1 may configure a tap corresponding to red centered on the correction target pixel corresponding to red, based on the position information indicating the distance of each pixel from the center of the screen supplied from the display position calculation circuit 301 and the correction vector for red included in the tap selection signal TS1; a tap corresponding to green centered on the correction target pixel corresponding to green, based on that position information and the correction vector for green included in the tap selection signal TS1; and a tap corresponding to blue centered on the correction target pixel corresponding to blue, based on that position information and the correction vector for blue included in the tap selection signal TS1.
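The per-color tap centering described above can be sketched as follows; the square 3x3 tap shape, the coordinate convention, and the correction-vector format are illustrative assumptions.

```python
def build_rgb_taps(center, vectors, radius=1):
    """Build per-channel tap coordinate lists: each channel's taps are
    centered on the pixel of interest shifted by that channel's
    correction vector. The (2*radius+1)^2 square neighborhood is an
    illustrative tap shape, not the patent's exact structure."""
    x0, y0 = center
    taps = {}
    for channel, (dx, dy) in vectors.items():
        cx, cy = x0 + dx, y0 + dy     # corrected tap center
        taps[channel] = [(cx + i, cy + j)
                         for j in range(-radius, radius + 1)
                         for i in range(-radius, radius + 1)]
    return taps
```

With a zero correction vector for green and outward-shifted vectors for red and blue, the green taps stay centered on the pixel of interest while the red and blue taps follow the chromatic displacement.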
- the motion class generation circuit 305 generates, based on the parameters supplied from the parameter setting circuit 310 and the motion class tap TD1 supplied from the tap construction circuit 304-1, a motion class code MCC composed of a motion class code corresponding to red, a motion class code corresponding to green, and a motion class code corresponding to blue, and a static/dynamic flag SMF composed of a static/dynamic flag corresponding to red, a static/dynamic flag corresponding to green, and a static/dynamic flag corresponding to blue, and outputs them to the tap construction circuits 304-2 to 304-N and the class synthesis circuit 307.
- the tap construction circuit 304-2 switches the tap structure for each of red, green, and blue based on the motion class code MCC and static/dynamic flag SMF for each of red, green, and blue supplied from the motion class generation circuit 305, the position information indicating the distance of each pixel from the center of the screen supplied from the display position calculation circuit 301, and the tap selection signal TS2 supplied from the structure switching control circuit 302, selects the all-class prediction tap VET including a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue, and supplies it to the variable tap selection circuit 308.
- the tap construction circuit 304-3 switches the tap structure for each of red, green, and blue based on the motion class code MCC for each of red, green, and blue supplied from the motion class generation circuit 305, the static/dynamic flag SMF for each of red, green, and blue, the position information indicating the distance of each pixel from the center of the screen supplied from the display position calculation circuit 301, and the tap selection signal TS3-1 for each of red, green, and blue supplied from the structure switching control circuit 302, selects a class tap TD2-1 comprising a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue, and supplies the selected class tap TD2-1 to the class generation circuit 306-1.
- the class generation circuit 306-1 generates, based on the class tap TD2-1 supplied from the tap construction circuit 304-3, a class code CC1 composed of a class code corresponding to red, a class code corresponding to green, and a class code corresponding to blue, and outputs the generated class code CC1 to the class synthesis circuit 307.
- the class code CC1 can be, for example, a class code corresponding to the difference between the maximum pixel value and the minimum pixel value of the pixels included in the class tap TD2-1.
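A class code based on the difference between the maximum and minimum pixel values (the dynamic range of the class tap) might be computed as below; the 2-bit quantization and the 8-bit pixel range are illustrative assumptions.

```python
def dr_class(tap_values, bits=2, max_range=255):
    """Class code from the dynamic range (max - min) of the pixels in a
    class tap, quantized to `bits` bits. The quantization scheme and
    the 8-bit pixel range are assumptions for illustration."""
    dr = max(tap_values) - min(tap_values)
    levels = 1 << bits
    code = dr * levels // (max_range + 1)
    return min(code, levels - 1)   # clamp to the top class
```

A flat tap (dynamic range 0) maps to class 0, while a full-range tap maps to the highest class.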
- each of the tap construction circuits 304-4 to 304-N selects, based on the motion class code MCC and static/dynamic flag SMF supplied from the motion class generation circuit 305, the position information indicating the distance of each pixel from the center of the screen supplied from the display position calculation circuit 301, and the corresponding one of the tap selection signals TS3-2 to TS3-(N-2) supplied from the structure switching control circuit 302, the corresponding one of the class taps TD2-2 to TD2-(N-2), each consisting of a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue, and supplies the selected class tap to the corresponding one of the class generation circuits 306-2 to 306-(N-2).
- each of the class generation circuits 306-2 to 306-(N-2) generates, based on the corresponding one of the class taps TD2-2 to TD2-(N-2) supplied from the tap construction circuits 304-4 to 304-N, the corresponding one of the class codes CC2 to CC(N-2), each including a class code corresponding to red, a class code corresponding to green, and a class code corresponding to blue, and outputs the generated class codes CC2 to CC(N-2) to the class synthesis circuit 307.
- One of the class codes CC2 to CC(N-2) can be, for example, a class code corresponding to a pixel value pattern.
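A "pixel value pattern" class code is often realized as a 1-bit ADRC code; the patent does not specify the exact scheme, so the ADRC-style re-quantization below is an assumption for illustration.

```python
def pattern_class(tap_values):
    """One-bit ADRC-style pattern code: each tap pixel is re-quantized
    to 1 bit against the tap's mid-level, and the resulting bits are
    concatenated into one integer class code. (ADRC is one common
    realization of a pixel-value-pattern class; the exact scheme used
    by the circuit is an assumption.)"""
    lo, hi = min(tap_values), max(tap_values)
    mid = (lo + hi) / 2
    code = 0
    for v in tap_values:
        code = (code << 1) | (1 if v >= mid else 0)
    return code
```

A 4-pixel tap thus yields one of 16 pattern classes, independent of the tap's absolute brightness.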
- the class synthesis circuit 307 integrates, based on the class code corresponding to red included in the motion class code MCC and the static/dynamic flag corresponding to red included in the static/dynamic flag SMF, the class code corresponding to red included in the aberration class code CCA and the class codes corresponding to red included in the class codes CC1 to CC(N-2) into the final class code corresponding to red of one class code TCC.
- the class synthesis circuit 307 integrates, based on the class code corresponding to green included in the motion class code MCC and the static/dynamic flag corresponding to green included in the static/dynamic flag SMF, the class code corresponding to green included in the aberration class code CCA and the class codes corresponding to green included in the class codes CC1 to CC(N-2) into the final class code corresponding to green of one class code TCC.
- the class synthesis circuit 307 integrates, based on the class code corresponding to blue included in the motion class code MCC and the static/dynamic flag corresponding to blue included in the static/dynamic flag SMF, the class code corresponding to blue included in the aberration class code CCA and the class codes corresponding to blue included in the class codes CC1 to CC(N-2) into the final class code corresponding to blue of one class code TCC.
- the class synthesis circuit 307 outputs to the class selection circuit 309 a class code TCC composed of a class code corresponding to red, a class code corresponding to green, and a class code corresponding to blue.
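The integration of the partial class codes (motion, static/dynamic, aberration, CC1 to CC(N-2)) into one final class code can be sketched as a bit concatenation performed once per color component; the field widths below are illustrative assumptions.

```python
def synthesize_class(codes, widths):
    """Merge several partial class codes into one final class code by
    bit concatenation. `codes` and `widths` are parallel lists: each
    partial code occupies a fixed bit field whose width is given in
    `widths` (the widths here are assumptions for illustration)."""
    total = 0
    for code, width in zip(codes, widths):
        assert 0 <= code < (1 << width), "code must fit its bit field"
        total = (total << width) | code
    return total
```

Running this once each for red, green, and blue yields the three per-component final codes that together form the class code TCC.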
- based on the class code TCC supplied from the class synthesis circuit 307, the class code selection circuit 309 generates a prediction tap selection signal VT composed of a signal corresponding to red, a signal corresponding to green, and a signal corresponding to blue, supplies the generated prediction tap selection signal VT to the variable tap selection circuit 308, and outputs the class code TCC to the normal equation calculation circuit 310.
- the variable tap selection circuit 308 selects, based on the all-class prediction tap VET supplied from the tap construction circuit 304-2 and the prediction tap selection signal VT supplied from the class code selection circuit 309, a prediction tap ET consisting of a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue, and supplies the selected prediction tap ET to the normal equation calculation circuit 310.
- when the normal equation calculation circuit 310 receives the prediction tap ET, which is the learning data supplied from the variable tap selection circuit 308, and the input image, which is the teacher data, it uses them to calculate a prediction coefficient W that minimizes the error by the least squares method.
- the prediction coefficient W is composed of a prediction coefficient corresponding to red, a prediction coefficient corresponding to green, and a prediction coefficient corresponding to blue.
- the predicted value E[y] of the pixel value y of the original image corresponding to the input image (hereinafter referred to as teacher data, as appropriate) is defined by a linear first-order combination of the pixel values x1, x2, ... (hereinafter referred to as learning data, as appropriate) of an image that has been passed through the down filter 303 and has pixel values corresponding to the aberration, and predetermined prediction coefficients w1, w2, ....
- in this case, the predicted value E[y] can be expressed by the following equation (3): E[y] = w1x1 + w2x2 + ··· (3)
- the prediction coefficients wi that give a predicted value E[y] close to the pixel value y of the original image are those that minimize the square error between E[y] and y; by solving for this minimum, the optimal prediction coefficients w can be obtained.
- to solve equation (9), it is possible to apply, for example, the sweeping-out method (Gauss-Jordan elimination).
- when the pixel values of the prediction tap ET included in the learning data are x1, x2, x3, ..., and the prediction coefficients W to be obtained are w1, w2, w3, ..., the pixel value y of a certain pixel in the teacher data is approximated by the linear first-order combination y ≈ w1x1 + w2x2 + w3x3 + ···; the prediction coefficients w1, w2, w3, ... that minimize the square error of this approximation are obtained by solving the normal equation shown in the above equation (9).
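The least-squares solution via the normal equations and the sweeping-out (Gauss-Jordan) method can be sketched as follows; the minimal pivoting is an assumption that suffices only for well-conditioned tap data.

```python
def solve_prediction_coefficients(X, y):
    """Least-squares prediction coefficients w for y ~ X w: form the
    normal equations (X^T X) w = X^T y and solve them by Gauss-Jordan
    elimination (the sweeping-out method). A sketch: X^T X is assumed
    nonsingular and no row pivoting is performed."""
    n = len(X[0])
    # normal equation matrix A = X^T X and right-hand side b = X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(n)]
    # Gauss-Jordan sweep-out on the augmented system [A | b]
    for i in range(n):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]   # normalize the pivot row
        b[i] /= p
        for k in range(n):
            if k != i:
                f = A[k][i]
                A[k] = [a - f * c for a, c in zip(A[k], A[i])]
                b[k] -= f * b[i]
    return b
```

For tap data that admits an exact fit, the routine recovers the exact coefficients; in general it returns the least-squares minimizer of the summed square error.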
- the normal equation calculation circuit 310 sets up and solves a normal equation from the prediction taps of the same class corresponding to red and the red components of the corresponding teacher data, thereby calculating the prediction coefficients w1, w2, w3, ... corresponding to red that minimize the square error of the predicted value w1x1 + w2x2 + w3x3 + ··· with respect to the true value y corresponding to red.
- similarly, the normal equation calculation circuit 310 calculates the prediction coefficients corresponding to green that minimize the square error of the predicted value with respect to the true value y corresponding to green, from the prediction taps of the same class corresponding to green and the green components of the corresponding teacher data, and calculates the prediction coefficients corresponding to blue that minimize the square error of the predicted value with respect to the true value y corresponding to blue, from the prediction taps of the same class corresponding to blue and the blue components of the corresponding teacher data.
- in this way, a prediction coefficient W composed of a prediction coefficient corresponding to red, a prediction coefficient corresponding to green, and a prediction coefficient corresponding to blue is generated for each class.
- the prediction coefficient W for each class, composed of the prediction coefficient corresponding to red, the prediction coefficient corresponding to green, and the prediction coefficient corresponding to blue, obtained in the normal equation calculation circuit 310, is supplied together with the class code TCC to the coefficient memory 311.
- in the coefficient memory 311, the prediction coefficient W from the normal equation calculation circuit 310 is stored at the address corresponding to the class indicated by the class code TCC.
- the image processing apparatus shown in FIG. 23 can generate a coefficient set used in an image processing apparatus that selectively executes one or more of the image processing mode for creating missing pixels, the image processing mode considering chromatic aberration, and the image processing mode considering the telop position; a coefficient set used in an image processing apparatus that performs any one of these processes may also be generated.
- FIG. 27 is a diagram illustrating the configuration of an embodiment of an image processing apparatus according to the present invention that, using the coefficient set generated by the image processing apparatus shown in FIG. 23, selectively executes one or more of the image processing mode for creating missing pixels, the image processing mode considering chromatic aberration, and the image processing mode considering the telop position.
- the display position calculation circuit 401 calculates the distance of each pixel of the input image from the center of the screen, and supplies position information indicating the distance of each pixel from the center of the screen to the tap construction circuits 403-1 to 403-N.
- the display position calculation circuit 401 may supply position information indicating the distance of each pixel of the image from the center of the screen to the structure switching control circuit 402.
- the initialization circuit 410 supplies the structure switching control circuit 402 with image edge information, aberration information, processing mode, and telop position information.
- when the processing mode indicates the image edge mode, the structure switching control circuit 402 supplies the tap selection signal TS1, the tap selection signal TS2, and the tap selection signals TS3-1 to TS3-(N-2) corresponding to the image edge information to each of the tap construction circuits 403-1 to 403-N; when the processing mode indicates the aberration mode, it supplies the tap selection signal TS1, the tap selection signal TS2, and the tap selection signals TS3-1 to TS3-(N-2) corresponding to the aberration information to each of the tap construction circuits 403-1 to 403-N; and when the processing mode indicates the telop mode, it supplies the tap selection signal TS1, the tap selection signal TS2, and the tap selection signals TS3-1 to TS3-(N-2) corresponding to the telop position information to each of the tap construction circuits 403-1 to 403-N.
- the structure switching control circuit 402 may select a plurality of processing modes among the above three processing modes.
- the aberration mode will be described as an example.
- each of the tap selection signal TS1, the tap selection signal TS2, and the tap selection signals TS3-1 to TS3-(N-2) is composed of a signal corresponding to red, a signal corresponding to green, and a signal corresponding to blue, that is, signals corresponding to RGB.
- the structure switching control circuit 402 generates, based on the aberration information input from the initialization circuit 410, an aberration class code CCA composed of a class code corresponding to red, a class code corresponding to green, and a class code corresponding to blue, and supplies the generated aberration class code CCA to the class synthesis circuit 406.
- based on the position information supplied from the display position calculation circuit 401 and the tap selection signal TS1 supplied from the structure switching control circuit 402, the tap construction circuit 403-1 switches the tap structure for each of red, green, and blue, selects pixels included in the input image as the motion class tap TD1 corresponding to red, green, and blue, and supplies the selected motion class tap TD1 to the motion class generation circuit 404.
- the motion class tap TD1 output by the tap construction circuit 403-1 is composed of a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue.
- the motion class generation circuit 404 generates, based on the parameters supplied from the initialization circuit 411 and the motion class tap TD1 supplied from the tap construction circuit 403-1, a motion class code MCC composed of a motion class code corresponding to red, a motion class code corresponding to green, and a motion class code corresponding to blue, and a static/dynamic flag SMF composed of a static/dynamic flag corresponding to red, a static/dynamic flag corresponding to green, and a static/dynamic flag corresponding to blue, and outputs the motion class code MCC and the static/dynamic flag SMF to the tap construction circuits 403-2 to 403-N and the class synthesis circuit 406.
- the tap construction circuit 403-2 switches the tap structure for each of red, green, and blue based on the motion class code MCC and static/dynamic flag SMF for each of red, green, and blue supplied from the motion class generation circuit 404 and the tap selection signal TS2 supplied from the structure switching control circuit 402, selects the all-class prediction tap VET consisting of a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue, and supplies the all-class prediction tap VET to the variable tap selection circuit 407.
- the tap construction circuit 403-3 switches the tap structure for each of red, green, and blue based on the motion class code MCC for each of red, green, and blue supplied from the motion class generation circuit 404, the static/dynamic flag SMF for each of red, green, and blue, and the tap selection signal TS3-1 for each of red, green, and blue supplied from the structure switching control circuit 402, selects a class tap TD2-1 consisting of a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue, and supplies the selected class tap TD2-1 to the class generation circuit 405-1.
- the class generation circuit 405-1 generates, based on the class tap TD2-1 supplied from the tap construction circuit 403-3, a class code CC1 composed of a class code corresponding to red, a class code corresponding to green, and a class code corresponding to blue, and outputs the generated class code CC1 to the class synthesis circuit 406.
- the class code CC1 can be a code corresponding to the difference between the maximum pixel value and the minimum pixel value of the pixels included in the class tap TD2-1.
- each of the tap construction circuits 403-4 to 403-N selects, based on the motion class code MCC and static/dynamic flag SMF supplied from the motion class generation circuit 404 and the corresponding one of the tap selection signals supplied from the structure switching control circuit 402, the corresponding one of the class taps TD2-2 to TD2-(N-2), each consisting of a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue.
- each of the class generation circuits 405-2 to 405-(N-2) generates, based on the corresponding one of the class taps TD2-2 to TD2-(N-2), the corresponding one of the class codes CC2 to CC(N-2), each including a class code corresponding to red, a class code corresponding to green, and a class code corresponding to blue, and outputs the generated class codes CC2 to CC(N-2) to the class synthesis circuit 406.
- Class code CC 2 can be, for example, a class code corresponding to a pixel value pattern.
- the class synthesis circuit 406 integrates, based on the class code corresponding to red included in the motion class code MCC and the static/dynamic flag corresponding to red included in the static/dynamic flag SMF, the class code corresponding to red included in the aberration class code CCA and the class codes corresponding to red included in the class codes CC1 to CC(N-2) into the final class code corresponding to red of one class code TCC.
- the class synthesis circuit 406 integrates, based on the class code corresponding to green included in the motion class code MCC and the static/dynamic flag corresponding to green included in the static/dynamic flag SMF, the class code corresponding to green included in the aberration class code CCA and the class codes corresponding to green included in the class codes CC1 to CC(N-2) into the final class code corresponding to green of one class code TCC.
- the class synthesis circuit 406 integrates, based on the class code corresponding to blue included in the motion class code MCC and the static/dynamic flag corresponding to blue included in the static/dynamic flag SMF, the class code corresponding to blue included in the aberration class code CCA and the class codes corresponding to blue included in the class codes CC1 to CC(N-2) into the final class code corresponding to blue of one class code TCC.
- the class synthesis circuit 406 outputs to the coefficient holding class code selection circuit 408 a class code TCC composed of a class code corresponding to red, a class code corresponding to green, and a class code corresponding to blue.
- the coefficient holding class code selection circuit 408 stores in advance the coefficient set supplied from the initialization circuit 410. Based on the class code TCC, the coefficient holding class code selection circuit 408 generates a prediction tap selection signal VT composed of a signal corresponding to red, a signal corresponding to green, and a signal corresponding to blue, supplies the generated prediction tap selection signal VT to the variable tap selection circuit 407, and outputs to the estimation prediction calculation circuit 409 a prediction coefficient W consisting of a prediction coefficient corresponding to the red class code included in the class code TCC, a prediction coefficient corresponding to the green class code included in the class code TCC, and a prediction coefficient corresponding to the blue class code included in the class code TCC.
- the variable tap selection circuit 407 selects, based on the all-class prediction tap VET supplied from the tap construction circuit 403-2 and the prediction tap selection signal VT supplied from the coefficient holding class code selection circuit 408, a prediction tap ET consisting of a tap corresponding to red, a tap corresponding to green, and a tap corresponding to blue, and supplies the selected prediction tap ET to the estimation prediction calculation circuit 409.
- the product-sum unit 421 of the estimation prediction calculation circuit 409 calculates the red component of the pixel value using a linear estimation formula, based on the tap corresponding to red included in the prediction tap ET supplied from the variable tap selection circuit 407 and the prediction coefficient corresponding to red included in the prediction coefficient W supplied from the coefficient holding class code selection circuit 408.
- the product-sum unit 421 of the estimation prediction calculation circuit 409 calculates the green component of the pixel value using a linear estimation formula, based on the tap corresponding to green included in the prediction tap ET supplied from the variable tap selection circuit 407 and the prediction coefficient corresponding to green included in the prediction coefficient W supplied from the coefficient holding class code selection circuit 408.
- the product-sum unit 421 of the estimation prediction calculation circuit 409 calculates the blue component of the pixel value using a linear estimation formula, based on the tap corresponding to blue included in the prediction tap ET supplied from the variable tap selection circuit 407 and the prediction coefficient corresponding to blue included in the prediction coefficient W supplied from the coefficient holding class code selection circuit 408.
- the product-sum unit 421 of the estimation prediction operation circuit 409 may calculate the pixel value of the missing pixel based on the prediction coefficient W using a non-linear estimation formula.
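The per-component product-sum of the estimation prediction calculation circuit 409 amounts to one dot product per color channel, as sketched below; the dictionary layout for taps and coefficients is an assumption for illustration.

```python
def estimate_rgb(pred_taps, coeffs):
    """Linear estimation by product-sum, per color component: each
    channel's output pixel value is the dot product of that channel's
    prediction-tap pixel values and that channel's class-selected
    prediction coefficients."""
    return {ch: sum(x * w for x, w in zip(pred_taps[ch], coeffs[ch]))
            for ch in pred_taps}
```

For example, a red tap of [1, 2] with coefficients [0.5, 0.25] yields a red component of 1.0.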
- the image processing apparatus shown in FIG. 27 can selectively execute one or more of the image processing mode for creating missing pixels, the image processing mode considering chromatic aberration, and the image processing mode considering the telop position, and can obtain a clearer image than in the related art.
- in step S101, the structure switching control circuit 302 acquires the aberration information supplied from the initialization circuit 312.
- in step S102, the structure switching control circuit 302 selects the target pixel.
- in step S103, the display position calculation circuit 301 obtains the relative distance between the target pixel and the center of the screen.
- in step S104, the structure switching control circuit 302 generates a correction vector for red, a correction vector for green, and a correction vector for blue, supplies the tap selection signal TS1 including the correction vectors to the tap construction circuit 304-1, supplies the tap selection signal TS2 including the correction vectors to the tap construction circuit 304-2, and supplies the tap selection signals TS3-1 to TS3-(N-2) including the correction vectors to the tap construction circuits 304-3 to 304-N.
- in step S105, the tap construction circuit 304-1 switches the taps based on the positional information indicating the relative distance between the pixel of interest and the center of the screen and on the correction vectors for red, green, and blue, and selects the motion class tap TD1 corresponding to red, green, and blue.
- the tap construction circuit 304-2 switches the taps based on the positional information indicating the relative distance between the pixel of interest and the center of the screen and on the correction vectors for red, green, and blue, and selects the prediction tap corresponding to red, green, and blue.
- each of the tap construction circuits 304-3 to 304-N switches the taps based on the positional information indicating the relative distance between the pixel of interest and the center of the screen and on the correction vectors for red, green, and blue, and selects the DR class taps TD2-1 to TD2-(N-2) corresponding to red, green, and blue.
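Lateral chromatic aberration grows with distance from the optical center, so a simple model for the per-color correction vectors is a displacement proportional to the pixel's offset from the screen center. The sketch below uses hypothetical per-channel scale factors; in the apparatus the aberration information comes from the initialization circuit, and the document does not specify its exact form:

```python
def correction_vector(x, y, cx, cy, scale):
    """Displacement of one color plane at pixel (x, y), modeled as
    proportional to the offset from the screen center (cx, cy)."""
    return ((x - cx) * scale, (y - cy) * scale)

# Hypothetical scale factors: red fringes outward, blue inward, and
# green is taken as the reference plane.
cx, cy = 360, 240
vec_r = correction_vector(400, 300, cx, cy, +0.002)
vec_g = correction_vector(400, 300, cx, cy, 0.0)
vec_b = correction_vector(400, 300, cx, cy, -0.002)
print(vec_r, vec_g, vec_b)
```

Each tap construction circuit would then shift its red, green, and blue tap positions by the corresponding vector before selecting pixels, which is why the taps vary with screen position in the aberration mode.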
- in step S106, the image processing apparatus determines whether or not processing has been completed for all pixels. If it is determined that processing has not been completed for all pixels, the process returns to step S102 and the tap switching process is repeated.
- if it is determined in step S106 that processing has been completed for all pixels, the process ends.
- as described above, the image processing apparatus shown in FIG. 23 can switch the taps in accordance with the screen position in the aberration mode, in which the aberration is taken into consideration.
- in the aberration mode, the image processing apparatus shown in FIG. 27 switches the taps in accordance with the screen position by the same process as described with reference to the flowchart shown in FIG. 28, so a description thereof is omitted.
- the structure switching control circuit 302 of the image processing apparatus in FIG. 23 acquires telop position information indicating a telop display area in which a telop is displayed.
- the telop position information indicates the position and size of the telop display area, for example, the upper 30 lines, the lower 50 lines, or the right 100 pixels.
- the structure switching control circuit 302 may acquire data indicating the telop display area from the input image.
- FIG. 29 is a diagram illustrating an example of a screen on which a telop or the like is displayed.
- for example, characters are displayed, together with the image, in the telop display areas above and below the image display area.
- the signal characteristics of the image in the telop display area differ from those of an image such as a natural image, because the telop contains many flat portions and edge portions.
- the characters displayed in the telop display area in the right half of the screen are displayed so as to flow on the image from the top to the bottom of the screen.
- an image generated by computer graphics is displayed in the frame image display area surrounding four sides of the image display area.
- the display position calculation circuit 301 calculates the physical address of each pixel of the input image on the screen and supplies the calculated physical addresses to the tap construction circuits 304-1 to 304-N.
- the structure switching control circuit 302 generates a tap selection signal TS1, a tap selection signal TS2, and tap selection signals TS3-1 to TS3-(N-2) based on the telop position information.
- the tap selection signal TS1 is supplied to the tap construction circuit 304-1, the tap selection signal TS2 is supplied to the tap construction circuit 304-2, and the tap selection signals TS3-1 to TS3-(N-2) are supplied to the tap construction circuits 304-3 to 304-N, respectively.
- when the target pixel does not belong to the telop display area, the tap construction circuit 304-1 selects a tap that uses pixels in a wider range; when the target pixel belongs to the telop display area, it selects a tap that uses pixels in a narrower range, for example, and thereby selects the motion class tap TD1.
- for a natural image, the image processing device can thus perform image processing using image components that change gradually over many pixels.
- in the telop, the pixel values of the pixels corresponding to the characters are substantially the same, and the pixel values of the pixels corresponding to the background are substantially the same.
- for example, the pixel value of a pixel corresponding to a character displayed in white differs greatly from the pixel value of a pixel corresponding to a background displayed in black.
- accordingly, the image processing apparatus can execute class classification adaptive processing appropriate to an image in which pixel values change rapidly.
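A common way to turn a class tap into a class code in class classification adaptive processing is 1-bit ADRC (Adaptive Dynamic Range Coding). This passage does not spell out the coding scheme used by the class code selection circuitry, so the following is an illustrative sketch only:

```python
def adrc_class_code(class_tap):
    """1-bit ADRC: requantize each tap pixel to one bit relative to the
    midpoint of the tap's dynamic range, then pack the bits into a code."""
    lo, hi = min(class_tap), max(class_tap)
    threshold = (lo + hi) / 2
    code = 0
    for pixel in class_tap:
        code = (code << 1) | (1 if pixel >= threshold else 0)
    return code

# Hypothetical class tap: a flat dark background with one bright
# character edge, as in a white-on-black telop.
print(adrc_class_code([20, 22, 21, 200]))  # 1  (binary 0001)
```

Because the code depends only on each pixel's position relative to the tap's local dynamic range, flat regions, edges, and rapidly changing telop characters fall into different classes, each with its own learned prediction coefficients.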
- in step S201, the structure switching control circuit 302 acquires the telop position information supplied from the initialization circuit 312.
- the structure switching control circuit 302 generates a tap selection signal TS1, a tap selection signal TS2, and tap selection signals TS3-1 to TS3-(N-2) corresponding to the position of the telop.
- the tap selection signal TS1 is supplied to the tap construction circuit 304-1, the tap selection signal TS2 is supplied to the tap construction circuit 304-2, and the tap selection signals TS3-1 to TS3-(N-2) are supplied to the tap construction circuits 304-3 to 304-N.
- in step S202, the tap construction circuit 304-1 selects the pixel of interest.
- the tap construction circuit 304-2 selects the pixel of interest.
- each of the tap construction circuits 304-3 to 304-N selects the pixel of interest.
- in step S203, the tap construction circuit 304-1 determines whether or not the target pixel is a pixel in the telop, based on the physical address of each pixel on the screen and the tap selection signal TS1. If it is determined that the pixel of interest is a pixel in the telop, the process proceeds to step S204, where the taps are switched and the motion class tap TD1 corresponding to the telop is selected, and the procedure then proceeds to step S206.
- if it is determined in step S203 that the pixel of interest is not a pixel in the telop, the process proceeds to step S205, where the tap construction circuit 304-1 switches the taps to correspond to a natural image and selects the motion class tap TD1, and the procedure then proceeds to step S206.
- in steps S203 to S205, the tap construction circuits 304-2 to 304-N execute the same processing as the tap construction circuit 304-1, so a description thereof is omitted.
- in step S206, the tap construction circuits 304-1 to 304-N determine whether or not processing has been completed for all pixels. If it is determined that processing has not been completed for all pixels, the process returns to step S202 and the tap switching process is repeated.
- if it is determined in step S206 that processing has been completed for all pixels, the process ends.
- as described above, the image processing apparatus having the configuration shown in FIG. 23 can switch the taps in the telop mode in accordance with whether or not the target pixel belongs to the telop display area.
- since the image processing apparatus having the configuration shown in FIG. 27 switches the taps in accordance with whether or not the target pixel belongs to the telop display area by the same processing as described with reference to the flowchart shown in FIG. 30, a description thereof is omitted.
- for the image shown in FIG. 29(D), the image processing apparatus shown in FIG. 23 or FIG. 27 executes the processing by switching between the taps for the frame image display area and the taps for the image display area.
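The telop-mode branch above reduces to a point-in-region test against the telop position information (position and size of the display area, e.g. "the upper 30 lines"). A minimal sketch with hypothetical region parameters; the actual region representation inside the apparatus is not specified here:

```python
def in_telop_area(x, y, regions):
    """Return True when pixel (x, y) falls inside any telop display region.
    Each region is (left, top, width, height), taken from the telop
    position information."""
    return any(l <= x < l + w and t <= y < t + h for l, t, w, h in regions)

def select_tap_range(x, y, regions):
    """Narrow taps inside the telop (pixel values change rapidly),
    wide taps elsewhere (a natural image changes gradually)."""
    return "narrow" if in_telop_area(x, y, regions) else "wide"

# Hypothetical 720x480 screen with a telop occupying the upper 30 lines.
telop_regions = [(0, 0, 720, 30)]
print(select_tap_range(100, 10, telop_regions))   # narrow
print(select_tap_range(100, 200, telop_regions))  # wide
```

The same test, applied per pixel using the physical addresses from the display position calculation circuit, drives the per-pixel tap switching in steps S203 to S205.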
- the series of processes described above can be executed by hardware, but can also be executed by software.
- when the series of processes is executed by software, the programs constituting the software are installed from a recording medium into a computer incorporated in dedicated hardware, or into, for example, a general-purpose personal computer that can execute various functions by installing various programs.
- FIG. 31 is a diagram illustrating an example of a recording medium and a computer.
- the CPU (Central Processing Unit) 501 actually executes various application programs and an OS (Operating System).
- a ROM (Read-Only Memory) 502 generally stores basically fixed data among the programs and calculation parameters used by the CPU 501.
- a RAM (Random-Access Memory) 503 stores programs used in the execution of the CPU 501 and parameters that change as appropriate during that execution. These are interconnected by a host bus 504 composed of a CPU bus and the like.
- the host bus 504 is connected via a bridge 505 to an external bus 506 such as a PCI (Peripheral Component Interconnect / Interface) bus.
- the keyboard 508 is operated by the user when inputting various commands to the CPU 501.
- the mouse 509 is operated by the user when pointing to or selecting a point on the screen of the display 510.
- the display 510 is composed of a liquid crystal display device or a CRT (Cathode Ray Tube), and displays various information as text or images.
- the HDD (Hard Disk Drive) 511 drives a hard disk, and records or reproduces programs to be executed by the CPU 501 and information on the hard disk.
- the drive 512 reads data or programs recorded on the attached magnetic disk 551, optical disk 552, magneto-optical disk 553, or semiconductor memory 554, and supplies the data or programs to the RAM 503 connected via the interface 507, the external bus 506, the bridge 505, and the host bus 504.
- the keyboard 508 through the drive 512 are connected to the interface 507, and the interface 507 is connected to the CPU 501 via the external bus 506, the bridge 505, and the host bus 504.
- the recording medium is composed of package media, distributed separately from the computer to provide the user with the program for executing the processes corresponding to the block diagrams, such as a magnetic disk 551 (including a floppy disk) on which the program is recorded, an optical disk 552 (including a CD-ROM (Compact Disc-Read Only Memory) and a DVD (Digital Versatile Disc)), a magneto-optical disk 553 (including an MD (Mini-Disc)), or a semiconductor memory 554.
- alternatively, the program for executing the processes corresponding to the block diagrams may be supplied to the computer via a wired or wireless communication medium.
- the steps describing the program stored in the recording medium include not only processes performed in chronological order according to the described order, but also processes that are executed in parallel or individually and not necessarily in chronological order.
- as described above, according to the present invention, position information indicating the position of a target pixel in a frame of an input image signal including a plurality of pixels is detected, the class of the target pixel is determined from a plurality of classes in accordance with the position information, a plurality of pixels are selected as prediction taps from the input image signal, and arithmetic processing is performed based on the prediction taps and conversion data obtained in advance by learning for each class, so that an output image signal of higher quality than the input image signal is output; as a result, a higher-quality image can always be generated regardless of the position of the pixel on the screen.
- furthermore, position information indicating the position of a target pixel in a frame of an input image signal including a plurality of pixels is detected, a plurality of pixels whose positional relationship with the target pixel is variable in accordance with the position information are selected as class taps from the input image signal, the class of the target pixel is determined from a plurality of classes based on the class taps, a plurality of pixels are selected as prediction taps from the input image signal, and arithmetic processing is performed based on the prediction taps and conversion data obtained in advance by learning for each class, so that an output image signal of higher quality than the input image signal is output; as a result, a higher-quality image can always be generated regardless of the position of the pixel on the screen.
- furthermore, position information indicating the position of a target pixel in a frame of an input image signal including a plurality of pixels is detected, a plurality of pixels are selected as class taps from the input image signal, the class of the target pixel is determined from a plurality of classes based on the class taps, a plurality of pixels whose positional relationship with the target pixel is variable in accordance with the position information are selected as prediction taps from the input image signal, and arithmetic processing is performed based on the prediction taps and conversion data obtained in advance by learning for each class, so that an output image signal of higher quality than the input image signal is output; as a result, a higher-quality image can always be generated regardless of the position of the pixel on the screen.
- furthermore, a plurality of pixels are selected as temporary class taps from the input image signal, a plurality of pixels whose positional relationship with the target pixel is changed in accordance with the position of the temporary class taps in the frame are selected as true class taps from the input image signal, the class of the target pixel is determined from a plurality of classes based on the true class taps, a plurality of pixels are selected as prediction taps from the input image signal, and arithmetic processing is performed based on the prediction taps and conversion data obtained in advance by learning for each class, so that an output image signal of higher quality than the input image signal is output; as a result, a higher-quality image can always be generated regardless of the position of the pixel on the screen.
- furthermore, a plurality of pixels are selected as class taps from the input image signal, the class of the target pixel is determined from a plurality of classes based on the class taps, a plurality of pixels are selected as tentative prediction taps from the input image signal for each target pixel, a plurality of pixels whose positional relationship with the target pixel is changed in accordance with the position of the tentative prediction taps in the frame are selected as true prediction taps from the input image signal, and arithmetic processing is performed based on the true prediction taps and conversion data obtained in advance by learning for each class, so that an output image signal of higher quality than the input image signal is output; as a result, a higher-quality image can always be generated regardless of the position of the pixel on the screen.
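Putting the pieces together, the class classification adaptive processing summarized above can be sketched end to end: form a class tap around the target pixel, derive a class code, look up the prediction coefficients learned for that class, and apply the linear estimation to the prediction tap. The tap shapes, the coding scheme, and the coefficient table below are hypothetical placeholders for what the circuits of FIG. 23 and FIG. 27 would provide:

```python
def classify(class_tap):
    """1-bit requantization of the class tap into a class code."""
    threshold = (min(class_tap) + max(class_tap)) / 2
    code = 0
    for pixel in class_tap:
        code = (code << 1) | (1 if pixel >= threshold else 0)
    return code

def predict(prediction_tap, coefficients):
    """Linear estimation: weighted sum of the prediction-tap pixels."""
    return sum(w * x for w, x in zip(coefficients, prediction_tap))

# Hypothetical coefficient table W; in the apparatus it is obtained by
# learning in advance for each class.
W = {0b0011: [0.1, 0.4, 0.4, 0.1]}

class_tap = [10, 12, 200, 205]       # pixels around the target pixel
prediction_tap = [10, 12, 200, 205]  # here the same pixels, for brevity
code = classify(class_tap)           # 0b0011
output_pixel = predict(prediction_tap, W[code])
print(output_pixel)
```

Varying which pixels enter `class_tap` and `prediction_tap` according to the position information (or to a tentative tap's position) is exactly the tap switching that the preceding paragraphs describe.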
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP01902812A EP1197946B1 (en) | 2000-02-10 | 2001-02-08 | Image processing device and method, and recording medium |
US09/958,394 US6912014B2 (en) | 2000-02-10 | 2001-02-08 | Image processing device and method, and recording medium |
DE60127631T DE60127631T2 (de) | 2000-02-10 | 2001-02-08 | Anordnung und verfahren zur bildverarbeitung und aufzeichnungsträger |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2000033786 | 2000-02-10 | ||
JP2000-33786 | 2000-02-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001059751A1 true WO2001059751A1 (en) | 2001-08-16 |
Family
ID=18558217
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2001/000895 WO2001059751A1 (en) | 2000-02-10 | 2001-02-08 | Image processing device and method, and recording medium |
Country Status (5)
Country | Link |
---|---|
US (1) | US6912014B2 (ja) |
EP (2) | EP1197946B1 (ja) |
KR (1) | KR100742850B1 (ja) |
DE (2) | DE60127631T2 (ja) |
WO (1) | WO2001059751A1 (ja) |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7324709B1 (en) * | 2001-07-13 | 2008-01-29 | Pixelworks, Inc. | Method and apparatus for two-dimensional image scaling |
WO2003071479A1 (fr) * | 2002-02-21 | 2003-08-28 | Sony Corporation | Processeur de signaux |
JP4611069B2 (ja) * | 2004-03-24 | 2011-01-12 | 富士フイルム株式会社 | 特定シーンの画像を選別する装置、プログラムおよびプログラムを記録した記録媒体 |
US9079762B2 (en) | 2006-09-22 | 2015-07-14 | Ethicon Endo-Surgery, Inc. | Micro-electromechanical device |
US7561317B2 (en) * | 2006-11-03 | 2009-07-14 | Ethicon Endo-Surgery, Inc. | Resonant Fourier scanning |
US20080146898A1 (en) * | 2006-12-19 | 2008-06-19 | Ethicon Endo-Surgery, Inc. | Spectral windows for surgical treatment through intervening fluids |
US7713265B2 (en) * | 2006-12-22 | 2010-05-11 | Ethicon Endo-Surgery, Inc. | Apparatus and method for medically treating a tattoo |
US20080151343A1 (en) * | 2006-12-22 | 2008-06-26 | Ethicon Endo-Surgery, Inc. | Apparatus including a scanned beam imager having an optical dome |
US8801606B2 (en) | 2007-01-09 | 2014-08-12 | Ethicon Endo-Surgery, Inc. | Method of in vivo monitoring using an imaging system including scanned beam imaging unit |
US8273015B2 (en) * | 2007-01-09 | 2012-09-25 | Ethicon Endo-Surgery, Inc. | Methods for imaging the anatomy with an anatomically secured scanner assembly |
US7589316B2 (en) * | 2007-01-18 | 2009-09-15 | Ethicon Endo-Surgery, Inc. | Scanning beam imaging with adjustable detector sensitivity or gain |
US20080226029A1 (en) * | 2007-03-12 | 2008-09-18 | Weir Michael P | Medical device including scanned beam unit for imaging and therapy |
US8216214B2 (en) | 2007-03-12 | 2012-07-10 | Ethicon Endo-Surgery, Inc. | Power modulation of a scanning beam for imaging, therapy, and/or diagnosis |
JP2008258836A (ja) * | 2007-04-03 | 2008-10-23 | Sony Corp | 撮像装置、信号処理回路、信号処理装置、信号処理方法及びコンピュータプログラム |
US7995045B2 (en) | 2007-04-13 | 2011-08-09 | Ethicon Endo-Surgery, Inc. | Combined SBI and conventional image processor |
US8626271B2 (en) | 2007-04-13 | 2014-01-07 | Ethicon Endo-Surgery, Inc. | System and method using fluorescence to examine within a patient's anatomy |
US8160678B2 (en) | 2007-06-18 | 2012-04-17 | Ethicon Endo-Surgery, Inc. | Methods and devices for repairing damaged or diseased tissue using a scanning beam assembly |
US7558455B2 (en) * | 2007-06-29 | 2009-07-07 | Ethicon Endo-Surgery, Inc | Receiver aperture broadening for scanned beam imaging |
US7982776B2 (en) * | 2007-07-13 | 2011-07-19 | Ethicon Endo-Surgery, Inc. | SBI motion artifact removal apparatus and method |
US20090021818A1 (en) * | 2007-07-20 | 2009-01-22 | Ethicon Endo-Surgery, Inc. | Medical scanning assembly with variable image capture and display |
US9125552B2 (en) * | 2007-07-31 | 2015-09-08 | Ethicon Endo-Surgery, Inc. | Optical scanning module and means for attaching the module to medical instruments for introducing the module into the anatomy |
US7983739B2 (en) | 2007-08-27 | 2011-07-19 | Ethicon Endo-Surgery, Inc. | Position tracking and control for a scanning assembly |
US7925333B2 (en) | 2007-08-28 | 2011-04-12 | Ethicon Endo-Surgery, Inc. | Medical device including scanned beam unit with operational control features |
US8050520B2 (en) * | 2008-03-27 | 2011-11-01 | Ethicon Endo-Surgery, Inc. | Method for creating a pixel image from sampled data of a scanned beam imager |
US8332014B2 (en) * | 2008-04-25 | 2012-12-11 | Ethicon Endo-Surgery, Inc. | Scanned beam device and method using same which measures the reflectance of patient tissue |
TWI524773B (zh) * | 2009-10-23 | 2016-03-01 | 財團法人資訊工業策進會 | 即時偵測一物件之偵測裝置、偵測方法及其電腦程式產品 |
JP2012244395A (ja) * | 2011-05-19 | 2012-12-10 | Sony Corp | 学習装置および方法、画像処理装置および方法、プログラム、並びに記録媒体 |
JP2013021635A (ja) * | 2011-07-14 | 2013-01-31 | Sony Corp | 画像処理装置、画像処理方法、プログラム、及び記録媒体 |
KR101257946B1 (ko) * | 2011-08-08 | 2013-04-23 | 연세대학교 산학협력단 | 영상의 색수차를 제거하는 장치 및 그 방법 |
GB2526943B (en) | 2011-12-20 | 2016-04-27 | Imagination Tech Ltd | Method and apparatus for compressing and decompressing data |
KR102134136B1 (ko) * | 2018-12-27 | 2020-07-15 | (주)인스페이스 | 딥러닝 기반 위성영상 해상도 조절방법 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08275118A (ja) * | 1995-03-31 | 1996-10-18 | Sony Corp | 信号変換装置及び信号変換方法 |
US5852470A (en) * | 1995-05-31 | 1998-12-22 | Sony Corporation | Signal converting apparatus and signal converting method |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3859089B2 (ja) | 1995-05-31 | 2006-12-20 | ソニー株式会社 | 信号変換装置及び信号変換方法 |
US5946044A (en) * | 1995-06-30 | 1999-08-31 | Sony Corporation | Image signal converting method and image signal converting apparatus |
JP3669530B2 (ja) | 1995-06-30 | 2005-07-06 | ソニー株式会社 | 画像信号変換装置及び画像信号変換方法 |
JP3631333B2 (ja) * | 1996-08-23 | 2005-03-23 | シャープ株式会社 | 画像処理装置 |
US6330344B1 (en) * | 1997-02-14 | 2001-12-11 | Sony Corporation | Image processing device and method employing motion detection to generate improved quality image from low resolution image |
IL127910A (en) * | 1997-05-06 | 2003-01-12 | Sony Corp | Image converter and image converting method |
CN1219255C (zh) | 1997-10-23 | 2005-09-14 | 索尼电子有限公司 | 差错恢复的设备与方法 |
JP4147632B2 (ja) * | 1998-08-24 | 2008-09-10 | ソニー株式会社 | 画像情報変換装置、画像情報変換方法、およびテレビジョン受像機 |
DE60041114D1 (de) * | 1999-04-23 | 2009-01-29 | Sony Corp | Bildumwandlungsvorrichtung und -verfahren |
US6678405B1 (en) * | 1999-06-08 | 2004-01-13 | Sony Corporation | Data processing apparatus, data processing method, learning apparatus, learning method, and medium |
JP4470280B2 (ja) * | 2000-05-24 | 2010-06-02 | ソニー株式会社 | 画像信号処理装置及び画像信号処理方法 |
-
2001
- 2001-02-08 US US09/958,394 patent/US6912014B2/en not_active Expired - Fee Related
- 2001-02-08 DE DE60127631T patent/DE60127631T2/de not_active Expired - Lifetime
- 2001-02-08 WO PCT/JP2001/000895 patent/WO2001059751A1/ja active IP Right Grant
- 2001-02-08 KR KR1020017012927A patent/KR100742850B1/ko not_active IP Right Cessation
- 2001-02-08 EP EP01902812A patent/EP1197946B1/en not_active Expired - Lifetime
- 2001-02-08 DE DE60140824T patent/DE60140824D1/de not_active Expired - Lifetime
- 2001-02-08 EP EP06076012A patent/EP1686801B1/en not_active Expired - Lifetime
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08275118A (ja) * | 1995-03-31 | 1996-10-18 | Sony Corp | 信号変換装置及び信号変換方法 |
US5852470A (en) * | 1995-05-31 | 1998-12-22 | Sony Corporation | Signal converting apparatus and signal converting method |
Non-Patent Citations (1)
Title |
---|
See also references of EP1197946A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP1686801A3 (en) | 2008-07-09 |
DE60127631T2 (de) | 2007-12-13 |
DE60140824D1 (de) | 2010-01-28 |
KR100742850B1 (ko) | 2007-07-25 |
EP1197946B1 (en) | 2007-04-04 |
EP1197946A1 (en) | 2002-04-17 |
EP1686801A2 (en) | 2006-08-02 |
US6912014B2 (en) | 2005-06-28 |
US20030030753A1 (en) | 2003-02-13 |
DE60127631D1 (de) | 2007-05-16 |
KR20020000164A (ko) | 2002-01-04 |
EP1686801B1 (en) | 2009-12-16 |
EP1197946A4 (en) | 2006-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2001059751A1 (en) | Image processing device and method, and recording medium | |
US20100202711A1 (en) | Image processing apparatus, image processing method, and program | |
JP2014194706A (ja) | 画像処理装置、画像処理方法、及び、プログラム | |
JP4696388B2 (ja) | 情報信号処理装置、情報信号処理方法、画像信号処理装置およびそれを使用した画像表示装置、それに使用される係数種データ生成装置、係数データ生成装置、並びに情報記録媒体 | |
JPWO2009119347A1 (ja) | 処理システム、画像処理方法および画像処理用プログラム | |
JP4650683B2 (ja) | 画像処理装置および方法、プログラム並びに記録媒体 | |
JP4470280B2 (ja) | 画像信号処理装置及び画像信号処理方法 | |
KR20060045430A (ko) | 계수 데이터의 생성 장치 및 생성 방법, 계수종 데이터의생성 장치 및 생성 방법, 정보 신호 처리 장치, 및프로그램 및 그것을 기록한 매체 | |
JP4623345B2 (ja) | 画像処理装置および方法、並びに記録媒体 | |
JP2005159830A (ja) | 信号処理装置および方法、記録媒体、並びにプログラム | |
WO2001097510A1 (en) | Image processing system, image processing method, program, and recording medium | |
JP4507639B2 (ja) | 画像信号処理装置 | |
JP4655213B2 (ja) | 画像処理装置および方法、プログラム並びに記録媒体 | |
JP2000348019A (ja) | データ処理装置およびデータ処理方法、並びに媒体 | |
JP4650684B2 (ja) | 画像処理装置および方法、プログラム並びに記録媒体 | |
JPH0851598A (ja) | 画像情報変換装置 | |
JP2007251690A (ja) | 画像処理装置および方法、学習装置および方法、並びにプログラム | |
JP4441860B2 (ja) | 情報信号の処理装置および処理方法、並びにプログラムおよびそれを記録した媒体 | |
JP4595162B2 (ja) | 画像信号処理装置及び画像信号処理方法 | |
JP2005004770A (ja) | グループ化による映画映像検出方法及び装置 | |
JP4200403B2 (ja) | 画像符号化装置および方法、画像復号装置および方法、画像伝送システムおよび方法、並びに記録媒体 | |
JP4200401B2 (ja) | 画像変換装置および方法、並びに記録媒体 | |
JP3777596B2 (ja) | 画像情報変換装置および方法、係数算出装置および方法、記憶装置、記録媒体、並びにプログラム | |
JP2006020347A (ja) | 係数生成装置および方法 | |
JP4055487B2 (ja) | 画像信号の処理装置および処理方法、それに使用される係数種データの生成装置および生成方法、係数データの生成装置および生成方法、並びに各方法を実行するためのプログラムおよびそのプログラムを記録したコンピュータ読み取り可能な媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2001902812 Country of ref document: EP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1020017012927 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09958394 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 2001902812 Country of ref document: EP |
|
WWG | Wipo information: grant in national office |
Ref document number: 2001902812 Country of ref document: EP |