WO2018100950A1 - Image processing apparatus, digital camera, image processing program, and recording medium - Google Patents
- Publication number
- WO2018100950A1 (PCT/JP2017/039189)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- image processing
- pixel
- image
- unit
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/6027—Correction or control of colour gradation or colour contrast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00002—Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for
- H04N1/00007—Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for relating to particular apparatus or devices
- H04N1/00021—Picture signal circuits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00002—Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for
- H04N1/00026—Methods therefor
- H04N1/00029—Diagnosis, i.e. identifying a problem by comparison with a normal state
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/40093—Modification of content of picture, e.g. retouching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/12—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/72—Combination of two or more compensation controls
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/20—Circuitry for controlling amplitude response
- H04N5/202—Gamma control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/643—Hue control means, e.g. flesh tone control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/68—Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
Definitions
- the following disclosure relates to an image processing apparatus and a digital camera including the image processing apparatus.
- the present invention also relates to an image processing program for causing a computer to function as an image processing apparatus and a recording medium on which the image processing program is recorded.
- image processing may be automatically performed on image data that is digital information (digital signal) representing the image.
- Patent Document 1, cited below, describes an image processing apparatus that performs such processing on each of the blocks into which an image is divided.
- Patent Document 1: Japanese Patent Publication JP-A-64-61170 (published March 8, 1989)
- Patent Document 1 takes image data representing a monochrome image as the object of image processing, and therefore cannot be applied to the color image data now in widespread use.
- One embodiment of the present invention has been made in view of the above problem, and an object thereof is to provide an image processing apparatus capable of performing image processing that improves the appearance of an image, not only for image data representing a monochrome image but also for image data representing a color image.
- an image processing device according to one embodiment includes: an evaluation unit that evaluates, in accordance with at least one piece of pixel information included in each of a plurality of pixel blocks defined by dividing an image into a plurality of regions, an attribute relating to at least one of luminance, hue, and saturation of each pixel block;
- a process determining unit that determines processing content to be performed on each piece of the at least one piece of pixel information according to an evaluation result of the evaluation unit; and
- an image processing unit that performs processing corresponding to the processing content on each piece of the at least one piece of pixel information.
- Image processing for improving the appearance of an image can be performed not only on image data representing a monochrome image but also on image data representing a color image.
- FIG. 1 is a block diagram of a digital camera including an image processing apparatus according to a first embodiment of the present invention.
- FIG. 2 is a flowchart showing the flow of processing in the image processing apparatus shown in FIG. 1.
- FIG. 3 is a plan view of an image divided into m rows and n columns by an image dividing unit provided in the image processing apparatus shown in FIG. 1.
- FIGS. 4(a) to 4(c) show examples of luminance histograms of pixel blocks divided by the image dividing unit included in the image processing apparatus shown in FIG. 1.
- FIGS. 5(a) and 5(b) are graphs each showing an example of a tone curve selected by the image processing unit included in the image processing apparatus shown in FIG. 1.
- FIG. 6 is an enlarged plan view of each image block showing an example of the processing content determined by a processing determination unit included in the image processing apparatus shown in FIG. 1.
- FIG. 7 is an enlarged plan view of each image block showing another example of the processing content determined by the processing determination unit provided in the image processing apparatus shown in FIG. 1.
- FIG. 8(a) is a graph showing the saturation parameter amounts when only the processing for increasing the saturation, without boundary processing, is performed on the image blocks in row i shown in FIG. 7; FIGS. 8(b) to 8(e) are graphs showing the saturation parameter amounts when the image blocks in row i shown in FIG. 7 are subjected to the processing for increasing the saturation together with boundary processing.
- (a) is an image represented by pixel information acquired by the image processing apparatus; (b) is an image represented by the pixel information after image processing is performed by that image processing apparatus; (c) is an image represented by the pixel information after image processing is performed by the image processing apparatus of the comparative example.
- FIG. 11 is a block diagram of a digital camera including an image processing apparatus according to a third embodiment of the present invention.
- It is a flowchart which shows the flow of processing in the image processing apparatus shown in FIG. 11.
- FIG. 12 is a plan view of a screen showing a pixel block group associated with a predetermined type of subject by a detection unit provided in the image processing apparatus shown in FIG. 11.
- (a) is an image represented by pixel information acquired by the image processing apparatus shown in FIG. 11.
- (b) is an image represented by pixel information after image processing is performed by the image processing apparatus shown in FIG. 11.
- FIG. 1 is a block diagram of the digital camera 1.
- FIG. 2 is a flowchart showing the flow of processing in the image processing apparatus 11.
- FIG. 3 is a plan view of the image 51 divided into m rows and n columns by the image dividing unit 12 included in the image processing apparatus 11.
- 4A to 4C show examples of luminance histograms of pixel blocks divided by the image dividing unit 12 included in the image processing apparatus 11, respectively.
- 5A and 5B are graphs showing examples of tone curves selected by the image processing unit 15 included in the image processing apparatus 11, respectively. 6 to 9 will be described later.
- the digital camera 1 includes an image processing device 11, an imaging unit 21, a display unit 31, and a storage unit 41.
- the imaging unit 21 includes a color filter group in which red, green, and blue color filters are arranged in a matrix, an image sensor in which photoelectric conversion elements are arranged in a matrix, and an analog-to-digital (A/D) converter.
- the imaging unit 21 photoelectrically converts the intensity of each of the red, green, and blue light components transmitted through the color filter group at the same timing into an electrical signal, converts that electrical signal from analog to digital, and outputs it. That is, the imaging unit 21 outputs color pixel information representing an image formed by the red, green, and blue light components incident at the same timing.
- Examples of the imaging unit 21 include a CCD (Charge Coupled Devices) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
- the image processing apparatus 11 acquires the image data generated by the imaging unit 21, and performs processing on the pixel information constituting the image data in order to improve the appearance of the image represented by the image data.
- the image processing apparatus 11 outputs the image data configured by the processed pixel information to the display unit 31 and the storage unit 41.
- the display unit 31 displays an image represented by the image data processed by the image processing apparatus 11.
- Examples of the display unit 31 include a liquid crystal display (LCD) device.
- the storage unit 41 is a storage medium that stores image data configured by pixel information processed by the image processing apparatus 11, and includes a main storage unit and an auxiliary storage unit.
- the main storage unit is composed of a RAM (Random Access Memory).
- Examples of the auxiliary storage unit include a hard disk drive (HDD) and a solid state drive (SSD).
- the main storage unit is a storage unit that develops the image processing program stored in the auxiliary storage unit.
- the main storage unit can also be used to temporarily store pixel information processed by the image processing apparatus 11.
- the auxiliary storage unit stores the image processing program as described above and stores pixel information processed by the image processing apparatus 11 in a non-volatile manner.
- the imaging unit 21, the display unit 31, and the storage unit 41 can be realized using existing technology. The structure of the image processing apparatus 11 and the processing it performs are described below.
- the image processing apparatus 11 includes an image dividing unit 12, an evaluating unit 13, a process determining unit 14, an image processing unit 15, and a process correcting unit 16.
- the image processing unit 15 and the processing correction unit 16 are an image processing unit and a processing correction unit described in the claims, respectively.
- the image processing method performed by the image processing apparatus 11 includes step S11, step S12, step S13, step S14, and step S15.
- Step S11 is a step of acquiring image data.
- Step S12 is a step of dividing the image data into a plurality of pixel blocks.
- Step S13 is a step of evaluating the contrast of each pixel block.
- Step S14 is a step of determining the processing content to be applied to each pixel information in each pixel block.
- Step S15 is a step of performing processing on each pixel information in each pixel block.
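Steps S11 to S15 above can be sketched as a small per-block pipeline. This is a minimal illustration, not the patented implementation; the function names, the luminance-only pixel model, and the thresholds are all assumptions for the sketch:

```python
# Hypothetical sketch of steps S11-S15; names and thresholds are illustrative.

def divide_into_blocks(image, m, n):
    """S12: divide a 2D list of luminance values into m x n pixel blocks."""
    bh, bw = len(image) // m, len(image[0]) // n
    return [[row[j * bw:(j + 1) * bw] for row in image[i * bh:(i + 1) * bh]]
            for i in range(m) for j in range(n)]

def evaluate_contrast(block):
    """S13: evaluate a block from its min-max gradation difference (assumed rule)."""
    pixels = [p for row in block for p in row]
    return "low" if max(pixels) - min(pixels) < 128 else "high"

def determine_processing(evaluation):
    """S14: map the evaluation result to a contrast-change amount."""
    return {"low": +30, "high": +10}[evaluation]

def apply_processing(block, amount):
    """S15: placeholder per-pixel adjustment standing in for a tone-curve lookup."""
    return [[min(255, max(0, p + amount)) for p in row] for row in block]

def process_image(image, m, n):
    """S11 (image already acquired) through S15, block by block."""
    return [apply_processing(b, determine_processing(evaluate_contrast(b)))
            for b in divide_into_blocks(image, m, n)]
```

Each block is evaluated and processed independently, which mirrors the step order described above.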
- the image dividing unit 12 acquires image data representing the color image 51 from the imaging unit 21 and divides the image data representing the image 51 into a plurality of pixel blocks arranged in m rows and n columns (see FIG. 3). In other words, the image dividing unit 12 performs Step S11 and Step S12. Each pixel block is composed of a plurality of pixels arranged in a matrix along the vertical and horizontal directions.
- m and n, the numbers of divisions into the plurality of pixel blocks, are each a positive integer.
- the pixel block located in i row and j column is also referred to as a block (i, j).
- i is an arbitrary integer of 1 to m
- j is an arbitrary integer of 1 to n.
- Block (i, j) is a generalized notation of a plurality of divided pixel blocks.
- the image data representing the color image 51 represents the color corresponding to each pixel using pixel information, that is, the light intensity (gradation value) of each component of red, green, and blue.
- the pixel information processed by the image processing device 11 is not limited to the pixel information configured by the R signal, the G signal, and the B signal that respectively represent the gradation values of the three colors red, green, and blue.
- For example, the pixel information processed by the image processing apparatus 11 may be composed of an R signal, a G signal, a B signal, and a Ye signal representing the gradation values of four colors, namely red, green, blue, and yellow, or it may be composed of a Y signal representing luminance and two signals (a U signal and a V signal) representing color differences.
- the block (i, j) preferably has 50 to 300 pixels in the vertical direction and 50 to 300 pixels in the horizontal direction.
- the number of pixels in the vertical direction of the block (i, j) is obtained by dividing the number of vertical pixels of the image by m, which is the number of divisions corresponding to the row direction.
- the number of pixels in the horizontal direction of the block (i, j) is obtained by dividing the number of pixels in the horizontal direction of the image by n, which is the number of divisions corresponding to the column direction.
- the number of pixels in the vertical direction and the number of pixels in the horizontal direction may be equal to each other or different from each other.
- When the number of pixels included in the block (i, j) is too small, that is, when the number of pixels in the vertical or horizontal direction of the block (i, j) is less than 50, it is difficult to determine what features the block (i, j) contains, and as a result the optimum processing content may not be selected. Conversely, when the number of pixels included in the block (i, j) is too large, that is, when the number of pixels in the vertical or horizontal direction exceeds 300, the block (i, j) tends to include a variety of features, and again the optimum processing content may not be selected.
- By dividing the image indicated by the image data into a plurality of pixel blocks so that the number of pixels in each of the vertical and horizontal directions falls within the range of 50 to 300, processing contents suitable for improving the appearance of the image can be selected.
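The per-block pixel counts described above (vertical pixels divided by m rows, horizontal pixels divided by n columns) and the preferred 50-300 range can be checked as follows; the function names are hypothetical:

```python
def block_dimensions(image_h, image_w, m, n):
    """Pixels per block: vertical count divided by m rows, horizontal by n columns."""
    return image_h // m, image_w // n

def in_preferred_range(block_h, block_w):
    """The description prefers 50 to 300 pixels in each direction of a block."""
    return 50 <= block_h <= 300 and 50 <= block_w <= 300

# e.g. a 1920 x 1080 image divided into 9 rows and 16 columns gives 120 x 120 blocks
```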
- the evaluation unit 13 determines the luminance, hue, and saturation of the block (i, j) according to each of at least one piece of pixel information included in the block (i, j). The attribute relating to at least one of them is evaluated.
- the evaluation unit 13 evaluates the contrast, which is an attribute relating to luminance, for all the blocks (i, j). In other words, the evaluation unit 13 performs step S13.
- the evaluation unit 13 acquires a luminance histogram of the at least one piece of pixel information included in the block (i, j) (see (a) to (c) of FIG. 4). Further, the evaluation unit 13 obtains from the luminance histogram the minimum value (minimum gradation), the maximum value (maximum gradation), and the average value (average gradation) of the pixel information included in the block (i, j), and derives the gradation difference Δg, which is the difference between the minimum value and the maximum value.
- FIGS. 4A to 4C each show solid lines indicating the minimum value, the maximum value, and the average value, and an arrow indicating the gradation difference Δg.
- the evaluation unit 13 evaluates the contrast level of each pixel block according to the average value and the gradation difference Δg of that pixel block. For example: (1) when the average value of the block (i, j) is in the range of 64 to 192 and the gradation difference Δg is less than 128, the evaluation unit 13 evaluates the contrast of the block (i, j) as “low”; (2) when the average value is in the range of 64 to 192 and Δg is 128 or more and less than 192, it evaluates the contrast as “high”; and (3) when the average value is in the range of 64 to 192 and Δg is 192 or more, it evaluates the contrast as “too high”.
- FIG. 4A shows an example of a luminance histogram of a block (i, j) evaluated as having low contrast.
- FIG. 4B shows an example of a luminance histogram of a block (i, j) evaluated as having a high contrast.
- FIG. 4C shows an example of a luminance histogram of the block (i, j) evaluated that the contrast is too high.
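The example rules above can be sketched as a classifier. This is an illustration only; returning None when the average falls outside 64-192 is an assumption, since the description does not state that case:

```python
def classify_contrast(pixels):
    """Classify a block's contrast from luminance values (0-255) using the
    average-value and gradation-difference rules given in the text."""
    avg = sum(pixels) / len(pixels)
    delta_g = max(pixels) - min(pixels)  # gradation difference
    if 64 <= avg <= 192:
        if delta_g < 128:
            return "low"
        if delta_g < 192:
            return "high"
        return "too high"
    return None  # case not covered by the example rules
```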
- the criterion by which the evaluation unit 13 evaluates the contrast level of a block (i, j) is not limited to the criterion described above.
- For example, the evaluation unit 13 may be configured as follows: when the ratios, to the total number of pixels included in the block (i, j), of the number of pixels below a first predetermined gradation and of the number of pixels at or above a second predetermined gradation are both equal to or greater than a predetermined ratio, the evaluation unit 13 evaluates the contrast of the block (i, j) as “high”.
- Otherwise, the evaluation unit 13 evaluates the contrast of the block (i, j) as “low”.
- the second predetermined gradation number is a gradation number larger than the first predetermined gradation number.
- the processing determination unit 14 determines the processing content to be performed on each piece of pixel information included in the block (i, j) according to the evaluation result of the evaluation unit 13. In other words, the process determination unit 14 performs step S14. In the present embodiment, the process determining unit 14 determines the amount by which the contrast of the block (i, j) is changed according to the contrast level of the block (i, j) evaluated by the evaluating unit 13.
- the amount of change in contrast, which is the processing content in the present embodiment, is represented by, for example, an integer from −30 to 30.
- the process determination unit 14 selects the amount of change in contrast to be applied to each piece of pixel information of the block (i, j) from the range of −30 to 30 according to the evaluation result of the evaluation unit 13.
- For the block (i, j) whose contrast the evaluation unit 13 evaluated as “low”, the process determination unit 14 selects “+30” as the amount by which the contrast is changed. For the block (i, j) evaluated as “high” by the evaluation unit 13, the process determination unit 14 selects “+10”. For the block (i, j) whose contrast is evaluated as “too high”, the process determination unit 14 selects “−20”.
- the image processing unit 15 that is an image processing unit performs processing corresponding to the processing content determined by the processing determination unit 14 for each piece of pixel information included in the block (i, j). That is, the image processing unit 15 adjusts the output level of each piece of pixel information included in the block (i, j) so as to correspond to the amount of change in contrast determined by the processing determination unit 14. In other words, the image processing unit 15 performs Step S15.
- each tone curve associated with each amount of contrast change is determined in advance.
- Each tone curve is stored in the storage unit 41, for example.
- the image processing unit 15 adjusts the output level of each piece of pixel information included in the block (i, j) so as to correspond to the amount of change in contrast, by selecting the tone curve associated with the amount of change determined by the processing determination unit 14.
- the tone curve indicated by a broken line in FIG. 5A is a tone curve that is employed when the amount of change in contrast is “0”, that is, when the contrast is not changed.
- the image processing unit 15 selects the tone curve indicated by the solid line in FIG. 5A and adjusts each piece of pixel information included in the block (i, j), whereby the contrast of the block (i, j) is enhanced.
- each tone curve associated with each amount of contrast change can be appropriately determined by the designer when designing the image processing apparatus 11.
- the tone curve associated with the contrast-change amount “+30” is not limited to the tone curve shown in FIG. 5A; for example, it may be the tone curve shown in FIG. 5B. That is, the amount by which the output level is decreased in the region where the input level is low may differ from the amount by which the output level is increased in the region where the input level is high.
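A tone curve can be applied as a 256-entry lookup table. The S-shaped curve below is an illustrative assumption, not one of the patent's stored curves; it lowers output below mid-gray and raises it above, scaled by the contrast-change amount:

```python
def s_curve_lut(amount):
    """Build a 256-entry lookup table for a simple, symmetric S-shaped tone
    curve; `amount` stands in for the contrast-change amount."""
    lut = []
    for x in range(256):
        delta = amount * (x - 128) / 128.0  # negative below mid-gray, positive above
        lut.append(min(255, max(0, round(x + delta))))
    return lut

def apply_tone_curve(pixels, lut):
    """Adjust each pixel's output level via the selected tone curve."""
    return [lut[p] for p in pixels]
```

As the text notes, an asymmetric curve (different amounts below and above mid-gray) is equally possible; one table per stored contrast-change amount would match the description of predetermined curves.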
- the image processing apparatus 11 evaluates contrast, an attribute related to the luminance of the block (i, j), according to each of at least one piece of pixel information included in the block (i, j). In addition, the image processing apparatus 11 determines, according to the evaluation result, the processing content to be applied to each of the at least one piece of pixel information included in the block (i, j), and applies processing corresponding to that content to each of them.
- the image processing apparatus 11 may instead be configured to evaluate an attribute relating to either the hue or the saturation of the block (i, j) according to each of at least one piece of pixel information included in the block (i, j).
- In that case, each of the evaluation unit 13, the processing determination unit 14, and the image processing unit 15 should be configured as follows.
- the evaluation unit 13 determines the saturation of the block (i, j) according to the pixel information included in the block (i, j).
- the process determination unit 14 determines an amount for changing the saturation of the block (i, j) according to the saturation of the block (i, j) evaluated by the evaluation unit 13.
- the image processing unit 15 adjusts the output level of each piece of pixel information included in the block (i, j) so as to correspond to the amount of change in the saturation determined by the processing determination unit 14.
- the evaluation unit 13 evaluates the saturation of each pixel information as an integer from 0 to 100 according to the pixel information included in the block (i, j).
- the evaluation unit 13 calculates the average saturation of the block (i, j) by averaging the saturation of each pixel information included in the block (i, j).
- When the average saturation of the block (i, j) is below a predetermined value, the evaluation unit 13 evaluates the saturation of the block (i, j) as “low”.
- the process determination unit 14 determines an amount for changing the saturation of the block (i, j) according to the evaluation result of the evaluation unit 13. For example, for the block (i, j) evaluated as “low” by the evaluation unit 13, the process determination unit 14 selects “+20” as the amount to change the saturation.
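The saturation evaluation above can be sketched as follows. The per-pixel saturation formula (HSV-style chroma over maximum) and the “low” threshold are illustrative assumptions, since the description gives only the 0-100 scale and the averaging step:

```python
def pixel_saturation(r, g, b):
    """Per-pixel saturation on a 0-100 scale (HSV-style chroma/max; the exact
    formula is not given in the description, so this is an illustrative choice)."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0 if mx == 0 else round(100 * (mx - mn) / mx)

def average_saturation(pixels):
    """Average the per-pixel saturations over the block, as described."""
    sats = [pixel_saturation(r, g, b) for (r, g, b) in pixels]
    return sum(sats) / len(sats)

def saturation_change(avg, threshold=30):
    """Select '+20' for a block evaluated as 'low'; the threshold value is an
    assumption, not stated in the text."""
    return 20 if avg < threshold else 0
```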
- As the method of changing the saturation, any one of the existing methods may be selected as appropriate.
- Examples of the method for changing the saturation include the following three: (1) converting the RGB signals into YCbCr signals, multiplying the YCbCr signals by a coefficient, and inversely converting them back into RGB signals; (2) using the L*a*b* space; and (3) simply multiplying the RGB signals by coefficients as in expression (1) below.
- In expression (1), R_input, G_input, and B_input indicate the gradation values of the pixel information before the saturation is changed, R_output, G_output, and B_output indicate the gradation values after the saturation is changed, and the 3-row, 3-column matrix holds the coefficients.
- the image processing unit 15 changes the saturation of the block (i, j) in accordance with the amount of change of the saturation determined by the processing determination unit 14. In this manner, each of the plurality of pieces of pixel information included in the block (i, j) is adjusted.
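Method (3) above can be sketched with a standard saturation-scaling matrix. The coefficients of the patent's expression (1) are not reproduced in this text, so the construction below, built from the common Rec. 601 luma weights, is an illustrative assumption:

```python
def saturation_matrix(s):
    """3 x 3 coefficient matrix that scales saturation by factor s, built from
    the common Rec. 601 luma weights. A standard construction for illustration;
    not the coefficients of the patent's expression (1)."""
    wr, wg, wb = 0.299, 0.587, 0.114
    return [
        [wr + (1 - wr) * s, wg * (1 - s),      wb * (1 - s)],
        [wr * (1 - s),      wg + (1 - wg) * s, wb * (1 - s)],
        [wr * (1 - s),      wg * (1 - s),      wb + (1 - wb) * s],
    ]

def apply_matrix(rgb, mat):
    """Multiply an (R, G, B) tuple by a 3 x 3 matrix, clamping to 0-255."""
    return tuple(
        min(255, max(0, round(sum(mat[i][k] * rgb[k] for k in range(3)))))
        for i in range(3)
    )
```

With s = 1 the matrix is the identity (no change); with s = 0 every pixel collapses to its luma, i.e. grayscale; values above 1 increase saturation, matching a “+20”-style adjustment after suitable mapping of the amount to s.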
- the evaluation unit 13 evaluates the lightness of the block (i, j) according to at least one piece of pixel information included in the block (i, j).
- the process determination unit 14 determines an amount to change the lightness of the block (i, j) according to the lightness of the block (i, j) evaluated by the evaluation unit 13.
- the image processing unit 15 adjusts each of at least one piece of pixel information included in the block (i, j) so as to correspond to the amount of change in lightness determined by the processing determination unit 14.
- Lightness is defined by a relational expression, based on the gradations of red, green, and blue, that differs from the expression for luminance; lightness is therefore one of the attributes related to luminance.
- the evaluation unit 13 determines the hue of the block (i, j) according to at least one piece of pixel information included in the block (i, j).
- the process determination unit 14 determines an amount to change the hue of the block (i, j) according to the hue of the block (i, j) evaluated by the evaluation unit 13.
- the image processing unit 15 adjusts each of at least one piece of pixel information included in the block (i, j) so as to correspond to the amount by which the hue determined by the processing determination unit 14 is changed.
- the evaluation unit 13 evaluates whether the hue of the block (i, j) is a skin color according to each piece of pixel information included in the block (i, j). That is, the evaluation unit 13 evaluates whether each piece of pixel information included in the block (i, j) has a red gradation value in the range of 220 or more and less than 250, a green gradation value in the range of 170 or more and less than 220, and a blue gradation value in the range of 130 or more and less than 220.
- when it is evaluated that the hue of the block (i, j) is a skin color, the process determining unit 14 determines, for example, to smooth the hue of the block (i, j). Alternatively, the processing determination unit 14 may determine not to change the hue of the block (i, j), that is, not to process the plurality of pieces of pixel information included in the block (i, j).
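- As a minimal sketch, the skin-color evaluation and the resulting determination can be written as follows; the tuple representation of pixel information and the returned labels are illustrative, not names from the patent.

```python
def is_skin_color(pixels):
    """Evaluate whether every (R, G, B) piece of pixel information in a
    block falls inside the skin-color gradation ranges given in the text:
    220 <= R < 250, 170 <= G < 220, 130 <= B < 220."""
    return all(220 <= r < 250 and 170 <= g < 220 and 130 <= b < 220
               for r, g, b in pixels)

def decide_hue_processing(pixels):
    """When the block is evaluated as skin-colored, smooth the hue (the
    text's alternative of leaving the block unprocessed is equally
    valid); the non-skin branch is left unspecified here."""
    return "smooth-hue" if is_skin_color(pixels) else "unspecified"
```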
- each of the evaluation unit 13 and the processing determination unit 14 may be configured as follows.
- the evaluation unit 13 evaluates whether or not the block (i, j) satisfies a predetermined condition in accordance with each of at least one piece of pixel information included in the block (i, j).
- the processing determination unit 14 determines, as the processing content, not to perform processing on each of the at least one piece of pixel information included in the block (i, j).
- as in the case of changing the saturation, any one of the existing methods for smoothing the hue may be selected as appropriate.
- one example is the method of (1) converting the color of each piece of pixel information represented by RGB signals into the L*a*b* space, (2) smoothing the chromaticity (a* and b*) of each piece of pixel information, and (3) inversely converting the color of each smoothed piece of pixel information from the L*a*b* space back into RGB signals.
- the image processing unit 15 adjusts each of the plurality of pieces of pixel information included in the block (i, j) so as to smooth the hue of the block (i, j).
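- Step (2) of the method above can be sketched as follows, assuming the RGB-to-L*a*b* conversions of steps (1) and (3) are handled elsewhere; the block-wide mean used here is just one possible smoothing filter.

```python
def smooth_chromaticity(lab_pixels):
    """Smooth only the chromaticity (a* and b*) of a block's pixels,
    given as (L*, a*, b*) tuples, leaving L* untouched.  Here every
    pixel's chromaticity is replaced by the block mean."""
    n = len(lab_pixels)
    mean_a = sum(a for _, a, _ in lab_pixels) / n
    mean_b = sum(b for _, _, b in lab_pixels) / n
    return [(L, mean_a, mean_b) for L, _, _ in lab_pixels]
```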
- the image processing apparatus 11 may be configured to evaluate a combination of at least two of the attributes relating to the luminance (contrast), hue, and saturation of the block (i, j).
- the evaluation unit 13 evaluates, according to each piece of pixel information included in the block (i, j), whether the block (i, j) consists only of a single color or an achromatic color. That is, the evaluation unit 13 evaluates whether each piece of pixel information included in the block (i, j) has a constant luminance and constant red, green, and blue gradation values.
- in that case, the processing determination unit 14 determines not to perform processing on each piece of pixel information included in the block (i, j).
- FIG. 6 shows an example of processing contents determined by combining and evaluating attributes related to at least one of luminance, hue, and saturation.
- FIG. 6 is an enlarged plan view of each image block showing an example of the processing content determined by the processing determination unit 14.
- the image processing apparatus 11 can perform fine processing for each block (i, j) according to the feature of the block (i, j).
- the processing content performed by the image processing apparatus 11 may include edge enhancement processing as described in block (3, 3).
- FIG. 7 shows another example of the processing content determined by the processing determination unit 14. As illustrated in FIG. 7, the process determination unit 14 selects “+20” as the amount of change in the saturation for the block (i, j), and selects not to process the adjacent blocks surrounding the block (i, j), namely the blocks (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j), and (i+1, j+1).
- the block (i, j) corresponds to the first pixel block described in the claims.
- the blocks (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j), and (i+1, j+1) correspond to the second pixel block recited in the claims.
- FIG. 8A is a graph showing the amount of change in saturation determined for the blocks (i, j-1), (i, j), and (i, j+1) shown in FIG. 7.
- when the processing according to the processing content shown in FIG. 7 is performed on each piece of pixel information representing the image 51, a clear saturation difference at the boundary between the block (i, j) and the adjacent blocks may be recognized by the user. As a result, the user may feel that the image represented by the processed image data is unnatural.
- therefore, the processing correction unit 16, which is the processing correction unit described in the claims, corrects at least one of the processing content applied to the block (i, j) (the first processing content) and the processing content applied to the adjacent blocks (the second processing content).
- by this correction, the processing correction unit 16 gives continuity between the effect of the first processing content and the effect of the second processing content (smoothly connects the first processing content and the second processing content).
- the process correction unit 16 can reduce the possibility that the user recognizes the saturation difference as described above.
- the mode of smoothly connecting the first processing content and the second processing content is not limited, but may be the modes shown in FIGS. 8B to 8E, for example.
- (b) to (e) of FIG. 8 are graphs showing the amount of change in saturation when the processing for increasing the saturation is performed on the block (i, j) and the adjacent blocks shown in FIG. 7 while performing the boundary processing.
- in the mode shown in (b) of FIG. 8, while the amount of change in the saturation for the block (i, j) is kept at “+20”, the amount of change for the adjacent blocks is decreased linearly to 0 within the range of the adjacent blocks. As a result, the mode shown in FIG. 8B eliminates the discontinuity in the amount of saturation change that can occur near the boundary between the block (i, j) and the adjacent blocks.
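- The linear mode of (b) can be sketched as a one-dimensional ramp; the per-pixel distance parameterization and the 8-pixel block depth in the examples are assumptions for illustration.

```python
def ramped_amount(distance, block_width, amount=20.0):
    """Saturation-change amount at a pixel `distance` pixels past the
    boundary of the block (i, j), inside an adjacent block that is
    `block_width` pixels deep: the full amount inside the block (i, j),
    falling linearly to zero across the adjacent block."""
    if distance <= 0:
        return amount          # inside the block (i, j): "+20"
    if distance >= block_width:
        return 0.0             # beyond the outer edge of the adjacent block
    return amount * (1.0 - distance / block_width)
```

The quadratic mode of (c) would replace the linear factor with a squared one; the shape of the ramp, not its mechanism, is what distinguishes the modes.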
- in the mode shown in (c) of FIG. 8, while the amount of change in the saturation for the block (i, j) is kept at “+20”, the amount of change for the adjacent blocks is decreased to 0 along a quadratic curve within the range of the adjacent blocks. Thereby, the mode shown in (c) of FIG. 8 eliminates the discontinuity in the amount of saturation change that may occur near the boundary between the block (i, j) and the adjacent blocks.
- in the mode shown in (d) of FIG. 8, the amount of change in saturation is decreased along a quadratic curve as in the mode shown in (c) of FIG. 8, but differs from that mode in that the amount of change for the block (i, j) is decreased starting from the vicinity of its outer edge.
- in the mode shown in (e) of FIG. 8, as in the mode shown in (b) of FIG. 8, the amount of change in the saturation for the block (i, j) is kept at “+20” and the amount of change is decreased linearly to 0. However, it differs from the mode shown in (b) of FIG. 8 in that the gradual decrease is spread not only over the nearest adjacent blocks but also over the next-adjacent blocks surrounding their outer periphery.
- the configuration of the image processing apparatus 11 has been described by taking an example in which the image processing apparatus 11 performs image processing on image data representing a color image.
- the image data processed by the image processing apparatus 11 is not limited to image data representing a color image, and may be image data representing a monochrome image.
- Image data representing a monochrome image is composed of a signal representing a monochrome (for example, white) gradation value.
- the monochrome gradation value can also be interpreted as luminance. Therefore, the image processing apparatus 11 can perform the process according to the luminance of the block (i, j) among the processes described above on the image data representing the monochrome image. That is, the image processing apparatus 11 can be applied to both image data representing a monochrome image and image data representing a color image.
- the inventors of the present application have found that the appearance of an image cannot be sufficiently improved only by the processing that the image processing apparatus described in Patent Document 1 performs on a block determined to be a grayscale image area.
- that is, when the image processing device described in Patent Document 1 is applied to a natural image (for example, an image taken using a camera), the image processing device cannot sufficiently improve the appearance of the natural image.
- in contrast, the inventors have found that since the image processing apparatus 11 can perform fine contrast adjustment according to the contrast of each block (i, j), it can improve the appearance of an image beyond what the image processing apparatus described in Patent Document 1 can achieve.
- the image processing apparatus 11 can be suitably used for image data representing a natural image.
- the image processing apparatus 11 has been described using the digital camera 1 as an example of an image display apparatus.
- the image display device including the image processing device 11 is not limited to a digital camera.
- the image display device may be any device that is assumed to automatically perform processing on pixel information constituting the transferred image data, for example.
- the image display device may be, for example, a printer, a display device such as an LCD (including a television receiver), or a smartphone equipped with an imaging unit.
- the above-described functions of the image processing apparatus 11 may be implemented in a specific mode (for example, a “clean mode” or a “vivid mode”) provided in the printer.
- when each control block (the image dividing unit 12, the evaluation unit 13, the process determining unit 14, the image processing unit 15, and the process correcting unit 16) of the image processing apparatus 11 is realized by software, the software may be software for personal computers or software for smartphones (so-called apps).
- one embodiment of the present invention is also suitable for the case where image data captured by an imaging unit included in a smartphone is automatically developed. Since smartphones generally do not store RAW data, gradation collapse and gradation loss often occur when the image data is first stored. By applying the image processing apparatus 11, the occurrence of gradation collapse and gradation loss can be suppressed.
- FIG. 9 shows the result of processing the image data representing the image generated by the imaging unit 21 using the image processing apparatus 11 of the present embodiment.
- FIG. 9A shows an image 51 represented by the image data acquired by the image processing apparatus 11. That is, the image 51 is an image represented by image data that has not been processed.
- FIG. 9B shows an image 52 represented by image data after image processing is performed by the image processing apparatus 11.
- FIG. 9C shows an image 152 represented by image data after being subjected to image processing by the image processing apparatus of the comparative example.
- the image processing apparatus of the comparative example is configured to apply uniform processing to all of the plurality of pieces of pixel information constituting image data representing an image.
- Image 51 is a picture of a child in a flower garden.
- the image processing apparatus 11 is configured to perform a process of increasing the saturation of an area considered to have a bright color such as a flower or the sky.
- in the image processing apparatus of the comparative example, as a result of uniformly increasing the saturation of all of the pixel information constituting the image data representing the image 51, the face color of the child was also changed (see (c) of FIG. 9). That is, it has been found that it is difficult for the image processing apparatus of the comparative example to improve the appearance of the image 51.
- in contrast, the image processing apparatus 11 divides image data representing an image into a plurality of blocks (i, j) and can then apply, to each piece of pixel information included in each block (i, j), processing content according to the attribute of that block. Specifically, depending on the attribute of each block (i, j), the processing content can be selected finely: increasing the contrast, increasing the saturation, performing no processing, and so on.
- as a result, the image processing device 11 can generate image data representing the image 52, in which the saturation of the blocks (i, j) included in the region considered to be flowers is increased while discoloration of the blocks (i, j) included in the region considered to be the child's face color is suppressed. That is, the image processing apparatus 11 can improve the appearance of the image.
- FIG. 10 is an enlarged plan view of the image 51 showing the range of pixel blocks to be evaluated when the evaluation unit 13 included in the image processing apparatus 11 according to the present embodiment evaluates the attribute of the block (i, j).
- the configuration of the image processing apparatus 11 according to the present embodiment is obtained by modifying the processing performed by the evaluation unit 13 and the process determination unit 14 included in the image processing apparatus 11 according to the first embodiment. Therefore, in this embodiment, the contents of the processing performed by the evaluation unit 13 and the process determination unit 14 are described.
- the image dividing unit 12, the image processing unit 15, and the processing correction unit 16 included in the image processing apparatus 11 according to the present embodiment are configured in the same way as the corresponding units included in the image processing apparatus 11 according to the first embodiment.
- in evaluating the attribute of the block (i, j), the evaluation unit 13 considers not only each piece of pixel information included in the block (i, j) but also each piece of at least one piece of pixel information included in the blocks surrounding the block (i, j).
- the nearest neighbor block 512 surrounding the periphery of the block 511, which is the block (i, j), is composed of eight pixel blocks, and the next neighbor block 513 surrounding the nearest neighbor block 512 is composed of 16 pixel blocks.
- the evaluation unit 13 is configured to evaluate the attribute of the block (i, j) according to all the pixel information included in the block 511 and the nearest neighbor block 512. Alternatively, the evaluation unit 13 may be configured to evaluate the attribute of the block (i, j) according to all the pixel information included in the block 511, the nearest neighbor block 512, and the next neighbor block 513.
- a case where the evaluation unit 13 evaluates the attribute of the block (i, j) according to all the pixel information included in the block 511 and the nearest neighbor block 512 will be described as an example. Further, the case where the evaluation unit 13 evaluates the brightness that is one of the attributes of the block (i, j) will be described as an example.
- the evaluation unit 13 evaluates the brightness of each pixel information as an integer between 0 and 100 according to the pixel information included in the block 511.
- the evaluation unit 13 calculates the average brightness of the block 511 by averaging the brightness of each pixel information included in the block 511. Further, the evaluation unit 13 evaluates the brightness of each pixel information as an integer of 0 or more and 100 or less according to the pixel information included in the nearest neighbor block 512, and calculates the average brightness of the nearest neighbor block 512.
- the evaluation unit 13 calculates a lightness difference Δb, which is the difference between the average lightness of the nearest neighbor block 512 and the average lightness of the block 511, and when the lightness difference Δb is +10 or more and less than +30, evaluates the lightness of the block 511 as “higher than the nearest block”.
- the process determination unit 14 determines the amount by which the lightness of the block 511 is changed according to the evaluation result of the evaluation unit 13. For example, for the pixel information included in the block 511 whose brightness is evaluated as “higher than the nearest block” by the evaluation unit 13, the process determination unit 14 selects “+30” as the amount by which the brightness is changed. In addition, the process determination unit 14 determines not to perform the process on the pixel information included in the block 511 evaluated by the evaluation unit 13 as having a brightness other than “higher than the nearest neighbor block”.
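- The Δb rule above can be sketched as follows; since the translation is ambiguous about which average is subtracted from which, the sign convention Δb = (block average) - (ring average) is an assumption.

```python
def lightness_change_amount(block_lightness, ring_lightness):
    """Given per-pixel lightness values (integers 0..100) for the block
    511 and its nearest-neighbor ring 512, select "+30" only when the
    block is evaluated as "higher than the nearest block" (lightness
    difference of +10 or more and less than +30); otherwise select no
    processing (0)."""
    avg_block = sum(block_lightness) / len(block_lightness)
    avg_ring = sum(ring_lightness) / len(ring_lightness)
    delta_b = avg_block - avg_ring  # assumed sign convention
    return 30 if 10 <= delta_b < 30 else 0
```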
- the image processing apparatus 11 can improve the appearance of the image 51.
- the evaluation unit 13 and the processing determination unit 14 may be configured as follows.
- the evaluation unit 13 evaluates the hue of each piece of pixel information as an integer between -180 and 180 according to each piece of pixel information included in the block 511, and evaluates the saturation of each piece of pixel information as an integer between 0 and 100.
- the evaluation unit 13 calculates the average hue and average saturation of the block 511 by averaging the hue and saturation of each pixel information included in the block 511.
- similarly, the evaluation unit 13 evaluates the hue of each piece of pixel information as an integer of -180 or more and 180 or less according to the plurality of pieces of pixel information included in the nearest neighbor block 512, and evaluates the saturation of each piece of pixel information as an integer of 0 or more and 100 or less.
- the evaluation unit 13 calculates the average hue and average saturation of the nearest neighbor block 512 by averaging the hue and saturation of each pixel information included in the nearest neighbor block 512.
- the evaluation unit 13 calculates a hue difference Δh, which is the difference between the average hue of the nearest neighbor block 512 and the average hue of the block 511.
- when the hue difference Δh is -10 or more and +10 or less, and the average saturation of the block 511 and the average saturation of the nearest neighbor block 512 are both 20 or more and less than 50, the evaluation unit 13 evaluates that the color of the block 511 and the color of the nearest neighbor block 512 are “the same” and that their saturation is “low”.
- the process determination unit 14 determines the amount by which the saturation of the block 511 is changed according to the evaluation result of the evaluation unit 13. For example, when the evaluation unit 13 evaluates that the color of the block 511 and the color of the nearest neighbor block 512 are “the same” and the saturation is “low”, the process determination unit 14 selects “+20” as the amount by which the saturation is changed for each piece of pixel information included in the block 511 and the nearest neighbor block 512.
- otherwise, the processing determination unit 14 selects, for each piece of pixel information included in the block 511, the amount by which the saturation is changed according to the evaluation result of the saturation.
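- The joint hue/saturation rule of this embodiment can be sketched as follows, with hue values as integers in -180..180 and saturations in 0..100 as evaluated above; returning 0 in the fallback branch stands in for the per-pixel saturation evaluation the text falls back on.

```python
def saturation_change_amount(block_hues, block_sats, ring_hues, ring_sats):
    """Select "+20" when the hue difference between the block 511 and
    its nearest-neighbor ring 512 is within -10..+10 and both average
    saturations are 20 or more and less than 50 (colors "same",
    saturation "low"); otherwise select 0 as a placeholder."""
    def avg(values):
        return sum(values) / len(values)
    delta_h = avg(ring_hues) - avg(block_hues)
    if -10 <= delta_h <= 10 and 20 <= avg(block_sats) < 50 \
            and 20 <= avg(ring_sats) < 50:
        return 20
    return 0
```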
- FIG. 11 is a block diagram of the image processing apparatus 111 and the digital camera 101 including the image processing apparatus 111.
- FIG. 12 is a flowchart showing the flow of processing in the image processing apparatus 111.
- FIG. 13 is a plan view of a screen showing pixel block groups associated with a predetermined type of subject by the detection unit 117 included in the image processing apparatus 111.
- FIG. 14A shows an image 151 represented by image data acquired by the image processing apparatus 111.
- FIG. 14B shows an image 152 represented by image data composed of pixel information after image processing is performed by the image processing apparatus 111.
- the image processing apparatus 111 includes an image dividing unit 112, an evaluation unit 113, a process determining unit 114, an image processing unit 115, a process correcting unit 116, and a detecting unit 117.
- the image processing device 111 is obtained by adding a detection unit 117 to the image processing device 11 according to the first embodiment.
- the image division unit 112, the evaluation unit 113, the processing determination unit 114, the image processing unit 115, and the processing correction unit 116 are configured in the same way as the image division unit 12, the evaluation unit 13, the processing determination unit 14, the image processing unit 15, and the processing correction unit 16, respectively.
- the image processing method performed by the image processing apparatus 111 includes step S111, step S112, step S113, step S114, step S115, step S116, and step S117.
- Step S111 is a step of acquiring image data.
- Step S112 is a step of dividing the image data into a plurality of pixel blocks.
- Step S113 is a step of evaluating the contrast of each pixel block.
- Step S114 is a step of detecting a pixel block group associated with a predetermined type.
- Step S115 is a step of evaluating the attributes of the pixel blocks forming the pixel block group.
- Step S116 is a step of determining the processing content to be applied to each pixel information.
- Step S117 is a step of processing each pixel information. Steps S111 to S113 and steps S116 to S117 correspond to steps S11 to S15 shown in FIG. 2, respectively.
- the detection unit 117 detects a pixel block group that is a group of pixel blocks formed by collecting blocks (i, j) adjacent to each other and associated with a predetermined type of subject. That is, the detection unit 117 performs step S114.
- to detect a pixel block group, which is formed by a set of mutually adjacent blocks (i, j) and is associated with a predetermined type of subject, the detection unit 117 uses an object detection algorithm or deep learning.
- the detection unit 117 detects the face area 1511, the flower area 1512, and the shadow area 1513 included in the image 151 as a pixel block group.
- the face region 1511 is a pixel block group associated with the face of the subject (a child), the flower region 1512 is a pixel block group associated with the subject (flowers), and the shadow region 1513 is a pixel block group associated with the shadow of the photographer superimposed on the subject (flowers).
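- Forming pixel block groups from per-block subject labels can be sketched as a flood fill over adjacent blocks; the labels themselves would come from the object-detection algorithm or deep learning mentioned above, which this sketch does not reproduce, and the label strings are illustrative.

```python
from collections import deque

def pixel_block_groups(labels):
    """Group mutually adjacent blocks (i, j) carrying the same subject
    label into pixel block groups.  `labels` is a 2-D list of per-block
    labels, with None for blocks not associated with any subject."""
    rows, cols = len(labels), len(labels[0])
    seen, groups = set(), []
    for i in range(rows):
        for j in range(cols):
            if labels[i][j] is None or (i, j) in seen:
                continue
            # Breadth-first flood fill over 4-connected neighbours.
            group, queue = [], deque([(i, j)])
            seen.add((i, j))
            while queue:
                a, b = queue.popleft()
                group.append((a, b))
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if (0 <= na < rows and 0 <= nb < cols
                            and (na, nb) not in seen
                            and labels[na][nb] == labels[i][j]):
                        seen.add((na, nb))
                        queue.append((na, nb))
            groups.append((labels[i][j], sorted(group)))
    return groups

groups = pixel_block_groups([["face", "face", None],
                             [None, "flower", "flower"]])
```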
- the predetermined type of subject in the present embodiment is not limited to a real object such as a person or a flower, but also includes a pattern, such as a shadow, represented by the intensity of light cast on the scene.
- the evaluation unit 113 evaluates an attribute relating to at least one of the luminance, hue, and saturation of each block (i, j) according to each piece of pixel information included in the blocks (i, j) forming the face area 1511, the flower area 1512, and the shadow area 1513.
- the evaluation unit 113 performs step S115 in addition to step S113.
- the process determining unit 114 determines the processing content to be applied to each piece of pixel information included in the blocks (i, j) forming the pixel block group, in accordance with at least one of the attribute (contrast) of those blocks evaluated in step S113 and the predetermined type associated with the pixel block group evaluated in step S115. That is, the process determining unit 114 performs step S116.
- for example, the process determining unit 114 (1) selects doing nothing as the processing content when the predetermined subject is a face, (2) selects “+20” as the amount to change the saturation when the predetermined subject is a flower, and (3) selects “+30” as the amount to change the lightness when the predetermined subject is a shadow.
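- The three determinations above amount to a lookup from subject type to processing content; the label strings and the tuple encoding of the processing content are illustrative, not names from the patent.

```python
# Processing contents chosen by the processing determination unit 114
# for each predetermined type of subject, as described in the text.
PROCESSING_BY_SUBJECT = {
    "face":   ("none", 0),          # (1) do nothing
    "flower": ("saturation", 20),   # (2) change the saturation by "+20"
    "shadow": ("lightness", 30),    # (3) change the lightness by "+30"
}

def processing_for(subject):
    """Fall back to no processing for an unknown subject type."""
    return PROCESSING_BY_SUBJECT.get(subject, ("none", 0))
```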
- the evaluation according to the attribute of the pixel block described in the first embodiment is effective for extracting features common to a narrow range of an image.
- the evaluation according to the attribute of the pixel block group associated with a predetermined type of subject described in the present embodiment is effective for extracting features common to a wide range of an image.
- the appearance of the image can be improved more accurately by selecting the processing content in consideration of the features common to the narrow range and the features common to the wide range included in the image.
- the detection unit 117 may be configured to associate the shadow region 1513 only with the shadow of the photographer, or to associate it with both the shadow of the photographer and the flowers.
- in the former case, the processing determination unit 114 determines the processing content to be applied to each piece of pixel information included in the blocks (i, j) forming the pixel block group based on the fact that the predetermined type is a shadow.
- in the latter case, the processing determination unit 114 determines the processing content to be applied to each piece of pixel information included in the blocks (i, j) forming the pixel block group based on the fact that the predetermined types are the shadow and the flowers.
- areas other than the face area 1511, the flower area 1512, and the shadow area 1513 are areas not associated with a predetermined type of subject.
- the process determination unit 114 selects predetermined processing content for a region associated with a predetermined type of subject such as a face, a flower, or a shadow, and may select processing content corresponding to the attribute of the block (i, j) for a region not associated with a predetermined type of subject.
- the process determining unit 114 may be configured to refer to metadata included in image data representing an image when selecting the processing content. For example, when the keyword “flower” is included in the metadata of the image 151, the image 151 is considered to be taken with attention paid to the flower. In this case, the process determining unit 114 may be configured to make only “flowers” out of predetermined types of subjects effective and ignore “faces”. In this way, the processing determination unit 114 can reflect the intention of the photographer in the contents of the image processing by referring to the keywords included in the metadata.
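- The metadata-based narrowing can be sketched as a set intersection; the keyword and label strings are illustrative, and with no matching keyword every detected type stays effective.

```python
def effective_subjects(detected_types, metadata_keywords):
    """Keep only the predetermined subject types that match a keyword in
    the image metadata, reflecting the photographer's intent; if no
    keyword matches, all detected types remain effective."""
    focus = detected_types & set(metadata_keywords)
    return focus if focus else detected_types
```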
- FIG. 14 shows a result of processing the image data representing the image generated by the imaging unit 21 using the image processing device 111.
- FIG. 14A shows an image 151 represented by image data acquired by the image processing apparatus 111. That is, the image 151 is an image represented by image data that has not been processed.
- FIG. 14B shows an image 152 represented by image data after image processing is performed by the image processing apparatus 111.
- the image 151 is a picture of a child in a flower garden, similar to the image 51 shown in FIG. 9. As described above, the shadow of the photographer is superimposed on the flowers included in the image 151.
- the image processing apparatus 111 is configured to perform, in addition to the processing of increasing the saturation of a region considered to have a vivid color, processing of increasing the brightness of a region considered to be a shadow.
- the image processing apparatus 111 can suppress the influence of the shadow of the photographer superimposed on the flower. That is, the image processing apparatus 111 can improve the appearance of the image 151.
- Each control block (the image dividing unit 12, the evaluation unit 13, the process determining unit 14, the image processing unit 15, and the process correcting unit 16) of the image processing apparatus 11 may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
- in the latter case, the image processing apparatus 11 includes a CPU that executes the instructions of a program, which is software realizing each function, a ROM (Read Only Memory) or storage device (referred to as a “recording medium”) in which the program and various data are recorded so as to be readable by a computer (or CPU), and a RAM (Random Access Memory) into which the program is expanded.
- an object of the present invention is achieved by the computer (or CPU) reading the program from the recording medium and executing it.
- as the recording medium, a “non-transitory tangible medium” such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
- the program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) that can transmit the program.
- one embodiment of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
- the image processing device (11, 111) includes: an evaluation unit (13, 113) that evaluates, according to at least one piece of pixel information included in each pixel block (block (i, j)) defined by dividing an image (51, 151) into a plurality of regions, an attribute relating to at least one of the luminance, hue, and saturation of that pixel block; a process determination unit (14, 114) that determines, according to the attribute, the processing content to be performed on each of the at least one piece of pixel information; and an image processing unit (15, 115) that performs, on each of the at least one piece of pixel information, processing according to the processing content.
- according to the above configuration, this image processing apparatus can apply, to each of the at least one piece of pixel information included in each pixel block (block (i, j)), processing according to the processing content corresponding to the attribute of that pixel block. Therefore, it is possible to prevent color collapse or whiteout that may occur in the image (51, 151) after the processing.
- the image processing apparatus (11, 111) can perform image processing for improving the appearance of an image not only on image data representing a monochrome image but also on image data representing a color image.
- the evaluation unit (13, 113) may evaluate the level of the contrast of each pixel block (block (i, j)) according to the at least one piece of pixel information, the processing determination unit (14, 114) may determine the amount by which the contrast of each pixel block (block (i, j)) is changed according to the contrast level evaluated by the evaluation unit (13, 113), and the image processing unit (15, 115) may adjust each piece of pixel information so as to correspond to the amount of contrast change determined by the processing determination unit (14, 114).
- with the above configuration, the image processing apparatus (11, 111) determines the processing content according to the contrast, which is the attribute relating to the luminance, among the attributes of the pixel block (block (i, j)). Accordingly, it is possible to reliably prevent color collapse or whiteout that may occur in an image after processing.
- the evaluation unit (13) may acquire a luminance histogram of the at least one piece of pixel information and evaluate the level of the contrast of each pixel block (block (i, j)) according to (1) the average gradation of the luminance histogram and (2) the gradation difference between the minimum gradation and the maximum gradation of the luminance histogram, and the image processing unit (15) may select a tone curve associated with the amount of change in the contrast determined by the processing determination unit (14).
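- The histogram-based evaluation and tone-curve selection can be sketched as follows; the thresholds, level labels, and the shape of the curve around the mid gradation are illustrative assumptions, not values from the patent.

```python
def contrast_level(luminances):
    """Evaluate a block's contrast from (1) the average gradation and
    (2) the difference between the minimum and maximum gradations of
    its luminance values (0..255)."""
    average = sum(luminances) / len(luminances)
    spread = max(luminances) - min(luminances)
    if spread < 64:
        return "low"
    return "high" if average >= 128 else "mid"

def tone_curve_for(change_amount):
    """Return a tone curve (gradation -> gradation) associated with a
    contrast-change amount: steeper around the mid gradation for larger
    amounts, clamped to the 0..255 range."""
    gain = 1.0 + change_amount / 100.0
    return lambda v: min(255, max(0, round(128 + (v - 128) * gain)))

curve = tone_curve_for(30)  # tone curve for a contrast change of "+30"
```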
- with the above configuration, the image processing apparatus (11) can appropriately evaluate the contrast of each pixel block (block (i, j)) and appropriately adjust the contrast of the plurality of pieces of pixel information included in each pixel block (block (i, j)).
- the evaluation unit (13) may evaluate the saturation of each pixel block (block (i, j)) according to the at least one piece of pixel information, the processing determination unit (14) may determine the amount by which the saturation of each pixel block (block (i, j)) is changed according to the saturation evaluated by the evaluation unit (13), and the image processing unit (15) may adjust each of the at least one piece of pixel information so as to correspond to the amount of saturation change determined by the processing determination unit (14).
- In the image processing apparatus (11) according to aspect 5 of the present invention, in any one of aspects 1 to 4 above, the evaluation unit (13) evaluates the lightness of each pixel block (block (i, j)) according to the at least one piece of pixel information, the processing determination unit (14) determines an amount by which to change the lightness of each pixel block (block (i, j)) according to the lightness evaluated by the evaluation unit (13), and the image processing unit (15) adjusts each of the at least one piece of pixel information so as to correspond to the amount of lightness change determined by the processing determination unit (14).
- In the image processing apparatus (11) according to aspect 6 of the present invention, in any one of aspects 1 to 5 above, the evaluation unit (13) evaluates the hue of each pixel block (block (i, j)) according to the at least one piece of pixel information, the processing determination unit (14) determines an amount by which to change the hue of each pixel block (block (i, j)) according to the hue evaluated by the evaluation unit, and the image processing unit (15) adjusts each of the at least one piece of pixel information so as to correspond to the amount of hue change determined by the processing determination unit (14).
- With the above configurations, instead of performing image processing according to the contrast level of each pixel block (block (i, j)), the image processing apparatus (11) may perform image processing according to any of the saturation, lightness, and hue of each pixel block (block (i, j)). Note that the lightness of each pixel block (block (i, j)) is the luminance of that pixel block re-expressed under a different definition.
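Evaluating the saturation, lightness, and hue of a block can be sketched by averaging those attributes over the block's pixels in HSV space. This is an illustrative sketch, not the patented evaluation rule; note that the naive mean used here for hue would need circular averaging near the 0/1 wrap-around:

```python
import colorsys

def block_hsv_attributes(rgb_pixels):
    """Average hue, saturation, and value (lightness proxy) of one pixel
    block, given as a list of (R, G, B) tuples in 0-255.
    colorsys expects and returns components in [0, 1]."""
    h_sum = s_sum = v_sum = 0.0
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        h_sum += h
        s_sum += s
        v_sum += v
    n = len(rgb_pixels)
    return h_sum / n, s_sum / n, v_sum / n
```

The processing determination unit could then compare these per-block averages against thresholds to pick, for example, a saturation change amount such as the "+20" used in the embodiment's example.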
- The image processing apparatus (111) according to aspect 7 of the present invention, in any one of aspects 1 to 6 above, further includes a detection unit (117) that detects a pixel block group formed by a set of mutually adjacent pixel blocks (blocks (i, j)) and associated with a predetermined type of subject. The evaluation unit (113) evaluates the attribute of each pixel block (block (i, j)) forming the pixel block group according to at least one piece of pixel information included in that pixel block, and the processing determination unit (114) determines, according to at least one of the attribute and the predetermined type, the processing content to be applied to the at least one piece of pixel information included in each pixel block (block (i, j)) forming the pixel block group.
- Evaluation based on the attribute of each pixel block (block (i, j)) is effective for extracting features common to a narrow range of the image (151), whereas evaluation based on the attribute of a pixel block group associated with a predetermined type of subject is effective for extracting features common to a wide range of the image (151). With the above configuration, the processing content can be selected in consideration of both the narrow-range features and the wide-range features contained in the image (151), so the appearance of the image (151) can be improved more accurately.
- The image processing apparatus (11, 111) according to aspect 8 of the present invention, in any one of aspects 1 to 7 above, further includes an image division unit that acquires image data from the outside and divides the image represented by the image data into the plurality of pixel blocks. Each of the plurality of pixel blocks (blocks (i, j)) has a vertical pixel count and a horizontal pixel count of 50 or more and 300 or less.
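The division into an m-row, n-column grid of blocks whose sides fall in the 50-300 pixel range can be sketched as follows; the target side length of 150 pixels is an illustrative choice, not a value from the patent:

```python
def split_into_blocks(width, height, target=150):
    """Divide an image of the given size into an m x n grid of blocks
    roughly `target` pixels on a side. Returns (m, n, blocks), where each
    entry of `blocks` is ((i, j), (x0, y0, x1, y1)) with half-open bounds."""
    n = max(1, round(width / target))   # columns
    m = max(1, round(height / target))  # rows
    blocks = []
    for i in range(m):
        for j in range(n):
            x0, x1 = j * width // n, (j + 1) * width // n
            y0, y1 = i * height // m, (i + 1) * height // m
            blocks.append(((i, j), (x0, y0, x1, y1)))
    return m, n, blocks
```

For a 600 x 300 image this yields a 2 x 4 grid of 150 x 150 blocks, each within the 50-300 pixel bounds stated above.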
- The image processing apparatus (11, 111) according to aspect 9 of the present invention, in any one of aspects 1 to 8 above, further includes a processing correction unit (16, 116) that corrects the processing content determined by the processing determination unit (14, 114). The processing correction unit (16, 116) corrects at least one of (1) first processing content applied to a first pixel block (block (i, j)), which is one of the plurality of pixel blocks, and (2) second processing content applied to second pixel blocks (blocks (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j), (i+1, j+1)) adjacent to the first pixel block (block (i, j)), thereby producing continuity between the effect of the first processing content and the effect of the second processing content.
- Because the processing content is corrected so that the first processing content and the second processing content connect smoothly, unnaturalness that could otherwise occur in the image (51, 151) represented by the processed pixel information can be eliminated.
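One simple way to produce this kind of continuity is to blend each block's processing parameter (e.g. a contrast-change amount) with those of its neighbours before applying it. The uniform 4-neighbour averaging below is an illustrative smoothing choice, not the correction method claimed by the patent:

```python
def smooth_block_params(params):
    """Blend each block's processing parameter with its horizontal and
    vertical neighbours so that adjacent blocks do not produce visible
    seams. `params` is an m x n grid (list of lists) of numbers."""
    m, n = len(params), len(params[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc, cnt = params[i][j], 1
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < m and 0 <= nj < n:  # skip neighbours off-grid
                    acc += params[ni][nj]
                    cnt += 1
            out[i][j] = acc / cnt
    return out
```

A block assigned "+30" next to blocks assigned "0" would thus receive an intermediate value, so the transition between the two processing results is gradual rather than abrupt.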
- In the image processing apparatus (11, 111) according to aspect 10 of the present invention, in any one of aspects 1 to 9 above, when evaluating the attribute of each pixel block (block (i, j)), the evaluation unit (13, 113) evaluates the attribute according to at least one piece of pixel information included in that pixel block (block (i, j)) and at least one piece of pixel information included in each of at least the adjacent blocks surrounding that pixel block (blocks (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j), (i+1, j+1)).
- That is, when evaluating the attribute of each pixel block (block (i, j)), the evaluation unit (13, 113) may take into account not only that pixel block but also the adjacent blocks surrounding it.
- In the image processing apparatus (11, 111) according to aspect 11 of the present invention, in any one of aspects 1 to 10 above, the evaluation unit (13, 113) evaluates whether each pixel block (block (i, j)) satisfies a predetermined condition according to the at least one piece of pixel information, and when a pixel block (block (i, j)) satisfies the predetermined condition, the processing determination unit (14, 114) determines, as the processing content, that no processing is to be applied to the at least one piece of pixel information.
- With the above configuration, a pixel block that should not be processed can be appropriately identified and left untouched.
- A digital camera (1) according to aspect 12 of the present invention includes the image processing apparatus (11, 111) according to any one of aspects 1 to 11 and an imaging unit (21) that generates image data and provides the image data to the image processing apparatus.
- With the above configuration, the digital camera (1) provides the same effects as the image processing apparatus (11, 111) according to each aspect described above.
- The image processing apparatus (11, 111) according to each aspect of the present invention may be realized by a computer. In that case, an image processing program that realizes the image processing apparatus on the computer by causing the computer to operate as each unit included in the image processing apparatus, and a computer-readable recording medium on which that image processing program is recorded, also fall within the scope of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Color Image Communication Systems (AREA)
Abstract
Description
a processing determination unit that determines, according to an evaluation result of the evaluation unit, processing content to be applied to each of the at least one piece of pixel information; and
an image processing unit that applies, to each of the at least one piece of pixel information, processing according to the processing content.
Hereinafter, an image processing apparatus 11 according to a first embodiment of the present invention, and a digital camera 1 including the image processing apparatus 11, will be described with reference to FIGS. 1 to 9.
As shown in FIG. 1, the digital camera 1 includes the image processing apparatus 11, an imaging unit 21, a display unit 31, and a storage unit 41.
As shown in FIG. 1, the image processing apparatus 11 includes an image division unit 12, an evaluation unit 13, a processing determination unit 14, an image processing unit 15, and a processing correction unit 16. The image processing unit 15 and the processing correction unit 16 correspond to the image processing unit and the processing correction unit recited in the claims. As shown in FIG. 2, the image processing method performed by the image processing apparatus 11 includes steps S11 to S15. Step S11 acquires the image data. Step S12 divides the image data into a plurality of pixel blocks. Step S13 evaluates the contrast of each pixel block. Step S14 determines the processing content to be applied to each piece of pixel information in each pixel block. Step S15 applies the processing to each piece of pixel information in each pixel block.
The image division unit 12 acquires image data representing a color image 51 from the imaging unit 21 and divides the image data representing the image 51 into a plurality of pixel blocks arranged in m rows and n columns (see FIG. 3). In other words, the image division unit 12 performs steps S11 and S12. Each pixel block consists of a plurality of pixels arranged in a matrix along the vertical and horizontal directions.
In the color image composed of the plurality of pixel blocks, the evaluation unit 13 evaluates, according to each of at least one piece of pixel information included in block (i, j), an attribute related to at least one of the luminance, hue, and saturation of block (i, j). In the present embodiment, the evaluation unit 13 evaluates the contrast level, an attribute related to luminance, for every block (i, j). In other words, the evaluation unit 13 performs step S13.
The processing determination unit 14 determines the processing content to be applied to each piece of pixel information included in block (i, j) according to the evaluation result of the evaluation unit 13. In other words, the processing determination unit 14 performs step S14. In the present embodiment, the processing determination unit 14 determines the amount by which to change the contrast of block (i, j) according to the contrast level of block (i, j) evaluated by the evaluation unit 13.
The image processing unit 15 applies, to each piece of pixel information included in block (i, j), processing according to the processing content determined by the processing determination unit 14. That is, the image processing unit 15 adjusts the output level of each piece of pixel information included in block (i, j) so as to correspond to the amount of contrast change determined by the processing determination unit 14. In other words, the image processing unit 15 performs step S15.
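The per-block flow of steps S13 to S15 can be sketched end to end on a grayscale image (a list of rows of 0-255 values). The contrast measure (min-max span), the gain table, and the 4-pixel block size are illustrative assumptions for this sketch, not the embodiment's actual parameters:

```python
def process_blocks(img, block=4):
    """Sketch of steps S13-S15: for each block, evaluate its contrast,
    decide a processing amount, and apply it to every pixel of the block."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(0, h, block):
        for x in range(0, w, block):
            pix = [img[yy][xx] for yy in range(y, min(y + block, h))
                               for xx in range(x, min(x + block, w))]
            mean = sum(pix) / len(pix)
            span = max(pix) - min(pix)            # S13: evaluate contrast
            gain = 1.5 if span < 32 else 1.0      # S14: decide processing
            for yy in range(y, min(y + block, h)):  # S15: apply it
                for xx in range(x, min(x + block, w)):
                    v = (img[yy][xx] - mean) * gain + mean
                    out[yy][xx] = max(0, min(255, round(v)))
    return out
```

Unlike a comparative apparatus that applies one uniform correction to the whole image, each block here receives a gain chosen from its own evaluation.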
The image processing apparatus 11 of the present embodiment evaluates contrast, an attribute related to the luminance of block (i, j), according to each of at least one piece of pixel information included in block (i, j). The image processing apparatus 11 then determines, according to the evaluation result, the processing content to be applied to each of the at least one piece of pixel information included in block (i, j), and applies processing according to that content to each of those pieces of pixel information.
FIG. 7 shows another example of the processing content determined by the processing determination unit 14. As shown in FIG. 7, the processing determination unit 14 selected "+20" as the amount of saturation change for block (i, j), and selected no processing for the adjacent blocks (nearest-neighbor blocks) surrounding block (i, j), namely blocks (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j), and (i+1, j+1). Here, block (i, j) corresponds to the first pixel block recited in the claims, and blocks (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j), and (i+1, j+1) correspond to the second pixel blocks recited in the claims.
As described above, the present embodiment has described the configuration of the image processing apparatus 11 using, as an example, the case in which the image processing apparatus 11 applies image processing to image data representing a color image. However, the image data processed by the image processing apparatus 11 is not limited to image data representing a color image, and may be image data representing a monochrome image.
In the present embodiment, the digital camera 1 has been used as an example of an image display device including the image processing apparatus 11. However, the image display device including the image processing apparatus 11 is not limited to a digital camera. The image display device may be any device that is expected to automatically apply processing to the pixel information constituting transferred image data. For example, it may be a printer, a display device such as an LCD (including a television receiver), or a smartphone equipped with an imaging unit. The above-described functions of the image processing apparatus 11 may also be performed in a specific mode provided in a printer (for example, a "clean mode" or a "vivid mode").
FIG. 9 shows the result of processing image data representing an image generated by the imaging unit 21 using the image processing apparatus 11 of the present embodiment. FIG. 9(a) shows an image 51 represented by the image data acquired by the image processing apparatus 11; that is, an image represented by unprocessed image data. FIG. 9(b) shows an image 52 represented by the image data after image processing by the image processing apparatus 11. FIG. 9(c) shows an image 152 represented by the image data after image processing by an image processing apparatus of a comparative example, which is configured to apply uniform processing to all of the pixel information constituting the image data representing an image.
An image processing apparatus 11 according to a second embodiment of the present invention will be described with reference to FIG. 10. FIG. 10 is an enlarged plan view of the image 51 showing the range of pixel blocks that the evaluation unit 13 of the image processing apparatus 11 according to the present embodiment refers to when evaluating the attribute of block (i, j).
The evaluation unit 13 described in the first embodiment evaluates, in a color image composed of a plurality of pixel blocks, an attribute related to at least one of the luminance, hue, and saturation of every block (i, j) according to each of at least one piece of pixel information included in block (i, j). The evaluation unit 13 described in the present embodiment, when evaluating an attribute related to at least one of the luminance, hue, and saturation of block (i, j), considers not only each of the at least one piece of pixel information included in block (i, j) but also each of the at least one piece of pixel information included in the adjacent blocks surrounding block (i, j).
The processing determination unit 14 determines the amount by which to change the lightness of block 511 according to the evaluation result of the evaluation unit 13. For example, for pixel information included in a block 511 whose lightness the evaluation unit 13 has evaluated as "higher than the nearest-neighbor blocks", the processing determination unit 14 selects "+30" as the amount of lightness change. For pixel information included in a block 511 whose lightness is evaluated as anything other than "higher than the nearest-neighbor blocks", the processing determination unit 14 decides to apply no processing.
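The decision rule in this example can be written directly: if a block's lightness exceeds that of every nearest-neighbor block, raise it by +30; otherwise leave it unprocessed. This sketch takes the already-computed per-block lightness values as inputs:

```python
def decide_lightness_change(block_lightness, neighbour_lightness):
    """Return the lightness change amount for a block given its own
    lightness and the lightness of its nearest-neighbour blocks:
    +30 if it is brighter than all neighbours, else None (no processing)."""
    if all(block_lightness > nb for nb in neighbour_lightness):
        return +30
    return None
```

Returning `None` corresponds to the processing determination unit deciding not to process the block at all, as described above.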
When evaluating lightness, one of the attributes of block (i, j), the evaluation unit 13 and the processing determination unit 14 may also be configured as follows.
An image processing apparatus 111 according to a third embodiment of the present invention will be described with reference to FIGS. 11 to 14. FIG. 11 is a block diagram of the image processing apparatus 111 and a digital camera 101 including the image processing apparatus 111. FIG. 12 is a flowchart showing the flow of processing in the image processing apparatus 111. FIG. 13 is a plan view of a screen showing pixel block groups associated with predetermined types of subjects by a detection unit 117 included in the image processing apparatus 111. FIG. 14(a) shows an image 151 represented by the image data acquired by the image processing apparatus 111. FIG. 14(b) shows an image 152 represented by image data composed of the pixel information after image processing by the image processing apparatus 111.
The detection unit 117 detects a pixel block group formed by a set of mutually adjacent blocks (i, j) and associated with a predetermined type of subject. That is, the detection unit 117 performs step S114.
FIG. 14 shows the result of processing image data representing an image generated by the imaging unit 21 using the image processing apparatus 111. FIG. 14(a) shows an image 151 represented by the image data acquired by the image processing apparatus 111; that is, an image represented by unprocessed image data. FIG. 14(b) shows an image 152 represented by the image data after image processing by the image processing apparatus 111.
Each control block of the image processing apparatus 11 (the image division unit 12, the evaluation unit 13, the processing determination unit 14, the image processing unit 15, and the processing correction unit 16) may be realized by a logic circuit (hardware) formed on an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
An image processing apparatus (11, 111) according to aspect 1 of the present invention includes: an evaluation unit (13, 113) that evaluates an attribute related to at least one of the luminance, hue, and saturation of each of a plurality of pixel blocks (blocks (i, j)) defined by dividing an image (51, 151) into a plurality of regions, according to at least one piece of pixel information included in each pixel block; a processing determination unit (14, 114) that determines, according to an evaluation result of the evaluation unit, processing content to be applied to each of the at least one piece of pixel information; and an image processing unit (15, 115) that applies, to each of the at least one piece of pixel information, processing according to the processing content.
This application claims the benefit of priority to Japanese Patent Application No. 2016-234515 filed on December 1, 2016, the entire contents of which are incorporated herein by reference.
11, 111 Image processing apparatus
12, 112 Image division unit
13, 113 Evaluation unit
14, 114 Processing determination unit
15, 115 Image processing unit
16, 116 Processing correction unit
117 Detection unit
21 Imaging unit
31 Display unit
41 Storage unit
Claims (14)
- An image processing apparatus comprising: an evaluation unit that evaluates an attribute related to at least one of luminance, hue, and saturation of each of a plurality of pixel blocks defined by dividing an image into a plurality of regions, according to at least one piece of pixel information included in each of the pixel blocks; a processing determination unit that determines, according to an evaluation result of the evaluation unit, processing content to be applied to each of the at least one piece of pixel information; and an image processing unit that applies, to each of the at least one piece of pixel information, processing according to the processing content.
- The image processing apparatus according to claim 1, wherein the evaluation unit evaluates a contrast level of each of the pixel blocks according to the at least one piece of pixel information, the processing determination unit determines an amount by which to change the contrast of each of the pixel blocks according to the contrast level evaluated by the evaluation unit, and the image processing unit adjusts each of the at least one piece of pixel information so as to correspond to the amount of contrast change determined by the processing determination unit.
- The image processing apparatus according to claim 2, wherein the evaluation unit acquires a luminance histogram of the at least one piece of pixel information and evaluates the contrast level of each of the pixel blocks according to (1) an average gradation of the luminance histogram and (2) a gradation difference between a minimum gradation and a maximum gradation of the luminance histogram, and the image processing unit selects a tone curve associated with the amount of contrast change determined by the processing determination unit.
- The image processing apparatus according to any one of claims 1 to 3, wherein the evaluation unit determines a saturation of each of the pixel blocks according to the at least one piece of pixel information, the processing determination unit determines an amount by which to change the saturation of each of the pixel blocks according to the saturation evaluated by the evaluation unit, and the image processing unit adjusts each of the at least one piece of pixel information so as to correspond to the amount of saturation change determined by the processing determination unit.
- The image processing apparatus according to any one of claims 1 to 4, wherein the evaluation unit evaluates a lightness of each of the pixel blocks according to the at least one piece of pixel information, the processing determination unit determines an amount by which to change the lightness of each of the pixel blocks according to the lightness evaluated by the evaluation unit, and the image processing unit adjusts each of the at least one piece of pixel information so as to correspond to the amount of lightness change determined by the processing determination unit.
- The image processing apparatus according to any one of claims 1 to 5, wherein the evaluation unit evaluates a hue of each of the pixel blocks according to the at least one piece of pixel information, the processing determination unit determines an amount by which to change the hue of each of the pixel blocks according to the hue evaluated by the evaluation unit, and the image processing unit adjusts each of the at least one piece of pixel information so as to correspond to the amount of hue change determined by the processing determination unit.
- The image processing apparatus according to any one of claims 1 to 6, further comprising a detection unit that detects a pixel block group formed by a set of mutually adjacent pixel blocks and associated with a predetermined type of subject, wherein the evaluation unit evaluates the attribute of each pixel block forming the pixel block group according to at least one piece of pixel information included in that pixel block, and the processing determination unit determines, according to at least one of the attribute and the predetermined type, the processing content to be applied to the at least one piece of pixel information included in each pixel block forming the pixel block group.
- The image processing apparatus according to any one of claims 1 to 7, further comprising an image division unit that acquires image data from the outside and divides an image represented by the image data into the plurality of pixel blocks, wherein each of the plurality of pixel blocks has a vertical pixel count and a horizontal pixel count of 50 or more and 300 or less.
- The image processing apparatus according to any one of claims 1 to 8, further comprising a processing correction unit that corrects the processing content determined by the processing determination unit, wherein the processing correction unit corrects at least one of (1) first processing content applied to a first pixel block that is one of the plurality of pixel blocks and (2) second processing content applied to a second pixel block adjacent to the first pixel block, thereby producing continuity between an effect of the first processing content and an effect of the second processing content.
- The image processing apparatus according to any one of claims 1 to 9, wherein, when evaluating the attribute of each of the pixel blocks, the evaluation unit evaluates the attribute according to at least one piece of pixel information included in that pixel block and at least one piece of pixel information included in each of at least the adjacent blocks surrounding that pixel block.
- The image processing apparatus according to any one of claims 1 to 10, wherein the evaluation unit evaluates whether each of the pixel blocks satisfies a predetermined condition according to the at least one piece of pixel information, and when a pixel block satisfies the predetermined condition, the processing determination unit determines, as the processing content, that no processing is to be applied to the at least one piece of pixel information.
- A digital camera comprising: the image processing apparatus according to any one of claims 1 to 11; and an imaging unit that generates image data and provides the image data to the image processing apparatus.
- An image processing program for causing a computer to function as the image processing apparatus according to claim 1, the program causing the computer to function as the evaluation unit, the processing determination unit, and the image processing unit.
- A computer-readable recording medium on which the image processing program according to claim 13 is recorded.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780074379.2A CN110192388A (zh) | 2016-12-01 | 2017-10-30 | 图像处理装置、数码相机、图像处理程序、以及记录介质 |
KR1020197015460A KR20190073516A (ko) | 2016-12-01 | 2017-10-30 | 화상 처리 장치, 디지털 카메라, 화상 처리 프로그램, 및 기록 매체 |
US16/465,264 US20190394438A1 (en) | 2016-12-01 | 2017-10-30 | Image processing device, digital camera, and non-transitory computer-readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016-234515 | 2016-12-01 | ||
JP2016234515 | 2016-12-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018100950A1 true WO2018100950A1 (ja) | 2018-06-07 |
Family
ID=62242111
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/039189 WO2018100950A1 (ja) | 2016-12-01 | 2017-10-30 | 画像処理装置、デジタルカメラ、画像処理プログラム、及び記録媒体 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20190394438A1 (ja) |
KR (1) | KR20190073516A (ja) |
CN (1) | CN110192388A (ja) |
WO (1) | WO2018100950A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111835931A (zh) * | 2019-04-17 | 2020-10-27 | 夏普株式会社 | 图像处理装置、图像形成装置、图像读取装置 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102663537B1 (ko) | 2019-01-31 | 2024-05-08 | 삼성전자 주식회사 | 전자 장치 및 이미지 처리 방법 |
CN110769321B (zh) * | 2019-10-14 | 2020-07-31 | 安徽省徽腾智能交通科技有限公司泗县分公司 | 伴音大数据信号现场播放*** |
CN113344812A (zh) * | 2021-05-31 | 2021-09-03 | 维沃移动通信(杭州)有限公司 | 图像处理方法、装置和电子设备 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000123164A (ja) * | 1998-10-19 | 2000-04-28 | Canon Inc | 画像処理装置及びその方法 |
JP2003256830A (ja) * | 2002-03-04 | 2003-09-12 | Mitsubishi Electric Corp | コントラスト強調方式 |
JP2015095681A (ja) * | 2013-11-08 | 2015-05-18 | オリンパス株式会社 | マルチエリアホワイトバランス制御装置、マルチエリアホワイトバランス制御方法、マルチエリアホワイトバランス制御プログラム、マルチエリアホワイトバランス制御プログラムを記録したコンピュータ、マルチエリアホワイトバランス画像処理装置、マルチエリアホワイトバランス画像処理方法、マルチエリアホワイトバランス画像処理プログラム、マルチエリアホワイトバランス画像処理プログラムを記録したコンピュータ及びマルチエリアホワイトバランス画像処理装置を備えた撮像装置 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2670779B2 (ja) | 1987-08-31 | 1997-10-29 | 株式会社東芝 | 中間調画像分離処理装置 |
US5541653A (en) * | 1993-07-27 | 1996-07-30 | Sri International | Method and appartus for increasing resolution of digital color images using correlated decoding |
US6922492B2 (en) * | 2002-12-27 | 2005-07-26 | Motorola, Inc. | Video deblocking method and apparatus |
KR101030369B1 (ko) * | 2009-02-23 | 2011-04-20 | 인하대학교 산학협력단 | 영상 분류 장치 및 방법 |
CN103685972B (zh) * | 2012-09-21 | 2017-03-01 | 宏达国际电子股份有限公司 | 影像优化方法以及使用此方法的*** |
-
2017
- 2017-10-30 KR KR1020197015460A patent/KR20190073516A/ko not_active Application Discontinuation
- 2017-10-30 US US16/465,264 patent/US20190394438A1/en not_active Abandoned
- 2017-10-30 CN CN201780074379.2A patent/CN110192388A/zh active Pending
- 2017-10-30 WO PCT/JP2017/039189 patent/WO2018100950A1/ja active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000123164A (ja) * | 1998-10-19 | 2000-04-28 | Canon Inc | 画像処理装置及びその方法 |
JP2003256830A (ja) * | 2002-03-04 | 2003-09-12 | Mitsubishi Electric Corp | コントラスト強調方式 |
JP2015095681A (ja) * | 2013-11-08 | 2015-05-18 | オリンパス株式会社 | マルチエリアホワイトバランス制御装置、マルチエリアホワイトバランス制御方法、マルチエリアホワイトバランス制御プログラム、マルチエリアホワイトバランス制御プログラムを記録したコンピュータ、マルチエリアホワイトバランス画像処理装置、マルチエリアホワイトバランス画像処理方法、マルチエリアホワイトバランス画像処理プログラム、マルチエリアホワイトバランス画像処理プログラムを記録したコンピュータ及びマルチエリアホワイトバランス画像処理装置を備えた撮像装置 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111835931A (zh) * | 2019-04-17 | 2020-10-27 | 夏普株式会社 | 图像处理装置、图像形成装置、图像读取装置 |
Also Published As
Publication number | Publication date |
---|---|
CN110192388A (zh) | 2019-08-30 |
US20190394438A1 (en) | 2019-12-26 |
KR20190073516A (ko) | 2019-06-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5045421B2 (ja) | 撮像装置、色ノイズ低減方法および色ノイズ低減プログラム | |
KR101061866B1 (ko) | 대상 화상에 계조 보정을 행하는 화상 처리 장치 | |
JP4395789B2 (ja) | 画像処理装置、撮像装置、画像処理方法およびプログラム | |
WO2018100950A1 (ja) | 画像処理装置、デジタルカメラ、画像処理プログラム、及び記録媒体 | |
US8081239B2 (en) | Image processing apparatus and image processing method | |
US8363125B2 (en) | Image processing apparatus, image processing method, and computer program product | |
JP6415062B2 (ja) | 画像処理装置、画像処理方法、制御プログラム、および記録媒体 | |
US8797427B2 (en) | Image processing apparatus | |
JP2007094742A (ja) | 画像信号処理装置及び画像信号処理プログラム | |
US9177396B2 (en) | Image processing apparatus and image processing method | |
WO2019170007A1 (zh) | 图像色彩调整方法及装置 | |
US9449375B2 (en) | Image processing apparatus, image processing method, program, and recording medium | |
JP2012165204A (ja) | 信号処理装置、信号処理方法、撮像装置及び撮像処理方法 | |
US8818128B2 (en) | Image processing apparatus, image processing method, and program | |
JP5878586B2 (ja) | 画像色調整方法及びその電子装置 | |
WO2020093441A1 (zh) | 图像饱和度增强的细节处理方法及装置 | |
JP2013114692A (ja) | 自動映像補正のための映像処理装置及び方法 | |
US9635331B2 (en) | Image processing apparatus that performs tone correction and edge enhancement, control method therefor, and storage medium | |
US9071803B2 (en) | Image processing apparatus, image pickup apparatus, image processing method and non-transitory computer-readable storage medium storing image processing program | |
WO2012099013A1 (ja) | 画像補正装置、画像補正表示装置、画像補正方法、プログラム、及び、記録媒体 | |
JP6413210B2 (ja) | 画像処理装置、撮像装置およびプログラム | |
US9055232B2 (en) | Image processing apparatus capable of adding soft focus effects, image processing method, and storage medium | |
WO2022246663A1 (zh) | 图像处理方法、设备、***和存储介质 | |
Adams et al. | Perceptually based image processing algorithm design | |
CN111726596B (zh) | 影像处理方法及其电子装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17875947 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20197015460 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17875947 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: JP |