CN108769505A - An image processing method and electronic equipment - Google Patents
An image processing method and electronic equipment Download PDF Info
- Publication number
- CN108769505A CN108769505A CN201810287814.6A CN201810287814A CN108769505A CN 108769505 A CN108769505 A CN 108769505A CN 201810287814 A CN201810287814 A CN 201810287814A CN 108769505 A CN108769505 A CN 108769505A
- Authority
- CN
- China
- Prior art keywords
- image
- infrared
- frame
- information
- final
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The present application provides an image processing method and an electronic device. The method includes: obtaining, by a camera, a first image under a visible-light environment, the first image including a first object; obtaining, by the camera, a second image under an infrared-light environment, the second image including the first object; obtaining parameter information of the first object in the second image, the parameter information being infrared feature information corresponding to the first object; and processing the first image based on the parameter information so that the region of the first image other than the first object is rendered with a blurring effect. The image processing method of the present application can quickly determine the region where the first object is located (that is, the foreground region) and the background region in an image, and can automatically apply blurring to the background region once it has been determined.
Description
Technical field
The present application relates to an image processing method for accurately determining the foreground region in an image, and to an electronic device applying such a method.
Background art
Current laptop computers all integrate a camera and an infrared light source, which cooperate so that the laptop can implement a more secure face recognition function. At present, however, apart from cooperating with the camera for face recognition, the infrared light source serves no other purpose, which amounts to a waste of resources.
Summary of the application
The problem to be solved by the present application is to provide an image processing method capable of quickly determining the region where a first object is located in an image, and an electronic device applying this method.
To solve the above problem, the present application provides an image processing method, the method including:
obtaining, by a camera, a first image under a visible-light environment, the first image including a first object;
obtaining, by the camera, a second image under an infrared-light environment, the second image including the first object;
obtaining parameter information of the first object in the second image, the parameter information being infrared feature information corresponding to the first object; and
processing the first image based on the parameter information so that the region of the first image other than the first object is rendered with a blurring effect.
Preferably, the first object is located in the foreground region of the first image, and the parameter information is information for determining the foreground region.
Preferably, obtaining the second image under the infrared-light environment by the camera includes:
obtaining at least two frames of infrared images under the infrared-light environment by the camera.
Preferably, obtaining the parameter information of the first object in the second image includes:
determining a final infrared image based on the at least two frames of infrared images;
wherein determining the final infrared image based on the at least two frames of infrared images includes:
determining the final infrared image based on brightness parameters of the same pixel in each of the at least two frames of infrared images, wherein each pixel in the final infrared image satisfies a first condition and a second condition.
Preferably, after determining the final infrared image based on the at least two frames of infrared images, the method further includes:
performing compensation processing on each pixel in the final infrared image.
An embodiment of the present application also provides an electronic device, including:
a camera for obtaining a first image under a visible-light environment and a second image under an infrared-light environment, the first image including a first object and the second image including the first object; and
a processor for obtaining parameter information of the first object in the second image, the parameter information being infrared feature information corresponding to the first object, and for processing the first image based on the parameter information so that the region of the first image other than the first object is rendered with a blurring effect.
Preferably, the first object is located in the foreground region of the first image, and the parameter information is information for determining the foreground region in the first image.
Preferably, the camera obtaining the second image under the infrared-light environment includes:
obtaining at least two frames of infrared images under the infrared-light environment by the camera.
Preferably, the processor obtaining the parameter information of the first object in the second image includes:
determining a final infrared image based on the at least two frames of infrared images;
wherein determining the final infrared image based on the at least two frames of infrared images includes:
determining the final infrared image based on brightness parameters of the same pixel in each of the at least two frames of infrared images, wherein each pixel in the final infrared image satisfies a first condition and a second condition.
Preferably, the processor is further configured to perform compensation processing on each pixel in the final infrared image.
The advantageous effect of the present application is that the camera and the infrared light source on the electronic device can be used to quickly and efficiently determine the region where the first object is located in the visible-light image captured by the camera, as well as the background region other than that region, enabling the system to automatically apply blurring to the background region of the image. This not only increases the efficiency of accurate image processing and improves processing precision, but also requires no additional hardware, maximally utilizing the functions of the existing device.
Description of the drawings
Fig. 1 is a flow chart of the image processing method in one embodiment of the present application.
Fig. 2 is a flow chart of the image processing method in another embodiment of the present application.
Fig. 3 is a flow chart of the image processing method in yet another embodiment of the present application.
Fig. 4 is a block diagram of the electronic device in another embodiment of the present application.
Detailed description
The present application is described in detail below with reference to the accompanying drawings.
It should be understood that various modifications may be made to the embodiments disclosed herein. Therefore, the following description should be regarded not as limiting but merely as exemplifying the embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the present disclosure.
The accompanying drawings, which are included in and form part of the specification, illustrate embodiments of the disclosure and, together with the general description of the disclosure given above and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the invention will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the accompanying drawings.
It should also be understood that, although the invention has been described with reference to some specific examples, those skilled in the art can realize many other equivalents of the invention, which have the features recited in the claims and therefore all fall within the scope of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more readily apparent in view of the following detailed description when read in conjunction with the accompanying drawings.
Specific embodiments of the disclosure are described hereinafter with reference to the accompanying drawings; it should be understood, however, that the disclosed embodiments are merely examples of the disclosure, which may be implemented in various ways. Well-known and/or repeated functions and structures are not described in detail, to avoid obscuring the disclosure with unnecessary or redundant detail. Therefore, the specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching those skilled in the art to variously employ the disclosure in substantially any appropriately detailed structure.
This specification may use the phrases "in one embodiment", "in another embodiment", "in a further embodiment" or "in other embodiments", each of which may refer to one or more of the same or different embodiments in accordance with the disclosure.
Current laptop computers all integrate a camera and an infrared light source, which cooperate so that the laptop can implement a more secure face recognition function. At present, however, apart from cooperating with the camera for face recognition, the infrared light source serves no other purpose, which amounts to a waste of resources.
To solve the above technical problem, an embodiment of the present application provides an image processing method. As shown in Fig. 1, the method includes:
obtaining, by a camera, a first image under a visible-light environment, the first image including a first object;
obtaining, by the camera, a second image under an infrared-light environment, the second image likewise including the first object;
obtaining parameter information of the first object in the second image, the parameter information being infrared feature information corresponding to the first object; and
processing the first image based on the parameter information so that the region of the first image other than the first object is rendered with a blurring effect.
That is, the first image includes the first object and a background view that can collectively be regarded as the background, where the first object is the target object. The specific contour region of the first object, namely the target region, is then determined by obtaining the infrared feature information of the first object captured under the infrared light source. Finally, the target region is combined with the first image to determine the background-view region, and that region is blurred so that the first object stands out in the first image, that is, the target object is highlighted. With the method in the embodiment of the present application, the target region and the background region can be clearly determined when, for example, the user manually takes at least two consecutive shots under different illumination, or the system automatically takes at least two consecutive shots under different illumination; the system can then blur the background region so that the target region appears clearer and brighter. The system may of course also perform other processing, such as applying color processing or other beautification to the target region.
Further, in this embodiment the first object is located in the foreground region of the first image, and the above parameter information is in fact information for determining the foreground region in the first image, that is, region information of the region where the first object is located. Through this step, the system can determine the foreground region and the background region in the first image, providing a basis for subsequent processing of the background region. For example, the first image captures a scene of a user standing in front of a flower bed under ordinary illumination, where the user is the first object, and the second image captures the same scene under infrared illumination. By obtaining the infrared feature information of the user in the second image, for example one or more of complexity, aspect ratio, mean contrast, maximum brightness, standard deviation, mean difference, compactness and so on, the system determines the region where the user is located, that is, the contour of the user. Based on that contour information, it then determines the region of the first image outside the user's region, namely the part of the flower bed scene that appears in the picture behind the user; once that region is determined, the system can apply blurring to it.
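The feature list above can be made concrete with a short sketch. The patent names the infrared features but not their formulas, so the definitions below (maximum brightness, population standard deviation, and a bounding-box aspect ratio for a masked region) are illustrative assumptions, not the embodiment's exact computations.

```python
import numpy as np

def region_features(ir, mask):
    """Compute a few of the listed infrared features for a masked region.

    The feature formulations here (max brightness, population standard
    deviation, bounding-box height/width ratio) are plausible illustrative
    readings; the patent does not specify them.
    """
    vals = ir[mask]                  # brightness values inside the region
    ys, xs = np.nonzero(mask)        # pixel coordinates of the region
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return {
        "max_brightness": float(vals.max()),
        "std_dev": float(vals.std()),
        "aspect_ratio": height / width,
    }

ir = np.array([[10, 20], [30, 40]])
mask = np.array([[True, True], [False, False]])
features = region_features(ir, mask)
```

In practice such features would be computed over the candidate contour region extracted from the infrared frames, and one or more of them used to decide which pixels belong to the user.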
Further, in this embodiment, obtaining the second image under the infrared-light environment by the camera is specifically:
obtaining at least two frames of infrared images under the infrared environment by the camera.
In specific implementation, it is not limited to obtaining two infrared frames; multiple frames may also be obtained. Moreover, the first image obtained under the visible-light environment may also comprise multiple visible-light frames, so as to provide more data for the subsequent computation by which the system determines the foreground region in the first image, increasing the precision of that determination. For example, the infrared light source on the electronic device may be switched on intermittently at a certain frequency, so that visible-light images and infrared images are captured continuously in alternation. For instance, in a captured multi-frame sequence, the odd-numbered frames are visible-light frames (that is, visible-light images) and the even-numbered frames are infrared frames (that is, infrared images).
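The alternating capture described above amounts to a simple de-interleaving step. The sketch below assumes, as in the example, that odd-numbered frames are visible-light frames and even-numbered frames are infrared frames; the function name is illustrative.

```python
def split_interleaved(frames):
    """Split an interleaved capture into visible-light and infrared frames.

    Per the example in the embodiment, the 1st, 3rd, 5th, ... frames are
    visible-light frames and the 2nd, 4th, 6th, ... frames are infrared.
    """
    visible = frames[0::2]   # odd-numbered frames (1-based numbering)
    infrared = frames[1::2]  # even-numbered frames (1-based numbering)
    return visible, infrared

visible, infrared = split_interleaved(["V1", "I1", "V2", "I2", "V3", "I3"])
```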
Further, in this embodiment, the system obtaining the parameter information of the first object in the second image is specifically:
determining a final infrared image based on the above at least two frames of infrared images;
where determining the final infrared image based on the at least two frames of infrared images includes:
determining the final infrared image based on brightness parameters of the same pixel in each of the at least two frames of infrared images, wherein each pixel in the final infrared image satisfies a first condition and a second condition.
Specifically, in this embodiment the first condition is set in order to determine, from the at least two frames of infrared images, the pixels located in the foreground region, that is, the pixels of the region where the first object is located, so that those pixels define the displayed content of the foreground region, namely the final infrared image. The second condition is set in order to determine whether interference factors are present in the infrared images; if so, those interference factors are rejected, so that the finally determined infrared image has high accuracy and is consistent with the image actually contained in the foreground region. There are many kinds of such interference factors, for example imaging contributions in the infrared image caused by sunlight or by an artificial light source other than the infrared light source of this embodiment.
Taking a total of two infrared frames as an example, the process of determining the final infrared image is illustrated as follows:
In this embodiment the two infrared frames contain the same pixels, and the first condition is: after the brightness values of the same pixel in the two infrared frames are subtracted, the absolute value is a positive number. The specific formula is:
IRfront_n = |IRframe1_n - IRframe2_n|
where n denotes each pixel in the infrared image.
The foreground region can be roughly determined by the first condition; however, in order to increase precision, interference pixels must be eliminated from the foreground region determined by the first condition. The second condition in this embodiment is:
Set brightness thresholds Y1 and Y2 (the specific values or value ranges of Y1 and Y2 can be adjusted as actually needed);
A. determine whether the brightness of the same pixel in both infrared frames is greater than Y1, that is, IRframe1_n > Y1 and IRframe2_n > Y1;
B. determine whether the absolute value of the difference of the brightness of the same pixel in the two infrared frames is less than Y2, that is, |IRframe1_n - IRframe2_n| < Y2.
Condition B is mainly used to determine that the influence on the infrared image is caused by infrared light that is not actively emitted by the electronic device.
In this embodiment, only when the brightness of a pixel satisfies condition A and condition B simultaneously is it determined to be an interference pixel, which must then be rejected from the foreground region.
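The two conditions above can be expressed per pixel with array operations. The sketch below applies the first condition to form the candidate foreground brightness, then zeroes out pixels flagged as interference by conditions A and B; the threshold values passed in are arbitrary, since Y1 and Y2 are tunable as noted above.

```python
import numpy as np

def foreground_from_two_frames(ir1, ir2, y1, y2):
    """First condition: IRfront_n = |IRframe1_n - IRframe2_n|.

    Second condition (interference rejection): a pixel that is brighter
    than Y1 in BOTH frames (condition A) and whose brightness difference
    is below Y2 (condition B) is treated as lit by external infrared
    light and rejected from the foreground.
    """
    a = ir1.astype(np.int32)
    b = ir2.astype(np.int32)
    front = np.abs(a - b)                              # first condition
    interference = (a > y1) & (b > y1) & (front < y2)  # conditions A and B
    front[interference] = 0                            # reject interference pixels
    return front

ir1 = np.array([[200, 50]])
ir2 = np.array([[190, 10]])
front = foreground_from_two_frames(ir1, ir2, y1=100, y2=20)
```

Here the first pixel is bright in both frames and nearly unchanged between them, so it is rejected as externally lit; the second pixel changes strongly between frames and is kept as foreground.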
Fig. 2 is a flow chart of the image processing method in another embodiment of the present application. According to an embodiment of the invention, preferably, because of limitations in the type and exposure time of some cameras (for example, rolling shutters and global shutters have different exposure times), the brightness of each pixel in the captured infrared image may exhibit a certain deviation. In order to reduce this deviation, so that the final infrared image determined by the above steps is consistent with the actual scene and precision is improved, in this embodiment the system, after determining the final infrared image based on the at least two frames of infrared images, further:
performs compensation processing on each pixel in the final infrared image.
Specifically, the final infrared image determined by the above steps is for the moment called the infrared image to be corrected. In specific implementation, the system multiplies the current brightness value of each pixel in the infrared image to be corrected by a floating compensation coefficient α. This coefficient varies according to the brightness value and exposure time of the different pixels; for example, the longer the actual exposure time, the larger the coefficient. That is, each pixel has its own uniquely corresponding compensation coefficient. The infrared image formed after the brightness of all pixels in the infrared image to be corrected has been adjusted is the actual final infrared image, that is, the final foreground region. The specific formula is as follows:
IRfront_N = IRfront_n * α
where N is a pixel in the actual final infrared image and n is the corresponding pixel in the infrared image to be corrected.
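Under the stated rule, the compensation step is a per-pixel multiply by α. How α is derived from brightness and exposure time is device-specific and not given in the patent, so the linear mapping below (with hypothetical constants `base` and `gain`) is purely an illustrative assumption.

```python
import numpy as np

def compensation_coefficients(exposure_ms, shape, base=1.0, gain=0.01):
    """Illustrative alpha map: the longer the exposure, the larger alpha.

    base and gain are hypothetical tuning constants, not patent values;
    a real device would derive a distinct alpha per pixel.
    """
    return np.full(shape, base + gain * exposure_ms, dtype=np.float64)

def compensate(ir_front, alpha):
    """IRfront_N = IRfront_n * alpha, applied pixel by pixel."""
    return ir_front.astype(np.float64) * alpha

alpha = compensation_coefficients(exposure_ms=50.0, shape=(1, 2))
final_ir = compensate(np.array([[10, 20]]), alpha)
```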
Furthermore, preferably, in order to avoid imposing a meaningless processing burden on the electronic device, it can be judged in advance, before the image is captured, whether the current environment is an outdoor environment, for example by checking whether the shooting mode of the current system is an outdoor mode. If so, the above method is not used to process the image, because in an outdoor environment the infrared light actively emitted by the electronic device is negligible compared with the high-intensity ambient light, and the error between the foreground region and the background region distinguished by the above method would therefore be large.
To facilitate a better understanding of the invention, Fig. 3 shows a flow chart of the image processing method in yet another embodiment of the present application. It should be understood that this flow chart is one specific embodiment, and other combinations and variations may also be used. The image processing method of Fig. 3 includes:
Obtain a visible-light image by the camera.
Obtain two infrared frames by the camera.
Determine an initial infrared image by the first condition, based on the brightness parameters of the same pixel in each of the two infrared frames. For example, the first condition is: after the brightness values of the same pixel in the two infrared frames are subtracted, the absolute value is a positive number; the specific formula is IRfront_n = |IRframe1_n - IRframe2_n|, where n denotes each pixel in the infrared image.
Judge by the second condition whether interference factors are present in the initial infrared image. If yes, reject the interference factors and form an infrared image to be corrected; if no, determine the initial infrared image as the final infrared image. For example, the second condition is: set brightness thresholds Y1 and Y2 (the specific values or value ranges of Y1 and Y2 can be adjusted as actually needed); A. determine whether the brightness of the same pixel in both infrared frames is greater than Y1, that is, IRframe1_n > Y1 and IRframe2_n > Y1; B. determine whether the absolute value of the difference of the brightness of the same pixel in the two infrared frames is less than Y2, that is, |IRframe1_n - IRframe2_n| < Y2.
Multiply the brightness of each pixel in the infrared image to be corrected by the floating compensation coefficient uniquely corresponding to that pixel.
Determine the final infrared image based on the pixels corrected by the floating compensation coefficients.
Determine the background image based on the visible-light image and the final infrared image.
Blur the background image.
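The steps above can be strung together end to end. The sketch below assumes single-channel images as NumPy arrays, reuses the two conditions and the per-pixel compensation described earlier, and stands in a simple box blur for the blurring effect; the thresholds and the uniform α are illustrative choices, not values from the patent.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive box blur standing in for the background blurring effect."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def process(visible, ir1, ir2, y1, y2, alpha=1.0):
    """End-to-end sketch of the Fig. 3 flow on grayscale arrays."""
    a = ir1.astype(np.int32)
    b = ir2.astype(np.int32)
    front = np.abs(a - b)                              # first condition
    interference = (a > y1) & (b > y1) & (front < y2)  # second condition
    front[interference] = 0
    final_ir = front.astype(np.float64) * alpha        # compensation step
    mask = final_ir > 0                                # foreground pixels
    blurred = box_blur(visible)                        # blur everything...
    return np.where(mask, visible.astype(np.float64), blurred)  # ...keep foreground sharp

visible = np.arange(16, dtype=np.float64).reshape(4, 4)
ir1 = np.zeros((4, 4), dtype=np.int32)
ir2 = np.zeros((4, 4), dtype=np.int32)
ir1[1, 1] = 60   # only this pixel differs between the two IR frames
out = process(visible, ir1, ir2, y1=100, y2=20)
```

Only the single pixel that changes between the two infrared frames is treated as foreground and kept sharp; the rest of the visible-light image is replaced by its blurred version.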
As shown in Fig. 4, an embodiment of the present application also provides an electronic device 400, comprising:
a camera for obtaining a first image under a visible-light environment and a second image under an infrared-light environment, the first image including a first object and the second image including the first object; and
a processor for obtaining parameter information of the first object in the second image, the parameter information being infrared feature information corresponding to the first object, and for processing the first image based on the parameter information so that the region of the first image other than the first object is rendered with a blurring effect.
That is, the first image includes the first object and a background view that can collectively be regarded as the background, where the first object is the target object. The specific contour region of the first object, namely the target region, is then determined by obtaining the infrared feature information of the first object captured under the infrared light source. Finally, the target region is combined with the first image to determine the background-view region, and that region is blurred so that the first object stands out in the first image, that is, the target object is highlighted. With the method in the embodiment of the present application, the target region and the background region can be clearly determined when, for example, the user manually takes at least two consecutive shots under different illumination, or the processor automatically takes at least two consecutive shots under different illumination; the processor can then blur the background region so that the target region appears clearer and brighter. The processor may of course also perform other processing, such as applying color processing or other beautification to the target region.
Further, in this embodiment the first object is located in the foreground region of the first image, and the above parameter information is in fact information for determining the foreground region in the first image, that is, region information of the region where the first object is located. Through this step, the processor can determine the foreground region and the background region in the first image, providing a basis for subsequent processing of the background region. For example, the first image captures a scene of a user standing in front of a flower bed under ordinary illumination, where the user is the first object, and the second image captures the same scene under infrared illumination. By obtaining the infrared feature information of the user in the second image, for example one or more of complexity, aspect ratio, mean contrast, maximum brightness, standard deviation, mean difference, compactness and so on, the processor determines the region where the user is located, that is, the contour of the user. Based on that contour information, it then determines the region of the first image outside the user's region, namely the part of the flower bed scene that appears in the picture behind the user; once that region is determined, the processor can apply blurring to it.
Further, in this embodiment, obtaining the second image under the infrared-light environment by the camera is specifically:
obtaining at least two frames of infrared images under the infrared environment by the camera.
In specific implementation, it is not limited to obtaining two infrared frames; multiple frames may also be obtained. Moreover, the first image obtained under the visible-light environment may also comprise multiple visible-light frames, so as to provide more data for the subsequent computation by which the processor determines the foreground region in the first image, increasing the precision of that determination. For example, the infrared light source on the electronic device may be switched on intermittently at a certain frequency, so that visible-light images and infrared images are captured continuously in alternation. For instance, in a captured multi-frame sequence, the odd-numbered frames are visible-light frames (that is, visible-light images) and the even-numbered frames are infrared frames (that is, infrared images).
Further, in this embodiment, the processor obtaining the parameter information of the first object in the second image is specifically:
determining a final infrared image based on the above at least two frames of infrared images;
where determining the final infrared image based on the at least two frames of infrared images includes:
determining the final infrared image based on brightness parameters of the same pixel in each of the at least two frames of infrared images, wherein each pixel in the final infrared image satisfies a first condition and a second condition.
Specifically, in this embodiment the first condition is set in order to determine, from the at least two frames of infrared images, the pixels located in the foreground region, that is, the pixels of the region where the first object is located, so that those pixels define the displayed content of the foreground region, namely the final infrared image. The second condition is set in order to determine whether interference factors are present in the infrared images; if so, those interference factors are rejected, so that the finally determined infrared image has high accuracy and is consistent with the image actually contained in the foreground region. There are many kinds of such interference factors, for example imaging contributions in the infrared image caused by sunlight or by an artificial light source other than the infrared light source of this embodiment.
Taking a total of two infrared frames as an example, the process of determining the final infrared image is illustrated as follows:
In this embodiment the two infrared frames contain the same pixels, and the first condition is: after the brightness values of the same pixel in the two infrared frames are subtracted, the absolute value is a positive number. The specific formula is:
IRfront_n = |IRframe1_n - IRframe2_n|
where n denotes each pixel in the infrared image.
The foreground region can be roughly determined by the first condition; however, in order to increase precision, interference pixels must be eliminated from the foreground region determined by the first condition. The second condition in this embodiment is:
Set brightness thresholds Y1 and Y2 (the specific values or value ranges of Y1 and Y2 can be adjusted as actually needed);
A. determine whether the brightness of the same pixel in both infrared frames is greater than Y1, that is, IRframe1_n > Y1 and IRframe2_n > Y1;
B. determine whether the absolute value of the difference of the brightness of the same pixel in the two infrared frames is less than Y2, that is, |IRframe1_n - IRframe2_n| < Y2.
Condition B is mainly used to determine that the influence on the infrared image is caused by infrared light that is not actively emitted by the electronic device.
In this embodiment, only when the brightness of a pixel satisfies condition A and condition B simultaneously is it determined to be an interference pixel, which must then be rejected from the foreground region.
Preferably, because of limitations in the type and exposure time of some cameras (for example, rolling shutters and global shutters have different exposure times), the brightness of each pixel in the captured infrared image may exhibit a certain deviation. In order to reduce this deviation, so that the final infrared image determined by the above steps is consistent with the actual scene and precision is improved, in this embodiment the processor, after determining the final infrared image based on the at least two frames of infrared images, further:
performs compensation processing on each pixel in the final infrared image.
Specifically, the final infrared image determined by the above steps is for the moment called the infrared image to be corrected. In specific implementation, the processor multiplies the current brightness value of each pixel in the infrared image to be corrected by a floating compensation coefficient α. This coefficient varies according to the brightness value and exposure time of the different pixels; for example, the longer the actual exposure time, the larger the coefficient. That is, each pixel has its own uniquely corresponding compensation coefficient. The infrared image formed after the brightness of all pixels in the infrared image to be corrected has been adjusted is the actual final infrared image, that is, the final foreground region. The specific formula is as follows:
IRfront_N = IRfront_n * α
where N is a pixel in the actual final infrared image and n is the corresponding pixel in the infrared image to be corrected.
The above embodiments are merely exemplary embodiments of the present application and are not intended to limit the application; the scope of protection of the application is defined by the claims. Those skilled in the art can make various modifications or equivalent replacements to the application within its spirit and scope of protection, and such modifications or equivalent replacements shall also be regarded as falling within the scope of protection of the application.
Claims (10)
1. An image processing method, the method comprising:
obtaining, by a camera, a first image in a visible-light environment, the first image including a first object;
obtaining, by the camera, a second image in an infrared-light environment, the second image including the first object;
obtaining parameter information of the first object in the second image, the parameter information being infrared feature information corresponding to the first object; and
processing the first image based on the parameter information, so that a blurring effect is presented in a region of the first image other than the first object.
2. The method according to claim 1, characterized in that the first object is located in a foreground region of the first image, and the parameter information is used to determine information of the foreground region.
3. The method according to claim 2, characterized in that obtaining, by the camera, the second image in the infrared-light environment comprises:
obtaining, by the camera, at least two frames of infrared images in the infrared-light environment.
4. The method according to claim 3, characterized in that obtaining the parameter information of the first object in the second image comprises:
determining a final infrared image based on the at least two frames of infrared images;
wherein determining the final infrared image based on the at least two frames of infrared images comprises:
determining the final infrared image based on a luminance parameter of a same pixel in each frame of the at least two frames of infrared images, wherein each pixel in the final infrared image satisfies a first condition and a second condition.
5. The method according to claim 3, characterized in that, after the final infrared image is determined based on the at least two frames of infrared images, the method further comprises:
performing compensation processing on each pixel in the final infrared image.
6. An electronic device, characterized by comprising:
a camera, configured to obtain a first image in a visible-light environment and a second image in an infrared-light environment, the first image including a first object and the second image including the first object; and
a processor, configured to obtain parameter information of the first object in the second image, the parameter information being infrared feature information corresponding to the first object, and to process the first image based on the parameter information, so that a blurring effect is presented in a region of the first image other than the first object.
7. The electronic device according to claim 6, characterized in that the first object is located in a foreground region of the first image, and the parameter information is used to determine information of the foreground region in the first image.
8. The electronic device according to claim 7, characterized in that obtaining, by the camera, the second image in the infrared-light environment comprises:
obtaining, by the camera, at least two frames of infrared images in the infrared-light environment.
9. The electronic device according to claim 8, characterized in that obtaining, by the processor, the parameter information of the first object in the second image comprises:
determining a final infrared image based on the at least two frames of infrared images;
wherein determining the final infrared image based on the at least two frames of infrared images comprises:
determining the final infrared image based on a luminance parameter of a same pixel in each frame of the at least two frames of infrared images, wherein each pixel in the final infrared image satisfies a first condition and a second condition.
10. The electronic device according to claim 8, characterized in that the processor is further configured to perform compensation processing on each pixel in the final infrared image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810287814.6A CN108769505A (en) | 2018-03-30 | 2018-03-30 | A kind of image processing method and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108769505A (en) | 2018-11-06 |
Family
ID=63980760
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810287814.6A Pending CN108769505A (en) | 2018-03-30 | 2018-03-30 | A kind of image processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108769505A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103810478A (en) * | 2014-02-21 | 2014-05-21 | 广东小天才科技有限公司 | Sitting posture detection method and device |
WO2017104411A1 (en) * | 2015-12-14 | 2017-06-22 | ソニー株式会社 | Imaging element, image processing device and method, and program |
CN107316272A (en) * | 2017-06-29 | 2017-11-03 | 联想(北京)有限公司 | Method and its equipment for image procossing |
CN107395965A (en) * | 2017-07-14 | 2017-11-24 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
2018-03-30: CN application CN201810287814.6A filed; published as CN108769505A (status: Pending)
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110310327A (*) | 2019-06-28 | 2019-10-08 | 联想(北京)有限公司 | Image processing method and device, computer system, and computer-readable storage medium |
CN113014791A (en) * | 2019-12-20 | 2021-06-22 | 中兴通讯股份有限公司 | Image generation method and device |
CN113014791B (en) * | 2019-12-20 | 2023-09-19 | 中兴通讯股份有限公司 | Image generation method and device |
CN113572968A (en) * | 2020-04-24 | 2021-10-29 | 杭州萤石软件有限公司 | Image fusion method and device, camera equipment and storage medium |
CN113572968B (en) * | 2020-04-24 | 2023-07-18 | 杭州萤石软件有限公司 | Image fusion method, device, image pickup apparatus and storage medium |
CN112217992A (en) * | 2020-09-29 | 2021-01-12 | Oppo(重庆)智能科技有限公司 | Image blurring method, image blurring device, mobile terminal, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107277356B (en) | Method and device for processing human face area of backlight scene | |
US9852499B2 (en) | Automatic selection of optimum algorithms for high dynamic range image processing based on scene classification | |
EP1583033B1 (en) | Digital cameras with luminance correction | |
JP5178170B2 (en) | White balance adjusting device and white balance adjusting method | |
CN108769505A (en) | A kind of image processing method and electronic equipment | |
US10742892B1 (en) | Apparatus and method for capturing and blending multiple images for high-quality flash photography using mobile electronic device | |
US8902328B2 (en) | Method of selecting a subset from an image set for generating high dynamic range image | |
US20060078218A1 (en) | Image correction apparatus, image correction program storage medium, image correction method, and image correction system | |
US20050220359A1 (en) | Luminance correction | |
CN113612930A (en) | System and method for capturing digital images | |
CN107451969A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
TWI632894B (en) | Heart rate activity detecting system based on motion images and method thereof | |
JP2006319534A (en) | Imaging apparatus, method and program | |
CN107948538A (en) | Imaging method, device, mobile terminal and storage medium | |
CN107911625A | Light metering method and device, computer-readable storage medium, and computer equipment | |
CN110346116B (en) | Scene illumination calculation method based on image acquisition | |
CN109089041A (en) | Recognition methods, device, electronic equipment and the storage medium of photographed scene | |
CN110852956A (en) | Method for enhancing high dynamic range image | |
Kao | High dynamic range imaging by fusing multiple raw images and tone reproduction | |
JP7401013B2 (en) | Information processing device, control device, information processing method and program | |
JP2019205055A (en) | Image processing apparatus, image processing apparatus control method, and program | |
JP2014178146A (en) | Image processing apparatus and method | |
CN111275630A (en) | Cell image adjusting method and device and electron microscope | |
JP3497801B2 (en) | Face image display method and face image processing device | |
JP2012085093A (en) | Imaging device and acquisition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181106 |