CN108335278B - Image processing method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN108335278B
CN108335278B (application CN201810222006.1A)
Authority
CN
China
Prior art keywords
image; terminal; noise reduction processing; images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810222006.1A
Other languages
Chinese (zh)
Other versions
CN108335278A (en)
Inventor
何新兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810222006.1A
Publication of CN108335278A
Application granted
Publication of CN108335278B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method and apparatus, a storage medium, and an electronic device. The method includes the following steps: acquiring an environmental parameter at the time a terminal captures a first image; acquiring historical data indicating whether the terminal performed noise reduction on previously captured images; and determining, according to the environmental parameter and the historical data, whether the first image requires noise reduction. The method and apparatus can improve the accuracy with which the terminal judges whether an image requires noise reduction.

Description

Image processing method and device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of image technologies, and in particular relates to an image processing method and apparatus, a storage medium, and an electronic device.
Background
To obtain a better imaging effect, a terminal can perform noise reduction on captured images. For example, in a dim shooting environment, or when an image suffers interference from inside the imaging device, the image captured by the terminal contains more noise. In such cases the terminal can denoise the captured image to improve the imaging effect. In the related art, however, the terminal's judgment of whether a captured image needs noise reduction has low accuracy.
Disclosure of Invention
The embodiments of the application provide an image processing method and apparatus, a storage medium, and an electronic device, which can improve the accuracy with which a terminal judges whether an image requires noise reduction.
The embodiment of the application provides an image processing method, which comprises the following steps:
acquiring an environmental parameter at the time a terminal captures a first image;
acquiring historical data indicating whether the terminal has performed noise reduction on previously captured images; and
determining, according to the environmental parameter and the historical data, whether the first image requires noise reduction.
An embodiment of the present application provides an image processing apparatus, including:
a first acquisition module, configured to acquire an environmental parameter at the time the terminal captures a first image;
a second acquisition module, configured to acquire historical data indicating whether the terminal has performed noise reduction on previously captured images; and
a judgment module, configured to determine, according to the environmental parameter and the historical data, whether the first image requires noise reduction.
An embodiment of the application provides a storage medium storing a computer program which, when executed on a computer, causes the computer to perform the steps of the image processing method provided by the embodiments of the application.
An embodiment of the application further provides an electronic device including a memory and a processor, where the processor performs the steps of the image processing method provided by the embodiments of the application by calling the computer program stored in the memory.
In these embodiments, the terminal determines whether to denoise the first image according to both the environmental parameter at capture time and the historical data on whether earlier captured images were denoised. This improves the accuracy of the terminal's judgment. Moreover, because the decision draws on the historical data rather than on the environmental parameter alone, the terminal's judgment is also more flexible.
Drawings
The technical solutions and advantages of the present invention will become apparent from the following detailed description of its embodiments with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of an image processing method according to an embodiment of the present application.
Fig. 3 to fig. 4 are scene schematic diagrams of an image processing method according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 6 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
It can be understood that the execution subject of the embodiment of the present application may be a terminal device such as a smart phone or a tablet computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application, where the flow chart may include:
in step S101, an environmental parameter when the terminal acquires the first image is obtained.
As noted in the Background, a terminal can denoise captured images to obtain a better imaging effect, for example when shooting in dim light or when the image suffers interference from inside the imaging device, both of which increase image noise. In the related art, however, the terminal's judgment of whether a captured image needs noise reduction has low accuracy.
In step S101 of the embodiment of the present application, the terminal first acquires the environmental parameter in effect when the first image is captured.
In some embodiments, the environmental parameter may be, for example, the sensitivity (ISO) or the ambient light brightness. The examples given here should not be construed as limiting the application.
In step S102, history data indicating whether or not the terminal has performed noise reduction processing on the acquired image is acquired.
For example, after obtaining the environmental parameter for the first image, the terminal may obtain historical data on whether noise reduction was performed on images captured before the first image.
In step S103, it is determined whether or not the noise reduction processing is required for the first image based on the environmental parameter and the history data.
For example, after acquiring the environmental parameter when the first image is acquired and the history data of whether the noise reduction processing is performed on the image acquired before the first image, the terminal may determine whether the noise reduction processing is performed on the first image according to the environmental parameter and the history data.
It can be understood that, in the embodiment of the present application, the terminal determines whether to denoise the first image from both the capture-time environmental parameter and the history of whether earlier captured images were denoised, which improves the accuracy of the judgment.
In addition, because the judgment draws on the historical data rather than on the environmental parameter alone, the terminal's decision on whether a captured image needs noise reduction is also more flexible.
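The three steps of the Fig. 1 flow can be sketched as a minimal orchestration in Python. All identifiers below are illustrative, not from the patent, and the stand-in rule (including its 780 cutoff) is an assumption for demonstration only; the Fig. 2 embodiment later gives concrete values.

```python
def judge_first_image(env_iso, prev_denoised, rule):
    """Steps S101-S103: combine the capture-time environmental parameter
    (here just ISO) with the noise-reduction history under a pluggable rule."""
    return rule(env_iso, prev_denoised)

# Stand-in rule: denoise on high ISO, or on near-high ISO when the
# previous image was denoised (the 780 cutoff is an assumption).
def example_rule(iso, prev):
    return iso >= 800 or (iso >= 780 and prev)
```

Injecting the rule keeps the generic flow of Fig. 1 separate from the concrete decision criteria that the later embodiment refines.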
Referring to fig. 2, fig. 2 is another schematic flow chart of an image processing method according to an embodiment of the present application, where the flow chart may include:
in step S201, the terminal acquires sensitivity at the time of acquiring the first image.
For example, when the user takes a picture with the terminal, the terminal records the sensitivity (ISO) it used when capturing the first image.
In step S202, the terminal acquires a first time when the first image is acquired.
In step S203, the terminal acquires a second time when a second image is acquired, the second image being an image acquired by the terminal last time before the first image is acquired.
For example, steps S202 and S203 may include:
after acquiring the sensitivity used for the first image, the terminal acquires the first time, at which the first image was captured, and the second time, at which the second image was captured. The second image is the image the terminal captured most recently before the first image, that is, the terminal's previous capture.
Thereafter, the terminal may calculate a time interval between the first time and the second time, and detect whether the time interval between the first time and the second time is less than a preset interval.
If the time interval between the first time and the second time is not less than the preset interval, say the two times differ by 10 minutes while the preset interval is 20 seconds, the terminal judges whether to denoise the first image from the environmental parameters alone, such as the sensitivity at capture time.
If the time interval between the first time and the second time is smaller than the preset interval, the process proceeds to step S204.
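The interval test above can be sketched as a small helper. The function name is illustrative; the 20-second default comes from the example in the text.

```python
def within_preset_interval(first_time, second_time, preset_interval=20.0):
    """True when the first image was captured within the preset interval
    of the second (previous) image, so the second image's noise-reduction
    history is still relevant. Times are in seconds."""
    return abs(first_time - second_time) < preset_interval
```

When this returns False, the terminal falls back to judging from the environmental parameters alone, as described above.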
In step S204, if the time interval between the first time and the second time is smaller than the preset interval, the terminal acquires history data of whether the noise reduction processing is performed on the second image.
For example, if the difference between the first time and the second time is 8 seconds and the preset interval is 20 seconds, the first and second images were captured close together. In this case the terminal further acquires historical data on whether it performed noise reduction on the second image.
After obtaining the sensitivity used by the terminal when the first image is acquired and the historical data of whether the terminal has performed noise reduction processing on the second image, the terminal can determine whether the noise reduction processing needs to be performed on the first image according to the sensitivity and the historical data.
For example, the terminal may first detect whether a value of sensitivity used by the terminal when acquiring the first image is less than a preset threshold.
If the sensitivity used for the first image is not less than the preset threshold, say the threshold is 800 and the sensitivity is also 800, the high sensitivity implies high noise in the first image, so the terminal determines that noise reduction is needed. That is, when the sensitivity reaches the preset threshold, the terminal decides to denoise the first image regardless of whether the second image was denoised.
If the terminal detects that the value of the sensitivity used by the terminal in acquiring the first image is smaller than the preset threshold, the terminal may further calculate a difference between the sensitivity and the preset threshold, and detect whether the difference is smaller than or equal to the preset difference.
If the difference is greater than the preset difference, for example a sensitivity of 750 against a threshold of 800 gives a difference of 50 while the preset difference is 20, the first image can be considered to have little noise, and the terminal determines that no noise reduction is needed.
If the difference is smaller than or equal to the predetermined difference, the process proceeds to step S205.
In step S205, if the value of the sensitivity when the first image is acquired is less than the preset threshold, the difference between the value of the sensitivity and the preset threshold is less than or equal to the preset difference, and the historical data indicates that the terminal has performed noise reduction processing on the second image, the terminal determines that the noise reduction processing needs to be performed on the first image.
For example, suppose the sensitivity used for the first image is 790 and the preset threshold is 800, so the difference is 10 while the preset difference is 20. The sensitivity did not reach the threshold, but falls close to it. In this case, if the historical data shows that the terminal denoised the second image, that is, the immediately preceding capture, the terminal determines that the first image also needs noise reduction.
It can be understood that, because the first image was captured shortly after the second (within the preset interval), the shooting environment can be assumed to have changed little between the two captures, so the environmental parameters changed little as well. Since the second image was denoised and the sensitivity for the first image sits just below the preset threshold, the terminal can conclude that the first image needs noise reduction even though the threshold was not reached.
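The decision rule of steps S204 and S205 can be sketched as follows, using the example values from the text (threshold 800, preset difference 20). The function and parameter names are illustrative.

```python
def needs_noise_reduction(iso, prev_denoised, threshold=800, margin=20):
    """Steps S204-S205: ISO at or above the threshold always denoises;
    ISO within `margin` below the threshold defers to the history of the
    previous (second) image; ISO well below the threshold never denoises."""
    if iso >= threshold:
        return True
    if threshold - iso <= margin:
        return prev_denoised
    return False
```

For instance, with ISO 790 the terminal denoises only if the second image was denoised, while with ISO 750 it never does, matching the worked examples above.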
According to the embodiment, whether the noise reduction processing needs to be performed on the acquired image can be judged according to the environment parameters and the historical data, so that the flexibility of judging whether the noise reduction processing needs to be performed on the acquired image by the terminal can be improved.
In addition, this embodiment addresses the inaccuracy that arises when the environmental parameter sits near its preset threshold. For example, with a capture sensitivity of 790 and a threshold of 800, a decision based on sensitivity alone would conclude that no noise reduction is needed, because 790 is less than 800. Yet 790 is so close to 800 that the first image very likely does need denoising. In this case, the embodiment also consults the history of whether the just-captured second image was denoised: if it was, the terminal determines that the first image needs processing as well, effectively resolving the near-threshold ambiguity.
In one embodiment, the step of the terminal acquiring the history data of whether the noise reduction processing is performed on the second image may include:
and if the fact that the terminal performs noise reduction on the first image in a multi-frame noise reduction mode is determined, acquiring historical data of whether a second image is subjected to noise reduction processing by the terminal, wherein the second image is an image which is acquired and stored in an album by the terminal last time before the first image is acquired.
That is, when the terminal denoises images in multi-frame mode, the second image consulted in the history is the image most recently captured and saved to the album before the first image. It will be appreciated that an image stored in the album becomes a photo in the album.
For example, after entering the camera preview interface, the terminal captures one frame at a regular interval (e.g., every 30, 50, or 60 milliseconds) and stores the most recently captured frames in a fixed-length buffer queue. When multi-frame noise reduction is needed for a given frame, the terminal retrieves several consecutively captured frames, including that frame, and denoises it using the others.
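The fixed-length buffer described above maps naturally onto a bounded deque, which drops the oldest frame automatically. This is an illustrative sketch; frames are represented by plain ids rather than image data.

```python
from collections import deque

preview_buffer = deque(maxlen=10)   # e.g. keep only the 10 newest frames
for frame_id in range(15):          # simulate one capture every 30-60 ms
    preview_buffer.append(frame_id) # frame 0..4 fall out of the queue

def latest_frames(buf, n):
    """Fetch the n most recently captured frames for multi-frame denoising."""
    return list(buf)[-n:]
```

With `maxlen=10`, appending the 11th frame silently evicts the oldest, so the queue always holds the most recent captures without explicit bookkeeping.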
In this case, the second image acquired by the terminal is the image most recently captured and output to the album as a photograph before the first image was captured. That is, the second image is the last photo the terminal took.
In an embodiment, before the step of acquiring the environment parameter when the first image is acquired by the terminal, the method may further include the following steps:
when capturing images containing a human face, the terminal determines a target frame number from at least two captured images, the target frame number being greater than or equal to 2;
the terminal acquires that number of to-be-processed frames from the captured frames;
the terminal determines a first image from the to-be-processed frames, the first image containing at least one face image that meets a preset condition;
if the first image needs noise reduction, the terminal denoises it using the to-be-processed frames.
That is, before acquiring the environmental parameter for the first image, the terminal may first determine the first image. For example, after entering the camera preview interface, if the terminal detects that it is capturing images containing faces, it determines a target frame number from at least two such images, with the target frame number at least 2.
For example, after capturing four frames containing a face, the terminal can check whether the face's position shifts across the four frames. If there is no displacement, or only a very small one, the face can be considered stable, meaning the user is not shaking or turning their head over a large range. If there is displacement, the face is considered unstable, meaning the user is shaking or turning their head with a large amplitude.
In one embodiment, displacement can be detected as follows. The terminal generates a coordinate system and places each of the four captured frames into it in the same way, then reads the coordinates of the facial feature points in each frame. It then compares whether the same feature point has the same coordinates across the different frames. If the coordinates match, the face has not moved; if they differ, it has. When displacement is detected, the terminal obtains the specific displacement value: a value within a preset range counts as a small displacement, and a value outside it as a large one.
In one embodiment, for example, if the face image is displaced, the target frame number may be set to 4; if it is not displaced, the target frame number may be set to 6 or 8.
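The feature-point comparison and the resulting frame-count choice can be sketched together. All names are illustrative; the stability limit of 2.0 pixels stands in for the "preset value range" the text leaves unspecified, and the stable case returns 6 (the text also allows 8).

```python
def max_displacement(points_per_frame):
    """points_per_frame: one dict per frame mapping a feature-point name
    to its (x, y) coordinate in the shared coordinate system. Returns the
    largest movement of any feature point relative to the first frame."""
    ref = points_per_frame[0]
    worst = 0.0
    for pts in points_per_frame[1:]:
        for name, (x, y) in pts.items():
            rx, ry = ref[name]
            worst = max(worst, ((x - rx) ** 2 + (y - ry) ** 2) ** 0.5)
    return worst

def target_frame_count(points_per_frame, stable_limit=2.0):
    """Displaced face -> 4 frames; stable face -> 6 frames."""
    return 4 if max_displacement(points_per_frame) > stable_limit else 6
```

A stable face (small displacement) yields more frames for denoising, since they can be aligned reliably; a moving face yields fewer.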
After the user presses the shutter button, the terminal fetches the target number of to-be-processed frames from the most recently captured frames. It then determines the first image among them, a frame containing at least one face that meets a preset condition, for example that the user's eyes are open wider there than in any other to-be-processed frame.
In one embodiment, the terminal can measure eye size as follows. It first locates the eye region using face and eye recognition, then computes the eye region's area as a fraction of the whole image: a larger ratio means the eyes are open wider, a smaller ratio that they are open less. Alternatively, the terminal can count the number of pixels the eyes span in the vertical direction and use that count to indicate the eye size.
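Both eye-size metrics described above can be sketched on a simple binary mask standing in for the output of a real eye detector; the mask representation and function names are assumptions for illustration.

```python
def eye_area_ratio(eye_mask):
    """eye_mask: 2-D grid of 0/1 flags marking eye pixels in the image.
    Returns the eye region's share of the whole image area."""
    total = sum(len(row) for row in eye_mask)
    eye = sum(sum(row) for row in eye_mask)
    return eye / total

def eye_pixel_height(eye_mask):
    """Number of rows (vertical pixels) that contain any eye pixel."""
    return sum(1 for row in eye_mask if any(row))
```

Either value can serve as the per-frame eye-size score used to rank the to-be-processed frames.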
After the first image is determined, if it is determined that the noise reduction processing needs to be performed on the first image, the terminal may perform the noise reduction processing on the first image according to the acquired multiple frames of images to be processed.
In one embodiment, the step of determining the first image from the to-be-processed image may include:
and if the images to be processed are single images, determining a first image from the images to be processed, wherein the face image in the first image meets a preset condition.
For example, when the terminal detects that the to-be-processed images acquired by the terminal are single images (that is, each frame of the to-be-processed image only contains one face image), the terminal may determine the to-be-processed image of which the face image meets the preset condition as the first image.
For example, the preset condition is that the user's eyes in the image are open more than the user's eyes in the other images to be processed. For example, the terminal may perform eye recognition on the image to be processed, and acquire a numerical value representing the size of the eye in each frame of the image to be processed. Then, the terminal may sort all the images to be processed in the descending order of the numerical values. Then, the terminal may determine the image to be processed ranked first as the first image.
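Sorting the frames by eye size and taking the top one reduces to a single maximum, sketched below. The frame-id/value mapping is illustrative.

```python
def pick_first_image(eye_sizes):
    """eye_sizes: frame id -> eye-size value. Returns the frame whose
    face has the widest-open eyes (top of the descending sort)."""
    return max(eye_sizes, key=eye_sizes.get)
```

This is the single-person case; the multi-person case below additionally plans face replacements.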
In one embodiment, the step of determining the first image from the to-be-processed image may include:
if each to-be-processed frame is a multi-person image, determining a first image from the frames, the first image containing at least one face image that meets the preset condition;
determining, within the first image, a to-be-replaced face image that does not meet the preset condition;
determining, from the other to-be-processed frames, a target face image that meets the preset condition, the target face image and the to-be-replaced face image belonging to the same user;
replacing the to-be-replaced face image in the first image with the target face image to obtain the image-replaced first image;
the step of denoising the first image using the to-be-processed frames then becomes: denoising the image-replaced first image using the to-be-processed frames.
For example, when the terminal detects that each to-be-processed frame is a multi-person image (each frame contains at least two faces, with an equal number of faces in every frame), it determines a first image containing at least one face that meets the preset condition, for example that a given user's eyes are open wider there than in any other to-be-processed frame.
For example, the terminal acquires six to-be-processed frames, H, I, J, K, L and M, each a group shot of the same three users, C, D and E. The terminal performs face recognition on the six frames and measures the eye size of each face in each frame. Suppose the measurements show that C's eyes are open widest in images K and L, D's in image K, and E's in image L.
Because two of the three users (C and E) have their widest-open eyes in image L, the terminal determines image L as the first image.
After determining image L as the first image, the terminal marks D's face in image L as the face to be replaced, and D's face in image K, where D's eyes are widest, as the target face. The terminal then replaces D's face in image L with D's face from image K, obtaining the image-replaced image L.
Then, when it determines that image L needs noise reduction, the terminal performs multi-frame noise reduction on the image-replaced image L, for example using images J, K and M.
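The multi-person selection and replacement planning can be sketched as follows, using per-person eye-size tables. The data shape and function name are illustrative assumptions; ties between frames are not handled, matching the simple example above.

```python
from collections import Counter

def select_and_plan_replacements(eye_sizes):
    """eye_sizes: person -> {frame id -> eye-size value}. Returns the frame
    where the most people have their widest-open eyes (the first image),
    plus, for each remaining person, the frame holding the best face to
    swap into it."""
    best = {p: max(scores, key=scores.get) for p, scores in eye_sizes.items()}
    first = Counter(best.values()).most_common(1)[0][0]
    # people whose best face lies in another frame must be replaced
    replacements = {p: f for p, f in best.items() if f != first}
    return first, replacements
```

On data shaped like the example (C and E best in L, D best in K), this picks L as the first image and plans to swap in D's face from K.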
Referring to fig. 3 and 4, fig. 3 and 4 are schematic scene diagrams of an image processing method according to an embodiment of the present application.
In this embodiment, after entering the camera preview interface, the terminal captures one frame every 30 to 60 milliseconds, depending on the current environmental parameters, and stores the captured frames in a buffer queue. The buffer queue may be a fixed-length queue, for example one holding the terminal's 10 most recently captured frames.
For example, five people, A, B, C, D and E, are out playing and prepare to take pictures beside a landscape. First, A uses the terminal to take a picture of B, as shown in fig. 3. After entering the preview interface of the camera, the terminal acquires a frame of image every 50 milliseconds according to the currently acquired environmental parameters. Before A presses the photographing button of the camera, the terminal can acquire 4 previously captured images from the buffer queue; it can be understood that the 4 images all include B's face image. The terminal can then detect whether the position of B's face image in the picture is displaced across the 4 frames. If no displacement, or only a very small displacement, occurs, B's face image can be considered relatively stable, i.e., B has not shaken or rotated the head over a large range. If displacement occurs, B's face image is considered unstable, i.e., B has shaken or rotated the head with a large amplitude. For example, in this embodiment, the terminal detects that the position of B's face image in the 4 frames of images has not been displaced.
Then, the terminal may obtain the current ambient light brightness, and determine whether the terminal is currently in a dark light environment according to the ambient light brightness. For example, the terminal determines that it is currently in a dim light environment.
Then, the terminal may determine a target frame number according to the obtained information: the position of B's face image in the picture has not been displaced, and the terminal is currently in a dim-light environment. For example, the target frame number is determined to be 6 frames.
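The mapping from scene conditions to a target frame number can be sketched as below; only the stable-face, dim-light case (6 frames) comes from the example, so the other return values are illustrative assumptions:

```python
def determine_target_frame_count(face_stable: bool, in_dim_light: bool) -> int:
    """Pick how many buffered frames to fetch for later processing."""
    if face_stable and in_dim_light:
        return 6  # value used in the embodiment above
    if face_stable:
        return 4  # assumed: bright scenes need fewer frames for denoising
    return 2      # assumed: a moving face leaves fewer usable aligned frames

print(determine_target_frame_count(True, True))  # -> 6
```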
Thereafter, when A presses the photographing button, the terminal can acquire 6 captured images of B. For example, the terminal may obtain the 6 most recently acquired images of B from the buffer queue, where the 6 images are, in chronological order, denoted images A, B, C, D, E and F.
After acquiring the 6 frames of images, the terminal may perform face recognition on them and detect the eye size of the face in each image. For example, the values representing B's eye size in the images A, B, C, D, E and F are 80, 82, 83, 84, 85 and 84, respectively. Since the 6 frames are single-person images of B, the terminal may determine the image in which B's eyes are largest as the first image, i.e., image E is determined as the first image.
After determining image E as the first image, the terminal may acquire the sensitivity used when image E was captured and detect whether it is less than a preset threshold. For example, the sensitivity used by the terminal when capturing image E is 800, and the preset threshold is 800. Since the sensitivity used when image E was captured equals the preset threshold, the terminal may determine that noise reduction processing needs to be performed on image E. In this case, the terminal may perform noise reduction processing on image E based on 4 continuously acquired frames that include image E. For example, the terminal may perform multi-frame noise reduction processing on image E based on the images C, D and F. In multi-frame noise reduction, the terminal may first align the images C, D, E and F and obtain the pixel values of each group of aligned pixels. If the pixel values within a group of aligned pixels differ only slightly, the terminal can calculate the mean pixel value of the group and replace the pixel value of the corresponding pixel in image E with that mean. If the pixel values within a group differ greatly, the corresponding pixel value in image E may be left unadjusted.
For example, suppose the pixel P1 in image C, the pixel P2 in image D, the pixel P3 in image E and the pixel P4 in image F form a group of mutually aligned pixels, where the pixel value of P1 is 101, that of P2 is 102, that of P3 is 103 and that of P4 is 104. The mean pixel value of this group is then 102.5, so the terminal may adjust the value of pixel P3 in image E from 103 to 102.5, thereby performing noise reduction on pixel P3 of image E. If instead the pixel value of P1 is 80, that of P2 is 83, that of P3 is 103 and that of P4 is 90, the pixel values differ greatly, so the value of P3 may be left unadjusted, i.e., P3 remains 103.
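A minimal sketch of this alignment-and-average rule on already-aligned pixel rows; the `max_spread` threshold for "not very different" is an assumption, since the embodiment gives no concrete value:

```python
def denoise_pixel_group(values, target_index, max_spread=10.0):
    """Apply the averaging rule to one group of mutually aligned pixels.

    values holds the pixel values of one aligned group; target_index picks
    the pixel belonging to the frame being denoised. max_spread is an
    assumed threshold for 'not very different'.
    """
    if max(values) - min(values) <= max_spread:
        return sum(values) / len(values)   # close enough: use the group mean
    return values[target_index]            # too different: keep the original

def multiframe_denoise(frames, target_index, max_spread=10.0):
    """Denoise one frame pixel by pixel against pre-aligned reference frames."""
    groups = zip(*frames)  # one group of aligned pixels per pixel position
    return [denoise_pixel_group(list(g), target_index, max_spread) for g in groups]

# Worked example: P1..P4 = 101, 102, 103, 104 -> P3 (image E) becomes 102.5.
# Divergent group 80, 83, 103, 90 -> P3 keeps its value 103.
c_row, d_row, e_row, f_row = [101, 80], [102, 83], [103, 103], [104, 90]
result = multiframe_denoise([c_row, d_row, e_row, f_row], target_index=2)
print(result)  # -> [102.5, 103]
```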
After image E has undergone multi-frame noise reduction processing, the terminal may output the noise-reduced image E to the album as a photo. It can be appreciated that this image E is the captured photo of B with large eyes.
Eight seconds after finishing the photo of B, A then takes a group photo of C, D and E, as shown in fig. 4. Similarly, after entering the preview interface of the camera, the terminal acquires a frame of image at regular intervals according to the currently acquired environmental parameters. Before A presses the photographing button, the terminal can acquire 4 previously captured images from the buffer queue; it can be understood that the 4 images all include the face images of C, D and E. The terminal can then detect whether the positions of the face images of C, D and E in the picture are displaced across the 4 frames. For example, in this embodiment, the terminal detects that the positions of the face images of C, D and E in the 4 frames of images have not been displaced.
Then, the terminal may obtain the current ambient light brightness, and determine whether the terminal is currently in a dark light environment according to the ambient light brightness. For example, the terminal determines that it is currently in a dim light environment.
Then, the terminal may determine a target frame number according to the obtained information: the positions of the face images of C, D and E in the picture have not been displaced, and the terminal is currently in a dim-light environment. For example, the target frame number is determined to be 6 frames.
Thereafter, when A presses the photographing button, the terminal can acquire 6 captured images of C, D and E. For example, the terminal may obtain the 6 most recently acquired images of C, D and E from the buffer queue, where the 6 images are, in chronological order, H, I, J, K, L and M.
After acquiring the 6 frames of images, the terminal may perform face recognition on them and detect the eye size of each face in the images. For example, the values representing C's eye size in the images H, I, J, K, L and M are 81, 83, 84, 86, 85, respectively; the values representing D's eye size are 75, 77, 79, 78, 77, respectively; and the values representing E's eye size are 84, 85, 86, 88, 86, respectively.
Since the 6 frames of images are multi-person images, the terminal determines, for each of the people they contain, the image in which that person's eyes are largest.
For example, for C, the face image in which C's eyes are largest appears in images K and L. For D, the face image in which D's eyes are largest appears in image K. For E, the face image in which E's eyes are largest appears in image L. Since image L contains the largest-eye face images of two people, the terminal can determine image L as the target image.
After image L is determined as the target image, the terminal may replace D's face image in image L with D's face image in image K (in which D's eyes are largest). It can be understood that, after the face image replacement is completed, the eyes of all three people, C, D and E, in image L are the largest among the 6 frames of images H, I, J, K, L and M.
Then, the terminal may acquire the sensitivity used when the image L is captured, and detect whether the sensitivity is less than a preset threshold. For example, the sensitivity used by the terminal when capturing the image L is 790, and the preset threshold is 800.
The terminal detects that the sensitivity used when image L was captured, 790, is less than the preset threshold 800, and that the difference between them, 10, is less than or equal to the preset difference 20. In addition, the interval between the second moment at which image L was captured and the first moment at which image E (the last image the terminal captured and output to the album as a photo) was captured is 8 seconds, which is less than the preset interval of 20 seconds. Therefore, the terminal can determine that noise reduction processing needs to be performed on image L.
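The decision just described can be sketched as a single predicate; parameter names are illustrative, and the values in the call mirror the example (ISO 790 against threshold 800, preset difference 20, 8 seconds since the previously denoised photo, preset interval 20 seconds):

```python
def needs_noise_reduction(iso: int, iso_threshold: int, max_diff: int,
                          seconds_since_last_photo: float, max_interval: float,
                          last_photo_was_denoised: bool) -> bool:
    """Decide whether the newly captured image should be denoised."""
    if iso >= iso_threshold:
        return True  # at or above the threshold, always denoise
    # Slightly below the threshold: denoise only if the previous photo was
    # taken recently and was itself denoised (the historical-data condition).
    near_threshold = (iso_threshold - iso) <= max_diff
    recent = seconds_since_last_photo < max_interval
    return near_threshold and recent and last_photo_was_denoised

print(needs_noise_reduction(790, 800, 20, 8, 20, True))  # -> True
```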
Thereafter, the terminal may acquire 4 continuously captured frames including image L, for example the 4 frames J, K, L and M. Finally, the terminal may perform multi-frame noise reduction processing on the face-replaced image L based on the images J, K and M, and then output the noise-reduced image L to the album as a photo.
It can be understood that, in this embodiment, image L originally contains the face images of C and E in the large-eye state, and the terminal replaces D's face image in image L with D's large-eye face image from image K, so that after the image replacement, image L contains the face images of C, D and E all in the large-eye state. The terminal then performs noise reduction processing on image L and outputs it to the album as a photo, so the photo is a large-eye photo of the three people C, D and E, and owing to the noise reduction processing its imaging quality is good. That is, in the photo finally output by the terminal, the eyes of all three people, C, D and E, are in the large-eye state.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 300 may include: a first obtaining module 301, a second obtaining module 302, and a determining module 303.
The first obtaining module 301 is configured to obtain an environmental parameter when the terminal collects the first image.
For example, the first obtaining module 301 may first obtain the environmental parameters at the time the terminal acquires the first image.
In some embodiments, the environmental parameter may be a parameter such as sensitivity (ISO), environmental light brightness, and the like.
A second obtaining module 302, configured to obtain historical data of whether the terminal performs noise reduction processing on the acquired image.
For example, after the first obtaining module 301 obtains the environment parameter when the terminal collects the first image, the second obtaining module 302 may obtain history data of whether the noise reduction processing is performed on the image collected before the first image.
A judging module 303, configured to judge whether to perform noise reduction processing on the first image according to the environment parameter and the historical data.
For example, after acquiring the environmental parameter when the first image is acquired and the history data of whether the noise reduction processing is performed on the image acquired before the first image, the determining module 303 may determine whether the noise reduction processing is performed on the first image according to the environmental parameter and the history data.
In one embodiment, the second obtaining module 302 may be configured to:
acquiring historical data of whether a terminal performs noise reduction processing on a second image, wherein the second image is an image acquired by the terminal last time before the first image is acquired.
In an embodiment, the second obtaining module 302 may further be configured to:
acquiring a first moment when a terminal acquires the first image and a second moment when a terminal acquires the second image;
and if the time interval between the first moment and the second moment is smaller than a preset interval, acquiring historical data of whether the terminal performs noise reduction processing on the second image.
In an embodiment, the environmental parameter is sensitivity, and the determining module 303 is configured to:
and if the value of the sensitivity when the first image is collected is smaller than a preset threshold, the difference value between the value of the sensitivity and the preset threshold is smaller than or equal to a preset difference value, and the historical data represents that the terminal performs noise reduction processing on the second image, determining that the noise reduction processing needs to be performed on the first image.
In one embodiment, the second obtaining module 302 may be configured to: and if the fact that the terminal performs noise reduction on the first image in a multi-frame noise reduction mode is determined, acquiring historical data of whether a second image is subjected to noise reduction processing by the terminal, wherein the second image is an image which is acquired and stored in an album by the terminal last time before the first image is acquired.
Referring to fig. 6, fig. 6 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 300 may further include a determination module 304.
the determination module 304 is configured to: when an image containing a human face is collected, determining a target frame number according to at least two collected images, wherein the target frame number is greater than or equal to 2; acquiring to-be-processed images with the number of the target frames from the acquired multi-frame images; and determining a first image from the image to be processed, wherein the first image at least comprises a face image meeting a preset condition.
Then, the determining module 303 may be further configured to: and if the first image needs to be subjected to noise reduction processing, performing noise reduction processing on the first image according to the image to be processed.
In one embodiment, the determining module 304 may be further configured to: and if the images to be processed are single images, determining a first image from the images to be processed, wherein the face image in the first image meets a preset condition.
In one embodiment, the determining module 304 may be further configured to: if each image to be processed is a multi-person image, determining a first image from the images to be processed, wherein the first image at least comprises a face image meeting preset conditions; determining a face image to be replaced which does not accord with the preset condition from the first image; determining a target face image meeting the preset condition from other images to be processed except the first image, wherein the target face image and the face image to be replaced are face images of the same user; and in the first image, replacing the face image to be replaced with the target face image to obtain a first image subjected to image replacement processing.
Then, the determining module 303 may be further configured to: and according to the image to be processed, performing noise reduction processing on the first image subjected to the image replacement processing.
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to execute the steps in the image processing method provided in this embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the steps in the image processing method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 7, fig. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
The mobile terminal 400 may include components such as a sensor 401, memory 402, processor 403, and the like. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 7 is not intended to be limiting of mobile terminals and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The sensor 401 may comprise, for example, an ambient light level sensor.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the mobile terminal.
In this embodiment, the processor 403 in the mobile terminal loads the executable code corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, thereby implementing the steps:
acquiring an environmental parameter when a terminal acquires a first image; acquiring historical data of whether the acquired image is subjected to noise reduction processing by a terminal; and judging whether the first image needs to be subjected to noise reduction processing or not according to the environment parameters and the historical data.
The embodiment of the present application further provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a diagram illustrating an exemplary image processing circuit. As shown in fig. 8, for ease of explanation, only the aspects of the image processing techniques related to the embodiments of the present application are shown.
As shown in fig. 8, the image processing circuit includes an image signal processor 540 and a control logic 550. Image data captured by the imaging device 510 is first processed by the image signal processor 540, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 510. The imaging device 510 may include a camera with one or more lenses 511 and an image sensor 512. Image sensor 512 may include an array of color filters (e.g., Bayer filters), and image sensor 512 may acquire the light intensity and wavelength information captured with each imaging pixel of image sensor 512 and provide a set of raw image data that may be processed by the image signal processor 540. The sensor 520 may provide the raw image data to the image signal processor 540 based on the sensor 520 interface type. The sensor 520 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
The image signal processor 540 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and image signal processor 540 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image signal processor 540 may also receive pixel data from the image memory 530. For example, raw pixel data is sent from the sensor 520 interface to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the image signal processor 540 for processing. The image Memory 530 may be a part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 520 interface or from the image memory 530, the image signal processor 540 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 530 for additional processing before being displayed. The image signal processor 540 receives the processed data from the image memory 530 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 570 for viewing by a user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). Further, the output of the image signal processor 540 may also be sent to the image memory 530, and the display 570 may read image data from the image memory 530. In one embodiment, image memory 530 may be configured to implement one or more frame buffers. Further, the output of the image signal processor 540 may be transmitted to an encoder/decoder 560 so as to encode/decode image data. The encoded image data may be saved and decompressed before being displayed on the display 570 device. The encoder/decoder 560 may be implemented by a CPU or GPU or coprocessor.
The statistical data determined by the image signal processor 540 may be sent to the control logic 550. For example, the statistical data may include image sensor 512 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 511 shading correction, and the like. The control logic 550 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of the imaging device 510, as well as ISP control parameters, based on the received statistical data. For example, the control parameters may include sensor 520 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 511 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 511 shading correction parameters.
The following steps are steps for implementing the image processing method provided by the embodiment by using the image processing technology in fig. 8:
acquiring an environmental parameter when a terminal acquires a first image; acquiring historical data of whether the acquired image is subjected to noise reduction processing by a terminal; and judging whether the first image needs to be subjected to noise reduction processing or not according to the environment parameters and the historical data.
In one embodiment, when the step of acquiring the history data of whether the noise reduction processing is performed on the acquired image by the terminal is performed by the electronic device, the electronic device may perform: acquiring historical data of whether a terminal performs noise reduction processing on a second image, wherein the second image is an image acquired by the terminal last time before the first image is acquired.
In one embodiment, after the step of acquiring the environmental parameter when the terminal acquires the first image, the electronic device may further perform: and acquiring a first moment when the terminal acquires the first image and a second moment when the terminal acquires the second image.
Then, when the step of acquiring the history data of whether the terminal has performed noise reduction processing on the second image is performed by the electronic device, the electronic device may perform: and if the time interval between the first moment and the second moment is smaller than a preset interval, acquiring historical data of whether the terminal performs noise reduction processing on the second image.
In one embodiment, the environmental parameter is sensitivity, and when the electronic device executes the step of determining whether the noise reduction processing needs to be performed on the first image according to the environmental parameter and the history data, the electronic device may execute: and if the value of the sensitivity when the first image is collected is smaller than a preset threshold, the difference value between the value of the sensitivity and the preset threshold is smaller than or equal to a preset difference value, and the historical data represents that the terminal performs noise reduction processing on the second image, determining that the noise reduction processing needs to be performed on the first image.
In one embodiment, when the acquiring terminal executes the step of acquiring the history data of whether the terminal has performed noise reduction processing on the second image, which is the image acquired by the terminal last time before the first image is acquired, the electronic device may execute: and if the fact that the terminal performs noise reduction on the first image in a multi-frame noise reduction mode is determined, acquiring historical data of whether a second image is subjected to noise reduction processing by the terminal, wherein the second image is an image which is acquired and stored in an album by the terminal last time before the first image is acquired.
In one embodiment, before the step of acquiring the environmental parameter when the terminal acquires the first image, the electronic device may further perform: when an image containing a human face is collected, determining a target frame number according to at least two collected images, wherein the target frame number is greater than or equal to 2; acquiring to-be-processed images with the number of the target frames from the acquired multi-frame images; and determining a first image from the image to be processed, wherein the first image at least comprises a face image meeting a preset condition.
Then, after the step of determining whether the noise reduction processing needs to be performed on the first image, the electronic device may further perform: and if the first image needs to be subjected to noise reduction processing, performing noise reduction processing on the first image according to the image to be processed.
In one embodiment, when the electronic device performs the step of determining the first image from the to-be-processed image, it may perform: and if the images to be processed are single images, determining a first image from the images to be processed, wherein the face image in the first image meets a preset condition.
In one embodiment, when the electronic device performs the step of determining the first image from the to-be-processed image, it may perform: if each image to be processed is a multi-person image, determining a first image from the images to be processed, wherein the first image at least comprises a face image meeting preset conditions; determining a face image to be replaced which does not accord with the preset condition from the first image; determining a target face image meeting the preset condition from other images to be processed except the first image, wherein the target face image and the face image to be replaced are face images of the same user; and in the first image, replacing the face image to be replaced with the target face image to obtain a first image subjected to image replacement processing.
Then, when the electronic device performs the step of performing noise reduction processing on the first image according to the image to be processed, it may perform: and according to the image to be processed, performing noise reduction processing on the first image subjected to the image replacement processing.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the image processing method, and are not described herein again.
The image processing apparatus provided in the embodiment of the present application and the image processing method in the above embodiments belong to the same concept, and any method provided in the embodiment of the image processing method may be executed on the image processing apparatus, and a specific implementation process thereof is described in detail in the embodiment of the image processing method, and is not described herein again.
It should be noted that, for the image processing method described in the embodiment of the present application, it can be understood by those skilled in the art that all or part of the process of implementing the image processing method described in the embodiment of the present application can be completed by controlling the relevant hardware through a computer program, where the computer program can be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and during the execution, the process of the embodiment of the image processing method can be included. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the image processing apparatus according to the embodiment of the present application, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing describes in detail a method, an apparatus, a storage medium, and an electronic device for processing an image according to embodiments of the present application, and a specific example is applied to illustrate principles and embodiments of the present invention, and the description of the foregoing embodiments is only used to help understand the method and the core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A method of processing an image, comprising:
acquiring an environmental parameter when a terminal collects a first image, wherein the environmental parameter comprises at least a sensitivity;
acquiring a first moment at which the terminal collects the first image and a second moment at which the terminal collects a second image, wherein the second image is the image most recently collected by the terminal before the first image is collected;
if the time interval between the first moment and the second moment is smaller than a preset interval, acquiring historical data on whether the terminal has performed noise reduction processing on the second image;
if the value of the sensitivity when the first image is collected is smaller than a preset threshold, the difference between the sensitivity value and the preset threshold is smaller than or equal to a preset difference, and the historical data indicates that the terminal has performed noise reduction processing on the second image, determining that noise reduction processing needs to be performed on the first image; and
if the sensitivity value used when the terminal collects the first image is not smaller than the preset threshold, performing noise reduction processing on the first image.
2. The method according to claim 1, wherein the step of acquiring historical data on whether the terminal has performed noise reduction processing on the second image comprises:
if it is determined that the terminal performs noise reduction processing on the first image in a multi-frame noise reduction manner, acquiring historical data on whether the terminal has performed noise reduction processing on the second image, wherein the second image is the image most recently collected and stored in an album by the terminal before the first image is collected.
3. The method according to claim 2, wherein before the step of acquiring the environmental parameter when the terminal collects the first image, the method further comprises:
when images containing a human face are collected, determining a target frame number according to at least two collected images, wherein the target frame number is greater than or equal to 2;
acquiring, from the collected multi-frame images, images to be processed in the target frame number;
determining the first image from the images to be processed, wherein the first image comprises at least one face image meeting a preset condition;
wherein the step of performing noise reduction processing on the first image comprises:
performing noise reduction processing on the first image according to the images to be processed.
4. The method according to claim 3, wherein the step of determining the first image from the images to be processed comprises:
if each image to be processed is a single-person image, determining the first image from the images to be processed, wherein the face image in the first image meets the preset condition.
5. The method according to claim 3, wherein the step of determining the first image from the images to be processed comprises:
if each image to be processed is a multi-person image, determining the first image from the images to be processed, wherein the first image comprises at least one face image meeting the preset condition;
determining, from the first image, a face image to be replaced that does not meet the preset condition;
determining a target face image meeting the preset condition from the images to be processed other than the first image, wherein the target face image and the face image to be replaced are face images of the same user; and
replacing the face image to be replaced in the first image with the target face image to obtain a first image subjected to image replacement processing;
wherein the step of performing noise reduction processing on the first image according to the images to be processed comprises: performing noise reduction processing, according to the images to be processed, on the first image subjected to the image replacement processing.
6. An apparatus for processing an image, comprising:
a first acquisition module, configured to acquire an environmental parameter when a terminal collects a first image, wherein the environmental parameter comprises at least a sensitivity;
a second acquisition module, configured to acquire a first moment at which the terminal collects the first image and a second moment at which the terminal collects a second image, wherein the second image is the image most recently collected by the terminal before the first image is collected, and, if the time interval between the first moment and the second moment is smaller than a preset interval, to acquire historical data on whether the terminal has performed noise reduction processing on the second image; and
a judging module, configured to determine that noise reduction processing needs to be performed on the first image if the value of the sensitivity when the first image is collected is smaller than a preset threshold, the difference between the sensitivity value and the preset threshold is smaller than or equal to a preset difference, and the historical data indicates that the terminal has performed noise reduction processing on the second image; and, if the sensitivity value used when the terminal collects the first image is not smaller than the preset threshold, to perform noise reduction processing on the first image.
7. A storage medium having stored thereon a computer program, characterized in that the computer program, when executed on a computer, causes the computer to execute the method according to any of claims 1 to 5.
8. An electronic device comprising a memory and a processor, wherein the processor is configured to perform the method of any one of claims 1 to 5 by invoking a computer program stored in the memory.
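The decision logic of claim 1 can be paraphrased as a short sketch. The Python below is illustrative only and is not part of the patent: the function name and all concrete values (`PRESET_THRESHOLD`, `PRESET_DIFFERENCE`, `PRESET_INTERVAL`) are hypothetical placeholders for the claimed "preset" quantities, which the claims deliberately leave unspecified.

```python
# Illustrative sketch of the noise-reduction decision in claim 1.
# All thresholds are hypothetical placeholders; the patent does not
# disclose concrete numbers.

PRESET_THRESHOLD = 400    # sensitivity (ISO) threshold, hypothetical
PRESET_DIFFERENCE = 100   # max distance below the threshold, hypothetical
PRESET_INTERVAL = 2.0     # max seconds between the two captures, hypothetical

def needs_noise_reduction(sensitivity, first_time, second_time,
                          second_image_was_denoised):
    """Decide whether the first image requires noise reduction.

    sensitivity: ISO value in effect when the first image was collected.
    first_time / second_time: capture timestamps (seconds) of the first
        image and of the image collected immediately before it.
    second_image_was_denoised: historical record of whether the terminal
        performed noise reduction on that previous (second) image.
    """
    # High sensitivity alone triggers noise reduction.
    if sensitivity >= PRESET_THRESHOLD:
        return True
    # Sensitivity just below the threshold: fall back on the history of
    # the previous shot, but only if the two captures are close in time.
    close_in_time = (first_time - second_time) < PRESET_INTERVAL
    near_threshold = (PRESET_THRESHOLD - sensitivity) <= PRESET_DIFFERENCE
    return close_in_time and near_threshold and second_image_was_denoised

# Example: ISO 350 is within 100 of the threshold, and the previous shot
# was taken 1 s earlier and was denoised, so the first image is denoised too.
print(needs_noise_reduction(350, 10.0, 9.0, True))   # True
print(needs_noise_reduction(350, 10.0, 9.0, False))  # False
print(needs_noise_reduction(800, 10.0, 9.0, False))  # True
```

The sketch folds the claim's two branches into one predicate: the history of the previous image only matters when the sensitivity falls just short of the threshold and the shots were taken in quick succession, which is the continuous-shooting scenario the claims target.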
CN201810222006.1A 2018-03-18 2018-03-18 Image processing method and device, storage medium and electronic equipment Active CN108335278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810222006.1A CN108335278B (en) 2018-03-18 2018-03-18 Image processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810222006.1A CN108335278B (en) 2018-03-18 2018-03-18 Image processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108335278A CN108335278A (en) 2018-07-27
CN108335278B true CN108335278B (en) 2020-07-07

Family

ID=62932117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810222006.1A Active CN108335278B (en) 2018-03-18 2018-03-18 Image processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108335278B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115733913A (en) * 2021-08-25 2023-03-03 北京小米移动软件有限公司 Continuous photographing method and device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012012907A1 (en) * 2010-07-29 2012-02-02 Valorbec Société En Commandite, Représentée Par Gestion Valeo S.E.C. Minimal iterativity anisotropic diffusion method for reducing image or video noises
CN105163040A (en) * 2015-09-28 2015-12-16 广东欧珀移动通信有限公司 Image processing method and mobile terminal
CN107180417A (en) * 2017-05-31 2017-09-19 广东欧珀移动通信有限公司 Photo processing method, device, computer-readable recording medium and electronic equipment
CN107205116A (en) * 2017-06-13 2017-09-26 广东欧珀移动通信有限公司 Image-selecting method and Related product
CN107464219A (en) * 2016-06-03 2017-12-12 上海联影医疗科技有限公司 The motion detection and noise-reduction method of consecutive image
CN107734253A (en) * 2017-10-13 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107798652A (en) * 2017-10-31 2018-03-13 广东欧珀移动通信有限公司 Image processing method, device, readable storage medium storing program for executing and electronic equipment


Also Published As

Publication number Publication date
CN108335278A (en) 2018-07-27

Similar Documents

Publication Publication Date Title
CN109068067B (en) Exposure control method and device and electronic equipment
CN107948519B (en) Image processing method, device and equipment
EP3480783B1 (en) Image-processing method, apparatus and device
CN108322669B (en) Image acquisition method and apparatus, imaging apparatus, and readable storage medium
CN109040609B (en) Exposure control method, exposure control device, electronic equipment and computer-readable storage medium
CN108900782B (en) Exposure control method, exposure control device and electronic equipment
CN110290289B (en) Image noise reduction method and device, electronic equipment and storage medium
CN110072052B (en) Image processing method and device based on multi-frame image and electronic equipment
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
CN110166708B (en) Night scene image processing method and device, electronic equipment and storage medium
CN108259770B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110191291B (en) Image processing method and device based on multi-frame images
EP3480784B1 (en) Image processing method, and device
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN109348088B (en) Image noise reduction method and device, electronic equipment and computer readable storage medium
CN110166707B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN108401110B (en) Image acquisition method and device, storage medium and electronic equipment
CN108093158B (en) Image blurring processing method and device, mobile device and computer readable medium
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN110290323B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107509044B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN109672819B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110166706B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

GR01 Patent grant