CN109118447B - Picture processing method, picture processing device and terminal equipment - Google Patents

Picture processing method, picture processing device and terminal equipment

Info

Publication number
CN109118447B
CN109118447B (application CN201810866578.3A)
Authority
CN
China
Prior art keywords
picture
model
dim light
processed
trained
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810866578.3A
Other languages
Chinese (zh)
Other versions
CN109118447A (en)
Inventor
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810866578.3A priority Critical patent/CN109118447B/en
Publication of CN109118447A publication Critical patent/CN109118447A/en
Application granted granted Critical
Publication of CN109118447B publication Critical patent/CN109118447B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/73: Deblurring; Sharpening
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing


Abstract

The application provides a picture processing method, a picture processing device and terminal equipment. The picture processing method includes the following steps: acquiring a picture to be processed; detecting whether the picture to be processed was obtained in a dark light environment; and, if so, improving the picture brightness of the picture to be processed by using a trained dark light reduction model, where the dark light reduction model is a pre-trained neural network model for improving the picture brightness of pictures obtained in a dark light environment. The method solves the technical problem that traditional methods for improving the image quality of pictures collected in a dark environment are too complicated.

Description

Picture processing method, picture processing device and terminal equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal device, and a computer-readable storage medium.
Background
When the external environment is relatively dark (for convenience of subsequent description, hereinafter referred to as a dark light environment), a large amount of noise exists in the image acquired by a conventional terminal device. In order to improve the quality of an image acquired in a dark light environment, the image brightness needs to be adjusted; however, the brightness-adjusted image is often blurred, so further operations such as denoising, deblurring and image enhancement are required. The conventional method for improving the quality of images acquired in a dark light environment is therefore too complicated.
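For illustration only, the following sketch (not part of the application) shows what such a conventional multi-step pipeline can look like, here using OpenCV; every operation and parameter value is an assumed example of the kind of chain described above, not a method prescribed by the application:

```python
import cv2

def traditional_dim_light_pipeline(img):
    """Illustrative sketch of the conventional multi-step approach:
    brighten, then denoise, then deblur/sharpen, then enhance.
    `img` is assumed to be a uint8 BGR picture; all parameters are
    illustrative assumptions."""
    # 1. Brightness adjustment (a simple gain, which also amplifies noise).
    bright = cv2.convertScaleAbs(img, alpha=2.0, beta=20)
    # 2. Denoising (computationally expensive on large pictures).
    denoised = cv2.fastNlMeansDenoisingColored(bright, None, 10, 10, 7, 21)
    # 3. Deblurring approximated by unsharp-mask sharpening.
    blur = cv2.GaussianBlur(denoised, (0, 0), 3)
    sharp = cv2.addWeighted(denoised, 1.5, blur, -0.5, 0)
    # 4. Image enhancement via CLAHE on the luminance channel.
    lab = cv2.cvtColor(sharp, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
```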
Disclosure of Invention
In view of this, the present application provides a picture processing method, a picture processing apparatus, a terminal device and a computer readable storage medium, which can solve the technical problem that the method for improving the image quality of a picture acquired in a dark light environment in the prior art is too complicated.
A first aspect of the present application provides an image processing method, including:
acquiring a picture to be processed;
detecting whether the picture to be processed is a picture obtained in a dark light environment;
if the picture to be processed is a picture obtained in a dark light environment, then:
and improving the picture brightness of the picture to be processed by using the trained dim light reduction model, wherein the dim light reduction model is a pre-trained neural network model for improving the picture brightness of the picture acquired in a dim light environment.
A second aspect of the present application provides a picture processing apparatus, including:
the image acquisition module is used for acquiring an image to be processed;
the dark light detection module is used for detecting whether the picture to be processed is a picture acquired in a dark light environment;
and the dim light reduction module is used for improving the picture brightness of the picture to be processed by utilizing a trained dim light reduction model if the picture to be processed is the picture acquired in a dim light environment, wherein the dim light reduction model is a pre-trained neural network model used for improving the picture brightness of the picture acquired in the dim light environment.
A third aspect of the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect as described above.
A fifth aspect of the present application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method of the first aspect as described above.
From the above, the present application provides a picture processing method: a picture to be processed is obtained; whether the picture was obtained in a dark light environment is detected; and if so, the picture brightness of the picture is directly improved by using a trained dark light reduction model, which is a pre-trained neural network model for improving the brightness of pictures obtained in a dark light environment. Therefore, in the technical scheme provided by the application, when the picture acquired by the terminal device was acquired in a dark environment, the preset neural network model is directly used to process the picture, whereas the traditional method must perform a series of operations such as brightness adjustment, denoising, deblurring and picture enhancement after acquiring such a picture; the scheme of the application is accordingly much simpler.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a picture processing method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation process of a dark light reduction model training process according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a training process of a dim light reduction model according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart illustrating an implementation process of a training process of a dim light reduction model according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a training process of a discriminant model provided in the second embodiment of the present application;
FIG. 6 is a schematic diagram of a training process of another dim light reduction model provided in the second embodiment of the present application;
fig. 7 is a schematic structural diagram of a picture processing apparatus according to a third embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The image processing method provided by the embodiment of the application can be applied to terminal equipment, and the terminal equipment includes, but is not limited to: smart phones, tablet computers, learning machines, intelligent wearable devices, and the like.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In particular implementations, the terminal devices described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments the device may not be a portable communication device, but a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or a touch pad).
In the discussion that follows, a terminal device that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to explain the technical solution of the present application, the following description will be given by way of specific examples.
Example one
Referring to fig. 1, a picture processing method provided in an embodiment of the present application is described below, where the picture processing method in the embodiment of the present application includes:
in step S101, a picture to be processed is acquired;
in the embodiment of the application, a to-be-processed picture is obtained first. The picture to be processed may be a picture acquired by a camera after the terminal device starts the camera or the video camera, for example, a user starts a camera application program and the camera acquires a certain frame of picture; or, the picture may be a picture taken by the user through a local camera, for example, a picture taken after the user starts a camera application program in the terminal device and clicks a shooting button; or, the image may be an image received by the user through another application, for example, an image sent by another wechat contact received by the user in the wechat; or, the picture may also be a picture downloaded by the user from the internet, for example, a picture downloaded by the user in a browser through a common carrier network; or, a certain frame of picture in the video, for example, one of the frames of pictures in the television program watched by the user, where the source of the picture to be processed is not limited.
In step S102, detecting whether the picture to be processed is a picture obtained in a dark light environment;
after the picture to be processed is obtained, it is required to detect whether the picture to be processed is a picture obtained in a dark light environment, where the ambient brightness is less than the preset brightness. In general, the darker the external environment, the higher the sensitivity of the camera when taking a picture, and the longer the exposure time, so that if the to-be-processed picture is a picture currently captured by the camera (for example, the to-be-processed picture is a certain frame of picture captured by the camera after the user starts a camera or a video camera application), then the current sensitivity (or exposure time) of the camera can be obtained, and whether the current sensitivity of the camera is greater than the preset sensitivity (or whether the current exposure time of the camera is greater than the preset exposure time) is judged, if the current sensitivity of the camera is greater than the preset sensitivity (or the current exposure time of the camera is greater than the preset exposure time), and if not, determining that the picture to be processed is not the picture acquired in the dark light environment. In addition, in the embodiment of the present application, the sensitivity or the exposure duration of the camera when the to-be-processed picture is taken may also be acquired from the attribute information of the to-be-processed picture, and then whether the to-be-processed picture is an acquired picture in a dark light environment is determined according to the sensitivity or the exposure duration of the camera in the attribute information.
In addition, if the picture to be processed acquired in step S101 is not a picture currently captured by the camera, and its attribute information does not include the sensitivity or exposure time of the camera when the picture was taken, whether the picture was acquired in a dark light environment can be determined by detecting the picture brightness of the picture itself. Note that when the external environment is dark, the brightness of the picture captured by the camera is certainly low, but a low picture brightness does not necessarily mean that the external environment was dark; detecting the picture brightness therefore cannot determine very accurately whether the picture was acquired in a dark light environment. Accordingly, in step S102, the sensitivity or exposure time of the camera when the picture was taken should be acquired first, and whether the picture was acquired in a dark light environment judged from it; only when the sensitivity or exposure time for the picture to be processed cannot be acquired is the picture brightness of the picture used for the detection.
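As a concrete illustration, a minimal sketch of this detection order follows; the function name and all three threshold values are illustrative assumptions, not values fixed by the application:

```python
import numpy as np

def is_dim_light_picture(sensitivity=None, exposure_s=None, picture=None,
                         iso_threshold=800, exposure_threshold=0.1,
                         brightness_threshold=60):
    """Detection order described above: camera sensitivity first, then
    exposure time, and picture brightness only as a coarse fallback.
    The threshold values are illustrative assumptions."""
    if sensitivity is not None:                # preset sensitivity check
        return sensitivity > iso_threshold
    if exposure_s is not None:                 # preset exposure-time check
        return exposure_s > exposure_threshold
    # Fallback: mean brightness of the picture itself (less accurate, as
    # the text notes: a dark picture need not mean a dark environment).
    return float(np.asarray(picture, dtype=np.float32).mean()) < brightness_threshold
```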
In step S103, if the to-be-processed picture is a picture obtained in a dark light environment, the picture brightness of the to-be-processed picture is improved by using a trained dark light reduction model, where the dark light reduction model is a pre-trained neural network model for improving the picture brightness of the picture obtained in the dark light environment;
in this embodiment of the application, if it is detected that the to-be-processed picture obtained in step S101 is a picture obtained in a dark light environment, the to-be-processed picture is input into a pre-trained dark light reduction model, where the dark light reduction model is a neural network model deployed in the terminal device before the terminal device leaves a factory, and is used to improve the picture brightness of the picture obtained in the dark light environment.
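Putting steps S101 to S103 together, a minimal sketch of the deployed inference path might look as follows, assuming a PyTorch model and the hypothetical `is_dim_light_picture` detector sketched above (neither name comes from the application):

```python
import torch

def process_picture(picture: torch.Tensor,
                    dim_light_model: torch.nn.Module,
                    acquired_in_dim_light: bool) -> torch.Tensor:
    """picture: the picture to be processed (step S101);
    acquired_in_dim_light: result of the detection in step S102."""
    if acquired_in_dim_light:
        with torch.no_grad():
            # Step S103: one forward pass of the pre-trained dim light
            # reduction model replaces the traditional brighten/denoise/
            # deblur/enhance chain.
            picture = dim_light_model(picture)
    return picture
```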
Illustratively, the training process of the dim light reduction model may be as shown in fig. 2, and includes steps S201 to S204:
in step S201, any dark light sample picture and a non-dark light sample picture corresponding to the dark light sample picture are selected from a sample database, where the sample database includes a plurality of dark light sample pictures obtained in the dark light environment and non-dark light sample pictures corresponding to the dark light sample pictures;
in the embodiment of the present application, a dim light restoration model needs to be trained in advance by using each sample picture in a sample database, where the sample database includes a plurality of dim light sample pictures obtained in the dim light environment (i.e., an environment with an environmental brightness less than a preset brightness) and non-dim light sample pictures corresponding to each dim light sample picture. As shown in fig. 3, the sample database 301 includes 3 sample groups 3011, 3012, and 3013, each sample group is composed of a dark sample picture and a corresponding non-dark sample picture, in fig. 3, the sample group 3011 is composed of a dark sample picture a and a corresponding non-dark sample picture a1, the sample group 3012 is composed of a dark sample picture B and a corresponding non-dark sample picture B1, and the sample group 3013 is composed of a dark sample picture C and a corresponding non-dark sample picture C1.
In the embodiment of the present application, each dim light sample picture in the sample database is a picture acquired in the dim light environment (i.e., an environment whose ambient brightness is less than the preset brightness), the corresponding non-dim light sample picture is a picture acquired in a non-dim light environment (i.e., an environment whose ambient brightness is not less than the preset brightness), and the dim light sample picture and its corresponding non-dim light sample picture have the same picture content. To build such a pair for the sample database, a dim light sample picture can be taken in a dim light environment, for example at night when the external environment is dark, and the corresponding non-dim light sample picture can then be taken at the same shooting place and shooting angle in a non-dim light environment, for example in bright daylight.
Any one of the dim light sample pictures and the corresponding non-dim light sample picture are selected from the sample database as the training pictures of the dim light restoration model, and as shown in fig. 3, the dim light restoration model is trained by using the dim light sample picture a and the non-dim light sample picture a 1.
In step S202, the dim light sample picture is input into an initial dim light restoration model, so that the initial dim light restoration model increases the picture brightness of the dim light sample picture, thereby obtaining a generated picture output by the initial dim light restoration model;
in this embodiment, an initial dim light reduction model is first established, and the dim light sample picture selected in step S201 is input into the initial dim light reduction model, so that the initial dim light reduction model outputs a generated picture. As shown in fig. 3, the dim light sample picture a is input into the initial dim light reduction model 302, and a generated picture output by the initial dim light reduction model is obtained.
In step S203, performing similarity matching on the generated picture and the non-dim light sample picture, and determining whether the similarity between the generated picture and the non-dim light sample picture is greater than a preset similarity threshold;
in this embodiment, image features, such as texture features, color features, luminance features, and/or edge features, of the generated image obtained in step S202 and the non-dim-light sample image selected in step S201 may be respectively extracted, similarity matching may be performed on the image features of the generated image and the non-dim-light sample image, and whether the similarity of the generated image and the non-dim-light sample image is greater than a preset similarity threshold may be determined.
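A hedged sketch of such feature-based similarity matching follows; the concrete features (per-channel colour histograms plus mean brightness) and the cosine metric are assumed design choices, since the application leaves them open:

```python
import numpy as np

def feature_similarity(generated, non_dim_sample):
    """Extract simple picture features and match them. Pictures are
    assumed to be HxWx3 arrays with values in [0, 255]; texture and
    edge features, also mentioned above, are omitted for brevity."""
    def features(img):
        img = np.asarray(img, dtype=np.float32)
        hists = [np.histogram(img[..., c], bins=32, range=(0, 255))[0]
                 for c in range(img.shape[-1])]
        return np.concatenate(hists + [np.array([img.mean()])])
    f1, f2 = features(generated), features(non_dim_sample)
    # Cosine similarity, in [0, 1] for these non-negative feature vectors.
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-8))
```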
In step S204, continuously adjusting each parameter of the current dim light reduction model until the similarity between the generated picture output by the current dim light reduction model and the non-dim light sample picture is greater than the similarity threshold, and then using the current dim light reduction model as the trained dim light reduction model.
In general, the similarity between the generated picture output by the initial dim light reduction model and the non-dim light sample picture is small, so the parameters of the initial dim light reduction model need to be adjusted. The dim light sample picture selected in step S201 is then input into the parameter-adjusted dim light reduction model again, and the generated picture it outputs is again matched for similarity against the non-dim light sample picture selected in step S201. The parameters of the current dim light reduction model are adjusted continuously in this way until the similarity between the generated picture output by the current model and the non-dim light sample picture is greater than the preset similarity threshold, at which point the current dim light reduction model is taken as the trained dim light reduction model.
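One way to realise this adjust-and-recheck loop is gradient descent on a pixel-wise loss, sketched below; the optimizer, the L1 loss and the stopping threshold are assumptions, since the application only requires that the parameters be adjusted until the similarity exceeds the threshold:

```python
import torch
import torch.nn.functional as F

def train_on_sample(model, dim_pic, non_dim_pic, similarity,
                    sim_threshold=0.95, lr=1e-4):
    """dim_pic / non_dim_pic: tensors for one sample pair (steps S201-S204).
    `similarity` is any scoring function, e.g. the feature-matching sketch
    above applied to detached numpy copies of the pictures."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    while True:
        generated = model(dim_pic)                         # step S202
        score = similarity(generated.detach().cpu().numpy(),
                           non_dim_pic.cpu().numpy())      # step S203
        if score > sim_threshold:
            return model                                   # step S204
        loss = F.l1_loss(generated, non_dim_pic)
        opt.zero_grad()
        loss.backward()
        opt.step()                                         # adjust parameters
```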
The above steps S201 to S204 provide a method for training a dim light reduction model, which trains the dim light reduction model by selecting any one of the dim light sample pictures and the corresponding non-dim light sample picture in the sample database. In addition, in the embodiment of the present application, a plurality of dim light sample pictures and corresponding non-dim light sample pictures may also be selected from the sample database to train the dim light restoration model. The following describes a training process for training the dim light restoration model by selecting a plurality of dim light sample pictures and corresponding non-dim light sample pictures in the sample database with reference to fig. 3:
as shown in fig. 3, the dark light sample picture a and the dark light sample picture B in the sample database 301, and the corresponding non-dark light sample picture a1 and the non-dark light sample picture B1 may be selected to train the dark light restoration model, first, the dark light sample picture a is input into the current dark light restoration model, and it is determined whether the similarity between the generated picture output by the current dark light restoration model and the non-dark light sample picture a1 is greater than a preset similarity threshold; secondly, inputting the dark light sample picture B into the current dark light restoration model, and judging whether the similarity between the generated picture output by the current dark light restoration model and the non-dark light sample picture B1 is greater than a preset similarity threshold value; then, counting the proportion of the dim light sample pictures with the similarity greater than a preset similarity threshold, and judging whether the proportion is greater than a preset proportion; and finally, continuously adjusting each parameter of the current dim light reduction model until the proportion of the dim light sample pictures with the similarity greater than a preset similarity threshold value is greater than a preset proportion. For example, assuming that after the dark light sample picture a is input into the current dark light reduction model, the similarity between the generated picture output by the dark light reduction model and the non-dark light sample picture a1 is greater than the preset similarity threshold, and after the dark light sample picture B is input into the current dark light reduction model, the similarity between the generated picture output by the dark light reduction model and the non-dark light sample picture B1 is not greater than the preset similarity threshold, the proportion of the dark light sample pictures with the similarities greater than the preset similarity threshold in this step is 50%, if the preset proportion is 80%, the parameters of the current dark light reduction model need to be adjusted, and the dark light sample picture a and the dark light sample picture B are input into the current dark light reduction model again, the proportion of the dark light sample pictures with the similarities greater than the preset similarity threshold is counted again, and whether the proportion is greater than the preset proportion is judged, and continuously adjusting each parameter of the current dim light reduction model until the counted proportion of the dim light sample pictures with the similarity greater than the preset similarity threshold value is greater than the preset proportion.
In the technical scheme provided by the first embodiment of the application, when the picture acquired by the terminal device was acquired in a dark environment, the preset neural network model is directly used to process the picture, whereas the traditional method performs a series of operations such as brightness adjustment, denoising, deblurring and picture enhancement on the picture after it is acquired in a dark light environment; the scheme of this embodiment is therefore much simpler.
Example two
Referring to fig. 4, a training process of another dim light reduction model provided in the second embodiment of the present application is described below, where the training process of the dim light reduction model in the second embodiment of the present application includes:
in step S401, any dark light sample picture and a non-dark light sample picture corresponding to the dark light sample picture are selected from a sample database, where the sample database includes a plurality of dark light sample pictures obtained in the dark light environment and non-dark light sample pictures corresponding to the dark light sample pictures;
in step S402, the dim light sample picture is input into an initial dim light restoration model, so that the initial dim light restoration model increases the picture brightness of the dim light sample picture, thereby obtaining a generated picture output by the initial dim light restoration model;
in the second embodiment of the present application, the steps S401 to S402 are the same as the steps S201 to S202 in the first embodiment, and specific reference may be made to the description of the first embodiment, which is not repeated herein.
In step S403, the generated image and the non-dim light sample image are input into a trained discrimination model, so that the trained discrimination model determines whether the generated image output by the initial dim light restoration model is correct;
in the second embodiment of the present application, the discriminant model may be trained in advance before the dim light reduction model is trained, so as to obtain a trained discriminant model, where the trained discriminant model is used to determine whether a generated picture output by the current dim light reduction model is correct.
In the second embodiment of the present application, the initial discrimination model can be trained by using the initial dim light reduction model and the sample database to obtain the trained discrimination model. Specifically, the training process of the discrimination model is shown in fig. 5. Suppose the sample database includes 3 sample picture groups: dim light sample picture A with its non-dim light sample picture A1, dim light sample picture B with B1, and dim light sample picture C with C1. First, any number of dim light sample pictures are selected from the sample database as input to the initial dim light restoration model to obtain the corresponding generated pictures; as shown in fig. 5, dim light sample pictures A and B are input into the initial dim light reduction model, yielding generated pictures A2 and B2. Secondly, each generated picture and its corresponding non-dim light sample picture are taken as a picture group whose label is set to "incorrect": the generated picture A2 and the non-dim light sample picture A1 form one such group, and the generated picture B2 and the non-dim light sample picture B1 form another. Thirdly, several non-dim light sample pictures are selected from the sample database, and two copies of the same non-dim light sample picture are taken as a picture group whose label is set to "correct": as shown in fig. 5, non-dim light sample pictures B1 and C1 are selected, two copies of B1 form one group labelled "correct", and two copies of C1 form another. Then, each labelled picture group is input into the initial discrimination model so that it judges whether the group is correct; the judgment results output by the initial discrimination model are compared with the labels of the groups, it is judged whether the discrimination accuracy reaches a preset accuracy threshold, and the parameters of the current discrimination model are adjusted continuously until its discrimination accuracy reaches the preset accuracy threshold.
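A minimal sketch of this labelled picture-group scheme follows. A (generated, non-dim) group is labelled "incorrect" (0.0) and a (non-dim, non-dim) group "correct" (1.0); channel-wise concatenation of the two pictures, a sigmoid-output discriminator and binary cross-entropy are assumed concrete choices, since the application only requires the discrimination accuracy to reach a preset threshold:

```python
import torch
import torch.nn.functional as F

def train_discriminator(disc, reduction_model, sample_pairs,
                        accuracy_target=0.9, lr=1e-4):
    """sample_pairs: iterable of (dim_pic, non_dim_pic) CHW tensors.
    `disc` is assumed to take a channel-concatenated picture group and
    output the probability that the group is 'correct'."""
    opt = torch.optim.Adam(disc.parameters(), lr=lr)
    while True:
        groups, labels = [], []
        for dim_pic, non_dim_pic in sample_pairs:
            with torch.no_grad():
                generated = reduction_model(dim_pic)
            groups.append(torch.cat([generated, non_dim_pic], dim=0))    # "incorrect"
            labels.append(0.0)
            groups.append(torch.cat([non_dim_pic, non_dim_pic], dim=0))  # "correct"
            labels.append(1.0)
        x, y = torch.stack(groups), torch.tensor(labels)
        pred = disc(x).squeeze(-1)
        accuracy = ((pred > 0.5).float() == y).float().mean().item()
        if accuracy >= accuracy_target:
            return disc                  # preset accuracy threshold reached
        loss = F.binary_cross_entropy(pred, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```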
The generated picture output by the current dim light reduction model and the non-dim light sample picture selected in step S401 are input into the trained discrimination model, so that the trained discrimination model judges the label of the picture group formed by the two. If the trained discrimination model judges the label of the group to be "correct", the generated picture output by the current dim light reduction model is considered correct; if it judges the label to be "incorrect", the generated picture is considered incorrect.
In step S404, continuously adjusting each parameter of the current dim light reduction model until the trained discrimination model determines that the generated picture output by the current dim light reduction model is correct, and then using the current dim light reduction model as the trained dim light reduction model;
In this embodiment, the generated picture output by the initial dim light reduction model and the non-dim light sample picture selected in step S401 are first input into the trained discrimination model, and the parameters of the current dim light reduction model are then adjusted continuously until the trained discrimination model judges that the generated picture output by the current model is correct. The generated picture output by the dim light reduction model trained against the trained discrimination model is thereby brought closer to the corresponding non-dim light sample picture.
In addition, in this embodiment of the application, the training process described in the above steps S401 to S404 may be subjected to loop iteration, so as to generate a dim light reduction model with better performance, as shown in fig. 6:
firstly, using the same training method as defined in steps S401 to S404, an initial discrimination model is trained by using the initial dim light reduction model and the sample database to generate a trained discrimination model, and the initial dim light reduction model is then trained by using the trained discrimination model to generate the trained dim light reduction model;
secondly, the initial dim light reduction model and the initial discrimination model are updated: the trained dim light reduction model obtained in the previous step (i.e., by the training method defined in steps S401 to S404) is taken as the initial dim light reduction model, the trained discrimination model obtained in the previous step is taken as the initial discrimination model, and the previous step is executed again to generate a trained discrimination model and a trained dim light reduction model anew;
and finally, the initial dim light reduction model and the initial discrimination model are updated continuously in this way until the number of loop iterations reaches a preset requirement; the trained dim light reduction model obtained in the last iteration is taken as the finally trained dim light reduction model and deployed in the terminal device.
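The fixed-count loop iteration just described, as a sketch; `train_discriminator` is the function sketched above, and `train_reduction_model` is an illustrative stand-in for the generator-training phase of steps S401 to S404 (both names are assumptions):

```python
def iterate_training(reduction_model, disc, sample_pairs, num_rounds=10):
    """Each round first trains the discriminator against the current
    reduction model, then the reduction model against the fresh
    discriminator; the round's outputs become the next round's
    initial models."""
    for _ in range(num_rounds):   # "until the number of iterations reaches a requirement"
        disc = train_discriminator(disc, reduction_model, sample_pairs)
        reduction_model = train_reduction_model(reduction_model, disc, sample_pairs)
    return reduction_model        # the finally trained dim light reduction model
```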
In addition, in this embodiment of the application, the loop iteration of the training process described in steps S401 to S404, used to generate a dim light reduction model with better performance, may also proceed as follows:
firstly, using the same training method as defined in steps S401 to S404, an initial discrimination model is trained by using the initial dim light reduction model and the sample database to generate a trained discrimination model, and the initial dim light reduction model is then trained by using the trained discrimination model to generate the trained dim light reduction model;
secondly, it is judged whether the trained dim light reduction model generated in the previous step meets the requirements (in the embodiment of the application, each parameter of the currently generated trained dim light reduction model can be compared with the corresponding parameter of the current initial dim light reduction model to judge this; see the description below). If not, the initial dim light reduction model and the initial discrimination model are updated: the trained dim light reduction model obtained in the previous step is taken as the initial dim light reduction model, the trained discrimination model obtained in the previous step is taken as the initial discrimination model, and the previous step is executed again to generate a trained discrimination model and a trained dim light reduction model anew;
and finally, whether the currently generated trained dim light reduction model meets the requirements is judged continuously; whenever it does not, the initial dim light reduction model and the initial discrimination model are updated and a trained dim light reduction model and a trained discrimination model are generated again by the training method defined in steps S401 to S404, until the generated trained dim light reduction model meets the requirements. The trained dim light reduction model obtained in the last iteration is taken as the finally trained dim light reduction model and deployed in the terminal device.
Judging whether the generated trained dim light reduction model meets the requirements includes:
comparing each parameter in the currently generated trained dim light reduction model with the corresponding parameter in the current initial dim light reduction model to obtain an adjustment percentage of each parameter, where the adjustment percentage is the ratio of the parameter adjustment amount to the corresponding parameter value in the current initial dim light reduction model. For example, suppose the current initial dim light reduction model includes two parameters, a = 1 and b = 2, and in the currently generated trained dim light reduction model a = 1.2 and b = 1.8; then the adjustment percentage of parameter a is (1.2 - 1)/1 = 0.2, and the adjustment percentage of parameter b is (1.8 - 2)/2 = -0.1;
judging whether the absolute value of the adjustment percentage of each parameter is smaller than a preset adjustment threshold value;
if so, the trained dim light reduction model is considered to meet the requirements;
otherwise, the trained dim light reduction model is considered not to meet the requirements.
That is, if the parameters of the currently generated trained dim light reduction model and the parameters of the currently initial dim light reduction model are relatively close to each other, the training of the dim light reduction model is considered to be completed.
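A compact sketch of this parameter-comparison criterion (the threshold value is an assumption, and non-zero initial parameter values are assumed, as in the a = 1, b = 2 example):

```python
def meets_requirements(trained_params, initial_params, adjust_threshold=0.15):
    """True when every parameter's adjustment percentage, (new - old) / old,
    has an absolute value below the preset adjustment threshold."""
    return all(abs((new - old) / old) < adjust_threshold
               for new, old in zip(trained_params, initial_params))

# Worked example from the text: a goes 1 -> 1.2 and b goes 2 -> 1.8, so the
# adjustment percentages are 0.2 and -0.1; with a threshold of 0.15 the
# model does not yet meet the requirements, because |0.2| >= 0.15.
```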
In the training method provided in the first embodiment of the present application, after each adjustment of the parameters of the dim light reduction model, the picture features of the generated picture output by the adjusted model must be extracted and matched for similarity against the picture features of the non-dim light sample picture, so the training process takes a long time. The training method provided by the second embodiment trains the discrimination model in advance and thereby avoids extracting picture features and computing similarity matches after every parameter adjustment of the dim light reduction model; compared with the model training method of the first embodiment, it can therefore accelerate the training process of the dim light reduction model.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
EXAMPLE III
In the third embodiment of the present application, a picture processing apparatus is provided. For convenience of description, only the parts related to the present application are shown. As shown in fig. 7, the picture processing apparatus 700 includes:
a picture obtaining module 701, configured to obtain a picture to be processed;
a dark light detection module 702, configured to detect whether the to-be-processed picture is a picture obtained in a dark light environment;
the dim light restoration module 703 is configured to, if the to-be-processed picture is a picture obtained in a dim light environment, increase the picture brightness of the to-be-processed picture by using a trained dim light restoration model, where the dim light restoration model is a pre-trained neural network model for increasing the picture brightness of the picture obtained in the dim light environment.
Optionally, the picture to be processed is a picture currently acquired by the camera;
accordingly, the dim light detection module 702 includes:
a sensitivity unit, configured to acquire the current sensitivity of the camera and judge whether the current sensitivity of the camera is greater than a preset sensitivity;
a first sensitivity unit, configured to determine that the picture to be processed is a picture acquired in a dark light environment if the current sensitivity of the camera is greater than the preset sensitivity;
a second sensitivity unit, configured to determine that the picture to be processed is not a picture acquired in a dark light environment if the current sensitivity of the camera is less than or equal to the preset sensitivity.
Optionally, the picture to be processed is a picture currently acquired by the camera;
accordingly, the dim light detection module 702 includes:
an exposure duration unit, configured to obtain a current exposure duration of the camera, and determine whether the current exposure duration of the camera is greater than a preset exposure duration;
a first duration unit, configured to determine that the to-be-processed picture is a picture obtained in a dark light environment if the current exposure duration of the camera is greater than the preset exposure duration;
and the second duration unit is used for confirming that the picture to be processed is not the picture acquired in the dark light environment if the current exposure duration of the camera is less than or equal to the preset exposure duration.
Optionally, the dim light restoration model is trained by a training module, where the training module includes:
the system comprises a sample picture selecting unit, a dark light processing unit and a dark light processing unit, wherein the sample picture selecting unit is used for selecting any dark light sample picture and a non-dark light sample picture corresponding to the dark light sample picture from a sample database, and the sample database comprises a plurality of dark light sample pictures acquired under the dark light environment and the non-dark light sample pictures corresponding to the dark light sample pictures;
a generated picture obtaining unit, configured to input the dim light sample picture to an initial dim light restoration model, so that the initial dim light restoration model improves the picture brightness of the dim light sample picture, and thus a generated picture output by the initial dim light restoration model is obtained;
a judging unit, configured to input the generated picture and the non-dim-light sample picture into a trained judging model, so that the trained judging model judges whether the generated picture output by the initial dim-light restoration model is correct;
and the parameter adjusting unit is used for continuously adjusting each parameter of the current dim light reduction model until the trained discrimination model judges that the generated picture output by the current dim light reduction model is correct.
Optionally, the training module further includes:
and a discriminant model training unit for training the initial discriminant model by using the initial dim light restoration model and the sample database to obtain the trained discriminant model.
Optionally, the training module further includes:
the requirement judging unit is used for judging whether the trained dim light reduction model meets requirements or not;
and an updating unit, configured to, if the trained dim light restoration model does not meet the requirement, use the trained dim light restoration model as an initial dim light restoration model, and use the trained discrimination model as an initial discrimination model.
Optionally, the requirement determining unit includes:
an adjustment obtaining subunit, configured to compare each parameter in the trained dim light reduction model with each parameter in the initial dim light reduction model, and obtain an adjustment percentage of each parameter, where the adjustment percentage is a ratio of a parameter adjustment amount to a corresponding parameter value in the initial dim light reduction model;
the adjustment judging subunit is used for judging whether the absolute value of the adjustment percentage of each parameter is smaller than a preset adjustment threshold value;
the first requirement subunit is used for considering that the trained dim light reduction model meets requirements if the absolute value of the adjustment percentage of each parameter is smaller than a preset adjustment threshold;
and a second requirement subunit, configured to consider that the trained dim light reduction model does not meet the requirements if the absolute values of the adjustment percentages of the parameters are not all smaller than the preset adjustment threshold.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Example four
Fig. 8 is a schematic diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 8, the terminal device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and operable on the processor 80. The processor 80 implements the steps of the various method embodiments described above, such as steps S101 to S103 shown in fig. 1, when executing the computer program 82. Alternatively, the processor 80 implements the functions of the modules/units in the device embodiments, for example, the functions of the modules 701 to 703 shown in fig. 7, when executing the computer program 82.
Illustratively, the computer program 82 may be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into a picture acquiring module, a dark light detecting module and a dark light restoring module, and the functions of the modules are as follows:
acquiring a picture to be processed;
detecting whether the picture to be processed is a picture obtained in a dark light environment;
if the picture to be processed is a picture obtained in a dark light environment, then:
and improving the picture brightness of the picture to be processed by using the trained dim light reduction model, wherein the dim light reduction model is a pre-trained neural network model for improving the picture brightness of the picture acquired in a dim light environment.
The terminal device 8 may be a smart phone, a tablet computer, a learning machine, an intelligent wearable device, or other computing device. The terminal device may include, but is not limited to, a processor 80 and a memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of a terminal device 8 and does not constitute a limitation of terminal device 8 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 80 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or memory of the terminal device 8. The memory 81 may alternatively be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card) or the like equipped on the terminal device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used to store the computer program and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments described above may be implemented by a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor it implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, and some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the corresponding technical solutions to depart in essence from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (9)

1. A picture processing method, comprising:
acquiring a picture to be processed, wherein the picture to be processed is a picture captured by a camera after a terminal device starts the camera or a video camera;
detecting whether the picture to be processed is a picture obtained in a dim light environment; and
if the picture to be processed is a picture obtained in a dim light environment:
improving the picture brightness of the picture to be processed by using a trained dim light reduction model, wherein the dim light reduction model is a pre-trained neural network model for improving the picture brightness of pictures obtained in a dim light environment, and is deployed in the terminal device before the terminal device leaves the factory;
wherein the picture to be processed is a picture currently acquired by the camera;
and correspondingly, the detecting whether the picture to be processed is a picture obtained in a dim light environment comprises:
acquiring the current sensitivity of the camera, and judging whether the current sensitivity is greater than a preset sensitivity;
if the current sensitivity of the camera is greater than the preset sensitivity, determining that the picture to be processed is a picture obtained in a dim light environment; and
if the current sensitivity of the camera is less than or equal to the preset sensitivity, determining that the picture to be processed is not a picture obtained in a dim light environment.
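By way of illustration only (not part of the claims), the sensitivity test of claim 1 can be sketched in a few lines of Python; the preset value and the function name are assumptions, as the patent does not specify them:

```python
# Hypothetical preset sensitivity; the patent leaves the actual threshold unspecified.
PRESET_SENSITIVITY = 800

def is_dim_light_by_sensitivity(current_iso: int,
                                preset_iso: int = PRESET_SENSITIVITY) -> bool:
    """Claim 1's check: the picture counts as obtained in a dim light
    environment only when the camera's current sensitivity exceeds
    the preset sensitivity."""
    return current_iso > preset_iso
```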
2. The picture processing method according to claim 1, wherein the picture to be processed is a picture currently acquired by a camera;
and correspondingly, the detecting whether the picture to be processed is a picture obtained in a dim light environment comprises:
acquiring the current exposure time of the camera, and judging whether the current exposure time is greater than a preset exposure time;
if the current exposure time of the camera is greater than the preset exposure time, determining that the picture to be processed is a picture obtained in a dim light environment; and
if the current exposure time of the camera is less than or equal to the preset exposure time, determining that the picture to be processed is not a picture obtained in a dim light environment.
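Claim 2's alternative test swaps sensitivity for exposure time. A minimal sketch under the same caveats; the preset exposure time below is an assumed placeholder:

```python
# Hypothetical preset exposure time in seconds (1/15 s is an assumption).
PRESET_EXPOSURE_S = 1.0 / 15.0

def is_dim_light_by_exposure(current_exposure_s: float,
                             preset_exposure_s: float = PRESET_EXPOSURE_S) -> bool:
    """Claim 2's check: a dim light environment is inferred when the camera's
    current exposure time exceeds the preset exposure time."""
    return current_exposure_s > preset_exposure_s
```

Under these assumed presets, a 1/8 s exposure would be classified as dim light, while a 1/60 s exposure would not.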
3. The picture processing method according to claim 1 or 2, wherein the training process of the dim light reduction model comprises:
selecting any dim light sample picture, together with its corresponding non-dim light sample picture, from a sample database, wherein the sample database comprises a plurality of dim light sample pictures acquired in a dim light environment and the non-dim light sample pictures corresponding to them;
inputting the dim light sample picture into an initial dim light reduction model so that the initial dim light reduction model improves the picture brightness of the dim light sample picture, to obtain a generated picture output by the initial dim light reduction model;
inputting the generated picture and the non-dim light sample picture into a trained discrimination model so that the trained discrimination model judges whether the generated picture output by the initial dim light reduction model is correct; and
continuously adjusting each parameter of the current dim light reduction model until the trained discrimination model judges that the generated picture output by the current dim light reduction model is correct, and taking the current dim light reduction model as the trained dim light reduction model.
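Claim 3 describes a generator-versus-discriminator loop in the style of a GAN. A compressed sketch, assuming a PyTorch implementation; the optimizer, learning rate, loss, and the shape of the data loader are illustrative assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn

def train_dim_light_reduction(reduction_model: nn.Module,
                              discriminator: nn.Module,
                              loader,              # yields (dim_pic, non_dim_pic) pairs
                              steps: int = 1000) -> nn.Module:
    """One reading of claim 3: feed dim light samples through the reduction
    model, let the trained discrimination model judge the brightened output,
    and adjust the reduction model's parameters until the output is judged
    correct. The discriminator is assumed to end in a sigmoid."""
    bce = nn.BCELoss()
    optimizer = torch.optim.Adam(reduction_model.parameters(), lr=1e-4)
    for _, (dim_pic, non_dim_pic) in zip(range(steps), loader):
        generated = reduction_model(dim_pic)   # brightened picture
        verdict = discriminator(generated)     # judge's acceptance probability
        # Push the generator toward outputs the trained judge labels "correct" (1).
        # (Claim 3 also feeds the paired non-dim light sample to the judge;
        # a conditional discriminator could take both inputs.)
        loss = bce(verdict, torch.ones_like(verdict))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return reduction_model
```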
4. The method according to claim 3, wherein before the step of inputting the generated picture and the non-dim light sample picture into the trained discrimination model, the training process of the dim light reduction model further comprises:
training an initial discrimination model by using the initial dim light reduction model and the sample database, to obtain the trained discrimination model.
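Claim 4's preliminary step, sketched under the same PyTorch assumption: the judge learns to score the sample database's real non-dim light pictures as correct and the initial reduction model's outputs as not. All names and hyperparameters are assumptions:

```python
import torch
import torch.nn as nn

def pretrain_discriminator(discriminator: nn.Module,
                           initial_reduction_model: nn.Module,
                           loader,             # yields (dim_pic, non_dim_pic) pairs
                           steps: int = 1000) -> nn.Module:
    """Claim 4: train an initial discrimination model using the initial
    dim light reduction model and the sample database."""
    bce = nn.BCELoss()
    optimizer = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    for _, (dim_pic, non_dim_pic) in zip(range(steps), loader):
        with torch.no_grad():                  # the generator is fixed in this phase
            fake = initial_reduction_model(dim_pic)
        real_score = discriminator(non_dim_pic)
        fake_score = discriminator(fake)
        # Real non-dim light samples are "correct" (1); generated ones are not (0).
        loss = (bce(real_score, torch.ones_like(real_score)) +
                bce(fake_score, torch.zeros_like(fake_score)))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return discriminator
```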
5. The method according to claim 4, wherein after the step of taking the current dim light reduction model as the trained dim light reduction model, the training process of the dim light reduction model further comprises:
judging whether the trained dim light reduction model meets the requirements; and
if the trained dim light reduction model does not meet the requirements:
taking the trained dim light reduction model as the initial dim light reduction model, taking the trained discrimination model as the initial discrimination model, and returning to the step of selecting any dim light sample picture and its corresponding non-dim light sample picture from the sample database, together with the subsequent steps.
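Claim 5 closes the loop: the two training phases alternate, each round seeded with the previous round's models, until the reduction model meets the requirements. A sketch reusing the helpers above; `meets_requirements` is the convergence test sketched after claim 6:

```python
import copy

def train_until_satisfactory(reduction_model, discriminator, loader):
    """Claim 5's iteration: alternate discriminator and generator training,
    reusing the trained models as the next round's initial models, until
    the convergence test of claim 6 passes."""
    while True:
        snapshot = copy.deepcopy(reduction_model)   # this round's "initial" model
        discriminator = pretrain_discriminator(discriminator, reduction_model, loader)
        reduction_model = train_dim_light_reduction(reduction_model, discriminator, loader)
        if meets_requirements(reduction_model, snapshot):
            return reduction_model
```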
6. The method according to claim 5, wherein the judging whether the trained dim light reduction model meets the requirements comprises:
comparing each parameter in the trained dim light reduction model with the corresponding parameter in the initial dim light reduction model to obtain an adjustment percentage for each parameter, wherein the adjustment percentage is the ratio of the parameter's adjustment amount to the corresponding parameter value in the initial dim light reduction model;
judging whether the absolute value of each parameter's adjustment percentage is smaller than a preset adjustment threshold;
if so, the trained dim light reduction model is considered to meet the requirements;
otherwise, the trained dim light reduction model is considered not to meet the requirements.
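Claim 6's test compares each trained parameter against its initial value. A sketch, again assuming PyTorch tensors; the threshold value and the division-by-zero guard are assumptions:

```python
import torch

def meets_requirements(trained_model, initial_model,
                       threshold: float = 0.01) -> bool:
    """Claim 6: for every parameter, the adjustment percentage is the change
    in the parameter divided by its value in the initial model; the model
    meets the requirements only if every |percentage| stays below the preset
    adjustment threshold (0.01 here is an illustrative value)."""
    for p_new, p_old in zip(trained_model.parameters(),
                            initial_model.parameters()):
        # clamp guards against parameters that are exactly zero in the initial model
        pct = (p_new - p_old).abs() / p_old.abs().clamp(min=1e-12)
        if (pct >= threshold).any():
            return False
    return True
```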
7. A picture processing apparatus, comprising:
a picture acquisition module, configured to acquire a picture to be processed, wherein the picture to be processed is a picture captured by a camera after a terminal device starts the camera or a video camera;
a dim light detection module, configured to detect whether the picture to be processed is a picture obtained in a dim light environment; and
a dim light reduction module, configured to improve the picture brightness of the picture to be processed by using a trained dim light reduction model if the picture to be processed is a picture obtained in a dim light environment, wherein the dim light reduction model is a pre-trained neural network model for improving the picture brightness of pictures obtained in a dim light environment, and is deployed in the terminal device before the terminal device leaves the factory;
wherein the picture to be processed is a picture currently acquired by the camera;
and correspondingly, the detecting whether the picture to be processed is a picture obtained in a dim light environment comprises:
acquiring the current sensitivity of the camera, and judging whether the current sensitivity is greater than a preset sensitivity;
if the current sensitivity of the camera is greater than the preset sensitivity, determining that the picture to be processed is a picture obtained in a dim light environment; and
if the current sensitivity of the camera is less than or equal to the preset sensitivity, determining that the picture to be processed is not a picture obtained in a dim light environment.
8. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201810866578.3A 2018-08-01 2018-08-01 Picture processing method, picture processing device and terminal equipment Active CN109118447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810866578.3A CN109118447B (en) 2018-08-01 2018-08-01 Picture processing method, picture processing device and terminal equipment

Publications (2)

Publication Number Publication Date
CN109118447A (en) 2019-01-01
CN109118447B (en) 2021-04-23

Family

ID=64863923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810866578.3A Active CN109118447B (en) 2018-08-01 2018-08-01 Picture processing method, picture processing device and terminal equipment

Country Status (1)

Country Link
CN (1) CN109118447B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111953888B (en) * 2019-05-16 2021-12-24 武汉Tcl集团工业研究院有限公司 Dim light imaging method and device, computer readable storage medium and terminal equipment
CN112087556B (en) * 2019-06-12 2023-04-07 武汉Tcl集团工业研究院有限公司 Dark light imaging method and device, readable storage medium and terminal equipment
CN110458763A (en) * 2019-07-08 2019-11-15 深圳中兴网信科技有限公司 Restoring method, system, the medium of night color image based on deep learning
CN112532892B (en) * 2019-09-19 2022-04-12 华为技术有限公司 Image processing method and electronic device
CN110677557B (en) * 2019-10-28 2022-04-22 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN112911131B (en) * 2019-12-03 2022-11-25 杭州海康威视数字技术股份有限公司 Image quality adjusting method and device
CN115115531A (en) * 2022-01-14 2022-09-27 长城汽车股份有限公司 Image denoising method and device, vehicle and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1311404C (en) * 2002-09-20 2007-04-18 精工爱普生株式会社 Image backlighting processing using image forming historical information
JP2004343610A (en) * 2003-05-19 2004-12-02 Seiko Epson Corp Image processing for dark background image
CN107635103B (en) * 2015-08-11 2020-02-14 Oppo广东移动通信有限公司 Image processing method, mobile terminal and medium product
CN106454077B (en) * 2016-09-26 2021-02-23 宇龙计算机通信科技(深圳)有限公司 Shooting method, shooting device and terminal
CN107205125B (en) * 2017-06-30 2019-07-09 Oppo广东移动通信有限公司 A kind of image processing method, device, terminal and computer readable storage medium
CN107463052B (en) * 2017-08-30 2020-08-11 北京小米移动软件有限公司 Shooting exposure method and device
CN107491771A (en) * 2017-09-21 2017-12-19 百度在线网络技术(北京)有限公司 Method for detecting human face and device

Similar Documents

Publication Publication Date Title
CN109118447B (en) Picture processing method, picture processing device and terminal equipment
CN108921806B (en) Image processing method, image processing device and terminal equipment
CN111654594B (en) Image capturing method, image capturing apparatus, mobile terminal, and storage medium
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
CN108319592B (en) Translation method and device and intelligent terminal
CN108765340B (en) Blurred image processing method and device and terminal equipment
CN108737739B (en) Preview picture acquisition method, preview picture acquisition device and electronic equipment
WO2022142009A1 (en) Blurred image correction method and apparatus, computer device, and storage medium
CN108961157B (en) Picture processing method, picture processing device and terminal equipment
CN108961267B (en) Picture processing method, picture processing device and terminal equipment
CN110335216B (en) Image processing method, image processing apparatus, terminal device, and readable storage medium
CN108961183B (en) Image processing method, terminal device and computer-readable storage medium
CN110119733B (en) Page identification method and device, terminal equipment and computer readable storage medium
CN109215037B (en) Target image segmentation method and device and terminal equipment
CN110457963B (en) Display control method, display control device, mobile terminal and computer-readable storage medium
CN110266994B (en) Video call method, video call device and terminal
CN109359582B (en) Information searching method, information searching device and mobile terminal
CN110618852B (en) View processing method, view processing device and terminal equipment
CN108985215B (en) Picture processing method, picture processing device and terminal equipment
CN107679222B (en) Picture processing method, mobile terminal and computer readable storage medium
CN112991151B (en) Image processing method, image generation method, apparatus, device, and medium
CN107360361B (en) Method and device for shooting people in backlight mode
CN108932704B (en) Picture processing method, picture processing device and terminal equipment
CN109492249B (en) Rapid generation method and device of design drawing and terminal equipment
CN108062405B (en) Picture classification method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant