CN112488933B - Video detail enhancement method and device, mobile terminal and storage medium - Google Patents

Publication number
CN112488933B
Authority
CN
China
Prior art keywords
video
parameter
parameters
detail enhancement
enhancement processing
Prior art date
Legal status
Active
Application number
CN202011348089.2A
Other languages
Chinese (zh)
Other versions
CN112488933A (en)
Inventor
杨敏
Current Assignee
You Peninsula Beijing Information Technology Co ltd
Original Assignee
You Peninsula Beijing Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by You Peninsula Beijing Information Technology Co ltd
Priority to CN202011348089.2A
Publication of CN112488933A
Priority to PCT/CN2021/129369 (WO2022111269A1)
Application granted
Publication of CN112488933B
Status: Active

Classifications

    • G06T5/77 Retouching; Inpainting; Scratch removal (under G06T5/00 Image enhancement or restoration)
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/90 Determination of colour characteristics (under G06T7/00 Image analysis)
    • H04N23/80 Camera processing pipelines; Components thereof
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the invention provide a video detail enhancement method and device, a mobile terminal, and a storage medium. The method comprises: collecting video data; in the process of collecting the video data, collecting parameters related to detail enhancement processing from the environment as environment parameters; collecting parameters related to detail enhancement processing from the video data as video parameters; and determining, according to the environment parameters and the video parameters, the state of executing detail enhancement processing on the video data. When the environment of the mobile terminal or the object it is shooting is not suited to detail enhancement processing, the processing is prohibited, which reduces the frequency of detail enhancement processing, saves the computing power it would consume, and reduces the occupation of resources such as the CPU (Central Processing Unit) and memory, reserving more computing power to guarantee the normal execution of business operations. This ultimately improves both the flexibility and the robustness of the detail enhancement processing.

Description

Video detail enhancement method and device, mobile terminal and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computer vision, in particular to a video detail enhancement method, a video detail enhancement device, a mobile terminal and a storage medium.
Background
With the rapid development of the mobile internet and mobile terminals, video data on mobile terminals has become a common information carrier for many business operations, such as live streaming and video calls. Video data contains a large amount of information about objects and is one of the ways people obtain original information about the outside world.
The sharpness of video data is one index by which video quality is evaluated; business operations therefore seek to improve it, and most sharpness-improvement methods are detail enhancement processing, which is usually performed on video data after it has been collected.
However, in some cases detail enhancement processing can introduce negative effects such as distortion, and performing it continuously consumes considerable computing power. Given the limited performance of a mobile terminal, the robustness of the detail enhancement processing is therefore low.
Disclosure of Invention
The embodiments of the present invention provide a video detail enhancement method and device, a mobile terminal, and a storage medium, to solve the problem of how to mitigate negative effects and improve robustness when performing detail enhancement processing on video data under limited-performance conditions.
In a first aspect, an embodiment of the present invention provides a method for enhancing details of a video, which is applied to a mobile terminal, where the method includes:
collecting video data;
In the process of collecting the video data, collecting parameters related to detail enhancement processing from the environment as environment parameters;
Collecting parameters related to detail enhancement processing from the video data as video parameters;
and determining the state of executing the detail enhancement processing on the video data according to the environment parameter and the video parameter.
In a second aspect, an embodiment of the present invention further provides a device for enhancing details of a video, where the device is applied to a mobile terminal, and the device includes:
the video data acquisition module is used for acquiring video data;
The environment parameter acquisition module is used for acquiring parameters related to detail enhancement processing from the environment as environment parameters in the process of acquiring the video data;
The video parameter acquisition module is used for acquiring parameters related to detail enhancement processing from the video data as video parameters;
And the enhancement state determining module is used for determining the state of executing the detail enhancement processing on the video data according to the environment parameter and the video parameter.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes:
One or more processors;
A memory for storing one or more programs,
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of detail enhancement of video as described in the first aspect.
In a fourth aspect, embodiments of the present invention further provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for enhancing details of a video as described in the first aspect.
In this embodiment, video data is collected; in the process of collecting the video data, parameters related to detail enhancement processing are collected from the environment as environment parameters and from the video data as video parameters; and the state of executing detail enhancement processing on the video data is determined according to the environment parameters and the video parameters. On the one hand, when the environment of the mobile terminal and the object it is shooting are suited to detail enhancement processing, normal execution of the processing is guaranteed, and the quality of the video data is thereby guaranteed. On the other hand, when the environment of the mobile terminal or the object it is shooting is not suited to detail enhancement processing, execution of the processing is prohibited, which reduces the frequency of detail enhancement processing, saves the computing power it would consume, and reduces the occupation of resources such as the CPU and memory, reserving more computing power to guarantee the normal execution of business operations. This ultimately improves both the flexibility and the robustness of the detail enhancement processing.
Drawings
Fig. 1 is a flowchart of a video detail enhancement method according to the first embodiment of the present invention;
Fig. 2 is a flowchart of a business operation according to the first embodiment of the present invention;
Fig. 3 is a frequency comparison chart of video data and environment parameters according to the first embodiment of the present invention;
Fig. 4 is a flowchart of a video detail enhancement method according to the second embodiment of the present invention;
Fig. 5 is a comparison example diagram of detail enhancement processing according to the second embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a video detail enhancement device according to the third embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a mobile terminal according to the fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein merely illustrate the invention and do not limit it. It should further be noted that, for convenience of description, only some rather than all of the structures related to the present invention are shown in the drawings.
Detail enhancement processing for image data has received attention throughout the development of computer vision, evolving from early detail enhancement algorithms based on single-frame image data to today's detail enhancement algorithms based on video data, made possible by the rapid growth in the computing power of various platforms.
Along this timeline, these approaches can be grouped into template-based detail enhancement, filtering-based detail enhancement, and deep-learning-based detail enhancement.
With the rapid development of live streaming, short video, and other business operations, the quality of video data has become a problem that cannot be ignored, and improving the sharpness of the video data collected by mobile terminals has become an important part of improving video quality.
Compared with traditional low-light enhancement of single-frame image data or video data, detail enhancement processing for mobile terminals has stricter requirements on computational complexity, so balancing effect against performance becomes a new difficulty. Most existing detail enhancement algorithms still target PCs (personal computers) or servers equipped with GPUs (Graphics Processing Units); their computational cost is large, and they generally cannot be executed by mobile terminals, whose computing resources are very limited.
This embodiment provides an algorithm that balances performance and effect, applying detail enhancement processing to video collection on a mobile terminal. It retains the effect of the detail enhancement processing while adapting to complex and varied environments, and filters out the artifacts that are easily produced, that is, the visible traces, regions, and flaws introduced during image processing that let people see the image has been artificially processed, thereby improving the quality of the video data.
Example 1
Fig. 1 is a flowchart of a video detail enhancement method according to the first embodiment of the present invention. The method is applicable to deciding whether to execute detail enhancement processing according to the environment of the mobile terminal and the characteristics of the video data itself. It may be executed by a video detail enhancement device, which may be implemented in software and/or hardware and configured in a mobile terminal, for example a mobile phone, a tablet computer, or a smart wearable device (such as a smart watch or smart glasses). The method specifically includes the following steps:
Step 101, collecting video data.
In this embodiment, the mobile terminal is configured with one or more cameras that can be used for photographing and video recording. In general, a camera may be disposed on the back of the mobile terminal (a rear camera) or on the front (a front camera); the embodiment of the present invention is not limited in this respect.
In addition, the operating system of the mobile terminal includes Android (Android), iOS, windows, etc., and may support the running of various applications, such as shopping applications, instant messaging applications, live broadcast applications, video conferencing applications, short video applications, etc.
As shown in Fig. 2, in S201 these applications perform business operations, and in S202 the camera is called to collect video data so that the relevant business operations can be carried out. For example, in the Android system MediaRecorder may be called to collect video data; in the iOS system, an AVCaptureFileOutputRecordingDelegate under AVFoundation may be called to collect video data; and so on.
It should be noted that the business operations carried by the video data collected by these applications differ with the service scenario.
In general, the video data collected by these applications can be used to carry business operations with real-time requirements.
For example, in a shopping business scenario, video data collected by a shopping application may be used to carry business operations for selling goods, i.e., a host user performs narration, presentation, etc. of goods, and provides links for shopping in the video data.
For another example, in the service scenario of instant messaging, the video data collected by the instant messaging tool may be used to carry a service operation of a video call, that is, a session performed by a communicating multiparty user.
For another example, in a live service scenario, video data collected by a live application may be used to carry live service operations, i.e., a host user performs a performance, a game comment, etc.
For another example, in a business scenario of a conference, video data collected by a video conference application may be used to carry business operations of the conference, i.e., multiparty users participating in the conference speak sequentially as a moderator, etc.
Of course, the video data collected by these applications may also be subjected to service operations with low real-time requirements, for example, the short video application collects video data as short video, etc., which is not limited in this embodiment.
Step 102, in the process of collecting video data, collecting parameters related to detail enhancement processing from the environment as environment parameters.
As shown in Fig. 2, while the mobile terminal collects video data, in S203 parameters related to detail enhancement processing are collected from the environment in which the mobile terminal is located, as environment parameters.
In one embodiment of the present invention, the environmental parameter may be a brightness value of light in an environment in which the mobile terminal is located.
In a dark environment, the noise level of the collected video data is higher. If detail enhancement processing is performed at this point, artifacts easily appear in areas where noise is concentrated, and the processing increases the occupation of resources such as the CPU (Central Processing Unit) and memory of the mobile terminal without any obvious practical benefit.
If a sensor for detecting light, such as an illumination intensity sensor, is configured in the mobile terminal and the API (Application Programming Interface) of the sensor is open, the sensor can be directly called to detect the brightness value of the light in the environment of the mobile terminal.
For example, in the Android system, SensorManager may be used to register a listener for the illumination intensity sensor (TYPE_LIGHT) and obtain the detected brightness value of the light.
If the mobile terminal is provided with a sensor for detecting light but its API is not open, then, by exploiting relationships such as the mobile terminal automatically adjusting the screen brightness according to the brightness of the light, or the camera automatically adjusting the white balance according to the brightness of the light when it is started, the brightness value of the screen can be detected and mapped to the brightness value of the light in the environment of the mobile terminal, or the brightness value of the light can be detected through the camera, and so on.
Of course, the above environmental parameters and the collection modes thereof are merely examples, and other environmental parameters and collection modes thereof, for example, CPU occupancy rate, memory occupancy rate, etc., may be set according to actual situations when implementing the present embodiment, which is not limited in this embodiment. In addition, other environmental parameters and collection modes thereof can be adopted by those skilled in the art according to actual needs, and the embodiment is not limited thereto.
In practical applications, the frame rate of the mobile terminal for collecting video data is generally different from the frame rate of the mobile terminal for collecting environmental parameters, and the frame rate of the mobile terminal for collecting environmental parameters is generally greater than the frame rate of the mobile terminal for collecting video data, i.e. a plurality of environmental parameters usually exist between every two frames of image data in the video data.
As shown in Fig. 3, the frame rate at which the mobile terminal collects video data is lower than the frame rate at which the sensor collects environment parameters; 7 environment parameters exist between the 1st frame and the 2nd frame of the video data.
Thus, to ensure stability of detail enhancement processing and environmental parameters, in this embodiment, segmentation may be employed to align the image data with the environmental parameters.
In a specific implementation, the parameters related to detail enhancement processing that were collected from the environment between the current frame of image data and the previous frame are taken as the environment parameters, so that the environment parameters are segmented into several sections with the video data as the reference.
For each section, the environment parameters are smoothed, and the result serves as the environment parameter corresponding to the current frame of image data. This highlights the differences in correlation between the environment parameters and image data at different points in the time sequence; the environment parameter is then used in the detail enhancement processing of that frame, improving the quality of the processing.
In one way of smoothing, a parameter weight may be configured for each environment parameter to represent its importance: the more important the environment parameter, the greater its parameter weight.
In one example, the timestamp at which each environment parameter was collected may be determined and a parameter weight set based on that timestamp, with the parameter weight positively correlated with the timestamp. That is, the closer the collection time of an environment parameter is to the current moment, the stronger its correlation with the current frame of image data and the greater the configured parameter weight; conversely, the farther the collection time is from the current moment, the weaker the correlation and the smaller the configured parameter weight. The parameter weights thus decrease monotonically from the current frame of image data back toward the previous frame.
Of course, other ways of configuring the parameter weights besides the acquisition time may be adopted, for example, taking the inverse of the number of environmental parameters as the parameter weight, and so on, which is not limited in this embodiment.
If each environmental parameter is configured with a corresponding parameter weight, the ratio between a first target value and a second target value can be calculated as the environment parameter corresponding to the current frame of image data, wherein the first target value is the sum of the products of each environment parameter and its parameter weight, and the second target value is the sum of all the parameter weights, expressed as follows:

ŷ_t = ( Σ_{i=1}^{n} w_i · y_{t-i} ) / ( Σ_{i=1}^{n} w_i )

where ŷ_t is the environment parameter corresponding to the t-th frame of image data, n environment parameters exist between the t-th frame of image data and the (t-1)-th frame of image data, y_{t-i} is the i-th environment parameter located before the t-th frame of image data, and w_i is the i-th parameter weight.
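The segmented weighted smoothing described here — the sum of weight-times-sample divided by the sum of weights — can be sketched as follows. This is a minimal illustration, not the patented implementation; the linearly increasing weights 1..n are an assumed choice, since the text requires only that weights decrease monotonically toward older samples.

```python
def smooth_env_params(samples):
    """Combine the n environment readings (e.g. brightness values)
    collected between two video frames into one value.

    samples: readings ordered oldest -> newest.
    Returns sum(w_i * y_i) / sum(w_i), with weights growing toward
    the most recent reading.
    """
    n = len(samples)
    if n == 0:
        raise ValueError("no environment samples between frames")
    weights = range(1, n + 1)  # newest sample gets the largest weight
    num = sum(w * y for w, y in zip(weights, samples))
    den = sum(weights)
    return num / den
```

For example, with readings `[0, 6]` the smoothed value is `(1*0 + 2*6) / 3 = 4`, weighting the newer reading more heavily.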
Step 103, collecting parameters related to detail enhancement processing from the video data as video parameters.
As shown in Fig. 2, for the video data acquired by the mobile terminal, parameters related to the detail enhancement processing may be collected in S204 from the video data itself as video parameters, thereby characterizing the photographed object.
In one embodiment of the invention, the video parameters may be the resolution of the video data, the first probability that each pixel belongs to skin, and the number of pixels representing skin.
On the one hand, the resolution of the video data can be read from the MediaRecorder object or the like. Resolution reflects the fineness of detail in the image data: in general, the higher the resolution, the more pixels are included and the clearer the image data; the lower the resolution, the fewer pixels are included and the blurrier the image data. When the resolution is low and the image data is blurred, performing detail enhancement processing easily produces artifacts in the blurred regions and increases the occupation of resources such as the CPU and memory of the mobile terminal, with no obvious practical benefit.
On the other hand, in business operations such as live streaming and short video, the video data is usually given beautification processing such as skin smoothing. If detail enhancement processing is performed at this point, artifacts easily appear in the areas where skin is located, and the processing increases the occupation of resources such as the CPU and memory of the mobile terminal, again with no obvious practical benefit.
In a specific implementation, the video data has multiple frames of image data. For each frame, skin color detection can be performed by means of a parameterized model, a non-parameterized model, or the like: the first probability of belonging to skin is calculated for each pixel, and if the first probability is greater than a preset skin color threshold, the pixel is determined to represent skin. The number of pixels representing skin is then counted as the skin color count.
Here, a parameterized model estimates the skin tone histogram on the assumption that skin tone follows a Gaussian probability distribution, while a non-parameterized model compares the skin tone histogram with the histograms of a training set.
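As an illustration of the parameterized approach, the Gaussian assumption can be sketched as follows. This is a hypothetical sketch in the chrominance plane; the mean and variance values, and the choice of an independent two-dimensional Gaussian, are assumptions for illustration and are not specified by this text.

```python
import math

def gaussian_skin_probability(cb, cr, mean=(103.0, 153.0), var=(150.0, 100.0)):
    """Probability-like skin score under an independent 2-D Gaussian
    in the (Cb, Cr) chrominance plane: highest (1.0) at the assumed
    skin-tone mean, falling off with distance from it."""
    d = (cb - mean[0]) ** 2 / var[0] + (cr - mean[1]) ** 2 / var[1]
    return math.exp(-0.5 * d)
```

The score can then be compared with a preset skin color threshold, just like the first probability described above.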
In one example of skin tone detection, the video data collected by the camera is in YUV format ("Y" denotes luminance (Luminance or Luma), i.e., the gray-scale value; "U" and "V" denote chrominance (Chrominance or Chroma), which describe the color and saturation of a pixel). To facilitate skin tone detection, the image data may be converted from the YUV color space to the RGB (Red, Green, Blue) color space.
For example, image data may be converted from YUV color space to RGB color space by the following conversion relation:
R=Y+1.4075*(V-128)
G=Y-0.3455*(U-128)-0.7169*(V-128)
B=Y+1.779*(U-128)
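The conversion relation above can be written directly as a per-pixel function. This is a sketch; the clamping to [0, 255] is an added assumption, since the raw formulas can leave the displayable range.

```python
def yuv_to_rgb(y, u, v):
    """Convert one pixel from YUV to RGB using the relations above,
    then clamp each component to the displayable range [0, 255]."""
    r = y + 1.4075 * (v - 128)
    g = y - 0.3455 * (u - 128) - 0.7169 * (v - 128)
    b = y + 1.779 * (u - 128)

    def clamp(x):
        return max(0, min(255, int(round(x))))

    return clamp(r), clamp(g), clamp(b)
```

A neutral gray pixel (Y=128, U=128, V=128) maps to (128, 128, 128), since both chrominance offsets are zero.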
in the RGB color space, R (red), G (green) and B (blue) components of each pixel point in the image data are traversed, and whether the pixel point meets a preset threshold value condition is judged.
If the pixel points in the image data meet the preset threshold condition, setting the first probability that the pixel points belong to the skin to be 1.
If the pixel points in the image data do not meet the preset threshold condition, the first probability that the pixel points belong to the skin is set to be 0.
Wherein the threshold condition comprises:
The R component is larger than the first color threshold, the G component is larger than the second color threshold, the B component is larger than the third color threshold, the R component is larger than the G component, and the R component is larger than the B component;
the difference between the maximum value of the R component, the G component and the B component and the minimum value of the R component, the G component and the B component is larger than a fourth color threshold;
The difference between the R component and the G component is greater than the fifth color threshold.
Assuming that the first color threshold is 95, the second color threshold is 40, the third color threshold is 20, the fourth color threshold is 15, and the fifth color threshold is 5, the threshold conditions are expressed as follows:
R>95,G>40,B>20,R>G,R>B
(Max(R,G,B)-Min(R,G,B))>15
Abs(R-G)>5
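With the example thresholds above (95, 40, 20, 15, and 5), the threshold condition and the resulting skin color count can be sketched as:

```python
def is_skin(r, g, b):
    """Threshold condition from the text, using the example values
    95/40/20/15/5 for the five color thresholds."""
    return (r > 95 and g > 40 and b > 20
            and r > g and r > b
            and (max(r, g, b) - min(r, g, b)) > 15
            and abs(r - g) > 5)

def skin_count(pixels):
    """The first probability is 1 when the condition holds and 0
    otherwise, so the skin color count is simply the number of
    pixels satisfying the condition."""
    return sum(1 for (r, g, b) in pixels if is_skin(r, g, b))
```

Because the check is a handful of comparisons per pixel, it is cheap enough to run on every frame, which matches the low-cost statistical approach described below.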
It should be noted that, in addition to performing skin color detection in the RGB color space, the threshold condition may be converted into an equivalent condition in the YUV color space by a conversion formula such as BT.601 full-range, and skin color detection performed in the YUV color space, where the threshold condition is, for example, 77 < Cb < 127 and 133 < Cr < 173; this embodiment is not limited in this respect.
In this example, skin tone detection serves as a condition for judging whether to perform detail enhancement processing on the video data, and the accuracy requirement on it is not very high; the requirement can therefore be met with a threshold condition. The threshold condition is simple, statistical in nature, and easy to compute, which effectively reduces the execution time of skin tone detection.
Of course, the above-mentioned video parameters and the detection modes thereof are merely examples, and other video parameters and detection modes thereof, for example, average luminance values of image data, etc., may be set according to actual situations when the present embodiment is implemented, which is not limited thereto. In addition, in addition to the above-mentioned video parameters and the detection modes thereof, those skilled in the art can also adopt other video parameters and detection modes thereof according to actual needs, which is not limited in this embodiment.
Step 104, determining the state of performing detail enhancement processing on the video data according to the environment parameters and the video parameters.
In this embodiment, as shown in Fig. 2, in S206 the environment parameters related to detail enhancement processing are combined with the video parameters to estimate the degree of fit between the detail enhancement processing and the current environment of the mobile terminal and the photographed object. A decision is made based on this degree of fit, determining the state of performing detail enhancement processing on the video data, that is, whether to perform the processing: when a certain condition is satisfied, detail enhancement processing on the video data is allowed; when the condition is not satisfied, it is prohibited.
In one embodiment of the present invention, step 104 includes the steps of:
Step 1041, comparing the environmental parameter with a preset environmental condition and comparing the video parameter with a preset video condition for the multi-frame image data in the video data.
In this embodiment, on the one hand, a corresponding environmental condition may be preset for an environmental parameter, where the environmental condition is used to represent the degree of adaptation between the environment where the mobile terminal is located and the detail enhancement processing, and on the other hand, a video condition may be preset for a video parameter, where the video condition is used to represent the degree of adaptation between the object photographed by the mobile terminal and the detail enhancement processing.
In one example, the environment parameter includes the brightness value of the light, and the video parameters include the resolution of the video data and the skin color count, i.e., the number of pixels representing skin.
In this example, the luminance value is compared with a preset luminance threshold value.
If the luminance value is greater than the preset luminance threshold, the environment where the mobile terminal is located is not a dim-light environment, and it can be determined that the environmental condition is satisfied; if the luminance value is less than or equal to the preset luminance threshold, the environment belongs to a dim-light environment, and it can be determined that the environmental condition is not satisfied.
In addition, the resolution of the video data is compared with a preset resolution threshold, and the ratio between the skin tone count and the resolution of the video data is calculated as the skin tone ratio and compared with a preset skin tone threshold.
If the resolution of the video data is greater than the preset resolution threshold and the skin tone ratio is smaller than the preset skin tone threshold, the video data is sharp and the object photographed by the mobile terminal is not mainly a person; it can then be determined that the video condition is satisfied. If the resolution of the video data is less than or equal to the preset resolution threshold and/or the skin tone ratio is greater than or equal to the preset skin tone threshold, the video data is blurry or the photographed object is mainly a person; it can then be determined that the video condition is not satisfied.
In this example, the environmental condition f1 and the video condition f2 are represented as follows:

f1 = 1 if l > τ1, otherwise f1 = 0

f2 = 1 if R1 > τ2 and S1/R1 < τ3, otherwise f2 = 0

Wherein, l is the luminance value of the light, S1 is the skin tone count, R1 is the resolution of the video data, τ1 is the luminance threshold, τ2 is the resolution threshold, and τ3 is the skin tone threshold.
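The two conditions in this example can be sketched as simple predicates; the default threshold values below are illustrative placeholders, not values fixed by this embodiment:

```python
def environment_condition(luminance: float, tau1: float = 50.0) -> bool:
    """f1: satisfied when the luminance value of the light exceeds the
    luminance threshold (i.e., the environment is not a dim-light one)."""
    return luminance > tau1

def video_condition(resolution: int, skin_count: int,
                    tau2: int = 640 * 480, tau3: float = 0.4) -> bool:
    """f2: satisfied when the video is sharp enough (resolution above tau2)
    and the skin tone ratio (skin pixels / total pixels) is below tau3,
    i.e., the photographed object is not mainly a person."""
    skin_ratio = skin_count / resolution
    return resolution > tau2 and skin_ratio < tau3
```

The frame matches the detail enhancement processing only when both predicates return true.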
Of course, the above environmental conditions and video conditions are merely examples. When implementing this embodiment, other environmental conditions and video conditions may be set according to the actual environmental parameters and video parameters: for example, if the environmental parameter is the CPU occupancy rate, the environmental condition is determined to be satisfied when the CPU occupancy rate is smaller than a preset occupancy threshold; if the video parameter is the average luminance value, the video condition is determined to be satisfied when the average luminance value is greater than a preset video luminance threshold; and so on. Those skilled in the art may also adopt other environmental conditions and video conditions according to actual needs, which is not limited in this embodiment.
Step 1042, if the environmental condition and the video condition are satisfied, determining that the image data matches the detail enhancement processing.
If the environmental parameters satisfy the environmental condition and the video parameters satisfy the video condition when a certain frame of image data is captured, then both the environment where the mobile terminal is located and the object photographed by the mobile terminal have a high degree of adaptation to the detail enhancement processing for that frame, and it can be determined that the frame of image data matches the detail enhancement processing.
Step 1043, setting the state of performing the detail enhancement processing on the video data based on the distribution of the image data matching the detail enhancement processing.
Since the acquisition of video data is continuous, the distribution of image data matching the detail enhancement processing within an initial portion of the video data (e.g., the first 80-120 frames of image data) can be examined to set the state of performing the detail enhancement processing on the video data, i.e., to determine whether the detail enhancement processing is performed on the video data.
In a specific implementation, the proportion of image data matching the detail enhancement processing within this portion of image data may be calculated as the matching proportion, and the matching proportion compared with a preset matching threshold.
If the matching proportion is greater than or equal to the preset matching threshold, both the environment where the mobile terminal is located and the object photographed by the mobile terminal are stably adapted to the detail enhancement processing, and performing the detail enhancement processing on the video data is allowed.
If the matching proportion is smaller than the preset matching threshold, the adaptation between the detail enhancement processing and the environment, as well as the photographed object, fluctuates greatly, and performing the detail enhancement processing on the video data is prohibited.
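The decision in steps 1042-1043 can be sketched as follows; the 0.8 matching threshold is an assumed placeholder, not a value fixed by the embodiment:

```python
def allow_detail_enhancement(matches, threshold=0.8):
    """Decide the detail-enhancement state from an initial portion of
    frames (e.g., the first 80-120): allow when the proportion of frames
    matching the processing reaches the matching threshold, otherwise
    prohibit. `matches` is a list of booleans, one per frame."""
    if not matches:
        return False  # nothing observed yet: keep enhancement off
    matching_proportion = sum(matches) / len(matches)
    return matching_proportion >= threshold
```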
In addition, for live broadcast and other business operations, the environment where the mobile terminal is located and the object photographed by the mobile terminal are generally stable. Therefore, after the state of performing the detail enhancement processing on the video data has been set, the environmental parameters and the video parameters need not be detected again, nor the state re-determined from them; this avoids frequently switching the detail enhancement processing on and off, which would cause abrupt changes in the picture of the video data.
In this embodiment, in the case where the detail enhancement processing is prohibited, the video data may be subjected to subsequent processing according to the service scenario, which is not limited in this embodiment.
For example, as shown in fig. 2, for video data for which the detail enhancement processing is prohibited in S206, the video data may be displayed on the screen in S208 and encoded in S209, for example in the H.264 format, then packaged into the FLV (Flash Video) format and queued for transmission to a device that plays the video data.
In this embodiment, video data is collected; during the collection, parameters related to the detail enhancement processing are collected from the environment as environmental parameters and from the video data as video parameters, and the state of performing the detail enhancement processing on the video data is determined according to both. On the one hand, when the environment where the mobile terminal is located and the object photographed by the mobile terminal match the detail enhancement processing, its normal execution is guaranteed, thereby guaranteeing the quality of the video data. On the other hand, when they do not match, execution is prohibited, reducing the frequency of the detail enhancement processing, saving its computing power, and reducing the occupation of resources such as the CPU and memory; the computing power thus reserved guarantees normal execution of business operations. Overall, the flexibility and robustness of the detail enhancement processing are improved.
Example two
Fig. 4 is a flowchart of a video detail enhancement method according to a second embodiment of the present invention. This embodiment is based on the foregoing embodiment and further adds the operation of the detail enhancement processing itself. The detail enhancement processing is deployed in the process of recording video data on the mobile terminal, so that the user can perceive its effect in real time, which maximizes the image-quality experience when capturing video data; performing the detail enhancement processing while recording also provides better input for subsequent encoding and decoding and improves the viewing experience. The method specifically includes the following steps:
Step 401, collecting video data.
Step 402, in the process of collecting video data, collecting parameters related to detail enhancement processing from the environment as environment parameters.
Step 403, collecting parameters related to detail enhancement processing from the video data as video parameters.
Step 404, determining a state of performing detail enhancement processing on the video data according to the environment parameter and the video parameter.
Step 405, in the process of displaying back the video data, collecting parameters related to the detail enhancement processing as playback parameters.
In business operations such as live broadcast and short video, a playback operation may be performed, that is, the video data collected by the camera is displayed on the screen of the mobile terminal. As shown in fig. 2, in S205, parameters related to the detail enhancement processing for the playback may be collected as playback parameters during the playback of the video data.
In one embodiment of the present invention, the playback parameters include the resolution of the screen, which may be read by invoking a DisplayMetrics object or the like.
In addition, the environmental parameters, the video parameters, and the playback parameters are passed to the code corresponding to the detail enhancement processing in the form of JNI (Java Native Interface) calls or the like, awaiting the detail enhancement processing of each frame of image data in the video data.
Step 406, if the status is that the detail enhancement processing is allowed to be executed on the video data, calculating the adjustment parameters according to the environment parameters, the video parameters and the playback parameters.
As shown in fig. 2, in S207, when performing the detail enhancement processing on the video data is allowed, the environmental parameters, the video parameters, and the playback parameters may be weighed comprehensively to calculate a parameter for adjusting the detail enhancement processing, called the adjustment parameter. The adjustment parameter adjusts the intensity of the detail enhancement processing, i.e., the intensity applied to each pixel point in each frame of image data, so that the detail enhancement processing achieves a better overall balance among the environment where the mobile terminal is located, the object photographed by the mobile terminal, and the playback effect of the mobile terminal.
In one embodiment of the present invention, step 406 includes the steps of:
Step 4061, converting the video parameters into first target parameters that participate in the detail enhancement process.
Step 4062, converting the playback parameters into second target parameters participating in the detail enhancement processing.
In this embodiment, the environmental parameters, the video parameters, and the playback parameters are normalized: the video parameters are converted into first target parameters participating in the detail enhancement processing, and the playback parameters into second target parameters, so that the environmental parameters, the first target parameters, and the second target parameters can be linearly fused at the same scale.
In one example, the video parameters include the first probability that a pixel point belongs to skin and the resolution of the video data. In this example, the second probability that the pixel point belongs to non-skin may be obtained by subtracting the first probability from 1; this second probability is the first target parameter.
In this example, from the viewpoint of a user's subjective aesthetics, skin color regions should be as smooth as possible; if detail enhancement is performed on a skin color region, artifacts are easily introduced, and although sharpness improves, the quality of the video data is actually degraded. By using the second probability that a pixel point belongs to non-skin, the detail enhancement processing can weaken the enhancement of skin and increase the enhancement of non-skin.
In another example, the playback parameter includes the resolution of the screen. In this example, the ratio between the resolution of the video data and the resolution of the screen is calculated as the display ratio; this display ratio is the second target parameter.
In this example, if the display ratio is small, large-scale up-sampling is generally performed during the playback operation, and the detail enhancement processing is then liable to cause uneven detail presentation, so its intensity should be reduced accordingly.
Step 4063, performing linear fusion on the environmental parameter, the first target parameter and the second target parameter to obtain the adjustment parameter.
In this embodiment, the environmental parameter, the first target parameter, and the second target parameter may be linearly fused to obtain the adjustment parameter; that is, the adjustment parameter is positively correlated with all three. The greater the environmental parameter, the first target parameter, and the second target parameter, the greater the adjustment parameter and the stronger the intensity of the detail enhancement processing; conversely, the smaller they are, the smaller the adjustment parameter and the weaker the intensity.
In one example, the environmental parameter includes the luminance value of the light, the first target parameter includes the second probability that the pixel point belongs to non-skin, and the second target parameter includes the display ratio. In this example, a first adjustment weight is configured for the luminance value, a second adjustment weight for the second probability, and a third adjustment weight for the display ratio.
It should be noted that the first adjustment weight, the second adjustment weight, and the third adjustment weight may be set by those skilled in the art according to practical situations, for example, the second adjustment weight is dominant, that is, the second adjustment weight is greater than the first adjustment weight, the third adjustment weight, and so on, which is not limited in this embodiment.
A sum of the first adjustment value, the second adjustment value, and the third adjustment value is calculated as the fourth adjustment value, where the first adjustment value is the product of the luminance value and the first adjustment weight, the second adjustment value is the product of the second probability and the second adjustment weight, and the third adjustment value is the product of the display ratio and the third adjustment weight.
The adjustment parameter is then obtained by multiplying the fourth adjustment value by a preset first adjustment coefficient and adding a preset second adjustment coefficient.
In this example, the adjustment parameter is represented as follows:

Sti = k*(α*l + β*p2 + γ*r) + b

p2 = 1 - p1

r = R1/R2

Wherein, Sti is the adjustment parameter of the i-th pixel point of the t-th frame, l is the luminance value, p1 is the first probability, p2 is the second probability, R1 is the resolution of the video data, R2 is the resolution of the screen, r is the display ratio, α is the first adjustment weight, β is the second adjustment weight, γ is the third adjustment weight, k is the first adjustment coefficient, and b is the second adjustment coefficient.
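The linear fusion above can be sketched as follows; the weight and coefficient values are illustrative assumptions (with the second adjustment weight dominant, as suggested earlier), not values fixed by this embodiment:

```python
def adjustment_parameter(luminance, p1, video_res, screen_res,
                         alpha=0.2, beta=0.6, gamma=0.2, k=1.0, b=0.0):
    """Per-pixel adjustment parameter S_ti = k*(alpha*l + beta*p2 + gamma*r) + b,
    with p2 = 1 - p1 (second probability: pixel belongs to non-skin)
    and display ratio r = R1 / R2. `luminance` and `p1` are assumed
    normalized to [0, 1]."""
    p2 = 1.0 - p1                 # second probability (non-skin)
    r = video_res / screen_res    # display ratio
    fourth = alpha * luminance + beta * p2 + gamma * r
    return k * fourth + b
```

A skin pixel (p1 = 1) thus receives a smaller adjustment parameter, weakening the enhancement of skin, while a small display ratio likewise lowers the intensity.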
Step 407, performing detail enhancement processing on the video data according to the adjustment parameters.
After determining the intensity (i.e., the adjustment parameter) of the current detail enhancement process, the relevant parameters in the detail enhancement process may be adjusted according to the intensity (i.e., the adjustment parameter), thereby performing the detail enhancement process on the video data.
In one embodiment of the invention, step 407 may include the steps of:
step 4071, extracting details from the image data of the video data to obtain the original detail data.
In a specific implementation, filtering processing may be performed on the video data by a box filtering method, an average filtering method, or the like, so as to extract details from image data of the video data as original detail data.
Step 4072, adjusting the original detail data based on the adjustment parameters to obtain the target detail data.
The adjustment parameters are applied to the original detail data, adjusting the original detail data to realize the intensity adjustment of the detail enhancement processing.
In one example, the adjustment weights may be queried, and the adjustment weights, adjustment parameters, and the original detail data multiplied to obtain the target detail data.
Of course, besides using the adjustment weight, the adjustment parameter may be multiplied directly with the original detail data to obtain the target detail data, and so on, which is not limited in this embodiment.
Step 4073, superimposing the target detail data to the image data.
The target detail data, after adjustment by the adjustment parameters, can be superimposed on the original image data by means of USM sharpening or the like to obtain detail-enhanced image data. The detail enhancement processing is then represented as follows:
y't(i,j) = yt(i,j) + λ*Sti*Dt(i,j)
Where y't(i,j) is the image data after detail enhancement, yt(i,j) is the image data before detail enhancement, λ is the adjustment weight, Sti is the adjustment parameter, and Dt(i,j) is the original detail data.
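A minimal sketch of steps 4071-4073 on a single channel, using box filtering for the detail extraction as described above; the filter size and adjustment weight are illustrative placeholders:

```python
import numpy as np

def enhance_details(y, s, lam=0.5, ksize=3):
    """y' = y + lam * S * D, where the original detail data D is the
    channel minus its box-filtered (local mean) version. `y` is one
    channel (e.g., the Y channel) as a uint8 array, `s` is the
    per-pixel adjustment parameter map."""
    pad = ksize // 2
    padded = np.pad(y.astype(np.float32), pad, mode="edge")
    # box filter: average of the ksize*ksize neighborhood
    mean = np.zeros_like(y, dtype=np.float32)
    for dy in range(ksize):
        for dx in range(ksize):
            mean += padded[dy:dy + y.shape[0], dx:dx + y.shape[1]]
    mean /= ksize * ksize
    detail = y.astype(np.float32) - mean           # original detail data D
    out = y.astype(np.float32) + lam * s * detail  # superimpose adjusted detail
    return np.clip(out, 0, 255).astype(np.uint8)
```

On a flat region the detail layer is zero, so the output equals the input; only edges and textures are amplified.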
In order to reduce the computational load of the detail enhancement processing, it can be performed on a single channel to which users are sensitive (such as the Y channel in the YUV color space or the G channel in the RGB color space); the detail-enhanced channel (such as Yt) is then fused with the other channels (such as Ut and Vt) to obtain video data with improved sharpness.
As shown in fig. 5, the left side is the original image data and the right side is the image data after the detail enhancement processing of this embodiment is applied. By comparison, the image data on the right shows a significant sharpness improvement in the collar region of the person and in the background region where the plant is located.
Of course, the above detail enhancement processing is merely an example. When implementing this embodiment, the detail enhancement processing may be set according to the actual situation; for example, if the detail enhancement processing is performed using convolution, the adjustment parameter may adjust the size of the convolution kernel, and so on. Those skilled in the art may also adopt other detail enhancement processing according to actual needs, which is not limited in this embodiment.
For the video data after the detail enhancement processing, the subsequent processing may be performed according to the service scenario, which is not limited in this embodiment.
For example, as shown in fig. 2, the video data after the detail enhancement processing is displayed on a screen in S208, and the video data after the detail enhancement is encoded, for example, encoded in the format of h.264 and packaged in FLV format in S209, waiting for transmission to a device that plays the video data.
In this embodiment, if the state allows performing the detail enhancement processing on the video data, parameters related to the detail enhancement processing are collected as playback parameters during the playback of the video data, the adjustment parameters are calculated according to the environmental parameters, the video parameters, and the playback parameters, and the detail enhancement processing is performed on the video data according to the adjustment parameters. The intensity of the detail enhancement is thereby adapted as a whole to the environment where the mobile terminal is located, the object photographed by the mobile terminal, and the playback effect of the mobile terminal, which improves the utilization efficiency of computing power and balances the effect of the detail enhancement processing against its computational complexity, making it feasible to apply the detail enhancement processing during real-time video communication or short-video capture on a mobile terminal.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Example III
Fig. 6 is a block diagram of a video detail enhancement device according to a third embodiment of the present invention, which is applied to a mobile terminal, and the device specifically includes the following modules:
a video data acquisition module 601, configured to acquire video data;
an environmental parameter collection module 602, configured to collect, as environmental parameters, parameters related to detail enhancement processing from an environment during the process of collecting the video data;
a video parameter acquisition module 603, configured to acquire parameters related to detail enhancement processing from the video data as video parameters;
an enhancement state determining module 604, configured to determine a state of performing the detail enhancement processing on the video data according to the environmental parameter and the video parameter.
In one embodiment of the present invention, the video data has a plurality of frames of image data; the environmental parameter collection module 602 includes:
The parameter acquisition sub-module is used for acquiring parameters which are acquired between the image data of the current frame and the image data of the previous frame and are related to detail enhancement processing in the environment and used as environment parameters;
And the smoothing processing sub-module is used for carrying out smoothing processing on the environment parameters and taking the environment parameters as environment parameters corresponding to the image data of the current frame.
In one embodiment of the present invention, the smoothing submodule includes:
a parameter weight configuration unit, configured to configure parameter weights for the environmental parameters;
And the ratio calculating unit is used for calculating a ratio between a first target value and a second target value as the environment parameter corresponding to the image data of the current frame, wherein the first target value is the sum of products between all the environment parameters and all the parameter weights, and the second target value is the sum of all the parameter weights.
In one embodiment of the present invention, the parameter weight configuration unit includes:
a time stamp determining subunit, configured to determine a time stamp for acquiring the environmental parameter;
And a parameter weight setting subunit, configured to set a parameter weight for the environmental parameter based on the timestamp, where the parameter weight is positively related to the timestamp.
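The timestamp-weighted smoothing performed by these units can be sketched as follows; using the raw timestamp itself as the weight is one assumption satisfying the stated requirement that the weight be positively correlated with the timestamp:

```python
def smooth_environment_parameter(samples):
    """Weighted average of the environment-parameter samples collected
    between the previous frame and the current frame. `samples` is a
    list of (timestamp, value) pairs; newer samples count more."""
    first_target = sum(t * v for t, v in samples)  # sum of weight*value products
    second_target = sum(t for t, _ in samples)     # sum of parameter weights
    return first_target / second_target
```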
In one embodiment of the present invention, the video data has a plurality of frames of image data; the video parameter acquisition module 603 includes:
A first probability calculation sub-module for calculating a first probability of belonging to skin for each pixel point in the image data;
The skin color pixel point determining submodule is used for determining that the pixel points represent skin if the first probability is larger than a preset skin color threshold value;
And the skin color number counting sub-module is used for counting the number of the pixel points representing the skin as the skin color number.
In one embodiment of the invention, the first probability calculation sub-module includes:
A color space conversion unit configured to convert the image data from a YUV color space to an RGB color space;
A first numerical value setting unit, configured to set, in the RGB color space, a first probability that a pixel point in the image data belongs to skin to 1 if the pixel point meets a preset threshold condition;
a second value setting unit, configured to set a first probability that a pixel point belongs to skin to 0 if the pixel point in the image data does not meet a preset threshold condition;
wherein the threshold condition comprises:
The R component is larger than the first color threshold, the G component is larger than the second color threshold, the B component is larger than the third color threshold, the R component is larger than the G component, and the R component is larger than the B component;
the difference between the maximum value of the R component, the G component and the B component and the minimum value of the R component, the G component and the B component is larger than a fourth color threshold;
The difference between the R component and the G component is greater than the fifth color threshold.
In one embodiment of the present invention, the enhancement state determination module 604 includes:
the condition comparison sub-module is used for comparing the environment parameters with preset environment conditions and comparing the video parameters with preset video conditions respectively aiming at multi-frame image data in the video data;
The image matching sub-module is used for determining that the image data matches the detail enhancement processing if the environmental condition and the video condition are met;
a distribution setting sub-module for setting a state of executing the detail enhancement processing on the video data based on a distribution of the image data matching the detail enhancement processing.
In one embodiment of the present invention, the environmental parameter includes the luminance value of light, and the video parameters include the resolution of the video data and the skin tone count, the skin tone count being the number of pixels representing skin;
the condition comparison submodule includes:
The environment condition satisfaction determining unit is used for determining that the environment condition is satisfied if the brightness value is larger than a preset brightness threshold value;
a resolution comparison unit for comparing the resolution of the video data with a preset resolution threshold;
A skin tone ratio calculation unit for calculating the ratio between the skin tone count and the resolution of the video data as the skin tone ratio;
and a video condition satisfaction determining unit for determining that the video condition is satisfied if the resolution of the video data is greater than a preset resolution threshold and the skin tone ratio is smaller than a preset skin tone threshold.
In one embodiment of the present invention, the distribution setting submodule includes:
A matching ratio calculation unit configured to calculate a ratio of the image data matching the detail enhancement processing as a matching ratio;
an enabling enhancement unit, configured to enable the detail enhancement processing to be performed on the video data if the matching ratio is greater than or equal to a preset matching threshold;
And the inhibition enhancement unit is used for inhibiting the detail enhancement processing to be executed on the video data if the matching proportion is smaller than a preset matching threshold value.
In one embodiment of the present invention, further comprising:
the playback parameter acquisition module is used for acquiring parameters related to the detail enhancement processing as playback parameters in the process of playback of the video data;
The adjusting parameter calculating module is used for calculating adjusting parameters according to the environment parameters, the video parameters and the back display parameters if the state is that the detail enhancement processing is allowed to be executed on the video data, and the adjusting parameters are used for adjusting the intensity of the detail enhancement processing;
And the detail enhancement processing execution module is used for executing the detail enhancement processing on the video data according to the adjustment parameters.
In one embodiment of the present invention, the adjustment parameter calculation module includes:
the first target parameter conversion sub-module is used for converting the video parameters into first target parameters participating in the detail enhancement processing;
the second target parameter conversion sub-module is used for converting the back display parameters into second target parameters which participate in the detail enhancement processing;
and the linear fusion sub-module is used for carrying out linear fusion on the environment parameter, the first target parameter and the second target parameter to obtain an adjustment parameter.
In one embodiment of the present invention, the video parameter includes a first probability that the pixel belongs to skin, a resolution of the video data, and the back-display parameter includes a resolution of a screen;
The first target parameter conversion submodule includes:
The second probability calculation unit is used for subtracting the first probability to obtain a second probability that the pixel point belongs to non-skin;
the second target parameter conversion submodule includes:
And a display ratio calculating unit for calculating a ratio between the resolution of the video data and the resolution of the screen as a display ratio.
In one embodiment of the invention, the environmental parameter comprises a luminance value of the light;
The linear fusion submodule comprises:
A first adjustment weight configuration unit configured to configure a first adjustment weight for the luminance value;
a second adjustment weight configuration unit configured to configure a second adjustment weight for the second probability;
a third adjustment weight configuration unit configured to configure a third adjustment weight for the display scale;
An adjustment value calculation unit configured to calculate, as a fourth adjustment value, the sum of a first adjustment value, a second adjustment value, and a third adjustment value, where the first adjustment value is the product of the luminance value and the first adjustment weight, the second adjustment value is the product of the second probability and the second adjustment weight, and the third adjustment value is the product of the display ratio and the third adjustment weight;
and the adjusting value adjusting unit is used for adding a preset second adjusting coefficient to be used as an adjusting parameter on the basis of the product between the fourth adjusting value and the preset first adjusting coefficient.
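As an illustrative, non-limiting sketch, the linear fusion performed by these units can be expressed as a weighted sum of the three fused quantities followed by a scale and an offset. The weight and coefficient values below are assumptions for demonstration only; the embodiment does not fix them:

```python
def adjustment_parameter(brightness, first_probability, display_proportion,
                         w1=0.4, w2=0.3, w3=0.3, coef1=1.0, coef2=0.1):
    """Linearly fuse the environment, video, and playback quantities.

    All weight/coefficient values are hypothetical defaults.
    """
    second_probability = 1.0 - first_probability   # probability the pixel is non-skin
    fourth_value = (w1 * brightness                # first adjustment value
                    + w2 * second_probability      # second adjustment value
                    + w3 * display_proportion)     # third adjustment value
    # Scale by the first coefficient, then offset by the second coefficient.
    return fourth_value * coef1 + coef2
```

With these assumed values, a pure-skin pixel (first probability 1), mid brightness 0.5 and a 1:1 display proportion would yield 0.4·0.5 + 0.3·0 + 0.3·1 + 0.1 = 0.6.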
In one embodiment of the present invention, the detail enhancement processing execution module includes:
The original detail data extraction sub-module is used for extracting details from the image data of the video data to obtain original detail data;
The original detail data adjustment sub-module is used for adjusting the original detail data based on the adjustment parameters to obtain target detail data;
And the target detail data superposition sub-module is used for superposing the target detail data to the image data.
In one embodiment of the present invention, the raw detail data adjustment submodule includes:
the adjustment weight query unit is used for querying the adjustment weight;
And the detail multiplying unit is used for multiplying the original detail data by the adjustment weight and the adjustment parameter to obtain the target detail data.
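The extract-adjust-superpose pipeline of these sub-modules can be sketched on a 1-D signal as follows. The embodiment does not specify the detail-extraction filter, so the 3-tap box blur used here as the low-pass stand-in is an assumption:

```python
def enhance(signal, adjust_param, adjust_weight=1.0):
    """Unsharp-mask style detail enhancement sketch (assumed filter)."""
    # Low-pass the signal with a 3-tap box blur (edge samples replicated).
    blurred = []
    for i in range(len(signal)):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, len(signal) - 1)]
        blurred.append((left + signal[i] + right) / 3.0)
    # Original detail data = signal minus its low-pass component.
    detail = [s - b for s, b in zip(signal, blurred)]
    # Target detail data = detail scaled by adjustment weight and parameter.
    target = [d * adjust_weight * adjust_param for d in detail]
    # Superpose the target detail data back onto the signal.
    return [s + t for s, t in zip(signal, target)]
```

An adjustment parameter of zero leaves the signal unchanged, which matches the intent that the parameter controls the intensity of the enhancement.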
The video detail enhancement device provided by the embodiment of the invention can execute the video detail enhancement method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 7 is a schematic structural diagram of a mobile terminal according to a fourth embodiment of the present invention. Fig. 7 illustrates a block diagram of an exemplary mobile terminal 12 suitable for use in implementing embodiments of the present invention. The mobile terminal 12 shown in fig. 7 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 7, the mobile terminal 12 is embodied in the form of a general purpose computing device. The components of the mobile terminal 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The mobile terminal 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by mobile terminal 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The mobile terminal 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, commonly referred to as a "hard disk drive"). Although not shown in fig. 7, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The mobile terminal 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the mobile terminal 12, and/or any devices (e.g., network card, modem, etc.) that enable the mobile terminal 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the mobile terminal 12 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through the network adapter 20. As shown, the network adapter 20 communicates with other modules of the mobile terminal 12 over the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the mobile terminal 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, to implement the detail enhancement method of video provided by the embodiment of the present invention.
Example five
The fifth embodiment of the present invention further provides a computer readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements each process of the foregoing video detail enhancement method and can achieve the same technical effects; to avoid repetition, details are not described again here.
The computer readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to those embodiments, and may be embodied in many other equivalent forms without departing from the concept of the invention, the scope of which is defined by the appended claims.

Claims (16)

1. A method for enhancing details of a video, which is applied to a mobile terminal, the method comprising:
collecting video data;
In the process of collecting the video data, collecting parameters related to detail enhancement processing from the environment as environment parameters;
Collecting parameters related to detail enhancement processing from the video data as video parameters;
Determining a state of executing the detail enhancement processing on the video data according to the environment parameter and the video parameter;
the video data has multi-frame image data; and the collecting parameters related to detail enhancement processing from the video data as video parameters comprises the following steps:
converting the image data from a YUV color space to an RGB color space;
in the RGB color space, if a pixel point in the image data accords with a preset threshold condition, setting a first probability that the pixel point belongs to skin to be 1;
if the pixel points in the image data do not meet the preset threshold condition, setting the first probability that the pixel points belong to the skin to be 0;
If the first probability is larger than a preset skin color threshold value, determining that the pixel points represent skin;
Counting the number of the pixel points representing skin as the skin color count;
wherein the threshold condition comprises:
The R component is larger than the first color threshold, the G component is larger than the second color threshold, the B component is larger than the third color threshold, the R component is larger than the G component, and the R component is larger than the B component;
the difference between the maximum value of the R component, the G component and the B component and the minimum value of the R component, the G component and the B component is larger than a fourth color threshold;
The difference between the R component and the G component is greater than a fifth color threshold;
the video parameters include a first probability that a pixel belongs to the skin and a number of pixels representing the skin.
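As an illustrative, non-limiting sketch, the skin-color test of claim 1 can be expressed as follows. The claim names five color thresholds but does not fix their values; the numbers below (a commonly used RGB skin-color rule) are assumptions for demonstration:

```python
def first_probability(r, g, b, t1=95, t2=40, t3=20, t4=15, t5=15):
    """Return 1 if the pixel satisfies the threshold condition, else 0.

    Threshold values t1..t5 are hypothetical, not taken from the claim.
    """
    meets = (r > t1 and g > t2 and b > t3 and r > g and r > b
             and max(r, g, b) - min(r, g, b) > t4   # spread condition
             and r - g > t5)                        # R dominates G
    return 1 if meets else 0

def skin_count(pixels, skin_threshold=0.5):
    """Count pixels whose first probability exceeds the skin color threshold."""
    return sum(1 for r, g, b in pixels
               if first_probability(r, g, b) > skin_threshold)
```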
2. The method according to claim 1, wherein the collecting parameters related to the detail enhancement process from the environment as environmental parameters includes:
Acquiring, as environmental parameters, the parameters related to detail enhancement processing that are collected in the environment between the image data of the current frame and the image data of the previous frame;
And smoothing the environment parameters to serve as environment parameters corresponding to the image data of the current frame.
3. The method according to claim 2, wherein smoothing the environmental parameter as the environmental parameter corresponding to the image data of the current frame includes:
configuring parameter weights for the environmental parameters;
And calculating the ratio between a first target value and a second target value as the environment parameter corresponding to the image data of the current frame, wherein the first target value is the sum of the products of each environment parameter and its parameter weight, and the second target value is the sum of all the parameter weights.
4. A method according to claim 3, wherein said configuring parameter weights for said environmental parameters comprises:
Determining a timestamp of the acquisition of the environmental parameter;
parameter weights are set for the environmental parameters based on the time stamps, the parameter weights being positively correlated with the time stamps.
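The smoothing of claims 3-4 can be sketched as a weighted average. The claims require only that each parameter weight be positively correlated with the sample's timestamp; using the raw timestamp itself as the weight is an assumption made here for demonstration:

```python
def smooth_environment(samples):
    """samples: list of (timestamp, value) pairs collected between two frames."""
    weights = [ts for ts, _ in samples]          # weight grows with timestamp
    # First target value: sum of products of each parameter and its weight.
    first_target = sum(w * v for w, (_, v) in zip(weights, samples))
    # Second target value: sum of all parameter weights.
    second_target = sum(weights)
    return first_target / second_target          # smoothed environment parameter
```

Later samples thus dominate the smoothed value, which biases the environment parameter toward the most recent lighting conditions.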
5. The method of claim 1, wherein the determining a state of performing the detail enhancement processing on the video data based on the environmental parameter and the video parameter comprises:
For each frame of image data in the video data, comparing the environment parameter with a preset environment condition, and comparing the video parameter with a preset video condition;
If the environment condition and the video condition are met, determining that the image data matches the detail enhancement processing;
A state in which the detail enhancement processing is performed on the video data is set based on the distribution of the image data matching the detail enhancement processing.
6. The method of claim 5, wherein the environmental parameter comprises a luminance value of ambient light, and the video parameter further comprises a resolution of the video data;
The comparing the environmental parameter with a preset environmental condition and the comparing the video parameter with a preset video condition respectively includes:
If the brightness value is larger than a preset brightness threshold value, determining that the environment condition is met;
comparing the resolution of the video data with a preset resolution threshold;
Calculating the ratio between the skin color count and the resolution of the video data as the skin tone proportion;
And if the resolution of the video data is greater than the preset resolution threshold and the skin tone proportion is smaller than a preset skin tone threshold, determining that the video condition is satisfied.
7. The method according to claim 5, wherein the setting a state in which the detail enhancement processing is performed on the video data based on the distribution of the image data matching the detail enhancement processing, comprises:
calculating the proportion of the image data that matches the detail enhancement processing as the matching proportion;
if the matching proportion is greater than or equal to a preset matching threshold value, allowing the detail enhancement processing to be executed on the video data;
And if the matching proportion is smaller than a preset matching threshold value, prohibiting the detail enhancement processing of the video data.
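The allow/prohibit decision of claim 7 reduces to a single threshold test over the per-frame match results. A sketch follows; the 0.8 matching threshold is a hypothetical value, since the claim leaves it unspecified:

```python
def allow_enhancement(frame_matches, match_threshold=0.8):
    """frame_matches: one boolean per frame of image data.

    Returns True (allow detail enhancement) when the matching proportion
    reaches the threshold, False (prohibit) otherwise.
    """
    matching_proportion = sum(frame_matches) / len(frame_matches)
    return matching_proportion >= match_threshold
```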
8. The method of any one of claims 1-7, further comprising:
During playback of the video data, acquiring parameters related to the detail enhancement processing as playback parameters;
If the state is that the detail enhancement processing is allowed to be performed on the video data, calculating an adjustment parameter according to the environment parameter, the video parameter and the playback parameter, wherein the adjustment parameter is used for adjusting the intensity of the detail enhancement processing;
And executing the detail enhancement processing on the video data according to the adjustment parameters.
9. The method of claim 8, wherein calculating adjustment parameters based on the environmental parameters, the video parameters, and the playback parameters comprises:
converting the video parameters into first target parameters participating in the detail enhancement processing;
converting the playback parameters into second target parameters participating in the detail enhancement processing;
And carrying out linear fusion on the environment parameter, the first target parameter and the second target parameter to obtain an adjustment parameter.
10. The method of claim 9, wherein the video parameters further comprise a resolution of the video data, and wherein the playback parameters comprise a resolution of a screen;
The converting the video parameter into a first target parameter participating in the detail enhancement process includes:
subtracting the first probability from one to obtain a second probability that the pixel belongs to non-skin;
the converting the playback parameter into a second target parameter participating in the detail enhancement processing comprises:
And calculating the ratio between the resolution of the video data and the resolution of the screen as a display proportion.
11. The method of claim 10, wherein the environmental parameter comprises a luminance value of ambient light;
the linear fusion of the environmental parameter, the first target parameter and the second target parameter to obtain an adjustment parameter includes:
configuring a first adjusting weight for the brightness value;
Configuring a second adjustment weight for the second probability;
configuring a third adjustment weight for the display proportion;
Calculating the sum of a first adjustment value, a second adjustment value, and a third adjustment value as a fourth adjustment value, wherein the first adjustment value is the product of the luminance value and the first adjustment weight, the second adjustment value is the product of the second probability and the second adjustment weight, and the third adjustment value is the product of the display proportion and the third adjustment weight;
And multiplying the fourth adjustment value by a preset first adjustment coefficient and adding a preset second adjustment coefficient to obtain the adjustment parameter.
12. The method of claim 8, wherein said performing said detail enhancement processing on said video data in accordance with said adjustment parameters comprises:
extracting details from the image data of the video data to obtain original detail data;
adjusting the original detail data based on the adjustment parameters to obtain target detail data;
The target detail data is superimposed to the image data.
13. The method of claim 12, wherein said adjusting said raw detail data based on said adjustment parameters to obtain target detail data comprises:
querying the adjustment weight;
multiplying the original detail data by the adjustment weight and the adjustment parameter to obtain the target detail data.
14. A video detail enhancement device, for use in a mobile terminal, the device comprising:
the video data acquisition module is used for acquiring video data; the video data has multi-frame image data;
The environment parameter acquisition module is used for acquiring parameters related to detail enhancement processing from the environment as environment parameters in the process of acquiring the video data;
The video parameter acquisition module is used for acquiring parameters related to detail enhancement processing from the video data as video parameters;
An enhancement state determining module, configured to determine a state of performing the detail enhancement processing on the video data according to the environmental parameter and the video parameter;
The video parameter acquisition module comprises:
a color space conversion sub-module for converting the image data from a YUV color space to an RGB color space;
A first numerical value setting sub-module, configured to set, in the RGB color space, a first probability that a pixel point in the image data belongs to skin to 1 if the pixel point meets a preset threshold condition;
A second value setting sub-module, configured to set a first probability that a pixel point belongs to skin to 0 if the pixel point in the image data does not meet a preset threshold condition;
The skin color pixel point determining submodule is used for determining that the pixel points represent skin if the first probability is larger than a preset skin color threshold value;
The skin color quantity counting sub-module is used for counting the quantity of the pixel points representing the skin and taking the quantity as the skin color quantity;
wherein the threshold condition comprises:
The R component is larger than the first color threshold, the G component is larger than the second color threshold, the B component is larger than the third color threshold, the R component is larger than the G component, and the R component is larger than the B component;
the difference between the maximum value of the R component, the G component and the B component and the minimum value of the R component, the G component and the B component is larger than a fourth color threshold;
The difference between the R component and the G component is greater than a fifth color threshold;
the video parameters include a first probability that a pixel belongs to the skin and a number of pixels representing the skin.
15. A mobile terminal, the mobile terminal comprising:
One or more processors;
A memory for storing one or more programs,
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the detail enhancement method of video of any of claims 1-13.
16. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, which when executed by a processor implements the detail enhancement method of a video according to any of claims 1-13.
CN202011348089.2A 2020-11-26 2020-11-26 Video detail enhancement method and device, mobile terminal and storage medium Active CN112488933B (en)


Publications (2)

Publication Number Publication Date
CN112488933A CN112488933A (en) 2021-03-12
CN112488933B true CN112488933B (en) 2024-06-18





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant