WO2018127091A1 - Image processing method and apparatus, relevant device and server

Info

Publication number
WO2018127091A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
video
processing
terminal device
data
Prior art date
Application number
PCT/CN2018/071363
Other languages
French (fr)
Chinese (zh)
Inventor
李凯 (Li Kai)
夏珍 (Xia Zhen)
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority claimed from CN201710014525.4A external-priority patent/CN108289185B/en
Priority claimed from CN201710344562.1A external-priority patent/CN108307101B/en
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2018127091A1 publication Critical patent/WO2018127091A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working

Definitions

  • the present application relates to the field of image processing, and in particular, to a method, an apparatus, a related device, and a server for image processing.
  • Video communication refers to the way in which users communicate by transmitting video images.
  • terminal devices such as users' mobile phones can mutually transmit video images collected by their cameras through the network to realize video communication.
  • the current video communication method can also support the addition of video effects in video images, such as adding facial pendant images (a beard, animal ears, and the like) to a face in a video image, or adding filter effects such as cartoon or comic styles to the video image.
  • GPU (Graphics Processing Unit)
  • adding video effects to video images is generally implemented on the GPU; however, the GPU is also responsible for the display of the graphical interface, image rendering, and other graphics tasks in the terminal device.
  • the terminal device therefore needs a GPU with sufficient performance to meet the requirement of adding video effects to the video image, which places higher requirements on the performance configuration of the terminal device, so the usage limitations of video communication based on video effects keep growing.
  • An embodiment of the present invention provides a method for image processing, where the method includes:
  • image features in the image data are adjusted according to the determined processing manner.
  • an embodiment of the present invention further provides a video communication method, apparatus, and terminal device to reduce usage limitations of video communication based on video effects.
  • the embodiment of the present invention provides the following technical solutions:
  • a video communication method is applied to a first terminal device, and the method includes:
  • if the data processing amount type matches the second type, instructing the GPU of the first terminal device to add the filter effect to the video image; wherein the data processing amount corresponding to the first type falls within the data processing amount range set for the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type;
  • a video image to which a video effect is added is transmitted to the second terminal device.
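The dispatch claimed in the steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the type names, the threshold comparison, and the string return values are all assumptions made for clarity.

```python
# Hypothetical sketch of the claimed CPU/GPU dispatch for filter effects.
# Names and thresholds are illustrative assumptions, not taken from the patent.

FIRST_TYPE = "first"    # data processing amount within the range set for the CPU
SECOND_TYPE = "second"  # data processing amount above the CPU's range

def classify_filter_effect(processing_amount, cpu_range_upper):
    """Map a filter effect's data processing amount to a type."""
    return FIRST_TYPE if processing_amount <= cpu_range_upper else SECOND_TYPE

def add_filter_effect(frame, processing_amount, cpu_range_upper):
    """Add the filter on the CPU for light effects; instruct the GPU otherwise."""
    if classify_filter_effect(processing_amount, cpu_range_upper) == FIRST_TYPE:
        return "cpu"   # apply the filter in the CPU of the first terminal device
    return "gpu"       # instruct the GPU (e.g. via OpenGL) to apply the filter
```

The key design point the claims describe is that the classification threshold is tied to the CPU's own capability, so the same effect may be routed differently on different device models.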
  • the embodiment of the invention further provides a video communication device, which is applied to a first terminal device, and the device includes:
  • connection establishing module configured to establish a video communication connection with the second terminal device
  • a video image acquiring module configured to acquire a video image collected by the image capturing device of the first terminal device
  • a filter effect determining module for determining the selected filter effect
  • a type determining module configured to determine a data processing type of the filter effect
  • a first filter effect adding module configured to add the filter effect to the video image in a CPU of the first terminal device if the data processing amount type conforms to the first type
  • a second filter effect adding module configured to instruct the GPU of the first terminal device to add the filter effect to the video image if the data processing amount type conforms to the second type; wherein the data processing amount corresponding to the first type falls within the data processing amount range set for the CPU, and is lower than the data processing amount corresponding to the second type;
  • a video image transmission module configured to transmit, to the second terminal device, a video image to which a video effect is added.
  • the embodiment of the invention further provides a terminal device, including:
  • a CPU configured to: establish a video communication connection with the second terminal device; acquire a video image collected by the image acquisition device of the first terminal device; determine a selected filter effect; determine a data processing amount type of the filter effect; if the data processing amount type matches the first type, add the filter effect to the video image in the CPU of the first terminal device; if the data processing amount type matches the second type, instruct the GPU of the first terminal device to add the filter effect to the video image, wherein the data processing amount corresponding to the first type falls within the data processing amount range set for the CPU and is lower than the data processing amount corresponding to the second type; determine, at least according to the video image to which the filter effect is added, a video image to which a video special effect is added; and transmit the video image with the added video special effect to the second terminal device;
  • a GPU configured to be instructed by the CPU to add the filter effect on the video image when the data processing amount type conforms to the second type.
  • the first terminal device may determine whether the filter effect is implemented on the CPU or on the GPU according to the data processing amount type of the filter effect: when the processing complexity corresponding to the data processing amount type is low and the first type is met, the filter effect may be added to the video image in the CPU of the first terminal device; when the processing complexity is high and the second type is met, the GPU of the first terminal device may be instructed to add the filter effect to the video image. Thus, the embodiment of the present invention can reasonably allocate the processing device that performs the filter effect addition based on the data processing amount type of the filter effect, so that the data processing pressure involved in adding the filter effect is shared between the CPU and the GPU; a terminal device with a certain comprehensive performance can then satisfy the requirement of adding video effects.
  • the embodiments of the present invention provide an image processing method, an electronic device, and a server, which can solve at least the above problems in the prior art.
  • a first aspect of the embodiments of the present invention provides an image processing method, where the method includes:
  • the electronic device detects a target operation, the target operation characterizing an operation of the electronic device to adjust image features in the collected image data and/or video data;
  • acquiring, based on the target operation, a parameter adjustable range value for processing the image feature for the electronic device, the parameter adjustable range value being determined based at least on image processing features for processing image features in the image data and/or video data;
  • a target adjustment value is selected from the parameter adjustable range values, and the image features in the acquired image data and/or video data are processed by the target adjustment value.
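As a rough sketch of the first aspect, deriving the adjustable range from the device's capability and clamping the user's choice into it might look like the following. The capability score, the 0..100 scale, and the clamping behavior are assumptions for illustration, not details from the patent.

```python
def parameter_adjustable_range(image_processing_score):
    """Derive an adjustable range from a device capability score.
    Hypothetical mapping: a more capable device allows a wider range."""
    return (0, min(100, image_processing_score))

def select_target_adjustment(requested, adjustable_range):
    """Clamp the user's requested value into the device's adjustable range."""
    lo, hi = adjustable_range
    return max(lo, min(hi, requested))
```

The effect is the one the embodiments describe: a weaker device simply exposes a narrower range, and any target adjustment value the user picks always stays inside it.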
  • the method further includes:
  • the parameter adjustable range value is determined based at least on an image processing feature for processing image features in image data and/or video data, and an information transmission feature for transmitting the collected image data and/or video data; correspondingly, the method further includes:
  • acquiring image processing features for processing image features in image data and/or video data, and acquiring information transmission features for transmitting the collected image data and/or video data; and
  • sending the image processing features and information transmission features for the electronic device to the server, so that the server determines a parameter adjustable range value that matches the electronic device based at least on the image processing features and information transmission features for the electronic device.
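On the server side, combining the two features into one range could be sketched as below. The weighting and the 0..100 scale are purely hypothetical; the patent only states that the range is determined based at least on both features.

```python
def server_adjustable_range(image_processing_feature, transmission_feature):
    """Server-side sketch: combine a device's processing capability and its
    link quality into a single adjustable range (weights are assumptions)."""
    score = 0.7 * image_processing_feature + 0.3 * transmission_feature
    return (0, min(100, int(score)))
```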
  • the method further includes:
  • An adjuster characterizing the parameter adjustable range value is presented to facilitate the selection of the target adjustment value from the parameter adjustable range value using the adjuster.
  • a second aspect of the embodiments of the present invention provides an image processing method, where the method includes:
  • the method further includes:
  • a third aspect of the embodiments of the present invention provides an electronic device, where the electronic device includes:
  • a detecting unit configured to detect a target operation, the target operation characterizing an operation of the electronic device to adjust image features in the collected image data and/or video data;
  • a first obtaining unit configured to acquire, according to the target operation, a parameter adjustable range value for processing the image feature for the electronic device, where the parameter adjustable range value is based at least on image data and/or video Determining the image processing characteristics of the image features processed in the data;
  • a processing unit configured to select a target adjustment value from the parameter adjustable range values, and process the image features in the collected image data and/or video data by using the target adjustment value.
  • the first acquiring unit is further configured to acquire image processing features for processing image features in image data and/or video data, and to determine, based at least on the image processing features for the electronic device, a parameter adjustable range value that matches the electronic device.
  • the parameter adjustable range value is determined based at least on an image processing feature for processing image features in image data and/or video data, and an information transmission feature for transmitting the collected image data and/or video data; correspondingly,
  • the first acquiring unit is further configured to acquire image processing features for processing image features in image data and/or video data, and acquire information for transmitting the collected image data and/or video data. Transmitting a feature, determining a parameter adjustable range value that matches the electronic device based on at least the image processing feature and the information transmission feature; or acquiring for processing image features in image data and/or video data Image processing features, and acquiring information transmission features for transmitting the collected image data and/or video data, and transmitting image processing features and information transmission characteristics for the electronic device to a server to enable the server to at least A parameter adjustable range value that matches the electronic device is determined based on image processing features and information transmission characteristics for the electronic device.
  • the processing unit is further configured to present an adjuster characterizing the parameter adjustable range value, so as to select the target adjustment value from the parameter adjustable range value by using the adjuster.
  • a fourth aspect of the embodiments of the present invention provides a server, where the server includes:
  • a second acquiring unit configured to acquire an image processing feature corresponding to the electronic device for processing image features in the collected image data and/or video data
  • a determining unit configured to determine, based at least on an image processing feature for the electronic device, a parameter adjustable range value that matches the image processing feature of the electronic device, so that the electronic device can select the target adjustment value from the parameter adjustable range value and process the image feature in the image data and/or video data collected by the electronic device by using the target adjustment value.
  • the second acquiring unit is further configured to acquire an information transmission feature corresponding to the electronic device for transmitting the collected image data and/or video data; correspondingly,
  • the determining unit is further configured to determine, according to an image processing feature and an information transmission feature for the electronic device, a parameter adjustable range value that matches at least an image processing feature and an information transmission feature of the electronic device.
  • with the image processing method, the electronic device, and the server according to the embodiments of the present invention, before the image features in the collected image data and/or video data are adjusted, a parameter adjustable range value determined based at least on the image processing features for the image data and/or video data is first acquired. Since the parameter adjustable range value is determined based on the image processing features of the electronic device, the embodiments of the present invention can achieve different beauty intensities for electronic devices with different performance. Further, since the target adjustment value used by the electronic device to adjust the image features is selected from the parameter adjustable range value, the embodiments of the present invention achieve adjustment of the beauty intensity within a certain range and satisfy the user's different requirements for the beauty intensity in different states. Therefore, the method according to the embodiments of the present invention enriches the available functions while also enhancing the user experience.
  • FIG. 1-1 is a schematic flowchart of a method for image processing according to an embodiment of the present invention;
  • FIG. 1 is a structural block diagram of a video communication system according to an embodiment of the present invention.
  • FIG. 2 is a signaling flowchart of a video communication method according to an embodiment of the present invention.
  • FIG. 3 is another signaling flowchart of a video communication method according to an embodiment of the present invention.
  • FIG. 4 is a flow chart of a method for determining a selected video effect
  • FIG. 5 is a structural block diagram of a terminal device
  • FIG. 6 is a schematic diagram showing a process of adding a face pendant to a video image
  • FIG. 7 is another schematic flowchart of implementing a face pendant and a filter effect added in a video image
  • FIG. 8 is a flowchart of a method for performing video encoding processing on a video image to which a video effect is added;
  • Figure 9 is a schematic diagram showing the performance of skin-smoothing and skin-beautification processing on the GPU of the iPhone 5S;
  • Figure 10 is a schematic diagram showing the performance of skin-smoothing and skin-beautification processing on the GPU of the iPhone 4S;
  • Figure 11 is a schematic diagram showing the performance of face recognition technology tested on different iPhone models;
  • FIG. 12 is a structural block diagram of a video communication apparatus according to an embodiment of the present invention.
  • FIG. 13 is a block diagram showing another structure of a video communication apparatus according to an embodiment of the present invention.
  • FIG. 14 is a block diagram showing still another structure of a video communication apparatus according to an embodiment of the present invention.
  • FIG. 15 is a schematic flowchart of an implementation process of an image processing method according to an embodiment of the present invention.
  • FIG. 16 is a schematic diagram of an interface of a beauty intensity slider of an electronic device according to an embodiment of the present invention;
  • FIG. 17 is a schematic view showing a range of beauty intensity according to an embodiment of the present invention;
  • FIG. 18 is a schematic diagram 1 of information interaction between an electronic device and a server in an image processing method according to an embodiment of the present invention.
  • FIG. 19 is a second schematic diagram of information exchange between an electronic device and a server in an image processing method according to an embodiment of the present invention.
  • FIG. 20 is a schematic flowchart diagram of an image processing method according to an embodiment of the present invention.
  • FIG. 21 is a schematic flowchart of application scenario 2 in an image processing method according to an embodiment of the present invention.
  • FIG. 22 is a schematic diagram of a beauty flow in an image processing method according to an embodiment of the present invention.
  • FIG. 23 is a schematic diagram of an effect of performing a beauty treatment using an image processing method according to an embodiment of the present invention.
  • FIG. 24 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
  • FIG. 25 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • the embodiment of the present invention includes:
  • the video communication system may include: a first terminal device 10 and a second terminal device 20;
  • the first terminal device and the second terminal device may be user-side devices having image capturing devices such as cameras and having data processing capabilities, such as a smartphone with a camera, a tablet computer, a notebook computer, and the like.
  • the data interaction between the first terminal device and the second terminal device may be implemented through a network, and may be supported by a network server that provides a video communication service, such as an IM (instant messaging) server having a video communication function.
  • FIG. 1 is a schematic diagram of a two-person video communication scenario; the embodiment of the present invention can also support multi-person video communication, such as video communication among group users.
  • the first terminal device and the second terminal device each perform both the process of processing and transmitting the video image collected by the image capturing device and the process of receiving and displaying a video image; for ease of description, the video communication process provided by the embodiment of the present invention is introduced by taking the first terminal device as the video image transmitting device and the second terminal device as the video image receiving device as an example.
  • obviously, the second terminal device also becomes a video image transmitting device when transmitting a video image to the first terminal device, and the first terminal device becomes a video image receiving device when receiving the video image sent by the second terminal device; that communication process follows the same principle as the process in which the first terminal device transmits and the second terminal device receives, and the two can be cross-referenced.
  • FIG. 2 is a signaling flowchart of a video communication method according to an embodiment of the present invention. Referring to FIG. 2, the process may include:
  • Step S10 The first terminal device establishes a video communication connection with the second terminal device.
  • the first terminal device can establish a video communication connection with the second terminal device through a network server that supports video communication, such as an IM (Instant Messaging) server with a video communication function.
  • Step S11 The first terminal device acquires a video image collected by the image acquiring device of the first terminal device.
  • the first terminal device and the second terminal device can mutually transmit video images; the description here takes the transmission of a video image from the first terminal device to the second terminal device as an example.
  • the first terminal device may add a video special effect to the video image captured by an image capturing device such as a camera, and transmit the video image with the video special effect to the second terminal device; the processing of the second terminal device is the same. To this end, the first terminal device needs to acquire the video image captured by the image capturing device; the captured video image is transmitted to the CPU of the first terminal device.
  • Step S12 The first terminal device determines the selected filter effect.
  • Filter effects include, for example, comic-effect filters, cartoon-effect filters, and filters with different color effects.
  • Step S13 The first terminal device determines a data processing quantity type of the filter effect.
  • Step S14 If the data processing quantity type conforms to the first type, add the filter effect to the video image in a CPU of the first terminal device.
  • Step S15 If the data processing quantity type conforms to the second type, instruct the GPU of the first terminal device to add the filter effect on the video image.
  • the data processing amount corresponding to the first type corresponds to a data processing amount range set by the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type.
  • when adding a filter effect to the video image, the embodiment of the present invention may select, according to the processing complexity of the filter effect to be added, whether the CPU or the GPU of the first terminal device implements the addition of the filter effect: when the processing complexity of the filter effect is low, the filter effect is added to the video image on the CPU of the first terminal device, and when the processing complexity of the filter effect is high, the filter effect is added to the video image on the GPU of the first terminal device.
  • since the GPU is a dedicated graphics processing device, using the GPU for image processing generally requires calling the OpenGL interface (Open Graphics Library, a professional graphics program interface defined as a cross-language, cross-platform programming interface specification), and calls to the OpenGL interface heavily occupy the processing resources of the GPU, increasing the processing load on the terminal device.
  • because the GPU is already involved in many graphics processing tasks, a GPU with sufficient performance would otherwise be required to meet the requirement of adding video effects to the video image, which would place higher requirements on the performance configuration of the terminal device. Therefore, based on the processing complexity of the filter effect, when the processing complexity is low and the filter effect can be implemented by the CPU, the embodiment of the present invention adds the filter effect on the CPU of the first terminal device, reducing calls to the OpenGL interface and reducing the processing pressure on the GPU; when the processing complexity of the filter effect is high and the filter effect is not suitable for the CPU, the filter effect is implemented on the GPU to ensure that it is added smoothly.
  • in this way, a terminal device with a certain comprehensive performance can meet the requirement of adding video special effects to the video image without strengthening the performance configuration of any single component such as the GPU, which makes it possible for low-configuration terminal devices to implement filter effects and reduces the usage limitations of video communication based on video effects.
  • the processing complexity of a filter effect can be measured by its data processing amount: generally, the lower the data processing amount of the filter effect, the lower the processing complexity, and the higher the data processing amount, the higher the processing complexity;
  • the embodiment of the present invention may define the data processing amount type of the filter effect according to the data processing amount of the filter effect, defining a first type and a second type; wherein the data processing amount corresponding to the first type falls within the data processing amount range set for the CPU, and is lower than the data processing amount corresponding to the second type;
  • the data processing amount threshold separating the first type from the second type may be pre-analyzed according to the CPU data processing capability of the first terminal device; for first terminal devices with different CPU data processing capabilities, the determined threshold values may also be different.
  • when it is determined that the data processing amount type of the selected filter effect conforms to the first type, it may be determined that the filter effect can be implemented by the CPU, and the filter effect is added to the video image in the CPU of the first terminal device; when the data processing amount type of the selected filter effect conforms to the second type, it can be determined that the filter effect is not suitable for implementation by the CPU, and the filter effect is added to the video image by the GPU, a professional graphics processing device.
  • optionally, the data processing amount type of the selected filter effect may be determined according to the data processing amount of the filter effect: if the data processing amount of the filter effect falls within the data processing amount corresponding to the first type, it may be determined that the data processing amount type conforms to the first type; if it falls within the data processing amount corresponding to the second type, it may be determined that the data processing amount type conforms to the second type;
  • the filter effect data may be stored locally by the first terminal device or downloaded from the network, and each filter effect may carry a corresponding data processing amount type identifier, the identifier corresponding to the data processing amount type analyzed in advance based on the data processing amount of the filter effect.
  • optionally, the data processing amount range set for the CPU may be determined according to the model capability of the first terminal device; the model capability referred to here may be the upper limit value of the data processing amount of the CPU, which is generally determined by the CPU's hardware configuration. The range may be determined by first determining the upper limit value of the data processing amount of the CPU, setting a CPU occupation ratio range for the CPU to process filter effects, and then deriving the data processing amount range from the upper limit value and the CPU occupation ratio range (for example, by multiplying the two).
  • for different models, the data processing amount ranges set for the CPU are different, that is, the data processing amounts corresponding to the first type and the second type differ across models.
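The derivation described above ("multiplying the two") can be expressed as a one-line sketch; the concrete units and the idea of representing the occupation ratio as a single fraction are assumptions for illustration.

```python
def cpu_data_amount_range(cpu_upper_limit, occupation_ratio):
    """Upper bound of the data processing amount the CPU may spend on
    filter effects: the CPU's processing upper limit scaled by the share
    of CPU time allotted to filter processing (hypothetical derivation)."""
    return cpu_upper_limit * occupation_ratio
```

A stronger model (larger upper limit) then automatically gets a wider first-type range, which matches the statement that the ranges differ across models.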
  • optionally, the filter effect implemented in the CPU may be based on data in the YUV format; that is, if the video image acquired from the image capturing device is not in the YUV format, the embodiment of the present invention needs to convert it to the YUV format before the filter effect can be added to the video image.
  • the filter effect implemented in the GPU may be based on data in the RGB format; that is, the video image may be rendered into the GPU and saved in the RGB format, and the GPU adds the filter effect to the video image.
  • the video image with the added filter effect is then transferred to the CPU, which further converts it to the YUV format for subsequent video encoding processing and the like.
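The RGB-to-YUV step mentioned above is a standard color-space conversion; one common full-range BT.601 per-pixel form is shown below (the patent does not specify which matrix is used, so treat the coefficients as one conventional choice rather than the patented method).

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601 RGB -> YUV conversion, of the kind used when
    moving a GPU-filtered RGB frame back to the CPU for video encoding.
    Inputs and outputs are 0..255 channel values (chroma offset by 128)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128
    return y, u, v
```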
  • Step S16 Determine a video image to which a video effect is added, at least according to the video image to which the filter effect is added.
  • the video image to which the video effect is added may be the video image with only the filter effect added, or it may be a video image with both a filter effect and a face pendant image added.
  • Step S17 The video image to which the video special effect is added is transmitted to the second terminal device.
  • the first terminal device may perform video encoding processing on the video image to which the video special effect is added, and then transmit the encoded video image to the second terminal device; optionally, before video encoding the video image to which the video special effect is added, the first terminal device may further perform pre-coding processing on it, the pre-coding processing including at least one of the following: noise reduction and sharpening processing, and beautification such as skin-smoothing (dermabrasion) processing.
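As an illustration of the sharpening part of the pre-coding stage, a single unsharp-masking step on one channel value might look like this; the technique and the `amount` parameter are conventional image-processing choices assumed here, not specified by the patent.

```python
def unsharp_mask(pixel, blurred, amount=0.5):
    """One unsharp-masking step: boost a pixel by a fraction of its
    difference from a blurred copy (hypothetical parameters)."""
    return pixel + amount * (pixel - blurred)
```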
  • the embodiment of the present invention may invoke a predetermined filter effect implementation algorithm to add the filter effect to the video image.
• the embodiment of the present invention may invoke an OpenGL interface to instruct the GPU to run a predetermined filter effect algorithm, so that the filter effect is added to the video image.
• the first terminal device may determine whether the filter effect is implemented in the CPU or in the GPU according to the data processing type of the filter effect: when the processing complexity corresponding to the data processing type is low (the first type), the filter effect may be added to the video image in the CPU of the first terminal device; when the processing complexity is high (the second type), the GPU of the first terminal device may be instructed to add the filter effect to the video image. In this way, the embodiment of the present invention reasonably allocates, according to the data processing type of the filter effect, the processing device that performs the filter effect addition, so that the data processing pressure involved in adding the filter effect can be shared between the CPU and the GPU. As long as the terminal device has a certain overall performance, video effects can be added to the video image without enhancing the performance configuration of any single component such as the GPU; a filter effect can therefore be achieved even on a low-configuration terminal device, reducing the limitations that video effects impose on the use of video communication.
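The CPU/GPU allocation described above can be sketched as follows. This is a hypothetical illustration: the threshold name and the "operations per pixel" measure are assumptions standing in for the model-dependent data processing amount ranges mentioned earlier, not anything specified in the application:

```python
# Hypothetical threshold separating the "first type" (low complexity) from the
# "second type" (high complexity); a real implementation would derive it from
# the CPU model, since the ranges differ across device models.
FIRST_TYPE_MAX_OPS_PER_PIXEL = 8

def choose_processor(filter_ops_per_pixel):
    """Return 'CPU' for first-type (low complexity) filters, 'GPU' otherwise."""
    if filter_ops_per_pixel <= FIRST_TYPE_MAX_OPS_PER_PIXEL:
        return "CPU"   # first type: add the filter effect in the CPU
    return "GPU"       # second type: instruct the GPU (e.g. via OpenGL)
```

A simple brightness tweak would land on the CPU under this scheme, while a multi-pass stylization filter would be dispatched to the GPU.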
  • the first terminal device can adjust a frame rate of the video image processed by the video special effect according to the current network bandwidth, and the higher the current network bandwidth, the higher the frame rate.
• when the current network bandwidth is high, the frame rate of the video image processed by the video special effect is increased, improving the video image quality on the basis of ensuring the reliability of the video communication; when the current network bandwidth is low, the frame rate of the video image processed by the video special effect is reduced to ensure the reliability of the video communication.
• the frame rate adjustment of the video image processed by the video special effect may be implemented by adjusting the frame rate at which the video image is acquired from the image capturing device, as shown in FIG. 3; FIG. 3 is another flowchart of the video communication method, and the process can include:
  • Step S20 The first terminal device acquires a current network bandwidth.
• the manner in which the first terminal device obtains the current network bandwidth may vary; for example, it may exchange bandwidth detection packets with a server having a network bandwidth detection function to detect the current network bandwidth.
  • Step S21 The first terminal device determines, according to the first correspondence between the preset network bandwidth range and the image acquisition frame rate, an image acquisition frame rate corresponding to a network bandwidth range in which the current network bandwidth is located.
  • the network bandwidth range is positively correlated with the corresponding image acquisition frame rate, that is, the higher the network bandwidth range, the higher the image acquisition frame rate.
• different network bandwidth ranges may be set in advance, each corresponding to a different image acquisition frame rate; the higher the network bandwidth range, the higher the corresponding image acquisition frame rate (that is, the network bandwidth range is positively correlated with the corresponding image acquisition frame rate), thereby forming the first correspondence.
• the frame rate of the video image acquired by the first terminal device is limited by the acquisition frame rate of the image capturing device (such as the camera) of the first terminal device; therefore, the upper limit of the image acquisition frame rate may be the acquisition frame rate of the image capturing device. Table 1 below shows an optional illustration of the network bandwidth ranges and image acquisition frame rates, which can be referred to, where the maximum frame rate is the acquisition frame rate of the image acquisition device.
  • the embodiment of the present invention may determine an image acquisition frame rate corresponding to a network bandwidth range in which the current network bandwidth of the first terminal device is in the first correspondence relationship.
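The lookup in step S21 can be sketched as below. Since Table 1's values are not reproduced in this text, the bandwidth thresholds and frame rates here are hypothetical; only the structure (descending bandwidth ranges, with the capture device's own maximum frame rate as the upper limit) follows the description above:

```python
# Hypothetical first correspondence: (minimum bandwidth in kbps, frame rate in
# fps), sorted descending. The top entry is capped at the capture device's
# maximum acquisition frame rate, per the text above.
DEVICE_MAX_FPS = 30
FIRST_CORRESPONDENCE = [
    (1000, DEVICE_MAX_FPS),
    (500, 20),
    (200, 15),
    (0, 10),
]

def acquisition_frame_rate(bandwidth_kbps):
    """Pick the image acquisition frame rate for the current network bandwidth."""
    for min_bw, fps in FIRST_CORRESPONDENCE:
        if bandwidth_kbps >= min_bw:
            return fps
    return FIRST_CORRESPONDENCE[-1][1]  # fallback: lowest configured rate
```

The positive correlation the text requires falls out of the table ordering: any bandwidth in a higher range maps to an equal-or-higher frame rate.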
  • Step S22 The first terminal device acquires a video image collected by the image acquiring device of the first terminal device according to the determined image acquisition frame rate.
• the image acquisition frame rate is positively correlated with the current network bandwidth of the first terminal device; that is, the higher the current network bandwidth of the first terminal device, the larger the image acquisition frame rate, and the lower the current network bandwidth, the smaller the image acquisition frame rate.
• when the current network bandwidth of the first terminal device is high, acquiring the video image captured by the image acquisition device at a higher image acquisition frame rate yields more video image frames per unit time; adding video effects to this larger number of frames per unit time makes the image quality of the video image with the added video effect higher. When the current network bandwidth is low, the embodiment of the present invention can acquire the video image captured by the image acquisition device at a lower image acquisition frame rate, so that fewer video image frames are obtained per unit time; adding video effects to this smaller number of frames per unit time can ensure the reliability of the video communication.
  • step S20 to the step S22 may be considered as an optional implementation manner of acquiring the video image collected by the image acquiring device of the first terminal device in step S11 shown in FIG. 2 .
  • Step S23 The first terminal device adds the selected video special effect to the acquired video image.
• adding the selected video effect to the video image may include adding a face pendant to the video image and/or adding a filter effect to the video image; if a filter effect is selected, whether the filter effect is implemented on the CPU or the GPU is determined according to the method shown in FIG. 2;
• the addition of the face pendant can be implemented on the GPU. If the filter effect is also implemented on the GPU, the GPU may add the filter effect to the video image to which the face pendant has been added, or add the face pendant to the video image to which the filter effect has been added; if the filter effect is implemented on the CPU, the CPU can add the filter effect to the video image to which the GPU has added the face pendant.
  • Step S24 The first terminal device transmits a video image to which the video special effect is added to the second terminal device.
• the first terminal device may perform video encoding processing on the video image to which the video special effect is added, and then transmit the encoded video image to the second terminal device; optionally, before video encoding the video image to which the video special effect is added, the first terminal device may further perform pre-coding processing on it, the pre-coding processing including at least one of the following: noise reduction and sharpening processing, skin-smoothing (beautification) processing, and the like.
• the first terminal device may adjust the image acquisition frame rate according to the current network bandwidth, the current network bandwidth being positively correlated with the adjusted image acquisition frame rate; thus, the first terminal device may acquire, at the adjusted image acquisition frame rate, the video image collected by its image capturing device, and add the selected video effect to the acquired video image, so that the frame rate of the video image processed with the video effect is adapted to the current network bandwidth.
• the video communication method provided by the embodiment of the present invention can dynamically adjust the image frame rate to correspond to the current network bandwidth, so that the frame rate of the video image processed with the video effect is adapted to the current network bandwidth; video communication with added video effects can thus be carried out reliably, ensuring the reliability of video communication.
• further, the embodiment of the present invention may determine the video special effect selected by the user and add the user-selected video special effect to the acquired video image only when that effect corresponds to the device configuration information of the first terminal device and the current network bandwidth; in this way, when the device configuration is low and the network bandwidth is low, the first terminal device reduces the executable video special effect types and further reduces the resource consumption of the terminal device;
  • FIG. 4 is a flowchart of a method for determining a selected video effect according to an embodiment of the present invention.
  • the method may be applied to a first terminal device.
  • the method may include:
  • Step S100 Determine, according to the device configuration information of the first terminal device, and the current network bandwidth, at least one video effect type currently executable.
  • the embodiment of the present invention may preset a second correspondence between the device configuration level, the network bandwidth range, and the video effect type; wherein, the video effect type corresponding to one device configuration level and one network bandwidth range may be at least one.
• the higher the device resources and bandwidth resources of the first terminal device, the more video special effect types it can achieve;
• Table 2 below shows an optional illustration of the second correspondence among device configuration level, network bandwidth range, and video effect type, which can be referred to; the data processing amount of a basic filter is low, while that of a complex filter is high;
• the embodiment of the present invention can obtain the device configuration information of the first terminal device, such as memory, CPU, and GPU, and determine the device configuration level of the first terminal device based on a predetermined device configuration grading policy; generally, the higher the device configuration of the terminal device (such as larger memory or more CPU cores), the higher the device configuration level;
• the embodiment of the present invention may look up, in the second correspondence, the video effect type corresponding to both the device configuration level of the first terminal device's configuration information and the network bandwidth range in which the current network bandwidth is located, thereby determining at least one video effect type currently executable by the first terminal device.
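The step S100 lookup can be sketched as below. Table 2's actual contents are not reproduced in this text, so the levels, range names, and effect types here are hypothetical placeholders for the second correspondence:

```python
# Hypothetical second correspondence:
# (device configuration level, bandwidth range) -> executable effect types.
SECOND_CORRESPONDENCE = {
    ("high", "high"): ["face_pendant", "basic_filter", "complex_filter"],
    ("high", "low"):  ["face_pendant", "basic_filter"],
    ("low", "high"):  ["basic_filter"],
    ("low", "low"):   ["basic_filter"],
}

def executable_effect_types(device_level, bandwidth_range):
    """Determine the video effect types currently executable (step S100)."""
    return SECOND_CORRESPONDENCE.get((device_level, bandwidth_range), [])
```

Only the effects returned here would be displayed in the selection area (step S110), which is how a low-end device on a slow network ends up with a reduced effect menu.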
  • Step S110 Display a video effect corresponding to the at least one video effect type that can be executed.
• the first terminal device may display, in a video effect selection area of the video communication interface, the video effect types it can currently execute; each displayed video effect type may correspond to at least one video special effect, so that the user can select the video effect to be added to the video image.
  • Step S120 Determine a video effect selected from the displayed video effects.
  • the embodiment of the present invention can determine the filter effect selected from the displayed video effects, that is, the method shown in FIG. 4 is used to implement step S12 shown in FIG. 2;
  • the embodiment of the present invention can also implement the selection of the face pendant image.
• it can be seen that when the current network bandwidth of the first terminal device is high and the device configuration level of the first terminal device is high, the embodiment of the present invention can acquire the video image to which effects are to be added at a higher image acquisition frame rate and add video effects from a larger set of executable video effect types, improving the image quality of the video image with added effects on the basis of ensuring the reliability of the video communication; when the current network bandwidth is low and the device configuration level is low, the embodiment of the present invention can acquire the video image to which effects are to be added at a lower image acquisition frame rate and add video effects from a smaller set of executable video effect types, reducing the resource consumption and network bandwidth consumption of the video communication process and ensuring the reliability of video communication.
  • the video communication method described above may be performed by a CPU of the first terminal device, and FIG. 5 shows a structure of the first terminal device.
• the first terminal device may include: an image collection device 11, a CPU (Central Processing Unit) 12, and a GPU (Graphics Processing Unit) 13; the structure of the second terminal device is similar to that of the first terminal device;
• the embodiment of the present invention can implement the addition of the face pendant to the video image; to further reduce the CPU resource consumption of the first terminal device, the embodiment of the present invention can use the grayscale channel image of the video image to determine the location of the face pendant in the video image;
  • FIG. 6 is a schematic flowchart of the first terminal device implementing the addition of the face pendant in the video image, and the process may include:
  • Step S30 The CPU acquires a video image acquired by the image capturing device by using the determined image acquisition frame rate.
• the first terminal device may enable an image capturing device such as a camera to collect a local video image, where the video image generally includes the face of the first user and a background image of the environment in which the first user is located. After determining the corresponding image acquisition frame rate according to the current network bandwidth, the first terminal device may acquire the video image from the image collection device at the corresponding frame rate.
• the video image captured by the image acquisition device may be a YUV video image (where Y refers to luminance and UV collectively refer to chrominance) or an RGB video image (where R represents red, G represents green, and B represents blue); the format of the captured video image may depend on the type of image capture device.
  • Step S31 The CPU determines the selected face pendant image.
• the user may select the face pendant image from a network server or from local storage of the first terminal device; after the CPU detects the face pendant image selection command input by the user, it can determine the selected face pendant image.
• the video communication interface of the first terminal device may display a face pendant image selection area, where the face pendant image selection area may display face pendant images downloaded from a network server and/or stored locally by the first terminal device; the user can select a face pendant image from this selection area.
  • Step S32 The CPU extracts a grayscale channel image of the acquired video image.
• after the CPU determines that a face pendant image needs to be added to the video image, in order to identify the location of the face pendant image in the video image, the CPU needs to extract the grayscale channel image of each acquired frame of video image.
• the CPU can perform facial feature point positioning based solely on the grayscale channel image of the video image; specifically, the CPU can extract the grayscale channel image from the video image: for a video image in YUV format, the Y channel image (the Y channel is the luminance channel, whose image representation is similar to that of a grayscale channel), and for a video image in RGB format, the G channel image (whose representation likewise behaves like that of a grayscale channel).
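A minimal sketch of this channel extraction (illustrative only; a real implementation would slice planar camera buffers rather than per-pixel tuples, and the frame layout here is an assumption):

```python
def grayscale_channel(frame, fmt):
    """Extract the channel used for face detection: the Y plane for YUV
    frames, the G channel for RGB frames. `frame` is a row-major list of
    rows, each row a list of (c0, c1, c2) pixel tuples."""
    if fmt == "YUV":
        return [[px[0] for px in row] for row in frame]  # Y (luminance) plane
    if fmt == "RGB":
        return [[px[1] for px in row] for row in frame]  # G channel
    raise ValueError("unsupported format: " + fmt)
```

Running face detection on this single channel instead of the full three-channel frame is what saves CPU work in the steps that follow.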
  • Step S33 The CPU identifies the position of each facial feature point in the grayscale channel image.
• the CPU can process the grayscale channel image with face recognition technology to locate the facial feature points in the grayscale channel image, such as the points of the facial features (eyebrows, eyes, ears, nose, mouth).
• the CPU may determine a face region in the grayscale channel image by using a face detection technique, and perform facial feature point localization on the face region to determine the position of each facial feature point in the grayscale channel image; the obtained position of each facial feature point in the grayscale channel image can be regarded as the position of each facial feature point in the video image collected by the image acquisition device.
• optionally, the CPU can first rotate the grayscale channel image so that the face in the grayscale channel image is upright, and then perform the facial feature point positioning.
• optionally, to reduce the amount of data processing, the embodiment of the present invention may further shrink the grayscale channel image to obtain a grayscale channel reduced image, perform facial feature point positioning on the grayscale channel reduced image, and then convert the positions of the facial feature points located in the grayscale channel reduced image into the positions of the facial feature points in the grayscale channel image.
  • the position of the face feature point in the image may be defined by the coordinates of the face feature point in the image.
  • Step S34 The CPU determines, according to the face feature point corresponding to the face pendant image and the location of each face feature point, the added position of the face pendant image in the video image.
• each face pendant image has a corresponding facial feature point, and the face pendant image is added at its corresponding facial feature point; for example, a rabbit-ear pendant image corresponds to the ear feature points of the face, and an eyeglasses pendant image corresponds to the eye feature points of the face, and so on. After determining the face pendant image to be added to the video image, the CPU may determine the added position of the face pendant image in the video image according to the facial feature point corresponding to the face pendant image and the position of each facial feature point in the grayscale channel image determined in step S33;
• that is, the position of the facial feature point corresponding to the face pendant image is matched from the positions of the facial feature points in the grayscale channel image, and that position is taken as the added position of the face pendant image in the video image.
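The anchor lookup in step S34 can be sketched as follows. The pendant names, feature point names, and pixel offsets below are hypothetical illustrations, not values from the application:

```python
# Hypothetical anchor table: each pendant declares the facial feature point it
# attaches to, plus a pixel offset for where its top-left corner is drawn.
PENDANT_ANCHORS = {
    "rabbit_ears": ("left_ear", (-20, -40)),
    "glasses":     ("left_eye", (-10, -5)),
}

def pendant_position(pendant, feature_points):
    """Return the (x, y) at which to draw the pendant in the video image,
    given the located feature points as a name -> (x, y) mapping."""
    anchor_name, (dx, dy) = PENDANT_ANCHORS[pendant]
    x, y = feature_points[anchor_name]
    return (x + dx, y + dy)
```

This position is what the CPU later transmits to the GPU alongside the rendered textures (step S35).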
  • Step S35 The CPU renders the video image and the face pendant image to the GPU of the first terminal device to generate a video image and a face pendant image in a first image format in the GPU; The added location is transmitted to the GPU.
• the CPU may call the OpenGL interface (a professional, cross-language, cross-platform graphics programming interface specification) to render the video image and the face pendant image to textures of the GPU, and save the video image and the face pendant image in the first image format in the GPU;
• the first image format may be an RGB format; correspondingly, if the video image and/or the face pendant image is in YUV format, image format conversion is required when the CPU renders them to the GPU, so that the video image and the face pendant image are saved in RGB format in the GPU;
  • the CPU may also notify the GPU of the added position determined in step S34, so that the GPU implements the addition of the face pendant image on the video image.
  • Step S36 The GPU adds the video image of the first image format and the face pendant image according to the added position to obtain a special effect video image of the first image format.
• combining the video image of the first image format with the face pendant image means adding the face pendant image to the video image; specifically, the GPU may add the face pendant image to the video image of the first image format at the added position, obtaining a special effect video image in the first image format.
• optionally, the GPU may also add a filter effect to the video image, obtaining a special effect video image in the first image format with both the filter effect and the face pendant added.
  • Step S37 The GPU transmits the special effect video image of the first image format to the CPU.
  • the CPU may convert the special effect video image of the first image format into the second image format, and then perform video encoding processing on the special effect video image of the second image format, and transmit the special effect video image after the video encoding process to a second terminal device;
  • the second image format may be a YUV format;
• the CPU may convert the RGB format to the YUV format to obtain a special effect video image in YUV format, and perform subsequent processing on that YUV special effect video image.
• optionally, the CPU may add a filter effect on the second-image-format video image to which the face pendant has been added; the CPU may then subject this YUV video image, carrying both the face pendant and the filter effect, to pre-coding processing before the video encoding process. The pre-coding processing includes at least one of the following: noise reduction and sharpening (generally processing only the Y channel image), skin-smoothing processing (generally processing only the Y channel image), certain special filters (generally processing only the UV channel images), and the like; the encoded video image is then transmitted to the second terminal device.
• the following describes, as an example, the process of adding a face pendant and a filter effect to a video image provided by the embodiment of the present invention, taking a video image captured by the image capture device in YUV format (an optional form of the second image format); the description is given only from the perspective of processing by the first terminal device, and the process may be as shown in FIG. 7, including:
• Step S40 The CPU acquires, at the determined image acquisition frame rate, the YUV video image captured by the camera.
  • Step S41 The CPU determines the selected face pendant image and the selected filter effect, and determines that the data processing type of the filter effect is the first type.
• Step S42 The CPU extracts the Y channel image from the YUV video image.
  • the Y channel image is only an optional form of the grayscale channel image.
  • the image representation of the Y channel in the YUV video image is similar to the performance of the grayscale channel.
  • Step S43 The CPU reduces the Y channel image according to the set ratio to obtain a Y channel reduced image.
  • the Y-channel reduced image is only an optional form of the grayscale channel reduced image.
• the size of the reduced image, or the reduction ratio of the video image, may be set, and the video image is reduced according to the set ratio in order to reduce the amount of data processing involved in the face recognition technology and improve face recognition processing efficiency. Optionally, the size of the reduced image should not affect the accuracy of the face recognition; the specific size may be set according to the actual situation. After the Y channel image is reduced, the Y channel reduced image is obtained.
  • Step S44 The CPU rotates the Y channel reduced image according to the set rotation angle.
• the set rotation angle may be the angular difference between the video image captured by the camera and the on-screen display image, such as 90 degrees or 180 degrees; by rotating the Y channel reduced image by the set rotation angle, the face in the Y channel reduced image can be made upright.
  • Step S45 The CPU recognizes the position of each facial feature point in the rotated Y channel reduced image.
• the embodiment of the present invention may apply face recognition technology to perform face recognition processing on the rotated Y channel reduced image, and locate the position of each facial feature point in the rotated Y channel reduced image.
  • Step S46 The CPU converts the position of each face feature point in the rotated Y channel reduction image into a position of each face feature point in the Y channel image, and obtains the position of each face feature point in the YUV video image.
• the specific conversion process may be: reversely rotating the rotated Y channel reduced image according to the set rotation angle (in the direction opposite to the rotation of the Y channel reduced image in step S44), and determining the position of each facial feature point in the reversely rotated Y channel reduced image; then enlarging the reversely rotated Y channel reduced image according to the set ratio, and determining the position of each facial feature point in the enlarged image (i.e., the Y channel image). The position of each facial feature point in the Y channel image obtained after this conversion can be regarded as the position of each facial feature point in the YUV video image.
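The back-conversion can be sketched as below for the 90/180/270-degree rotations typically used for camera orientation. This is an illustration of the reverse-rotate-then-enlarge idea, not the application's exact arithmetic:

```python
def to_full_image_coords(pt, rotated_size, angle_deg, scale):
    """Map a feature point found in the rotated, reduced Y-channel image back
    to full Y-channel image coordinates. `rotated_size` is the (width, height)
    of the rotated reduced image, `angle_deg` the clockwise rotation that was
    applied (0/90/180/270), and `scale` the reduction ratio (e.g. 0.25)."""
    x, y = pt
    w, h = rotated_size
    if angle_deg == 90:        # undo a 90-degree clockwise rotation
        ux, uy = y, w - 1 - x
    elif angle_deg == 180:
        ux, uy = w - 1 - x, h - 1 - y
    elif angle_deg == 270:
        ux, uy = h - 1 - y, x
    else:
        ux, uy = x, y
    # enlarge from the reduced image back to the full-size Y channel image
    return (ux / scale, uy / scale)
```

For example, a point located at (39, 10) in a 60x40 image that was produced by rotating a 40x60 reduced image 90 degrees clockwise (itself a half-size reduction) maps back to (20, 40) in the full Y channel image.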
  • Step S47 The CPU determines, according to the face feature point corresponding to the face pendant image and the position of each face feature point in the YUV video image, the added position of the face pendant image in the YUV video image.
• optionally, the added position may be determined by using points A and B.
  • Step S48 calling the OpenGL interface to render the YUV video image and the face pendant image to the GPU, and saving the RGB format video image and the face pendant image in the GPU; and transmitting the added position to the GPU.
  • Step S49 The GPU adds the video image of the RGB format and the face pendant image according to the added position to obtain a special effect video image of the RGB format.
• for a multi-frame video image sequence, the embodiment of the present invention can add the face pendant image by superimposing it on the video image sequence frame by frame.
  • Step S50 The GPU transmits the special effect video image in the RGB format to the CPU.
  • Step S51 The CPU converts the special effect video image of the RGB format into a special effect video image of the YUV format, and adds a filter effect.
• optionally, the CPU may instead instruct the GPU to add the filter effect to the video image, so that the GPU transmits the RGB special effect video image, with both the face pendant image and the filter effect added, to the CPU, which converts it to the YUV format.
• the CPU may further perform pre-coding processing on the YUV special effect video image carrying the face pendant and the filter effect, then perform video encoding processing on the pre-coded special effect video image, and transmit the encoded special effect video image to the second terminal device;
  • the CPU may transmit the video effect processed video image to the second terminal device by using the communication module of the first terminal device;
  • the communication module is a communication device having network communication capability, such as a WIFI or a GPRS communication module.
  • the video image captured by the image capturing device may also be in the RGB format.
• the processing in this case differs from that shown in FIG. 6 and FIG. 7 in that the CPU acquires, at the determined image acquisition frame rate, the RGB video image captured by the camera, and performs the positioning of each facial feature point in the RGB video image based on the G channel image; meanwhile, the CPU may directly call the OpenGL interface to render the RGB video image and the selected RGB face pendant image to the GPU. Subsequently, after obtaining the RGB special effect video image transmitted by the GPU, the CPU can convert the special effect video image to the YUV format, perform video encoding pre-processing and then video encoding processing, and transmit the encoded video image to the second terminal device.
• in summary, in the embodiment of the present invention, the filter effect on the CPU may be implemented based on a YUV-format video image and the filter effect data, while the filter effect on the GPU may be implemented based on an RGB-format video image and the filter effect data.
• when the filter effect is implemented on the GPU, the CPU may render the video image acquired by the image capturing device and the user-selected face pendant image into the GPU, and instruct the GPU to add both the selected face pendant image and the filter effect to the video image; the GPU may first add the filter effect to the video image and then add the face pendant image to the video image with the filter effect, or first add the face pendant image to the video image and then add the filter effect to the video image with the face pendant image.
• when the filter effect is implemented on the CPU, the CPU may render the video image acquired by the image capturing device and the user-selected face pendant image into the GPU and instruct the GPU to add the selected face pendant image to the video image; the CPU can then add the selected filter effect to the video image to which the GPU has added the face pendant image.
  • the embodiment of the present invention may further determine a coding resolution when the video is encoded according to the device configuration level and the network bandwidth range;
  • FIG. 8 is a flowchart of a method for performing a video encoding process on a video image to which a video effect is added.
  • the method is applicable to a CPU of the first terminal device. Referring to FIG. 8, the process may include:
• Step S300 Determine, according to a preset third correspondence among device configuration level, network bandwidth range, and encoding resolution, the encoding resolution corresponding to the device configuration level of the first terminal device's configuration information and to the network bandwidth range in which the current network bandwidth is located.
• the encoding resolution may be positively correlated with the device configuration level and the network bandwidth range, with an upper limit determined by the terminal device itself; that is, even when the network bandwidth is high and the device configuration level is high, the embodiment of the present invention can cap the resolution per platform, for example: on iOS, a maximum resolution of 640x480; on Android with a face pendant, a maximum of 480x360; on Android with a filter, a maximum of 1280x720; and when a face pendant and a filter are used simultaneously, a cap of 480x360. Table 3 below shows an optional illustration of the third correspondence, which can be referred to.
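The per-platform cap in step S300 can be sketched as follows, using the iOS/Android limits quoted above; the base resolution is assumed to come from the (not reproduced) Table 3 lookup, and the effect-set keys are an illustrative encoding:

```python
# Per-platform resolution caps from the text above, keyed by platform and the
# set of active effects. frozenset() means no pendant/filter restriction case.
PLATFORM_CAPS = {
    ("ios", frozenset()):                      (640, 480),
    ("android", frozenset({"pendant"})):           (480, 360),
    ("android", frozenset({"filter"})):            (1280, 720),
    ("android", frozenset({"pendant", "filter"})): (480, 360),
}

def encoding_resolution(base_resolution, platform, effects):
    """Clamp the level/bandwidth-derived resolution to the platform cap."""
    cap = PLATFORM_CAPS.get((platform, frozenset(effects)), base_resolution)
    return (min(base_resolution[0], cap[0]), min(base_resolution[1], cap[1]))
```

So even a high-end Android phone on fast WiFi encodes at most 480x360 when a face pendant is active, keeping the pendant-rendering and encoding load bounded.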
  • Step S310 Perform video encoding processing on the video image to which the video special effect is added at the determined encoding resolution.
•   the higher the network bandwidth and the device configuration level of the terminal device, the higher the encoding resolution that can be used for the video encoding processing, improving the image quality of the video image to which the video special effect is added; conversely, the lower the network bandwidth and the device configuration level, the lower the encoding resolution used, reducing CPU and network resource consumption and ensuring the reliability of video communication.
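The resolution selection and capping described above can be sketched as follows. The platform caps (640x480 on iOS, 480x360 for Android face pendants, 1280x720 for Android filters, 480x360 when both are combined) are the figures quoted in the text; the tier table, the 600 kbps bandwidth threshold, and all names are illustrative assumptions, not the patent's actual Table 3.

```python
# Platform caps quoted in the text above.
PLATFORM_CAPS = {
    ("ios", "any"): (640, 480),
    ("android", "pendant"): (480, 360),
    ("android", "filter"): (1280, 720),
    ("android", "pendant+filter"): (480, 360),  # lower cap when both are used
}

# Assumed "third correspondence": (device level, bandwidth tier) -> resolution.
THIRD_CORRESPONDENCE = {
    ("low", "low"): (320, 240),
    ("low", "high"): (480, 360),
    ("high", "low"): (480, 360),
    ("high", "high"): (1280, 720),
}

def encoding_resolution(level, bandwidth_kbps, platform, effects):
    """Pick an encoding resolution, never exceeding the platform cap."""
    tier = "high" if bandwidth_kbps >= 600 else "low"  # assumed threshold
    w, h = THIRD_CORRESPONDENCE[(level, tier)]
    cap_w, cap_h = PLATFORM_CAPS.get((platform, effects), (1280, 720))
    return (min(w, cap_w), min(h, cap_h))
```

With these assumed tables, a high-end Android phone on a fast network that uses both a pendant and a filter would still be capped at 480x360, as the text describes.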
  • the second terminal device may perform video decoding processing on the video image and display the video image.
•   the addition of the video special effect is performed at the first terminal device transmitting the video image, so the second terminal device receiving the video image can directly decode the video image with the video special effect added and display it; the video image viewed at the second terminal device is thus the same as the video image to which the first terminal device added the video special effect, improving the synchronization of video communication between the first terminal device and the second terminal device;
•   as shown in FIG. 9, FIG. 10 and FIG. 11, on the basis of transmitting the video images collected by a conventional camera, special effects processing such as various face processing technologies, virtual props, and various color filters can be added while still meeting the needs of real-time video calls;
•   FIG. 9 is a schematic diagram of testing beauty skin-smoothing performance on the GPU of an iPhone 5S;
•   FIG. 10 is a schematic diagram of testing beauty skin-smoothing performance on the GPU of an iPhone 4S;
•   FIG. 11 is a schematic diagram of the performance of face recognition technology (face detection, tracking, and facial feature point location) on different iPhones.
•   the video communication method provided by the embodiment of the present invention can dynamically adjust the image frame rate to correspond to the current network bandwidth, so that the frame rate of the video image processed with the video special effect is adapted to the current network bandwidth, ensuring that video communication with the video special effect added can be carried out reliably.
•   the video communication apparatus provided by the embodiment of the present invention is described below; the video communication apparatus described below and the video communication method described above may be referred to in correspondence with each other.
  • FIG. 12 is a structural block diagram of a video communication apparatus according to an embodiment of the present invention.
  • the apparatus is applicable to a CPU of a first terminal device.
  • the apparatus may include:
•   a connection establishing module 100 configured to establish a video communication connection with the second terminal device;
  • the video image acquisition module 200 is configured to acquire a video image collected by the image collection device of the first terminal device;
•   a filter effect determining module 300 configured to determine the selected filter effect;
•   a type determining module 400 configured to determine a data processing amount type of the filter effect;
  • a first filter effect adding module 500 configured to add the filter effect to the video image in a CPU of the first terminal device if the data processing amount type conforms to the first type;
•   a second filter effect adding module 600 configured to: if the data processing amount type conforms to the second type, instruct the GPU of the first terminal device to add the filter effect on the video image; wherein the data processing amount corresponding to the first type falls within the data processing amount range set for the CPU, and the data processing amount corresponding to the first type is lower than that corresponding to the second type;
•   a special effect video image determining module 700 configured to determine, according to at least the video image to which the filter effect is added, the video image to which the video special effect is added;
  • the video image transmission module 800 is configured to transmit a video image to which the video special effect is added to the second terminal device.
  • the apparatus may further include:
•   a data processing amount range determining module 900 configured to determine the data processing amount upper limit value of the CPU and the CPU occupancy ratio range set for the CPU to process filter effects, and to determine the data processing amount range according to the data processing amount upper limit value and the CPU occupancy ratio range.
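The logic of module 900, together with the CPU/GPU routing of the first and second filter effect adding modules, can be sketched as follows; the occupancy figures and all names are hypothetical illustrations, not values from the patent.

```python
def data_processing_range(cpu_upper_limit, occupancy_lo, occupancy_hi):
    """Module 900's logic: the range of filter workload the CPU may take,
    derived from its upper limit and the occupancy ratio range set for it."""
    return (cpu_upper_limit * occupancy_lo, cpu_upper_limit * occupancy_hi)

def choose_processor(effect_load, processing_range):
    """Route the filter effect: first type (within the CPU's range) -> CPU;
    a heavier second-type workload -> GPU."""
    lo, hi = processing_range
    return "CPU" if effect_load <= hi else "GPU"
```

For example, with an assumed upper limit of 100 units and an occupancy ratio range of 25-50%, a filter costing 30 units would be handled on the CPU, while one costing 60 units would be delegated to the GPU.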
  • the first filter effect adding module 500 is configured to add the filter effect to the video image in a CPU of the first terminal device, specifically:
  • a predetermined filter effect is invoked to implement an algorithm to add the filter effect to the video image.
  • the second filter effect adding module 600 is configured to instruct the GPU of the first terminal device to add the filter effect on the video image, specifically:
  • the video image obtaining module 200 is configured to acquire the video image collected by the image capturing device of the first terminal device, and specifically includes:
  • FIG. 13 is a block diagram showing another structure of a video communication apparatus according to an embodiment of the present invention. As shown in FIG. 12 and FIG. 13, the apparatus may further include:
•   the display module 1000 is configured to determine, according to the device configuration information of the first terminal device and the current network bandwidth, at least one currently executable video effect type, and to display the video special effects corresponding to the at least one executable video effect type; wherein one video effect type corresponds to at least one video special effect.
  • the filter effect determining module 300 is configured to determine the selected filter effect, and specifically includes:
  • the display module 1000 is configured to determine, according to the device configuration information of the first terminal device, and the current network bandwidth, the currently executable at least one video effect type, specifically:
  • the second corresponding relationship between the preset device configuration level, the network bandwidth range, and the video effect type is retrieved; wherein, the higher the device configuration level and the network bandwidth range, the greater the number of corresponding video effect types;
  • FIG. 14 is a block diagram showing another structure of a video communication apparatus according to an embodiment of the present invention. As shown in FIG. 13 and FIG. 14, the apparatus may further include:
  • a face pendant image selection module 1100 configured to determine a face pendant image selected from the displayed video effects
•   an image acquisition module 1200 configured to: extract a grayscale channel image of the acquired video image; identify the location of each facial feature point in the grayscale channel image; determine, according to the facial feature points corresponding to the face pendant image and the location of each facial feature point, the add position of the face pendant image in the video image; render the video image and the face pendant image to the GPU of the first terminal device, so that the GPU generates a video image and a face pendant image in the first image format; transmit the add position to the GPU; and obtain the special effect video image in the first image format transmitted by the GPU, the special effect video image being a video image in which the face pendant image is added at the add position.
  • the special effect video image determining module 700 is configured to determine, according to at least the video image that adds the filter effect, the video image to which the video special effect is added, specifically:
  • the video image transmission module 800 is configured to transmit, to the second terminal device, a video image that adds a video special effect, specifically:
•   the image acquisition module 1200 is configured to identify the location of each facial feature point in the grayscale channel image, specifically including:
•   the grayscale channel image is reduced according to a set ratio to obtain a reduced grayscale channel image;
•   the position of each facial feature point in the rotated, reduced grayscale channel image is converted to the position of that facial feature point in the grayscale channel image.
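The coordinate conversion described above can be sketched as follows. The direction of rotation (90 degrees clockwise) and the function names are assumptions for illustration; the patent only states that positions in the rotated, reduced image are mapped back to the full grayscale channel image.

```python
def unrotate_cw90(x, y, reduced_h):
    """Inverse of an assumed 90-degree clockwise rotation applied before
    detection: a point (x0, y0) in the un-rotated reduced image lands at
    (reduced_h - 1 - y0, x0), so the inverse is (y, reduced_h - 1 - x)."""
    return (y, reduced_h - 1 - x)

def to_full_resolution(points, scale, reduced_h=None):
    """Map feature points found in the reduced (and optionally rotated)
    grayscale image back to coordinates in the full grayscale image."""
    if reduced_h is not None:
        points = [unrotate_cw90(x, y, reduced_h) for (x, y) in points]
    # Detection ran on an image shrunk by `scale`; divide to scale back up.
    return [(x / scale, y / scale) for (x, y) in points]
```

Running detection on a reduced image cuts the cost of feature-point location, which is why the positions must be mapped back afterwards.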
  • the grayscale channel image is a Y channel image of a YUV format video image, or a G channel image of an RGB format video image;
  • the first image format is an RGB format
  • the second image format is a YUV format.
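As a small illustration of extracting the grayscale (luma) channel mentioned above, the following computes Y from RGB pixels. The BT.601 weights are an assumed convention; the text only says the grayscale channel may be the Y channel of a YUV image or the G channel of an RGB image.

```python
def y_channel(rgb_pixels):
    """Compute the Y (luma) value of each RGB pixel using the BT.601
    weights 0.299/0.587/0.114 (an assumption; any luma convention works
    for feature-point detection on a grayscale image)."""
    return [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in rgb_pixels]
```

Using a single channel such as Y (or simply G) avoids a full color conversion before running face detection.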
  • the special effect video image determining module 700 is configured to determine, according to at least the video image that adds the filter effect, the video image to which the video special effect is added, specifically:
  • the video image transmission module 800 is configured to transmit, to the second terminal device, a video image that adds a video special effect, specifically:
•   the video image transmission module 800 is configured to perform video encoding processing on the video image to which the video special effect is added, specifically including:
  • the video image to which the video effect is added is subjected to video encoding processing at the determined encoding resolution.
  • the embodiment of the present invention further provides a terminal device, and the structure of the terminal device may be as shown in FIG. 5, including: a CPU and a GPU;
•   the CPU is configured to: establish a video communication connection with the second terminal device; acquire a video image collected by the image capture device of the first terminal device; determine the selected filter effect and the data processing amount type of the filter effect; if the data processing amount type conforms to the first type, add the filter effect to the video image in the CPU of the first terminal device; if the data processing amount type conforms to the second type, instruct the GPU of the first terminal device to add the filter effect on the video image, wherein the data processing amount corresponding to the first type falls within the data processing amount range set for the CPU, and the data processing amount corresponding to the first type is lower than that corresponding to the second type; determine, according to at least the video image to which the filter effect is added, the video image to which the video special effect is added; and transmit, to the second terminal device, the video image to which the video special effect is added;
•   the GPU is configured to add the filter effect on the video image, as instructed by the CPU, when the data processing amount type conforms to the second type.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both.
•   the software module can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
•   most apps do not provide a beauty intensity adjustment function; the user can only enable or disable the beauty function, and even when it is enabled, a default beauty intensity is used for the beauty treatment, which inevitably reduces the user experience, because users have different requirements for beauty intensity in different states: for example, when makeup is already applied, the skin condition is good, or the lighting is good, a weaker beauty intensity may be needed, whereas when the state is poor, no makeup is applied, or acne is present, a stronger beauty intensity may be required.
•   moreover, the beauty intensity cannot be set according to the actual needs of the user; that is, once the app is released, the beauty intensity cannot be adjusted, and only the default beauty intensity can be used.
  • an embodiment of the present invention provides an image processing method, an electronic device, and a server.
•   the present invention will be described in detail below with reference to the accompanying drawings.
•   this embodiment provides an image processing method applied to an electronic device, where the electronic device has image capture and video capture functions;
•   for example, the electronic device is connected to or provided with a camera, through which the image capture and video capture functions are realized.
  • the electronic device may be a mobile terminal such as a mobile phone or a tablet computer, or a personal computer or the like.
•   FIG. 15 is a schematic flowchart of an implementation of an image processing method according to an embodiment of the present invention. As shown in FIG. 15, the method includes:
•   Step 101: The electronic device detects a target operation, where the target operation represents an operation of adjusting image features in the image data and/or video data collected by the electronic device;
•   the target operation may specifically be performed by the user on a specific physical button or a specific virtual button, or may be a specific gesture operation; this embodiment does not limit the form of the target operation in actual applications.
•   the target operation may be a specific operation that starts the image capture function or the video capture function (such as turning on the camera) and simultaneously starts the function of adjusting image features in the collected image data or video data, such as the beauty function; or, while the electronic device is in the image capture state or the video capture state, the target operation may be a specific operation by which the user triggers the expected processing of image features in the image data or video data, for example, when the mobile phone is in an image capture state or video capture state and the user triggers the beauty function through the target operation. That is to say, the target operation described in this embodiment is an operation for starting the beauty function.
•   the beauty function may be started synchronously when the camera is started, or may be triggered after the camera is started, based on the target operation.
•   the beauty function may specifically be implemented by a skin-smoothing (dermabrasion) algorithm combined with skin color recognition: after the skin color regions in the image are identified, dermabrasion is applied to those regions, thereby achieving the beauty effect.
•   the skin color recognition may adopt an elliptical model discrimination method in the YUV color format;
•   the dermabrasion algorithm may adopt a bilateral filter method.
•   the algorithm adopted by the existing beauty function cannot distinguish image quality: when the mobile phone camera is poor, the captured image quality is poor, and after the beauty treatment the dermabrasion makes the image more blurred, with serious loss of detail; when the mobile phone camera is better, the captured image is of better quality, and if the same beauty intensity is used, the beauty effect may be insufficient, causing the problem of inadequate skin smoothing and whitening, thereby affecting the user experience. Therefore, to avoid the above problems, this embodiment first acquires a parameter adjustable range value matching the device's own features (such as image processing features) before starting the beauty function to process images or videos, such as an adjustable beauty range, in order to avoid excessive beauty or insufficient beauty intensity.
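To make the bilateral filter method named above concrete, here is a toy, pure-Python bilateral smoother on a 2-D grayscale grid. It is O(pixels x radius^2) and only for illustration; the sigma values, radius, and function name are assumptions, and a real implementation would use an optimized library routine on the skin-color region only.

```python
import math

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Toy bilateral smoothing on a 2-D list of grayscale values (0-255).

    Each output pixel is a weighted average of its neighbours, where the
    weight falls off with both spatial distance (sigma_s) and intensity
    difference (sigma_r) -- so flat skin areas are smoothed while strong
    edges (eyes, hair) are preserved."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        space = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        diff = img[yy][xx] - img[y][x]
                        rng = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        acc += img[yy][xx] * space * rng
                        norm += space * rng
            out[y][x] = acc / norm
    return out
```

Raising sigma_r makes the filter behave more like a plain Gaussian blur (stronger "dermabrasion"); this is one natural knob behind an adjustable beauty intensity.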
•   Step 102: The electronic device acquires, in response to the target operation, a parameter adjustable range value for the electronic device to process the image features, where the parameter adjustable range value is determined based at least on the image processing features used to process the image features in the image data and/or video data;
•   the image processing features may specifically be the image capture features of a camera provided in or connected to the electronic device, and/or the display processing features applied by the electronic device before display; of course, in practical applications, the image processing features may also be other feature information related to image processing, which is not limited in this embodiment.
•   Step 103: The electronic device selects a target adjustment value from the parameter adjustable range value, and uses the target adjustment value to process the image features in the collected image data and/or video data.
•   specifically, an adjuster characterized by the parameter adjustable range value may be presented on the display interface, so that the target adjustment value is selected from the parameter adjustable range value by using the adjuster; in a specific application, as shown in FIG. 16, a beauty-adjustment slider can be used to represent the adjustable beauty range, where point A of the slider represents no beauty treatment, or represents the minimum beauty intensity supported by the electronic device (for example, in a specific application, as long as the beauty function is turned on, the electronic device performs beauty treatment of a certain intensity by default, and the user can only increase the beauty intensity from that level but not decrease it; that is, the minimum beauty intensity is not zero), and point B of the slider represents the maximum beauty intensity supported by the electronic device for beauty treatment; the adjustable beauty ranges of electronic devices with different image processing features may be different.
•   for example, if the maximum beauty intensity that can be achieved is 30, the beauty intensity can be selected between 0 and 30; within this range, a sub-interval such as range one, range two, or another range can be selected according to the image processing features of the electronic device as the adjustable beauty range matching those features, thus avoiding the problem of excessive or insufficient beauty.
•   the step of selecting the target adjustment value may be determined based on a user operation, for example based on a sliding operation performed by the user on the beauty intensity slider; in this way, this embodiment not only sets different beauty intensities for electronic devices of different performance, but also realizes an adjustment function for the beauty intensity, which satisfies the different needs of the user in different states and thereby enriches and enhances the user experience.
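The slider-to-intensity mapping described above can be sketched in a few lines; the [0, 1] slider coordinate and the function name are assumptions for illustration.

```python
def slider_to_intensity(slider_pos, range_lo, range_hi):
    """Map a slider position in [0, 1] (point A = 0, point B = 1) onto the
    device-matched adjustable beauty range [range_lo, range_hi].

    range_lo may be non-zero, matching the case where a minimum beauty
    intensity is always applied once the beauty function is on."""
    slider_pos = min(max(slider_pos, 0.0), 1.0)  # clamp UI input
    return range_lo + slider_pos * (range_hi - range_lo)
```

For a device assigned the range 10-25, the slider midpoint would yield an intensity of 17.5, and dragging below point A can never go under the device's minimum of 10.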
•   in a specific embodiment, before the step of obtaining the parameter adjustable range value, the electronic device first acquires (e.g., detects) the image processing features used to process image features in image data and/or video data, and determines, based at least on the image processing features, a parameter adjustable range value that matches the electronic device.
•   alternatively, the electronic device acquires the image processing features used to process image features in the image data and/or video data and transmits the image processing features of the electronic device to the server; correspondingly, the server obtains the image processing features corresponding to the electronic device and determines, based at least on those features, the parameter adjustable range value matching the electronic device, so that the electronic device can obtain the parameter adjustable range value from the server and select the target adjustment value from it, in order to process the image features in the image data and/or video data collected by the electronic device with the target adjustment value. That is to say, in this embodiment, the process of determining the parameter adjustable range value may be performed either in the electronic device or in the server.
•   in the method according to the embodiment of the present invention, before the image features in the collected image data and/or video data are adjusted, the parameter adjustable range value is first determined based at least on the image processing features used to process those image features; because the parameter adjustable range value matches the image processing features of the electronic device itself, the embodiment of the present invention achieves the purpose of setting different beauty intensities for electronic devices of different performance; further, the target adjustment value used by the electronic device to adjust the image features in the collected image data and/or video data is selected from the parameter adjustable range value, so the embodiment of the present invention achieves adjustment of the beauty intensity within a specific range and satisfies the user's different requirements for beauty intensity in different states; therefore, the method described in the embodiment of the present invention enriches and enhances the user experience.
•   in a specific embodiment, other parameters, such as information transmission features, may also be referred to when determining the parameter adjustable range value; that is, the parameter adjustable range value may also be determined based on both the image processing features used to process image features in the image data and/or video data and the information transmission features used to transmit the collected image data and/or video data;
•   specifically, before the step of obtaining the parameter adjustable range value, the electronic device first acquires (e.g., detects) the image processing features used to process image features in image data and/or video data, acquires (e.g., detects) the information transmission features used to transmit the collected image data and/or video data, and determines, based at least on the image processing features and the information transmission features, a parameter adjustable range value that matches the electronic device; or, as shown in FIG. 19, the electronic device acquires the image processing features and the information transmission features and sends them to the server.
•   correspondingly, the server obtains both the image processing features used to process image features in the image data and/or video data collected by the electronic device and the information transmission features used to transmit that data, and then determines a parameter adjustable range value matching at least the image processing features and information transmission features of the electronic device.
•   in this way, the electronic device can conveniently obtain the parameter adjustable range value from the server and select a target adjustment value from it, so as to process the image features in the image data and/or video data collected by the electronic device with the target adjustment value. That is to say, in this embodiment, the process of determining the parameter adjustable range value may be performed either in the electronic device or in the server.
•   the image processing features may specifically be the encoding resolution, the code rate, and the like; correspondingly, the information transmission features may specifically be the bandwidth of the electronic device, and the like.
•   in a specific embodiment, the image data and/or video data processed by the beauty operation are presented in the electronic device, so that the local display and the receiving end at the opposite side can simultaneously view the beauty effect, and the beauty effect can be adjusted according to user operations.
•   Application scenario 1: using the server as the control center, after obtaining information such as the current device hardware uploaded by the mobile phone, through analysis and intelligent distribution, the server determines the automatically adjusted beauty intensity range for mobile phones of different performance, so that the beauty intensity has different ranges on phones of different performance and the beauty function can achieve the best effect on each.
•   specifically, beauty server control logic may be added to the video call logic, and the server may determine which beauty intensity range to deliver according to the mobile phone performance reported during the real-time video call; for example, as shown in FIG. 17, based on the performance reported by mobile phone 1, the server selects the beauty intensity interval of range 1 as the adjustable beauty range of mobile phone 1 (0-10), and based on the performance reported by mobile phone 2, selects range 2 as the adjustable beauty range of mobile phone 2 (10-25).
•   specifically, beauty and whitening sliders can be presented in the mobile phone interface and mapped to the adjustable beauty range delivered by the server, so that the user can select the beauty intensity for the current scene.
  • the specific steps include:
•   Step 601: The mobile phone performance data reporting module reports the performance of the mobile phone to the server;
•   Step 602: The network side collects the reported information and uses the server to evaluate the current mobile phone performance, determining the current position of the mobile phone in the beauty intensity policy table so as to allocate the corresponding adjustable beauty range for the mobile phone;
•   Step 603: The mobile phone obtains the adjustable beauty range and maps it to the beauty slider;
•   Step 604: The mobile phone acquires camera data;
•   Step 605: Perform beauty treatment on the data collected by the camera according to the beauty intensity selected by the user;
•   Step 606: Send the beautified data to the local display and to the encoding end, respectively;
•   Step 607: After transmission through the network, the data is sent to the decoding end for decoding, and the beauty effect is displayed at the viewing end.
•   the information reported about mobile phone performance includes but is not limited to: the number of central processing unit (CPU) cores, the CPU frequency, the mobile phone operating system version, the network status, the encoding resolution, the code rate, and the like.
•   the server control center refers to the control center located in the background, usually a server device; after obtaining the uploaded mobile phone performance information, it analyzes and calculates, and outputs the corresponding adjustable beauty range.
•   here, the network refers to the background server control end: after the data is reported, the server performs the analysis, so that the server side, based on the reported performance, selects the beauty intensity tier suitable for the current mobile phone in the preset beauty intensity policy table; this tier is the adjustable beauty range.
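The server-side tier selection can be sketched as follows. The tier thresholds and table rows here are illustrative stand-ins for the patent's Table 1 and Table 2, whose actual contents are not reproduced in this text, and the field names in the report are assumptions based on the reported items listed above.

```python
def classify(report):
    """Rough analogue of a performance classification table (Table 2):
    tier a phone by CPU core count and frequency. Thresholds are assumed."""
    cores, ghz = report["cpu_cores"], report["cpu_ghz"]
    if cores >= 8 or ghz >= 2.2:
        return "high"
    if cores >= 4 and ghz >= 1.5:
        return "mid"
    return "low"

# Illustrative beauty intensity policy table:
# (performance tier, encoding resolution) -> adjustable beauty range.
BEAUTY_POLICY = {
    ("low", "480x360"): (0, 5),
    ("mid", "480x360"): (0, 10),
    ("mid", "640x480"): (10, 25),
    ("high", "640x480"): (0, 30),
}

def beauty_range(report):
    """Select the adjustable beauty range for a reported phone."""
    tier = classify(report)
    return BEAUTY_POLICY.get((tier, report["resolution"]), (0, 10))
```

The same lookup could run either on the server (application scenario 1) or in a local control module, which matches the two deployment options the text describes.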
•   in application scenario 2, the process of determining the adjustable beauty range is performed locally on the mobile phone; specifically, when the mobile phone video call starts, the performance reporting module of the mobile phone reports the performance data locally, the algorithm of the local control module analyzes it and selects the adjustable beauty range for the current mobile phone's performance, the adjustable beauty range is written to the software configuration parameters, and the adjustable beauty range is mapped to the interface slider when the beauty interface is opened.
  • the specific steps are as follows:
•   Step 701: The mobile phone performance data reporting module reports the performance of the mobile phone locally;
•   Step 702: After receiving the corresponding information, the local control center uses an algorithm to analyze the performance of the current mobile phone and determines the current position of the mobile phone in the beauty intensity policy table, so as to allocate the corresponding adjustable beauty range;
•   Step 703: The adjustable beauty range is mapped to the beauty slider;
•   Step 704: Acquire camera data;
•   Step 705: Perform beauty treatment on the data collected by the camera according to the beauty intensity selected by the user;
•   Step 706: Send the beautified data to the local display and to the encoding end for encoding, respectively;
•   Step 707: After transmission through the network, the data is sent to the decoding end for decoding, and the beauty effect is displayed at the viewing end.
•   the information reported about mobile phone performance includes but is not limited to: the number of CPU cores, the CPU frequency, the mobile phone operating system version, the network status, the encoding resolution, the code rate, and the like.
•   the local control center is located in the mobile phone software client; after the software starts the video call, the information reported by the mobile phone is collected, and the corresponding adjustable beauty range is analyzed and output.
•   in both application scenarios, the control center (the local control center or the server control center) is required to determine the adjustable beauty range in the current call state through the beauty intensity policy table;
•   Table 1 gives a specific example of the beauty intensity policy table:
•   the beauty intensity policy table shows the correspondence between the adjustable beauty range and different phone models, different encoding resolutions, and different bandwidths.
•   for example, where the adjustable beauty range is 0-5, it is because the model is poor, the image quality collected by the camera is relatively poor, and the resolution is very low, so the picture will be blurred during the video call; if a strong beauty intensity were used, the picture quality would drop sharply, so a smaller beauty intensity interval is chosen.
  • low-end models, mid-range models, and high-end models can be distinguished as follows.
  • The distinction is made on the server side. Specifically, when a mobile phone video call is made, the phone's performance and call data are reported and uploaded to the server over the network, and the server determines, according to the mobile phone performance classification table, whether the current phone is a low-end, mid-range, or high-end model.
  • Table 2 gives the mobile phone performance classification table for the Android system.
  • The division into low-end, mid-range, and high-end models is not fixed; the division criteria need to be adjusted periodically according to network operation data and the proportions of phone models in use.
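A minimal sketch of such a classification rule, assuming the reported CPU core count and frequency are the deciding features. The thresholds below are invented for illustration and, as noted above, real criteria would be re-tuned periodically:

```python
# Hypothetical tier classification from two of the reported performance
# fields (CPU core count, CPU frequency).  Thresholds are assumptions.
def classify_device(cpu_cores, cpu_freq_ghz):
    if cpu_cores >= 8 and cpu_freq_ghz >= 2.0:
        return "high"
    if cpu_cores >= 4 and cpu_freq_ghz >= 1.5:
        return "mid"
    return "low"

print(classify_device(8, 2.4))  # high
```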
  • Regarding resolution: most video calls currently run at 480x360 or 640x480, but as mobile phone performance improves, video call quality will also increase, so the adjustable beauty range for different resolutions needs to be adjusted accordingly.
  • Regarding bit rate: the video call bit rate is usually between 150-300 kbps, but with the popularity of wireless fidelity (Wi-Fi) and the coverage of fourth-generation mobile communication technology (4G), the bit rate of video calls will keep rising, so the adjustable beauty range needs to be adjusted accordingly. The higher the bit rate, the higher the quality of the encoded call video, and the beauty intensity range can be appropriately increased or expanded.
  • The following beauty algorithm can be used to implement the beauty function; specifically, as shown in FIG. 22, the beauty processing proceeds as follows:
  • Step 801: Perform skin color region recognition on the input original image. A skin color detection algorithm performs skin color detection on the original input image; it is optimized on the basis of a common skin color detection algorithm, improving skin color recognition accuracy as far as possible without increasing the recognition error rate.
  • Step 802: Segment the original image into a skin color region and a non-skin color region.
  • Step 803: Apply skin smoothing (dermabrasion) to the skin color region. A smoothing algorithm is applied to the skin color region, with lower-intensity processing for its detailed parts: skin texture and detail-rich areas (nose, eyes, eyebrows, and so on) are identified automatically, so that the three-dimensional look and detail of the facial features are preserved.
  • Step 804: Fuse the smoothed skin color region with the skin color region of the original image. Blending the rich detail of the original image into the smoothed skin color region first preserves the integrity of the skin region's details and, second, increases the realism of the smoothed area.
  • Step 805: Fuse the image obtained in step 804 with the non-skin region. Where the smoothed region and the non-skin region are fused, an abrupt change would appear at the boundary, so the boundary between the smoothed and non-skin regions is feathered, giving a smooth transition between the smoothed area and the non-skin area.
  • Step 806: Sharpening. Because smoothing is essentially a filtering algorithm, it inevitably loses detail; to enhance the video effect, the overall image is sharpened to restore detail, after which the beauty processing ends.
  • As for the skin color detection algorithm, in practical applications a YUV skin color detection model is used to detect the skin color region.
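As a concrete illustration of a YUV-model skin test, the sketch below converts RGB to YCbCr and applies a commonly used chrominance box; the thresholds are a standard textbook choice, not values from this document.

```python
# Illustrative YUV (YCbCr) skin-colour test; thresholds are assumptions.
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Classify a pixel as skin if its chrominance falls in the skin box."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin(220, 170, 140))  # True  (typical skin tone)
print(is_skin(30, 90, 200))    # False (blue)
```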
  • As for the smoothing algorithm, many such algorithms exist, such as Gaussian filtering, surface blur, bilateral filtering, and guided filtering.
  • The sharpening algorithm can enhance the edges of the image, making a blurred image clearer with sharper, more prominent colors, improving image quality and producing an image better suited to human observation and recognition. In practical applications, a differential method or a Gaussian filtering method can be used for sharpening.
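A sharpening pass of the kind described (a Gaussian-filtering method) can be illustrated with an unsharp-mask sketch on a 1-D signal. This is a toy example of the idea behind step 806, not the patent's actual algorithm:

```python
# Toy 1-D unsharp mask: sharpen by adding back the detail a blur removes.
def blur_1d(signal, kernel=(0.25, 0.5, 0.25)):
    """3-tap smoothing with clamped borders."""
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for offset, weight in zip((-1, 0, 1), kernel):
            j = min(max(i + offset, 0), n - 1)  # clamp at image borders
            acc += weight * signal[j]
        out.append(acc)
    return out

def unsharp_mask(signal, amount=1.0):
    """result = original + amount * (original - blurred)."""
    blurred = blur_1d(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

print(unsharp_mask([0, 0, 10, 10]))  # edge contrast is exaggerated
```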
  • FIG. 23 is a schematic diagram of the beauty effect obtained with the image processing method of the embodiment of the present invention.
  • The adjustable beauty ranges of mobile phone A and mobile phone B differ; for example, the adjustable range of phone A is 0-10 while that of phone B is 10-25. Therefore, after beauty processing of the same image, the beauty effects achieved are different.
  • The beauty slider represents the adjustable beauty range; even when the slider is set to the same medium beauty intensity, different mobile phones exhibit different beauty effects, such as different degrees of blemish display in the dotted area.
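Mapping the device-specific adjustable range onto a slider (step 703) might look like the following sketch, using the example ranges 0-10 and 10-25 from above; the 0-100 slider scale is an assumption:

```python
# Sketch of mapping a device's adjustable beauty range onto a slider.
def slider_to_intensity(slider_pos, range_min, range_max, slider_max=100):
    """Linearly map a slider position into the device-specific range."""
    return range_min + (range_max - range_min) * slider_pos / slider_max

# The same mid-slider position yields different intensities on each phone:
print(slider_to_intensity(50, 0, 10))   # 5.0  (phone A)
print(slider_to_intensity(50, 10, 25))  # 17.5 (phone B)
```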
  • The embodiment of the present invention can provide different beauty strengths for video calls, making it easy for the user to select the best effect in different scenarios. For example, when the user is in poor condition or wearing no make-up, strong beauty can be used; outdoors, or when wearing make-up, lower-intensity beauty can be used.
  • This embodiment provides an electronic device. The electronic device includes:
  • a detecting unit 1001 configured to detect a target operation, where the target operation represents an operation of the electronic device to adjust image features in the collected image data and/or video data;
  • a first obtaining unit 1002 configured to acquire, based on the target operation, a parameter adjustable range value of the electronic device for processing the image features, where the parameter adjustable range value is determined at least based on image processing features for processing image features in image data and/or video data;
  • the processing unit 1003 is configured to select a target adjustment value from the parameter adjustable range values, and process the image features in the collected image data and/or video data by using the target adjustment value.
  • the first obtaining unit 1002 is further configured to acquire image processing features for processing image features in image data and/or video data, and determine, at least based on the image processing features, a parameter adjustable range value matching the electronic device; or acquire image processing features for processing image features in image data and/or video data, and send the image processing features of the electronic device to the server, so that the server determines, at least based on the image processing features of the electronic device, a parameter adjustable range value matching the electronic device.
  • the parameter adjustable range value is determined at least based on image processing features for processing image features in image data and/or video data, and on information transmission features for transmitting the collected image data and/or video data; correspondingly,
  • the first obtaining unit 1002 is further configured to acquire image processing features for processing image features in image data and/or video data, and acquire information transmission features for transmitting the collected image data and/or video data, and determine, at least based on the image processing features and the information transmission features, a parameter adjustable range value matching the electronic device; or acquire the image processing features and the information transmission features, and send the image processing features and information transmission features of the electronic device to a server, so that the server determines, at least based on the image processing features and the information transmission features of the electronic device, a parameter adjustable range value matching the electronic device.
  • the processing unit 1003 is further configured to present an adjuster representing the parameter adjustable range value, so that the target adjustment value can be selected from the parameter adjustable range value by using the adjuster.
  • This embodiment provides a server. As shown in FIG. 25, the server includes:
  • a second acquiring unit 1101 configured to acquire an image processing feature corresponding to the electronic device for processing image features in the collected image data and/or video data;
  • a determining unit 1102 configured to determine, according to the image processing features of the electronic device, a parameter adjustable range value matching at least the image processing features of the electronic device, so that the electronic device selects a target adjustment value from the parameter adjustable range value and uses the target adjustment value to process image features in the image data and/or video data collected by the electronic device.
  • the second acquiring unit 1101 is further configured to acquire an information transmission feature corresponding to the electronic device for transmitting the collected image data and/or video data; correspondingly,
  • the determining unit 1102 is further configured to determine, according to an image processing feature and an information transmission feature for the electronic device, a parameter adjustable range value that matches at least an image processing feature and an information transmission feature of the electronic device.
  • Embodiments of the present invention can be provided as a method, a system, or a computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
  • The computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method and apparatus, a related device and a server. The image processing method comprises: acquiring collected image data; determining a processing manner for processing the image data; and performing feature adjustment on image features in the image data according to the determined processing manner. By means of the embodiments of the present invention, adjustment within a certain range can be achieved, satisfying a user's different demands on picture quality in different situations; the method according to the embodiments of the present invention therefore enriches and improves the user experience.

Description

Image processing method and apparatus, related device and server

This application claims priority to Chinese Patent Application No. 2017100145254, filed with the Chinese Patent Office on January 9, 2017 and entitled "A video communication method, apparatus and terminal device", and to Chinese Patent Application No. 2017103445621, filed with the Chinese Patent Office on May 16, 2017 and entitled "An image processing method, electronic device and server", the entire contents of which are incorporated herein by reference.

Technical field

The present application relates to the field of image processing, and in particular to an image processing method, apparatus, related device and server.

Background

Video communication refers to a manner in which users communicate by transmitting video images: after establishing a video communication connection, terminal devices such as users' mobile phones can exchange video images collected by their cameras over the network to realize video communication.

To meet the demand for personalization in video communication, current video communication methods can also support adding video effects to video images, such as adding face-pendant images (a beard, animal ears, and the like) to a face in the video image, or adding cartoon, comic and other filter effects to the video image.

As the conventional graphics-processing device, the GPU (graphics processing unit) is generally where video effects are added to video images. However, in a terminal device the GPU is also responsible for displaying the graphical interface, rendering images, and other work; as graphics-processing tasks increase, the terminal device must be equipped with a sufficiently powerful GPU to support adding video effects to video images. This places higher requirements on the performance configuration of the terminal device, making video communication based on video effects ever more limited in use.

At present, video call and live-streaming applications (apps) are emerging in large numbers. Whether in a video call scene or a live-broadcast scene, beauty processing has become a must-have feature of such apps, since it lets participants present an attractive appearance even without make-up. However, the beauty functions in existing video call and live-streaming apps have the following problem: phones of different performance all use the same beauty intensity. When a high-intensity beauty mode is applied to images collected by a low-performance phone, the image may be over-processed and some details blurred; for a high-performance phone, on the other hand, a high-intensity beauty mode can improve the beauty effect. A uniform beauty intensity therefore degrades the user experience.
Summary

An embodiment of the present invention provides an image processing method, the method including:

acquiring collected image data;

determining a processing manner for processing the image data;

performing feature adjustment on image features in the image data according to the determined processing manner.

In view of this, embodiments of the present invention further provide a video communication method, apparatus, and terminal device, to reduce the usage limitations of video communication based on video effects.

To achieve the above objective, the embodiments of the present invention provide the following technical solutions:

A video communication method, applied to a first terminal device, the method including:

establishing a video communication connection with a second terminal device, and acquiring a video image collected by an image acquisition apparatus of the first terminal device;

determining a selected filter effect;

determining a data processing amount type of the filter effect;

if the data processing amount type conforms to a first type, adding the filter effect to the video image in a CPU of the first terminal device;

if the data processing amount type conforms to a second type, instructing a GPU of the first terminal device to add the filter effect to the video image; where the data processing amount corresponding to the first type corresponds to a data processing amount range set for the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type;

determining, at least according to the video image with the filter effect added, a video image with a video effect added;

transmitting the video image with the video effect added to the second terminal device.
An embodiment of the present invention further provides a video communication apparatus, applied to a first terminal device, the apparatus including:

a connection establishing module, configured to establish a video communication connection with a second terminal device;

a video image acquiring module, configured to acquire a video image collected by an image acquisition apparatus of the first terminal device;

a filter effect determining module, configured to determine a selected filter effect;

a type determining module, configured to determine a data processing amount type of the filter effect;

a first filter effect adding module, configured to add the filter effect to the video image in a CPU of the first terminal device if the data processing amount type conforms to a first type;

a second filter effect adding module, configured to instruct a GPU of the first terminal device to add the filter effect to the video image if the data processing amount type conforms to a second type; where the data processing amount corresponding to the first type corresponds to a data processing amount range set for the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type;

an effect-added video image determining module, configured to determine, at least according to the video image with the filter effect added, a video image with a video effect added;

a video image transmission module, configured to transmit the video image with the video effect added to the second terminal device.
An embodiment of the present invention further provides a terminal device, including:

a CPU, configured to establish a video communication connection with a second terminal device and acquire a video image collected by an image acquisition apparatus of the first terminal device; determine a selected filter effect; determine a data processing amount type of the filter effect; add the filter effect to the video image in the CPU of the first terminal device if the data processing amount type conforms to a first type; instruct a GPU of the first terminal device to add the filter effect to the video image if the data processing amount type conforms to a second type, where the data processing amount corresponding to the first type corresponds to a data processing amount range set for the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type; determine, at least according to the video image with the filter effect added, a video image with a video effect added; and transmit the video image with the video effect added to the second terminal device;

a GPU, configured to add, as instructed by the CPU, the filter effect to the video image when the data processing amount type conforms to the second type.

Based on the above technical solutions, in the video communication method provided by the embodiments of the present invention, after selecting a filter effect, the first terminal device can decide, according to the data processing amount type of the filter effect, whether the filter effect is implemented on the CPU or on the GPU. That is, when the processing complexity corresponding to the data processing amount type is low and conforms to the first type, the filter effect can be added to the video image in the CPU of the first terminal device; when the processing complexity is high and conforms to the second type, the GPU of the first terminal device can be instructed to add the filter effect to the video image. The embodiments of the present invention can thus reasonably assign the processing device that performs filter-effect addition according to the data processing amount type of the filter effect, so that the data processing load involved in adding filter effects can be shared between the CPU and the GPU. As long as the terminal device has a certain overall performance, the requirement of adding video effects to video images can be met without strengthening the performance configuration of any single component such as the GPU, making it possible for low-configuration terminal devices to implement filter effects and reducing the usage limitations of video communication based on video effects.
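The CPU/GPU dispatch rule summarized above can be sketched as a simple threshold test: a filter whose data-processing amount fits the budget reserved for the CPU (the "first type") is applied on the CPU, otherwise (the "second type") the GPU is instructed to apply it. The cost metric and the cutoff value below are illustrative assumptions:

```python
# Sketch of the filter-effect dispatch rule; cost units are assumptions.
CPU_BUDGET = 50  # per-frame data-processing budget set for the CPU

def choose_processor(filter_cost):
    """First type (within the CPU budget) -> CPU; second type -> GPU."""
    return "CPU" if filter_cost <= CPU_BUDGET else "GPU"

print(choose_processor(10))  # CPU
print(choose_processor(80))  # GPU
```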
To solve the existing technical problems, embodiments of the present invention provide an image processing method, an electronic device, and a server, which can solve at least the above problems in the prior art.

The technical solutions of the embodiments of the present invention are implemented as follows:

A first aspect of the embodiments of the present invention provides an image processing method, the method including:

detecting, by an electronic device, a target operation, the target operation representing an operation of the electronic device to adjust image features in collected image data and/or video data;

acquiring, based on the target operation, a parameter adjustable range value of the electronic device for processing the image features, the parameter adjustable range value being determined at least based on image processing features for processing image features in image data and/or video data;

selecting a target adjustment value from the parameter adjustable range value, and processing the image features in the collected image data and/or video data by using the target adjustment value.

In the above solution, the method further includes:

acquiring image processing features for processing image features in image data and/or video data, and determining, at least based on the image processing features, a parameter adjustable range value matching the electronic device; or

acquiring image processing features for processing image features in image data and/or video data, and sending the image processing features of the electronic device to a server, so that the server determines, at least based on the image processing features of the electronic device, a parameter adjustable range value matching the electronic device.

In the above solution, the parameter adjustable range value is determined at least based on image processing features for processing image features in image data and/or video data, and on information transmission features for transmitting the collected image data and/or video data; correspondingly, the method further includes:

acquiring image processing features for processing image features in image data and/or video data, and acquiring information transmission features for transmitting the collected image data and/or video data, and determining, at least based on the image processing features and the information transmission features, a parameter adjustable range value matching the electronic device; or

acquiring image processing features for processing image features in image data and/or video data, and acquiring information transmission features for transmitting the collected image data and/or video data, and sending the image processing features and the information transmission features of the electronic device to a server, so that the server determines, at least based on the image processing features and the information transmission features of the electronic device, a parameter adjustable range value matching the electronic device.

In the above solution, the method further includes:

presenting an adjuster representing the parameter adjustable range value, so that the target adjustment value can be selected from the parameter adjustable range value by using the adjuster.
A second aspect of the embodiments of the present invention provides an image processing method, the method including:

acquiring, by a server, image processing features of an electronic device for processing image features in collected image data and/or video data;

determining, according to the image processing features of the electronic device, a parameter adjustable range value matching at least the image processing features of the electronic device, so that the electronic device selects a target adjustment value from the parameter adjustable range value and uses the target adjustment value to process image features in the image data and/or video data collected by the electronic device.

In the above solution, the method further includes:

acquiring information transmission features of the electronic device for transmitting the collected image data and/or video data; correspondingly,

the determining, according to the image processing features of the electronic device, a parameter adjustable range value matching at least the image processing features of the electronic device includes:

determining, according to the image processing features and the information transmission features of the electronic device, a parameter adjustable range value matching at least the image processing features and the information transmission features of the electronic device.
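A hypothetical server-side rule combining the two feature kinds of this aspect: the device's image-processing capability caps the adjustable range, and a weak transmission link (low bitrate) narrows it further. All the numbers below are invented for illustration:

```python
# Hypothetical server-side range matching; scores and cutoffs are assumptions.
def match_range(processing_score, bitrate_kbps):
    upper = min(processing_score // 10, 25)  # capability cap on the range
    if bitrate_kbps < 150:                   # below the usual 150-300 kbps band
        upper = min(upper, 5)                # narrow the range on weak links
    return (0, upper)

print(match_range(200, 300))  # (0, 20)
print(match_range(200, 100))  # (0, 5)
```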
本发明实施例第三方面提供了一种电子设备,所述电子设备包括:A third aspect of the embodiments of the present invention provides an electronic device, where the electronic device includes:
检测单元,用于检测到目标操作,所述目标操作表征所述电子设备对采集到的图像数据和/或视频数据中图像特征进行调整的操作;a detecting unit, configured to detect a target operation, the target operation characterizing an operation of the electronic device to adjust image features in the collected image data and/or video data;
第一获取单元,用于基于所述目标操作获取针对所述电子设备的用于对图像特征进行处理的参数可调范围值,所述参数可调范围值是至少基于对图像数据和/或视频数据中图像特征进行处理的图像处理特征而确定出的;a first obtaining unit, configured to acquire, according to the target operation, a parameter adjustable range value for processing the image feature for the electronic device, where the parameter adjustable range value is based at least on image data and/or video Determining the image processing characteristics of the image features processed in the data;
处理单元,用于从所述参数可调范围值中选取出目标调整值,利用所述目标调整值对采集的图像数据和/或视频数据中图像特征进行处理。And a processing unit, configured to select a target adjustment value from the parameter adjustable range values, and process the image features in the collected image data and/or video data by using the target adjustment value.
上述方案中,所述第一获取单元,还用于获取用于对图像数据和/或视频数据中图像特征进行处理的图像处理特征,并至少基于所述图像处理特征确定出与所述电子设备相匹配的参数可调范围值;或者,获取用于对图像数据和/或视频数据中图像特征进行处理的图像处理特征,并将针对所述电子设备的图像处理特征发送至服务器,以使服务器至少基于针对所述电子设备的图像处理特征确定出与所述电子设备相匹配的参数可调范围值。In the above solution, the first acquiring unit is further configured to acquire image processing features for processing image features in image data and/or video data, and determine the electronic device based on at least the image processing features. Matching parameter adjustable range values; or acquiring image processing features for processing image features in image data and/or video data, and transmitting image processing features for the electronic device to a server to cause the server A parameter adjustable range value that matches the electronic device is determined based at least on image processing features for the electronic device.
上述方案中,所述参数可调范围值是至少基于对图像数据和/或视频数据中图像特征进行处理的图像处理特征,以及用于对采集到的图像数据和/或视频数据进行传输的信息传输特征而确定出的;对应地,In the above solution, the parameter adjustable range value is based on at least an image processing feature for processing image features in image data and/or video data, and information for transmitting the collected image data and/or video data. Determined by transmitting characteristics; correspondingly,
所述第一获取单元,还用于获取用于对图像数据和/或视频数据中图像特征进行处理的图像处理特征,以及获取用于对采集到的图像数据和/或视频数据进行传输的信息传输特征,至少基于所述图像处理特征和所述信息传输特征确定出与所述电子设备相匹配的参数可调范围值;或者,获取用于对图像数据和/或视频数据中图像特征进行处理的图像处理特征,以及获取用于对采集到的图像数据和/或视频数据进行传输的信息传输特征,并将针对所述电子设备的图像处理特征和信息传输特征发送至服务器,以使服务器至少基于针对所述电子设备的图像处理特征和信息传输特征确定出与所述电子设备相匹配的参数可调范围值。The first acquiring unit is further configured to acquire image processing features for processing image features in image data and/or video data, and acquire information for transmitting the collected image data and/or video data. Transmitting a feature, determining a parameter adjustable range value that matches the electronic device based on at least the image processing feature and the information transmission feature; or acquiring for processing image features in image data and/or video data Image processing features, and acquiring information transmission features for transmitting the collected image data and/or video data, and transmitting image processing features and information transmission characteristics for the electronic device to a server to enable the server to at least A parameter adjustable range value that matches the electronic device is determined based on image processing features and information transmission characteristics for the electronic device.
In the above solution, the processing unit is further configured to present an adjuster representing the parameter adjustable range value, so that the target adjustment value can be selected from the parameter adjustable range value by using the adjuster.
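The mapping from an adjuster (for example, a slider) position to a target adjustment value within the device-matched adjustable range can be sketched as follows. This is an illustrative sketch only; the function name `target_adjustment_value`, its parameters, and the convention that the slider position lies in [0, 1] are assumptions, not part of the claimed solution:

```python
def target_adjustment_value(slider_pos, range_min, range_max):
    """Map a slider position in [0.0, 1.0] to a target adjustment value
    within the device-matched adjustable range [range_min, range_max]."""
    if not 0.0 <= slider_pos <= 1.0:
        raise ValueError("slider position must be in [0, 1]")
    return range_min + slider_pos * (range_max - range_min)
```

For example, with an adjustable range of [20, 80], a slider at its midpoint yields a target adjustment value of 50.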
A fourth aspect of the embodiments of the present invention provides a server, the server including:
a second acquiring unit, configured to acquire image processing features of the electronic device used for processing image features in collected image data and/or video data; and
a determining unit, configured to determine, according to the image processing features of the electronic device, a parameter adjustable range value matching at least the image processing features of the electronic device, so that the electronic device selects a target adjustment value from the parameter adjustable range value and processes, by using the target adjustment value, image features in the image data and/or video data collected by the electronic device.
In the above solution, the second acquiring unit is further configured to acquire information transmission features of the electronic device used for transmitting the collected image data and/or video data. Correspondingly,
the determining unit is further configured to determine, according to the image processing features and the information transmission features of the electronic device, a parameter adjustable range value matching at least the image processing features and the information transmission features of the electronic device.
According to the image processing method, electronic device, and server described in the embodiments of the present invention, before image features in collected image data and/or video data are adjusted, a parameter adjustable range value determined based at least on the image processing features used for processing image features in the image data and/or video data is first acquired. Because the parameter adjustable range value is determined based on the image processing features of the electronic device, the embodiments of the present invention can set different beauty (facial retouching) intensities for electronic devices with different performance. Further, the target adjustment value used by the electronic device to adjust image features in the collected image data and/or video data is selected from the parameter adjustable range value, so the embodiments of the present invention achieve adjustment of the beauty intensity within a specific range and satisfy a user's different requirements for beauty intensity in different states. Therefore, the method described in the embodiments of the present invention both enriches and improves the user experience.
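The derivation of a device-matched adjustable range from image processing features (and, optionally, information transmission features) can be sketched as follows. This is a minimal illustration only: the capability score, the tier thresholds, the percentage scale, and the bandwidth cap are all hypothetical values invented for the sketch, not values specified by the embodiments:

```python
def adjustable_range(capability_score, bandwidth_kbps=None):
    """Sketch: derive a beauty-intensity adjustable range [lo, hi] (percent)
    from a device image-processing capability score; optionally cap it
    using the transmission bandwidth. All thresholds are illustrative."""
    if capability_score >= 80:      # high-performance device
        lo, hi = 0, 100
    elif capability_score >= 50:    # mid-range device
        lo, hi = 0, 70
    else:                           # low-end device
        lo, hi = 0, 40
    if bandwidth_kbps is not None and bandwidth_kbps < 100:
        hi = min(hi, 50)            # weak link: cap the heaviest processing
    return lo, hi
```

The point of the sketch is that both the device's processing capability and, in the extended solution, its transmission conditions shrink or grow the range from which the user's target adjustment value is later selected.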
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1-1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 1 is a structural block diagram of a video communication system according to an embodiment of the present invention;
FIG. 2 is a signaling flowchart of a video communication method according to an embodiment of the present invention;
FIG. 3 is another signaling flowchart of a video communication method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for determining a selected video effect;
FIG. 5 is a structural block diagram of a terminal device;
FIG. 6 is a schematic flowchart of adding a face pendant to a video image;
FIG. 7 is another schematic flowchart of adding a face pendant and a filter effect to a video image;
FIG. 8 is a flowchart of a method for performing video encoding processing on a video image to which a video effect has been added;
FIG. 9 is a schematic diagram of testing the performance of beauty dermabrasion and skin beautification on the GPU of an iPhone 5S;
FIG. 10 is a schematic diagram of testing the performance of beauty dermabrasion and skin beautification on the GPU of an iPhone 4S;
FIG. 11 is a schematic diagram of testing the performance of face recognition technology on different iPhone models;
FIG. 12 is a structural block diagram of a video communication apparatus according to an embodiment of the present invention;
FIG. 13 is another structural block diagram of a video communication apparatus according to an embodiment of the present invention;
FIG. 14 is still another structural block diagram of a video communication apparatus according to an embodiment of the present invention;
FIG. 15 is a schematic implementation flowchart of an image processing method according to an embodiment of the present invention;
FIG. 16 is a schematic diagram of the interface of a beauty intensity slider of an electronic device according to an embodiment of the present invention;
FIG. 17 is a schematic diagram of a beauty intensity range according to an embodiment of the present invention;
FIG. 18 is a first schematic diagram of information interaction between an electronic device and a server in an image processing method according to an embodiment of the present invention;
FIG. 19 is a second schematic diagram of information interaction between an electronic device and a server in an image processing method according to an embodiment of the present invention;
FIG. 20 is a schematic flowchart of an image processing method in application scenario 1 according to an embodiment of the present invention;
FIG. 21 is a schematic flowchart of an image processing method in application scenario 2 according to an embodiment of the present invention;
FIG. 22 is a schematic diagram of a beauty processing flow in an image processing method according to an embodiment of the present invention;
FIG. 23 is a schematic diagram of the effect of beauty processing performed by using an image processing method according to an embodiment of the present invention;
FIG. 24 is a schematic structural diagram of an electronic device according to an embodiment of the present invention; and
FIG. 25 is a schematic structural diagram of a server according to an embodiment of the present invention.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
As shown in FIG. 1-1, an image processing method according to an embodiment of the present invention is described below. The embodiment of the present invention includes the following steps:
101. Acquire collected image data.
102. Determine a processing manner for processing the image data.
103. Perform feature adjustment on image features in the image data according to the determined processing manner.
FIG. 1 is a structural block diagram of a video communication system according to an embodiment of the present invention. Referring to FIG. 1, the video communication system may include a first terminal device 10 and a second terminal device 20.
The first terminal device and the second terminal device may be user-side devices that have an image acquisition apparatus such as a camera and have data processing capabilities, for example, a smartphone with a camera, a tablet computer, or a notebook computer.
When a first user (the user of the first terminal device) and a second user (the user of the second terminal device) perform video communication, video images may be mutually transmitted between the first terminal device and the second terminal device to implement the video communication. Data interaction between the first terminal device and the second terminal device may be implemented through a network, and specifically may be implemented by a network server that provides a video communication service, such as an IM (instant messaging) server with a video communication function.
FIG. 1 shows a two-person video communication scenario; however, the embodiments of the present invention may also support multi-person video communication, such as video communication among group users.
In the embodiments of the present invention, the first terminal device and the second terminal device follow the same process of processing and transmitting the video images collected by their image acquisition apparatuses, and of receiving and displaying video images. The video communication procedure provided by the embodiments of the present invention is described below by taking the first terminal device as the video image sending device and the second terminal device as the video image receiving device as an example. Obviously, the second terminal device also becomes a video image sending device when sending video images to the first terminal device, and the first terminal device also becomes a video image receiving device when receiving video images sent by the second terminal device. The video communication procedure in that case follows the same principle as the procedure in which the first terminal device is the video image sending device and the second terminal device is the video image receiving device, and the two may be cross-referenced.
FIG. 2 shows a signaling flowchart of a video communication method according to an embodiment of the present invention. Referring to FIG. 2, the procedure may include the following steps:
Step S10. The first terminal device establishes a video communication connection with the second terminal device.
Optionally, the first terminal device may establish the video communication connection with the second terminal device through a network server that supports video communication, such as an IM (Instant Messaging) server with a video communication function.
Step S11. The first terminal device acquires a video image collected by an image acquisition apparatus of the first terminal device.
After the first terminal device establishes the video communication connection with the second terminal device, the first terminal device and the second terminal device may mutually transmit video images. Taking the transmission of video images from the first terminal device to the second terminal device as an example, in this embodiment of the present invention, the first terminal device may add a video effect to the video image collected by the image acquisition apparatus such as a camera, and transmit the video image with the added video effect to the second terminal device; the processing of the second terminal device is analogous. Therefore, the first terminal device needs to acquire the video image collected by the image acquisition apparatus, and the collected video image is transmitted to the CPU of the first terminal device.
Step S12. The first terminal device determines a selected filter effect.
Filter effects include, for example, a comic-style filter effect, a cartoon-style filter effect, and filter effects with different color treatments.
Step S13. The first terminal device determines a data processing amount type of the filter effect.
Step S14. If the data processing amount type conforms to a first type, add the filter effect to the video image in the CPU of the first terminal device.
Step S15. If the data processing amount type conforms to a second type, instruct the GPU of the first terminal device to add the filter effect to the video image.
The data processing amount corresponding to the first type corresponds to a data processing amount range set for the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type.
In this embodiment of the present invention, to implement reliable communication of video images with added video effects, when a filter effect is added to a video image, either the CPU or the GPU of the first terminal device may be selected to implement the addition of the filter effect, according to the processing complexity of the filter effect to be added.
Specifically, when the processing complexity of the filter effect is low, the addition of the filter effect to the video image may be implemented on the CPU of the first terminal device; when the processing complexity of the filter effect is high, the addition of the filter effect to the video image may be implemented on the GPU of the first terminal device.
It should be noted that although the GPU is a dedicated graphics processing device, once the GPU is used for image processing, the OpenGL interface (Open Graphics Library, a professional graphics program interface specification that is cross-language and cross-platform) generally needs to be called, and calling the OpenGL interface heavily occupies the processing resources of the GPU, causing the processing resource usage of the terminal device to rise. If the GPU is simultaneously handling many graphics processing tasks, a GPU with sufficiently strong performance is required to satisfy the requirement of adding video effects to video images, which imposes higher requirements on the performance configuration of the terminal device. Therefore, based on the processing complexity of the filter effect, when the processing complexity is low and the filter effect can be implemented by the CPU, this embodiment of the present invention implements the addition of the filter effect on the CPU of the first terminal device, reducing calls to the OpenGL interface and lowering the processing pressure on the GPU; when the processing complexity is high and the filter effect cannot be implemented by the CPU, the filter effect is implemented on the GPU, ensuring that the addition of the filter effect is carried out smoothly.
In this way, a terminal device with a certain overall performance can satisfy the requirement of adding video effects to video images without strengthening the performance configuration of one particular component such as the GPU, which makes it possible for low-configuration terminal devices to implement filter effects and reduces the usage limitations of video communication based on video effects.
Optionally, the processing complexity of a filter effect may be determined by the data processing amount of the filter effect. Generally, the lower the data processing amount of the filter effect, the lower the processing complexity; the higher the data processing amount, the higher the processing complexity.
In this embodiment of the present invention, the data processing amount type of a filter effect may be defined according to the data processing amount of the filter effect, and a first type and a second type may be defined, where the data processing amount corresponding to the first type corresponds to the data processing amount range set for the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type.
That is, the boundary value between the data processing amounts corresponding to the first type and the second type may be a boundary between low processing complexity and high processing complexity analyzed in advance according to the CPU data processing capability of the first terminal device; for different models, and for CPUs with different data processing capabilities, the determined boundary value between the data processing amounts corresponding to the first type and the second type may also differ.
Thus, when it is determined that the data processing amount type of the selected filter effect conforms to the first type, it may be determined that the filter effect can be implemented by the CPU, and the filter effect is added to the video image in the CPU of the first terminal device; when the data processing amount type of the selected filter effect conforms to the second type, it may be determined that the filter effect is not suitable to be implemented by the CPU, and the addition of the filter effect to the video image may be implemented by the GPU, which specializes in graphics processing.
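The CPU/GPU dispatch decision described above can be sketched as follows. The function name, the unit of `filter_load`, and the single-threshold classification are illustrative assumptions made for the sketch; the embodiment itself only requires that the first-type amount fall within the range set for the CPU:

```python
def dispatch_filter(filter_load, cpu_range_max):
    """Classify a filter effect's data processing amount against the range
    set for the CPU: first type -> add the effect on the CPU; second type
    -> instruct the GPU. Units are arbitrary; the threshold is per-device."""
    if filter_load <= cpu_range_max:
        return "CPU"   # first type: within the CPU's set range
    return "GPU"       # second type: exceeds the CPU's set range
```

Because `cpu_range_max` is derived per device model, the same filter effect may be dispatched to the CPU on one phone and to the GPU on another.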
Optionally, the data processing amount type of the selected filter effect may be determined according to the data processing amount of the filter effect. If the data processing amount of the filter effect corresponds to the data processing amount of the first type, it may be determined that the data processing amount type conforms to the first type; if the data processing amount of the filter effect corresponds to the data processing amount of the second type, it may be determined that the data processing amount type conforms to the second type.
Optionally, the filter effect data may be stored locally on the first terminal device or downloaded from the network, and each filter effect may carry a corresponding data processing amount type identifier, which may correspond to the data processing amount type of the filter effect analyzed in advance based on the data processing amount of the filter effect.
Optionally, the data processing amount range set for the CPU may be determined according to the model capability of the first terminal device, where the model capability referred to here may be the upper limit of the data processing amount of the CPU (generally determined by the number of CPU cores and the like). In this embodiment of the present invention, the upper limit of the data processing amount of the CPU may be determined, a CPU occupation ratio range for processing filter effects may be set, and the data processing amount range may be determined from the upper limit of the data processing amount and the CPU occupation ratio range (for example, by multiplying the two).
Obviously, this is only one optional manner of setting the data processing amount range for the CPU; the set data processing amount range should be within the data processing capability of the CPU. This embodiment of the present invention may also determine the current data processing amount range of the CPU in combination with the current idle resources of the CPU (for example, a set resource proportion of the current idle resources), so that the data processing amount range set for the CPU changes dynamically with the idle resources of the CPU.
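The two range computations just described (a static product of the CPU's upper limit and an occupation ratio, and a dynamic variant scaled by idle resources) can be sketched as follows; the function name and units are hypothetical:

```python
def cpu_filter_budget(cpu_max_load, occupation_ratio, idle_fraction=None):
    """Sketch of the range computation described above: the data processing
    budget that filter effects may use on the CPU is the CPU's data
    processing upper limit multiplied by the occupation ratio reserved for
    filters; when the current idle fraction is known, the budget shrinks
    proportionally with it (dynamic variant)."""
    budget = cpu_max_load * occupation_ratio
    if idle_fraction is not None:
        budget *= idle_fraction   # dynamic adjustment to current idle resources
    return budget
```

For example, a CPU with an upper limit of 1000 units and a 20% occupation ratio for filters yields a budget of 200 units, which halves to 100 units when only half the CPU is idle.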
Under different models in the embodiments of the present invention, the data processing amount ranges set for the CPU are different; that is, under different models, the data processing amounts corresponding to the first type and the second type are different.
Optionally, the filter effect implemented in the CPU may be based on data in the YUV format; that is, if the filter effect data or the video image acquired from the image acquisition apparatus is not in the YUV format, it needs to be converted into the YUV format before the filter effect can be added to the video image.
Optionally, the filter effect implemented in the GPU may be based on data in the RGB format; that is, in this embodiment of the present invention, the video image may be rendered into the GPU and stored in the RGB format. After the GPU adds the filter effect to the video image and the video image with the added filter effect is transmitted to the CPU, the CPU further converts it into the YUV format for subsequent video encoding processing and the like.
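The RGB-to-YUV conversion the CPU performs on GPU output before encoding is commonly based on the BT.601 coefficients; a per-pixel sketch is shown below. The embodiment does not specify which conversion matrix is used, so BT.601 full-range is an assumption here:

```python
def rgb_to_yuv(r, g, b):
    """Per-pixel BT.601 full-range RGB -> YUV conversion, the kind of
    conversion performed on RGB data coming back from the GPU before
    video encoding (which expects YUV)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128.0
    return y, u, v
```

In practice this conversion is done for the whole frame at once (and usually with SIMD or a pixel-format conversion library rather than per-pixel Python), but the arithmetic is the same.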
Step S16. Determine a video image with an added video effect, at least according to the video image with the added filter effect.
Optionally, the video image with an added video effect may be, in addition to a video image with only an added filter effect, a video image with both an added filter effect and an added face pendant image.
Step S17. Transmit the video image with the added video effect to the second terminal device.
Optionally, the first terminal device may perform video encoding processing on the video image with the added video effect, and then transmit the encoded video image to the second terminal device. Optionally, before performing the video encoding processing, the first terminal device may further perform pre-encoding processing on the video image with the added video effect, where the pre-encoding processing includes at least one of the following processing manners: noise reduction and sharpening processing, such as dermabrasion (skin smoothing) processing.
Optionally, when adding the filter effect to the video image in the CPU of the first terminal device, this embodiment of the present invention may call a predetermined filter effect implementation algorithm to add the filter effect to the video image;
and when instructing the GPU of the first terminal device to add the filter effect to the video image, this embodiment of the present invention may call the OpenGL interface to instruct the GPU to add the filter effect to the video image through the OpenGL interface by using the predetermined filter effect implementation algorithm.
In the video communication method provided by this embodiment of the present invention, after selecting a filter effect, the first terminal device may decide, according to the data processing amount type of the filter effect, whether the filter effect is implemented in the CPU or in the GPU. That is, when the processing complexity corresponding to the data processing amount type is low and conforms to the first type, the filter effect may be added to the video image in the CPU of the first terminal device; when the processing complexity corresponding to the data processing amount type is high and conforms to the second type, the GPU of the first terminal device may be instructed to add the filter effect to the video image. Thus, this embodiment of the present invention can reasonably allocate, according to the data processing amount type of the filter effect, the processing device that performs the addition of the filter effect, so that the data processing pressure involved in adding the filter effect can be shared between the CPU and the GPU. A terminal device with a certain overall performance can then satisfy the requirement of adding video effects to video images without strengthening the performance configuration of one particular component such as the GPU, which makes it possible for low-configuration terminal devices to implement filter effects and reduces the usage limitations of video communication based on video effects.
Optionally, if a face pendant further needs to be added to the video image, the first terminal device may adjust, according to the current network bandwidth, the frame rate of the video images subjected to video effect processing, following the principle that the higher the current network bandwidth, the higher the frame rate, and the lower the current network bandwidth, the lower the frame rate. When the network bandwidth is high, the frame rate of the video images subjected to video effect processing is increased, improving the video image quality on the basis of ensuring the reliability of the video communication; when the network bandwidth is low, the frame rate is reduced, ensuring the reliability of the video communication.
Optionally, the adjustment of the frame rate of the video images subjected to video effect processing may be implemented by adjusting the frame rate at which the video images collected by the image acquisition apparatus are acquired, as shown in FIG. 3. FIG. 3 is another flowchart of the video communication method, and the procedure may include the following steps:
Step S20. The first terminal device acquires the current network bandwidth.
The first terminal device may acquire the current network bandwidth in various manners, for example, by exchanging bandwidth detection packets with a server having a network bandwidth detection function to detect the current network bandwidth.
Step S21. The first terminal device determines, according to a preset first correspondence between network bandwidth ranges and image acquisition frame rates, the image acquisition frame rate corresponding to the network bandwidth range in which the current network bandwidth falls.
The network bandwidth range is positively correlated with the corresponding image acquisition frame rate; that is, the higher the network bandwidth range, the higher the corresponding image acquisition frame rate.
In this embodiment of the present invention, the image acquisition frame rates corresponding to different network bandwidth ranges may be preset, where different network bandwidth ranges correspond to different image acquisition frame rates, and the higher the network bandwidth range, the higher the corresponding image acquisition frame rate (that is, the network bandwidth range is positively correlated with the corresponding image acquisition frame rate), thereby forming the first correspondence.
Optionally, because the frame rate at which the first terminal device acquires video images is limited by the acquisition frame rate of the image acquisition apparatus (such as the camera) of the first terminal device, the upper limit of the image acquisition frame rate may be the acquisition frame rate of the image acquisition apparatus. Table 1 below shows an optional illustration of network bandwidth ranges and image acquisition frame rates, where the maximum frame rate is the acquisition frame rate of the image acquisition apparatus.
Image acquisition frame rate    Network bandwidth range
fps < 8                         bandwidth < 80 kbps
8 < fps <= 10                   bandwidth < 100 kbps
10 < fps <= 12                  100 kbps < bandwidth < 150 kbps
12 < fps <= 15                  150 kbps < bandwidth < 200 kbps
fps > 15                        bandwidth > 200 kbps
Table 1. Schematic table of the first correspondence
After determining the current network bandwidth of the first terminal device, this embodiment of the present invention may determine, from the first correspondence, the image acquisition frame rate corresponding to the network bandwidth range in which the current network bandwidth of the first terminal device falls.
Step S22: The first terminal device acquires, at the determined image acquisition frame rate, the video images captured by the image capture apparatus of the first terminal device.
The image acquisition frame rate is positively correlated with the current network bandwidth of the first terminal device: the higher the current network bandwidth, the higher the image acquisition frame rate, and the lower the current network bandwidth, the lower the image acquisition frame rate.
Accordingly, by acquiring the captured video images at this image acquisition frame rate, more video image frames are obtained per unit time when the current network bandwidth of the first terminal device is high; subsequently adding video special effects to this number of frames per unit time yields higher image quality for the video images carrying the effects.
When the current network bandwidth of the first terminal device is low, this embodiment of the present invention can acquire the captured video images at a lower image acquisition frame rate, so that fewer video image frames are obtained per unit time; subsequently adding video special effects to this smaller number of frames per unit time safeguards the reliability of the video communication.
Optionally, steps S20 to S22 may be regarded as an optional implementation of step S11 shown in FIG. 2, namely acquiring the video images captured by the image capture apparatus of the first terminal device.
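As an illustration of how steps S20 to S22 can be realized, the Table 1 lookup may be sketched as follows; the concrete frame-rate value chosen within each band and the exact threshold constants are illustrative assumptions, not values mandated by the embodiment.

```python
# Sketch of the Table 1 lookup (step S21): map the current network bandwidth
# (kbps) to an image acquisition frame rate, capped at the capture frame rate
# of the image capture apparatus. The per-band fps values are illustrative.
BANDS = [
    (80, 7),    # bandwidth < 80 kbps            -> fps < 8
    (100, 10),  # bandwidth < 100 kbps           -> 8 < fps <= 10
    (150, 12),  # 100 kbps < bandwidth < 150 kbps -> 10 < fps <= 12
    (200, 15),  # 150 kbps < bandwidth < 200 kbps -> 12 < fps <= 15
]

def image_acquisition_fps(bandwidth_kbps: float, capture_fps: int) -> int:
    """Return the frame rate at which video images are acquired (step S22)."""
    for upper_bw, fps in BANDS:
        if bandwidth_kbps < upper_bw:
            # never exceed the device's capture frame rate
            return min(fps, capture_fps)
    # bandwidth > 200 kbps: fps > 15, bounded by the capture frame rate
    return capture_fps
```

With a 30 fps camera, for example, a 120 kbps link yields 12 fps while a 250 kbps link yields the full 30 fps, matching the positive correlation described above.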
Step S23: The first terminal device adds the selected video special effect to the acquired video images.
Adding the selected video special effect to a video image may include, for example, adding a face pendant to the video image and/or adding a filter effect to the video image. If a filter effect is selected, the method shown in FIG. 2 is used to decide whether the filter effect is implemented on the CPU or on the GPU.
If a face pendant is to be added, the addition of the face pendant may be implemented on the GPU.
Optionally, and further, if the filter effect is implemented on the GPU, the GPU may add the filter effect to a video image to which the face pendant has been added, or add the face pendant to a video image to which the filter effect has been added; if the filter effect is implemented on the CPU, the CPU may add the filter effect to the video image, fed back by the GPU, to which the face pendant has been added.
Step S24: The first terminal device transmits the video images carrying the video special effect to the second terminal device.
Optionally, the first terminal device may perform video encoding on the video images carrying the video special effect and then transmit the encoded video images to the second terminal device. Optionally, before the video encoding, the first terminal device may further perform pre-encoding processing on those video images; the pre-encoding processing includes at least one of the following: noise reduction and sharpening, for example skin smoothing.
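The noise-reduction/sharpening pre-encoding step typically touches only the luminance plane. The sketch below uses an unsharp mask, a common sharpening technique assumed here purely for illustration; the embodiment does not prescribe a specific algorithm.

```python
import numpy as np

def sharpen_y_plane(y: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Unsharp mask on a Y (luminance) plane: y + amount * (y - blur(y))."""
    y = y.astype(np.float32)
    # 3x3 box blur via padded neighborhood averaging (no external deps)
    p = np.pad(y, 1, mode="edge")
    blur = sum(p[i:i + y.shape[0], j:j + y.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    out = y + amount * (y - blur)
    return np.clip(out, 0, 255).astype(np.uint8)
```

A flat region is left unchanged, while contrast across edges is amplified, which is the intended sharpening behavior before encoding.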
In this embodiment of the present invention, the first terminal device may adjust the image acquisition frame rate according to the current network bandwidth, the current network bandwidth being positively correlated with the adjusted image acquisition frame rate. The first terminal device may then acquire, at the adjusted image acquisition frame rate, the video images captured by its image capture apparatus and add the selected video special effect to the acquired video images, so that the frame rate of the video images undergoing special-effect processing is positively correlated with the current network bandwidth. That is, when the current network bandwidth is high, the frame rate of the video images undergoing special-effect processing can be raised on the basis of guaranteed communication reliability, improving the image quality of the video images carrying the effects; when the current network bandwidth is low, that frame rate can be lowered, reducing the resource consumption of the terminal device and the network bandwidth used for video communication, and safeguarding the reliability of the video communication.
By dynamically adjusting the image acquisition frame rate to match the current network bandwidth, the video communication method provided by this embodiment of the present invention adapts the frame rate of the video images undergoing special-effect processing to the current network bandwidth, so that video communication with added video special effects can be carried out reliably.
Further, this embodiment of the present invention may determine the video special effect selected by the user and add it to the acquired video images only when the selected video special effect corresponds to the device configuration information of the first terminal device and to the current network bandwidth. In this way, when the device configuration is low and the network bandwidth is low, the first terminal device reduces the executable video special effect types, further lowering its resource consumption.
Optionally, FIG. 4 is a flowchart of a method for determining the selected video special effect according to an embodiment of the present invention. The method may be applied to the first terminal device. Referring to FIG. 4, the method may include the following steps.
Step S100: Determine, according to the device configuration information of the first terminal device and the current network bandwidth, at least one currently executable video special effect type.
Optionally, this embodiment of the present invention may preset a second correspondence among device configuration levels, network bandwidth ranges, and video special effect types, where one device configuration level and one network bandwidth range may correspond to at least one video special effect type; the higher the device configuration level and the network bandwidth range, the greater the number of corresponding video special effect types; and one video special effect type may correspond to at least one video special effect.
That is, with a higher device configuration level and network bandwidth range, the first terminal device has more device and bandwidth resources and can implement more video special effect types. Table 2 below illustrates the second correspondence among device configuration levels, network bandwidth ranges, and video special effect types; in it, a basic filter involves a low data processing load, while a complex filter involves a high data processing load.
[Table 2 — second correspondence among device configuration levels, network bandwidth ranges, and video special effect types; provided as image PCTCN2018071363-appb-000001 in the original publication]

Table 2
This embodiment of the present invention may obtain device configuration information of the first terminal device, such as its memory, CPU, and GPU, and determine the device configuration level of the first terminal device based on a predetermined device configuration grading policy. In general, a higher device configuration level indicates a higher device configuration of the terminal device, for example more memory or more CPU cores.
After determining the device configuration level of the first terminal device and the network bandwidth range in which the current network bandwidth falls, this embodiment of the present invention may determine, from the second correspondence, the video special effect types corresponding to that device configuration level and that network bandwidth range, thereby determining at least one video special effect type currently executable by the first terminal device.
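Because Table 2 is reproduced only as an image, the sketch below uses a hypothetical second correspondence that merely preserves the stated property (higher device configuration levels and bandwidth ranges map to more effect types); all level numbers and effect-type names are invented for illustration.

```python
# Hypothetical second correspondence (step S100). Keys: (device configuration
# level, network bandwidth level), both small integers where higher is better;
# values: the currently executable video special effect types.
SECOND_CORRESPONDENCE = {
    (0, 0): ["face_pendant"],
    (0, 1): ["face_pendant", "basic_filter"],
    (1, 0): ["face_pendant", "basic_filter"],
    (1, 1): ["face_pendant", "basic_filter", "complex_filter"],
}

def executable_effect_types(config_level: int, bandwidth_level: int):
    """Look up the effect types a device may currently execute."""
    return SECOND_CORRESPONDENCE[(config_level, bandwidth_level)]
```

The lookup captures the design choice in the text: a low-end device on a slow link is offered fewer (cheaper) effect types, shrinking its resource consumption.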
Step S110: Display the video special effects corresponding to the at least one executable video special effect type.
Optionally, the first terminal device may display, in a video special effect selection area of the video communication interface, the video special effect types it can currently execute; each displayed video special effect type may correspond to at least one video special effect, so that the user can select the video special effect to be added to the video images.
Step S120: Determine the video special effect selected from the displayed video special effects.
Optionally, this embodiment of the present invention may determine the filter effect selected from the displayed video special effects, that is, implement step S12 shown in FIG. 2 through the method shown in FIG. 4. Optionally, in addition to the selection of a filter effect, this embodiment of the present invention may also implement the selection of a face pendant image.
Combining the methods shown in FIG. 3 and FIG. 4: when the current network bandwidth of the first terminal device is high, this embodiment of the present invention can acquire the video images to which effects are to be added at a higher image acquisition frame rate and, when the device configuration level of the first terminal device is high, add video special effects to the video images from a larger set of executable video special effect types, improving the image quality of the video images carrying the effects on the basis of guaranteed video communication reliability. When the current bandwidth of the first terminal device is low and its device configuration level is low, this embodiment can acquire the video images to which effects are to be added at a lower image acquisition frame rate and add video special effects from a smaller set of executable video special effect types, reducing the resource consumption of the terminal device and the network bandwidth consumed during video communication and safeguarding the reliability of the video communication.
Optionally, the video communication method described above may be executed by the CPU of the first terminal device. FIG. 5 shows the structure of the first terminal device. Referring to FIG. 5, the first terminal device may include: an image capture apparatus 11, a CPU (Central Processing Unit) 12, and a GPU (Graphics Processing Unit) 13; the structure of the second terminal device is similar to that of the first terminal device.
Optionally, after acquiring, at the determined image acquisition frame rate, the video images captured by the image capture apparatus, this embodiment of the present invention can add a face pendant to the video images. To further reduce the CPU resource consumption of the first terminal device, this embodiment may use the grayscale channel image of a video image to determine the position at which the face pendant is added in the video image.
Optionally, FIG. 6 shows a schematic flow in which the first terminal device adds a face pendant to a video image. The flow may include the following steps.
Step S30: The CPU acquires, at the determined image acquisition frame rate, the video images captured by the image capture apparatus.
When the first user is in video communication with the second user, the first terminal device may enable an image capture apparatus such as a camera to capture local video images; such a video image generally includes the face of the first user and a background image of the environment in which the first user is located. After determining the image acquisition frame rate corresponding to the current network bandwidth, the first terminal device may acquire video images from the image capture apparatus at that frame rate.
Optionally, the video images captured by the image capture apparatus may be YUV video images (where Y denotes luminance and U and V collectively denote chrominance) or RGB video images (where R denotes red, G denotes green, and B denotes blue); the format of the captured video images may depend on the type of the image capture apparatus.
Step S31: The CPU determines the selected face pendant image.
When the user of the first terminal device wants to add a face pendant effect to the face in the video images, the user may select a face pendant image from a network server or from the local storage of the first terminal device; after detecting the face pendant image selection instruction input by the user, the CPU can determine the selected face pendant image.
Optionally, the video communication interface of the first terminal device may display a face pendant image selection area, which may show face pendant images downloaded from a network server and/or stored locally on the first terminal device; the user may select a face pendant image from this area.
Step S32: The CPU extracts the grayscale channel image of each acquired video image.
Based on the face pendant image selected in step S31, the CPU determines that this face pendant image is subsequently to be added to the video images. To identify the position at which the face pendant image is added in a video image, the CPU extracts the grayscale channel image of each acquired video image frame.
To reduce the data processing load and lower the CPU's resource consumption, the CPU may locate the facial feature points using only the grayscale channel image of the video image. Specifically, the CPU may extract the grayscale channel image of the video image, for example the Y channel image of a YUV video image (the Y channel is the luminance channel, and a Y channel image appears similar to a grayscale image) or the G channel image of an RGB video image (a G channel image likewise appears similar to a grayscale image).
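For a planar YUV buffer such as I420 (a common camera output layout, assumed here for illustration), extracting the grayscale-like channel is inexpensive because the Y plane occupies the first width x height bytes; for interleaved RGB, the G channel can be sliced out directly:

```python
import numpy as np

def extract_gray_channel(frame: bytes, width: int, height: int,
                         fmt: str = "I420") -> np.ndarray:
    """Return the grayscale-like channel used for landmark detection.

    I420: the Y plane occupies the first width*height bytes of the buffer.
    RGB:  take the G channel of an interleaved height x width x 3 buffer.
    """
    buf = np.frombuffer(frame, dtype=np.uint8)
    if fmt == "I420":
        return buf[:width * height].reshape(height, width).copy()
    if fmt == "RGB":
        return buf.reshape(height, width, 3)[:, :, 1].copy()
    raise ValueError(f"unsupported format: {fmt}")
```

No color conversion or per-pixel arithmetic is needed, which is exactly why operating on this channel alone lowers the CPU load.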
Step S33: The CPU identifies the position of each facial feature point in the grayscale channel image.
Optionally, the CPU may process the grayscale channel image using face recognition technology to locate the facial feature points in it, such as the feature points of the facial features (eyebrows, eyes, ears, nose, and mouth). Specifically, the CPU may determine the face region in the grayscale channel image using face detection technology, locate the facial feature points in that face region (for example, the feature points of the facial features), and determine the position of each facial feature point in the grayscale channel image; the resulting positions of the facial feature points in the grayscale channel image may be regarded as their positions in the video image captured by the image capture apparatus.
Optionally, because the image captured by the image capture apparatus may differ in orientation from the image finally displayed on the screen (generally by an angle of 90 or 180 degrees), the face in the captured video image may not be upright. In this embodiment of the present invention, the CPU may rotate the grayscale channel image so that the face in it is upright before locating the facial feature points.
Further, because the video images captured by the camera are generally large, in addition to locating the facial feature points directly on the grayscale channel image, this embodiment of the present invention may first downscale the grayscale channel image to obtain a reduced grayscale channel image, locate the facial feature points on the reduced grayscale channel image, and then convert the positions of the facial feature points located in the reduced grayscale channel image into their positions in the grayscale channel image.
Optionally, the position of a facial feature point in an image may be defined by the coordinates of the facial feature point in the image.
Step S34: The CPU determines, according to the facial feature points corresponding to the face pendant image and the positions of the facial feature points, the position at which the face pendant image is added in the video image.
Optionally, each face pendant image has corresponding facial feature points, and the face pendant image is added at those feature points; for example, a rabbit-ears face pendant image corresponds to the ear feature points of the face, and a glasses face pendant image corresponds to the eye feature points. After determining the face pendant image to be added to the video image, the CPU may determine the position at which the face pendant image is added in the video image according to the facial feature points corresponding to that face pendant image and the positions of the facial feature points in the grayscale channel image determined in step S33.
Specifically, this embodiment of the present invention may, according to the facial feature points corresponding to the selected face pendant image, match the positions corresponding to those facial feature points from among the positions of the facial feature points in the grayscale channel image, obtaining the position at which the face pendant image is added in the video image.
Step S35: The CPU renders the video image and the face pendant image into the GPU of the first terminal device to generate, in the GPU, a video image and a face pendant image in a first image format, and transmits the addition position to the GPU.
Optionally, the CPU may call the OpenGL interface (Open Graphics Library, a cross-language, cross-platform graphics programming interface specification) to render the video image and the face pendant image into textures of the GPU, and store the video image and the face pendant image in the first image format in the GPU.
Optionally, the first image format may be the RGB format. Accordingly, if the video image and/or the face pendant image is in the YUV format, the CPU needs to perform image format conversion when rendering them into GPU textures, so that the video image and the face pendant image stored in the GPU are in the RGB format.
When rendering the video image and the face pendant image into GPU textures, the CPU may also notify the GPU of the addition position determined in step S34, so that the GPU can add the face pendant image onto the video image.
Step S36: The GPU adds the video image in the first image format and the face pendant image together according to the addition position, obtaining a special-effect video image in the first image format.
Optionally, the GPU's adding of the video image in the first image format and the face pendant image may consist in adding the face pendant image onto the video image; specifically, the GPU may add the face pendant image onto the video image in the first image format according to the addition position, obtaining the special-effect video image in the first image format.
Optionally, if it is determined based on the method shown in FIG. 2 that the filter effect is to be added on the GPU, the GPU may also add the filter effect to the video image, obtaining a special-effect video image in the first image format carrying both the filter effect and the face pendant image.
Step S37: The GPU transmits the special-effect video image in the first image format to the CPU.
Optionally, the CPU may convert the special-effect video image from the first image format into a second image format, perform video encoding on the special-effect video image in the second image format, and transmit the encoded special-effect video image to the second terminal device. Optionally, the second image format may be the YUV format.
Optionally, to facilitate video encoding, if the special-effect video image in the first image format is in the RGB format, the CPU may convert it from the RGB format to the YUV format to obtain a special-effect video image in the YUV format, and perform video encoding on the YUV special-effect video image to obtain the encoded special-effect video image.
Further, if it is determined that the filter effect is implemented on the CPU, the CPU may add the filter effect based on the video image in the second image format to which the face pendant has been added. Before performing video encoding on the video image in the second image format carrying the face pendant and the filter effect, the CPU may perform pre-encoding processing on the YUV video image; the pre-encoding processing includes at least one of the following: noise reduction and sharpening (generally applied only to the Y channel image), skin smoothing (generally applied only to the Y channel image), and some very special filters (generally applied only to the UV channel images). The encoded video image is then transmitted to the second terminal device.
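The RGB-to-YUV conversion mentioned above can be sketched with the standard BT.601 full-range equations; the embodiment does not name a specific conversion matrix, so BT.601 is an assumption here.

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 RGB image to YUV (BT.601, full range)."""
    r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    yuv = np.stack([y, u, v], axis=-1)
    return np.clip(np.round(yuv), 0, 255).astype(np.uint8)
```

As a sanity check, white maps to (255, 128, 128) and black to (0, 128, 128): achromatic pixels carry no chrominance, so U and V sit at the 128 midpoint.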
The following describes the flow, provided by an embodiment of the present invention, of adding a face pendant and a filter effect to a video image, taking as an example video images captured by the image capture apparatus in the YUV format (an optional form of the second image format). For ease of description, the flow is illustrated only from the perspective of processing by the first terminal device; referring to FIG. 7, it may include the following steps.
Step S40: The CPU acquires, at the determined image acquisition frame rate, the YUV video images captured by the camera.
Step S41: The CPU determines the selected face pendant image and the selected filter effect, and determines that the data processing load type of the filter effect is the first type.
Step S42: The CPU extracts the Y channel image from the YUV video image.
Optionally, the Y channel image is merely one optional form of the grayscale channel image.
The Y channel of a YUV video image appears similar to a grayscale image. By extracting the Y channel image of the YUV video image for the subsequent locating of the positions of the facial feature points, the data processing pressure on the first terminal device can be reduced, lowering the CPU's resource consumption.
Step S43: The CPU downscales the Y channel image according to a set ratio, obtaining a reduced Y channel image.
Optionally, the reduced Y channel image is merely one optional form of the reduced grayscale channel image.
Optionally, this embodiment of the present invention may set the size of the reduced image. When the size of the video images captured by the camera is fixed, a reduction ratio for the video image can be set, and the video image is downscaled at that set ratio to reduce the data processing load of the face recognition technology and improve its processing efficiency. Optionally, the size of the reduced image must not affect the accuracy of the face recognition; the specific size may be set according to the actual situation.
After the Y channel image is downscaled, the reduced Y channel image is obtained.
Step S44: The CPU rotates the reduced Y channel image according to a set rotation angle.
The set rotation angle may be the angular difference, such as 90 or 180 degrees, between the video image captured by the camera and the image displayed on the screen. Rotating the reduced Y channel image by the set rotation angle makes the face in the reduced Y channel image upright.
Step S45: The CPU identifies the position of each facial feature point in the rotated reduced Y channel image.
Optionally, this embodiment of the present invention may apply face recognition technology to perform face recognition on the rotated Y channel image and locate the position of each facial feature point in it.
Step S46: The CPU converts the positions of the facial feature points in the rotated reduced Y channel image into their positions in the Y channel image, obtaining the positions of the facial feature points in the YUV video image.
Optionally, the specific conversion process may be: rotating the reduced Y channel image in reverse by the set rotation angle (opposite to the direction of rotation in step S44) and determining the positions of the facial feature points in the reverse-rotated reduced Y channel image; and then enlarging the reverse-rotated reduced Y channel image according to the set ratio and determining the positions of the facial feature points in the enlarged Y channel image (that is, the Y channel image).
The positions of the facial feature points in the Y channel image obtained after the conversion may be regarded as the positions of the facial feature points in the YUV video image.
步骤S47、CPU根据所述人脸挂件图像对应的人脸特征点,及YUV视频图像中各人脸特征点的位置,确定所述人脸挂件图像在所述YUV视频图像中的添加位置。Step S47: The CPU determines, according to the face feature point corresponding to the face pendant image and the position of each face feature point in the YUV video image, the added position of the face pendant image in the YUV video image.
可选的，设A为视频图像上的一像素点，人脸挂件在视频图像上的人脸位置所覆盖的区域大小为点B，本发明实施例可通过A和B点，确定所述添加位置C点；具体的，C=alpha*A+(1-alpha)*B，alpha是人脸挂件图像的透明度值。Optionally, let A be a pixel point on the video image, and let B be the corresponding point within the region covered by the face pendant at the face position on the video image; the embodiment of the present invention can determine the added-position point C from points A and B; specifically, C = alpha*A + (1-alpha)*B, where alpha is the transparency value of the face pendant image.
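The per-pixel formula of step S47 can be sketched directly; the function name and the per-channel application are illustrative assumptions:

```python
def blend(a, b, alpha):
    """Compute C = alpha*A + (1-alpha)*B from step S47, per channel.

    a: pixel value from the video image (point A), b: pixel value from
    the face pendant image region (point B), alpha: transparency value
    of the pendant image in [0, 1] -- alpha = 1 means the pendant pixel
    is fully transparent, so the video pixel shows through unchanged.
    """
    return alpha * a + (1 - alpha) * b

# A half-transparent pendant pixel over a video pixel:
print(blend(200, 100, 0.5))  # 150.0
```

Applying this at every pixel of the region the pendant covers yields the composited special effect image produced by the GPU in step S49.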
步骤S48、调用OpenGL接口将所述YUV视频图像和所述人脸挂件图像渲染到GPU中,并在GPU中保存RGB格式的视频图像和人脸挂件图像;及将所述添加位置传输给GPU。Step S48, calling the OpenGL interface to render the YUV video image and the face pendant image to the GPU, and saving the RGB format video image and the face pendant image in the GPU; and transmitting the added position to the GPU.
步骤S49、GPU根据所述添加位置,将RGB格式的视频图像和人脸挂件图像相添加,得到RGB格式的特效视频图像。Step S49: The GPU adds the video image of the RGB format and the face pendant image according to the added position to obtain a special effect video image of the RGB format.
由于所获取的视频图像是多帧的视频图像序列,本发明实施例可通过将人脸挂件图像和视频图像序列逐帧对应叠加在一起,实现在多帧的视频图像序列上添加人脸挂件图像。Since the acquired video image is a multi-frame video image sequence, the embodiment of the present invention can add a face pendant image to the multi-frame video image sequence by superimposing the face pendant image and the video image sequence frame by frame. .
步骤S50、GPU将RGB格式的特效视频图像传输给CPU。Step S50: The GPU transmits the special effect video image in the RGB format to the CPU.
步骤S51、CPU将RGB格式的特效视频图像转换为,YUV格式的特效视频图像,并添加滤镜效果。Step S51: The CPU converts the special effect video image of the RGB format into a special effect video image of the YUV format, and adds a filter effect.
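The RGB-to-YUV conversion mentioned in step S51 can be sketched with the common BT.601 full-range matrix; the patent does not specify which conversion matrix is used, so the coefficients below are one conventional choice, not the claimed implementation:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using BT.601 full-range coefficients
    (an assumed, conventional choice of conversion matrix)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v
```

White (255, 255, 255) maps to luma 255 with chroma near zero, which is a quick sanity check on any chosen coefficient set.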
可选的，如果CPU判断所述滤镜效果的数据处理量类型是第二类型，则CPU还可指示GPU进行滤镜效果在视频图像上的添加，从而GPU可将RGB格式的添加有人脸挂件图像和滤镜效果的特效视频图像传输给CPU，由CPU转换为YUV格式。Optionally, if the CPU determines that the data processing amount type of the filter effect is the second type, the CPU may also instruct the GPU to add the filter effect on the video image, so that the GPU can transmit the RGB-format special effect video image, to which both the face pendant image and the filter effect have been added, to the CPU, which converts it into the YUV format.
可选的，CPU还可对YUV格式的添加人脸图像和滤镜效果的特效视频图进行编码前处理，再将编码前处理后的特效视频图像进行视频编码处理，向第二终端设备传输视频编码处理后的特效视频图像；Optionally, the CPU may also perform pre-encoding processing on the YUV-format special effect video image with the face image and the filter effect added, then perform video encoding processing on the pre-processed special effect video image, and transmit the encoded special effect video image to the second terminal device;
可选的,CPU可通过第一终端设备的通信模块向第二终端设备传输视频编码处理后的特效视频图像;通信模块如WIFI、GPRS通信模块等具有网络通信能力的通信设备。Optionally, the CPU may transmit the video effect processed video image to the second terminal device by using the communication module of the first terminal device; the communication module is a communication device having network communication capability, such as a WIFI or a GPRS communication module.
可选的，图像采集装置采集的视频图像也可以为RGB格式，此情况下的处理方式与图6和图7所示的区别在于：CPU可以所确定的图像获取帧率，获取摄像头采集的RGB视频图像，并基于G通道图像进行RGB视频图像中各人脸特征点的位置定位；同时，CPU可直接调用OpenGL接口将所述RGB视频图像和所选取的RGB人脸挂件图像渲染到GPU中；后续，CPU获取到GPU传输的RGB格式的特效视频图像后，可转换为YUV格式的特效视频图像进行视频编码前处理，和视频编码后处理，进而将视频编码处理后的视频图像传输给第二终端设备。Optionally, the video image collected by the image collection device may also be in RGB format. The processing in this case differs from that shown in FIG. 6 and FIG. 7 in that: the CPU may acquire the RGB video images collected by the camera at the determined image acquisition frame rate, and locate the positions of the facial feature points in the RGB video image based on the G channel image; meanwhile, the CPU may directly call the OpenGL interface to render the RGB video image and the selected RGB face pendant image into the GPU; subsequently, after obtaining the RGB-format special effect video image transmitted by the GPU, the CPU may convert it into a YUV-format special effect video image for pre-encoding processing and video encoding processing, and then transmit the encoded video image to the second terminal device.
可选的，本发明实施例在CPU上实现滤镜效果可以是基于YUV格式的视频图像，及滤镜效果数据实现；在GPU上实现滤镜效果可以是基于RGB格式的视频图像，及滤镜效果数据实现。Optionally, in the embodiment of the present invention, implementing the filter effect on the CPU may be based on the YUV-format video image and the filter effect data, while implementing the filter effect on the GPU may be based on the RGB-format video image and the filter effect data.
可选的，一方面，CPU可将从图像采集装置获取的视频图像、及用户选取的人脸挂件图像渲染到GPU中，并指示GPU在所述视频图像上添加所选取的人脸挂件图像及滤镜效果；可选的，GPU可以是先在视频图像上添加滤镜效果，再在添加滤镜效果的视频图像上实现人脸挂件图像的添加，也可以是先在视频图像上添加人脸挂件图像，再在添加人脸挂件图像的视频图像上实现滤镜效果的添加；Optionally, in one aspect, the CPU may render the video image acquired from the image collection device and the face pendant image selected by the user into the GPU, and instruct the GPU to add the selected face pendant image and filter effect on the video image; optionally, the GPU may first add the filter effect on the video image and then add the face pendant image on the video image with the filter effect added, or may first add the face pendant image on the video image and then add the filter effect on the video image with the face pendant image added;
可选的，另一方面，CPU可将从图像采集装置获取的视频图像、及用户选取的人脸挂件图像渲染到GPU中，并指示GPU在所述视频图像上添加所选取的人脸挂件图像；从而CPU可在获取到的GPU传输的添加有人脸挂件图像的视频图像上，添加所选取的滤镜效果。Optionally, in another aspect, the CPU may render the video image acquired from the image collection device and the face pendant image selected by the user into the GPU, and instruct the GPU to add the selected face pendant image on the video image; the CPU can then add the selected filter effect on the acquired video image, transmitted by the GPU, to which the face pendant image has been added.
可选的,在对添加视频特效的视频图像进行视频编码处理时,本发明实施例还可根据设备配置等级、网络带宽范围确定视频编码时的编码分辨率;Optionally, when performing video encoding processing on the video image to which the video effect is added, the embodiment of the present invention may further determine a coding resolution when the video is encoded according to the device configuration level and the network bandwidth range;
可选的,图8示出了对添加视频特效的视频图像进行视频编码处理的方法流程图,该方法可应用于第一终端设备的CPU,参照图8,该流程可以包括:Optionally, FIG. 8 is a flowchart of a method for performing a video encoding process on a video image to which a video effect is added. The method is applicable to a CPU of the first terminal device. Referring to FIG. 8, the process may include:
步骤S300、根据预置的设备配置等级、网络带宽范围与编码分辨率的第三对应关系，确定与所述第一终端设备的设备配置信息的设备配置等级，所述当前网络带宽所处的网络带宽范围对应的编码分辨率。Step S300: According to a preset third correspondence among device configuration levels, network bandwidth ranges and encoding resolutions, determine the encoding resolution corresponding to the device configuration level of the device configuration information of the first terminal device and the network bandwidth range in which the current network bandwidth falls.
可选的，编码分辨率可与设备配置等级、网络带宽范围均为正相关关系，并且编码分辨率的上限值选取为终端设备的自身最高分辨率；即如果网络带宽较高，设备配置等级较高，则本发明实施例可限制终端设备的自身最高分辨率，如iOS最高分辨率为640x480，Android人脸挂件最高分辨率为480x360，Android滤镜限制为1280x720，当人脸挂件和滤镜同时使用时，限制取为480x360；下表3示出第三对应关系的可选示意，可参照。Optionally, the encoding resolution may be positively correlated with both the device configuration level and the network bandwidth range, and the upper limit of the encoding resolution is selected as the terminal device's own maximum resolution; that is, even when the network bandwidth and the device configuration level are high, the embodiment of the present invention may cap the resolution at the terminal device's own maximum, e.g., a maximum of 640x480 on iOS, a maximum of 480x360 for Android face pendants, a limit of 1280x720 for Android filters, and a limit of 480x360 when a face pendant and a filter are used at the same time; Table 3 below shows an optional illustration of the third correspondence, for reference.
设备配置等级 Device configuration level | 网络带宽范围 Network bandwidth range | 编码分辨率 Encoding resolution
第一等级(低端机型) First level (low-end model) | 带宽<80kbps (bandwidth < 80 kbps) | 192x144
第一等级(低端机型) First level (low-end model) | 带宽<100kbps (bandwidth < 100 kbps) | 320x240
第二等级(中端机型) Second level (mid-range model) | 100kbps<带宽<150kbps (100 kbps < bandwidth < 150 kbps) | 320x240
第二等级(中端机型) Second level (mid-range model) | 150kbps<带宽<200kbps (150 kbps < bandwidth < 200 kbps) | 480x360
第三等级(高端机型) Third level (high-end model) | 带宽>200kbps (bandwidth > 200 kbps) | 480x360
表3 Table 3
步骤S310、以所确定的编码分辨率,对添加视频特效的视频图像进行视频编码处理。Step S310: Perform video encoding processing on the video image to which the video special effect is added at the determined encoding resolution.
可见，终端设备的网络带宽越高，设备配置等级越高，则可采用相对较高的编码分辨率进行视频编码处理，提升添加有视频特效的视频图像的图像质量；而网络带宽越低，设备配置等级越低，则可采用相对较低的编码分辨率进行视频编码处理，降低CPU和网络资源的占用，保障视频通信的可靠性。It can be seen that the higher the network bandwidth of the terminal device and the higher the device configuration level, the higher the encoding resolution that can be used for video encoding processing, improving the image quality of the video image with the video special effect added; conversely, the lower the network bandwidth and the device configuration level, the lower the encoding resolution that can be used, reducing CPU and network resource usage and ensuring the reliability of video communication.
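The selection logic of steps S300 and S310 can be sketched as a table lookup. The resolution values come from Table 3, while the handling of boundary bandwidths (exactly 100, 150 or 200 kbps) is an assumption here, since the table leaves them undefined:

```python
def encoding_resolution(level, bandwidth_kbps):
    """Pick the encoding resolution per Table 3.

    level: 1 = low-end, 2 = mid-range, 3 = high-end model.
    Boundary bandwidths are resolved toward the lower resolution
    (an assumption; Table 3 does not define them).
    """
    if level == 1:
        return (192, 144) if bandwidth_kbps < 80 else (320, 240)
    if level == 2:
        return (320, 240) if bandwidth_kbps <= 150 else (480, 360)
    return (480, 360)
```

Both inputs are positively correlated with the chosen resolution, matching the design intent stated above.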
可选的,第二终端设备在获取到第一终端设备传输的添加有视频特效的视频图像后,可对该视频图像进行视频解码处理,并进行展示。Optionally, after acquiring the video image added by the first terminal device and adding the video effect, the second terminal device may perform video decoding processing on the video image and display the video image.
可以看出，视频特效的添加是在传输视频图像的第一终端设备处执行，而接收视频图像的第二终端设备，则可直接解码添加有视频特效的视频图像，实现视频图像展示，达到第一终端设备所览的添加有视频特效的视频图像，即第一终端设备所发的添加有视频特效的视频图像的效果，提升第一终端设备和第二终端设备视频通信的同步性；It can be seen that the addition of the video special effect is performed at the first terminal device, which transmits the video image, while the second terminal device, which receives the video image, can directly decode the video image with the video special effect added and display it; what the second terminal device displays thus matches the special-effect video image as viewed on, and sent by, the first terminal device, improving the synchronization of the video communication between the first terminal device and the second terminal device;
值得说明的是，即使第一终端设备在视频图像上添加视频特效后，再传输给第二终端设备，并不影响视频图像传输的实时，反而对视频通信的同步性有较大提升；经本发明的发明人研究测试发现，如图9、图10和图11所示，在传统传输摄像头采集的视频图像的基础上，增加各种人脸处理技术、虚拟道具、各种色彩滤镜等视频特效处理，均是能够满足实时视频通话的需求的；其中，图9为在iPhone5S的GPU上测试美颜磨皮、美肤的性能的示意图，图10为在iPhone4S的GPU上测试美颜磨皮、美肤的性能的示意图，图11为在不同iPhone手机上测试人脸识别技术（人脸检测、跟踪、五官特征点定位）性能的示意图。It is worth noting that even though the first terminal device adds the video special effect to the video image before transmitting it to the second terminal device, this does not affect the real-time nature of the video image transmission, and instead considerably improves the synchronization of the video communication. Research and testing by the inventors of the present invention found that, as shown in FIG. 9, FIG. 10 and FIG. 11, adding various face processing technologies, virtual props, various color filters and other video special effect processing on top of conventionally transmitted camera-captured video images can still meet the requirements of real-time video calls. FIG. 9 is a schematic diagram of testing the performance of beauty smoothing and skin enhancement on the GPU of an iPhone 5S, FIG. 10 is a schematic diagram of testing the performance of beauty smoothing and skin enhancement on the GPU of an iPhone 4S, and FIG. 11 is a schematic diagram of testing the performance of face recognition technology (face detection, tracking, and facial feature point location) on different iPhone models.
本发明实施例提供的视频通信方法,通过动态调整与当前网络带宽相应的图像获取帧率,可使得进行视频特效处理的视频图像的帧率,与当前的网络带宽相适应,保障添加有视频特效的视频通信能够可靠进行,保障了视频通信的可靠性。The video communication method provided by the embodiment of the present invention can dynamically adjust the frame rate of the image corresponding to the current network bandwidth, so that the frame rate of the video image processed by the video special effect can be adapted to the current network bandwidth to ensure that the video special effect is added. The video communication can be carried out reliably, ensuring the reliability of video communication.
下面对本发明实施例提供的视频通信装置进行介绍,下文描述的视频通信装置可与上文描述的视频通信方法相互对应参照。The video communication device provided by the embodiment of the present invention is described below, and the video communication device described below can refer to the video communication method described above.
图12为本发明实施例提供的视频通信装置的结构框图,该装置可应用于第一终端设备的CPU,参照图12,该装置可以包括:FIG. 12 is a structural block diagram of a video communication apparatus according to an embodiment of the present invention. The apparatus is applicable to a CPU of a first terminal device. Referring to FIG. 12, the apparatus may include:
连接建立模块100,用于建立与第二终端设备的视频通信连接;a connection establishing module 100, configured to establish a video communication connection with the second terminal device;
视频图像获取模块200,用于获取所述第一终端设备的图像采集装置所采集的视频图像;The video image acquisition module 200 is configured to acquire a video image collected by the image collection device of the first terminal device;
滤镜效果确定模块300,用于确定所选取的滤镜效果;a filter effect determining module 300, configured to determine the selected filter effect;
类型确定模块400,用于确定所述滤镜效果的数据处理量类型;a type determining module 400, configured to determine a data processing type of the filter effect;
第一滤镜效果添加模块500,用于如果所述数据处理量类型符合第一类型,在所述第一终端设备的CPU中为所述视频图像添加所述滤镜效果;a first filter effect adding module 500, configured to add the filter effect to the video image in a CPU of the first terminal device if the data processing amount type conforms to the first type;
第二滤镜效果添加模块600,用于如果所述数据处理量类型符合第二类型,指示所述第一终端设备的GPU在所述视频图像上添加所述滤镜效果;其中,所述第一类型对应的数据处理量与为所述CPU设定的数据处理量范围相应,且,所述第一类型对应的数据处理量,低于第二类型对应的数据处理量;a second filter effect adding module 600, configured to: if the data processing quantity type conforms to the second type, instruct the GPU of the first terminal device to add the filter effect on the video image; wherein, the The data processing amount corresponding to the type corresponds to the data processing amount range set by the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type;
添加特效的视频图像确定模块700,用于至少根据添加滤镜效果的视频图像,确定添加视频特效的视频图像;Adding a special effect video image determining module 700, configured to determine a video image to add a video special effect according to at least the video image to which the filter effect is added;
视频图像传输模块800,用于向所述第二终端设备传输添加视频特效的视频图像。The video image transmission module 800 is configured to transmit a video image to which the video special effect is added to the second terminal device.
可选的,如图12所示,该装置还可以包括:Optionally, as shown in FIG. 12, the apparatus may further include:
数据处理量范围确定模块900，用于确定所述CPU的数据处理量上限值，及设定所述CPU处理滤镜效果的CPU占用比例范围；根据所述数据处理量上限值，与所述CPU占用比例范围确定所述数据处理量范围。a data processing amount range determining module 900, configured to determine the upper limit of the data processing amount of the CPU, and set a CPU occupancy ratio range for the CPU to process filter effects; and determine the data processing amount range according to the upper limit of the data processing amount and the CPU occupancy ratio range.
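The computation performed by module 900 can be sketched as scaling the CPU's data-processing upper limit by the occupancy ratio range reserved for filters; the elementwise multiplication and the 0% to 25% default ratio range below are assumed examples, not values from the embodiment:

```python
def cpu_filter_budget(cpu_upper_limit, ratio_range=(0.0, 0.25)):
    """Derive the data-processing-amount range available to filter
    effects on the CPU (module 900).

    cpu_upper_limit: the CPU's data-processing upper limit.
    ratio_range: the CPU occupancy ratio range reserved for filter
    processing (the default here is an assumed example).
    """
    low, high = ratio_range
    return cpu_upper_limit * low, cpu_upper_limit * high
```

A filter whose data processing amount falls inside this range is a first-type filter and is run on the CPU; anything above it is delegated to the GPU.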
可选的,第一滤镜效果添加模块500,用于在所述第一终端设备的CPU中为所述视频图像添加所述滤镜效果,具体包括:Optionally, the first filter effect adding module 500 is configured to add the filter effect to the video image in a CPU of the first terminal device, specifically:
调用预定的滤镜效果实现算法,为所述视频图像添加所述滤镜效果。A predetermined filter effect is invoked to implement an algorithm to add the filter effect to the video image.
可选的,第二滤镜效果添加模块600,用于指示所述第一终端设备的GPU在所述视频图像上添加所述滤镜效果,具体包括:Optionally, the second filter effect adding module 600 is configured to instruct the GPU of the first terminal device to add the filter effect on the video image, specifically:
指示所述GPU通过OpenGL接口,以预定的滤镜效果实现算法,在所述视频图像上添加所述滤镜效果。Instructing the GPU to implement an algorithm with a predetermined filter effect through the OpenGL interface, adding the filter effect on the video image.
可选的,视频图像获取模块200,用于获取所述第一终端设备的图像采集装置所采集的视频图像,具体包括:Optionally, the video image obtaining module 200 is configured to acquire the video image collected by the image capturing device of the first terminal device, and specifically includes:
获取当前网络带宽;Get the current network bandwidth;
根据预置的网络带宽范围与图像获取帧率的第一对应关系,确定与当前网络带宽所处网络带宽范围对应的图像获取帧率;其中,网络带宽范围与所对应的图像获取帧率正相关;Determining an image acquisition frame rate corresponding to a network bandwidth range of the current network bandwidth according to a first correspondence between the preset network bandwidth range and an image acquisition frame rate; wherein the network bandwidth range is positively correlated with the corresponding image acquisition frame rate ;
根据所确定的图像获取帧率,获取所述第一终端设备的图像采集装置所采集的视频图像。Obtaining a video image acquired by the image acquiring device of the first terminal device according to the determined image acquisition frame rate.
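The first-correspondence lookup performed by the video image acquisition module 200 can be sketched as below; the specific breakpoints and frame-rate values are assumptions for illustration, since the embodiment only requires the frame rate to be positively correlated with the network bandwidth range:

```python
def frame_rate_for_bandwidth(bandwidth_kbps):
    """Look up the image acquisition frame rate for the current network
    bandwidth (first correspondence). The (upper_bound_kbps, fps) pairs
    are assumed example values, ordered by increasing bandwidth."""
    table = ((80, 10), (150, 15), (float("inf"), 24))
    for upper_bound, fps in table:
        if bandwidth_kbps < upper_bound:
            return fps
```

Because the pairs are ordered, the first matching range wins, and higher bandwidth ranges always map to higher frame rates.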
可选的,图13示出了本发明实施例提供的视频通信装置的另一结构框图,结合图12和图13所示,该装置还可以包括:Optionally, FIG. 13 is a block diagram showing another structure of a video communication apparatus according to an embodiment of the present invention. As shown in FIG. 12 and FIG. 13, the apparatus may further include:
展示模块1000,用于根据所述第一终端设备的设备配置信息,以及当前网络带宽,确定当前可执行的至少一个视频特效类型;展示所述可执行的至少一个视频特效类型对应的视频特效;其中,一个视频特效类型对应至少一个视频特效。The display module 1000 is configured to determine, according to the device configuration information of the first terminal device, the current network bandwidth, the currently executable at least one video effect type; and display the video special effect corresponding to the at least one executable video effect type; Among them, one video effect type corresponds to at least one video effect.
相应的,滤镜效果确定模块300,用于确定所选取的滤镜效果,具体包括:Correspondingly, the filter effect determining module 300 is configured to determine the selected filter effect, and specifically includes:
确定从所展示的视频特效中选取的滤镜效果。Determine the filter effect selected from the displayed video effects.
可选的,展示模块1000,用于根据所述第一终端设备的设备配置信息,以及当前网络带宽,确定当前可执行的至少一个视频特效类型,具体包括:Optionally, the display module 1000 is configured to determine, according to the device configuration information of the first terminal device, and the current network bandwidth, the currently executable at least one video effect type, specifically:
调取预置的设备配置等级、网络带宽范围与视频特效类型的第二对应关系;其中,设备配置等级与网络带宽范围越高,所对应的视频特效类型的数量 越多;The second corresponding relationship between the preset device configuration level, the network bandwidth range, and the video effect type is retrieved; wherein, the higher the device configuration level and the network bandwidth range, the greater the number of corresponding video effect types;
确定所述第二对应关系中,与所述第一终端设备的设备配置信息的设备配置等级,及所述当前网络带宽所处的网络带宽范围,相应的视频特效类型。Determining, in the second correspondence, a device configuration level of the device configuration information of the first terminal device, and a network bandwidth range in which the current network bandwidth is located, and a corresponding video effect type.
可选的,图14示出了本发明实施例提供的视频通信装置的再一结构框图,结合图13和图14所示,该装置还可以包括:Optionally, FIG. 14 is a block diagram showing another structure of a video communication apparatus according to an embodiment of the present invention. As shown in FIG. 13 and FIG. 14, the apparatus may further include:
人脸挂件图像选取模块1100,用于确定从所展示的视频特效中选取的人脸挂件图像;a face pendant image selection module 1100, configured to determine a face pendant image selected from the displayed video effects;
添加人脸挂件的图像获取模块1200，用于提取所获取的视频图像的灰度图通道图像；识别所述灰度图通道图像中各人脸特征点的位置；根据所述人脸挂件图像对应的人脸特征点，及所述各人脸特征点的位置，确定所述人脸挂件图像在所述视频图像中的添加位置；将所述视频图像和所述人脸挂件图像渲染到所述第一终端设备的GPU中，以在所述GPU中生成第一图像格式的视频图像和人脸挂件图像；及将所述添加位置传输给所述GPU；获取GPU传输的第一图像格式的特效视频图像，所述特效视频图像为在所述添加位置添加有人脸挂件图像的视频图像。an image acquisition module 1200 for adding a face pendant, configured to: extract a grayscale channel image of the acquired video image; identify the positions of the facial feature points in the grayscale channel image; determine, according to the facial feature points corresponding to the face pendant image and the positions of the facial feature points, the position at which the face pendant image is to be added in the video image; render the video image and the face pendant image into the GPU of the first terminal device, so as to generate a video image and a face pendant image in a first image format in the GPU; transmit the added position to the GPU; and acquire a special effect video image in the first image format transmitted by the GPU, the special effect video image being a video image with the face pendant image added at the added position.
可选的,添加特效的视频图像确定模块700,用于至少根据添加滤镜效果的视频图像,确定添加视频特效的视频图像,具体包括:Optionally, the special effect video image determining module 700 is configured to determine, according to at least the video image that adds the filter effect, the video image to which the video special effect is added, specifically:
确定添加有人脸挂件图像及滤镜效果的视频图像，得到添加视频特效的视频图像；其中，人脸挂件图像在滤镜效果前或后添加在视频图像上。Determine the video image to which both the face pendant image and the filter effect have been added, obtaining the video image with the video special effect added; the face pendant image may be added to the video image either before or after the filter effect.
可选的,视频图像传输模块800,用于向所述第二终端设备传输添加视频特效的视频图像,具体包括:Optionally, the video image transmission module 800 is configured to transmit, to the second terminal device, a video image that adds a video special effect, specifically:
确定第二图像格式的添加视频特效的视频图像;Determining a video image of the added video effect of the second image format;
对所述第二图像格式的添加视频特效的视频图像,进行视频编码处理;Performing a video encoding process on the video image of the second image format to which the video effect is added;
将视频编码处理后的视频图像传输给第二终端设备。Transmitting the video encoded video image to the second terminal device.
可选的,添加人脸挂件的图像获取模块1200,用于识别所述灰度图通道图像中各人脸特征点的位置,具体包括:Optionally, the image acquisition module 1200 is configured to: add a location of each facial feature point in the grayscale channel image, and specifically includes:
根据设定比例缩小灰度图通道图像，得到灰度图通道缩小图像；Reduce the grayscale channel image according to the set ratio to obtain a reduced grayscale channel image;
根据设定旋转角度对灰度图通道缩小图像进行旋转；Rotate the reduced grayscale channel image according to the set rotation angle;
识别旋转后的灰度图通道缩小图像中各人脸特征点的位置；Identify the positions of the facial feature points in the rotated, reduced grayscale channel image;
将旋转后的灰度图通道缩小图像中各人脸特征点的位置，转换为所述灰度图通道图像中各人脸特征点的位置。Convert the positions of the facial feature points in the rotated, reduced grayscale channel image into the positions of the facial feature points in the grayscale channel image.
可选的,所述灰度图通道图像为YUV格式视频图像的Y通道图像,或,RGB格式视频图像的G通道图像;Optionally, the grayscale channel image is a Y channel image of a YUV format video image, or a G channel image of an RGB format video image;
所述第一图像格式为RGB格式,所述第二图像格式为YUV格式。The first image format is an RGB format, and the second image format is a YUV format.
可选的,添加特效的视频图像确定模块700,用于至少根据添加滤镜效果的视频图像,确定添加视频特效的视频图像,具体包括:Optionally, the special effect video image determining module 700 is configured to determine, according to at least the video image that adds the filter effect, the video image to which the video special effect is added, specifically:
指示所述第一终端设备的GPU在所述视频图像上添加所选取的人脸挂件图像及滤镜效果,得到添加有人脸挂件图像及滤镜效果的视频图像;Instructing the GPU of the first terminal device to add the selected face pendant image and the filter effect on the video image to obtain a video image with a face image and a filter effect added;
或，指示所述GPU在所述视频图像上添加所选取的人脸挂件图像，以在获取的添加有人脸挂件图像的视频图像上，添加所选取的滤镜效果，得到添加有人脸挂件图像及滤镜效果的视频图像。Or, instruct the GPU to add the selected face pendant image on the video image, so that the selected filter effect is then added to the acquired video image to which the face pendant image has been added, obtaining a video image with both the face pendant image and the filter effect added.
可选的,视频图像传输模块800,用于向所述第二终端设备传输添加视频特效的视频图像,具体包括:Optionally, the video image transmission module 800 is configured to transmit, to the second terminal device, a video image that adds a video special effect, specifically:
对添加视频特效的视频图像进行视频编码处理;Video encoding processing of a video image to which a video effect is added;
向所述第二终端设备传输视频编码处理后的,添加视频特效的视频图像。Transmitting a video image processed by the video encoding process to the second terminal device to add a video effect.
可选的，视频图像传输模块800，用于对添加视频特效的视频图像进行视频编码处理，具体包括：Optionally, the video image transmission module 800 is configured to perform video encoding processing on the video image to which the video special effect is added, specifically:
根据预置的设备配置等级、网络带宽范围与编码分辨率的第三对应关系，确定与所述第一终端设备的设备配置信息的设备配置等级，所述当前网络带宽所处的网络带宽范围对应的编码分辨率；Determine, according to a preset third correspondence among device configuration levels, network bandwidth ranges and encoding resolutions, the encoding resolution corresponding to the device configuration level of the device configuration information of the first terminal device and the network bandwidth range in which the current network bandwidth falls;
以所确定的编码分辨率,对添加视频特效的视频图像进行视频编码处理。The video image to which the video effect is added is subjected to video encoding processing at the determined encoding resolution.
可选的,本发明实施例还提供一种终端设备,该终端设备的结构可以如图5所示,包括:CPU和GPU;Optionally, the embodiment of the present invention further provides a terminal device, and the structure of the terminal device may be as shown in FIG. 5, including: a CPU and a GPU;
该终端设备在传输视频图像的阶段，CPU可用于，建立与第二终端设备的视频通信连接，并获取所述第一终端设备的图像采集装置所采集的视频图像；确定所选取的滤镜效果；确定所述滤镜效果的数据处理量类型；如果所述数据处理量类型符合第一类型，在所述第一终端设备的CPU中为所述视频图像添加所述滤镜效果；如果所述数据处理量类型符合第二类型，指示所述第一终端设备的GPU在所述视频图像上添加所述滤镜效果；其中，所述第一类型对应的数据处理量与为所述CPU设定的数据处理量范围相应，且，所述第一类型对应的数据处理量，低于第二类型对应的数据处理量；至少根据添加滤镜效果的视频图像，确定添加视频特效的视频图像；向所述第二终端设备传输添加视频特效的视频图像；At the stage of transmitting video images, the CPU of the terminal device is configured to: establish a video communication connection with a second terminal device, and acquire the video images collected by the image collection device of the first terminal device; determine the selected filter effect; determine the data processing amount type of the filter effect; if the data processing amount type conforms to the first type, add the filter effect to the video image in the CPU of the first terminal device; if the data processing amount type conforms to the second type, instruct the GPU of the first terminal device to add the filter effect on the video image, where the data processing amount corresponding to the first type corresponds to the data processing amount range set for the CPU, and the data processing amount corresponding to the first type is lower than that corresponding to the second type; determine, at least according to the video image with the filter effect added, the video image with the video special effect added; and transmit the video image with the video special effect added to the second terminal device;
GPU可用于,受所述CPU指示,在所述数据处理量类型符合第二类型时,在所述视频图像上添加所述滤镜效果。The GPU is operative to be instructed by the CPU to add the filter effect on the video image when the data throughput type conforms to the second type.
CPU与GPU的其他功能可参照上文相应部分描述。Other functions of the CPU and GPU can be described in the corresponding sections above.
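The CPU/GPU division of labor described above can be sketched as a simple dispatch; reducing the type test to a range check on the filter's data processing amount is an assumed simplification of "the data processing amount corresponding to the first type corresponds to the range set for the CPU":

```python
def dispatch_filter(filter_data_amount, cpu_range):
    """Decide where a filter effect runs: first-type filters, whose data
    processing amount falls within the range set for the CPU, run on the
    CPU; heavier second-type filters are delegated to the GPU."""
    low, high = cpu_range
    return "CPU" if low <= filter_data_amount <= high else "GPU"
```

This keeps lightweight filters off the GPU pipeline while ensuring the CPU is never asked to process more than its reserved budget.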
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。The various embodiments in the present specification are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same similar parts between the various embodiments may be referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant parts can be referred to the method part.
专业人员还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。A person skilled in the art will further appreciate that the elements and algorithm steps of the various examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software or a combination of both, in order to clearly illustrate the hardware and software. Interchangeability, the composition and steps of the various examples have been generally described in terms of function in the above description. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods for implementing the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.
结合本文中所公开的实施例描述的方法或算法的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。The steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both. The software module can be placed in random access memory (RAM), memory, read only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, removable disk, CD-ROM, or technical field. Any other form of storage medium known.
对所公开的实施例的上述说明，使本领域专业技术人员能够实现或使用本发明。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的，本文中所定义的一般原理可以在不脱离本发明的核心思想或范围的情况下，在其它实施例中实现。因此，本发明将不会被限制于本文所示的这些实施例，而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。The above description of the disclosed embodiments enables those skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the core idea or scope of the present invention. Therefore, the present invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
在现有应用中，大部分APP不提供美颜强度调节功能，只能选择开启或者关闭美颜功能，而且，即使开启美颜功能后，也是采用默认美颜强度进行美颜处理，这样，必然降低了用户体验，因为用户在不同状态下对美颜强弱需求不一样，比如在已经化妆、皮肤状态很好或者灯光很好的情况，可能只需要比较弱的美颜强度，而在状态不佳、没有化妆或者长痘痘时，可能会需要比较强的美颜强度。而且，现有视频通话类和直播类APP发布后，不能根据用户的实际需求来设置美颜强度，也就是说，一旦APP发布后，美颜强度则无法调整，只能使用默认美颜强度，或者只能等下一个版本发布时在新版本发出的APP中设置调整后的美颜强度，这样会严重影响APP的口碑。综上所述，在美颜功能普及的今天，用户对美颜功能的需求已经非常强烈，但由于不同性能手机的配置存在较大差异，如果采用统一的美颜强度，会导致高端性能手机上美颜功能无法发挥最佳效果，而低端手机也无法达到最佳效果，从而影响了用户的产品体验。In existing applications, most apps do not provide a beauty intensity adjustment function; the user can only turn the beauty function on or off, and even when the beauty function is turned on, the default beauty intensity is used for beauty processing. This inevitably degrades the user experience, because users need different beauty intensities in different situations: when they are already wearing makeup, their skin is in good condition, or the lighting is good, a relatively weak beauty intensity may suffice, whereas when their condition is poor, they are not wearing makeup, or they have acne, a relatively strong beauty intensity may be needed. Moreover, once existing video call and live streaming apps are released, the beauty intensity cannot be set according to the user's actual needs; that is, after an app is released, the beauty intensity cannot be adjusted, and only the default beauty intensity can be used, or the adjusted intensity can only be provided in the app issued with the next released version, which seriously harms the app's reputation. In summary, now that beauty functions are ubiquitous, users' demand for them is very strong; however, because the configurations of phones of different performance levels differ greatly, adopting a uniform beauty intensity prevents the beauty function from achieving its best effect on high-end phones, while low-end phones also cannot achieve the best effect, degrading the user's product experience.
Therefore, to solve the above problems and improve the user experience, embodiments of the present invention provide an image processing method, an electronic device, and a server. To make the features and technical content of the present invention easier to understand in detail, the implementation of the present invention is described below with reference to the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the present invention.
Embodiment 1
This embodiment provides an image processing method applied to an electronic device that has image-capture and video-capture functions; for example, the electronic device is connected to or provided with a camera through which images and video are captured. Specifically, the electronic device may be a mobile terminal such as a mobile phone or a tablet computer, or a personal computer. FIG. 14 is a schematic flowchart of an implementation of the image processing method according to Embodiment 1 of the present invention. As shown in FIG. 15, the method includes:
Step 101: The electronic device detects a target operation, where the target operation represents an operation by which the electronic device adjusts image features in collected image data and/or video data.
In this embodiment, the target operation may specifically be an operation performed by the user on a particular physical button, an operation performed on a particular virtual button, or a particular gesture operation; in practical applications, this embodiment places no restriction on the target operation.
Further, in practical applications, the target operation may be a specific operation that starts the image-capture or video-capture function (such as turning on the camera), in which case the function that adjusts image features in the collected image data or video data, such as the beauty function, is enabled at the same time that the image-capture or video-capture function starts. Alternatively, the target operation may be a specific operation, performed by the user while the electronic device is in the image-capture or video-capture state, that is intended to process image features in the collected image data or video data; for example, while the mobile phone is capturing images or video, the user triggers the beauty function through the target operation. In other words, the target operation described in this embodiment is an operation that starts the beauty function; here, the beauty function may be enabled synchronously when the camera starts, or it may be triggered by the target operation after the camera has started.
In practical applications, the beauty function may be implemented by a skin-smoothing algorithm combined with skin-color recognition: after the skin-colored regions of the image are identified, those regions are smoothed, and the beauty effect is achieved through the smoothing. Here, skin-color recognition may use an ellipse-model discrimination method in the YUV color format, and the smoothing algorithm may use a bilateral filter.
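A bilateral filter of the kind mentioned above smooths flat skin areas while preserving edges, because each neighbour's weight falls off with both spatial distance and intensity difference. A minimal pure-Python sketch follows; the radius and sigma values are illustrative assumptions, not taken from this document:

```python
import math

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Smooth a 2D grayscale image (list of lists of 0-255 values)
    while preserving edges: each output pixel is a weighted average
    of its neighbours, where the weight decays with both spatial
    distance (sigma_s) and intensity difference (sigma_r)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        d2 = dx * dx + dy * dy
                        dr = img[ny][nx] - img[y][x]
                        wgt = math.exp(-d2 / (2 * sigma_s ** 2)) * \
                              math.exp(-dr * dr / (2 * sigma_r ** 2))
                        acc += wgt * img[ny][nx]
                        norm += wgt
            out[y][x] = acc / norm
    return out
```

With these parameters a uniform region is averaged almost uniformly, while across a strong edge (intensity jump far larger than sigma_r) the cross-edge weights are effectively zero, so the edge survives the smoothing.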
Here, the algorithms used by existing beauty functions cannot distinguish good image quality from bad. When the phone's camera is poor, the captured image quality is also poor, and the smoothing makes the beautified result even blurrier, with severe loss of detail; when the phone's camera is good, the captured image quality is higher, and applying the same beauty intensity may smooth the skin insufficiently, so the degree of beautification is inadequate. Either way, the user experience suffers. Therefore, to avoid these problems, before starting the beauty function to process an image or video, this embodiment first obtains a parameter-adjustable range value that matches the device's own characteristics (such as its image-processing features), for example an adjustable beauty range, thereby avoiding both excessive beautification and insufficient beauty intensity.
Step 102: The electronic device obtains, based on the target operation, a parameter-adjustable range value for the electronic device that is used to process image features, where the parameter-adjustable range value is determined at least based on the image-processing features used to process image features in image data and/or video data.
In this embodiment, the image-processing features may specifically be the image-capture features of a camera provided on or connected to the electronic device, and/or the display-processing features applied by the electronic device before display. Of course, in practical applications the image-processing features may also be other feature information related to image processing, which this embodiment does not restrict.
Step 103: The electronic device selects a target adjustment value from the parameter-adjustable range value, and uses the target adjustment value to process image features in the collected image data and/or video data.
In this embodiment, after the parameter-adjustable range value is determined, an adjuster representing the parameter-adjustable range value may be presented on the display interface, so that the target adjustment value can be selected from the parameter-adjustable range value using the adjuster. In a specific application, as shown in FIG. 16, the adjustable beauty range may be represented by a beauty-intensity slider, where point A of the slider represents either no beauty processing or the minimum beauty intensity supported by the electronic device (for example, in one specific application, as long as the beauty function is enabled, the electronic device applies beauty processing of a certain intensity; that is, the electronic device beautifies the image or video at that intensity by default, and the user can only increase the beauty intensity from that level, not decrease it, i.e., the minimum beauty intensity is not zero), while point B of the slider represents beauty processing at the maximum beauty intensity the electronic device can support. Here, electronic devices with different image-processing features may have different adjustable beauty ranges. For example, as shown in FIG. 17, in a practical application the maximum achievable beauty intensity is 30, i.e., the beauty intensity can be selected between 0 and 30; in this case, range one, range two, or another range may be selected, according to the image-processing features of the electronic device, as the adjustable beauty range that matches those features, thereby avoiding both excessive beautification and insufficient beauty intensity.
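The mapping between the slider and the assigned range can be sketched as follows; the function name and the convention that the slider position is normalized to [0, 1] are assumptions for illustration, not taken from this document:

```python
def slider_to_intensity(position, range_min, range_max):
    """Map a slider position in [0.0, 1.0] onto the adjustable
    beauty range [range_min, range_max] assigned to this device.
    Point A of the slider corresponds to range_min (which need not
    be zero) and point B to range_max."""
    if not 0.0 <= position <= 1.0:
        raise ValueError("slider position must be in [0, 1]")
    return range_min + position * (range_max - range_min)
```

For example, on a device assigned the range 10-25, the middle of the slider would yield an intensity of 17.5, whereas on a device assigned 0-10 the same slider position would yield 5.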
Here, the step of selecting the target adjustment value may be determined based on a user operation, for example based on a sliding operation the user performs on the beauty-intensity slider. Thus, this embodiment not only sets different beauty intensities for electronic devices of different performance, but also provides a beauty-intensity adjustment function that satisfies the user's differing needs for beauty intensity in different situations, which both enriches and improves the user experience.
In a specific embodiment, before the step of obtaining the parameter-adjustable range value, the electronic device first obtains (for example, by detection) the image-processing features used to process image features in image data and/or video data, and determines, at least based on those image-processing features, a parameter-adjustable range value that matches the electronic device. Alternatively, as shown in FIG. 18, the electronic device obtains the image-processing features used to process image features in image data and/or video data and sends the image-processing features for the electronic device to a server; correspondingly, the server obtains the electronic device's image-processing features for processing image features in the collected image data and/or video data, and determines, according to those features, a parameter-adjustable range value that matches at least the image-processing features of the electronic device. In this way, the electronic device can conveniently obtain the parameter-adjustable range value from the server, select a target adjustment value from it, and use the target adjustment value to process image features in the image data and/or video data collected by the electronic device. That is, in this embodiment, the process of determining the parameter-adjustable range value may be performed either in the electronic device or in the server.
In this way, with the method described in this embodiment of the present invention, before image features in the collected image data and/or video data are adjusted, a parameter-adjustable range value is first obtained that was determined at least based on the image-processing features used to process those image features. Because the parameter-adjustable range value is determined based on the image-processing features of the specific electronic device, this embodiment achieves the goal of setting different beauty intensities for electronic devices of different performance. Further, the target adjustment value with which the electronic device adjusts image features in the collected image data and/or video data is selected from that parameter-adjustable range value, so this embodiment enables the beauty intensity to be adjusted within a specific range, satisfying the user's differing needs for beauty intensity in different situations. The method described in this embodiment therefore both enriches and improves the user experience.
Embodiment 2
Based on the method described in Embodiment 1, in this embodiment other parameters, such as information-transmission features, may also be considered when determining the parameter-adjustable range value. That is, the parameter-adjustable range value may also be determined specifically based on both the image-processing features used to process image features in image data and/or video data and the information-transmission features used to transmit the collected image data and/or video data. In this case:
Before the step of obtaining the parameter-adjustable range value, the electronic device first obtains (for example, by detection) the image-processing features used to process image features in image data and/or video data, as well as the information-transmission features used to transmit the collected image data and/or video data, and then determines a parameter-adjustable range value that matches the electronic device at least based on the image-processing features and the information-transmission features. Alternatively, as shown in FIG. 19, the electronic device obtains the image-processing features used to process image features in image data and/or video data, obtains the information-transmission features used to transmit the collected image data and/or video data, and sends both the image-processing features and the information-transmission features for the electronic device to a server; correspondingly, the server obtains not only the electronic device's image-processing features for processing image features in the collected image data and/or video data, but also its information-transmission features for transmitting that data, and then determines a parameter-adjustable range value that matches at least the image-processing features and the information-transmission features of the electronic device. In this way, the electronic device can conveniently obtain the parameter-adjustable range value from the server, select a target adjustment value from it, and use the target adjustment value to process image features in the image data and/or video data collected by the electronic device. That is, in this embodiment, the process of determining the parameter-adjustable range value may be performed either in the electronic device or in the server.
In a specific embodiment, the image-processing features may specifically be the encoding resolution, the bit rate, and so on; correspondingly, the information-transmission features may specifically be the bandwidth of the electronic device, and so on.
Here, in practical applications, after the electronic device uses the target adjustment value to process image features in the collected image data and/or video data, the processed image data and/or video data are presented on the electronic device; alternatively, the processed image data and/or video data are transmitted at the same time as they are presented, thereby achieving the purpose of sending the beautified image or video. That is, in a video or photo-capture scene, after the camera collects video or image data and before it is displayed locally, the method described in this embodiment of the present invention can apply the beauty operation in real time to the video or image data collected from the camera, so that the local display and the receiving peer can view the beautified result simultaneously, and the beauty effect can be adjusted according to user operations.
Further, the embodiments of the present invention are described in more detail below with reference to video scenarios. Specifically:
In application scenario one, a server is used as the control center. After obtaining information such as the current device hardware uploaded by the mobile phone, the server determines, through analysis and intelligent allocation, the automatically adjustable beauty-intensity range for phones of different performance. In this way, the beauty intensity has different intensity ranges on phones of different performance, so that the beauty function achieves an optimal effect on each of them.
Specifically, as shown in FIG. 20, beauty-server control logic may be added to the video-call logic, so that the server decides which range of beauty intensity to deliver based on data such as the phone performance reported during the real-time video call. For example, as shown in FIG. 17, based on the phone-performance data reported by phone 1, the server selects the beauty-intensity interval of range 1 as phone 1's adjustable beauty range (0-10), while based on the performance reported by phone 2 it selects range 2 as phone 2's adjustable beauty range (10-25). Through this automatic matching function, phones of different performance can fully exploit the beauty effect, allowing the user to experience the best beautification on phones of any performance level.
Of course, in practical applications, beauty and whitening sliders may be presented in the phone's interface, mapped to the adjustable beauty range delivered by the server, making it easy for the user to select the beauty intensity that suits the current scene. As shown in FIG. 20, the specific steps include:
Step 601: The phone-performance data reporting module reports the phone's performance to the server.
Step 602: The network side collects the reported information, uses the server to evaluate the current phone's performance, and determines the phone's current position in the beauty-intensity policy table, so as to assign the phone a corresponding adjustable beauty range.
Step 603: The phone obtains the adjustable beauty range and maps it onto the beauty slider.
Step 604: The phone acquires camera data.
Step 605: Beauty processing is applied to the data collected by the camera according to the beauty intensity selected by the user.
Step 606: The beautified data are sent to the local display and to the encoder for encoding, respectively.
Step 607: After transmission over the network, the data are sent to the decoder for decoding, and the viewing end displays the beautified result.
Here, the process shown in FIG. 20 above is explained as follows:
First, in practical applications, the information reported about phone performance includes but is not limited to: the number of central processing unit (CPU) cores, the CPU clock frequency, the phone's operating-system version, the network status, the encoding resolution, the bit rate, and so on.
Second, the network server control center refers to the control center located in the background, usually a server device, which, after obtaining the uploaded phone-performance information, analyzes it, performs calculations, and outputs the corresponding adjustable beauty range.
Third, the network side refers to the background server control end. After the data are reported, the server performs the analysis, so that based on the phone's performance the server selects, from a preset beauty-intensity policy table, the beauty-intensity tier, that is, the adjustable beauty range, that suits the current phone's performance.
In application scenario two, the process of determining the adjustable beauty range is performed locally on the phone. Specifically, when a phone video call starts, the phone-performance reporting module reports the performance data locally; through algorithmic analysis by the local control module, an adjustable beauty range suited to the current phone's performance is selected and written into the software configuration parameters, and when the beauty interface is opened, the adjustable beauty range is mapped to the interface slider. As shown in FIG. 21, the specific steps are as follows:
Step 701: The phone-performance data reporting module reports the phone's performance locally.
Step 702: After receiving the corresponding information, the local control center analyzes it algorithmically, evaluates the current phone's performance, and determines the phone's current position in the beauty-intensity policy table, so as to assign the corresponding adjustable beauty range.
Step 703: The adjustable beauty range is mapped onto the beauty slider.
Step 704: Camera data are acquired.
Step 705: Beauty processing is applied to the data collected by the camera according to the beauty intensity selected by the user.
Step 706: The beautified data are sent to the local display and to the encoder for encoding, respectively.
Step 707: After transmission over the network, the data are sent to the decoder for decoding, and the viewing end displays the beautified result.
Here, the process shown in FIG. 21 above is explained as follows:
First, in practical applications, the information reported about phone performance includes but is not limited to: the number of CPU cores, the CPU clock frequency, the phone's operating-system version, the network status, the encoding resolution, the bit rate, and so on.
Second, the local control center is located in the phone's software client. When the software opens a video call, it collects the information reported by the phone, then analyzes it and outputs the corresponding adjustable beauty range.
Here, in both application scenario one and application scenario two, a control center (such as the local control center or the server control center) needs to use a beauty-intensity policy table to decide the adjustable beauty range in the current call state. Table 1 gives a specific example of such a beauty-intensity policy table:
[Table 1 is provided as an image: PCTCN2018071363-appb-000002]
Table 1
As shown in Table 1, the beauty-intensity policy table gives the correspondence between phones and adjustable beauty ranges for different models, encoding resolutions, and bandwidths. For example, for a low-end model with a resolution of 320x240 and a bandwidth (i.e., bit rate) below 100 kbps, the adjustable beauty range is 0-5: the model is poor, the image quality collected by the camera is relatively bad, and the resolution is very low, so below 100 kbps the picture during a video call is already rather blurry; if a strong beauty intensity were applied, picture quality would degrade severely, so a smaller beauty-intensity interval is chosen. As another example, for a high-end model with a resolution of 1280x720 and a bandwidth above 500 kbps, the adjustable beauty range is 10-30: the model is good, the picture quality collected by the camera is high, the resolution is high, and above 500 kbps the video-call quality is already good enough, so a larger adjustable beauty interval is provided and a very good beauty effect can be achieved.
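The policy-table lookup described here can be sketched as follows. Only the two rows spelled out in the text (low-end 320x240 below 100 kbps yielding 0-5; high-end 1280x720 above 500 kbps yielding 10-30) come from the document; the encoding of the table and the function name are assumptions:

```python
# Hypothetical encoding of the beauty-intensity policy table; the full
# Table 1 is only available as an image, so only the two rows stated in
# the text are included here.
POLICY = [
    # (tier, resolution, min_kbps, max_kbps_or_None, beauty_range)
    ("low",  (320, 240),    0,  100, (0, 5)),
    ("high", (1280, 720), 500, None, (10, 30)),
]

def lookup_beauty_range(tier, resolution, kbps):
    """Return the adjustable beauty range (min, max) for the given
    device tier, encoding resolution, and bit rate, or None if the
    policy table has no matching row."""
    for row_tier, row_res, lo, hi, rng in POLICY:
        if tier == row_tier and resolution == row_res \
           and kbps >= lo and (hi is None or kbps < hi):
            return rng
    return None
```

A real deployment would carry many more rows covering the mid-range models and intermediate resolutions, and, as the text notes later, the table would be revised periodically from network operation data.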
In practical applications, low-end, mid-range, and high-end models can be distinguished as follows. For example, the distinction can be made on the server side: during a phone video call, the phone's performance and call data are reported and uploaded to the server over the network, and the server decides, according to a phone-performance classification table, whether the current phone's performance is low-end, mid-range, or high-end. Here, Table 2 gives a phone-performance classification table for the Android system.
[Table 2 is provided as an image: PCTCN2018071363-appb-000003]
Table 2
Here, in practical applications, the division into low-end, mid-range, and high-end models is not fixed; the classification criteria need to be adjusted periodically according to network operation data and the distribution of phone models. For example, in terms of resolution, most video calls are currently at 480x360 or 640x480, but as phone performance improves, video-call picture quality will improve as well, so the adjustable beauty ranges for the different resolutions also need to be adjusted accordingly. In terms of bit rate, the bit rate of a video call is usually between 150 and 300 kbps, but with the spread of wireless fidelity (Wi-Fi) and the coverage of fourth-generation mobile communication technology (4G), video-call bit rates will keep rising, so the adjustable beauty range also needs to be adjusted accordingly: at high bit rates, the higher the encoding quality of the call video, the more the beauty-intensity range can be raised or widened.
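The tier classification that feeds the policy table can be sketched as a simple threshold function on the reported CPU data. Because Table 2 is provided only as an image, the cut-off values below are purely illustrative assumptions, and in practice they would be revised periodically as the text describes:

```python
def classify_tier(cpu_cores, cpu_ghz):
    """Classify a phone as low-, mid-, or high-end from the reported
    CPU core count and clock frequency. Thresholds are illustrative
    stand-ins for the (image-only) Table 2, not actual values from
    the document."""
    if cpu_cores >= 8 and cpu_ghz >= 2.0:
        return "high"
    if cpu_cores >= 4 and cpu_ghz >= 1.5:
        return "mid"
    return "low"
```

In a deployed system the thresholds themselves would be configuration data pushed from the server, so that the low/mid/high boundaries can shift without a client release.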
Here, in practical applications, the following beauty algorithm can be used to implement the beauty function. Specifically, as shown in FIG. 22, the beauty process is implemented as follows:
Step 801: Skin-color region recognition is performed on the input original image. A skin-color detection algorithm is applied to the original input image; it is optimized on the basis of an ordinary skin-color detection algorithm so as to raise the skin-color recognition accuracy as much as possible without significantly increasing the recognition error rate.
Step 802: The original image is partitioned into a skin-color region and a non-skin-color region.
Step 803: Smoothing is applied to the skin-color region. A skin-smoothing algorithm is used on the skin-color region, and when the detailed parts of that region are processed, a lower intensity is applied; skin texture and detail-rich textures (regions such as the nose, eyes, and eyebrows) are recognized automatically, so that the three-dimensionality and detail of the facial features are preserved as much as possible.
Step 804: The smoothed skin-color region is fused with the skin-color region of the original image. Here, fusing the rich detail of the original image with the smoothed skin-color region first preserves the integrity of detail in the skin-color region and second increases the realism of the smoothed area.
Step 805: The fused image obtained in Step 804 is fused with the non-skin-color region. When the smoothed region and the non-skin-color region are fused, an abrupt change would appear at the boundary, so the boundary region between the smoothed and non-skin-color areas is feathered to make the transition between them smooth.
Step 806: Sharpening. Because the smoothing algorithm is essentially a filtering algorithm, some loss of detail is unavoidable in principle; at the same time, to enhance the video effect, the details of the overall image are sharpened, which concludes the beauty processing.
Here, the process shown in FIG. 22 above is explained as follows:
第一、肤色检测算法;实际应用中,利用YUV肤色检测模型进行肤色区域检测。First, the skin color detection algorithm; in practical applications, the YUV skin color detection model is used to detect the skin color region.
第二、磨皮算法;目前存在较多磨皮算法,如高斯滤波、表面模糊、双边滤波、导向滤波等Second, the dermabrasion algorithm; there are many dermabrasion algorithms, such as Gaussian filtering, surface ambiguity, bilateral filtering, guided filtering, etc.
第三、锐化算法能够增强图像边缘,使模糊的图像变得更加清晰,颜色变得鲜明突出,图像的质量有所改善,产生更适合人眼观察和识别的图像,实际应用中,可以采用微分法或高斯滤波方法来进行锐化。Third, the sharpening algorithm can enhance the edge of the image, make the blurred image more clear, the color becomes sharp and prominent, and the quality of the image is improved, resulting in an image more suitable for human eye observation and recognition. In practical applications, it can be used. A differential method or a Gaussian filtering method is used for sharpening.
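One common concrete form of the YUV skin-color model mentioned above classifies a pixel by thresholding its chrominance channels. The sketch below is illustrative only: the BT.601 conversion and the Cb/Cr thresholds are typical values from the literature, not values specified by this application:

```python
# Illustrative YUV/YCbCr skin-color test: a pixel is classified as skin when
# its chrominance (Cb, Cr) falls inside a fixed box, independent of luma Y.
# Thresholds are common literature values, not taken from this application.

def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr conversion."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def is_skin(r, g, b):
    """Chrominance box test: 77 <= Cb <= 127 and 133 <= Cr <= 173."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin(224, 172, 140))  # a typical skin tone -> True
print(is_skin(0, 0, 255))      # pure blue           -> False
```

Working in chrominance makes the test largely insensitive to brightness, which is why such boxes generalize across lighting conditions better than RGB thresholds.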
FIG. 23 is a schematic diagram of the effect of beautification performed with the image processing method of the embodiment of the present invention. As shown in FIG. 23, because mobile phone A and mobile phone B differ in performance, their adjustable beautification ranges differ: for example, the adjustable range on phone A is 0-10 while that on phone B is 10-25. Consequently, when the same image is beautified on each, the resulting effects differ. Specifically, since the beautification slider represents the adjustable range, even when both sliders are set to medium strength the two phones present different effects, e.g. the blemishes in the dashed-line regions are rendered to different degrees.
In this way, the embodiment of the present invention can provide different beautification strengths for a video call, allowing the user to select the optimal effect in different scenarios. For example, when the user is not looking their best or is wearing no makeup, stronger beautification can be used, whereas outdoors or with makeup already applied, a lower strength can be used.
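The per-device behavior in FIG. 23 amounts to mapping one normalized slider position into each device's own adjustable range. A minimal sketch of such a mapping, with hypothetical function and parameter names (the linear form is an assumption; the application does not prescribe it):

```python
# Illustrative mapping of a normalized slider position (0.0-1.0) into a
# device-specific beautification range, matching the FIG. 23 example where
# phone A exposes 0-10 and phone B exposes 10-25.

def strength_for(slider_pos, range_min, range_max):
    """Linearly map the slider position into the device's adjustable range."""
    if not 0.0 <= slider_pos <= 1.0:
        raise ValueError("slider position must be in [0, 1]")
    return range_min + slider_pos * (range_max - range_min)

# The same mid-position slider yields different absolute strengths per device:
print(strength_for(0.5, 0, 10))   # phone A -> 5.0
print(strength_for(0.5, 10, 25))  # phone B -> 17.5
```

This makes concrete why identical slider positions on the two phones in FIG. 23 produce visibly different results: the target adjustment value is selected from a range matched to each device.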
Embodiment 3
This embodiment provides an electronic device. As shown in FIG. 24, the electronic device includes:
a detecting unit 1001, configured to detect a target operation, the target operation representing an operation by which the electronic device adjusts image features in collected image data and/or video data;
a first obtaining unit 1002, configured to obtain, based on the target operation, a parameter-adjustable range value of the electronic device for processing image features, the parameter-adjustable range value being determined at least on the basis of an image processing characteristic for processing image features in image data and/or video data; and
a processing unit 1003, configured to select a target adjustment value from the parameter-adjustable range value, and to process image features in the collected image data and/or video data by using the target adjustment value.
In an embodiment, the first obtaining unit 1002 is further configured to: obtain an image processing characteristic for processing image features in image data and/or video data, and determine, at least on the basis of the image processing characteristic, a parameter-adjustable range value matching the electronic device; or obtain an image processing characteristic for processing image features in image data and/or video data, and send the image processing characteristic of the electronic device to a server, so that the server determines, at least on the basis of the image processing characteristic of the electronic device, a parameter-adjustable range value matching the electronic device.
In another embodiment, the parameter-adjustable range value is determined at least on the basis of an image processing characteristic for processing image features in image data and/or video data and an information transmission characteristic for transmitting collected image data and/or video data. Correspondingly,
the first obtaining unit 1002 is further configured to: obtain an image processing characteristic for processing image features in image data and/or video data and an information transmission characteristic for transmitting collected image data and/or video data, and determine, at least on the basis of the image processing characteristic and the information transmission characteristic, a parameter-adjustable range value matching the electronic device; or obtain the image processing characteristic and the information transmission characteristic, and send the image processing characteristic and information transmission characteristic of the electronic device to a server, so that the server determines, at least on the basis of the image processing characteristic and information transmission characteristic of the electronic device, a parameter-adjustable range value matching the electronic device.
In another embodiment, the processing unit 1003 is further configured to present an adjuster representing the parameter-adjustable range value, so that the target adjustment value can be selected from the parameter-adjustable range value by means of the adjuster.
It should be noted here that the above description of the electronic device embodiment is similar to the description of the method embodiments above and has beneficial effects similar to those of the method embodiments, so it is not repeated. For technical details not disclosed in the electronic device embodiment of the present invention, refer to the description of the method embodiments of the present invention; to save space, they are not repeated here.
Embodiment 4
This embodiment provides a server. As shown in FIG. 25, the server includes:
a second obtaining unit 1101, configured to obtain an image processing characteristic, corresponding to an electronic device, for processing image features in collected image data and/or video data; and
a determining unit 1102, configured to determine, according to the image processing characteristic of the electronic device, a parameter-adjustable range value matching at least the image processing characteristic of the electronic device, so that the electronic device selects a target adjustment value from the parameter-adjustable range value and processes image features in the image data and/or video data collected by the electronic device by using the target adjustment value.
In an embodiment, the second obtaining unit 1101 is further configured to obtain an information transmission characteristic, corresponding to the electronic device, for transmitting the collected image data and/or video data. Correspondingly,
the determining unit 1102 is further configured to determine, according to the image processing characteristic and the information transmission characteristic of the electronic device, a parameter-adjustable range value matching at least the image processing characteristic and the information transmission characteristic of the electronic device.
It should be noted here that the above description of the server embodiment is similar to the description of the method embodiments above and has beneficial effects similar to those of the method embodiments, so it is not repeated. For technical details not disclosed in the server embodiment of the present invention, refer to the description of the method embodiments of the present invention; to save space, they are not repeated here.
Those skilled in the art will appreciate that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and any combination of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps is performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are merely implementations of the embodiments of the present invention. It should be noted that a person of ordinary skill in the art can make several improvements and refinements without departing from the principles of the embodiments of the present invention, and such improvements and refinements shall also fall within the protection scope of the embodiments of the present invention.

Claims (36)

1. A method of image processing, the method comprising:
obtaining collected image data;
determining a processing manner for processing the image data; and
performing feature adjustment on image features in the image data according to the determined processing manner.
2. The method according to claim 1, wherein the method is applied to a first terminal device in a video communication connection with a second terminal device, the image data comprises a video image, and the obtaining collected image data, the determining a processing manner for processing the image data, and the performing feature adjustment on image features in the image data according to the determined processing manner comprise:
obtaining a video image collected by an image collection apparatus of the first terminal device;
determining a filter effect to be added;
determining a data-processing-amount type of the filter effect; and
adding the filter effect to the video image according to the data-processing-amount type.
3. The method according to claim 2, wherein the adding the filter effect to the video image according to the data-processing-amount type comprises:
if the data-processing-amount type conforms to a first type, adding the filter effect to the video image in a CPU of the first terminal device; and
if the data-processing-amount type conforms to a second type, instructing a GPU of the first terminal device to add the filter effect to the video image, wherein the data processing amount corresponding to the first type corresponds to a data-processing-amount range currently set for the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type;
wherein the method further comprises:
determining, at least according to the video image to which the filter effect has been added, a video image to which a video special effect is added; and
sending the video image to which the video special effect is added to the second terminal device.
4. The method according to claim 3, wherein the data-processing-amount range currently set for the CPU is obtained as follows:
determining an upper limit of the data processing amount of the CPU, and setting a CPU occupancy ratio range for the CPU to process filter effects; and
determining the data-processing-amount range according to the upper limit of the data processing amount and the CPU occupancy ratio range.
5. The method according to claim 3, wherein the adding the filter effect to the video image in the CPU of the first terminal device comprises:
invoking a predetermined filter-effect implementation algorithm to add the filter effect to the video image; and
the instructing the GPU of the first terminal device to add the filter effect to the video image comprises:
instructing the GPU to add the filter effect to the video image by executing a predetermined filter-effect implementation algorithm through an OpenGL interface.
6. The method according to any one of claims 3-5, wherein the obtaining a video image collected by the image collection apparatus of the first terminal device comprises:
obtaining a current network bandwidth;
determining, according to a preset first correspondence between network bandwidth ranges and image acquisition frame rates, an image acquisition frame rate corresponding to the network bandwidth range in which the current network bandwidth falls, wherein a network bandwidth range is positively correlated with its corresponding image acquisition frame rate; and
obtaining the video image according to the determined image acquisition frame rate.
7. The method according to claim 6, wherein the method further comprises:
determining, according to the current network bandwidth and device configuration information of the first terminal device, at least one currently executable video-special-effect type; and
displaying video special effects corresponding to the at least one executable video-special-effect type, wherein one video-special-effect type corresponds to at least one video special effect; and
the determining a filter effect to be added comprises:
selecting the filter effect to be added from the displayed video special effects.
8. The method according to claim 7, wherein the determining, according to the current network bandwidth and the device configuration information of the first terminal device, at least one currently executable video-special-effect type comprises:
obtaining a second correspondence, the second correspondence comprising a preset correspondence among device configuration levels, network bandwidth ranges, and video-special-effect types, wherein the number of video-special-effect types increases as the device configuration level increases and/or as the network bandwidth range increases; and
determining the at least one currently executable video-special-effect type according to the second correspondence, the current network bandwidth, and the device configuration information of the first terminal device.
9. The method according to claim 7, wherein the method further comprises:
selecting a face pendant image from the displayed video special effects;
and wherein the method further comprises:
extracting a grayscale channel image of the obtained video image;
identifying positions of facial feature points in the grayscale channel image;
determining, according to the facial feature points in the face pendant image and the positions of the facial feature points in the grayscale channel image, an adding position of the face pendant image in the video image;
rendering the video image and the face pendant image into a GPU of the first terminal device, to generate in the GPU a video image and a face pendant image in a first image format, and transmitting the adding position to the GPU; and
obtaining a special-effect video image transmitted by the GPU, the special-effect video image being a video image to which the face pendant image is added at the adding position, and the image format of the special-effect video image being the first image format.
10. The method according to claim 9, wherein the determining, at least according to the video image to which the filter effect has been added, a video image to which a video special effect is added comprises:
determining the video image to which the face pendant image and the filter effect have been added, to obtain the video image to which the video special effect is added, wherein the face pendant image is added before or after the filter effect is added; and
the sending the video image to which the video special effect is added to the second terminal device comprises:
determining a video image, with the video special effect added, whose image format is a second image format;
performing video encoding processing on the video image, with the video special effect added, whose image format is the second image format; and
sending the video image obtained by the video encoding processing to the second terminal device.
11. The method according to claim 9, wherein the identifying positions of facial feature points in the grayscale channel image comprises:
reducing the grayscale channel image by a set ratio to obtain a reduced grayscale channel image;
rotating the reduced grayscale channel image by a set rotation angle;
identifying positions of facial feature points in the rotated reduced grayscale channel image; and
converting the positions of the facial feature points in the rotated reduced grayscale channel image into positions of the facial feature points in the grayscale channel image.
12. The method according to claim 10, wherein the grayscale channel image comprises a Y-channel image or a G-channel image, the Y-channel image being a channel image of a video image whose image format is the YUV format, and the G-channel image being a channel image of a video image whose image format is the RGB format; and
the first image format is the RGB format, and the second image format is the YUV format.
13. The method according to claim 9 or 10, wherein the determining, at least according to the video image to which the filter effect has been added, a video image to which a video special effect is added comprises:
instructing the GPU of the first terminal device to add the selected face pendant image and the filter effect to the video image, to obtain a video image to which the face pendant image and the filter effect are added;
or instructing the GPU to add the face pendant image and the filter effect to the video image in sequence, to obtain a video image to which the face pendant image and the filter effect are added.
14. The method according to claim 2, wherein after the determining, at least according to the video image to which the filter effect has been added, a video image to which a video special effect is added, and before the sending the video image to which the video special effect is added to the second terminal device, the method further comprises:
performing video encoding processing on the video image to which the video special effect is added.
15. The method according to claim 14, wherein the performing video encoding processing on the video image to which the video special effect is added comprises:
determining a corresponding encoding resolution according to a device configuration level in the device configuration information of the first terminal device, the current network bandwidth, and a preset third correspondence, the third correspondence comprising a correspondence among device configuration levels, network bandwidth ranges, and encoding resolutions; and
performing video encoding processing on the video image to which the video special effect is added, at the determined encoding resolution.
16. The method according to claim 1, wherein the method is applied to an electronic device, the image data comprises a video image, and the obtaining collected image data, the determining a processing manner for processing the image data, and the performing feature adjustment on image features in the image data according to the determined processing manner comprise:
obtaining image data collected by the electronic device;
detecting a target operation, the target operation representing an operation by which the electronic device adjusts image features in the image data;
obtaining a parameter-adjustable range value based on the target operation, the parameter-adjustable range value being used to process image features and being obtained at least on the basis of an image processing characteristic for processing image features in the image data; and
selecting a target adjustment value from the parameter-adjustable range value, and performing feature adjustment on the image features in the image data by using the target adjustment value.
17. The method according to claim 16, wherein the obtaining a parameter-adjustable range value based on the target operation comprises:
obtaining an image processing characteristic for processing image features in the image data, and determining the parameter-adjustable range value at least on the basis of the image processing characteristic;
or obtaining an image processing characteristic for processing image features in the image data, and sending the image processing characteristic to a server, so that the server determines the parameter-adjustable range value at least on the basis of the image processing characteristic.
18. The method according to claim 16, wherein the parameter-adjustable range value is obtained at least on the basis of the image processing characteristic and an information transmission characteristic for transmitting the collected image data, and the obtaining a parameter-adjustable range value based on the target operation comprises:
obtaining an image processing characteristic for processing image features in the image data and an information transmission characteristic for transmitting the image data, and determining the parameter-adjustable range value at least on the basis of the image processing characteristic and the information transmission characteristic;
or obtaining an image processing characteristic for processing image features in the image data and an information transmission characteristic for transmitting the image data, and sending the image processing characteristic and the information transmission characteristic to a server, so that the server determines the parameter-adjustable range value at least on the basis of the image processing characteristic and the information transmission characteristic.
19. The method according to claim 16, wherein the method further comprises:
displaying an adjuster, the adjuster representing the parameter-adjustable range value and being used to select the target adjustment value from the parameter-adjustable range value.
20. An image processing method, wherein the method is applied to a server, and the method comprises:
obtaining an image processing characteristic, the image processing characteristic being used to process image features in image data collected by an electronic device;
determining, according to the image processing characteristic, a parameter-adjustable range value matching at least the image processing characteristic; and
sending the parameter-adjustable range value to the electronic device, so that the electronic device obtains the parameter-adjustable range value based on a target operation, selects a target adjustment value from the parameter-adjustable range value, and performs feature adjustment on image features in the image data by using the target adjustment value, the target operation representing an operation by which the electronic device adjusts the image features in the image data.
21. The method according to claim 20, wherein the method further comprises:
obtaining an information transmission characteristic, the information transmission characteristic being used by the electronic device to transmit the collected image data; and
the determining, according to the image processing characteristic, a parameter-adjustable range value matching at least the image processing characteristic comprises:
determining the parameter-adjustable range value according to the image processing characteristic and the information transmission characteristic.
22. An image processing apparatus, wherein the apparatus comprises:
an obtaining module, configured to obtain collected image data; and
a processing module, configured to determine a processing manner for processing the image data, and to
perform feature adjustment on image features in the image data according to the determined processing manner.
23. The apparatus according to claim 22, wherein the image processing apparatus is applied to a first terminal device in a video communication connection with a second terminal device, and in the first terminal device:
the obtaining module is specifically configured to obtain a video image collected by an image collection apparatus of the first terminal device; and
the processing module is specifically configured to determine a filter effect to be added,
determine a data-processing-amount type of the filter effect, and
add the filter effect to the video image according to the data-processing-amount type.
  24. The apparatus according to claim 23, wherein the processing module comprises:
    a first filter effect adding module, configured to add the filter effect to the video image in a CPU of the first terminal device if the data processing amount type matches a first type;
    a second filter effect adding module, configured to instruct a GPU of the first terminal device to add the filter effect to the video image if the data processing amount type matches a second type, wherein the data processing amount corresponding to the first type falls within a data processing amount range currently set for the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type;
    a special-effect video image determining module, configured to determine, at least according to the video image with the filter effect added, a video image with a video special effect added; and
    a video image transmission module, configured to send the video image with the video special effect added to the second terminal device.
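As a rough illustration of the CPU/GPU dispatch described in claim 24, the sketch below classifies a filter by its estimated data processing amount and routes light filters to the CPU path and heavy ones to a GPU path. The cost values and the CPU range are invented for the example; the claims do not fix them:

```python
def classify_filter(cost: float, cpu_range: tuple) -> str:
    """Return 'first' if the filter's data processing amount falls within
    the data processing amount range currently set for the CPU, otherwise
    'second' (to be handed to the GPU)."""
    lo, hi = cpu_range
    return "first" if lo <= cost <= hi else "second"


def add_filter(cost: float, cpu_range: tuple) -> str:
    """Dispatch the filter to the appropriate processor."""
    if classify_filter(cost, cpu_range) == "first":
        return "CPU"  # run the predetermined filter algorithm on the CPU
    return "GPU"      # instruct the GPU (e.g. through OpenGL) to apply it
```

Because the first type is defined as lower-cost than the second, cheap filters (simple color mapping, say) stay on the CPU while expensive ones (per-pixel shader work) are offloaded.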
  25. The apparatus according to claim 24, wherein the processing module further comprises:
    a data processing amount range determining module, configured to: determine an upper limit of the data processing amount of the CPU, set a CPU occupation ratio range for the CPU to process filter effects, and determine the data processing amount range according to the upper limit of the data processing amount and the CPU occupation ratio range;
    wherein the first filter effect adding module is specifically configured to:
    invoke a predetermined filter effect implementation algorithm to add the filter effect to the video image; and
    the second filter effect adding module is specifically configured to:
    instruct the GPU to add the filter effect to the video image with a predetermined filter effect implementation algorithm through an OpenGL interface.
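The range computation in claim 25 — the CPU's data-processing upper limit scaled by the occupation-ratio range reserved for filter work — might look like the following. The concrete numbers and units are hypothetical:

```python
def cpu_filter_range(cpu_upper_limit: float, ratio_range: tuple) -> tuple:
    """Derive the data processing amount range the CPU may spend on filter
    effects: the CPU's data-processing upper limit scaled by the CPU
    occupation ratio range set for filter processing."""
    lo_ratio, hi_ratio = ratio_range
    return (cpu_upper_limit * lo_ratio, cpu_upper_limit * hi_ratio)
```

A filter whose estimated cost lands inside this range would then be classified as the first (CPU) type by the dispatch step of claim 24.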
  26. The apparatus according to claim 24, wherein the obtaining module is specifically configured to:
    obtain a current network bandwidth;
    determine, according to a preset first correspondence between network bandwidth ranges and image acquisition frame rates, an image acquisition frame rate corresponding to the network bandwidth range in which the current network bandwidth falls, wherein a network bandwidth range is positively correlated with its corresponding image acquisition frame rate; and
    acquire the video image according to the determined image acquisition frame rate.
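The bandwidth-to-frame-rate lookup of claim 26 amounts to a monotone table: higher bandwidth ranges map to higher acquisition frame rates. A minimal sketch, with made-up bandwidth brackets and frame rates:

```python
# Preset "first correspondence": (lower bound of bandwidth range in Mbps,
# image acquisition frame rate). Ranges and rates are positively
# correlated, as the claim requires. Values are illustrative only.
BANDWIDTH_TO_FPS = [(0.0, 10), (1.0, 15), (2.0, 24), (5.0, 30)]


def acquisition_frame_rate(bandwidth_mbps: float) -> int:
    """Pick the frame rate of the bandwidth range containing the current
    bandwidth: the last bracket whose lower bound does not exceed it."""
    fps = BANDWIDTH_TO_FPS[0][1]
    for lower_bound, rate in BANDWIDTH_TO_FPS:
        if bandwidth_mbps >= lower_bound:
            fps = rate
    return fps
```

The camera would then be driven at the returned rate, so a congested link automatically produces fewer frames to encode and transmit.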
  27. The video communication apparatus according to claim 24, wherein the apparatus further comprises:
    a display module, configured to determine, according to the current network bandwidth and device configuration information of the first terminal device, at least one currently executable video special effect type, and display video special effects corresponding to the at least one executable video special effect type, wherein one video special effect type corresponds to at least one video special effect;
    wherein the filter effect determining module is specifically configured to:
    determine a filter effect selected from the displayed video special effects.
  28. A first terminal device, wherein the first terminal device is in a video communication connection with a second terminal device, and the first terminal device comprises:
    a central processing unit (CPU), configured to: obtain collected image data;
    determine a processing manner for processing the image data; and
    perform feature adjustment on image features in the image data according to the determined processing manner.
  29. The terminal device according to claim 28, wherein the CPU is specifically configured to:
    obtain a video image collected by an image collection apparatus of the first terminal device, and determine a filter effect to be added;
    determine a data processing amount type of the filter effect; and
    add the filter effect to the video image according to the data processing amount type.
  30. The terminal device according to claim 29, wherein the CPU is specifically configured to:
    add the filter effect to the video image in the CPU of the first terminal device if the data processing amount type matches a first type; and
    instruct a GPU of the first terminal device to add the filter effect to the video image if the data processing amount type matches a second type, wherein the data processing amount corresponding to the first type falls within the data processing amount range currently set for the CPU, and the data processing amount corresponding to the first type is lower than the data processing amount corresponding to the second type; determine, at least according to the video image with the filter effect added, a video image with a video special effect added; and transmit the video image with the video special effect added to the second terminal device;
    wherein the first terminal device further comprises:
    a graphics processing unit (GPU), configured to accept the instruction of the CPU and, according to the instruction, add the filter effect to the video image when the data processing amount type matches the second type.
  31. An electronic device, wherein the electronic device comprises:
    an obtaining module, configured to obtain image data collected by the electronic device;
    a detection unit, configured to detect a target operation, the target operation representing an operation of adjusting image features in the image data;
    a first obtaining unit, configured to obtain a parameter adjustable range value based on the target operation, wherein the parameter adjustable range value is used to process image features and is obtained at least based on an image processing feature for processing the image features in the image data; and
    a processing unit, configured to select a target adjustment value from the parameter adjustable range value, and perform feature adjustment on the image features in the image data using the target adjustment value.
  32. The electronic device according to claim 31, wherein the first obtaining unit is further configured to obtain an image processing feature for processing image features in the image data, and determine the parameter adjustable range value at least based on the image processing feature;
    or obtain an image processing feature for processing image features in the image data, and send the image processing feature to a server, so that the server determines the parameter adjustable range value at least based on the image processing feature.
  33. The electronic device according to claim 31 or 32, wherein the parameter adjustable range value is obtained at least based on the image processing feature and an information transmission feature for transmitting the collected image data;
    the first obtaining unit is further configured to obtain an image processing feature for processing image features in the image data, obtain an information transmission feature for transmitting the image data, and determine the parameter adjustable range value at least based on the image processing feature and the information transmission feature;
    or obtain an image processing feature for processing image features in the image data and an information transmission feature for transmitting the image data, and send the image processing feature and the information transmission feature to a server, so that the server determines the parameter adjustable range value at least based on the image processing feature and the information transmission feature.
  34. The electronic device according to claim 31, wherein the display unit is further configured to:
    display an adjuster, the adjuster representing the parameter adjustable range value and being used to select the target adjustment value from the parameter adjustable range value.
  35. A server, wherein the server comprises:
    an obtaining unit, configured to obtain an image processing feature, the image processing feature being used to process image features in collected image data;
    a processing unit, configured to determine, according to the image processing feature, a parameter adjustable range value matching at least the image processing feature of the electronic device; and
    a transceiver unit, configured to send the parameter adjustable range value to the electronic device, so that the electronic device obtains the parameter adjustable range value based on a target operation, selects a target adjustment value from the parameter adjustable range value, and performs feature adjustment on the image features in the image data using the target adjustment value, the target operation representing an operation by which the electronic device adjusts the image features in the image data.
  36. The server according to claim 35, wherein the obtaining unit is further configured to:
    obtain an information transmission feature, the information transmission feature being used by the electronic device; and
    the processing unit is further configured to determine the parameter adjustable range value according to the image processing feature and the information transmission feature.
PCT/CN2018/071363 2017-01-09 2018-01-04 Image processing method and apparatus, relevant device and server WO2018127091A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710014525.4A CN108289185B (en) 2017-01-09 2017-01-09 Video communication method, device and terminal equipment
CN201710014525.4 2017-01-09
CN201710344562.1 2017-05-16
CN201710344562.1A CN108307101B (en) 2017-05-16 2017-05-16 Image processing method, electronic equipment and server

Publications (1)

Publication Number Publication Date
WO2018127091A1 true WO2018127091A1 (en) 2018-07-12

Family

ID=62789249

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/071363 WO2018127091A1 (en) 2017-01-09 2018-01-04 Image processing method and apparatus, relevant device and server

Country Status (1)

Country Link
WO (1) WO2018127091A1 (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100141784A1 (en) * 2008-12-05 2010-06-10 Yoo Kyung-Hee Mobile terminal and control method thereof
CN103716548A (en) * 2013-12-06 2014-04-09 乐视致新电子科技(天津)有限公司 Video picture special effect processing method and device
CN105357466A (en) * 2015-11-20 2016-02-24 小米科技有限责任公司 Video communication method and video communication device
CN105872447A (en) * 2016-05-26 2016-08-17 努比亚技术有限公司 Video image processing device and method
CN105979195A (en) * 2016-05-26 2016-09-28 努比亚技术有限公司 Video image processing apparatus and method
CN106303361A (en) * 2015-06-11 2017-01-04 阿里巴巴集团控股有限公司 Image processing method, device, system and graphic process unit in video calling


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325905A (en) * 2018-08-29 2019-02-12 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN109325905B (en) * 2018-08-29 2023-10-13 Oppo广东移动通信有限公司 Image processing method, image processing device, computer readable storage medium and electronic apparatus
CN111161133A (en) * 2019-12-26 2020-05-15 维沃移动通信有限公司 Picture processing method and electronic equipment
CN111161133B (en) * 2019-12-26 2023-07-04 维沃移动通信有限公司 Picture processing method and electronic equipment
CN113556500A (en) * 2020-04-24 2021-10-26 华为技术有限公司 Video overlapping method, device and system
CN113556500B (en) * 2020-04-24 2022-05-13 华为技术有限公司 Video overlapping method, device and system
CN111683280A (en) * 2020-06-04 2020-09-18 腾讯科技(深圳)有限公司 Video processing method and device and electronic equipment
CN112241941A (en) * 2020-10-20 2021-01-19 北京字跳网络技术有限公司 Method, device, equipment and computer readable medium for acquiring image
CN112241941B (en) * 2020-10-20 2024-03-22 北京字跳网络技术有限公司 Method, apparatus, device and computer readable medium for acquiring image
CN112241709A (en) * 2020-10-21 2021-01-19 北京字跳网络技术有限公司 Image processing method, and training method and device of beard transformation network
CN114298929A (en) * 2021-12-20 2022-04-08 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN115297359A (en) * 2022-07-29 2022-11-04 北京字跳网络技术有限公司 Multimedia data transmission method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2018127091A1 (en) Image processing method and apparatus, relevant device and server
EP3611915B1 (en) Method and apparatus for image processing
US11216178B2 (en) Video encoding method and electronic device adapted thereto
JP7266672B2 (en) Image processing method, image processing apparatus, and device
CN113129312B (en) Image processing method, device and equipment
CN108289185B (en) Video communication method, device and terminal equipment
KR101768980B1 (en) Virtual video call method and terminal
KR102189345B1 (en) Establishing a video conference during a phone call
US7982762B2 (en) System and method for combining local and remote images such that images of participants appear overlaid on another in substanial alignment
US20140002586A1 (en) Gaze direction adjustment for video calls and meetings
US9838641B1 (en) Low power framework for processing, compressing, and transmitting images at a mobile image capture device
US9723261B2 (en) Information processing device, conference system and storage medium
KR102496225B1 (en) Method for video encoding and electronic device supporting the same
WO2022022019A1 (en) Screen projection data processing method and apparatus
US10084959B1 (en) Color adjustment of stitched panoramic video
CN110868547A (en) Photographing control method, photographing control device, electronic equipment and storage medium
WO2023016167A1 (en) Virtual image video call method, terminal device, and storage medium
CN108307101B (en) Image processing method, electronic equipment and server
US20220301278A1 (en) Image processing method and apparatus, storage medium, and electronic device
CN114531564A (en) Processing method and electronic equipment
CN115515008B (en) Video processing method, terminal and video processing system
RU2791810C2 (en) Method, equipment and device for image processing
RU2794062C2 (en) Image processing device and method and equipment
CN117676065A (en) Video call method and electronic equipment
CN117560579A (en) Shooting processing method, shooting processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18736042

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18736042

Country of ref document: EP

Kind code of ref document: A1