CN113850118A - Video processing function verification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113850118A
Authority
CN
China
Prior art keywords
image
video
pixel
verified
preset
Prior art date
Legal status
Pending
Application number
CN202110833094.0A
Other languages
Chinese (zh)
Inventor
陈裕发
龙祖苑
谢宗兴
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110833094.0A
Publication of CN113850118A

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiments of the application disclose a method and an apparatus for verifying a video processing function, an electronic device, and a storage medium. The method includes: acquiring a template video and image feature data of a first image from a preset template library, where the image feature data of the first image is extracted from the first image of a standard video based on a preset data extraction algorithm, and the standard video is a video obtained by processing the template video that correctly realizes the video processing function to be verified; starting the video processing function to be verified to process the template video into a video to be verified, extracting image feature data from a second image of the video to be verified based on the same preset data extraction algorithm, and comparing the extracted data with the image feature data of the first image; and if the two do not match, determining that the video processing function to be verified is abnormal. The technical solution of the embodiments reduces manual effort and improves verification efficiency.

Description

Video processing function verification method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a method and an apparatus for verifying a video processing function, an electronic device, and a storage medium.
Background
In video editing software, various video processing functions make video creation convenient, and editing and sharing videos has become a popular pastime for many users.
A video processing function may fail during operation. At present, to verify whether a video processing function is abnormal, a video is typically processed through the function and a person visually inspects whether the processed video realizes the intended effect. This verification method consumes considerable manpower; its cost is high and its efficiency is low.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application provide a method and an apparatus for verifying a video processing function, an electronic device, and a computer-readable storage medium.
According to an aspect of an embodiment of the present application, there is provided a method for verifying a video processing function, the method including:
acquiring image characteristic data of a template video and a first image from a preset template library; the image characteristic data of the first image is extracted from the first image of the standard video based on a preset data extraction algorithm; the standard video is a video which is obtained after the template video is processed and realizes the video processing function to be verified;
starting the video processing function to be verified to process the template video to obtain a video to be verified;
extracting image characteristic data from a second image of the video to be verified based on the preset data extraction algorithm, and comparing the image characteristic data of the second image with the image characteristic data of the first image; the position of the second image in the video to be verified is the same as the position of the first image in the standard video;
and if the image feature data of the second image does not match the image feature data of the first image, determining that the video processing function to be verified is abnormal.
According to an aspect of an embodiment of the present application, there is provided a video processing function verification apparatus, including:
the acquisition module is configured to acquire the template video and the image characteristic data of the first image from a preset template library; the image characteristic data of the first image is extracted from the first image of the standard video based on a preset data extraction algorithm; the standard video is a video which is obtained after the template video is processed and realizes the video processing function to be verified;
the processing module is configured to start the video processing function to be verified so as to process the template video to obtain a video to be verified;
the comparison module is configured to extract image characteristic data from a second image of the video to be verified based on the preset data extraction algorithm, and compare the image characteristic data of the second image with the image characteristic data of the first image; the position of the second image in the video to be verified is the same as the position of the first image in the standard video;
and the verification module is configured to determine that the video processing function to be verified is abnormal if the image feature data of the second image does not match the image feature data of the first image.
According to an aspect of an embodiment of the present application, there is provided an electronic device including:
a memory storing computer readable instructions;
and a processor for reading the computer readable instructions stored in the memory to execute the video processing function verification method of any one of the above.
According to an aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon computer-readable instructions, which, when executed by a processor of a computer, cause the computer to execute the video processing function verification method according to any one of the above.
In the technical solution provided by the embodiments of the present application, the template video and the image feature data of a first image are obtained from a preset template library, where the image feature data of the first image is extracted from the first image of a standard video based on a preset data extraction algorithm, and the standard video is a video obtained by processing the template video that correctly realizes the video processing function to be verified. The video processing function to be verified is then started to process the template video into a video to be verified; image feature data is extracted from a second image of the video to be verified based on the same preset data extraction algorithm and compared with the image feature data of the first image, where the position of the second image in the video to be verified is the same as the position of the first image in the standard video. If the two sets of feature data do not match, the video processing function to be verified is determined to be abnormal. In this way, whether the video processing function is abnormal is verified automatically at the image level, without manual visual inspection, which reduces manpower, improves efficiency, and lowers cost; moreover, because verification compares image feature data rather than whole images, the amount of data compared is reduced, further improving efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic illustration of an implementation environment to which the present application relates;
FIG. 2 illustrates a flow chart of a method of verifying video processing functions in accordance with an exemplary embodiment of the present application;
FIG. 3 is a flow chart of step S130 in the embodiment shown in FIG. 2 in an exemplary embodiment;
FIG. 4 is a flow chart of step S130 in the embodiment shown in FIG. 2 in an exemplary embodiment;
FIG. 5-1 is a schematic illustration of an original image shown in an exemplary embodiment of the present application;
FIG. 5-2 is a schematic diagram illustrating an image implementing a person matting function obtained after processing the original image illustrated in FIG. 5-1 according to an exemplary embodiment of the application;
FIG. 6 is a schematic diagram of a second image shown in an exemplary embodiment of the present application;
fig. 7 is a block diagram of an authentication device of a video processing function shown in an exemplary embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It should also be noted that in this application "a plurality" means two or more. "And/or" describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the preceding and following objects.
Before the technical solutions of the embodiments of the present application are described, terms and expressions referred to in the embodiments of the present application are explained, and the terms and expressions referred to in the embodiments of the present application are applied to the following explanations.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
The artificial intelligence technology is a comprehensive subject and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Computer Vision (CV) technology is a science that studies how to make machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and to further process images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques for building artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
In the related art, a person generally inspects visually whether a video processed by the video processing function realizes the intended effect, so as to verify whether the function is abnormal; this verification method consumes a large amount of manpower and is costly and inefficient. Based on this, the embodiments of the present application provide a method and an apparatus for verifying a video processing function, which can improve the speed and efficiency of verification and reduce its cost.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment related to the present application. The implementation environment includes a terminal 100 and an authentication device 200 of a video processing function, and communication is performed between the terminal 100 and the authentication device 200 through a wired or wireless network.
The terminal 100 includes a client of the video processing function to be verified, and a user can operate on the terminal 100 to start the video processing function to be verified, so that the video is processed through the video processing function to be verified, and the video is edited. The client with the video processing function to be verified can be application software with the video processing function to be verified. Or, the client with the video processing function to be verified may also be a web client with the video processing function to be verified, for example, if a certain web site has the video processing function to be verified, the user may access the web site through the web client, so as to start the video processing function to be verified.
The verification device 200 may obtain the template video and the image feature data of the first image from the preset template library, control the terminal 100 to start the video processing function to be verified so as to process the template video into a video to be verified, extract image feature data from a second image of the video to be verified based on a preset data extraction algorithm, and compare the image feature data of the second image with that of the first image; if the two do not match, the video processing function to be verified is determined to be abnormal, thereby verifying the video processing function. The first image is an image in a standard video; the standard video is the processed template video that correctly realizes the video processing function to be verified; the image feature data of the first image is extracted from the first image of the standard video based on the preset data extraction algorithm; and the position of the first image in the standard video is the same as the position of the second image in the video to be verified.
It should be noted that the video processing function to be verified may be implemented based on a computer vision technology, and the preset data extraction algorithm may also be implemented based on a computer vision technology.
The terminal 100 may be any electronic device capable of running a client of the video processing function, such as a smart phone, a tablet, a laptop, or a desktop computer, and the verification device 200 may be any electronic device capable of verifying the video processing function, such as a smart phone, a tablet, a laptop, a computer, or a server. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN (Content Delivery Network), and big-data and artificial-intelligence platforms, which is not limited herein.
It should be noted that the terminal 100 and the authentication device 200 may be independent electronic devices, for example, the terminal 100 is a smart phone, a tablet or a notebook, and the authentication device 200 is a server. The terminal 100 and the verification device 200 may also be the same electronic device, that is, the electronic device may operate a client of the video processing function and may also verify the video processing function.
Fig. 2 is a flow diagram illustrating a method of verification of video processing functions in accordance with an exemplary embodiment. The method may be applied to the implementation environment shown in fig. 1 and is specifically performed by the authentication device 200 in the embodiment environment shown in fig. 1.
As shown in fig. 2, in an exemplary embodiment, the method for verifying the video processing function may include steps S110 to S140, which are described in detail as follows:
step S110, obtaining image feature data of the template video and the first image from a preset template library.
It should be noted that, in this embodiment, a template library is preset, that is, the template library includes the template video and the image feature data of the first image. The number and the type of the template videos contained in the preset template library can be flexibly set according to actual needs. The preset template library can be stored locally or at a network end, such as cloud storage.
The template video is used for verifying the video processing function to be verified and can be flexibly set according to actual needs.
The video processing function is a functional module for processing video, such as a matting function, a special effect adding function, and the like, which can be implemented by computer vision technology. The video processing function to be verified is a video processing function which needs to verify whether the video processing function is abnormal or not. The video processing function to be verified can be a video processing function in video editing software, for example, a special effect function of adding "fireworks" in "micro-vision" APP (application). The video processing function to be verified may also be a video processing function in a video editing website.
The first image is an image in the standard video, i.e. a video frame of the standard video. The number of the first images can be flexibly set according to actual needs, for example, each frame of video frames of the standard video can be used as the first image, that is, the number of the first images is equal to the number of the video frames in the standard video.
The standard video is the video obtained after the template video is processed that correctly realizes the video processing function to be verified, that is, the processed template video that achieves the effect corresponding to the function. The template video may be processed in advance through the video processing function to be verified, the processed video may then be inspected manually, and the manually checked video is used as the standard video. For example, assuming that the video processing function to be verified is a "stars" special-effect function, that function may be started in advance to process the template video, and if manual inspection confirms that "stars" have been added to the processed video, the processed video is used as the standard video.
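The preparation step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: videos are modelled as lists of numpy frame arrays, and `extract_feature` is a hypothetical stand-in for the unspecified preset data extraction algorithm (here, the pixel values of a central region).

```python
import numpy as np

def extract_feature(frame, region_frac=0.5):
    # Hypothetical "preset data extraction algorithm": pixel values of a
    # centered rectangle covering region_frac of each dimension.
    h, w = frame.shape[:2]
    dh = max(1, int(h * region_frac) // 2)
    dw = max(1, int(w * region_frac) // 2)
    cy, cx = h // 2, w // 2
    return frame[cy - dh:cy + dh, cx - dw:cx + dw].copy()

def build_template_entry(template_video, standard_video):
    # The standard video is assumed to have been produced by running the
    # function under test on the template video and checking it manually.
    # Here every frame of the standard video is treated as a "first image".
    return {
        "template_video": template_video,
        "first_image_features": [extract_feature(f) for f in standard_video],
    }
```

The returned dictionary plays the role of one entry in the preset template library; real code might serialize it to local or cloud storage as the text describes.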
The image feature data of the first image is extracted from the first image based on a preset data extraction algorithm. The preset data extraction algorithm can be flexibly set according to actual needs, and can be realized through a computer vision technology.
In this embodiment, when the video processing function to be verified needs to be verified, the image feature data of the template video and the first image are obtained from the preset template library.
In order to ensure that the video processing function to be verified is in a normal state, the image feature data of the template video and the first image can be periodically acquired from the preset template library so as to periodically verify the video processing function to be verified.
Step S120, starting a video processing function to be verified so as to process the template video and obtain the video to be verified.
It should be understood that a video processing function may become abnormal during operation; for example, after a user downloads and installs video editing software, a video processing function may fail while the software runs. To verify whether the video processing function to be verified is abnormal, that function can be started and the template video processed through it to obtain the video to be verified.
Step S130, extracting image feature data from the second image of the video to be verified based on a preset data extraction algorithm, and comparing the image feature data of the second image with the image feature data of the first image.
The second image is an image in the video to be verified, that is, a video frame of the video to be verified. To ensure the comparison is meaningful, the position of the second image in the video to be verified is the same as the position of the first image in the standard video. For example, if the first image is the 2nd video frame in the standard video, the second image is the 2nd video frame in the video to be verified; if the first image is the 5th video frame in the standard video, the second image is the 5th video frame in the video to be verified. Here, "2nd" and "5th" refer to the frame index of the video frame within the video; the larger the frame index, the later the corresponding playing time.
Image feature data is extracted from the second image based on the preset data extraction algorithm, and the image feature data of the second image is then compared with the image feature data of the first image to determine, from the comparison result, whether the video processing function to be verified is abnormal. Because the image feature data of the second image is extracted in the same way as that of the first image, both based on the preset data extraction algorithm, the comparison is reliable.
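Steps S120 and S130 together amount to the loop below. This is a hedged sketch, not the patent's code: `process_fn` stands for the video processing function under test, `extract` for the preset data extraction algorithm, and `match` for the comparison rule, all supplied by the caller; videos are modelled as indexable sequences of frames so that "same position" is simply the same frame index.

```python
def verify_processing_function(template_video, first_image_features,
                               process_fn, extract, match):
    """Return True if the function under test reproduces the stored
    standard-video features, False (abnormal) otherwise."""
    video_to_verify = process_fn(template_video)
    for i, reference_feature in enumerate(first_image_features):
        # The second image occupies the same frame position in the video
        # to be verified as the first image does in the standard video.
        second_image_feature = extract(video_to_verify[i])
        if not match(second_image_feature, reference_feature):
            return False  # feature mismatch: the function is abnormal
    return True
```

In a real deployment `process_fn` would drive the client on the terminal and the frames would come from a decoder; the control flow, however, is the same.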
In step S140, if the image feature data of the second image is not matched with the image feature data of the first image, it is determined that the video processing function to be verified is abnormal.
The standard video is the video obtained after the template video is processed that correctly realizes the video processing function to be verified, and the video to be verified is the video obtained by processing the template video through that function. The first image is an image in the standard video and the second image is an image in the video to be verified, occupying the same position; the image feature data of both images is extracted with the same algorithm. If the video processing function to be verified is operating normally, the image feature data of the second image should therefore match the image feature data of the first image. Accordingly, in this embodiment, if the comparison shows that the image feature data of the second image does not match the image feature data of the first image, the video processing function to be verified is determined to be abnormal.
The image feature data of the second image may be determined to match that of the first image when the two are identical, and not to match when they differ. Alternatively, the image feature data of the second image may be determined to match that of the first image when the similarity between them is greater than or equal to a preset first threshold, and not to match when the similarity is below that threshold; the preset first threshold can be set flexibly according to actual needs, for example to 90% or 80%.
In this embodiment, image feature data of a template video and a first image are obtained from a preset template library, wherein the image feature data of the first image is extracted from the first image of a standard video based on a preset data extraction algorithm, and the standard video is a video which is obtained after the template video is processed and realizes a video processing function to be verified; then, starting a video processing function to be verified to process the template video to obtain a video to be verified, extracting image characteristic data from a second image of the video to be verified based on a preset data extraction algorithm, and comparing the image characteristic data of the second image with the image characteristic data of the first image, wherein the position of the second image in the video to be verified is the same as the position of the first image in the standard video; if the image characteristic data of the second image is not matched with the image characteristic data of the first image, the video processing function to be verified is determined to be abnormal, so that whether the video processing function is abnormal or not is automatically verified on the image level, manual visual judgment is not needed, manpower is reduced, efficiency is improved, cost is reduced, verification of the video processing function is achieved through comparison of the image characteristic data, comparison quantity can be reduced, and efficiency is further improved.
In an exemplary embodiment, the image feature data of the first image includes the pixel values of the pixel points in a first target region, where the first target region is a partial image region of the first image. It should be noted that a pixel value represents the color of a pixel point; an RGB (Red, Green, Blue) value may be used as the pixel value, an HSV (Hue, Saturation, Value) value may be used, or values in other color systems may be used. The position, shape, and size of the first target region can be set flexibly according to actual needs. For example, since the focus of an image usually lies in its central area, the position of the first target region may be the central area of the first image; the shape of the first target region may be a regular figure such as a rectangle, a triangle, or a circle, or an irregular figure; and the area of the first target region may be 1/2, 1/5, 1/10, etc. of that of the first image.
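Selecting a centered rectangular first target region whose area is a given fraction of the image can be sketched as below. This is a hypothetical construction consistent with the area fractions (1/2, 1/5, 1/10) mentioned above, not the patent's prescribed region.

```python
import numpy as np

def central_region_by_area(image, area_frac=0.5):
    # A centered rectangle whose area is area_frac of the whole image:
    # each side is shrunk by sqrt(area_frac) so that the areas scale linearly.
    h, w = image.shape[:2]
    scale = area_frac ** 0.5
    rh, rw = max(1, int(h * scale)), max(1, int(w * scale))
    top, left = (h - rh) // 2, (w - rw) // 2
    return image[top:top + rh, left:left + rw]
```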
Referring to fig. 3, fig. 3 is a flowchart of step S130 in the embodiment shown in fig. 2 in an exemplary embodiment under the condition that the image feature data of the first image includes pixel values of pixel points in the first target region, and the first target region is a partial image region in the first image. As shown in fig. 3, the process of extracting image feature data from the second image of the video to be verified based on the preset data extraction algorithm and comparing the image feature data of the second image with the image feature data of the first image may include steps S131 to S132, which are described in detail as follows:
step S131, a second target area is determined from the second image based on a preset data extraction algorithm.
Because the image characteristic data of the first image includes the pixel values of the pixel points in the first target region, the image characteristic data of the second image includes the pixel values of the pixel points in the second target region, wherein the position of the second target region in the second image is the same as the position of the first target region in the first image.
Therefore, in this embodiment, the second target region is determined from the second image based on the preset data extraction algorithm.
Step S132 is to compare the pixel value of each pixel point in the second target region with the pixel value of each pixel point in the first target region.
And comparing the pixel value of each pixel point in the second target area with the pixel value of each pixel point in the first target area, and determining whether the image characteristic data of the second image is matched with the image characteristic data of the first image according to the comparison result, thereby determining whether the video processing function to be verified is abnormal.
It should be noted that, during comparison, the two pixel points being compared are at the same position. For example, if the A2 pixel point in the second target region is compared with the A1 pixel point in the first target region, the position of the A2 pixel point in the second image is the same as the position of the A1 pixel point in the first image. Specifically, if the first image and the second image each establish a coordinate system with the pixel point at the same position as the origin, pixel points with the same coordinates are at the same position: if the second target region includes the pixel point with coordinates (0, 1), it is compared with the pixel point with coordinates (0, 1) in the first target region, and if the second target region further includes the pixel point with coordinates (2, 3), it is compared with the pixel point with coordinates (2, 3) in the first target region.
When the pixel value of each pixel point in the second target region matches the pixel value of the corresponding pixel point in the first target region, it is determined that the image feature data of the second image matches the image feature data of the first image. Alternatively, when the ratio of the number of pixel points with matched pixel values to the total number of pixel points in the second target region is greater than or equal to a preset second threshold, it is determined that the image feature data of the second image matches the image feature data of the first image; when the ratio is smaller than the preset second threshold, it is determined that the image feature data of the second image does not match the image feature data of the first image. Here, a pixel point with a matched pixel value is a pixel point in the second target region whose pixel value matches that of the corresponding pixel point in the first target region, and the preset second threshold may be flexibly set according to actual needs, for example, to 90%, 80%, and the like. For example, if the second target region includes 100 pixel points and the preset second threshold is 90%, then if the pixel values of 90 pixel points in the second target region match the pixel values of the corresponding pixel points in the first target region, it is determined that the image feature data of the second image matches the image feature data of the first image.
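The second-threshold decision described above can be sketched as follows; this is an illustrative sketch, not the patented implementation, and for simplicity it treats two pixel values as matched only when they are exactly equal (the text below also allows a tolerance):

```python
def region_matches(region1, region2, second_threshold=0.9):
    """region1, region2: pixel values at corresponding positions.
    Match when the fraction of position-wise matching pixels reaches
    the preset second threshold (e.g. 90%)."""
    matched = sum(1 for a, b in zip(region1, region2) if a == b)
    return matched / len(region2) >= second_threshold

# Example from the text: 100 pixels, 90 of them matching, threshold 90%.
r1 = [(i, 0, 0) for i in range(100)]
r2 = [(i, 0, 0) for i in range(90)] + [(0, 9, 9)] * 10
print(region_matches(r1, r2))  # True
```

With 89 matching pixels instead of 90 the ratio falls below the threshold and the regions would be judged as not matching.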
It should be noted that a match between two pixel values may mean that the two pixel values are identical; for example, assuming that RGB values are used as pixel values, if one RGB value is (0, 2, 3) and the other RGB value is also (0, 2, 3), the two pixel values are determined to match. Alternatively, to tolerate errors caused by uncertainty, a match between two pixel values may mean that the difference between the two pixel values is smaller than a preset third threshold, where the preset third threshold may be set to 5, 8, 2, etc., and the specific value may be flexibly set according to actual needs. For example, assuming that RGB values are used as pixel values, if one RGB value is (1, 2, 3) and the other is (7, 5, 1), the difference between the two pixel values is (7-1) + (5-2) + (3-1) = 11; if the preset third threshold is 15, the two pixel values match, and if the preset third threshold is 5, the two pixel values do not match.
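The tolerance-based match just described might look like the following sketch (function name and default threshold are illustrative; absolute per-channel differences are used so the operand order does not matter):

```python
def pixel_values_match(p1, p2, third_threshold=5):
    """Two pixel values match when the sum of per-channel absolute
    differences is smaller than the preset third threshold."""
    return sum(abs(a - b) for a, b in zip(p1, p2)) < third_threshold

# Example from the text: (1, 2, 3) vs (7, 5, 1) differ by 6 + 3 + 2 = 11.
print(pixel_values_match((1, 2, 3), (7, 5, 1), third_threshold=15))  # True
print(pixel_values_match((1, 2, 3), (7, 5, 1), third_threshold=5))   # False
```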
In this embodiment, the pixel values of the partial image regions in the first image and the second image are compared, so that the video processing function is verified according to the comparison result, the comparison amount can be reduced, and the verification efficiency can be improved.
In an exemplary embodiment, the image characteristic data of the first image includes: a first coordinate sequence consisting of the coordinates of the first target pixel points; the first target pixel point is a pixel point which meets a preset condition in the first image. For example, assuming that the coordinates of each first target pixel are (0, 1), (2, 1), (3, 1), (2, 2), (5, 2), (6, 2), respectively, the first coordinate sequence includes { (0, 1), (2, 1), (3, 1), (2, 2), (5, 2), (6, 2) }. Or, to further reduce the data processing amount, the first coordinate sequence may include a plurality of sub-sequences, where one sub-sequence includes the abscissa of the first target pixel belonging to the same row, and records the number of rows corresponding to each sub-sequence (where the number of rows is used to represent the position of the pixel, such as the first row, the second row, etc.); for example, if the first target pixel includes (0, 1), (2, 1), (3, 1), (2, 2), (4, 2), (6, 2), (2, 3), (4, 3), the first target pixel includes 3 rows, where the subsequence corresponding to the first row (the row with the ordinate 1) is {0, 2, 3}, the subsequence corresponding to the second row (the row with the ordinate 2) is {2, 4, 6}, and the subsequence corresponding to the third row (the row with the ordinate 3) is {2, 4 }.
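The row-wise subsequence representation described above can be built with a short sketch like the following (illustrative, not from the original disclosure; rows are keyed by ordinate rather than labeled "first row", "second row"):

```python
from collections import defaultdict

def to_row_subsequences(points):
    """Group target-pixel coordinates into per-row subsequences of
    abscissas, keyed by the ordinate (the recorded row number)."""
    rows = defaultdict(list)
    for x, y in points:
        rows[y].append(x)
    return {y: sorted(xs) for y, xs in rows.items()}

pts = [(0, 1), (2, 1), (3, 1), (2, 2), (4, 2), (6, 2), (2, 3), (4, 3)]
print(to_row_subsequences(pts))  # {1: [0, 2, 3], 2: [2, 4, 6], 3: [2, 4]}
```

This reproduces the example in the text: the row with ordinate 1 yields {0, 2, 3}, the row with ordinate 2 yields {2, 4, 6}, and the row with ordinate 3 yields {2, 4}.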
It should be noted that the preset condition may be flexibly set according to actual needs, for example, the preset condition may be set to be a pixel point with a maximum pixel value, a pixel point with a minimum pixel value, and the like. In this embodiment, a plurality of pixel points meeting a preset condition are determined from a first image, then, coordinates of the determined pixel points form a coordinate sequence to obtain a first coordinate sequence, and image feature data of the first image includes the first coordinate sequence.
Referring to fig. 4, fig. 4 is a flowchart of step S130 in the embodiment shown in fig. 2 in an exemplary embodiment, under the condition that the image feature data of the first image includes the first coordinate sequence. As shown in fig. 4, the process of extracting image feature data from the second image of the video to be verified based on the preset data extraction algorithm and comparing the image feature data of the second image with the image feature data of the first image may include steps S133 to S135, which are described in detail as follows:
step S133, determining pixel points satisfying a preset condition from a second image of the video to be verified, and using the determined pixel points as second target pixel points.
The second target pixel point is a pixel point which meets the preset condition in the second image. In this embodiment, after the video to be verified is obtained, a second target pixel point is determined from a second image of the video to be verified.
Step S134, a second coordinate sequence is obtained based on the coordinates of the second target pixel point.
And after the second target pixel point is determined, a second coordinate sequence is obtained based on the coordinate of the second target pixel point.
It should be noted that the manner of obtaining the second coordinate sequence based on the coordinates of the second target pixel point is the same as the manner of obtaining the first coordinate sequence based on the coordinates of the first target pixel point. For example, if the first coordinate sequence is a sequence of complete coordinates of each first target pixel, the second coordinate sequence is also a sequence of complete coordinates of each second target pixel. If the first coordinate sequence comprises a plurality of subsequences, one subsequence comprises the abscissa of the first target pixel point belonging to the same row, and the row number corresponding to each subsequence is recorded; the second coordinate sequence may include a plurality of subsequences, one subsequence includes the abscissa of the second target pixel point belonging to the same row, and the row number corresponding to each subsequence is recorded; in this way, when the second coordinate sequence is aligned with the first coordinate sequence, the number of rows of the aligned two sub-sequences is the same, for example, if the B2 sub-sequence in the second coordinate sequence is aligned with the B1 sub-sequence in the first coordinate sequence, the number of rows of the B2 sub-sequence is the same as the number of rows of the B1 sub-sequence.
Step S135, comparing the second coordinate sequence with the first coordinate sequence.
And after the second coordinate sequence is obtained, comparing the second coordinate sequence with the first coordinate sequence to determine whether the image characteristic data of the second image is matched with the image characteristic data of the first image according to the comparison result. And if the second coordinate sequence is not matched with the first coordinate sequence, the image characteristic data of the second image is not matched with the image characteristic data of the first image.
For example, if the first coordinate sequence is { (1, 2), (2, 3), (7, 8) } and the second coordinate sequence is also { (1, 2), (2, 3), (7, 8) }, the second coordinate sequence is determined to match the first coordinate sequence. Alternatively, the second coordinate sequence may be considered to match the first coordinate sequence when the ratio of the number of identical coordinates to the total number of coordinates contained in the second coordinate sequence exceeds a preset fourth threshold, where the preset fourth threshold may be 90%, 80%, and the like, and the specific value may be flexibly set according to actual needs. For example, if the preset fourth threshold is 70%, the first coordinate sequence is { (1, 3), (2, 4), (7, 8), (7, 10), (7, 18) }, and the second coordinate sequence is { (1, 3), (2, 4), (7, 8), (7, 10), (7, 15) }, then 4 coordinates in the two sequences are identical, the ratio is 80%, which exceeds the preset fourth threshold, so the first coordinate sequence matches the second coordinate sequence.
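The fourth-threshold comparison above might be sketched as follows (an illustrative sketch; it compares the sequences position by position, as in the example from the text):

```python
def coord_sequences_match(seq1, seq2, fourth_threshold=0.7):
    """Compare the coordinate sequences position by position; match when
    the fraction of identical coordinates reaches the fourth threshold."""
    same = sum(1 for a, b in zip(seq1, seq2) if a == b)
    return same / len(seq2) >= fourth_threshold

s1 = [(1, 3), (2, 4), (7, 8), (7, 10), (7, 18)]
s2 = [(1, 3), (2, 4), (7, 8), (7, 10), (7, 15)]
print(coord_sequences_match(s1, s2))  # True: 4 of 5 coordinates (80%) agree
```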
In this embodiment, the first target pixel points satisfying the preset condition are determined from the first image, the first coordinate sequence is determined based on the coordinates of the first target pixel points, and the first coordinate sequence is stored in the template library; the second target pixel points satisfying the preset condition are determined from the second image, and the second coordinate sequence is determined based on the coordinates of the second target pixel points; the first coordinate sequence is then compared with the second coordinate sequence, and whether the video processing function to be verified is abnormal is determined according to the comparison result. In this way, complex image data is converted into a small amount of data to be stored in the template library, and the image feature data is simplified from pixel values into a coordinate sequence, which greatly improves comparison efficiency, so that the video processing function can be verified quickly, efficiently, and reliably.
In an exemplary embodiment, the video processing function to be verified is a matting function, which is used for determining a target object to be retained from an image and modifying the pixel values of the pixel points outside the target object to a preset pixel value; the preset pixel value can be flexibly set according to actual needs, and for example, the RGB value may be set to (0, 0, 0). For example, see fig. 5-1 and 5-2, where fig. 5-1 is an original image, and fig. 5-2 is the image obtained by processing the original image with the matting function realized. The matting function can also be color matting, in which the target object is the region corresponding to a specific color: color matting determines the region corresponding to the specific color from the image and modifies the pixel values of the pixel points outside the determined region to the preset pixel value.
The first target pixel point is a pixel point corresponding to the contour of the target object in the first image. Because the outline of the target object can represent whether the matting function is realized, the pixel points corresponding to the outline of the target object can be selected as the first target pixel points.
Under the condition that the video processing function to be verified is a matting function and the first target pixel point is a pixel point corresponding to the contour of the target object in the first image, step S133 shown in fig. 4 includes steps S210 to S220, which are described in detail as follows:
step S210, determining a pixel point corresponding to the contour of the target object from the second image.
And after the video to be verified is obtained, determining pixel points corresponding to the outline of the target object from a second image of the video to be verified.
Step S220, using the determined pixel point corresponding to the contour of the target object as a second target pixel point.
And taking the pixel points corresponding to the determined contour of the target object as second target pixel points.
The specific manner of determining the pixel point corresponding to the contour of the target object from the second image includes, but is not limited to, the following two manners:
in the first manner, the process of determining the pixel point corresponding to the contour of the target object from the second image includes steps S211 to S212, which are described in detail as follows:
step S211, for each row of the second image, dividing the pixel points with the same type and continuous positions into the same pixel region to obtain at least one pixel region.
The types include a first type with a pixel value being a preset pixel value and a second type with a pixel value being a non-preset pixel value.
For each line of the second image, the pixel points of the same type and at consecutive positions are divided into the same pixel region, so that at least one pixel region can be obtained. For example, referring to fig. 6, assume that fig. 6 is a second image including 2 lines, each line including 20 pixel points, where one square represents one pixel point, a square filled with oblique lines represents a pixel point whose pixel value is not the preset pixel value (i.e., a second type pixel point), and an unfilled square represents a pixel point whose pixel value is the preset pixel value (i.e., a first type pixel point). For the first line, after the pixel points of the same type and at consecutive positions are divided into the same pixel region, 7 pixel regions are obtained, namely 611-617; for the second line, after the same division, 5 pixel regions are obtained, namely 621-625.
Step S212 is to determine an initial pixel point of each pixel region in at least one pixel region, and use the initial pixel point of each pixel region as a pixel point corresponding to the contour of the target object.
After the pixel regions of each line are obtained, the starting pixel point of each pixel region is determined, and the starting pixel points are used as the pixel points corresponding to the contour of the target object, namely the second target pixel points. Here, the starting pixel point is the pixel point with the smallest abscissa in a pixel region. For example, as shown in fig. 6, assume that a coordinate system is established with the pixel point at the lower left corner as the origin, where the abscissa ranges from 0 to 19 and the ordinate from 0 to 1. The coordinates of the starting pixel point of pixel region 611 are (0, 1), of pixel region 612 are (2, 1), of pixel region 621 are (0, 0), of pixel region 622 are (2, 0), and of pixel region 623 are (5, 0); the starting pixel points of the other pixel regions can be determined similarly. For the first line, the coordinates of the determined pixel points corresponding to the contour of the target object (i.e., the second target pixel points) are (0, 1), (2, 1), (7, 1), (11, 1), (14, 1), (15, 1), (17, 1); for the second line, they are (0, 0), (2, 0), (5, 0), (9, 0), (12, 0). Then, based on the coordinates of the second target pixel points, a second coordinate sequence is obtained: {(0, 1), (2, 1), (7, 1), (11, 1), (14, 1), (15, 1), (17, 1), (0, 0), (2, 0), (5, 0), (9, 0), (12, 0)}.
It should be noted that, in this manner, for each row in the second image, if the pixel value of the pixel points in the first pixel region (the pixel region with the smallest abscissa) is the preset pixel value (i.e., the first type), that pixel region may be discarded, only the starting pixel points of the remaining pixel regions are determined, and the determined starting pixel points are used as the second target pixel points. That is to say, for each row in the second image, in ascending order of abscissa and taking the pixel point with the smallest abscissa as the starting point, the first pixel point whose pixel value is not the preset pixel value is determined; then, taking the determined pixel point as the starting point, the first pixel point whose pixel value is the preset pixel value is determined from the subsequent pixel points; then, taking that pixel point as the starting point, the first pixel point whose pixel value is not the preset pixel value is determined from the subsequent pixel points; and so on in a loop until all pixel points of the row have been traversed, with the determined pixel points used as the pixel points corresponding to the contour of the target object. For example, referring to fig. 6, in the first row, the first pixel region is pixel region 611; since the pixel value of the pixel points in pixel region 611 is the preset pixel value (i.e., the first type), pixel region 611 is discarded, and the starting pixel points of pixel regions 612 to 617 are determined, yielding the coordinate sequence { (2, 1), (7, 1), (11, 1), (14, 1), (15, 1), (17, 1) } corresponding to the first row. That is, for the first row, in ascending order of abscissa, the first pixel point whose pixel value is not the preset pixel value, namely (2, 1), is determined first; then, after (2, 1), the first pixel point whose pixel value is the preset pixel value, namely (7, 1), is determined; then, after (7, 1), the first pixel point whose pixel value is not the preset pixel value, namely (11, 1), is determined; in this way, (2, 1), (7, 1), (11, 1), (14, 1), (15, 1), (17, 1) are determined.
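The first-manner traversal can be sketched as a run-start scan over one row (an illustrative sketch; the function name and the choice of (0, 0, 0) as the preset value are assumptions):

```python
PRESET = (0, 0, 0)  # assumed preset pixel value (e.g. black after matting)

def contour_starts(row, y, preset=PRESET):
    """First manner: return the starting pixel of every run of same-type
    pixels in one row, discarding a leading run of preset-value pixels."""
    starts = []
    prev_type = None
    for x, value in enumerate(row):
        cur_type = (value == preset)       # True: first type, False: second
        if cur_type != prev_type:          # a new run begins at x
            if not (x == 0 and cur_type):  # drop a leading preset-value run
                starts.append((x, y))
            prev_type = cur_type
    return starts

# First row of the fig. 6 example: runs start at 0, 2, 7, 11, 14, 15, 17,
# and the leading run (region 611) holds the preset value.
N = (255, 255, 255)  # any non-preset value
row = ([PRESET] * 2 + [N] * 5 + [PRESET] * 4 + [N] * 3
       + [PRESET] + [N] * 2 + [PRESET] * 3)
print(contour_starts(row, 1))
# [(2, 1), (7, 1), (11, 1), (14, 1), (15, 1), (17, 1)]
```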
In a second manner, the process of determining a pixel point corresponding to the contour of the target object from the second image includes: for each row of the second image, determining the pixel point with the minimum abscissa and the pixel point with the maximum abscissa from the pixel points with the pixel values not being preset pixel values, and taking the pixel point with the minimum abscissa and the pixel point with the maximum abscissa as the pixel points corresponding to the contour of the target object.
And for each row of the second image, determining pixel points with pixel values which are not preset pixel values, and selecting the pixel point with the minimum abscissa and the pixel point with the maximum abscissa from the determined pixel points as pixel points corresponding to the contour of the target object, namely the second target pixel point.
For example, referring to fig. 6, for the first row, the coordinates of the pixel point whose pixel value is not the preset pixel value and whose abscissa is minimum are (2, 1), and the coordinates of the pixel point whose pixel value is not the preset pixel value and whose abscissa is maximum are (16, 1); for the second row, the coordinates of the pixel point with the pixel value not being the preset pixel value and the smallest abscissa are (2, 0), and the coordinates of the pixel point with the pixel value not being the preset pixel value and the largest abscissa are (11, 0).
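The second manner reduces to taking the extreme abscissas of the non-preset pixels in each row, as in this illustrative sketch (names and the (0, 0, 0) preset value are assumptions):

```python
def row_extremes(row, y, preset=(0, 0, 0)):
    """Second manner: for one row, return the leftmost and rightmost
    pixel points whose value is not the preset pixel value."""
    xs = [x for x, v in enumerate(row) if v != preset]
    return [(min(xs), y), (max(xs), y)] if xs else []

# First row of the fig. 6 example: non-preset pixels span columns 2..16.
P, N = (0, 0, 0), (255, 255, 255)  # preset / non-preset pixel values
row = [P] * 2 + [N] * 5 + [P] * 4 + [N] * 3 + [P] + [N] * 2 + [P] * 3
print(row_extremes(row, 1))  # [(2, 1), (16, 1)]
```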
It should be noted that, since the first coordinate sequence and the second coordinate sequence are obtained in the same manner, for the specific process of obtaining the first coordinate sequence, reference may be made to the above process of obtaining the second coordinate sequence, and details are not repeated in this embodiment.
In an exemplary embodiment, under the conditions that the video processing function to be verified is a matting function, the image feature data of the first image includes a first coordinate sequence formed by the coordinates of the first target pixel points, and the first target pixel points are the pixel points corresponding to the contour of the target object in the first image, after step S135 shown in fig. 4, the method for verifying the video processing function further includes step S136, which is described in detail as follows:
step S136, if the second coordinate sequence does not match the first coordinate sequence, determining that the image feature data of the second image does not match the image feature data of the first image, and the mismatch type is that the target object contour does not match.
And comparing the second coordinate sequence with the first coordinate sequence, wherein if the comparison result is that the second coordinate sequence is not matched with the first coordinate sequence, the image characteristic data of the second image is not matched with the image characteristic data of the first image, and the unmatched type is the target object contour mismatching.
In an exemplary embodiment, the video processing function to be verified is a matting function, the image feature data of the first image includes a first coordinate sequence formed by the coordinates of the first target pixel points, and the first target pixel points are the pixel points corresponding to the contour of the target object in the first image. The image feature data of the first image further includes a first pixel value sequence corresponding to the pixel points in the first coordinate sequence whose pixel values are not the preset pixel value. For example, assume that the first coordinate sequence is { (0, 1), (2, 1), (7, 1), (11, 1) }, where the pixel values of the pixel points (2, 1) and (7, 1) are not the preset pixel value, the pixel value of pixel point (2, 1) is (100, 20, 50), and the pixel value of pixel point (7, 1) is (120, 15, 38); then the first pixel value sequence is { (100, 20, 50), (120, 15, 38) }. Under these conditions, after step S135 shown in fig. 4, the method for verifying the video processing function further includes steps S310 to S340, which are described in detail as follows:
in step S310, if the second coordinate sequence matches the first coordinate sequence, a pixel point of a pixel value in the second coordinate sequence that is not a preset pixel value is determined.
If the second coordinate sequence matches the first coordinate sequence, the contours of the target object in the second image and the first image match; in order to further confirm whether the content of the target object has changed between the second image and the first image, the pixel points in the second coordinate sequence whose pixel values are not the preset pixel value are determined.
In step S320, a second pixel value sequence is obtained based on the determined pixel values of the pixel points.
And obtaining a second pixel value sequence based on the determined pixel values of the pixel points.
In step S330, the first pixel value sequence and the second pixel value sequence are compared.
The first sequence of pixel values is compared to the second sequence of pixel values.
When the comparison is carried out, one pixel value in the first pixel value sequence is only compared with the pixel value at the corresponding position in the second pixel value sequence. For example, a first pixel value in a first sequence of pixel values is aligned with a first pixel value in a second sequence of pixel values, and a second pixel value in the first sequence of pixel values is aligned with a second pixel value in the second sequence of pixel values.
In step S340, if the first pixel value sequence does not match the second pixel value sequence, it is determined that the image feature data of the second image does not match the image feature data of the first image, and the type of mismatch is that the target object pixel value does not match.
If the first pixel value sequence is not matched with the second pixel value sequence, the image characteristic data of the second image is not matched with the image characteristic data of the first image, and the mismatch type is that the target object pixel value is not matched.
The first pixel value sequence may be determined to match the second pixel value sequence when each pixel value in the first pixel value sequence matches the pixel value at the corresponding position in the second pixel value sequence; when at least one pixel value in the first pixel value sequence does not match the pixel value at the corresponding position in the second pixel value sequence, it is determined that the first pixel value sequence does not match the second pixel value sequence. Alternatively, when the ratio of the number of matched pixel values in the second pixel value sequence to the total number of pixel values contained in the second pixel value sequence is greater than or equal to a preset fifth threshold, the first pixel value sequence may be determined to match the second pixel value sequence; when the ratio is smaller than the preset fifth threshold, it is determined that the first pixel value sequence does not match the second pixel value sequence. The preset fifth threshold can be flexibly set according to actual needs. For example, assuming that the preset fifth threshold is 89% and the second pixel value sequence contains 10 pixel values, of which 8 match the pixel values in the first pixel value sequence, the ratio is 8/10, and the second pixel value sequence does not match the first pixel value sequence. It should be noted that, for how to determine whether two pixel values match, please refer to the foregoing description, which is not repeated here.
In an exemplary embodiment, the method for verifying the video processing function further includes: after it is determined that the video processing function to be verified is abnormal, outputting function abnormality information, wherein the function abnormality information includes the position, in the video to be verified, of the second image whose comparison result is a mismatch, and the mismatch type. In this way, maintenance personnel can repair the video processing function to be verified based on the function abnormality information.
It should be noted that the manner of outputting the function abnormality information may be flexibly set according to actual needs; for example, the function abnormality information may be sent to a preset mailbox, where the preset mailbox may be the mailbox of a maintenance person, so that the maintenance person can quickly repair the video processing function.
In an exemplary embodiment, the first image is a video frame selected from a standard video based on a preset video frame selection manner, wherein, in order to reduce the comparison amount, the number of the first images is smaller than the number of video frames in the standard video. The preset video frame selection mode can be flexibly set according to actual needs, for example, because a cover is usually a focus of a video, the cover of a standard video can be selected as a first image, or a first frame video frame of the standard video can be selected as the first image.
Under this condition, before step S130 shown in fig. 1, the method for verifying the video processing function may further include: and selecting a video frame at a corresponding position from the video to be verified as a second image based on the position of the first image in the standard video.
For example, if the first image is a cover of the standard video, the cover of the video to be verified is used as a second image; and if the first image is the first frame of the standard video, taking the first frame video frame in the video to be verified as the second image.
In an exemplary embodiment, in the case that the number of the first images is greater than or equal to 2, specific implementations of step S130 shown in fig. 1 include, but are not limited to, the following two ways:
the first mode is as follows: extracting image characteristic data from each second image based on a preset data extraction algorithm; and comparing the image characteristic data of each second image with the image characteristic data of each first image in sequence until all the second images are compared.
That is, each second image is compared with the corresponding first image. Therefore, the verification precision is improved, and the comparison result of each second image can be known when the function abnormality information is output subsequently. For example, assuming that the second image includes three images of C1, C2, and C3, and the first image includes 3 images of D1, D2, and D3, where C1 corresponds to D1, C2 corresponds to D2, and C3 corresponds to D3, the image feature data of C1 and the image feature data of D1 are compared, the image feature data of C2 and the image feature data of D2 are compared, and the image feature data of C3 and the image feature data of D3 are compared, respectively, and a comparison result is obtained.
The second mode is as follows: extracting image feature data from each second image based on a preset data extraction algorithm; and comparing the image feature data of each second image with the image feature data of the corresponding first image in sequence, and stopping the comparison if the image feature data of a second image does not match the image feature data of the corresponding first image.
That is, image feature data is extracted from each second image based on the preset data extraction algorithm, and the image feature data of each second image is compared with the image feature data of the corresponding first image in sequence; if the image feature data of a second image does not match the image feature data of the corresponding first image, the video processing function to be verified is abnormal, so the comparison is stopped, which reduces the data processing amount and increases the response speed. For example, if the second image includes three images E1, E2, and E3, and the first image includes three images F1, F2, and F3, where E1 corresponds to F1, E2 corresponds to F2, and E3 corresponds to F3, the image feature data of E1 is compared with the image feature data of F1; if the comparison result is a match, the image feature data of E2 is compared with the image feature data of F2; if the comparison result is a mismatch, the comparison is stopped.
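The early-stopping second mode can be sketched generically as follows (an illustrative sketch; the extraction and matching steps are passed in as callables, which is an assumption of this sketch, not the patented structure):

```python
def verify_function(second_images, reference_features, extract, match):
    """Second mode: compare each second image's extracted features with
    the corresponding reference (first-image) features in order,
    stopping at the first mismatch."""
    for image, reference in zip(second_images, reference_features):
        if not match(extract(image), reference):
            return False  # abnormal: stop comparing immediately
    return True           # all images compared, no mismatch found

# Toy usage with identity extraction and equality matching.
print(verify_function([1, 2, 3], [1, 2, 3], lambda i: i, lambda a, b: a == b))
print(verify_function([1, 9, 3], [1, 2, 3], lambda i: i, lambda a, b: a == b))
```

The first mode differs only in that it records every comparison result instead of returning at the first mismatch.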
Fig. 7 is a block diagram of an authentication apparatus of a video processing function shown in an exemplary embodiment of the present application. The device includes:
an obtaining module 710, configured to acquire a template video and image feature data of a first image from a preset template library; the image feature data of the first image is extracted from the first image of a standard video based on a preset data extraction algorithm; the standard video is the video, implementing the video processing function to be verified, that is obtained after the template video is processed;

a processing module 720, configured to start the video processing function to be verified so as to process the template video and obtain a video to be verified;

a comparison module 730, configured to extract image feature data from a second image of the video to be verified based on the preset data extraction algorithm, and to compare the image feature data of the second image with the image feature data of the first image; the position of the second image in the video to be verified is the same as the position of the first image in the standard video;

a verification module 740, configured to determine that the video processing function to be verified is abnormal if the image feature data of the second image does not match the image feature data of the first image.
In another exemplary embodiment, under the condition that the image feature data of the first image includes pixel values of pixels in a first target region, and the first target region is a partial image region in the first image, the comparing module 730 includes:
a region determination module configured to determine a second target region from the second image based on a preset data extraction algorithm; wherein the position of the second target area in the second image is the same as the position of the first target area in the first image.
And the pixel value comparison module is configured to compare the pixel value of each pixel point in the second target area with the pixel value of each pixel point in the first target area.
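As an illustration of the region comparison performed by these two modules, a minimal sketch follows. Images are modeled here as nested lists of pixel values; the function names and the (top, left, height, width) region parameters are assumptions introduced for the example, not part of the original embodiment.

```python
def extract_region(image, top, left, height, width):
    """Crop the target region (a partial image area) from a 2-D pixel grid."""
    return [row[left:left + width] for row in image[top:top + height]]

def regions_match(second_image, first_region_pixels, top, left):
    """Compare each pixel value in the second target area with the pixel
    value at the same position in the first target area."""
    h = len(first_region_pixels)
    w = len(first_region_pixels[0])
    second_region = extract_region(second_image, top, left, h, w)
    return second_region == first_region_pixels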
In another exemplary embodiment, under the condition that the image feature data of the first image includes a first coordinate sequence formed by coordinates of each first target pixel, and the first target pixel is a pixel satisfying a preset condition in the first image, the comparing module 730 includes:
and the pixel point determining module is configured to determine pixel points meeting preset conditions from a second image of the video to be verified, and take the determined pixel points as second target pixel points.
And the coordinate sequence determination module is configured to obtain a second coordinate sequence based on the coordinates of the second target pixel point.
And the coordinate sequence comparison module is configured to compare the second coordinate sequence with the first coordinate sequence.
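The coordinate-sequence path above can be sketched as follows, under stated assumptions: the "preset condition" is abstracted as an arbitrary predicate on a pixel value, and pixels are scanned in row-major order so that sequences built from two images are directly comparable. The function names are hypothetical.

```python
def coordinate_sequence(image, predicate):
    """Collect (row, col) coordinates of pixels satisfying the preset
    condition, scanned in a fixed row-major order."""
    return [(r, c)
            for r, row in enumerate(image)
            for c, value in enumerate(row)
            if predicate(value)]

def sequences_match(second_seq, first_seq):
    """The second coordinate sequence matches only if it is identical to
    the first, element by element."""
    return second_seq == first_seq
```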
In another exemplary embodiment, when the video processing function to be verified is a matting function, the matting function is used to determine a target object to be retained from an image and to modify the pixel values of pixel points outside the target object into a preset pixel value; under the condition that the first target pixel point is a pixel point corresponding to the contour of the target object in the first image, the pixel point determining module includes:
and the contour pixel point determining module is configured to determine pixel points corresponding to the contour of the target object from the second image.
And the target pixel point determining module is configured to take the pixel point corresponding to the determined outline of the target object as a second target pixel point.
In another exemplary embodiment, the contour pixel point determining module includes:
the dividing module is configured to divide pixel points which are the same in type and continuous in position into the same pixel area for each line of the second image so as to obtain at least one pixel area; the types include a first type with a pixel value being a preset pixel value and a second type with a pixel value being a non-preset pixel value.
And the initial pixel point determining module is configured to determine an initial pixel point of each pixel area in at least one pixel area, and take the initial pixel point of each pixel area as a pixel point corresponding to the contour of the target object.
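The row segmentation performed by these two modules can be sketched as follows. This is a sketch under assumptions: `PRESET` stands for the preset pixel value written by the matting function (0 is chosen arbitrarily here), and images are nested lists of pixel values.

```python
PRESET = 0  # assumed preset (background) pixel value written by the matting function

def contour_pixels_by_runs(image):
    """For each row, divide pixels of the same type (preset vs. non-preset
    value) and continuous position into pixel areas; the initial pixel of
    each area marks a type transition, i.e. a contour pixel candidate."""
    contour = []
    for r, row in enumerate(image):
        prev_type = None
        for c, value in enumerate(row):
            cur_type = (value == PRESET)
            if cur_type != prev_type:   # start of a new pixel area
                contour.append((r, c))
                prev_type = cur_type
    return contour
```

In the row `[0, 0, 5, 5, 0]`, three pixel areas begin at columns 0, 2, and 4, so those three positions are taken as contour pixels for that row.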
In another exemplary embodiment, the contour pixel point determining module includes:
and the contour pixel point determining submodule is configured to determine a pixel point with the minimum abscissa and a pixel point with the maximum abscissa from pixel points with pixel values which are not preset pixel values for each row of the second image, and take the pixel point with the minimum abscissa and the pixel point with the maximum abscissa as pixel points corresponding to the contour of the target object.
In another exemplary embodiment, when the video processing function to be verified is a matting function, the matting function is used to determine a target object to be retained from an image, and modify pixel values of pixel points outside the target object into preset pixel values; the device further includes, under the condition that the first target pixel point is a pixel point corresponding to the contour of the target object in the first image:
and the first determining module is configured to determine the image characteristic data of the second image and the image characteristic data of the first image if the second coordinate sequence does not match the first coordinate sequence, and the mismatch type is the target object contour mismatch.
In another exemplary embodiment, when the video processing function to be verified is a matting function, the matting function is used to determine a target object to be retained from an image, and modify pixel values of pixel points outside the target object into preset pixel values; the first target pixel point is a pixel point corresponding to the contour of the target object in the first image; the image feature data of the first image further includes a first pixel value sequence corresponding to a pixel point of a pixel value other than the preset pixel value in the first coordinate sequence, and the apparatus further includes:
and the second determining module is configured to determine a pixel point of which the pixel value is not the preset pixel value in the second coordinate sequence if the second coordinate sequence is matched with the first coordinate sequence.
A pixel value sequence determination module configured to obtain a second pixel value sequence based on the determined pixel values of the pixel points.
A pixel value sequence comparison module configured to compare the first pixel value sequence with the second pixel value sequence.
And the third determining module is configured to determine that the image characteristic data of the second image does not match the image characteristic data of the first image and the mismatch type is that the target object pixel value does not match if the first pixel value sequence does not match the second pixel value sequence.
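The two-stage check performed by the second, third, and comparison modules above can be sketched as follows. This sketch makes simplifying assumptions: `PRESET` is an arbitrary background value, the coordinate sequence is rebuilt here from all non-preset pixels rather than from contour pixels only, and the function name and mismatch-type strings are illustrative.

```python
PRESET = 0  # assumed preset (background) pixel value written by the matting function

def verify_matting_frame(second_image, first_coords, first_values):
    """Stage 1: compare coordinate sequences (contour check).
    Stage 2: only if stage 1 matches, compare the pixel values at those
    coordinates. Returns (matched, mismatch_type)."""
    second_coords = [(r, c)
                     for r, row in enumerate(second_image)
                     for c, v in enumerate(row) if v != PRESET]
    if second_coords != first_coords:
        return (False, "target object contour mismatch")
    second_values = [second_image[r][c] for r, c in second_coords]
    if second_values != first_values:
        return (False, "target object pixel value mismatch")
    return (True, None)
```

Reporting the mismatch type distinguishes a matting error that distorts the object's outline from one that corrupts the object's interior pixels.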
In another exemplary embodiment, the apparatus further comprises:
the output module is configured to output the abnormal function information after determining that the video processing function to be verified is abnormal, and the abnormal function information comprises: and the comparison result is the position of the unmatched second image in the video to be verified and the unmatched type.
In another exemplary embodiment, the first image is a video frame selected from a standard video based on a preset video frame selection mode; under the condition that the number of the first images is less than the number of video frames in the standard video, the device further comprises:
and the selection module is configured to select a video frame at a corresponding position from the video to be verified as a second image based on the position of the first image in the standard video.
In another exemplary embodiment, the selection module includes:
and the first selection submodule is configured to take the cover of the video to be verified as the second image if the first image is the cover of the standard video.
And the second selection submodule is configured to use the first frame video frame in the video to be verified as the second image if the first image is the first frame of the standard video.
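The frame selection performed by these submodules can be sketched as follows. The dictionary layout of the video and the position labels (`"cover"`, `"first_frame"`, or an integer frame index) are assumptions made for the example.

```python
def select_second_image(video, first_image_position):
    """Pick from the video to be verified the frame at the same position
    the first image occupied in the standard video."""
    if first_image_position == "cover":
        return video["cover"]
    if first_image_position == "first_frame":
        return video["frames"][0]
    return video["frames"][first_image_position]   # generic frame index
```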
In another exemplary embodiment, under the condition that the number of the first images is greater than or equal to 2 and the number of the second images is greater than or equal to 2, the comparing module 730 includes:
the first comparison sub-module is configured to extract image characteristic data from each second image based on a preset data extraction algorithm; and comparing the image characteristic data of each second image with the image characteristic data of each first image in sequence until all the second images are compared.
The second comparison submodule is configured to extract image feature data from each second image based on a preset data extraction algorithm; and to compare the image feature data of each second image with the image feature data of the corresponding first image in sequence, stopping the comparison if the image feature data of a second image does not match the image feature data of its corresponding first image.
It should be noted that the apparatus provided in the foregoing embodiment and the method provided in the foregoing embodiment belong to the same concept, and the specific manner in which each module and unit execute operations has been described in detail in the method embodiment, and is not described again here.
Embodiments of the present application further provide an electronic device, including a processor and a memory, where the memory has stored thereon computer-readable instructions, which when executed by the processor, implement the video processing function verification method as described above.
Fig. 8 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
It should be noted that the electronic device is merely an example adapted to the present application and should not be construed as limiting its scope of use in any way. Nor should the electronic device be construed as requiring, or depending on, one or more components of the exemplary electronic device illustrated in fig. 8.
As shown in fig. 8, in an exemplary embodiment, the electronic device includes a processing component 801, a memory 802, a power component 803, a multimedia component 804, an audio component 805, a sensor component 807, and a communication component 808. The above components are not all necessary, and the electronic device may add other components or reduce some components according to its own functional requirements, which is not limited in this embodiment.
The processing component 801 generally controls overall operation of the electronic device, such as operations associated with display, data communication, and log data processing. The processing component 801 may include one or more processors 809 to execute instructions to perform all or a portion of the above-described operations. Further, the processing component 801 may include one or more modules that facilitate interaction between the processing component 801 and other components. For example, the processing component 801 may include a multimedia module to facilitate interaction between the multimedia component 804 and the processing component 801.
The memory 802 is configured to store various types of data to support operation at the electronic device, examples of which include instructions for any application or method operating on the electronic device. The memory 802 stores one or more modules configured to be executed by the one or more processors 809 to perform all or part of the steps of the methods described in the embodiments above.
The power supply component 803 provides power to the various components of the electronic device. The power components 803 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for an electronic device.
The multimedia component 804 includes a screen that provides an output interface between the electronic device and the user. In some embodiments, the screen may include a TP (Touch Panel) and an LCD (Liquid Crystal Display). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 805 is configured to output and/or input audio signals. For example, the audio component 805 includes a microphone configured to receive external audio signals when the electronic device is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. In some embodiments, the audio component 805 also includes a speaker for outputting audio signals.
The sensor assembly 807 includes one or more sensors for providing various aspects of status assessment for the electronic device. For example, the sensor assembly 807 may detect an open/closed state of the electronic device, and may also detect a temperature change of the electronic device.
The communication component 808 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device may access a Wireless network based on a communication standard, such as Wi-Fi (Wireless-Fidelity, Wireless network).
It will be appreciated that the configuration shown in fig. 8 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 8 or have different components than shown in fig. 8. Each of the components shown in fig. 8 may be implemented in hardware, software, or a combination thereof.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Yet another aspect of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist separately without being incorporated in the electronic device.
It should be noted that the computer readable storage medium of the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods provided in the various embodiments described above.
The above description is only a preferred exemplary embodiment of the present application, and is not intended to limit the embodiments of the present application, and those skilled in the art can easily make various changes and modifications according to the main concept and spirit of the present application, so that the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method for verifying a video processing function, the method comprising:
acquiring image characteristic data of a template video and a first image from a preset template library; the image characteristic data of the first image is extracted from the first image of the standard video based on a preset data extraction algorithm; the standard video is a video which is obtained after the template video is processed and realizes the video processing function to be verified;
starting the video processing function to be verified to process the template video to obtain a video to be verified;
extracting image characteristic data from a second image of the video to be verified based on the preset data extraction algorithm, and comparing the image characteristic data of the second image with the image characteristic data of the first image; the position of the second image in the video to be verified is the same as the position of the first image in the standard video;
and if the image feature data of the second image does not match the image feature data of the first image, determining that the video processing function to be verified is abnormal.
2. The method of claim 1, wherein the image characteristic data of the first image comprises pixel values for pixels within the first target region; the first target area is a partial image area in the first image;
the extracting image feature data from a second image of the video to be verified based on the preset data extraction algorithm, and comparing the image feature data of the second image with the image feature data of the first image, includes:
determining a second target area from the second image based on the preset data extraction algorithm; wherein the second target region is located at the same position in the second image as the first target region is located in the first image;
and comparing the pixel value of each pixel point in the second target area with the pixel value of each pixel point in the first target area.
3. The method of claim 1, wherein the image feature data of the first image comprises: a first coordinate sequence consisting of the coordinates of the first target pixel points; the first target pixel point is a pixel point which meets a preset condition in the first image;
the extracting image feature data from a second image of the video to be verified based on the preset data extraction algorithm, and comparing the image feature data of the second image with the image feature data of the first image, includes:
determining pixel points meeting the preset conditions from a second image of the video to be verified, and taking the determined pixel points as second target pixel points;
obtaining a second coordinate sequence based on the coordinates of the second target pixel points;
and comparing the second coordinate sequence with the first coordinate sequence.
4. The method according to claim 3, wherein the video processing function to be verified is a matting function, and the matting function is used for determining a target object to be preserved from an image and modifying the pixel value of a pixel point outside the target object into a preset pixel value; the first target pixel point is a pixel point corresponding to the contour of the target object in the first image;
the determining, from the second image of the video to be verified, a pixel point meeting the preset condition, and taking the determined pixel point as a second target pixel point includes:
determining pixel points corresponding to the contour of the target object from the second image;
and taking the pixel points corresponding to the determined contour of the target object as the second target pixel points.
5. The method of claim 4, wherein the determining pixel points corresponding to the contour of the target object from the second image comprises:
for each line of the second image, dividing pixel points which are the same in type and continuous in position into the same pixel area to obtain at least one pixel area; wherein the types include a first type in which the pixel value is the preset pixel value and a second type in which the pixel value is not the preset pixel value;
and determining initial pixel points of each pixel area in the at least one pixel area, and taking the initial pixel points of each pixel area as pixel points corresponding to the contour of the target object.
6. The method of claim 4, wherein the determining pixel points corresponding to the contour of the target object from the second image comprises:
for each row of the second image, determining a pixel point with the minimum abscissa and a pixel point with the maximum abscissa from pixel points with pixel values not equal to the preset pixel value, and taking the pixel point with the minimum abscissa and the pixel point with the maximum abscissa as pixel points corresponding to the contour of the target object.
7. The method according to claim 3, wherein the video processing function to be verified is a matting function, and the matting function is used for determining a target object to be preserved from an image and modifying the pixel value of a pixel point outside the target object into a preset pixel value; the first target pixel point is a pixel point corresponding to the contour of the target object in the first image;
after the comparing of the second coordinate sequence with the first coordinate sequence, the method further comprises:
and if the second coordinate sequence is not matched with the first coordinate sequence, determining that the image characteristic data of the second image is not matched with the image characteristic data of the first image, and the type of mismatching is that the target object contour is not matched.
8. The method according to claim 3, wherein the video processing function to be verified is a matting function, and the matting function is used for determining a target object to be preserved from an image and modifying the pixel value of a pixel point outside the target object into a preset pixel value; the first target pixel point is a pixel point corresponding to the contour of the target object in the first image; the image feature data of the first image further comprises: in the first coordinate sequence, a first pixel value sequence corresponding to a pixel point of which the pixel value is not the preset pixel value;
after the comparing of the second coordinate sequence with the first coordinate sequence, the method further comprises:
if the second coordinate sequence is matched with the first coordinate sequence, determining pixel points of which the pixel values are not the preset pixel values in the second coordinate sequence;
obtaining a second pixel value sequence based on the determined pixel values of the pixel points;
comparing the first sequence of pixel values to the second sequence of pixel values;
if the first pixel value sequence is not matched with the second pixel value sequence, determining that the image characteristic data of the second image is not matched with the image characteristic data of the first image, and the type of mismatch is that the target object pixel value is not matched.
9. The method of claim 8, wherein the method further comprises:
after determining that the video processing function to be verified is abnormal, outputting function abnormal information, wherein the function abnormal information comprises: and the comparison result is the position of the unmatched second image in the video to be verified and the unmatched type.
10. The method of claim 1, wherein the first image is a video frame selected from the standard video based on a predetermined video frame selection manner; the number of the first images is less than the number of video frames in the standard video;
before extracting image feature data from a second image of the video to be verified, the method further comprises:
and selecting a video frame at a corresponding position from the video to be verified as the second image based on the position of the first image in the standard video.
11. The method according to claim 10, wherein the selecting a video frame at a corresponding position from the video to be verified as the second image based on the position of the first image in the standard video comprises:
if the first image is the cover of the standard video, taking the cover of the video to be verified as the second image;
and if the first image is the first frame of the standard video, taking the first frame video frame in the video to be verified as the second image.
12. The method of claim 1, wherein the number of first images is 2 or greater, and the number of second images is 2 or greater;
the extracting image feature data from a second image of the video to be verified based on the preset data extraction algorithm, and comparing the image feature data of the second image with the image feature data of the first image, includes:
extracting image characteristic data from each second image based on the preset data extraction algorithm; comparing the image characteristic data of each second image with the image characteristic data of each first image in sequence until all the second images are compared;
or,
extracting image feature data from each second image based on the preset data extraction algorithm; and comparing the image feature data of each second image with the image feature data of the corresponding first image in sequence, and stopping the comparison if the image feature data of a second image does not match the image feature data of its corresponding first image.
13. An apparatus for verifying video processing functions, comprising:
the acquisition module is configured to acquire the template video and the image characteristic data of the first image from a preset template library; the image characteristic data of the first image is extracted from the first image of the standard video based on a preset data extraction algorithm; the standard video is a video which is obtained after the template video is processed and realizes the video processing function to be verified;
the processing module is configured to start the video processing function to be verified so as to process the template video to obtain a video to be verified;
the comparison module is configured to extract image characteristic data from a second image of the video to be verified based on the preset data extraction algorithm, and compare the image characteristic data of the second image with the image characteristic data of the first image; the position of the second image in the video to be verified is the same as the position of the first image in the standard video;
and the verification module is configured to determine that the video processing function to be verified is abnormal if the image feature data of the second image does not match the image feature data of the first image.
14. An electronic device, comprising:
a memory storing computer readable instructions;
a processor to read computer readable instructions stored by the memory to perform the method of any of claims 1-12.
15. A computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1-12.
CN202110833094.0A 2021-07-22 2021-07-22 Video processing function verification method and device, electronic equipment and storage medium Pending CN113850118A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110833094.0A CN113850118A (en) 2021-07-22 2021-07-22 Video processing function verification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113850118A true CN113850118A (en) 2021-12-28

Family

ID=78975159


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117311650A (en) * 2022-06-23 2023-12-29 格兰菲智能科技有限公司 Display module verification method, system and device


Similar Documents

Publication Publication Date Title
CN111553267B (en) Image processing method, image processing model training method and device
CN112954450B (en) Video processing method and device, electronic equipment and storage medium
CN109376256B (en) Image searching method and device
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
CN112381104A (en) Image identification method and device, computer equipment and storage medium
WO2022166258A1 (en) Behavior recognition method and apparatus, terminal device, and computer-readable storage medium
CN111124888A (en) Method and device for generating recording script and electronic device
CN111860377A (en) Live broadcast method and device based on artificial intelligence, electronic equipment and storage medium
CN112257729B (en) Image recognition method, device, equipment and storage medium
CN113705462A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN111435367A (en) Knowledge graph construction method, system, equipment and storage medium
CN113378958A (en) Automatic labeling method, device, equipment, storage medium and computer program product
CN105488470A (en) Method and apparatus for determining character attribute information
CN113850118A (en) Video processing function verification method and device, electronic equipment and storage medium
CN112883827B (en) Method and device for identifying specified target in image, electronic equipment and storage medium
CN113888500A (en) Dazzling degree detection method, device, equipment and medium based on face image
CN113705666B (en) Split network training method, use method, device, equipment and storage medium
CN113705559B (en) Character recognition method and device based on artificial intelligence and electronic equipment
CN113011254B (en) Video data processing method, computer equipment and readable storage medium
CN112487943B (en) Key frame de-duplication method and device and electronic equipment
CN113192171A (en) Three-dimensional effect graph efficient rendering method and system based on cloud rendering
CN113255456A (en) Non-active living body detection method, device, electronic equipment and storage medium
CN114708592B (en) Seal security level judging method, device, equipment and computer readable storage medium
CN118014599B (en) Block chain-based data tracing method, system, equipment and medium
CN117274761B (en) Image generation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination