CN112395189A - Method, device and equipment for automatically identifying test video and storage medium - Google Patents

Method, device and equipment for automatically identifying test video and storage medium

Info

Publication number
CN112395189A
CN112395189A
Authority
CN
China
Prior art keywords
video
frame
analyzed
file
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011279936.4A
Other languages
Chinese (zh)
Inventor
路遥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kangjian Information Technology Shenzhen Co Ltd
Original Assignee
Kangjian Information Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kangjian Information Technology Shenzhen Co Ltd filed Critical Kangjian Information Technology Shenzhen Co Ltd
Priority to CN202011279936.4A priority Critical patent/CN112395189A/en
Publication of CN112395189A publication Critical patent/CN112395189A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/49Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/24Arrangements for testing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention relates to the field of research and development management, and discloses a method, a device, equipment and a storage medium for automatically identifying a test video. The method comprises the following steps: acquiring a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result; if the analysis fails, generating error log information and an early warning signal, saving the current analysis environment, and sending the early warning signal to the user; if the analysis succeeds, generating a hypertext markup language file of the video to be analyzed; parsing the hypertext markup language file into test scene segments; and calculating the time length of each test scene segment and saving it as an analysis result in a database. The method automatically identifies the specific response time of each operation step in the video through a program algorithm, improving the working efficiency of testing personnel and the accuracy of testing. The invention also relates to blockchain technology: the video files can be stored in a blockchain.

Description

Method, device and equipment for automatically identifying test video and storage medium
Technical Field
The invention relates to the field of research and development management, in particular to a method, a device, equipment and a storage medium for automatically identifying a test video.
Background
At the present stage, when an App is tested on a mobile phone, the loading time of an App function page cannot be accurately determined from the test video alone: the test video can only be played back in video software, and the loading time between scenes must be identified manually, which is a tedious process with low accuracy. Alternatively, logging of App loading times can be added inside the App, and after each test the logs are downloaded from the phone to a computer for manual inspection; however, the logs printed by the App do not reflect the end user's actual perception on the phone. Both methods are time-consuming and labor-intensive, their accuracy is low, and the operation steps are complex.
Disclosure of Invention
The invention mainly aims to solve the technical problems of the complicated workflow and low accuracy of current mobile phone App testing. A first aspect of the invention provides a method for automatically identifying a test video, comprising the following steps:
acquiring a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result; if the analysis result is analysis failure, generating error log information and an early warning signal, storing the error log information and the current analysis environment, and sending the early warning signal to the user;
if the analysis result is that the analysis is successful, generating a hypertext markup language file of the video to be analyzed;
analyzing the hypertext markup language file, and analyzing the hypertext markup language file into at least one test scene segment;
and calculating the time length of the test scene segment, and storing the test scene segment and the corresponding time length as analysis results in a database.
Optionally, in a first implementation manner of the first aspect of the present invention, the obtaining a video file uploaded by a user, determining a video to be analyzed from the video file, and performing frame-by-frame analysis on the video to be analyzed to obtain an analysis result includes:
acquiring a video file uploaded by a user, and determining a video to be analyzed from the video file;
inputting the video to be analyzed into a preset cutter to obtain each frame of image of the video to be analyzed;
inputting each frame of image of the video to be analyzed into a preset classifier, and classifying the frame of image to obtain a frame image group of the video to be analyzed.
Optionally, in a second implementation manner of the first aspect of the present invention, the inputting each frame of image of the video to be analyzed into a preset classifier, and classifying the frame of image to obtain a group of frame images of the video to be analyzed includes:
calculating the similarity between each frame image of the video to be analyzed and its adjacent frame image, to obtain a similarity value between each frame image of the video to be analyzed and the adjacent frame image;
determining adjacent frame images whose similarity values fall within a preset threshold as at least one group of stable stages;
determining the frame images between stable stages as switching stages.
Optionally, in a third implementation manner of the first aspect of the present invention, the calculating the similarity between each frame of image of the video to be analyzed and an adjacent frame of image to obtain the similarity between each frame of image of the video to be analyzed and the adjacent frame of image includes:
applying a Gaussian blur to the two adjacent frame images;
dividing each of the two frame images into M×N sub-images, where M and N are both natural numbers greater than 0;
performing a pixel-level structural similarity calculation on the corresponding M×N sub-images of the two frame images using a structural similarity (SSIM) algorithm, to obtain M×N structural similarity values;
and calculating the average of the M×N structural similarity values, and taking the average as the similarity value between each frame image of the video to be analyzed and its adjacent frame image.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the analyzing the hypertext markup language file and parsing it into at least one test scene segment includes:
marking the frame images of each stable stage and each switching stage in the hypertext markup language file with corresponding labels, wherein the labels are used to distinguish the different stable stages or switching stages when there is more than one of them;
and outputting the frame images bearing the same label as one test scene segment.
Optionally, in a fifth implementation manner of the first aspect of the present invention, the calculating a time length of the test scene segment, and saving the test scene segment and the corresponding time length as an analysis result in a database includes:
acquiring a frame rate of the current parsing environment;
calculating the time length of the test scene segment according to the frame rate and the number of frame images of the stable stage corresponding to the test scene segment;
and storing the test scene segment and the corresponding time length as analysis results in a database.
Optionally, in a sixth implementation manner of the first aspect of the present invention, after the generating a hypertext markup language file of the video to be parsed, the method further includes:
and storing the hypertext markup language file into a preset distributed file system.
The second aspect of the present invention provides an automatic test video identification apparatus, including:
the acquisition module is used for acquiring a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result; the early warning module is used for generating error log information and an early warning signal when the analysis result is analysis failure, storing the error log information and the current analysis environment and sending the early warning signal to the user;
the file generation module is used for generating a hypertext markup language file of the video to be analyzed when the analysis result is that the analysis is successful;
the analysis module is used for analyzing the hypertext markup language file and analyzing the hypertext markup language file into at least one test scene segment;
and the calculation module is used for calculating the time length of the test scene segment and storing the test scene segment and the corresponding time length as analysis results in a database.
Optionally, in a first implementation manner of the second aspect of the present invention, the obtaining module includes:
the video acquisition unit is used for acquiring a video file uploaded by a user and determining a video to be analyzed from the video file;
the cutting unit is used for inputting the video to be analyzed into a preset cutter to obtain each frame of image of the video to be analyzed;
and the classification unit is used for inputting each frame of image of the video to be analyzed into a preset classifier, classifying the frame of image and obtaining a frame image group of the video to be analyzed.
Optionally, in a second implementation manner of the second aspect of the present invention, the classifying unit includes:
a similarity calculation subunit, used for calculating the similarity between each frame image of the video to be analyzed and its adjacent frame image, to obtain a similarity value between each frame image of the video to be analyzed and the adjacent frame image;
a stabilization subunit, configured to determine adjacent frame images whose similarity values fall within a preset threshold as at least one group of stable stages;
a switching subunit, configured to determine the frame images between stable stages as switching stages.
Optionally, in a third implementation manner of the second aspect of the present invention, the similarity calculation subunit is specifically configured to:
applying a Gaussian blur to the two adjacent frame images;
dividing each of the two frame images into M×N sub-images, where M and N are both natural numbers greater than 0;
performing a pixel-level structural similarity calculation on the corresponding M×N sub-images of the two frame images using a structural similarity (SSIM) algorithm, to obtain M×N structural similarity values;
and calculating the average of the M×N structural similarity values, and taking the average as the similarity value between each frame image of the video to be analyzed and its adjacent frame image.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the analysis module is configured to:
marking the frame images of each stable stage and each switching stage in the hypertext markup language file with corresponding labels, wherein the labels are used to distinguish the different stable stages or switching stages when there is more than one of them;
and outputting the frame images bearing the same label as one test scene segment.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the calculation module is specifically configured to:
acquiring a frame rate of the current parsing environment;
calculating the time length of the test scene segment according to the frame rate and the number of frame images of the stable stage corresponding to the test scene segment;
and storing the test scene segment and the corresponding time length as analysis results in a database.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the device for automatically identifying a test video further includes a file saving module, where the file saving module is specifically configured to:
and storing the hypertext markup language file into a preset distributed file system.
The third aspect of the present invention provides a test video automatic identification device, including: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line; the at least one processor invokes the instructions in the memory to cause the test video automatic identification device to perform the test video automatic identification method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to execute the above-mentioned test video automatic identification method.
According to the technical scheme, a video file uploaded by a user is obtained, a video to be analyzed is determined from the video file, and the video to be analyzed is analyzed frame by frame to obtain an analysis result; if the analysis fails, error log information and an early warning signal are generated, the error log information and the current analysis environment are saved, and the early warning signal is sent to the user; if the analysis succeeds, a hypertext markup language file of the video to be analyzed is generated; the hypertext markup language file is parsed into at least one test scene segment; and the time length of each test scene segment is calculated, and the test scene segments and their corresponding time lengths are saved as analysis results in a database. The method automatically identifies the specific response time of each operation step in the video through a program algorithm, which improves the working efficiency of testers and greatly improves the accuracy of testing: testers can accurately locate scenes that load slowly, and developers can investigate and optimize the most prominent problems in a targeted way, safeguarding the iterative development of the company's new products. End users, in turn, get a smoother experience when using the company's products, which improves the products' reputation.
Drawings
FIG. 1 is a diagram of a first embodiment of an automatic test video identification method according to an embodiment of the present invention;
FIG. 2 is a diagram of a second embodiment of a method for automatic identification of a test video according to an embodiment of the present invention;
FIG. 3 is a diagram of a third embodiment of an automatic test video identification method according to an embodiment of the present invention;
FIG. 4 is a diagram of a fourth embodiment of the method for automatic identification of test videos according to the embodiment of the present invention;
FIG. 5 is a diagram of a fifth embodiment of an automatic test video identification method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of an automatic test video identification device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another embodiment of an automatic test video identification device according to an embodiment of the present invention;
fig. 8 is a schematic diagram of an embodiment of a test video automatic identification device in an embodiment of the present invention.
Detailed Description
According to the technical scheme, a video file uploaded by a user is obtained, a video to be analyzed is determined from the video file, and the video to be analyzed is analyzed frame by frame to obtain an analysis result; if the analysis fails, error log information and an early warning signal are generated, the error log information and the current analysis environment are saved, and the early warning signal is sent to the user; if the analysis succeeds, a hypertext markup language file of the video to be analyzed is generated; the hypertext markup language file is parsed into at least one test scene segment; and the time length of each test scene segment is calculated, and the test scene segments and their corresponding time lengths are saved as analysis results in a database. The method automatically identifies the specific response time of each operation step in the video through a program algorithm, which improves the working efficiency of testers and greatly improves the accuracy of testing: testers can accurately locate scenes that load slowly, and developers can investigate and optimize the most prominent problems in a targeted way, safeguarding the iterative development of the company's new products. End users, in turn, get a smoother experience when using the company's products, which improves the products' reputation.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For understanding, the following describes a specific flow of an embodiment of the present invention, and with reference to fig. 1, a first embodiment of a method for automatically identifying a test video according to an embodiment of the present invention includes:
101. acquiring a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result;
It is to be understood that the execution subject of the present invention may be a test video automatic identification device, or alternatively a terminal or a server, which is not limited herein. The embodiments of the present invention are described taking a server as the execution subject.
It is emphasized that, in order to ensure privacy and security of the user, the video file uploaded by the user may be stored in a node of a blockchain.
In this embodiment, the video file uploaded by the user may be a recording shot by a test video device, or a recording captured with the screen-recording function of a mobile phone. The two kinds of video are converted into a single video format to generate the video to be analyzed, which makes it convenient for the server to analyze. While the user uploads files, the server automatically identifies all uploaded video files and determines the video to be analyzed; one way to do this is to add a tag to the video to be analyzed, which the server recognizes before automatically extracting and analyzing the video.
In this embodiment, frame-by-frame analysis mainly uses a slicer and a classifier. The slicer cuts a video into multiple parts according to a certain rule; it is responsible for video segmentation and sampling, acts as a data collector that provides automated data support for other tools (e.g., an AI model), and exposes a friendly interface or other forms of support to the outside (including to the classifier). The classifier loads a set of classified pictures (and, in the case of an AI classifier, may learn from them) and classifies frames according to the loaded pictures. For example, after loading the frames corresponding to the stable stages in the example above, the classifier can classify the video at frame level to obtain the accuracy of each stage. The role of the classifier is thus to perform frame-level, high-accuracy picture classification of the video while making use of the sampling results; it may take different forms (e.g., machine learning models) to achieve different classification effects.
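As an illustrative sketch only (not the patent's actual implementation), the slicer's frame-collection loop can be modelled around a decoder callback. The `read_frame` callback and the function name are assumptions; in practice it would stand in for a real decoder such as OpenCV's `cv2.VideoCapture.read`, which returns an `(ok, frame)` pair and reports `ok = False` at end of stream:

```python
def slice_video(read_frame):
    """Pull frames from a decoder callback until the stream is exhausted,
    returning one image per frame of the video to be analyzed."""
    frames = []
    while True:
        ok, frame = read_frame()
        if not ok:  # decoder signals end of stream
            break
        frames.append(frame)
    return frames
```

The returned frame list is what the classifier then consumes for frame-level classification.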
102. If the analysis result is analysis failure, generating error log information and an early warning signal, storing the error log information and the current analysis environment, and sending the early warning signal to the user;
In this embodiment, when the cutter or the classifier encounters an error while analyzing the video to be analyzed, error log information is maintained; its content includes the location in the video where the error occurred, the time of the error, and so on. The current analysis environment refers to the operating environment of the software, which for a PC includes the operating system (e.g., Windows XP or Linux) and the peripheral software required for the software to run, as well as application-layer software other than the target software, which tends to be very influential when software interaction is involved. After the early warning signal is generated, it is sent to the user who uploaded the video, which facilitates subsequent troubleshooting.
103. If the analysis result is that the analysis is successful, generating a hypertext markup language file of the video to be analyzed;
In this embodiment, an HTML (HyperText Markup Language) file of the video to be analyzed is generated. An HTML file conveys information to a web page and can be read by many web browsers: a browser interprets the HTML file to display the page, and HTML allows images and objects to be embedded. Each frame image produced by the cutter and the classifier is therefore combined into an HTML file, and a user can later interpret this HTML file with a web browser in an application project to obtain and analyze each frame image of the video to be analyzed.
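A minimal sketch of how such an HTML file might be assembled from the extracted frame images; the function name, page layout, and label strings are illustrative assumptions, not the patent's concrete format:

```python
import html

def frames_to_html(frame_image_paths, stage_labels):
    """Combine extracted frame images into a single HTML page: each frame
    is embedded as an <img> tag annotated with its stage label, so the
    result can be opened and inspected in any web browser."""
    body = []
    for path, label in zip(frame_image_paths, stage_labels):
        body.append('<div><img src="%s" alt="frame"><span>%s</span></div>'
                    % (html.escape(path, quote=True), html.escape(label)))
    return "<!DOCTYPE html>\n<html><body>\n%s\n</body></html>" % "\n".join(body)
```

Escaping the paths and labels keeps the generated markup well-formed even if file names contain HTML-special characters.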
104. Analyzing the hypertext markup language file, and analyzing the hypertext markup language file into at least one test scene segment;
In this embodiment, the slicer cuts the video to be analyzed into a number of frame images equal to the number of frames of the video, and these frame images are classified. For example, when testing an App: the period before the App icon is clicked is defined as stable stage 1; the transition from clicking the icon to entering the App start page is switching stage 1; the display of the App start page is stable stage 2; the transition from the start page to the App application interface is switching stage 2; and the App application interface itself is stable stage 3. The complete process can serve as one test scene, whose purpose is to test the phone completing the App start-up. Alternatively, the different stages above can be combined into different test scenes; for example, switching stage 2 together with stable stage 3 forms a test scene segment whose purpose is to test the time from the start page to the App interface. Different test purposes are realized by configuring different test scene segments.
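Composing a configured test scene segment from labelled stages can be sketched as follows; the data layout and label names (e.g. `'switching-2'`) are assumptions made for illustration:

```python
def build_scene_segment(stages, wanted_labels):
    """Assemble one configured test scene segment from labelled stages.

    `stages` is a list of (label, frame_list) pairs in playback order;
    `wanted_labels` names the stages making up the scene, e.g.
    {'switching-2', 'stable-3'} for the 'start page -> App interface'
    scenario described above."""
    segment = []
    for label, frames in stages:
        if label in wanted_labels:
            segment.extend(frames)
    return segment
```

Different test purposes then correspond simply to different `wanted_labels` configurations over the same stage list.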
105. And calculating the time length of the test scene segment, and storing the test scene segment and the corresponding time length as analysis results in a database.
In this embodiment, different test scene segments are configured, the frame images of each test scene segment and their number are obtained, and the total time length of each segment is calculated from the current FPS (frames per second). The total time length, accurate to 0.01 second, is saved in a database as an analysis result, and the result is displayed on a page, so a tester can directly see the execution time of each test scene. The tester can select an important test scene and click the "add" button, and the test data is then automatically saved into the App performance baseline database.
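The duration calculation above reduces to the segment's frame count divided by the FPS, rounded to 0.01 second; a minimal sketch (function name assumed):

```python
def segment_duration(frame_count, fps):
    """Time length of a test scene segment: number of frame images in the
    segment divided by the frames-per-second of the current environment,
    accurate to 0.01 second."""
    return round(frame_count / fps, 2)
```

For example, a 90-frame segment in a 60 FPS recording corresponds to 1.5 seconds.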
In this embodiment, a video to be analyzed uploaded by a user is acquired and analyzed frame by frame to obtain an analysis result; if the analysis fails, error log information and an early warning signal are generated, the error log information and the current analysis environment are saved, and the early warning signal is sent to the user; if the analysis succeeds, a hypertext markup language file of the video to be analyzed is generated; the hypertext markup language file is parsed into at least one test scene segment; and the time length of each test scene segment is calculated, and the test scene segments and their corresponding time lengths are saved as analysis results in a database. The method automatically identifies the specific response time of each operation step in the video through a program algorithm, which improves the working efficiency of testers and greatly improves the accuracy of testing: testers can accurately locate scenes that load slowly, and developers can investigate and optimize the most prominent problems in a targeted way, safeguarding the iterative development of the company's new products. End users, in turn, get a smoother experience when using the company's products, which improves the products' reputation.
Referring to fig. 2, a second embodiment of the method for automatically identifying a test video according to the embodiment of the present invention includes:
201. acquiring a video file uploaded by a user, and determining a video to be analyzed from the video file;
202. inputting the video to be analyzed into a preset cutter to obtain each frame of image of the video to be analyzed;
203. inputting each frame of image of a video to be analyzed into a preset classifier, and classifying the frame of image to obtain a frame image group of the video to be analyzed;
In this embodiment, the cutter serves as a pre-processing step, reducing the operating cost and repeated work of the other modules. Once the stable intervals are obtained, it is known how many stable stages the video contains, and the frames corresponding to each stable stage can be extracted. For example, if 3 stable stages in total are collected across the stages of the video to be analyzed, the frame images corresponding to each stage are labelled separately and named 0, 1, and 2.
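Grouping frames into stable and switching stages from adjacent-frame similarity values can be sketched as follows; this is an illustrative reading of the embodiment, and the threshold value and function name are assumptions:

```python
def group_stages(similarities, threshold=0.95):
    """Split a sequence of adjacent-frame similarity values into runs of
    'stable' (similarity at or above the threshold) and 'switching'
    (below it), returning (label, run_length) pairs in order.
    similarities[i] is the similarity between frame i and frame i+1."""
    stages = []
    for s in similarities:
        label = 'stable' if s >= threshold else 'switching'
        if stages and stages[-1][0] == label:
            stages[-1] = (label, stages[-1][1] + 1)  # extend current run
        else:
            stages.append((label, 1))                # start a new run
    return stages
```

The stable runs in the output correspond to the numbered stable stages (0, 1, 2, ...) in the example above.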
In practical applications, two classifiers can be used to process the cutting result: an SSIM classifier and an SVM + HoG classifier. The SSIM classifier requires no training and is more lightweight, and is mostly used for simpler videos with fewer stages. The SVM + HoG classifier performs better on videos with complicated stages and can be trained on different videos to gradually improve its recognition effect, so that it becomes good enough for use in the production environment.
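A rough sketch of the SVM + HoG idea, under stated assumptions: the feature extractor below is a crude gradient-orientation histogram standing in for real HoG (a production setup would use something like `skimage.feature.hog` with proper block normalization), and all function names are illustrative, not the patent's code:

```python
import numpy as np
from sklearn.svm import LinearSVC

def hog_like_features(frame, cells=4):
    """Crude HoG stand-in: per-cell, gradient-magnitude-weighted histograms
    of unsigned gradient orientation, concatenated and L2-normalized."""
    gy, gx = np.gradient(frame.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # unsigned orientation in [0, pi)
    h, w = frame.shape
    ch, cw = h // cells, w // cells
    feats = []
    for i in range(cells):
        for j in range(cells):
            m = mag[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            a = ang[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            hist, _ = np.histogram(a, bins=9, range=(0, np.pi), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def train_stage_classifier(frames, stage_labels):
    """Fit a linear SVM on HoG-style features of labelled stage frames."""
    X = np.stack([hog_like_features(f) for f in frames])
    clf = LinearSVC()
    clf.fit(X, stage_labels)
    return clf
```

Unlike the SSIM classifier, this variant needs labelled training frames, which matches the text's point that it can be trained on different videos to gradually improve recognition.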
204. If the analysis process fails, generating error log information and an early warning signal, storing the error log information and the current analysis environment, and sending the early warning signal to a user;
205. if the analysis process is successful, generating a hypertext markup language file of the video to be analyzed;
206. analyzing the hypertext markup language file, and analyzing the hypertext markup language file into at least one test scene segment;
207. and calculating the time length of the test scene segment, and storing the test scene segment and the corresponding time length as analysis results into a database.
On the basis of the previous embodiment, this embodiment describes in detail the process of acquiring the video to be analyzed uploaded by the user and analyzing it frame by frame: the video to be analyzed uploaded by the user is acquired; the video to be analyzed is input into a preset cutter to obtain each frame of image; and each frame of image is input into a preset classifier and classified to obtain the frame image groups of the video to be analyzed. The method can automatically split the uploaded video into frames and classify each frame, so that the specific response time of each step of operation in the video is identified automatically, which improves the working efficiency of testers and greatly improves the accuracy of the test.
Referring to fig. 3, a third embodiment of the method for automatically identifying a test video according to the embodiment of the present invention includes:
301. acquiring a video file uploaded by a user, and determining a video to be analyzed from the video file;
302. inputting the video to be analyzed into a preset cutter to obtain each frame of image of the video to be analyzed;
303. carrying out Gaussian blur transformation on two adjacent frame images;
304. respectively dividing the two frame images into M × N small images;
305. performing pixel-level structural similarity calculation on the M × N small images of the two frames by using a structural similarity algorithm, thereby obtaining M × N structural similarity values;
306. calculating the average of the M × N structural similarity values, and taking the average as the similarity value between each frame image of the video to be analyzed and its adjacent frame image;
in this embodiment, before the similarity calculation, image noise reduction is performed on the two frames to be compared through Gaussian blur transformation, so as to increase the robustness of the image comparison: the overall structure of the image is highlighted while local pixel noise is filtered out. Specifically, a 3 × 3 Gaussian blur kernel is selected, and each pixel is replaced by the weighted average of its surrounding pixels, with the weights given by the Gaussian function.
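As a minimal illustrative sketch (not code from the patent), the 3 × 3 weighted-average blur described above can be written in plain Python using the common 1/16 discretized Gaussian kernel; edge pixels are renormalized over the kernel weights that fall inside the image:

```python
# 3x3 Gaussian blur on a grayscale image stored as a list of lists.
# Kernel weights follow the common 1/16 * [[1,2,1],[2,4,2],[1,2,1]]
# discretized Gaussian.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def gaussian_blur_3x3(img):
    """Return a blurred copy; each output pixel is the weighted average
    of its 3x3 neighborhood, renormalized at the image borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        k = KERNEL[dy + 1][dx + 1]
                        acc += k * img[ny][nx]
                        wsum += k
            out[y][x] = acc / wsum
    return out
```

In production one would normally use an optimized library routine (e.g. an image-processing library's Gaussian blur) rather than this nested-loop form; the sketch only makes the weighted-average definition concrete.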
In this embodiment, SSIM (Structural Similarity) is an index for measuring the similarity between two images. After the structural similarity value corresponding to each small image of the two frames is calculated, the structural similarity values are averaged to obtain the similarity value between the two frames.
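The standard SSIM index the paragraph refers to is SSIM(a, b) = ((2 μaμb + C1)(2 σab + C2)) / ((μa² + μb² + C1)(σa² + σb² + C2)). An illustrative sketch (not the patent's implementation) for two equal-size grayscale blocks, using the conventional constants for 8-bit images (L = 255, K1 = 0.01, K2 = 0.03):

```python
# SSIM of two equal-size grayscale blocks given as flat pixel lists.
def ssim(a, b):
    n = len(a)
    C1, C2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2  # stabilizing constants
    mu_a = sum(a) / n
    mu_b = sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / (
        (mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))
```

Identical blocks score 1.0, and very different blocks score near 0, which is why averaging per-block SSIM over the M × N grid gives a usable frame-to-frame similarity value.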
307. Determining adjacent frame images with similarity values within a preset threshold value as at least one group of stable stages;
308. determining frame images between the stable phases as switching phases;
in this embodiment, an SSIM classifier is mainly used to perform the similarity calculation between adjacent frame images of the video to be analyzed: the SSIM algorithm segments two adjacent frame images into small images, compares them one-to-one at the pixel level, and computes the similarity.
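The thresholding of steps 307 and 308 can be sketched as follows; this is an illustrative sketch under assumed conventions (the function names `label_phases` and `stable_runs` and the threshold value are not from the patent):

```python
# Group per-frame similarity values into stable and switching phases.
# similarities[i] compares frame i with frame i + 1; a value above the
# threshold means the two frames belong to the same stable phase.
def label_phases(similarities, threshold=0.95):
    return ['stable' if s >= threshold else 'switching'
            for s in similarities]

def stable_runs(labels):
    """Collapse consecutive 'stable' boundaries into (start, end) runs,
    one run per stable phase."""
    runs, start = [], None
    for i, lab in enumerate(labels):
        if lab == 'stable' and start is None:
            start = i
        elif lab != 'stable' and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(labels) - 1))
    return runs
```

Each run returned by `stable_runs` corresponds to one stable stage; the boundaries labeled `'switching'` between runs are the switching stages.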
309. If the analysis process fails, generating error log information and an early warning signal, storing the error log information and the current analysis environment, and sending the early warning signal to a user;
310. if the analysis process is successful, generating a hypertext markup language file of the video to be analyzed;
311. analyzing the hypertext markup language file, and analyzing a stable stage and a switching stage in the hypertext markup language file into at least one test scene segment;
312. and calculating the time length of the test scene segment, and storing the test scene segment and the corresponding time length as analysis results into a database.
This embodiment describes in detail the process of inputting each frame of image of the video to be analyzed into a preset classifier and classifying it to obtain the frame image groups: the similarity between each frame of image and its adjacent frame is calculated; adjacent frame images whose similarity values are within a preset threshold are determined to be at least one group of stable stages; and the frame images between the stable stages are determined to be switching stages. The similarity calculation itself is further described: Gaussian blur transformation is applied to two adjacent frame images; the two frame images are each divided into M × N small images, where M and N are natural numbers greater than 0; a structural similarity algorithm performs pixel-level structural similarity calculation on the M × N small images of the two frames, yielding M × N structural similarity values; and the average of these M × N values is taken as the similarity value between the frame and its adjacent frame. On this basis each frame of image can be classified, the specific response time of each step of operation in the video can be identified automatically, the working efficiency of testers is improved, and the accuracy of the test is greatly improved.
Referring to fig. 4, a fourth embodiment of the method for automatically identifying a test video according to the embodiment of the present invention includes:
401. acquiring a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result;
402. if the analysis result is analysis failure, generating error log information and an early warning signal, storing the error log information and the current analysis environment, and sending the early warning signal to a user;
403. if the analysis result is that the analysis is successful, generating a hypertext markup language file of the video to be analyzed;
404. respectively marking corresponding labels on the frame image in the stable stage and the frame image in the switching stage in the hypertext markup language file;
405. outputting the frame image of the same label as a test scene segment;
in this embodiment, in the process of testing an APP, the period before the APP icon is clicked is defined as stable phase 1, the transition from clicking the APP icon to entering the APP start page is switching phase 1, the display of the APP start page is stable phase 2, the transition from the start page to the APP application interface is switching phase 2, and the APP application interface is stable phase 3. In the process of parsing the HTML file into at least one test scene segment, switching phase 1 together with stable phase 2 is determined to be test scene segment 1, and switching phase 2 together with stable phase 3 is determined to be test scene segment 2; corresponding tags are marked on the frame images of the different test scene segments, and this distinction is convenient for subsequent analysis.
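Under the pairing described above (each switching phase plus the stable phase it leads into forms one tagged segment), the grouping can be sketched as follows; the data structure and function name are illustrative assumptions, not the patent's code:

```python
# phases: ordered list of (kind, frames) tuples with kind in
# {'stable', 'switching'}. Each switching phase plus the immediately
# following stable phase becomes one tagged test scene segment.
def build_segments(phases):
    segments, i, tag = [], 0, 1
    while i < len(phases):
        kind, frames = phases[i]
        if kind == 'switching':
            seg = list(frames)
            # absorb the stable phase that the transition settles into
            if i + 1 < len(phases) and phases[i + 1][0] == 'stable':
                seg += phases[i + 1][1]
                i += 1
            segments.append((tag, seg))
            tag += 1
        i += 1
    return segments
```

For the APP example, the click-to-start-page transition plus the start-page display would come out as segment 1, and the start-page-to-interface transition plus the interface display as segment 2.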
406. And calculating the time length of the test scene segment, and storing the test scene segment and the corresponding time length as analysis results into a database.
On the basis of the previous embodiment, this embodiment describes in detail the process of analyzing the hypertext markup language file and parsing it into at least one test scene segment: corresponding labels are marked on the frame images of the stable stages and the frame images of the switching stages in the file, where the labels are used to distinguish different stable stages or switching stages when their number is greater than 1, and the frame images with the same label are output as one test scene segment. An early warning is triggered as soon as the indexes are updated, which improves the timeliness and reliability of the early warning data; the specific response time of each step of operation in the video is identified automatically through a program algorithm, improving the working efficiency of testers and the accuracy of the test.
Referring to fig. 5, a fifth embodiment of the method for automatically identifying a test video according to the embodiment of the present invention includes:
501. acquiring a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result;
502. if the analysis result is analysis failure, generating error log information and an early warning signal, storing the error log information and the current analysis environment, and sending the early warning signal to a user;
503. if the analysis result is that the analysis is successful, generating a hypertext markup language file of the video to be analyzed;
504. storing the hypertext markup language file into a preset distributed file system;
505. analyzing the hypertext markup language file, and analyzing the hypertext markup language file into at least one test scene segment;
506. acquiring a frame rate of a current analysis environment;
507. calculating the time length of the test scene segment according to the frame rate and the number of frame images of the stable stage corresponding to the test scene segment;
508. and storing the test scene segment and the corresponding time length as an analysis result into a database.
In this embodiment, the frame rate refers to the number of frames transmitted per second, that is, how many pictures of the animation or video are shown per second; FPS measures the amount of information used to store and display motion video, and the more frames per second, the smoother the displayed motion. The frame rate of the current analysis environment is determined, and the number of frame images in the test scene segment is divided by the frame rate to obtain the time length of the segment. The analysis result is stored in the database and displayed on a page, so that a tester can visually see the execution time of each test scene, select an important test scene, and click the 'add' button to automatically store the test data into the App performance baseline database.
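The duration arithmetic described above is simply frame count divided by frame rate; a minimal sketch (function name assumed for illustration):

```python
# Duration of a test scene segment, in seconds, from its frame count
# and the frame rate (FPS) of the analysis environment.
def segment_duration(frame_count, fps):
    if fps <= 0:
        raise ValueError("frame rate must be positive")
    return frame_count / fps
```

For example, a segment spanning 90 frames in a 30 FPS environment took 3 seconds, which would be the response time recorded for that test step.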
On the basis of the previous embodiment, this embodiment describes in detail the process of calculating the time length of the test scene segment and storing the test scene segment and the corresponding time length as the analysis result in the database: the frame rate of the current analysis environment is acquired; the time length of the test scene segment is calculated according to the frame rate and the number of frame images in the stable stage corresponding to the segment; and the test scene segment and the corresponding time length are stored in the database as the analysis result. The method can count the number of frames of each step of operation of the tested app in the test video and calculate the specific response time of each step, thereby improving the working efficiency of testers and greatly improving the accuracy of the test. Testers can accurately locate scenes that load slowly, and research and development personnel can investigate and optimize the prominent problems in a targeted manner, safeguarding the research and development iteration of the company's new products. End users also get a smoother experience when using the company's products, which improves the products' reputation.
The method for automatically identifying a test video according to the embodiment of the present invention is described above. Referring to fig. 6, an embodiment of the test video automatic identification apparatus according to the embodiment of the present invention includes:
an obtaining module 601, configured to obtain a video file uploaded by a user, determine a video to be analyzed from the video file, and perform frame-by-frame analysis on the video to be analyzed to obtain an analysis result;
the early warning module 602 is configured to generate error log information and an early warning signal when the analysis result is that analysis fails, store the error log information and a current analysis environment, and send the early warning signal to the user;
the file generating module 603 is configured to generate a hypertext markup language file of the video to be analyzed when the analysis result is that the analysis is successful;
an analysis module 604, configured to analyze the html file and parse the html file into at least one test scenario segment;
the calculating module 605 is configured to calculate a time length of the test scenario segment, and store the test scenario segment and the corresponding time length as an analysis result in a database.
In the embodiment of the present invention, the test video automatic identification apparatus runs the test video automatic identification method, which includes: acquiring a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result; if the analysis fails, generating error log information and an early warning signal, storing the error log information and the current analysis environment, and sending the early warning signal to the user; if the analysis is successful, generating a hypertext markup language file of the video to be analyzed; analyzing the hypertext markup language file and parsing it into at least one test scene segment; and calculating the time length of the test scene segment, and storing the test scene segment and the corresponding time length as the analysis result in a database. The method automatically identifies the specific response time of each step of operation in the video through a program algorithm, improves the working efficiency of testers, and greatly improves the accuracy of the test; testers can accurately locate scenes that load slowly, and research and development personnel can investigate and optimize prominent problems in a targeted manner, safeguarding the research and development iteration of the company's new products. End users also get a smoother experience when using the company's products, which improves the products' reputation.
Referring to fig. 7, a second embodiment of the test video automatic identification device according to the embodiment of the present invention includes:
an obtaining module 601, configured to obtain a video file uploaded by a user, determine a video to be analyzed from the video file, and perform frame-by-frame analysis on the video to be analyzed to obtain an analysis result;
the early warning module 602 is configured to generate error log information and an early warning signal when the analysis result is that analysis fails, store the error log information and a current analysis environment, and send the early warning signal to the user;
the file generating module 603 is configured to generate a hypertext markup language file of the video to be analyzed when the analysis result is that the analysis is successful;
an analysis module 604, configured to analyze the html file and parse the html file into at least one test scenario segment;
the calculating module 605 is configured to calculate a time length of the test scenario segment, and store the test scenario segment and the corresponding time length as an analysis result in a database.
Wherein, the obtaining module 601 includes:
a video obtaining unit 6011, configured to obtain a video file uploaded by a user, and determine a video to be analyzed from the video file;
a cutting unit 6012, configured to input the video to be analyzed into a preset cutter, so as to obtain each frame of image of the video to be analyzed;
a classifying unit 6013, configured to input each frame of image of the video to be analyzed into a preset classifier, and classify the frame of image to obtain a frame of image group of the video to be analyzed.
Wherein the classification unit 6013 includes:
the similarity calculation subunit 60131 is configured to perform similarity calculation on each frame of image of the video to be analyzed to obtain a similarity value between each frame of image of the video to be analyzed and an adjacent frame of image;
a stabilizing subunit 60132, configured to determine, as at least one set of stabilization phases, adjacent frame images with similarity values within a preset threshold;
a switching subunit 60133, configured to determine the frame image between the stable phases as a switching phase.
Optionally, the similarity operator unit 60131 is specifically configured to:
carrying out Gaussian blur transformation on two adjacent frame images;
respectively dividing the two frame images into M × N small images, wherein M and N are natural numbers greater than 0;
performing pixel-level structural similarity calculation on the M × N small images of the two frames by using a structural similarity algorithm, thereby obtaining M × N structural similarity values;
and calculating the average of the M × N structural similarity values, and taking the average as the similarity value between each frame image of the video to be analyzed and its adjacent frame image.
Optionally, the analysis module 604 is configured to:
respectively marking corresponding labels on frame images of a stable stage and frame images of a switching stage in the hypertext markup language file, wherein the labels are used for distinguishing different stable stages or switching stages when the number of the stable stages or the switching stages is more than 1;
and outputting the frame image of the same label as a test scene segment.
Optionally, the calculating module 605 is specifically configured to:
acquiring a frame rate of the current parsing environment;
calculating the time length of the test scene segment according to the frame rate and the number of frame images of the stable stage corresponding to the test scene segment;
and storing the test scene segment and the corresponding time length as analysis results in a database.
The device for automatically identifying a test video further comprises a file saving module 606, wherein the file saving module 606 is specifically configured to:
and storing the hypertext markup language file into a preset distributed file system.
On the basis of the previous embodiment, the specific functions of each module and the unit composition of some modules are described in detail. Through these modules and units, a program algorithm can automatically identify the specific response time of each step of operation in a video, which improves the working efficiency of testers and greatly improves the accuracy of testing; testers can accurately locate scenes that load slowly, and research and development personnel can investigate and optimize prominent problems in a targeted manner, safeguarding the research and development iteration of the company's new products. End users also get a smoother experience when using the company's products, which improves the products' reputation. In addition, the invention also relates to blockchain technology, and the video files can be stored in a blockchain.
Fig. 6 and fig. 7 describe the test video automatic identification apparatus in the embodiment of the present invention in detail from the perspective of the modular functional entity; the following describes the test video automatic identification device in the embodiment of the present invention in detail from the perspective of hardware processing.
Fig. 8 is a schematic structural diagram of a test video automatic identification device 800 according to an embodiment of the present invention. The device 800 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 810, a memory 820, and one or more storage media 830 (e.g., one or more mass storage devices) storing an application 833 or data 832. The memory 820 and the storage medium 830 may be transient or persistent storage. The program stored in the storage medium 830 may include one or more modules (not shown), and each module may include a series of instruction operations for the device 800. Further, the processor 810 may be configured to communicate with the storage medium 830 and execute the series of instruction operations in the storage medium 830 on the device 800 to implement the steps of the test video automatic identification method.
The test video automatic identification device 800 may also include one or more power supplies 840, one or more wired or wireless network interfaces 850, one or more input/output interfaces 860, and/or one or more operating systems 831, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD. Those skilled in the art will understand that the configuration shown in fig. 8 does not constitute a limitation of the test video automatic identification device provided in the present application, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The present invention also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium, and which may also be a volatile computer-readable storage medium, having stored therein instructions, which, when run on a computer, cause the computer to perform the steps of the test video automatic identification method.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains information on a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses, and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A test video automatic identification method is characterized by comprising the following steps:
acquiring a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result;
if the analysis result is analysis failure, generating error log information and an early warning signal, storing the error log information and the current analysis environment, and sending the early warning signal to the user;
if the analysis result is that the analysis is successful, generating a hypertext markup language file of the video to be analyzed;
analyzing the hypertext markup language file, and analyzing the hypertext markup language file into at least one test scene segment;
and calculating the time length of the test scene segment, and storing the test scene segment and the corresponding time length as analysis results in a database.
2. The method according to claim 1, wherein the obtaining a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result comprises:
acquiring a video file uploaded by a user, and determining a video to be analyzed from the video file;
inputting the video to be analyzed into a preset cutter to obtain each frame of image of the video to be analyzed;
inputting each frame of image of the video to be analyzed into a preset classifier, and classifying the frame of image to obtain a frame image group of the video to be analyzed.
3. The method according to claim 2, wherein the step of inputting each frame of image of the video to be analyzed into a preset classifier, and classifying the frame of image to obtain the group of frame images of the video to be analyzed comprises:
carrying out similarity calculation of adjacent frame images on each frame image of the video to be analyzed to obtain a similarity value of each frame image of the video to be analyzed and the adjacent frame image;
determining adjacent frame images with similarity values within a preset threshold value as at least one group of stable stages;
determining the frame image between the stable phases as a switching phase.
4. The method according to claim 3, wherein the calculating the similarity between each frame of image of the video to be analyzed and the adjacent frame of image to obtain the similarity between each frame of image of the video to be analyzed and the adjacent frame of image comprises:
carrying out Gaussian blur transformation on two adjacent frame images;
respectively dividing the two frame images into M × N small images, wherein M and N are natural numbers greater than 0;
performing pixel-level structural similarity calculation on the M × N small images of the two frames by using a structural similarity algorithm, thereby obtaining M × N structural similarity values;
and calculating the average of the M × N structural similarity values, and taking the average as the similarity value between each frame image of the video to be analyzed and its adjacent frame image.
5. The method according to claim 4, wherein the parsing the HTML file to parse the HTML file into at least one test scenario segment comprises:
respectively marking corresponding labels on frame images of a stable stage and frame images of a switching stage in the hypertext markup language file, wherein the labels are used for distinguishing different stable stages or switching stages when the number of the stable stages or the switching stages is more than 1;
and outputting the frame image of the same label as a test scene segment.
6. The method according to any one of claims 1 to 5, wherein the calculating the time length of the test scene segment and saving the test scene segment and the corresponding time length as the parsing result in a database comprises:
acquiring a frame rate of the current parsing environment;
calculating the time length of the test scene segment according to the frame rate and the number of frame images of the stable stage corresponding to the test scene segment;
and storing the test scene segment and the corresponding time length as analysis results in a database.
7. The method according to claim 6, further comprising, after the generating the hypertext markup language document of the video to be parsed:
and storing the hypertext markup language file into a preset distributed file system.
8. An automatic test video identification device, comprising:
the acquisition module is used for acquiring a video file uploaded by a user, determining a video to be analyzed from the video file, and analyzing the video to be analyzed frame by frame to obtain an analysis result;
the early warning module is used for generating error log information and an early warning signal when the analysis result is analysis failure, storing the error log information and the current analysis environment, and sending the early warning signal to the user;
the file generation module is used for generating a hypertext markup language file of the video to be analyzed when the analysis result is that the analysis is successful;
the analysis module is used for analyzing the hypertext markup language file and analyzing the hypertext markup language file into at least one test scene segment;
and the calculation module is used for calculating the time length of the test scene segment and storing the test scene segment and the corresponding time length as analysis results in a database.
9. A test video automatic identification device, characterized in that the test video automatic identification device comprises: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line;
the at least one processor invokes the instructions in the memory to cause the test video auto-identification device to perform the test video auto-identification method of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a test video automatic identification method according to any one of claims 1 to 7.
CN202011279936.4A 2020-11-16 2020-11-16 Method, device and equipment for automatically identifying test video and storage medium Pending CN112395189A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011279936.4A CN112395189A (en) 2020-11-16 2020-11-16 Method, device and equipment for automatically identifying test video and storage medium

Publications (1)

Publication Number Publication Date
CN112395189A true CN112395189A (en) 2021-02-23

Family

ID=74600409

Country Status (1)

Country Link
CN (1) CN112395189A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642443A (en) * 2021-08-06 2021-11-12 深圳市宏电技术股份有限公司 Model testing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590150A (en) * 2016-07-07 2018-01-16 北京新岸线网络技术有限公司 Video analysis implementation method and device based on key frame
US20180374233A1 (en) * 2017-06-27 2018-12-27 Qualcomm Incorporated Using object re-identification in video surveillance
CN110147711A (en) * 2019-02-27 2019-08-20 腾讯科技(深圳)有限公司 Video scene recognition methods, device, storage medium and electronic device

Similar Documents

Publication Publication Date Title
US10438050B2 (en) Image analysis device, image analysis system, and image analysis method
CN112101335B (en) APP violation monitoring method based on OCR and transfer learning
US8732666B2 (en) Automatic identification of subroutines from test scripts
CN113255614A (en) RPA flow automatic generation method and system based on video analysis
US20110307488A1 (en) Information processing apparatus, information processing method, and program
CN110532056B (en) Control identification method and device applied to user interface
CN113139141A (en) User label extension labeling method, device, equipment and storage medium
CN111488873A (en) Character-level scene character detection method and device based on weak supervised learning
CN114730486B (en) Method and system for generating training data for object detection
CN112434178A (en) Image classification method and device, electronic equipment and storage medium
CN111355628B (en) Model training method, service identification method, device and electronic device
CN115562656A (en) Page generation method and device, storage medium and computer equipment
KR101667199B1 (en) Relative quality index estimation apparatus of the web page using keyword search
US20180173687A1 (en) Automatic datacenter state summarization
CN112395189A (en) Method, device and equipment for automatically identifying test video and storage medium
US11995889B2 (en) Cognitive generation of HTML pages based on video content
EP2599042A1 (en) Systems and methods of rapid business discovery and transformation of business processes
CN111966339B (en) Buried point parameter input method and device, computer equipment and storage medium
CN111581057B (en) General log analysis method, terminal device and storage medium
CN116841846A (en) Real-time log abnormality detection method, device, equipment and storage medium thereof
CN111127057A (en) Multi-dimensional user portrait restoration method
CN115481025A (en) Script recording method and device for automatic test, computer equipment and medium
CN114935918A (en) Performance evaluation method, device and equipment of automatic driving algorithm and storage medium
CN113703637A (en) Inspection task coding method and device, electronic equipment and computer storage medium
CN112668282A (en) Method and system for converting format of equipment procedure document

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination