CN114283356B - Acquisition and analysis system and method for moving image

Acquisition and analysis system and method for moving image

Info

Publication number
CN114283356B
CN114283356B
Authority
CN
China
Prior art keywords
image
scene
target
unit
information
Prior art date
Legal status
Active
Application number
CN202111490973.4A
Other languages
Chinese (zh)
Other versions
CN114283356A
Inventor
曹栋亮
孙伟
Current Assignee
Shanghai Weidi Technology Group Co., Ltd.
Original Assignee
Shanghai Weidi Technology Group Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Weidi Technology Group Co., Ltd.
Priority to CN202111490973.4A
Publication of CN114283356A
Application granted
Publication of CN114283356B

Landscapes

  • Image Analysis (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a moving image acquisition and analysis method comprising the following steps. Step S100: extract a video stream to be processed and perform framing processing on it to obtain a plurality of frame images; respectively capture the different scene information in the plurality of frame images and integrate it. Step S200: divide the video stream to be processed into segments based on the scene integration information. Step S300: perform target scene discrimination on each image frame and select expected images from the plurality of video sequences. Step S400: identify the motion situation of the target scene in part of the expected images and adjust the selection accordingly. Step S500: push the finally selected expected images to the user, who can select among them based on their own intention. To better implement the method, a moving image acquisition and analysis system is also provided. When the picture scene in the video is text or a chart, the recognition of the picture scene can correspondingly become recognition of the text and chart information.

Description

Moving image acquisition and analysis system and method
Technical Field
The invention relates to the technical field of video stream content acquisition and processing, in particular to a system and a method for acquiring and analyzing moving images.
Background
For a video stream, a user who wants to acquire some of the picture scenes appearing in it usually resorts to manual screenshots. However, the moment of a manual screenshot may well coincide with a frame transition, so the expected image cannot be captured accurately; and if a dynamic scene appears in the video stream, acquiring a photo of that dynamic scene with a good viewing effect becomes even more difficult. Meanwhile, a video stream contains many images; if they are acquired only by manual screenshots, the user has to rewind repeatedly to the moment of the desired scene, which lowers image acquisition efficiency and degrades the viewing effect of the images finally obtained.
Disclosure of Invention
The present invention is directed to a system and a method for acquiring and analyzing moving images, so as to solve the problems in the background art.
In order to solve the technical problems, the invention provides the following technical scheme: a moving image acquisition and analysis method comprises the following steps:
step S100: extracting a video stream to be processed, and performing framing processing on the video stream to be processed to obtain a plurality of frame images; respectively capturing different scene information in the plurality of frame images to obtain the respective scene integration information of the plurality of frame images in the video stream to be processed; when the picture scene in the video is text or a chart, the recognition of the picture scene can correspondingly become recognition of the text and chart information;
step S200: segmenting the video stream to be processed into a plurality of video sequences based on the obtained scene integration information of a plurality of frames of images;
step S300: carrying out target scene discrimination on each image frame in a plurality of video sequences to obtain a discrimination result of a target scene; respectively selecting expected images from the plurality of video sequences based on the discrimination result;
step S400: identifying the motion condition of the target scenery for the part of the expected images selected in the step S300, and adjusting the selected expected images based on the result obtained by the motion condition identification;
step S500: pushing the finally selected expected images to the user, wherein the user can select among the expected images based on their own intention.
Further, the step of capturing and integrating the scene information in the plurality of image frames in step S100 includes:
step S101: respectively obtaining the background picture color types of the plurality of frame images, and calculating the distribution area of each background picture color type; setting a color distribution area threshold, and marking for rejection the color type information whose distribution area is smaller than the color distribution area threshold; extracting the outlines of the scenes in the plurality of frame images to obtain n scene outlines; setting a contour line interval threshold, comparing the interval breakpoint distances among the n scene outlines against the contour line interval threshold, and merging the scene outlines whose interval breakpoint distance is less than or equal to the contour line interval threshold into one overall scene outline;
step S102: finding the scene outlines corresponding to the color type information preliminarily marked for rejection in step S101; if such a scene outline has other scene outlines whose interval breakpoint distance from it is less than or equal to the contour line interval threshold, removing the rejection mark; if the color type information marked for rejection has no corresponding scene outline, rejecting that color type information;
step S103: integrating the information from step S101 and step S102 to respectively obtain the scene integration information M of each of the plurality of frame images, the scene integration information M taking the form:

M = {a^(r_i), b^(e_i), c^(z_i), d^(u_i)}

wherein a, b, c and d respectively represent the different scene outlines appearing in order from left to right in an image frame; r_i, e_i, z_i and u_i respectively represent the color category sets in the different scene outlines; a^(r_i) represents the color category set r_i on scene outline a, b^(e_i) represents the color category set e_i on scene outline b, c^(z_i) represents the color category set z_i on scene outline c, and d^(u_i) represents the color category set u_i on scene outline d.
Capturing and integrating the scene information in each image frame as above obtains the different scene information of all image frames in the video stream to be processed and integrates the identical scene information among them; that is, several picture scenes may appear in one video stream, and the images belonging to the same picture scene are obtained by capturing and integrating the scene information of the different pictures.
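For illustration only, the following is a minimal Python sketch of steps S101 to S103 under simplifying assumptions: the contour extraction and the per-color distribution areas are supplied by upstream processing, the names Outline, gap_distance and scene_integration_info are invented for the sketch, and the merge rule is a loose approximation of the breakpoint-distance test described above.

```python
from dataclasses import dataclass, field

@dataclass
class Outline:
    points: list                               # discretized breakpoints of the contour
    colors: set = field(default_factory=set)   # color categories enclosed by the contour

def gap_distance(a, b):
    # Smallest distance between the interval breakpoints of two outlines.
    return min(
        ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        for (xa, ya) in a.points
        for (xb, yb) in b.points
    )

def scene_integration_info(outlines, color_areas, area_thr, gap_thr):
    # S101: mark for rejection the background colors whose area is too small.
    marked = {c for c, area in color_areas.items() if area < area_thr}

    # S101: merge outlines whose interval breakpoint distance <= gap_thr.
    merged = []
    for o in outlines:
        for m in merged:
            if gap_distance(o, m) <= gap_thr:
                m.points += o.points
                m.colors |= o.colors
                break
        else:
            merged.append(o)

    # S102: a marked color that belongs to a merged outline keeps its mark removed;
    # a marked color with no corresponding outline is rejected for good.
    rejected = {c for c in marked if not any(c in m.colors for m in merged)}

    # S103: scene integration information M, one color category set per outline.
    return [m.colors - rejected for m in merged]
```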
Further, the step S200 of performing segment division on the video stream to be processed includes:
step S201: acquiring the frame rate information transmitted by the equipment, setting the equipment to transmit fps frame images every second, and collecting the fps frame images transmitted by the equipment every second;
step S202: collecting the image frames appearing within the timestamp interval 0~t_0 of the video stream to be processed into an image frame set Q_1, t_0 being the first interval of the video stream to be processed; the scene integration information of all image frames in Q_1 is

{M}_Q1 = {M_11, M_12, M_13, ...}

wherein M_11, M_12, M_13, ... respectively represent the scene integration information of the first, second, third, ... image frames in Q_1; collecting the image frames appearing within t_0~t_0-1 into an image frame set Q_2, t_0-1 representing the next interval after t_0; the scene integration information of all image frames in Q_2 is

{M}_Q2 = {M_21, M_22, M_23, ...}

wherein M_21, M_22, M_23, ... respectively represent the scene integration information of the first, second, third, ... image frames in Q_2;
step S203: comparing the scene integration information {M}_Q1 of all image frames in the image frame set Q_1 with the scene integration information {M}_Q2 of all image frames in the image frame set Q_2 to obtain a comparison result P; calculating and setting a comparison threshold; if the comparison result P is smaller than the comparison threshold, taking t_0 as a segment division node;
step S204: if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~t_0-1, and correspondingly merging the scene integration information {M}_Q1 and {M}_Q2 of the image frame set Q_1 and the image frame set Q_2; then collecting the image frames appearing within t_0-1~t_0-2 into an image frame set Q_3, t_0-2 representing the next interval after t_0-1; the scene integration information of all image frames in Q_3 is

{M}_Q3 = {M_31, M_32, M_33, ...}

wherein M_31, M_32, M_33 respectively represent the scene integration information of the first, second and third image frames in Q_3; comparing the scene integration information of the image frame set Q_3 with that of the merged frame set of the image frame set Q_1 and the image frame set Q_2 to obtain a comparison result P; if the comparison result P is smaller than the comparison threshold, taking t_0-1 as a segment division node; if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~t_0-2; repeating the above steps until the entire video stream to be processed has been traversed and divided, obtaining a plurality of video sequences;
the video stream is divided into segments of video segments containing different scenes or scene pictures through the obtained dividing nodes, and another purpose of dividing the video stream is to acquire an expected image containing different scenes or scene pictures when the expected image is acquired in the subsequent step, which is equivalent to providing different acquisition sources for acquisition of the expected image.
Further, step S300 includes:
step S301: connecting the image diagonals of each image frame in each video sequence to obtain two diagonal lines, taking the intersection point of the two image diagonals as a target reference point, and selecting a radius value R according to the height and width of the image frame to form a focus target region S = πR²;
step S302: calculating the different scene outline areas covered within the focus target region of each image frame in each video sequence, and taking the scene outline with the largest covered area within the focus target region of an image frame as the target scene of that image frame; if the total number of target scenes of the image frames in a video sequence is 1, selecting the image frame whose focus target region covers the largest target scene area in that video sequence as the expected image of the video sequence; if the image frame covering the largest target scene area within the focus target region has a definition smaller than the definition threshold, selecting the image frame covering the second-largest target scene area within the focus target region as the expected image of the video sequence, and so on, finally obtaining one expected image per video sequence;
step S303: if the total number of target scenes appearing in the image frames of a video sequence is greater than or equal to 2, extracting the image frames covering each target scene separately in time order; for a target scene A, if the image frames in which the target scene A appears are temporally continuous image frames and the number of the continuous image frames is greater than a set frame number threshold, selecting from the continuous image frames the image frame whose focus target region covers the largest area of the target scene A as an expected image of the video sequence, and so on, finally obtaining at least one expected image per video sequence;
the reason why the most central part of one frame of image is set as the focus target area is that in the present application, the part which is located at the most central part of the image in one image frame is the part which is most interested in or most wanted to be recorded by the user in the process of recording video data by using the device by default, and the part is taken as the target scene; the focus target area is set for the purpose of identifying the target scene; if only one target scenery appears in one video clip, the situation is judged that the target scenery is not changed when the user continuously inputs information into the same target scenery or the field angle moving range is very small when the user inputs information into a certain part of scene by using equipment, and the probability of information superposition among the image frames is high under the situation; if two or more target scenery appear in a section of video segment, the problem that a user inputs information into different target scenery or the target scenery moves within a period of time is judged under the condition, the target scenery has a trend of moving from the edge of an image frame to a focus target area, the target scenery is dynamic, the probability of information superposition among the image frames is low, and the user needs to collect more images for screening the images with dynamic effect which the target scenery wants to capture.
Further, step S400 performs further motion situation recognition on the target scenes whose total number of occurrences in step S303 is greater than or equal to 2:
step S401: correspondingly acquiring the continuous image frames in which the target scene A of step S303 appears; recording one of the continuous image frames as f_n and an adjacent image frame as f_n-1; respectively discretizing the overall contour of the target scene A in the image frame f_n and the image frame f_n-1 to obtain the discrete point set of the image frame f_n and the discrete point set of the image frame f_n-1; in the two discrete point sets, respectively recording the gray values of the corresponding pixel points of the two frames as f_n(x_k, y_k) and f_n-1(x_k, y_k);
step S402: calculating from the gray values of the corresponding pixel points of the two image frames the difference image D(x_k, y_k):

D(x_k, y_k) = |f_n(x_k, y_k) - f_n-1(x_k, y_k)|

wherein f_n(x_k, y_k) represents the gray value of the pixel point corresponding to the k-th discrete point (x_k, y_k) of the target scene A in the image frame f_n; f_n-1(x_k, y_k) represents the gray value of the pixel point corresponding to the k-th discrete point (x_k, y_k) of the target scene A in the image frame f_n-1;
step S403: setting a gray threshold J and obtaining an image D'(x_k, y_k) according to the formula

D'(x_k, y_k) = 1 if D(x_k, y_k) ≥ J, and D'(x_k, y_k) = 0 otherwise;

if the image D'(x_k, y_k) shows a person or an animal, judging that the target scene presents a motion state due to its own motion; if the image D'(x_k, y_k) shows a still object, judging that the target scene presents a motion state due to passive artificial change of the field angle of the equipment;
the above steps are equivalent to the identification of the motion conditions of different target scenes when different target scenes occur in step S303, because two or more target scenes may be the change of the field angle of the device, and there may be motion states of the scenes; and (3) introducing inter-frame difference calculation to obtain an image of a complete moving target, and identifying whether the image is a person, an animal or a still object to obtain a judgment result of the motion condition of the target scenery, namely whether the image is due to self motion or passive artificial change.
Further, step S400 includes: when the motion condition of a certain target scene is recognized as self-motion, all continuous image frames covering the target scene in the target area are taken as expected images;
if the target scene is dynamic and only one expected image were collected, the highlight moment of the target scene at the instant of dynamic change might be missed; these image frames are therefore all used as expected images for the user to choose from, as sketched below.
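The adjustment itself reduces to a one-line rule, sketched here with invented names:

```python
def adjust_expected(continuous_frames, is_self_motion, best_frame):
    # Self-moving target scene: keep every continuous frame covering it
    # in the focus target region; otherwise keep only the best frame.
    return list(continuous_frames) if is_self_motion else [best_frame]
```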
In order to better implement the method, a moving image acquisition and analysis system is also provided, the system comprising: a moving image acquisition processing module, an image scene information capturing module, a segment dividing module, a target scene distinguishing module, an expected image selecting module, a target scene motion condition identifying module and an expected image selecting and adjusting module;
the mobile image acquisition processing module is used for inputting a video stream to be processed and performing framing processing on the video stream to be processed to obtain a plurality of frame images;
the image scene information capturing module is used for receiving the data in the moving image acquisition and processing module and capturing different scene information in a plurality of frames of images;
the segmentation module is used for receiving data in the moving image acquisition and processing module and the image scene information capturing module, and segmenting the video stream to be processed based on the obtained scene integration information of a plurality of frames of images to obtain a plurality of video sequences;
the target scene distinguishing module is used for receiving the data in the segment dividing module and distinguishing target scenes of all image frames in a plurality of video sequences;
the expected image selection module is used for receiving the data in the target scenery distinguishing module and respectively selecting expected images from a plurality of video sequences based on the target scenery distinguishing result;
the target scenery moving condition identification module is used for receiving the data in the expected image selection module and identifying the target scenery moving condition of the selected part of expected images;
and the expected image selection and adjustment module is used for receiving the data in the target scene motion condition identification module and performing selection and adjustment on the selected expected image based on the identification result of the target scene motion condition.
Furthermore, the image scene information capturing module comprises a distribution area calculating unit, an outline extracting unit, an interval breakpoint capturing unit, an outline integrating unit, an information eliminating unit and an information integrating unit;
the distribution area calculation unit is used for acquiring the background picture color types of a plurality of frames of images obtained in the moving image acquisition and processing module and calculating the distribution area of each background picture color type;
the contour extraction unit is used for extracting contours of scenes in a plurality of frames of images obtained from the moving image acquisition processing module;
the interval breakpoint capturing unit is used for capturing the interval breakpoints among the scene outlines obtained in the outline extracting unit and calculating the interval distance; obtaining the final target scenery
The contour integration unit is used for receiving the data in the interval breakpoint capturing unit and carrying out contour integration on the contour of the related scenery based on the data;
the information rejection unit is used for receiving the data in the distribution area calculation unit, the interval breakpoint capturing unit and the contour integration unit and reserving or rejecting the related information;
and the information integration unit is used for receiving the information data processed by the information elimination unit and integrating the information of each frame of image in the plurality of frames of images.
Further, the segment dividing module comprises: an image frame set analysis unit, an information comparison unit, an image frame set merging unit and a video stream dividing node selection unit;
the image frame set analysis unit is used for acquiring the frame rate information transmitted by the equipment and analyzing and collecting the scene information of fps frame images transmitted by the equipment every second;
the information comparison unit is used for receiving the data in the image frame set analysis unit and comparing the scene information of the image frame sets of adjacent seconds;
the image frame set merging unit is used for receiving the data in the information comparison unit and generating a frame set or video dividing nodes for the image frame sets of adjacent seconds based on the comparison data;
and the video stream dividing unit is used for receiving the data in the image frame set merging unit and performing video division on the video stream to be processed based on each dividing node generated in the image frame set merging unit.
Further, the target scene distinguishing module comprises a focus target area setting unit and a desired image capturing unit; the target scenery motion condition identification module comprises a discrete processing unit, a difference image calculation unit, a target scenery identification unit and a motion condition identification unit;
the focus target area setting unit is used for carrying out image diagonal connection on each image frame in each video sequence to obtain a target reference point, and then selecting a target radius value R according to the height and width of the image frame to obtain a final focus target area;
a target scene recognition unit that captures a target scene for each image frame in each video sequence based on the focus target region set in the focus target region setting unit;
the expected image capturing unit is used for capturing an expected image based on the distribution area characteristics of the captured target scenery in the focus target area;
the discrete processing unit is used for extracting and discretizing the outline of a scene image in a part of expected images in the expected image capturing unit;
the difference image calculation unit is used for receiving the data in the discrete processing unit and calculating the difference image between the continuous image frames;
and the motion situation recognition unit is used for receiving the data in the difference image calculation unit, recognizing whether the target scene is a person, an animal or a still object, and determining the motion situation of the target scene based on the recognition result.
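To illustrate how the seven modules hand data to one another, here is a hypothetical pipeline wiring in Python; the class and the callables it composes are invented for the sketch and are not part of the claimed system.

```python
class MovingImagePipeline:
    def __init__(self, capture, divide, discriminate, select, motion, adjust):
        self.capture = capture            # image scene information capturing module
        self.divide = divide              # segment dividing module
        self.discriminate = discriminate  # target scene distinguishing module
        self.select = select              # expected image selecting module
        self.motion = motion              # motion condition identifying module
        self.adjust = adjust              # expected image selecting and adjusting module

    def run(self, video_stream):
        frames = list(video_stream)                     # acquisition and framing
        scene_info = [self.capture(f) for f in frames]  # per-frame scene information
        sequences = self.divide(frames, scene_info)     # video sequences
        expected = [self.select(self.discriminate(s), s) for s in sequences]
        return self.adjust(expected, self.motion(expected))
```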
Compared with the prior art, the invention has the following beneficial effects: the invention realizes the acquisition and analysis of expected images in the different video segments of a video stream that contain different picture scenes; it considers the image scene by situation, identifies the target scene, and judges the motion situation of the identified target scene; it can provide the user with reference expected images across multiple scenes and aspects; by analyzing each frame image in the video stream sequence and obtaining the expected images based on the analysis results, it can effectively solve the problem that expected images obtained by the user through manual screenshots are blurred and unusable; and when the picture scene in the video is text or a chart, the recognition of the picture scene can correspondingly be performed on the text and chart information in the video background.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic flow chart of a method for collecting and analyzing moving images according to the present invention;
FIG. 2 is a schematic diagram of an acquisition and analysis system for moving images according to the present invention;
fig. 3 is a schematic diagram of an embodiment of the acquisition and analysis method for moving images according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-3, the present invention provides a technical solution: a method for collecting and analyzing moving images comprises the following steps:
step S100: extracting a video stream to be processed, and performing framing processing on the video stream to be processed to obtain a plurality of frame images; respectively capturing different scene information in the plurality of frame images to obtain the respective scene integration information of the plurality of frame images in the video stream to be processed; when the picture scene in the video is text or a chart, the recognition of the picture scene can correspondingly become recognition of the text and chart information in the video;
wherein the step of capturing and integrating the scene information in the plurality of image frames in step S100 comprises:
step S101: respectively obtaining the background picture color types of the plurality of frame images, and calculating the distribution area of each background picture color type; setting a color distribution area threshold, and marking for rejection the color type information whose distribution area is smaller than the color distribution area threshold; extracting the outlines of the scenes in the plurality of frame images to obtain n scene outlines; setting a contour line interval threshold, comparing the interval breakpoint distances among the n scene outlines against the contour line interval threshold, and merging the scene outlines whose interval breakpoint distance is less than or equal to the contour line interval threshold into one overall scene outline;
step S102: finding the scene outlines corresponding to the color type information preliminarily marked for rejection in step S101; if such a scene outline has other scene outlines whose interval breakpoint distance from it is less than or equal to the contour line interval threshold, removing the rejection mark; if the color type information marked for rejection has no corresponding scene outline, rejecting that color type information;
step S103: integrating the information from step S101 and step S102 to respectively obtain the scene integration information M of each of the plurality of frame images, the scene integration information M taking the form:

M = {a^(r_i), b^(e_i), c^(z_i), d^(u_i)}

wherein a, b, c and d respectively represent the different scene outlines appearing in order from left to right in an image frame; r_i, e_i, z_i and u_i respectively represent the color category sets in the different scene outlines; a^(r_i) represents the color category set r_i on scene outline a, b^(e_i) represents the color category set e_i on scene outline b, c^(z_i) represents the color category set z_i on scene outline c, and d^(u_i) represents the color category set u_i on scene outline d;
Step S200: segmenting the video stream to be processed into a plurality of video sequences based on the obtained scene integration information of the plurality of frames of images;
the step S200 of segmenting the video stream to be processed includes:
step S201: acquiring frame rate information transmitted by equipment, setting the equipment to transmit fps frame images every second, and collecting the fps frame images transmitted by the equipment every second;
step S202: as shown in FIG. 3, suppose the image frames appearing in the 1st second of the timestamp of the video stream to be processed are collected into an image frame set Q_1; the scene integration information of all image frames in Q_1 is

{M}_Q1 = {M_11, M_12, M_13, ...}

wherein M_11, M_12, M_13, ... respectively represent the scene integration information of the first, second, third, ... image frames in Q_1; the image frames of the 2nd second are collected into an image frame set Q_2, whose scene integration information is

{M}_Q2 = {M_21, M_22, M_23, ...}

wherein M_21, M_22, M_23, ... respectively represent the scene integration information of the first, second, third, ... image frames in Q_2;
step S203: comparing the scene integration information {M}_Q1 of all image frames in Q_1 with the scene integration information {M}_Q2 of all image frames in Q_2 to obtain a comparison result P; calculating and setting a comparison threshold; if the comparison result P is smaller than the comparison threshold, taking the 1st second as a segment division node;
step S204: if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~2 seconds, and correspondingly merging the scene integration information {M}_Q1 and {M}_Q2 of the 1st-second image frame set Q_1 and the 2nd-second image frame set Q_2; subsequently, the image frames appearing in the 3rd second are collected into an image frame set Q_3, whose scene integration information is

{M}_Q3 = {M_31, M_32, M_33, ...}

wherein M_31, M_32, M_33 respectively represent the scene integration information of the first, second and third image frames in Q_3; comparing the scene integration information of Q_3 with that of the merged frame set of Q_1 and Q_2 to obtain a comparison result P; if the comparison result P is smaller than the comparison threshold, taking the 2nd second as a segment division node; if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~3 seconds;
following steps S202 to S204, the image frames appearing in the 4th second are collected into an image frame set Q_4, whose scene integration information is

{M}_Q4 = {M_41, M_42, M_43, ...}

comparing the scene integration information of the image frame set Q_4 with that of the merged frame set of Q_1, Q_2 and Q_3 to obtain a comparison result P; if the comparison result P is smaller than the comparison threshold, taking the 3rd second as a segment division node; if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~4 seconds; and so on, until the entire video stream to be processed has been traversed and divided, obtaining a plurality of segment division nodes and a plurality of video sequences;
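Reusing the split_into_sequences sketch given earlier, this FIG. 3 walk-through can be driven with a toy similarity measure; the Jaccard-style P below is an assumption, not the comparison formula of the method.

```python
def jaccard(merged, info):
    a, b = set(map(str, merged)), set(map(str, info))
    return len(a & b) / max(len(a | b), 1)   # stands in for comparison result P

seconds = [["sceneX"] * 3, ["sceneX"] * 3, ["sceneY"] * 3, ["sceneY"] * 3]
print(split_into_sequences(seconds, jaccard, p_threshold=0.5))
# -> [2]: seconds 1 and 2 merge, and a division node falls before the 3rd second
```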
step S300: carrying out target scene discrimination on each image frame in a plurality of video sequences to obtain a discrimination result of a target scene; respectively selecting expected images from a plurality of video sequences based on the discrimination result;
wherein, step S300 includes:
step S301: connecting the image diagonals of each image frame in each video sequence to obtain two diagonal lines, taking the intersection point of the two image diagonals as a target reference point, and selecting a radius value R according to the height and width of the image frame to form a focus target region S = πR²;
step S302: calculating the different scene outline areas covered within the focus target region of each image frame in each video sequence, and taking the scene outline with the largest covered area within the focus target region of an image frame as the target scene of that image frame; if the total number of target scenes of the image frames in a video sequence is 1, selecting the image frame whose focus target region covers the largest target scene area in that video sequence as the expected image of the video sequence; if the image frame covering the largest target scene area within the focus target region is a blurred image, selecting the image frame covering the second-largest target scene area within the focus target region as the expected image of the video sequence, and so on, finally obtaining one expected image per video sequence;
step S303: if the total number of target scenes appearing in the image frames of a video sequence is greater than or equal to 2, extracting the image frames covering each target scene separately in time order; for a target scene A, if the image frames in which the target scene A appears are temporally continuous image frames and the number of the continuous image frames is greater than a set frame number threshold, selecting from the continuous image frames the image frame whose focus target region covers the largest area of the target scene A as an expected image of the video sequence, and so on, finally obtaining at least one expected image per video sequence; in this step, the selection for different target scenes has no precedence order, i.e., the expected image in which the focus target region covers target scene A may be selected first, or the expected image covering target scene B may be selected first;
step S400: identifying the motion situation of the target scene for the part of the expected images selected in step S300, and further performing motion situation recognition on the target scenes whose total number of occurrences in step S303 is greater than or equal to 2:
wherein, step S400 includes:
step S401: correspondingly acquiring the continuous image frames in which the target scene A of step S303 appears; recording one of the continuous image frames as f_n and an adjacent image frame as f_n-1; respectively discretizing the overall contour of the target scene A in the image frame f_n and the image frame f_n-1 to obtain the discrete point set of the image frame f_n and the discrete point set of the image frame f_n-1; in the two discrete point sets, respectively recording the gray values of the corresponding pixel points of the two frames as f_n(x_k, y_k) and f_n-1(x_k, y_k);
step S402: calculating from the gray values of the corresponding pixel points of the two image frames the difference image D(x_k, y_k):

D(x_k, y_k) = |f_n(x_k, y_k) - f_n-1(x_k, y_k)|

wherein f_n(x_k, y_k) represents the gray value of the pixel point corresponding to the k-th discrete point (x_k, y_k) of the target scene A in the image frame f_n; f_n-1(x_k, y_k) represents the gray value of the pixel point corresponding to the k-th discrete point (x_k, y_k) of the target scene A in the image frame f_n-1;
step S403: setting a gray threshold J and obtaining an image D'(x_k, y_k) according to the formula

D'(x_k, y_k) = 1 if D(x_k, y_k) ≥ J, and D'(x_k, y_k) = 0 otherwise;

if the image D'(x_k, y_k) shows a person or an animal, judging that the target scene presents a motion state due to its own motion; if the image D'(x_k, y_k) shows a still object, judging that the target scene presents a motion state due to passive artificial change of the field angle of the equipment;
when the motion condition identification result of a certain target scene is self-motion, all continuous image frames covering the target scene in the target area are taken as expected images;
step S500: pushing the finally selected expected images to the user, wherein the user can select among the expected images based on their own intention.
In order to better implement the method, a moving image acquisition and analysis system is also provided, the system comprising: a moving image acquisition and processing module, an image scene information capturing module, a segment dividing module, a target scene distinguishing module, an expected image selecting module, a target scene motion condition identifying module and an expected image selecting and adjusting module;
the moving image acquisition and processing module is used for inputting a video stream to be processed and performing framing processing on the video stream to be processed to obtain a plurality of frame images;
the image scene information capturing module is used for receiving the data in the moving image acquisition and processing module and capturing different scene information in a plurality of frames of images;
the image scene information capturing module comprises a distribution area calculating unit, an outline extracting unit, an interval breakpoint capturing unit, an outline integrating unit, an information eliminating unit and an information integrating unit;
the distribution area calculation unit is used for acquiring the background picture color types of a plurality of frames of images obtained in the moving image acquisition and processing module and calculating the distribution area of each background picture color type; the contour extraction unit is used for extracting contours of scenes in a plurality of frames of images obtained from the moving image acquisition processing module; the interval breakpoint capturing unit is used for capturing interval breakpoints among the scene outlines obtained in the outline extraction unit and calculating interval distances; the contour integration unit is used for receiving the data in the interval breakpoint capturing unit and carrying out contour integration on the contour of the related scene based on the data; the information removing unit is used for receiving the data in the distribution area calculating unit, the interval breakpoint capturing unit and the outline integrating unit and reserving or removing the related information; the information integration unit is used for receiving the information data processed by the information elimination unit and integrating the information of each frame of image in the plurality of frames of images;
the segmentation module is used for receiving data in the moving image acquisition and processing module and the image scene information capturing module, and segmenting the video stream to be processed based on the obtained scene integration information of a plurality of frames of images to obtain a plurality of video sequences;
wherein, the fragment division module includes: the system comprises an image frame set analysis unit, an information comparison unit, an image frame set merging unit and a video stream dividing node selection unit;
the image frame set analysis unit is used for acquiring the frame rate information transmitted by the equipment and analyzing and collecting the scene information of the fps frame images transmitted by the equipment every second; the information comparison unit is used for receiving the data in the image frame set analysis unit and comparing the scene information of the image frame sets of adjacent seconds; the image frame set merging unit is used for receiving the data in the information comparison unit and generating a frame set or video division node for the image frame set of the adjacent second based on the comparison data; a video stream dividing unit for receiving data in the image frame set merging unit, and performing video division on the video stream to be processed based on each dividing node generated in the image frame set merging unit
The target scene distinguishing module is used for receiving the data in the segment dividing module and distinguishing target scenes of all image frames in a plurality of video sequences;
the expected image selection module is used for receiving the data in the target scene discrimination module and respectively selecting expected images from a plurality of video sequences based on the discrimination result of the target scene;
the target scenery moving condition identification module is used for receiving the data in the expected image selection module and identifying the target scenery moving condition of the selected part of expected images;
and the expected image selection and adjustment module is used for receiving the data in the target scene motion condition identification module and performing selection and adjustment on the selected expected image based on the identification result of the target scene motion condition.
The target scene distinguishing module comprises a focus target area setting unit and a desired image capturing unit; the target scenery motion condition identification module comprises a discrete processing unit, a difference image calculation unit, a target scenery identification unit and a motion condition identification unit;
the focus target area setting unit is used for performing image diagonal connection on each image frame in each video sequence to obtain a target reference point, and then selecting a target radius value R according to the height and width of the image frame to obtain a final focus target area;
a target scene recognition unit that captures a target scene for each image frame in each video sequence based on the focus target region set in the focus target region setting unit;
the expected image capturing unit is used for capturing an expected image based on the distribution area characteristics of the captured target scenery in the focus target area;
the discrete processing unit is used for extracting and discretizing the outline of a scene image in a part of expected images in the expected image capturing unit;
the difference image calculation unit is used for receiving the data in the discrete processing unit and calculating the difference image between the continuous image frames;
and the motion condition identification unit is used for receiving the data in the difference image calculation unit, identifying whether the target scenery is a person or an animal or a still object, and judging the motion condition of the target scenery based on the identification result.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A method for moving image acquisition and analysis, the method comprising:
step S100: extracting a video stream to be processed, and performing framing processing on the video stream to be processed to obtain a plurality of frame images; respectively capturing different scene information in the plurality of frames of images to obtain respective scene integration information of the plurality of frames of images in the video stream to be processed;
the step of capturing and integrating scene information in the plurality of frames of images in step S100 includes:
step S101: respectively obtaining the background picture color types of the plurality of frame images, and calculating the distribution area of each background picture color type; setting a color distribution area threshold, and marking for rejection the color type information whose distribution area is smaller than the color distribution area threshold; extracting the outlines of the scenes in the plurality of frame images to obtain n scene outlines; setting a contour line interval threshold, comparing the interval breakpoint distances among the n scene outlines against the contour line interval threshold, and merging the scene outlines whose interval breakpoint distance is less than or equal to the contour line interval threshold into one overall scene outline;
step S102: finding the scene outlines corresponding to the color type information preliminarily marked for rejection in step S101; if such a scene outline has other scene outlines whose interval breakpoint distance from it is less than or equal to the contour line interval threshold, removing the rejection mark; if the color type information marked for rejection has no corresponding scene outline, rejecting that color type information;
step S103: integrating the information from step S101 and step S102 to respectively obtain the scene integration information M of each of the plurality of frame images, the scene integration information M taking the form:

M = {a^(r_i), b^(e_i), c^(z_i), d^(u_i)}

wherein a, b, c and d respectively represent the different scene outlines appearing in order from left to right in an image frame; r_i, e_i, z_i and u_i respectively represent the color category sets in the different scene outlines; a^(r_i) represents the color category set r_i on scene outline a, b^(e_i) represents the color category set e_i on scene outline b, c^(z_i) represents the color category set z_i on scene outline c, and d^(u_i) represents the color category set u_i on scene outline d;
Step S200: segmenting the video stream to be processed into a plurality of video sequences based on the obtained scene integration information of the plurality of frames of images;
step S300: carrying out target scene discrimination on each image frame in the plurality of video sequences to obtain a discrimination result of a target scene; respectively selecting expected images from the plurality of video sequences based on the discrimination result;
the step S300 includes:
step S301: connecting the image diagonals of each image frame in each video sequence to obtain two diagonal lines, taking the intersection point of the two image diagonals as a target reference point, and selecting a radius value R according to the height and width of the image frame to form a focus target region S = πR²;
step S302: calculating the different scene outline areas covered within the focus target region of each image frame in each video sequence, and taking the scene outline with the largest covered area within the focus target region of an image frame as the target scene of that image frame; if the total number of target scenes of the image frames in a video sequence is 1, selecting the image frame whose focus target region covers the largest target scene area in that video sequence as the expected image of the video sequence; if the image frame covering the largest target scene area within the target region is a blurred image, selecting the image frame covering the second-largest target scene area within the focus target region as the expected image of the video sequence, and so on, finally obtaining one expected image per video sequence;
step S303: if the total number of target scenes appearing in the image frames of a video sequence is greater than or equal to 2, extracting the image frames covering each target scene separately in time order; for a target scene A, if the image frames in which the target scene A appears are temporally continuous image frames and the number of the continuous image frames is greater than a set frame number threshold, selecting from the continuous image frames the image frame whose focus target region covers the largest area of the target scene A as an expected image of the video sequence, and so on, finally obtaining at least one expected image per video sequence;
step S400: identifying the motion condition of the target scenery for the part of the expected images selected in the step S300, and adjusting the selected expected images based on the result obtained by the motion condition identification;
step S500: pushing the finally selected expected images to the user, wherein the user can select among the expected images based on their own intention.
2. The method for collecting and analyzing moving images as claimed in claim 1, wherein the step S200 of segmenting the video stream to be processed comprises:
step S201: acquiring the frame rate information transmitted by equipment, setting that the equipment transmits fps frame images every second, and collecting the fps frame images transmitted by the equipment every second;
step S202: collecting the image frames appearing within the timestamp interval 0~t_0 of the video stream to be processed into an image frame set Q_1, t_0 being the first interval of the video stream to be processed; the scene integration information of all image frames in Q_1 is

{M}_Q1 = {M_11, M_12, M_13, ...}

wherein M_11, M_12, M_13, ... respectively represent the scene integration information of the first, second, third, ... image frames in Q_1; collecting the image frames appearing within t_0~t_0-1 into an image frame set Q_2, t_0-1 representing the next interval after t_0; the scene integration information of all image frames in Q_2 is

{M}_Q2 = {M_21, M_22, M_23, ...}

wherein M_21, M_22, M_23, ... respectively represent the scene integration information of the first, second, third, ... image frames in Q_2;
step S203: comparing the scene integration information {M}_Q1 of all image frames in the image frame set Q_1 with the scene integration information {M}_Q2 of all image frames in the image frame set Q_2 to obtain a comparison result P; calculating and setting a comparison threshold; if the comparison result P is smaller than the comparison threshold, taking t_0 as a segment division node;
step S204: if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~t_0-1, and correspondingly merging the scene integration information {M}_Q1 and {M}_Q2 of the image frame set Q_1 and the image frame set Q_2; then collecting the image frames appearing within t_0-1~t_0-2 into an image frame set Q_3, t_0-2 representing the next interval after t_0-1; the scene integration information of all image frames in Q_3 is

{M}_Q3 = {M_31, M_32, M_33, ...}

wherein M_31, M_32, M_33 respectively represent the scene integration information of the first, second and third image frames in Q_3; comparing the scene integration information of the image frame set Q_3 with that of the merged frame set of the image frame set Q_1 and the image frame set Q_2 to obtain a comparison result P; if the comparison result P is smaller than the comparison threshold, taking t_0-1 as a segment division node; if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~t_0-2; repeating the above steps until the entire video stream to be processed has been traversed and divided, obtaining a plurality of video sequences.
3. A moving image acquisition and analysis method as claimed in claim 1, wherein in step S400, motion situation recognition is further performed on each target scene whose total number of occurrences in step S303 is greater than or equal to 2:
step S401: correspondingly acquiring the continuous image frames in which the target scene A from step S303 appears, denoting one of these image frames as f_n and its adjacent image frame as f_{n-1}; respectively discretizing the whole contour of the target scene A in the image frame f_n and the image frame f_{n-1} to obtain the discrete point set of the image frame f_n and the discrete point set of the image frame f_{n-1}; respectively recording the gray values of the corresponding pixel points of the two frames in the two discrete point sets as g_n(x_k, y_k) and g_{n-1}(x_k, y_k);
step S402: calculating from the gray values of the corresponding pixel points of the two image frames, according to the formula D(x_k, y_k) = |g_n(x_k, y_k) − g_{n-1}(x_k, y_k)|, a difference image D, wherein g_n(x_k, y_k) denotes the gray value of the pixel point corresponding to the k-th discrete point (x_k, y_k) of the target scene A in the image frame f_n, and g_{n-1}(x_k, y_k) denotes the gray value of the pixel point corresponding to the k-th discrete point (x_k, y_k) of the target scene A in the image frame f_{n-1};
step S403: setting a gray threshold J and obtaining an image R according to the formula R(x_k, y_k) = 1 if D(x_k, y_k) ≥ J, and R(x_k, y_k) = 0 otherwise; if the target scene in the image R is a person or an animal, it is judged that the target scene presents a motion state due to its own movement; if the target scene in the image R is a still object, it is judged that the target scene presents a motion state due to a passive, manual change of the field angle of the device.
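As a rough illustration of steps S401 to S403, the following sketch applies the difference-image and thresholding formulas to a given set of discrete contour points. The contour discretization itself and the person/animal-versus-still-object recognition of step S403 are outside its scope, and the example threshold J = 25 is an assumption.

```python
import numpy as np

def motion_mask(frame_n, frame_prev, points, j_threshold=25):
    """frame_n, frame_prev: 2-D uint8 gray images standing in for f_n and
    f_(n-1); points: the (x, y) discrete contour points of target scene A.
    Returns the binary image R over those points: 1 where the gray-value
    change D reaches the threshold J, 0 otherwise."""
    r = {}
    for (x, y) in points:
        d = abs(int(frame_n[y, x]) - int(frame_prev[y, x]))  # difference image D
        r[(x, y)] = 1 if d >= j_threshold else 0             # threshold against J
    return r
```

A mask full of zeros indicates no gray change along the contour, i.e. a static target scene; any ones feed the person/animal-versus-still-object judgment of step S403.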
4. The moving image acquisition and analysis method according to claim 3, wherein said step S400 further comprises: when the motion condition of a target scene is identified as self-motion, taking all continuous image frames in which the target scene appears within the target area as expected images.
5. An acquisition and analysis system for moving images, the system comprising: a moving image acquisition and processing module, an image scene information capturing module, a segment dividing module, a target scene discrimination module, an expected image selection module, a target scene motion condition identification module and an expected image selection and adjustment module;
the mobile image acquisition processing module is used for inputting a video stream to be processed and performing framing processing on the video stream to be processed to obtain a plurality of frame images;
the image scenery information capturing module is used for receiving the data in the moving image acquisition and processing module and capturing different scenery information in the plurality of frames of images;
the image scene information capturing module comprises a distribution area calculating unit, an outline extracting unit, an interval breakpoint capturing unit, an outline integrating unit, an information rejecting unit and an information integrating unit;
the distribution area calculating unit is used for acquiring the background picture color types of a plurality of frames of images obtained from the moving image acquisition and processing module and calculating the distribution area of each background picture color type;
the contour extraction unit is used for extracting contours of scenes in a plurality of frames of images obtained from the moving image acquisition and processing module;
the interval breakpoint capturing unit is used for capturing interval breakpoints among the scene outlines obtained in the outline extraction unit and calculating interval distances;
the contour integration unit is used for receiving the data in the interval breakpoint capturing unit and carrying out contour integration on the contour of the related scenery based on the data;
the information eliminating unit is used for receiving the data in the distribution area calculating unit, the interval breakpoint capturing unit and the outline integrating unit and reserving or eliminating related information;
the information integration unit is used for receiving the information data processed by the information elimination unit and integrating the information of each frame of image in the plurality of frames of images;
the segmentation module is used for receiving data in the moving image acquisition processing module and the image scene information capturing module, and segmenting the video stream to be processed based on the obtained scene integration information of the frames of images to obtain a plurality of video sequences;
the object scene distinguishing module is used for receiving the data in the segmentation module and distinguishing the object scene from each image frame in the plurality of video sequences;
the target scenery distinguishing module comprises a focus target area setting unit and a desired image capturing unit;
the focus target area setting unit is used for carrying out image diagonal connection on each image frame in each video sequence to obtain a target reference point, and then selecting a target radius value R according to the height and width of the image frame to obtain a final focus target area;
the expected image capturing unit captures an expected image based on the distribution area characteristics of the captured target scene in the focal target area;
the expected image selection module is used for receiving the data in the target scene discrimination module and respectively selecting expected images from the plurality of video sequences based on the discrimination result of the target scene;
the target scenery moving condition identification module is used for receiving the data in the expected image selection module and identifying the target scenery moving condition of the selected part of expected images;
and the expected image selection and adjustment module is used for receiving the data in the target scenery movement condition identification module and performing selection and adjustment on the selected expected image based on the identification result of the target scenery movement condition.
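As referenced in the description of the focal target area setting unit above, here is a minimal sketch of one possible construction: the target reference point is taken as the intersection of the image frame's diagonals (its center), and since the claim leaves the exact choice of the target radius value R open, deriving R as a quarter of the shorter side is an assumption made purely for illustration.

```python
def focal_target_area(height, width, factor=0.25):
    """Returns (cx, cy, r): the target reference point and radius R."""
    cx, cy = width / 2.0, height / 2.0  # the diagonals of a rectangle cross here
    r = factor * min(height, width)     # target radius value R (assumed rule)
    return cx, cy, r

def in_focal_area(x, y, area):
    """True if pixel (x, y) falls inside the circular focal target area."""
    cx, cy, r = area
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2
```

A target scene whose distribution area largely satisfies in_focal_area() would then be handed on to the expected image capturing unit.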
6. The moving image acquisition and analysis system according to claim 5, wherein the segment dividing module comprises: an image frame set analysis unit, an information comparison unit, an image frame set merging unit and a video stream dividing unit;
The image frame set analysis unit is used for acquiring the frame rate information transmitted by the device and collecting and analyzing the scene information of the fps frame images transmitted by the device each second (a grouping sketch follows this claim);
The information comparison unit is used for receiving the data from the image frame set analysis unit and comparing the scene information of the image frame sets of adjacent seconds;
The image frame set merging unit is used for receiving the data from the information comparison unit and, based on the comparison data, merging the image frame sets of adjacent seconds and/or generating video division nodes;
The video stream dividing unit is used for receiving the data from the image frame set merging unit and dividing the video stream to be processed at each division node generated by the image frame set merging unit.
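As noted above for the image frame set analysis unit, a minimal grouping sketch under stated assumptions: frames are bucketed into one image frame set per second using the device-reported frame rate fps (assumed here to be an integer), and the capture_scene_info() helper named in the usage comment is hypothetical.

```python
def per_second_frame_sets(frames, fps):
    """frames: decoded frames in playback order; fps: integer frame rate
    reported by the device. Yields one image frame set (list) per second."""
    for start in range(0, len(frames), fps):
        yield frames[start:start + fps]

# Usage: each per-second set's scene information would then be collected
# and handed to the information comparison unit, e.g.
#   sets = [[capture_scene_info(f) for f in s]           # hypothetical helper
#           for s in per_second_frame_sets(frames, fps)]
```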
7. The moving image acquisition and analysis system according to claim 5, wherein the target scene motion condition identification module comprises a discrete processing unit, a difference image calculation unit, a target scene recognition unit and a motion condition identification unit;
The target scene recognition unit captures the target scene in each image frame of each video sequence based on the focal target area set by the focal target area setting unit;
The discrete processing unit is used for extracting and discretizing the contours of the scene images in the part of the expected images from the expected image capturing unit;
The difference image calculation unit is used for receiving the data from the discrete processing unit and calculating the difference images between continuous image frames;
The motion condition identification unit is used for receiving the data from the difference image calculation unit, identifying whether the target scene is a person, an animal or a still object, and judging the motion condition of the target scene based on the identification result.
CN202111490973.4A 2021-12-08 2021-12-08 Acquisition and analysis system and method for moving image Active CN114283356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111490973.4A CN114283356B (en) 2021-12-08 2021-12-08 Acquisition and analysis system and method for moving image


Publications (2)

Publication Number Publication Date
CN114283356A (en) 2022-04-05
CN114283356B (en) 2022-11-29

Family

ID=80871265


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117197706B (en) * 2023-04-23 2024-06-18 青岛尘元科技信息有限公司 Method and system for dividing progressive lens, storage medium and electronic device


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150169960A1 (en) * 2012-04-18 2015-06-18 Vixs Systems, Inc. Video processing system with color-based recognition and methods for use therewith
CN107169985A (en) * 2017-05-23 2017-09-15 南京邮电大学 A kind of moving target detecting method based on symmetrical inter-frame difference and context update
CN110290426B (en) * 2019-06-24 2022-04-19 腾讯科技(深圳)有限公司 Method, device and equipment for displaying resources and storage medium
CN112203095B (en) * 2020-12-04 2021-03-09 腾讯科技(深圳)有限公司 Video motion estimation method, device, equipment and computer readable storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718871A (en) * 2016-01-18 2016-06-29 成都索贝数码科技股份有限公司 Video host identification method based on statistics
WO2018086527A1 (en) * 2016-11-08 2018-05-17 中兴通讯股份有限公司 Video processing method and device
CN112581489A (en) * 2019-09-29 2021-03-30 RealMe重庆移动通信有限公司 Video compression method, device and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A study on various methods used for video summarization and moving object detection for video surveillance applications";A.Senthil Murugan et al;《Multimedia Tools and Applications》;20180123;第77卷;第23273-23290页 *
"Efficient key-frame extraction and video analysis";J.Calic et al;《Proceedings. International Conference on Information Technology:Coding and Computing》;20020807;第1-6页 *
"基于场景分割的视频内容摘要研究";熊伟;《中国学位论文全文数据库》;20141103;第1-58页 *
"基于颜色与目标轮廓特征的视频分割方法";孙中华;《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》;20041215(第04期);第I138-847页 *


Similar Documents

Publication Publication Date Title
CN110830756A (en) Monitoring method and device
CN109714519B (en) Method and system for automatically adjusting image frame
US20060120712A1 (en) Method and apparatus for processing image
US20020186881A1 (en) Image background replacement method
CN1586069A (en) Identification and evaluation of audience exposure to logos in a broadcast event
CN101605209A (en) Camera head and image-reproducing apparatus
CN107133969A (en) A kind of mobile platform moving target detecting method based on background back projection
CN107358141B (en) Data identification method and device
JPH08191411A (en) Scene discrimination method and representative image recording and display device
JP2005513656A (en) Method for identifying moving objects in a video using volume growth and change detection masks
CN106060470A (en) Video monitoring method and system
CN114283356B (en) Acquisition and analysis system and method for moving image
CN108093314A (en) A kind of news-video method for splitting and device
CN111985348A (en) Face recognition method and system
CN115965889A (en) Video quality assessment data processing method, device and equipment
CN109492545B (en) Scene and compressed information-based facial feature positioning method and system
RU2616152C1 (en) Method of spatial position control of the participants of the sports event on the game field
CN112733680A (en) Model training method, extracting method and device for generating high-quality face image based on monitoring video stream and terminal equipment
CN116095363B (en) Mobile terminal short video highlight moment editing method based on key behavior recognition
CN109492616B (en) Face recognition method for advertising screen based on autonomous learning
CN115641527A (en) Intelligent collaborative tracking and tracing method and system for surveillance video
CN108388872A (en) A kind of headline recognition methods and device based on font color
CN112446820A (en) Method for removing irrelevant portrait of scenic spot photo
CN110852161A (en) Method, device and system for identifying identity of person in motion in real time
CN112818743A (en) Image recognition method and device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant