CN111460226A - Video character retrieval method and retrieval system based on deep learning - Google Patents


Info

Publication number
CN111460226A
CN111460226A CN202010249216.7A
Authority
CN
China
Prior art keywords
face
frame
video
image
digital video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010249216.7A
Other languages
Chinese (zh)
Inventor
杨唤晨
谢恩鹏
徐杰
李帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Yunman Intelligent Technology Co ltd
Original Assignee
Shandong Yunman Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Yunman Intelligent Technology Co ltd filed Critical Shandong Yunman Intelligent Technology Co ltd
Priority to CN202010249216.7A priority Critical patent/CN111460226A/en
Priority to PCT/CN2020/096015 priority patent/WO2021196409A1/en
Publication of CN111460226A publication Critical patent/CN111460226A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F16/784Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Library & Information Science (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Geometry (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A video character retrieval method and retrieval system based on deep learning. A digital video is decoded according to its frame rate and preprocessed so that frames or fragments of the video can be examined; face information is obtained with a pre-trained deep neural network; a FaceNet network converts each face picture into a feature vector and extracts the feature values of the face pictures of a specific person; a formula is then used to calculate the distance between the feature vector and the feature centroid of the person. Comparing this distance with the feature sphere radius r determines whether the face belongs to the specific person, which makes it convenient for a service provider to search the server for videos containing the specific person, among other application scenarios.

Description

Video character retrieval method and retrieval system based on deep learning
Technical Field
The invention relates to the technical field of video face retrieval, in particular to a video character retrieval method and a retrieval system based on deep learning.
Background
In recent years, video applications such as streaming media and IPTV have developed rapidly, and following TV dramas and watching digital television have become important forms of entertainment. Cisco's VNI forecast predicted that IP video would account for 82% of Internet IP traffic by 2022. Against this background, demand has grown for more diverse and convenient video services. Consequently, retrieving people in video, such as finding the segments of a movie in which a star of interest appears, checking whether a certain person appears in surveillance footage, or searching a video library for videos containing a specific person, has become a problem to be solved.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a method and a system that enable streaming media providers and smart set-top-box providers to retrieve persons appearing in videos.
The technical scheme adopted by the invention for overcoming the technical problems is as follows:
a video character retrieval method based on deep learning comprises the following steps:
a) decoding the digital video file according to the frame rate of the digital video file;
b) preprocessing the decoded frame of the digital video, and converting the frame into a gray scale image;
c) inputting the preprocessed frame into a pre-trained deep neural network; if a face exists in the gray-scale image, the deep neural network outputs the positions of all faces in the frame and intercepts them; if no face exists in the frame, returning to step a);
d) preprocessing the intercepted face image;
e) inputting the preprocessed face image into a FaceNet network, which converts the face image into an N-dimensional feature vector V_unknown;
f) inputting the i face pictures of the specific person to be identified into the FaceNet network, which extracts the feature vector V_target,i of each face picture; the feature centroid of the specific person is then computed according to the formula

cen = (Σ_i ρ_i · V_target,i) / (Σ_i ρ_i),

where ρ_i is the confidence factor of the i-th face picture of the specific person and 0 < ρ_i ≤ 1;
g) calculating the distance l_cen between the feature vector V_unknown and the feature centroid of the person according to the formula

l_cen = ‖V_unknown − cen‖₂ = √( Σ_k (V_unknown,k − cen_k)² );

if l_cen is less than the feature sphere radius r, the face is judged to be the specific person; if l_cen is greater than or equal to r, it is judged not to be the specific person;
h) the service provider finds all videos in the server that contain frames of the specific person; when a user wants to watch videos of the specific person, the service provider jumps the user to the videos containing those frames.
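The centroid and distance test in steps f) and g) can be sketched in plain Python. This is a minimal illustration, assuming the centroid is a confidence-weighted mean of the reference vectors and the distance is Euclidean, matching the quantities ρ_i, V_target,i, l_cen and r named above; the function names are our own.

```python
from math import sqrt

def feature_centroid(targets, rhos):
    """Confidence-weighted centroid of the reference embeddings.

    targets: list of N-dimensional FaceNet feature vectors V_target,i
    rhos:    confidence factors rho_i with 0 < rho_i <= 1
    """
    total = sum(rhos)
    dim = len(targets[0])
    return [sum(r * v[k] for r, v in zip(rhos, targets)) / total
            for k in range(dim)]

def is_specific_person(v_unknown, cen, r):
    """True when the Euclidean distance l_cen is below the feature sphere radius r."""
    l_cen = sqrt(sum((a - b) ** 2 for a, b in zip(v_unknown, cen)))
    return l_cen < r
```

A face whose embedding falls inside the sphere of radius r around the centroid is accepted; everything else is rejected.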
Further, a frame step length s is set in step a), where s is a positive integer greater than or equal to 1; one frame out of every s frames decoded from the digital video file is selected and sent to step b) for processing.
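The frame-step sampling described here can be sketched as a small generator; `sample_frames` is a hypothetical name, and any iterable of decoded frames can be passed in.

```python
def sample_frames(frames, s=1):
    """Select one frame out of every s decoded frames (the frame step of step a))."""
    if s < 1:
        raise ValueError("frame step s must be a positive integer >= 1")
    for idx, frame in enumerate(frames):
        if idx % s == 0:
            yield frame
```

With s = 1 every frame passes through; larger s trades recall for speed, as the text notes.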
Preferably, in step b) the decoded frame of the digital video is reduced to a fixed size while preserving its aspect ratio, and the reduced frame is converted into a gray-scale image.
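A sketch of the size computation for this equal-ratio reduction, assuming the longer side is shrunk to a fixed maximum; the patent does not specify the target size, so `max_side` is an illustrative parameter.

```python
def scaled_size(width, height, max_side):
    """New (width, height) after reducing the longer side to max_side
    while keeping the length-to-width ratio; small frames are left unchanged."""
    if max(width, height) <= max_side:
        return width, height  # already small enough, no upscaling
    scale = max_side / max(width, height)
    return max(1, round(width * scale)), max(1, round(height * scale))
```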
Further, the pretreatment operation in step d) comprises the following steps:
d-1) if the intercepted face image is square, scaling the intercepted face image to a square image of M × M pixels;
d-2) if the intercepted face image is not square, padding it with black borders to make it square and then scaling it to a square image of M × M pixels.
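Steps d-1) and d-2) can be sketched together as one padding-and-scaling helper operating on a gray-scale image stored as nested lists; nearest-neighbour resizing is our illustrative choice, as the patent does not name a resampling method.

```python
def to_square_input(face, m=160):
    """Pad a cropped gray face image (list of pixel rows) with black (0)
    borders to a square, then nearest-neighbour-resize it to m x m pixels.
    m = 160 matches the preferred value of M given in the text."""
    h, w = len(face), len(face[0])
    side = max(h, w)
    top, left = (side - h) // 2, (side - w) // 2
    # black square canvas with the face pasted in the centre
    square = [[0] * side for _ in range(side)]
    for y in range(h):
        for x in range(w):
            square[top + y][left + x] = face[y][x]
    # nearest-neighbour scale to m x m
    return [[square[y * side // m][x * side // m] for x in range(m)]
            for y in range(m)]
```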
Preferably, N in step e) is 128.
Preferably, M in step d) is 160.
A video character retrieval system based on deep learning, comprising: the system comprises a video decoding unit, a face detection unit and a face feature extraction unit;
the video decoding unit comprises a video decoding unit and a preprocessing unit, the video decoding unit decodes the digital video file according to the frame rate of the digital video file, and the preprocessing unit preprocesses the decoded frame of the digital video;
the face detection unit comprises a deep neural network and a preprocessing unit, wherein the deep neural network outputs position coordinates of all faces in a frame, intercepts the faces and preprocesses the faces through the preprocessing unit;
the face feature extraction unit is composed of a Facenet network.
The invention has the following beneficial effects: the digital video is decoded according to its frame rate and preprocessed so that frames or fragments of the video can be examined; face information is obtained with a pre-trained deep neural network; a FaceNet network converts each face picture into a feature vector and extracts the feature values of the face pictures of a specific person; a formula is then used to calculate the distance between the feature vector and the feature centroid of the person, and comparing this distance with the feature sphere radius r determines whether the face belongs to the specific person. This makes it convenient for a service provider to search the server for videos containing the specific person, among other application scenarios.
Drawings
FIG. 1 is a system block diagram of the present invention.
Detailed Description
The invention is further described below with reference to fig. 1.
A video character retrieval method based on deep learning comprises the following steps:
a) decoding the digital video file according to the frame rate of the digital video file;
b) preprocessing the decoded frame of the digital video, and converting the frame into a gray scale image;
c) inputting the preprocessed frame into a pre-trained deep neural network; if a face exists in the gray-scale image, the deep neural network outputs the positions of all faces in the frame and intercepts them; if no face exists in the frame, returning to step a);
d) preprocessing the intercepted face image;
e) inputting the preprocessed face image into a FaceNet network, which converts the face image into an N-dimensional feature vector V_unknown;
f) inputting the i face pictures of the specific person to be identified into the FaceNet network, which extracts the feature vector V_target,i of each face picture; the feature centroid of the specific person is then computed according to the formula

cen = (Σ_i ρ_i · V_target,i) / (Σ_i ρ_i),

where ρ_i is the confidence factor of the i-th face picture of the specific person and 0 < ρ_i ≤ 1;
g) calculating the distance l_cen between the feature vector V_unknown and the feature centroid of the person according to the formula

l_cen = ‖V_unknown − cen‖₂ = √( Σ_k (V_unknown,k − cen_k)² );

if l_cen is less than the feature sphere radius r, the face is judged to be the specific person; if l_cen is greater than or equal to r, it is judged not to be the specific person;
h) the service provider finds all videos in the server that contain frames of the specific person; when a user wants to watch videos of the specific person, the service provider jumps the user to the videos containing those frames.
In summary, the digital video is decoded according to its frame rate and preprocessed so that frames or fragments of the video can be examined; face information is obtained with a pre-trained deep neural network; a FaceNet network converts each face picture into a feature vector and extracts the feature values of the face pictures of a specific person; a formula is then used to calculate the distance between the feature vector and the feature centroid of the person, and comparing this distance with the feature sphere radius r determines whether the face belongs to the specific person, which makes it convenient for a service provider to search the server for videos containing the specific person, among other application scenarios.
Preferably, the frame step length in step a) is set to s, where s is a positive integer greater than or equal to 1, and one frame out of every s frames decoded from the digital video file is selected and sent to step b) for processing. By setting the step length, only one of every s decoded frames is sent to the subsequent preprocessing unit, which saves system resources and increases the running speed of the system.
Further, if the frame is too large, the decoded frame of the digital video is reduced in step b) to a fixed size while preserving its aspect ratio and is converted into a gray-scale image after reduction, which speeds up the face detection step.
Further, the pretreatment operation in step d) comprises the following steps:
d-1) if the intercepted face image is square, scaling the intercepted face image to a square image of M × M pixels;
d-2) if the intercepted face image is not square, padding it with black borders to make it square and then scaling it to a square image of M × M pixels.
Preferably, N in step e) is 128.
Preferably, M in step d) is 160.
A video character retrieval system based on deep learning, comprising: the system comprises a video decoding unit, a face detection unit and a face feature extraction unit;
the video decoding unit comprises a video decoding unit and a preprocessing unit, the video decoding unit decodes the digital video file according to the frame rate of the digital video file, and the preprocessing unit preprocesses the decoded frame of the digital video;
the face detection unit comprises a deep neural network and a preprocessing unit, wherein the deep neural network outputs position coordinates of all faces in a frame, intercepts the faces and preprocesses the faces through the preprocessing unit;
the face feature extraction unit is composed of a Facenet network.
The above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (7)

1. A video character retrieval method based on deep learning is characterized by comprising the following steps:
a) decoding the digital video file according to the frame rate of the digital video file;
b) preprocessing the decoded frames of the digital video;
c) inputting the preprocessed frame into a pre-trained deep neural network; if a face exists in the gray-scale image, the deep neural network outputs the positions of all faces in the frame and intercepts them; if no face exists in the frame, returning to step a);
d) preprocessing the intercepted face image;
e) inputting the preprocessed face image into a FaceNet network, which converts the face image into an N-dimensional feature vector V_unknown;
f) inputting the i face pictures of the specific person to be identified into the FaceNet network, which extracts the feature vector V_target,i of each face picture; the feature centroid of the specific person is then computed according to the formula

cen = (Σ_i ρ_i · V_target,i) / (Σ_i ρ_i),

where ρ_i is the confidence factor of the i-th face picture of the specific person and 0 < ρ_i ≤ 1;
g) calculating the distance l_cen between the feature vector V_unknown and the feature centroid of the person according to the formula

l_cen = ‖V_unknown − cen‖₂ = √( Σ_k (V_unknown,k − cen_k)² );

if l_cen is less than the feature sphere radius r, the face is judged to be the specific person; if l_cen is greater than or equal to r, it is judged not to be the specific person;
h) the service provider finds all videos in the server that contain frames of the specific person; when a user wants to watch videos of the specific person, the service provider jumps the user to the videos containing those frames.
2. The deep learning-based video character retrieval method of claim 1, wherein: setting the frame step length as s in the step a), wherein s is a positive integer greater than or equal to 1, and selecting one frame from every s frames decoded from the digital video file to be sent to the step b) for processing.
3. The deep learning-based video character retrieval method of claim 1, wherein: in step b), the decoded frame of the digital video is reduced to a fixed size while preserving its aspect ratio, and the reduced frame is converted into a gray-scale image.
4. The deep learning-based video character retrieval method of claim 1, wherein the preprocessing operation in step d) comprises the following steps:
d-1) if the intercepted face image is square, scaling the intercepted face image to a square image of M × M pixels;
d-2) if the intercepted face image is not square, padding it with black borders to make it square and then scaling it to a square image of M × M pixels.
5. The deep learning-based video character retrieval method of claim 1, wherein: in step e) N is 128.
6. The deep learning-based video character retrieval method of claim 4, wherein: in step d), M is 160.
7. A retrieval system for implementing the deep learning-based video character retrieval method according to claim 1, comprising: the system comprises a video decoding unit, a face detection unit and a face feature extraction unit;
the video decoding unit comprises a video decoding unit and a preprocessing unit, the video decoding unit decodes the digital video file according to the frame rate of the digital video file, and the preprocessing unit preprocesses the decoded frame of the digital video;
the face detection unit comprises a deep neural network and a preprocessing unit, wherein the deep neural network outputs position coordinates of all faces in a frame, intercepts the faces and preprocesses the faces through the preprocessing unit;
the face feature extraction unit is composed of a Facenet network.
CN202010249216.7A 2020-04-01 2020-04-01 Video character retrieval method and retrieval system based on deep learning Pending CN111460226A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010249216.7A CN111460226A (en) 2020-04-01 2020-04-01 Video character retrieval method and retrieval system based on deep learning
PCT/CN2020/096015 WO2021196409A1 (en) 2020-04-01 2020-06-15 Video figure retrieval method and retrieval system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010249216.7A CN111460226A (en) 2020-04-01 2020-04-01 Video character retrieval method and retrieval system based on deep learning

Publications (1)

Publication Number Publication Date
CN111460226A true CN111460226A (en) 2020-07-28

Family

ID=71682499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010249216.7A Pending CN111460226A (en) 2020-04-01 2020-04-01 Video character retrieval method and retrieval system based on deep learning

Country Status (2)

Country Link
CN (1) CN111460226A (en)
WO (1) WO2021196409A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705422A (en) * 2021-08-25 2021-11-26 山东云缦智能科技有限公司 Method for acquiring character video clips through human faces

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319938A (en) * 2017-12-31 2018-07-24 奥瞳***科技有限公司 High quality training data preparation system for high-performance face identification system
CN108764067A (en) * 2018-05-08 2018-11-06 北京大米科技有限公司 Video intercepting method, terminal, equipment and readable medium based on recognition of face
CN110188602A (en) * 2019-04-17 2019-08-30 深圳壹账通智能科技有限公司 Face identification method and device in video
CN110543811A (en) * 2019-07-15 2019-12-06 华南理工大学 non-cooperation type examination person management method and system based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106658169B (en) * 2016-12-18 2019-06-07 北京工业大学 A kind of universal method based on deep learning multilayer division news video
TWI636426B (en) * 2017-08-23 2018-09-21 財團法人國家實驗研究院 Method of tracking a person's face in an image
CN108647621A (en) * 2017-11-16 2018-10-12 福建师范大学福清分校 A kind of video analysis processing system and method based on recognition of face
CN107911748A (en) * 2017-11-24 2018-04-13 南京融升教育科技有限公司 A kind of video method of cutting out based on recognition of face
CN108337532A (en) * 2018-02-13 2018-07-27 腾讯科技(深圳)有限公司 Perform mask method, video broadcasting method, the apparatus and system of segment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319938A (en) * 2017-12-31 2018-07-24 奥瞳***科技有限公司 High quality training data preparation system for high-performance face identification system
CN108764067A (en) * 2018-05-08 2018-11-06 北京大米科技有限公司 Video intercepting method, terminal, equipment and readable medium based on recognition of face
CN110188602A (en) * 2019-04-17 2019-08-30 深圳壹账通智能科技有限公司 Face identification method and device in video
CN110543811A (en) * 2019-07-15 2019-12-06 华南理工大学 non-cooperation type examination person management method and system based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PAN_JINQUAN: "Implementing face detection and face recognition with MTCNN and FaceNet | CSDN selected blog posts", 《WeChat Official Accounts Platform》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705422A (en) * 2021-08-25 2021-11-26 山东云缦智能科技有限公司 Method for acquiring character video clips through human faces
CN113705422B (en) * 2021-08-25 2024-04-09 山东浪潮超高清视频产业有限公司 Method for obtaining character video clips through human faces

Also Published As

Publication number Publication date
WO2021196409A1 (en) 2021-10-07

Similar Documents

Publication Publication Date Title
US10194203B2 (en) Multimodal and real-time method for filtering sensitive media
US10628700B2 (en) Fast and robust face detection, region extraction, and tracking for improved video coding
WO2019218824A1 (en) Method for acquiring motion track and device thereof, storage medium, and terminal
US10304458B1 (en) Systems and methods for transcribing videos using speaker identification
US20090290791A1 (en) Automatic tracking of people and bodies in video
WO2018006825A1 (en) Video coding method and apparatus
US8027523B2 (en) Image processing apparatus, image processing method, and program
US20170060867A1 (en) Video and image match searching
US8787692B1 (en) Image compression using exemplar dictionary based on hierarchical clustering
US20160307029A1 (en) Duplicate reduction for face detection
US20110123115A1 (en) On-Screen Guideline-Based Selective Text Recognition
JP5067310B2 (en) Subtitle area extraction apparatus, subtitle area extraction method, and subtitle area extraction program
CN109948721B (en) Video scene classification method based on video description
CN114339360B (en) Video processing method, related device and equipment
US20240007703A1 (en) Non-occluding video overlays
Ikeda Segmentation of faces in video footage using HSV color for face detection and image retrieval
CN111460226A (en) Video character retrieval method and retrieval system based on deep learning
CN114359333A (en) Moving object extraction method and device, computer equipment and storage medium
CN113011254A (en) Video data processing method, computer equipment and readable storage medium
CN111476132A (en) Video scene recognition method and device, electronic equipment and storage medium
CN116391200A (en) Scaling agnostic watermark extraction
KR20190109662A (en) User Concern Image Detecting Method at Security System and System thereof
CN114387440A (en) Video clipping method and device and storage medium
JP5283267B2 (en) Content identification method and apparatus
CN114067421B (en) Personnel duplicate removal identification method, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200728