CN111460907B - Malicious behavior identification method, system and storage medium - Google Patents

Malicious behavior identification method, system and storage medium

Info

Publication number
CN111460907B
CN111460907B
Authority
CN
China
Prior art keywords
video
feature
similarity
malicious
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010148608.4A
Other languages
Chinese (zh)
Other versions
CN111460907A (en)
Inventor
罗富
李浙伟
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010148608.4A priority Critical patent/CN111460907B/en
Publication of CN111460907A publication Critical patent/CN111460907A/en
Application granted granted Critical
Publication of CN111460907B publication Critical patent/CN111460907B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a malicious behavior identification method, system, and storage medium. The method comprises the following steps: parsing an acquired video stream to obtain video data and audio data; identifying the video data to obtain video features and comparing the video features with a malicious video feature library to obtain a video feature similarity; identifying the audio data to obtain audio features and comparing the audio features with a malicious audio feature library to obtain an audio feature similarity; and judging whether a malicious behavior exists according to the video feature similarity and the audio feature similarity. In this way, whether a malicious behavior exists can be determined.

Description

Malicious behavior identification method, system and storage medium
Technical Field
The present disclosure relates to the field of behavior recognition, and in particular, to a malicious behavior recognition method, system, and computer readable storage medium.
Background
In enclosed spaces where many people gather, malicious behavior is common. In prisons, canteens, or workshops, for example, quarrels, fights, and similar incidents often break out because of prior conflicts between individuals.
However, when such malicious acts occur, they are often not stopped in time, and the consequences worsen as a result.
Disclosure of Invention
The present application addresses the technical problem that malicious behavior cannot be stopped in time.
In order to solve the above technical problem, a first aspect of the present application provides a malicious behavior identification method, which includes: parsing an acquired video stream to obtain video data and audio data; identifying the video data to obtain video features and comparing the video features with a malicious video feature library to obtain a video feature similarity; identifying the audio data to obtain audio features and comparing the audio features with a malicious audio feature library to obtain an audio feature similarity; and judging whether a malicious behavior exists according to the video feature similarity and the audio feature similarity.
To solve the above technical problem, a second aspect of the present application provides a malicious behavior recognition system, which includes a processor and a memory coupled to the processor, where the memory stores program instructions that, when executed by the processor, implement the method provided in the first aspect of the present application.
To solve the above technical problem, a third aspect of the present application provides a storage medium storing program instructions that, when executed, implement the method provided in the first aspect of the present application.
According to the above scheme, the video stream is split into video data and audio data; the video data are compared with a malicious video feature library to obtain a video feature similarity, and the audio data are compared with a malicious audio feature library to obtain an audio feature similarity; whether a malicious behavior exists is then judged by combining the two similarities, so that the judgment result is more accurate.
Drawings
FIG. 1 is a flowchart illustrating an embodiment of a malicious behavior recognition method according to the present application;
FIG. 2 is a schematic diagram of a specific flow of S120 in FIG. 1;
FIG. 3 is a schematic diagram of a specific flow of S130 in FIG. 1;
FIG. 4 is a schematic diagram illustrating a malicious behavior recognition system according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a storage medium of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terms "first," "second," "third," and the like in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", and "a third" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments. It should be noted that, for the following method embodiments, if there are substantially the same results, the method of the present application is not limited to the illustrated flow sequence.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of a malicious behavior recognition method according to the present application. As shown in fig. 1, the present embodiment may include the following steps:
S110: Parse the acquired video stream to obtain video data and audio data.
The video stream can be captured by a camera and may include a video data portion and an audio data portion. It can be separated into video data and audio data by a video stream parsing program or the like.
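For illustration only (the patent does not prescribe a particular parsing program), the demultiplexing in S110 could be sketched in Python by invoking the ffmpeg command-line tool; the output file names and codec-copy options below are assumptions, not part of the disclosure:

    import subprocess

    def split_stream(stream_path: str) -> tuple[str, str]:
        """Demux an A/V container into a video-only file and an audio-only
        file. Assumes ffmpeg is on the system path and the input carries
        one video track and one audio track; codecs are copied as-is."""
        video_out, audio_out = "video_only.mp4", "audio_only.aac"
        # -an drops audio, -vn drops video; "-c:v copy"/"-c:a copy" avoid re-encoding
        subprocess.run(["ffmpeg", "-y", "-i", stream_path,
                        "-an", "-c:v", "copy", video_out], check=True)
        subprocess.run(["ffmpeg", "-y", "-i", stream_path,
                        "-vn", "-c:a", "copy", audio_out], check=True)
        return video_out, audio_out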
S120: Identify the video data to obtain video features and compare the video features with a malicious video feature library to obtain a video feature similarity; identify the audio data to obtain audio features and compare the audio features with a malicious audio feature library to obtain an audio feature similarity.
The malicious video feature library records video features of historically observed (or previously collected) malicious behaviors, and the video feature similarity is obtained by comparing the video features against this library; likewise, the malicious audio feature library records audio features of historical malicious behaviors, and the audio feature similarity is obtained by comparing the audio features against that library.
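As a minimal sketch of the library comparison (the patent does not fix a similarity metric, so cosine similarity over NumPy feature vectors is an assumption here):

    import numpy as np

    def library_similarity(feature: np.ndarray, library: np.ndarray) -> float:
        """Compare one feature vector against every entry of a feature
        library (one entry per row) and return the best cosine similarity."""
        feature = feature / np.linalg.norm(feature)
        library = library / np.linalg.norm(library, axis=1, keepdims=True)
        return float(np.max(library @ feature))

The same routine can serve both the video branch and the audio branch, each with its own library.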
S130: Judge whether a malicious behavior exists according to the video feature similarity and the audio feature similarity.
The video feature similarity and the audio feature similarity can each carry a corresponding weight; if their weighted sum reaches a preset threshold, it is judged that a malicious behavior exists.
Optionally, the weight of the video feature similarity is positively correlated with a quality index of the video data, and the weight of the audio feature similarity is positively correlated with a quality index of the audio data. The quality index of the video data can be its resolution, bit rate, and the like; the quality index of the audio data can be its sampling rate, bit rate, and the like. The quality indices may also account for the current environment of the camera collecting the audio and video; for example, an unstably mounted camera can degrade the quality of the collected audio and video.
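A hedged sketch of this fused decision follows; the linear mapping from quality index to weight and the threshold value are illustrative assumptions, since the patent only requires each weight to be positively correlated with the corresponding quality index:

    def is_malicious(video_sim: float, audio_sim: float,
                     video_quality: float, audio_quality: float,
                     threshold: float = 0.8) -> bool:
        """Weighted fusion of the two similarities. Quality indices are
        assumed to be normalised to [0, 1]."""
        w_video = 0.5 + 0.5 * video_quality  # assumed monotone mapping
        w_audio = 0.5 + 0.5 * audio_quality
        return w_video * video_sim + w_audio * audio_sim >= threshold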
After it is judged that a malicious behavior exists, an alarm can be triggered, and the video features and audio features associated with the malicious behavior can be automatically added to the malicious video feature library and the malicious audio feature library, respectively.
In this way, the video stream is split into video data and audio data; the video data are compared with the malicious video feature library to obtain the video feature similarity, and the audio data are compared with the malicious audio feature library to obtain the audio feature similarity; judging whether a malicious behavior exists by combining the two similarities makes the judgment result more accurate.
Referring to fig. 2, fig. 2 is a schematic flow chart of S120 in fig. 1. As shown in fig. 2, S120 may include the sub-steps of:
S221: Perform crowd feature recognition on the video data to obtain a first video feature, and compare the first video feature with a first malicious video feature library to obtain a first video feature similarity; perform individual feature recognition on the video data to obtain a second video feature, and compare the second video feature with a second malicious video feature library to obtain a second video feature similarity.
Optionally, the video features include a first video feature and a second video feature, and the malicious video feature library includes a first malicious video feature library and a second malicious video feature library. The first malicious video feature library records action features of people gathering toward a video sub-region (the video area can comprise a plurality of sub-regions, which may partially overlap), and the second malicious video feature library records the features of persons who have exhibited malicious behavior, together with the number of times each has done so.
In this embodiment, the crowd feature recognition may be performed on the video data first, and then the individual feature recognition may be performed on the crowd-gathered region obtained according to the crowd feature recognition. The specific implementation process is as follows:
Crowd feature recognition may also be referred to as crowd gathering recognition. Performing crowd feature recognition on the video data yields a first video feature that may include the actions of people in the video and the number of people in each sub-region. Sub-regions where the number of people is greater than or equal to a preset threshold can be designated as crowd gathering regions, and ultimately one or more such regions may be obtained. The actions of the people are compared with the first malicious video feature library to obtain the first video feature similarity.
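The sub-region counting could look like the following sketch; the person detector producing the bounding boxes, the box format, and the threshold value are all assumptions for illustration:

    def crowd_regions(person_boxes, sub_regions, min_people=5):
        """Count detected persons per sub-region (boxes and regions are
        (x1, y1, x2, y2) tuples; a person is assigned to a sub-region when
        the centre of its box falls inside) and keep the sub-regions whose
        count reaches the preset threshold."""
        gathered = []
        for rx1, ry1, rx2, ry2 in sub_regions:
            count = sum(1 for x1, y1, x2, y2 in person_boxes
                        if rx1 <= (x1 + x2) / 2 <= rx2
                        and ry1 <= (y1 + y2) / 2 <= ry2)
            if count >= min_people:
                gathered.append((rx1, ry1, rx2, ry2))
        return gathered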
Alternatively, the individual feature recognition of the video data may include individual feature recognition of the region of the video data corresponding to the first video feature. Specifically, the region subject to individual feature recognition is the region corresponding to the first video feature in the video data, namely a crowd gathering region. Individual feature recognition is performed on the crowd gathering region to obtain the features of each person in that region; the features of each person are then compared one by one with the malicious-behavior entries in the second malicious video feature library to obtain a plurality of similarities, the largest of which is taken as the target similarity between that person and the second malicious video feature library. For example, the features of a person in the first crowd gathering region are compared one by one with the features of n malicious actors in the second malicious video feature library to obtain n similarities; if the i-th similarity is the largest, it is the target similarity. The second video feature similarity may be the mean of the target similarities corresponding to the crowd gathering region, or the largest of the target similarities.
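The per-person target similarity and the two aggregation choices (mean or maximum) might be sketched as follows; cosine similarity is again an assumed metric:

    import numpy as np

    def second_video_similarity(person_feats, library_feats, use_max=False):
        """For each person in a crowd gathering region, take the largest
        similarity against the n library entries as that person's target
        similarity, then aggregate over the region by mean (or by max)."""
        lib = library_feats / np.linalg.norm(library_feats, axis=1,
                                             keepdims=True)
        targets = []
        for f in person_feats:
            f = f / np.linalg.norm(f)
            targets.append(float(np.max(lib @ f)))  # target similarity
        return max(targets) if use_max else float(np.mean(targets))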
Optionally, the greater the historical number of times the individuals in the individual feature recognition result have exhibited malicious behavior, the higher the second video feature similarity. Specifically, when the second video feature similarity is the mean of the target similarities, the historical number of malicious behaviors is the mean of the malicious-behavior counts recorded in the second malicious video feature library for the persons corresponding to each target similarity in the crowd gathering region; when the second video feature similarity is the largest of the target similarities, the historical number is the malicious-behavior count recorded in the second malicious video feature library for the person corresponding to that target similarity.
In addition, the second video feature similarity can be calculated according to the number of individuals with malicious behaviors in the individual feature recognition result: the more such individuals, the higher the second video feature similarity.
Of course, in a specific embodiment, individual feature recognition alone may be performed on the video data; the recognized area is then the whole video area, and each recognized personal feature is compared with the second malicious video feature library to obtain the second video feature similarity. The specific identification process is similar to the individual feature recognition of the crowd gathering region described above and is not repeated here.
S222: and calculating a weighted sum of the first video feature similarity and the second video feature similarity as the video feature similarity.
Optionally, the weight of the first video feature similarity is positively correlated with the quality index of the video data, and the weight of the second video feature similarity is 1. For example, the video feature similarity X = a₁X₁ + X₂, where X₁ is the first video feature similarity, a₁ is the weight of the first video feature similarity, and X₂ is the second video feature similarity.
When only individual feature recognition is performed on the video data, the resulting second video feature similarity is taken directly as the video feature similarity, i.e. X = X₂.
In this way, crowd feature recognition is performed on the video data to obtain the first video feature similarity, individual feature recognition is performed on the video data to obtain the second video feature similarity, and the video feature similarity is derived from both, so that the calculated video feature similarity is more accurate and the presence of malicious behavior can be judged more accurately.
Referring to fig. 3, fig. 3 is a schematic flow chart of S130 in fig. 1. As shown in fig. 3, S130 may include the sub-steps of:
S331: Adjust the weight of the video feature similarity according to the quality index of the video data, and adjust the weight of the audio feature similarity according to the quality index of the audio data.
The higher the quality index of the video data, the larger the weight of the video feature similarity; the higher the quality index of the audio data, the larger the weight of the audio feature similarity. The initial weights of the two similarities may be set manually, for example to 0.5 each.
S332: Obtain a malicious behavior similarity according to the video feature similarity and the audio feature similarity.
The malicious behavior similarity is a weighted sum of the video feature similarity and the audio feature similarity. For example, if the audio feature similarity is X with weight a, and the video feature similarity is Y with weight b, the malicious behavior similarity is aX + bY.
In combination with the foregoing embodiment, the video feature similarity may include a first video feature similarity and a second video feature similarity. When the video feature similarity Y comprises the first video feature similarity Y₁ and the second video feature similarity Y₂, with the weight of the first video feature similarity being b₁ and the weight of the second video feature similarity being 1, then Y = b₁Y₁ + Y₂ and the malicious behavior similarity is aX + b₁Y₁ + Y₂. The video feature similarity may also include only the second video feature similarity Y₂; in that case Y = Y₂ and the malicious behavior similarity is aX + Y₂.
S333: If the malicious behavior similarity is greater than or equal to a preset threshold, judge that a malicious behavior exists.
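Putting S331-S333 together, the fused score of this embodiment is aX + b₁Y₁ + Y₂; the sketch below follows the patent's notation, with the numeric values purely illustrative:

    def malicious_behavior_similarity(x, y1, y2, a, b1):
        """Fused score aX + b1*Y1 + Y2: X is the audio feature similarity,
        Y1/Y2 the first/second video feature similarities."""
        return a * x + b1 * y1 + y2

    score = malicious_behavior_similarity(x=0.7, y1=0.6, y2=0.8,
                                          a=0.5, b1=0.4)
    if score >= 1.2:  # preset threshold, assumed value
        print("Malicious behavior detected; trigger alarm")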
In this way, the weight of the video feature similarity can be adjusted according to the quality index of the video data, and the weight of the audio feature similarity according to the quality index of the audio data, further improving the accuracy of the calculated malicious behavior similarity.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of a malicious behavior recognition system according to the present application. As shown in fig. 4, the malicious behavior recognition system 400 includes a processor 410 and a memory 420 coupled to the processor 410, where the memory 420 stores program instructions for implementing the method of any of the embodiments described above, and the processor 410 is configured to execute those program instructions. The processor 410 may also be referred to as a CPU (Central Processing Unit). The processor 410 may be an integrated circuit chip having signal processing capabilities. The processor 410 may also be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a storage medium of the present application. The storage medium 500 of the embodiment of the present application stores program instructions that, when executed, implement the malicious behavior identification method of the present application. The instructions may form a program file stored in the storage medium in the form of a software product, so as to cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, as well as terminal devices such as computers, servers, mobile phones, and tablets.
It should be noted that all or part of the flow in the methods of the foregoing embodiments may be implemented by a computer program instructing related hardware. The computer program may be stored in a computer readable storage medium and, when executed, can carry out the method or steps of any of the embodiments described above. The computer program code may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer storage medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth.
The foregoing embodiments are merely examples of the present application and are not intended to limit its patent scope; all equivalent structural or flow changes made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included within the patent protection scope of the present application.

Claims (7)

1. A malicious behavior recognition method, comprising:
analyzing the acquired video stream to obtain video data and audio data;
identifying the video data to obtain video features, comparing the video features with a malicious video feature library to obtain video feature similarity, identifying the audio data to obtain audio features, and comparing the audio features with a malicious audio feature library to obtain audio feature similarity;
adjusting the weight of the video feature similarity according to the quality index of the video data, and adjusting the weight of the audio feature similarity according to the quality index of the audio data;
obtaining malicious behavior similarity according to the video feature similarity and the audio feature similarity, wherein the malicious behavior similarity is a weighted sum of the video feature similarity and the audio feature similarity;
and if the similarity of the malicious behaviors is greater than or equal to a preset threshold value, judging that the malicious behaviors exist.
2. The method according to claim 1, wherein
the weight of the video feature similarity is positively correlated with the quality index of the video data, and the weight of the audio feature similarity is positively correlated with the quality index of the audio data.
3. The method according to claim 1, wherein
the video features comprise a first video feature and a second video feature, the malicious video feature library comprises a first malicious video feature library and a second malicious video feature library, and identifying the video data to obtain the video features and comparing the video features with the malicious video feature library to obtain the video feature similarity comprises:
carrying out crowd feature recognition on the video data to obtain the first video feature, comparing the first video feature with the first malicious video feature library to obtain a first video feature similarity, carrying out individual feature recognition on the video data to obtain the second video feature, and comparing the second video feature with the second malicious video feature library to obtain a second video feature similarity;
and calculating a weighted sum of the first video feature similarity and the second video feature similarity as the video feature similarity.
4. The method according to claim 3, wherein
the individual feature identification of the video data comprises:
performing individual feature recognition on the region corresponding to the first video feature in the video data.
5. The method according to claim 3, wherein
the more the historical times of the malicious behaviors exist in the individual characteristic identification result, the higher the similarity of the second video characteristic is.
6. A malicious behavior recognition system, wherein the malicious behavior recognition system includes a processor, a memory coupled to the processor, wherein,
the memory stores program instructions for implementing the malicious behavior identification method according to any one of claims 1-5;
the processor is configured to execute the program instructions stored by the memory to implement malicious behavior identification.
7. A storage medium storing program instructions which, when executed, implement the steps of the method of any one of claims 1-5.
CN202010148608.4A 2020-03-05 2020-03-05 Malicious behavior identification method, system and storage medium Active CN111460907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010148608.4A CN111460907B (en) 2020-03-05 2020-03-05 Malicious behavior identification method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010148608.4A CN111460907B (en) 2020-03-05 2020-03-05 Malicious behavior identification method, system and storage medium

Publications (2)

Publication Number Publication Date
CN111460907A CN111460907A (en) 2020-07-28
CN111460907B true CN111460907B (en) 2023-06-20

Family

ID=71685551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010148608.4A Active CN111460907B (en) 2020-03-05 2020-03-05 Malicious behavior identification method, system and storage medium

Country Status (1)

Country Link
CN (1) CN111460907B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111933174A (en) * 2020-08-16 2020-11-13 云知声智能科技股份有限公司 Voice processing method, device, equipment and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6567775B1 (en) * 2000-04-26 2003-05-20 International Business Machines Corporation Fusion of audio and video based speaker identification for multimedia information access
CN102348101A (en) * 2010-07-30 2012-02-08 深圳市先进智能技术研究所 Examination room intelligence monitoring system and method thereof
CN108600254A (en) * 2018-05-07 2018-09-28 龚麟 A kind of audio and video identifying system
WO2019104890A1 (en) * 2017-12-01 2019-06-06 深圳壹账通智能科技有限公司 Fraud identification method and device combining audio analysis and video analysis and storage medium
CN110047513A (en) * 2019-04-28 2019-07-23 秒针信息技术有限公司 A kind of video monitoring method, device, electronic equipment and storage medium
WO2019222759A1 (en) * 2018-05-18 2019-11-21 Synaptics Incorporated Recurrent multimodal attention system based on expert gated networks
CN110837758A (en) * 2018-08-17 2020-02-25 杭州海康威视数字技术股份有限公司 Keyword input method and device and electronic equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005076594A1 (en) * 2004-02-06 2005-08-18 Agency For Science, Technology And Research Automatic video event detection and indexing
BRPI0621897B1 (en) * 2006-08-03 2018-03-20 International Business Machines Corporation “SURVEILLANCE SYSTEM USING VIDEO AND AUDIO RECOGNITION, SURVEILLANCE METHOD AND MACHINE-READY STORAGE DEVICE”
CN102098492A (en) * 2009-12-11 2011-06-15 上海弘视通信技术有限公司 Audio and video conjoint analysis-based fighting detection system and detection method thereof
CN103246869B (en) * 2013-04-19 2016-07-06 福建亿榕信息技术有限公司 Method is monitored in crime based on recognition of face and behavior speech recognition
CN110493615A (en) * 2018-05-15 2019-11-22 武汉斗鱼网络科技有限公司 A kind of live video monitoring method and electronic equipment
US11070588B2 (en) * 2018-06-11 2021-07-20 International Business Machines Corporation Cognitive malicious activity identification and handling
CN109087666A (en) * 2018-07-31 2018-12-25 厦门快商通信息技术有限公司 The identification device and method that prison is fought
CN110569720B (en) * 2019-07-31 2022-06-07 安徽四创电子股份有限公司 Audio and video intelligent identification processing method based on audio and video processing system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6567775B1 (en) * 2000-04-26 2003-05-20 International Business Machines Corporation Fusion of audio and video based speaker identification for multimedia information access
CN102348101A (en) * 2010-07-30 2012-02-08 深圳市先进智能技术研究所 Examination room intelligence monitoring system and method thereof
WO2019104890A1 (en) * 2017-12-01 2019-06-06 深圳壹账通智能科技有限公司 Fraud identification method and device combining audio analysis and video analysis and storage medium
CN108600254A (en) * 2018-05-07 2018-09-28 龚麟 A kind of audio and video identifying system
WO2019222759A1 (en) * 2018-05-18 2019-11-21 Synaptics Incorporated Recurrent multimodal attention system based on expert gated networks
CN110837758A (en) * 2018-08-17 2020-02-25 杭州海康威视数字技术股份有限公司 Keyword input method and device and electronic equipment
CN110047513A (en) * 2019-04-28 2019-07-23 秒针信息技术有限公司 A kind of video monitoring method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111460907A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN107818301B (en) Method and device for updating biological characteristic template and electronic equipment
US20070168337A1 (en) Apparatus and method for determining information retrieval applicability and generating best case for determination
CN110717358B (en) Visitor number counting method and device, electronic equipment and storage medium
CN110717509B (en) Data sample analysis method and device based on tree splitting algorithm
WO2020135756A1 (en) Video segment extraction method, apparatus and device, and computer-readable storage medium
US11870932B2 (en) Systems and methods of gateway detection in a telephone network
CN113269046B (en) High-altitude falling object identification method and system
CN111401238A (en) Method and device for detecting character close-up segments in video
CN111460907B (en) Malicious behavior identification method, system and storage medium
CN114445768A (en) Target identification method and device, electronic equipment and storage medium
CN110324352B (en) Method and device for identifying batch registered account groups
CN108288053B (en) Iris image processing method and device and computer readable storage medium
US20200311401A1 (en) Analyzing apparatus, control method, and program
JP2019211633A (en) Voice processing program, voice processing method and voice processing device
CN115374793B (en) Voice data processing method based on service scene recognition and related device
CN109587248B (en) User identification method, device, server and storage medium
CN113034771B (en) Gate passing method, device and equipment based on face recognition and computer storage medium
WO2022001245A1 (en) Method and apparatus for detecting plurality of types of sound events
US10885160B1 (en) User classification
CN110347918B (en) Data recommendation method and device based on user behavior data and computer equipment
CN103390404A (en) Information processing apparatus, information processing method and information processing program
CN111339829B (en) User identity authentication method, device, computer equipment and storage medium
CN113742730A (en) Malicious code detection method, system and computer readable storage medium
CN116156416A (en) Method and device for extracting liveplace based on signaling data
CN112417007A (en) Data analysis method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant