CN108230352A - Target object detection method, device, and electronic device - Google Patents

Target object detection method, device, and electronic device

Info

Publication number
CN108230352A
CN108230352A (application CN201710059806.1A)
Authority
CN
China
Prior art keywords
target object
video frame
result
motion state
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710059806.1A
Other languages
Chinese (zh)
Other versions
CN108230352B (en)
Inventor
余锋伟
李文博
闫俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201710059806.1A priority Critical patent/CN108230352B/en
Publication of CN108230352A publication Critical patent/CN108230352A/en
Application granted granted Critical
Publication of CN108230352B publication Critical patent/CN108230352B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a target object detection method, device, and electronic device. The detection method includes: according to feature points of at least one target object in a first video frame, predicting the motion state of the target object in a second video frame using a first neural network to obtain a motion state prediction result, where the first video frame is the current video frame and the second video frame is a subsequent video frame; according to the feature points of the target object, predicting the position of the target object in the second video frame using a second neural network to obtain a position prediction result; matching the position prediction result against a position detection result; and determining the motion state of the target object according to the matching result and the motion state prediction result. The embodiments of the present invention can reliably determine whether a target object has disappeared from a surveillance video, reduce target-tracking errors, and improve the accuracy of image detection.

Description

Target object detection method, device, and electronic device
Technical field
Embodiments of the present invention relate to the field of artificial intelligence, and in particular to a target object detection method, device, and electronic device.
Background technology
Image detection is the process of extracting and detecting regions of interest or detection targets in an image. In recent years, with the development of multi-target tracking technology, multi-target tracking has been applied to more and more scenes, such as video surveillance and live video streaming, to perform image detection and subsequent image processing.
Multi-target tracking is a technique that uses a computer to determine, in a video sequence, the position, size, and complete motion trajectory of each independently moving target of interest that has salient visual features. The key to traditional multi-target tracking is effective data association: pairing measurement data from one or more sensors with known or established trajectories. However, this data association approach lacks a fine-grained judgment mechanism and often cannot accurately determine whether a target object has been lost from the video, which leads to target-tracking errors and reduces the accuracy of image detection.
Invention content
Embodiments of the present invention provide a target object detection scheme.
According to one aspect of the embodiments of the present invention, a target object detection method is provided, including: according to feature points of at least one target object in a first video frame, predicting the motion state of the target object in a second video frame using a first neural network to obtain a motion state prediction result, where the first video frame is the current video frame and the second video frame is a subsequent video frame; according to the feature points of the target object, predicting the position of the target object in the second video frame using a second neural network to obtain a position prediction result; matching the position prediction result against the position detection result of the corresponding target object in the second video frame; and determining the motion state of the target object according to the matching result and the motion state prediction result.
Optionally, by the position detection knot of the position prediction result target object corresponding with the second video frame Fruit is matched, including:By the position of the position prediction result target object corresponding with second video frame Testing result carries out the association in external performance space, determines that the position prediction result is regarded with described second according to association results The matching result between position detection result in frequency frame.
Optionally, associating the position prediction result with the position detection result of the corresponding target object in the second video frame in appearance space includes: determining the difference between the position prediction result and the position detection result, and performing the association in appearance space according to the difference.
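The difference-based association described above can be sketched as a one-to-one matching of predicted and detected positions. The sketch below is only an illustration under our own assumptions — the patent does not specify the distance metric, the rejection threshold, or a greedy strategy; here we assume Euclidean distance between position centers and a fixed `max_dist` cutoff.

```python
import math

def associate(predictions, detections, max_dist=50.0):
    """Greedy 1-to-1 association of predicted and detected positions by
    Euclidean distance; pairs farther apart than max_dist are rejected."""
    pairs = sorted(
        ((math.dist(p, d), i, j)
         for i, p in enumerate(predictions)
         for j, d in enumerate(detections)),
        key=lambda t: t[0])
    matches, used_p, used_d = [], set(), set()
    for dist, i, j in pairs:
        if dist > max_dist:
            break  # remaining pairs are even farther apart
        if i in used_p or j in used_d:
            continue
        matches.append((i, j))
        used_p.add(i)
        used_d.add(j)
    unmatched_p = [i for i in range(len(predictions)) if i not in used_p]
    unmatched_d = [j for j in range(len(detections)) if j not in used_d]
    return matches, unmatched_p, unmatched_d
```

An unmatched prediction corresponds to "no association exists" in the claims, which is what triggers the lost-state handling described later.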
Optionally, the motion state of the target object includes at least one of: a tracked state, a short-term lost state, a long-term lost state, and a disappeared state. The tracked state indicates that an association exists between the position prediction result of the target object and the position detection result in the corresponding video frame; the short-term lost state indicates that no association exists between the prediction result of the target object and the position detection result in the corresponding video frame; the long-term lost state indicates that, over a video frame sequence of a first set quantity, none of the position prediction results of the target object is associated with the corresponding position detection results; the disappeared state indicates that, over a video frame sequence of a second set quantity, none of the position prediction results of the target object is associated with the corresponding position detection results. The first set quantity is smaller than the second set quantity.
Optionally, the motion state further includes a newly-appeared state, which indicates that the target object appears in the first video frame for the first time.
Optionally, determining the motion state of the target object according to the matching result and the motion state prediction result includes: in response to the matching result indicating that an association exists between the position prediction result and the position detection result, marking the motion state of the target object as the tracked state.
Optionally, determining the motion state of the target object according to the matching result and the motion state prediction result further includes: in response to the matching result indicating that no association exists between the position prediction result and the position detection result, obtaining the motion state prediction result of the target object.
Optionally, after obtaining the motion state prediction result of the target object, the method further includes: in response to the motion state prediction result indicating that the target object is in the tracked state, marking the motion state of the target object as the short-term lost state.
Optionally, after obtaining the motion state prediction result of the target object, the method further includes: in response to the motion state prediction result indicating that the target object is in the short-term lost state, judging whether the target object has been consecutively marked as short-term lost N-1 times; if so, marking the motion state of the target object as the long-term lost state; if not, keeping the motion state of the target object marked as the short-term lost state, where N is the first set quantity.
Optionally, the method further includes: in response to the motion state prediction result indicating that the target object is in the long-term lost state, judging whether the target object has been consecutively marked as long-term lost M-1 times; if so, marking the motion state of the target object as the disappeared state; if not, keeping the motion state of the target object marked as the long-term lost state, where M is the second set quantity.
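The state transitions above amount to a small per-track state machine driven by a consecutive-miss counter. The following is one illustrative reading of the claimed logic, not the patent's reference implementation: the state names, the counter handling, and the default N and M values are our assumptions.

```python
from enum import Enum, auto

class State(Enum):
    TRACKED = auto()
    SHORT_TERM_LOST = auto()
    LONG_TERM_LOST = auto()
    DISAPPEARED = auto()

def update_state(state, miss_count, matched, n=5, m=30):
    """One per-frame update. `miss_count` counts consecutive unmatched
    frames; n and m are the first and second set quantities (n < m)."""
    if matched:
        return State.TRACKED, 0  # any association resets the track
    miss_count += 1
    if state == State.TRACKED:
        return State.SHORT_TERM_LOST, miss_count
    if state == State.SHORT_TERM_LOST:
        # After N-1 further consecutive short-term-lost markings,
        # promote to long-term lost.
        if miss_count >= n:
            return State.LONG_TERM_LOST, miss_count
        return State.SHORT_TERM_LOST, miss_count
    if state == State.LONG_TERM_LOST:
        # After M-1 further consecutive long-term-lost markings,
        # declare the target disappeared.
        if miss_count >= m:
            return State.DISAPPEARED, miss_count
        return State.LONG_TERM_LOST, miss_count
    return state, miss_count
```

Running the machine over a run of unmatched frames shows the claimed escalation: tracked, then short-term lost, then long-term lost after N misses, then disappeared after M misses.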
Optionally, matching the position prediction result against the position detection result of the corresponding target object in the second video frame includes: matching the position prediction result against the position detection result of the corresponding target object in the second video frame as detected by an object detector.
Optionally, after determining the motion state of the target object according to the matching result and the motion state prediction result, the method further includes: recognizing the action of the target object according to the motion state.
Optionally, after determining the motion state of the target object according to the matching result and the motion state prediction result, the method further includes: counting the target objects according to the motion state.
Optionally, after determining the motion state of the target object according to the matching result and the motion state prediction result, the method further includes: counting the target objects according to the motion state, and performing target object flow analysis according to the counting result.
Optionally, after determining the motion state of the target object according to the matching result and the motion state prediction result, the method further includes: detecting an abnormal target object according to the motion state, and raising an alarm for the abnormal target object.
Optionally, after determining the motion state of the target object according to the matching result and the motion state prediction result, the method further includes: performing information recommendation for the target object according to the motion state.
Optionally, the first neural network is a recurrent neural network (RNN), and/or the second neural network is a recurrent neural network (RNN).
According to another aspect of the embodiments of the present invention, a target object detection device is also provided, including: a prediction module, configured to predict, according to feature points of at least one target object in a first video frame, the motion state of the target object in a second video frame using a first neural network to obtain a motion state prediction result, where the first video frame is the current video frame and the second video frame is a subsequent video frame, and to predict, according to the feature points of the target object, the position of the target object in the second video frame using a second neural network to obtain a position prediction result; a matching module, configured to match the position prediction result against the position detection result of the corresponding target object in the second video frame; and a determining module, configured to determine the motion state of the target object according to the matching result and the motion state prediction result.
Optionally, the matching module is configured to associate the position prediction result with the position detection result of the corresponding target object in the second video frame in appearance space, and determine, according to the association result, the matching result between the position prediction result and the position detection result in the second video frame.
Optionally, the matching module is configured to determine the difference between the position prediction result and the position detection result, perform the association in appearance space according to the difference, and determine, according to the association result, the matching result between the position prediction result and the position detection result in the second video frame.
Optionally, the motion state of the target object includes at least one of: a tracked state, a short-term lost state, a long-term lost state, and a disappeared state. The tracked state indicates that an association exists between the position prediction result of the target object and the position detection result in the corresponding video frame; the short-term lost state indicates that no association exists between the prediction result of the target object and the position detection result in the corresponding video frame; the long-term lost state indicates that, over a video frame sequence of a first set quantity, none of the position prediction results of the target object is associated with the corresponding position detection results; the disappeared state indicates that, over a video frame sequence of a second set quantity, none of the position prediction results of the target object is associated with the corresponding position detection results. The first set quantity is smaller than the second set quantity.
Optionally, the motion state further includes a newly-appeared state, which indicates that the target object appears in the first video frame for the first time.
Optionally, the determining module includes: an association submodule, configured to mark the motion state of the target object as the tracked state in response to the matching result indicating that an association exists between the position prediction result and the position detection result.
Optionally, the determining module further includes: a non-association submodule, configured to obtain the motion state prediction result of the target object in response to the matching result indicating that no association exists between the position prediction result and the position detection result.
Optionally, the non-association submodule is further configured to, after obtaining the motion state prediction result of the target object, mark the motion state of the target object as the short-term lost state in response to the motion state prediction result indicating that the target object is in the tracked state.
Optionally, the non-association submodule is further configured to, in response to the motion state prediction result indicating that the target object is in the short-term lost state, judge whether the target object has been consecutively marked as short-term lost N-1 times; if so, mark the motion state of the target object as the long-term lost state; if not, keep the motion state of the target object marked as the short-term lost state, where N is the first set quantity.
Optionally, the non-association submodule is further configured to, in response to the motion state prediction result indicating that the target object is in the long-term lost state, judge whether the target object has been consecutively marked as long-term lost M-1 times; if so, mark the motion state of the target object as the disappeared state; if not, keep the motion state of the target object marked as the long-term lost state, where M is the second set quantity.
Optionally, the matching module is configured to match the position prediction result against the position detection result of the corresponding target object in the second video frame as detected by an object detector.
Optionally, the device further includes: a first operation module, configured to recognize the action of the target object according to the motion state after the determining module determines the motion state of the target object according to the matching result and the motion state prediction result.
Optionally, the device further includes: a second operation module, configured to count the target objects according to the motion state after the determining module determines the motion state of the target object according to the matching result and the motion state prediction result.
Optionally, the device further includes: a third operation module, configured to count the target objects according to the motion state after the determining module determines the motion state of the target object according to the matching result and the motion state prediction result, and perform target object flow analysis according to the counting result.
Optionally, the device further includes: a fourth operation module, configured to detect an abnormal target object according to the motion state after the determining module determines the motion state of the target object according to the matching result and the motion state prediction result, and raise an alarm for the abnormal target object.
Optionally, the device further includes: a fifth operation module, configured to perform information recommendation for the target object according to the motion state after the determining module determines the motion state of the target object according to the matching result and the motion state prediction result.
Optionally, the first neural network is a recurrent neural network (RNN), and/or the second neural network is a recurrent neural network (RNN).
According to yet another aspect of the embodiments of the present invention, an electronic device is also provided, including: a processor, a memory, a communication element, and a communication bus, where the processor, the memory, and the communication element communicate with one another through the communication bus; the memory stores at least one executable instruction that causes the processor to perform the target object detection method according to any one of the foregoing.
According to yet another aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, storing: executable instructions for predicting, according to feature points of at least one target object in a first video frame, the motion state of the target object in a second video frame using a first neural network to obtain a motion state prediction result, where the first video frame is the current video frame and the second video frame is a subsequent video frame; executable instructions for predicting, according to the feature points of the target object, the position of the target object in the second video frame using a second neural network to obtain a position prediction result; executable instructions for matching the position prediction result against the position detection result of the corresponding target object in the second video frame; and executable instructions for determining the motion state of the target object according to the matching result and the motion state prediction result.
According to the target object detection scheme provided by the embodiments of the present invention, based on the image feature points of a target object in the current video frame, a first neural network and a second neural network respectively predict the motion state and the position of the target object in a subsequent video frame, such as the next video frame. The actual motion state of the target object in the subsequent video frame is then determined from the comparison of the position prediction result with the position detection result of the target object in the subsequent video frame, together with the motion state prediction result; this actual motion state characterizes the detection result for the target object in the video frame. In the scheme provided by the embodiments of the present invention, the motion state of the target object is expressed as a state based on image feature points. Because image feature points differ greatly between different target objects, if a target object disappears from the video, its feature-point-based predicted motion state is unlikely to resemble the motion states of other target objects. The scheme is therefore more sensitive to the disappearance of detected target objects in the video, can reliably determine whether a target object has disappeared from a surveillance video, reduces target-tracking errors, and improves the accuracy of image detection.
Description of the drawings
Fig. 1 is a flowchart of the steps of a target object detection method according to Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the steps of a target object detection method according to Embodiment 2 of the present invention;
Fig. 3 is a structural block diagram of a target object detection device according to Embodiment 3 of the present invention;
Fig. 4 is a structural block diagram of a target object detection device according to Embodiment 4 of the present invention;
Fig. 5 is a structural block diagram of an electronic device according to Embodiment 5 of the present invention.
Specific embodiment
Specific implementations of the embodiments of the present invention are described in further detail below with reference to the accompanying drawings (the same reference numerals denote the same elements across the several drawings) and the embodiments. The following embodiments are intended to illustrate the present invention, not to limit its scope.
Those skilled in the art will understand that terms such as "first" and "second" in the embodiments of the present invention are only used to distinguish different steps, devices, or modules; they denote neither any particular technical meaning nor any necessary logical order between them.
Embodiment one
Referring to Fig. 1, a flowchart of the steps of a target object detection method according to Embodiment 1 of the present invention is shown.
The target object detection method of this embodiment includes the following steps:
Step S102: According to the feature points of at least one target object in a first video frame, predict the motion state of the target object in a second video frame using a first neural network, and obtain a motion state prediction result.
The first video frame is the current video frame, and the second video frame is a subsequent video frame, which may be the next video frame of the current video frame, or a later video frame that is not adjacent to the current video frame. Predicting the next video frame adjacent to the current video frame enables frame-by-frame prediction; predicting a subsequent video frame not adjacent to the current video frame enables non-frame-by-frame prediction.
In the embodiments of the present invention, the neural network used for motion state prediction and position prediction may be a CNN (Convolutional Neural Network) or an RNN (Recurrent Neural Network) that has a motion state prediction function and/or a position prediction function. An RNN is an artificial neural network whose nodes are connected in directed cycles; its internal state can exhibit dynamic temporal behavior. Its essential characteristic is that the processing units contain both internal feedback connections and feedforward connections. From a systems point of view, it is a feedback dynamical system that embodies the dynamics of a process in its computation, and it has stronger dynamic behavior and computing capability than a feedforward neural network. It can therefore be applied to the motion state prediction and position prediction of target objects in the embodiments of the present invention.
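To make the recurrent structure described above concrete, here is a minimal vanilla RNN cell in plain Python. The weight layout and the tanh nonlinearity are generic textbook choices, not taken from the patent; the point is that the hidden state fed back at each step is what lets the network accumulate a temporal summary of a track.

```python
import math

def rnn_step(x, h, w_xh, w_hh, b_h):
    """One step of a vanilla RNN cell: h' = tanh(W_xh x + W_hh h + b).
    x and h are vectors; w_xh and w_hh are matrices as lists of rows."""
    def matvec(m, v):
        return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]
    pre = [a + b + c
           for a, b, c in zip(matvec(w_xh, x), matvec(w_hh, h), b_h)]
    return [math.tanh(p) for p in pre]

def rnn_forward(xs, hidden_size, w_xh, w_hh, b_h):
    """Run the cell over a sequence of inputs, returning the final
    hidden state (the network's summary of the sequence so far)."""
    h = [0.0] * hidden_size
    for x in xs:
        h = rnn_step(x, h, w_xh, w_hh, b_h)
    return h
```

In practice one would use a deep-learning framework's RNN layers rather than hand-rolled loops, but the feedback connection is the same.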
Step S104: According to the feature points of the target object, predict the position of the target object in the second video frame using a second neural network, and obtain a position prediction result.
In the embodiments of the present invention, the motion state and the position of the target object in a subsequent video frame are predicted based on the feature points of the target object in the current video frame.
The prediction of the motion state of the target object by the first neural network and the prediction of the position of the target object by the second neural network may be performed in any order, or in parallel.
The first neural network and the second neural network are trained neural networks: the first neural network has the function of predicting the motion state of a target object, and the second neural network has the function of predicting the position of a target object. Any suitable training method may be used; the embodiments of the present invention do not restrict the training method of the neural networks. For example, the training samples of the first neural network are sequences of motion states, and the training samples of the second neural network are sequences of target positions. During training, the per-frame input of each network is the training sample's data at the current frame, and the per-frame output is the prediction for the next frame or another subsequent frame. At test time, the per-frame input of the first neural network is the motion state of the target object in the previous frame, and its output is the predicted motion state in the next frame or another subsequent frame; the per-frame input of the second neural network is the position of the target object in the previous frame, and its output is the predicted position of the target object in the next frame or another subsequent frame.
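The one-step-ahead supervision described here can be illustrated by how training pairs are built from a track: the value at frame t is the input and the value at frame t+1 is the target. The toy sketch below fits a one-dimensional linear next-position predictor by SGD; the linear model, learning rate, and epoch count are placeholders standing in for the patent's neural networks, not part of the patent.

```python
def make_pairs(track):
    """Turn a per-frame sequence into (input_t, target_{t+1}) pairs,
    matching the next-frame supervision described above."""
    return list(zip(track[:-1], track[1:]))

def fit_next_position(tracks, lr=0.01, epochs=1000):
    """Fit x_{t+1} ~ a * x_t + b by stochastic gradient descent on the
    squared one-step prediction error."""
    a, b = 1.0, 0.0
    for _ in range(epochs):
        for track in tracks:
            for x_t, x_next in make_pairs(track):
                err = (a * x_t + b) - x_next
                a -= lr * err * x_t
                b -= lr * err
    return a, b
```

On tracks that move one unit per frame, the fit recovers a constant-velocity model (a near 1, b near 1), which is exactly the kind of regularity a trained position predictor exploits.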
Step S106: Match the position prediction result against the position detection result of the corresponding target object in the second video frame.
The position detection result of the target object in the second video frame may be obtained in any appropriate way, including but not limited to a convolutional neural network (CNN) or an object detector.
Step S108: Determine the motion state of the target object according to the matching result and the motion state prediction result.
The prediction of the position of the target object in the second video frame, based on the feature points of the target object in the first video frame, may or may not match the actual position detection result. The actual motion state of the target object therefore needs to be determined according to the matching result, and this actual motion state characterizes the detection result for the target object.
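Putting the steps above together, one frame of processing could look like the following sketch. The `predict_state`, `predict_position`, `match`, and `resolve_state` callables are placeholders for the two neural networks and the matching and decision logic; their concrete forms are not specified here beyond what the description says.

```python
def detect_step(track, detections, predict_state, predict_position,
                match, resolve_state):
    """One frame of the detection loop:
    S102: predict the motion state from the track so far,
    S104: predict the position in the next frame,
    S106: match the predicted position against detections,
    S108: resolve the final motion state from both results."""
    state_pred = predict_state(track)          # S102
    pos_pred = predict_position(track)         # S104
    matched = match(pos_pred, detections)      # S106
    return resolve_state(matched, state_pred)  # S108
```

Because the four components are injected, the same loop works whether the predictors are RNNs, CNNs, or simple motion models.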
According to the target object detection method provided in this embodiment, based on the feature points of a target object in the current video frame, a first neural network and a second neural network respectively predict the motion state and the position of the target object in a subsequent video frame, such as the next video frame. The actual motion state of the target object in the subsequent video frame is then determined from the comparison of the position prediction result with the position detection result of the target object in the subsequent video frame, together with the motion state prediction result; this actual motion state characterizes the detection result for the target object in the video frame. In the detection method provided in this embodiment, the motion state of the target object is expressed as a feature-point-based state. Because feature points differ greatly between different target objects, if a target object disappears from the video, its feature-point-based predicted motion state is unlikely to resemble the motion states of other target objects. The method is therefore more sensitive to the disappearance of detected target objects in the video, can reliably determine whether a target object has disappeared from a surveillance video, reduces target-tracking errors, and improves the accuracy of image detection.
It should be noted that the target object detection scheme of the embodiments of the present invention can be applied to single-target detection to improve detection accuracy, and can also be applied to multi-target detection to reduce the miss rate and error rate of multi-target detection.
The target object detection method of this embodiment may be performed by any suitable device with data processing capability, including but not limited to mobile terminals, PCs, servers, and other electronic devices with data processing capability.
Embodiment two
Referring to Fig. 2, a flowchart of the steps of a target object detection method according to Embodiment 2 of the present invention is shown.
In this embodiment, the target object detection scheme of the present invention is described taking as an example the case where the first and second neural networks are RNNs and the subsequent video frame is the next video frame of the current video frame. However, those skilled in the art should understand that, with reference to this embodiment, the target object detection scheme of the present invention can also be implemented for other subsequent video frames that are not adjacent to the current video frame.
Detection method includes the following steps for the target object of the present embodiment:
Step S202: Obtain the current video frame to be detected.

Step S204: Obtain the feature points of the target object in the current video frame.

The target object can be any suitable movable object, including but not limited to: persons, animals, etc. There may be one target object, or there may be multiple target objects.

Any suitable method may be used to obtain the feature points of the target object, including but not limited to: obtaining them through a CNN with a feature point extraction function, or through a suitable object detector (FeatureDetector), etc. The feature points of the target object can be obtained simply and effectively through an object detector.

Step S206: Based on the feature points of the target object in the current video frame, predict the motion state of the target object in the second video frame using the first RNN to obtain a motion state prediction result, and predict the position of the target object in the second video frame using the second RNN to obtain a position prediction result.
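To make the two-branch prediction concrete, the following is a minimal NumPy sketch of one prediction step with two independent Elman-style RNNs, one producing motion-state logits and one producing a predicted box. All dimensions, the four-state output head, and the random weights are illustrative assumptions; the patent does not specify the network architecture or sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID = 8, 16  # assumed sizes: 4 feature points (x, y) in, 16 hidden units

def make_rnn():
    # Random weights stand in for trained parameters.
    return {"Wx": rng.normal(scale=0.1, size=(D_IN, D_HID)),
            "Wh": rng.normal(scale=0.1, size=(D_HID, D_HID)),
            "b": np.zeros(D_HID)}

def rnn_step(x, h, p):
    """One Elman-style recurrent step: new hidden state from input x and previous h."""
    return np.tanh(x @ p["Wx"] + h @ p["Wh"] + p["b"])

rnn1, rnn2 = make_rnn(), make_rnn()               # two independent RNNs
W_state = rng.normal(scale=0.1, size=(D_HID, 4))  # logits over 4 motion states
W_pos = rng.normal(scale=0.1, size=(D_HID, 4))    # predicted box (x, y, w, h)

h1 = h2 = np.zeros(D_HID)
keypoints = rng.normal(size=D_IN)                 # feature points in the current frame

h1 = rnn_step(keypoints, h1, rnn1)
state_logits = h1 @ W_state                       # motion state prediction result
h2 = rnn_step(keypoints, h2, rnn2)
predicted_box = h2 @ W_pos                        # position prediction result
```

In a trained system the hidden states h1 and h2 would be carried from frame to frame per object, so that each RNN accumulates the object's history rather than seeing a single frame in isolation.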
In this embodiment, the motion state of the target object includes: a tracked state, a transient loss state, a long-term lost state, and a vanished state. The tracked state indicates that the position prediction result of the target object is associated with the position detection result in the corresponding video frame; for example, the position prediction result of the target object in the next video frame is associated with the position detection result of the target object in the next video frame. The transient loss state indicates that the prediction result of the target object is not associated with the position detection result in the corresponding video frame; for example, the position prediction result of the target object in the next video frame has no association with the position detection result of the target object in the next video frame. The long-term lost state indicates that, over a sequence of a first set number of video frames, none of the position prediction results of the target object is associated with the corresponding position detection results; for example, the position prediction results of the target object in frames A1, A2, A3, predicted in turn by the second RNN, have no association with the position detection results of the target object in A1, A2, A3. The vanished state indicates that, over a sequence of a second set number of video frames, none of the position prediction results is associated with the corresponding position detection results; for example, the position prediction results of the target object in frames A1, A2, ..., A10, predicted in turn by the second RNN, have no association with the position detection results of the target object in A1, A2, ..., A10. The first set number is smaller than the second set number.

Optionally, the motion state further includes an appearing state, which indicates that the target object appears for the first time in the first video frame, so as to facilitate state differentiation and labeling.

In this embodiment, any suitable association algorithm may be used to determine whether two objects are associated. The input of the association algorithm is a similarity matrix, and the output is the association result. The similarity matrix contains the degree of similarity between the target objects in the first video frame and the target objects in the second video frame, which is generally measured by differences in information such as position information and appearance information. However, the method is not limited thereto; in practical applications, other algorithms may also be used to determine whether two objects are associated, such as bipartite graph matching algorithms, k-nearest-neighbor algorithms, etc.
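As an illustration of the association step, the sketch below performs exhaustive bipartite assignment on a small similarity matrix and keeps only the pairs whose similarity clears a threshold. The matrix values and the threshold are made up for the example, and exhaustive search is only viable for tiny inputs; a real implementation would use a dedicated bipartite matching algorithm such as the Hungarian method, as the text notes.

```python
from itertools import permutations

def associate(similarity, threshold=0.5):
    """Exhaustive bipartite assignment on a small similarity matrix.

    Rows are objects in the first frame, columns are objects in the second
    frame. Returns the (row, col) pairs of the maximum-total-similarity
    assignment, keeping only pairs at or above the threshold.
    """
    n_rows, n_cols = len(similarity), len(similarity[0])
    assert n_rows <= n_cols, "illustrative version: no more predictions than detections"
    best_score, best_cols = float("-inf"), None
    for cols in permutations(range(n_cols), n_rows):
        score = sum(similarity[r][c] for r, c in enumerate(cols))
        if score > best_score:
            best_score, best_cols = score, cols
    return [(r, c) for r, c in enumerate(best_cols)
            if similarity[r][c] >= threshold]

# Two predicted objects vs. three detections; object 1 matches nothing well.
sim = [[0.9, 0.2, 0.1],
       [0.3, 0.4, 0.2]]
pairs = associate(sim)  # → [(0, 0)]: only object 0 is associated
```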
Step S208: Match the position prediction result with the position detection result of the corresponding target object in the second video frame.

In one feasible mode, the matching of the position prediction result and the position detection result may be implemented by determining whether an association exists between the two. In this mode, the position prediction result may be associated, in an appearance feature space, with the position detection result of the corresponding target object in the second video frame, and the matching result between the position prediction result and the position detection result in the second video frame is determined according to the association result. Here, the appearance is the visual feature information of the object within the detection box; a simple example uses a color histogram of the image to express the feature, while a more complex example uses image features extracted by a higher-precision convolutional neural network, etc. Of course, in practical applications, other ways of obtaining appearance features are equally applicable. The appearance feature space contains the appearance information of the target object; in this embodiment, the appearance information in the appearance feature space includes the position information of the target object. The association in the appearance feature space may adopt the association manner described in step S206: the difference between the position prediction result and the position detection result is determined, and the association in the appearance feature space is performed according to the difference; details are not repeated here.

The position detection result of the target object in the second video frame can be obtained in any suitable way, including but not limited to: through a trained CNN with a position acquisition function, or through an object detector, etc.
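As a minimal example of the "simple" appearance feature mentioned above, the sketch below compares two image patches by the intersection of their normalized per-channel color histograms; identical patches score 1.0. The patch sizes, bin count, and random test patches are illustrative assumptions, not values from the patent.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Normalized per-channel intensity histogram of an (H, W, 3) image patch."""
    hist = np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)]).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Appearance similarity in [0, 1]; 1.0 means identical histograms."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(1)
patch_a = rng.integers(0, 256, size=(16, 16, 3))
patch_b = patch_a.copy()                          # same appearance
patch_c = rng.integers(0, 256, size=(16, 16, 3))  # different appearance

same = histogram_intersection(color_histogram(patch_a), color_histogram(patch_b))
diff = histogram_intersection(color_histogram(patch_a), color_histogram(patch_c))
```

Such similarities would fill one entry of the similarity matrix for each prediction/detection pair before association is performed.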
Step S210: Determine the motion state of the target object according to the matching result and the motion state prediction result.

For example, if the matching result indicates that the position prediction result is associated with the position detection result, then in response to the matching result indicating that the position prediction result is associated with the position detection result, the motion state of the target object is marked as the tracked state.

As another example, if the matching result indicates that there is no association between the position prediction result and the position detection result, then in response to the matching result indicating that no association exists, the motion state prediction result of the target object is obtained. If the motion state prediction result indicates that the target object is in the tracked state, then in response thereto the motion state of the target object is marked as the transient loss state. If the motion state prediction result indicates that the target object is in the transient loss state, it is judged whether the number of times the target object has been consecutively marked as being in the transient loss state has reached N-1; if so, the motion state of the target object is marked as the long-term lost state; if not, the motion state of the target object remains marked as the transient loss state; here N represents the first set number. If the motion state prediction result indicates that the target object is in the long-term lost state, it is judged whether the number of times the target object has been consecutively marked as being in the long-term lost state has reached M-1; if so, the motion state of the target object is marked as the vanished state; if not, the motion state of the target object remains marked as the long-term lost state; here M represents the second set number. N and M are integers, N < M, N is greater than or equal to 3, and M is greater than or equal to 4. In a preferred embodiment, N is 3 and M is 10.
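The marking logic of step S210 amounts to a small state machine per object. The sketch below implements one reading of that logic with the preferred values N = 3 and M = 10; the state names and the exact counter handling are assumptions made for the example, since the patent text leaves some room for interpretation.

```python
TRACKED, TRANSIENT, LONG_LOST, VANISHED = (
    "tracked", "transient_loss", "long_lost", "vanished")
N, M = 3, 10  # the first and second set numbers from the preferred embodiment

class MotionStateMarker:
    """One reading of the step S210 marking logic for a single target object."""

    def __init__(self):
        self.state = TRACKED
        self.consecutive = 0  # consecutive marks in the current lost state

    def update(self, associated):
        """Update the marked state given whether the position prediction was
        associated with a position detection in this frame."""
        if associated:                        # prediction matches a detection
            self.state, self.consecutive = TRACKED, 0
        elif self.state == TRACKED:           # first unassociated frame
            self.state, self.consecutive = TRANSIENT, 1
        elif self.state == TRANSIENT:
            if self.consecutive >= N - 1:     # N-1 consecutive transient marks already
                self.state, self.consecutive = LONG_LOST, 1
            else:
                self.consecutive += 1
        elif self.state == LONG_LOST:
            if self.consecutive >= M - 1:     # M-1 consecutive long-lost marks already
                self.state = VANISHED
            else:
                self.consecutive += 1
        return self.state
```

Under this reading, an object passes through the transient loss state, then the long-term lost state, and is finally marked vanished only after a sustained run of unassociated frames; a single re-association at any point resets it to the tracked state.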
Step S212: Determine the operation on the target object according to the motion state of the target object.

The operation on the target object includes but is not limited to at least one of the following:

Operation one: Determine the detection result of the target object according to the motion state of the target object.

If the motion state of the target object has been determined, the detection result of the target object can be determined according to that motion state.

For example, if the motion state of the target object is the tracked state, it can be determined that the target object to be detected appears in two consecutive video frames and has not disappeared; if the motion state of the target object is the transient loss state, it can be determined that the target object to be detected has a transient disappearance within the consecutive video frame sequence; if the motion state of the target object is the long-term lost state, it can be determined that the target object to be detected has a disappearance of longer duration within the consecutive video frame sequence; if the motion state of the target object is the vanished state, it can be determined that the target object to be detected has completely disappeared from the consecutive video frame sequence.
Operation two: Identify the action of the target object according to the motion state of the target object.

For example, if the motion state of the target object is continuously in the tracked state, the basic motion trajectory of the target object can be further obtained in an appropriate way according to its motion state, and the action of the target object can be identified.
Operation three: Count target objects according to the motion state of the target objects.

For example, when there are multiple target objects, the target objects can be counted according to their motion states, such as counting the target objects in the tracked state, or counting the target objects in the transient loss state or the long-term lost state, etc.
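Counting by motion state, as in Operation three, reduces to tallying per-state labels. A hypothetical sketch (the object IDs and their states are invented for the example):

```python
from collections import Counter

# Hypothetical per-object motion states after one detection pass.
object_states = {"obj1": "tracked", "obj2": "tracked",
                 "obj3": "transient_loss", "obj4": "long_lost"}

counts = Counter(object_states.values())
tracked_count = counts["tracked"]                            # objects still tracked
lost_count = counts["transient_loss"] + counts["long_lost"]  # objects currently lost
```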
Operation four: Count target objects according to the motion state of the target objects, and perform flow analysis of the target objects according to the count results.

After the number of target objects is obtained, that is, after the count results are obtained, flow analysis of the target objects can be performed according to the count results.
Operation five: Detect abnormal target objects according to the motion state of the target objects, and raise an alarm for abnormal target objects.

For example, during the monitoring of a certain target object, if it is found that the target object is in the transient loss state, the long-term lost state, or the vanished state, a corresponding abnormality alarm can be raised for that motion state.
Operation six: Perform information recommendation to the target object according to the motion state of the target object.

For example, the motion trajectory of a target object in the tracked state is analyzed, and information recommendation is performed according to the analysis result; if the target object often drives a vehicle, information about corresponding vehicles can be recommended, etc.
Hereinafter, the above detection process of the target object in this embodiment is illustrated with a specific example.

The detection process of the target object in this example includes: training two recurrent neural networks, RNN1 and RNN2, so that RNN1 can predict the future motion state of a target object and RNN2 can predict its future position; using RNN1 and RNN2 to predict, respectively, the motion state and position in frame t of a target object existing in frame t-1; in frame t, matching the position prediction result with the position detection result of the current observation, where a FeatureDetector may be used, since from the detection response both the position of the target object and its feature points can be obtained; associating, in the appearance feature space, the target object A1 currently marked as being in the tracked state and the target object A2 currently marked as being in the long-term lost state with the actual results of the corresponding current observation; if target object A1 or A2 is associated with an actual result, marking the associated target object A1 or A2 as a target object in the tracked state; if target object A1 has no association with an actual result, marking the unassociated A1 as being in the transient loss state; if target object A2 has no association with an actual result, marking the unassociated A2 as being in the long-term lost state; marking a target object that has been identified as being in the long-term lost state and lost for more than 10 frames as being in the vanished state; continuing the detection of target objects on subsequent video frames, using RNN1 and RNN2 to predict, respectively, the motion state and position in frame t+1 of target objects existing in frame t; and iterating in turn until the video frame sequence ends.
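The frame-by-frame control flow of this example can be sketched as follows. This toy version replaces the RNN position prediction with a "last known position" predictor, replaces the appearance-space association with nearest-neighbor gating, and collapses the transient/long-term distinction into a single missed-frame counter, so it only illustrates the loop structure, not the patent's actual method. The gate distance of 2.0 and the 10-frame vanish threshold are illustrative values.

```python
def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def track_sequence(frames, vanish_after=10):
    """Toy tracking loop: per frame, each track predicts its next position
    (here simply its last position), is matched to the nearest detection
    within a gate, and its marked state is updated."""
    tracks, next_id = {}, 0
    for detections in frames:
        unclaimed = list(range(len(detections)))
        for trk in tracks.values():
            best = min(unclaimed, default=None,
                       key=lambda j: dist(trk["pos"], detections[j]))
            if best is not None and dist(trk["pos"], detections[best]) <= 2.0:
                trk.update(pos=detections[best], state="tracked", missed=0)
                unclaimed.remove(best)
            else:
                trk["missed"] += 1
                trk["state"] = ("vanished" if trk["missed"] > vanish_after
                                else "lost")
        for j in unclaimed:  # unmatched detections start new tracks
            tracks[next_id] = {"pos": detections[j],
                               "state": "tracked", "missed": 0}
            next_id += 1
    return tracks

# One object detected at (0,0), moving to (1,0), then absent for two frames.
done = track_sequence([[(0, 0)], [(1, 0)], [], []])
```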
Through this embodiment, the judgment of whether a target object has disappeared from a surveillance video can be realized effectively, the occurrence of errors in multi-target tracking is reduced, and the accuracy of image detection is improved.

The target object detection method of this embodiment can be executed by any suitable device with data processing capability, including but not limited to: mobile terminals, PCs, etc.
Embodiment three
With reference to Fig. 3, a structural diagram of a target object detection device according to Embodiment 3 of the present invention is shown.

The target object detection device of this embodiment includes: a prediction module 302, configured to predict, according to the feature points of at least one target object in a first video frame, the motion state of the target object in a second video frame using a first neural network, to obtain a motion state prediction result, where the first video frame is the current video frame and the second video frame is a video frame subsequent to the current video frame; and to predict, according to the feature points of the target object, the position of the target object in the second video frame using a second neural network, to obtain a position prediction result; a matching module 304, configured to match the position prediction result with the position detection result of the corresponding target object in the second video frame; and a determining module 306, configured to determine the motion state of the target object according to the matching result and the motion state prediction result.

According to the target object detection device provided in this embodiment, based on the image feature points of the target object in the current video frame, the motion state and position of the target object in a subsequent video frame, such as the next video frame, are predicted by the first neural network and the second neural network respectively; then, according to the comparison between the position prediction result of the target object and the position detection result of the target object in the subsequent video frame, together with the motion state prediction result of the target object, the actual motion state of the target object in the subsequent video frame is determined, and the detection result of the target object in the video frame can be determined from this actual motion state. In this embodiment, the motion state of the target object is expressed as a state based on image feature points. Because image feature points differ greatly between different target objects, if a target object disappears from the video, its predicted feature-point-based motion state is unlikely to resemble the motion states of other target objects. This makes the scheme more sensitive to the disappearance of the detected target object from the video, so that the judgment of whether a target object has disappeared from a surveillance video can be realized effectively, the occurrence of errors in multi-target tracking is reduced, and the accuracy of image detection is improved.
Embodiment four
With reference to Fig. 4, a structural diagram of a target object detection device according to Embodiment 4 of the present invention is shown.

The target object detection device of this embodiment includes: a prediction module 402, configured to predict, according to the feature points of at least one target object in a first video frame, the motion state of the target object in a second video frame using a first neural network, to obtain a motion state prediction result, where the first video frame is the current video frame and the second video frame is a video frame subsequent to the current video frame; and to predict, according to the feature points of the target object, the position of the target object in the second video frame using a second neural network, to obtain a position prediction result; a matching module 404, configured to match the position prediction result with the position detection result of the corresponding target object in the second video frame; and a determining module 406, configured to determine the motion state of the target object according to the matching result and the motion state prediction result.

Optionally, the matching module 404 is configured to associate the position prediction result with the position detection result of the corresponding target object in the second video frame in an appearance feature space, and to determine the matching result between the position prediction result and the position detection result in the second video frame according to the association result.

Optionally, the matching module 404 is configured to determine the difference between the position prediction result and the position detection result, perform the association in the appearance feature space according to the difference, and determine the matching result between the position prediction result and the position detection result in the second video frame according to the association result.

Optionally, the motion state of the target object includes at least one of: a tracked state, a transient loss state, a long-term lost state, and a vanished state. The tracked state indicates that the position prediction result of the target object is associated with the position detection result in the corresponding video frame; the transient loss state indicates that the prediction result of the target object is not associated with the position detection result in the corresponding video frame; the long-term lost state indicates that, over a sequence of a first set number of video frames, none of the position prediction results is associated with the corresponding position detection results; the vanished state indicates that, over a sequence of a second set number of video frames, none of the position prediction results is associated with the corresponding position detection results; and the first set number is smaller than the second set number.

Optionally, the motion state further includes an appearing state, which indicates that the target object appears for the first time in the first video frame.
Optionally, the determining module 406 includes: an association submodule 4062, configured to mark the motion state of the target object as the tracked state in response to the matching result indicating that the position prediction result is associated with the position detection result.

Optionally, the determining module 406 further includes: a non-association submodule 4064, configured to obtain the motion state prediction result of the target object in response to the matching result indicating that there is no association between the position prediction result and the position detection result.

Optionally, the non-association submodule 4064 is further configured to, after obtaining the motion state prediction result of the target object, mark the motion state of the target object as the transient loss state in response to the motion state prediction result indicating that the target object is in the tracked state.

Optionally, the non-association submodule 4064 is further configured to, in response to the motion state prediction result indicating that the target object is in the transient loss state, judge whether the number of times the target object has been consecutively marked as being in the transient loss state has reached N-1; if so, mark the motion state of the target object as the long-term lost state; if not, keep the motion state of the target object marked as the transient loss state, where N represents the first set number.

Optionally, the non-association submodule 4064 is further configured to, in response to the motion state prediction result indicating that the target object is in the long-term lost state, judge whether the number of times the target object has been consecutively marked as being in the long-term lost state has reached M-1; if so, mark the motion state of the target object as the vanished state; if not, keep the motion state of the target object marked as the long-term lost state, where M represents the second set number.

Optionally, the matching module 404 is configured to match the position prediction result with the position detection result of the corresponding target object in the second video frame detected by an object detector.
Optionally, the target object detection device of this embodiment further includes: a first operation module 408, configured to identify the action of the target object according to the motion state after the determining module 406 determines the motion state of the target object according to the matching result and the motion state prediction result.

Optionally, the target object detection device of this embodiment further includes: a second operation module 410, configured to count target objects according to the motion state after the determining module 406 determines the motion state of the target object according to the matching result and the motion state prediction result.

Optionally, the target object detection device of this embodiment further includes: a third operation module 412, configured to count target objects according to the motion state after the determining module 406 determines the motion state of the target object according to the matching result and the motion state prediction result, and to perform flow analysis of the target objects according to the count results.

Optionally, the target object detection device of this embodiment further includes: a fourth operation module 414, configured to detect abnormal target objects according to the motion state after the determining module 406 determines the motion state of the target object according to the matching result and the motion state prediction result, and to raise an alarm for abnormal target objects.

Optionally, the target object detection device of this embodiment further includes: a fifth operation module 416, configured to perform information recommendation to the target object according to the motion state after the determining module 406 determines the motion state of the target object according to the matching result and the motion state prediction result.

Optionally, the first neural network is an RNN, and/or the second neural network is an RNN.
The target object detection device of this embodiment is used to implement the corresponding target object detection methods in the foregoing embodiments, and has the beneficial effects of the corresponding method embodiments; details are not repeated here.
Embodiment five
Embodiment 5 of the present invention provides an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, etc. Referring to Fig. 5, a structural diagram of an electronic device 500 suitable for implementing a terminal device or a server of the embodiments of the present invention is shown. As shown in Fig. 5, the electronic device 500 includes one or more processors, a communication element, etc. The one or more processors are, for example: one or more central processing units (CPU) 501 and/or one or more graphics processors (GPU) 513, etc. The processor can perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 502 or executable instructions loaded from a storage section 508 into a random access memory (RAM) 503. The communication element includes a communication component 512 and/or a communication interface 509. The communication component 512 may include but is not limited to a network card, which may include but is not limited to an IB (InfiniBand) network card; the communication interface 509 includes the communication interface of a network card such as a LAN card or a modem, and performs communication processing via a network such as the Internet.

The processor can communicate with the read-only memory 502 and/or the random access memory 503 to execute executable instructions, is connected to the communication component 512 through a communication bus 504, and communicates with other target devices through the communication component 512, thereby completing operations corresponding to any target object detection method provided by the embodiments of the present invention, for example: predicting, according to the feature points of at least one target object in a first video frame, the motion state of the target object in a second video frame using a first neural network, to obtain a motion state prediction result, where the first video frame is the current video frame and the second video frame is a video frame subsequent to the current video frame; predicting, according to the feature points of the target object, the position of the target object in the second video frame using a second neural network, to obtain a position prediction result; matching the position prediction result with the position detection result of the corresponding target object in the second video frame; and determining the motion state of the target object according to the matching result and the motion state prediction result.
In addition, the RAM 503 can also store various programs and data required for device operation. The CPU 501 or GPU 513, the ROM 502, and the RAM 503 are connected to each other through the communication bus 504. When the RAM 503 is present, the ROM 502 is an optional module. The RAM 503 stores executable instructions, or writes executable instructions into the ROM 502 at runtime, and the executable instructions cause the processor to perform the operations corresponding to the above method. An input/output (I/O) interface 505 is also connected to the communication bus 504. The communication component 512 may be integrated, or may be configured with multiple submodules (for example, multiple IB network cards) and linked on the communication bus.

The I/O interface 505 is connected to the following components: an input section 506 including a keyboard, a mouse, etc.; an output section 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a loudspeaker, etc.; a storage section 508 including a hard disk, etc.; and the communication interface 509 including a network card such as a LAN card or a modem. A driver 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disc, a magneto-optical disc, a semiconductor memory, etc., is mounted on the driver 510 as needed, so that the computer program read therefrom is installed into the storage section 508 as needed.

It should be noted that the architecture shown in Fig. 5 is only an optional implementation. In concrete practice, the number and types of the components in Fig. 5 may be selected, deleted, increased, or replaced according to actual needs. In the configuration of different functional components, separate or integrated arrangements and other implementations may also be adopted; for example, the GPU and the CPU may be arranged separately, or the GPU may be integrated on the CPU; the communication element may be arranged separately, or may be integrated on the CPU or GPU; and so on. These alternative implementations all fall within the protection scope of the present invention.
In particular, according to the embodiments of the present invention, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present invention includes a computer program product, including a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for executing the method shown in the flowchart, and the program code may include instructions corresponding to the execution of the method steps provided by the embodiments of the present invention, for example: predicting, according to the feature points of at least one target object in a first video frame, the motion state of the target object in a second video frame using a first neural network, to obtain a motion state prediction result, where the first video frame is the current video frame and the second video frame is a video frame subsequent to the current video frame; predicting, according to the feature points of the target object, the position of the target object in the second video frame using a second neural network, to obtain a position prediction result; matching the position prediction result with the position detection result of the corresponding target object in the second video frame; and determining the motion state of the target object according to the matching result and the motion state prediction result. In such an embodiment, the computer program can be downloaded and installed from the network through the communication element, and/or installed from the removable medium 511. When the computer program is executed by the processor, the above functions defined in the methods of the embodiments of the present invention are executed.
It may be noted that, according to implementation needs, each component/step described in the embodiments of the present invention may be split into more components/steps, and two or more components/steps or partial operations of components/steps may also be combined into new components/steps, so as to achieve the purpose of the embodiments of the present invention.

The above methods according to the embodiments of the present invention may be implemented in hardware or firmware, or implemented as software or computer code that can be stored in a recording medium (such as a CD-ROM, a RAM, a floppy disk, a hard disk, or a magneto-optical disk), or implemented as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded through a network, and stored in a local recording medium, so that the methods described here can be processed by such software stored on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware (such as an ASIC or FPGA). It can be understood that a computer, a processor, a microprocessor controller, or programmable hardware includes storage components (for example, RAM, ROM, flash memory, etc.) that can store or receive software or computer code; when the software or computer code is accessed and executed by the computer, processor, or hardware, the processing methods described here are realized. In addition, when a general-purpose computer accesses code for implementing the processing shown here, the execution of the code converts the general-purpose computer into a special-purpose computer for executing the processing shown here.

A person of ordinary skill in the art may realize that the units and method steps of each example described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed by hardware or by software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered as going beyond the scope of the embodiments of the present invention.

The above embodiments are only used to illustrate the embodiments of the present invention, and are not limitations thereof. A person of ordinary skill in the relevant technical field can also make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention; therefore, all equivalent technical solutions also belong to the scope of the embodiments of the present invention, and the patent protection scope of the embodiments of the present invention should be defined by the claims.

Claims (10)

1. A method for detecting a target object, comprising:
predicting, according to feature points of at least one target object in a first video frame, a motion state of the target object in a second video frame by using a first neural network, to obtain a motion state prediction result, wherein the first video frame is a current video frame and the second video frame is a video frame subsequent to the current video frame;
predicting, according to the feature points of the target object, a position of the target object in the second video frame by using a second neural network, to obtain a position prediction result;
matching the position prediction result against a position detection result of the target object corresponding to the second video frame; and
determining the motion state of the target object according to a matching result and the motion state prediction result.
2. The method according to claim 1, wherein matching the position prediction result against the position detection result of the target object corresponding to the second video frame comprises:
associating, in an appearance feature space, the position prediction result with the position detection result of the target object corresponding to the second video frame, and determining, according to an association result, the matching result between the position prediction result and the position detection result in the second video frame.
3. The method according to claim 2, wherein associating, in the appearance feature space, the position prediction result with the position detection result of the target object corresponding to the second video frame comprises:
determining a difference between the position prediction result and the position detection result, and performing the association in the appearance feature space according to the difference.
4. The method according to any one of claims 1-3, wherein the motion state of the target object comprises at least one of: a tracked state, a transient loss state, a long-term loss state, and a vanished state;
wherein the tracked state indicates that an association exists between the position prediction result of the target object and the position detection result in the corresponding video frame; the transient loss state indicates that no association exists between the position prediction result of the target object and the position detection result in the corresponding video frame; the long-term loss state indicates that, within a video frame sequence of a first set quantity, none of the position prediction results of the target object is associated with the corresponding position detection results; the vanished state indicates that, within a video frame sequence of a second set quantity, none of the position prediction results of the target object is associated with the corresponding position detection results; and the first set quantity is less than the second set quantity.
5. The method according to claim 4, wherein the motion state further comprises an appearing state, the appearing state indicating that the target object appears in the first video frame for the first time.
6. The method according to claim 4 or 5, wherein determining the motion state of the target object according to the matching result and the motion state prediction result comprises:
marking the motion state of the target object as the tracked state in response to the matching result indicating that an association exists between the position prediction result and the position detection result.
7. The method according to claim 6, wherein determining the motion state of the target object according to the matching result and the motion state prediction result further comprises:
obtaining the motion state prediction result of the target object in response to the matching result indicating that no association exists between the position prediction result and the position detection result.
8. The method according to claim 7, wherein after obtaining the motion state prediction result of the target object, the method further comprises:
marking the motion state of the target object as the transient loss state in response to the motion state prediction result indicating that the target object is in the tracked state.
9. An apparatus for detecting a target object, comprising:
a prediction module configured to: predict, according to feature points of at least one target object in a first video frame, a motion state of the target object in a second video frame by using a first neural network, to obtain a motion state prediction result, wherein the first video frame is a current video frame and the second video frame is a video frame subsequent to the current video frame; and predict, according to the feature points of the target object, a position of the target object in the second video frame by using a second neural network, to obtain a position prediction result;
a matching module configured to match the position prediction result against a position detection result of the target object corresponding to the second video frame; and
a determining module configured to determine the motion state of the target object according to a matching result and the motion state prediction result.
10. An electronic device, comprising: a processor, a memory, a communication means, and a communication bus, wherein the processor, the memory, and the communication means communicate with one another via the communication bus; and
the memory is configured to store at least one executable instruction, the executable instruction causing the processor to perform the method for detecting a target object according to any one of claims 1-8.
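The state transitions recited in claims 4-8 can be sketched as a small per-object state machine. The following is a hypothetical illustration only, not the patented implementation: the names (`MotionState`, `update_state`, `miss_count`) and the window sizes are invented for clarity, and the claims require only that the first set quantity be smaller than the second.

```python
from enum import Enum, auto

class MotionState(Enum):
    APPEARING = auto()       # claim 5: object appears in the first video frame for the first time
    TRACKED = auto()         # prediction associated with a detection (claim 6)
    TRANSIENT_LOSS = auto()  # no association in the current frame (claim 8)
    LONG_TERM_LOSS = auto()  # no association for short_window consecutive frames
    VANISHED = auto()        # no association for long_window consecutive frames

def update_state(state, miss_count, matched, short_window=5, long_window=30):
    """Return (new_state, new_miss_count) after one frame.

    `matched` is True when the position prediction result is associated
    with a position detection result in the current frame.  The window
    sizes are illustrative; claim 4 only requires short_window < long_window.
    """
    if matched:
        # claim 6: association exists -> mark the object as tracked
        return MotionState.TRACKED, 0
    miss_count += 1
    if miss_count >= long_window:
        return MotionState.VANISHED, miss_count
    if miss_count >= short_window:
        return MotionState.LONG_TERM_LOSS, miss_count
    if state == MotionState.TRACKED:
        # claim 8: a previously tracked object that fails to match
        # is marked as transiently lost
        return MotionState.TRANSIENT_LOSS, miss_count
    return state, miss_count
```

In a full tracker, `matched` would come from associating predicted and detected positions in an appearance feature space (claims 2-3), e.g. by thresholding a distance between feature descriptors; that matching step is deliberately left out of this sketch.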
CN201710059806.1A 2017-01-24 2017-01-24 Target object detection method and device and electronic equipment Active CN108230352B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710059806.1A CN108230352B (en) 2017-01-24 2017-01-24 Target object detection method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN108230352A true CN108230352A (en) 2018-06-29
CN108230352B CN108230352B (en) 2021-02-26

Family

ID=62656467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710059806.1A Active CN108230352B (en) 2017-01-24 2017-01-24 Target object detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN108230352B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635835A (en) * 2008-07-25 2010-01-27 深圳市信义科技有限公司 Intelligent video monitoring method and system thereof
CN103077539A (en) * 2013-01-23 2013-05-01 上海交通大学 Moving object tracking method under complicated background and sheltering condition
CN103472445A (en) * 2013-09-18 2013-12-25 电子科技大学 Detecting tracking integrated method for multi-target scene


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837766A (en) * 2018-08-17 2020-02-25 北京市商汤科技开发有限公司 Gesture recognition method, gesture processing method and device
CN110837766B (en) * 2018-08-17 2023-05-05 北京市商汤科技开发有限公司 Gesture recognition method, gesture processing method and device
CN110009662A (en) * 2019-04-02 2019-07-12 北京迈格威科技有限公司 Face tracking method and apparatus, electronic device, and computer-readable storage medium
CN110009662B (en) * 2019-04-02 2021-09-17 北京迈格威科技有限公司 Face tracking method and device, electronic equipment and computer readable storage medium
CN110298306B (en) * 2019-06-27 2022-08-05 北京百度网讯科技有限公司 Method, device and equipment for determining motion information of target object
CN110298306A (en) * 2019-06-27 2019-10-01 北京百度网讯科技有限公司 Method, apparatus and device for determining motion information of a target object
CN110414443A (en) * 2019-07-31 2019-11-05 苏州市科远软件技术开发有限公司 Target tracking method and apparatus, and box camera and dome camera linkage tracking system
CN111479061A (en) * 2020-04-15 2020-07-31 上海摩象网络科技有限公司 Tracking state determination method and device and handheld camera
CN111652043A (en) * 2020-04-15 2020-09-11 北京三快在线科技有限公司 Object state identification method and device, image acquisition equipment and storage medium
CN112257587A (en) * 2020-10-22 2021-01-22 江苏禹空间科技有限公司 Target object detection effect evaluation method and device, storage medium and equipment
CN112257587B (en) * 2020-10-22 2023-12-22 无锡禹空间智能科技有限公司 Target object detection effect evaluation method, device, storage medium and equipment
CN113095183A (en) * 2021-03-31 2021-07-09 西北工业大学 Micro-expression detection method based on deep neural network
CN114241011A (en) * 2022-02-22 2022-03-25 阿里巴巴达摩院(杭州)科技有限公司 Target detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN108230352B (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN108230352A (en) Target object detection method and device, and electronic equipment
Abd Algani et al. Leaf disease identification and classification using optimized deep learning
Gama et al. On evaluating stream learning algorithms
Li et al. A deep learning approach to link prediction in dynamic networks
CN101989327B (en) Image analyzing apparatus and image analyzing method
US20180006900A1 (en) Predictive anomaly detection in communication systems
US20130335571A1 (en) Vision based target tracking for constrained environments
CN110096938A (en) Method and apparatus for processing action behaviors in video
CN111209774A (en) Target behavior recognition and display method, device, equipment and readable medium
CN106030565A (en) Computer performance prediction using search technologies
Gupta et al. Class-weighted evaluation metrics for imbalanced data classification
CN114925938B (en) Electric energy meter running state prediction method and device based on self-adaptive SVM model
US20240013075A1 (en) Method and apparatus for semantic analysis on confrontation scenario based on target-attribute-relation
Rai et al. Fraud detection in credit card data using machine learning techniques
Moon et al. A Large-Scale Study in Predictability of Daily Activities and Places.
CN106991425B (en) Method and device for detecting commodity transaction quality
EP3323100A1 (en) A method and system for analyzing mobile apps
KR20220059287A (en) Attention-based stacking method for time series forecasting
Lo Predicting software reliability with support vector machines
CN110213094B (en) Method and device for establishing threat activity topological graph and storage equipment
Patil et al. Adaptive real time data mining methodology for wireless body area network based healthcare applications
CN108768774A (en) Quantitative network security evaluation method and evaluation system
Jurj et al. Real-time identification of animals found in domestic areas of Europe
Costa Topological data analysis and applications
Huang et al. A flexible framework for synthesizing human activity patterns with application to sequential categorical data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant