CN113378005A - Event processing method and device, electronic equipment and storage medium - Google Patents

Event processing method and device, electronic equipment and storage medium

Info

Publication number
CN113378005A
CN113378005A
Authority
CN
China
Prior art keywords
information
target
event
target object
image
Prior art date
Legal status
Granted
Application number
CN202110622066.4A
Other languages
Chinese (zh)
Other versions
CN113378005B (en)
Inventor
甘露
付琰
周洋杰
陈亮辉
彭玉龙
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110622066.4A priority Critical patent/CN113378005B/en
Publication of CN113378005A publication Critical patent/CN113378005A/en
Application granted granted Critical
Publication of CN113378005B publication Critical patent/CN113378005B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval of video data; Database structures therefor; File system structures therefor
    • G06F 16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783: Retrieval characterised by using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval of video data; Database structures therefor; File system structures therefor
    • G06F 16/73: Querying
    • G06F 16/738: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30232: Surveillance

Abstract

The disclosure provides an event processing method, apparatus, device, and storage medium, relating to the fields of deep learning and big data, and applicable to smart city scenarios. The specific implementation scheme is as follows: an image to be detected is acquired, and feature extraction is performed on it to obtain a plurality of pieces of feature information of the image; event information of a target event is determined; a pre-established object information base is searched according to the plurality of pieces of feature information, and the retrieval results are ranked according to the event information of the target event; object information of a target object in the image to be detected is obtained according to the ranking result; and the target object is tracked and located according to the object information. The method improves the accuracy of the object information base, reduces the cost of manually screening candidate objects, and improves event processing efficiency.

Description

Event processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing, in particular to deep learning and big data, and specifically to an event processing method and apparatus, an electronic device, and a storage medium.
Background
As AI (Artificial Intelligence) increasingly permeates smart city construction, municipal departments are actively addressing operational pain points and working with internet companies or traditional suppliers on solutions. After decomposing traditional office workflows, optimization typically targets links such as intelligent data fusion, intelligent application, intelligent process advancement, and intelligent analysis and evaluation, in order to improve office efficiency and quality.
At present, intelligent data fusion in some scenarios suffers from problems such as low accuracy, reliance on manual screening, and low efficiency.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for event processing, which can be applied in a smart city scenario.
According to a first aspect of the present disclosure, there is provided an event processing method, including:
acquiring an image to be detected, and performing feature extraction on the image to be detected to obtain a plurality of pieces of feature information of the image to be detected;
determining event information of a target event, the event information including at least one of occurrence location information and occurrence time information;
retrieving in a pre-established object information base according to the plurality of pieces of feature information of the image to be detected, and ranking the retrieval results according to the event information of the target event;
obtaining object information of a target object in the image to be detected according to the ranking result; and
tracking and locating the target object according to the object information.
According to a second aspect of the present disclosure, there is provided an event processing apparatus including:
an image processing module, configured to acquire an image to be detected and perform feature extraction on it to obtain a plurality of pieces of feature information of the image to be detected;
a first determining module, configured to determine event information of a target event, the event information including at least one of occurrence location information and occurrence time information;
a retrieval module, configured to retrieve in a pre-established object information base according to the plurality of pieces of feature information of the image to be detected, and to rank the retrieval results according to the event information of the target event;
a second determining module, configured to determine object information of a target object in the image to be detected according to the ranking result; and
a locating module, configured to track and locate the target object according to the object information.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect described above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect described above.
According to the technical solution of the present disclosure, extracting a plurality of pieces of feature information from the image to be detected and retrieving in the pre-established object information base according to them reduces the number of retrieval results. In addition, ranking the retrieval results according to the event information of the target event introduces the correlation between the retrieval results and the target event, so the results can be further screened, their accuracy improved, and manual screening time effectively shortened. Moreover, the target object is tracked and located according to the acquired object information of the target object in the image to be detected, and tracking and locating analysis is performed by integrating data from multiple sources, which improves both the accuracy and the efficiency of event processing.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
fig. 1 is a flowchart of an event processing method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of creating an object information base according to an embodiment of the disclosure;
fig. 3 is a flowchart of establishing object information of each target object according to an embodiment of the present disclosure;
fig. 4 is a flowchart for obtaining candidate objects and their ranking according to an embodiment of the disclosure;
fig. 5 is a flowchart of another method for obtaining candidate objects and their ranking according to an embodiment of the disclosure;
fig. 6 is a flowchart of tracking and positioning a target object according to an embodiment of the present disclosure;
fig. 7 is a block diagram of an event processing device according to an embodiment of the present disclosure;
fig. 8 is a block diagram of another event processing device according to an embodiment of the present disclosure;
fig. 9 is a block diagram of an electronic device for implementing an event processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "and/or" in the embodiments of the present disclosure describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
It should be noted that, in existing data fusion schemes, only the face information of the target is used for data fusion, so the accuracy and recall of the target information base are not high enough. In addition, when a face image of the target object is searched in the target information base, many similar candidates are easily returned, making the manual screening workload large and the efficiency low.
In view of the above problems, the present disclosure provides an event processing method, apparatus, device, and storage medium.
Fig. 1 is a flowchart of an event processing method according to an embodiment of the present disclosure. It should be noted that the event processing method according to the embodiment of the present disclosure may be applied to an event processing apparatus according to the embodiment of the present disclosure, and the event processing apparatus may be configured in an electronic device. As shown in fig. 1, the method comprises the steps of:
step 101, obtaining an image to be detected, and performing feature extraction on the image to be detected to obtain a plurality of feature information of the image to be detected.
In order to retrieve the target object as accurate as possible according to the image to be detected, feature extraction needs to be performed on the image to be detected to obtain a plurality of feature information of the image to be detected, and the feature information can be used as a clue for further retrieving the target object, so that the retrieval efficiency is improved.
It should be noted that the plurality of pieces of feature information of the image to be detected may include at least two of first feature information, second feature information, vehicle feature information, and spatiotemporal feature information, and may also include other feature information not mentioned in the embodiments of the present disclosure, according to the needs of the scenario; the present disclosure does not limit this.
In a certain scenario, the first feature information may be face feature information, and the second feature information may be body feature information. As an example, the body feature information may include a body feature vector, clothing color, gender, whether glasses are worn, whether a hat is worn, and the like. As for the vehicle feature information, if the target object in the image to be detected is in a vehicle, information such as the license plate number and vehicle color can be extracted. In addition, the spatiotemporal feature information may be information such as the snapshot time and location.
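For illustration, the kinds of feature information listed above could be gathered into one record per detected image. The sketch below is an assumption for exposition only: the field names (`face`, `body`, `plate`, `snapshot_time`, `snapshot_place`) and the dict-based detector output are not specified in the disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ImageFeatures:
    """Hypothetical container for the per-image feature information."""
    face: Optional[list] = None           # first feature info: face vector
    body: Optional[list] = None           # second feature info: body vector
    attributes: dict = field(default_factory=dict)  # clothing color, glasses, hat, ...
    plate: Optional[str] = None           # vehicle feature info: license plate
    vehicle_color: Optional[str] = None   # vehicle feature info: color
    snapshot_time: Optional[str] = None   # spatiotemporal info: capture time
    snapshot_place: Optional[str] = None  # spatiotemporal info: capture location

def extract_features(detection: dict) -> ImageFeatures:
    """Stand-in for real face/body/vehicle extractors: collect whatever
    feature kinds the upstream detectors produced for this image."""
    return ImageFeatures(
        face=detection.get("face"),
        body=detection.get("body"),
        attributes=detection.get("attributes", {}),
        plate=detection.get("plate"),
        vehicle_color=detection.get("vehicle_color"),
        snapshot_time=detection.get("time"),
        snapshot_place=detection.get("place"),
    )
```

Missing kinds simply stay `None`, matching the "at least two of" wording above.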
Step 102, determining event information of the target event, wherein the event information includes at least one of occurrence location information and occurrence time information.
It is to be understood that at least one of the occurrence location information and the occurrence time information of the target event may be used as a clue for further determining the target object; for example, the occurrence location of the target event can be matched against the snapshot locations in the object information base, and the occurrence time against the snapshot times. The object information base is described below.
Step 103, retrieving in a pre-established object information base according to the plurality of pieces of feature information of the image to be detected, and ranking the retrieval results according to the event information of the target event.
That is, the plurality of pieces of feature information of the image to be detected are used as screening conditions to retrieve results from the pre-established object information base, and the correlation between each retrieval result and at least one of the occurrence location information and the occurrence time information of the target event is calculated, so that the retrieval results can be ranked by correlation.
The pre-established object information base may be a feature information base for each object, obtained by converting video captured by surveillance cameras into images, extracting features, and clustering the feature information belonging to the same object. Retrieving in the object information base according to the feature information of the image to be detected yields closely matching results, which improves the accuracy of the retrieval results and reduces their number. Ranking the retrieval results according to the event information of the target event amounts to automatically screening the results, which improves screening efficiency and reduces the cost of manual screening.
Step 104, obtaining the object information of the target object in the image to be detected according to the ranking result.
It can be understood that the ranking result reveals which result or results best match the image to be detected, and the corresponding object or objects are taken as the target object in the image.
Step 105, tracking and locating the target object according to the object information.
The object information includes the location information and behavior information of the target object at different times, so the target object can be tracked, located, and analyzed accordingly. In addition, the location and behavior information of the target object at different times can be obtained from related databases according to the object information, thereby realizing the tracking and locating of the target object.
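Step 105 can be pictured as ordering the object's captured records by time to recover a trajectory. A minimal sketch, assuming each record is a dict with `time`, `place`, and `behavior` keys (illustrative names, not fixed by the disclosure):

```python
def build_trajectory(object_records):
    """Order an object's snapshot records chronologically so its
    movements and behaviors at different times can be traced."""
    return sorted(object_records, key=lambda r: r["time"])

# Example: two out-of-order snapshot records for one object.
records = [
    {"time": 3, "place": "station", "behavior": "exit"},
    {"time": 1, "place": "road A", "behavior": "drive"},
]
trajectory = build_trajectory(records)
```

A real system would merge records from several related databases before sorting; that aggregation is elided here.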
In the technical solution of the present disclosure, the acquisition, storage, and application of the feature information and trajectory behavior information of the related target objects all comply with relevant laws and regulations and do not violate public order and good morals.
According to the event processing method of the embodiments of the present disclosure, extracting a plurality of pieces of feature information from the image to be detected and retrieving in the pre-established object information base according to them introduces multiple clues and reduces the number of retrieval results. In addition, ranking the retrieval results according to the event information of the target event introduces the correlation between the retrieval results and the target event, so the results can be further screened, their accuracy improved, and manual screening time effectively shortened. Moreover, the target object is tracked and located according to the acquired object information of the target object in the image to be detected, and tracking and locating analysis is performed by integrating data from multiple sources, which improves both the accuracy and the efficiency of event processing.
In order to describe the establishment of the object information base in further detail, the present disclosure provides another embodiment.
Fig. 2 is a flowchart of establishing an object information base according to an embodiment of the present disclosure. As shown in fig. 2, the object information base may be pre-established in the following manner:
step 201, acquiring a surveillance video stream shot by a surveillance camera, and sampling the surveillance video stream to obtain N video frames; wherein N is a positive integer.
Wherein, the surveillance camera head can be the surveillance camera head of a plurality of different scenes, for example: the monitoring of the traffic road, the monitoring of public places such as subway stations or stations and the like.
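The sampling in step 201 can be sketched as uniform frame sampling. Here the decoded stream is modeled as a plain list, and `step` (an assumed parameter, since the disclosure does not fix a sampling rate) selects every step-th frame:

```python
def sample_frames(frames, step):
    """Keep every `step`-th frame of the decoded surveillance stream,
    yielding the N video frames used for subsequent target detection."""
    if step < 1:
        raise ValueError("step must be >= 1")
    return frames[::step]
```

In practice the frames would come from a video decoder rather than a list, but the selection logic is the same.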
Step 202, performing target detection on each video frame to determine M target object samples in each video frame, where M is a positive integer.
It is understood that each video frame corresponds to one image, and target detection is performed on each video frame to detect the M target object samples it contains, where the M target object samples in each video frame refer to the M person images in that frame.
Step 203, acquiring an image of each target object sample from the N video frames, and performing feature extraction on the image to obtain a plurality of pieces of feature information for each target object sample.
That is, all target object samples corresponding to the N video frames are cropped out as images, and feature extraction is then performed on these images to obtain a plurality of pieces of feature information for each target object sample.
In the embodiment of the present disclosure, feature extraction on the images may cover at least two kinds among the first feature information, second feature information, vehicle feature information, spatiotemporal feature information, and the like; to widen the coverage of the acquired information, as many kinds of features as possible are extracted, so as to improve the quality of the object information base. In a certain scenario, the first feature information may be face feature information, and the second feature information may be body feature information. As an example, the body feature information may include a body feature vector, clothing color, gender, whether glasses are worn, whether a hat is worn, and the like. As for the vehicle feature information, if the target object in the sample image is in a vehicle, information such as the license plate number and vehicle color can be extracted. In addition, the spatiotemporal feature information may be information such as the snapshot time and location.
Step 204, establishing object information for each target object sample according to its plurality of pieces of feature information.
That is, through feature extraction on the images, a plurality of pieces of feature information are obtained for each target object sample, namely the first feature information, second feature information, vehicle feature information, spatiotemporal feature information, and so on, and these pieces of feature information are taken as the object information of each target object sample.
It should be noted that, since different target object samples may refer to the same target object, each target object sample and its feature information need to be examined so that the samples referring to the same target object, together with their feature information, can be merged, thereby obtaining the object information corresponding to each target object.
Step 205, building the object information base from the object information of each target object sample.
According to the event processing method provided by the embodiments of the present disclosure, when the object information base is established, feature extraction is performed on each target object sample to obtain a plurality of pieces of corresponding feature information, which effectively widens the data coverage of the object information base, greatly improves its accuracy and recall, and provides a basic guarantee for accurately acquiring the target object information in the image to be detected and for tracking and locating.
To further illustrate how the object information of each target object sample is established in the above embodiment, the present disclosure provides another embodiment.
Fig. 3 is a flowchart for establishing object information of each target object according to an embodiment of the present disclosure.
As shown in fig. 3, the implementation of establishing the object information of each target object includes:
step 301, acquiring a pre-established discrimination model; wherein the discriminant model is trained by using a plurality of feature information of the object sample.
The pre-established discrimination model is used for determining whether or not the plurality of target object samples are the same object based on the plurality of feature information of the plurality of target object samples.
Step 302, grouping each target object sample, inputting a plurality of feature information of each target object sample in each group into the discrimination model, and judging whether each target object sample in each group is the same object.
It can be understood that, in the plurality of target object samples obtained by the above sampling, different target object samples may refer to the same object, and therefore, in order to form a one-to-one correspondence relationship between each object information and each object, each target object sample needs to be grouped and distinguished.
As an example, all the target corresponding samples may be combined two by two to obtain a plurality of sets of target object samples. And inputting a plurality of characteristic information corresponding to each target object sample in each group of target object samples into the discrimination model to judge whether each target object sample in each group is the same object.
Step 303, in response to the target object samples in a group being the same object, merging the pieces of feature information of those samples to obtain the object information of that object.
That is, if the target object samples in a group are the same object, their pieces of feature information all belong to that object, so the pieces of feature information of the samples in the group are merged to obtain the object information corresponding to that object.
Step 304, in response to the target object samples in a group not being the same object, establishing object information for each target object sample according to its pieces of feature information.
That is, if the target object samples in a group are not the same object, their pieces of feature information belong to different objects, so corresponding object information is established for each target object sample separately.
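Steps 302 to 304 can be sketched as follows, with `same_object` standing in for the pre-trained discrimination model; the greedy merge strategy and the dict-based feature records are assumptions for illustration, not the disclosure's prescribed implementation:

```python
def merge_samples(samples, same_object):
    """Group target object samples judged to be the same object and
    merge their feature information into one object-information record.

    samples: list of per-sample feature dicts (None = feature missing).
    same_object: callable(record, feats) -> bool, the discrimination model.
    """
    objects = []  # one merged feature dict per distinct object
    for feats in samples:
        for obj in objects:
            if same_object(obj, feats):              # step 302: pairwise judgment
                # step 303: same object, merge the non-missing features
                obj.update({k: v for k, v in feats.items() if v is not None})
                break
        else:
            # step 304: no match, establish a new object-information record
            objects.append(dict(feats))
    return objects
```

The point of the merge is that two sightings of one person (one with a face, one with a license plate, say) collapse into a single record covering both clues.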
According to the event processing method provided by the embodiments of the present disclosure, when object information is established, the discrimination model judges whether each group of target object samples refers to the same object, and the pieces of feature information of samples belonging to the same object are merged into one object-information record. This avoids multiple object-information records corresponding to the same object and further improves the accuracy and recall of the object information.
In the event processing method of the above embodiment, the object information base is retrieved according to a plurality of pieces of feature information of the image to be detected, and the retrieval results are ranked according to the event information of the target event. To further detail the implementation of this part, the present disclosure provides yet another embodiment.
Fig. 4 is a flowchart for obtaining candidate objects and their ranking according to an embodiment of the disclosure. As shown in fig. 4, a specific implementation of obtaining candidate objects and their ranking may include:
step 401, retrieving in an object information base according to a first feature information in a plurality of feature information of an image to be detected, and obtaining at least one candidate object.
In the embodiment of the present disclosure, a description is given by taking an example in which the plurality of feature information and object information bases of the image to be detected include first feature information, second feature information, vehicle feature information, and space feature information. The first characteristic information may be face characteristic information, and the second characteristic information may be body characteristic information. As an example, the retrieval in the object information base according to the first feature information in the plurality of feature information of the image to be detected may be implemented by: acquiring face feature information of an image to be detected; acquiring a centroid face vector of each person in an object information base; according to the face feature information of the image to be detected, the similarity between the face feature information and the centroid face vector of each object in the object information base is obtained; and taking the object corresponding to the object information with the similarity meeting the expectation as a candidate object. Wherein, since there may exist a plurality of face feature vectors extracted from different images for each object, the centroid face vector refers to an average value of the plurality of face feature vectors. That is, the average of the plurality of face feature vectors for each object may be taken as the centroid face vector for each person. Therefore, the calculation amount of the face feature similarity can be reduced, and the resource consumption is reduced.
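The centroid face vector and similarity retrieval just described can be sketched as follows. The cosine similarity measure and the 0.8 threshold are assumptions for illustration; the disclosure does not fix a specific metric or cutoff:

```python
import math

def centroid(vectors):
    """Average an object's face feature vectors into its centroid face vector."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve_candidates(query_face, face_db, threshold=0.8):
    """face_db maps object id -> list of face vectors. Return the ids of
    objects whose centroid face vector is similar enough to the query;
    comparing against one centroid per object (not every stored vector)
    is what cuts the similarity computation down."""
    return [oid for oid, vecs in face_db.items()
            if cosine(query_face, centroid(vecs)) >= threshold]
```

Caching the centroid at library-build time, rather than recomputing it per query as here, would save further work.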
Step 402, acquiring spatiotemporal feature information from the object information of each candidate object.
Step 403, calculating a first correlation between each candidate object and the target event according to the event information of the target event and the spatiotemporal feature information of each candidate object.
It will be appreciated that, in order to narrow the range of candidates, further matching may be done by adding clues.
In the embodiment of the present disclosure, the event information of the target event may include at least one of the occurrence location information and the occurrence time information of the target event. According to this information and the spatiotemporal feature information of each candidate object, the likelihood that each candidate object participated in the target event can be calculated in terms of time and place, embodied as a computed score, thereby yielding the first correlation between each candidate object and the target event.
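One way to turn this time-and-place comparison into a score is sketched below. The equal weighting and the reciprocal time decay are illustrative assumptions, not values from the disclosure:

```python
def first_correlation(event, snapshots):
    """Score in [0, 1] for how plausibly a candidate was at the event.

    event: {"time": t, "place": p} for the target event.
    snapshots: list of (time, place) pairs from the candidate's
    spatiotemporal feature information; the best match wins.
    """
    best = 0.0
    for t, p in snapshots:
        place_score = 1.0 if p == event["place"] else 0.0
        time_score = 1.0 / (1.0 + abs(t - event["time"]))  # decays with time gap
        best = max(best, 0.5 * place_score + 0.5 * time_score)
    return best
```

A candidate snapped at the event's place and time scores 1.0; one seen far away long before or after scores near 0.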
Step 404, obtaining corresponding characteristic information from the object information of each candidate object according to at least one of second characteristic information, vehicle characteristic information and space-time characteristic information in the plurality of characteristic information of the image to be detected.
That is, according to each kind of feature information of an image to be detected, corresponding kind of feature information is acquired in object information of each candidate object. For example, if the plurality of feature information of the image to be detected includes second feature information, vehicle feature information, and spatiotemporal feature information, it is necessary to obtain the corresponding second feature information, vehicle feature information, and spatiotemporal feature information from the object information of each candidate object. Wherein the second characteristic information may be human characteristic information in some scenarios.
Step 405, inputting at least one feature information and the corresponding feature information into a pre-established discriminant model to obtain a second correlation between each candidate object and the target event.
It can be understood that the pre-established discrimination model can judge whether the target object and the candidate object in the image to be detected are the same object according to the feature information of the image to be detected and the corresponding feature information in the object information, so as to obtain the similarity score of each candidate object.
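The discrimination model itself is a trained model whose internals the patent leaves open. As a stand-in for its similarity score, a simple cosine similarity between two feature vectors illustrates the shape of the computation (this is an assumption, not the patent's model):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors, in [-1, 1].
    A simple stand-in for the learned discrimination model's score."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

Identical feature vectors score near 1, orthogonal ones score 0; in practice the trained model would replace this with a learned same-object probability.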
At step 406, at least one candidate object is ranked according to the first relevance and the second relevance.
In order to comprehensively consider clues such as at least one of the occurrence location information and the occurrence time information of the target event, together with the feature information in the image to be detected, so as to further screen the candidate objects, at least one candidate object is ranked according to the first correlation and the second correlation. In the embodiment of the present disclosure, the score of the first correlation and the score of the second correlation may be combined by a weighted sum, and the candidates sorted by the magnitude of the resulting final score.
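The weighted ranking step can be sketched as follows; the equal default weights are an assumption, since the patent leaves the weighting unspecified:

```python
def rank_candidates(candidates, w1=0.5, w2=0.5):
    """Rank candidates by a weighted sum of two correlation scores.

    candidates: list of (name, first_correlation, second_correlation).
    Returns candidate names sorted by descending weighted score.
    """
    scored = [(w1 * s1 + w2 * s2, name) for name, s1, s2 in candidates]
    scored.sort(reverse=True)
    return [name for _, name in scored]
```

With equal weights, a candidate strong on both clues outranks one that is strong on only one.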
According to the event processing method of the embodiment of the present disclosure, when object information is retrieved, not only is the event information of the target event introduced, but also correlations based on the second feature information, the vehicle feature information, and the spatio-temporal feature information, so that both the likelihood of a candidate object having participated in the target event and its similarity to the target object in the image to be detected can be computed comprehensively. This achieves the goal of accurately finding candidate objects, saves labor cost, and improves the efficiency of candidate object searching.
In order to further improve the efficiency of candidate object examination, based on the above embodiments, the embodiments of the present disclosure provide another way of obtaining candidate objects and ranking thereof. Fig. 5 is a flowchart illustrating another method for obtaining candidate objects and their ranking according to an embodiment of the disclosure. As shown in fig. 5, on the basis of the above embodiment, the implementation further includes:
at step 507, it is determined whether the candidate object has participated in a particular event. If the candidate object does not participate in the specific event, go to step 506; if the candidate object has participated in the specific event, step 508 is executed.
It is understood that if the candidate object has a record in the related event database, and the recorded event overlaps with the target event, the probability that the candidate object is the target object increases.
As an example, the related event database may be queried for the candidate object; if a specific event in which the candidate object participated is found in the related event database, the candidate object is deemed to have participated in that specific event. Otherwise, the candidate object has not participated in a specific event.
Step 508, in response to the candidate object participating in the specific event, obtaining description information of the specific event.
In step 509, the thread description keyword of the target event is obtained.
Step 510, calculating a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keyword of the target event.
It is understood that, from the description information of the specific event and the clue description keywords of the target event, the degree of overlap between the specific event and the target event can be calculated, yielding a third correlation between the candidate object and the target event.
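One plausible overlap measure — the patent does not fix a formula, so this Jaccard similarity over word sets is an illustrative assumption — would compare the specific event's description words against the target event's clue keywords:

```python
def third_correlation(event_description_words, clue_keywords):
    """Jaccard overlap between a specific event's description words and the
    target event's clue description keywords; returns a score in [0, 1]."""
    a, b = set(event_description_words), set(clue_keywords)
    return len(a & b) / len(a | b) if a | b else 0.0
```

In a real system the word lists would come from tokenizing the event records; keyword weighting or embedding-based similarity could replace the plain set overlap.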
At step 511, at least one candidate object is ranked according to the first relevance, the second relevance and the third relevance.
In order to comprehensively consider clues such as the occurrence location information and/or the occurrence time information of the target event, the feature information in the image to be detected, and the correlation with the specific event, so as to further screen the candidate objects, at least one candidate object is ranked according to the first correlation, the second correlation, and the third correlation. In the embodiment of the present disclosure, the score of the first correlation, the score of the second correlation, and the score of the third correlation may be combined by a weighted sum, and the candidates sorted by the magnitude of the resulting final score.
It should be noted that steps 501 to 506 in fig. 5 are completely consistent with the implementation manners of steps 401 to 406 in fig. 4, and are not described herein again.
According to the event processing method of the embodiment of the present disclosure, when object information is retrieved, the correlation between a specific event in which the candidate object participated and the target event is added. That is, if the candidate object participated in a specific event related to the target event, the probability that the candidate object is the target object increases, so the candidate objects can be screened further and the screening efficiency further improved.
In view of the specific manner of tracking and positioning the target object according to the object information in the foregoing embodiment, the present disclosure provides another embodiment.
Fig. 6 is a flowchart of tracking and positioning a target object according to an embodiment of the present disclosure. As shown in fig. 6, an implementation of tracking and locating a target object may include:
Step 601, acquiring a motion track of the target object according to the object information; the motion track includes at least one of a capturing track of the monitoring camera and an identity (ID) track.
It should be noted that the motion track of the target object may be included in the object information; that is, the capturing track of the monitoring camera is obtained from the capturing time and capturing location recorded in the object information. In addition, the motion track of the target object can be queried in a track database according to the object information, where the track database contains the motion track of each object. Examples include: track points obtained by base station access dotting (for example, when a subscriber's Subscriber Identity Module (SIM) card accesses a base station, the base station reports the location information, yielding a track point for the subscriber at that location); tracks obtained by WiFi access dotting; the network IP address used when a user logs in to a social application; dotting when an identity card is used to enter or exit a station by vehicle; and track points obtained when an ID card is used for hotel check-in and check-out, or other ID registration methods.
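Fusing these sources into one trajectory amounts to pooling track points and ordering them in time. A minimal sketch, assuming each point is a (timestamp, x, y, origin) tuple:

```python
def merge_trajectories(*sources):
    """Merge track points from multiple sources (camera captures, base-station
    dotting, WiFi access, ID check-ins, ...) into one time-ordered trajectory.

    Each source is a list of (timestamp, x, y, origin) tuples.
    """
    merged = [pt for src in sources for pt in src]
    merged.sort(key=lambda pt: pt[0])  # order the fused track by timestamp
    return merged
```

The `origin` tag is kept on each point so later analysis (or a staff member) can tell which data source contributed a suspect point.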
Step 602, combining at least one of the capturing track and the ID track of the monitoring camera of the target object, and performing collision detection analysis on the motion track obtained after combination.
In the embodiment of the present disclosure, after at least one of the capturing track of the monitoring camera and the ID track of the target object are combined, abnormal track points may exist, so collision detection analysis needs to be performed on the combined motion track. As an example, speed smoothing may be applied to the merged motion track to find abnormal points, and the cause of each anomaly may be analyzed from the abnormal point's information and the object information, so that clustering errors in the object information base can be corrected in time. In addition, a confidence can be calculated for each track point according to the object information; for track points whose confidence is below a threshold, the related first feature information, identity (ID), and similar information can be retrieved so that staff can manually verify key information points and promptly correct the object information and ID association information, thereby obtaining a highly accurate motion track for the target object.
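The speed-smoothing idea can be sketched as a check of the implied travel speed between consecutive points: a point reachable from its predecessor only at an implausible speed is flagged. The speed threshold and units are assumptions for illustration.

```python
import math

def find_speed_anomalies(track, max_speed=50.0):
    """Flag indices of track points whose implied speed from the previous
    point exceeds max_speed (hypothetical units: meters per second).

    track: time-ordered list of (t, x, y) points.
    """
    anomalies = []
    for i in range(1, len(track)):
        t0, x0, y0 = track[i - 1]
        t1, x1, y1 = track[i]
        dt = t1 - t0
        if dt <= 0:
            anomalies.append(i)  # non-increasing timestamps are also suspect
            continue
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        if speed > max_speed:
            anomalies.append(i)
    return anomalies
```

Flagged points would then be routed to confidence scoring and, below threshold, to manual review as the passage describes.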
Step 603, tracking and positioning the target object according to the motion track after collision detection and analysis.
It can be understood that after the collision detection analysis is performed on the motion trajectory of the target object, the staff tracks and positions the target object according to the analyzed motion trajectory, and then processes the target event.
According to the event processing method provided by the embodiment of the present disclosure, at least one of the capturing track and the identity (ID) track of the target object is acquired according to the object information, so that the motion track of the target object is obtained by data fusion. In addition, collision detection analysis is performed on the motion track of the target object, and key track points are manually verified, which improves the accuracy of the motion track of the target object.
In order to implement the method, the present disclosure provides an event processing apparatus.
Fig. 7 is a block diagram of an event processing apparatus according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus includes:
the image processing module 710 is configured to obtain an image to be detected, and perform feature extraction on the image to be detected to obtain a plurality of feature information of the image to be detected;
a first determining module 720 for determining event information of a target event, the event information including at least one of venue information and occurrence time information;
the retrieval module 730 is used for retrieving in a pre-established object information base according to a plurality of characteristic information of the image to be detected and sequencing the retrieval result according to the event information of the target event;
a second determining module 740, configured to determine object information of a target object in the image to be detected according to the sorting result;
and the positioning module 750 is configured to perform tracking and positioning on the target object according to the object information.
In some embodiments of the present disclosure, the retrieving module 730 includes:
the retrieval obtaining unit 730-1 is configured to perform retrieval in an object information base according to first feature information in a plurality of feature information of an image to be detected to obtain at least one candidate object;
a first obtaining unit 730-2 for obtaining spatiotemporal feature information from the object information of each candidate object;
a first calculating unit 730-3, configured to calculate a first correlation of each candidate object with the target event according to the event information of the target event and the spatio-temporal feature information of each candidate object;
a second obtaining unit 730-4, configured to obtain corresponding feature information from object information of each candidate object according to at least one of second feature information, vehicle feature information, and spatiotemporal feature information among a plurality of feature information of an image to be detected;
the second calculating unit 730-5 is configured to input the at least one feature information and the corresponding feature information into a pre-established discriminant model to obtain a second correlation between each candidate object and the target event;
a ranking unit 730-6 for ranking the at least one candidate object according to the first relevance and the second relevance.
Furthermore, in the embodiment of the present disclosure, the retrieving module 730 further includes:
a determining unit 730-7 for determining whether the candidate object participates in a specific event;
a third obtaining unit 730-8, configured to obtain description information of the specific event in response to the candidate object participating in the specific event;
a fourth obtaining unit 730-9, configured to obtain a thread description keyword of the target event;
a third calculating unit 730-10, configured to calculate a third correlation between the candidate object and the target event according to the description information of the specific event and the thread description keyword of the target event;
the sorting unit 730-6 is specifically configured to:
the at least one candidate object is ranked according to the first relevance, the second relevance, and the third relevance.
In an embodiment of the present disclosure, the positioning module 750 is specifically configured to:
acquiring a motion track of a target object according to the object information; the motion track comprises at least one of a capturing track of the monitoring camera and an identity (ID) track;
combining at least one of the capturing track of the monitoring camera and the ID track of the target object, and performing collision detection analysis on the motion track obtained after combination;
and tracking and positioning the target object according to the motion track after collision detection and analysis.
According to the event processing device of the embodiment of the present disclosure, a plurality of pieces of feature information are extracted from the image to be detected and used to retrieve in the pre-established object information base, so that multiple clues are introduced and the number of retrieval results is reduced. In addition, the retrieval results are ranked according to the occurrence location information and/or the occurrence time information of the target event, which introduces the correlation between the retrieval results and the target event; that is, the retrieval results can be further screened, the accuracy of the retrieval results improved, and the time spent on manual elimination effectively shortened. Furthermore, the target object is tracked and located according to the acquired object information of the target object in the image to be detected, and tracking and locating analysis is performed by fusing data from multiple sources, so that both the accuracy and the efficiency of event processing can be improved.
Fig. 8 is a block diagram of another event processing device according to an embodiment of the present disclosure. As shown in fig. 8, the apparatus further includes:
the establishing module 860 is used for establishing an object information base in advance: the establishing module 860 is specifically configured to:
acquiring a surveillance video stream shot by a surveillance camera, and sampling the surveillance video stream to acquire N video frames; wherein N is a positive integer;
performing target detection on each video frame to determine M target object samples in each video frame; wherein M is a positive integer;
acquiring an image of each target object sample from the N video frames, and performing feature extraction on the image to acquire a plurality of feature information of each target object sample;
establishing object information of each target object sample according to the plurality of characteristic information of each target object sample;
and establishing a library according to the object information of each target object sample to obtain an object information library.
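The frame-sampling step above (obtaining N frames from the surveillance video stream) can be sketched as one simple uniform-sampling policy; the patent only requires that N frames be obtained, so the policy itself is an assumption:

```python
def sample_frames(frame_count, n):
    """Return indices of n frames sampled uniformly from a video stream of
    frame_count frames (one simple sampling policy among many)."""
    if n >= frame_count:
        return list(range(frame_count))  # fewer frames than requested: take all
    step = frame_count / n
    return [int(i * step) for i in range(n)]
```

Each sampled frame would then go through target detection and feature extraction as steps described above, producing the per-sample feature information from which object information is built.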
In some embodiments of the present disclosure, the establishing module 860 is specifically configured to:
acquiring a pre-established discrimination model; the discrimination model is trained by adopting a plurality of characteristic information of the object sample;
grouping each target object sample, inputting a plurality of characteristic information of each target object sample in each group into a discrimination model, and judging whether each target object sample in each group is the same object;
in response to that each target object sample in each group is the same object, combining a plurality of characteristic information of each target object sample in each group to obtain object information of the same object;
and in response to that the target object samples in each group are not the same object, establishing object information of the target object samples according to a plurality of characteristic information of the target object samples in each group.
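The grouping-and-merging logic above can be sketched as follows, with the trained discrimination model abstracted into a caller-supplied `same_object` predicate and each sample's feature information represented as a dict (both assumptions for illustration):

```python
def merge_object_samples(samples, same_object):
    """Build object records from target object samples: merge a sample's
    feature information into an existing object when the (externally
    supplied) discriminant judges them the same object, otherwise start
    a new object record.

    samples: list of feature dicts.  same_object(a, b) -> bool.
    """
    objects = []  # one merged feature dict per distinct object
    for feats in samples:
        for obj in objects:
            if same_object(obj, feats):
                obj.update(feats)  # merge this sample's features into the object
                break
        else:
            objects.append(dict(feats))  # no match: new object record
    return objects
```

Merging same-object samples this way is what prevents the object information base from holding multiple records for one real object.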
It should be noted that 810 to 850 in fig. 8 have the same functions and structures as 710 to 750 in fig. 7, and the description thereof is omitted.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
According to the event processing device provided by the embodiment of the disclosure, when the object information base is established, the feature extraction is respectively performed on each target object sample to obtain a plurality of feature information corresponding to each target object sample, so that the data information coverage of the object information base can be effectively improved, the accuracy and the recall rate of the object information base are greatly improved, and a basic guarantee is provided for accurately acquiring the target object information in the image to be detected and tracking and positioning. In addition, the target object samples of the same object are combined, so that the condition that the object information of a plurality of target objects corresponds to the same object can be avoided, and the accuracy and the recall rate of the object information base can be further improved.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 9 is a block diagram of an electronic device for the event processing method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic apparatus includes: one or more processors 901, memory 902, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 9 takes one processor 901 as an example.
Memory 902 is a non-transitory computer readable storage medium provided by the present disclosure. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method of event processing provided by the present disclosure. The non-transitory computer-readable storage medium of the present disclosure stores computer instructions for causing a computer to perform the method of event processing provided by the present disclosure.
The memory 902, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the methods of event processing in the embodiments of the present disclosure (e.g., the image processing module 710, the first determining module 720, the retrieving module 730, the second determining module 740, and the positioning module 750 shown in fig. 7). The processor 901 executes various functional applications of the server and data processing by executing non-transitory software programs, instructions, and modules stored in the memory 902, that is, implements the event processing method in the above method embodiment. The present disclosure provides a computer program product comprising a computer program which, when executed by a processor 901, implements the event handling method in the above-described method embodiments.
The memory 902 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for event processing, and the like. Further, the memory 902 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901, which may be connected to an event processing electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of event processing may further include: an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903 and the output device 904 may be connected by a bus or other means, and fig. 9 illustrates the connection by a bus as an example.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the event-processed electronic device, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input device. The output devices 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in the cloud computing service system that overcomes the defects of high management difficulty and weak service scalability of traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. An event processing method, comprising:
acquiring an image to be detected, and performing feature extraction on the image to be detected to acquire a plurality of feature information of the image to be detected;
determining event information of a target event, the event information including at least one of venue information and occurrence time information;
searching in a pre-established object information base according to the characteristic information of the image to be detected, and sequencing the search result according to the event information of the target event;
acquiring object information of a target object in the image to be detected according to the sequencing result;
and tracking and positioning the target object according to the object information.
2. The method of claim 1, wherein the object information base is pre-established by:
acquiring a surveillance video stream shot by a surveillance camera, and sampling the surveillance video stream to acquire N video frames; wherein N is a positive integer;
performing target detection on each video frame to determine M target object samples in each video frame; wherein M is a positive integer;
acquiring an image of each target object sample from the N video frames, and performing feature extraction on the image to acquire a plurality of feature information of each target object sample;
establishing object information of each target object sample according to the plurality of characteristic information of each target object sample;
and establishing a library according to the object information of each target object sample to obtain the object information library.
3. The method of claim 2, wherein establishing object information for each target object sample from a plurality of feature information for the each target object sample comprises:
acquiring a pre-established discrimination model; the discriminant model is trained by adopting a plurality of characteristic information of an object sample;
grouping each target object sample, inputting a plurality of characteristic information of each target object sample in each group into the discrimination model, and judging whether each target object sample in each group is the same object;
in response to that each target object sample in each group is the same object, merging a plurality of feature information of each target object sample in each group to obtain object information of the same object;
and in response to that the target object samples in each group are not the same object, establishing object information of the target object samples according to a plurality of characteristic information of the target object samples in each group.
4. The method of claim 1, wherein the retrieving in a pre-established object information base according to the plurality of feature information of the image to be detected and sorting retrieval results according to the event information of the target event comprises:
retrieving in the object information base according to first feature information among the plurality of feature information of the image to be detected to obtain at least one candidate object;
acquiring spatiotemporal feature information from the object information of each candidate object;
calculating a first correlation between each candidate object and the target event according to the event information of the target event and the spatiotemporal feature information of the candidate object;
acquiring corresponding feature information from the object information of each candidate object according to at least one of second feature information, vehicle feature information, and spatiotemporal feature information among the plurality of feature information of the image to be detected;
inputting the at least one feature information and the corresponding feature information into a pre-established discrimination model to obtain a second correlation between each candidate object and the target event;
and ranking the at least one candidate object according to the first correlation and the second correlation.
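The two-correlation ranking of claim 4 can be sketched as follows. The Gaussian distance/time decay for the first (spatiotemporal) correlation, the additive score combination, and all names and parameters (`sigma_km`, `sigma_min`, `model_score`) are illustrative assumptions; the patent does not specify these formulas.

```python
import math


def spatiotemporal_correlation(event, candidate, sigma_km=5.0, sigma_min=60.0):
    """First correlation: decays with the distance (km) and time gap (min)
    between the target event and the candidate's last recorded sighting."""
    d = math.dist(event["location"], candidate["last_location"])
    t = abs(event["time"] - candidate["last_time"])
    return math.exp(-(d / sigma_km) ** 2) * math.exp(-(t / sigma_min) ** 2)


def rank_candidates(event, candidates, model_score):
    """Rank candidates by combining the spatiotemporal (first) correlation
    with a discrimination-model (second) correlation supplied by model_score."""
    scored = []
    for c in candidates:
        r1 = spatiotemporal_correlation(event, c)
        r2 = model_score(c)  # second correlation from the discrimination model
        scored.append((r1 + r2, c["id"]))
    scored.sort(reverse=True)
    return [cid for _, cid in scored]
```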
5. The method of claim 4, further comprising:
determining whether the candidate object has participated in a specific event;
in response to the candidate object having participated in a specific event, acquiring description information of the specific event;
acquiring clue description keywords of the target event;
and calculating a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keywords of the target event;
wherein the ranking the at least one candidate object according to the first correlation and the second correlation comprises:
ranking the at least one candidate object according to the first correlation, the second correlation, and the third correlation.
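The third correlation of claim 5 compares descriptions of specific events the candidate participated in against the target event's clue keywords. A minimal sketch, assuming plain keyword overlap as the text-matching step (the patent leaves the matching method unspecified):

```python
def third_correlation(event_descriptions, clue_keywords):
    """Third correlation: fraction of the target event's clue keywords that
    appear in the description text of the candidate's specific events."""
    if not clue_keywords:
        return 0.0
    text = " ".join(event_descriptions).lower()
    hits = sum(1 for kw in clue_keywords if kw.lower() in text)
    return hits / len(clue_keywords)
```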
6. The method of claim 1, wherein the tracking and locating the target object according to the object information comprises:
acquiring a motion track of the target object according to the object information, wherein the motion track comprises at least one of a capture track of a monitoring camera and an identity (ID) track;
merging at least one of the capture track and the ID track of the target object, and performing conflict detection analysis on the merged motion track;
and tracking and locating the target object according to the motion track after the conflict detection analysis.
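The merge-and-conflict-detection step of claim 6 can be sketched as follows. The patent does not define what a "conflict" is; flagging physically impossible jumps (implied speed above a threshold) between consecutive track points is one plausible reading, and the `max_speed_kmh` parameter and point format are assumptions.

```python
import math


def merge_tracks(camera_track, id_track):
    """Combine camera capture points and ID sighting points into one motion
    track, ordered by timestamp. Each point is (t_hours, x_km, y_km)."""
    return sorted(set(camera_track) | set(id_track))


def detect_conflicts(track, max_speed_kmh=120.0):
    """Return pairs of consecutive points whose implied travel speed exceeds
    max_speed_kmh: the same object cannot be in both places."""
    conflicts = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        dist = math.hypot(x1 - x0, y1 - y0)
        if dt == 0 or dist / dt > max_speed_kmh:
            conflicts.append(((t0, x0, y0), (t1, x1, y1)))
    return conflicts
```

Points flagged here would be dropped or re-verified before the track is used for tracking and locating.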
7. An event processing apparatus comprising:
an image processing module configured to acquire an image to be detected and perform feature extraction on the image to be detected to obtain a plurality of feature information of the image to be detected;
a first determining module configured to determine event information of a target event, the event information comprising at least one of venue information and occurrence time information;
a retrieval module configured to retrieve in a pre-established object information base according to the plurality of feature information of the image to be detected and sort retrieval results according to the event information of the target event;
a second determining module configured to determine object information of a target object in the image to be detected according to the sorting result;
and a positioning module configured to track and locate the target object according to the object information.
8. The apparatus of claim 7, further comprising:
an establishing module configured to pre-establish the object information base, wherein the establishing module is specifically configured to:
acquire a surveillance video stream shot by a monitoring camera, and sample the surveillance video stream to obtain N video frames, wherein N is a positive integer;
perform target detection on each video frame to determine M target object samples in each video frame, wherein M is a positive integer;
acquire an image of each target object sample from the N video frames, and perform feature extraction on the image to obtain a plurality of feature information of each target object sample;
establish object information of each target object sample according to the plurality of feature information of the target object sample;
and build the object information base from the object information of the target object samples.
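The library-building steps of claim 8 (sample N frames, detect M target object samples per frame, extract feature info, collect records) can be sketched as follows. `detect` and `extract_features` stand in for the patent's unnamed detection and feature-extraction models, and the stride-based sampling is an illustrative choice:

```python
def build_object_library(video_frames, detect, extract_features, sample_stride=5):
    """Build an object information library from a surveillance video stream.

    video_frames: ordered frames decoded from the stream.
    detect(frame): returns the target object crops found in a frame.
    extract_features(crop): returns the plural feature info for one sample.
    """
    library = []
    sampled = video_frames[::sample_stride]        # sample N video frames
    for frame_idx, frame in enumerate(sampled):
        for crop in detect(frame):                 # M target object samples
            features = extract_features(crop)      # plurality of feature info
            library.append({"frame": frame_idx, "features": features})
    return library
```

A real pipeline would then run the claim 9 merge step over `library` so that repeated sightings of one object collapse into a single object-information record.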
9. The apparatus of claim 8, wherein the establishing module is specifically configured to:
acquire a pre-established discrimination model, wherein the discrimination model is trained with a plurality of feature information of object samples;
group the target object samples, and input the plurality of feature information of each target object sample in a group into the discrimination model to determine whether the target object samples in the group are the same object;
in response to the target object samples in the group being the same object, merge the plurality of feature information of the target object samples in the group to obtain object information of the same object;
and in response to the target object samples in the group not being the same object, establish object information of each target object sample according to the plurality of feature information of that target object sample.
10. The apparatus of claim 7, wherein the retrieval module comprises:
a retrieval acquisition unit configured to retrieve in the object information base according to first feature information among the plurality of feature information of the image to be detected to obtain at least one candidate object;
a first acquisition unit configured to acquire spatiotemporal feature information from the object information of each candidate object;
a first calculation unit configured to calculate a first correlation between each candidate object and the target event according to the event information of the target event and the spatiotemporal feature information of the candidate object;
a second acquisition unit configured to acquire corresponding feature information from the object information of each candidate object according to at least one of second feature information, vehicle feature information, and spatiotemporal feature information among the plurality of feature information of the image to be detected;
a second calculation unit configured to input the at least one feature information and the corresponding feature information into a pre-established discrimination model to obtain a second correlation between each candidate object and the target event;
and a ranking unit configured to rank the at least one candidate object according to the first correlation and the second correlation.
11. The apparatus of claim 10, wherein the retrieval module further comprises:
a determination unit configured to determine whether the candidate object has participated in a specific event;
a third acquisition unit configured to, in response to the candidate object having participated in a specific event, acquire description information of the specific event;
a fourth acquisition unit configured to acquire clue description keywords of the target event;
and a third calculation unit configured to calculate a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keywords of the target event;
wherein the ranking unit is specifically configured to:
rank the at least one candidate object according to the first correlation, the second correlation, and the third correlation.
12. The apparatus of claim 7, wherein the positioning module is specifically configured to:
acquire a motion track of the target object according to the object information, wherein the motion track comprises at least one of a capture track of a monitoring camera and an identity (ID) track;
merge at least one of the capture track and the ID track of the target object, and perform conflict detection analysis on the merged motion track;
and track and locate the target object according to the motion track after the conflict detection analysis.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 6.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN202110622066.4A 2021-06-03 2021-06-03 Event processing method, device, electronic equipment and storage medium Active CN113378005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110622066.4A CN113378005B (en) 2021-06-03 2021-06-03 Event processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110622066.4A CN113378005B (en) 2021-06-03 2021-06-03 Event processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113378005A true CN113378005A (en) 2021-09-10
CN113378005B CN113378005B (en) 2023-06-02

Family

ID=77575808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110622066.4A Active CN113378005B (en) 2021-06-03 2021-06-03 Event processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113378005B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7970240B1 (en) * 2001-12-17 2011-06-28 Google Inc. Method and apparatus for archiving and visualizing digital images
CN103020303A (en) * 2012-12-31 2013-04-03 中国科学院自动化研究所 Internet-based cross-media landmark historical event extraction and picture retrieval method
CN108932509A (en) * 2018-08-16 2018-12-04 新智数字科技有限公司 A kind of across scene objects search methods and device based on video tracking
US20190197369A1 (en) * 2017-12-22 2019-06-27 Motorola Solutions, Inc Method, device, and system for adaptive training of machine learning models via detected in-field contextual incident timeline entry and associated located and retrieved digital audio and/or video imaging
CN110705476A (en) * 2019-09-30 2020-01-17 深圳市商汤科技有限公司 Data analysis method and device, electronic equipment and computer storage medium
CN110717414A (en) * 2019-09-24 2020-01-21 青岛海信网络科技股份有限公司 Target detection tracking method, device and equipment
US20200074665A1 (en) * 2018-09-03 2020-03-05 Baidu Online Network Technology (Beijing) Co., Ltd. Object detection method, device, apparatus and computer-readable storage medium
CN110888877A (en) * 2019-11-13 2020-03-17 深圳市超视智慧科技有限公司 Event information display method and device, computing equipment and storage medium
CN110942036A (en) * 2019-11-29 2020-03-31 深圳市商汤科技有限公司 Person identification method and device, electronic equipment and storage medium
CN112084939A (en) * 2020-09-08 2020-12-15 深圳市润腾智慧科技有限公司 Image feature data management method and device, computer equipment and storage medium
WO2020248386A1 (en) * 2019-06-14 2020-12-17 平安科技(深圳)有限公司 Video analysis method and apparatus, computer device and storage medium
Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
訾玲玲; 杜军平: "Research on a cross-media information retrieval *** based on emergencies" (基于突发事件的跨媒体信息检索***的研究), 计算机仿真 (Computer Simulation), no. 06 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115431174A (en) * 2022-09-05 2022-12-06 昆山市恒达精密机械工业有限公司 Method and system for medium plate grinding control
CN115431174B (en) * 2022-09-05 2023-11-21 昆山市恒达精密机械工业有限公司 Method and system for controlling grinding of middle plate

Also Published As

Publication number Publication date
CN113378005B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN106339428B (en) Suspect's personal identification method and device based on video big data
CN111967302B (en) Video tag generation method and device and electronic equipment
CN111259751B (en) Human behavior recognition method, device, equipment and storage medium based on video
US20210201161A1 (en) Method, apparatus, electronic device and readable storage medium for constructing key-point learning model
CN111598164A (en) Method and device for identifying attribute of target object, electronic equipment and storage medium
CN110458130B (en) Person identification method, person identification device, electronic equipment and storage medium
CN111506771B (en) Video retrieval method, device, equipment and storage medium
US11048917B2 (en) Method, electronic device, and computer readable medium for image identification
CN111782977A (en) Interest point processing method, device, equipment and computer readable storage medium
CN111026937A (en) Method, device and equipment for extracting POI name and computer storage medium
CN111611903B (en) Training method, using method, device, equipment and medium of motion recognition model
US20220027705A1 (en) Building positioning method, electronic device, storage medium and terminal device
CN111178323B (en) Group behavior recognition method, device, equipment and storage medium based on video
CN109902681B (en) User group relation determining method, device, equipment and storage medium
CN113033458A (en) Action recognition method and device
CN112507090A (en) Method, apparatus, device and storage medium for outputting information
CN112163503A (en) Method, system, storage medium and equipment for generating insensitive track of personnel in case handling area
CN112507833A (en) Face recognition and model training method, device, equipment and storage medium
CN112084812B (en) Image processing method, device, computer equipment and storage medium
CN111783619A (en) Human body attribute identification method, device, equipment and storage medium
CN110473530B (en) Instruction classification method and device, electronic equipment and computer-readable storage medium
CN113378005B (en) Event processing method, device, electronic equipment and storage medium
CN111949820B (en) Video associated interest point processing method and device and electronic equipment
CN111832658B (en) Point-of-interest information processing method and device, electronic equipment and storage medium
CN110889392B (en) Method and device for processing face image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant