CN114078226A - Intelligent production line behavior identification method based on online association of action pipelines


Info

Publication number
CN114078226A
Authority
CN
China
Prior art keywords
action
pipeline
detection
score
frame
Prior art date
Legal status
Granted
Application number
CN202111411477.5A
Other languages
Chinese (zh)
Other versions
CN114078226B (en)
Inventor
甘明刚
苏绍文
王晴
张琰
何玉轩
马千兆
刘晋廷
杜尧
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Priority to CN202111411477.5A
Priority claimed from CN202111411477.5A
Publication of CN114078226A
Application granted
Publication of CN114078226B
Status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/22 — Matching criteria, e.g. proximity measures
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent production line behavior identification method based on online association of action pipelines, which adopts an action pipeline online association algorithm with multi-standard similarity matching and thereby obtains more accurate video-level action detection results. A visual sensor acquires video data of the production line; a frame-level action detection model detects each frame in real time and outputs detection boxes; the detection boxes are associated online under a multi-standard similarity matching principle that combines category consistency, category confidence, spatial overlap, appearance similarity and spatio-temporal similarity; and the video-level behavior recognition result, namely the action pipeline, is output in real time. The method improves the accuracy of behavior recognition, with a particularly marked improvement for behavior categories with large spatial position changes and high movement speed, and is well suited to complex application scenarios on an intelligent production line.

Description

Intelligent online production line behavior identification method based on online correlation of action pipelines
Technical Field
The invention relates to the field of human-machine collaboration on intelligent production lines, and in particular to a behavior recognition method for intelligent production lines based on online association of action pipelines.
Background
Driven by the new generation of artificial intelligence and information and communication technologies, industries such as automobiles, electronics and household appliances are shifting from mass production to large-scale personalized customization, which poses new challenges for production-line organization, information processing, system operation and human-machine coordination. Identifying operator behavior so that the machine understands the operator's intent is a key difficulty in human-machine interaction and collaboration on an intelligent production line.
Behavior recognition on intelligent production lines is currently realized mainly by visual methods, specifically through the spatio-temporal action detection task in computer vision; in this context, action detection refers to behavior recognition on the intelligent production line, and detection comprises determining the spatio-temporal position of a behavior and determining the target behavior category. The spatio-temporal action detection task takes visual information acquired by a visual sensor as input and, through a detection model, outputs frame-level action detection results in the form of detection boxes, each comprising the spatial position and action category of the detection box in one image frame. A video-level action detection result is then output through an association algorithm; it comprises the spatio-temporal position and category of an action pipeline in the video, where an action pipeline is formed by detection boxes on consecutive frames.
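For concreteness, the following minimal Python sketch models frame-level detections and action pipelines as data structures; the class and field names (Detection, Tube, box, score, misses) are illustrative assumptions for this description, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One frame-level action detection result."""
    box: tuple    # (x1, y1, x2, y2) spatial position within the frame
    label: int    # action category index
    score: float  # category confidence score
    t: int        # frame (time) index

@dataclass
class Tube:
    """An action pipeline: detection boxes linked across consecutive frames."""
    label: int
    dets: list = field(default_factory=list)  # one Detection per linked frame
    score: float = 0.0                        # pipeline score
    misses: int = 0                           # consecutive unmatched moments

    def add(self, det: Detection) -> None:
        """Append a newly associated detection and reset the miss counter."""
        self.dets.append(det)
        self.misses = 0
```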
During production, humans and machines must interact in real time, so detection results cannot be output only after video recording ends: action pipelines must be generated accurately in real time, frame-level detection boxes at future moments are unavailable, and past pipeline associations cannot be revised. Owing to the complexity of production-line video backgrounds, differences in inter-class distributions, and viewpoint changes and blurring caused by movement or defocus, spatio-temporal action detection remains a very challenging task; current algorithms that greedily associate the frame-level detection boxes of adjacent frames using spatial overlap alone struggle with multi-operator collaboration, complex production actions and similar production-line situations.
Disclosure of Invention
In view of the above, the invention provides an intelligent production line behavior identification method based on online association of action pipelines; by adopting an action pipeline online association algorithm with multi-standard similarity matching, a more accurate video-level action detection result can be obtained.
To achieve this purpose, the technical solution of the invention is as follows:
The invention discloses an intelligent production line behavior identification method based on online association of action pipelines, comprising the following steps:
Step 1: at the initial moment, obtain the frame-level action detection result corresponding to the video information on the intelligent production line, comprising a number of candidate detection boxes and their category confidence scores. Perform non-maximum suppression on the candidate detection boxes, remove detection boxes with excessive overlap, retain $M_{t=1}$ detection boxes, and sort them by confidence score. Take each of the retained $M_{t=1}$ detection boxes as the first-frame detection box of an action pipeline, creating $M_{t=1}$ action pipelines whose scores are the category confidence scores of the corresponding detection boxes; sort the action pipelines by pipeline score. Initialization of the action pipelines is then complete.
Step 2: at the current moment, obtain the frame-level action detection result corresponding to the video information on the intelligent production line, comprising a number of candidate detection boxes and their category confidence scores. Perform non-maximum suppression on the candidate detection boxes at the current moment, remove detection boxes with excessive overlap, retain N detection boxes, and sort them by confidence score.
Step 3: calculate the association score matrix between the action pipelines still alive at the current moment and the N detection boxes obtained in step 2, where the association score $s_{ij}^{t}$ between the ith action pipeline $T_i$ and the jth detection box $b_j^{t}$ is:

$$s_{ij}^{t} = \mathrm{label}_{ij} + \lambda_c \cdot \mathrm{confidence}_{ij} + \lambda_s \cdot \mathrm{overlap}_{ij} + \lambda_A \cdot \mathrm{appearance}_{ij} + \lambda_R \cdot \mathrm{relation}_{ij}$$

where $\mathrm{label}$ is the action category consistency score, $\lambda_c$ is the weight of the category confidence score $\mathrm{confidence}$, $\lambda_s$ is the weight of the spatial overlap score $\mathrm{overlap}$, $\lambda_A$ is the weight of the appearance similarity score $\mathrm{appearance}$, and $\lambda_R$ is the weight of the spatio-temporal relation score $\mathrm{relation}$.
Step 4: match the action pipelines still alive at the current moment with the N detection boxes obtained in step 2, in descending order of their pipeline scores at the previous moment.

The pipeline score of the ith action pipeline at time T is:

$$S_i^{T} = \frac{1}{k} \sum_{t=T-k+1}^{T} s_i^{t}$$

where $S_i^{T}$ denotes the pipeline score of action pipeline i at time T, k denotes the last k associations, i denotes the pipeline serial number, and $s_i^{t}$ denotes the association score of pipeline i at time t.
The matching process for one action pipeline is as follows:

Screen out all candidate detection boxes whose average intersection-over-union with the last k detection boxes of the action pipeline exceeds the threshold. Among the screened candidate detection boxes, take the detection box with the highest association score with the action pipeline according to the association score matrix, add it to the action pipeline, and take its association score with the action pipeline at this moment as the association score of the action pipeline at this moment; then delete this detection box from the candidate detection boxes and from the association score matrix at this moment, and match the next action pipeline.

If an action pipeline has no candidate detection box whose spatial overlap exceeds the threshold, no detection box is added at time t; if no new detection box is added for k consecutive moments, the action pipeline is declared dead.

For each action pipeline to which a detection box was added, update the pipeline score using the pipeline score calculation formula with the association score at the current moment.

Step 5: sort all surviving action pipelines by pipeline score; output the spatio-temporal positions and categories of all surviving action pipelines at the current moment; update the current moment with the next moment and return to step 2.
The association score of the ith action pipeline at time t is calculated as:

$$s_i^{t} = \max_{j \in \{1,\dots,K\}} s_{ij}^{t}$$

where K is the total number of candidate detection boxes whose average intersection-over-union with the last k detection boxes of action pipeline i exceeds the threshold.
The category confidence score confidence is the sum of the confidences of the candidate detection box and the pipeline with respect to the current category; the spatial overlap score overlap is the mean intersection-over-union of the candidate detection box with the last k detection boxes of the pipeline; the appearance similarity score appearance and the spatio-temporal relation score relation are computed with the L2 norm from the appearance features and spatio-temporal feature vectors, respectively, contained in the candidate detection box and in the last k detection boxes of the pipeline.
The action category consistency score is calculated as:

$$\mathrm{label}_{ij} = \varphi_{l^*}(T_i) + \varphi_{l^*}(b_j^{t}) - \psi_l \cdot \mathbb{1}\left[\, l_{tube} \neq l_{det} \,\right]$$

where i is the pipeline serial number, j is the detection box serial number, $\varphi_l$ is the category confidence score of a pipeline or detection box with respect to category l, and $\psi_l$ is the penalty term for inconsistency between the pipeline and the detection box. While calculating the action category consistency score, the formula synchronously performs temporal calibration of the pipeline:

$$l^* = \arg\max_{l \in C} \left[ \varphi_l(T_i) + \varphi_l(b_j^{t}) \right]$$

where $l^*$ is the best category, l is a category, C is the set of all categories of the dataset, $l_{tube}$ is the action pipeline category, and $l_{det}$ is the detection box category.
Wherein M is 20.
Wherein k is 5.
Advantageous effects:
The invention uses a visual sensor to acquire video data of the production line, detects each frame in real time with a frame-level action detection model to output detection boxes, associates the detection boxes online under a multi-standard similarity matching principle comprising category consistency, category confidence, spatial overlap, appearance similarity and spatio-temporal similarity, and outputs the video-level behavior recognition result, namely the action pipeline, in real time. This improves the accuracy of behavior recognition, with a particularly marked improvement for behavior categories with large spatial position changes and high movement speed, and suits the complex application scenarios of an intelligent production line.
Drawings
FIG. 1 is an architecture diagram of the action pipeline online association method with multi-standard similarity matching (MSRT) employed by the present invention.
FIG. 2 compares the detection results of the real-time online action detection association method (ROAD), the micro action pipeline association method (ACT), and the MSRT method of the present invention.
FIG. 3 is an architecture diagram of the real-time online action detection association method (ROAD).
FIG. 4 is an architecture diagram of the micro action pipeline association method (ACT).
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides an intelligent production line behavior identification method based on online association of action pipelines, which adopts an action pipeline online association algorithm with multi-standard similarity matching; the architecture of the algorithm is shown in FIG. 1. Following the convention of action pipeline association algorithms, action pipelines are generated independently for each specific action category, and different action categories do not influence each other. The dataset used by the frame-level action detection model comprises C action categories, and the following steps are executed in parallel for each action category.
Step 1: at the initial moment (t = 1), obtain the frame-level action detection result corresponding to the video information on the intelligent production line, comprising a number of candidate detection boxes and their category confidence scores. Perform non-maximum suppression on the candidate detection boxes, remove detection boxes with excessive overlap, retain $M_{t=1}$ detection boxes (in this embodiment $M_{t=1}$ is set to 20), and sort them by confidence score. Take each of the retained $M_{t=1}$ detection boxes as the first-frame detection box of an action pipeline, creating $M_{t=1}$ action pipelines whose scores are the category confidence scores of the corresponding detection boxes; sort the action pipelines by pipeline score. Initialization of the action pipelines is then complete.
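As an illustration of this initialization step, the following sketch (building on the Detection and Tube classes above) performs greedy non-maximum suppression and seeds one pipeline per retained box; the IoU threshold of 0.5 and the helper names are assumptions for illustration only.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(dets, iou_thr=0.5, keep_top=20):
    """Greedy NMS: drop boxes overlapping a higher-scoring box, keep top M."""
    dets = sorted(dets, key=lambda d: d.score, reverse=True)
    kept = []
    for d in dets:
        if all(iou(d.box, k.box) < iou_thr for k in kept):
            kept.append(d)
    return kept[:keep_top]

def init_tubes(first_frame_dets):
    """Step 1: each retained box seeds one action pipeline (M_{t=1} tubes)."""
    tubes = [Tube(label=d.label, dets=[d], score=d.score)
             for d in nms(first_frame_dets)]
    tubes.sort(key=lambda tb: tb.score, reverse=True)
    return tubes
```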
Step 2: at time t (t > 1), obtain the frame-level action detection result corresponding to the video information on the intelligent production line, comprising a number of candidate detection boxes and their category confidence scores; the jth candidate detection box is denoted $b_j^{t}$ and its category confidence score is denoted $\varphi_j^{t}$. Perform non-maximum suppression on the candidate detection boxes at the current moment, remove detection boxes with excessive overlap, retain N detection boxes, and sort them by confidence score.
Step 3: taking into account the action category consistency score label, the category confidence score confidence, the spatial overlap score overlap, the appearance similarity score appearance, and the spatio-temporal relation score relation, calculate the association score matrix between the $M_t$ action pipelines still alive at time t and the N detection boxes obtained in step 2, where the association score $s_{ij}^{t}$ between the ith action pipeline $T_i$ and the jth detection box $b_j^{t}$ is:

$$s_{ij}^{t} = \mathrm{label}_{ij} + \lambda_c \cdot \mathrm{confidence}_{ij} + \lambda_s \cdot \mathrm{overlap}_{ij} + \lambda_A \cdot \mathrm{appearance}_{ij} + \lambda_R \cdot \mathrm{relation}_{ij} \quad (1)$$

where $\lambda_c$ is the weight of the category confidence score, $\lambda_s$ is the weight of the spatial overlap score, $\lambda_A$ is the weight of the appearance similarity score, and $\lambda_R$ is the weight of the spatio-temporal relation score.
the action category consistency score is calculated as follows:
Figure BDA0003374215740000067
where i is the pipeline serial number, j is the detection frame serial number, philA category confidence score, ψ, for a pipe or detection box with respect to category llFor the punishment item of the inconsistency of the pipeline and the detection frame, the formula synchronously carries out time calibration on the pipeline in the process of calculating the action class consistency score, and the time calibration is as follows:
Figure BDA0003374215740000071
wherein l*Is the best class, l is the class, C is the set of all classes of the dataset, l is the best classtudeFor the action conduit class,/detIs a detection frame category.
The category confidence score confidence is the sum of the confidences of the candidate detection box and the pipeline with respect to the current category; the spatial overlap score overlap is the mean intersection-over-union of the candidate detection box with the last k detection boxes of the pipeline; the appearance similarity score appearance and the spatio-temporal relation score relation are computed with the L2 norm from the appearance features and spatio-temporal feature vectors, respectively, contained in the candidate detection box and in the last k detection boxes of the pipeline.
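The following sketch shows one way the five criteria above could be combined into a single association score; the weight values, the feature-extraction callables, and the simplified form of the label term are illustrative assumptions consistent with the description, not the patent's reference implementation.

```python
import numpy as np

def mean_iou_last_k(tube, box, k=5):
    """overlap term: mean IoU of a candidate box with the tube's last k boxes."""
    recent = tube.dets[-k:]
    return sum(iou(d.box, box) for d in recent) / len(recent)

def l2_similarity(a, b):
    """Map an L2 feature distance to a similarity in (0, 1]."""
    return 1.0 / (1.0 + float(np.linalg.norm(np.asarray(a) - np.asarray(b))))

def association_score(tube, det, app_feat, st_feat,
                      lam_c=1.0, lam_s=1.0, lam_a=0.5, lam_r=0.5,
                      psi=1.0, k=5):
    """Multi-standard association score s_ij between a tube and a detection.

    app_feat / st_feat are callables mapping a Detection to its appearance
    and spatio-temporal feature vectors (provided by the detection model).
    """
    recent = tube.dets[-k:]
    # label: category consistency, penalized by psi when the labels disagree
    label = tube.score + det.score - (psi if tube.label != det.label else 0.0)
    confidence = tube.score + det.score               # sum of class confidences
    overlap = mean_iou_last_k(tube, det.box, k)       # mean IoU, last k boxes
    appearance = sum(l2_similarity(app_feat(d), app_feat(det))
                     for d in recent) / len(recent)
    relation = sum(l2_similarity(st_feat(d), st_feat(det))
                   for d in recent) / len(recent)
    return (label + lam_c * confidence + lam_s * overlap
            + lam_a * appearance + lam_r * relation)
```

Averaging over the last k boxes, rather than comparing only against the single previous box, is what gives the candidate-pool mechanism its robustness to one bad frame.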
Step 4, MtSequentially matching the action pipelines with the N detection frames obtained in the step 2 according to the sequence of the pipeline scores at the last moment;
the pipeline score calculation formula of the ith action pipeline at the time T is as follows:
Figure BDA0003374215740000072
wherein the content of the first and second substances,
Figure BDA0003374215740000073
represents the pipeline score of the action pipeline i at time T, k represents the last k times of association, i represents the pipeline serial number,
Figure BDA0003374215740000074
to representThe association score of pipe i at time t.
The matching process for one action pipeline is as follows:

Screen out all candidate detection boxes whose average intersection-over-union with the last k detection boxes of the action pipeline exceeds the threshold (k defaults to 5 in this embodiment). Among the screened candidate detection boxes, take the detection box with the highest association score with the action pipeline according to the association score matrix, add it to the action pipeline, and take its association score with the action pipeline at this moment as the association score of the action pipeline at this moment; then delete this detection box from the candidate detection boxes and from the association score matrix at this moment, and match the next action pipeline.

The association score of action pipeline i at time t is calculated as:

$$s_i^{t} = \max_{j \in \{1,\dots,K\}} s_{ij}^{t} \quad (5)$$

where K is the total number of candidate detection boxes whose average intersection-over-union with the last k detection boxes of action pipeline i exceeds the threshold.
If an action pipeline has no candidate detection box whose spatial overlap exceeds the threshold, no detection box is added at time t; if no new detection box is added for k consecutive moments, the action pipeline is declared dead.

For each action pipeline to which a detection box was added, the pipeline score is updated using formula (4).

Step 5: sort all surviving action pipelines by pipeline score; output the spatio-temporal positions and categories of all surviving action pipelines at the current moment; set t to t + 1 and return to step 2.
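Putting steps 2 to 5 together, one online association step for a single action category might look as follows; the IoU threshold value, the miss bookkeeping, and the helpers from the previous sketches are assumptions for illustration.

```python
def associate_step(tubes, detections, app_feat, st_feat, iou_thr=0.3, k=5):
    """One online association step (steps 2-5) for a single action category."""
    candidates = nms(detections)                       # step 2: NMS survivors
    tubes.sort(key=lambda tb: tb.score, reverse=True)  # highest-scoring first
    for tube in tubes:
        # Screen candidates by mean IoU against the tube's last k boxes.
        pool = [d for d in candidates
                if mean_iou_last_k(tube, d.box, k) > iou_thr]
        if not pool:
            tube.misses += 1                           # nothing added at time t
            continue
        # Step 4: pick the candidate with the highest association score.
        scored = [(association_score(tube, d, app_feat, st_feat, k=k), d)
                  for d in pool]
        s_best, best = max(scored, key=lambda sd: sd[0])
        tube.add(best)
        candidates.remove(best)                        # one box, one tube only
        # Pipeline score: mean of the association scores at the last k moments.
        hist = getattr(tube, "assoc_scores", [])
        hist.append(s_best)
        tube.assoc_scores = hist
        tube.score = sum(hist[-k:]) / len(hist[-k:])
    # A tube unmatched for k consecutive moments is declared dead.
    alive = [tb for tb in tubes if tb.misses < k]
    alive.sort(key=lambda tb: tb.score, reverse=True)  # step 5: rank and output
    return alive
```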
To demonstrate the advantages of the proposed online association algorithm over traditional association algorithms, a comparative experiment was carried out on the UCF101-24 behavior detection dataset; the results are shown in FIG. 2.
It can be seen that the detection accuracy of the action pipeline online association algorithm based on multi-standard similarity matching (MSRT) is generally higher than that of the real-time online action detection algorithm (ROAD) and the micro action pipeline association algorithm (ACT), especially for behaviors such as skiing and water skiing with large spatial displacement changes and high movement speed.
The reasons are as follows:
1) The real-time online action detection algorithm (ROAD), shown in FIG. 3, is greedy and prone to falling into local optima; its score considers only confidence, and its threshold considers only spatial overlap, which is too simple; multiple categories do not influence each other (pipelines overlap, increasing the computational load); and detection association, pipeline category determination, and temporal calibration are performed step by step, which is computationally expensive.
2) The micro action pipeline association algorithm (ACT), shown in FIG. 4, modifies the real-time online action detection model (ROAD) into a micro pipeline algorithm and associates micro-pipeline detections of 7-frame length instead of individual frame-level detections. It mainly adopts soft non-maximum suppression: during suppression, overlapping candidate detection boxes are not deleted, but their confidence scores are lowered. It further defines an intersection-over-union for two micro pipelines, namely the sum of the per-frame intersection-over-union within the overlapping time divided by the length of the overlapping time. Finally, after association is finished, the N overlapping micro pipelines are averaged over time and merged into one complete pipeline.
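As a small illustration of the ACT overlap measure just described, the sketch below computes the intersection-over-union of two micro pipelines, assuming each tubelet is represented as a dict from frame index to box.

```python
def tubelet_iou(tube_a, tube_b):
    """ACT-style overlap of two micro pipelines: the per-frame box IoU summed
    over the temporally overlapping frames, divided by the overlap length."""
    common = sorted(set(tube_a) & set(tube_b))  # frame indices present in both
    if not common:
        return 0.0
    return sum(iou(tube_a[t], tube_b[t]) for t in common) / len(common)
```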
3) The action pipeline online association algorithm based on multi-standard similarity matching (MSRT) removes the hard intersection-over-union threshold and replaces it with a multi-similarity association module that comprehensively uses several judgment criteria. The experiment on the influence of the sampling rate on the results of the various association algorithms is shown in Table 1.
TABLE 1 Influence of the sampling rate on the results of the various association algorithms
(Table 1 appears only as an image in the original publication; its data are not reproduced here.)
The association criterion of the MSRT algorithm considers not only the spatial overlap between the newly added detection box and the existing detection boxes in the action pipeline, but also the appearance similarity of the two detection boxes, the spatio-temporal relation similarity, and their respective frame-level action scores, which improves the accuracy of association matching. To avoid the local optima into which a greedy algorithm easily falls, the MSRT algorithm adopts a candidate detection pool mechanism: a newly added detection box is matched against the last several detection boxes in the action pipeline rather than only the single most recent one, realizing efficient and accurate online spatio-temporal action detection and better guaranteeing the real-time performance and accuracy of human-machine interaction on the production line. The online association algorithm proposed by the invention imposes the following requirements:
the average spatial overlap, i.e., the average intersection-over-union, between a newly added detection box and the last k detection boxes of the action pipeline to be associated must exceed a threshold;
one detection box cannot be associated with multiple action pipelines simultaneously;
the score used as the basis of association matching comprehensively considers spatial overlap, appearance similarity, and spatio-temporal relation similarity;
a candidate pool of detection boxes from the last k frames is maintained.
The MSRT method considers not only the spatial overlap between newly added detection boxes and the existing detection boxes in the action pipeline, but also their respective frame-level detection scores and the appearance similarity and spatio-temporal relation similarity between detection boxes, which improves matching accuracy.
For the same frame-level detection results, the real-time online action detection association algorithm (ROAD), the micro action pipeline association algorithm (ACT), and the action pipeline online association algorithm based on multi-standard similarity matching (MSRT) were compared. The miss-association threshold is the ROAD default of 3 consecutive missed detections, and the intersection-over-union threshold is 0, i.e., removed, relying only on the category confidence score of the frame-level detections. Analysis of the results shows the following: the lower the sampling frequency, the greater the advantage of MSRT over the other two algorithms; under dense sampling, the algorithms with an intersection-over-union threshold perform relatively well, because the threshold avoids erroneous associations; whereas at lower sampling frequencies, the models with an intersection-over-union threshold perform progressively worse as the threshold increases, because the hard threshold causes erroneous associations and pipeline splitting.
The experiments show that the action pipeline online association algorithm based on multi-standard similarity matching can accurately detect and associate behaviors with violent spatial displacement changes online, and can meet the working requirements of a complex intelligent production line.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. An intelligent production line behavior identification method based on online association of action pipelines, characterized by comprising the following steps:
Step 1: at the initial moment, obtaining the frame-level action detection result corresponding to the video information on the intelligent production line, comprising a number of candidate detection boxes and their category confidence scores; performing non-maximum suppression on the candidate detection boxes, removing detection boxes with excessive overlap, retaining $M_{t=1}$ detection boxes, and sorting them by confidence score; taking each of the retained $M_{t=1}$ detection boxes as the first-frame detection box of an action pipeline, creating $M_{t=1}$ action pipelines whose scores are the category confidence scores of the corresponding detection boxes, and sorting the action pipelines by pipeline score, completing initialization of the action pipelines;
Step 2: at the current moment, obtaining the frame-level action detection result corresponding to the video information on the intelligent production line, comprising a number of candidate detection boxes and their category confidence scores; performing non-maximum suppression on the candidate detection boxes at the current moment, removing detection boxes with excessive overlap, retaining N detection boxes, and sorting them by confidence score;
Step 3: calculating the association score matrix between the action pipelines still alive at the current moment and the N detection boxes obtained in step 2, wherein the association score $s_{ij}^{t}$ between the ith action pipeline $T_i$ and the jth detection box $b_j^{t}$ is:

$$s_{ij}^{t} = \mathrm{label}_{ij} + \lambda_c \cdot \mathrm{confidence}_{ij} + \lambda_s \cdot \mathrm{overlap}_{ij} + \lambda_A \cdot \mathrm{appearance}_{ij} + \lambda_R \cdot \mathrm{relation}_{ij}$$

wherein $\mathrm{label}_{ij}$ is the action category consistency score, $\lambda_c$ is the weight of the category confidence score $\mathrm{confidence}$, $\lambda_s$ is the weight of the spatial overlap score $\mathrm{overlap}$, $\lambda_A$ is the weight of the appearance similarity score $\mathrm{appearance}$, and $\lambda_R$ is the weight of the spatio-temporal relation score $\mathrm{relation}$;
Step 4: matching the action pipelines still alive at the current moment with the N detection boxes obtained in step 2, in descending order of their pipeline scores at the previous moment;

the pipeline score of the ith action pipeline at time T being:

$$S_i^{T} = \frac{1}{k} \sum_{t=T-k+1}^{T} s_i^{t}$$

wherein $S_i^{T}$ denotes the pipeline score of action pipeline i at time T, k denotes the last k associations, i denotes the pipeline serial number, and $s_i^{t}$ denotes the association score of pipeline i at time t;
the matching process for one action pipeline being as follows:

screening out all candidate detection boxes whose average intersection-over-union with the last k detection boxes of the action pipeline exceeds the threshold; among the screened candidate detection boxes, taking the detection box with the highest association score with the action pipeline according to the association score matrix, adding it to the action pipeline, taking its association score with the action pipeline at this moment as the association score of the action pipeline at this moment, then deleting this detection box from the candidate detection boxes and from the association score matrix at this moment, and matching the next action pipeline;

if an action pipeline has no candidate detection box whose spatial overlap exceeds the threshold, adding no detection box at time t, and if no new detection box is added for k consecutive moments, declaring the action pipeline dead;

for each action pipeline to which a detection box was added, updating the pipeline score using the pipeline score calculation formula with the association score at the current moment;

Step 5: sorting all surviving action pipelines by pipeline score; outputting the spatio-temporal positions and categories of all surviving action pipelines at the current moment; and updating the current moment with the next moment and returning to step 2.
2. The intelligent production line behavior identification method based on online association of action pipelines as claimed in claim 1, wherein the association score of the ith action pipeline at time t is calculated as:

$$s_i^{t} = \max_{j \in \{1,\dots,K\}} s_{ij}^{t}$$

wherein K is the total number of candidate detection boxes whose average intersection-over-union with the last k detection boxes of action pipeline i exceeds the threshold.
3. The method according to claim 1, wherein the category confidence score confidence is the sum of the confidences of the candidate detection box and the pipeline with respect to the current category; the spatial overlap score overlap is the mean intersection-over-union of the candidate detection box with the last k detection boxes of the pipeline; and the appearance similarity score appearance and the spatio-temporal relation score relation are computed with the L2 norm from the appearance features and spatio-temporal feature vectors, respectively, contained in the candidate detection box and in the last k detection boxes of the pipeline.
4. The intelligent production line behavior identification method based on online association of action pipelines as claimed in claim 1, wherein the action category consistency score is calculated as:

$$\mathrm{label}_{ij} = \varphi_{l^*}(T_i) + \varphi_{l^*}(b_j^{t}) - \psi_l \cdot \mathbb{1}\left[\, l_{tube} \neq l_{det} \,\right]$$

wherein i is the pipeline serial number, j is the detection box serial number, $\varphi_l$ is the category confidence score of a pipeline or detection box with respect to category l, and $\psi_l$ is the penalty term for inconsistency between the pipeline and the detection box; while calculating the action category consistency score, the formula synchronously performs temporal calibration of the pipeline:

$$l^* = \arg\max_{l \in C} \left[ \varphi_l(T_i) + \varphi_l(b_j^{t}) \right]$$

wherein $l^*$ is the best category, l is a category, C is the set of all categories of the dataset, $l_{tube}$ is the action pipeline category, and $l_{det}$ is the detection box category.
5. The behavior recognition method for the intelligent production line based on the online correlation of the action pipelines as claimed in any one of claims 1 to 4, wherein M is 20.
6. The behavior recognition method for the intelligent production line based on the online correlation of the action pipelines as claimed in any one of claims 1 to 4, wherein k is 5.
CN202111411477.5A 2021-11-25 Intelligent production line behavior identification method based on online correlation of action pipelines Active CN114078226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111411477.5A CN114078226B (en) 2021-11-25 Intelligent production line behavior identification method based on online correlation of action pipelines


Publications (2)

Publication Number Publication Date
CN114078226A (en) 2022-02-22
CN114078226B CN114078226B (en) 2024-07-02


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331636A (en) * 2016-08-31 2017-01-11 东北大学 Intelligent video monitoring system and method of oil pipelines based on behavioral event triggering
CN107609460A (en) * 2017-05-24 2018-01-19 南京邮电大学 Human behavior recognition method fusing spatio-temporal dual-network streams and an attention mechanism
US20180137647A1 (en) * 2016-11-15 2018-05-17 Samsung Electronics Co., Ltd. Object detection method and apparatus based on dynamic vision sensor
WO2018233205A1 (en) * 2017-06-21 2018-12-27 北京大学深圳研究生院 Method for detecting pedestrians in an image by using a Gaussian penalty
CN111178523A (en) * 2019-08-02 2020-05-19 腾讯科技(深圳)有限公司 Behavior detection method and device, electronic equipment and storage medium
CN113591758A (en) * 2021-08-06 2021-11-02 全球能源互联网研究院有限公司 Human behavior recognition model training method and device and computer equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant