CN114078226B - Intelligent production line behavior identification method based on online correlation of action pipelines - Google Patents

Intelligent production line behavior identification method based on online correlation of action pipelines

Info

Publication number
CN114078226B
CN114078226B
Authority
CN
China
Prior art keywords
action
pipeline
score
detection
frame
Prior art date
Legal status
Active
Application number
CN202111411477.5A
Other languages
Chinese (zh)
Other versions
CN114078226A (en)
Inventor
甘明刚
苏绍文
王晴
张琰
何玉轩
马千兆
刘晋廷
杜尧
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Priority to CN202111411477.5A
Publication of CN114078226A
Application granted
Publication of CN114078226B
Status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent production line behavior recognition method based on online association of action pipelines. It adopts an online action-pipeline association algorithm with multi-criteria similarity matching and thereby obtains more accurate video-level action detection results. A visual sensor acquires production line video data; a frame-level action detection model detects and outputs detection frames in real time; the detection frames are associated online under a multi-criteria similarity matching principle that combines category consistency, category confidence, spatial overlap, appearance similarity, and spatio-temporal similarity; and a video-level behavior recognition result, namely an action pipeline, is output in real time. This improves the accuracy of behavior recognition, with a particularly marked gain for behavior categories with large spatial displacement and high speed, and is better suited to the complex application scenarios of an intelligent production line.

Description

Intelligent production line behavior identification method based on online correlation of action pipelines
Technical Field
The invention relates to the field of human-machine interaction and collaboration on intelligent production lines, and in particular to an intelligent production line behavior recognition method based on online association of action pipelines.
Background
Driven by new-generation artificial intelligence and information and communication technologies, industries such as automobiles, electronics, and household appliances are moving from large-scale customized production to large-scale personalized customization, which poses new challenges for production line organization, information processing, system operation, and human-machine collaboration. In particular, how a machine understands an operator's intention and recognizes the operator's behavior is a central difficulty in human-machine interaction and collaboration on an intelligent production line.
Behavior recognition on intelligent production lines is currently realized mainly by visual methods, specifically through the spatio-temporal action detection task in computer vision; action detection here refers to behavior recognition on the intelligent production line and covers both the determination of the spatio-temporal position of a behavior and the determination of the target behavior category. The spatio-temporal action detection task takes the visual information acquired by a visual sensor as input and, through a detection model, outputs frame-level action detection results in the form of detection frames, each comprising the spatial position of the detection frame in the frame image and its action category. An association algorithm then produces the video-level action detection result, which comprises the spatio-temporal position and category of each action pipeline in the video, an action pipeline being composed of detection frames on consecutive frames.
During production, people and machines need to interact in real time, so detection results cannot be output only after video recording has finished: the action pipelines must be generated accurately and in real time, frame-level detection frames from future moments are unavailable, and past pipeline associations cannot be revised. Because of the complexity of production line video backgrounds, differences in inter-class distribution, viewpoint changes, and blurring caused by motion or defocus, spatio-temporal action detection remains a very challenging task. The existing greedy association algorithm relies only on the spatial overlap of frame-level detection frames in adjacent frames, and therefore struggles to handle multi-person cooperation and complex production actions on a production line.
Disclosure of Invention
In view of the above, the invention provides an intelligent production line behavior recognition method based on online association of action pipelines, which adopts an online action-pipeline association algorithm with multi-criteria similarity matching and thereby obtains more accurate video-level action detection results.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
the invention discloses an intelligent production line behavior identification method based on online association of action pipelines, which comprises the following steps:
Step 1, at an initial moment, obtaining a frame-level action detection result corresponding to video information on an intelligent production line, the result comprising a number of candidate detection frames and their category confidence scores; carrying out non-maximum suppression on the candidate detection frames to remove detection frames with high overlap, retaining M_{t=1} detection frames, and sorting them by confidence score; taking the M_{t=1} detection frames respectively as the first-frame detection frames of action pipelines, creating M_{t=1} action pipelines whose scores are the category confidence scores of the corresponding detection frames, and sorting the action pipelines by pipeline score, thereby completing the initialization of the action pipelines;
Step 2, obtaining a frame-level action detection result corresponding to video information on the intelligent production line at the current moment, the result comprising a number of candidate detection frames and their category confidence scores; carrying out non-maximum suppression on the candidate detection frames at the current moment to remove detection frames with high overlap, retaining N detection frames, and sorting them by confidence score;
Step 3, calculating an association score matrix between the action pipelines still surviving at the current moment and the N detection frames obtained in step 2, wherein the association score $s_{i,j}^{t}$ of the i-th action pipeline $\mathcal{T}_{i}$ and the j-th detection frame $d_{j}^{t}$ is:

$$s_{i,j}^{t} = \mathrm{label}_{i,j} + \lambda_{C}\cdot\mathrm{confidence} + \lambda_{S}\cdot\mathrm{overlap} + \lambda_{A}\cdot\mathrm{appearance} + \lambda_{R}\cdot\mathrm{relation}$$

wherein label is the action category consistency score; $\lambda_{C}$ is the weight of the category confidence score confidence, $\lambda_{S}$ is the weight of the spatial overlap score overlap, $\lambda_{A}$ is the weight of the appearance similarity score appearance, and $\lambda_{R}$ is the weight of the spatio-temporal relationship score relation;
Step 4, sequentially matching the action pipelines still surviving at the current moment with the N detection frames obtained in step 2, in the order of the pipeline scores at the previous moment;
The pipeline score of the i-th action pipeline at moment T is calculated as:

$$S_{i}^{T} = \frac{1}{k}\sum_{t=T-k+1}^{T} s_{i}^{t}$$

wherein $S_{i}^{T}$ denotes the pipeline score of action pipeline i at time T, k denotes the number of most recent associations considered, i denotes the pipeline number, and $s_{i}^{t}$ denotes the association score of pipeline i at time t;
The specific matching process for one of the action pipelines is as follows:
screening all candidate detection frames whose average intersection-over-union (IoU) with the last k detection frames of the action pipeline exceeds a threshold; among the screened candidate detection frames, obtaining the one with the highest association score with the action pipeline according to the association score matrix, adding it to the action pipeline, taking its association score with the action pipeline as the association score of the action pipeline at this moment, deleting it from the candidate detection frames and from the association score matrix at this moment, and proceeding to match the next action pipeline;
if no candidate detection frame whose spatial overlap exceeds the threshold exists for an action pipeline, no detection frame is added at moment t; if no new detection frame is added for k consecutive moments, the action pipeline is deemed to have died;
for an action pipeline with a newly added detection frame, updating the pipeline score using the pipeline score calculation formula with the association score of the action pipeline at the current moment;
Step 5, sorting all surviving action pipelines by pipeline score; outputting the spatio-temporal positions and categories of all surviving action pipelines at the current moment; and taking the next moment as the current moment and returning to step 2 with the updated current moment.
The association score of the i-th action pipeline at moment t is calculated as:

$$s_{i}^{t} = \max_{j\in\{1,\dots,K\}} s_{i,j}^{t}$$

wherein K is the total number of candidate detection frames whose average IoU with the last k detection frames of action pipeline i exceeds the threshold.
The category confidence score confidence is the sum of the confidences of the candidate detection frame and the pipeline for the current category; the spatial overlap score overlap is the average IoU between the candidate detection frame and the last k detection frames of the pipeline; the appearance similarity score appearance and the spatio-temporal relationship score relation are computed with L2 norms over the appearance features and spatio-temporal feature vectors of the candidate detection frame and of the last k detection frames of the pipeline.
The action category consistency score is calculated as follows:

$$\mathrm{label}_{i,j} = \varphi_{l^{*}}(\mathcal{T}_{i}) + \varphi_{l^{*}}(d_{j}^{t}) - \psi\cdot\mathbb{1}\left[\, l_{\mathrm{tube}} \neq l_{\mathrm{det}} \,\right]$$

wherein i is the pipeline index, j is the detection frame index, $\varphi_{l}$ is the class confidence score of the pipeline or of the detection frame for class l, and $\psi$ is a penalty term for class inconsistency between the pipeline and the detection frame; this formula simultaneously performs time calibration of the pipeline while the action category consistency score is computed, the time calibration being:

$$l^{*} = \arg\max_{l\in C}\left(\varphi_{l}(\mathcal{T}_{i}) + \varphi_{l}(d_{j}^{t})\right)$$

where $l^{*}$ is the best category, l is a category, C is the set of all categories of the dataset, $l_{\mathrm{tube}}$ is the action pipeline category, and $l_{\mathrm{det}}$ is the detection frame category.
Wherein M_{t=1} is 20.
Wherein k is 5.
The beneficial effects are that:
According to the invention, a visual sensor acquires production line video data, a frame-level action detection model detects and outputs detection frames in real time, the detection frames are associated online under a multi-criteria similarity matching principle comprising category consistency, category confidence, spatial overlap, appearance similarity, and spatio-temporal similarity, and a video-level behavior recognition result, namely an action pipeline, is output in real time. The accuracy of behavior recognition is thereby improved, with a particularly marked gain for behavior categories with large spatial displacement and high speed, making the method better suited to the complex application scenarios of an intelligent production line.
Drawings
FIG. 1 is a block diagram of the multi-criteria similarity matching action pipeline online association method (MSRT) employed by the present invention.
FIG. 2 is a graph comparing the results of the real-time online action detection association method (ROAD), the micro action pipeline association method (ACT), and the multi-criteria similarity matching action pipeline online association method (MSRT) of the present invention.
Fig. 3 is a diagram of the architecture of a real-time online action detection association method (ROAD).
Fig. 4 is a block diagram of a micro-action pipeline association method (ACT).
Detailed Description
The invention will now be described in detail by way of example with reference to the accompanying drawings.
The invention provides an intelligent production line behavior recognition method based on online association of action pipelines, adopting an online action-pipeline association algorithm with multi-criteria similarity matching; a schematic diagram of the algorithm is shown in FIG. 1. Following the convention of action pipeline association algorithms, action pipelines are generated independently for each specific action category, and different action categories do not affect each other. The dataset used by the frame-level action detection model comprises C action categories, and the following steps are performed in parallel for all action categories:
Step 1, at the initial moment (t=1), obtaining the frame-level action detection result corresponding to the video information on the intelligent production line, comprising a number of candidate detection frames and their category confidence scores; carrying out non-maximum suppression on the candidate detection frames to remove detection frames with high overlap, retaining M_{t=1} detection frames (M_{t=1} = 20 in this embodiment), and sorting them by confidence score; taking the M_{t=1} detection frames respectively as the first-frame detection frames of M_{t=1} action pipelines, creating the action pipelines with each pipeline's score set to the category confidence score of its detection frame, and sorting the action pipelines by pipeline score, completing the initialization of the action pipelines. A minimal sketch of this step is given below.
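For concreteness, the initialization of step 1 can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the patented implementation: the (x1, y1, x2, y2) box format, the NMS threshold of 0.5, and the Tube container are choices made for the example.

    import numpy as np

    def iou(a, b):
        """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def nms(boxes, scores, iou_thr=0.5, keep=20):
        """Greedy non-maximum suppression; keeps at most `keep` boxes."""
        order = np.argsort(scores)[::-1]          # descending confidence
        kept = []
        for i in order:
            if all(iou(boxes[i], boxes[j]) < iou_thr for j in kept):
                kept.append(i)
            if len(kept) == keep:
                break
        return kept

    class Tube:
        """An action pipeline: a chain of per-frame detection boxes of one class."""
        def __init__(self, box, conf, label):
            self.boxes, self.assoc_scores = [box], [conf]
            self.label, self.score, self.misses = label, conf, 0

    def init_tubes(boxes, scores, label, m=20):
        """Step 1: suppress overlapping boxes, keep the top M = 20,
        and start one action pipeline per surviving box."""
        kept = nms(boxes, scores, keep=m)
        tubes = [Tube(boxes[i], scores[i], label) for i in kept]
        tubes.sort(key=lambda t: t.score, reverse=True)
        return tubes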
Step 2, at moment t (t > 1), obtaining the frame-level action detection result corresponding to the video information on the intelligent production line, comprising a number of candidate detection frames and their category confidence scores, where the j-th candidate detection frame is denoted $d_{j}^{t}$ and its category confidence score $\varphi(d_{j}^{t})$; carrying out non-maximum suppression on the candidate detection frames at the current moment to remove detection frames with high overlap, retaining N detection frames, and sorting them by confidence score;
Step 3, considering the action category consistency score label, the category confidence score confidence, the spatial overlap score overlap, the appearance similarity score appearance, and the spatio-temporal relationship score relation, calculating the association score matrix between the M_t action pipelines still surviving at moment t and the N detection frames obtained in step 2, wherein the association score $s_{i,j}^{t}$ of the i-th action pipeline $\mathcal{T}_{i}$ and the j-th detection frame $d_{j}^{t}$ is:

$$s_{i,j}^{t} = \mathrm{label}_{i,j} + \lambda_{C}\cdot\mathrm{confidence} + \lambda_{S}\cdot\mathrm{overlap} + \lambda_{A}\cdot\mathrm{appearance} + \lambda_{R}\cdot\mathrm{relation} \qquad (1)$$

wherein $\lambda_{C}$ is the weight of the category confidence score confidence, $\lambda_{S}$ is the weight of the spatial overlap score overlap, $\lambda_{A}$ is the weight of the appearance similarity score appearance, and $\lambda_{R}$ is the weight of the spatio-temporal relationship score relation;
the action category consistency score is calculated as follows:

$$\mathrm{label}_{i,j} = \varphi_{l^{*}}(\mathcal{T}_{i}) + \varphi_{l^{*}}(d_{j}^{t}) - \psi\cdot\mathbb{1}\left[\, l_{\mathrm{tube}} \neq l_{\mathrm{det}} \,\right] \qquad (2)$$

wherein i is the pipeline index, j is the detection frame index, $\varphi_{l}$ is the class confidence score of the pipeline or of the detection frame for class l, and $\psi$ is a penalty term for class inconsistency between the pipeline and the detection frame; this formula simultaneously performs time calibration of the pipeline while the action category consistency score is computed, the time calibration being:

$$l^{*} = \arg\max_{l\in C}\left(\varphi_{l}(\mathcal{T}_{i}) + \varphi_{l}(d_{j}^{t})\right) \qquad (3)$$

where $l^{*}$ is the best category, l is a category, C is the set of all categories of the dataset, $l_{\mathrm{tube}}$ is the action pipeline category, and $l_{\mathrm{det}}$ is the detection frame category.
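The sketch below encodes one plausible reading of formulas (2) and (3) as reconstructed above; the additive combination and the penalty value are assumptions of this example, not a definitive statement of the patented formula.

    import numpy as np

    def label_score(tube_conf, det_conf, penalty=0.5):
        """Category consistency score with simultaneous time calibration,
        per the reading of formulas (2)-(3) above. tube_conf and det_conf are
        the per-class confidence vectors phi_l (over the C dataset classes)
        of the pipeline and of the detection frame; `penalty` is the assumed
        psi for class disagreement."""
        combined = np.asarray(tube_conf) + np.asarray(det_conf)
        l_star = int(np.argmax(combined))       # time calibration: best class l*
        l_tube = int(np.argmax(tube_conf))      # current pipeline class
        l_det = int(np.argmax(det_conf))        # detection frame class
        score = combined[l_star]
        if l_tube != l_det:                     # penalise class inconsistency
            score -= penalty
        return float(score), l_star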
The category confidence score confidence is the sum of the confidences of the candidate detection frame and the pipeline for the current category; the spatial overlap score overlap is the average IoU between the candidate detection frame and the last k detection frames of the pipeline; the appearance similarity score appearance and the spatio-temporal relationship score relation are computed with L2 norms over the appearance features and spatio-temporal feature vectors of the candidate detection frame and of the last k detection frames of the pipeline.
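Combining the four criteria with the consistency score gives the formula-(1) association score. The sketch below is one way to realize it; the equal weights, the plain-array argument layout, and the mapping of L2 distances into similarities in (0, 1] are assumptions of the example (the text above fixes only that L2 norms are used). It reuses iou() and label_score() from the earlier sketches.

    import numpy as np

    def association_score(tube_boxes, tube_cls, tube_app, tube_st,
                          det_box, det_cls, det_app, det_st,
                          lam=(0.25, 0.25, 0.25, 0.25), k=5):
        """Formula (1): multi-criteria association score between a pipeline and
        a candidate detection frame. tube_boxes: the pipeline's recent boxes;
        *_cls: per-class confidence vectors; *_app / *_st: appearance and
        spatio-temporal feature vectors."""
        lam_c, lam_s, lam_a, lam_r = lam
        lbl, l_star = label_score(tube_cls, det_cls)
        # category confidence: summed confidence of pipeline and detection
        confidence = tube_cls[l_star] + det_cls[l_star]
        # spatial overlap: mean IoU against the pipeline's last k boxes
        overlap = float(np.mean([iou(b, det_box) for b in tube_boxes[-k:]]))
        # L2 feature distances mapped to similarities in (0, 1] (an assumption)
        appearance = 1.0 / (1.0 + np.linalg.norm(np.asarray(tube_app) - np.asarray(det_app)))
        relation = 1.0 / (1.0 + np.linalg.norm(np.asarray(tube_st) - np.asarray(det_st)))
        return (lbl + lam_c * confidence + lam_s * overlap
                + lam_a * appearance + lam_r * relation)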
Step 4, the M_t action pipelines are sequentially matched with the N detection frames obtained in step 2, in the order of the pipeline scores at the previous moment;
The pipeline score of the i-th action pipeline at moment T is calculated as:

$$S_{i}^{T} = \frac{1}{k}\sum_{t=T-k+1}^{T} s_{i}^{t} \qquad (4)$$

wherein $S_{i}^{T}$ denotes the pipeline score of action pipeline i at time T, k denotes the number of most recent associations considered, i denotes the pipeline number, and $s_{i}^{t}$ denotes the association score of pipeline i at time t.
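Formula (4) is then a running mean over the pipeline's most recent association scores; a minimal sketch:

    import numpy as np

    def pipeline_score(assoc_scores, k=5):
        """Formula (4): pipeline score at time T as the mean of the last k
        association scores (or fewer, while the pipeline is still young)."""
        recent = assoc_scores[-k:]
        return float(np.mean(recent)) if recent else 0.0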
The specific matching process for one of the action pipelines is as follows:
Screening all candidate detection frames whose average IoU with the last k detection frames of the action pipeline exceeds the threshold (k = 5 by default in this embodiment); among the screened candidate detection frames, obtaining the one with the highest association score with the action pipeline according to the association score matrix, adding it to the action pipeline, taking its association score with the action pipeline as the association score of the action pipeline at this moment, deleting it from the candidate detection frames and from the association score matrix at this moment, and proceeding to match the next action pipeline;
The association score of action pipeline i at moment t is calculated as:

$$s_{i}^{t} = \max_{j\in\{1,\dots,K\}} s_{i,j}^{t} \qquad (5)$$

wherein K is the total number of candidate detection frames whose average IoU with the last k detection frames of action pipeline i exceeds the threshold. A sketch of one matching step is given below.
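A single matching step of step 4, with the candidate pool screening and formula (5), could be sketched as follows. The dict layout of the candidates and the score_fn callback (e.g. a wrapper around association_score() above) are assumptions of the example; Tube, iou(), and pipeline_score() come from the earlier sketches.

    import numpy as np

    def match_tube(tube, candidates, used, score_fn, iou_thr=0.5, k=5):
        """One matching step for one pipeline: screen candidates by average IoU
        against the tube's last k boxes, then claim the highest-scoring one."""
        pool = [j for j, det in enumerate(candidates)
                if j not in used                 # a box joins at most one pipeline
                and np.mean([iou(b, det['box']) for b in tube.boxes[-k:]]) > iou_thr]
        if not pool:
            tube.misses += 1                     # no detection added at moment t
            return None
        scores = {j: score_fn(tube, candidates[j]) for j in pool}
        best = max(scores, key=scores.get)       # formula (5): max over the pool
        tube.boxes.append(candidates[best]['box'])
        tube.assoc_scores.append(scores[best])
        tube.score = pipeline_score(tube.assoc_scores, k)   # update via formula (4)
        tube.misses = 0
        used.add(best)                           # remove box from the candidate set
        return best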
If no candidate detection frame whose spatial overlap exceeds the threshold exists for an action pipeline, no detection frame is added at moment t; if no new detection frame is added for k consecutive moments, the action pipeline is deemed to have died;
for each action pipeline with a newly added detection frame, the pipeline score is updated using formula (4).
Step 5, sorting all surviving action pipelines by pipeline score; outputting the spatio-temporal positions and categories of all surviving action pipelines at the current moment; setting t to t+1 as the current moment and returning to step 2. The sketch below ties these steps into one online update.
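Tying steps 2 to 5 together, one online update per time instant reduces to matching in descending pipeline-score order, retiring pipelines after k consecutive misses, and re-ranking the survivors; a sketch under the same assumptions as above:

    def online_step(tubes, candidates, score_fn, k=5):
        """Steps 2-5 for one moment t: match surviving pipelines to the new
        detections, retire dead pipelines, and output the re-ranked survivors."""
        used = set()
        for tube in sorted(tubes, key=lambda t: t.score, reverse=True):
            match_tube(tube, candidates, used, score_fn, k=k)
        tubes = [t for t in tubes if t.misses < k]   # death after k misses
        tubes.sort(key=lambda t: t.score, reverse=True)
        return tubes                                 # spatio-temporal result at t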
To demonstrate the advantages of the proposed online association algorithm over conventional association algorithms, a comparison experiment was performed on the UCF101-24 behavior detection dataset; the results are shown in FIG. 2.
The detection accuracy of the multi-criteria similarity matching action pipeline online association algorithm (MSRT) is higher overall than that of the real-time online action detection algorithm (ROAD) and the micro action pipeline association algorithm (ACT), particularly for behavior actions with large spatial displacement and high movement speed, such as skiing and water skiing.
The reasons are as follows:
1) The real-time online action detection algorithm (ROAD), shown in FIG. 3, is greedy and prone to falling into local optima; its score considers only confidence, and its association threshold considers only spatial overlap, which is too simple; the action categories are handled independently of each other (leading to overlapping pipelines and a large amount of computation); and detection association, pipeline category determination, and time calibration are performed as separate steps, further increasing computation.
2) The micro action pipeline association algorithm (ACT), shown in FIG. 4, modifies the real-time online action detection model (ROAD) to associate not individual frame-level detections but micro-pipeline detections of 7-frame length. It mainly adopts soft non-maximum suppression: overlapping candidate detection frames are not pruned during non-maximum suppression, but their confidence scores are reduced. It further defines an intersection-over-union for two micro pipelines as the sum of the per-frame IoUs within the overlapping time divided by the length of the overlap (a sketch of this measure follows); finally, after association is finished, temporal averaging merges N overlapping micro pipelines into one complete pipeline.
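For reference, the ACT tubelet overlap described above (per-frame IoU summed over the overlapping time and divided by the overlap length) can be sketched as follows; the {frame_index: box} tubelet layout is an assumption of the example.

    def tubelet_iou(tubelet_a, tubelet_b):
        """ACT-style overlap of two micro pipelines: mean per-frame IoU over
        their temporally overlapping frames; reuses iou() from above."""
        common = sorted(set(tubelet_a) & set(tubelet_b))
        if not common:
            return 0.0
        return sum(iou(tubelet_a[t], tubelet_b[t]) for t in common) / len(common)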
3) The multi-criteria similarity matching action pipeline online association algorithm (MSRT) abandons the hard IoU threshold and replaces it with a multi-similarity association module that comprehensively uses multiple judgment criteria; the effect of the sampling rate on the results of the association algorithms is shown in Table 1.
Table 1. Effect of sampling rate on the results of the association algorithms
The association criterion of the multi-criteria similarity matching action pipeline online association algorithm (MSRT) considers not only the spatial overlap between the newly added detection frame and the existing detection frames in the action pipeline, but also the appearance similarity, the spatio-temporal relationship similarity, and the frame-level action scores of the two detection frames, thereby improving the accuracy of association matching. To avoid the local optima that greedy algorithms are prone to, the MSRT algorithm adopts a candidate detection pool mechanism and matches a new detection frame against the last k detection frames of the action pipeline rather than only the last one, realizing efficient and accurate online spatio-temporal action detection and thus better guaranteeing the real-time performance and accuracy of human-machine interaction on the production line. The proposed online association algorithm imposes the following requirements:
the average spatial overlap, i.e. the average IoU, between a newly added detection frame and the last k detection frames of the action pipeline to be associated must exceed a threshold;
a detection frame cannot be associated with multiple action pipelines at the same time;
the association matching score comprehensively considers spatial overlap, appearance similarity, and spatio-temporal relationship similarity;
a candidate pool of detection frames from the last k frames is maintained.
The multi-criteria similarity matching action pipeline online association method (MSRT) considers not only the spatial overlap between the newly added detection frame and the existing detection frames in the action pipeline, but also the frame-level detection scores of the detection frames and the appearance similarity and spatio-temporal relationship similarity between them, thereby improving matching accuracy.
For the same frame-level detection results, the real-time online action detection association algorithm (ROAD), the micro action pipeline association algorithm (ACT), and the multi-criteria similarity matching action pipeline online association algorithm (MSRT) were compared. The association-miss threshold defaults to 3 consecutive missed detections, as in real-time online action detection; IoU = 0 means the IoU threshold is removed, so that association depends only on the frame-level category confidence score. From the results it can be seen that the lower the sampling frequency, the greater the advantage of MSRT over the other two algorithms. Algorithms with an IoU threshold perform relatively well under dense sampling, because the IoU threshold avoids mis-association; at lower sampling frequencies, however, models with an IoU threshold perform worse as the threshold increases, because the hard IoU threshold causes false associations and pipeline splits.
The above experiments show that the multi-criteria similarity matching action pipeline online association algorithm can accurately detect and associate actions with drastic spatial variation online, and can meet the working requirements of complex intelligent production lines.
In summary, the above embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. An intelligent production line behavior recognition method based on online association of action pipelines, characterized by comprising the following steps:
Step 1, at an initial moment, obtaining a frame-level action detection result corresponding to video information on an intelligent production line, the result comprising a number of candidate detection frames and their category confidence scores; carrying out non-maximum suppression on the candidate detection frames to remove detection frames with high overlap, retaining M_{t=1} detection frames, and sorting them by confidence score; taking the M_{t=1} detection frames respectively as the first-frame detection frames of action pipelines, creating M_{t=1} action pipelines whose scores are the category confidence scores of the corresponding detection frames, and sorting the action pipelines by pipeline score, thereby completing the initialization of the action pipelines;
Step 2, obtaining a frame-level action detection result corresponding to video information on the intelligent production line at the current moment, the result comprising a number of candidate detection frames and their category confidence scores; carrying out non-maximum suppression on the candidate detection frames at the current moment to remove detection frames with high overlap, retaining N detection frames, and sorting them by confidence score;
step 3, calculating an association score matrix between the action pipelines still surviving at the current moment and the N detection frames obtained in step 2, wherein the association score $s_{i,j}^{t}$ of the i-th action pipeline $\mathcal{T}_{i}$ and the j-th detection frame $d_{j}^{t}$ is:

$$s_{i,j}^{t} = \mathrm{label}_{i,j} + \lambda_{C}\cdot\mathrm{confidence} + \lambda_{S}\cdot\mathrm{overlap} + \lambda_{A}\cdot\mathrm{appearance} + \lambda_{R}\cdot\mathrm{relation}$$

wherein label is the action category consistency score; $\lambda_{C}$ is the weight of the category confidence score confidence, $\lambda_{S}$ is the weight of the spatial overlap score overlap, $\lambda_{A}$ is the weight of the appearance similarity score appearance, and $\lambda_{R}$ is the weight of the spatio-temporal relationship score relation;
Step 4, sequentially matching the action pipelines still surviving at the current moment with the N detection frames obtained in step 2, in the order of the pipeline scores at the previous moment;
The pipeline score of the i-th action pipeline at moment T is calculated as:

$$S_{i}^{T} = \frac{1}{k}\sum_{t=T-k+1}^{T} s_{i}^{t}$$

wherein $S_{i}^{T}$ denotes the pipeline score of action pipeline i at time T, k denotes the number of most recent associations considered, i denotes the pipeline number, and $s_{i}^{t}$ denotes the association score of pipeline i at time t;
The specific matching process for one of the action pipelines is as follows:
screening all candidate detection frames whose average intersection-over-union (IoU) with the last k detection frames of the action pipeline exceeds a threshold; among the screened candidate detection frames, obtaining the one with the highest association score with the action pipeline according to the association score matrix, adding it to the action pipeline, taking its association score with the action pipeline as the association score of the action pipeline at this moment, deleting it from the candidate detection frames and from the association score matrix at this moment, and proceeding to match the next action pipeline;
if no candidate detection frame whose spatial overlap exceeds the threshold exists for an action pipeline, no detection frame is added at moment t; if no new detection frame is added for k consecutive moments, the action pipeline is deemed to have died;
for an action pipeline with a newly added detection frame, updating the pipeline score using the pipeline score calculation formula with the association score of the action pipeline at the current moment;
Step 5, sorting all surviving action pipelines by pipeline score; outputting the spatio-temporal positions and categories of all surviving action pipelines at the current moment; and taking the next moment as the current moment and returning to step 2 with the updated current moment;
the association score of the i-th action pipeline at moment t is calculated as:

$$s_{i}^{t} = \max_{j\in\{1,\dots,K\}} s_{i,j}^{t}$$

wherein K is the total number of candidate detection frames whose average IoU with the last k detection frames of action pipeline i exceeds the threshold;
the category confidence score confidence is the sum of the confidences of the candidate detection frame and the pipeline for the current category; the spatial overlap score overlap is the average IoU between the candidate detection frame and the last k detection frames of the pipeline; the appearance similarity score appearance and the spatio-temporal relationship score relation are computed with L2 norms over the appearance features and spatio-temporal feature vectors of the candidate detection frame and of the last k detection frames of the pipeline;
the action category consistency score is calculated as follows:

$$\mathrm{label}_{i,j} = \varphi_{l^{*}}(\mathcal{T}_{i}) + \varphi_{l^{*}}(d_{j}^{t}) - \psi\cdot\mathbb{1}\left[\, l_{\mathrm{tube}} \neq l_{\mathrm{det}} \,\right]$$

wherein i is the pipeline index, j is the detection frame index, $\varphi_{l}$ is the class confidence score of the pipeline or of the detection frame for class l, and $\psi$ is a penalty term for class inconsistency between the pipeline and the detection frame; this formula simultaneously performs time calibration of the pipeline while the action category consistency score is computed, the time calibration being:

$$l^{*} = \arg\max_{l\in C}\left(\varphi_{l}(\mathcal{T}_{i}) + \varphi_{l}(d_{j}^{t})\right)$$

where $l^{*}$ is the best category, l is a category, C is the set of all categories of the dataset, $l_{\mathrm{tube}}$ is the action pipeline category, and $l_{\mathrm{det}}$ is the detection frame category.
2. The intelligent production line behavior recognition method based on online association of action pipelines according to claim 1, wherein M_{t=1} is 20.
3. The intelligent production line behavior recognition method based on online association of action pipelines according to claim 1, wherein k is 5.
CN202111411477.5A 2021-11-25 2021-11-25 Intelligent production line behavior identification method based on online correlation of action pipelines Active CN114078226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111411477.5A CN114078226B (en) 2021-11-25 2021-11-25 Intelligent production line behavior identification method based on online correlation of action pipelines

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111411477.5A CN114078226B (en) 2021-11-25 2021-11-25 Intelligent production line behavior identification method based on online correlation of action pipelines

Publications (2)

Publication Number Publication Date
CN114078226A CN114078226A (en) 2022-02-22
CN114078226B (en) 2024-07-02

Family

ID=80284273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111411477.5A Active CN114078226B (en) 2021-11-25 2021-11-25 Intelligent production line behavior identification method based on online correlation of action pipelines

Country Status (1)

Country Link
CN (1) CN114078226B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331636A (en) * 2016-08-31 2017-01-11 东北大学 Intelligent video monitoring system and method of oil pipelines based on behavioral event triggering
CN107609460A (en) * 2017-05-24 2018-01-19 南京邮电大学 A kind of Human bodys' response method for merging space-time dual-network stream and attention mechanism

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108073929B (en) * 2016-11-15 2023-11-24 北京三星通信技术研究有限公司 Object detection method and device based on dynamic vision sensor
CN109101859A (en) * 2017-06-21 2018-12-28 北京大学深圳研究生院 The method for punishing pedestrian in detection image using Gauss
CN111178523B (en) * 2019-08-02 2023-06-06 腾讯科技(深圳)有限公司 Behavior detection method and device, electronic equipment and storage medium
CN113591758A (en) * 2021-08-06 2021-11-02 全球能源互联网研究院有限公司 Human behavior recognition model training method and device and computer equipment


Also Published As

Publication number Publication date
CN114078226A (en) 2022-02-22

Similar Documents

Publication Publication Date Title
CN110147743B (en) Real-time online pedestrian analysis and counting system and method under complex scene
Korban et al. Ddgcn: A dynamic directed graph convolutional network for action recognition
CN111476181B (en) Human skeleton action recognition method
CN111709311B (en) Pedestrian re-identification method based on multi-scale convolution feature fusion
CN114220176A (en) Human behavior recognition method based on deep learning
CN111862145B (en) Target tracking method based on multi-scale pedestrian detection
KR102462934B1 (en) Video analysis system for digital twin technology
CN108520530A (en) Method for tracking target based on long memory network in short-term
Chaudhary et al. Deep network for human action recognition using Weber motion
CN111931654A (en) Intelligent monitoring method, system and device for personnel tracking
CN112861808B (en) Dynamic gesture recognition method, device, computer equipment and readable storage medium
CN115578770A (en) Small sample facial expression recognition method and system based on self-supervision
CN108446605B (en) Double interbehavior recognition methods under complex background
Sun et al. Online multiple object tracking based on fusing global and partial features
CN112926522A (en) Behavior identification method based on skeleton attitude and space-time diagram convolutional network
CN110688512A (en) Pedestrian image search algorithm based on PTGAN region gap and depth neural network
Dhore et al. Human Pose Estimation And Classification: A Review
Huynh-The et al. Learning action images using deep convolutional neural networks for 3D action recognition
Barnachon et al. Human actions recognition from streamed motion capture
CN114078226B (en) Intelligent production line behavior identification method based on online correlation of action pipelines
CN110111358B (en) Target tracking method based on multilayer time sequence filtering
CN114758285B (en) Video interaction action detection method based on anchor freedom and long-term attention perception
CN113870320B (en) Pedestrian tracking monitoring method and system based on deep neural network
CN115953806A (en) 2D attitude detection method based on YOLO
CN116245913A (en) Multi-target tracking method based on hierarchical context guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant