CN113128298B - Loading and unloading behavior analysis method and monitoring system - Google Patents


Info

Publication number
CN113128298B
CN113128298B (application CN201911423835.7A)
Authority
CN
China
Prior art keywords: loading, unloading, images, personnel, cargo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911423835.7A
Other languages
Chinese (zh)
Other versions
CN113128298A (en)
Inventor
朱曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai G2link Network Technology Co ltd
Original Assignee
Shanghai G2link Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai G2link Network Technology Co ltd
Priority to CN201911423835.7A
Publication of CN113128298A
Application granted
Publication of CN113128298B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast


Abstract

A loading and unloading behavior analysis method and monitoring system. The analysis method includes: identifying, through a first deep neural network, the body joint points of the person in each image to be analyzed, to obtain a plurality of groups of coordinate arrays of body joint points; inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing loading and unloading in the first video; inferring, through a third deep neural network, the cargo type of the handled cargo in the first video; and judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing loading and unloading, the cargo type of the handled cargo, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. The invention can promptly identify violent handling by loading and unloading workers using only video monitoring of the loading and unloading sites of a logistics park; the process requires no on-site staff supervision, and has low cost and good effect.

Description

Loading and unloading behavior analysis method and monitoring system
Technical Field
The invention relates to the technical field of logistics, and in particular to a loading and unloading behavior analysis method and monitoring system.
Background
Logistics comprises many links, such as transportation, storage, loading and unloading, carrying, packaging, and distribution. Loading and unloading is an important link: improper handling easily causes damage to, or even scrapping of, the goods, while standard and reasonable loading and unloading can effectively reduce the cargo damage rate.
In the prior art, the behavior of loading and unloading workers lacks scientific and effective monitoring, and violent handling occurs. Goods damaged by violent handling are often hard to trace afterwards: by the time the damage is discovered, it is already difficult to determine when and why the goods were damaged.
Therefore, a scientific and effective method for analyzing and monitoring loading and unloading behavior is needed, to promptly identify and correct violent handling by loading and unloading workers and thereby reduce cargo damage caused by violent handling.
Disclosure of Invention
The technical problem solved by the invention is: promptly identifying violent loading and unloading behavior by loading and unloading workers.
In order to solve the above technical problem, an embodiment of the present invention provides a loading and unloading behavior analysis method, including:
capturing a first video;
acquiring a plurality of images to be analyzed from the first video, in which the person performing loading and unloading and the handled cargo are visible;
identifying, through a first deep neural network, the body joint points of the person in each of the images to be analyzed, to obtain a plurality of groups of coordinate arrays of body joint points;
inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing loading and unloading in the first video, according to the coordinate arrays of body joint points and the time of each image to be analyzed in the video;
inferring, through a third deep neural network, the cargo type of the handled cargo in the first video, according to one or more of the images to be analyzed;
and judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing loading and unloading, the cargo type of the handled cargo, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
Optionally, the method further comprises pre-training the first deep neural network, including:
acquiring, from a video, a plurality of images containing persons in various postures;
training the first deep neural network with the plurality of images containing persons in various postures, to infer the position area of the person in an image;
and outputting, for each image, a coordinate array of the joint point positions of the person within the person position area.
Optionally, the postures include one or more of standing, bending over, or half-turning.
Optionally, the method further comprises pre-training the second deep neural network, including:
training the second deep neural network with a plurality of images containing persons in various postures, the coordinate arrays of the persons' joint points within the person position areas, and the time of each image in the video, to infer the behavior type of the person in the video from the change of the person's joint points over time.
Optionally, the behavior types include one or more of quickly raising the hands, quickly kicking a leg, or quickly lowering the hands.
Optionally, the method further comprises pre-training the third deep neural network, including:
acquiring a plurality of images of goods of various cargo types;
and training the third deep neural network with the plurality of images of goods of various cargo types, to infer the cargo type of the goods in an image.
Optionally, the cargo types include one or more of carton, wooden box, sack, woven bag, or bagged goods.
Optionally, the method further comprises presetting the ranges of reasonable carrying-speed change rates corresponding to the various behavior types and cargo types.
Optionally, judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing loading and unloading, the cargo type of the handled cargo, and the preset range of reasonable carrying-speed change rates corresponding to the behavior type and cargo type includes:
if the carrying-speed change rate is within the range of reasonable carrying-speed change rates corresponding to the behavior type and the cargo type, judging the handling to be normal;
and if the carrying-speed change rate is outside the range of reasonable carrying-speed change rates corresponding to the behavior type and the cargo type, judging the handling to be violent.
Optionally, the method further comprises triggering warning information when unreasonable loading and unloading behavior is detected in the first video.
In order to solve the above technical problem, an embodiment of the present invention further provides a loading and unloading behavior monitoring system, comprising:
a processor adapted to load and execute the instructions of a software program;
and a memory adapted to store the software program, the software program comprising instructions for performing the following steps:
capturing a first video;
acquiring a plurality of images to be analyzed from the first video, in which the person performing loading and unloading and the handled cargo are visible;
identifying, through a first deep neural network, the body joint points of the person in each of the images to be analyzed, to obtain a plurality of groups of coordinate arrays of body joint points;
inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing loading and unloading in the first video, according to the coordinate arrays of body joint points and the time of each image to be analyzed in the video;
inferring, through a third deep neural network, the cargo type of the handled cargo in the first video, according to one or more of the images to be analyzed;
and judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing loading and unloading, the cargo type of the handled cargo, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
Optionally, the software program further comprises instructions for pre-training the first deep neural network, including:
acquiring, from a video, a plurality of images containing persons in various postures;
training the first deep neural network with the plurality of images containing persons in various postures, to infer the position area of the person in an image;
and outputting, for each image, a coordinate array of the joint point positions of the person within the person position area.
Optionally, the software program further comprises instructions for pre-training the second deep neural network, including:
training the second deep neural network with a plurality of images containing persons in various postures, the coordinate arrays of the persons' joint points within the person position areas, and the time of each image in the video, to infer the behavior type of the person in the video from the change of the person's joint points over time.
Optionally, the software program further comprises instructions for pre-training the third deep neural network, including:
acquiring a plurality of images of goods of various cargo types;
and training the third deep neural network with the plurality of images of goods of various cargo types, to infer the cargo type of the goods in an image.
Compared with the prior art, the technical solution of the invention has the following beneficial effects:
The body joint points of the person in each image to be analyzed are identified through the first deep neural network, obtaining a plurality of groups of coordinate arrays of body joint points; the behavior type and the carrying-speed change rate of the person performing loading and unloading in the first video are inferred through the second deep neural network, according to those coordinate arrays and the time of each image to be analyzed in the video; the cargo type of the handled cargo in the first video is inferred through the third deep neural network, according to one or more of the images to be analyzed; and whether the loading and unloading behavior is reasonable is judged according to the carrying-speed change rate, the behavior type, the cargo type, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. In this way, violent handling by loading and unloading workers can be promptly identified using only video monitoring of the loading and unloading sites of a logistics park; the process requires no on-site staff supervision, and has low cost and good effect.
Further, the first deep neural network is pre-trained to infer the position area of a person in an image, the second deep neural network is pre-trained to infer the behavior type of the person in the video from the change of the person's joint points over time, and the third deep neural network is pre-trained to infer the cargo type of the goods in an image; whether the loading and unloading behavior is reasonable is then judged according to the carrying-speed change rate, the behavior type of the person performing loading and unloading, the cargo type of the handled cargo, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. This makes the judgment more accurate, and a specific way of training the three deep neural networks is disclosed.
Further, when unreasonable loading and unloading behavior is detected in the first video, warning information is triggered, for example by playing a voice prompt reminding the workers to operate properly and by recording the event in the system, so that violent handling by loading and unloading workers is corrected promptly and a basis is provided for tracing cargo damage caused by violent handling.
Drawings
FIG. 1 is a flow chart of a method for analyzing loading and unloading behavior according to an embodiment of the present invention.
Detailed Description
As analyzed in the Background section, in the prior art the behavior of loading and unloading workers lacks scientific and effective monitoring, and violent handling occurs. Goods damaged by violent handling are often hard to trace afterwards: by the time the damage is discovered, it is already difficult to determine when and why the goods were damaged.
Therefore, a scientific and effective method for analyzing and monitoring loading and unloading behavior is needed, to promptly identify and correct violent handling by loading and unloading workers and thereby reduce cargo damage caused by violent handling.
In the present invention, the body joint points of the person in each image to be analyzed are identified through a first deep neural network, obtaining a plurality of groups of coordinate arrays of body joint points; the behavior type and the carrying-speed change rate of the person performing loading and unloading in the first video are inferred through a second deep neural network, according to those coordinate arrays and the time of each image to be analyzed in the video; the cargo type of the handled cargo in the first video is inferred through a third deep neural network, according to one or more of the images to be analyzed; and whether the loading and unloading behavior is reasonable is judged according to the carrying-speed change rate, the behavior type, the cargo type, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. In this way, violent handling by loading and unloading workers can be promptly identified using only video monitoring of the loading and unloading sites of a logistics park; the process requires no on-site staff supervision, and has low cost and good effect.
In order that those skilled in the art may better understand and practice the invention, a detailed description is given below with reference to specific embodiments.
Example 1
As described below, an embodiment of the invention provides a loading and unloading behavior analysis method.
The method of this embodiment is suited to installing a camera at a loading dock (or another cargo handling location) of a logistics park, detecting and analyzing the video captured by the camera to identify violent handling by loading and unloading workers, and determining whether the handling operations are standard by analyzing the workers' body behavior, the cargo types, the movement trajectories, and so on.
The method is described in detail below, step by step, with reference to the flow chart shown in Fig. 1:
s101, training a first deep neural network in advance.
The method specifically comprises the following steps:
acquiring, from a video, a plurality of images containing persons in various postures;
training the first deep neural network with the plurality of images containing persons in various postures (the images may show the whole body of a person, or a person partially occluded), to infer the position area of the (whole) person in an image;
and outputting, for each image, a coordinate array of the joint point positions of the person within the person position area.
In some embodiments, the person position area may be enclosed by a rectangular bounding box.
In some embodiments, the postures include one or more of standing, bending over, or half-turning.
In some embodiments, the first deep neural network may employ a top-down pose-estimation architecture.
The input of the first deep neural network is a single frame image, and the output is one group of coordinate arrays of body joint points. The images to be analyzed are input into the first deep neural network one by one to obtain a plurality of groups of coordinate arrays of body joint points; the time of each image to be analyzed in the video is recorded, and from this information the change of the person's joint points over time is obtained.
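The joint-coordinate arrays and recorded timestamps described above fully determine how the joints move over time, so a carrying-speed signal and its change rate can be derived from them directly. The sketch below is a minimal illustration under assumed data layouts (a list of (x, y) joints per frame, timestamps in seconds); it is not the patent's actual computation:

```python
# Hedged sketch: turning per-frame joint-coordinate arrays (the first
# network's output) into per-interval joint speeds and a speed change
# rate. The joint layout and pixel/second units are assumptions made
# for illustration only.

def joint_speeds(frames, times):
    """frames: list of [(x, y), ...] joint arrays, one per image;
    times: timestamp (seconds) of each image in the video.
    Returns the mean joint speed over each consecutive-frame interval."""
    speeds = []
    for a, b, ta, tb in zip(frames, frames[1:], times, times[1:]):
        # mean displacement of all joints between consecutive frames
        disp = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(a, b)) / len(a)
        speeds.append(disp / (tb - ta))
    return speeds

def speed_change_rate(speeds, times):
    """Rate of change of the carrying speed (an acceleration proxy),
    one value per pair of consecutive intervals."""
    return [(s2 - s1) / (t2 - t1)
            for s1, s2, t1, t2 in zip(speeds, speeds[1:],
                                      times[1:], times[2:])]
```

A sudden spike in this change rate is exactly the kind of signal the second network is trained to associate with violent handling.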
S102, training a second deep neural network in advance.
The method specifically comprises the following steps:
The second deep neural network is trained with a plurality of images containing persons in various postures, the coordinate arrays of the persons' joint points within the person position areas, and the time of each image in the video, to infer the behavior type of the person in the video from the change of the person's joint points over time.
In some embodiments, the behavior types include one or more of quickly raising the hands, quickly kicking a leg, or quickly lowering the hands.
Step S102 may be performed after step S101, so that the coordinate arrays of joint points within the person position areas output in step S101 can be reused.
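As an illustration of what one training sample for the second network might look like, the sketch below stacks time-ordered joint-coordinate arrays into a labeled feature sequence. The behavior-type label names and the [t, x0, y0, ...] layout are assumptions made for illustration; the patent does not specify the sample format:

```python
# Hedged sketch of assembling a training sample for the second network:
# a time-ordered stack of joint-coordinate arrays plus timestamps,
# paired with a behavior-type label. Label names are illustrative.

BEHAVIOR_TYPES = ["quick_hand_raise", "quick_leg_kick", "quick_hand_drop"]

def make_sample(joint_arrays, times, behavior):
    """Flatten per-frame joint arrays into one feature sequence:
    each timestep becomes [t, x0, y0, x1, y1, ...]."""
    assert behavior in BEHAVIOR_TYPES
    assert len(joint_arrays) == len(times)
    seq = []
    for t, joints in zip(times, joint_arrays):
        row = [t]
        for x, y in joints:
            row.extend([x, y])
        seq.append(row)
    return {"sequence": seq, "label": BEHAVIOR_TYPES.index(behavior)}
```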
S103, training a third deep neural network in advance.
The method specifically comprises the following steps:
acquiring a plurality of images of goods of various cargo types;
and training the third deep neural network with the plurality of images of goods of various cargo types, to infer the cargo type of the goods in an image.
The cargo types include one or more of carton, wooden box, sack, woven bag, or bagged goods.
Step S103 has no ordering dependency on steps S101/S102 and can be executed in parallel with them.
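The patent specifies only that the third network maps labeled cargo images to cargo types, not its architecture. Purely as a stand-in for that image-to-cargo-type mapping, the sketch below trains a trivial nearest-mean-colour classifier on labeled toy images; a real implementation would use a deep convolutional classifier:

```python
# Hedged stand-in for the third network's train/infer interface.
# The nearest-mean-colour rule is NOT the patent's model; it only
# illustrates "labeled cargo images in, cargo type out".

CARGO_TYPES = ["carton", "wooden_box", "sack", "woven_bag", "bagged_goods"]

def mean_pixel(image):
    """image: list of rows of (r, g, b) tuples; returns mean colour."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def train(labelled_images):
    """labelled_images: list of (image, cargo_type) pairs.
    Returns one mean-colour centroid per cargo type."""
    by_type = {}
    for image, cargo_type in labelled_images:
        by_type.setdefault(cargo_type, []).append(mean_pixel(image))
    return {t: tuple(sum(c[i] for c in cs) / len(cs) for i in range(3))
            for t, cs in by_type.items()}

def predict(model, image):
    """Return the cargo type whose centroid is nearest the image mean."""
    m = mean_pixel(image)
    return min(model,
               key=lambda t: sum((model[t][i] - m[i]) ** 2 for i in range(3)))
```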
S104, presetting the ranges of reasonable carrying-speed change rates corresponding to the various behavior types and cargo types.
These preset ranges are used for the judgment in the subsequent step S110.
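One straightforward way to hold the preset values of step S104 is a lookup table keyed by the (behavior type, cargo type) pair. The sketch below uses placeholder range values; the patent discloses no concrete numbers:

```python
# Hedged sketch of the S104 preset table: each (behavior type,
# cargo type) pair maps to a (min, max) range of reasonable
# carrying-speed change rates. All numbers are placeholders.

REASONABLE_RATE_RANGES = {
    ("quick_hand_drop", "carton"): (0.0, 2.0),
    ("quick_hand_drop", "wooden_box"): (0.0, 3.5),
    ("quick_hand_raise", "sack"): (0.0, 4.0),
}

def reasonable_range(behavior_type, cargo_type):
    """Return the preset (min, max) range, or None if not configured."""
    return REASONABLE_RATE_RANGES.get((behavior_type, cargo_type))
```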
As can be seen from the above description of the technical solution: in this embodiment, the first deep neural network is pre-trained to infer the position area of a person in an image, the second deep neural network is pre-trained to infer the behavior type of the person in the video from the change of the person's joint points over time, and the third deep neural network is pre-trained to infer the cargo type of the goods in an image; whether the loading and unloading behavior is reasonable is then judged according to the carrying-speed change rate, the behavior type of the person performing loading and unloading, the cargo type of the handled cargo, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. This makes the judgment more accurate, and a specific way of training the three deep neural networks is disclosed.
S105, capturing a first video.
S106, acquiring a plurality of images to be analyzed from the first video, in which the person performing loading and unloading and the handled cargo are visible.
In some embodiments, as described above, cameras may be installed at the loading dock (or another cargo handling location) of the logistics park to capture video of personnel loading and unloading.
In some embodiments, the images to be analyzed may be in RGB format.
Of course, in other embodiments the images to be analyzed may be in an image format other than RGB; the invention is not limited in this respect.
S107, identifying, through the first deep neural network, the body joint points of the person in each of the images to be analyzed, to obtain a plurality of groups of coordinate arrays of body joint points.
S108, inferring, through the second deep neural network, the behavior type and the carrying-speed change rate of the person performing loading and unloading in the first video, according to the coordinate arrays of body joint points and the time of each image to be analyzed in the video.
S109, inferring, through the third deep neural network, the cargo type of the handled cargo in the first video, according to one or more of the images to be analyzed.
S110, judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing loading and unloading, the cargo type of the handled cargo, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
As can be seen from the above description of the technical solution: in this embodiment, the body joint points of the person in each image to be analyzed are identified through the first deep neural network, obtaining a plurality of groups of coordinate arrays of body joint points; the behavior type and the carrying-speed change rate of the person performing loading and unloading in the first video are inferred through the second deep neural network, according to those coordinate arrays and the time of each image to be analyzed in the video; the cargo type of the handled cargo in the first video is inferred through the third deep neural network, according to one or more of the images to be analyzed; and whether the loading and unloading behavior is reasonable is judged according to the carrying-speed change rate, the behavior type, the cargo type, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. In this way, violent handling by loading and unloading workers can be promptly identified using only video monitoring of the loading and unloading sites of a logistics park; the process requires no on-site staff supervision, and has low cost and good effect.
In some embodiments, the specific judgment may be made as follows:
if the carrying-speed change rate is within the range of reasonable carrying-speed change rates corresponding to the behavior type and the cargo type, the handling is judged to be normal;
and if the carrying-speed change rate is outside the range of reasonable carrying-speed change rates corresponding to the behavior type and the cargo type, the handling is judged to be violent.
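The judgment just described reduces to a range check against the table preset in S104. A minimal sketch, with illustrative placeholder ranges:

```python
# Hedged sketch of the S110 decision rule: a carrying-speed change
# rate inside the preset range means normal handling, outside means
# violent handling. The range values below are placeholders only.

def judge_handling(rate, behavior_type, cargo_type, ranges):
    """Return 'normal' if rate lies within the preset reasonable range
    for this (behavior type, cargo type) pair, 'violent' otherwise."""
    lo, hi = ranges[(behavior_type, cargo_type)]
    return "normal" if lo <= rate <= hi else "violent"

RANGES = {("quick_hand_drop", "carton"): (0.0, 2.0)}  # placeholder values
```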
S111, triggering warning information when unreasonable loading and unloading behavior is detected in the first video.
For example, a voice prompt may be played reminding the loading and unloading workers to operate properly, and the event may be recorded in the system, providing a basis for promptly correcting violent handling and for tracing cargo damage caused by violent handling.
Example two
As described below, an embodiment of the present invention provides a loading and unloading behavior monitoring system.
The loading and unloading behavior monitoring system comprises one or more processors and one or more memories, wherein:
the processor is adapted to load and execute the instructions of a software program;
the memory is adapted to store the software program, the software program comprising instructions for performing the following steps:
capturing a first video;
acquiring a plurality of images to be analyzed from the first video, in which the person performing loading and unloading and the handled cargo are visible;
identifying, through a first deep neural network, the body joint points of the person in each of the images to be analyzed, to obtain a plurality of groups of coordinate arrays of body joint points;
inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing loading and unloading in the first video, according to the coordinate arrays of body joint points and the time of each image to be analyzed in the video;
inferring, through a third deep neural network, the cargo type of the handled cargo in the first video, according to one or more of the images to be analyzed;
and judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing loading and unloading, the cargo type of the handled cargo, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
As can be seen from the above description of the technical solution: in this embodiment, the body joint points of the person in each image to be analyzed are identified through the first deep neural network, obtaining a plurality of groups of coordinate arrays of body joint points; the behavior type and the carrying-speed change rate of the person performing loading and unloading in the first video are inferred through the second deep neural network, according to those coordinate arrays and the time of each image to be analyzed in the video; the cargo type of the handled cargo in the first video is inferred through the third deep neural network, according to one or more of the images to be analyzed; and whether the loading and unloading behavior is reasonable is judged according to the carrying-speed change rate, the behavior type, the cargo type, and the preset range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. In this way, violent handling by loading and unloading workers can be promptly identified using only video monitoring of the loading and unloading sites of a logistics park; the process requires no on-site staff supervision, and has low cost and good effect.
In some embodiments, the software program further comprises instructions for pre-training the first deep neural network, comprising:
acquiring, from a video, a plurality of images containing persons in various postures;
training the first deep neural network on these images to infer the position region of the person in each image;
and outputting, for each image, a coordinate array of the joint positions of the person within that position region.
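The patent does not name a specific pose-estimation architecture; whatever model is used, its raw keypoint output must be reduced to a coordinate array restricted to the detected person region. A minimal post-processing sketch, assuming the model yields a person bounding box plus candidate keypoints:

```python
def to_joint_array(person_box, keypoints):
    """Keep only the joint keypoints that fall inside the detected
    person position region, returning the coordinate array for that person.
    person_box: (x0, y0, x1, y1); keypoints: list of (x, y) predictions."""
    x0, y0, x1, y1 = person_box
    return [(x, y) for (x, y) in keypoints if x0 <= x <= x1 and y0 <= y <= y1]
```

In practice the box and keypoints would come from the trained first network; here both are passed in directly so the filtering logic is self-contained.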
In some embodiments, the software program further comprises instructions for pre-training the second deep neural network, comprising:
training the second deep neural network on a plurality of images containing persons in various postures, the coordinate arrays of the joint points within each person's position region, and the time of each image within the video, so that it infers the behavior type of the person in the video from how the joint points change over time.
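The carrying-speed change rate that the second network outputs is, in essence, a measure of how abruptly a tracked joint's speed varies between frames. As a hedged stand-in for the learned quantity, the sketch below computes a crude version directly from one joint's per-frame coordinates and timestamps:

```python
import math

def speed_change_rate(points, times):
    """points: per-frame (x, y) position of one tracked joint;
    times: matching timestamps. Computes the speed between consecutive
    frames, then returns the mean absolute change in speed between
    consecutive frame pairs -- a simple proxy for abrupt motion."""
    speeds = []
    for (x0, y0), (x1, y1), t0, t1 in zip(points, points[1:], times, times[1:]):
        speeds.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
    changes = [abs(s1 - s0) for s0, s1 in zip(speeds, speeds[1:])]
    return sum(changes) / len(changes) if changes else 0.0
```

A steady carry yields a value near zero, while a throw or drop produces a sharp spike; the trained network would estimate this jointly with the behavior type.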
In some embodiments, the software program further comprises instructions for pre-training the third deep neural network, comprising:
acquiring a plurality of images of cargo of various cargo types;
training the third deep neural network on these images to infer the cargo type of the cargo shown.
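The patent leaves the third network's architecture unspecified. To illustrate the train-then-infer flow on labeled cargo images, here is a toy nearest-centroid classifier over image feature vectors; it is a stand-in for the deep network, not the disclosed method:

```python
def train_centroids(samples):
    """samples: {cargo_type: [feature_vector, ...]}.
    Returns the per-type mean feature vector (the 'training' step)."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        centroids[label] = [sum(col) / n for col in zip(*vecs)]
    return centroids

def infer_cargo_type(centroids, vec):
    """Return the cargo type whose centroid is closest (squared
    Euclidean distance) to the query feature vector."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda label: dist2(centroids[label]))
```

In the patented system the feature vectors would be images (and the classifier a deep network); the labels would be cargo types such as carton, wooden box, or sack.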
As can be seen from the above technical solution, in this embodiment: the body joint points of the person in each image to be analyzed are identified through a first deep neural network, yielding multiple sets of coordinate arrays of body joint points; from these arrays and the time of each image within the video, a second deep neural network infers the behavior type and carrying-speed change rate of the person performing the loading and unloading in the first video; a third deep neural network infers the cargo type of the handled cargo from one or more of the images; and the loading and unloading behavior is judged reasonable or not by checking the carrying-speed change rate against the range of reasonable rates corresponding to that behavior type and cargo type. In this way, violent handling by loading and unloading workers can be identified in a timely manner using only video surveillance of the loading and unloading sites of a logistics park, without on-site supervision by staff, at low cost and with good effect.
Further, the first deep neural network is pre-trained to infer the position region of a person in an image; the second deep neural network is pre-trained to infer the behavior type of the person in the video from how the joint points change over time; and the third deep neural network is pre-trained to infer the cargo type of the cargo in an image. Judging the loading and unloading behavior against the range of reasonable carrying-speed change rates corresponding to the behavior type and cargo type thus becomes more accurate, and a specific way of training these three deep neural networks is disclosed.
Those of ordinary skill in the art will appreciate that, in the methods of the above embodiments, all or part of the steps may be performed by hardware under the control of program instructions, and the program may be stored in a computer-readable storage medium, which may include: a ROM, a RAM, a magnetic disk, an optical disk, and the like.
Although the present invention is disclosed above, it is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention is accordingly defined by the appended claims.

Claims (14)

1. A method of analyzing loading and unloading behavior, comprising:
Capturing a first video;
acquiring a plurality of images to be analyzed from the first video, in which the person performing the loading and unloading and the cargo being handled are visible;
identifying, through a first deep neural network, the body joint points of the person in each image to be analyzed, to obtain a plurality of sets of coordinate arrays of the body joint points;
inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing the loading and unloading in the first video, according to the coordinate arrays of the body joint points and the time of each image to be analyzed within the video;
inferring, through a third deep neural network, the cargo type of the cargo handled in the first video, according to one or more of the images to be analyzed;
judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading, the cargo type of the handled cargo, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
2. The method for analyzing loading and unloading behavior according to claim 1, further comprising pre-training the first deep neural network, comprising:
acquiring, from a video, a plurality of images containing persons in various postures;
training the first deep neural network on these images to infer the position region of the person in each image;
and outputting, for each image, a coordinate array of the joint positions of the person within that position region.
3. The method for analyzing loading and unloading behavior according to claim 2, wherein the postures comprise: one or more of standing, bending over, or half-turning.
4. The method for analyzing loading and unloading behavior according to claim 2, further comprising pre-training the second deep neural network, comprising:
training the second deep neural network on a plurality of images containing persons in various postures, the coordinate arrays of the joint points within each person's position region, and the time of each image within the video, so that it infers the behavior type of the person in the video from how the joint points change over time.
5. The method for analyzing loading and unloading behavior according to claim 4, wherein the behavior types comprise: one or more of rapidly raising the hands, rapidly kicking the legs, or rapidly dropping the hands.
6. The method for analyzing loading and unloading behavior according to claim 1, further comprising pre-training the third deep neural network, comprising:
acquiring a plurality of images of cargo of various cargo types;
training the third deep neural network on these images to infer the cargo type of the cargo shown.
7. The method for analyzing loading and unloading behavior according to claim 6, wherein the cargo types comprise: one or more of a carton, a wooden box, a sack, a woven bag, or bagged goods.
8. The method for analyzing loading and unloading behavior according to claim 1, further comprising: presetting the ranges of reasonable carrying-speed change rates corresponding to the various behavior types and cargo types.
9. The method for analyzing loading and unloading behavior according to claim 1, wherein judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading, the cargo type of the handled cargo, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type comprises:
if the carrying-speed change rate is within the range of reasonable carrying-speed change rates corresponding to the behavior type and cargo type, judging the loading and unloading to be normal;
and if the carrying-speed change rate is outside the range of reasonable carrying-speed change rates corresponding to the behavior type and cargo type, judging it to be violent loading and unloading.
10. The method for analyzing loading and unloading behavior according to claim 1, further comprising: triggering warning information when unreasonable loading and unloading behavior is detected in the first video.
11. A monitoring system for loading and unloading behavior, comprising:
a processor adapted to load and execute instructions of a software program; and
a memory adapted to store the software program, the software program comprising instructions for performing the steps of:
Capturing a first video;
acquiring a plurality of images to be analyzed from the first video, in which the person performing the loading and unloading and the cargo being handled are visible;
identifying, through a first deep neural network, the body joint points of the person in each image to be analyzed, to obtain a plurality of sets of coordinate arrays of the body joint points;
inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing the loading and unloading in the first video, according to the coordinate arrays of the body joint points and the time of each image to be analyzed within the video;
inferring, through a third deep neural network, the cargo type of the cargo handled in the first video, according to one or more of the images to be analyzed;
judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading, the cargo type of the handled cargo, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
12. The loading and unloading behavior monitoring system of claim 11, wherein the software program further comprises instructions for pre-training the first deep neural network, comprising:
acquiring, from a video, a plurality of images containing persons in various postures;
training the first deep neural network on these images to infer the position region of the person in each image;
and outputting, for each image, a coordinate array of the joint positions of the person within that position region.
13. The loading and unloading behavior monitoring system of claim 12, wherein the software program further comprises instructions for pre-training the second deep neural network, comprising:
training the second deep neural network on a plurality of images containing persons in various postures, the coordinate arrays of the joint points within each person's position region, and the time of each image within the video, so that it infers the behavior type of the person in the video from how the joint points change over time.
14. The loading and unloading behavior monitoring system of claim 11, wherein the software program further comprises instructions for pre-training the third deep neural network, comprising:
acquiring a plurality of images of cargo of various cargo types;
training the third deep neural network on these images to infer the cargo type of the cargo shown.
CN201911423835.7A 2019-12-30 2019-12-30 Loading and unloading behavior analysis method and monitoring system Active CN113128298B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911423835.7A CN113128298B (en) 2019-12-30 2019-12-30 Loading and unloading behavior analysis method and monitoring system


Publications (2)

Publication Number Publication Date
CN113128298A CN113128298A (en) 2021-07-16
CN113128298B true CN113128298B (en) 2024-07-02

Family

ID=76769992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911423835.7A Active CN113128298B (en) 2019-12-30 2019-12-30 Loading and unloading behavior analysis method and monitoring system

Country Status (1)

Country Link
CN (1) CN113128298B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105245828A (en) * 2015-09-02 2016-01-13 北京旷视科技有限公司 Item analysis method and equipment
CN109598229A (en) * 2018-11-30 2019-04-09 李刚毅 Monitoring system and its method based on action recognition

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6275285B2 (en) * 2015-12-29 2018-02-07 楽天株式会社 Logistics system, luggage transport method, and program
CN111712826B (en) * 2017-10-20 2022-07-08 Bxb数码私人有限公司 System and method for tracking a cargo carrier
CN109051306A (en) * 2018-05-09 2018-12-21 刘云帆 The intelligent movable storage box of locking son
CN110033027A (en) * 2019-03-15 2019-07-19 深兰科技(上海)有限公司 A kind of item identification method, device, terminal and readable storage medium storing program for executing



Similar Documents

Publication Publication Date Title
US11173602B2 (en) Training robotic manipulators
CN110490995B (en) Method, system, equipment and storage medium for monitoring abnormal running state of belt
CN110826520B (en) Port grab bucket detection method based on improved YOLOv3-tiny algorithm
CN104751483B (en) A kind of monitoring method of warehouse logisticses robot work region abnormal conditions
CN110910355A (en) Package blocking detection method and device and computer storage medium
CN111008561A (en) Livestock quantity determination method, terminal and computer storage medium
CN111597857B (en) Logistics package detection method, device, equipment and readable storage medium
US11697558B2 (en) Automated detection of carton damage
CN114724076A (en) Image recognition method, device, equipment and storage medium
CN113378952A (en) Method, system, medium and terminal for detecting deviation of belt conveyor
KR102243039B1 (en) Smart factory system for automated product packaging and delivery service
CN111274951B (en) Method and device for monitoring state of feed box and automatic feeding system
CN109033964B (en) Method, system and equipment for judging arrival and departure events of vehicles
CN113128298B (en) Loading and unloading behavior analysis method and monitoring system
US20220051175A1 (en) System and Method for Mapping Risks in a Warehouse Environment
CN111890343B (en) Robot object collision detection method and device
CN115082841A (en) Method for monitoring abnormity of working area of warehouse logistics robot
EP3647236B1 (en) Projection instruction device, parcel sorting system, and projection instruction method
CN114140735B (en) Deep learning-based goods path accumulation detection method and system and storage medium
CN113642961B (en) Monitoring method and device in cargo handling process
CN116110127A (en) Multi-linkage gas station cashing behavior recognition system
CN111401104B (en) Classification model training method, classification method, device, equipment and storage medium
EP3434625B1 (en) Projection instruction device, parcel sorting system, and projection instruction method
CN109900707A (en) A kind of powdering quality detection method, equipment and readable storage medium storing program for executing
CN113449617A (en) Track safety detection method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant