CN112766233A - Human behavior identification method based on laser radar and RFID - Google Patents

Human behavior identification method based on laser radar and RFID

Info

Publication number
CN112766233A
Authority
CN
China
Prior art keywords
rfid
behavior
point cloud
behavior identification
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110190644.1A
Other languages
Chinese (zh)
Other versions
CN112766233B (en)
Inventor
罗晨运
成姝燕
徐鹤
李鹏
王汝传
朱枫
程海涛
季一木
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN202110190644.1A
Publication of CN112766233A
Application granted
Publication of CN112766233B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

A human behavior identification method based on laser radar (LiDAR) and RFID covers both static and dynamic human behavior identification. The system is structurally divided into three parts: a LiDAR, an RFID reader with tags, and a data processing module. The LiDAR captures human behavior in its field of view in real time; the RFID tag determines the subject's ID and assists classification; the data processing module handles model training and prediction. The invention emphasizes the privacy, safety and robustness of human behavior recognition, and can accurately recognize behaviors in a complex and changeable environment.

Description

Human behavior identification method based on laser radar and RFID
Technical Field
The invention relates to the technical field of human behavior recognition, in particular to a human behavior recognition method based on a laser radar and an RFID.
Background
Laser radar (LiDAR) is a radar system that detects characteristic quantities of a target, such as position and velocity, by emitting a laser beam. Its working principle is to emit a laser beam at a target, compare the received echo reflected from the target with the emitted signal and, after appropriate processing, obtain information about the target such as distance, azimuth, height, speed, attitude and even shape, so that targets such as aircraft and missiles can be detected, tracked and identified.
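The time-of-flight relation underlying this principle can be made concrete with a one-line calculation (an illustrative sketch only; real LiDAR signal processing involves far more than this):

```python
# Simplified time-of-flight range calculation (illustrative, not the patent's
# implementation). A LiDAR measures the round-trip time of a laser pulse; the
# target distance is half the round trip multiplied by the speed of light.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Return target distance in meters for a measured round-trip time."""
    return C * t_seconds / 2.0

# A pulse returning after 1 microsecond corresponds to roughly 150 m.
```

The same relation is why centimeter-level range precision requires sub-nanosecond timing resolution in the receiver electronics.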
Radio Frequency Identification (RFID) is a communication technology that performs contactless two-way communication over radio frequency to identify targets and exchange data; it can identify high-speed moving objects as well as static ones, and can identify multiple targets simultaneously. Owing to its non-invasive nature, many researchers have in recent years also applied RFID to human behavior recognition and human-computer interaction.
Human behavior recognition is widely applied; it is a hot topic in artificial intelligence research and a foundational technology for applications such as intelligent monitoring, human-computer interaction and robotics. Vision-based human behavior recognition is an important research direction in human motion analysis; with the development of deep learning, large-scale data analysis has become feasible in many areas, and high-performance computing platforms can improve the real-time performance of recognition. Devices for human behavior recognition generally fall into two categories, contact and contactless. Contact devices typically include various acceleration and posture sensors, while contactless devices are diverse: cameras, RFID, WiFi, lidar, and so on. RFID can be used for human behavior identification in either a contact or a contactless mode. Among these, cameras are widely used for human behavior recognition, but camera devices tend to raise privacy and security issues, and their robustness is susceptible to ambient lighting. RFID, WiFi, radar, LiDAR and the like have attracted many researchers in recent years because of their low invasiveness and ease of use, but they have never been deployed at large scale because of their low robustness in practice.
Disclosure of Invention
The invention combines LiDAR and RFID technologies and provides a human behavior identification method based on laser radar and RFID, so as to address the privacy, safety and robustness of human behavior identification in a complex and changeable environment.
A human behavior identification method based on laser radar and RFID is characterized in that: the method comprises the following steps:
step 1: collecting point cloud data of five behaviors of standing, sitting, squatting, walking and lying of a person by using a laser radar LiDAR and an RFID label, converting the point cloud data into an image, and manufacturing a static human body behavior identification data set; training the convolutional neural network finely adjusted based on EfficientNet B0 by using the data set to obtain the optimal weight of the neural network for static human behavior identification;
step 2: five types of transitions between five daily activities of a person are collected using LiDAR and RFID tags: standing and sitting, sitting and lying, standing and walking, sitting and squatting, squatting and standing point cloud data, converting the point cloud data into an image, and making a dynamic human behavior identification data set; training the convolutional neural network finely adjusted based on EfficientNet B0 by using the data set to obtain the optimal weight of the neural network for dynamic human behavior recognition;
step 3: after the neural network training is finished, acquiring LiDAR point cloud data and RFID ID and phase information of the corresponding frame length according to the static or dynamic human behavior identification requirement, then entering step 4 for static human behavior identification or step 5 for dynamic human behavior identification;
step 4: when identifying static human behavior, after the point cloud is converted into image data, classifying and predicting with the neural network, and simultaneously acquiring the RFID ID to determine the ID of the person in the LiDAR field of view; when the predicted probability is larger than the set threshold, the result of the behavior identification is considered reliable and the behavior identification process ends; when the predicted probability is smaller than the set threshold, entering step 6;
step 5: when identifying dynamic human behavior, after the point cloud is converted into image data, performing classification prediction with the neural network, and simultaneously acquiring the RFID ID and phase information to determine the ID of the person in the field of view and assist the classification of dynamic human behavior; unwrapping the RFID phase so that it loses its periodic wrapping; when the probability predicted by the neural network is greater than the set threshold, the system considers the behavior recognition result reliable, subdivides the dynamic human behavior by combining the unwrapped RFID phase trend, and the behavior recognition process ends; when the predicted probability is smaller than the set threshold, entering step 6;
step 6: if the predicted probability of the behavior recognition is smaller than the threshold, continuing to acquire the next frame of data and repeating from step 3 until one behavior recognition process is finished.
Further, in step 1 and step 2, a 5-label SoftMax classifier is used in the EfficientNetB0 network.
Further, in step 1, point cloud data of the five behaviors of standing, sitting, squatting, walking and lying are collected with a frame length of 50 ms, and each behavior category is collected 1000 times.
Further, in step 2, point cloud data of the five transition modes between the five daily behaviors, namely standing and sitting, sitting and lying, standing and walking, sitting and squatting, squatting and standing, are collected with a frame length of 1.5 s, and each behavior category is collected 1000 times.
Further, in step 1, the static human behavior recognition data set is divided into a training set, a validation set and a test set at a ratio of 9:1:1, and the convolutional neural network is then trained, validated and tested.
The beneficial effects of the invention are mainly as follows:
(1) Real-time performance: the system can classify and recognize human behaviors in the LiDAR field of view in real time; judging the shortest behavior state takes only 50 ms.
(2) Accuracy: a convolutional neural network fine-tuned from EfficientNet is used, fully exploiting its small number of parameters and high classification precision and improving the accuracy of human behavior recognition.
(3) Feasibility: the invention combines LiDAR and RFID, which raises no privacy or safety concerns; moreover, LiDAR and RFID are unaffected by lighting conditions and can work normally in darkness. The invention uses image data generated from the point cloud rather than the point cloud itself, giving better robustness in a complex and changeable environment. In addition, after LiDAR point cloud data is converted into image data, the data volume during model training and prediction is reduced by more than 50% for static human behavior recognition, and by more than 75% for dynamic behavior recognition combined with RFID, so the system can be deployed on mobile devices.
Drawings
Fig. 1 is a schematic structural diagram of a system according to an embodiment of the present invention.
Fig. 2 is a deployment architecture diagram of a human behavior recognition system in an embodiment of the present invention.
FIG. 3 illustrates an embodiment of a LiDAR and RFID static human behavior identification process.
FIG. 4 is a LiDAR and RFID dynamic human behavior identification process in an embodiment of the present invention.
Fig. 5 is a flowchart of a human behavior recognition method in an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the drawings of the specification.
As shown in fig. 1, the present embodiment is mainly divided into three parts in structure: LiDAR, RFID reader and tag, and data processing module. LiDAR is used to obtain human behavior in the field of view in real time; the RFID tag is responsible for determining the object ID and auxiliary classification; the data processing module is used for model training and prediction.
The deployment architecture of the human behavior recognition system of the embodiment is shown in fig. 2: an RFID antenna is fixed to the ceiling, and each subject within the LiDAR field of view wears an RFID tag fixed to the shoulder. The data collected by the RFID and the LiDAR are examined frame by frame according to the static or dynamic human behavior identification requirement, realizing real-time recognition of human behavior.
The human behavior identification method provided by the embodiment comprises a Livox-mid40 laser radar, an RFID reader and a data acquisition and processing module, and the human behavior identification is divided into two types of static behavior identification and dynamic behavior identification.
For static human behavior identification, LiDAR is used to collect point cloud data of five common basic human behaviors, namely standing, sitting, squatting, walking and lying, within its field of view; each action is collected about one thousand times, covering various angles and different environments, and the samples are grouped by category to build a data set for human behavior recognition. The LiDAR point cloud processing in this embodiment differs from the conventional approach: in fields such as modeling and environmental perception, LiDAR point cloud data is traditionally used directly. The LiDAR data rate is 100,000 points/second, and a frame length can be set to at most 3 s, i.e., up to 300,000 points per frame; such a huge data volume demands considerable computing power, hinders system deployment, and hurts real-time performance. Furthermore, direct use of the point cloud reduces the robustness of the system: the three-dimensional coordinates in the point cloud are directly tied to the environment, and when the environment within the LiDAR field of view changes, using the point cloud directly may reduce system accuracy. Therefore, in this embodiment the point cloud data is first converted into image data, which greatly reduces the data volume during model training and prediction. The data set is then divided into a training set, a validation set and a test set at a ratio of 9:1:1 and used to train the EfficientNetB0 convolutional neural network; the EfficientNetB0 network is fine-tuned, with the original SoftMax classifier replaced by a 5-label SoftMax classifier.
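The embodiment does not spell out the point-cloud-to-image conversion. One plausible sketch, with axis conventions and ranges that are assumptions rather than details from the patent, projects each 3D point onto a 2D grid and encodes depth as pixel intensity:

```python
import numpy as np

def cloud_to_depth_image(points, h=224, w=224,
                         y_range=(-2.0, 2.0), z_range=(-1.0, 3.0)):
    """Project an (N, 3) point cloud (x = forward depth, y = lateral,
    z = height) onto an h x w depth image. Each pixel stores the nearest
    depth seen in that cell; empty cells stay 0. The axis conventions and
    clipping ranges here are illustrative assumptions."""
    img = np.zeros((h, w), dtype=np.float32)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Map lateral position to columns and height to rows (top row = high z).
    cols = ((y - y_range[0]) / (y_range[1] - y_range[0]) * (w - 1)).astype(int)
    rows = ((z_range[1] - z) / (z_range[1] - z_range[0]) * (h - 1)).astype(int)
    keep = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h) & (x > 0)
    for r, c, d in zip(rows[keep], cols[keep], x[keep]):
        if img[r, c] == 0 or d < img[r, c]:
            img[r, c] = d  # keep the closest return per pixel
    return img
```

A fixed-size image like this is what makes a standard image classifier such as EfficientNetB0 applicable, and it discards the absolute 3D coordinates that tie the raw point cloud to a specific environment.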
The optimal weight parameters obtained by training are stored for predicting human behaviors, and a probability threshold is set; when the predicted probability of a behavior exceeds the threshold, the prediction is considered credible. Persons within the LiDAR field of view are equipped with RFID tags for determining their IDs.
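The confidence gate just described (accept a prediction only when its probability exceeds the threshold) can be sketched as follows; the class names and the 0.80 default follow the description, while the function itself is hypothetical:

```python
import numpy as np

# The five static behavior classes named in the description.
STATIC_CLASSES = ["standing", "sitting", "squatting", "walking", "lying"]

def gate_prediction(probs, threshold=0.80, classes=STATIC_CLASSES):
    """Return (label, prob) if the top softmax probability clears the
    threshold, otherwise (None, prob) so the caller can fetch another frame."""
    probs = np.asarray(probs, dtype=float)
    i = int(np.argmax(probs))
    p = float(probs[i])
    return (classes[i], p) if p > threshold else (None, p)
```

For example, `gate_prediction([0.05, 0.9, 0.02, 0.02, 0.01])` accepts "sitting", while a flat distribution is rejected and triggers acquisition of the next frame.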
Dynamic human behavior recognition in this embodiment means identifying a person's state change among the five daily actions of standing, sitting, squatting, walking and lying. The interconversion between these behavior states cannot be arbitrary: squatting and lying, for example, are not directly interconvertible. The following five transitions are reasonable: standing and sitting, sitting and lying, standing and walking, sitting and squatting, squatting and standing; these five transition modes cover direct and indirect conversion among the five behavior states. The Livox laser radar adopts a non-repetitive scanning integration mode, and one frame of point cloud can be set to at most 3000 ms. Experiments show that when the frame length is large and a person performs dynamic behaviors in the lidar's FOV, the Livox point cloud exhibits a visual "smear" phenomenon, and the point clouds of the two directions of a transition, for example sitting to standing and standing to sitting, are basically consistent after preprocessing. In view of this, the present embodiment groups the two directions of each transition into one class during data acquisition and data set creation, so only one class per transition needs to be collected, and the dynamic human behavior can then be classified with the convolutional neural network. Meanwhile, the subjects wear RFID tags; unlike in static human behavior identification, in dynamic identification the RFID tags both determine the target ID and assist classification: the two directions of a transition are distinguished by the change in the phase data of the RFID tag.
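The phase disambiguation relies on standard phase unwrapping followed by a check of the trend's sign; a minimal numpy sketch (the sampling details are assumptions, not taken from the patent):

```python
import numpy as np

def phase_trend(wrapped_phase):
    """Unwrap an RFID phase sequence (radians, wrapped into [0, 2*pi)) and
    return the sign of its linear trend: +1 for increasing (longer tag-antenna
    path, e.g. standing to sitting under a ceiling antenna would shrink or
    grow the path depending on geometry), -1 for decreasing, 0 for flat."""
    unwrapped = np.unwrap(np.asarray(wrapped_phase, dtype=float))
    t = np.arange(len(unwrapped))
    slope = np.polyfit(t, unwrapped, 1)[0]  # least-squares linear slope
    return int(np.sign(slope)) if abs(slope) > 1e-6 else 0
```

Because `np.unwrap` removes the 2*pi jumps, a monotonically growing phase that wraps several times still yields a single positive slope, which is exactly the property used to tell "sitting to standing" from "standing to sitting".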
The method can effectively reduce the training data volume, greatly save the computer power and enable the large-scale deployment of the system to be feasible.
The specific steps of the static and dynamic human behavior recognition method, as shown in fig. 5, are specifically:
step 1: collecting point cloud data of five behaviors of standing, sitting, squatting, walking and lying of a person 1000 times per category by using LiDAR, and converting the point cloud into an image; making a LiDAR-based static human behavior identification dataset; the convolution neural network finely tuned based on EfficientNet B0 is trained by the data set to obtain the optimal weight of the neural network for static human behavior recognition.
Step 2: use LiDAR to collect point cloud data of the five types of transitions between the five daily behaviors, namely standing and sitting, sitting and lying, standing and walking, sitting and squatting, squatting and standing, 1000 times per category, and convert the point cloud data into images; build a LiDAR-based dynamic human behavior identification data set; train the convolutional neural network fine-tuned from EfficientNetB0 with this data set to obtain the optimal weights of the neural network for dynamic human behavior recognition.
Step 3: after the neural network training is finished, acquire LiDAR point cloud data and RFID ID and phase information of the corresponding frame length according to the static or dynamic human behavior identification requirement.
Step 4: for static human behavior recognition, after the point cloud is converted into image data, classify and predict with the neural network, and simultaneously obtain the RFID ID to determine the ID of the person in the LiDAR field of view; when the predicted probability is larger than the set threshold, the system considers the behavior recognition result reliable and the behavior recognition process ends.
Step 5: for dynamic human behavior recognition, after the point cloud is converted into image data, classify and predict with the neural network, and simultaneously acquire the RFID ID and phase information to determine the ID of the person in the field of view and assist the classification of dynamic behaviors; unwrap the RFID phase so that it loses its periodic wrapping. The two directions of a transition, such as standing and sitting, show completely opposite trends in the unwrapped phase: from standing to sitting the unwrapped phase gradually decreases over time and the slope of the phase curve is negative, while from sitting to standing the unwrapped phase gradually increases and the slope is positive; the remaining four transitions behave similarly. When the probability predicted by the neural network is greater than the set threshold, the system considers the behavior recognition result reliable and subdivides the dynamic human behavior by combining the unwrapped RFID phase trend; the behavior recognition process then ends.
Step 6: if the predicted probability of the behavior recognition is smaller than the threshold, continue to acquire the next frame of data and repeat from step 3 until one behavior recognition process is finished.
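Steps 3 to 6 amount to an acquire-predict-gate loop. The sketch below stubs the sensor and the network with plain callables, so every name in it is hypothetical:

```python
def recognize(frames, predict, threshold=0.80, max_frames=100):
    """Iterate over acquired frames; `predict(frame)` returns (label, prob).
    Accept the first prediction whose probability clears the threshold
    (steps 4/5); otherwise keep consuming frames (step 6 back to step 3).
    `max_frames` bounds the loop so one recognition pass always terminates."""
    for i, frame in enumerate(frames):
        if i >= max_frames:
            break
        label, prob = predict(frame)
        if prob > threshold:
            return label, prob
    return None, 0.0  # no confident prediction within the frame budget
```

In a real deployment `frames` would be the live LiDAR frame stream and `predict` the fine-tuned EfficientNetB0 classifier applied to the converted image; here they are stand-ins for illustration.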
The laser radar used in this embodiment is a Livox Mid-40, which adopts a non-repetitive scanning technology with a data rate of 100,000 points/second; field-of-view coverage improves as integration time increases. The range precision is 2 cm, the angular precision is less than 0.1 degrees, and the maximum detection distance reaches 260 m.
The above description is only a preferred embodiment of the present invention; the scope of the invention is not limited to this embodiment, and equivalent modifications or changes made by those skilled in the art according to the present disclosure shall fall within the protection scope set forth in the appended claims.

Claims (5)

1. A human behavior identification method based on laser radar and RFID is characterized in that: the method comprises the following steps:
step 1: collecting point cloud data of five behaviors of standing, sitting, squatting, walking and lying of a person by using a laser radar LiDAR and an RFID label, converting the point cloud data into an image, and manufacturing a static human body behavior identification data set; training the convolutional neural network finely adjusted based on EfficientNet B0 by using the data set to obtain the optimal weight of the neural network for static human behavior identification;
step 2: five types of transitions between five daily activities of a person are collected using LiDAR and RFID tags: standing and sitting, sitting and lying, standing and walking, sitting and squatting, squatting and standing point cloud data, converting the point cloud data into an image, and making a dynamic human behavior identification data set; training the convolutional neural network finely adjusted based on EfficientNet B0 by using the data set to obtain the optimal weight of the neural network for dynamic human behavior recognition;
step 3: after the neural network training is finished, acquiring LiDAR point cloud data and RFID ID and phase information of the corresponding frame length according to the static or dynamic human behavior identification requirement, then entering step 4 for static human behavior identification or step 5 for dynamic human behavior identification;
step 4: when identifying static human behavior, after the point cloud is converted into image data, classifying and predicting with the neural network, and simultaneously acquiring the RFID ID to determine the ID of the person in the LiDAR field of view; when the predicted probability is larger than the set threshold, the result of the behavior identification is considered reliable and the behavior identification process ends; when the predicted probability is smaller than the set threshold, entering step 6 (the threshold is set with reference to the test-set accuracy and adjusted to the actual situation: when the predicted probability of a behavior point cloud image exceeds the test-set accuracy, the prediction is more reliable; the threshold is generally 0.80, and a larger value means higher recognition precision);
step 5: when identifying dynamic human behavior, after the point cloud is converted into image data, performing classification prediction with the neural network, and simultaneously acquiring the RFID ID and phase information to determine the ID of the person in the field of view and assist the classification of dynamic human behavior; unwrapping the RFID phase so that it loses its periodic wrapping; when the probability predicted by the neural network is greater than the set threshold of 0.80, the system considers the behavior recognition result reliable, subdivides the dynamic human behavior by combining the unwrapped RFID phase trend, and the behavior recognition process ends; when the predicted probability is smaller than the set threshold, entering step 6;
step 6: if the predicted probability of the behavior recognition is smaller than the threshold, continuing to acquire the next frame of data and repeating from step 3 until one behavior recognition process is finished.
2. The human body behavior identification method based on the laser radar and the RFID as claimed in claim 1, wherein: in step 1 and step 2, a 5-label SoftMax classifier is used in the EfficientNetB0 network.
3. The human body behavior identification method based on the laser radar and the RFID as claimed in claim 1, wherein: in the step 1, point cloud data of five behaviors of standing, sitting, squatting, walking and lying of a person are collected in one frame of 50ms, and each behavior type is collected 1000 times.
4. The human body behavior identification method based on the laser radar and the RFID as claimed in claim 1, wherein: in step 2, five conversion modes among five daily behaviors of the human are collected in one frame of 1.5 s: point cloud data of standing and sitting, sitting and lying, standing and walking, sitting and squatting, squatting and standing are collected 1000 times for each behavior category.
5. The human body behavior identification method based on the laser radar and the RFID as claimed in claim 1, wherein: in step 1, the static human behavior recognition data set is divided into a training set, a verification set and a test set at a ratio of 9:1:1, and the convolutional neural network is then trained, verified and tested.
CN202110190644.1A 2021-02-19 2021-02-19 Human behavior identification method based on laser radar and RFID Active CN112766233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110190644.1A CN112766233B (en) 2021-02-19 2021-02-19 Human behavior identification method based on laser radar and RFID

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110190644.1A CN112766233B (en) 2021-02-19 2021-02-19 Human behavior identification method based on laser radar and RFID

Publications (2)

Publication Number Publication Date
CN112766233A (en) 2021-05-07
CN112766233B CN112766233B (en) 2022-07-26

Family

ID=75705558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110190644.1A Active CN112766233B (en) 2021-02-19 2021-02-19 Human behavior identification method based on laser radar and RFID

Country Status (1)

Country Link
CN (1) CN112766233B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110032949A (en) * 2019-03-22 2019-07-19 北京理工大学 A kind of target detection and localization method based on lightweight convolutional neural networks
CN110363820A (en) * 2019-06-28 2019-10-22 东南大学 It is a kind of based on the object detection method merged before laser radar, image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110032949A (en) * 2019-03-22 2019-07-19 北京理工大学 A kind of target detection and localization method based on lightweight convolutional neural networks
CN110363820A (en) * 2019-06-28 2019-10-22 东南大学 It is a kind of based on the object detection method merged before laser radar, image

Also Published As

Publication number Publication date
CN112766233B (en) 2022-07-26

Similar Documents

Publication Publication Date Title
Wang et al. A comparative study of state-of-the-art deep learning algorithms for vehicle detection
Spinello et al. A layered approach to people detection in 3d range data
Amit et al. A robust airport runway detection network based on R-CNN using remote sensing images
CN115244421A (en) Object size estimation using camera map and/or radar information
CN103268616A (en) Multi-feature multi-sensor method for mobile robot to track moving body
CN103049751A (en) Improved weighting region matching high-altitude video pedestrian recognizing method
CN113311428B (en) Human body falling intelligent monitoring system and falling identification method based on millimeter wave radar
Karim et al. A brief review and challenges of object detection in optical remote sensing imagery
Zhang et al. Prioritizing robotic grasping of stacked fruit clusters based on stalk location in RGB-D images
Aposporis Object detection methods for improving UAV autonomy and remote sensing applications
Shangzheng A traffic sign image recognition and classification approach based on convolutional neural network
CN109665464A (en) A kind of method and system that movable type fork truck automatically tracks
Sun et al. Image target detection algorithm compression and pruning based on neural network
Du et al. A passive target recognition method based on LED lighting for industrial internet of things
Schumacher et al. Active learning of ensemble classifiers for gesture recognition
Shah et al. Detection of different types of blood cells: A comparative analysis
Yu et al. Obstacle detection with deep convolutional neural network
Razlaw et al. Detection and tracking of small objects in sparse 3d laser range data
CN112766233B (en) Human behavior identification method based on laser radar and RFID
CN116466827A (en) Intelligent man-machine interaction system and method thereof
Wang et al. Fine-grained gesture recognition based on high resolution range profiles of terahertz radar
CN116206283A (en) Two-dimensional laser point cloud pedestrian detection method and application of mobile robot end
Liu et al. The development of a UAV target tracking system based on YOLOv3-tiny object detection algorithm
Mandischer et al. Radar tracker for human legs based on geometric and intensity features
Cao et al. Development of Intelligent Multimodal Traffic Monitoring using Radar Sensor at Intersections

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant