CN116071960A - Non-motor vehicle and pedestrian collision early warning method, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116071960A
CN116071960A (application CN202310353876.3A)
Authority
CN
China
Prior art keywords
motor vehicle
pedestrian
grid
target
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310353876.3A
Other languages
Chinese (zh)
Other versions
CN116071960B (en)
Inventor
杨德明
翟俊奇
刘星
郭家颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Bao'an Design Group Co ltd
Shenzhen Urban Transport Planning Center Co Ltd
Original Assignee
Shenzhen Baoan Planning And Design Institute Co ltd
Shenzhen Urban Transport Planning Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Baoan Planning And Design Institute Co ltd, Shenzhen Urban Transport Planning Center Co Ltd filed Critical Shenzhen Baoan Planning And Design Institute Co ltd
Priority to CN202310353876.3A priority Critical patent/CN116071960B/en
Publication of CN116071960A publication Critical patent/CN116071960A/en
Application granted granted Critical
Publication of CN116071960B publication Critical patent/CN116071960B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/84 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks
    • G06V10/85 Markov-related models; Markov random fields
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a non-motor vehicle and pedestrian collision early warning method, electronic equipment and a storage medium, and belongs to the technical field of non-motor vehicle and pedestrian collision early warning. The method comprises the following steps: S1, acquiring scene data and processing the scene data; S2, tracking non-motor vehicle and pedestrian targets to obtain non-motor vehicle and pedestrian motion trail data; S3, gridding the detection area and establishing a non-motor vehicle motion track prediction model and a pedestrian motion track prediction model; S4, inputting real-time non-motor vehicle and pedestrian motion trail data into the corresponding prediction models, outputting a predicted non-motor vehicle motion trail and a predicted pedestrian motion trail, judging whether the two trails overlap, and issuing a collision early warning if they do. The invention evaluates the collision risk before giving a warning, can effectively guarantee the safety of pedestrians, and solves the problems that traditional manual inspection is time-consuming, labor-intensive, inefficient and costly.

Description

Non-motor vehicle and pedestrian collision early warning method, electronic equipment and storage medium
Technical Field
The application relates to a collision early warning method, in particular to a non-motor vehicle and pedestrian collision early warning method, electronic equipment and a storage medium, and belongs to the technical field of non-motor vehicle and pedestrian collision early warning.
Background
With the rapid development of industries such as food delivery and express delivery, the number of non-motor vehicles on the road has greatly increased. Because non-motor vehicle drivers are distracted or have a limited field of view, collisions between non-motor vehicles and pedestrians occur and seriously endanger people's lives. If the driver could be reminded one second before the danger occurs, more than half of such collision accidents could be avoided. It is therefore important to predict the trajectories of non-motor vehicles and pedestrians in a monitored scene and to give early warning of possible collisions.
Assigning traffic police to direct traffic at accident-prone sites can effectively reduce the accident rate, but this inspection method is time-consuming and labor-intensive and has very low efficiency: one officer can usually watch only a single fixed road section, so full coverage at the road-network level is difficult to achieve. If cloud computing is used instead, a large amount of video stream information must be uploaded to a cloud platform for centralized processing, which greatly increases network and storage pressure, so the cost is high and reliability is difficult to guarantee.
Disclosure of Invention
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to identify key or critical elements of the invention or to delineate the scope of the invention. Its purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later.
In view of the above, the present invention provides a method, an electronic device and a storage medium for pre-warning collision between a non-motor vehicle and a pedestrian in order to solve the technical problems existing in the prior art.
The first scheme is a collision early warning method for a non-motor vehicle and a pedestrian, comprising the following steps:
s1, acquiring scene data, and processing the scene data;
s2, tracking targets of the non-motor vehicles and pedestrians to obtain motion trail data of the non-motor vehicles and pedestrians;
s3, carrying out gridding treatment on the detection area, and establishing a non-motor vehicle motion track prediction model and a pedestrian motion track prediction model;
s4, inputting real-time non-motor vehicle and pedestrian motion trail data into a non-motor vehicle motion trail prediction model and a pedestrian motion trail prediction model respectively, outputting a predicted non-motor vehicle motion trail and a predicted pedestrian motion trail, judging whether the non-motor vehicle and the pedestrian motion trail are overlapped, and if so, giving collision early warning.
Preferably, S1 specifically comprises the following steps:
s11, connecting a camera with a network interface of an edge computing gateway, and enabling the edge computing gateway to access real-time video stream information acquired by the camera in a real-time streaming video address mode;
s12, decoding the original video into a single frame picture with a unified RGB format;
s13, performing color space conversion and image filtering denoising processing on the single frame picture.
Preferably, S2 specifically comprises the following steps:
s21, acquiring a target frame of the non-motor vehicle, acquiring the four vertex coordinates of the non-motor vehicle target frame, and at the same time generating a unique label for the non-motor vehicle target frame;
s22, acquiring a target frame of the pedestrian, calculating the center point coordinate of the target frame, and at the same time generating a unique label for the pedestrian target frame;
s23, for each non-motor vehicle target frame, calculating the center points of all pedestrian target frames and the center point of the non-motor vehicle target frame, and drawing a circle whose center is the center point of the non-motor vehicle target frame and whose radius is the line connecting the two center points;
s24, taking at most 3 pedestrian target frames whose circle radius is smaller than a threshold value;
s25, calculating the proportion of the overlapping area between each pedestrian target frame and the circle to the total area of the circle, and taking the pedestrian target frame with the largest proportion; if several pedestrian target frames share the largest proportion, taking the one with the smallest circle radius;
s26, forming a minimum adjacent rectangle target frame from the pedestrian target frame selected in S25 and the non-motor vehicle target frame, and calculating the center point coordinate of the minimum adjacent rectangle; generating a unique label for the minimum adjacent rectangle target frame; for all pedestrian target frames that are not matched with any non-motor vehicle target frame, using their target frame labels as their numbers and treating them as pedestrian targets;
s27, associating all non-motor vehicle and pedestrian target frames, and assigning a unique target serial number to each rectangle until the rectangular target frame disappears;
s28, if the center point of a target frame is not in the detection area, stopping tracking that target;
s29, if the labels of the rectangles in two adjacent frames are consistent, considering them to be the same minimum adjacent rectangle;
s210, if a non-motor vehicle re-enters the detection area, assigning a new target serial number, wherein the target serial number is a random combination of 8 or more digits or letters, so that at least each new target serial number is unique within the current day;
s211, if a non-motor vehicle target frame cannot be matched with any pedestrian target frame, performing no operation; if the distance is smaller than the threshold value and there is only one pedestrian target frame, generating a minimum adjacent rectangle from that pedestrian target frame and the non-motor vehicle target frame.
Preferably, S3 specifically comprises the following steps:
s31, gridding the monitoring area to generate a perspective grid, wherein all grid cells are of equal size and the side length is set to the maximum value of the long side of the non-motor vehicle target detection frames in the scene, calculated from n collected non-motor vehicle target images;
s32, assigning a unique ID for each grid for marking;
s33, optionally selecting one point in the monitoring area as an origin, and establishing a two-dimensional coordinate system to form a grid-shaped monitoring area;
s34, using a Scene-LSTM as a pedestrian motion trail prediction model of the monitoring area;
s35, establishing a non-motor vehicle motion track prediction model.
Preferably, S35 specifically includes the following steps:
s351, acquiring tracks of at least 1000 non-motor vehicle targets, and storing coordinates of the middle point of the bottom edge of each frame of non-motor vehicle target frame as a track sequence coordinate sequence;
s352, clustering the track sequences, and measuring the distance between two sections of tracks;
s353, converting the track coordinate sequence into a grid sequence for each path, replacing coordinate values with grid numbers of each track point in the sequence, and reserving only one grid number when a plurality of continuous track points are in the same grid;
s354, defining the hidden state set Q = {q_1, q_2, …, q_K}, wherein q_k represents the k-th class of path obtained by the clustering, and the observation state set V = {v_1, v_2, …, v_M}, wherein v_m represents the number of the m-th grid of the monitoring area;
s355, converting the track coordinate sequences in S351 into grid sequences to form track grid sequences, and attaching a label to each sequence according to the cluster to which the track belongs in S352, so that the training data take the form of a grid sequence together with its path label, {(o_1, o_2, …, o_T), k};
s356, learning the initial state distribution π = (π_1, π_2, …, π_K), wherein π_j = n_j / N, N represents the number of samples and n_j represents the number of samples belonging to the j-th main path;
s357, learning the state transition matrices A^(k), wherein a^(k)(i, j) is the probability of transferring from grid i to grid j when the path is of class k: a^(k)(i, j) = (n^(k)(i, j) + 1) / (Σ_j n^(k)(i, j) + 1), wherein n^(k)(i, j) represents the number of times the k-th class of path in the samples transfers from grid i to grid j; adding 1 to both the numerator and the denominator eliminates the influence of transition counts that are 0;
s358, finding the most likely path according to the non-motor vehicle path observed in real time: when the observed grid sequence is O = (o_1, o_2, …, o_n), the likelihood that the sample belongs to the k-th class of main path is P(O | k) = π_k · ∏_{t=1}^{n-1} a^(k)(o_t, o_{t+1}), and taking the logarithm converts the multiplication into addition;
s359, taking the path with the maximum value among these likelihoods as the predicted path and obtaining the predicted grid: k* = argmax_k log P(O | k), g_next = argmax_g a^(k*)(o_n, g), wherein g_next represents the next predicted grid of path k*; iterating forward continuously in this way gives a grid sequence reaching the boundary of the monitoring area.
Preferably, S4 specifically comprises the following steps:
s41, inputting the first 10 frame coordinate sequences of the pedestrian track sequences into a pedestrian motion track prediction model, and outputting predicted pedestrian coordinate tracks;
s42, finding the corresponding grid of the pedestrian for each frame of predicted coordinates in the predicted pedestrian coordinate track, and converting the predicted pedestrian track into a grid sequence of length 10, W = (w_1, w_2, …, w_10), wherein w_i represents the grid of the predicted pedestrian in the i-th frame;
s43, inputting the track grid sequence into the non-motor vehicle motion track prediction model, and outputting a prediction grid sequence;
s44, estimating the position of the non-motor vehicle according to the frame number: assuming that the length of the input grid sequence is n, the estimated moving speed of the target is v = n/10 (grid/frame); letting the prediction grid sequence be G = (g_1, g_2, …), the estimated non-motor vehicle position in the i-th frame is the ⌈v·i⌉-th grid g_⌈v·i⌉, so that the predicted trajectory of the non-motor vehicle becomes a grid sequence of length 10, B = (b_1, b_2, …, b_10), wherein b_i represents the grid in which the non-motor vehicle is predicted to be in the i-th frame;
s45, generating an early warning set sequence E = (E_1, E_2, …, E_10) according to the prediction grid sequence W of the pedestrians;
s46, when predicting the i-th frame, adding the grids of the predicted early warning areas of all pedestrians in that frame into E_i, namely taking E_i as the union, over all pedestrians p, of the early warning areas A(w_i^p) generated with the predicted grid w_i^p of pedestrian p as the center;
s47, for the prediction grid sequence B of every non-motor vehicle target, judging within the 1st to 10th predicted frames: when there exists an i such that b_i ∈ E_i, it is judged that the non-motor vehicle and the pedestrian have a collision risk, and collision early warning is carried out.
The second scheme is a non-motor vehicle and pedestrian collision early warning system, which comprises an acquisition module, a detection module, a prediction module, an analysis module and an early warning module;
the acquisition module, the detection module, the prediction module, the analysis module and the early warning module are connected in sequence;
the acquisition module is used for acquiring scene data and processing the scene data;
the detection module is used for tracking the targets of the non-motor vehicle and the pedestrians to obtain motion trail data of the non-motor vehicle and the pedestrians;
the prediction module is used for carrying out gridding treatment on the detection area and establishing a non-motor vehicle motion track prediction model and a pedestrian motion track prediction model;
the analysis module is used for judging whether the motion trail of the non-motor vehicle and the motion trail of the pedestrian coincide or not;
the early warning module is used for making collision early warning.
The third scheme is an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the non-motor vehicle and pedestrian collision early warning method of the first scheme when executing the computer program.
The fourth scheme is a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the non-motor vehicle and pedestrian collision early warning method of the first scheme.
The beneficial effects of the invention are as follows: the invention uses the deep-learning-based yolov3 target detection algorithm together with target tracking to acquire the tracks of pedestrians and non-motor vehicles, trains a pedestrian track prediction model with an LSTM on the acquired pedestrian track data, trains an improved hidden Markov model on the non-motor vehicle tracks for track prediction, and performs collision prediction on the gridded monitoring area. The collision risk is evaluated before a warning is issued, which effectively guarantees the safety of pedestrians and solves the problems that traditional manual inspection is time-consuming, labor-intensive, inefficient and costly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a schematic flow chart of a method for pre-warning collision between a non-motor vehicle and a pedestrian;
FIG. 2 is a schematic illustration of a non-motor vehicle target frame and a pedestrian target frame, wherein (a) shows the case in which the non-motor vehicle target frame and the pedestrian target frame do not overlap, and (b) shows the case in which they overlap;
FIG. 3 is a schematic diagram of a perspective grid;
FIG. 4 is a schematic view of three travel paths of a non-motor vehicle starting in one direction;
FIG. 5 is a schematic diagram of a grid predicting that a non-motor vehicle is in frame i.
Detailed Description
In order to make the technical solutions and advantages of the embodiments of the present application more apparent, the following detailed description of exemplary embodiments of the present application is given with reference to the accompanying drawings, and it is apparent that the described embodiments are only some of the embodiments of the present application and not exhaustive of all the embodiments. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
Embodiment 1, referring to fig. 1 to 5, describes a non-motor vehicle and pedestrian collision early warning method, comprising the following steps:
s1, acquiring scene data, and processing the scene data;
s11, accessing the video stream information of a camera: connecting the camera with a network interface of the edge computing gateway by using an RJ45 Ethernet cable, and letting the edge computing gateway access, in its system software, the real-time video stream acquired by the camera by means of an RTSP video stream address;
s12, decoding the original video into a single frame picture with a unified RGB format;
s13, performing color space conversion and image filtering/denoising on the single-frame picture, so as to improve the picture quality and facilitate the subsequent further processing of the image information;
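For illustration only, steps S11 to S13 could be sketched in Python with OpenCV as follows; the RTSP address, the Gaussian kernel size and the generator structure are assumptions of this sketch and are not part of the original disclosure:

```python
import cv2

# Hypothetical RTSP address exposed by the camera; replace with the real stream (assumption).
RTSP_URL = "rtsp://192.168.1.10:554/stream1"

def preprocessed_frames(rtsp_url=RTSP_URL):
    """Yield denoised single RGB frames from the real-time video stream (S11-S13)."""
    cap = cv2.VideoCapture(rtsp_url)                 # S11: access the RTSP video stream
    while cap.isOpened():
        ok, frame_bgr = cap.read()                   # S12: decode one frame (OpenCV gives BGR)
        if not ok:
            break
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)  # S12: unified RGB format
        frame_rgb = cv2.GaussianBlur(frame_rgb, (3, 3), 0)      # S13: simple filtering/denoising
        yield frame_rgb
    cap.release()
```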
s2, tracking targets of the non-motor vehicles and pedestrians to obtain motion trail data of the non-motor vehicles and pedestrians; referring to fig. 2;
s21, applying the deep-learning-based target detection algorithm yolov3 to each frame of the monitoring image to acquire the non-motor vehicle target frames, acquiring the four vertex coordinates of each non-motor vehicle target frame, and at the same time generating a unique label for each non-motor vehicle target frame;
s22, applying the deep-learning-based target detection algorithm yolov3 to each frame of the monitoring image to acquire the pedestrian target frames, calculating the center point coordinates of each target frame, and at the same time generating a unique label for each pedestrian target frame.
S23, for each non-motor vehicle target frame, calculating the center points of all pedestrian target frames and the center point of the non-motor vehicle target frame, and drawing a circle whose center is the center point of the non-motor vehicle target frame and whose radius is the line connecting the two center points;
s24, taking at most 3 pedestrian target frames whose circle radius is smaller than the threshold value.
Specifically, the threshold is set according to the actual situation.
S25, calculating the proportion of the overlapping area between each pedestrian target frame and the circle to the total area of the circle, and taking the pedestrian target frame with the largest proportion; if several pedestrian target frames share the largest proportion, taking the one with the smallest circle radius;
s26, forming a minimum adjacent rectangle target frame from the pedestrian target frame selected in S25 and the non-motor vehicle target frame, and calculating the center point coordinate of the minimum adjacent rectangle; generating a unique label for the minimum adjacent rectangle target frame; for all pedestrian target frames that are not matched with any non-motor vehicle target frame, using their target frame labels as their numbers and treating them as pedestrian targets;
s27, at the same time using a target tracking algorithm to associate all non-motor vehicle and pedestrian target frames, and assigning a unique target serial number to each rectangle until the rectangular target frame disappears;
s28, if the center point of a target frame is not in the detection area, stopping tracking that target;
s29, if the labels of the rectangles in two adjacent frames are consistent, considering them to be the same minimum adjacent rectangle;
s210, if a non-motor vehicle re-enters the detection area, assigning a new target serial number, wherein the target serial number is a random combination of 8 or more digits or letters, so that at least each new target serial number is unique within the current day;
s211, if a non-motor vehicle target frame cannot be matched with any pedestrian target frame, performing no operation; if the distance is smaller than the threshold value and there is only one pedestrian target frame, generating a minimum adjacent rectangle from that pedestrian target frame and the non-motor vehicle target frame.
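As a reading aid for the association logic of steps S23 to S26, a simplified Python sketch is given below; the tuple-based target frame format, the sampling approximation of the circle overlap and the helper names are illustrative assumptions, not the patented implementation:

```python
import math

def center(box):
    """Center point of a target frame given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def circle_overlap_ratio(ped_box, circle_center, radius, samples=20):
    """Approximate share of the circle area covered by the pedestrian frame (grid sampling)."""
    cx, cy = circle_center
    inside = hit = 0
    for i in range(samples):
        for j in range(samples):
            px = cx - radius + 2 * radius * (i + 0.5) / samples
            py = cy - radius + 2 * radius * (j + 0.5) / samples
            if (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2:
                inside += 1
                if ped_box[0] <= px <= ped_box[2] and ped_box[1] <= py <= ped_box[3]:
                    hit += 1
    return hit / inside if inside else 0.0

def match_rider(nmv_box, ped_boxes, dist_threshold):
    """S23-S25: pick the pedestrian frame most likely to belong to this non-motor vehicle."""
    c_nmv = center(nmv_box)
    candidates = [(ped, math.dist(c_nmv, center(ped))) for ped in ped_boxes]
    candidates = [(ped, r) for ped, r in candidates if r < dist_threshold]
    candidates = sorted(candidates, key=lambda pr: pr[1])[:3]        # S24: at most 3 frames
    if not candidates:
        return None
    # S25: largest overlap ratio wins; ties are broken by the smaller circle radius
    return max(candidates,
               key=lambda pr: (circle_overlap_ratio(pr[0], c_nmv, pr[1]), -pr[1]))[0]

def min_adjacent_rect(box_a, box_b):
    """S26: minimum adjacent rectangle covering both target frames."""
    return (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))
```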
S3, carrying out gridding treatment on the detection area, and establishing a non-motor vehicle motion track prediction model and a pedestrian motion track prediction model;
specifically, the monitoring area is first gridded for the trajectory conflict analysis of pedestrians and non-motor vehicles; then the pedestrian motion track prediction model and the non-motor vehicle motion track prediction model are trained with an LSTM and an improved hidden Markov model, respectively; the method specifically comprises the following steps:
s31, gridding the monitoring area by using Adobe Illustrator CS to generate a perspective grid (refer to FIG. 3), wherein all grid cells are of equal size and the side length is set to the maximum value (unit: pixel) of the long side of the non-motor vehicle target detection frames in the scene, calculated from 100 collected non-motor vehicle target images;
s32, assigning a unique ID for each grid for marking;
s33, optionally selecting one point in the monitoring area as an origin, establishing a two-dimensional coordinate system, and forming a grid-shaped monitoring area for predicting the trajectories of non-motor vehicles and pedestrians;
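To make the grid numbering of S31 to S33 concrete, a minimal sketch is shown below; it assumes a uniform rectangular grid for simplicity, whereas the patent itself uses a perspective grid whose cell side comes from the non-motor vehicle detection frames:

```python
def point_to_grid_id(x, y, cell_side, n_cols):
    """Map a point of the monitoring-area coordinate system to a unique grid ID (S32-S33).

    cell_side: grid side length in pixels (per S31, the maximum long side of the
               non-motor vehicle detection frames); n_cols: grid columns in the area.
    """
    col = int(x // cell_side)
    row = int(y // cell_side)
    return row * n_cols + col       # one unique integer ID per grid cell

# Example: with 40-pixel cells and 32 columns, the point (85, 130) lies in
# column 2, row 3, i.e. grid ID 3 * 32 + 2 = 98.
```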
s34, using a Scene-LSTM as the pedestrian motion trail prediction model of the monitoring area; the pedestrian motion trail prediction model comprises a pedestrian motion acquisition module, a scene model and a scene data filtering module; the pedestrian motion acquisition module is used for acquiring the position information of pedestrian targets in the monitoring area; the scene model is used for generating the gridded monitoring area scene; the scene data filtering module is used for filtering out erroneous and linear sequences to improve the quality of the input data. The specific implementation steps of the pedestrian motion trail prediction model comprise:
s341, extracting the position information of the most recent 10 frames of each pedestrian target in the monitoring area and mapping it into the two-dimensional coordinate system to form the coordinate sequence of the pedestrian motion, namely the pedestrian motion track;
s342, filtering out erroneous and linear sequences from the coordinate sequence of each pedestrian target to improve the quality of the input data.
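A minimal Python sketch of the data preparation implied by S341, S342 and the training steps below is given here; the collinearity filter and the 20-frame split are spelled out as assumptions, since the patent only states that erroneous and linear sequences are filtered out:

```python
import numpy as np

def last_n_points(track, n=10):
    """S341: coordinate sequence of the most recent n frames of one pedestrian target."""
    return np.asarray(track[-n:], dtype=np.float32)       # track: list of (x, y) per frame

def is_valid_sequence(seq, collinear_tol=1.0):
    """S342: drop erroneous sequences (too short, missing values) and almost linear ones."""
    if len(seq) < 10 or np.isnan(seq).any():
        return False
    p0, p1 = seq[0], seq[-1]
    d = float(np.linalg.norm(p1 - p0))
    if d == 0.0:
        return False
    # perpendicular distance of every point from the straight line through p0 and p1
    dists = np.abs((p1[0] - p0[0]) * (seq[:, 1] - p0[1])
                   - (p1[1] - p0[1]) * (seq[:, 0] - p0[0])) / d
    return float(dists.max()) > collinear_tol             # keep sequences with some curvature

def split_training_pair(track_20_frames):
    """Training data: first 10 frames as model input, last 10 frames as prediction target."""
    seq = np.asarray(track_20_frames[:20], dtype=np.float32)
    return seq[:10], seq[10:20]
```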
The training method of the model is as follows:
step 1, obtaining at least 1000 target sequences of different pedestrians that each reach 20 frames; for every frame of each sequence, taking the coordinates (x_t, y_t) of the middle point of the bottom edge of the pedestrian target frame and establishing the pedestrian track sequence T = ((x_1, y_1), (x_2, y_2), …, (x_20, y_20)); the first 10 frames of the coordinate sequence of T are used as training data and the last 10 frames as test data;
step 2, inputting the track sequences into the pedestrian motion track prediction model and training it; for an input track sequence, the model outputs the predicted pedestrian coordinate track of the following 10 frames;
s35, establishing a non-motor vehicle motion track prediction model, wherein the used model is an improved hidden Markov model, and the method comprises the following steps of:
s351, acquiring tracks of at least 1000 non-motor vehicle targets, and storing coordinates of the middle point of the bottom edge of each frame of non-motor vehicle target frame as a track sequence coordinate sequence;
s352, clustering the track sequence by using a DBSCAN algorithm, wherein the distance between two sections of tracks is measured by using a dynamic time warping algorithm (DTW), and the algorithm can calculate the similarity between tracks with different lengths. The running path of the non-motor vehicle in the monitoring area can be obtained through clustering; referring to fig. 4;
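For S352, one possible Python sketch of DTW-based distances fed into DBSCAN is shown below; the eps and min_samples values are placeholders, and the plain O(nm) DTW recurrence stands in for whatever implementation is actually used:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories of (x, y) points."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def cluster_main_paths(trajectories, eps=150.0, min_samples=5):
    """S352: cluster non-motor vehicle trajectories into main travel paths."""
    n = len(trajectories)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw_distance(trajectories[i], trajectories[j])
    labels = DBSCAN(eps=eps, min_samples=min_samples, metric="precomputed").fit_predict(dist)
    return labels        # label k >= 0 marks the k-th main path, -1 marks noise trajectories
```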
s353, converting the track coordinate sequence into a grid sequence for each path, replacing coordinate values with grid numbers of each track point in the sequence, and reserving only one grid number when a plurality of continuous track points are in the same grid;
s354, defining the hidden state set Q = {q_1, q_2, …, q_K}, wherein q_k represents the k-th class of path obtained by the clustering, and the observation state set V = {v_1, v_2, …, v_M}, wherein v_m represents the number of the m-th grid of the monitoring area;
s355, converting the track coordinate sequences in S351 into grid sequences to form track grid sequences, and attaching a label to each sequence according to the cluster to which the track belongs in S352, so that the training data take the form of a grid sequence together with its path label, {(o_1, o_2, …, o_T), k};
s356, learning the initial state distribution π = (π_1, π_2, …, π_K), wherein π_j = n_j / N, N represents the number of samples and n_j represents the number of samples belonging to the j-th main path;
s357, learning the state transition matrices A^(k), wherein a^(k)(i, j) is the probability of transferring from grid i to grid j when the path is of class k: a^(k)(i, j) = (n^(k)(i, j) + 1) / (Σ_j n^(k)(i, j) + 1), wherein n^(k)(i, j) represents the number of times the k-th class of path in the samples transfers from grid i to grid j; adding 1 to both the numerator and the denominator eliminates the influence of transition counts that are 0;
s358, finding the most likely path according to the non-motor vehicle path observed in real time: when the observed grid sequence is O = (o_1, o_2, …, o_n), the likelihood that the sample belongs to the k-th class of main path is P(O | k) = π_k · ∏_{t=1}^{n-1} a^(k)(o_t, o_{t+1}), and taking the logarithm converts the multiplication into addition;
s359, taking the path with the maximum value among these likelihoods as the predicted path and obtaining the predicted grid: k* = argmax_k log P(O | k), g_next = argmax_g a^(k*)(o_n, g), wherein g_next represents the next predicted grid of path k*; iterating forward continuously in this way gives a grid sequence reaching the boundary of the monitoring area;
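Under the reconstruction of S354 to S359 above, the learning and prediction could be sketched as follows; the dictionary-based counts, the smoothing and the function names are assumptions of this sketch, not the patented code:

```python
import math
from collections import defaultdict

def learn_model(labelled_sequences, n_paths):
    """S355-S357: labelled_sequences is a list of (grid_sequence, path_label) samples."""
    path_count = defaultdict(int)
    trans_count = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))  # k -> i -> j -> n
    for grids, k in labelled_sequences:
        path_count[k] += 1
        for i, j in zip(grids, grids[1:]):
            trans_count[k][i][j] += 1
    total = sum(path_count.values())
    pi = {k: path_count[k] / total for k in range(n_paths)}      # S356: initial distribution
    return pi, trans_count

def trans_prob(trans_count, k, i, j):
    """S357: transition score from grid i to grid j on path k, with 1 added to numerator
    and denominator as described, so that unseen transitions do not give zero."""
    row = trans_count[k][i]
    return (row[j] + 1) / (sum(row.values()) + 1)

def log_likelihood(pi, trans_count, observed, k):
    """S358: log-likelihood that the observed grid sequence belongs to path k."""
    logp = math.log(pi.get(k, 0.0) + 1e-12)
    for i, j in zip(observed, observed[1:]):
        logp += math.log(trans_prob(trans_count, k, i, j))
    return logp

def predict_next_grid(pi, trans_count, observed, n_paths):
    """S359: choose the most likely path, then the most probable next grid after the last one."""
    best_k = max(range(n_paths), key=lambda k: log_likelihood(pi, trans_count, observed, k))
    last = observed[-1]
    row = trans_count[best_k][last]
    if not row:
        return best_k, None                    # this grid was never left on the learnt path
    return best_k, max(row, key=lambda j: trans_prob(trans_count, best_k, last, j))

# Usage sketch: pi, A = learn_model(training_samples, n_paths=3)
#               k_star, g_next = predict_next_grid(pi, A, observed_grids, n_paths=3)
```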
s4, inputting real-time non-motor vehicle and pedestrian motion trail data into a non-motor vehicle motion trail prediction model and a pedestrian motion trail prediction model respectively, outputting a predicted non-motor vehicle motion trail and a predicted pedestrian motion trail, judging whether the non-motor vehicle and the pedestrian motion trail are overlapped, and if so, giving collision early warning.
S41, inputting a first 10 frame coordinate sequence of the pedestrian track sequence into a pedestrian motion track prediction model, and outputting a second 10 frame predicted pedestrian coordinate track;
s42, finding out corresponding grids of the pedestrians according to the predicted coordinates of each frame in the predicted pedestrian coordinate tracks, and converting the predicted pedestrian tracks into a grid sequence with the length of 10
Figure SMS_65
, wherein />
Figure SMS_66
A grid representing a predicted i-frame pedestrian;
s43, inputting the track grid sequence into a non-motor vehicle motion track prediction model, and outputting a prediction grid sequence;
s44, estimating the position of the non-motor vehicle according to the frame number: assuming that the length of the input grid sequence is n, the moving speed v of the estimated target is
Figure SMS_67
(grid/frame); let the prediction grid sequence be +.>
Figure SMS_68
Then the estimated non-motor vehicle position at the i-th frame is +.>
Figure SMS_69
Changing the predicted trajectory of a non-motor vehicle into a grid sequence of length 10 +.>
Figure SMS_70
, wherein ,
Figure SMS_71
representing a grid in which the non-motor vehicle of the ith frame is predicted to be; referring to fig. 5;
s45, according to the prediction grid sequence of the pedestrians
Figure SMS_72
Generating an early warning collection sequence:
Figure SMS_73
s46, when the ith frame is predicted, adding grids of the predicted early warning areas of all pedestrians in the current frame
Figure SMS_74
In (3), namely:
Figure SMS_75
wherein
Figure SMS_76
Expressed as +.>
Figure SMS_77
An early warning area generated for the center;
s47, predicting grid sequences of all non-motor vehicle targets
Figure SMS_78
Judging in 1 st to 10 th predicted frames, when there is +.>
Figure SMS_79
And when the collision risk of the non-motor vehicle and the pedestrian is judged, and collision early warning is carried out.
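A short sketch of the overlap test of S45 to S47 follows; the 8-neighbour early warning area is an assumption, since the patent only states that an early warning area is generated around each predicted pedestrian grid:

```python
def warning_area(grid_id, n_cols):
    """Assumed early warning area: the pedestrian's grid plus its 8 neighbouring grids (S46)."""
    row, col = divmod(grid_id, n_cols)
    return {(row + dr) * n_cols + (col + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if row + dr >= 0 and 0 <= col + dc < n_cols}

def collision_warning(pedestrian_grids, vehicle_grids, n_cols):
    """S45-S47: pedestrian_grids[p][i] and vehicle_grids[v][i] are the grids predicted
    for frame i (i = 0..9) of pedestrian p and non-motor vehicle v."""
    for i in range(10):
        # S46: E_i is the union of the early warning areas of all pedestrians in frame i
        e_i = set()
        for ped in pedestrian_grids:
            e_i |= warning_area(ped[i], n_cols)
        # S47: any non-motor vehicle predicted to enter E_i triggers an early warning
        for veh in vehicle_grids:
            if veh[i] in e_i:
                return True, i                  # collision risk at predicted frame i
    return False, None
```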
Embodiment 2, a collision early warning system of a non-motor vehicle and a pedestrian, comprising an acquisition module, a detection module, a prediction module, an analysis module and an early warning module;
the acquisition module, the detection module, the prediction module, the analysis module and the early warning module are connected in sequence;
the acquisition module is used for acquiring scene data and processing the scene data;
the detection module is used for tracking the targets of the non-motor vehicle and the pedestrians to obtain motion trail data of the non-motor vehicle and the pedestrians;
the prediction module is used for carrying out gridding treatment on the detection area and establishing a non-motor vehicle motion track prediction model and a pedestrian motion track prediction model;
the analysis module is used for judging whether the motion trail of the non-motor vehicle and the motion trail of the pedestrian coincide or not;
the early warning module is used for making collision early warning.
In embodiment 3, the computer device of the present invention may be a device comprising a processor and a memory, for example a single-chip microcomputer including a central processing unit. The processor implements the steps of the non-motor vehicle and pedestrian collision early warning method described above when executing the computer program stored in the memory.
The processor may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the device (such as audio data, a phonebook, etc.), and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other solid-state storage device.
Embodiment 4, computer-readable storage medium embodiment
The computer readable storage medium of the present invention may be any form of storage medium that is readable by a processor of a computer device, including but not limited to, a nonvolatile memory, a volatile memory, a ferroelectric memory, etc., on which a computer program is stored, and when the processor of the computer device reads and executes the computer program stored in the memory, the steps of a non-motor vehicle and pedestrian collision warning method described above may be implemented.
The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately added to or removed from according to the requirements of legislation and patent practice in a given jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is defined by the appended claims.

Claims (9)

1. The collision early warning method for the non-motor vehicle and the pedestrian is characterized by comprising the following steps of:
s1, acquiring scene data, and processing the scene data;
s2, tracking targets of the non-motor vehicles and pedestrians to obtain motion trail data of the non-motor vehicles and pedestrians;
s3, carrying out gridding treatment on the detection area, and establishing a non-motor vehicle motion track prediction model and a pedestrian motion track prediction model;
s4, inputting real-time non-motor vehicle and pedestrian motion trail data into a non-motor vehicle motion trail prediction model and a pedestrian motion trail prediction model respectively, outputting a predicted non-motor vehicle motion trail and a predicted pedestrian motion trail, judging whether the non-motor vehicle and the pedestrian motion trail are overlapped, and if so, giving collision early warning.
2. The method for pre-warning the collision between the non-motor vehicle and the pedestrian according to claim 1, wherein the step S1 specifically comprises the following steps:
s11, connecting a camera with a network interface of an edge computing gateway, and enabling the edge computing gateway to access real-time video stream information acquired by the camera in a real-time streaming video address mode;
s12, decoding the original video into a single frame picture with a unified RGB format;
s13, performing color space conversion and image filtering denoising processing on the single frame picture.
3. The method for pre-warning the collision between the non-motor vehicle and the pedestrian according to claim 2, wherein the step S2 specifically comprises the following steps:
s21, acquiring a target frame of the non-motor vehicle, acquiring the four vertex coordinates of the non-motor vehicle target frame, and at the same time generating a unique label for the non-motor vehicle target frame;
s22, acquiring a target frame of the pedestrian, calculating the center point coordinate of the target frame, and at the same time generating a unique label for the pedestrian target frame;
s23, for each non-motor vehicle target frame, calculating the center points of all pedestrian target frames and the center point of the non-motor vehicle target frame, and drawing a circle whose center is the center point of the non-motor vehicle target frame and whose radius is the line connecting the two center points;
s24, taking at most 3 pedestrian target frames whose circle radius is smaller than a threshold value;
s25, calculating the proportion of the overlapping area between each pedestrian target frame and the circle to the total area of the circle, and taking the pedestrian target frame with the largest proportion; if several pedestrian target frames share the largest proportion, taking the one with the smallest circle radius;
s26, forming a minimum adjacent rectangle target frame from the pedestrian target frame selected in S25 and the non-motor vehicle target frame, and calculating the center point coordinate of the minimum adjacent rectangle; generating a unique label for the minimum adjacent rectangle target frame; for all pedestrian target frames that are not matched with any non-motor vehicle target frame, using their target frame labels as their numbers and treating them as pedestrian targets;
s27, associating all non-motor vehicle and pedestrian target frames, and assigning a unique target serial number to each rectangle until the rectangular target frame disappears;
s28, if the center point of a target frame is not in the detection area, stopping tracking that target;
s29, if the labels of the rectangles in two adjacent frames are consistent, considering them to be the same minimum adjacent rectangle;
s210, if a non-motor vehicle re-enters the detection area, assigning a new target serial number, wherein the target serial number is a random combination of 8 or more digits or letters, so that at least each new target serial number is unique within the current day;
s211, if a non-motor vehicle target frame cannot be matched with any pedestrian target frame, performing no operation; if the distance is smaller than the threshold value and there is only one pedestrian target frame, generating a minimum adjacent rectangle from that pedestrian target frame and the non-motor vehicle target frame.
4. The method for pre-warning the collision between the non-motor vehicle and the pedestrian according to claim 3, wherein the step S3 specifically comprises the following steps:
s31, gridding the monitoring area to generate a perspective grid, wherein all grid cells are of equal size and the side length is set to the maximum value of the long side of the non-motor vehicle target detection frames in the scene, calculated from n collected non-motor vehicle target images;
s32, assigning a unique ID for each grid for marking;
s33, optionally selecting one point in the monitoring area as an origin, and establishing a two-dimensional coordinate system to form a grid-shaped monitoring area;
s34, using a Scene-LSTM as a pedestrian motion trail prediction model of the monitoring area;
s35, establishing a non-motor vehicle motion track prediction model.
5. The method for pre-warning the collision between a non-motor vehicle and a pedestrian according to claim 4, wherein the step S35 specifically comprises the following steps:
s351, acquiring tracks of at least 1000 non-motor vehicle targets, and storing coordinates of the middle point of the bottom edge of each frame of non-motor vehicle target frame as a track sequence coordinate sequence;
s352, clustering the track sequences, and measuring the distance between two sections of tracks;
s353, converting the track coordinate sequence into a grid sequence for each path, replacing coordinate values with grid numbers of each track point in the sequence, and reserving only one grid number when a plurality of continuous track points are in the same grid;
s354, defining the hidden state set Q = {q_1, q_2, …, q_K}, wherein q_k represents the k-th class of path obtained by the clustering, and the observation state set V = {v_1, v_2, …, v_M}, wherein v_m represents the number of the m-th grid of the monitoring area;
s355, converting the track coordinate sequences in S351 into grid sequences to form track grid sequences, and attaching a label to each sequence according to the cluster to which the track belongs in S352, so that the training data take the form of a grid sequence together with its path label, {(o_1, o_2, …, o_T), k};
s356, learning the initial state distribution π = (π_1, π_2, …, π_K), wherein π_j = n_j / N, N represents the number of samples and n_j represents the number of samples belonging to the j-th main path;
s357, learning the state transition matrices A^(k), wherein a^(k)(i, j) is the probability of transferring from grid i to grid j when the path is of class k: a^(k)(i, j) = (n^(k)(i, j) + 1) / (Σ_j n^(k)(i, j) + 1), wherein n^(k)(i, j) represents the number of times the k-th class of path in the samples transfers from grid i to grid j; adding 1 to both the numerator and the denominator eliminates the influence of transition counts that are 0;
s358, finding the most likely path according to the non-motor vehicle path observed in real time: when the observed grid sequence is O = (o_1, o_2, …, o_n), the likelihood that the sample belongs to the k-th class of main path is P(O | k) = π_k · ∏_{t=1}^{n-1} a^(k)(o_t, o_{t+1}), and taking the logarithm converts the multiplication into addition;
s359, taking the path with the maximum value among these likelihoods as the predicted path and obtaining the predicted grid: k* = argmax_k log P(O | k), g_next = argmax_g a^(k*)(o_n, g), wherein g_next represents the next predicted grid of path k*; iterating forward continuously in this way gives a grid sequence reaching the boundary of the monitoring area.
6. The method for pre-warning the collision between the non-motor vehicle and the pedestrian according to claim 5, wherein the step S4 specifically comprises the following steps:
s41, inputting the first 10 frame coordinate sequences of the pedestrian track sequences into a pedestrian motion track prediction model, and outputting predicted pedestrian coordinate tracks;
s42, finding the corresponding grid of the pedestrian for each frame of predicted coordinates in the predicted pedestrian coordinate track, and converting the predicted pedestrian track into a grid sequence of length 10, W = (w_1, w_2, …, w_10), wherein w_i represents the grid of the predicted pedestrian in the i-th frame;
s43, inputting the track grid sequence into the non-motor vehicle motion track prediction model, and outputting a prediction grid sequence;
s44, estimating the position of the non-motor vehicle according to the frame number: assuming that the length of the input grid sequence is n, the estimated moving speed of the target is v = n/10 (grid/frame); letting the prediction grid sequence be G = (g_1, g_2, …), the estimated non-motor vehicle position in the i-th frame is the ⌈v·i⌉-th grid g_⌈v·i⌉, so that the predicted trajectory of the non-motor vehicle becomes a grid sequence of length 10, B = (b_1, b_2, …, b_10), wherein b_i represents the grid in which the non-motor vehicle is predicted to be in the i-th frame;
s45, generating an early warning set sequence E = (E_1, E_2, …, E_10) according to the prediction grid sequence W of the pedestrians;
s46, when predicting the i-th frame, adding the grids of the predicted early warning areas of all pedestrians in that frame into E_i, namely taking E_i as the union, over all pedestrians p, of the early warning areas A(w_i^p) generated with the predicted grid w_i^p of pedestrian p as the center;
s47, for the prediction grid sequence B of every non-motor vehicle target, judging within the 1st to 10th predicted frames: when there exists an i such that b_i ∈ E_i, it is judged that the non-motor vehicle and the pedestrian have a collision risk, and collision early warning is carried out.
7. The collision early warning system for the non-motor vehicle and the pedestrian is characterized by comprising an acquisition module, a detection module, a prediction module, an analysis module and an early warning module;
the acquisition module, the detection module, the prediction module, the analysis module and the early warning module are connected in sequence;
the acquisition module is used for acquiring scene data and processing the scene data;
the detection module is used for tracking the targets of the non-motor vehicle and the pedestrians to obtain motion trail data of the non-motor vehicle and the pedestrians;
the prediction module is used for carrying out gridding treatment on the detection area and establishing a non-motor vehicle motion track prediction model and a pedestrian motion track prediction model;
the analysis module is used for judging whether the motion trail of the non-motor vehicle and the motion trail of the pedestrian coincide or not;
the early warning module is used for making collision early warning.
8. An electronic device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of a non-motor vehicle and pedestrian collision warning method of any one of claims 1-6 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements a non-motor vehicle and pedestrian collision warning method as claimed in any one of claims 1-6.
CN202310353876.3A 2023-04-06 2023-04-06 Non-motor vehicle and pedestrian collision early warning method, electronic equipment and storage medium Active CN116071960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310353876.3A CN116071960B (en) 2023-04-06 2023-04-06 Non-motor vehicle and pedestrian collision early warning method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310353876.3A CN116071960B (en) 2023-04-06 2023-04-06 Non-motor vehicle and pedestrian collision early warning method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116071960A true CN116071960A (en) 2023-05-05
CN116071960B CN116071960B (en) 2023-08-01

Family

ID=86170088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310353876.3A Active CN116071960B (en) 2023-04-06 2023-04-06 Non-motor vehicle and pedestrian collision early warning method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116071960B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576354A (en) * 2024-01-16 2024-02-20 之江实验室 AGV anti-collision early warning method and system based on human body track prediction

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108010388A (en) * 2018-01-04 2018-05-08 北京瑞腾中天科技有限公司 Collision detection method for early warning and collision detection early warning system based on car networking network
US20210049910A1 (en) * 2019-08-13 2021-02-18 Ford Global Technologies, Llc Using holistic data to implement road safety measures
US20210188263A1 (en) * 2019-12-23 2021-06-24 Baidu International Technology (Shenzhen) Co., Ltd. Collision detection method, and device, as well as electronic device and storage medium
CN114530058A (en) * 2022-03-03 2022-05-24 恒大恒驰新能源汽车研究院(上海)有限公司 Collision early warning method, device and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108010388A (en) * 2018-01-04 2018-05-08 北京瑞腾中天科技有限公司 Collision detection method for early warning and collision detection early warning system based on car networking network
US20210049910A1 (en) * 2019-08-13 2021-02-18 Ford Global Technologies, Llc Using holistic data to implement road safety measures
US20210188263A1 (en) * 2019-12-23 2021-06-24 Baidu International Technology (Shenzhen) Co., Ltd. Collision detection method, and device, as well as electronic device and storage medium
CN114530058A (en) * 2022-03-03 2022-05-24 恒大恒驰新能源汽车研究院(上海)有限公司 Collision early warning method, device and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576354A (en) * 2024-01-16 2024-02-20 之江实验室 AGV anti-collision early warning method and system based on human body track prediction
CN117576354B (en) * 2024-01-16 2024-04-19 之江实验室 AGV anti-collision early warning method and system based on human body track prediction

Also Published As

Publication number Publication date
CN116071960B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN112418268B (en) Target detection method and device and electronic equipment
KR101986592B1 (en) Recognition method of license plate number using anchor box and cnn and apparatus using thereof
CN108986465B (en) Method, system and terminal equipment for detecting traffic flow
CN109583345B (en) Road recognition method, device, computer device and computer readable storage medium
CN109961057B (en) Vehicle position obtaining method and device
CN111008600B (en) Lane line detection method
CN111241343A (en) Road information monitoring and analyzing detection method and intelligent traffic control system
CN110472599B (en) Object quantity determination method and device, storage medium and electronic equipment
CN116071960B (en) Non-motor vehicle and pedestrian collision early warning method, electronic equipment and storage medium
Azad et al. New method for optimization of license plate recognition system with use of edge detection and connected component
CN114332776B (en) Non-motor vehicle occupant pedestrian lane detection method, system, device and storage medium
CN111860496A (en) License plate recognition method, device, equipment and computer readable storage medium
Zheng et al. A deep learning–based approach for moving vehicle counting and short-term traffic prediction from video images
Lashkov et al. Edge-computing-facilitated nighttime vehicle detection investigations with CLAHE-enhanced images
CN111401360B (en) Method and system for optimizing license plate detection model, license plate detection method and system
CN115512315B (en) Non-motor vehicle child riding detection method, electronic equipment and storage medium
CN117058912A (en) Method and device for detecting abnormal parking of inspection vehicle, storage medium and electronic equipment
CN116863458A (en) License plate recognition method, device, system and storage medium
CN115953744A (en) Vehicle identification tracking method based on deep learning
CN115909241A (en) Lane line detection method, system, electronic device and storage medium
CN113505860B (en) Screening method and device for blind area detection training set, server and storage medium
CN112016534B (en) Neural network training method for vehicle parking violation detection, detection method and device
CN112434601A (en) Vehicle law violation detection method, device, equipment and medium based on driving video
JP2021077091A (en) Image processing device and image processing method
CN112101279B (en) Target object abnormality detection method, target object abnormality detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 1210, block C, building 1, Xinghe legend Garden Phase III, Longtang community, Minzhi street, Longhua District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen Urban Traffic Planning and Design Research Center Co.,Ltd.

Country or region after: China

Patentee after: Shenzhen Bao'an Design Group Co.,Ltd.

Address before: Room 1210, block C, building 1, Xinghe legend Garden Phase III, Longtang community, Minzhi street, Longhua District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Urban Traffic Planning and Design Research Center Co.,Ltd.

Country or region before: China

Patentee before: Shenzhen Baoan Planning and Design Institute Co.,Ltd.

CP03 Change of name, title or address